Jarle Aase

Implementing a unary, async client

In the server, we used two threads: one to run the server's event-loop, and one to handle signals. I did it that way because the server is designed to run until it is stopped. The client, on the other hand, will quit when it has finished its work. For some clients, like command-line tools, that makes sense. You may write a client that is part of a server, and which needs to be prepared to handle RPCs at any time. In that case, just don't exit the event-loop when you are out of work, and you are fine ;)

The test-program that uses this code has three inputs: the server address, the total number of requests to execute, and the number of parallel requests to run.
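
The Config type used below is not shown in the excerpts here. A minimal sketch of what it could look like, given how it is used later (the field names num_requests and parallel_requests, and the default values, are my assumptions):

struct Config {
    std::string address = "localhost:10123"; // Placeholder address
    size_t num_requests = 1;                  // Total number of requests to execute
    size_t parallel_requests = 1;             // Number of requests in flight at any time
};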

The client's initialization is even simpler than the server:

class SimpleReqResClient {
public:

    // Run the event-loop.
    // Returns when there are no more requests to send.
    void run() {

        LOG_INFO << "Connecting to gRPC service at: " << config_.address;
        channel_ = grpc::CreateChannel(config_.address, grpc::InsecureChannelCredentials());

        stub_ = ::routeguide::RouteGuide::NewStub(channel_);

        ...
    }

private:
    // This is the Queue. It's shared for all the requests.
    ::grpc::CompletionQueue cq_;

    // This is a connection to the gRPC server
    std::shared_ptr<grpc::Channel> channel_;

    // An instance of the client that was generated from our .proto file.
    std::unique_ptr<::routeguide::RouteGuide::Stub> stub_;

    const Config& config_;
    std::atomic_size_t pending_requests_{0};
    std::atomic_size_t request_count{0};
};

Since our code is just meant for testing, we will add some work in run() before we enter the event-loop. The requests will be added to the queue and executed in whatever order pleases gRPC.

        // Add request(s)
        for(size_t i = 0; i < config_.parallel_requests; ++i) {
            createRequest();
        }

        ...
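
Neither createRequest() nor the counters are shown in the excerpts above. A minimal sketch of what they could look like, assuming num_requests is the total-request limit from the Config (the names decCounter() and num_requests are my assumptions):

    // Create a new request, unless we have already started the
    // total number of requests we were asked to execute.
    void createRequest() {
        if (++request_count > config_.num_requests) {
            --request_count;
            return;
        }

        // The OneRequest instance owns itself and calls `delete this`
        // from done() when the RPC has completed.
        new OneRequest(*this);
    }

    // Track the number of requests in flight, so the event-loop
    // in run() knows when it is out of work.
    void incCounter() {
        ++pending_requests_;
    }

    void decCounter() {
        --pending_requests_;
    }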

The event-loop itself is identical to the event-loop in the server, except for the while() condition.

        while(pending_requests_) {
            // FIXME: This is crazy. Figure out how to use a steady clock!
            const auto deadline = std::chrono::system_clock::now()
                                  + std::chrono::milliseconds(500);

            // Get any IO operation that is ready.
            void * tag = {};
            bool ok = true;

            // Wait for the next event to complete in the queue
            const auto status = cq_.AsyncNext(&tag, &ok, deadline);

            // So, here we deal with the first of the three states: The status of Next().
            switch(status) {
            case grpc::CompletionQueue::NextStatus::TIMEOUT:
                LOG_DEBUG << "AsyncNext() timed out.";
                continue;

            case grpc::CompletionQueue::NextStatus::GOT_EVENT:
                LOG_TRACE << "AsyncNext() returned an event. The boolean status is "
                          << (ok ? "OK" : "FAILED");

                // Use a scope to allow a new variable inside a case statement.
                {
                    auto request = static_cast<OneRequest *>(tag);

                    // Now, let the OneRequest state-machine deal with the event.
                    // We could have done it here, but that code would smell really nasty.
                    request->proceed(ok);
                }
                break;

            case grpc::CompletionQueue::NextStatus::SHUTDOWN:
                LOG_INFO << "SHUTDOWN. Tearing down the gRPC connection(s) ";
                return;
            } // switch
        } // while

We also have a class for each RPC request.

class OneRequest {
public:
    OneRequest(SimpleReqResClient& parent)
        : parent_{parent} {

        // Initiate the async request.
        rpc_ = parent_.stub_->AsyncGetFeature(&ctx_, req_, &parent_.cq_);
        assert(rpc_);

        // Add the operation to the queue, so we get notified when
        // the request is completed.
        // Note that we use `this` as tag.
        rpc_->Finish(&reply_, &status_, this);

        // Reference-counting of instances of requests in flight
        parent.incCounter();
    }
    ...

private:
    SimpleReqResClient& parent_;

    // We need quite a few variables to perform our single RPC call.
    ::routeguide::Point req_;
    ::routeguide::Feature reply_;
    ::grpc::Status status_;
    std::unique_ptr< ::grpc::ClientAsyncResponseReader< ::routeguide::Feature>> rpc_;
    ::grpc::ClientContext ctx_;
};

In the constructor, we call AsyncGetFeature(), which is a "stub" method generated for us by protoc (with the gRPC C++ plugin) from our proto-file. Note that we don't add a tag there. Instead, we call a method on the returned object, Finish(), where we supply our tag. In this case we will get only one event: either we have a successful reply from the server, or we have a failure. So, a pretty simple state-machine this time. Don't worry. It will get more complex when we start playing with streams ;)

The first argument to AsyncGetFeature() is a pointer to ctx_, a client context for gRPC - passing mandatory arguments by pointer is a common pattern in C programming. Then comes the request argument: the request or message we send to the server. The request instance must stay alive until the request is sent over the wire (or longer), so we use a member variable, req_, for this. The last argument is a pointer to our queue. As mentioned before, I'm not too excited about using pointers for mandatory arguments in C++.
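
For reference, the method generated from the GetFeature RPC in the proto-file has roughly this signature in the stub:

    std::unique_ptr< ::grpc::ClientAsyncResponseReader< ::routeguide::Feature>>
    AsyncGetFeature(::grpc::ClientContext* context,
                    const ::routeguide::Point& request,
                    ::grpc::CompletionQueue* cq);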

Note that we provide Finish() with a pointer to reply_. This is an instance of the protobuf message return-type for this RPC request, and it's where gRPC will store the reply from the server. We also provide a pointer to status_, an instance of ::grpc::Status - a class wrapper around an enum that can identify a handful of common error-conditions. In the server, we supplied a ::grpc::Status in its Finish() call, so it's my understanding that our server code can use this status to tell the client about some common problems. However, some of the available error-codes, like UNAVAILABLE and CANCELLED, suggest that gRPC itself may also return an error-status to the client. So I would not place any bets on where a status_ error originates from. I'll just try to deal with them as well as possible.
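
For illustration, this is one way to log the details when the status reports a failure; ::grpc::Status exposes the error through error_code(), which returns a grpc::StatusCode enum value, and error_message():

    if (!status_.ok()) {
        LOG_WARN << "RPC failed with code " << status_.error_code()
                 << ": " << status_.error_message();
    }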

This is what the RPC request state-machine looks like:

    void proceed(bool ok) {
        if (!ok) [[unlikely]] {
            LOG_WARN << "OneRequest: The request failed.";
            return done();
        }

        // Initiate a new request
        parent_.createRequest();

        if (status_.ok()) {
            LOG_TRACE << "Request successful. Message: " << reply_.name();
        } else {
            LOG_WARN << "OneRequest: The request failed with error-message: " << status_.error_message();
        }

        // The reply is a single message, so at this time we are done.
        done();
    }
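
done() is not shown in the excerpt. A minimal sketch, assuming that each OneRequest owns its own lifetime and that decCounter() (the assumed counterpart to incCounter() in the constructor) decrements pending_requests_:

    void done() {
        // One less request in flight. When pending_requests_ reaches
        // zero, the while-loop in run() returns and the client exits.
        parent_.decCounter();

        // The instance was allocated with `new` and nobody else keeps
        // a pointer to it, so it must delete itself.
        delete this;
    }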

Note that we have to deal with two separate potential error states: the ok variable and status_. Only if both are okay can we expect the reply to contain any valid or useful information for us.

The complete source code.