Jarle Aase

Implementing an async client with one message and one stream.

So far, our code has dealt with only one async operation in flight, and the tag has been a pointer to an instance of our request class.

When we deal with incoming streams in the client, we would like to keep two async requests in flight: one to connect, and then to perform the next read or write operation from/to the stream (depending on the stream's direction), and one to get the final reply and status from the server. That means that we need at least two distinct tag addresses. We could be creative and use this for one of the tags, and for example const auto tag = reinterpret_cast<void *>(reinterpret_cast<uint64_t>(this) + 1); for the other. Since this is returned from a normal memory allocation, we can be quite confident that it is aligned on a 4 or 8 byte boundary (depending on the target binary type, 32 bit or 64 bit). Therefore, we could look at the tag to see if we need to round it down 1 byte to reach a boundary address. This would probably work, but it would be a typical example of premature optimization. A safer approach, in my opinion, is to use an intermediate variable, which in turn has a pointer or reference to the Request object.
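
Just to illustrate why the alignment trick is fiddly, here is roughly what it would look like. This is a sketch only, not code from the client; I use uintptr_t rather than uint64_t to keep the cast portable:

    // Sketch of the rejected idea: abuse pointer alignment to squeeze two
    // tags out of one object. Shown only to illustrate the reasoning above.
    void *first_tag  = this;
    void *second_tag = reinterpret_cast<void *>(
        reinterpret_cast<uintptr_t>(this) + 1);

    // In the event-loop we would have to round the tag back down to the
    // object's address before using it:
    const auto raw = reinterpret_cast<uintptr_t>(tag);
    const bool is_second = raw & 1;  // was the low bit set?
    auto *self = reinterpret_cast<Request *>(raw & ~uintptr_t{1});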

Below is a simplified example of the safer, intermediate-variable approach.

    // Request object
    struct Request {

        // Intermediate type
        struct Tag {
            Request& request_;

            // Just relay the event to the Request instance
            void proceed() {
                request_.proceed();
            }
        };

        // State-machine for the request
        void proceed();

        // Two tags with different addresses
        Tag first_{*this};
        Tag second_{*this};
    };

    ...
    // Address of first_ is used as the "tag".
    StartAsyncSomething(cq_, &first_);

    // Address of second_ is used as the "tag".
    StartAsyncSomethingElse(cq_, &second_);

    ...

    // Event-loop
    while(true) {
        void *tag = {};
        bool ok = false;
        cq_.Next(&tag, &ok);

        // Call proceed() on the intermediate type.
        static_cast<Request::Tag *>(tag)->proceed();
    }

    ...

    // Our queue
    grpc::CompletionQueue cq_;

Request Base class

In order to deal with the complexity of more than one request type, and more than one tag for each request instance, we start the client implementation by creating a base-class for the request handler.

We keep the state that is shared by all the request types in the base class. In our case, we also keep a reference to the "parent" class, so we can get access to its state.

The most interesting declaration is the pure virtual method proceed(). As you may notice, it takes an argument in addition to ok. Since we intend to have multiple async operations in flight, and these can complete in seemingly random order (at least, the order in which we get the results from Read() and Finish() appears random to me), it makes sense to tell proceed() which type of operation it should deal with.

    /*! Base class for requests
     *
     *  In order to use `this` as a tag and avoid any special processing in the
     *  event-loop, the simplest approach in C++ is to let the request implementations
     *  inherit from a base-class that contains the shared code they all need, and
     *  a pure virtual method for the state-machine.
     */
    class RequestBase {
    public:

        RequestBase(UnaryAndSingleStreamClient& parent)
            : parent_{parent} {
            LOG_TRACE << "Constructed request #" << client_id_ << " at address " << this;
        }

        virtual ~RequestBase() = default;

        // The state-machine
        virtual void proceed(bool ok, Handle::Operation op) = 0;

    protected:
        // The state required for all requests
        UnaryAndSingleStreamClient& parent_;
        int ref_cnt_ = 0;
        ::grpc::ClientContext ctx_;

    private:
        void done() {
            // Ugly, ugly, ugly
            LOG_TRACE << "If the program crashes now, it was a bad idea to delete this ;)  #"
                      << client_id_ << " at address " << this;

            // Reference-counting of instances of requests in flight
            parent_.decCounter();
            delete this;
        }
    };

To this base-class, we add a Handle type that deals with the unique tags. All the async operations we have initiated will hit our proceed() method, so it makes sense to use reference counting to decide when to delete the request object.

I have added Handle as a sub-class to RequestBase. I think of them as one entity. That means that Handle can access protected and private variables and methods in RequestBase. This is a pattern that works well to solve some problems (for example containers and their iterators). It's not something I recommend as a general approach. In this case, I think it's effective.

    /*! Tag
     *
     *  In order to allow tags for multiple async operations simultaneously,
     *  we use this "Handle". It points to the request owning the
     *  operation, and it is associated with a type of operation.
     */
    class Handle {
    public:
        enum Operation {
            CONNECT,
            READ,
            WRITE,
            WRITE_DONE,
            FINISH
        };

        Handle(RequestBase& instance, Operation op)
            : instance_{instance}, op_{op} {}

        /*! Return a tag for an async operation.
         *
         *  Note that we use this method for reference-counting
         *  the pending async operations, so it cannot be called
         *  for other purposes!
         */
        [[nodiscard]] void *tag() {
            ++instance_.ref_cnt_;
            return this;
        }

        void proceed(bool ok) {
            --instance_.ref_cnt_;

            instance_.proceed(ok, op_);

            if (instance_.ref_cnt_ == 0) {
                instance_.done();
            }
        }

    private:
        RequestBase& instance_;
        const Operation op_;
    };

Finally, since we have abstracted away the new complexity, the client class and its event-loop remain simple.

For clarity, I have removed a few lines of code that deal mostly with creating new request instances (a sketch of that code follows the class below).

    class UnaryAndSingleStreamClient {
    public:

        class RequestBase {
            class Handle {...};
            ...
        };

        UnaryAndSingleStreamClient(const Config& config)
            : config_{config} {}

        // Run the event-loop.
        // Returns when there are no more requests to send
        void run() {

            LOG_INFO << "Connecting to gRPC service at: " << config_.address;
            channel_ = grpc::CreateChannel(config_.address, grpc::InsecureChannelCredentials());

            stub_ = ::routeguide::RouteGuide::NewStub(channel_);

            while(pending_requests_) {
                // FIXME: This is crazy. Figure out how to use stable clock!
                const auto deadline = std::chrono::system_clock::now()
                                    + std::chrono::milliseconds(500);

                // Get any IO operation that is ready.
                void * tag = {};
                bool ok = true;

                // Wait for the next event to complete in the queue
                const auto status = cq_.AsyncNext(&tag, &ok, deadline);

                // So, here we deal with the first of the three states: The status of AsyncNext().
                switch(status) {
                case grpc::CompletionQueue::NextStatus::TIMEOUT:
                    LOG_TRACE << "AsyncNext() timed out.";
                    continue;

                case grpc::CompletionQueue::NextStatus::GOT_EVENT:
                    // Use a scope to allow a new variable inside a case statement.
                    {
                        auto handle = static_cast<RequestBase::Handle *>(tag);

                        // Now, let the relevant state-machine deal with the event.
                        // We could have done it here, but that code would smell **really** bad!
                        handle->proceed(ok);
                    }
                    break;

                case grpc::CompletionQueue::NextStatus::SHUTDOWN:
                    LOG_INFO << "SHUTDOWN. Tearing down the gRPC connection(s).";
                    return;
                } // switch
            } // event-loop

            LOG_DEBUG << "exiting event-loop";
            close();
        }

        void incCounter() {
            ++pending_requests_;
        }

        void decCounter() {
            --pending_requests_;
        }

    private:
        // This is the Queue. It's shared for all the requests.
        ::grpc::CompletionQueue cq_;

        // This is a connection to the gRPC server
        std::shared_ptr<grpc::Channel> channel_;

        // An instance of the client that was generated from our .proto file.
        std::unique_ptr<::routeguide::RouteGuide::Stub> stub_;

        size_t pending_requests_{0};
        const Config config_;
    };

The event-loop above can deal with any number of request types and request instances. It's totally generic, as long as every tag handed to the completion-queue is the address of a RequestBase::Handle.
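
To give an idea of what the removed lines do, here is a minimal sketch of how new request instances could be created. The createRequest() helper and the round-robin rotation are simplifications for illustration, not the exact code:

    // Sketch of the elided request-creation code.
    template <typename T>
    void createRequest() {
        // The instance deletes itself (via done()) when its last
        // pending async operation completes, so `new` is all we need here.
        new T(*this);
    }

    void nextRequest() {
        static size_t count = 0;

        // Rotate between the three request types.
        switch(count++ % 3) {
        case 0: createRequest<GetFeatureRequest>(); break;
        case 1: createRequest<ListFeaturesRequest>(); break;
        case 2: createRequest<RecordRouteRequest>(); break;
        }
    }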

GetFeature

Now let's take a look at how the implementation of our familiar GetFeature request looks.

    /*! Implementation for the `GetFeature()` RPC request.
     */
    class GetFeatureRequest : public RequestBase {
    public:
        GetFeatureRequest(UnaryAndSingleStreamClient& parent)
            : RequestBase(parent) {

            // Initiate the async request.
            rpc_ = parent_.stub_->AsyncGetFeature(&ctx_, req_, &parent_.cq_);
            assert(rpc_);

            // Add the operation to the queue, so we get notified when
            // the request is completed.
            // Note that we use the handle's tag() as the tag. We don't really
            // need a Handle for this unary call, but the event-loop expects
            // every tag to be a Handle pointer, so it can deal with the other
            // request classes.
            rpc_->Finish(&reply_, &status_, handle_.tag());

            // Reference-counting of instances of requests in flight
            parent.incCounter();
        }

        void proceed(bool ok, Handle::Operation /*op */) override {
            if (!ok) [[unlikely]] {
                LOG_WARN << boost::typeindex::type_id_runtime(*this).pretty_name()
                         << " - The request failed. Status: " << status_.error_message();
                return;
            }

            if (status_.ok()) {
                // Initiate a new request
                parent_.nextRequest();
            } else {
                LOG_WARN << boost::typeindex::type_id_runtime(*this).pretty_name()
                         << " - The request failed with error-message: " << status_.error_message();
            }

            // The reply is a single message, so at this time we are done.
        }

    private:
        Handle handle_{*this, Handle::Operation::FINISH};

        // We need quite a few variables to perform our single RPC call.
        ::routeguide::Point req_;
        ::routeguide::Feature reply_;
        ::grpc::Status status_;
        std::unique_ptr< ::grpc::ClientAsyncResponseReader< ::routeguide::Feature>> rpc_;
    };

The handle variable, Handle handle_{*this, Handle::Operation::FINISH}; contains a reference to the request handler, and the only operation we need, Handle::Operation::FINISH. Note that the operation enum is ours, and has no meaning to gRPC.

Since the handle uses reference-counting to delete the request instance, the instance will always be deleted just after GetFeatureRequest::proceed() returns.
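
The whole life-cycle of a GetFeatureRequest, from construction to self-deletion, can be summarized like this:

    // Life-cycle of a GetFeatureRequest (sketch):
    //
    //   new GetFeatureRequest{parent}  // ctor: rpc_->Finish(..., handle_.tag())
    //                                  // tag() bumps ref_cnt_ from 0 to 1
    //   ...the event-loop waits...
    //   handle->proceed(ok)            // ref_cnt_ drops from 1 to 0
    //     -> GetFeatureRequest::proceed(ok, FINISH)
    //     -> done(): parent_.decCounter(); delete this;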

ListFeatures

So far, the functionality is exactly as it was in the original client, although we have more abstractions and our new shiny Handle ;)

When we use gRPC streams, things get more interesting.

While GetFeature() only ever issues one async operation, ListFeatures() takes a single argument as the request and allows the server to send any number of messages in a stream as the reply - in addition to the grpc::Status when it is done.

Let's recap the proto-definition for ListFeatures().

    rpc ListFeatures(Rectangle) returns (stream Feature) {}

A normal workflow for this client, if the stream has two incoming messages, will go through these states: CONNECT, READ (ok), READ (ok), READ (failed), and FINISH.

Note that the failed read is not an error in this case. If the protocol allows a variable number of messages in the stream, you will typically just start a new read operation for each successful read completion, until the read operation fails. The grpc::Status returned when the finish operation completes can tell you if there was an actual error.

The ordering of the last two events, the failed read and the finish, appears random. Since we use reference-counting to decide when the request object has handled all its pending operations, we don't need to care about the order.
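
Concretely, both of these completion orders can occur, and both are handled identically:

    // Two possible completion orders at the tail of a stream (sketch):
    //
    //   CONNECT(ok), READ(ok), READ(ok), READ(failed), FINISH(ok)
    //   CONNECT(ok), READ(ok), READ(ok), FINISH(ok),   READ(failed)
    //
    // Either way, ref_cnt_ reaches zero only after both of the last two
    // events have been seen, so done() is called exactly once.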

Let's take a look at the implementation.

    /*! Implementation for the `ListFeatures()` RPC request.
     */
    class ListFeaturesRequest : public RequestBase {
    public:
        // Now we are implementing an actual, trivial state-machine, as
        // we will read an unknown number of messages.

        ListFeaturesRequest(UnaryAndSingleStreamClient& parent)
            : RequestBase(parent) {

            // Initiate the async request.
            // Note that this time, we have to supply the tag to the gRPC initiation method.
            // That's because we will get an event that the request is in progress
            // before we should (can?) start reading the replies.
            rpc_ = parent_.stub_->AsyncListFeatures(&ctx_, req_, &parent_.cq_, connect_handle.tag());

            // Also register a Finish handler, so we know when we are
            // done or have failed. This is where we get the server's status when
            // we deal with streams.
            rpc_->Finish(&status_, finish_handle.tag());
        }

        ...
    private:
        // We need quite a few variables to perform our single RPC call.

        Handle connect_handle   {*this, Handle::Operation::CONNECT};
        Handle read_handle      {*this, Handle::Operation::READ};
        Handle finish_handle    {*this, Handle::Operation::FINISH};

        ::routeguide::Rectangle req_;
        ::routeguide::Feature reply_;
        ::grpc::Status status_;
        std::unique_ptr< ::grpc::ClientAsyncReader< ::routeguide::Feature>> rpc_;
    };

Note that this request's initiation method, AsyncListFeatures(), takes a tag. That's because we will be reading from a stream, and we can't start reading until we are "connected", that is, until we have seen that tag in our proceed() method with ok == true.

In addition to starting the connect with a tag from our connect_handle variable, we also register a handler for Finish(). That means that if we run into an error during connect or read, we should get a value in status_ that we can examine when that tag is handed to proceed().

The state-machine has to deal with any one of the three events we may get (depending on the success of the connect).

    // As promised, the state-machine gets more complex when we have
    // streams. In this case, we have three states to deal with on each invocation:
    // 1) The state of the instance - how many async operations have we started?
    //    This is handled by reference-counting, so we don't have to deal with it in
    //    the loop. This greatly reduces the code below.
    // 2) The operation
    // 3) The ok boolean value.
    void proceed(bool ok, Handle::Operation op) override {

        switch(op) {

        case Handle::Operation::CONNECT:
            if (!ok) [[unlikely]] {
                LOG_WARN << me() << " - The request failed.";
                return;
            }

            // Now, register a read operation.
            rpc_->Read(&reply_, read_handle.tag());
            break;

        case Handle::Operation::READ:
            if (!ok) [[unlikely]] {
                LOG_TRACE << me() << " - Failed to read a message.";
                return;
            }

            // This is where we have an actual message from the server.
            // If this was a framework, this is where we would have called
            // `onListFeatureReceivedOneMessage()` or unblocked the next statement
            // in a co-routine waiting for the next request.

            // In our case, let's just log it.
            LOG_TRACE << me() << " - Request successful. Message: " << reply_.name();

            // Prepare the reply-object to be re-used.
            // This is usually cheaper than creating a new one for each read operation.
            reply_.Clear();

            // Now, let's register another read operation.
            rpc_->Read(&reply_, read_handle.tag());
            break;

        case Handle::Operation::FINISH:
            if (!ok) [[unlikely]] {
                LOG_WARN << me() << " - Failed to FINISH! Status: " << status_.error_message();
                return;
            }

            if (!status_.ok()) {
                LOG_WARN << me() << " - The request finished with error-message: " << status_.error_message();
            }
            break;

        default:
            LOG_ERROR << me()
                      << " - Unexpected operation in state-machine: "
                      << static_cast<int>(op);

        } // state
    }

    std::string me() const {
        return boost::typeindex::type_id_runtime(*this).pretty_name()
               + " #" + std::to_string(client_id_);
    }

RecordRoute

The last request we will deal with in this article is RecordRoute.

    rpc RecordRoute(stream Point) returns (RouteSummary) {}

Let's start with the request class, without its state-machine.

    /*! Implementation for the `RecordRoute()` RPC request.
     */
    class RecordRouteRequest : public RequestBase {
    public:
        // Now we are implementing an actual, trivial state-machine, as
        // we will send a fixed number of messages.

        RecordRouteRequest(UnaryAndSingleStreamClient& parent)
            : RequestBase(parent) {

            // Initiate the async request.
            // Note that this time, we have to supply the tag to the gRPC initiation method.
            // That's because we will get an event that the request is in progress
            // before we should (can?) start writing the requests.
            rpc_ = parent_.stub_->AsyncRecordRoute(&ctx_, &reply_, &parent_.cq_, connect_handle.tag());

            // Initiate a `Finish()` operation so we get the reply-message and a status from the server.
            rpc_->Finish(&status_, finish_handle.tag());
        }

        ...

    private:
        // We need quite a few variables to perform our single RPC call.
        size_t sent_messages_ = 0;

        Handle connect_handle   {*this, Handle::Operation::CONNECT};
        Handle write_handle     {*this, Handle::Operation::WRITE};
        Handle write_done_handle{*this, Handle::Operation::WRITE_DONE};
        Handle finish_handle    {*this, Handle::Operation::FINISH};

        ::routeguide::Point req_;
        ::routeguide::RouteSummary reply_;
        ::grpc::Status status_;
        std::unique_ptr< ::grpc::ClientAsyncWriter< ::routeguide::Point>> rpc_;
    };

The first thing you might notice is that we now have four state handles, although only two are in use at any given time: connect and finish while connecting, write and finish while streaming, and write-done and finish at the end.

Like before, we start two operations in our constructor; we connect to the server with the correct request, and tell gRPC to call our finish tag when the library has received the server's final Status update.

We are giving the reply_ buffer to AsyncRecordRoute(), and the status_ buffer to Finish(). I'm not sure when the reply_ buffer is filled in. My assumption is that it is not safe to use until we are processing the finish event.
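
Under that assumption, the FINISH case could consume the reply like this (a sketch; point_count() and feature_count() are fields of RouteSummary in the route_guide proto):

    // Sketch: consuming `reply_` when handling Handle::Operation::FINISH,
    // assuming the buffer is valid once the finish event fires.
    if (ok && status_.ok()) {
        LOG_TRACE << me() << " - The server saw " << reply_.point_count()
                  << " points and " << reply_.feature_count() << " features.";
    }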

The state-machine is the most complex so far.

    void proceed(bool ok, Handle::Operation op) override {

        switch(op) {

        case Handle::Operation::CONNECT:
            if (!ok) [[unlikely]] {
                LOG_WARN << me() << " - The request failed.";
                break;
            }

            // We are ready to send the first message to the server.
            // If this was a framework, this is where we would have called
            // `onRecordRouteReadyToSendFirst()` or unblocked the next statement
            // in a co-routine waiting for the next state.
            req_.set_latitude(50);
            req_.set_longitude(sent_messages_);
            rpc_->Write(req_, write_handle.tag());
            break;

        case Handle::Operation::WRITE:
            if (!ok) [[unlikely]] {
                LOG_TRACE << me() << " - Failed to write a message.";
                break;
            }

            // This is where we have sent an actual message to the server.
            // If this was a framework, this is where we would have called
            // `onRecordRouteReadyToSendNext()` or unblocked the next statement
            // in a co-routine waiting for the next state.

            if (++sent_messages_ >= parent_.config_.num_stream_messages) {
                LOG_TRACE << me() << " - We are done sending messages.";
                rpc_->WritesDone(write_done_handle.tag());

                // Now we have two pending requests, write done and finish.
                break;
            }

            // Prepare the message-object to be re-used.
            // This is usually cheaper than creating a new one for each write operation.
            req_.Clear();

            req_.set_latitude(100);
            req_.set_longitude(sent_messages_);

            // Now, let's register another write operation.
            rpc_->Write(req_, write_handle.tag());
            break;

        case Handle::Operation::WRITE_DONE:
            if (!ok) [[unlikely]] {
                LOG_WARN << me() << " - Failed to notify the server that we are done.";
            }
            break;

        case Handle::Operation::FINISH:
            if (!ok) [[unlikely]] {
                LOG_WARN << me() << " - Failed to FINISH! Status: " << status_.error_message();
                break;
            }

            // This is where we have sent all the messages to the server.
            // If this was a framework, this is where we would have called
            // `onRecordRouteGotReply()` or unblocked the next statement
            // in a co-routine waiting for the next state.

            if (!status_.ok()) {
                LOG_WARN << me() << " - The request finished with error-message: " << status_.error_message();
            }
            break;

        default:
            LOG_ERROR << me()
                      << " - Unexpected operation in state-machine: "
                      << static_cast<int>(op);

            assert(false);

        } // state
    }

    std::string me() const {
        return boost::typeindex::type_id_runtime(*this).pretty_name()
               + " #" + std::to_string(client_id_);
    }

The extra state comes from the need to call WritesDone() when we are out of messages to send. This initiates another async operation. At that point we will get one event for WRITE_DONE and one event for FINISH. Then the reference count will be 0, and the instance will be deleted.
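
Summarized as an event sequence for a run with N messages:

    // Typical event sequence for RecordRouteRequest with N messages (sketch):
    //
    //   CONNECT(ok)           -> Write(message 1)
    //   WRITE(ok), N-1 times  -> Write(message 2 .. N)
    //   WRITE(ok)             -> WritesDone()   // pending: WRITE_DONE, FINISH
    //   WRITE_DONE(ok)        // nothing more to initiate
    //   FINISH(ok)            // reply_ and status_ are now valid
    //   ref_cnt_ == 0         -> done() -> delete this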

The complete source code.