Jarle Aase

Implementing the full routeguide async client


Now, let's re-use the abstractions we created for the server to implement the final client using the gRPC async interface (or "stub" to be more accurate).

Before we look at the request implementations, let's take a brief look at the client override for the event-loop.

    class EverythingClient
        : public EventLoopBase<ClientVars<::routeguide::RouteGuide>> {
    public:

        ...
        EverythingClient(const Config& config)
            : EventLoopBase(config) {

            LOG_INFO << "Connecting to gRPC service at: " << config.address;
            grpc_.channel_ = grpc::CreateChannel(config.address, grpc::InsecureChannelCredentials());

            grpc_.stub_ = ::routeguide::RouteGuide::NewStub(grpc_.channel_);
            assert(grpc_.stub_);

            // Add request(s)
            LOG_DEBUG << "Creating " << config_.parallel_requests
                      << " initial request(s) of type " << config_.request_type;

            for(auto i = 0; i < config_.parallel_requests; ++i) {
                nextRequest();
            }
        }

    private:
        size_t request_count_{0};

In the constructor we set up the connection to the gRPC server, create an instance of the "stub" that the code generator created from our proto file, and finally call nextRequest() to initialize the first batch of outgoing requests. I have omitted the request-creation code here, as it's not as interesting as the gRPC Request code. The complete source code, including the test-client and test-server that consume and let us execute all the code we have been through, is available on GitHub.
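
For orientation, here is a minimal sketch of what nextRequest() might look like. This is not the actual implementation from the repository; the string-based switching on config_.request_type and the raw new expressions are assumptions made purely for illustration, with the request objects assumed to manage their own lifetime, as they do on the server side.

    // Hypothetical sketch only -- not the actual implementation on GitHub.
    // It creates one more request object of the configured type. The request
    // objects are assumed to clean themselves up when they are done.
    void nextRequest() {
        ++request_count_;

        if (config_.request_type == "GetFeature") {
            new GetFeatureRequest(*this);
        } else if (config_.request_type == "ListFeatures") {
            new ListFeaturesRequest(*this);
        } else if (config_.request_type == "RecordRoute") {
            new RecordRouteRequest(*this);
        } else if (config_.request_type == "RouteChat") {
            new RouteChatRequest(*this);
        }
    }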

GetFeature

Let's start with GetFeature as before.

    class GetFeatureRequest : public RequestBase {
    public:

        GetFeatureRequest(EverythingClient& owner)
            : RequestBase(owner) {

            // Initiate the async request.
            rpc_ = owner.grpc().stub_->AsyncGetFeature(&ctx_, req_, cq());

            // Add the operation to the queue. We will be notified when
            // the request is completed.
            rpc_->Finish(&reply_, &status_, handle_.tag(
                Handle::Operation::FINISH,
                [this, &owner](bool ok, Handle::Operation /* op */) {

                    if (!ok) [[unlikely]] {
                        LOG_WARN << me(*this) << " - The request failed.";
                        return;
                    }

                    if (!status_.ok()) {
                        LOG_WARN << me(*this) << " - The request failed with error-message: "
                                 << status_.error_message();
                    }
                }));
        }

    private:
        Handle handle_{*this};

        // We need quite a few variables to perform our single RPC call.
        ::grpc::ClientContext ctx_;
        ::routeguide::Point req_;
        ::routeguide::Feature reply_;
        ::grpc::Status status_;
        std::unique_ptr< ::grpc::ClientAsyncResponseReader<decltype(reply_)>> rpc_;

    }; // GetFeatureRequest

It's quite simple, with the bulk of the code in one lambda function dealing with the Finish event.

There is still a little too much boilerplate code to declare the required variables and initiate the request. We could add another layer of abstraction by creating a Request template for the unary rpc request type. However, the gRPC code generator gives us little help to achieve that. It would have been nice if it gave us typedefs for the three variable types for req_, reply_ and rpc_. We could probably deduce the types from the initiator and finish methods using some insane template meta-programming hacks, but I'm not going down that rabbit hole today. It would have been so much easier if the code generator just added the using statements for us :/
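
Just to illustrate the idea, here is a hypothetical sketch of the kind of aliases the code generator could have emitted for the unary GetFeature rpc. Nothing like this exists in the generated code today; the names are invented for illustration only.

    // Hypothetical aliases -- not present in the generated code. With something
    // like this, a generic UnaryRequest<GetFeatureTypes> template could declare
    // req_, reply_ and rpc_ for us.
    struct GetFeatureTypes {
        using Request = ::routeguide::Point;
        using Reply   = ::routeguide::Feature;
        using Reader  = ::grpc::ClientAsyncResponseReader<Reply>;
    };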

Let's continue with ListFeatures.

ListFeatures

We follow the same pattern as we did in the server's stream methods. We put the code that deals with an event-type in the lambda for that event. We move the logic shared between connect and read into the read() method.

In the constructor, we initiate both a connect/request operation and a Finish operation. Hence, we have two Handle variables.

    class ListFeaturesRequest : public RequestBase {
    public:

        ListFeaturesRequest(EverythingClient& owner)
            : RequestBase(owner) {

            // Initiate the async request.
            rpc_ = owner.grpc().stub_->AsyncListFeatures(&ctx_, req_, cq(), op_handle_.tag(
                Handle::Operation::CONNECT,
                [this](bool ok, Handle::Operation /* op */) {
                    if (!ok) [[unlikely]] {
                        LOG_WARN << me(*this) << " - The request failed (connect).";
                        return;
                    }

                    read(true);
                }));

            rpc_->Finish(&status_, finish_handle_.tag(
                Handle::Operation::FINISH,
                [this](bool ok, Handle::Operation /* op */) mutable {
                    if (!ok) [[unlikely]] {
                        LOG_WARN << me(*this) << " - The request failed (finish).";
                        return;
                    }

                    if (!status_.ok()) {
                        LOG_WARN << me(*this) << " - The request finished with error-message: "
                                 << status_.error_message();
                    }
                }));
        }

    private:
        void read(const bool first) {

            if (!first) {
                // This is where we have an actual message from the server.
                // If this was a framework, this is where we would have called
                // `onListFeatureReceivedOneMessage()` or unblocked the next statement
                // in a co-routine waiting for the next message.

                // In our case, let's just log it.
                LOG_TRACE << me(*this) << " - Request successful. Message: " << reply_.name();

                // Prepare the reply-object to be re-used.
                // This is usually cheaper than creating a new one for each read operation.
                reply_.Clear();
            }

            // Now, let's register another read operation.
            rpc_->Read(&reply_, op_handle_.tag(
                Handle::Operation::READ,
                [this](bool ok, Handle::Operation /* op */) {
                    if (!ok) [[unlikely]] {
                        LOG_TRACE << me(*this) << " - The read-request failed.";
                        return;
                    }

                    read(false);
                }));
        }

        Handle op_handle_{*this};
        Handle finish_handle_{*this};

        ::grpc::ClientContext ctx_;
        ::routeguide::Rectangle req_;
        ::routeguide::Feature reply_;
        ::grpc::Status status_;
        std::unique_ptr< ::grpc::ClientAsyncReader< decltype(reply_)>> rpc_;
    }; // ListFeaturesRequest

RecordRouteRequest

This is very similar to the previous example. We are just sending instead of receiving over the stream.

    class RecordRouteRequest : public RequestBase {
    public:

        RecordRouteRequest(EverythingClient& owner)
            : RequestBase(owner) {

            // Initiate the async request (connect).
            rpc_ = owner.grpc().stub_->AsyncRecordRoute(&ctx_, &reply_, cq(), io_handle_.tag(
                Handle::Operation::CONNECT,
                [this](bool ok, Handle::Operation /* op */) {
                    if (!ok) [[unlikely]] {
                        LOG_WARN << me(*this) << " - The request failed (connect).";
                        return;
                    }

                    // The server will not send anything until we are done writing.
                    // So let's get started.

                    write(true);
                }));

            // Register a handler to be called when the server has sent a reply and final status.
            rpc_->Finish(&status_, finish_handle_.tag(
                Handle::Operation::FINISH,
                [this](bool ok, Handle::Operation /* op */) mutable {
                    if (!ok) [[unlikely]] {
                        LOG_WARN << me(*this) << " - The request failed (finish).";
                        return;
                    }

                    if (!status_.ok()) {
                        LOG_WARN << me(*this) << " - The request finished with error-message: "
                                 << status_.error_message();
                    }
                }));
        }

    private:
        void write(const bool first) {

            if (!first) {
                req_.Clear();
            }

            if (++sent_messages_ > owner_.config().num_stream_messages) {

                LOG_TRACE << me(*this) << " - We are done writing to the stream.";

                rpc_->WritesDone(io_handle_.tag(
                    Handle::Operation::WRITE_DONE,
                    [this](bool ok, Handle::Operation /* op */) {
                        if (!ok) [[unlikely]] {
                            LOG_TRACE << me(*this) << " - The writes-done request failed.";
                            return;
                        }

                        LOG_TRACE << me(*this) << " - We have told the server that we are done writing.";
                    }));

                return;
            }

            // Send some data to the server
            req_.set_latitude(100);
            req_.set_longitude(sent_messages_);

            // Now, let's register another write operation.
            rpc_->Write(req_, io_handle_.tag(
                Handle::Operation::WRITE,
                [this](bool ok, Handle::Operation /* op */) {
                    if (!ok) [[unlikely]] {
                        LOG_TRACE << me(*this) << " - The write-request failed.";
                        return;
                    }

                    write(false);
                }));
        }

        Handle io_handle_{*this};
        Handle finish_handle_{*this};
        size_t sent_messages_ = 0;

        ::grpc::ClientContext ctx_;
        ::routeguide::Point req_;
        ::routeguide::RouteSummary reply_;
        ::grpc::Status status_;
        std::unique_ptr< ::grpc::ClientAsyncWriter< ::routeguide::Point>> rpc_;
    }; // RecordRouteRequest

The final example is the bidirectional stream. Like in the server, we implement a Real Internet Chat (tm), where we just yell at the receiver until we have yelled everything that was on our mind. Then we finish and wait for the server to say its final bits (the Status). Simultaneously, we read the messages from the server and discard them (like Real Internet Discussion Participants) until they have the decency to shut up.

    class RouteChatRequest : public RequestBase {
    public:

        RouteChatRequest(EverythingClient& owner)
            : RequestBase(owner) {

            // Initiate the async request.
            rpc_ = owner.grpc().stub_->AsyncRouteChat(&ctx_, cq(), in_handle_.tag(
                Handle::Operation::CONNECT,
                [this](bool ok, Handle::Operation /* op */) {
                    if (!ok) [[unlikely]] {
                        LOG_WARN << me(*this) << " - The request failed (connect).";
                        return;
                    }

                    // We are initiating both reading and writing.
                    // Some clients may initiate only a read or a write at this time,
                    // depending on the use-case.
                    read(true);
                    write(true);
                }));

            rpc_->Finish(&status_, finish_handle_.tag(
                Handle::Operation::FINISH,
                [this](bool ok, Handle::Operation /* op */) mutable {
                    if (!ok) [[unlikely]] {
                        LOG_WARN << me(*this) << " - The request failed (finish).";
                        return;
                    }

                    if (!status_.ok()) {
                        LOG_WARN << me(*this) << " - The request finished with error-message: "
                                 << status_.error_message();
                    }
                }));
        }

    private:
        void read(const bool first) {

            if (!first) {
                // This is where we have an actual message from the server.
                // If this was a framework, this is where we would have called
                // `onRouteChatReceivedOneMessage()` or unblocked the next statement
                // in a co-routine waiting for the next message.

                // In our case, let's just log it.
                LOG_TRACE << me(*this) << " - Request successful. Message: " << reply_.message();
                reply_.Clear();
            }

            // Now, let's register another read operation.
            rpc_->Read(&reply_, in_handle_.tag(
                Handle::Operation::READ,
                [this](bool ok, Handle::Operation /* op */) {
                    if (!ok) [[unlikely]] {
                        LOG_TRACE << me(*this) << " - The read-request failed.";
                        return;
                    }

                    read(false);
                }));
        }

        void write(const bool first) {

            if (!first) {
                req_.Clear();
            }

            if (++sent_messages_ > owner_.config().num_stream_messages) {

                LOG_TRACE << me(*this) << " - We are done writing to the stream.";

                rpc_->WritesDone(out_handle_.tag(
                    Handle::Operation::WRITE_DONE,
                    [this](bool ok, Handle::Operation /* op */) {
                        if (!ok) [[unlikely]] {
                            LOG_TRACE << me(*this) << " - The writes-done request failed.";
                            return;
                        }

                        LOG_TRACE << me(*this) << " - We have told the server that we are done writing.";
                    }));

                return;
            }

            // Now, let's register another write operation.
            rpc_->Write(req_, out_handle_.tag(
                Handle::Operation::WRITE,
                [this](bool ok, Handle::Operation /* op */) {
                    if (!ok) [[unlikely]] {
                        LOG_TRACE << me(*this) << " - The write-request failed.";
                        return;
                    }

                    write(false);
                }));
        }

        Handle in_handle_{*this};
        Handle out_handle_{*this};
        Handle finish_handle_{*this};
        size_t sent_messages_ = 0;

        ::grpc::ClientContext ctx_;
        ::routeguide::RouteNote req_;
        ::routeguide::RouteNote reply_;
        ::grpc::Status status_;
        std::unique_ptr< ::grpc::ClientAsyncReaderWriter< ::routeguide::RouteNote, ::routeguide::RouteNote>> rpc_;
    };

The code is similar to the server implementation, except that we don't get to say the final bits ;)

The complete source code is available on GitHub.

That concludes our walk-through of how to use the gRPC async interfaces/stub.

The next (planned) articles will look at the callback interface to gRPC.