What the callback interface gives us on the server side is a C++ virtual interface that lets us override methods to implement the RPCs in our proto file. That's exactly what I would expect from an RPC code generator for C++. I was a bit surprised when I first started to experiment with async gRPC some years ago - when there was only the legacy async interface.
So, have they gotten it right, this time around? Yes and no.
Yes, because they now deal with all the details, so that we can deal with the RPC request directly. We implement an override for the interface created by rpcgen, and don't worry about what happens before it gets called, or after the request is complete. I emphasize request, because for the streaming RPC types, the callback is just the initiation of the work-flow. Our implementation methods for the RPCs are called from a fixed-size thread pool owned by gRPC. If we do anything time consuming there, our server will not be very performant. So we have to return from the callback immediately. What we do to facilitate streaming RPCs is to instantiate an object that carries out the operations needed to complete the request. That's also what we did in the async implementation - only back then we pre-created one instance for each RPC method, and then created a new instance as soon as we had a new RPC request in progress. With the callbacks, we just get notified when there is a new RPC, and then it's up to us how we choose to proceed.
No, because we still have to create an implementation class for the state and event-flow for each stream API.
One thing to keep in mind is that the callbacks may be called simultaneously from different threads. Our implementation must be thread-safe.
Simplified, what we implement here for the server-side RPCs looks like the sections below. We create an override for each RPC method, and for the streaming RPCs we also create an implementation class for the stream class required to read/write the stream.
The really exciting good news is that unary RPCs (the ones without streams) are trivial to implement. I figure that these will be the majority in most real use-cases.
GetFeature
This is the override implementation of the GetFeature() method.
Our callback will be called each time someone requests this RPC. Remember that we don't control the threads. gRPC will use its own thread-pool, and we must expect our method to be called potentially many times in parallel. If we use shared resources, we must use a lock or some other synchronization strategy to handle race conditions.
Also remember that callbacks must return immediately. If you need to do some IO or heavy calculations, you must take the request, schedule it on another thread, and then return from the callback. You can call Finish() later. This will of course add complexity to your code. But it is what it is. Those are the rules we have to obey. That was, by the way, also the case in our async implementations.
ListFeatures
Here we use the same pattern that the gRPC team uses in their example for the callback interface. We put the implementation of the stream class/work-flow inside the callback method override.
As you can see, the logic is similar to our final async example. But it is simpler. Simple is good. It gives us less opportunity to mess up ;)
RecordRoute
Here we read from the stream until it ends (ok == false). Then we reply.
Just as above, the code is simpler than before.
I'm a bit puzzled by the lack of typedefs in the generated headers from protobuf. Instead of using some nice, simple type names, we have to spell out the full template names and arguments in our code. That takes time. In general, all that stuff must be located, and then copied and pasted, from the generated headers.
RouteChat
Again, this is the most complex RPC type that gRPC supports.
To quickly recap the proto definition:
```
rpc RouteChat(stream RouteNote) returns (stream RouteNote) {}
```

When we use the callbacks, we must be prepared for simultaneous events in multiple threads. In our async code, we used only one thread in the event loop, and that handled all the events, so we did not have to make our event handlers thread-safe. With the callbacks, we may not only have new RPCs coming in simultaneously on different threads; we may also have IO events (read/write) on a bidirectional stream coming in simultaneously on different threads. So be careful if you have non-const data shared by the event handlers.
This implementation borrows the core idea for how to do a "bidi" stream from our async code. It uses read() and write() methods to initiate the async operations. Then it overrides the IO event handlers in the ServerBidiReactor<> interface to handle the completions.
As before, we don't call Finish() until we are done reading and writing. That is handled by finishIfDone().
Even the bidi RPC is notably simpler than the async version. Things will undoubtedly be messier in a real use-case where we deal with actual functionality, and probably need both to queue outgoing messages (since we can only send one at a time) and to relay requests and responses through our own worker-threads so the server can tolerate the delays caused by disk or network latency and database queries. But the code required to deal with RPCs and streams is now down to a level where it's manageable ;)