
What The Hell Is gRPC?

Time is the longest distance between two places.
― Tennessee Williams

Suppose you are a software engineer working on a website or some other software that is in high demand. You may notice that at certain times some parts of your application, say the products page, get more traffic than the other services you provide. Or perhaps you want to loosely couple your application so that it can be developed more easily and flexibly.

These are just a few use cases; there are many more. Across the industry, we are seeing a trend toward microservices patterns when building applications.

The When, Why, Where, and How parts of microservices can be easily found on the internet. 

Here I want to talk about some newer technologies that have become integral to microservices. So this blog will be most fruitful for people who already know the When, Why, Where, and How parts.


Today, I will talk about the intercommunication between these microservices, and how it can be made faster. Specifically, I will talk about gRPC, a project developed by Google, which leverages HTTP/2 to speed up this intercommunication.

What is gRPC?

I will put it as simply as I can.
Suppose you have two machines. Machine A and Machine B.
Machine A uploads an image and sends it to Machine B; Machine B applies some enhancements to the image and sends it back to Machine A.
This is a classical example of two microservices running on two different servers.
I am sure you can figure out the upload part, but how do you send it?
We can opt for a Remote Procedure Call (RPC).
Remote procedure calls provide a way to execute code on another machine: a client application can directly call a method on a server application running on a different machine as if it were a local object, which makes it easier to create distributed applications and services.
gRPC provides a framework for making these RPC calls, and trust me, it will be far faster than the REST API calls you typically make. But before moving on, I would also like to discuss its disadvantages compared to the API-based approach.

  1. Limited browser support, so it is mainly used for communication between microservices.
  2. Request and response data is machine-readable (binary), not human-readable.
  3. Changes to the service definition can pose problems during development.

What was the need for this?

As we saw, gRPC is an HTTP/2-based protocol. Let us understand the difference between HTTP/1.x and HTTP/2.

HTTP/1

  • Requests and responses are human-readable and then converted to binary.
  • No pipelining, that is, multiple requests require multiple connections.
  • HTTP/1.1 introduced pipelining, but even then responses must be returned in the same order as the requests for the requests to run in parallel.
  • If one request gets held up, by a slow HTTP connection or a database connection, its delayed response blocks all the responses behind it. This is called head-of-line blocking.

HTTP/2

  • Data is transferred in binary format, and requests do not have to wait for one another to complete; the data is reassembled at the target system after being sent by the host.
  • The terminology of HTTP/2 includes streams, messages, and frames.
  • A stream is a bidirectional flow of data between client and server. It consists of a set of messages, and each stream is uniquely identified by a stream id.
  • A message is a sequence of frames that corresponds to an HTTP request or response.
  • A frame is the smallest unit and carries a specific type of information. It has a header and a body; a header frame contains the stream id that tells us which stream it belongs to.
To understand gRPC, we first need to understand Protocol Buffers.

A server and a client can be written in different languages, yet the client can call a server method as if it were a local method of its own. But if the server is written in C++ and the client is written in Java, then

How will Java Client call C++ Server method?

For this purpose, we use protobufs, or Protocol Buffers.

These protobufs act as an intermediary between client and server to provide uniformity.

The first step when working with protocol buffers is to define the structure for the data you want to serialize in a proto file: this is an ordinary text file with a .proto extension. Protocol buffer data is structured as messages, where each message is a small logical record of information containing a series of name-value pairs called fields. Here’s a simple example:

message Person {
  string name = 1;
  int32 id = 2;
  bool has_ponycopter = 3;
}

Then, once you’ve specified your data structures, you use the protocol buffer compiler protoc to generate data access classes in your preferred language(s) from your proto definition. These provide simple accessors for each field, like name() and set_name(), as well as methods to serialize/parse the whole structure to/from raw bytes. So, for instance, if your chosen language is C++, running the compiler on the example above will generate a class called Person. You can then use this class in your application to populate, serialize, and retrieve Person protocol buffer messages.

You define gRPC services in ordinary proto files, with RPC method parameters and return types specified as protocol buffer messages:

// The greeter service definition.
service Greeter {
  // Sends a greeting
  rpc SayHello (HelloRequest) returns (HelloReply) {}
}

// The request message containing the user's name.
message HelloRequest {
  string name = 1;
}

// The response message containing the greetings
message HelloReply {
  string message = 1;
}

gRPC uses protoc with a special gRPC plugin to generate code from your proto file: you get generated gRPC client and server code, as well as the regular protocol buffer code for populating, serializing, and retrieving your message types.

Now, referring to the above example: if the server is written in C++ and implements a method named SayHello(HelloRequest), a Java client can still call it, because the client generates its stubs in Java while the server generates its service skeleton in C++, both from the same proto file.

What to do to develop a gRPC Service?

  1. Write a service definition using protocol buffers.
  2. Generate stubs from the service definition file in the language of choice.
  3. Implement server methods/business logic and run a gRPC server.
  4. Implement client with the help of stubs.

Download BloomRPC

BloomRPC will act as a client that sends a request, serialized as a protobuf according to the proto file, to the server, which in our case will be running on port 50051 of localhost. There are many ways to install this software; search the internet and choose whatever suits you.

 Ok! Let's build Something.


The above diagram is the application that we will be creating using gRPC.
Let us first discuss what the flow is:

  1. BloomRPC will make a call to our User service running on port 50051, using a request object defined in the service definition.
  2. The User service will get the user's data from the in-memory H2 database.
  3. The User service will then call the Order service client, which in turn calls the Order service with the user id.
  4. The Order service will query its H2 database, fetch the orders associated with that user, and return them to the Order service client, which hands them back to the User service.
  5. The User service will then create a response object according to the response stub defined in its service definition.
  6. This response will be returned to BloomRPC, where we will be able to see it!
********************************Curtains Closed************************

Create Service Definition for Stubs

  1. Stubs are generated from files with the .proto extension, also known as service definition files.
  2. These stubs will then be used by our custom applications such as services to transfer data between each other.

User Service Definition: 
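The original post shows user.proto as an image. Here is a sketch of what such a service definition could look like, consistent with the class names listed below and the no_of_orders field used later in the post; the exact field names and the java_package option are assumptions.

```proto
syntax = "proto3";

option java_multiple_files = true;
option java_package = "com.example.user";

// The user service definition.
service UserService {
  // Looks a user up by name and returns the user along with an order count.
  rpc GetUser (UserRequest) returns (UserResponse) {}
}

// The request message containing the user's name.
message UserRequest {
  string user_name = 1;
}

// The response message containing user details and the order count.
message UserResponse {
  int32 user_id = 1;
  string user_name = 2;
  int32 no_of_orders = 3;
}
```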


Three classes will be generated, namely:
  • UserService.class
  • UserRequest.class
  • UserResponse.class
Their definitions, that is, which methods and variables they have, are also specified in this user.proto file.

Order Service Definition: 
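The order.proto file is also shown as an image in the original post. A sketch in the same style, with assumed message and field names, might look like this:

```proto
syntax = "proto3";

option java_multiple_files = true;
option java_package = "com.example.order";

// The order service definition.
service OrderService {
  // Returns the number of orders that belong to a given user id.
  rpc GetOrders (OrderRequest) returns (OrderResponse) {}
}

message OrderRequest {
  int32 user_id = 1;
}

message OrderResponse {
  int32 no_of_orders = 1;
}
```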


How will we generate these stubs?

We can generate these using our pom.xml during installation. Our pom.xml looks something like this.
If you do not know what a pom.xml is, you can look it up on the internet.
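The pom.xml screenshot is not reproduced here; the usual way to wire protoc into a Maven build is the protobuf-maven-plugin together with os-maven-plugin (which picks the right protoc binary for your OS). The versions below are illustrative, not taken from the original post.

```xml
<build>
  <extensions>
    <!-- Detects the OS so the matching protoc binary is downloaded -->
    <extension>
      <groupId>kr.motd.maven</groupId>
      <artifactId>os-maven-plugin</artifactId>
      <version>1.7.1</version>
    </extension>
  </extensions>
  <plugins>
    <plugin>
      <groupId>org.xolstice.maven.plugins</groupId>
      <artifactId>protobuf-maven-plugin</artifactId>
      <version>0.6.1</version>
      <configuration>
        <protocArtifact>com.google.protobuf:protoc:3.21.7:exe:${os.detected.classifier}</protocArtifact>
        <pluginId>grpc-java</pluginId>
        <pluginArtifact>io.grpc:protoc-gen-grpc-java:1.53.0:exe:${os.detected.classifier}</pluginArtifact>
      </configuration>
      <executions>
        <execution>
          <goals>
            <!-- compile generates the message classes, compile-custom the gRPC stubs -->
            <goal>compile</goal>
            <goal>compile-custom</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
```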


In the target folder, after running the maven install command, we will be able to see the generated stubs.

Now next step is to create services using these stubs.


Now, we will first need some model classes. Some of you brilliant people may ask: why do we need model classes when we already have stubs?

The answer is simple: we will load the data from the H2 database into our own model classes and then convert these models to stubs when we want to send the data across to another microservice.
So here, in our use case, we will load the user data from H2 into our User model class and later convert it into an Order request stub.

Creating Model Classes for exchanging data between databases and services

H2 Connection Class : 
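The class itself is shown as an image in the post. A minimal sketch, assuming the H2 driver (com.h2database:h2) is on the classpath and using H2's INIT=RUNSCRIPT parameter to run initialize.sql on first connection, could look like this (class and database names are assumptions):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

// Opens an in-memory H2 database and runs initialize.sql from the
// classpath (the resources folder) to create and populate the tables.
public class H2Connection {

    // H2's INIT parameter runs the script when the database is opened.
    private static final String URL =
        "jdbc:h2:mem:testdb;DB_CLOSE_DELAY=-1;"
        + "INIT=RUNSCRIPT FROM 'classpath:initialize.sql'";

    public static Connection getConnection() throws SQLException {
        return DriverManager.getConnection(URL, "sa", "");
    }
}
```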


This class is responsible for creating an H2 database connection and will leverage the initialize.sql file from the resources folder to create tables and insert data into them. Why the resources folder? Because the resources folder is automatically added to the classpath at execution time by Spring Boot.

Initialize.sql
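The script is shown as an image in the post. A sketch consistent with the rest of the walkthrough might be the following; only the username chandler comes from the post, while the table layout, column names, and order rows are assumptions:

```sql
-- Runs at startup: creates the tables and inserts sample data.
CREATE TABLE IF NOT EXISTS users (
  user_id   INT PRIMARY KEY,
  user_name VARCHAR(50)
);

CREATE TABLE IF NOT EXISTS orders (
  order_id INT PRIMARY KEY,
  user_id  INT,
  item     VARCHAR(100)
);

INSERT INTO users (user_id, user_name) VALUES (1, 'chandler');

INSERT INTO orders (order_id, user_id, item) VALUES (101, 1, 'coffee');
INSERT INTO orders (order_id, user_id, item) VALUES (102, 1, 'duck food');
```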



Order.java

A model class used to hold order data from the H2 database.
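The original class is shown as an image; here is a minimal sketch of such a model class. The field names (orderId, userId, item) are assumptions.

```java
// Plain model class used to hold one row from the orders table.
public class Order {
    private final int orderId;
    private final int userId;
    private final String item;

    public Order(int orderId, int userId, String item) {
        this.orderId = orderId;
        this.userId = userId;
        this.item = item;
    }

    public int getOrderId() { return orderId; }
    public int getUserId()  { return userId; }
    public String getItem() { return item; }
}
```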


User.java
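A model class used to hold user data from the H2 database. Again the screenshot is not reproduced, so this is a sketch with assumed field names:

```java
// Plain model class used to hold one row from the users table.
public class User {
    private final int userId;
    private final String userName;

    public User(int userId, String userName) {
        this.userId = userId;
        this.userName = userName;
    }

    public int getUserId()      { return userId; }
    public String getUserName() { return userName; }
}
```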


Creating DAO Classes

DAO classes connect to the database and fetch the data we require.

UserDao.java
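A sketch of the user DAO, assuming the H2Connection helper and User model described above; the SQL and method names are assumptions:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Looks users up in the H2 database by name.
public class UserDao {

    public User getUserByName(String userName) throws SQLException {
        String sql = "SELECT user_id, user_name FROM users WHERE user_name = ?";
        try (Connection conn = H2Connection.getConnection();
             PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, userName);
            try (ResultSet rs = ps.executeQuery()) {
                if (rs.next()) {
                    return new User(rs.getInt("user_id"), rs.getString("user_name"));
                }
                return null; // no such user
            }
        }
    }
}
```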


OrderDao.java

This class will fetch order-related data from the database.
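A sketch in the same spirit, assuming the same H2Connection helper and Order model; the query and method names are assumptions:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

// Fetches the orders that belong to a given user id from H2.
public class OrderDao {

    public List<Order> getOrdersByUserId(int userId) throws SQLException {
        String sql = "SELECT order_id, user_id, item FROM orders WHERE user_id = ?";
        List<Order> orders = new ArrayList<>();
        try (Connection conn = H2Connection.getConnection();
             PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setInt(1, userId);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    orders.add(new Order(rs.getInt("order_id"),
                                         rs.getInt("user_id"),
                                         rs.getString("item")));
                }
            }
        }
        return orders;
    }
}
```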


Now that we have our DAO classes ready, we can create services to connect to these DAO classes and get the data.

You will notice that the services we create both implement and use some of the stubs we generated previously. This is because services need to convert objects into stubs and send them to other services in binary format.

OrderServiceImpl.java

This class will connect to the OrderDAO and fetch the required data based on the user id sent to it by the UserServiceImpl class, which we will implement later.
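The implementation is shown as an image in the post. With grpc-java, a service implementation extends the base class that protoc generates from the proto file (here assumed to be OrderServiceGrpc.OrderServiceImplBase, matching an OrderService definition); the message and method names below are assumptions:

```java
import io.grpc.stub.StreamObserver;

// Implements the rpc declared in the order service definition by
// extending the generated base class.
public class OrderServiceImpl extends OrderServiceGrpc.OrderServiceImplBase {

    private final OrderDao orderDao = new OrderDao();

    @Override
    public void getOrders(OrderRequest request,
                          StreamObserver<OrderResponse> responseObserver) {
        try {
            int count = orderDao.getOrdersByUserId(request.getUserId()).size();
            // Populate the response stub defined in the service definition.
            OrderResponse response = OrderResponse.newBuilder()
                    .setNoOfOrders(count)
                    .build();
            responseObserver.onNext(response);
            responseObserver.onCompleted();
        } catch (Exception e) {
            responseObserver.onError(e);
        }
    }
}
```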


You can compare the methods used in this class with the OrderService.proto Service Definition file and how we are populating the stubs.

UserServiceImpl.java

Before implementing UserServiceImpl: since we want to connect to OrderServiceImpl, which runs on a different server, we need a client in the middle that accepts data from UserServiceImpl, converts it into the required stub, and sends it across to OrderServiceImpl.

OrderClient.java
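The client code is shown as an image in the post. A sketch using grpc-java's ManagedChannelBuilder and the generated blocking stub (stub and message names are assumptions) could look like this:

```java
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;

// Client-side wrapper: opens a channel to the Order server and calls it
// through the blocking stub generated from the order service definition.
public class OrderClient {

    private final OrderServiceGrpc.OrderServiceBlockingStub stub;

    public OrderClient(String host, int port) {
        ManagedChannel channel = ManagedChannelBuilder
                .forAddress(host, port)
                .usePlaintext() // no TLS; fine for a local demo
                .build();
        stub = OrderServiceGrpc.newBlockingStub(channel);
    }

    public int getNoOfOrders(int userId) {
        OrderRequest request = OrderRequest.newBuilder()
                .setUserId(userId)
                .build();
        return stub.getOrders(request).getNoOfOrders();
    }
}
```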





We can see here that we use a Channel from the gRPC library. The channel holds the connection properties used between microservices, such as the host and port number.

We will be defining that in our UserServiceImpl next.
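The UserServiceImpl screenshot is not reproduced here; a sketch that matches the flow described below, assuming a UserServiceGrpc base class generated from the user service definition and the DAO and client classes discussed in this post, might be:

```java
import io.grpc.stub.StreamObserver;

// Extends the base class generated from the user service definition;
// looks the user up in H2, then asks the Order service for the order count.
public class UserServiceImpl extends UserServiceGrpc.UserServiceImplBase {

    private final UserDao userDao = new UserDao();
    // The Order server runs on port 50052 of the same machine.
    private final OrderClient orderClient = new OrderClient("localhost", 50052);

    @Override
    public void getUser(UserRequest request,
                        StreamObserver<UserResponse> responseObserver) {
        try {
            User user = userDao.getUserByName(request.getUserName());
            int noOfOrders = orderClient.getNoOfOrders(user.getUserId());
            UserResponse response = UserResponse.newBuilder()
                    .setUserId(user.getUserId())
                    .setUserName(user.getUserName())
                    .setNoOfOrders(noOfOrders)
                    .build();
            responseObserver.onNext(response);
            responseObserver.onCompleted();
        } catch (Exception e) {
            responseObserver.onError(e);
        }
    }
}
```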




The UserServiceImpl does the following:
  1. Gets the user's data based on the user id we send from BloomRPC.
  2. Gets the user's orders by calling the client, which in turn calls the Order microservice running on port 50052.
  3. Returns the data back to BloomRPC.
But there is a major chunk missing: what are we running these services on?


Creating a User And Order Server

I hope you were with me so far.

First of all, we know that gRPC is mainly used to speed up intercommunication between microservices. In our case, the two microservices, User and Order, will run on two different servers.

Let us have a look at the code for both of these servers.
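The User server screenshot is not reproduced; a minimal sketch using grpc-java's ServerBuilder (class name assumed) is:

```java
import io.grpc.Server;
import io.grpc.ServerBuilder;

// Boots a gRPC server on port 50051 and registers the User service.
public class UserServer {
    public static void main(String[] args) throws Exception {
        Server server = ServerBuilder.forPort(50051)
                .addService(new UserServiceImpl())
                .build()
                .start();
        System.out.println("User server started on port 50051");
        server.awaitTermination(); // block so the JVM keeps serving
    }
}
```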



Another thing I want to point out is that we are running a User server on port 50051 and an Order server on port 50052 of the same machine.

The Order server would look like the following : 
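A sketch of the Order server, identical in shape but on port 50052 (class name assumed):

```java
import io.grpc.Server;
import io.grpc.ServerBuilder;

// Boots a gRPC server on port 50052 and registers the Order service.
public class OrderServer {
    public static void main(String[] args) throws Exception {
        Server server = ServerBuilder.forPort(50052)
                .addService(new OrderServiceImpl())
                .build()
                .start();
        System.out.println("Order server started on port 50052");
        server.awaitTermination();
    }
}
```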



As both of these files contain a main method, we can start the servers directly, just as we start normal Java applications.

Running the Application :

  1. First, start both servers as Java applications. You may use any IDE, or compile and run the classes from the command line.
  2. Open BloomRPC and import the user.proto file we created above.


There is a row with the username chandler in initialize.sql. We send the username, fetch the user id, and then fetch the associated orders, which are returned to us in the no_of_orders field.
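For illustration, assuming a user_name request field and a no_of_orders response field in the service definition, and sample data with two orders for chandler (the exact names and numbers depend on your proto file and initialize.sql), the exchange in BloomRPC might look like this:

```
// Request typed into the BloomRPC editor
{
  "user_name": "chandler"
}

// Response returned by the User service
{
  "user_id": 1,
  "user_name": "chandler",
  "no_of_orders": 2
}
```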

You can set breakpoints and step through the flow of the code.








