All right. Thanks for joining me in this talk. My name is Venil Noronha and I work with the VMware Open Source Technology Center. I have a fun job, which is to contribute upstream to Istio and Envoy full time. Today we’ll have a look at REST APIs, some problems associated with REST APIs, and how gRPC solves those problems. And finally, we’ll have a look at how we can take the gRPC experience to applications running in a web browser with the help of gRPC-Web and Istio. Here is a cool web app that’s running in the browser, and there’s a web server that’s serving the HTML, JavaScript, and styling files. We also have two services, A and B, and they have a REST API associated with them; you can access these services through that API. Service A depends on a database, and service B leverages the REST API of service A in order to provide its own functionality. The good part about this approach is that you get isolation: if service B goes down, your system is still going to partially respond, and that’s good. You also get scalability: you can increase the number of replicas for each of your services, introduce a load balancer in front of them, and each service can be scaled independently. Your development and deployment can be much speedier now because these services are not a monolith. And finally, you can pick the language of your choice to implement each of these services.

You just need to take care that the REST API works according to your contract. Here is a simple REST API: the endpoint is /api/users/123, the HTTP verb is GET, and the response is as shown here. The first problem here is that JSON is not type safe, which means that the API is not type safe. JSON was mainly designed for representing objects within JavaScript, and in this case we are using it for the API request and response, so it’s not actually designed for compatibility; by that I mean forward and backward compatibility. Finally, JSON is very human readable. It’s very good for the eyes, but probably not as good for the network.
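The exchange described on the slide looks roughly like this (the JSON field names here are illustrative, not from the original slide):

```
GET /api/users/123 HTTP/1.1

HTTP/1.1 200 OK
Content-Type: application/json

{ "id": 123, "name": "Jane Doe", "email": "jane@example.com" }
```

Nothing in this exchange tells a client what types the fields have, or which fields are guaranteed to exist, which is the type-safety problem being described.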

And finally, this API here is not defined using a single IDL; that is, the endpoint, the HTTP verb, and the response cannot be defined in a single language here. If we implement the same API using protobufs, this is what it would look like. We have a service named UserService with a function called FindUser, which takes a FindUserRequest object as input and returns a FindUserResponse object as output. Both of these messages have strongly typed fields within them, so you get type safety, and protobufs give you forward and backward compatibility. For example, if you rename one of these fields on the server end, the client is still going to work, because fields are not referenced by name on the wire; they are referenced by their field numbers within the message.
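Based on that description, the protobuf version of the same API might look like this sketch (field names and numbers are illustrative):

```protobuf
syntax = "proto3";

package users;

service UserService {
  rpc FindUser(FindUserRequest) returns (FindUserResponse);
}

message FindUserRequest {
  // Referenced on the wire by the field number (1), not the name,
  // which is what makes renames backward compatible.
  int64 id = 1;
}

message FindUserResponse {
  int64 id = 1;
  string name = 2;
  string email = 3;
}
```

The endpoint, the operation, and both message shapes now live in one IDL file, which is the single contract the talk is asking for.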

When transferred over gRPC, protobufs are encoded in a compact binary format, which gives better performance than JSON. And finally, the entire API here is written using the protobuf IDL, so you have a well-defined contract definition. Now that we are happy with gRPC and protobufs, we can replace the REST APIs for services A and B with gRPC. However, the web UI can only speak HTTP, so the communication between the front end and the backend cannot happen over gRPC.

One easy solution for this is to implement an HTTP server that exposes a REST API and maps it to one of the gRPC backends. This is fine if you have just a couple of services, but if you have a lot of microservices, maintenance would be a nightmare. So let’s take a look at a different approach: service meshes. On the right side you see green boxes, which are your applications, and purple boxes, which are your proxies. Communication between the green boxes, that is, the services, doesn’t happen directly; it goes via these proxies. By doing that you get analytics on your requests and responses.

These proxies can also modify your requests and responses, and you can do fancy routing with them. So essentially a service mesh is a dedicated infrastructure layer that handles service communication, and it can manage a complex topology of services with the help of these network proxies. They’re pretty lightweight, they’re deployed alongside the application processes, and your applications don’t actually need to be aware that there is a proxy in front of them.

If you look carefully at a single deployment, this is what it will look like. We have a service with a gRPC endpoint, and we have the proxy process in front of it; the two are separate processes. The good part here is that Envoy, which serves as the default proxy for Istio, has a filter called gRPC-Web, which takes HTTP requests from the client and transcodes them into gRPC requests, and then converts the responses back into HTTP. Now, if you can imagine, we can replace the HTTP server that we built earlier with this proxy. And when we deploy the previous architecture on Istio, this is what it would look like: we have services A and B, and both of these have proxies for the transcoding. We also have proxies in front of the web UI and the database.

So now we understand why we need these proxies in front of services A and B: to transcode HTTP requests to gRPC. But why do we need proxies in front of the web UI and the database? For the web UI, the proxy can tell you the request size, the response size, the request duration, and other metrics, and it can also do layer 7 routing. For the database, it can tell you what tables are being accessed and what operations are being performed, and it can also do RBAC. So the mesh gives you this generic transcoding for services A and B, and it gives you observability for the entire mesh. And here are all the benefits of having a service mesh.

Now we don’t need an external service registry to make our services discoverable. You also get robustness features like rate limiting, retries, and circuit breaking. You also get load balancing and test infrastructure, so you can inject faults and delays and observe your system under such circumstances. Your services are dynamically configurable; that is, you do not need to bring any of your services down in order to reconfigure these proxies, so you get better uptime. Istio by default gives you a whole set of dashboards so you can visualize these metrics. It can also do tracing, and you can visualize your service mesh. And finally, using a service mesh, you can secure the communication happening between your services by enabling mutual TLS, and you can also apply policies to your services.

To showcase gRPC-Web with Istio, I built a small web app; I call it the emoji web app. It’s available at this URL, and the code that I’m going to show you here may not compile, just a heads up. I will explain the features of this web app as we go along. The first thing I define here is the API: the package is proto, the service name is EmojiService, and we have an RPC function called InsertEmojis, which takes an EmojiRequest as input and returns an EmojiResponse. The messages have single string input and output fields. Now, when we implement this API and the server gets a request like this, that is, a string with embedded keywords in it, the response would be a string with embedded unicode characters representing the emojis. Now that we have the API, we can generate the Go and JavaScript bindings using these plugins: the gRPC plugin for generating the Go bindings, and the gRPC-Web plugin for generating the JavaScript bindings. Here is what the generated Go file looks like. We have the interface declaration here, and this is the method that we need to implement. We also have two structures representing the request and response; both have the string fields that we defined earlier. And this is the JavaScript file. We have the same kind of objects, and we have the setInputText and getOutputText functions, and finally we also have a client, which the gRPC plugins generated. Now, when we call the insertEmojis function on this client, it’s going to do an RPC call to the endpoint and fire the callback when it receives a response.
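Following that description, the emoji API could be written roughly like this (a sketch reconstructed from the talk, not the exact file from the demo repo):

```protobuf
syntax = "proto3";

package proto;

service EmojiService {
  // Takes text with keywords and returns text with emoji characters.
  rpc InsertEmojis(EmojiRequest) returns (EmojiResponse);
}

message EmojiRequest {
  string input_text = 1;
}

message EmojiResponse {
  string output_text = 1;
}
```

Running this file through the gRPC Go plugin and the gRPC-Web JavaScript plugin produces the server interface, the request/response structs, and the browser client the talk walks through next.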

Here’s the implementation of my server. It’s very simple. We have this type called server with the InsertEmojis function. I’m using a package that I found on GitHub: when I pass in a string with embedded keywords in it, it returns a string with the emoji icons in it. We take the output text and return it as the response. In the main function, I create a simple gRPC server, register our implementation, and finally start listening for requests on port 9000. We have the server ready, and when I run it, it’s going to wait for requests on port 9000. To test the API, I created a simple client, again using Go. I have two flags, server and text: server is the address of the server, and the text flag takes the input from the user and sends it to the backend so we can observe the response. In the main function, I create a connection to the backend, and then I use the connection to create a new instance of the service client. Finally, I take the user input, create a new request, send it to the backend, and print out the response. We can go ahead and run this client. For that, I’m using the go run command along with a bunch of flags.
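The core of the handler, the keyword-to-emoji substitution, can be sketched in plain Go. This is a hypothetical stand-in for the GitHub package the talk mentions (the map and function names here are made up for illustration), without the gRPC wiring:

```go
package main

import (
	"fmt"
	"strings"
)

// emojiByKeyword is an illustrative keyword-to-emoji table; the real
// server delegates this mapping to a third-party package.
var emojiByKeyword = map[string]string{
	"pizza": "🍕",
	"sushi": "🍣",
}

// InsertEmojis mirrors the shape of the gRPC handler described above:
// it takes the request's input text and returns output text in which
// known keywords are replaced by their emoji characters.
func InsertEmojis(input string) string {
	words := strings.Fields(input)
	for i, w := range words {
		// Normalize case and strip trailing punctuation before lookup.
		key := strings.ToLower(strings.Trim(w, ".,!?"))
		if e, ok := emojiByKeyword[key]; ok {
			words[i] = e
		}
	}
	return strings.Join(words, " ")
}

func main() {
	fmt.Println(InsertEmojis("I like pizza and sushi."))
}
```

In the real server this function body sits inside the generated `InsertEmojis(ctx, req)` method, reading `req` and wrapping the result in an `EmojiResponse`.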

The first one is the text that I want to send to the API, which is “I like pizza and sushi.” The next parameter is the server address; in this case it’s localhost, port 9000. When I hit enter, it’s going to send the request to the backend, and as you can see, the request was purely plain text and the response has these unicode characters representing the emojis. So our API works, and now we can go ahead and implement the web UI.

For that I have a simple HTML page here. We have two main tags: a div tag and a script tag, which loads the JavaScript. The div tag is kind of special: I have marked it as editable using the contenteditable attribute, and when I type in characters, it’s going to call the insertEmojis callback. The idea here is that when we enter some text, the callback is fired, which takes the text from the div and sends it to the backend, and when it receives the response, it replaces the text in the div with the response itself, so you can see the content update in real time.

Here’s the JavaScript. It’s very straightforward. We first load the protobuf-generated bindings, then create a client to the backend and take a reference to the div. In the callback we create a new request, take the input text from the div, set it as the input text on the request, and call the insertEmojis function on the client. Finally, in the response callback, we take the output text from the backend and set it as the text in the div.

Now that we have all the code, we can look at the deployment diagram. When we deploy this application on Istio, this is what it would look like: we have the web UI with a proxy in front of it, and we also have the server with the gRPC API, also with a proxy in front of it.

And Istio deploys a third proxy called the ingress gateway. This is a special proxy: all the traffic entering the system goes through it, and it decides how to route that traffic to the services in the backend. To deploy this on Kubernetes, I have some configuration here. This configuration is for the server instance. Here we expose port 9000, and I have given it a special name: grpc-web.

So when Istio notices this name, it’s going to enable the gRPC-Web filter on the Envoy in front of the service. And for the deployment, I have this image that I built with the server; I expose port 9000 and create a single replica of it.
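Putting those two pieces together, the server’s Kubernetes configuration might look roughly like this (resource and image names are illustrative; the detail that matters is the port name Istio keys off):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: emoji-server
spec:
  ports:
  - port: 9000
    name: grpc-web   # Istio sees this port name and enables Envoy's gRPC-Web filter
  selector:
    app: emoji-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: emoji-server
spec:
  replicas: 1        # a single replica, as described in the talk
  selector:
    matchLabels:
      app: emoji-server
  template:
    metadata:
      labels:
        app: emoji-server
    spec:
      containers:
      - name: emoji-server
        image: example/emoji-server   # illustrative image name
        ports:
        - containerPort: 9000
```

The web UI configuration described next is the same shape, with port 9001 named `http` instead.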

We have a similar configuration for the web UI. It’s quite straightforward: we have port 9001 open, we name it http, and we use the image that we built for the web UI.

And finally, here is the Istio configuration. As you may have noticed, the apiVersion here says istio-dot-something, which is because this is a CRD within Kubernetes.

In this configuration we are exposing port 80 and setting the protocol to HTTP, and we can now use this gateway with the virtual service configuration, where we do the matching: all the requests whose URI has the prefix proto-dot-EmojiService will be sent to the server instance, and the catch-all route is the web UI. So all the incoming requests for the HTML, JavaScript, and CSS will go to the web UI, and only the API-specific requests will be sent to the server. We can go ahead and deploy these configurations.
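The Gateway and VirtualService pair just described could be sketched like this (resource names and hosts are illustrative):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: emoji-gateway
spec:
  selector:
    istio: ingressgateway   # bind to Istio's ingress gateway proxy
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: emoji-routes
spec:
  hosts:
  - "*"
  gateways:
  - emoji-gateway
  http:
  - match:
    - uri:
        prefix: /proto.EmojiService   # API calls go to the gRPC server
    route:
    - destination:
        host: emoji-server
        port:
          number: 9000
  - route:                            # catch-all: HTML, JS, CSS go to the web UI
    - destination:
        host: emoji-webui
        port:
          number: 9001
```

Route order matters here: the prefix match is listed before the catch-all, so API requests are peeled off first.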

The first thing I deploy here is the server. I’m using the istioctl command to manually inject the proxy. The next thing I deploy is the web UI, and again I use the istioctl command to inject the proxy here. We can also automate this. Finally, I deploy the gateway and virtual service configuration. Now that everything is up and running, we can fetch the pods running in the system, and as you can see, both services are up and running, and we have two containers running within each pod: one represents the service itself, and the other represents the proxy. We can now go ahead and access the ingress gateway in the browser. This is the div that we created using the HTML that I showed earlier. Now, when I type in characters, you see these requests flowing to the backend, and the text is updated in real time. Notice that the endpoint is InsertEmojis, the status code is 200, and the protocol is HTTP/1.1, even though gRPC is based on HTTP/2; that is, the status code here is not a gRPC status code, but an HTTP one, to be specific. Let’s have a look at the request in more detail. This is the whole endpoint here: /proto.EmojiService/InsertEmojis.
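On the command line, the deployment steps just described would look roughly like this (the file names are illustrative):

```sh
# Manually inject the Envoy sidecar into each deployment before applying it
istioctl kube-inject -f server.yaml | kubectl apply -f -
istioctl kube-inject -f webui.yaml | kubectl apply -f -

# The Gateway and VirtualService are plain CRDs; no sidecar injection needed
kubectl apply -f gateway.yaml -f virtualservice.yaml

# Verify: each pod should show 2/2 containers (the service plus its proxy)
kubectl get pods
```

Injection can also be automated by labeling the namespace for automatic sidecar injection, which is the "we can also automate this" mentioned above.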

We also see a bunch of headers specific to gRPC-Web. These are the response headers, and you can notice that the server here is istio-envoy, which is the proxy itself. You’ll see similar headers in the request object, and finally the payload here is a base64 string. Okay, so this is all good, and our gRPC-Web application is working pretty well on Istio. But apart from this, Istio still gives us a whole bunch of dashboards, as I mentioned earlier. Let’s have a look at a few.

Here is Grafana. We can see the request volume and the success rate for the web UI deployment here; the success rate is basically the share of non-5xx responses from this particular instance. We can also load a similar dashboard for the server instance, and it shows us a whole bunch of other details. For example, we can see the response codes for this particular instance, as well as the request duration, the request size, and the response size.

Next, let’s have a look at the tracing dashboard. Here is Jaeger, and we are trying to find traces on the server deployment. When I click on a particular trace, it’s going to tell me how the request flowed through the system. In this case, the request went through the ingress gateway and then reached the server endpoint, and when I click on the server instance, it shows me all the details about the request.

You can see that the endpoint here is /proto.EmojiService/InsertEmojis, the HTTP method was POST, and you can also see the user agent here, which is Mozilla, that is, the browser. However, the protocol here is HTTP/2. If you remember, the browser actually sent an HTTP/1.1 request, and here we see HTTP/2, which is because the proxy has upgraded the request from HTTP/1.1 to HTTP/2 in order to satisfy the gRPC requirement. We can use this dashboard to figure out slow services and failing services. And finally, let’s look at Kiali, which we chose as our service graph. This graph is generated in real time based on the traffic flowing through these proxies, and we can click on a single instance here and find more details about the response codes and also the load on the system at that point in time.

So, in conclusion: protobufs let you define API contracts and data models using an IDL, and they also give you forward and backward compatibility. gRPC is based on HTTP/2, and it also generates client stubs for you, so you don’t need to write them yourself. Since it’s based on HTTP/2, you get better network performance, and you also get better performance than JSON because protobuf encodes objects in a compact binary format. gRPC-Web lets you take the protobuf and gRPC experience to browsers, and it needs a proxy like Envoy for that. Istio gives you Envoy and a whole bunch of dashboards for metrics, tracing, and the service graph. Istio is also extensible, so you can create your own adapters and wire in the backend that you want; in my case, I created a plugin for Wavefront.

So you can visualize the same metrics on Wavefront. Now, go get rid of your HTTP servers. Questions? Yeah? [unclear] So the question was: gRPC supports bi-directional streaming, gRPC-Web does not, so do we have plans to support bi-directional streaming with gRPC-Web? The answer is no, because the browser cannot support the same use case. I was speaking to the maintainer of gRPC-Web last week, and he mentioned this problem, that you cannot do this with browsers; but if Chrome decides to actually enable that, I think it’s going to be possible. Any other questions? Yes? [unclear] Envoy. Envoy. [unclear] So the question is whether I used a customized version of Envoy, or whether it’s straight from Istio. The answer is that it’s straight from Istio. When we name a particular port as grpc-web-something, it’s going to automatically enable the gRPC-Web filter, so you don’t need a custom Envoy for that. [unclear] Oh, okay. So for the previous question: I think you can do partial streaming with WebSockets along with gRPC or gRPC-Web.

Any other questions? All right, thank you.