gRPC vs HTTP APIs


ASP.NET Core now enables developers to build gRPC services. gRPC is an opinionated, contract-first remote procedure call framework with a focus on performance and developer productivity. gRPC integrates with ASP.NET Core 3.0, so you can use your existing ASP.NET Core logging, configuration, and authentication patterns to build new gRPC services.

This blog post compares gRPC to JSON HTTP APIs, discusses gRPC’s strengths and weaknesses, and recommends scenarios where gRPC is a good fit for your apps.

gRPC strengths

Developer productivity

With gRPC services, a client application can directly call methods on a server app on a different machine as if it were a local object. gRPC is based around the idea of defining a service and specifying the methods that can be called remotely, along with their parameters and return types. The server implements this interface and runs a gRPC server to handle client calls. On the client, a strongly-typed gRPC client is available that provides the same methods as the server.

gRPC is able to achieve this through first-class support for code generation. The core file in gRPC development is the .proto file, which defines the contract of gRPC services and messages using the Protobuf interface definition language (IDL):

Greet.proto

// The greeting service definition.
service Greeter {
  // Sends a greeting
  rpc SayHello (HelloRequest) returns (HelloReply);
}

// The request message containing the user's name.
message HelloRequest {
  string name = 1;
}

// The response message containing the greetings
message HelloReply {
  string message = 1;
}

Protobuf IDL is a language-neutral syntax, so it can be shared between gRPC services and clients implemented in different languages. gRPC frameworks use the .proto file to generate a service base class, messages, and a complete client. Using the generated strongly-typed Greeter client to call the service:

Program.cs

var channel = GrpcChannel.ForAddress("https://localhost:5001");
var client = new Greeter.GreeterClient(channel);

var reply = await client.SayHelloAsync(new HelloRequest { Name = "World" });
Console.WriteLine("Greeting: " + reply.Message);

By sharing the .proto file between the server and client, messages and client code can be generated from end to end. Code generation of the client eliminates duplication of messages on the client and server, and creates a strongly-typed client for you. Not having to write a client saves significant development time in applications with many services.
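
On the server side, the generated base class is overridden to implement the service. A minimal sketch, assuming the Greeter.GreeterBase class generated from the Greet.proto file above:

```csharp
using System.Threading.Tasks;
using Grpc.Core;

// Implements the Greeter service defined in Greet.proto by overriding
// the generated Greeter.GreeterBase base class.
public class GreeterService : Greeter.GreeterBase
{
    public override Task<HelloReply> SayHello(HelloRequest request, ServerCallContext context)
    {
        return Task.FromResult(new HelloReply
        {
            Message = "Hello " + request.Name
        });
    }
}
```

In ASP.NET Core 3.0 the service is then mapped to an endpoint with endpoints.MapGrpcService&lt;GreeterService&gt;() in Startup.cs, and the framework handles the HTTP/2 plumbing.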

Performance

gRPC messages are serialized using Protobuf, an efficient binary message format. Protobuf serializes very quickly on the server and client. Protobuf serialization results in small message payloads, important in limited bandwidth scenarios like mobile apps.

gRPC requires HTTP/2, a major revision of HTTP that provides significant performance benefits over HTTP/1.x:

  • Binary framing and compression. HTTP/2 protocol is compact and efficient both in sending and receiving.
  • Multiplexing of multiple HTTP/2 calls over a single TCP connection. Multiplexing eliminates head-of-line blocking at the application layer.

Real-time services

HTTP/2 provides a foundation for long-lived, real-time communication streams. gRPC provides first-class support for streaming through HTTP/2.

A gRPC service supports all streaming combinations:

  • Unary (no streaming)
  • Server to client streaming
  • Client to server streaming
  • Bidirectional streaming
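
In the .proto file, streaming is expressed with the stream keyword on the request type, the response type, or both. A sketch of all four combinations, using a hypothetical Chat service (the service and message names here are illustrative, not from the example above):

```protobuf
service Chat {
  // Unary (no streaming)
  rpc Send (ChatMessage) returns (ChatAck);

  // Server to client streaming
  rpc Subscribe (SubscribeRequest) returns (stream ChatMessage);

  // Client to server streaming
  rpc Upload (stream ChatMessage) returns (ChatAck);

  // Bidirectional streaming
  rpc Converse (stream ChatMessage) returns (stream ChatMessage);
}
```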

Note that the concept of broadcasting a message out to multiple connections doesn’t exist natively in gRPC. For example, in a chat room where new chat messages should be sent to all clients in the chat room, each gRPC call is required to individually stream new chat messages to the client. SignalR is a useful framework for this scenario. SignalR has the concept of persistent connections and built-in support for broadcasting messages.

Deadline/timeouts and cancellation

gRPC allows clients to specify how long they are willing to wait for an RPC to complete. The deadline is sent to the server, and the server can decide what action to take when the deadline is exceeded. For example, the server might cancel in-progress gRPC, HTTP, or database requests on timeout.

Propagating the deadline and cancellation through child gRPC calls helps enforce resource usage limits.
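
As a sketch, a deadline can be set per call on the generated client. This reuses the Greeter example from earlier; the five-second value is an arbitrary choice:

```csharp
using System;
using Grpc.Core;
using Grpc.Net.Client;

var channel = GrpcChannel.ForAddress("https://localhost:5001");
var client = new Greeter.GreeterClient(channel);

try
{
    // The deadline travels with the request; the server can observe it
    // through ServerCallContext and stop work when it is exceeded.
    var reply = await client.SayHelloAsync(
        new HelloRequest { Name = "World" },
        deadline: DateTime.UtcNow.AddSeconds(5));
}
catch (RpcException ex) when (ex.StatusCode == StatusCode.DeadlineExceeded)
{
    Console.WriteLine("Greeting timed out.");
}
```

On the server, ServerCallContext.CancellationToken can be passed to asynchronous work so that in-progress operations are cancelled when the deadline passes.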

gRPC weaknesses

Limited browser support

gRPC has excellent cross-platform support! gRPC implementations are available for every programming language in common usage today. However, one place you can’t call a gRPC service from is a browser. gRPC heavily uses HTTP/2 features, and no browser provides the level of control over web requests required to support a gRPC client. For example, browsers do not allow a caller to require that HTTP/2 be used, or provide access to underlying HTTP/2 frames.

gRPC-Web is an additional technology from the gRPC team that provides limited gRPC support in the browser. gRPC-Web consists of two parts: a JavaScript client that supports all modern browsers, and a gRPC-Web proxy on the server. The gRPC-Web client calls the proxy and the proxy will forward on the gRPC requests to the gRPC server.

Not all of gRPC’s features are supported by gRPC-Web. Client and bidirectional streaming aren’t supported, and there is limited support for server streaming.

Not human readable

HTTP API requests using JSON are sent as text and can be read and created by humans.

gRPC messages are encoded with Protobuf by default. While Protobuf is efficient to send and receive, its binary format isn’t human readable. Protobuf requires the message’s interface description, specified in the .proto file, to be deserialized properly. Additional tooling is required to analyze Protobuf payloads on the wire and to compose requests by hand.

Features such as server reflection and the gRPC command line tool exist to assist with binary Protobuf messages. Also, Protobuf messages support conversion to and from JSON. The built-in JSON conversion provides an efficient way to convert Protobuf messages to and from human readable form when debugging.
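
For example, Google.Protobuf’s JsonFormatter and JsonParser can round-trip a message through JSON for debugging. A sketch using the HelloReply message from earlier:

```csharp
using System;
using Google.Protobuf;

var reply = new HelloReply { Message = "Hello World" };

// Format the binary-oriented Protobuf message as human readable JSON...
string json = JsonFormatter.Default.Format(reply);
Console.WriteLine(json); // print for inspection while debugging

// ...and parse the JSON back into a strongly-typed message.
var parsed = JsonParser.Default.Parse<HelloReply>(json);
```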

gRPC recommended scenarios

gRPC is well suited to the following scenarios:

  • Microservices – gRPC is designed for low latency and high throughput communication. gRPC is great for lightweight microservices where efficiency is critical.
  • Point-to-point real-time communication – gRPC has excellent support for bidirectional streaming. gRPC services can push messages in real-time without polling.
  • Polyglot environments – gRPC tooling supports all popular development languages, making gRPC a good choice for multi-language environments.
  • Network constrained environments – gRPC messages are serialized with Protobuf, a lightweight message format. A gRPC message is typically significantly smaller than an equivalent JSON message.

Conclusion

gRPC is a powerful new tool for ASP.NET Core developers. While gRPC is not a complete replacement for HTTP APIs, it offers improved productivity and performance benefits in some scenarios.

gRPC on ASP.NET Core is available now! Check out the official documentation if you are interested in learning more about gRPC.

James Newton-King

Principal Software Engineer, ASP.NET


38 comments

  • Mike-EEE

    MSFT in 2007: HEY WE MADE WCF IT USES RPC AND IT’S AWESOME!!

    MSFT in 2013: WHY ARE YOU USING RPC THERE IS REST NOW AND IT WASN’T MADE BY US!!! IT IS INFINITELY MORE DIFFICULT AND EXPENSIVE TO USE WITH ZERO OF THE TOOLING YOU USE AROUND IT LIKE YOU DO WITH WCF USE REST INSTEAD!!!!!!!

    MSFT in 2019: HEY HAVE YOU HEARD OF THIS THING CALLED RPC?!?!

    • Alfred White

      @Mike-EEE

      Why do you make comments like this? Who is it helping?

      Only massive enterprises that couldn’t pivot continued to support WS-* past 2010.
      REST won the war, how was Microsoft supposed to respond? Ignore it?
      Now the industry is swinging back towards IDL based RPC, so Microsoft is responding again.
      The industry moves in cycles, and you can’t just wait about for 15 years until your tech is in fashion again.
      It isn’t “fair”, but it is reality.

      • Michael Taylor

        Also gRPC (https://grpc.io/faq/) isn’t a MS technology. It was written, with protobuf, by Google. They manage the spec and are spearheading the work to get it working with browsers. MS is just adding support for it like everybody else.

        • Mike-EEE

          Right, my remarks are more of a commentary on how MSFT went from being superior market leaders to irrelevant sissy followers in this space by not properly supporting the monumental achievement they had already accomplished with WCF.

          I know I know… it happened so long ago. And, really, I’m OK with it now. No really, I am.

          • Zendu

            I agree with Mike. I think WCF achieved a super generic implementation and along the way got very bulky and difficult to start with. Opinionated stuff like this, a subset of WCF, is much more efficient. Think of buying IKEA furniture vs building your own. 🙂

      • Jonathan Bakert

        He makes comments like this because the development world is stuck in an obsessive compulsive cycle of academically fixing problems that don’t exist with solutions that don’t work for an audience that doesn’t get it

        • Kevin Weir

          Before I die I want to see how many more ways there are to do CRUD πŸ™‚

          I’ve been at this gig since the COBOL days and have seen it all... and then saw it again and again and again lol

      • Sam Wheat

        REST won the war

        No they didn’t. They won the battle. REST is now well understood to be a leaky abstraction and a needlessly limiting and complex technology.

        • Kevin Weir

          On the one hand REST essentially enabled the modern web, but as a programming paradigm it’s inherently weak, fragile, complex and just plain painful from top to bottom. Things can improve somewhat if we can get back to having some decent state management and code execution options on the client, i.e. WebAssembly holds at least some promise in that regard.

    • James Newton-King

      I get that this is a joke, but it is always your responsibility to evaluate the right technology for your applications. Technology changes, and the design for an OS app client calling a server is different from JavaScript in a browser calling a server. What worked for one use case is not necessarily good for the other. gRPC is a new tool in .NET developers’ toolkits. The information in this post is here to help you make an informed choice about when gRPC is a good choice for you.

      • Jonathan Bakert

        “Technology changes” I don’t know about that. The only thing that’s changed for me since I first launched qbasic and tossed a banana at someone was API hyper-evangelism.

        Massive API churn, convoluted framework of the month– these things aren’t technology. They’re tools, and often unnecessary ones at that.

        The point of coding is to create not to get stuck creating things that create then buttressing a new creation to the point of usability whilst the original creator is busy writing a new creation for another invented problem.

    • Ed Lance

      @Mike-EEE, right on! Before WCF it was .asmx SOAP, and the story sounds all too familiar. Hey! You can call methods on a server as though it were local! Code generation, bla bla bla.

      We heard this story around 2002. Same story, different platform. MAYBE it will catch on, MAYBE it will be better? One thing I learned from being around all this is not to jump on anyone’s bandwagon and start switching everything over. We’ll let this ferment for a while and see if anything comes of it.

      Oh and websockets. What happened there, idk.

  • Enrico Sabbadin

    We had .net remoting and we were fine with that πŸ™‚ ..
    We were told rpc style is bad .. and it’s back again ..
    Sometimes I feel like I’m being treated as an idiot 🙂

  • Jiří Zídek

    Messaging (= events, request & response pairs) is the best. We switched from JSON to MessagePack+COBS and it is fast enough. RPC is too strict and gives the feeling of a local resource, which is very leaky and leads to misconceptions – like in R.I.P. Remoting.

    • Basil Thomas

      When you use gRPC as it was designed, simple message contracts are the default implementation, and it is very strict, with only the translation of the message contract into C# classes,
      e.g. AddFuturesContractCommand & AddFuturesContractCommandResponse

      Both are message contract only with no leaky abstractions involved at all: the client command is executed on the server and the server returns with a command response to the client regardless of who is the client or server.

      This is exactly how I defined WCF messaging and it worked like a charm and thankfully moving to gRPC will be a breeze!!

    • Kevin Weir

      This notion that contracts are bad is quite humorous. As soon as you write a single web API operation you’ve created a contract the client is required to conform to in order to communicate. A stack like gRPC just formalizes that contract, which I think is a VERY good thing. Heck, the API space is trying to address the gap of missing contracts with the OpenAPI spec initiative. What goes around comes around I guess.

      To me the problem isn’t so much the notion of contracts, it’s how they’re implemented into the overall platform and technology, and how easily they can be discovered and consumed.

    • James Newton-King

      protobuf-net.Grpc builds on top of .NET Core gRPC. We added some features to .NET Core gRPC to make it a seamless experience.

      protobuf-net.Grpc allows for code-first contracts, defined in C#. Note that you give up cross-language communication by not having a proto file.

      • Laszlo L

        Yes. I tried it – not yet in real development, just a few tests, so not every aspect was tested. It looks great, and is exactly what was missing for somebody who is used to Postman. I just wanted to promote it here so it may get more attention (urging the developers even more to continue their work).
        BloomRPC is less than a year old while Protocol Buffers is almost two decades old. I contemplated using Protobuf serialization years ago and was wondering why no such tool existed until recently.

  • Anil Raut

    Context – a microservices architecture leveraging Kubernetes environments needs efficient inter-service communication and must also support external clients that don’t support gRPC (simply put, REST clients).

    Understanding (as of today) – not all clients, especially browsers, support gRPC.

    Need – expose and manage endpoints for gRPC as well as REST clients without duplication.

    Question – what approach/pattern do you suggest so we can expose service endpoints that support both gRPC and REST, given we would need dual support for most of our domain microservices?

  • Chris Woodward

    Hi James, thank you for your article. Watching the latest ASP.NET Community Standup this morning with David Fowler and Damian Edwards, they mentioned that although you can now develop gRPC solutions, you need to be aware of the limited deployment options available to you on the Windows platform. I may be wrong, but I think they said that you cannot, at this time, deploy gRPC on Azure App Services or behind IIS. Apparently this is currently being worked on by the team but will not be resolved in the near future, as there essentially needs to be a Windows update for this to work. Perhaps you could clarify exactly what the deployment options currently are for developers that want to release gRPC solutions?

    • James Newton-King

      You can host gRPC + ASP.NET Core in Kestrel. HttpSys and IIS support is coming. They require improvements to their HTTP/2 support to properly support gRPC.

      If you are hosting on Azure then Kestrel in a container with AKS works. Azure App Service is not supported because there are reverse proxies and load balancers in front of App Service that use HttpSys. HTTP/2 needs to be properly supported from end to end.

    • MCP.NET

      For those who don’t know anything else to use.

      If you know what you’re doing, you will already have your own way of doing things and will never depend on anyone else to tell you what to do or how to do it.
      Someone from Google wrote something for themselves, started ringing the bell, and sheep started following, like with many “new technologies” that came in past years. You don’t know what to do and you try to implement that new “cool” technology hoping that it will solve all of your problems. And, in the end, your problems are still there and may become worse, and the company may go into the abyss because of that.
      Sorry, but tired of all this nonsense.
      What you have today as SOA, microservices, etc. – we did that in 1999/2000.
      When we explain all that to newcomers, they end up speechless... They learn that what they think is “new cool technology” is something that was done decades ago.
      Yes, in 1999/2000, when everyone just used CORBA, DCOM, Remoting, and Microsoft was coming out with .NET Remoting, we were doing web service calls to provide the first ever Insurance Policy Portal for Members and Agents with REAL TIME data from backend mainframe systems. Hundreds of small satellite libraries written in COBOL were invoked per web service call to provide JUST ONE operation/process/piece of data back to a caller (the same thing someone is preaching should be done with “microservices”). Unheard of at the time of monster monolithic mission-critical applications.
      BTW, the web servers were also written in COBOL, with Pascal libraries for networking, as COBOL has no ability to provide network support :).

      I’m still looking for someone to say that they were doing the same in the year 2000. No one has come forward yet.
      If you know someone who may actually have been doing that around 2000, let me know – I would like to chat with that smart person.

      Today, everyone is so happy to say they’re doing SOA, microservices, containers, Kubernetes, etc., but when asked what the benefit of doing all that is... you just hear crickets... or the answer is: we’re doing it just because someone else said they were doing it and we have to be current and copy them... Funny!

      Never implement technology that is not required.
      The problem at hand will ask for a certain solution; do not force a solution on a problem, as it will end up badly.
      Need to use gRPC? Think deeply about what it will solve for you that you can’t do with, for example, a plain HTTP call. Do you really need binary data transfer to speed things up because you have trillions of transactions? Are you offered just one type of data access point (a gRPC web service)? Don’t do it just for the sake of doing it because it is a “cool new technology”.
      You ALWAYS must have a very deep reason to implement some new technology, as that never comes cheap.

        • MCP.NET

          πŸ™‚πŸ™‚πŸ™‚πŸ™‚πŸ™‚

          You get it too.

          Wish that many more truly got it and understood what software development is really about.
          Then we would have far better applications and systems, not bloated with unnecessary things.

          “If you don’t know what you are doing, no language, no OS, no technology can help you with that. You will just sink deeper and deeper into the mess that you are creating using new “cool” technologies.” ©

          Just remember word “KISS” forever. πŸ™‚ That will help you tremendously.

          Developing a simple application is so difficult.
          You may need to write just 3 lines of code that will do all that is needed, but you end up with hundreds of lines of code or even more. Why?
          It is not 1990, when people were bragging about the millions of lines of code their applications were built from (lines that do nothing 🙂).
          It is the 21st century. I need an application that has as few lines of code as possible. That would be a work of genius.
          Anything more is probably a failure – a developer just trying to survive, wandering around and writing tons of code.
          You have 10,000 developers in a company doing what? Writing unnecessary code. If they knew better, 9,900 probably would not be needed. 100 experts could handle it all, as they would not waste time and money, and would have rock solid applications which would require less support too.

  • jinglian cui

    So, does it provide a programming model for developing P2P applications? Is it possible for it to replace the traditional TCP programming model?

  • Igor Krupin

    Cool stuff! I hand-rolled something similar for microservice-to-microservice sync and async communication. I used JSON to serialize the payloads. Serialization was easy; deserialization was a bit tricky. I ended up adding type hints into the payload to aid the deserializer so that it knows exactly which type it is trying to deserialize. The cool part about JSON is that you can log and interrogate the payload and troubleshoot should something go wrong. Also, for async flows you can park your RPC payloads in something like DynamoDB. Since it is JSON, you can search the payloads, etc. Performance using JSON (+ gzip) was great once the serializer was able to serialize a certain type at least once (I’m assuming it uses some sort of type info cache). Cold serialization took some milliseconds longer.

    gRPC should further provide performance improvements, with trade off being that the payload is not in an easily digestible format.

    Also, MediatR makes inter-microservice communication simple. You have a request, a response, and a request handler. Creating a microservice interop layer is as simple as serializing the request and sending it to the receiving microservice. The microservice will run the request, issue a response, which then is serialized and sent back to the caller.
