RSocket for the internet: implementing http/2 based transport

March 18, 2020
RSocket java http2

[rsocket-transport-http2 on github]

Http2 is becoming the language of the internet, carrying a little less than half of the world's traffic. It models requests and responses as binary frame streams multiplexed over a single connection - a major improvement over text based Http1, which offers a single shared stream.

This property enables support of different clients: browser applications over Http/REST, mobile/IoT over gRPC, and mobile/IoT/browser applications using RSocket-RPC - all served by the same gateway/edge servers, with common functionality (authorization, metrics, routing, load balancing etc.) implemented in terms of Http2 streams.

An efficient internet protocol may be a bad fit inside the data center: RSocket gives the option to transparently switch the transport layer to a less chatty one; means to keep latencies low, with message level flow control per request and request concurrency control (the leasing mechanism) per connection; and a programming model of composable asynchronous message streams with cancellation and errors as first class citizens.

Clients do all interactions, servers are responders only

Client supported interactions are fire-and-forget, request-response, request-stream and request-channel.

Server initiated interactions are not supported because the push-promise semantics of Http2 streams are not suitable for RSocket.

Implied setup

RSocket has its own preface of either a SETUP or RESUME frame that can't be understood by Http2.

The problem could be partially solved with Http2 SETTINGS custom parameters: their value size limit of 4 bytes is enough for the keep-alive fields and the request leasing flag. The downside is lack of support by popular proxies, since custom parameters enable RSocket protocol specific features - hence a custom solution is required.
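To illustrate why 4 bytes would suffice, here is a hypothetical sketch (not part of the transport, names are illustrative) packing the two keep-alive fields, interval and max lifetime in seconds, into a single 32-bit SETTINGS value:

```java
// Hypothetical packing of RSocket keep-alive configuration into one 4-byte
// HTTP/2 SETTINGS value: high 16 bits = interval, low 16 bits = max lifetime.
final class KeepAliveSetting {

  static long pack(int intervalSeconds, int maxLifetimeSeconds) {
    if (intervalSeconds > 0xFFFF || maxLifetimeSeconds > 0xFFFF) {
      throw new IllegalArgumentException("value does not fit into 16 bits");
    }
    return ((long) intervalSeconds << 16) | maxLifetimeSeconds;
  }

  static int interval(long setting) {
    return (int) (setting >>> 16) & 0xFFFF;
  }

  static int maxLifetime(long setting) {
    return (int) setting & 0xFFFF;
  }
}
```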

That's why each side of the connection assumes the initial state as follows:

Carrying RSocket streams with Http2

Let's start with RSocket stream 0 frames.

Keep-alives are translated to Http2 PING frames with a small data payload of 8 bytes - just enough to measure RTT. Connection error frames are mapped to Http2 GO_AWAY with error code no_error(0) and a message containing the RSocket frame state in the form error_code:error_message.
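The error_code:error_message encoding can be sketched as follows (an illustrative helper, not the library's actual class):

```java
// Sketch: RSocket connection error state carried as "error_code:error_message"
// in the GO_AWAY frame debug data.
final class GoAwayError {
  final int errorCode;
  final String errorMessage;

  GoAwayError(int errorCode, String errorMessage) {
    this.errorCode = errorCode;
    this.errorMessage = errorMessage;
  }

  String encode() {
    return errorCode + ":" + errorMessage;
  }

  static GoAwayError decode(String debugData) {
    // split on the first colon only, so the message itself may contain colons
    int separator = debugData.indexOf(':');
    int code = Integer.parseInt(debugData.substring(0, separator));
    String message = debugData.substring(separator + 1);
    return new GoAwayError(code, message);
  }
}
```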

The 4 interaction streams are modelled after gRPC, where an Http2 stream is started with a HEADERS frame carrying request/response metadata, followed by DATA frames containing length delimited RSocket frames.

Client request streams are terminated by the DATA frame end_stream flag on successful completion/error, or by a RST_STREAM frame on cancellation. Server response streams are terminated by trailer HEADERS.

The sequences are illustrated by the example below.

The client request starts with HEADERS

  :method          POST
  :path            contains the interaction name for plain RSocket:
                   /rsocket/fnf, /rsocket/response, /rsocket/stream, /rsocket/channel,
                   or the call name for RSocket-RPC: /service/method
  content-type     application/rsocket+http2

followed by a sequence of DATA frames carrying length delimited RSocket frames.

The last frame is designated with the end_stream flag.

The RSocket CANCEL frame of a cancelled request is not encoded in DATA; instead it is mapped to an Http2 RST_STREAM frame.
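The length delimited framing inside a DATA payload can be sketched as below, assuming a 3-byte big-endian length prefix as in RSocket's TCP transport; the actual prefix size is an implementation detail of the library:

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

// Sketch: length delimited RSocket frames packed into / read from
// an HTTP/2 DATA payload, assuming a 3-byte big-endian length prefix.
final class LengthDelimited {

  static ByteBuffer encode(List<byte[]> frames) {
    int size = frames.stream().mapToInt(f -> 3 + f.length).sum();
    ByteBuffer buffer = ByteBuffer.allocate(size);
    for (byte[] frame : frames) {
      int length = frame.length;
      buffer.put((byte) (length >> 16));
      buffer.put((byte) (length >> 8));
      buffer.put((byte) length);
      buffer.put(frame);
    }
    buffer.flip();
    return buffer;
  }

  static List<byte[]> decode(ByteBuffer buffer) {
    List<byte[]> frames = new ArrayList<>();
    while (buffer.remaining() >= 3) {
      int length =
          (buffer.get() & 0xFF) << 16 | (buffer.get() & 0xFF) << 8 | (buffer.get() & 0xFF);
      byte[] frame = new byte[length];
      buffer.get(frame);
      frames.add(frame);
    }
    return frames;
  }
}
```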

The server responds with HEADERS

    :status        200
    content-type   application/rsocket+http2  

followed by a sequence of DATA frames, terminated by either a success trailer HEADERS

    rsocket-status 0

or error trailers containing the code and message of the RSocket ERROR frame

   rsocket-status  <error_code>
   rsocket-message <error_message>
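Interpreting these trailers on the client side could look like the sketch below (trailer names follow the convention above; the handling logic is illustrative):

```java
import java.util.Map;

// Sketch: mapping response trailer HEADERS back to stream completion state.
// rsocket-status 0 means success; any other value carries an ERROR frame state.
final class Trailers {

  /** Returns null on successful completion, otherwise an error description. */
  static String toError(Map<String, String> trailers) {
    int status = Integer.parseInt(trailers.getOrDefault("rsocket-status", "0"));
    if (status == 0) {
      return null;
    }
    return "error " + status + ": " + trailers.getOrDefault("rsocket-message", "");
  }
}
```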

JVM implementation

The library relies on Netty - the de facto standard for non-blocking networked applications, already present in the RSocket core library. netty-codec-http2 is the only additional direct dependency.

A few notes:

The client RSocketFactory can be configured by the transport builder, with options set as required by the transport contract outlined in the Implied setup section:

Mono<RSocket> client = NettyHttp2ClientTransport.builder()
        .address(host, port)
        .headers("authorization", "Basic YWxhZGRpbjpvcGVuc2VzYW1l")
    //  .rSocketRpc()   // sets Path header to RSocket-RPC service/method instead of rsocket/<interaction_name>  

The server can be configured in a similar manner:

Mono<CloseableChannel> server = NettyHttp2ServerTransport.builder()
        .address(host, port)
        .acceptor((setup, sendingSocket) -> Mono.just(new Responder()))

Source code is hosted on Github: rsocket-transport-http2 repository.

For a runnable example, build the project:

./gradlew clean build

Then start the server and the client.


The client terminal should display a received messages counter:

00:12:18.946 rsocket-transport-netty-http2-nio-1 com.jauntsdn.rsocket.transport.http2.example.client.Client Connected server on

00:12:23.956 rsocket-transport-netty-http2-nio-1 com.jauntsdn.rsocket.transport.http2.example.client.Client received 238006 messages
00:12:28.955 rsocket-transport-netty-http2-nio-1 com.jauntsdn.rsocket.transport.http2.example.client.Client received 374223 messages
00:12:33.955 rsocket-transport-netty-http2-nio-1 com.jauntsdn.rsocket.transport.http2.example.client.Client received 393195 messages
00:12:38.955 rsocket-transport-netty-http2-nio-1 com.jauntsdn.rsocket.transport.http2.example.client.Client received 358807 messages
00:12:43.955 rsocket-transport-netty-http2-nio-1 com.jauntsdn.rsocket.transport.http2.example.client.Client received 376130 messages
00:12:48.954 rsocket-transport-netty-http2-nio-1 com.jauntsdn.rsocket.transport.http2.example.client.Client received 375942 messages

Let's check how a single stream looks on the wire.

The client request is sent after the Http2 connection preface: a 24 byte magic string and SETTINGS exchange.

The request is started with a HEADERS frame containing :method=POST and :path=/rsocket/stream headers (we use plain RSocket), followed by a DATA frame with the RSocket request frame. Http2 stream ids start from 3 because the first id is reserved for the Http1 - Http2 protocol upgrade flow.

The server responds with a HEADERS frame containing :status=200, followed by a sequence of DATA frames holding length delimited RSocket frames, terminated by a response trailer HEADERS with rsocket-status=0 denoting successful response completion.


We can evaluate transport performance by measuring interaction response latency and RPS, with a different concurrency limit on each run: 1, 4, 8. Let's start with request-stream as the most common interaction.

The stream response contains 24 messages; message data is a random string, the same for a given sequence, limited to 100 characters.
The Epoll transport is enabled, TLS is disabled, and the test host is a 12 vCPU / 32 GB machine running Ubuntu 18.

The test server and client are available in the rsocket-transport-http2-test module.

A request-stream test with concurrency limit = 8 can be started as follows:

./ request-stream 8

The results are presented below.

Concurrency 1

18:32:58.407 rsocket-transport-netty-http2-epoll-1 com.jauntsdn.rsocket.transport.http2.perftest.TransportPerfClient --- request-stream , concurrency 1 ---

18:32:58.408 rsocket-transport-netty-http2-epoll-1 com.jauntsdn.rsocket.transport.http2.perftest.TransportPerfClient p50 => 126 microseconds
18:32:58.408 rsocket-transport-netty-http2-epoll-1 com.jauntsdn.rsocket.transport.http2.perftest.TransportPerfClient p95 => 147 microseconds
18:32:58.408 rsocket-transport-netty-http2-epoll-1 com.jauntsdn.rsocket.transport.http2.perftest.TransportPerfClient p99 => 164 microseconds
18:32:58.408 rsocket-transport-netty-http2-epoll-1 com.jauntsdn.rsocket.transport.http2.perftest.TransportPerfClient rps => 7813

Concurrency 4

18:34:16.801 rsocket-transport-netty-http2-epoll-1 com.jauntsdn.rsocket.transport.http2.perftest.TransportPerfClient --- request-stream , concurrency 4 ---

18:34:16.802 rsocket-transport-netty-http2-epoll-1 com.jauntsdn.rsocket.transport.http2.perftest.TransportPerfClient p50 => 289 microseconds
18:34:16.802 rsocket-transport-netty-http2-epoll-1 com.jauntsdn.rsocket.transport.http2.perftest.TransportPerfClient p95 => 395 microseconds
18:34:16.802 rsocket-transport-netty-http2-epoll-1 com.jauntsdn.rsocket.transport.http2.perftest.TransportPerfClient p99 => 459 microseconds
18:34:16.802 rsocket-transport-netty-http2-epoll-1 com.jauntsdn.rsocket.transport.http2.perftest.TransportPerfClient rps => 13446

Concurrency 8

18:37:59.846 rsocket-transport-netty-http2-epoll-1 com.jauntsdn.rsocket.transport.http2.perftest.TransportPerfClient --- request-stream , concurrency 8 ---

18:37:59.846 rsocket-transport-netty-http2-epoll-1 com.jauntsdn.rsocket.transport.http2.perftest.TransportPerfClient p50 => 547 microseconds
18:37:59.846 rsocket-transport-netty-http2-epoll-1 com.jauntsdn.rsocket.transport.http2.perftest.TransportPerfClient p95 => 657 microseconds
18:37:59.846 rsocket-transport-netty-http2-epoll-1 com.jauntsdn.rsocket.transport.http2.perftest.TransportPerfClient p99 => 842 microseconds
18:37:59.846 rsocket-transport-netty-http2-epoll-1 com.jauntsdn.rsocket.transport.http2.perftest.TransportPerfClient rps => 14439

Under the concurrency 8 test, 14k streams (and 14439 x 24 = 346536 messages) per second per core are served with sub-millisecond latency.

Concurrency 1 demonstrates the lowest latency - a little more than 160 microseconds - but throughput is underutilized: only around 8k streams per second.

RSocket fork

Forking was motivated by a series of problems making the official organization repo rsocket/rsocket-java barely usable.
Here is a brief outline of the blockers that made implementation on top of it infeasible:

An unfortunate bonus: rsocket/rsocket-java (and all of the above) is part of spring-boot:2.2.x / spring-integration:5.2.x - the latest major releases.

Their users can't have RSocket with TLS enabled transports, and with non-TLS they witness their services suddenly stop responding after a few hours deployed.
