- WebFlux vs. Blocking (MVC): user experience
- WebFlux vs. Blocking (MVC): performance
- WebFlux vs. Blocking (MVC): ease of development
WebFlux vs. Blocking (MVC): user experience
Let’s start our comparison with the most noticeable difference for web users: the user experience. We’ll use the application’s frontend for this comparison.
Playing the user’s role is quite simple in this application: enter values for the given parameters (pagination on or off, page number, and page size) and press either the ‘Reactive Request’ or the ‘Blocking Request’ button. That puts our implementation into practice, calling the backend services from the web page and processing the response either in small chunks (WebFlux) or as a single payload (Web MVC). This is covered in the previous chapters, so we won’t dive into the details of how it works.
You’ll quickly find that, for non-paginated requests and big page sizes, the user experience in the reactive case is much better: you don’t need to wait for the request to finish to start reading the quotes. Keep in mind, too, that this is a simulated scenario in which we’re forcing a constant delay per element on the server. In a real situation, where the server’s response time and the network latency vary unpredictably, the benefit would be even more noticeable.
As mentioned in the first chapter, it’s a known fact that web users abandon slow sites. Following the same example we used back then, it’s much better to show your potential customers a few products of your online store as soon as possible than to keep them waiting for all the content to appear at once.
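For reference, the reactive endpoint that streams quotes bit by bit could look roughly like this. This is a minimal sketch, not the repository’s exact code: the Quote class, the QuoteRepository, the path, and the 100 ms delay per element are assumptions based on the scenario described above.

```java
import java.time.Duration;
import org.springframework.http.MediaType;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import reactor.core.publisher.Flux;

@RestController
public class QuoteReactiveController {

    private final QuoteRepository quoteRepository; // hypothetical reactive repository

    public QuoteReactiveController(QuoteRepository quoteRepository) {
        this.quoteRepository = quoteRepository;
    }

    // Streams quotes one by one as Server-Sent Events. delayElements simulates
    // a slow producer, so the browser can render each quote as soon as it arrives
    // instead of waiting for the full payload.
    @GetMapping(value = "/quotes-reactive", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
    public Flux<Quote> getQuoteFlux() {
        return quoteRepository.findAll()
                .delayElements(Duration.ofMillis(100));
    }
}
```

The blocking counterpart would return a plain List&lt;Quote&gt;, forcing the client to wait for the whole collection before rendering anything.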
WebFlux vs. Blocking (MVC): performance
Interestingly enough, a full reactive web stack introduces some extra delay in the total time required to retrieve a response from the server, due to the extra processing needed to make the streaming communication work. Therefore, we can’t draw conclusions by looking at requests one by one.
The reactive web approach shines when we measure the overall performance of the server while it handles multiple requests at the same time. In this section, we’ll go through different test configurations and look at graphs that compare the WebFlux implementation with the Web MVC one.
The results come from a benchmark test that you can run yourself. In the GitHub backend project folder, spring-boot-reactive-web, you’ll find the test class BenchmarkTest. Remove the @Ignore annotation and run the tests, or simply execute the class from your IDE.
I used an Intel Core i7 @ 2.5 GHz with 1 processor and 4 cores for the test. The backend runs from IntelliJ IDEA (not Docker). I didn’t modify the default Netty configuration, which in my case runs 8 parallel server threads.
The benchmark allows you to configure:
- The total number of requests to execute.
- The number of requests to run in parallel (parallelism).
Every request performed by the client should take at least one second, since the delay per element is 100 ms and we’re asking the server for 10 quotes. The existing code runs 1, 8, 32, 96, and 768 requests four times:
- With parallelism of 32 and invoking the reactive endpoint.
- With parallelism of 32 and invoking the blocking endpoint.
- With parallelism of 96 and invoking the reactive endpoint.
- With parallelism of 96 and invoking the blocking endpoint.
To avoid potential differences in the test implementation, both the reactive and the blocking requests use a WebClient, the reactive client that comes with the new Spring WebFlux. I also tried RestTemplate for the blocking part, with very similar results. The application’s log levels are configured in the repository so that you can extract some valuable information when running the benchmark: threads started, locks acquired and released, etc.
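The core of such a benchmark could be sketched with Reactor’s flatMap, whose concurrency argument caps the number of in-flight requests. This is an illustrative sketch, not the actual BenchmarkTest: the endpoint path, query parameters, and Quote class are assumptions.

```java
import org.springframework.web.reactive.function.client.WebClient;
import reactor.core.publisher.Flux;

WebClient client = WebClient.create("http://localhost:8080");
int totalRequests = 96;
int parallelism = 32;

long start = System.currentTimeMillis();
Flux.range(1, totalRequests)
        // flatMap's second argument limits how many requests run concurrently
        .flatMap(i -> client.get()
                        .uri("/quotes-reactive?page=1&size=10")
                        .retrieve()
                        .bodyToFlux(Quote.class)
                        .collectList(),
                parallelism)
        .blockLast(); // wait until every request has completed
long elapsed = System.currentTimeMillis() - start;
System.out.println(totalRequests + " requests in " + elapsed + " ms");
```

Swapping the URI to the blocking endpoint keeps the client side identical, which is exactly why using WebClient for both cases makes the comparison fair.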
Server’s Side: Requests Served per Second
As you can see in the graph below, WebFlux scores much better under high load. It’s capable of processing almost all the requests in parallel, reaching 30 requests/second when we execute 32 requests at once and 90 requests/second for 96 simultaneous calls.
When we execute blocking calls, the server can’t process as many requests in parallel: it’s clearly limited by the number of server threads running, eight in this case.
This shows that a reactive web approach optimizes server resources by unblocking the server’s threads, letting them keep accepting and processing requests in parallel.
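You can reproduce the thread-limit effect with plain Java, no Spring involved: a fixed pool of 8 worker threads (standing in for Netty’s 8 server threads) completes 32 blocking tasks of 100 ms each in 4 sequential waves, taking roughly 400 ms in total rather than 100 ms. A self-contained sketch (the scaled-down task duration is an illustration, not the benchmark’s real 1 s requests):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ThreadLimitDemo {

    // Runs 'requests' blocking tasks of 'taskMillis' each on a pool of
    // 'serverThreads' threads and returns the total elapsed milliseconds.
    static long runBlocking(int serverThreads, int requests, long taskMillis) {
        ExecutorService pool = Executors.newFixedThreadPool(serverThreads);
        try {
            List<Future<?>> futures = new ArrayList<>();
            long start = System.nanoTime();
            for (int i = 0; i < requests; i++) {
                futures.add(pool.submit(() -> {
                    try {
                        Thread.sleep(taskMillis); // simulates a blocking request
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }));
            }
            for (Future<?> f : futures) {
                f.get(); // wait for every task to finish
            }
            return (System.nanoTime() - start) / 1_000_000;
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        // 32 tasks on 8 threads run in 4 waves: roughly 4 x 100 ms in total.
        System.out.println("Elapsed: " + runBlocking(8, 32, 100) + " ms");
    }
}
```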
Client’s Side: Average Time per Request
This metric also shows WebFlux as the winner. The Netty server keeps all the reactive requests close to the one-second minimum, which is simply the consequence of the server being able to handle almost every request in parallel.
On the blocking side, the higher the number of parallel requests, the longer each request takes to complete on average. It makes sense: with all the server threads busy, the remaining requests need to wait, thus increasing the average time. From the user’s perspective, many people would be unhappy because the web page takes a long time to load.
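That queueing effect is easy to model with simple arithmetic: with T server threads and one-second requests, requests complete in “waves”, and a request served in wave k finishes after roughly k seconds. A plain-Java sketch of this idealized model (it ignores scheduling overhead, so real numbers will be slightly higher):

```java
public class AverageLatencyModel {

    // Average completion time (seconds) for 'requests' one-second blocking
    // requests served by 'threads' server threads, assuming perfect wave
    // scheduling: wave 1 finishes at 1 s, wave 2 at 2 s, and so on.
    static double blockingAverageSeconds(int requests, int threads) {
        double total = 0;
        for (int i = 0; i < requests; i++) {
            int wave = i / threads + 1;
            total += wave;
        }
        return total / requests;
    }

    public static void main(String[] args) {
        // 96 blocking requests on 8 threads form 12 waves, so the average is
        // (1 + 2 + ... + 12) / 12 = 6.5 seconds per request.
        System.out.println(blockingAverageSeconds(96, 8));
        // A reactive server handling all 96 in parallel keeps every request
        // close to the 1-second minimum instead.
    }
}
```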
Bear in mind that this benchmark shows a clear winner because it focuses on a key server resource: the number of parallel threads available. We could make the classic Web stack perform much better:
- A first option would be to increase the number of parallel threads available in the server, adjusting it to our needs.
- On the implementation side, we could switch to asynchronous processing at the controller layer. This could be achieved, for example, by returning a Callable as the response. See the Asynchronous Requests section in the Spring docs for more information.
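The Callable option might look like the sketch below. The controller and service names are hypothetical; the key point, documented in Spring MVC’s asynchronous request support, is that returning a Callable releases the servlet container thread while the work runs on a separate TaskExecutor thread.

```java
import java.util.List;
import java.util.concurrent.Callable;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class QuoteAsyncController {

    private final QuoteService quoteService; // hypothetical blocking service

    public QuoteAsyncController(QuoteService quoteService) {
        this.quoteService = quoteService;
    }

    // The container thread is released immediately; Spring MVC invokes the
    // Callable on a TaskExecutor thread and completes the response when it
    // returns. The request is still delivered to the client as one payload.
    @GetMapping("/quotes-async")
    public Callable<List<Quote>> getQuotes() {
        return () -> quoteService.findAll();
    }
}
```

Note that this frees server threads but doesn’t stream the response, so the user-experience benefits of WebFlux described earlier still don’t apply.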
WebFlux vs. Blocking (MVC): ease of development
This is the most subjective angle from which we can compare both approaches. However, I think it’s very important, because it would be risky to blindly choose WebFlux for the value it brings to the user experience without considering other factors.
For instance, take into account your organization’s knowledge of reactive patterns. It’s clearly a change of paradigm, and there are many ways in which things can go wrong if people are not properly trained in these concepts. You might end up with an impressively performing web server that is a nightmare to maintain from the developer’s point of view.
In my opinion, code readability may get worse with reactive patterns: long chains of methods with unusual indentation can leave the reader struggling to figure out the underlying logic unless there is good inline documentation or very good code assistance from the IDE. On top of that, there is extra complexity associated with debugging these code blocks, since they don’t return values immediately.
You should evaluate whether a full reactive stack fits your case. Do you require SQL queries to complete your web response? Bad news: reactive support for relational databases is still at a very early stage.
Look also at the client’s side: can your web clients switch to a reactive approach? If the answer is no, the list of advantages you’ll get becomes smaller.
Writing unit and integration tests for reactive streams with WebFlux might be tricky. The Project Reactor documentation is a good starting point, showing, for example, how you can use StepVerifier to check that your Fluxes and Monos work the way you want.
You can also find examples of Unit Tests in our backend application’s GitHub repository.
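As a quick illustration of the style, a StepVerifier test might look like this. The Flux under test here is a hard-coded example, not one of the repository’s real controllers; StepVerifier.create, expectNext, and verifyComplete are part of the reactor-test module’s documented API.

```java
import org.junit.Test;
import reactor.core.publisher.Flux;
import reactor.test.StepVerifier;

public class QuoteFluxTest {

    @Test
    public void fluxEmitsExpectedQuotes() {
        // Illustrative publisher; in a real test this would come from your service
        Flux<String> quotes = Flux.just("quote-1", "quote-2", "quote-3");

        // StepVerifier subscribes to the Flux and asserts each emitted element
        // in order, then verifies that the stream completes without errors.
        StepVerifier.create(quotes)
                .expectNext("quote-1")
                .expectNext("quote-2", "quote-3")
                .verifyComplete();
    }
}
```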
An extra inconvenience is that there isn’t yet a big community of developers from whom you can learn by example how to test with WebFlux. Besides, the tests are not as readable as the MVC ones, and they’re also harder to debug if something goes wrong.
Like any other technology choice, WebFlux (and the Reactive Web approach in general) has advantages and downsides. Don’t go for it just because it’s the new thing to do.
As we saw in this guide, WebFlux can bring performance benefits and can also improve the experience you offer your users while they wait for the response data. On the other hand, reactive programming comes with new ways of writing, testing, and debugging code, so it’s not a quick transition.
I always recommend a thorough analysis not only of your application’s requirements but also of other aspects of your organization. Is the development team ready for the change? Can you solve all the challenges you’ll face? Will you really use the benefits, or are they just nice-to-haves?
For an existing project that may take advantage of it, I’d rather try it in a limited scope first: implement the approach in a separate component (or module, or microservice), evaluate the results, and then refactor the rest of the code if you see that it brings value.
Would you like to know more about Spring Boot applied to a Microservices Architecture? Learn all these concepts from a practical perspective with my new book.
Did you like this post? Get the complete Full Reactive series in a book format on Leanpub.