Full Reactive Stack: Conclusions

In this final section, we run the application and see how everything works for the two different approaches: WebFlux with Server-Sent Events and classic blocking MVC. This post also compares the two alternatives in terms of user experience, performance, and ease of development.

This post is part of the Full Reactive Stack Guide:

  1. Full Reactive Stack - Introduction
  2. Backend side with WebFlux, Spring Boot and MongoDB
  3. Frontend side with Angular and EventSource
  4. Running the application, Comparing WebFlux and MVC, and Conclusions (this post)

Do you prefer the print-friendly version of the guide? Get access now to the mini-book (37 pages) at a reduced price. Download your copy on LeanPub.

Running the WebFlux - Angular application with Docker

In previous posts, we covered how to run the backend Spring Boot application and how to run the Angular frontend. However, it's much easier to run everything together in Docker containers. I won't detail in this post how to configure everything needed to make this work - that belongs in a different blog entry - but let's see how to execute it.

Preparing our apps to be dockerized

We'll create Docker images from the build artifacts, so first we need to make sure we generate the jar file for the Spring Boot application, and the HTML and JavaScript content for the Angular code.

To build the backend, execute this command from the spring-boot-reactive-web folder (on Windows, drop the ./ prefix):

./mvnw clean package

To make sure we get the Angular artifacts, run this command from the angular-reactive folder:

npm run ng build

In the GitHub repository you'll find this docker-compose.yml file, prepared with everything needed to build and run the images.

version: "2"

services:
  mongo:
    image: mongo:3.4
    hostname: mongo
    ports:
      - "27017:27017"
    volumes:
      - mongodata:/data/db
    networks:
      - network-reactive

  spring-boot-reactive:
    build:
      context: ../spring-boot-reactive-web
    image: spring-boot-reactive-web-tpd
    environment:
      # Overrides the host in the Spring Boot application to use the Docker's hostname
      - SPRING_DATA_MONGODB_HOST=mongo
    ports:
      - "8080:8080"
    networks:
      - network-reactive

  angular-reactive:
    build:
      context: ../angular-reactive
    image: angular-reactive-tpd
    ports:
      - "8900:80"
    networks:
      - network-reactive

volumes:
  mongodata:

networks:
  network-reactive:

The build phase will use the corresponding Dockerfile in our backend and frontend folders to build the Docker images. In addition, this Docker configuration creates the following:

  • A mongo container with a volume to persist the data between executions.
  • A backend container connected to the mongo container, exposing port 8080.
  • A frontend container mapping its internal port (80) to the host's port 8900.

Therefore, to have our containers up and running, you just need to run docker-compose from the docker folder:

docker-compose up

If you want to stop them, you can run docker-compose stop from a different terminal, or press Ctrl-C in the one running the containers. You can also run docker-compose in detached mode with the -d flag if you prefer.

WebFlux vs. Blocking (MVC): user experience

Let's start our comparison with the most noticeable difference from the point of view of web users: user experience. We'll use the frontend for this comparison.

Angular Reactive Quotes

Playing the user's role is quite simple in this application: enter values for the given parameters (pagination, page number and size) and press either the Reactive Request or the Blocking Request button. That puts our implementation into practice, calling the backend services from the web page and processing the response either in small portions (WebFlux) or as a whole (MVC-like). This is covered by the previous posts, so I won't dive into details.
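
As a quick reminder from the previous posts, the sketch below captures the essence of the two endpoints being compared. The class, repository and endpoint names are illustrative and may not match the actual code in the repository; treat it as a minimal sketch rather than the project's implementation.

import java.time.Duration;
import java.util.List;
import org.springframework.http.MediaType;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import reactor.core.publisher.Flux;

@RestController
public class QuoteSketchController {

    // Hypothetical reactive repository; in the real project this is backed by MongoDB.
    private final ReactiveQuoteRepository repository;

    public QuoteSketchController(ReactiveQuoteRepository repository) {
        this.repository = repository;
    }

    // Reactive endpoint: streams quotes as Server-Sent Events, one element at a time,
    // with an artificial 100 ms delay per element to simulate a slow producer.
    @GetMapping(value = "/quotes-reactive", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
    public Flux<Quote> getQuotesReactive() {
        return repository.findAll().delayElements(Duration.ofMillis(100));
    }

    // Blocking-style endpoint: applies the same delay, but collects everything first
    // and returns the complete list in a single response body.
    @GetMapping("/quotes-blocking")
    public List<Quote> getQuotesBlocking() {
        return repository.findAll()
                .delayElements(Duration.ofMillis(100))
                .collectList()
                .block();
    }

    // Minimal placeholder types for the sketch; the real project defines its own.
    interface ReactiveQuoteRepository {
        Flux<Quote> findAll();
    }

    record Quote(String id, String book, String content) {
    }
}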

You'll quickly find out that for non-paginated requests and for big page sizes, the reactive user experience is much better. You don't need to wait for the request to finish to start reading the quotes. And this is just a simulated scenario in which we're forcing a constant delay per element on the server. In a real situation, where the server's response time and the network latency may vary unpredictably, the benefit would be even more noticeable.

As mentioned in the first post of the series, it's a fact that web users abandon slow sites. If you're running, let's say, an online shop, it's better to show a few products to your potential customers as soon as possible than to keep them waiting for all the content to appear at once.

WebFlux vs. Blocking (MVC): performance

This is an interesting topic: one might think that going reactive on the web with WebFlux may add some delay to the total time needed to deliver a response. As we'll see in the benchmark figures, that's true if we examine requests executed one by one, but where the reactive web approach shines is when we measure the overall performance of the server while it handles multiple requests at the same time.

The results shown are extracted from a benchmark that you can run yourself. In the GitHub backend project folder, spring-boot-reactive-web, you'll find the test class BenchmarkTest. Remove the @Ignore annotation and run the tests from the command line, or just use your IDE to execute them.

Benchmark details

I used an Intel Core i7 @ 2.5 GHz with 1 processor and 4 cores for the test. The backend runs from IntelliJ IDEA (not Docker). I didn't modify the default configuration for Netty, which in my case runs 8 parallel server threads.

The benchmark allows you to configure:

  • The total number of requests to execute.
  • How many requests are going to be performed in parallel (parallelism).

Every request performed by the client should take at least one second, since the delay per element is 100 ms and we're asking the server for 10 quotes. The existing code runs 1, 8, 32, 96 and 768 requests, each in four configurations:

  • 32 threads reactive.
  • 32 threads blocking.
  • 96 threads reactive.
  • 96 threads blocking.

To avoid potential differences in the implementation, both reactive and non-reactive requests are executed using a WebClient. I also tried RestTemplate for the blocking part, with very similar results. The log levels of the application are set in the repository so that you can get some valuable information when running the benchmark: threads started, locks being acquired and released, etc.
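
For illustration only, here is a simplified sketch of how such a benchmark client could issue both kinds of requests through a WebClient while controlling the parallelism. The endpoint paths and the timing logic are assumptions, not the literal contents of BenchmarkTest.

import java.time.Duration;
import org.springframework.web.reactive.function.client.WebClient;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

public class BenchmarkClientSketch {

    private final WebClient webClient = WebClient.create("http://localhost:8080");

    // Reactive call: subscribes to the event stream and completes when the last element arrives.
    // For timing purposes the payload is read as raw Strings, so no DTO is needed.
    private Mono<Void> reactiveRequest() {
        return webClient.get().uri("/quotes-reactive")    // assumed path
                .retrieve()
                .bodyToFlux(String.class)
                .then();
    }

    // Blocking-style call: waits for the whole response body in one go.
    private Mono<Void> blockingStyleRequest() {
        return webClient.get().uri("/quotes-blocking")    // assumed path
                .retrieve()
                .bodyToMono(String.class)
                .then();
    }

    // Fires totalRequests calls with the given parallelism and measures the elapsed time.
    public Duration run(int totalRequests, int parallelism, boolean reactive) {
        long start = System.nanoTime();
        Flux.range(1, totalRequests)
                .flatMap(i -> reactive ? reactiveRequest() : blockingStyleRequest(), parallelism)
                .blockLast();                             // wait until every request has finished
        return Duration.ofNanos(System.nanoTime() - start);
    }

    public static void main(String[] args) {
        BenchmarkClientSketch client = new BenchmarkClientSketch();
        System.out.println("96 requests, 32 in parallel, reactive endpoint: "
                + client.run(96, 32, true));
    }
}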

Server-Side: Requests Served per Second

As you can see in the graph below, WebFlux scores much better under high load. It's capable of processing almost all the requests in parallel, reaching 30 requests per second when we execute 32 requests at once and 90 requests per second for 96 simultaneous calls.

Server requests per second: MVC vs. WebFlux

When we execute blocking calls, the server can't process as many requests in parallel, so it's clearly limited by the number of server threads running, eight in this case.

This proves that a reactive web approach optimizes server resources by freeing up the server's threads and processing the requests separately, in parallel.

Client-Side: Average Time per Request

This metric also shows WebFlux as the winner. The Netty server keeps all the reactive requests close to the minimum of one second, which is indeed a consequence of the server being able to cope with almost every request in parallel.

On the blocking side, the higher the number of parallel requests, the longer a request takes to complete on average. This makes sense: since all the server threads are busy, the remaining requests need to wait, thus increasing the average time. The business translation: most users are unhappy because the web page takes a long time to load.

Average time per request: WebFlux vs. MVC

WebFlux vs. Blocking (MVC): ease of development

This is the most subjective angle we can use to compare both approaches. However, I consider it very important: it would be risky to blindly choose WebFlux because of its performance if our knowledge is not good enough to implement the solution using reactive patterns, or if there's no good documentation for it. You might end up with a web server that performs amazingly well but is a nightmare to maintain from the development point of view.

Suitability

On top of that, you should evaluate whether a full reactive stack fits your case. Do you require a SQL query to complete your web response? Bad news: there is no good support for reactive SQL drivers yet. Analyze in advance whether all your layers can handle this new programming approach, or you may be blocked by the weakest link in the chain (the blocking one).

Testing

Writing unit and integration tests for reactive streams with WebFlux might be tricky. The Project Reactor documentation is a good starting point, showing how you can use StepVerifier to check that your Fluxes and Monos work the way you want. You can also find examples of unit tests inside our backend application in the GitHub repository.
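
To give a flavour of what such a test looks like, here is a minimal, self-contained StepVerifier sketch. The values and timings are made up for the example; it is not taken from the project's test suite.

import java.time.Duration;
import org.junit.Test;
import reactor.core.publisher.Flux;
import reactor.test.StepVerifier;

public class QuoteStreamSketchTest {

    @Test
    public void emitsAllQuotesAndCompletes() {
        // A small Flux standing in for the real quote stream, with the same
        // kind of per-element delay used by the application.
        Flux<String> quotes = Flux.just("quote-1", "quote-2", "quote-3")
                .delayElements(Duration.ofMillis(100));

        // StepVerifier subscribes to the publisher and asserts both the emitted
        // elements and the terminal signal, failing if it takes longer than 1 second.
        StepVerifier.create(quotes)
                .expectNext("quote-1", "quote-2", "quote-3")
                .expectComplete()
                .verify(Duration.ofSeconds(1));
    }
}

For longer delays, StepVerifier.withVirtualTime lets you run the same kind of verification without actually waiting for the time to pass.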

An extra inconvenience is that there isn't yet a big community of developers from which you can learn, by example, how to test with WebFlux. Besides, the tests are not as readable as with MVC, and they are harder to debug if something goes wrong.

Conclusions

Like any other technology choice, WebFlux (and the reactive web approach in general) has advantages and downsides. Don't go for it just because it's the new thing or because you want to practice (for that, you can build sandbox experiments like this one).

As I covered throughout this series, WebFlux brings performance benefits and can also improve the experience you offer your users while they're waiting for the response data. On the other hand, reactive programming comes with new ways of writing, testing and debugging code, so it's not a quick transition.

I always recommend a thorough analysis, not only of the suitability for what your application needs to accomplish, but also of your environment from common-sense and human-factor perspectives. Is the development team ready for the change? Can you solve all the challenges you'll face? Will you really use the benefits, or are they just nice-to-haves?

For an existing project that may take advantage of it, I'd rather try it in a limited scope - a subset of your requests that may, for example, map to a microservice (or module, or component) in your system. Evaluate the results; if you like them, you can then smoothly refactor the rest.

I hope you enjoyed this guide. Feel free to give me some feedback via comments, Twitter, GitHub, or any other channel you prefer.

Do you like this guide's format? If you enjoyed it and want to get into the world of microservices with Spring Boot and Spring Cloud, get my book on the Apress Store or Amazon.
