9 Comments

  1. Can I know how you generated the graph for the performance comparison?

    1. Author

      It’s just one of the chart templates in an Excel spreadsheet. Is that what you’re asking?

  2. Hello there,

    Thank you for this great article!

    I think there is an issue with the benchmark. Looking at the GitHub repo, it seems to me that both the blocking part and the reactive part of the benchmark run with WebFlux on the server side. Yet, the article states that the blocking part runs on WebMVC (“WebFlux vs. Blocking (MVC): performance”).

    To obtain a fair comparison, the blocking part should indeed be run on WebMVC. Since WebFlux has only one thread per core (to reduce context-switch overhead), it is extremely sensitive to blocking. This thread limit explains why the requests served per second are capped at 8, which is the number of cores here.

    Did I misunderstand something here? Thanks for your insight.

    1. Author

      Hi Quentin. You have a good point about the web server setup. If you boot up the ‘classic’ web server version, you’ll get different server settings by default (basically more connection threads), which will lead to better performance in blocking scenarios. However, the point of this comparison was to demonstrate that the Reactive Web approach can make better use of resources, in this case by not being as limited by the number of threads.
      I’ll add your remark to the article. Thanks a lot for your comment!
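
      As a rough illustration of that difference, here is a minimal sketch of the same read operation in both styles. The class and repository names (Quote, QuoteRepository, QuoteReactiveRepository) are just placeholders, not the exact code from the repository:

        import java.util.List;

        import org.springframework.web.bind.annotation.GetMapping;
        import org.springframework.web.bind.annotation.RestController;

        import reactor.core.publisher.Flux;

        // Illustrative only: Quote and the two repositories stand in for whatever
        // entity and Spring Data repositories the project defines.
        @RestController
        public class QuoteController {

            private final QuoteRepository blockingRepository;          // classic Spring Data repository
            private final QuoteReactiveRepository reactiveRepository;  // reactive Spring Data repository

            public QuoteController(QuoteRepository blockingRepository,
                                   QuoteReactiveRepository reactiveRepository) {
                this.blockingRepository = blockingRepository;
                this.reactiveRepository = reactiveRepository;
            }

            // Blocking style: each request occupies one of Tomcat's worker threads
            // (200 by default) until the full list has been built and written.
            @GetMapping("/quotes-blocking")
            public List<Quote> getQuotesBlocking() {
                return blockingRepository.findAll();
            }

            // Reactive style: the Flux is served by Netty's event loop, which uses
            // roughly one thread per CPU core, so throughput is not bounded by a
            // thread-per-request model as long as nothing in the pipeline blocks.
            @GetMapping("/quotes-reactive")
            public Flux<Quote> getQuotesReactive() {
                return reactiveRepository.findAll();
            }
        }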

  3. I really don’t like this comparison.
    You are proving your point by inserting Thread.sleep in the controllers, which does not reflect the behaviour of a real-world app.
    In fact, removing these sleeps shows that standard blocking requests perform better for simple repository retrieval of elements, and significantly better when there are additional operations on the stream, like mapping or filtering.
    It would be wiser to show that there are benefits to reactive programming when dealing with high latencies or unpredictable response times from some networks, but for typical scenarios it is much wiser to use the typical blocking model.

    1. Author

      Thanks for your comment! Well, not including Thread.sleep() would also make the performance test unfair when running it locally, since you would be dealing with the ideal scenario of no network latency. The fact that it’s there is properly explained throughout the posts: it is intended to simulate bad network conditions. I agree with you, and I believe it’s clearly stated in the guide (see Blocking vs. Non-blocking), that WebFlux has some nice advantages over the blocking model when there are unpredictable latencies. However, I don’t agree with your sentence “for typical scenarios it’s much wiser to use typical blocking model”. Defining a “typical scenario” is not a trivial task. Like many other software decisions, it depends on multiple factors, and that’s what I try to explain in the conclusions. Ideally, it would be nice to compare both approaches without the sleep statement and perform requests from multiple geographical areas in the world against an average server; then I believe the point would be clear. I’ll try to set up something like that if I can.
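
      To make the latency simulation more concrete, here is a minimal sketch of how a delay can be introduced in each style. It is only an illustration of the idea, not the exact code used in the repository, and the endpoint names are made up:

        import java.time.Duration;
        import java.util.List;

        import org.springframework.web.bind.annotation.GetMapping;
        import org.springframework.web.bind.annotation.RestController;

        import reactor.core.publisher.Flux;

        @RestController
        public class SimulatedLatencyController {

            // Blocking style: the artificial delay keeps the request thread busy,
            // so concurrency is capped by the size of the servlet thread pool.
            @GetMapping("/blocking-delayed")
            public List<String> blockingDelayed() throws InterruptedException {
                Thread.sleep(100); // simulated network/database latency
                return List.of("a", "b", "c");
            }

            // Reactive style: delayElements() delays each emission by 100 ms using a
            // timer, so no request thread is held while "waiting".
            @GetMapping("/reactive-delayed")
            public Flux<String> reactiveDelayed() {
                return Flux.just("a", "b", "c")
                        .delayElements(Duration.ofMillis(100));
            }
        }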

  4. Thank you very much for this helpful write-up.

    In your example, the controller provides the page attribute, but the client calling your service cannot know how many pages are left, as the page information is not included in the response.

    Do you have any idea how cursor-based-pagination can be implemented in a reactive way?

    1. Author

      Really good question. Check this answer on Stack Overflow by a Spring Data engineer. I agree with that answer: pagination goes a little bit against some reactive patterns. Also, bear in mind that the total number of records might grow while you consume them from the frontend.

      However, I do understand the requirement in some situations. For example, you may want to know in advance, on the frontend side, how long it is going to take to retrieve all the results, so you can provide a better user experience (like a progress indicator). The simplest solution is having a separate endpoint to provide this information. Check this pull request: https://github.com/mechero/full-reactive-stack/pull/1/files.
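
      As a minimal sketch of that idea (names are illustrative, and the linked pull request may differ in the details), the streaming endpoint can be complemented with a separate count endpoint, so the frontend can compute progress or page boundaries on its own:

        import org.springframework.web.bind.annotation.GetMapping;
        import org.springframework.web.bind.annotation.RestController;

        import reactor.core.publisher.Flux;
        import reactor.core.publisher.Mono;

        // Illustrative only: Quote and QuoteReactiveRepository stand in for an
        // entity and a ReactiveCrudRepository defined in the project.
        @RestController
        public class QuoteCountController {

            private final QuoteReactiveRepository repository;

            public QuoteCountController(QuoteReactiveRepository repository) {
                this.repository = repository;
            }

            // The client fetches the total first, then streams the items and can
            // derive "n of total" progress (or page boundaries) on its own side.
            @GetMapping("/quotes/count")
            public Mono<Long> getQuoteCount() {
                return repository.count(); // count() is inherited from ReactiveCrudRepository
            }

            @GetMapping("/quotes")
            public Flux<Quote> getQuotes() {
                return repository.findAll();
            }
        }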
