
RabbitMQ queue and exchange setup #161

Closed
ilijaNL opened this issue Nov 14, 2021 · 8 comments

ilijaNL commented Nov 14, 2021

Hello, I just found this repo, and it's what the Node.js (TypeScript) ecosystem is currently missing for EDA. However, I have a few questions about the RabbitMQ implementation.

Looking at the implementation, the RabbitMQ transport creates one exchange per message type and one queue per service. I wonder why this setup was chosen rather than a queue per message type per service. E.g.:

Service 1 has two handlers (for events A and B) and service 2 has handlers for events A and C, which currently results in two queues. Why not create service1.A, service1.B, service2.A and service2.C queues instead? This would also make it easier to deal with the downside of the current approach: a failed message is requeued at the end of the service queue.
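The proposed naming scheme can be sketched as a small topology plan in plain TypeScript. The queue names (service1.A etc.) come from the example above; the fanout-exchange-per-message-type part reflects how the transport is described in this thread, and the function itself is just an illustration, not this library's API:

```typescript
// Sketch: build a declaration plan for the proposed queue-per-handler topology.
// One fanout exchange per message type (as the transport already does), but one
// queue per (service, message type) pair instead of one queue per service.
type Binding = { exchange: string; queue: string };

function buildTopology(services: Record<string, string[]>) {
  const exchanges = new Set<string>();
  const queues: string[] = [];
  const bindings: Binding[] = [];
  for (const [service, messageTypes] of Object.entries(services)) {
    for (const type of messageTypes) {
      exchanges.add(type);                // one fanout exchange per message type
      const queue = `${service}.${type}`; // e.g. "service1.A"
      queues.push(queue);
      bindings.push({ exchange: type, queue });
    }
  }
  return { exchanges: [...exchanges].sort(), queues, bindings };
}

// The example from the comment: service1 handles A and B, service2 handles A and C.
const plan = buildTopology({ service1: ["A", "B"], service2: ["A", "C"] });
console.log(plan.queues); // ["service1.A", "service1.B", "service2.A", "service2.C"]
```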

adenhertog (Contributor) commented Nov 14, 2021

hey @ilijaNL!

A queue per message type per service would look something like this:

[image: queue-per-message-type topology diagram]

If that were the case, there'd be a few limitations that make it impractical for most use cases:

  • The service needs to open one channel per queue to read from it.
    • If you want to enforce a concurrency limit (i.e. only process 1-n messages concurrently), then the service needs to round-robin read from each queue. This can lead to starvation of larger queues, as newer messages in shorter queues will be processed before older messages in longer queues.
    • If you want to read all queues fairly, then you open all channels at once. At that point you can't enforce concurrency, so you could easily end up saturating the service, especially with a large number of queues.
  • Message order isn't adhered to, since messages in shorter queues will go through before older messages in longer queues. Normally this is dealt with using retries (eventually the older message is processed, providing the state that allows the newer message to succeed). However, if a backlog prevents the older message from being processed, the newer message could exhaust its retries and be dumped into the DLQ.
  • I believe you'd still need the same message retry behaviour, as a queue per message type per service doesn't provide a way to count handling attempts of a message.
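The ordering point above can be seen with a toy simulation (plain TypeScript, no broker; the queue contents and the one-message-per-queue-per-turn policy are made up for illustration):

```typescript
// Sketch: why round-robin reads across per-type queues break ordering.
// In-memory stand-ins for two queues; the numbers are enqueue timestamps.
type Msg = { type: string; enqueuedAt: number };

function roundRobinDrain(queues: Msg[][]): Msg[] {
  const processed: Msg[] = [];
  while (queues.some((q) => q.length > 0)) {
    for (const q of queues) {
      const msg = q.shift(); // take at most one message per queue per turn
      if (msg) processed.push(msg);
    }
  }
  return processed;
}

// A long backlog of A-messages and one much newer B-message.
const queueA: Msg[] = [1, 2, 3, 4].map((t) => ({ type: "A", enqueuedAt: t }));
const queueB: Msg[] = [{ type: "B", enqueuedAt: 99 }];

const order = roundRobinDrain([queueA, queueB]).map((m) => m.enqueuedAt);
console.log(order); // [1, 99, 2, 3, 4]: the newer B message jumps ahead of older A messages
```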

ilijaNL (Author) commented Nov 15, 2021


Thanks for your reply.

  1. I wonder how you define concurrency. If you want concurrency at the service level, you are right: that can only be achieved with one queue per service. But if you want concurrency at the handler level, another approach is needed.
  2. It depends how, but every handler (per service) should get an equal chance.
  3. It doesn't; you still need some (dynamic?) retry queues. However, when the message comes back to the queue, it wouldn't be blocked by other handlers' messages.

I made a small RabbitMQ playground showing what I mean by queue per handler per service:
[image: queue-per-handler-per-service topology]

which can be reproduced here: https://tryrabbitmq.com/

adenhertog (Contributor) commented

I'm struggling to understand what use case this would solve. Perhaps an example would illustrate what needs to be achieved and how the queue-per-message-type approach would solve it?

I can share the perspective of how the transports are implemented -

Services can be written from a DDD perspective and handle messages from multi-domain, single-domain or even just a single aggregate root. If we're just talking about a single aggroot, like say a product order, the message stream might be:

  • PlaceOrder
  • PayOrder
  • ConfirmOrder
  • FulfilOrder
  • CloseOrder

If this is processed using a single queue then all messages will be processed in order. This will be the case even if there's a service queue backlog. This is also the case if the messages arrive immediately after one-another and there are multiple instances of the service processing the queue.

Contrast this with a queue-per-message-type. If a PayOrder arrives before a PlaceOrder has been processed (because there's a huge backlog in that queue), there's a good chance that it'll get handled, throw an error, and retry until it fails into the DLQ.
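That failure mode can be sketched as a plain TypeScript stand-in (no broker; the handler logic and `maxAttempts` value are illustrative, not this library's implementation):

```typescript
// Sketch: PayOrder is delivered before PlaceOrder has been handled, fails
// every attempt, and is dead-lettered once its retries are exhausted.
const maxAttempts = 3; // illustrative retry limit
const state = { placed: false };
const dlq: string[] = [];

function handle(message: string): void {
  if (message === "PlaceOrder") {
    state.placed = true;
    return;
  }
  if (message === "PayOrder" && !state.placed) {
    throw new Error("cannot pay an order that was never placed");
  }
}

function deliverWithRetries(message: string): void {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      handle(message);
      return; // handled successfully
    } catch {
      // retry on next loop iteration
    }
  }
  dlq.push(message); // retries exhausted: dead-letter the message
}

// PayOrder arrives first because its queue is short; PlaceOrder is stuck in a backlog.
deliverWithRetries("PayOrder");
console.log(dlq); // ["PayOrder"]: dead-lettered before PlaceOrder ever ran
```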

ilijaNL (Author) commented Nov 17, 2021

Thanks for your response. You are correct when we talk about commands: commands should indeed arrive and be processed in order, hence one queue. However, in most cases you never dispatch many related commands at a time. Looking at your example, you need some choreography/orchestration process: for example, you start with a PlaceOrder command, an OrderPlaced event is published, and after that the PayOrder command is sent, and so forth.

Now let's talk about publishing events. A big drawback of having one queue per service is that it blocks everything unrelated in the same queue. For example:
Let's say I have a service called billing with two event handlers: OrderPlaced and OrderReturned. Suppose many OrderPlaced events are dispatched by some other service and the OrderPlaced handler has a long processing time (e.g. I/O), so new OrderPlaced events arrive faster than they are processed. Now an OrderReturned event comes in. It will take unnecessarily long to reach the front of the queue and be processed, and perhaps because of that will block other workflows. If the OrderReturned handler then fails, the event is put at the end of the queue again. Additionally, the queue can become unnecessarily large, and RabbitMQ is easier to shard with many small queues than with one large queue.
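The head-of-line blocking described here can be put in rough numbers with a toy TypeScript model (the processing times and backlog size are illustrative assumptions):

```typescript
// Sketch: a shared service queue makes OrderReturned wait behind every queued
// OrderPlaced, while a dedicated queue would process it immediately.
const slowHandlerMs = 500; // assumed: OrderPlaced does slow I/O
const fastHandlerMs = 10;  // assumed: OrderReturned is quick

function waitInSharedQueue(backlog: number): number {
  // FIFO: OrderReturned sits behind `backlog` OrderPlaced messages.
  return backlog * slowHandlerMs + fastHandlerMs;
}

function waitInOwnQueue(): number {
  return fastHandlerMs; // nothing ahead of it in a queue-per-handler setup
}

console.log(waitInSharedQueue(100)); // 50010 ms stuck behind the backlog
console.log(waitInOwnQueue());       // 10 ms with a dedicated queue
```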

With queues per handler per service for events, I don't see any drawbacks, only benefits. Perhaps you could give me an example where event order (as opposed to command order) does matter?

adenhertog (Contributor) commented

Thanks for the example. I totally agree with what you said here:

A big drawback with having one queue per service is that it blocks all not related in the same queue

How you decide to group handlers into services really depends on your application. NServiceBus recommends the fewer message handlers per service the better.

Personally, I've found it useful to have a dedicated service & queue just for workflow orchestration. This avoids the issue of message backlogs delaying "next steps". Beyond that, I might start with one service per domain and if a message type is causing delays then it can be shaved off into a dedicated service and scaled independently.

If you find that a queue-per-handler with multiple handlers per service is best for you, you should be able to start multiple instances of the bus - each with a single handler. I haven't personally done this but I imagine it should be fine all things considered.
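The "multiple bus instances, each with a single handler" idea can be sketched as a simple config split. The config shape and the queue-naming convention below are assumptions for illustration, not this library's actual API:

```typescript
// Sketch: instead of one bus instance owning every handler (and one shared
// queue), split the handlers so each instance owns exactly one handler and a
// dedicated queue, giving queue-per-handler semantics.
type InstanceConfig = { queueName: string; handlers: string[] };

function splitPerHandler(service: string, handlers: string[]): InstanceConfig[] {
  return handlers.map((h) => ({
    queueName: `${service}.${h}`, // dedicated queue per handler (assumed naming)
    handlers: [h],                // exactly one handler per instance
  }));
}

// The billing example from earlier in the thread.
const instances = splitPerHandler("billing", ["OrderPlaced", "OrderReturned"]);
console.log(instances.map((i) => i.queueName)); // ["billing.OrderPlaced", "billing.OrderReturned"]
```

Each resulting config would then be used to start its own bus instance, so a slow OrderPlaced handler no longer delays OrderReturned.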

ilijaNL (Author) commented Nov 18, 2021

Thanks for the reply! Yes, starting multiple instances is a possibility; I hadn't thought of that, thanks! Speaking of NServiceBus: is the exchange and queue setup comparable to NServiceBus's RabbitMQ transport implementation?

adenhertog (Contributor) commented

It's the same fanout pub/sub model as in NServiceBus, though from memory their implementation uses a database to polyfill some RabbitMQ limitations (like retry backoffs) that this library doesn't yet have.

ilijaNL closed this as completed Nov 23, 2021
ilijaNL (Author) commented Nov 23, 2021

Thanks for your explanation. I will try this package to see whether it can be used for messaging in our system.
