In recent years, I have worked on several projects that require combining on-premises services with cloud-based SaaS solutions, particularly when integrating with Dataverse using Azure. However, some developers and architects find it challenging to understand the difference between event-driven and messaging systems, particularly when it comes to choosing between Event Hubs and Service Bus in Azure. For SaaS solutions like Dataverse, message brokers such as Azure Service Bus are typically the better choice. I will elaborate on this further below.
First, let’s understand the difference between event brokers (e.g., Azure Event Hubs, Kafka) and message brokers (e.g., Azure Service Bus, RabbitMQ).
Event Brokers vs. Message Brokers for SaaS
Enterprise messaging refers to the exchange of messages between different components of a distributed system. These messages can be sent synchronously or asynchronously, and they may be persisted for later processing. Enterprise messaging is often used in applications that require reliable, guaranteed delivery of messages, such as financial systems or order processing systems.
Event-driven architectures, on the other hand, are designed around the idea of reacting to events that occur within a system. Events can be thought of as notifications that something has happened, such as a change in state or the completion of a task. In an event-driven architecture, components are loosely coupled and communicate through events, rather than through direct messaging. Event-driven architectures are often used in systems that require real-time processing, such as IoT or streaming applications.
| Service | Purpose | Type | When to use |
|---|---|---|---|
| Event Grid | Reactive programming | Event distribution (discrete) | React to status changes |
| Event Hubs | Big data pipeline | Event streaming (series) | Telemetry and distributed data streaming |
| Service Bus | High-value enterprise messaging | Message | Order processing and financial transactions |
Azure Service Bus is better for long-running transactions. It works better with Dataverse and the Graph API for robust operations, such as inserting data into a large entity with enough plugins (business logic) to slow it down.
The first reason I want to point out is how Service Bus Function triggers process messages compared to Event Hubs Function triggers.
Azure Service Bus offers the concept of peek-lock:
“The Functions runtime receives a message in peek-lock mode. It calls Complete on the message if the function finishes successfully, or calls Abandon if the function fails. If the function runs longer than the peek-lock timeout, the lock is automatically renewed as long as the function is running.”
So why does peek-lock make a difference? Because of parallel execution: without the lock, another instance may pop in and try to process the same message.

The Service Bus trigger instance locks the message right after reading it (#1), stopping other instances from accessing it. After the code executes successfully, it releases the lock (#2) and removes the message from the queue.
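The peek-lock flow above can be illustrated with a minimal in-memory simulation. This is a conceptual sketch of the semantics only, not the real azure-servicebus SDK; the class and method names are mine:

```python
import threading

class PeekLockQueue:
    """Minimal simulation of Service Bus peek-lock semantics (conceptual sketch)."""

    def __init__(self):
        self._messages = {}   # message_id -> body
        self._leased = set()  # message ids currently locked by a consumer
        self._lock = threading.Lock()

    def send(self, message_id, body):
        with self._lock:
            self._messages[message_id] = body

    def receive(self):
        """Return (message_id, body) under a lock, or None if nothing is available."""
        with self._lock:
            for message_id, body in self._messages.items():
                if message_id not in self._leased:
                    self._leased.add(message_id)  # other instances now skip it (#1)
                    return message_id, body
        return None

    def complete(self, message_id):
        """Function succeeded: remove the message for good (#2)."""
        with self._lock:
            self._leased.discard(message_id)
            self._messages.pop(message_id, None)

    def abandon(self, message_id):
        """Function failed: release the lock so another instance can retry."""
        with self._lock:
            self._leased.discard(message_id)
```

While one instance holds the lock, a second call to `receive` finds nothing to take, which is exactly what prevents the duplicate processing discussed next.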

Azure Event Hubs does not lock messages. If an Azure Function takes too long to complete the checkpoint (#1), another parallel instance may process the same event again, creating a duplicate.
This does not happen all the time for every event, but it gets worse when the destination system takes a considerable amount of time to complete the operation.
Retries and Persistence
Unlike Event Hubs, Service Bus has state and moves unprocessed messages to the dead-letter queue, providing more options to retry. Event Hubs has some retry support, but the maximum period allowed to keep an event in the hub is 7 days. If a consumer failure jams the flow, you may run out of time to process the stuck messages before they disappear. Event Hubs is designed for streaming to consumers that can ingest high throughput quickly; it can overwhelm Dataverse.
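The dead-letter behaviour can be sketched roughly as below. The delivery-count loop and the list standing in for the dead-letter queue are simplifications of what Service Bus manages for you (the real default maximum delivery count is 10; it is kept small here for the demo):

```python
MAX_DELIVERY_COUNT = 3  # kept small for the demo; Service Bus defaults to 10

def process_with_dead_letter(message, handler, dead_letter_queue):
    """Retry the handler up to MAX_DELIVERY_COUNT times; if it keeps failing,
    park the message in the dead-letter queue instead of losing it."""
    last_error = None
    for _ in range(MAX_DELIVERY_COUNT):
        try:
            handler(message)
            return "completed"
        except Exception as err:
            last_error = err  # abandoned; the delivery count goes up
    # Delivery count exceeded: the message is preserved for later inspection/replay.
    dead_letter_queue.append({"message": message, "reason": str(last_error)})
    return "dead-lettered"
```

The point is that a failing message ends up parked somewhere durable, rather than silently expiring after the retention window as it would in Event Hubs.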
Event Hubs can be used, but…
As stated in the Event Hubs documentation: “Designing your functions for identical input should be the default approach taken when paired with the Event Hub trigger binding.”
If you have no other choice but to use Azure Event Hubs to push data to Dataverse, you need to add something to stop duplicates, because they will happen, especially with large entities and long business logic.
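One common way to stop duplicates is an idempotent consumer that records which event ids it has already handled. A minimal sketch, assuming each event carries a stable id; in production the seen-id store would need to be durable and shared (for example a Dataverse alternate key or a cache), not an in-memory set, and the sink callback here is hypothetical:

```python
class IdempotentConsumer:
    """Skip events that were already processed, keyed by a stable event id."""

    def __init__(self, write_to_dataverse):
        self._seen_ids = set()            # would be durable storage in production
        self._write = write_to_dataverse  # hypothetical sink callback

    def handle(self, event_id, payload):
        if event_id in self._seen_ids:
            return False  # duplicate delivery: ignore it
        self._write(payload)
        self._seen_ids.add(event_id)
        return True
```

With this in place, a re-delivered event after a slow checkpoint is simply discarded instead of creating a duplicate row.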
Distributing the load with Batches
If you check the settings available for both Azure Service Bus and Event Hubs, you will find batches. Batching in Event Hubs, however, is not quite the same thing.
Service Bus will pick a batch with the number of messages you specify and can be configured to wait a given amount of time before picking the next batch. Event Hubs does not pick messages: the consumer is a listener that is always receiving a broadcast, and the batch setting only tells the trigger to divide the load into batches once it has been received.
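The contrast can be sketched as two small functions: a pull-style batch receive (the Service Bus model) versus chunking a load that has already arrived (the Event Hubs trigger model). Both are conceptual sketches with made-up names, not real SDK calls:

```python
import time

def receive_batch(pull_one, max_messages, max_wait_seconds):
    """Service Bus style: actively pull up to max_messages, waiting at most
    max_wait_seconds before handing the batch to the function."""
    batch = []
    deadline = time.monotonic() + max_wait_seconds
    while len(batch) < max_messages and time.monotonic() < deadline:
        message = pull_one()  # returns a message, or None if the queue is empty
        if message is None:
            time.sleep(0.01)  # nothing available yet; keep waiting
            continue
        batch.append(message)
    return batch

def split_into_batches(events, max_batch_size):
    """Event Hubs trigger style: the events are already received as a stream;
    the batch setting only controls how the load is chunked for processing."""
    return [events[i:i + max_batch_size]
            for i in range(0, len(events), max_batch_size)]
```

The pull model gives you a throttle (batch size plus wait time) that protects a slow destination like Dataverse; the chunking model only shapes work that has already landed on the consumer.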
Conclusion
Before selecting between an event broker and a message broker, it is important to understand the destination system (the consumer). Event brokers work better for pushing data straight into data storage without long transaction layers in between: databases, data lakes, or anything that processes fast. You can still consider one, especially if the system needs to process events in real or near-real time; just bear in mind that you will need to make sure the destination system caters for idempotency.
Service Bus is a better option for pushing data to applications that have long business logic on top of the data storage or database. It not only handles the integration better but also offers possibilities like FIFO (first in, first out) ordering and the ability to control message flow through batching.