Windows Service Bus 1.0

Håkan Fröling

What it is not:

A service bus

Distributed


What it is:

Message broker

Queues and Topics

Like Azure Service Bus,

but without relaying

Microsoft defines it:

The Service Bus for Windows Server is a set of installable components that provides the messaging capabilities of the Windows Azure Service Bus on Windows Server. 


The purpose of the Service Bus for Windows Server is to provide similar capabilities across Windows Azure and Windows Server, and to enable flexibility in developing and deploying applications. It is built on the same architecture as the Service Bus cloud service and provides scale and resiliency capabilities. 


The programming model, Visual Studio support, and APIs exposed for developing applications are symmetric to those for the cloud service, making it easier to develop applications for either and to switch between the two. Going forward, the experience for managing entities on the Windows Azure Management Portal will be consistent across the on-premises and cloud versions.

Comparing Service Bus for Windows Server with Windows Azure Service Bus

Although symmetry exists between Service Bus for Windows Server and Windows Azure Service Bus in APIs and messaging features, there are differences between the two Service Bus products.

Management

With respect to manageability, in a hosted Platform As A Service (Windows Azure) environment, the PaaS vendor (Microsoft) provides the management. With the Service Bus for Windows Server, the local administrator deploys, secures, scales, and monitors the Service Bus for Windows Server farm. 

Claims-based security

In both Windows Azure and Windows Server, the Service Bus requires access tokens for authorizing access to its messaging entities. 

Because the Windows Azure Access Control Service (ACS) is not available on Windows Server, the Service Bus for Windows Server includes a simple Service Bus Security Token Service (SBSTS) integrated with the Windows security model. 

The SBSTS can issue Simple Web Tokens (SWTs) based on Windows identities (stored in the local Windows identity store or in Active Directory).
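The SWT format itself is simple: form-encoded name/value pairs, followed by an HMACSHA256 claim that signs the rest of the token. A minimal sketch in Python of issuing and verifying such a token (the claim names and signing key here are illustrative, not the actual SBSTS output):

```python
import base64
import hashlib
import hmac
from urllib.parse import urlencode, parse_qsl

def issue_swt(claims: dict, key: bytes) -> str:
    # An SWT is a form-encoded set of claims plus an HMACSHA256
    # signature computed over the unsigned token string.
    unsigned = urlencode(sorted(claims.items()))
    digest = hmac.new(key, unsigned.encode("ascii"), hashlib.sha256).digest()
    signature = base64.b64encode(digest).decode("ascii")
    return unsigned + "&" + urlencode({"HMACSHA256": signature})

def verify_swt(token: str, key: bytes) -> bool:
    # Split off the trailing HMACSHA256 pair and recompute the
    # signature over everything before it.
    unsigned, _, _ = token.rpartition("&")
    claimed = dict(parse_qsl(token)).get("HMACSHA256", "")
    digest = hmac.new(key, unsigned.encode("ascii"), hashlib.sha256).digest()
    expected = base64.b64encode(digest).decode("ascii")
    return hmac.compare_digest(claimed, expected)

key = b"shared-symmetric-key"
token = issue_swt(
    {"Issuer": "sbsts", "Audience": "http://host/ns", "nameid": "DOMAIN\\user"},
    key,
)
```

Any party holding the shared key can verify that the claims were not tampered with in transit.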

Address

The addressing schema is fixed in the Windows Azure Service Bus. In other words, all the endpoints have the Service Bus suffix added to the URL.

With the Service Bus for Windows Server, you can use the fully qualified domain name (FQDN) of the hosts, or a mapped DNS entry representing your service.

Architecture


The stack

The Service Bus for Windows Server is built on 
  • Microsoft .NET Framework 4.0 PU3
  • Windows Server 2008 R2
  • SQL Server 2008 R2 and SQL Database
  • Windows PowerShell 3.0. 

All these platforms must run on a 64-bit operating system. The storage layer for the system (SQL) can be on a dedicated remote server, on one of the compute nodes, or in Windows Azure SQL Database. The compute nodes using this stack can be hosted either on-premises or on Windows Azure IaaS.

Availability and Scaling


Intra-Farm Communication Patterns

Because the Service Bus for Windows Server farm is “highly available”, there is inter-process communication that spans local and remote computers.

The Gateway

The Service Bus gateway process is a stateless service and can communicate with the message broker on local or remote machines within the farm.

The message broker

The Service Bus message broker process on every machine registers with the Windows Fabric service on the same machine. This registration indicates availability to host Windows Fabric services.

Windows Fabric

The Windows Fabric services on every machine communicate with each other to establish a “Highly Available Ring.”

Service Bus Gateway

All incoming requests from clients are first received and processed by the Service Bus Gateway. The protocol heads process the requests and perform the necessary authentication, authorization, and address resolution. The request is then forwarded to the appropriate message broker. In some cases, the client then communicates with the message broker directly for subsequent requests.





    Clients can use Net.TCP or REST over HTTP as the protocol for communication with the Service Bus for Windows Server server.

    The wire


The wire protocol is proprietary.
BUT
Microsoft says:
"AMQP 1.0 is a Standard (with a capital ‘S’)"

"AMQP 1.0 support was added to the Windows Azure Service Bus as a preview feature in October 2012. It is expected to transition to General Availability (GA) in the first half of 2013."

Service Bus for Windows Server is expected to support AMQP 1.0 in Q2-Q3 2013.

    Service Bus Message Broker


    The message broker is an NT service that hosts the protocol head and one or more message containers. The service registers itself with Windows Fabric and acts as a host for the message container service.



    The lifecycle of the NT service is the basis for failover detection and high availability.

    Message Container

You can view the message container as a logical collection of runtime (compute) logic backed by a persistent store (a SQL database). The message container is a modular service that can be moved between machines for failover or load balancing. Each message container is backed by its own store (SQL database) and hosts a set of queues, topics, and subscriptions as determined by a simple round-robin capacity management algorithm. Entities such as queues, topics, and subscriptions placed in a container cannot be moved; they remain with the container and its associated database. All state related to the message container is persisted in the database.
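The round-robin placement can be pictured with a toy sketch (the container and entity names below are hypothetical; in the real product each container is backed by its own SQL database):

```python
from itertools import cycle

# Hypothetical message containers; each would be backed by its own
# SQL database in a real farm.
containers = cycle(["ContainerA", "ContainerB", "ContainerC"])
placement = {}

for entity in ["orders-queue", "billing-topic", "audit-queue", "alerts-topic"]:
    # Round-robin placement: once assigned, an entity never moves
    # to another container (it stays with the container's database).
    placement[entity] = next(containers)
```

The fourth entity wraps around to the first container again, which is what keeps the load roughly even as entities are created.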

    Windows Fabric

This process contains the core logic necessary for high availability, farm and cluster formation, and load balancing across machines in the farm. The messaging broker service on each machine registers with the Windows Fabric process on its respective machine. Windows Fabric determines the number of registered instances of the message container service and places them on the various machines in the farm based on a simple load balancing algorithm.

    Use Case

Brokered messaging can be thought of as asynchronous, or “temporally decoupled.” Producers (senders) and consumers (receivers) do not have to be online at the same time. The messaging infrastructure reliably stores messages until the consuming party is ready to receive them. This allows the components of the distributed application to be disconnected, either voluntarily (for example, for maintenance) or due to a component crash, without affecting the whole system. Furthermore, the receiving application may only need to come online during certain times of the day, such as an inventory management system that is only required to run at the end of the business day.

    Task Queues

    By far the most common queueing pattern. Producers write messages to queues and don't wait for a reply. Consumers listen to the queue and process messages.
    Useful for: 
    • Load leveling
    • Performing work in the background that users don't need immediately 
    • Transparently scaling work (just add more consumers)
    Common implementation semantics: 
    • Consumers receive messages in FIFO order
    • Messages are redelivered if the consumer doesn't explicitly ack or delete them
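Those two semantics, FIFO delivery plus redelivery on a missed ack, can be sketched as a toy peek-lock queue. This is an in-memory illustration of the pattern, not the actual Service Bus API:

```python
from collections import deque

class TaskQueue:
    """Toy peek-lock queue: a received message stays invisible until
    it is completed (acked) or its lock expires, at which point it
    becomes visible again and is redelivered."""

    def __init__(self, lock_duration=30.0):
        self.lock_duration = lock_duration
        self._ready = deque()   # FIFO: (message id, body)
        self._locked = {}       # message id -> (body, lock expiry)
        self._next_id = 0

    def send(self, body):
        self._next_id += 1
        self._ready.append((self._next_id, body))

    def receive(self, now):
        # An expired lock means the consumer never acked: redeliver.
        for mid in [m for m, (_, exp) in self._locked.items() if exp <= now]:
            body, _ = self._locked.pop(mid)
            self._ready.appendleft((mid, body))
        if not self._ready:
            return None
        mid, body = self._ready.popleft()
        self._locked[mid] = (body, now + self.lock_duration)
        return mid, body

    def complete(self, mid):
        # Ack: remove the message for good.
        self._locked.pop(mid, None)
```

Adding consumers scales the work transparently because each `receive` hands out a different locked message.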

    Delayed Jobs

    Typically a minor variant on the job queue. Producers can specify that the message delivery be delayed.
    Useful for: 
• Retrying integration with a remote system. Example: You write a job to bill customers through PayPal at the end of each month. If PayPal is unreachable, you requeue the task with a 24-hour delay.
• Deferred state cleanup. You allocate a resource and queue a delayed message that tells a consumer to clean up the state in a few hours.
• Grace periods. Example: You want a customer to be able to cancel sending an email or placing an order for up to x minutes. You enqueue the job with a delay. If the customer cancels within the grace period, you simply delete the message from the queue.
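The delayed variant only changes when a message becomes visible. A minimal sketch, using an explicit clock so the behavior is deterministic (again an in-memory illustration, not the product API):

```python
import heapq

class DelayedQueue:
    """Toy delayed-delivery queue: each message carries an
    'available at' timestamp and stays invisible until then."""

    def __init__(self):
        self._heap = []   # (available at, sequence, body)
        self._seq = 0

    def send(self, body, now, delay=0.0):
        self._seq += 1    # sequence preserves FIFO among equal times
        heapq.heappush(self._heap, (now + delay, self._seq, body))

    def receive(self, now):
        if self._heap and self._heap[0][0] <= now:
            return heapq.heappop(self._heap)[2]
        return None

q = DelayedQueue()
q.send("bill-customer", now=0, delay=24 * 3600)  # PayPal down: retry tomorrow
q.send("welcome-mail", now=0)                    # deliver immediately
```

A grace-period cancel would simply remove the entry from the heap before its delay elapses.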

    Fan-out

    Fanout destinations are similar to email aliases. The destination that producers write messages to is an alias that maps to one or more child queues. When the broker receives a new message it delivers that message to each child queue. Consumers dequeue from the child queues, not from the fanout destination.

    Useful for: 
• Event notification. When an event occurs, the producer sends a message. Subsystems that require notification are added to the fanout configuration, and each receives a copy of the message.
    • Logging
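The copy-per-child-queue behavior can be sketched as a toy topic with subscriptions (illustrative only; in Service Bus this corresponds to a topic with subscriptions, but the class below is not the product API):

```python
from collections import deque

class FanoutTopic:
    """Toy fan-out alias: producers send to the topic, the broker
    copies each message into every subscribed child queue, and
    consumers dequeue from their own child queue."""

    def __init__(self):
        self._subscriptions = {}

    def subscribe(self, name):
        self._subscriptions[name] = deque()

    def send(self, body):
        for child_queue in self._subscriptions.values():
            child_queue.append(body)   # one independent copy per subscription

    def receive(self, name):
        child_queue = self._subscriptions[name]
        return child_queue.popleft() if child_queue else None

topic = FanoutTopic()
topic.subscribe("billing")
topic.subscribe("audit-log")
topic.send("order-placed")
```

Each subscriber consumes its copy at its own pace; one slow consumer never starves another.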

    Fan-in

Fan-in is the inverse pattern: it takes multiple channels as inputs and watches each of them, merging any observed message into a single output channel.
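A minimal sketch of fan-in using standard-library queues, with one forwarding thread per input channel (a generic illustration of the pattern, not Service Bus code):

```python
import queue
import threading

def fan_in(*channels):
    """Merge several input queues into one output queue: a pump
    thread per channel forwards every message it observes.
    A None sentinel closes a channel."""
    merged = queue.Queue()

    def pump(channel):
        while True:
            msg = channel.get()
            if msg is None:       # sentinel: stop watching this channel
                break
            merged.put(msg)

    for channel in channels:
        threading.Thread(target=pump, args=(channel,), daemon=True).start()
    return merged

a, b = queue.Queue(), queue.Queue()
merged = fan_in(a, b)
```

The consumer reads only from `merged` and never needs to know how many producers exist upstream.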

    Demo


    Useful information


    Claims


    MSDN

    http://msdn.microsoft.com/en-us/library/jj656647.aspx
    http://pluralsight.com search for Service Bus


Messages fan in, messages fan out, and I'm a fan of messaging

    Thank you for listening
