Wednesday, December 18, 2013

Starting your Azure project

Is this your project approach?

Azure Project X == Azure Subscription X
Azure Project X budget == billing alert on Azure Subscription X
Azure Project X monitoring == Azure monitoring set up with SCOM
Azure Project X DEV team == prepared to support the application after Go-Live?

Azure Subscriptions
http://blog.kloud.com.au/2013/07/30/good-practices-for-manag…

Azure Billing
http://msdn.microsoft.com/en-us/library/windowsazure/dn47977…

Azure SCOM
http://blogs.technet.com/b/dcaro/archive/2012/05/03/how-to-monitor-your-windows-azure-application-with-system-center-2012-part-2.aspx

Regards,

Sander

Friday, December 13, 2013

The Enterprise Continuum – separation of concerns

There are so many options and ways of developing a solution that I would like to share some of the guidelines we are developing internally. For the moment, I will do this on the blog you are reading, focusing on the solution architectures. In parallel I’m working for my company on the enterprise architecture guidelines, trying to follow TOGAF principles to lay down the architecture. I hope to be able to define the architecture context and the general architecture, and relate them to the solution architecture.

In a perfect world, with all the time to do this properly, this should result in architectural concepts, which will be posted on http://theenterprisecontinuum.blogspot.nl/ from a top-down perspective of setting architectural requirements (such as ‘every project must leverage a monitoring capability’), while on this blog, http://snefs.blogspot.com, I will post the solution for each concept.

The first post (which is basically the same as this):

http://theenterprisecontinuum.blogspot.nl/2013/12/hi-there-what-can-you-expect-on-this.html

 

Cheers,

Sander

Sunday, December 08, 2013

Azure Service Bus – Error handling strategy

At this moment there are several ways to build exciting new applications. In several projects we are using a hybrid/cloud architecture, specifically Windows Azure. In my upcoming posts I would like to share some of the guidelines we are developing internally, in this case specifically a way of handling errors in Azure queues and topic subscriptions.

A lot of Azure (integration) architectures, and even communication between web and worker roles, will likely use elements of the Azure Service Bus or Azure queues. Going through the different architectures is not part of this post, so I will keep it to the overview given in the Service Bus Deep Dive presentation.


Within our company, Caesar, several internal systems have been built and, where possible, purchased. One of them, CRM 4.0, was outdated and no longer suited all our requirements (online accessibility among them). We decided to migrate our CRM system to the cloud, using Dynamics CRM. As not all systems have been migrated yet, and we are still analyzing the requirements and alternatives, we needed a solution for updating the internal systems that use CRM information.

As Dynamics CRM provides the means to push updates to Windows Azure, we have implemented the following solution (a provisioning sketch follows the list):

·         Dynamics CRM sends Contacts to the Azure Service Bus topic ‘Contacts’

o   For each consuming system, we have a subscription (e.g. contacts-systemA)

·         Dynamics CRM sends Accounts to the Azure Service Bus topic ‘Accounts’

·         An internal Windows service picks up messages from the subscriptions and sends them to the LOB systems
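To make this concrete, the topics and per-system subscriptions can be provisioned up front with the NamespaceManager from the Microsoft.ServiceBus SDK. The sketch below is illustrative only: the connection string and system names are placeholders, not our actual configuration.

using Microsoft.ServiceBus;

class TopologySetup
{
    static void Main()
    {
        // Connection string of the Service Bus namespace (placeholder).
        var connectionString = "Endpoint=sb://yournamespace.servicebus.windows.net/;SharedSecretIssuer=owner;SharedSecretValue=...";
        var manager = NamespaceManager.CreateFromConnectionString(connectionString);

        // One topic per entity type that Dynamics CRM publishes.
        foreach (var topic in new[] { "contacts", "accounts" })
        {
            if (!manager.TopicExists(topic))
                manager.CreateTopic(topic);

            // One subscription per consuming LOB system, e.g. contacts-systemA.
            foreach (var system in new[] { "systemA", "systemB" })
            {
                var subscriptionName = topic + "-" + system;
                if (!manager.SubscriptionExists(topic, subscriptionName))
                    manager.CreateSubscription(topic, subscriptionName);
            }
        }
    }
}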


 

This worked fine; however, sometimes we had a problem processing messages. After diving into the problem we identified that malformed messages (incomplete accounts/contacts) were being sent. Processing such a message caused an error, which led to an Abandon; the message would remain on the subscription and the same problem would eventually occur again. We had implemented a maximum-number-of-errors strategy, so ultimately the processing service would stop. Implementing error handling, transient fault handling and an email listener did not prevent anything; we did not know when an error would occur or what the error would be.

We stretched the capabilities of the CRM plugin and CRM configuration, which allow you to send all fields and perform validations. Still, several things can go wrong:

·         Technical errors

o   Transient faults – network hiccups or Azure updates that terminate connections; these can all be handled by implementing the EntLib Transient Fault Handling block

o   Environment configuration – the Azure topics/subscriptions have not been created in the environment; this can be prevented by using a strategy such as the one proposed in my earlier post

o   Management – the Azure storage account configuration is modified or removed; these risks can be minimized by implementing a solid Azure security policy (and not promoting everybody to co-administrator)

o   The server running the processing service is not available; this should be monitored and causes business issues, but due to the asynchronous setup of this architecture it does not cause any issues that are not solved by restarting the service

·         Functional errors

o   Entity consistency

§  Contacts/Accounts are not valid because not all mandatory fields are set; this can be resolved by managing the CRM plugin

o   Entity dependencies

§  A Contact insert is not processed in the internal system, so the subsequent Contact update will fail

§  An Account insert is not processed, so the relation with the account cannot be made and the Contact insert will fail

 

Some of these problems can be solved by implementing readily available frameworks and components; for the remaining errors, a strategy is in order. Let’s look at the problem in relation to the available operations. Message processing was implemented using the peek-lock model, in which a message is only settled by one of the following operations on the brokered message (a minimal receive loop illustrating these operations follows the list):

·         Complete (everything went fine)

·         Abandon (an error occurred while processing; the message becomes available again)

·         Defer (metadata can be added to the message so that it can be picked up at a later time)
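For reference, a minimal peek-lock receive loop against one of the subscriptions looks roughly like the sketch below (Microsoft.ServiceBus.Messaging; the entity names and connection string are placeholders, and the actual LOB call is left out).

using System;
using Microsoft.ServiceBus.Messaging;

class SubscriptionWorker
{
    static void Main()
    {
        var connectionString = "Endpoint=sb://yournamespace.servicebus.windows.net/;SharedSecretIssuer=owner;SharedSecretValue=...";
        var client = SubscriptionClient.CreateFromConnectionString(
            connectionString, "contacts", "contacts-systemA", ReceiveMode.PeekLock);

        while (true)
        {
            BrokeredMessage message = client.Receive(TimeSpan.FromSeconds(30));
            if (message == null)
                continue; // nothing available within the wait time

            try
            {
                // Push the contact/account to the LOB system here.
                message.Complete();   // everything went fine: remove the message
            }
            catch (Exception)
            {
                // An error occurred while processing: release the lock so the message
                // becomes available again. message.Defer() could be used instead to
                // park the message and retrieve it later by its sequence number.
                message.Abandon();
            }
        }
    }
}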


 

Will this solve a functional error? No!

So what we need is a strategy that allows messages to be stored in a location related to the queue or topic subscription, where they will not be processed (they are ‘dead’) and are queued for further investigation. Hence:

“All messages, which cannot be processed, are placed in the DeadLetter queue”
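A sketch of how this strategy can be applied in the processing service: a functional (non-transient) failure explicitly dead-letters the message instead of abandoning it, and the dead-letter queue of the subscription can later be read through its well-known path for investigation. The error classification and the entity names are assumptions for illustration.

using System;
using Microsoft.ServiceBus.Messaging;

static class DeadLetterStrategy
{
    // Called from the receive loop when processing fails.
    public static void Handle(BrokeredMessage message, Exception error, bool isTransient)
    {
        if (isTransient)
        {
            message.Abandon(); // will become visible again and be retried
        }
        else
        {
            // Functional error (e.g. mandatory field missing): park it for investigation.
            message.DeadLetter("FunctionalError", error.Message);
        }
    }

    // Reading the dead-letter messages of a subscription.
    public static void InspectDeadLetters(string connectionString)
    {
        var path = SubscriptionClient.FormatDeadLetterPath("contacts", "contacts-systemA");
        var receiver = MessagingFactory
            .CreateFromConnectionString(connectionString)
            .CreateMessageReceiver(path, ReceiveMode.PeekLock);

        BrokeredMessage dead;
        while ((dead = receiver.Receive(TimeSpan.FromSeconds(5))) != null)
        {
            Console.WriteLine("{0}: {1}",
                dead.Properties["DeadLetterReason"],
                dead.Properties["DeadLetterErrorDescription"]);
            dead.Complete(); // or resubmit to the topic after the data has been corrected
        }
    }
}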

 


 

This, however, poses several new challenges: what to do with the dead-letter messages and how to restart them. In the next post I will explain my effort to implement a monitoring solution by using and evaluating several existing frameworks and technologies.

 

To be continued….

 

HTH,

 

Sander Nefs

Sunday, December 01, 2013

Architecture - ISO/IEC/IEEE 42010:2011

After following the IASA Architecture Core course, I want to continue my personal learning and improvement and focus on my architectural skills, among other things. This year I followed a course on the Theory of Constraints, a really interesting theory that helps analyze the core issue behind a problem, and I attended the MetaPlan training, which provides a structured, goal-oriented way of brainstorming. For next year I have enrolled in a TOGAF training. In preparation, I stumbled upon the Open2Study website, where you can follow a lot of courses for free. This weekend I enrolled in the EntrArch course, which includes TOGAF, and one of its additional resources referred to a lot of very useful articles.

So after diving into a lot of them to learn more about architectural styles, frameworks and more, I can recommend the following:

TOGAF

A Comparison of the Top Four Enterprise-Architecture Methodologies

Survey of Architecture Frameworks 

 

Cheers,

Sander

Friday, November 29, 2013

BizTalk User Group NL 28-11-2013

On 28-11-2013 the BizTalk User Group (LinkedIn group BTUG NL) meeting took place in Amsterdam, organized by Estreme. The purpose of the BizTalk User Group is to hold regular meetings with members of the community on the topic of integration. Since Azure provides more and more integration capabilities, through the Azure Service Bus and the go-live of Windows Azure BizTalk Services (WABS), the meetings are diverse and very interesting.

As Azure is very broad, the BTUG focuses on the following elements of the Microsoft Integration stack:

  • On Premise (WCF/SSIS/BizTalk/Windows Server Service Bus etc)
  • Cloud - Windows Azure (Windows Azure BizTalk Services / Service Bus etc).

Announcements

  • An upcoming event in January is the BizTalk Saturday, focused on Windows Azure BizTalk Services
  • Next year, a BTUG Beach event will be organized, an informal community event
  • The next meeting will be held in March

Feedback BizTalk Summit - Steef-Jan Wiggers
Steef-Jan Wiggers provided a summary after attending the BizTalk Summit. It showed that BizTalk is here to stay, with an improved release cadence:

  • Annual cumulative updates
  • Platform updates every two years
  • Next year there will be a BizTalk 2013 R2
  • In 2015, a new version will be released
  • Improvements in the upcoming releases are in the areas of JSON support, HealthCare/SWIFT adapter additions and an updated Service Bus adapter

Windows Azure BizTalk Services is now live and can be used in production; it has been improved in the areas of monitoring, archiving, EDI support and management using PowerShell cmdlets.

KAS Integrator - Johan Vrielink

At KAS Bank, BizTalk has been implemented to handle transactions for stock exchanges. The KAS Integrator is a framework built on top of BizTalk that allows fully automated configuration of the environment. Several services are defined on top of BizTalk, along with a management portal that provides business rules and publish/subscribe configurations, which has some similarities with EDI partner management and was pretty interesting. It was great to have a customer with a clear vision and story presenting a session. It showed some typical demands in the market, such as automated configuration of middleware and the ability to minimize development effort for interfaces, and it gave great insight into how to think about challenges in future projects, e.g. by using PowerShell.

Integration Challenge : Custom Service Bus - Rob Kits
During the integration challenge, non-BizTalk products and solutions are shown and compared to BizTalk, which makes you think about integration in a broader sense: not every problem can be solved with a single tool. In this case it was a custom solution used at locations where gas is distributed. In that environment operators must be able to configure, adjust and monitor the environment themselves, and middleware such as BizTalk is too complex. The solution presented, based on PLC technology, was brilliant in its simplicity. It again showed that an integration solution must be based on needs and requirements, not on the potential features a product provides. I found that a nice analogy with cloud technology, where one of the advantages is that you pay for what you need, not necessarily for everything the technology can do.

Synchronous Service Bus - Martin Rienstra
BizTalk is not a golden hammer and certainly not suitable for all problems. At a client, about 80 interfaces were implemented in an intranet environment using a synchronous request-response pattern. As BizTalk is designed around guaranteed delivery with an asynchronous, polling-based pub/sub architecture, it is not designed for low-latency solutions. That does not mean BizTalk cannot handle them; it is possible by using separate hosts, scaling out and separating the databases, but due to the architecture, latency remains unpredictable.

The BizTalk product team has acknowledged this, stating that it is inherent to the BizTalk architecture and will not be resolved; this kind of issue should be addressed with different technologies.

Martin had previously looked at the service virtualization platform MSE (Microsoft Service Engine), but that product is no longer developed (in this space there is only Sentinet). The requirements were: configurable, manageable, and able to reuse the existing BizTalk maps. The solution consisted of an interesting mix of WCF custom behaviors that allow services to be generated dynamically from configuration, using BizTalk artifacts (maps, assemblies, etc.), with the great advantage that the existing BizTalk solution could be reused. The disadvantage is that the services have to run on the BizTalk machine because of the use of BizTalk artifacts.

Summary

A great event with very interesting content. In future meetings we can expect a lot of great integration challenges, and I’m trying to arrange a session in which one of my colleagues from Caesar will explain the differences between Sonic, BizTalk and Azure, as I’ve seen a lot of interesting things when comparing BizTalk and Sonic:

  • Sonic has the choice between durable subscriptions and non-durable ones (using queues), whereas BizTalk always uses durable subscriptions; in this context Azure provides both durable (topic subscriptions) and non-durable (queues) options
  • Routing can be done schema-based: Sonic does this without enforcing a schema, whereas BizTalk requires one
  • There are similarities in the logical and physical separation of concerns: Sonic works with an ESB container and broker concept, while BizTalk uses logical and physical ports
  • And more….

 

Great to see everyone and I hope a lot of events like this will follow.

 

Regards,

Sander

Friday, November 22, 2013

‘ETW2.0’ - High performance tracing using EntLib SLAB

Are you writing an application with high-performance requirements, wondering how Azure Diagnostics works, or looking to write your own logging framework? This might help you out.

Not so long ago, the Application Server Group ISV Partner Advisory Team posted an excellent article on how to instrument BizTalk applications in particular by leveraging the ETW infrastructure.

This allowed for significantly higher-performance tracing, as measurements against other logging frameworks showed.


In the latest EntLib releases, this has been included in the Semantic Logging Application Block (SLAB).

What’s really interesting is that there are two patterns you can implement:

1. In-process, where the listener runs inside the host application that produces the log events

2. Out-of-process, where the listener is a separate service outside of your application (most suitable for on-premises usage)

 

  • EventSource

With the Semantic Logging Application Block, the idea is that the logging infrastructure is predefined (e.g. the Start/Stop events that are logged) and that the application only provides the data/parameters to log. You need to create an EventSource that contains all the log events you would like to write. This means that, as with unit testing, you need to think before you build. A minimal example is sketched below.

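The screenshot with the original example is not reproduced here; as a substitute, a minimal EventSource could look like the sketch below (the event source name, event IDs and messages are illustrative, not the ones from the original post).

using System.Diagnostics.Tracing;

// The events and their meaning are defined up front; the application only supplies the parameters.
[EventSource(Name = "Company-Application-Processing")]
public sealed class ProcessingEventSource : EventSource
{
    public static readonly ProcessingEventSource Log = new ProcessingEventSource();

    [Event(1, Level = EventLevel.Informational, Message = "Processing of {0} started")]
    public void ProcessingStarted(string itemId)
    {
        if (IsEnabled()) WriteEvent(1, itemId);
    }

    [Event(2, Level = EventLevel.Error, Message = "Processing of {0} failed: {1}")]
    public void ProcessingFailed(string itemId, string error)
    {
        if (IsEnabled()) WriteEvent(2, itemId, error);
    }
}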

  • Sink

The great thing, and the reason I like this framework, is that you are able to create your own sinks and, thanks to the out-of-process model, even leverage sinks that are not high-performance themselves. Out of the box there are a number of sinks: SQL database, Windows Azure table storage, flat files and some others.

Example: a SQL Server sink

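The configuration screenshot is missing here; wiring an in-process listener to the SQL Server sink looks roughly like this (a sketch: the instance name and connection string are placeholders, and the SQL sink lives in the separate SLAB database sink NuGet package).

using System.Diagnostics.Tracing;
using Microsoft.Practices.EnterpriseLibrary.SemanticLogging;

class LoggingBootstrap
{
    static void Main()
    {
        // In-process: the listener subscribes directly to the event source.
        var listener = new ObservableEventListener();
        listener.EnableEvents(ProcessingEventSource.Log, EventLevel.LogAlways, EventKeywords.All);

        // Route the events to a SQL Server database.
        listener.LogToSqlDatabase(
            "MyInstance",
            @"Data Source=.;Initial Catalog=Logging;Integrated Security=True");

        ProcessingEventSource.Log.ProcessingStarted("order-42");
    }
}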


  • Result and extension points

Writing through the EventSource writes to the ETW infrastructure, which has almost no performance impact. The out-of-process listener picks up the events in a Windows service (which can also be downloaded from the EntLib download page), and the sink writes the data to the destination of your choice.

In a post by Tomasso Groenendijk, the option of using MongoDB is explained, with the idea of having a high-performance tracing mechanism. With SLAB the same functionality is available.

Additionally, writing large amounts of trace data to the database used by your primary process may be something you want to avoid, so creating a MongoDB sink is still a viable option, just for different purposes.

  • Getting started with SLAB

You can get started with SLAB quite easily via NuGet: search for ‘Semantic’, which will display the application block and the available sinks for Windows Azure table storage and SQL Server as well.


The hands-on labs and documentation should get you going quickly. As the EntLib settings can be configured outside of your code (recommended), diving into the EntLib configuration by hand might not be as much fun as you would expect. For this, there is an EntLib configuration tool available.

  • Getting started with EntLib Config

The following link contains an EntLib 6 configuration add-in that helps you create the configuration settings for some of the application blocks and for the Windows service used for out-of-process logging.

1. Select the configuration console


2. Click on the Config file and open the editor


3. Select the block and visually configure the block


4. Example TransientFaultHandling Config


5. Usage TransientFaultHandling Config

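The configuration screenshots are missing here; purely as an illustration, using the Transient Fault Handling block from code looks roughly like the sketch below (with an explicitly constructed policy and a hand-written detection strategy rather than the configuration created in step 4).

using System;
using Microsoft.Practices.EnterpriseLibrary.TransientFaultHandling;

// Decides which exceptions are considered transient; ready-made strategies are
// available in the accompanying detection-strategy packages.
class MyTransientErrorDetectionStrategy : ITransientErrorDetectionStrategy
{
    public bool IsTransient(Exception ex)
    {
        return ex is TimeoutException;
    }
}

class RetryExample
{
    static void Main()
    {
        // 5 attempts, starting at 1 second and growing by 2 seconds per retry.
        var strategy = new Incremental(5, TimeSpan.FromSeconds(1), TimeSpan.FromSeconds(2));
        var retryPolicy = new RetryPolicy<MyTransientErrorDetectionStrategy>(strategy);

        retryPolicy.Retrying += (sender, args) =>
            Console.WriteLine("Retry {0} after {1}: {2}",
                args.CurrentRetryCount, args.Delay, args.LastException.Message);

        retryPolicy.ExecuteAction(() =>
        {
            // The call that may fail transiently, e.g. sending a brokered message.
        });
    }
}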

 

 

 

Cheers,

Sander

Tuesday, October 15, 2013

Service bus management – here's a way

UPDATE: this post contains a lot of additional great suggestions
 
How do you create a consistent DTAP environment, which standards should you adhere to, what naming conventions should you apply, and how do you document them? All questions; the answers are a little cloudy. In this post I show how I use the Service Bus Explorer tool to my advantage for some of these questions.
·         Goal: create a consistent environment for DTAP
·         Challenges
1.       Naming conventions on Windows Azure (service bus namespaces / queues / topics, etc.)
2.       Isolation of environments, and thus billing
3.       Repeatable steps / configurable using a tool / XML configuration
 
#1: Naming conventions
Are they out there for Azure? This is a future subject for a Windows Azure live chat session.
·         My #1 rule: make sure the naming convention you come up with is applied consistently
·         An example convention such as <Customer><Project><Type><Artefact> (Contoso.Broker.Transform.FormatA2FormatB) would not translate well to an Azure queue or service bus namespace due to the long name, so we need to split it up into smaller pieces. One way could be:
1.       Create a subscription for the customer
2.       Start the entity name with the project
3.       The Azure portal already separates the artefact types, so pre- or postfixing them (sbbrokerorders / brokerqorderinqueue / brokerorderinqueue) would not improve maintainability; this is up to you.
 
#2: Isolated billing
The only way to really isolate billing is to create separate subscriptions. This is my recommendation anyway, as migrating subscriptions is something you do through a support call and is thus out of your control in the situations mentioned in my previous post.
Dedicated service bus namespaces show up as separate lines in the exported bill.


#3: Repeatable steps / configurable using a tool / XML configuration
My approach is shown below and is applied to service bus namespaces and queues. This is because the tool I’ve chosen is Service Bus Explorer; I’ve suggested this type of capability to several vendors and they were open to feedback, so who knows, this approach might become possible in other tools and portals.
Consider my list of service namespaces:
·         sbprojecttest
·         sbprojectdev

·         Download Service Bus Explorer
There is a precompiled version included in the release; you can also open the solution in Visual Studio and build it yourself.
 
 
·         Open ServiceBusExplorer.exe.config
Add the different namespaces you want to manage from Service Bus Explorer.

 
·         Start Service Bus Explorer and connect to a service bus namespace (from the config)
·         Connect to the environment


·         Create the queues we would like to deploy consistently

·         Export the entities
·         View the exported settings
This is the first step: the configuration is now exported as XML. We can view all the settings and use a tool or program to manipulate this configuration file, and we can add it to TFS, etc.
·         Connect to a different service bus namespace (e.g. sbprojecttest)
We can see that at this point, there are no queues….
·         We will import the entities using the configuration file
·         The import will create the queues with the specified settings in the configuration file
 
·         In Azure, we now have 2 service bus namespaces, with a consistent configuration

 
Note: yes, you can always use PowerShell. In my case I wanted an approach that I can easily explain to anyone capable of using a computer, without installing anything. Additionally, you need to create those scripts yourself, and as it stands there are differences between the on-premises Service Bus and the Azure Service Bus, so that didn’t work for my scenario. Also, I only needed to retrieve the service bus namespace connection string and configure it in Service Bus Explorer, and I didn’t have to retrieve the publish settings, etc.
Note 2: managing and using the queues should be separated (IMHO): use a tool like Service Bus Explorer with configuration files, or PowerShell, to create a consistent environment. If your code has to create a queue because it isn’t there, shouldn’t an alarm go off? Aren’t the queues durable and always available? I assume yes, and I like to monitor for this particular situation, as it’s probably a showstopper (a minimal check is sketched below).
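A minimal sketch of that idea: at start-up, the processing code only verifies that the expected entities exist and raises an alert when they don’t, instead of creating them on the fly (the queue names and connection string are placeholders).

using System;
using Microsoft.ServiceBus;

class EnvironmentCheck
{
    static void Main()
    {
        var connectionString = "Endpoint=sb://sbprojectdev.servicebus.windows.net/;SharedSecretIssuer=owner;SharedSecretValue=...";
        var manager = NamespaceManager.CreateFromConnectionString(connectionString);

        // Queues that the imported Service Bus Explorer configuration should have created.
        foreach (var queue in new[] { "projectorderin", "projectorderout" })
        {
            if (!manager.QueueExists(queue))
            {
                // Treat this as a deployment/management problem: alert, don't create.
                Console.Error.WriteLine("Queue '{0}' is missing - raise an alert instead of creating it here.", queue);
            }
        }
    }
}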
 
Note 3: you can also create a dedicated subscription for DEV/TEST/ACC, each with its own service bus namespace (depending on how much you want to manage). For production it is best to always create a dedicated subscription.

HTH,
Sander

Tuesday, October 08, 2013

Exposing a REST endpoint (POST) that processes XML

For my project I needed to expose an endpoint that accepts an HTTP POST with XML as input and output. As I like to start small, I began with a small test project in which I wanted to learn how to expose such an endpoint, how to set everything up and what the options are.

I did this because there are a lot of resources on all sorts of specific issues, but the ones I faced were scattered across several blog posts.

So for my project I developed a WCF service, created as a WCF web role so that I can also publish it to Azure.

·         Create a new WCF Webrole

o   Create a new Cloud service


o   Create a new web role


o   Which results in the default web role project structure


·         Configure the endpoint to use WCF-WebHTTP and allow a help page

At this point you have a WCF service that does not yet expose a WCF-WebHttp endpoint. We can fix this by:

o   Simply changing the Web.config (sketched below)

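The Web.config screenshots are not available here; the relevant system.serviceModel section looks roughly like the sketch below (the service and contract names mirror the default WCF web role template and are placeholders for your own).

<system.serviceModel>
  <services>
    <service name="WCFServiceWebRole1.Service1">
      <endpoint address=""
                binding="webHttpBinding"
                contract="WCFServiceWebRole1.IService1"
                behaviorConfiguration="restBehavior" />
    </service>
  </services>
  <behaviors>
    <endpointBehaviors>
      <behavior name="restBehavior">
        <!-- enables the WCF-WebHttp programming model and the help page -->
        <webHttp helpEnabled="true" />
      </behavior>
    </endpointBehaviors>
  </behaviors>
</system.serviceModel>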

 

o   Adding a method that accepts XML and decorating it with the ‘WebInvoke’ attribute (sketched below)

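The code screenshot is missing; a contract with such a method could look like the sketch below (operation and type names are placeholders, with CompositeType mirroring the default template).

using System.Runtime.Serialization;
using System.ServiceModel;
using System.ServiceModel.Web;

[ServiceContract]
public interface IService1
{
    // POST .../Service1.svc/SubmitData with an XML body that deserializes into CompositeType.
    [OperationContract]
    [WebInvoke(Method = "POST",
               UriTemplate = "SubmitData",
               RequestFormat = WebMessageFormat.Xml,
               ResponseFormat = WebMessageFormat.Xml)]
    CompositeType SubmitData(CompositeType value);
}

[DataContract]
public class CompositeType
{
    [DataMember]
    public bool BoolValue { get; set; }

    [DataMember]
    public string StringValue { get; set; }
}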

 

·         View the help page

After deploying the web service, we can navigate to the help page.


·         Now we can view specific methods, and for each method the parameters are shown

I want to look into each specific option…

DataContract with CompositeType

This allows you to specify a custom type that is serialized into XML; it can be used and tested with the WCF Test Client and is quite straightforward.


We can use Fiddler to test this by setting the header Content-Type: text/xml.


Primitive types

Primitive types can be processed using the formats supported out of the box, which are JSON and XML.


The JSON format supports sending plain data; we can test this in Fiddler by setting Content-Type: application/json.


We see, however, that the response is in XML format, so when using JSON we need to explicitly set the response format.

We can also see (although I’m not the single source of truth) that the XML format does not support sending plain data: it uses a serializer and thus expects the input wrapped in an XML element.

So, according to the help page, a request must be submitted with the value wrapped in an XML element.


A test using the XML format can be done in Fiddler by setting Content-Type: text/xml.


There is one option you can apply at the ServiceContract level, and that is setting the XmlSerializerFormat attribute.


This makes life a little bit easier….


However, this did not work… and I gave up… for a day. My conclusions so far were:

·         Working with primitive types is not that simple

·         Formats are limited (for my purpose) and I need to write a custom format

·         If such a simple approach already leads to these kinds of issues, how will I ever manage to process XML?

Xml Documents

The next day I thought about my plan. All I wanted was to start with the basics and build my way up to sending and receiving XML. I tried to do that by starting with primitive types and then moving to an XmlDocument, etc. However, it didn’t work the way I wanted, so I thought: let’s start from scratch, using XmlDocument.


Using Fiddler to submit a request with Content-Type: text/xml:


Response…bad request?


No… wrong code: you should always set the XmlSerializerFormat (unless you are using the composite type)!

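The corrected code is shown only as a screenshot in the original post; reconstructed as a sketch (interface and operation names are placeholders), it comes down to this:

using System.ServiceModel;
using System.ServiceModel.Web;
using System.Xml;

[ServiceContract]
[XmlSerializerFormat] // without this attribute the request above results in a 400 Bad Request
public interface IXmlService
{
    [OperationContract]
    [WebInvoke(Method = "POST",
               UriTemplate = "ProcessXml",
               RequestFormat = WebMessageFormat.Xml,
               ResponseFormat = WebMessageFormat.Xml)]
    XmlDocument ProcessXml(XmlDocument request);
}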

Victory!


 

So I learned 5 things:

·         The WebHTTP Help (<webHttp helpEnabled="true" />) is incredibly useful

·         WCF-WebHttp works with primitive types using the XML format by default, which implies the XmlSerializer; when you don’t want this, you need to implement a custom format

http://msdn.microsoft.com/en-us/library/ee476510.aspx

http://blogs.msdn.com/b/endpoint/archive/2010/02/01/returning-custom-formats-from-wcf-webhttp-services.aspx

·         Processing Xml can be done by creating a custom CompositeType, or by using an XmlDocument

·         The Content-Type is something you need to set right

·         As always, Fiddler is a great tool

 

HTH,

 

Sander