Tuesday, May 19, 2015

Focus and innovation - recap of the last 2 years

It's been a long time since my last post.

Here is a partial explanation for that, with the note that this blog will remain online as a historical reference.

My IT career started with programming, with adventures in the Netherlands as well as overseas. When I realised that programming is something you can specify, I figured that providing added value is crucial for a long-term career and decided to focus on process-centric roles.

I started with web service integration and eventually moved from there to process integration using BizTalk. Discussing high-level processes and then implementing solutions within days, highly scalable and interfacing with LOB systems, is a great feeling.

After several years of implementing solutions like these, I started questioning whether all of this work was justified by the business case, whether the solution was actually contributing, and whether the choices had been valid. In other words, I needed to understand the choices that were made.

When you are hired to do a job, a lot of decisions have already been made. If that job is to implement a product, chances are that questioning whether that product is the best fit for the long-term strategic goals is not a welcome question, especially if it isn't.

I decided I should be able to make decisions based on the business case before a product is chosen, and be able to facilitate the process when these choices are made. To do this, I made some career choices 2 years ago:
- work for a company with a broad portfolio
- work in a role where I'm able to be a customer advocate
- learn about techniques such as Lean and the Theory of Constraints to dig deeper, optimize processes and analyse problems
- create and validate business cases which are product agnostic
- work on cloud computing projects
- work for a startup and create an integration-based platform, leveraging and evaluating vendors and technologies (AMQP, RabbitMQ, ZeroMQ, Mule, Neuron ESB, Jitterbit, Azure Service Bus, Windows Server Service Bus, Ensemble, PostgreSQL, MongoDB, and more)
- start an integration team, not focusing on a product, but on how we can leverage capabilities
- expand my horizon

My goal was to do this over the next few years; I got the freedom to do it all within months. Two years ago I worked on BizTalk in an Azure VM, on a platform supporting deployment of code both on-premises and in Azure, and I created an Azure Service Bus extension for a client to provide an internal messaging solution, amongst a lot of other exciting stuff. Blogging about that here didn't feel like a good fit, and as this knowledge isn't as static as with BizTalk, it would have been outdated within months, so I shared it with our customers and internally. With a lot of companies already claiming they do this stuff, what's the added value of explaining how to do it...and sometimes a competitive advantage is nice to keep.

Having done all of this allowed me to look at the bigger picture, generate ideas, try out concepts.

It allowed me to go through this process again, faster and broader. The last 8 months I've been sidetracked and have worked on Internet of Things projects and ideas even further out of my comfort zone. A colleague was fiddling with a Raspberry Pi, and we liked chatting about cool things we could do...someday. Within our BI team, a case was spotted where measuring data with sensors would provide insight into a process which could then be optimized. They found an excellent intern who had already worked with Arduinos and gave him the assignment to create a POC.

We were interested and offered to help; no help was needed, we were told. Still wanting to do something with our ideas, we organized an internal hackathon, just for fun. We used Cloud9 to host a NodeJS chat application (which I created while waiting for new tires at the garage), I created Azure topics, and my colleague used Python/NodeJS on his Pi to subscribe to messages with a magic keyword which switched on a lamp.

Only a month later the intern asked if we could help after all: a demo was due the next day and the communication between the Arduino and SQL wasn't working. I created an API using NodeJS in 2 hours, as I had no clue what an Arduino was. It worked like a charm.

I later took an Arduino with me to test some more. I started with the Arduino and added a messaging layer (Service Bus) in between to be able to scale the solution. I learned how the Arduino worked and wanted to know about other platforms...I've now worked with a lot of devices and platforms, and I have a sense of what options are out there, how to scale, and how to choose the right solution.

How? Because I start with the business case, the need to be able to scale, the talk before the walk. As cloud computing is here to stay, a logical fit is to leverage it in the IoT space: messaging, analytics, real-time dashboarding, etc., all very much required in good solutions. What is the best product, platform or communication method, when to choose what, and why? Much the same way of thinking I learned 2 years ago helps me to make the right choices, based on simple, product-independent criteria such as support, skillset and costs.

We're excited about working with the cloud because it's a way to be cost effective, and we're excited to work with devices, also because they are cost effective. But we're making these choices not because they are the coolest way, but because they are the best solutions providing added value for our customers.

So I'm really glad I'm able to think outside the box, and I couldn't have done so without learning to think in terms of goals, long-term solutions, added value and the question of what the best solution would be, without automatically putting a product hat on. BizTalk is a great product and I still use it sometimes, but I've also advised customers to go with a different vendor; the customer needs advice which helps him in the long term.

My journey from a technical guy with a vision on integration to a guy with a focus on strategy and IT is difficult to explain without all of this background.

So I'm going to be silent, at least blogging-wise, on this blog.

My suggestion to anyone who faces the struggle of wanting to know what's out there: realize that there are ways to expand your horizon, and hypes are not necessarily the way to do this.

I have learned more in the last two years than I thought I would, all by taking control and listening to myself and my gut feeling.

Hopefully this explains a little why this blog is not being updated, what I've been working on, where my interests lie, and why I'm so active on so many different subjects on Twitter and Yammer...I'm not thinking in boxes, but in means to solve a business case. Technology is great, solving the real problem is even better.

I feel that this blog isn't the right place to describe what I'm working on; most of it is under NDA, or rocket science 😉. Follow me on Twitter (snefs) to get an idea of the stuff I'm working on.

Cheers,

Sander

Friday, May 30, 2014

Azure implementation guidelines

In my post ‘Service Bus Management’ I pointed out a way of implementing a DTAP strategy for managing the Service Bus environment. There’s now a great post available from Microsoft which goes into more detail about other aspects of managing Azure subscriptions and other artefacts, which is quite useful when setting up your Azure environments.

Follow this link to the ‘Azure Implementation Guidelines’

HTH,

Sander

Friday, May 23, 2014

Increase your API usability by teaming up NuGet and Visual Studio


In my previous post I described a way to use NuGet for packaging libraries that are often used across projects. This post is an addendum to my post ‘Enter 2014 with NuGet’, looking at additional features. In this post I’m looking into Visual Studio features (project templates) and the NuGet PowerShell tooling feature.

I’ve been working on an API (see my next posts, Service Bus series - ‘the case for service bus’), and during development I wanted to make sure that explaining the API to a co-worker was a nicer experience than just going through the code. To ensure that developers could easily connect and understand the usage of the API, I decided to create sample projects using templates. Templates are a great way to provide the user with a sample project from within Visual Studio. I wanted more… and with the features available in NuGet packages, I’ve found a way that worked for my scenario. I’ve been using the following principles:

  1. Each project in Visual Studio can be exported as a template
  2. New projects can be created based on a template; the user gets the template in his overview of available project types
  3. Exporting a template results in a .zip file in the <UserProfile>\Documents\Visual Studio <version>\Templates\ProjectTemplates directory
  4. NuGet allows, by means of a 'tools' folder in the package, custom PowerShell actions which are executed when the package is installed

By combining #1-3 with #4 we can create a NuGet package which adds a project template for the user of the NuGet package. This can simply be done by following the steps below:

 

image

 

  • Export Template

image

In Visual Studio select ‘File’ \ ‘Export Template…’ (in the solution which contains the project to be exported as a template)

    1. Select the type ‘Project template’
    2. Select the project

image

3. Go through the wizard steps

image

Note: This .zip file can be used in a different Visual Studio environment

 

Files can be added to a NuGet package using the content folder. In NuGet Package Explorer we can add the template:

1. Add a content folder

image

2. Add the content

image 

3. Browse to the exported template

 image

4. The end result in Package Explorer

image

 

  • NuGet and PowerShell

The final step is adding a PowerShell script, which copies the content into the user's templates folder so that the project template becomes available. For this we need to perform the following steps:

1. Add a tools folder

image

2. Add Install.ps1 (which is executed when you install the package) and add the following script

image

Note: This script will install the template into the user templates folder, and assumes a specific Visual Studio version and zip file name

Note 2: the complete script is copied below;

param($installPath, $toolsPath, $package, $project)
# Folder where Visual Studio looks for user project templates
$documents = [System.IO.Path]::Combine((Get-Item env:USERPROFILE).Value, "Documents\Visual Studio 2013\Templates\ProjectTemplates")
# The template zip shipped in the package content (it ends up next to the project file)
$templateFile = "Contoso.ServiceBus.API.zip"
$template = [System.IO.Path]::Combine([System.IO.Path]::GetDirectoryName($project.FileName), $templateFile)
$templateDestination = [System.IO.Path]::Combine($documents, $templateFile)
# Copy the exported template into the user's templates folder so it shows up under 'New Project'
Copy-Item $template $templateDestination

  • The end result

Assuming the package has been published to the NuGet repository, the package source is configured and a new solution has been created, we can now use the package.

1. Manage NuGet packages and click Install

image

The PowerShell script is executed;

image

 

2. From now on we can create a sample project based on the template

image

image

 

If you are committed to re-using packages and want to create an API or reusable library, this can be quite helpful. Although integrating NuGet into TFS works differently, all the features presented are available there, so improving the DevOps integration using TFS is a nice way to improve this process.

This allowed me to provide more assistance to the end user of the API. I’ve currently divided this into a package for the API itself and another package with the templates. This post just gives an overview of the power that you have with NuGet.

 

HTH,

Sander.

Thursday, April 10, 2014

New Azure Portal – Preview

The new Azure Portal (in preview) is really great. It bundles a lot of functionality that is sometimes hard to find:

  • Azure Health (e.g. BizTalk Services / Service Bus)
  • Billing information
  • View your resources (at this moment only Resource Groups / WebSites / Team Projects / SQL Databases / MySQL Databases)
  • Notifications (the alert functionality)
  • The concept of ‘Journeys’ (preselection filters you’ve implicitly placed so that you can quickly look at the dashboard the way you want)

Some screenshots to show how much more insight this provides…

Azure Health

image

 

Billing information

image

View your resources (at this moment only Resource Groups / WebSites / Team Projects / SQL Databases / MySQL Databases)

image

Notifications

image

Journeys (preselection filters you’ve implicitly placed so that you can quickly look at the dashboard the way you want)

image

 

The portal brings together a lot of features that used to live in different areas, like the billing portal, which only showed a few billing details (an Excel file was the only way to retrieve the specifics), and Azure health, which was located somewhere else.

So far, a great portal for a preview!

 

Cheers,

Sander

Wednesday, April 09, 2014

Add project reference? Enter 2014….Using NuGet for packaging projects and dependencies

 

[UPDATE] Sample solution available on OneDrive (https://onedrive.live.com/?cid=5eaaef40eefdaddb&id=5EAAEF40EEFDADDB%21109)

A solution consists of projects, projects use components, components have a specific version, and changing those components thus requires a versioning strategy….which one are you using? All solutions which are architected into several tiers have a form of layering with (I hope) an abstraction of re-usable components. A project which implements automated order approvals (AOA), with a requirement to implement logging and retrieve something from a database, will likely have a minimum of 3 projects:

  • Solution AOA
    • Project - AOA.Core
    • Project - AOA.DataProject
    • Project - Common.Logging

Let’s assume, that the Logging library is also used by another solution: BOB

  • Solution BOB
    • Project – BOB.Core
    • Project – BOB.DataProject
    • Project - Common.Logging

Now the logging requirements change, and the BOB solution needs a specific change due to legislation. What options do we have to make sure that the AOA solution does not break because it is automatically using the latest version?

  1. Using the latest version of all the projects / components (e.g. using a build server)
    • Project approach: using project references to existing / local projects
    • Risk: the latest version has breaking changes
  2. Using a specific version by copying the dll to a local solution folder
    • Project approach: using local dll references
    • Risk: Managing the versions will lead to an enormous spreadsheet
  3. Using a SharedAssemblies folder which contains the latest version
    • Project approach: using Shared dll references
    • Risk: Managing the versions will lead to an enormous spreadsheet

Basically, we don’t know which version is used, we don’t control any of this, and the solutions provided are not sufficient. Enter NuGet….

image

So NuGet can be used to overcome this problem by packaging components, versioning them and even managing dependencies. When a component depends on another component, it is possible to define this relationship in the NuGet package; by doing this, retrieving the NuGet package will automatically retrieve the dependent package. This ensures that the correct versions are retrieved, and the self-created DLL hell can be mitigated.

How can we do this? (example feed and solution can be found at: https://onedrive.live.com/?cid=5eaaef40eefdaddb&id=5EAAEF40EEFDADDB%21109)

Our client Contoso is developing a solution which makes use of the Contoso.SB.Library. This library provides functionality in conjunction with the Azure Service Bus library, providing automatic property promotion used to enable ‘automatic’ topics/subscriptions (just an example of a solution that might exist 😉).

1. ProjectReference

This API is referenced by using a project reference, so the latest version of the Contoso.SB.Library is always used….we would like to have more control. For this we can create a custom package using the tool ‘NuGet Package Explorer’. Using this tool, it is possible to create a NuGet package of the Contoso.SB.Library, manage the 3rd-party references it uses, apply versioning and implement a change management process.

We can create a new project

2. PackageExplorer

We then need to define the properties (my colleague Sybren Kuiper is working on a NuGet packager which automatically retrieves this information from the project file); at this moment, we need to do this manually.

3. PackageLibrary

The Contoso.SB.Library uses the ServiceBus.v1_1 NuGet Package, so we need to add this dependency. This means that adding a reference to the Contoso.SB.Library will automatically retrieve the ServiceBus.v1_1 package.

4. PackageLibraryAddReference

Here you see the dependency defined in the Contoso.SB.Library

5. PackageLibraryDependencies

At this moment we only have a skeleton; we can now add the library by adding a lib folder:

6. PackageLibraryAddLib

And add the DLL by adding an existing file

7. PackageLibraryAddExisting

Select the dll…

8. PackageLibraryAddExisting2

And save the package…

9. PackageSave

We can publish the package to a feed, such as NuGet.org, but we can also simply use a file share to build our own private NuGet repository. In this case the repository is ‘C:\Temp\NuGetFeed’; in a real environment this should of course be a UNC path.

10. FeedSave

!!BACK TO VISUAL STUDIO!!

 

Click on: Tools \ NuGet Package Manager \ Package Manager Settings

11. FeedConfig1

Add a new feed, and specify the path, for example our repository in ‘C:\Temp\NuGetFeed’:

12. FeedConfig2

We can now manage the NuGet packages for the solution:

13. MAnageNuGet

And add a reference to the Service Bus Library functions; notice the dependencies (my colleague Sybren Kuiper is working on a NuGet help file builder, which automatically generates a Sandcastle help file from the project file, which is then used in the NuGet package so that the help is available in this window); at this moment, we need to do this manually.

14. NuGetReference

As we can see, the dependency leads to the installation of the ServiceBus.v1_1, the version WE specified!

15. NuGetReference2

Restrictions may apply

16. LicenseAccept

So far….the same reference, only more work? No…

What we did was abstract the versioning and dependencies out of the solution. This means that the package is in control of the dependencies, not the project. This allows for a managed release of components, AND new versions are shown in a notification menu. This can be automated, used in a TFS build, etc.

image

What happens if we release a new version of the component? (using the NuGet Package Explorer)…

17. UpdatePackage

We can save the new version, where the file name is generated based on the specified version (set manually at this moment). This gives us control over which version is used in our project, as we can always go back to a previous version. In this case we would like to see that the project using version 1.0.0 still works, and that we get a notification that there is a new version. So we are going to save the new version alongside the existing version.

18. UpdateVersion

!!BACK TO VISUAL STUDIO!!

If we manage the NuGet packages for the solution, we now see that version 1.0.0 is installed and that there is an update available, when it was published, and the new version 1.0.1. This can be automated, so that we have an overview of the versions in use. In this case, we can simply click ‘Update’ to retrieve the new version.

19. UpdateVersion2

To be able to communicate with the Service Bus, we need several references, and by using the NuGet Packaging solution, we can ensure that we always have the version we need.

20. ReferencesEndResult

 

There you go, versioning strategy in place….governance phase 1 completed…get the TFS guy and set up the rest….

Some tips:

  • NuGet packages can provide significant added value for components which are used across projects, however not always during development
  • Conditional dependencies can be useful
  • Added source/content such as samples can also be delivered using project templates, which are probably more helpful

 

HTH,

Sander

Wednesday, December 18, 2013

Starting your Azure project

Is this your project approach?

Azure Project X == Azure Subscription X
Azure Project X Budget == Azure Billing Alert on Azure subscription X
Azure Project X Monitoring == setup (SCOM) Azure monitoring
Azure Project DEV == prepared to support the application after Go-Live?

Azure Subscriptions
http://blog.kloud.com.au/2013/07/30/good-practices-for-manag…

Azure Billing
http://msdn.microsoft.com/en-us/library/windowsazure/dn47977…

Azure SCOM
http://blogs.technet.com/b/dcaro/archive/2012/05/03/how-to-monitor-your-windows-azure-application-with-system-center-2012-part-2.aspx

Regards,

Sander

Friday, December 13, 2013

The Enterprise Continuum – separation of concerns

There are so many options and ways of developing a solution that I would like to share some of the guidelines we are developing internally. For now, I will do this on the blog you are reading, focusing on solution architectures. In parallel I’m working for my company on enterprise architecture guidelines, and I’m trying to follow TOGAF principles to lay down the architecture. I’m hoping to be able to define the architecture context and the general architecture, and relate them to the solution architecture.

In a perfect world, with all the time to do this….this should result in architectural concepts, which will be posted on http://theenterprisecontinuum.blogspot.nl/, taking the top-down approach of setting architectural requirements such as ‘every project must leverage a monitoring capability’, and on this blog, http://snefs.blogspot.com, where I will post the solution for each concept.

The first post (which is basically the same as this):

http://theenterprisecontinuum.blogspot.nl/2013/12/hi-there-what-can-you-expect-on-this.html

 

Cheers,

Sander

Sunday, December 08, 2013

Azure Service Bus – Error handling strategy

At this moment there are several ways to build exciting new applications. In several projects, we are using a hybrid/cloud architecture, specifically Windows Azure. In my upcoming posts I would like to share some of the guidelines we are developing internally, in this case specifically a way of handling errors in Azure queues/topic-subscriptions.

A lot of Azure (integration) architectures (and even communication between web and worker roles) will likely use elements of the Azure Service Bus or Azure Queues. Going through the different architectures is not part of this post, so I will suffice with a slide from the Service Bus Deep Dive presentation:

clip_image002[4]

Within our company Caesar, several internal systems have been built and, where possible, purchased. One of them, CRM 4.0, was outdated and no longer suited all our requirements (among them online accessibility). We decided to migrate our CRM system to the cloud, using Dynamics CRM. As not all systems have been migrated and we are still analyzing requirements and alternatives, we needed a solution for updating the internal systems which use CRM information.

As Dynamics CRM provides the means to push updates to Windows Azure, we have implemented the following solution:

  • Dynamics CRM sends Contacts to the Azure Service Bus topic ‘Contacts’
    • For each system, we have a subscription (e.g. contacts-systemA)
  • Dynamics CRM sends Accounts to the Azure Service Bus topic ‘Accounts’
  • An internal Windows service picks up messages from the subscriptions and sends them to the LOB systems

The following diagram illustrates this architecture:

clip_image004[4]

 

This worked fine; however, sometimes we had a problem processing messages. After diving into the problem we identified that malformed messages (incomplete accounts/contacts) were sent, which caused an error, which led to an Abandon; the message would remain on the queue, and thus the problem would keep recurring…..we implemented a maximum-number-of-errors strategy, so ultimately the processing service would stop. Implementing error handling, transient fault handling and an email listener did not prevent anything; we did not know when an error would occur or what the error would be.

We stretched the capabilities of the CRM plugin and CRM configuration, which allow you to send all fields and perform validations; however, several things can still go wrong:

  • Technical
    • Transient faults – network hiccups, Azure updates which terminate connections; these can all be handled by implementing the EntLib Transient Fault Handling block
    • Environment configuration – Azure topics/subscriptions have not been created in the environment; this can be prevented by using a strategy such as proposed in my earlier post
    • Management – the Azure storage account configuration is modified or removed; these risks can be minimized by implementing a solid Azure security policy (and not promoting everybody to co-administrator)
    • The server (processing service) is not available; this should be monitored and causes business issues, but due to the asynchronous setup of this architecture it does not cause any issues that are not solved by restarting the service
  • Functional errors
    • Entity consistency
      • Contacts/Accounts are not valid because not all mandatory fields are set; these can be resolved by managing the CRM plugin
    • Entity dependencies
      • A Contact insert is not processed in the internal system, so the subsequent Contact update will fail
      • An Account insert is not processed, so the relation with the account cannot be made and the Contact insert will fail

 

Given the problem, some errors can be solved by implementing readily available frameworks and components; for other errors, however, a strategy is in order. Let’s look at the aforementioned problem in relation to the message operations. Processing messages was implemented using the peek-lock model, where the receiver settles a message with one of the following operations on the brokered message (a minimal sketch follows below):

  • Complete (everything went fine)
  • Abandon (an error occurred while processing)
  • Defer (metadata can be added to the message, so that the message can be picked up at a later time)

clip_image006[4]
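
To make the peek-lock model concrete, here is a minimal sketch of receiving from one of the subscriptions with the Microsoft.ServiceBus client; the connection string and entity names are placeholders, and this is not the exact code of our processing service.

// Sketch only: receive from a topic subscription in peek-lock mode and settle explicitly.
using System;
using Microsoft.ServiceBus.Messaging;

class ContactProcessor
{
    static void Main()
    {
        // Placeholder connection string, topic and subscription names
        var client = SubscriptionClient.CreateFromConnectionString(
            "Endpoint=sb://...;SharedAccessKeyName=...;SharedAccessKey=...",
            "contacts", "contacts-systemA", ReceiveMode.PeekLock);

        BrokeredMessage message = client.Receive(TimeSpan.FromSeconds(30));
        if (message == null) return;   // nothing to process right now

        try
        {
            // ... forward the contact to the LOB system here ...
            message.Complete();   // everything went fine
        }
        catch (Exception)
        {
            // The lock is released and the message becomes available again; after
            // MaxDeliveryCount failed attempts Service Bus dead-letters it automatically.
            message.Abandon();
        }
    }
}

In production you would probably use the OnMessage pump and transient fault handling around the LOB call, but the settlement semantics stay the same.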

 

Will this solve a functional error? No!

So what we need is a strategy…which allows messages to be stored in a location related to the queue/topic-subscription, where they will not be processed, are ‘dead’, and are queued for further investigation, hence:

“All messages, which cannot be processed, are placed in the DeadLetter queue”

 

clip_image008[4]

 

clip_image010[4]

This will result in the following state:

clip_image012[4]
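
In code, the change under this strategy boils down to dead-lettering messages that fail functionally instead of abandoning them. A rough sketch (InvalidContactException is a hypothetical business-rule exception, not a type from our solution):

using System;
using Microsoft.ServiceBus.Messaging;

// Hypothetical business-rule exception; any non-retryable functional error qualifies.
class InvalidContactException : Exception { }

static class MessageHandler
{
    public static void Process(BrokeredMessage message)
    {
        try
        {
            // ... validate and forward the contact to the LOB system here ...
            message.Complete();   // everything went fine
        }
        catch (InvalidContactException ex)
        {
            // Functional errors will never succeed on retry: move the message to the
            // subscription's dead-letter queue, with a reason for later analysis.
            message.DeadLetter("FunctionalError", ex.Message);
        }
        catch (Exception)
        {
            // Technical errors: release the lock so the message becomes available again.
            message.Abandon();
        }
    }
}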

 

This, however, poses several new challenges: what to do with the dead-letter messages and how to resubmit them. In the next post I will explain my effort to implement a monitoring solution by using and evaluating several existing frameworks and technologies.
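
As a starting point for that investigation, a sketch of how the dead-letter queue of a subscription can be read; the dead-letter queue is addressed as a sub-queue of the entity, and the names below are again placeholders.

using System;
using Microsoft.ServiceBus.Messaging;

class DeadLetterInspector
{
    static void Main()
    {
        // Placeholder connection string, topic and subscription names
        string connectionString = "Endpoint=sb://...;SharedAccessKeyName=...;SharedAccessKey=...";

        // The dead-letter queue is a sub-queue: <topic>/Subscriptions/<subscription>/$DeadLetterQueue
        string deadLetterPath = SubscriptionClient.FormatDeadLetterPath("contacts", "contacts-systemA");

        var receiver = MessagingFactory.CreateFromConnectionString(connectionString)
            .CreateMessageReceiver(deadLetterPath, ReceiveMode.PeekLock);

        BrokeredMessage dead;
        while ((dead = receiver.Receive(TimeSpan.FromSeconds(5))) != null)
        {
            Console.WriteLine("Reason: {0}, Description: {1}",
                dead.Properties["DeadLetterReason"],
                dead.Properties["DeadLetterErrorDescription"]);

            // Investigate, optionally resubmit a corrected copy to the topic, then remove it.
            dead.Complete();
        }
    }
}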

 

To be continued….

 

HTH,

 

Sander Nefs

Sunday, December 01, 2013

Architecture - ISO/IEC/IEEE 42010:2011

After following the IASA Architecture Core course, I’d like to continue my personal learning and improvement, and focus on my architectural skills, among other things. This year I followed a course on the Theory of Constraints, a really interesting theory which helps analyze the core issue behind a problem, and the MetaPlan training, which allows for a structured, goal-oriented brainstorm. For next year, I enrolled in a TOGAF training. In preparation, I stumbled upon the Open2Study website, where you can follow a lot of courses for free. This weekend I enrolled in the EntrArch course, which includes TOGAF. One of the additional resources referred to a lot of very useful articles.

After diving into a lot of them to learn more about architectural styles, frameworks and more, I can recommend the following:

TOGAF

A Comparison of the Top Four Enterprise-Architecture Methodologies

Survey of Architecture Frameworks 

 

Cheers,

Sander

Friday, November 29, 2013

BizTalk User Group NL 28-11-2013

On 28-11-2013 the BizTalk User Group meeting (LinkedIn group BTUG NL), organized by Estreme, took place in Amsterdam. The purpose of the BizTalk User Group is to hold regular meetings with members of the community on the topic of integration. Since Azure provides more and more integration capabilities, by means of the Azure Service Bus and the go-live of Windows Azure BizTalk Services (WABS), the meetings are diverse and very interesting.

As Azure is very broad, the BTUG focuses on the following elements of the Microsoft Integration stack:

  • On Premise (WCF/SSIS/BizTalk/Windows Server Service Bus etc)
  • Cloud - Windows Azure (Windows Azure BizTalk Services / Service Bus etc).

Announcements

  • An upcoming event in January is the BizTalk Saturday, focused on Windows Azure BizTalk Services
  • Next year a BTUG Beach event, an informal community event, will be organized
  • The next upcoming meeting will be held in March

Feedback BizTalk Summit - Steef-Jan Wiggers
Steef-Jan Wiggers provided a summary after attending the BizTalk Summit. It showed that BizTalk is here to stay, with an improved release cadence:

  • Annual cumulative updates
  • Platform updates every 2 years
  • Next year there will be a BizTalk 2013 R2
  • In 2015, a new version will be released
  • Improvements in the upcoming releases are in the areas of JSON support, healthcare/SWIFT adapter additions and an updated Service Bus adapter

Windows Azure BizTalk Services is now live and can be used in production, and has been improved in the areas of monitoring, archiving, EDI support and management using PowerShell cmdlets.

KAS Integrator - Johan Vrielink

At KAS Bank, BizTalk has been implemented to handle transactions for stock exchanges. The KAS Integrator is a framework built on top of BizTalk which allows fully automated configuration of the environment. Several services are defined on top of BizTalk, plus a management portal which provides business rules and publish/subscribe configurations, which has some similarities with EDI partnering and was pretty interesting. It was great to have a customer with a clear vision and story presenting a session. It showed some typical demands in the market: automated configuration of middleware and the ability to minimize development effort for interfaces, and it gave great insight into how to think about challenges in future projects, e.g. by using PowerShell.

Integration Challenge : Custom Service Bus - Rob Kits
During the integration challenge, non-BizTalk products/solutions are shown and compared to BizTalk, which allows you to think about integration in a broader sense, where not every problem can be solved with a single tool. In this case it was a custom solution used at locations where gas is distributed. In this environment it is necessary that operators can configure, adjust and monitor the environment, and middleware such as BizTalk is too complex. The solution presented, based on PLC technology, was brilliant in its simplicity. It again showed that an integration solution must be based on the needs and requirements, and not on the potential features provided. I found that to be a nice analogy with cloud technology, where one of the advantages is that you pay for what you need, not necessarily for what the technology can do.

Synchronous Service Bus - Martin Rienstra
BizTalk is not a golden hammer and certainly not suitable for all issues. At a client, about 80 interfaces were implemented in an intranet environment using a request-response pattern (synchronous). As BizTalk is designed around the principle of guaranteed delivery using an asynchronous pub/sub architecture (polling), it is not designed for low-latency solutions. This does not mean BizTalk is not capable of handling these; it is possible by using separate hosts, scaling out and separating the databases, however, due to the architecture, latency remains unpredictable.

The BizTalk product team has recognized this and stated that it is due to the architecture of BizTalk and will not be resolved; this kind of issue can be addressed by using different technologies.

Martin had previously looked at the service virtualization platform MSE (Microsoft Service Engine), but this product is no longer developed (in this space there is only Sentinet). The requirements: configurable, manageable, and re-using the BizTalk maps. The solution consisted of an interesting mix of WCF custom behaviors allowing services to be generated dynamically from configuration, using BizTalk artifacts (mappings, assemblies, etc.), with the great advantage that the existing BizTalk solution could be reused. The disadvantage is that the services have to run on the BizTalk machine because of the usage of BizTalk artefacts.

Summary

A great event with very interesting content. In future meetings we can expect a lot of great integration challenges, and I’m trying to arrange a session where one of my colleagues from Caesar will explain the differences between Sonic, BizTalk and Azure, as I’ve seen a lot of interesting things when comparing BizTalk and Sonic:

  • Sonic offers a choice between durable and non-durable subscriptions (using queues), where BizTalk always uses durable subscriptions; Azure provides in this context durable (topic) and non-durable (queue) subscriptions
  • Routing can be done schema-based; Sonic does this without enforcing a schema, where BizTalk requires a schema
  • There are similarities in the logical and physical separation of concerns (Sonic works with an ESB container and broker concept, where BizTalk uses logical and physical ports)
  • And more….

 

Great to see everyone and I hope a lot of events like this will follow.

 

Regards,

Sander

Friday, November 22, 2013

‘ETW2.0’ - High performance tracing using EntLib SLAB

Are you writing an application that has high performance requirements, are you wondering how Azure Diagnostics works, do you want to write your own logging framework….this might help you out.

Not so long ago, the Application Server Group ISV Partner Advisory Team posted an excellent article on how to instrument specifically BizTalk applications, by leveraging the ETW infrastructure.

This allowed for significantly higher-performance tracing and was benchmarked against other frameworks, as you can see in the diagram below:

image

In the latest EntLib releases, this has been included in the Semantic Logging Application Block (SLAB).

What’s really interesting is that there are 2 patterns which you can implement:

1. In-process, where the listener runs inside the host that produces the log data and subscribes directly to the event source

image

2. Out-of-process, where the listener is a separate service outside of your application, which collects the events from the ETW infrastructure (most suitable for on-premises usage)

image

 

  • EventSource

With the Semantic Logging Application Block, the idea is that the logging events are predefined (e.g. Start/Stop events) and that the application only provides the data/parameters to log. You need to create an EventSource which contains all the log events you would like to write. This means that, just as with unit testing….you need to think before you build. An example is shown below:

image
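
In case the screenshot does not render, a minimal sketch of what such an EventSource can look like (the event source name, event ids and payloads are illustrative, not the ones from the screenshot):

using System.Diagnostics.Tracing;

// Each method represents one predefined, strongly-typed log event.
[EventSource(Name = "Contoso-Demo")]
public sealed class DemoEventSource : EventSource
{
    public static readonly DemoEventSource Log = new DemoEventSource();

    [Event(1, Level = EventLevel.Informational, Message = "Processing started for {0}")]
    public void ProcessingStarted(string orderId) { WriteEvent(1, orderId); }

    [Event(2, Level = EventLevel.Error, Message = "Processing failed for {0}: {1}")]
    public void ProcessingFailed(string orderId, string error) { WriteEvent(2, orderId, error); }
}

The application then simply calls DemoEventSource.Log.ProcessingStarted(orderId) instead of composing log strings itself.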

  • Sink

The great thing, and the reason I like this framework, is that you are able to create your own sinks, and due to the out-of-process model you can leverage sinks which are in themselves not high performance. Out of the box there are a number of sinks: SQL database, Windows Azure table, flat file and some others.

1. Example SQL Server Sink

image

2. Example SQL Server Result

image
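
To give an idea of the wiring, here is a minimal in-process sketch that connects the event source sketched above to a console and a SQL Server sink, assuming the EnterpriseLibrary.SemanticLogging and EnterpriseLibrary.SemanticLogging.Database NuGet packages; the connection string is a placeholder, and the SQL sink expects the logging table created by the scripts that ship with the block.

using System.Diagnostics.Tracing;
using Microsoft.Practices.EnterpriseLibrary.SemanticLogging;

class LoggingBootstrap
{
    static void Main()
    {
        // In-process listener that observes the event source and forwards events to sinks
        var listener = new ObservableEventListener();
        listener.EnableEvents(DemoEventSource.Log, EventLevel.Informational);

        // Low-volume sink, handy for demos
        listener.LogToConsole();

        // Placeholder connection string; writes to the SLAB 'Traces' table
        listener.LogToSqlDatabase("DemoInstance",
            @"Data Source=.\SQLEXPRESS;Initial Catalog=Logging;Integrated Security=True");

        DemoEventSource.Log.ProcessingStarted("12345");
    }
}

For the out-of-process model the same sinks are configured in the XML of the Semantic Logging service instead of in code.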

  • Result and extension points

Writing through the EventSource writes to the ETW infrastructure, which has almost no performance impact. The out-of-process listener picks up the messages in a Windows service (which can also be downloaded from the EntLib download link), and the sink writes the data to the destination of your choice.

In a post by Tomasso Groenendijk, the option of using MongoDB is explained, with the idea of having a high-performance tracing mechanism. With SLAB the same functionality is available.

Additionally, writing large amounts of data is something you may not want to do on the database used for your primary process, so creating a MongoDB sink is still a viable option, albeit for different purposes.

  • Getting started with SLAB

You can get started with SLAB quite easily by using NuGet and searching for ‘Semantic’, which will display the application block and the available sinks for Windows Azure Tables and SQL Server as well.

image

The hands-on labs and documentation should get you going quickly. As the EntLib settings can be configured outside of your code (recommended), diving into the EntLib config might not be as much fun as you would expect. For this, there is an EntLib configuration tool available.

  • Getting started with EntLib Config

The following link contains an EntLib 6 configuration add-in which helps you create the configuration settings for some of the application blocks and for the Windows service used for out-of-process logging.

1. Select the configuration console

image

image

2. Click on the Config file and open the editor

image

3. Select the block and visually configure the block

image

4. Example TransientFaulthandling Config

image

5. Usage TransientFaultHandling Config

image
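
The screenshots above show the config-driven approach; purely as an illustration of what the Transient Fault Handling block does, here is a minimal programmatic sketch (assuming the EnterpriseLibrary.TransientFaultHandling and EnterpriseLibrary.TransientFaultHandling.Data packages; the connection string is a placeholder):

using System;
using System.Data.SqlClient;
using Microsoft.Practices.EnterpriseLibrary.TransientFaultHandling;

class RetryDemo
{
    static void Main()
    {
        // Retry up to 5 times with an exponential back-off between 1 and 30 seconds
        var strategy = new ExponentialBackoff(5, TimeSpan.FromSeconds(1),
            TimeSpan.FromSeconds(30), TimeSpan.FromSeconds(2));
        var policy = new RetryPolicy<SqlDatabaseTransientErrorDetectionStrategy>(strategy);

        policy.Retrying += (s, e) =>
            Console.WriteLine("Retry {0} after {1}: {2}", e.CurrentRetryCount, e.Delay, e.LastException.Message);

        // Placeholder connection string; only transient SQL errors trigger a retry
        policy.ExecuteAction(() =>
        {
            using (var connection = new SqlConnection(@"Data Source=.\SQLEXPRESS;Initial Catalog=Demo;Integrated Security=True"))
            {
                connection.Open();
            }
        });
    }
}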

 

 

 

Cheers,

Sander