Thursday, December 21, 2006

Hey Ma look at me! I do SOA now.

One of the worst things about SOA has been the band-wagon jumping by vendors. These have fallen broadly into three groups:
  1. Hey look, something new. Bugger, it's not technology; quick, let's develop some implementation stuff and claim the badge anyway.
  2. Hey look, it's a techy thing, we do that already don't we? A quick jig and Bob's your uncle.
  3. Bloody hell, that has a lot of buzz; let's slap some lipstick on the pig and claim it's SOA.
The first group includes the likes of IBM, BEA, Oracle, SAP, Sonic and a few others. These are companies that have recognised that they don't have the current products to do the business impact side of SOA, but have at least recognised that their technology stack needs to be different from older-style technologies (I'm talking about IBM product here BTW, not things like CBM). So Oracle have actually built middleware, IBM have abandoned the old MQ-based technologies for new development, and BEA have clearly split application development from business as a domain. I'd lob REST in this group too. Behind the scenes they'll all say that they see the business thing, it's just that they can't see how to build or flog products that do that right now.

The second group are the EAI vendors and the vendors who had some sort of "component" based approach and are just seeing XML and WS-* as another implementation pattern. They've pretty much decided that it's a techy thing and just another protocol for them to handle. These are the people who can't believe their luck: two years ago no-one was buying EAI, and suddenly it's an "SOA stack" and popular again. At least these folks, though, are misunderstanding within a specific context (technology); they say that SOA is important, they just tend to stress technology and implementation more than the first group, often because they can't afford the re-development costs. They too see the business thing, but they are even further away from being able to do it than the first group.

The third group is the worst though: these are the people with a particular world view and a particular line to peddle who decide that "SOA cannot be done without X". Linked from this post is an article by IDS Scheer on how SOA "requires" BPM, indeed how SOA "starts and finishes with business process". IDS Scheer are far from being alone in this group, but this is a cracking example of the sort of thing that is wrong with IT.

Here we have a paper in 2006 that actually does the WS = SOA thing, something that actually proposes that Services should be created bottom-up and that it is process, including the execution-level processes, that drives and defines SOA. I've talked about POA v SOA before, but to be brief: if you start with process and do process-based definition then you are doing POA and are 100% not doing SOA. On page 95 of Enterprise SOA Adoption Strategies I discuss how business process works in an SOA context (and the similarity of process and service at high levels of abstraction), but this isn't what IDS Scheer or the other "business" modelling tools are doing; what they are doing is taking exactly the same model they had before and just going "hey look, if we consume a WSDL we are doing SOA".

This last group is easily the worst and most disreputable, particularly as they claim that they can model businesses effectively while being the most disconnected from the technology. Their claim that they are a requirement for SOA, without actually offering anything that is SOA, is quite gobsmackingly barefaced.

Anyway I hope everyone has a great break and comes back in 2007 ready to face SOA as a business challenge rather than as just technical implementation. Oh and to people around the world, there are two British Christmas things that you need to get, firstly Christmas Crackers and secondly Mince Pies. Christmas isn't Christmas without gunpowder, a silly hat and alcohol laced titbits.


The perfect SOA environment?

At my department's Christmas bash last night the discussion topics were many and varied, ranging from the stupidest thing we've been asked at US immigration (an impossible-to-judge competition given the standard of entries), to getting a bill through the Houses of Parliament, to starting off in IT and your first language/OS.

Well, my Boss's first platform enabled the dynamic provisioning of services: each service ran in its own dedicated partition and could be tuned, upgraded or deployed independently of all the others. It had a rigid definition for these service environments while enabling efficient communication via distributed shared memory and clustering. Development could be done in various different languages and still run on the same common platform.

So what exactly do modern platforms have over VME (Virtual Machine Environment)? Isn't this exactly what efforts like VMware and the hypervisor on Intel chips are trying to re-create today? Meanwhile, the concept of containment and deployment of individually active services sounds very like the goals of SCA.

This is one of the major reasons that SOA can't be about throwing away old systems and replacing them with new technologies. VME, if architected correctly, could act as a very efficient way of delivering certain types of services in an organisation, and similar claims could be made for LPARs and other "older" technologies. If they are working and delivering effectively against the business goals, and are actually providing facilities not currently available in "modern" environments which help management and control, then take advantage of that rather than ripping and replacing.

The perfect SOA environment is heterogeneous and takes account of what you already have and what you need to do, and is managed and evolved in line with the services and capabilities of the business. The worst is one where technology is being used as the driver and words like "Web Services", "REST", "BPEL" and "WCF" are being used as justifications.


Thursday, December 14, 2006

Sad but it is December...

A simple game of blog tag for Christmas: on Sunday, Jeff Pulver started a game of "blog-tag" in which participants are asked to share 5 little-known things about themselves, and then tag 5 people on their blogroll. Well, as per the headline link I got tagged, and hell, it's December.

Okay here are my five
  1. I hate horror films
  2. I'm nuts about snowboarding and watersports, I don't just mean I like them but that I'll drive 5 hours from SFO to Tahoe for 2 days and catch the red-eye to Toronto when there are only 5 open runs at Heavenly type nuts
  3. Every father in the world thinks his kids are the best... but only I am right
  4. I think Java (and all other C syntax languages) are rubbish, and much prefer Ada/Eiffel
  5. I met my wife in an Irish Pub in Paris, we went to a club, chatted the night away ended up walking along the Seine and then having breakfast in a little cafe just across the river from Notre Dame.
So who shall I tag... well why not Stefan Tilkov, Mike Morris, Andy Hedges, Dan Creswell and Cliff Richard?


Why contracts give flexibility....

As per the last post there has been the standard backlash from a bunch of people who crusade first and think later.

So let's explain in simple terms how strict contracts give flexibility, and to make it really easy I'm going to use four examples: firstly REST (which the "don't validate" crowd follow) and secondly LEGO(r) (which the "I don't understand SOA but I'm going to use this as an analogy" crowd go for). The final examples are... well, I won't spoil the surprise... so I should cover lots of bases here.

So first off let's look at HTTP. It has a very strict contract with a limited number of possible actions, GET/POST/PUT/etc., and an extremely limited way of getting to those actions. Now interestingly, the entire concept of REST is that this limited and tightly specified set of actions, when rigorously enforced, provides an infinitely flexible architecture on which to build things. So HTTP is in fact an area which has a pretty tight set of up-front validation (try sending random packets to Apache or Firefox to see where it gets you) and an extremely limited and well-specified set of actions with which to interact with the server.

So simply put, the REST people who are claiming that you shouldn't validate are exactly the same people who are claiming that you should rigidly adhere to a tight specification and reject anything that (for instance) stops GET being idempotent. So here is a great example of a tight specification being used as the implementation approach for complete flexibility.
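To put the point in code form, here is a toy sketch (not a real HTTP parser; the function and names are mine) of how a closed verb set and a fixed request-line shape let a server reject junk up front:

```python
# HTTP's contract is tight: a closed set of verbs and a fixed
# "VERB path HTTP/x.y" request-line shape. Anything outside that
# shape can be rejected immediately - the up-front validation the
# strict contract makes possible.
ALLOWED_METHODS = {"GET", "HEAD", "POST", "PUT", "DELETE", "OPTIONS", "TRACE"}

def accept_request_line(line):
    """Accept only a well-formed 'VERB path HTTP/x.y' request line."""
    parts = line.split()
    return (len(parts) == 3
            and parts[0] in ALLOWED_METHODS
            and parts[2].startswith("HTTP/"))

print(accept_request_line("GET /index.html HTTP/1.1"))   # True
print(accept_request_line("random packet of junk data"))  # False
```

The flexibility sits entirely on top of that tiny, rigidly enforced surface.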

Next up let's take Lego. You have a "brick"; it has a bottom with set dimensions and a top with set dimensions. Amazingly, despite limiting the specification of the brick to having exactly the same form and function top and bottom, the potential for Lego is practically infinite: you can do Star Wars, Air Force One and really odd stuff. This is all managed via a very strict contract as to how the different bricks connect.

My third example is the wonderful world of IKEA. IKEA uses pretty much the same screws in everything they do, they use the same sort of connectors in everything they do, and lots of the time they use the same sizes of wood, just combined in different ways. Having tight specifications like this, which mean they only need certain sizes of dowel or screw, gives them an immense (and profitable) flexibility in what they can create.

Tight specification can often be used as the basis of flexibility rather than rigidity, but having these tight specifications means that you, as an engineer, need to understand the boundaries within which you operate.

The final example comes from the wonderful world of Pimp My Ride, and more particularly the world of tyres and "rims". Having a set size for wheels and a tight specification doesn't decrease the flexibility, it positively increases it. There is a massive number of tyre and rim combinations available, so one car could have three tyres from one manufacturer alone and a set of 4 rims from the first hit on Google. For the mathematically minded that is 12 different combinations on a single crappy Ford Focus.

This is why specifications are engineering; if you could just say "I wish I could do this because it's cool" then you'd be a liberal arts major. Having specifications can increase flexibility when you know how to use them and use them in a smart way. This is the reason that screws, nails, hinges, doors, trains and lots of other things obey a specification. Specifications are the best way to get things to inter-operate. The REST crowd should know this better than most with their mantra of "standard HTTP", but unfortunately too often people are beguiled by an idea which makes them contradict the benefits of what they are saying is right.

Specifications give flexibility because they tell people what can and cannot be done, people are then very adept at modifying their behaviour and their world to cope with these limitations.

It would be completely wrong to say that this was Darwin in action, but it's completely right to say it's how adaptation works.


Wednesday, December 13, 2006

Can we please be engineers not optimists?

Via Stefan Tilkov I came across this Mark Baker article and well I'm going to set myself up on the opposite side of the fence here, in fact I'm going to go over that fence, across the field and sit in the pub watching him sink slowly into a quagmire.

One of the worst things I see in IT over and over again is the rage against formalism and verification; it's just bizarre how often it happens. Now Mark might have a great new idea for time-dependent evolvable validation... but for me he is wrong in principle.

First of all let's start with the basic tenet that everyone should hold dear:

KISS - Keep it Simple Stupid

And another is one of the few new Agile terms that I liked, which formalised an age-old saying:

YAGNI - You aren't going to need it.

And before I get into it, also have a look at the Second-System Effect from the best book in IT.

When dealing with 3rd parties a basic rule of thumb is trust no-one and CYA (Cover Your Ass); this means verifying both what you send out and what you receive. There are very good engineering reasons for this. Let's say that you have said that there is a limit of two items per order, as that is the business decision. If someone sends 2344 you want to reject that straight away, and schema verification is a great way to do that.
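In XSD terms that limit is just a maxOccurs constraint on the item element. Here is a minimal sketch of the reject-up-front behaviour (a hand-rolled check with the standard library standing in for a real schema validator, and the element names are made up):

```python
import xml.etree.ElementTree as ET

# Stands in for schema validation: an XSD would declare
# maxOccurs="2" on <item>. Either way, the bad order is rejected
# at the edge, not deep inside the order system.
MAX_ITEMS = 2

def order_is_valid(xml_text):
    """Reject the order up front if it breaks the agreed contract."""
    root = ET.fromstring(xml_text)
    return len(root.findall("item")) <= MAX_ITEMS

good_order = "<order><item/><item/></order>"
bad_order = "<order>" + "<item/>" * 2344 + "</order>"
print(order_is_valid(good_order))  # True
print(order_is_valid(bad_order))   # False - rejected straight away
```

The point is where the check happens: at the boundary, the moment the document arrives, not three systems downstream.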

Engineering means designing a system that works for the task it is required to do, and if one of the things it needs to do is allow change then it needs to allow that change in a controlled and measured way.

The concept that Mark puts forward of validating only at the last possible moment is somewhat beguiling, as it has the idea that everything will turn out right, but if the poor bugger on the order system is sent massive numbers then the odds are he won't have coded it to expect them. This isn't his fault; if you say "it's 1 or 2" then it's pointless him writing loads of code to handle a million items.

Validating at the edges, however, does require you to accept that when change happens it may break things. But the reality is that most of the time this is what you want. If you are dealing with someone who suddenly sends in a new document with a field that says "by responding to this message you are agreeing to give us all your money and your first born child" then you'd like to know.

Validate as early as possible, make sure that things are correct and hold people to account as early as possible. I've seen companies lose literally millions of pounds (billions of dollars :) because they didn't enforce interface contracts effectively, this led to problems down the line and systems that were very hard to untangle.

There are an awful lot of stupid people out there and it is your job to protect your systems and your business from their stupidity; this means not trusting what they send you and checking it yourself for validity. Optimism is not a good engineering trait, pessimism is. That is why you validate what you send, to make sure it doesn't come back on you, and validate what you receive, to make sure the other person isn't an idiot.

To use a physical engineering analogy, you check that the wing is put on properly as soon as it is attached, not by looking out of the window at 30,000ft to see if it is still there.


Tuesday, December 12, 2006

REST URI naming convention

Over on the Yahoo SOA list there has been a REST discussion where I've continued to not see the benefits of REST for prime time business development....

But there was also a discussion about URIs and whether they should be opaque (meaningless) or sensibly named. I'd like to make a REST proposal

All URIs in REST must be meaningful; they should clearly articulate the type of resource and its place in the hierarchy, while individual resource identification should be done via a GUID. The GUID is used to standardise the use of IDs and to provide a clear separation between the meaningful and opaque parts of the URI.

So, to use made-up examples, http://example.com/service/ would be a "good" URI starting point and http://example.com/service/{GUID} would be an individual resource identifier; likewise http://example.com/customer/ would be the customer starting point and http://example.com/customer/{GUID} would be a specific customer.

To me the lack of agreement on naming conventions in REST is indicative of its prototyping feel. Agreeing that URIs MUST have meaningful names would make things simpler to debug, simpler for developers to understand, and would clearly separate roots from resources.
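The proposal is mechanical enough to check automatically. A sketch (the path shapes, segment rules and GUID format here are my own illustration of the convention, not part of it):

```python
import re

# Meaningful lower-case hierarchy segments, then a GUID for the
# individual resource: the GUID cleanly separates the meaningful
# part of the URI from the opaque identifier.
GUID = r"[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}"
RESOURCE = re.compile(r"^(/[a-z][a-z0-9-]*)+/" + GUID + r"$")

def is_individual_resource(path):
    """True if the path is meaningful segments ending in a GUID."""
    return bool(RESOURCE.match(path))

print(is_individual_resource(
    "/customer/123e4567-e89b-12d3-a456-426614174000"))  # True
print(is_individual_resource("/customer/bob"))          # False
```

A check like this could sit in a gateway or a test suite, which is exactly the sort of debuggability the convention is meant to buy.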



Monday, December 11, 2006

Predictions for 2007

Okay, it's time to don my cap of predictions for 2007. The following should be thought of as about as reliable as the average horoscope, i.e. I'm making them up off the top of my head.

Products and vendors

  1. First off, 2007 will be the year of the SOA technical "platform". Not exactly news, as quite a few people are claiming this today, but 2007 will be the year that really sees this become useful. With Oracle, BEA and IBM all looking like having pretty major releases next year, it's going to be an entertaining marketplace.
  2. Smaller vendors will struggle even more, the justifications for picking niche products will increase.
  3. The "rich and fat" ESB model will die on its arse. Products will start to clearly split the communications infrastructure from the application infrastructure.
  4. Microsoft will slip further behind in the enterprise space while they wait for Longhorn server
  5. IBM will finally admit that it does have a cohesive strategy based around J2EE and that the non-J2EE bits are going to EOL.
  6. Rumours of BEA being bought out by Chelsea FC will abound, then die away once Roman Abramovich realises they don't possess a top quality international striker.
  7. SCA will become the "accepted" enterprise way of doing things
  8. REST will be delivered into the stacks so they can remain buzzword compliant, REST advocates will denounce them both as heretics and as proof that REST is the "answer".
  9. Business level modelling will continue to be a pipe dream, filled either with overly complex tools or insanely technical ones.
  10. Oracle will buy some more companies, probably including some sort of registry
  11. IBM will buy some more companies, probably focused around provisioning and management
  12. Windows Workflow exploits will show up in the wild
  13. Some product vendors will finally get the difference between product and application interfaces and stop confusing the two.
  14. Questions will be asked about why you have to pay so much money for an invoicing process in an ERP.
  15. Java being Open Sourced will not be the big "wow" that Slashdot predicted
  16. ERP vendors will start to get their heads around SaaS licensing models
  17. Hardware virtualisation will become the "norm" for new deployments
WS-*

  1. WS-Contract and WS-SLA will remain in the future while WS-* concentrates on more technical challenges.
  2. WS-* will continue to be plagued by insanely simple bugs in various implementations, but vendors will hopefully each have just one WS stack (rather than all having multiples like they do now).
  3. BPEL 2.0 will go up a hype curve like almost no technology in history... people will then complain about their visual COBOL applications being unmaintainable.
  4. WS-* will split into competing factions: those that think everything must be done in "pure" WS-*, and those that think that sometimes it's okay not to use the standard way if it's actually simpler.
REST

  1. REST will start re-creating the sorts of things that WS-* has, so await "RESTful security" and "RESTful reliability" as well as "RESTful resource descriptions" being bandied about.
  2. REST will aim for a MIME type explosion; this won't get very far and will lead to lots of "local" standards that are nothing of the sort.
  3. REST will split into competing factions: those that hold to the "literal truth" of the REST paper, and a more progressive sect who treat it as a series of recommendations that are to be applied with thought.
Business and IT

  1. IT will continue not to care about TCO and will focus on the cost of development
  2. Some major IT horror stories will emerge based on SOA and REST "failures"; the reality will be that the project was screwed from the start but it was a damned fine scapegoat
  3. More engineering and measurable solutions will be required by the business
  4. Business will demand more visible value from IT
  5. Offshoring will continue, and South America will rise further as an offshore location
  6. The business will want to see IT clearly split the utility from the value
  7. IT will continue to focus on technical details and miss the business big picture
Well, that is it for starters. Like all good horoscopes there are some specific elements that won't come true and a bunch of generalities that I can claim did. But my big prediction for 2007?
  1. Sun will finally get their act together and pull all those brilliant minds into a cohesive enterprise strategy and ditch all the fan-boy bells and whistles that have dogged their recent past.
Well I can dream can't I?


Sunday, December 03, 2006

SOA is people

With all these REST v WS-* type arguments flying about, I thought I'd remind everyone of the most important thing in delivering successful Service-based projects, systems and enterprises.
Yes folks the thing that will actually make or break your systems in the majority of cases will be the people. This means the talent of the people defining and delivering and the cultural changes in the wider organisation to accept and take advantage of the services effectively.

All too often in IT's obsession with technology we argue about the pointless and ignore the critical. If your SOA strategy and direction isn't centred around the people and practice changes, then it won't deliver the benefits you expect.


Sunday, November 26, 2006

Rating WS-*

I blogged back in January on the lack of a WS-Contract, so I thought it was time to review the landscape and see what WS-* standards there are these days, and how much wasted effort there has been. I'm excluding all the vertical standards from this (like Translation Web Services et al) as they are addressing specific business problems and so at least it's an attempt to work from a business perspective; the goal here is to look at the technical standards.

To this end I'm going to split the WS-* standards into five groups
  1. Good Idea - The standards that make sense and I can see being used
  2. Good Or Dumb - The standards that have potential to go either way
  3. DOA - The standards that have either died or shouldn't have been looked at
  4. Dangerous Idea - The standards that are down right dangerous and will cause more harm than good
  5. MIA - The standards that just don't exist, but which should
Good Idea
  • WSDL - It's the interface, it's a good idea, and WSDL 2.0 has some decent extensions that will cause issues (callbacks) but are there for a decent reason
  • WS-Addressing - It was needed and it will help more complex deployments
  • WS-Policy - Great idea, shame its not being universally adopted elsewhere
  • WS-BPEL 2.0 - Just need that human workflow in there chaps
  • WSDM - Good idea, and it seems like the implementation might be on the way too.
  • WS-RM - Reliability is a good idea
  • WS-Trust, WS-Security etc - Damn good idea to do this in a standardised way
  • WS-Federation - Federated things are good, centralised things are bad
Good Or Dumb
  • Semantic Web Services - Could very easily turn into intellectual masturbation and very little real world practicality. Could also be fronted by decent tools and provide a great way to add more formalism to a contract.
  • WS-Context - Looks like a good spec on paper; the devil in this one will be in the implementation. I'm a bit worried about the WS-CTX service, which looks like a big central attempt to manage context, and about the lack of reference to process standards such as WS-BPEL. Could be DOA in a year.
  • UDDI - Never realised its grand vision; sure, it's still going inside some decent products but it clearly isn't the standard it was cracked up to be
  • WSN - Web Service Notification; again it's down to the implementations here. I haven't seen much out in the wild that indicates it's going strong, even though it's a 1.3 standard.
  • WSRF - Resource management, it sounds beguiling but I'd bet on this moving into Dangerous as product implementations start coming out and making resource state into some complex beast, they could however make it trivial and please the "stateless" invocation crews.

DOA
  • WS-Choreography - sounded so good, but just doesn't seem to have the weight behind it
  • FWSI - Hadn't even heard of it till I started this post
  • WSRP - RSS does a better job
  • Web Service Quality Model - Sounded good... but has it gone anywhere?
  • WS-Reliability et al - killed off by the better WS-RM standards
  • WS-Discovery - Jini broadcast discovery for Web Services... oh god.
Dangerous Idea
  • WS-TX - Two Phase Commit via Web Services, I'm sorry this is in my dumb ideas group. People will expose this via the internet and then "wonder" why they are having massive performance problems. If something is so closely bound in that it needs 2PC, then build it tightly bound, not using Web Services.
  • WS-BusinessActivity - Shifts logic around into the ether, not a great idea
MIA
  • WS-Contract - Defining business contracts on services, including pre-conditions, post-conditions and invariants
  • WS-SLA - Defining the SLA for both the service and the invocation.
So there are quite a few good ideas, and a whole heap of not very good ideas. But it is good to see that the basics of security, reliability and identity are being covered. To be honest it's better than I expected, and I've deliberately excluded all of the random WS-* proposals that have never made it into a multi-consortium group or a standards body.

Any I've missed or any ratings that are wrong?


Wednesday, November 22, 2006

Want to be cool? Learn REST. Want a career? Learn WS

I've been reading about, and playing with, various different REST and WS approaches recently. I have to admit that when knocking up a quick demo of something over a website where I am doing both sides that REST is very nice and quick, and WS is still more complex than it should be in the tools.

But as with any technical discussion there is another thing that should drive you when you make your decisions: whether you want to get fired for a decision when something goes wrong, and whether you want your skills to be relevant in 3 years' time.

REST v WS is pointless from a technical perspective IMO, and will become more so as tooling sets improve.

From a business perspective however the choice is much more stark, and really doesn't come down well for the folks in the REST camp.

Out in the big wide world of the great employed of IT there are four dominant software players, these people represent probably the majority of IT spend in themselves and influence probably a good 95% of the total IT strategy out there on planet earth. Those four companies are SAP, Oracle, IBM and Microsoft.

These are the companies your CIO goes to visit, sitting through dinners and presentations on their product strategy, and what they are pushing is WS-* in all its ugly glory. This means that in 3 years' time you 100% will have WS-* in your company, in a company you work with, or in a company you want to work with. Sure you can argue that it's harder and more difficult than REST, in the same way as you can argue that Eiffel, PHP or Ruby are more productive languages. Some people will get to use those languages commercially, and some people will get to use REST commercially.

Everyone will have to use WS-* commercially if they want to interact with systems from the major software vendors.

I'm not saying it's right, just that it's reality. The best technology isn't the technically purest, the most productive or the easiest; it's the one that the most people use and which has the widest acceptance and adoption. For shifting data across the internet this means it's what SAP, Oracle, IBM and Microsoft say, and it's also what the various vertical standards (which the big boys aim to implement "out of the box") have all gone for: WS-*.

The technical discussion is pointless, and the commercial discussion is moot. But hey, let's continue having the discussion on REST v WS because it makes us feel cool and trendy. It's about time that IT people realised that we need to have discussions based on commercial realities, not on technical fantasies.


Tuesday, November 21, 2006

Using SOA to understand competitive advantage

One of the bits that I've talked about at various conferences and in the book is using SOA to understand the business value that various services have.

I was chatting on the phone today with someone around this topic, and I thought it was worth a post on why treating SOA as a technology thing misses the real power and opportunity of what IT can deliver. There was a research report from the Economist recently that said that IT would need to move away from reducing cost and towards delivering value, and of course that the business and IT have different views on how this will happen and what the barriers would be.

But let's start first with the IT department being expected to deliver value. This means that you have to understand the bits that add value, and of course the bits that don't. If you are viewing SOA as a technology thing then it really isn't going to help, as you can't really start ascribing value to a WSDL or a BPEL process; it's just too low-level to consider investment or cost cutting down there.

This means you have to have some way of understanding the business, some way of understanding what you will need to deliver to it, and therefore some way of understanding the different values and drivers of the different parts of the business. Some people might say "that is what enterprise architecture is for", but I'd have to disagree, as this is really just the first step in enterprise architecture, and also the first step in actually managing an IT department; I don't see the concept of "value" early on in the likes of Zachman or TOGAF.

This is why I argue for a simple approach to the business service architecture, because it quickly gets us to the stage where we can understand what the services are and we can then use that information to understand the business value.

By understanding which services are "under the line" and which are above the line you can quickly understand where you should cut cost, and where the business would appreciate suggestions for added value. Speaking to someone from work a week or so ago, he mentioned working with a company that did a similar exercise a few years back and found they had over 50 projects in areas below the line.

As the business starts expecting IT to deliver more value it's going to be essential for IT to understand more about which parts of the business deliver actual value, and which are just IT things that we think add value but which in fact no-one actually cares about.

For instance, how many Finance or HR services would live above the line? How much competitive advantage is there in having a customised Invoicing process? How much advantage is there in having an EAI tool or an ESB? How much value is there in REST v WS? Once you understand the value you can start delivering actual benefit and realising that the technology isn't important, its the end-game that matters.

For SOA to give competitive advantage it means knowing what advantage looks like.


Thursday, November 16, 2006

XML is not human readable

Over the last couple of weeks I've heard the same phrase said by about five different people
The advantage of XML is that its human readable, this is why Web Services are better than previous technologies.
Now I'm not going to get into a readability contest of WSDL v IDL (hint: the winner isn't WSDL). But I think it's worth examining the whole concept of XML and whether it should be human readable, particularly when it comes to business processes, service descriptions and service contracts.

So should a "good" Web Service description be human readable? Let's examine the purpose of that description:

  1. To enable consumers to call the service correctly
  2. errr... that is pretty much it
So given that goal, what is the best way to show this to both systems and people? The answer is of course to have a common technical language that enables accurate exchange, and then have this rendered for different types of people and systems: so Java tooling turns it into Java, C# tooling turns it into C#, and for people it gets rendered into a nice picture that shows the methods and the constraints.

WSDL and BPEL (especially BPEL) are examples of that technical language. There was never a goal for them to be human readable; they aim to be machine readable. The Geo Ripping WSDL is a very simple, self-contained example of why XML isn't designed for humans to read.

Sure, when it comes to debugging you can print out the XML and a skilled person can spot some of the errors, but then you could do this with RMI, CORBA, DCOM and even C (using a hex editor in the latter case). The idea of "human readable" is that anyone could read the SOAP messages or BPEL process context, and that 100% isn't true.

Or to look at it another way....

XML is not human readable, it's not designed to be human readable and you shouldn't try and make it human readable. Just because something is in Unicode doesn't mean that anyone can read it. French, Chinese, Klingon (WTF?), Japanese, German, English, Urdu and many other languages can be written in Unicode, and XML should be viewed in the same way, but as a language with lots of superfluous syntax, no real semantics and pretty random grammar in general.

Think of XML as being English spoken by a sulky French teenager, lots and lots of grunts that mean nothing to anyone and the occasional fragment of something that no-one actually properly understands.

Reading BPEL is like trying to understand a conversation between a sulky French teenager and a sulky American teenager... in Chinese when they've only had two lessons and you don't speak Chinese.


Technorati Tags: , ,

Tuesday, November 07, 2006

What Geo ripping means to the enterprise

The other reason for geo-ripping Wikipedia was to explore what can be done with the unstructured information created inside organisations, and how easy it would be to
  1. Re-purpose the information
  2. Give credence to quality
  3. Turn human focused information into systems focused information
The first critical point is that the Wikipedia information isn't truly unstructured. I was really extracting templated information, which made it much easier than truly unstructured information would have been. But this is a pretty standard case when you think about information stored in Access databases or Excel sheets, where templated or semi-structured information is the norm. That makes it a reasonable use case for thinking about how current information in things like Excel can be turned into information that can be directly used elsewhere in the enterprise.

So that is stage one, which leads directly to stage two, namely the question of data quality and provenance. If I release information that is manually created in a spreadsheet (but on which critical decisions are currently based) and allow that to be directly integrated elsewhere without the human judgement and oversight, how do consumers know the quality or provenance of the data? How do I state on my Web Service "this service shouldn't be used for anything serious like nuclear power or making actual decisions" without it becoming the standard shrink-wrapped licence that all software vendors tag on, and everyone ignores?

The threat here is that a Line of Business (LOB) will use this sort of approach to create a Web Service like "Current Sales Budget" which contains not only out of date information, but information based on incorrect assumptions. This will then be consumed by others who think it is the "real" current sales budget. This is a big risk, especially if the data is used for modelling and the like, as small errors in one place can lead to massive errors at the end. Data provenance is going to be a big issue in this world of "easy" to develop Web Services.

The final element is about going the other way from the previous goal of IT which has tended to turn systems information into human focused information. The goal here is to take all of the information created in these collaborative and participative systems and turn it back into something that the enterprise can use, hence the reason I wanted to take a Wiki and put the information into a database.

So my little experiment proved that it can be done, and that it's liable to be an issue in terms of data and provenance. Not sure on the solution yet, but at least it gives me something to think about.

Technorati Tags: , , , , , ,

Geo-ripping Wikipedia

As part of my ongoing quest to stop my drift into senility and powerpoint (the difference is marginal), and to make sure that when I recommend things to clients they actually work, I went in search of some Web Services to play with. There used to be a useful bunch over at CapeScience; now they've just got the Global Weather one, which is okay, but I could do with more than one (and I hate stock quote examples). I also wanted to see what could be done to get some interesting information out of Wikipedia, so I hatched a plan.

The idea was to create a very simple Web Service which took the Wikipedia Backup file and then extracted from it the geolocation information that now exists on lots of the pages.

Stage one was doing the georip, which was very simple. I elected to use a StAX parser (the reference implementation in fact) as the file is excessively large. Using StAX was certainly quick (it takes Windows longer to copy the file somewhere than it takes StAX to parse it) but there are a few odd elements in using it (which I'll probably put in a separate post). That gave me a simple database schema (created using Hibernate 3 as an EJB 3 implementation, using inheritance in the objects mapped down onto the tables; very nice implementation by the Hibernate folks).
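For anyone curious, a minimal sketch of that kind of StAX rip. The element names match the MediaWiki dump format, but the template matching here is deliberately naive and this is not the real rip code, just the streaming shape of it: the dump never sits in memory.

```java
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamException;
import javax.xml.stream.XMLStreamReader;
import java.io.InputStream;
import java.util.ArrayList;
import java.util.List;

public class GeoRip {
    // Streams a MediaWiki-style dump and returns "title -> {{coor ...}}" hits.
    public static List<String> rip(InputStream in) {
        List<String> hits = new ArrayList<String>();
        try {
            XMLStreamReader r = XMLInputFactory.newInstance().createXMLStreamReader(in);
            String title = null;
            while (r.hasNext()) {
                if (r.next() == XMLStreamConstants.START_ELEMENT) {
                    if ("title".equals(r.getLocalName())) {
                        title = r.getElementText();        // small element, cheap to read
                    } else if ("text".equals(r.getLocalName())) {
                        String body = r.getElementText();  // the article wikitext
                        int i = body.indexOf("{{coor");    // just one of several formats
                        if (i >= 0 && title != null) {
                            int end = body.indexOf("}}", i);
                            hits.add(title + " -> "
                                + body.substring(i, end < 0 ? body.length() : end + 2));
                        }
                    }
                }
            }
            r.close();
        } catch (XMLStreamException e) {
            throw new RuntimeException(e);
        }
        return hits;
    }
}
```

Because StAX is pull-based and forward-only, the whole pass costs roughly one string's worth of memory per page, which is why it keeps up with the disk.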

Next up was the bit that I thought should be simple: after all, I had my domain model, I had my queries, I had even built some service facades, so how long could it possibly take to get this onto a server? The answer was far too long, particularly if you try to use Axis2 v1.0 (which I just couldn't get to work). Switching to Axis 1.4 picked up the pace considerably, and thanks to some server coaching by Mr Hedges it's now up and running.

There are around 70,000 geolocations that I managed to extract from Wikipedia. Some of these aren't accurate, for several reasons
1) They just aren't accurate in Wikipedia
2) There were multiple co-ordinates on the page, so I just picked the first one
3) There are 8 formats that I identified; there could be more, which would be parsed wrongly.
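As an illustration of why the multiple formats hurt, here is a hedged sketch handling just two template styles. The patterns are my guesses at the syntax, not the real parser: any page using a style the parser doesn't know silently falls through to null, which is exactly failure mode 3 above.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class CoordFormats {
    // e.g. {{coor d|51.4775|N|0.4614|W}} - decimal degrees
    static final Pattern COOR_D = Pattern.compile(
        "\\{\\{coor d\\|([0-9.]+)\\|([NS])\\|([0-9.]+)\\|([EW])");
    // e.g. {{coor dm|51|28|N|0|27|W}} - degrees and minutes
    static final Pattern COOR_DM = Pattern.compile(
        "\\{\\{coor dm\\|(\\d+)\\|(\\d+)\\|([NS])\\|(\\d+)\\|(\\d+)\\|([EW])");

    // Returns {lat, lon} in decimal degrees, or null if no known format matches.
    public static double[] parse(String wikitext) {
        Matcher m = COOR_D.matcher(wikitext);
        if (m.find()) {
            double lat = Double.parseDouble(m.group(1)) * ("S".equals(m.group(2)) ? -1 : 1);
            double lon = Double.parseDouble(m.group(3)) * ("W".equals(m.group(4)) ? -1 : 1);
            return new double[] { lat, lon };
        }
        m = COOR_DM.matcher(wikitext);
        if (m.find()) {
            double lat = (Double.parseDouble(m.group(1)) + Double.parseDouble(m.group(2)) / 60.0)
                    * ("S".equals(m.group(3)) ? -1 : 1);
            double lon = (Double.parseDouble(m.group(4)) + Double.parseDouble(m.group(5)) / 60.0)
                    * ("W".equals(m.group(6)) ? -1 : 1);
            return new double[] { lat, lon };
        }
        return null; // an unrecognised format: silently dropped, or worse, mis-parsed
    }
}
```

Multiply this by eight formats (plus the ones nobody has spotted yet) and the error bars on those 70,000 locations become clear.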

So there it is: extracting 70,000 locations out of unstructured information and repurposing them via a Web Service for external access. A couple of notes:

1) The Wikipedia -> DB translation is done offline as a batch
2) Name based queries are limited to ten returns
3) Any issues let me know
4) Don't rely on any information in this for anything serious.

The next stage is to get some Web Services up that run WS-Security and some other WS-* elements so there are more public testing services for people to use.

Technorati Tags: , , , , ,

Thursday, November 02, 2006

Heisenberg's SOA - the uncertainty of state

Heisenberg's uncertainty principle in physics basically says that you can't know both where something is and how fast it is going: the more accurately you know one, the less accurately you know the other.

The thread reference above was talking about statelessness and whether this is a good or a bad thing (answer: depends) but it raised an interesting thought in my mind about back-end reconciliation and the non-availability of actual present state.

Like the busy beaver problem (you can't know how many resources something will take) and the halting problem (or when it will stop), there is another problem in complex systems: as a consumer you cannot know what the current internal state of a service is, and indeed you shouldn't care.

A few examples:

When I place an order I get a new order ID, before I make the call I have no idea what that ID will be, but it is a critical piece of information for me as it is my key into the tracking process. That ID represents a critical piece of state information that is internal to the service.

How the company "picks" its products in the warehouse and handles its inventory also affects when I will get the product. If they decrement inventory at sale then I should get it quickly if it said it was in stock. If they decrement at pick then I'm in a race condition with others to get the item. Again this is a question of the internal representation of state and its processing by the service; as a consumer I have no way of knowing that state before I ask the service the question, and I cannot guarantee that the answer will be the same when I make my next request to the service.

All very obvious of course, but it does have an impact on the concept of stateless systems. It means that if you try to be stateless by having "before" and "after" requests then they are liable to fail, as something else will alter the "before" state independently of what you are trying to do. It's even more complex in those areas that do backend consolidation, either based on timestamps or (in the case of a certain airline) on when they receive the request on the mainframe. In these cases it's impossible to say at 1pm what the 1pm answer actually is; the system doesn't know yet, and you certainly don't know.
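A minimal, self-contained sketch of that failure mode, with hypothetical names and an AtomicInteger standing in for the service's internal inventory state:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class StockRace {
    // Stands in for the service's internal state: one item left in the warehouse.
    static final AtomicInteger stock = new AtomicInteger(1);

    // Racy consumer: a "before" request (check) and an "after" request (order).
    // Another consumer can change the state between the two calls.
    static boolean racyOrder() {
        if (stock.get() > 0) {                   // the "before" answer can go stale
            return stock.decrementAndGet() >= 0; // can oversell under concurrency
        }
        return false;
    }

    // The service owning the transition atomically: no stale "before" state.
    static boolean atomicOrder() {
        int s;
        do {
            s = stock.get();
            if (s == 0) return false;            // genuinely out of stock
        } while (!stock.compareAndSet(s, s - 1));
        return true;
    }

    public static void main(String[] args) {
        System.out.println(atomicOrder()); // true: the first buyer gets the last item
        System.out.println(atomicOrder()); // false: the state moved, as it should
    }
}
```

The compare-and-set loop is the in-process analogue of letting the service own its state: the consumer never holds a "before" answer that something else can invalidate.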

This uncertainty of state is actually a good thing, it is letting the service manage its world in the most effective way that it can on behalf of its consumers. If the state had to be stored externally to the service (in some process message or something) then the level and degree of complexity would be unmanageable.

So maybe the uncertainty principle in SOA is that the simpler the interaction with the service, the less you know about the impact.

Statelessness is good sometimes, but it's not something to pursue at the expense of simplicity.

Technorati Tags: ,

Thursday, October 26, 2006

Technologists to shoot...

Thanks to Gervas Douglas and the yahoo list I've got a cracking example of the sort of thing I mean....

According to Forrester, an ESB typically includes communication
infrastructure; request routing and version resolution (mediation);
transformation and mapping; service orchestration, aggregating and
process management; transformation management; security; quality of
service; service registry and metadata management; extensibility for
message enrichment; monitoring and management; and support for the
service life cycle.
Oh god, how complex do you want one product to be?

But by far the biggest change in the ESB market, he said, is "you find
more and more larger vendors adding either a standalone ESB, like IBM
and BEA, or including those features in basic integration suites, like
Tibco and WebMethods and Oracle. Over time, I believe standalone ESBs
will gradually be consumed by the larger vendors and become more basic
infrastructure technology," Vollmer said.
Genius, this doesn't actually make any sense at all. Standalone ESBs (as produced by IBM/BEA) will gradually be consumed by the larger vendors? What on earth does that mean? He has just said that the larger vendors are either choosing to a) treat the ESB as a lightweight platform to link things or b) lob the features onto existing products. I have three issues with this

1) Oracle's ESB is miles more like IBM/BEA than it is Tibco/Webmethods... and I'd bet more on them going for that model in future
2) The consumed line makes no sense
3) How on earth can the ESB, with the millions of features he described before, ever become a "basic" infrastructure technology?

This is exactly the sort of thing I was ranting about in the last post. The whole article is purely about how technologies will "solve" problems and move you towards SOA, and there isn't even a consistent line about what the technology is.

Technologies won't solve those old problems. If SOA is just about building applications in the same old way using Unicode based transport mechanisms then we've regressed. If its about using EAI type tools, then we've regressed.

Not one of these people actually suggested that SOA was about altering the way you think about systems. It read exactly like a bunch of people arguing that C++, Java, Smalltalk, Eiffel, UML, BON, Shlaer-Mellor et al were the "way to do OO", rather than OO actually being about a shift in the way people think about designing systems. SOA works a level above OO in that it is about the architecture of multiple as well as single systems, but still the line is pushed that buying this technology will solve the problems created by the one they sold you last year.

Technorati Tags: ,

SOA - shoot the technologists

There are lots of statements made about SOA not being something you can buy from a vendor. I'm getting more extreme in my views... I think it isn't even something you can build. Clearly there can be technology delivered that meets an SOA, but can you actually do SOA if you are thinking about it as something that is just built?

People seem to be approaching SOA more and more as something that is resident in IT only, or even worse is where IT "understands" the business and builds things that are more responsive... without actually getting the business to own or define anything.

If SOA is going to actually make a difference to what is a failing industry then it needs to address the structural problems, rather than just trying to deliver a new set of technology projects. 80%+ of IT spend is on "business as usual", aka "keeping the lights on": the support and bug fixing of current solutions, paying the maintenance licences, patching things and generally just keeping them on life support. On top of that, 70% or so of new projects fail to deliver what was expected.

Do the arithmetic (30% success on the remaining 20% of spend) and around 6% of IT spend actually delivers new value to the business. This is fundamentally broken. Therefore if SOA is going to succeed it needs to help more projects succeed, and more importantly either reduce the 80% spend or help deliver more value for that 80%. This means that SOA truly has to be about how you govern and deliver IT, including how you continually deliver IT to the business once it has gone live.

This is why I'd argue that SOA isn't even something that gets built; it's something that becomes part of your culture, changing all aspects of your IT organisation. Otherwise it really will be just another techy idea that delivers little or no value down the line.

IT is a broken industry, it takes more than an ESB, some Web Services and changing some design guidelines to fix a problem this large.

Technorati Tags: ,

Monday, October 23, 2006

When to kill your SOA project

There has been a decent amount of press recently about SOA projects "failing to deliver value", or indeed just plain failing. This isn't much of a surprise in an industry where failure has been the norm for many a decade. What I'd like to know about these failures is whether the teams had read Mythical Man Month, Peopleware and Death March. In other words, are the projects failing because of "SOA", or because the organisation wasn't able to do that sort of project with the tools it had? Organisations that see SOA as a technology-based thing are particularly in danger of this sort of problem, as they try to solve problems with technology without truly understanding what they are delivering. This "lipstick on the pig" approach to SOA tends to deliver failing projects pretty much all the time, as it is an attempt to solve an underlying major problem by applying a new technology as a sticking plaster.

SOA projects are not "special" and mystically better than non-SOA and you need to know when to kill them off and the three books above will give you some good indicators. But some very obvious ones are

  1. Project has slipped 50% on its timescales, and you aren't even developing yet
  2. You are using early access software and you don't have a link to the dev team
  3. Most projects failed under the old process and it hasn't been changed
  4. There was no additional time added for learning new technologies
  5. The vendor cackled when selling you the product
  6. You've included several brand new products, and its taking much longer to integrate than expected
All of these should help you to can the project early and do a proper post-mortem on the project so you can get it right next time.

In many ways you should more often be looking to kill off new SOA projects, as they will have new technologies, new process requirements and new governance requirements, and the odds are you'll get it wrong (particularly if you don't get outside help).

If you really want SOA to succeed then you have to fix the basics first: this means understanding the business services, having the right process and governance model, and putting in place the right training to make sure it all works.

SOA isn't magic pixie dust to be added to projects to make them succeed, technology based SOA in particular suffers from the challenges that any technology driven exercise does, namely it rarely delivers what the business wants.

Kill early, kill often. I've seen far too many projects fail at year 2 that had been known as failures since week 1.

Technorati Tags:

Wednesday, October 18, 2006

SOA is more about old technology than new

I've been spending a lot of time recently looking at how a service oriented approach helps around older applications, in particular how it helps to move them from being static systems into more agile and responsive ones. During this I've done quite a bit of research into how much people spend just on "keeping the lights on" as opposed to actually moving forwards.

It's about 80% of the IT spend out there. Which means that for SOA to actually matter it isn't going to be about new technologies, Web Services and the like; it's going to be about how we reduce that 80% and start making existing systems pay for themselves.

The goal of SOA is often touted as being the regeneration of these estates, but most of the time I'm just seeing people proposing new IT projects that aim to put "lipstick on the pig" of legacy applications. This isn't solving the problem, it's just trying to hide it, and it is liable to be about as successful at hiding it as EAI was when it promised to do the same thing in the late 90s and early 00s.

This is why I argue that SOA has to be about changing the way you think about your IT organisation and the IT estate. It has to impact not just new technology builds, but more critically it must impact the old technology. This means that SOA must change the way that application maintenance is done, the way that help desks are set up and the way that change requests are handled. It even means changing the way IT is organised and in particular the current "handover" of applications to the life-support system that most applications move into once they go live.

SOA's buzz and hype is driven by the desire to sell new projects and new technology licences, which misses the opportunity for most companies: to move their IT organisation away from a project or technology focused approach towards being truly organised around, and responsive to, how the business operates.

People have often said "you can do SOA with CICS/COBOL" and they are right; I'd go further in fact. If your organisation isn't doing SOA in its legacy and old ERP systems then you are just playing with new toys and aren't actually solving the problem that IT has created.

Technorati Tags: ,

Sunday, October 08, 2006

AXIS2 debugging between London and Mumbai

Well I had a 9 hour flight today so I thought I'd try and get AXIS2 working... it must be something I've done... right?

I'm using a simple WSDL file with 3 methods, and used WSDL2Java to generate the stub, the skeleton and the test code. This didn't work; however, thank god for open source, as it meant I could trudge through the code to understand why on earth things didn't work properly. The service was basically:

interface GeoLocation {
    GeoLocation findByIATA(String iata);
    GeoLocation findByICAO(String icao);
    GeoLocation[] findByName(String geoName);
}

And no matter which method I called, the server ALWAYS saw it as findByIATA. That's a bit of a challenge, as the SOAP messages and responses are typed to the specific request, so SOAPFaults abounded as it complained I was passing the wrong information to the method, when in fact I was getting the wrong method for the request!

First top tip: Axis2 doesn't have hot deploy; if you upload the same WebService archive it doesn't take effect until you restart Tomcat, which is a bit naff.

Second top tip: download the Java VM documentation; being at 35,000 feet over Iran isn't the best place to discover you've forgotten the arcane syntax for starting Tomcat in remote debug mode.

Third tip is that when Tomcat says "Failed to shutdown" it's lying: it's shut down, dead, gone, buried and out of there.

These combined led to a very painful cycle of adding logging messages through the code in a great bug hunt. First off I thought it was a client side problem; this was denied by the client by the cunning ruse of actually having the right SOAP envelope. So the fault was on the more painful element to debug, the server. A quick cointreau to stiffen my resolve and it was confirmed that all of the right information was passing across the wire.

Next up, the great issue of the Axis2 source build not actually producing something that runs. So it's the world's nastiest development approach: taking a working jar file, exploding it and inserting a new .class file... nice. This narrowed it down further: the loading of the different end-points was being done correctly, but the mapping of the request onto the end-point was going horribly wrong.

Landing in Mumbai brought up the solution for the debugger: setting JAVA_OPTS in Tomcat to -Xdebug -Xrunjdwp:transport=dt_socket,address=8000,server=y,suspend=n. The problem I'd had on the flight was that I'd put the address in as localhost:8000 and it didn't work.

So surely now we're moving down the home straight for debugging: a simple watch expression helped to determine when the AxisOperation was set to the wrong thing... oh wait, it wasn't 100% simple, because the naming conventions are all over the place: msgContext, msgctx, messageContext... this is why naming conventions do actually matter. The target was then identified as the SOAPActionBasedDispatcher from Axis2.

Then we find it after a couple of step-intos; surely this must be the answer, "findOperation"... ummm, but SOAPAction is a blank string. I hate it when the problem is actually right at the start but you find it at the point where the variable is finally processed. The reason it's finding SOMETHING to call is that the mapping table in Axis2 has a bunch of options including "", which is assigned to (I assume, and now I've got to find out) the first operation it gets.

So now it's about debugging right from the start again and finding out why that SOAPAction isn't being set correctly... oh great, we are into the servlet API now. It's now looking like I'm back to square one and it might actually be an issue on the client side after all (the HTTPHeader doesn't appear to have the SOAPAction set to anything). Then it's down to MessageContext, and oh yes, here we have a cracker.

MessageContext.getSoapAction() returns options.getAction(), which is null. However msgContext.getAxisOperation().getSoapAction() returns the right value. Another cracking example of how not to code: having two different places to store what is supposed to be the same value, without keeping them in sync, isn't a bright idea.
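For clarity, a stripped-down sketch of the duplicate-state bug and the shape of the fix. These are hypothetical classes, not the real Axis2 source; the point is that the same fact lives in two fields and only one of them is ever populated.

```java
public class DuplicateStateDemo {
    static class AxisOperation {
        private final String soapAction;
        AxisOperation(String soapAction) { this.soapAction = soapAction; }
        String getSoapAction() { return soapAction; }
    }

    static class MessageContext {
        private final AxisOperation op;
        private String action;               // second home for the same fact, never set

        MessageContext(AxisOperation op) { this.op = op; }

        String getSoapActionBuggy() { return action; }              // null: the wrong copy
        String getSoapActionFixed() { return op.getSoapAction(); }  // single source of truth
    }

    public static void main(String[] args) {
        MessageContext ctx = new MessageContext(new AxisOperation("urn:findByICAO"));
        System.out.println(ctx.getSoapActionBuggy()); // null, so dispatch falls back to ""
        System.out.println(ctx.getSoapActionFixed()); // urn:findByICAO
    }
}
```

The fix is exactly the delegating getter: keep one authoritative copy and make the other accessor read from it, rather than hoping both fields stay in sync.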

Now we move onto another patch of Axis2, and I run into another annoyance: the Maven build is now farting over some test element it can't do, something to do with JiBX... errr, I don't care if the JiBX tests fail, I want my sodding jar file, so it's explode and combine time again...

And then it all works fine. So to the chaps at Axis: the fix is to modify MessageContext.getSoapAction() to just reference the SoapAction of the AxisOperation. Bug raised and reference here. It really was a bit worrying, though, to see some basic development practice errors, particularly what the bug turned out to be: duplicate places where the same information could be stored.

The worst bit about this is that I've just wasted 4 hours that I should have spent asleep, because I just couldn't let it lie.

Technorati Tags: , ,

Thursday, October 05, 2006

Schema envy

I've been messing around with some of the official schemas out there with the goal of breaking tools. The one linked from the title is an impressive 240k or so and includes 6 other schemas which aren't exactly slim either. It's a cracker to throw at the DOM based parsers out there; for some reason they have issues.

But it made me think: there is the Acid test for CSS, but where is the evil test for Schema that really sets out to break tools? One that implements all of XML Schema and uses it in fully compliant, but deeply twisted, ways.

Anyone know of the ultimate test schema?

Technorati Tags: ,

Tuesday, October 03, 2006

Axis 2, Maven and the problems with distributed systems

I've had a lesson this evening in the fragility of distributed systems. I decided to try out Axis 2, the 1.0 release, and to further the education I brought down the source version and decided to build it. Now Maven works with a whole list of repositories and Axis has a whole heap of dependencies, which in themselves have dependencies.

And like any good distributed system where you don't have control over the remote servers... these dependencies don't exist anymore. One was maven-itest-plugin, the other was a rather specifically named stax-utils-20060501.jar, and there could have been others. The solution, for anyone who wants to know, is to edit the build file and add another repository; it all seems to work fine then.

But here is a good open source project, run by good guys who know their stuff, and they are being bitten by the age old problem of distributed systems. Namely that they've designed something, or are using something, that assumes that everything is always available. This is a common mistake that people make when building distributed systems and SOA isn't going to solve this problem any time soon. What this example does show is that building robust systems is all about being prepared for failures and building systems that are designed to fail gracefully.
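A small sketch of the pattern being argued for (the names are mine, nothing to do with Maven's internals): anything fetching from remote locations should try alternatives and fail with a useful message, rather than assuming the first server is always there.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.function.Supplier;

public class GracefulFetch {
    // Try each mirror in turn; fail loudly and informatively rather than
    // assuming the first repository will always answer.
    public static <T> T fetchFrom(List<Supplier<T>> mirrors) {
        List<String> errors = new ArrayList<String>();
        for (Supplier<T> m : mirrors) {
            try {
                return m.get();
            } catch (RuntimeException e) {
                errors.add(e.getMessage()); // remember why this mirror failed
            }
        }
        throw new IllegalStateException("all mirrors failed: " + errors);
    }

    public static void main(String[] args) {
        String jar = fetchFrom(Arrays.asList(
            () -> { throw new RuntimeException("repo1: 404"); }, // dependency gone
            () -> "stax-utils-20060501.jar"));                   // a mirror still has it
        System.out.println(jar); // stax-utils-20060501.jar
    }
}
```

That's the whole of "designed to fail gracefully" in miniature: assume remote things vanish, have a fallback, and when everything fails say exactly what went wrong.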

Oh and after I got it built and deployed it exceptioned with a NullPointer in the init...

Technorati Tags: , , ,

Sunday, October 01, 2006

Virtual Machine top tip of the day

After wondering why my computer was running like a dog, I checked the CPU usage and saw, much to my surprise, that the Linux VMware image was running at 100% of one of the cores. What was going wrong, I wondered?

Then I suddenly realised that the default screen saver for Fedora is one of those pretty animated ones. This reminded me of when people had X terminals rather than workstations, and the bane of the sys admin's life was people running XEarth and the like, which destroyed both network and CPU.

So if you are using VMs, and aren't running them headless, then remember the screen saver!

Technorati Tags:

Friday, September 29, 2006

Worst description of Java Beans ever?

JavaBeans is a component object model with API's that extend Java's reuse capabilities by enabling third parties can create applications from reusable components. JavaBeans is the Java analog of Microsoft's ActiveX, and of OMG's CORBA. Unlike ActiveX, JavaBeans is intended to be platform-independent.

By an assistant professor at the University of Virginia.


Technorati Tags:

Wednesday, September 27, 2006

Enterprise SOA Adoption strategies

Whey hey, I've had a book published by the nice chaps at InfoQ. Free download if you register, but of course it's much nicer to have a hardcopy on your shelf at home. The book is an SOA book, not a SOD IT book, and I've tried to cover as much as possible on how to define your SOA and the potential impact that this could have.

Comments, criticisms, recommendations and requests welcome.

Technorati Tags: , , ,

Monday, September 25, 2006

Why Best of Breed and loose coupling doesn't always work

Over the weekend there was another great example of why best of breed software choices aren't always the smartest. It demonstrated why the ability to work together to get something done is more important than just lobbing together the best things you can find. Sure, on paper the Americans had the better players... but the Europeans had the better team, much the better team as it turned out.

From an SOA perspective this is an important lesson: it's actually the links between services that are the critical success factors, rather than just the services themselves. When people talk about getting in lots of best of breed services and just orchestrating them together, they are missing a key point about cohesion.

Loose coupling isn't the best thing in the world all of the time. Sometimes things are better if they are naturally cohesive and actually have a high degree of coupling. This can make it easier to get them to work together and actually deliver more value because of this cohesion.

So be very careful when obsessing about loose coupling or using only "Best of Breed", think about the right level of coupling and the services that work for you.

And remember... don't listen to Accenture, when doing SOA its better to be a team than a tiger.

Technorati Tags: ,

Wednesday, September 20, 2006

SOA RM - exit poll indicates it's going to be a standard

Okay, there are still a few days to go on the vote, but the standards vote for the SOA Reference Model has passed the threshold it needs to become a standard, so as long as nobody votes "no" it's going to make it. This means that for the first time in the history of SOA there is actually a standard that isn't about shifting XML from one place to another but is about the key concepts of SOA and the guiding principles that people applying SOA should abide by.

Phenomenal achievement by the team and in particular the chair who herded the cats, nice one Duane.

If you are doing SOA and you haven't read it already, then you really are causing yourself a lot of pain when you are trying to explain SOA to people. Get yourself a copy of the SOA Reference Model and have a read. If you use this as the definition of SOA in your organisation this gives you a much firmer base both with the business and IT and stops squabbling over minor points of detail. When you do use it, let the group know, we'd appreciate the feedback.

Whether you are doing WS, REST, Business Service Architectures, Shared Services or People Services the OASIS SOA Reference Model can define the basic principles in a consistent manner.

At last we have a standard for SOA that is about SOA and not about the technology.

Technorati Tags: , , ,

SOA Meeting Rules

From Steve's book of applying the blindingly obvious to SOA...
  1. Meetings shall have an agenda which will
    1. Identify which services an item refers to
    2. Have a proposed lead for the item
    3. Include a RACI for each item
  2. During meetings only people who are R, A or C in the RACI can talk on an item; if you are merely "informed", shut up
  3. Minutes will be kept for the meeting, with actions assigned to both services and individuals; these will be translated into work tasks or bug reports associated with the service.
All I'm really saying is that SOA doesn't mean you can just get a bunch of people in a room without an agenda and hope to actually achieve things.

Technorati Tags: ,

Monday, September 18, 2006

Dark SOA revisited

Having been involved in a couple of REST v WS discussions recently, I've decided that not only does Dark SOA really exist, it is actually more like a black hole than Dark Matter. We all talk about business agility and flexibility, but then start arguing over the technology and pretty much forget about the business side of things in a mistaken belief that technology is what they wanted in the first place.

This attractive force, the desire to focus on technology rather than the business, is really damaging to getting SOA to deliver on its promises; it's creating a black hole into which IT people continue to fall while still protesting that everything is fine.

If SOA is technology then it's another TLA, another black hole down which money and broken promises are thrown. And while it's kind of interesting to watch these projects and ideas fall to their deaths and be compacted down on top of the previous generations of IT to form a super-dense impenetrable IT estate, this time with XML as well, it's not much fun when I realise that I'm probably going to have to work at some of these companies at some stage in my career.

Technology SOA is Dark SOA: a black-hole-creating supermass that brings everything down with it.

The clue is in the question folks, want business agility? Focus on the BUSINESS

Technorati Tags: ,

Thursday, September 14, 2006

Why REST v WS is irrelevant in two pictures

So here is a simple picture to describe how I think people should expose internal functionality to external consumers. There are a couple of things here
  1. Internal Services have "business" interfaces, these are what the developers, analysts, architects and the business understand in terms of the capabilities being offered. This is the "code" interface (not XML)
  2. NEVER EVER EVER expose this directly. This means you now have tightly bound your internal code to your external customers
So the first thing is to add a Facade where mediation can occur if required. The Facade still presents a business-style interface, it's server side and it's still not in XML.

Then you sit down and decide how to expose it externally. Will it be REST, will it be WS? Will you decide to use pub/sub notifications, will you decide to use flying monkeys? Quite frankly who cares?

This external service is the one where you will decide to plump for WS or REST, or hell why not go mad and support both. Now converting a business interface into REST probably means having multiple URIs to represent the different managed resources, but that isn't exactly difficult either.

All we need to do in this case is run three external interfaces, which are just doing data exchange; there is no actual functionality here as that lives down in the backend.

This way, as our backend functionality changes, we can prevent our clients from having to upgrade directly, and can in fact use this approach to support multiple interfaces concurrently, based on a single business service implementation.
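The pattern can be sketched in a few lines. This is a deliberately minimal illustration, assuming an invented `OrderService` with a single `place_order` capability; the class names, fields and URIs are all made up for the example, and real adapters would of course sit behind an HTTP and a SOAP stack.

```python
# Internal business service: the "code" interface that developers, analysts
# and the business understand. Never exposed directly.
class OrderService:
    def place_order(self, customer_id, items):
        # real business logic lives down here in the backend
        return {"order_id": 42, "customer": customer_id, "items": items}

# Facade: still a business-style, server-side interface, still not XML.
# This is the mediation point: validation, auditing, version shims, etc.
class OrderFacade:
    def __init__(self, service):
        self._service = service

    def place_order(self, customer_id, items):
        return self._service.place_order(customer_id, items)

# External interfaces: pure data exchange, no business functionality.
class RestOrderAdapter:
    """Maps resource-style calls (e.g. POST /customers/{id}/orders) to the facade."""
    def __init__(self, facade):
        self._facade = facade

    def post_order(self, customer_id, body):
        result = self._facade.place_order(customer_id, body["items"])
        return {"status": 201, "location": f"/orders/{result['order_id']}"}

class WsOrderAdapter:
    """Maps a WS-style document call to the same facade."""
    def __init__(self, facade):
        self._facade = facade

    def place_order_operation(self, request):
        result = self._facade.place_order(request["customerId"], request["items"])
        return {"orderId": result["order_id"]}

# One business service implementation, multiple concurrent external interfaces.
facade = OrderFacade(OrderService())
rest = RestOrderAdapter(facade)
ws = WsOrderAdapter(facade)
print(rest.post_order("c1", {"items": ["book"]}))
print(ws.place_order_operation({"customerId": "c1", "items": ["book"]}))
```

The point the sketch makes is that swapping or adding an adapter (REST, WS, pub/sub, flying monkeys) never touches `OrderService`, which is where the business flexibility actually lives.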

Equally we can make changes to the external interface, to add security (always a good idea), to optimise the interface or to move it from a custom schema to an industry standard one. The important bit here is that you've modelled your business service correctly and it has the right capabilities which you are then choosing to expose. The mechanism that you use for exposure isn't important at all and 100% isn't where the flexibility comes from.

Flexibility comes from that internal service being able to react to the business and change with the business. That is the architectural challenge. REST v WS is an implementation challenge, which is something to worry about (its part of SOD IT) but its not what gives business agility or flexibility.

I'd really strongly recommend that people always use the pattern above, its saved me lots of pain over the years.

Technorati Tags: , , ,

Wednesday, September 13, 2006

Microsoft commits to Web Service Openness

There has been quite a bit of noise going across the wires after Microsoft announced that they are allowing anyone to use their patents with respect to several key WS-* standards. It's their Open Specification Promise. It only covers those patents described in detail (rather than referenced), which is a bit of a shame, but it's a big step in the right direction towards removing ridiculous patent blockers to technology adoption.

It includes WSDL, so I'm assuming that applies to any WSDL specification, and a whole heap of the latest and greatest standards that people will begin to want to use.

Nice one Microsoft, just one small question...

What about BPEL? Is it that there are no Microsoft patents in this area, or that these aren't being opened up yet?

Technorati Tags: ,

Sunday, September 10, 2006

SOA Governance - an IT organisation

One of the big questions of SOA is what it does to the normal IT organisations. I've blogged a couple of times about how SOA helps create a series of smaller projects inside a larger programme which gives you smaller teams and helps (even without using Agile methods) create more Agile projects.

This move towards service development pretty much demands that you move towards a service oriented organisation. To ensure that services are developed and maintained effectively you don't want lots of different people having to keep learning how a service works, and as SOA is about becoming more business oriented you need to maintain that knowledge at all levels. This means dedicating people to specific areas and services.

This means changing the IT organisation and its structure to match the business service architecture, including treating IT as a business domain. Taking the Level 0, Level 1, Level 2 approach from the SOA methodology this drives through into the organisation structure.

There are three distinct areas in the org chart
  1. Business - The business owners of the various services, potentially delegating authority on lower level services to people inside IT while retaining a sign-off role
  2. Service Delivery - The part of IT that delivers the business services
  3. Platform Delivery - The part of IT that delivers the underlying platforms and the technical services that are required to deliver the business services.
Each of these has different KPIs, and the Service Delivery teams have two sets of measures.

Business - Must commit to a long term view of IT if they want an IT organisation that evolves in the way they want and reduces the overall cost of IT and gives them the dynamic change they want. This means agreeing to committed plans that enable multi-year planning, not multi-week. It also means that the business must be aware that if they do want to do something tactical that it will require an "offset" investment.

Platform Delivery - Responsible for defining the IT standards and delivering the mechanism for measuring and enforcing them. This means providing the base platforms and functionality. It is this team that is responsible for defining what technical tools should be used and how they should be implemented.

Service Delivery - Measured by the business against delivery and by platform against quality.

Business Services

Each service has two "heads", one from the business and one from IT. The business person is responsible for defining what the service is; the IT person is responsible for how it is delivered. Questions like "XML Schema" et al are the responsibility of IT.

Within a Service (e.g. Level 1 Services inside Level 0) the IT leader is responsible to the leader of the level above.
And for each area you have a similar structure (depending on the service structure)
The most senior architect within a given area is responsible to the platform team for the quality of their delivery. This architect is responsible for working with the platform team to determine the best technology approach. The business service architect knows the drivers and change required, and the platform architect aims to deliver a consistent infrastructure so together they've got to select the right solution to deliver both the service, and the overall estate.
The Platform team provides the technical expertise to make sure that the business services are delivered correctly. It might even include some common technical elements that are required across all the different service areas, and critically someone to make sure that the data being collected is of a decent quality and at least vaguely consistent! The platform team isn't service based in the way that others are, its capability based, so in here you get the testing teams, deployment teams and all of those bits that everyone requires.

Then once we have this all in place and we've targeted people as owners (and please do note that one architect or business person can support multiple services within a domain if they are all relatively small) we actually need to get onto development.

The recommendation here isn't to have every service with a dedicated delivery team, you just want a couple of core people to be permanently assigned, the architect, analyst and probably the technical lead. It is a good idea to have developers at least generally assigned to one service area (Level 0 or Level 1) of course.

When you have a programme kicking off, the first job is obviously to determine the services that are impacted by the programme. This means that project/programme managers live elsewhere in the organisation and are assigned to programmes as they are kicked off. Once this is done the programme manager and the lead service architects from the impacted services get together (and hint of the day: restricting programmes to one top-level service makes things easier) and decide on the right approach and delivery method(s). Then you assign the resources to the programme, which will then be directed into the various service teams.

The programme team is responsible for the overall requirements and delivery, and making sure that the various service changes are integrated together and the quality of the final programme delivery. The development effort is led by the service lead (or architect) with resources assigned by the programme manager.

If platform changes are required then it's pretty much the same, except that it includes assignment of resources into the platform team. Once the programme is finished it's transitioned into standard running and the development teams are wound down.

During the delivery the service leads are responsible to their senior architect and the platform architect for the quality of the delivery. The testing team that is assigned comes from the platform team and is responsible for ensuring that the delivery meets the platform standards. This dual responsibility is aimed at reducing the "rush to live" quality cutting that reduces the long term flexibility of the solution. Sure sometimes the programme has to go live, but again this means that it needs some offset funds, the things the business committed to up front.

The key here is that everyone is bought into the service architecture, the governance is there to enforce what needs to be done and it does that by taking that service architecture as its base. This makes sure that SOA isn't just a series of powerpoints, its baked into how IT operates and delivers.

Technorati Tags: , ,

Saturday, September 09, 2006

SOA - Federation and the pursuit of liberty

Way back when, Microsoft announced Passport as the way for applications to share identities, namely by having a "trusted" party in the middle (Microsoft) who would validate all the identities. Now this didn't go well for several reasons: the fact that the banks didn't like the concept of a single organisation basically being able to tax every transaction was certainly one, and the lack of general trust in Microsoft's privacy rules was certainly another. The alternative, much preferred by the banks and by people who don't like the idea of one company holding everything, is to have a federated security model, and thus was born the Liberty Alliance.

Now federated identity is much to be desired in SOA, as it provides a great way to be more loosely coupled around one of the critical NFRs on these systems, but it is pretty hard. Looking over at Macehiter Ward-Dutton (the analysts who blog) I came across an article on the latest round of companies to pass Liberty certification. This is great news: as we move towards more collaborative business applications this sort of security problem is going to become much more common. Imagine doing multi-supplier collaborative applications without decent federated security; you'd end up in the old B2B application scenario that requires a big blob in the centre.

Liberty will hopefully enable applications to better collaborate between organisations, this is certainly going to be critical as systems become more external and more dynamic.

Technorati Tags: , , ,

Thursday, September 07, 2006

Efficiency in the IT Supply Chain

I was speaking with Bola Rotibi at the non-blogging analysts Ovum (get with the programme guys) today about trends and skills in the future and we got talking about what roles won't exist or will be squeezed as these new SOA approaches and technologies begin to take hold.

So we got chatting about thinking of IT delivery as a supply chain, and about what the most inefficient parts of the current IT supply chain are. Now, unlike a decent SOA approach, the current supply chain is very, very process oriented. The most common supply chain these days (and sure, Agile people can have a shout about being better, but let's just wait a minute eh?) looks something like this:

(Apologies for the huge gaps on the image but I've had to use PPT and its "Save As" is slide only not just selected (ala Open Office))

Now the key bit in this inefficient supply chain is the production of Word documents (again, Agile folks, stop shouting at the back) but in large scale organisations there hasn't really been a choice (anyone seen Agile in a 30,000 employee organisation done top to bottom?) and this supply chain has been repeated over and over again.

Now what should be the goal of the new supply chain for IT? Well, taking a lesson from... IT... the key here is going to be automating the wasteful human parts of the process, namely the human interaction steps that produce only intermediary information. This means our new process needs to be something like this:
So this means a few things. Firstly, we need a hell of a lot better tooling than the current approaches (and I include cue-cards and the post-it note mock-ups that people pass off as better than Word). Secondly, the people whose current job is just to act as an interface between the business and IT should be getting a little nervous, along with the people whose job it is to take architectures and requirements and create designs that are then handed to developers. Finally, the level of skills of people needs to be raised. So while tools will automate out parts of the IT supply chain, what remains needs a higher degree of skill than currently exhibited by most people in these roles. No longer will architects have the comfort zone of designers who brush off the rough edges from their powerpoints, nor will system analysts be able to hide behind what the biz analysts create and hand over. It means that there need to be tools that outline what the business wants (business SOA) and then people need the skills to enrich that with the right solution.

So this is the future IT supply chain, and by that I mean the supply chain at the large scale, not just for "agile" point projects: it covers packages, software development and the whole lifecycle.

So if you view your current IT delivery as a supply chain, where do you see your inefficencies?

Technorati Tags: , ,