Wednesday, August 30, 2006

Java is dead - bugger I'll just do interesting stuff then

There have been a series of articles and blog posts about the impending death of Java, in particular J2EE, because it's too complex. Now I've got a couple of issues with this: the first is that it seems to assume the height of IT sophistication is lobbing a web site on a database; the second is what "complex" appears to mean these days.

Taking the second point first, is J2EE really that complex? I've been playing about with the latest BEA, IBM, Sun (Glassfish) and Oracle stuff recently, trying to work out the "hard" bit. Now what I'm assuming here is that you don't want to actually code stuff if you can avoid it, so I've been using the JSF builders, playing with the new EJB3 data and invocation stuff, and also using some of the process engines out there to string things together, lobbing in some messaging stuff to try out EDA.

Now I'm a manager type these days, a PowerPoint jockey and creator of Word documents; what I mean is that I don't code on a day-to-day basis, development isn't my job anymore. So surely if I can do it, it's not that complex. When I compare it with J2EE 1.1 (which I wouldn't have touched with a barge pole) or J2EE 1.3 (the first one I took vaguely seriously), then Java EE 5.0 is an absolute dream. The presentation stuff is miles easier in JSF than it was in JSP, the EJB bit is a massive step up, and the Glassfish deployment stuff is sooooo much easier than doing it in app servers in 1.1, 1.2 and 1.3.

So where exactly is J2EE more complex for developers? Sure, it's got a lot of facilities, but I actually found it easier to understand and use, particularly with the new tools, than previous editions. Now clearly I've only based this assessment on using them to knock up some demo apps, rather than on a project. But have the people saying it's very complex even downloaded the thing?

Now the other bit about the death of Java, and J2EE, that confuses me is this idea that lobbing stuff on databases is actually an achievement and that web sites represent a massively important part of IT. First off you've got the re-architecting of SAP and Oracle application servers (just a multi-billion dollar market, but hey, it's not "sexy") around the J2EE platform, and the consolidation of the integration market around Java and J2EE. Oh, and of course the human workflow and business process markets, and pretty much all the RFID work being done out there, and the whole mobile phone market.

This to me is the thing about Java: sure, you could lob up a web page on a database with Java, but you can also build all of the bits that live everywhere else in the organisation. When looking at SOA this means you are pretty much guaranteed to have Java, and J2EE, within your organisation, if not today then certainly when you do that big applications upgrade. I've always been a fan of standardising on a single language for basic development, just because it helps when looking for people and means you don't have the increased complexity of management and support, and Java tends to stack up as the only valid language for that sort of approach.

So Java is dead, apart from all of the growth that it will drive in the bits of IT that actually represent a challenge, rather than lobbing web-pages on bloody databases.




Monday, August 28, 2006

SOD IT and be proud

Mr Lowe commented on his blog recently that I was chastising the use of SOD IT, and sure, I've had a go at people who mistake SOD IT for SOA, but there is actually another problem here.

People don't like to admit that they do "development" or "delivery" anymore; it's got to be all "architectural" and visionary, strategic and the like. It's a sad state of affairs when people have to dress up a great ability to deliver something as architecture in the mistaken belief that this makes it more important. SOA is about making the business vision drive IT, but somebody has to turn that vision into bits and bytes, which is why SOD IT is actually important.

SOD IT says that traditional delivery models based purely around projects need to change, towards programme management of service projects. SOD IT says that automation of testing at the unit and system test level is critical for ensuring that a service is operating successfully. SOD IT says that requirements should be gathered in line with the services and that development teams should be based around the services. SOD IT is not just the same old technology delivery but this time with a different picture to aim at. SOA fundamentally changes software delivery: it moves it from the project-oriented mentality of today towards a business-focused view.

So don't dress up the delivery as architecture, because it's not; be proud of the fact that SOD IT is where the rubber hits the road and the SOA vision is turned into operational reality.

SOD IT without SOA isn't possible, and SOA without SOD IT won't deliver. As much as businesses and IT need to change the way they think about systems and IT in general they also need to change the way they think about the specifics of IT delivery.

So next time you are looking at an IT project and you've already created your business focused SOA and are moving into delivery, remember what Nike say...

just SOD IT.



SOA and OO - it's not just data shifting

One of the biggest disappointments I've had in reviewing Java projects is the death of Object Orientation: sure, we have "Customer" as an object, but it's just about data shifting rather than the objects representing both data and behaviour. The rise of Web Services, and the tooling around them, is actually making this situation quite a lot worse, because when all of these tools generate objects from services they generate data shifters. The phrase POJO is held up as a good thing because POJOs are simple, and in some ways they are, but the question is whether these objects are becoming far too plain. Logic that in previous generations was hidden within the object is now isolated within classes that could easily be mistaken for C programmes, i.e. they just do data processing.

Let's split objects into a couple of layers of intelligence:

The data shifter - Basically a C structure in a class file

This is the worst case, where the object has just been generated and is a series of get and set methods with precisely zero behaviour, unless you count assignment as behaviour (which I wouldn't). Here you aren't getting any real benefits of using an OO language; as much as you pretend your code is OO, it's really C code in class files.

Valid fields - Taking the C structure and turning it into Ada

This is still really just procedural code, but at least it's Ada rather than C. At this stage the XML Schema/database restrictions have been applied to the generated objects, so you can only set valid values into the object. So if you try and set age to "-1" then an exception is thrown calling you a muppet. This level should be the very lowest that a system stoops to.
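To make the "valid fields" level concrete, here's a minimal sketch in Java. The Person class and its age rule are hypothetical, just the sort of thing a generator could spit out from an XML Schema restriction:

```java
// Hypothetical generated class at the "valid fields" level: the setters
// enforce the schema restrictions and nothing more.
class Person {
    private int age;

    // Assume the schema declares age as a non-negative integer, so the
    // setter rejects anything else rather than letting bad data drift on.
    public void setAge(int age) {
        if (age < 0) {
            throw new IllegalArgumentException("age must be >= 0: " + age);
        }
        this.age = age;
    }

    public int getAge() {
        return age;
    }
}
```

Still procedural, but at least the object can no longer hold a value the schema forbids.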

Valid Object - Object assures its integrity

The next stage is where the object starts doing more complicated rules, like checking that "Date of Birth" isn't in the future, or that "Total" isn't a field at all but actually a function that adds up all the elements in the "OrderList". This is a good place to be starting from; I've seen several cases where Order "objects" have a "Total" field that was passed from a Web Service along with the order lines, and then the total field becomes... well, bollocks.

The reason these problems happen is that on one side of the equation (let's say Service A) there is an object with an interface something like

interface Order {
    public int getTotal();
    public List<OrderLine> getOrderLines();
    .....
}

and the implementation of getTotal is something like

int returnValue = 0;
for (OrderLine nextLine : this.orderLines) {
    returnValue = returnValue + nextLine.getValue();
}
return returnValue;

When the WSDL is generated this creates two elements, "total" and "orderLines". On the other side this is then imported, and of course the dumb IDE has no clue that total is a function; it just knows it has a value. So now, while the interface looks the same, there is a get and a set method, and they both just do an assignment to a class variable.

So when generating these objects you need to check whether any of the fields should really be functions. It would of course be great if there was a way of describing this relationship within the XML document, so it could be automatically generated.

My Object - OO designing on both sides

Now the best place to get to would be a recognition that the needs on both sides around behaviour may actually be different. The basics around validity will be the same, but the behavioural aspects of the object will differ. So if I'm a supplier of widgets and I work with a supplier of doo-hickies to produce ooojamaflips, then my "Customer" object needs a "validate" method which checks that the customer doesn't have any outstanding bills, and it makes that check on my local systems. Equally, the 4PL that we use to do the assembly needs to check "validate" on customer against its own systems, and also needs a "notify" method on the order object that sends a message to the customer as the order's stage and position change through the supply chain.

SOA doesn't replace OO as a good practice, and the fact that SOA tools generate base classes that are thick as pig-shit doesn't mean you should use them directly. There are wonderful mechanisms in OO like inheritance which can enable you to extend these objects with your own service-specific behaviour. The piece that should be industrialised is the bit up to "Valid Object", as these rules should be integral parts of what the object is meant to represent. It really is ridiculous that people are still waiting until a database violation occurs before checking that the data is right.
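As a sketch of what that inheritance looks like, assume a generated data shifter called GeneratedOrder (the names here are mine, not from any real tool); a service-specific subclass can turn the "Total" field back into a function:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical generated data shifter - the sort of thing a WSDL import
// produces, with "total" as a dumb settable field.
class OrderLine {
    private final int value;
    public OrderLine(int value) { this.value = value; }
    public int getValue() { return value; }
}

class GeneratedOrder {
    protected List<OrderLine> orderLines = new ArrayList<OrderLine>();
    protected int total;

    public List<OrderLine> getOrderLines() { return orderLines; }
    public int getTotal() { return total; }
    public void setTotal(int total) { this.total = total; }
}

// Service-specific subclass that restores the behaviour: total is a
// function over the order lines, not a field anyone can assign bollocks to.
class RealOrder extends GeneratedOrder {
    public void addLine(OrderLine line) { orderLines.add(line); }

    @Override
    public int getTotal() {
        int returnValue = 0;
        for (OrderLine nextLine : orderLines) {
            returnValue = returnValue + nextLine.getValue();
        }
        return returnValue;
    }

    @Override
    public void setTotal(int total) {
        throw new UnsupportedOperationException("total is derived, not settable");
    }
}
```

The generated class stays as the wire-format base; the subclass is where the service's own behaviour lives.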

I had a crack with obligate at doing some of the validation stuff, but it's custom and not generated (yet) and a bit of a pain for objects. Hibernate has some bits around data validation which are pretty nice, but it doesn't really do valid objects yet, and of course it's database rather than Service/XML driven. It's one of the sad facts about IT that we appear to take a leap forwards and lose some of our valuables on the way. It will be interesting to see if OO continues to wane, or if people generally start realising that OO lives inside the service and that the old OO rules, and benefits, still apply.


Non-drivers for SOA

When people are establishing their SOA elements, one of the important sets of elements that gets captured is the principles and drivers for the architecture. What is sometimes overlooked, but is equally important, are the non-drivers and non-principles for the adoption of SOA in the organisation. These fall into two broad groups: firstly the common myths of the organisation, and secondly the implementation details that need to be either factored in during implementation, or factored out.

The idea here is to establish the things that either aren't important, or just aren't actually an issue that needs considering. One common one I've found in the last few years is performance: the only times I've seen actual performance issues is when the application has been badly designed or implemented; the capacity of most hardware today really does make performance a secondary concern. If you don't currently have any performance issues then performance is a non-driver for your SOA. This doesn't mean it's not something to look at during implementation, but that you shouldn't be optimising your SOA for performance. Another example of these myths is "we can't do that sort of thing here", often brought up because some previous IT elements have failed. You really can't afford, however, to have a driver which is basically acceptance of failure; it's something to lob on the risk log, but you shouldn't be looking at SOA from the perspective of the lowest common denominator.

The second group are the things to worry about later: "Our project management approach doesn't handle services", "I'm not sure how our procurement rules would work", "Our use-case templates don't mention services", "We don't have a services document in the project portfolio" and others of that ilk. These are again non-drivers for the architecture, as they are about the things that happen next. This doesn't mean that you sit in an ivory tower apart from implementation, but these are exactly the things to look at once you have a proper service architecture and are looking at how to roll it out across the organisation; they are operational challenges rather than fundamental barriers.

The point of having these lists of non-drivers is that it helps to close down discussions once you've agreed what they are. This helps you to focus on the real drivers and principles and to shut down pointless avenues that don't deliver any value.

There is a phrase that half of being smart is knowing what you're dumb at. There is an equivalent here: half of good SOA is knowing what is important, the other half is knowing what isn't.


Single Canonical form - not for SOA

A very short while ago I made a post about the starting point for SOA and in the follow-up I made a comment that I wasn't a big fan of canonical data models, and I was asked to clarify why, so here goes.

First off, let's agree what we mean by this, and I'll take
Therefore, design a Canonical Data Model that is independent from any specific application. Require each application to produce and consume messages in this common format.
as the definition. I've seen, and led, quite a few projects that have used a canonical data model, and for some things it's worked and for lots it's failed (of course the ones I led were the ones that worked :) ). So it's not that canonical forms are a really bad thing, it's just that they aren't the ultimate solution.

Taking the manufacturing service Level 0 as our start, and thinking about product and customer as our two data elements to worry about, there are three approaches here, all of which could be called canonical to some degree but which represent very different approaches to the solution.

Just the facts

The first is the one I've used most often to success in this area, and its focus is all about the interactions between multiple services. The rule here is basically common denominator: the objective is to find the minimum set of data that can be used to communicate effectively between areas on a consistent basis. The goal isn't that this should be used on 100% of occasions, but that it covers 70-80% of the interactions.

In this model we might even get to the stage where it's just ProductID and CustomerID that are shared, and we have a standard provisioning approach for the two to ensure that IDs are unique. But most often it's a small subset that enables each service to understand what the other is talking about and then translate it into its own version. So in this model the "canonical" form is very small, really just a minimal reference set. This does mean that sometimes conversations have to take place outside of this minimal reference set, and that is fine, but it's more costly, so the people making that call need to be aware that they are now completely responsible for managing change of that interaction.

So in this model we might say that all product elements are governed by the ProductID and customer consists of Name and Address, but when sales talk to finance to bill a customer for an order they also include the product description from their marketing literature to help it make sense on the invoice. Here we would model this extra bit of information either as an extension to the previous data model, or just consider it bespoke for that transaction. This would mean that the sales service team would be responsible for the evolution of that data description, rather than using the global model, which would be owned and maintained... well, globally. The objective when communicating is to use this minimal reference set as much as possible, as this reduces effort, and the goal of the team that maintains it is to keep it small so it's easier for them.
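A minimal sketch of this in Java, with entirely illustrative names: the canonical customer is just Name and Address, and the sales-owned extension adds the product description for the billing interaction:

```java
// Illustrative sketch of a minimal reference set - all names invented.
// The canonical form is deliberately tiny and globally owned.
class CanonicalCustomer {
    private final String name;
    private final String address;

    public CanonicalCustomer(String name, String address) {
        this.name = name;
        this.address = address;
    }

    public String getName() { return name; }
    public String getAddress() { return address; }
}

// Sales-owned extension for the sales-to-finance billing interaction;
// the sales service team, not the global model team, evolves this.
class BillingCustomer extends CanonicalCustomer {
    private final String productDescription;

    public BillingCustomer(String name, String address, String productDescription) {
        super(name, address);
        this.productDescription = productDescription;
    }

    public String getProductDescription() { return productDescription; }
}
```

The point of the split is governance: anything in the base class is everybody's problem; anything in the extension belongs to the team that stepped outside the minimal set.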

This model is particularly effective in data exchange projects like reporting, or on base transactional elements, and it's the one I've seen used most effectively, especially when combined with strong governance and enforcement around the minimal reference set. A great advantage of this model is that it reduces the risk of work being done in the wrong place: if all you have is CustomerID you are unlikely to undertake a massive fraud profiling project, something better off left to finance.

When you talk, we listen

The next approach, already getting into dangerous territory IMO, is to create a superset of all interactions between services. In this world the goal is to capture a canonical form that represents 100% of the possible interactions between services. Thus if a service might need 25 fields of product information then the canonical form has those 25 fields. The problem with this model is that there is a lot of crap flying about that is just there for edge cases. It can be made to work, but it makes day-to-day operations harder, tends to lead to blurring of boundaries between areas/services, and increases the risk of duplicate functionality. It's also a real issue of information overload. What I've tended to see happen in this model is that people start adding fields "in case" and also start consuming and operating on fields "because they can". This isn't sensible.

I'd like my project to fail please

The final approach is the mythical "single canonical form". This beast is the one that knows everything; it's like "enterprise" but even worse. It creates a single data model that represents not only the superset of interactions, but the superset of internals as well. So it models both how finance and manufacturing view products and lobs them together, considers how sales and distribution view customer and lobs them together. Once this behemoth is created it then mandates it as the interaction between the areas, with (in my experience) disastrous consequences. It's too complex, it removes all boundaries and controls, and it ends up with ridiculous information exchanges where both parties know how the two areas operate internally. When external parties are brought into the mix it gets ever more complicated and fragile.

Summary

So I'm not 100% against canonical forms as an intermediary for data exchange, but I do think that a canonical form needs to be kept very small and compact, and that exchanges outside of that canonical form must be allowed for and managed. In the same way as there isn't an enterprise service bus that does everything, there is equally no single canonical form that works everywhere. Flexibility comes from mandating certain elements and increasing the cost of stepping outside, but stepping outside has to be allowed to enable systems, people and organisations to operate effectively.



Saturday, August 26, 2006

Virtualisation and the future of application development

For the second time this year I've taken delivery of a dual core laptop (Intel Centrino Duo). The first one I got was the fantastically superb Acer, on which I've installed VMWare. Last week I was asked the question "we've got a Mac Book Pro in the cupboard, do you want it?" and oddly I replied "oh, okay then". Being a Mac it was of course impossible to connect to our company's systems, so on went Parallels and a standard company image (for those using Radia out there in VM land, there are some registry elements that need to be removed), and soon I was not only connecting to the company network, I was also VPN'ing into it.

The Mac is now my "official" company laptop, and the Acer is my "official" development laptop. Mainly because VMWare Server runs like a dream on the Acer which makes running servers really easy, while Parallels is aimed more at a virtual desktop environment which is what I need for Outlook and the MS Office crew.

Running on dual core makes this all the more effective, particularly for development, as you can run the IDE and the native OS on one core and then run a huge array of servers dedicated to the other core (along with the virus scanner and the other annoyances). This sort of model also makes it much easier to start up projects than it has been historically.

The old cycle was "get kit, get software, learn how to install software, install on kit". This has recently been improved by developers doing everything locally, but it's been a right pain in the arse making sure that everyone has the same dev configuration, and putting things like Oracle on the machine just makes it run slower even when you aren't using it. And when you screw it up and have to do the uninstall/re-install cycle, something always seems to be left behind.

A virtual machine environment changes this. Instead of everyone installing an application server, you just have a single virtual machine image that everyone can copy. Mess it up and you just pull down a fresh one. Running virtual machines locally also means that people can be just as effective at home as at work, and you aren't having issues with someone else killing the box you are working on.

So if you are looking at upgrading your development hardware, or want to build a case around it, I'd heartily recommend looking at things like VMWare Server, Xen, Parallels or Microsoft's Virtual Server as an integral part of those solutions. There are already off-the-shelf images available, and within a project or company environment you should look at establishing your own.

So rather than in future having some poor sap having to work his way around the different developers to keep solving lots of minor config problems you can have that person set up the "right" configuration and then have everyone download it.

Some recommendations
  1. For Server VMs keep the network connection local
  2. Don't worry about "slow"; these are single-user instances, so keep it minimal
  3. Memory is king, 2 GB is the minimum amount you should be looking at
  4. Servers = headless; you don't want developers dealing with these directly if you can avoid it, these are remote deployment boxes
  5. Images are disposable, don't worry about fancy OS support functions, its bare bones stuff
As core counts and memory go ever higher and virtualisation becomes ever more effective (Parallels running Windows on one 2GHz core was more than twice as fast as Windows native on a 1.4GHz Celeron M machine), this really enables us to change the way we develop.

Now there is the licensing question of course....


Monday, August 21, 2006

Consumer SOA - my card, my rules

The other night, while drinking with the Cliff Richard of SOA, I ordered a round with my credit card and got the old "denied" from the machine. Now the initial result is that I've saved ten quid and fifteen pence, but it's still ruddy irritating when this happens. A quick call to the company the next day highlighted the problem: I'd tried to buy some software on the internet from a company called DXO (for anyone with a digital SLR this software is the mutt's nuts) and this had been flagged as "suspicious", hence the temporary block on my card. This is about the third time this year alone that this has happened on one card or another, and I know that the bank is doing its best to save me money. Basically they've set up a system like this...

Which is great when it stops the fraud... but come the bill it ended up giving a "false positive", leaving Mr Lowe to pick up the tab.

Now picture a world in the future where the credit card provider enables me to provide some of my own rules, like "Don't let my wife buy another dog" or provide temporary rules like "today I will be mostly buying computers over the internet" or "Yes I am actually paying a bar bill".
This would mean that the bank's rules would also check my rules to see if there is an over-ride for what they've just seen.
And I'd end up paying the bill, losing ten pounds and not having a reason to write a blog item. Still, taking this further, you might like to have your own fraud detection, maybe something you "buy" from one credit card company and then apply to all of them, or just some specific convenience functions like a validation via your mobile phone. This way you are not only adding additional "meta-data" but also adding in process elements.
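To illustrate the over-ride idea, here's a deliberately toy sketch in Java; no real card system works like this, and all the names are invented, but it shows the bank's suspicion check consulting the customer's own temporary rules before declining:

```java
import java.util.HashSet;
import java.util.Set;

// Toy sketch only - the bank's fraud check consults the customer's own
// temporary over-rides before declining a suspicious transaction.
class CardRules {
    private final Set<String> customerOverrides = new HashSet<String>();

    // e.g. "today I will be mostly buying computers over the internet"
    public void addOverride(String category) {
        customerOverrides.add(category);
    }

    public boolean approve(String category, boolean looksSuspicious) {
        if (!looksSuspicious) {
            return true; // nothing odd, let it through
        }
        // Suspicious, but the customer said to expect this sort of purchase
        return customerOverrides.contains(category);
    }
}
```

The interesting bit isn't the code, it's the delegation: part of the decision logic is owned by the customer, not the bank, within boundaries the bank controls.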

Now this isn't half as daft as it sounds when you think about both consumer and commercial demands around next generation services, or indeed about building services today. If you are building payment then you need to split out the fraud detection service as, while tightly coupled to some extent, it will have large amounts of configuration change around the rules and potentially the process and integration points. This might be as simple as multiple different rules sets for different scenarios, or might be as complex as entirely different systems depending on what you are processing.

When looking at things like Vendor managed inventory, think about the future when those vendors will want to deploy their own tracking and eventing processes into your infrastructure. There are lots of different scenarios when this sort of external modification is desired. I'm not talking here about the mythical "change the business rules at runtime" I'm talking about delegating authority for rules or process to external parties within defined and controlled boundaries.

Right now there aren't ways that technologies, or even business models, really allow this sort of approach; at best delegation is enabled "outside the walls", but you can be guaranteed that in future they will. So it's right to start thinking about how that might impact you. Don't engineer the solution just yet, because it's not worth the effort, but at least think about how such a solution would impact what you are delivering.

There is nothing in SOA that says you own everything that you execute, and if SOA is to provide the view that the business wants then they are going to want to allow this sort of delegation. This means our thinking, solutions, technology and governance are going to change in the coming years in fairly dramatic ways.


Sunday, August 20, 2006

The starting point for SOA is...

Now a quick Google search on the starting point of SOA turned up a great first article, which argues that Enterprise Architecture has no intrinsic value until you consider SOA...
The business value of EA only becomes truly clear (beyond simply reducing costs in the IT infrastructure) when you begin to think of SOA as a strategy for implementing EA. Then the architecture effort has a built-in business goal: create an architecture designed to deliver services to the business in terms it can understand. Simply put, it's the first technical concept I've seen in ten years that actually drives alignment between IT and the business
Which I heartily agree with, and indeed in general the links are full of some pretty good advice. Oh, and of course of bad advice, like "ESB is the start". The clue is, as Roy Walker would say, in the question.

The starting point for SOA, for Service Oriented Architecture?

That'll be the Services, the ones that deliver services to the business in the way it understands.

So to be really, really clear. The starting point for SOA is


The Services


The next up comes the architecture.


Guaranteed productivity gains with SOA

Recently there has been yet another slew of claims from people claiming they have discovered the Silver Bullet of software engineering. Now Fred Brooks wrote a couple of essays, No Silver Bullet and Silver Bullet re-fired. The first essay has a great line:

There is no single development, in either technology or in management technique, that by itself promises even one order-of-magnitude improvement in productivity, in reliability, in simplicity
Get that, there is no single development that will give an order-of-magnitude improvement. And yet people are still claiming three to five times productivity gains. This is muppetry of the worst sort.

So I'm here to say exactly how you get a guaranteed FIVE TIMES productivity gain on your projects. This revolutionary approach will ensure projects deliver much faster, have higher quality and are more maintainable.

I call the solution "Employing shit hot people to do the work". In this solution instead of worrying about the right language for the project you employ a bunch of shit hot developer-architects and then let them pick the right technology, methodology and approach.

Now the only minor problem with this is that I'd say that only 1% of people I've ever worked with fall into this area, and I've never worked on a project with more than two people in that group. I'm not talking about people who are "good" coders, I'm talking about the people who, when given a new system to debug, are at the line with the error in under an hour after teams of "experts" have spent weeks looking at it. I'm talking about people who, when given a vague description of a solution, have already outlined the final solution and how to deliver it within the course of a day. I'm talking about people who look at the solution and then pick the right technology to solve each of the bits. The sort of people who use new technologies and make bug reports including line numbers. The sort of people who discuss solutions with the business in their language. The sort of people who are great at the abstract and brilliant at the detail. People who can hold the whole architecture in their head, and still knock out code for the most complex bit of the system. The sort of people who don't disappear up their technology backsides and develop ivory tower solutions that no-one uses; these are the sort of people who build great technology that is fit for purpose, no more, no less. People who are known to throw toys out of prams at the right times and for the right reasons.

People who are, in short, rather rarer than hens' teeth. But if you really, seriously, want to get a big productivity leap then it's the only guaranteed solution. Quality of people is the most important thing when considering the success of projects; the technology they use can only help or hinder based on that quality.

With SOA you do at least have a chance to target these people at the overall architecture and then get them to solve the toughest challenges, while leaving the rest of the developers to deliver the other services. This is potentially one of its biggest gains: the ability to have a structured way to really get these over-achievers to deliver, rather than having them dragged down to the level of the weakest. By providing clear boundaries and clear technology, methodology and complexity guidance, you can use SOA to target your delivery around the most important factor, the quality of your delivery team, rather than having to "make do" across everything.

This won't give you a massive productivity gain in a single step; it's the application of lots of best practices, keeping in mind the need to attract, and retain, some people from that 1%.


Wednesday, August 16, 2006

The biggest lie in IT - Enterprise

Everyone knows the old line about how to tell if a salesman/husband/politician is lying... their lips are moving. But what about in IT? Well, software doesn't have lips, but if it contains the word Enterprise you can guarantee many things:
  1. You'll have more than one of them
  2. None of them will work across the enterprise, maybe not even across a department
  3. It will have upgrade problems
  4. You'll try and consolidate them... and probably fail
Enterprise Resource Planning (ERP) was a great example; I've worked with companies that had over one hundred ERP systems. Enterprise Application Integration (EAI) was another: get a couple of those together and it's like watching cats fight. Enterprise Data Models that tried to be the ultimate truth are another cracker. And now we have the Enterprise Service Bus (ESB), which pretends it's going to be the only thing you need and that it will work everywhere.

There is no such thing as a single enterprise-wide tool for any company of reasonable complexity, and it's plain silly to believe that the next technology wave will be any different from previous ones in delivering on the "one ring to bind them" promise of ERP/EAI/EDM/ESB.

Technorati Tags: , ,

Tuesday, August 15, 2006

SOA is about people, or why you need admin support

One of my favourite tech books in recent years has been Tom DeMarco's Slack which outlines how organisations have removed their flexibility by not giving people slack time in which to do things beyond their normal job.


I realised on seeing it on my shelf the other day that this is one of the things I've often not stressed when advising people on doing SOA to give themselves more agility: making sure that people have time in which to be more agile. It's impossible for people to think about new ways of working or try out new options if all of their time is 100% allocated to the day-to-day.

This is a critical success factor in making any organisation more agile or dynamic, and it's certainly not limited to SOA: people need time in which to become flexible and to think about new ideas. They don't need to be tracked to the nth degree so that they have no time left in which to actually add the extra value the company is so keen to get. The book has some great information and case studies/gems that you can use to justify and explain how having time dedicated to "nothing" actually helps you deliver what the company really wants.

SOA success is all about the people and to get that flexibility they are going to need Slack.

Technorati Tags: ,

SOA charging models, when to pay?

Well, I've just spent two superb weeks down in Cornwall at a place called St Mawes, and it appears that those Cornish folks have cracked the question of when you should pay for using a service.


As you drive into Cornwall on the A30 there are no speed cameras, but as you come out of Cornwall there are a bunch. Which is as it should be in a decent service architecture. Charging people to initiate a transaction is an extremely dangerous position to get into: it leads to charging yourself when you just "ping" the service to check it is up, and it leads to customers being charged for network failures. Charging on exit ensures that the consumer isn't paying without getting something for it (the real-world effect of the service).
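The entry-vs-exit distinction can be sketched in a few lines. This is a toy illustration, not any real billing API; all the names (`MeteredService`, `invoke`, `ping`) are invented. The point is simply that a charge-on-exit wrapper never bills liveness pings or failed calls, only delivered effects.

```python
# A minimal charge-on-exit sketch (all names are illustrative).
# The consumer is billed only when the call completes and the
# real-world effect is actually delivered.

class MeteredService:
    def __init__(self, handler, price_per_call):
        self.handler = handler          # the real service logic
        self.price = price_per_call
        self.charges = {}               # consumer -> total charged

    def ping(self, consumer):
        """Liveness check: never billed, so monitoring stays free."""
        return "alive"

    def invoke(self, consumer, request):
        try:
            result = self.handler(request)   # may raise on failure
        except Exception:
            return None                      # failure: no effect, no charge
        # Charge on exit: pay only once the effect is delivered.
        self.charges[consumer] = self.charges.get(consumer, 0) + self.price
        return result


svc = MeteredService(lambda req: req.upper(), price_per_call=5)
svc.ping("ops-team")                 # free
svc.invoke("acme", "hello")          # charged: effect delivered
```

Under a charge-on-entry scheme both the ping and a failed invocation would have generated a bill, which is exactly the problem described above.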

However, while the Cornish model works because the A30 is really the only way in or out, it shouldn't be applied as a blanket rule for all services. The reason is that consumers can get cunning about this sort of thing.

At university they took away our free printer and made us pay, per sheet, for printouts. The cost was only payable, however, if the whole printout was successful; you didn't have to pay if there were any errors. Given that we were using lp/lpr and had a course that taught us how to code in PostScript, it was a matter of moments before a quick alias appeared that appended a PostScript error to any document sent to the printer. Thus we got all of the real-world effects that we required, but the service provider received none of the money. They could have switched to payment on all sheets that didn't have errors, but we could have faked that too :)

So there is no one solution in complex process environments, as consumers may only be after part of the effect. This means that when planning charging models for SOA you have to think not just about "pay per call" but about the most appropriate charging model for a given method of interaction, and of course have an automated way of measuring it.

When defining a service, think about whether it is interruptible, whether additional information can be appended after the effect to invalidate the payment but not the request, and about the point at which the consumer receives the value the service delivers.

I'd suggest three broad groups:

1) Charge on entry, with a returns policy - you charge for invocation but have to have a returns policy through which people can object to charges. This should only be used where you can really minimise the number of returns.
2) Charge on exit - this requires a clear process in which value is obtained purely at the end of the transaction, and requires you to prevent consumers exiting or corrupting the transaction after that value is delivered.
3) Charge at key points - identify key points in the interaction that deliver incremental pieces of value to the consumer and charge for those.
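The three groups can be sketched as pluggable policies over the events recorded during one interaction. Again, every name here is invented for illustration; the event strings stand in for whatever your business activity monitoring actually records.

```python
# A hedged sketch of the three charging groups as simple policies
# (all names and event strings are illustrative).

def charge_on_entry(events, price):
    """Group 1: bill on invocation; a returns policy refunds objections."""
    charged = price if "invoked" in events else 0
    refund = price if "returned" in events else 0
    return charged - refund

def charge_on_exit(events, price):
    """Group 2: bill only when the transaction completes and delivers value."""
    return price if "completed" in events else 0

def charge_at_key_points(events, price_per_point):
    """Group 3: bill each milestone that delivered incremental value."""
    return price_per_point * sum(1 for e in events if e.startswith("milestone:"))

# A consumer who bails out mid-process pays nothing under charge-on-exit,
# but still pays for the milestones they actually consumed.
events = ["invoked", "milestone:quote", "milestone:booking"]  # no "completed"
```

Running the policies over that abandoned interaction shows the difference: entry charges in full, exit charges nothing, and key points charge for the two milestones consumed, which is why the model has to match where the value is actually delivered.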

In reality these charging models are part of the business activity monitoring exercise for the service, and the two elements should be very much in step.

If dealing externally, remember people will play silly buggers to avoid paying; and they will internally too, unless you put in place some sort of remediation plan, like preventing them from accessing the service or changing their charging model (to initiation with no returns).

Charging for services isn't something that has had a lot of thought so far, with people tending to assume that payment on initiation is fine. Cornwall has clearly planned its speed camera strategy on SOA principles; so should you.


Technorati Tags: ,