Wednesday, July 30, 2008

Who are your stakeholders?

Okay, next up on the project is the stakeholder groups. I've split these into three groups:
  1. Sponsors and Blockers
  2. Enablers
  3. Informed or Enabled
The first group are those people who, if you don't get them properly engaged and deliver what they want, will cause the project to fail, or be seen to have failed. These tend to be the key high-level people who will ultimately judge the success of the solution. They are also the people who can make things much easier or harder for you depending on their expectations of the project.

The second group are the people who will actually help you get things done; they will personally help you get things moving and need to be directly involved in the actual work of the project. The project might be changing what they do, or they might be the manager or customer of the people who will be changing their practices as a result of your project. They aren't direct blockers from on high like the previous group, but they need to feel actively involved in the project or it will be an outside thing being imposed on them.

The last group are the people who will either be told about what you are doing or will actually be impacted once the solution is live. They might be the users of the old system you are replacing, the people in the warehouse who will have to cope with the new shelf-stacking process, or just senior people who like to feel they are in the first group but who in reality have no impact on your programme and simply need to be kept sweet. Needless to say, don't tell these senior people that you have lobbed them into this group.

Getting your stakeholder management right is essential to the project. One of the bits from a business SOA perspective that I always stress is getting people's motivations around the service clear, and that is an approach I apply to the project as well.

Why do the sponsors want to be engaged? What will they get out of it? What won't they like?

Repeat for the different groups and you now have a clear picture of the external factors on the project and you can start making plans to ensure that you meet the stakeholder objectives.




Tuesday, July 29, 2008

Project Vision - Keep it short

Okay, so we've been moving for a whole week here and the most important bit has been knocked down to the two paragraphs and a bullet list that it needs to be: the project vision.

Now, reading around, some people seem to think that the vision should be a document that includes all the detail around the business case, outline plan and other elements. For me that is the kick-off document, and it's something that only has a life expectancy of about three weeks, once everyone is on board and you are into the detail.

The vision for me is something very clear that says
  • What we are going to achieve for the business
  • How we will measure success
  • What is important to delivery
Basically I do a "look back" exercise: I imagine that the project has finished successfully and ask what people are saying is brilliant about it, what the focus was, and what we had to overcome.

So this short piece, less than 100 words, basically says to anyone inside or outside the project what we are going to do. It does so in words that describe achievements, as in "will deliver the business change and solutions required to drive a 10% increase in sales", and measures, as in "this will be measured based on new transactions through the Gerald system".

So everyone on the project knows what our objectives are, and it's not long enough for anyone to have an excuse to have ignored it. It's also concise enough for the person funding it to agree that this is why they are putting up the cash, and to hold me accountable with clarity for delivering it.


Wednesday, July 23, 2008

Starting a new project

Okay, so I'm starting off a new programme at work. It's not in the "normal" space of most enterprise IT, but it's going to include business change, IT implementation, a bunch of integration and a load of web tech. Over the course of the next few months I'll run through the various things that I've had to do, why I had to do them, and what worked and what didn't. I'll tag the posts "gerald" to give a simple way of tracking them.


Monday, July 21, 2008

Thinking about service levels

One piece that I've banged on about consistently for the last few years has been the importance of SLAs: SLAs that define not simply things like security and reliability (the technical elements) but which also define the business contract and costs. These contracts must always work two ways. The consumer has a contract of things they must do to invoke the capability and certain expectations of what they want back. The service must have a contract of what it commits to do and what the costs (time, financial, etc.) are for doing so.

In part this is about making pieces like network latency visible to developers and architects, but even more it is about making the actual business costs and value visible.
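
To make the two-way point concrete, here is a minimal sketch of what such a contract might look like if you held it as data; the class and field names are my own illustration, not from any standard (as I note below, there is still no WS-SLA):

// A minimal sketch of a two-way contract: consumer obligations and provider
// commitments, with cost and time explicit. All names here are illustrative.
public class ServiceContract {

    // what the consumer must do to invoke the capability
    public static class ConsumerObligations {
        String credentialsRequired;   // e.g. a signed request
        String requestFormat;         // e.g. the order schema version the service expects
        int maxRequestsPerMinute;     // the load the consumer agrees to stay within
    }

    // what the service commits to, and what it costs
    public static class ProviderCommitments {
        long maxResponseMillis;       // time cost, including expected network latency
        double pricePerInvocation;    // financial cost
        double availabilityPercent;   // e.g. 99.5
    }

    public final ConsumerObligations consumerSide = new ConsumerObligations();
    public final ProviderCommitments providerSide = new ProviderCommitments();
}

The detail doesn't matter; what matters is that both sides of the bargain, and the costs, are explicit rather than buried in the behaviour of the service.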

As an example take Paris, that city of surly waiters, stunningly dressed women and the complete decampment of its people in August (lots of restaurants and shops are closed then). Now every tourist wants to "go up the Eiffel Tower", but if we look at the contracts here then there are other options that you could take from a "business" perspective.

What do you really want to do? The answer is to get a bird's eye view of Paris and see all of its wonders laid out before you. And what are the wonders? The Louvre, Notre Dame, the Eiffel Tower, the Arc de Triomphe. Now I'd argue that there is a service out there which costs less, in both time and money, which provides a better view (because the Eiffel Tower is in it) and which therefore represents the best business choice.

So the choice is basically queueing at the Eiffel Tower, or viewing from the Tour Montparnasse.


The point here is that it's a contract that gives this visibility and choice to the developer/architect/business, rather than it being something implicit in the behaviour of a service. If you don't know what the contract is and, most importantly, what its price is (remembering that time is a cost after all) then you can't make a rational decision. Note that I'm not talking about UDDI-style dynamic discovery and binding; I'm talking about making business decisions on a contractual basis.

Another example from Paris (if you haven't been to Paris, BTW, stay in the 6th when you do; it's "real" Paris in terms of being what you imagine). People catching trains often want a meal or a snack before they travel, and stations offer a range of places to eat. Each of these places offers a varying type of contract which broadly covers the following:
  • Quality of food
  • Quality of surroundings
  • Speed of service
  • Cost
Now lots of train stations have fast food joints, sit-down light bite places and sometimes even somewhere you can get a reasonable meal. The point as a consumer is that my contract is defined by when my train leaves as well as my budget; this determines which service I will use. If I know about the services beforehand, however, I can plan to arrive earlier if there is somewhere I particularly want to go.

Now in Paris at Gare de Lyon there is "Le Train Bleu", just about the best damned railway restaurant in existence. They have a set of different menus, including a "short" (by French standards) one for those who are about to travel.

The point here is that the contract for what could seem to be the same service (food at a railway station) can vary hugely, from fast food through to a gourmet experience in an amazingly decorated French restaurant. The contract isn't simply about price, or time, or quality, or trust; it's about a combination of those things, and then weighing up which makes the most sense from a business perspective based on the current demands and ambitions of the business.

Exposing network effects to developers and architects is a tiny part of the problem; the real problem is exposing them to the business costs of their decisions, and that requires a different degree of formalism and planning.

Once contracts are formalised it becomes possible to automate, but the first key is being able to understand the contracts. There is still no WS-SLA or WS-Contract, and the current push appears to be away from considering these elements and back to considering only the technical aspects of a service.
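
As a sketch of what that automation could look like once time and money are explicit in the contract (the Offer fields and the crude quality score are my own illustration), sticking with the station example:

import java.util.Comparator;
import java.util.List;
import java.util.Optional;

// Illustrative only: choose the best offer whose contract fits the consumer's
// own constraints (time before the train leaves, budget).
public class ContractChooser {

    static class Offer {
        String name;
        int minutesNeeded;     // time cost of using this service
        double price;          // financial cost
        int qualityScore;      // 1-10, however the business chooses to rate it
        Offer(String name, int minutesNeeded, double price, int qualityScore) {
            this.name = name; this.minutesNeeded = minutesNeeded;
            this.price = price; this.qualityScore = qualityScore;
        }
    }

    static Optional<Offer> choose(List<Offer> offers, int minutesBeforeTrain, double budget) {
        return offers.stream()
                .filter(o -> o.minutesNeeded <= minutesBeforeTrain && o.price <= budget)
                .max(Comparator.comparingInt(o -> o.qualityScore));
    }

    public static void main(String[] args) {
        List<Offer> stationOffers = List.of(
                new Offer("fast food", 15, 8.0, 3),
                new Offer("light bite", 30, 15.0, 5),
                new Offer("Le Train Bleu short menu", 60, 45.0, 9));
        System.out.println(choose(stationOffers, 45, 50.0).map(o -> o.name).orElse("no fit"));
    }
}

The code is trivial; the point is that the decision is expressed in business terms, with the constraints coming from the consumer's situation and the costs from the provider's contract.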

So think about the full contract, not just the immediately obvious. That means not following the tourists in wasting half a day queueing for the Eiffel Tower, but going to Montparnasse and having time to take in the cemetery and the Jardin du Luxembourg, finishing up at Saint-Sulpice with an espresso, watching the beautiful people walk by. Now that is quality of service.


Google Map Rolling on a Mac

I love "silly" demos that combine technologies. Raul Guiu has written a very small app that uses the Mac's motion sensor to navigate around Google Maps. Now my first thought was "nice demo, can't see an application" then I started playing "follow the roads" which was fun for a short period of time. Now however I can think of quite a few bits where adding in motion and GPS could really commoditise certain areas. Take the "Fitness Watch" market, it costs £100-£200 for a watch and the "premium" ones are those with GPS. Now however with my iPhone I already have a GPS but I need to combine the information. The motion sensor could also act as a pedometer as well as indicating on a bike how fast I am going and what sort of lean I got into the corners.

This sort of "personal network" approach (which runs against the "one central device" theory) would give me all of the information I need (and more) and do so over a very limited area (I have them on me, if I don't then they aren't involved).

It's something that I expect we will see more of as things like the iPod/Nike link-up are expanded and players like Polar want to get into the act. The key here will be short-range network comms that don't burn the battery to get the job done.

So yes, it's just using a Mac to navigate a map, but it does go to show that not all mashups have to be of web information. Mashing up personal information in a localised way can be just as useful, if not more so.




Monday, July 14, 2008

SOA success - value over technology

There have been a bunch of articles around recently on SOA and judging its success, but it was this one at HBR that best sums up the business view of what good looks like. Entitled "Investing in the IT That Makes a Competitive Difference", it's all about companies being smart in the application of technology and choosing the areas where it drives differentiation. One of the bits I've gone on about in the book is value classification.

The point is that most companies have a distorted IT investment view, with most of the investment going "below the line" and very little going above it. This is madness and is a major reason why businesses get annoyed. The point of the areas below the line is cost rationalisation and even elimination. SaaS is a good example of elimination: you eliminate the data centre and other support costs and switch them into a business-metric charge (most often number of users). But you won't differentiate on any business services in this area; if you can do SaaS then so can the competition, but it doesn't matter as this isn't where you really compete. Think of your phone system. Does it matter that you use the same mobiles as the competition? Nope. Lots of IT is in the same basket.
we believe that an overabundance of new technologies is not the fundamental driver of the change in dynamics we’ve documented. Instead, our field research suggests that businesses entered a new era of increased competitiveness in the mid-1990s not because they had so many IT innovations to choose from but because some of these new technologies enabled improvements to companies’ operating models and then made it possible to replicate those improvements much more widely.


So it's about where technology impacts the operating model, not just about the technology. They make a great play, however, of the importance of standardisation in driving successful change:
First, deploy a consistent technology platform. Then separate yourself from the pack by coming up with better ways of working. Finally, use the platform to propagate these business innovations widely and reliably.


In the value classification this means keeping vanilla down the bottom and innovating at the top (where it has the most impact). It's the standardisation around things like SAP that gives you the ability to then roll out change more widely; it's not simply about having a shiny new technology. So with SAP the Basis platform is below the line, but it is a consistent base. You then use NetWeaver and new technologies above the line to give you the competitive advantage.

This is the SOA play: making the business not care about the technology you are using, while asking them for two things:
  1. Standardisation
  2. Focus on investment
Standardisation means rationalisation, not just of the IT but most importantly of the business. The business needs to operate in line with the package, not the package in line with the business. Where they need (not want) to differ, you focus investment on those areas of competitive advantage.
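
For illustration only (the service names below are made up), the classification itself can be as simple as a tag per business service, with that tag driving the investment conversation:

import java.util.Map;

// Sketch: tag each business service as above or below the line and let that
// drive whether the answer is "standardise" or "focus investment".
public class ValueClassification {
    enum ValueClass { ABOVE_THE_LINE, BELOW_THE_LINE }

    public static void main(String[] args) {
        Map<String, ValueClass> services = Map.of(
                "Invoicing", ValueClass.BELOW_THE_LINE,
                "Payroll", ValueClass.BELOW_THE_LINE,
                "Pricing and promotions", ValueClass.ABOVE_THE_LINE);
        services.forEach((name, cls) -> System.out.println(name + ": "
                + (cls == ValueClass.ABOVE_THE_LINE ? "focus investment" : "standardise and rationalise")));
    }
}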

This bit above the line is where you can start playing with Web 2.0 (a Web 2.0 invoicing process... I don't think so) to really understand where it makes an impact. Having it make that impact, however, is about enabling it to build on stable foundations, foundations that are able to change in both business and IT in a coordinated manner.

So the conversation has switched from "SOA" and technologies towards a goal of competitive differentiation. It is however the SOA bit that makes the split between standardisation and innovation possible. Without this standardised glue and the business commitment to alignment below the line you are just building on quicksand.

This is the real judge of when SOA is successful: when the business doesn't care about the SOA programmes, doesn't care about BPEL/Web Services/REST/etc., but does care that it exists; when it becomes an integral part of the company's goal of differentiating itself, by providing the platform between the standardised and the innovative and by being the mechanism via which those changes can be rolled out across the business.

So if you are doing a service architecture make sure that you do the value classification so you can explain to the business where you want them to invest and where the business and IT need to rationalise and standardise.




Friday, July 11, 2008

I hate technology, I love solutions

I've begun to realise something recently: I don't like technology. I love working in IT, but it's never the technology bits that get me excited, it's always what you can do with them. This is why I loathe discussions on "X is better than Y", as it's really about what you can do with X and Y. Most of the time you can do the same thing, but maybe one of them makes it easier to reach the end goal.

This came home to me today when I took delivery of my lush new 3G iPhone. Seriously, I've not been this excited about a gadget since the PS2 came out. The reason I like it isn't that it's a smartphone or that it's 3G; my work phone is 3G and a smartphone and I loathe and detest it with a passion. No, the reason I like it is because it's such a damn smooth solution: it just plain works. I don't care if they use Linux, OS X, Windows or Symbian, I don't care whether the applications "look the same" as the ones on my laptop (which is a Mac), I just care that it works and it gets the hell out of my way when I want to do things.

Oh, but woe is me, the registration doesn't work: their servers are getting canned and falling over. That is the technology part of the equation; someone somewhere hasn't scaled the infrastructure correctly or done the right sort of caching to make it all work. This tells me, first, that I've been a complete sheep in buying the 3G iPhone, because clearly there are loads of other people doing the same thing, and secondly that technology breaks.

That is the difference, solutions are the bits that you use as an end-user. You only find out about the technology when it breaks.


Anyway, it's all gone through now, and the bit I loved best.... when it rejected the default browser on my desktop VM because it fired up IE ;)


Reminding vendors of previous statements

I had a call the other day where the vendor said something that made me bork. They were talking about what "non-sophisticated" IT users (IT-literate business people) can do. During the course of this the vendor (who sells quite a bit of stuff) said that "Web Services" were usable by end users in their BPM product, as they hide the XML because XML is too complex, and that it's easy for these users to "string together" Web Services to build new applications.

The next chap then came on talking about some stuff that accessed databases and said that "Web Services and XML are too complex" for this type of user group. What these users want is access to the database and the ability to query and filter the information; it's all about the information for these users.

The next chap was then talking about Atom and said that "Web Services are too complex" and that "people don't want to see databases", but that XML was a "natural" way of thinking about data. What the users want to see is the relationships between information, not just the data and filtering. They don't want to build new applications; they just want to re-purpose what they have.

"Business users can use X" is the biggest lie that vendors push out. SQL was once that thing, and so was COBOL when it kicked off. I pointed out on the call that they'd just undermined each of their other products with their statements.

You could have heard a penny drop.

The reality is that different products work in different areas and different tasks require different solutions.

It's also fun to remind vendors of when they told you that X was the best thing and that the right approach was to do Y with that X. If they are now saying that Y (and maybe even X) is wrong, point it out to them. If they want to sell you something they should have a good answer as to why the world has changed, and it shouldn't just be about buzzwords.


Tuesday, July 08, 2008

Why people matter more in architecture than technology

I've written before about ivory towers v long term, but I thought it was worth briefly bringing it up again on the back of the Convenience over Correctness piece.

Correctness is normally used to describe the technology side: solution X is more "correct" because it obeys more rules or exhibits better technology pieces. The goal in this mindset is to be as correct as possible, rather than good enough for the job at hand.

If architecture is to be about delivering what the business wants then there are three things that should be at the front of an architect's mind:
  1. Is it good enough?
  2. Can my team deliver it?
  3. Can we support it?
These are the three primary elements to consider, not whether the solution is "the most correct" or whether it represents the "best" technology but whether what is being proposed is good enough and whether the people that are going to have to cope with the proposed solution will be able to do so.

Convenience over correctness therefore should mostly mean focusing on convenience, as that will help deliver something at a better business price, something which more developers (read: cheaper developers) will be able to deliver and which more people in support (read: cheaper support) will be able to maintain.

Architecturally it is the convenient option that represents the best choice, even though technologically it is not the correct one. If I have a team of VB6 developers and have to build a distributed multi-threaded application, then the technologically correct choice might be to architect using REST and Erlang; the business-correct decision, however, would be to architect around the limitations of the team and the platform to enable that team to deliver the solution. Yes, it could be argued that hiring 20 Erlang guys with REST experience could have got the job done quicker head-to-head, but not if you factor in hiring time, recruitment costs and the increased salary costs, and that was also not the challenge the architect was presented with.

Architecting from the Ivory tower is easy. Assuming everyone is as smart as you are is easy. The real skill is in understanding the complexities and picking the technologies and approach that is most convenient for the people you have and good enough for the problem being solved.

If architects are to change the perception of IT then we must focus on business objectives over technological correctness. Brunel was right that the "correct" gauge should be wider, the previous one being merely a convenience. That was the engineering and technology view; the business view was that standardising on the majority gauge was the correct choice for economic development.

In delivering business solutions it is the business correctness that matters over the technology correctness. This means focusing on the people available for delivery, matching their skills to the architectural proposals, and focusing on support to ensure that the solution will operate effectively.

Architecture should be about making it work in reality, not making it work better on paper.



Friday, July 04, 2008

Convenience over Correctness - a deconstruction

So I wrote a quick piece on Steve Vinoski's IEEE article and a criticism was levelled by an anonymous poster that I didn't dissect the article piece by piece... never one to not learn from such constructive criticism I thought I'd do just that....

So the paper starts off with a reasonable discussion of the origins of RPC, noting indeed that its history dates back to the 1970s. This makes the first paragraph pretty much unique in that it's well referenced and backed up by facts.

On to the second paragraph. CORBA is lumped in as a "newer" technology that is on the wane, which is slightly odd given that in IT a nearly 20-year-old technology is rarely considered "new"; it's certainly not a term I've heard applied to C++ or even Java recently. The point on CORBA is that it's "too complicated". At this stage Steve fails to mention that IIOP, the core protocol of CORBA, was in fact adopted by J2EE, so it could be argued to be the most successful RPC approach of all time. It is also worth noting that DDI wasn't simply an RPC/IDL approach, but I guess the writer knows that.

The third paragraph states that SOAP is on the wane as well. This is backed up with a huge set^H^H^H^H^H^H^H^Hno statistics whatsoever. The writer seems to have a nice blind spot when it comes to enterprise applications and to be unaware that the likes of SAP and Oracle are using Web Services extensively in the extension and integration of their package solutions; admittedly this is only a multi-billion dollar industry and indeed probably represents the lion's share of IT spending... hang on. Doesn't this mean that in fact SOAP is being used extensively, and indeed that its use is growing as people upgrade to Oracle Fusion and SAP NetWeaver? Isn't it also true that governments are more and more using Web Services to integrate between countries and departments, and that companies are using SOAP pretty much as the default in B2B interactions? Indeed, people describe using SOAP for these areas as a best practice. On the wane? More likely it's just that the people using them have a career.

The writer gives himself away by proposing that people are considering a Facebook approach as the next thing.... err, not in the companies I work with; absolutely no-one has suggested doing that. This suggests a blog-oriented rather than enterprise-oriented research style.

The fourth paragraph details, again without reference beyond the fact that transaction management was included in Argus, that RPC is fundamentally flawed. Now, if it's fundamentally flawed this should mean that it's impossible to build a distributed system with it. It references Jim Waldo's paper (but not Deutsch's fallacies) on how remote calls are different to local ones. Err, that isn't a flaw of RPC, it's just a fact of life when doing remote systems... what on earth could the writer mean? Surely the writer isn't assuming that the local/remote problem goes away just because you do the remote work via something other than RPC?

So the writer asks
Why, then, do we continue to use RPC-oriented systems when they're fraught with well-known and well-understood problems?
So far these problems have been detailed as
  1. Remote calls have more issues than local ones
  2. Remote transaction processing is a bitch
There are no other issues raised and both of these points fall into the "well duh" school of pointing out the obvious.

Then onto the meat of the article (emphasis mine).
RPC-oriented systems aim to let developers use familiar programming language constructs to invoke remote services, passing requests and data to them and expecting more data in response.
Let me for a moment shift that to the "familiar" world of the Web and REST.
Web-oriented systems aim to let developers use familiar web constructs to invoke remote resources, passing requests and data to them and expecting more data in response.
Ummm, I'm not seeing the huge difference at this stage. The next bit is basically explaining how a proxy infrastructure works and how it gives developers a consistent approach to the invocation of local or remote services. It almost sounds like it's pushing this as a benefit, in that it abstracts developers away from the detail of the network. But that isn't the case; the writer in fact thinks this is the worst kind of thing.
Unfortunately, this approach is all about developer convenience. It's a classic case of everything looking like a nail because all we have is a hammer. In an object-oriented language[...], we represent remote services as objects and call methods or member functions on them. [...] We have a general-purpose imperative programming language hammer, so we treat distributed computing as just another nail to bend to fit the programming models that such languages offer.
What a load of crap. Seriously, this is an unmitigated pile of tripe about what it means to write distributed systems. It makes two basic errors:
  1. That the architecture and design of a system is focused on a programming language
  2. See number 1
Now (to quote Red Dwarf) I know that's really only one problem, but it's such a big one I thought I'd mention it twice. I've built distributed systems for years and I've known about Waldo, Deutsch and indeed plain old common sense for all of that time. The programming language (Ada, C, C++, Java, LISP, etc.) didn't matter; what I was first interested in was the entities that interacted. This is what SOA is all about: it's not about the objects and the methods, it's about the interacting entities and the knowledge that this interaction must be coarse grained. Architectures are normally programming-language independent, and anyone who starts a distributed system by writing C++ is a muppet who should be fired. Focusing on the programming language indicates a very narrow perspective on what it takes to build a distributed system, and indeed a focus on the worst place to start considering distribution.

The issue raised is that providing an abstraction that hides the network is a really bad thing and is all about convenience. No it isn't. I've built distributed systems, I've had to manage teams who delivered the architectures I created, and I'll say that:
  • 60% of the people didn't understand the challenges and wouldn't have understood Waldo
  • 30% would have read it and got it wrong
  • 6% Understand the challenges and can make a decent crack at it with minor problems
  • 4% actually understand what it takes
The writer's position is that everyone should know about the network. This is a false position as it requires everyone to be of the same sophistication as the, very talented, writer. The reason that the abstraction works is that smart people write the architecture and the interfaces and then manage others through the delivery; thus the lack of talent in the great unwashed is hidden from themselves, as they do not see the complexity. This is important: if you allow people without the talent to start designing distributed systems then it will go tits up; if, however, you can architect the system so they are not aware of the distribution and the efficiency is managed via the architecture, then you will deliver a successful project.

The next section is about the issues of data mapping between programming languages, something scoped there to IDL but which is an issue for any two systems interacting via a third-party notation. It's an issue in XML as much as it is with CORBA's IDL; people can point to fewer issues but not to the elimination of issues. It points out a few hacks from CORBA (e.g. by-value) but oddly (or conveniently) misses out the similar issues with SOAP and XML. This section is okay as it talks about some of the challenges; it suggests, however, that the problems of intermediary mapping are limited to RPC systems, which is true for some but not for others. How about validated enumerations in XML, for instance? They've always been a bugger in languages that don't have the concept of limited enumeration sets. The point around arrays mapping to an array or a class is as true in XML and messaging or REST systems as it is in RPC. The writer omits this issue, however, as it undermines the point that he is trying to make.

In the next section, however, a wonderfully wrong statement kicks it off:
The illusion of RPC - the idea that a distributed call can be treated the same as a local call - ignores not only latency and partial failure[...]
Errr, apart from muppets, who thinks that a distributed RPC call can be treated the same as a local call? Every single RPC approach has a litany of exceptions that you have to catch because you are doing a remote call; that is baked into the frameworks. That, however, is only the coding highlight. The reality is that all decent architects have always known that the two are different and have worked accordingly. This really is the worst kind of argument. Setting up a completely specious argument as the basis for this area really does undermine the whole credibility of the writer. If the writer is saying that when they built RPC systems in the past, which they apparently have, they treated remote and local as the same, then the writer is a muppet. I don't think the writer is a muppet, therefore I have to conclude that he is pushing an agenda and making the facts fit that agenda.
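
Java RMI makes the point concretely: the remoteness is declared in the interface and the caller is forced to deal with it (the service names below are made up, but Remote and RemoteException are the real RMI types):

import java.rmi.Remote;
import java.rmi.RemoteException;

// Every remote method must declare RemoteException, so the "local = remote"
// illusion ends at the compiler.
interface OrderService extends Remote {
    String placeOrder(String orderId) throws RemoteException;
}

class OrderClient {
    String tryPlaceOrder(OrderService service, String orderId) {
        try {
            return service.placeOrder(orderId);
        } catch (RemoteException e) {
            // network failure, timeout, server crash: partial failure is in your face
            return "order failed: " + e.getMessage();
        }
    }
}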

The next bit is a real cracker though. RPC systems, it is claimed, lack the ability to do intermediation, which is defined as
caching, filtering, monitoring, logging, and handling fan-in and fan-out scenarios
The suggestion in the surrounding text is that it's impossible to build a large-scale system using RPC as it lacks these abilities. Now, apart from the Business Service Bus stuff I've talked about, or the MCI approach that Anne Thomas Manes advocates, this must come as a big surprise to the companies doing ESBs, Web Service monitoring or Web Service gateways, which in fact provide all of these elements in just the sort of RPC environment the writer claims isn't possible. Nothing in RPC could ever indicate whether a result is cacheable, for instance? Not sure how Jini's leasing fails to do this, or how having a field saying "cachetime: x" wouldn't work.
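
A sketch of that idea, with a client-side proxy honouring a provider-supplied cache time (the names are illustrative and not from any particular toolkit):

import java.util.HashMap;
import java.util.Map;

// The RPC result carries its own cacheability; the proxy honours it, which is
// exactly the sort of intermediation that is supposedly impossible.
public class CachingProxy {

    static class Result {
        String value;
        long cacheTimeMillis;                         // how long the provider says this may be reused
        long fetchedAt = System.currentTimeMillis();
        boolean isFresh() { return System.currentTimeMillis() - fetchedAt < cacheTimeMillis; }
    }

    interface PriceService { Result getPrice(String productId); }

    private final PriceService target;
    private final Map<String, Result> cache = new HashMap<>();

    CachingProxy(PriceService target) { this.target = target; }

    Result getPrice(String productId) {
        Result cached = cache.get(productId);
        if (cached != null && cached.isFresh()) {
            return cached;                            // served locally, no remote call made
        }
        Result fresh = target.getPrice(productId);    // the actual RPC
        cache.put(productId, fresh);
        return fresh;
    }
}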

REST of course addresses all of these concerns (and more) by having... a field that says you can cache the result, something that is trivial to add to an RPC environment but is lauded as hugely different. And REST, we are told, is great because it doesn't fit with normal programming language abstractions.

Want to bet?

Let's say we have a set of resources: Articles, Writers, Comments. Articles have content and links to writers and comments. Each comment has a link to other comments (sub-comments) and to its writer.

Now let's say I decide to hide all this from a developer; how hard would it be? The answer, of course, is that it would be trivial:

Article:
    content
    writer: Writer(link)
    foreach(comment in comments):
        commentList.add(Comment(comment.link))

Comment:
    content
    writer: Writer(link)
    foreach(comment in comments):
        commentList.add(Comment(comment.link))

Writer:
    name
    foreach(article in articles):
        articleList.add(article.link)
    foreach(comment in comments):
        commentList.add(Comment(comment.link))



Now when I retrieve an Article I process the XML; for every link I find to a comment or writer I create an object, passing the link to the constructor, and that object then loads itself. I could do this based on a config file which generates the classes automatically. Hell, it's just JAXB for Atom. So in fact you can make the REST approach fit perfectly. This works for GET and PUT (every time you update a field, just do a PUT), and for POST? Well, if we have a POST to /newarticle then we have a factory which creates an article, etc. POST a comment on an article? How about Article.addComment(content, user)? Have it return the URI and you're away.
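
To show that isn't hand-waving, here is a deliberately naive Java sketch of the static wrapping; the link markup, the regex "parsing" and the class shape are all my own assumptions, and a real version would generate the Article/Comment/Writer classes from the schema:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Each object takes a URI, GETs the representation and turns any links it finds
// into further objects on demand. The developer sees objects, not HTTP.
public class Resource {
    private static final HttpClient HTTP = HttpClient.newHttpClient();
    private static final Pattern LINK =
            Pattern.compile("<link\\s+rel=\"([^\"]+)\"\\s+href=\"([^\"]+)\"");

    private final String representation;                    // the raw XML we fetched
    private final List<String[]> links = new ArrayList<>(); // [rel, href] pairs

    public Resource(String uri) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(URI.create(uri)).GET().build();
        representation = HTTP.send(request, HttpResponse.BodyHandlers.ofString()).body();
        Matcher m = LINK.matcher(representation);
        while (m.find()) {
            links.add(new String[] { m.group(1), m.group(2) });
        }
    }

    // asking for "comments" or "writer" quietly does another round trip per link
    public List<Resource> linked(String rel) throws Exception {
        List<Resource> result = new ArrayList<>();
        for (String[] link : links) {
            if (link[0].equals(rel)) {
                result.add(new Resource(link[1]));
            }
        }
        return result;
    }

    public String raw() { return representation; }
}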

However, we can hide it even more dreadfully by doing it all dynamically at runtime. Every property can be retrieved on request by performing a GET and an XPath. In other words, we can hide the network so well that every single property request results in a network request.

The next bit is that, of course, the URI can hide whether a call is local or remote. Something like Google Gears, for instance, could be used to hide that, but you could go further and have a protocol of local:// which indicates an object call, so if you say local://comment/ then it parses the local object for the given comment id. Thus the URI itself can be used to subvert the local/remote visibility, and indeed to subvert the use of HTTP with its caching.

The statement is made that the "hypermedia constraint" will prevent this sort of approach. Now, I'm clearly missing something, because I can obviously map all of the links, dynamically at runtime if required or statically via generation; I can clearly map all of the possible requests (again dynamically if required); and I can provide this via a standard programming approach in an OO language. In Java a dynamic proxy approach could work, while in dynamic languages it's just about getting the property name asked for and then creating the entity. So I can do a static generation (JAXB style) for a given point in time, or have a massive overhead at runtime, all of which I can hide from the developer. Furthermore, the caching is set by the server side, which means that the developer has no clue whether a request on a retrieved object will result in a local fetch of cached information or a traversal across the network. These look identical, as it's hidden in the HTTP handling.
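
In Java the dynamic variant really is only a handful of lines with java.lang.reflect.Proxy; the interface and the stubbed-out fetch below are illustrative only:

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

// Every getter on the interface becomes a fetch behind the developer's back.
public class RemoteBackedProxy {

    interface ArticleView {
        String getContent();
        String getWriterName();
    }

    static ArticleView forUri(String uri) {
        InvocationHandler handler = (proxy, method, args) -> {
            // derive an element name from the getter, e.g. getContent -> content
            String field = method.getName().substring(3).toLowerCase();
            // a real version would GET the uri and run an XPath for the field;
            // the developer never sees that a network call just happened
            return fetchField(uri, field);
        };
        return (ArticleView) Proxy.newProxyInstance(
                ArticleView.class.getClassLoader(),
                new Class<?>[] { ArticleView.class },
                handler);
    }

    private static String fetchField(String uri, String field) {
        return "value of " + field + " fetched from " + uri; // stand-in for the GET + XPath
    }
}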

So REST can be made to fit in a "normal" programming language (unless someone can show me an example that couldn't fit), so again this isn't a real argument; it's just prejudice, and an example of the lack of sophistication in current REST frameworks and tools.

The writer admits that he used to "push" RPC (vendors and drug dealers... spot the difference) but doesn't admit that people have made RPC work for distributed systems. The writer then wishes that everyone used async as the standard approach. Again this misses the point that for most people async is a minefield of issues where they will bugger things up, rather than it representing any improvement. The theme of wishing that everyone in IT was as smart as the writer is fine, but it's not something on which to build the next generation of IT. As far as I can see, the more people who come into IT the lower the average IQ in IT seems to go.

He then campaigns for Erlang, with some features that look a lot like Ada's task management approach, which was brilliant and readable, putting it at least one ahead of Erlang. Hell, let's go for Occam; it was great for multi-threading.

The next section is just wishing that everyone had done something differently. Well, I wish that people weren't focused on character-saving over legibility, and how much better it would be as a result, but I haven't got my wish, and it's a beer discussion rather than a serious one.

The final bit is wishing that everyone would agree with the writer so everything will just be better, because in his words people who don't learn from history are doomed to repeat it. In this article, however, the writer appears not to have learnt from history, as he has chosen to selectively ignore facts that undermine his case and to make statements that do not stand up to scrutiny. So, in summary:
  • No decent architect built an RPC system which assumed local = remote
  • SOAP systems in particular have done RPC with intermediation
  • You can hide the network in REST systems
  • Wishing that everyone is as smart as you is not the way to improve the lot of IT
  • Development language and framework is tertiary to design and architecture in distribution
  • Large scale distributed RPC systems exist and work today
  • Distributed computing is about the architecture not about the coding
  • The transactions of the world reside on systems that use RPC
This article certainly does appear to be a case of just the convenience over correctness problem the writer rails against.

Hopefully the commenter is now happy that I've taken the time to break down in detail the issues I had with the article.


Tool and reality blindness


There are ways of developing robust distributed applications that don’t require code-generation toolkits, piles of special code annotations, or brittle enterprisey frameworks.


Is the view of Steve Vinoski. It's a general rant against RPC (it's really, really bad, BTW) and in praise of the wonderful programming language renaissance we're currently experiencing.

Ummm, I'm finding it really hard to think of anything that doesn't require code generation, as that is what any compiler does. As for "special code annotations", that is exactly what all "higher order" languages are about, as opposed to the register shifting and low-level elements of assembler. And as for the brittle enterprisey frameworks... we all know how brittle the likes of SAP, Oracle and IBM mainframes are, let alone the J2EE and other application servers that power the world's largest industries and indeed the world economy.

It's really rather sad when people get technology fundamentalism and think that a technology shift represents a genuine silver bullet.

One size doesn't fit all, and no technology change will get IT out of its current mess. People have built robust distributed applications for years, even decades, which suggests that maybe this current trend isn't the only way of getting things done.
