Thursday, September 24, 2009

REST Blueprints and Reference Architectures

Okay, so the REST-* stuff appears to have rapidly descended into pointless diatribe, which is a shame. One of the questions is what it should be instead (starting with REST-TX and REST-Messaging wasn't a great idea), and after a few internal and external discussions it has come down to a few points:
  1. What is "best practice"
  2. What is the standard way to document the interactions available & required
  3. How do we add new MIME types
Quite a bit of the technical basics has been done, but before we start worrying about a "standard" way of defining reliability in a REST world (yes, GET is idempotent... what about POST?) we should at least agree on what good looks like.

Back in the day Miko created the "SOA Blueprint" work around Web Services, an attempt to define a concrete definition of "good". Unfortunately it died in OASIS (mainly due to lack of vendor engagement), but I think the principles would apply well here.

The other piece that is good (IMO) is the SOA Reference Model. Roy Fielding's paper pretty much defines that reference model, but what it doesn't have is a reference architecture. Saying "the internet is the reference architecture" doesn't really help much; that is like saying a mountain is a reference architecture for a pyramid.

Now one of the elements here is that there appear to be parts of the REST community who feel that enterprise computing must "jump" to REST and the internet, or that it is otherwise irrelevant to REST. This isn't very constructive, as the vast majority of people employed in IT are employed in just those areas. B2B and M2M communications, with a decent dose of integration, are the standard problems for most people, not how to integrate with Yahoo and Amazon or build an internet-facing website.

For the enterprise we have to sacrifice a few cows, hopefully none of them sacred, that I've heard bandied around:
  1. You can't just say "it's discoverable" - if I'm relying on you to ship $500m of products for me then I don't want you messing around with the interface without telling me
  2. You can't just say "late validation" - I don't want you making a "best guess" at what I meant and me doing the same; I want to know that you are shipping the right thing to the right place
  3. You can't just say "it's in the documentation" - I need something to test that you are keeping to your side of the bargain; I don't want just English words telling me stuff, I want formal definitions... contracts
  4. You can't just say "look at this URI" - we are embarking on a 5-month project to build something new; you haven't done your stuff yet, you don't have a URI yet, and I need to mock up your side while you mock mine as we develop towards the release date. Iterative is good, but we still need a formal clue as to what we are doing
  5. You can't say "that isn't REST" if you don't have something objective to measure it against
So what I'd suggest is that rather than having the REST-* piece look at the technical standards, we should really focus on the basics mentioned above. We should use Roy's paper as the Reference Model from which an enterprise Reference Architecture can be created, and agree on a standard documentation approach for the technical part of that reference architecture.

In other words
  1. REST Reference Model - Roy's paper - Done
  2. REST Reference Architecture - TBD and not HTTP centric
  3. REST Blueprints - Building the RA into a concrete example with agreed documentation approaches (including project specific MIME types)
Right, now burn me at the stake as a heretic.


Monday, September 21, 2009

Theory v Practice - the opposite view

There is an age old saying
In theory, practice and theory are the same thing, in practice they aren't

This is true 90% of the time, but in Engineering it isn't always the case. I was speaking to someone a day or so ago about interviews and they were nervous as the job they were applying for required a specific programming skill and they had only done "a bit" of it.

What I told this poor young fool was that as they had talent (and they do) this lack of experience was just a minor element. Could they learn more in the week before the interview? I asked. "Sure" came the reply.

Well there you go. And if they ask questions about threading and deadlocks, can you answer them?

"Well I know the theory but not the syntax"

And it was here that I imparted the knowledge... it's actually the theory that counts, not the syntax. To this end I'll tell two tales.

My first job interview was for a start-up company. They had some interesting bits around Eiffel and were trying to create a meta-language on Eiffel that enabled multiple different GUIs and Databases from a single code base. Part of this would require me to know C. I was asked

"Do you know C"

"Sure" I said.

"You'll have to take a coding test next week to check," they said.

This gave me 7 days to learn C, a language I'd never coded in before. By the end of that week I was coding with pointers to functions which took pointers to arrays of functions as arguments. The reason was that I understood the theory and could quickly apply it to the new syntax.

I got the job... but they went bust 6 months later owing me two months' wages, so it wasn't the best story.

Now for another story, a good friend wanted to shift out of his current IT job which didn't do coding into a coding job. He had a bunch of theory and brains but no experience. I boldly said that I could coach him through a C++ interview in a couple of weeks. For 2 weeks we talked about classes, STL, friends and lots of other things.

He got to the interview, chatted for 30 minutes about computing in general and was asked the killer question

"So you know C++"

To which he quickly replied "Yes".... and the interview was over. He got the job and was pretty bloody good at it, despite the level of bluffing (although the single word "Yes" isn't the strongest bluff in the world).

The point is that if you understand the theory of programming languages and computing, then individual languages are just sets of syntax that implement that theory in a specific context. Unfortunately, in IT very few people understand the theory and are therefore condemned to badly implement software in the manner of an orangutan who doesn't understand English but has a dictionary of English words to point at.

Lots of the time theory is less important than practice, but in IT, if you don't know the theory then the odds are you'll be rubbish at the practice.


Wednesday, September 16, 2009

REST-* can you please grow up

Well, didn't Mark Little just throw in a grenade today around REST-* by daring to suggest that maybe, just maybe, there needs to be a bit more clarity on how to use REST effectively.

As he said, the REST-* effort "might end up documenting what already exists", which indicates that part of the challenge is that lots of people don't really know what REST is, and certainly struggle as they look to build higher-class systems and interoperate between organisations.

Part of this is of course about up-front contracts and the early v late validation questions. But part of it also appears to be pure snobbery and a desire to retain arcane knowledge which goes back to that "Art v Engineering" debate.

A few choice quotes from twitter

"Dear REST-*. Get a fucking clue. Atom and AtomPub already do messaging. No new specification needed, that's just bullshit busy work." - Jim Webber

"REST might lack clear guidelines, but something called REST-* with a bunch of vendors is hardly going to help!" - Jim again

"and if they think REST lacks guidelines for messaging/security/reliability/etc.., they're not looking hard enough" - Mark Baker

Now part of Mark Little's point appears to be that we need more clarity around what good should look like in the REST world, and this needs to be easier to access than it currently is. I've seen some things described as REST that were truly horrific, and I've seen other bits of REST that were a superb use of the approach. The problem all of them had was in learning about Atom and AtomPub and how to use them, how to use MIME types, and of course the balance between up-front contracts and late evaluation.

Would it really be such a bad thing to have an effort that got people together and had them agree on the best practices and then have the vendors support developers in delivering against that practice?

The answer of course is only yes if you want to remain "1337" with your arcane skills, where you can abuse people for their lack of knowledge of AtomPub and decry their use of POST where quite clearly a DELETE should have been used.

If REST really is a sword that can cut through the integration challenges of enterprises and the world, then what is the problem with documenting how it does this and having a few standards that make it much clearer how people should be developing? Most importantly, so people (SAP and Oracle, for instance) can create REST interfaces in a standardised way that can be simply consumed by other vendors' solutions. It can decide whether WADL is required and whether Atom and AtomPub really cover all of the enterprise scenarios, or at least all of the ones that count (i.e. let's not have a REST-TX to match the abomination of WS-TX).

This shouldn't be an effort like WS-*. Its first stage should be to do what Mark Little suggested and just document what is already there in a consistent and agreed manner which vendors, developers and enterprises can accept as the starting point, and that starting point would be clearly documented under some form of "standards" process.

Would that be a bad thing?

Update: just found out that one of the two things they want to do is REST-TX... it's like two blind men fighting.


Business Utilities are about levers not CPUs

"As a Service" is a moniker tagged onto a huge number of approaches. Often it demonstrates a complete marketing and intelligence fail, and regularly it just means a different sort of licensing model.

"As a Service" tends to mean utility pricing, and the best "As a Service" offers have worked out both what their service is and what its utility is. Salesforce.com have a CRM/Sales Support service (or set of services) and the utility is "people". It's a pretty basic utility and not connected to the value, but this makes sense in this area as we are talking about a commodity play, and hence a simple utility works.

Amazon, with their Infrastructure as a Service/Cloud offer, have worked out that they are selling compute, storage and bandwidth. Obvious, eh? Well not really, as some others appear to confuse or mesh the three together, which doesn't really drive the sort of conservation behaviour you'd want.

The point about most of these utilities, though, is that they are really IT utilities. SFDC measures the number of people who are allowed to "log on" to the system. Amazon measures the raw compute pieces. If you are providing the base services this is great. But what if you are trying to build these pieces for business people, and they don't want to know about GB, Gbps, RAM or CPU-hours? Then it's about understanding the real business utility.

As an example let's take retail supply chain forecasting, a nice and complex area which can take a huge amount of CPU power and where you have a large bunch of variables:
  1. The length of time taken to do the forecast
  2. The potential accuracy of the forecast
  3. The amount of different data feeds used to create the forecast
  4. The granularity of the forecast (e.g. Beer, or Carlsberg, Stella, Bud, etc.)
  5. Number of times per day to run it
Now each of these has an impact on the cost which can be estimated (not precisely as this is a chaotic system). You can picture a (very) simplified dashboard

So in this case a very rubbish forecast (one that doesn't even take historical information into account) costs less than its impact. In other words, you spend $28 to lose $85,000 as a result of the inaccuracy. As you tweak the variables, the price and accuracy vary, enabling you to determine the right point for your forecasts.

The person may choose to run an "inaccurate but cheap" forecast every hour to help with tactical decisions, an "accurate and a bit expensive" forecast every day to help with tactical planning, and a weekly "pretty damned accurate but very expensive" forecast.

The point here is that the business utilities may eventually map down to storage, bandwidth and CPU-hours, but you are putting them within the context of the business users and hiding away those underlying IT utilities.

Put it this way: when you set the oven to 200°C you are choosing a business utility. Behind the scenes the power company is mapping this to a technical utility (kWh), but your decision is based on your current business demand. With the rise of smart metering you'll begin to be able to see the direct cost impact of your business utility decisions.

This is far from a simple area but it is where IT will need to get to in order to clearly place the controls into the hands of the business.

They've paid for the car, shouldn't we let them drive?

Wednesday, September 02, 2009

Why I like Open Source documentation

I've got someone creating a structured Semantic Wiki for me at the moment and we are using Semantic Forms. One of the things we needed to do was pre-populate the fields, which means calling #forminput with the query string set. The documentation said:

query_string is the set of values that you want passed in through the query string to the form. It should look like a typical URL query string; an example would be "namespace=User&User[Is_employee]=yes".
Now this is accurate, but it misses out a couple of important bits.

  1. The namespace doesn't actually matter unless you are using namespaces (we aren't)
  2. The second "User" doesn't refer to the form name or to the namespace; it refers to the template name
  3. The underscore is only valid if you actually put it in the field name yourself (i.e. unlike other bits of MediaWiki, where "Fred Jones == Fred_Jones", that isn't true here)
So after a bit of randomly focused hacking I found the solution... and what did I do? I updated the documentation to add:
The format of a query string differs from the form_name in that it uses the template name. As an example if you have a "Person" template (Template:Person) and a Person Form (Form:Person_Form) for entry then it is the names from the Template that matter. An example to populate the Home Telephone field would therefore be: {{#forminput:PersonForm||Add Person|Person_Form[Home Telephone]=555-6666}} N.B. The FORM uses underscores while the field uses spaces.
Now this could be written better, I agree, but the point is that the next poor bugger through will have a better starting place than we did. Adding examples is particularly useful in much documentation and is something that is often missing. I regularly find myself Googling for an example after failing to understand something that the person writing the documentation clearly felt was beneath them to explain.

For commercial software you'd clearly like to see a bit more of an editorial process, to make sure it's not stupid advice like "install this malware", but it's an area where more companies could benefit from improvements in customer service and self-help by enabling people to extend their current documentation in ways that better fit how end-users see their technologies.