Showing posts with label REST. Show all posts

Tuesday, July 30, 2013

Surely REST isn't the travelling salesman doing design?

Occasionally I run across things on the web, this time tweeted by Mark Baker (@distobj), that make me read them several times.  The link he tweeted is to this page on a Nokia API and in particular this section...
Biggest advantages noticed is that the API itself becomes the documentation of the API
  • Without HATEOAS developers would get data, go to a documentation, and figure out how to build the next request
  • With HATEOAS developers learn what are the available next actions
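To make the distinction concrete, here is a purely illustrative sketch (the resource, URIs and link names are all invented, not from the Nokia API) of the same response with and without hypermedia links:

```python
# Illustrative only: a hypothetical order resource, first as bare data,
# then as a HATEOAS-style response advertising the next available actions.

bare_response = {
    "order_id": 42,
    "status": "processing",
    # Without hypermedia the client must consult documentation to learn
    # that e.g. POST /orders/42/cancel is how you cancel.
}

hateoas_response = {
    "order_id": 42,
    "status": "processing",
    # With hypermedia the next actions travel with the data itself.
    "_links": {
        "self":   {"href": "/orders/42", "method": "GET"},
        "cancel": {"href": "/orders/42/cancel", "method": "POST"},
        "items":  {"href": "/orders/42/items", "method": "GET"},
    },
}

def available_actions(response):
    """Return the action names a hypermedia-driven client can discover."""
    return sorted(response.get("_links", {}))

print(available_actions(bare_response))     # []
print(available_actions(hateoas_response))  # ['cancel', 'items', 'self']
```

The second response is what "the API documents itself" means in practice: the client only ever sees the actions reachable from where it currently stands.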
Either of these would get you fired, I think.  I'm a big fan of documentation and I'm a big fan of design.  What I'm not a big fan of is people who treat the design process as a series of abstract steps where the only thing that matters is the next step.

Let's be clear, having clear documentation available is important.  What would be poor is two things:
  1. Having to wait for someone to build an application before I can start writing a client for it
  2. Having documentation that is some sort of 'maze of twisty passages' where you only see one step ahead
This to me is at the heart of the death of design in IT: the lauding of everything as architecture, and the massive chasm that appears to be developing between that and writing the code.  I'm repeatedly seeing approaches which are code centric rather than design centric, and the entire history of IT tells us that this isn't the best way forwards.  Don't try me on the 'I just refactor' line as if that is an answer: spending one day thinking about a problem and planning out the solution (design) is worth five days of coding and twenty days of subsequent refactoring.

I'd really like a developer to be able to map out what they want to do, be able to read the documentation in one go and understand how they are going to deliver on the design.  I don't want a developer context switching between API, code and pseudo-design all the time and getting buried in the details.

This is part of my objection to REST: the lack of that up-front work before implementation.  If I have separate client and service teams I don't want to run a waterfall project of 'service finishes, start client', and if those are two separate firms I want to be able to verify which one has screwed up rather than endure a continual cycle of 'it's in the call response' and 'it was different yesterday'.  In other words I'd like people to plan for outcomes.  This doesn't mean not using REST for implementation, it just means that the design stage is still important and documentation of the interface is an exchangeable artefact.  Now if the answer is that you have a mock of the API up front and a webcrawler can extract the API documentation into a whole coherent model, then all well and good.

Because the alternative is the travelling salesman problem.  If I don't know the route to get things done and am making a decision on the quickest way one node at a time, then I'm really not looking at the efficiency of the end-to-end implementation, just the easiest way to do the next step.
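The analogy can be made precise with a toy example. Below, the city coordinates are invented; the point is that the greedy "nearest next step" strategy (a client that only ever sees the next link) produces a measurably worse route than planning the whole journey up front:

```python
# Greedy nearest-neighbour routing versus up-front route planning,
# on a small invented set of cities. Start city is cities[0].
from itertools import permutations
from math import dist

cities = [(0, 0), (1, 0), (2, 0), (0, 1.1)]

def route_length(route):
    return sum(dist(a, b) for a, b in zip(route, route[1:]))

def greedy_route(cities):
    """Always visit the nearest unvisited city next (one step ahead)."""
    route, remaining = [cities[0]], list(cities[1:])
    while remaining:
        nxt = min(remaining, key=lambda c: dist(route[-1], c))
        route.append(nxt)
        remaining.remove(nxt)
    return route

def planned_route(cities):
    """Consider every complete route up front and pick the shortest."""
    return min((cities[:1] + list(p) for p in permutations(cities[1:])),
               key=route_length)

# Deciding one node at a time loses to end-to-end planning.
assert route_length(greedy_route(cities)) > route_length(planned_route(cities))
```

On this instance the greedy walker covers roughly 4.28 units where the planned route needs about 3.59; the gap only grows with more nodes.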

This current trend of code centric thinking is retarding enterprise IT and impacting the maintainability of REST (and other) solutions.  This isn't a question of there being a magic new way of working that means design isn't important (there isn't); it's simply a question of design being continually undermined and discarded as an important part of the process.  Both of the scenarios outlined in the article are bad; neither represents good practice.  Choosing whether your manure sandwich is on a roll or a sub doesn't change the quality of the filling.

Think first, design first, publish first... then code.



Monday, February 04, 2013

People are the problem, can we stop pretending it's technology

A friend of mine the other day said an amazing thing:
I like coding in C++
I mean, seriously?  The land of friends, of people writing C code, of debugging nightmares; had things got that much better?  I know there are some good threading libraries now, but seriously, C++ is nice?
All of the idiots code in Java, they don't know C++
And there we have the point.  It's not about which technology is best, it's about the people using it.  I'll guarantee that if the idiots were in C++ he'd be having more problems, but because they are scared of it he can get more done in C++ safely, as for them it's terra incognita.  This for me is why debates around SOAP v REST are pointless and make me quite angry.  People pontificate on 'REST scales better' or something else that doesn't matter 99.99% of the time (as in yes, it might, but if something else scales acceptably then it's not an issue); it's like the 'Assembler is more efficient' bullshit that those of us who dared to code in C will remember.

The worst part about the technology marketing community, by which I mean analysts and vendors, is the ability to hype something that doesn't matter just because it's a new technology.  It isn't that this technology has to make things better, hell, it can actually make things worse; all it needs is some technical reason why it's better than something else.  'It's faster' in a place where that isn't important, 'It's quicker to develop your first solution' but a bitch to maintain. We've heard them all down the years.

So as part of my desire to see Thinking is Dead proven wrong I'd like to start a simple campaign.  Every time an analyst, vendor, consultant or developer tells you that something is 'better', ask the following three simple questions:

  1. How does it reduce the support costs?
  2. How does it reduce the salary levels of my developers?
  3. How does it have a measurable impact on its own on our top or bottom line?
This last point is critical.  I've seen some crackers down the years, particularly around integration technologies: 'We used technology X and shipped $1bn in products, therefore X delivered $1bn in revenue'.  No it didn't.  The only question is whether it cost less to develop and support with technology X; the best that an integration technology can hope for is a reduction in integration TCO.  It will never on its own deliver the value, because the value is in the information or transaction it delivers.  If it does that more cheaply then it's a cost saving, but it's never a revenue generator.


There are places where technology can have a top-line impact but those are very minimal (Predictive Analytics and HPC are about the only two I can name); everywhere else it's an enabler for people to deliver value.  So the goal of technology is to make the people work better, the people work more efficiently.  Having a technology that is 5% better than another at technology stuff but 10% worse from a people perspective is like Native Americans comparing getting the horse with being driven to near extinction.  Sure, it's a benefit, but it really doesn't outweigh the costs.

Thursday, March 01, 2012

WebSockets v REST (v WS-*): more pointless than Eclipse v Emacs (v vi)

Well the shiny kids have a new toy... it's WebSockets, and by jiminy if it isn't just the best thing ever.  Actually, to be fair to the people promoting WebSockets, they do appear to be taking a more rational, and less religious, stance than the REST crowd, but this discussion remains pointless.  Mark Little's recent InfoQ post was a good pitch on the debate.

Just as REST v WS-* is pointless, so the WebSockets debate is pretty pointless as well.  It's a new way of shifting information from A to B; it's an 'improvement' over REST and WS-* in some ways and not in others, but it doesn't actually make the job of conceptualising solutions any easier.
It's a rubbish picture but it gets the point over.  REST, WS-*, WebSockets, IIOP, MQ, flying pigeons, rats with backpacks and any other data shifting technology are about providing the mechanism via which two services can communicate.  It's a required piece, but the key question is 'what is the cheapest way'.  The value is provided by what goes across that mechanism, and to define that you need to understand how the producers and consumers need to interact, and that requires a conceptual model and thinking.

The hardest part of IT remains in that conceptual part: the planning, the thinking and the representing back to the business what is being done.  REST, WS-*, WebSockets or any other mechanism do precisely bugger all in helping to get that done.  The question I'd pose is this:

It's 2012 now: what has improved in the last ten years in terms of making systems easier to conceptualise and the business concepts easier to communicate, and what has been done to make the translation of that into technology (producer, interaction, consumer) much simpler and more straightforward?

From where I'm standing the answer appears to be a whole heap of bugger all. Does WebSockets make this problem much easier, or is it another low-level coding mechanism?  It's of course the latter. When people moan about the walled garden of Apple or 'monolithic' ERP they are missing a key point:
Technical excellence in the mechanism isn't important; it's excellence in delivering the value that counts.
See you in another 7 years for the next pointless debate.

Thursday, December 22, 2011

REST's marketing problem and how Facebook solved it

Earlier in the year I commented on REST being stillborn in the enterprise, and now Facebook have deprecated the REST API in favour of a Graph API. I could choose to say this is 'proof' that REST doesn't work for the Web either. That would be silly for a couple of reasons:

  1. The new API appears to be RESTful anyway
  2. REST clearly can work on the web
No, what this really shows is that they had an issue with naming conventions.   The folks at Facebook called the first API the 'REST API', which meant that when they felt there were problems with it they had two options:
  1. Have a new API called REST API 2.0
  2. Create a new name
Now the use of the term 'Graph' is, I think, actually a good move and one that is much more effective than the term 'REST' in describing what 'REST' is actually good at: the traversal of complex, inter-related networks of information.  This is a concept that resonates and has much less of the religious fundamentalism that often comes with REST.

Pulling this into the Business Information space of enterprises could be an interesting way of starting to shift reporting and information management solutions away from structured SQL type approaches into more ad hoc and user centric approaches.  'Graph based reporting' is something I could see catching on much better than 'REST'.  So have Facebook actually hit on a term that will help drive REST's adoption?  Probably not in the system to system integration space, but possibly in the end-user information aggregation/reporting space.

Time will of course tell, but dropping the term 'REST' from the name is a good start.



Wednesday, June 01, 2011

What REST needs to do to succeed in the enterprise

In the spirit of constructive criticism here is what REST needs to do in order to succeed in the enterprise and B2B markets, the sort of markets that make actual revenues and profits as opposed to hype markets with the stability of a bubble.

First off there is the mental change required, four steps here.
  1. Focus on how people and especially teams work
  2. Accept that HTTP isn't a functional API
  3. Accept that enterprise integration, B2B and Machine to Machine require a new approach
  4. Accept that the integration technology isn't the thing that delivers value
The point here is that REST won't move on and be successful beyond blogs and some very cool web sites and technologies unless it shifts away from technical purism and focuses instead on making the delivery of enterprise software solutions easier. This means helping disparate teams to work better together, and how do you do that?
DEFINE A CONTRACT
Seriously, it's that easy. The reason why WSDL succeeded in the enterprise is that it gave a very simple way of doing just this. The interface contract needs to define a limited number of things:
  1. What is the function being invoked (for REST this could just be a description)
  2. What data can be passed and will be returned
  3. How to invoke it (that would be the URI and the method (POST, GET, PUT, DELETE))
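As a purely hypothetical sketch (no such standard exists, and every field name here is invented), such a three-part contract could be expressed as plain data that tools can publish, diff and validate:

```python
# An invented sketch of the three-part contract described above:
# 1. the function, 2. the data in and out, 3. how to invoke it.
# None of these field names belong to any real standard.

get_customer_contract = {
    "description": "Fetch a single customer record",      # 1. the function
    "request":  {"customer_id": int},                     # 2. data passed...
    "response": {"customer_id": int, "name": str, "status": str},  # ...and returned
    "uri": "/customers/{customer_id}",                    # 3. URI and method
    "method": "GET",
}

def conforms(payload, schema):
    """Check a payload against the contract's simple field->type schema."""
    return (set(payload) == set(schema)
            and all(isinstance(payload[k], t) for k, t in schema.items()))

reply = {"customer_id": 7, "name": "ACME Ltd", "status": "active"}
assert conforms(reply, get_customer_contract["response"])
assert not conforms({"customer_id": "7"}, get_customer_contract["request"])
```

The value isn't in the checking code, it's in the artefact: something a client team and a service team can exchange and build against independently, which is exactly what WSDL gave SOAP.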
This contractual definition should be a standard which is agreed, and adhered to, by core enterprise vendors and supported by tools. Now before people scream "but that is against what REST is about" well then you have a simple choice
  1. REST remains a niche technology
  2. REST becomes used in the enterprise
Now in order to become more widely used we also need to agree things like how you do user authentication, field level security & encryption, and rules for reliability on non-idempotent requests, so you know whether your POST request really worked...

So what else does REST need to do? Well, it needs to focus on tools, because plumbing has zero value. Dynamism does happen, but it's measured in weeks and months, not days, which means an agile release process can handle it perfectly well; all that dynamism and low level coding doesn't really add anything to enterprise development.

This is a key point, something I raised in 2006 (SOA v REST more pointless than vi v emacs): the value is what happens AFTER the call is made. Focusing on making the calling "better" is pointless; the aim is to make the calling as simple as possible.

So basically, to succeed REST needs to copy the most successful part of SOAP... the WSDL. Sorry folks, but an "improved" WSDL based around REST, and the associated tooling, is required.

Or alternatively the REST crowd could just bury its head in the sand and pretend that it's the fault of the enterprise that REST isn't being adopted.

And remember:

There is no value in integration only in what integration enables.






Saturday, May 28, 2011

REST isn't undead in the enterprise... it's stillborn

It's always depressing to see fanbois bleating and moaning about their beloved technology or piece of bling not being universally liked. This is normally put down, in a wonderfully immature way, to the failure of "the other side" to see their point of view rather than to any innate failings of their beloved approach. SOAP is not Dead - Its Undead is a classic of the genre.

Why hasn't REST succeeded in the enterprise? Not, of course, because it isn't actually any better than SOAP for enterprise scenarios, and is indeed much, much worse in many. Nope.

But first there is the reason why REST is successful
His presentation showed that 73% of the APIs on Programmable Web use REST. SOAP is far behind but is still represented in 17% of the APIs.
This is like going to France, doing a language survey and declaring that French is the most popular language in the US. So what would the same query on the likes of Oracle, SAP, IBM or Microsoft's enterprise technology stacks deliver? I assumed the number to beat would be huge and got ready for some serious searching... but the number to beat is 2368... err, seriously? I've worked in single enterprises that had more SOAP endpoints than that. Then there are the libraries of WSDLs from SAP and Oracle; the behemoth that is Oracle AIA has so many that Oracle don't boast about it, as it might make it look complicated. Back in 2005 folks at Oracle boasted about over 3000 web services across their applications. Now before people bleat about this being proof of SOAP's complexity... that just makes you a hypocrite if on one hand you use the ProgrammableWeb stats as "proof" of REST's success but then try to use the massive volume of WSDLs out there as proof of SOAP's "complexity".

And I'm not even counting the global standards that use SOAP every day for B2B, people like SWIFT, Open Airlines... shall I go on and on? 2400 APIs is a success, and SOAP isn't anywhere near that? Like I say, it's like going to France and claiming French is the most spoken language on the planet.

All this just proves what I've said for a long time: REST works for information traversal, but it's not set up for the enterprise. So what is the issue with REST not displacing SOAP in the enterprise?
"All the tools, hires, licenses & codebase has been built around SOAP for a decade," Loveless wrote on Twitter. "Hard to turn on a dime."

Wow, the barefacedness of this statement is hard to beat. REST has been kicking on this door for over half of that time, and some folks argue that it in fact predates SOAP. So it really is bullshit to claim that it's all the fault of tools & codebase. SOAP replaced old EAI approaches in a couple of years in new enterprise projects. We went from a situation in about 1998 with everyone doing proprietary EAI integration, with occasional CORBA for RPC, to everyone by 2002 doing Web Services with some JMS. People in 2005 were telling me that REST was the future and REST would win, and now, SIX YEARS LATER, people are bleating about 10 years of SOAP adoption...

If an approach is better for integration in the enterprise it will be adopted. REST isn't better, yet, for enterprise integration because it fundamentally remains a developer approach, not a professional enterprise approach. SOAP isn't complex; technically it might suck (hell, my father said "great, we've now got enough processing cycles to burn that ASCII-RPC has finally made it") but conceptually it's simple, and when managing complex estates with lots of different people that conceptual simplicity wins.

Michael Cote of RedMonk hits the nail partially on the head when he says:
"As enterprise development teams start including cloud technologies in their applications, incompatible cloud platforms and APIs will be a huge road block," said Michael Cote, analyst at RedMonk. "We're already seeing a clamoring for tools and services that integrate this spaghetti bowl of end-points, and they're only going to become more important to realizing the benefits of cloud development."
In other words, the lack of a formal contract and standard interface mechanism remains the real reason why REST isn't being adopted in the enterprise.

What SOAP did was solve a problem that the enterprise had: how do I describe integration interfaces so my systems on different technology stacks can communicate, and do so in a way that enables my teams to work independently of each other? REST does not solve this problem in an effective way, and bleating about "dynamic interfaces" being "better" misses the whole point of what has made B2B and machine to machine integration successful down the years, namely a focus on people-centric approaches rather than technology-centric ones.

Unfortunately with REST there appears to be an active movement to stop this professionalism creeping into it and to block the definition of new standards that would actually make REST better for the enterprise.

REST needs a standard way to publish its API and a way to notify changes to that API. This is a "solved" problem in IT, but for some reason the REST community appears to prefer blaming others for the lack of enterprise success of their technology rather than facing up to the simple reality:
SOAP got some things right, and the biggest thing it got right was a shareable and toolable contract (WSDL) which enabled interfaces to be published in a standard way that included both functional and data standards.

SOAP isn't undead; it's very much living in the enterprise, and indeed it is the only real viable approach when integrating package solutions from a number of vendors (a massive piece of enterprise IT). REST, however, barely registers. Less than 2500 APIs after all these years of development? Pathetic.

REST for the enterprise isn't undead... it's been stillborn for over five years.


Monday, May 16, 2011

One year on: zero progress for Enterprise REST as Java races backwards

Just under a year ago I blogged about how REST and Sun had put enterprise IT back five years, so I thought it was about time to update that view and see what has happened in the last 12 months in the enterprise integration and governance space.

So on the REST front we've seen.... ummm.... struggling here.

Let's be clear: I'm talking here about REST as an enterprise integration approach, not as a way of exposing a Web API for content aggregation but as a functional integration approach for enterprises. Something to replace the "fundamentally flawed" WS-* that REST is so much better than. So what is the progress this year? Zip, zero, nada. Yup, a few minor tweaks to enterprise stacks that say they can produce REST interfaces, but in reality most of them can't, and the key problems of interface publication, versioning and testing remain unsolved.

Am I being harsh on REST? I don't think so. It's had more than enough time, and hype, to address the real problems of enterprise computing and step out of the niche (and in revenue terms it is a niche) of Web content aggregation. REST is great for that niche, but that wasn't the pitch made; the pitch was that REST was great, WS-* sucked and REST would solve all the problems that existed with WS-*. This hype, and the smart people who followed it, has led to a stagnation in enterprise integration that is really hitting enterprise computing in real revenue terms. It's slowing the adoption of cloud computing and generally meaning that IT departments are less credible and less successful than they should be.

So one year on and REST continues to flatter to deceive.

What about Java? Well, here the story is actually worse. I honestly believe that the REST crowd are a smart bunch of bunnies trying to solve problems, just not realising that most developers aren't as smart as them and that interface dynamism is actually a bad thing. With Java that is sadly not the case. With Mark Reinhold now actively subverting the JCP, and the debacle in the JCP generally over the last year, it's hard to think that the situation is doing anything other than getting worse.

What does this mean for enterprise IT? Well, a couple of things. It means that SAP, IBM and Oracle, the three-headed beast of most IT estates, don't have a clear future integration improvement roadmap; it's all still based around WS-* as it was 4 to 5 years ago, and they are all adding proprietary "tweaks" which "help" people when using their platforms. It also means that their core platform, Java, is stagnating and not getting some of the fundamental changes it needs to address the cloud and dynamic scalability. In particular, the continuing "kitchen sink" approach of the Java dictatorship means that reduced profile VMs tuned to specific tasks (like Mobile, potentially) just aren't being addressed, which is leading directly to a fragmentation in the core operating platform.

What does that mean? Well, with REST it means that enterprise IT shops are relying on WS-* for integration but still coming across things like vendors not having WS-Security support, and with the decline of WS-I the number of "strange" defects is liable to be on the rise. Efforts around more formal contract approaches are dead. This means that the "spaghetti" of enterprise IT, which looked to be improving in the first six years of the new millennium, is actually now getting worse again. REST aimed at WS-* squarely and surely, and has certainly hit it with a wounding shot; unfortunately it turns out that REST is incapable of being the replacement for WS-* it wanted to be. REST is Brutus, WS-* is Caesar.

And with Java? It means we are seeing language fragmentation and platform fragmentation, which means that the support costs of IT estates are going to rise, and so the ongoing challenge of reducing operating costs to enable investment in new solutions is swinging back against the business and towards entrenched IT estates.

I really can't see how a large scale enterprise IT programme is better off in 2011 than it was in 2005.



Sunday, January 16, 2011

Using REST and the cloud to meet unexpected and unusual demand

I'm writing this because it's something that I recommended to a client about 3 years ago, and I know they haven't adopted it because they've suffered a number of outages since then. The scenario is simple: you've designed your website to cope with a certain level of demand and you've given yourself about 50% leeway to cope with any spikes. Arguably this means that you are spending a third more on hardware and licenses than you need to, but realistically it's probably a decent way of capacity planning without getting too complex.

Now comes the problem though. Every so often this business gets unexpected spikes; these spikes aren't a result of increased volume through the standard transactions but of a peak on specific parts of their site, often new parts related to (for instance) sales or problem resolution. The challenge is that these spikes are anything from 300% to 1000% over their expected peak, and the site just can't handle it.

So what is the solution? The answer is to use the power of HTTP, and in particular the power of the redirect. I'm saying that this is REST, though it's something I'd done before I knew about REST, but I'm not one to let a bit of reality get in the way of marketing ;) When I'd done it previously it was prior to cloud, but the architecture was basically the same.

First you split your infrastructure architecture into two parts

  1. The redirecting part (hosted in the cloud, or at least on a separately scalable part of your infrastructure)
  2. The bit that does the work


The redirect part just sends an HTTP redirect (code 307, so it isn't cached) to the new site, so let's say http://example.com goes to http://example.com/home. It's important to note here that this is the only page we are redirecting; it's not the case that every page has this, just the main page, because when there is a mega-spike it tends to come via the homepage.
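The redirecting part really is this small. Here is a minimal sketch using only the Python standard library; the target URL is illustrative, and in a real deployment the redirector would sit on its own separately scalable infrastructure:

```python
# A minimal sketch of the redirecting part: the home page gets a
# 307 Temporary Redirect (not cached, method preserved); everything
# else is left alone. TARGET is an illustrative placeholder.
from http.server import BaseHTTPRequestHandler, HTTPServer

TARGET = "http://example.com/home"  # swapped for the microsite during a spike

class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/":
            # Only the main page is redirected, as described above.
            self.send_response(307)
            self.send_header("Location", TARGET)
            self.end_headers()
        else:
            # Every other path would be served (or proxied) as normal;
            # this sketch just answers 404 for them.
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), RedirectHandler).serve_forever()
```

During a spike, the only change needed is repointing `TARGET` at the microsite; the working site behind it is untouched.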

Now I'm always one to warn about being chatty, but the wonder of a redirect is that the user sees a URL flicker in their browser and then the normal page loads. This is certainly the overhead of a single extra call, but from experience it isn't a big deal in modern sites where you have a page made up of multiple fragments; the additional redirect doesn't add a significant amount, and it's only on the initial page load, which now takes two network hits rather than one. It's an increase in latency for that homepage but not much of an increase in terms of load time.

Now let's wander into the world of cloud. What does this get us, and why is it worth adding this overhead?

Well, when you have an extraordinary event you should really think about creating new pages for it rather than just tacking pages onto your normal site. If you are in a scenario where 70-98% of your visitors are looking at a specific piece of content then you are much better off thinking in terms of a microsite rather than adding it to your normal site.

All of the old URIs that go beyond the main page should still go to their old places, but the home page needs to be redirected to your new microsite. Now some people will be screaming "just use a load balancer", and they have a bit of a point, but I've always been a fan of offloading processing onto the client, and this is exactly what the redirect does.

So now the redirect site uses the same template as the home site in terms of CSS and key navigation, but it doesn't include all of the dynamic bits and fragments that were on the old front page. It includes two things:

  1. The information directly related to the extraordinary event
  2. Links off to the normal site

So now our original redirection goes from http://example.com to http://example.com/event, and we scale the event part to our new demand. If it's truly extraordinary then you are better off doing it as static pages and having people make the modifications (even if updates are every 5 minutes, it's cost-wise a lot less than call centre staff). The point is simple: you are scaling the extraordinary using the cloud.

So, spotted the big point here? It's something that you can do with a traditional infrastructure and then make the shift to cloud for what cloud is really good at: handling spikes. You don't have to redesign your current site to scale dynamically; you just have to use a very simple policy and have a cloud solution that you can rapidly put up in the event of a massive spike.

A couple of hints:

  1. Have the images for the spike ready to go and monitor at the redirect level to automatically kick-in the spike protector
  2. Have an automatic process to dump a holding page onto the spike protector which tells people that more information is coming soon, they’ll tend to refresh rather than go to the rest of the site
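The "kick in the spike protector" policy from hint 1 can be sketched in a few lines. Everything here is an assumption for illustration: the threshold, the URLs and the idea of measuring requests per second at the redirect layer are invented examples, not measured values:

```python
# An invented sketch of hint 1: watch the request rate at the redirect
# layer and flip the redirect target when a spike is detected.
# Threshold and URLs are illustrative assumptions.

NORMAL_TARGET = "http://example.com/home"
SPIKE_TARGET  = "http://example.com/event"   # the static microsite
SPIKE_THRESHOLD = 3.0  # e.g. kick in at 3x the expected requests per second

def redirect_target(current_rps, expected_rps):
    """Pick where the home-page redirect should point right now."""
    if expected_rps > 0 and current_rps / expected_rps >= SPIKE_THRESHOLD:
        return SPIKE_TARGET  # spike protector kicks in
    return NORMAL_TARGET

# Normal load keeps the usual home page; a 5x spike flips to the microsite.
assert redirect_target(100, 100) == NORMAL_TARGET
assert redirect_target(500, 100) == SPIKE_TARGET
```

Because the decision lives entirely in the redirect layer, the main site never needs to know a spike is happening.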

You don’t need the normal commercial licenses as you can do it via static uploads (the normal site can do its dynamic magic on your old infrastructure) or a temporary OSS solution.

I'm often confused as to why people try to scale to meet extraordinary demand on a normal architecture. People seem not to realise that most spikes aren't a result of your core business getting 500% more popular overnight; they're normally a result of a specific promotion or problem, and it's that specific area which needs scaling. If it's a promotion you need to scale the people hitting that promotion and then look at either scaling the payment piece, putting in place a temporary process or throttling the requests through that part of the process. If it's an issue then treat the site like a news site and statically publish updates.

So there you go: by using the power of a simple command, "redirect", you can take advantage of cloud quickly and effectively, and if you never get the extraordinary event it doesn't cost you much, if anything.

So get on with the power of redirect and link it to the power of the cloud, because that is when technical things are actually interesting: when they can simply be used to solve a problem cheaply that previously was too expensive to solve.


Sunday, June 20, 2010

REST has put enterprise IT back five years, Sun has put it back ten

Okay, I've watched REST, Clojure and the other shiny new things rise up, and for the last 9 months I've been back in the bowels of large, let's just say massive, scale enterprise IT delivery, and I've come to a conclusion.

IT is in a worse place now than it was 5 years ago. The "thinkers" in IT are picking up the shiny new tools and shiny new things and yet again continuing to miss the point of what makes enterprise IT better. There are a few key points that need to be remembered.
  1. Art v Engineering is still the big problem - 90%+ of people in IT aren't brilliant, a large proportion aren't even good
  2. Contracts really matter - without them everything becomes tightly bound no matter what people claim about "dynamism"
  3. No technology has ever, or will ever, deliver a magnitude increase in performance
  4. The hardest part of IT is integrating systems and services, not integrating people. People are good at context shifting and vagueness; good interfaces are fantastic optimisers, but even average user interfaces can be worked around by users.
The likes of Clojure and REST haven't improved this situation in the enterprise in any marked way. It's been 5+ years since REST started being hyped, and in comparison with the benefits that WS-* delivered to the enterprise in 5 years it's been zip, zero, nada. The "dynamic" languages piece that kicked off about the same time has delivered similar benefits to large scale enterprise computing (you know, the stuff that keeps most people in IT in a job) over the "old school" Java approach.

A few years ago I said that if you wanted a career then learn Web Services; if you want to be cool, learn REST. Since then it's become clear that some people have made careers in REST... but you know what?
  1. It's as hard to do a large scale programme with a large integration bent as it was 5 years ago.
  2. There are fewer really good enterprise-qualified developers, as they've got "dynamic" language experience and struggle, or worse, bring the dynamic language in and everyone else struggles
  3. Vendors have been left to their own devices which has meant less innovation, less challenge and higher costs as the Open Source crowd waste time on pet projects that aren't going to dent enterprise budgets

In 5 years from 1998-2003 Java and Web Services went from near-zero to being everywhere, innovation was being done at the edges and then being applied to the enterprise. It was a melting pot and it improved enterprise IT, thanks in part to the smart people working at the edge and the way this pushed the vendors forwards...

Well now SAP, Oracle and IBM are still heavily backing Web Services but there is a big problem....

No one is holding them to account. With all of the cool, and smart, kids off playing with REST and Clojure and the like we've got an intellectual vacuum in enterprise IT that is being "filled" by the vendors in the only way that vendors know how. Namely by increasing the amount of proprietary extensions and software and pushing their own individual agendas.

So we get the bizarre world in which Siebel's Web Service stack has a pre-OASIS specification of WS-Security, last updated in 2006 by the looks of it. We get a world where IBM is still pushing the old MQSI cart horse as an "Advanced ESB", and generally the innovation in this space has just collapsed in the last few years. Working on a programme doing integration with Web Services in 2010 feels pretty much like 2005; sure, some specifications have come out and there is some improvement, but overall it's pretty stagnant.

"Oh do REST then" I hear the snake-oil salesmen cry. Really? If you had to do an integration between 20 different programmes and over 300 systems in a heavily regulated area then you'd use REST? High value assured transactions between different vendors and providers over a "dynamic" interface?

Give me a break.

What works in these areas is what has always worked
  1. Define your interfaces, nail them down, get them right
  2. Test like a bastard to those interfaces
You can't do complex programmes without having those firm areas; this is why major engineering programmes don't have variable interfaces for screws. Now, before someone pipes up with a nice edge case where 200 people did something complex: please do that 20 times in a single organisation and then give me a call.
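Those two rules can be sketched as a contract-first check: the interface is pinned down as data, and implementations are tested against that data rather than against their own code. Everything here (the "Ship Product" capability, the contract fields) is a hypothetical illustration, not any particular WS-* tooling:

```python
# Hypothetical contract for a "Ship Product" capability: the interface is
# nailed down as data, and implementations are tested against it.
SHIP_PRODUCT_CONTRACT = {
    "request": {"order_id": str, "quantity": int},
    "response": {"tracking_id": str, "status": str},
}

def validate(message, schema):
    """Check a message against the fixed contract; fail loudly on drift."""
    for field, expected in schema.items():
        if field not in message:
            raise ValueError("contract violation: missing field %r" % field)
        if not isinstance(message[field], expected):
            raise TypeError("contract violation: %r has wrong type" % field)
    extra = set(message) - set(schema)
    if extra:
        raise ValueError("contract violation: unexpected fields %r" % extra)
    return True

def ship_product(request):
    """One possible implementation; the contract, not this code, is the spec."""
    validate(request, SHIP_PRODUCT_CONTRACT["request"])
    response = {"tracking_id": "TRK-" + request["order_id"], "status": "SHIPPED"}
    validate(response, SHIP_PRODUCT_CONTRACT["response"])
    return response
```

The point of the sketch is that "test like a bastard to those interfaces" means the test harness only ever sees the contract dictionary, so twenty teams can test against it independently.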

In 2006 I asked why there was no WS-Contract, and the real reason was that it wasn't a good vendor specification (WS-TX is that) but it's a brilliant enterprise specification. So WS just had Security and Reliability, important things for the enterprise, but didn't make the next step.

And what has REST given us in the last few years? Errr please, folks... Now in my current engagement there was a great area where REST would have been very useful (Reference Data) and some where it would have been quite useful (Federated Data Network Navigation). The problem is twofold:
  1. Most people in IT appear to know bugger all about it. Now I continue to be surprised at how little people who work in IT read about what is going on, but I was really surprised at how little traction REST had.
  2. EVERYTHING is manual and there is still no standardised way to communicate on WTF you are doing between teams
Now if you had 2 then you could do 1. I did this with WS back in 2000-1, when most people thought I was just making up this WS stuff I could run between the data centres over port 443; I had interfaces, they had tools, we got it working.

Now before RESTafarians jump up and talk about all of the wonderful WEB things they've been doing: that is great and wonderful, but it's not my world. My world is having 300 systems that vary from 20+ years old to just being built, and I need to get them all working together. Even when REST was the right "architectural" solution it was the wrong "programme" solution, as it would have driven us off a cliff. My world does have stratospherically large budgets however... you know what, if you want to make real cash wouldn't it be a good idea to address the richest part of the IT market?

But my real ire I reserve for a company I used to really respect but which, at the same time as REST began to get a load of buzz, drove a huge amount of enterprise IT off a cliff. When Java SE 6 was released I said it wasn't enterprise grade, and indeed very rapidly the stupidity of the decision to push JAX-WS into Java SE became apparent (yes, please note, I was massively against WS-* in Java SE, partly because if someone wants to be a RESTafarian why the hell should they have to have WS-* cruft in their environment?). This was also the release that added scripting to Java SE.

I'm now seeing, 4 years later, the impact of this stupidity on the enterprise. Java SE 6 is dribbling in under the application servers, but the mentality that it represented, namely that Sun was more interested in "Joe Sixpack" and the cool crowd than the enterprise, really helped to ensure that it was left to the vendors to undertake the enterprise side, and Java began to stop being the innovative platform or language. The bits that the enterprise wanted, things like profiles and dynamic loading, were deferred into Java SE 7, which is now (by my reckoning) 2 years overdue.

Sun championed the new "cool" languages and undermined the whole thing which made Java good for the enterprise: consistency. Having lots of different languages is rubbish in the enterprise; having the same basic platform from 4 different vendors is much, much better on every level. So now we have people proposing doing programmes with 4 or 5 different languages and it's being seen as "reasonable", and we are also seeing some great developers doing some interesting things on platforms that will never bring benefits... I can't help but wonder whether Spring or Hibernate would ever have delivered any benefit if it wasn't for the fact that they operated on the common platform... oh wait, I don't have to wonder, they wouldn't have been anywhere near as successful.

So the last 5 years have been poor ones for enterprise IT. WS-* is the only viable system-to-system integration mechanism for large scale programmes, but it's stagnating. REST has some cool stuff at the front end, and for people who can invest in only the very highest calibre individuals, but is delivering bugger all for the average enterprise environment.

Why is this a problem?

Well, most of the modern world is driven by computers, computers are what makes economies tick, and it's integration that matters in a networked world. The immature and techno-centric way in which IT approaches enterprise solutions is ensuring that, far from being an accelerator that works ahead of business demand, it is IT that is all too often the dragging anchor to growth. This obsession with purist solutions that solve problems that don't exist is just an exercise in intellectual masturbation which is actively harming developed economies.

Far too much shiny, shiny, far too little getting hands dirty with pragmatic realities.

So maybe I'll come out of this funk and people will point to the wonderful things that are massive improvements over the past 5 years and how some of the biggest enterprise IT challenges in the world are being solved by people in application support thanks to these new developments.....

But I won't hold my breath


Thursday, February 04, 2010

Why contracts are more important than designs

Following on from my last post on why IT evolution is a bad thing, I'll go a stage further and say that far too much time is spent on designing the internals of elements of services and far too little on their externals. Some approaches indeed claim that working on those sorts of contracts is exactly what you shouldn't do, as it's much better for the contract to just be "what you do now" rather than having something fixed.

To my mind that viewpoint is just like the fake-Agile people who don't document because they can't be arsed rather than because they've developed hugely high quality elements that are self-documenting. It's basically saying that everyone has to wait until the system is operable before you can say what it does. This is the equivalent of changing the requirements to fit the implementation.

Now I'm not saying that requirements don't change, and I'm not advocating waterfall. What I'm saying is that, of the time allocated in an SOA programme, the majority of the specification and design effort should be focused on the contracts and interactions between services, and the minority on the design of how those services meet those contracts. There are several reasons for this:
  1. Others rely on the contracts, not the design. The cost of getting these wrong is exponential based on the number of providers. With the contracts in place and correct then people can develop independently which significantly speeds up delivery times and decreases risk
  2. Testing is based around the contracts, not the design. The contract is the formal specification; it's what the design has to meet, and it's this that should be used for all forms of testing
  3. The design can change but still stay within the contract - this was the point of the last post
The reality however is that IT concentrates far too much on the design and coding of internals and far too little on ensuring the external interfaces are at least correct for a given period of time. Contracts can evolve, and I use the term deliberately, but most often older contracts will still be supported as people migrate to newer versions. This means that the contracts can have a significantly longer lifespan than the designs.
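A minimal sketch of point 3: two internal designs, one contract, and a test that binds only to the contract. The service (an exchange-rate lookup) and its rates are hypothetical:

```python
# Two internal designs behind one fixed contract. Callers and tests bind to
# get_exchange_rate(currency) -> float, never to the internals.
class LookupTableDesign:
    """First design: a static lookup table."""
    RATES = {"EUR": 1.0, "GBP": 0.86, "USD": 1.09}

    def get_exchange_rate(self, currency):
        return self.RATES[currency]

class ComputedDesign:
    """Later redesign: rates derived from a base table. Same contract."""
    BASE = {"EUR": 1.0, "GBP": 0.86, "USD": 1.09}

    def get_exchange_rate(self, currency):
        return self.BASE["EUR"] * self.BASE[currency]

def contract_test(service):
    """The contract is the spec: the same assertions apply to every design."""
    assert service.get_exchange_rate("EUR") == 1.0
    assert abs(service.get_exchange_rate("GBP") - 0.86) < 1e-9
    return True
```

Because `contract_test` never looks inside either class, the redesign ships without touching a single consumer: the contract outlives the design.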

As people rush into design and deliberately choose approaches that require them to do as little as possible to formally separate areas and enable concurrent development and contractual guarantees, they are just creating problems for themselves that professionals should avoid.

Contracts matter, designs are temporary.


Is IT evolution a bad thing?

One of the tenets of IT is that enabling evolution, i.e. the small incremental change of existing systems, is a good thing and that approaches which enable this are a good thing. You see it all the time when people talk about Agile and code quality, and clearly there are positive benefits to these elements.

SOA is often talked about as helping this evolutionary approach as services are easier to change. But is the reality that actually IT is hindered by this myth of evolution? Should we reject evolution and instead take up arms with the Intelligent design mob?

I say yes, and what made me think that was reading Richard Dawkins in The Greatest Show on Earth: The Evidence for Evolution, where he points out that quite simply evolution is rubbish at creating decently engineered solutions:
When we look at animals from the outside, we are overwhelmingly impressed by the elegant illusion of design. A browsing giraffe, a soaring albatross, a diving swift, a swooping falcon, a leafy sea dragon invisible among the seaweed [....] - the illusion of design makes so much intuitive sense that it becomes a positive critical effort to put critical thinking into gear and overcome the seductions of naive intuition. That's when we look at animals from the outside. When we look inside the impression is opposite. Admittedly, an impression of elegant design is conveyed by simplified diagrams in textbooks, neatly laid out and colour-coded like an engineer's blueprint. But the reality that hits you when you see an animal opened up on a dissecting table is very different [....] a haphazard mess that we actually see when we open a real chest.


This matches my experience of IT. The interfaces are clean and sensible. The design docs look okay but the code is a complete mess and the more you prod the design the more gaps you find between it and reality.

The point is that actually we shouldn't sell SOA from the perspective of evolution of the INSIDE at all; we should sell it as an intelligent design approach based on the outside of the service: its interfaces and its contracts. By claiming internal elements as benefits we are actually undermining the whole benefit that SOA can actually deliver.

In other words, the point of SOA is that the internals are always going to be a mess and we are always going to reach a point where going back to the whiteboard is a better option than the rubbish internal wiring that we currently have. This mentality would make us concentrate much more on the quality of our interfaces and contracts and much less on technical myths of evolution and dynamism, which inevitably lead into a pit of broken promises and dreams.

So I'm calling it. Any IT approach that claims it enables evolution of the internals in a dynamic and incremental way is fundamentally hokum and snake oil. All of these approaches will fail to deliver the long term benefits and will create the evolutionary mess we see in the engineering disaster which is the human eye. Only by starting from a perspective of outward clarity and design, and relegating internal behaviour to the position of a temporary implementation, will we start to create IT estates that genuinely demonstrate some intelligent design in IT.




PS. I'd like to claim some sort of award for claiming Richard Dawkins supports Intelligent Design

Monday, January 25, 2010

Define the standards FIRST

One of the bits that often surprises me, no, in fact it doesn't surprise me, it stuns me, is the amazing way that people don't define the standards they are going to use for their project, programme or SOA effort right at the start. This means the business, requirements and technical standards.

Starting with the business architecture, that means picking your approach to defining the business services. Now you could use my approach or something else, but whatever you do it needs to be consistent across the project, and across the enterprise if you are doing a broader transformation programme.

On requirements it's about structuring those requirements against the business architecture and having a consistent way of matching the requirements against the services and capabilities so you don't get duplication.

These elements are about people, processes and documentation; they really aren't hard to set up, and it's very important that you do this so your documentation is in a consistent format that flows through to delivery and operations.

The final area is the technical standards, and this is the area where there really is the least excuse. Saying "but it's REST" and claiming that everything will be dynamic is a cop-out, and it really just means you are lazy. So in the REST world what you need to do is:
  1. Agree how you are going to publish the specifications to the resources, how will you say what a "GET" does and what a "POST" does
  2. Create some exemplar "services"/resources with the level of documentation required for people to use them
  3. Agree a process around Mocking/Proxying to enable people to test and verify their solutions without waiting for the final solution
  4. Agree the test process against the resources and how you will verify that they meet the fixed requirements of the system at that point in time
This last one is important. Some muppet tried to tell me last year that, as it was REST, the resource was correct as it stood: it was in itself the specification of what it should do, and the test harnesses should dynamically discover only what the REST implementation already did. This was muppetry of the highest order, and after forcing the individual to ingest a copy of the business requirements document we agreed that the current solution didn't match the business requirements, no matter how dynamically it failed to do so.

So with REST there are things that you have to do as a project and programme; they take time and experience, and you might get them wrong and need to update them. If you've chosen to go Web Services, however, and you haven't documented your standards, then to be frank you really shouldn't be working in IT.
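Points 1 and 3 of the REST list above could be sketched as a published verb specification plus a trivial mock that client teams build and test against while the real resource is still being written. The paths, payloads and status codes here are all hypothetical:

```python
# A published specification of what each verb on each resource does, plus a
# mock that serves canned responses until the real implementation exists.
RESOURCE_SPEC = {
    ("POST", "/orders"): "Creates a new order; returns its URI.",
    ("GET", "/orders/{id}"): "Returns the order's current status document.",
}

class MockOrderService:
    """Stand-in agreed between teams; swapped for the real service later."""

    def __init__(self):
        self.orders = {}
        self.next_id = 1

    def handle(self, method, path, body=None):
        if method == "POST" and path == "/orders":
            uri = "/orders/%d" % self.next_id
            self.next_id += 1
            self.orders[uri] = {"status": "PLACED", "items": body or []}
            return 201, {"location": uri}
        if method == "GET" and path in self.orders:
            return 200, self.orders[path]
        return 404, {"error": "unknown resource"}
```

The spec dictionary is the bit both teams sign up to; the mock is disposable.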

So in the Web Service world it really is easy. First off, do you want to play safe and solid, or do you need lots of call-backs in your Web Services? If you are willing to cope without call-backs then you start off with the easy ones:
  1. WS-I Basic Profile 1.1
  2. WSDL 1.1
  3. SOAP 1.1
Now if you want call-backs it's into WSDL 2.0, and there are technical advantages to that, but you can get hit by some really gnarly XML marshalling and header clashes that exist when going between non-WS-I-compliant platforms. You could choose to define your own local version of WS-I compliance based around WSDL 2.0, but most of the time you are better off investing in some decent design and simple approaches, like having standard matched schemas for certain process elements and passing the calling service name, which can then be resolved via a registry to determine the right call-back service.
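The registry idea at the end of that paragraph, passing the calling service's name and resolving its call-back endpoint at runtime, might look something like this sketch (the service names and URLs are made up):

```python
# Instead of WSDL 2.0 call-backs, the caller passes its service name and the
# provider resolves the matching call-back endpoint via a shared registry.
SERVICE_REGISTRY = {
    "SalesService": {"callback": "https://sales.example.internal/notify"},
    "FinanceService": {"callback": "https://finance.example.internal/notify"},
}

def resolve_callback(calling_service):
    """Look up where to send the asynchronous reply for a given caller."""
    entry = SERVICE_REGISTRY.get(calling_service)
    if entry is None:
        raise KeyError("unknown calling service: %s" % calling_service)
    return entry["callback"]
```

The design point is that the call-back binding lives in one governed place rather than in every WSDL, so services can move without every consumer republishing contracts.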

Next up you need to decide if you are going WS-* and if so what do you want
  1. WS-Security - which version, which spec
  2. WS-RM - which version, which spec
  3. WS-TX - you're kidding, right?
For each of these elements it is really important to say which specification you are going to use as some products claim they support a specification but either support an old version or, more impressively, support a version of the standard from before it was even submitted to a standards organisation.

The other piece is to agree on HTTP as your standard transport mechanism. Seriously, it's 2010 and it's about time that people stopped muttering "performance" and proposing messaging as an alternative. If you have real performance issues then go tailored and go binary, but 99.999% of the time this would be pointless and you are better off using HTTP/S.

You can define all of these standards before you start a programme and on the technical side there really is little excuse in the REST world and zero excuse in the WS-* world not to do this.

Saturday, January 09, 2010

Think in RPC develop in anything

Gregg Wonderly made a good comment on the Yahoo SOA list the other day
I think one of the still, largely unrecognized issues is that developers really should be designing services as RPC interfaces, always. Then, different service interface schemes, such as SOAP, HTTP (Rest et.al.), Jini, etc., can be more easily become a "deployment" technology introduction instead of a "foundation" technology implementation that greatly limits how and under what circumstances a service can be used. Programming Language/platform IDEs make it too easy to "just use" a single technology, and then everything melds into a pile of 'technology' instead of a 'service'.


The point here is that conceptually RPC is very easy for everyone to understand and at the highest levels it provides a consistent view. Now before people shriek that "But RPC sucks" I'll go through how it will work.

First off let's take a simple three service system where from an "RPC" perspective we have the following:

The Sales Service which has capabilities for "Buy Product" and "Get Forecast"

The Finance Service which has capabilities for "Report Sale" and "Make Forecast"

The Logistics Service which has capabilities for "Ship Product" and "Get Delivery Status"

There is also a customer who can "Receive Invoice"
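As a sketch, this conceptual model is just services containing capabilities, with no deployment technology chosen yet. These interfaces are a hypothetical rendering of the capabilities listed above:

```python
# The conceptual RPC view: each service is an interface of capabilities.
# Whether a call becomes SOAP, REST, batch or an event is a deployment choice.
from abc import ABC, abstractmethod

class SalesService(ABC):
    @abstractmethod
    def buy_product(self, customer, product): ...

    @abstractmethod
    def get_forecast(self): ...

class FinanceService(ABC):
    @abstractmethod
    def report_sale(self, sale): ...

    @abstractmethod
    def make_forecast(self): ...

class LogisticsService(ABC):
    @abstractmethod
    def ship_product(self, order): ...

    @abstractmethod
    def get_delivery_status(self, order_id): ...
```

Nothing here says HTTP, SOAP or batch; the abstract classes are the shared architectural vocabulary, and each capability gets bound to a technology later.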

Now we get into the conceptual design stage and we want to start talking through how these various services work and we use an "RPC language" to start working out how things happen.

RPC into Push
When we call "Make Forecast" on the Finance Service it needs to ask the Sales Service for its Forecast and therefore does a "Get Forecast" call on the Sales Service. We need the Forecast to be updated daily.

Now when we start working through this at a systems level we see that the mainframe solution of the Finance team is really old and creaky but it handles batch processing really well. Therefore given our requirement for a daily forecast what we do is take a nightly batch out of the CRM solution and Push it into the Mainframe. Conceptually we are still doing exactly what the RPC language says in that the data that the mainframe is processing has been obtained from the Sales area, but instead of making an RPC call to get that information we have decided in implementation to do it via Batch, FTP and ETL.

RPC into Events
The next piece that is looked at is the sales-to-invoice process. Here the challenge is that historically there has been a real delay in getting invoices out to customers and it needs to be tightened up much more. Previously a batch was sent at the end of each day to the logistics and finance departments and they ran their own processing. This led to problems with customers being invoiced for products that weren't shipped and a 48 hour delay in getting invoices out.

The solution is to run an event based system where Sales sends out an event on a new Sale, which is received by both Finance and the Logistics department. The Logistics department then ships the product (Ship Product), after which it sends a "Product Shipped" event, which results in the Finance department sending the invoice.

So while we have the conceptual view in RPC speak we have an implementation that is in Event Speak.
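A minimal sketch of that mapping, assuming a toy in-memory event bus: the conceptual "Report Sale" and "Ship Product" calls become "NewSale" and "ProductShipped" events, and Finance invoices only once shipment is confirmed. Event names and payloads are hypothetical:

```python
# Toy in-memory event bus: the conceptual "Report Sale" / "Ship Product"
# calls become "NewSale" and "ProductShipped" events.
class EventBus:
    def __init__(self):
        self.subscribers = {}

    def subscribe(self, event, handler):
        self.subscribers.setdefault(event, []).append(handler)

    def publish(self, event, payload):
        for handler in self.subscribers.get(event, []):
            handler(payload)

bus = EventBus()
invoices_sent = []

def logistics_on_sale(sale):
    # Ship the product, then announce that it has shipped.
    bus.publish("ProductShipped", {"order": sale["order"]})

def finance_on_shipment(shipment):
    # Invoice only once the product has actually shipped.
    invoices_sent.append(shipment["order"])

bus.subscribe("NewSale", logistics_on_sale)
bus.subscribe("ProductShipped", finance_on_shipment)
```

Publishing a single "NewSale" event now triggers shipment and then invoicing, which is exactly the ordering the old end-of-day batch could not guarantee.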

RPC into REST
The final piece is buying the products and getting the delivery status against an order. The decision was made to do this via REST on a shiny new website. Products are resources (of course); you add them to a shopping basket (by POSTing the URI of the product into the basket), the basket then gets paid for and becomes an Order. The Order has a URI and you simply GET it to see the status.

So conceptually it's RPC, but we've implemented it using REST.
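That flow could be sketched with an in-memory stand-in for the resource model: product URIs are POSTed into a basket, paying turns the basket into an Order resource, and the order's status is a GET away. All the URIs here are hypothetical:

```python
# Resource-centric sketch: POST product URIs into a basket, pay, then GET
# the resulting Order resource's status from its own URI.
class Shop:
    def __init__(self):
        self.resources = {"/products/1": {"name": "widget"}}
        self.baskets = {}
        self.next_order = 1

    def post_to_basket(self, basket_uri, product_uri):
        self.baskets.setdefault(basket_uri, []).append(product_uri)

    def pay(self, basket_uri):
        """Paying turns the basket into an Order resource with its own URI."""
        order_uri = "/orders/%d" % self.next_order
        self.next_order += 1
        self.resources[order_uri] = {
            "items": self.baskets.pop(basket_uri, []),
            "status": "PAID",
        }
        return order_uri

    def get(self, uri):
        return self.resources.get(uri)
```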

Conceptual v Delivery

The point here is that we can extend this approach of thinking about things in RPC terms through an architecture and people can talk to each other in this RPC language without having to worry about the specific implementation approach. By thinking of simply "Services" and "Capabilities" and mentally placing them as "Remote" calls from one service to another we can construct a consistent architectural model.

Once we've agreed on this model, that this is what we want to deliver, we are then able to design the services using the most appropriate technology approach. I'd contend that there really aren't any other conceptual models that work consistently. A Process Model assumes steps, a Data Model assumes some sort of entity relationship, a REST model assumes it's all resources and an Event model assumes it's all events. Translating between these different conceptual models is much harder than jumping from a conceptual RPC model that just assumes Services and Capabilities, with the Services "containing" the capabilities.

So the basic point is that architecture, and particularly business architecture, should always be RPC in flavour. It's conceptually easier to understand and it's the easiest method to transcribe into different implementation approaches.



Thursday, September 24, 2009

REST Blueprints and Reference Architectures

Okay, so the REST-* stuff appears to have rapidly descended into pointless diatribe, which is a shame. One of the questions is what it should be instead (starting with REST-TX and REST-Messaging wasn't a great idea), and around a few internal and external discussions it's come down to a few points:
  1. What is "best practice"
  2. What is the standard way to document the interactions available & required
  3. How do we add new MIME types
Quite a few of the technical basics have been done, but before we start worrying about a "standard" way of defining reliability in a REST world (yes, GET is idempotent.... what about POST?) we should at least agree on what good looks like.

Back in the day Miko created the "SOA Blueprint" work around Web Services, an attempt to define a concrete definition of "good", unfortunately it died in OASIS (mainly due to lack of vendor engagement) but I think the principles would be well applied here.

The other piece that is good (IMO) is the SOA Reference Model, Roy Fielding's paper pretty much defines that reference model but what it doesn't have is a reference architecture. Saying "The internet is the reference architecture" doesn't really help much as that is like saying that a mountain is a reference architecture for a pyramid.

Now one of the elements here is that there appear to be some parts of the REST community who feel that enterprise computing must all "jump" to REST and the internet, or that it is in fact irrelevant to REST. This isn't very constructive, as the vast majority of people employed in IT are employed in just those areas. B2B and M2M communications with a decent dose of integration are the standard problems for most people, not how to integrate with Yahoo & Amazon or build an internet-facing website.

For the enterprise we have to sacrifice a few cows that hopefully aren't sacred, but which I've heard bandied around:
  1. You can't just say "its discoverable" - if I'm relying on you to ship $500m of products for me then I don't want you messing around with the interface without telling me
  2. You can't just say "late validation" - I don't want you making a "best guess" at what I meant and me doing the same, I want to know that you are shipping the right thing to the right place
  3. You can't just say "its in the documentation" - I need something to test that you are keeping to your side of the bargain, I don't want just English words telling me stuff, I want formal definitions... contracts.
  4. You can't just say "look at this URI" - we are embarking on a 5 month project to build something new, you haven't done your stuff yet, you don't have a URI yet and I need to Mock up your side and you need to Mock mine while we develop towards the release date. Iterative is good but we still need to have a formal clue as to what we are doing
  5. You can't say 'that isn't REST' if you don't have something objective to measure it against
So what I'd suggest is that rather than having the REST-* piece looking at the technical standards we should really be focusing on the basics mentioned above. We should use Roy's paper as the Reference Model from which an enterprise Reference Architecture can be created and agree on a standard documentation approach for the technical part of that reference architecture.

In other words
  1. REST Reference Model - Roy's paper - Done
  2. REST Reference Architecture - TBD and not HTTP centric
  3. REST Blueprints - Building the RA into a concrete example with agreed documentation approaches (including project specific MIME types)
Right, now burn me at the stake as a heretic.


Wednesday, September 16, 2009

REST-* can you please grow up

Well, didn't Mark Little just throw in a grenade today around REST-* by daring to suggest that maybe, just maybe, there needs to be a bit more clarity on how to use REST effectively.

As he said, "The REST-* effort might end up documenting what already exists", which indicates that part of the challenge is that lots of people don't really know what REST is, and certainly struggle as they look to build higher-class systems and interoperate between organisations.

Part of this is of course about up-front contracts and the early v late validation questions. But part of it also appears to be pure snobbery and a desire to retain arcane knowledge which goes back to that "Art v Engineering" debate.

A few choice quotes from twitter

"Dear REST-*. Get a fucking clue. Atom and AtomPub already do messaging. No new specification needed, that's just bullshit busy work." - Jim Webber


"REST might lack clear guidelines, but something called REST-* with a bunch of vendors is hardly going to help!"
- Jim again

"and if they think REST lacks guidelines for messaging/security/reliability/etc.., they're not looking hard enough" - Mark Baker

Now part of Mark Little's point appears to be that we need more clarity around what good should be in the REST world, and this needs to be easier to access than it currently is. I've seen some things described as REST that were truly horrific, and I've seen other bits in REST that were a superb use of that approach. The problem that all of them had was in learning about Atom and AtomPub, how to use them, how to use MIME types and, of course, the balance between up-front contracts and late evaluation.

Would it really be such a bad thing to have an effort that got people together and had them agree on the best practices and then have the vendors support developers in delivering against that practice?

The answer of course is only yes if you want to remain "1337" with your arcane skills where you can abuse people for their lack of knowledge of AtomPub and decry their use of POST where quite clearly a DELETE should have been used.

If REST really is some sword of Damocles that can cut through the integration challenges of enterprises and the world, then what is the problem with documenting how it does this and having a few standards that make it much clearer how people should be developing? Most importantly, so people (SAP and Oracle for instance) can create REST interfaces in a standardised way that can be simply consumed by other vendors' solutions. It can decide whether WADL is required and whether Atom and AtomPub really cover all of the enterprise scenarios, or at least all of the ones that count (i.e. let's not have a REST-TX to match the abomination of WS-TX).

This shouldn't be an effort like WS-*; its first stage should be to do what Mark Little suggested and just document what is already there in a consistent and agreed manner, which vendors, developers and enterprises can accept as the starting point, with that starting point clearly documented under some form of "standards" process.

Would that be a bad thing?

Update: Just found out that one of the two things that they want to do is REST-TX... it's like two blind men fighting.


Thursday, August 20, 2009

What conference calls tell us about REST

I've just got off a conference call; the topic isn't important. What is important is that at the end of the call lots of other people started joining. Why was this? Well, they were joining the next call that the meeting organiser had.

This got me thinking about REST and resource identifiers, and why, if you are doing REST, it's really important to understand what the right resource is. With conference calls there are basically two choices:
  1. Have a unique conference number per person; this person can then just hand it out to people and they can dial in at any time for a meeting
  2. Have a unique conference number per meeting, so when you want a meeting you have to arrange it and get a unique ID
Now the first one basically means that you always have a meeting ID, but it of course has a major problem: your meetings can all begin melding into one.

The second is what you should be doing, as it means that the meeting is the resource and the participants join the meeting. Someone can still be the chair if required, but it's the meeting that is the discrete entity.
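A quick sketch of that second option: the meeting, not the person, is the resource, so every meeting gets its own identifier and participants join it. The ID scheme here is made up:

```python
# Meetings as the resource: each meeting gets a unique ID; participants
# (including the chair) join the meeting, not a person's permanent number.
import itertools

class ConferenceSystem:
    def __init__(self):
        self._ids = itertools.count(1)
        self.meetings = {}

    def create_meeting(self, chair):
        meeting_id = "MTG-%d" % next(self._ids)
        self.meetings[meeting_id] = {"chair": chair, "participants": []}
        return meeting_id

    def join(self, meeting_id, person):
        self.meetings[meeting_id]["participants"].append(person)
```

Two back-to-back meetings with the same chair now have distinct identifiers, so nobody wanders into the wrong call.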

The point here is that when doing REST you need to think about the implications of your resource hierarchy selections and not tie them to the first thing that you think makes sense.


Thursday, January 15, 2009

REST is a crap name in a web world

Okay, I was looking around for some REST stuff today, specifically around performance tuning for the Mandelbrot stuff. So I thought I'd search for "REST performance tuning", and it's no better on Yahoo. It's like when Microsoft talked about having a language called "COOL", and it's bad enough that they ended up with .NET (imagine searching for "COOL .NET application tuning").

For something that was designed for the Web, and indeed helped shape the protocol behind the Web, there really wasn't much thought put into naming it so that it works well on the Web.

Technorati Tags: ,

Thursday, January 08, 2009

REST is dead long live the Web

REST met its demise on January 1, 2009, when it was wiped out by the catastrophic impact of the economic recession. REST is survived by its offspring: mashups, SaaS, Cloud Computing, and all other architectural approaches that depend on the web.

REST had begun to gain some traction in 2008 as the "next big thing" in technology, promoted by vendors, analysts and champions as the only way forward, often by the same people who had promoted both EAI and Web Services as the only way forward. The economic downturn, however, has led people to see REST as nothing more than a new technology-driven fad, disconnected from the daily problem of running a profitable business. Proponents would laud Google, Amazon and a small number of new startup companies as the example that all the old crusty companies should follow.

These old crusty companies, however, have heard it all before, both from the .com boomers who were meant to replace them and from the technology vendors who have shipped them varying degrees of snake-oil over the years. Fortunately all is not doom and gloom for REST, as these old crusty companies are doing exactly what they did with .com: looking at what they can do to drive down costs and increase profitability by using the web. While REST proponents shout about PUT/DELETE/POST and GET, and whether anything from a browser can truly be "RESTful" because it doesn't have DELETE, the business users are looking at the Web, and more especially the services delivered via the Web, as an excellent way of managing their IT costs.

Integrating these new Web-delivered services into their enterprise often means using exactly the approach that REST proponents advise, but it is not REST that is important; it is the Service. REST vainly tried to make itself the thing that people should care about, but the sad reality was that its role was simply in helping people connect to the services that they use.

Already REST advocates are leaving the funeral and promoting the "web-centric view" as the only way of the future, but the crusty old companies continue to operate successfully, often using systems that predate the web, and chuckle at the cute naivety of these technology prophets.

Surviving REST are a series of technologies that at their core are about using the principles of REST, hidden away in their dark hearts like a secret that must not be told. Mashups and SaaS often rely on REST but proclaim instead the business benefits, the productivity gains or the business service that they deliver. The biggest child of REST is the Web; it stands as a colossus across the globe, a shiny beacon of light proclaiming the success of its heritage, but no-one knows or cares about its parentage, only about its usefulness.

So RIP REST the business never really knew you at all.

With deference to Anne

Technorati Tags: ,

Friday, November 28, 2008

REST of SOA the questions

After the REST of SOA a couple of people came up and asked me questions that could basically be summarised as follows:
We've got a server development team that loves REST, but as a Flash/Web/Ajax developer it's really hard for me to work with
The two main reasons cited were
  1. The lack of PUT/DELETE from a browser, but server teams who still wanted to use it
  2. The size limit on GET
Now on the GET limit and the PUT/DELETE issue, I suggested that the dev team should think about using a proxy and having it map the URIs from those that work on the web to those that they want to use internally. But what it really came down to was an amazing disconnect between the people doing the server-side stuff and the poor buggers who had to consume it.

Technological fanaticism doesn't work, no matter what the technology.

Technorati Tags: ,

Tuesday, November 25, 2008

REST of SOA

So last week at AdobeMAX I did my first public presentation on doing REST and SOA together. Thanks to Duane for that and to the person who dropped out leaving me with the baby :)

Now I know they recorded the audio and video, so when I find it I'll link to it.

What I said was that the REST model works in the interactional space of applications, especially in those which are focused around data navigation. I admitted that I found it a bit fan-boyish when it first came out but that there are areas where it does deliver value.

Thanks to Ben Scowen I had a whole set of detail around REST as he has done a massive REST Web programme, so kudos to Ben on that. I also wanted to make sure that people who attended would have some real detail around REST rather than just the picture presentations I normally use.

My major point in the presentation was the concept of state management in REST and the fact that (for me) this is the bit that really differentiates REST, and the bit that is hardest to get your brain around.
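
One way to make that state-management point concrete is a hypermedia-style response, where the server embeds the transitions that are valid from the resource's current state, so the client discovers what it can do next instead of hard-coding the workflow. This is a hedged sketch, not anything from the presentation; the order states and link relations are invented for illustration:

```python
# Which transitions are legal from each state of a hypothetical order resource.
TRANSITIONS = {
    "draft":     ["submit", "cancel"],
    "submitted": ["approve", "reject"],
    "approved":  [],
}


def render_order(order_id, state):
    """Render an order with links for only the currently valid transitions."""
    return {
        "id": order_id,
        "state": state,
        "links": [
            {"rel": rel, "href": f"/orders/{order_id}/{rel}"}
            for rel in TRANSITIONS[state]
        ],
    }


doc = render_order(42, "draft")
# From 'draft' the client may submit or cancel, and nothing else:
assert [link["rel"] for link in doc["links"]] == ["submit", "cancel"]
```

The application state lives in the exchanged representations: change the order's state on the server and the next response advertises a different set of links, with no client-side workflow logic to update.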

The other major bit was the concept of thinking about the services and then using the URIs and methods as the way to separate the implementation of the services, I used an internal example as a way to do that.

So until Adobe release the audio et al, here is just the PowerPoint.


What I said throughout was that it was about picking the right tool for the job and understanding what works right in your environment. Some people followed up with questions afterwards that indicated that REST isn't quite the happy place for everyone.

Technorati Tags: ,