Now I was sitting at a table at the time and we all pretty much agreed the statement was bollocks. So I thought I'd do a quick guide to creating performant Web Services and XML applications.
- Use XML at the boundaries. If you are writing Java/C/Ada/Python/Ruby etc. then once you've made the hit to translate into your native language... keep it there. When you cross another service boundary then that probably needs XML again, but don't keep marshalling to/from XML every time you move around inside a service (see the sketch after this list).
- KISS. If you have 25 XSLTs and indirections in your ESB then it will run like a dog.
- Don't have a central bus that everything goes through; think federation.
- BUY HARDWARE
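To illustrate the first point, here's a rough sketch of what "XML at the boundaries" can look like in Java: parse the inbound message once, map it onto a plain object, and pass that object around internally. The Order class, the element names and the needsApproval rule are all invented for the example.

```java
// A minimal sketch of "XML at the boundaries": marshal from XML exactly once,
// at the service boundary, then work with a plain Java object inside the service.
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import java.io.ByteArrayInputStream;

public class BoundaryExample {

    // Plain native object used inside the service - no XML beyond this point.
    static class Order {
        final String id;
        final int quantity;
        Order(String id, int quantity) { this.id = id; this.quantity = quantity; }
    }

    // The one hit to translate into the native language.
    static Order fromXml(String xml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
        String id = doc.getElementsByTagName("id").item(0).getTextContent();
        int qty = Integer.parseInt(
                doc.getElementsByTagName("quantity").item(0).getTextContent());
        return new Order(id, qty);
    }

    // Internal logic works on the native object, never on the XML.
    static boolean needsApproval(Order order) {
        return order.quantity > 100;
    }

    public static void main(String[] args) throws Exception {
        String inbound = "<order><id>A-42</id><quantity>250</quantity></order>";
        Order order = fromXml(inbound);           // translate once at the boundary
        System.out.println(needsApproval(order)); // everything else stays native
    }
}
```

The same idea applies whatever the binding technology is (JAXB, hand-rolled DOM, whatever): marshal once on the way in, once on the way out, and stay native in between.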
If your performance is out by a low multiple (under 3x) or by a few percentage points, then scale your XML processing like a Web site (lots of thin servers at the front). If you are out by high multiples then you've screwed up somewhere.
Now the latter case can be the vendor screwing up, so make them fix it. But in the last 7 years of doing XML and Web Services, in environments running lots and lots of transactions (thousands a second, even), I've yet to find that XML and Web Services didn't scale.
The key, if you have a performance problem, is to really look at the pipeline and see where the time is actually being spent. Several times I've had people claim "Web Service performance issues" and then found that 1ms is being spent in the WS bit and 5 seconds in the application, or people blaming a specific element (XSLT/XML parser/etc.) and then finding out that it is a bug in an implementation (one in a certain vendor's stack looked almost like a wait-loop bug). These aren't performance issues with Web Services or XML; they are bugs and issues with the applications and implementations.
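A rough sketch of what that pipeline check can look like: time each stage separately before blaming "the Web Service bit". The stage methods below are placeholders, not anyone's real stack, and the 5-second sleep just stands in for a slow application.

```java
// Time each stage of the request pipeline separately so the numbers, not
// the folklore, tell you where the time goes.
public class PipelineTiming {
    static void unmarshalRequest() { /* XML parsing would happen here */ }
    static void applicationLogic() { sleep(5000); } // pretend the app is the slow bit
    static void marshalResponse()  { /* XML serialisation would happen here */ }

    static void sleep(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }

    static long timeMs(Runnable stage) {
        long start = System.nanoTime();
        stage.run();
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) {
        System.out.println("unmarshal: " + timeMs(PipelineTiming::unmarshalRequest) + " ms");
        System.out.println("app logic: " + timeMs(PipelineTiming::applicationLogic) + " ms");
        System.out.println("marshal:   " + timeMs(PipelineTiming::marshalResponse) + " ms");
    }
}
```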
XML and Web Services are not the most efficient things in the world. But then I'm currently working on a computer where this browser (inefficient editor) is running inside a VM (inefficient) on a box with 3 other VMs running on it (very inefficient)... and it is still working fine. A home computer now comes for a reasonable price with FOUR CPUs inside it (remember when a front end started at 4 x 1 CPU systems?) and Moore's Law continues to run away. The issue therefore isn't whether XML/WS (or even XML/REST) is the most efficient way of running something but whether it is good enough to be used. New approaches such as streaming XML push the performance piece on still further and make it even less of an issue than it was before. This is about changing the libraries, however, not the principles. XML/Web Services work from a server perspective.
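For anyone who hasn't seen it, here's a minimal sketch of streaming XML parsing with StAX (javax.xml.stream), which reads events as they arrive rather than building a whole document tree; the document and element names are made up for the example.

```java
// Streaming XML with StAX: pull events off the reader and keep only what
// you need, rather than materialising the full document in memory.
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;
import java.io.StringReader;

public class StreamingExample {
    public static void main(String[] args) throws Exception {
        String xml = "<orders><quantity>250</quantity><quantity>30</quantity></orders>";
        XMLStreamReader reader = XMLInputFactory.newInstance()
                .createXMLStreamReader(new StringReader(xml));
        long total = 0;
        while (reader.hasNext()) {
            if (reader.next() == XMLStreamConstants.START_ELEMENT
                    && "quantity".equals(reader.getLocalName())) {
                total += Long.parseLong(reader.getElementText());
            }
        }
        System.out.println(total); // 280
    }
}
```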
So server processing isn't the issue. So maybe it's network bandwidth... again I'd say no. Lob on gzip and you will burn up some more CPU, but again that is cheap in comparison with spending a bunch of time writing, testing and debugging software. This creates a much lighter over-the-wire message and again just seems to be fine in implementations where the wire size is important.
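As a quick sketch of how cheap this is to do in Java, GZIPOutputStream from java.util.zip does the job; the payload below is an invented, deliberately repetitive one, which is exactly the kind of thing XML markup tends to be.

```java
// Gzip an XML payload before it goes over the wire: some CPU spent,
// a lot of bytes saved on repetitive markup.
import java.io.ByteArrayOutputStream;
import java.util.zip.GZIPOutputStream;

public class GzipExample {
    public static void main(String[] args) throws Exception {
        StringBuilder xml = new StringBuilder("<orders>");
        for (int i = 0; i < 1000; i++) {
            xml.append("<order><id>").append(i).append("</id><status>OPEN</status></order>");
        }
        xml.append("</orders>");
        byte[] raw = xml.toString().getBytes("UTF-8");

        ByteArrayOutputStream compressed = new ByteArrayOutputStream();
        GZIPOutputStream gzip = new GZIPOutputStream(compressed);
        gzip.write(raw);
        gzip.close(); // flush and write the gzip trailer

        System.out.println("raw bytes:        " + raw.length);
        System.out.println("compressed bytes: " + compressed.size());
    }
}
```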
The only piece I've found that continues to be an issue is something that is nothing to do with Web Services or XML, but which is more of an issue in REST, and that is chatty behaviour over the network. Being chatty over a network costs you latency: no matter what your bandwidth is, there will still be a degree of latency per round trip, and you can't get away from that.
So basically the solution is to communicate effectively and accurately, and to do so in reasonably coarse-grained ways. This is hardly news at all, as it has applied to all communication systems in IT. As networks speed up and latency comes down then maybe even this will become less of a problem, but for today it's certainly a place to avoid excess. Google, for instance, responds to a second (cached) search for "Steve Jones SOA blog" in 80ms. A coarse-grained approach that has 1 interaction per request and 10 requests will have a network-induced lag of at least 800ms; a chatty approach that has 5 interactions per request will have 4000ms, or 4 seconds. Chatty = non-performant.
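The arithmetic is back-of-envelope stuff, but here it is spelled out, using that 80ms round trip as the assumed figure.

```java
// Back-of-envelope latency arithmetic: same 10 requests, different numbers
// of network round trips per request.
public class ChattyLatency {
    public static void main(String[] args) {
        int roundTripMs = 80;  // assumed network round-trip time
        int requests = 10;

        int coarseGrained = requests * 1 * roundTripMs; // 1 interaction per request
        int chatty        = requests * 5 * roundTripMs; // 5 interactions per request

        System.out.println("coarse grained: " + coarseGrained + " ms"); // 800 ms
        System.out.println("chatty:         " + chatty + " ms");        // 4000 ms
    }
}
```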
So basically I've found that Web Services and XML do scale, that hardware is cheap, that stupidity doesn't scale, and that networks still introduce latency.
It's not rocket science, but some people seem to think that it's still 1999.
Technorati Tags: SOA, Service Architecture
2 comments:
Steve, do you have any experience with hardware XML accelerators (some described here)?
What's your opinion about "Infrastructural SOA" products like SOA Software's?
Personally I think there are many orthogonal aspects which can be solved easily by infrastructural products, like hardware appliances one can plug into the system architecture.
Regards,
Maurizio
Maurizio,
Hardware appliances are also good choices if you want to get a bit more power around XML processing and transformation; there really isn't any reason why you couldn't use these things, as they are just more dedicated versions of the software solutions.
The point is, as you say, that these hardware appliances can be added in to solve any perceived lack of performance (the same goes for compression using gzip) before you start moving towards a more proprietary solution.
Steve