This one is about how important it is to have perfect accuracy. For some things it's a really bad idea not to have perfect accuracy (banks get a bit upset, for instance), but in other cases it's much better to act with at least a degree of confidence when you can't get perfect information, rather than waiting for the perfect information to arrive. Put it this way: if two ships are about to collide, there is a reasonable expectation that both will veer to starboard, but there is a chance that one of them won't know it should do this and that going to starboard will in fact result in a crash. The perfect information is only available after the crash, so it's better to make a smart decision based on what you have available.
In some ways this is related to the time criticality of information, for instance "how accurate does the stock count have to be?". Again, making a system more reliable means understanding the tolerances of the information. So sure, you can go and get a super accurate forecast from the great big service running on the massive servers, but if they are down it might be okay to just use a lightweight algorithm to do a 95% accurate calculation. It might be best to go off and get the timestamp from one of the atomic clocks, but if you can't get it then it's probably okay to use your local machine clock.
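As a minimal sketch of that clock fallback (the AccurateTimeService interface and its getTimestamp() method are illustrative names, not a real API): prefer the accurate remote source, but degrade to the local machine clock rather than fail.

```java
import java.time.Instant;

// Hypothetical interface standing in for a remote, highly accurate time source.
interface AccurateTimeService {
    Instant getTimestamp() throws Exception;
}

// Prefer the accurate source, but degrade to the local clock rather than fail.
class TimestampProvider {

    private final AccurateTimeService accurate;

    TimestampProvider(AccurateTimeService accurate) {
        this.accurate = accurate;
    }

    Instant currentTime() {
        try {
            return accurate.getTimestamp();  // super accurate, but may be unavailable
        } catch (Exception serviceDown) {
            return Instant.now();            // local clock: less accurate, always there
        }
    }
}
```

The point is that the fallback is a deliberate design decision, made because a slightly less accurate timestamp is within tolerance for this class of decision.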
This is similar to the timeliness of information but different, because it isn't about how time impacts information; it is purely about information accuracy. What this means is understanding what can be done when you don't have the time or opportunity to get the perfect information. So this is where you can't run a Monte Carlo simulation because the connection is down. It's where it's okay to say the population of France is "around 60 million" when looking at some high level marketing. It is also about understanding that it's okay to say "around 2 quid" when you are buying one element, but when you are buying a million of them you need to know the price exactly.
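To put that last point another way (a trivial sketch; the names and the tolerance figures are made-up assumptions): whether an approximation is acceptable depends on what the error gets multiplied by.

```java
// Trivial sketch: a price known to within a few pence is fine for a single
// purchase, but the same uncertainty multiplied by a million units may not be.
public class ApproximationCheck {

    /** Is the worst-case total error still within what we can tolerate? */
    static boolean approximationAcceptable(double unitPriceUncertainty,
                                           long quantity,
                                           double tolerableTotalError) {
        return unitPriceUncertainty * quantity <= tolerableTotalError;
    }

    public static void main(String[] args) {
        // "Around 2 quid" (say +/- 10p) is fine for one item if we can tolerate being 50p out...
        System.out.println(approximationAcceptable(0.10, 1, 0.50));         // true
        // ...but the same vagueness over a million items is a 100,000 pound question.
        System.out.println(approximationAcceptable(0.10, 1_000_000, 0.50)); // false
    }
}
```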
In one sense this is about knowing when it's okay to reference Wikipedia for geographical information and when it's better to talk to the Ordnance Survey. It's about understanding when you can replace a product and the customer won't care, and when doing so will result in a legal case.
Information doesn't always have to be 100% accurate, but coping with variable information accuracy is one of the most challenging problems of reliability. This isn't about a simple proxy or a configuration change; this is about different types of operation given the currently available information quality.
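As an illustration of what "different types of operation" can mean in practice (a minimal sketch; ForecastResult, Accuracy and the moving-average fallback are all assumptions for the example, not a prescribed design): the service labels each answer with the quality it was actually able to achieve, so consumers can decide whether a degraded answer is fit for their purpose.

```java
// Sketch of an operation that degrades explicitly: the result says how accurate it is.
enum Accuracy { EXACT, APPROXIMATE }

record ForecastResult(double value, Accuracy accuracy) {}

class ForecastService {

    ForecastResult forecast(double[] history) {
        try {
            return new ForecastResult(remoteSimulation(history), Accuracy.EXACT);
        } catch (Exception serviceDown) {
            // The big simulation service is unavailable: fall back to a cheap
            // moving average and label the answer as approximate.
            double sum = 0;
            for (double v : history) sum += v;
            return new ForecastResult(sum / history.length, Accuracy.APPROXIMATE);
        }
    }

    // Placeholder for the call to the heavyweight remote service (hypothetical).
    private double remoteSimulation(double[] history) throws Exception {
        throw new Exception("remote Monte Carlo service unreachable");
    }
}
```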
Let's put it this way... don't start with variable information accuracy to make your SOA environment more reliable; this is for those with budget and real determination.
Technorati Tags: SOA, Service Architecture
1 comment:
Steve,
Thank you for a very interesting post on information accuracy in SOA.
I think it is a very relevant and timely subject.
By the way, what would be a good place for “information” in our SOA space?
It’s very common to see enterprise chaos of disparate documents describing multiple (often competing) applications on the level of MSWord-based requirements or very technical implementation artifacts.
The gap between these two representations is wide open and it’s not easy to figure out the level of accuracy.
Bridging this gap can help us improve accuracy of information with direct mapping of semantic expressions by subject matter experts (SME) to technology artifacts, down to service APIs.
I think that the service registry is a great place to capture integrated views on a service from business and technology perspectives. I'd love to see standards-based tools there for direct mapping between semantic expressions by subject matter experts (SMEs) and technology artifacts. These tools and standards would allow SMEs early entry into the SOA space for collaborative work with architects and other technologists.
The semantic approach is a quickly growing area. We can see offerings from IBM, WebMethods, and BEA on metadata repositories and semantic tools. I would not say that we can bridge today from the natural language of business requirements by SMEs to service APIs, but we have come awfully close in our attempts to integrate best practices in software and knowledge engineering.
Service Registry and Repository might be an ideal meeting place for business and technology, especially if supported by semantic tools for direct mapping and conversational facilities that could prevent “lost in translation” cases.
More on semantic approach...
Does it sound like a pipe dream?
What do you think?
Jeff Zhuk