Thursday, December 22, 2011

REST's marketing problem and how Facebook solved it

Earlier in the year I commented on REST being stillborn in the enterprise, and now Facebook have deprecated the REST API in favour of a Graph API. I could choose to say this is 'proof' that REST doesn't work for the Web either, but that would be silly for a couple of reasons:

  1. The new API appears to be RESTful anyway
  2. REST clearly can work on the web
No, what this really shows is an issue with naming conventions. The folks at Facebook called the first API the 'REST API', which meant that when they felt there were problems with it they had two options:
  1. Have a new API called REST API 2.0
  2. Create a new name
Now the use of the term 'Graph' is, I think, actually a good move, and one that is much more effective than the term 'REST' in describing what REST is actually good at: the traversal of complex, inter-related networks of information. This is a concept that resonates, and it comes with much less of the religious fundamentalism that often accompanies REST.
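To make that concrete, here's a minimal sketch of the traversal style the Graph API encourages: fetch a node, then follow one of its connections. The endpoint shapes follow Facebook's documented graph.facebook.com/{id}/{connection} pattern, and the access token is a placeholder:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.URL;

    public class GraphTraversal {
        // Fetch a URL and return the raw JSON response
        static String get(String url) throws Exception {
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(new URL(url).openStream(), "UTF-8"));
            StringBuilder json = new StringBuilder();
            for (String line; (line = in.readLine()) != null; ) json.append(line);
            in.close();
            return json.toString();
        }

        public static void main(String[] args) throws Exception {
            String token = "ACCESS_TOKEN"; // placeholder
            // A node in the graph...
            System.out.println(get("https://graph.facebook.com/me?access_token=" + token));
            // ...and a hop along one of its connections
            System.out.println(get("https://graph.facebook.com/me/friends?access_token=" + token));
        }
    }

Every call is just 'node' or 'node/connection', which is why the name describes the traversal far better than 'REST' ever did.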

Pulling this into the Business Information space of enterprises could be an interesting way of starting to shift reporting and information management solutions away from structured SQL-type approaches towards more ad-hoc and user-centric approaches. 'Graph-based reporting' is something I could see catching on much better than 'REST'. So have Facebook actually hit on a term that will help drive REST's adoption? Probably not in the system-to-system integration space, but possibly in the end-user information aggregation/reporting space.

Time will of course tell, but dropping the term 'REST' from the name is a good start.



Saturday, December 17, 2011

Wasting the time of a PPI scammer

There are a number of scams going around today which really demonstrate how mainstream outsourcing has become. There is the current one around the 'unique' number that 'proves' the scammer is from Microsoft, and today I got a new one. This time it was someone claiming to be from 'iClaims' (another Steve Jobs legacy; 'i' is the new 'e') telling me that I was entitled to a PPI or bank charges refund. They had my address and phone number but nothing else, so after saying that Lloyds TSB was my bank (it isn't) I flicked on the recording on my iPhone and away we went. The game here is to waste as much of their time as possible while giving them as much incorrect information as possible. The total time wasted was about 20 minutes, but I only managed to record 18 minutes....



What surprised me about this scam was that they get all your Visa/Mastercard details but then want to set up a second call at which they will do the advance-fee fraud part of the scam. Almost 'honest' of them to do it on a second call rather than just ripping off your bank account then and there (although as it was all fake numbers I have no evidence that they wouldn't do that as well). What I find most depressing about this however is that, as with the Microsoft virus scams, the drones in the call centre are just like drones in a call centre providing service support: they genuinely think they are doing a good and valuable job, and they are just keeping to the script they've been trained to follow. They don't realise that they are actually part of a criminal act. I've even had one guy beg me not to report him to his supervisor, as he'd get fired if they realised I'd played him.

Behind these folks lies true scum, total and utter scum: people who appear to have access to credit card validation software and have a list of 'valid' numbers for UK accounts. It's a depressing evolution of the outsourcing model that these days people are outsourcing and industrialising crime via India, and it's not as if India doesn't have enough corruption of its own.

Now clearly people reading this blog are smart folks and wouldn't fall for this, and I dare say, like me, also entertain themselves in moments of boredom by playing along with these scams, but it's worth mentioning (probably again) to relatives that this stuff is bollocks and they should hang up.

Tuesday, December 13, 2011

Cloud in a box: Life on Mars in Hardware or an empty glass of water?

There are some phrases that are just plain funny, and for me 'Cloud in a Box', which is available from multiple vendors, is probably just about the best. The idea here is that you can buy a box - a box that looks and acts like a 1970s mainframe: virtualisation, big power consumption, vendor lock-in - and joy of joys you've now got a 'cloud'.
So:

  • Do you pay for this cloud on demand?
    • Nope
  • Do you pay for this cloud based on usage?
    • Nope
  • Are you able to just turn this cloud off, then turn it on later and pay nothing for while it's off?
    • Nope, you still need to pay maintenance
  • Can you license software for it based on usage?
    • Errr, probably not, you'll have to negotiate that
  • Is this cloud multi-tenant?
    • Errr, it can be... if you buy another cloud in a box
  • Is this cloud actually pretty much a mainframe virtualisation offer from 1980?
    • Err, yes
At first I was thinking that this was in fact the sort of thing created by folks who watch Life on Mars and want to see their data centre populated with flashing lights. But then I realised there is a better reason why you don't get a cloud in a box.

Clouds are vapour, they float, they dynamically resize... if you put a cloud in a box then the vapour will stick to the sides and turn into water... taking up about 1% of the volume of the cloud. For me this sums up the reason why it doesn't come in a box. Clouds need to have capacity well beyond your own normal needs, so that if you 'spike' you can spike in that cloud without needing the capacity for the rest of the year. So a 1% ratio is probably the minimum you should be looking at in terms of what your cloud provider has against what your normal capacity is. This is the reason that provider clouds like Amazon's, or those from other large-scale data centre providers, aren't 'in a box' but instead are mutualised capacity environments. Even if one of these providers gives you a 'private' area of VLANs and tin, they've still got the physical capacity to extend it without much of a problem. That is what a cloud is: dynamic capacity paid for when you use it.
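To put some illustrative numbers on that - every figure below is an assumption for the sake of the argument, not a vendor quote - compare a box you pay for all year against capacity you only pay for while the spike lasts:

    // Illustrative comparison of 'cloud in a box' versus pay-per-use capacity.
    public class SpikeCost {
        public static void main(String[] args) {
            double boxPerYear = 500000;           // assumed cost of the box plus maintenance
            double onDemandPerServerHour = 0.50;  // assumed on-demand rate
            int spikeServers = 100;               // capacity needed during the spike
            int spikeHours = 2 * 7 * 24;          // a two-week spike

            double onDemand = spikeServers * spikeHours * onDemandPerServerHour;
            System.out.printf("Box: %,.0f per year; on-demand spike: %,.0f%n",
                    boxPerYear, onDemand);
            // 500,000 versus 16,800: the box charges you all year for two weeks of need.
        }
    }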

Cloud in a box?  I'm a glass 1% full sort of guy.




Monday, December 05, 2011

Why thinking counts and development doesn't

I'm having one of those interviewing streaks at the moment.  The sort of interviews where after 2 minutes the only question is how to politely wrap the interview up but where secretly you want to scream 'DO YOU SERIOUSLY THINK YOU ARE ANY GOOD?'.  You know the sort, where you ask a simple question like

'Explain the difference between EJB and SCA' and you get an explanation of the different Eclipse UI pieces the person has used on their current project. You then push for more structural detail and onwards and downwards the explanation goes.

The point here is that these people are 'developers' but I'm after 'solution architects' or 'software engineers'. What I want is people who understand the principles and structures behind what they do, and not simply the series of UI elements or API calls that fulfil their current task. I don't ask people about 'design' as I don't really care what they use for design, whether it's Agile, XP, SCRUM, Waterfall or just doing it in their heads. What I care about is how they think about solving a problem and then apply that thought to the platform they are working on. This applies to folks doing package functional work, Java developers and Perl guys alike. I want to know that you actually understand what is behind the scenes, otherwise I'm better off just doing it myself, and that is normally a waste of my time.
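For what it's worth, the structural answer I'm fishing for isn't long. A rough sketch contrasting the same service as an EJB session bean (a business component managed by a Java EE container) and as an SCA component using the OSOA Java annotations (an implementation wired up by an assembly, independent of the container technology underneath); the service names are invented:

    import javax.ejb.Stateless;
    import org.osoa.sca.annotations.Reference;
    import org.osoa.sca.annotations.Service;

    interface QuoteService { double quote(String product); }
    interface PricingService { double priceFor(String product); }

    // EJB: a component managed by the Java EE container, found via JNDI/@EJB injection
    @Stateless
    public class QuoteBean implements QuoteService {
        public double quote(String product) { return 42.0; }
    }

    // SCA: a component whose dependencies are wired by the composite (the assembly),
    // independent of whatever container technology sits underneath
    @Service(QuoteService.class)
    class QuoteComponent implements QuoteService {
        @Reference
        protected PricingService pricing; // wired by the SCA runtime, not looked up
        public double quote(String product) { return pricing.priceFor(product); }
    }

The point isn't the annotations, it's being able to explain the two models they represent.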

Software Engineer is a phrase that isn't used much these days, and I feel that is a shame. Software Engineers know how things work and what makes them work; they care about the structure and the mental model and then its application, not about the basics of what is in front of them.

Talking just about the surface is ruddy pointless. It's the sort of thing that causes issues, for instance asking a Siebel person 'how do you handle organisational email addresses' and them not knowing about the issues that raises because 'I've never had to do that'. That means you are sheeple: you follow what is there and don't try to think about what is behind it and the mental models, and this means you will make bad decisions.

Thinking counts, design counts, because it's that sort of thing that means you see the roadblock and avoid it rather than just ploughing straight into the wall.
 


Thursday, December 01, 2011

Five reasons why Facebook is dying and Email is king

Mark Zuckerberg tried to pull a Steve Jobs the other day by announcing that a new product of his was going to kill off a competitor. Now there have been articles on the fact that email use is still rising, so I'll give five reasons why Facebook is dying and five reasons why email will remain king.

Why Facebook is dying
  1. Facebook is at near saturation point; this makes it valuable through Metcalfe's law on the value of networks, but it is a concern in an area where displacement can be exceptionally rapid (anyone remember MySpace?). In China and other places Facebook has practically nothing
  2. Facebook is a one-trick pony and its trick isn't as good as Google's. Google's trick is to provide to OTHERS, Facebook's is to control within the garden. The moment the garden is under threat the value disappears
  3. Facebook is struggling to innovate; the aborted 'places' approach is an example and the new messaging piece (hardly ground-breaking) is part of that. Great first act, but where is the follow-up?
  4. Facebook's value lies in selling its user data; as data privacy rules tighten this avenue will become harder
  5. Facebook haven't shown how they transition from glorious start-up to proper heavyweight. Yahoo had more and still failed; why are FB's leadership different?
Why email will win
  1. Email is open - standard protocols, standard formats and available to anyone. This means its ability to connect is significantly higher (see the sketch after this list)
  2. Email is ubiquitous - we don't need the same client, server or anything. Whether it's mobile, desktop, web or anything else, people can communicate, and it has massively more engaged users than Facebook.
  3. Email can be private - encryption, local storage and other elements mean email can genuinely be private
  4. Email isn't owned - there isn't one big company trying to fleece you or sell you; its value is in connectivity
  5. Email is federated to the individual - I choose what I want to see and what is important.
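On point 1, the openness is easy to demonstrate: a few lines against any standards-based mail library will reach a recipient on a completely different provider, routed via DNS MX records with no shared vendor in between. A minimal JavaMail sketch, where the host and addresses are placeholders:

    import java.util.Properties;
    import javax.mail.Message;
    import javax.mail.Session;
    import javax.mail.Transport;
    import javax.mail.internet.InternetAddress;
    import javax.mail.internet.MimeMessage;

    public class OpenEmail {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("mail.smtp.host", "smtp.example.com"); // any relay you have access to
            Session session = Session.getInstance(props);

            Message msg = new MimeMessage(session);
            msg.setFrom(new InternetAddress("me@example.com"));
            msg.setRecipient(Message.RecipientType.TO, new InternetAddress("you@example.org"));
            msg.setSubject("No walled garden required");
            msg.setText("Delivered across providers via open protocols.");
            Transport.send(msg); // SMTP routes on the recipient's domain, not on a vendor
        }
    }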

Zuckerberg wants to kill email BECAUSE it's open and because it allows people to be social. And ultimately this is what I think dooms Facebook. In a federated world the winner in social communication is going to be a federated approach; right now that means email, but it doesn't mean that in 5 years we won't have some clever software that enables people to 'carry' their status on their mobile device and have it federated in the manner they wish, independent of the provider.

Email isn't dying. Facebook is dying, and Zuckerberg is not the next Steve Jobs.


Monday, November 14, 2011

SOA Anti-Pattern: Sharing data like Candy

Back in 2006 I wrote a bunch of SOA anti-patterns, with some additional help, and these are pretty much as valid now as they were then. I'd like to add a new one though, one that I've seen over and over again; it's related to the canonical model problem and it's pretty easy to solve.

Name: Sharing Data like candy
Description
This anti-pattern comes from a desire to 'share' all the information between areas, but instead ends up creating strong dependencies between areas, because core information sets - for instance customers, products and locations - are thrown about all the time and need to be 'matched' across systems during the transactions. The information description for these core entities grows continually as different services want 'just one more field' in order to enable other services to get 'everything'.

Effect
The impact of this approach is simple: the schemas for the core entities aim to be 'complete', so that all information from every service can be placed within them. This is claimed to aid 'sharing', but the reality is that as new attributes and sub-entities are added, different areas of the business begin to add the same attributes in different places, and the testing impact of change becomes a significant issue because these core information sets are used in every single interaction. Services take to storing information from these interactions 'just in case', which leads to significant challenges of data duplication. The solution degrades into a 'fragile base class' problem, and modifying the information model becomes a high-ceremony, high-impact change. Development becomes slow, people look for 'back doors' to avoid corporate approaches, and local point solutions begin to proliferate.

Cause
The cause is simple: a failure to recognise what the purpose of the interaction is. If service A and service B both have information on customer X, known to service A as "fred bloggs, ID:123" and to service B as "mr fredrick bloggs, ID:ABC", then passing 40 attributes about the customer is just designed to create confusion. It doesn't matter if Sales and Finance hold different information about a customer as long as each has the information that is right for its interaction with them. Sharing all this information makes communication worse if it's done in an unmanaged way. The problem here is that management of these core entities is a task in itself, while the SOA effort has viewed them as just another set of data entities to be flung around. The cause is also the SOA effort viewing everything as a transactional interaction and not thinking about the information lifecycle.

Resolution
What we need is the ability to exchange the mapping from service A's ID to service B's ID, and only the information which is specific to the transaction. This means we need a central place which maps the IDs from one service to another and ensures that they represent a single valid individual, product, location, etc. This is the job of Master Data Management, and it comes with a big set of business governance requirements. The approach has the added benefit of being able to synchronise information between systems, so people don't see different versions of the same data attributes, and of being able to extract information out of source systems to provide a single, transactional, source.
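As a minimal sketch of the cross-reference idea - a real MDM product adds matching, survivorship and governance on top, and all names here are illustrative:

    import java.util.HashMap;
    import java.util.Map;

    // Illustrative master index: one golden ID per real-world customer, plus a
    // mapping to each service's local ID. This only shows the cross-reference idea.
    public class CustomerMasterIndex {
        // (serviceName:localId) -> masterId
        private final Map<String, String> xref = new HashMap<String, String>();

        public void register(String masterId, String service, String localId) {
            xref.put(service + ":" + localId, masterId);
        }

        // Translate service A's ID into service B's ID via the master record
        public String translate(String fromService, String localId, String toService) {
            String masterId = xref.get(fromService + ":" + localId);
            if (masterId == null) return null;
            for (Map.Entry<String, String> e : xref.entrySet()) {
                if (e.getValue().equals(masterId) && e.getKey().startsWith(toService + ":")) {
                    return e.getKey().substring(toService.length() + 1);
                }
            }
            return null;
        }

        public static void main(String[] args) {
            CustomerMasterIndex index = new CustomerMasterIndex();
            index.register("CUST-1", "Sales", "123");    // "fred bloggs" in Sales
            index.register("CUST-1", "Finance", "ABC");  // "mr fredrick bloggs" in Finance
            // A Sales-to-Finance interaction only needs the mapped ID, not 40 attributes
            System.out.println(index.translate("Sales", "123", "Finance")); // prints ABC
        }
    }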

The resolution therefore is to agree as a business that these core entities are important and to start managing them; this reduces the complexity of the SOA environment and increases its flexibility and agility, as well as removing issues of data duplication.




Wednesday, November 09, 2011

How Microsoft missed the cloud....

Back in 2007 I posted about a research leader (who retired this year) at Microsoft who made some predictions about the future, which can be summarised as follows:
  1. Single processing is an old-school idea - this was pointing out the obvious in 2007, and indeed obvious in 1990 and before if you know anything about decent-scale systems. This was a prediction of the future being the same as the present and recent past - not so much a prediction as a statement
  2. The end of low-level programming languages - as above, this wasn't a prediction but a statement of current reality dressed up as a prediction. The line "Once considered an extravagant use of memory, compilers are now essential tools" is brilliant. WTF is Node.js in this world? So wrong it's gone around the other side....
  3. Virtual memory is dead in next-generation OSes... umm, so that would exclude Windows 7 and Windows 8 then... clearly Microsoft don't consider them next generation
  4. You'll be carrying around all the personal storage you need for video and audio... my 4TB of video would disagree
  5. Next-generation OSes will use DB technologies not file systems... again a world of MS #fail on this one
Now the first two were predictions of 'yesterday' being put forward as the future, and the third was a lack of vision as to what virtual memory is actually for... but the real point here is the last two.

This was 2007, remember. A year in which I was talking about Google SaaS and Amazon AWS to lots of folks, and here we have someone who was a research leader at Microsoft really missing the point of the next wave - not a 10-year+ wave but the wave that was about to crash across the entire company. Point 4 has been replaced by iCloud and other approaches, which mean you don't need to carry it all with you (and with video and TV you probably couldn't anyway) but can instead access it on demand via mobile broadband. The final point is really just that Windows Vista should have had that DB technology but didn't, and you know what? None of us are missing it. Spotlight on Mac OS X and the new Windows 7 search stuff mean we don't need that DB approach, and out in the real world we are seeing people using NoSQL approaches over traditional DBs.

My point here is simple. Here was someone at the top of the research tree in one of the biggest tech companies in the world, and his 5 'top' predictions were all total bobbins. So what does that mean for the rest of us? Well, first of all it means listen to the shiny fringe and read about the leading practice of the past. Secondly it means don't listen to the 'vision' of companies with a bought-in objective of extending the present. Thirdly it shows that missing the wave costs a lot of money in catch-up, and profitability becomes an issue (see: Bing, Windows Mobile, etc.)

Above all it means challenging visions, and then measuring companies against them.



When Big Data is a Big Con

I'm seeing a lot of 'Big Data' washing going on in the market. Some companies are looking at this volume explosion as part of a continuation of history: new technologies, new approaches, but evolution not revolution. Yes, MapReduce is cool, but it's technically much harder than SQL and database design, which means it is far from a business panacea. Yes, the link between structured and unstructured data is growing, and the ability of processing power to cut up things like video and audio has never been better. But seriously, let's step back.

Back in 2000 I worked at a place that spent literally MILLIONS on an EMC 5TB disk set-up. Yes it had geographical redundancy etc., and back then 5TB was seen as a stratospheric amount of data for most businesses. These days it's the sort of thing we'd look to put onto SSDs; it's a bit beyond what people would do in straight RAM, but give it a few years and we'll be doing that anyway.

Here is the point about Big Data: 95%+ of it is just the on-going exponential increase in data, which is matched, or at least tracked, by the increase in processing power and storage volumes. Things like Teradata and Exadata (nice gag there, Larry) are set up to handle this sort of volume out of the box, and Yahoo apparently modified Postgres to handle two petabytes, which by anyone's definition is 'big'. Yes, index tuning might be harder and yes, you might shift stuff around onto SSDs, but seriously this is just 'bigger', it's not a fundamental shift.

MapReduce is different because it's a different way of thinking about data, querying data and manipulating data. This makes it 'hard' for most IT estates, as they aren't good at thinking in new ways and don't have the people who can do that. In the same way that there aren't many people who can properly think multi-threaded, there aren't many people who can think MapReduce. Before you leap up and go 'I get it', do two things: 1) compare two disparate data sets, 2) think how many people in your office could do it.
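For anyone who wants to test the 'I get it' claim, the canonical starting point is word count against Hadoop's MapReduce API. Even this trivial case forces you to think in independent key/value transformations rather than joins and WHERE clauses:

    import java.io.IOException;
    import java.util.StringTokenizer;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;

    public class WordCount {
        // Map: each input line independently becomes (word, 1) pairs
        public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
            private final static IntWritable one = new IntWritable(1);
            private final Text word = new Text();
            public void map(Object key, Text value, Context context)
                    throws IOException, InterruptedException {
                StringTokenizer itr = new StringTokenizer(value.toString());
                while (itr.hasMoreTokens()) {
                    word.set(itr.nextToken());
                    context.write(word, one);
                }
            }
        }

        // Reduce: all values for one key arrive together and are folded to a total
        public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
            public void reduce(Text key, Iterable<IntWritable> values, Context context)
                    throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable v : values) sum += v.get();
                context.write(key, new IntWritable(sum));
            }
        }
    }

Now try extending that to comparing two disparate data sets and ask how many people in your office could.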

So what do we see in the market? We see people using Big Data in the same way they used SOA: slapping on a logo and saying things like 'Hadoop integration' or 'Social media integration' or... to put it another way... 'we've built a connector'. See how much less impressive the latter looks? It's just an old-school EAI connector to a new source, or a new ETL connector... WOW, hold the front page.

Big Data has issues of data gravity, process movement and lots of other very complex things. So to find out whether it's Big Data or Big Con, ask the following:

  1. Can you replace the phrase 'Big Data' with 'Big Database'? If you can, then it's just an upgrade
  2. Do they have things that mean old-school DBAs et al. can handle Hadoop?
  3. Can the 'advance' be reduced to 'we've got an EAI connector'?
  4. Is it basically the same product as 2009 with a sticker on it?
  5. Is there anything that solves the data gravity problem?
  6. Is there anything that moves process to data rather than shifting the data?
Finally, do they describe Big Data in the same way that the Hitchhiker's Guide to the Galaxy described space?
"Space," it says, "is big. Really big. You just won't believe how vastly, hugely, mindbogglingly big it is. I mean, you may think it's a long way down the road to the chemist's, but that's just peanuts to space, listen...
Then you really know it's Big Con. Big Data is evolution not revolution, and pretending otherwise doesn't help anyone.



Tuesday, November 08, 2011

Soylent Green is Facebook - People as a Product

As everyone knows, 'Soylent Green is People', and indeed it's taken nearly 40 years for people to really make a product whose only ability is to sell the value of people: people cut into cross-sections, relationships and information. This is what all social media companies are really selling; they aren't selling TO people, they are selling people TO companies. This is the only way they make money. People talk about privacy concerns in social media, but in reality it's a balance of how much they can get people to give away.

In other words the goal of Google+, Facebook etc. is to make people willingly become products they can sell. When you opt in to marketing or 'like' something on Facebook you are making that decision and that commitment.

It's 2011 and Soylent Green is the biggest product buzz on the market; it's just that, as with every relaunch after a faux pas, it's been rebranded and renamed (to 'Social Media'), but the goal is the same: to provide people, and companies, with people as a product. This desire to monetise people is only going to get more direct and more visible. It builds on the customer marketing databases of the past but adds a more direct engagement; instead of selling a contact point you really are selling the actual individual.

The question isn't 'is this right or wrong'; it's just reality. The product of social media is the people who use it. Social media companies aren't a charity, they want to make money, and in order to make money they either have to charge users or sell those users.

Facebook is PEOPLE!


Thanks to Rick Mans for the idea

Monday, October 24, 2011

The 'Natural Platform' - Why people matter more than performance in picking hardware

One of the things that I get asked is 'what hardware should we run this on?'. I've said for years that I don't care about the tin, and that the tin is irrelevant from a differentiation perspective. Now before people leap up and say 'but X is 2x faster than Y', let me make a couple of points:

  1. Software tuning and performance will have miles more than a 2x impact
  2. The software licenses will probably cost more than the hardware
  3. The development will definitely cost more than the hardware
  4. The software support and maintenance will definitely cost more
So what does that mean? Well, it means that when picking hardware you should consider the people cost and the support costs much more than you consider the performance. What you should be asking is
 'what is the natural platform for this software?'

The natural platform is the one that the software is tested on. That doesn't mean it's the hardware platform the software vendor wants you to buy; it's the one that the developers are actually using to test that the software works. Why do you do this? Well, when there is a bug, and let's face it there are always bugs, then instead of just having the support folks available to fix it you have all of the developers as well, because they don't need to switch from their current environments.

Like I say, this doesn't mean 'pick a hardware platform from the software vendor', it means pick the natural platform. So if you think/know the developers are on Linux and deploying to Linux servers for test, then you know there are more people able to help on Linux than anything else. If they are developing on Windows and deploying to Linux then either of those platforms is natural.

As an example of what happens when you don't, let me take you back to 2000. I was working on a project and we were using MQSeries, JMS and of course Java. We developed it on Windows and deployed it to Linux for test. For production however we'd been convinced to go for AIX to give us some major grunt. We deployed the code into UAT and... it broke. Our assumption was that this was our fault: we didn't know AIX that well, and clearly running IBM's AIX with IBM's Java implementation, IBM's JMS implementation and IBM's MQSeries meant that it had all been tested; this was their flagship platform, surely this was what it was meant to run on?

36 hours later we were talking directly to the product team, who identified the problem as a memory issue that only occurs on AIX - making clear that our configuration (pure IBM) had not even been tested.

Working on another project, where the database environment was different from the package provider's and the hardware was a mainframe, we had massive issues in just getting anyone who knew about our set-up in order to fix some problems.

These are normal problems, and the key to them all is that it's not about whether box X is faster than box Y; it's about getting the best support and fixing problems quicker. I'm not arguing that you shouldn't get an environment that scales; what I'm arguing is that when you look at the costs of tin, performance is a distant second to the people costs of fixing problems when they go wrong.

The problem is that normally the people buying tin are just buying tin. In these days of virtualisation it's about picking the right OS inside your virtualised server, but it's still important to think natural on platforms.

Pick the natural platform not the fastest shiny box.



Monday, October 17, 2011

Improvised Explosive Consultants - neutralising a bad consultant

One of the challenges I often have is where a company employs a consultant to give 'independent' advice and the individual employed is a total snake-oil salesperson. They've read a couple of books and websites and see their only job as lobbing a 'bomb' into a meeting and sitting back. I categorise a bomb as a piece of input that is:
  • Completely and utterly wrong
  • Contains a grain of truth that has been warped and distorted but can be made defensible 
  • Said with conviction and patronisation as to how you can't see their point
So it's the person who says 'Where are the WSDLs? This is an SOA project, I expect to see WSDLs' and then follows up with an accusation that you don't have enough detail as you haven't done the WSDLs. Or someone who in an MDM project asks 'why aren't you storing transactions in the master, how are we expected to get them?', or questions to which the answers are so obvious you didn't bother putting them on the slide deck: 'how on earth can you say you are doing a global implementation if you aren't using HTTP for the user interface'... 'Err, we are'... 'Well you should have said, it's a critical point'. You know the sort of thing. Dumb questions. The reason they are dumb questions is that this person is meant to be an expert. People who don't know can't ask dumb questions; they ask questions and you help them understand. But when the person is put forward as an expert then you have an issue. I think of these people as Improvised Explosive Consultants, as they are normally doing one specific thing: taking a small grain of knowledge and improvising it into a bomb to derail the project, to show their value.

You see, they are there to keep their fees up, and to do that they need to demonstrate 'value'. One of the easiest ways to do this is to undermine others, and doing this in a technical area just means you have to seem smart to someone who doesn't understand the subject area. One of the problems this causes is that you can't just call 'bullshit' on the bomb: it often sounds plausible to the layman while being rubbish, and doing so creates an antagonistic environment which is counter-productive and often leads to the IEC being seen as 'scoring a hit' with their insight.

This sort of consultancy often leads to bad decisions being made; the bomb is used to confuse, and thus a confusing solution is then created. The bomb maker, the IEC, is of course not going to be doing any of the actual delivery, but can continue to stand at the side throwing bombs to demonstrate their 'value' and superiority. So how do you defuse an IEC?
  • Don't lose your temper - they are after this as it shows the bomb has been effective and makes you look uncertain, even though you've probably lost your temper because of the level of stupidity
  • Get the specific objections down on paper - after the meeting where the bomb is thrown, get the IEC to write a clear email listing their 3-5 points.
  • Attack the facts not the person - don't debate the person, keep on the facts. Use terms like 'I'm a bit confused by number 3 because wouldn't that mean...' rather than '3 is just rubbish, don't you understand?'.
  • When you finish knocking down the points, send an email confirming that they accept that all of their 'challenges' have been addressed.
  • Then send their reply of 'yes' (probably including some phrase like 'it's important to check these things') to the whole group, making clear that you've addressed all the points and the solution hasn't changed at all
  • Repeat this every time.
  • When you get to the third bomb, add a statement to the reply: 'while I appreciate that the IEC is just trying to get to the right solution, as we all are, I'm concerned that the team is being delayed addressing these concerns. So far we've addressed 15 points and none have resulted in a modification to the solution. Could I please ask that anyone who has specific challenges to the solution at this stage emails me (or adds them to the bug/CR/etc. repository if you have one) before the meetings, to speed up this process.'
Hopefully during this process the IEC will quickly become identified with delay, and your documented history of their comments producing nothing but additional work will result in them being defused.

If you are on the other side and think you might be employing an IEC, there are a few key ways to check:

  1. Do they talk about their delivery successes or just their advising successes?
  2. Do they use patronising tones when people whose opinion you think is worth something explain stuff to them?
  3. Do you find that people with a history of successful delivery say they are 'confused' about where these challenges are coming from?
  4. Does the person talk about problems without talking about simple, clear solutions?
If you've answered yes to 3 or more of the above then take a long hard look, you might have an IEC.



Monday, September 19, 2011

NFC - What Apple does next? Buy Amex?

Chatting around the iPhone 5, I have to say it all sounds rather dull: yes they'll fix the antenna, yes the camera might get better and the processor faster. But really, is that a big deal? So what could Apple announce, either now or next year, that would really blow people away?

I travel a lot, and one thing I see around Europe is the rise of NFC: the use of 'smart' cards on the Underground, the Metro, Dutch rail and elsewhere. The technology is basically the same every time: a chip with some form of RFID or NFC is swiped over a sensor and your trip is paid for. Around the globe we are seeing MasterCard and Visa both adding NFC to their cards so you can just wave the card around and get charged for things.

Now MasterCard and Visa get a percentage of the transaction, and there are normally companies providing the smartcard services for public transport. But what stops Apple creating or leveraging a 'standard' for NFC to cover these cases? Think 'AirPlay' over six inches. Now Apple have said NFC won't be in the iPhone 5, which indicates... well, not much. The key question here isn't so much when NFC will be included as the impact this will have on the market.

Let's say that in the UK my Oyster card can be replaced with my iPhone, and my iPhone can be linked to my credit card. Suddenly Apple are taking a slice not just of my iTunes transactions but of pretty much everything I do, and they have clarity via GPS on where I'm doing it. I can also charge via this system on the French, Dutch, etc. networks without having to buy a ticket or go to a ticket office; think of the staff savings....

Now for fraud reasons it would be good to track the GPS, so that if I do an iPhone transaction in Utrecht and there is then a card transaction in London 10 minutes later, the odds are that one of them is fraudulent. This fraud protection would give Apple the reason to store and analyse this information, and of course provide it to the credit card provider. All very open, if more than a little Big Brother.
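The check itself is simple enough to sketch. The class and method names and the 900km/h threshold below are illustrative assumptions; the haversine distance formula is standard:

    // Geo-velocity check: flag a pair of transactions when the speed implied by
    // their locations and timestamps is implausible for one person.
    public class GeoVelocityCheck {
        private static final double EARTH_RADIUS_KM = 6371.0;
        private static final double MAX_PLAUSIBLE_KMH = 900.0; // roughly airliner speed

        // Great-circle distance between two lat/lon points (haversine formula)
        static double distanceKm(double lat1, double lon1, double lat2, double lon2) {
            double dLat = Math.toRadians(lat2 - lat1);
            double dLon = Math.toRadians(lon2 - lon1);
            double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                     + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                     * Math.sin(dLon / 2) * Math.sin(dLon / 2);
            return 2 * EARTH_RADIUS_KM * Math.asin(Math.sqrt(a));
        }

        static boolean looksFraudulent(double lat1, double lon1, long t1Millis,
                                       double lat2, double lon2, long t2Millis) {
            double km = distanceKm(lat1, lon1, lat2, lon2);
            double hours = Math.abs(t2Millis - t1Millis) / 3600000.0;
            return hours > 0 && (km / hours) > MAX_PLAUSIBLE_KMH;
        }

        public static void main(String[] args) {
            // Utrecht, then London ten minutes later: ~360km implies over 2,000 km/h
            System.out.println(looksFraudulent(52.09, 5.12, 0, 51.51, -0.13, 10 * 60 * 1000));
        }
    }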

The point here is that integrating NFC into the iPhone and driving a standard - thanks to the massive market volumes the iPhone has, something no-one else can currently do unless Google dictates hardware standards a bit more - would mean that governments and companies could massively reduce the timescales for adopting NFC and start decreasing their operating costs by relying on iPhone users to self-serve. This would have the knock-on effect of driving iPhone sales, because with the iPhone being the 'preferred' mechanism it becomes simpler to own one than not. Thus Apple provide to the credit card companies and other organisations a platform for NFC, which gives Apple revenue and gives companies the leverage that a standardised market brings (think 802.11x against proprietary approaches).

That then got me thinking. Apple quite like being in control and have $76bn in cash; what could you buy for that which would really kick this up a level? How about Visa ($63bn), MasterCard ($44bn) or American Express ($60bn) themselves? Apple could buy, for cash, any one of these. So instead of an open system for all of the providers, it suddenly becomes a closed system where you are best off having a specific card provider if you get an iPhone. Given Apple's demographic target, and their larger charges than the competition, Amex would seem to be a good fit.

Sometimes it's scary when you join the dots.



Tuesday, July 19, 2011

Java 7 SE approved... Meh

Hey, Java SE 7 has been approved... now that was spectacularly quick. You'd almost think that the normal Java Community Process had been ignored and instead the spec lead had taken an externally created spec straight to approval...

What is most depressing in reading the various Oracle (mainly ex-Sun employee) releases on this is that not a single one actually commented on the fact that, of the people doing the approvals, six expressed reservations about the licensing terms and the transparency of the process. All stated that they were approving it to get Java moving again and that there are issues they want to see addressed. I've said before that Java SE 7 is a nothing release for Java, but it's the approach and process that concern me most. While I don't think the Java SE 6 approach taken by the Sun leads at the time was at all positive or constructive, and the end result was not what was required, at least there was decent representation and debate. 18 companies spent 18 months, and while the dumbest decisions remained (JAX-WS in Java SE 6... how stupid does that look now?) at least there was a measure of debate.
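To show why I call it a nothing release, here are the headline language changes - the Project Coin conveniences - in one snippet. Pleasant, but hardly a platform shift:

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;

    public class Java7Coin {
        public static void main(String[] args) throws IOException {
            List<String> lines = new ArrayList<>();       // diamond operator
            int mask = 0b1010_1010;                       // binary literals and underscores
            switch (args.length > 0 ? args[0] : "none") { // strings in switch
                case "verbose": System.out.println(mask); break;
                default: break;
            }
            // try-with-resources: the reader is closed automatically
            // (assumes a file called in.txt exists alongside the program)
            try (BufferedReader in = new BufferedReader(new FileReader("in.txt"))) {
                String line;
                while ((line = in.readLine()) != null) lines.add(line);
            }
        }
    }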

The expert group for Java SE 7 was a sham: five companies, five months, and the first draft published days after the expert group was finally formed. The comments of the six companies who approved it (plus Google, who voted against) clearly indicate that there are significant changes required for Java to regain its position as the go-to platform for developers, enterprises and vendors.

I really hope that Java SE 8 actually does the radical things that the platform needs, given the big shifts in the 5 years since the last decent release that didn't just dump cruft into the platform. Unfortunately I'm still concerned that the mentality is to continue with a closed community which pretends to be open while actually pushing an already proven-to-fail line.

Java SE 7... I give it a 'Meh'.



Dumb IT: We've already got some licenses for this

There are certain phrases that fill me with dread: 'We are using Agile so we don't need to have a vision, we'll just iterate', 'there are no data quality issues', 'We're the first people to use this' and 'The vendor's roadmap says they'll do X in 2 years so it will be fine by the time we need that'. One however is completely variable in the fear it introduces, because it comes in three clear flavours: one good and two bad.

'We've already got some licenses for this'

What this means is one of three things

  1. We need to do something, a technology we know well is ideally suited, and fortunately we don't have to buy any more licenses
  2. We bought an enterprise agreement when we bought product X and the vendor allows us to have a few licenses for Y and Z as well
  3. I've spent a load of money on licenses for this and we are damned well going to use them
Now clearly the first one is fine; this is, for example, where new functionality is being delivered for a website and the number of CPUs has to be increased. The last is of course clearly dumb, and often the driver behind IT-centric projects that burn money.

The middle one however is the most dangerous. I've seen it over and over again. A company buys the flagship product from a market leader in a specific segment. That market leader also has some other products which are either early to market, non-strategic or just plain a bit rubbish, and to 'sweeten' the deal they include a bunch of licenses for them and offer this up as value.

A few weeks, months or even years later a project comes along which needs to do a specific thing. Suddenly someone remembers, or more often pushes, the fact that there are some licenses on the shelf that do that sort of thing, so brilliantly money can be saved.

Let me recount a story of just such a decision....

In 2001 I was working with a company who had bought an enterprise package solution from one of the market leaders in the CRM space. As part of this 'deal' the company was allowed to use up to 4 CPUs of any other product from that vendor. We had to produce a website to enable consumers to interact directly with the package, and this was well before .com front-ends were normal practice.

'Fortunately' the vendor had a new product to do just this; it was new, it was shiny and it was covered by the 4 CPUs. The alternative was to spend about 6 months developing something custom with about 5 people, and despite some heavy cautioning from me against adopting a brand new, unproven technology that looked rather rubbish when I investigated it, the company decided it would be cheaper because of those 4 CPU licenses....

18 months later, with an average team of around 15 people and much hacking, cursing and challenge, the site went live with a fraction of the envisioned functionality.

So that 4 CPU 'saving' had in fact delivered a 20 man-year cost increase and less functionality.

Want another? How about the company who used some old EAI licenses and found out half-way through the project that the vendor was discontinuing the product? How about the company who used the limited number of web content management licenses and found that 10 years later it was a drain down which millions had been poured... seriously, I could go on and on.

The point here is that lots of IT seems to account for license costs in a completely different way to people costs. Something that saves [£€$]10 of license cost is seen as good even if it delivers 10x that in additional people costs.

The solution is simple: when you look at a programme, evaluate the total cost of ownership of the solution, not just the immediate cost of buying licenses. Cheap today is liable to be expensive tomorrow and potentially extortionate the week after that. TCO is all that should count in these decisions, but normally the lure of 'free licenses' outweighs the rationale of 'that isn't the right tool for the job'.
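As a sketch of the arithmetic - every figure below is an invented illustration, though the shape mirrors the story above:

    // Hypothetical TCO comparison: 'free' shelfware versus buying the right tool.
    public class TcoComparison {
        static double peopleCost(int team, double years, double perPersonYear) {
            return team * years * perPersonYear;
        }

        public static void main(String[] args) {
            double manYear = 100000; // assumed fully loaded cost per person per year

            // Option A: 'free' licenses off the shelf, but 15 people for 18 months
            double optionA = 0 + peopleCost(15, 1.5, manYear);
            // Option B: buy the right tool, 5 people for 6 months
            double optionB = 250000 + peopleCost(5, 0.5, manYear);

            System.out.printf("Free licenses: %,.0f; bought licenses: %,.0f%n",
                    optionA, optionB);
            // 'Free' comes out at 2,250,000 versus 500,000: over four times the cost.
        }
    }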

Now that really is Dumb IT




Monday, July 18, 2011

Hackgate and what it teaches us about responsibility

The ongoing Hackgate scandal - we can call it that now that people in the US are interested, rather than just the dull old 'phone hacking scandal' - teaches us some very interesting lessons about corporate politics and the meaning of the term 'responsibility'.

I see quite a few projects where the issue is that someone somewhere hasn't taken control or responsibility and therefore things have gone off the rails. A lot of the time this is about the personality types involved and those personalities massively impact how any recovery can be achieved.

In this scandal we've so far seen five completely different views of responsibility, and each teaches us a lesson.

Responsibility without Action but with blame - David Cameron
First up is the Prime Minister, David Cameron, who has taken 'full responsibility' for hiring Andy Coulson. What full responsibility means here is that he has stated that he takes that responsibility, but in reality nothing has changed and nothing is in danger of actually being done. I often see this on projects where a senior business sponsor has a pet project that they don't actually care that much about but just want to see continue for power reasons. As on this occasion, this responsibility normally means shooting the project manager or some other individual, so in reality what is being said is that the senior individual bears no real responsibility and that the failure lies further down the chain of command.

This is very common in IT projects; often you will see a programme director promoted during a project and then taking 'full responsibility' by firing the person who had to clean up their mess after they were promoted. It's a very good career strategy, as it implies that you are the sort of person who takes decisive action and has personal integrity, but in reality it is classic blame farming. This is one of the hardest situations to deal with when recovering a project, as you often need the sponsor to change some of their behaviours, and what you get instead is a continual statement that they have 'taken responsibility' while in reality they do nothing to change the behaviours that led to them having to take responsibility for the failure. These people can be very useful when you recover a programme, as they are keen to be seen to be doing things, and if you can leverage that into action it can be a positive thing.

Responsibility without responsibility - Rebekah Brooks
Rebekah Brooks (nee Wade) typifies another form of rogue sponsor when issues occur. In this case the sponsor is often actively involved with the project and seen as more than simply a sponsor, but actually a leader of the initiative. Trouble occurs and the sponsor throws up their hands and claims that they had no idea at all what was going on and are appalled at how they were kept in the dark. This is a very tricky act to play, but if played well it normally means the entire project is about to get a huge kicking, as literally nobody is going to protect the individuals; indeed the previous sponsor will often be the most vicious, lashing out in order to protect their own reputation.

In IT programmes I regularly see this where a project within an IT director's remit is failing and they see that their best chance of coming out well is to be seen as a 'strong' leader who has so many things to do that they can't be faulted for 'trusting' lieutenants who then turned out to be rubbish. When trying to clean up a project these people are toxic, as normally their only interest is ensuring that no blame ever attaches to them; Brooks' initial leading of the 'investigation' is a classic case of someone closely associated with the issues trying to ensure the 'correct' outcome by directly or indirectly leading the investigation. Sometimes in IT the phrase 'let's draw a line under it and move on' is used, which can help if it's meant, as it means everyone can get on with fixing the problem rather than worrying about the political issues.

Responsibility with personal accountability and exit - Sir Paul Stephenson 
Then we have the Met Police chief, who has resigned over faults within his organisation which pretty much no-one feels touch him personally. His extremely barbed comment, pointing out that his mistake was to employ someone who hadn't resigned in the original scandal, as opposed to David Cameron, clearly highlights what he feels is a double standard in how people talk about responsibility. Now from one perspective, having the overall sponsor take responsibility and leave because of it demonstrates strong personal integrity and leadership; from another it also means that the folks below are left without a clear leader, and there is therefore uncertainty over who will clean things up.

In IT this unfortunately often occurs when the sponsor feels they can leap into another job somewhere else if they go before they are pushed. The challenge for the project team remains: who will clean up the mess and drive it forwards?

No responsibility but lots of finger pointing
What has been most common in this scandal is lots of people on the outside pointing fingers and suggesting how things should be cleaned up, with for the most part very little actual personal commitment to helping clean them up. MPs have condemned and moaned, but not promised to stop religiously courting the media. Other scandal sheets have condemned and moaned about the impact on the reputation of 'good journalism', but certainly not offered themselves up to be investigated to clear their names.

This is really common in IT recovery programmes: lots of people standing on the sidelines with 'helpful' advice on how to improve or how to 'learn' from the mistakes, but zero actual time commitment to cleaning things up. Managing these people is central to recovering an IT programme.

No concept of responsibility
Paul McMullen has been the comedy turn of this scandal, a man so divorced from reality that he continues not just to excuse but to positively champion the sorts of behaviour that everyone around him is condemning. Here is a man whom both Hugh Grant and Steve Coogan have shown up to be a total and utter muppet of the highest order. At some stages I've really wondered if he isn't really a journalist but in fact a very hammy actor who is overplaying the part.

In IT these are the folks who just don't get that there is an issue. I once interviewed an architect shortly after Boo.com had failed. He had been responsible for lots of their key architectural decisions, several of which were behind people's general inability to use the site. During a curtailed interview he continued to champion the approaches they had taken, which had failed, and even managed to support and promote the business model for Boo.com, which had burnt through lottery-winning amounts of cash in an amazingly short period of time (one founder entertainingly stated that they were 'too visionary'; nope, it was a bad idea, badly implemented). People like this must be quickly exited from the programme: if they can't see any issues then they will be unable to fix them.


Responsibility with Action
What we haven't really seen so far is someone take responsibility with action in this crisis. Sure, the News of the World has been closed down, but allegations by Jude Law against the Sun have been met with abuse rather than a culture of 'we are pretty sure we didn't, we take these allegations seriously, we will investigate and we are confident we will prove our innocence'. The closure itself was simply an acceleration of a previously announced policy, so there really hasn't been that active leadership yet in cleaning things up.

Responsibility with action in an IT failure is the person who stands up and says 'mistakes have been made, right, let's fix them' and sets about driving change and leading the cultural shift that is normally required to recover a failed or failing project. Responsibility with action is normally a quiet thing rather than a shouty thing; it's something that is done rather than talked about. It isn't a committee to investigate, it's an active approach to finding what went wrong and fixing it as quickly as possible.

Critically it's not about blame in the sense of finding people to blame, it's about finding problems; when those problems turn out to be people then those people are given a simple choice: change or leave. It is about finding people who should have taken personal responsibility and ensuring that next time they do.

Recovering projects is normally one of the most thankless tasks that I do. You enter a scenario where someone else has screwed up, and your end result is getting the project to a place where it should have been ages ago. There is however something personally rewarding in changing the culture of individuals so that they are able to recognise the mistakes that were made, and in exiting the toxic people from the process. Crucially, there is one lesson that I've learnt doing this: the first stage has to be recognising that there are systemic problems that need to be fixed. If it turns out that it's actually a localised issue then great, but the assumption must be that the rot is much broader and more general than the currently surfaced failure. Normally there is a culture of poor sponsorship, leadership, management and clarity that leads to a general case of fail, in which only the scale of the project failure stands out.

If you do see a project failing, the first thing you should identify is not what went wrong in the weeds but how the sponsors and leaders will react. Will they behave like a Brooks and deny everything? Like a Cameron, take 'full responsibility' but actually blame farm? Will they deny that there actually is an issue, like McMullen? Will they fall on their sword and leave a vacuum, or do you have someone with whom you can actually work to drive through the recovery? Clarifying this 'top-cover' challenge is the first step in recovery.

Remember: don't judge people on whether they say they take or have responsibility, but on what they do.
 


Friday, July 15, 2011

Preaching to the Choir: the bane of IT

Sometimes I get asked why I bother debating with people who clearly have a different opinion from me and are unlikely to change their mind. The reason is that sometimes, rarely I'll admit, I will change my mind, and occasionally I will change theirs.

The other reason is what is the point of debating with someone who agrees with you? Unfortunately in a lot of IT we have two types of discussion

  1. Religious discussions based around IT fundamentalism
  2. Preaching to the Choir to re-enforce the message
These two are very closely related. Effectively a group of people talk to each other about how great something is, how fantastically brilliant that approach is and how the whole world should bow down before their joint vision of the future. These folks then head out to 'spread the word' and are just plain shocked when people fail to accept what they say as gospel truth, and they normally either resort to insults, make up facts or just plain ignore any comments or questions. Quite typically this last bit includes farcical comparisons like:

Q: Where are your references?
A: Same as yours, personal experience
Q: Err but mine are published on the web, here are a bunch of links...
A:

This is a conversation I've had many times. The reason for this post however is that on Google+ someone replied to a post by Jean-Jacques Dubray (which referred to this post) and, after a short discussion in which the individual started with a personal insult, moved on to ignoring questions and instead posted their own PoV, finished with the brilliant line:
Wrong audience and tone

Which of course just means that the person wants to go and speak with people who agree with them unquestioningly. This mentality is a massive problem in IT and, I feel, more prevalent in IT than in almost any other discipline. Whether it's the 'leadership' of the Java group ignoring huge amounts of external input that disagrees with them, or the various little pieces of fundamentalism around, it's a significant issue that folks tend to switch from one fanaticism to another without often pausing between them. The number of times I've bumped into someone a couple of years down the road who is now a fanatic for another approach is just stunning.

I remember once saying at JavaOne that UIs are best created with tools and being told in no uncertain terms that you couldn't build a single complex UI with tools, it had to be hand coded. I pointed out that I'd built an Air Traffic Control system where everyone was using visual tools for the UI building - a system that was already in production - and the reply was 'good luck with that, it won't work'. Much back-slapping from his friends for 'putting me in my place' while I wandered away, sadly wondering if people really could be in IT and want to learn so little from previous experience, preferring instead to create a small clique that backs them up.

I've come to realise that this is sadly exactly what lots of folks in IT prefer to do: they create an 'us v them' mentality and form small groups of 'evangelists' who preach to each other about the brilliance of their ideas and the stupidity of others for not understanding them.

It's this fractured nature that leads to groups denying any benefits of 'competing' approaches, or even of historical ways that have been proven to work. Often things are argued from first principles, and sometimes (and this is the one I find most scary) the clique is tied to a single published work which becomes its 'good book'. The choir preach to themselves and see success where none exists, or define success purely on their own internal definitions. The debate that is engaged in works at a very poor level, as no challenge is allowed to the basic assumption that they have found the 'holy grail' of IT which will work for every sort of approach.

Preaching to the choir is at the heart of this issue. Talking and debating only with those who agree with you is a bad way to test ideas. The Oxford Union is what debate is about: two sides trying to convince the other and the audience deciding who won. Argumental has built a programme around people being made to debate a topic they might not even agree with (although in the linked video Rufus Hound doesn't make a very good job of that).

If all you hear is 'that is great, brilliant, anyone who disagrees is an idiot' then I'm afraid that you are the idiot, as you are in danger of wearing the Emperor's new clothes and are clearly taking the easy way out. If you can't convince other people of the power of your argument, it is most likely because there are flaws in your argument that you don't understand or know about, not because the person you are debating with is an idiot (though sometimes that will be true, of course).

The basic rules should be

  1. Facts count - if you can reduce things to quantitative assessments then you are doing better
  2. Ladder of Inference - you need to build from the first point of the debate, not start at the end
  3. Answer questions - if someone asks a question, answer it
  4. Think about where the other person is coming from
  5. Read opposing views, learn from them
  6. Accept when you don't agree - sometimes people will differ and that is okay, accept it

I find it quite depressing when people say 'I'm not talking to X as he can't be taught about Y' when I know that in reality X has a very good point of view, one the person saying this really should listen to, as they'd learn something even if it challenges their current IT religion.

So please can we stop preaching to the choir and start having actual debates.  It doesn't matter if the tone is a bit disrespectful or sarcastic as long as you are challenging and responding to challenge.  It should be a fierce debate on occasions and that is fine; what it shouldn't be is just preaching to the choir and denouncing all those who disagree as heretics.


Technorati Tags: ,

Monday, July 11, 2011

SaaS integration - making the ERP mistakes on a bigger scale

One of the most frustrating things in IT is the totally amazing ability of people not to learn from past experiences. The following are all the sorts of things I've recently heard at conferences, vendor presentations, business presentations and in company architecture practices.
"We don't need an MDM solution as Salesforce is going to be our only customer repository"
"Integration is simple, its all just REST or Web Services, we don't need to worry about that"
"We are moving to SaaS because it doesn't require integration and dealing with IT"
"The business are on their own if they do SaaS, we just deal with the internal IT"
And a whole litany of others over the last few years. The general theme is that business folks are commissioning SaaS solutions and, in collusion with the technically naive, are setting up entire new estates beyond the firewall. Meanwhile the internal IT groups are often washing their hands of this deliberately and fighting against the change.

Let's start with the first statement, as it's the one I've heard several times.

I don't need MDM, my SaaS CRM is my master


This probably wins the prize in my book for the least ability to learn from IT and business history.  This is exactly what folks did in the first CRM rush in the 90s and have spent the last 15+ years trying to recover from.  Fragmentation of customer information is a fact, even more so in these days of social media.  So starting a strategy with an externally provided solution in which you have little or no say, and which is set up to be a good SaaS solution rather than an enterprise source for customer matching, merging and dissemination, is like doing a CRM project in the 90s by lobbing money at consultants and saying "build whatever you like lads".... really not a good idea.

If you are going to look externally for SaaS, and there are good business reasons to do so, then the first question should be how to create the unified information landscape that modern businesses require.  That is an MDM problem, which only gets bigger as you include suppliers, products, materials and all of the other core entities that exist.

Integration is simple, it's just REST/Web Services
While the CRM one is a joint IT/business error, this is the one where IT really excels at ignoring the past.  Integration is a hard problem in IT, and it's made much harder if you don't have MDM-style solutions.  When looking at SaaS, the fact that you have published interfaces helps slightly, but you still have the challenges of integrating between multiple different solutions, mapping the information, mapping the structure and of course updating all of this when it changes.  That, however, is just the technical plumbing.  Then you have to look at your business processes that span the SaaS solutions and the enterprise, as well as where to add new services into that environment.

The spaghetti mass of ERP and enterprise integration in the 90s, which EAI aimed, and mainly failed, to solve, will be nothing compared to this coming morass: externally competing companies who have a real commercial reason to keep you locked to their platforms and approaches, and who have the actual ability to make things tough because they can change their platforms as they want without asking for permission.

We are moving to SaaS so we don't have to deal with IT
This is a common refrain that I hear, but it's very short sighted.  What it really means is that the current IT department is broken, not meeting the demands of the business and not even properly explaining to the business what is going on.  This really is just storing up problems for the future, or delegating the problem externally to another group who will end up being your IT department in future, but one probably harder to shift and change than your current group.

If the business do SaaS that is their problem, we just do internal IT
I like to think of this as the IT redundancy programme: it's a wilful attempt to ignore the real world and it really is only going to end badly.

Summary
The point here is that moving to SaaS is actually a bigger challenge for integration and information management than the old ERP challenge, but most companies are entering it with the same wild-eyed wonder with which they entered the ERP/CRM decade of the 90s, leaping in and just assuming that the historical problems of integration will disappear.  The reality is that companies need to be more structured and controlled when it comes to SaaS, and IT departments must be more proactive in setting up the information and integration infrastructure to enable the switch.



Technorati Tags: ,

Monday, July 04, 2011

Microsoft's Eastern Front: the iPad and mobility

For those who study European wars, the decision to invade Russia consistently stands as one of the dumbest that any commander can attempt. Not because the Russian army was consistently brilliant or strong, but because the country is just too big and the winters too harsh to defeat via an invasion.

For years this has been the challenge of those taking on Microsoft: they've attacked the desktop market and created products to compete with the profit factories that are Windows and Office, even giving them away in the case of Open Office, but the end result was the same... Microsoft remained the massively dominant player. Even when Linux looked like winning on netbooks the sheer size and power of the Microsoft marketplace ensured that there would be no desktop victories. Sure, Apple has leveraged the iPod and iPhone to drive some more Mac sales, but the dent has been minor.

From one perspective Microsoft has also been the biggest investor on another front, that of mobile and mobility. Billions upon billions have been poured into the various incarnations of Windows on mobile devices, from Tablets and WindowsCE to the new Windows Phone 7; it has consistently been a massive outlay of money for a very, very small slice of the pie. This disappointed people who invested in Microsoft, but as long as the profit factories were safe then all was fine.

I think however that this failure is about to really hurt Microsoft. Today I'm sitting in a train carriage (treating myself by going First, at my own cost) and there are now 7 iPads open and 2 laptops (one of which is mine). I'm using my laptop as I'm creating PPTs, but if I wasn't I'd be on the iPad too.

The fact that I'm on a Mac is irrelevant. The key fact is that after Neil Ward-Dutton asked if the stats were good I took a walk down the carriages and found that the 3:1 iPad/slab-to-laptop ratio continued throughout first class and dropped to 1:1 in standard class. So even in the "best" case scenario for laptops you had 50% of people using and working on iPads (or equivalents), and in the management section it was 75% iPad domination.

These people are emailing, browsing, creating documents and generally getting on with mobility working. That is a massive shift in 2 years. 2 years ago it would have been laptops out and people using 3G cards or working offline; now it's all about mobility working. This represents a whole new attack on Microsoft's profit factories, and one from a completely different direction than they are used to. With rumours saying that Windows 8 for slabs won't be available until late 2012 or even early 2013, a full desktop/laptop refresh cycle will have gone through before Microsoft can even hope to start competing in this space.

I'm normally asked a couple of times on this 5 hour train journey about my ZAGGmate keyboard for the iPad and where I got it, with people saying "that is really good, I could ditch my laptop with that". This concept of mobility extends to how you use things like email. Sure, Outlook is a nice rich email client, but the client on the iPad is pretty good and has the advantage that you don't have to VPN into a corporate environment; you just use the mobile Exchange (an MS product) connection, so mobile signal quality doesn't impact you as much. As an example, on this trip I've had to re-authenticate on the VPN about 12 times; with the iPad I of course wouldn't have had to do it once.

It's hard not to feel that while MS has invested billions in the eastern front of mobility, in reality it's left with no actual defences: a Maginot Line, if you will, which has now been roundly bypassed by a whole new set of technologies that are not competing with Microsoft in the way it expected.

How long can the profit factories be considered safe? With 1% of all browsing traffic already coming from the iPad and mobility being the new normal, it's a brave person who bets that another 12 or 18 months won't deliver long term damage to Microsoft's core profits.


Technorati Tags: ,

Why Google Apps plus Google+ would change the market

Okay, I managed to get into Google+... so what did I find? Well, first off I found something with an unusual view on privacy and security. I can send a message to a specific Circle, and then anyone in that Circle can share that information with anyone they want. So the ability for private information to go viral is baked straight in... this is something that needs to be changed for Circles to have any weight. Sure, the cut-and-paste angle is liable to remain, but that is quite different from the immediacy of sharing.

Secondly, however, I saw a massive opportunity in what Google could do if they combine Google+ with Google Apps, specifically the GAPE products for business. Companies like Yammer are building a nice business in enterprise collaboration. With a bit of focus on security this is exactly what Google could do too... but better.

How?

Well, first off there needs to be the idea of "administrated" Circles, i.e. Circles which are officially vetted and which people can request to join. This would allow not just the sort of fan pages FB has to be created but, more critically, would allow companies to create internal project or information-area circles to promote collaboration. I think administrated Circles would be a positive on both the social and enterprise sides. On the social side I think there should be "closed" admin, where a limited set of people can approve access, and "open" admin, where a group is established and people self-vet themselves in (and potentially out).

Secondly there needs to be the idea of Google+ restricted to a given domain, à la Yammer, where everyone on it has to have a specific GAPE account. This means that a company can have a private Google+ environment, which, when combined with administrated Circles, would enable companies to set up collaborative environments rapidly and link them back to corporate directories and the collaborative technologies of Google Apps; for instance, a Circle could automatically be established for everyone who is editing or reviewing a document....

Thirdly, and this is where I think Google+ plus GAPE would be a real killer, there should be "bridge" Circles between different GAPE domains. These are external collaboration circles where people from multiple specific companies can be added to the environment to provide cross-company collaboration. In a world where collaboration between enterprise partners is becoming key, this sort of integration between GAPE (which already allows collaboration on documents) and Google+ would provide a step-change in simplicity for inter-enterprise collaboration.
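
To make these three ideas concrete, here is a minimal sketch in Python (used purely for illustration) of how the circle types might be modelled. AdminMode, the field names and the example domains are all my own assumptions, not anything Google has announced.

```python
from dataclasses import dataclass, field
from enum import Enum

class AdminMode(Enum):
    CLOSED = "closed"   # a limited set of admins approve each join request
    OPEN = "open"       # people self-vet themselves in (and potentially out)

@dataclass
class Circle:
    name: str
    admin_mode: AdminMode = AdminMode.OPEN
    admins: set = field(default_factory=set)    # accounts who can approve join requests
    members: set = field(default_factory=set)   # accounts like "alice@acme.example"
    domains: set = field(default_factory=set)   # empty = unrestricted; one = domain-restricted; several = a "bridge" circle

    def may_join(self, account: str) -> bool:
        """Domain check for any join; open-admin circles need no further approval."""
        domain = account.split("@")[-1]
        return not self.domains or domain in self.domains

# A "bridge" circle spanning two invented GAPE domains:
falcon = Circle("Project Falcon", AdminMode.CLOSED,
                admins={"pm@acme.example"},
                domains={"acme.example", "partner.example"})
print(falcon.may_join("bob@partner.example"))   # True, though CLOSED means an admin must still approve
```

The one design point worth noting is that the same structure covers all three cases: a personal circle, a Yammer-style domain-restricted one and an inter-company bridge differ only in the domain set and admin mode.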

So there it is: three things that would give Google+ a paying audience for its technology in a place where FB and Twitter have not been able, nor seem willing, to go. A large market where Google+ could be used as a wedge into GAPE and where Google's security and sharing vision could be put to brilliant use.

Personally I said back in 2007 that not bundling Orkut into GAPE was a mistake; now it's time for Google to prove that decision was right, because they've now got the technology to do much more than simply a corporate social network.

Now they've got the ability to create a fully collaborative company and drive inter-company collaboration.



Technorati Tags: ,

Sunday, July 03, 2011

Geo-Privacy bubbles: controlling smart phone features based on location

The new iOS 5 integration with Twitter is great and the ability to geo-tag posts is fine and dandy. But there is a problem: when I get home and tweet I don't want to send the location, nor do I want to send it when I pick the kids up from school or do any number of other stalker/burglar-friendly things. These activities are almost always tied to specific physical locations that I don't want to be recorded.

So here is my next idea, the concept of physical privacy (or functional) bubbles: places where you draw a circle in Google Maps (or similar) and state that when you are within that bubble you do not want your location to be recorded. This could be extended to other functions on a smart phone, for instance by setting "no call" zones in places where you go fishing, or setting an inverse zone for a kid's smart phone so they can only access the internet when at home or school.
Imagine, for example, two "no location" bubbles, one "no call" bubble and an "auto location" bubble; the latter is for places you want to automatically check in to as soon as you get near, for instance the airport, work, etc.

The concept here is that people can manage their privacy, particularly their geo-social privacy, by marking out places on a map where these features become disabled on their smartphone. So rather than having to remember "oh, I'm at home, must turn location off in Twitter", you just mark these zones and the features are automatically disabled as you enter them. This gives parents the ability to better control what their children are accessing and gives individuals greater automatic control over the information they share online.
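
As a rough illustration, here is a minimal sketch in Python of the bubble check itself, assuming invented coordinates and a plain haversine distance test; the hard part, the OS-level hook that actually flips the settings, is exactly the piece only Apple or Google can provide.

```python
import math

# Hypothetical bubbles: (centre_lat, centre_lon, radius_metres, feature_to_disable).
# Coordinates and feature names are invented for illustration.
BUBBLES = [
    (50.3389, -4.7790,  500, "location"),   # home: no geo-tagging
    (50.3452, -4.7701,  300, "location"),   # the school run
    (50.3102, -4.8255, 1000, "calls"),      # the fishing spot: a "no call" zone
]

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance between two points, in metres."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def disabled_features(lat, lon):
    """Return the set of features the phone should switch off at this position."""
    return {feature for blat, blon, radius, feature in BUBBLES
            if distance_m(lat, lon, blat, blon) <= radius}

print(disabled_features(50.3390, -4.7792))   # arriving home -> {'location'}
```

In practice the phone would run this check as a callback on significant location changes rather than polling, but the zone logic itself really is this small.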

Now, the reason why I haven't just written an app to do this is that I quickly realised it needs some pretty low-level integration with the device to make it happen (iOS doesn't like apps changing fundamental settings!), so it's something that Google or Apple would have to do rather than it being a download from an app store (unless someone proves me wrong). But I also wanted to make sure that there was published prior art in case someone in future tries to patent what is, for me, a ruddy obvious next development.


Technorati Tags: ,

The problem of mobile places in a geo-social world

I'm sitting writing this on a train, a specific train, the 06:37(ish) leaving St Austell Station and heading to London Paddington. Later in the week I'm going to take a specific train to Paris from London and then probably another to get back to the UK. A few weeks ago I took a specific flight to get to the US.

When considering the current state of the geo-social world it's clear that places are assumed to be static; movement is not something expected of them. But I think this is a classic case where a new technology can, and should, make things easier in the future.

Today, for instance, if you want to find out whether a UK train is on time then your best bet is something like Live Departure Boards, which tell you about trains to a station, and from there you can find out about a specific train.

Now, however, let's imagine a future world where moving entities are integrated into geo-social solutions. Instead of "checking in" to the station, I would check in to the actual train. This would then allow me to be automatically tracked, if I want, as my journey progresses, until I check out of the train at a specific station.

What are the advantages of this? One of the first is that for plane journeys people could check in, and the person picking them up could look at their profile, via FB for instance, to get the flight details and from there the current status of the flight, its gate information, etc. Someone picking someone up from a station, or waiting for someone in a meeting, could see that a train is delayed and hence that the person will be running late. Indeed, by automating these pieces through geo-social you could set up notifications of delays automatically, in the way certain travel companies enable you to do today when, and only when, you book tickets with them.
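
Purely as a sketch of the idea, here is what a "mobile place" might look like in code; the identifier scheme, field names and canned position feed are all my own invention, and a real version would sit on live operator data.

```python
from dataclasses import dataclass
from typing import Callable, Optional
import time

@dataclass
class MobilePlace:
    """A place that moves: a specific train, flight or ferry."""
    place_id: str                        # invented identifier scheme
    position_feed: Callable[[], tuple]   # returns live (lat, lon, delay_minutes)

@dataclass
class CheckIn:
    person: str
    place: MobilePlace
    checked_in_at: float
    checked_out_at: Optional[float] = None   # set when the traveller leaves

    def current_status(self) -> str:
        """People the traveller shares with see where they are now, not where they boarded."""
        if self.checked_out_at is not None:
            return "journey complete"
        lat, lon, delay = self.place.position_feed()
        return f"en route at ({lat:.2f}, {lon:.2f}), running {delay} min late"

# A canned feed standing in for the operator's real-time data:
train = MobilePlace("FGW-0637-SAU-PAD", lambda: (51.05, -2.35, 12))
trip = CheckIn("a_traveller", train, time.time())
print(trip.current_status())   # -> en route at (51.05, -2.35), running 12 min late
```

The key shift is simply that the place's location is a function of time rather than a fixed pair of coordinates; everything else in the geo-social stack stays the same.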

Now there is of course the obvious privacy question of being able to track someone for an extended period of time, but for me, if you are signing up to geo-social then you should be considering your privacy and what to share or not share on a regular basis anyway.

Part of this post is about prior art, namely making sure there is something on the internet that could be cited if some numpty in the US tries to patent the idea of mobile geo-social places. The other part is a prediction that this will happen.

Geo-social for public transport I can certainly see... for private transport? Probably only in the Valley.



Technorati Tags: ,

Friday, July 01, 2011

Has de-normalisation had its day?

Ever since the relational database became king there has been a mantra in IT and information design: de-normalisation is critical to the effective use of information in both transactional and, particularly, analytical systems.  The reason for de-normalisation is the issue of read performance in relational models.  De-normalisation is always an increase in complexity over the business information model, and it's done for performance reasons alone.

But do we need it anymore?  For three reasons I think the answer is, if not already no, then rapidly becoming no.  The first is to do with the evolution of information itself and the addition of caching technologies: de-normalisation's performance creed is becoming less and less viable in a world where it's actually the middle tier that drives read performance, via caching and the OO or hierarchical structures those caches normally take.  This matters because the usage of information changes, and thus a previous optimisation becomes a limitation when a new set of requirements comes along.  Email addresses, for example, were often added as child records for performance reasons rather than using a proper "POLE" model; this was great... until email became a primary channel.  So as new information types are added, the focus on short-term performance optimisations causes issues down the road, directly because of de-normalisation.
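
As a toy illustration of that email example, here is a sketch with Python dicts standing in for tables; the column and channel names are invented.

```python
# De-normalised: the "performance" shape. Email got bolted on as extra
# columns, so a third channel (say Twitter) means another schema change.
customer_denorm = {
    "id": 42, "name": "A. Customer",
    "email_1": "a@example.com", "email_2": None,   # hope two is enough...
}

# Normalised, "POLE"-style party/contact-point model: channels are rows,
# so a new channel type is just data, not a schema migration.
party = {"id": 42, "name": "A. Customer"}
contact_points = [
    {"party_id": 42, "channel": "email",   "value": "a@example.com"},
    {"party_id": 42, "channel": "twitter", "value": "@acustomer"},  # added later, no migration
]
```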

The second reason is Big Data taking over in the analytical space.  Relational models are getting bigger, but so are approaches such as Hadoop which encourage you to split the work up to enable independent processing.  I'd argue that this suits a 'normalised', or as I like to think of it "understandable", approach for two reasons.  Firstly, the big challenge is often how to break the problem, the analytics, down into individual elements, and that is easier to do when you have a simple-to-understand model.  Secondly, groupings done for relational performance don't make sense if you are not using a relational approach to Big Data.

The final reason is to do with flexibility.  De-normalisation optimises information for a specific purpose, which was great when you knew exactly what transactions or analytical questions would be asked, but it is proving less and less viable in a world where we are seeing ever more complex and dynamic ways of interacting with that information.  Having a database schema optimised for a specific purpose makes no sense in a world where the questions being asked of analytics change constantly.  This is different from information evolution, which is about new information being added; this is about the changing consumption of the same information.  The two elements are certainly linked, but I think it's worth viewing them separately.  The first says that de-normalisation is a bad strategy in a world where new information sources come in all the time; the latter says it's a bad strategy if you want to use your current information in multiple ways.

In a world where Moore's Law, Big Data, Hadoop, columnar databases etc. are all in play, isn't it time to start from an assumption that you don't de-normalise, and instead model information from a business perspective and then realise that business model as closely as possible within IT?  Doing this will save you money as new sources become available and as new uses for information are discovered or required, and because in many cases a relational model is no longer appropriate.

Let's have information stored in the way that makes sense to the business, so it can evolve as the business needs, rather than constraining the business for the want of a few SSDs and CPUs.


Technorati Tags: ,

Tuesday, June 28, 2011

Social Relationships don't count until they count

There is a game called "Six Degrees of Kevin Bacon" which tries to link Kevin Bacon to any other actor in six steps or fewer.  This is a popular version of the "small world" thesis put forward by Stanley Milgram.  In these days of social media and "relationships" there is massive hype around farming these relationships, with an implicit assumption that someone with lots of relationships is more valuable than someone without.

The problem is that in reality this is a shortest-path problem over a weighted graph, yet everyone assumes that every link has the same value.  The reality is that links have different values based on their strengths, so understanding how individuals are actually related is significantly more complex than many social media "experts" would have you believe.

What do I mean by this?  Well, my "Obama Number" is 4: via my wife, I can trace to Obama in 4 steps, with each individual step being reasonably strong.  By reasonably strong I mean that each link has met the previous link several times and could probably put a name to the face.  Now, the variability of strengths on these links is huge, from my wife (hopefully a strong link) to people who move in similar social circles, and then into the political sphere where the connection to Obama is made.

I've a Myra Hindley number of 2 as I have a friend who met her more than once (before her conviction).

So for Republicans and Tea Party nut-jobs this means that it's 6 steps max from Obama to a child killer.  Does this mean there is a relationship worth knowing or caring about?  Nope.

So how do you weight relationships, and how do you weight each step within the graph?  Well, this is actually pretty simple.  Let's say A has a relationship to B via a social network; call that a score of 0.0001, while a person's link to themselves scores 1.0.  Then for each interaction between the two individuals you look at the strength from A to B, for example:

  1. How many times does A post to B?  If  > 10 then add 0.0001
  2. How many times does B post to A?  If > 10 then add 0.001 for each multiple of 10 (i.e. B reaching out to A suggests the link is more likely to be mutual)
  3. How many times does B indicate that they are at the same place as A? If > 10 then add 0.001 per 10
  4. How many times does a voucher provided to A get used by B? If  > 10 then add 0.1 per 10
  5. Are they directly related or married? If cousin or less then add 0.5
  6. Do they work closely together? If within 1 reporting hop add 0.2
  7. How many times have they met? If > 10 then add 0.05 per 10
What I'm saying is that it's actually the interactions that matter, backing up the social experience, rather than the mere existence of a social link (a sketch of this scoring follows below).

So while it's 4 steps from Obama to me, I'd say that overall the path is pretty weak (0.8 × 0.2 × 0.2 × 0.2 = 0.0064), a 0.64% link, which really means I'm not worth lobbying to gain influence over the US president.
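
Here is a minimal sketch in Python of the scoring above; the interaction-count field names are my own invention, but the thresholds and weights are the ones from the list.

```python
def link_strength(stats):
    """Score a single A->B link using the heuristics above. `stats` is a dict
    of interaction counts; all the field names are invented for illustration."""
    score = 0.0001                                               # the bare social-network connection
    if stats.get("a_posts_to_b", 0) > 10:
        score += 0.0001                                          # rule 1
    score += 0.001 * (stats.get("b_posts_to_a", 0) // 10)        # rule 2: mutuality
    score += 0.001 * (stats.get("b_same_place_as_a", 0) // 10)   # rule 3
    score += 0.1   * (stats.get("vouchers_used_by_b", 0) // 10)  # rule 4
    if stats.get("cousin_or_closer", False):
        score += 0.5                                             # rule 5
    if stats.get("reporting_hops", 99) <= 1:
        score += 0.2                                             # rule 6
    score += 0.05 * (stats.get("times_met", 0) // 10)            # rule 7
    return min(score, 1.0)                                       # self-link caps at 1.0

def path_strength(hop_scores):
    """Multiply per-hop strengths together, as in the Obama example."""
    total = 1.0
    for s in hop_scores:
        total *= s
    return total

print(path_strength([0.8, 0.2, 0.2, 0.2]))   # -> ~0.0064, the 0.64% link above
```

The multiplication is the real point: because every hop is less than 1.0, path strength decays fast, which is exactly why a short chain of weak links is worth so little.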

This is where the combination of Big Data and analytics could really deliver value: by understanding the true weightings on individual relationships and, from that, determining the genuine paths to the maximum possible market for the minimum effort.



Technorati Tags: ,

Thursday, June 09, 2011

iCloud 2.0 - CloudApps

2 years ago I wrote a post on why Apple might dominate the cloud and how an integrated offline/cloud backup solution would both offer more value from Apple's cloud and offer more of a lock-in. I do like it when a prediction comes pretty much spot on, even if they've only just started doing what I thought they would.

Now, iCloud 1.0 is just a pretty basic sync and, as predicted, it does provide a premium service that includes the ability to sync your whole library. It doesn't appear to implement the interface suggestion I made of integrating directly into the iPod player, instead requiring you to go via the iTunes application, but that really is a minor improvement (and not a difficult one to make either). So now they've added cloud backup for iOS it surely can't be long before it's extended to include OS X, especially as it's effectively included for photos already.

What next for Apple and the cloud?

Well, one thing they haven't done yet is automate some of this sync: when I say in iTunes "last 5 unplayed" it doesn't automatically do the update on your device, but this is a minor piece really.

The bigger thing that isn't in there yet, though, is the idea of using processing in the cloud rather than simply storage. Doing things like fancy video effects rendered in the cloud would be a good way to extend the experience in both the desktop and the mobile worlds to include a whole new generation of apps.
CloudApps
So you don't just have the backup/sync and all of those other elements; once you have your information being exchanged in this way you open the world to more consumer-focused applications, or cloud extensions to existing applications.

Microsoft already have a limited part of this with their cloud services, but they don't appear to have the co-ordination, brand or vision to make it really happen. Google might have an opportunity with their services and Android, but the control of the handset manufacturers and operators might stop them.

The other people who should be worried are Facebook. The point of CloudApps is going to be collaboration and multiple users, sharing and the like. So while Ping hasn't been a success, this application-centric cloud approach could give Apple just what it wants: control within the social media space.

Technorati Tags: ,

Wednesday, June 01, 2011

What REST needs to do to succeed in the enterprise

In the spirit of constructive criticism, here is what REST needs to do in order to succeed in the enterprise and B2B markets, the sorts of markets that make actual revenues and profits as opposed to hype markets with the stability of a bubble.

First off there is the mental change required, in four steps:
  1. Focus on how people and especially teams work
  2. Accept that HTTP isn't a functional API
  3. Accept that enterprise integration, B2B and Machine to Machine require a new approach
  4. Accept that the integration technology isn't the thing that delivers value
The point here is that REST won't move on and be successful beyond blogs and some very cool web sites and technologies unless it shifts away from technical purism and focuses instead on making the delivery of enterprise software solutions easier. This means helping disparate teams to work better together, and how do you do that.....
DEFINE A CONTRACT
Seriously, it's that easy. The reason why WSDL succeeded in the enterprise is that it gave a very simple way of doing just this. The interface contract needs to define a limited number of things (a minimal sketch follows the list):
  1. What is the function being invoked (for REST this could just be a description)
  2. What data can be passed and will be returned
  3. How to invoke it (that would be the URI and the method (POST, GET, PUT, DELETE))
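
To make those three elements concrete, here is a minimal sketch, with an invented endpoint and field names, of the kind of machine-readable contract, and the generated stub behaviour, that such a standard could enable:

```python
# A minimal, tool-readable contract for one operation. The endpoint, field
# names and types are invented for illustration.
ORDER_CONTRACT = {
    "description": "Submit a purchase order",                      # 1. the function
    "request":  {"customer_id": "string", "lines": "OrderLine[]"}, # 2. data in...
    "response": {"order_id": "string", "status": "string"},        #    ...and out
    "uri": "https://api.example.com/orders",                       # 3. how to invoke it
    "method": "POST",
}

def call(contract, payload):
    """What a generated client stub would do: validate the payload against the
    contract before ever touching the network -- plumbing a tool can own."""
    missing = set(contract["request"]) - set(payload)
    if missing:
        raise ValueError(f"payload missing fields: {missing}")
    # ...here the stub would issue contract["method"] to contract["uri"]...
    return {"status": "sketch only - no request sent"}

call(ORDER_CONTRACT, {"customer_id": "C123", "lines": []})
```
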
This contractual definition should be a standard which is agreed, and adhered to, by the core enterprise vendors and supported by tools. Now, before people scream "but that is against what REST is about", well then you have a simple choice:
  1. REST remains a niche technology
  2. REST becomes used in the enterprise
Now, in order to become more used, we also need to agree on things like how you do user authentication, field-level security and encryption, and rules for reliability on non-idempotent requests, so you know whether your POST request really worked....

So what else does REST need to do? Well, it needs to focus on tools, because plumbing has zero value. Dynamism does happen, but it's measured in weeks and months, not days, which means an agile release process can handle it perfectly well; all that dynamism and low-level coding doesn't really add anything to enterprise development.

This is a key point, something I raised in 2006 (SOA v REST more pointless than vi v emacs): the value is what happens AFTER the call is made. Focusing on making the calling "better" is just pointless; the aim is to make the calling as simple as possible.

So basically, to succeed REST needs to copy the most successful part of SOAP... the WSDL. Sorry folks, but an "improved" WSDL based around REST, and the associated tooling, is required.

Or alternatively the REST crowd could just bury its head in the sand and pretend that it's the fault of the enterprise that REST isn't being adopted.

And remember:

There is no value in integration only in what integration enables.





Technorati Tags: ,