
Thursday, December 19, 2013

Why your IT strategy failed or why the business hates IT

One of the most depressing things in IT is our inability to learn.  From 'oh look, our massive waterfall project ran over budget' to 'I really can't maintain the code we wrote without a design or documentation', we do the same things as an industry over and over again.  Most depressing, however, is the phrase 'The IT Strategy would work if the business would just change'.

To illustrate this I'd like to tell you a tale; the names have been left out to protect the guilty, but it sums up nicely the lunacy of many IT strategy efforts.

Many moons ago I was working for the business side of a large and very successful company.  This company was seen as a genuine market leader and I was working on some very complex mathematics around predictive analytics that the business folks wanted to use to make their supply chain run better.  There were two highlights from the IT department during the process.

The first came when discussing how such a solution would be implemented and integrated into the operational systems.  IT had a strategy you see, and by strategy I mean they had picked a couple of vendors; the solution I was working on had some very specific requirements and wasn't available from those vendors.  An Enterprise Architect from the IT department said, in a pretty well attended meeting,
'It doesn't matter what the business wants, if we say it isn't going in, it isn't going in.'
The project continued on however as the business saw value in it and wanted to understand what could be done.  One of the key pieces was that we'd need some changes in how operational processes worked; not big ones, but rather changes to the way people worked within the existing processes by giving them better information.  To this end we had a workshop with the business and certain key IT folks and worked out how we'd have to design the interfaces and processes to work within the current business environment and culture.  It was a good workshop and the business folks were very happy.

Then came IT.  IT, you see, had a big strategic project to replace all of the existing systems with 'best of breed' solutions.  I'd always assumed that, given the massive budget for that programme, the business was fully engaged... then this happened...

One of the IT folks chirped up and said : "We need to have a workshop so we can tell you what your new operational processes are going to be"

Note the 'tell'... to which the most senior business guy there (a board member IIRC) said

"How do you mean tell us?"

IT Guy: "The new systems have new processes and we need to tell you what they are so you can change."

Business Guy:"Have you done an impact analysis against our current processes?"

IT Guy: "No we've just defined the Best Practice To-Be processes, you need to do the impact and change management.  We need the meeting so we can tell you what the To-Be processes are"

Business Guy in a voice so dripping with sarcasm I thought we'd have a flood: "I look forward to our IT department telling the business what Best Practice is for our industry."

IT Guy, completely failing to read the sarcasm: "Great we'll get it organised"

This is one of the most visible examples from my career of why IT strategies fail.  I've said before that there is no such thing as IT strategy: it's the job of IT to help automate and improve the business strategy, and that means thinking tactically and taking strategy from the business model.
"Culture eats strategy for breakfast"
This is the reality, and an IT approach that seeks to drive over the culture and dictate from a position of technology purity will fail.  You can change the culture; it's hard and it's not a technology thing, but you always need to be aware of the culture in order to succeed.

IT Strategy, if such a thing exists, is there to make the business better not to make IT better.

Monday, April 29, 2013

IT is a fashion industry

You know how people laugh at the fashion industry for saying that 'blue is the new black', and for its ridiculous amount of fawning over models, designers and the like?  Is that really so different from IT?  We've got our fashion houses - Google, Facebook, Apple.  We've got the big bulk conglomerates - IBM, Oracle, SAP, Microsoft - and oh hell, the fawning that goes around...

I'd say the comparison goes even deeper however.  EAI, Web Services, REST... what are these?  They are all integration approaches.  EAI was going to save the enterprise and create a well managed estate that the business could use and could be changed easily and enable integration with external companies... Web Services were going to save the enterprise by standardising the interfaces enabling a well managed estate the business could use and could be easily... REST was going to save all of IT by enabling interfaces that could be dynamically changed and enable integration...

The point is that the long term challenge is the same, system to system integration, yet we have fad based approaches to solve that challenge.  It's like the fashion industry and dress lengths: the hem goes up and down, but it's still a dress.  The real difference is that the fashion industry does this better; sure they change the hem, but it still works as a dress.  In IT we concentrate so much on the hem length that we don't even bother with the fact that system to system integration appears to be as hard in 2013 as it was in 1999.  We even know why, the Silver Bullet tells us that technology won't solve the problem on its own.  But do we listen?  No, because we are followers of fashion.

This analogy to fashion extends to age discrimination in IT: we love the young and shiny, and discrimination against new entrants is wonderfully absent.  The flip side, however, is an over-emphasis on the new, so we prefer doing things in 'new' ways rather than in 'working' ways, and unlike the fashion industry we don't actually learn from sales what is successful.  If we've got a fad (hello REST) that works in some places but not in others we'll keep on pushing that fashion even as it fails to set the world on fire.  It's the emperor's new clothes effect, and in IT we do the equivalent of the beauty industry.  In the beauty industry you'll see adverts for 'age defying creams' fronted by 16 year old models.  In IT you'll see enterprise solutions pushed by using Google as an example.  We love the new, we love the young, and we really rather hate facing up to the fact that IT is quite an old industry now and 90%+ of the stuff out there is a long way from new and shiny.

The analysts and vendors are the Vogue and fashion houses of this world: the pushing of the new as the 'must have' technology, the dire warnings if you dare to actually make the old stuff work, the concentration on the outfit (the technology product) and little about how it actually works in the real world (operations).  You know when you see outfits from London, New York, Milan or Paris fashion weeks being shown on the news under the 'what madness do designers think we will wear next' section?  Is that so different from an analyst or vendor pushing a new technology without explaining at all how it will fit into the operations of your current business?  We see people declaring the end of SQL... and then a few years later those same people championing SQL as the approach, now that they've realised people can operate their technology if they do that.

The final place I'll talk about IT and fashion is the 'rebadging' that we see.  In the fashion industry you see old ideas rehashed and pushed down the catwalk as being 'retro'.  There is at least some honesty in the fashion industry as they talk about being inspired by an era, when we all know what they mean is 'I didn't have an original idea, so I copied one that was old enough that people will think the copy is original'.

In IT we don't even have the honesty of the fashion industry; what we do is see a new trend and claim that old technologies are actually part of that new trend.  We'll take an old EAI tool and slap on an SOA logo, we'll take a hub and spoke broker and call it an ESB.  This re-badging of technology goes on and on; sometimes you'll be in a meeting and suddenly realise 'hang on, I used that 12 years ago... how the hell is it now new?'.  This would be fine if the focus was on building a robust product, but too often it's just about how to get on an RFP and shift a few more units, with actual investment in new approaches being few and far between.

IT and the fashion industry are miles apart in many ways, but the faddish nature of our industries makes us very similar.  The problem is that fashion is allowed to be faddish; nobody expects a business to rely on something made 20 years ago, but in IT this faddish behaviour is a big problem.  We are meant to be constructing systems on which a business can rely, not just today but 5 years from now, and still be leveraging in 10, 20 or even 30 years' time if they are well constructed and do the job well.  There are mainframe systems out there doing exactly that, and why haven't they been replaced?  Because the new stuff didn't do the job.

IT needs to stop being like the fashion industry and be more like the aircraft manufacturing industry.  Sure they have 'fads' like an all-composite aircraft, but those are based on sound data as well as strategic vision, not just on being what the cool kids do.  We can do the cool stuff, we can do the new stuff, but we need to recognise that there is lots of stuff out there that needs to change, and we can't just use Google or Facebook as references or examples; that would be like Boeing selling a plane technology based on what people did in movies, it's just too far removed from the enterprise reality.  We need to stop blindly following IT fashion and start critically appraising it, shouting 'emperor's new clothes' when it's bullshit.  Most of all we need to look at enterprise technologies based on how they improve the 'now', not based on 'if only we could replace everything'.  Evolution is the revolution in IT.

It's time for IT to grow up and take responsibility for the mess we've created.

Friday, March 22, 2013

Why NoSQL became MORE SQL and why Hadoop will become the Big Data Virtual Machine

A few years ago I wrote an article about "When Big Data is a Big Con" which talked about some of the hype issues around Big Data.  One of the key points I raised was how many folks were just slapping Big Data badges on the same old same old; another was that Map Reduce really doesn't work the way traditional IT estates behave, which was a significant barrier to entry for Hadoop as a new technology.  Mark Little took this idea and ran with it on InfoQ, asking Big Data: Evolution or Revolution?  Well at the Hadoop Summit in Amsterdam this week the message was clear...
SQL is back, SQL is key, SQL is in fact the King of Hadoop
Part of me is disappointed in this.  I've never really liked SQL and quite liked the LISPiness of Map Reduce but the reason behind this is simple.
When it comes to technology adoption its people that are key, and large scale adoption means small scale change
Think about Java.  A C language (70s concept) derivative running on a virtual machine (60s) using some OO principles (60s) with a kickass set of libraries (90s).  It exploded because it wasn't a big leap, and I think we can now see the same sort of thing with Hadoop now that it's stopped chasing purity and gone for the mainstream.  Sure there will be some NoSQL pieces out there and Map Reduce has its uses, but it's this change towards using SQL that will really cause Hadoop usage to explode.

What is good however is that the Hadoop philosophy remains intact; this isn't the Java SE 6 debacle where aiming at the 'Joe Six-pack' developer resulted in a bag of mess.  This instead is about retaining that philosophy of cheap infrastructure and massive scale processing but adding a more enterprise friendly view (not developer friendly, enterprise friendly), and it's that focus which matters.
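To make the 'small change' point concrete, here's a minimal sketch of what SQL-on-Hadoop looks like from an enterprise Java developer's chair: plain JDBC against a HiveServer2 endpoint.  The host, table and credentials below are hypothetical, and I'm assuming the standard Hive JDBC driver; treat it as an illustration, not a recipe.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveSqlSketch {
    public static void main(String[] args) throws Exception {
        // Familiar JDBC boilerplate - the only 'new' bits are the Hive driver and URL
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        try (Connection conn = DriverManager.getConnection(
                "jdbc:hive2://hadoop-node:10000/default", "analyst", "");
             Statement stmt = conn.createStatement();
             // Plain SQL over data sitting in HDFS - no Map Reduce code in sight
             ResultSet rs = stmt.executeQuery(
                 "SELECT product_id, SUM(quantity) FROM sales GROUP BY product_id")) {
            while (rs.next()) {
                System.out.println(rs.getString(1) + " -> " + rs.getLong(2));
            }
        }
    }
}

The snippet is deliberately dull, and that is exactly the point: nobody has to re-learn how they think about data to get value out of the cluster.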

Hadoop has the opportunity to become the 'JVM of Big Data' but with a philosophy that the language you use on that Big Data Virtual Machine is down to your requirements and most critically down to what people in your enterprise want to use.

It's great to see a good idea grow by taking a practical approach rather than sticking to flawed dogma.  Brilliant work from the Hadoop community, I salute you!

Monday, February 04, 2013

People are the problem, can we stop pretending it's technology

A friend of mine the other day said an amazing thing
I like coding in C++
I mean, seriously?  The land of friends, of people writing C code in C++ and of debugging nightmares; had things really got that much better?  I mean, I know there are some good threading libraries now, but seriously, C++ is nice?
All of the idiots code in Java, they don't know C++
And there we have the point.  It's not about which technology is best, it's about the people using it.  I'll guarantee that if the idiots were in C++ he'd be having more problems, but because they are scared of it he can get more done in C++ safely, as for them it's terra incognita.  This for me is why debates around SOAP v REST are pointless and make me quite angry.  People pontificate on 'REST scales better' or something else that doesn't matter 99.99% of the time (as in yes it might, but if something else scales acceptably then it's not an issue); it's like the 'Assembler is more efficient' bullshit that those of us who dared to code in C will remember.

The worst thing about the technology marketing community, by which I mean analysts and vendors, is the ability to hype something that doesn't matter because it's a new technology.  It isn't that this technology has to make things better, hell it can actually make things worse; all it needs is some technical reason why it's better than something else.  'It's faster' in a place where that isn't important, 'It's quicker to develop your first solution' but a bitch to maintain.  We've heard them all down the years.

So as part of my desire to see Thinking is Dead proven wrong I'd like to start a simple campaign.  Every time an analyst, vendor, consultant or developer tells you that something is 'better', ask the following three simple questions:

  1. How does it reduce the support costs?
  2. How does it reduce the salary levels of my developers?
  3. How does it have a measurable impact, on its own, on our top or bottom line?
This last point is critical.  I've seen some crackers down the years, around integration technologies in particular: 'We used technology X and shipped $1bn in products, therefore X delivered $1bn in revenue'.  No it didn't; the only question is whether it cost less to develop and support using technology X.  The best that an integration technology can hope for is a reduction in integration TCO; it will never on its own deliver the value, because the value is in the information or transaction it delivers.  If it does that more cheaply then it's a cost saving, but it's never a revenue generator.


There are places where technology can have a top-line impact but those are very minimal (predictive analytics and HPC are about the only two I can name); everywhere else it's an enabler for people to deliver value.  So the goal of technology is to make the people work better, the people work more efficiently.  Having a technology that is 5% better than another technology at technology stuff but 10% worse from a people perspective is like comparing getting the horse with being driven to near extinction for Native Americans.  Sure it's a benefit, but it really doesn't outweigh the costs.

Tuesday, November 27, 2012

When to shout, the art of constructive destruction

I've always believed that sometimes teaching is about the stick as well as the carrot but there are very clear rules on when to use the stick and how to use it.

It's not good enough to start with shouting; that marks you down as an idiot and a prat.  If someone has done something that you don't want but you have never explained that to them, then it's your fault as the person in authority.

Rule 1: You have to have explained first before you shout

The stick therefore is something that should only be used when someone has deliberately gone against advice or guidance.  If people have followed what you said and it didn't work... its your fault.

Rule 2: It should be obvious to 'the average man on the Clapham Omnibus'

By which I mean that the fault should be obvious to a person of the given level and experience; if it's a junior person who has made the mistake then shouting is not appropriate.  If it's a self-proclaimed expert who has screwed up then a kicking is in order.

Rule 3: Pick on the leader not the team

If there is a team of people who have screwed up, don't share the blame equally; the leader is accountable to you for the team, so they have to take the responsibility for the failure.  Their team will know it's a joint thing and that the leader has taken the heat for them, and this should improve the situation if there are any team dynamic issues.  If you flame the whole team it basically says that you don't accept that they have a leader, so you should be managing them all directly yourself.

Rule 4: Be specific, be constructive in your destruction

'You are a moron' is not a constructive statement.  'If you don't explain to me how what you are proposing addresses the two key use cases then I'm going to have to kill you' is constructive destruction.  The point here is not to hide the anger but to make clear that your challenge is very specific and targeted and gives them the opportunity to respond.

Rule 5: Make it perfectly clear you are pissed off

You should only be doing this when it's got really bad, so you have to underline that it really is bad.  This doesn't mean you have to swear or throw chairs about, but it does mean everyone should leave the room knowing that they are in your bad books and that if they don't buck their ideas up then chair throwing might be in their futures.

Rule 6: Give specific ways they can get back into favour
Before the team breaks up be specific, give them a short time frame on how they can recover the situation.  These need to be actions you can, and will, track over the next few hours or days, showing how the team can prove to you they are getting back on track.

Rule 7: Be honest when you get it wrong, congratulate
If it turns out that you were wrong and in fact the team had been going the right way but you didn't have all the information then apologise and congratulate them.  Shake the hand of the person who stands up and says 'you got it wrong, here is the proof' and deal with it as you should, with humility and possibly donuts.

Rule 8: If things don't change make a sacrifice
If the team keeps working badly and avoiding advice it's time to make a public sacrifice.  This could be kicking them off the project or even out of their job, or it could be as simple as public humiliation, like putting a small statue on their desk saying 'Crap Architect' (check with HR first on what you are allowed to do).  Use your company's formal processes to underline it.  The key is that you want everyone to think 'fuck, I don't want that to happen to me'.

Rule 9: Be inconsistently angry
The penultimate rule is to not use anger and shouting as a 'what happens at level 10' thing.  Sometimes use it as an opening gambit, sometimes use it as a final twist, sometimes use it throughout a process.  The point here is to use anger and shouting sparingly with the team.

Rule 10: shouting isn't about volume
'Speak softly and carry a big stick'; shouting for constructive destruction in a project is not about raising your voice, it's about the impact that an attitude carries.  Talking more slowly, deliberately and quietly can be significantly more threatening than shouting loudly in many circumstances.  The key here is the effect: occasionally you want people to change their ideas and give them a jolt, so don't think volume, think impact.

Let's be clear, I don't think that shouting and constructive destruction are things to use all the time, but sometimes the stick is required, and these rules help me ensure that I don't just shout, 'full of sound and fury, signifying nothing' as Shakespeare put it, but direct that anger and use it to save the situation.

Constructive destruction is about tearing down bad behaviours and providing a way to rebuild them in a positive way.  There are lots of touchy feely ways to do that, but sometimes fear is the right way.

Thursday, August 02, 2012

How to weed out bluffers....

Following up from the concept of thinking being dead I'd like to talk now about one of the biggest challenges in IT.
How do you spot those people who are bluffing?
And here I mean people who don't really think they are bluffing because they are rubbish, or those that are bluffing because they think you are rubbish.  It's related to the challenge of Terry Pratchett Architects (PArchitects?): how do you tell someone who disdains technology through knowledge from one who disdains it through ignorance?  The point is this:
95% of people interviewing senior IT people don't understand enough to weed out bluffers
This means that I regularly come across people who hold a senior position based on a level of buzzword knowledge and general ignorance of reality that causes me to step back in amazement at their ability to hold down a job.  Normally of course these people are in jobs for 12-24 months and often are contractors, where such variability is almost seen as a benefit rather than an indication of being found out.  So here are the top tips on weeding out the bluffers, assuming we are at the Architect level and you can already weed out bad developers...:
  1. Don't do it in a phase 1 Tech, phase 2 HR interview process - add in a middle phase which is set up by phase 1. 
  2. In Phase 1 ask the following 
    1. When was the last time you coded into production, what language, what platform? 
    2. When was the last time you created a conceptual and logical data model? What platform for? 
    3. What is the difference between Regex, Regular Expressions and Perl in string handling? 
The first two are the real set-up questions... the last one is something that stunned me once when someone I was interviewing kept saying "I did Regex" and then later on said 'On that project we used Regular Expressions', I asked him the difference and he said that Regex was a language, Regular Expressions was... a different language. The point here is that you are after an understanding of the languages and platforms for which this person claims some level of expertise.
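For the avoidance of doubt, here's a tiny Java illustration of why that last question is a trap: 'regex' is just shorthand for 'regular expressions', and Perl's only real claim is having them baked into the language syntax.  The pattern and input below are made up purely for illustration.

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RegexIsRegularExpressions {
    public static void main(String[] args) {
        // java.util.regex - 'regex' and 'regular expressions' are one and the same thing
        Pattern digits = Pattern.compile("\\d+");
        Matcher matcher = digits.matcher("Order 42 shipped with 7 items");
        while (matcher.find()) {
            System.out.println(matcher.group()); // prints 42 then 7
        }
    }
}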

Let's be clear here, I'm of the opinion that an architect, whether Solution, Enterprise or Business, who claims to sit within the IT domain should still know how the platforms work and especially should remember how their last platform worked.  In the second interview you should include real deep developers, and get them to ask real deep developer questions.  You aren't looking for 'this person is a bitch ass programmer in language X' but 'not brilliant, but seems to know their stuff generally'.  What you are looking to avoid is 'I don't think this guy has ever used X in his life' or similar statements.

Similar approaches, but more abstract, should be used for PMs and BAs, where you should get PMs and BAs who have done similar technical projects to ask the questions.  I'm stunned at how many times I meet someone who is a 'Functional' SAP BA and then, when you introduce them to someone who really is a functional expert in that area, they fall to pieces... sometimes not even knowing the acronyms of the SAP modules they claimed to have used ('We did supplier management, not sure what SAP called it').  The point here is that you need a first stage to find out where the bluffer claims to have depth and then a second stage to rip that hole open if it exists.

Bluffers flourish in a world where thinking doesn't exist.

Wednesday, July 04, 2012

Thinking is dead

Anne wrote a reasonable blog a while ago on why SOA was and wasn't dead, but I'd like to go a bit further today and say that generally the concept of thinking appears to be dead.  The value of 'thought' and thinking in IT has diminished, in a way that mirrors society at large, to the stage where design, planning, architecture and anything else other than just banging away at a keyboard appear to have been relegated behind opinions and statements presented as fact.

REST was a good example of this.  It was going to be the revolution, it was going to be the way that everything worked.  Two years ago I called bullshit on that revolution and I still say it's bullshit, and the reason is simple.
IT values technologies over thought
So people genuinely, and stupidly, think that how you shift packets from A to B will have a massive impact in how successful IT projects are at delivering their objectives.  What impact has this had on the massive spend of ERP packages out there?  Nothing at all.  What has impacted that?  Well folks like SFDC because they've shifted the business proposition and in so doing have moved the way business people think about IT systems.

The same goes on with Hadoop and Big Data.  The massive amount of information growth is complemented by an equally large amount of bullshit and a huge absence of critical thinking.  What is the major barrier to things like Hadoop?  "Its lack of real time ability" I hear people cry.  Really?  You don't think it's that we have millions of people out there who think in a SQL, relational way and who are really going to struggle thinking in a non-relational, non-SQL type of way?  You don't think that the major issue is actually in the acquisition and filtering of that information, and the construction of the very complex analytics required to ensure that people don't go pattern matching in the chaos and find whatever they are looking for?

We are currently seeing a rush to technology in IT departments that is focused hugely on bells and whistles while the business continues to look at outcomes.  With the business looking at SaaS, BYOD and self-service applications, the importance of thought and discussion in IT is acute.  What I often see however is statements of 'fact' like 'you don't need integration, it's SaaS', or even worse a statement on a specific piece of API based technology as being important.

Planning, architecture and design are being seen in parts of IT as bad things, as a mentality develops that somehow basic fundamentals such as TDD, contract design and doing things that have actually been proven to work are in some way wrong.  Adoption of unproven technologies is rife, as is the surprise when those technologies fail to deliver on the massively over-hyped expectation and indeed fail to even come close to delivering at the level of dull old technologies that do the job but don't currently have the twitterati in thrall.

'Expert' in this arena has come to mean 'person who shouts loudly', in a similar manner to US politics.  Facts, reason and worst of all experience are considered to be practically a disadvantage when being an expert in this environment.  I was recently informed that my opinion on a technology was 'tainted' as I'd used multiple other technologies that competed with it and was therefore 'biased against it'.  I'd also used the technology in question and found that it was quite frankly rubbish.  Sure, the base coding stuff was okay, but when I looked at the tooling, ecosystem and training available from those competitors I just couldn't recommend it as something that would actually work for the client.  Experience and knowledge are not bias; thinking and being critical of new approaches is not a bad thing; thinking is not dirty.

When I look around IT I see that design is a disappearing skill and that the ability to critically assess technologies is being replaced by a shouty fanaticism that will ensure one thing and one thing only:
IT will no longer be left to IT folks
The focus of shiny technology over business outcomes and the focus of short term coding over long term design will ensure that IT departments get broken up and business folks treat IT as a commodity in an ever growing way.

Thinking, design, planning, architecture and being skeptical on new technologies is the only hope for IT to remain relevant.


Monday, October 24, 2011

The 'Natural Platform' - Why people matter more than performance in picking hardware

One of the things that I get asked is 'what hardware should we run this on?'. I've said for years that I don't care about the tin and the tin is irrelevant from a differentiation perspective. Now before people leap up and say 'but X is 2x faster than Y' let me make a couple of points

  1. Software tuning and performance will have miles more than a 2x impact
  2. The software licenses will probably cost more than the hardware
  3. The development will definitely cost more than the hardware
  4. The software support and maintenance will definitely cost more
So what does that mean?  Well it means that when picking hardware you should consider the people cost and the support costs much more than you consider the performance.  What you should be asking is
 'what is the natural platform for this software?'

The natural platform is the one that the software is tested on.  That doesn't mean it's the hardware platform the software vendor wants you to buy; it's the one that the developers are actually using to test that the software works.  Why do you do this?  Well, when there is a bug, and let's face it there are always bugs, then instead of just having the support folks available to fix it you have all of the developers as well, because they don't need to switch from their current environments.

Like I say this doesn't mean 'pick a hardware platform from the software vendor' it means pick the natural platform.  So if you think/know the developers are on Linux and deploying to Linux servers for test then you know there are more people able to help on Linux than anything else.  If they are developing on Windows and deploying to Linux then either of those platforms is natural.

As an example of what happens when you don't, let me take you back to 2000.  I was working on a project and we were using MQSeries, JMS and of course Java.  We developed it on Windows and deployed it to Linux for test.  For production however we'd been convinced to go for AIX to give us some major grunt.  We deployed the code into UAT and... it broke.  Our assumption was that this was our fault because we didn't know AIX that well; clearly running IBM's AIX with IBM's Java implementation, IBM's JMS implementation and IBM's MQSeries meant that it had all been tested, this was their flagship platform, surely this was what it was meant to run on?

36 hours later we were talking directly to the product team, who identified the problem as a memory issue that only occurred on AIX.  Which made clear that our configuration (pure IBM) had not even been tested.

Working on another project, where the database environment was different from the package provider's and the hardware was a mainframe, we had massive issues just getting hold of anyone who knew about our set-up in order to fix some problems.

These are normal problems, and the key to them all is that it's not about whether box X is faster than box Y; it's about getting the best support and fixing problems quicker.  I'm not arguing that you shouldn't get an environment that scales; what I'm arguing is that when you look at the costs of tin, performance is a distant second to the people costs of fixing problems when they go wrong.

The problem is that normally the people buying tin are just buying tin.  In these days of virtualisation it's about picking the right OS inside your virtualised server, but it's still important to think natural on platforms.

Pick the natural platform not the fastest shiny box.



Friday, July 15, 2011

Preaching to the Choir: the bane of IT

Sometimes I get asked why I bother debating with people who clearly have a different opinion from me and are unlikely to change their mind.  The reason is that sometimes, rarely I'll admit, I will change my mind and occasionally I will change theirs.

The other reason is what is the point of debating with someone who agrees with you? Unfortunately in a lot of IT we have two types of discussion

  1. Religious discussions based around IT fundamentalism
  2. Preaching to the Choir to re-enforce the message
These two are very closely related.  Effectively a group of people talk to each other about how great something is, how fantastically brilliant that approach is, and how the whole world should bow down before their joint vision of the future.  These folks then head out to 'spread the word' and are just plain shocked when people fail to accept what they say as the gospel truth; they normally either resort to insults, make up facts or just plain ignore any comments or questions.  Quite typically this latter bit includes farcical comparisons like

Q: Where are your references?
A: Same as yours, personal experience
Q: Err but mine are published on the web, here are a bunch of links...
A:

This is a conversation I've had many times.  The reason for this post however is that on Google+ someone replied to a post by Jean-Jacques Dubray (which referred to this post) and, after a short discussion where the individual started with a personal insult and moved on to ignoring questions and instead posting their own PoV, finished with the brilliant line
Wrong audience and tone

Which of course just means that the person feels that they want to go and speak with people who agree with them unquestioningly.  This mentality is a massive problem in IT and, I feel, more prevalent in IT than almost any other discipline.  Whether it's the 'leadership' of the Java group ignoring huge amounts of external input that disagrees with them, or the various little pieces of fundamentalism around, it's a significant issue that folks tend to switch from one fanaticism to another without often pausing between them.  The number of times I've bumped into someone a couple of years down the road who is now a fanatic for another approach is just stunning.

I remember once saying at JavaOne that UIs are best created with tools and was told in no uncertain terms that you couldn't build a single complex UI with tools, it had to be hand coded.  I pointed out that I'd built an Air Traffic Control system where everyone was using visual tools for the UI side building, this was a system that was already in production, the reply was 'good luck with that, it won't work'.  Much back-slapping from his friends for 'putting me in my place' while I wandered away sadly wondering if people really could be in IT and want to learn so little from previous experience and instead just create a small clique that backs them up.

I've come to realise that this is sadly exactly what lots of folks in IT prefer to do, they prefer to create an 'us v them' mentality and form small groups of 'evangelists' who preach to each other on the brilliance of their ideas and the stupidity of others for not understanding them.

It's this fractured nature that leads to groups denying any benefits from 'competing' approaches, or even from historical ways that have been proven to work.  Often things are argued from first principles and sometimes (and this is the one I find most scary) tied to a single published work which becomes 'the good book' of that clique.  The choir preaches to themselves and sees success when none exists, or defines success purely based on their own internal definitions.  The debate that is engaged in works at a very poor level as no challenge is allowed to the basic assumption that they have found the 'holy grail' of IT which will work for every sort of approach.

Preaching to the Choir is at the heart of this issue.  Talking and debating only with those who agree with you is a bad way to test ideas.  The Oxford Union is what debate is about, two sides trying to convince the other and the audience deciding who won.  Argumental has built a programme around people being made to debate on a topic they might not even agree with (although in the linked video Rufus Hound doesn't make a very good job of that).

If all you hear is 'that is great, brilliant, anyone who disagrees is an idiot' then I'm afraid that you are an idiot, as you are in danger of wearing the Emperor's New Clothes and are clearly taking the easy way out.  If you can't convince other people of the power of your argument, this is most likely to be because there are flaws in your argument that you don't understand or know about, not because the person you are debating with is an idiot (sometimes this will be true of course).

The basic rules should be

  1. Facts count - if you can reduce things to quantitative assessments then you are doing better
  2. Ladder of Inference - you need to build from the first point of the debate, not start at the end
  3. Answer questions - if someone asks a question, answer it
  4. Think about where the other person is coming from
  5. Read opposing views, learn from them
  6. Accept when you don't agree - sometimes people will differ and that is okay, accept it

I find it quite depressing when people say 'I'm not talking to X as he can't be taught about Y' when I know that the reality is that X has a very good point of view that the person saying this really should listen to as they'd learn something even if it challenges their current IT religion.

So please can we stop preaching to the choir and start having actual debates, it doesn't matter if the tone is a bit disrespectful or sarcastic as long as you are challenging and responding to challenge.  It should be a fierce debate on occasions and that is fine, but what it shouldn't be is just preaching to the choir and denouncing all those who disagree as heretics.



Saturday, April 24, 2010

Language in requirements, design and code reviews

Recently I've been doing a bunch of reviews of documents and other artefacts with multiple different groups of people and I've noticed a few things about what works when reviewing and what doesn't.  I'm not talking here about the document format or the availability of tea, but about how you review documents.

First off some ground rules, what I mean by this is that if you are in a key review position then you should be setting the expectations on what you consider to be good. So before people even start creating the stuff you are going to review spend 5 minutes with them just giving them some context on what you are looking for. This might be as simple as outlining where their piece fits into the broader picture or just making sure they have the right clarity on how they should be structuring what they have been set to do. This initial piece will save you a huge amount of pain later on.

So now when you get to the actual review you should at least be talking more about the content than wasting time telling someone that they've not created it properly and have to do some major rework.

So on into the review.  I'm assuming here that you don't use design/requirement/code reviews to bollock people, as that would be completely counterproductive.  If there are big issues pull them aside 1-on-1 later and have the discussion.  So, that said, how do you get people to learn from their mistakes?

The key here is language.  There are some great phrases and some bad phrases.  Let's say that someone has written something down that just isn't clear; you can say

a) "This just isn't clear what you are trying to say"
or
b) "I'm confused around this bit, could you explain what you mean"

Now the former says "Crap work"; the latter says "It's probably me but let's just check".  9/10 times they'll explain in detail what they mean and you can say the magic words

"Great, now I understand it, you might like to write out what you've just said so no-one else gets confused"

Now lets say they've got something plain wrong you can say either

a) "That's just wrong"
or
b) "Umm what would the implications be if we do this?"

Then with b) you go into a discussion where you challenge them with points like "I see, but wouldn't X apply here?".  This way you get to find out if it's a mistake or they are actually a bit thick.  If you do the former then you'll never get to know.

Now let's say there is an area where you realise that something you've done isn't clear and the person you are reviewing would benefit if it was clarified (for instance there is a diagram missing which would help explain their area).  This is where you get to make the reviewee feel really good AND get work off your plate.  The point here is to say something like

"I've just realised that I really should have created a diagram about Y by now as that would help you explain this area. I tell you what could you have a go at creating it and then we'll make sure that everyone sees it once we've got it right"

Here if you are a senior reviewer you are not only helping the person, and getting work away from yourself, you are really making the reviewee want to demonstrate that they can do a good job. That is the main aim with reviews. Catch the errors, help people improve and keep up morale. Kicking people in reviews for errors just doesn't make sense.

Pull the problems onto yourself, have the reviewee explain them and hopefully (if they aren't a muppet) they'll come to the right answer themselves, they'll think you are a great coach and they'll want to work harder for you.

The same does not apply to managers when reviewing project plans that are rubbish, they must be beaten about the head with a stick.


Tuesday, April 06, 2010

Non-Principles

Okay, so I talked about Anti-Principles, so now I thought I'd talk about the final thing I like to list out in the principles sections of the projects I do: the non-principles.  This might sound like an odd concept but it's one that has really paid dividends for me over the years.  While principles say what you should do and anti-principles say what you shouldn't, the non-principles have a really powerful role.

A non-principle is something that you don't give a stuff about.  You are explicitly declaring that it's not of importance or consideration when you are making a decision.

While you can evaluate something against a principle to see if it is good, or against an anti-principle to see if it is bad, the objective of the non-principles is to make clear the things that shouldn't be evaluated against at all.  In Freakonomics Steven Levitt talks about "Received Wisdom" and how it's often wrong.  I like to list out pieces in the non-principles that are those pieces of received wisdom and detail why they aren't in fact relevant.

Scenario 1 - Performance isn't the issue

A while ago I worked on a programme that was looking at making changes to an existing system.  A mantra I kept hearing was that performance was a big problem.  People were saying that the system was slow and that any new approach would have to be much quicker.  So I went hunting for the raw data.  The reality was that the current process, which consisted of about 8 stages and one call to a 3rd party system, was actually pretty quick.  The automated testing could run through the process in about 6 seconds, with 5 of those being taken up by the 3rd party system (which couldn't be changed); in other words the system itself was firing back pages in around 125 milliseconds, which is lightning quick.

So in this case a core non-principle was performance optimisation. The non-principle was

"Any new approach will not consider performance optimisation as a key criteria"

This isn't an anti-principle as clearly building performant systems is a good thing but for our programme it was a non-principle as our SLA allowed us to respond in 3 seconds per page (excluding that pesky 3rd party call) so things that improved other core metrics (maintainability, cost of delivery, speed of delivery, etc) and sacrificed a little performance were okay.

Scenario 2 - Data Quality isn't important

The next programme was one that was looking to create a new master for product and sales information, this information was widely seen as being of a very poor quality in the organisation and there was a long term ambition to make it better. The first step however was to create the "master index" that cross referenced all the information in the existing systems so unified reports could be created in the global data warehouse.

Again everyone muttered on about data quality and in this case they were spot on. The final result of the programme was to indicate some serious gaps in the financial reporting. However at this first phase I ruled out data quality as being a focus. The reason for this was that it was impossible to start accurately attacking the data quality problems until we had a unified view of the scale of the problem. This required the master index to be created. The master index was the thing that indicated that a given product from one system was the same as one sold from another and that the customer in one area was the same as the customer in another. Once we had that master index we could then start cleaning up the information from a global perspective rather than messing around at a local level and potentially creating bigger problems.

So the non-principle for phase 1 was

"Data Quality improvement will not be considered as an objective in Phase 1, pre-existing data issues will be left as is and replicated into the phase 1 system"

This non-principle actually turned out to be a major boon as not only did it reduce the work required in Phase 1 it meant that the reports that could be done at the end of phase 1 really indicated the global scale of the problem and were already able to highlight discrepancies. Had we done some clean up during the process it wouldn't have been possible to prove that it wasn't a partial clean-up that was causing the issues.

Scenario 3 - Business Change is not an issue

The final example I'll use here is a package delivery programme.  The principles talked about delivering a vanilla package solution while the anti-principles talked of the pain of customisation.  The non-principle, however, outlined a real underpinning philosophy of the programme.  We knew that business change was required; hell, we'd set out on the journey saying that we were going to do it.  Therefore we had a key non-principle

"Existing processes will not be considered when looking at future implementation"

Now this might sound harsh and arrogant but this is exactly what made the delivery a success. The company had recognised that they were buying a package because it was a non-differentiating area and that doing the leading practice from the package was where they wanted to get to. This made the current state of the processes irrelevant for the programme and made business change a key deliverable. This didn't however mean that business change was something we should consider when looking at process design. We knew that there had to be change, the board had signed off on that change and we were damned well going to deliver that change.

This non-principle helped to get the package solution out in a very small timeframe and made sure that upgrades and future extensions would be simple. It also made sure that everyone was focusing on delivering the change and not bleating about how things were done today.

Summary

So the point here is that the non-principles are very context specific and are really about documenting the perceived wisdom that is wrong from the programme perspective. The non-principles are the things that will save you time by cutting short debates and removing pointless meetings (for instance in Scenario 1 a whole stream of work was shut down because performance was downgraded in importance). Non-principles clearly state what you will ignore, they don't say what is good or bad because you shouldn't measure against them (e.g. in Scenario 3 it turned out that one of the package processes was identical to an existing process, this was a happy coincidence and not a reason to deliberately modify the package).

So when you are looking at your programme remember to document all three types of principles

  1. The Principles - what good looks like and is measured against
  2. The Anti-Principles - what bad looks like and is measured against
  3. The non-principles - what you really couldn't give a stuff about

All three have power and value and missing one of them out will cause you pain.




Friday, February 19, 2010

Delegate

Possibly my shortest post ever but seriously

Do remember if you have a team that your job is actually to delegate things to them otherwise there is no point having a team.


Thursday, February 04, 2010

Is IT evolution a bad thing?

One of the tenets of IT is that enabling evolution, i.e. the small incremental change of existing systems, is a good thing and that approaches which enable this are therefore good.  You see it all the time when people talk about Agile and code quality, and clearly there are positive benefits to these elements.

SOA is often talked about as helping this evolutionary approach as services are easier to change. But is the reality that actually IT is hindered by this myth of evolution? Should we reject evolution and instead take up arms with the Intelligent design mob?

I say yes, and what made me think that was reading from Richard Dawkins in The Greatest Show on Earth: The Evidence for Evolution where he points out that quite simply evolution is rubbish at creating decently engineered solutions
When we look at animals from the outside, we are overwhelmingly impressed by the elegant illusion of design. A browsing giraffe, a soaring albatross, a diving swift, a swooping falcon, a leafy sea dragon invisible among the seaweed [....] - the illusion of design makes so much intuitive sense that it becomes a positive critical effort to put critical thinking into gear and overcome the seductions of naive intuition. That's when we look at animals from the outside. When we look inside the impression is opposite. Admittedly, an impression of elegant design is conveyed by simplified diagrams in textbooks, neatly laid out and colour-coded like an engineer's blueprint. But the reality that hits you when you see an animal opened up on a dissecting table is very different [....] a haphazard mess that we actually see when we open a real chest.


This matches my experience of IT. The interfaces are clean and sensible. The design docs look okay but the code is a complete mess and the more you prod the design the more gaps you find between it and reality.

The point is that we shouldn't sell SOA from the perspective of evolution of the INSIDE at all; we should sell it as an intelligent design approach based on the outside of the service, its interfaces and its contracts.  By claiming internal elements as benefits we are actually undermining the real benefits that SOA can deliver.

In other words, the point of SOA is that the internals are always going to be a mess and we are always going to reach a point where going back to the whiteboard is a better option than the rubbish internal wiring that we currently have.  This mentality would make us concentrate much more on the quality of our interfaces and contracts, and much less on technical myths of evolution and dynamism which inevitably lead into a pit of broken promises and dreams.

So I'm calling it.  Any IT approach that claims it enables evolution of the internals in a dynamic and incremental way is fundamentally hokum and snake oil.  All of these approaches will fail to deliver the long term benefits and will create the evolutionary mess we see in the engineering disaster which is the human eye.  Only by starting from a perspective of outward clarity and design, and relegating internal behaviour to the position of a temporary implementation, will we start to create IT estates that genuinely demonstrate some intelligent design in IT.




PS. I'd like to claim some sort of award for claiming Richard Dawkins supports Intelligent Design

Monday, January 25, 2010

Define the standards FIRST

One of the bits that often surprises me, no in fact not surprises, it stuns me, is the amazing way that people don't define the standards they are going to use for their project, programme or SOA effort right at the start.  This means the business, requirements and technical standards.

Starting with the business architecture, that means picking your approach to defining the business services.  Now you could use my approach or something else, but whatever you do it needs to be consistent across the project, and across the enterprise if you are doing a broader transformation programme.

On requirements it's about structuring those requirements against the business architecture and having a consistent way of matching the requirements against the services and capabilities so you don't get duplication.

These elements are about people, processes and documentation, and they really aren't hard to set up.  It's very important that you do this so your documentation is in a consistent format that flows through to delivery and operations.

The final area is the technical standards, and this is the area where there really is the least excuse.  Saying "but it's REST" and claiming that everything will be dynamic is a cop-out and it really just means you are lazy.  So in the REST world what you need to do is
  1. Agree how you are going to publish the specifications for the resources, how you will say what a "GET" does and what a "POST" does 
  2. Create some exemplar "services"/resources with the level of documentation required for people to use them 
  3. Agree a process around Mocking/Proxying to enable people to test and verify their solutions without waiting for the final solution 
  4. Agree the test process against the resources and how you will verify that they meet the fixed requirements of the system at that point in time 
This last one is important.  Some muppet tried to tell me last year that, as it was REST, the resource was correct as it was; it was in itself the specification of what it should do, and the test harnesses should dynamically discover only what the REST implementation already did.  This was muppetry of the highest order, and after forcing the individual to ingest a copy of the business requirements document we agreed that the current solution didn't match the business requirements, no matter how dynamically it failed to do so.
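As an illustration of points 1 and 2 above, here's a minimal sketch of what an exemplar documented resource could look like on the Java side, using JAX-RS annotations.  The /orders resource and its operations are hypothetical; the point is simply that what GET and POST mean is written down, not 'dynamically discovered'.

import javax.ws.rs.Consumes;
import javax.ws.rs.GET;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.Response;

/**
 * Exemplar resource: documents exactly what GET and POST mean here,
 * so consumers don't have to guess at the contract.
 */
@Path("/orders")
public interface OrderResource {

    /** GET returns the current state of a single order as the agreed JSON representation. */
    @GET
    @Path("/{orderId}")
    @Produces("application/json")
    Response getOrder(@PathParam("orderId") String orderId);

    /** POST submits a new order; the body must conform to the agreed order schema,
        and the response carries the Location of the created resource. */
    @POST
    @Consumes("application/json")
    Response createOrder(String orderJson);
}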

So with REST there are things that you have to do as a project and programme and they take time and experience and you might get them wrong and need them updating. If you've chosen to go Web Services however and you haven't documented your standards then to be frank you really shouldn't be working in IT.

So in the Web Services world it really is easy.  First off, do you want to play safe and solid or do you need lots of call-backs in your Web Services?  If you are willing to cope without call-backs then you start off with the easy ones:
  1. WS-I Basic Profile 1.1
  2. WSDL 1.1
  3. SOAP 1.1
Now if you want call-backs it's into WSDL 2.0, and there are technical advantages to that, but you can get hit by some really gnarly XML marshalling and header clashes when going between non-WS-I compliant platforms.  You could choose to define your own local version of WS-I compliance based around WSDL 2.0, but most of the time you are better off investing in some decent design and simple approaches, like having standard matched schemas for certain process elements and passing the calling service name, which can then be resolved via a registry to determine the right call-back service.
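For the 'safe and solid' no-call-backs option above, here's a sketch of the sort of contract that keeps you inside those three standards, assuming a JAX-WS stack (the service name and operation are invented for illustration): document/literal is what WS-I Basic Profile 1.1 expects, and JAX-WS produces a WSDL 1.1 / SOAP 1.1 service by default.

import javax.jws.WebMethod;
import javax.jws.WebParam;
import javax.jws.WebService;
import javax.jws.soap.SOAPBinding;
import javax.jws.soap.SOAPBinding.Style;
import javax.jws.soap.SOAPBinding.Use;

/**
 * 'Safe and solid' contract: document/literal, no call-backs,
 * giving a WSDL 1.1 / SOAP 1.1 service in line with WS-I Basic Profile 1.1.
 */
@WebService(name = "CustomerLookup", targetNamespace = "http://example.org/services/customer")
@SOAPBinding(style = Style.DOCUMENT, use = Use.LITERAL)
public interface CustomerLookup {

    /** Simple request/response operation - the whole contract lives in the generated WSDL. */
    @WebMethod
    String getCustomerName(@WebParam(name = "customerId") String customerId);
}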

Next up you need to decide if you are going WS-* and if so what do you want
  1. WS-Security - which version, which spec
  2. WS-RM - which version, which spec
  3. WS-TX - you're kidding, right?
For each of these elements it is really important to say which specification you are going to use as some products claim they support a specification but either support an old version or, more impressively, support a version of the standard from before it was even submitted to a standards organisation.

The other piece is to agree on your standard transport mechanism being HTTP.  Seriously, it's 2010 and it's about time that people stopped muttering "performance" and proposing messaging as an alternative.  If you have real performance issues then go tailored and go binary, but 99.999% of the time this would be pointless and you are better off using HTTP/S.

You can define all of these standards before you start a programme, and on the technical side there really is little excuse in the REST world and zero excuse in the WS-* world not to do this.

Monday, September 21, 2009

Theory v Practice - the opposite view

There is an age-old saying:
In theory, practice and theory are the same thing, in practice they aren't


This is true 90% of the time, but in engineering it isn't always the case. I was speaking to someone a day or so ago about interviews; they were nervous, as the job they were applying for required a specific programming skill and they had only done "a bit" of it.

What I told this poor young fool was that as they had talent (and they do) this lack of experience was just a minor element. Could they learn more in the week before the interview? I asked. "Sure" came the reply.

Well there you go. And if they ask questions about threading and deadlocks, can you answer them?

"Well I know the theory but not the syntax"

And it was here that I imparted the knowledge... it's actually the theory that counts, not the syntax. To this end I'll tell two tales.

My first job interview was for a start-up company. They had some interesting bits around Eiffel and were trying to create a meta-language on Eiffel that enabled multiple different GUIs and Databases from a single code base. Part of this would require me to know C. I was asked

"Do you know C"

"Sure" I said.

"You'll have to take a coding test next week to check" they said

This gave me 7 days to learn C, a language I'd never coded in before. By the end of that week I was coding with pointers to functions which took pointers to arrays of functions as arguments. The reason was that I understood the theory and could quickly apply it to the new syntax.

I got the job... but they went bust 6 months later owing me 2 months' wages, so it wasn't the best story.

Now for another story: a good friend wanted to shift out of his current IT job, which didn't involve coding, into a coding job. He had a bunch of theory and brains but no experience. I boldly said that I could coach him through a C++ interview in a couple of weeks. For two weeks we talked about classes, the STL, friends and lots of other things.

He got to the interview, chatted for 30 minutes about computing in general and was asked the killer question

"So you know C++"

To which he quickly replied "Yes".... and the interview was over. He got the job and was pretty bloody good at it, despite the level of bluffing (although the single word "Yes" isn't the strongest bluff in the world).

The point is that if you understand the theory of programming languages and computing then individual languages are just a set of syntax that implements that theory in a specific context. Unfortunately in IT very few people understand the theory and are therefore condemned to badly implement software in the manner of an orang-outang who doesn't understand English but has a dictionary of English words to point at.

Lots of times Theory is less important than practice, but in IT if you don't know the theory then the odds are you'll be rubbish at the practice.




Wednesday, September 02, 2009

Why I like Open Source documentation

I've got someone creating a structured Semantic Wiki for me at the moment and we are using Semantic Forms. One of the things we needed to do was pre-populate the fields. This means something like

{{#forminput:form_name|size|value|button_text|query_string}}

With the query string set... The documentation said

query_string is the set of values that you want passed in through the query string to the form. It should look like a typical URL query string; an example would be "namespace=User&User[Is_employee]=yes".
Now this is accurate but misses out a couple of important bits.

  1. The Namespace doesn't actually matter unless you are using namespaces (we aren't)
  2. The second "User" doesn't refer to the form name or to the namespace it refers to the template name
  3. The underscore is only valid if you actually put it in the field name yourself (i.e. unlike other bits of MediaWiki, where "Fred Jones == Fred_Jones", that isn't true here)
So after a bit of randomly focused hacking I found the solution... and what did I do? I updated the documentation to add:
The format of a query string differs from the form_name in that it uses the template name. As an example if you have a "Person" template (Template:Person) and a Person Form (Form:Person_Form) for entry then it is the names from the Template that matter. An example to populate the Home Telephone field would therefore be: {{#forminput:PersonForm||Add Person|Person_Form[Home Telephone]=555-6666}} N.B. The FORM uses underscores while the field uses spaces.
Now this could be written better, I agree, but the point is that the next poor bugger through will now have a better starting place than we did. Adding examples is particularly useful in much documentation and is something that is often missing. I regularly find myself Googling for an example after failing to understand something that the person writing the documentation clearly felt was beneath them to explain.

For commercial software you'd clearly like to see a bit more of an editorial process to make sure it's not stupid advice like "Install this Malware", but it's an area where more companies could benefit from improvements in customer service and self-help by enabling people to extend their current documentation in ways that better fit how end-users see their technologies.

Thursday, June 25, 2009

When successful systems go bad - boiling frogs with Technical Debt

One of the challenges I often see in companies is when successful systems go bad. These aren't the systems that were delivered three times over schedule and five times over budget; these are the systems that many years ago delivered real benefits for the business, and did so in a reasonable time and budget.

The problem is that all those years ago the team in question was focused absolutely on getting the system live and successful, and like teams often do they cut corners. This started the technical debt of the system and began to act as a drag on future projects from day one. Thanks, however, to the talent of that original team and the abject failure of systems elsewhere, the success was rightly lauded and the system held up as a shining jewel.

Roll forwards a couple of years and the situation has evolved. Those smart developers are now managers, and some of them have left the company altogether. The team profile has changed and the odds are the talent pool has decreased. The pieces that the first team missed out (metrics, unit tests, documentation) are starting to be felt, but the current team would struggle to justify the cost to put them in, and the rate of progress, while slowing, still delivers in a reasonable time frame. The level of debt is increasing, some of the short cuts in the first build are becoming evident, and the newer short cuts to keep things on track are getting ever more desperate. Cut/Paste/Modify is probably becoming a normal strategy at this stage, but people are building successful careers, quite rightly, based on their previous success.

Roll forwards five or more years and the situation has evolved again. The managers are becoming directors, more disconnected from the actual technology but still aware of what it represented to them all those years ago. The people working on it now have little connection to the original vision and are in full-on duct-tape mode. The pace of change has slowed to a crawl, the cost of change has gone through the roof and the ability to actually innovate has all but disappeared. The problem is that this has happened slowly, over an extended period of time, so people are still thinking that they have a wonderfully flexible system.

It's pretty much like boiling a frog: no-one has noticed that the pace has slowed and that the major cost of all projects is now the technical debt of the system. Some retrofitting efforts have been made, no doubt, and these are lauded as being important, but the fundamental challenge is that the code base is now hugely fragmented from a management perspective while maintaining points of critical instability. Testing and releases take an age because changing one thing breaks five others, all of which are then just patched.

I'm not talking here about back-end transactional systems in which change is an irregular thing (who cares if the COBOL is hard to maintain if last year you only modified five lines); I'm talking about dynamic systems that are meant to be flexible and agile and have suffered greatly under five or more years of continual development of variable quality.

The challenge is that senior IT managers in these types of companies are often wedded emotionally to the system and can create elaborate arguments based around their perception of the system when it was built and their desire to see what was a great system continue to be a focus for the business. Arguments that off-the-shelf components now offer a better and more flexible approach, or that starting again would be cheaper than next year's development budget to add five features, will just fall on deaf ears.

But it's something that people in IT must do as a normal part of their business: step back and realise that 5 or 10 years is a huge period of time in IT, and that there will have been significant changes in the business and the IT market during that time which will change your previous assumptions. The right answer might be to continue on the current path and invest to a level that removes the technical debt; it might be to do a full rebuild in a new technology, or even in the same technology just based on the learnings of the last 5 years; or it might be that what was differentiating before has now become a commodity, so you can replace it with a standardised package (and make the business change from differentiating to standardising, with its additional cost benefits).

So have an honest look at those successful systems and ask yourself the question: is the frog boiling? Be honest, think about the alternatives, and above all think about what the right economic model would be for that system. This last bit is important: differentiating systems are about top-line growth, about capital investment (CapEx) and the business ROI; standardised systems are about cost saving in IT and the business, and are measured by the cost to serve (OpEx).

Here is the checklist to see if you have a boiling frog:
  1. Yearly development spend is higher than the initial development estimate
  2. Many people in IT have a significant emotional attachment, but don't code
  3. "differentiation" and "flexibility" are the watch words
  4. The code quality metrics are in the scary territory (or aren't collected)
  5. There have been several large "refresh" projects through the code base
  6. The competition seems to be getting new features to market quicker
  7. The pace of change on other systems is limited by the pace of change of the boiling frog
  8. Each generation of buzzwords is applied to the solution, but no major refactoring has been done
I'm sure there are others, but one more test is simple:

If the business gave you the development budget for the next two years and asked whether you could rebuild the site with a couple of friends for that amount, would you:
  1. Say "god no its way to complicated and high quality"
  2. Say "No way, it would cost a bit more than that"
  3. Say "Ummm I think so"
  4. Say "definitely, when do I start?"
  5. Bite their hand off at the elbow and give them an order form
If you are scoring 3 or more then you've probably got a boiling frog.


Monday, February 02, 2009

Think Holistically, speak clearly

Recently I've had a couple of occasions where I've needed to work on some direct communications to some execs. In all of these (both internal and external) there has been a whole list of issues but only a couple of genuinely core ones. People often like to hide in the detail of these conversations, particularly in change programmes, and you lose the big picture as they sink into arguing about line 83 of the spreadsheet.

Most of the time the issues come down to something very specific from which the other issues stem, and I've found that ignoring the detail when engaging with people helps hugely in getting the result you want. So if the lack of a specific person means that you aren't engaging with the business, aren't getting the documentation and aren't getting the sort of clarity you need, then completely ignore the latter points and just be specific that "lack of Bill = FAIL". If they want to get specific then just say "engagement, documentation, clarity: I can go into the specifics but it all comes back to having Bill". Let the senior people ask for the detail; let yourself just provide the clarity.

If you don't then it's certain that the person you are talking with won't; after all, you are the expert in this area, so how are they meant to understand it better than you?

I've seen this with some SOA efforts, where people start saying things like "We need to reorganise the teams, set up a new procurement process, buy an ESB, get a rules engine, get finance engaged, agree on the KPIs", and the list goes on. The reality is that the first two bits are the most important and the rest will either drop out or be materially affected by the first two.

The problem is that engineers, and especially architects, like to "think holistically", which means "telling everyone all the problems"; in other words, there is a lack of filtering between the brain and the mouth.

So take that list of 20 "big issues" on your project and look at it. If you could fix just two (or at most three), which would they be? Do they now seem quite a lot bigger than the rest of the list? So go and be clear: "these mean FAIL".



Friday, December 19, 2008

What to do when insanity reigns. CYA for a reason

Recently I've had one of those experiences where you just have to stand back and think "either I'm insane or they are". Reviewing the facts, it's pretty clear that the insanity is on the other side. Slightly worse than that, the decision that is about to be made tacitly admits that it's an insane decision.

This is when you need to implement a real CYA policy. CYA is Cover Your Arse. This isn't the bad old EA CYA approach; it's not even about clear documentation. This is about making sure that when the shit inevitably hits the fan people don't turn round and say "why didn't you say?", but also making sure that they don't say "it's your fault because you didn't support it".

So the line here on your programme is simple: you know it's going to fail, you have to support it through the failure, but when it goes tits up you need to make sure that the well-reasoned arguments you fed into the clinically insane decision are well documented.

Stage 1: Store ALL the documents that went into the decision

Stage 2: Write an email saying that you respect the decision and will of course support the approach, but you feel it doesn't address some of the issues that you previously raised

When this email is ignored, that is fine; the people making the decision clearly know it's insane but have decided that right now insanity is a valid defence. If they reply with "We think it does address those problems" then, again, don't reply.

Stage 3: Write down a list of the things that are going to go horribly wrong. Get them flagged in the risk register. You don't have a risk register? Holy crap, you need more help than I can ever offer; get one in a hurry.

Stage 4: It now becomes the PM's job to track against the risks. If you are the PM then make sure you go Amber early, not Red (as you will be seen as "obstructive"), but do go Amber with your concerns. The worst thing in the world is the project that flicks from Green to Red with you saying "told you so".

Stage 5: It's going downhill; your risk report looks like the dance floor in Saturday Night Fever and people have forgotten what colour green actually is. Now is the time to start pushing "resolutions": you really want to be the person who pulls it from the fire. This will, however, involve napalming lots of the people who screwed up, and you do not want a problem child responsible for the clean-up, as that just means it will go more wrong.

Stage 6: You are responsible for the clean-up. The key phrase here is "drawing a line under what happened before"; it means you are going to ignore any previous decisions and base your judgement of what goes forward on what you want. It's the same line as when a failure is put in charge; the difference here is that you will be firing people.

Stage 7: Remember that people screwed up against advice. Those people need to find another place to screw up, unless they are in a desperate career-saving type of place, in which case they are your perfect allies. Think of reformed smokers or born-again Christians... but with a pay cheque.

Stage 8: Do an Obama. The smartest thing Obama has done so far is declare everything screwed. Don't be nice, be ruthless. If there are six months to go-live but it will never happen, blame it on the prior administration. Then work out what is achievable.

Stage 9: Plan to an end. Don't get caught up in fire-fighting; break the current project and start a new plan for a real end game.

Stage 10: Deliver to live. There is nothing that will help you more than meeting the expectations you set in Stages 8 and 9. This makes you a rock star and this gives you the future right to say "this is wrong".

Insanity and stupidity are horrible things, but don't try and ignore them. The worst things I've ever seen are where bright people have dug stupid people out of holes without making that visible. I've seen some really dumb people in roles they couldn't handle as a result and some smart people burnt out because of it.

The Risk Log is your friend. Do your job, do your best. But this isn't the navy: overthrowing the stupid captain is fine, as long as the numbers back you up.



Wednesday, December 10, 2008

User Adoption matters

One of the most annoying things in the WWW, and most especially the Web 2.0 world, is the Field of Dreams mentality of "build it and they will come". With package projects and business services this is the resort of two groups of people:
  1. People who think the technology is the only thing
  2. People who are scared of users
Sometimes people fall into both of these groups, but the underlying principle is always the same: the technology is enough, it's enough to "let people know", and you will then "build a community" which will make it all successful. The problem is that while there are successful internet businesses that were created in part by this approach, they also had a couple of other things:
  1. Marketing
  2. A user population the size of the internet
If you are aiming at the mass consumer market on the web then it might be enough to launch it and do some marketing; if, however, you are doing this internally within your organisation then quite simply it isn't enough.

So how do you drive user adoption? Well, the first thing is to find out why users might not use whatever you are proposing. Be negative, get the worst things out there and then, one by one, mitigate those risks. To do this you've of course got to identify your users and be realistic about them. If you are doing a bug-tracking system or a service for fraud analysis it's highly unlikely that everyone in the company is going to use it, so set your objectives realistically and see why your user community might not switch.

Next up, think about how you are going to market it to the users; yes, that's right, market it. Again, it's not enough just to lob an article on your intranet; think about how you are going to communicate what is coming before it is there. Create a comms plan and work out what you are going to tell them and when. Maybe even create an internal buzz campaign to get people interested.

Next, look at how you can transition users to your system gradually (if you can). If it is a greenfield system then this is easier, as you don't have the data migration challenge. The point of a gradual migration is to start building a reputation for success that can then be used to go after the more challenging groups. If you have to go big bang then make sure it works on day one. If this means delaying the launch by a couple of weeks then try to do that, because if you bugger up the launch day they'll remember it for a long time, no matter how good it is a few weeks later (look at Heathrow T5 for an example of that).

Finally, and most importantly, don't stop on go-live day. Track usage and adoption, look at who is, and isn't, using the service/package/solution, go out and find out what has worked, and then have a follow-up campaign to get people more engaged. Keep doing this as a core part of running the system to make sure that it is successful in 24 months' time, not just 24 minutes after being turned on.



