
Thursday, February 06, 2014

NoSQL? No Thanks

There continues to be a disproportionate amount of hype around 'NoSQL' data stores.  By disproportionate I mean 'completely and utterly out of scale with the actual problems of the vast majority of companies'.  I wrote before about 'how NoSQL became more SQL', and the point I made there becomes more apparent the more I work with companies on Big Data challenges.

There are three worlds of data interaction developing:

  1. Traditional Reporting - it's SQL, deal with it
  2. Complex Analytics - it's about the tools and languages: R, SAS, MADlib, etc.
  3. Embedding in applications - the world of developers
The point here is that getting all those reports, and more importantly all those people who write reports, re-written using a NoSQL approach makes no sense.  Sure, statistical languages and tools aren't SQL, but is it right to claim they are NoSQL approaches?  I'd argue not.  The use of a NoSQL store such as Hadoop or MongoDB is about the infrastructure behind the tools; it's hidden from the users, so while it may make good technical sense to use such a data store it really doesn't change the way the users work.
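
To make that concrete, here is a minimal sketch of what the second world looks like with MADlib, which runs statistical routines as ordinary SQL functions inside the database.  The table and column names are hypothetical; the point is that the analyst works in SQL and statistics, and whatever store sits underneath is invisible to them.

    -- A hypothetical analyst fitting a linear regression with MADlib.
    SELECT madlib.linregr_train(
        'house_sales',        -- source table (hypothetical)
        'house_price_model',  -- output table for the fitted model
        'price',              -- dependent variable
        'ARRAY[1, tax, bath_count, square_feet]'  -- independent variables
    );

    -- The coefficients come back through ordinary SQL too.
    SELECT coef, r2 FROM house_price_model;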

The point in these first two areas is that it's about the tools people use to interact with information and supporting the languages they use to do so.  The infrastructure question is simply one of abstraction and efficiency.  It's like caring whether your laptop is connected over 802.11g or 802.11n: yes, I know you care, but that's because you're a techie.  The person using their iPad doesn't care as long as the videos stream from YouTube successfully.  It's the end-user experience that counts, not the infrastructure.

The final case is the world of developers, and here is another shock: business users couldn't care less what developers use as long as they deliver.  If you can deliver better using SQL then use that; if you can deliver better using NoSQL then use that; if you can deliver better using a random number generator and the Force then go for it.  The business doesn't care whether you use NoSQL, and nor should it.  What it cares about is that the solution works, meets the business requirements and non-functionals, and can be changed when it needs it to be.

Stop trying to force a technical approach onto the business; start hiding your technical infrastructure while giving them the tools and languages they want.

Monday, January 27, 2014

Six things to make your Big Data project succeed

So I wrote about why your Hadoop project will fail, so I think it's only right to follow up with some things you can do to actually make the Big Data project you take on succeed.  The first thing is to stop trying to make 'Big Data' succeed and instead start focusing on how you educate the business on the value of information, and then work out how to deliver new value... that just so happens to be delivered with Big or Fast Data technologies.

Don't try and change the business
The first thing is to stop seeing technology as a goal in itself and complaining when the business doesn't recognise that your 'magic' technology is the most important thing in the world.  Instead, find out how the business works: look at how people actually work day to day and see how you can improve that.

Sounds simple?  Well the good news is that it is, but it means you need to forget about technology until you know how the business works.

Explain why Information Matters
The next bit, after you've understood the business better, is to explain to them why they should care about information.  Digitization is the buzzword you need to learn; folks like MIT Sloan (Customer Facing Digitization), Harvard Business Review (Are you ready for Digitization?) and Davos (Digitization and Growth) are saying that this is the way forward.  And what is Digitization?  In the raw it's just about converting stuff into digital formats, but in reality it's about having an information- and analytics-driven business.  The prediction of all the business schools is that companies that do this will outperform their competition.

This is an important step: it's about shifting information from being a technology and IT conversation towards the business genuinely seeing information as a critical part of business growth.  It's also about you as an IT professional learning to communicate technology changes in the language the business wants to hear.  They don't want to hear 'Hadoop'; they want to hear 'Digitization'.

Find a problem that needs a new solution
The next key thing is finding a problem that isn't well served by your current environments.  If you could solve a problem by just adding a new report to the EDW then it really doesn't prove anything to use new technologies to do the same thing in a more time-consuming way.  The good news is there are probably loads of problems out there not well served by your current environments: volume challenges around sensor data and clickstreams, real-time analytics, predictive analytics, data discovery and ad-hoc information solutions.  There are lots of business problems to choose from.

Find that problem, find the person or group in the business that cares about having that problem solved, and be clear about what the benefits of solving it are.

Get people with the 'scars and ribbons'
What do I do when I work with a new technology?  Two things: first I get some training, and from that I build something for myself that helps me learn.  If I'm doing it at work to build a business, I then go and find someone who has already done it before and hire them or transfer them into my team.

Bill Joy once said that the smartest people weren't at Sun, so Sun should learn from outside.  I'm not Bill Joy, you aren't Bill Joy, so we can certainly learn from outside.  Whether that means going to a consultancy that has done it before or hiring people who have done it before doesn't really matter.  The point is that unless you really are revolutionising the IT market, you are doing something someone has done before, so your best bet is to learn from their example.

It stuns me how many people embark on complex IT projects having never used the technology before and are then surprised when the project fails.  Get people with the 'scars and ribbons' who can tell you what not to do, which is massively more important than what to do.

Throw out some of your old Data Warehouse thinking
The next bit is something you need to forget: a cherished truth that no longer holds.  Get rid of the notion that your job as a data architect is to dictate a single view to the business.  Get rid of the cherished ETL process.  Land the data in Hadoop, all the data you can; don't worry if you think you might not use it.  You land it in Hadoop and then turn it into views or analytics.  There is no benefit in not taking everything across and lots of benefits in doing so.

In other words, you've got the problem, that is the goal.  Now go and collect all the data rather than worrying about the full A-Z straight away by defining Z and working backwards.  Understand the data areas, drop them into Hadoop, and then work out the right A-Z for today, knowing that if it's a different route tomorrow you've got the data ready to go without updating the integration.

Then if another problem needs access to the same data, don't automatically try to make one solution do two things.  It's perfectly OK to create a second solution on top of Hadoop to solve that problem.  You don't need everyone to agree on a single schema; you just need to solve the problem.  The point here is that to get different end results you need to start thinking differently.
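
As a rough illustration of that thinking, here is a minimal HiveQL sketch of landing a raw feed and serving two problems from it with independent views.  The feed, path and column names are all hypothetical; they just show the shape of the approach.

    -- Land the raw feed exactly as it arrives: an external table over
    -- files in HDFS, with no up-front target schema or ETL.
    CREATE EXTERNAL TABLE raw_clicks (
      event_time STRING,
      user_id    STRING,
      url        STRING,
      referrer   STRING
    )
    ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
    LOCATION '/data/landing/clickstream';

    -- Problem one gets its own view over the landed data.
    CREATE VIEW daily_page_hits AS
    SELECT to_date(event_time) AS dt, url, COUNT(*) AS hits
    FROM raw_clicks
    GROUP BY to_date(event_time), url;

    -- Problem two gets a second, independent view; the two consumers
    -- never have to agree on a single schema.
    CREATE VIEW user_referrers AS
    SELECT user_id, referrer, COUNT(*) AS visits
    FROM raw_clicks
    GROUP BY user_id, referrer;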

Don't get hung up on NoSQL, don't get hung up on Hadoop
The final thing is the dirty secret of the Hadoop world that has rapidly become a bold proclamation: NoSQL really isn't for everyone, and SQL is perfectly good for lots of cases.  Hive, Impala and HAWQ are all addressing exactly that challenge, and you shouldn't limit yourself to Hadoop-friendly approaches either: if the right way is to push the data from Hadoop to your existing data warehouse... do it.  If the requirement is some fast data processing, then do that.

The point here is that your goal is to show how the new technologies are more flexible and better able to adapt to the business, and how the new IT approach is to match what the business wants, not to force an EDW onto it every time.
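
And to show how mundane this is for the report writer, here is the kind of query they would run against that landed data through a SQL-on-Hadoop engine such as Hive, Impala or HAWQ.  It reuses the hypothetical daily_page_hits view from the sketch above, and nothing in it betrays that the store underneath is 'NoSQL'.

    -- Ordinary report SQL, unchanged from what would run on an EDW.
    SELECT dt, url, hits
    FROM daily_page_hits
    WHERE dt >= '2014-01-01'
    ORDER BY hits DESC
    LIMIT 10;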


In short, making your Big Data program succeed is actually about getting the business to care about the value that information brings, and then fitting your approach to what the business wants to achieve.

The business is your customer.  Time to do what they want, not force an EDW down their throats.

Monday, January 06, 2014

Six reasons your Big Data Hadoop project will fail in 2014

OK, so Hadoop is the bomb, Hadoop is the schizzle, Hadoop is here to solve world hunger and all problems.  Now I've talked before about some of the challenges around Hadoop for enterprises, but here are six reasons Information Week is right when it says that Hadoop projects are going to fail more often than not.

1. Hadoop is a Java thing, not a BI thing
The first is the most important challenge.  I'm a Java guy, a Java guy who thinks Java has been driven off a cliff by its leadership over the last eight years, but it's still one of the best platforms out there.  However, the problems Hadoop is trying to address are analytics problems, BI problems.  Put briefly, BI guys don't like Java guys and Java guys don't like BI guys.  For Java guys Hadoop is yet more proof that they can do everything, but BI guys know that custom build isn't an efficient route to delivering all of those BI requirements.

On top of that, the business folks know SQL, often they really know SQL; SQL is the official language of business and data.  So a straight 'No-SQL' approach is doomed to fail: you are speaking French to the British.  2014 will be the year when SQL on Hadoop becomes the norm, but you are still going to need your Java and BI guys to get along, and you are going to have to recognise that SQL beats No-SQL.

2. You decide to roll your own
Hadoop is open source: all you have to do is download it, install it and off you go, right?  There are so many cases of people not doing that right that there is an actual page explaining why the project won't accept those as bugs.  Hadoop is a bugger to install; it requires you to really understand how distributed computing works, and guess what?  You thought you did, but it turns out you really didn't.  Distributed computing and multi-threaded computing are hard.

There are three companies you need to talk to: Pivotal, Cloudera and Hortonworks.  And how easy can they make it?  Well, Pivotal have an easy Pivotal HD Hadoop Virtual Machine to get you started and even claim they can get you running a Hadoop cluster in 45 minutes.

3. You are building a technical proof of concept... why?
One reason your efforts will fail is that you are doing a 'technical proof of concept', at the end of which you will amazingly find that something used in some of the biggest analytics challenges on planet earth, at the likes of Yahoo, fits your much, much smaller challenge.  Well done, you've spent money proving the obvious.

Now what?  How about solving an actual business problem?  Actually, why didn't you start by solving an actual business problem as a way to see how the technology would work for what the business faces?  Technical proofs of concept are pointless; you need to demonstrate to the business how this new technology solves their problems in a better (cheaper, faster, etc.) way.

4. You didn't understand what Hadoop was bad at
Hadoop isn't brilliant at everything analytical... shocking, eh?  So that complex analytics you want to do, which is effectively a complex 25-table join followed by the analytics... yeah, that really isn't going to work too well.  Those bits where you said you could do that key business use case faster and cheaper, and then it took two days to run?

Hadoop is good at some things, but it's not good at everything.  That is why folks are investing in SQL technologies on top of Hadoop, such as Pivotal's HAWQ and Cloudera's Impala, with Pivotal already showing how the bridge between traditional MPP and Hadoop is going to be made.

5. You didn't understand that it's part of the puzzle
One of the big reasons Hadoop pieces fail to really deliver is that they are isolated silos.  They might even be doing some good analytics, but people can't see those analytics where they care about them.  Sure, you've put up some nice web pages, but people don't use those in their daily lives.  They want the information pushed into the Data Warehouse so they can see it in their reports; they want it pushed to the ERP so they can make better decisions... they might want it in many, many places, but you've left it in the one place where they don't care about it.

When looking at the future of your information landscape you need to remember that Hadoop and NoSQL are just new tools: good new tools with a critical part to play, but still just new tools in your toolbox.

6. You didn't change
The biggest reason your Hadoop project will fail, however, is that you've not changed your basic assumptions or looked at how Hadoop lets you do things differently.  So you are still doing ETL to transform data into some idealised schema based on a point-in-time view of what is required.  And you are doing that into a Hadoop cluster which couldn't care less about redundant or unused data, and where the cost of carrying that data is significantly lower than the cost of another set of ETL development.

You've carried on thinking about grand enterprise solutions to which everyone will come and be beholden to your technical genius.

What you've not done is sit back and think 'the current way sucks for the business; can I change that?', because if you had you'd have realised that using Hadoop as a data substrate/lake layer makes more sense than ETL, and that it's actually local solutions that get used the most, not corporate ones.

Your Hadoop project will fail because of you
The main reason Hadoop projects will fail is that you approach a new technology with an old mindset.  You'll try to build a traditional BI solution in a traditional BI way, you'll not understand that Java doesn't work like that, you'll not understand how Map Reduce is different from SQL, and you'll plough on regardless and blame the technology.
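
To get a feel for how different the mindsets are, compare the two ways of expressing the most basic aggregation; the words table here is hypothetical.

    -- In SQL you declare the result and the engine plans the job:
    SELECT word, COUNT(*) AS occurrences
    FROM words
    GROUP BY word;

    -- In Map Reduce you write the plan yourself: a mapper emitting
    -- (word, 1) pairs, a shuffle grouping them by key, and a reducer
    -- summing each group; typically a page or more of Java for the
    -- same result.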

Guess what, though?  The technology works at massive scale, much, much bigger than anything you've ever deployed.  It's not the technology, it's you.

So what to do?
.... I think I'll leave that for another post

Friday, March 22, 2013

Why NoSQL became MORE SQL and why Hadoop will become the Big Data Virtual Machine

A few years ago I wrote an article, 'When Big Data is a Big Con', which talked about some of the hype issues around Big Data.  One of the key points I raised was how many folks were just slapping Big Data badges on the same old same old; another was that Map Reduce really doesn't work the way traditional IT estates behave, which was a significant barrier to entry for Hadoop as a new technology.  Mark Little took this idea and ran with it on InfoQ with 'Big Data Evolution or Revolution?'.  Well, at the Hadoop Summit in Amsterdam this week the message was clear...
SQL is back, SQL is key, SQL is in fact the King of Hadoop
Part of me is disappointed in this.  I've never really liked SQL and quite liked the LISPiness of Map Reduce, but the reason behind this is simple.
When it comes to technology adoption it's people that are key, and large-scale adoption means small-scale change
Think about Java: a C-language derivative (70s concept) running on a virtual machine (60s), using some OO principles (60s), with a kickass set of libraries (90s).  It exploded because it wasn't a big leap, and I think we can now see the same sort of thing with Hadoop now that it has stopped insisting on purity and gone for the mainstream.  Sure, there will be some NoSQL pieces out there and Map Reduce has its uses, but it's this change towards using SQL that will really cause Hadoop usage to explode.

What is good, however, is that the Hadoop philosophy remains intact.  This isn't the Java SE 6 debacle, where aiming at the 'Joe Six-pack' developer resulted in a bag of mess.  This is about retaining the philosophy of cheap infrastructure and massive-scale processing while adding a more enterprise-friendly view (not developer friendly, enterprise friendly), and it's that focus which matters.

Hadoop has the opportunity to become the 'JVM of Big Data', but with a philosophy that the language you use on that Big Data Virtual Machine is down to your requirements and, most critically, down to what the people in your enterprise want to use.

It's great to see a good idea grow by taking a practical approach rather than sticking to flawed dogma.  Brilliant work from the Hadoop community: I salute you!