
Thursday, August 07, 2014

Whistler, Microsoft and how far cloud has come

In six years Microsoft has gone from almost zero corporate knowledge about how cloud computing works to it being an integral part of their strategy.  Sure, back in early 2008 there were some pieces of Microsoft that knew about cloud, but that really wasn't a corporate view; it was what a very few people inside the company knew.

How do I know this? Well back in 2008 I was sitting on the top of a mountain in Whistler with Simon Plant.  The snow wasn't great that season but there are few places I'd rather take a conference call.  The conference call was with Microsoft's licensing folks, discussing how we could license their technology (SQL Server, SharePoint etc) on AWS.  It was a rather interesting conversation to say the least.

We were asking how they'd license for virtual machines, and how things like license portability worked in virtual environments.  A typical exchange would go something like:

Simon: "So what we need is a virtual core price"
MSFT: "That will be the same as the physical core price"
Simon: "But its ok that it might move physical machines?"
MSFT: "As long as its less than once every 90 days yes"
Simon: "It could be more than that"
Me: "It could be every hour"
MSFT: "No problems, you'll just need to license every core it goes on"
Me: "We don't know what physical cores it runs on"
MSFT: "Why not?"
Simon: "Because its a cloud platform, we don't care about the physical boxes"

Then the conversation included one of the finest lines ever to come out of a software company's mouth:

"Well to be safe you just need to ask Amazon how many cores they have in the Data Centre and license for that"

The reason I re-tell this story is to make the point of just how far we've come in six years.  I don't think any licensing person today would suggest that you need to license for every physical core in an entire data centre.  Back then there really wasn't an understanding that we couldn't just ask Amazon for its core count in every data centre, or that we didn't even know where our instances physically lived; the bit they really couldn't get was the idea that we didn't care, and that not knowing those things was actually a positive.

The call continued and by the end we were actually getting somewhere, with a general acceptance that physical-to-virtual licensing needed some wording changes to get it working on AWS.  The Microsoft guys were pretty receptive and keen to learn, but it was clearly a new set of concepts for them.

Then Mr Plant blew their minds

Simon: "What about scale down?"
MSFT: "What do you mean?"
Simon: "Well the point of cloud is to scale up and down, so what do we do when we scale down?"
MSFT: "You just need to license at peak usage"
Me: "But that destroys the whole idea of dynamic scaling"
MSFT: "Why?"
Simon: "Well if you scale once a year for a peak for a couple days, say for financial reporting, the rest of the year that just remains idle which is wasted money"

The concept of temporary licenses and dynamic scaling was clearly one that went way beyond what they were able to do at that stage.  There were more conversations after that, explaining what cloud really meant and the sorts of things customers would be asking them for in years to come.  This whole call took place at 12,000ft, with Simon about 12 feet further up the mountain so we wouldn't get interference.  The Microsoft team commented that we appeared very co-ordinated given we were dialing in from UK and US numbers, and we just didn't think saying 'actually we are sitting with snowboards on our feet' was terribly professional.

The above conversation was repeated with pretty much every single software vendor over about three months, with the same misunderstanding and the same suggestion of 'license the whole data centre'.  I'm just singling out the Microsoft example as they are probably now one of the biggest proponents of cloud and it sits at the core of their strategy... oh, and doing a conference call on a snowboard was cool.

Six years is what it's taken to go from there to here: a world where cloud is now practically the default approach, whether public, private or hybrid, and those questioning cloud are effectively the uneducated minority, just as Microsoft were back in 2008.  Now the challenge for enterprises is understanding just how they take on these challenges at enterprise scale, and that is what Simon has been doing since then, leading to him setting up his own business, Dual Spark, which specialises in exactly that.

Simon Plant: doing cloud computing for longer than Microsoft.


Wednesday, April 25, 2012

Bling and ignorance - how cloudy thinking will screw up IT

Today it was announced that Progress Software are going to 'divest' their BPM and SOA technologies and instead focus on cloud technologies that don't really exist yet. This is indicative of a mentality I see around, so first let's start with some credentials:

1) I worked with Google around a SaaS partnership in 2007 around Google Apps
2) I delivered an SFDC solution in 2008
3) I've worked with Amazon, VMWare and lots of other cloud based companies

So let's be clear: I think cloud and SaaS can be extremely useful things. The problem comes when people begin to think they are magic. Let's be clear about the terms. SaaS is a software package, pre-installed and configured, that you can pay for on demand - the change isn't the software, it's the capacity and the charging model. Cloud is an infrastructure play to provide old-school tin in a virtualised manner, in a way that can be quickly provisioned and paid for on demand. That really is it.

The problem I see is that people take the new bling and forget the old lessons. So people say 'I'm using SFDC and it is my customer MDM, I don't need anything else'... in the same way that people said that about Siebel in 2000 and found out in 2002 that it wasn't true. They say 'it's on the cloud so I don't need to worry about architecting for performance, I'll just scale the hardware', as people did around 'pizza box' architectures in 2000. I hear 'we don't need to worry about integration, it's on the cloud'... like people did around SOA/ESB/WS-* in... 2000.

The problem is that the reality is the opposite. The more federation you have, the more process and control you need. The more information is federated, the more requirements you have for Master Data Management. Cloud solutions are not silver bullets and they are not always simple to integrate with other systems: great on their own, but not good with others. Rapidly people are building federated IT estates with federated data and no practice and process around integration, which leads to a massive EXTERNAL spaghetti mess, something that makes the EAI problems of the 90s look like a walk in the park.

Or to put it another way... isn't it great how people are making more work for those of us in IT.

Thursday, March 01, 2012

Has cloud lost its buzz?

After doing a presentation the other day I joked to someone from IBM that the WebSphere 'CloudBurst' appliance was one of the silliest names I'd heard: an appliance that was tagged as cloud.  He then informed me that it's not called that anymore but is now called the much duller, but more accurate, Workload Deployer.  Now I'm still not sure why this is a piece of physical kit rather than a virtual machine, but it's an interesting shift in marketing over the last few years that people are now taking the word 'cloud' OFF products.

Now this could be because the definition of cloud is now clearer and marketing departments don't want to confuse people, or (and more likely) it's because cloud isn't seen as 'shiny' any more and has therefore lost much of its ability to entice clients.  In the latter case shifting to a more prosaic name makes sense.

This would be a good thing, as it means we've got beyond the peak of the hype cycle and are now getting towards cloud becoming an important technical approach, rather than the solution to world hunger or a 'badge' to slap on something to make it more attractive.

Friday, February 24, 2012

How politicians could kill SaaS with stupidity

Back when I was doing SaaS a few years ago I raised the issue of the Patriot Act as a reason why cloud providers would be setting up in Europe.  The rules however appear even worse than I knew, so the Patriot Act now impacts US cloud sales directly: the hosting location doesn't matter, it's the rules that do.

With the US Congress seeming to view China and the rest of the world with concern, and talk of trade barriers being raised, it isn't hard to see the next four years producing a real shift in cloud and SaaS adoption if, for instance, any European or Asian companies suffer publicly as a result of US policy with regards to their own information, or non-US legislation (EU data privacy for instance) makes it impossible to be both Patriot Act compliant and client compliant.

This challenge to US-based vendors could lead to a flight from US shores for many of them, or arms-length 'collaboration' agreements with European and Asian providers.  At worst it will lead to a collapse of these cloud providers as their markets are restricted to within the US borders, while truly global players will be able to address emerging high-volume markets.

If Congress does start making more draconian legislation, meaning US companies are able to offer even fewer assurances to non-US organisations, and non-US governments 'retaliate' by strengthening their own data privacy and retention legislation, then we could very quickly find ourselves in a 'Cloud' trade war, one governed not by tariffs but by policies, because adherence to those policies acts as a tax on the cloud provider; and if the provider is not able to obey both US legislation and local laws then in effect that provider will have been barred from the country.

Trade wars in SaaS and Cloud will be fundamentally different, less about tariffs and taxes and more about policies and laws.  Right now Congress has firmly put itself on a trade war path.

Unfortunately I don't think they realise that.

Tuesday, December 13, 2011

Cloud in a box: Life on Mars in Hardware or an empty glass of water?

There are some phrases that are just plain funny, and for me 'Cloud in a Box', which is available from multiple vendors, is probably just about the best. The idea here is that you can buy a box - a box that looks and acts like a 1970s mainframe: virtualisation, big power consumption, vendor lock-in - and joy of joys, you've now got a 'cloud'.
 So:

  • Do you pay for this cloud on demand? 
    • Nope
  • Do you pay for this cloud based on usage?
    • Nope
  • Are you able to just turn off this cloud and then turn it on later and not pay anything for when it's off?
    • Nope, you still need to pay maintenance
  • Can I license software for it based on usage?
    • Errr probably not, you'll have to negotiate that
  • Is this cloud multi-tenant?
    • Errr it can be... if you buy another cloud in a box
  • Is this cloud actually pretty much a mainframe virtualisation offer from 1980?
    • Err yes
At first I was thinking that this was the sort of thing created by folks who watch Life on Mars and want to see their data centre populated with flashing lights. But then I realised there is actually a better reason why you don't get cloud in a box.

Clouds are vapour: they float, they dynamically resize... if you put a cloud in a box then the vapour will stick to the sides and turn into water... taking up about 1% of the volume of the cloud.  For me this sums up why it doesn't come in a box.  Clouds need to have capacity well beyond your own normal needs, so that if you 'spike' you can spike in that cloud without needing that capacity the rest of the year.  So a 1% ratio is probably the minimum you should be looking at in terms of what your cloud provider has against your normal capacity.  This is the reason that provider clouds like Amazon's, or those from other large-scale data centre providers, aren't 'in a box' but instead are mutualised capacity environments.   Even if one of these providers gives you a 'private' area of VLANs and tin, they've still got the physical capacity to extend it without much of a problem.  That is what a cloud is: dynamic capacity paid for when you use it.

Cloud in a box?  I'm a glass 1% full sort of guy.




Thursday, June 09, 2011

iCloud 2.0 - CloudApps

Two years ago I wrote a post on why Apple might dominate the cloud and how an integrated offline/cloud backup solution would both offer more value from Apple's cloud and offer more of a lock-in. I do like it when a prediction comes pretty much spot on, even if they've only just started doing what I thought they would.

Now iCloud 1.0 is just a pretty basic sync and, as predicted, it does provide a premium service that includes the ability to sync your whole library. It doesn't appear to do the interface suggestion I made of integrating it directly into the iPod player, instead requiring you to go via the iTunes application, but that really is a minor improvement (and not a difficult one to do either). Now they've added cloud backup for iOS it surely can't be long before it's extended to include OS X, especially as it's effectively included for photos already.

What next for Apple and the cloud?

Well one thing they haven't done yet is automate some of this sync, so when I say in iTunes "last 5 un-played" it doesn't automatically do the update on your device, but this is a minor piece really.

The bigger thing that isn't in there yet though is the idea of using processing on the cloud rather than simply storage. So doing things like fancy video effects rendered on the cloud would be a good way to extend the experience on both the desktop and the mobile world to include a whole new generation of Apps.
CloudApps
So you don't just have the back-up/sync and all of those other elements; once you have your information being exchanged in this way you open the world to more consumer-focused applications, or cloud extensions to existing applications.

Microsoft already have a limited part of this with their cloud services, but they don't appear to have either the co-ordination, brand or vision to make it really happen. Google might have an opportunity with their services and Android but the control of the handset manufacturers and operators might stop them.

The other people who should be worried are Facebook. The point of CloudApps is going to be collaboration and multiple users, sharing and the like. So while Ping hasn't been a success, this application-centric cloud approach could give Apple just what it wants - control within the social media space.


Saturday, January 29, 2011

Rightscale - cloud provision as a commodity

I made a comment a while back about cloud providers not being good long-term investments; well, having a drink with Simon Plant of RightScale it became clear that it's already pretty much a cooked goose in the cloud space. RightScale do the VM creation and provisioning stuff across most of the public cloud providers, as well as folks like VMware in the "private" cloud space. What does this mean?

Well simply put it means that you can have RightScale create the VM image for you in a way that means you can deploy it to pretty much any cloud you want. This means you can start doing SLA/price arbitrage across providers and reduce any potential lock-in from the cloud provider. I like to think of this as an "iPhone strategy": before the iPhone it was the carrier who would specify what the phones did and would put network-specific cruft on them. Apple came along with the iPhone and said "nope, our phone, exactly the same, every network, managed by us". RightScale is effectively the iPhone and iTunes of your cloud provisioning. By using an intermediary approach you get to control not just the standard stuff like number of VMs, CPUs, storage etc but the more important stuff like which actual cloud you are deploying to. If you want to shift in-house from an external provider then you can, if you want to shift between providers then you can, and if you want to start off internally and shift externally when demand spikes or when it makes financial or security sense, then you can.
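
As a rough illustration of that intermediary idea (this is not RightScale's actual API, just a hedged sketch with made-up provider names and prices), the provisioning decision becomes a simple arbitrage over whichever providers meet your SLA:

```python
from dataclasses import dataclass

@dataclass
class Provider:
    name: str
    price_per_vm_hour: float  # illustrative pricing
    meets_sla: bool           # e.g. security, location, uptime checks

def deploy(image: str, vms: int, providers: list) -> str:
    # the intermediary, not the cloud vendor, decides where the image runs
    eligible = [p for p in providers if p.meets_sla]
    cheapest = min(eligible, key=lambda p: p.price_per_vm_hour)
    return "deploying %d x %s to %s" % (vms, image, cheapest.name)

providers = [
    Provider("public-cloud-eu", 0.10, True),
    Provider("internal-private", 0.07, True),
    Provider("cut-price-public", 0.05, False),  # cheap but fails the SLA
]
print(deploy("my-standard-image", 20, providers))
# -> deploying 20 x my-standard-image to internal-private
```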

So RightScale are doing to clouds what clouds have done to tin... commoditising them. This means cloud providers are in a volume business with retail-style metrics and margins. Effectively RightScale are achieving commercially what Open Cloud has so far failed to do publicly.

So in the same way as you wouldn't consider an Intel/AMD box where your software could only ever run on Dell (for example), why choose an approach to clouds that means you can only choose one provider?


Oh and I bought the drinks BTW.


Tuesday, January 25, 2011

Cloud providers and software vendors aren't a great long term bet

I'm noticing a bunch of cloud providers attracting massive funding rounds, and people are talking about mega-billion industries and everyone getting hugely rich.

I'd like to sound a note of caution, not on the concept that cloud is important or not going to happen, but on the concept that there are loads of companies that are going to make loads of money on it. Let me tell you a quick story about a company that believed in Telecoms in the late 20th century. The company was called GEC and was one of the giants of UK industry, a GE of the UK with a very strong defence arm. The company had billions in the bank and was one of the most solid stocks in the FTSE 100. Now this company had some new leaders who loved the idea of Telecoms and its "better multiples" and wanted to get out of that boring, profitable, defence industry and go heavy into Telecoms. In five years, from 1997 till 2001, these new leaders invested all of the cash pile, sold off the defence arm and turned a once-towering industrial into a bankrupt shell.

How about another? Let's take Vodafone and their stock chart across this Telecom bubble.


Want another? Alcatel-Lucent. Note here I'm talking about two companies who survived the bubble, as well as one huge company that bit the bullet as a result of it. One that never recovered would be Nortel, a company that during the bubble was at one stage worth 1/3 of the total value of Canadian companies! Startups like Winstar were allegedly worth over $4bn but went pop within a year. Throw in AOL's merger with Time Warner and the picture is pretty complete: massive over-investment in infrastructure providers and technologies with a view that the market was basically infinite.

This isn't the first time that an infrastructure play has fundamentally failed to make long-term money. Roads, rail and even canals had their own booms and busts as it became clear that it was too expensive to build all that infrastructure which people fundamentally didn't want. This is really true in something like Telco, and the cloud, where fundamentally the cost of provision is being driven relentlessly downwards. Investing $10bn today in IT infrastructure is like investing $2.5bn in four years' time; in other words your investment is worth 1/4 of its retail value in four years. Even today, with the boom in Mobile Internet, you could argue that the large providers aren't massive growth stocks but instead are acting as traditional infrastructure providers, and many aren't back to their peak of ten years ago.

So what does this mean for cloud? Well this is another infrastructure play. SaaS and end-user provider pieces like Facebook are different types of companies, but cloud companies are fundamentally about infrastructure, so there are a few things to note:

1) It's probably too late to get in at the ground floor with startups, although a few will still grow spectacularly and pop
2) It's still worth getting into a cloud startup
3) Start looking for the exits when you compare your company with a "dull" company and think "hell, we could be worth as much as Walmart soon"... that is the time to jump

Stock and investment wise it's fine to ride the wave that these companies represent, as we should never avoid making money from the up-curve of a bubble.

In the long term it's a Telco model ala Vodafone or AT&T, so expect the big investments from Microsoft, IBM and Amazon to yield minor returns initially and then provide a long-term steady income, but at the sort of levels that would make people just hold onto the cash if they sat back and thought about it.


Integrating the cloud with an incremental backup solution

Okay, it's time for another "can someone please just build this" type of request. So let's get a few things straight:

1) I know that there are cloud backup solutions out there
2) Yes I know that theoretically you could set up rsync to do this.

Now here is the problem

On my computers I have three basic sets of data

1) Work related information - needs to be backed up securely; if I lose it then it's a pain, but anything decent will be in my email archive
2) Personally highlighted information - e.g. my flagged photos, stuff that is irreplaceable
3) Stuff I'd prefer not to lose - e.g. the rest of the photos and videos

Now effectively this could give three different backups, but instead I'd say it's really only two sets, and they need to work together:

1) A local disk backup
2) An occasional cloud backup

Now to people who run data centres this is all old hat, basically the "ship the tapes off-site" part, but I think we can make it a little smoother. So what do we need?

1) A way to specify what is in the backup
2) A way to specify what is backed-up further into the cloud
3) A way to specify the security applied to the backup files

I'm going to deal with 3 first. If you have an encrypted hard-drive or password-protected elements then clearly the default on the backup needs to be at least that strong. This presents a bit of an issue, as it means you have to be able to decrypt to determine the deltas; in other words an approach which is linked to the security profile of the user makes sense, and where it's whole-drive encryption it's easier, as once you are in, you are in.

So now to the other pieces. I won't cover item 1 as that is pretty standard in most backup solutions, so I'll go on to the second instead.

What we need is some way of specifying which elements we want sent to the cloud for offsite backup. As an example in iPhoto you might decide the flagged photos need to go to the cloud or you might decide that all photos go. These elements are automatically added to the local disk backup but would then be added, for instance daily, to a cloud backup.

Now the surprise here is why Time Machine doesn't have this sort of facility in conjunction with MobileMe or another Apple-branded cloud offering, or why Microsoft's massive spend on the cloud hasn't produced something similar. It really is an obvious idea: key sections (or all of it, if you have a fast enough connection and the cash) should float from your local storage onto the cloud.


Tuesday, January 18, 2011

Public Cloud is temporary, virtual cloud will move compute to the information

This is another of my "prior art" patent ideas; it's something I've talked about before, but reading pieces about increasing data volumes has made me think about it more and more.

The big problem with public cloud is that the amount of data that needs to move around is getting exponentially higher. This doesn't mean that public cloud is wrong, it just means that we will need to look more and more at what needs to be moved. At the moment a public cloud solution consists of storage + processing and it's the storage that we move around; that is, we ship data to the cloud and back down again. Amazon have recognised the challenge, so you can actually physically ship storage to them for large-volume pieces. There is, however, with the continuing rise of Moore's Law and virtualisation, another option.

Your organisation has lots of desktops, servers, mobiles and other pieces. The information is created and stored fairly close to these things. The data centre will also contain lots of unused capacity (it always does), so why don't we view it differently? Rather than shipping storage, we ship processing. You virtually provision a grid/Hadoop/etc infrastructure across your desktop/server/mobile estate as close as possible to the bulk data.

This is when it really gets cloudy as you now move compute to where it can most efficiently process information (Jini folks can now say "told you so") rather than shifting storage to cloud.

The principle here is that the amount of spare capacity in a corporate desktop environment will outstrip that in a public cloud (on a cost/power ratio) and, thanks to its faster network connections to the raw data, will be able to process the information more efficiently.
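
To make the trade-off concrete, here's a back-of-envelope sketch (all the sizes are made-up assumptions): the scheduler's only job is to minimise bytes moved, and shipping a process image to the data wins by orders of magnitude.

```python
def bytes_moved(site: str, data_location: str, data_size: int, code_size: int) -> int:
    # running where the data lives ships only the code;
    # running anywhere else means shipping the whole dataset
    return code_size if site == data_location else data_size

DATA_SIZE = 5 * 10**12   # 5TB of raw data sitting in the desktop estate
CODE_SIZE = 50 * 10**6   # a 50MB process/VM image

sites = ["public-cloud", "desktop-estate"]
best = min(sites, key=lambda s: bytes_moved(s, "desktop-estate", DATA_SIZE, CODE_SIZE))
print(best)  # -> desktop-estate: moving the code is ~100,000x cheaper
```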

So I predict that in future people will develop technologies that deploy VMs and specific process pieces (I've talked about this with BPEL for years) to the point where they can most efficiently process information.

Public clouds are just new data centre solutions, they don't solve the data movement problem. A truly cloud based processing solution would shift the lightest thing (the processing units) to the data rather than moving the data to the processing units. The spare capacity in desktop and mobile estates could well be the target environment for these virtual clouds.


Sunday, January 16, 2011

Using REST and the cloud to meet unexpected and unusual demand

I'm writing this because it's something that I recommended to a client about three years ago, and I know they haven't adopted it because they've suffered a number of outages since then. The scenario is simple: you've designed your website to cope with a certain level of demand and you've given yourself about 50% leeway to cope with any spikes. Arguably this means you are spending 1/3 more on hardware and licenses than you need to, but realistically it's probably a decent way of capacity planning without getting too complex.

Now comes the problem though. Every so often this business gets unexpected spikes; these spikes aren't a result of increased volume through the standard transactions but of a peak on specific parts of the site, often new parts related to (for instance) sales or problem resolution. The challenge is that these spikes are anything from 300% to 1000% over the expected peak, and the site just can't handle it.

So what is the solution? The answer is to use the power of HTTP, and in particular the power of the redirect. I'm saying that this is REST, though it's something I'd done before I knew about REST, but I'm not one to let a bit of reality get in the way of marketing ;) When I'd done it previously it was prior to cloud, but the architecture was basically the same.

First you split your infrastructure architecture into two parts

  1. The redirecting part (hosted in the cloud, or at least on a separately scalable part of your infrastructure)
  2. The bit that does the work


The redirect part just sends an HTTP redirect (code 307 so it isn't cached) to the new site; so let's say http://example.com goes to http://example.com/home. It's important to note here that this is the only page we are redirecting; it's not the case that every page has this, just the main page, because when there is a mega-spike it tends to come via the homepage.

Now I'm always one to talk about not being chatty, but the wonder of a redirect is that the user sees a URL flicker in their browser and then the normal page loads. This is certainly the overhead of a single extra call, but from experience this isn't a big deal in modern sites: where you have a page made up of multiple fragments the additional redirect doesn't add a significant amount, and it's only on the initial page load, which now takes two network hits rather than one… an increase in latency for that homepage but not much of an increase in terms of load time.
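
The redirecting tier really can be this small. Here's a minimal sketch (the URLs and port are placeholders, and a real deployment would sit behind proper infrastructure): the only thing ops have to do during a spike is repoint TARGET at the event microsite.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# where the homepage currently lives; during a spike ops repoint this
# at the event microsite, e.g. "http://example.com/event"
TARGET = "http://example.com/home"

class HomepageRedirect(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/":
            self.send_response(307)      # temporary redirect, not cached
            self.send_header("Location", TARGET)
        else:
            self.send_response(404)      # this tier only fields the homepage
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8080), HomepageRedirect).serve_forever()
```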

Now lets wander into the world of cloud, what does this get us then and why is it worth adding this overhead?

Well, when you have an extraordinary event you should really think about creating new pages for it rather than just tacking pages onto your normal site; if you are in a scenario where 70-98% of your visitors are looking at one specific piece of content then you are much better off thinking in terms of a microsite rather than adding it to your normal site.

All of the old URIs beyond the main page should still go to their old places, but the home page needs to be redirected to your new microsite. Now some people will be screaming "just use a load balancer", and they have a bit of a point, but I've always been a fan of offloading processing onto the client and this is exactly what the redirect does.

So the redirect site uses the same template as the home site in terms of CSS and key navigation, but it doesn't include all of the dynamic bits and fragments that were on the old front page; it includes two things:

  1. The information directly related to the extraordinary event
  2. Links off to the normal site

So now our original redirection goes from http://example.com to http://example.com/event, and we scale the event part to our new demand. If it's truly extraordinary then you are better off doing it as static pages and having people make the modifications (even if updates are every 5 minutes it's cost-wise a lot less than call centre staff). The point is simple: you are scaling the extraordinary using the cloud.

So spotted the big point here? It's something that you can do with a traditional infrastructure and then make the shift to cloud for what cloud is really good at - handling spikes. You don't have to redesign your current site to scale dynamically; you just have to use a very simple policy and have a cloud solution that you can rapidly put up in the event of a massive spike.

A couple of hints:

  1. Have the images for the spike ready to go and monitor at the redirect level to automatically kick in the spike protector (see the sketch after this list)
  2. Have an automatic process to dump a holding page onto the spike protector which tells people that more information is coming soon, they’ll tend to refresh rather than go to the rest of the site
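
A hedged sketch of that automatic kick-in (the threshold and the one-second sampling window are invented purely for illustration): the redirect tier itself is the natural place to watch the request rate and flip the target.

```python
import time

SPIKE_THRESHOLD = 500   # requests/sec that triggers the flip (made-up number)
NORMAL_TARGET = "http://example.com/home"
SPIKE_TARGET = "http://example.com/event"

class SpikeDetector:
    """Watches the request rate at the redirect tier and picks the target."""
    def __init__(self):
        self.window_start = time.time()
        self.count = 0
        self.target = NORMAL_TARGET

    def record_and_choose(self) -> str:
        self.count += 1
        elapsed = time.time() - self.window_start
        if elapsed >= 1.0:                       # one-second sampling window
            rate = self.count / elapsed
            if rate > SPIKE_THRESHOLD:
                self.target = SPIKE_TARGET       # kick in the spike protector
            self.count = 0
            self.window_start = time.time()
        return self.target
```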

You don’t need the normal commercial licenses as you can do it via static uploads (the normal site can do its dynamic magic on your old infrastructure) or a temporary OSS solution.

I'm often confused as to why people try and scale to meet extraordinary demand on a normal architecture. People seem not to realise that most spikes aren't a result of your core business getting 500% more popular overnight; they're normally a result of a specific promotion or problem, and it's that specific area which needs scaling. If it's a promotion you need to scale the people hitting that promotion and then look at either scaling the payment piece, putting in place a temporary process or throttling the requests through that part of the process. If it's an issue then treat the site like a news site and statically publish updates.

So there you go: by using the power of a simple command - "redirect" - you can take advantage of cloud quickly and effectively, and if you never get the extraordinary event it doesn't cost you much, if anything.

So get on with the power of redirect and link it to the power of the cloud, because that is when technical things are actually interesting: when they can simply be used to solve a problem cheaply that was previously too expensive to solve.


Monday, January 03, 2011

I don't care about cloud because I don't care about tin

I'll start with an admission: I've worked with cloud providers for quite a few years now, and the reason is not because I'm excited about elastic scalability of compute and storage... it's exactly because I don't care about elastic scalability of compute and storage. I've said before that Tin Huggers are a major blocker to cloud adoption... but now I wonder if cloud itself is actually part of a broader problem...

Cloud is just tin, virtual tin; it doesn't actually have a point, it doesn't actually do anything...

Great cloud services such as Amazon AWS are great not because they provide a bunch of tin, but because they provide a set of services which enable the virtualisation of tin. You aren't buying tin, you're buying the service, but all the service provides is virtual tin.

SaaS however is different, because SaaS provides you with some business capabilities; you are buying a set of business services and you are buying them "as is". Cloud is in one sense just a revision of current IT models where you are building your stuff on virtualised infrastructure. Sure it's a bit more dynamic, but at the end of the day you are still building your own stuff; the only difference is that rather than being hosted in a data centre you never visit, on tin you've bought, it's hosted in a data centre whose location you don't even know, on tin you are renting.

So my point is that cloud is in one sense dull, for the same reason I don't care about the telephone infrastructure: sure I make phone calls and I'm glad it's all there, but it's the phone and the services I care about, the infrastructure can go hang. This doesn't mean cloud or the phone infrastructure aren't important building blocks for lots of things, and clearly SaaS builds on cloud, but equally clearly SaaS is the bit that (in the words of the OASIS SOA RM) delivers the real world effect, while Cloud just forms part of the execution context.

Lots of cloud marketing out there is really just old-style IT with a minor bit of lip gloss applied by using "cloud", but does that actually deliver a better service to the business? Sure, sometimes the dynamism is good, but sometimes you'd be better off just buying a SaaS service, and people are using Cloud interchangeably with SaaS to deliberately muddy the waters and pretend that by doing Cloud they are in fact really doing SaaS and being more business-centric.

Cloud is IT centric, SaaS is business centric.

And that is why I care about SaaS and don't care about Cloud. I want to know what services the business can run, not how "dynamic" or "scalable" the tin is; I've heard those conversations all my career and they've always bored me. Software is scalable, tin just gets bigger (horizontally or vertically). Cloud is a diversion, sometimes a successful diversion, but in 80%+ of cases SaaS is the true revolution, and confusing it with virtual tin isn't helping move us forwards.

Clouds are boring, because Clouds do nothing; it's what you run on Clouds that counts, and most of the time SaaS is better than old-style custom build on a shinier set of tin.


Monday, December 20, 2010

When clouds really will be cloudy

People are talking about clouds and SaaS as the future, and I really believe that they are; in fact I'd say they are the present reality for leading companies. However one of the questions is always "where does this go"? Now there is one world view that says "everything on the cloud and delivered via HTML 5". This is an interesting view but it misses a couple of key questions:
  1. When does Moore's Law go away?
  2. When is it really a cloud?
The first point is that I'm sitting here with an iPad, iPhone, MacBook Pro and AppleTV (I am a fanboi) with miles more processing at my disposal than the commercial systems and websites I put live late in the last century. Clouds talk about dynamic deployment and portability... but normally within a specific data centre environment. When we think about services being consumed and co-ordinated, and assume that this is being done over the internet, then two questions raise themselves:
  1. What decides where a service is deployed?
  2. Why can't it be deployed to my phone?
What is the point of these questions? Well, my son and I can play Need for Speed: Undercover with one of us "hosting" the game on the iPhone or iPad. This is therefore an example of a piece of Software being delivered "as a Service" from one mobile device to another. Sure it's a specific use case, but it's a very real one to scale up.

Why wouldn't the "Rich" interface still be deployed to the device but now as a client service? Why wouldn't the information cache and some clever software that proactively populates the cache be deployed to the local device?

Now folks like RightScale already do deployment and management across multiple cloud platforms, so why wouldn't this be extended to ever more powerful mobile devices, laptops and other devices? Why couldn't my operating system be deployed as part of the cloud rather than being just a consumer, with elements such as latency determining where the most effective deployment is for each service in a network? Think about all those Apple iPhone apps running in the background on millions of devices... who needs more capacity than that, and what latency problems are there when the app is actually spread across a few devices in the local area?

Now there are challenges to this, but there are also big advantages: your data centres are cheap because you don't need them anymore, you just deploy to your clients' devices.

This clearly isn't a solution for 2011 but it is something I firmly believe will happen, and it's driven by the power of devices. Sure HTML 5 is cool, sure Amazon AWS is neat and sure SaaS is wonderful.... but the day that clouds really become cloudy is when no-one can point at the great big data centre that it ultimately all connects to.



Monday, June 21, 2010

Tin-huggers the big problem for cloud adoption

Going through yet another one of those "holy crap that infrastructure is expensive" occasions recently I did a quick calculation and found that we could get all of our capacity at an absolute fraction of the internal price. Think less than 1/10th of the quoted price when installation was factored in.

What stopped us shifting? Well, a little bit of compliance, which we might have overcome, but the big stopper was the tin-huggers.

Tin-huggers are people who live by the old adage "I don't understand the software, I don't understand the hardware but I can see the flashing lights" which I've commented on before.

Tin-huggers love their tin, they love the network switches, they love the CPU counts and worrying about "shared", "dedicated", "virtualised" and all of those things. They love having to manually upgrade memory and having to select storage months or years in advance. Above all of these things they love the idea that there is a corner of some data centre that they could take their tin-hugging mates into and point and say "that is my stuff".

Tin-huggers hate clouds because they don't know where the data centre is and their tin-hugger mates would laugh at them and say "HA! Google/Amazon/Microsoft/etc own that tin, you've just got some software". This makes the tin-hugger sad and so the tin-hugger will do anything they can to avoid the cloud. This means they'll play the FUDmeister card to the max and in this they have a real card to play...

Tin-huggers are the only ones who work in hardware infrastructure design; software people couldn't give a stuff.

This means its all tin-huggers making the infrastructure decisions, so guess what? Cloud is out.

Tin-huggers are yet another retarding force on IT. Sometimes the software folks can get it out and work with the business, but too often the tin-hugging FUDmeistering is enough to scare the business back into its box.

It's time to build a nice traditional bypass right through the tin and into the cloud, and let the tin-huggers protest from their racks as we demolish them from underneath their feet.


Tuesday, October 13, 2009

When you know it isn't a cloud

Following up on the previous post I thought I'd do the REALLY obvious ones that indicate it isn't a cloud. James' list wasn't nearly cynical enough in light of the things that are claimed to be a cloud.
So here goes
  1. If it's just a single website with no backups storing stuff to disk then it's just a standard WEBSITE, not a cloud (hello Sidekick)
  2. If it's a physical box that is meant to help you manage a virtual environment... then it's not a cloud (hello CloudBurst appliance)
  3. Seriously, if you are just a single website doing stuff with a database you aren't a cloud (hello Soonr)
  4. No seriously, if it's about buying a physical box it isn't a cloud (like HP's spin that they are just cloud enabling... nice weasel room)

And I could go on. The point is that cloud is an infrastructure thing; it is IaaS in the "aaS" hierarchy. PaaS can have a go at being cloud, but SaaS is "just" something that might be deployed to a cloud. Having a website (SaaS solution) that runs on Amazon doesn't make that SaaS solution "a cloud"; it makes it a SaaS solution hosted on a cloud.

The hardware point is that capital expenditure is exactly what a cloud isn't about, and physicality is exactly what a cloud isn't about. You want virtual compute and storage that you pay for as a utility. This is the economic model of cloud.

So, in the words of Kryten: I know that strictly speaking I've only identified two things, but they were such big things I thought I'd say them twice.


Monday, October 12, 2009

Not a cloud? Then what is it?

Redmonk are one of those smaller analyst companies who make up for a lack of numbers with a refreshing depth and honesty. James Governor's latest, and I assume light-hearted, view of "15 Ways to Tell Its Not Cloud Computing" however does a bit of a disservice to the debate around clouds. It's mostly right, but with a few glaring pieces I felt I had to disagree with.
  1. If you peel back the label and its says “Grid” or “OGSA” underneath… its not a cloud.
    1. Fair enough if it's about people selling last year's technology with this year's sticker, but.....
    2. If it's a question of doing a deep dive and finding that underneath there is a "Grid" that you don't care about, then I don't think this discounts it.
  2. If you need to send a 40 page requirements document to the vendor then… it is not cloud.
    1. I'll go with this one... with the caveat that governments can turn anything into 40 pages ;)
  3. If you can’t buy it on your personal credit card… it is not a cloud
    1. Nope, I can't accept this. If I'm a Fortune 500 company and I'm buying millions of dollars a month in dynamic capacity then I want a professional invoicing and billing approach. When governments build their own clouds they won't be billing to credit cards, and for most companies this is an irrelevance.
  4. If they are trying to sell you hardware… its not a cloud.
    1. Absolutely with it
  5. If there is no API… its not a cloud.
    1. This is really to enable 3rd party tools integration and it's a good thing. Fair enough
  6. If you need to rearchitect your systems for it… Its not a cloud.
    1. Very very wrong, for a simple reason: shifting boxes into the cloud and doing the same thing you've done before is easy; having a software application that can actually dynamically scale up and down and handle scalable data stores is harder.
    2. To take best advantage of the cloud you need systems that can scale down and up very quickly. LOTS of systems today do not get the full value out of the cloud (as opposed to just virtual infrastructure) and will require re-architecting to take advantage of it.
  7. If it takes more than ten minutes to provision… its not a cloud.
    1. Depends what we call provisioning. I've got 5TB of data to process that needs pre-loading into the database image. Does this count as provisioning, as it's going to take more than 10 minutes?
    2. If it means 10 minutes to get a new compute instance for an existing system then fair enough, but that isn't the same as provisioning a whole system in the cloud.
  8. If you can’t deprovision in less than ten minutes… its not a cloud.
    1. As an IT manager once told me "I can turn any system off in 5 seconds if I have to"... "just kick out the UPS and pull the plugs"
    2. Fair enough point though in that it would at least be managed in a cloud.
  9. If you know where the machines are… its not a cloud.
    1. Really? So Amazon don't have a cloud as I know that some of my instances are in the EU?
    2. If you mean "don't know exactly physically where a given compute instance is" then fair enough, but most companies don't even have a clue where their SAP systems are physically running.
    3. Also against this one is the government cloud and security requirements. I need to know that a given instance is running in a secure environment in a specific country. This doesn't stop it being a cloud; it just means that my non-functional requirements have geographical specifications in them.
  10. If there is a consultant in the room… its not a cloud.
    1. Cheap gag. You could add "if a vendor says it is... it is not a cloud"
  11. If you need to specify the number of machines you want upfront… its not a cloud.
    1. Fair enough
  12. If it only runs one operating system… its not a cloud.
    1. Why does this matter? Why can't I have a Linux cloud or a Windows cloud? Why is OS independence critical to a cloud?
  13. If you can’t connect to it from your own machine… its not a cloud.
    1. Non functionals (e.g. Government) might specify this. It depends what connection means. I could connect to the provisioning element without being able to connect to the running instance.
  14. If you need to install software to use it… its not a cloud.
    1. Server or client side? If it's the latter then I'd disagree: how will you use something like Amazon without installing a browser or the tools to construct an AMI?
    2. If it's the former.... I take it that it isn't the former
  15. If you own all the hardware… its not a cloud.
    1. Or you own the cloud and are selling it. What this would mean is that a mega-corp couldn't turn its current infrastructure into a cloud, and I don't see why they can't.
  16. If it takes 20 slides to explain…. its not a cloud
    1. Fair enough again. As long as this is the concepts rather than a code review!

So pretty much I agree with 50% and disagree with the remainder. The point is that cloud is still arbitrary and there are some fixed opinions. Utility pricing is clearly a given, but credit cards aren't (IMO) required.

One big way to tell it's not a cloud is of course if you can see the flashing lights.




Wednesday, September 16, 2009

Business Utilities are about levers not CPUs

"As a Service" is a moniker tagged against a huge number of approaches. Often it demonstrates a complete marketing and intelligence fail and regularly it just means a different sort of licensing model.

"As a Service" tends to mean utility pricing and the best "As a Service" offers have worked out both what their service is and what its utility is. Salesforce.com have a CRM/Sales Support service (or set of service) and the utility is "people". Its a pretty basic utility and not connected to the value but this makes sense in this area as we are talking about a commodity play and hence a simple utility works.

Amazon, with their Infrastructure as a Service/Cloud offer, have worked out that they are selling compute, storage and bandwidth. Obvious eh? Well not really, as some others appear to confuse or mesh the three items together, which doesn't really drive the sort of conservation behaviour you'd want.

The point about most of these utilities though is that they are really IT utilities. SFDC measures the number of people who are allowed to "log on" to the system. Amazon measure the raw compute pieces. If you are providing the base services this is great. But what if you are trying to build these pieces for business people, and they don't want to know about GB, Gbps, RAM or CPU/hrs? Then it's about understanding the real business utility.

As an example let's take retail supply chain forecasting, a nice and complex area which can take a huge amount of CPU power and where you have a large bunch of variables:
  1. The length of time taken to do the forecast
  2. The potential accuracy of the forecast
  3. The amount of different data feeds used to create the forecast
  4. The granularity of the forecast (e.g. Beer or Carlsberg, Stella, Bud, etc)
  5. Number of times per day to run it
Now each of these has an impact on the cost, which can be estimated (not precisely, as this is a chaotic system). You can picture a (very) simplified dashboard:




So in this case a very rubbish forecast (one that doesn't even take historical information into account) costs less than its impact; in other words you spend $28 to lose $85,000 as a result of the inaccuracy. As you tweak the variables, the price and accuracy vary, enabling you to determine the right point for your forecasts.

The person may choose to run an "inaccurate but cheap" forecast every hour to help with tactical decisions, run an "accurate and a bit expensive" forecast every day to help with tactical planning, and run a weekly "pretty damned accurate but very expensive" forecast.

The point here is that the business utilities may eventually map down to storage, bandwidth and CPU/hrs, but you are putting them within the context of the business users and hiding away those underlying IT utilities.
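
As a hedged sketch of what that mapping might look like (every rate and multiplier below is invented purely for illustration), the business user turns levers while the code translates them into an IT-utility cost behind the scenes:

```python
def forecast_cost(accuracy: float, feeds: int, granularity: str,
                  runs_per_day: int) -> float:
    """Estimate the daily cost ($) of a forecast from business-facing levers."""
    cpu_hours = (accuracy ** 2) * 100 * feeds            # accuracy dominates cost
    cpu_hours *= {"category": 1, "brand": 4, "sku": 20}[granularity]
    return round(cpu_hours * runs_per_day * 0.10, 2)     # $0.10 per CPU-hour

# the business user only sees these knobs, never the CPU/hrs underneath
print(forecast_cost(accuracy=0.2, feeds=1, granularity="category", runs_per_day=24))
print(forecast_cost(accuracy=0.9, feeds=5, granularity="sku", runs_per_day=1))
```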

Put it this way: when you set the oven to "200 C" you are choosing a business utility. Behind the scenes the power company is mapping this to a technical utility (kWh), but your decision is based on your current business demand. With the rise of smart metering you'll begin to be able to see the direct impact of your business utility decision on cost.

This is far from a simple area, but it is where IT will need to get to in order to clearly place the controls into the hands of the business.

They've paid for the car, shouldn't we let them drive?

Friday, July 03, 2009

Vendor Managed Infrastructure - Are clouds just a VMI solution?

Tweeting with Neil Ward-Dutton, I had a thought about what he has written on public v private clouds, and it made me think that the only real difference between them is in who manages and who pays. This might sound like a big thing, but taking a leaf out of the retailers' book it doesn't need to be that large.

Vendor Managed Inventory is simply where a supplier takes over the management of a product's inventory and ensures that it meets the buyer's SLAs (availability, price, etc). The advantage of this for the buyer is that they don't need to worry about ordering; they just need to track against the SLA.

What is a cloud proposition if not that? Furthermore, if we take this to its logical conclusion then even private clouds could be delivered on the same economic model as public ones. Maybe not with quite the same leverage, but why couldn't IBM, HP or whomever supply you with hardware and software infrastructure against an SLA that you define, and be responsible for ensuring that the capacity and pricing of the infrastructure meets that SLA? What is security and separation but part of an SLA?

My point is that "Private Cloud" really tends to mean "Still hugging your own tin" and that the real impact of cloud is in the economic model of procurement (the switch from CapEx to OpEx) and in the scaling of infrastructure independently of the current direct demand (i.e. you don't pay for Amazon to buy more hardware, that is part of their calculation to meet the SLAs).

So in 5 years will there really be Private Clouds with a CapEx model, or will people be demanding that the H/W vendor provision capacity in a private environment against a specific SLA? In other words, will VMI be applied to infrastructure in the same way as a supermarket applies it to apples?

Personally I think it will, and that this makes strong financial sense for both businesses and suppliers: it changes the relationship and enables hardware vendors to undertake hardware refresh directly (after all, if their PowerPoints are to be believed, you'll always save money this way) while the business gets a defined capacity model.

Don't believe me? Well an awful lot of companies are already doing just this around storage. Getting "private" pieces of a great big SAN and paying a utility price for it.

This to me means that the current sales pitches of end-user purchases of "cloud" infrastructure are just a temporary, marketing-led blip and that the future is VMI for everything.


Tuesday, June 16, 2009

Why would a cloud appliance be physical?

IBM often lead in technology areas, and with the history of LPAR on the mainframe they've got a background in virtualisation that most competitors would envy. So clearly with cloud they are going to go after it. Sometimes they'll do what they did with SOA and tag a (IMO) dog of a product with the new buzzword (MQSI = Advanced ESB - I'm looking at you) and other times they will actually do something right.

Now a product that can handle deploying and managing instances sounds like a good idea. IBM have created just such a product, which basically acts as a dispenser and manager for WebSphere Hypervisor Edition images. The WebSphere CloudBurst Appliance will deploy, it will reclaim, it will monitor and it will manage. Very nice for people who have large WebSphere estates.

And this is what the product looks like

Yes, I did say looks like, because IBM have built this cloud manager into a physical box. Now, appliances for things that need dedicated hardware acceleration I understand, but why on earth is something that is about managing virtual machines, something that might be doing bugger all for large periods of time, not itself a virtual image?

Given that the manager is unlikely to be a major CPU hog it seems like an ideal thing to be lobbed into the cloud itself (yes, I know it's not really a cloud, but let's go with the marketing people for now, they've made a bigger mistake here IMO). If it was in the cloud then you could add redundancy much more easily, and of course it wouldn't require its own dedicated rackspace and power.

Like I said, I can understand why you might like a virtual machine to do what the CloudBurst appliance does, but I have no idea why you would want a dedicated physical machine to work on a low-CPU task. As IBM expand this technology into DB2 and other WebSphere elements you could end up with 20 'CloudBurst' appliances managing and deploying to a single private cloud. How much better for these to be cloud appliances in the truest sense and to be virtualised within the cloud infrastructure itself.

A physical box to deploy virtual images makes no sense at all.



Monday, June 01, 2009

SaaS and the Cloud a development challenge

A while back I blogged on how to do ERP with a middleware solution. The point was to leave the package untouched while adding your customisations in an environment that was better suited to the challenge. It made upgrades easier and also would help to reduce your development timescales.

Well the world is moving on and now I'm looking at a challenge of SaaS + cloud: a standardised package delivered as SaaS, where we need to extend it using a cloud solution to enable it to scale to meet some pretty hefty peaks. So how to meet this challenge? Well, first off let's be clear: I'm not talking about doing Apex in Salesforce; I'm talking about treating the SaaS solution in the same way as I'd treat an ERP, namely as the service engine that provides the transactional functionality. What the cloud part would need to do is the piece on top - the user interaction - pulling in some other web-delivered elements in a late integration piece.

Model-wise it's the same


We've got a bunch of back-end services with interfaces (Web Services mainly) and we need to build on top.

The first new challenge is that of bandwidth and latency. "Build for the web" is all very well, but there is nothing quite like a gigabit ethernet connection and six feet of separation. Here we have the information flowing over the web (an unreliable network, performance-wise) and we need to provide a good responsive service to the end user. So clearly we need to invest a bit in our internet bandwidth. Using something like Amazon clearly helps, as they've done a lot of that already, but you do need to keep it in mind, and it becomes more important to make sure the solution isn't "chatty" as those latency hops really add up.
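
To see how quickly those hops add up, a back-of-envelope calculation (the round-trip figures are illustrative assumptions) makes the case for batching calls:

```python
def page_latency_ms(sequential_calls: int, rtt_ms: float) -> float:
    # each non-pipelined round trip pays the full network latency
    return sequential_calls * rtt_ms

print(page_latency_ms(20, 1))    # 20ms: a chatty design is fine on a LAN
print(page_latency_ms(20, 80))   # 1600ms: the same design is painful over the web
print(page_latency_ms(2, 80))    # 160ms: batch the calls and it works again
```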

The next piece of course is that we need to cache some information. "Use REST" I hear people cry; the trouble is that even if I used REST (or rather, if the SaaS provider did) then I'd still have to over-ride the HTTP cache headers and build some caching in myself. Why? Well the SaaS solution has a per-transaction model in certain sections and a per-user one in others; this means I need to limit users until the point at which they REALLY have to be known to the SaaS solution, and I need to cache transactions in a way that reduces the money going to the SaaS vendor. So here the caching is 20% performance and 80% economic. It's an interesting challenge in either REST or WS-*, as you are going against the policy that the service provider would set.

So the objective here is to build proxy services on the cloud side which handle these elements. These proxies are going to be reasonably dumb, maybe with a basic rules engine to control pieces, but they are there to make sure that the "web" doesn't get in the way of performance. These proxies will however have a database that enables searching across sub-queries as well as the matching of exact queries (e.g. "find me all letters between A and Z" should enable the query "find me all letters between M and U" to be answered as well without a SaaS hit). I'm not sure yet whether we will go for a full database or do some basic pieces ala Amazon.
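
A minimal sketch of that sub-query idea (the names and the stand-in SaaS call are hypothetical; a real proxy would persist this in the database mentioned above): cache each paid range query and answer any contained sub-range locally.

```python
class RangeCache:
    """Caches range-query results so any sub-range is served without a SaaS hit."""
    def __init__(self, fetch_from_saas):
        self.fetch = fetch_from_saas   # the metered, per-transaction SaaS call
        self.ranges = []               # list of (lo, hi, sorted_rows)

    def query(self, lo, hi):
        for c_lo, c_hi, rows in self.ranges:
            if c_lo <= lo and hi <= c_hi:              # a cached superset exists
                return [r for r in rows if lo <= r <= hi]
        rows = sorted(self.fetch(lo, hi))              # miss: pay the vendor once
        self.ranges.append((lo, hi, rows))
        return rows

# stand-in for the real (per-transaction priced) SaaS service
def saas_letters(lo, hi):
    return [chr(c) for c in range(ord(lo), ord(hi) + 1)]

cache = RangeCache(saas_letters)
cache.query("A", "Z")   # one billable SaaS transaction
cache.query("M", "U")   # answered from the cached A-Z result: no SaaS hit
```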

"Build for the web" is a mantra that many people are supporting these days, and there are good reasons for making your services available via that channel. But combining solutions and still delivering high performance is a significant challenge particularly when economic contracts can rule approaches such as the basic REST approaches redundant.

So when looking to build for the web, think about the structure of your application and in particular the impact of latency and bandwidth on its performance as you look to consume other web applications. And if you do have a financial contract in place with a SaaS vendor, be very clear what you are paying for.
