One of the questions I get asked is 'what hardware should we run this on?'. I've said for years that I don't care about the tin: from a differentiation perspective the tin is irrelevant. Now, before people leap up and say 'but X is 2x faster than Y', let me make a couple of points:
- Software tuning and performance will have miles more than a 2x impact
- The software licenses will probably cost more than the hardware
- The development will definitely cost more than the hardware
- The software support and maintenance will definitely cost more than the hardware
So the question to ask isn't 'which box is fastest?', it's 'what is the natural platform for this software?'
The natural platform is the one that the software is tested on. That doesn't mean it's the hardware platform the software vendor wants you to buy; it's the one the developers are actually using to test that the software works. Why does this matter? Well, when there is a bug, and let's face it there are always bugs, then instead of just having the support folks available to fix it you have all of the developers as well, because they don't need to switch from their current environments.
Like I say, this doesn't mean 'pick a hardware platform from the software vendor'; it means pick the natural platform. So if you think, or know, that the developers are on Linux and deploying to Linux servers for test, then you know there are more people able to help on Linux than on anything else. If they are developing on Windows and deploying to Linux, then either of those platforms is natural.
As an example of what happens when you don't, let me take you back to 2000. I was working on a project where we were using MQSeries, JMS and, of course, Java. We developed it on Windows and deployed it to Linux for test. For production, however, we'd been convinced to go for AIX to give us some major grunt. We deployed the code into UAT and... it broke. Our assumption was that this was our fault because we didn't know AIX that well, and surely running IBM's AIX with IBM's Java implementation, IBM's JMS implementation and IBM's MQSeries meant that it had all been tested together; this was their flagship platform, surely this was what it was meant to run on?
36 hours later we were talking directly to the product team, who identified the problem as a memory issue that only occurred on AIX. In other words, our configuration (pure IBM) had clearly never even been tested.
On another project, where the database environment was different from the one the package provider used and the hardware was a mainframe, we had massive issues just getting hold of anyone who knew our set-up well enough to fix problems.
These are normal problems, and the key to them all is that it's not about whether box X is faster than box Y; it's about getting the best support and fixing problems quicker. I'm not arguing that you shouldn't get an environment that scales; what I'm arguing is that when you look at the cost of tin, performance is a distant second to the people cost of fixing problems when things go wrong.
The problem is that normally the people buying tin are just buying tin. In these days of virtualisation it's about picking the right OS inside your virtualised server, but it's still important to think 'natural' when choosing platforms.
Pick the natural platform, not the fastest shiny box.