As part of my ongoing quest not to become totally subsumed in PowerPoint I decided to have a crack at Google App Engine, which is the Google bit of cloud computing. This first release is pretty limited in that there is no multi-threading allowed (a bit of a bugger for async) and you have to use Python. What I was interested in, however, was the efficiency of App Engine, so I thought I'd come up with a test that works today but that will scale to multi-threading if it becomes available. This did mean I had to learn how to programme in Python, and it would once again put my dislike of "dynamic", aka "code in haste, repent at leisure", languages to the test.
When learning new languages there are a couple of things that I do and for this one I went right back to my earliest coding experiences....
The above works by clicking on where you want to zoom in, and clicking on "zoom out", well, that zooms out. I've also added a gadget to the sidebar.
The reason for picking the Mandelbrot generator is that it's pretty simple code, which makes it good for a new language, but it also stresses the processor a bit with the maths. The basic set-up is that there are two Google App Engine applications: the first does the calculation and then does a "POST" of the results to the second. This gives the obvious second stage, which is to start profiling the results to see how App Engine performs.
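For anyone who hasn't written one, the core of a Mandelbrot generator is just the escape-time iteration repeated per pixel, which is why it makes a handy CPU benchmark. A minimal sketch of that core (the function names are mine, not the code running on App Engine, and the results would then be POSTed to the second app):

```python
def escape_count(re, im, max_iter=255):
    """Iterations before z = z*z + c escapes |z| > 2; max_iter if it never does."""
    z = complex(0, 0)
    c = complex(re, im)
    for i in range(max_iter):
        if abs(z) > 2.0:
            return i
        z = z * z + c
    return max_iter

def render(width, height, x_min=-2.0, x_max=1.0,
           y_min=-1.5, y_max=1.5, max_iter=255):
    """Compute a width x height grid of iteration counts for the given view."""
    rows = []
    for j in range(height):
        im = y_min + (y_max - y_min) * j / (height - 1)
        rows.append([escape_count(x_min + (x_max - x_min) * i / (width - 1),
                                  im, max_iter)
                     for i in range(width)])
    return rows
```

The inner loop is pure arithmetic with almost no I/O, which is exactly what makes it a reasonable probe of the raw CPU behind a request.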
Some things I've found out so far. The basic image generator can generate images at "gadget", "blog", "medium" and "large" sizes, but the time taken for Google App Engine to generate the large image is too long, so it times out every time I've tried it. This suggests to me that Google's time enforcement on requests is purely at the in/out level rather than being based on CPU load, which does mean that at times of higher load on the Engine cloud you will see more rejected requests.
The next thing I've found out is that, in raw time terms (obviously I can't run "time" on App Engine), it returns about 30% faster than my MacBook Pro, which isn't too bad but certainly seems to indicate it's standard CPUs lurking under the covers rather than anything super special. Mandelbrot is quite a good test here as it's mainly CPU bound with a bit of memory; a faster CPU, or the ability to multi-thread, makes a big difference.
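Since you can't run "time" inside App Engine, a comparison like this has to be crude wall-clock timing from the client side: run the same workload locally and via an HTTP request to the cloud, and take the best of a few runs to dampen the noise. A minimal sketch of that sort of helper (my naming, not from the original code):

```python
import time

def best_time(fn, runs=3):
    """Best wall-clock time over a few runs of fn() -- crude, but enough
    to compare the same Mandelbrot render locally vs. behind an HTTP call."""
    best = None
    for _ in range(runs):
        start = time.time()
        fn()
        elapsed = time.time() - start
        if best is None or elapsed < best:
            best = elapsed
    return best
```

Taking the minimum rather than the mean matters for the remote case, as a single slow request (queuing, network) would otherwise swamp the comparison.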
The next thing I've found is that the offline version and the online version do not appear to work the same. Right now the reason I'm not giving public access to the results is that I have two different problems being reported between the offline version and the online version. This is a real pain in the arse, as it means debugging is effectively a game of Russian roulette. I eventually tracked the problem down to a user error in storing and retrieving information, but the reported errors were not very helpful and differed between offline and online.
So the second bit is the information reporting, and I've decided to further my REST experience; hell, you can't slag stuff off until you've really seen how little it offers ;) This gives the results. In true REST style it currently does bugger all, but I will add new features over time as I learn more stuff to do. I'm using Atom and the standard Google output from the data model rather than writing any fancy XML stuff myself at this stage. The results URI takes a parameter of "server", which is the server name (e.g. localhost:80 for my local stuff and georipper.appspot.com for the cloud stuff).
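To make the "server" parameter concrete, here is the rough shape of the filtering it implies, with dummy example records (the real app stores timings in the App Engine datastore and serves them via its standard Atom output, not hand-rolled code like this; the field names and values below are mine, purely for illustration):

```python
# Dummy records purely for illustration -- not real measured timings.
records = [
    {"server": "localhost:80", "size": "gadget", "millis": 840},
    {"server": "georipper.appspot.com", "size": "gadget", "millis": 590},
]

def results_for(server):
    """Return the timing records for one server, as ?server=... selects."""
    return [r for r in records if r["server"] == server]
```

Keeping the server name as a plain request parameter means the one results service can hold timings for both the local and the cloud deployment side by side.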
The timings are taken from the request to the point where the image is created. To create the image I needed an all-Python image library; the rest is pretty standard algorithm stuff.
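The all-Python constraint exists because App Engine doesn't let you load C-extension libraries like PIL, so the image has to be encoded in pure Python. The original post doesn't name the library used; as an illustration of what pure-Python encoding looks like, here is a minimal writer for the plain-text PGM format, taking a grid of iteration counts as greyscale values:

```python
def to_pgm(grid, max_val=255):
    """Serialise a grid of iteration counts as a plain-text (P2) PGM image.
    A stand-in illustration only -- not the library used on App Engine."""
    height = len(grid)
    width = len(grid[0])
    lines = ["P2", "%d %d" % (width, height), str(max_val)]
    for row in grid:
        lines.append(" ".join(str(min(v, max_val)) for v in row))
    return "\n".join(lines) + "\n"
```

A real deployment would want PNG for browsers, which is exactly why a pure-Python PNG encoder was a prerequisite rather than a nice-to-have.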
So what I've found out so far, in roughly 12 hours of playing with Google App Engine, is that it works like any remote server. Power-wise it's nothing special, and the lack of multi-threading kills it for lots of async tasks. I can see why Google are doing this, but a limited threadpool or even green threads would be nice. The kill threshold also doesn't seem to be very high, and once I've got some raw info I'll start doing analysis.
It's not for space cadets, therefore, and it isn't (IMO) something that lends itself to overly complex tasks. Its sweet spot (and ironically the bit I had most problems with) appears to be data access and retrieval, and acting as a platform for all those weird social apps that explode and die over the course of a month. In other words, it's part of the further commoditisation of technology and the ongoing reduction in costs for basic web applications.
I'll write in a bit more detail about my experiences with Python and the Engine, but the first bit was to start getting some data. It appears that response times are pretty dependent on time of day; sometimes even a single zoom click on the top-level image doesn't render in time.
1 comment:
It is not surprising that the machines used are commodity CPUs because the Google File System itself is designed to run on these computers. And what GFS optimizes for is stability, reliability and high availability. Hence, as you point out, the main purpose of Google App Engine is to allow developers to write more scalable apps like Facebook apps, etc. Interesting analysis, nevertheless.