With my first Google App Engine application I deliberately decided to pick something that is CPU heavy but which, once threading is supported, could scale horizontally, to help see how "wide" the cloud is.
Or maybe not.
Checking my usage results I've come to the conclusion that a thread gets killed at around 9.5 seconds, so it's the deep-calculation requests or a very big image that hit it. However, this isn't the only quota that exists.
So to be clear, this is the smallest Mandelbrot set: it's 200x200 and at this stage capped at a maximum of 16 iterations. That still means a potential for a lot of calculations, but it really does indicate that Google are aiming at the data-access end of the market rather than offering a commodity platform for doing heavy calculations (a sort of small-problem competitor to IBM's Blue Gene).
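For context, the CPU-heavy part is the classic escape-time loop. This is a hypothetical sketch of what a renderer like mine does, not the actual code of my mandelbrot.py; the names `escape_count` and `render`, and the coordinate window, are my own here.

```python
MAX_ITER = 16   # the iteration cap mentioned above
SIZE = 200      # the 200x200 image mentioned above

def escape_count(cx, cy, max_iter=MAX_ITER):
    """Iterate z = z^2 + c; return how many steps before |z| exceeds 2."""
    zx, zy = 0.0, 0.0
    for i in range(max_iter):
        if zx * zx + zy * zy > 4.0:
            return i
        zx, zy = zx * zx - zy * zy + cx, 2.0 * zx * zy + cy
    return max_iter

def render(size=SIZE, x_min=-2.0, x_max=1.0, y_min=-1.5, y_max=1.5):
    """Return a size x size grid of iteration counts (the CPU-heavy bit)."""
    step_x = (x_max - x_min) / size
    step_y = (y_max - y_min) / size
    return [[escape_count(x_min + px * step_x, y_min + py * step_y)
             for px in range(size)]
            for py in range(size)]
```

Even at 16 iterations that inner loop runs up to 640,000 times for one image, which is why the per-request CPU quota bites.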
In terms of what this means: on my MacBook Pro a "time python mandelbrot.py" run (which has to include starting Python itself, about 0.2 seconds appears right), creating the "default" image, takes the following
So with the Python startup piece taken off we have around 3.3 seconds of user + sys, and according to the logs the same request via the web is 9.4 times over quota. That puts the request quota at about 0.35 seconds of raw grunt, though it will let you hang around in data access for up to that 9.5 seconds.
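The quota figure is just the local CPU time divided by the over-quota multiple from the logs; a quick back-of-the-envelope check:

```python
# Back-of-the-envelope check of the figures quoted above.
local_cpu = 3.3      # seconds of user + sys on the MacBook Pro
over_quota = 9.4     # how far over quota the logs said the web request was

quota = local_cpu / over_quota
print(round(quota, 2))  # roughly 0.35 seconds of raw CPU per request
```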
Right now Google are playing very nice with me, given that pretty much every request passes or smashes the quota. Some go a LONG way past.
The next zoom in then failed (at a recorded 31.7 times over quota, but around 9.5 seconds total time).
So fair play to Google for letting me abuse their infrastructure, and it will be interesting to see what they do around CPU-heavy tasks, especially those that could go horizontal. I could see a real market for companies who effectively want 10,000 CPUs for a short period of time to calculate a forecast or similar, where right now the cost/benefit analysis doesn't stack up for a full hardware purchase but buying a whole load of Gigacycles (horizontally scaled) would be a great fit.
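To sketch what "going horizontal" could look like for a render like this: the image divides naturally into independent tiles, each small enough to fit under a per-request CPU quota and each runnable on a separate worker. This is purely illustrative; `tile_jobs` is my own name, not anything from App Engine's API.

```python
# Hypothetical sketch: carve a 200x200 render into independent tiles so each
# piece stays under a per-request CPU quota and can run on its own worker.

def tile_jobs(size=200, tiles_per_side=4, x_min=-2.0, x_max=1.0,
              y_min=-1.5, y_max=1.5):
    """Yield (pixel_origin, coordinate_window) for each independent tile."""
    tile = size // tiles_per_side
    dx = (x_max - x_min) / tiles_per_side
    dy = (y_max - y_min) / tiles_per_side
    for ty in range(tiles_per_side):
        for tx in range(tiles_per_side):
            yield ((tx * tile, ty * tile),
                   (x_min + tx * dx, x_min + (tx + 1) * dx,
                    y_min + ty * dy, y_min + (ty + 1) * dy))

jobs = list(tile_jobs())
print(len(jobs))  # 16 independent work units to farm out
```

Because no tile depends on any other, the same carve-up works whether you have 16 workers or 10,000; that independence is exactly what makes this class of problem such a good fit for rented cycles.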