[Planetlab-users] Why so high load averages?
joe at sics.se
Wed May 12 05:02:35 EDT 2004
I've just got my first application running on a few PL nodes, and
have observed the following:
The nodes I am running on are incredibly loaded
The first node I tried was like this:
03:28:29 up 12 days, 4:02, 0 users, load average: 13.23, 35.23, 35.21
Load averages of 35 make the machine virtually unusable.
The second machine gave:
04:12:21 up 12 days, 14:13, 0 users, load average: 3.23, 4.70, 4.60
which is also pretty bad.
At this point I checked around, and yes, the load averages
virtually everywhere are horrific.
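(For anyone doing a similar survey: the load averages in these uptime-style status lines can be pulled out mechanically. A minimal Python sketch - the helper name is mine, not any PlanetLab tool:)

```python
import re

def parse_load_averages(uptime_line):
    """Extract the 1-, 5- and 15-minute load averages from an
    uptime-style status line (hypothetical helper, not a PL tool)."""
    m = re.search(r"load average:\s*([\d.]+),\s*([\d.]+),\s*([\d.]+)",
                  uptime_line)
    if not m:
        raise ValueError("no load average found in line")
    return tuple(float(x) for x in m.groups())

line = "03:28:29 up 12 days, 4:02, 0 users, load average: 13.23, 35.23, 35.21"
print(parse_load_averages(line))  # (13.23, 35.23, 35.21)
```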
<< With load averages like this it is impossible to do realistic
experiments - I cannot perform real measurements that reflect
real-world behaviour, since in the real world the machines I interact
with are not this heavily loaded. >>
Now http://www.planet-lab.org/php/status.php lists 10 different
tools that in some sense measure the performance of the system.
A nightmare scenario is one where every research group in the world
develops tools to measure the performance of the system, and they all
run at the same time, all measuring the same things.
What I had naively hoped when we joined PlanetLab was that I could
write an application, deploy it on a lot of nodes, and see how it
ran in realistic conditions (this, I argued, was much better than a
simulation).
Now if 450 experimenters (I read somewhere there were 450
experiments) on PL *all* do this, and each uses just 1% of the CPU,
then we'll get a load of 4.5 on every machine - and *none* of us can
do meaningful experiments (all we'll be able to research is the
properties of heavily loaded machines :-)
This seems to be a show-stopper to me - or am I being too pessimistic?
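(The back-of-the-envelope figure above is just multiplication; a sketch that makes the assumptions explicit - one CPU per node, every experiment's slice averaging 1% of it:)

```python
experimenters = 450   # rough count of experiments on PL (as read above)
cpu_fraction = 0.01   # assumed: each slice averages 1% of one CPU

# Load average ~ number of runnable processes; 450 slices each wanting
# 1% of a single CPU demand 4.5 CPUs' worth of work.
expected_load = experimenters * cpu_fraction
print(expected_load)  # 4.5 - overload on a one-CPU node
```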
Otherwise we'd need a fancy booking system: you could have a short
time on all machines, or a longer time on some subset of the machines,
and you could trade your slots with other experimenters.
Will my first application have to be a booking system - or has
anybody got one?
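(In case it helps the discussion, here is a toy sketch of the booking scheme I mean - each (node, hour) slot owned by at most one experimenter, with trading. All names and the data model are hypothetical, not an existing PlanetLab service:)

```python
class SlotBooker:
    """Toy booking system sketch: reserve (node, hour) slots and
    trade them between experimenters. Purely illustrative."""

    def __init__(self):
        self.owner = {}  # (node, hour) -> experimenter name

    def reserve(self, experimenter, node, hour):
        key = (node, hour)
        if key in self.owner:
            return False          # slot already taken
        self.owner[key] = experimenter
        return True

    def trade(self, a, slot_a, b, slot_b):
        # Swap two slots, but only if each party owns what it offers.
        if self.owner.get(slot_a) == a and self.owner.get(slot_b) == b:
            self.owner[slot_a], self.owner[slot_b] = b, a
            return True
        return False

booker = SlotBooker()
booker.reserve("alice", "planetlab1.sics.se", 10)
booker.reserve("bob", "planetlab2.cs.princeton.edu", 10)
booker.trade("alice", ("planetlab1.sics.se", 10),
             "bob", ("planetlab2.cs.princeton.edu", 10))
print(booker.owner[("planetlab1.sics.se", 10)])  # bob
```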