Zorba the Hutt (zorbathut) wrote,

Question for those techs out there.

I'm trying to set up a "load" indicator for this distributed computing server. The way Windows does its load indicator is simply to read off the amount of CPU time being used. I could do that easily (the percentage of available CPUs that are in use), but my load would end up being either 100% or 0%, very rarely anything in between.
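For reference, that style of indicator really is just "sample the CPU counters twice and see what fraction of the elapsed time was busy." Here's a rough sketch in Python, reading /proc/stat on Linux as a stand-in (the Windows performance counters amount to the same idea); this is purely an illustration, not the actual server code.

    # Sketch of the "percentage of CPU in use" indicator: sample the
    # aggregate CPU counters twice and compare busy time to elapsed time.
    import time

    def cpu_times():
        with open("/proc/stat") as f:
            fields = [int(x) for x in f.readline().split()[1:]]
        idle = fields[3]              # idle jiffies
        return idle, sum(fields)      # (idle, total) jiffies so far

    def cpu_percent(interval=1.0):
        idle0, total0 = cpu_times()
        time.sleep(interval)
        idle1, total1 = cpu_times()
        busy = (total1 - total0) - (idle1 - idle0)
        return 100.0 * busy / (total1 - total0)

    print("%.0f%% busy" % cpu_percent())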

Linux, on the other hand, calculates load as the number of processes, on average, that are waiting for a timeslice. 0.03 means 3% load, 1.00 means one process is eating all the CPU time, and the handy thing is that this keeps going above "full load" - 7.00 means there are seven processes all trying to get the entire processor to themselves. Or both processors.
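For anyone who hasn't looked at how that number is produced: it's basically an exponentially decayed average of the run-queue length, sampled every few seconds. A sketch of the idea (the 5-second sample interval and 60-second period are the usual one-minute-average figures; this is not the kernel's actual code):

    # Exponentially decayed average of the run-queue length, the idea
    # behind the Linux one-minute load average.
    import math

    def update_load(load, runnable, interval=5.0, period=60.0):
        decay = math.exp(-interval / period)
        return load * decay + runnable * (1.0 - decay)

    load = 0.0
    for runnable in [7, 7, 7, 7, 0, 0, 0, 0]:   # sampled run-queue lengths
        load = update_load(load, runnable)
        print("%.2f" % load)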

Unfortunately this fails rather miserably as well, since one "job" could easily eat four hundred processors if they were available, but the existence of that much work doesn't really say anything about the load - a single four-hundred-processor job, once a week, is pretty low load, since the cluster will spend most of its time idle. One of those every few hours, however, would be extremely high load - in fact, the cluster would end up falling behind.
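To put numbers on that: the thing that actually matters is how many CPU-seconds of work arrive in a window versus how many CPU-seconds the machines could deliver in that window. The 64-CPU cluster and the job size below are made-up numbers, purely to illustrate the point:

    # Made-up numbers: the same 400-CPU-hour job is about 4% of a week's
    # capacity if it shows up once, and over 150% if it shows up every
    # four hours (i.e. the cluster falls behind).
    WINDOW = 7 * 24 * 3600        # one week, in seconds
    CLUSTER_CPUS = 64             # hypothetical cluster size

    def demand_ratio(cpu_seconds_submitted):
        capacity = float(CLUSTER_CPUS * WINDOW)
        return cpu_seconds_submitted / capacity

    job = 400 * 3600              # 400 processors, an hour of work apiece

    print("once a week:      %.0f%%" % (100 * demand_ratio(job)))
    print("every four hours: %.0f%%" % (100 * demand_ratio(job * (WINDOW // (4 * 3600)))))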

So . . . suggestions? I guess I just want an easy way of monitoring how much work the network is doing, something that can tell me "I am precisely x% overworked" or "I am x% idle" without any fuss, and I'm having trouble coming up with one :P

Questions welcome, incidentally. I haven't gone into much detail on this ;)