> On Thu, Jan 14, 2010 at 01:52:02PM +0100, Dennis J. wrote:
>> Why is logging into a pipe considered a waste of CPU?
>> The log parser throws away some data, aggregates the rest and then
>> writes it to a remote database. The "tail -f" approach would waste
>> local disk I/O by writing data unnecessarily to disk, which I would
>> then have to read back again with the script.
>> Why is this considered more efficient than handing the data directly
>> over to a script?
> It is not considered more efficient. It may be more efficient because
> of bulk data processing. Note also that the logged data are written to
> disk but never read back from it: the reader is served from the OS
> page cache, so the bytes are merely copied out of memory.
> Logging to a pipe wastes CPU because it causes a lot of context
> switches and memory copies for every single log operation.
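
As a rough illustration of the bulk-processing point: the per-operation
cost can be amortized on the reading side by draining the pipe in large
chunks rather than line by line. A minimal sketch in Python, assuming
the script reads the pipe on stdin (the buffer size and the no-op
aggregation step are placeholders):

    import sys

    # Drain the pipe in large chunks so a single read() syscall picks
    # up many log lines at once. 64 KiB is an illustrative buffer size.
    BUFSIZE = 1 << 16
    pending = b""
    while True:
        chunk = sys.stdin.buffer.read1(BUFSIZE)  # at most one syscall
        if not chunk:                            # writer closed the pipe
            break
        pending += chunk
        lines = pending.split(b"\n")
        pending = lines.pop()                    # keep trailing partial line
        for line in lines:
            pass  # throw away / aggregate / batch to the database here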
Hm, interesting. I didn't know that writing to a pipe actually forces a
context switch. I was under the impression that the writing process
could use up its time slice writing an arbitrary amount of data into
the pipe, and that when the OS scheduler switched to the script it
would read all of the accumulated data from the pipe.
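
That impression is at least partly right on Linux: a writer can queue
data up to the pipe's buffer capacity (typically 64 KiB) before the
kernel has to block it and run the reader. A throwaway sketch, not from
the thread, that measures the capacity by filling a pipe with no reader
attached:

    import os

    # Fill a non-blocking pipe and count how many bytes fit before
    # write() would block (typically 65536 bytes on Linux).
    r, w = os.pipe()
    os.set_blocking(w, False)   # fail with EAGAIN instead of blocking

    total = 0
    try:
        while True:
            total += os.write(w, b"x" * 4096)
    except BlockingIOError:
        pass

    print("pipe capacity: %d bytes" % total)

Each write is still a syscall, though, and with a reader attached the
kernel may wake it and switch earlier; the pipe only guarantees roughly
this much slack before a write blocks.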
The "tail -f" approach looks racy to me though. The log would grow fairly
fast which means it would probably have to be rotated at least once per
hour or the disk will fill up. I'm not sure how to process this rotation
with "tail -f" without potentially missing some data.