Why is logging into a pipe considered a waste of CPU?
The log parser throws away some data, aggregates the rest, and then writes
it to a remote database. The "tail -f" approach would waste local disk I/O
by writing data to disk unnecessarily, only for the script to read it
back again.
Why is this considered more efficient than handing the data directly over
to a script?
By the way, "tail -F" is the only recommended way to do near real-time log parsing with nginx.
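For illustration, a minimal sketch of the kind of stdin-driven parser described above, fed by "tail -F". This assumes the combined log format; the field index and the aggregation (counting requests per HTTP status code) are purely illustrative, as the thread does not specify what the real script discards or aggregates:

```python
# Hypothetical parser sketch. Usage:
#   tail -F /var/log/nginx/access.log | python parser.py
import sys
from collections import Counter

def aggregate(lines):
    # Count requests per HTTP status code, discarding the rest
    # of each log line (illustrative aggregation only).
    counts = Counter()
    for line in lines:
        fields = line.split()
        if len(fields) > 8:          # combined log format has >= 9 fields
            counts[fields[8]] += 1   # field 9 is the HTTP status code
    return counts

if __name__ == "__main__":
    # In a real deployment the result would be written to the
    # remote database instead of stdout.
    print(aggregate(sys.stdin))
```

Because "tail -F" survives log rotation, the pipeline keeps running when logrotate moves the file, which is why it is the recommended approach here.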
On 14.01.2010, at 8:12, Dennis J. wrote:
Is there an nginx equivalent to Apache's CustomLog directive with the "|" prefix, so it logs to the stdin of another program/script? I need to do real-time processing of the access log data, and I'm wondering how I can accomplish this once I switch to nginx.