Dear Aaron,
On 19/09/11 19:40 -0400, Aaron Richton wrote:
>On Tue, 20 Sep 2011, Nick Urbanik wrote:
>
>>What I have said is that with *only* loglevel stats, in our production
>>servers, the write load is excessive, sometimes filling up the hard
>
>They can be large. We rotate (including compression) nightly.
>Fortunately they do compress very well...

...but compressing 25 GB files in itself takes away resources from
answering the large number of LDAP queries.
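
(For the archives, none of this is exotic; a sketch of the sort of
setup under discussion, with logrotate assumed for the nightly
rotation and the log path and rotation count only as examples:)

    # slapd.conf -- or olcLogLevel: stats under cn=config
    loglevel stats

    # /etc/logrotate.d/slapd -- nightly rotation plus compression
    /var/log/slapd.log {
        daily
        rotate 7
        compress
        delaycompress
        missingok
        postrotate
            # HUP/reload the syslog daemon here so it reopens the file
        endscript
    }
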
>>disk in four days. The disk I/O from the logging of stats at debug
>>priority is excessive. Of course, only OpenLDAP was logging at
>>debug level.
>
>At a minimum, make sure that your syslog daemon isn't flushing on
>each write. (With some daemons, -/file/name disables this.)
Naturally, we do that.
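
(For the archives again: in sysklogd-style configuration that is the
leading "-" on the file path; the facility and path below are only an
illustration, local4 being slapd's default:)

    # /etc/syslog.conf (sysklogd, or rsyslog legacy format)
    # the leading "-" tells the daemon not to sync the file after every write
    local4.*    -/var/log/slapd.log
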
>Actually, we do remote syslog (@loghost) so the UDP is
>send-and-forget from our live servers, so there's no disk I/O at all.
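
(The forwarding rule itself is a single line of classic syslog.conf;
'@' means UDP, and local4 is again only the assumed facility:)

    # ship slapd's facility to the central loghost over UDP, fire-and-forget
    local4.*    @loghost

The mechanism is not the problem; the volume is:
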
Each member of our cluster of four LDAP slaves in Sydney generates
about 25 GB per day with loglevel set to 'stats'. The cluster of
three in Melbourne is less busy, but still fairly prolific. Imagine
sending an additional terabyte of data to our syslog servers every
ten days: you need money to pay for the additional infrastructure.
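
To make the arithmetic explicit, Sydney alone accounts for that:

    4 slaves x 25 GB/day = 100 GB/day, i.e. roughly 1 TB every ten days
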
>(Note that none of the above is particular to OpenLDAP; we do this
>with most of our services.)
Agreed; basic system administration.
>>Perhaps our production OpenLDAP servers work harder than the
>>servers everyone else here on the list manages? It's possible: we
>>are a large ISP, and we provision all our services with OpenLDAP.
>
>There are some pretty large installations watching the list...
But clearly most are smaller than our installation.
I've raised a bug for this: http://www.openldap.org/its/index.cgi/Incoming?id=7046