Re: File descriptor leak, slapd hangs and runs out of file descriptor
On Jan 2, 2008 6:59 PM, Quanah Gibson-Mount <quanah@zimbra.com> wrote:
>
>
> --On January 2, 2008 5:14:25 PM -0500 Sam Tran <stlist@gmail.com> wrote:
>
> > Dear All,
> >
> > We are running OL 2.3.39 on CentOS 5 i386 or x86_64. We have one
> > provider and three consumers (LDAP-sync repl).
> >
> > Several applications perform LDAP write and read operations on the
> > provider.
> >
> > For the second time in two months, we had what looked like a file
> > descriptor leak on the provider: file descriptors were not being closed
> > at all, or not fast enough. At the same time, slapd was unresponsive.
> > Here is what the logs show:
> >
> > I restarted slapd, which fixed the problem.
> >
> > The first time the problem occurred, slapd ran out of file descriptors.
> > I don't know what triggered the problem. Prior to the problem there
> > was no increase in load, and all LDAP operations were completing
> > successfully.
> >
> > I would appreciate it if anyone could give me some pointers on how to
> > troubleshoot the problem.
>
> I don't see any issue here -- every connection takes a file descriptor.
> Most of the connections being opened in the log snippet you show are
> never closed, so they hold onto that resource. For the connections that
> are shown closing, the fd is freed and then reused by the next incoming
> connection. I.e., there is no evidence of a leak. One of the first things
> I always do with my LDAP servers is bump up the number of file descriptors
> available to slapd (usually to 16k on 64-bit boxes). Many things use
> persistent connections, which use up file descriptors. You could also
> implement a harsh idletimeout (e.g., 5 seconds) to kick off idle
> persistent connections. How well clients handle that varies, though.
>
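For reference, a minimal sketch of the two suggestions above; the paths,
limit values, and the fd-counting command are assumptions for a CentOS
setup, not something confirmed in this thread:

    # slapd.conf (global section): forcibly close client connections
    # that have been idle for 5 seconds
    idletimeout 5

    # Raise the per-process fd limit in whatever script launches slapd
    # (init script, wrapper, etc.), before slapd starts:
    ulimit -n 16384

    # Rough way to watch how many fds the running slapd is holding:
    ls /proc/$(pidof slapd)/fd | wc -l
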
Quanah,
Thanks for the input.
However, this does not explain why slapd was hanging and unable to
respond to any queries before the number of open file descriptors
reached its maximum.
--
Sam