Re: Large stack allocations
Sean Burford wrote:
> Hi,
>
> bdb_search() allocates space for 128K candidate IDs and 64K scope IDs on
> the stack (about 768KB of memory). bdb_idl_fetch_key() also allocates a
> large buffer on the stack (262352 bytes). Most other functions allocate
> roughly 1KB.
>
> Using the stack rather than the heap limits the size of these structures
> (idl.h observes: "IDL sizes - likely should be even bigger. limiting
> factors: sizeof(ID), thread stack size").
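
For illustration, a minimal sketch of the pattern being described (names
are hypothetical and ID is assumed to be 4 bytes so the arithmetic matches
the 768KB figure above; this is not the actual back-bdb code):

    /* Sketch only: hypothetical names, not the actual back-bdb code.
     * Assumes a 4-byte ID, so 128K + 64K IDs = 768KB. */
    #include <stdlib.h>

    typedef unsigned int ID;

    #define CANDIDATE_IDS (128 * 1024)
    #define SCOPE_IDS     (64 * 1024)

    void search_stack_style(void)
    {
        ID candidates[CANDIDATE_IDS];   /* 512KB of automatic storage */
        ID scope[SCOPE_IDS];            /* 256KB more                 */
        /* ... gather and intersect ID lists ... */
        (void)candidates; (void)scope;
    }

    void search_heap_style(void)
    {
        /* The heap alternative: array sizes are no longer bounded by
         * the thread stack, at the cost of a malloc/free per search. */
        ID *candidates = malloc(CANDIDATE_IDS * sizeof(ID));
        ID *scope = malloc(SCOPE_IDS * sizeof(ID));
        if (candidates != NULL && scope != NULL) {
            /* ... gather and intersect ID lists ... */
        }
        free(scope);
        free(candidates);
    }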
> Using the stack for large allocations also creates the possibility of
> these arrays straddling the guard page (possibly resulting in local
> variables ending up in neighboring memory regions).
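
To make that concern concrete, a hedged sketch with made-up sizes; whether
a write like this actually faults depends on what happens to be mapped past
the guard page and on whether the compiler emits stack probes for large
frames:

    /* Hypothetical sizes. On a thread with, say, a 512KB stack and a
     * single 4KB guard page, this frame is larger than both. */
    void straddle_guard(void)
    {
        char buf[1024 * 1024];  /* 1MB frame                           */
        buf[0] = 1;             /* lowest address of the array: ~1MB
                                 * below the caller's frame, past the
                                 * guard page and into whatever mapping
                                 * lies beyond (e.g. another thread's
                                 * stack). No fault is raised here,
                                 * because the guard page itself is
                                 * never touched.                      */
    }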
Generally that can't happen. The main thing that can push stack depth is
deeply nested search filters, and we already check for those and explicitly
switch to using the heap if a filter is nested too deeply (see the sketch
below).

The only other thing that can push the stack is a deeply nested configuration
of overlays, and as you've noted, most functions only take about 1KB each, so
it would take a few thousand overlays all recursing through a search to cause
any problem. If you're configuring thousands of overlays on a single DB,
you're doing something very weird, and the right thing to do is increase the
#define of the thread stack size.
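
A minimal sketch of the nested-filter fallback mentioned above (names and
cutoff are hypothetical, not slapd's actual code):

    #include <stdlib.h>

    typedef struct Filter {
        struct Filter *sub;     /* first nested sub-filter, if any */
        struct Filter *next;    /* next sibling at this level      */
    } Filter;

    static int filter_depth(const Filter *f)
    {
        int max = 0;
        for (; f != NULL; f = f->next) {
            int d = 1 + filter_depth(f->sub);
            if (d > max)
                max = d;
        }
        return max;
    }

    #define STACK_DEPTH_LIMIT 64        /* hypothetical cutoff */

    void evaluate(const Filter *f)
    {
        unsigned int scratch[8 * 1024]; /* per-frame scratch IDs */
        unsigned int *ids = scratch;

        if (filter_depth(f) > STACK_DEPTH_LIMIT) {
            /* Deep nesting would stack one scratch array per
             * recursive frame, so switch to the heap instead of
             * risking overflow. */
            ids = malloc(sizeof(scratch));
            if (ids == NULL)
                return;
        }

        /* ... recursive evaluation using ids ... */

        if (ids != scratch)
            free(ids);
    }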
> Would the performance cost of using the heap or thread-local storage for
> these allocations outweigh the benefit of being able to use bigger arrays?
If you don't believe we've already thought about all of this, try it yourself
and see.
How often have you needed a bigger array?
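
In the spirit of "try it yourself", a crude micro-benchmark sketch (assumes
POSIX clock_gettime; sizes match the 768KB figure above; compile without
optimization, or the compiler may inline and elide the work):

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N_IDS (192 * 1024)          /* 128K + 64K 4-byte IDs */
    #define ITERS 100000

    static double now_sec(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec * 1e-9;
    }

    static unsigned long stack_iter(int i)
    {
        unsigned int ids[N_IDS];    /* one stack-pointer adjustment; */
        ids[0] = (unsigned int)i;   /* no allocator involved         */
        return ids[0];
    }

    static unsigned long heap_iter(int i)
    {
        unsigned int *ids = malloc(N_IDS * sizeof *ids);
        unsigned long v = 0;
        if (ids != NULL) {
            ids[0] = (unsigned int)i;
            v = ids[0];
            free(ids);              /* blocks this large may be
                                     * mapped and unmapped on every
                                     * call, depending on the
                                     * allocator's thresholds       */
        }
        return v;
    }

    int main(void)
    {
        volatile unsigned long sink = 0;
        double t0;
        int i;

        t0 = now_sec();
        for (i = 0; i < ITERS; i++)
            sink += stack_iter(i);
        printf("stack: %.3f s\n", now_sec() - t0);

        t0 = now_sec();
        for (i = 0; i < ITERS; i++)
            sink += heap_iter(i);
        printf("heap:  %.3f s\n", now_sec() - t0);

        return (int)(sink & 1);
    }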
> If you're interested in some statistics about slapd's stack usage, the
> static analysis Perl script on kegel.com matches what I'm seeing from
> real running slapds:
> http://www.kegel.com/stackcheck/
--
-- Howard Chu
CTO, Symas Corp. http://www.symas.com
Director, Highland Sun http://highlandsun.com/hyc/
Chief Architect, OpenLDAP http://www.openldap.org/project/