Re: Memory consumption issue
Pierangelo Masarati wrote:
Thorsten Kohlhepp wrote:
Pierangelo Masarati wrote:
Thorsten Kohlhepp wrote:
Hi,
Hallvard B Furuseth wrote:
Andrew Findlay writes:
Retrieving 2M entries in a single operation is going to tax any LDAP
server, especially if you do not request paged results. Consider what
it must do:
1) Make a list of every entry ID
2) Retrieve the data for every entry
3) Build a message containing 2M entries
4) Send the message
No, each entry is sent in a separate message.
I also thought it would send each message separately, because building a
message with 2M entries wouldn't make sense. It would also take much
longer to respond. The first entry of the search is returned immediately,
which indicates that each entry is sent separately.
There's no need to experiment. This is clearly indicated in the
protocol specification (RFC 4511, but it has always been like this).
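As Andrew mentions above, paged results keep the cost of huge searches bounded. A hedged sketch of such a search with OpenLDAP's ldapsearch (the host and base DN are placeholders; the -E pr syntax is per ldapsearch(1)):

```shell
# Sketch: fetch a large tree in pages of 500 entries using the
# Simple Paged Results control (RFC 2696). "1.1" requests no
# attributes, which is enough if you only want to count entries.
ldapsearch -x -H ldap://localhost -b "dc=example,dc=com" \
    -E pr=500/noprompt "(objectClass=*)" 1.1
```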
However OpenLDAP does build a list of all entry IDs to examine (and
possibly return), subject to indexes for the filters. And it must
readlock all these entries so that an update operation won't mess
things up while it is sending, and so updates will be atomic as seen
by the search request.
I don't know what BDB does when there are 2M entries to examine
though.
Maybe it just gives up and examines all entries, as LDBM did.
The total memory of the server is 4 GB and swap 2 GB, so it will
survive even if we pull the entire tree using ldapsearch. But we would
like to put other services on the same server as well, which could
slow things down if LDAP is already using a lot of memory.
I know doing an ldapsearch "(objectClass=*)" is a bad way to get all
entries,
Too bad there's no other way. If you find any, please let us know.
but I want to make sure that a badly formatted search can't slow
down the entire server by consuming a lot of memory.
If you want to inhibit expensive searches, take a look at the
"limits" statement of slapd.conf(5). In particular, consider the
size.unchecked limit.
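For illustration, a slapd.conf fragment (the selector syntax is from slapd.conf(5); the values are arbitrary and should be tuned to the actual directory):

```
# slapd.conf -- illustrative limits, tune to your data.
# size.unchecked caps the number of candidate entry IDs slapd will
# examine before filtering, so an unindexed search fails early
# instead of walking the whole database.
limits anonymous size.soft=100 size.unchecked=5000
limits users size.soft=500 size.hard=2000 size.unchecked=50000
```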
Another question: why isn't it releasing the used memory after the
search finished?
Depending on the backend and on the database, caching may take place
(and should, if you want performance). For details about Berkeley
DB caching, see Sleepycat's documentation. For details about
back-bdb and back-hdb caching, see cachesize, idlcachesize,
dncachesize in slapd-bdb(5), and
<http://www.openldap.org/doc/admin24/tuning.html>.
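For example (the directive names are from slapd-bdb(5); the sizes below are placeholders, not recommendations):

```
# slapd.conf, back-bdb/back-hdb database section -- illustrative sizes
cachesize    10000   # entries kept in slapd's entry cache
idlcachesize 30000   # index slot (IDL) cache, often sized ~3x cachesize
dncachesize  20000   # DN cache (back-hdb)
```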
Of course it will cache the entries, but I defined a cache size of
8.4m, an entry cachesize of 1000 and an idlcachesize of 1000. When
the search finishes it consumes 937316 kB. This is way over the
cachesize. What am I doing wrong?
What is 8.4m? 8.4 minutes? The Berkeley DB cache size is expressed by
two numbers, the first one counting the GB (gigabytes) and the second
counting the MB (megabytes). If by "8.4m" you mean 8.4 MB, then your
cache is very likely way underestimated.
I meant 8.4 MB. Actually the function DB->set_cachesize takes 3
numbers:
http://www.oracle.com/technology/documentation/berkeley-db/db/api_c/db_set_cachesize.html
In my DB_CONFIG I've set
set_cachesize 0 8435456 1
which means a cache size of 8.4 MB in a single cache.
This is working fine, because db_stat -m shows
1 Number of caches
1 Maximum number of caches
10MB 64KB Pool individual cache size
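The arithmetic behind those numbers, as a sketch: per the DB->set_cachesize documentation, the configured size is gbytes * 2^30 + bytes, and Berkeley DB adds roughly 25% overhead to small caches, which would explain db_stat reporting ~10 MB for an 8.4 MB request.

```python
# DB->set_cachesize takes (gbytes, bytes, ncache); the configured size
# is gbytes * 2**30 + bytes, split across ncache regions.
def bdb_cache_mb(gbytes, nbytes, ncache=1):
    """Configured Berkeley DB cache size in MiB (before BDB's overhead)."""
    return (gbytes * 2**30 + nbytes) / 2**20

configured = bdb_cache_mb(0, 8435456, 1)
print(round(configured, 2))          # configured: ~8.04 MiB
print(round(configured * 1.25, 1))   # with ~25% overhead: ~10.1 MiB
```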
An entry cachesize of 1000 means 1000 entries. So it may well mean
lots of kB (or MB) depending on the actual size of your entries (the
size of an entry is usually more than twice that of its textual
representation, since all values are stored in pretty and normalized
form, plus overhead).
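A back-of-the-envelope sketch of that estimate (the 2x factor is the rule of thumb from the paragraph above; the entry size is an assumption):

```python
# Rough estimate: each cached entry costs at least ~2x its LDIF text,
# since values are stored in both pretty and normalized form, plus overhead.
def entry_cache_mb(num_entries, avg_ldif_bytes, factor=2.0):
    """Estimated entry-cache footprint in MiB."""
    return num_entries * avg_ldif_bytes * factor / 2**20

# e.g. 1000 cached entries of ~2 KiB LDIF each -> roughly 3.9 MiB or more
print(round(entry_cache_mb(1000, 2048), 1))
```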
In any case, if you fear leaks, please do run slapd under valgrind and
report any issue. It will help make slapd better.
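A hedged sketch of such a run (the binary and config paths are placeholders for your installation; valgrind slows slapd down enormously, so use a test instance):

```shell
# Run slapd in the foreground (-d) under valgrind to catch leaks;
# adjust the paths for your installation.
valgrind --leak-check=full --num-callers=20 --log-file=slapd.vg \
    /usr/local/libexec/slapd -d 1 \
    -f /usr/local/etc/openldap/slapd.conf
```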
p.
Ing. Pierangelo Masarati
OpenLDAP Core Team
SysNet s.r.l.
via Dossi, 8 - 27100 Pavia - ITALIA
http://www.sys-net.it
-----------------------------------
Office: +39 02 23998309
Mobile: +39 333 4963172
Fax: +39 0382 476497
Email: ando@sys-net.it
-----------------------------------
And now comes the strange thing:
slapd not running
free -k
             total       used       free     shared    buffers     cached
Mem:       4194304    2768584    1425720          0     134096    2441308
-/+ buffers/cache:     193180    4001124
Swap:      1052248         64    1052184
slapd running
free -k
             total       used       free     shared    buffers     cached
Mem:       4194304    2772980    1421324          0     134124    2441320
-/+ buffers/cache:     197536    3996768
Swap:      1052248         64    1052184
So it took only 4 MB, which is fine because the cache isn't used yet.
After running ldapsearch ... "(objectClass=*)" with no sizelimit:
free -k
             total       used       free     shared    buffers     cached
Mem:       4194304    3242360     951944          0     134368    2451296
-/+ buffers/cache:     656696    3537608
Swap:      1052248         64    1052184
As you can see it took more than 400 MB, and this memory isn't released
unless I restart the LDAP server. I don't know where this memory went;
if I do a pmap I get this line, which was increasing during the
search:
000000001f511000 464100 - - - rw--- [ anon ]
Thanks for all of your help
Ciao
Thorsten