hello guys,
i'm running some tests with openldap 2.1.30 on a redhat 9 (2.4.20-30.9smp)
box with 4 GB of memory and 4 logical cpus (hyperthreading enabled, dell
poweredge 2650) to find out if it meets the requirements of our
development team.
configure string:
env CPPFLAGS="-I/usr/local/BerkeleyDB.4.2/include -I/usr/kerberos/include" \
    LDFLAGS="-L/usr/local/BerkeleyDB.4.2/lib" \
    ./configure --prefix=/opt/LDAP-Server-2.1.30 \
    --enable-ldbm=yes --enable-ldap=yes --enable-meta=yes \
    --enable-rewrite=yes --with-tls --enable-shared=no
i had to use the workaround from
http://www.openldap.org/its/index.cgi/Archive.Software%20Bugs?id=1134;selectid=1134;usearchives=1;statetype=-1
to avoid ITS issue 1134 (segfault during make test) and get openldap to
build at all.
i don't know whether an id2entry.dbb file of almost 4 GB counts as a big
directory, but this is the situation i have:
# du -sh /opt/LDAP-Server/var/openldap-data/*
45M /opt/LDAP-Server/var/openldap-data/birthday.dbb
108M /opt/LDAP-Server/var/openldap-data/cn.dbb
233M /opt/LDAP-Server/var/openldap-data/dn2id.dbb
132K /opt/LDAP-Server/var/openldap-data/gender.dbb
3.9G /opt/LDAP-Server/var/openldap-data/id2entry.dbb
8.0K /opt/LDAP-Server/var/openldap-data/nextid.dbb
452K /opt/LDAP-Server/var/openldap-data/objectClass.dbb
17M /opt/LDAP-Server/var/openldap-data/postalCode.dbb
240K /opt/LDAP-Server/var/openldap-data/preferredRestaurant.dbb
108M /opt/LDAP-Server/var/openldap-data/uid.dbb
this was created by a dummy script to simulate approx. 1.000.000 user
profiles for a web portal we run for a customer.
a query like:
/opt/LDAP-Server/bin/ldapsearch -x -D cn=Admin,o=netmldap -W \
    -b ou=campaigntool,ou=netm,o=netmldap -H "ldap://0.0.0.0:390" \
    -LLL "(postalcode=50000)" email
took more than six minutes and returned approx. 110.000 email addresses.
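to see whether the time goes into the index lookup or into pulling 110.000 full entries out of id2entry, it might help to time the same search while requesting no attributes at all ("1.1" is the standard no-attributes selector) and just counting entries client-side. a sketch, reusing the bind and filter from above:

```
time /opt/LDAP-Server/bin/ldapsearch -x -D cn=Admin,o=netmldap -W \
    -b ou=campaigntool,ou=netm,o=netmldap -H "ldap://0.0.0.0:390" \
    -LLL "(postalcode=50000)" 1.1 | grep -c '^dn:'
```

if this is still minutes, the bottleneck is entry retrieval rather than the postalCode index itself.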
the index in slapd.conf looks like this:
index cn,uid pres,eq,approx,sub
index objectClass eq
index gender eq
index birthday eq,sub,subinitial
index postalCode eq,sub,subinitial
index preferredRestaurant eq
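one thing to double-check: if any of these index lines were added after the dummy script populated the database, the indexes for the existing entries stay empty until rebuilt, which alone would explain very slow searches. a sketch of the rebuild, with slapd stopped; the config path is my assumption based on your --prefix (the default sysconfdir is etc/openldap under the prefix):

```
# stop slapd first; slapindex rebuilds the configured indexes from id2entry
/opt/LDAP-Server-2.1.30/sbin/slapindex -f /opt/LDAP-Server-2.1.30/etc/openldap/slapd.conf
```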
cachesize and dbcachesize are both 100.000 at the moment. i've seen slapd
crash when the values are too big: with e.g. dbcachesize 45.000.000 and
cachesize 1.000.000, slapd grows to about 2.9 GB of memory during an
ldapsearch and then exits.
does anyone have ideas on how to improve the performance of queries like
the one above on this hardware, and how to avoid these crashes? i read
that dbcachesize should be as large as the biggest index file.
is ldap designed to return such a large number of entries (sizelimit -1),
or would a relational database be a better fit for this requirement? the
developers expect less than 10 seconds even for the most complex query...
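a back-of-the-envelope sketch of where the memory goes with those larger values. my assumptions: roughly 3 KB per cached entry (from id2entry.dbb at 3.9 GB for ~1.000.000 entries), and dbcachesize in the ldbm backend applying per open database file (ten .dbb files in the listing above):

```shell
# rough memory footprint of cachesize 1.000.000 + dbcachesize 45.000.000
awk 'BEGIN {
    entry_cache = 1000000 * 3000   # cachesize counts entries, not bytes; ~3 KB each assumed
    db_cache    = 10 * 45000000    # dbcachesize is bytes per open .dbb file (ldbm)
    printf "approx. %.1f GiB total\n", (entry_cache + db_cache) / 1073741824
}'
# prints: approx. 3.2 GiB total
```

that lands in the same ballpark as the ~2.9 GB you saw before slapd exited, so the crash looks like plain memory exhaustion rather than a bug.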
any ideas? thanks for reading this far.
regards,
carsten