Re: a few back-bdb tuning questions
Quoting matthew sporleder <msporleder@gmail.com>:
> > Quoting matthew sporleder <msporleder@gmail.com>:
> >
> > > I am using openldap 2.3.21 and bdb 4.4.20 (both compiled 64-bit) on
> > > solaris 10.
> > >
> > > I followed (How do I determine the proper BDB/HDB database cache
> > > size?) http://www.openldap.org/faq/data/cache/1075.html and saw the
> > > note:
> > > "I don't have enough free RAM to hold all the 800MB id2entry data, so
> > > 4MB is good enough."
> > > And I found my number (around 30MB), which got me -great- read
> > > performance.
> > >
> > > (So far I have my Sun V120 up to 750+ BINDs/second using slamd and
> > > a few thousand test accounts, although when I turned on logging, it
> > > went down to 350/second.)
> >
> > This seems rather low. I'll note that on Solaris it is best to use a
> > shared memory cache. My SunFire 120s get several thousand
> > searches/second.
> >
>
> Is this with the entire directory loaded into ram? (I noticed you had
> a pretty big cachesize)
Yeah, on Solaris (SPARC) I had to load everything into RAM to get good
performance. Linux didn't have this issue.
> > > Well.. what if you do have enough ram? Do you just set a huge cache
> > > size, and it will eventually grab the whole thing? Do I need to
> > > include my indexes and other things in that memory calculation, or
> > > just the id2entry?
> >
> > Yep.
> >
>
> Sorry, can you clarify if this is 'Yep' on just the id2entry, or 'Yep' on
> *.bdb?
Yep, in that I just set a huge cache size.
For daily operations, sizeof (id2entry.bdb) + 10% is usually sufficient.
For loading the database (slapadd), sizeof (*.bdb) gives the best load
times. I just set my DB cachesize to sizeof (*.bdb) so I don't have to
fiddle with it after completing the load.
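To make the rule of thumb above concrete, here is a small sizing sketch.
The 800 MB id2entry figure is a made-up example (not from my deployment);
substitute the actual size of your own id2entry.bdb.

```shell
# Rule-of-thumb BDB cache sizing: sizeof(id2entry.bdb) plus 10% headroom.
# 838860800 (~800 MB) is a hypothetical example value; replace it with
# the byte size reported by "ls -l id2entry.bdb" for your database.
id2entry_bytes=838860800
cache_bytes=$(( id2entry_bytes + id2entry_bytes / 10 ))

# Emit the matching DB_CONFIG directive: gigabytes, bytes, cache segments.
echo "set_cachesize 0 $cache_bytes 1"
```

The resulting set_cachesize line goes in the DB_CONFIG file inside the
database directory; slapd must be restarted (or the environment recovered)
for it to take effect.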
> > > The man page mentions that hdb needs a very large idlcachesize
> > > relative to the cachesize. What's the bdb recommendation on this?
> > >
> > > When I checkpoint and use DB_LOG_AUTOREMOVE, what am I losing? The
> > > old transactions that have already been written to the database on
> > > disk? This would mean that my log.### are all active, correct?
> > > Since the old ones would have been deleted.
> > >
> > > And can multiple subordinate databases (I didn't find a lot of
> > > documentation about subordinate, by the way. Shouldn't that be in
> > > slapd.conf(5)?) share the same set_lg_dir? Or should they be
> > > separated into directories of their own?
> >
> > Their own directories with their own DB_CONFIG files.
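A sketch of what that layout might look like in slapd.conf (the suffixes
and directory paths are hypothetical; each directory carries its own
DB_CONFIG, so each environment can also have its own set_lg_dir):

```
# slapd.conf fragment -- suffixes and paths are made-up examples.
# The subordinate database is listed before its superior.
database    bdb
suffix      "ou=people,dc=example,dc=com"
subordinate
directory   /var/openldap-data/people     # has its own DB_CONFIG

database    bdb
suffix      "dc=example,dc=com"
directory   /var/openldap-data/example    # separate DB_CONFIG and logs
```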
> >
> > I suggest reading over:
> >
> > http://www.stanford.edu/services/directory/openldap/configuration/
> >
>
> Is the shm_key arbitrary? And does it need to be different for each
> subordinate database? (I'm assuming yes, since they all have
> different DB_CONFIG settings.)
Yes, the number assigned to shm_key is arbitrary. I've not used it with
subordinate databases, but I believe the answer there is also yes. ;)
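For illustration, a sketch of distinct shm_key values per database (the
key numbers and suffixes are arbitrary examples; the point is only that
each database environment gets its own key):

```
# slapd.conf fragment -- shm_key is an arbitrary integer, but each
# BDB environment sharing memory needs a distinct one.
database    bdb
suffix      "ou=people,dc=example,dc=com"
directory   /var/openldap-data/people
shm_key     42

database    bdb
suffix      "dc=example,dc=com"
directory   /var/openldap-data/example
shm_key     43
```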
> Also, while I'm on the subject, with solaris 10, you can tune shm it
> on the fly with 'prctl'.
Neat. :) We have no plans to support Solaris 10 or migrate anything to it
(we are dumping Solaris and moving wholesale to Linux), so I doubt I'll
ever get much chance to play with that. ;)
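For the record, a sketch of the Solaris 10 prctl invocation being referred
to (the project name and the 2 GB value are assumptions; adjust both to
your own setup):

```shell
# Raise the shared-memory ceiling for a project on the fly (Solaris 10).
# "user.ldap" and "2gb" are hypothetical; -r replaces the existing value,
# -i project names the entity being tuned.
prctl -n project.max-shm-memory -r -v 2gb -i project user.ldap
```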
--Quanah
--
Quanah Gibson-Mount
Principal Software Developer
ITS/Shared Application Services
Stanford University
GnuPG Public Key: http://www.stanford.edu/~quanah/pgp.html