best practices WRT resizing an MDB backend?
Other than this thread:
http://t23307.network-openldap-technical.opennetworks.info/lmdb-growing-the-database-t23307.html
I don't see a discussion of changing the 'maxsize' setting after an
LMDB database has been generated.
This thread includes this response about growing the database:
http://www.openldap.org/lists/openldap-technical/201402/msg00302.html
  On Windows, Linux, and FreeBSD, there's no problem increasing the
  mapsize and preserving the existing data.
(I'm making a wild assumption that 'mapsize' is a typo, and 'maxsize'
was intended.)
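(For concreteness, and please correct me if I've got this wrong: my
understanding is that back-mdb exposes the LMDB map size as 'maxsize'
in slapd.conf, or 'olcDbMaxSize' under cn=config, so something like
the following is what I'd be changing. The suffix and database DN are
just placeholders for our layout:)

  # slapd.conf style; the value is in bytes (10 GiB here, purely as an example):
  database mdb
  suffix   "dc=example,dc=com"
  maxsize  10737418240

  # cn=config style, fed to ldapmodify; whether this takes effect without
  # a restart is part of what I'm unsure about:
  dn: olcDatabase={1}mdb,cn=config
  changetype: modify
  replace: olcDbMaxSize
  olcDbMaxSize: 10737418240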
Can 'maxsize' ever be reduced after the fact? If so, is there
guidance as to how much it can change (perhaps based on mdb_stat)?
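(To frame that a bit: my possibly-wrong assumption is that the map can
never be made smaller than the pages already in use, and that
'mdb_stat -e' reports enough to compute that floor. The path and the
numbers below are invented, just to show the arithmetic I have in mind:)

  # Inspect the environment of an existing database directory:
  mdb_stat -e /var/lib/ldap
  # Fields of interest from the output (example values):
  #   Map size: 85899345920
  #   Page size: 4096
  #   Number of pages used: 123456
  # Floor for a smaller 'maxsize': pages used * page size
  #   123456 * 4096 = 505675776 bytes (~482 MiB), plus growth headroom.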
The problem I'm trying to solve:
For my $job, we provide OpenLDAP-backed clustered appliances to
customers. The hardware doesn't vary, but the size of individual
customers' databases does.
- Our strategy for adding members to the cluster involves managing
  backups (compressed tarballs). Our prior use of the now-ancient
  bdb backend kept these backups lightweight for smaller customers,
  while larger customers took the hit of having big databases.
- Also, upgrading appliances means importing data from the customers'
bdb-based server.
My naive use of the LMDB backend has me assuming the worst case, and
now everyone is equally punished for having a 'big' (albeit sparse)
database.
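(For what it's worth, data.mdb does look sparse on disk here, so part
of the backup pain is how we tar it up. The paths are just examples,
but these GNU du/tar invocations are the sort of mitigation I've been
eyeing:)

  # Apparent size vs. blocks actually allocated:
  du -h --apparent-size /var/lib/ldap/data.mdb
  du -h /var/lib/ldap/data.mdb
  # Let tar detect holes so the tarball tracks real data, not 'maxsize':
  tar --sparse -czf ldap-backup.tgz -C /var/lib/ldap .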
My hope was that, given awareness of either the data in an LDIF
extract or data about the legacy bdb database itself, we could
make a more conservative guess as to a reasonable size for the mdb
backend.
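(One heuristic I've been toying with, purely as a sketch, with the
paths and the headroom factor as guesses: load the customer LDIF into
a throwaway mdb instance configured with a generous maxsize, then read
back what LMDB actually consumed:)

  # Load the LDIF into a scratch mdb database via a throwaway config:
  slapadd -f /tmp/scratch-slapd.conf -l customer.ldif
  # See how much of the map was actually used:
  mdb_stat -e /tmp/scratch-mdb | grep -E 'Page size|Number of pages used'
  # Provision the appliance with (pages used * page size) plus headroom
  # (2-3x?), instead of one worst-case size for everyone.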
Has anyone written up strategies on these topics, or is anyone in
a position to offer recommendations?
--
Brian Reichert <reichert@numachi.com>
BSD admin/developer at large