Nicolas Prochazka wrote:
Hello,
we are using LMDB in our product as a database backend to store large key/value pairs (16K values).
First of all, LMDB has the best read/write performance of all the databases we have tested.
My question:
# Databases
data_dir:/data/cache/data
lmdb version:LMDB 0.9.16: (August 14, 2015)
db page size:4096
b-tree depth:4
branch pages:527
leaf pages:29287
overflow pages:6659875
data items:1331990
We have a lot of overflow pages; this is due to our large value size (each 16K value spans several 4096-byte pages).
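These counters map one-to-one onto LMDB's MDB_stat structure; a minimal C sketch that reads them back, assuming the environment lives at the data_dir above (error checking omitted for brevity):

#include <stdio.h>
#include <lmdb.h>

int main(void)
{
    MDB_env *env;
    MDB_txn *txn;
    MDB_dbi dbi;
    MDB_stat st;

    mdb_env_create(&env);
    mdb_env_open(env, "/data/cache/data", MDB_RDONLY, 0664);
    mdb_txn_begin(env, NULL, MDB_RDONLY, &txn);
    mdb_dbi_open(txn, NULL, 0, &dbi);   /* main (unnamed) DB */
    mdb_stat(txn, dbi, &st);

    printf("db page size:   %u\n",  st.ms_psize);
    printf("b-tree depth:   %u\n",  st.ms_depth);
    printf("branch pages:   %zu\n", st.ms_branch_pages);
    printf("leaf pages:     %zu\n", st.ms_leaf_pages);
    printf("overflow pages: %zu\n", st.ms_overflow_pages);
    printf("data items:     %zu\n", st.ms_entries);

    mdb_txn_abort(txn);
    mdb_env_close(env);
    return 0;
}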
When starting from an empty database, writes and reads are very fast, but after we delete all keys (or a large fraction of them), writes and reads of new keys become very slow (roughly 100x slower).
Is this normal, given our large values?
The only solution we have found is to destroy the database file rather than delete keys.
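For concreteness, the pattern that triggers this looks roughly like the sketch below; the key count, environment path, and map size are illustrative assumptions, and error checking is omitted:

#include <lmdb.h>

#define NKEYS   10000
#define VALSIZE 16384               /* 16K values, as in the report */

static char val[VALSIZE];           /* zero-filled payload */

static void fill(MDB_env *env, MDB_dbi dbi)
{
    MDB_txn *txn;
    MDB_val k, v;
    unsigned int i;

    mdb_txn_begin(env, NULL, 0, &txn);
    for (i = 0; i < NKEYS; i++) {
        k.mv_size = sizeof(i); k.mv_data = &i;
        v.mv_size = VALSIZE;   v.mv_data = val;
        mdb_put(txn, dbi, &k, &v, 0);   /* each value needs a run of overflow pages */
    }
    mdb_txn_commit(txn);
}

static void delete_all(MDB_env *env, MDB_dbi dbi)
{
    MDB_txn *txn;
    MDB_cursor *cur;
    MDB_val k, v;

    mdb_txn_begin(env, NULL, 0, &txn);
    mdb_cursor_open(txn, dbi, &cur);
    while (mdb_cursor_get(cur, &k, &v, MDB_NEXT) == 0)
        mdb_cursor_del(cur, 0);         /* freed pages go onto the freelist */
    mdb_cursor_close(cur);
    mdb_txn_commit(txn);
}

int main(void)
{
    MDB_env *env;
    MDB_txn *txn;
    MDB_dbi dbi;

    mdb_env_create(&env);
    mdb_env_set_mapsize(env, (size_t)1 << 30);  /* 1GB map, illustrative */
    mdb_env_open(env, "./testdb", 0, 0664);     /* directory must already exist */
    mdb_txn_begin(env, NULL, 0, &txn);
    mdb_dbi_open(txn, NULL, 0, &dbi);
    mdb_txn_commit(txn);

    fill(env, dbi);         /* fast: fresh pages appended to the file */
    delete_all(env, dbi);
    fill(env, dbi);         /* much slower: pages reused via the freelist */

    mdb_env_close(env);
    return 0;
}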
Yeah, the page reclaiming algorithm is somewhat simpleminded, and large records can aggravate fragmentation problems in the DB. Try upgrading to 0.9.18 first; it may help a bit.
We'll be experimenting with better reclaiming for LMDB 1.0.
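If in doubt about which version a binary is actually linked against, mdb_version() reports it at runtime (a minimal check):

#include <stdio.h>
#include <lmdb.h>

int main(void)
{
    int major, minor, patch;
    /* returns the full version string, e.g. "LMDB 0.9.18: (...)" */
    printf("%s\n", mdb_version(&major, &minor, &patch));
    return 0;
}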
--
-- Howard Chu
CTO, Symas Corp. http://www.symas.com
Director, Highland Sun http://highlandsun.com/hyc/
Chief Architect, OpenLDAP http://www.openldap.org/project/