On Jul 23, 2014 7:26 PM, "Howard Chu" <hyc@symas.com> wrote:
Ross MacGregor wrote:
I am running tests of LMDB and getting the MDB_MAP_FULL error when adding
entries to the database.
You must always specify what software version you're using.
I am filling a database to 80% capacity and deleting and adding entries
so that the database fluctuates between 60% and 80% capacity.
An issue with deletes wasting pages has been fixed in the most recent LMDB
snapshot.
If I add entries that are larger than a single page (4096 bytes), then
the database free list very quickly becomes fragmented and it dies with
a MDB_MAP_FULL error even though the database is less than 80% full. By
very quickly I mean running my test application for a few seconds with a
small database of 8 MB. If I increase the database size to 1 GB I get
the same results within a minute or two.
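For illustration, a minimal sketch of this write pattern, assuming an unnamed
DBI, an 8 MB map and 8 KB values (all placeholders; most error handling
omitted):

/* Rough reconstruction of the failing pattern: keep roughly a fixed number
 * of multi-page values live, deleting old ones as new ones are added, and
 * watch for MDB_MAP_FULL from mdb_put(). Sizes are placeholders. */
#include <stdio.h>
#include <string.h>
#include "lmdb.h"

#define LIVE_ENTRIES 600            /* ~600 * 8 KB is roughly 60% of 8 MB */

int main(void)
{
    MDB_env *env;
    MDB_dbi dbi;
    MDB_txn *txn;
    char value[8192];               /* > 1 page, stored as overflow pages */
    int rc, i;

    mdb_env_create(&env);
    mdb_env_set_mapsize(env, 8UL * 1024 * 1024);   /* 8 MB map */
    mdb_env_open(env, "./testdb", 0, 0664);        /* directory must exist */

    memset(value, 'x', sizeof(value));
    for (i = 0; ; i++) {
        MDB_val k, v;
        int old = i - LIVE_ENTRIES;

        mdb_txn_begin(env, NULL, 0, &txn);
        if (i == 0)
            mdb_dbi_open(txn, NULL, 0, &dbi);

        if (old >= 0) {             /* delete the oldest entry first */
            k.mv_size = sizeof(old); k.mv_data = &old;
            mdb_del(txn, dbi, &k, NULL);
        }
        k.mv_size = sizeof(i);      k.mv_data = &i;
        v.mv_size = sizeof(value);  v.mv_data = value;
        rc = mdb_put(txn, dbi, &k, &v, 0);
        if (rc == MDB_MAP_FULL) {   /* the error under discussion */
            mdb_txn_abort(txn);
            fprintf(stderr, "MDB_MAP_FULL after %d entries\n", i);
            return 1;
        }
        mdb_txn_commit(txn);
    }
}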
Fragmentation with larger records is definitely still a known issue.
Usually it only gets bad while there is heavy concurrent read activity,
which also prevents page reclaiming, but with the delete fix this type of
problem should be much less frequent.
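As background on that reader interaction: a read transaction pins the
snapshot it started with, so old pages it can still see cannot be reclaimed
until it ends. A minimal sketch of keeping a long-lived reader released
between lookups with mdb_txn_reset()/mdb_txn_renew() (the function and loop
shape are illustrative, not from this thread):

#include "lmdb.h"

/* Keep a reusable read-only transaction, but hold the snapshot only while
 * actually reading, so the writer can reclaim old pages in between. */
void reader_loop(MDB_env *env, MDB_dbi dbi, MDB_val *key, int iterations)
{
    MDB_txn *rtxn;
    MDB_val data;
    int i;

    mdb_txn_begin(env, NULL, MDB_RDONLY, &rtxn);
    mdb_txn_reset(rtxn);                 /* start in the released state */

    for (i = 0; i < iterations; i++) {
        mdb_txn_renew(rtxn);             /* take a fresh snapshot */
        if (mdb_get(rtxn, dbi, key, &data) == 0) {
            /* copy out what is needed here; data points into the map
             * and is only valid until the reset below */
        }
        mdb_txn_reset(rtxn);             /* release the snapshot */
    }
    mdb_txn_abort(rtxn);                 /* finally free the handle */
}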
I've even encountered this with a 1 GB database and entries that fit into
a single page, but it takes significantly longer.
If I get the error, I am considering defragmenting the database with the
mdb_env_copy2 function. This can take on the order of one second to
complete with the 1 GB database. It seems like a horrible workaround,
though, and I may need to replace LMDB with something slower but more
robust.
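For reference, the workaround amounts to something like the sketch below:
mdb_env_copy2() with MDB_CP_COMPACT writes a copy that omits free pages, and
the compacted copy then has to be swapped in for the original with all
handles closed (paths are placeholders):

#include "lmdb.h"

/* Write a compacted (defragmented) copy of the environment to dest_dir.
 * MDB_CP_COMPACT omits free pages and renumbers the remaining ones. */
int compact_to(MDB_env *env, const char *dest_dir)
{
    return mdb_env_copy2(env, dest_dir, MDB_CP_COMPACT);
}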
Is there any other way to perform a defragmentation of the database?
Not at present, nor are there any plans to add such. The intent is for
LMDB to never need defrag or maintenance. It will probably always need
at least 50% free space, though.
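One way to keep an eye on that headroom is to compare the highest page in
use against what the map can hold; a rough sketch using the public
MDB_envinfo/MDB_stat fields (the 50% threshold just mirrors the figure
above):

#include "lmdb.h"

/* Fraction of the map currently occupied by allocated pages.
 * me_last_pgno is the last page number used, so pages-in-use is +1. */
double map_used_fraction(MDB_env *env)
{
    MDB_envinfo info;
    MDB_stat st;

    mdb_env_info(env, &info);
    mdb_env_stat(env, &st);
    return (double)(info.me_last_pgno + 1) * st.ms_psize / info.me_mapsize;
}

/* e.g. if map_used_fraction(env) > 0.5, grow the map or compact early */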