py-lmdb
hi all, and a special hello to howard - i had forgotten you're
working on openldap.
i've just discovered lmdb and its python bindings, and to be
absolutely honest i am completely astounded. that's for two reasons:
the first is how stupidly quick lmdb is, and the second is that,
bizarrely, although it walks all over the alternatives it isn't more
widely adopted. mariadb added leveldb last year, and mongodb, i
believe, is looking at a leveldb port.
i have been looking at a *lot* of key-value stores recently, trying
to find the fastest possible one after realising that standard SQL
and NOSQL databases simply aren't fast enough. there is something
called structchunk which, instead of storing the object in leveldb
itself, stores only an mmap file offset in the value and keeps the
actual object in an mmap'd file... and it's *still* nowhere near as
quick as lmdb.
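to illustrate, here is a minimal sketch of that offset-in-the-value
pattern (my own reconstruction: the ChunkStore name is made up, and a
plain dict stands in for the leveldb index):

    import mmap
    import struct

    class ChunkStore:
        def __init__(self, path, size=1 << 20):
            # pre-size the data file so the whole thing can be mmap'd
            with open(path, "wb") as f:
                f.truncate(size)
            self.f = open(path, "r+b")
            self.mm = mmap.mmap(self.f.fileno(), size)
            self.index = {}   # stand-in for the leveldb index
            self.tail = 0     # next free offset in the data file

        def put(self, key, value):
            off = self.tail
            self.mm[off:off + len(value)] = value
            self.tail += len(value)
            # only a 12-byte (offset, length) record goes in the index
            self.index[key] = struct.pack(">QI", off, len(value))

        def get(self, key):
            off, length = struct.unpack(">QI", self.index[key])
            return self.mm[off:off + length]

    store = ChunkStore("/tmp/chunks.dat")
    store.put(b"k1", b"hello")
    assert store.get(b"k1") == b"hello"

even with the index entries kept tiny like that, it's still nowhere
near lmdb.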
so i am both impressed and puzzled :) i have created a debian RFP
for the python bindings, in case it helps.
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=748373 - can i
suggest that anyone who wishes to see py-lmdb in debian send a
message seconding it: because as a general rule debian developers do
not like my blunt honesty and they tend to ignore my advice, so if
you would like to see py-lmdb packaged someone needs to speak up.
questions:
i wrote some python evaluation code that stored 5,000 records with
8-byte keys and 100-byte values before doing a transaction commit: it
managed 900,000 records per second (which is ridiculously fast, even
for python). however: when i enabled append mode (on the cursor
put) - and yes, i used the db stats to create a key that each time
was 1 bigger lexicographically than all other keys - bizarrely,
things *slowed down* ever so slightly, maybe by about 3 to 5%.
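for reference, the test looked roughly like this (a reconstruction
rather than my original code: the path and batch count are invented,
and a plain counter stands in for the stats-derived key - big-endian
packing keeps the keys monotonically increasing in memcmp order,
which is what append mode requires):

    import struct
    import time
    import lmdb

    env = lmdb.open("/tmp/bench.lmdb", map_size=1 << 30)
    value = b"x" * 100

    n = 0
    t0 = time.time()
    for batch in range(200):
        # one write transaction per 5,000 records, as described above
        with env.begin(write=True) as txn:
            cur = txn.cursor()
            for _ in range(5000):
                # keys must sort strictly after everything already in
                # the db for append=True to be valid
                key = struct.pack(">Q", n)
                cur.put(key, value, append=True)
                n += 1
    print("%.0f records/sec" % (n / (time.time() - t0)))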
what gives, there? the benchmarks show that append mode is supposed
to be faster (a *lot* faster), and that is simply not happening. is
the overhead from python so large that it wipes out the speed
advantage?
l.