Thanks again Howard for the help early on. I'm into the development
of my real application now, and I'm able to leverage the docs much
better.
With my application, I have built a ~650MB database containing 9
sub-databases. I'll have a separate writer application to do updates,
which should be fairly straightforward. There are a couple of things I
want to make sure I'm doing correctly on the reader side:
My application is a web service written in Go, using Go's net/http
package, which spawns a new goroutine for each incoming request.
Goroutines run concurrently but may be multiplexed onto a single OS
thread, so I will be using the MDB_NOTLS flag when opening the
environment. Then, from what I can gather, it seems I will need to
allocate a pool of read-only transactions if I want to avoid creating
a new transaction for each HTTP request (is that right?).
Something like the following:
/* Tune this so the pool never runs dry in practice */
const N_READERS = 512

// Open all sub-database handles once; DBI handles remain valid
// across transactions once this commits.
txn := env.BeginTxn(nil, MDB_RDONLY) // mdb_txn_begin: parent=nil, flags=MDB_RDONLY
for _, dbname := range dbnames {
    txn.DBIOpen(dbname, 0) // mdb_dbi_open: name=dbname, flags=0
}
txn.Commit()

// Pre-allocate the pool of read-only transactions.
for i := 0; i < N_READERS; i++ {
    txn := env.BeginTxn(nil, MDB_RDONLY)
    txnPool.Add(txn)
}
Then, for each HTTP request, I would pull a txn out of the pool, use
it for that request's sequential queries, reset it, renew it, and put
it back in the pool.
I've got a proof of concept working with the above strategy, but does
this all sound sane?