Re: file backend
Jonghyuk Choi writes:
>> Not sure if you mean search indexes or not having the entire database in
>> memory, but anyway I don't think so.
>
> Actually both: the entries for a given "search index" are represented by
> the IDs by which entries are identified in the database as well as in
> the entry cache. (...)
>
> For large directories, managing entry locations in the file
> can improve search performance in memory-constrained cases
> and opens up the possibility of indexing support.
Hmm. I think bdb/ldbm would be better if you need to do this.
The one thing back-file could do that bdb/ldbm can't is replace
the database while slapd is running. And use another input format,
of course.
Maybe it would be better to forget the file backend and instead
implement a close/open database extended request for bdb/ldbm.
Then one could (see the sketch after this list)
- build a new database,
- send a 'close database' request, which will block incoming requests
to that database and wait for slapd to finish outstanding requests,
- replace the old database with the new one,
- send an 'open database' request, so slapd can reopen the database
and continue processing commands.
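For illustration only, here is a rough sketch of what the client side of
such an exchange might look like with the OpenLDAP C API. slapd has no
such extended operation today, so the OIDs below are made-up placeholders,
and sending the target database's suffix as a plain string in the request
value is just one possible encoding, not anything the server implements.

  /*
   * Sketch only: hypothetical close/open database extended requests.
   * The OIDs and the request value encoding are assumptions.
   */
  #include <stdio.h>
  #include <string.h>
  #include <ldap.h>

  /* Placeholder OIDs for the proposed operations. */
  #define OID_CLOSE_DB  "1.3.6.1.4.1.4203.666.0.99901"  /* hypothetical */
  #define OID_OPEN_DB   "1.3.6.1.4.1.4203.666.0.99902"  /* hypothetical */

  static int
  send_db_exop( LDAP *ld, const char *oid, const char *suffix )
  {
      struct berval val;
      char *retoid = NULL;
      struct berval *retdata = NULL;
      int rc;

      /* Request value: suffix of the database to close/reopen. */
      val.bv_val = (char *) suffix;
      val.bv_len = strlen( suffix );

      rc = ldap_extended_operation_s( ld, oid, &val, NULL, NULL,
              &retoid, &retdata );
      if ( rc != LDAP_SUCCESS )
          fprintf( stderr, "%s failed: %s\n", oid, ldap_err2string( rc ) );

      ldap_memfree( retoid );
      ber_bvfree( retdata );
      return rc;
  }

  int
  main( void )
  {
      LDAP *ld;
      struct berval cred = { 0, NULL };  /* anonymous bind, for brevity;
                                          * a real admin op would need an
                                          * authorized identity */

      if ( ldap_initialize( &ld, "ldap://localhost:389" ) != LDAP_SUCCESS )
          return 1;
      if ( ldap_sasl_bind_s( ld, NULL, LDAP_SASL_SIMPLE, &cred,
              NULL, NULL, NULL ) != LDAP_SUCCESS )
          return 1;

      /* 1. ask slapd to quiesce and close the database */
      send_db_exop( ld, OID_CLOSE_DB, "dc=example,dc=com" );

      /* 2. ...replace the database files out of band here... */

      /* 3. ask slapd to reopen it and resume processing */
      send_db_exop( ld, OID_OPEN_DB, "dc=example,dc=com" );

      ldap_unbind_ext_s( ld, NULL, NULL );
      return 0;
  }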
Or is this capability too unimportant to justify either a file backend
or such an extended request?
--
Hallvard