Re: Slapd frontend performance issues
Sam Drake wrote:
> I've also been doing quite a bit of work on the Back-sql code to optimize
> it, and now have Back-sql running at 100 queries/second against our
> in-memory relational database, TimesTen. Your 6 queries/sec number pretty
> much matches what I saw using Oracle with back-sql out-of-the-box as well.
> I'm using similar Sun hardware to Eric's case.
>
> I have a long list of additional enhancements to make, including using
> connection pools as Eric suggests. I haven't got around to that one yet.
>
> I'll send additional email to the list in the next day or two describing the
> changes we've made, none of which are specific to TimesTen (they should
> improve the performance with most backend ODBC databases). I'd be happy to
> make the changes public to Dmitry and/or anyone else who would like to try
> them. If anyone's interested in details sooner, please let me know.
I would be quite happy to commit any improvements to back-sql, especially when
they are contributed as ready-to-go patches :)
Thank you, and thanks to everyone who has something to contribute.
In fact, I also have a long list of improvements, not only concerning
performance, but unfortunately I have no resources to code for back-sql
nowadays :( So we could hammer out a TODO list here for people with more
resources to pick up.
As for performance issues - Eric and I had a long thread in which several ideas
were mentioned:
1) The most evident: the back-sql design sacrifices some speed for flexibility -
the ability to map _any_ relational schema requires per-attribute queries, which
in most (rather simple) cases with administration databases could be done in a
single query.
The problem is that I had quite different goals when developing the back-sql
pilot, where flexibility was the major issue and the DIT structure was quite
complex...
For those familiar with back-sql principles, I see several possible improvements
to the mapping metadata, like flagging attributes that can hold only one value
by relational schema design, so that such attributes can be loaded together in
a single query (a sketch follows below)...
But first I would like to have some clear profiling results, to see whether it
is really worth doing...
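To illustrate the grouped-loading idea, here is a minimal sketch (not actual
back-sql code; the table and column names are invented) of collapsing
per-attribute queries into one query for attributes flagged as single-valued:

    /* Current approach: one query per attribute, one round trip each. */
    static const char *query_cn   = "SELECT cn FROM persons WHERE id = ?";
    static const char *query_sn   = "SELECT sn FROM persons WHERE id = ?";
    static const char *query_mail = "SELECT mail FROM persons WHERE id = ?";

    /* Grouped approach: attributes flagged as single-valued in the
     * mapping metadata come back in one round trip. */
    static const char *query_grouped =
        "SELECT cn, sn, mail FROM persons WHERE id = ?";

Multi-valued attributes and attributes stored in join tables would still need
separate queries, which is why the flag in the mapping metadata matters.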
Other improvements that were discussed are:
- using another ODBC suite like OOB, or even direct APIs for each RDBMS, like
OCI for Oracle. If profiling shows that the ODBC overhead really is that big
(which I seriously doubt), this is fairly simple to do, since ODBC calls are
wrapped fairly cleanly in back-sql, so redirecting them to OCI or any other
API via #define is no problem (see the first sketch after this list)
- using a pool of RDBMS connections (currently an RDBMS connection is opened for
each LDAP connection), which would help with stupid LDAP clients that like to do
one LDAP query per connection, but won't help much in general, I think (see the
second sketch after this list)...
- fixing some deficiencies in the current algorithms, which I never had a chance
to address myself (some of them were discussed with Eric)
- finding out why Eric got only 300 queries/sec with a dummy backend that did
nothing but return success for every request, and doing other proper profiling
that would finally pinpoint where most of the time is spent
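To make the #define idea concrete, here is a minimal sketch (not actual
back-sql code; BACKSQL_EXEC and backsql_oci_exec are hypothetical names, and
only SQLExecDirect is a real ODBC call) of how call sites could stay identical
while the underlying API changes:

    #include <sql.h>      /* ODBC */
    #include <sqlext.h>

    #ifdef BACKSQL_USE_OCI
    /* hypothetical wrapper that would translate to OCIStmtExecute() etc. */
    int backsql_oci_exec(void *sth, char *query);
    #define BACKSQL_EXEC(sth, query) \
            backsql_oci_exec((sth), (query))
    #else
    #define BACKSQL_EXEC(sth, query) \
            SQLExecDirect((SQLHSTMT)(sth), (SQLCHAR *)(query), SQL_NTS)
    #endif

    /* call sites stay unchanged whichever API is compiled in:
     *     rc = BACKSQL_EXEC(sth, "SELECT ...");
     */

And a rough sketch of the connection pool (again, not back-sql code; the names
are made up, and error handling and pool initialization are omitted for
brevity):

    #include <pthread.h>
    #include <sql.h>

    #define POOL_SIZE 8

    typedef struct backsql_pool {
        SQLHDBC         conns[POOL_SIZE];   /* open RDBMS connections */
        int             in_use[POOL_SIZE];  /* 1 if handed out        */
        pthread_mutex_t mutex;
        pthread_cond_t  not_empty;
    } backsql_pool;

    /* Borrow a connection, blocking until one is free. */
    SQLHDBC pool_get(backsql_pool *p)
    {
        SQLHDBC dbc = SQL_NULL_HDBC;
        int i;

        pthread_mutex_lock(&p->mutex);
        for (;;) {
            for (i = 0; i < POOL_SIZE; i++) {
                if (!p->in_use[i]) {
                    p->in_use[i] = 1;
                    dbc = p->conns[i];
                    goto out;
                }
            }
            pthread_cond_wait(&p->not_empty, &p->mutex);
        }
    out:
        pthread_mutex_unlock(&p->mutex);
        return dbc;
    }

    /* Return a connection to the pool and wake one waiter. */
    void pool_put(backsql_pool *p, SQLHDBC dbc)
    {
        int i;

        pthread_mutex_lock(&p->mutex);
        for (i = 0; i < POOL_SIZE; i++) {
            if (p->conns[i] == dbc) {
                p->in_use[i] = 0;
                break;
            }
        }
        pthread_cond_signal(&p->not_empty);
        pthread_mutex_unlock(&p->mutex);
    }

This way each LDAP operation borrows an already-open RDBMS connection instead
of paying the connect/disconnect cost on every LDAP connection.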
A real improvement would be to change the back-sql architecture as follows:
- first, a "service" layer: a general back-sql RDBMS API, connection pool and
the like
- second, two different logic schemes: one is the old scheme, with
per-objectclass mappings to SQL queries and support for ad-hoc LDAP queries;
the other maps objectclasses (or even a limited set of supported LDAP queries)
to C functions, which are easy to customize and would generally be more
effective at retrieving/storing data. In other words, the second approach is
just a framework to help people develop their own custom efficient backends
for their custom schemas (sketched below)
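To show what the second scheme could look like, here is a minimal sketch (not
actual back-sql code; all names are hypothetical) of a table mapping
objectclasses to hand-written C handlers:

    #include <sql.h>

    /* signature of a hand-written, schema-specific search routine */
    typedef int (*oc_search_fn)(SQLHDBC dbc, const char *base,
                                const char *filter);

    typedef struct oc_handler {
        const char   *oc_name;    /* objectclass this handler serves */
        oc_search_fn  oc_search;  /* custom query logic for it       */
    } oc_handler;

    /* Custom handler for a known schema: one fixed query instead of
     * the generic per-attribute mapping machinery. */
    static int search_inetorgperson(SQLHDBC dbc, const char *base,
                                    const char *filter)
    {
        /* run e.g. "SELECT cn, sn, mail FROM persons WHERE ..."
         * and build LDAP entries from the single result set */
        return 0;
    }

    static oc_handler handlers[] = {
        { "inetOrgPerson", search_inetorgperson },
        { NULL, NULL }
    };

The framework would only have to dispatch on the objectclass and leave the
actual SQL to the user's functions.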
All of this primarily requires good profiling, to find out which direction is
most critical for boosting performance. So if you and other folks have
profiling results or other suggestions - please post. And patches are welcome,
of course :)
WBW, Dmitry