Re: replicating back-sql data
You don't indicate what version of OpenLDAP's slapd
you're using.
> We are using back-sql to point to a sybase database. ldap_entries is a
> view of about 40,000 entries. It's working ok. Unfortunately, every time
> a search like this is executed:
>
> ldapsearch -H ldaps://xx.uen.org -D uid=bmidgley,dc=my,dc=uen,dc=org -x
> -W -d 256 -z 10 "(uid=bmidgley)"
>
> It looks like the backend selects all records from ldap_entries. It
> takes about 6 seconds on our fastest db server.
This is strange, since your filter should result in
an exact match search.
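If the backend is nonetheless scanning the whole view, one thing worth checking on the Sybase side (a sketch only; I'm assuming your uid values live in a column of some base table, e.g. persons.uid, referenced by your ldap_attr_mappings entry) is whether that column is indexed:

```sql
-- Hypothetical table/column names; adjust to match the
-- sel_expr/from_tbls you configured in ldap_attr_mappings.
CREATE INDEX idx_persons_uid ON persons (uid)
```

Without an index on the column behind the filtered attribute, an exact-match filter can still degenerate into a full scan at the SQL level.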
>
> Is there a way to replicate this back-sql data into a traditional
> openldap backend to improve performance? I know slurpd needs a log so
> that won't work. Is there anything out there that will just do a brute
> force push from one ldap db to another? I think this would be more
> reasonable than constant heavyweight user lookups. We could deal with
> some lag time in the updates.
SysNet developed a tool to synchronize SQL and LDAP
servers in either a push or pull mode; you may contact
<info@sys-net.it>.
I guess something could be attempted using syncrepl,
but since there are no timestamps in the default
back-sql implementation, this might not be a viable
solution.
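For what it's worth, a consumer on a traditional backend could look something like the sketch below (hostnames, DNs, and credentials are placeholders). Because back-sql has no entryCSN/timestamps by default, refreshOnly would effectively re-pull the whole tree at each interval, which may still be cheaper than per-query load:

```
# Sketch of a syncrepl consumer in slapd.conf -- untested against
# a back-sql provider; all values below are placeholders.
database        bdb
suffix          "dc=my,dc=uen,dc=org"
directory       /var/lib/ldap

syncrepl        rid=001
                provider=ldap://sql-host.example.com
                type=refreshOnly
                interval=00:01:00:00
                searchbase="dc=my,dc=uen,dc=org"
                bindmethod=simple
                binddn="uid=replica,dc=my,dc=uen,dc=org"
                credentials=secret
```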
>
> BTW, we have several web apps that use the sybase data so unfortunately
> we have to stick with it being the definitive database.
Are you using a view or a table for ldap_entries?
The view approach may definitely impact performance,
although the table approach requires maintaining one
extra table.
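If you want to try the table approach without touching the web apps, one option (a sketch; names are placeholders, and on Sybase ASE "select into" must be enabled for the database) is to periodically materialize the view into a real, indexed table and point back-sql at that:

```sql
-- Sketch: snapshot the view into a plain table, then index the
-- columns back-sql filters on. Table/column names are assumptions.
SELECT * INTO ldap_entries_tbl FROM ldap_entries_view
CREATE INDEX idx_entries_keyval ON ldap_entries_tbl (keyval)
CREATE INDEX idx_entries_dn ON ldap_entries_tbl (dn)
```

You trade some staleness between refreshes for not paying the view's cost on every lookup.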
p.
--
Pierangelo Masarati
mailto:pierangelo.masarati@sys-net.it
SysNet - via Dossi,8 27100 Pavia Tel: +390382573859 Fax: +390382476497