--On Thursday, January 24, 2008 12:12 PM -0600 Brad Knowles
<b.knowles@its.utexas.edu> wrote:
> I do not yet understand a great deal about how our existing OpenLDAP
> systems are designed, but I am curious to learn what kinds of
> recommendations you folks would have for a large-scale system like this.
This is generally good information to know. But basically, have you
read over the documentation on understanding your system requirements,
i.e., how to properly tune DB_CONFIG and slapd.conf?
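For a BDB/HDB backend, most of the tuning lives in those two files. A
rough sketch (the cache sizes here are placeholders; you would size them
to hold your actual database and working set):

    # DB_CONFIG -- Berkeley DB environment tuning (illustrative values)
    set_cachesize    2 0 1             # 2 GB BDB cache in a single segment
    set_lg_regionmax 262144            # log region size
    set_lg_bsize     2097152           # log buffer size
    set_flags        DB_LOG_AUTOREMOVE # remove old transaction logs

    # slapd.conf -- backend section (illustrative values)
    cachesize    100000   # entries kept in slapd's entry cache
    idlcachesize 300000   # IDL cache; ~3x cachesize is a common hdb guideline
    checkpoint   1024 5   # checkpoint every 1024 KB written or 5 minutes

The goal is to keep the working set in the BDB cache so slapd rarely
has to touch disk for reads.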
> In the far, dark, distant past, I know that OpenLDAP did not handle
> situations well when you had both updates and reads occurring on the
> same system, so the recommendation at the time was to make all updates
> on the master server, then replicate that out to the slaves where all
> the read operations would occur. You could even go so far as to set up
> slaves on pretty much every single major client machine, for maximum
> distribution and replication of the data, and maximum scalability of
> the overall LDAP system.
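Mechanically, each of those slaves is just a syncrepl consumer in
2.3/2.4. A minimal sketch, with placeholder hostnames, DNs, and
credentials:

    # slapd.conf on a replica (placeholders throughout)
    syncrepl rid=001
      provider=ldap://master.example.com
      type=refreshAndPersist
      retry="60 +"
      searchbase="dc=example,dc=com"
      bindmethod=simple
      binddn="cn=replicator,dc=example,dc=com"
      credentials=secret
    updateref ldap://master.example.com

The updateref line is what keeps writes flowing to the master: any
client that tries to write to the replica gets a referral back.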
Updates -> master is always recommended. You can set up multi-master
with 2.4, but it will be slower than a single-master scenario. The
general best practice for failover is to have a primary master that
receives writes, and a secondary master that gets the updates and takes
over via failover mechanisms if the primary goes down, becoming the new
primary.
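In 2.4 that primary/secondary pair is what MirrorMode provides: both
nodes replicate from each other, and an external failover mechanism
(load balancer, heartbeat, etc.) ensures only one receives writes at a
time. A sketch of one node, again with placeholder names:

    # slapd.conf on master1 (master2 is identical except serverID 2
    # and provider pointing back at master1)
    serverID 1
    syncrepl rid=001
      provider=ldap://master2.example.com
      type=refreshAndPersist
      retry="60 +"
      searchbase="dc=example,dc=com"
      bindmethod=simple
      binddn="cn=replicator,dc=example,dc=com"
      credentials=secret
    mirrormode on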
> If you did use a multi-master cluster pair environment that handled all
> the updates and all the LDAP queries that were generated, what kind of
> performance do you think you should reasonably be able to get with the
> latest version of 2.4.whatever on high-end hardware, and what kind of
> hardware would you consider to be "high-end" for that environment? Is
> CPU more important, or RAM, or disk space/latency?