RE: Multimaster Algorithm (was: Documentation volunteer (which was: Cluster replication: multiple masters))
> Otherwise we'll get problems if the internal network is down: one
> node gets an update without changes (or one that at least looks that
> way), while an earlier update to _another_ node (with changes) is
> delayed in propagating to all nodes. Those nodes, all but one not
> knowing of the later update, will apply the wrong information.
Yes, this problem exists in any replicated database. Even
adding a new slave to your backup group (without multimaster) faces this
problem.
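For what it's worth, here is a rough sketch (plain Python, nothing
LDAP-specific, and the timestamps are made-up sequence numbers) of why
per-update timestamps help with exactly that: a node that remembers the
timestamp of its current data can drop a delayed older write instead of
applying the wrong information.

    # If each update carries a timestamp, a replica can refuse to
    # overwrite newer data with a delayed older write; without this
    # check the replicas diverge.
    def apply(entry, update):
        ts, attrs = update                 # (timestamp, attribute dict)
        if ts >= entry.get("_ts", 0):      # ignore anything older
            entry.update(attrs)
            entry["_ts"] = ts
        return entry

    u_old = (1, {"mail": "old@example.com"})   # the delayed prior update
    u_new = (2, {"mail": "new@example.com"})   # the later update

    a = apply(apply({}, u_old), u_new)     # in-order delivery
    b = apply(apply({}, u_new), u_old)     # u_old arrives last, is ignored
    assert a == b                          # both nodes converge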
Look at the Spread protocol at www.spread.org. It's not
directly LDAP-related, but they have dealt with many of the issues that
come up with database replication.
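One thing Spread provides that is directly relevant here is totally
ordered ("agreed") message delivery: every node sees every update in the
same global order, so order-dependent conflicts like the one above can't
happen. This is not Spread's API, just a toy sequencer showing the idea:

    import itertools

    sequencer = itertools.count()          # stand-in for the group protocol

    def multicast(update):
        return (next(sequencer), update)   # stamp a global sequence number

    def deliver(replica, messages):
        for _, attrs in sorted(messages):  # apply in the agreed order
            replica.update(attrs)
        return replica

    m1 = multicast({"mail": "old@example.com"})
    m2 = multicast({"mail": "new@example.com"})
    # Whatever order the messages arrive in, each replica sorts by
    # sequence number before applying, so all end up identical.
    assert deliver({}, [m2, m1]) == deliver({}, [m1, m2])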
Also look at Splash, a masterless in-memory database that uses the
Spread protocol. It's only a linked list underneath (last time I checked,
a few months ago), but the theory there is interesting.
> I still think, even if you solve all of the theoretical problems, there
> is way too much uncertainty in Multi-Master Replication. But in some
> special cases, e.g. if you _know_ that there _will_ be plenty of time
> between updates, it might even work.
Let's not forget that LDAP is generally used for read-mostly
databases. That's what it's designed and optimized for, so in real-world
use it may be rare for two objects with the same DN to be updated at
"the same time". I for one would be happy if users could simply update
their own addresses (from any node in my load-balanced cluster). My
admins are generally the only ones who do mass entries, and I can tell
them to simply act synchronously.
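If same-DN collisions really are that rare, even a crude reconciliation
pass may be enough: compare each DN's modification timestamp across
nodes, keep the newest copy, and flag genuine collisions for a human.
A sketch, where the shape of `replicas` (node name -> {dn: (timestamp,
attrs)}) and the last-writer-wins policy are both my assumptions:

    # Merge per-DN entries from several nodes, newest timestamp wins;
    # ties with differing data are handed to an admin instead of being
    # resolved silently.
    def reconcile(replicas):
        merged, conflicts = {}, set()
        for node, entries in replicas.items():
            for dn, (ts, attrs) in entries.items():
                if dn not in merged or ts > merged[dn][0]:
                    merged[dn] = (ts, attrs)
                elif ts == merged[dn][0] and attrs != merged[dn][1]:
                    conflicts.add(dn)
        return merged, conflicts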
I will be looking into this over the next few weeks; I'll post if
I get anything interesting put together. I've already got a Python-based
daemon that does file replication, and it shouldn't be too hard to modify
it to look at LDAP entries.
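The LDAP side of that could be as simple as periodically asking each
node for recently modified entries. A sketch of the polling half using
python-ldap; the URI, base DN, and the assumption that the server lets
you filter on the operational attribute modifyTimestamp are all
placeholders:

    import ldap

    def changed_entries(uri, base, since):
        # `since` is LDAP generalized time, e.g. "20000101000000Z"
        conn = ldap.initialize(uri)
        conn.simple_bind_s()             # anonymous bind, for the sketch
        filt = "(modifyTimestamp>=%s)" % since
        # ask for user attributes plus the operational timestamp
        return conn.search_s(base, ldap.SCOPE_SUBTREE, filt,
                             ["*", "modifyTimestamp"])

    for dn, attrs in changed_entries("ldap://node1.example.com",
                                     "dc=example,dc=com",
                                     "20000101000000Z"):
        print(dn, attrs.get("modifyTimestamp"))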
--Derek