Re: openldap performance numbers vs NS
On Thursday, August 9, 2001, at 09:12 AM, Archive User wrote:
> Hiya all,
> I am currently trying to get openldap accepted
> as the ldap directory solution at my company.
Others have mentioned the price angle, I figured I'd go into a
little more detail...
> Netscape representatives have been telling my boss
> that openldap can't scale,
To what level? 300 LDAP servers, each with dual 1GHz CPUs and
8GB of RAM?
OpenLDAP *can* do that.... it scales best by adding machines,
and it scales much, much better on the same budget. :-)
> performs poorly,
On the same CPU, disk, and hardware, OpenLDAP is slower than
iPlanet. It's also cheap enough (free) that you can easily
build out two to over *one hundred times* as much hardware
running openldap as you could when running iPlanet and paying
for the appropriate licensing.
> doesn't have the needed stability,
iPlanet is on thin ice here, IMHO. I've now professionally
repaired 3 generations of Netscape/iPlanet software, including
several sites that resorted to rebooting and completely
reloading (!!!) the servers on a daily basis, just to make their
bugs go away. Everything from vanishing connections to
spontaneous disappearance of records with iPlanet/NDS
1.x->3.x.... at 2 sites, they decided to switch to OpenLDAP
*specifically* for stability reasons. They knew they needed a
little more hardware, but they were more than happy to dump the
old servers.
> and can't hold
> nearly the amount of data that iPlanet directory
> server can.
OpenLDAP does not usually use a relational database backend;
it's typically set up with a lightweight local database (the
default LDBM backend). It definitely can use much larger DB
systems, if massive storage is part of your requirements.
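As a point of reference, here's a minimal sketch of the backend
section of slapd.conf for that default lightweight backend (the
suffix, rootdn, password, and paths are placeholders, not a
recommendation for your site):

    # slapd.conf fragment -- default LDBM backend (1.x/2.0 era)
    database    ldbm
    suffix      "dc=example,dc=com"
    rootdn      "cn=Manager,dc=example,dc=com"
    rootpw      secret
    directory   /var/lib/ldap
    # index the attributes you actually search on
    index       objectClass,uid    eq
    index       cn,sn              eq,sub
    # number of entries to keep in the in-memory cache
    cachesize   10000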
> Netscape claims:
> 1. Can handle over 50 million entries per server
Will your directories grow to have 50 million entries, or does
your company have 50 million users? Not many do.... and
considering the pricing from iPlanet on such a thing, it might
cost *less* to have a small team write a directory server, from
scratch, in C, than to buy it from them (really, about $400,000
USD). :-)
> 2. Can import over 1 million entries per hour
<cough>
Gee, I wonder why somebody would need to import entries real
fast, especially when, on a very stable server, this task should
only need to be done *once* (when setting up the server). I
wonder what would compel a company to ensure that reloading
millions of entries would always happen very quickly?
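For what it's worth, bulk loading into OpenLDAP is normally done
offline with slapadd from an LDIF file; a rough sketch (the file
name, manager DN, and password are placeholders):

    # with slapd stopped, load the LDIF straight into the backend
    slapadd -l bulk-load.ldif
    # or, against a running server (slower: it goes through the
    # LDAP protocol instead of writing the database directly)
    ldapadd -x -D "cn=Manager,dc=example,dc=com" -w secret -f bulk-load.ldif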
> 3. Has achieved a query rate of 5200 entries per hour
That seems low. Even for openldap. That's only 1.5 per second.
OpenLDAP can beat that, easy. (I think they mean per second,
I've seen iPlanet get 3000+ per sec on one good hardware box). I
don't know what the max throughput of single-machine OpenLDAP
is, as I don't scale it that way... maybe somebody else on the
list has thrown it onto some single massive high-speed machine.
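If you want to sanity-check query rates yourself, here's a crude
sketch using nothing but the stock client tools (host, base DN,
and filter are placeholders; it also pays connection setup on
every query, so real throughput will be higher than it reports):

    # time 1000 one-at-a-time searches against the server
    time sh -c 'i=0; while [ "$i" -lt 1000 ]; do
        ldapsearch -x -h ldap.example.com -b "dc=example,dc=com" \
            "(uid=user$i)" dn >/dev/null
        i=$((i+1))
    done'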
> 4. Offers performance that scales linearly with multiple CPUs
OpenLDAP scales linearly with multiple CPUs.... on multiple
machines. Anybody who truly believes that actual linearity can
happen by adding CPUs to *one* machine needs to take a
microprocessor course... it's not actually possible to do this.
The very best systems achieve "near-linear" (90-96%), but not
linear, 100%, performance. My general openldap scaling method is
to add another $1000 *nix box, rather than a $1000 server-grade
CPU to an MP box, which not only linearly scales queries, it
also linearly scales storage, throughput, and redundancy. :-)
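That add-another-box approach is just ordinary OpenLDAP
replication; a minimal sketch of the master-side slapd.conf
directives for the slurpd-style replication of that era (host
names, DN, and credentials are placeholders):

    # master: log changes and let slurpd push them to each slave
    replogfile  /var/lib/ldap/replog
    replica     host=ldap2.example.com:389
                binddn="cn=Replicator,dc=example,dc=com"
                bindmethod=simple credentials=secret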
Keep in mind that the entire reason people needed single-machine
scalability was per-machine cost.... the days of $1000 for a
1GB hard drive, RAM at $40 per megabyte, software licenses at
$300-$3000 per machine, and 4U for decent cooling, meant that
buying more machines was unreasonable. Now that x86 machines are
tiny and dirt cheap, it's a viable option to just throw another
cheap *nix box into the racks, for similar costs to buying into
big hardware and adding CPU's. (Not only is it redundant, an
entire motherboard can die/burn/be dipped in acid/whatever
without the cluster going down.....)
> 5. 500 million directory licenses sold worldwide (over 70% of the
> ldap market)
Market? That only counts products that are *sold*, and openldap is
not "sold". Not only that, but it's misleading if people think
that those are 500 million sites that just wanted a directory...
most of the iPlanet software suite *requires* iPlanet directory
server to run. Need calendaring? Bundled with their directory.
Web portal? Bundled with their directory. Mail services? Bundled
with their directory. Enterprise web server? You got it, iPlanet
Directory is part of it. Their directory is the heart and soul
of most of their product lineup, so it's been designed as a
simple "database system", with an LDAP interface to a high-speed
backend. Their licensing scheme for all of their software uses
the directory just to store the *software license
numbers*.... (Compare to Windows 2K, which is probably, very soon,
going to lay claim to the "most installed" LDAP directory
service, not because folks chose to use Active Directory, but
because running it is pretty much required to manage multiple
users on NT.)
> Does anyone have any real world experiences with openldap that
> show it can scale, performs well, is stable, etc.?
One of my clients: 20K users, in 18 countries, and 720 websites,
running constant queries on 3 geographically distributed servers
(with one failover, in case one of the servers goes down), all
x86, master is server grade (<$6,000 USD), slaves and failover
are all commodity (<$2,000 USD) desktop hardware. Longest run
time was on a machine that went for 527 days without rebooting
(master). Most drastic stability issue was ensuring that we had
backups to reload from every 3 months or so (on a stock RedHat
6.0 openldap version 1.0.9, IIRC), but I haven't done a reload
since upgrading/recompiling to newer OpenLDAP (1.x and 2.x)
versions back in November (10 months now, I guess...). Doing it
this way, we also maintained 100% system uptime when we moved
one server 900 miles away in the back of a truck (company
relocation). Our *total* OpenLDAP system downtime over 3 years
has been 27 (very painful) minutes, or roughly 9 minutes a year
(clustering and failover is a good thing). In the last year,
we've had zero minutes of full system downtime, with occasional
node outages (OS upgrades, emergency network outages, etc.)
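For reference, the slaves and the failover box in a setup like
this are just ordinary replicas; a minimal sketch of the
consumer-side slapd.conf directives that pair with the master
config sketched earlier (DN and URL are placeholders):

    # slave/failover: accept pushed updates from the master's
    # replication DN, and refer any write attempts back to it
    updatedn   "cn=Replicator,dc=example,dc=com"
    updateref  ldap://ldap1.example.com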
For high transaction rate work (say calendaring, or a db-driven
website), we use PostgreSQL _and_ LDAP, and by tying data access
to specific information, we balance out our needs for data with
constant edits/changes and high-speed, directory-driven access.
We use separate connections (rather than filtering through an SQL
backend) for maximum performance. I guess, in one way, this can
almost be viewed as echoes of the mainframe flat-file vs.
distributed RDBMS argument. Either beef up the backend to get
high-speed, flat data access, or deploy different systems as
needed, when needed, where needed.... we use our directory for
storage of static information about people and locations, and
dynamic RDBMSs for information about dynamically changing
systems.
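As a rough illustration of that split (the host names, base DN,
and table are made up for the example, not our actual setup):

    # static, read-mostly person data comes from the directory...
    ldapsearch -x -h ldap.example.com -b "ou=People,dc=example,dc=com" \
        "(uid=jdoe)" cn mail telephoneNumber
    # ...while frequently changing data lives in the RDBMS
    psql -h db.example.com -d appdata \
        -c "SELECT * FROM calendar_events WHERE owner = 'jdoe';"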
> We would probably be looking at 50k records tops (and that's if
> I put the kitchen sink in it) and using the latest version 2.x.
If your boss wants to pony up the cash for what iPlanet says is
required to achieve the above 50-million-entry specs, hey, that's
not necessarily a bad way to go. Maybe you'll get a nice pair of
quad-CPU Sun boxen out of it (if you'll need two, for
failover?). :-) It is, indeed, faster on a single machine than
OpenLDAP in every test. How much faster depends on the test, and
the tuning of each directory and machine.
But if you price it out, and then figure out how much x86 *nix
hardware you can buy for the same price, you may be able to give
each department their own, dedicated, OpenLDAP server, and
achieve not only greater *overall* performance, but lower
latency, and an insane amount of redundancy and failover....
(For the 50 million user spec, at that performance level, budget
at least half a million dollars.... that's a _lot_ of x86 *nix
servers, and one heck of a budget for maintenance and
coordination of the servers).
Going to the Sun store, to price for your actual expected needs
(50k entries):
http://store.sun.com/catalog/doc/BrowsePage.jhtml?cid=64499
the media and docs for iPlanet (with 0 users) costs $200, and
then it's $2 per entry for less than 200K entries, so with 50K
entries, that's a $100K license, plus your OS licenses and
hardware for whatever platform.....
$100,200 US dollars, before hardware costs.
For that cost, you can set up 50 OpenLDAP servers on cheapie $2k
x86 *nix boxes.
Yes, *50*.
Even with the worst openldap performance specs, it's hard to
say that iPlanet is consistently 50 times faster, especially if
you have those 50 deployed in clusters at the best points to
reduce latency over a large network. Even if a dealer or
salesman gave you a 50% price reduction, that's still 25
dedicated boxes for the same cost. If your current business setup is
made up of 25 different locations, that's still a dedicated LDAP
machine for *every single location*. If you only have one
location, with a server room, and 50 servers, that's a dedicated
LDAP machine for every two servers, or 5 massively burly servers
running the entire directory out of RAM.
So, to summarize:
iPlanet is, indeed, consistently faster when comparing single machines.
A Ferrari is also faster than most cars when comparing single
machines. (and iPlanet may actually cost you more than a
Ferrari).
You can buy one Ferrari for your data delivery fleet, or 50,
much cheaper, cars.
50 cheaper cars can make many more deliveries, which is faster
overall, with less downtime when a single (or even 10) downtime
event occurs.
50 cheaper cars also require more maintenance, but you may only
need 5 moderately faster, or ten somewhat faster cars, to handle
the load, and you may not *need* a Ferrari for your deliveries.
OTOH, your computing needs may actually require the maximum
speed of one or two, super fast, machines, tied into one or two
machines in a data center, with a single, high-speed,
application being used, and you cannot justify 25U (or 5 of 5U)
of rack space for a cluster of LDAP servers, nor do you wish to
add the management of that cluster into the budget. It's all
about the individual company's need, budget, and finding the
right balance.
Or, perhaps (this is a long shot), balancing expense with
overall performance or final value is a non-existent
consideration in your company..... In which case, I'd like to
know what company you work for, so I can submit a resume or
consulting contract. ;-)
-Bop
--2D426F70|759328624|00101101010000100110111101110000
ron@opus1.com, 520-326-6109, http://www.opus1.com/ron/
The opinions expressed in this email are not necessarily those
of myself,
my employers, or any of the other little voices in my head.