Re: Road Map : improved scalability - how to proceed ?
"Armbrust, Daniel C." wrote :
>
> So, what exactly is too large?
>
I have no idea - but I can't help feeling that the only file size limit
that *should* be imposed is the maximum file size supported by the
underlying operating system. Any other file size limits imposed by either
slapd or the backends should be addressed, in my honest opinion.
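For what it's worth, that OS-imposed ceiling is itself a combination of
things (the filesystem's maximum file size, large-file support in the
build, and the per-process resource limit). Just to illustrate the last
one, here is a small sketch using Python's standard resource module,
nothing OpenLDAP-specific, Unix-only:

    # Show the per-process file size limit (one possible source of an
    # EFBIG / "File too large" error). RLIM_INFINITY means the kernel's
    # resource limit is not the ceiling; the filesystem and the
    # application's large-file support then decide how big a file can get.
    import resource

    soft, hard = resource.getrlimit(resource.RLIMIT_FSIZE)
    for name, value in (("soft", soft), ("hard", hard)):
        if value == resource.RLIM_INFINITY:
            print("%s RLIMIT_FSIZE: unlimited" % name)
        else:
            print("%s RLIMIT_FSIZE: %d bytes" % (name, value))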
But if we are seriously going to try to run Mindcraft's DirectoryMark
benchmark in order to assess our scalability, we should at least be able to
load the DIT with 100,000,000 entries, with each entry having at least the
following attributes:
objectClass
cn
sn
description
facsimileTelephoneNumber
l
postalAddress
telephoneNumber
title
And if I did my calculations right (I'm lousy at math, I must admit ;) this
could easily lead to a 10+ GB LDIF file.
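To put a rough number on it (back-of-the-envelope only - the entry below
is entirely made up, and real data will obviously be bigger or smaller):

    # Rough LDIF size estimate: build one representative entry carrying
    # the attributes listed above and multiply by the entry count.
    # Every value here is an invented placeholder.
    entry = "\n".join([
        "dn: cn=User 00000001,ou=people,dc=example,dc=com",
        "objectClass: inetOrgPerson",
        "cn: User 00000001",
        "sn: User",
        "description: DirectoryMark-style test entry number 00000001",
        "facsimileTelephoneNumber: +1 555 000 0001",
        "l: Anytown",
        "postalAddress: 1 Example Street $ Anytown $ XX 00000",
        "telephoneNumber: +1 555 000 0002",
        "title: benchmark user",
        "",  # blank line separating LDIF entries
    ]) + "\n"

    entries = 100 * 1000 * 1000
    print("bytes per entry: %d" % len(entry))
    print("estimated LDIF size: %.1f GB" % (len(entry) * entries / 1e9))

That works out to roughly 300 bytes per entry, i.e. somewhere around
30 GB for 100,000,000 entries - well past 10 GB even if my placeholder
values turn out to be on the generous side.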
Sincerely,
John Smith
----- Original Message -----
From: "Armbrust, Daniel C." <Armbrust.Daniel@mayo.edu>
To: <openldap-software@OpenLDAP.org>
Sent: Monday, April 12, 2004 5:53 PM
Subject: RE: Road Map : improved scalability - how to proceed ?
> This question may fit - I just tried to create a database with slapadd
> and a 2.1 GB LDIF file, and all I get is a message saying:
> Filename: File too large
>
> This was with 2.2.5 - I'm going to grab the latest OpenLDAP now, and try
> again, in hopes that I don't have to split this file.
>
> So, what exactly is too large?
>
> Dan
>
> -----Original Message-----
> From: owner-openldap-software@OpenLDAP.org
> [mailto:owner-openldap-software@OpenLDAP.org] On Behalf Of J.Smith
> Sent: Monday, April 12, 2004 9:29 AM
> To: openldap-software@OpenLDAP.org
> Subject: Road Map : improved scalability - how to proceed ?
>
> Hi.
>
> I just browsed through the OpenLDAP roadmap, and noticed that improved
> scalability and/or performance is coming up on the horizon sometime soon
> (~3Q2004 ;). I was just wondering, how should we start looking at
> measuring and/or improving the scalability of OpenLDAP? Should we start
> creating our own (mini-)benchmarks and scalability tests? Or should we
> simply start out by using something like Mindcraft's DirectoryMark, and
> see where that leads us? Should we do autobuilds regularly, followed by
> a testbench run, and autopost the results somewhere to see how well we
> are doing?
>
> Anyways, any and all ideas are more than welcome here.
>
>
> Sincerely,
>
> John Smith
>