FW: Large add/replace fails with I/O error
When I saw this email I realized one of my recent hacks to liblber/decode.c
wasn't up to snuff. I've just committed a new patch to eliminate the fixed-size
limit I had before. It uses recursion to hold each decoded object on the
stack until it reaches the end of the vector, then allocates the whole vector
in one shot, storing the elements as the calls return. This is still much
faster than using
realloc() repeatedly. In fact it appears to be about as fast as the TMP_SLOTS
version, or even a bit faster (not sure I understand that one...).
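For anyone curious, here is a minimal, self-contained sketch of that
recursion trick. The reader struct and function names are made up for
illustration (this is not the actual liblber code), but the allocation
pattern is the one described: each pending element lives in its call frame,
and malloc() runs exactly once, when the end of the input is reached.

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical stand-in for the BER input stream. */
struct reader {
    const char **items;
    int pos, len;
};

/*
 * Decode one element per call, keeping it on the stack.  When the
 * input is exhausted, rd->pos equals the element count, so the
 * NULL-terminated result vector is malloc'd exactly once; the slots
 * are filled in as the recursion unwinds.  Returns 0, or -1 on
 * allocation failure.
 */
static int collect(struct reader *rd, const char ***vec_out)
{
    if (rd->pos == rd->len) {
        *vec_out = malloc((rd->pos + 1) * sizeof(const char *));
        if (*vec_out == NULL)
            return -1;
        (*vec_out)[rd->pos] = NULL;
        return 0;
    }

    int idx = rd->pos;
    const char *elem = rd->items[rd->pos++];  /* held in this stack frame */

    if (collect(rd, vec_out) < 0)
        return -1;
    (*vec_out)[idx] = elem;                   /* stored while returning */
    return 0;
}

int main(void)
{
    const char *input[] = { "a", "b", "c" };
    struct reader rd = { input, 0, 3 };
    const char **vec;

    if (collect(&rd, &vec) == 0) {
        for (int i = 0; vec[i] != NULL; i++)
            puts(vec[i]);                     /* prints a, b, c in order */
        free((void *)vec);
    }
    return 0;
}

The trade-off is one stack frame per pending element, but there is no
realloc() copying at all and the vector ends up sized exactly.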
Along the same lines, I have an idea to investigate: a version of
ber_get_string that just returns a pointer into the ber_buf, avoiding another
malloc/memcpy.
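Roughly, such a zero-copy getter could look like the sketch below; the struct
and function names here are hypothetical, not liblber's API. The catch, and
the thing to investigate, is that the result aliases the BER buffer: it isn't
NUL-terminated and is only valid for the buffer's lifetime, so callers that
keep strings around would still have to make their own copy.

#include <stddef.h>

/* Hypothetical decoder state: a raw BER element plus a cursor. */
struct berbuf {
    char  *buf;
    size_t pos;     /* current decode offset */
    size_t left;    /* bytes remaining after pos */
};

/*
 * Zero-copy string fetch: instead of malloc'ing a copy and memcpy'ing
 * the octets into it, hand back a pointer/length pair that points
 * directly into the buffer.  Assumes the tag and length octets have
 * already been decoded into `len`.  Returns 0, or -1 if the element
 * is truncated.  Note: the result is not NUL-terminated; the next
 * element's tag byte follows immediately.
 */
static int get_string_inplace(struct berbuf *bb, size_t len,
                              char **ptr_out, size_t *len_out)
{
    if (len > bb->left)
        return -1;
    *ptr_out = bb->buf + bb->pos;   /* aliases the buffer: no copy */
    *len_out = len;
    bb->pos  += len;
    bb->left -= len;
    return 0;
}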
-- Howard Chu
Chief Architect, Symas Corp. Director, Highland Sun
http://www.symas.com http://highlandsun.com/hyc
Symas: Premier OpenSource Development and Support
-----Original Message-----
From: owner-openldap-software@OpenLDAP.org
[mailto:owner-openldap-software@OpenLDAP.org] On Behalf Of John Dalbec
Sent: Wednesday, January 02, 2002 8:26 AM
To: openldap-software@OpenLDAP.org
Subject: Large add/replace fails with I/O error
I'm currently using the RedHat RawHide openldap-2.0.19-1 package.
I have several groupOfNames entries in my LDAP directory. I am trying to
rebuild them automatically from an SQL database by replacing the "member"
attribute. This works fine for most of them. However, one fails, apparently
because there are too many members/too much data for OpenLDAP to handle in a
single "replace" operation (16000+ members, a 750K LDIF file).
Using Net::LDAP, I get the "I/O error" code (1) and the message "Connection
reset by peer". Using ldapmodify, I get:
modifying entry <...>
ldap_modify: Can't contact LDAP server
ldif_record() = 81
If I reduce the number of member: values to 3396 or fewer, I don't have any
problems. Is this a bug, or a limitation of the LDAP protocol?
I was able to load the full member list by using replace to set the first
batch of values and then adding the rest in groups of about 3000. Still, I'd
prefer not to have to automate that process just for the one group.
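In outline, the chunked approach is one replace for the first batch, then
adds for the rest. A sketch with the OpenLDAP C API follows; the DN, the
member array, and the chunk size are placeholders, and the same sequence of
operations works from a Net::LDAP script.

#include <stdio.h>
#include <stdlib.h>
#include <ldap.h>

#define CHUNK 3000   /* empirically safe batch size, per the above */

/*
 * Rewrite the member attribute in batches: LDAP_MOD_REPLACE for the
 * first batch (dropping the old list), LDAP_MOD_ADD for the rest.
 * Returns 0 on success, -1 on error.
 */
static int write_members(LDAP *ld, const char *dn, char **members, int count)
{
    int off;

    for (off = 0; off < count; off += CHUNK) {
        int n = (count - off < CHUNK) ? count - off : CHUNK;
        int i, rc;
        LDAPMod mod;
        LDAPMod *mods[2];

        /* NULL-terminated value array for this batch. */
        char **vals = malloc((n + 1) * sizeof(char *));
        if (vals == NULL)
            return -1;
        for (i = 0; i < n; i++)
            vals[i] = members[off + i];
        vals[n] = NULL;

        mod.mod_op = (off == 0) ? LDAP_MOD_REPLACE : LDAP_MOD_ADD;
        mod.mod_type = "member";
        mod.mod_values = vals;
        mods[0] = &mod;
        mods[1] = NULL;

        rc = ldap_modify_ext_s(ld, dn, mods, NULL, NULL);
        free(vals);
        if (rc != LDAP_SUCCESS) {
            fprintf(stderr, "batch at %d: %s\n", off, ldap_err2string(rc));
            return -1;
        }
    }
    return 0;
}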
Thanks,
John