Re: Socket-level timeouts?
On Apr 9, 2008, at 4:00 AM, Philip Guenther wrote:
In versions before 2.4.4, LDAP_OPT_TIMEOUT had no effect. Starting
in version 2.4.4 it sets the default timeout for reading the
requested result in ldap_result().
This sounds like exactly what we've needed - that ends up in poll()/
select(), both of which should handle our malevolent server scenario.
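For what it's worth, here's a rough sketch of how we'd expect to use it
(the URI and timeout values are placeholders, and the error handling is
abbreviated):

    #include <sys/time.h>
    #include <stdio.h>
    #include <ldap.h>

    int main(void)
    {
        LDAP *ld;
        struct timeval tv = { 5, 0 };   /* give up on a result after 5s */
        int rc;

        rc = ldap_initialize(&ld, "ldap://ldap.example.com");
        if (rc != LDAP_SUCCESS) {
            fprintf(stderr, "ldap_initialize: %s\n", ldap_err2string(rc));
            return 1;
        }

        /* With 2.4.4+, this should become the default timeout for reading
         * the requested result in ldap_result(), i.e. it bounds the
         * poll()/select() wait instead of blocking forever. */
        if (ldap_set_option(ld, LDAP_OPT_TIMEOUT, &tv) != LDAP_OPT_SUCCESS) {
            fprintf(stderr, "could not set LDAP_OPT_TIMEOUT\n");
            return 1;
        }

        /* ... bind/search as usual; a hung server now fails the call
         * rather than hanging the client ... */
        ldap_unbind_ext_s(ld, NULL, NULL);
        return 0;
    }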
I understand the current situation, but as a user it would feel more
correct for LDAP_OPT_NETWORK_TIMEOUT to mean "try the next server
if a response is not obtained within this time", covering the
additional class of failures where an LDAP server is only partially up,
since we cannot guarantee minute-level admin response times to restart
a failing server.
Hmm, what do you think the distinction between
LDAP_OPT_NETWORK_TIMEOUT and LDAP_OPT_TIMEOUT should be? (Neither
of which should be confused with LDAP_OPT_TIMELIMIT, of course.)
Perhaps the difference between how long it will wait for a given
server to respond (LDAP_OPT_NETWORK_TIMEOUT) and how long it will
spend before giving up on the call entirely (LDAP_OPT_TIMEOUT), so it
will eventually time out if it can't contact any of the servers? The
latter case can be useful in odd networking environments where
connectivity is creatively broken (e.g. a "smart" gateway which attempts
to spoof any IP it thinks your laptop is using as a gateway) - while it
would be nice if hotels stopped using that kind of stuff, laptops need
to recover.
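Under that interpretation, client code might combine the two options
roughly like this (only a sketch of how the knobs could interact; the
base DN, URIs and values are made up, and whether LDAP_OPT_NETWORK_TIMEOUT
actually causes a move to the next URI is exactly what's under discussion):

    #include <sys/time.h>
    #include <ldap.h>

    /* Hypothetical helper: bound each per-server connect attempt
     * (LDAP_OPT_NETWORK_TIMEOUT) and the overall wait for a result
     * (LDAP_OPT_TIMEOUT) when searching a space-separated URI list. */
    static int bounded_search(const char *uris, LDAPMessage **res)
    {
        LDAP *ld;
        struct timeval connect_tv = { 3, 0 };   /* per-server attempt */
        struct timeval result_tv  = { 10, 0 };  /* whole call */
        int rc;

        rc = ldap_initialize(&ld, uris);
        if (rc != LDAP_SUCCESS)
            return rc;

        ldap_set_option(ld, LDAP_OPT_NETWORK_TIMEOUT, &connect_tv);
        ldap_set_option(ld, LDAP_OPT_TIMEOUT, &result_tv);

        rc = ldap_search_ext_s(ld, "dc=example,dc=com", LDAP_SCOPE_SUBTREE,
                               "(objectClass=*)", NULL, 0, NULL, NULL,
                               NULL /* fall back to LDAP_OPT_TIMEOUT */,
                               0, res);
        ldap_unbind_ext_s(ld, NULL, NULL);
        return rc;
    }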
Chris