Bug reading length field in ber_next_read (io.c) (ITS#533)
Greetings,
We discovered a bug in the library file io.c. The ber_get_next() function is coded so that
if only part of a BER element is read, it does not immediately go after the next packet,
but returns to wait4msg(). This allows other requests to be serviced and prevents replies
with lots of data from starving other requests.
The problem we are seeing occurs when a multi-byte length field is split between two TCP
PDUs, i.e., part of the length is read in ber_get_next() and then the function returns.
When the function is called again to read the rest, the remaining bytes of the length
clobber the bytes read first. Thus a length of 0x82 0x02 0x33, where the 0x02 and 0x33
are in different TCP PDUs, is interpreted as length 0x33 instead of 0x233.
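To make the arithmetic concrete, here is a small standalone C sketch (an illustration
only, not the liblber code): in long-form BER, the leading 0x82 octet says that the next
two octets carry the length, accumulated big-endian. If the first length octet (0x02) is
clobbered between calls, only 0x33 is left to decode.

    #include <stdio.h>

    /* Decode a definite long-form BER length: a leading octet 0x8N says the
     * next N octets hold the length, big-endian.  For 0x82 0x02 0x33 that is
     * two octets, 0x02 0x33 -> 0x0233 (563 decimal). */
    static unsigned long ber_len_decode(const unsigned char *buf, int noctets)
    {
        unsigned long len = 0;
        int i;

        for (i = 0; i < noctets; i++)
            len = (len << 8) | buf[i];
        return len;
    }

    int main(void)
    {
        unsigned char full[] = { 0x02, 0x33 };  /* both length octets kept  */
        unsigned char lost[] = { 0x33 };        /* first octet clobbered    */

        printf("correct: 0x%lx\n", ber_len_decode(full, 2));  /* 0x233 */
        printf("buggy:   0x%lx\n", ber_len_decode(lost, 1));  /* 0x33  */
        return 0;
    }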
I didn't see much value in putting in a lot of messy code to deal with this directly, so
I simply forced the code to read the whole length field even if it is split across TCP
PDUs. This won't starve any other requests, because it involves at most one additional
PDU, and it simplifies the code.
I replaced ber_pvt_sb_read() with BerRead() and eliminated the EAGAIN/EWOULDBLOCK
returns in the length-processing code.
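As a rough sketch of the approach, assuming a plain file descriptor and a hypothetical
read_full() helper in place of the actual BerRead()/Sockbuf machinery: keep reading until
every octet of the length field has arrived, rather than returning EAGAIN/EWOULDBLOCK to
the caller. A production version would poll or select instead of spinning.

    #include <errno.h>
    #include <unistd.h>

    /* Read exactly `len` bytes from fd, retrying on short reads and on
     * EAGAIN/EWOULDBLOCK, so a length field split across TCP segments is
     * assembled in a single call rather than across re-entries.
     * Hypothetical helper, not the actual BerRead() implementation. */
    static int read_full(int fd, unsigned char *buf, size_t len)
    {
        size_t got = 0;

        while (got < len) {
            ssize_t n = read(fd, buf + got, len - got);

            if (n > 0) {
                got += (size_t) n;
            } else if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK ||
                                 errno == EINTR)) {
                continue;       /* wait here instead of returning to caller */
            } else {
                return -1;      /* EOF or hard error */
            }
        }
        return 0;
    }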
The patch is at URL: ftp://ftp.openldap.org/incoming/aclark-000513.patch
Alan Clark
Novell Directory Services
aclark@novell.com