internal representation of tags/enums vs. bcp64bis
How do various implementations represent BER tags internally (that is,
the identifier octets: tag number + class + encoding bit)? Or
enumerations like SearchRequest.scope? I imagine it's common to inherit
Umich LDAP's approach and stuff them into an integer in one way or
another.
I think [Bcp64bis] and [Protocol] should RECOMMEND that extensions do
not define tags and enumerated values that are too large to be
represented in a 31-bit integer (not 32, in case the programming
language does not support unsigned integers).
With enums, it's simple: Use values <= 2**31-1 (maxInt from [Protocol]).
With tags, it depends on how various implementations map (tag number,
class, encoding bit) into an integer - if they do map them to an
integer, of course. A BER element's identifier octets consist of:
- First octet:
+ 1 * (tag number <= 0x1E ? tag number : 0x1F)
+ 0x20 * (constructed encoding ? 1 : 0)
+ 0x40 * class (UNIVERSAL=0/APPLICATION=1/CONTEXT-SPECIFIC=2/PRIVATE=3)
- Next octets, present only if tag number > 0x1E:
  tag number in base 128, most significant "digit" first,
  with bit 8 set in every octet except the last.
So if the identifier octets are just copied into a 32-bit integer, the
max tag number is 128**3-1 = 2**21-1. Treating the identifier octets as
an integer with the least significant octet first, that puts the class
and encoding values in a fixed place, and leaves bit 32 (the "sign bit"
of a 32-bit integer) as zero. But maybe there are implementations that
use the bits of an integer even less efficiently?
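The least-significant-octet-first packing can be sketched like this (again hypothetical; the function name is my own). Note that the largest representable tag number, 2**21-1, encodes as identifier octets such as 0x9F 0xFF 0xFF 0x7F, and the packed value 0x7FFFFF9F indeed leaves bit 32 clear:

```c
#include <stddef.h>

/*
 * Hypothetical sketch of the packing described above: copy up to four
 * identifier octets into an integer, first octet in the least
 * significant byte.  The class and encoding bits then sit in a fixed
 * place (bits 6-8 of the low byte), and because the last base-128
 * octet always has bit 8 clear, the sign bit of a 32-bit integer is
 * always zero for tag numbers up to 128**3 - 1 = 2**21 - 1.
 */
static unsigned long
pack_identifier(const unsigned char *id, size_t len)
{
    unsigned long packed = 0;
    size_t i;

    /* At most 4 octets fit this scheme: one leading octet plus up to
     * three base-128 continuation octets (3 * 7 = 21 tag bits). */
    for (i = 0; i < len && i < 4; i++)
        packed |= (unsigned long)id[i] << (8 * i);
    return packed;
}
```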
--
Hallvard