Re: Normalized DN (Was: Re: commit: ldap/servers/slapd acl.c slap.h value.c)
> I've been thinking of extending the Attribute structure to
> hold both "presented" and "normalized" values.
Actually, some time ago I started doing something like that
(when I optimized the search for duplicates in modify operations),
but gave up for lack of time: with my approach, most of the
entry/attribute/value API was going to change.
> I've been
> thinking it is likely less work to pre-compute all the normalized
> values (as entries are read from disk/provided by user)
You have to do it anyway the first time you handle a value.
> than
> to do caching on the fly. Also avoids nasty locking issues
> and makes it possible to do one per-entry allocation instead
> of per-attribute allocations.
Well, I'd like to be able to compare this solution against
performing normalization only once, on first request.
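For comparison, the on-request variant would look more or less like
this (same illustrative types as in the sketch above); note this is
exactly where the locking issues mentioned above show up, since two
threads may race to fill the cache:

/*
 * Normalize lazily, only the first time a given value is requested,
 * and cache the result.  A real implementation would need a lock
 * (or an atomic swap) around the cache fill.
 */
static const struct val *
attr_get_normalized( struct attr *a, int i,
	int (*normalize)( const struct val *in, struct val *out ) )
{
	if ( a->normalized == NULL ) {
		a->normalized = calloc( a->nvals, sizeof( *a->normalized ) );
		if ( a->normalized == NULL ) return NULL;
	}

	/* only normalize on first request for this value */
	if ( a->normalized[ i ].data == NULL ) {
		if ( normalize( &a->presented[ i ], &a->normalized[ i ] ) != 0 ) {
			return NULL;
		}
	}

	return &a->normalized[ i ];
}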
> Of course, we could store both
> "presented" and "normalized" values on disk... but that would
> effectively double the size of id2entry (which may not be
> all that bad).
Mmmmh, in this case, if you change the normalization implementation
you surely need to rebuild the database; maybe re-normalizing on
the fly would suffice (although I don't see disk usage as a
significant drawback).  Anyway, you only need to store the values
of attrs that require some normalization, i.e. those with
ad->ad_type->sat_syntax->ssyn_normalize != NULL;
that is, no large blobs, if I get it right.
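In sketch form, the test is no more than this (again with simplified
stand-ins for the schema structures, just mirroring the
ad->ad_type->sat_syntax->ssyn_normalize chain):

/* simplified stand-ins mirroring the schema chain named above */
struct syntax {
	int	(*ssyn_normalize)( const struct val *in, struct val *out );
};

struct attr_type {
	struct syntax		*sat_syntax;
};

struct attr_desc {
	struct attr_type	*ad_type;
};

/*
 * Only attributes whose syntax defines a normalizer would need a
 * second, normalized copy stored on disk; binary blobs would not.
 */
static int
attr_needs_normalized_copy( const struct attr_desc *ad )
{
	return ad->ad_type->sat_syntax->ssyn_normalize != NULL;
}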
>
> Before any change of this kind is actually committed, I would
> prefer to see some analysis (e.g. benchmarking) that shows
> its "added value".
This sounds like a lot of work; maybe one could cut, say, 90% of the
calls to normalization routines and replace the cost of each call
to value_match() with a call to memcmp().  Let me recompile with
profiling ...
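The saving I have in mind is roughly this: once both sides are
already normalized, an equality test reduces to a length check plus
memcmp(), instead of a full matching-rule call (sketch only, same
illustrative struct val as above):

#include <string.h>

/* equality on two already-normalized values: length check + memcmp() */
static int
normalized_values_equal( const struct val *a, const struct val *b )
{
	return a->len == b->len
		&& memcmp( a->data, b->data, a->len ) == 0;
}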
--
Pierangelo Masarati
mailto:pierangelo.masarati@sys-net.it