Re: Choosing OID for overlay config
> Hallvard B Furuseth wrote:
>> Gavin Henry writes:
>>> What's the best practice for choosing the numbers in OLcfgOvAt:14.1
>>> Above is taken from translucent.c
>>
>> Not sure what you mean. OLcfgOvAt:14.1 is an attribute type defined in
>> translucent.c, you shouldn't take OIDs below it.
>>
>> If you want to add another config attribute to translucent, grab
>> OLcfgOvAt:14.3 - the first free OLcfgOvAt:14.* OID in translucent.c.
>> If you want an OID arc for another overlay/backend in OpenLDAP, add a
>> new OID arc in the OID registry in bconfig.c.
>>
>
> I notice:
>
> * (FIXME: separate arc for contribware?)
>
> This is where mine would fit in if it existed, but I see smbk5pwd
> sneaked in there.
>
> When I'm not doing doc project work (and Suretec work), I'm hacking on
> valsort.c for learning, and making valregex.c from it.
>
> This uses PCRE to apply a regex to attribute values, substitute the
> captures into a configured string, and return the result in the
> search response.
>
> Kind of like a sed on attribute values. For things like:
>
> "/home/users/(.*)" and returning "/home/$1" etc.
>
> valregex-attr homeDirectory ou=Users,dc=example,dc=com /home/user/(.*)
> /home/$1
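
[The capture-and-substitute rewrite described above can be sketched as follows. This is an illustration in Python rather than the overlay's actual C/PCRE code; the function name and the minimal handling of the "$1" placeholder are assumptions for the sketch, not valregex's real implementation.]

```python
import re

def valregex_rewrite(value, pattern, template):
    """Apply a regex to an attribute value and substitute the captured
    groups into a template, sed-style. Returns the value unchanged when
    the pattern does not match."""
    m = re.fullmatch(pattern, value)
    if m is None:
        return value
    # Translate the overlay-style "$1" placeholder into Python's \1
    # backreference syntax before expanding (only $1 is handled here).
    return m.expand(template.replace("$1", r"\1"))

print(valregex_rewrite("/home/users/ghenry", r"/home/users/(.*)", "/home/$1"))
# -> /home/ghenry
```

[A non-matching value such as "/opt/tools" would pass through unchanged, which matches the idea of rewriting only the values the configured regex applies to.]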
>
> I thought a new overlay for this would be better than trying to get
> my head round rwm, as rwm only remaps attribute names, not values.
RWM was designed that way because it was intended for remapping
namingContexts and schema: a DS should only change how data is
presented, not its contents. However, I see that in many cases the
ability to modify contents would be of great help. Of course, core code
shouldn't allow that, but custom modules seem a good place to provide
that functionality. So I think an rwm companion that rewrites values
would be greatly appreciated.
> Then next on my list is to add a local db search to translucent.c so you
> can search for attributes not just on the remote db.
I think this was already discussed. A big issue is that complex filters
(even simple ones, if part of the values is local and part remote)
need knowledge of both the local and the remote data at
candidate-selection time. As a consequence, there is no (simple) way to
select search candidates when part of the filter relies on local data.
I don't see this as an easy task from a theoretical point of view, not
just in terms of implementation. For example, this is basically what
prevents back-meta from building entries by joining partial entries
residing on different remote servers.
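
[The candidate-selection problem can be seen in a toy model: with entries split between a remote store and a local (translucent) one, a filter touching attributes from both sides can only be evaluated after joining every candidate entry, since neither store alone holds all the attributes the filter tests. The DNs, dictionaries, and search function below are purely illustrative, not slapd code.]

```python
# Hypothetical split data: the remote server holds the base entries,
# the local (translucent) database holds extra attributes per DN.
remote = {
    "uid=a,ou=Users,dc=example,dc=com": {"objectClass": "person", "cn": "A"},
    "uid=b,ou=Users,dc=example,dc=com": {"objectClass": "person", "cn": "B"},
}
local = {
    "uid=b,ou=Users,dc=example,dc=com": {"homeDirectory": "/home/b"},
}

def search(filter_attrs):
    """Evaluate a conjunctive equality filter over the merged view.
    Every entry has to be joined from both stores before the filter can
    be tested: candidate selection on the remote data alone would miss
    (or wrongly include) entries whose deciding attribute is local."""
    matches = []
    for dn, remote_attrs in remote.items():
        entry = {**remote_attrs, **local.get(dn, {})}  # the join step
        if all(entry.get(a) == v for a, v in filter_attrs.items()):
            matches.append(dn)
    return matches

print(search({"objectClass": "person", "homeDirectory": "/home/b"}))
# -> ['uid=b,ou=Users,dc=example,dc=com']
```

[Note that filtering on homeDirectory against the remote store alone would select nothing, which is exactly why the candidates cannot be chosen before the join.]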
p.
Ing. Pierangelo Masarati
OpenLDAP Core Team
SysNet s.r.l.
via Dossi, 8 - 27100 Pavia - ITALIA
http://www.sys-net.it
---------------------------------------
Office: +39 02 23998309
Mobile: +39 333 4963172
Email: pierangelo.masarati@sys-net.it
---------------------------------------