Re: How wold you go about writing a new OpenLDAP backend?
- To: Prentice Bisbal <pbisbal@pppl.gov>
- Subject: Re: How wold you go about writing a new OpenLDAP backend?
- From: John Lewis <oflameo2@gmail.com>
- Date: Mon, 05 Jun 2017 11:34:11 -0400
- Cc: Howard Chu <hyc@symas.com>, openldap-technical@openldap.org
- In-reply-to: <0567c88d-6431-07ed-9e1a-95b8884e7028@pppl.gov>
- References: <1496205432.24282.55.camel@gmail.com> <7725a836-a6fd-2a6a-d4ae-3a4d8f4dcd39@pppl.gov> <WM!aeccca996b27959e587df3f355b2e5c2a0fc74c3173f39e10273d6ce021835564547c99115572b4270f0198ed543ffe9!@mailstronghold-2.zmailcloud.com> <771d3994-a1f8-7751-bb32-451872bc9b0e@symas.com> <e101181f-3d28-29a3-bdd1-e3ce056c87a2@pppl.gov> <1496263193.2987.11.camel@gmail.com> <0567c88d-6431-07ed-9e1a-95b8884e7028@pppl.gov>
On Fri, 2017-06-02 at 13:19 -0400, Prentice Bisbal wrote:
> Fair enough answer. How would you store and retrieve your results? Are
> you creating/using a schema specifically for this? If so, what does it
> look like.
>
> I'm still having trouble seeing the performance benefits, though. The
> data is still stored in a backend DB, right? Let's say that backend is
> some sort of SQL DB. You write your ldapsearch query, and then LDAP
> converts that to SQL, and then the DB returns the results to LDAP, which
> then translated the results to LDIF and returns them to you. Isn't there
> performance hits every time you traverse the LDAP layer?
>
> I suppose this might still be faster than reading a plain-text log file.
> Is that the point I'm missing?
>
> Prentice
>
There is no point in using an SQL DB because log data is not relational.
The SQL database would have a lot of overhead for features I wouldn't
use.
I envision using rsyslog to forward all of the logs to a log server,
writing them out to a journal with omjournal, and then presenting the
logs over the network via LDAP, where they can be fed into many
different tools for filtering and displaying the information.
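The rsyslog side of that pipeline might look something like the sketch
below. This is only an illustration of the shape of the config, assuming
a recent rsyslog with the imtcp and omjournal modules built in; the port
number is arbitrary.

```
# Accept logs forwarded from other hosts over TCP
module(load="imtcp")
input(type="imtcp" port="514")

# Load the journal output module and write everything
# received into the local systemd journal
module(load="omjournal")
action(type="omjournal")
```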
We won't have to worry about chasing compatibility with the NoSQL
implementation of the month, with its own special network protocol,
on-disk format, pile of dependencies, and assortment of startup
companies from the San Francisco Bay Area, with their binding
arbitration agreements, there to pump and dump the whole stack every
2-5 years. We can do all of that, or we can just use LDAP.
I envision that we will be able to start ignoring all of the hype,
because we will have all of the relevant features and maximum
interoperability.
As for implementation, the journal's file format is documented at
https://www.freedesktop.org/wiki/Software/systemd/journal-files/, and
there is a C API,
https://www.freedesktop.org/software/systemd/man/sd-journal.html, which
is the recommended way to read it.
A new LDAP schema for the file format may need to be written, but it
would not surprise me if a schema already exists that maps perfectly to
every attribute, was written 15 years ago, and is already installed by
default on every instance of OpenLDAP.
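If one did have to be written, it would be a handful of attribute type
and object class definitions in the usual slapd schema syntax, along
these lines. The names and the OID arc below are placeholders I made up
for illustration, not an existing registered schema.

```
# Hypothetical journal-entry schema sketch -- names and OIDs
# are placeholders, not a real registered schema.
attributetype ( 1.3.6.1.4.1.99999.1.1 NAME 'journalMessage'
    DESC 'MESSAGE field of a journal entry'
    EQUALITY caseIgnoreMatch
    SUBSTR caseIgnoreSubstringsMatch
    SYNTAX 1.3.6.1.4.1.1466.115.121.1.15 )

objectclass ( 1.3.6.1.4.1.99999.2.1 NAME 'journalEntry'
    DESC 'One systemd journal record'
    SUP top STRUCTURAL
    MUST journalMessage )
```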
You could ask why bother with the journal when the logs could be written
directly to LMDB. The answer is, I don't know yet whether LMDB would be
better for logs, but I do know the journal format was designed
specifically for logs. Once the logs are in LDAP, it would be trivial to
copy the data to an LMDB backend and find out.