Re: LMDB use of sparse or non-sparse data file
- To: openldap-technical@openldap.org
- Subject: Re: LMDB use of sparse or non-sparse data file
- From: Geoff Swan <gswan3@bigpond.net.au>
- Date: Mon, 30 Mar 2015 14:21:54 +1100
- In-reply-to: <551324FF.1080407@bigpond.net.au>
- References: <550E04E4.8030405@bigpond.net.au> <55106B9B.60005@symas.com> <4C21C296B9E28F6477579880@[192.168.1.9]> <5511CBF9.2050703@bigpond.net.au> <91DCD575FFE229D8F7057C15@[192.168.1.9]> <551324FF.1080407@bigpond.net.au>
- User-agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:31.0) Gecko/20100101 Thunderbird/31.5.0
On 26/03/2015 8:13 AM, Geoff Swan wrote:
>
> On 26/03/2015 6:23 AM, Quanah Gibson-Mount wrote:
>> --On Wednesday, March 25, 2015 8:41 AM +1100 Geoff Swan
>> <gswan3@bigpond.net.au> wrote:
>>
>>>> Well, to be clear: While the DB is sparse, mdb_copy does drop the
>>>> unused map space when using mdb_copy by default.
>>> That is what I have seen.
>>> The filesystem reports 2TB file size for the first server, and 590GB
>>> file size for the mdb copy, with default options.
>> Use du -c to get actual used space instead of the maxsize value.
>>
>>>> I think this is the behavior they're referring to. However, in my
>>>> experience, after starting up slapd with an mdb_copy'd db, where
>>>> sparse files are in use, the size will be set to whatever slapd's
>>>> configured to use after slapd is started.
>>> This is not what is being seen.
>>> The file size remains at 590GB on the server using the copy, and 2TB on
>>> the original server.
>> Then there's something odd happening on your server where you placed
>> the copy. What OpenLDAP version are you using?
> It is 2.4.39.
> I might try a slapcat/slapadd to rebuild the db file and see if that
> corrects the problem.
Further testing may give some clues. The search filters on modifyTimestamp
within particular branches, looking for objects with
modifyTimestamp>=value. If value is fairly close to the current
date/time, the search returns quickly. However, if value is a few days
in the past, the search appears to take many hours, even though no
objects match the filter (i.e. the result set size has no effect). It is
not clear why this should be the case, given that modifyTimestamp is
indexed and there is plenty of memory, with 30-50% of it free during the
search operation.
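For reference, a minimal sketch of the kind of query described above
(the base DN is a placeholder, and the date arithmetic assumes GNU
date as found on Linux; adjust both for your environment):

```shell
#!/bin/sh
# Build a generalizedTime value for "7 days ago" in the syntax that
# modifyTimestamp uses (YYYYMMDDHHMMSSZ, UTC).
TS=$(date -u -d '7 days ago' +%Y%m%d%H%M%SZ)

# The resulting filter string:
echo "(modifyTimestamp>=${TS})"

# A search using it might then look like this (hypothetical host/base):
# ldapsearch -x -H ldap://server.example.com \
#   -b "ou=branch,dc=example,dc=com" \
#   "(modifyTimestamp>=${TS})" dn
```

Comparing timings with a timestamp near "now" versus one a few days
back, as above, is the easiest way to reproduce the slowdown.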
>
>> --Quanah
>>
>>
>> --
>>
>> Quanah Gibson-Mount
>> Platform Architect
>> Zimbra, Inc.
>> --------------------
>> Zimbra :: the leader in open source messaging and collaboration
>>
>