Re: syncrepl consumer is slow
- To: "openldap-devel@openldap.org >> OpenLDAP Devel" <openldap-devel@openldap.org>
- Subject: Re: syncrepl consumer is slow
- From: Emmanuel Lécharny <elecharny@gmail.com>
- Date: Tue, 03 Feb 2015 10:42:19 +0100
- In-reply-to: <54D089BA.5020007@symas.com>
- References: <54C9A511.8000800@symas.com> <54CB4D24.7080106@usit.uio.no> <54D04A86.20302@symas.com> <54D05908.5080107@gmail.com> <54D089BA.5020007@symas.com>
- User-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.9; rv:31.0) Gecko/20100101 Thunderbird/31.4.0
On 03/02/15 09:41, Howard Chu wrote:
> Emmanuel Lécharny wrote:
>> On 03/02/15 05:11, Howard Chu wrote:
>>> Another option here is simply to perform batching. Now that we have
>>> the TXN api exposed in the backend interface, we could just batch up
>>> e.g. 500 entries per txn. much like slapadd -q already does.
>>> Ultimately we ought to be able to get syncrepl refresh to occur at
>>> nearly the same speed as slapadd -q.
>>
>> Batching is ok, except that you never know how many entries you're
>> going to get, so you will have to actually write the data after some
>> period of time, even if you don't have the 500 entries.
>
> This isn't a problem - we know exactly when refresh completes, so we
> can finish the batch regardless of how many entries are left over.
True for the refresh phase. I was thinking more specifically of the
ongoing updates that arrive while we are connected, in the persist phase.
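
Something along these lines would do it (a minimal sketch only; the
be_txn_* calls and the Entry typedef are hypothetical stand-ins for the
backend TXN interface mentioned above, not the actual slapd API):

/* Minimal sketch: accumulate entries into one transaction and commit
 * every BATCH_SIZE entries, on refresh completion, or after a flush
 * interval expires, since in persist mode we never know how many
 * updates are coming. All be_txn_* functions and the Entry typedef
 * are hypothetical stand-ins, not the real backend interface. */
#include <stddef.h>
#include <time.h>

#define BATCH_SIZE     500
#define FLUSH_INTERVAL 1        /* seconds a partial batch may wait */

typedef struct Entry Entry;     /* stand-in for slapd's Entry */
void *be_txn_begin( void );     /* hypothetical TXN API */
void  be_txn_add( void *txn, Entry *e );
void  be_txn_commit( void *txn );

struct batch_state {
    void   *txn;         /* open transaction, NULL if none */
    int     pending;     /* entries added but not yet committed */
    time_t  last_flush;  /* when the last commit happened */
};

static void
batch_flush( struct batch_state *bs )
{
    if ( bs->txn != NULL ) {
        be_txn_commit( bs->txn );
        bs->txn = NULL;
        bs->pending = 0;
    }
    bs->last_flush = time( NULL );
}

static void
batch_add( struct batch_state *bs, Entry *e )
{
    if ( bs->txn == NULL )
        bs->txn = be_txn_begin();

    be_txn_add( bs->txn, e );
    bs->pending++;

    /* commit on a full batch, or when updates trickle in slowly */
    if ( bs->pending >= BATCH_SIZE ||
         time( NULL ) - bs->last_flush >= FLUSH_INTERVAL )
        batch_flush( bs );
}

/* On refresh completion, call batch_flush() directly to commit
 * whatever is left over, regardless of count. */

With persist-mode updates, the interval-based flush is what bounds how
long a partial batch can sit uncommitted when fewer than 500 entries
arrive.
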
The idea of pushing the expected number of updates within the cookie is
for informational purposes: having this number traced in the logs, or
exposed through monitoring, could help in cases where the refresh phase
takes a long time; users would then not stop the server thinking it has
stalled.
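
Purely as an illustration of that, a hypothetical consumer-side trace
(the numentries= field does not exist in the real syncrepl cookie; this
only shows the kind of logging that was meant):

/* Hypothetical: a syncrepl cookie extended with an expected entry
 * count, e.g. "rid=001,csn=...,numentries=2800000". The consumer
 * parses it and traces the expected total, so an admin watching the
 * logs knows a long refresh is progressing rather than stalled. */
#include <stdio.h>
#include <string.h>
#include <stdlib.h>

static long
cookie_expected_entries( const char *cookie )
{
    const char *p = strstr( cookie, "numentries=" );
    return p ? atol( p + sizeof("numentries=") - 1 ) : -1;
}

int
main( void )
{
    const char *cookie =
        "rid=001,csn=20150203094100.000000Z#000000#000#000000,"
        "numentries=2800000";
    long expected = cookie_expected_entries( cookie );

    if ( expected >= 0 )
        printf( "syncrepl refresh: expecting %ld entries\n", expected );
    return 0;
}
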
>
> Testing this out with the experimental ITS#8040 patch - with lazy
> commit, the 2.8M entries (2.5GB of data) take ~10 minutes to pull
> across in the refresh. With batching at 500 entries/txn plus lazy
> commit it takes ~7 minutes, a decent improvement. It's still 2x slower
> than slapadd -q, though, which loads the data in 3.5 minutes.
Not bad at all. What makes it 2x slower, btw?