Re: 2.4.21 delta syncrepl : do_syncrep2: rid=000 (4096) Content Sync Refresh Required
Hello Buchan, et al.
On Tue, 6 Jul 2010, Buchan Milne wrote:
On Tuesday, 6 July 2010 07:08:05 Arjan Filius wrote:
Hello openldap-technical,
I'm new to the list; Arjan Filius is my name.
I have set up OpenLDAP 2.4.21, with one master and six slaves/consumers in a
delta syncrepl configuration, and I'm testing an upgrade from an older OpenLDAP
version.
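For context, the consumers are configured more or less along the lines of the
delta-syncrepl example in the admin guide; the hostname, DNs and credentials
below are placeholders rather than my real values:

syncrepl rid=000
        provider=ldap://master.example.com
        type=refreshAndPersist
        retry="60 +"
        searchbase="dc=example,dc=com"
        bindmethod=simple
        binddn="cn=replicator,dc=example,dc=com"
        credentials=secret
        logbase="cn=accesslog"
        logfilter="(&(objectClass=auditWriteObject)(reqResult=0))"
        syncdata=accesslog
        schemachecking=on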
Please specify the version you are upgrading from, it *is* relevant.
Sorry, I forgot to mention it.
It's 2.3.38, 32-bit i386.
The older LDAP version is on another machine (and uses slurpd for
replication), and the upgrade is done with an
export on the old one (slapcat > export-file.ldif)
and on the new machine (2.4.21):
slapadd -l /tmp/export-file.ldif -F ./slapd.d/
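Roughly, the whole flow looks like this (paths and hostnames are examples, not
my literal commands):

# on the old 2.3.x box, with slapd stopped so the export is consistent
slapcat > /tmp/export-file.ldif
# copy the LDIF over, then load it into the pristine 2.4.21 master
slapadd -l /tmp/export-file.ldif -F ./slapd.d/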
exporting (slapcat > export-file), importing (slapadd -l `export-file`) on
the (empty/pristine) master, and attaching empty/pristine slaves works
just fine except for taking more than one hour to complete.
Well, maybe you should consider appropriate tuning/changes to your import
process to speed things up, rather than risk data integrity. You don't specify
how large your database is, or any tuning etc., or other slapadd flags, so it
is difficult to know if 1 hour is good or bad.
slapadd (on the master) can be done in 12 minutes with a tuned config; the major
ingredients:
#checkpoint 10 3
checkpoint 2000 60
#dbnosync
dbnosync
# syncprov-checkpoint 100 10
syncprov-checkpoint 1000 100
The '#' lines are the regular values, the uncommented lines below them the
special import tunables: checkpoint <kbyte> <min> controls how often BDB
checkpoints, dbnosync stops flushing the database to disk on every commit
(acceptable during a bulk load only), and syncprov-checkpoint <ops> <minutes>
controls how often the syncprov overlay writes the contextCSN back to the
database.
After the import I restart the master with just the regular parameters (without
dbnosync, and with the stricter checkpointing, 100 10).
The machines all have 8G RAM and 1 CPU.
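So after the bulk load the fragment goes back to something like the regular
values (taking the '#' lines above at face value):

checkpoint 10 3
# no dbnosync
syncprov-checkpoint 100 10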
I'm also not sure what is good or bad in terms of performance. I'm just looking
for the quickest migration path, and not interested in doing exotic
"tricks". I thought just importing the same data (slapadd) on the master and
all slaves in parallel would not be exotic, and would shorten the migration time.
E.g., using the -q flag to slapadd can speed things up significantly, setting
'tool-threads' in slapd.conf appropriately can too, and you should have
database tuning (e.g. DB_CONFIG) in place.
I never looked at the -q option, but I will look at that, thanks.
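If I read slapadd(8) correctly, that would just mean adding -q to the same
command ("quick" mode skips some consistency checks, so only for a trusted
import into a pristine database):

slapadd -q -l /tmp/export-file.ldif -F ./slapd.d/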
The DB_CONFIG I use:
# grep -v ^$ DB_CONFIG |grep -v ^#
set_cachesize 2 0 1
set_flags DB_LOG_AUTOREMOVE
set_lg_regionmax 1048576
set_lg_max 10485760
set_lg_bsize 2097152
set_lg_dir /var/openldap-logs
set_lk_max_locks 20000
set_lk_max_objects 30000
set_lk_max_lockers 30000
tool-threads is also something I never looked into; it's not in my config, so it
uses the default now.
Also, having a single-core CPU, I'm not sure settings above 1 will speed
things up.
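For reference, it would just be a one-line directive in slapd.conf; on this
single-core box 1 is probably the sensible value anyway:

# number of threads the slap tools (slapadd/slapindex) use when building indexes
tool-threads 1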
But the import of the created export isn't really the problem; it's the
replication to the slaves which takes some time, and it would be a lot faster to
"just" do the same import on the slaves as I did on the master (if that is
possible and works fine).
[...]
Starting with different data on providers and consumers is sure to result in
broken replication.
I used exactly the same import file on master and slaves (source 2.3.39);
the imported LDIF is about 580MB.
There is no contextCSN in the imported LDIF, so I think (re)generating it may
lead to the situation (cookie issue) I encountered.
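One way to check whether the servers really ended up with diverging contextCSN
values would be something like this (the base DN is a placeholder, not my real
suffix):

ldapsearch -x -H ldap://master.example.com -s base -b "dc=example,dc=com" contextCSN
ldapsearch -x -H ldap://slave1.example.com -s base -b "dc=example,dc=com" contextCSN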
Does anyone have an idea what is/might be going on, and how to fix/prevent this?
Or how to migrate/replicate in a fast way?
See above, but you provide no detail on what you have done to speed up your
import, and this seems to be the original problem.
It's the replication to the slaves which takes much longer than just an import;
preventing that by doing an import of the same file on the slaves leads to the
cookie issue.
But from what I understand from you, that might not be a good idea?
Thanks for your reply.
Regards,
--
Arjan Filius
mailto:iafilius@xs4all.nl