Martin Kosek wrote:
On 09/06/2012 11:17 PM, Rob Crittenden wrote:
Martin Kosek wrote:
On 09/06/2012 05:55 PM, Rob Crittenden wrote:
Rob Crittenden wrote:
Rob Crittenden wrote:
Martin Kosek wrote:
On 09/05/2012 08:06 PM, Rob Crittenden wrote:
Rob Crittenden wrote:
Martin Kosek wrote:
On 07/05/2012 08:39 PM, Rob Crittenden wrote:
Martin Kosek wrote:
On 07/03/2012 04:41 PM, Rob Crittenden wrote:
Deleting a replica can leave a replication update vector (RUV) for it on the
other servers. This can confuse things if the replica is re-added, and it also
causes the server to calculate changes against a server that may no longer exist.

389-ds-base provides a new task that propagates itself to all available
replicas to clean this RUV data.

This patch creates this task at deletion time to hopefully clean things up.
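For the curious, the task is just an LDAP entry under
cn=cleanallruv,cn=tasks,cn=config. A rough python-ldap sketch of creating one
by hand (the DN layout and attribute names match what the patch uses; the
host, suffix and password below are placeholders):

import ldap
import ldap.modlist

rid = 5  # example replica ID to clean
dn = "cn=clean %d,cn=cleanallruv,cn=tasks,cn=config" % rid
attrs = {
    'objectclass': ['top', 'extensibleObject'],
    'replica-base-dn': 'dc=example,dc=com',  # placeholder suffix
    'replica-id': str(rid),
    'cn': 'clean %d' % rid,
}
conn = ldap.initialize("ldap://localhost:389")  # placeholder host
conn.simple_bind_s("cn=Directory Manager", "password")  # placeholder creds
# 389-ds-base notices the new task entry and propagates the clean
# to every replica it has an agreement with
conn.add_s(dn, ldap.modlist.addModlist(attrs))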

It isn't perfect. If any replica is down or unavailable at the time the
cleanruv task fires, and then comes back up, the old RUV data may be
re-propagated around.

To make things easier in this case I've added two new commands to
ipa-replica-manage. The first lists the replication ids of all the servers we
have a RUV for. Using this you can call clean_ruv with the replication id of a
server that no longer exists to try the cleanallruv step again.

This is quite dangerous though. If you run cleanruv against a replica id that
does exist it can cause a loss of data. I believe I've put in enough scary
warnings about this.

rob


Good work there, this should make cleaning RUVs much easier than with the
previous version.

This is what I found during review:

1) The man page help for the list_ruv and clean_ruv commands is easy to get
lost in. I think it would help if we, for example, indented all the info for
the commands; as it is, the user can easily overlook the new commands in the
man page.


2) I would rename the new commands to clean-ruv and list-ruv to make them
consistent with the rest of the commands (re-initialize, force-sync).


3) It would be nice to be able to run the clean_ruv command in an unattended
way (for better testing), i.e. to respect the --force option as we already do
for ipa-replica-manage del. This fix would aid test automation in the future.


4) (minor) The new question (and the del one too) does not handle CTRL+D well:

# ipa-replica-manage clean_ruv 3 --force
Clean the Replication Update Vector for vm-055.idm.lab.bos.redhat.com:389

Cleaning the wrong replica ID will cause that server to no
longer replicate so it may miss updates while the process
is running. It would need to be re-initialized to maintain
consistency. Be very careful.
Continue to clean? [no]: unexpected error:

5) The help for the clean_ruv command without its required parameter is quite
confusing, as it complains about the command rather than the missing parameter:

# ipa-replica-manage clean_ruv
Usage: ipa-replica-manage [options]

ipa-replica-manage: error: must provide a command [clean_ruv | force-sync |
disconnect | connect | del | re-initialize | list | list_ruv]

It seems you just forgot to specify the error message in the command definition
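For reference, each entry in the commands table in ipa-replica-manage maps to
(min args, max args, argument help, missing-argument error), so the fix should
just be filling in the last field, something like:

    "clean_ruv": (1, 1, "Replica ID to clean",
                  "must provide replica ID to clean"),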


6) When the remote replica is down, the clean_ruv command fails with an
unexpected error:

[root@vm-086 ~]# ipa-replica-manage clean_ruv 5
Clean the Replication Update Vector for vm-055.idm.lab.bos.redhat.com:389

Cleaning the wrong replica ID will cause that server to no
longer replicate so it may miss updates while the process
is running. It would need to be re-initialized to maintain
consistency. Be very careful.
Continue to clean? [no]: y
unexpected error: {'desc': 'Operations error'}


/var/log/dirsrv/slapd-IDM-LAB-BOS-REDHAT-COM/errors:
[04/Jul/2012:06:28:16 -0400] NSMMReplicationPlugin - cleanAllRUV_task: failed to connect to repl agreement connection (cn=meTovm-055.idm.lab.bos.redhat.com,cn=replica,cn=dc\3Didm\2Cdc\3Dlab\2Cdc\3Dbos\2Cdc\3Dredhat\2Cdc\3Dcom,cn=mapping tree,cn=config), error 105
[04/Jul/2012:06:28:16 -0400] NSMMReplicationPlugin - cleanAllRUV_task: replica (cn=meTovm-055.idm.lab.bos.redhat.com,cn=replica,cn=dc\3Didm\2Cdc\3Dlab\2Cdc\3Dbos\2Cdc\3Dredhat\2Cdc\3Dcom,cn=mapping tree,cn=config) has not been cleaned.  You will need to rerun the CLEANALLRUV task on this replica.
[04/Jul/2012:06:28:16 -0400] NSMMReplicationPlugin - cleanAllRUV_task: Task failed (1)

In this case I think we should inform the user that the command failed,
possibly because of disconnected replicas, and suggest that they re-enable the
replicas and try again.


7) (minor) "pass" is now redundant in replication.py:
+        except ldap.INSUFFICIENT_ACCESS:
+            # We can't make the server we're removing read-only
but
+            # this isn't a show-stopper
+            root_logger.debug("No permission to switch replica to
read-only,
continuing anyway")
+            pass


I think this addresses everything.

rob

Thanks, almost there! I just found one more issue which needs to be fixed
before we push:

# ipa-replica-manage del vm-055.idm.lab.bos.redhat.com --force
Directory Manager password:

Unable to connect to replica vm-055.idm.lab.bos.redhat.com, forcing removal
Failed to get data from 'vm-055.idm.lab.bos.redhat.com': {'desc': "Can't contact LDAP server"}
Forcing removal on 'vm-086.idm.lab.bos.redhat.com'

There were issues removing a connection: %d format: a number is required, not str

Failed to get data from 'vm-055.idm.lab.bos.redhat.com': {'desc': "Can't contact LDAP server"}

This is a traceback I retrieved:
Traceback (most recent call last):
      File "/sbin/ipa-replica-manage", line 425, in del_master
        del_link(realm, r, hostname, options.dirman_passwd, force=True)
      File "/sbin/ipa-replica-manage", line 271, in del_link
        repl1.cleanallruv(replica_id)
      File "/usr/lib/python2.7/site-packages/ipaserver/install/replication.py", line 1094, in cleanallruv
        root_logger.debug("Creating CLEANALLRUV task for replica id %d" % replicaId)


The problem here is that you don't convert replica_id to int in this part:
+    replica_id = None
+    if repl2:
+        replica_id = repl2._get_replica_id(repl2.conn, None)
+    else:
+        servers = get_ruv(realm, replica1, dirman_passwd)
+        for (netloc, rid) in servers:
+            if netloc.startswith(replica2):
+                replica_id = rid
+                break
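Converting at assignment time should be enough, i.e. something like:

+        for (netloc, rid) in servers:
+            if netloc.startswith(replica2):
+                replica_id = int(rid)   # rid comes back from get_ruv() as a string
+                break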

Martin


Updated patch using the new mechanism in 389-ds-base. This should more
thoroughly clean out RUV data when a replica is being deleted, and also
provide a way to delete RUV data afterwards if necessary.

rob

Rebased patch

rob


0) As I wrote in the review of your patch 1041, the changelog entry slipped
elsewhere.

1) The following KeyboardInterrupt except clause looks suspicious. I know why
you have it there, but since it is generally a bad thing to do, a comment
explaining why it is needed would be useful.

@@ -256,6 +263,17 @@ def del_link(realm, replica1, replica2, dirman_passwd, force=False):
        repl1.delete_agreement(replica2)
        repl1.delete_referral(replica2)

+    if type1 == replication.IPA_REPLICA:
+        if repl2:
+            ruv = repl2._get_replica_id(repl2.conn, None)
+        else:
+            ruv = get_ruv_by_host(realm, replica1, replica2, dirman_passwd)
+
+        try:
+            repl1.cleanallruv(ruv)
+        except KeyboardInterrupt:
+            pass
+

Maybe you just wanted to do some cleanup and then "raise" again?

No, it is there because it is safe to break out of it. The task will continue
to run. I added some verbiage.
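It now reads roughly like this:

+        try:
+            repl1.cleanallruv(ruv)
+        except KeyboardInterrupt:
+            # The task runs server-side; Ctrl+C only stops our wait for it,
+            # not the task itself, so it is safe to swallow the interrupt.
+            print "Wait for task interrupted. It will continue to run in the background"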


2) This is related to 1): "ipa-replica-manage del" may wait indefinitely when
some remote replica is down, right?

# ipa-replica-manage del vm-055.idm.lab.bos.redhat.com
Deleting a master is irreversible.
To reconnect to the remote master you will need to prepare a new replica file and re-install.
Continue to delete? [no]: y
ipa: INFO: Setting agreement cn=meTovm-086.idm.lab.bos.redhat.com,cn=replica,cn=dc\=idm\,dc\=lab\,dc\=bos\,dc\=redhat\,dc\=com,cn=mapping tree,cn=config schedule to 2358-2359 0 to force synch
ipa: INFO: Deleting schedule 2358-2359 0 from agreement cn=meTovm-086.idm.lab.bos.redhat.com,cn=replica,cn=dc\=idm\,dc\=lab\,dc\=bos\,dc\=redhat\,dc\=com,cn=mapping tree,cn=config
ipa: INFO: Replication Update in progress: FALSE: status: 0 Replica acquired successfully: Incremental update succeeded: start: 0: end: 0
Background task created to clean replication data

... after about a minute I hit CTRL+C

^CDeleted replication agreement from 'vm-086.idm.lab.bos.redhat.com' to 'vm-055.idm.lab.bos.redhat.com'
Failed to cleanup vm-055.idm.lab.bos.redhat.com DNS entries: NS record does not contain 'vm-055.idm.lab.bos.redhat.com.'
You may need to manually remove them from the tree

I think it would be better to inform the user that some remote replica is
down, or at least that we are waiting for the task to complete. Something like
this:

# ipa-replica-manage del vm-055.idm.lab.bos.redhat.com
...
Background task created to clean replication data
Replication data cleanup may take a very long time if some replica is unreachable
Hit CTRL+C to interrupt the wait
^C Cleanup wait interrupted
...
[continue with del]

Yup, did this in #1.
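The wait now announces itself and can be interrupted; the tail of
cleanallruv() ends up roughly as (see the diff below):

        print "Background task created to clean replication data. This may take a while."
        print "This may be safely interrupted with Ctrl+C"

        self.conn.checkTask(dn, dowait=True)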


3) (minor) When there is a cleanruv task running and you run
"ipa-replica-manage del", there is an unexpected error message caused by a
duplicate task object in LDAP:

# ipa-replica-manage del vm-072.idm.lab.bos.redhat.com --force
Unable to connect to replica vm-072.idm.lab.bos.redhat.com, forcing removal
FAIL
Failed to get data from 'vm-072.idm.lab.bos.redhat.com': {'desc': "Can't contact LDAP server"}
Forcing removal on 'vm-086.idm.lab.bos.redhat.com'

There were issues removing a connection: This entry already exists   <<<<<<<<<

Failed to get data from 'vm-072.idm.lab.bos.redhat.com': {'desc': "Can't contact LDAP server"}
Failed to cleanup vm-072.idm.lab.bos.redhat.com DNS entries: NS record does not contain 'vm-072.idm.lab.bos.redhat.com.'
You may need to manually remove them from the tree


I think it should be enough to just catch "entry already exists" in the
cleanallruv function, and in that case print a relevant error message and bail
out. Thus, self.conn.checkTask(dn, dowait=True) would not be called either.

Good catch, fixed.



4) (minor) In the make_readonly function, there is a redundant "pass"
statement:

+    def make_readonly(self):
+        """
+        Make the current replication agreement read-only.
+        """
+        dn = DN(('cn', 'userRoot'), ('cn', 'ldbm database'),
+                ('cn', 'plugins'), ('cn', 'config'))
+
+        mod = [(ldap.MOD_REPLACE, 'nsslapd-readonly', 'on')]
+        try:
+            self.conn.modify_s(dn, mod)
+        except ldap.INSUFFICIENT_ACCESS:
+            # We can't make the server we're removing read-only but
+            # this isn't a show-stopper
+            root_logger.debug("No permission to switch replica to read-only, continuing anyway")
+            pass         <<<<<<<<<<<<<<<

Yeah, this is one of my common mistakes. I put in a pass initially, then add
logging in front of it and forget to delete the pass. It's gone now.



5) In clean_ruv, I think allowing a --force option to bypass the user_input
would be helpful (at least for test automation):

+    if not ipautil.user_input("Continue to clean?", False):
+        sys.exit("Aborted")

Yup, added.
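The intended shape is something like this (a sketch; the actual line is in the
attached patch):

+    if not options.force and not ipautil.user_input("Continue to clean?", False):
+        sys.exit("Aborted")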

rob

Slightly revised patch. I still had a window open with one unsaved change.

rob


Apparently there were two unsaved changes, one of which was lost. This adds in
the 'entry already exists' fix.

rob


Just one last thing (otherwise the patch is OK) - I don't think this is what
we want :-)

# ipa-replica-manage clean-ruv 8
Clean the Replication Update Vector for vm-055.idm.lab.bos.redhat.com:389

Cleaning the wrong replica ID will cause that server to no
longer replicate so it may miss updates while the process
is running. It would need to be re-initialized to maintain
consistency. Be very careful.
Continue to clean? [no]: y   <<<<<<
Aborted


Nor this exception (you are checking for the wrong exception):

# ipa-replica-manage clean-ruv 8
Clean the Replication Update Vector for vm-055.idm.lab.bos.redhat.com:389

Cleaning the wrong replica ID will cause that server to no
longer replicate so it may miss updates while the process
is running. It would need to be re-initialized to maintain
consistency. Be very careful.
Continue to clean? [no]:
unexpected error: This entry already exists

This is the exception:

Traceback (most recent call last):
    File "/sbin/ipa-replica-manage", line 651, in <module>
      main()
    File "/sbin/ipa-replica-manage", line 648, in main
      clean_ruv(realm, args[1], options)
    File "/sbin/ipa-replica-manage", line 373, in clean_ruv
      thisrepl.cleanallruv(ruv)
    File "/usr/lib/python2.7/site-packages/ipaserver/install/replication.py", line 1136, in cleanallruv
      self.conn.addEntry(e)
    File "/usr/lib/python2.7/site-packages/ipaserver/ipaldap.py", line 503, in addEntry
      self.__handle_errors(e, arg_desc=arg_desc)
    File "/usr/lib/python2.7/site-packages/ipaserver/ipaldap.py", line 321, in __handle_errors
      raise errors.DuplicateEntry()
ipalib.errors.DuplicateEntry: This entry already exists
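As the last two frames show, ipaldap converts ldap.ALREADY_EXISTS into
ipalib's errors.DuplicateEntry, so the except clause needs to catch that
instead, something like:

+        try:
+            self.conn.addEntry(e)
+        except errors.DuplicateEntry:
+            print "A CLEANALLRUV task for replica id %d already exists." % replicaId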

Martin


Fixed that and a couple of other problems. When doing a disconnect we should
not also call clean-ruv.

Ah, good self-catch.


I also got tired of seeing crappy error messages so I added a little convert
utility.
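Its body isn't in the hunks below (only the def line shows up as context), so
roughly, as a sketch of the idea:

import ldap

def convert_error(exc):
    # Sketch only: render an exception as a readable message instead of the
    # raw repr. python-ldap errors carry a dict like {'desc': ..., 'info': ...}
    # as their first argument.
    if isinstance(exc, ldap.LDAPError):
        detail = exc.args[0]
        if isinstance(detail, dict):
            return ' '.join(filter(None, [detail.get('desc', '').strip(),
                                          detail.get('info', '').strip()]))
    return str(exc)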

rob

1) There is CLEANALLRUV stuff included in 1050-3 and not here. There are also
some findings for this new code.


2) We may want to bump Requires to a higher version of 389-ds-base
(389-ds-base-1.2.11.14-1) - it contains a fix for the CLEANALLRUV+winsync bug
I found earlier.


3) I just discovered another suspicious behavior. When we are deleting a
master that also has links to other master(s), we delete those links too. But
we automatically run CLEANALLRUV for each of them, so we may end up with
multiple tasks being started on different masters - this does not look right.

I think we may rather want to first delete all the links and then run the
CLEANALLRUV task just once. This is what I get with the current code (a sketch
of the reordering I mean follows the transcript):

# ipa-replica-manage del vm-072.idm.lab.bos.redhat.com
Directory Manager password:

Deleting a master is irreversible.
To reconnect to the remote master you will need to prepare a new replica file and re-install.
Continue to delete? [no]: yes
ipa: INFO: Setting agreement cn=meTovm-055.idm.lab.bos.redhat.com,cn=replica,cn=dc\=idm\,dc\=lab\,dc\=bos\,dc\=redhat\,dc\=com,cn=mapping tree,cn=config schedule to 2358-2359 0 to force synch
ipa: INFO: Deleting schedule 2358-2359 0 from agreement cn=meTovm-055.idm.lab.bos.redhat.com,cn=replica,cn=dc\=idm\,dc\=lab\,dc\=bos\,dc\=redhat\,dc\=com,cn=mapping tree,cn=config
ipa: INFO: Replication Update in progress: FALSE: status: 0 Replica acquired successfully: Incremental update succeeded: start: 0: end: 0
Background task created to clean replication data. This may take a while.
This may be safely interrupted with Ctrl+C

^CWait for task interrupted. It will continue to run in the background

Deleted replication agreement from 'vm-055.idm.lab.bos.redhat.com' to 'vm-072.idm.lab.bos.redhat.com'
ipa: INFO: Setting agreement cn=meTovm-086.idm.lab.bos.redhat.com,cn=replica,cn=dc\=idm\,dc\=lab\,dc\=bos\,dc\=redhat\,dc\=com,cn=mapping tree,cn=config schedule to 2358-2359 0 to force synch
ipa: INFO: Deleting schedule 2358-2359 0 from agreement cn=meTovm-086.idm.lab.bos.redhat.com,cn=replica,cn=dc\=idm\,dc\=lab\,dc\=bos\,dc\=redhat\,dc\=com,cn=mapping tree,cn=config
ipa: INFO: Replication Update in progress: FALSE: status: 0 Replica acquired successfully: Incremental update succeeded: start: 0: end: 0
Background task created to clean replication data. This may take a while.
This may be safely interrupted with Ctrl+C

^CWait for task interrupted. It will continue to run in the background

Deleted replication agreement from 'vm-086.idm.lab.bos.redhat.com' to 'vm-072.idm.lab.bos.redhat.com'
Failed to cleanup vm-072.idm.lab.bos.redhat.com DNS entries: NS record does not contain 'vm-072.idm.lab.bos.redhat.com.'
You may need to manually remove them from the tree
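Roughly, I would expect del_master to do something like this (a sketch using
the helpers from your patch):

    # Save the RID first, then drop every agreement, then clean once.
    rid = get_rid_by_host(realm, options.host, hostname, options.dirman_passwd)
    for r in replica_names:
        del_link(realm, r, hostname, options.dirman_passwd, force=True)
    try:
        # a single CLEANALLRUV task for the deleted master
        thisrepl.cleanallruv(rid)
    except KeyboardInterrupt:
        print "Wait for task interrupted. It will continue to run in the background"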

Martin


All issues addressed and I pulled in abort-clean-ruv from 1050. I added a list-clean-ruv command as well.

rob
From 682fd818708001e247a9c1249d106246da2b83c2 Mon Sep 17 00:00:00 2001
From: Rob Crittenden <rcrit...@redhat.com>
Date: Fri, 14 Sep 2012 14:00:13 -0400
Subject: [PATCH] fix up cleanruv

---
 freeipa.spec.in                        |  10 +--
 install/tools/ipa-replica-manage       | 142 ++++++++++++++++++++++++++-------
 install/tools/man/ipa-replica-manage.1 |   6 ++
 ipaserver/install/replication.py       |  24 ++++++
 4 files changed, 144 insertions(+), 38 deletions(-)

diff --git a/freeipa.spec.in b/freeipa.spec.in
index 86833fb5fe4939ee151bfa843d2533d5ec9f17cc..fbfe4eb9f7a5c45feb0ae36c323387ee31b13ab7 100644
--- a/freeipa.spec.in
+++ b/freeipa.spec.in
@@ -24,7 +24,7 @@ Source0:        freeipa-%{version}.tar.gz
 BuildRoot:      %{_tmppath}/%{name}-%{version}-%{release}-root-%(%{__id_u} -n)
 
 %if ! %{ONLY_CLIENT}
-BuildRequires: 389-ds-base-devel >= 1.2.10-0.6.a6
+BuildRequires: 389-ds-base-devel >= 1.2.11.14
 BuildRequires:  svrcore-devel
 BuildRequires:  /usr/share/selinux/devel/Makefile
 BuildRequires:  policycoreutils >= %{POLICYCOREUTILSVER}
@@ -100,11 +100,7 @@ Requires: %{name}-python = %{version}-%{release}
 Requires: %{name}-client = %{version}-%{release}
 Requires: %{name}-admintools = %{version}-%{release}
 Requires: %{name}-server-selinux = %{version}-%{release}
-%if 0%{?fedora} >= 17
-Requires(pre): 389-ds-base >= 1.2.11.11-1
-%else
-Requires(pre): 389-ds-base >= 1.2.10.10-1
-%endif
+Requires(pre): 389-ds-base >= 1.2.11.14-1
 Requires: openldap-clients
 Requires: nss
 Requires: nss-tools
@@ -753,7 +749,7 @@ fi
 
 %changelog
 * Mon Aug 20 2012 Rob Crittenden <rcrit...@redhat.com> - 2.99.0-44
-- Set min for 389-ds-base to 1.2.11.11-1 on F17+ to pull in updated
+- Set min for 389-ds-base to 1.2.11.14-1 on F17+ to pull in updated
   RUV code and nsslapd-readonly schema.
 
 * Mon Aug 20 2012 Rob Crittenden <rcrit...@redhat.com> - 2.99.0-43
diff --git a/install/tools/ipa-replica-manage b/install/tools/ipa-replica-manage
index c6ef51b7215164c9538afae942e3d42285ca860b..acd05271168170ab7403af319062d10a107b28c2 100755
--- a/install/tools/ipa-replica-manage
+++ b/install/tools/ipa-replica-manage
@@ -48,7 +48,9 @@ commands = {
                     "must provide hostname of master to delete"),
     "re-initialize":(0, 0, "", ""),
     "force-sync":(0, 0, "", ""),
-    "clean-ruv":(1, 1, "Replica ID of to clean", ""),
+    "clean-ruv":(1, 1, "Replica ID of to clean", "must provide replica ID to clean"),
+    "abort-clean-ruv":(1, 1, "Replica ID to abort cleaning", "must provide replica ID to abort cleaning"),
+    "list-clean-ruv":(0, 0, "", ""),
 }
 
 def convert_error(exc):
@@ -203,20 +205,15 @@ def list_replicas(realm, host, replica, dirman_passwd, verbose):
             print "  last update status: %s" % entry.getValue('nsds5replicalastupdatestatus')
             print "  last update ended: %s" % str(ipautil.parse_generalized_time(entry.getValue('nsds5replicalastupdateend')))
 
-def del_link(realm, replica1, replica2, dirman_passwd, force=False, clean_ruv=True):
+def del_link(realm, replica1, replica2, dirman_passwd, force=False):
     """
     Delete a replication agreement from host A to host B.
 
-    This can optionally delete the Replication Update Vector (RUV) from
-    all masters. This should only be used when a master is being completely
-    deleted and not simply managing topology.
-
     @realm: the Kerberos realm
     @replica1: the hostname of master A
     @replica2: the hostname of master B
     @dirman_passwd: the Directory Manager password
     @force: force deletion even if one server is down
-    @clean_ruv: remove the replication vector for replica2 completely
     """
 
     repl2 = None
@@ -230,14 +227,14 @@ def del_link(realm, replica1, replica2, dirman_passwd, force=False, clean_ruv=Tr
         if not force and len(repl_list) <= 1 and type1 == replication.IPA_REPLICA:
             print "Cannot remove the last replication link of '%s'" % replica1
             print "Please use the 'del' command to remove it from the domain"
-            return
+            return False
 
     except (ldap.NO_SUCH_OBJECT, errors.NotFound):
         print "'%s' has no replication agreement for '%s'" % (replica1, replica2)
-        return
+        return False
     except Exception, e:
         print "Failed to determine agreement type for '%s': %s" % (replica1, convert_error(e))
-        return
+        return False
 
     if type1 == replication.IPA_REPLICA:
         try:
@@ -247,16 +244,16 @@ def del_link(realm, replica1, replica2, dirman_passwd, force=False, clean_ruv=Tr
             if not force and len(repl_list) <= 1:
                 print "Cannot remove the last replication link of '%s'" % replica2
                 print "Please use the 'del' command to remove it from the domain"
-                return
+                return False
 
         except (ldap.NO_SUCH_OBJECT, errors.NotFound):
             print "'%s' has no replication agreement for '%s'" % (replica2, replica1)
             if not force:
-                return
+                return False
         except Exception, e:
             print "Failed to get list of agreements from '%s': %s" % (replica2, convert_error(e))
             if not force:
-                return
+                return False
 
     if repl2 and type1 == replication.IPA_REPLICA:
         failed = False
@@ -280,7 +277,7 @@ def del_link(realm, replica1, replica2, dirman_passwd, force=False, clean_ruv=Tr
             if force:
                 print "Forcing removal on '%s'" % replica1
             else:
-                return
+                return False
 
     if not repl2 and force:
         print "Forcing removal on '%s'" % replica1
@@ -288,17 +285,6 @@ def del_link(realm, replica1, replica2, dirman_passwd, force=False, clean_ruv=Tr
     repl1.delete_agreement(replica2)
     repl1.delete_referral(replica2)
 
-    if type1 == replication.IPA_REPLICA and clean_ruv:
-        if repl2:
-            ruv = repl2._get_replica_id(repl2.conn, None)
-        else:
-            ruv = get_ruv_by_host(realm, replica1, replica2, dirman_passwd)
-
-        try:
-            repl1.cleanallruv(ruv)
-        except KeyboardInterrupt:
-            print "Wait for task interrupted. It will continue to run in the background"
-
     if type1 == replication.WINSYNC:
         try:
             dn = DN(('cn', replica2), ('cn', 'replicas'), ('cn', 'ipa'), ('cn', 'etc'),
@@ -356,9 +342,9 @@ def list_ruv(realm, host, dirman_passwd, verbose):
     for (netloc, rid) in servers:
         print "%s: %s" % (netloc, rid)
 
-def get_ruv_by_host(realm, sourcehost, host, dirman_passwd):
+def get_rid_by_host(realm, sourcehost, host, dirman_passwd):
     """
-    Try to determine the RUV by host name.
+    Try to determine the RID by host name.
     """
     servers = get_ruv(realm, sourcehost, dirman_passwd)
     for (netloc, rid) in servers:
@@ -398,6 +384,82 @@ def clean_ruv(realm, ruv, options):
     thisrepl.cleanallruv(ruv)
     print "Cleanup task created"
 
+def abort_clean_ruv(realm, ruv, options):
+    """
+    Given an RID abort a CLEANALLRUV task.
+    """
+    try:
+        ruv = int(ruv)
+    except ValueError:
+        sys.exit("Replica ID must be an integer: %s" % ruv)
+
+    servers = get_ruv(realm, options.host, options.dirman_passwd)
+    found = False
+    for (netloc, rid) in servers:
+        if ruv == int(rid):
+            found = True
+            hostname = netloc
+            break
+
+    if not found:
+        sys.exit("Replica ID %s not found" % ruv)
+
+    print "Aborting the clean Replication Update Vector task for %s" % hostname
+    print
+    thisrepl = replication.ReplicationManager(realm, options.host,
+                                              options.dirman_passwd)
+    thisrepl.abortcleanallruv(ruv)
+
+    print "Cleanup task stopped"
+
+def list_clean_ruv(realm, host, dirman_passwd, verbose):
+    """
+    List all clean RUV tasks.
+    """
+    repl = replication.ReplicationManager(realm, host, dirman_passwd)
+    dn = DN(('cn', 'cleanallruv'),('cn', 'tasks'), ('cn', 'config'))
+    try:
+        entries = repl.conn.getList(dn, ldap.SCOPE_ONELEVEL)
+    except errors.NotFound:
+        print "No CLEANALLRUV tasks running"
+    else:
+        print "CLEANALLRUV tasks"
+        for entry in entries:
+            name = entry.getValue('cn').replace('clean ', '')
+            status = entry.getValue('nsTaskStatus')
+            print "RID %s: %s" % (name, status)
+            if verbose:
+                print str(dn)
+                print entry.getValue('nstasklog')
+
+    print
+
+    dn = DN(('cn', 'abort cleanallruv'),('cn', 'tasks'), ('cn', 'config'))
+    try:
+        entries = repl.conn.getList(dn, ldap.SCOPE_ONELEVEL)
+    except errors.NotFound:
+        print "No abort CLEANALLRUV tasks running"
+    else:
+        print "Abort CLEANALLRUV tasks"
+        for entry in entries:
+            name = entry.getValue('cn').replace('abort ', '')
+            status = entry.getValue('nsTaskStatus')
+            print "RID %s: %s" % (name, status)
+            if verbose:
+                print str(dn)
+                print entry.getValue('nstasklog')
+
 def del_master(realm, hostname, options):
 
     force_del = False
@@ -451,21 +513,35 @@ def del_master(realm, hostname, options):
         if not ipautil.user_input("Continue to delete?", False):
             sys.exit("Deletion aborted")
 
+    # Save the RID value before we start deleting
+    if repltype == replication.IPA_REPLICA:
+        rid = get_rid_by_host(realm, options.host, hostname, options.dirman_passwd)
+
     # 4. Remove each agreement
+
+    print "Deleting replication agreements between %s and %s" % (hostname, ', '.join(replica_names))
     for r in replica_names:
         try:
-            del_link(realm, r, hostname, options.dirman_passwd, force=True)
+            if not del_link(realm, r, hostname, options.dirman_passwd, force=True):
+                print "Unable to remove replication agreement for %s from %s." % (hostname, r)
         except Exception, e:
             print "There were issues removing a connection: %s" % convert_error(e)
 
-    # 5. Finally clean up the removed replica common entries.
+    # 5. Clean RUV for the deleted master
+    if repltype == replication.IPA_REPLICA:
+        try:
+            thisrepl.cleanallruv(rid)
+        except KeyboardInterrupt:
+            print "Wait for task interrupted. It will continue to run in the background"
+
+    # 6. Finally clean up the removed replica common entries.
     try:
         thisrepl.replica_cleanup(hostname, realm, force=True)
     except Exception, e:
         print "Failed to cleanup %s entries: %s" % (hostname, convert_error(e))
         print "You may need to manually remove them from the tree"
 
-    # 6. And clean up the removed replica DNS entries if any.
+    # 7. And clean up the removed replica DNS entries if any.
     try:
         if bindinstance.dns_container_exists(options.host, thisrepl.suffix,
                                              dm_password=options.dirman_passwd):
@@ -667,9 +743,13 @@ def main():
         elif len(args) == 2:
             replica1 = host
             replica2 = args[1]
-        del_link(realm, replica1, replica2, dirman_passwd, clean_ruv=False)
+        del_link(realm, replica1, replica2, dirman_passwd)
     elif args[0] == "clean-ruv":
         clean_ruv(realm, args[1], options)
+    elif args[0] == "abort-clean-ruv":
+        abort_clean_ruv(realm, args[1], options)
+    elif args[0] == "list-clean-ruv":
+        list_clean_ruv(realm, host, dirman_passwd, options.verbose)
 
 try:
     main()
diff --git a/install/tools/man/ipa-replica-manage.1 b/install/tools/man/ipa-replica-manage.1
index 4a1c489f33591ff6ac98fe7f9a16ebb6a52ee28a..2a853317ef005f5bd3fcde384c2994f3d426cffc 100644
--- a/install/tools/man/ipa-replica-manage.1
+++ b/install/tools/man/ipa-replica-manage.1
@@ -48,6 +48,12 @@ Manages the replication agreements of an IPA server.
 \fBclean\-ruv\fR [REPLICATION_ID]
 \- Run the CLEANALLRUV task to remove a replication ID.
 .TP
+\fBabort\-clean\-ruv\fR [REPLICATION_ID]
+\- Abort a running CLEANALLRUV task.
+.TP
+\fBlist\-clean\-ruv\fR
+\- List all running CLEANALLRUV and abort CLEANALLRUV tasks.
+.TP
 The connect and disconnect options are used to manage the replication topology. When a replica is created it is only connected with the master that created it. The connect option may be used to connect it to other existing replicas.
 .TP
 The disconnect option cannot be used to remove the last link of a replica. To remove a replica from the topology use the del option.
diff --git a/ipaserver/install/replication.py b/ipaserver/install/replication.py
index 552d61841ac20d8ee077a0764ec2443e92ba8c57..361abf1e95c62772142322f5726ab811aaefe545 100644
--- a/ipaserver/install/replication.py
+++ b/ipaserver/install/replication.py
@@ -1142,3 +1142,27 @@ class ReplicationManager(object):
         print "This may be safely interrupted with Ctrl+C"
 
         self.conn.checkTask(dn, dowait=True)
+
+    def abortcleanallruv(self, replicaId):
+        """
+        Create a task to abort a CLEANALLRUV operation.
+        """
+        root_logger.debug("Creating task to abort a CLEANALLRUV operation for replica id %d" % replicaId)
+
+        dn = DN(('cn', 'abort %d' % replicaId), ('cn', 'abort cleanallruv'),('cn', 'tasks'), ('cn', 'config'))
+        e = ipaldap.Entry(dn)
+        e.setValues('objectclass', ['top', 'extensibleObject'])
+        e.setValue('replica-base-dn', api.env.basedn)
+        e.setValue('replica-id', replicaId)
+        e.setValue('cn', 'abort %d' % replicaId)
+        try:
+            self.conn.addEntry(e)
+        except errors.DuplicateEntry:
+            print "An abort CLEANALLRUV task for replica id %d already exists." % replicaId
+        else:
+            print "Background task created. This may take a while."
+
+        print "This may be safely interrupted with Ctrl+C"
+
+        self.conn.checkTask(dn, dowait=True)
+
-- 
1.7.11.4
