Hi Deepu,

Can you please check that all bricks are up on both the master and slave side?
It would also help if you could share the geo-rep mount logs from the slave.
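If it helps, those checks can be scripted roughly like this. This is only a sketch: the volume name `code-ide` and slave host `10.47.8.152` are read off the log paths quoted below, and the `sas` slave user is an assumption, so adjust to your actual session names.

```python
import shutil
import subprocess

# Hedged sketch: "code-ide" and "sas@10.47.8.152" are taken from the log
# paths in this thread; the "sas" user is an assumption.
def georep_health_check(volume="code-ide", slave="sas@10.47.8.152::code-ide"):
    if shutil.which("gluster") is None:
        return "gluster CLI not found on this host"
    # Every brick should report "Online  Y" here, on master and slave alike,
    # and the session workers should be Active/Passive rather than Faulty.
    for cmd in (["gluster", "volume", "status", volume],
                ["gluster", "volume", "geo-replication", volume, slave, "status"]):
        subprocess.run(cmd, check=False)
    return "checks issued"

print(georep_health_check())
```

For a non-root slave session the `user@host::volume` form of the slave endpoint is required.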

/sunny

On Wed, Mar 11, 2020 at 6:19 AM deepu srinivasan <[email protected]> wrote:
>
> Hi Sunny
> Please update on this issue.
>
> On Tue, Mar 10, 2020 at 11:51 AM deepu srinivasan <[email protected]> wrote:
>>
>> Hi Sunny
>> Any updates on this issue?
>>
>> On Mon, Mar 9, 2020 at 3:24 PM deepu srinivasan <[email protected]> wrote:
>>>
>>> Hi Sunny
>>> This is what I got on the slave end:
>>>
>>> [2020-03-09 08:08:20.651207] W [gsyncd(slave 10.37.xx.xxx/home/sas/gluster/data/code-ide):307:main] <top>: Session config file not exists, using the default config path=/var/lib/glusterd/geo-replication/code-ide_10.47.8.152_code-ide/gsyncd.conf
>>> [2020-03-09 08:08:20.663784] I [resource(slave 10.37.xx.xxx/home/sas/gluster/data/code-ide):1105:connect] GLUSTER: Mounting gluster volume locally...
>>> [2020-03-09 08:08:21.885207] I [resource(slave 10.37.xx.xxx/home/sas/gluster/data/code-ide):1128:connect] GLUSTER: Mounted gluster volume duration=1.2213
>>> [2020-03-09 08:08:21.885740] I [resource(slave 10.37.xx.xxx/home/sas/gluster/data/code-ide):1155:service_loop] GLUSTER: slave listening
>>> [2020-03-09 08:10:29.463335] E [repce(slave 10.37.xx.xxx/home/sas/gluster/data/code-ide):122:worker] <top>: call failed:
>>> Traceback (most recent call last):
>>>   File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 118, in worker
>>>     res = getattr(self.obj, rmeth)(*in_data[2:])
>>>   File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line 532, in entry_ops
>>>     er = entry_purge(op, entry, gfid, e, uid, gid)
>>>   File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line 395, in entry_purge
>>>     if not matching_disk_gfid(gfid, entry):
>>>   File "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py", line 574, in matching_disk_gfid
>>>     disk_gfid = get_gfid_from_mnt(entry)
>>>   File "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py", line 570, in get_gfid_from_mnt
>>>     GX_GFID_CANONICAL_LEN], [ENOENT], [ESTALE])
>>>   File "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py", line 546, in errno_wrap
>>>     return call(*arg)
>>>   File "/usr/libexec/glusterfs/python/syncdaemon/libcxattr.py", line 59, in lgetxattr
>>>     return gr_query_xattr(cls, path, siz, 'lgetxattr', attr)
>>>   File "/usr/libexec/glusterfs/python/syncdaemon/py2py3.py", line 130, in gr_query_xattr
>>>     return cls._query_xattr(path, size, syscall, attr)
>>>   File "/usr/libexec/glusterfs/python/syncdaemon/libcxattr.py", line 48, in _query_xattr
>>>     cls.raise_oserr()
>>>   File "/usr/libexec/glusterfs/python/syncdaemon/libcxattr.py", line 38, in raise_oserr
>>>     raise OSError(errn, os.strerror(errn))
>>> OSError: [Errno 107] Transport endpoint is not connected
>>> [2020-03-09 08:10:29.535897] I [repce(slave 10.37.xx.xxx/home/sas/gluster/data/code-ide):97:service_loop] RepceServer: terminating on reaching EOF.
>>> [2020-03-09 08:10:40.747127] W [gsyncd(slave 10.37.xx.xxx/home/sas/gluster/data/code-ide):307:main] <top>: Session config file not exists, using the default config path=/var/lib/glusterd/geo-replication/code-ide_10.47.8.152_code-ide/gsyncd.conf
>>>
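The last frames of that traceback go through `errno_wrap` in syncdutils.py, which tolerates some errnos but lets others propagate; errno 107 (ENOTCONN) usually means the slave's FUSE mount itself died mid-operation. Here is a minimal sketch of that wrapping behaviour; it is a simplified assumption for illustration, not gluster's actual implementation:

```python
import errno
import os

# Hedged sketch of the wrapper seen in the traceback (syncdutils.errno_wrap);
# simplified, not gluster's real code.
def errno_wrap(call, args, tolerated, _retried):
    # _retried: errnos the real wrapper retries; ignored in this sketch
    try:
        return call(*args)
    except OSError as e:
        if e.errno in tolerated:
            return None  # e.g. the file vanished mid-sync: acceptable
        raise            # anything else, like ENOTCONN, bubbles up

def missing(path):
    # a path that no longer exists -> tolerated
    raise OSError(errno.ENOENT, os.strerror(errno.ENOENT))

def lgetxattr_stub(path):
    # stand-in for lgetxattr() against a dead FUSE mount
    raise OSError(errno.ENOTCONN, os.strerror(errno.ENOTCONN))

assert errno_wrap(missing, ["/mnt/f"], [errno.ENOENT], [errno.ESTALE]) is None
try:
    errno_wrap(lgetxattr_stub, ["/mnt/f"], [errno.ENOENT], [errno.ESTALE])
except OSError as e:
    # the "[Errno 107] Transport endpoint is not connected" from the log
    assert e.errno == errno.ENOTCONN
```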
>>>
>>> On Mon, Mar 9, 2020 at 3:15 PM Sunny Kumar <[email protected]> wrote:
>>>>
>>>> Hi Deepu,
>>>> Can you share the traceback from the slave node?
>>>>
>>>> /sunny
>>>>
>>>>
>>>> On Mon, Mar 9, 2020 at 5:08 AM deepu srinivasan <[email protected]> wrote:
>>>> >
>>>> > Hi guys,
>>>> > Is there any update on this?
>>>> >
>>>> > On Thu, Mar 5, 2020, 12:30 PM deepu srinivasan <[email protected]> 
>>>> > wrote:
>>>> >>
>>>> >> Hi Users/Devs
>>>> >>
>>>> >> Our geo-replication session is in Faulty status, and the logs are
>>>> >> below. How can we solve this issue?
>>>> >>>
>>>> >>> [2020-03-05 06:52:44.858073] E [repce(worker /home/sas/gluster/data/code-ide):214:__call__] RepceClient: call failed    call=232079:140660126910272:1583391154.92       method=entry_ops        error=OSError
>>>> >>> [2020-03-05 06:52:44.858309] E [syncdutils(worker /home/sas/gluster/data/code-ide):338:log_raise_exception] <top>: FAIL:
>>>> >>> Traceback (most recent call last):
>>>> >>>   File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 325, in main
>>>> >>>     func(args)
>>>> >>>   File "/usr/libexec/glusterfs/python/syncdaemon/subcmds.py", line 86, in subcmd_worker
>>>> >>>     local.service_loop(remote)
>>>> >>>   File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line 1297, in service_loop
>>>> >>>     g3.crawlwrap(oneshot=True)
>>>> >>>   File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 602, in crawlwrap
>>>> >>>     self.crawl()
>>>> >>>   File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 1592, in crawl
>>>> >>>     self.changelogs_batch_process(changes)
>>>> >>>   File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 1492, in changelogs_batch_process
>>>> >>>     self.process(batch)
>>>> >>>   File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 1327, in process
>>>> >>>     self.process_change(change, done, retry)
>>>> >>>   File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 1221, in process_change
>>>> >>>     failures = self.slave.server.entry_ops(entries)
>>>> >>>   File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 233, in __call__
>>>> >>>     return self.ins(self.meth, *a)
>>>> >>>   File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 215, in __call__
>>>> >>>     raise res
>>>> >>> OSError: [Errno 1] Operation not permitted
>>>> >>> [2020-03-05 06:52:44.887593] I [repce(agent /home/sas/gluster/data/code-ide):97:service_loop] RepceServer: terminating on reaching EOF.
>>>> >>> [2020-03-05 06:52:44.899364] I [gsyncdstatus(monitor):248:set_worker_status] GeorepStatus: Worker Status Change status=Faulty
>>>> >>> [2020-03-05 06:52:55.288171] I [gsyncdstatus(monitor):248:set_worker_status] GeorepStatus: Worker Status Change status=Initializing...
>>>> >>
>>>> >>
>>>> >> Our setup details
>>>> >> We have one volume at each DC. Below is the configuration:
>>>> >>
>>>> >> Volume configuration: 2x3
>>>> >> Replication: 3
>>>> >> Distribution: 2
>>>> >> Gluster version: 7.0
>>>> >> Client type: FUSE mount
>>>> >> Glusterd process:
>>>> >> Geo-replication session: enabled
>>>> >> Gluster daemon permission on master: root
>>>> >> Gluster daemon permission on slave: root
>>>> >> Gsyncd user daemon on master: root
>>>> >> Gsyncd user daemon on slave: non-root
>>>> >> Geo-replication configuration: root (master) - non-root (slave)
>>>> >>
>>>>
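A note on reading that first traceback: the `raise res` at its bottom is the master-side worker re-raising an exception that actually occurred on the slave, so the "[Errno 1] Operation not permitted" was hit by `entry_ops` on the non-root slave. A simplified sketch of that re-raise pattern follows; this is an assumption for illustration, not the real repce wire protocol:

```python
import errno
import os

class SlaveServer:
    """Stand-in for the slave-side RPC server; errors are shipped back,
    not handled locally (hedged sketch, not gluster's actual repce)."""
    def dispatch(self, meth, *args):
        try:
            return getattr(self, meth)(*args)
        except OSError as e:
            return e  # serialize the exception back to the master

    def entry_ops(self, entries):
        # stand-in for the real entry_ops failing with EPERM on the slave
        raise OSError(errno.EPERM, os.strerror(errno.EPERM))

class RepceClient:
    """Master-side caller: mirrors the ``raise res`` frame in the log."""
    def __init__(self, server):
        self.server = server

    def __call__(self, meth, *args):
        res = self.server.dispatch(meth, *args)  # normally over ssh/pipe
        if isinstance(res, Exception):
            raise res  # slave-side OSError resurfaces in the master worker
        return res

try:
    RepceClient(SlaveServer())("entry_ops", [])
except OSError as e:
    assert e.errno == errno.EPERM  # "[Errno 1] Operation not permitted"
```

This is why the master's worker log shows a slave-side permission failure and then flips the session to Faulty.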

________



Community Meeting Calendar:

Schedule -
Every Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
[email protected]
https://lists.gluster.org/mailman/listinfo/gluster-users
