Thank you for your assessment, Jayme.

I'll collect the relevant facts and file a bug report.

Chris

On 03.02.20 18:58, Jayme wrote:
Ah, the bug I'm referring to may only apply to replica 3 gluster. You appear to be using an arbiter volume. It sounds like you may need to file a bug for this one.

On Mon, Feb 3, 2020 at 12:05 PM Christoph Köhler <[email protected]> wrote:

    Hello Jayme,

    the gluster config is this:

    gluster volume info gluvol3

    Volume Name: gluvol3
    Type: Replicate
    Volume ID: 8172ebea-c118-424a-a407-50b2fd87e372
    Status: Started
    Snapshot Count: 0
    Number of Bricks: 1 x (2 + 1) = 3
    Transport-type: tcp
    Bricks:
    Brick1: glusrv01:/gluster/p1/brick1
    Brick2: glusrv02:/gluster/p1/brick1
    Brick3: glusrv03:/gluster/p1/brick1 (arbiter)
    Options Reconfigured:
    performance.client-io-threads: off
    nfs.disable: on
    transport.address-family: inet
    performance.quick-read: off
    performance.read-ahead: off
    performance.io-cache: off
    performance.low-prio-threads: 32
    network.remote-dio: off
    cluster.eager-lock: enable
    cluster.quorum-type: auto
    cluster.server-quorum-type: server
    cluster.data-self-heal-algorithm: full
    cluster.locking-scheme: granular
    cluster.shd-max-threads: 8
    cluster.shd-wait-qlength: 10000
    features.shard: on
    user.cifs: off
    storage.owner-uid: 36
    storage.owner-gid: 36
    performance.strict-o-direct: on
    cluster.granular-entry-heal: enable
    network.ping-timeout: 8
    auth.allow: 192.168.11.*
    client.event-threads: 4
    cluster.background-self-heal-count: 128
    cluster.heal-timeout: 60
    cluster.heal-wait-queue-length: 1280
    features.shard-block-size: 256MB
    performance.cache-size: 4096MB
    server.event-threads: 4
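
    As an aside, the "Number of Bricks: 1 x (2 + 1) = 3" line above is
    what marks this as a replica 2 + arbiter 1 volume; plain replica 3
    would read "1 x 3 = 3". A minimal sketch for telling the two apart
    programmatically, assuming the gluster CLI is on PATH (the helper
    name is made up):

        import re
        import subprocess

        def volume_layout(volume):
            """Classify a volume from `gluster volume info` output."""
            out = subprocess.run(
                ["gluster", "volume", "info", volume],
                capture_output=True, text=True, check=True).stdout
            # Arbiter volumes print e.g. "1 x (2 + 1) = 3";
            # plain replica 3 prints "1 x 3 = 3".
            if re.search(r"Number of Bricks:.*\(\d+\s*\+\s*\d+\)", out):
                return "arbiter"
            return "replicate"

        print(volume_layout("gluvol3"))  # -> "arbiter" for this volume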

    I really do not know what else to try...

    Chris

    On 03.02.20 16:53, Jayme wrote:
     > Chris, what is the storage configuration? I was under the
     > impression that there was a bug preventing snapshots from working
     > when using libgfapi on gluster replica configurations. This is one
     > of the main reasons why I have been unable to implement libgfapi.
     >
     > On Mon, Feb 3, 2020 at 10:53 AM Christoph Köhler
     > <[email protected]> wrote:
     >
     >     Hi,
     >
     >     since we updated to 4.3.7 (and another cluster to 4.3.8),
     >     snapshots are no longer possible. In previous versions all
     >     went well...
     >
     >     ° libGfApi enabled
     >     ° gluster 6.7.1 on gluster-server and client
     >     ° libvirt-4.5.0-23.el7_7.3
     >
     >     vdsm on a given node says:
     >
     >     (jsonrpc/2) [vds] prepared volume path:
     >     gluvol3/e54d835a-d8a5-44ae-8e17-fcba1c54e46f/images/1f43916a-bbf2-447b-b17d-ba22d4ec8c90/0e56d498-11d2-4f35-b781-a2e06d286eb8
     >     (clientIF:510)
     >
     >     (jsonrpc/2) [virt.vm] (vmId='acdc31b5-082b-4a68-b586-02354a7fdd73')
     >     <?xml version='1.0' encoding='utf-8'?>
     >     <domainsnapshot><disks><disk name="sda" snapshot="external"
     >     type="network"><source
     >     name="gluvol3/e54d835a-d8a5-44ae-8e17-fcba1c54e46f/images/1f43916a-bbf2-447b-b17d-ba22d4ec8c90/0e56d498-11d2-4f35-b781-a2e06d286eb8"
     >     protocol="gluster" type="network"><host name="192.168.11.20"
     >     port="0" transport="tcp"/></source></disk></disks></domainsnapshot>
     >     (vm:4497)
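     >
     >     For reference, the call that consumes this XML is
     >     snapshotCreateXML() in the libvirt Python bindings (the
     >     traceback below ends exactly there). A stand-alone sketch of
     >     roughly the same call (the disk-only flag is an assumption;
     >     the failing snapshot also included memory):
     >
     >         import libvirt
     >
     >         # Image path, host and VM UUID taken from the logs in
     >         # this mail; everything else is illustrative.
     >         IMG = ("gluvol3/e54d835a-d8a5-44ae-8e17-fcba1c54e46f"
     >                "/images/1f43916a-bbf2-447b-b17d-ba22d4ec8c90"
     >                "/0e56d498-11d2-4f35-b781-a2e06d286eb8")
     >         SNAP_XML = """<domainsnapshot><disks>
     >           <disk name='sda' snapshot='external' type='network'>
     >             <source protocol='gluster' type='network' name='{}'>
     >               <host name='192.168.11.20' port='0' transport='tcp'/>
     >             </source>
     >           </disk>
     >         </disks></domainsnapshot>""".format(IMG)
     >
     >         conn = libvirt.open("qemu:///system")
     >         dom = conn.lookupByUUIDString(
     >             "acdc31b5-082b-4a68-b586-02354a7fdd73")
     >         try:
     >             dom.snapshotCreateXML(
     >                 SNAP_XML,
     >                 libvirt.VIR_DOMAIN_SNAPSHOT_CREATE_DISK_ONLY)
     >         except libvirt.libvirtError as e:
     >             print("snapshot failed:", e)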
     >
     >     (jsonrpc/2) [virt.vm] (vmId='acdc31b5-082b-4a68-b586-02354a7fdd73')
     >     Disabling drive monitoring (drivemonitor:60)
     >
     >     (jsonrpc/2) [virt.vm] (vmId='acdc31b5-082b-4a68-b586-02354a7fdd73')
     >     Freezing guest filesystems (vm:4268)
     >     WARN  (jsonrpc/2) [virt.vm] (vmId='acdc31b5-082b-4a68-b586-02354a7fdd73')
     >     Unable to freeze guest filesystems: Guest agent is not responding:
     >     QEMU guest agent is not connected (vm:4273)
     >     INFO  (jsonrpc/2) [virt.vm] (vmId='acdc31b5-082b-4a68-b586-02354a7fdd73')
     >     Taking a live snapshot (drives=sda, memory=True) (vm:4513)
     >     ...
     >     ...
     >
     >     ERROR (jsonrpc/2) [virt.vm] (vmId='acdc31b5-082b-4a68-b586-02354a7fdd73')
     >     Unable to take snapshot (vm:4517)
     >     Traceback (most recent call last):
     >       File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 4514, in snapshot
     >         self._dom.snapshotCreateXML(snapxml, snapFlags)
     >       File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 100, in f
     >         ret = attr(*args, **kwargs)
     >       File "/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py", line 131, in wrapper
     >         ret = f(*args, **kwargs)
     >       File "/usr/lib/python2.7/site-packages/vdsm/common/function.py", line 94, in wrapper
     >         return func(inst, *args, **kwargs)
     >       File "/usr/lib64/python2.7/site-packages/libvirt.py", line 2620, in snapshotCreateXML
     >         if ret is None:raise libvirtError('virDomainSnapshotCreateXML() failed', dom=self)
     >     libvirtError: internal error: unable to execute QEMU command
     >     'transaction': Could not read L1 table: Input/output error
     >     ...
     >     INFO  (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC call VM.snapshot
     >     failed (error 48) in 4.65 seconds (__init__:312)
     >
     >     It seems that the origin of the problem is libvirt or qemu.
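     >
     >     "Could not read L1 table" means qemu failed to read the qcow2
     >     metadata of the image over gfapi. One way to narrow it down,
     >     assuming qemu-img was built with gluster support (and with the
     >     VM shut down, since 'check' opens the image):
     >
     >         import subprocess
     >
     >         # Host, volume and image path taken from the vdsm log
     >         # above; port 0 in the snapshot XML means the default
     >         # glusterd port (24007).
     >         IMG = ("gluster://192.168.11.20/gluvol3/"
     >                "e54d835a-d8a5-44ae-8e17-fcba1c54e46f/images/"
     >                "1f43916a-bbf2-447b-b17d-ba22d4ec8c90/"
     >                "0e56d498-11d2-4f35-b781-a2e06d286eb8")
     >
     >         # If qemu-img cannot read the L1 table here either, the
     >         # problem is in gluster/qemu rather than vdsm or libvirt.
     >         r = subprocess.run(["qemu-img", "check", IMG],
     >                            capture_output=True, text=True)
     >         print(r.stdout or r.stderr)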
     >
     >     Regards
     >     Chris

_______________________________________________
Users mailing list -- [email protected]
To unsubscribe send an email to [email protected]
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/[email protected]/message/QPZ6TUBYAVTMD4VZNWDLLZYSDGXP3KET/
