On Thu, Jul 1, 2021 at 12:53 AM Patrick Donnelly wrote:
>
> Hi Dan,
>
> Sorry for the very late reply -- I'm going through old unanswered email.
>
> On Mon, Nov 9, 2020 at 4:13 PM Dan van der Ster wrote:
> >
> > Hi,
> >
> > Today while debugging something we had a few questions that might lead
>
On Thu, Jul 1, 2021 at 8:37 AM Jan Kasprzak wrote:
>
> Hello, Ceph users,
>
> How can I figure out why it is not possible to unprotect a snapshot
> in a RBD image? I use this RBD pool for OpenNebula, and somehow there
> is a snapshot in one image, which OpenNebula does not see. So I wanted
On 22.06.2021 17:27, David Orman wrote:
https://tracker.ceph.com/issues/50526
https://github.com/alfredodeza/remoto/issues/62
If you're brave (YMMV, test in non-prod first), we pushed an image with a fix
for the issue we encountered, as described above, here:
https://hub.docker.com/repository/docker/ormandj/ce
Ilya Dryomov wrote:
: On Thu, Jul 1, 2021 at 8:37 AM Jan Kasprzak wrote:
: >
: > Hello, Ceph users,
: >
: > How can I figure out why it is not possible to unprotect a snapshot
: > in a RBD image? I use this RBD pool for OpenNebula, and somehow there
: > is a snapshot in one image, which Op
Hi,
Is Ceph using TCP_FASTOPEN for its sockets?
If not, why not?
Thanks.
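For reference, kernel-level TCP Fast Open support is controlled by a sysctl;
a minimal sketch, assuming a Linux host (this is generic socket tuning, not a
Ceph option):

sysctl net.ipv4.tcp_fastopen        # bitmask: 1 = client, 2 = server, 3 = both
sysctl -w net.ipv4.tcp_fastopen=3   # allow TFO on both outgoing and listening sockets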
Hi,
Mapping of RBD volumes fails cluster-wide.
The volumes that are already mapped are OK, but new volumes won't map.
Receiving errors like:
(108) Cannot send after transport endpoint shutdown
or executing:
# rbd -p CEPH map vm-120-disk-0
will show:
rbd: sysfs write failed
In some cases useful info
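When rbd map fails like this, a first-pass check on the client usually looks
something like the sketch below (standard krbd troubleshooting, not taken from
the original report):

dmesg | tail -n 50    # the kernel client logs the actual reason for the sysfs write failure here
rbd showmapped        # images already mapped on this node
ceph -s               # overall cluster health; MON/OSD problems can also break new maps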
On Thu, Jul 1, 2021 at 9:48 AM Jan Kasprzak wrote:
>
> Ilya Dryomov wrote:
> : On Thu, Jul 1, 2021 at 8:37 AM Jan Kasprzak wrote:
> : >
> : > Hello, Ceph users,
> : >
> : > How can I figure out why it is not possible to unprotect a snapshot
> : > in a RBD image? I use this RBD pool for Op
Ilya Dryomov wrote:
: On Thu, Jul 1, 2021 at 8:37 AM Jan Kasprzak wrote:
: >
: > # rbd snap unprotect one/one-1312@snap
: > 2021-07-01 08:28:40.747 7f3cb6ffd700 -1 librbd::SnapshotUnprotectRequest:
cannot unprotect: at least 1 child(ren) [68ba8e7bace188] in pool 'one'
: > 2021-07-01 08:28:40.749
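A protected snapshot can only be unprotected once no clones depend on it, so
the usual next step is to find and flatten (or delete) the children; a sketch
using the names from the error above (the child image name is a placeholder
for whatever the first command reports):

rbd children one/one-1312@snap         # list clone images that still depend on the snapshot
rbd flatten one/<child-image>          # detach a clone from its parent by copying the data in
rbd snap unprotect one/one-1312@snap   # should succeed once no children remain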
On Thu, Jul 1, 2021 at 10:50 AM Jan Kasprzak wrote:
>
> Ilya Dryomov wrote:
> : On Thu, Jul 1, 2021 at 8:37 AM Jan Kasprzak wrote:
> : >
> : > # rbd snap unprotect one/one-1312@snap
> : > 2021-07-01 08:28:40.747 7f3cb6ffd700 -1 librbd::SnapshotUnprotectRequest:
> cannot unprotect: at least 1 chi
Ilya Dryomov wrote:
: On Thu, Jul 1, 2021 at 10:50 AM Jan Kasprzak wrote:
: >
: > Ilya Dryomov wrote:
: > : On Thu, Jul 1, 2021 at 8:37 AM Jan Kasprzak wrote:
: > : >
: > : > # rbd snap unprotect one/one-1312@snap
: > : > 2021-07-01 08:28:40.747 7f3cb6ffd700 -1
librbd::SnapshotUnprotectRequest:
I am having issues accessing the dashboard
[dashboard WARNING request] [:::10.2.16.6:49696] [GET] [401] [0.045s]
[463.0B] [d4a77c44-f9ee-45bc-8f90-870362e131bc] /api/mgr/module/telemetry
Jun 30 20:51:16 cfnode01 bash[2655]: debug 2021-06-30T20:51:16.373+
7f536f15b700 0 [dashboard INFO
That's this one:
https://github.com/ceph/ceph/pull/41893
Daniel
On 6/29/21 5:35 PM, Chu, Vincent wrote:
Hi, I'm running into an issue with RadosGW where multipart uploads crash, but
only on buckets with a hyphen, period or underscore in the bucket name and with
a bucket policy applied. We've
Hello Everyone!
I'm trying to mount Ceph with the FUSE client under Debian 9 (ceph-fuse
10.2.11-2).
Ceph is on the latest Octopus release.
The direct command is working, but writing it in fstab does not.
Command I use:
ceph-fuse --id dev.wsc -k /etc/ceph/ceph.clinet.dev.wsc.keyring -r
/t
Hello Stefan!
Thanks for the input.
Yes that was a typo.
I have created the file /etc/ceph/ceph.client.dev.wsc.key and entered just the
key without anything else.
fstab line now looks like this:
none /mnt/ceph fuse.ceph
ceph.id=dev.wsc,ceph.client_mountpoint=/testing/dev.wsc
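For comparison, a complete fuse.ceph fstab entry normally looks something like
the line below (id and mountpoint taken from the message above, the remaining
options are common defaults):

none  /mnt/ceph  fuse.ceph  ceph.id=dev.wsc,ceph.client_mountpoint=/testing/dev.wsc,_netdev,defaults  0 0

By default ceph-fuse will look for the key for that id in
/etc/ceph/ceph.client.dev.wsc.keyring (among the standard keyring paths).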
On 7/1/21 4:14 PM, Simon Sutter wrote:
Hello Everyone!
I'm trying to mount Ceph with the FUSE client under Debian 9 (ceph-fuse
10.2.11-2).
Ceph is on the latest Octopus release.
The direct command is working, but writing it in fstab does not.
Command I use:
ceph-fuse --id dev.wsc -k /et
On Thu, Jul 1, 2021 at 10:36 AM Oliver Dzombic wrote:
>
>
>
> Hi,
>
> Mapping of RBD volumes fails cluster-wide.
Hi Oliver,
Clusterwide -- meaning on more than one client node?
>
> The volumes that are already mapped are OK, but new volumes won't map.
>
> Receiving errors like:
>
> (108) Cannot send aft
I have two rbd images.
Running fio on them, one performs very well and one performs poorly. I'd like
to get more insight into why.
I know that
rbd info xx/yyy
gives me a small amount of information, but I can't seem to find a detailed info
dump, in the way you can do things like
ceph daemon os
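For reference, a few commands that typically surface more per-image detail
than plain rbd info (pool/image spec xx/yyy reused from above):

rbd info --format json xx/yyy   # full feature list, object size, striping parameters
rbd du xx/yyy                   # provisioned vs. actually allocated space
rbd status xx/yyy               # current watchers, i.e. which clients have the image open
rbd perf image iostat xx        # live per-image IOPS/latency (Nautilus and later)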
Hi,
I want to get logs from the cluster into Graylog, but it seems like Ceph sends an empty
"host" field. Can anyone help?
Ceph 16.2.3
# ceph config dump | grep graylog
global advanced clog_to_graylog true
global advanced clog_to_graylog_host xx.xx.xx.xx
global basic err_to_graylog true
global basic log_g
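For reference, options like the ones in the dump above are normally applied
with ceph config set, for example (same placeholder host as in the dump):

ceph config set global clog_to_graylog true
ceph config set global clog_to_graylog_host xx.xx.xx.xx
ceph config set global err_to_graylog true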