I noticed you're trying to connect to an IPv4 address, but the port is
listening on IPv6. Is that right? You should have it listening on IPv4, right?
Also, did you check SELinux or firewalld? Maybe you need to allow port 5000.
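Something like this might help confirm it (port 5000 is just from your
description, and the firewalld zone is whatever your default is):

    # see what is actually listening on 5000, and on which address family
    ss -tlnp | grep 5000
    # if firewalld is running, open the port and reload
    firewall-cmd --add-port=5000/tcp --permanent
    firewall-cmd --reload
    # quick, temporary way to rule SELinux in or out (turn it back on after)
    setenforce 0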
--
Salsa
Sent with ProtonMail Secure Email.
‐‐‐ Original Message ‐‐‐
In the end, all I can do is remove the image.
Again, I see no errors in the logs and Ceph's status is OK. I tried raising
some log levels, but still no helpful info.
Is there anything I should check? Rados?
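For context, the only other checks I could think of are along these lines
(rbd/image1 is a placeholder, and I'm not sure the debug flag is exactly
right):

    # re-run an rbd command with client-side debug logging turned up
    rbd info rbd/image1 --debug-rbd=20
    # at the rados level, see if the image's header object still has watchers
    # (the id comes from the block_name_prefix shown by 'rbd info')
    rados -p rbd listwatchers rbd_header.<image_id>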
--
Salsa
Sent with ProtonMail Secure Email.
Joe,
sorry, I should have been clearer. The incompatible rbd features are
exclusive-lock, journaling, object-map and such.
The info comes from here:
https://documentation.suse.com/ses/6/html/ses-all/ceph-rbd.html
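If any of those need to come off an existing image, it's roughly this
(rbd/image1 is a placeholder; the order matters because object-map and
journaling depend on exclusive-lock):

    rbd info rbd/image1
    rbd feature disable rbd/image1 journaling
    rbd feature disable rbd/image1 fast-diff object-map
    rbd feature disable rbd/image1 exclusive-lock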
--
Salsa
Sent with ProtonMail Secure Email.
‐‐‐ Original Message ‐‐‐
BTW, the documentation can be found here:
https://documentation.suse.com/ses/6/html/ses-all/ceph-rbd.html
--
Salsa
‐‐‐ Original Message ‐‐‐
On Wednesday, September 2, 2020 7:08 PM, Salsa wrote:
> I just came across some SUSE documentation stating that RBD features are not
>
while using rbd-mirror to back up data to a second cluster? I created all
images with all features enabled. Is that compatible?
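For reference, the per-image steps I had in mind are roughly these (rbd/image1
is a placeholder, and I may well have the details wrong):

    rbd info rbd/image1                  # check which features are enabled
    rbd mirror pool enable rbd image     # per-image mirroring on the pool
    rbd mirror image enable rbd/image1
    rbd mirror pool status rbd --verbose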
--
Salsa
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
Hi,
Any news on this error? I'm facing the same issue, I guess. I had a Windows
Server copying data to some RBD images through iSCSI; the server got stuck and
had to be reset, and now the images that had data are blocking all I/O
operations, including editing their config, creating snapshots, etc.
en or stuck but as I said no locks
on them.
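For reference, the lock/watcher checks I mean are along these lines
(rbd/image1 is a placeholder):

    rbd status rbd/image1    # any remaining watchers
    rbd lock ls rbd/image1   # any client locks
    ceph osd blacklist ls    # clients blacklisted after the reset, if any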
I tried a lot of options, and somehow my cluster now has some RGW pools; I
have no idea where they came from.
Any idea what I should do?
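To at least see what they are being used for, listing the pools with their
application tags might help (the pool name is a placeholder):

    ceph osd pool ls detail
    ceph osd pool application get <pool-name>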
--
Salsa
I got 2,323,206 B/s (about 2.3 MB/s) inside the same VM.
I think the performance is way too slow, much slower than it should be, and
that I can fix this by correcting some configuration.
Any advice?
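In case it helps to compare, this is the kind of baseline I was planning to
measure against the pool and a single image (names are placeholders):

    # raw RADOS write throughput for 30 seconds
    rados bench -p rbd 30 write
    # write benchmark against one image, with the default I/O settings
    rbd bench --io-type write rbd/image1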
--
Salsa
I have the same problem: 30 TB available on Ceph, but my SMB share has only
5 TB available. On IRC I was told I should raise the pg count and run the
balancer. Raising the pg count helped a little, and I'm waiting for Ceph to
recover from the pg resizing before running the balancer.
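The commands I'm planning to use once recovery settles are roughly these
(assuming the upmap balancer mode is available on this version):

    ceph osd df              # per-OSD utilisation, to see how uneven it is
    ceph balancer mode upmap
    ceph balancer on
    ceph balancer status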
--
Salsa
Sent with ProtonMail Secure Email.
‐‐‐ Original Message ‐‐‐
On Friday, February 14, 2020 4:49 PM, Mike Christie wrote:
> On 02/13/2020 09:56 AM, Salsa wrote:
>
> > I have a 3-host, 10 4TB HDDs per host Ceph storage setup. I defined a 3
> > replica rbd pool and some images and presented them to
POOL  ID  STORED   OBJECTS  USED    %USED  MAX AVAIL
rbd   29  4.3 TiB  1.42M    13 TiB  13.13  29 TiB
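As a rough sanity check of those numbers (approximate, and assuming the
default headroom before an OSD is considered full):

    3 hosts x 10 OSDs x 4 TB = 120 TB raw  (~109 TiB)
    109 TiB / 3 replicas     = ~36 TiB usable at most
    ~36 TiB minus the 4.3 TiB already stored and the full-ratio headroom
    lands in the region of the 29 TiB MAX AVAIL shown above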
--
Salsa
Sent with [ProtonMail](https://protonmail.com) Secure Email.
‐‐‐ Original Message ‐‐‐
On Thursday, February 13, 2020 4:50 PM, Andrew Ferris wrote:
> Hi Salsa,
>
> More information