That's worth a look; I had an issue when upgrading an older cluster to
Octopus where I had to change some config file within the container.
Quoting "Jens Hyllegaard (Soft Design A/S)":
According to the management interface, everything is OK.
There are 3 monitors in quorum.
I am running this on docker.
I did not. Honestly I was not aware of such a thing. Thanks for the
notification. And I hope this is not bad news.
May I ask whether it can be changed dynamically, and whether any disadvantages
should be expected?
> On 27 Jan 2021, at 01:33, Josh Baergen wrote:
>
> > I created radosgw pools. secondaryzone.rgw.buckets.data pool is
> > configured as EC 8+2 (jerasure).
> I created radosgw pools. secondaryzone.rgw.buckets.data pool is
> configured as EC 8+2 (jerasure).
Did you override the default bluestore_min_alloc_size_hdd (64k in that
version, IIRC) when creating your HDD OSDs? If not, all of the small objects
produced by that EC configuration will lead to significant space amplification.
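(For illustration only, not from the original mails: a rough sketch of the
space amplification involved, assuming the 64k HDD default and ignoring other
bluestore overhead. The sizes and pool geometry below are just examples.)

import math

MIN_ALLOC = 64 * 1024      # assumed bluestore_min_alloc_size_hdd, in bytes
K, M = 8, 2                # EC 8+2: 8 data chunks + 2 coding chunks

def raw_bytes(object_size):
    # each of the k data chunks holds ceil(size/k) bytes, and every chunk
    # (data and coding) is rounded up to the allocation unit on disk
    chunk = math.ceil(object_size / K)
    alloc = max(MIN_ALLOC, math.ceil(chunk / MIN_ALLOC) * MIN_ALLOC)
    return alloc * (K + M)

for size in (4 * 1024, 64 * 1024, 4 * 1024 * 1024):
    print(size, raw_bytes(size), raw_bytes(size) / size)

# e.g. a 4 KiB object ends up occupying 640 KiB of raw space (160x),
# versus the 1.25x you would expect from the 8+2 encoding alone.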
Thanks Joe for your reply.
Yes, I realise I can scrub the one that's behind; that's not my issue this
time. I'm interested in the inconsistent pg.
Usually the list-inconsistent-obj command shows which copy is wrong and
what the issue is. In this case it reports nothing.
I don't really want to blindly run a repair.
Just issue the commands:
ceph pg deep-scrub 17.1cs
This will deep-scrub this pg.
ceph pg repair 17.7ff
This repairs the pg.
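(For reference, not from the original mail: the list-inconsistent-obj command
mentioned earlier in the thread is usually run like this, reusing the pg id
above:)

rados list-inconsistent-obj 17.7ff --format=json-pretty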
>>> Richard Bade 1/26/2021 3:40 PM >>>
Hi Everyone,
I have also seen this inconsistent state with empty output when you do
list-inconsistent-obj:
$ sudo ceph health detail
HEALTH_ERR 1 scrub errors; Possible data damage: 1 pg inconsistent; 1
pgs not deep-scrubbed in time
Hi Everyone,
I have also seen this inconsistent state with empty output when you do
list-inconsistent-obj:
$ sudo ceph health detail
HEALTH_ERR 1 scrub errors; Possible data damage: 1 pg inconsistent; 1
pgs not deep-scrubbed in time
OSD_SCRUB_ERRORS 1 scrub errors
PG_DAMAGED Possible data damage: 1 pg inconsistent
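(A suggestion, not from the original posts: when list-inconsistent-obj comes
back empty, one thing that sometimes helps is finding the acting set and
checking the primary OSD's log around the time of the scrub. The osd id and
log path below are the usual defaults, adjust as needed:)

ceph pg map 17.7ff
# then, on the host of the primary OSD:
grep -i scrub /var/log/ceph/ceph-osd.<id>.log | grep 17.7ff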
On Tue, Jan 26, 2021 at 9:48 PM Schoonjans, Tom (RFI,RAL,-) <
tom.schoonj...@rfi.ac.uk> wrote:
> Hi Yuval,
>
>
> I worked on this earlier today with Tom Byrne and I think I may be able to
> provide some more information.
>
> I set up the RabbitMQ server myself, and created the exchange with type
>
Hi Tom,
Did you create the exchange in RabbitMQ? The RGW does not create it and
assumes it is already created.
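(A minimal sketch of creating the exchange up front with pika; the host,
exchange name and exchange type here are placeholders, not values from this
thread:)

import pika

conn = pika.BlockingConnection(pika.ConnectionParameters(host='rabbit.example.com'))
ch = conn.channel()
# RGW assumes the exchange already exists, so declare it before creating the topic
ch.exchange_declare(exchange='rgw-events', exchange_type='topic', durable=True)
conn.close()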
Could you increase the log level in RGW and see if there are more log
messages that have "AMQP" in them?
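(One way to raise the RGW log level, assuming the centralized config database
is in use; the instance name is a placeholder for your gateway:)

ceph config set client.rgw.<instance> debug_rgw 20
ceph config set client.rgw.<instance> debug_ms 1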
Thanks,
Yuval
On Tue, Jan 26, 2021 at 7:33 PM Byrne, Thomas (STFC,RAL,SC) <
tom.b
Sorry for replying late :(. And thanks for the tips.
This is a fresh cluster, and I didn't think data distribution would be a
problem. Is this normal?
Below is the ceph osd df output. The related pool is HDD-only
(prod.rgw.buckets.data). I guess there is variance, but I couldn't figure out
the reason.
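(Not from the original mail, but for uneven PG distribution the upmap balancer
is the usual first thing to check, assuming a recent enough cluster:)

ceph balancer status
ceph balancer mode upmap
ceph balancer on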
We just upgraded a cephfs cluster from 12.2.12 to 14.2.11. Our next step is
to upgrade to 14.2.16 to troubleshoot this issue, but I thought I'd reach
out here first in case anyone has any ideas. The clients are still running an
older version of ceph-fuse (12.2.4) and it's very difficult to remount all of
them.
Did some testing with clients running 16.1. I set up two different clients, each
one dedicated to its respective cluster. Running Proxmox, I compiled the
latest Pacific 16.1 build.
root@Ccspacificclient:/cephbuild/ceph/build/bin# ./ceph -v
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
Hi all,
We've been trying to get RGW Bucket notifications working with a RabbitMQ
endpoint on our Nautilus 14.2.15 cluster. The gateway host can communicate with
the RabbitMQ server just fine, but when RGW tries to send a message to the
endpoint, the message never appears in the queue, and we g
It seems I have erasure pools.
https://docs.ceph.com/en/latest/rados/operations/erasure-code/#erasure-coding-with-overwrites
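(As described on that page, partial overwrites on an erasure-coded pool have
to be enabled explicitly, roughly like this; the pool name is just an example:)

ceph osd pool set my_ec_pool allow_ec_overwrites true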
On Tue, Jan 26, 2021 at 12:02 PM Cary FitzHugh
wrote:
> are there any good documents on the implicit requirements for librados
> calls?
>
> I can successfully write a file if it doesn't already exist.
are there any good documents on the implicit requirements for librados
calls?
I can successfully write a file if it doesn't already exist.
I can write_full a file, and it works (but many of my files will need to be
chunked off disk, not enough RAM)
I can append to a nonexistent file and it works.
I *can
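(A small sketch of the behaviour described above, using the Python binding;
the client name and pool name are placeholders, the conf/keyring are assumed
to be set up, and the expectation that the partial overwrite fails assumes
allow_ec_overwrites is off on the pool:)

import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.myclient')
cluster.connect()
ioctx = cluster.open_ioctx('my_ec_pool')

ioctx.write_full('obj1', b'whole object')   # full-object write works on EC pools
ioctx.append('obj1', b' plus more')         # append works too
try:
    ioctx.write('obj1', b'X', 4)            # partial overwrite at offset 4
except rados.Error as e:
    print('partial overwrite rejected:', e) # expected without ec overwrites

ioctx.close()
cluster.shutdown()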
According to the management interface, everything is OK.
There are 3 monitors in quorum.
I am running this on docker.
Perhaps I should have a look at the containers and see if their information is
different from what is in /etc/ceph on the hosts.
Regards
Jens
-Original Message-
From: Eug
https://github.com/ceph/ceph/blob/master/src/pybind/rados/rados.pyx#L390
I have found that I need to pass in "name" to the Cluster create call, with
the name of the user I want to connect with.
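(A minimal sketch of what that looks like with the Python binding; the client
name and pool match the example below, while the conf and keyring paths are
assumptions:)

import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                      conf={'keyring': '/etc/ceph/ceph.client.myclient.keyring'},
                      name='client.myclient')   # or rados_id='myclient'
cluster.connect()
ioctx = cluster.open_ioctx('testpool')
ioctx.write_full('hello', b'world')
ioctx.close()
cluster.shutdown()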
On Tue, Jan 26, 2021 at 10:09 AM Cary FitzHugh
wrote:
> Hello!
>
> I've got a new cluster set up. I created a new user and pool.
Hello!
I've got a new cluster set up. I created a new user and pool. Creating
the user like so:
ceph auth add client.myclient mon 'allow r' osd 'allow rw pool=testpool'
I have set up librados and python bindings, with a ceph.conf and a keyring file.
My trouble is that when I connect with clie
Do you have mon containers running so they can form a quorum? Do your
hosts still have (at least) a minimal ceph.conf?
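(Not from the original mail, but assuming this was deployed with
cephadm/docker, the checks could look something like:)

docker ps | grep mon                                  # are the mon containers up?
cephadm shell -- ceph -s                              # run the CLI inside a container with the cluster config
cephadm shell -- ceph config generate-minimal-conf    # can be used to recreate /etc/ceph/ceph.conf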
Quoting "Jens Hyllegaard (Soft Design A/S)":
Hi.
I am not sure why this is not working, but I am now unable to use
the ceph command on any of my hosts.
When I try to launch ceph, I get the following response:
[errno 13] RADOS permission denied (error connecting to the cluster)
Hi.
I am not sure why this is not working, but I am now unable to use the ceph
command on any of my hosts.
When I try to launch ceph, I get the following response:
[errno 13] RADOS permission denied (error connecting to the cluster)
The web management interface is working fine.
I have a suspic
Yes, correct. It was in older releases. I remember once I saw this
video from Sage where he had run the rados bench on a default 'rbd'.
On Tue, Jan 26, 2021 at 2:23 PM Janne Johansson wrote:
> On Tue, 26 Jan 2021 at 14:20, Bobby wrote:
> > well, you are right. I forgot to create the pool. I thought 'rbd' pool is
> > created by default. Now it works after creating it :-)
On Tue, 26 Jan 2021 at 14:20, Bobby wrote:
> well, you are right. I forgot to create the pool. I thought 'rbd' pool is
> created by default. Now it works after creating it :-)
It was on older releases, I think many old clusters have an unused "rbd" pool.
--
May the most significant bit of your life be positive.
well, you are right. I forgot to create the pool. I thought 'rbd' pool is
created by default. Now it works after creating it :-)
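(For completeness, in a vstart environment that would be something like the
following; the pg count is just an example:)

bin/ceph osd pool create rbd 32
bin/rados -p rbd bench 30 write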
On Tue, Jan 26, 2021 at 1:52 PM Eugen Block wrote:
> The message is quite clear, it seems as if you don't have a pool
> "rbd", do you?
>
>
> Quoting Bobby:
>
> > He
The message is quite clear, it seems as if you don't have a pool
"rbd", do you?
Quoting Bobby:
Hello,
I am getting an error while trying to run the rados benchmark after running
the vstart script.
I run:
../src/vstart.sh -d -n -l
and then when I try to run:
bin/rados -p rbd bench 30 write
it gives me an error saying:
error opening pool rbd: (2) No such file or directory
Hello,
I am getting an error while trying to run the rados benchmark after running
the vstart script.
I run:
../src/vstart.sh -d -n -l
and then when I try to run:
bin/rados -p rbd bench 30 write
it gives me an error saying:
error opening pool rbd: (2) No such file or directory
Can someone please help?
Hi,
Is there anybody running a cluster with different OSes?
Due to the CentOS 8 change I might try to add Ubuntu OSD nodes to the CentOS
cluster and decommission the CentOS nodes slowly, but I'm not sure whether
this is possible or not.
Thank you
Anthony D'Atri (anthony.datri) writes:
> I have firsthand experience migrating multiple clusters from Ubuntu to RHEL,
> preserving the OSDs along the way, with no loss or problems.
>
> It’s not like you’re talking about OpenVMS ;)
:)
We converted a cluster from Ubuntu 18.04 to