Hi,
I know a third option: create a secondary zone mapped to a replicated
pool. The data will be replicated from the primary zone; after that, switch the
master zone and the migration is done. Zero downtime.
This is possible in theory, but I cannot make it work when trying to set up 2
zon
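For reference, the flow being described is roughly this (a hedged sketch;
zonegroup/zone names, the endpoint and the system keys are placeholders, not
values from this thread):
  # create the secondary zone and let RGW replicate into it
  radosgw-admin zone create --rgw-zonegroup=default --rgw-zone=secondary \
      --endpoints=http://rgw2:8080 --access-key=<system-key> --secret=<system-secret>
  radosgw-admin period update --commit
  radosgw-admin sync status   # wait until metadata and data are caught up
  # once in sync, promote the secondary and commit the new period
  radosgw-admin zone modify --rgw-zone=secondary --master --default
  radosgw-admin period update --commit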
I've been setting up a cookbook OSD creation process, and as I walked
through the various stages I noted that the /etc/redhat-release file
said "CentOS Stream 8". I panicked, because IBM has pulled the Ceph
archives for CentOS 8, so I nuked the machine and rebuilt it with more
attention to detail.
Hey Aleksandr,
> In Pacific we have RocksDB column families. Would it be helpful, in the
> case of many tombstones, to reshard our old OSDs?
> Do you think it can help without rocksdb_cf_compact_on_deletion?
> Or could it help even more combined with rocksdb_cf_compact_on_deletion?
Ah, I'm
On 18/07/2024 at 11:33:35+0200, David C. wrote
> you can test on a host and restart it, to validate that everything is fine
> (with ceph orch host maintenance enter [or noout]).
>
> But yes, you should be able to do it without breaking anything.
So just for those who got the same question as me
Josh, thanks!
I will read more about LSM in RocksDB, thanks!
Can I ask one last question?
We have a lot of "old" SSD OSDs in the index pool which were deployed before
Pacific.
In Pacific we have RocksDB column families. Would it be helpful, in the case
of many tombstones, to do resharding of
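(For context, resharding an existing OSD into the column-family layout is an
offline ceph-bluestore-tool operation. A hedged sketch, with the OSD stopped
first; the sharding spec below is what I believe is the Pacific default, so
verify it against your release before use:
  systemctl stop ceph-osd@0
  ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-0 \
      --sharding="m(3) p(3,0-12) O(3,0-13)=block_cache={type=binned_lru} L P" reshard
  systemctl start ceph-osd@0
)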
Hi Richard,
See here for an example of what the OSD logs show in case of this "PG
overdose protection". https://tracker.ceph.com/issues/65749
Cheers, dan
--
Dan van der Ster
CTO
Clyso GmbH
p: +49 89 215252722 | a: Vancouver, Canada
w: https://clyso.com | e: dan.vanders...@clyso.com
On Wed, Ju
> And my question is: we have regular compaction that does some work. Why
> doesn't it help with tombstones?
> Why does only offline compaction help in our case?
Regular compaction will take care of any tombstones in the files that
end up being compacted, and compaction, when triggered, may even f
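A hedged sketch of turning on deletion-triggered compaction
(rocksdb_cf_compact_on_deletion is named in this thread; the companion
*_trigger and *_sliding_window options and their values are my assumption of
the related knobs, and I believe the OSDs need a restart to pick them up):
  ceph config set osd rocksdb_cf_compact_on_deletion true
  ceph config set osd rocksdb_cf_compact_on_deletion_trigger 16384
  ceph config set osd rocksdb_cf_compact_on_deletion_sliding_window 32768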
+1 to this, also ran into this in our lab testing. Thanks for sharing this
information!
Regards,
Bailey
> -----Original Message-----
> From: Eugen Block
> Sent: July 18, 2024 3:55 AM
> To: ceph-users@ceph.io
> Subject: [ceph-users] Re: Heads up: New Ceph images require x86-64-v2 and
> possibly
Hi,
instead of exporting/importing single objects via rados export/import
I would use 'rados cppool ' although it does a
linear copy of each object, so I'm not sure if that's so much better...
So first create a new replicated pool, 'rados cppool old new', then
rename the original pool, and
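A hedged sketch of that flow ('old' and 'new' are placeholder pool names; the
PG counts are illustrative):
  ceph osd pool create new 64 64 replicated
  rados cppool old new                 # linear copy of every object
  ceph osd pool rename old old.bak
  ceph osd pool rename new old
  # verify the data, then eventually:
  # ceph osd pool delete old.bak old.bak --yes-i-really-really-mean-it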
Hi,
can you please provide more information?
Which other flags did you set (noout should be sufficient, or just use
the maintenance mode)?
Please share the output from:
ceph osd tree
ceph osd df
ceph osd pool ls detail
Also share the corresponding CRUSH rule that applies to the affected pool.
Zit
Thanks, that's what I proposed to the customer as well. They also have
their own CA, so getting such a certificate shouldn't be a problem either.
Thanks!
Quoting Kai Stian Olstad:
On Thu, Jul 18, 2024 at 10:49:02AM +0000, Eugen Block wrote:
And after restarting the daemon,
Thank you for your research, Frédéric,
We looked and the conf files were up to date, in the form
[v1:(...),v2:(...)]
I managed to reproduce the "incident":
[aevoo-test - ceph-0]# ceph mon dump -f json|jq '.mons[].public_addrs'
dumped monmap epoch 2
{
"addrvec": [
{
"type": "v2",
Hi Josh, thanks!
I have one more question. I'm trying to reproduce our OSD degradation due to
massive lifecycle deletion, and as a next step I will try enabling
rocksdb_cf_compact_on_deletion. But I don't understand one thing.
Okay, default auto-compaction can't detect growing tombstones, but
reg
On Thu, Jul 18, 2024 at 10:49:02AM +0000, Eugen Block wrote:
And after restarting the daemon, it seems to work. So my question is,
how do you deal with per-host certificates and rgw? Any comments are
appreciated.
By not dealing with it, sort of.
Since we run our own CA, I create one certifi
On Thu, Jul 18, 2024 at 6:14 PM Petr Bena wrote:
>
> I created a cephfs using mgr dashboard, which created two pools:
> cephfs.fs.meta and cephfs.fs.data
>
> We are using custom provisioning for user-defined volumes (users provide YAML
> manifests with a definition of what they want) which creates
Hi,
I came across [1] and wanted to try to have all certificates/keys in
one file. But it appears that the validation happens only against the
first cert. So what I did was to concatenate all certs/keys into one
file, then added that to ceph:
ceph config-key set rgw/cert/rgw.realm.zone -i
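For reference, the concatenation step looked roughly like this (a hedged
sketch; file and service names are placeholders, and per the above the
validation may still only look at the first cert):
  cat host1.crt host1.key host2.crt host2.key > rgw-bundle.pem
  ceph config-key set rgw/cert/rgw.realm.zone -i rgw-bundle.pem
  ceph orch redeploy rgw.realm.zone   # pick up the new payload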
Hi Albert, David,
I came across this: https://github.com/ceph/ceph/pull/47421
"OSDs have a config file that includes addresses for the mon daemons.
We already have in place logic to cause a reconfig of OSDs if the mon map
changes, but when we do we aren't actually regenerating the config
so it's
I created a cephfs using mgr dashboard, which created two pools: cephfs.fs.meta
and cephfs.fs.data
We are using custom provisioning for user-defined volumes (users provide YAML
manifests with a definition of what they want) which creates dedicated data
pools for them, so cephfs.fs.data is never u
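For reference, a hedged sketch of attaching a dedicated data pool to a cephfs
and pinning a directory to it (pool/path names are placeholders; the PG count
is illustrative):
  ceph osd pool create fs.user1.data 32 32 replicated
  ceph fs add_data_pool fs fs.user1.data
  setfattr -n ceph.dir.layout.pool -v fs.user1.data /mnt/fs/volumes/user1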
Thanks Christian,
I see the fix is in the postinst, so a reboot probably shouldn't put
"nobody" back, right?
On Thu, 18 Jul 2024 at 11:44, Christian Rohmann
<christian.rohm...@inovex.de> wrote:
> On 18.07.24 9:56 AM, Albert Shih wrote:
> >Error scraping /var/lib/ceph/crash: [Errno 13
On 18.07.24 9:56 AM, Albert Shih wrote:
Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied:
'/var/lib/ceph/crash'
There is / was a bug with the permissions for ceph-crash, see
*
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/VACLBNVXTYNSXJSNXJSRAQNZHCHABDF4/
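Until the fixed package is in place, a hedged manual workaround is to restore
the ownership (on cephadm hosts the path includes the FSID, hence the glob):
  chown -R ceph:ceph /var/lib/ceph/*/crash
  # then restart the crash daemon for your deployment type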
you can test on a host and restart it, to validate that everything is fine
(with ceph orch host maintenance enter [or noout]).
But yes, you should be able to do it without breaking anything.
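A hedged sketch of that flow (the host name is a placeholder):
  ceph osd set noout                        # or use maintenance mode below
  ceph orch host maintenance enter host1    # stops the managed daemons on host1
  # ... restart / reboot the host ...
  ceph orch host maintenance exit host1
  ceph osd unset noout
  ceph -s                                   # confirm health before the next host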
On Thu, 18 Jul 2024 at 11:08, Albert Shih wrote:
> On 18/07/2024 at 11:00:56+0200, Albert Shih wrote
Anyone got any ideas why this one lifecycle rule never runs automatically?
On 15/07/2024 13:32, Chris Palmer wrote:
Reef 18.2.2, package install on CentOS 9.
This is a very straightforward production cluster, 2 RGW hosts, no
multisite. 4 buckets have lifecycle policies:
$ radosgw-admin lc l
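A hedged way to inspect the lifecycle state and force a manual run for the
stubborn bucket (the bucket name is a placeholder):
  radosgw-admin lc list                        # per-bucket LC status
  radosgw-admin lc get --bucket=<bucket>       # configured rules
  radosgw-admin lc process --bucket=<bucket>   # force a run now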
On 18/07/2024 at 11:00:56+0200, Albert Shih wrote
> On 18/07/2024 at 10:56:33+0200, David C. wrote
>
Hi,
>
> > Your ceph processes are in containers.
>
> Yes I know but in my install process I just install
>
> ceph-common
> ceph-base
>
> then cephadm via wget.
>
> I didn't
On 18/07/2024 at 10:56:33+0200, David C. wrote
Hi,
> Your ceph processes are in containers.
Yes, I know, but in my install process I just install
ceph-common
ceph-base
then cephadm via wget.
I didn't manually install the other packages, like:
ii ceph-fuse
Your ceph processes are in containers.
You don't need the ceph-* packages on the host hosting the containers.
Regards,
*David CASIER*
*Direct line: +33(0) 9 72 61 98 29*
On 18/07/2024 at 10:27:09+0200, David C. wrote
Hi,
>
> perhaps a conflict with the udev rules of locally installed packages.
>
> Try uninstalling ceph-*
Sorry... not sure I understand. You want me to uninstall ceph?
Regards.
JAS
--
Albert SHIH 🦫 🐸
Observatoire de Paris
France
Heure loca
Hi Albert,
perhaps a conflict with the udev rules of locally installed packages.
Try uninstalling ceph-*
On Thu, 18 Jul 2024 at 09:57, Albert Shih wrote:
> Hi everyone.
>
> After my upgrade from 17.2.7 to 18.2.2 I notice that after each restart I
> get an issue with permissions on
>
> /var/lib
Hi,
Currently I'm testing replication between 2 zones in the same cluster.
But only metadata is synced, not the data. I checked endpoints, system_key,...
all good. If anyone has any idea, please guide me in resolving this situation.
Thanks
The radosgw shows this log on both sides, primary
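A hedged first pass at triaging metadata-only sync (the zone name is a
placeholder; run these against the secondary zone's RGW):
  radosgw-admin sync status
  radosgw-admin data sync status --source-zone=primary   # per-shard detail
  radosgw-admin period get   # confirm both sides agree on the current period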
Hi everyone.
After my upgrade from 17.2.7 to 18.2.2 I notice that after each restart I
get an issue with permissions on
/var/lib/ceph/FSID/crash
after the restart the owner/group are set to nobody and I get
Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied:
'/var/lib/ceph/crash'
I