On Mon, Jan 29, 2024 at 08:11, Jan Kasprzak wrote:
>
> Hi all,
>
> how can radosgw be deployed manually? For Ceph cluster deployment,
> there is still (fortunately!) a documented method which works flawlessly
> even in Reef:
>
> https://docs.ceph.com/en/latest/install/manual-deployment/#mon
Good morning,
Janne was a bit quicker than me, so I'll skip my short instructions
on how to deploy it manually. But your (cephadm managed) cluster will
complain about "stray daemons". There doesn't seem to be a way to
deploy rgw daemons manually with the cephadm tool so they wouldn't be
stray.
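If the stray-daemon warning is the only thing in the way, one possible workaround (my suggestion, not something discussed further in this thread) is to silence that particular check in the cephadm module; note that it hides warnings for all stray daemons, not just manually deployed RGWs:

  # silence the STRAY_DAEMON warning entirely (affects every stray daemon)
  ceph config set mgr mgr/cephadm/warn_on_stray_daemons false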
Hi Michel,
are your OSDs HDD or SSD? If they are HDD, it's possible that they can't handle
the deep-scrub load with default settings. In that case, have a look at this
post
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/YUHWQCDAKP5MPU6ODTXUSKT7RVPERBJF/
for some basic tuning.
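For orientation, these are the kinds of knobs such tuning usually touches; the concrete values below are placeholders of mine, not recommendations from the linked post:

  # stretch the deep-scrub interval, e.g. to two weeks (in seconds)
  ceph config set osd osd_deep_scrub_interval 1209600
  # optionally confine scrubbing to off-peak hours
  ceph config set osd osd_scrub_begin_hour 19
  ceph config set osd osd_scrub_end_hour 6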
I'm not sure if I understand correctly:
I decided to distribute subvolumes across multiple pools instead of
multi-active MDS.
With this method I will have multiple MDS and [1x CephFS client for each
pool / host].
Those two statements contradict each other: either you have
multi-active MDS or
On Mon, Jan 29, 2024 at 09:35, Eugen Block wrote:
But your (cephadm managed) cluster will
> complain about "stray daemons". There doesn't seem to be a way to
> deploy rgw daemons manually with the cephadm tool so it wouldn't be
> stray. Is there a specific reason not to use the orchestrator for RGW?
Ah, you probably have dedicated RGW servers, right?
Quoting Janne Johansson:
On Mon, Jan 29, 2024 at 09:35, Eugen Block wrote:
But your (cephadm managed) cluster will
complain about "stray daemons". There doesn't seem to be a way to
deploy rgw daemons manually with the cephadm tool so it w
On Mon, Jan 29, 2024 at 10:38, Eugen Block wrote:
>
> Ah, you probably have dedicated RGW servers, right?
They are VMs, but yes.
--
May the most significant bit of your life be positive.
Thank you Frank,
All disks are HDDs. I would like to know if I can increase the number of PGs
live in production without a negative impact on the cluster. If yes, which
commands should I use?
Thank you very much for your prompt reply.
Michel
On Mon, Jan 29, 2024 at 10:59 AM Frank Schilder wrote:
>
On Mon, Jan 29, 2024 at 12:58, Michel Niyoyita wrote:
>
> Thank you Frank,
>
> All disks are HDDs. I would like to know if I can increase the number of PGs
> live in production without a negative impact on the cluster. If yes, which
> commands should I use?
Yes. "ceph osd pool set pg_num "
where the nu
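To make that concrete, a minimal sketch of how the increase could look; the pool name "volumes" and the target of 128 PGs are made-up examples, not values from this thread:

  # check the current values first
  ceph osd pool get volumes pg_num
  ceph osd pool get volumes pgp_num
  # raise pg_num; recent releases ramp pgp_num up to follow automatically
  ceph osd pool set volumes pg_num 128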
Thank you Janne,
is there no need to set some flags like "ceph osd set nodeep-scrub"?
Thank you
On Mon, Jan 29, 2024 at 2:04 PM Janne Johansson wrote:
> On Mon, Jan 29, 2024 at 12:58, Michel Niyoyita wrote:
> >
> > Thank you Frank,
> >
> > All disks are HDDs. Would like to know if I can increa
I have logged this as https://tracker.ceph.com/issues/64213
On 16/01/2024 14:18, DERUMIER, Alexandre wrote:
Hi,
ImportError: PyO3 modules may only be initialized once per interpreter process
and ceph -s reports "Module 'dashboard' has failed dependency: PyO3
modules may only be initialized once per interpreter process"
On Sat, Nov 25, 2023 at 7:01 PM Tony Liu wrote:
>
> Thank you Eugen! "rbd du" is it.
> The used_size from "rbd du" is object count times object size.
> That's the actual storage taken by the image in the backend.
Somebody just quoted this sentence out of context, so I feel like
I need to elaborate.
This is how it is set; if you suggest making some changes, please advise.
Thank you.
ceph osd pool ls detail
pool 1 'device_health_metrics' replicated size 3 min_size 2 crush_rule 0
object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode on last_change 1407
flags hashpspool stripe_width 0 pg_nu
Hello, Janne,
Janne Johansson wrote:
> On Mon, Jan 29, 2024 at 08:11, Jan Kasprzak wrote:
> >
> > Is it possible to install a new radosgw instance manually?
> > If so, how can I do it?
>
> We are doing it, and I found the same docs issue recently, so Zac
> pushed me to provide a skeleton
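Independent of that skeleton, here is a rough sketch of how a manual start can look under the pre-cephadm conventions; the instance name "rgw.myhost", the paths and port 8080 are placeholders of mine, not from this thread:

  # create a data dir and a keyring for the new instance
  mkdir -p /var/lib/ceph/radosgw/ceph-rgw.myhost
  ceph auth get-or-create client.rgw.myhost mon 'allow rw' osd 'allow rwx' \
      -o /var/lib/ceph/radosgw/ceph-rgw.myhost/keyring
  # minimal ceph.conf section on that host:
  #   [client.rgw.myhost]
  #   rgw_frontends = beast port=8080
  # then start the daemon (or enable the ceph-radosgw@rgw.myhost systemd unit)
  radosgw -f --cluster ceph --name client.rgw.myhost --setuser ceph --setgroup ceph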
Hello, Eugen,
Eugen Block wrote:
> Janne was a bit quicker than me, so I'll skip my short instructions
> how to deploy it manually. But your (cephadm managed) cluster will
> complain about "stray daemons". There doesn't seem to be a way to
> deploy rgw daemons manually with the cephadm too
Hi,
I was just curious what your intentions are, not meaning to criticize
it. ;-) There are different reasons why that could be a better choice.
And as I already mentioned previously, you would only get stray
daemon warnings if you deployed the RGWs on hosts which already have
cephadm-managed daemons.
> If there is a (planned) documentation of manual rgw bootstrapping,
> it would be nice to also have the names of the required pools listed there.
It will depend on several things, like whether you enable Swift users; I
think they get a pool of their own, so I guess one would need to look
in the source.
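For orientation, in a plain single-zone setup the pools one typically ends up with look like this; the names assume the default zone "default", and radosgw normally creates them on demand if its key has the permissions to do so:

  ceph osd pool ls | grep rgw
  .rgw.root
  default.rgw.control
  default.rgw.meta
  default.rgw.log
  default.rgw.buckets.index   # appears once buckets exist
  default.rgw.buckets.data    # appears once objects are written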
Make sure you're on a fairly recent version of Ceph before doing this, though.
Josh
On Mon, Jan 29, 2024 at 5:05 AM Janne Johansson wrote:
>
> On Mon, Jan 29, 2024 at 12:58, Michel Niyoyita wrote:
> >
> > Thank you Frank,
> >
> > All disks are HDDs. Would like to know if I can increase the num
I am running Ceph Pacific (version 16) on Ubuntu 20, deployed using
ceph-ansible.
Michel
On Mon, Jan 29, 2024 at 4:47 PM Josh Baergen
wrote:
> Make sure you're on a fairly recent version of Ceph before doing this,
> though.
>
> Josh
>
> On Mon, Jan 29, 2024 at 5:05 AM Janne Johansson
> wrot
Hey All,
We will be having a Ceph science/research/big cluster call on Wednesday,
January 31st. If anyone wants to discuss something specific, they can add
it to the pad linked below. If you have questions or comments, you can
contact me.
This is an informal open call of community members, mostl
You need to be running at least 16.2.11 on the OSDs so that you have
the fix for https://tracker.ceph.com/issues/55631.
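A quick way to verify that (just a sketch; the output will show whatever your cluster actually runs):

  # the "osd" section of the output should only list 16.2.11 or newer
  ceph versions
  # or query each daemon individually
  ceph tell osd.* version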
On Mon, Jan 29, 2024 at 8:07 AM Michel Niyoyita wrote:
>
> I am running Ceph Pacific (version 16) on Ubuntu 20, deployed using
> ceph-ansible.
>
> Michel
>
> On Mon, Jan 29,
Hi
We put a host in maintenance and had issues bringing it back.
Is there a safe way of exiting maintenance while the host is unreachable /
offline?
We would like the cluster to rebalance while we are working to get this host
back online.
Maintenance was set using:
ceph orch host maintenance enter <host>
Respond back with the "ceph versions" output.
If your sole goal is to eliminate the "not scrubbed in time" errors, you can
increase the aggressiveness of scrubbing by setting:
osd_max_scrubs = 2
The default in Pacific is 1.
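If one goes that route, a minimal sketch of applying it at runtime, assuming the centralized config database (older setups may prefer "ceph tell osd.* injectargs" instead):

  ceph config set osd osd_max_scrubs 2
  ceph config get osd osd_max_scrubs   # confirm the value stored in the config db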
If you are going to start tinkering manually with the pg_num, you will want
to tu
Setting osd_max_scrubs = 2 for HDD OSDs was a mistake I made. The result was
that PGs needed a bit more than twice as long to deep-scrub. Net effect: high
scrub load, much less user IO and, last but not least, the "not deep-scrubbed
in time" problem got worse, because (2+eps)/2 > 1: each deep-scrub took (2+eps)
times as long while only twice as many ran at once, so overall scrub throughput
actually dropped.
For spinner
You will have to look at the output of "ceph df" and make a decision to balance
"objects per PG" and "GB per PG". Increase the PG count most for the pools with
the worst of these two numbers, such that it balances out as much as possible.
If you have pools that see significantly more user IO than
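As a rough sketch of what that eyeballing could look like (the pool figures below are invented for illustration):

  ceph df                   # per-pool STORED bytes and OBJECTS
  ceph osd pool ls detail   # per-pool pg_num
  # e.g. a pool with 2 TiB stored, 5 million objects and 128 PGs works out to
  # 16 GiB and roughly 39k objects per PG; compare these ratios across pools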
Hi,
if you just want the cluster to drain this host but bring it back
online soon, I would just remove the noout flag:
ceph osd rm-noout osd1
This flag is set when entering maintenance mode (ceph osd add-noout
<host>). But it would not remove the health warning (host is in
maintenance) until th
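A compact sketch of the sequence; the host name "osd1" is taken from the example above, so adjust it to your own:

  ceph osd rm-noout osd1                 # let data rebalance off the offline host
  # once the host is actually reachable again:
  ceph orch host maintenance exit osd1   # clears the maintenance state and warning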
Details of this release are summarized here:
https://tracker.ceph.com/issues/64151#note-1
Seeking approvals/reviews for:
rados - Radek, Laura, Travis, Ernesto, Adam King
rgw - Casey
fs - Venky
rbd - Ilya
krbd - in progress
upgrade/nautilus-x (pacific) - Casey PTL (regweed tests failed)
upgrade/
Hi
When I deployed my cluster I didn't notice that on two of my servers the private
network was not working (wrong VLAN). Now it's working, but how can I check
that it's indeed working (currently I don't have data)?
Regards
--
Albert SHIH 🦫 🐸
France
Heure locale/Local time:
Mon, Jan 29, 2024 22:36:01
On 29/01/2024 at 22:43:46 +0100, Albert Shih wrote:
> Hi
>
> When I deployed my cluster I didn't notice that on two of my servers the private
> network was not working (wrong VLAN). Now it's working, but how can I check
> that it's indeed working (currently I don't have data)?
I mean...ceph going to use
Hello Albert,
this should return the sockets used on the cluster network:
ceph report | jq '.osdmap.osds[] | .cluster_addrs.addrvec[] | .addr'
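To go one step further (a sketch of mine, assuming IPv4 addresses in the "ip:port" form that the report shows), you could extract the IPs and ping them over the cluster network:

  ceph report | jq -r '.osdmap.osds[] | .cluster_addrs.addrvec[] | .addr' \
    | cut -d: -f1 | sort -u | xargs -n1 ping -c1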
Best regards,
David CASIER