> I tried doing some fio test on local disk(NVME) and ceph rbd. Why ceph is
> having low IO whereas it’s also on all NVME.
> What to tune to reach equal amount of IO?
>
> root@node01:~/fio-cdm# python3 fio-cdm ./
> tests: 5, size: 1.0GiB, target: /root/fio-cdm 6.3GiB/64.4GiB
You should really mak
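A minimal sketch of the kind of apples-to-apples comparison I would run, assuming hypothetical mount points for the local NVMe filesystem and an RBD-backed mount:

  # same fio job against both targets; paths and sizes are illustrative only
  for dir in /mnt/local-nvme /mnt/rbd; do
    fio --name=seq1m --directory="$dir" --rw=write --bs=1M --size=1G \
        --ioengine=libaio --direct=1 --iodepth=8 --numjobs=1 --group_reporting
    fio --name=rand4k --directory="$dir" --rw=randwrite --bs=4k --size=1G \
        --ioengine=libaio --direct=1 --iodepth=1 --numjobs=1 --group_reporting
  done

The queue-depth-1 4k case is where RBD will always trail local NVMe, since every write pays a network round trip plus replication; higher iodepth and numjobs narrow the gap.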
On Wed, 23 Jul 2025 at 08:24, Michel Jouvin wrote:
> I'd say yes OS and Ceph upgrades are two very separated operations. Thus I
> personally advise (it's what I'm doing) not to do them at the same time. It
> makes it easier to identify the culprit in case of problems. I generally
> upgrade the OS befo
On Sun, 29 Jun 2025 at 17:22, Steven Vacaroaia wrote:
> Hi,
>
> I just built a new Ceph Squid cluster with 7 nodes.
> Since this is brand new, there is no actual data on it except a few test
> files in the S3 data.bucket
>
> Why is "ceph -s" reporting 8 TB of used capacity ?
>
Because each OSD wil
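Not part of the quoted reply, but a quick way to see where that reported usage comes from on an empty cluster:

  ceph df detail      # per-pool STORED vs. USED (raw, after replication/EC)
  ceph osd df tree    # per-OSD raw use, which includes the BlueStore/DB overhead each OSD reserves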
> If anything strictly below 4GB is completely unsupported and expected to
> go into a thrashing tailspin, perhaps that doc should be updated to
> state that.
> > Angrily writing that a complex, mature, FREE system is “broken” because it
> > doesn’t perform miracles when abused is folly, like exp
On Tue, 10 Jun 2025 at 19:39, Gregory Farnum wrote:
>
> I'm sure at some scale on some hardware it is possible to run into
> bottlenecks, but no reported issues with scaling come to mind.
>
> CephX keys are durably stored in the monitor's RocksDB instance, which
> it uses to store all of its data.
On Tue, 10 Jun 2025 at 18:59, Michel Jouvin wrote:
> a little bit surprised that the osd_mclock_capacity_iops_hdd computed
> for each OSD is so different (basically a x2 between the lowest and
> highest values).
> Also, the documentation explains that you can define a value that you
> measured an
> On Tue, 10 Jun 2025 at 13:27, Mahdi Noorbala wrote:
> > Where? On the netplan config on all ceph nodes (mon, mgr, osd)
>
> If this network is listed in the public/private network config
> settings for ceph, then I would make sure it gets added there too and
> not just on the interface config for
it does, but as long as
you do one daemon at a time, you would be able to handle even that
situation. And you would know after the first test if that is so.
(grab the list of established TCP connections with "ss -ntp", then change
netplan, and check again)
> On Tue, Jun 10, 2025, 14:38 Janne Jo
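A short sketch of the checks implied above (nothing here is specific to your cluster):

  # what the cluster currently thinks its networks are
  ceph config get mon public_network
  ceph config get mon cluster_network
  # snapshot established ceph sessions before the netplan change, compare afterwards
  ss -ntp | grep ceph > /tmp/ceph-sessions.before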
On Tue, 10 Jun 2025 at 12:49, Mahdi Noorbala wrote:
> Hello everybody
> We have a Ceph cluster in production. It's used by OpenStack for RBD images.
> Now we want to change netmask from /22 to /21.
Where? On the clients, on the cluster or the ceph.conf network configs or all?
> What do you think?
> > Pathological example:
> >
> > rbd rm $image (successful deletion)
> > ceph pause immediately after that
> > Do the recovery procedure noted above
> >
> > How likely is it that we would be able to recover the data?
>
> Like most filesystems, pretty likely at a certain granularity. In the above
On Tue, 27 May 2025 at 21:14, Danish Khan wrote:
>
> Dear Team,
>
> We have a few ceph-ansible clusters which are still running on Ubuntu 18 and
> 20. We want to upgrade the OS to Ubuntu 22. But my Ceph clusters are still
> running on octopus version.
>
> Is there any document which can suggest if I can
On Fri, 23 May 2025 at 04:23, Michel Jouvin wrote:
> One good reason to use cephadm and the container based deployment!
I would say that the Reef release has been an interesting ride, you
hold off on the 18.2.0 and early releases to be safe from the worst
burn-in bugs and by the time you get your
> On a related note, we’re also exploring immutable Veeam backups to Ceph
> S3-compatible object storage. We came across this unofficial Veeam
> object storage compatibility list:
> https://forums.veeam.com/object-storage-as-backup-target-f52/unoffizial-compatibility-list-for-veeam-cloud-tier-t5695
On Wed, 7 May 2025 at 10:59, Torkil Svensgaard wrote:
> We are looking at a cluster split between two DCs with the DCs as
> failure domains.
>
> Am I right in assuming that any recovery or backfill taking place should
> largely happen inside each DC and not between them? Or can no such
> assumption
On Thu, 1 May 2025 at 09:12, gagan tiwari wrote:
>
> HI Janne,
> Thanks for the explanation.
>
> So, using all 10X15T disks on 7 OSD nodes, the number of PGs will be:
>
> ( 10 X 7 X 100 ) / 6 = 1166.666, rounded up to the next power of 2: 2048.
>
> So, I will need to set 2048 placement groups. With
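A tiny shell sketch of that arithmetic, for anyone wanting to redo it with their own numbers:

  osds=$((10 * 7))                # 70 OSDs in total
  target=$((osds * 100 / 6))      # ~100 PGs per OSD, divided by k+m = 6
  pg=1; while [ "$pg" -lt "$target" ]; do pg=$((pg * 2)); done
  echo "$target -> $pg"           # 1166 -> 2048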
> Hi Guys,
> I have 7 OSD nodes with 10X15T NVME disk on each OSD node.
>
> To start with, I want to use only 8X15T disks on each OSD node and keep
> 2X15T disks spare in case of any disk failure and recovery event.
>
> I am going to use the 4X2 EC CephFS data pool to store data.
>
>
> Hi Janne,
> Thanks for your advice.
>
> So, you mean with K=4 M=2 EC, we need 8 OSD nodes to have better
> protection
As always, it is a tradeoff in cost, speed, availability, storage size
and so on.
What you need if the data is important is for the cluster to be able
> So, I need to know what the data safety level will be with the above set-up
> ( i.e. 6 OSDs with 4X2 EC ). How many OSD ( disk ) and node failures can the
> above set-up withstand?
With EC N+2 you can lose one drive or host, and the cluster will go on
in degraded mode until it has been able to
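For reference, a hedged sketch of how such a pool is typically created; the profile and pool names are made up:

  ceph osd erasure-code-profile set ec42 k=4 m=2 crush-failure-domain=host
  ceph osd pool create cephfs_data 2048 2048 erasure ec42
  ceph osd pool set cephfs_data allow_ec_overwrites true   # needed for a CephFS data pool on EC
  ceph osd pool set cephfs_data min_size 5                 # k+1, so IO stops before redundancy hits zero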
On Fri, 11 Apr 2025 at 09:59, Anthony D'Atri wrote:
>
> Filestore IIRC used partitions, with cute hex GPT types for various states
> and roles. Udev activation was sometimes problematic, and LVM tags are more
> flexible and reliable than the prior approach. There no doubt is more to it
> but
Can the client talk to the MDS on the port it listens on?
On Fri, 11 Apr 2025 at 08:59, Iban Cabrillo wrote:
>
>
>
> Hi guys Good morning,
>
>
> Since I performed the update to Quincy, I've noticed a problem that wasn't
> present with Octopus. Currently, our Ceph cluster exports a filesystem to
> >> killing processes. Does someone have ideas why the daemons seem
> >> to completely ignore the set memory limits?
>
> Remember that osd_memory_target is a TARGET not a LIMIT. Upstream docs
> suggest an aggregate 20% headroom, personally I like 100% headroom, but
> that’s informed by some pri
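Not from the quoted mail, but a sketch of how I'd set the target with that headroom in mind; the 8 GiB figure is just an example:

  ceph config set osd osd_memory_target 8589934592   # 8 GiB target per OSD
  ceph config get osd osd_memory_target
  # then size host RAM with 20-100% headroom on top of (target x OSDs per host)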
> > After upgrading our Ceph cluster from 17.2.7 to 17.2.8 using `cephadm`, all
> > OSDs are reported as unreachable with the following error:
> >
> > HEALTH_ERR 32 osds(s) are not reachable
> > [ERR] OSD_UNREACHABLE: 32 osds(s) are not reachable
> > osd.0's public address is not in '172.20.180
> Hello everyone,
>
> After upgrading our Ceph cluster from 17.2.7 to 17.2.8 using `cephadm`, all
> OSDs are reported as unreachable with the following error:
>
> ```
> HEALTH_ERR 32 osds(s) are not reachable
> [ERR] OSD_UNREACHABLE: 32 osds(s) are not reachable
> osd.0's public address is not
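A hedged sketch of how to compare what the OSDs registered against what is configured; the subnet is a placeholder:

  ceph config get osd public_network     # what the cluster thinks the public network is
  ceph osd find 0                        # the address osd.0 actually registered
  # if they no longer match, set the network the OSDs really use:
  ceph config set global public_network <your-public-subnet>/<prefix>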
> The safest approach would be to use the upmap-remapped.py tool developed by
> Dan at CERN. See [1] for details.
>
> The idea is to leverage the upmap load balancer to progressively migrate the
> data to the new servers, minimizing performance impact on the cluster and
> clients. I like to crea
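Roughly the workflow being described, assuming the upmap-remapped.py script from CERN's ceph-scripts repository; treat this as a sketch, not a recipe:

  ceph osd set norebalance          # keep backfill from starting while the new hosts go in
  # ... add the new OSDs/hosts ...
  ./upmap-remapped.py | sh          # pin currently-remapped PGs where they are via pg-upmap-items
  ceph osd unset norebalance
  ceph balancer mode upmap
  ceph balancer on                  # the balancer then removes the upmaps gradually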
On Mon, 17 Mar 2025 at 14:48, Joshua Baergen wrote:
> Hey Brian,
>
> The setting you're looking for is bluefs_buffered_io. This is very
> much a YMMV setting, so it's best to test with both modes, but I
> usually recommend turning it off for all but omap-intensive workloads
> (e.g. RGW index) due
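For reference, a minimal sketch of checking and flipping that option (test both modes, as noted):

  ceph config get osd bluefs_buffered_io
  ceph config set osd bluefs_buffered_io false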
> >> > > I need to take 3 of them, 0, 10 and 30, out, is it safe to run out on
> >> > > all 3
> >> > > OSDs at the same time with "ceph osd out 0 10 20" or do I need to take
> >> > > one
> >> > > after the other out?
> >> >
> >> > It is not safe. [...]
> >> > What you can do is lower the weight o
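A hedged sketch of the weight-lowering approach, with made-up weights; do one step at a time and wait for backfill to finish in between:

  ceph osd crush reweight osd.0 2.0   # step down from the current crush weight in stages
  # ... wait for HEALTH_OK, repeat with a lower weight, then finally:
  ceph osd out 0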
> >> I'll leave it to the devs to discuss this one.
> >
> > It would be nice if the defaults for newly created clusters also came
> > with the global reclaim id thing disabled, so we didn't have to
> > manually enable msgrv2 (and disable v1 possibly as per this thread)
> > and also disable the recl
> On 13-03-2025 16:08, Frédéric Nass wrote:
> > If ceph-mon respected ms_bind_msgr1 = false, then one could add
> > --ms-bind-msgr1=false as extra_entrypoint_args in the mon service_type [1],
> > so as to have any ceph-mon daemons deployed or redeployed using msgr v2
> > exclusively.
> > Unfortu
On Wed, 12 Mar 2025 at 17:12, Alexander Patrakov wrote:
> > >
> > > I need to take 3 of them, 0, 10 and 30, out, is it safe to run out on all
> > > 3
> > > OSDs at the same time with "ceph osd out 0 10 20" or do I need to take one
> > > after the other out?
> >
> > It is not safe. [...]
> > What
On Wed, 12 Mar 2025 at 11:41, Kai Stian Olstad wrote:
>
> Say we have 10 host with 10 OSDs and the failure domain is host.
>
> host0 osd 0 to 9
> host1 osd 10 to 19
> host2 osd 20 to 29
> host3 osd 30 to 39
> host4 osd 40 to 49
> host5 osd 50 to 59
> host6 osd 60 to 69
> host7 osd 70 to 79
> host8
On Fri, 7 Mar 2025 at 17:05, Nicola Mori wrote:
>
> Dear Ceph users,
>
> after upgrading from 19.2.0 to 19.2.1 (via cephadm) my cluster started
> showing some warnings never seen before:
>
> 29 OSD(s) experiencing slow operations in BlueStore
> 13 OSD(s) experiencing stalled read in db
> Once stopped:
>
> ceph osd crush remove osd.2
> ceph auth del osd.2
> ceph osd rm osd.2
While I can't help you with the is-it-gone-or-not part of your
journey, the three commands above are correct, but they can also be done in one
single step with "ceph osd purge osd.2". So just adding this in case anyone
else i
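For completeness, a one-liner sketch of that single step:

  ceph osd purge osd.2 --yes-i-really-mean-it   # crush remove + auth del + osd rm in one go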
On Wed, 5 Mar 2025 at 01:59, 小小 <13071007...@163.com> wrote:
> Hi,
>I'm facing a critical issue with my Ceph cluster. It has become unable to
> read/write data properly and cannot recover normally. What steps should I
> take to resolve this?
Did you do anything to the cluster, or did anythin
On Thu, 27 Feb 2025 at 18:48, quag...@bol.com.br wrote:
>
> Hello,
> I recently installed a new cluster.
> After the first node was working, I started transferring the files I
> needed. As I was in some urgency to do rsync, I enabled size=1 for the CephFS
> data pool.
> After a f
On Sun, 23 Feb 2025 at 03:40, Alex Gorbachev wrote:
>
> In addition, you can review what is locking the drive, if anything, with
> lsof (lsof /dev/...). If you boot a live CD of any distro, you will
> absolutely wipe the OSD drive with wipefs -a. If you had used a WAL/DB
> SSD, you need to wipe
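A minimal sketch of the wipe, with a placeholder device name; double-check the device before running anything like this:

  wipefs -a /dev/sdX
  ceph-volume lvm zap /dev/sdX --destroy   # also tears down the LVM pieces (and a separate WAL/DB LV if you point it at one)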
On Thu, 13 Feb 2025 at 12:54, Work Ceph wrote:
> Thanks for the feedback!
> Yes, the HEALTH_OK is there.
> The OSD status show all of them as "exists,up".
>
> The interesting part is that "ceph df" shows the correct values in the "RAW
> STORAGE" section. However, for the SSD pool I have, it shows
> but I also can mount the disk directly with
>
> /etc/ceph/rbdmap
>
> at the boot the disk will appear somewhere in /dev/sd* on the kvm server
> and then use it in kvm as a «normal» disk.
> Don't know if there is any difference or if it's just a preference.
If you mount with KRBD and the ceph cluster a
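For reference, an /etc/ceph/rbdmap entry looks roughly like this (pool, image and keyring names are made up):

  # poolname/imagename  map-options
  rbd/vmdisk1  id=admin,keyring=/etc/ceph/ceph.client.admin.keyring

The mapped device then normally shows up as /dev/rbd0 or under /dev/rbd/<pool>/<image> rather than /dev/sd*.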
On Fri, 7 Feb 2025 at 16:05, Alan Murrell wrote:
> Hello,
> I Googled for this answer but could not find it. I am running Reef 18.2.2.
> I need to find out if krbd is enabled on my cluster but I cannot figure out
> what command to check that. I have checked the Pool settings in the GUI but
>
> We in the Ceph Steering Committee are discussing when we want to
> target the Tentacle release for, as we find ourselves in an unusual
> scheduling situation:
> * Historically, we have targeted our major release in early Spring. I
> believe this was initially aligned to the Ubuntu LTS release. (W
> Hi Dev,
>
> You can't. There's no 'ceph osd erasure-code-profile modify' command and the
> 'ceph osd erasure-code-profile set' will fail on output below when run on an
> existing profile. See below:
I think you are answering the wrong question.
You are right that one cannot change the EC prof
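Since a profile cannot be edited in place, the usual route is a new profile plus a new pool; a hedged sketch with made-up names:

  ceph osd erasure-code-profile set ec83 k=8 m=3 crush-failure-domain=host
  ceph osd erasure-code-profile get ec83
  ceph osd pool create mypool.ec83 128 128 erasure ec83
  # then migrate the data to the new pool at the application level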
> I read a Ceph Benchmark paper (
> https://www.proxmox.com/en/downloads/proxmox-virtual-environment/documentation/proxmox-ve-ceph-benchmark-2023-12)
> where they demonstrated, among other things, the performance of using a
> Full-Mesh Network Schema for Ceph on a three node cluster.
>
> Is this me
> We need to increase the number of PG’s for a pool on a Pacific cluster.
> I know we can easily do that using:
>
> ceph osd pool set $name pg_num $amount
>
> However, the documentation [1] states (both for Pacific and for Squid)
> that you need to upgrade pgp_num as well. But, I seem to remember t
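A minimal sketch, with a made-up pool name:

  ceph osd pool set mypool pg_num 256
  ceph osd pool get mypool pgp_num   # on Nautilus and later this follows pg_num on its own, in steps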
On Thu, 16 Jan 2025 at 00:08, Bruno Gomes Pessanha <bruno.pessa...@gmail.com> wrote:
> Hi everyone. Yes. All the tips definitely helped! Now I have more free
> space in the pools, the number of misplaced PG's decreased a lot and lower
> std deviation of the usage of OSD's. The storage looks way
> > > We have a cluster running with 6 Ubuntu 20.04 servers and we would like
> > > to add another host but with Ubuntu 22.04, will we have any problems?
> > > We would like to add new HOST with Ubuntu 22.04 and deactivate the Ubuntu
> > > 20.04 ones, our idea would be to update the hosts from Ub
> Ceph version 18.2.4 reef (cephadm)
> Hello,
> We have a cluster running with 6 Ubuntu 20.04 servers and we would like to
> add another host but with Ubuntu 22.04, will we have any problems?
> We would like to add new HOST with Ubuntu 22.04 and deactivate the Ubuntu
> 20.04 ones, our idea would
> You can use pg-remapper (https://github.com/digitalocean/pgremapper) or
> similar tools to cancel the remapping; up-map entries will be created
> that reflect the current state of the cluster. After all currently
> running backfills are finished your mons should not be blocked anymore.
> I would
> > To be honest with 3:8 we could protect the cluster more from osd flapping.
> > Let's say you have less chance to have 8 down pgs on 8 separate nodes then
> > with 8:3 only 3pgs on 3 nodes.
> > Of course this comes with the cost on storage used.
> > Is there any disadvantage performance wise on
I have clusters that have been upgraded into "upmap"-capable releases,
but in those cases, it was never in upmap mode, since these clusters
would also have jewel-clients as lowest possible, so if you tried to
enable balancer in upmap mode it would tell me to first bump clients
to luminous at least,
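The usual sequence for getting there, sketched with no cluster-specific values:

  ceph features                                      # check what the connected clients actually report
  ceph osd set-require-min-compat-client luminous
  ceph balancer mode upmap
  ceph balancer on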
> Dear Ceph users,
> I'm struggling to understand what the correct way is to remove a
> working disk and replace it (e.g. for a disk upgrade) while keeping the
> same OSD ID.
There may or may not be good guides for reaching this goal, but as a
long time ceph user I can only say that you should no
> we are already running the "default" rgw pool with some users.
>
> Data is stored in pool:
> pool 9 'default.rgw.buckets.data' erasure profile ec-32-profile size 5
> min_size 4 crush_rule 1 object_hash rjenkins pg_num 512 pgp_num 512
> autoscale_mode on last_change 309346 lfor 0/127784/214408 fla
I see the same on a newly deployed 17.2.8 cluster.
all empty perf values.
On Thu, 28 Nov 2024 at 23:45, Marc wrote:
>
>
>
> My ceph osd perf are all 0, do I need to enable module for this?
> osd_perf_query? Where should I find this in manuals? Or do I just need to
> wait?
>
>
> [@ target]# cep
On Thu, 21 Nov 2024 at 19:18, Andre Tann wrote:
> > This post seem to show that, except they have their root named "nvme"
> > and they split on rack and not dc, but that is not important.
> >
> > https://unix.stackexchange.com/questions/781250/ceph-crush-rules-explanation-for-multiroom-racks-setu
On Thu, 21 Nov 2024 at 09:45, Andre Tann wrote:
> Hi Frank,
> thanks a lot for the hint, and I have read the documentation about this.
> What is not clear to me is this:
>
> == snip
> The first category of these failures that we will discuss involves
> inconsistent networks -- if there is a netsp
> What issues should I expect if I take an OSD (15TB) out one at a time,
> encrypt it, and put it back into the cluster? I would have a long period
> where some OSDs are encrypted and others are not. How dangerous is this?
I don't think it would be more dangerous than if you were redoing OSDs
for
> Sorry, sent too early. So here we go again:
> My setup looks like this:
>
>DC1
>node01
>node02
>node03
>node04
>node05
>DC2
>node06
>node07
>node08
>node09
>no
On Tue, 19 Nov 2024 at 03:15, Christoph Pleger wrote:
> Hello,
> Is it possible to have something like RAID0 with Ceph?
> That is, when the cluster configuration file contains
>
> osd pool default size = 4
This means all data is replicated 4 times, in your case, one piece per
OSD, which also in y
> I have exactly 16 PGs with these conditions. Is there anything I can do? I
> have tried to initiate the scrubbing, both deep and normal, but they remain
> the same.
>
> HEALTH_WARN: 16 pgs not deep-scrubbed in time
> pg 8.14 not deep-scrubbed since 2024-09-27T19:42:47.463766+0700
> pg 7.1b not deep-
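A hedged sketch of nudging those PGs along, using one of the PG ids from the list above:

  ceph pg deep-scrub 8.14                  # kick a single PG manually
  ceph config set osd osd_max_scrubs 2     # allow more concurrent scrubs if the OSDs can take the load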
On Tue, 15 Oct 2024 at 19:13, Dave Hall wrote:
> I'm seeing the following in the Dashboard -> Configuration panel
> for osd_memory_target:
>
> Default:
> 4294967296
>
> Current Values:
> osd: 9797659437,
> osd: 10408081664,
> osd: 11381160192,
> osd: 22260320563
>
> I am confused why I have 4 cur
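A sketch of where those differing per-OSD values usually come from on a cephadm cluster:

  ceph config get osd osd_memory_target_autotune    # cephadm sets per-daemon targets when this is on
  ceph config dump | grep osd_memory_target         # shows at which level each value was set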
On Wed, 9 Oct 2024 at 20:48, Frank Schilder wrote:
> The PG count per OSD is a striking exception. It's just a number (well a range
> with 100 recommended and 200 as a max:
> https://docs.ceph.com/en/latest/rados/operations/pgcalc/#keyDL). It just is.
> And this doesn't make any sense unless th
On Wed, 9 Oct 2024 at 11:34, Frank Schilder wrote:
> Hi Janne,
> thanks for looking at this. I'm afraid I have to flag this as rumor as well,
> you are basically stating it yourself:
> It is a good idea to collect such hypotheses, assuming that a dev drops by
> and can comment on that with backg
> Thanks for chiming in. Unfortunately, it doesn't really help answering my
> questions either.
>
> Concurrency: A system like ceph that hashes data into PGs translates any IO
> into random IO anyways. So it's irrelevant for spinners, they have to seek
> anyways and the degree of parallelism doe
On Mon, 23 Sep 2024 at 16:23, Stefan Kooman wrote:
>
> On 23-09-2024 16:04, Dave Hall wrote:
> > Thank you to everybody who has responded to my questions.
> >
> > At this point I think I am starting to understand. However, I am still
> > trying to understand the potential for data loss.
> >
> > I
> We have a multisite Ceph configuration, with http (not https) sync endpoints.
> Are all sync traffic in plain text?
For S3 v4 auth, there are things that "obfuscate" the login auth, but
they might not be called real crypto in that sense, so if you decide to
send things in the clear, expect it to be
The pgremapper (and the Python one) will allow you to mark all the PGs
that a new disk gets, i.e. the empty misplaced PGs, as correct where they
currently are. This means that after you run one of the remappers, the
upmap entries will tell the cluster to stay as it is even though new empty
OSDs have arrived w
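A hedged sketch of that flow with pgremapper, assuming its cancel-backfill subcommand:

  ceph osd set norebalance
  # ... add the new OSDs ...
  pgremapper cancel-backfill --yes   # writes upmap entries that keep PGs where they currently are
  ceph osd unset norebalance
  ceph balancer on                   # the balancer then unwinds the upmaps at its own pace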
On Sat, 31 Aug 2024 at 15:42, Tim Holloway wrote:
>
> I would greatly like to know what the rationale is for avoiding
> containers.
>
> Especially in large shops. From what I can tell, you need to use the
> containerized Ceph if you want to run multiple Ceph filesystems on a
> single host. The leg
On Fri, 30 Aug 2024 at 20:43, Milan Kupcevic wrote:
>
> On 8/30/24 12:38, Tim Holloway wrote:
> > I believe that the original Ansible installation process is deprecated.
>
> This would be a bad news as I repeatedly hear from admins running large
> storage deployments that they prefer to stay away
On Fri, 23 Aug 2024 at 18:30, Phong Tran Thanh wrote:
>
> This is my first time setting up a Ceph cluster for OpenStack. Will
> running both the Mon service and the OSD service on the same node affect
> maintenance or upgrades in the future? While running Mon, MDS, and OSD
> services on the same
On Thu, 15 Aug 2024 at 14:35, Alfredo Rezinovsky wrote:
>
> I think it is a very bad idea to name a release after the most
> popular HTTP cache.
> It will make googling difficult.
Just enter
"ceph" squid < other terms you might want >
and google will make sure the word "ceph" is present, thi
> We made a mistake when we moved the servers physically so while the
> replica 3 is intact the crush tree is not accurate.
>
> If we just remedy the situation with "ceph osd crush move ceph-flashX
> datacenter=Y" we will just end up with a lot of misplaced data and some
> churn, right? Or will the
> Note the difference of convention in ceph command presentation. In
> https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-mon/#understanding-mon-status,
> mon.X uses X to represent the portion of the command to be replaced by the
> operator with a specific value. However, that ma
On Thu, 16 May 2024 at 07:47, Jayanth Reddy wrote:
>
> Hello Community,
> In addition, we've 3+ Gbps links and the average object size is 200
> kilobytes. So the utilization is about 300 Mbps to ~ 1.8 Gbps and not more
> than that.
> We seem to saturate the link when the secondary zone fetches big
On Tue, 23 Apr 2024 at 11:32, Frédéric Nass wrote:
> Ceph is strongly consistent. Either you read/write objects/blocs/files with
> an insured strong consistency OR you don't. Worst thing you can expect from
> Ceph, as long as it's been properly designed, configured and operated is a
> temporary
On Mon, 15 Apr 2024 at 13:09, Mitsumasa KONDO wrote:
> Hi Menguy-san,
>
> Thank you for your reply. Users who use large IO with tiny volumes are a
> nuisance to cloud providers.
>
> I confirmed my ceph cluster with 40 SSDs. Each OSD on 1TB SSD has about 50
> placement groups in my cluster. Therefo
On Thu, 11 Apr 2024 at 15:55, wrote:
>
> I have mapped port 32505 to 23860, however when connecting via s3cmd it fails
> with "ERROR: S3 Temporary Error: Request failed for: /. Please try again
> later.".
> Has anyone encountered the same issue?
>
> [root@vm-04 ~]# s3cmd ls
> WARNING: Retrying failed r
On Tue, 9 Apr 2024 at 10:39, Eugen Block wrote:
> I'm trying to estimate the possible impact when large PGs are
> splitted. Here's one example of such a PG:
>
> PG_STAT  OBJECTS  BYTES          OMAP_BYTES*  OMAP_KEYS*  LOG  DISK_LOG  UP
> 86.3ff   277708   4144030984090  0
On Thu, 4 Apr 2024 at 06:11, Zakhar Kirpichenko wrote:
> Any comments regarding `osd noin`, please?
> >
> > I'm adding a few OSDs to an existing cluster, the cluster is running with
> > `osd noout,noin`:
> >
> > cluster:
> > id: 3f50555a-ae2a-11eb-a2fc-ffde44714d86
> > health: HEALT
> Hi every one,
> I'm new to ceph and I'm still studying it.
> In my company we decided to test ceph for possible further implementations.
>
> Although I understood its capabilities I'm still doubtful about how to
> set up replication.
Default settings in ceph will give you replication = 3, which i
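To see what a given pool ended up with (pool name is a placeholder):

  ceph osd pool get <pool> size        # replica count, 3 by default
  ceph osd pool get <pool> min_size    # replicas required for IO, 2 by default for size=3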
> Sure! I think Wido just did it all unofficially, but afaik we've lost
> all of those records now. I don't know if Wido still reads the mailing
> list but he might be able to chime in. There was a ton of knowledge in
> the irc channel back in the day. With slack, it feels like a lot of
> discu
> Now we are using the GetBucketInfo from the AdminOPS api -
> https://docs.ceph.com/en/quincy/radosgw/adminops/#id44 with the stats=true
> option GET /admin/bucket?stats=1 which returns all buckets with the number of
> objects and size we then parse. We also use it for the tracking of newly
>
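The radosgw-admin equivalents, sketched with a placeholder bucket name:

  radosgw-admin bucket list                      # all buckets
  radosgw-admin bucket stats --bucket=<name>     # per-bucket object count and sizes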
On Mon, 4 Mar 2024 at 11:30, Ml Ml wrote:
>
> Hello,
>
> i wonder why my autobalancer is not working here:
I think the short answer is "because you have so wildly varying sizes
both for drives and hosts".
If your drive sizes span from 0.5 to 9.5, there will naturally be
skewed data, and it is no
On Mon, 12 Feb 2024 at 14:12, Murilo Morais wrote:
>
> Good morning and happy holidays everyone!
>
> Guys, what would be the best strategy to increase the number of PGs in a
> POOL that is already in production?
"ceph osd pool set pg_num " and let the pool get pgp_nums increased slowly by
itself
I now concur you should increase the pg_num as a first step for this
> >>>>>> cluster. Disable the pg autoscaler for and increase the volumes pool to
> >>>>>> pg_num 256. Then likely re-asses and make the next power of 2 jump to
> >>>>>>
> I’ve heard conflicting assertions on whether the write returns once min_size
> shards have been persisted, or all of them.
I think it waits until all replicas have written the data, but from
simplistic tests with fast network and slow drives, the extra time
taken to write many copies is not linear
> If there is (planned) documentation of manual rgw bootstrapping,
> it would be nice to also have the names of the required pools listed there.
It will depend on several things, like if you enable swift users, I
think they get a pool of their own, so I guess one would need to look
in the so
On Mon, 29 Jan 2024 at 12:58, Michel Niyoyita wrote:
>
> Thank you Frank ,
>
> All disks are HDDs. I would like to know if I can increase the number of PGs
> live in production without a negative impact on the cluster. If yes, which
> commands should I use?
Yes. "ceph osd pool set pg_num "
where the nu
On Mon, 29 Jan 2024 at 10:38, Eugen Block wrote:
>
> Ah, you probably have dedicated RGW servers, right?
They are VMs, but yes.
--
May the most significant bit of your life be positive.
On Mon, 29 Jan 2024 at 09:35, Eugen Block wrote:
But your (cephadm managed) cluster will
> complain about "stray daemons". There doesn't seem to be a way to
> deploy rgw daemons manually with the cephadm tool so it wouldn't be
> stray. Is there a specific reason not to use the orchestrator for rg
On Mon, 29 Jan 2024 at 08:11, Jan Kasprzak wrote:
>
> Hi all,
>
> how can radosgw be deployed manually? For Ceph cluster deployment,
> there is still (fortunately!) a documented method which works flawlessly
> even in Reef:
>
> https://docs.ceph.com/en/latest/install/manual-deployment/#mon
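Roughly the manual steps I would expect for the rgw part, with the instance name and port made up; treat this as a sketch, not the documented procedure:

  ceph auth get-or-create client.rgw.gw1 mon 'allow rw' osd 'allow rwx' \
      -o /etc/ceph/ceph.client.rgw.gw1.keyring
  radosgw -n client.rgw.gw1 --rgw-frontends "beast port=8080"
  # rgw creates its pools (.rgw.root, default.rgw.*) on first start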
On Sun, 28 Jan 2024 at 23:02, Adrian Sevcenco wrote:
>
> >> is it wrong to think of PGs like a kind of object bucket (S3 like)?
> >
> > Mostly, yes.
> so .. in a PG there are no "file data" but pieces of "file data"?
> so a 100 GB file with 2x replication will be placed in more than 2 PGs?
> Is ther
On Thu, 25 Jan 2024 at 17:47, Robert Sander wrote:
> > forth), so this is why "ceph df" will tell you a pool has X free
> > space, where X is "smallest free space on the OSDs on which this pool
> > lies, times the number of OSDs". Given the pseudorandom placement of
> > objects to PGs, there is n
On Thu, 25 Jan 2024 at 11:57, Henry lol wrote:
>
> It's reasonable enough.
> actually, I expected the client to have just? thousands of
> "PG-to-OSDs" mappings.
Yes, but filename to PG is done with a pseudorandom algo.
> Nevertheless, it’s so heavy that the client calculates location on
> deman
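You can see the result of that calculation for any object name without touching data (pool and object name are placeholders):

  ceph osd map <pool> <objectname>   # prints the PG id plus the up/acting OSD set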
On Thu, 25 Jan 2024 at 03:05, Henry lol wrote:
>
> Do you mean object location (osds) is initially calculated only using its
> name and crushmap,
> and then the result is reprocessed with the map of the PGs?
>
> and I'm still skeptical about computation on the client-side.
> is it possible to obt
On Wed, 10 Jan 2024 at 19:20, huxia...@horebdata.cn wrote:
> Dear Ceph folks,
>
> I am responsible for two Ceph clusters, running Nautilus 14.2.22,
> one with replication 3, and the other with EC 4+2. After around 400 days
> runing quietly and smoothly, recently the two clusters occured
On Tue, 26 Dec 2023 at 08:45, Phong Tran Thanh wrote:
>
> Hi community,
>
> I am running Ceph with RBD block storage on 6 nodes, erasure code 4+2 with
> a pool min_size of 4.
>
> When three OSDs are down and a PG is in state down, some pools can't write
> data; suppose the three OSDs can't start and the PG is stuc
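A couple of read-only checks that help in that situation (pool name is a placeholder); for a 4+2 pool the usual recommendation is min_size = k+1 = 5:

  ceph osd pool get <pool> min_size
  ceph pg dump_stuck inactive   # list the PGs that are down/inactive and which OSDs they want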
On Wed, 13 Dec 2023 at 10:57, Rok Jaklič wrote:
> Hi,
>
> shouldn't the etag of a "parent" object change when "child" objects are added
> on S3?
>
> Example:
> 1. I add an object to test bucket: "example/" - size 0
> "example/" has an etag XYZ1
> 2. I add an object to test bucket: "example/test1.
>
> Based on our observation of the impact of the balancer on the
> performance of the entire cluster, we have drawn conclusions that we
> would like to discuss with you.
>
> - A newly created pool should be balanced before being handed over
> to the user. This, I believe, is quite evident.
>
On Thu, 30 Nov 2023 at 17:35, Francisco Arencibia Quesada <arencibia.franci...@gmail.com> wrote:
> Hello again guys,
>
> Can you recommend a book that explains best practices with Ceph,
> for example, is it okay to have mon, mgr, and osd in the same virtual machine,
>
OSDs can need very much RAM d
Looking up the "manual installation" parts might help, if you can't
get the container stuff going for $reasons.
On Mon, 27 Nov 2023 at 00:45, Leo28C wrote:
>
> I'm pulling my hair out trying to get a simple cluster going. I first tried
> Gluster but I have an old system that can't handle the latest v
On Fri, 24 Nov 2023 at 10:25, Frank Schilder wrote:
>
> Hi Denis,
>
> I would agree with you that a single misconfigured host should not take out
> healthy hosts under any circumstances. I'm not sure if your incident is
> actually covered by the devs comments, it is quite possible that you obser
On Fri, 24 Nov 2023 at 08:53, Nguyễn Hữu Khôi wrote:
>
> Hello.
> I have 10 nodes. My goal is to ensure that I won't lose data if 2 nodes
> fail.
Now you are mixing terms here.
There is a difference between "cluster stops" and "losing data".
If you have EC 8+2 and min_size 9, then when you stop