Yes, 'ceph osd reweight-by-xxx' uses the OSD crush weight (which
represents how much data the OSD can hold) in its calculation.
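If it helps, you can look at the crush weights and preview the effect
before changing anything; something along these lines (the threshold
value is only an example, 120 is the usual default):

  # crush weight vs. reweight per OSD
  ceph osd df tree
  # dry run: show what reweight-by-utilization would change
  ceph osd test-reweight-by-utilization 120
  # actually apply it
  ceph osd reweight-by-utilization 120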
On Mon, 29 Apr 2019 at 14:56, Igor Podlesny wrote:
>
> Say, some nodes have OSDs that are 1.5 times bigger than other nodes
> have, while the weights of all the nodes in question are almost
Sure there is:
ceph pg ls-by-osd
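For example, to list all PGs currently mapped to one OSD (the osd id is
just an example):

  ceph pg ls-by-osd osd.7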
Regards,
Eugen
Quoting Igor Podlesny:
> Or is there no direct way to accomplish that?
> What workarounds can be used then?
On Mon, 29 Apr 2019 at 15:13, Eugen Block wrote:
>
> Sure there is:
>
> ceph pg ls-by-osd
Thank you Eugen, I overlooked it somehow :)
I am planning to set up a ceph cluster and have already implemented a test
cluster, where we are going to use RBD images for data storage (9 hosts,
each host has 16 OSDs, each OSD 4 TB).
We would like to use erasure coded (EC) pools here, and so all OSDs are
bluestore. Since several projects are going to
Hi,
On 4/29/19 11:19 AM, Rainer Krienke wrote:
> I am planning to set up a ceph cluster and have already implemented a test
> cluster, where we are going to use RBD images for data storage (9 hosts,
> each host has 16 OSDs, each OSD 4 TB).
> We would like to use erasure coded (EC) pools here, and so all OSD a
On Mon, 29 Apr 2019 at 16:37, Burkhard Linke wrote:
> On 4/29/19 11:19 AM, Rainer Krienke wrote:
[...]
> > - I also thought about the different k+m settings for an EC pool, for
> > example k=4, m=2 compared to k=8 and m=2. Both settings allow for two
> > OSDs to fail without any data loss, but I as
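For reference, the two profiles being compared could be created roughly
like this (profile names and the failure domain are only examples):

  ceph osd erasure-code-profile set ec42 k=4 m=2 crush-failure-domain=host
  ceph osd erasure-code-profile set ec82 k=8 m=2 crush-failure-domain=host
  # inspect what a profile expands to
  ceph osd erasure-code-profile get ec42

Keep in mind that k=8, m=2 needs at least 10 failure domains (hosts here),
while k=4, m=2 only needs 6.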
On Mon, 29 Apr 2019 at 16:19, Rainer Krienke wrote:
[...]
> - Do I still (nautilus) need two pools for EC based RBD images, one EC
> data pool and a second replicated pool for metadata?
The answer is given at
http://docs.ceph.com/docs/nautilus/rados/operations/erasure-code/#erasure-coding-with-
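In short: the image data can live in the EC pool (with overwrites enabled),
but the image header/metadata still goes into a replicated pool. A rough
sketch, assuming an EC profile named ec42 already exists, with pool names,
pg counts and sizes picked arbitrarily:

  ceph osd pool create rbd_data 1024 1024 erasure ec42
  ceph osd pool set rbd_data allow_ec_overwrites true
  ceph osd pool create rbd_meta 64 64 replicated
  rbd pool init rbd_meta
  rbd create --size 100G --data-pool rbd_data rbd_meta/myimage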
Hi,
I need to add a more complex crush ruleset to a cluster and was trying
to script that as I'll need to do it often.
Is there any way to create these other than manually editing the crush map?
This is to create a k=4 + m=2 pool across 3 rooms, with 2 shards in each room.
The ruleset would be something like the sketch below.
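This is only a guess at the structure (rule name and id are placeholders,
and the root bucket is assumed to be 'default'), but a rule that picks 3
rooms and then 2 hosts in each room would look roughly like:

rule ec_4_2_rooms {
        id 99
        type erasure
        min_size 6
        max_size 6
        step set_chooseleaf_tries 5
        step set_choose_tries 100
        step take default
        step choose indep 3 type room
        step chooseleaf indep 2 type host
        step emit
}

As far as scripting goes, the decompile/edit/recompile cycle itself can be
automated:

  ceph osd getcrushmap -o cm.bin
  crushtool -d cm.bin -o cm.txt
  # append the rule to cm.txt, then:
  crushtool -c cm.txt -o cm-new.bin
  ceph osd setcrushmap -i cm-new.bin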
Hi list,
Woke up this morning to two PGs reporting scrub errors, in a way that I
haven't seen before.
> $ ceph versions
> {
> "mon": {
> "ceph version 13.2.5 (cbff874f9007f1869bfd3821b7e33b2a6ffd4988) mimic
> (stable)": 3
> },
> "mgr": {
> "ceph version 13.2.5 (cbff8
I think I need a second set of eyes to understand some unexpected data
movement when adding new OSDs to a cluster (Luminous 12.2.11).
Our cluster ran low on space sooner than expected, so as a stopgap I
recommissioned a couple of older storage nodes while we get new hardware
purchases under wa
On Sun, 28 Apr 2019 at 21:45, Igor Podlesny wrote:
> On Sun, 28 Apr 2019 at 16:14, Paul Emmerich wrote:
> > Use k+m for PG calculation, that value also shows up as "erasure size"
> > in ceph osd pool ls detail
>
> So does it mean that for PG calculation those 2 pools are equivalent:
>
> 1) EC(
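As a back-of-the-envelope illustration (all numbers invented here): with
100 OSDs, a target of ~100 PGs per OSD, and a 4+2 pool where each PG
occupies k+m = 6 OSDs:

  pg_num ~= 100 * 100 / 6 ~= 1667, rounded up to the next power of two: 2048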
We're happy to announce the first bug fix release of the Ceph Nautilus
release series.
We recommend all Nautilus users upgrade to this release. When upgrading
from older releases of Ceph, the general guidelines for upgrading to
Nautilus must be followed.
Notable Changes
---------------
* The default value
CephFS automatically chunks objects into 4MB objects by default. For
an EC pool, RADOS internally will further subdivide them based on the
erasure code and striping strategy, with a layout that can vary. But
by default if you have eg an 8+3 EC code, you'll end up with a bunch
of (4MB/8=)512KB objec
Glad you got it working and thanks for the logs! Looks like we've seen
this once or twice before so I added them to
https://tracker.ceph.com/issues/38724.
-Greg
On Fri, Apr 26, 2019 at 5:52 PM Elise Burke wrote:
>
> Thanks for the pointer to ceph-objectstore-tool, it turns out that removing
> an
I would add that the use of cache tiering, though still possible, is not
recommended and comes with its own challenges.
On Mon, Apr 29, 2019 at 11:49 AM Igor Podlesny wrote:
> On Mon, 29 Apr 2019 at 16:19, Rainer Krienke wrote:
> [...]
> > - Do I still (nautilus) need two pools for EC based R
Now that I dig into this, I can see in the exported crush map that the
choose_args weight_set for this bucket id is zero for the 9th member
(which I assume corresponds to the evacuated node-98).
rack even01 {
        id -10          # do not change unnecessarily
        id -14 class ssd
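(For reference, the same choose_args data can also be dumped without
decompiling the map, at least on Luminous and later; command names from
memory, so treat them as a hint:

  ceph osd crush weight-set dump
  # drop the compat weight-set so plain crush weights apply again:
  ceph osd crush weight-set rm-compat

The balancer in crush-compat mode will recreate and maintain the
weight-set if it is still enabled.)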
Is the 4MB configurable?
On Mon, Apr 29, 2019 at 4:36 PM Gregory Farnum wrote:
> CephFS automatically chunks objects into 4MB objects by default. For
> an EC pool, RADOS internally will further subdivide them based on the
> erasure code and striping strategy, with a layout that can vary. But
> b
Yes, check out the file layout options:
http://docs.ceph.com/docs/master/cephfs/file-layouts/
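The knobs are exposed as virtual xattrs; for example (path and value are
only illustrative, 16777216 = 16 MiB):

  # files created in this directory afterwards get 16 MiB objects
  setfattr -n ceph.dir.layout.object_size -v 16777216 /mnt/cephfs/somedir
  # inspect the effective layout
  getfattr -n ceph.dir.layout /mnt/cephfs/somedir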
On Mon, Apr 29, 2019 at 3:32 PM Daniel Williams wrote:
>
> Is the 4MB configurable?
>
> On Mon, Apr 29, 2019 at 4:36 PM Gregory Farnum wrote:
>>
>> CephFS automatically chunks objects into 4MB objects b