Fair point. I just tried with 12.2.1 (on pre-release Ubuntu bionic now).
It doesn't change anything: fsck doesn't repair RocksDB, BlueStore won't
mount, the OSD won't activate and the error is the same.
Is there any fix in .2 that might address this, or do you just mean that
in general there will
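For anyone following along, the fsck/repair invocations being referred to
look roughly like this; the OSD id and data path are placeholders for your
own setup, and as noted above repair does not necessarily recover a broken
RocksDB:

# stop the OSD so the tool can open the store exclusively
systemctl stop ceph-osd@0
# check the BlueStore metadata for consistency
ceph-bluestore-tool fsck --path /var/lib/ceph/osd/ceph-0
# attempt to fix what fsck reported
ceph-bluestore-tool repair --path /var/lib/ceph/osd/ceph-0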
Would you mind telling me what rados command set you use, and sharing the
output? I would like to compare it with our server as well.
On Fri, Nov 10, 2017 at 6:29 AM, Robert Stanford
wrote:
>
> In my cluster, rados bench shows about 1GB/s bandwidth. I've done some
> tuning:
>
> [osd]
> osd op thre
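For reference, a typical rados bench sequence looks roughly like this; the
pool name and runtime are placeholders, not necessarily what was run above:

# 60s of writes, keeping the objects for the read tests
rados bench -p testpool 60 write --no-cleanup
# sequential and random read phases against those objects
rados bench -p testpool 60 seq
rados bench -p testpool 60 rand
# remove the benchmark objects afterwards
rados -p testpool cleanup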
I hit the same issue as http://tracker.ceph.com/issues/3370,
but I can't find the commit 2978257c56935878f8a756c6cb169b569e99bb91.
Can someone help me?
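In case it helps, a generic way to check whether that commit exists in a
local clone of the Ceph repository (the clone path is assumed):

cd ceph && git fetch --all
# show the commit if the object is present locally
git log -1 2978257c56935878f8a756c6cb169b569e99bb91
# list any branches or tags that contain it
git branch -r --contains 2978257c56935878f8a756c6cb169b569e99bb91
git tag --contains 2978257c56935878f8a756c6cb169b569e99bb91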
Well, as stated in the other email, I think in the EC scenario you can
set size=k+m for the pgcalc tool. If you want 10+2 then in theory you
should be able to get away with 6 nodes and still survive a single node
failure, provided you can guarantee that every node always receives 2 out
of the 12 chunks - look
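One way to express that 2-chunks-per-node guarantee is a custom CRUSH rule;
the following is only a sketch (rule name and id are made up, untested):

ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
# then add something like this rule to crushmap.txt:
#   rule ec_k10m2 {
#       id 10
#       type erasure
#       min_size 12
#       max_size 12
#       step set_chooseleaf_tries 5
#       step take default
#       step choose indep 6 type host      # pick 6 hosts
#       step chooseleaf indep 2 type osd   # 2 OSDs (chunks) per host
#       step emit
#   }
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new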
I guess my questions are more centered around k+m and PG calculations.
As we started to build and test our EC pools on our infrastructure, we
were trying to figure out what our calculations needed to be, starting with
3 OSD hosts with 12 x 10 TB OSDs apiece. The nodes have the ability to
On Mon, Nov 13, 2017 at 4:57 AM, David Turner wrote:
> You cannot reduce the PG count for a pool. So there isn't anything you can
> really do for this unless you create a new FS with better PG counts and
> migrate your data into it.
>
> The problem with having more PGs than you need is in the m
I might be wrong, but from memory I think you can use
http://ceph.com/pgcalc/ and use k+m for the size
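As a worked example of that rule of thumb, using the 3 x 12 OSDs mentioned
earlier in this thread and the usual ~100 PGs per OSD target (an assumption,
not a number confirmed here):

# total PGs ~= (OSDs * 100) / size, with size = k+m for EC pools
osds=36; k=10; m=2
echo $(( osds * 100 / (k + m) ))   # 300 -> round to a power of two, 256 or 512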
On Sun, Nov 12, 2017 at 5:41 AM, Ashley Merrick wrote:
> Hello,
>
> Are you having any issues with getting the pool working or just around the
> PG num you should use?
>
> ,Ashley
>
> Get Outloo
As per: https://www.spinics.net/lists/ceph-devel/msg38686.html
Bluestore has a hard 4 GB object size limit.
On Sat, Nov 11, 2017 at 9:27 AM, Marc Roos wrote:
>
> OSDs are crashing when putting an (8 GB) file into an erasure coded pool,
> just before finishing. The same OSDs are used for replicated poo
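If the object really has to go in via a plain rados put, one hedged
workaround is to split the file below the limit first (pool and file names
are placeholders); RBD, RGW and CephFS already stripe data across many
smaller objects, so they are not hit the same way:

# split into 1 GB pieces and store each piece as its own object
split -b 1G bigfile.img bigfile.part.
for p in bigfile.part.*; do
    rados -p ecpool put "$p" "$p"
done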
The default failure domain is host, and you will need 5 (=k+m) nodes
for this config. If you have 4 nodes you can run k=3,m=1 or k=2,m=2;
otherwise you'd have to change the failure domain to OSD.
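For example, a sketch of such a profile with the failure domain dropped to
OSD (profile name, pool name and PG count are placeholders):

ceph osd erasure-code-profile set ec32osd k=3 m=2 crush-failure-domain=osd
ceph osd pool create ecpool 128 128 erasure ec32osd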
On Fri, Nov 10, 2017 at 10:52 AM, Marc Roos wrote:
>
> I added an erasure k=3,m=2 coded pool on a 3 node tes
http://tracker.ceph.com/issues/22015 - is anyone else seeing this issue?
Regards
--
Jarek
On 12.11.2017 at 17:55, Sage Weil wrote:
> On Wed, 25 Oct 2017, Sage Weil wrote:
>> On Wed, 25 Oct 2017, Stefan Priebe - Profihost AG wrote:
>>> Hello,
>>>
>>> in the luminous release notes it is stated that zstd is not supported by
>>> bluestore due to performance reasons. I'm wondering why btrfs inst
You cannot reduce the PG count for a pool. So there isn't anything you can
really do for this unless you create a new FS with better PG counts and
migrate your data into it.
The problem with having more PGs than you need is in the memory footprint
for the osd daemon. There are warning thresholds
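Those thresholds are configurable; if I remember right the Luminous-era
option is mon_max_pg_per_osd, but please check the docs for your version
before relying on the name or the example value below:

# PGS column shows how many PGs each OSD currently carries
ceph osd df
# raise the per-OSD PG limit/warning, e.g. to 400
ceph tell mon.* injectargs '--mon_max_pg_per_osd 400'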
On Wed, 25 Oct 2017, Sage Weil wrote:
> On Wed, 25 Oct 2017, Stefan Priebe - Profihost AG wrote:
> > Hello,
> >
> > in the luminous release notes it is stated that zstd is not supported by
> > bluestore due to performance reasons. I'm wondering why btrfs instead
> > states that zstd is as fast as lz4 bu
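For context, once an algorithm is available, BlueStore compression can be
enabled per pool or cluster-wide; a rough example (pool name is a
placeholder, and zstd only if your build ships it):

ceph osd pool set mypool compression_algorithm snappy
ceph osd pool set mypool compression_mode aggressive
# or cluster-wide in ceph.conf:
#   [osd]
#   bluestore compression algorithm = snappy
#   bluestore compression mode = aggressive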
I've created some Bluestore OSDs with all data (WAL, DB, and data) on
the same rotating disk. I would now like to move the WAL and DB onto an
NVMe disk. Is that possible without re-creating the OSDs?
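If it turns out the OSD has to be rebuilt for this, a rough sketch of
recreating it with the DB on NVMe could look like the following (OSD id and
device paths are placeholders; this destroys the OSD and it then backfills
from replicas):

ceph osd out 3
# wait for the data to migrate off osd.3, then:
systemctl stop ceph-osd@3
ceph osd purge 3 --yes-i-really-mean-it
# recreate with data on the spinner and the RocksDB on the NVMe partition
ceph-volume lvm create --bluestore --data /dev/sdc --block.db /dev/nvme0n1p1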
Hi David, thanks for your valuable reply. Once the backfilling for the
new OSD is complete, we will consider increasing the replica value ASAP. Is
it possible to decrease the metadata PG count? If the PG count for metadata
is set to the same value as the data count, what kind of issue
What's the output of `ceph df` to see if your PG counts are good or not?
Like everyone else has said, the space on the original osds can't be
expected to free up until the backfill from adding the new osd has finished.
You don't have anything in your cluster health to indicate that your
cluster wi
I think that more PGs help to distribute the data more evenly, but I
don't know if that is recommended with a low number of OSDs. I remember
reading somewhere in the docs a guideline for the maximum number of PGs per
OSD, but it was from a really old Ceph version, so maybe things have changed.
On 11/12/2017 12:39 PM, gj
[@c03 ~]# ceph osd status
2017-11-12 15:54:13.164823 7f478a6ad700 -1 WARNING: the following
dangerous and experimental features are enabled: bluestore
2017-11-12 15:54:13.211219 7f478a6ad700 -1 WARNING: the following
dangerous and experimental features are enabled: bluestore
no valid command fou
Hi Cassiano,
Thanks for your valuable feedback; we will wait for some time until the new
OSD sync is complete. Also, will increasing the PG count solve the issue?
In our setup the PG number for both the data and metadata pools is 250. Is
this correct for 7 OSDs with 2 replicas? Also currently st
I am also not an expert, but it looks like you have big data volumes on
few PGs. From what I've seen, the PG data is only deleted from the old
OSD when it has been completely copied to the new OSD.
So, if one PG has 100 GB, for example, the space will only be released
once it is fully copied to the new OSD
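A quick way to watch that progress (standard commands, the grep pattern is
just illustrative):

ceph -s
ceph pg dump pgs_brief | grep -i backfill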
Hi
Thanks Sebastian. If anybody can help on this issue it will be highly
appreciated.
Regards
Prabu GJ
On Sun, 12 Nov 2017 19:14:02 +0530 Sébastien VIGNERON
wrote
I’m not an expert either, so if someone on the list has some ideas on this
Hi
If anybody can help on this issue it will be highly appreciated.
Regards
Prabu GJ
On Sun, 12 Nov 2017 19:14:02 +0530 Sébastien VIGNERON
wrote
I’m not an expert either, so if someone on the list has some ideas on this
problem, don’t be sh
I’m not an expert either, so if someone on the list has some ideas on this
problem, don’t be shy, share them with us.
For now, I only have the hypothesis that the OSD space will be recovered as
soon as the recovery process is complete.
Hope everything will get back in order soon (before reaching 95%
Hi,
Have you tried to query the PG state for some stuck or undersized PGs? Maybe
some OSD daemons are not behaving right, blocking the reconstruction.
ceph pg 3.be query
ceph pg 4.d4 query
ceph pg 4.8c query
http://docs.ceph.com/docs/jewel/rados/troubleshooting/troubleshooting-pg/
Cordialement / Best regar
Hi Sebastien
Thanks for your reply. Yes, there are undersized PGs and recovery is in
progress because we added a new OSD after getting a warning that 2 OSDs are
near full. Yes, the newly added OSD is rebalancing the data.
[root@intcfs-osd6 ~]# ceph osd df
ID WEIGHT REWEIGHT SIZE USE AVAIL %USE VAR PGS
0
Hi,
Can you share:
- your placement rules: ceph osd crush rule dump
- your CEPH version: ceph versions
- your pools definitions: ceph osd pool ls detail
With these we can determine if your PGs are stuck because of a
misconfiguration or something else.
You seem to have some undersized PGs an
Hi Team,
We have a Ceph setup with 6 OSDs and we got an alert that 2 OSDs are near
full. We faced issues like slow access to Ceph from the client. So I added a
7th OSD and 2 OSDs are still showing near full (osd.0 and osd.4). I have
restarted the Ceph service on osd.0 and osd.4. Kindly check
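Until the backfill finishes, a couple of stop-gaps people sometimes use for
near-full OSDs, sketched here with example values only:

# move some PGs off the most utilised OSDs
ceph osd reweight-by-utilization 110
# or raise the nearfull warning threshold a little (keep it below full_ratio)
ceph osd set-nearfull-ratio 0.88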