On 5/15/19 1:49 PM, Kevin Flöh wrote:
Since we have 3+1 EC, I didn't try this before. But when I run the command
you suggested, I get the following error:
ceph osd pool set ec31 min_size 2
Error EINVAL: pool min_size must be between 3 and 4
What is your current min_size? `ceph osd pool get ec31 min_size`
ceph osd pool get ec31 min_size
min_size: 3
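For reference, a rough sketch of why the allowed range is 3 to 4, assuming the ec31 pool really was created from a k=3, m=1 profile (check which profile it uses first; the names below are only the ones from this thread):

    ceph osd pool get ec31 erasure_code_profile   # which profile the pool was created from
    ceph osd erasure-code-profile get <profile>   # should show k=3 m=1
    # min_size is only accepted in the range [k, k+m] = [3, 4] here,
    # so 3 is already the lowest value the monitors will allow
    ceph osd pool set ec31 min_size 3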
On 15.05.19 9:09 AM, Konstantin Shalygin wrote:
ceph osd pool get ec31 min_size
All of these options were removed quite some time ago, most of them
in Luminous.
Paul
--
Paul Emmerich
Looking for help with your Ceph cluster? Contact us at https://croit.io
croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90
On Wed, May 1
Does anybody know whether S3 encryption in Ceph is ready for production?
-
This email and its attachments contain confidential information of New H3C Group and are intended
only for the individuals or groups listed in the addresses above. Any other person is prohibited from
using the information in this email in any form (including, but not limited to, disclosing, copying,
or distributing it in whole or in part). If you have received this email in error
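For what it's worth, RGW's server-side encryption is driven by a handful of rgw_crypt_* options; a minimal ceph.conf sketch follows (this says nothing about production readiness, the section name is a placeholder, and the static-key option is documented as for testing only):

    [client.rgw.gateway]
    rgw crypt require ssl = true   # refuse encryption requests over plain HTTP
    # for testing only, never in production: encrypt everything with one static key
    # rgw crypt default encryption key = <base64-encoded 256-bit key>

For real workloads the keys come from the client per request (SSE-C) or from a key-management service such as Barbican (SSE-KMS).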
Hi Manuel.
Thanks for your response. We will consider these settings when we enable
deep-scrubbing. For now, I saw this write-up in the Nautilus release notes:
Configuration values mon_warn_not_scrubbed and
mon_warn_not_deep_scrubbed have been renamed. They are now
mon_warn_pg_not_scrubbed_ratio and mon_warn_pg_not_deep_scrubbed_ratio.
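For reference, a hedged sketch of adjusting the renamed options on Nautilus (the values are illustrative, not recommendations):

    # warn once a PG has gone 50% past its scrub interval without being scrubbed
    ceph config set mon mon_warn_pg_not_scrubbed_ratio 0.5
    ceph config set mon mon_warn_pg_not_deep_scrubbed_ratio 0.75
    ceph config dump | grep not_scrubbed   # verify what is currently set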
Hello, Ceph users,
how do you deal with the "clock skew detected" HEALTH_WARN message?
I think the internal RTC in most x86 servers has only 1-second resolution,
but Ceph's skew limit is much smaller than that. So every time I reboot
one of my mons (for a kernel upgrade or something), I
Hi Yenya,
You could try to synchronize the system clock to the hardware clock before
rebooting. Also try chrony; it catches up very fast.
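For example, a minimal chrony.conf sketch of the "catches up very fast" part (the server line is a placeholder for whatever time source you already use):

    server your.ntp.server iburst
    # step the clock on the first few updates after boot instead of slewing slowly,
    # so the mons agree again quickly after a reboot
    makestep 1.0 3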
Kind regards,
Marco Stuurman
On Wed, 15 May 2019 at 13:48, Jan Kasprzak wrote:
> Hello, Ceph users,
>
> how do you deal with the "clock skew detect
Another option would be adding a boot time script which uses ntpdate (or
something) to force an immediate sync with your timeservers before ntpd
starts - this is actually suggested in ntpdate's man page!
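Something along these lines, run once at boot before ntpd comes up (the server name is a placeholder):

    #!/bin/sh
    # step the clock immediately; -b forces a settimeofday() step instead of slewing
    /usr/sbin/ntpdate -b your.ntp.server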
Rich
On 15/05/2019 13:00, Marco Stuurman wrote:
> Hi Yenya,
>
> You could try to synchronize
Hi,
is there a way to migrate a CephFS to a new data pool, like there is for RBD on
Nautilus?
https://ceph.com/geen-categorie/ceph-pool-migration/
Thanks
Lars
On Tue, May 14, 2019 at 7:24 PM Bob R wrote:
>
> Does 'ceph-volume lvm list' show it? If so you can try to activate it with
> 'ceph-volume lvm activate 122 74b01ec2--124d--427d--9812--e437f90261d4'
Good suggestion. If `ceph-volume lvm list` can see it, it can probably
activate it again. You can
Hello, Ceph users,
I wanted to install the recent kernel update on my OSD hosts
with CentOS 7, Ceph 13.2.5 Mimic. So I set the noout flag and ran
"yum -y update" on the first OSD host. This host has 8 BlueStore OSDs
with data on HDDs and databases on LVs of two SSDs (each SSD has 4 LVs
for OS
Are you sure your osd's are up and reachable? (run ceph osd tree on
another node)
-Original Message-
From: Jan Kasprzak [mailto:k...@fi.muni.cz]
Sent: Wednesday, 15 May 2019 14:46
To: ceph-us...@ceph.com
Subject: [ceph-users] Huge rebalance after rebooting OSD host (Mimic)
Hello
Dear Stefan,
thanks for the fast reply. We encountered the problem again, this time in a
much simpler situation; please see below. However, let me start with your
questions first:
What bug? -- In a single-active MDS set-up, should an operation with
"op_name": "fragmentdir" ever occur?
T
Marc,
Marc Roos wrote:
: Are you sure your osd's are up and reachable? (run ceph osd tree on
: another node)
They are up, because all three mons see them as up.
However, ceph osd tree provided the hint (thanks!): the OSD host came back
with the hostname "localhost" instead of the cor
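(For anyone hitting the same thing: one way to clean up after the hostname is fixed, assuming the OSDs re-registered themselves under a stray "localhost" bucket; option and bucket names below are the usual ones, adjust to your map.)

    # optionally stop OSDs from re-declaring their CRUSH location at startup,
    # so a wrong hostname cannot move them again:
    #   [osd]
    #   osd crush update on start = false
    # once the OSDs are back under the correct host, remove the empty stray bucket:
    ceph osd crush remove localhost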
TL;DR: I activated the drive successfully, but the daemon won't start. It looks
like it's complaining about the mon config, and I don't know why (there is a valid
ceph.conf on the host). Thoughts? I feel like it's close. Thank you
I executed the command:
ceph-volume lvm activate --all
It found the drive and activate
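A few checks that usually narrow this kind of thing down (just the usual suspects; OSD id 122 is taken from earlier in the thread, adjust it):

    systemctl status ceph-osd@122
    journalctl -u ceph-osd@122 -n 50           # the actual complaint about the mon config
    grep -E 'mon[_ ]host' /etc/ceph/ceph.conf  # what the OSD will try to reach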
kas wrote:
: Marc,
:
: Marc Roos wrote:
: : Are you sure your osd's are up and reachable? (run ceph osd tree on
: : another node)
:
: They are up, because all three mons see them as up.
: However, ceph osd tree provided the hint (thanks!): The OSD host went back
: with hostname "loca
Hi Manuel,
My response is interleaved below.
On 5/8/19 3:17 PM, EDH - Manuel Rios Fernandez wrote:
> Eric,
>
> Yes we do :
>
> time s3cmd ls s3://[BUCKET]/ --no-ssl and we get near 2min 30 secs for list
> the bucket.
We're adding an --allow-unordered option to `radosgw-admin bucket list`.
Tha
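Once that option lands, the listing would presumably look something like this (bucket name is a placeholder):

    radosgw-admin bucket list --bucket=BUCKET --allow-unordered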
We set up 2 monitors as NTP servers, and the other nodes sync from the monitors.
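Roughly, a chrony sketch of that layout (hostnames and subnet are placeholders):

    # on the two monitors acting as NTP servers:
    allow 192.168.0.0/24
    # on every other node:
    server mon1 iburst
    server mon2 iburst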
-----Original Message-----
From: ceph-users On behalf of Richard Hesketh
Sent: Wednesday, 15 May 2019 14:04
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] How do you deal with "clock skew detected"
Hi Eric,
FYI, ceph osd df in Nautilus reports metadata and OMAP. We updated to
Nautilus 14.2.1.
I'm going to create an issue in the tracker about the timeout after a return.
[root@CEPH001 ~]# ceph osd df tree
ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS
On Wed, May 15, 2019 at 5:05 AM Lars Täuber wrote:
> is there a way to migrate a cephfs to a new data pool like it is for rbd on
> nautilus?
> https://ceph.com/geen-categorie/ceph-pool-migration/
No, this isn't possible.
--
Patrick Donnelly, Ph.D.
He / Him / His
Senior Software Engineer
Red Ha
I actually made a dumb python script to do this. It's ugly and has a
lot of hardcoded things in it (like the mount location where I'm
copying things to in order to move pools, the names of pools, the savings I was
expecting, etc.) but it should be easy to adapt to what you're trying to
do:
https://gist.github.com/p
Since I'm using chrony instead of ntpd/openntpd, I don't see clock skew anymore.
(chrony resyncs much faster.)
----- Original Mail -----
From: "Jan Kasprzak"
To: "ceph-users"
Sent: Wednesday, 15 May 2019 13:47:57
Subject: [ceph-users] How do you deal with "clock skew detected"?
Hello, Ceph use
I came across that and tried it - the short answer is no, you can't do that
using a cache tier. The longer answer as to why, I'm less sure about, but
IIRC it has to do with copying / editing the OMAP object properties.
The good news, however, is that you can 'fake it' using File Layouts -
http://do
Oops, forgot a step - need to tell the MDS about the new pool before step 2:
`ceph mds add_data_pool <pool>`
You may also need to mark the pool as used by cephfs:
`ceph osd pool application enable {pool-name} cephfs`
On Wed, May 15, 2019 at 3:15 PM Elise Burke wrote:
> I came across that and tried
Lars, I just got done doing this after generating about a dozen CephFS subtrees
for different Kubernetes clients.
tl;dr: there is no way for files to move between filesystem formats (i.e. CephFS
<-> RBD) without copying them.
If you are doing the same thing, there may be some relevance for you in
On Tue, May 14, 2019 at 11:03 AM Rainer Krienke wrote:
>
> Hello,
>
> for a fresh setup ceph cluster I see a strange difference in the number
> of existing pools in the output of ceph -s and what I know that should
> be there: no pools at all.
>
> I set up a fresh Nautilus cluster with 144 OSDs on
After upgrading from 14.2.0 to 14.2.1, I've noticed PGs are frequently
resetting their scrub and deep scrub time stamps to 0.00. It's extra
strange because the peers show timestamps for deep scrubs.
## First entry from a pg list at 7pm
$ grep 11.2f2 ~/pgs-active.7pm
11.2f2 6910
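In case it helps anyone reproduce this, a quick way to watch the stamps on a single PG (pg id taken from above):

    ceph pg 11.2f2 query | grep -E 'last_(deep_)?scrub_stamp'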
On Wed, May 15, 2019 at 9:34 PM Frank Schilder wrote:
>
> Dear Stefan,
>
> thanks for the fast reply. We encountered the problem again, this time in a
> much simpler situation; please see below. However, let me start with your
> questions first:
>
> What bug? -- In a single-active MDS set-up, sh
Hi
After growing the size of an OSD's PV/LV, how can I get bluestore to see
the new space as available? It does notice the LV has changed size, but it
sees the new space as occupied.
This is the same question as:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-January/023893.html
and
tha
Did your OSDs' CRUSH location change after the reboot?
kas wrote on Wednesday, May 15, 2019 at 10:39 PM:
>
> kas wrote:
> : Marc,
> :
> : Marc Roos wrote:
> : : Are you sure your osd's are up and reachable? (run ceph osd tree on
> : : another node)
> :
> : They are up, because all three mons see them as u
Hello all,
I've got a 30-node cluster serving up lots of CephFS data.
We upgraded from Luminous 12.2.11 to Nautilus 14.2.1 on Monday earlier
this week.
We've been running 2 MDS daemons in an active-active setup. Tonight
one of the metadata daemons crashed several times with the following:
-
Hello Michael,
growing (expanding) a BlueStore OSD has been possible since Nautilus (14.2.0)
using the bluefs-bdev-expand tool, as discussed in this thread:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-April/034116.html
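Roughly, the procedure looks like this on Nautilus (the OSD id is a placeholder, and the OSD has to be stopped while the tool runs):

    systemctl stop ceph-osd@42
    ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-42
    systemctl start ceph-osd@42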
-- Yury
On Wed, May 15, 2019 at 10:03:29PM -0700, Michael Andersen wrote:
>
Thanks! I'm on mimic for now, but I'll give it a shot on a test nautilus
cluster.
On Wed, May 15, 2019 at 10:58 PM Yury Shevchuk wrote:
> Hello Michael,
>
> growing (expanding) bluestore OSD is possible since Nautilus (14.2.0)
> using bluefs-bdev-expand tool as discussed in this thread:
>
> http
On 5/12/19 4:21 PM, Thore Krüss wrote:
> Good evening,
> after upgrading our cluster yesterday to Nautilus (14.2.1) and pg-merging an
> imbalanced pool we noticed that the number of objects in the pool has doubled
> (rising synchronously with the merge progress).
>
> What happened there? Was this
Many thanks for the analysis!
I'm going to test with 4K on a heavy MSSQL database to see if I see
improvement in IOs/latency.
I'll report results in this thread.
----- Original Mail -----
From: "Trent Lloyd"
To: "ceph-users"
Sent: Friday, 10 May 2019 09:59:39
Subject: [ceph-users] Poor per
Dear Yan,
OK, I will try to trigger the problem again and dump the information requested.
Since it is not easy to get into this situation and I usually need to resolve
it fast (it's not a test system), is there anything else worth capturing?
I will get back as soon as it happens again.
In the