On Mon, Mar 2, 2020 at 7:19 PM Alex Chalkias wrote:
>
> Thanks for the update. Are you doing a beta-release prior to the official
> launch?
The first RC was tagged a few weeks ago:
https://github.com/ceph/ceph/tree/v15.1.0
Paul
>
>
> On Mon, Mar 2, 2020 at 7:12 PM Sage Weil wrote:
>
> > It's
On Tuesday, March 3, 2020 at 04:57:35 CET, Steven Scheit wrote:
> Can you share "ceph pg 6.36a query" output
>
Sure, it's attached.
Simone Lazzaris
Qcom S.p.A. a Socio Unico
Via Roggia Vignola, 9 | 24047 Treviglio (BG) | T +39 0363 1970352 | M +39 3938111237
simone.lazza...@qcom.it
Hi!
Again a new version in the repository without an announcement.
:(
Who do we need to write to and complain to so that there is always an
announcement first, and only then a new version in the repository?
WBR,
Fyodor.
I really do not care about these 1-2 days in between; why do you? Do
not install it yet, configure yum to lock the version, or update your local repo
less frequently.
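For example, with the yum versionlock plugin (package names below assume the
upstream Ceph repository; adjust to your setup), something like this pins the
currently installed packages until you deliberately unlock them:
# yum install yum-plugin-versionlock
# yum versionlock add 'ceph*' librados2 librbd1
# yum versionlock list
# yum versionlock clear   (when you actually want to upgrade)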
-Original Message-
Sent: 03 March 2020 11:22
To: ceph-users
Subject: [ceph-users] Nautilus 14.2.8
Hi!
Again. New version in
Hi!
> I really do not care about these 1-2 days in between; why do you? Do
> not install it yet, configure yum to lock the version, or update your local repo
> less frequently.
I already asked this question - what about those who decide to install
Ceph for the first time today?
ceph-deploy instal
This is the eighth update to the Ceph Nautilus release series. This release
fixes issues across a range of subsystems. We recommend that all users upgrade
to this release. Please note the following important changes in this
release; as always the full changelog is posted at:
https://ceph.io/releas
Does nobody have an idea? Ceph automatically started to migrate all data
from the HDD to the SSD DB device but has stopped at 128 KB on nearly all
OSDs.
Greets,
Stefan
On 02.03.20 at 10:32, Stefan Priebe - Profihost AG wrote:
> Hello,
>
> I added a DB device to my OSDs running Nautilus. The DB dat
On 03.03.20 at 08:38, Thomas Lamprecht wrote:
> Hi,
>
> On 3/3/20 8:01 AM, Stefan Priebe - Profihost AG wrote:
>> Does anybody have a guide to build Ceph Nautilus for Debian stretch? I
>> wasn't able to find a backported gcc-8 for stretch.
>
> That's because a gcc backport isn't too trivial, it
Stefan,
What version are you running? You wrote "Ceph automatically started to
migrate all data from the HDD to the SSD DB device"; is that normal auto
compaction, or has Ceph developed a trigger to do it?
Best Regards,
Rafał Wądołowski
Hello,
I do not know how to restrict a client.user to a certain RBD pool, where
this pool has a replicated metadata pool pool.rbd and an erasure-coded
data pool named pool.ec. I am running Ceph Nautilus.
I tried this for a client.user:
# ceph auth caps client.user mon 'profile rbd' osd 'profile
On Tue, Mar 3, 2020 at 10:05 AM Rainer Krienke wrote:
>
> Hello,
>
> I do not know how to restrict a client.user to a certain RBD pool, where
> this pool has a replicated metadata pool pool.rbd and an erasure-coded
> data pool named pool.ec. I am running Ceph Nautilus.
>
> I tried this for a clien
Hello!
AFAIK, you have to access the replicated pool with the default data pool pointing
to the EC pool, like this:
[client.user]
rbd_default_data_pool = pool.ec
Now you can access pool.rbd, but the actual data will be placed on pool.ec.
Maybe there is another way to specify the default data pool for using EC+Replicat
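A rough sketch of the whole setup (pool and client names are taken from the
question; treat the exact commands as an illustration, not something tested
here):
# ceph auth caps client.user mon 'profile rbd' osd 'profile rbd pool=pool.rbd, profile rbd pool=pool.ec'
Then either set the default data pool on the client side:
[client.user]
rbd_default_data_pool = pool.ec
or specify it per image:
# rbd create pool.rbd/test --size 10G --data-pool pool.ec
Either way the client needs OSD caps on both pools, since the image metadata
lives in pool.rbd and the data objects land in pool.ec.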
*cough* use croit to deploy your cluster, then you have a well tested
OS+Ceph image and no random version change ;) *cough*
--
Martin Verges
Managing director
Hint: Secure one of the last slots in the upcoming 4-day Ceph Intensive
Training at https://croit.io/training/4-days-ceph-in-depth-trainin
Regarding bluestore_min_alloc_size_ssd=4K: do I need to recreate these OSDs,
or does this change magically? What % performance increase can be
expected?
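As far as I know, bluestore_min_alloc_size_ssd is only applied when an OSD is
created (mkfs), so existing OSDs keep the value they were built with; to get 4K
they would have to be redeployed. The gain depends heavily on the workload
(small writes and EC pools benefit most), so a generic percentage is hard to
give. A possible, untested sequence:
# ceph config set osd bluestore_min_alloc_size_ssd 4096
then zap and recreate the OSDs one at a time (e.g. with ceph-volume lvm zap and
ceph-volume lvm create), waiting for recovery in between.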
-Original Message-
To: ceph-annou...@ceph.io; ceph-users@ceph.io; d...@ceph.io;
ceph-de...@vger.kernel.org
Subject: [ceph-users] v14.2.
On 03.03.20 at 15:34, Rafał Wądołowski wrote:
> Stefan,
>
> What version are you running?
14.2.7
> You wrote "Ceph automatically started to
> migrate all data from the HDD to the SSD DB device"; is that normal auto
> compaction, or has Ceph developed a trigger to do it?
That's normal after running
ceph-b
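(For reference, and as an assumption about the truncated command above: adding
and populating a DB device is normally done with the OSD stopped, roughly like
this, with the OSD id and target device as placeholders:
# ceph-bluestore-tool bluefs-bdev-new-db --path /var/lib/ceph/osd/ceph-0 --dev-target /dev/vg-db/lv-db-0
# ceph-bluestore-tool bluefs-bdev-migrate --path /var/lib/ceph/osd/ceph-0 --devs-source /var/lib/ceph/osd/ceph-0/block --dev-target /var/lib/ceph/osd/ceph-0/block.db
The second command moves the existing BlueFS/RocksDB data from the main device
to the new DB device.)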
Hi,
I installed a basic three-host cluster with ceph-deploy for development
today, and I'm seeing messages like
mon.somehost@2(electing) e5 failed to get devid for :
udev_device_new_from_subsystem_sysname failed on ''
in the monitor logs when a monitor starts up. What is this? I found this:
https
Hi,
We have updated our cluster to 14.2.8 since we suffered the bug
https://tracker.ceph.com/issues/43583; now lifecycle policies give more
information than before.
In 14.2.7 they ended instantly, so we have made some progress. But they are
still not able to remove multipart uploads.
Just a line of t
The default value of this reshard pool is "default.rgw.log:reshard". You
can check 'radosgw-admin zone get' for the list of pool names/namespaces
in use. It may be that your log pool is named ".rgw.log" instead, so you
could change your reshard_pool to ".rgw.log:reshard" to share that.
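If the pool name does need to be changed, the usual route (the zone name
"default" is just an assumption here) is roughly:
# radosgw-admin zone get --rgw-zone=default > zone.json
  (edit "reshard_pool": ".rgw.log:reshard" in zone.json)
# radosgw-admin zone set --rgw-zone=default --infile zone.json
and, if you run a realm/multisite setup:
# radosgw-admin period update --commit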
On 3/2
(resending to the new mailing list)
Dear Casey, Dear All,
We tested the migration from Luminous to Nautilus and noticed two regressions
breaking the RGW integration in Openstack:
1) the following config parameter is not working on Nautilus but is valid on
Luminous and on Master:
Hello,
Does anybody know whether there is any mechanism to make sure an image
looks like the original after an import-diff?
While doing Ceph backups to another Ceph cluster, I currently do a fresh
import every 7 days, so I'm sure that if something went wrong with
import-diff I have a fresh one every 7
Hi,
You can use a full local export, piped to some hash program (this is
what Backurne [1] does): rbd export - | xxhsum
Then, check the hash consistency with the original.
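A sketch of that check (cluster, pool, image, and snapshot names are
placeholders); exporting the same snapshot on both sides makes sure you compare
the same point in time:
# rbd --cluster source export pool/image@backup-20200303 - | xxhsum
# rbd --cluster backup export pool/image@backup-20200303 - | xxhsum
If both hashes match, the backup image is byte-identical to the source snapshot.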
Regards,
[1] https://github.com/JackSlateur/backurne
On 3/3/20 8:46 PM, Stefan Priebe - Profihost AG wrote:
> Hello,
>
> doe
Hello,
This is for a cluster currently running 14.2.7. Since our cluster is
still relatively small, we feel a strong need to run our CephFS on an EC
pool (8 + 2) with CRUSH failure domain = OSD to maximize capacity.
I have read and re-read
https://docs.ceph.com/docs/nautilus/cephfs/createf
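For reference, the usual sequence (as I understand those docs; pool, filesystem,
and directory names here are only examples) keeps the metadata pool replicated
and attaches the EC pool as an extra data pool:
# ceph osd erasure-code-profile set ec82osd k=8 m=2 crush-failure-domain=osd
# ceph osd pool create cephfs_data_ec 256 256 erasure ec82osd
# ceph osd pool set cephfs_data_ec allow_ec_overwrites true
# ceph fs add_data_pool cephfs cephfs_data_ec
# setfattr -n ceph.dir.layout.pool -v cephfs_data_ec /mnt/cephfs/archive
Note that 8+2 with failure domain = OSD needs at least 10 OSDs and only protects
against OSD failures, not against losing a whole host.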
Hi,
Am 03.03.20 um 20:54 schrieb Jack:
> Hi,
>
> You can use a full local export, piped to some hash program (this is
> what Backurne¹ does) : rbd export - | xxhsum
> Then, check the hash consistency with the original
Thanks for the suggestion but this still needs to run an rbd export on
the sou
Hi,
Our cluster (14.2.6) has been showing sporadic slow ops warnings since upgrading
from Jewel a month ago. Today I checked the OSD log files and found a lot of
entries like:
ceph-osd.5.log:2020-03-04 10:33:31.592 7f18ca41f700 0
bluestore(/var/lib/ceph/osd/ceph-5) log_latency_fn slow operation observ
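In case it helps with digging further: the log_latency_fn lines come from
BlueStore's internal latency tracking, and the slow ops themselves can be
inspected on the affected OSD, e.g. (osd.5 as in the log above):
# ceph health detail
# ceph daemon osd.5 dump_ops_in_flight
# ceph daemon osd.5 dump_historic_ops
That at least shows which operations are slow and in which phase they spend
their time.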