On 01/04/2020 at 08:29, James, GleSYS wrote:
> Hi Gilles,
>
> Yes, your configuration works with Netplan on Ubuntu 18 as well. However,
> this would use only one of the physical interfaces (the current active
> interface for the bond) for both networks.
>
> The reason I want to create two bond
You already have the correct option, there's not much to it:
mount -t ceph mon1,mon2,mon3:/<path> <mountpoint> -o \
    name=<user>,secretfile=<secret-file>,mds_namespace=<fs-name>
If your caps and path restrictions are correct this should work.
Quoting Jarett DeAngelis:
Thanks. I’m now trying to figure out how to get Proxmox to pas
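For reference, a filled-in version of the mount command above might look like the following; the monitor names, user, secret file, filesystem name, and mount point are placeholders, not values from this thread:

  mount -t ceph mon1,mon2,mon3:/ /mnt/cephfs \
      -o name=backupuser,secretfile=/etc/ceph/backupuser.secret,mds_namespace=cephfs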
> I’m actually very curious how well this is performing for you as I’ve
> definitely not seen a deployment this large. How do you use it?
What exactly do you mean? Our cluster has 11PiB capacity of which about
15% are used at the moment (web-scale corpora and such). We have
deployed 5 MONs and 5
Dear all,
I have two observations regarding bluestore compression config:
1) ceph.conf settings seem to be ignored.
2) The SSD default values seem not to save space using compression.
To 1) We are running a mimic 13.2.8 cluster with OSDs deployed under mimic
13.2.2. Back then the interpretatio
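A quick way to check whether the OSDs actually pick up the ceph.conf values is to ask a running OSD over its admin socket; a minimal sketch, run on the host of osd.0 (the OSD id and option names are examples only):

  ceph daemon osd.0 config get bluestore_compression_mode
  ceph daemon osd.0 config get bluestore_compression_min_blob_size_ssd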
Hi Frank,
answering the second part.
The following settings look senseless indeed:
bluestore_compression_min_blob_size_ssd 8192
bluestore_min_alloc_size_ssd 16384
Presumably this was an incomplete backport from Nautilus which has
proper numbers: 32K and 16K respectively.
Feel free to create
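If you want the Nautilus numbers on Mimic as well, the compression blob size can be changed at runtime; bluestore_min_alloc_size_ssd, as far as I know, only takes effect when an OSD is created, so it cannot simply be raised on existing OSDs. A minimal sketch:

  ceph config set osd bluestore_compression_min_blob_size_ssd 32768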
Hi,
are you hitting [1]? Did you run Nautilus only for a short period of
time before upgrading to Octopus?
If this doesn't apply to you, can you see anything in the OSD logs
(/var/log/ceph/ceph-osd.<id>.log)?
Regards,
Eugen
[1] https://tracker.ceph.com/issues/44770
Zitat von "Lomayani S. La
On 01.04.20 08:29, James, GleSYS wrote:
> The reason I want to create two bonds is to have enp179s0f0 as active for the
> public network, and enp179s0f1 as active for the cluster network, therefore
> spreading the traffic across the nics.
I do not think this will work. AFAIK you cannot create a
Dear Igor, thanks, done: https://tracker.ceph.com/issues/44878 .
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
From: Igor Fedotov
Sent: 01 April 2020 12:14:02
To: Frank Schilder; ceph-users
Subject: Re: [ceph-users] Bluestore compr
I've been trying to use drivegroups on 15.2.0 to set up OSDs, but with no luck.
Is this implemented?
On Sun, 29 Mar 2020 at 16:01, kefu chai wrote:
>
> On Sat, Mar 28, 2020 at 1:29 AM Mazzystr wrote:
> >
> > What about the missing dependencies for octopus on el8? (looking at yu
> > ceph-mgr!
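On the drivegroups question above: with cephadm on 15.2.0 a drive group is applied as an OSD service spec, roughly as sketched here; the spec file name is a placeholder and the exact flags may differ between point releases:

  ceph orch apply osd -i drive_groups.yml
  # or simply consume every unused device:
  ceph orch apply osd --all-available-devices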
Hi,
I have a different approach in mind for a replacement, we successfully
accomplished that last year in our production environment where we
replaced all nodes of the cluster with newer hardware. Of course we
wanted to avoid rebalancing the data multiple times.
What we did was to create
So, this is following on from a discussion in the #ceph IRC channel, where we
seem to have reached the limit of what we can do.
I have a ~15 node, 311 OSD cluster. (20 OSDs per node).
The cluster is Nautilus - the 3 MONs + the first 8 OSD hosts were installed as
Mimic and upgraded to Nautilus w
(I note that some of the down OSDs still report issues with secret
dissemination:
2020-04-01 14:32:11.265 7f9d9a7be700 0 auth: could not find secret_id=5010
2020-04-01 14:32:11.265 7f9d9a7be700 0 cephx: verify_authorizer could not get
service secret for service osd secret_id=5010
2020-04-01 14:
Hi,
As the upgrade documentation tells:
> Note that the first time each OSD starts, it will do a format
> conversion to improve the accounting for “omap” data. This may
> take a few minutes to as much as a few hours (for an HDD with lots
> of omap data). You can disable this automatic conversion w
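If memory serves, the option the upgrade notes refer to is bluestore_fsck_quick_fix_on_mount; a minimal sketch of deferring the conversion cluster-wide before restarting OSDs:

  ceph config set osd bluestore_fsck_quick_fix_on_mount false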
Ouch, did you open a ticket for that?
-- Dan
On Wed, Apr 1, 2020 at 5:28 PM Jack wrote:
>
> Hi,
>
> As the upgrade documentation tells:
> > Note that the first time each OSD starts, it will do a format
> > conversion to improve the accounting for “omap” data. This may
> > take a few minutes to a
If I were to upgrade a host with 18+ 12TB+ OSDs, I would need to
disable the automatic conversion and do them a few at a time?
On Wed, Apr 1, 2020 at 11:28 AM Jack wrote:
>
> Hi,
>
> As the upgrade documentation tells:
> > Note that the first time each OSD starts, it will do a format
> > conversi
April fools day!! :)
-Original Message-
Sent: 01 April 2020 17:28
To: ceph-users@ceph.io
Subject: [ceph-users] [Octopus] Beware the on-disk conversion
Hi,
As the upgrade documentation tells:
> Note that the first time each OSD starts, it will do a format
> conversion to improve the
Doh, I hope so!
On Wed, Apr 1, 2020 at 5:35 PM Marc Roos wrote:
>
> April fools day!! :)
>
>
> -Original Message-
> Sent: 01 April 2020 17:28
> To: ceph-users@ceph.io
> Subject: [ceph-users] [Octopus] Beware the on-disk conversion
>
> Hi,
>
> As the upgrade documentation tells:
> > No
It is not a joke :)
First node is upgraded (and converted), my cluster is currently healing
its degraded objects
On 4/1/20 5:37 PM, Dan van der Ster wrote:
> Doh, I hope so!
>
> On Wed, Apr 1, 2020 at 5:35 PM Marc Roos wrote:
>>
>> April fools day!! :)
>>
>>
>> -Original Message-
>
All;
We set up a CephFS on a Nautilus (14.2.8) cluster in February, to hold backups.
We finally have all the backups running, and are just waiting for the system to
reach steady-state.
I'm concerned about the usage numbers: in the Dashboard, Capacity shows the
cluster as 37% used, while under File
All;
Another interesting piece of information: the host that mounts the CephFS shows
it as 45% full.
Thank you,
Dominic L. Hilsbos, MBA
Director - Information Technology
Perform Air International, Inc.
dhils...@performair.com
www.PerformAir.com
-Original Message-
From: dhils...@per
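One way to dig into the 37% vs. 45% discrepancy above is to put the raw view, the per-pool view, and the client view side by side; a minimal sketch, with the mount point as a placeholder:

  ceph df detail      # raw usage vs. per-pool STORED/USED (USED includes replication)
  ceph fs status      # data and metadata pool usage for the filesystem
  df -h /mnt/cephfs   # what the kernel client reports for the mount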
Resending the response back to the list.
Zitat von "Lomayani S. Laizer" :
Hello,
I have been running Nautilus since May last year, so this is a separate issue
from the recent bug.
I think the problem is between systemd and ceph-volume. There are no entries in
the OSD logs because the OSD doesn't start at all.
start
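If ceph-volume activation is indeed what fails before the OSD ever starts, a minimal troubleshooting sketch (the unit name and OSD id are placeholders):

  systemctl list-units 'ceph-volume@*'
  journalctl -b -u ceph-volume@lvm-2-<osd-fsid>
  ceph-volume lvm activate --all    # activating by hand often surfaces the real error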
Quoting victorh...@yahoo.com (victorh...@yahoo.com):
> Hi,
>
> I've read that Ceph has some InfluxDB reporting capabilities inbuilt
> (https://docs.ceph.com/docs/master/mgr/influx/).
>
> However, Telegraf, which is the system reporting daemon for InfluxDB,
> also has a Ceph plugin
> (https://gith
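For comparison, enabling the built-in mgr module looks roughly like this; the hostname and database name are placeholders, and none of it is needed if Telegraf's own plugin is used instead:

  ceph mgr module enable influx
  ceph config set mgr mgr/influx/hostname influx.example.com
  ceph config set mgr mgr/influx/database ceph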
Hi Jeff,
I got around to building 3.10.0-1062.18.1 with the patch you included, and it
seems to be fixed.
Thank you very much for your help!
Best regards, Mikael
The strategy that Nghia described is inefficient for moving data more than
once, but safe since there are always N copies, vs a strategy of setting noout,
destroying the OSDs, and recreating them on the new server. That would be more
efficient, albeit with a period of reduced redundancy.
I’ve
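A minimal sketch of the faster path described above, with host and OSD specifics omitted:

  ceph osd set noout     # stop the cluster from backfilling while the OSDs are down
  # ...destroy/recreate or physically move the OSDs to the new server...
  ceph osd unset noout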
Hi all! Thanks for reading this message.
I have one Ceph cluster installed with Ceph v12.2.12. It has run well for about
half a year.
Last week we added another two machines to this cluster, and then all the OSDs
became unstable.
The OSD async messenger complains that the OSDs cannot heartbeat to each other. But the
Hi, I am new to RGW and am trying to deploy a multisite cluster in order to sync
data from one cluster to another.
My source zone is the default zone in the default zonegroup, structured as
below:
realm: big-realm
|
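On the secondary cluster, the usual sequence looks roughly like the following; the endpoints, zone name, and keys are placeholders rather than values from this setup:

  radosgw-admin realm pull --url=http://source-rgw:8080 \
      --access-key=<system-access-key> --secret=<system-secret>
  radosgw-admin zone create --rgw-zonegroup=default --rgw-zone=secondary \
      --endpoints=http://second-rgw:8080 \
      --access-key=<system-access-key> --secret=<system-secret>
  radosgw-admin period update --commit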
Yeah, I should have mentioned the swap-bucket option. We couldn't use
that because we actually didn't swap anything but moved the old hosts
to a different root and we keep them for erasure coding pools.
Quoting Anthony D'Atri:
The strategy that Nghia described is inefficient for moving d
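A minimal sketch of the "different root" approach Eugen describes, with placeholder bucket names; swap-bucket is shown for contrast:

  ceph osd crush add-bucket retired root       # a separate root to park the old hosts
  ceph osd crush move old-host1 root=retired
  # the in-place alternative, if the buckets really are being swapped:
  # ceph osd crush swap-bucket old-host1 new-host1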