[ceph-users] Re: Netplan bonding configuration

2020-04-01 Thread Gilles Mocellin
On 01/04/2020 at 08:29, James, GleSYS wrote: > Hi Gilles, > > Yes, your configuration works with Netplan on Ubuntu 18 as well. However, > this would use only one of the physical interfaces (the current active > interface for the bond) for both networks. > > The reason I want to create two bond
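
For reference, the layout being asked about would look roughly like the Netplan sketch below (interface names are from the thread; bond names and addresses are placeholders). As the replies note, enslaving the same NICs to two bonds is generally not possible, so this only illustrates the intent:

    network:
      version: 2
      ethernets:
        enp179s0f0: {}
        enp179s0f1: {}
      bonds:
        bond0:                        # intended for the public network
          interfaces: [enp179s0f0, enp179s0f1]
          parameters:
            mode: active-backup
            primary: enp179s0f0
          addresses: [192.0.2.10/24]
        bond1:                        # intended for the cluster network
          interfaces: [enp179s0f0, enp179s0f1]
          parameters:
            mode: active-backup
            primary: enp179s0f1
          addresses: [198.51.100.10/24]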

[ceph-users] Re: Multiple CephFS creation

2020-04-01 Thread Eugen Block
You already have the correct option, there's not much to it: mount -t ceph mon1,mon2,mon3:/<path> <mountpoint> -o name=<user>,secretfile=<secretfile>,mds_namespace=<fs_name> If your caps and path restrictions are correct this should work. Quoting Jarett DeAngelis: Thanks. I’m now trying to figure out how to get Proxmox to pas
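
A filled-in sketch of that mount line, with a hypothetical client name, secret file, file system name and mount point:

    mount -t ceph mon1,mon2,mon3:/ /mnt/cephfs2 \
        -o name=fs2user,secretfile=/etc/ceph/fs2user.secret,mds_namespace=cephfs2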

[ceph-users] Re: No reply or very slow reply from Prometheus plugin - ceph-mgr 13.2.8 mimic

2020-04-01 Thread Janek Bevendorff
> I’m actually very curious how well this is performing for you as I’ve > definitely not seen a deployment this large. How do you use it? What exactly do you mean? Our cluster has 11PiB capacity of which about 15% are used at the moment (web-scale corpora and such). We have deployed 5 MONs and 5

[ceph-users] Bluestore compression parameters in ceph.conf not used in mimic 13.2.8?

2020-04-01 Thread Frank Schilder
Dear all, I have two observations regarding bluestore compression config: 1) ceph.conf settings seem to be ignored. 2) The SSD default values seem not to save space using compression. To 1) We are running a mimic 13.2.8 cluster with OSDs deployed under mimic 13.2.2. Back then the interpretatio
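
As an illustration of the kind of settings in question, a ceph.conf fragment plus one way to check what a running OSD actually applies (option values and the OSD id are placeholders):

    [osd]
    bluestore_compression_mode = aggressive
    bluestore_compression_algorithm = snappy

    # ask a running OSD what it really uses (admin socket on the OSD host)
    ceph daemon osd.0 config show | grep bluestore_compression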

[ceph-users] Re: Bluestore compression parameters in ceph.conf not used in mimic 13.2.8?

2020-04-01 Thread Igor Fedotov
Hi Frank, answering the second part. The following settings look senseless indeed: bluestore_compression_min_blob_size_ssd 8192 bluestore_min_alloc_size_ssd 16384 Presumably this was an incomplete backport from Nautilus, which has proper numbers: 32K and 16K respectively. Feel free to create
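
For clarity, the values being compared (taken from the message above); since a compressed blob still occupies whole allocation units, a compression min blob size below min_alloc_size cannot save any space:

    # mimic 13.2.8 SSD defaults reported as senseless:
    bluestore_compression_min_blob_size_ssd = 8192
    bluestore_min_alloc_size_ssd = 16384
    # Nautilus values mentioned in the reply:
    bluestore_compression_min_blob_size_ssd = 32768   # 32K
    bluestore_min_alloc_size_ssd = 16384              # 16K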

[ceph-users] Re: osd can not start at boot after upgrade to octopus

2020-04-01 Thread Eugen Block
Hi, are you hitting [1]? Did you run Nautilus only for a short period of time before upgrading to Octopus? If this doesn't apply to you, can you see anything in the OSD logs (/var/log/ceph/ceph-osd.<id>.log)? Regards, Eugen [1] https://tracker.ceph.com/issues/44770 Quoting "Lomayani S. La

[ceph-users] Re: Netplan bonding configuration

2020-04-01 Thread Robert Sander
On 01.04.20 08:29, James, GleSYS wrote: > The reason I want to create two bonds is to have enp179s0f0 as active for the > public network, and enp179s0f1 as active for the cluster network, therefore > spreading the traffic across the nics. I do not think this will work. AFAIK you cannot create a

[ceph-users] Re: Bluestore compression parameters in ceph.conf not used in mimic 13.2.8?

2020-04-01 Thread Frank Schilder
Dear Igor, thanks, done: https://tracker.ceph.com/issues/44878 . = Frank Schilder AIT Risø Campus Bygning 109, rum S14 From: Igor Fedotov Sent: 01 April 2020 12:14:02 To: Frank Schilder; ceph-users Subject: Re: [ceph-users] Bluestore compr

[ceph-users] Re: v15.2.0 Octopus released

2020-04-01 Thread Theofilos Mouratidis
I've been trying to use drivegroups on 15.2.0 to set up OSDs, but with no luck. Is this implemented? On Sun, 29 Mar 2020 at 16:01, kefu chai wrote: > > On Sat, Mar 28, 2020 at 1:29 AM Mazzystr wrote: > > > > What about the missing dependencies for octopus on el8? (looking at yu > > ceph-mgr!
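
For anyone comparing notes, a minimal drive group spec of the shape the Octopus docs describe, applied through the orchestrator (service id, placement pattern and file name are placeholders; exact fields and commands may differ between early Octopus releases):

    service_type: osd
    service_id: default_drives
    placement:
      host_pattern: '*'
    data_devices:
      all: true

    # apply the spec (cephadm/orchestrator backend must be enabled)
    ceph orch apply osd -i drive_group.yml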

[ceph-users] Re: Replace OSD node without remapping PGs

2020-04-01 Thread Eugen Block
Hi, I have a different approach in mind for a replacement, we successfully accomplished that last year in our production environment where we replaced all nodes of the cluster with newer hardware. Of course we wanted to avoid rebalancing the data multiple times. What we did was to create

[ceph-users] Multiple OSDs down, and won't come up (possibly related to other Nautilus issues)

2020-04-01 Thread aoanla
So, this is following on from a discussion in the #ceph IRC channel, where we seem to have reached the limit of what we can do. I have a ~15 node, 311 OSD cluster. (20 OSDs per node). The cluster is Nautilus - the 3 MONs + the first 8 OSD hosts were installed as Mimic and upgraded to Nautilus w

[ceph-users] Re: Multiple OSDs down, and won't come up (possibly related to other Nautilus issues)

2020-04-01 Thread aoanla
(I note that some of the down OSDs still report issues with secret dissemination: 2020-04-01 14:32:11.265 7f9d9a7be700 0 auth: could not find secret_id=5010 2020-04-01 14:32:11.265 7f9d9a7be700 0 cephx: verify_authorizer could not get service secret for service osd secret_id=5010 2020-04-01 14:

[ceph-users] [Octopus] Beware the on-disk conversion

2020-04-01 Thread Jack
Hi, As the upgrade documentation tells: > Note that the first time each OSD starts, it will do a format > conversion to improve the accounting for “omap” data. This may > take a few minutes to as much as a few hours (for an HDD with lots > of omap data). You can disable this automatic conversion w
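
The switch referred to in that passage is, per the Octopus release notes, the quick-fix-on-mount setting; a sketch of turning it off before restarting OSDs, to be re-enabled later when the conversion is convenient:

    ceph config set osd bluestore_fsck_quick_fix_on_mount false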

[ceph-users] Re: [Octopus] Beware the on-disk conversion

2020-04-01 Thread Dan van der Ster
Ouch, did you open a ticket for that? -- Dan On Wed, Apr 1, 2020 at 5:28 PM Jack wrote: > > Hi, > > As the upgrade documentation tells: > > Note that the first time each OSD starts, it will do a format > > conversion to improve the accounting for “omap” data. This may > > take a few minutes to a

[ceph-users] Re: [Octopus] Beware the on-disk conversion

2020-04-01 Thread Nathan Fish
If I were to upgrade a host with 18+ 12TB+ OSDs, I would need to disable the automatic conversion and do them a few at a time? On Wed, Apr 1, 2020 at 11:28 AM Jack wrote: > > Hi, > > As the upgrade documentation tells: > > Note that the first time each OSD starts, it will do a format > > conversi

[ceph-users] Re: [Octopus] Beware the on-disk conversion

2020-04-01 Thread Marc Roos
April fools day!! :) -Original Message- Sent: 01 April 2020 17:28 To: ceph-users@ceph.io Subject: [ceph-users] [Octopus] Beware the on-disk conversion Hi, As the upgrade documentation tells: > Note that the first time each OSD starts, it will do a format > conversion to improve the

[ceph-users] Re: [Octopus] Beware the on-disk conversion

2020-04-01 Thread Dan van der Ster
Doh, I hope so! On Wed, Apr 1, 2020 at 5:35 PM Marc Roos wrote: > > April fools day!! :) > > > -Original Message- > Sent: 01 April 2020 17:28 > To: ceph-users@ceph.io > Subject: [ceph-users] [Octopus] Beware the on-disk conversion > > Hi, > > As the upgrade documentation tells: > > No

[ceph-users] Re: [Octopus] Beware the on-disk conversion

2020-04-01 Thread Jack
It is not a joke :) First node is upgraded (and converted), my cluster is currently healing its degraded objects On 4/1/20 5:37 PM, Dan van der Ster wrote: > Doh, I hope so! > > On Wed, Apr 1, 2020 at 5:35 PM Marc Roos wrote: >> >> April fools day!! :) >> >> >> -Original Message- >

[ceph-users] Maximum CephFS Filesystem Size

2020-04-01 Thread DHilsbos
All; We set up a CephFS on a Nautilus (14.2.8) cluster in February, to hold backups. We finally have all the backups running, and are just waiting for the system to reach steady state. I'm concerned about the usage numbers: in the Dashboard, Capacity shows the cluster as 37% used, while under File

[ceph-users] Re: Maximum CephFS Filesystem Size

2020-04-01 Thread DHilsbos
All; Another interesting piece of information: the host that mounts the CephFS shows it as 45% full. Thank you, Dominic L. Hilsbos, MBA Director - Information Technology Perform Air International, Inc. dhils...@performair.com www.PerformAir.com -Original Message- From: dhils...@per
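
The different views being compared can be pulled side by side; a quick sketch (the mount point is a placeholder):

    ceph df                 # raw and per-pool usage, roughly what the dashboard capacity reflects
    ceph fs status          # data/metadata pool usage for each CephFS
    df -h /mnt/backups      # usage as reported on the host mounting the file system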

[ceph-users] Re: osd can not start at boot after upgrade to octopus

2020-04-01 Thread Eugen Block
Resending the response back to the list. Quoting "Lomayani S. Laizer": Hello, I have been running Nautilus since May last year, so this is a separate issue from the recent bug. I think the problem is between systemd and ceph-volume. There is nothing in the OSD logs because the OSDs don't start at all. start
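
If it is the activation path, the usual places to look are the ceph-volume systemd units and a manual activation run; a sketch with a hypothetical OSD id and FSID:

    systemctl list-units 'ceph-volume@*'           # activation units created at prepare time
    journalctl -u ceph-volume@lvm-3-<osd-fsid>     # hypothetical id/fsid
    ceph-volume lvm activate --all                 # activate by hand to surface the real error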

[ceph-users] Re: Ceph influxDB support versus Telegraf Ceph plugin?

2020-04-01 Thread Stefan Kooman
Quoting victorh...@yahoo.com (victorh...@yahoo.com): > Hi, > > I've read that Ceph has some InfluxDB reporting capabilities inbuilt > (https://docs.ceph.com/docs/master/mgr/influx/). > > However, Telegraf, which is the system reporting daemon for InfluxDB, > also has a Ceph plugin > (https://gith
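
For completeness, the built-in route from the linked docs is just a mgr module; enabling it looks roughly like this (the mgr/influx/* option names are from memory and may vary by release):

    ceph mgr module enable influx
    ceph config set mgr mgr/influx/hostname influxdb.example.com   # hypothetical host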

[ceph-users] Re: Using sendfile on Ceph FS results in data stuck in client cache

2020-04-01 Thread Mikael Öhman
Hi Jeff, I got around to building 3.10.0-1062.18.1 with the patch you included, and it seems to be fixed. Thank you very much for your help! Best regards, Mikael ___ ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an email to ceph-use

[ceph-users] Re: Replace OSD node without remapping PGs

2020-04-01 Thread Anthony D'Atri
The strategy that Nghia described is inefficient because it moves data more than once, but it is safe since there are always N copies, vs a strategy of setting noout, destroying the OSDs, and recreating them on the new server. That would be more efficient, albeit with a period of reduced redundancy. I’ve
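
A rough sketch of the reduced-redundancy variant described above (OSD id and device are placeholders; only N-1 copies exist until the recreated OSD backfills):

    ceph osd set noout
    ceph osd destroy 12 --yes-i-really-mean-it        # keeps the id and CRUSH position
    # recreate the OSD on the new host, reusing the id so PG mappings stay put
    ceph-volume lvm create --osd-id 12 --data /dev/sdX
    ceph osd unset noout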

[ceph-users] luminous: osds continue to go down because of the heartbeat timeout

2020-04-01 Thread linghucongsong
Hi all! Thanks for reading this msg. I have one ceph cluster installed with ceph v12.2.12. It ran well for about half a year. Last week we added another two machines to this ceph cluster. Then all the OSDs became unstable. The OSD async messages complain they cannot heartbeat to each other. But the

[ceph-users] RGW Multi-site Issue

2020-04-01 Thread Zhenshi Zhou
Hi, I am new to RGW and am trying to deploy a multisite cluster in order to sync data from one cluster to another. My source zone is the default zone in the default zonegroup, structured as below: realm: big-realm |
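
For a secondary zone pulling from an existing realm, the usual shape is roughly the following (URLs, zone names and keys are placeholders):

    radosgw-admin realm pull --url=http://master-rgw:8080 --access-key=<key> --secret=<secret>
    radosgw-admin period pull --url=http://master-rgw:8080 --access-key=<key> --secret=<secret>
    radosgw-admin zone create --rgw-zonegroup=default --rgw-zone=secondary \
        --endpoints=http://secondary-rgw:8080 --access-key=<key> --secret=<secret>
    radosgw-admin period update --commit
    systemctl restart ceph-radosgw.target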

[ceph-users] Re: Replace OSD node without remapping PGs

2020-04-01 Thread Eugen Block
Yeah, I should have mentioned the swap-bucket option. We couldn't use that because we actually didn't swap anything but moved the old hosts to a different root and we keep them for erasure coding pools. Zitat von Anthony D'Atri : The strategy that Nghia described is inefficient for moving d
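
For reference, the swap-bucket variant mentioned above swaps two CRUSH buckets in place, so the new host inherits the old host's position without remapping (host names are placeholders):

    ceph osd crush swap-bucket old-node01 new-node01 --yes-i-really-mean-it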