Re: [ceph-users] ceph df space usage confusion - balancing needed?

2018-10-26 Thread Oliver Freyermuth
On 27.10.18 at 04:12, Linh Vu wrote: > Should be fine as long as your "mgr/balancer/max_misplaced" is reasonable. I > find the default value of 0.05 decent enough, although from experience it > seems to behave like 0.05% rather than the 5% suggested here:  > http://docs.ceph.com/docs/luminous/mgr/balance
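For reference, a minimal sketch of adjusting that throttle on Luminous via the config-key interface the linked page describes (the value is the fraction of PGs allowed to be misplaced at once, so 0.05 means 5%; the value below is illustrative):

# allow at most 3% of PGs to be misplaced at any one time
ceph config-key set mgr/balancer/max_misplaced .03
ceph config-key get mgr/balancer/max_misplaced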

Re: [ceph-users] Lost machine with MON and MDS

2018-10-26 Thread Martin Verges
Hello, did you lose the only mon as well? If yes, restoring it is not easy, but possible. The mds is no problem; just reinstall it. -- Martin Verges Managing director Mobile: +49 174 9335695 E-Mail: martin.ver...@croit.io Chat: https://t.me/MartinVerges croit GmbH, Freseniusstr. 31h, 81247 Munich CEO: M
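If the lost mon was the only one, a hedged sketch of the recovery Martin alludes to, rebuilding the mon store from the surviving OSDs as described in the Luminous disaster-recovery docs (paths and keyring location are illustrative):

# run against each OSD's data path with that OSD stopped, accumulating into one store
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --op update-mon-db --mon-store-path /tmp/mon-store
# rebuild the store using a keyring that holds the admin and mon keys, then copy it into place on the new mon
ceph-monstore-tool /tmp/mon-store rebuild -- --keyring /path/to/admin.keyring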

Re: [ceph-users] Client new version than server?

2018-10-26 Thread Martin Verges
Hello, in my opinion it is not a problem. It could be a problem across major releases (read the release notes to check for incompatibilities), but minor version differences shouldn't be a problem. In most environments I know, different client versions connect to the same cluster. -- Martin Verges
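A quick, non-invasive way to see which versions and feature bits are actually talking to the cluster (both commands exist from Luminous onward):

ceph versions   # daemon versions, per component
ceph features   # feature bits of currently connected clients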

Re: [ceph-users] Lost machine with MON and MDS

2018-10-26 Thread Yan, Zheng
ceph-mds stores all its data in the object store; you just need to create a new ceph-mds on another machine. On Sat, Oct 27, 2018 at 1:40 AM Maiko de Andrade wrote: > Hi, > > I have 3 machines with ceph configured with cephfs. But I lost one machine, > just with mon and mds. Is it possible to recover cephfs? If
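A minimal sketch of standing up that replacement daemon, assuming a manual deployment; the mds name "newhost" and the cap set are illustrative:

mkdir -p /var/lib/ceph/mds/ceph-newhost
ceph auth get-or-create mds.newhost mon 'allow profile mds' mds 'allow *' osd 'allow *' -o /var/lib/ceph/mds/ceph-newhost/keyring
chown -R ceph:ceph /var/lib/ceph/mds/ceph-newhost
systemctl start ceph-mds@newhost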

[ceph-users] Client new version than server?

2018-10-26 Thread Andre Goree
I wanted to ask for thoughts/guidance on the case of running a newer version of Ceph on a client than the version of Ceph that is running on the server. E.g., I have a client machine running Ceph 12.2.8, while the server is running 12.2.4. Is this a terrible idea? My thoughts are to more th

[ceph-users] Lost machine with MON and MDS

2018-10-26 Thread Maiko de Andrade
Hi, I have 3 machines with ceph configured with cephfs. But I lost one machine, the one with just the mon and mds. Is it possible to recover cephfs? If yes, how? ceph: Ubuntu 16.05.5 (lost this machine) - mon - mds - osd ceph-osd-1: Ubuntu 16.05.5 - osd ceph-osd-2: Ubuntu 16.05.5 - osd []´s Maiko de Andrade MAX B

Re: [ceph-users] Large omap objects - how to fix ?

2018-10-26 Thread Florian Engelmann
Hi, hijacking the hijacker! Sorry!

radosgw-admin bucket reshard --bucket somebucket --num-shards 8
*** NOTICE: operation will not remove old bucket index objects ***
*** these will need to be removed manually ***
tenant:
bucket name: somebucket
old bucket instance id: cb1594b
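A hedged sketch of the manual cleanup that notice asks for, assuming the default index pool; the placeholder stands for the full old bucket instance id printed above (index shard objects are named .dir.<instance-id>.<shard>):

rados -p default.rgw.buckets.index ls | grep '<old-instance-id>'
# only after verifying nothing references the old instance any more
rados -p default.rgw.buckets.index rm '.dir.<old-instance-id>.0'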

Re: [ceph-users] Ceph mds memory leak while replay

2018-10-26 Thread Yan, Zheng
Reset the source code and apply the attached patch; it should resolve the memory issue. Good luck. Yan, Zheng On Fri, Oct 26, 2018 at 2:41 AM Johannes Schlueter wrote: > Hello, > > os: ubuntu bionic lts > ceph v12.2.7 luminous (on one node we updated to ceph-mds 12.2.8 with no > luck) > 2 m
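For anyone following along, a sketch of applying such a patch to a Luminous source tree; the patch file name is illustrative, since the attachment is not reproduced in this digest:

git checkout v12.2.7
git apply mds-replay-fix.patch   # illustrative name for the attached patch
./do_cmake.sh && cd build && make ceph-mds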

Re: [ceph-users] Large omap objects - how to fix ?

2018-10-26 Thread Alexandru Cucu
Hi, sorry to hijack this thread. I have a similar issue, also with 12.2.8, recently upgraded from Jewel. In my case all buckets are within limits:

# radosgw-admin bucket limit check | jq '.[].buckets[].fill_status' | uniq
"OK"
# radosgw-admin bucket limit check | jq '.[].buckets[].objec
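When the limit check looks clean, a hedged way to find which index objects actually carry the large omaps, assuming the default RGW index pool name:

for obj in $(rados -p default.rgw.buckets.index ls); do
  echo "$(rados -p default.rgw.buckets.index listomapkeys $obj | wc -l) $obj"
done | sort -rn | head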

Re: [ceph-users] Migrate/convert replicated pool to EC?

2018-10-26 Thread David Turner
It is indeed adding a placement target, not removing or replacing the pool. The get/put wouldn't be a rados or even a ceph command; you would do it through an S3 client. On Fri, Oct 26, 2018, 9:38 AM Matthew Vernon wrote: > Hi, > > On 26/10/2018 12:38, Alexandru Cucu wrote: > > > Have a look at
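A minimal sketch of that rewrite with s3cmd (bucket and object names are illustrative); re-uploading makes the object land in whatever placement target now applies:

s3cmd get s3://mybucket/myobject /tmp/myobject
s3cmd put /tmp/myobject s3://mybucket/myobject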

Re: [ceph-users] Migrate/convert replicated pool to EC?

2018-10-26 Thread Matthew Vernon
Hi, On 26/10/2018 12:38, Alexandru Cucu wrote: > Have a look at this article: > https://ceph.com/geen-categorie/ceph-pool-migration/ Thanks; that all looks pretty hairy, especially for a large pool (ceph df says 1353T / 428,547,935 objects)... ...so something a bit more controlled/gradual and

Re: [ceph-users] Migrate/convert replicated pool to EC?

2018-10-26 Thread Matthew Vernon
Hi, On 25/10/2018 17:57, David Turner wrote: > There are no tools to migrate in either direction between EC and > Replica. You can't even migrate an EC pool to a new EC profile. Oh well :-/ > With RGW you can create a new data pool and new objects will be written > to the new pool. If your objec
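A hedged sketch of the pool half of that approach, creating an EC data pool that a new placement target could point at (profile parameters and names are illustrative):

ceph osd erasure-code-profile set rgw-ec k=4 m=2 crush-failure-domain=host
ceph osd pool create default.rgw.buckets.data.ec 256 256 erasure rgw-ec
ceph osd pool application enable default.rgw.buckets.data.ec rgw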

[ceph-users] Large omap objects - how to fix ?

2018-10-26 Thread Ben Morrice
Hello all, After a recent Luminous upgrade (now running 12.2.8 with all OSDs migrated to bluestore; we upgraded from 11.2.0, which ran filestore) I am currently experiencing the warning 'large omap objects'. I know this is related to large buckets in radosgw, and luminous supports 'dynamic shard
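To check what dynamic sharding is doing, a sketch assuming a Luminous RGW; the admin socket path is illustrative:

radosgw-admin reshard list   # buckets currently queued for resharding
ceph daemon /var/run/ceph/ceph-client.rgw.gateway1.asok config show | grep rgw_dynamic_resharding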

Re: [ceph-users] Migrate/convert replicated pool to EC?

2018-10-26 Thread Alexandru Cucu
Hi, Have a look at this article: https://ceph.com/geen-categorie/ceph-pool-migration/ --- Alex Cucu On Thu, Oct 25, 2018 at 7:31 PM Matthew Vernon wrote: > > Hi, > > I thought I'd seen that it was possible to migrate a replicated pool to > being erasure-coded (but not the converse); but I'm fai

Re: [ceph-users] ceph df space usage confusion - balancing needed?

2018-10-26 Thread Oliver Freyermuth
Dear Cephalopodians, thanks for all your feedback! I finally "pushed the button" and let upmap run for ~36 hours. Previously, we had ~63 % usage of our CephFS with only 50 % raw usage; now we see only 53.77 % usage. That's as close as I expect things to ever get, and we gained about 70 Ti
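For anyone wanting to reproduce this, a minimal sketch of turning the upmap balancer on under Luminous:

ceph osd set-require-min-compat-client luminous   # upmap needs luminous-capable clients
ceph balancer mode upmap
ceph balancer on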

Re: [ceph-users] RGW how to delete orphans

2018-10-26 Thread Florian Engelmann
Hi, we've got the same problem here. Our 12.2.5 RadosGWs crashed (unnoticed by us) about 30,000 times with ongoing multipart uploads. After a couple of days we ended up with: xx-1.rgw.buckets.data 6 N/A N/A 116TiB 87.22 17.1TiB 36264870 36.26M
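The usual starting point for hunting those down is the orphan scan; the job id is illustrative, and the scan is expensive on a pool this size:

radosgw-admin orphans find --pool=xx-1.rgw.buckets.data --job-id=orphans1
radosgw-admin orphans list-jobs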

Re: [ceph-users] Ceph mds memory leak while replay

2018-10-26 Thread Yan, Zheng
On Fri, Oct 26, 2018 at 3:53 PM Johannes Schlueter wrote: > > Hello, > thanks for the reply. > Before the restart there was HEALTH OK and for a few moments "slow request". > > Maybe helpful: > # > Events by type: > COMMITED: 12188 > EXPORT: 196 > IMPORTFINISH: 197 > IMPORTSTART: 197 > OP

Re: [ceph-users] ceph df space usage confusion - balancing needed?

2018-10-26 Thread Konstantin Shalygin
upmap has been amazing and balanced my clusters far better than anything else I've ever seen. I would go so far as to say that upmap can achieve a perfect balance. Upmap is awesome. I ran it on our new cluster before we started ingesting data, so that the PG count is balanced on all OSDs. G

Re: [ceph-users] Ceph mds memory leak while replay

2018-10-26 Thread Johannes Schlueter
Hello, thanks for the reply. Before the restart there was HEALTH_OK and, for a few moments, "slow request". Maybe helpful:

# cephfs-journal-tool event get summary
Events by type:
COMMITED: 12188
EXPORT: 196
IMPORTFINISH: 197
IMPORTSTART: 197
OPEN: 28096
SESSION: 2
SESSIONS: 64
SLAVEUPD
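Before attempting further replays, a hedged safety step: export the journal so it can be restored if replay keeps failing (output file name is illustrative):

cephfs-journal-tool journal export /root/mds-journal.backup.bin
cephfs-journal-tool journal inspect   # checks integrity without modifying anything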

Re: [ceph-users] Ceph mds memory leak while replay

2018-10-26 Thread Yan, Zheng
On Fri, Oct 26, 2018 at 2:41 AM Johannes Schlueter wrote: > > Hello, > > os: ubuntu bionic lts > ceph v12.2.7 luminous (on one node we updated to ceph-mds 12.2.8 with no luck) > 2 mds and 1 backup mds > > we just experienced a problem while restarting a mds. As it has begun to > replay the journa