[ceph-users] EC Metadata Pool Storage

2018-10-31 Thread Ashley Merrick
Hello, I have a small EC pool I am using with RBD to store a bunch of large files attached to some VMs for personal storage use. Currently I have the EC metadata pool on some SSDs. I have noticed that even though the EC pool has TBs of data in it, the metadata pool is only in the 2 MB range. My question …
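
For reference, that split is expected when an image is created with --data-pool: only the RBD header, object map, and other bookkeeping objects live in the replicated pool, while all data objects go to the EC pool, which is why the "metadata" pool stays tiny. A minimal sketch of that setup, with hypothetical pool and image names:

  # replicated pool for RBD metadata, EC pool for the data objects
  ceph osd pool create rbd-meta 64 64 replicated
  ceph osd pool create rbd-ec 64 64 erasure
  ceph osd pool set rbd-ec allow_ec_overwrites true   # required for RBD on EC
  rbd create --size 1T --data-pool rbd-ec rbd-meta/bigimage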

Re: [ceph-users] Client newer version than server?

2018-10-31 Thread Konstantin Shalygin
I wanted to ask for thoughts/guidance on the case of running a newer version of Ceph on a client than the version of Ceph that is running on the server. E.g., I have a client machine running Ceph 12.2.8, while the server is running 12.2.4. Is this a terrible idea? My thoughts are to more thorou
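
A quick way to see exactly what is running on each side before testing the mismatch (Luminous and later):

  ceph versions      # per-daemon versions across the cluster (mon, mgr, osd, ...)
  ceph --version     # version of the locally installed client binaries
  ceph features      # feature bits reported by currently connected clients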

Re: [ceph-users] add monitors - not working

2018-10-31 Thread Joao Eduardo Luis
On 10/31/2018 04:48 PM, Steven Vacaroaia wrote: > On the monitor that works I noticed this  > > mon.mon01@0(leader) e1 handle_probe ignoring fsid > d01a0b47-fef0-4ce8-9b8d-80be58861053 != 8e7922c9-8d3b-4a04-9a8a-e0b0934162df > > Where is that fsid ( 8e7922 ) coming from ? monmap. somehow one
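
A sketch of how to confirm that, assuming the problem monitor is called mon02 (stop the daemon before extracting):

  systemctl stop ceph-mon@mon02
  ceph-mon -i mon02 --extract-monmap /tmp/monmap
  monmaptool --print /tmp/monmap      # shows the fsid and member monitors
  # if the fsid is wrong, a corrected map can be injected with --inject-monmap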

[ceph-users] Priority for backfilling misplaced and degraded objects

2018-10-31 Thread Jonas Jelten
Hello! My cluster currently has this health state: 2018-10-31 21:20:13.694633 mon.lol [WRN] Health check update: 39010709/192173470 objects misplaced (20.300%) (OBJECT_MISPLACED) 2018-10-31 21:20:13.694684 mon.lol [WRN] Health check update: Degraded data redundancy: 1624786/192173470 objects de
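
If specific degraded PGs should jump the queue, something along these lines can help (PG ids and values are placeholders):

  ceph pg force-recovery 2.1a 2.3f     # prioritise recovery of degraded PGs
  ceph pg force-backfill 2.7c          # prioritise backfill of misplaced PGs
  ceph tell 'osd.*' injectargs '--osd-max-backfills 2'   # raise overall backfill concurrency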

Re: [ceph-users] add monitors - not working

2018-10-31 Thread Steven Vacaroaia
On the monitor that works I noticed this mon.mon01@0(leader) e1 handle_probe ignoring fsid d01a0b47-fef0-4ce8-9b8d-80be58861053 != 8e7922c9-8d3b-4a04-9a8a-e0b0934162df Where is that fsid ( 8e7922 ) coming from ? Steven On Wed, 31 Oct 2018 at 12:45, Steven Vacaroaia wrote: > Hi, > > I've a

Re: [ceph-users] Balancer module not balancing perfectly

2018-10-31 Thread Steve Taylor
I think I pretty well have things figured out at this point, but I'm not sure how to proceed. The config-key settings were not effective because I had not restarted the active mgr after setting them. Once I restarted the mgr the settings became effective. Once I had the config-key settings wor
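
For anyone following along, a rough sketch of the sequence described here (the config-key names are those used by the Luminous/Mimic balancer module and should be treated as assumptions; the mgr name is a placeholder):

  ceph config-key set mgr/balancer/max_misplaced 0.01
  ceph config-key set mgr/balancer/upmap_max_deviation 1
  ceph mgr fail mgr01          # fail over the active mgr so the keys take effect
  ceph balancer status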

[ceph-users] add monitors - not working

2018-10-31 Thread Steven Vacaroaia
Hi, I've added 2 more monitors to my cluster but they are not joining the cluster. The service is up and ceph.conf is the same, so what am I missing? ceph-deploy install ceph-deploy mon create ceph-deploy add I then manually changed ceph.conf to contain the following. Then I push it to all cluster members
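
For comparison, the usual sequence for joining extra monitors with ceph-deploy is "mon add" rather than "mon create" (hostnames here are placeholders):

  ceph-deploy install mon02 mon03
  ceph-deploy mon add mon02      # needs public_network in ceph.conf, or pass --address
  ceph-deploy mon add mon03
  ceph quorum_status --format json-pretty   # check that the new mons joined quorum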

Re: [ceph-users] Migrate/convert replicated pool to EC?

2018-10-31 Thread Matthew Vernon
Hi, On 10/26/18 2:55 PM, David Turner wrote: It is indeed adding a placement target and not removing it replacing the pool. The get/put wouldn't be a rados or even ceph command, you would do it through an s3 client. Which is an interesting idea, but presumably there's no way of knowing which S
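
A naive sketch of the get/put approach being discussed, assuming s3cmd and a placeholder bucket name (keys containing whitespace would need more care, and whether the re-put actually lands in the new placement is exactly what is being questioned here):

  bucket=mybucket
  for key in $(s3cmd ls --recursive "s3://$bucket" | awk '{print $4}'); do
      s3cmd get --force "$key" /tmp/obj    # download the object
      s3cmd put /tmp/obj "$key"            # re-upload so the new write follows the new placement
  done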

Re: [ceph-users] crush rules not persisting

2018-10-31 Thread Steven Vacaroaia
Never mind ... a bit of reading was enough to point me to "osd_crush_update_on_start": "true" Thanks Steven On Wed, 31 Oct 2018 at 10:31, Steven Vacaroaia wrote: > Hi, > I have created a separate root for my ssd drives > All works well but a reboot ( or restart of the services) wipes out all my >
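
That is the setting that stops OSDs from re-registering their default crush location on every restart; shown here both as a ceph.conf entry and (on Mimic) via the central config store:

  # /etc/ceph/ceph.conf on the OSD hosts
  [osd]
  osd crush update on start = false

  # or, on Mimic (13.2.x), via the monitors' config database
  ceph config set osd osd_crush_update_on_start false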

Re: [ceph-users] ceph-mds failure replaying journal

2018-10-31 Thread Jon Morby
although that said, I’ve just noticed this crash this morning 2018-10-31 14:26:00.522 7f0cf53f5700 -1 /build/ceph-13.2.1/src/mds/CDir.cc: In function 'void CDir::fetch(MDSInternalContextBase*, std::string_view, bool)' thread 7f0cf53f5700 time 2018-10-31 14:26:00.485647 /build/ceph-13.2.1/src/mds
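
Before attempting any journal repair it is worth taking a backup and inspecting it first; a sketch assuming a single filesystem named cephfs with MDS rank 0:

  cephfs-journal-tool --rank=cephfs:0 journal export /root/mds-journal-backup.bin
  cephfs-journal-tool --rank=cephfs:0 journal inspect
  cephfs-journal-tool --rank=cephfs:0 event recover_dentries summary   # write recoverable dentries back into the metadata store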

[ceph-users] crush rules not persisting

2018-10-31 Thread Steven Vacaroaia
Hi, I have created a separate root for my ssd drives. All works well, but a reboot (or restart of the services) wipes out all my changes. How can I persist changes to crush rules? Here are some details. Initial / default - this is what I am getting after a restart / reboot. If I just do that on o
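
For reference, a minimal sketch of building a separate ssd root and a rule that uses it (bucket, host, and rule names are hypothetical):

  ceph osd crush add-bucket ssd root
  ceph osd crush add-bucket node1-ssd host
  ceph osd crush move node1-ssd root=ssd
  ceph osd crush create-or-move osd.6 1.0 root=ssd host=node1-ssd
  ceph osd crush rule create-replicated ssd-rule ssd host

Without "osd crush update on start = false" (see the reply above), the create-or-move placement will be rewritten to the default location the next time the OSDs restart.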

Re: [ceph-users] ceph.conf mon_max_pg_per_osd not recognized / set

2018-10-31 Thread Steven Vacaroaia
So, moving the entry from [mon] to [global] worked. This is a bit confusing - I used to put all my configuration settings starting with mon_ under [mon]. Steven On Wed, 31 Oct 2018 at 10:13, Steven Vacaroaia wrote: > I do not think so ... or maybe I did not understand what you are saying > There is n
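
What that looks like in practice (the mon id is a placeholder; querying an OSD daemon only shows options the OSD itself has loaded):

  # /etc/ceph/ceph.conf
  [global]
  mon_max_pg_per_osd = 400

  # verify on a monitor
  ceph daemon mon.mon01 config get mon_max_pg_per_osd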

Re: [ceph-users] ceph.conf mon_max_pg_per_osd not recognized / set

2018-10-31 Thread Steven Vacaroaia
I do not think so ... or maybe I did not understand what you are saying. There is no such key listed in the mgr config: ceph config-key list [ "config-history/1/", "config-history/2/", "config-history/2/+mgr/mgr/dashboard/server_addr", "config-history/3/", "config-history/3/+mgr/mgr/prometh

Re: [ceph-users] ceph.conf mon_max_pg_per_osd not recognized / set

2018-10-31 Thread ceph
Isn't this a mgr variable ? On 10/31/2018 02:49 PM, Steven Vacaroaia wrote: > Hi, > > Any idea why different value for mon_max_pg_per_osd is not "recognized" ? > I am using mimic 13.2.2 > > Here is what I have in /etc/ceph/ceph.conf > > > [mon] > mon_allow_pool_delete = true > mon_osd_min_dow

[ceph-users] ceph.conf mon_max_pg_per_osd not recognized / set

2018-10-31 Thread Steven Vacaroaia
Hi, Any idea why a different value for mon_max_pg_per_osd is not "recognized"? I am using Mimic 13.2.2. Here is what I have in /etc/ceph/ceph.conf: [mon] mon_allow_pool_delete = true mon_osd_min_down_reporters = 1 mon_max_pg_per_osd = 400 Checking the value with ceph daemon osd.6 config show | gr

Re: [ceph-users] Using FC with LIO targets

2018-10-31 Thread Frédéric Nass
Hi Mike, Thank you for your answer. I thought maybe FC would just be the transport protocol to LIO and all would be fine, but I forgot the tcmu-runner part, which I suppose is where some iSCSI specifics were hard-coded. FC was interesting in that (when already set up) it would avoid having t

Re: [ceph-users] Filestore to Bluestore migration question

2018-10-31 Thread Alfredo Deza
On Wed, Oct 31, 2018 at 8:28 AM Hayashida, Mami wrote: > > Thank you for your replies. So, if I use the method Hector suggested (by > creating PVs, VGs etc. first), can I add the --osd-id parameter to the > command as in > > ceph-volume lvm prepare --bluestore --data hdd0/data0 --block.db ss

Re: [ceph-users] Filestore to Bluestore migration question

2018-10-31 Thread Hayashida, Mami
Thank you for your replies. So, if I use the method Hector suggested (by creating PVs, VGs etc. first), can I add the --osd-id parameter to the command as in ceph-volume lvm prepare --bluestore --data hdd0/data0 --block.db ssd/db0 --osd-id 0 ceph-volume lvm prepare --bluestore --data hdd1/data
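
Putting the pieces together, the sequence under discussion would look roughly like this (whether --osd-id may safely re-use the old Filestore ids is exactly the open question in this thread):

  ceph-volume lvm prepare --bluestore --data hdd0/data0 --block.db ssd/db0 --osd-id 0
  ceph-volume lvm prepare --bluestore --data hdd1/data1 --block.db ssd/db1 --osd-id 1
  ceph-volume lvm activate --all      # mounts the tmpfs dirs and starts the ceph-osd units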

Re: [ceph-users] Filestore to Bluestore migration question

2018-10-31 Thread Alfredo Deza
On Wed, Oct 31, 2018 at 5:22 AM Hector Martin wrote: > > On 31/10/2018 05:55, Hayashida, Mami wrote: > > I am relatively new to Ceph and need some advice on Bluestore migration. > > I tried migrating a few of our test cluster nodes from Filestore to > > Bluestore by following this > > (http://docs

Re: [ceph-users] ceph-bluestore-tool failed

2018-10-31 Thread Igor Fedotov
You might want to try the --path option instead of the --dev one. On 10/31/2018 7:29 AM, ST Wong (ITSC) wrote: Hi all, We deployed a testing Mimic Ceph cluster using BlueStore. We can’t run ceph-bluestore-tool on an OSD; it fails with the following error: --- # ceph-bluestore-tool show-label --dev device
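
That is, pointing the tool at the mounted OSD directory rather than the raw device (the OSD id here is a placeholder):

  ceph-bluestore-tool show-label --path /var/lib/ceph/osd/ceph-0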

Re: [ceph-users] Large omap objects - how to fix ?

2018-10-31 Thread Alexandru Cucu
Hi, I didn't know that auto resharding does not remove old instances. I wrote my own cleanup script, as I had discovered this before reading your message. Not very well tested, but here it is: for bucket in $(radosgw-admin bucket list | jq -r .[]); do bucket_id=$(radosgw-admin metadata get buck
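
For anyone tempted to do the same, a rough sketch of that kind of cleanup loop (the general idea only; the jq paths are assumptions based on the standard radosgw-admin JSON output, so test carefully before pointing it at production):

  for bucket in $(radosgw-admin bucket list | jq -r '.[]'); do
      # id of the instance the bucket entrypoint currently points at
      current=$(radosgw-admin metadata get bucket:"$bucket" | jq -r '.data.bucket.bucket_id')
      for inst in $(radosgw-admin metadata list bucket.instance | jq -r '.[]' | grep "^${bucket}:"); do
          inst_id=${inst#*:}
          if [ "$inst_id" != "$current" ]; then
              radosgw-admin bi purge --bucket="$bucket" --bucket-id="$inst_id"
              radosgw-admin metadata rm bucket.instance:"$inst"
          fi
      done
  done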

Re: [ceph-users] Filestore to Bluestore migration question

2018-10-31 Thread Hector Martin
On 31/10/2018 05:55, Hayashida, Mami wrote: I am relatively new to Ceph and need some advice on Bluestore migration. I tried migrating a few of our test cluster nodes from Filestore to Bluestore by following this (http://docs.ceph.com/docs/luminous/rados/operations/bluestore-migration/) as the
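
The LVM side of that approach might look something like this (device paths, VG/LV names, and the DB size are all assumptions, chosen to match the names used elsewhere in the thread):

  pvcreate /dev/sdb /dev/sdc           # /dev/sdb = data HDD, /dev/sdc = shared DB SSD
  vgcreate hdd0 /dev/sdb
  vgcreate ssd /dev/sdc
  lvcreate -l 100%FREE -n data0 hdd0   # one big LV for the OSD data
  lvcreate -L 40G -n db0 ssd           # a DB LV carved out of the SSD
  # then: ceph-volume lvm prepare --bluestore --data hdd0/data0 --block.db ssd/db0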