[ceph-users] How to configure placement_targets?

2016-01-07 Thread Yang Honggang
Hello, how do I configure placement_targets? Which step is wrong in the steps below? I want to use different pools to hold users' buckets. Two pools are created: one is '.bj-dz.rgw.buckets', the other is '.bj-dz.rgw.buckets.hot'. 1. Two placement targets are added to the region map. Targets
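For reference, a minimal sketch of the pre-Jewel radosgw workflow for this (the 'hot-placement' target name and the index pool name below are illustrative assumptions, not taken from the original message):

    radosgw-admin region get > region.json
    # add the new target under "placement_targets", e.g.
    #   { "name": "hot-placement", "tags": [] }
    radosgw-admin region set < region.json
    radosgw-admin zone get > zone.json
    # map the target to its pools under "placement_pools", e.g.
    #   { "key": "hot-placement",
    #     "val": { "index_pool": ".bj-dz.rgw.buckets.index",
    #              "data_pool": ".bj-dz.rgw.buckets.hot" } }
    radosgw-admin zone set < zone.json
    radosgw-admin regionmap update

After restarting radosgw, a user whose default_placement points at 'hot-placement' (or a bucket created with that placement target) should land in the hot pool.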

Re: [ceph-users] CentOS 7.2, Infernalis, preparing osd's and partprobe issues.

2016-01-07 Thread Wade Holler
I commented out partprobe and everything seems to work just fine. If someone has experience with why this is very bad, please advise. Make sure you know about http://tracker.ceph.com/issues/13833 also. P.S. We are running btrfs in the test jig and had to add the "-f" to the btrfs_args for ceph-disk
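If that btrfs_args tweak was made via ceph.conf rather than by editing ceph-disk itself (an assumption on my part; the mount options are only illustrative), it would look roughly like this:

    [osd]
    # force mkfs.btrfs to overwrite any stale filesystem signature on the partition
    osd mkfs options btrfs = -f
    osd mkfs type = btrfs
    osd mount options btrfs = rw,noatime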

Re: [ceph-users] double rebalance when removing osd

2016-01-07 Thread Steve Taylor
If I’m not mistaken, marking an osd out will remap its placement groups temporarily, while removing it from the crush map will remap the placement groups permanently. Additionally, other placement groups from other osds could get remapped permanently when an osd is removed from the crush map. I
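For what it's worth, the sequence often suggested to avoid paying for two separate rebalances is to drain the OSD via its CRUSH weight first and only then remove it (osd.12 below is a placeholder):

    # single data movement: take the OSD's CRUSH weight to zero and let it drain
    ceph osd crush reweight osd.12 0
    # once all PGs are active+clean, stop the ceph-osd daemon on its host, then
    # finish the removal; no further data movement should occur
    ceph osd out 12
    ceph osd crush remove osd.12
    ceph auth del osd.12
    ceph osd rm 12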

Re: [ceph-users] double rebalance when removing osd

2016-01-07 Thread Wido den Hollander
On 01/07/2016 05:08 PM, Steve Taylor wrote: > If I’m not mistaken, marking an osd out will remap its placement groups > temporarily, while removing it from the crush map will remap the > placement groups permanently. Additionally, other placement groups from > other osds could get remapped permanen

Re: [ceph-users] Any suggestion to deal with slow request?

2016-01-07 Thread Robert LeBlanc
What is the file system on the OSDs? Anything interesting in iostat/atop? What are the drives backing the OSDs? A few more details would be helpful. - Robert LeBlanc PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1
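The sort of data being asked for can be gathered along these lines (osd.3 and the two-second interval are placeholders):

    # which OSDs are the slow requests stuck on?
    ceph health detail
    # on an implicated OSD's host: what is it working on right now?
    ceph daemon osd.3 dump_ops_in_flight
    # per-device utilisation and latency while the slowness is happening
    iostat -xmt 2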

Re: [ceph-users] Ceph & Hbase

2016-01-07 Thread Gregory Farnum
On Thu, Jan 7, 2016 at 5:56 AM, Jose M wrote: > Hi, > > Following Yan's feeling that something could be wrong with the ceph > configuration, I started again from scratch, this time configuring ceph with > three nodes (one mon, two osds). > > After starting hbase, it seems it moves forward a few more

[ceph-users] Ceph Architecture and File Management

2016-01-07 Thread James Gallagher
Hi, I'm looking to create a Ceph storage architecture to store files. I'm particularly interested in the metadata segregation, so I would implement the Ceph Storage Cluster to start, with CephFS added once that's done. I'm wondering what the best approach to storing data would be; for example, conside
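As a point of reference, the CephFS-on-top-of-the-cluster layout described here boils down to something like the following once an MDS is running (pool names, PG counts and the mon hostname are illustrative):

    # metadata is segregated from file data by using two pools
    ceph osd pool create cephfs_metadata 128
    ceph osd pool create cephfs_data 512
    ceph fs new cephfs cephfs_metadata cephfs_data
    # clients can then mount the filesystem, e.g. with the kernel client
    mount -t ceph mon-host:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret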

Re: [ceph-users] Ceph Architecture and File Management

2016-01-07 Thread John Spray
On Thu, Jan 7, 2016 at 9:28 PM, James Gallagher wrote: > Hi, > > I'm looking to create a Ceph storage architecture to store files, I'm > particularly interested in the metadata segregation so would be implementing > the Ceph Storage Cluster to start, with CephFS added once done. I'm > wondering wh

[ceph-users] ceph osd tree output

2016-01-07 Thread Wade Holler
Sometimes my ceph osd tree output is wrong, i.e. osds show up under the wrong hosts. Anyone else have this issue? I have seen this on Infernalis and Jewel. Thanks Wade

Re: [ceph-users] ceph osd tree output

2016-01-07 Thread Shinobu Kinjo
Can you share the output with us? Rgds, Shinobu - Original Message - From: "Wade Holler" To: "ceph-users" Sent: Friday, January 8, 2016 7:29:07 AM Subject: [ceph-users] ceph osd tree output Sometimes my ceph osd tree output is wrong. Ie. Wrong osds on the wrong hosts ? Anyone else ha

Re: [ceph-users] In production - Change osd config

2016-01-07 Thread Tyler Bishop
http://sudomakeinstall.com/uncategorized/ceph-make-configuration-changes-in-realtime-without-restart Tyler Bishop Chief Technical Officer 513-299-7108 x10 tyler.bis...@beyondhosting.net
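The linked approach amounts to injecting settings into running daemons; a minimal example (the option names and values are only illustrative, and some options still require a restart to take effect):

    # push a new value into every running OSD without restarting it
    ceph tell osd.* injectargs '--osd_max_backfills 1 --osd_recovery_max_active 1'
    # confirm the running value via the admin socket on the OSD's host
    ceph daemon osd.0 config get osd_max_backfills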

Re: [ceph-users] ceph osd tree output

2016-01-07 Thread Wade Holler
Sure. Apologies for all the text: we have 12 nodes for OSDs, 15 OSDs per node, but I will only include a sample:

ceph osd tree | head -35
ID  WEIGHT    TYPE NAME      UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 130.98450 root default
-2   5.82153     host cpn1
 4   0.72769         osd.4

Re: [ceph-users] Any suggestion to deal with slow request?

2016-01-07 Thread Jevon Qiao
Hi Robert, Thank you for the prompt response. The OSDs are built on XFS and the drives are Intel SSDs. Each SSD is split into two partitions: one for the journal, the other for data. There is no alignment issue with the partitions. When the slow request message is logged, the workload is quite
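With journal and data sharing each SSD, it may also help to see where the slow ops actually spend their time; roughly (osd.5 and /dev/sdb are placeholders):

    # on the host of an OSD that reported slow requests
    ceph daemon osd.5 dump_historic_ops
    # watch the shared SSD while the slowness is happening
    iostat -xmt 2 /dev/sdb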

Re: [ceph-users] Any suggestion to deal with slow request?

2016-01-07 Thread Christian Balzer
Hello, On Fri, 8 Jan 2016 12:22:04 +0800 Jevon Qiao wrote: > Hi Robert, > > Thank you for the prompt response. > > The OSDs are built on XFS and the drives are Intel SSDs. Each SSD is > parted into two partitions, one is for journal, the other is for data. > There is no alignment issue for

Re: [ceph-users] KVM problems when rebalance occurs

2016-01-07 Thread Josef Johansson
Hi, How did you benchmark? I would recommend having a lot of MySQL instances with a lot of heavily utilised InnoDB tables. During a recovery you should at least see the latency rise. Maybe use one of the tools here: https://dev.mysql.com/downloads/benchmarks.html Regards, Josef On 7 Jan 2016 16:
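One way to generate that kind of InnoDB load inside a guest is sysbench's legacy OLTP test (flags differ between sysbench versions; the database credentials and sizes are placeholders):

    # create a test table, then hammer it with mixed reads/writes during a rebalance
    sysbench --test=oltp --mysql-db=sbtest --mysql-user=sbtest --mysql-password=secret \
             --oltp-table-size=1000000 prepare
    sysbench --test=oltp --mysql-db=sbtest --mysql-user=sbtest --mysql-password=secret \
             --oltp-table-size=1000000 --num-threads=16 --max-time=600 --max-requests=0 run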

Re: [ceph-users] ceph osd tree output

2016-01-07 Thread Mart van Santen
Hi, Have you by any chance disabled automatic crushmap updates in your ceph config? osd crush update on start = false If that is the case and you move disks between hosts, they won't update their position/host in the crushmap, even when the crushmap no longer reflects reality. Regards, Mart
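For reference, that setting and the manual follow-up it implies look roughly like this (the weight and host name are lifted from Wade's sample output purely as an illustration):

    [osd]
    # OSDs will no longer re-register their host/position in the CRUSH map at startup
    osd crush update on start = false

    # with that disabled, a relocated OSD has to be placed by hand, e.g.
    ceph osd crush create-or-move osd.4 0.72769 root=default host=cpn1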

[ceph-users] Shared cache and regular pool

2016-01-07 Thread Christian Balzer
Hello, just in case I'm missing something obvious, there is no reason a pool aptly called "ssd" can't be used simultaneously as a regular RBD pool and for cache tiering, right? Regards, Christian -- Christian Balzer Network/Systems Engineer ch...@gol.com Global

Re: [ceph-users] Shared cache and regular pool

2016-01-07 Thread Burkhard Linke
Hi, On 01/08/2016 08:07 AM, Christian Balzer wrote: Hello, just in case I'm missing something obvious, there is no reason a pool called aptly "ssd" can't be used simultaneously as a regular RBD pool and for cache tiering, right? AFAIK the cache configuration is stored in the pool entry itself (
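For context, a pool is turned into a cache tier with commands like the following, and the cache behaviour (hit sets, size targets) is configured on the cache pool itself, which is the per-pool state Burkhard refers to (the "rbd" base pool name and the values are placeholders):

    # attach "ssd" as a writeback cache in front of a base pool
    ceph osd tier add rbd ssd
    ceph osd tier cache-mode ssd writeback
    ceph osd tier set-overlay rbd ssd
    # cache behaviour is set on the "ssd" pool entry itself
    ceph osd pool set ssd hit_set_type bloom
    ceph osd pool set ssd target_max_bytes 100000000000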