Re: [ceph-users] basic questions about pool

2014-07-15 Thread Karan Singh
Hi Pragya, let me try to answer these. 1# The decision is based on your use case (performance, reliability). If you need high performance out of your cluster, the deployer will create a pool on SSDs and assign this pool to applications that require higher I/O. For example: if you integrate op
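
A minimal sketch of what such an SSD pool might look like on the CLI (assuming a CRUSH root named "ssd" already exists in the map; the pool name, PG count, and ruleset id are placeholders, and the id should be checked with `ceph osd crush rule dump`):

  $ ceph osd crush rule create-simple ssd-rule ssd host
  $ ceph osd pool create ssd-pool 128 128
  $ ceph osd pool set ssd-pool crush_ruleset 1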

Re: [ceph-users] mon doesn't start

2014-07-15 Thread Joao Eduardo Luis
(re-cc'ing list without log file) Did you change anything in the cluster aside from Ubuntu's version? Were any upgrades performed? If so, from which version to which version, and on which monitors? -Joao On 07/15/2014 03:18 AM, Richard Zheng wrote: Thanks Joao. Not sure if the log contain

Re: [ceph-users] mon doesn't start

2014-07-15 Thread Richard Zheng
We used to have Ubuntu 12.04 with Ceph 0.80.1; the upgrade simply moves to Ubuntu 14.04. We have three servers which run 1 mon and 10 OSDs each. The other 2 servers are OK. Before the upgrade we didn't enable upstart and just manually started the mon on all three nodes, e.g. start ceph-mon id=storag
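
For reference, with upstart managing the daemons the monitor would normally be controlled like this (the id "storage1" is assumed from the hostnames in the thread):

  $ sudo start ceph-mon id=storage1
  $ sudo status ceph-mon id=storage1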

Re: [ceph-users] ceph osd crush tunables optimal AND add new OSD at the same time

2014-07-15 Thread Andrija Panic
Hi Sage, since this problem is tunables-related, do we need to expect the same behavior or not when we do regular data rebalancing caused by adding/removing OSDs? I guess not, but I would like your confirmation. I'm already on optimal tunables, but I'm afraid to test this by e.g. shutting down 1 OSD.

Re: [ceph-users] mon doesn't start

2014-07-15 Thread Richard Zheng
I found the problem. On storage1, dpkg shows Ceph is running 0.80.1 while the other 2 are running 0.80.2. The upgrade followed the same method, so I don't understand why. I have tried apt-get update and apt-get upgrade on storage1. How do I bring it to 0.80.2? On Tue, Jul 15, 2014 at 12:30 AM, Richa
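
One way to see which version apt considers current on a node, and to force just the Ceph packages forward (a sketch; the exact package list may vary with the setup):

  $ apt-cache policy ceph
  $ sudo apt-get update
  $ sudo apt-get install --only-upgrade ceph ceph-common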

[ceph-users] Ceph with Multipath ISCSI

2014-07-15 Thread Andrei Mikhailovsky
Hello guys, I was wondering if there has been any progress on getting multipath iSCSI to play nicely with Ceph? I've followed the howto and created a single-path iSCSI target over a Ceph RBD with XenServer. However, it would be nice to have built-in failover using iSCSI multipath to another Ceph mon or

Re: [ceph-users] qemu image create failed

2014-07-15 Thread Sebastien Han
Can you connect to your Ceph cluster? You can pass options on the command line like this: $ qemu-img create -f rbd rbd:instances/vmdisk01:id=leseb:conf=/etc/ceph/ceph-leseb.conf 2G Cheers. Sébastien Han Cloud Engineer "Always give 100%. Unless you're giving blood." Phone: +33 (0)1 49 70 99
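
A quick way to confirm the client can actually reach the cluster with those same credentials (reusing the id and conf file from the example above):

  $ ceph --id leseb --conf /etc/ceph/ceph-leseb.conf health
  $ qemu-img info rbd:instances/vmdisk01:id=leseb:conf=/etc/ceph/ceph-leseb.conf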

Re: [ceph-users] Placing different pools on different OSDs in the same physical servers

2014-07-15 Thread Marc
Hi, to avoid confusion I would name the "host" entries in the CRUSH map differently. Make sure these host names can be resolved to the correct boxes though (/etc/hosts on all the nodes). You're also missing a new rule entry (also shown in the link you mentioned). Lastly, and this is *extremely* i
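
The usual round trip for hand-editing the CRUSH map, including adding the missing rule entry, looks roughly like this (file names are placeholders):

  $ ceph osd getcrushmap -o crush.bin
  $ crushtool -d crush.bin -o crush.txt
  # edit crush.txt: rename the duplicate "host" entries, add the new rule
  $ crushtool -c crush.txt -o crush.new
  $ ceph osd setcrushmap -i crush.new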

[ceph-users] Working ISCSI target guide

2014-07-15 Thread Drew Weaver
Does anyone have a guide or reproducible method of getting multipath iSCSI working in front of Ceph? Even if it just means having two front-end iSCSI targets, each with access to the same underlying Ceph volume? This seems like a super popular topic. Thanks, -Drew
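
One commonly discussed approach is to map the same RBD image on two gateway hosts and export it from each with tgt; a rough sketch, repeated on both gateways (IQN, pool, and image names are invented):

  $ rbd map vol1 --pool iscsi        # appears as e.g. /dev/rbd0
  $ tgtadm --lld iscsi --op new --mode target --tid 1 -T iqn.2014-07.com.example:ceph-vol1
  $ tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 -b /dev/rbd0
  $ tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL

Note that for multipath to work the initiator must see the same SCSI identity from both gateways, so the LUN serial/ID would need to be set identically on each (tgt exposes parameters for this).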

Re: [ceph-users] Working ISCSI target guide

2014-07-15 Thread Drew Weaver
One other question: if you are going to be using Ceph as a storage system for KVM virtual machines, does it even matter if you use iSCSI or not? Meaning that if you are just going to use LVM and have several hypervisors sharing that same VG, then using iSCSI isn't really a requirement unless you a

[ceph-users] create a image that stores information in erasure-pool failed

2014-07-15 Thread qixiaof...@chinacloud.com.cn
Hi all, I created an erasure-coded pool, then tried to create a 1 GB image named foo that stores its data in that pool; however, the action failed with the following output: root@mon1:~# rbd create foo --size 1024 --pool ecpool rbd: create error: (95) Operation not supported 2014-07-13 10:32:55.311

Re: [ceph-users] Working ISCSI target guide

2014-07-15 Thread Andrei Mikhailovsky
Drew, I would not use iSCSI with KVM; instead, I would use the built-in RBD support. However, you would use something like NFS/iSCSI if you were to connect other hypervisors to a Ceph backend. Having failover capabilities is important here )) Andrei -- Andrei Mikhailovsky Director Arhont Infor
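
With KVM/QEMU the built-in RBD support means attaching the image directly, with no iSCSI layer at all; roughly (pool/image, user id, and cache mode are illustrative):

  $ qemu-system-x86_64 -m 1024 -drive format=raw,file=rbd:rbd/vm1:id=admin,cache=writeback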

[ceph-users] ceph-fuse couldn't be connect.

2014-07-15 Thread Jaemyoun Lee
Hi All, I am using Ceph 0.80.1 on Ubuntu 14.04 on KVM. However, I cannot connect to the MON from a client using ceph-fuse. On the client, I installed ceph-fuse 0.80.1 and loaded the fuse module, but I think something is wrong. The result is: # modprobe fuse (no output) # lsmod | grep fuse (no out
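
When ceph-fuse seems to hang like this, running it in the foreground with client debugging usually shows where it gets stuck (monitor address and mountpoint are placeholders):

  $ sudo ceph-fuse -f -m 192.168.0.10:6789 /mnt/mycephfs --debug-client 10 --debug-ms 1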

Re: [ceph-users] ceph osd crush tunables optimal AND add new OSD at the same time

2014-07-15 Thread Sage Weil
On Tue, 15 Jul 2014, Andrija Panic wrote: > Hi Sage, since this problem is tunables-related, do we need to expect > the same behavior or not when we do regular data rebalancing caused by > adding/removing OSDs? I guess not, but would like your confirmation. > I'm already on optimal tunables, but
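
For whatever data movement does occur, the usual way to soften the impact is to throttle backfill/recovery before making the change (the values here are just conservative examples):

  $ ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'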

Re: [ceph-users] Working ISCSI target guide

2014-07-15 Thread fastsync
Hi, there may be two ways, but CephFS is not production-ready. 1. You can use a file stored in CephFS as a target. 2. There is rbd.ko, which maps an RBD image as a block device that you can assign to a target; I have not tested this yet. Good luck. At 2014-07-15 09:18:53, "Drew Weaver" wrote: O
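
The second option sketched above (untested, as the author says) would look something like this (pool and image names invented):

  $ modprobe rbd
  $ rbd map myimage --pool rbd
  $ ls /dev/rbd*        # the mapped block device to hand to the target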

[ceph-users] the differences between snap and clone in terms of implement

2014-07-15 Thread fastsync
Hi all, I took a glance at the Ceph code in cls_rbd.cc. It seems that snap and clone both do not read/write any data; they just add some keys and values, even for RBDs in different pools. Am I missing something? Or could you explain the implementation of snap and clone in more depth? Thanks very much.

Re: [ceph-users] how to plan the ceph storage architecture when i reuse old PC Server

2014-07-15 Thread Gregory Farnum
It's generally recommended that you use disks in JBOD mode rather than involving RAID. -Greg On Monday, July 14, 2014, 不坏阿峰 wrote: > I have installed and tested Ceph on VMs before; I know a bit about > configuration and installation. > Now I want to use physical PC servers to install Ceph and do some test

Re: [ceph-users] RGW: Get object ops performance problem

2014-07-15 Thread Gregory Farnum
Are you saturating your network bandwidth? That's what it sounds like. :) -Greg On Monday, July 14, 2014, baijia...@126.com wrote: > hi, everyone! > > I am testing RGW GET object ops. When I use 100 threads to get one and the same > object, I find that performance is very good; meanResponseTime is 0.1 s.
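
A quick way to check whether the NICs are the bottleneck during the 100-thread run (sar comes from the sysstat package):

  $ sar -n DEV 1        # compare rxkB/s and txkB/s against the link speed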

Re: [ceph-users] create a image that stores information in erasure-pool failed

2014-07-15 Thread Gregory Farnum
You can't use erasure-coded pools directly with RBD. They're only suitable for use with RGW or as the base pool for a replicated cache pool, and you need to be very careful/specific with the configuration. I believe this is well-documented, so check it out! :) -Greg On Saturday, July 12, 2014, qix
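
For the record, the Firefly-era commands for fronting an erasure-coded pool with a replicated cache pool look roughly like this (using the ecpool from the original post plus a hypothetical "cachepool"; hit_set and sizing options are omitted but matter in practice):

  $ ceph osd pool create cachepool 128 128
  $ ceph osd tier add ecpool cachepool
  $ ceph osd tier cache-mode cachepool writeback
  $ ceph osd tier set-overlay ecpool cachepool
  $ rbd create foo --size 1024 --pool ecpool    # should now go through the cache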

[ceph-users] [ANN] ceph-deploy 1.5.9 released

2014-07-15 Thread Alfredo Deza
Hi All, there is a new release of ceph-deploy, the easy deployment tool for Ceph. There is a minor cleanup when ceph-deploy disconnects from remote hosts that was creating some tracebacks, and there is a new flag for the `new` subcommand that allows specifying an fsid for the cluster. The full l
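
Presumably the new flag is used along these lines (the flag name and hosts are assumptions based on the announcement text):

  $ ceph-deploy new --fsid a7f64266-0894-4f1e-a635-d0aeaca0e993 mon1 mon2 mon3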

[ceph-users] 0.80.1 to 0.80.3: strange osd log messages

2014-07-15 Thread Dzianis Kahanovich
After upgrading from 0.80.1 to 0.80.3, I see many recurring messages in every OSD log: 2014-07-15 19:44:48.292839 7fa5a659f700 0 osd.5 62377 crush map has features 2199057072128, adjusting msgr requires for mons (constant part: "crush map has features 2199057072128, adjusting msgr requires for mons"

Re: [ceph-users] ceph-fuse couldn't be connect.

2014-07-15 Thread Gregory Farnum
What did ceph-fuse output to its log file or the command line? On Tuesday, July 15, 2014, Jaemyoun Lee wrote: > Hi All, > > I am using Ceph 0.80.1 on Ubuntu 14.04 on KVM. However, I cannot connect > to the MON from a client using ceph-fuse. > > On the client, I installed ceph-fuse 0.80.1 and

Re: [ceph-users] 0.80.1 to 0.80.3: strange osd log messages

2014-07-15 Thread Dzianis Kahanovich
Dzianis Kahanovich wrote: After upgrading from 0.80.1 to 0.80.3, I see many recurring messages in every OSD log: 2014-07-15 19:44:48.292839 7fa5a659f700 0 osd.5 62377 crush map has features 2199057072128, adjusting msgr requires for mons (constant part: "crush map has features 2199057072128, adjusting

Re: [ceph-users] the differences between snap and clone in terms of implement

2014-07-15 Thread Gregory Farnum
Okay, first the basics: cls_rbd.cc operates only on rbd header objects, so it's doing coordinating activities, not the actual data handling. When somebody does an operation on an rbd image, they put some data in the header object so that everybody else can coordinate (if it's open) or continue (if
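
At the CLI level that metadata-only coordination is why these operations are effectively instantaneous; for example (a format-2 image is required for cloning, and the names are illustrative):

  $ rbd snap create rbd/parent@snap1
  $ rbd snap protect rbd/parent@snap1
  $ rbd clone rbd/parent@snap1 otherpool/child   # parent and child can live in different pools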

Re: [ceph-users] ceph-fuse couldn't be connect.

2014-07-15 Thread Jaemyoun Lee
There is no output because ceph-fuse fell into an infinite loop, as I explained below. Where can I find the log file of ceph-fuse? Jae. On 2014. 7. 16. at 1:59 AM, "Gregory Farnum" wrote: > What did ceph-fuse output to its log file or the command line? > > On Tuesday, July 15, 2014, Jaemyoun Lee

Re: [ceph-users] 0.80.1 to 0.80.3: strange osd log messages

2014-07-15 Thread Dzianis Kahanovich
Dzianis Kahanovich wrote: Dzianis Kahanovich wrote: After upgrading from 0.80.1 to 0.80.3, I see many recurring messages in every OSD log: 2014-07-15 19:44:48.292839 7fa5a659f700 0 osd.5 62377 crush map has features 2199057072128, adjusting msgr requires for mons (constant part: "crush map has featur

Re: [ceph-users] ceph-fuse couldn't be connect.

2014-07-15 Thread Gregory Farnum
On Tue, Jul 15, 2014 at 10:15 AM, Jaemyoun Lee wrote: > There is no output because ceph-fuse fell into an infinite loop, as > I explained below. > > Where can I find the log file of ceph-fuse? It defaults to /var/log/ceph, but it may be empty. I realize the task may have hung, but I'm prett

[ceph-users] v0.80.4 Firefly released

2014-07-15 Thread Sage Weil
This Firefly point release fixes a potential data corruption problem when ceph-osd daemons run on top of XFS and service Firefly librbd clients. A recently added allocation hint that RBD utilizes triggers an XFS bug on some kernels (Linux 3.2, and likely others) that leads to data corruption and

Re: [ceph-users] scrub error on firefly

2014-07-15 Thread Sage Weil
Hi Randy, this is the same kernel we reproduced the issue on as well. Sam traced this down to the XFS allocation hint ioctl we recently started using for RBD. We've just pushed out a v0.80.4 Firefly release that disables the hint by default. It should stop the inconsistencies from popping up
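
After upgrading, the new default can be checked on a running OSD via the admin socket (assuming the option is the filestore XFS extsize setting; osd.0 is a placeholder):

  $ ceph daemon osd.0 config get filestore_xfs_extsize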

Re: [ceph-users] basic questions about pool

2014-07-15 Thread pragya jain
Thank you very much, Karan, for your explanation. Regards, Pragya Jain On Tuesday, 15 July 2014 1:53 PM, Karan Singh wrote: > Hi Pragya > Let me try to answer these. > 1# The decision is based on your use case (performance, reliability). If > you need high performance out of yo

Re: [ceph-users] 403-Forbidden error using radosgw

2014-07-15 Thread Wido den Hollander
On 07/16/2014 07:58 AM, lakshmi k s wrote: Hello Ceph Users - My Ceph setup consists of 1 admin node, 3 OSDs, 1 radosgw and 1 client. One of the OSD nodes also hosts the monitor. Ceph health is OK and I have verified the radosgw runtime. I have created S3 and Swift users using radosgw-admin. But whe
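
A typical way to exercise the gateway once the Swift subuser exists, with placeholder endpoint and credentials:

  $ swift -A http://gateway.example.com/auth/1.0 -U testuser:swift -K 'secret-key' list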