Re: [ceph-users] SSD Hardware recommendation

2015-03-19 Thread Christian Balzer
On Wed, 18 Mar 2015 08:59:14 +0100 Josef Johansson wrote: > Hi, > > > On 18 Mar 2015, at 05:29, Christian Balzer wrote: > > > > > > Hello, > > > > On Wed, 18 Mar 2015 03:52:22 +0100 Josef Johansson wrote: > [snip] > >> We thought of doing a cluster with 3 servers, and any recommendation of > >

Re: [ceph-users] scrubbing for a long time and not finished

2015-03-19 Thread Xinze Chi
Currently, users do not know when some PGs have been scrubbing for a long time. I wonder whether we could give some warning if that happens (defined by a threshold such as osd_scrub_max_time). It would tell the user that something may be wrong in the cluster. 2015-03-17 21:21 GMT+08:00 池信泽 : > > On Tue, Mar 17, 2015 at 10:01 AM, Xinze C
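In the meantime, a rough way to spot this by hand (a sketch; dump formats vary by release):

    # list PGs currently in a scrubbing state
    ceph pg dump pgs_brief | grep scrubbing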

[ceph-users] Segfault after modifying CRUSHMAP

2015-03-19 Thread gian
Hi guys, I was creating new buckets and adjusting the CRUSH map when 1 monitor stopped replying. The scenario is: 2 servers, 2 MONs, 21 OSDs per server. Error message in the mon.log: NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this. I uploaded the stderr to: ht

Re: [ceph-users] SSD Hardware recommendation

2015-03-19 Thread Christian Balzer
Hello, On Wed, 18 Mar 2015 11:41:17 +0100 Francois Lafont wrote: > Hi, > > Christian Balzer wrote : > > > Consider what you think your IO load (writes) generated by your > > client(s) will be, multiply that by your replication factor, divide by > > the number of OSDs, that will give you the ba
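To make that rule of thumb concrete (illustrative numbers, not from the original message): 2000 client write IOPS x 3 replicas / 24 OSDs = 250 write IOPS landing on each OSD, before any journal double-write overhead is counted.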

[ceph-users] Code for object deletion

2015-03-19 Thread khyati joshi
Can anyone tell me where the code for deleting objects with the command "rados rm test-object-1 --pool=data" can be found, for ceph version 0.80.5? Thanks.
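For reference, the command maps onto a librados remove call. A minimal Python sketch of the equivalent operation (the config path is assumed; pool and object names are taken from the command above):

    import rados

    # connect to the cluster (default config path assumed)
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('data')    # the --pool=data from the command
    ioctx.remove_object('test-object-1')  # same effect as `rados rm`
    ioctx.close()
    cluster.shutdown()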

[ceph-users] Readonly cache tiering and rbd.

2015-03-19 Thread Matthijs Möhlmann
Hi, From the documentation: Cache Tier readonly: Read-only Mode: When admins configure tiers with readonly mode, Ceph clients write data to the backing tier. On read, Ceph copies the requested object(s) from the backing tier to the cache tier. St
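For context, a read-only tier is attached roughly like this (a sketch; 'rbd' and 'cache' are assumed pool names):

    ceph osd tier add rbd cache              # attach the cache pool to the backing pool
    ceph osd tier cache-mode cache readonly  # put the tier in read-only mode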

[ceph-users] Ceiling on number of PGs in an OSD

2015-03-19 Thread Sreenath BH
Hi, Is there a ceiling on the number of placement groups per OSD beyond which steady-state and/or recovery performance will start to suffer? Example: I need to create a pool across 750 OSDs (25 OSDs per server, 50 servers). The PG calculator gives me 65536 placement groups with 300 PGs pe
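As a sanity check on those numbers (illustrative arithmetic, not from the original message): 65536 PGs x 3 replicas / 750 OSDs = ~262 PG copies per OSD, already well above the ~100 PGs per OSD commonly recommended at the time.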

Re: [ceph-users] scrubbing for a long time and not finished

2015-03-19 Thread Sage Weil
On Thu, 19 Mar 2015, Xinze Chi wrote: > Currently, users do not know when some PGs have been scrubbing for a long > time. I wonder whether we could give some warning if that happens (defined > by a threshold such as osd_scrub_max_time). > It would tell the user something may be wrong in the cluster. This should be pretty straightforward

Re: [ceph-users] Readonly cache tiering and rbd.

2015-03-19 Thread Gregory Farnum
On Thu, Mar 19, 2015 at 4:46 AM, Matthijs Möhlmann wrote: > Hi, > > From the documentation: > > Cache Tier readonly: > > Read-only Mode: When admins configure tiers with readonly mode, Ceph > clients write data to the backing tier. On read, C

[ceph-users] Issue with Ceph mons starting up- leveldb store

2015-03-19 Thread Andrew Diller
Hello: We have a cuttlefish (0.61.9) 192-OSD cluster that we are trying to get back to quorum. We have 2 mon nodes up and ready; we just need this 3rd. We moved the data dir (/var/lib/ceph/mon) over from one of the good ones to this 3rd node, but it won't start - we see this error, after which no further logging occurs

[ceph-users] cciss driver package for RHEL7

2015-03-19 Thread O'Reilly, Dan
I understand there's a KMOD_CCISS package available. However, I can't find it for download. Anybody have any ideas? Thanks! Dan O'Reilly, UNIX Systems Administration, 9601 S. Meridian Blvd., Englewood, CO 80112, 720-514-6293

Re: [ceph-users] Issue with Ceph mons starting up- leveldb store

2015-03-19 Thread Steffen W Sørensen
On 19/03/2015, at 15.50, Andrew Diller wrote: > We moved the data dir over (/var/lib/ceph/mon) from one of the good ones to > this 3rd node, but it won't start- we see this error, after which no further > logging occurs: > > 2015-03-19 06:25:05.395210 7fcb57f1c7c0 -1 failed to create new leveldb store

Re: [ceph-users] cciss driver package for RHEL7

2015-03-19 Thread Steffen W Sørensen
> On 19/03/2015, at 15.57, O'Reilly, Dan wrote: > > I understand there’s a KMOD_CCISS package available. However, I can’t find > it for download. Anybody have any ideas? Oh, I believe HP swapped cciss for the hpsa (Smart Array) driver long ago… so maybe just download the latest cciss source and then c

Re: [ceph-users] cciss driver package for RHEL7

2015-03-19 Thread O'Reilly, Dan
The problem with using the hpsa driver is that I need to install RHEL 7.1 on a Proliant system using the SmartArray 400 controller. Therefore, I need a driver that supports it to even install RHEL 7.1. RHEL 7.1 doesn’t generically recognize that controller out of the box. From: Steffen W Søre

[ceph-users] Mapping OSD to physical device

2015-03-19 Thread Colin Corr
Greetings Cephers, I have been lurking on this list for a while, but this is my first inquiry. I have been playing with Ceph for the past 9 months and am in the process of deploying a production Ceph cluster. I am seeking advice on an issue that I have encountered. I do not believe it is a Ceph

Re: [ceph-users] Mapping OSD to physical device

2015-03-19 Thread Robert LeBlanc
Udev already provides some of this for you. Look in /dev/disk/by-*. You can reference drives by UUID, id or path (for SAS/SCSI/FC/iSCSI/etc) which will provide some consistency across reboots and hardware changes. On Thu, Mar 19, 2015 at 1:10 PM, Colin Corr wrote: > Greetings Cephers, > > I have
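For example, the persistent names can be listed directly; each entry is a symlink back to the current kernel name (paths illustrative):

    ls -l /dev/disk/by-id/    # serial/WWN-based names
    ls -l /dev/disk/by-path/  # bus/slot-based names (SAS/SCSI/FC/iSCSI)
    ls -l /dev/disk/by-uuid/  # filesystem UUIDs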

[ceph-users] FastCGI and RadosGW issue?

2015-03-19 Thread Potato Farmer
Hi, I am running into an issue uploading to a bucket over an s3 connection to ceph. I can create buckets just fine. I just can't create a key and copy data to it. Command that causes the error: >>> key.set_contents_from_string("testing from string") I encounter the following error:
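For anyone trying to reproduce this, the boto-era workflow looks roughly like the sketch below (endpoint, credentials, and calling format are assumptions, not taken from the original message):

    import boto
    import boto.s3.connection

    conn = boto.connect_s3(
        aws_access_key_id='ACCESS_KEY',      # placeholder
        aws_secret_access_key='SECRET_KEY',  # placeholder
        host='radosgw.example.com',          # assumed RGW endpoint
        is_secure=False,
        calling_format=boto.s3.connection.OrdinaryCallingFormat(),
    )
    bucket = conn.create_bucket('test-bucket')  # bucket creation works for the poster
    key = bucket.new_key('test-object')
    key.set_contents_from_string('testing from string')  # the call that fails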

Re: [ceph-users] FastCGI and RadosGW issue?

2015-03-19 Thread Yehuda Sadeh-Weinraub
- Original Message - > From: "Potato Farmer" > To: ceph-users@lists.ceph.com > Sent: Thursday, March 19, 2015 12:26:41 PM > Subject: [ceph-users] FastCGI and RadosGW issue? > > > > Hi, > > > > I am running into an issue uploading to a bucket over an s3 connection to > ceph. I can c

Re: [ceph-users] FastCGI and RadosGW issue?

2015-03-19 Thread Potato Farmer
Yehuda, You rock! Thank you for the suggestion. That fixed the issue. :) -Original Message- From: Yehuda Sadeh-Weinraub [mailto:yeh...@redhat.com] Sent: Thursday, March 19, 2015 12:45 PM To: Potato Farmer Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] FastCGI and RadosGW iss

Re: [ceph-users] Cache Tier Flush = immediate base tier journal sync?

2015-03-19 Thread Gregory Farnum
On Wed, Mar 18, 2015 at 11:10 PM, Christian Balzer wrote: > > Hello, > > On Wed, 18 Mar 2015 11:05:47 -0700 Gregory Farnum wrote: > >> On Wed, Mar 18, 2015 at 8:04 AM, Nick Fisk wrote: >> > Hi Greg, >> > >> > Thanks for your input and completely agree that we cannot expect >> > developers to full

[ceph-users] PGs issue

2015-03-19 Thread Bogdan SOLGA
Hello, everyone! I have created a Ceph cluster (v0.87.1-1) using the info from the 'Quick deploy' page, with the following setup:
- 1 x admin / deploy node;
- 3 x OSD and MON nodes;
- each OSD node has 2 x 8 GB HDDs;
The set

Re: [ceph-users] Cache Tier Flush = immediate base tier journal sync?

2015-03-19 Thread Nick Fisk
I think this could be part of what I am seeing. I found this post from back in 2013: http://comments.gmane.org/gmane.comp.file-systems.ceph.devel/12083 which seems to describe a workaround for the behaviour I am seeing. The constant small block IO I was seeing looks like it was either t

Re: [ceph-users] PGs issue

2015-03-19 Thread Nick Fisk
> -Original Message- > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of > Bogdan SOLGA > Sent: 19 March 2015 20:51 > To: ceph-users@lists.ceph.com > Subject: [ceph-users] PGs issue > > Hello, everyone! > I have created a Ceph cluster (v0.87.1-1) using the info f

[ceph-users] OSD + Flashcache + udev + Partition uuid

2015-03-19 Thread Nick Fisk
I'm looking at trialling OSDs with a small flashcache device over them to hopefully reduce the impact of metadata updates when doing small block IO. Inspiration from here:- http://comments.gmane.org/gmane.comp.file-systems.ceph.devel/12083 One thing I suspect will happen is that when the OSD no
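For reference, the stock udev rules recognize OSD data partitions by their GPT partition type GUID, which a flashcache device-mapper node does not carry, so a custom rule would need to match the mapper device instead. A hypothetical sketch (rule file name, DM_NAME pattern, and activation command are all assumptions):

    # /etc/udev/rules.d/96-flashcache-osd.rules (hypothetical)
    # hand flashcache-backed OSD devices to ceph-disk when they appear
    KERNEL=="dm-*", ENV{DM_NAME}=="flashcache-osd*", \
        RUN+="/usr/sbin/ceph-disk activate /dev/mapper/$env{DM_NAME}"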

Re: [ceph-users] OSD + Flashcache + udev + Partition uuid

2015-03-19 Thread Gregory Farnum
On Thu, Mar 19, 2015 at 2:41 PM, Nick Fisk wrote: > I'm looking at trialling OSD's with a small flashcache device over them to > hopefully reduce the impact of metadata updates when doing small block io. > Inspiration from here:- > > http://comments.gmane.org/gmane.comp.file-systems.ceph.devel/120

Re: [ceph-users] Mapping OSD to physical device

2015-03-19 Thread Colin Corr
On 03/19/2015 12:27 PM, Robert LeBlanc wrote: > Udev already provides some of this for you. Look in /dev/disk/by-*. > You can reference drives by UUID, id or path (for > SAS/SCSI/FC/iSCSI/etc) which will provide some consistency across > reboots and hardware changes. Thanks for the quick responses

Re: [ceph-users] Mapping OSD to physical device

2015-03-19 Thread Robert LeBlanc
I don't use ceph-deploy, but using ceph-disk for creating the OSDs automatically uses the by-partuuid reference for the journals (at least I recall only using /dev/sdX for the journal reference, which is what I have in my documentation). Since ceph-disk does all the partitioning, it automatically f
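A quick way to confirm this on a running OSD (path and UUID are illustrative):

    ls -l /var/lib/ceph/osd/ceph-0/journal
    # expected output is a symlink such as:
    # journal -> /dev/disk/by-partuuid/3f2c5e8a-...  (example value)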

[ceph-users] 'pgs stuck unclean ' problem

2015-03-19 Thread houguanghua
Dear all, Ceph 0.72.2 is deployed on three hosts, but the cluster's status is HEALTH_WARN. The status is as follows:

    # ceph -s
      cluster e25909ed-25d9-42fd-8c97-0ed31eec6194
       health HEALTH_WARN 768 pgs degraded; 768 pgs stuck unclean; recovery 2/3 objects degraded (66.667%)
       monmap e3

[ceph-users] hadoop namenode not starting due to bindException while deploying hadoop with cephFS

2015-03-19 Thread Ridwan Rashid
Hi, I have a 5 node ceph(v0.87) cluster and am trying to deploy hadoop with cephFS. I have installed hadoop-1.1.1 on the nodes and changed the conf/core-site.xml file according to the ceph documentation http://ceph.com/docs/master/cephfs/hadoop/ but after changing the file the namenode is not starting
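For reference, the cephfs-hadoop settings in conf/core-site.xml look roughly like this for Hadoop 1.x (the monitor address is an assumed placeholder):

    <property>
      <name>fs.default.name</name>
      <value>ceph://mon-host:6789/</value>  <!-- assumed monitor address -->
    </property>
    <property>
      <name>fs.ceph.impl</name>
      <value>org.apache.hadoop.fs.ceph.CephFileSystem</value>
    </property>

A BindException at namenode startup usually means the address it tries to bind is already in use or otherwise unavailable, so it is worth checking which address the namenode picks up after the fs.default.name change.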

[ceph-users] Server Specific Pools

2015-03-19 Thread Garg, Pankaj
Hi, I have a Ceph cluster with both ARM and x86 based servers in the same cluster. Is there a way for me to define pools or some logical separation that would allow me to use only 1 set of machines for a particular test? That would make it easy for me to run tests either on x86 or ARM and do some

Re: [ceph-users] Server Specific Pools

2015-03-19 Thread David Burley
Pankaj, You can define them via different crush rules, and then assign a pool to a given crush rule. This is the same in practice as having a node type with all SSDs and another with all spinners. You can read more about how to set this up here: http://ceph.com/docs/master/rados/operations/crush-
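In outline, the commands look like this (bucket, rule, and pool names are assumptions; the rule id must match the rule actually created):

    # create a separate CRUSH root for the ARM hosts and move them under it
    ceph osd crush add-bucket arm-root root
    ceph osd crush move arm-host1 root=arm-root

    # create a rule that draws only from that root, then point a pool at it
    ceph osd crush rule create-simple arm-rule arm-root host
    ceph osd pool set arm-pool crush_ruleset 1  # rule id assumed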

Re: [ceph-users] PGs issue

2015-03-19 Thread Bogdan SOLGA
Hello, Nick! Thank you for your reply! I have tested both with setting the replicas number to 2 and 3, by setting the 'osd pool default size = (2|3)' in the .conf file. Either I'm doing something incorrectly, or they seem to produce the same result. Can you give any troubleshooting advice? I have
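A few commands that help narrow this kind of problem down (pool name 'rbd' is just an example):

    ceph osd dump | grep 'replicated size'  # actual size/min_size of each pool
    ceph osd pool get rbd size              # per-pool check
    ceph pg dump_stuck unclean              # which PGs are stuck, and where
    ceph osd tree                           # confirm OSDs are up and their CRUSH placement

Also note that 'osd pool default size' only applies to pools created after the setting is in place; existing pools keep their size until changed with 'ceph osd pool set <pool> size <n>'.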

[ceph-users] OSD remains down

2015-03-19 Thread Jesus Chavez (jeschave)
There was a blackout and one of my OSDs remains down. I have noticed that the journal partition and data partition are no longer shown, so the device cannot be mounted…

    8  114     5241856  sdh2
    8  128  3906249728  sdi
    8  129  3901005807  sdi1
    8  130     5241856  sdi2
    8  14