Re: [ceph-users] Using Valgrind with Teuthology

2014-08-11 Thread Sarang G
Yes, it does work when the valgrind option is removed. On Mon, Aug 4, 2014 at 7:43 PM, Sage Weil wrote: > On Mon, 4 Aug 2014, Sarang G wrote: > > Hi, > > > > I am configuring a Ceph cluster using teuthology and I want to use Valgrind. > > > > My yaml file contains: > > > > check-locks: false > > > > rol
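The relevant yaml is cut off above; for anyone searching later, the usual way to turn valgrind on in a teuthology job is a per-daemon list of valgrind flags under the ceph overrides. The fragment below is only a hedged sketch (the exact keys depend on the teuthology/ceph-qa-suite version in use), written as a shell snippet appending to a hypothetical job.yaml:

cat >> job.yaml <<'EOF'
overrides:
  ceph:
    valgrind:
      mon: [--tool=memcheck, --leak-check=full]
      osd: [--tool=memcheck]
EOF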

Re: [ceph-users] Fresh deploy of ceph 0.83 has OSD down

2014-08-11 Thread Mark Kirkwood
On 07/08/14 11:06, Mark Kirkwood wrote: Hi, I'm doing a fresh install of ceph 0.83 (src build) to an Ubuntu 14.04 VM using ceph-deploy 1.59. Everything goes well until the osd creation, which fails to start with a journal open error. The steps are shown below (ceph is the deploy target host):
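For anyone hitting the same "journal open" failure, a generic way to inspect and re-initialise the journal on a freshly prepared OSD is sketched below (osd id 0 and the default ceph-deploy layout are assumptions, not details from this thread):

ls -l /var/lib/ceph/osd/ceph-0/journal    # where does the journal symlink point, and does the target exist?
sudo ceph-osd -i 0 --mkjournal            # (re)create the journal while the OSD is stopped
sudo start ceph-osd id=0                  # Ubuntu 14.04 upstart; use 'service ceph start osd.0' on sysvinit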

Re: [ceph-users] Show IOps per VM/client to find heavy users...

2014-08-11 Thread Andrija Panic
Hi Dan, the script provided seems to not work on my ceph cluster :( This is ceph version 0.80.3 I get empty results, on both debug level 10 and the maximum level of 20... [root@cs1 ~]# ./rbd-io-stats.pl /var/log/ceph/ceph-osd.0.log-20140811.gz Writes per OSD: Writes per pool: Writes per PG
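If the OSD log contains no per-op lines at all, the script has nothing to count. A hedged sketch of raising the messenger debug level at runtime (assuming the script parses osd_op entries from the OSD log; note that log volume grows quickly at this level):

ceph tell osd.* injectargs '--debug_ms 1'   # log individual client ops on every OSD
# ...collect logs for a while, then turn it back down
ceph tell osd.* injectargs '--debug_ms 0'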

Re: [ceph-users] Show IOps per VM/client to find heavy users...

2014-08-11 Thread Andrija Panic
[root@cs1 ~]# ./rbd-io-stats.pl /var/log/ceph/ceph-osd.0.log-20140811.gz > Writes per OSD: > Writes per pool: > Writes per PG: > Writes per RBD: > Writes per object: > Writes per length: > . > . > . > > > > > On 8 August 2014 16:01, Dan Van Der Ster > w

Re: [ceph-users] Show IOps per VM/client to find heavy users...

2014-08-11 Thread Dan Van Der Ster
h debug level 10 and the maximum level of 20... [root@cs1 ~]# ./rbd-io-stats.pl /var/log/ceph/ceph-osd.0.log-20140811.gz Writes per OSD: Writes per pool: Writes per PG: Writes per RBD: Writes per object: Writes per length: . . . On 8 August 2014 1

Re: [ceph-users] Fw: external monitoring tools for processes

2014-08-11 Thread Erik Logtenberg
Hi, Be sure to check this out: http://ceph.com/community/ceph-calamari-goes-open-source/ Erik. On 11-08-14 08:50, Irek Fasikhov wrote: > Hi. > > I use ZABBIX with the following script: > [ceph@ceph08 ~]$ cat /etc/zabbix/external/ceph > #!/usr/bin/python > > import sys > import os > import c
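The Python script above is truncated; as a rough illustration of the same idea, a minimal Zabbix external check can be as small as the shell sketch below (it only reports overall health, unlike the fuller script quoted above):

#!/bin/sh
# print 1 if the cluster reports HEALTH_OK, 0 otherwise
if ceph health 2>/dev/null | grep -q HEALTH_OK; then
    echo 1
else
    echo 0
fi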

Re: [ceph-users] Show IOps per VM/client to find heavy users...

2014-08-11 Thread Andrija Panic
> Andrija > > > > On 11 August 2014 12:46, Andrija Panic wrote: > >> Hi Dan, >> >> the script provided seems to not work on my ceph cluster :( >> This is ceph version 0.80.3 >> >> I get empty results, on both debug level 10 and the maximum

[ceph-users] ceph-disk: Error: ceph osd start failed: Command '['/sbin/service', 'ceph', 'start', 'osd.5']' returned non-zero exit status 1

2014-08-11 Thread Yitao Jiang
Hi, I launched a Ceph (version 0.80.5) lab on my laptop with 7 disks for OSDs. Yesterday everything worked fine and I could create new pools and mount them. But after a reboot Ceph is not working; more specifically, the OSDs do not start. The logs are below: [root@cephnode1 ~]# ceph-disk activate-all === osd.5 ===

[ceph-users] Can't export cephfs via nfs

2014-08-11 Thread Micha Krause
Hi, I'm trying to build a CephFS to NFS gateway, but somehow I can't mount the share if it is backed by CephFS: mount ngw01.ceph:/srv/micha /mnt/tmp/ mount.nfs: Connection timed out cephfs mount on the gateway: 10.210.32.11:6789:/ngw on /srv type ceph (rw,relatime,name=cephfs-ngw,secret=,nod
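One thing worth checking when exporting CephFS over the kernel NFS server: knfsd cannot derive a filesystem UUID for CephFS, so the export usually needs an explicit fsid option. A sketch against the path above (the client network and the fsid value are assumptions; the fsid just has to be unique per export):

# /etc/exports on the gateway; fsid is needed because CephFS has no device UUID for nfsd to use
echo '/srv/micha 10.210.0.0/16(rw,no_root_squash,fsid=20,sync)' >> /etc/exports
exportfs -ra    # re-read the exports table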

Re: [ceph-users] Can't export cephfs via nfs

2014-08-11 Thread Pierre BLONDEAU
Hi, The NFS crossmnt option may help you. Regards On 11/08/2014 16:34, Micha Krause wrote: Hi, Im trying to build a cephfs to nfs gateway, but somehow i can't mount the share if it is backed by cephfs: mount ngw01.ceph:/srv/micha /mnt/tmp/ mount.nfs: Connection timed out cephfs mount o

Re: [ceph-users] Can't export cephfs via nfs

2014-08-11 Thread Micha Krause
Hi, > The NFS crossmnt options can help you. Thanks for the suggestion, I tried it, but it makes no difference. Micha Krause

Re: [ceph-users] [Ceph-community] working ceph.conf file?

2014-08-11 Thread O'Reilly, Dan
More information: When the system is booted, for whatever reason udev doesn't seem to find the devices used for OSD. However, once the system comes up, I can perform a "udevadm trigger --action=add" command and all the devices appear. Perhaps some sort of race condition? I am using a 95-ceph-
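For reference, the manual workaround described above boils down to the following (the --subsystem-match filter is optional; it just limits the re-trigger to block devices):

udevadm trigger --action=add --subsystem-match=block   # replay 'add' events for block devices
ceph-disk activate-all                                 # then let ceph-disk mount/start any prepared OSDs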

[ceph-users] Moving Journal to SSD

2014-08-11 Thread Dane Elwell
Hi list, Our current setup has OSDs with their journal sharing the same disk as the data, and we've reached the point we're outgrowing this setup. We're currently vacating disks in order to replace them with SSDs and recreate the OSD journals on the SSDs in a 5:1 ratio of spinners to SSDs. I've r

Re: [ceph-users] Moving Journal to SSD

2014-08-11 Thread Sebastien Han
Hi Dane, If you deployed with ceph-deploy, you will see that the journal is just a symlink. Take a look at /var/lib/ceph/osd//journal The link should point to the first partition of your hard drive disk, so no filesystem for the journal, just a block device. Roughly you should try: create N pa
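A rough sketch of that procedure for a single OSD (osd id 0 and /dev/sdf1 as the new SSD journal partition are placeholders; service commands vary by distro):

ceph osd set noout                                  # avoid rebalancing while the OSD is down
service ceph stop osd.0                             # or: stop ceph-osd id=0 on Ubuntu upstart
ceph-osd -i 0 --flush-journal                       # drain the old journal to the data disk
rm /var/lib/ceph/osd/ceph-0/journal                 # drop the old symlink (or journal file)
ln -s /dev/sdf1 /var/lib/ceph/osd/ceph-0/journal    # point at the new SSD partition
ceph-osd -i 0 --mkjournal                           # initialise the new journal
service ceph start osd.0
ceph osd unset noout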

[ceph-users] OSD Issue

2014-08-11 Thread Jacob Godin
Hi there, Currently having an issue with a Cuttlefish cluster w/ 3 OSDs and 1 MON. When trying to restart an OSD, the cluster became unresponsive to 'rbd export'. Here are some sample OSD logs: OSD we restarted -http://pastebin.com/UUuDdS1V Another OSD - http://pastebin.com/f12r4W2s In an attemp

[ceph-users] Issues with installing 2 node system

2014-08-11 Thread Ojwang, Wilson O (Wilson)
I am new to Ceph and hit the following error while trying to install a 2-node system (admin and one other node) using the quick installation guide from http://ceph.com/docs/master/start/ [root@nfv2 ~]# ceph-deploy install nfv2 nfv3 [ceph_deploy.conf][DEBUG ] found configura

Re: [ceph-users] OSD Issue

2014-08-11 Thread Jacob Godin
We were able to get the cluster back online. The issue stemmed from the MON having a lower epoch than the OSDs. We used ceph osd thrash to bring the MON's epoch up to be >= that of the OSDs, restarted the osd procs, and they began cooperating again. After they completed syncing, we're now running

Re: [ceph-users] OSD Issue

2014-08-11 Thread Jacob Godin
Update #3: Our OSDs all crashed at the same time. Logs are all showing this: http://pastebin.com/ns0McteE On Mon, Aug 11, 2014 at 6:40 PM, Jacob Godin wrote: > We were able to get the cluster back online. The issue stemmed from the > MON having a lower epoch than the OSDs. > > We used ceph osd

[ceph-users] Integrating ceph with cinder-backup

2014-08-11 Thread Sushma R
Hi, I followed the instructions at http://ceph.com/docs/next/rbd/rbd-openstack/ I was able to configure it for glance, cinder and nova. However, when I try to do a backup, I get the error "ERROR: Service cinder-backup could not be found. (HTTP 500)". I installed cinder-block, added the following in c
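For reference, the HTTP 500 usually just means the cinder-backup service itself is not installed or not running. The settings below are the ones from the rbd-openstack guide (the pool and user names are the guide's examples; adjust to your deployment) and go in the [DEFAULT] section of /etc/cinder/cinder.conf before the service is started:

# [DEFAULT] section of /etc/cinder/cinder.conf
backup_driver = cinder.backup.drivers.ceph
backup_ceph_conf = /etc/ceph/ceph.conf
backup_ceph_user = cinder-backup
backup_ceph_pool = backups

After that the service has to be running; the package and service names vary (cinder-backup on Ubuntu, openstack-cinder-backup on RDO).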

Re: [ceph-users] [Ceph-community] working ceph.conf file?

2014-08-11 Thread Andrew Woodward
hrm with cciss try adding rootdelay=90 to your boot options. I'm not sure if it will delay udev for non-root, but I've heard of other people needing the delay for the / mount because the cciss devices might not be ready immediately On Mon, Aug 11, 2014 at 7:54 AM, O'Reilly, Dan wrote: > Mor
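In GRUB2 terms that suggestion looks roughly like this (a sketch; the exact grub regeneration command depends on the distro):

# in /etc/default/grub, append rootdelay=90 to the existing kernel command line, e.g.:
#   GRUB_CMDLINE_LINUX="... rootdelay=90"
update-grub                                    # Debian/Ubuntu
# or: grub2-mkconfig -o /boot/grub2/grub.cfg   # RHEL/CentOS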

Re: [ceph-users] ceph-disk: Error: ceph osd start failed: Command '['/sbin/service', 'ceph', 'start', 'osd.5']' returned non-zero exit status 1

2014-08-11 Thread Craig Lewis
Are the disks mounted? You should have a single mount for each OSD in /var/lib/ceph/osd/ceph-/. If they're not mounted, is there anything complicated about your disks? On Mon, Aug 11, 2014 at 6:32 AM, Yitao Jiang wrote: > Hi, > > I launched a ceph (ceph version 0.80.5) lab on my laptop with 7
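A couple of quick checks along those lines (generic, nothing specific to this cluster):

mount | grep /var/lib/ceph/osd    # is each OSD's data partition mounted?
ceph-disk list                    # what does ceph-disk think each partition is?
ceph-disk activate-all            # retry activation of all prepared OSD partitions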

[ceph-users] best practice of installing ceph(large-scale deployment)

2014-08-11 Thread yuelongguang
Hi all, I am using ceph-rbd with OpenStack as its backend storage. Is there a best practice? 1. How many OSDs and MONs does it need at a minimum, and in what proportion? 2. How do you deploy the network (public, cluster network, ...)? 3. As for performance, what do you do (journal, ...)? 4. anything that promotes

Re: [ceph-users] CRUSH map advice

2014-08-11 Thread Craig Lewis
Your MON nodes are separate hardware from the OSD nodes, right? If so, with replication=2, you should be able to shut down one of the two OSD nodes, and everything will continue working. Since it's for experimentation, I wouldn't deal with the extra hassle of replication=4 and custom CRUSH rules
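If you do stay with two copies, the relevant knobs are just the pool's size and min_size, e.g. (pool name rbd assumed):

ceph osd pool set rbd size 2       # two replicas
ceph osd pool set rbd min_size 1   # keep serving I/O while only one copy is up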

[ceph-users] ceph network

2014-08-11 Thread yuelongguang
Hi all, I know Ceph differentiates networks; mostly it uses the public, cluster, and heartbeat networks. Do the MON and MDS have those networks too? I only know the OSD does. Is there a place that introduces Ceph's networking? Thanks.

Re: [ceph-users] ceph network

2014-08-11 Thread Craig Lewis
Only the OSDs use the cluster network. OSD heartbeats use both networks, to verify connectivity. Check out the Network Configuration Reference: http://ceph.com/docs/master/rados/configuration/network-config-ref/ On Mon, Aug 11, 2014 at 6:30 PM, yuelongguang wrote: > hi,all > i know ceph diffe
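In ceph.conf terms that split looks like the fragment below (the subnets are placeholders); MONs and MDSs only ever bind to the public network:

# /etc/ceph/ceph.conf
[global]
    public network  = 192.168.0.0/24   # MON/MDS/client traffic and the OSD front side
    cluster network = 10.0.0.0/24      # OSD replication and backfill traffic only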

Re: [ceph-users] issues with creating Swift users for radosgw

2014-08-11 Thread debian Only
I am hitting the same problem; maybe it is this bug: http://tracker.ceph.com/issues/9002. But I still cannot access radosgw. root@ceph-radosgw:~# radosgw-admin user create --subuser=testuser:swf0001 --display-name="Test User One" --key-type=swift --access=full { "user_id": "testuser", "display_name": "T
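Once the subuser exists it still needs a Swift secret key before the swift CLI will authenticate; a hedged sketch (the gateway hostname is a placeholder):

radosgw-admin key create --subuser=testuser:swf0001 --key-type=swift --gen-secret
# then test with the secret printed above
swift -A http://radosgw.example.com/auth/1.0 -U testuser:swf0001 -K '<swift_secret>' list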

Re: [ceph-users] Fresh deploy of ceph 0.83 has OSD down

2014-08-11 Thread Mark Kirkwood
On 11/08/14 20:52, Mark Kirkwood wrote: On 07/08/14 11:06, Mark Kirkwood wrote: Hi, I'm doing a fresh install of ceph 0.83 (src build) to an Ubuntu 14.04 VM using ceph-deploy 1.59. Everything goes well until the osd creation, which fails to start with a journal open error. The steps are shown b

[ceph-users] policy cache pool

2014-08-11 Thread Никитенко Виталий
Hi! I cannot understand the meaning of the hit_set_period parameter. Is it the minimum time after which changed data begins to be flushed from the cache to the base pool? Or is it the time after which unchanged data is removed from the cache pool? For example: there is a pool main_pool on 4 OSDs and ssd_pool on SSD di
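As I understand it, hit_set_period is neither of those: it is the length in seconds of one HitSet interval used to record object accesses, while flushing and eviction are driven by the cache_target_* ratios and the cache_min_*_age settings. A hedged sketch of the related knobs on a cache pool (values are illustrative only):

ceph osd pool set ssd_pool hit_set_type bloom
ceph osd pool set ssd_pool hit_set_period 3600            # each HitSet covers one hour of accesses
ceph osd pool set ssd_pool hit_set_count 1                # keep one interval of history
ceph osd pool set ssd_pool cache_target_dirty_ratio 0.4   # begin flushing dirty objects at 40% full
ceph osd pool set ssd_pool cache_target_full_ratio 0.8    # begin evicting clean objects at 80% full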

Re: [ceph-users] best practice of installing ceph(large-scale deployment)

2014-08-11 Thread Craig Lewis
Take a look at Cern's "Scaling Ceph at Cern" slides, as well as Inktank's Hardware Configuration Guide. You need at least 3 MONs for production. You might want m

Re: [ceph-users] CRUSH map advice

2014-08-11 Thread John Morris
On 08/11/2014 08:26 PM, Craig Lewis wrote: Your MON nodes are separate hardware from the OSD nodes, right? Two nodes are OSD + MON, plus a separate MON node. If so, with replication=2, you should be able to shut down one of the two OSD nodes, and everything will continue working. IIUC, the

Re: [ceph-users] best practice of installing ceph(large-scale deployment)

2014-08-11 Thread Robert van Leeuwen
> SSD journals will really help to get the full IOPS out of each disk. > Make sure the SSD has enough write speed to match the OSDs using it. > ie, if your SSDs can write 400MB/s, and the OSDs can write 100MB/s, then you > only want 4 OSDs sharing an SSD for journals. > I don't think looking at r