Re: [ceph-users] kernel BUG at net/ceph/osd_client.c:2103

2013-08-05 Thread Olivier Bonvalet
It's Xen, yes, but no, I didn't try the RBD tap client, for two reasons: - too young to enable it in production - Debian packages don't have the TAP driver. On Monday, 5 August 2013 at 01:43, James Harper wrote: > What VM? If Xen, have you tried the rbd tap client? > > James > > > -Ori

Re: [ceph-users] kernel BUG at net/ceph/osd_client.c:2103

2013-08-05 Thread James Harper
> > It's Xen, yes, but no, I didn't try the RBD tap client, for two > reasons: > - too young to enable it in production > - Debian packages don't have the TAP driver > It works under Wheezy. blktap is available via a dkms package; then just replace the tapdisk with the rbd version and follow the
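A rough sketch of that Wheezy setup, assuming the Debian blktap-dkms and blktap-utils packages plus an rbd-enabled tapdisk build are available (package names and steps are inferred from this thread, not verified instructions):

    apt-get install blktap-dkms blktap-utils   # dkms rebuilds the blktap module for the running kernel
    dkms status                                # confirm the blktap module built cleanly
    modprobe blktap                            # load it before starting tapdisk-backed guests
    # then drop the rbd-enabled tapdisk binary in place of the stock one, as described above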

Re: [ceph-users] qemu-1.4.0 and onwards, linux kernel 3.2.x, ceph-RBD, heavy I/O leads to kernel_hung_tasks_timout_secs message and unresponsive qemu-process, [Qemu-devel] [Bug 1207686]

2013-08-05 Thread Stefan Hajnoczi
On Sun, Aug 04, 2013 at 03:36:52PM +0200, Oliver Francke wrote: > On 02.08.2013 at 23:47, Mike Dawson wrote: > > We can "un-wedge" the guest by opening a NoVNC session or running a 'virsh > > screenshot' command. After that, the guest resumes and runs as expected. At > > that point we can exami

[ceph-users] Block device storage

2013-08-05 Thread Benito Chamberlain
Hi there, I have a few questions regarding block device storage and the Ceph filesystem. We want to cluster a database (Progress) on a clustered filesystem, but the database requires the operating system to see the clustered storage area as a block device, and not a network storage area

Re: [ceph-users] Block device storage

2013-08-05 Thread James Harper
> > Hi there > > I have a few questions regarding block device storage and the > ceph-filesystem. > > We want to cluster a database (Progress) on a clustered filesystem, but > the database requires the > operating system to see the clustered storage area as a block device, > and not a netw
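The usual way to get a real block device out of Ceph is an RBD image rather than CephFS; a minimal sketch with the kernel RBD client (the image name and size are placeholders for illustration):

    rbd create progressdb --size 102400        # 100 GB image in the default 'rbd' pool (size is in MB)
    rbd map progressdb --pool rbd              # exposes it as /dev/rbd0 (or /dev/rbd/rbd/progressdb)
    mkfs -t ext4 /dev/rbd0                     # format and mount like any local block device
    mkdir -p /mnt/progressdb && mount /dev/rbd0 /mnt/progressdb

Note that an RBD image behaves like shared raw storage: if more than one node must mount it at the same time, a cluster filesystem (OCFS2, GFS2, etc.) is still needed on top.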

Re: [ceph-users] About single monitor recovery

2013-08-05 Thread Yu Changyuan
The good news is that with the new patch, Ceph starts OK, CephFS mounts OK, and the KVM virtual machine that boots from RBD starts OK (and seems to be running fine). I checked the timestamp of the last file written to CephFS, and it's fairly close to the time of the reboot (which is what broke Ceph). Since I don't have any other way to chec

[ceph-users] re-initializing a ceph cluster

2013-08-05 Thread Jeff Moskow
After more than a week of trying to restore our cluster I've given up. I'd like to reset the data, metadata and rbd pools to their initial clean states (wiping out all data). Is there an easy way to do this? I tried deleting and adding pools, but still have: health HEALTH_WARN 32 pgs
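A hedged sketch of the delete-and-recreate approach (pool names are from the message above; the delete syntax and the extra confirmation flag vary between releases, and the data/metadata pools are tied to CephFS, so check the documentation for your version first):

    ceph osd pool delete rbd rbd --yes-i-really-really-mean-it   # newer releases want the name twice plus the flag
    ceph osd pool create rbd 128                                 # recreate with a pg_num suited to the cluster size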

[ceph-users] Large storage nodes - best practices

2013-08-05 Thread Brian Candler
I am looking at evaluating ceph for use with large storage nodes (24-36 SATA disks per node, 3 or 4TB per disk, HBAs, 10G ethernet). What would be the best practice for deploying this? I can see two main options. (1) Run 24-36 osds per node. Configure ceph to replicate data to one or more ot
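For option (1) the per-disk OSD layout is usually scripted; a hedged sketch with ceph-deploy (the hostname and device names are placeholders, and host:disk[:journal] is the argument form ceph-deploy used at the time):

    for disk in sdb sdc sdd sde; do
        ceph-deploy osd create store1:$disk    # one OSD per raw disk, journal co-located on the same disk
    done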

Re: [ceph-users] Block device storage

2013-08-05 Thread Bill Campbell
Here's a quick Google result for Ceph Resource Agents packages in the Debian unstable branch. These look to apply to 0.48, but could be used as a base for a Resource Agent for RBD. http://packages.debian.org/unstable/ceph-resource-agents -Original Message- From: James Harper To: Beni

Re: [ceph-users] compile error on centos 5.9

2013-08-05 Thread Sage Weil
[Moving to ceph-devel] On Mon, 5 Aug 2013, huangjun wrote: > hi, all > I compiled ceph 0.61.3 on CentOS 5.9; "sh autogen.sh" and > "./configure" are OK, but when I run "make", an error occurs. The error log: > /usr/lib/gcc/x86_64-redhat-linux/4.1.2/../../../../include/c++/4.1.2/bits/concurrence.h: > I

Re: [ceph-users] Large storage nodes - best practices

2013-08-05 Thread Mike Dawson
Brian, Short answer: Ceph generally is used with multiple OSDs per node. One OSD per storage drive with no RAID is the most common setup. At 24- or 36-drives per chassis, there are several potential bottlenecks to consider. Mark Nelson, the Ceph performance guy at Inktank, has published sever

Re: [ceph-users] About single monitor recovery

2013-08-05 Thread Sage Weil
On Mon, 5 Aug 2013, Yu Changyuan wrote: > The good news is that with the new patch, Ceph starts OK, CephFS mounts OK, and the KVM > virtual machine that boots from RBD starts OK (and seems to be running fine), and I checked the > timestamp of the last file written to CephFS; it's fairly close to the time of the > reboot (which caused Ceph to not work any

Re: [ceph-users] Ceph Hadoop Configuration

2013-08-05 Thread Scottix
Hey Noah, Yes, it does look like an older version, 56.6; I got it from the Ubuntu repo. Is there another method or repo I can pull from to get the latest? I am having a hard time finding it. Thanks On Sun, Aug 4, 2013 at 10:33 PM, Noah Watkins wrote: > Hey Scott, > > Things look OK, but I'm a l

Re: [ceph-users] Ceph Hadoop Configuration

2013-08-05 Thread Noah Watkins
The ceph.com repositories can be added in Ubuntu. Check out http://ceph.com/docs/master/install/debian/ for details. If you upgrade to the latest stable (cuttlefish), then all the dependencies should be correct. On Mon, Aug 5, 2013 at 9:38 AM, Scottix wrote: > Hey Noah, > Yes it does look like an o
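A hedged sketch of that upgrade path, following the install page linked above (the repository line reflects the naming convention of the time, and the release key URL should be taken from that page rather than from here):

    wget -q -O- '<release key URL from the install page>' | sudo apt-key add -
    echo deb http://ceph.com/debian-cuttlefish/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
    sudo apt-get update && sudo apt-get install ceph ceph-common
    # the Hadoop bindings also need the libcephfs Java packages, if they are packaged for your release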

Re: [ceph-users] Large storage nodes - best practices

2013-08-05 Thread Brian Candler
On 05/08/2013 17:15, Mike Dawson wrote: Short answer: Ceph generally is used with multiple OSDs per node. One OSD per storage drive with no RAID is the most common setup. At 24- or 36-drives per chassis, there are several potential bottlenecks to consider. Mark Nelson, the Ceph performance

Re: [ceph-users] Large storage nodes - best practices

2013-08-05 Thread Mike Dawson
On 8/5/2013 12:51 PM, Brian Candler wrote: On 05/08/2013 17:15, Mike Dawson wrote: Short answer: Ceph generally is used with multiple OSDs per node. One OSD per storage drive with no RAID is the most common setup. At 24- or 36-drives per chassis, there are several potential bottlenecks to cons

Re: [ceph-users] trouble authenticating after bootstrapping monitors

2013-08-05 Thread Kevin Weiler
Thanks for looking, Sage. I came to this conclusion myself as well, and this seemed to work. I'm trying to manually replicate a Ceph cluster that was made with ceph-deploy. I noted that these capabilities entries were not in the ceph-deploy cluster. Does ceph-deploy do something special when creatin
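For anyone comparing clusters the same way, the capabilities can be inspected and, if missing, set by hand; a hedged sketch (copy the exact cap strings from a known-good cluster rather than from here):

    ceph auth list                                                        # dump all entities and their caps
    ceph auth caps client.admin mon 'allow *' osd 'allow *' mds 'allow'   # illustrative caps, verify against a working cluster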

[ceph-users] Trying to identify performance bottlenecks

2013-08-05 Thread Lincoln Bryant
Hi all, I'm trying to identify the performance bottlenecks in my experimental Ceph cluster. A little background on my setup: 10 storage servers, each configured with: (2) dual-core Opterons, 8 GB of RAM, (6) 750GB disks (1 OSD per disk, 720
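When chasing bottlenecks in a setup like this, it usually helps to benchmark each layer on its own; a hedged sketch of the usual first steps (pool name, file path, and run lengths are arbitrary):

    dd if=/dev/zero of=/var/lib/ceph/osd/ceph-0/bench.img bs=1M count=1024 oflag=direct   # raw throughput of one OSD's disk
    iperf -s                                   # on one node; run 'iperf -c <that node>' from another to check the network
    rados -p rbd bench 60 write -t 16          # aggregate object write throughput from a client
    rados -p rbd bench 60 seq -t 16            # sequential reads (needs the objects from a prior write run still present)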

Re: [ceph-users] kernel BUG at net/ceph/osd_client.c:2103

2013-08-05 Thread Alex Elder
On 08/04/2013 08:07 PM, Olivier Bonvalet wrote: > > Hi, > > I've just upgraded a Xen Dom0 (Debian Wheezy with Xen 4.2.2) from Linux > 3.9.11 to Linux 3.10.5, and now I get a kernel panic after launching some > VMs which use the RBD kernel client. A crash like this was reported last week. I started l

Re: [ceph-users] inconsistent pg: no 'snapset' attr

2013-08-05 Thread John Nielsen
Can no one shed any light on this? On Jul 30, 2013, at 1:51 PM, John Nielsen wrote: > I am running a ceph cluster with 24 OSDs across 3 nodes, Cuttlefish 0.61.3. > Recently an inconsistent PG cropped up: > > # ceph health detail > HEALTH_ERR 1 pgs inconsistent; 1 scrub errors > pg 11.2d5 is a
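For reference, the usual sequence for an inconsistent PG like 11.2d5 is below; a repair may not help when an attribute such as 'snapset' is missing outright, so treat this as a starting point rather than a fix:

    ceph health detail          # identify the inconsistent PG, here 11.2d5
    ceph pg map 11.2d5          # see which OSDs (primary first) hold it
    ceph pg repair 11.2d5       # ask the primary OSD to repair the inconsistent copies
    # then watch 'ceph -w' and the primary OSD's log to see whether the scrub error clears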

Re: [ceph-users] qemu-1.4.0 and onwards, linux kernel 3.2.x, ceph-RBD, heavy I/O leads to kernel_hung_tasks_timout_secs message and unresponsive qemu-process, [Qemu-devel] [Bug 1207686]

2013-08-05 Thread Mike Dawson
Josh, Logs are uploaded to cephdrop with the file name mikedawson-rbd-qemu-deadlock. - At about 2013-08-05 19:46 or 47, we hit the issue, traffic went to 0 - At about 2013-08-05 19:53:51, ran a 'virsh screenshot' Environment is: - Ceph 0.61.7 (client is co-mingled with three OSDs) - rbd cac
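For others trying to reproduce or capture logs for this, a hedged sketch of the client-side settings involved (values are illustrative, <domain> is a placeholder, and the log file path with its $pid variable is an assumption to keep per-process logs apart; adjust to your own ceph.conf conventions):

    # on the qemu host, add to the [client] section of /etc/ceph/ceph.conf, then restart the guest:
    #   rbd cache = true
    #   debug rbd = 20                             # verbose librbd logging while reproducing the hang
    #   debug ms = 1
    #   log file = /var/log/ceph/client.$pid.log
    virsh screenshot <domain> /tmp/unwedge.png     # the workaround mentioned earlier in the thread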

Re: [ceph-users] Large storage nodes - best practices

2013-08-05 Thread James Harper
> I am looking at evaluating ceph for use with large storage nodes (24-36 SATA > disks per node, 3 or 4TB per disk, HBAs, 10G ethernet). > > What would be the best practice for deploying this? I can see two main > options. > > (1) Run 24-36 osds per node. Configure ceph to replicate data to one o

Re: [ceph-users] Large storage nodes - best practices

2013-08-05 Thread Scottix
In the previous email, you are forgetting that RAID1 has a write penalty of 2, since it is mirroring; and now we are talking about different types of RAID, which has nothing really to do with Ceph. One of the main advantages of Ceph is that data is replicated, so you don't have to do RAID to that degree. I am su
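To make the write-amplification comparison concrete: with Ceph replication of 2 on bare disks, one client write becomes 2 physical disk writes; putting each OSD on a RAID1 pair while keeping replication at 2 turns that into 2 x 2 = 4 physical writes (and double the raw capacity used) for roughly the same protection against a single disk failure, which is the penalty being referred to here.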

Re: [ceph-users] Large storage nodes - best practices

2013-08-05 Thread James Harper
> > In the previous email, you are forgetting that RAID1 has a write penalty of 2 > since it > is mirroring, and now we are talking about different types of RAID and nothing > really to do with Ceph. One of the main advantages of Ceph is to have data > replicated so you don't have to do RAID to that d

[ceph-users] CDS Day One Videos

2013-08-05 Thread Ross David Turk
Hi, all! Thanks to those who attended the first day (unless you are in Europe, where they all happen on the same glorious day) of the Ceph Developer Summit. For those of you who couldn’t make it, I have added links to videos of the sessions to the wiki page - it’s actually just three large video