[ceph-users] problem in ceph installation

2014-07-17 Thread pragya jain
Hi all, I am installing ceph on an Ubuntu 14.04 desktop 64-bit VM using the link http://eu.ceph.com/docs/wip-6919/start/quick-start/ But I got the following error while installing ceph - root@prag2648-VirtualBox:~# sudo apt-get update && sudo apt-get install ceph Ign http://securit
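
For reference, a minimal sketch of adding the Ceph apt repository before installing on Ubuntu, assuming the firefly release; the key URL and repository path reflect the 2014-era ceph.com layout and are assumptions that may have moved:

    # add the Ceph release key and the firefly repository (URLs are assumptions)
    wget -q -O- 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | sudo apt-key add -
    echo deb http://ceph.com/debian-firefly/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
    sudo apt-get update && sudo apt-get install -y ceph
    ceph --version   # verify the installed version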

Re: [ceph-users] ceph osd crush tunables optimal AND add new OSD at the same time

2014-07-17 Thread Andrei Mikhailovsky
Sage, would it help if you added a cache pool to your cluster? Let's say you add a few TBs of SSDs acting as a cache pool to your cluster, would this help with retaining IO to the guest VMs during data recovery or reshuffling? Over the past year and a half that we've been using ceph we had a
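
For context, the cache tiering commands introduced in firefly look roughly like this; a sketch only, with hypothetical pool names rbd (backing) and cache (SSD pool), and an illustrative size limit:

    # attach an SSD pool as a writeback cache tier in front of an existing pool
    ceph osd tier add rbd cache
    ceph osd tier cache-mode cache writeback
    ceph osd tier set-overlay rbd cache
    # basic sizing knob (value is illustrative only: 1 TB)
    ceph osd pool set cache target_max_bytes 1099511627776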

Re: [ceph-users] how to scale out RADOSgw?

2014-07-17 Thread Riccardo Murri
Hi Wido, all, thanks for the quick reply. One more question: On 16 July 2014 17:02, Wido den Hollander wrote: >> On 16 July 2014 at 16:54, "Riccardo Murri" wrote the following: >> >> Since RADOSgw is a FastCGI module, can one scale it by just adding >> more HTTP servers behind a l
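
A sketch of the load-balancer idea, assuming haproxy in front of two radosgw hosts; the addresses, backend names, and config path are placeholders, not anything from the thread:

    # append a simple round-robin frontend/backend for radosgw to haproxy
    cat >> /etc/haproxy/haproxy.cfg <<'EOF'
    frontend rgw
        bind *:80
        default_backend rgw_servers
    backend rgw_servers
        balance roundrobin
        server rgw1 10.0.0.11:80 check
        server rgw2 10.0.0.12:80 check
    EOF
    service haproxy reload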

Re: [ceph-users] Some OSD and MDS crash

2014-07-17 Thread John Spray
Hi Pierre, Unfortunately it looks like we had a bug in 0.82 that could lead to journal corruption of the sort you're seeing here. A new journal format was added, and on the first start after an update the MDS would re-write the journal to the new format. This should only have been happening on t

[ceph-users] row geo-replication to another data store?

2014-07-17 Thread Guang Yang
Hi cephers, We are investigating a backup solution for Ceph. In short, we would like a solution to back up a Ceph cluster to another data store (not a Ceph cluster; assume it has a SWIFT API). We would like to have both full backups and incremental backups on top of the full backup. After going throug

[ceph-users] Warning for Ceph FS users (bug in 0.82)

2014-07-17 Thread John Spray
Hi, If you are using the experimental filesystem component of Ceph, and you use the less stable "numbered" Ceph releases, you should be aware of the following issue affecting the 0.82 development release: http://tracker.ceph.com/issues/8811 This issue introduces a risk of corruption when first st

Re: [ceph-users] ceph-fuse couldn't be connect.

2014-07-17 Thread Jaemyoun Lee
Thank you, Greg! I solved it by creating an MDS. - Jae On Wed, Jul 16, 2014 at 8:36 PM, Gregory Farnum wrote: > Your MDS isn't running or isn't active. > -Greg > > > On Wednesday, July 16, 2014, Jaemyoun Lee wrote: > >> >> The result is the same. >> >> # ceph-fuse --debug-ms 1 --debug-client 1
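
For anyone hitting the same thing, a sketch of bringing up an MDS with ceph-deploy and retrying the mount; the hostname node1 and the mountpoint are hypothetical:

    # create a metadata server on node1, then check that it becomes active
    ceph-deploy mds create node1
    ceph mds stat
    # retry the FUSE mount against a monitor
    sudo mkdir -p /mnt/ceph
    sudo ceph-fuse -m node1:6789 /mnt/ceph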

Re: [ceph-users] ceph osd crush tunables optimal AND add new OSD at the same time

2014-07-17 Thread Sage Weil
On Thu, 17 Jul 2014, Quenten Grasso wrote: > Hi Sage & List > > I understand this is probably a hard question to answer. > > I mentioned previously our cluster has co-located MONs on OSD servers, which > are R515s w/ 1 x AMD 6 Core processor & 11 3TB OSDs w/ dual 10GbE. > > When our cluster i
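
For reference, the tunables switch under discussion is a one-liner and can be reverted; this is only a sketch, and on a live cluster either command triggers data movement:

    # move to the optimal (firefly) tunables profile
    ceph osd crush tunables optimal
    # revert to the older profile if the rebalance is too disruptive
    ceph osd crush tunables legacy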

Re: [ceph-users] ceph osd crush tunables optimal AND add new OSD at the same time

2014-07-17 Thread Andrei Mikhailovsky
Comments inline - Original Message - From: "Sage Weil" To: "Quenten Grasso" Cc: ceph-users@lists.ceph.com Sent: Thursday, 17 July, 2014 4:44:45 PM Subject: Re: [ceph-users] ceph osd crush tunables optimal AND add new OSD at the same time On Thu, 17 Jul 2014, Quenten Grasso wrot

Re: [ceph-users] Some OSD and MDS crash

2014-07-17 Thread Pierre BLONDEAU
Hi, (0) Brilliant, I recovered my data. (1) Gregory, Joao, John, Samuel: thanks a lot for all the help and for responding every time. (2) It's my fault for moving to 0.82, and it's good if that helped you find some bugs ;) (3) After this scare, we will recreate our cluster on firefly.

Re: [ceph-users] ceph osd crush tunables optimal AND add new OSD at the same time

2014-07-17 Thread Craig Lewis
I'd like to see some way to cap recovery IOPS per OSD. Don't allow backfill to do more than, say, 50 operations per second. It will slow backfill down, but reserve plenty of IOPS for normal operation. I know that implementing this well is not a simple task. I know I did some stupid things that ca
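
The closest existing knobs are the backfill and recovery throttles rather than a true per-OSD IOPS cap; a sketch of turning them down at runtime, with illustrative values:

    # throttle recovery and backfill on all OSDs without a restart
    ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1 --osd-recovery-op-priority 1'
    # to persist, set the same options in the [osd] section of ceph.conf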

Re: [ceph-users] [Nova] [RBD] Copy-on-write cloning for RBD-backed disks

2014-07-17 Thread Dmitry Borodaenko
In the case of Icehouse on Ubuntu 14.04, you should be able to test this patch series by grabbing this branch from GitHub: https://github.com/angdraug/nova/tree/rbd-ephemeral-clone-stable-icehouse and replacing the contents of /usr/share/pyshared/nova with the contents of nova/ from that branch. You may also
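
A sketch of that procedure as shell commands, assuming python-nova came from the Ubuntu packages; keeping a backup of the original tree and restarting nova-compute are added here as assumptions:

    # grab the patched branch and swap it into the packaged nova tree
    git clone -b rbd-ephemeral-clone-stable-icehouse https://github.com/angdraug/nova.git
    sudo cp -a /usr/share/pyshared/nova /usr/share/pyshared/nova.orig   # keep a backup
    sudo rsync -a --delete nova/nova/ /usr/share/pyshared/nova/
    sudo service nova-compute restart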

Re: [ceph-users] [Nova] [RBD] Copy-on-write cloning for RBD-backed disks

2014-07-17 Thread Dmitry Borodaenko
The meeting is in 2 hours, so you still have a chance to participate or at least lurk :) On Wed, Jul 16, 2014 at 11:55 PM, Somhegyi Benjamin wrote: > Hi Dmitry, > > Will you please share with us how things went at the meeting? > > Many thanks, > Benjamin > > > >> -Original Message- >> Fro

Re: [ceph-users] how to scale out RADOSgw?

2014-07-17 Thread Wido den Hollander
On 07/17/2014 02:27 PM, Riccardo Murri wrote: Hi Wido, all, thanks for the quick reply. One more question: On 16 July 2014 17:02, Wido den Hollander wrote: On 16 July 2014 at 16:54, "Riccardo Murri" wrote the following: Since RADOSgw is a FastCGI module, can one scale it by just

[ceph-users] pg repair info

2014-07-17 Thread Caius Howcroft
I wonder if someone can just clarify something for me. I have a cluster which I have upgraded to firefly. I'm having pg inconsistencies due to the recently reported xfs bug. However, I'm running pg repair X.YYY and I would like to just understand what, exactly, this is doing. It looks like it's copyin
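
For orientation, the usual sequence around repair looks like the sketch below; the pg id is hypothetical, and repair should only be run once you understand which copy is the bad one:

    # find inconsistent placement groups
    ceph health detail | grep inconsistent
    # re-verify one pg, then ask the primary to repair it
    ceph pg deep-scrub 2.1f
    ceph pg repair 2.1f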

Re: [ceph-users] pg repair info

2014-07-17 Thread Wido den Hollander
On 07/17/2014 09:44 PM, Caius Howcroft wrote: I wonder if someone can just clarify something for me. I have a cluster which I have upgraded to firefly. I'm having pg inconsistencies due to the recently reported xfs bug. However, I'm running pg repair X.YYY and I would like to just understand what,

Re: [ceph-users] PERC H710 raid card

2014-07-17 Thread Jake Young
There are two command line tools for Linux for LSI cards: megacli and storcli. You can do pretty much everything from those tools. Jake On Thursday, July 17, 2014, Dennis Kramer (DT) wrote: > -BEGIN PGP SIGNED MESSAGE- > Hash: SHA1 > > Hi, > > What do you recommend in case of a disk fai
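
A couple of the common read-only queries, as a sketch; exact binary names and flags differ between versions (MegaCli vs MegaCli64, storcli vs storcli64), so treat these as illustrative:

    # list physical drives and their states (look for "Firmware state")
    MegaCli64 -PDList -aALL
    # summary of controller 0, including virtual and physical drives
    storcli64 /c0 show all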

Re: [ceph-users] EU mirror now supports rsync

2014-07-17 Thread David Moreau Simard
(taking this back to ceph-users, not sure why I posted to ceph-devel?) Thanks for the info, I sent them a message to inquire about access. In the meantime, the mirror is already synchronized (sync every 4 hours) and available on http://mirror.iweb.ca or directly on http://ceph.mirror.iweb.ca. Da
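
For anyone wanting to pull from one of these mirrors over rsync, a sketch; the module name "ceph" is an assumption, so list the exported modules first, and the local path is a placeholder:

    # list the rsync modules exported by the mirror
    rsync eu.ceph.com::
    # mirror the (assumed) "ceph" module locally
    rsync -avz --delete eu.ceph.com::ceph /srv/mirror/ceph/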

[ceph-users] Regarding ceph osd setmaxosd

2014-07-17 Thread Anand Bhat
Hi, I have a question about the intention of the Ceph setmaxosd command. From the source code, it appears to be present as a way to limit the number of OSDs in the Ceph cluster. Can this be used to shrink the number of OSDs in the cluster without gracefully shutting down OSDs and letting recovery/remap
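
For reference, the command pair involved; a sketch only, and in practice max_osd bounds the OSD id space in the osdmap and is generally used to grow it rather than to decommission running OSDs:

    # show the current limit, then raise it (the value 20 is illustrative)
    ceph osd getmaxosd
    ceph osd setmaxosd 20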