Hi all,
I am installing Ceph on an Ubuntu 14.04 desktop 64-bit VM using the link
http://eu.ceph.com/docs/wip-6919/start/quick-start/
But I got the following error while installing Ceph:
-
root@prag2648-VirtualBox:~# sudo apt-get update && sudo apt-get install ceph
Ign http://securit
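(For anyone hitting the same thing: a minimal sketch of the usual repo setup for that era. The key URL and the "firefly" release name are assumptions, so double-check them against the quick-start page linked above.)
  wget -q -O- 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | sudo apt-key add -
  echo deb http://ceph.com/debian-firefly/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
  sudo apt-get update && sudo apt-get install -y ceph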
Sage,
would it help if you added a cache pool to your cluster? Let's say you add a
few TBs of SSDs acting as a cache pool to your cluster, would this help with
retaining IO to the guest VMs during data recovery or reshuffling?
Over the past year and a half that we've been using ceph we had a
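(For reference, attaching a cache tier is typically done along these lines; a sketch only, where the pool names "rbd" and "cachepool" and the writeback mode are assumptions:)
  ceph osd tier add rbd cachepool
  ceph osd tier cache-mode cachepool writeback
  ceph osd tier set-overlay rbd cachepool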
Hi Wido, all,
thanks for the quick reply. One more question:
On 16 July 2014 17:02, Wido den Hollander wrote:
>> Op 16 jul. 2014 om 16:54 heeft "Riccardo Murri" het
>> volgende geschreven:
>>
>> Since RADOSgw is a FastCGI module, can one scale it by just adding
>> more HTTP servers behind a l
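(Sketch of what scaling out the gateways usually looks like in practice, assuming a [client.radosgw.gw2] section already exists in ceph.conf on each extra host; the init script name is an assumption:)
  # on each additional gateway host
  sudo /etc/init.d/radosgw start
  # then put all gateway hosts behind the load balancer / DNS round-robin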
Hi Pierre,
Unfortunately it looks like we had a bug in 0.82 that could lead to
journal corruption of the sort you're seeing here. A new journal
format was added, and on the first start after an update the MDS would
re-write the journal to the new format. This should only have been
happening on t
Hi cephers,
We are investigating a backup solution for Ceph; in short, we would like a
solution to back up a Ceph cluster to another data store (not a Ceph cluster,
assume it has a Swift API). We would like to have both full backups and
incremental backups on top of the full backups.
After going throug
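(For RBD images, one common building block is snapshot-based export; a sketch where the pool, image, and snapshot names are hypothetical, with the resulting files then pushed to the Swift store:)
  rbd snap create rbd/vm1@bak-2014-07-17
  rbd export rbd/vm1@bak-2014-07-17 vm1-full.img        # full backup
  rbd export-diff --from-snap bak-2014-07-16 rbd/vm1@bak-2014-07-17 vm1-incr.diff   # incremental
  # restore side uses rbd import / rbd import-diff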
Hi,
If you are using the experimental filesystem component of Ceph, and
you use the less stable "numbered" Ceph releases, you should be aware
of the following issue affecting the 0.82 development release:
http://tracker.ceph.com/issues/8811
This issue introduces a risk of corruption when first st
Thank you, Greg!
I solved it by creating an MDS.
- Jae
On Wed, Jul 16, 2014 at 8:36 PM, Gregory Farnum wrote:
> Your MDS isn't running or isn't active.
> -Greg
>
>
> On Wednesday, July 16, 2014, Jaemyoun Lee wrote:
>
>>
>> The result is the same.
>>
>> # ceph-fuse --debug-ms 1 --debug-client 1
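(For the archives, "creating an MDS" with ceph-deploy looks roughly like the following; the hostname is hypothetical:)
  ceph-deploy mds create mds-node1
  ceph mds stat    # should report an active MDS before retrying the ceph-fuse mount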
On Thu, 17 Jul 2014, Quenten Grasso wrote:
> Hi Sage & List
>
> I understand this is probably a hard question to answer.
>
> I mentioned previously that our cluster has co-located MONs on OSD servers, which
> are R515s w/ 1 x AMD 6-core processor & 11 x 3TB OSDs w/ dual 10GbE.
>
> When our cluster i
Comments inline
- Original Message -
From: "Sage Weil"
To: "Quenten Grasso"
Cc: ceph-users@lists.ceph.com
Sent: Thursday, 17 July, 2014 4:44:45 PM
Subject: Re: [ceph-users] ceph osd crush tunables optimal AND add new OSD at
the same time
On Thu, 17 Jul 2014, Quenten Grasso wrot
Hi
0. Brilliant, I recovered my data.
1. Gregory, Joao, John, Samuel: thanks a lot for all the help and for
responding every time.
2. It's my fault for moving to 0.82, and it's good if that helped you
find some bugs ;)
3. After this scare, we will recreate our cluster on Firefly.
I'd like to see some way to cap recovery IOPS per OSD. Don't allow
backfill to do more than 50 operations per second. It will slow
backfill down, but reserve plenty of IOPS for normal operation. I know
that implementing this well is not a simple task.
I know I did some stupid things that ca
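(There is no direct per-OSD IOPS cap today, but the closest knobs throttle recovery/backfill concurrency and priority; a sketch, with illustrative values:)
  ceph tell osd.* injectargs '--osd-max-backfills 1'
  ceph tell osd.* injectargs '--osd-recovery-max-active 1'
  ceph tell osd.* injectargs '--osd-recovery-op-priority 1'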
In case of Icehouse on Ubuntu 14.04, you should be able to test this
patch series by grabbing this branch from github:
https://github.com/angdraug/nova/tree/rbd-ephemeral-clone-stable-icehouse
and replacing contents of /usr/share/pyshared/nova with contents of
nova/ from that branch. You may also
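(A sketch of that replacement step, assuming the stock Ubuntu packaging paths; back up the original directory first, and the service name is an assumption:)
  git clone -b rbd-ephemeral-clone-stable-icehouse https://github.com/angdraug/nova.git
  sudo cp -a /usr/share/pyshared/nova /usr/share/pyshared/nova.orig
  sudo rsync -a --delete nova/nova/ /usr/share/pyshared/nova/
  sudo service nova-compute restart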
The meeting is in 2 hours, so you still have a chance to participate
or at least lurk :)
On Wed, Jul 16, 2014 at 11:55 PM, Somhegyi Benjamin
wrote:
> Hi Dmitry,
>
> Will you please share with us how things went on the meeting?
>
> Many thanks,
> Benjamin
>
>
>
>> -Original Message-
>> Fro
On 07/17/2014 02:27 PM, Riccardo Murri wrote:
Hi Wido, all,
thanks for the quick reply. One more question:
On 16 July 2014 17:02, Wido den Hollander wrote:
Op 16 jul. 2014 om 16:54 heeft "Riccardo Murri" het
volgende geschreven:
Since RADOSgw is a FastCGI module, can one scale it by just
I wonder if someone can just clarify something for me.
I have a cluster which I have upgraded to firefly. I'm having pg
inconsistencies due to the recently reported XFS bug. However, I'm
running pg repair X.YYY and I would like to just understand what,
exactly this is doing. It looks like its copyin
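(The usual sequence, for reference; the pg id is hypothetical, and note that, as I understand it, repair reconstructs the replicas from the primary's copy, so check which copy is actually damaged first:)
  ceph health detail | grep inconsistent   # lists the inconsistent pgs
  ceph pg repair 3.1a                      # hypothetical pg id taken from the output above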
On 07/17/2014 09:44 PM, Caius Howcroft wrote:
I wonder if someone can just clarify something for me.
I have a cluster which I have upgraded to firefly. I'm having pg
inconsistencies due to the recently reported XFS bug. However, I'm
running pg repair X.YYY and I would like to just understand what,
There are two command line tools for Linux for LSI cards: megacli and
storcli
You can do pretty much everything from those tools.
Jake
On Thursday, July 17, 2014, Dennis Kramer (DT) wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> Hi,
>
> What do you recommend in case of a disk fai
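(A few MegaCli invocations that tend to be useful around a failed disk; the binary name MegaCli64 and the enclosure:slot address are assumptions for your system:)
  MegaCli64 -PDList -aALL                                # list physical drives and their state
  MegaCli64 -AdpEventLog -GetEvents -f events.log -aALL  # dump the controller event log
  MegaCli64 -PDOffline -PhysDrv [32:4] -a0               # mark the failing drive offline before pulling it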
(taking this back to ceph-users, not sure why I posted to ceph-devel?)
Thanks for the info, I sent them a message to inquire about access.
In the meantime, the mirror is already synchronized (sync every 4 hours) and
available on http://mirror.iweb.ca or directly on http://ceph.mirror.iweb.ca.
Da
Hi,
I have a question about the intent of the Ceph setmaxosd command. From the source code, it
appears as if this is present as a way to limit the number of OSDs in the Ceph
cluster. Can this be used to shrink the number of OSDs in the cluster without
gracefully shutting down OSDs and letting recovery/remap
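(For comparison, the graceful removal path per OSD looks roughly like this; osd.12 is hypothetical:)
  ceph osd out 12                  # start draining data off the OSD
  # wait for the cluster to return to active+clean
  sudo service ceph stop osd.12    # or: stop ceph-osd id=12 on upstart systems
  ceph osd crush remove osd.12
  ceph auth del osd.12
  ceph osd rm 12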