I am using ceph 0.58 and kernel 3.9-rc2 and btrfs on my osds.
I have an osd that starts up but blocks with the log message 'waiting
for 1 open ops to drain'.
The drain never completes, and I can't get the osd 'up'.
I need to clear this problem. I recently had an osd become problematic
and I have recre
Thanks Josh, the problem is solved by updating ceph on the glance node.
Sent from my iPhone
On 2013-3-20, at 14:59, "Josh Durgin" wrote:
> On 03/19/2013 11:03 PM, Chen, Xiaoxi wrote:
>> I think Josh may be the right man for this question ☺
>>
>> To be more precise, I would like to add more words about the stat
Hi there!
What steps need to be performed if we have totally lost a node?
As I already understand from the docs, OSDs must be recreated (disabled,
removed and created again, right?)
But what about MON and MDS?
--
Igor Laskovy
facebook.com/igor.laskovy
Kiev, Ukraine
Dan, Sébastien,
thanks for the hints.
For Inktank:
There's no doubt that choosing an appropriate pg_num is a very
important setting, and given the lack of a stable command to
increase it, the workaround by Sébastien Han along with the advice
from Dan van der Ster should be included in t
Igor,
I'm fairly sure that you just have to create a new
filesystem (btrfs?) on the new block device, mount it, and then
initialise the osd with:
ceph-osd -i {osd-id} --mkfs
Then you can start the osd with:
ceph-osd -i {osd-id}
Since you are replacing an osd that already existed, the cluste
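As a rough sketch of that sequence (the device /dev/sdb, the osd id 12 and the
btrfs/cephx details are placeholders, not necessarily Igor's actual setup):

# create and mount a fresh btrfs filesystem for the osd data directory
mkfs.btrfs /dev/sdb
mount /dev/sdb /var/lib/ceph/osd/ceph-12
# initialise the osd data directory (and a new cephx key)
ceph-osd -i 12 --mkfs --mkkey
# register the key with the monitors, then start the daemon
ceph auth add osd.12 osd 'allow *' mon 'allow rwx' -i /var/lib/ceph/osd/ceph-12/keyring
service ceph start osd.12

If the old osd.12 auth entry is still registered, it may need to be dropped first
with 'ceph auth del osd.12' before adding the new key.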
Hello!
I've deployed a test ceph cluster according to this guide:
http://ceph.com/docs/master/start/quick-start/
The problem is that the cluster never reaches a clean state by itself.
The corresponding outputs are the following:
root@test-4:~# ceph health
HEALTH_WARN 3 pgs degraded; 38 pgs
Actually, I have already recovered the OSDs and the MON daemon back into the cluster
according to http://ceph.com/docs/master/rados/operations/add-or-rm-osds/ and
http://ceph.com/docs/master/rados/operations/add-or-rm-mons/ .
But the docs are missing info about removing/adding an MDS.
How can I recover the MDS daemon for f
The MDS doesn't have any local state. You just need to start up the daemon
somewhere with a name and key that are known to the cluster (these can be
different from or the same as the one that existed on the dead node; doesn't
matter!).
-Greg
Software Engineer #42 @ http://inktank.com | http://cep
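As a minimal sketch (with 'a' standing in for whichever mds id you choose, and
assuming its keyring is already in place under /var/lib/ceph/mds/ceph-a/ on the
new node):

# start an mds whose name and key the monitors already know about
ceph-mds -i a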
Well, can you please clarify exactly which key I must use? Do I need to
get/generate it somehow from the working cluster?
On Wed, Mar 20, 2013 at 7:41 PM, Greg Farnum wrote:
> The MDS doesn't have any local state. You just need to start up the daemon
> somewhere with a name and key that are known to
Yeah. If you run "ceph auth list" you'll get a dump of all the users and keys
the cluster knows about; each of your daemons has that key stored somewhere
locally (generally in /var/lib/ceph/ceph-[osd|mds|mon].$id). You can create
more or copy an unused MDS one. I believe the docs include informa
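A rough sketch of the key handling, with mds.a as a placeholder id and the usual
keyring path assumed:

# dump every entity/key the cluster knows about
ceph auth list
# create a key for mds.a (or fetch it if one already exists) and put it where the daemon expects
mkdir -p /var/lib/ceph/mds/ceph-a
ceph auth get-or-create mds.a mds 'allow' osd 'allow *' mon 'allow rwx' -o /var/lib/ceph/mds/ceph-a/keyring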
Oh, thank you!
On Wed, Mar 20, 2013 at 7:52 PM, Greg Farnum wrote:
> Yeah. If you run "ceph auth list" you'll get a dump of all the users and
> keys the cluster knows about; each of your daemons has that key stored
> somewhere locally (generally in /var/lib/ceph/ceph-[osd|mds|mon].$id). You
> c
Hello Ceph-Users,
I was testing our rados gateway and after a few hours rgw started sending HTTP
500 responses for certain uploads. I did some digging and found that an HDD had
died. The OSD was marked out, but not before a short rgw outage. Start to finish
was 60 to 120 seconds.
I have a few questi
I have a cluster of 3 hosts each with 2 SSD and 4 Spinning disks.
I used the example in the crush map doco to create a crush map to place
the primary on the SSD and the replica on a spinning disk.
If I use the example, I end up with objects replicated on the same host
if I use 2 replicas.
Question 1,
On Wed, Mar 20, 2013 at 5:06 PM, Darryl Bond wrote:
> I have a cluster of 3 hosts each with 2 SSD and 4 Spinning disks.
> I used the example in the crush map doco to create a crush map to place
> the primary on the SSD and the replica on a spinning disk.
>
> If I use the example, I end up with objects r
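For reference, the ssd-primary rule in the crush map docs that this thread refers
to looks roughly like this (the bucket names ssd/platter and the ruleset number
come from that example and may differ in an actual map):

rule ssd-primary {
        ruleset 4
        type replicated
        min_size 1
        max_size 10
        # pick the primary from the ssd tree
        step take ssd
        step chooseleaf firstn 1 type host
        step emit
        # pick the remaining replicas from the spinning-disk tree
        step take platter
        step chooseleaf firstn -1 type host
        step emit
}

Because the two take/emit passes are independent, nothing in the rule itself
prevents the ssd pick and the platter pick from landing on the same physical
server, which matches the behaviour described above.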