Hi all,
I'm approaching Ceph today for the first time, so apologies for the basic
questions. I promise I will do all my homework :-)
Following the "storage cluster quick start" documentation, I quickly got
stuck on the issue below while creating the first mon:
ceph-admin # ceph-deploy mon create ceph
Hello Jan
I faced similar kinds of errors, and they are really annoying. I tried the
following and it worked for me.
1. Your ceph-node1 is now a monitor node, but it cannot form quorum.
2. Check the monitor logs from ceph-node1 in the /var/log/ceph directory.
This will give you more insight (rough commands below).
3. You might need t
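Roughly, the checks above look like this (assuming a default install; adjust
the monitor name to match yours):

# is the monitor up and in quorum?
ceph -s
ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-node1.asok mon_status
# inspect the monitor log
tail -n 50 /var/log/ceph/ceph-mon.ceph-node1.log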
hi ceph,
just for testing (on emperor 0.72.1) I created two OSDs on a single server,
resized the pool to a replication factor of one, and created 200 PGs for that
pool:
# ceph osd dump
...
pool 4 'rbd' rep size 1 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num
200 pgp_num 200 last
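(For reference, the pool above was set up with something like the following;
I'm sketching from memory, so exact behaviour may differ between versions:)

ceph osd pool set rbd size 1
ceph osd pool set rbd pg_num 200
ceph osd pool set rbd pgp_num 200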
2013/12/4 Simon Leinen :
> I think this is a fine configuration - you won't be writing to the root
> partition too much, outside journals. We also put journals on the same
> SSDs as root partitions (not that we're very ambitious about
> performance...).
Do you suggest a RAID1 for the OS partition
Installed Ceph emperor using apt-get on Ubuntu 12.04 by following the steps
given in the installation part of the Ceph docs website:
http://ceph.com/docs/master/install/get-packages/
http://ceph.com/docs/master/install/install-storage-cluster/
But I get an error when this command is run:
service ceph -a st
Hi Karan,
On 12/05/2013 10:31 AM, Karan Singh wrote:
Hello Jan
I faced similar kinds of errors, and they are really annoying. I tried the
following and it worked for me.
Glad to know I am not alone :-), though this sounds like a not-very-robust
procedure...
1. Your ceph-node1 is now a monitor nod
Hi Sahana,
Did you already create any OSDs, with the osd prepare and activate commands?
Best regards
Sent from my Personal Samsung GT-i8190L
Original message
From: Sahana
Date: 05/12/2013 07:26 (GMT-03:00)
To: ceph-us...@ceph.com
Subject: [ceph-users] Error in star
What do the ceph status and ceph mon_status outputs say? Did you check the
logs after this; anything interesting there?
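That is:

ceph status
ceph mon_status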
Many Thanks
Karan Singh
- Original Message -
From: "Jan Kalcic"
To: "Karan Singh"
Cc: ceph-users@lists.ceph.com
Sent: Thursday, 5 December, 2013 12:58:33 PM
Subject: Re: [
ems is a remote machine?
Did you set up the corresponding directories (/var/lib/ceph/osd/ceph-0)
and call mkcephfs beforehand?
You can also try starting the osd manually with 'ceph-osd -i 0 -c
/etc/ceph/ceph.conf', then 'pgrep ceph-osd' to see if they are there,
then 'ceph -s' to check the health.
On
Hi All,
I found 6 incomplete PGs in "ceph health detail" after 3 OSDs went down, but
after I managed to start all 3 OSDs again, only 1 incomplete PG was left.
root@:~# ceph health detail | grep 4.7d
pg 4.7d is stuck inactive for 306404.577611, current state incomplete, last
acting [6,0]
pg 4.7d is stuck
Hello Everyone
Trying to boot from a Ceph volume using the blog
http://www.sebastien-han.fr/blog/2012/06/10/introducing-ceph-to-openstack/ and
http://docs.openstack.org/user-guide/content/boot_from_volume.html
I need help with this error.
Logs from /var/
On 12/05/2013 10:52 AM, Wolfgang Hennerbichler wrote:
hi ceph,
just for testing (on emperor 0.72.1) I created two OSDs on a single server,
resized the pool to a replication factor of one, and created 200 PGs for that
pool:
# ceph osd dump
...
pool 4 'rbd' rep size 1 min_size 1 crush_rulese
> On 12/05/2013 10:52 AM, Wolfgang Hennerbichler wrote:
>> Now I do an rbd import of an RBD image (which is 1G in size), and I would
>> expect that RBD image to stripe across the two OSDs. Well, this is just not
>> happening, everything sits on OSD2 (osd1 and osd0 have been removed in the
>> me
Perfect, that worked very well. Thanks a lot.
Another question:
Using http://ceph.com/howto/deploying-ceph-with-ceph-deploy/ as a guide to set
up my test cluster, I now have a working cluster with 12 OSDs in and up. I've
created a client, a 10GB RBD volume, mounted it, and written data; all good.
Lo
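For context, the client steps above look roughly like this (image and mount
point names are placeholders for what I actually used):

rbd create test --size 10240    # 10 GB image in the default rbd pool
rbd map test                    # exposes a /dev/rbd* device
mkfs.ext4 /dev/rbd0             # device name may differ on your host
mkdir -p /mnt/rbd
mount /dev/rbd0 /mnt/rbd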
On Thu, Dec 5, 2013 at 7:12 AM, Karan Singh wrote:
> What do the ceph status and ceph mon_status outputs say? Did you check the
> logs after this; anything interesting there?
>
> Many Thanks
> Karan Singh
>
>
> - Original Message -
> From: "Jan Kalcic"
> To: "Karan Singh"
> Cc: ceph-users@lis
Hi guys,
I won’t do a RAID 1 with SSDs, since both drives write the same data and are
therefore likely to wear out at almost the same time.
What I will try to do instead is to use both disks in JBOD mode (or a
degraded RAID0).
Then I will create a tiny root partition for the OS.
Then I’ll still have
On Thu, Dec 5, 2013 at 9:18 AM, Jonas Andersson wrote:
> Perfect, that worked very well. Thanks a lot.
>
> Another question:
>
> Using http://ceph.com/howto/deploying-ceph-with-ceph-deploy/ as a guide to
> set up my test cluster, I now have a working cluster with 12 OSDs in and up.
> I've created
I mean, I have OSDs and MONs running now, but I see no mention of them in the
current config file (/etc/ceph/ceph.conf), so backing that file up would not
allow me to see where monitors/objectstores/journals were placed. Is there a
nifty command that allows me to push these defaults to somethi
Another option is to run journals on individually presented SSDs, in a
5:1 ratio (spinning-disk:ssd) and have the OS somewhere else. Then the
failure domain is smaller.
Ideally, implement some way to monitor SSD write-life SMART data - at least
it gives a guide as to device condition compared
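As a rough sketch of such a check (assuming smartmontools is installed; the
attribute names vary by SSD vendor, so treat the grep pattern as a guess):

# dump SMART attributes and pull out wear/lifetime indicators
smartctl -A /dev/sdb | grep -i -e wear -e life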
Hi Nathan,
Here is a very rough draft of the announcement, which is going to be released
next Monday. It is more a discussion starter than a draft. Feel free to modify
at will :-) It includes the names and affiliations of all founding members.
There may be more in the days to come and I'll add t
On 12/05/2013 09:16 AM, Jan Kalcic wrote:
It seems ceph-mon does not exit with success, in fact:
ceph-node1 # sudo /usr/bin/ceph-mon -i ceph-node1 --pid-file
/var/run/ceph/mon.ceph-node1.pid -c /etc/ceph/ceph.conf -d
2013-12-05 10:06:27.429602 7fe06baf9780 0 ceph version 0.72.1
(4d923861868f6a1
We are investigating a curious problem with radosgw:
We see intermittent timeouts and HTTP connections breaking when streaming video
files through the rados gateway.
On server 1 we have Ubuntu 13.10 (saucy) with the stock Apache 2.4 and
associated fastcgi (and a mon)
On server 2 we also have U
Ah. So that check compares the objects per PG in that pool vs. the objects
per PG in the entire system and, if there is too much skew, issues a
warning. If you look at 'ceph health detail' you will see some of
the detail there.
The reason you're seeing this is that you have lots and l
ceph pg 4.7d query
will tell you which OSDs it wants to talk to in order to make the PG
complete (or what other information it needs).
sage
On Thu, 5 Dec 2013, Rzk wrote:
> Hi All,
>
> I found 6 incomplete PGs in "ceph health detail" after 3 OSDs went down,
> but after I managed to start al
On Thu, 5 Dec 2013, James Harper wrote:
> >
> > Can you generate an OSD log with 'debug filestore = 20' for an idle period?
> >
>
> Any more tests you would like me to run? I'm going to recreate that OSD
> as XFS soon.
Ah, Ilya tells me that the btrfs cleaner is probably chewing on a snapshot
Jonas,
You can query the admin sockets of your monitors and OSDs to get a JSON
listing of their running configuration. The command will look something
like:
# ceph --admin-daemon /var/run/ceph/ceph-mon.a.asok config show
# ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show
You can
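Since the output is JSON, you can also filter it for a single setting, with
something like:

ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show | grep journal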
Hi,
Thank you for the quick reply.
ems is the server from which I ran service ceph start.
These are the steps I followed. Please let me know if anything is missing or
something is wrong.
wget -q -O- '
https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | sudo
apt-key add -
echo deb h
Hi Joao,
On 12/05/2013 04:29 PM, Joao Eduardo Luis wrote:
On 12/05/2013 09:16 AM, Jan Kalcic wrote:
It seems ceph-mon does not exit with success, in fact:
ceph-node1 # sudo /usr/bin/ceph-mon -i ceph-node1 --pid-file
/var/run/ceph/mon.ceph-node1.pid -c /etc/ceph/ceph.conf -d
2013-12-05 10:06:27
Hey all,
For those who have been following, or are interested in, the Ceph User
Committee [0] discussed at CDS [1], there is now a mailing list to
discuss all things User Committee. This could include:
* Proposed changes to Ceph.com
* Event participation and coverage
* Community development logis
I suppose I should have mentioned: as with the other mailing lists, you
can find the info to subscribe at:
http://lists.ceph.com/listinfo.cgi/ceph-community-ceph.com
and mail can be sent to the list at:
ceph-commun...@lists.ceph.com
Best Regards,
Patrick McGarry
Director, Community || Inktank
h
Can someone point me to directions on how to mount a Ceph storage volume on
Linux as well as Windows?
Thanks in advance for your help.
I've been working on getting this setup working. I have virtual machines
running using rbd-based images by editing the domain directly.
Is there any way to make the creation process better? We are hoping to be
able to use a virsh pool using the rbd driver, but it appears that Red Hat
has not compi
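For reference, by "editing the domain directly" I mean adding a disk stanza
like this sketch via virsh edit (the pool/image name and monitor host are
placeholders; a cephx auth element may also be needed on secured clusters):

<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <source protocol='rbd' name='rbd/vm-image'>
    <host name='mon-host' port='6789'/>
  </source>
  <target dev='vda' bus='virtio'/>
</disk>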
>
> Can someone point me to directions on how to mount a Ceph storage
> volume on Linux as well as Windows?
>
Do you mean cephfs filesystem, or rbd block device?
I have ported librbd to Windows in a very "alpha" sense - it compiles and I can
do things like 'rbd ls' and 'rbd import', but haven'
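For the Linux side, a minimal sketch of both options (the monitor host,
pool/image name, and key are placeholders):

# rbd block device via the kernel client
rbd map rbd/myimage
mount /dev/rbd0 /mnt

# cephfs via the kernel client
mount -t ceph mon-host:6789:/ /mnt/ceph -o name=admin,secret=<key>

# or cephfs via FUSE
ceph-fuse /mnt/ceph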
> How do you mount cephfs, use ceph-fuse or kernel driver?
>
> Regards
> Yan, Zheng
I use ceph-fuse.
Cheers,
MAO
Native block support is coming for Hyper-V next year, we hope... it would
be great to hear from Inktank about anything that can be shared publicly on
that front :)
On 2013-12-05 22:02, James Harper wrote:
Can someone point me to directions on how to mount a Ceph storage
volume on Linux as well as Wi
Josh,
On Tue, Nov 19, 2013 at 4:24 PM, Josh Durgin wrote:
>>> I hope I can release or push commits to this branch containing live-migration,
>>> the incorrect-filesystem-size fix, and ceph-snapshot support in a few days.
>>
>> Can't wait to see this patch! Are you getting rid of the shared
>> storage r
On Thu, 5 Dec 2013, James Harper wrote:
> >
> > Can someone point me to directions on how to mount a Ceph storage
> > volume on Linux as well as Windows?
> >
>
> Do you mean cephfs filesystem, or rbd block device?
>
> I have ported librbd to windows in a very "alpha" sense - it compiles
> and
>
> On Thu, 5 Dec 2013, James Harper wrote:
> > >
> > > Can someone point me to directions on how to mount a Ceph storage
> > > volume on Linux as well as Windows?
> > >
> >
> > Do you mean cephfs filesystem, or rbd block device?
> >
> > I have ported librbd to windows in a very "alpha" sense - it
On Thu, 5 Dec 2013, James Harper wrote:
> >
> > On Thu, 5 Dec 2013, James Harper wrote:
> > > >
> > > > Can someone point me to directions on how to mount a Ceph storage
> > > > volume on Linux as well as Windows?
> > > >
> > >
> > > Do you mean cephfs filesystem, or rbd block device?
> > >
> > >
A little info about wip-port.
The wip-port branch lags behind master a bit, usually a week or two
depending on what I've got going on. There are testers for OSX and
FreeBSD, and bringing in Windows patches would probably be a nice
staging place for them, as I suspect the areas of change will overl
On Fri, Dec 6, 2013 at 6:08 AM, Miguel Oliveira
wrote:
>> How do you mount cephfs, use ceph-fuse or kernel driver?
>>
>> Regards
>> Yan, Zheng
>
> I use ceph-fuse.
>
It looks like the issue is not caused by the bug, I presume. Could you
please run the following commands and send the output to me?
rado