Hi,
I also found a similar problem when adding a new monitor to my cluster. Hope
this can help.
In my ceph monitor log, I found these lines:
2013-10-23 17:00:15.907105 7fc9edc9b780 0 ceph version 0.61.4
(1669132fcfc27d0c0b5e5bb93ade59d147e23404), process ceph-mon, pid 4312
2013-10-23 17:00:16.1586
I've rebuilt my nodes on raring and now I'm hitting the same issue trying
to add the 2nd and 3rd monitors as specified in the quickstart. The
quickstart makes no mention of setting public_addr or public_network to
complete this step. What's the deal?
JL
On 13/10/24 10:23 AM, "Joao Eduardo Luis
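For reference, a minimal sketch of the kind of ceph.conf setting being asked about here, assuming a hypothetical 192.168.10.0/24 cluster subnet:

  [global]
  public_network = 192.168.10.0/24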
On Thu, 24 Oct 2013, Gaylord Holder wrote:
> Works perfectly.
>
> My only gripe is --cluster isn't listed as a valid argument from
>
> ceph-mon --help
Ah, it is there for current versions, but not cuttlefish or dumpling.
It'll be in the next point release (for each).
sage
>
> and the only
I also created a ticket to try and handle this particular instance of bad
behavior:
http://tracker.ceph.com/issues/6629
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
On October 24, 2013 at 1:22:54 PM, Greg Farnum (gregory.far...@inktank.com)
wrote:
>
>I was also able to repr
I was also able to reproduce this, guys, but I believe it’s specific to the
mode of testing rather than to anything being wrong with the OSD. In
particular, after restarting the OSD whose file I removed and running repair,
it did so successfully.
The OSD has an “fd cacher” which caches open file
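A rough sketch of the sequence described above, with a hypothetical OSD id and PG id (the exact restart command depends on your init system):

  sudo service ceph restart osd.2    # restart the OSD whose file was removed
  ceph pg repair 0.6                 # then ask the primary to repair the PG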
Hi Sir,
I am trying to implement Ceph following
http://ceph.com/docs/master/start/quick-ceph-deploy/
All my servers are VMware instances. All steps work fine except prepare/create
OSD. I tried
ceph-deploy osd prepare ceph-node2:/tmp/osd0 ceph-node3:/tmp/osd1
and I also tried to use an extra HD
ceph-
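For what it's worth, a minimal sketch of the two variants of that step as I understand them; host and device names are placeholders:

  # directory-backed OSDs
  ceph-deploy osd prepare ceph-node2:/tmp/osd0 ceph-node3:/tmp/osd1
  ceph-deploy osd activate ceph-node2:/tmp/osd0 ceph-node3:/tmp/osd1

  # whole-disk OSD on an extra (virtual) disk
  ceph-deploy disk zap ceph-node2:sdb
  ceph-deploy osd prepare ceph-node2:sdb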
>-Original Message-
>From: Tyler Brekke [mailto:tyler.bre...@inktank.com]
>Sent: Thursday, October 24, 2013 4:36 AM
>To: Gruher, Joseph R
>Cc: ceph-users@lists.ceph.com
>Subject: Re: [ceph-users] Default PGs
>
>You have to do this before creating your first monitor as the default pools are
On 10/23/2013 1:14 PM, Gruher, Joseph R wrote:
Hi all,
I have CentOS 6.4 with 3.11.6 kernel running (built from latest stable
on kernel.org) and I cannot load the rbd client module. Do I need
to do anything to enable/install it? Shouldn't it be present in this
kernel?
[ceph@joceph05 /]
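A quick sketch of how one might check for and load the module, assuming a packaged kernel config under /boot (adjust the path for a self-built kernel):

  grep BLK_DEV_RBD /boot/config-$(uname -r)   # expect CONFIG_BLK_DEV_RBD=m or =y
  sudo modprobe rbd
  lsmod | grep rbd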
On 2013-10-24 15:08, Nathan Stratton wrote:
9 - Samsung 840 EVO 120 GB SSD (1 root 8 ceph)
The EVO is a TLC drive with durability of about 1,100 write cycles.
Whether that is or isn't a problem in your environment of course is a
separate question - I'm just pointing it out :) If they are
>-Original Message-
>From: Alfredo Deza [mailto:alfredo.d...@inktank.com]
>Sent: Thursday, October 24, 2013 5:24 AM
>To: Gruher, Joseph R
>Cc: ceph-users@lists.ceph.com
>Subject: Re: [ceph-users] ceph-deploy hang on CentOS 6.4
>
>On Wed, Oct 23, 2013 at 12:43 PM, Gruher, Joseph R
> wrote:
On 24/10/2013 14:55, Yan, Zheng wrote:
On Thu, Oct 24, 2013 at 9:13 PM, Michael wrote:
On 24/10/2013 13:53, Yan, Zheng wrote:
On Thu, Oct 24, 2013 at 5:43 PM, Michael
wrote:
On 24/10/2013 03:09, Yan, Zheng wrote:
On Thu, Oct 24, 2013 at 6:44 AM, Michael
wrote:
Trying to gather some more in
On 10/24/2013 11:38 AM, Nathan Stratton wrote:
On Thu, Oct 24, 2013 at 11:19 AM, Kyle Bader wrote:
If you are talking about the links from the nodes with OSDs to their
ToR switches then I would suggest going with Twinax cables. Twinax
doesn't go very far but it's really durable and uses less po
On Thu, Oct 24, 2013 at 11:19 AM, Kyle Bader wrote:
> If you are talking about the links from the nodes with OSDs to their
> ToR switches then I would suggest going with Twinax cables. Twinax
> doesn't go very far but it's really durable and uses less power than
> 10GBase-T. Here's a blog post tha
On Thu, Oct 24, 2013 at 9:48 AM, Mark Nelson wrote:
> Ceph does work with IPoIB. We've got some people working on rsocket support,
> and Mellanox just opensourced VMA, so there are some options on the
> infiniband side if you want to go that route. With QDR and IPoIB we have
> been able to push a
> I know that 10GBase-T has more delay than SFP+ with direct attached
> cables (.3 usec vs 2.6 usec per link), but does that matter? Some
> sites say it is a huge hit, but we are talking usec, not ms, so I
> find it hard to believe that it causes that much of an issue. I like
> the lower cost and
On 10/24/2013 03:36 PM, Martin Catudal wrote:
Hi,
Here is my scenario:
I will have a small cluster (4 nodes) with four 4 TB OSDs per node.
I will have the OS installed on two SSDs in a RAID 1 configuration.
I would never run your journal in RAID-1 on SSDs. It means you'll 'burn'
through them at
Thanks for the explanation; that makes sense.
Tim
-Original Message-
From: Tyler Brekke [mailto:tyler.bre...@inktank.com]
Sent: Thursday, October 24, 2013 6:42 AM
To: Snider, Tim
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] num of placement groups created for default pools
Hey
On Thu, Oct 24, 2013 at 6:31 AM, Guang Yang wrote:
> Hi Mark, Greg and Kyle,
> Sorry to respond this late, and thanks for providing the directions for me
> to look at.
>
> We have exactly the same setup for OSD and pool replica (and I even tried to
> create the same number of PGs within the small clus
Thanks Kurt,
That comforts me in my decision to separate OS and journal.
Martin
Martin Catudal
IT Manager
Ressources Metanor Inc
Direct line: (819) 218-2708
On 2013-10-24 10:40, Kurt Bauer wrote:
> Hi,
>
> we had a setup like this and ran into trouble, so I would strongly
> disco
On 10/24/2013 09:08 AM, Nathan Stratton wrote:
I have tried to make GlusterFS work for the last 2 years on different
projects and have given up. With Gluster I have always used 10 gig
InfiniBand. It's dirt cheap (about $80 a port used, including switch)
and very low latency; however, ceph does not supp
Hi,
we had a setup like this and ran into trouble, so I would strongly
discourage you from setting it up like this. Under normal circumstances
there's no problem, but when the cluster is under heavy load, for
example when it has a lot of pgs backfilling, for whatever reason
(increasing num of pgs,
On 10/24/2013 03:12 PM, David J F Carradice wrote:
Hi.
I am getting an error on adding monitors to my cluster.
ceph@ceph-deploy:~/my-cluster$ ceph-deploy mon create ceph-osd01
[ceph_deploy.cli][INFO ] Invoked (1.2.7): /usr/bin/ceph-deploy mon
create ceph-osd01
[ceph_deploy.mon][DEBUG ] Deployin
A bug would be great. Thanks!
sage
Gaylord Holder wrote:
>Works perfectly.
>
>My only gripe is --cluster isn't listed as a valid argument from
>
> ceph-mon --help
>
>and the only reference searching for --cluster in the ceph
>documentation
>is in regards to ceph-rest-api.
>
>Shall I file a bug
Hi.
I am getting an error on adding monitors to my cluster.
ceph@ceph-deploy:~/my-cluster$ ceph-deploy mon create ceph-osd01
[ceph_deploy.cli][INFO ] Invoked (1.2.7): /usr/bin/ceph-deploy mon create
ceph-osd01
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts ceph-osd01
[ceph_deploy.mo
Thanks Mark.
I cannot connect to my hosts; I will do the check and get back to you tomorrow.
Thanks,
Guang
On 2013-10-24, at 9:47 PM, Mark Nelson wrote:
> On 10/24/2013 08:31 AM, Guang Yang wrote:
>> Hi Mark, Greg and Kyle,
>> Sorry to respond this late, and thanks for providing the directions for
>>
I have tried to make GlusterFS work for the last 2 years on different
projects and have given up. With Gluster I have always used 10 gig
InfiniBand. It's dirt cheap (about $80 a port used, including switch)
and very low latency; however, ceph does not support it so we are
looking at ethernet.
I know tha
On Thu, Oct 24, 2013 at 9:13 PM, Michael wrote:
> On 24/10/2013 13:53, Yan, Zheng wrote:
>>
>> On Thu, Oct 24, 2013 at 5:43 PM, Michael
>> wrote:
>>>
>>> On 24/10/2013 03:09, Yan, Zheng wrote:
On Thu, Oct 24, 2013 at 6:44 AM, Michael
wrote:
>
> Trying to gather some more in
Try passing --cluster csceph instead of the config file path and I suspect it
will work.
sage
Gaylord Holder wrote:
>I'm trying to bring up a ceph cluster not named ceph.
>
>I'm running version 0.61.
>
>From my reading of the documentation, the $cluster metavariable is set
>by the basename of the
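A minimal sketch of what that looks like, using the csceph name from this thread; the monitor id 'a' is hypothetical:

  # config lives at /etc/ceph/csceph.conf, so $cluster expands to csceph
  ceph-mon --cluster csceph -i a
  ceph --cluster csceph -s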
I have a filesystem shared by several systems mounted on 2 ceph nodes
with a 3rd as a reference monitor.
It's been used for a couple of months now but suddenly the root
directory for the mount has become inaccessible and requests to files in
it just hang; there are no ceph errors reported before/a
On 10/24/2013 08:31 AM, Guang Yang wrote:
> Hi Mark, Greg and Kyle,
> Sorry to respond this late, and thanks for providing the directions for
> me to look at.
>
> We have exactly the same setup for OSD and pool replica (and I even tried to
> create the same number of PGs within the small cluster), h
Hi,
Here is my scenario:
I will have a small cluster (4 nodes) with four 4 TB OSDs per node.
I will have the OS installed on two SSDs in a RAID 1 configuration.
Has any of you successfully and efficiently run a Ceph cluster that is
built with the journal on a separate partition on the OS SSDs?
I know
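If it helps, a minimal ceph-deploy sketch for putting an OSD's journal on a separate SSD partition; host, disk, and partition names are placeholders:

  # host:data-disk:journal-device
  ceph-deploy osd prepare node1:sdb:/dev/sda5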
Hi Mark, Greg and Kyle,
Sorry to respond this late, and thanks for providing the directions for me to
look at.
We have exactly the same setup for OSD and pool replica (and I even tried to create
the same number of PGs within the small cluster), however, I can still
reproduce this constantly.
This
On 24/10/2013 13:53, Yan, Zheng wrote:
On Thu, Oct 24, 2013 at 5:43 PM, Michael wrote:
On 24/10/2013 03:09, Yan, Zheng wrote:
On Thu, Oct 24, 2013 at 6:44 AM, Michael
wrote:
Trying to gather some more info.
CentOS - hanging ls
[root@srv ~]# cat /proc/14614/stack
[] wait_answer_interruptible+
Works perfectly.
My only gripe is --cluster isn't listed as a valid argument from
ceph-mon --help
and the only reference searching for --cluster in the ceph documentation
is in regards to ceph-rest-api.
Shall I file a bug to correct the documentation?
Thanks again for the quick and accurat
On Thu, Oct 24, 2013 at 5:43 PM, Michael wrote:
> On 24/10/2013 03:09, Yan, Zheng wrote:
>>
>> On Thu, Oct 24, 2013 at 6:44 AM, Michael
>> wrote:
>>>
>>> Trying to gather some more info.
>>>
>>> CentOS - hanging ls
>>> [root@srv ~]# cat /proc/14614/stack
>>> [] wait_answer_interruptible+0x81/0xc0
On Wed, Oct 23, 2013 at 12:43 PM, Gruher, Joseph R
wrote:
>
>
>>-Original Message-
>>From: Alfredo Deza [mailto:alfredo.d...@inktank.com]
>>
>>Did you try working with the `--no-adjust-repos` flag in ceph-deploy? It
>>will
>>allow you to tell ceph-deploy to just go and install ceph wit
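As a sketch, that flag goes on the install step; the hostnames here are placeholders:

  ceph-deploy install --no-adjust-repos ceph-node1 ceph-node2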
Hey Tim,
If you deployed with ceph-deploy then your monitors started without
knowledge of how many OSDs you will be adding to your cluster. You can
add 'osd_pool_default_pg_num' and 'osd_pool_default_pgp_num' to your
ceph.conf before creating your monitors to have the default pools
created with th
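A minimal sketch of what that ceph.conf fragment might look like; the values are illustrative only and should be chosen from your OSD count and replica size:

  [global]
  osd pool default pg num = 512
  osd pool default pgp num = 512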
You have to do this before creating your first monitor as the default
pools are created by the monitor.
Now any pools you create should have the correct number of placement
groups though.
You can also increase your pg and pgp num with,
ceph osd pool set <pool> pg_num <num>
ceph osd pool set <pool> pgp_num <num>
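For example, assuming a pool named rbd and a target of 256 placement groups (both illustrative):

  ceph osd pool set rbd pg_num 256
  ceph osd pool set rbd pgp_num 256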
I'm trying to bring up a ceph cluster not named ceph.
I'm running version 0.61.
From my reading of the documentation, the $cluster metavariable is set
by the basename of the configuration file: specifying the configuration
file "/etc/ceph/mycluster.conf" sets the $cluster metavariable to
"myclus
To add -- I thought I was running 0.67.4 on my test cluster (fc 19), but I
appear to be running 0.69. Not sure how that happened as my yum config is
still pointing to dumpling. :)
On Thu, Oct 24, 2013 at 10:52 AM, Matt Thompson wrote:
> Hi Harry,
>
> I was able to replicate this.
>
> What does
Hi Harry,
I was able to replicate this.
What does appear to work (for me) is to do an osd scrub followed by a pg
repair. I've tried this 2x now and in each case the deleted file gets
copied over to the OSD from where it was removed. However, I've tried a
few pg scrub / pg repairs after manually
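A sketch of that sequence, with hypothetical OSD and PG ids:

  ceph osd scrub 1
  ceph pg repair 0.6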
On 24/10/2013 03:09, Yan, Zheng wrote:
On Thu, Oct 24, 2013 at 6:44 AM, Michael wrote:
Trying to gather some more info.
CentOS - hanging ls
[root@srv ~]# cat /proc/14614/stack
[] wait_answer_interruptible+0x81/0xc0 [fuse]
[] fuse_request_send+0x1cb/0x290 [fuse]
[] fuse_do_getattr+0x10c/0x2c0 [f
Hello ceph-users,
we hit a similar problem last Thursday and today. We have a cluster
consisting of 6 storage nodes containing 70 OSDs (JBOD configuration). We
created several rbd devices, mapped them on a dedicated server, and exported
them via targetcli. These iSCSI targets are connected
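For context, a minimal sketch of that kind of setup; pool, image, and backstore names are hypothetical:

  rbd create iscsi-pool/lun0 --size 102400
  rbd map iscsi-pool/lun0        # shows up as /dev/rbd0 (or under /dev/rbd/iscsi-pool/)
  targetcli /backstores/block create name=lun0 dev=/dev/rbd0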