Hi
I'm trying to install a client with a Ceph block device, following the instructions
here:
http://ceph.com/docs/master/start/quick-rbd/
The client has a user 'ceph', SSH is set up passwordless, and sudo is passwordless as well.
When I run ceph-deploy I see:
On the ceph management host:
ceph-deploy install 10.100.21.10
[
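For context, the steps on that page boil down to roughly the following (a sketch;
10.100.21.10 stands in for the client and 'foo' is just an example image name):

  # on the management host
  ceph-deploy install 10.100.21.10
  ceph-deploy admin 10.100.21.10

  # on the client
  rbd create foo --size 4096
  sudo rbd map foo --pool rbd
  sudo mkfs.ext4 -m0 /dev/rbd/rbd/foo
  sudo mount /dev/rbd/rbd/foo /mnt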
On 10/19/2013 08:53 PM, Andrey Korolyov wrote:
Hello,
I was able to reproduce the following on top of current Cuttlefish:
- create a pool,
- delete it after all PGs have initialized,
- create a new pool with the same name after, say, ten seconds.
All OSDs die immediately with the attached trace. The problem e
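In shell terms the sequence is roughly this (a sketch; pool name and PG count
are arbitrary examples):

  ceph osd pool create testpool 128
  # wait until all PGs report active+clean
  ceph osd pool delete testpool testpool --yes-i-really-really-mean-it
  sleep 10
  ceph osd pool create testpool 128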
On Mon, Oct 21, 2013 at 5:25 AM, Fuchs, Andreas (SwissTXT)
wrote:
> Hi
>
> I try to install a client with ceph block device following the instructions
> here:
> http://ceph.com/docs/master/start/quick-rbd/
>
> the client has a user ceph and ssh is setup passwordless also for sudo
> when I run cep
Sage -
Does the journal device need a file system created, and does that device need to be
mounted?
Tim
-Original Message-
From: Sage Weil [mailto:s...@inktank.com]
Sent: Thursday, October 17, 2013 11:02 AM
To: Snider, Tim
Cc: ceph-users@lists.ceph.com
Subject: RE: [ceph-users] changing from def
Dear ceph-users,
Recently I deployed Ceph clusters with RadosGW, going from a small one (24 OSDs)
to a much bigger one (330 OSDs).
When using rados bench to test the small cluster (24 OSDs), it showed the
average latency was around 3ms (object size is 5K), while for the larger one
(330 OSDs), the av
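For anyone who wants to repeat the measurement, an invocation along these lines
should reproduce it (a sketch; the pool name and thread count are placeholders,
and -b sets the object size in bytes):

  rados bench -p testpool 60 write -b 5120 -t 16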
On Sun, Oct 20, 2013 at 08:58:58PM -0700, Sage Weil wrote:
> It looks like without LVM we're getting 128KB requests (which IIRC is
> typical), but with LVM it's only 4KB. Unfortunately my memory is a bit
> fuzzy here, but I seem to recall a property on the request_queue or device
> that affecte
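A quick way to compare what the raw device and the device-mapper device are
advertising is sysfs (device names below are only examples):

  cat /sys/block/sdb/queue/max_sectors_kb
  cat /sys/block/sdb/queue/max_hw_sectors_kb
  cat /sys/block/dm-0/queue/max_sectors_kb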
On 10/21/2013 09:13 AM, Guang Yang wrote:
Dear ceph-users,
Hi!
Recently I deployed Ceph clusters with RadosGW, going from a small one (24 OSDs)
to a much bigger one (330 OSDs).
When using rados bench to test the small cluster (24 OSDs), it showed the
average latency was around 3ms (object size
On Mon, Oct 21 2013 at 11:01am -0400,
Mike Snitzer wrote:
> On Mon, Oct 21 2013 at 10:11am -0400,
> Christoph Hellwig wrote:
>
> > On Sun, Oct 20, 2013 at 08:58:58PM -0700, Sage Weil wrote:
> > > It looks like without LVM we're getting 128KB requests (which IIRC is
> > > typical), but with LVM
On Mon, Oct 21 2013 at 10:11am -0400,
Christoph Hellwig wrote:
> On Sun, Oct 20, 2013 at 08:58:58PM -0700, Sage Weil wrote:
> > It looks like without LVM we're getting 128KB requests (which IIRC is
> > typical), but with LVM it's only 4KB. Unfortunately my memory is a bit
> > fuzzy here, but I
Hi,
You don't need a filesystem on the partition, nor does it have to be
mounted; you can point the journal directly at the raw partition.
Best regards,
Kurt
Snider, Tim schrieb:
> Sage -
> The journal device needs a file system created does that device need to be
> mounted?
> Tim
>
> -Ori
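For example (a sketch; device path and OSD id are placeholders), you can either
set it in ceph.conf:

  [osd.0]
  osd journal = /dev/sdb1

or symlink the journal in the OSD data directory straight to the partition:

  ln -s /dev/sdb1 /var/lib/ceph/osd/ceph-0/journal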
Hi all,
I'm using Ceph as a filestore for my nginx web server, in order to have
shared storage, and redundancy with automatic failover.
The cluster is not high spec, but given my use case (lots of images) I am
very disappointed with the current throughput I'm getting, and was
hoping for so
On Mon, 21 Oct 2013, Snider, Tim wrote:
> Sage -
> The journal device needs a file system created does that device need to be
> mounted?
Yes.. the mkjournal step needs to write to the journal (whether it's a
file or block device).
sage
> Tim
>
> -Original Message-
> From: Sage Weil
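For what it's worth, that write happens when the journal is created, e.g. (the
OSD id is just an example):

  ceph-osd -i 0 --mkjournal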
Your reply seems to contradict the reply from Sage:
> Sage -
> The journal device needs a file system created does that device need to
be mounted?
Yes.. the mkjournal step needs to write to the journal (whether it's a
file or block device).
Tim
From: Kurt Bauer [mailto:kurt.b
Hi Everybody,
I'm attempting to get Ceph working for CentOS 6.4 running RDO Havana for Cinder
volume storage and boot-from-volume, and I keep bumping into very unhelpful
errors on my nova-compute test node and my cinder controller node.
Here is what I see on my cinder-volume controller (Node
On Mon, 21 Oct 2013, Snider, Tim wrote:
>
> Your reply seems to contradict the reply from Sage:
>
> > Sage -
>
> > The journal device needs a file system created does that device need
> to be mounted?
>
> Yes.. the mkjournal step needs to writes to the journal (wehther
> it's a
On Mon, 21 Oct 2013, Mike Snitzer wrote:
> On Mon, Oct 21 2013 at 10:11am -0400,
> Christoph Hellwig wrote:
>
> > On Sun, Oct 20, 2013 at 08:58:58PM -0700, Sage Weil wrote:
> > > It looks like without LVM we're getting 128KB requests (which IIRC is
> > > typical), but with LVM it's only 4KB. Un
On 10/21/2013 09:03 AM, Andrew Richards wrote:
Hi Everybody,
I'm attempting to get Ceph working for CentOS 6.4 running RDO Havana for
Cinder volume storage and boot-from-volume, and I keep bumping into
very unhelpful errors on my nova-compute test node and my cinder
controller node.
Here is w
On Mon, Oct 21, 2013 at 7:13 AM, Guang Yang wrote:
> Dear ceph-users,
> Recently I deployed a ceph cluster with RadosGW, from a small one (24 OSDs)
> to a much bigger one (330 OSDs).
>
> When using rados bench to test the small cluster (24 OSDs), it showed the
> average latency was around 3ms (o
Hello all,
Similar to this post from last month, I am experiencing 2 nodes that are
constantly crashing upon start up:
http://www.spinics.net/lists/ceph-users/msg04589.html
Here are the logs from the 2 without the debug commands, here:
http://pastebin.com/cB9ML5md and http://pastebin.com/csHHj
Hi,
We have been testing a ceph cluster with the following specs:
3 mons
72 OSDs spread across 6 Dell R-720xd servers
4 TB SAS drives
4 bonded 10 GigE NIC ports per server
64 GB of RAM
Up until this point we have been running tests using the default journal size
of '1024'.
Before we start to
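For reference, the journal size is set in ceph.conf in MB; a sketch of bumping
it (10240 is only an example value):

  [osd]
  osd journal size = 10240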
Thanks for the response Josh!
If the Ceph CLI tool still needs to be there for Cinder in Havana, then am I
correct in assuming that I still also need to export "CEPH_ARGS='--id volumes'"
in my cinder init script for the sake of cephx like I had to do in Grizzly?
Thanks,
Andy
On Oct 21, 2013, a
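For concreteness, the Grizzly-era workaround referred to above is just an
environment export in the cinder-volume init script ('volumes' being whatever
cephx id Cinder runs as):

  export CEPH_ARGS="--id volumes"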
On 10/21/2013 10:35 AM, Andrew Richards wrote:
Thanks for the response Josh!
If the Ceph CLI tool still needs to be there for Cinder in Havana, then
am I correct in assuming that I still also need to export
"CEPH_ARGS='--id volumes'" in my cinder init script for the sake of
cephx like I had to d
On Mon, Oct 21 2013 at 12:02pm -0400,
Sage Weil wrote:
> On Mon, 21 Oct 2013, Mike Snitzer wrote:
> > On Mon, Oct 21 2013 at 10:11am -0400,
> > Christoph Hellwig wrote:
> >
> > > On Sun, Oct 20, 2013 at 08:58:58PM -0700, Sage Weil wrote:
> > > > It looks like without LVM we're getting 128KB req
On Mon, Oct 21, 2013 at 1:21 PM, Shain Miley wrote:
> Hi,
>
> We have been testing a ceph cluster with the following specs:
>
> 3 Mon's
> 72 OSD's spread across 6 Dell R-720xd servers
> 4 TB SAS drives
> 4 bonded 10 GigE NIC ports per server
> 64 GB of RAM
>
> Up until this point we have been runn
On Mon, 21 Oct 2013, Mike Snitzer wrote:
> On Mon, Oct 21 2013 at 12:02pm -0400,
> Sage Weil wrote:
>
> > On Mon, 21 Oct 2013, Mike Snitzer wrote:
> > > On Mon, Oct 21 2013 at 10:11am -0400,
> > > Christoph Hellwig wrote:
> > >
> > > > On Sun, Oct 20, 2013 at 08:58:58PM -0700, Sage Weil wrote:
On Mon, Oct 21 2013 at 2:06pm -0400,
Christoph Hellwig wrote:
> On Mon, Oct 21, 2013 at 11:01:29AM -0400, Mike Snitzer wrote:
> > It isn't DM that splits the IO into 4K chunks; it is the VM subsystem
> > no?
>
> Well, it's the block layer based on what DM tells it. Take a look at
> dm_merge_bv
On Mon, Oct 21, 2013 at 11:01:29AM -0400, Mike Snitzer wrote:
> It isn't DM that splits the IO into 4K chunks; it is the VM subsystem
> no?
Well, it's the block layer based on what DM tells it. Take a look at
dm_merge_bvec
From dm_merge_bvec:
/*
* If the target doesn't support
Thanks, Josh. I am able to boot from by RBD Cinder volumes now.
Thanks,
Andy
On Oct 21, 2013, at 1:38 PM, Josh Durgin wrote:
> On 10/21/2013 10:35 AM, Andrew Richards wrote:
>> Thanks for the response Josh!
>>
>> If the Ceph CLI tool still needs to be there for Cinder in Havana, then
>> am I
Hi Loic,
On 10/19/2013 02:57 PM, Loic Dachary wrote:
Hi Ceph,
I don't know if anyone thought about asking for a Ceph stand during FOSDEM. If
there was one, I would volunteer to sit at the table during a full day. The
requirement is that there are at least two persons at all times.
https://fo
So I'm running into this issue again and after spending a bit of time
reading the XFS mailing lists, I believe the free space is too
fragmented:
[root@den2ceph001 ceph-0]# xfs_db -r "-c freesp -s" /dev/sdb1
   from      to extents  blocks    pct
      1       1   85773   85773   0.24
      2
On Mon, Oct 21, 2013 at 8:05 AM, Pieter Steyn wrote:
> Hi all,
>
> I'm using Ceph as a filestore for my nginx web server, in order to have
> shared storage, and redundancy with automatic failover.
>
> The cluster is not high spec, but given my use case (lots of images) - I am
> very dissapointed w
It looks like an xattr vanished from one of your objects on osd.3.
What fs are you running?
On Mon, Oct 21, 2013 at 9:58 AM, Jeff Williams wrote:
> Hello all,
>
> Similar to this post from last month, I am experiencing 2 nodes that are
> constantly crashing upon start up:
> http://www.spinics.net
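If it helps narrow it down, the object's xattrs can be inspected directly on the
OSD's filestore with getfattr (the path below is only an illustration of where
the object file lives):

  getfattr -d -m '.*' /var/lib/ceph/osd/ceph-3/current/<pgid>_head/<object-file>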
We're running xfs on a 3.8.0-31-generic kernel
Thanks,
Jeff
On 10/21/13 1:54 PM, "Samuel Just" wrote:
>It looks like an xattr vanished from one of your objects on osd.3.
>What fs are you running?
>
>On Mon, Oct 21, 2013 at 9:58 AM, Jeff Williams
>wrote:
>> Hello all,
>>
>> Similar to this post
On Sun, Oct 20, 2013 at 9:04 PM, Derek Yarnell wrote:
> So I have tried to enable usage logging on a new production Ceph RadosGW
> cluster but nothing seems to show up.
>
> I have added to the [client.radosgw.] section the following
>
> rgw enable usage log = true
> rgw usage log tick interval = 3
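For comparison, a typical usage-log setup looks roughly like this (a sketch;
values are examples), with radosgw-admin used to read the log back:

  rgw enable usage log = true
  rgw usage log tick interval = 30
  rgw usage log flush threshold = 1024

  radosgw-admin usage show --show-log-entries=false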
Can someone tell me what I'm missing? I have a radosgw user created.
I get the following complaint when I try to create a pool with or without the
--uid parameter:
#radosgw-admin pool add --pool=radosPool --uid=rados
failed to add bucket placement: (2) No such file or directory
Thanks,
Tim
Timo
Alfredo,
Thanks a lot for the info.
I'll make sure I have an updated version of ceph-deploy and give it another
shot.
Shain
Shain Miley | Manager of Systems and Infrastructure, Digital Media |
smi...@npr.org | 202.513.3649
From: Alfredo Deza [alfredo.
Can you get the pg to recover without osd.3?
-Sam
On Mon, Oct 21, 2013 at 1:59 PM, Jeff Williams wrote:
> We're running xfs on a 3.8.0-31-generic kernel
>
> Thanks,
> Jeff
>
> On 10/21/13 1:54 PM, "Samuel Just" wrote:
>
>>It looks like an xattr vanished from one of your objects on osd.3.
>>What
What is the best way to do that? I tried ceph pg repair, but it only did
so much.
On 10/21/13 3:54 PM, "Samuel Just" wrote:
>Can you get the pg to recover without osd.3?
>-Sam
>
>On Mon, Oct 21, 2013 at 1:59 PM, Jeff Williams
>wrote:
>> We're running xfs on a 3.8.0-31-generic kernel
>>
>> Than
On Mon, Oct 21, 2013 at 2:46 PM, Snider, Tim wrote:
>
> Can someone tell me what I'm missing. I have a radosgw user created.
>
> I get the following complaint when I try to create a pool with or without the
> --uid parameter:
>
>
> #radosgw-admin pool add --pool=radosPool --uid=rados
>
> failed t
What happened when you simply left the cluster to recover without osd.11 in?
-Sam
On Mon, Oct 21, 2013 at 4:01 PM, Jeff Williams wrote:
> What is the best way to do that? I tried ceph pg repair, but it only did
> so much.
>
> On 10/21/13 3:54 PM, "Samuel Just" wrote:
>
>>Can you get the pg to re
Can anyone suggest a straightforward way to import a VHD to a Ceph RBD? The
easier the better!
Thanks
James
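One straightforward route (a sketch; it assumes qemu-img was built with rbd
support, and the pool/image names are placeholders) is to convert the VHD
directly into an RBD image:

  qemu-img convert -f vpc -O raw disk.vhd rbd:rbd/myimage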
I apologize, I should have mentioned that both osd.3 and osd.11 crash
immediately, and if I do not 'set noout', the crash cascades to the rest of the
cluster.
Thanks,
Jeff
Sent from my Samsung Galaxy Note™, an AT&T LTE smartphone
Original message
From: Samuel Just
Date: 1
On 10/16/2013 04:25 PM, Kelcey Jamison Damage wrote:
Hi,
I have gotten so close to having Ceph work in my cloud, but I have reached
a roadblock. Any help would be greatly appreciated.
I receive the following error when trying to get KVM to run a VM with an
RBD volume:
Libvirtd.log:
2013-10-16 22
On 22/10/13 14:19, Josh Durgin wrote:
> On 10/16/2013 04:25 PM, Kelcey Jamison Damage wrote:
>> Hi,
>>
>> I have gotten so close to have Ceph work in my cloud but I have reached a
>> roadblock. Any help
>> would be greatly appreciated.
>>
>> I recei
Besides what Mark and Greg said, it could be due to additional hops through
network devices. What network devices are you using, what is the network
topology and does your CRUSH map reflect the network topology?
On Oct 21, 2013 9:43 AM, "Gregory Farnum" wrote:
> On Mon, Oct 21, 2013 at 7:13 AM, Gu
> Try moving the above configurables to the global section, if it's
> working then you're probably using the wrong section.
Moving sections doesn't seem to change the behavior. My two other test
gateways seem to be working fine with similar configs, all running
0.67.4 (slightly patched for ACLs[1
Oh hi,
Turns out I solved it; it works with libvirt directly via CloudStack. The only
major modification is to ensure you don't accidentally use client.user for
authentication and just use user.
My guess is the error I had was related to testing with virsh.
Thanks for the reply.
- Or
Hi,
I have purchased my hardware for my Ceph storage cluster but have not
opened any of my 960GB SSD drive boxes, since I need to answer my question first.
Here's my hardware.
THREE servers: dual 6-core Xeon, 2U, with 8 hot-swap trays plus 2 SSDs
mounted internally.
In each server I will have:
2
On 22/10/13 15:05, Martin Catudal wrote:
Hi,
I have purchase my hardware for my Ceph storage cluster but did not
open any of my 960GB SSD drive box since I need to answer my question first.
Here's my hardware.
THREE server Dual 6 core Xeon 2U capable with 8 hotswap tray plus 2 SSD
mount i
On Mon, Oct 21, 2013 at 7:05 PM, Martin Catudal wrote:
> Hi,
> I have purchase my hardware for my Ceph storage cluster but did not
> open any of my 960GB SSD drive box since I need to answer my question first.
>
> Here's my hardware.
>
> THREE server Dual 6 core Xeon 2U capable with 8 hotswap
Hi,
Plus, reads will still come from your non-SSD disks unless you're using
something like flashcache in front, and as Greg said, having many more IOPS
available for your DB often makes a difference (depending on load, usage
etc., of course).
We're using Samsung Pro 840 256GB pretty much like Martin descri
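Putting the journals on the SSDs is then just a matter of the journal path when
the OSDs are prepared, e.g. with ceph-deploy (host and device names are
placeholders):

  ceph-deploy osd create node1:sdb:/dev/sdc1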
On 21/10/2013 22:45, Gregory Farnum wrote:
On Mon, Oct 21, 2013 at 8:05 AM, Pieter Steyn wrote:
Hi all,
I'm using Ceph as a filestore for my nginx web server, in order to have
shared storage, and redundancy with automatic failover.
The cluster is not high spec, but given my use case (lots of