- Original Message -
> From: "Sage Weil"
> To: "Keith Phua"
> Cc: ceph-us...@ceph.com
> Sent: Friday, August 23, 2013 12:48:18 PM
> Subject: Re: [ceph-users] Network failure scenarios
>
> On Fri, 23 Aug 2013, Keith Phua wrote:
> > Hi,
> >
> > It was mentioned in the devel mailing list
Hi,
My RadosGW (0.67.2) setup keeps complaining about the following error:
2013-08-23 10:20:32.920808 7f9e298fb7c0 0 WARNING: cannot read region map
# radosgw-admin region-map get
failed to read region map: (2) No such file or directory
What could be wrong with my setup?
Cheers,
Tobias
Hi,
I'm trying to use radosgw with s3cmd:
# s3cmd ls
# s3cmd mb s3://bucket-1
ERROR: S3 error: 405 (MethodNotAllowed):
So there seems to be something missing regarding buckets. How can I
create buckets? What do I have to configure on the radosgw side to have
buckets working?
Cheers,
Tob
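A common cause of a 405 on bucket creation is virtual-host-style bucket
addressing: s3cmd sends the request to <bucket>.<host>, and if the gateway
does not recognize that hostname it cannot extract the bucket name. A minimal
sketch of the relevant settings, with hostname and keys as placeholders:
#--- ~/.s3cfg (placeholder values)
access_key = <your-access-key>
secret_key = <your-secret-key>
host_base = rgw.example.com
host_bucket = %(bucket)s.rgw.example.com
use_https = False
#--- ceph.conf, radosgw client section (placeholder hostname)
rgw dns name = rgw.example.com
A wildcard DNS record (*.rgw.example.com) pointing at the gateway is also
needed so the per-bucket hostnames resolve.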
Hello,
Dumpling now supports crc32c, which is available on recent Intel processors
(not on AMD ones, AFAIK).
How much does it affect performance?
Does it affect the 1 GHz requirement per OSD?
Should we avoid processors without crc32c support?
Regards, Luc.
Hi, we built a ceph cluster with the following network setup
eth0 is on a management network (access for admins and monitoring tools)
eth1 is ceph sync
eth2 is ceph public
deployed by ceph-deploy I have the following config
[global]
fsid = 18c6b4db-b936-43a2-ba68-d750036036cc
mon_initial_members =
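For a layout like this, the daemons can be pointed at the right interfaces
with the public/cluster network options; a sketch of the [global] entries,
with placeholder subnets standing in for whatever ranges are bound to eth2
(public) and eth1 (sync):
#--- placeholder subnets, substitute your own
public network = 192.168.2.0/24
cluster network = 192.168.1.0/24
Client and monitor traffic then uses the public network, while OSD
replication and recovery traffic uses the cluster network; the management
interface (eth0) needs no Ceph configuration at all.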
On Thu, Aug 22, 2013 at 03:32:35PM +0200, raj kumar wrote:
>ceph cluster is running fine in centos6.4.
>
>Now I would like to export the block device to client using rbd.
>
>my question is,
Hi Greg,
We are using RBD for most of our VM images and volumes. But, if you spin
off an instance from a Glance image without specifying a boot volume,
Glance caches the image (/var/lib/nova/instances/_base) on Nova node where
this instance is scheduled.. You can use a shared file system for Glan
Thank you Sir. I appreciate your help on this.
I upgraded the kernel to 3.4.53-8.
For the second point, I want to give a client (which is not KVM) block
storage. So without iSCSI, how will the client access the Ceph cluster and
the allocated block device? And can you please let me know the flow to
provi
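For a client that is not a hypervisor, the usual iSCSI-free path is the
in-kernel RBD client: create an image on the cluster, map it on the client,
and it shows up as an ordinary block device. A minimal sketch, assuming an
image named vol1 in the default rbd pool (names and sizes are placeholders)
and a client that already has ceph.conf, a keyring, and a kernel with the
rbd module:
#--- on any node with admin access
rbd create vol1 --size 10240
#--- on the client
rbd map vol1
rbd showmapped
mkfs.ext4 /dev/rbd0
mount /dev/rbd0 /mnt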
On 08/22/2013 11:12 PM, Tobias Brunner wrote:
Hi,
I'm trying to create a snapshot from a KVM VM:
# virsh snapshot-create one-5
error: unsupported configuration: internal checkpoints require at least
one disk to be selected for snapshot
RBD should support such snapshots, according to the wiki:
h
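As a fallback, a snapshot can also be taken at the RBD layer rather than
through libvirt; a minimal sketch with pool and image names as placeholders
(pause or quiesce the guest first if a consistent snapshot is needed):
# rbd ls <pool>
# rbd snap create <pool>/<image>@snap1
# rbd snap ls <pool>/<image>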
Once the cluster is created on Ceph server nodes with MONs and OSDs on it
you have to copy the config + auth info to the clients:
#--- on server node, e.g.:
scp /etc/ceph/ceph.conf client-1:/etc/ceph
scp /etc/ceph/keyring.bin client-1:/etc/ceph
scp /etc/ceph/ceph.conf client-
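Instead of handing out the admin keyring, each client can also be given its
own restricted key; a sketch, with the client name, pool, and keyring path
as placeholders:
#--- on a node with admin access
ceph auth get-or-create client.client-1 mon 'allow r' osd 'allow rwx pool=rbd' -o /etc/ceph/ceph.client.client-1.keyring
scp /etc/ceph/ceph.client.client-1.keyring client-1:/etc/ceph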
On 08/23/2013 04:20 AM, Luc Dumaine wrote:
Hello,
Dumpling now supports crc32c, which is available on recent Intel
processors (not on AMD ones, AFAIK).
How much does it affect performance?
Does it affect the 1 GHz requirement per OSD?
Should we avoid processors without crc32c support?
2013/8/22 Sage Weil
> On Thu, 22 Aug 2013, Mihály Árva-Tóth wrote:
> > Hello,
> >
> > Is there any method for one radosgw user to have more than one
> access/secret_key?
>
> Yes, you can have multiple keys for each user:
>
> radosgw-admin key create ...
>
Hello Sage,
Thank you. Here is an example f
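For reference, a sketch of adding a second S3 key pair to an existing user
and checking the result (the uid is a placeholder):
# radosgw-admin key create --uid=johndoe --key-type=s3 --gen-access-key --gen-secret
# radosgw-admin user info --uid=johndoe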
Hello,
I have an user with 3 subuser:
{ "user_id": "johndoe",
"display_name": "John Doe",
"email": "",
"suspended": 0,
"max_buckets": 1000,
"auid": 0,
"subusers": [
{ "id": "johndoe:readonly",
"permissions": "read"},
{ "id": "johndoe:swift",
"permis
Hi all,
There is a new bug-fix release of ceph-deploy, the easy ceph deployment tool.
Installation instructions: https://github.com/ceph/ceph-deploy#installation
This is the list of all fixes that went into this release which can
also be found in the CHANGELOG.rst file in ceph-deploy's git repo:
On Fri, Aug 23, 2013 at 1:35 AM, Tobias Brunner wrote:
> Hi,
>
> My RadosGW (0.67.2) setup keeps complaining about the following error:
>
> 2013-08-23 10:20:32.920808 7f9e298fb7c0 0 WARNING: cannot read region map
>
> # radosgw-admin region-map get
> failed to read region map: (2) No such file or
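If the region map simply has not been initialized yet, regenerating it is
usually enough; a sketch, noting that the exact subcommand spelling
(region-map vs. regionmap) has varied between releases:
# radosgw-admin region-map update
# radosgw-admin region-map get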
On Fri, Aug 23, 2013 at 1:47 AM, Tobias Brunner wrote:
> Hi,
>
> I'm trying to use radosgw with s3cmd:
>
> # s3cmd ls
>
> # s3cmd mb s3://bucket-1
> ERROR: S3 error: 405 (MethodNotAllowed):
>
> So there seems to be something missing regarding buckets. How can I
> create buckets? What do I have
Hi,
On Fri, Aug 23, 2013 at 1:47 AM, Tobias Brunner
wrote:
Hi,
I'm trying to use radosgw with s3cmd:
# s3cmd ls
# s3cmd mb s3://bucket-1
ERROR: S3 error: 405 (MethodNotAllowed):
So there seems to be something missing regarding buckets. How can I
create buckets? What do I have to configu
On Fri, 23 Aug 2013, Keith Phua wrote:
>
>
> - Original Message -
> > From: "Sage Weil"
> > To: "Keith Phua"
> > Cc: ceph-us...@ceph.com
> > Sent: Friday, August 23, 2013 12:48:18 PM
> > Subject: Re: [ceph-users] Network failure scenarios
> >
> > On Fri, 23 Aug 2013, Keith Phua wrote:
Hi Andreas,
On Fri, 23 Aug 2013, Fuchs, Andreas (SwissTXT) wrote:
> Hi, we built a ceph cluster with the following network setup
>
> eth0 is on a management network (access for admins and monitoring tools)
> eth1 is ceph sync
> eth2 is ceph public
>
> deployed by ceph-deploy I have the following c
Hi,
Are there any known issues with multipart uploads? Has anyone tried it with
the Amazon S3 API?
Thanks
Juan FRANÇOIS
2013/8/21 Juan Pablo FRANÇOIS
> Hello,
>
> I'm trying to upload a multipart file to the radosgw (v 0.67.1) using the
> Amazon S3 API (v 1.5.3) following the example in
> http://docs.a
Hi Greg,
> I haven't had any luck with the seq bench. It just errors every time.
>
Can you confirm you are using the --no-cleanup flag with rados write? This
will ensure there is actually data to read for subsequent seq tests.
~Brian
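In other words, the write pass has to leave its objects in place for the seq
pass to read; a sketch of the usual sequence against a scratch pool (pool
name and duration are placeholders):
# rados bench -p testpool 60 write --no-cleanup
# rados bench -p testpool 60 seq
The leftover benchmark objects can be removed afterwards by deleting the
scratch pool.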
I'm not aware of any multipart upload issues. Just tested it with the
latest, and it seemed to work. Can you provide more details about your
environment and about the failing operations?
Thanks,
Yehuda
On Fri, Aug 23, 2013 at 8:56 AM, Juan FRANÇOIS wrote:
> Hi,
>
> Are there any known issues for
This is an important point release for Dumpling. Most notably, it fixes a
problem when upgrading directly from v0.56.x Bobtail to v0.67.x Dumpling
(without stopping at v0.61.x Cuttlefish along the way). It also fixes a
problem with the CLI parsing of the CEPH_ARGS environment variable (which
cau
On Fri, 23 Aug 2013, Sage Weil wrote:
> * osd: disable PGLog::check() via config option (fixes CPU burn)
Just to clarify this point: the new config option controls some debugging
checks we left in, and is now off (i.e., no expensive checks) by default,
so users don't need to do anything here ex
On Thu, Aug 22, 2013 at 5:23 PM, Greg Poirier wrote:
> On Thu, Aug 22, 2013 at 2:34 PM, Gregory Farnum wrote:
>>
>> You don't appear to have accounted for the 2x replication (where all
>> writes go to two OSDs) in these calculations. I assume your pools have
>
>
> Ah. Right. So I should then be l
Hey Sage,
I'm all for it and will help testing.
Regards,
Oliver
On 22-08-13 17:23, Sage Weil wrote:
> We should perhaps hack the old (cuttlefish and earlier) flushing behavior
> into the new code so that we can confirm that it is really the writeback
> that is causing the problem an
Ah thanks, Brian. I will do that. I was going off the wiki instructions on
performing rados benchmarks. If I have the time later, I will change it
there.
On Fri, Aug 23, 2013 at 9:37 AM, Brian Andrus wrote:
> Hi Greg,
>
>
>> I haven't had any luck with the seq bench. It just errors every time.
>
On Fri, Aug 23, 2013 at 9:53 AM, Gregory Farnum wrote:
>
> Okay. It's important to realize that because Ceph distributes data
> pseudorandomly, each OSD is going to end up with about the same amount
> of data going to it. If one of your drives is slower than the others,
> the fast ones can get ba
I pushed a branch, wip-dumpling-perf. It does two things:
1) adds a config filestore_wbthrottle_enable (defaults to true) to
allow you to disable the wbthrottle altogether
2) causes the wbthrottle when enabled to fdatasync rather than fsync.
Can you rerun the random workload with that branch with
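For reference, a sketch of how that knob would be set persistently (the
option name is the one quoted above; treat this as a test-only setting):
#--- ceph.conf, then restart the OSDs
[osd]
filestore wbthrottle enable = false
or injected at runtime on a test cluster with something like
ceph tell osd.0 injectargs '--filestore_wbthrottle_enable=false'.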
Hey Samuel,
That changed something, for the better. :-)
Your test-version, with wbthrottle off:
# ceph-osd --version
ceph version 0.67.1-18-g3fe3368
(3fe3368ac7178dcd312e89d264d8d81307e582d8)
# ceph --admin-daemon /var/run/ceph/ceph-osd.1.asok config show | grep wbthrottle_enable
"filestore_wbt
I created a server on a virtual machine for testing using Ubuntu server
(Precise 64-bit), following the 5 minute guide that used to be at
http://ceph.com/docs/master/start/quick-start/ and the radosgw one in
http://ceph.com/docs/master/start/quick-rgw/. It was initially a Cuttlefish
installation an
Hey folks,
I've just done a brand new install of 0.67.2 on a cluster of Calxeda nodes.
I have one particular monitor that never joins the quorum when I restart
the node. Looks to me like it has something to do with the "create-keys"
task, which never seems to finish:
root 1240 1 4 1
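ceph-create-keys waits for its local monitor to reach quorum, so the hung
task is usually a symptom rather than the cause. The usual first check on
the affected node is the monitor's admin socket (the monitor id is a
placeholder, typically the hostname):
# ceph --admin-daemon /var/run/ceph/ceph-mon.<id>.asok mon_status
which shows whether the daemon is stuck probing or electing and which peers
it can actually see.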
When you were running with the wbthrottle on, did you have the
settings I gave you earlier set, or was it using the defaults?
-Sam
On Fri, Aug 23, 2013 at 12:48 PM, Oliver Daudey wrote:
> Hey Samuel,
>
> That changed something, for the better. :-)
>
> Your test-version, with wbthrottle off:
> # c
Hey Samuel,
I commented the earlier settings out, so it was with defaults.
Regards,
Oliver
On vr, 2013-08-23 at 13:35 -0700, Samuel Just wrote:
> When you were running with the wbthrottle on, did you have the
> settings I gave you earlier set, or was it using the defaults?
> -Sam
>
>
Hi Travis,
On Fri, 23 Aug 2013, Travis Rhoden wrote:
> Hey folks,
>
> I've just done a brand new install of 0.67.2 on a cluster of Calxeda nodes.
>
> I have one particular monitor that never joins the quorum when I restart
> the node. Looks to me like it has something to do with the "create-k
Ok, can you try setting filestore_op_threads to 1 on both cuttlefish
and wip-dumpling-perf (with and without wbthrottle, default wbthrottle
settings). I suspect I created contention in the filestore op threads
(FileStore::lfn_open specifically), and if so setting it to only use 1
thread should even o
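For reference, the corresponding ceph.conf form of that test setting would
be:
#--- test-only setting, remove after benchmarking
[osd]
filestore op threads = 1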
Hey Samuel,
Ok, here are the results.
wip-dumpling-perf, filestore_op_threads = 1, wbthrottle on:
# rbd bench-write test --io-pattern=rand
bench-write io_size 4096 io_threads 16 bytes 1073741824 pattern rand
SEC OPS OPS/SEC BYTES/SEC
1 666665.67 1948743.06
2 1
The easy solution to this is to create a really tiny image in glance (call
it fake_image or something like that) and tell nova that it is the image
you are using. Since you are booting from the RBD anyway, it doesn't
actually use the image for anything, and should only put a single copy of
it in t
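A sketch of that trick with the CLI tools of the era (image name and file
are placeholders; the content never gets used because the instance boots
from its RBD volume):
# qemu-img create -f raw tiny.img 1M
# glance image-create --name fake_image --disk-format raw --container-format bare --file tiny.img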
Hi,
I understand that Ceph is a scalable distributed storage architecture.
However, I'd like to understand whether performance on a single-node cluster
is better or worse than on a 3-node cluster.
Let's say I have the following 2 setups:
1. Single node cluster with one OSD.
2. Three node cluster with one OS
On Fri, Aug 23, 2013 at 12:47 PM, Juan FRANÇOIS wrote:
> I created a server on a virtual machine for testing using Ubuntu server
> (Precise 64-bit), following the 5 minute guide that used to be at
> http://ceph.com/docs/master/start/quick-start/ and the radosgw one in
> http://ceph.com/docs/master
On Fri, Aug 23, 2013 at 2:50 PM, Yehuda Sadeh wrote:
> On Fri, Aug 23, 2013 at 12:47 PM, Juan FRANÇOIS wrote:
>> I created a server on a virtual machine for testing using Ubuntu server
>> (Precise 64-bit), following the 5 minute guide that used to be at
>> http://ceph.com/docs/master/start/quick-
Thank you for the help Yehuda!
Regards
Juan
2013/8/24 Yehuda Sadeh
> On Fri, Aug 23, 2013 at 2:50 PM, Yehuda Sadeh wrote:
> > On Fri, Aug 23, 2013 at 12:47 PM, Juan FRANÇOIS
> wrote:
> >> I created a server on a virtual machine for testing using Ubuntu server
> >> (Precise 64-bit), following
Ah, this appears to be http://tracker.ceph.com/issues/6087.
If you are able and care to install dev packages then the fix is now
in the dumpling branch and will be in the next release.
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
On Wed, Aug 21, 2013 at 2:06 AM, Damien Churc