I have a Ceph cluster with 9 nodes (6 data nodes & 3 mon/mds nodes),
and I set up 4 separate nodes to test the performance of Rados-GW:
- 2 nodes run Rados-GW
- 2 nodes run multi-process file PUTs to the [multiple] Rados-GW instances
Result:
a) When I use 1 RadosGW node & 1 upload node, upload speed = 50 MB/s per
upload node
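A minimal way to drive this kind of multi-process PUT load, assuming s3cmd is
already configured with keys for the gateway; bucket and file names below are
made up:

$ for i in $(seq 1 8); do s3cmd put ./testfile s3://bench/obj-$i & done; wait

Each backgrounded s3cmd is one uploader process; 'wait' blocks until all the
PUTs finish.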
Hi ceph-users,
I deployed a cluster successfully in VMs, and today I tried to deploy a cluster
on physical nodes. However, I came across a problem when I started creating a
monitor.
-bash-4.1$ ceph-deploy mon create x
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts
[ceph
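ceph-deploy reaches the target over SSH using the bare hostname, so a quick
sanity check is whether the admin node can resolve and log in to it; hostname
redacted here as in the original:

$ getent hosts x
$ ssh x hostname

If getent returns nothing, DNS or /etc/hosts is the problem, which matches the
fix found later in this thread.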
Hi guys,
How do I get a list of all users with the radosgw-admin command and/or
REST API?
# radosgw-admin --version
ceph version 0.61.8 (a6fdcca3bddbc9f177e4e2bf0d9cdd85006b028b)
Cheers,
Valery
--
SWITCH
--
Valery Tschopp, Software Engineer, Peta Solutions
Werdstrasse
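A note for later readers: newer releases can list users through the metadata
interface. This is a hedged pointer, since the subcommand below appeared after
0.61.x and may not exist on Cuttlefish:

# radosgw-admin metadata list user

The admin REST API grew a matching GET /admin/metadata/user endpoint in the
same timeframe.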
On 09/25/2013 10:03 AM, Guang wrote:
> Hi ceph-users,
> I deployed a cluster successfully in VMs, and today I tried to deploy a
> cluster on physical nodes. However, I came across a problem when I started
> creating a monitor.
>
> -bash-4.1$ ceph-deploy mon create x
> ssh: Could not r
Thanks. This fixed the problem.
BTW, after adding this line I still got the same error on my pvcreate,
but then I ran pvcreate -vvv and found that it was ignoring my
/dev/rbd1 device because it had detected a partition signature (which I
had added in an earlier attempt to work around this "ignore
Thanks.
After fixing the issue with the types entry in lvm.conf, I discovered
the -vvv option, which helped me find the second cause of the
"ignored" error: pvcreate saw a partition signature and skipped the device.
-vvv is a good flag. :)
~jpr
On 09/25/2013 01:52 AM, Wido den Hollander
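For anyone hitting the same pair of problems, this is roughly the shape of the
fix. In the devices section of /etc/lvm/lvm.conf, tell LVM to accept rbd block
devices:

types = [ "rbd", 1024 ]

Then clear the stale partition signature and retry; wipefs is destructive, so
double-check the device first:

# wipefs -a /dev/rbd1
# pvcreate -vvv /dev/rbd1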
Thanks Wolfgang.
-bash-4.1$ ping web2
PING web2 (10.193.244.209) 56(84) bytes of data.
64 bytes from web2 (10.193.244.209): icmp_seq=1 ttl=64 time=0.505 ms
64 bytes from web2 (10.193.244.209): icmp_seq=2 ttl=64 time=0.194 ms
...
[I omit part of the host name].
It can ping the host, and I actua
Hi ceph-users,
I see that the RADOS Gateway (RGW) can be either authenticated or
unauthenticated, per
http://ceph.com/docs/master/radosgw/s3/authentication/, but there are no
details about how to disable authentication.
Is there any way to do it? Thanks for sharing.
--
Regards,
Zhi
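As far as I know there is no global switch: requests without an Authorization
header are mapped to the anonymous user and checked against the bucket or
object ACL. So "unauthenticated" access is enabled per bucket, e.g. with
s3cmd; bucket name made up:

$ s3cmd setacl --acl-public s3://mybucket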
On 09/25/2013 02:49 AM, Chu Duc Minh wrote:
> I have a Ceph cluster with 9 nodes (6 data nodes & 3 mon/mds nodes),
> and I set up 4 separate nodes to test the performance of Rados-GW:
> - 2 nodes run Rados-GW
> - 2 nodes run multi-process file PUTs to the [multiple] Rados-GW instances
> Result:
> a) When I use 1 RadosGW node & 1 upload node, upload speed = 50 MB/s per
> upload node
On Wed, Sep 25, 2013 at 5:08 AM, Guang wrote:
> Thanks Wolfgang.
>
> -bash-4.1$ ping web2
> PING web2 (10.193.244.209) 56(84) bytes of data.
> 64 bytes from web2 (10.193.244.209): icmp_seq=1 ttl=64 time=0.505 ms
> 64 bytes from web2 (10.193.244.209): icmp_seq=2 ttl=64 time=0.194 ms
> ...
>
> [I omit part of the host name].
Thanks for the reply!
I don't know the root cause, but I worked around this issue by adding a new
entry to /etc/hosts, something like 'web2 {ip_address_of_web2}', and it works.
I am not sure whether that is due to some misconfiguration on my end of the
deployment script; I will investigate further.
Than
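Concretely, given the ping output earlier in this thread, the /etc/hosts entry
would look like:

10.193.244.209   web2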
Hi:
I have a question regarding the class plugin API.
We were finally able to make a test plugin class work. We were able to invoke
the exec() call and execute our test plugin class successfully.
However, we are having a hard time figuring out which object this plugin class
is run against on the OSD.
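For what it's worth: the class method executes against whatever object the
client names in its exec() call, so the target object is chosen on the client
side. On the OSD side, anything the class prints with CLS_LOG lands in that
OSD's log, which is one way to watch it run; default log path shown, adjust
for your cluster:

# grep cls /var/log/ceph/ceph-osd.0.log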
On Wed, Sep 25, 2013 at 9:31 AM, Guang wrote:
> Thanks for the reply!
>
> I don't know the root cause, but I worked around this issue by adding a new
> entry to /etc/hosts, something like 'web2 {ip_address_of_web2}', and it
> works.
>
> I am not sure whether that is due to some misconfiguration on my end of the
> deployment script; I will investigate further.
On Wed, Sep 25, 2013 at 6:40 AM, Chen, Ching-Cheng (KFRM 1) wrote:
> Hi:
>
> I have a question regarding the class plugin API.
>
> We were finally able to make a test plugin class work. We were able to invoke
> the exec() call and execute our test plugin class successfully.
>
> However, we are having a hard time figuring out which object this plugin
> class is run against on the OSD.
Hi all-
I am following the object storage quick start guide. I have a cluster with two
OSDs and have followed the steps on both. Both fail to start radosgw, but each
in a different manner. All the previous steps in the quick start guide
appeared to complete successfully. Any tips on h
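When the init script fails without saying why, running the gateway in the
foreground with debug logging usually shows the reason. The client name below
is the one the quick start guide creates; adjust it if yours differs:

$ sudo radosgw -d -n client.radosgw.gateway --debug-rgw 20 --debug-ms 1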
Hi,
We've been working with Ceph 0.56 on Ubuntu 12.04 and are able to
create, map, and mount Ceph block devices via the RBD kernel module. We
have a CentOS 6.4 box on which we would like to do the same.
http://ceph.com/docs/next/install/os-recommendations/
The OS recommendations state that we should
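A quick check on the CentOS box: the stock EL6 2.6.32 kernel ships without the
rbd module, which is what the OS recommendations page warns about:

# uname -r
# modprobe rbd && lsmod | grep rbd

If the modprobe fails, a newer kernel (e.g. from ELRepo) is needed before the
RBD kernel client will work there.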
G'day Mark,
I stumbled across an older thread that it looks like you were involved in,
about CentOS and poor sequential write performance on the R515s.
Were you using CentOS or Ubuntu on your server at the time? (I'm wondering if
this could be related to Ubuntu.)
http://marc.info/?t=13481911702&r=1&w=2
On 09/25/2013 06:46 PM, Quenten Grasso wrote:
> G'day Mark,
> I stumbled across an older thread that it looks like you were involved in,
> about CentOS and poor sequential write performance on the R515s.
> Were you using CentOS or Ubuntu on your server at the time? (I'm wondering if
> this could be related to Ubuntu.)
Hi there,
Now that there is a fledgling shared-filesystem project starting up for
OpenStack (see: https://launchpad.net/manila), I'm wondering whether there
has been any progress towards full multi-tenancy for CephFS, i.e., the
ability for clients to mount their own CephFS (complete with separate
met
On Thu, 26 Sep 2013, Blair Bethwaite wrote:
> Hi there,
> Now that there is a fledgling shared-filesystem project starting up for
> OpenStack (see: https://launchpad.net/manila), I'm wondering whether there
> has been any progress towards full multi-tenancy for CephFS, i.e., the
> ability for clients
Hi all,
Does anyone know how to specify which pools the MDS metadata and CephFS data
will be stored in?
After creating a new cluster, the pools "data", "metadata", and "rbd" all
exist, but with a pg count too small to be useful. The documentation indicates
the pg count can be set only at pool creation time, s
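A hedged sketch of what the current tooling allows; pool names and pg counts
are illustrative, and note that newfs discards the existing filesystem
contents:

# ceph osd pool create fsdata 1024
# ceph osd pool create fsmetadata 512
# ceph osd dump | grep pool
# ceph mds newfs {metadata-pool-id} {data-pool-id} --yes-i-really-mean-it

'ceph osd dump' shows the numeric ids of the new pools, which newfs takes as
arguments. Extra data pools can later be attached with
'ceph mds add_data_pool {pool-id}'.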
On Thu, Sep 26, 2013 at 9:57 AM, Aaron Ten Clay wrote:
> Hi all,
>
> Does anyone know how to specify which pools the MDS metadata and CephFS data
> will be stored in?
>
> After creating a new cluster, the pools "data", "metadata", and "rbd" all
> exist, but with a pg count too small to be useful. The documenta
Hi ceph-users,
Could someone give some suggestions? Anything would be appreciated. Thanks!
On Wed, Sep 25, 2013 at 8:06 PM, david zhang wrote:
> Hi ceph-users,
>
> I see that the RADOS Gateway (RGW) can be either authenticated or
> unauthenticated, per
> http://ceph.com/docs/master/radosgw/s3/authentication/
On Wed, 25 Sep 2013, Aaron Ten Clay wrote:
> Hi all,
>
> Does anyone know how to specify which pools the MDS metadata and CephFS data
> will be stored in?
>
> After creating a new cluster, the pools "data", "metadata", and "rbd" all
> exist, but with a pg count too small to be useful. The documentation indica