Hi Tim,
Try the following:
curl -D - -H "X-Auth-User: rados:swift" -H "X-Auth-Key: 123" \
  http://10.113.193.189/auth
You should then see X-Storage-Url, X-Storage-Token, and X-Auth-Token in the
returned headers.
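If the credentials are accepted, the response should look roughly like this
(the token and URL values below are made up for illustration):

    HTTP/1.1 204 No Content
    X-Storage-Url: http://10.113.193.189/swift/v1
    X-Storage-Token: AUTH_rgwtk0123456789abcdef
    X-Auth-Token: AUTH_rgwtk0123456789abcdef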
Regards,
Matt
On Wed, Oct 16, 2013 at 10:05 PM, Snider, Tim wrote:
> Rookie questi
Hi Derek,
Sorry, I added the caps, but didn't restart the radosgw process. After
a restart it works :)
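For anyone hitting the same thing, this is roughly the sequence that fixed it
for me (the uid is a placeholder, and the service name may differ per distro):

    radosgw-admin caps add --uid=myuser --caps="metadata=*"
    service radosgw restart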
Cheers,
Valery
On 16/10/13 17:20, Derek Yarnell wrote:
On 10/16/13 4:26 AM, Valery Tschopp wrote:
Hi Derek,
Thanks for your example.
I've added caps='metadata=*', but I still have an
On 16/10/2013 17:16, Sage Weil wrote:
I'm not sure what options LVM provides for aligning things to the
underlying storage...
There is a generic kernel ABI for exposing performance properties of
block devices to higher layers, so that they can automatically tune
themselves according to those
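For reference, those properties are visible under sysfs; a quick way to
inspect them (the device name is just an example):

    cat /sys/block/sda/queue/minimum_io_size   # preferred minimum I/O size
    cat /sys/block/sda/queue/optimal_io_size   # preferred "full stripe" I/O size
    cat /sys/block/sda/alignment_offset        # offset from natural alignment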
Hi list,
I'm trying to figure out how I can set up 3 defined cluster IPs and 3 other
public IPs on my 3-node cluster with ceph-deploy (Ubuntu raring, stable).
Here are my IPs for the public network: 172.23.5.101, 172.23.5.102,
172.23.5.103
And my IPs for the cluster network: 172.200.255.21, 17
On 17/10/2013 11:06, NEVEU Stephane wrote:
Hi list,
I'm trying to figure out how I can set up 3 defined cluster IPs and 3
other public IPs on my 3-node cluster with ceph-deploy (Ubuntu raring,
stable).
Here are my IPs for the public network: 172.23.5.101, 172.23.5.102,
172.23.5.103
A
Thank you Gilles, I actually have other servers running on the same networks,
so can't I just set these particular 3 IPs?
From: ceph-users-boun...@lists.ceph.com
[mailto:ceph-users-boun...@lists.ceph.com] On behalf of Gilles Mocellin
Sent: Thursday, 17 October 2013 11:55
To: ceph-users@lists.ce
On Thu, Oct 17, 2013 at 3:24 AM, NEVEU Stephane <
stephane.ne...@thalesgroup.com> wrote:
> Thank you Gilles, I actually have other servers running on the same
> networks, so can't I just set these particular 3 IPs?
>
Your servers need to have the IP addresses assigned already. The daemons
will fig
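Once the addresses exist on the hosts, a ceph.conf along these lines tells
the daemons which network to bind to (I'm assuming /24 subnets based on the
addresses above; adjust to your actual netmasks):

    [global]
    public network = 172.23.5.0/24
    cluster network = 172.200.255.0/24

    # optional: pin a specific daemon to specific addresses
    [osd.0]
    public addr = 172.23.5.101
    cluster addr = 172.200.255.21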
Aaron,
Thank you for the clarification; everything is working fine now!
From: Aaron Ten Clay [mailto:aaro...@aarontc.com]
Sent: Thursday, 17 October 2013 14:06
To: NEVEU Stephane
Cc: Gilles Mocellin; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Public/Cluster addr how to
On Thu, Oct 17, 2013
Hi,
I'm wondering how Ceph behaves when there are multiple sources writing
heavily to the same pool (e.g. OpenStack Nova compute).
Will each get its own "fair share", or will a very heavy user impact all
others?
Are there ways to tune this?
(OpenStack Havana has added QoS so this somewhat reduces a
Hi,
I am also looking for something like that.
It is possible to set FULL_CONTROL permissions for "Group All Users", and then:
- it is possible to put an object into a bucket (without authentication -> anonymous)
- setacl, getacl, get, and delete do not work for this object.
--
Regards
Dominik
2013/9/26 david zhang
/var/lib/ceph/osd/ceph-NNN/journal is a "real" file on my system:
ls -l /var/lib/ceph/osd/ceph-0/journal
-rw-r--r-- 1 root root 1073741824 Oct 17 06:47
/var/lib/ceph/osd/ceph-0/journal
Any problems with my proposed added steps (3 - 5)?
1. stop a ceph-osd daemon
2. ceph-osd
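For comparison, here is a minimal sketch of the flush-and-recreate sequence
usually given for journal files (OSD id and paths are examples, not taken
from this thread):

    service ceph stop osd.0                 # stop the daemon
    ceph-osd -i 0 --flush-journal           # flush pending writes to the store
    rm /var/lib/ceph/osd/ceph-0/journal     # remove (or relocate) the old file
    ceph-osd -i 0 --mkjournal               # create a fresh journal
    service ceph start osd.0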
On Wed, Oct 16 2013 at 12:16pm -0400,
Sage Weil wrote:
> Hi,
>
> On Wed, 16 Oct 2013, Ugis wrote:
> >
> > What could make so great difference when LVM is used and what/how to
> > tune? As write performance does not differ, DM extent lookup should
> > not be lagging, where is the trick?
>
> My
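One thing worth ruling out in LVM read tests is readahead, which often
differs between the LV and the underlying device. This is a guess at the
cause, not a confirmed diagnosis, and the device names are examples:

    blockdev --getra /dev/rbd0                  # readahead on the underlying device
    blockdev --getra /dev/mapper/vg0-lv0        # readahead on the LV
    blockdev --setra 4096 /dev/mapper/vg0-lv0   # raise it on the LV to compare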
On Thu, 17 Oct 2013, Snider, Tim wrote:
> /var/lib/ceph/osd/ceph-NNN/journal is a "real" file on my system:
> ls -l /var/lib/ceph/osd/ceph-0/journal
> -rw-r--r-- 1 root root 1073741824 Oct 17 06:47
> /var/lib/ceph/osd/ceph-0/journal
>
> Any problems with my proposed added steps (3 - 5
I'd like to experiment with the Ceph class methods technology. I've looked at
the cls_hello sample, but I'm having trouble figuring out how to compile, link
and install. Are there any step-by-step documents on how to compile, link and
deploy the method .so files?
Paul Whittington
Chief Archite
This point release resolves several low to medium-impact bugs across the
code base, and fixes a performance problem (CPU utilization) with radosgw.
We recommend that all production cuttlefish users upgrade.
Notable changes:
* ceph, ceph-authtool: fix help (Danny Al-Gaaf)
* ceph-disk: partprob
Hi Ceph,
Configuring a Ceph cluster requires sharing data between clients and daemons.
It would be helpful for the architecture of a puppet module to have a detailed
description of what these requirements are. As far as I understand, there is:
* The IP address of at least one MON in the Ceph clu
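To make that concrete, the minimum a client typically needs boils down to a
ceph.conf fragment like this (all values below are placeholders):

    [global]
    fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993     # placeholder fsid
    mon host = 172.16.0.1, 172.16.0.2, 172.16.0.3   # placeholder monitors

    # plus, with cephx enabled, a keyring holding the client's key, e.g.
    # /etc/ceph/ceph.client.admin.keyring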
On Thu, Oct 17, 2013 at 6:19 AM, Robert van Leeuwen
wrote:
> Hi,
>
> I'm wondering how Ceph behaves when there are multiple sources writing
> heavily to the same pool (e.g. OpenStack Nova compute).
> Will each get its own "fair share", or will a very heavy user impact all
> others?
Ceph doesn't d
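As far as I know, Ceph itself has no per-client QoS knob at this point, so
throttling is usually applied at the hypervisor. With the Havana QoS
mentioned above, limits can be attached to a flavor, roughly like this
(flavor name and numbers are examples only):

    nova flavor-key m1.medium set quota:disk_write_bytes_sec=104857600  # ~100 MB/s
    nova flavor-key m1.medium set quota:disk_write_iops_sec=1000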
On Thu, Oct 17, 2013 at 12:40 PM, wrote:
> I'd like to experiment with the ceph class methods technology. I've looked
> at the cls_hello sample but I'm having trouble figuring out how to compile,
> link and install. Are there any step-by-step documents on how to compile,
> link and deploy the m
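Not step-by-step docs, but the usual out-of-tree flow looks roughly like this
(the include path and class directory are assumptions; check the "osd class
dir" setting for your install):

    g++ -fPIC -shared -o libcls_hello.so cls_hello.cc   # build the class
    cp libcls_hello.so /usr/lib/rados-classes/          # default "osd class dir"
    # OSDs load classes from that directory on demand; clients invoke the
    # methods via librados exec() calls.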
Hi all,
We're trying to mount an rbd image inside of a linux container that has been
created with docker (https://www.docker.io/). We seem to have access to the rbd
kernel module from inside the container:
# lsmod | grep ceph
libceph 218854 1 rbd
libcrc32c 12603 3 x
My first guess would be that it's due to LXC dropping capabilities; I'd
investigate whether CAP_SYS_ADMIN is being dropped. You need CAP_SYS_ADMIN
for mount and block ioctls, so if the container doesn't have those privs, a
map will likely fail. Maybe try tracing the command with strace?
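A quick way to check from inside the container (the mask passed to --decode
is an example; use whatever CapEff shows on your system):

    grep CapEff /proc/self/status     # effective capability mask
    capsh --decode=0000001fffffffff   # translate it; look for cap_sys_admin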
On Thu, Oct 17
The ability to specify the OSD id might simplify things; here is a pull
request for discussion: https://github.com/ceph/ceph/pull/736
On 17/10/2013 22:46, Loic Dachary wrote:
> Hi Ceph,
>
> Configuring a Ceph cluster requires sharing data between clients and daemons.
> It would be helpful
Hello list.
I am trying to create a new single-node cluster using the ceph-deploy
tool, but the 'mon create' step keeps failing, apparently because the
'ceph' cluster name is hardwired into the /etc/init.d/ceph rc script,
or, more correctly, the rc script does not have any support for
"--cluster ". Ha
[ Adding back the list. ]
On Thu, Oct 17, 2013 at 3:37 PM, wrote:
> Thanks Gregory,
>
> I assume the .so gets loaded into the process space of each OSD associated
> with the object whose method is being called.
Yep; the .so is loaded on-demand wherever it's needed.
> Does the .so remain loade
> > * The IP address of at least one MON in the Ceph cluster
>
If you configure nodes with a single monitor in the "mon hosts" directive
then I believe your nodes will have issues if that one monitor goes down.
With Chef I've gone back and forth between using Chef search and having
monitors be dec
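In ceph.conf terms, listing every monitor avoids that single point of failure
(addresses and names below are examples):

    [global]
    mon host = 10.0.1.11, 10.0.1.12, 10.0.1.13
    mon initial members = mon-a, mon-b, mon-c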
Hi all !
I think I succeeded in using CephFS with Hadoop, but I still have some open
questions, as I am new to both Hadoop and Ceph!
First: my Ceph version is 0.62, Java is 1.6.0_45, and Hadoop is 1.1.2. I think
it succeeded because:
# hadoop fs -ls /
drwxrwxrwx - root 0 2013-10-16 10:57 /
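For reference, the plugin is usually wired up with two properties in
core-site.xml; the monitor address here is an example:

    <property>
      <name>fs.default.name</name>
      <value>ceph://192.168.0.10:6789/</value>
    </property>
    <property>
      <name>fs.ceph.impl</name>
      <value>org.apache.hadoop.fs.ceph.CephFileSystem</value>
    </property>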
Brilliant, thanks! That'll do for now. Not something we can do real time
given the speed, but certainly enough to get by on for generating reports.
Cheers,
On 17 October 2013 08:41, Josh Durgin wrote:
> On 10/15/2013 08:56 PM, Blair Bethwaite wrote:
>
>>
>> > Date: Wed, 16 Oct 2013 16:06:49 +
Hi Peng
The conf in my cluster is almost the same as yours, but when I run
#bin/hadoop fs -ls /
It failed with:
Exception in thread "main" java.lang.NoClassDefFoundError:
com/ceph/fs/CephFileAlreadyExistsException
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:2
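That NoClassDefFoundError usually means the CephFS Java bindings jar is not
on Hadoop's classpath; something like this normally fixes it (jar location
is an example; it depends on how the bindings were installed):

    export HADOOP_CLASSPATH=/usr/share/java/libcephfs.jar:$HADOOP_CLASSPATH
    # or copy the jar where hadoop picks up libraries automatically:
    cp /usr/share/java/libcephfs.jar $HADOOP_HOME/lib/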