I tried with:
ceph osd crush tunables default
ceph osd crush tunables argonaut
while the command runs without error, I still get the feature set mismatch
error when I try to mount.
Do I have to restart some service?
Andi
> -Original Message-
> From: Gregory Farnum [mailto:g...@inktank.com
Hi Greg,
I apologize for the lack of details. To sum up, I checked that my image
exists:
$ rbd ls
img0
img1
Then I try to mount it:
$ sudo rbd map img0
rbd: add failed: (22) Invalid argument
When I try the exact same command from the box with version 0.61.9, it
succeeds:
$ rbd ls
img0
img1
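As an aside, the `(22)` in that error is a standard errno value; it can be decoded with stock Python (nothing here is Ceph-specific):

```python
import errno
import os

# errno 22 is EINVAL ("Invalid argument") -- the code the kernel
# returns when it rejects an option or request it does not understand,
# e.g. a mount option the rbd driver does not recognize.
print(errno.EINVAL, errno.errorcode[errno.EINVAL], os.strerror(errno.EINVAL))
```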
I'm pretty sure this is the 'rw' mount option bug that Josh just fixed. It
affects kernels older than 3.7 or so and Ceph newer than .70 or .71 (somewhere
in there). Can you try installing the package for the 'next' branch from
gitbuilder.ceph.com? If you are using ceph-deploy you can do
cep
Hello Everyone
Can someone guide me on how I can get started with "ceph deployment using puppet",
and what all I need to have for this?
I have no prior experience with Puppet, hence I need your help getting started
with it.
Regards
Karan Singh
Hi Sage,
Just tried it, the behaviour disappears in version 0.72-rc1, so it seems
you got it right. Thanks for the reply! I did not see any mention of
that bug in the 0.70 or 0.71 release notes, though.
Keep up the good work. Best regards,
Nicolas Canceill
Scalable Storage Systems
SURFsara (
Hi,
using ceph 0.67.4 I followed http://ceph.com/docs/master/radosgw/. I can connect
using s3cmd (test configuration succeeds), so the user credentials and
everything else seems to be running as it should. But when doing a "s3cmd mb
s3://test" the radosgw returns a "405 Method Not Allowed" (co
Hi,
Unless you're forced to use puppet for some reason, I suggest you give
ceph-deploy a try:
http://ceph.com/docs/master/start/quick-ceph-deploy/
Cheers
On 04/11/2013 19:00, Karan Singh wrote:
> Hello Everyone
>
> Can someone guide me how i can start for " ceph deployment using puppet " ,
>
Are you saying despite the osdir error message (I am pasting again below from
my posting yesterday) the OSDs are successfully prepared?
[ceph_deploy.osd][ERROR ] OSError: [Errno 2] No such file or directory
[ceph_deploy][ERROR ] GenericError: Failed to create 2 OSDs
Thanks!
Narendra
-Orig
Hello Loic
Thanks for your reply; ceph-deploy works well for me.
My next objective is to deploy Ceph using Puppet. Can you guide me on how I can
proceed?
Regards
karan
- Original Message -
From: "Loic Dachary"
To: ceph-users@lists.ceph.com
Sent: Monday, 4 November, 2013 4:45:06 PM
Subjec
Hello,
We experienced the same error as reported by Narendra, although we're
running Ubuntu Server 12.04.
We managed to work around the error (by trial and error). Below are the
steps we performed, perhaps this can help you track down the error.
*Step 1 - This was the error*
openstack@monitor3
On Mon, Nov 4, 2013 at 9:55 AM, Trivedi, Narendra
wrote:
> Are you saying despite the osdir error message (I am pasting again below from
> my posting yesterday) the OSDs are successfully prepared?
>
> [ceph_deploy.osd][ERROR ] OSError: [Errno 2] No such file or directory
> [ceph_deploy][ERROR ] G
Please can somebody help? I'm getting this error.
ceph@CephAdmin:~$ ceph-deploy osd create server1:sda:/dev/sdj1
[ceph_deploy.cli][INFO ] Invoked (1.3): /usr/bin/ceph-deploy osd create server1:sda:/dev/sdj1
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks server1:/dev/sda:/dev/sdj1
[server1]
Is disk sda on server1 empty or does it contain already a partition?
On Nov 4, 2013, at 5:25 PM, charles L wrote:
>
> Please can somebody help? I'm getting this error.
>
> ceph@CephAdmin:~$ ceph-deploy osd create server1:sda:/dev/sdj1
> [ceph_deploy.cli][INFO ] Invoked (1.3): /usr/bin/ceph-d
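For reference, ceph-deploy disk arguments take the HOST:DISK[:JOURNAL] form. A small Python sketch (the `parse_disk_spec` helper is illustrative only, not part of ceph-deploy) shows why the spec above is ambiguous:

```python
def parse_disk_spec(spec):
    """Split a ceph-deploy style HOST:DISK[:JOURNAL] spec into its parts."""
    parts = spec.split(":")
    host = parts[0]
    disk = parts[1] if len(parts) > 1 else None
    journal = parts[2] if len(parts) > 2 else None
    return host, disk, journal

host, disk, journal = parse_disk_spec("server1:sda:/dev/sdj1")
# Note the mixed styles: the data disk is a bare name ("sda") while the
# journal is a full path ("/dev/sdj1"). Writing both the same way, e.g.
# server1:/dev/sda:/dev/sdj1, avoids pointing at the wrong device.
print(host, disk, journal)
```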
Bingo! A lot of people are getting this dreadful GenericError and Failed to
create 1 OSD. Does anyone know why, despite /etc/ceph being there on each node?
Also, FYI, purgedata on multiple nodes doesn't work sometimes, i.e. it says it
has uninstalled ceph and removed /etc/ceph from all nodes but they
On Mon, Nov 4, 2013 at 10:56 AM, Trivedi, Narendra
wrote:
> Bingo! A lot of people are getting this dreadful GenericError and Failed to
> create 1 OSD. Does anyone know why despite /etc/ceph being there on each
> node?
/etc/ceph is created by installing ceph on a node, and purgedata will
remove th
On Mon, Nov 4, 2013 at 12:25 PM, Gruher, Joseph R
wrote:
> Could these problems be caused by running a purgedata but not a purge?
It could be, I am not clear on what the expectation was for just doing
purgedata without a purge.
> Purgedata removes /etc/ceph but without the purge ceph is still in
On Mon, Nov 4, 2013 at 12:13 AM, Fuchs, Andreas (SwissTXT)
wrote:
> I tried with:
> ceph osd crush tunables default
> ceph osd crush tunables argonaut
>
> while the command runs without error, I still get the feature set mismatch
> error when I try to mount.
> Do I have to restart some service?
Ah
On Mon, Nov 4, 2013 at 6:40 AM, Corin Langosch
wrote:
> Hi,
>
> using ceph 0.67.4 I followed http://ceph.com/docs/master/radosgw/. I can
> connect using s3cmd (test configuration succeeds), so the user credentials
> and everything else seems to be running as it should. But when doing a
> "s3cmd mb
On 04.11.2013 19:56, Yehuda Sadeh wrote:
This was answered off list on irc, but for the sake of completeness
I'll answer here too. The issue is that s3cmd uses a virtual bucket
host name. E.g., instead of http://<host>/bucket, it sends the request to
http://<bucket>.<host>, so in order for the gateway to identify which
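A minimal sketch of the two URL styles (the function names are made up for illustration; only the URL shapes matter):

```python
def path_style(host, bucket):
    # The gateway sees the bucket name in the request path.
    return "http://{}/{}".format(host, bucket)

def virtual_hosted_style(host, bucket):
    # s3cmd's default: the bucket becomes a subdomain, so the gateway
    # must know its own DNS name to peel the bucket off the Host header.
    return "http://{}.{}/".format(bucket, host)

print(path_style("gw.example.com", "test"))            # http://gw.example.com/test
print(virtual_hosted_style("gw.example.com", "test"))  # http://test.gw.example.com/
```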
Sorry to bump this, but does anyone have any idea what could be wrong here?
To resummarize, radosgw fails to start. Debug output seems to indicate it is
complaining about the keyring, but the keyring is present and readable, and
other Ceph functions which require the keyring can succeed. So wh
Not sure why you're able to run the 'rados' and 'ceph' commands, and
not 'radosgw', just note that the former two don't connect to the
osds, whereas the latter does, so it might fail on a different level.
You're using the default client.admin as the user for radosgw, but
your ceph.conf file doesn't
>-Original Message-
>From: Yehuda Sadeh [mailto:yeh...@inktank.com]
>Sent: Monday, November 04, 2013 12:40 PM
>To: Gruher, Joseph R
>Cc: ceph-users@lists.ceph.com
>Subject: Re: [ceph-users] radosgw fails to start
>
>Not sure why you're able to run the 'rados' and 'ceph' command, and not
>'r
On Mon, Nov 4, 2013 at 1:12 PM, Gruher, Joseph R
wrote:
>>-Original Message-
>>From: Yehuda Sadeh [mailto:yeh...@inktank.com]
>>Sent: Monday, November 04, 2013 12:40 PM
>>To: Gruher, Joseph R
>>Cc: ceph-users@lists.ceph.com
>>Subject: Re: [ceph-users] radosgw fails to start
>>
>>Not sure w
On 05/11/13 06:37, Alfredo Deza wrote:
On Mon, Nov 4, 2013 at 12:25 PM, Gruher, Joseph R
wrote:
Could these problems be caused by running a purgedata but not a purge?
It could be, I am not clear on what the expectation was for just doing
purgedata without a purge.
Purgedata removes /etc/cep
Purgedata is only meant to be run *after* the package is uninstalled. We
should make it do a check to enforce that. Otherwise we run into these
problems...
Mark Kirkwood wrote:
>On 05/11/13 06:37, Alfredo Deza wrote:
>> On Mon, Nov 4, 2013 at 12:25 PM, Gruher, Joseph R
>> wrote:
>>> Could th
Hi list:
The 0.72 Emperor documentation shows we can deploy two zones in a cluster for a
test. I tried to build two zones in different clusters, and have failed so far. After
some tests, I find the master zone cannot be written to. Using the S3 API, I also
cannot create a new bucket in the master zone, though
hi all!
I configured a Ceph object gateway, but I can't create a bucket!
The radosgw.log shows:
rgw_create_bucket return ret=-95 bucket=mybucket(@.rgw.buckets[5510,25])
WARNING: set_req_state_err err_no=95 resorting to 500
req 43:0.003689:s3:PUT /mybucket/:create_bucket:http_st
error 95 is "Not supported", might mean that there are some issues with the
osd, e.g., incompatibility (running an older version than the gateway's), or
likely objclass issues. What does the osd log say?
Yehuda
On Mon, Nov 4, 2013 at 6:50 PM, 鹏 wrote:
>
> hi all!
> I configuration a ceph obje
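To check the "Not supported" reading, the errno value can be decoded with standard Python (on Linux, errno 95 is EOPNOTSUPP):

```python
import errno
import os

# On Linux, errno 95 is EOPNOTSUPP ("Operation not supported"),
# matching the ret=-95 from rgw_create_bucket in the log above.
print(errno.EOPNOTSUPP, os.strerror(errno.EOPNOTSUPP))
```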
Hello All,
I would like to control my Ceph cluster via Perl scripts. After a bit of
searching, I found a Perl module that was started back in 2011 [1].
It seems to work great except for the list_pools function, which causes a
segfault.
We can observe this behavior when running a stacktrace on th
Hey, all, we just refreshed stgt to its latest released version
(1.0.41), and I also tweaked the rbd backend to be a little more
flexible and useful.
stgt is a userspace iSCSI target implementation (using tgtd) that can
export several types of storage entities as iSCSI LUNs; backends include
f
Hi all,
I tried to deploy a cluster with 0.72. The S3 API of Ceph (v0.72) confused me
about user permissions.
This is the user's info:
{ "user_id": "johndoe",
"display_name": "John Doe",
"email": "",
"suspended": 0,
"max_buckets": 1000,
"auid": 0,
"subusers": [],
"keys": [
{