If you have not already done # yum clean all, give it a try.
- Karan -
On 22 Apr 2014, at 20:33, Alfredo Deza wrote:
> On Tue, Apr 22, 2014 at 1:31 PM, Alfredo Deza wrote:
>> On Tue, Apr 22, 2014 at 1:29 PM, Alfredo Deza wrote:
>>> On Sun, Apr 20, 2014 at 11:06 PM, peng wrote:
Hi Alexander,
Try adding your monitor details to the /etc/ceph/ceph.conf file (please check
for typos):
[mon]
[mon.nfs2.abboom.world]
host = nfs2.abboom.world
mon addr = 10.60.0.111:6789
[mon.nfs3.abboom.world]
host = nfs3.abboom.world
mon addr = 10.60.0.112:6789
[mon.nfs4.abboom
-- Forwarded message --
From: Gandalf Corvotempesta
Date: 2014-04-14 16:06 GMT+02:00
Subject: Fwd: [ceph-users] RadosGW: bad request
To: "ceph-users@lists.ceph.com"
-- Forwarded message --
From: Gandalf Corvotempesta
Date: 2014-04-09 14:31 GMT+02:00
Subject: Re:
I have successfully created a one-region, two-zone federated setup with
separate clusters for each zone.
I am also noticing this issue. Metadata syncs properly but data will
not. The radosgw-agent says 'state is error'. I'm also noticing lots of:
== req done req=0xb5a7d0 http_status=404 =
The following is from the radosgw-agent log:
2014-04-23T12:50:50.081 4884:ERROR:radosgw_agent.worker:syncing entries for shard 3 failed
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/radosgw_agent/worker.py", line 151, in run
    new_retries = self.sync_entries(lo
Hi All,
I was able to create a cluster with 1 monitor node and 2 OSD nodes on our
proprietary distribution. Ceph health is OK and active.
root@mon:/etc/ceph# ceph -s
    cluster a7f64266-0894-4f1e-a635-d0aeaca0e993
     health HEALTH_OK
     monmap e1: 1 mons at {mon=192.168.0.102:6789/0
I had a similar issue with authentication over S3 with fastcgi. It was
due to slashes (\ /) in the secret key. I see that your secret key has
slashes. Perhaps generate a new gateway user, specifying the keys with:
--access-key= and --secret=
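For example, a minimal sketch of creating a fresh gateway user with explicit keys (the uid, display name and key values below are only placeholders, not taken from your setup; just make sure the keys contain no / characters):

radosgw-admin user create --uid=gatewayuser --display-name="Gateway User" \
    --access-key=EXAMPLEACCESSKEY --secret=examplesecretwithoutslashes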
On 04/23/2014 02:30 PM, Srinivasa Rao Ragolu wrote:
Hi A
Even after creating a new secret key, I am still facing the issue. Could you
please let me know if there are any other mistakes?
Thanks,
Srinivas.
On Wed, Apr 23, 2014 at 7:03 PM, Peter wrote:
> I had a similar issue with authentication over S3 with fastcgi. It was
> due to slashes (\ /) in the secret key.
My Swift command-line output is as below:
root@mon:/etc/ceph# radosgw-admin user info --uid=srinivas
{ "user_id": "srinivas",
"display_name": "srinivas",
"email": "j...@example.com",
"suspended": 0,
"max_buckets": 1000,
"auid": 0,
"subusers": [
{ "id": "srinivas:swift",
Perhaps the endpoint isn't configured correctly.
Try adding some of these settings under your gateway config in ceph.conf:
http://ceph.com/docs/master/radosgw/config-ref/#swift-settings
or
rgw dns name =
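As a minimal sketch (the section name, host and domain below are assumptions, adjust to your environment), the gateway part of ceph.conf could look like:

[client.radosgw.gateway]
host = gateway-host
rgw dns name = gateway.example.com
rgw swift url = http://gateway.example.com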
On 04/23/2014 02:57 PM, Srinivasa Rao Ragolu wrote:
My Swift command line outputs are li
Hi Peter,
I have added rgw dns name as well, but still no luck. Please go through
my configuration above and let me know if there is any clue.
thanks,
Srinivas.
On Wed, Apr 23, 2014 at 7:37 PM, Peter wrote:
> Perhaps the endpoint isn't configured correctly.
>
> Try adding some of these setti
Hello,
I've been working on Ceph / OpenStack integration and I have a couple of
questions.
1. If I boot an instance from a volume, I can't see the storage of that
volume:
[Screen capture]
The volume I booted from is located at /dev/vda. I'm not too familiar with
the Linux filesystem, but fro
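(For what it's worth, a quick way to check what the guest actually sees for that device; the device name is the one mentioned above, everything else is generic:)

lsblk /dev/vda
df -h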
Hi Yehuda and all,
I am using Apache 2.4.3, and with this version I was not able to load
mod_fastcgi 2.4.6.
So I have used only Apache without fastcgi, and added "ServerName" and
"mod_rewrite.so" entries to /etc/apache2/confirm/httpd.conf.
I was able to run apache2, radosgwa
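For reference, a minimal sketch of the two directives mentioned (the server name and module path below are assumptions, not taken from your setup):

ServerName gateway.example.com
LoadModule rewrite_module modules/mod_rewrite.so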
On Wed, 23 Apr 2014 12:39:20 +0800 Indra Pramana wrote:
> Hi Christian,
>
> Good day to you, and thank you for your reply.
>
> On Tue, Apr 22, 2014 at 12:53 PM, Christian Balzer wrote:
>
> > On Tue, 22 Apr 2014 02:45:24 +0800 Indra Pramana wrote:
> >
> > > Hi Christian,
> > >
> > > Good day to
Hello,
we rebooted a switch and then the ceph cluster stopped working.
Ceph version: 0.72.2 (a913ded2ff138aefb8cb84d347d72164099cfd60)
Ceph status: HEALTH_WARN 256 pgs peering; 256 pgs stuck inactive; 256 pgs stuck
unclean; 109 requests are blocked > 32 sec; 6 osds have slow requests; mds
clust
Hello,
On 18/04/2014 16:33, Jean-Charles LOPEZ wrote:
> Use the rados command to remove the empty pool name if you need to.
>
> rados rmpool '' '' --yes-i-really-really-mean-it
>
> You won't be able to remove it with the ceph command
Don't know how this pool got unclean; anyway, as you said, I remo
Hello all,
I want to set the following value for ceph:
osd recovery max active = 1
Where do I place this setting? And how do I ensure that it is active?
Do I place it only in /etc/ceph/ceph.conf on the monitor in a section like so:
[osd]
osd recovery max active = 1
Or do I have to place i
Hi,
I'd like to know what happens to a cluster with one monitor while that
one monitor process is being restarted.
For example, if I have an RBD image mounted and in use (actively
reading/writing) when I restart that monitor, will all those reading and
writing operations block until the moni
Hi Chad,
It's usually best practice to propagate changes to ceph.conf amongst all
nodes. In this case, it will at least need to be on the OSD nodes.
You will need to restart OSDs for it to take effect OR use ceph tell.
From a node with the admin keyring: ceph tell osd.* injectargs
'--osd_recovery_m
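In full, the command is along these lines (the value 1 matches the setting being discussed; osd.* targets every OSD, and the injected change does not survive an OSD restart, so keep it in ceph.conf under [osd] as well):

ceph tell osd.* injectargs '--osd_recovery_max_active 1'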
Hi Cephers,
I would like to know if the Swift object versioning feature [1] is
(or will be) on the roadmap?
Because ... it would be great ;-)
Thx,
Cédric
[1]
http://docs.openstack.org/api/openstack-object-storage/1.0/content/set-object-versions.html
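For anyone not familiar with the feature: in Swift it is enabled per container via the X-Versions-Location header, e.g. with the swift client (the container names below are only examples; whether radosgw honours this is exactly the question):

swift post my-container -H "X-Versions-Location: my-container-versions"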
--
Cédric
Thanks for the tip, Brian!
Chad.
On 4/23/14 12:33 , Dyweni - Ceph-Users wrote:
Hi,
I'd like to know what happens to a cluster with one monitor while that
one monitor process is being restarted.
For example, if I have an RBD image mounted and in use (actively
reading/writing) when I restart that monitor, will all those rea
What does your ceph.conf look like? I'm wondering if you changed any of
the osd recovery settings.
That's kind of a long shot though. I'd try IRC again. According to the
community help page (http://ceph.com/help/community/), there should be
some geeks on duty for at least an hour.
Craig L
Hi all,
I am having difficulty working out how OSDs are started automatically at
boot, as they currently are not being started in my simple deployment.
Ubuntu 12.04
root@ceph-osd98:/usr/bin# ceph -v
ceph version 0.72.2 (a913ded2ff138aefb8cb84d347d72164099cfd60)
I gather this is the script I need to investiga
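In case it is useful while digging: on Ubuntu 12.04, OSDs prepared with ceph-disk/ceph-deploy are normally started by upstart jobs rather than the sysvinit script (this is an assumption about how the OSDs were deployed; the osd id below is only an example):

initctl list | grep ceph
status ceph-osd id=0
start ceph-osd id=0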
I got this yesterday when copying data to the rbd,
,
| Apr 24 02:43:28 servername kernel: rbd: rbd1: write 4 at 44893c1
(1)
| Apr 24 02:43:28 servername kernel:
| Apr 24 02:43:28 servername kernel: rbd: rbd1: result -28 xferred 4
| Apr 24 02:43:28 servername kernel:
| Apr 2
On 04/24/2014 03:07 AM, Jianing Yang wrote:
I got this yesterday when copying data to the rbd,
,
| Apr 24 02:43:28 servername kernel: rbd: rbd1: write 4 at 44893c1
(1)
| Apr 24 02:43:28 servername kernel:
| Apr 24 02:43:28 servername kernel: rbd: rbd1: result -28 xferred 4000
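One note that may help interpret the log: a result of -28 is -ENOSPC ("no space left on device"), so cluster utilisation is worth checking first, e.g.:

ceph df
ceph -s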
Hi all,
Just a reminder that the Ceph meetup in Amsterdam is taking place
tonight:
https://wiki.ceph.com/Community/Meetups/The_Netherlands/Ceph_meetup_Amsterdam
I'll be ordering pizza later today, so please fill in your name on the
Wiki when you're joining so I can make sure there is enough
Hi Christian,
Good day to you, and thank you for your reply.
On Wed, Apr 23, 2014 at 11:41 PM, Christian Balzer wrote:
> > > > Using 32 concurrent writes, result is below. The speed really
> > > > fluctuates.
> > > >
> > > > Total time run: 64.317049
> > > > Total writes made:
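For reference, a run like the one being discussed (32 concurrent writes) can be reproduced with rados bench; the pool name and duration below are only examples:

rados bench -p testpool 60 write -t 32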