Hi,
Reading the URL http://ceph.com/docs/next/radosgw/adminops/#create-user , I'm
trying to create a new user with:
curl -v -X PUT -d '{"uid": "alvaro", "display-name": "alvaro"}'
http://myradosgw/admin/user?format=json
and a 403 response is received :(
So, do I need a token? A token for whom?
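(My guess so far is that the request has to be signed, S3-style, with the keys of a radosgw user that has admin caps; something along these lines, with a hypothetical "admin" user:

    radosgw-admin user create --uid=admin --display-name="Admin User"
    radosgw-admin caps add --uid=admin --caps="users=*"

and then the PUT to /admin/user signed with that user's access_key/secret_key. Is that right?)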
Hi,
Last night our cluster became unhealthy for 3 hours after one of the mons (a
qemu-kvm VM) had this glitch:
Jul 18 00:12:43 andy03 kernel: Clocksource tsc unstable (delta = -60129537028
ns). Enable clocksource failover by adding clocksource_failover kernel
parameter.
shortly afterwards t
Hi all,
I have 4 (stale+inactive) PGs. How can I delete those PGs?
pgmap v59722: 21944 pgs: 4 stale, 12827 active+clean, 9113
active+degraded; 45689 MB data, 1006 GB used, 293 TB / 294 TB avail;
I searched Google for a long time but still can't resolve it.
Please help me!
Thank you so much.
--tuan
Hello,
I've deployed a Ceph cluster consisting of 5 server nodes and a Ceph client
that will hold the mounted CephFS.
From the client I want to deploy the 5 servers with the ceph-deploy tool.
I installed Ceph from this repository: http://ceph.com/rpm-cuttlefish/el6/x86_64
And ceph-deploy from
*snip*
raise IOError, "End of file"
IOError: End of file
[remote] sudo: sorry, you must have a tty to run sudo
Did you comment out #Defaults requiretty in /etc/sudoers?
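(That is: on the remote host, run visudo and comment the line

    Defaults    requiretty

so it reads

    #Defaults    requiretty

which lets ceph-deploy run sudo over a non-interactive ssh session.)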
That worked for me.
Oliver
I did it now and it partially solved the problem. Thanks!
However, now I face another error:
curl: (7) couldn't connect to host
error: https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc: import
read failed(2).
Traceback (most recent call last):
File "/usr/bin/ceph-deploy", line 2
> -Original Message-
> From: ceph-users-boun...@lists.ceph.com [mailto:ceph-users-
> boun...@lists.ceph.com] On Behalf Of jose.valerioorop...@swisscom.com
> Sent: Thursday, July 18, 2013 8:12 AM
> To: o...@fuckner.net
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Problem execu
Hi Dan, thanks for your reply
Yes, as you said:
"RBD device /dev/rbd/rbd/iscsi-image-part1 exported with tgt" is a kernel
block device, and "the TGT-RBD connector" is stgt/tgtd.
My tests were done exporting it with tgt, without multipath. It seems
there is no very big difference in this
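(In case the details matter: a typical stgt/tgtd export of that kernel rbd device looks roughly like this; the target IQN is just an example:

    tgtadm --lld iscsi --op new --mode target --tid 1 -T iqn.2013-07.com.example:iscsi-image
    tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 -b /dev/rbd/rbd/iscsi-image-part1
    tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL
)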
Hi Dan,
On Thu, 18 Jul 2013, Dan van der Ster wrote:
> Hi,
> Last night our cluster became unhealthy for 3 hours after one of the mons (a
> qemu-kvm VM) had this glitch:
>
> Jul 18 00:12:43 andy03 kernel: Clocksource tsc unstable (delta =
> -60129537028 ns). Enable clocksource failover by adding
We've prepared another update for the Cuttlefish v0.61.x series. This
release primarily contains monitor stability improvements, although there
are also some important fixes for ceph-osd for large clusters and a few
important CephFS fixes. We recommend that all v0.61.x users upgrade.
* mon: mi
In the monitor log you sent along, the monitor was crashing on a
setcrushmap command. Where in this sequence of events did that happen?
On Wed, Jul 17, 2013 at 5:07 PM, Vladislav Gorbunov wrote:
> That's what I did:
>
> cluster state HEALTH_OK
>
> 1. load crush map from cluster:
> https://dl.drop
Hi Dan,
When I say multipath, I mean multipath with round robin ;)
On 18/07/13 17:39, Toni F. [ackstorm] wrote:
Hi Dan, thanks for your reply
Yes, as you said:
"RBD device /dev/rbd/rbd/iscsi-image-part1 exported with tgt" is
a kernel block device, and "the TGT-RBD connector" is stgt/
What is the output of ceph pg dump | grep 'stale\|creating' ?
On Wed, Jul 17, 2013 at 7:56 PM, Ta Ba Tuan wrote:
> Zombie PGs might have occurred when I removed some data pools,
> but I can't delete PGs that are in the stale state?
>
> I found this guide, but I don't understand it.
> http://ceph.com/docs/n
Hi Samuel,
Output logs from : ceph pg dump | grep 'stale\|creating'
0.f4f 0 0 0 0 0 0 0 stale+creating 2013-07-17 16:35:06.882419 0'0 0'0 [] [68,12] 0'0 0.00 0'0 0.00
2.f4d 0 0 0 0 0 0 0 stale+creating 2013-07-17 16:35:22.826552 0'0 0'0 [] [68,12] 0'0 0.00 0'0 0.00
0.2c
On 07/17/2013 11:39 PM, Maciej Gałkiewicz wrote:
I created a VM with KVM 1.1.2 and all I had was rbd_cache configured
in ceph.conf. The cache option in libvirt was set to "none":
f81d6108-d8c9-4e06-94ef-02b1943a873d
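(The relevant disk section of the domain XML looks roughly like the following; the pool/image name and monitor host are placeholders, not my real ones:

    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source protocol='rbd' name='rbd/vm-disk'>
        <host name='mon1.example.com' port='6789'/>
      </source>
      <target dev='vda' bus='virtio'/>
    </disk>
)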
Hello All,
Stupid question:
service ceph stop mon
or
/etc/init.d/ceph stop mon
doesn't work.
How can I stop individual OSDs or mons?
Thanks much.
--
AIXIT GmbH - Witalij Poljatchek
(T) +49 69 203 4709-13 - (F) +49 69 203 470 979
w...@aixit.com - http://www.aixit.com
AIXIT GmbH
Strahlenberge
Hi,
service ceph stop mon
doesn't work.
How can I stop individual OSDs or mons?
Try for example:
service ceph stop mon.a
or
service ceph stop osd.1
replacing "a" and "1" with the id, you want to stop.
--
Jens Kristian Søgaard, Mermaid Consulting ApS,
j...@mermaidconsulting.dk,
http://www.m
On 18 Jul 2013 20:25, "Josh Durgin" wrote:
> Setting rbd_cache=true in ceph.conf will make librbd turn on the cache
> regardless of qemu. Setting qemu to cache=none tells qemu that it
> doesn't need to send flush requests to the underlying storage, so it
> does not do so. This means librbd is cach
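(If I follow, then on the client side that corresponds to something like this in ceph.conf; the size values below are only illustrative:

    [client]
        rbd cache = true
        rbd cache size = 33554432       ; 32 MB
        rbd cache max dirty = 25165824  ; 24 MB

while the libvirt/qemu disk stays at cache='none'.)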
On Thu, Jul 18, 2013 at 11:31 AM, Jens Kristian Søgaard
wrote:
> Hi,
>
>> service ceph stop mon
>>
>> doesn't work.
>> How can I stop individual OSDs or mons?
>
>
> Try for example:
>
> service ceph stop mon.a
>
> or
>
> service ceph stop osd.1
>
> replacing "a" and "1" with the id, you want to sto
Hi,
first, the output of ceph mon dump:
dumped monmap epoch 2
epoch 2
fsid b8c3e27a-5f9a-4367-9b73-4451360c747c
last_changed 2013-07-09 05:19:07.776307
created 0.00
0: 10.0.104.31:6789/0 mon.ceph01
1: 10.0.104.32:6789/0 mon.ceph02
2: 10.0.104.33:6789/0 mon.ceph03
second, /etc/ceph.conf:
[global]
fsid = b8c3
On Thu, Jul 18, 2013 at 6:27 PM, Sage Weil wrote:
> Hi Dan,
>
> On Thu, 18 Jul 2013, Dan van der Ster wrote:
>> Hi,
>> Last night our cluster became unhealthy for 3 hours after one of the mons (a
>> qemu-kvm VM) had this glitch:
>>
>> Jul 18 00:12:43 andy03 kernel: Clocksource tsc unstable (delta
On Thu, 18 Jul 2013, Dan van der Ster wrote:
> On Thu, Jul 18, 2013 at 6:27 PM, Sage Weil wrote:
> > Hi Dan,
> >
> > On Thu, 18 Jul 2013, Dan van der Ster wrote:
> >> Hi,
> >> Last night our cluster became unhealthy for 3 hours after one of the mons
> >> (a
> >> qemu-kvm VM) had this glitch:
> >>
On Thu, Jul 18, 2013 at 9:29 PM, Sage Weil wrote:
> this sounds exactly like the problem we just fixed in v0.61.5.
Glad to hear that.
Thanks for the quick help :)
dan
How is your HBase performance on Ceph compared to HDFS? Were there some
special knobs that you needed to turn?
I'm running a few HBase tests with the Yahoo Cloud Serving Benchmark
(YCSB) on Ceph and HDFS, and the results were very surprising considering
that the Hadoop + Ceph results were not a
Hi,
We added some new OSDs today and since we've recently written many
many (small/tiny) objects to a test pool, backfilling those new disks
is going to take something like 24hrs. I'm therefore curious if we can
speed up the recovery at all or if the default settings in cuttlefish
already bring us
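(The knobs I had in mind, if it is safe to raise them temporarily, are along the lines of

    ceph osd tell \* injectargs '--osd-max-backfills 10 --osd-recovery-max-active 10'

and putting them back to the defaults once backfilling finishes; I just don't know whether the cuttlefish defaults are already near the practical limit.)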
On 07/18/2013 11:32 AM, Maciej Gałkiewicz wrote:
On 18 Jul 2013 20:25, "Josh Durgin" <josh.dur...@inktank.com> wrote:
> Setting rbd_cache=true in ceph.conf will make librbd turn on the cache
> regardless of qemu. Setting qemu to cache=none tells qemu that it
> doesn't need to send flush
On Thu, Jul 18, 2013 at 3:53 AM, Ta Ba Tuan wrote:
> Hi all,
>
> I have 4 (stale+inactive) pgs, how to delete those pgs?
>
> pgmap v59722: 21944 pgs: 4 stale, 12827 active+clean, 9113 active+degraded;
> 45689 MB data, 1006 GB used, 293 TB / 294 TB avail;
>
> I found on google a long time, still ca
On Thu, Jul 18, 2013 at 12:50 AM, Alvaro Izquierdo Jimeno
wrote:
> Hi,
>
>
>
> Reading the URL http://ceph.com/docs/next/radosgw/adminops/#create-user ,
> I’m trying to create a new user with:
>
>
>
> curl -v -X PUT -d '{"uid": "alvaro", "display-name": "alvaro"}
> http://myradosgw/admin/user?for
>In the monitor log you sent along, the monitor was crashing on a
setcrushmap command. Where in this sequence of events did that happen?
It happened after I tried to upload a different crushmap, much later than step 13.
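(For context, the way I load a modified crushmap is roughly the usual cycle:

    ceph osd getcrushmap -o crush.bin
    crushtool -d crush.bin -o crush.txt
    # edit crush.txt
    crushtool -c crush.txt -o crush.new
    ceph osd setcrushmap -i crush.new
)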
>Where are you getting these numbers 82-84 and 92-94 from? They don't
appear in any a
On Thu, Jul 18, 2013 at 12:50 AM, Alvaro Izquierdo Jimeno
wrote:
> Hi,
>
>
>
> Reading the URL http://ceph.com/docs/next/radosgw/adminops/#create-user ,
> I’m trying to create a new user with:
>
>
>
> curl -v -X PUT -d '{"uid": "alvaro", "display-name": "alvaro"}
> http://myradosgw/admin/user?for
Hi Greg,
I didn't lose any OSDs.
At first, Ceph had 4 PGs (0.f4f, 2.f4d, 0.2c8, 2.2c6) in the stale state.
Then I created those PGs with the following commands:
ceph pg force_create_pg 0.f4f
ceph pg force_create_pg 2.f4d
ceph pg force_create_pg 0.2c8
ceph pg force_create_pg 2.2c6
Now, after
On Thu, Jul 18, 2013 at 6:41 PM, Ta Ba Tuan wrote:
> Hi Greg,
>
> I didn't lose any OSDs.
>
> At first, Ceph had 4 PGs (0.f4f, 2.f4d, 0.2c8, 2.2c6) in the stale state.
> Then I created those PGs with the following commands:
>
> ceph pg force_create_pg 0.f4f
> ceph pg force_create_pg 2.f4d
> ceph pg force_c
Hey folks, I was hoping to be able to use XFS on top of RBD for a
deployment of mine, and was hoping that resizing the RBD (expansion,
actually, would be my use case) in the future would be as simple as a
resize on the fly, followed by an 'xfs_growfs'.
I just found a recent post, though
(http:
A note on upgrading:
One of the fixes in 0.61.5 addresses a 32-bit vs 64-bit bug with the feature
bits. We did not realize it before, but the fix will prevent 0.61.4 (or
earlier) monitors from forming a quorum with 0.61.5. This is similar to the
upgrade from bobtail (and the future upgrade to dumpling). As
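(In practice that means upgrading the packages on all monitor hosts and restarting the mons close together, e.g.

    service ceph restart mon.a   # on each mon host, substituting the local mon id

so that a mixed 0.61.4/0.61.5 set isn't left trying to form a quorum.)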
None of the mons work anymore:
=== mon.a ===
Starting Ceph mon.a on ccad...
[21207]: (33) Numerical argument out of domain
failed: 'ulimit -n 8192; /usr/bin/ceph-mon -i a --pid-file
/var/run/ceph/mon.a.pid -c /etc/ceph/ceph.conf '
Stefan
On 19.07.2013 07:59, Sage Weil wrote:
> A note on upgrad