Hi,
On one of our test clusters, I have a node with 4 OSDs with SAS / non-SSD
drives (sdb, sdc, sdd, sde) and 2 SSD drives (sdf and sdg) for journals to
serve the 4 OSDs (2 each).
Model: ATA ST100FM0012 (scsi)
Disk /dev/sdf: 100GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
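A sketch of how those two journal partitions per SSD could be laid out with parted; the device name and 10 GB journal size are assumptions, and the echo keeps this a dry run that only prints the commands it would execute:

```shell
#!/bin/sh
# Dry-run sketch: print GPT partitioning commands for one journal SSD.
# DEV and JOURNAL_GB are assumptions; drop the echo to actually run parted.
DEV=/dev/sdf
JOURNAL_GB=10
START=1
for PART in 1 2; do
  END=$((START + JOURNAL_GB))
  echo parted -s "$DEV" mkpart journal-$PART ${START}GiB ${END}GiB
  START=$END
done
```

Each OSD's journal can then point at its partition via the osd journal setting in ceph.conf.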
N
Dear ceph-users,
I am currently facing an issue with my Ceph MDS server. The ceph-mds daemon
does not want to come back up.
I tried running it manually with ceph-mds -i mon01 -d, but it aborted, and the
log shows that it gets stuck at failed assert(session) at line 1303 in
mds/journal.cc.
Can someone shed some light on this?
Hi
After upgrading my OpenStack cluster to Icehouse I came across a very
surprising bug. It is no longer possible to create Cinder volumes
(rbd-backed) from images (rbd-backed) by copy-on-write cloning:
https://blueprints.launchpad.net/nova/+spec/rbd-clone-image-handler
Both rbd volumes would be st
Hi Yehuda,
Thanks for your help... that missing Date error is gone, but I am still
getting the access denied error:
-
2014-04-25 15:52:56.988025 7f00d37c6700 1 == starting new request
req=0x237a090 =
2014-04-25 15:52:56.988072 7f00d37c6700 2 req 24:0.46::GET
Hi Alain,
Thanks for your prompt answer. It looks cool. It seems to be a separate
project rather than part of Ceph. Will it be incorporated into Ceph? And what's
the schedule for the remaining features, such as user/bucket management?
Thanks,
Ray
On 4/22/14, 10:21 PM, "alain.dechorg...@orange.com"
wrote:
Hi, All.
Yesterday I managed to reproduce the bug on my test environment
with a fresh installation of the Dumpling release. I've attached the
link to an archive with debug logs.
http://lamcdn.net/pool_with_empty_name_bug_logs.tar.gz
Test cluster contains only one bucket with name
"test" and one file in th
Hi.
radosgw-admin bucket list
2014-04-25 15:32 GMT+04:00 :
> Hi, All.
> Yesterday I managed to reproduce the bug on my test environment
> with a fresh installation of the Dumpling release. I've attached the
> link to an archive with debug logs.
> http://lamcdn.net/pool_with_empty_name_bug_logs.tar.gz
$ radosgw-admin bucket list
[
    "test"
]
--
Regards,
Mikhail
On Fri, 25 Apr 2014 15:48:23 +0400
Irek Fasikhov wrote:
> Hi.
>
> radosgw-admin bucket list
>
>
>
> 2014-04-25 15:32 GMT+04:00 :
>
> > Hi, All.
> > Yesterday I managed to reproduce the bug on my test environment
> > with a fresh installation of the Dumpling release.
Have you correctly configured the DNS records?
2014-04-25 16:24 GMT+04:00 :
> $ radosgw-admin bucket list
> [
>     "test"
> ]
>
>
> --
> Regards,
> Mikhail
>
>
> On Fri, 25 Apr 2014 15:48:23 +0400
> Irek Fasikhov wrote:
>
> > Hi.
> >
> > radosgw-admin bucket list
> >
> >
> >
> > 2014-04-25 15:32 GMT+04:00
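For reference, S3 bucket-style requests (bucket.gateway.example.com) need a wildcard DNS record pointing at the radosgw host; a minimal BIND zone fragment, with all names and addresses assumed, looks like:

```
; assumed zone fragment: ceph-gw is the radosgw host
ceph-gw       IN A      192.0.2.10
*.ceph-gw     IN CNAME  ceph-gw
```

The rgw dns name setting in ceph.conf must then match that hostname.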
I think so; you can see the request headers in the attached radosgw.log.
Can you try accessing your cluster with curl or wget using a non-existent
bucket and file, and then show the output of ceph osd dump?
--
Regards,
Mikhail
On Fri, 25 Apr 2014 16:26:09 +0400
Irek Fasikhov wrote:
> Have you correctly configured the DNS records?
For now, the inkscope project is independent of Ceph.
The S3 user management could be finished before the end of the next week.
Alain
-Original Message-
From: Ray Lv [mailto:ra...@yahoo-inc.com]
Sent: Friday, April 25, 2014 11:57
To: DECHORGNAT Alain IMT/OLPS
Cc: ceph-users@lists.cep
Before configuring regions and zones, I would like to know which tags can be
updated in the bucket.instance metadata.
Are there restrictions depending on the capabilities granted to radosgw-admin?
-Original Message-
From: ceph-users-boun...@lists.ceph.com
[mailto:ceph-users-boun...@lists.ce
If you had it working in Havana I think you must have been using a
customized code base; you can still do the same for Icehouse.
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
On Fri, Apr 25, 2014 at 12:55 AM, Maciej Gałkiewicz
wrote:
> Hi
>
> After upgrading my OpenStack clu
On 25 April 2014 16:00, Gregory Farnum wrote:
> If you had it working in Havana I think you must have been using a
> customized code base; you can still do the same for Icehouse.
> -Greg
> Software Engineer #42 @ http://inktank.com | http://ceph.com
I was using a standard OpenStack version from
This is a COW clone, but the BP you pointed doesn’t match the feature you
described. This might explain Greg’s answer.
The BP refers to the libvirt_image_type functionality for Nova.
What do you get now when you try to create a volume from an image?
Sébastien Han
Cloud Engineer
"Always
- Message from Craig Lewis -
Date: Thu, 24 Apr 2014 11:20:08 -0700
From: Craig Lewis
Subject: Re: [ceph-users] OSD distribution unequally -- osd crashes
To: Kenneth Waegeman
Cc: ceph-users@lists.ceph.com
Your OSDs shouldn't be crashing during a remap. Although...
Dear Ceph-devel, ceph-users,
I am currently facing an issue with my Ceph MDS server. The ceph-mds daemon
does not want to come back up.
I tried running it manually with ceph-mds -i mon01 -d, but it aborted, and the
log shows that it gets stuck at failed assert(session) at line 1303 in
mds/journal.cc.
On 25 April 2014 16:37, Sebastien Han wrote:
> This is a COW clone, but the BP you pointed doesn’t match the feature you
> described. This might explain Greg’s answer.
> The BP refers to the libvirt_image_type functionality for Nova.
>
> What do you get now when you try to create a volume from an
I just tried, I have the same problem, it looks like a regression…
It’s weird because the code didn’t change that much during the Icehouse cycle.
I just reported the bug here: https://bugs.launchpad.net/cinder/+bug/1312819
Sébastien Han
Cloud Engineer
"Always give 100%. Unless you're giv
On Thu, Apr 24, 2014 at 7:03 PM, wsnote wrote:
> Hi, Yehuda.
> It doesn't matter. We have fixed it.
> The filename will be transcoded by url_encode and decoded by url_decode.
> There is a bug when decoding the filename.
> There is another bug when decoding the filename: when radosgw-agent fails
> d
On Fri, Apr 25, 2014 at 1:03 AM, Punit Dambiwal wrote:
> Hi Yehuda,
>
> Thanks for your help... that missing Date error is gone, but I am still
> getting the access denied error:
>
> -
> 2014-04-25 15:52:56.988025 7f00d37c6700 1 == starting new request
> req=0x237a090
Hmm, it looks like your on-disk SessionMap is horrendously out of
date. Did your cluster get full at some point?
In any case, we're working on tools to repair this now but they aren't
ready for use yet. Probably the only thing you could do is create an
empty sessionmap with a higher version than t
I was reading about mon osd min down reports at
http://ceph.com/docs/master/rados/configuration/mon-osd-interaction/,
and I had a question.
Are mon osd min down reporters and mon osd min down reports both
required to mark an OSD down, or just one?
For example, if I set
[global]
mon osd min d
I am not able to do a dd test on the SSDs since they're not mounted as a
filesystem, but dd on the OSD (non-SSD) drives gives normal results.
Since you have free space on the SSDs, you could add a 3rd 10G partition
to one of the SSDs. Then you could put a filesystem on that partition,
or just dd
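A sketch of that dd test; TARGET is an assumption and defaults to a scratch file so a rehearsal doesn't write to any disk, while conv=fdatasync makes dd include the final flush in its timing:

```shell
#!/bin/sh
# Sequential-write throughput sketch for the spare SSD partition.
# Point TARGET at the new partition (e.g. /dev/sdf3) for the real test;
# the default is a scratch file so nothing on disk is touched by accident.
TARGET=${TARGET:-/tmp/dd-throughput-test.bin}
dd if=/dev/zero of="$TARGET" bs=4M count=16 conv=fdatasync 2>&1 | tail -1
```

The last line of dd's output reports the effective throughput.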
Greetings, I got a ceph test cluster setup this week and I thought it would be
neat if I could write a php script that let me start working with the adminops
API.
I did some research to figure out how to correctly 'authorize' in the AWS
fashion and wrote this little script.
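The 'authorize in the AWS fashion' step boils down to an HMAC-SHA1 over a canonical string. A shell sketch of the same signing scheme with openssl; the keys and resource path are placeholders, not real values:

```shell
#!/bin/sh
# Sketch of AWS v2 request signing as used by radosgw's admin ops API.
# ACCESS_KEY, SECRET_KEY and RESOURCE are placeholders.
ACCESS_KEY="EXAMPLEACCESSKEY"
SECRET_KEY="examplesecret"
RESOURCE="/admin/usage"
DATE=$(date -R)
# StringToSign: VERB \n Content-MD5 \n Content-Type \n Date \n resource
STRING_TO_SIGN=$(printf 'GET\n\n\n%s\n%s' "$DATE" "$RESOURCE")
SIGNATURE=$(printf '%s' "$STRING_TO_SIGN" \
  | openssl dgst -sha1 -hmac "$SECRET_KEY" -binary | base64)
echo "Authorization: AWS ${ACCESS_KEY}:${SIGNATURE}"
```

Send both headers on the request, e.g. curl -H "Date: $DATE" -H "Authorization: AWS $ACCESS_KEY:$SIGNATURE" against the gateway.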
The monitor requires at least mon osd min down reports reports, from a set of
OSDs whose size is at least mon osd min down reporters. So with 9 reporters and
3 reports, it would wait until 9 OSDs had reported an OSD down (basically
ignoring the reports setting, as it is smaller).
-Greg
On Friday, April 25, 2014, Craig Lewis wrote:
>
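The rule Greg describes can be written into ceph.conf like this (the values are illustrative, not recommendations):

```
[global]
    ; an OSD is marked down only after at least 3 down reports
    ; coming from at least 2 distinct reporting OSDs
    mon osd min down reports = 3
    mon osd min down reporters = 2
```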
Are there packages for Trusty being built yet?
I don't see it listed at http://ceph.com/debian-emperor/dists/
Thanks,
- Travis
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
You can actually just install it using the Ubuntu packages. I did it yesterday
on Trusty.
Thanks,
-Drew
From: ceph-users-boun...@lists.ceph.com
[mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Travis Rhoden
Sent: Friday, April 25, 2014 3:06 PM
To: ceph-users
Subject: [ceph-users] packag
Well, as far as I know, Trusty has 0.79 and will get Firefly as soon as it's ready, so I'm not sure it's that urgent. The Precise repo should work fine.
My 2 cents
Sébastien Han
Cloud Engineer
"Always give 100%. Unless you're giving blood.”
Phone: +33 (0)1 49 70 99 72
Mail: sebastien@enovance
Yes, just apt-get install ceph ;-)
Cheers
--
Cédric Lemarchand
> On 25 Apr 2014, at 21:07, Drew Weaver wrote:
>
> You can actually just install it using the Ubuntu packages. I did it
> yesterday on Trusty.
>
> Thanks,
> -Drew
>
>
> From: ceph-users-boun...@lists.ceph.com
> [mailto:c
Bobtail is really too old to draw any meaningful conclusions from; why
did you choose it?
That's not to say that performance on current code will be better
(though it very much might be), but the internal architecture has
changed in some ways that will be particularly important for the futex
profi
Thanks guys. I don't know why I didn't try that. I guess just too much
habit of setting up the additional repo. =)
On Fri, Apr 25, 2014 at 4:09 PM, Cédric Lemarchand wrote:
> Yes, just apt-get install ceph ;-)
>
> Cheers
>
> --
> Cédric Lemarchand
>
> On 25 Apr 2014, at 21:07, Drew Weaver wrote:
Using the Emperor builds for Precise seems to work on Trusty. I just
put a hold on all of the ceph, rados, and apache packages before the
release upgrade.
It makes me nervous though. I haven't stressed it much, and I don't
really want to roll it out to production.
I would like to see Emperor builds for Trusty.
For what it's worth, I've been able to achieve up to around 120MB/s with
btrfs before things fragment.
Mark
On 04/25/2014 03:59 PM, Xing wrote:
Hi Gregory,
Thanks very much for your quick reply. When I started to look into Ceph,
Bobtail was the latest stable release, and that was why I picked it.
I don't have any recent results published, but you can see some of the
older results from bobtail here:
http://ceph.com/performance-2/argonaut-vs-bobtail-performance-preview/
Specifically, look at the 256 concurrent 4MB rados bench tests. In a 6
disk, 2 SSD configuration we could push about 8
Hi Greg,
Actually the cluster that my colleague and I are working on is rather new and
still has plenty of space left (less than 7% used). What we noticed just before
the MDS gave us this problem was a temporary network issue in the data center,
so we are not sure whether that could have been the root cause.
rbd -m mon-cluster1 export rbd/one-1 - | rbd -m mon-cluster2 import - rbd/one-1
On Friday, April 25, 2014, Brian Rak wrote:
> Is there a recommended way to copy an RBD image between two different
> clusters?
>
> My initial thought was 'rbd export - | ssh "rbd import -"', but I'm not
> sur
rbd -m mon-cluster1 export rbd/one-1 - | rbd -m mon-cluster2 import -
rbd/one-1
On Friday, April 25, 2014, Brian Rak wrote:
> Is there a recommended way to copy an RBD image between two different
> clusters?
>
> My initial thought was 'rbd export - | ssh "rbd import -"', but I'm no