Hi,
fio is probably the tool you are looking for; it supports both RBD images and
other disk devices for testing, so you can bench your Ceph cluster as well
as compare it with other devices.
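A minimal sketch of a raw-device run, assuming /dev/sdX is a scratch disk whose
contents you can safely destroy:

# apt-get install fio
# fio --name=randwrite --filename=/dev/sdX --direct=1 --ioengine=libaio \
      --rw=randwrite --bs=4k --iodepth=32 --runtime=60 --time_based --group_reporting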
Cheers
JC
On May 14, 2014, at 23:37, yalla.gnan.ku...@accenture.com wrote:
> Hi All,
>
> Is there
Am 15.05.2014 00:26, schrieb Josef Johansson:
> Hi,
>
> So, apparently tmpfs does not support non-root xattrs due to a possible
> DoS vector. The configuration options for enabling xattrs are set, as far as I can see:
>
> CONFIG_TMPFS=y
> CONFIG_TMPFS_POSIX_ACL=y
> CONFIG_TMPFS_XATTR=y
>
> Anyone know a way a
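A quick way to confirm whether user xattrs work on a given mount; a sketch,
assuming a scratch tmpfs at /mnt/tmpfs and the attr package installed:

# mount -t tmpfs tmpfs /mnt/tmpfs
# touch /mnt/tmpfs/probe
# setfattr -n user.test -v 1 /mnt/tmpfs/probe    # "Operation not supported" here means no user xattrs
# getfattr -d /mnt/tmpfs/probe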
Hi,
Has firefly already fixed the bug in 0.78 that required patching the script
/etc/init.d/ceph,
from:
do_cmd "timeout 10 $BINDIR/ceph -c $conf --name=osd.$id --keyring=$osd_keyring
osd crush create-or-move -- $id ${osd_weight:-${defaultweight:-1}}
$osd_location"
to
do_cmd "timeout 10 $BINDIR/
On 14.05.2014 17:26, Wido den Hollander wrote:
On 05/14/2014 05:24 PM, Georg Höllrigl wrote:
Hello List,
I see a pool without a name:
ceph> osd lspools
0 data,1 metadata,2 rbd,3 .rgw.root,4 .rgw.control,5 .rgw,6 .rgw.gc,7
.users.uid,8 openstack-images,9 openstack-volumes,10
openstack-backups,1
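A quick way to see the full pool records, including the one with the empty
name, is the OSD map dump:

# ceph osd dump | grep '^pool'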
On 15/05/14 09:11, Stefan Priebe - Profihost AG wrote:
> Am 15.05.2014 00:26, schrieb Josef Johansson:
>> Hi,
>>
>> So, apparently tmpfs does not support non-root xattrs due to a possible
>> DoS vector. The configuration options for enabling xattrs are set, as far as I can see:
>>
>> CONFIG_TMPFS=y
>> CONFIG_TM
Am 15.05.2014 09:56, schrieb Josef Johansson:
>
> On 15/05/14 09:11, Stefan Priebe - Profihost AG wrote:
>> Am 15.05.2014 00:26, schrieb Josef Johansson:
>>> Hi,
>>>
>>> So, apparently tmpfs does not support non-root xattrs due to a possible
>>> DoS vector. The configuration options for enabling xattrs are set
Hello,
I would like to test Ceph's I/O performance. I'm searching for a
popular benchmark to create a dataset and run I/O tests that I can reuse
for other distributed file systems and other tests. I have tried
filebench, but it does not seem to be reproducible.
Do you know any other benchmark th
On 15/05/14 20:23, Séguin Cyril wrote:
Hello,
I would like to test Ceph's I/O performance. I'm searching for a
popular benchmark to create a dataset and run I/O tests that I can reuse
for other distributed file systems and other tests. I have tried
filebench, but it does not seem to be reproducible.
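For what it's worth, Ceph ships its own load generator, rados bench; a minimal
sketch, assuming a throwaway pool named testpool:

# ceph osd pool create testpool 128
# rados bench -p testpool 60 write --no-cleanup
# rados bench -p testpool 60 seq
# rados -p testpool cleanup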
On 05/14/2014 11:47 PM, Lukac, Erik wrote:
Hi there,
me again
is there anybody who uses librados in Java? It seems like my company
would be the first one thinking about using it, and if I (as part of the
OPS team) can't convince our DEV team to use librados and improve
performance, they'll use
Hi,
Thanks for the reply. I need an fio installation package for the Ubuntu platform.
Also, could you send any links to examples and documentation?
Thanks
Kumar
From: Jean-Charles LOPEZ [mailto:jc.lo...@inktank.com]
Sent: Thursday, May 15, 2014 12:40 PM
To: Gnan Kumar, Yalla
Cc: LOPEZ Jean-Charles
Hi,
One of the OSDs in my cluster goes down for no reason. I saw the error message in the
log below; I restarted the OSD, but after several hours the problem came back
again. Could you help?
"Too many open files not handled on operation 24 (541468.0.1, or op 1, counting
from 0)
-96> 2014-05-14 22:12:
>>Thanks for the reply. I need an fio installation package for the Ubuntu platform.
>>Also, could you send any links to examples and documentation?
Hi, you can build it yourself (rbd support was added recently):
http://telekomcloud.github.io/ceph/2014/02/26/ceph-performance-analysis_fio_rbd.html
#apt-get instal
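Once you have an fio build with the rbd engine, a sketch of a job file against
an existing image, with the pool, image and client names purely illustrative:

# rbd create fio_test --size 2048 --pool rbd
# cat rbd.fio
[rbd_randwrite]
ioengine=rbd
clientname=admin
pool=rbd
rbdname=fio_test
rw=randwrite
bs=4k
iodepth=32
# fio rbd.fio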
On 15 May 2014 04:05, Maciej Gałkiewicz wrote:
> On 28 April 2014 16:11, Sebastien Han wrote:
>
>> Yes yes, just restart cinder-api and cinder-volume.
>> It worked for me.
>
>
> In my case the image is still downloaded:(
>
Option "show_image_direct_url = True" was missing in my glance config.
Hello, I have some trouble with an OSD. It crashed with the error
osd/osd_types.h: 2868: FAILED assert(rwstate.empty())
ceph version 0.80.1 (a38fe1169b6d2ac98b427334c12d7cf81f809b74)
1: (SharedPtrRegistry::OnRemoval::operator()(ObjectContext*)+0x2f5) [0x8dfee5]
2:
(std::tr1::__shared_count<(__gnu_
Craig,
Thanks for your response. I have already marked osd.6 as lost, as
you suggested. The problem is that it is still querying osd.8 which is
not lost. I don't know why it is stuck there. It has been querying osd.8
for 4 days now.
I also tried deleting the broken RBD image but the op
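Without seeing the PG state it is hard to say more, but these are the commands
typically used to inspect a stuck PG and, as a last resort, give up on its
unfound objects (2.5f below is just a stand-in for the real PG id):

# ceph pg 2.5f query
# ceph pg 2.5f list_missing
# ceph pg 2.5f mark_unfound_lost revert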
On Tue, May 06, 2014 at 12:59:27AM +0400, Andrey Korolyov wrote:
> On Tue, May 6, 2014 at 12:36 AM, Dave Chinner wrote:
> > On Mon, May 05, 2014 at 11:49:05PM +0400, Andrey Korolyov wrote:
> >> Hello,
> >>
> >> We are currently exploring an issue which could be related either to Ceph itself
> >> or to XFS
Hi everyone,
I am new to Ceph. I have a 5-PC test cluster on which I'd like to test
CephFS behavior and performance.
I have used ceph-deploy on node pc1 and installed the ceph software (emperor
0.72.2-0.el6) on all 5 machines.
Then I set pc1 as mon and mds, PC2, PC3 as OSDs, and PC4, PC5 as ceph
clien
On Mon, May 05, 2014 at 11:49:05PM +0400, Andrey Korolyov wrote:
> Hello,
>
> We are currently exploring an issue which could be related either to Ceph itself
> or to XFS - any help is very appreciated.
>
> First, the picture: relatively old cluster w/ two years uptime and ten
> months after fs recreatio
On Wed, 14 May 2014, Brian Rak wrote:
> Why are the defaults for 'cephx require signatures' and similar still false?
> Is it still necessary to maintain backwards compatibility with very old
> clients by default? It seems like from a security POV, you'd want everything
> to be more secure out of t
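For anyone who wants the stricter behaviour today, a sketch of the ceph.conf
setting, assuming every daemon and client in the cluster is recent enough to
support message signing:

[global]
    cephx require signatures = true

There are also finer-grained 'cephx cluster require signatures' and
'cephx service require signatures' options if only one side can be tightened.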
Hello,
Currently I am integrating my Ceph cluster into OpenStack using Ceph’s RBD.
I’d like to store my KVM virtual machines on pools that I have created on the Ceph
cluster.
I would like to have multiple storage solutions for multiple
tenants. Currently, when I launch an instance the
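One common way to map tenants to separate RBD pools is Cinder's multi-backend
support plus volume types; a rough sketch of cinder.conf, with all backend and
pool names purely illustrative:

enabled_backends = rbd-gold,rbd-silver

[rbd-gold]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = gold-volumes
rbd_user = cinder
volume_backend_name = RBD_GOLD

[rbd-silver]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = silver-volumes
rbd_user = cinder
volume_backend_name = RBD_SILVER

A volume type per backend (cinder type-create gold; cinder type-key gold set
volume_backend_name=RBD_GOLD) then lets each tenant's volumes land in the
intended pool.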
"Too many open files not handled on operation 24"
This is the reason. You need to increase the fd size limit.
On Thu, May 15, 2014 at 6:06 PM, Cao, Buddy wrote:
> Hi,
>
>
>
> One of the osd in my cluster downs w no reason, I saw the error message in
> the log below, I restarted osd, but after se
On Thu, 15 May 2014, Cao, Buddy wrote:
> "Too many open files not handled on operation 24 (541468.0.1, or op 1,
> counting from 0)
You need to increase the 'ulimit -n' max open files limit. You can do
this in ceph.conf with 'max open files' if it's sysvinit or manually in
/etc/init/ceph-osd.con
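For reference, a sketch of the sysvinit route, with the value only an example:

[global]
    max open files = 131072

On upstart the equivalent is a "limit nofile 131072 131072" stanza in the job file.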
Can you please open a ticket at tracker.ceph.com with this backtrace,
and some info about what workload and system config led to this? Are you
using erasure coding and/or tiering?
Thanks!
sage
On Thu, 15 May 2014, Sergey Korolev wrote:
> Hello, I have some trouble with OSD. It's crashed w
Hello guys,
I am trying to figure out what the problem is here.
I am currently running Ubuntu 12.04 with the latest updates and radosgw version
0.72.2-1precise. My ceph.conf file is pretty standard, taken from the radosgw howto.
I am testing radosgw as a backup solution for S3-compatible clients. I
Your rewrite rule might be a bit off. Can you provide a log with 'debug rgw = 20'?
Yehuda
On Thu, May 15, 2014 at 8:02 AM, Andrei Mikhailovsky wrote:
> Hello guys,
>
>
> I am trying to figure out what is the problem here.
>
>
> Currently running Ubuntu 12.04 with latest updates and radosgw version
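A sketch of turning that logging on, assuming the gateway section is named
client.radosgw.gateway as in the standard howto (restart radosgw afterwards):

[client.radosgw.gateway]
    debug rgw = 20
    debug ms = 1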
Hi
I'm trying to follow the idea of enabling COW with cinder, but it seems like
I'm missing something in the patching part (with this patch
https://review.openstack.org/#/c/90644/1).
Could someone please explain to me how to apply the patch appropriately?
And yeah, another question is whether I need it
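In case it helps, the usual way to pull a Gerrit patchset into a local tree is a
fetch of the change ref plus a cherry-pick; a sketch, where <project> is whichever
repository the change targets (the ref path comes from change 90644, patchset 1):

$ cd /path/to/your/checkout
$ git fetch https://review.openstack.org/<project> refs/changes/44/90644/1
$ git cherry-pick FETCH_HEAD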
Hello,
I’m trying to back up HDFS to ceph/radosgw/s3, but I run into various
problems. Currently I’m fighting a segfault in radosgw.
Some details about my setup:
* nginx, because apache2 isn’t returning a "Content-Length: 0" header on HEAD
as required by hadoop (http://tracker.ceph.
I have the same issue, but using Josh's havana-ephemeral-rbd branch from
https://github.com/jdurgin/nova/tree/havana-ephemeral-rbd
On Thu, May 15, 2014 at 8:25 AM, Сергей Мотовиловец <
motovilovets.ser...@gmail.com> wrote:
> Hi
>
> I'm trying to follow the idea to enable COW with cinder, but it
Hello!
Does CEPH rely on any multicasting? Appreciate the feedback..
Thanks!
Amit
On Thu, May 15, 2014 at 9:52 AM, Amit Vijairania
wrote:
> Hello!
>
> Does CEPH rely on any multicasting? Appreciate the feedback..
Nope! All networking is point-to-point.
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
> > Does CEPH rely on any multicasting? Appreciate the feedback..
>
> Nope! All networking is point-to-point.
Besides, it would be great if ceph could use existing cluster stacks like
corosync, ...
Is there any plan to support that?
Thanks Greg!
Amit Vijairania | 415.610.9908
--*--
On Thu, May 15, 2014 at 9:55 AM, Gregory Farnum wrote:
> On Thu, May 15, 2014 at 9:52 AM, Amit Vijairania
> wrote:
> > Hello!
> >
> > Does CEPH rely on any multicasting? Appreciate the feedback..
>
> Nope! All networking is point-to-point.
Thanks Greg!
Hey All,
Thanks for the quick responses! I have chosen the micron pci-e card due to
its benchmark results on
http://www.storagereview.com/micron_realssd_p320h_enterprise_pcie_review .
Per the vendor the card has
a 25PB life expectancy so I'm not terribly worried about it failing on me
too soon :)
Excellent choice. We put 2 of them (350G) in a Dell R720xd with
24x 10K 1.2TB drives. Excellent performance. We now need to test
it on low-latency switches (we are on regular switches, 2x10Gb per
server).
I know for that number of OSDs we would need 3 cards to max out IO, but
Netw
On 05/15/2014 01:19 PM, Tyler Wilson wrote:
> Would running a different distribution affect this at all? Our target was
> CentOS 6 however if a more
> recent kernel would make a difference we could switch.
FWIW you can run centos 6 with 3.10 kernel from elrepo.
--
Dimitri Maziuk
Programmer/sysa
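A sketch of pulling the 3.10 long-term kernel from ELRepo on CentOS 6 (the exact
release RPM version may have changed since this was written):

# rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
# rpm -Uvh http://www.elrepo.org/elrepo-release-6-6.el6.elrepo.noarch.rpm
# yum --enablerepo=elrepo-kernel install kernel-lt
# reboot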
Glad to hear that it works now :)
Sébastien Han
Cloud Engineer
"Always give 100%. Unless you're giving blood.”
Phone: +33 (0)1 49 70 99 72
Mail: sebastien@enovance.com
Address : 11 bis, rue Roquépine - 75008 Paris
Web : www.enovance.com - Twitter : @enovance
On 15 May 2014, at 0
Yehuda,
What do you mean by the rewrite rule? Is this for Apache? I've used the Ceph
documentation to create it. My rule is:
RewriteRule ^/([a-zA-Z0-9-_.]*)([/]?.*)
/s3gw.fcgi?page=$1&params=$2&%{QUERY_STRING}
[E=HTTP_AUTHORIZATION:%{HTTP:Authorization},L]
Or are you talking about something
What version of Ceph are you using? mkcephfs was deprecated with the
Cuttlefish release. You can still see the old documentation for mkcephfs
here: http://ceph.com/docs/cuttlefish/start/. However, most people use
ceph-deploy to bootstrap a cluster now.
The output looks like exactly half of the obj
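For completeness, a sketch of a minimal ceph-deploy bootstrap, with
node1/node2/node3 and /dev/sdb purely as placeholders:

$ ceph-deploy new node1
$ ceph-deploy install node1 node2 node3
$ ceph-deploy mon create-initial
$ ceph-deploy osd create node2:/dev/sdb node3:/dev/sdb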
Hello,
On Thu, 15 May 2014 11:19:04 -0700 Tyler Wilson wrote:
> Hey All,
>
> Thanks for the quick responses! I have chosen the micron pci-e card due
> to its benchmark results on
> http://www.storagereview.com/micron_realssd_p320h_enterprise_pcie_review .
> Per the vendor the card has
> a 25PB
Hi
I cannot receive subscribed email from the ceph-users and ceph-dev mailing lists;
please help check it. Thanks.
Best Regards
Sean Cao
ZeusCloud Storage Engineer
On May 15, 2014, at 6:06 PM, Cao, Buddy wrote:
> Hi,
>
> One of the osd in my cluster downs w no reason, I saw the error message in
> the log below, I restarted osd, but after several hours, the problem come
> back again. Could you help?
>
> “Too many open files not handled on operation 24
Hi there,
“ceph pg dump summary -f json” does not return as much data as “ceph pg dump
summary”. Is there any way to get the full JSON-format data for “ceph pg
dump summary”?
Wei Cao (Buddy)
Looks like "ceph pg dump all -f json" = "ceph pg dump summary".
On Fri, May 16, 2014 at 1:54 PM, Cao, Buddy wrote:
> Hi there,
>
> “ceph pg dump summary –f json” does not returns data as much as “ceph pg
> dump summary”, are there any ways to get the fully Json format data for
> “ceph pg dump su
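If the goal is just a machine-readable version of everything the plain dump
prints, one thing worth trying (json-pretty keeps it readable):

# ceph pg dump --format json-pretty > pg_dump.json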
Hi All,
What kinds of RAID levels are provided by Ceph block devices?
Thanks
Kumar
Sage, does firefly require manually setting "ulimit -n" while adding a new storage
node with 16 OSDs (500G disks)?
Wei Cao (Buddy)
-Original Message-
From: Sage Weil [mailto:s...@inktank.com]
Sent: Thursday, May 15, 2014 10:49 PM
To: Cao, Buddy
Cc: ceph-us...@ceph.com
Subject: Re: [ceph-us
In my env, "ceph pg dump all -f json" only returns the result below:
{"version":45685,"stamp":"2014-05-15
23:50:27.773608","last_osdmap_epoch":13875,"last_pg_scan":13840,"full_ratio":"0.95","near_full_ratio":"0.85","pg_stats_sum":{"stat_sum":{"num_bytes":151487109145,"num_objects":36186,"num_