On 10/03/14 23:18, Xavier Trilla wrote:
> -What do you think is a better approach to improve the
> performance of RBD for VMs: Caching OSDs with FlashCache or using SSD
> Cache Pools?
Well, as has been mentioned, Cache Pools aren't available yet; however, I'm
starting to do some thinking about
Hi,
the second meetup takes place on March 24.
For more details please have a look at
http://www.meetup.com/Ceph-Berlin/events/163029162/
Regards
--
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin
http://www.heinlein-support.de
Tel: 030 / 405051-43
Fax: 030 / 405051-19
Hi All,
I left out my OS/kernel version: Ubuntu 12.04.4 LTS w/ kernel
3.10.33-031033-generic (we upgrade our kernels to 3.10 due to Dell drivers).
Here's an example of starting all the OSDs after a reboot.
top - 09:10:51 up 2 min, 1 user, load average: 332.93, 112.28, 39.96
Tasks: 310 total,
Hi All,
There is a new release of ceph-deploy, the easy deployment tool for Ceph.
This release comes with two new features: the ability to add a new
monitor to an existing cluster
and a configuration file to manage custom repositories/mirrors.
As always, you can find all changes documented in th
Hi list,
Is there a way to get a list of all RADOSGW users?
We've been using the following so far, but it doesn't show users that
have been created but haven't done anything yet. I'd like a complete list,
including inactive users.
radosgw-admin usage show --show-log-entries=false --id radosgw.ga
How about this:
rados ls -p .users.uid
Your pool name may vary, but should contain the .users.uid extension.
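If you want this from a script, here is a minimal sketch using the python-rados
bindings (assuming a readable /etc/ceph/ceph.conf and the default .users.uid
pool name; object names in that pool map to user IDs, though you may also see
auxiliary entries):

import rados

# Sketch: list the objects in the .users.uid pool; each name corresponds
# to an RGW user ID (adjust the pool name if your deployment differs).
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('.users.uid')
    try:
        for obj in ioctx.list_objects():
            print(obj.key)
    finally:
        ioctx.close()
finally:
    cluster.shutdown()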
Michael J. Kidd
Sr. Storage Consultant
Inktank Professional Services
On Thu, Mar 20, 2014 at 2:00 PM, Dane Elwell wrote:
> Hi list,
>
> Is there a way to get a list of all RADOSGW user
Or
radosgw-admin metadata list user
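For scripting, the metadata listing is convenient because it comes back as
JSON. A hedged sketch (assuming radosgw-admin is on PATH and can reach the
cluster with an admin keyring):

import json
import subprocess

# Sketch: 'radosgw-admin metadata list user' prints a JSON array of user IDs,
# including users that have never made a request.
output = subprocess.check_output(['radosgw-admin', 'metadata', 'list', 'user'])
for user in sorted(json.loads(output)):
    print(user)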
On Mar 20, 2014 7:23 PM, "Michael J. Kidd" wrote:
How about this:
rados ls -p .users.uid
Your pool name may vary, but should contain the .users.uid extension.
Michael J. Kidd
Sr. Storage Consultant
Inktank Professional Services
On Thu, Mar 20, 2014 at 2:00
The patch series that implemented clone operation for RBD backed
ephemeral volumes in Nova did not make it into Icehouse. We have tried
our best to help it land, but it was ultimately rejected. Furthermore,
an additional requirement was imposed to make this patch series
dependent on full support of
On 03/20/2014 02:07 PM, Dmitry Borodaenko wrote:
The patch series that implemented clone operation for RBD backed
ephemeral volumes in Nova did not make it into Icehouse. We have tried
our best to help it land, but it was ultimately rejected. Furthermore,
an additional requirement was imposed to
On Thu, Mar 20, 2014 at 3:43 PM, Josh Durgin wrote:
> On 03/20/2014 02:07 PM, Dmitry Borodaenko wrote:
>> The patch series that implemented clone operation for RBD backed
>> ephemeral volumes in Nova did not make it into Icehouse. We have tried
>> our best to help it land, but it was ultimately re
When CephFS is mounted on a client and the client goes to sleep, the MDS
segfaults. Has anyone seen this? Below is part of the MDS log. This happened
in Emperor and the recent 0.77 release. I am running Debian Wheezy with the
3.13 testing kernel. What can I do to not crash the whole system if
Hi Hong,
May I know what happened to your MDS once it crashed? Was it able to
recover from replay?
We are also facing this issue and I am interested to know how to reproduce it.
Thanks.
Bazli
From: ceph-users-boun...@lists.ceph.com
[mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of hjch
On the client, I was no longer able to access the filesystem. It would hang,
which makes sense since the MDS had crashed. I tried running 3 MDS daemons on
the same machine. Two crashed and one appears to be hung(?). ceph health says
the MDS is in a degraded state when that happens.
I was able to recover by r
Did you see any messages in dmesg saying ceph-mds is respawning, or anything
like that?
Regards,
Luke
On Mar 21, 2014, at 11:09 AM, "hjcho616" <hjcho...@yahoo.com> wrote:
On the client, I was no longer able to access the filesystem. It would hang,
which makes sense since the MDS had crashed. I tried r
Nope, just these segfaults.
[149884.709608] ceph-mds[17366]: segfault at 200 ip 7f09de9d60b8 sp
7f09db461520 error 4 in libgcc_s.so.1[7f09de9c7000+15000]
[211263.265402] ceph-mds[17135]: segfault at 200 ip 7f59eec280b8 sp
7f59eb6b3520 error 4 in libgcc_s.so.1[7f59eec19000+15000]
[
Hi Hong,
That's interesting. For Mr. Bazli and me, it ended with the MDS stuck in
(up:replay) and a flapping ceph-mds daemon, but then again we are using
version 0.72.2. Having said that, the triggering point seems similar for us
as well, which is the following line:
-38> 2014-03-20 20:08:44.495565 7
Luke,
Not sure what a flapping ceph-mds daemon means, but when I connected to the MDS
when this happened there was no longer any ceph-mds process when I ran one
daemon. When I ran three, there was one left but it wasn't doing much. I didn't
record the logs, but the behavior was very similar in 0.72 emp
Hello,
I plan to set up a Ceph cluster for a small hosting company. The aim is
to have customer data (websites and mail folders) in a distributed cluster,
and then to set up different servers (web, SMTP, POP and IMAP) accessing
the cluster data.
The goals are:
* Store all data replicated