Dear all,
I'm currently running a ceph client on CentOS 6.3. The kernel has been upgraded
to kernel-lt-3.0.77-1 from elrepo, which includes the rbd module.
I can create and map an rbd image fine. However, info and resize fail.
create new image
[root@nfs1 ~]# rbd --pool userfs -m ceph1 --id nfs --keyr
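(For anyone skimming: the operations being described correspond to rbd commands along
these lines. This is only a sketch; the size, image name and keyring path below are
placeholders, not the poster's actual values.)

  rbd --pool userfs -m ceph1 --id nfs --keyring /etc/ceph/keyring.nfs create --size 10240 testimg
  rbd --pool userfs -m ceph1 --id nfs --keyring /etc/ceph/keyring.nfs map testimg        # works
  rbd --pool userfs -m ceph1 --id nfs --keyring /etc/ceph/keyring.nfs info testimg       # reported failing
  rbd --pool userfs -m ceph1 --id nfs --keyring /etc/ceph/keyring.nfs resize --size 20480 testimg   # reported failing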
Hi,
Are you running the package from EPEL? If so, it's likely that
you're seeing https://bugzilla.redhat.com/show_bug.cgi?id=891993
Barry
On 10/05/13 08:15, YIP Wai Peng wrote:
Dear all,
I'm currently running a ceph client on centos6.3. Kernel has been
upgraded to kernel-lt-3.0.77-1 from elre
Mhm, if that were the case I would expect it to be deleting things over
time. On one occasion, for example, the data pool reached 160GB after 3 or
4 days, with a reported usage in cephfs of 12GB. Within minutes of my
stopping the clients, the data pool dropped by over 140GB.
I suspect the filehandle
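(For anyone wanting to watch for the same symptom: pool-level usage and the usage the
filesystem reports can be compared with something like the following; the mount point
is a placeholder.)

  rados df            # space used per pool, as the cluster sees it
  ceph -s             # overall cluster usage
  df -h /mnt/cephfs   # usage as reported by the mounted cephfs client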
Thanks Barry.
I can confirm that the osd and mon were using EPEL. I have upgraded them to
use Ceph's repo and everything is fine now.
Once again, thanks!
- WP
On Fri, May 10, 2013 at 3:23 PM, Barry O'Rourke wrote:
> Hi,
>
> Are you running the package from EPEL, if so it's likely that
> you're
Investigating further, there seems to be a large number of inodes with
caps, many of which are actually unlinked from the filesystem.
2013-05-10 13:04:11.270365 7f2d7f349700 2 mds.0.cache
check_memory_usage total 306000, rss 90624, heap 143444, malloc 53053
mmap 0, baseline 131152, buffers 0, max
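(That log line is emitted at MDS debug level 2, so getting this kind of cache/memory
output just needs the MDS debug level raised, e.g. in ceph.conf; a sketch, and the exact
level is a matter of taste:)

  [mds]
      debug mds = 2    # check_memory_usage lines show up at level 2; 10+ is much noisier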
Mike,
Thanks for looking into this further.
On May 10, 2013, at 5:23 AM, Mike Bryant wrote:
> I've just found this bug report though: http://tracker.ceph.com/issues/3601
> Looks like that may be the same issue..
This definitely seems like a candidate.
>> Adding some debug to the cephfs ja
> There is already debugging present in the Java bindings. You can turn on
> client logging, and add 'debug javaclient = 20' to get client debug logs.
Ah, I hadn't noticed that, cheers.
> How many clients does HBase setup?
There's one connection to cephfs from the master, and one from each of
t
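(For anyone following along, the client-side logging mentioned above goes in the [client]
section of ceph.conf on the nodes running the Java bindings; a sketch, and the log file
path is only an example:)

  [client]
      log file = /var/log/ceph/client.$pid.log
      debug client = 20
      debug javaclient = 20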
Hi all,
I have deployed ceph 0.56.6.
I have 1 server running the OSD daemon (formatted ext4) and 1 server running Mon + MDS.
I use RAID 6 with 44TB capacity, divided into 2 partitions *(ext4)*, each
corresponding to 1 OSD.
ceph -s:
health HEALTH_OK
monmap e1: 1 mons at {0=10.160.0.70:6789/0}, election epoch
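(A layout like that would typically look something like this in ceph.conf; a sketch with
placeholder hostnames and mount points, not the poster's actual config:)

  [osd.0]
      host = osd-server
      osd data = /srv/osd.0    # first ext4 partition of the RAID 6 volume
  [osd.1]
      host = osd-server
      osd data = /srv/osd.1    # second ext4 partition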
Hi,
I'd like to know how a file that's been striped across multiple objects/object
sets (potentially multiple placement groups) is reconstituted and returned
to a client.
For example, say I have a 100 MB file, foo, that's been striped across 16 objects
in 2 object sets. What is the data fl
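(Roughly, the offset-to-object mapping works like this. For the sake of the numbers
above, assume a layout of stripe_unit = 1 MB, stripe_count = 8 and object_size = 8 MB;
these are illustrative values, not defaults. For a byte offset into the file:

  block     = offset / stripe_unit               # which 1 MB stripe unit
  stripepos = block % stripe_count               # which object within the object set
  stripeno  = block / stripe_count               # which stripe row
  objectset = stripeno / (object_size / stripe_unit)
  objectno  = objectset * stripe_count + stripepos

With those values an object set is 8 objects x 8 MB = 64 MB, so a 100 MB file spans
2 object sets and touches 16 objects, matching the example. The client library computes
these extents itself, issues the reads in parallel to the OSDs holding the relevant
placement groups, and reassembles the returned pieces in file order; the OSDs never see
the whole file.)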
On Fri, 10 May 2013, Noah Watkins wrote:
> Mike,
>
> Thanks for the looking into this further.
>
> On May 10, 2013, at 5:23 AM, Mike Bryant wrote:
>
> > I've just found this bug report though: http://tracker.ceph.com/issues/3601
> > Looks like that may be the same issue..
>
> This definitely s
Hello folks,
I'm in the process of testing Ceph and RBD. I have set up a small
cluster of hosts, each running a MON and an OSD with both journal and
data on the same SSD (ok, this is stupid, but it is a simple way to verify the
disks are not the bottleneck for 1 client). All nodes are connected on a
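(As an aside, a quick way to sanity-check raw cluster throughput independent of any
particular client is rados bench; pool name and duration here are arbitrary, and option
behaviour varies a bit between versions:)

  rados bench -p rbd 30 write   # 30 seconds of 4 MB object writes against the 'rbd' pool
  rados bench -p rbd 30 seq     # sequential read bench over objects left by a write run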
We need a cephalopod name that starts with 'e', and trolling through
taxonomies seems like a great thing to crowdsource. :) So far I've found
a few latin names, but the main problem is that I can't find a single
large list of species with the common names listed. Wikipedia's taxonomy
seems t
I like enteroctopus, but Enope is cool and shorter...
http://en.wikipedia.org/wiki/Sparkling_Enope_Squid
On Fri, May 10, 2013 at 11:31 AM, Sage Weil wrote:
> We need a cephalopod name that starts with 'e', and trolling through
> taxonomies seems like a great thing to crowdsource. :) So far I
On Fri, May 10, 2013 at 11:31 AM, Sage Weil wrote:
> We need a cephalopod name that starts with 'e', and trolling through
> taxonomies seems like a great thing to crowdsource. :) So far I've found
> a few latin names, but the main problem is that I can't find a single
> large list of species wit
As long as we have a picture. Enteroctopus is giant, which implies
large scale and is what we're about. I just like Enope, because they
are bio-luminescent.
http://en.wikipedia.org/wiki/Sparkling_Enope_Squid The pictures are
kind of cool too.
On Fri, May 10, 2013 at 11:47 AM, Yehuda Sadeh wrote
On Fri, 10 May 2013, John Wilkins wrote:
> I like enteroctopus, but Enope is cool and shorter...
> http://en.wikipedia.org/wiki/Sparkling_Enope_Squid
I was saving this one for F; the common name is 'firefly squid'. :)
sage
>
>
> On Fri, May 10, 2013 at 11:31 AM, Sage Weil wrote:
> We
On Fri, 10 May 2013, Yehuda Sadeh wrote:
> On Fri, May 10, 2013 at 11:31 AM, Sage Weil wrote:
> > We need a cephalopod name that starts with 'e', and trolling through
> > taxonomies seems like a great thing to crowdsource. :) So far I've found
> > a few latin names, but the main problem is that
Please don't make me say Enteroctopus over and over again in sales
meetings. :-) Something simple and easy please!
N
On Fri, May 10, 2013 at 7:51 PM, John Wilkins wrote:
> As long as we have a picture. Enteroctopus is giant, which implies
> large scale and is what we're about. I just like Enope
I found the eye flash squid.
http://www.bio.davidson.edu/people/midorcas/animalphysiology/websites/2005/plekon/eyeflashsquid.htm
Dave Spano
- Original Message -
From: "Sage Weil"
To: ceph-de...@vger.kernel.org, ceph-us...@ceph.com
Sent: Friday, May 10, 2013 2:31:57 PM
Subject
After upgrading my cluster everything looked good; then I rebooted the farm and
all hell broke loose.
I have 3 monitors but none are able to start. On all of them the
'/usr/bin/python /usr/sbin/ceph-create-keys' command is hanging because the
nodes cannot form a quorum.
All ceph tools a
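(When monitors can't form a quorum, the usual way to see what each one thinks is going
on is its admin socket, which works even without quorum; the socket path and mon name
below are placeholders:)

  ceph --admin-daemon /var/run/ceph/ceph-mon.a.asok mon_status
  ceph --admin-daemon /var/run/ceph/ceph-mon.a.asok quorum_status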
On 05/10/2013 11:02 PM, Jeppesen, Nelson wrote:
After upgrading my cluster everything looked good, then I rebooted the
farm and all hell broke loose.
I have 3 monitors but none are able to start. On all of them the
'/usr/bin/python /usr/sbin/ceph-create-keys' command is hanging because
none of
On 05/10/2013 12:16 PM, Greg wrote:
Hello folks,
I'm in the process of testing CEPH and RBD, I have set up a small
cluster of hosts running each a MON and an OSD with both journal and
data on the same SSD (ok this is stupid but this is simple to verify the
disks are not the bottleneck for 1 cli
Thank you, you saved my bacon. I hadn't injected the new map properly; the
monitor is going nuts but it's recovering. I wonder if I was hit by the .61
race condition. How can I verify that the monitor has upgraded to the 'new' .61
style that uses a single paxos? Thanks.
Nelson Jeppesen
_
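(On the verification question above: as far as I know, a converted 0.61 monitor keeps
everything in a single leveldb store inside its data directory, so a rough check,
assuming default paths and a cluster named 'ceph', is:)

  ls /var/lib/ceph/mon/ceph-a/
  # a converted cuttlefish mon has a 'store.db' directory (the single leveldb store)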
On 11/05/2013 00:56, Mark Nelson wrote:
On 05/10/2013 12:16 PM, Greg wrote:
Hello folks,
I'm in the process of testing CEPH and RBD, I have set up a small
cluster of hosts running each a MON and an OSD with both journal and
data on the same SSD (ok this is stupid but this is simple to verif
Hi Mark,
Given the same hardware and an optimal configuration (I have no idea what that
means exactly, but feel free to specify), which is supposed to perform
better: kernel rbd or qemu/kvm? Thanks,
Yun
On Fri, May 10, 2013 at 6:56 PM, Mark Nelson wrote:
> On 05/10/2013 12:16 PM, Greg wrote:
>
>> Hel
On 05/10/2013 07:20 PM, Greg wrote:
On 11/05/2013 00:56, Mark Nelson wrote:
On 05/10/2013 12:16 PM, Greg wrote:
Hello folks,
I'm in the process of testing CEPH and RBD, I have set up a small
cluster of hosts running each a MON and an OSD with both journal and
data on the same SSD (ok this
On 05/10/2013 07:21 PM, Yun Mao wrote:
Hi Mark,
Given the same hardware, optimal configuration (I have no idea what that
means exactly but feel free to specify), which is supposed to perform
better, kernel rbd or qemu/kvm? Thanks,
Yun
Hi Yun,
I'm in the process of actually running some tests
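(For reference, the two access paths being compared look roughly like this; pool/image
names and qemu options are illustrative only:)

  # kernel rbd: map the image through the rbd kernel module, then use the block device
  rbd map myimage --pool rbd --id admin
  mkfs.xfs /dev/rbd0 && mount /dev/rbd0 /mnt    # device number may differ

  # qemu/kvm: the guest disk goes through librbd in userspace, no kernel module involved
  qemu-system-x86_64 ... -drive file=rbd:rbd/myimage:id=admin,format=raw,if=virtio,cache=writeback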
Anyone running 0.61.1,
Watch out for high disk usage due to a file likely located at
/var/log/ceph/ceph-mon..tdump. This file contains debugging
for monitor transactions. This debugging was added in the past week or
so to track down another anomaly. It is not necessary (or useful unless
you a
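(Since it's only debugging output, the file can simply be truncated to reclaim space,
and, if I'm remembering the option name correctly (treat it as an assumption), the
dumping itself can be switched off in the [mon] section; <id> is a placeholder for your
mon id:)

  truncate -s 0 /var/log/ceph/ceph-mon.<id>.tdump

  [mon]
      mon debug dump transactions = false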
I also just pushed a fix to the cuttlefish branch, so if you want
packages that fix this, you can get them from gitbuilders using the
"testing" versions, branch "cuttlefish".
Thanks, Mike, for pointing this out!
On 05/10/2013 08:27 PM, Mike Dawson wrote:
Anyone running 0.61.1,
Watch out for
On Fri, 10 May 2013, Mike Dawson wrote:
> Anyone running 0.61.1,
>
> Watch out for high disk usage due to a file likely located at
> /var/log/ceph/ceph-mon..tdump. This file contains debugging for
> monitor transactions. This debugging was added in the past week or so to track
> down another anoma
On Fri, 10 May 2013, Sage Weil wrote:
> On Fri, 10 May 2013, Mike Dawson wrote:
> > Anyone running 0.61.1,
> >
> > Watch out for high disk usage due to a file likely located at
> > /var/log/ceph/ceph-mon..tdump. This file contains debugging for
> > monitor transactions. This debugging was added in