Marco & Igor,
mountall in Ubuntu 12.04 has a bug when mounting cephfs. It's fixed in 12.10.
The fix could probably be sponsored for backport if there's user interest.
See: https://bugs.launchpad.net/ubuntu/+source/mountall/+bug/677960
And: http://tracker.ceph.com/issues/2919
You could build the fixed mountall package yourself in the meantime.
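Until a backported mountall is available, one workaround (my assumption, not from the bug report; hosts, mountpoint, and secret file path are only examples) is to keep the cephfs entry out of the boot-time mount pass with noauto and mount it after boot:

    # /etc/fstab -- cephfs entry skipped by mountall at boot
    m1:6789,m2:6789,m3:6789:/  /mnt/ceph  ceph  name=admin,secretfile=/etc/ceph/admin.secret,noauto,noatime  0 0

    # then, e.g. at the end of /etc/rc.local:
    mount /mnt/ceph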
On Apr 1, 2013, Sage Weil wrote:
> * mds, ceph-fuse: fix bugs with replayed requests after MDS restart (Sage Weil)
There's a brown-paper-bag bug in the session_info_t compat decoding code that caused the mds to crash on startup for me. Here's a fix. I also patched the spec file so that the rpms pick up the patch.
On Sat, Mar 30, 2013 at 3:46 AM, Wido den Hollander wrote:
> On 03/29/2013 01:42 AM, Steve Carter wrote:
>>
>> I create an empty 150G volume, then copy it to a second pool:
>>
>> # rbd -p pool0 create --size 153750 steve150
>>
>> # /usr/bin/time rbd cp pool0/steve150 pool1/steve150
>> Image copy: 1
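If the surprise is that copying an "empty" image takes this long: as far as I know, rbd cp walks the image object by object, so copy time scales with the provisioned size rather than with the data actually written. To see how much data is really stored (standard commands; the image name is the one from the thread):

    rados df                  # per-pool object and byte counts
    rbd info pool0/steve150   # image size, object size, and object name prefix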
Hi,
Yes, the problem still persists.
I've changed the crushmap because I started with a single OSD and added a second one later, and it did not replicate at all under the original crushmap. The crushmap is now:
root@test-4:~# ceph osd getcrushmap -o /tmp/crushmap
got crush map from osdmap epoch
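For reference, the usual round trip for inspecting and editing a crushmap is (file paths are only examples):

    ceph osd getcrushmap -o /tmp/crushmap
    crushtool -d /tmp/crushmap -o /tmp/crushmap.txt    # decompile to editable text
    # edit buckets and rules in /tmp/crushmap.txt
    crushtool -c /tmp/crushmap.txt -o /tmp/crushmap.new
    ceph osd setcrushmap -i /tmp/crushmap.new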
On 04/02/2013 06:18 AM, Varun Chandramouli wrote:
Hi All,
I wanted to monitor the performance of a ceph cluster: the disk storage,
cpu utilization, and mainly, the network traffic (data getting
transferred between 2 OSDs). Could you suggest any tools/commands suited
for this?
There are lots of options.
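A few generic starting points (the network interface name is only an example; only the last command is ceph-specific):

    iostat -x 5     # per-disk utilization and latency
    dstat -cdn 5    # cpu, disk, and network in one view
    iftop -i eth0   # per-connection traffic, e.g. replication between two OSD hosts
    ceph -w         # cluster-wide events plus client and recovery throughput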
Hi All,
I wanted to monitor the performance of a ceph cluster: the disk storage,
cpu utilization, and mainly, the network traffic (data getting transferred
between 2 OSDs). Could you suggest any tools/commands suited for this?
Another question I had was regarding hadoop-MR on ceph. I believe that
Hello.
Today I upgraded my cluster to version 0.60 and noticed a strange thing.
I mounted cephfs using the kernel module, uploaded a 10G file to test upload speed, and then manually deleted the file. From the mountpoint the data is gone, but "ceph -w" still shows 10G of data in the cluster. Any idea how to clear that?
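If this is the behavior I think it is, it is probably just asynchronous deletion: cephfs does not remove the underlying objects at unlink time; the mds purges them in the background, so the usage reported by the cluster drops with some delay. Checking again after a while should confirm:

    ceph -s    # reported usage should shrink as the mds purges the deleted objects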
Igor,
Thanks, I confirm too:
the problem arises only when trying to mount the root directory on Ubuntu 12.04
--
Marco Aroldi
On 02/04/2013 11:02, Igor Laskovy wrote:
Hi, I can confirm this behavior in Ubuntu 12.04.
Try mounting a non-root directory. For example, change
"m1:6789,m2:6789,m3:6789:/"
Hi, I can confirm this behavior in Ubuntu 12.04.
Try mounting a non-root directory. For example, change "m1:6789,m2:6789,m3:6789:/"
to "m1:6789,m2:6789,m3:6789:/datastore00", but first you need to have created
the "datastore00" directory. Try this!
On Tue, Apr 2, 2013 at 11:39 AM, Marco Aroldi wrote:
>
My laptop (Linux Mint 14) mounts ceph at boot (0.59) - no problem at all.
I've tried with a server running Ubuntu 12.04 (ceph 0.56.4) - problem!
I've tried with 2 virtual machines running Ubuntu 12.04 (ceph 0.56.4) on my
laptop - problem!
The line in the fstab and the chmod setting on the keyring file are the same on all of them.
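For reference, the permission check being compared is presumably just (the path is an example):

    ls -l /etc/ceph/admin.secret     # should be mode 600
    chmod 600 /etc/ceph/admin.secret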
Anyone got any clue as to why this goes wrong:
root@c2-backup ~ # uname -a
Linux c2-backup 3.2.0-39-generic #62-Ubuntu SMP Thu Feb 28 00:28:53 UTC
2013 x86_64 x86_64 x86_64 GNU/Linux
root@c2-backup ~ # rbd showmapped
id  pool        image          snap  device
0   22-optelec  22-optelec-01  -
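Without the rest of the message it's hard to say what goes wrong here, but the usual inspection points for kernel rbd are (the device name is an assumption):

    ls /sys/bus/rbd/devices    # one entry per mapped image
    dmesg | tail               # messages from the rbd/libceph kernel modules
    rbd unmap /dev/rbd0        # detach the device if it is stuck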