Hi,
I am trying to map an rbd device in Ubuntu 14.04 (kernel 3.13.0-30-generic):
# rbd -p mypool create test1 --size 500
# rbd -p mypool ls
test1
# rbd -p mypool map test1
rbd: add failed: (5) Input/output error
and in the syslog:
Jul 4 09:31:48 testceph kernel: [70503.356842] libceph: mon2 1
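A hedged diagnostic sketch (not from this thread, and the last command changes data placement, so treat it purely as an illustration): on older kernels an EIO from "rbd map" is often accompanied by a "feature set mismatch" message from libceph, and comparing the cluster's CRUSH tunables with what the kernel supports is a common next step.
# look for a "feature set mismatch" message from libceph
dmesg | tail
# show which CRUSH tunables the cluster currently uses
ceph osd crush show-tunables
# possible workaround for old kernels (note: this changes data placement)
ceph osd crush tunables legacy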
Hi
I just followed the quickstart guide for creating a block image, but it always
fails.
I am totally new to Ceph and don't know where to look for the problem.
Can you help? Thanks.
The client output:
$ rbd -m 172.17.6.176 -k my-cluster/ceph.client.admin.keyring -c
my-cluster/ceph.conf create foo --
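For reference, the block-device quick start in the Ceph docs uses roughly this form; the --size value (in MB) below is illustrative and not taken from the truncated command above:
# size is in MB; 4096 here is only an example value
rbd create foo --size 4096 -m 172.17.6.176 -k my-cluster/ceph.client.admin.keyring -c my-cluster/ceph.conf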
Hi,
Yesterday I finally updated our cluster to Emperor (latest stable
commit), and what's fairly apparent is a much higher RAM usage on the
OSDs:
http://i.imgur.com/qw9iKSV.png
Has anyone noticed the same? I mean, a 25% sudden increase in idle
RAM usage is hard to ignore...
Those OSD are pre
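A hedged way to quantify this, assuming the OSDs are built with tcmalloc (not something suggested in the thread itself):
# per-OSD heap usage as seen by tcmalloc
ceph tell osd.0 heap stats
# ask tcmalloc to hand freed pages back to the OS
ceph tell osd.0 heap release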
On 07/03/2014 04:32 PM, VELARTIS Philipp Dürhammer wrote:
Hi,
Ceph.conf:
osd journal size = 15360
rbd cache = true
rbd cache size = 2147483648
rbd cache max dirty = 1073741824
rbd cache max dirty age = 100
osd recovery max active = 1
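For readability, the same cache settings with the byte values spelled out (the annotations are mine, not part of the original mail); a later reply in this thread calls values of this size aggressive:
rbd cache = true
rbd cache size = 2147483648        # 2 GiB of cache per librbd client
rbd cache max dirty = 1073741824   # up to 1 GiB of dirty data per client
rbd cache max dirty age = 100      # seconds before dirty data must be written back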
Hello Ceph-Community,
I'm writing here because we have bad write performance on our Ceph cluster of
about
As an overview, the technical details of our cluster:
3 x monitoring servers, each with 2 x 1 Gbit/s NICs configured as a bond (link
aggregation mode)
5 x datastore servers, each with 10 x
Hi,
I wouldn't put those SSDs in RAID; just use them separately as journals
for half of your HDDs. This should make your write performance somewhat
better.
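A minimal sketch of that layout with ceph-deploy (host and device names are placeholders, not from this thread); each HDD-backed OSD gets its journal on a partition of one of the SSDs:
# HDDs sdc/sdd journal on the first SSD, sde/sdf on the second
ceph-deploy osd create node1:/dev/sdc:/dev/sda1
ceph-deploy osd create node1:/dev/sdd:/dev/sda2
ceph-deploy osd create node1:/dev/sde:/dev/sdb1
ceph-deploy osd create node1:/dev/sdf:/dev/sdb2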
On 04.07.2014 at 11:13, Marco Allevato wrote:
Hello Ceph-Community,
I’m writing here because we have a bad write-performanc
I use between 1 and 128 threads in different steps...
But 500 MB/s write is the maximum I get playing around.
Uff, it's so hard to tune Ceph... so many people have problems... ;-)
-Original Message-
From: Wido den Hollander [mailto:w...@42on.com]
Sent: Friday, 04 July 2014 10:55
To: VELARTIS Philip
On 07/04/2014 11:33 AM, Daniel Schwager wrote:
Hi,
I think the problem is the RBD device. It's only ONE device.
I fully agree. Ceph excels in parallel performance. You should run
multiple fio instances in parallel on different RBD devices and even
better on different clients.
Then you wil
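As a hedged illustration of "multiple fio instances", something like the following could be run against already-mapped RBD devices (device names and job parameters are made up, and these jobs overwrite the devices):
# two parallel sequential-write jobs against two different RBD devices
fio --name=rbd0 --filename=/dev/rbd0 --rw=write --bs=4M --iodepth=32 --ioengine=libaio --direct=1 --runtime=60 --time_based &
fio --name=rbd1 --filename=/dev/rbd1 --rw=write --bs=4M --iodepth=32 --ioengine=libaio --direct=1 --runtime=60 --time_based &
wait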
On 07/04/2014 11:40 AM, VELARTIS Philipp Dürhammer wrote:
I use between 1 and 128 threads in different steps...
But 500 MB/s write is the maximum I get playing around.
I just mentioned it in a different thread: make sure you do parallel
I/O! That's where Ceph really makes the difference. Run rados bench from
mul
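As an illustration of that advice (pool name and thread count are placeholders), the same bench command started on every client at roughly the same time:
# run on each client in parallel; --no-cleanup keeps the objects for a later read test
rados bench -p testpool 60 write -t 32 --no-cleanup
rados bench -p testpool 60 seq -t 32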
On 04/07/14 02:32, VELARTIS Philipp Dürhammer wrote:
Ceph.conf:
rbd cache = true
rbd cache size = 2147483648
rbd cache max dirty = 1073741824
Just an FYI: I posted a setting very much like this in another thread, and
remarked that it was "aggressive" - probably too much
Just thought I'd save some time )))
- Original Message -
From: "Wido den Hollander"
To: ceph-users@lists.ceph.com
Sent: Thursday, 3 July, 2014 12:11:07 PM
Subject: Re: [ceph-users] release date for 0.80.2
On 07/03/2014 10:27 AM, Andrei Mikhailovsky wrote:
> Hi guys,
>
> Was wonde
Hi David,
Do you mind sharing the howto/documentation with examples of configs, etc.?
I am tempted to give it a go and replace the Apache reverse proxy that I am
currently using.
cheers
Andrei
- Original Message -
From: "David Moreau Simard"
To: ceph-users@lists.ceph.com
Sent
> Try to create e.g. 20 (small) RBD devices, put them all in an LVM VG, and
> create a logical volume (RAID 0) with 20 stripes and e.g. a stripe size of
> 1 MB (better bandwidth) or 4 KB (better IO) - or use md-raid0 (it's maybe
> 10% faster, but not as flexible):
BTW - we use this approach for VMware
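A condensed sketch of that approach for two devices (extending it to 20 is mechanical; pool, image and VG names are made up):
rbd -p mypool create stripe-a --size 10240
rbd -p mypool create stripe-b --size 10240
rbd -p mypool map stripe-a        # appears as /dev/rbd0
rbd -p mypool map stripe-b        # appears as /dev/rbd1
pvcreate /dev/rbd0 /dev/rbd1
vgcreate vg_rbd /dev/rbd0 /dev/rbd1
# -i = number of stripes, -I = stripe size in KiB (1024 KiB = 1 MB)
lvcreate -i 2 -I 1024 -l 100%FREE -n lv_striped vg_rbd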
Hi,
I extracted a disk with two partitions (journal and data) and copied its
content in the hope of restarting the OSD and recovering its content.
mount /dev/sdb1 /mnt
rsync -avH --numeric-ids /mnt/ /var/lib/ceph/osd/ceph-$(cat /mnt/whoami)/
rm /var/lib/ceph/osd/ceph-$(cat /mnt/whoami)/journa
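For context, a hedged sketch of the steps that usually follow removing the journal file (the OSD id comes from the whoami file; this is only safe if the journal was cleanly flushed before the disk was pulled):
# recreate an empty journal for this OSD
ceph-osd -i $(cat /mnt/whoami) --mkjournal
# start the OSD again (Ubuntu upstart syntax of that era)
start ceph-osd id=$(cat /mnt/whoami)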
On 07/04/2014 03:18 PM, Loic Dachary wrote:
Hi,
I extracted a disk with two partitions (journal and data) and copied its
content in the hope of restarting the OSD and recovering its content.
mount /dev/sdb1 /mnt
rsync -avH --numeric-ids /mnt/ /var/lib/ceph/osd/ceph-$(cat /mnt/whoami)/
I th
On 04/07/2014 15:25, Wido den Hollander wrote:
> On 07/04/2014 03:18 PM, Loic Dachary wrote:
>> Hi,
>>
>> I extracted a disk with two partitions (journal and data) and copied its
>> content in the hope of restarting the OSD and recovering its content.
>>
>> mount /dev/sdb1 /mnt
>> rsync -avH -
On 07/04/2014 04:13 AM, Marco Allevato wrote:
Hello Ceph-Community,
I'm writing here because we have bad write performance on our
Ceph cluster of about
As an overview, the technical details of our cluster:
3 x monitoring servers, each with 2 x 1 Gbit/s NICs configured as a bond
(Link Aggregati
Thank you Luis for your response.
Quite unbelievable, but your solution worked!
Unfortunately, I'm stuck again when trying to upload parts of the file.
Apache's logs:
==> apache.access.log <==
127.0.0.1 l - [04/Jul/2014:15:40:41 +0200] "PUT /bucketbig/ HTTP/1.1" 200
477 "{Referer}i" "Boto/2.30.
On 07/03/2014 08:11 AM, VELARTIS Philipp Dürhammer wrote:
Hi,
I have a Ceph cluster setup (45 SATA disks, journals on the same disks) and get
only 450 MB/s sequential writes (the maximum when playing around with threads in
rados bench) with a replica count of 2.
That is about ~20 MB/s of writes per disk (what I see in atop as well)
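A rough back-of-the-envelope check of those numbers (my arithmetic, assuming replica 2 and journals co-located on the data disks): 450 MB/s from the client x 2 replicas = 900 MB/s of data written across 45 disks, i.e. ~20 MB/s of data per disk; with the journal on the same spindle each disk is actually writing roughly twice that, around 40 MB/s, plus the seek overhead of mixing journal and data writes.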
Still not sure whether I need Ceph's modified FastCGI or not.
But I guess this explains my problem with the installation:
http://tracker.ceph.com/issues/8233
It would be nice to have at least a workaround for this...
Thanks,
Patrycja Szabłowska
2014-07-04 16:02 GMT+02:00 Patrycja Szabłowska <
I am having issues running radosgw-agent to sync data between two
radosgw zones. As far as I can tell, both zones are running correctly.
My issue is when I run the radosgw-agent command:
radosgw-agent -v --src-access-key --src-secret-key
--dest-access-key --dest-secret-key
--src-zone us-m
On Fri, Jul 4, 2014 at 11:48 AM, Xabier Elkano wrote:
> Hi,
>
> I am trying to map an rbd device in Ubuntu 14.04 (kernel 3.13.0-30-generic):
>
> # rbd -p mypool create test1 --size 500
>
> # rbd -p mypool ls
> test1
>
> # rbd -p mypool map test1
> rbd: add failed: (5) Input/output error
>
> and in
For the record, here is a summary of what happened: http://dachary.org/?p=3131
On 04/07/2014 15:35, Loic Dachary wrote:
>
>
> On 04/07/2014 15:25, Wido den Hollander wrote:
>> On 07/04/2014 03:18 PM, Loic Dachary wrote:
>>> Hi,
>>>
>>> I extracted a disk with two partitions (journal and data) an