Hi John,
On 02/09/2014 05:29, Jakes John wrote:
> Hi,
> I have some general questions regarding the crush map. It would be helpful
> if someone can help me out by clarifying them.
>
> 1. I saw that a bucket 'host' is always created for the crush maps which are
> automatically generated by ceph.
Hi,
the next Berlin Ceph meetup is scheduled for September 22.
http://www.meetup.com/Ceph-Berlin/events/198884162/
Our host Christian will present the Ceph cluster they use for education
at the Berlin College of Further Education for Information Technology
and Medical Equipment Technology http:/
Hi,
I have some general questions regarding the crush map. It would be
helpful if someone can help me out by clarifying them.
1. I saw that a bucket 'host' is always created for the crush maps which
are automatically generated by ceph. If I am manually creating a crushmap,
do I need to always a
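For illustration, a minimal hand-written crushmap fragment with an explicit host bucket could look like the sketch below (bucket names, ids and weights are made up, not taken from this thread). The host level matters because the stock replicated rule places replicas with "step chooseleaf ... type host":

  # hypothetical crushmap fragment
  host node1 {
          id -2                    # unique, negative bucket id
          alg straw
          hash 0                   # rjenkins1
          item osd.0 weight 1.000
          item osd.1 weight 1.000
  }
  root default {
          id -1
          alg straw
          hash 0
          item node1 weight 2.000
  }
  rule replicated_ruleset {
          ruleset 0
          type replicated
          min_size 1
          max_size 10
          step take default
          step chooseleaf firstn 0 type host   # needs enough host buckets to place all replicas
          step emit
  }

Such a map can be compiled and loaded with "crushtool -c map.txt -o map.bin" followed by "ceph osd setcrushmap -i map.bin".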
Mark and all, Ceph IOPS performance has definitely improved with Giant.
With this version: ceph version 0.84-940-g3215c52
(3215c520e1306f50d0094b5646636c02456c9df4) on Debian 7.6 with Kernel 3.14-0.
I got 6340 IOPS on a single OSD SSD (journal and data on the same partition).
So basically twice
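As a point of comparison, 4K random-write IOPS against a kernel-mapped RBD image are often measured with a fio job along these lines (the device path, queue depth and runtime here are assumptions, not the settings behind the numbers above):

  fio --name=randwrite-4k --filename=/dev/rbd0 \
      --ioengine=libaio --direct=1 \
      --rw=randwrite --bs=4k \
      --iodepth=32 --numjobs=1 \
      --runtime=60 --time_based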
Hello,
I am currently testing ceph to build a replicated block device for a project
involving 2 data servers accessing this block device, so that if one fails
or crashes, the data can still be used and the cluster can be rebuilt.
This project requires that both machines run an OSD and a
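A rough sketch of such a setup, with placeholder pool and image names and sizes:

  ceph osd pool create rbdtest 128 128
  ceph osd pool set rbdtest size 2        # two replicas
  ceph osd pool set rbdtest min_size 1    # keep serving I/O with a single replica left
  rbd create rbdtest/shared --size 102400 # 100 GB image
  rbd map rbdtest/shared                  # run on each data server that needs the device

Note that mounting the same image read-write on both machines at once needs a cluster-aware filesystem (OCFS2, GFS2, ...) or a strict active/passive failover on top of it.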
Hi list,
on the weekend one of five OSD nodes failed (hung with a kernel panic).
The cluster degraded (12 of 60 osds down), but in this case the noout flag
is set from our monitoring host.
Around three hours later, however, the kvm guests which use storage on the
ceph cluster (and do writes) became inaccessible.
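For context, the flag in question is handled with commands like these (a generic sketch, not the poster's monitoring setup):

  ceph osd set noout     # failed OSDs stay "in", so no rebalancing starts
  ceph osd unset noout   # allow recovery/rebalancing again
  ceph -s                # shows the flag and the degraded PG count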
Hi,
The cluster got installed with quattor, which uses ceph-deploy for
installation of daemons, writes the config file and installs the
crushmap.
I have 3 hosts with 12 disks each, every disk having a large KV partition
(3.6T) for the ECdata pool and a small cache partition (50G) for the cache
I manual
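For readers following along, an erasure-coded data pool with a cache tier in front of it is usually wired up roughly like this; the profile values and pg counts are assumptions, not the actual quattor-generated configuration:

  ceph osd erasure-code-profile set ecprofile k=2 m=1
  ceph osd pool create ecdata 128 128 erasure ecprofile
  ceph osd pool create cache 128 128
  ceph osd tier add ecdata cache
  ceph osd tier cache-mode cache writeback
  ceph osd tier set-overlay ecdata cache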
You can get what you want with this:
[client]
log_file = /var/log/ceph/ceph-rbd.log
admin_socket = /var/run/ceph/ceph-rbd.asok
On Mon, Sep 1, 2014 at 11:58 AM, Ding Dinghua wrote:
> Hi all:
>
> Apologize if this question has been asked before.
> I noticed that since librbd doesn't have a daemon
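As a follow-up sketch using the paths from the answer above: once a client is running, its admin socket can be queried directly, and if several librbd clients share a host, per-process metavariables keep them from fighting over one socket path:

  ceph --admin-daemon /var/run/ceph/ceph-rbd.asok perf dump    # client-side counters
  ceph --admin-daemon /var/run/ceph/ceph-rbd.asok config show  # effective configuration

  [client]
  log_file = /var/log/ceph/$name.$pid.log
  admin_socket = /var/run/ceph/$name.$pid.asok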
Hmm, could you please list your steps, including how long the cluster has
existed and all relevant ops? I want to reproduce it.
On Mon, Sep 1, 2014 at 4:45 PM, Kenneth Waegeman wrote:
> Hi,
>
> I reinstalled the cluster with 0.84, and tried again running rados bench
> on an EC coded pool on keyvaluestore
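For anyone trying to reproduce this, the benchmark itself looks roughly like the following (pool name, pg count and duration are assumptions, and the OSDs are assumed to have been created with the keyvaluestore backend):

  ceph osd pool create ecpool 128 128 erasure
  rados bench -p ecpool 60 write --no-cleanup   # 60-second write benchmark
  rados bench -p ecpool 60 seq                  # sequential reads of the written objects
  rados -p ecpool cleanup                       # remove the benchmark objects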
This reminds me that we should also schedule some sort of meetup during
the OpenStack summit, which is also in Paris!
--
David Moreau Simard
On 2014-09-01, 8:06 AM, « Loic Dachary » wrote:
>Hi Ceph,
>
>The next Paris Ceph meetup is scheduled immediately after the Ceph day.
>
> http://www.meetup.com/Ceph-in-Paris/events/204412892/
Hi Ceph,
The next Paris Ceph meetup is scheduled immediately after the Ceph day.
http://www.meetup.com/Ceph-in-Paris/events/204412892/
I'll be there and hope to discuss the Giant features on this occasion :-)
Cheers
--
Loïc Dachary, Artisan Logiciel Libre
Mark, thanks a lot for experimenting with this for me.
I’m gonna try master soon and will tell you how much I can get.
It’s interesting to see that using 2 SSDs brings more performance, even though
both SSDs are under-utilized…
They should be able to sustain both loads at the same time (journal and osd
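For what it's worth, splitting data and journal across two devices is normally done when the OSD is created, e.g. with ceph-deploy; host and device names below are placeholders:

  ceph-deploy osd create node1:/dev/sdb:/dev/sdc   # data on sdb, journal on sdc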
Hi,
I reinstalled the cluster with 0.84, and tried again running rados
bench on an EC coded pool on keyvaluestore.
Nothing crashed this time, but when I check the status:
health HEALTH_ERR 128 pgs inconsistent; 128 scrub errors; too
few pgs per osd (15 < min 20)
monmap e1: 3 mons a
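For readers hitting the same state, the usual next steps look like this; the pg id and pg_num value are placeholders:

  ceph health detail                      # lists the inconsistent PGs and scrub errors
  ceph pg repair 2.1f                     # ask the primary to repair one inconsistent PG
  ceph osd pool set <pool> pg_num 256     # address "too few pgs per osd"
  ceph osd pool set <pool> pgp_num 256    # then raise pgp_num to match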
As I said, 107K with IOs served from memory, not hitting the disk.
From: Jian Zhang [mailto:amberzhan...@gmail.com]
Sent: Sunday, August 31, 2014 8:54 PM
To: Somnath Roy
Cc: Haomai Wang; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3,2K IOPS
Hi all.
When connecting to ceph.com, I get a 403 Forbidden message.
If I use a US proxy server, it works well.
Please solve this problem.
Thanks.