Hello Gilles,
On Fri, 20 Dec 2013 21:04:45 +0100 Gilles Mocellin wrote:
> On 20/12/2013 03:51, Christian Balzer wrote:
> > Hello Mark,
> >
> > On Thu, 19 Dec 2013 17:18:01 -0600 Mark Nelson wrote:
> >
> >> On 12/16/2013 02:42 AM, Christian Balzer wrote:
> >>> Hello,
> >> Hi Christian!
> >>
>

On 12/16/2013 02:42 AM, Christian Balzer wrote:
> [snip]

Hi Christian!

I know you'd like to use the distro package ...

Hello,

new to Ceph, not new to replicated storage.
Simple test cluster with 2 identical nodes running Debian Jessie, thus
ceph 0.48. And yes, I very much prefer a distro supported package.
Single mon and osd1 on node a, osd2 on node b.
1GbE direct interlink between the nodes, used exclusively for ...
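
For anyone reconstructing this setup, a minimal sketch of how such a
layout is usually spelled out in ceph.conf. The hostnames, addresses and
subnets below are made up, and whether 0.48 honours "cluster network"
should be checked against your release, so treat it as illustrative:

    [global]
        auth supported = cephx
        ; client-facing LAN (illustrative subnet)
        public network = 192.0.2.0/24
        ; the dedicated 1GbE interlink, carrying replication traffic
        cluster network = 198.51.100.0/24

    [mon.a]
        host = node-a
        mon addr = 192.0.2.1:6789

    [osd.1]
        host = node-a

    [osd.2]
        host = node-b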

Martin,

Thanks for the confirmation about 3-replica performance.

dmesg | fgrep /dev/sdb   # returns no matches

Jeff

Hi Jeff,

I would be surprised as well - we initially tested on a 2-replica cluster
with 8 nodes having 12 osds each - and went to 3-replica as we re-built
the cluster.
The performance seems to be where I'd expect it (doing consistent writes
in an rbd VM @ ~400MB/sec on 10GbE, which I'd expect is either ...
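
A rough sketch of one way to reproduce that kind of number; the mount
point, file size and pool name are illustrative, not from the thread:

    # inside the VM: sustained direct writes, bypassing the page cache
    dd if=/dev/zero of=/mnt/test/bigfile bs=4M count=2048 oflag=direct

    # from a cluster node: raw RADOS write throughput for 60 seconds,
    # against a pool named 'rbd'
    rados bench -p rbd 60 write

The dd figure includes the filesystem and rbd layers; rados bench
isolates the object store underneath them.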

On 08/20/2013 08:42 AM, Jeff Moskow wrote:
> Hi,
>
> More information. If I look in /var/log/ceph/ceph.log, I see 7893 slow
> requests in the last 3 hours, of which 7890 are from osd.4. Should I
> assume a bad drive? SMART says the drive is healthy. Bad osd?

Definitely sounds suspicious! Might be worth ...
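
For anyone chasing a similar pattern, a quick way to confirm that the
slow requests really do pile up on one OSD, and to poke at the disk
behind it. The device name and OSD id are taken from this thread, and
the grep is approximate since the log format varies between releases:

    # count slow-request lines per reporting OSD
    grep 'slow request' /var/log/ceph/ceph.log \
        | grep -oE 'osd\.[0-9]+' | sort | uniq -c | sort -rn

    # a fuller SMART picture than the one-line health verdict
    smartctl -a /dev/sdb

    # if osd.4 is the culprit, marking it out lets the cluster
    # rebalance around it without removing it permanently
    ceph osd out 4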

Hi,

I am now occasionally seeing ceph statuses like this:

    health HEALTH_WARN 2 requests are blocked > 32 sec

They aren't always present even though the cluster is still slow, but
they may be a clue.

Jeff
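
One way to dig further when that warning shows up - hedged, since the
exact commands and their output vary by release:

    # expands the one-line summary and, on most releases, names the
    # OSDs the blocked requests are sitting on
    ceph health detail

    # on the suspect OSD's node: what are those requests waiting for?
    ceph --admin-daemon /var/run/ceph/ceph-osd.4.asok dump_ops_in_flight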

On Sat, Aug 17, 2013 at 02:32:47PM -0700, Sage Weil wrote:
On Sat, 17 Aug 2013, Jeff Moskow wrote:
> Hi,
>
> When we rebuilt our ceph cluster, we opted to make our rbd storage
> replication level 3 rather than the previously configured replication
> level 2.
>
> Things are MUCH slower (5 nodes, 13 osd's) than before even though
> most of our I/O is read. Is this to be expected?
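
For reference, the replica count is a per-pool setting; the change
described above is typically done with something like this (pool name
'rbd' assumed):

    ceph osd pool set rbd size 3       # keep three copies of each object
    ceph osd pool set rbd min_size 2   # still serve I/O with two copies up
    ceph osd pool get rbd size         # confirm the new value

Every write is acknowledged by all replicas before it completes, so going
from 2 to 3 copies adds a third commit to every write; reads are served
from the primary and should be largely unaffected.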