wonders if in Mike's case a "RBD FSCK"
of sorts can be created that is capable of restoring things based on the
actual data still on the OSDs.
Christian
nt on the local node a and Ceph would be smart enough to get it all
locally. But even if it was talking to both nodes a and b (or just b) I
would have expected something in the 100MB/s range.
Any insights would be much appreciated.
Regards,
Christian
fail, would the OSDs still continue to function and use the public network
instead?
I hope this wasn't tl;dr. ^o^
Regards,
Christian
follow but likely won't be able to for
initial deployment, as native QEMU RBD support is just being added to
Ganeti. And no, OpenStack is not an option at this point in time (no CPU
pinning, manual node failure recovery), especially given that this is
supposed to be up and running in 3 mon
one would have expected by just
punching in the numbers for the backing storage, never mind that this is
on top of DRBD:
7 disks * 75 IOPS / 6 (RAID6 write penalty) = 87.5 write IOPS
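As a quick sanity check, the same arithmetic in a few lines of Python; the 75
IOPS per disk and the RAID6 write penalty of 6 are simply the assumptions
stated above:
---
# Rough write-IOPS estimate for the backing storage described above.
# Assumptions from the mail: 7 data disks, ~75 IOPS per nearline disk,
# RAID6 write penalty of 6.
disks = 7
iops_per_disk = 75
raid6_write_penalty = 6

print(disks * iops_per_disk / raid6_write_penalty)  # 87.5 write IOPS
---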
Christian
On Wed, 18 Dec 2013 09:12:15 +0900 Christian Balzer wrote:
>
> Hello Mike,
>
> On Tue, 17 Dec 2
you're happy with killing another drive when
replacing a faulty one in that Supermicro contraption), that ratio is down
to 1 in 21.6, which is way worse than that 8-disk RAID5 I mentioned up there.
Regards,
Christian
eding edge isn't my style when it comes to
production systems. ^o^
And at something like 600 disks, that would still have to be a mighty high
level of replication to combat failure statistics...
> Wolfgang
>
> On 12/19/2013 09:39 AM, Christian Balzer wrote:
> >
[snip]
Hello Mark,
On Thu, 19 Dec 2013 17:18:01 -0600 Mark Nelson wrote:
> On 12/16/2013 02:42 AM, Christian Balzer wrote:
> >
> > Hello,
>
> Hi Christian!
>
> >
> > new to Ceph, not new to replicated storage.
> > Simple test cluster with 2 identical nodes
Hello,
On Thu, 19 Dec 2013 12:12:13 +0100 Mariusz Gronczewski wrote:
> On 2013-12-19, at 17:39:54, Christian Balzer wrote:
[snip]
>
>
> > So am I completely off my wagon here?
> > How do people deal with this when potentially deploying hundreds of
>
Hello,
On Thu, 19 Dec 2013 12:42:15 +0100 Wido den Hollander wrote:
> On 12/19/2013 09:39 AM, Christian Balzer wrote:
[snip]
> >
>
> I'd suggest to use different vendors for the disks, so that means you'll
> probably be mixing Seagate and Western Digital in s
body was available to go there and
swap a fresh one in, last night the next drive failed and now somebody is
dashing there with 2 spares. ^o^
More often than not the additional strain of recovery will push disks over
the edge, aside from increasing the likelihood of clustered failures with
certain driv
r so that you end up with 64MB objects
> instead of 4MB objects.
>
It is my understanding that this would likely significantly impact
performance of those RBD images...
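For scale, a back-of-the-envelope count of how many RADOS objects a single
image is split into at each object size (the 1 TiB image size is just an
assumption for illustration):
---
# Number of RADOS objects per RBD image at 4MB vs 64MB object sizes.
# The 1 TiB image size is an assumed example, not from the thread.
TiB = 1024 ** 4
MB = 1024 ** 2

image_size = 1 * TiB
for object_size in (4 * MB, 64 * MB):
    print(object_size // MB, "MB objects:", image_size // object_size)
# 4 MB objects: 262144
# 64 MB objects: 16384  -> 16x fewer objects to spread the I/O over
---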
Christian
ace, I expect to be impressed. ^o^
> Cheers, Dan
> CERN IT/DSS
>
[snip]
Regards,
Christian
Hello Dan,
On Fri, 20 Dec 2013 14:01:04 +0100 Dan van der Ster wrote:
> On Fri, Dec 20, 2013 at 9:44 AM, Christian Balzer wrote:
> >
> > Hello,
> >
> > On Fri, 20 Dec 2013 09:20:48 +0100 Dan van der Ster wrote:
> >
> >> Hi,
> >> Our fio
Hello Gilles,
On Fri, 20 Dec 2013 21:04:45 +0100 Gilles Mocellin wrote:
> On 20/12/2013 03:51, Christian Balzer wrote:
> > Hello Mark,
> >
> > On Thu, 19 Dec 2013 17:18:01 -0600 Mark Nelson wrote:
> >
> >> On 12/16/2013 02:42 AM, Christian Balzer w
Hello,
On Sun, 22 Dec 2013 12:27:46 +0100 Gandalf Corvotempesta wrote:
> 2013/12/17 Christian Balzer :
> > Network:
> > Infiniband QDR, 2x 18port switches (interconnected of course),
> > redundant paths everywhere, including to the clients (compute nodes).
>
> Are y
PiB
> ------------------------------------------------------------------------
> RADOS: 3 cp  10-nines  0.000e+00  5.232e-08  0.000116%  0.000e+00  6.486e+03
>
> [1] https://github.com/ceph/ceph-tools/tree/master/m
ize: 4MB
> >> stripe length: 1100
> > I take it that is to mean that any RBD volume of sufficient size is
> > indeed spread over all disks?
>
> Spread over all placement groups, the difference is subtle but there
> is a difference.
>
Right, it isn't exac
their traffic over the public network if that
was still available in case the cluster network fails?
Regards,
Christian
allow for a seamless and transparent
migration, other than a "deploy more hardware, create a new pool, migrate
everything by hand (with potential service interruptions)" approach?
Regards,
Christian
Hello Loic,
On Tue, 24 Dec 2013 08:29:38 +0100 Loic Dachary wrote:
>
>
> On 24/12/2013 05:42, Christian Balzer wrote:
> >
> > Hello,
> >
> > from what has been written on the roadmap page and here, I assume that
> > the erasure coding option with
Hello,
On Tue, 24 Dec 2013 16:33:49 +0100 Loic Dachary wrote:
>
>
> On 24/12/2013 10:22, Wido den Hollander wrote:
> > On 12/24/2013 09:34 AM, Christian Balzer wrote:
> >>
> >> Hello Loic,
> >>
> >> On Tue, 24 Dec 2013 08:29:38 +0100 Loic Dac
still welcome!
Christian
On Tue, 17 Dec 2013 16:44:29 +0900 Christian Balzer wrote:
>
> Hello,
>
> I've been doing a lot of reading and am looking at the following design
> for a storage cluster based on Ceph. I will address all the likely
> knee-jerk reactions and reason
ort HBA for the
cluster network would be worth it (at a mere $1000 per card).
A failover to the public network would have let me get away with a single
port card, which unsurprisingly is half the price. ^o^
Regards,
Christian
ceph-deploy from ceph.com isn't available for Jessie or sid
Regards,
Christian
Hello,
On Tue, 07 Jan 2014 12:47:02 + Joao Eduardo Luis wrote:
> On 01/07/2014 05:14 AM, Christian Balzer wrote:
[snip]
> >
> > Is this the expected state of affairs, as in:
> > 1. The current Debian Sid package can't create a new cluster by itself
> > 2.
Hello,
On Tue, 7 Jan 2014 08:35:44 -0500 Alfredo Deza wrote:
> On Tue, Jan 7, 2014 at 7:47 AM, Joao Eduardo Luis
> wrote:
> > On 01/07/2014 05:14 AM, Christian Balzer wrote:
> >>
[snip]
> >> So I grabbed the latest ceph-deploy,
> >> ceph-deploy_1.3.4-1~bp
On Wed, 8 Jan 2014 10:46:42 +0900 Christian Balzer wrote:
> Hello,
>
> On Tue, 7 Jan 2014 08:35:44 -0500 Alfredo Deza wrote:
>
> > On Tue, Jan 7, 2014 at 7:47 AM, Joao Eduardo Luis
> > wrote:
> > > On 01/07/2014 05:14 AM, Christian Balzer wrote:
> > >
needs to be disabled for it to work.
Am I correct in that assumption?
Regards,
Christian
caching with a dual-primary
DRBD as backing device when requesting live migration.
> Afaik a full FS flush is called just as it completes copying the memory
> across for the live migration.
>
Yeah, that ought to do the trick nicely when dealing with RBD or page
cache.
Christian
> -M
t more than about 150 devices per compute node now, but that
might change and there is also the case of failovers...
Regards,
Christian
ense. Of course more storage nodes will
help even more; it all becomes a question of how much money and rack
space you're willing to spend. ^o^
Regards,
Christian
at's why I am going to deploy an Infiniband based Ceph (and client)
cluster because even with just IPoIB I'm expecting it to be quite a bit
snappier than 10GigE in the latency bits at least.
But of course a more native, low-level approach would be even better. ^o^
Regards,
Christian
their own repository but to also
ensure that Debian (and consequently Ubuntu) packages are in a
well-maintained and working condition.
Regards,
Christian
013/10/25/measure-ceph-rbd-performance-in-a-quantitative-way-part-i
Regards,
Christian
gainst using it as a backing store for a reason.
^.^
Regards,
Christian (another one)
On Fri, 7 Feb 2014 19:22:54 -0800 (PST) Sage Weil wrote:
> On Sat, 8 Feb 2014, Christian Balzer wrote:
> > On Fri, 07 Feb 2014 18:46:31 +0100 Christian Kauhaus wrote:
> >
> > > Am 07.02.2014 14:42, schrieb Mark Nelson:
> > > > Ok, so the reason I was wonderi
urce/linux/+bug/721896
Regards,
Christian
> > could not open disk image
> > rbd:rbd/2dde8ec6-f46c-40e9-b1b7-f5f8a653c42c.rbd.disk0: No such file or
> > directory
> >
> > I have noticed this file.
> >
> > root@hype-1:~# cat /sys/devices/rbd/1/name
> > 2dde8ec6-f46c-40e9-b1b7-f5f8a653c42c.rbd.disk0
eyrings go?) and bring the OSD
up?
Regards,
Christian
B, bw=446771B/s, iops=71, runt=1929861msec
EXT4:
read : io=3273.8MB, bw=3878.9KB/s, iops=635, runt=864262msec
write: io=841998KB, bw=997621B/s, iops=159, runt=864262msec
Regards,
Christian
stable provided the ubuntu kernel guys are
> keeping up with the mainline stable kernels at kernel.org (they
> generally do).
>
He says tomato (Debian Wheezy), you say tomato (Ubuntu).
Chad, use the 3.13 kernel from wheezy-backports if you're uncomfortable
with rolling your own.
mance can be better with lower
> quantity of drives per host.
>
More storage nodes are always better, because in a 24-OSD node with 10GbE
you are definitely saturating your network link with just 10 disks.
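The rough arithmetic behind that claim, with assumed figures of ~120 MB/s
sequential throughput per spinning disk and ~1200 MB/s of usable 10GbE
bandwidth:
---
# Why ~10 spinning disks can already saturate a single 10GbE link.
# Both figures below are assumptions, not measurements from the thread.
disk_mb_s = 120    # sequential MB/s per spinning disk
link_mb_s = 1200   # usable MB/s on a 10GbE link

print(link_mb_s / disk_mb_s)  # 10.0 -> the remaining OSDs add no streaming bandwidth
---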
In your case 6 of the storage nodes I'm describing above would be swell,
but probably tot
ceph.com/Planning/Blueprints/Emperor/Kernel_client_read_ahead_optimization
Regards,
Christian
d, but the actual OSD needs to
handle all those writes and becomes limited by the IOPS it can perform.
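To illustrate the point (all numbers below are assumptions, not
measurements): the journal can absorb a burst quickly, but the sustained
rate converges to what the backing filestore disk can actually do.
---
# The journal only buffers; the backing disk sets the long-run limit.
journal_iops = 10_000   # assumed SSD journal ingest rate
filestore_iops = 150    # assumed spinning-disk backing store rate

print(min(journal_iops, filestore_iops))  # 150 -> sustained IOPS ceiling
---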
Regards,
Christian
On Tue, 08 Apr 2014 10:31:44 +0200 Josef Johansson wrote:
> On 08/04/14 10:04, Christian Balzer wrote:
> > Hello,
> >
> > On Tue, 08 Apr 2014 09:31:18 +0200 Josef Johansson wrote:
> >
> >> Hi all,
> >>
> >> I am currently benchmarking a sta
On Tue, 08 Apr 2014 14:19:20 +0200 Josef Johansson wrote:
>
> On 08/04/14 10:39, Christian Balzer wrote:
> > On Tue, 08 Apr 2014 10:31:44 +0200 Josef Johansson wrote:
> >
> >> On 08/04/14 10:04, Christian Balzer wrote:
> >>> Hello,
> >>>
>
> >>
> >> Can anyone suggest me good way to do this ??
> >>
> >> Thanks,
> >> Punit
> >>
On Tue, 8 Apr 2014 09:35:19 -0700 Gregory Farnum wrote:
> On Tuesday, April 8, 2014, Christian Balzer wrote:
>
> > On Tue, 08 Apr 2014 14:19:20 +0200 Josef Johansson wrote:
> > >
> > > On 08/04/14 10:39, Christian Balzer wrote:
> > > > On Tue, 08 Apr
ing storage to its
throughput limits would be "filestore min sync interval" then?
Or can something else cause the journal to be flushed long before it
becomes full?
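For reference, a very rough conceptual model (not Ceph source) of the two
triggers being discussed, under the assumption that a sync fires when either
the max sync interval elapses or the journal fills past some threshold, but
never before the min sync interval has passed; the interval values below are
assumed defaults, the fill ratio is purely illustrative:
---
# Toy model of the filestore sync triggers discussed above (not Ceph code).
def should_sync(seconds_since_last, journal_used, journal_size,
                min_interval=0.01, max_interval=5.0, full_ratio=0.5):
    if seconds_since_last < min_interval:
        return False
    return (seconds_since_last >= max_interval
            or journal_used >= journal_size * full_ratio)

print(should_sync(1.0, 2000, 10000))  # False: neither trigger hit yet
print(should_sync(6.0, 2000, 10000))  # True: max interval elapsed
print(should_sync(1.0, 6000, 10000))  # True: journal past the fill threshold
---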
Christian
> -Greg
> Software Engineer #42 @ http://inktank.com | http://ceph.com
>
>
> On Wed, Apr 9, 2014
l be lower in practice.
> Please suggest me better way to do this ??
>
I already did below.
Regards,
Christian
>
> On Wed, Apr 9, 2014 at 4:02 PM, Christian Balzer wrote:
>
> > On Wed, 9 Apr 2014 14:59:30 +0800 Punit Dambiwal wrote:
> >
> > > Hi,
> > &
xed?
b) Any idea what might have caused the heap to grow to that size?
c) Shouldn't it have released that memory by itself at some point in time?
Regards,
Christian
On Fri, 11 Apr 2014 23:33:42 -0700 Gregory Farnum wrote:
> On Fri, Apr 11, 2014 at 11:12 PM, Christian Balzer wrote:
> >
[snip]
> >
> > Questions remaining:
> >
> > a) Is that non-deterministic "ceph heap" behavior expected and if yes
> > can
line with qmp ?
> >
> > Best Regards,
> >
> > Alexandre
://bugs.debian.org/cgi-bin/bugreport.cgi?bug=729961
and the referenced bug:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=679686
Le sigh.
Christian
Apr 2014 19:01:30 +0900 Christian Balzer wrote:
>
> Hello,
>
> Nothing new, I know. But some numbers to mull and ultimately weep over.
>
> Ceph cluster based on Debian Jessie (thus ceph 0.72.x), 2 nodes, 2 OSDs
> each.
> Infiniband 4xQDR, IPoIB interconnects, 1 GByt
> >> VMs running a 3.2+ kernel (iirc) can "give back blocks" by issuing
> >> TRIM.
> >>
> >> http://wiki.qemu.org/Features/QED/Trim
> >>
m going to test noatime on a few
> VMs and see if I notice a change in the distribution.
>
That strikes me as odd, since as of kernel 2.6.30 the default mount
option is relatime, which should have an effect quite close to that of a
strict noatime.
Regards,
Christian
cluster you outlined would be
something (assuming 100 IOPS for those NL drives) like this:
100 (IOPS) x 9 (disks) x 12 (hosts) / 3 (replication ratio) = 3600 IOPS
However that's ignoring all the other caches, in particular the controller
HW cache, which can raise the sustainable level quite a bit.
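Spelled out, with the same assumptions (100 IOPS per nearline drive, 9 drives
per host, 12 hosts, 3x replication):
---
# The raw-IOPS estimate above, before any controller or page cache helps.
iops_per_drive = 100
drives_per_host = 9
hosts = 12
replication = 3

print(iops_per_drive * drives_per_host * hosts / replication)  # 3600.0
---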
On Fri, 18 Apr 2014 11:34:15 +1000 Blair Bethwaite wrote:
> > Message: 20
> > Date: Thu, 17 Apr 2014 17:45:39 +0900
> > From: Christian Balzer
> > To: "ceph-users@lists.ceph.com"
> > Subject: Re: [ceph-users] SSDs: cache pool/tier versus node-loca
> >>
> >
> > True, but it will obviously take time before this hits the upstream
> > kernels and goes into distributions.
> >
> > For RHEL 7 it might be that the krbd module from the Ceph extra repo
> > might work. For Ubuntu it's waiting for ne
%), but still
nowhere near their capacity.
This leads me to believe that, aside from network latencies (4xQDR
Infiniband here, which has less latency than 10GbE), there is a lot of
room for improvement when it comes to how Ceph handles things
(bottlenecks in the code) and tuning in general.
On Tue, 22 Apr 2014 02:45:24 +0800 Indra Pramana wrote:
> Hi Christian,
>
> Good day to you, and thank you for your reply. :) See my reply inline.
>
> On Mon, Apr 21, 2014 at 10:20 PM, Christian Balzer wrote:
>
> >
> > Hello,
> >
> > On Mon, 21 A
On Wed, 23 Apr 2014 12:39:20 +0800 Indra Pramana wrote:
> Hi Christian,
>
> Good day to you, and thank you for your reply.
>
> On Tue, Apr 22, 2014 at 12:53 PM, Christian Balzer wrote:
>
> > On Tue, 22 Apr 2014 02:45:24 +0800 Indra Pramana wrote:
> >
> >
eased" data from osd when we see it fit "maintenance time".
>
> If doing it automatically has too bad an impact on overall performance,
> I would be glad to at least be able to decide an appropriate moment to
> force a cleaning task, which would be better than not
On Thu, 24 Apr 2014 13:51:49 +0800 Indra Pramana wrote:
> Hi Christian,
>
> Good day to you, and thank you for your reply.
>
> On Wed, Apr 23, 2014 at 11:41 PM, Christian Balzer wrote:
>
> > > > > Using 32 concurrent writes, result is below. The
ining an
> > isolated new cluster?
> >
> > thanks in advance,
> > Xabier
> >
> >
oller
> with 2GB Cache. I have to plan this storage as a KVM image backend and
> the goal is the performance over the capacity.
>
Writeback cache can be very helpful; however, it is not a miracle cure.
Not knowing your actual load and I/O patterns, it might very well be
enough, though.
On Wed, 07 May 2014 11:01:33 +0200 Xabier Elkano wrote:
> On 06/05/14 18:40, Christian Balzer wrote:
> > Hello,
> >
> > On Tue, 06 May 2014 17:07:33 +0200 Xabier Elkano wrote:
> >
> >> Hi,
> >>
> >> I'm designing a new ceph p
.
I guess what I am wondering about is if this is normal and to be expected
or, if not, where all that potential performance got lost.
Regards,
Christian
On Wed, 7 May 2014 18:37:48 -0700 Gregory Farnum wrote:
> On Wed, May 7, 2014 at 5:57 PM, Christian Balzer wrote:
> >
> > Hello,
> >
> > ceph 0.72 on Debian Jessie, 2 storage nodes with 2 OSDs each. The
> > journals are on (separate) DC 3700s, the actual OSDs a
>
I have that set of course, as well as specifically "writeback" for the KVM
instance in question.
Interestingly I see no difference at all with a KVM instance that is set
explicitly to "none", but that's not part of this particular inquiry
either.
Christian
>
>
>
ve begin the raid6 ?)
> >
> >
> > Also, I know that direct IOs can be quite slow with Ceph,
> >
> > maybe can you try without --direct=1
> >
> > and also enable rbd_cache
> >
> > ceph.conf
> > [client]
> > rbd cache = true
>
>
>
> ----- Original Message -----
>
> From: "Christian Balzer"
> To: ceph-users@lists.ceph.com
> Sent: Thursday, 8 May 2014 08:26:33
> Subject: Re: [ceph-users] Slow IOPS on RBD compared to journal and backing
> devices
>
>
> Hello,
>
> On Wed, 7
get busy (CPU, not IOWAIT), but there still are 1-2 cores idle at
that point.
>
>
> ----- Original Message -----
>
> From: "Christian Balzer"
> To: ceph-users@lists.ceph.com
> Cc: "Alexandre DERUMIER"
> Sent: Thursday, 8 May 2014 08:52:15
> Subject: Re
actually needed it.
Because for starters it is ext4, not xfs, see:
https://www.mail-archive.com/ceph-users@lists.ceph.com/msg08619.html
For what it's worth, I never got an answer to the actual question in that
mail.
Christian
> Udo
>
> On 08.05.2014 02:57, Christian Balzer wrote:
> > Also, I know that direct IOs can be quite slow with Ceph,
> >
> > maybe can you try without --direct=1
> >
> > and also enable rbd_cache
> >
> > ceph.conf
> > [client]
> > rbd cache = true
> >
> >
> >
> >
> > - Ma
or XFS will serve you better in your case, no doubt.
Regards,
Christian
> Thanks in advance
>
> Carlos M. Perez
> CMP Consulting Services
> 305-669-1515
>
Re-added ML.
On Fri, 9 May 2014 14:50:54 + Carlos M. Perez wrote:
> Christian,
>
> Thanks for the responses. See below for a few
> responses/comments/further questions...
>
> > -Original Message-
> > From: Christian Balzer [mailto:ch...@gol.com]
> >
On Fri, 09 May 2014 23:03:50 +0200 Michal Pazdera wrote:
> On 9.5.2014 9:08, Christian Balzer wrote:
> > Is that really just one disk?
>
> Yes, it's just one disk in all PCs. I know that the setup is bad, but I
> just want to get
> familiar with Ceph (and other parall
the other hand, if people here regularly get thousands or tens of
thousands of IOPS per OSD with the appropriate HW, I'm stumped.
Christian
On Fri, 9 May 2014 11:01:26 +0900 Christian Balzer wrote:
> On Wed, 7 May 2014 22:13:53 -0700 Gregory Farnum wrote:
>
> > Oh, I didn'
athing room at about 80GB/day writes, so this
is what I will order in the end.
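A rough endurance check behind that choice; the ~140 TB rated write
endurance for a 240GB DC S3500 is an assumed datasheet-style figure, the
80GB/day is from above:
---
# How long ~80 GB/day of journal writes lasts against an assumed
# ~140 TB write endurance rating for a 240GB DC S3500.
endurance_tb = 140
writes_gb_per_day = 80

days = endurance_tb * 1000 / writes_gb_per_day
print(days / 365)  # ~4.8 years of headroom at that write rate
---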
Christian
re 32+ cores.
Christian
>
>
>
> ----- Original Message -----
>
> From: "Christian Balzer"
> To: ceph-users@lists.ceph.com
> Sent: Tuesday, 13 May 2014 11:03:47
> Subject: Re: [ceph-users] Slow IOPS on RBD compared to journal and backing
> devices
>
>
well. ^o^
My beef with the 530 is that it is spiky; you can't really rely on it for
consistent throughput and IOPS.
Christian
> Regards
>
> Mark
>
> On 13/05/14 21:31, Christian Balzer wrote:
> >
> > Hello,
> >
> > No actual question, just some food for
On Tue, 13 May 2014 12:07:12 +0200 Xabier Elkano wrote:
> On 13/05/14 11:31, Christian Balzer wrote:
> > Hello,
> >
> > No actual question, just some food for thought and something that later
> > generations can scour from the ML archive.
> >
> > I'm
rse I made
sure these came from the pagecache of the storage nodes, no disk I/O
reported at all and the CPUs used just 1 core per OSD.
---
fio --size=400m --ioengine=libaio --invalidate=1 --direct=1 --numjobs=1
--rw=read --name=fiojob --blocksize=4k --iodepth=64
---
Christian
>
> Maybe Int
On Tue, 13 May 2014 14:46:23 +0200 Xabier Elkano wrote:
> On 13/05/14 14:23, Christian Balzer wrote:
> > On Tue, 13 May 2014 12:07:12 +0200 Xabier Elkano wrote:
> >
> >> On 13/05/14 11:31, Christian Balzer wrote:
> >>> Hello,
> >>>
> >
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%,
> >=64=0.0% issued : total=r=102400/w=0/d=0, short=r=0/w=0/d=0
>
>
> Run status group 0 (all jobs):
> READ: io=409600KB, aggrb=22886KB/s, minb=22886KB/s, maxb=22886KB/s,
> mint=17897msec, maxt=17897msec
>
>
> Disk st
200 IOPS, cluster wide, no matter how many parallel VMs are running fio...
Christian
> (I remember some tests from Stefan Priebe on this mailing, with a full
> ssd cluster, having almost same results)
>
>
>
> ----- Original Message -----
>
> From: "Alexandre DERUMIER"
uld be interesting indeed.
Given what I've seen (with the journal at 20% utilization and the actual
filestore at around 5%) I'd expect Ceph to be the culprit.
> I'll get back to you with the results, hopefully I'll manage to get them
> done during this night.
>
Looki
as you would expect.
>
> >
> > Thanks for your guys assistance with this.
>
> np, good luck!
>
> >
> >
card has
> a 25PB life expectancy so I'm not terribly worried about it failing on me
> too soon :)
>
Ah yes.
If I were the maker I'd put that information on my homepage in big
friendly letters. ^o^
> Christian Balzer writes:
>
> >
> > On Wed, 14 May 20
On Fri, 16 May 2014 13:51:09 +0100 Simon Ironside wrote:
> On 13/05/14 13:23, Christian Balzer wrote:
> >>> Alas a DC3500 240GB SSD will perform well enough at half the price of
> >>> the DC3700 and give me enough breathing room at about 80GB/day
> >>> writ
and haven't made any dent into my 200GB S3700
yet.
>
> Now we're try to use DC S3710 for journals.
As I wrote a few days ago, unless you go for the 400GB version, the
200GB S3710 is actually slower (for journal purposes) than the 3700, as
sequential write speed is the key factor her
Nick touched on that already; for right now, SSD pools would definitely be
better.
Christian
om.tw/products/system/2U/5028/SSG-5028R-E1CR12L.cfm
(4 100GB DC S3700 will perform better than 2 200GB ones and give you
smaller failure domains at about the same price).
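The rough comparison behind that suggestion, using assumed (datasheet-style,
from memory) sequential write figures of ~200 MB/s for the 100GB DC S3700 and
~365 MB/s for the 200GB model:
---
# Aggregate journal bandwidth: 4x 100GB vs 2x 200GB DC S3700.
# The per-drive sequential write speeds are assumptions, not measurements.
seq_write_mb_s = {"100GB": 200, "200GB": 365}

four_small = 4 * seq_write_mb_s["100GB"]  # 800 MB/s
two_large = 2 * seq_write_mb_s["200GB"]   # 730 MB/s
print(four_small, two_large)
# And losing one of four journal SSDs takes down fewer OSDs than one of two.
---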
Christian
>Any other suggestion/comment?
>
> Thanks a lot!
>
> Best regards
>
> German
>
>
>