6 99
18 01
19 00
---
The first two are HDD OSDs, the second two are SSD OSDs in the cache tier.
And I can assure you that the HDD-based OSDs (which have journal SSDs and are
really RAID10s behind a 4GB
>
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Maxime Guyot
> Sent: Tuesday, March 14, 2017 7:29 AM
> To: Christi
y be tight but 1GB RAM for a combination of
OS _and_ OSD is way too little for my taste and experience.
Christian
> What is the network setup & connectivity between them (hopefully
> 10Gbit).
>
--
Christian Balzer        Network/Systems Engineer
ch...@gol.c
er Z
> * Server C-OSD-1 (i3.large instance):
> * OSD.1 for Cluster Y
> * Server C-OSD-2 (i3.large instance):
> * OSD.1 for Cluster Z
>
>
> Alternative Layout:
> Split, by half, the NVMe storage between 2 OSDs, and provide 3ea OSDs per
> cluster for hi
egraded.
>
>
What you are seeing is not recovery per se (as in Ceph trying to put 2
replicas on the same node), but the result of the one host and its OSDs
being removed from the CRUSH map (marked down and out).
The new CRUSH map of course results in different computations of where PGs
should be placed, so data gets moved around accordingly.
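If that data shuffle is unwanted (e.g. for planned maintenance where the host
comes back shortly), the usual approach is to keep the OSDs from being marked
out in the first place. A rough sketch, assuming you don't leave the flag set
for long:
---
# keep down OSDs from being marked out (and thus rebalanced away from)
ceph osd set noout
# ... do the maintenance, bring the host and its OSDs back up ...
ceph osd unset noout
---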
--
Christian Balzer        Network/Systems Engineer
ch...@gol.com Global OnLine Japan/Rakuten Communications
http://www.gol.com/
ate.com/enterprise-storage/hard-disk-drives/enterprise-capacity-3-5-hdd/
"Proven conventional PMR technology backed by highest field reliability
ratings and an MTBF of 2M hours"
HTH,
Christian
--
Christian Balzer        Network/Systems Engineer
Hello,
On Mon, 27 Mar 2017 16:09:09 +0100 Nick Fisk wrote:
> > -Original Message-
> > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> > Wido den Hollander
> > Sent: 27 March 2017 12:35
> > To: ceph-users@lists.ceph.com; Christian
/27/2017 01:34 PM, Wido den Hollander wrote:
> >
> >> On 27 March 2017 at 13:22, Christian Balzer wrote:
> >>
> >>
> >>
> >> Hello,
> >>
> >> On Mon, 27 Mar 2017 12:27:40 +0200 Mattia Belluco wrote:
> >>
> >>>
t right now. NO code yet out there, but should be
> > there later this year.
>
--
Christian Balzer        Network/Systems Engineer
ch...@gol.com Global OnLine Japan/Rakuten Communications
http://www.gol.com/
ions?
> >
> > thanks.
138
> 4911 17
> 5040 131
> 5115 22
> 5239 137
> 5314 23
> 5436
o a single layer 2 domain and participate in
> > the Clos fabric as a single unit, or scale out across racks (preferred).
> > Why bother with multiple switches in a rack when you can just use multiple
> > racks? That's the beauty of Clos: just add more
build the next unit there w/o much interconnect overhead.
All the directly neighboring racks or rows were already full by the time
we did build the first unit...
Christian
>
> On Apr 23, 2017 7:56 PM, "Christian Balzer" wrote:
>
>
> Hello,
>
> Aaron pretty muc
hines:
> """
> OS: Ubuntu 16.04
> kernel: 4.4.0-75-generic
> librbd: 10.2.7-1xenial
> """
>
> We did try to flush and invalidate the client cache via the ceph admin
> socket,
> but this did not change an
ppreciated...
> >
> > regards,
> > Sam
> >
back.
I'm looking at oversized Samsungs (base model equivalent to 3610s) and am
following this thread for other alternatives.
Christian
--
Christian Balzer        Network/Systems Engineer
ch...@gol.com Global OnLine Japan/Rakuten Communications
http://www.gol.com/
est I could see myself doing in my generic
cluster.
For the 2 mission critical production clusters, they are (will be) frozen
most likely.
Christian
> -Ben
>
> On Wed, May 17, 2017 at 5:30 PM, Christian Balzer wrote:
>
> >
> > Hello,
> >
> > On Wed, 17 May 2017 11:2
--
Christian Balzer        Network/Systems Engineer
ch...@gol.com Rakuten Communications
graphing Ceph performance values with Grafana/Graphite) will be able
to see instantly.
Christian
--
Christian Balzer        Network/Systems Engineer
ch...@gol.com Rakuten Communications
Hello,
On Fri, 2 Jun 2017 14:30:56 +0800 jiajia zhong wrote:
> christian, thanks for your reply.
>
> 2017-06-02 11:39 GMT+08:00 Christian Balzer :
>
> > On Fri, 2 Jun 2017 10:30:46 +0800 jiajia zhong wrote:
> >
> > > hi guys:
> > >
> > > Ou
it.
Otherwise those objects need to be promoted back in from the HDDs, making
things slow again.
Tuning a cache-tier (both parameters and size in general) isn't easy, and
with some workloads it's pretty much impossible to get desirable results.
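For reference, the knobs involved are the usual pool parameters; a minimal
sketch, with "cache" as the cache pool name and all sizes/ratios being
placeholders you would have to fit to your working set:
---
ceph osd pool set cache hit_set_type bloom
ceph osd pool set cache hit_set_count 4
ceph osd pool set cache hit_set_period 1200
ceph osd pool set cache target_max_bytes 500000000000
ceph osd pool set cache cache_target_dirty_ratio 0.5
ceph osd pool set cache cache_target_full_ratio 0.8
ceph osd pool set cache min_read_recency_for_promote 2
---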
Christian
--
Christian Balzer        Network/Systems Engineer
On Mon, 5 Jun 2017 15:32:00 +0800 TYLin wrote:
> Hi Christian,
>
> Thanks for you quick reply.
>
>
> > On Jun 5, 2017, at 2:01 PM, Christian Balzer wrote:
> >
> >
> > Hello,
> >
> > On Mon, 5 Jun 2017 12:25:25 +0800 TYLin wrote:
> &g
4887 0 5269G 28
> default.rgw.users.keys 7144 0 5269G 16
> default.rgw.buckets.index 9 0 0 5269G 14
> default.rgw.buckets.non-ec 10 0 0
On Tue, 6 Jun 2017 10:25:38 +0800 TYLin wrote:
> > On Jun 5, 2017, at 6:47 PM, Christian Balzer wrote:
> >
> > Personally I avoid odd numbered releases, but my needs for stability
> > and low update frequency seem to be far off the scale for "normal" Ceph
>
n't know, as
they never got posted).
I was reciting from Hammer times, where this was the default case.
Christian
> On Mon, 5 Jun 2017, 23:26, TYLin wrote:
>
> > On Jun 5, 2017, at 6:47 PM, Christian Balzer wrote:
> >
> > Personally I avoid odd numbered relea
makes you
feel better, do the cluster and public networks on separate VLANs.
But that will cost you in not-so-cheap switch ports, of course.
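In ceph.conf terms that's no more than the following (a sketch, the subnets
are made up):
---
[global]
public network  = 192.168.10.0/24
cluster network = 192.168.20.0/24
---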
Christian
> If there is a more appropriate venue for my request, please point me in
> that direction.
>
> Thanks,
> Dan
--
Christian Balzer        Network/Systems Engineer
That's less than elegant of course.
Christian
>
> On Mon, Jun 5, 2017 at 11:54 PM, Christian Balzer wrote:
>
> >
> > Hello,
> >
> > On Tue, 06 Jun 2017 02:35:25 + Webert de Souza Lima wrote:
> >
> > > I'd like to add that, from all tests I did,
waste. Might be worth getting a second IB
> card for each server.
>
If you're happy with getting old ones, I hear they can be found quite
cheap.
Christian
>
>
> Again, thanks a million for the advice. I'd rather learn this the easy way
> than to have to rebuild this 6 time
Is there
any real reason for the above scaremongering, or is that based solely on
lack of testing/experience?
Christian
--
Christian Balzer        Network/Systems Engineer
ch...@gol.com Rakuten Communications
On Thu, 8 Jun 2017 14:21:43 +1000 Brad Hubbard wrote:
> On Thu, Jun 8, 2017 at 1:06 PM, Christian Balzer wrote:
> >
> > Hello,
> >
> > New cluster, Jewel, setting up cache-tiering:
> > ---
> > Error EPERM: 'readforward' is not a well-supported
On Thu, 8 Jun 2017 15:29:05 +1000 Brad Hubbard wrote:
> On Thu, Jun 8, 2017 at 3:10 PM, Christian Balzer wrote:
> > On Thu, 8 Jun 2017 14:21:43 +1000 Brad Hubbard wrote:
> >
> >> On Thu, Jun 8, 2017 at 1:06 PM, Christian Balzer wrote:
> >> >
> >&
On Thu, 8 Jun 2017 17:03:15 +1000 Brad Hubbard wrote:
> On Thu, Jun 8, 2017 at 3:47 PM, Christian Balzer wrote:
> > On Thu, 8 Jun 2017 15:29:05 +1000 Brad Hubbard wrote:
> >
> >> On Thu, Jun 8, 2017 at 3:10 PM, Christian Balzer wrote:
> >> > On Thu, 8
On Thu, 8 Jun 2017 07:06:04 -0400 Alfredo Deza wrote:
> On Thu, Jun 8, 2017 at 3:38 AM, Christian Balzer wrote:
> > On Thu, 8 Jun 2017 17:03:15 +1000 Brad Hubbard wrote:
> >
> >> On Thu, Jun 8, 2017 at 3:47 PM, Christian Balzer wrote:
> >> > On Thu, 8
n Fri, 9 Jun 2017 11:45:46 +0900 Christian Balzer wrote:
> On Thu, 8 Jun 2017 07:06:04 -0400 Alfredo Deza wrote:
>
> > On Thu, Jun 8, 2017 at 3:38 AM, Christian Balzer wrote:
> > > On Thu, 8 Jun 2017 17:03:15 +1000 Brad Hubbard wrote:
> > >
> > >> O
Hello,
can we have the status and projected release date of the Ceph packages for
Debian Stretch?
Christian
--
Christian Balzer        Network/Systems Engineer
ch...@gol.com Rakuten Communications
ll.
Christian
>
>
> - Original Message -
> From: "Alfredo Deza"
> To: "Christian Balzer"
> Cc: "ceph-users"
> Sent: Tuesday, 20 June 2017 18:54:05
> Subject: Re: [ceph-users] Ceph packages for Debian Stretch?
>
> On Mon, Jun 19, 2017 at
Hello,
On Wed, 21 Jun 2017 11:15:26 +0200 Fabian Grünbichler wrote:
> On Wed, Jun 21, 2017 at 05:30:02PM +0900, Christian Balzer wrote:
> >
> > Hello,
> >
> > On Wed, 21 Jun 2017 09:47:08 +0200 (CEST) Alexandre DERUMIER wrote:
> >
> > > Hi,
ung PM1725a 1.6TB
seems to be a) cheaper and b) at 2GB/s write speed more likely to be
suitable for double duty.
Similar (slightly better on paper) endurance than the P4600, so keep that
in mind, too.
Christian
--
Christian Balzer        Network/Systems Engineer
ch...@gol.com
Hello,
Hmm, gmail client not grokking quoting these days?
On Wed, 21 Jun 2017 20:40:48 -0500 Brady Deetz wrote:
> On Jun 21, 2017 8:15 PM, "Christian Balzer" wrote:
>
> On Wed, 21 Jun 2017 19:44:08 -0500 Brady Deetz wrote:
>
> > Hello,
> > I'm expandin
the help and time.
> On 30 Nov 2015, at 09:53, MATHIAS, Bryn (Bryn) <bryn.math...@alcatel-lucent.com> wrote:
>
>
> On 30 Nov 2015, at 14:37, MATHIAS, Bryn (Bryn) <bryn.math...@alcatel-lucent.com> wrote:
>
> Hi,
> On 30 Nov 20
On Thu, 10 Dec 2015 09:11:46 +0100 Dan van der Ster wrote:
> On Thu, Dec 10, 2015 at 5:06 AM, Christian Balzer wrote:
> >
> > Hello,
> >
> > On Wed, 9 Dec 2015 15:57:36 + MATHIAS, Bryn (Bryn) wrote:
> >
> >> to update this, the error looks like it
h (ceph-osd process) writing to the disk then the
question is indeed what causes that.
I have no experience with CephFS, but a quiescent RBD volume (mounted VM
image) does not produce I/O by itself.
You say that you're not running any application, but is the CephFS mounted
somewhere?
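A quick way to check for lingering CephFS clients from the cluster side,
assuming you have access to the admin socket of the active MDS, is something
like:
---
ceph daemon mds.<name> session ls
---
plus a plain "mount | grep ceph" on any machine you suspect.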
ph/tmp/mnt.FASof5/ceph_fsid.126649.tmp
> >> [kvsrv02][WARNIN] INFO:ceph-disk:Running command: /sbin/restorecon -R
> >> /var/lib/ceph/tmp/mnt.FASof5/fsid.126649.tmp
> >> [kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/chown -R
> >> ceph:ceph /var/lib/c
= 5
> mon allow pool delete = false
> mon osd down out subtree limit = host
> mon osd min down reporters = 4
>
> [mon.ceph01]
> host = ceph01
> mon addr = 10.0.2.101
>
> [mon.ceph02]
> host = ceph02
> mon addr = 10.0.2.102
>
> [mon.ceph03]
it safe?
2. Does anybody have experiences/performance numbers to compare the
benefits of io=native versus io=threads with Ceph/RBD?
Regards,
Christian
--
Christian Balzer        Network/Systems Engineer
ch...@gol.com Global OnLine Japan/Rakuten Communications
http
--
Christian Balzer        Network/Systems Engineer
ch...@gol
},
> >> { "time": "2016-01-06 08:17:11.687016",
> >>"event": "sub_op_commit_rec"},
> >>
> >> 2) spend more than 3 seconds in writeq and 2 seconds to write the
> >> jou
Hello,
just in case I'm missing something obvious, there is no reason a pool
aptly called "ssd" can't be used simultaneously as a regular RBD pool and
for cache tiering, right?
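For context, what I have in mind is nothing more than the standard tiering
commands on top of also putting RBD images directly into "ssd"; a sketch,
with "rbd" as the backing pool:
---
ceph osd tier add rbd ssd
ceph osd tier cache-mode ssd writeback
ceph osd tier set-overlay rbd ssd
---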
Regards,
Christian
--
Christian Balzer        Network/Systems Engineer
Christian
> This cluster will probably run Hammer LTS unless there are huge
> improvements in Infernalis when dealing 4k IOPS.
>
> The first link above hints at awesome performance. The second one from
> the list not so much yet..
>
> Is anyone running Hammer or Infer
sanity checks, if you don't mind.
Regards,
Christian
--
Christian Balzer        Network/Systems Engineer
ch...@gol.com Global OnLine Japan/Rakuten Communications
http://www.gol.com/
how to further investigate this
Regards,
Christian
--
Christian Balzer        Network/Systems Engineer
ch...@gol.com Global OnLine Japan/Rakuten Communications
http://www.gol.com/
he cache pool is done to fix it and set it
> >> back to writeback?
> >>
> >> This way we can get away with a pool size of 2 without worrying for
> >> too much downtime!
> >>
> >> Hope i was explicit enough!
> >
n.ridewithgps.com/ridewithgps.com/drexler.ridewithgps.com/index.html#disk
> http://munin.ridewithgps.com/ridewithgps.com/paley.ridewithgps.com/index.html#disk
> http://munin.ridewithgps.com/ridewithgps.com/lucy.ridewithgps.com/index.html#disk
>
>
> Any input would be appreciated, be
On Wed, 3 Feb 2016 22:42:32 -0700 Robert LeBlanc wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA256
>
>
> On Wed, Feb 3, 2016 at 9:00 PM, Christian Balzer wrote:
> > On Wed, 3 Feb 2016 16:57:09 -0700 Robert LeBlanc wrote:
>
> > That's an interesting
Hello,
On Thu, 4 Feb 2016 08:44:25 -0800 Cullen King wrote:
> Replies in-line:
>
> On Wed, Feb 3, 2016 at 9:54 PM, Christian Balzer
> wrote:
>
> >
> > Hello,
> >
> > On Wed, 3 Feb 2016 17:48:02 -0800 Cullen King wrote:
> >
> > > He
On Thu, 4 Feb 2016 21:33:10 -0700 Robert LeBlanc wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA256
>
> On Thu, Feb 4, 2016 at 8:32 PM, Christian Balzer wrote:
> > On Wed, 3 Feb 2016 22:42:32 -0700 Robert LeBlanc wrote:
>
> > I just finished downgrading my t
ot default
-2 8.0 host irt04
3 2.0 osd.3 up 1.0 1.0
. . .
---
Regards,
Christian
--
Christian Balzer        Network/Systems Engineer
ch...@gol.com Global OnLine Jap
I've read through some older mails on the list, where people had similar
> problems and suspected something like that.
>
Any particular references (URLs, Message-IDs)?
Regards,
Christian
> What are the proper/right settings for rbd/qemu/libvirt?
>
> libvirt: cachemode=no
- RAID1)
> - 2x 200GB SSD (PERC H730)
> - 14x 6TB NL-SAS (PERC H730)
> - 12x 4TB NL-SAS (PERC H830 - MD1400)
>
>
> Please let me know if you want any more info.
>
> In my experience thus far, I've found this ratio is not useful for cache
> tiering etc -
ain on the restarted OSDs when recovering those operations (which I
also saw) I was prepared for; the near screeching halt, not so much.
Any thoughts on how to mitigate this further or is this the expected
behavior?
Christian
--
Christian Balzer        Network/Systems Engineer
ch.
On Fri, 12 Feb 2016 15:56:31 +0100 Burkhard Linke wrote:
> Hi,
>
> On 02/12/2016 03:47 PM, Christian Balzer wrote:
> > Hello,
> >
> > yesterday I upgraded our most busy (in other words lethally overloaded)
> > production cluster to the latest Firefly in prepara
>
>
> -Original Message-
> From: Robert LeBlanc [mailto:rob...@leblancnet.us]
> Sent: Friday, February 12,
Hello,
On Sat, 13 Feb 2016 11:14:23 +0100 Lionel Bouton wrote:
> On 13/02/2016 06:31, Christian Balzer wrote:
> > [...]
> > ---
> > So from shutdown to startup about 2 seconds, not that bad.
> >
> > However here is where the cookie crumbles massively:
> > ---
25-30
> > primary pgs/osd range, and restarting osds, or whole nodes (24-32 OSDs
> > for us) no longer causes us pain. In the past restarting a node could
> > cause 5-10 minutes of peering and pain/slow requests/unhappiness of
> > various sorts (RAM exhaustion, OOM Killer, Fla
s on how to do that), I asked about mitigating the
impact of OSD restarts when they are going to be slow, for whatever reason.
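For the archives, the closest thing to a generic mitigation I'm aware of is
throttling recovery/backfill around the restart, something like the below
(the values are conservative placeholders):
---
ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1 --osd-recovery-op-priority 1'
---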
Regards,
Christian
>
>
>
>
> On Sat, Feb 13, 2016 at 11:08 AM, Lionel Bouton <
> lionel-subscript...@bouton.name> wrote:
>
> > Hi,
>
--
Christian Balzer        Network/Systems Engineer
ch...@gol.com Global
ed
> pg/osd settings. Additionally that ratio/variance has been the same
> regardless of the number of pgs/osd. Meaning it started out bad, and
> stayed bad but didn't get worse as we added osds. We've had to reweight
> osds in our crushmap to get anything close to a sane
d performance make it not an attractive
proposition.
Christian
--
Christian Balzer        Network/Systems Engineer
ch...@gol.com Global OnLine Japan/Rakuten Communications
http://www.gol.com/
64 Packages
> 100 /var/lib/dpkg/status
>
>
--
Christian Balzer        Network/Systems Engineer
s, iops=5092, runt=33msec
>
>
> —
> Swapnil Jain | swap...@linux.com
> Solution Architect & Red Hat Certified Instructor
> RHC{A,DS,E,I,SA,SA-RHOS,VA}, CE{H,I}, CC{DA,NA}, MCSE, CNE
>
>
--
Christian Balzer        Network/Systems Engineer
ant (i.e. GHz x cores). Go with the cheapest/most power
> > efficient you can get. Aim for somewhere around 1 GHz per disk.
> >
> >> 4) What SSDs for Ceph journals are the best?
> >
> > Intel S3700 or P3700 (if you can stretch)
> >
&
y?)
I don't think you'll be happy with the resulting performance.
Christian.
> So, is the above correct, or am I missing some pieces here? Any other
> major differences between the two approaches?
>
> Thanks.
> P.
--
Christian Balzer        Network/Systems Engineer
will wear out long before
it has paid for itself.
Christian
> Third, it seems that I am also running into the known "Lots Of Small
> Files" performance issue. Looks like performance in my use case will be
> drastically improved with the upcoming bluestore, though migrating to it
al OSDs have done their writing.
>(But
> still both of those have to be completed before the write operation
> returns to the client).
>
See above, eventually, kind-a-sorta.
>
>
> > (Which SSDs do you plan to use anyway?)
> >
>
> Intel DC S3700
>
--
Christian Balzer        Network/Systems Engineer
ch...@gol.com Global OnLine Japan/Rakuten Communications
http://www.gol
Hello,
On Wed, 17 Feb 2016 09:23:11 - Nick Fisk wrote:
> > -Original Message-
> > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf
> > Of Christian Balzer
> > Sent: 17 February 2016 04:22
> > To: ceph-users@lists.ceph.com
> >
Hello,
On Wed, 17 Feb 2016 13:44:17 + Ed Rowley wrote:
> On 17 February 2016 at 12:04, Christian Balzer wrote:
> >
> > Hello,
> >
> > On Wed, 17 Feb 2016 11:18:40 + Ed Rowley wrote:
> >
> >> Hi,
> >>
> >> We have been running
Hello,
On Wed, 17 Feb 2016 07:00:38 -0600 Mark Nelson wrote:
> On 02/17/2016 06:36 AM, Christian Balzer wrote:
> >
> > Hello,
> >
> > On Wed, 17 Feb 2016 09:23:11 - Nick Fisk wrote:
> >
> >>> -Original Message-
> >>> From:
Hello,
On Wed, 17 Feb 2016 21:47:31 +0530 Swapnil Jain wrote:
> Thanks Christian,
>
>
>
> > On 17-Feb-2016, at 7:25 AM, Christian Balzer wrote:
> >
> >
> > Hello,
> >
> > On Mon, 15 Feb 2016 21:10:33 +0530 Swapnil Jain wrote:
> >
&
Hello,
On Wed, 17 Feb 2016 09:19:39 - Nick Fisk wrote:
> > -Original Message-
> > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf
> > Of Christian Balzer
> > Sent: 17 February 2016 02:41
> > To: ceph-users
> > Subject:
2
> osd/nmz-2/current/16.22_head/rbd\\udata.1a72a39011461.0001__head_A7E34AA2__10
>
>
> File is still corrupted.
>
>
> So my questions are:
>
> 1. How to make full OSD scrub not part of it.
Only primary PGs are compared to their secondaries.
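If you want to force a check of everything a particular OSD holds, something
along these lines should do (osd.0 is a placeholder, 16.22 the PG from your
example above):
---
# ask osd.0 to deep-scrub the PGs it is primary for
ceph osd deep-scrub 0
# or a single placement group
ceph pg deep-scrub 16.22
# and, once you trust the surviving copies, repair it
ceph pg repair 16.22
---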
mething is done.
If you do a "watch ceph -s", do you see at least some recovery activity at
this point?
The 3 scrub errors wouldn't fill me with confidence either.
Lastly, for the duration of the backfill, I'd turn off scrubs to improve
the speed (and lessen the performance impact) of the recovery.
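That is, something like:
---
ceph osd set noscrub
ceph osd set nodeep-scrub
# and once the backfill has finished:
ceph osd unset noscrub
ceph osd unset nodeep-scrub
---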
; mint=300018msec, maxt=300018msec
>
> Run status group 2 (all jobs):
>READ: io=12892MB, aggrb=44002KB/s, minb=44002KB/s, maxb=44002KB/s,
> mint=32msec, maxt=32msec
>
> Run status group 3 (all jobs):
>READ: io=3961.9MB, aggrb=13520KB/s, minb=13520KB/s, maxb=13520KB/s,
> mint=30
Hello,
On Tue, 23 Feb 2016 22:49:44 +0900 Christian Balzer wrote:
[snip]
> > 7 nodes(including 3 mons, 3 mds).
> > 9 SATA HDD in every node and each HDD as an OSD&journal(deployed by
> What replication, default of 3?
>
> That would give the theoretical IOPS of 21
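For reference, the usual back-of-the-envelope math for a setup like this
(assuming roughly 100 IOPS per 7.2k SATA HDD and, since the journals live on
the same disks, another halving on top):
---
7 nodes x 9 HDDs         = 63 OSDs
63 x ~100 IOPS           = ~6300 IOPS raw
/ 3 (replication)        = ~2100 write IOPS
/ 2 (co-located journal) = ~1050 sustained 4KB write IOPS, cluster-wide
---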
016-02-24
> >>
> >> Esta Wang
> >>
> >>
s of the recency fix (thanks for
that) being worth the risk of being an early adopter of 0.94.6?
In other words, are you eating your own dog food already and 0.94.6 hasn't
eaten your data babies yet? ^o^
Regards,
Christian
--
Christian Balzer        Network/Systems Engineer
are Intel DC S3700s.
That's aside from the point above if they're suitable for Ceph or not.
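The usual quick suitability check for journal (sync write) use is a
single-job fio run, roughly like the below; /dev/sdX is a placeholder and
the test overwrites the device, so only point it at a disk holding no data
you need:
---
fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k \
    --numjobs=1 --iodepth=1 --runtime=60 --time_based \
    --group_reporting --name=journal-test
---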
Christian
--
Christian Balzer        Network/Systems Engineer
ch...@gol.com Global OnLine Japan/Rakuten Communications
http://www.gol.com/
ly applies when planning and buying SSDs.
And where the Ceph code could probably do with some attention.
Regards,
Christian
--
Christian Balzer        Network/Systems Engineer
ch...@gol.com Global OnLine Japan/Rakuten Communications
http://www.gol.com/
_
t, as I seem to have found
another bug, more on that later if I can confirm it.
Right now I'm rebooting my entire test cluster to make sure this isn't a
residual effect from doing multiple upgrades w/o ever rebooting nodes.
Christian
> Sent from a mobile device, please excuse any typos.
&
" nor
"min_write_recency_for_promote" are present in the latest Hammer.
The first one fills me with dread when thinking about the potential I/O
storms when the cache gets full, the latter removes some of the reasons
why I would want to go to 0.94.6 for the recency fix.
Regards,
Christian
--
Christian B
On Thu, 25 Feb 2016 13:44:30 - Nick Fisk wrote:
>
>
> > -Original Message-
> > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf
> > Of Jason Dillaman
> > Sent: 25 February 2016 01:30
> > To: Christian Balzer
> > Cc:
r an upgrade to Hammer or later on the client side a restart of the VM
would allow the header object to be evicted, while the header objects for
VMs that have been running since the dawn of time can not.
Correct?
This would definitely be better than having to stop the VM, flush things
and then start it again.
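For the archives, the blunt whole-pool variant (a sketch, assuming a cache
pool named "cache-ssd") would be:
---
rados -p cache-ssd cache-flush-evict-all
---
but as far as I know objects with active watchers, like the header object of
a running VM, will refuse to be evicted, which is exactly why the client
restart is needed.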
fully utilized as I am
seeing below?
Christian
> Sent from a mobile device, please excuse any typos.
> On Feb 24, 2016 8:10 PM, "Christian Balzer" wrote:
>
> >
> > Hello,
> >
> > For posterity and of course to ask some questions, here are my
> >
.
> > Does anyone have any list of SSDs describing which SSD is highly
> > recommended, which SSD is not.
> >
> > Rgds,
> > Shinobu
Hello,
On Fri, 26 Feb 2016 15:59:51 +1100 Nigel Williams wrote:
> On Fri, Feb 26, 2016 at 3:10 PM, Christian Balzer wrote:
>
> > Then we come to a typical problem for fast evolving SW like Ceph,
> > things that are not present in older versions.
>
>
> I was going
d do a better job than just spamming PRs, which tend to be
forgotten/ignored by the already overworked devs.
Christian
> --
> Adam
>
> On Thu, Feb 25, 2016 at 10:59 PM, Nigel Williams
> wrote:
> > On Fri, Feb 26, 2016 at 3:10 PM, Christian Balzer
> > wrote:
> >
silently fixed ^o^) of my previous one:
http://tracker.ceph.com/issues/14873
Christian
--
Christian Balzer        Network/Systems Engineer
ch...@gol.com Global OnLine Japan/Rakuten Communications
http://www.gol.com/