o the number Micron are reporting on NVMe?
>
> Thanks a lot.
>
> [0]
> https://www.micron.com/-/media/client/global/documents/products/other-documents/micron_9200_max_ceph_12,-d-,2,-d-,8_luminous_bluestore_reference_architecture.pdf?la=en
>
ance ever you could also permanently set noout
and nodown and live with the consequences and warning state.
But of course everybody will (rightly) tell you that you need enough
capacity to at the very least deal with a single OSD loss.
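For reference, a minimal sketch of setting (and later clearing) those flags with
the standard CLI; the cluster sits in HEALTH_WARN for as long as they are set,
which is the warning state mentioned above:

  ceph osd set noout      # stop OSDs from being marked out automatically
  ceph osd set nodown     # stop OSDs from being marked down automatically
  ceph osd unset noout    # revert when done
  ceph osd unset nodown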
Christian
> >> On 3/5/19 3:49 AM, Darius Kasparavičius wrote:
> >>> Hello,
> >>>
> >>>
> >>> I was thinking of using AMD based system for my new nvme based
> >>> cluster. In particular I'm looking at
> >>> https://www.supermi
133 SControl 300)
[54954737.206133] ata5.00: configured for UDMA/133
[54954737.206140] ata5: EH complete
---
considered.
>
> Is the penalty for a too small DB on an SSD partition so severe that
> it's not worth doing?
>
> Thanks,
> Erik
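A hedged sketch of how to check for the main penalty, RocksDB spilling over onto
the slow device once the DB partition fills (osd.0 is just an example ID, and the
spillover health warning assumes a Nautilus-era release):

  ceph health detail | grep -i spillover
  ceph daemon osd.0 perf dump bluefs | grep -E 'db_used_bytes|slow_used_bytes'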
uce things down to
the same risk as a 3x replica pool.
Feedback welcome.
Christian
es.
>
I'm happy that somebody else spotted this. ^o^
Regards,
Christian
> > What is the known maximum cluster size that Ceph RBD has been deployed to?
>
> See above.
red" semi-automatically with ceph pg repair?
>
> What and how would happen in case erasure coded pool's data was found
> to be damaged as well?
>
> --
> End of message. Next message?
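A hedged sketch of the usual workflow around scrub errors (pool name and PG id
are placeholders):

  rados list-inconsistent-pg <pool>                        # which PGs have inconsistencies
  rados list-inconsistent-obj <pgid> --format=json-pretty  # what exactly is inconsistent
  ceph pg repair <pgid>                                    # ask the primary to repair it

For erasure-coded pools the repair is reconstructed from the surviving shards,
provided enough of them are still intact.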
Hello Hector,
Firstly I'm so happy somebody actually replied.
On Tue, 2 Apr 2019 16:43:10 +0900 Hector Martin wrote:
> On 31/03/2019 17.56, Christian Balzer wrote:
> > Am I correct that unlike with replication there isn't a maximum size
> > of the critical path
On Tue, 2 Apr 2019 19:04:28 +0900 Hector Martin wrote:
> On 02/04/2019 18.27, Christian Balzer wrote:
> > I did a quick peek at my test cluster (20 OSDs, 5 hosts) and a replica 2
> > pool with 1024 PGs.
>
> (20 choose 2) is 190, so you're never going to have more than 190 distinct OSD pairs.
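A quick way to check both the bound and the reality on a cluster (a sketch;
math.comb needs Python 3.8+, and the column layout of "pg dump" differs slightly
between releases):

  # C(20,2) = 190 possible OSD pairs for replica 2 on 20 OSDs
  python3 -c 'from math import comb; print(comb(20, 2))'
  # distinct "up" sets actually used by the PGs
  ceph pg dump pgs_brief 2>/dev/null | awk '/^[0-9]+\./ {print $3}' | sort -u | wc -l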
> I would assume then that unlike what documentation says, it's safe to
> > run 'reshard stale-instances rm' on a multi-site setup.
> >
> > However it is quite telling if the author of this feature doesn't
> > trust what they have written to work correctly.
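For reference, the commands in question (present in recent radosgw-admin
releases):

  radosgw-admin reshard stale-instances list   # bucket index instances left behind by resharding
  radosgw-admin reshard stale-instances rm     # the variant the documentation warns against on multisite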
week" situation like experienced
with several people here, you're even more like to wind up in trouble very
fast.
This is of course all something people know (or should know); I'm more
wondering how to model it to correctly assess risks.
Christian
On Wed, 3 Apr 2019 10:28:09 +0900 Ch
Hello,
On Wed, 10 Apr 2019 20:09:58 +0200 Paul Emmerich wrote:
> On Wed, Apr 10, 2019 at 11:12 AM Christian Balzer wrote:
> >
> >
> > Hello,
> >
> > Another thing that crossed my mind aside from failure probabilities caused
> > by actual HDDs dying i
ou're using
object store), how busy those disks and CPUs are, etc.
That kind of information will be invaluable for others here and likely the
developers as well.
Regards,
Christian
> Kind regards,
>
> Charles Alva
> Sent from Gmail Mobile
But only completely so if everything is in the same boat.
So if your clients (or at least most of them) can be on 25Gb/s as well,
that would be the best situation, with a non-split network.
Christian
>
> >
> > My 2 cents,
> >
> > Gr. Stefan
> >
>
> Cheers,
&
are you probably want to reduce
> > recovery speed anyway if you run into that limit
> >
> > Paul
> >
>
> Lars
On Wed, 17 Apr 2019 16:08:34 +0200 Lars Täuber wrote:
> Wed, 17 Apr 2019 20:01:28 +0900
> Christian Balzer ==> Ceph Users :
> > On Wed, 17 Apr 2019 11:22:08 +0200 Lars Täuber wrote:
> >
> > > Wed, 17 Apr 2019 10:47:32 +0200
> > > Paul Emmerich ==
I'm a little bit confused now. I expected to get different results when using
> different pool images, but I don't; they give the same performance, although
> we're really sure that we already separated the SSD and HDD pools in the
> crushmap.
>
> My question is :
>
> 1. W
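One hedged way to verify that the SSD/HDD split is actually in effect is to look
at the device classes and at the rule each pool really uses (pool name is a
placeholder):

  ceph osd tree                             # the CLASS column should show ssd or hdd per OSD
  ceph osd crush rule ls                    # the rules that exist
  ceph osd pool get <poolname> crush_rule   # the rule a given pool is mapped to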
jessie has 1 Packages. No ceph package found
> stretch has 1 Packages. No ceph package found
>
> If you want to re-run these tests, the attached hacky shell script does it.
>
> Regards,
>
> Matthew
>
>
>
On Thu, 25 Jul 2019 13:49:22 +0900 Sangwhan Moon wrote:
> osd: 39 osds: 39 up, 38 in
You might want to find that 'out' OSD.
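A quick way to spot it (a sketch; an "out" OSD shows REWEIGHT 0 in the tree,
though exact columns vary a little between releases):

  ceph osd tree | awk '/osd\./ && $(NF-1) == 0'
  # or just eyeball the REWEIGHT column of
  ceph osd df tree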
Reads from a hot cache with direct=0
read: IOPS=199, BW=797MiB/s (835MB/s)(32.0GiB/41130msec)
with direct=1
read: IOPS=702, BW=2810MiB/s (2946MB/s)(32.0GiB/11662msec)
Which is as fast as it gets with this setup.
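Roughly the fio invocation behind these numbers, reconstructed from the output
(4M blocks, 32 GiB total); the target device and iodepth are assumptions:

  fio --name=cachedread --ioengine=libaio --rw=read --bs=4M --size=32G \
      --iodepth=16 --direct=0 --filename=/dev/rbd0
  # repeat with --direct=1 to bypass the page cache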
Comments?
Christian
Hello,
On Sun, 4 Aug 2019 06:34:46 -0500 Mark Nelson wrote:
> On 8/4/19 6:09 AM, Paul Emmerich wrote:
>
> > On Sun, Aug 4, 2019 at 3:47 AM Christian Balzer wrote:
> >
> >> 2. Bluestore caching still broken
> >> When writing data with the fios below, it
labor intensive and a nuisance for real users) as
well as harsher ingress and egress (aka spamfiltering) controls you will
find that all the domains spamvertized are now in the Spamhaus DBL.
"host abbssm.edu.in.dbl.spamhaus.org"
Pro tip for spammers:
Don't get my attention, ever.
Ch
c: 5517.19 bytes/sec:
> 45196784.26 (45MB/sec) => WHY JUST 45MB/sec?
>
> Since i ran those rbd benchmarks in ceph01, i guess the problem is not
> related to my backup rbd mount at all?
>
> Thanks,
> Mario
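The in-cluster figures quoted above look like "rbd bench" output; a hedged
sketch of reproducing them (pool and image names are placeholders, and older
releases spell it "rbd bench-write"):

  rbd bench --io-type write --io-size 4M --io-total 1G <pool>/<image>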
2 sec at 164MiB/sec 41
> IOPS
> osd.27: bench: wrote 1GiB in blocks of 4MiB in 7.00978 sec at 146MiB/sec 36
> IOPS
> osd.32: bench: wrote 1GiB in blocks of 4MiB in 6.38438 sec at 160MiB/sec 40
> IOPS
>
> Thanks,
> Mario
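The per-OSD figures above match the defaults of the built-in OSD bench
(1 GiB written in 4 MiB blocks); a sketch:

  ceph tell osd.27 bench      # a single OSD
  ceph tell osd.\* bench      # all OSDs in one go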
>
>
>
> On Tue, Dec 24, 2019 at 1:46 A
> random write. No high CPU load/interface saturation is noted when running
> tests against the rbd.
>
>
>
> When testing with a 4K block size against an RBD on a dedicated metal test
> host (same specs as other cluster nodes noted above) I get the following
> (c