le to
> > protect the volume in this scenario?
> >
> >
> full error on cache tier, I was thinking that only one of the pools would stop
> and the other, without a cache tier, should still work.
>
Once you activate a cache-tier it becomes, for all intents and purposes,
the pool it's caching for.
So any problem with it will be fatal.
Christian
--
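For context, a cache tier is attached as the overlay of its base pool, so every
client request is routed through the cache pool. A minimal sketch of that wiring,
with made-up pool names:

    # attach a cache pool in front of a base pool
    ceph osd tier add rbd rbd-cache
    ceph osd tier cache-mode rbd-cache writeback
    # redirect all client I/O for "rbd" through "rbd-cache"
    ceph osd tier set-overlay rbd rbd-cache

Once the overlay is set, clients no longer address the base pool directly, which
is why a broken cache pool takes the base pool down with it.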
running ceph 0.80.9 and have a cluster of 126
> > OSDs with only 64 pgs allocated to the pool. As a result, 2 OSDs are now
> > 88% full, while the pool is only showing as 6% used.
> >
> > Based on my understanding, this is clearly a placement problem, so the
> > plan
es just fine.
>
> I know that the messages pop up due to a version mismatch, but is there any
> way to suppress them?
>
> Wido
essive on the
> >higher end with multiple threads, but figured for most of our nodes with
> >4-6 OSDs the Intels were a bit more proven and had better "light-medium"
> >load numbers.
> >
> >Carlos M. Perez
> >CMP Consulting Services
> >305-6
urnal but a write cache during operation. I had that
> > kind of configuration with 1 SSD for 20 SATA HDDs. With a Ceph bench,
> > I noticed that my rate was limited to between 350 and 400 MB/s. In fact,
> > an iostat showed me that my SSD was 100% utilised with a rate of 350-40
sure if I can delete the fs and re-create it using the
> > existing
> > > data pool and the cloned metadata pool.
> > >
> > > Thank you.
> > >
> > >
> > > Zhang Di
> > >
>
Hello,
On Tue, 12 Jul 2016 11:01:30 +0200 Mateusz Skała wrote:
> Thank You for replay. Answers below.
>
> > -Original Message-
> > From: Christian Balzer [mailto:ch...@gol.com]
> > Sent: Tuesday, July 12, 2016 3:37 AM
> > To: ceph-users@lists.ceph.com
>
but read the recent filestore merge and
split threads, including the entirety of this thread:
https://www.mail-archive.com/ceph-users@lists.ceph.com/msg29243.html
Christian
> Thanks for the hints.
>
> On Tue, Jul 12, 2016 at 8:19 PM, Christian Balzer wrote:
>
> >
> > Hello,
d
> > Total time run:       10.546251
> > Total reads made:     293
> > Read size:            4194304
> > Object size:          4194304
> > Bandwidth (MB/sec):   111.13
> > Average IOPS:         27
> > Stddev IOPS:          2
> > Max IOPS:             32
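Output of that shape is what rados bench prints; a hedged example of producing
it, assuming a pool named "rbd" that you are allowed to write benchmark objects
into:

    # write phase, keeping the objects so they can be read back
    rados bench -p rbd 30 write --no-cleanup
    # sequential read phase, which prints a summary like the one quoted above
    rados bench -p rbd 10 seq
    # remove the benchmark objects afterwards
    rados -p rbd cleanup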
This is especially important if you were to run a MON on those machines as
well.
Christian
> Thanks,
> Ashley
>
> -Original Message-
> From: Christian Balzer [mailto:ch...@gol.com]
> Sent: 13 July 2016 01:12
> To: ceph-users@lists.ceph.com
> Cc: Wido den Hollander
| erro 0/s | drpo 0/s |
>
> /dev/sda is the OS and journaling SSD. The other three are OSDs.
>
> Am I missing anything?
>
> Thanks,
>
>
>
>
> Zhang, Di
> Postdoctoral Associate
> Baylor College of Medicine
Hello,
On Thu, 14 Jul 2016 13:37:54 +0200 Steffen Weißgerber wrote:
>
>
> >>> Christian Balzer wrote on Thursday, 14 July 2016 at
> 05:05:
>
> Hello,
>
> > Hello,
> >
> > On Wed, 13 Jul 2016 09:34:35 + Ashley Merrick wrote:
slow requests, but they cleared up very quickly and
definitely did not require any of the OSDs to be brought back up.
Christian
s which we wanted to add to the Jewel
> > cluster.
> > >
> > > Using Salt and ceph-disk we ran into a partprobe issue in
> > combination with ceph-disk. There was already a Pull Request for
> > the fix, but that was not included in Jewel 10.2.2.
ove is not a bug.
Christian
>
> So you can obviously ignore the ceph --show-config command. It's simply
> not working correctly.
>
>
Hello,
On Tue, 19 Jul 2016 15:15:55 +0200 Mateusz Skała wrote:
> Hello,
>
> > -Original Message-
> > From: Christian Balzer [mailto:ch...@gol.com]
> > Sent: Wednesday, July 13, 2016 4:03 AM
> > To: ceph-users@lists.ceph.com
> > Cc: Mateusz Skała
g 6.4 (6.4) -> up [53,24,6] acting [53,6,26]
>
> # ceph pg map 5.24
> osdmap e1054 pg 5.24 (5.24) -> up [32,13,56] acting [32,13,51]
>
> # ceph pg map 5.306
> osdmap e1054 pg 5.306 (5.306) -> up [44,60,26] acting [44,7,59]
>
>
> To complete
he
pool, but in practice I think it would break horribly, at least until you
removed the broken cache pool manually.
The readforward and readproxy modes will cache writes (and thus reads for
objects that have been written to and are still in the cache).
And as such they contain your most valuable da
on
your RGW (since you mention cosbench).
Christian
on different performance
> hardware; however, is there any automation possible in Ceph that will promote
> data from slow hardware to fast one and back?
>
http://docs.ceph.com/docs/master/rados/operations/cache-tiering/
Christian
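Cache tiering is the built-in mechanism for that kind of automatic promotion and
eviction; a hedged sketch of the knobs that drive the tiering agent, with a
made-up cache pool name and arbitrary sizes:

    # track object hotness with a bloom-filter hit set
    ceph osd pool set cache-pool hit_set_type bloom
    ceph osd pool set cache-pool hit_set_count 1
    ceph osd pool set cache-pool hit_set_period 3600
    # cap the cache and let the agent flush/evict around these thresholds
    ceph osd pool set cache-pool target_max_bytes 1000000000000
    ceph osd pool set cache-pool cache_target_dirty_ratio 0.4
    ceph osd pool set cache-pool cache_target_full_ratio 0.8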
ng ! :)
>
A dedicated pool (something with replica 1 or 2 and backed by RAID6 OSDs
for example or an EC pool) and "manual" moving would likely be better
suited than the approach above.
Christian
> Thanks for feedback and suggestion on how to handle data you "never will
ery few seconds or so), not
really needed either.
> 2) If there is a journal disk failure, how long does the ceph cluster
> detect the journal disk failure?
>
> Is there any config option to allow the ceph cluster to detect the status
> of journal disk?
>
Same as abov
.
Christian
> Thanks for the help
> Goncalo
>
>
>
> From: Christian Balzer [ch...@gol.com]
> Sent: 20 July 2016 19:36
> To: ceph-us...@ceph.com
> Cc: Goncalo Borges
> Subject: Re: [ceph-users] pgs stuck unclean after reweight
>
> Hello,
>
> On Wed
at was run after restarting
> services so it is still unclear to me why the new value is not picked up
> and why running 'ceph --show-config --conf /dev/null | grep
> mon_osd_nearfull_ratio' still shows 0.85
>
Don't use that, do something like:
ceph --admin-daem
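That admin-socket approach looks roughly like the following, as a hedged sketch
(the socket path varies per host and daemon):

    # ask the running monitor itself which value it is actually using
    ceph --admin-daemon /var/run/ceph/ceph-mon.$(hostname -s).asok config get mon_osd_nearfull_ratio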
sts keep on swapping, and others don't?
> Could this be some issue?
>
> Thanks !!
>
> Kenneth
onnection @ 1x 10GB; A second
> connection is also connected via 10GB but provides only a Backup
> connection when the active Switch fails - no LACP possible.
> - We do not use Jumbo Frames yet..
> - Public and Cluster-Network related Ceph traffic is going through this
> one active 10GB Interface on each S
per OSD is [30, 300],
> any
> other pg_num outside this range will bring the cluster to HEALTH_WARN.
>
> So what I would like to ask: is the document misleading? Should we fix it?
>
Definitely.
Christian
256 per pool.
Again, see pgcalc.
Christian
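For reference, the rule of thumb that pgcalc encodes, shown with purely
hypothetical numbers and a hypothetical pool name:

    # total PGs ~= (OSD count * 100) / replica size, rounded to a power of two,
    # then split across pools by their expected share of the data.
    #   e.g. 15 OSDs, size 3:  15 * 100 / 3 = 500  ->  512 PGs in total
    #   a pool expected to hold about half the data would get 256 of them:
    ceph osd pool create volumes 256 256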
> Thanks.
> ________
> From: ceph-users on behalf of Christian Balzer
>
> Sent: 29 July 2016, 2:47:59
> To: ceph-users
> Subject: Re: [ceph-users] too many PGs per OSD (307 > max 300)
>
> On Fri, 29 Jul 2016 0
On Fri, 29 Jul 2016 16:20:03 +0800 Chengwei Yang wrote:
> On Fri, Jul 29, 2016 at 11:47:59AM +0900, Christian Balzer wrote:
> > On Fri, 29 Jul 2016 09:59:38 +0800 Chengwei Yang wrote:
> >
> > > Hi list,
> > >
> > > I just followed the placement
OTAL), so your individual
pools would be 46 PGs on average, meaning small ones would be 32 and
larger ones 64 or 128.
Getting this right with a small # of OSDs is a challenge.
Christian
>
> Thanks.
>
>
> From: Christian Balzer
> Sent: 29 July 2016, 3
> I have no idea on what I should do for RGW, RBD and CephFS, should I
> just have them all running on the 3 nodes?
>
I don't see how RGW and CephFS enter your setup at all; RBD is part of the
Ceph basics, and no extra server is required for it.
Christian
> Thanks again!
>
Christian
> Thanks again.
>
> Richard
not so much.
Your network can't even saturate one 200GB DC S3710.
From a redundancy point of view you might be better off with more nodes.
Christian
>
> Kind regards,
> Tom
ing that your cache pool needs to be just as reliable as everything
else.
> Is a caching tier with one SSD recommended or should i always have two SSD
> in replicated mode ?
>
See above.
Christian
>
> Kind regards,
> Tom
>
>
>
> On Mon, Aug 1, 2016 at 2:00 PM, Chri
. ^o^
Thanks,
Christian
X>
> >
> > root@ceph2:~/crush_files# crushtool -i crushmap --test
> > --show-utilization-all -- - Pastebin.com<http://pastebin.com/ar6SAFnX>
> >
>
s with this
> > workload? Are you really writing ~600TB/month??
> >
> > Jan
> >
>
>
anks
> Jan
>
> > On 03 Aug 2016, at 13:33, Christian Balzer wrote:
> >
> >
> > Hello,
> >
> > yeah, I was particularly interested in the Power_Loss_Cap_Test bit, as it
> > seemed to be such an odd thing to fail (given that it's not a single capacitor)
onstantly at its breaking point, it's also an operation that should be
doable w/o major impacts.
I'd start with 1024 PGs on those 20 OSDs, at 50 OSDs go to 4096 PGs and at
around 100 OSDs it is safe to go to 8192 PGs.
Christian
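A hedged sketch of how such an increase is usually applied on a pre-Luminous
cluster (hypothetical pool name; pg_num and pgp_num are raised separately, in
steps, while watching recovery):

    ceph osd pool set rbd pg_num 1024
    # data only starts moving once pgp_num follows
    ceph osd pool set rbd pgp_num 1024
    # watch backfilling settle before the next step
    ceph -s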
t won't be systemd (as Jewel actually has the targets
now), but the inability to deal with a manually deployed environment like
mine.
Expect news about that next week at the latest.
Christian
s ceph user.
>
> It works when I don't specify a separate journal.
>
> Any idea of what I'm doing wrong?
>
> Thanks
ke this:
> > > ceph-deploy osd prepare ceph-osd1:sdd:sdf7
> > > And then:
> > > ceph-deploy osd activate ceph-osd1:sdd:sdf7
> > > I end up with "wrong permission" on the osd when activating, complaining
> > > about "tmp" directory where
t;:".
Christian
> On 5 August 2016 at 02:30, "Christian Balzer" wrote:
>
> >
> > Hello,
> >
> > On Fri, 5 Aug 2016 02:11:31 +0200 Guillaume Comte wrote:
> >
> > > I am reading half your answer
> > >
> > > Do you mean that
0
---
Regards,
Christian
n't know.
>
> B) Add 2 monitors to each site. This would give each site 3 monitors
> and the overall cluster 9 monitors. The reason we wanted to try
> this is that we think the OSDs are going down because the quorum is unable
> to find the minimum number of nodes (may
ver according to atop the avio per HDD is
12ms with XFS and 8ms with EXT4.
Some food for thought, minor though with BlueStore in the pipeline.
Christian
ients.
You will also want faster and more cores and way more memory (at least
64GB), how much depends on your CephFS size (number of files).
> I assume I should use an SSD for acceleration, as a cache and as the OSD journal.
MDSes don't hold any local data (only caches); a logging SSD is fine.
Christian
-
5:11 +03:00 from Christian Balzer:
> >
> >
> >Hello,
> >
> >On Mon, 08 Aug 2016 17:39:07 +0300 Александр Пивушков wrote:
> >
> >>
> >> Hello dear community!
> >> I'm new to Ceph and not long ago took up the topic of build
t it does work.
>
Well, I saw this before I gave my answer:
http://www.ovirt.org/develop/release-management/features/storage/cinder-integration/
And based on that I would say oVirt is not a good fit for Ceph at this
time.
Even less so than OpenNebula, which currently needs an additional
ONE drivers are mostly a set of shell scripts).
>
Thanks, I'll give that a spin next week.
Christian
> Best regards,
> Vladimir
>
>
> Regards,
> Vladimir Drobyshevskiy
> "АйТи Город" (IT City) company
> +7 343 192
>
> Hardware and software
have dedicated slots on the back for OS disks, then I
> >>> would recommend using SATADOM flash modules directly in an internal
> >>> SATA port in the machine. Saves you 2 slots for OSDs and they are
> >>> quite reliable. You could even use 2 SD cards if your machine has
> >>> the internal SD s
reate the osd
> # ceph osd unset noout
>
> Cheers
> Goncalo
>
>
es at 1.9 Ghz
> > 3. MDS - 2 pcs., all virtual servers:
> > a. 1 Gbps Ethernet / c - 1 port.
> > b. SATA drive 40 GB for installation of the operating system (or
> > booting from the network, which is preferable)
> > c. SATA drive 40 GB
> > d. 6
ne get > zone.conf.json
> unable to initialize zone: (2) No such file or directory
>
> This could have something to do with the other error radosgw-admin is
> giving me.
>
>
Hello,
On Wed, 17 Aug 2016 09:27:30 +0500 Дробышевский, Владимир wrote:
> Christian,
>
> thanks a lot for your time. Please see below.
>
>
> 2016-08-17 5:41 GMT+05:00 Christian Balzer :
>
> >
> > Hello,
> >
> > On Wed, 17 Aug 201
>
> Sadly this directory is empty.
>
> -- Dan
>
> > Wido
> >
> >> Thanks,
> >>
> >> -- Dan J
crappy SATA disks each (so 16 OSDs), I can get better and
more consistent write speed than you, around 100MB/s.
Christian
> Anyway, some basic idea on those concepts or some pointers to some good
> docs or articles would be wonderful. Thank you!
>
> Lewis George
>
>
be blocked given the above values.
Christian
Holy thread necromancy, Batman!
On Fri, 19 Aug 2016 15:39:13 +1200 Mark Kirkwood wrote:
> On 15/06/16 13:18, Christian Balzer wrote:
> >
> > "osd_scrub_min_interval": "86400",
> > "osd_scrub_max_interval": "604800",
>
> Lewis George
>
>
> --------
> From: "Christian Balzer"
> Sent: Thursday, August 18, 2016 6:31 PM
> To: ceph-users@lists.ceph.com
> Cc: "lewis.geo...@innoscale.net"
> Subject: Re: [ceph-users] Understanding write performance
>
> Hello
> >> >> >>>> Hi,
> >> >> >>>>
> >> >> >>>> Same here, I've read some blog saying that vmware will
> >> >> >>>> frequently verify the locking on VMFS over iSCSI, hence it will
> >> >> >
Hello,
On Sun, 21 Aug 2016 09:57:40 +0100 Nick Fisk wrote:
>
>
> > -Original Message-
> > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> > Christian Balzer
> > Sent: 21 August 2016 09:32
> > To: ceph-users
On Mon, 22 Aug 2016 10:18:51 +0300 Александр Пивушков wrote:
> Hello,
> Several answers below
>
> >Среда, 17 августа 2016, 8:57 +03:00 от Christian Balzer :
> >
> >
> >Hello,
> >
> >On Wed, 17 Aug 2016 09:27:30 +0500 Дробышевский, Владимир wrote:
>>> increase the measured iops.
> > >>>>
> > >>>> Our ceph.conf is pretty basic (debug is set to 0/0 for
> > >>>> everything) and
> > >>>> the crushmap just defines the different buckets/rules for
>
cached_test_cache 5 71625M 23.57 226G 185
> test_cache 6 44324M 1.75 2421G 189
in the
> controller.
>
If it were that last bit, I'd be for it; if it isn't, then something that
you can fully control, akin to fstrim, would be a much better idea.
That being said, I'm disinclined to deploy any SSDs that actually REQUIRE
trim/discard to maintai
On Tue, 27 Jun 2017 13:24:45 +0200 (CEST) Wido den Hollander wrote:
> > Op 27 juni 2017 om 13:05 schreef Christian Balzer :
> >
> >
> > On Tue, 27 Jun 2017 11:24:54 +0200 (CEST) Wido den Hollander wrote:
> >
> > > Hi,
> > >
> >
On Tue, 27 Jun 2017 14:07:24 +0200 Dan van der Ster wrote:
> On Tue, Jun 27, 2017 at 1:56 PM, Christian Balzer wrote:
> > On Tue, 27 Jun 2017 13:24:45 +0200 (CEST) Wido den Hollander wrote:
> >
> >> > Op 27 juni 2017 om 13:05 schreef Christian Balzer :
> >>
l
> > 45/5084377 objects degraded (0.001%)
> > 1103 active+clean
> >4 active+degraded
> > 109 active+remapped
> > client io 21341 B/s rd, 477 kB/s wr, 118 op/s
> >
> >
> > Any idea how
ers that support for it is going to be removed in
> > the near future. The documentation must be updated accordingly and it
> > must be clearly emphasized in the release notes.
> >
> > Simply disabling the tests while keeping the code in the distribution
2 with sufficiently small/fast SSDs.
With bcache etc. just caching reads, you can get away with a single
replication of course; however, failing SSDs may then cause your cluster to
melt down.
Christian
Hello,
On Mon, 3 Jul 2017 14:18:27 +0200 Mateusz Skała wrote:
> @Christian ,thanks for quick answer, please look bellow.
>
> > -Original Message-
> > From: Christian Balzer [mailto:ch...@gol.com]
> > Sent: Monday, July 3, 2017 1:39 PM
> > To: ceph-users@
sd_debug_reject_backfill_probability": "0",
> "osd_recovery_op_priority": "5",
> "osd_recovery_priority": "5",
> "osd_recovery_cost": "20971520",
> "osd_recovery_op_warn_multiple": "
crush rules to separate the
distinct users so that those reads go to OSDs that aren't used by the
batch stuff.
Beyond that, journal SSDs (future WAL SSDs for Bluestore), SSDs for bcache
or so to cache reads, SSD pools, cache-tiering, etc.
Christian
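One hedged way to express such a split on Luminous or later is a CRUSH rule
bound to the ssd device class (earlier releases need a hand-edited CRUSH map;
rule and pool names are made up):

    # a replicated rule restricted to OSDs of class "ssd"
    ceph osd crush rule create-replicated fast-reads default host ssd
    # point the latency-sensitive pool at it
    ceph osd pool set interactive-pool crush_rule fast-reads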
Hello,
so this morning I was greeted with the availability of 10.2.8 for both
Jessie and Stretch (much appreciated), but w/o any announcement here or
updated release notes on the website, etc.
Any reason other "Friday" (US time) for this?
Christian
s decision?
Christian
/ceph has these permissions: "drwxr-x---", while
> every directory below it still has the world aXessible bit set.
>
> This makes it impossible (by default) for nagios and other non-root bits
> to determine the disk usage for example.
>
> Any rhyme or reason for this dec
-least 40% higher
> >as compared with the HDD OSD bench.
> >
> >Did I miss anything here? Any hint is appreciated.
> >
> >Thanks
> >Swami
ache tier is that Ceph is going to need to promote and
> evict stuff all the time (not free). A lot of people that want to use SSD
> cache tiering for RBDs end up with slower performance because of this.
> Christian Balzer is the expert on Cache Tiers for RBD usage. His primary
> stan
>> „filestore“ to „bluestore“ 😊
> >>>
> >>> As far as I have read, bluestore consists of
> >>> - „the device“
> >>> - „block-DB“: device that stores RocksDB metadata
> >>> - „block-WAL“: device that stores RocksDB „write-ahead journal“
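A hedged sketch of how those parts are typically laid out when creating a
bluestore OSD with ceph-volume (Luminous or later; all device paths are made up):

    # data on a HDD, RocksDB metadata and WAL on NVMe partitions
    ceph-volume lvm create --bluestore --data /dev/sdb \
        --block.db /dev/nvme0n1p1 --block.wal /dev/nvme0n1p2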
tic tests.
> --> Yes, the problem is that I have to buy the HW for the Windows 10 VDI...
> and I cannot run realistic tests beforehand :( but I will work along this
> line...
>
> Thanks a lot again!
>
>
>
> 2017-08-18 3:14 GMT+02:00 Christian Balzer :
>
> >
result:
https://forum.proxmox.com/threads/slow-ceph-journal-on-samsung-850-pro.27733/
Christian
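The test usually referenced in these journal-SSD threads is a single-job
synchronous 4k write; a hedged sketch, and it is destructive, so only point it
at a scratch device:

    fio --name=journal-test --filename=/dev/sdX --direct=1 --sync=1 \
        --rw=write --bs=4k --numjobs=1 --iodepth=1 \
        --runtime=60 --time_based --group_reporting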
>
> > Op 20 aug. 2017 om 06:03 heeft Christian Balzer het
> > volgende geschreven:
> >
> > DWPD
>
>
ets lost.
The network part is unavoidable (a local SAS/SATA link is not the same as
a bonded 10Gbps link), though 25Gbps, IB etc can help.
The Ceph stack will benefit from faster CPUs as mentioned above.
> We are using Ceph Jewel 10.2.5-1trusty, kernel 4.4.0.-31 generic, Ubuntu
> 14.04
>
n "expensive" in most cases and this is
no exception.
Smaller hosts are more expensive in terms of space and parts (a NIC for
each OSD instead of one per 12, etc).
And before you mention really small hosts with 1GbE NICs, the latency
penalty is significant there, the limitation to 100MB/
t worse due to the overhead
outside the SSD itself.
Christian
> On Sun, Aug 20, 2017 at 9:33 AM, Christian Balzer wrote:
> >
> > Hello,
> >
> > On Sat, 19 Aug 2017 23:22:11 +0530 M Ranga Swami Reddy wrote:
> >
> >> SSD make details : SSD 850 EVO 2.5"
drive)
> but obviously performance is nowhere near that of SSDs or NVMe.
>
> So, what do you think? Does anybody have some opinions or experience they would
> like to share?
>
> Thanks!
> Xavier.
>
>
>
le.
> And
> I don't have enough hardware to set up a test cluster of any significant
> size to run some actual testing.
>
You may want to set up something to get a feeling for CephFS, if it's
right for you or if something else on top of RBD may be more suitable.
Christian
of the
> volumes listed in the cache pool, but the objects didn't change at
> all; the total number was also still 39. For the rbd_header objects I
> don't even know how to identify their "owner"; is there a way?
>
> Has anyone a hint what else I could c