Hello.
On 07/13/2016 03:31 AM, Christian Balzer wrote:
Hello,
did you actually read my full reply last week, the in-line parts,
not just the top bit?
http://www.spinics.net/lists/ceph-users/msg29266.html
On Tue, 12 Jul 2016 16:16:09 +0300 George Shuklin wrote:
Yes, linear IO speed was a concern during the benchmark.
Hello,
did you actually read my full reply last week, the in-line parts,
not just the top bit?
http://www.spinics.net/lists/ceph-users/msg29266.html
On Tue, 12 Jul 2016 16:16:09 +0300 George Shuklin wrote:
> Yes, linear IO speed was a concern during the benchmark. I cannot predict how
> much linear IO would be generated by clients
Hi Vincent,
On 12.07.2016 15:03, Vincent Godin wrote:
> Hello.
>
> I've been testing an Intel 3500 as a journal store for a few HDD-based OSDs. I
> stumbled on issues with multiple partitions (>4) and udev (sda5, sda6, etc.
> sometimes do not appear after partition creation). And I'm thinking that a
> partition
Yes, linear IO speed was a concern during the benchmark. I cannot predict how
much linear IO will be generated by clients (compared to IOPS), so we are
going to balance HDD OSDs per SSD according to real usage. If users
generate too much random IO, we will raise the HDD/SSD ratio; if they
generate
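(As rough arithmetic, assuming ~100-120 MB/s of sequential writes per 7200rpm
HDD and taking the ~250 MB/s of journal write throughput measured on the SSD
earlier in this thread: purely linear client writes saturate the SSD with only
2-3 HDDs behind it, while the same SSD absorbed the random-write load of 9
HDDs at ~5% utilization. That asymmetry is why the HDD/SSD ratio has to follow
the observed random/linear mix.)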
2016-07-12 15:03 GMT+02:00 Vincent Godin :
> Hello.
>
> I've been testing an Intel 3500 as a journal store for a few HDD-based OSDs. I
> stumbled on issues with multiple partitions (>4) and udev (sda5, sda6, etc.
> sometimes do not appear after partition creation). And I'm thinking that a
> partition is not that
Hello.
I've been testing an Intel 3500 as a journal store for a few HDD-based OSDs. I
stumbled on issues with multiple partitions (>4) and udev (sda5, sda6, etc.
sometimes do not appear after partition creation). And I'm thinking that a
partition is not that useful for OSD management, because Linux does not
allow
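One way around the >4-partition and udev headaches, sketched here with an
example device and size, is to keep the journal SSD GPT-labelled and add
journal partitions with sgdisk (the typecode is the GUID ceph-disk itself
stamps on journal partitions):

# sgdisk --new=0:0:+10G --change-name=0:'ceph journal' \
      --typecode=0:45b0969e-9b03-4f30-b4c6-b4b80ceff106 /dev/sdc
# partprobe /dev/sdc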
> -----Original Message-----
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Zoltan Arnold Nagy
> Sent: 08 July 2016 08:51
> To: Christian Balzer
> Cc: ceph-users ; n...@fisk.me.uk
> Subject: Re: [ceph-users] multiple journals on SSD
>
>
Hi Christian,
On 08 Jul 2016, at 02:22, Christian Balzer wrote:
Hello,
On Thu, 7 Jul 2016 23:19:35 +0200 Zoltan Arnold Nagy wrote:
Hi Nick,
How large NVMe drives are you running per 12 disks?
In my current setup I have 4xP3700 per 36 disks but I feel like I could
get by with 2… Just looking for community
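(For the record: 4 x P3700 for 36 disks works out to 9 OSDs per NVMe journal;
dropping to 2 cards would mean 18 OSDs per NVMe, and also 18 OSDs going down
together if that one card fails.)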
Hello,
On Thu, 7 Jul 2016 23:19:35 +0200 Zoltan Arnold Nagy wrote:
> Hi Nick,
>
> How large NVMe drives are you running per 12 disks?
>
> In my current setup I have 4xP3700 per 36 disks but I feel like I could
> get by with 2… Just looking for community experience :-)
>
This is funny, because
> Device:  rrqm/s wrqm/s   r/s  w/s   rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
> [...]                                          607.84     1.36 19.42    5.60   34.13  4.04 19.60
> sdn        0.50   0.00 23.00 0.00 2670.00  0.00   232.17     0.07  2.96    2.96    0.00  2.43  5.60
>
> Pretty much 10x the latency. I'm seriously impressed with these NVME things.
>
>
>>
Hi Christian,
> -----Original Message-----
> From: Christian Balzer [mailto:ch...@gol.com]
> Sent: 07 July 2016 12:57
> To: ceph-users@lists.ceph.com
> Cc: Nick Fisk
> Subject: Re: [ceph-users] multiple journals on SSD
>
>
> Hello Nick,
>
> On Thu, 7 Jul 20
> Device:  rrqm/s wrqm/s   r/s  w/s   rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
> [...]                                          607.84     1.36 19.42    5.60   34.13  4.04 19.60
> sdn        0.50   0.00 23.00 0.00 2670.00  0.00   232.17     0.07  2.96    2.96    0.00  2.43  5.60
>
> Pretty much 10x the latency. I'm seriously impressed with these NVME
> things.
>
>
> > -Original Messa
There are two problems I have found so far:
1) You cannot alter the partition table while it is in use. That means you
need to stop every ceph-osd that keeps its journal on the given SSD before
changing anything on it (see the sketch below). Worse: you can write the
change to disk, but you cannot force the kernel to re-read the partition table.
2) I found a udev bug with
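A minimal illustration of problem (1), assuming /dev/sdc is the shared journal
SSD and OSDs 3 and 7 keep their journals on it (device name and IDs are only
examples; systemd host assumed):

# stop every ceph-osd whose journal lives on /dev/sdc first
systemctl stop ceph-osd@3 ceph-osd@7
# only now can an edited partition table be pushed to the kernel
sgdisk --new=0:0:+10G /dev/sdc
partprobe /dev/sdc        # or: blockdev --rereadpt /dev/sdc
# while any partition on /dev/sdc is still held open by a ceph-osd, the
# BLKRRPART ioctl behind partprobe/blockdev typically fails with EBUSY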
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Christian Balzer
> Sent: 07 July 2016 03:23
> To: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] multiple journals on SSD
>
>
> Hello,
>
> I have a multitude of problems with the benchmarks and conclusions
I have 12 journals on 1 SSD, but I wouldn't recommend it if you want any
real performance.
I use it in an archive-type environment.
On Wed, Jul 6, 2016 at 9:01 PM Goncalo Borges
wrote:
> Hi George...
>
>
> On my latest deployment we have set
>
> # grep journ /etc/ceph/ceph.conf
> osd journal si
Hi George...
On my latest deployment we have set
# grep journ /etc/ceph/ceph.conf
osd journal size = 2
and configured the OSDs for each device by running 'ceph-disk prepare':
# ceph-disk -v prepare --cluster ceph --cluster-uuid XXX --fs-type
xfs /dev/sdd /dev/sdb
# ceph-disk -v
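For each OSD prepared this way, ceph-disk carves a fresh journal partition on
the shared SSD (/dev/sdb here) and links it into the OSD's data directory by
partuuid. A quick way to verify, assuming default paths:

# ceph-disk list                             # shows each data partition with its journal partition
# ls -l /var/lib/ceph/osd/ceph-*/journal     # symlinks to /dev/disk/by-partuuid/<uuid>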
Hello,
I have a multitude of problems with the benchmarks and conclusions
here, more below.
But firstly, to address the question of the OP: definitely not filesystem-based
journals.
Another layer of overhead and delays, something I'd be willing to ignore
if we're talking about a full SSD as O
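For contrast, the two journal layouts look roughly like this (a sketch, values
illustrative only):

# journal as a file on the OSD's own filesystem - the extra layer in question:
osd journal = /var/lib/ceph/osd/$cluster-$id/journal
osd journal size = 10240
# journal as a raw SSD partition - what ceph-disk sets up instead:
# /var/lib/ceph/osd/ceph-0/journal -> /dev/disk/by-partuuid/<uuid>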
Yes.
In my lab (not production yet), with 9 x 7200rpm SATA drives (OSDs) and one Intel
SSDSC2BB800G4 (800 GB, 9 journals), during random writes I got ~90%
utilization on the 9 HDDs with only ~5% utilization on the SSD (2.4k IOPS). With
linear writes it is somewhat worse: I got 250 MB/s on the SSD, which translated
to 240 MB/s across all OSDs
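For anyone reproducing these numbers, the usual way to test a device
specifically for journal duty is a single-job O_DSYNC sequential write, e.g.
with fio (a sketch only; the device name is a placeholder and the test
overwrites whatever is on it):

fio --name=journal-test --filename=/dev/sdX --direct=1 --sync=1 \
    --rw=write --bs=4k --numjobs=1 --iodepth=1 --runtime=60 --time_based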
Hi George,
We have several journal partitions on our SSDs too. Using the ceph-deploy
utility (as Dan mentioned before) is, I think, the best way:
ceph-deploy osd create HOST:DISK[:JOURNAL] [HOST:DISK[:JOURNAL]...]
where JOURNAL is the path to the journal disk (not to a partition):
ceph-deploy osd
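For example (hostname and devices are placeholders only):

ceph-deploy osd create node1:sdd:/dev/sdb node1:sde:/dev/sdb

Both OSDs point at the same journal disk, and ceph-deploy/ceph-disk add one
journal partition on /dev/sdb per OSD.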
Hi George,
Interesting results for your benchmark. Could you please supply some more
numbers? We didn't get as good a result in our tests.
Thanks.
Cheers,
Alwin
On 07/06/2016 02:03 PM, George Shuklin wrote:
> Hello.
>
> I've been testing an Intel 3500 as a journal store for a few HDD-based OSDs
We have 5 journal partitions per SSD. Works fine (on el6 and el7).
Best practice is to use ceph-disk:
ceph-disk prepare /dev/sde /dev/sdc   # where sde is the OSD data disk and sdc is the SSD
-- Dan
On Wed, Jul 6, 2016 at 2:03 PM, George Shuklin wrote:
> Hello.
>
> I've been testing an Intel 3500 as a journal store
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of George Shuklin
Sent: 6 July 2016 15:04
To: ceph-users@lists.ceph.com
Subject: [ceph-users] multiple journals on SSD
Hello.
I've been testing an Intel 3500 as a journal store for a few HDD-based OSDs. I
stumbled on issues with multiple partitions (>4) and udev (sda5, sda6, etc.
sometimes do not appear af
Hello.
I've been testing an Intel 3500 as a journal store for a few HDD-based OSDs. I
stumbled on issues with multiple partitions (>4) and udev (sda5, sda6, etc.
sometimes do not appear after partition creation). And I'm thinking that a
partition is not that useful for OSD management, because Linux does not
allow