To: ceph-users@lists.ceph.com
Sent: Tuesday, 23 July, 2013 4:27:07 AM
Subject: Re: [ceph-users] SSD recommendations for OSD journals
Barring the cost, sTec solutions have proven reliable for me.
Check out the s1122 with 1.6 TB capacity and 90PB write endurance:
http://www.stec-inc.com/products/s1120-pcie-accelerator/
Sounds expensive, how much do these cards cost? The smallest/cheapest
should be big enough as a ceph journal.
Sent from my iPhone
On 2013-7-23, at 4:35, "Charles 'Boyo"
<mailto:charlesb...@gmail.com> wrote:
On 07/23/13 07:35, Charles 'Boyo wrote:
> Considering using a mSATA to PCIe adapter with a SATA III mSATA SSD.
> Any thoughts on what to expect from this combination?
Going PCIe, I think I would use an SSD card rather than adding yet another
(relatively slow) bus. I haven't looked at the models but
Hi,
On Mon, Jul 22, 2013 at 2:08 AM, Chen, Xiaoxi wrote:
> Hi,
>
> > Can you share any information on the SSD you are using, is it
> > PCIe connected?
>
> It depends: if you use HDDs as your OSD data disks, a SATA/SAS SSD is
> enough for you. Instead of the Intel 520, I woul
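As background to sizing such a journal SSD, the Ceph documentation of this period gives a rule of thumb relating journal size to expected throughput and the filestore sync interval. A minimal sketch (the 120 MB/s figure is an illustrative HDD throughput, not from the thread):

```python
def osd_journal_size_mb(expected_throughput_mb_s, filestore_max_sync_interval_s=5):
    # Rule of thumb from the Ceph docs of this era:
    #   osd journal size = 2 * (expected throughput * filestore max sync interval)
    # so the journal can absorb the writes accumulated between two filestore syncs.
    return 2 * expected_throughput_mb_s * filestore_max_sync_interval_s

# One HDD-backed OSD at ~120 MB/s with the default 5 s sync interval:
print(osd_journal_size_mb(120))  # 1200 (MB)
```

With several OSDs sharing one SSD, each journal partition would be sized this way individually.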
On Mon, Jul 22, 2013 at 7:10 PM, Mark Nelson wrote:
> On 07/22/2013 01:02 PM, Oliver Fuckner wrote:
>> Good evening,
>>
>> On second look you see that they use 4 SanDisk X100 SSDs in RAID5
>> and those SSDs only have 80 TBytes of write endurance each... that
>> makes me nervous.
>
> I'm less
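To put that endurance worry in numbers, a crude wear estimate divides the rated endurance by the daily journal write volume. A sketch, where the 80 TB figure is the X100 rating quoted above and the 200 GB/day of journal traffic is an assumed, illustrative load:

```python
def ssd_lifetime_years(endurance_tbw, journal_gb_per_day):
    # Crude wear-out estimate: rated endurance (total TB written) divided
    # by the daily journal write volume. Ignores write amplification.
    return endurance_tbw / (journal_gb_per_day / 1024.0) / 365.0

# An 80 TBW drive absorbing ~200 GB of journal writes per day:
print(round(ssd_lifetime_years(80, 200), 1))  # ~1.1 years
```

Under that assumed load the drive wears out in about a year, which explains the nervousness.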
Good evening,
> I have not yet had the opportunity to try one, but something like the
> Marvell Dragonfly might be a very interesting option for servers with
> 24+ drives:
>
> https://origin-www.marvell.com/storage/dragonfly/nvram/
Yes, the Marvell Dragonfly also looks very promising to me, I like
Sent from my iPhone
On 2013-7-22, at 23:16, "Gandalf Corvotempesta" wrote:
> 2013/7/22 Chen, Xiaoxi :
>> With "journal writeahead", the data is first written to the journal, acked
>> to the client, and then written to the OSD; note that the data always stays
>> in memory before it is written to both OSD and journal, so the write is direct
On 07/22/2013 11:26 AM, Chen, Xiaoxi wrote:
>
> Sent from my iPhone
>
> On 2013-7-23, at 0:21, "Gandalf Corvotempesta" wrote:
>
>> 2013/7/22 Chen, Xiaoxi :
>>> Imagine you have several writes that have been flushed to the journal
>>> and acked, but not yet written to disk. Now the system crashes by kernel
>>> panic or power
Hi,
RAM is *MUCH* more reliable than SSD. I've never seen a single RAM
module (server grade) fail in the last 5-6 years.
We have had that happen a few times - running primarily IBM hardware.
Then you get those nasty MCEs - so I wouldn't run journals in RAM.
Unless of course you're only u
Sent from my iPhone
On 2013-7-23, at 0:21, "Gandalf Corvotempesta" wrote:
> 2013/7/22 Chen, Xiaoxi :
>> Imagine you have several writes that have been flushed to the journal and
>> acked, but not yet written to disk. Now the system crashes by kernel panic
>> or power failure; you will lose your data in the ram disk, thus lose d
2013/7/22 Chen, Xiaoxi :
> Imagine you have several writes that have been flushed to the journal and
> acked, but not yet written to disk. Now the system crashes by kernel panic
> or power failure; you will lose your data in the ram disk, thus losing data
> that was assumed to be successfully written.
The same applies in ca
2013/7/22 Chen, Xiaoxi :
> With "journal writeahead", the data is first written to the journal, acked
> to the client, and then written to the OSD. Note that the data always stays
> in memory before it is written to both the OSD and the journal, so the write
> goes directly from memory to the OSDs. This mode suits XFS and EXT4.
What ha
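For context, the write-ahead behavior Xiaoxi describes maps to the filestore journal mode settings of that Ceph generation. A hedged ceph.conf sketch (option names from the contemporary docs; the journal path and size value are illustrative, not from the thread):

```
[osd]
    ; journal first, ack, then flush to the data disk (the mode for XFS/ext4)
    filestore journal writeahead = true
    ; btrfs could instead journal in parallel: filestore journal parallel = true
    ; journal on a dedicated SSD partition (path illustrative)
    osd journal = /dev/disk/by-partlabel/osd-journal-$id
    osd journal size = 10240    ; MB
```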
Sent: July 22, 2013 5:04 AM
To: Mikaël Cluseau
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] SSD recommendations for OSD journals
Thank you for the information, Mikaël.
Counting on the kernel's cache, it appears I will be best served purchasing
write-optimized SSDs?
On Mon, Jul 22, 2013 at 08:45:07AM +1100, Mikaël Cluseau wrote:
> On 22/07/2013 08:03, Charles 'Boyo wrote:
> > Counting on the kernel's cache, it appears I will be best served
> > purchasing write-optimized SSDs?
> > Can you share any information on the SSD you are using, is it PCIe
> > connected?
>
On 22/07/2013 08:03, Charles 'Boyo wrote:
> Counting on the kernel's cache, it appears I will be best served
> purchasing write-optimized SSDs?
> Can you share any information on the SSD you are using, is it PCIe
> connected?
We are on a standard SAS bus so any SSD going to 500MB/s and being
stable o
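A 500 MB/s figure can be sanity-checked against the OSDs behind the journal: in write-ahead mode every byte lands on the journal SSD before the data disks, so the SSD must sustain their combined write rate. A sketch with illustrative per-HDD numbers (not from the thread):

```python
def journal_ssd_sufficient(ssd_write_mb_s, hdd_write_mb_s, hdds_per_ssd):
    # The journal SSD absorbs every write before it reaches the data disks,
    # so it must keep up with the combined write rate of the OSDs it serves.
    return ssd_write_mb_s >= hdd_write_mb_s * hdds_per_ssd

# A ~500 MB/s SAS SSD journaling for 4 HDDs at ~120 MB/s each:
print(journal_ssd_sufficient(500, 120, 4))  # True (480 <= 500)
```

By this estimate a fifth HDD behind the same SSD would already exceed its bandwidth.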