Hello folks
A few weeks ago, I started a discussion regarding abysmal read/write
performance using a ZFS mirror on 8.0-RELEASE. I was using an Atom 330
system with 2GB RAM, and it was pointed out to me that my problem was
most likely having both disks attached to a PCI SIL3124 controller.
Dan Naumov wrote:
> [j...@atombsd ~]$ dd if=/dev/zero of=/home/jago/test2 bs=1M count=4096
> 4096+0 records in
> 4096+0 records out
> 4294967296 bytes transferred in 143.878615 secs (29851325 bytes/sec)
>
> This works out to 1GB in 36.2 seconds / 28.2 MB/s in the first test and
> 4GB in 143.8 seconds / 28.4 MB/s.
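As a quick sanity check on those numbers (my own arithmetic, not part of
the quoted message), the dd output above works out to about 28.4 MiB/s:

echo "scale=1; 4294967296 / 143.878615 / 1048576" | bc
# prints 28.4 (MiB/s), matching the figure quoted above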
On 26.01.2010 00:15, Daniel O'Connor wrote:
> On Tue, 26 Jan 2010, Dan Naumov wrote:
>> CPU-performance-wise, I am not really worried. The current system is
>> an Atom 330 and even that is a bit overkill for what I do with it and
>> from what I am seeing, the new Atom D510 used on those boards is
On 01/25/10 19:53, Jeremy Chadwick wrote:
That's just the thing -- I/O transactions, not to mention ZFS itself,
are CPU-bound. If you start seeing slow I/O as a result of the Atom's
limitations, I don't think there's anything that can be done about it.
Choose wisely. :-)
It's not *that* terrible
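A simple way to test that claim is to watch CPU load while the write test
runs; this is just a sketch (reusing the test file path from the dd run
quoted above), not something from the original thread:

# terminal 1: repeat the sequential write
dd if=/dev/zero of=/home/jago/test2 bs=1M count=4096
# terminal 2: show system processes and threads; if the ZFS/geom kernel
# threads pin a core while the disks sit mostly idle, the Atom is the limit
top -SH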
Dan Naumov wrote:
CPU-performance-wise, I am not really worried. The current system is
an Atom 330 and even that is a bit overkill for what I do with it and
from what I am seeing, the new Atom D510 used on those boards is a
tiny bit faster. What I want and care about for this system are
reliability
Dan Naumov wrote:
> Alexander, since you seem to be experienced in the area, what do you
> think of these two for use in a FreeBSD 8 ZFS NAS:
>
> http://www.supermicro.com/products/motherboard/ATOM/ICH9/X7SPA.cfm?typ=H
> http://www.supermicro.com/products/motherboard/ATOM/ICH9/X7SPA.cfm?typ=H&IPMI=Y
aoc-sat2-mv8 was somewhat slower compared to ICH9 or LSI1068
controllers when I tried it with 6 and 8 disks.
I think the problem is that MV8 only does 32K per transfer and that
does seem to matter when you have 8 drives hooked up to it. I don't
have hard numbers, but peak throughput of MV8 with 8-d
> I like to use PCI-X with AOC-SAT2-MV8 cards or PCIe cards; that way you
> get a lot more bandwidth.
I would go along with that - I have precisely the same controller, with
a pair of eSATA drives, running ZFS mirrored. But I get a nice 100
meg/second out of them if I try. My controller is, how
It depends on the bandwidth of the bus that it is on and the controller
itself.
I like to use PCI-X with AOC-SAT2-MV8 cards or PCIe cards; that way you
get a lot more bandwidth.
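The rough bus math behind that advice (theoretical peak numbers, not
measurements from this thread):

# plain 32-bit/33 MHz PCI, shared by everything on the bus:
echo "32 * 33000000 / 8 / 1000000" | bc    # ~132 MB/s theoretical
# PCI-X 64-bit/133 MHz, where the AOC-SAT2-MV8 is normally used:
echo "64 * 133000000 / 8 / 1000000" | bc   # ~1064 MB/s theoretical

On a mirror, every written block crosses the bus twice (once per disk), so
on a shared plain-PCI path the usable write rate is at best about half of
that 132 MB/s ceiling, and in practice noticeably less.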
On Mon, 25 Jan 2010, Dan Naumov wrote:
I've checked with the manufacturer and it seems that the Sil3124 in
this NAS is indeed a PCI card. More info on the card in question is
available at http://green-pcs.co.uk/2009/01/28/tranquil-bbs2-those-pci-cards/
I have the card described later on the page
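One way to confirm how a Sil3124 card like that is attached (a sketch; the
exact device entry depends on which driver claims it):

pciconf -lvc
# find the Silicon Image SATA entry; a "PCI-Express" capability line means
# the card sits on a PCIe lane, no such line means it is behind plain PCI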
Dan Naumov wrote:
> This works out to 1GB in 36.2 seconds / 28.2 MB/s in the first test and
> 4GB in 143.8 seconds / 28.4 MB/s and somewhat consistent with the
> bonnie results. It also sadly seems to confirm the very slow speed :(
> The disks are attached to a 4-port Sil3124 controller and again, my
On Sun, Jan 24, 2010 at 8:34 PM, Jason Edwards wrote:
>> ZFS writes to a mirror pair
>> require two independent writes. If these writes go down independent I/O
>> paths, then there is hardly any overhead from the 2nd write. If the
>> writes
>> go through a bandwidth-limited shared path then the
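A quick way to see where that shared path tops out is to watch per-disk
activity while the dd test is running; a sketch, not from the original
thread:

# per-vdev read/write rates, refreshed every second
zpool iostat -v 1
# per-disk busy percentage and throughput; if both mirror members show the
# same write rate and their combined rate hugs the PCI limit, the bus is
# the bottleneck rather than the disks
gstat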
On Sun, Jan 24, 2010 at 7:05 PM, Jason Edwards wrote:
> Hi Dan,
>
> I read on the FreeBSD mailing list that you had some performance issues
> with ZFS. Perhaps I can help you with that.
>
> You seem to be running a single mirror, which means you won't have any speed
> benefit regarding writes, and usually R
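For what it's worth, ZFS gains write throughput from additional top-level
vdevs rather than from extra disks inside one mirror; a sketch, where the
pool name "tank" and the device names ada0..ada3 are placeholders:

# two mirrored pairs striped together: redundancy within each pair,
# writes spread across both vdevs
zpool create tank mirror ada0 ada1 mirror ada2 ada3
zpool status tank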
Note: Since my issue is slow performance right off the bat and not
performance degradation over time, I decided to start a separate
discussion. After installing a fresh pure ZFS 8.0 system and building
all my ports, I decided to do some benchmarking. At this point, about
a dozen ports have been built
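The benchmarks referred to above were the dd runs quoted earlier plus a
bonnie run; a rough reconstruction of the commands (paths, sizes, and the
use of the bonnie++ port are my assumptions, not quotes from the thread):

dd if=/dev/zero of=/home/jago/test1 bs=1M count=1024   # ~1 GiB sequential write
dd if=/dev/zero of=/home/jago/test2 bs=1M count=4096   # ~4 GiB sequential write
bonnie++ -d /home/jago -s 4096                         # 4 GiB bonnie++ pass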