Hi all.
I've been running the ZFS boot "netinstall" setup on SXCE's snv_69 and
snv_70 very happily. I'm anticipating the release of xVM with build 75,
and wondering if the same ZFS install procedure is likely to work, or if
I'll be left waiting for further changes.
I understand that thi
Gary Gendel wrote:
> Norco usually uses Silicon Image based SATA controllers.
Ah, yes, I remember hearing SI SATA port-multiplier horror stories when I
was researching storage possibilities.
However, I just heard back from Norco:
> Thank you for your interest in Norco products.
> Most of the parts used by D
[EMAIL PROTECTED] wrote:
>> If you don't have a 64bit cpu, add more ram(tm).
>
>
> Actually, no; if you have a 32 bit CPU, you must not add too much
> RAM or the kernel will run out of space to put things.
Hrm. Do you have a working definition of "too much"?
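For a rough sense of scale: on a 32-bit kernel the limit isn't the RAM
itself but the kernel's slice of the 4 GB virtual address space. A
back-of-the-envelope sketch in Python (the ~1 GB kernel share is an
assumed round number; the real Solaris split depends on kernelbase):

  GiB = 1024 ** 3

  # Assumed: the kernel gets roughly 1 GB of the 4 GB of 32-bit
  # virtual address space; ZFS's ARC has to live inside that slice.
  KERNEL_VA = 1 * GiB

  for phys_ram in (1, 2, 4, 8):
      # However much physical RAM you add, the kernel can only keep
      # this much of it mapped at once (ARC included), and it still
      # needs room for the page structures tracking all that RAM.
      mappable = min(phys_ram * GiB, KERNEL_VA)
      print(f"{phys_ram} GB RAM -> at most ~{mappable / GiB:.0f} GB "
            f"usable by the kernel/ARC")

So "too much" is roughly the point where extra RAM costs kernel
address space (for page structures) without the ARC being able to use
any of it.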
adam
Hello, Robert,
Robert Milkowski wrote:
> Because it offers up to 1 GB of memory, 32-bit shouldn't be an issue.
Sorry, could someone expand on this?
The only received opinion I've seen on 32-bit is from the ZFS best
practice wiki, which simply says "Run ZFS on a system that runs a 64-bit
kernel."
Hey all,
Has anyone else noticed Norco's recently-announced DS-520 and thought
ZFS-ish thoughts? It's a five-SATA, Celeron-based desktop NAS that ships
without an OS.
http://www.norcotek.com/item_detail.php?categoryid=8&modelno=ds-520
What practical impact is a 32-bit processor going to hav
Heya Kent,
Kent Watsen wrote:
>> It sounds good, that way, but (in theory), you'll see random I/O
>> suffer a bit when using RAID-Z2: the extra parity will drag
>> performance down a bit.
> I know what you are saying, but I wonder if it would be noticeable? I
Well, "noticeable" again comes
Kent Watsen wrote:
>> What are you *most* interested in for this server? Reliability?
>> Capacity? High Performance? Reading or writing? Large contiguous reads
>> or small seeks?
>>
>> One thing I did that got good feedback from this list was
>> picking apart the requirements of the most
Kent Watsen wrote:
> I'm putting together an OpenSolaris ZFS-based system and need help
> picking hardware.
Fun exercise! :)
> I'm thinking about using this 26-disk case: [FYI: 2-disk RAID1 for the
> OS & 4*(4+2) RAIDZ2 for SAN]
What are you *most* interested in for this server? Reliability?
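As a quick aside, the usable capacity of that layout is easy to pin
down; a sketch with a placeholder drive size (the post doesn't name
one, so 500 GB is assumed):

  drive_gb = 500                # assumed per-disk capacity

  os_usable = 1 * drive_gb      # 2-disk RAID1 -> one disk usable

  san_vdevs = 4                 # 4 x (4+2) RAID-Z2
  data_per_vdev = 4             # 6 disks per vdev, 2 of them parity
  san_usable = san_vdevs * data_per_vdev * drive_gb
  san_raw = san_vdevs * 6 * drive_gb

  print(f"OS pool : ~{os_usable} GB usable")
  print(f"SAN pool: ~{san_usable} GB usable of {san_raw} GB raw")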
Back in April, I pinged this list[1] for help in specifying a ZFS server
that would handle high-capacity reads and writes. That server was
finally built and delivered, and I've blogged the results[2] as part of
a larger series[3] about that server.
[1] http://www.opensolaris.org/jive/thread.jsp
[EMAIL PROTECTED] wrote:
I suspect that if you have a bottleneck in your system, it would be due
to the available bandwidth on the PCI bus.
Mm, yeah, that's what I was worried about, too (mostly through ignorance
of the issues), which is why I was hoping HyperTransport and PCIe were
going to give
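For a feel of the numbers (nominal peak figures; real throughput,
especially on shared parallel PCI, comes in lower):

  # Nominal per-direction bandwidth in MB/s; the HT figure is an
  # assumed round number for a HyperTransport 1.x link.
  buses = {
      "PCI 32-bit/33 MHz (shared)": 133,
      "PCI-X 64-bit/133 MHz":       1066,
      "PCIe x8 (per direction)":    2000,
      "HyperTransport 1.x link":    4000,
  }

  drives = 16
  per_drive = 60    # assumed sustained MB/s per SATA drive of the era
  need = drives * per_drive

  print(f"{drives} drives streaming: ~{need} MB/s aggregate")
  for name, bw in buses.items():
      print(f"  {name}: {bw} MB/s ->",
            "ok" if bw >= need else "bottleneck")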
Hi, hope you don't mind if I make some portions of your email public in
a reply--I hadn't seen it come through on the list at all, so it's no
duplicate to me.
Johansen wrote:
> Adam:
>
> Sorry if this is a duplicate, I had issues sending e-mail this morning.
>
> Based upon your CPU choices, I t
Anton B. Rang wrote:
If you're using this for multimedia, do some serious testing first. ZFS tends to have
"bursty" write behaviour, and the worst-case latency can be measured in seconds. This has
been improved a bit in recent builds but it still seems to "stall" periodically.
I had wondered
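The stalls are usually put down to ZFS batching writes into
transaction groups and syncing them every few seconds. A toy model
(the ~5 s interval matches builds of that era; the rates are assumed):

  TXG_INTERVAL = 5      # seconds between txg syncs (assumed)
  INGEST = 100.0        # MB/s of incoming writes (assumed)
  DISK_RATE = 400.0     # MB/s the pool can sync at (assumed)

  buffered = 0.0
  for second in range(1, 16):
      buffered += INGEST            # writes accumulate in memory
      if second % TXG_INTERVAL == 0:
          burst = buffered / DISK_RATE
          print(f"t={second:2}s: sync {buffered:.0f} MB -> "
                f"~{burst:.2f}s disk-saturating burst")
          buffered = 0.0

Steady 100 MB/s in becomes a 500 MB slug every five seconds, which is
exactly the kind of periodic latency spike Anton describes.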
Richard Elling wrote:
Does anyone have a clue as to where the bottlenecks are going to be
with this:
16x hot swap SATAII hard drives (plus an internal boot drive)
Be sure to check the actual bandwidth of the drives when installed in the
final location. We have been doing some studies on the
Nicholas Lee wrote:
On 4/19/07, Adam Lindsay <[EMAIL PROTECTED]> wrote:
16x hot swap SATAII hard drives (plus an internal boot drive)
Tyan S2895 (K8WE) motherboard
Dual GigE (integral nVidia ports)
2x Areca 8-port PCIe (8-lane) RAID dr
[EMAIL PROTECTED] wrote:
Adam:
Does anyone have a clue as to where the bottlenecks are going to be with
this:
16x hot swap SATAII hard drives (plus an internal boot drive)
Tyan S2895 (K8WE) motherboard
Dual GigE (integral nVidia ports)
2x Areca 8-port PCIe (8-lane) RAID controllers
2x AMD Opteron
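A rough tally, with assumed nominal figures, of where that spec's
ceiling sits:

  per_drive = 60      # assumed sustained MB/s per SATA drive
  disk_bw = 16 * per_drive

  pcie_x8 = 2000      # nominal MB/s per direction per Areca card
  ctrl_bw = 2 * pcie_x8

  gige = 120          # assumed usable MB/s per GigE port
  net_bw = 2 * gige

  print(f"disks      : ~{disk_bw} MB/s aggregate")
  print(f"controllers: ~{ctrl_bw} MB/s of PCIe headroom")
  print(f"network    : ~{net_bw} MB/s across both GigE ports")

With those assumptions the PCIe side has headroom to spare; for
serving clients, the dual GigE ports are the hard ceiling.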
In asking about ZFS performance in streaming IO situations, discussion
quite quickly turned to potential bottlenecks. By coincidence, I was
wondering about the same thing.
Richard Elling said:
We know that channels, controllers, memory, network, and CPU bottlenecks
can and will impact actual p
Thanks, Richard, for your comments.
Richard Elling wrote:
so much data, so little time... :-)
:) indeed.
Adam Lindsay wrote:
Clearly, there are elements of the model that don't apply to our
sustained read/writes, so does anyone have any guidance (theoretical
or empirical) on what we
Bart Smaalders wrote:
Adam Lindsay wrote:
Okay, the way you say it, it sounds like a good thing. I misunderstood
the performance ramifications of COW and ZFS's opportunistic write
locations, and came up with a much more pessimistic guess that it would
approach random writes. As it is, I
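A toy illustration of the point, with a hypothetical bump allocator
standing in for ZFS's real (much smarter) one:

  import random

  block_map = {}    # logical block -> physical offset
  next_free = 0     # naive "allocate at the frontier" counter

  def cow_write(logical_block):
      # Copy-on-write: never overwrite in place; put the new version
      # at the next free physical location and update the map.
      global next_free
      block_map[logical_block] = next_free
      next_free += 1

  random.seed(1)
  updates = random.sample(range(1000), 8)  # scattered logical writes
  for lb in updates:
      cow_write(lb)

  print("logical blocks :", updates)
  print("physical layout:", [block_map[lb] for lb in updates])

The scattered logical updates land at physical offsets 0..7, one
sequential run. The bill comes later, as fragmentation when reading
data written this way, which is the usual COW trade-off.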
Hello Bart,
Thanks for the answers...
Bart Smaalders wrote:
Clearly, there are elements of the model that don't apply to our
sustained read/writes, so does anyone have any guidance (theoretical
or empirical) on what we could expect in that arena?
I've seen some references to a different ZFS m
Hi folks. I'm looking at putting together a 16-disk ZFS array as a server, and
after reading Richard Elling's writings on the matter, I'm now left wondering
if it'll have the performance we expect of such a server. Looking at his
figures, 5x 3-disk RAIDZ sets seem like they *might* be made to do
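A quick sanity check in the same per-vdev style, with assumed per-disk
figures:

  DISK_IOPS = 80    # assumed random-read IOPS per disk
  DISK_MBS = 60     # assumed sustained MB/s per disk

  vdevs = 5
  data_disks = 2    # each 3-disk RAID-Z vdev: 2 data + 1 parity

  print("random-read IOPS:", vdevs * DISK_IOPS, "(~one disk per vdev)")
  print("streaming read  :", vdevs * data_disks * DISK_MBS, "MB/s")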