Blake wrote:
> I'm looking for a rackmount chassis for an x86 ZFS fileserver I want to
> build for my organization.
>
> Requirements:
>
> Hot-swap SATA disk support
> Minimum of 4-disk SATA support (would prefer 6+)
> Hot-swap power supply (redundant)
> Some kind of availability for replacement parts
Greetings,
Last April, in this discussion...
http://www.opensolaris.org/jive/thread.jspa?messageID=143517
...we never found out how (or if) the Sun 6120 (T4) array can be configured
to ignore cache flush (sync-cache) requests from hosts. We're about to
reconfigure a 6120 here for use wit
> Now, what if that system had been using ZFS root? I have a
> hardware failure, I replace the raid card, the devid of the boot
> device changes.
I am not sure about Solaris, but on FreeBSD I always use glabel'ed
devices in my ZFS pools, making them entirely location independent.
--
/ Peter Schulle
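For anyone curious about the glabel approach, a minimal sketch (device and label names invented for illustration):

  # glabel label disk01 /dev/ad4
  # glabel label disk02 /dev/ad6
  # zpool create tank mirror label/disk01 label/disk02

The pool then references the /dev/label/* names, which follow the disks no matter which controller or port they show up on.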
[Hit wrong reply button...]
On 9/28/07, Blake <[EMAIL PROTECTED]> wrote:
> I'm looking for a rackmount chassis for an x86 ZFS fileserver I want to build
> for my organization.
>
> Requirements:
>
> Hot-swap SATA disk support
> Minimum of 4-disk SATA support (would prefer 6+)
> Hot-swap power supply
> 1. It appears that OpenSolaris has no way to get updates from Sun.
> So ... how do people "patch" OpenSolaris?
Easy: by upgrading to the next OpenSolaris build.
I guess this is a kind of FAQ.
There are no patches for OpenSolaris, by definition. All fixes and
new features are always first integrated
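In practice, an upgrade to a newer build can be done with Live Upgrade against the build's install image; roughly like this, where the BE name, slice, and image mount point are all placeholders:

  # lucreate -n snv_NN -m /:/dev/dsk/c0t0d0s4:ufs
  # luupgrade -u -n snv_NN -s /mnt/solaris_image
  # luactivate snv_NN
  # init 6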
Slicing, say using "s0" as the root filesystem, would make
ZFS not use the write buffer on the disks.
This would be a slight performance degradation, but would increase
the reliability of the system (since root is mirrored).
Why not live on the edge and boot from ZFS?
This would nearly eliminate U
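Background on the slice point: ZFS only enables a disk's write cache when it owns the whole disk (it then issues its own flushes); given just a slice, it leaves the cache alone since other consumers may share the spindle. A sketch, with hypothetical device names:

  # zpool create tank c1t0d0                      # whole disk: ZFS can enable the write cache
  # zpool create rpool mirror c1t0d0s0 c1t1d0s0   # slices: the write cache is left untouched
  # format -e                                     # expert-mode "cache" menu shows/toggles write_cache where the driver exposes it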
Just keep in mind that I tried the patched driver and occasionally had kernel
panics because of recursive mutex calls. I believe that it isn't
multi-processor safe. I switched to the Marvell chipset and have been much
happier.
IMHO, a better investment is in the NVidia MCP-55 chipsets which
support more than 4 SATA ports. The NForce 680a boasts 12 SATA
ports. Nevada builds 72+ should see these as SATA drives using
the nv_sata driver and not as ATA/IDE disks.
-- richard
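To confirm which driver actually claimed the controller on a given build, something like this should do (output varies):

  # prtconf -D | grep nv_sata
  # cfgadm -a | grep sata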
Christopher wrote:
> I'm new to the list so this is probably a noob question:
pet peeve below...
Kent Watsen wrote:
>
>> I think I have managed to confuse myself so I am asking outright hoping for
>> a straight answer.
>>
> Straight answer:
>
> ZFS does not (yet) support adding a disk to an existing raidz set -
> the only way to expand an existing pool is by adding a stripe.
Kris Kasner wrote:
>> 2. Back to Solaris Volume Manager (SVM), I guess. It's too bad too, because I
>> don't like it with 2 SATA disks either. There aren't enough drives to put the
>> State Database Replicas on so that if either drive failed, the system would
>> reboot unattended. Unless there i
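For reference, the usual two-disk compromise is several replicas per disk plus the (unsupported) mirrored-root flag, which lets the system boot with exactly half of the replicas present; slice names here are placeholders, and the flag trades quorum safety for unattended reboots:

  # metadb -a -f -c3 c0t0d0s7 c0t1d0s7
  # echo "set md:mirrored_root_flag=1" >> /etc/system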
David Runyon wrote:
> Actually, I want 200 megabytes/sec (200 MB/s), and I'm OK with using 2 or 4
> GbE ports for networking as needed.
200 MB/s isochronous sustained is generally difficult for a small system.
Even if you have enough "port bandwidth", you often approach the internal
bottlenecks of small systems.
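Back-of-the-envelope: one GbE port is 1 Gbit/s = 125 MB/s raw, realistically ~100-115 MB/s after protocol overhead, so 200 MB/s sustained means at least two fully saturated ports - and since link aggregation balances per flow, a single client stream still tops out at one port's worth.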
I'm looking for a rackmount chassis for an x86 ZFS fileserver I want to build
for my organization.
Requirements:
Hot-swap SATA disk support
Minimum of 4-disk SATA support (would prefer 6+)
Hot-swap power supply (redundant)
Some kind of availability for replacement parts
I'll be putting in a board
>
> Using build 70, I followed the zfsboot instructions
> at http://www.opensolaris.org/os/community/zfs/boot/zfsboot-manual/
> to the letter.
>
> I tried first with a mirrored zfsroot; when I try to boot to zfsboot,
> the screen is flooded with "init(1M) exited on fatal signal 9".
Could be this
On Fri, 28 Sep 2007, Kugutsumen wrote:
>
> Using build 70, I followed the zfsboot instructions at
> http://www.opensolaris.org/os/community/zfs/boot/zfsboot-manual/
> to the letter.
>
> I tried first with a mirrored zfsroot; when I try to boot to zfsboot,
> the screen is flooded with "init(1M) exited on fatal signal 9".
Actually, I want 200 megabytes/sec (200 MB/s), and I'm OK with using 2 or 4 GbE
ports for networking as needed.
This looks really promising. At the $30/GB target, it is half the market price
of decent RAM.
Effective lifetime is obviously lower given that it is flash, although most of
the SSD makers have been doing some pretty impressive cell balancing to make it
worth it.
Personally I would like to see s
FYI only - may be of interest to ZFSers (and not available yet):
http://www.tgdaily.com/content/view/34065/135/
It would also require a custom OpenSolaris driver (AFAICT).
Regards,
Al Hopper Logical Approach Inc, Plano, TX. [EMAIL PROTECTED]
Voice: 972.379.2133 Fax: 972.379.2134 T
Christopher wrote:
> Kent - I see your point and it's a good one, but for me, I only want a
> big fileserver with redundancy for my music collection, movie collection,
> pictures, etc. I would of course back up the most important data as well
> from time to time.
>
Chris,
We ha
Dale Ghent wrote:
> Yes, it's in there:
>
> [EMAIL PROTECTED]/$ cat /etc/release
> Solaris 10 8/07 s10x_u4wos_12b X86
>
It's also available in U3 (and probably earlier releases as well) after
installing kernel patch 120011-14 or 120012-14. I checked this last night.
Pr
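For anyone wanting to check a given box: showrev -p lists installed patches, so a quick grep for the patch ID answers it.

  # showrev -p | grep 12001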
I would agree that the performance of the SiI 3114 is not great. I have a
similar ASUS board, and have used the standalone controller as well.
Adaptec makes a nice 2-channel SATA card that is a lot better, though it costs
about twice as much. The Supermicro/Marvell controller is very well rated and
sup
I'm new to the list so this is probably a noob question: Is this forum part of
a mailing list or something? I keep getting some answers to my posts in this
thread by email as well as some here, but it seems that those answers/posts on
email aren't shown on this forum. Or do I just get a copy
I just tried again with Tim Foster's script
( http://mediacast.sun.com/share/timf/zfs-actual-root-install.sh ) and I get
the same negative results...
With mirror c1t0d0s0 c2t0d0s0, I get "init(1M) exited on fatal signal 9" spam.
With a straight c1t0d0s0, I get the same problem...
I tried w
Hello!
Does anyone know if/when ZFS will support DMAPI?
regards
Oliver
Using build 70, I followed the zfsboot instructions at
http://www.opensolaris.org/os/community/zfs/boot/zfsboot-manual/ to the
letter.
I tried first with a mirrored zfsroot; when I try to boot to zfsboot,
the screen is flooded with "init(1M) exited on fatal signal 9".
Then I tried with a simp
I made a mistake in calculating the mttdl-drop for adding stripes - it
should have read:
2 disks: space=500 GB, mttdl=760.42 years, iops=158
4 disks: space=1000 GB, mttdl=380 years, iops=316
6 disks: space=1500 GB, mttdl=*253* years, iops=474
8 disks: space=2000 GB, mttdl=*190* years, iops=632
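The corrected figures follow from the scaling rules: with a fixed MTTDL per mirrored pair, striping N pairs divides pool MTTDL by N while IOPS add up, i.e. mttdl(N pairs) = 760.42/N years and iops = 158*N. For 8 disks (4 pairs): 760.42/4 = 190 years and 4*158 = 632 iops.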
I think I have managed to confuse myself so I am asking outright hoping for a straight answer.
Straight answer:
ZFS does not (yet) support adding a disk to an existing raidz set -
the only way to expand an existing pool is by adding a stripe.
Stripes can either be mirror, raid5, o
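A minimal sketch of the add-a-stripe route (pool and device names hypothetical; zpool warns if the new vdev's redundancy doesn't match the existing ones):

  # zpool add tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0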