Sorry about my previous message, and for starting a new thread (I'm a
fast deleter). S10U2 supports SATA hot plug, but only for a few SATA
controllers (notably not the one in the x2100, which is why I thought
support was absent altogether).
Judging from the log messages that were posted, you do
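For what it's worth, on controllers whose drivers do support hot plug, the
device is typically managed through cfgadm(1M); a rough sketch (the
attachment-point name sata0/3 is hypothetical and varies by controller):

  # cfgadm -al                      # list attachment points and their state
  # cfgadm -c unconfigure sata0/3   # offline a disk before pulling it
  # cfgadm -c configure sata0/3     # bring a newly inserted disk online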
On September 7, 2006 12:25:47 PM -0700 "Anton B. Rang" <[EMAIL PROTECTED]>
wrote:
The bigger problem with system utilization for software RAID is the
cache, not the CPU cycles proper. Simply preparing to write 1 MB of data
will flush half of a 2 MB L2 cache. This hurts overall system performance
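To make that arithmetic explicit (a rough sketch, assuming 64-byte cache
lines and that the XOR pass streams all of the data through the CPU):

  1 MB written   = 16,384 cache lines touched by the parity computation
  2 MB L2 cache  = 32,768 lines total
  16,384/32,768  = ~50% of the previously cached working set evicted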
On September 8, 2006 9:34:29 PM -0500 David Dyer-Bennet <[EMAIL PROTECTED]>
wrote:
My first real-hardware Solaris install. I've installed S10 u2 on a
system with an Asus M2n-SLI Deluxe nForce 570-SLI motherboard, Athlon
64 X2 dual core CPU. It's in a Chenbro SR107 case with two Chenbro
4-drive SATA hot-swap bays.
On September 8, 2006 5:59:47 PM -0700 Richard Elling - PAE
<[EMAIL PROTECTED]> wrote:
Ed Gould wrote:
On Sep 8, 2006, at 11:35, Torrey McMahon wrote:
If I read between the lines here I think you're saying that the raid
functionality is in the chipset but the management can only be done by
software running on the outside. (Right?)
David Dyer-Bennet wrote:
[...]
> So, having gotten this far, and it being a scratch install and all, I
> reached over and pulled out C3D0. I then typed a zpool status
> command. This hung after the first line of output. And I started
> getting messages on the console, saying things like (retyp
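For reference, on a hot-plug path that worked one would expect zpool status
to return promptly and simply mark the pulled disk, along these lines (the
output is a sketch; the pool name is hypothetical, c3d0 is from the
description above):

    pool: tank
   state: DEGRADED
  config:
          NAME        STATE     READ WRITE CKSUM
          tank        DEGRADED     0     0     0
            mirror    DEGRADED     0     0     0
              c2d0    ONLINE       0     0     0
              c3d0    UNAVAIL      0     0     0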
Anton B. Rang wrote:
JBOD probably isn't dead, simply because motherboard manufacturers are unlikely to pay
the extra $10 it might cost to use a RAID-enabled chip rather than a plain chip (and
the cost is more if you add cache RAM); but basic RAID is at least cheap.
NVidia MCPs (later NForce
The better SATA RAID cards have hardware support. One site comparing
controllers is:
http://tweakers.net/reviews/557
Five of the eight controllers they looked at implemented RAID in hardware; one
of the others implemented only the XOR in hardware. Chips like the Adaptec
AIC-8210 implement m
It sounds like the SATA (or SD) driver might be overambitious at retrying
operations. It seems to me, coming from the SCSI world, that a "select failed"
really ought to be a pretty strong indication that the device is gone; but
perhaps SATA acts quite differently.
As for these messages, they ou
My first real-hardware Solaris install. I've installed S10 u2 on a
system with an Asus M2n-SLI Deluxe nForce 570-SLI motherboard, Athlon
64 X2 dual core CPU. It's in a Chenbro SR107 case with two Chenbro
4-drive SATA hot-swap bays.
C1D0 is in the first hot-swap bay, and is the boot drive (an 80
Ed Gould wrote:
On Sep 8, 2006, at 11:35, Torrey McMahon wrote:
If I read between the lines here I think you're saying that the raid
functionality is in the chipset but the management can only be done by
software running on the outside. (Right?)
No. All that's in the chipset is enough to read a RAID volume
> Dunno about eSATA jbods, but eSATA host ports have
> appeared on at least two HDTV-capable DVRs for storage
> expansion (looks like one model of the Scientific Atlanta
> cable box DVRs as well as on the shipping-any-day-now
> Tivo Series 3).
>
> It's strange that they didn't go with firewire
On Sep 8, 2006, at 14:22, Ed Gould wrote:
On Sep 8, 2006, at 9:33, Richard Elling - PAE wrote:
I was looking for a new AM2 socket motherboard a few weeks ago.
All of the ones
I looked at had 2xIDE and 4xSATA with onboard (SATA) RAID. All
were less than $150.
In other words, the days of having a JBOD-only solution are over except for
On Sep 8, 2006, at 11:35, Torrey McMahon wrote:
If I read between the lines here I think you're saying that the raid
functionality is in the chipset but the management can only be done by
software running on the outside. (Right?)
No. All that's in the chipset is enough to read a RAID volume f
Ed Gould wrote:
On Sep 8, 2006, at 9:33, Richard Elling - PAE wrote:
I was looking for a new AM2 socket motherboard a few weeks ago. All
of the ones
I looked at had 2xIDE and 4xSATA with onboard (SATA) RAID. All were
less than $150.
In other words, the days of having a JBOD-only solution are over except for
On Sep 8, 2006, at 9:33, Richard Elling - PAE wrote:
I was looking for a new AM2 socket motherboard a few weeks ago. All
of the ones
I looked at had 2xIDE and 4xSATA with onboard (SATA) RAID. All were
less than $150.
In other words, the days of having a JBOD-only solution are over
except for
On Fri, 2006-09-08 at 09:33 -0700, Richard Elling - PAE wrote:
> There has been some recent discussion about eSATA JBODs in the press. I'm not
> sure they will gain much market share. iPods and flash drives have a much
> larger market share.
Dunno about eSATA jbods, but eSATA host ports have appeared on at least
two HDTV-capable DVRs for storage expansion (looks like one model of the
Scientific Atlanta cable box DVRs as well as on the shipping-any-day-now
Tivo Series 3).
Josip Gracin wrote:
Hello!
Could somebody please explain the following bad performance of a machine
running ZFS. I have a feeling it has something to do with the way ZFS uses
memory, because I've checked with ::kmastat and it shows that ZFS uses
huge amounts of memory, which I think is killing the performance of the
[EMAIL PROTECTED] wrote:
I don't quite see this in my crystal ball. Rather, I see all of the SAS/SATA
chipset vendors putting RAID in the chipset. Basically, you can't get a
"dumb" interface anymore, except for fibre channel :-). In other words, if
we were to design a system in a chassis with
On 09/08/06 15:20, Mark Maybee wrote:
Gavin,
Please file a bug on this.
I filed 6468748. I'll attach the core now.
Cheers
Gavin
Gavin,
Please file a bug on this.
Thanks,
-Mark
Gavin Maltby wrote:
Hi,
My desktop panicked last night during a zfs receive operation. This
is a dual Opteron system running snv_47 and bfu'd to DEBUG project bits that
are in sync with the onnv gate as of two days ago. The project bits
are for Opteron FMA and don't appear at all active in the panic.
>I believe that add_install_client [with a -b option?] is what is
>creating my vfstab entries. I haven't had reboot issues until
>overnight (a system move), and I have been doing PXE boot of some x64
>systems only recently, i.e. since the most recent power failure.
>
>Install images are being
I've just hit the same issue. It is tracked in Bug 6418732.
Regards,
Victor
Thomas Wagner wrote:
Steffen,
I have the same issue with my home install server. As a dirty solution I
set mount-at-boot to "no" for the lofs filesystems, to get the system up.
But with every new OS added by JET the mount at reboot reappears.
[EMAIL PROTECTED] wrote on 09/08/06 09:06:
I have the same issue with my home install server. As a dirty solution I
set mount-at-boot to "no" for the lofs filesystems, to get the system up.
But with every new OS added by JET the mount at reboot reappears.
It seems to me the question is "when should a lofs filesystem be mounted at boot?"
rbourbon writes:
> I don't think that was the point of the post. I've read
> it to mean that some customers, for considerations outside
> of ZFS, have some need to use storage arrays in ways
> that may not allow ZFS to develop its full potential.
I've been following this thread because we ha
>I have the same issue with my home install server. As a dirty solution I
>set mount-at-boot to "no" for the lofs filesystems, to get the system up.
>But with every new OS added by JET the mount at reboot reappears.
>
>It seems to me the question is "when should a lofs filesystem be mounted at boot?"
When
Nicolas Dorfsman wrote:
Hi,
I'm currently doing some tests on an SF15K domain with Solaris 10
installed.
The goal is to convince my customer to use Solaris 10 for this domain AND
to establish a list of recommendations.
The ZFS scope is really an issue for me.
For now, I'm waiting for fresh
Steffen,
I have the same issue with my home install server. As a dirty solution I
set mount-at-boot to "no" for the lofs filesystems, to get the system up.
But with every new OS added by JET the mount at reboot reappears.
It seems to me the question is "when should a lofs filesystem be mounted at boot?"
Wh
Hi,
I'm currently doing some tests on an SF15K domain with Solaris 10
installed.
The goal is to convince my customer to use Solaris 10 for this domain AND
to establish a list of recommendations.
The ZFS scope is really an issue for me.
For now, I'm waiting for fresh
I have a jumpstart server where the install images are on a ZFS pool.
For PXE boot, several lofs mounts are created and configured in
/etc/vfstab. My system does not boot properly anymore because the
mounts referring to jumpstart files haven't been mounted yet via ZFS.
What is the best way of w
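For readers who haven't seen this setup: the entries in question look
roughly like the following in /etc/vfstab (paths are hypothetical), and the
workaround mentioned above amounts to setting the mount-at-boot field to "no":

  #device to mount         device to fsck  mount point        FS type  fsck pass  mount at boot  mount options
  /export/install/sol10u2  -               /tftpboot/sol10u2  lofs     -          no             -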
On Fri, 8 Sep 2006, Jim Sloey wrote:
> > Roch - PAE wrote:
> > The hard part is getting a set of simple requirements. As you go into
> > more complex data center environments you get hit with older Solaris
> > revs, other OSs, SOX compliance issues, etc. etc. etc. The world where
> > most of us seem to be playing with ZFS is on the lower end
Hi,
My desktop panicked last night during a zfs receive operation. This
is a dual Opteron system running snv_47 and bfu'd to DEBUG project bits that
are in sync with the onnv gate as of two days ago. The project bits
are for Opteron FMA and don't appear at all active in the panic.
I'll log a bug
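As a sketch, a first look at a panic like this usually starts from the saved
crash dump (paths assume the default savecore location):

  # cd /var/crash/`hostname`
  # mdb -k unix.0 vmcore.0
  > ::status          # panic message and dump details
  > $C                # stack trace of the panicking thread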
Jim Sloey writes:
> > Roch - PAE wrote:
> > The hard part is getting a set of simple requirements. As you go into
> > more complex data center environments you get hit with older Solaris
> > revs, other OSs, SOX compliance issues, etc. etc. etc. The world where
> > most of us seem to be playing with ZFS is on the lower end
zfs "hogs all the ram" under a sustained heavy write load. This is
being tracked by:
6429205 each zpool needs to monitor it's throughput and throttle heavy
writers
-r
Hello James,
Thursday, September 7, 2006, 8:58:10 PM, you wrote:
JD> with ZFS I have found that memory is a much greater limitation; even
JD> my dual 300MHz U2 has no problem filling 2x 20MB/s SCSI channels, even
JD> with compression enabled, using raidz and 10k rpm 9GB drives, thanks
JD> to its
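For reference, a minimal sketch of that kind of setup (pool and disk names
are hypothetical):

  # zpool create tank raidz c0t1d0 c0t2d0 c0t3d0 c0t4d0
  # zfs set compression=on tank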
On Fri, Sep 08, 2006 at 09:41:58AM +0100, Darren J Moffat wrote:
> [EMAIL PROTECTED] wrote:
> >Richard, when I talk about cheap JBODs I'm thinking of home users/small
> >servers/small companies. I guess you could sell 100 X4500s and at the same
> >time 1000 (or even more) cheap JBODs to the small companies
Jim,
how can we find out whether your suspicion wrt sd is correct? How about other
drivers (are others relevant?)
I have a customer who's interested in this feature.
Thx
Michael
[EMAIL PROTECTED] wrote:
Richard, when I talk about cheap JBODs I'm thinking of home users/small
servers/small companies. I guess you could sell 100 X4500s and at the same
time 1000 (or even more) cheap JBODs to the small companies, which for sure
will not buy the big boxes. Yes, I know, you earn more s
Hello!
Could somebody please explain the following bad performance of a machine
running ZFS. I have a feeling it has something to do with the way ZFS uses
memory, because I've checked with ::kmastat and it shows that ZFS uses
huge amounts of memory, which I think is killing the performance of the
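For anyone wanting to reproduce the observation, a sketch (run as root;
exact cache names vary by release):

  # echo "::memstat" | mdb -k             # overall kernel memory breakdown
  # echo "::kmastat" | mdb -k | grep zio  # the zio_buf_* caches ZFS allocates from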
Torrey McMahon writes:
> Nicolas Dorfsman wrote:
> >> The hard part is getting a set of simple requirements. As you go into
> >> more complex data center environments you get hit with older Solaris
> >> revs, other OSs, SOX compliance issues, etc. etc. etc. The world where
> Roch - PAE wrote:
> The hard part is getting a set of simple requirements. As you go into
> more complex data center environments you get hit with older Solaris
> revs, other OSs, SOX compliance issues, etc. etc. etc. The world where
> most of us seem to be playing with ZFS is on the lower end
On Thu, Sep 07, 2006 at 12:14:20PM -0700, Richard Elling - PAE wrote:
> [EMAIL PROTECTED] wrote:
> >This is the case where I don't understand Sun's policy at all: Sun
> >doesn't offer a really cheap JBOD that can be bought just for ZFS. And
> >don't even tell me about 3310/3320 JBODs - they are ho