>> >
>> Bit of a chicken and egg that, isn't it?
>>
>> You need to run the tool to see if the board's worth buying and you need
>> to buy the board to run the tool!
>>
>
> *Somebody* has to be that first early adopter. After that, we all get
> to ride on their experience.
I am sure the Tier-1 st
Miles Nordin wrote:
"bh" == Brandon High writes:
bh> From what I've read, the Hitachi and Samsung drives both
bh> support CCTL, which is in the ATA-8 spec. There's no way to
bh> toggle it on from OpenSolaris (yet) and it doesn't persist
bh> through reboot so it's not really ideal.
On Thu, 2010-05-13 at 13:25 +1200, Ian Collins wrote:
> On 05/13/10 12:46 PM, Erik Trimble wrote:
> > I've gotten a couple of the newest prototype AMD systems, with the C34
> > and G34 sockets. All have run various flavors of OpenSolaris quite
> > well, with the exception of a couple of flaky netw
On 05/13/10 12:46 PM, Erik Trimble wrote:
I've gotten a couple of the newest prototype AMD systems, with the C34
and G34 sockets. All have run various flavors of OpenSolaris quite
well, with the exception of a couple of flaky network problems, which
we've tracked down to pre-production NIC hardw
I've gotten a couple of the newest prototype AMD systems, with the C34
and G34 sockets. All have run various flavors of OpenSolaris quite
well, with the exception of a couple of flaky network problems, which
we've tracked down to pre-production NIC hardware and early-access
drivers. This is a sim
On 12/05/10 11:21 PM, Thomas Burgess wrote:
Now wait just a minute. You're casting aspersions on
stuff here without saying what you're talking about,
still less where you're getting your info from.
Be specific - put up, or shut up.
I think he was just trying to tell me that m
On May 11, 2010, at 10:17 PM, schickb wrote:
> I'm looking for input on building an HA configuration for ZFS. I've read the
> FAQ and understand that the standard approach is to have a standby system
> with access to a shared pool that is imported during a failover.
>
> The problem is that we u
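For readers skimming the thread: the "standard approach" being referred to is a cold-standby import, and the switchover itself is just an export on the surviving node (when it is still reachable) and a forced import on the standby. A minimal sketch, with a made-up pool name:

    # On the active node, if it is still alive, release the pool cleanly:
    zpool export sharedpool

    # On the standby node, take the pool over; -f is needed if the active
    # node died without exporting (the pool still looks "in use"):
    zpool import -f sharedpool

    # Then restart whatever consumes the datasets (NFS shares, iSCSI LUs, etc.)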
On May 12, 2010, at 3:06 PM, Manoj Joseph
wrote:
Ross Walker wrote:
On May 12, 2010, at 1:17 AM, schickb wrote:
I'm looking for input on building an HA configuration for ZFS. I've
read the FAQ and understand that the standard approach is to have a
standby system with access to a shared po
On 05/13/10 08:55 AM, Jens Elkner wrote:
On Wed, May 12, 2010 at 09:34:28AM -0700, Doug wrote:
We have a 2006 Sun X4500 with Hitachi 500G disk drives. It's been running for over
four years and just now fmadm & zpool report a disk has failed. No data was
lost (RAIDZ2 + hot spares worked a
On 05/13/10 03:27 AM, Lori Alt wrote:
On 05/12/10 04:29 AM, Ian Collins wrote:
I just tried moving a dump volume from rpool into another pool so I
used zfs send/receive to copy the volume (to keep some older dumps)
then ran dumpadm -d to use the new location. This caused a panic.
Nothing end
> "bh" == Brandon High writes:
bh> From what I've read, the Hitachi and Samsung drives both
bh> support CCTL, which is in the ATA-8 spec. There's no way to
bh> toggle it on from OpenSolaris (yet) and it doesn't persist
bh> through reboot so it's not really ideal.
bh> Here
On Wed, May 12, 2010 at 09:34:28AM -0700, Doug wrote:
> We have a 2006 Sun X4500 with Hitachi 500G disk drives. It's been running for
> over four years and just now fmadm & zpool report a disk has failed. No
> data was lost (RAIDZ2 + hot spares worked as expected.) But, the server is
> out of
> "eg" == Emily Grettel writes:
eg> What do people already use on their enterprise level NAS's?
For a SOHO NAS similar to the one you are running, I mix manufacturer
types within a redundancy set so that a model-wide manufacturing or
firmware glitch like the ones of which we've had sever
> "bh" == Brandon High writes:
bh> If you boot from usb and move your rpool from one port to
bh> another, you can't boot. If you plug your boot sata drive into
bh> a different port on the motherboard, you can't
bh> boot. Apparently if you are missing a device from your rpool
> "jcm" == James C McPherson writes:
>> storage controllers are more difficult for driver support.
jcm> Be specific - put up, or shut up.
marvell controller hangs machine when a drive is unplugged
marvell controller does not support NCQ
marvell driver is closed-source blob
sil3124
schickb wrote:
> I'm looking for input on building an HA configuration for ZFS. I've
> read the FAQ and understand that the standard approach is to have a
> standby system with access to a shared pool that is imported during a
> failover.
>
> The problem is that we use ZFS for a specialized purpos
Ross Walker wrote:
> On May 12, 2010, at 1:17 AM, schickb wrote:
>
>> I'm looking for input on building an HA configuration for ZFS. I've
>> read the FAQ and understand that the standard approach is to have a
>> standby system with access to a shared pool that is imported during
>> a failov
- "Doug" skrev:
> Does anyone have any bad experiences replacing a disk on an X4500 with
> a non-Sun Hitachi? The hdadm tool reports the write cache is enabled
> on all the disks. Is there any customized firmware on the Sun disks
> that makes them safer for using the write cache?
Compared to w
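For what it's worth, the write-cache state can also be checked per disk from the format(1M) expert menu, independent of hdadm. A rough sketch from memory; the disk-selection prompt is omitted and the menu wording may differ slightly between releases:

    format -e              # expert mode exposes the "cache" menu
    format> cache
    cache> write_cache
    write_cache> display   # reports whether the drive's write cache is enabled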
On Wed, May 12 at 8:45, Freddie Cash wrote:
On Wed, May 12, 2010 at 4:05 AM, Emily Grettel
<emilygrettelis...@hotmail.com> wrote:
Hello,
I've decided to replace my WD10EADS and WD10EARS drives as I've checked
the SMART values and they've accrued some insanely high numbe
On Wed, May 12, 2010 at 4:05 AM, Emily Grettel
wrote:
> I've decided to replace my WD10EADS and WD10EARS drives as I've checked the
> SMART values and they've accrued some insanely high numbers for the
> load/unload counts (40K+ in 120 days on one!).
Running WDIDLE.EXE on the drives as soon as yo
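For anyone checking their own Greens before deciding: the counter in question is the SMART Load_Cycle_Count attribute, and the head-parking (idle3) timer is what WDIDLE changes. A hedged sketch; the device path is an example, the -d option depends on your controller, and the WDIDLE switches are from memory of the DOS utility, so check its /? output first:

    # Read the load/unload counter with smartmontools:
    smartctl -a -d sat /dev/rdsk/c7t2d0s0 | grep -i load_cycle

    # From a DOS boot disk, report and then relax the head-park timer:
    WDIDLE3 /R       # report the current timer
    WDIDLE3 /S300    # set it to 300 seconds (or /D to disable, if supported)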
On Wed, May 12, 2010 at 4:05 AM, Emily Grettel
wrote:
>
> I'm wondering what other people are using, even though the Green series has
> let me down, I'm still a Western Digital gal.
>
> What do people already use on their enterprise level NAS's? Any good Seagates?
>
FWIW, I looked at the WDs, bu
We have a 2006 Sun X4500 with Hitachi 500G disk drives. It's been running for
over four years and just now fmadm & zpool report a disk has failed. No data
was lost (RAIDZ2 + hot spares worked as expected.) But, the server is out of
warranty and we have no hardware support on it.
I found the
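For anyone in the same out-of-warranty spot, the usual disk-swap dance on a thumper is roughly the following; the slot, attachment point, and device names are invented and will differ on your chassis:

    fmadm faulty                     # identify the faulted disk and its FRU label
    zpool status -x                  # confirm which device failed and that the spare resilvered
    cfgadm -c unconfigure sata1/3    # offline the slot before pulling the dead drive
    # ...physically swap the drive...
    cfgadm -c configure sata1/3
    zpool replace tank c1t3d0        # resilver onto the new disk; the hot spare
                                     # detaches back to the spares list when done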
>From: James C. McPherson [mailto:james.mcpher...@oracle.com]
>Sent: Wednesday, May 12, 2010 2:28 AM
>
>On 12/05/10 03:18 PM, Geoff Nordli wrote:
>
>> I have been wondering what the compatibility is like on OpenSolaris.
>> My perception is basic network driver support is decent, but storage
>
On Tue, 11 May 2010, A Darren Dunham wrote:
In the pre-ZFS world I would have suggested unmounting the filesystem
between runs. With ZFS, I doubt that is sufficient. I would suppose a
zpool export/import might be enough, but I'd want to test that as well.
Evidence suggests that unmounting th
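For the record, the export/import being discussed is just (pool name made up):

    zpool export testpool    # tear the pool down between benchmark runs
    zpool import testpool    # bring it back before the next run
    # The open question in the thread is whether this reliably drops
    # everything the ARC has cached for that pool.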
On Wed, May 12, 2010 at 4:05 AM, Emily Grettel <
emilygrettelis...@hotmail.com> wrote:
> Hello,
>
> I've decided to replace my WD10EADS and WD10EARS drives as I've checked the
> SMART values and they've accrued some insanely high numbers for the
> load/unload counts (40K+ in 120 days on one!).
>
On 05/12/10 04:29 AM, Ian Collins wrote:
I just tried moving a dump volume from rpool into another pool so I
used zfs send/receive to copy the volume (to keep some older dumps)
then ran dumpadm -d to use the new location. This caused a panic.
Nothing ended up in messages and needless to say,
Hello,
probably a lot of people have done this, now it's my time. I wanted to test the
performance of COMSTAR over 8 Gb FC. My idea was to create a pool from a
ramdisk, a thin-provisioned zvol over it, and do some benchmarks.
However, performance is worse than to the disk backend. So I measured
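For anyone wanting to reproduce the setup being described, a minimal sketch; the sizes, pool/zvol names, and the LU GUID are placeholders:

    ramdiskadm -a rdtest 4g                        # 4 GB ramdisk as the backing store
    zpool create ramtank /dev/ramdisk/rdtest
    zfs create -s -V 3g ramtank/lun0               # sparse (thin-provisioned) zvol
    stmfadm create-lu /dev/zvol/rdsk/ramtank/lun0  # prints the LU GUID
    stmfadm add-view <lu-guid>                     # expose the LU (all hosts/targets here)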
The problem is the Solaris team and LSI have put a lot of work into the new
2008 cards. Claiming there are issues without listing specific bugs they can
address is, I'm sure, frustrating to say the least.
On May 12, 2010 8:22 AM, "Thomas Burgess" wrote:
>>
>
> Now wait just a minute. You're cast
On May 12, 2010, at 1:17 AM, schickb wrote:
I'm looking for input on building an HA configuration for ZFS. I've
read the FAQ and understand that the standard approach is to have a
standby system with access to a shared pool that is imported during
a failover.
The problem is that we use Z
Hello,
Is there actually any difference in creating an LU with sbdadm create-lu and
stmfadm create-lu? Are there any special cases in which one should be used
over the other? I'm thinking here, for COMSTAR: should I use sbdadm or stmfadm
for creating LUs for my zvols?
Thanks
--
ing. Vadim Comanesc
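For concreteness, both invocations on a zvol look like this (pool/volume names are examples). My understanding is that either way you end up with an sbd-backed LU that still needs a view before an initiator can see it, with stmfadm create-lu simply being the newer, integrated entry point:

    # Older utility:
    sbdadm create-lu /dev/zvol/rdsk/tank/vol0

    # Newer, integrated utility (also prints the LU GUID):
    stmfadm create-lu /dev/zvol/rdsk/tank/vol0
    stmfadm add-view <lu-guid>     # required either way before the LU is visible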
> Now wait just a minute. You're casting aspersions on
> stuff here without saying what you're talking about,
> still less where you're getting your info from.
>
> Be specific - put up, or shut up.
I think he was just trying to tell me that my cpu should be fine, that the
only thing whic
On Wed, May 12, 2010 at 09:05:14PM +1000, Emily Grettel wrote:
>
> Hello,
>
> I've decided to replace my WD10EADS and WD10EARS drives as I've checked the
> SMART values and they've accrued some insanely high numbers for the
> load/unload counts (40K+ in 120 days on one!).
>
> I w
Thank you very much.
I now know that I am not crazy, surprised but not crazy :-).
On 12/05/10 03:18 PM, Geoff Nordli wrote:
I have been wondering what the compatibility is like on OpenSolaris. My
perception is basic network driver support is decent, but storage
controllers are more difficult for driver support.
Now wait just a minute. You're casting aspersions on
stuff
This is how I understand it.
I know the network cards are well supported and I know my storage cards are
supported... the onboard SATA may work and it may not. If it does, great,
I'll use it for booting; if not, this board has 2 onboard bootable USB
sticks... luckily USB seems to work regardless
Hello,
I've decided to replace my WD10EADS and WD10EARS drives as I've checked the
SMART values and they've accrued some insanely high numbers for the load/unload
counts (40K+ in 120 days on one!).
I was leaning towards the Black drives but now I'm a bit worried about the TLER
lacking ne
I just tried moving a dump volume from rpool into another pool so I used
zfs send/receive to copy the volume (to keep some older dumps) then ran
dumpadm -d to use the new location. This caused a panic. Nothing ended
up in messages and needless to say, there isn't a dump!
Creating a new volum
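For reference, the sequence described above is roughly the following (snapshot, pool, and volume names assumed); it is the dumpadm step that triggered the panic:

    zfs snapshot rpool/dump@move
    zfs send rpool/dump@move | zfs recv tank/dump    # copy the dump zvol, keeping the old dumps
    dumpadm -d /dev/zvol/dsk/tank/dump               # point the dump device at the new zvol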