I would get a new 1.5 TB drive, make sure it has the new firmware, and replace
c6t3d0 right away - even if someone here comes up with a magic solution, you
don't want to wait for another drive to fail.
http://hardware.slashdot.org/article.pl?sid=09/01/17/0115207
http://techreport.com/discussions.x/1
Hi Jeffrey,
jeffrey huang wrote:
> Hi, Jan,
>
> After successfully installing AI on SPARC (zpool/zfs created), I want to try
> an installation again without rebooting, so I want to destroy the rpool.
>
> # dumpadm -d swap --> ok
> # zfs destroy rpool/dump --> ok
> # swap -l
> # swap -d /dev/zvol/dsk/rpool/s
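For reference, a rough sketch of the remaining teardown steps; the swap
volume name rpool/swap is a guess, since the line above is cut off:

# swap -d /dev/zvol/dsk/rpool/swap   (remove the zvol-backed swap device)
# zpool destroy -f rpool             (the pool can then be destroyed)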
What does this mean? Does that mean that ZFS + HW raid with raid-5 is not able
to heal corrupted blocks? Then this is evidence against ZFS + HW raid, and you
should only use ZFS?
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
"ZFS works well with storage based protecte
I would in this case also immediately export the pool (to prevent any
write attempts) and see about a firmware update for the failed drive
(probably need windows for this).
Sent from my iPhone
On Jan 20, 2009, at 3:22 AM, zfs user wrote:
> I would get a new 1.5 TB and make sure it has the n
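Roughly, assuming a pool named tank and the failed drive c6t3d0 from the
message above:

# zpool export tank    (no further writes can hit the pool)
  ... update the drive firmware from another OS ...
# zpool import tank    (bring the pool back once the drive checks out)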
Hi,
I'm completely new to Solaris, but have managed to bumble through installing it
to a single disk, creating an additional 3 disk RAIDZ array and then copying
over data from a separate NTFS formatted disk onto the array using NTFS-3G.
However, the single disk that was used for the OS installa
On Mon, Jan 19, 2009 at 5:39 PM, Adam Leventhal wrote:
> > And again, I say take a look at the market today, figure out a
> percentage,
> > and call it done. I don't think you'll find a lot of users crying foul
> over
> > losing 1% of their drive space when they don't already cry foul over the
>
Luke,
You're looking for a `zpool list`, followed by a `zpool import` after
Solaris has correctly recognised the attachment of the three original
disks (i.e. they appear in `format` and/or `cfgadm -al`).
Complete docs here, now you know what you are looking for ...
http://opensolaris.org/o
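Something like the following, where 'tank' stands in for whatever your pool
was actually named:

# zpool import         (with no arguments: lists pools available for import)
# zpool import tank    (import the pool once its disks are visible)
# zpool list           (confirm the pool is back)
# zpool status tank    (check that all three RAIDZ disks are ONLINE)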
Luke Scammell wrote:
> Hi,
>
> I'm completely new to Solaris, but have managed to bumble through installing
> it to a single disk, creating an additional 3 disk RAIDZ array and then
> copying over data from a separate NTFS formatted disk onto the array using
> NTFS-3G.
>
> However, the single
I think maybe it means that if ZFS can't 'see' the block (the
controller does that in HW RAID), it can't checksum said block.
cheers,
Blake
On Tue, Jan 20, 2009 at 6:34 AM, Orvar Korvar
wrote:
> What does this mean? Does that mean that ZFS + HW raid with raid-5 is not
> able to heal corrupted b
Nobody can comment on this?
-Brian
Brian H. Nelson wrote:
> I noticed this issue yesterday when I first started playing around with
> zfs send/recv. This is on Solaris 10U6.
>
> It seems that a zfs send of a zvol issues 'volblocksize' reads to the
> physical devices. This doesn't make any sens
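For anyone who wants to reproduce the observation, a rough recipe (pool and
volume names are made up):

# zfs create -V 1g -o volblocksize=8k tank/vol8k
# zfs snapshot tank/vol8k@test
# zfs send tank/vol8k@test > /dev/null &
# iostat -xn 1         (watch the average read size on the pool's disks)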
Good observations, Eric, more below...
Eric D. Mudama wrote:
> On Mon, Jan 19 at 23:14, Greg Mason wrote:
>> So, what we're looking for is a way to improve performance, without
>> disabling the ZIL, as it's my understanding that disabling the ZIL
>> isn't exactly a safe thing to do.
>>
>> We'r
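The usual middle ground in these threads is a separate intent log device
rather than disabling the ZIL outright; a sketch, with device names invented:

# zpool add tank log c4t0d0                  (dedicated slog, e.g. an SSD)
# zpool add tank log mirror c4t0d0 c4t1d0    (or mirrored, to protect the log)
# zpool status tank                          (the device appears under 'logs')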
> > Ross wrote:
> >> The problem is they might publish these numbers, but we
> really have
> >> no way of controlling what number manufacturers will
> choose to use
> >> in the future.
> >>
> >> If for some reason future 500GB drives all turn out to be slightly
> >> smaller than the current
Brian H. Nelson wrote:
> Nobody can comment on this?
>
> -Brian
>
>
> Brian H. Nelson wrote:
>> I noticed this issue yesterday when I first started playing around with
>> zfs send/recv. This is on Solaris 10U6.
>>
>> It seems that a zfs send of a zvol issues 'volblocksize' reads to the
>> phys
I see http://opensolaris.org/os/community/zfs/docs/ondiskformat0822.pdf
as a pretty outdated (3 years old) document. Is there any plan to update
it?
Maybe somebody could update it every time a new ZFS pool version is
available?
--
Jesus Cea Avion
Orvar Korvar wrote:
> What does this mean? Does that mean that ZFS + HW raid with raid-5 is not
> able to heal corrupted blocks? Then this is evidence against ZFS + HW raid,
> and you should only use ZFS?
>
> http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
>
> "ZFS works
Any recommendations for an SSD to work with an X4500 server? Will the SSDs
used in the 7000 series servers work with X4500s or X4540s?
On 1/20/2009 1:14 PM, Richard Elling wrote:
> Orvar Korvar wrote:
>
>> What does this mean? Does that mean that ZFS + HW raid with raid-5 is not
>> able to heal corrupted blocks? Then this is evidence against ZFS + HW raid,
>> and you should only use ZFS?
>>
>> http://www.solarisinternals.com
> "mj" == Moore, Joe writes:
mj> For a ZFS pool, (until block pointer rewrite capability) this
mj> would have to be a pool-create-time parameter.
naw. You can just make ZFS do it all the time, like the other storage
vendors do. no parameters.
You can invent parameter-free ways of
Nicolas Williams wrote:
> I'd recommend waiting for ZFS crypto rather than using lofi with ZFS.
Wait... for how long? Any schedule?
I am very interested in ZFS Crypto, although I have lost hope of seeing it
in Solaris 10.
--
Jesus Cea Avion
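For what it's worth, the lofi-based interim approach being discussed looks
roughly like this (file path and pool name are invented, and whether it is
advisable is exactly the question above):

# mkfile 1g /export/secure.img
# lofiadm -c aes-256-cbc -a /export/secure.img   (prompts for a passphrase)
# zpool create cryptpool /dev/lofi/1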
So ZFS is not hindered at all, if you use it in conjunction with HW raid? ZFS
can utilize all functionality and "heal corrupted blocks" without problems -
with HW raid?
Probably Richard Elling's blog:
http://blogs.sun.com/relling/entry/zfs_raid_recommendations_space_performance
Miles Nordin wrote:
> > "mj" == Moore, Joe writes:
>
> mj> For a ZFS pool, (until block pointer rewrite capability) this
> mj> would have to be a pool-create-time parameter.
>
> naw. You can just make ZFS do it all the time, like the other storage
> vendors do. no parameters.
Ot
d...@yahoo.com said:
> Any recommendations for an SSD to work with an X4500 server? Will the SSDs
> used in the 7000 series servers work with X4500s or X4540s?
The Sun System Handbook (sunsolve.sun.com) for the 7210 appliance (an
X4540-based system) lists the "logzilla" device with this fine pri
[I hate to keep dragging this thread forward, but...]
Moore, Joe wrote:
> And there is no way to change this after the pool has been created,
> since after that time, the disk size can't be changed. So whatever
> policy is used by default, it is very important to get it right.
Today, vdev size c
I have been testing the 32 GB X25-E over the last week.
When I connect it to one of the onboard (Tyan 2925) SATA ports, it's not
detected by OpenSolaris 2008.11.
When I connect it to a PCIe LSI 3081, the disk is found, but I run into
trouble when I run performance tests via filebench.
Filebench expor
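In case it helps narrow things down, a minimal filebench run against the
X25-E might look like this (pool name and workload are just examples):

# zpool create ssdpool c3t0d0
# filebench
filebench> load randomwrite
filebench> set $dir=/ssdpool
filebench> run 60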
> "jm" == Moore, Joe writes:
jm> Sysadmins should not be required to RTFS.
I never said they were. The comparison was between hardware RAID and
ZFS, not between two ZFS alternatives. The point: other systems'
behavior is entirely secret. Therefore, secret opaque undiscussed
right-sizin
I have configured a test system with a mirrored rpool and one hot spare. I
powered the system off and pulled one of the disks from rpool to simulate a
hardware failure.
The hot spare is not activating automatically. Is there something more I
should have done to make this work?
pool: rpool
sta
On Tue, 20 Jan 2009 12:13:00 PST, Orvar Korvar
wrote:
> So ZFS is not hindered at all, if you use it in conjunction
> with HW raid? ZFS can utilize all functionality
> and "heal corrupted blocks" without problems - with HW raid?
Only if you build the zpool from a mirror where each side of
the mi
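In other words, something along these lines, where each device handed to ZFS
is itself a LUN exported by the hardware RAID (device names invented):

# zpool create tank mirror c3t0d0 c4t0d0
  (ZFS-level redundancy: a corrupted block can be repaired from the other
   half of the mirror)
# zpool create tank c3t0d0
  (a single HW LUN: ZFS detects corruption via checksums but has no second
   copy to repair from, unless copies=2 is set)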
An interesting interpretation of using hot spares.
Could it be that the hot-spare code only fires if the disk goes down
whilst the pool is active?
hm.
Nathan.
Scot Ballard wrote:
> I have configured a test system with a mirrored rpool and one hot spare.
> I powered the systems off, pulled on
What software are you running? There was a bug where offline device
failure did not trigger hot spares, but that should be fixed now (at
least in OpenSolaris, not sure about s10u6).
- Eric
On Wed, Jan 21, 2009 at 09:57:42AM +1100, Nathan Kroenert wrote:
> An interesting interpretation of using h
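Until then, the spare can be attached by hand; a sketch, with made-up device
names:

# zpool status rpool                  (confirm which disk is UNAVAIL/FAULTED)
# zpool replace rpool c1t1d0 c1t2d0   (pull in the spare c1t2d0 manually)
# zpool set autoreplace=on rpool      (optional: auto-replace a disk returned
                                       to the same slot in future)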
On Tue, Jan 20, 2009 at 2:26 PM, Moore, Joe wrote:
>
> Other storage vendors have specific compatibility requirements for the
> disks you are "allowed" to install in their chassis.
>
And again, the reason for those requirements is 99% about making money, not
a technical one. If you go back far
>The user DEFINITELY isn't expecting 5 x 10^11 bytes, or what you meant to
>say, 500,000,000,000 bytes; they're expecting 500GB. You know,
>536,870,912,000 bytes. But even if the drive mfg's calculated it correctly,
>they wouldn't even be getting that due to filesystem overhead.
I doubt there
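To put numbers on the disagreement:

500 GB as marketed (decimal)  = 500 x 10^9 = 500,000,000,000 bytes
500 GB as expected (binary)   = 500 x 2^30 = 536,870,912,000 bytes
difference                    = 36,870,912,000 bytes, roughly 7.4%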
On Tue, Jan 20 at 9:04, Richard Elling wrote:
>
> Yes. And I think there are many more use cases which are not
> yet characterized. What we do know is that using an SSD for
> the separate ZIL log works very well for a large number of cases.
> It is not clear to me that the efforts to characteriz
Sigh. Richard points out in private email that automatic savecore functionality
is disabled in OpenSolaris; you need to manually set up a dump device and save
core files if you want them. However, the stack may be sufficient to ID the bug.
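A rough sketch of setting that up by hand (the 2 GB size is arbitrary):

# zfs create -V 2g rpool/dump
# dumpadm -d /dev/zvol/dsk/rpool/dump   (use the zvol as the dump device)
# dumpadm -s /var/crash                 (directory savecore writes into)
# dumpadm -y                            (run savecore automatically on reboot)
# savecore -L                           (or capture a live image right now)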
On Tue, Jan 20 at 21:35, Eric D. Mudama wrote:
> On Tue, Jan 20 at 9:04, Richard Elling wrote:
>>
>> Yes. And I think there are many more use cases which are not
>> yet characterized. What we do know is that using an SSD for
>> the separate ZIL log works very well for a large number of cases.
>>
So you're suggesting I buy 750s to replace the 500s, and then if a 750 fails,
buy another, bigger drive again?
The drives are RMA replacements for the other disks that faulted in the array
before. They are the same brand, model and model number - apparently not so
under the label though, but no way I