On Sat, Dec 12, 2009 at 11:39 AM, Brent Jones wrote:
> On Sat, Dec 12, 2009 at 7:55 AM, Bob Friesenhahn
> wrote:
>> On Sat, 12 Dec 2009, Brent Jones wrote:
>>
>>> I've noticed some extreme performance penalties simply by using snv_128
>>
>> Does the 'zpool scrub' rate seem similar to before? Do you notice any read performance problems?
My system was pingable again; unfortunately I had disabled all services such as
ssh. My console was still hung, but I was wondering if I had hung USB hardware
(since I use a USB keyboard and everything had been hung for days).
I force-rebooted and the pool was not imported :(. I started the process off
Most manufacturers have a utility available that sets this behavior.
For WD drives, it's called WDTLER.EXE. You have to make a bootable USB stick to
run the app, but it is simple to change the setting to the enterprise behavior.
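As an aside (my assumption, not something the post above mentions): drives that
implement SCT Error Recovery Control expose the same timeout to smartmontools,
so it can sometimes be queried and set from a running system without a vendor
boot disk. A sketch, with a hypothetical device path:

  # query the current read/write error-recovery timeouts (SCT ERC)
  smartctl -l scterc /dev/rdsk/c1t2d0

  # set both to 7.0 seconds, the usual enterprise-style behaviour
  smartctl -l scterc,70,70 /dev/rdsk/c1t2d0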
It's been over 72 hours since my last import attempt.
The system is still non-responsive. No idea if it's doing anything.
Hi,
The compressratio property seems to be the compression ratio for a
given dataset, calculated so that all data in it (compressed or
not) is taken into account.
The dedupratio property, on the other hand, seems to take into
account only deduped data in a pool.
So for example if
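A minimal sketch of inspecting both properties, assuming a pool named tank with
a dataset tank/data (hypothetical names):

  # per-dataset ratio; all data in the dataset, compressed or not, counts
  zfs get compressratio tank/data

  # pool-wide ratio, reported against the deduplicated data
  zpool get dedupratio tank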
Andrey Kuzmin wrote:
As to whether it makes sense (as opposed to two distinct physical
devices), you would have read cache hits competing with log writes for
bandwidth. I doubt both will be pleased :-)
As usual it depends on your workload. In many real-life scenarios the
bandwidth probably won
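One way to see whether a shared device is actually saturating is to watch
per-vdev bandwidth while the workload runs; a sketch assuming a pool named tank
with the flash device attached as both log and cache:

  # per-vdev read/write bandwidth, refreshed every second;
  # watch the log and cache lines while the workload runs
  zpool iostat -v tank 1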
On Dec 12, 2009, at 10:32 AM, Mattias Pantzare wrote:
On Sat, Dec 12, 2009 at 18:08, Richard Elling > wrote:
On Dec 12, 2009, at 12:53 AM, dick hoogendijk wrote:
On Sat, 2009-12-12 at 00:22 +, Moritz Willers wrote:
The host identity had - of course - changed with the new
motherboard
and
On Sat, Dec 12, 2009 at 9:58 AM, Edward Ned Harvey
wrote:
> I would suggest something like this: While the system is still on, if the
> failed drive is at least writable *a little bit* … then you can “dd
> if=/dev/zero of=/dev/rdsk/FailedDiskDevice bs=1024 count=1024” … and then
> after the system
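A sketch of that idea in practice, keeping the placeholder device name from the
message above (reading instead of writing is a gentler way to keep the LED lit):

  # keep the suspect drive's activity LED busy
  dd if=/dev/rdsk/FailedDiskDevice of=/dev/null bs=1024k count=1024 &

  # in another window, confirm which device is actually doing the I/O
  iostat -xn 1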
On Sat, Dec 12, 2009 at 7:55 AM, Bob Friesenhahn
wrote:
> On Sat, 12 Dec 2009, Brent Jones wrote:
>
>> I've noticed some extreme performance penalties simply by using snv_128
>
> Does the 'zpool scrub' rate seem similar to before? Do you notice any read
> performance problems? What happens if you send to /dev/null rather than via ssh?
On 12-Dec-09, at 1:32 PM, Mattias Pantzare wrote:
On Sat, Dec 12, 2009 at 18:08, Richard Elling
wrote:
On Dec 12, 2009, at 12:53 AM, dick hoogendijk wrote:
On Sat, 2009-12-12 at 00:22 +, Moritz Willers wrote:
The host identity had - of course - changed with the new
motherboard
and
On Sat, Dec 12, 2009 at 18:08, Richard Elling wrote:
> On Dec 12, 2009, at 12:53 AM, dick hoogendijk wrote:
>
>> On Sat, 2009-12-12 at 00:22 +, Moritz Willers wrote:
>>
>>> The host identity had - of course - changed with the new motherboard
>>> and it no longer recognised the zpool as its own
On Sat, 12 Dec 2009, dick hoogendijk wrote:
Because, like I said, I always understood it was very difficult to
move disks to another system and run the installed Solaris version on
that new hardware.
A place where I used to work had several thousand Sun workstations and
I noticed that if a
Am I missing something?
I have had monthly, weekly, daily, hourly, and frequent snapshots since March 2009.
Now, with the new b129, I have lost all of them.
From zpool history:
2009-12-12.20:30:02 zfs destroy -r
rpool/ROOT/b...@zfs-auto-snap:weekly-2009-11-26-09:28
2009-12-12.20:30:03 zfs destroy -r
rpool/ROOT/b
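Two commands that show what the auto-snapshot service did and what is left,
assuming the default rpool layout (a sketch, not from the original post):

  # every destroy recorded in the pool history
  zpool history rpool | grep 'zfs destroy'

  # whatever snapshots still remain
  zfs list -t snapshot -r rpool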
On Sat, 2009-12-12 at 09:08 -0800, Richard Elling wrote:
> On Dec 12, 2009, at 12:53 AM, dick hoogendijk wrote:
>
> > On Sat, 2009-12-12 at 00:22 +, Moritz Willers wrote:
> >
> >> The host identity had - of course - changed with the new motherboard
> >> and it no longer recognised the zpool as
I've found that when I build a system, it's worth the initial effort
to install drives one by one to see how they get mapped to names. Then
I put labels on the drives and SATA cables. If there were room to
label the actual SATA ports on the motherboard and cards, I would.
While this isn't foolproof
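On Solaris the mapping itself can also be pulled from the OS, so the labels can
carry serial numbers; a sketch using standard commands:

  # vendor, product and serial number for every disk the OS sees
  iostat -En

  # list the c#t#d# device names (format exits at its prompt on EOF)
  echo | format

  # controller/target layout, useful for matching ports to names
  cfgadm -al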
On Dec 12, 2009, at 12:53 AM, dick hoogendijk wrote:
On Sat, 2009-12-12 at 00:22 +, Moritz Willers wrote:
The host identity had - of course - changed with the new motherboard
and it no longer recognised the zpool as its own. 'zpool import -f
rpool' to take ownership, reboot and it all worked no problem
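For a non-root pool the sequence is roughly the following (hypothetical pool
name tank; the root pool case above is special because it is imported at boot):

  # list pools that are visible but not imported, and whether they
  # appear to belong to another host
  zpool import

  # force the import even though the pool labels carry a foreign hostid
  zpool import -f tank

  # the host identity the pool will be associated with from now on
  hostid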
On Sat, Dec 12, 2009 at 8:17 AM, Paul Bruce wrote:
> Hi,
> I'm just about to build a ZFS system as a home file server in raidz, but I
> have one question - pre-empting the need to replace one of the drives if it
> ever fails.
> How on earth do you determine the actual physical drive that has failed?
As to whether it makes sense (as opposed to two distinct physical
devices), you would have read cache hits competing with log writes for
bandwidth. I doubt both will be pleased :-)
On 12/12/09, Robert Milkowski wrote:
> Jens Elkner wrote:
>> Hi,
>>
>> just got a quote from our campus reseller, th
This is especially important, because if you have 1 failed drive and you
pull the wrong drive, you now have 2 failed drives. And that could destroy
the dataset (depending on whether you have raidz-1 or raidz-2).
Whenever possible, always get the hot-swappable hardware that will blink a
red lig
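One way to reduce that risk before touching the chassis, with hypothetical pool
and device names:

  # double-check which device ZFS considers unhealthy
  zpool status -xv tank

  # explicitly offline it before pulling it, so a mix-up stays recoverable
  zpool offline tank c1t5d0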
On Sat, 12 Dec 2009, Brent Jones wrote:
I've noticed some extreme performance penalties simply by using snv_128
Does the 'zpool scrub' rate seem similar to before? Do you notice any
read performance problems? What happens if you send to /dev/null
rather than via ssh?
Bob
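A sketch of that comparison, with hypothetical dataset, snapshot and host names,
to separate raw send throughput from ssh and network overhead:

  # raw incremental send speed, no network in the path
  time zfs send -i tank/fs@snap1 tank/fs@snap2 > /dev/null

  # the same stream over ssh to the real target, for comparison
  time zfs send -i tank/fs@snap1 tank/fs@snap2 | ssh otherhost zfs receive -d backup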
Jens Elkner wrote:
Hi,
just got a quote from our campus reseller, that readzilla and logzilla
are not available for the X4540 - hmm, strange. Anyway, I'm wondering
whether it is possible/supported/would make sense to use a Sun Flash
Accelerator F20 PCIe Card in a X4540 instead of 2.5" SSDs?
If
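Supposing the F20 does present itself as ordinary disk devices, wiring it in
would look roughly like this (device names are hypothetical; whether the card is
supported in the X4540 is exactly the open question):

  # one flash module as a dedicated log (slog) device
  zpool add tank log c4t0d0

  # another module as an L2ARC cache device
  zpool add tank cache c4t1d0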
Hi!
I tried to add another FireWire drive to my existing four devices, but it
turned out that the OpenSolaris IEEE1394 support doesn't seem to be
well-engineered.
After it failed to recognize the new device, and after exporting and importing
the existing zpool, I get this zpool status:
pool: tank
state:
Hi,
I'm just about to build a ZFS system as a home file server in raidz, but I
have one question - pre-empting the need to replace one of the drives if it
ever fails.
How on earth do you determine the actual physical drive that has failed?
I've got the whole zpool status thing worked out, but h
Hi!
My OpenSolaris 2009.06 box runs into kernel panics almost every day. There are
4 FireWire drives attached to a MacMini as a RaidZ pool. The panic seems to be
related to this known bug:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6835533
Since there are no known workarounds,
sed a majority of my
snapshots to do this:
receiving incremental stream of pdxfilu01/vault/0...@20091212-01:15:00
into pdxfilu02/vault/0...@20091212-01:15:00
received 13.8KB stream in 491 seconds (28B/sec)
De-dupe is NOT enabled on any pool, but I have upgraded to the newest
ZFS pool version,
On Sat, 2009-12-12 at 00:22 +, Moritz Willers wrote:
> The host identity had - of course - changed with the new motherboard
> and it no longer recognised the zpool as its own. 'zpool import -f
> rpool' to take ownership, reboot and it all worked no problem (which
> was amazing in itself as