Hi folks,
At home I run OpenSolaris x86 with a 4-drive RAID-Z (4x1TB) zpool and it's not
in great shape. A fan stopped spinning and soon after the top disk failed
(because, you know, heat rises). Naturally, OpenSolaris and ZFS didn't skip a
beat; I didn't even notice it was dead until I saw the
Though the rsync switch is probably the answer to your problem...
You might want to consider upgrading to Nexenta 3.0, switching checksums from
fletcher to sha1, and then enabling block-level deduplication. You'd probably
use fewer GB per snapshot even with rsync running inefficiently.
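If you do go down that road, the property changes are only a couple of commands; a rough sketch, assuming your pool is called 'tank' (and note ZFS's strong checksum is actually sha256, not sha1, and only blocks written after the change pick up the new checksum/dedup settings):

    zfs set checksum=sha256 tank
    zfs set dedup=on tank

    # later, to see how much dedup is actually buying you
    zpool get dedupratio tank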
> Did you try with -f? I doubt it will help.
Yep, no luck with -f, -F or -fF.
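For the record, the exact invocations I tried looked roughly like this ('tank' standing in for the real pool name); all three fail the same way:

    zpool import -f tank     # force import despite the pool's state
    zpool import -F tank     # recovery mode: roll back the last few transactions if needed
    zpool import -fF tank    # both together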
> > * If I replace the 1TB dead disk with a blank disk, might
> > the import work?
>
> Only if the import is failing because the dead disk
> is nonresponsive in a way that makes the import hang.
> Otherwise, you'd import the
Oops, I meant SHA256. My mind just maps SHA->SHA1, totally forgetting that ZFS
actually uses SHA256 (a SHA-2 variant).
More on ZFS dedup, checksums and collisions:
http://blogs.sun.com/bonwick/entry/zfs_dedup
http://www.c0t0d0s0.org/archives/6349-Perceived-Risk.html
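Worth noting: if the collision math in those posts still makes you nervous, dedup can be told to byte-compare blocks whose checksums match before sharing them. A sketch (pool name made up):

    zfs set dedup=sha256,verify tank
    # or, keeping the checksum setting separate:
    zfs set checksum=sha256 tank
    zfs set dedup=verify tank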
>> * Should I be able to import a degraded pool?
> In general, yes. But it is complaining about corrupted data, which can
> be due to another failure.
Any suggestions on how to discover what that failure might be?
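For anyone willing to take a guess, these are the diagnostics I know how to pull; anything else I should look at? ('tank' is a stand-in for the pool name.)

    zpool import              # lists importable pools and per-device state/errors
    fmdump -eV | less         # FMA error reports (I/O, checksum, device retire events)
    zpool status -v tank      # per-device error counters, once/if the import succeeds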
>> * If not, shouldn't there be a warning when exporting a degraded pool?
> What sh
Had an idea; could someone please tell me why it's wrong? (I feel like it has
to be.)
A RAID-Z2 pool with one missing disk offers the same failure resilience as a
healthy RAID-Z1 pool (no data loss when one disk fails). I had initially wanted
to do a single-parity RAID-Z pool (5 disks), but after a
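If it helps, here is a quick way to test the claim with throwaway file-backed vdevs; the paths and pool name are just made up for the experiment:

    mkfile 128m /tmp/d1 /tmp/d2 /tmp/d3 /tmp/d4 /tmp/d5
    zpool create testz2 raidz2 /tmp/d1 /tmp/d2 /tmp/d3 /tmp/d4 /tmp/d5

    zpool offline testz2 /tmp/d1   # pool goes DEGRADED but stays writable
    zpool status testz2            # and should tolerate one more failure, like a healthy raidz1

    zpool destroy testz2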
Hi folks,
I'm in the market for a couple of JBODs. Up until now I've been relatively
lucky with finding hardware that plays very nicely with ZFS. All my gear
currently in production uses LSI SAS controllers (3801e, 9200-16e, 9211-8i)
with backplanes powered by LSI SAS expanders (Sun x4250, Su
isks in 4U enclosure on top of those mentioned
> in this message or the data-on? We are trying to build
> super-high-density-storage racks.
>
> Cedric Tineo
>
>
> On 13 nov. 2012, at 21:08, Peter Tripp wrote:
>
>> Hi folks,
>>
>> I'm in the market
Hi Nathan,
You've misunderstood how the ZIL works and why it reduces write latency for
synchronous writes.
Since you've partitioned a single SSD into two slices, one as pool storage and
one as ZIL for that pool, all sync writes will be 2X amplified. There's no way
around it. ZFS will write to
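The usual way to get the latency win without paying twice on the same device is to give the pool a whole separate log device; the device names below are hypothetical:

    # dedicate the SSD entirely to the log; sync writes hit the SSD once,
    # then get written to the data vdevs at the next transaction-group commit
    zpool add tank log c1t0d0

    zpool status tank   # the log vdev shows up under its own "logs" section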
Hi Jerry,
Couple of things that might help you troubleshoot your Intel SASUC8I HBA (a few
quick commands for the first two checks are sketched below the list):
1. Are you seeing all 8 devices in the BIOS for the card?
2. If yes, do other operating systems (say a Linux LiveCD) see all the disks
too?
3. Is there any difference between the disks (e.g. four 2TB Seagate
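For steps 1 and 2, these are the quick checks I'd run; nothing here is specific to your setup beyond the guess that the card binds to the LSI mpt/mptsas driver:

    # OpenSolaris: list every disk the OS can see
    format </dev/null
    cfgadm -al

    # Linux LiveCD: enumerate SCSI devices and confirm the LSI driver bound to the card
    lsscsi
    dmesg | grep -i mpt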
Hi Eugen,
Whether it's compatible entirely depends on the chipset of the SATA controller.
Basically that card is just a dual-port 6Gbps PCIe SATA controller with the
space to mount one ($149) or two ($299) 2.5-inch disks. Sonnet, a Mac-focused
company, offers it as a way to better utilize exist
Hey folks,
While scrubbing, zpool status shows nearly 40MB "repaired" but 0 in each of the
read/write/checksum columns for each disk. One disk has "(repairing)" to the
right but once the scrub completes there's no mention that anything ever needed
fixing. Any idea what would need to be repair
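If anyone wants more detail, the two places I know to look are the verbose status output and the FMA error log; the pool name below is a placeholder:

    zpool status -v tank       # lists any files with persistent errors after the scrub
    fmdump -e | grep -i zfs    # error events (checksum/IO) the scrub may have quietly fixed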