Did you ever get a solution for this?
I have the same problem on a box with 20 terabytes of data. :-(
Regards
john
Here's how I recovered my situation:
1. Boot from the OpenSolaris CD.
2. Import the pool:
   'zpool import' to list it
   'zpool import -f <poolname>'
   This took quite some time. I guess it had to check/repair it, so I went to
   lunch.
3. Export the pool again:
   'zpool export <poolname>'
4. Reboot.
5. Import the pool again.
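For reference, the whole recovery boils down to a handful of commands; a rough sketch, where <poolname> is just a placeholder for your pool, and -f forces the import because the pool was last in use on another system:

    zpool import                  # list pools that are visible but not imported
    zpool import -f <poolname>    # force-import the pool; can take a long time
    zpool status <poolname>       # check its health once the import finishes
    zpool export <poolname>       # cleanly export before rebooting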
I would suggest a CPU with a small L2 cache, as a large L2 cache will not help a file
server. That lets you use one of AMD's new 45W CPUs: 64-bit, 2-4 cores.
And use raidz2.
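To make the raidz2 part concrete, a minimal sketch is below; 'tank' and the disk device names are just examples, substitute your own (as listed by 'format'):

    zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0
    zpool status tank    # two drives' worth of parity, so any two disks can fail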
Hi all!
I've decided to take the "big jump" and build a ZFS home filer (although it
might also do "other work" like caching DNS, mail, usenet, bittorrent and so
forth). YAY! I wonder if anyone can shed some light on how long a pool scrub
would take on a fairly decent rig. These are the specs as-ordered:
On Sun, Nov 22, 2009 at 10:15 AM, Colin Raven wrote:
> Hi all!
> I've decided to take the "big jump" and build a ZFS home filer (although it
> might also do "other work" like caching DNS, mail, usenet, bittorrent and so
> forth). YAY! I wonder if anyone can shed some light on how long a pool scrub
Colin Raven wrote:
Hi all!
I've decided to take the "big jump" and build a ZFS home filer
(although it might also do "other work" like caching DNS, mail,
usenet, bittorrent and so forth). YAY! I wonder if anyone can shed some
light on how long a pool scrub would take on a fairly decent rig.
Th
Yesterday's integration of
6678033 resilver code should prefetch
as part of changeset 74e8c05021f1 (which should be in build 129 when it
comes out) may improve scrub times, particularly if you have a large
number of small files and a large number of snapshots. I recently
tested an early version
On my home server (currently having problems with random reboots), it takes
around 1.5 hours to do a scrub of my RAIDZ1 6 x 1.5TB array, with around 2TB of
data on it.
Specs are:
CPU: Core 2 Duo 2.5GHz
RAM: 2GB 800MHz DDR2
OS disks: 120GB Seagate ATA
Storage drives: 6 x 1.5TB Seagate SATA2 7200rpm
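If you want to time a scrub on your own box, it's just the two commands below; 'tank' is a placeholder pool name:

    zpool scrub tank     # starts the scrub in the background
    zpool status tank    # shows progress, estimated completion, and any errors found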
--
Bill Sommerfeld wrote:
Yesterday's integration of
6678033 resilver code should prefetch
as part of changeset 74e8c05021f1 (which should be in build 129 when it
comes out) may improve scrub times, particularly if you have a large
number of small files and a large number of snapshots. I recentl
Thanks for replying! I did look into that. The AMD design was my second choice.
It was:
AMD Athlon II X2 240e (to get low power; the dual core and lack of L3 help
there)
ASUS motherboard (see considerations below)
Cheap VGA? LAN card? This is the mire that ultimately bogged down this one.
Give
On Sun, Nov 22, 2009 at 12:43 PM, R.G. Keen wrote:
> Thanks for replying! I did look into that. The AMD design was my second
> choice.
>
> It was :
> AMD Athlon II X2 240e (to get low power; the dual core and lack of L3 help
> there)
> ASUS motherboard (see considerations below)
> Cheap VGA? LAN
> Someone can correct me if I'm wrong... but I believe
> that opensolaris can do the ECC scrubbing in software
> even if the motherboard BIOS doesn't support
> it.
That's interesting - I didn't run into that in the background search.
I suspect that some motherboards just accept the ECC memory bit
Thank you Al! That's exactly the kind of information
I needed. I very much appreciate the help.
> It would be helpful to give us a broad description of what type of
> data you're planning on storing. Small files, large files, required
> capacity etc. and we can probably make some specif
Team
Am I missing something? First off, I normally play around with
OpenSolaris & it's been a while since I played with Solaris 10.
I'm doing all this via VirtualBox (Vista host) and I've set up the
network (I believe), as I can ping, ssh and telnet from Vista into the
S10 virtual machine 192.
On Sun, Nov 22, 2009 at 4:18 PM, Trevor Pretty wrote:
> Team
>
> I'm missing something? First off I normally play around with OpenSolaris &
> it's been a while since I played with Solaris 10.
>
> I'm doing all this via VirtualBox (Vista host) and I've set-up the network
> (I believe) as I can pin
Tim Cook wrote:
On Sun, Nov 22, 2009 at 4:18 PM, Trevor Pretty wrote:
Team
I'm missing something? First off I normally play around with
OpenSolaris & it's been a while since I played with Solaris 10.
I'm doing all this via VirtualBox (Vista host)
OK, I've also got an S7000 simulator as a VM and it seems to have done what I
would expect:
7000# zfs get sharesmb pool-0/local/trevors_stuff/tlp
NAME                            PROPERTY  VALUE                   SOURCE
pool-0/local/trevors_stuff/tlp  sharesmb  name=trevors_stuff_tlp  inherited from
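Since SOURCE shows the property as inherited, it was presumably enabled on a parent dataset along the lines of the sketch below; exactly which parent, and whether an explicit name= option was used, is an assumption on my part:

    zfs set sharesmb=on pool-0/local/trevors_stuff
    zfs get -r sharesmb pool-0/local/trevors_stuff    # children inherit the setting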
Hi Trevor,
The native CIFS/SMB stuff was never backported to S10, so you would have
to use Samba on your S10 VM.
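A rough sketch of what that looks like on S10; the config path and the SMF service name are from memory, so verify them on the VM, and tank/share is a made-up dataset:

    zfs create tank/share
    # add a share stanza to the bundled Samba's config (commonly /etc/sfw/smb.conf on S10):
    #   [share]
    #       path = /tank/share
    #       read only = no
    svcadm enable svc:/network/samba    # start smbd via SMF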
Cheers,
Peter
Trevor Pretty wrote:
Team
I'm missing something? First off I normally play around with
OpenSolaris & it's been a while since I played with Solaris 10.
I'm do
Trevor Pretty wrote:
Tim Cook wrote:
On Sun, Nov 22, 2009 at 4:18 PM, Trevor Pretty
<trevor_pre...@eagle.co.nz> wrote:
Team
I'm missing something? First off I normally play around with
OpenSolaris & it's been a while since I played with Solaris 10.
I'm doing al
Trevor Pretty wrote:
OK, I've also got an S7000 simulator as a VM and it seems to have done what I
would expect:
7000# zfs get sharesmb pool-0/local/trevors_stuff/tlp
NAME                            PROPERTY  VALUE                   SOURCE
pool-0/local/trevors_stuff/tlp  sharesmb  name=trevors_st
Thanks, old friend.
I was surprised to read in the S10 zfs man page that there was the
option sharesmb=on.
I thought I had missed the CIFS server making it into S10 whilst I was not
looking, but I was quickly coming to the conclusion that the CIFS stuff
was just not there, despite being tantalised by t
Hi,
I'm having trouble with SCSI timeouts, but it only appears to happen
when I use ZFS.
I've tried to replicate it with SVM, but I can't get the timeouts to happen
when that is the underlying volume manager; however, the performance with
ZFS is much better when it does work.
The symptom is tha
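A reasonable first step with timeouts like this is to check whether the drives themselves are logging errors, independent of ZFS vs SVM; a rough sketch (exact output fields vary by release):

    iostat -En                    # per-device soft/hard/transport error counters
    fmdump -eV | less             # FMA error reports, including disk/transport ereports
    tail -50 /var/adm/messages    # the actual SCSI timeout/retry messages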
Hi All,
Sorry if this question is already addressed in the documentation, but I am
still unclear about some details of ZIL devices.
I am looking at provisioning some network-attached storage with ZFS on the back
end.
In the interests of the 'inexpensive' part of the acronym 'RAID', I am looking
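For what it's worth, a dedicated ZIL is just a log vdev added to the pool; a minimal sketch with hypothetical pool and device names:

    zpool add tank log c2t0d0                  # single slog device
    zpool add tank log mirror c2t0d0 c2t1d0    # mirrored slog, safer if a device dies
    zpool status tank                          # the log vdev shows up in the config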
Hi,
I think you use the following command:
jkt:/# zpool destroy <poolname>
Hope it helps.