After a crash, some datasets in my zpool tree report this when I do an ls -la:
brwxrwxrwx 2 777 root 0, 0 Oct 18 2009 mail-cts
Also, the same happens if I set
zfs set mountpoint=legacy dataset
and then mount the dataset to another location.
Before the crash, the directory tree was only:
dataset
- vdisk.raw
The file was
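For reference, a minimal sketch of the legacy-mount steps described above, assuming the affected dataset is sas/mail-cts (as named later in the thread) and /mnt/mail-cts as an arbitrary temporary mountpoint:

zfs set mountpoint=legacy sas/mail-cts
mkdir -p /mnt/mail-cts                       # /mnt/mail-cts is an arbitrary choice
mount -F zfs sas/mail-cts /mnt/mail-cts      # Solaris syntax for mounting a legacy ZFS dataset
# to revert to an automatically managed mountpoint:
umount /mnt/mail-cts
zfs set mountpoint=/sas/mail-cts sas/mail-cts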
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Wolfraider
>
> target mode, using both ports. We have 1 zvol connected to 1 Windows
> server and the other zvol connected to another Windows server, with both
> Windows servers having a QLogic 2
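For reference, a minimal sketch of how a zvol typically gets exported to a host through COMSTAR, assuming a pool called tank and an illustrative name and size; the QLogic target-mode port configuration and any LUN masking from the setup above are not shown:

zfs create -V 500G tank/winvol1                 # illustrative zvol name and size
sbdadm create-lu /dev/zvol/rdsk/tank/winvol1    # register the zvol as a COMSTAR logical unit
stmfadm list-lu -v                              # note the GUID assigned to the new LU
stmfadm add-view <lu-guid>                      # expose the LU to all hosts/ports (no masking), illustration only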
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of David Dyer-Bennet
>
> > For example, if you start with an empty drive, and you write a large
> > amount of data to it, you will have no fragmentation. (At least, no
> > significant fragme
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Marty Scholes
>
> What appears to be missing from this discussion is any shred of
> scientific evidence that fragmentation is good or bad and by how much.
> We also lack any detail on how much
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Bryan Horstmann-Allen
>
> The ability to remove the slogs isn't really the win here, it's import
> -F. The
Disagree.
Although I agree the -F is important and good, I think the log device
remov
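For context, a minimal sketch of the two features being weighed here, assuming a pool called tank with a dedicated log device c1t2d0, on a build recent enough to support both:

zpool remove tank c1t2d0    # remove a separate log (slog) device from the pool
zpool import -F tank        # recovery-mode import: discard the last few transactions to open a damaged pool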
Morning,
c7t5000CCA221F4EC54d0 is a 2T disk, how can it resilver 5.63T of it?
This is actually an old capture of the status output; it got to nearly
10T before deciding that there was an error and not completing. I reseated
the disk and it's doing it all again.
It's happened on another pool as well,
What OpenSolaris build are you running?
victor
On 17.09.10 13:53, Valerio Piancastelli wrote:
After a crash, some datasets in my zpool tree report this when I do an ls -la:
brwxrwxrwx 2 777 root 0, 0 Oct 18 2009 mail-cts
Also, the same happens if I set
zfs set mountpoint=legacy dataset
and then mount the
With uname -a:
SunOS disk-01 5.11 snv_111b i86pc i386 i86pc Solaris
It is OpenSolaris 2009.06
Other useful info:
zfs list sas/mail-cts
NAME USED AVAIL REFER MOUNTPOINT
sas/mail-cts 149G 250G 149G /sas/mail-cts
and with df
Filesystem 1K-blocks Used Availa
On Fri, 17 Sep 2010, Tom Bird wrote:
Morning,
c7t5000CCA221F4EC54d0 is a 2T disk, how can it resilver 5.63T of it?
This is actually an old capture of the status output; it got to nearly 10T
before deciding that there was an error and not completing. I reseated the disk and
it's doing it all again.
Bob Friesenhahn wrote:
On Fri, 17 Sep 2010, Tom Bird wrote:
Morning,
c7t5000CCA221F4EC54d0 is a 2T disk, how can it resilver 5.63T of it?
This is actually an old capture of the status output; it got to nearly
10T before deciding that there was an error and not completing. I reseated the
disk and it
On Thu, September 16, 2010 14:04, Miles Nordin wrote:
>> "dd" == David Dyer-Bennet writes:
>
> dd> Sure, if only a single thread is ever writing to the disk
> dd> store at a time.
>
> video warehousing is a reasonable use case that will have small
> numbers of sequential readers and w
On 09/17/10 06:24, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Bryan Horstmann-Allen
The ability to remove the slogs isn't really the win here, it's import
-F. The
Disagree.
Although I agree the -F is impor
Looking at migrating zones built on an M8000 and M5000 to a new M9000. On the
M9000 we started building new deployments using ZFS. The environments on the
M8/M5 are UFS. These are whole-root zones; they will use global zone resources.
Can this be done? Or would a ZFS migration be needed?
than
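Not an answer from the thread, just a sketch of the usual detach/attach path, assuming a zone named myzone and a staging path /net/m9000/stage (both hypothetical); whether a UFS-built whole-root zone attaches cleanly onto a ZFS-backed zonepath is the open question above:

# on the M8000/M5000 (source)
zoneadm -z myzone halt
zoneadm -z myzone detach
cd /zones && tar cf - myzone | gzip > /net/m9000/stage/myzone.tar.gz   # archive the zonepath

# on the M9000 (target), unpacking onto a ZFS dataset
zfs create -o mountpoint=/zones rpool/zones     # hypothetical dataset layout
cd /zones && gzcat /net/m9000/stage/myzone.tar.gz | tar xf -
zonecfg -z myzone create -a /zones/myzone       # rebuild the config from the detached zone's manifest
zoneadm -z myzone attach -u                     # attach, updating packages/patches to match the new host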
On Sep 16, 2010, at 12:33 PM, Marty Scholes wrote:
> David Dyer-Bennet wrote:
>> Sure, if only a single thread is ever writing to the
>> disk store at a time.
>>
>> This situation doesn't exist with any kind of
>> enterprise disk appliance,
>> though; there are always multiple users doing stuff.
>
On 09/18/10 04:28 AM, Tom Bird wrote:
Bob Friesenhahn wrote:
On Fri, 17 Sep 2010, Tom Bird wrote:
Morning,
c7t5000CCA221F4EC54d0 is a 2T disk, how can it resilver 5.63T of it?
This is actually an old capture of the status output, it got to
nearly 10T before deciding that there was an error
> From: Neil Perrin [mailto:neil.per...@oracle.com]
>
> > you lose information. Not your whole pool. You lose up to
> > 30 sec of writes
>
> The default is now 5 seconds (zfs_txg_timeout).
When did that become the default? Should I *ever* say 30 sec anymore?
In my world, the oldest machine is
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Tom Bird
>
We recently had a long discussion on this list about resilver times versus
raid types. In the end, the conclusion was: resilver code is very
inefficient for raidzN. Someday it m
On Sep 17, 2010, at 20:32, Edward Ned Harvey wrote:
When did that become the default? Should I *ever* say 30 sec anymore?
June 8, 2010, revision 12586:b118bbd65be9:
http://src.opensolaris.org/source/history/onnv/onnv-gate/usr/src/uts/common/fs/zfs/txg.c
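For anyone wanting to inspect or pin the value on a live system, a small sketch (not from the thread), assuming a Solaris/OpenSolaris kernel where the tunable is still named zfs_txg_timeout:

echo "zfs_txg_timeout/D" | mdb -k          # print the current value, in seconds
echo "zfs_txg_timeout/W 0t30" | mdb -kw    # set it back to 30 on the running kernel (not persistent)
# persistent alternative: add the following to /etc/system and reboot
#   set zfs:zfs_txg_timeout = 30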
On 09/17/10 18:32, Edward Ned Harvey wrote:
From: Neil Perrin [mailto:neil.per...@oracle.com]
you lose information. Not your whole pool. You lose up to
30 sec of writes
The default is now 5 seconds (zfs_txg_timeout).
When did that become the default?
It was changed more rec
On 09/18/10 04:46 PM, Neil Perrin wrote:
On 09/17/10 18:32, Edward Ned Harvey wrote:
From: Neil Perrin [mailto:neil.per...@oracle.com]
you lose information. Not your whole pool. You lose up to
30 sec of writes
The default is now 5 seconds (zfs_txg_timeout).
When did t
On 09/17/10 23:31, Ian Collins wrote:
On 09/18/10 04:46 PM, Neil Perrin wrote:
On 09/17/10 18:32, Edward Ned Harvey wrote:
From: Neil Perrin [mailto:neil.per...@oracle.com]
you lose information. Not your whole pool. You lose up to
30 sec of writes
The default is now 5 seconds
Hi all
one of our systems just developed something remotely similar:
s06:~# zpool status
pool: atlashome
state: DEGRADED
status: One or more devices is currently being resilvered. The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to comple
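The usual follow-up for a pool in this state, as a small sketch assuming the pool name atlashome shown above:

zpool status -v atlashome    # watch resilver progress and any per-file errors
zpool clear atlashome        # after the resilver completes, clear the error counters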