That section doesn't actually prescribe one size, so what size did you
choose and how exactly did you set it?
You haven't told us, and no one has asked you, about the basic
system configuration. For starters: what CPU, memory, and storage? What
else is this machine doing?
Also we do rea
Richard,
thanks for the explanation.
So can we say that the problem is the disks losing a command now and then
under stress?
Best regards.
Maurilio.
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolari
On Mon, Oct 5, 2009 at 10:04 AM, Sam wrote:
> Hi,
> I've been having some serious problems with my RaidZ2 array since I updated
> to 2009.06 on Friday (from 2008.05). It's 10 drives with 1 hot spare, with 8
> drives on a SAS card and 3 drives on the motherboard's SATA connectors. I'm
> worried t
Hi,
I've been having some serious problems with my RaidZ2 array since I updated to
2009.06 on Friday (from 2008.05). It's 10 drives with 1 hot spare, with 8 drives
on a SAS card and 3 drives on the motherboard's SATA connectors. I'm worried
that the SAS card is either malfunctioning or 2009.06 is
Rob Logan wrote:
>
> >> Directory "1" takes between 5-10 minutes for the same command to return
> >> (it has about 50,000 files).
>
> > That said, directories with 50K files list quite quickly here.
>
> a directory with 52,705 files lists in half a second here
>
> 36 % time \ls -1 > /dev/n
On Sat, Oct 3, 2009 at 11:33 AM, Richard Elling wrote:
> On Oct 3, 2009, at 10:26 AM, Chris Banal wrote:
>
> On Fri, Oct 2, 2009 at 10:57 PM, Richard Elling
>> wrote:
>>
>> c is the current size of the ARC. c will change dynamically, as memory
>> pressure and demand change.
>>
>> How is the rela
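The snippet cuts off here, but on OpenSolaris the ARC's current target size c that Richard describes can be watched directly. A sketch using kstat(1M); the zfs:0:arcstats names are the stock ARC kstats, and this must be run on the Solaris host itself:

```shell
# Print the ARC's current target size (c) and its bounds, in bytes,
# plus the bytes actually consumed right now. Values move as memory
# pressure and demand change, which is the dynamic behavior described.
kstat -p zfs:0:arcstats:c
kstat -p zfs:0:arcstats:c_min
kstat -p zfs:0:arcstats:c_max
kstat -p zfs:0:arcstats:size
```

Re-running this under load shows c shrinking toward c_min when other consumers need memory.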
Oops, just spotted you said
you don't want a FS for each sub-dir :-)
Trevor Pretty wrote:
Edward
If you look at the man page:-
snapshot
A read-only version of a file system or volume at a given point
in time. It is specified as filesystem@name or volume@name.
On Oct 4, 2009, at 11:51 AM, Miles Nordin wrote:
"re" == Richard Elling writes:
re> The probability of the garbage having both a valid fletcher2
re> checksum at the proper offset and having the proper sequence
re> number and having the right log chain link and having the
re> righ
Hi all!
I have a serious problem with a server, and I'm hoping that someone
could help me understand what's wrong.
Basically I have a server with a pool of 6 disks, and after a zpool
scrub I got the message:
errors: Permanent errors have been detected in the following files:
Edward
If you look at the man page:-
snapshot
A read-only version of a file system or volume at a given point
in time. It is specified as filesystem@name or volume@name.
I think you've taken volume
snapshots. I believe you need to make file system snapshots and each
users/use
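Trevor's suggestion can be sketched as follows (hypothetical pool and dataset names, assuming one ZFS file system per user under tank/home):

```shell
# Take a file system snapshot of every per-user child file system in
# one step: -r recurses, so each user gets their own filesystem@name
# snapshot rather than a single volume-level one.
zfs snapshot -r tank/home@monday

# Verify: each child file system now has its own @monday snapshot.
zfs list -t snapshot -r tank/home
```

Per-user file system snapshots are what make each user's .zfs/snapshot directory show only that user's history.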
Action: Restore the file in question if possible. Otherwise restore the
entire pool from backup.
:<0x0>
:<0x15>
Bet it's in a snapshot that looks to have been destroyed already. Try:
zpool clear POOL01
zpool scrub POOL01
>> Directory "1" takes between 5-10 minutes for the same command to return
>> (it has about 50,000 files).
> That said, directories with 50K files list quite quickly here.
a directory with 52,705 files lists in half a second here
36 % time \ls -1 > /dev/null
0.41u 0.07s 0:00.50 96.0%
perh
Bruno Sousa wrote:
Action: Restore the file in question if possible. Otherwise restore the
entire pool from backup.
:<0x0>
:<0x15>
Hmm, and what file(s) would this be?
--
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | SunOS 10u7 5/09 | OpenSolaris 2010.02 b1
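On Dick's question of which files these are: zpool status -v prints pathnames for permanent errors whenever ZFS can still resolve the damaged object to a file; entries shown only as hex object numbers (like the :<0x0> above) usually mean the object no longer maps to a path, e.g. it lived in a dataset or snapshot that has since been destroyed. A sketch, using the POOL01 name from elsewhere in the thread:

```shell
# List permanent errors together with file paths where resolvable;
# unresolvable objects remain as dataset:<object-number> entries.
zpool status -v POOL01
```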
Hi all!
I have a serious problem with a server, and I'm hoping that someone
could help me understand what's wrong.
Basically I have a server with a pool of 6 disks, and after a zpool
scrub I got the message:
errors: Permanent errors have been detected in the following files:
> "re" == Richard Elling writes:
re> The probability of the garbage having both a valid fletcher2
re> checksum at the proper offset and having the proper sequence
re> number and having the right log chain link and having the
re> right block size is considerably lower than the
On Sat, 3 Oct 2009, Jeff Haferman wrote:
When I go into directory "0", it takes about a minute for an "ls -1 |
wc" to return (it has about 12,000 files). Directory "1" takes
between 5-10 minutes for the same command to return (it has about 50,000
files).
This seems kind of slow. In the
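Rob's half-second figure is easy to check on any system. A minimal reproduction in portable shell (it creates and removes its own scratch directory):

```shell
# Create 50,000 empty files in a scratch directory, then time a bare
# listing of them, matching the measurements quoted above.
dir=$(mktemp -d)
seq 1 50000 | sed "s|^|$dir/f|" | xargs touch
time ls -1 "$dir" > /dev/null
rm -rf "$dir"
```

If this local test is fast but the same listing on the ZFS directory is slow, the bottleneck is in the pool or its caching, not in ls itself.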