Looking at the 'format' output, it is missing 12 discs!
Which is, probably not surprisingly, the number of discs on the external storage
controller.
The other discs that are present have moved from c0 to c2.
The driver is the same for both sets of discs (it is the HP CQPAry3 driver) and the
external storage is on the same con
We haven't done any 'zfs upgrade ...' yet. I'll give that a try the next time the
system can be taken down.
Ben
> A little gotcha that I found in my 10u6 update
> process was that 'zpool
> upgrade [poolname]' is not the same as 'zfs upgrade
> [poolname]/[filesystem(s)]'
>
> What does 'zfs upgrade'
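Coming back to the missing discs for a moment: a couple of checks that might be
worth running before the next downtime (just a sketch, assuming the external
enclosure merely needs to be re-enumerated and nothing is physically wrong):

  # rebuild /dev links and list attachment points
  devfsadm -Cv
  cfgadm -al

  # see what the pool itself thinks is missing
  zpool status -x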
Ray Galvin wrote:
> I appear to be seeing the performance of a local ZFS file system degrading
> over a short period of time.
>
> My system configuration:
>
> 32 bit Athlon 1800+ CPU
> 1 Gbyte of RAM
>
Since you only have 1 GByte of RAM, I would suspect the ARC is
filling and beco
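If that does turn out to be the case, a common workaround (just a sketch; the
512 MB value below is only an example, not a recommendation for your workload)
is to cap the ARC in /etc/system and reboot:

  * cap the ZFS ARC at 512 MB (0x20000000 bytes)
  set zfs:zfs_arc_max = 0x20000000

You can then watch the ARC size over time with 'kstat -m zfs -n arcstats'.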
Kristof,
> Jim, yes, in step 5 the commands were executed on both nodes.
>
> We did some more tests with OpenSolaris 2008.11 (build 101b).
>
> We managed to get AVS setup up and running, but we noticed that
> performance was really bad.
>
> When we configured a zfs volume for replication, we noticed
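For readers following along: the thing being replicated here is a ZFS volume,
i.e. a zvol exposed as a block device. A minimal sketch of creating one, with
made-up pool and volume names, would be:

  # create a 10 GB zvol to act as the replicated block device
  zfs create -V 10g tank/repvol
  # it then shows up under /dev/zvol/rdsk/tank/repvol

AVS would then be pointed at that device path.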
Ben Miller wrote:
> We haven't done any 'zfs upgrade ...' yet. I'll give that a try the next time
> the system can be taken down.
>
>
No need to take the system down; it can be done on the fly.
The only downside to the upgrade is that you may not be able
to import the pool or file system on an ol
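For reference, the pool and the file systems are upgraded separately, and both
can be done live; something like this (using your pool name in place of 'tank'):

  # show current and supported on-disk versions
  zpool upgrade -v
  zfs upgrade -v

  # upgrade the pool, then all file systems in it
  zpool upgrade tank
  zfs upgrade -r tank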
Hi Jim,
Thanks for your informative reply. I am involved with Kristof (the original
poster) in this setup; please allow me to reply below.
> Was the following 'test' run during resynchronization mode or replication
> mode?
>
Neither; testing was done while in logging mode. This was chosen simply to
avoid
Hi,
I have a system connected to a StorageTek 2530 SAS array (12 disks), on which I
want to run ZFS. In the past, when using ZFS on an external array, I would
simply create one LUN on the array and create the ZFS pool on this; but for
various well-do
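For context, the layout usually weighed against the single big LUN is one LUN
per physical disk, with ZFS providing the redundancy itself. A sketch of that,
with made-up device names, might look like:

  # twelve per-disk LUNs arranged as two 6-disk raidz2 vdevs
  zpool create tank \
    raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 \
    raidz2 c3t6d0 c3t7d0 c3t8d0 c3t9d0 c3t10d0 c3t11d0

The trade-off is the usual one: ZFS can only repair bad blocks when it manages
the redundancy (or extra copies) itself.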
> I've seen reports of a recent Seagate firmware update
> bricking drives again.
>
> What's the output of 'zpool import' from the LiveCD?
> It sounds like
> more than 1 drive is dropping off.
r...@opensolaris:~# zpool import
  pool: tank
    id: 16342816386332636568
 state: FAULTED
status: The p
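When a pool shows up FAULTED like this, a few checks that don't write anything
can help establish how many drives have really dropped off (a sketch; device
names will differ on your system):

  # error counters, vendor/model and serial number for each disk the kernel sees
  iostat -En

  # list attachment points; drives that have dropped off often show as
  # disconnected or unconfigured
  cfgadm -al

  # interactive disk list; quit at the selection prompt without picking a disk
  format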
On Sat, 24 Jan 2009, River Tarnell wrote:
>
> I have a system connected to a StorageTek 2530 SAS array (12 disks), on which
> I want to run ZFS. In the past, when using ZFS on an external array, I would
> simply create one LUN on the array and create the ZFS pool on this; but for
> various well
If ZFS says that one disk is broken, how do I locate it? It says that disk
c0t3d0 is broken. Which physical disk is that? Must I locate them during install?
On a Thumper, is it possible to issue a ZFS command so that the corresponding
disk's lamp will flash? Is there any "zlocate" command that will flash a par
Thanks for the information Richard!
The output of running arcstat.pl is included below. A potentially
interesting thing I see is that the "Prefetch miss percentage" is
100% during this test. I would have thought that a large sequential
read test would be an easy case for prefetch prediction.
Ray Galvin wrote:
> Thanks for the information Richard!
>
> The output of running arcstat.pl is included below. A potentially
> interesting thing I see is that the "Prefetch miss percentage" is
> 100% during this test. I would have thought that a large sequential
> read test would be an easy ca
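One quick way to see whether prefetch is actually the limiting factor (just a
sketch, and I'm assuming file-level prefetch is the relevant knob here) is to
turn it off temporarily on the live kernel and re-run the test:

  # disable ZFS file-level prefetch (reverts on reboot)
  echo zfs_prefetch_disable/W0t1 | mdb -kw

  # re-enable it afterwards
  echo zfs_prefetch_disable/W0t0 | mdb -kw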
Bob Friesenhahn:
> Please make sure to read my write-up of how I configured a StorageTek
> 2540 array (FC version of same hardware) for use with ZFS. It can be
> found at
> "http://www.simplesystems.org/users/bfriesen/zfs-discuss/2540-zfs-performan
On Sat, 24 Jan 2009, River Tarnell wrote:
>
> Bob Friesenhahn:
>> Please make sure to read my write-up of how I configured a StorageTek
>> 2540 array (FC version of same hardware) for use with ZFS. It can be
>> found at
>> "http://www.simplesystem
First, I've been very impressed with ZFS's performance and notification of
problems. It was because of this early notification that I should have been
able to salvage an array properly!
On to the problem I'm hoping you can solve:
I'm running on a 64-bit platform with five 500 GB HDDs in a basic raidz
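In case it helps anyone in a similar spot: the generic recovery path for a
single failed device in a raidz is a straight replacement and resilver. The pool
and device names below are hypothetical:

  zpool status -x            # identify the faulted device
  zpool replace tank c1t3d0  # swap in the new disk at the same location
  zpool status tank          # watch the resilver progress

A single-parity raidz only survives one failed disk at a time, so the
replacement wants to happen before anything else goes wrong.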
FYI (version 0.3):
http://www.eall.com.br/blog/?p=970
Leal
[ http://www.eall.com.br/blog ]
> Hello all,
> I did some tests to understand the behaviour of ZFS
> and slog (SSD), and to understand the workload I
> implemented a simple piece of software to visualize the
> data blocks (read/write).
> I'm
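For anyone who wants to reproduce this kind of slog test, attaching a separate
intent-log device to an existing pool is a one-liner (the pool and SSD device
names below are made up):

  # add an SSD as a dedicated slog
  zpool add tank log c4t0d0

  # or, with two SSDs, mirror the slog
  zpool add tank log mirror c4t0d0 c4t1d0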
Each ZFS block pointer contains up to three DVAs (data virtual addresses),
to implement 'ditto blocks' (multiple copies of the data, above and beyond
any replication provided by mirroring or RAID-Z). Semantically, ditto blocks
are a lot like mirrors, so we actually use the mirror code to read them
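The user-visible knob for ditto blocks on user data is the 'copies' property; a
quick sketch (dataset name made up):

  # keep two copies of every data block in this file system,
  # on top of whatever pool-level redundancy already exists
  zfs set copies=2 tank/important
  zfs get copies tank/important

Metadata already gets extra ditto copies automatically, without setting anything.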
Orvar Korvar wrote:
> If ZFS says that one disk is broken, how do I locate it? It says that disk
> c0t3d0 is broken. Which physical disk is that? Must I locate them during install?
>
> On a Thumper, is it possible to issue a ZFS command so that the corresponding
> disk's lamp will flash? Is there any "zlocate"
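As far as I know there is no generic "zlocate" command (and I'm not sure of the
Thumper LED mechanism), but the low-tech, reliable way is to map cXtYdZ to a
physical drive by its serial number and match it against the label on the disk.
A sketch:

  # look for the c0t3d0 stanza; it includes vendor, model and serial number
  iostat -En

  # cross-check the controller/target numbering
  cfgadm -al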