On 10/16/10 12:29 PM, Marty Scholes wrote:
On Fri, Oct 15, 2010 at 3:16 PM, Marty Scholes wrote:
My home server's main storage is a 22 (19 + 3) disk RAIDZ3 pool backed up
hourly to a 14 (11+3) RAIDZ3 backup pool.
How long does it take to resilver a disk in that pool? And how long does it
take to run a scrub?
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Cassandra Pugh
>
> I would like to know how to replace a failed vdev in a non redundant
> pool?
Non redundant ... Failed ... What do you expect? This seems like a really
simple answer... You
> From: Stephan Budach [mailto:stephan.bud...@jvm.de]
>
> Point taken!
>
> So, what would you suggest, if I wanted to create really big pools? Say
> in the 100 TB range? That would be quite a number of single drives
> then, especially when you want to go with zpool raid-1.
You have a lot of disk
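For scale, a hedged sketch of what a mirrored layout in that range might look
like (pool name, device names and the assumed 2 TB drive size are all examples,
not anything from this thread):

  # ~100 TB usable out of mirrored pairs of 2 TB drives means on the order
  # of 50 pairs; build the pool from two-way mirrors, one pair per top-level vdev
  zpool create bigtank \
      mirror c1t0d0 c2t0d0 \
      mirror c1t1d0 c2t1d0 \
      mirror c1t2d0 c2t2d0
  # ...and so on; further pairs can be added later with "zpool add bigtank mirror ..."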
You should only see a "HOLE" in your config if you removed a slog after having
added more stripes. Nothing to do with bad sectors.
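A hedged sketch of one sequence that produces such a hole (pool and device
names are invented for illustration):

  zpool create tank mirror c1t0d0 c1t1d0
  zpool add tank log c2t0d0              # the slog takes the next top-level vdev slot
  zpool add tank mirror c1t2d0 c1t3d0    # more stripes added after the slog
  zpool remove tank c2t0d0               # removing the slog leaves a gap in the vdev array
  zdb -C tank                            # the cached config should now show a hole in that slot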
On 14 Oct 2010, at 06:27, Matt Keenan wrote:
> Hi,
>
> Can someone shed some light on what this ZPOOL_CONFIG is, exactly?
> At a guess, is it a bad sector of the disk, non-writable and thus marked
> by ZFS as a hole?
The following new test versions have had STEP pkgs built for them.
[You are receiving this email because you are listed as the owner of the
testsuite in the STC.INFO file, or you are on the s...@sun.com alias]
tcp v2.7.10 STEP pkg built for Solaris Snv
zfstest v1.23 STEP pkg built for Solaris
Hi,
Can someone shed some light on what this ZPOOL_CONFIG is, exactly?
At a guess, is it a bad sector of the disk, non-writable and thus marked
by ZFS as a hole?
cheers
Matt
On 12.10.10 14:21, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Stephan Budach
c3t211378AC0253d0 ONLINE 0 0 0
How many disks are there inside of c3t211378AC0253d0?
How are they
Thank you very much, Victor, for the update.
Regards,
Anand
From: Victor Latushkin
To: j...@opensolaris.org
Cc: Anand Bhakthavatsala ; zfs-discuss discuss
Sent: Fri, 8 October, 2010 1:33:57 PM
Subject: Re: [zfs-discuss] ZPool creation brings down the host
O
Looks like the attachment was missing from the earlier mail.
-Anand
From: Anand Bhakthavatsala
To: j...@opensolaris.org; Ramesh Babu
Cc: zfs-discuss@opensolaris.org
Sent: Fri, 8 October, 2010 10:56:55 AM
Subject: Re: [zfs-discuss] ZPool creation brings down the host
Thanks, James, for the response.
Please find attached the crash dump that we got from the admin.
Regards,
Anand
From: James C. McPherson
To: Ramesh Babu
Cc: zfs-discuss@opensolaris.org; anand_...@yahoo.com
Sent: Thu, 7 October, 2010 11:56:36 AM
Subje
> On Fri, Oct 15, 2010 at 3:16 PM, Marty Scholes wrote:
> > My home server's main storage is a 22 (19 + 3) disk RAIDZ3 pool backed up
> > hourly to a 14 (11+3) RAIDZ3 backup pool.
>
> How long does it take to resilver a disk in that pool? And how long
> does it take to run a scrub?
>
> When
If the pool is non-redundant and your vdev has failed, you have lost your data.
Just rebuild the pool, but consider a redundant configuration.
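A minimal sketch of that, assuming the broken pool is named tank and that some
form of backup exists (pool name, devices and the backup path are hypothetical):

  zpool destroy tank                        # nothing to salvage once a non-redundant vdev is gone
  zpool create tank mirror c3t0d0 c3t1d0    # recreate with redundancy this time
  # then restore from whatever backup exists, e.g. a saved send stream:
  # zfs receive -Fd tank < /path/to/backup-stream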
On Oct 15, 2010, at 3:26 PM, Cassandra Pugh wrote:
> Hello,
>
> I would like to know how to replace a failed vdev in a non redundant pool?
>
> I am u
On Oct 15, 2010, at 5:34 PM, Ian D wrote:
>> Has anyone suggested either removing L2ARC/SLOG
>> entirely or relocating them so that all devices are
>> coming off the same controller? You've swapped the
>> external controller but the H700 with the internal
>> drives could be the real culprit. Coul
Hello,
I would like to know how to replace a failed vdev in a non-redundant pool?
I am using fiber-attached disks, and cannot simply place the disk back into
the machine, since it is virtual.
I have the latest kernel from Sept 2010 that includes all of the new ZFS
upgrades.
Please, can you help
On Oct 15, 2010, at 9:18 AM, Stephan Budach wrote:
> On 14.10.10 17:48, Edward Ned Harvey wrote:
>>
>>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>>> boun...@opensolaris.org] On Behalf Of Toby Thain
>>>
I don't want to heat up the discussion about ZFS managed discs v
On Fri, Oct 15, 2010 at 3:16 PM, Marty Scholes wrote:
> My home server's main storage is a 22 (19 + 3) disk RAIDZ3 pool backed up
> hourly to a 14 (11+3) RAIDZ3 backup pool.
How long does it take to resilver a disk in that pool? And how long
does it take to run a scrub?
When I initially setup
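A minimal way to measure both on your own pool (the pool name is an example):
start a scrub and watch the status output, which reports progress, percent done
and an estimated time remaining for scrubs and resilvers alike.

  zpool scrub tank
  zpool status tank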
Sorry, I can't not respond...
Edward Ned Harvey wrote:
> whatever you do, *don't* configure one huge raidz3.
Peter, whatever you do, *don't* make a decision based on blanket
generalizations.
> If you can afford mirrors, your risk is much lower.
> Because although it's
> physically possible for
> Has anyone suggested either removing L2ARC/SLOG
> entirely or relocating them so that all devices are
> coming off the same controller? You've swapped the
> external controller but the H700 with the internal
> drives could be the real culprit. Could there be
> issues with cross-controller IO in t
On Wed, 13 Oct 2010, Edward Ned Harvey wrote:
raidzN takes a really long time to resilver (code written inefficiently,
it's a known problem.) If you had a huge raidz3, it would literally never
finish, because it couldn't resilver as fast as new data appears. A week
In what way is the code written inefficiently?
> -Original Message-
> From: zfs-discuss-boun...@opensolaris.org
> [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Ian D
> Sent: Friday, October 15, 2010 4:19 PM
> To: zfs-discuss@opensolaris.org
> Subject: Re: [zfs-discuss] Performance issues with iSCSI under Linux
>
> A little
On 15 Oct 2010, at 22:19, Ian D wrote:
> A little setback. We found out that we also have the issue with the Dell
> H800 controllers, not just the LSI 9200-16e. With the Dell it's initially
> faster as we benefit from the cache, but after a little while it goes sour -
> from 350MB/sec down
A little setback. We found out that we also have the issue with the Dell
H800 controllers, not just the LSI 9200-16e. With the Dell it's initially
faster as we benefit from the cache, but after a little while it goes sour -
from 350MB/sec down to less than 40MB/sec. We've also tried with a
The mpt_sas driver supports it. We've had LSI 2004 and 2008 controllers hang
for quite some time when used with SuperMicro chassis and Intel X25-E SSDs
(OSOL b134 and b147). It seems to be a firmware issue that isn't fixed with
the last update.
Do you mean to include all the PCIe cards, not just
After contacting LSI, we were told that the 9200-16e HBA is not supported in
OpenSolaris, just Solaris. Aren't the Solaris drivers the same as OpenSolaris?
Is there anyone here using 9200-16e HBAs? What about the 9200-8e? We have a
couple lying around and we'll test one shortly.
Ian
> Does the Linux box have the same issue with any other server?
> What if the client box isn't Linux but Solaris or Windows or Mac OS X?
That would be a good test. We'll try that.
On 15/10/2010 19:09, Ian D wrote:
It's only when a Linux box sends/receives data to the NFS/iSCSI shares that we
have problems. But if the Linux box sends/receives files through scp to the
external disks mounted by the Nexenta box as a local filesystem, then there is
no problem.
Does the Linux box have the same issue with any other server?
> As I have mentioned already, it would be useful to know more about the
> config, how the tests are being done, and to see some basic system
> performance stats.
I will shortly. Thanks!
> You mentioned a second Nexenta box earlier. To rule out client-side issues,
> have you considered testing with Nexenta as the iSCSI/NFS client?
If you mean running the NFS client AND server on the same box, then yes, and it
doesn't show the same performance issues. It's only when a Linux box
Derek,
The c0t5000C500268CFA6Bd0 disk has some kind of label problem.
You might compare the label of this disk to the other disks.
I agree with Richard that using whole disks (use the d0 device)
is best.
You could also relabel it manually by using the format-->fdisk-->
delete the current partit
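A hedged sketch of that comparison - the first device is the suspect disk from
this thread, the second is a hypothetical healthy pool member for comparison;
append a slice suffix such as s0 if the bare d0 node is not present, and an
error from prtvtoc on the suspect disk would itself point at a damaged label:

  prtvtoc /dev/rdsk/c0t5000C500268CFA6Bd0
  prtvtoc /dev/rdsk/c0t5000C500268CFA64d0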
As I have mentioned already, it would be useful to know more about the
config, how the tests are being done, and to see some basic system
performance stats.
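Something along these lines, captured while a slow transfer is in progress,
would be a useful starting point (a sketch, not a prescribed set):

  zpool iostat -v 5     # per-vdev bandwidth and IOPS
  iostat -xn 5          # per-device service times and %busy
  vmstat 5              # CPU, paging, run queue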
On 15/10/2010 15:58, Ian D wrote:
As I have mentioned already, we have the same performance issues whether we
READ or we WRITE to the a
> I've had a few people sending emails directly suggesting it might have
> something to do with the ZIL/SLOG. I guess I should have said that the issue
> happens both ways, whether we copy TO or FROM the Nexenta box.
You mentioned a second Nexenta box earlier. To rule out client-side issues,
have you considered testing with Nexenta as the iSCSI/NFS client?
I am using snv_111b, and yesterday both the Mac OS X Finder and the Solaris File
Browser started reporting that I had 0 space available on the SMB shares. Earlier
in the day I had copied some files from the Mac to the SMB shares with no problems
reported by the Mac (Automator will report errors if the des
> He already said he has SSD's for dedicated log. This means the best
> solution is to disable WriteBack and just use WriteThrough. Not only is it
> more reliable than WriteBack, it's faster.
>
> And I know I've said this many times before, but I don't mind repeating: If
> you have slog d
As I have mentioned already, we have the same performance issues whether we
READ or we WRITE to the array; shouldn't that rule out caching issues?
Also, we can get great performance with the LSI HBA if we use the JBODs as a
local file system. The issues only arise when it is done through iSCSI
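One way to make that comparison concrete is to push the same sequential stream
down both paths and compare the numbers (a sketch; paths, device names and
sizes are examples, writing to a raw LUN is destructive, and /dev/zero figures
can be flattered by compression or caching):

  # on the Nexenta box, straight to the local ZFS filesystem:
  dd if=/dev/zero of=/tank/ddtest bs=1048576 count=8192
  # on the Linux client, against the exported iSCSI LUN:
  dd if=/dev/zero of=/dev/sdX bs=1M oflag=direct count=8192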
Hi,
So, to be absolutely clear:
in the same session, you ran an update, commit, and select, and the
select returned an earlier value than the committed update?
Things like
ALTER SESSION SET ISOLATION_LEVEL = SERIALIZABLE;
will cause a session to NOT see commits from other sessions, but in
Oracle
A customer is running ZFS version 15 on Solaris SPARC 10/08 supporting Oracle
10.2.0.3 databases in a dev and production test environment. We have come
across some cache inconsistencies with one of the Oracle databases where
fetching a record displays a 'historical value' (that has been changed
On 14.10.10 17:48, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Toby Thain
I don't want to heat up the discussion about ZFS managed discs vs.
HW raids, but if RAID5/6 were that bad, no one would use it
anymore
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Phil Harman
>
> I'm wondering whether your HBA has a write-through or write-back cache
> enabled? The latter might make things very fast, but could put data at
> risk if not sufficiently non-volatile