Ian,
It would help to have some config detail (e.g. what options are you using?
zpool status output; property lists for specific filesystems and zvols; etc)
Some basic Solaris stats can be very helpful too (e.g. samples of vmstat 1,
mpstat 1, iostat -xnz 1, etc. taken during peak load)
It would also be grea
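For reference, a minimal sketch of how one might capture that data (the pool
name tank is a placeholder and the sample counts are arbitrary):

zpool status -v
zfs get all tank                  # property list for the pool's root dataset
vmstat 1 30 > vmstat.out          # ~30 one-second samples, taken during peak load
mpstat 1 30 > mpstat.out
iostat -xnz 1 30 > iostat.out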
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Derek G Nokes
>
> r...@dnokes.homeip.net:~# zpool create marketData raidz2
> c0t5000C5001A6B9C5Ed0 c0t5000C5001A81E100d0 c0t5000C500268C0576d0
> c0t5000C500268C5414d0 c0t5000C500268CFA6Bd0 c0t5
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Wilkinson, Alex
>
> can you paste them anyway ?
Note: If you have more than one adapter, I believe you can specify -aALL in
the commands below, instead of -a0
I have 2 disks (slots 4 & 5) th
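The specific commands are cut off above; purely as an illustration (assuming
the MegaCli64 binary is on the PATH), typical read-only MegaCLI queries look
like:

MegaCli64 -AdpAllInfo -aALL       # adapter summary for all adapters
MegaCli64 -PDList -aALL           # list physical disks (enclosure/slot, state)
MegaCli64 -LDInfo -Lall -aALL     # list logical drives / RAID volumes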
Thank you both. I did try without specifying the 's0' portion before posting
and got the following error:
r...@dnokes.homeip.net:~# zpool create marketData raidz2 c0t5000C5001A6B9C5Ed0
c0t5000C5001A81E100d0 c0t5000C500268C0576d0 c0t5000C500268C5414d0
c0t5000C500268CFA6Bd0 c0t5000C500268D0821d0
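The error text itself is cut off here; as a general sketch (not the poster's
actual output), two quick checks before retrying the create are to confirm the
device names the OS sees and whether the disks still carry an old pool label:

echo | format | grep c0t5000      # list the c0t5000... disks the OS sees
zpool import                      # shows any exported/foreign pools on those disks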
On Thu, Oct 14, 2010 at 09:54:09PM -0400, Edward Ned Harvey wrote:
>If you happen to find that MegaCLI is the right tool for your hardware, let
>me know, and I'll paste a few commands here, which will simplify your life.
>When I first started using it, I found it terribly cumbers
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Ian D
>
> ok... we're making progress. After swapping the LSI HBA for a Dell
> H800 the issue disappeared. Now, I'd rather not use those controllers
> because they don't have a JBOD mode. We
Derek,
> I am relatively new to OpenSolaris / ZFS (have been using it for maybe 6
> months). I recently added 6 new drives to one of my servers and I would like
> to create a new RAIDZ2 pool called 'marketData'.
>
> I figured the command to do this would be something like:
>
> zpool create mar
On Oct 14, 2010, at 5:08 PM, Derek G Nokes wrote:
> I am relatively new to OpenSolaris / ZFS (have been using it for maybe 6
> months). I recently added 6 new drives to one of my servers and I would like
> to create a new RAIDZ2 pool called 'marketData'.
>
> I figured the command to do this wo
I am relatively new to OpenSolaris / ZFS (have been using it for maybe 6
months). I recently added 6 new drives to one of my servers and I would like to
create a new RAIDZ2 pool called 'marketData'.
I figured the command to do this would be something like:
zpool create marketData raidz2 c0t5000
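A sketch of the complete command, using the six device names that appear
elsewhere in the thread and whole disks rather than s0 slices:

zpool create marketData raidz2 \
    c0t5000C5001A6B9C5Ed0 c0t5000C5001A81E100d0 c0t5000C500268C0576d0 \
    c0t5000C500268C5414d0 c0t5000C500268CFA6Bd0 c0t5000C500268D0821d0
zpool status marketData           # verify the raidz2 layout after creation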
> Earlier you said you had eliminated the ZIL as an issue, but one difference
> between the Dell H800 and the LSI HBA is that the H800 has an NV cache (if
> you have the battery backup present).
>
> A very simple test would be when things are running slow, try disabling
> the ZIL temporarily
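The mechanics of that test aren't spelled out above. As a hedged sketch for
builds of that era (the pool/dataset name tank is a placeholder): newer builds
can disable synchronous write semantics per dataset with the sync property,
while older builds used the global zil_disable tunable, which only affects
filesystems mounted after it is set:

zfs set sync=disabled tank        # newer builds; revert with: zfs set sync=standard tank
echo zil_disable/W0t1 | mdb -kw   # older builds; remount the fs, revert with W0t0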
rewar...@hotmail.com said:
> ok... we're making progress. After swapping the LSI HBA for a Dell H800 the
> issue disappeared. Now, I'd rather not use those controllers because they
> don't have a JBOD mode. We have no choice but to make individual RAID0
> volumes for each disk, which means we nee
> Our next test is to try with a different kind of HBA,
> we have a Dell H800 lying around.
ok... we're making progress. After swapping the LSI HBA for a Dell H800 the
issue disappeared. Now, I'd rather not use those controllers because they
don't have a JBOD mode. We have no choice but to mak
On 14-Oct-10, at 11:48 AM, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Toby Thain
I don't want to heat up the discussion about ZFS managed discs vs.
HW raids, but if RAID5/6 were that bad, no one would use i
I've had a few people email me directly suggesting it might have something to
do with the ZIL/SLOG. I guess I should have said that the issue happens both
ways, whether we copy TO or FROM the Nexenta box.
> Sounding more and more like a networking issue - are the network cards set
> up in an aggregate? I had some similar issues on GbE where there was a
> mismatch between the aggregate settings on the switches and the LACP
> settings on the server. Basically the network was wasting a ton of time
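On the Solaris side, a quick way to check the aggregate and LACP settings (a
sketch; interface and aggregate names will differ) is:

dladm show-aggr                   # aggregation key, policy, and member ports
dladm show-aggr -L                # LACP activity/timer state per port
dladm show-link                   # link state and speed

The switch ports should be configured to match (LACP active/passive vs. a
static trunk).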
On Thu, Oct 14, 2010 at 11:47 PM, Oskar wrote:
> I know that this is not necessarily the right forum, but the FreeBSD forum
> hasn't been able to help me...
>
> I recently updated my FreeBSD 8.0 RC3 to 8.1 and after the update I can't
> import my zpool. My computer says that no such pool exists
I know that this is not necessarily the right forum, but the FreeBSD forum
hasn't been able to help me...
I recently updated my FreeBSD 8.0 RC3 to 8.1 and after the update I can't
import my zpool. My computer says that no such pool exists, even though it can
be seen with the zpool status command
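Not from the original post, but the usual first steps in that situation are
roughly:

zpool import                      # scan device labels and list importable pools
zpool import -d /dev <poolname>   # point the label search at the device directory
zpool import -f <poolname>        # force if the pool was last used by the old install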
I had to upgrade zfs
zfs upgrade -a
then
pfexec zfs set sharesmb=off data
pfexec zfs set sharesmb=on data
After this, zfs diff still failed with the old snapshots, but it worked with
newly created snapshots.
Thanks Tim,
Dirk
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Toby Thain
>
> > I don't want to heat up the discussion about ZFS managed discs vs.
> > HW raids, but if RAID5/6 were that bad, no one would use it
> > anymore.
>
> It is. And there's no r
On 14-Oct-10, at 3:27 AM, Stephan Budach wrote:
I'd like to see those docs as well.
As all HW raids are driven by software, of course - and software can
be buggy.
It's not that the software 'can be buggy' - that's not the point here.
The point being made is that conventional RAID just d
> From: David Magda [mailto:dma...@ee.ryerson.ca]
>
> On Wed, October 13, 2010 21:26, Edward Ned Harvey wrote:
>
> > I highly endorse mirrors for nearly all purposes.
>
> Are you a member of BAARF?
>
> http://www.miracleas.com/BAARF/BAARF2.html
Never heard of it. I don't quite get it ...
On Wed, October 13, 2010 21:26, Edward Ned Harvey wrote:
> I highly endorse mirrors for nearly all purposes.
Are you a member of BAARF?
http://www.miracleas.com/BAARF/BAARF2.html
:)
a diff to list the file differences between snapshots
http://arc.opensolaris.org/caselog/PSARC/2010/105/mail
Dave
On 10/13/10 15:48, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of dirk schelfhout
Wanted to test the
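For context, a minimal zfs diff run looks roughly like this (the dataset name
data is borrowed from Dirk's message above; the snapshot names are made up):

zfs snapshot data@before
# ... change some files ...
zfs snapshot data@after
zfs diff data@before data@after   # prints +, -, M, R for added/removed/modified/renamed files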
Sorry for the long post, but I know that people trying to decide on hardware
often want to see details about what others are using.
I have the following AS-2022G-URF machine running OpenGaryIndiana[1] that I am
starting to use.
I successfully transferred a deduped zpool with 1.x TB of files and 60 or so
On 13 oct. 2010, at 18:37, Marty Scholes wrote:
> The only thing that still stands out is that network operations (iSCSI and
> NFS) to external drives are slow, correct?
>
> Just for completeness, what happens if you scp a file to the three different
> pools? If the results are the same as NF
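As a sketch of that test (hostnames, pool names, and the file size are made
up):

dd if=/dev/urandom of=/tmp/testfile bs=1024k count=1024   # ~1 GiB test file
time scp /tmp/testfile admin@nexenta:/pool1/
time scp /tmp/testfile admin@nexenta:/pool2/
time scp /tmp/testfile admin@nexenta:/pool3/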
We have an R710 + 3 MD1000s running ZFS, with an Intel 10GbE network card.
There was a period when the R710 froze randomly, back when we were running an
osol b12x release. I checked Google and there were reports of freezes caused by
a new mpt driver used in the b12x releases, which could have been the cause. Changed to
I'd like to see those docs as well.
As all HW raids are driven by software, of course - and software can be buggy.
I don't want to heat up the discussion about ZFS managed discs vs. HW raids,
but if RAID5/6 were that bad, no one would use it anymore.
So… just post the link and I will take a