On Thu, Jul 06, 2006 at 12:46:57AM -0700, Patrick Mauritz wrote:
> Hi,
> after some unscheduled reboots (to put it lightly), I've got an interesting
> setup on my notebook's zfs partition:
> setup: simple zpool, no raid or mirror, a couple of zfs partitions, one zvol
> for swap. /foo is one such
[stirring the pot a little...]
Jim Mauro wrote:
I agree with Greg - for ZFS, I'd recommend a larger number of raidz LUNs,
with a smaller number of disks per LUN, up to 6 disks per raidz LUN.
For 6 disks, 3x2-way RAID-1+0 offers better resiliency than RAID-Z
or RAID-Z2. For 3-5 disks, RAID-Z2
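For illustration, a sketch of the two 6-disk layouts being compared (pool
and device names are hypothetical):

   three 2-way mirrors (RAID-1+0):
   # zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0 \
         mirror c0t4d0 c0t5d0

   one 6-disk raidz2, for comparison:
   # zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0

Either command builds a single pool; the difference is only in the vdev layout.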
On Tue, Jul 18, 2006 at 10:10:33AM +1000, Nathan Kroenert wrote:
> Jeff -
>
> That sounds like a great idea...
>
> Another idea might be to have a zpool create announce the 'availability'
> of any given configuration, and output the single points of failure.
>
> # zpool create mypool a b
Jeff -
That sounds like a great idea...
Another idea might be to have a zpool create announce the 'availability'
of any given configuration, and output the single points of failure.
# zpool create mypool a b c
NOTICE: This pool has no redundancy.
Without hardware redund
I agree with Greg - for ZFS, I'd recommend a larger number of raidz LUNs,
with a smaller number of disks per LUN, up to 6 disks per raidz LUN.
This will more closely align with performance best practices, so it would
be cool to find common ground in terms of a sweet spot for performance and RA
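As a rough sketch of that recommendation (controller and disk names are
made up), a pool built from several small raidz LUNs rather than one wide
one is created in a single command:

   # zpool create tank \
         raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
         raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 \
         raidz c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0

ZFS then stripes dynamically across the three 6-disk raidz vdevs.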
Hi Everyone,
I thought I'd share some benchmarking and playing around that we had
done with making zpools from "disks" that were iSCSI volumes. The
numbers are representative of 6 benchmarking rounds per.
The interesting finding at least for us was the filebench varmail
(50:50 reads-write
On Fri, Jul 07, 2006 at 04:00:38PM -0400, Dale Ghent wrote:
> Add an option to zpool(1M) to dump the pool config as well as the
> configuration of the volumes within it to an XML file. This file
> could then be "sucked in" to zpool at a later date to recreate/
> replicate the pool and its volu
I take it you already have solved the problem.
Yes, my problems went away once my device supported the extended SCSI
instruction set.
Julian
--
Julian King
Computer Officer, University of Cambridge, Unix Support
To maximize the throughput, I'd go with 8 5-disk raid-z{2} luns. Using that
configuration, a full-width stripe write should be a single operation for
each controller. In production, the application needs would probably dictate
the resulting disk layout. If the application doesn't need tons of i/o
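As a hedged sketch of that layout (controller and target numbers are
invented), each 5-disk raidz2 vdev can draw one disk from each of five
controllers, repeated for the remaining vdevs:

   # zpool create tank \
         raidz2 c1t0d0 c2t0d0 c3t0d0 c4t0d0 c5t0d0 \
         raidz2 c1t1d0 c2t1d0 c3t1d0 c4t1d0 c5t1d0

and so on until all eight raidz2 vdevs are listed in the one zpool create.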
ZFS fans,
I'm preparing some analyses on RAS for large JBOD systems such as
the Sun Fire X4500 (aka Thumper). Since there are zillions of possible
permutations, I need to limit the analyses to some common or desirable
scenarios. Naturally, I'd like your opinions. I've already got a few
scenario
Hi.
Sorry for the forward, but maybe this will be more visible that way.
I really think something strange is going on here and it's
virtually impossible that I have a problem with hardware and get
CKSUM errors (many of them) only for ditto blocks.
This is a forwarded message
From: Robert
Mikael Kjerrman wrote:
Jeff,
thanks for your answer, and I almost wish I did type it wrong (the easy
explanation that I messed up :-) but from what I can tell I did get it right
--- zpool commands I ran ---
bash-3.00# grep zpool /.bash_history
zpool
zpool create data raidz c1t0d0 c1t1d0 c1t
James Dickens wrote:
On 7/17/06, Mark Shellenbaum <[EMAIL PROTECTED]> wrote:
The following is the delegated admin model that Matt and I have been
working on. At this point we are ready for your feedback on the
proposed model.
-Mark
PERMISSION GRANTING
zfs allow [-l] [-d] <"ever
Glenn Skinner wrote:
The following is a nit-level comment, so I've directed it only to you,
rather than to the entire list.
Date: Mon, 17 Jul 2006 09:57:35 -0600
From: Mark Shellenbaum <[EMAIL PROTECTED]>
Subject: [zfs-discuss] Proposal: delegated administration
The following i
Hello J.P.,
Monday, July 17, 2006, 3:57:01 PM, you wrote:
>> Well, if in fact sd/ssd with EFI labels still has a 2 TB limit, then
>> create an SMI label with one slice representing the whole disk and put
>> ZFS on that slice. Then, if necessary, manually turn on the write cache.
JPK> How do you suggest tha
On 7/17/06, Jonathan Wheeler <[EMAIL PROTECTED]> wrote:
Hi All,
I've just built an 8 disk zfs storage box, and I'm in the testing phase before
I put it into production. I've run into some unusual results, and I was hoping
the community could offer some suggestions. I've basically made the swit
On 7/17/06, Mark Shellenbaum <[EMAIL PROTECTED]> wrote:
The following is the delegated admin model that Matt and I have been
working on. At this point we are ready for your feedback on the
proposed model.
-Mark
PERMISSION GRANTING
zfs allow [-l] [-d] <"everyone"|user|group> [,..
On Mon, Jul 17, 2006 at 10:11:35AM -0700, Matthew Ahrens wrote:
> > I want root to create a new filesystem for a new user under
> > the /export/home filesystem, but then have that user get the
> > right privs via inheritance rather than requiring root to run
> > a set of zfs commands.
>
> In that
Bart Smaalders wrote:
Matthew Ahrens wrote:
On Mon, Jul 17, 2006 at 10:00:44AM -0700, Bart Smaalders wrote:
So as administrator what do I need to do to set
/export/home up for users to be able to create their own
snapshots, create dependent filesystems (but still mounted
underneath their /expor
Or if you have the right patches ...
http://blogs.sun.com/roller/page/torrey?entry=really_big_luns
Cindy Swearingen wrote:
Hi Julian,
Can you send me the documentation pointer that says 2 TB isn't supported
on the Solaris 10 6/06 release?
The 2 TB limit was lifted in the Solaris 10 1/06 rele
Matthew Ahrens wrote:
On Mon, Jul 17, 2006 at 10:00:44AM -0700, Bart Smaalders wrote:
So as administrator what do I need to do to set
/export/home up for users to be able to create their own
snapshots, create dependent filesystems (but still mounted
underneath their /export/home/usrname)?
In ot
On Mon, Jul 17, 2006 at 10:00:44AM -0700, Bart Smaalders wrote:
> >>So as administrator what do I need to do to set
> >>/export/home up for users to be able to create their own
> >>snapshots, create dependent filesystems (but still mounted
> >>underneath their /export/home/usrname)?
> >>
> >>In oth
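Using the syntax from the proposal (option letters and permission names as
quoted there, everything else illustrative), one way to do that would be an
explicit grant per user on their own home filesystem, applied locally and to
descendants:

   # zfs allow -l -d usrname create,snapshot,mount export/home/usrname

so the user can snapshot and create filesystems under /export/home/usrname
but nowhere else (dataset name assumed to match the mountpoint).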
Bart Smaalders wrote:
Matthew Ahrens wrote:
On Mon, Jul 17, 2006 at 09:44:28AM -0700, Bart Smaalders wrote:
Mark Shellenbaum wrote:
PERMISSION GRANTING
zfs allow -c [,...]
-c "Create" means that the permission will be granted (Locally) to the
creator on any newly-created descendant file
Matthew Ahrens wrote:
On Mon, Jul 17, 2006 at 09:44:28AM -0700, Bart Smaalders wrote:
Mark Shellenbaum wrote:
PERMISSION GRANTING
zfs allow -c [,...]
-c "Create" means that the permission will be granted (Locally) to the
creator on any newly-created descendant filesystems.
ALLOW EXA
On Mon, 17 Jul 2006, Roch wrote:
>
> Sorry to plug my own blog but have you had a look at these ?
>
> http://blogs.sun.com/roller/page/roch?entry=when_to_and_not_to (raidz)
> http://blogs.sun.com/roller/page/roch?entry=the_dynamics_of_zfs
>
> Also, my thinking is that raid-z is probabl
On Mon, Jul 17, 2006 at 09:44:28AM -0700, Bart Smaalders wrote:
> Mark Shellenbaum wrote:
> >PERMISSION GRANTING
> >
> > zfs allow -c [,...]
> >
> >-c "Create" means that the permission will be granted (Locally) to the
> >creator on any newly-created descendant filesystems.
> >
> >ALLOW EXAMPL
Mark Shellenbaum wrote:
The following is the delegated admin model that Matt and I have been
working on. At this point we are ready for your feedback on the
proposed model.
Overall this looks really good.
I might have some detailed comments after a third reading, but I think
it certainly co
Mark Shellenbaum wrote:
The following is the delegated admin model that Matt and I have been
working on. At this point we are ready for your feedback on the
proposed model.
-Mark
PERMISSION GRANTING
zfs al
Dana H. Myers wrote:
Jonathan Wheeler wrote:
---Sequential Output ---Sequential Input-- --Random--
-Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
mirror MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU /sec %CPU
8 disk 8196
I too have seen this recently, due to a partially failed drive.
When I physically removed the drive, ZFS figured everything out and
I was back up and running. Alas, I have been unable to recreate.
There is a bug lurking here, if someone has a more clever way to
test, we might be able to nail it d
Robert Milkowski writes:
> Hello zfs-discuss,
>
> What would you rather propose for ZFS+ORACLE - zvols or just files
> from the performance standpoint?
>
>
> --
> Best regards,
> Robert mailto:[EMAIL PROTECTED]
> ht
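Both options come from the same pool; a hedged sketch of each (names, sizes,
and the 8 KB database block size are assumptions):

   a zvol exported to Oracle as a raw device:
   # zfs create -V 20g tank/oradata
   (it then appears as /dev/zvol/rdsk/tank/oradata)

   a filesystem holding datafiles, with recordsize matched to an 8 KB db block:
   # zfs create tank/oradata
   # zfs set recordsize=8k tank/oradata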
Sorry to plug my own blog but have you had a look at these ?
http://blogs.sun.com/roller/page/roch?entry=when_to_and_not_to (raidz)
http://blogs.sun.com/roller/page/roch?entry=the_dynamics_of_zfs
Also, my thinking is that raid-z is probably more friendly
when the config contains
The following is the delegated admin model that Matt and I have been
working on. At this point we are ready for your feedback on the
proposed model.
-Mark
PERMISSION GRANTING
zfs allow [-l] [-d] <"everyone"|user|group> [,...] \
zfs allow [-l] [-d] -u [,..
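As a worked example of the first form above (dataset name invented):

   # zfs allow -l everyone snapshot tank/public

which, under the proposal, would let any user snapshot tank/public itself
but not its descendants (no -d).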
Jonathan Wheeler wrote:
I'm not a ZFS expert - I'm just an enthusiastic user inside Sun.
Here are some brief observations:
> Bonnie
> ---Sequential Output ---Sequential Input--
> --Random--
> -Per Char- --Block--- -Rewrite-- -Per Char- --Block---
> --See
Hi Sean, you suffer from an extreme bout of
6429205 each zpool needs to monitor its throughput and throttle heavy writers
When this is fixed, your responsiveness will be better.
Note to Mark, Sean is more than willing to test any fix we
would have for this...
-r
On Mon, 17 Jul 2006, Cindy Swearingen wrote:
Hi Julian,
Can you send me the documentation pointer that says 2 TB isn't supported
on the Solaris 10 6/06 release?
As per my original post:
http://docs.sun.com/app/docs/doc/817-5093/6mkisoq1k?a=view#disksconcepts-17
This doesn't say which version
On Mon, 17 Jul 2006, Darren J Moffat wrote:
> Jeff Bonwick wrote
> > zpool create data unreplicated A B C
> >
> > The extra typing would be annoying, but would make it almost impossible
> > to get the wrong behavior by accident.
>
> I think that is a very good idea from a usability view point.
Well, it's not related to RAID-Z at all, but yes, mirroring is better with ZFS.
The checksums allow bad data on either side of the mirror to be detected, so if
for some reason one disk is sometimes losing or damaging a write, the other
disk can provide the good data (and ZFS can tell which is co
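A minimal way to see that behaviour (pool and device names hypothetical) is
a two-way mirror plus a scrub, which reads every block, verifies its
checksum, and rewrites any bad copy from the good side:

   # zpool create tank mirror c0t0d0 c0t1d0
   # zpool scrub tank
   # zpool status -v tank

The CKSUM column in the status output counts blocks that failed checksum
verification on each device.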
Jeff Bonwick wrote
zpool create data unreplicated A B C
The extra typing would be annoying, but would make it almost impossible
to get the wrong behavior by accident.
I think that is a very good idea from a usability view point. It is
better to have to type a few more chars to explic
Hi Julian,
Can you send me the documentation pointer that says 2 TB isn't supported
on the Solaris 10 6/06 release?
The 2 TB limit was lifted in the Solaris 10 1/06 release, as described
here:
http://docs.sun.com/app/docs/doc/817-5093/6mkisoq1j?a=view#ftzen
Thanks,
Cindy
J.P. King wrote:
Well, if in fact sd/ssd with EFI labels still has a 2 TB limit, then
create an SMI label with one slice representing the whole disk and put
ZFS on that slice. Then, if necessary, manually turn on the write cache.
Well, in fact it turned out that the firmware on the device needed
upgrading to support the a
Well, if in fact sd/ssd with EFI labels still has a 2 TB limit, then
create an SMI label with one slice representing the whole disk and put
ZFS on that slice. Then, if necessary, manually turn on the write cache.
How do you suggest that I create a slice representing the whole disk?
format (with or without
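For what it's worth, a rough sketch of that approach (disk name
hypothetical): put an SMI label on the disk in format(1M), size slice 0 to
cover the whole disk, then build the pool on the slice:

   # format -e c2t0d0      (label -> SMI, partition -> grow s0 to the full disk)
   # zpool create bigpool c2t0d0s0

Since ZFS is handed a slice rather than a whole disk, it won't enable the
drive's write cache itself; that can be done by hand from format -e's
cache/write_cache menu, with the usual caveats about volatile caches.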
Darren J Moffat wrote:
But the real question is how you tell the admin "it's done, now the filesystem
is safe". With compression you don't generally care if some old stuff
didn't compress (and with the current implementation it has to compress
a certain amount or it gets written uncompressed
Hello J.P.,
Monday, July 17, 2006, 2:15:56 PM, you wrote:
JPK> Possibly not the right list, but the only appropriate one I knew about.
JPK> I have a Solaris box (just reinstalled to Sol 10 606) with a 3.19TB device
JPK> hanging off it, attached by fibre.
JPK> Solaris refuses to see this device
This is change request:
6428639 large writes to zvol synchs too much, better cut down a little
which I have a fix for, but it hasn't been put back.
Neil.
Jürgen Keil wrote on 07/17/06 04:18:
Further testing revealed
that it wasn't an iSCSI performance issue but a zvol
issue. Testing on a SA
Hi All,
I've just built an 8 disk zfs storage box, and I'm in the testing phase before
I put it into production. I've run into some unusual results, and I was hoping
the community could offer some suggestions. I've basically made the switch to
Solaris on the promises of ZFS alone (yes I'm that
Possibly not the right list, but the only appropriate one I knew about.
I have a Solaris box (just reinstalled to Sol 10 606) with a 3.19TB device
hanging off it, attached by fibre.
Solaris refuses to see this device except as a 1.19 TB device.
Documentation that I have found
(http://docs.
Thanks Robert,
that's exactly what I was looking for. I will try it when I come back home
tomorrow. Is it possible to set this value in /etc/system, too?
Cheers,
Tom
Jeff,
thanks for your answer, and I almost wish I did type it wrong (the easy
explanation that I messed up :-) but from what I can tell I did get it right
--- zpool commands I ran ---
bash-3.00# grep zpool /.bash_history
zpool
zpool create data raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c2t0d0 c2
> Further testing revealed
> that it wasn't an iSCSI performance issue but a zvol
> issue. Testing on a SATA disk locally, I get these
> numbers (sequentual write):
>
> UFS: 38MB/s
> ZFS: 38MB/s
> Zvol UFS: 6MB/s
> Zvol Raw: ~6MB/s
>
> ZFS is nice and fast but Zvol performance just drops
> off
Jeff Bonwick wrote:
One option -- I confess up front that I don't really like it -- would be
to make 'unreplicated' an explicit replication type (in addition to
mirror and raidz), so that you couldn't get it by accident:
zpool create data unreplicated A B C
>
The extra typing would b
> I have a 10 disk raidz pool running Solaris 10 U2, and after a reboot
> the whole pool became unavailable after apparently losing a disk drive.
> [...]
> NAME      STATE     READ WRITE CKSUM
> data      UNAVAIL      0     0     0  insufficient replicas
>   c1t0d0  ON
Hi,
so it happened...
I have a 10 disk raidz pool running Solaris 10 U2, and after a reboot the whole
pool became unavailable after apparently losing a disk drive. (The drive is
seemingly OK as far as I can tell from other commands.)
--- bootlog ---
Jul 17 09:57:38 expprd fmd: [ID 441519 daemon
Hello,
I'm curious if anyone would mind sharing their experiences with zvols. I
recently started using a zvol as an iSCSI backend and was surprised by the
performance I was getting. Further testing revealed that it wasn't an iSCSI
performance issue but a zvol issue. Testing on a SATA disk l
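For reference, the kind of setup being described (names and size invented,
and the iscsitadm invocation is from memory, so check iscsitadm(1M)): create
a zvol and hand its raw device to the Solaris iSCSI target as backing store:

   # zfs create -V 50g tank/iscsivol
   # iscsitadm create target -b /dev/zvol/rdsk/tank/iscsivol mytarget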