Bob,
Using a separate pool would dictate other limitations, such as not being able to
use more space than what's allocated in the pool. You could "add" space as
needed, but you can't remove (move) devices freely.
By using a shared pool with a hint of desired vdev/space allocation policy, you
co
On Wed, Feb 17, 2010 at 00:27, Ethan wrote:
> On Tue, Feb 16, 2010 at 23:57, Daniel Carosone wrote:
>
>> On Tue, Feb 16, 2010 at 11:39:39PM -0500, Ethan wrote:
>> > If slice 2 is the whole disk, why is zpool trying to use slice 8 for
>> all
>> > but one disk?
>>
>> Because it's finding at leas
On Tue, Feb 16, 2010 at 23:57, Daniel Carosone wrote:
> On Tue, Feb 16, 2010 at 11:39:39PM -0500, Ethan wrote:
> > If slice 2 is the whole disk, why is zpool trying to use slice 8 for
> all
> > but one disk?
>
> Because it's finding at least part of the labels for the pool member there.
>
> Ple
On Tue, Feb 16, 2010 at 10:33:26PM -0600, David Dyer-Bennet wrote:
> Here's what I've started: I've created a mirrored pool called rp2 on
> the new disks, and I'm zfs send -R'ing a current snapshot over to the new
> disks. In fact, it just finished. I've got an altroot set, and
> obviously I ga
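A rough sketch of the replication step being described, assuming the old root pool is named rpool; rp2 and the altroot idea come from David's message, while the snapshot name, mount point, and device names below are placeholders:
# create the new mirrored pool under an alternate root so its mounts stay out of the way
# (for a boot pool the devices typically need SMI labels and a slice 0)
zpool create -R /rp2 rp2 mirror c5t0d0s0 c5t1d0s0
# take a recursive snapshot of the current root pool and replicate everything,
# preserving properties and descendant datasets, without mounting on arrival
zfs snapshot -r rpool@migrate
zfs send -R rpool@migrate | zfs recv -Fdu rp2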
On Tue, Feb 16 at 9:44, Brian E. Imhoff wrote:
But, at the end of the day, this is quite a bomb: "A single raidz2
vdev has about as many IOs per second as a single disk, which could
really hurt iSCSI performance."
If I have to break 24 disks up into multiple vdevs to get the
expected performan
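To make the quoted point concrete: random-read IOPS scale roughly with the number of top-level vdevs rather than the number of disks, which is why 24 disks are usually split into several narrower raidz2 vdevs. A sketch with purely hypothetical device names:
# three 8-disk raidz2 vdevs instead of one 24-wide vdev:
# capacity of 18 data disks, but roughly three disks' worth of random-read IOPS
zpool create tank \
  raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 \
  raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0 \
  raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 c3t6d0 c3t7d0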
On Tue, Feb 16, 2010 at 11:39:39PM -0500, Ethan wrote:
> If slice 2 is the whole disk, why is zpool trying to use slice 8 for all
> but one disk?
Because it's finding at least part of the labels for the pool member there.
Please check the partition tables of all the disks, and use zdb -l on
th
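For reference, the checks being suggested would look roughly like this; c9t4d0 is taken from the pool listing in this thread, and which slice is the right one is exactly what needs confirming:
# show the partition table of one of the disks
prtvtoc /dev/rdsk/c9t4d0s2
# dump the ZFS labels from the slice zpool is picking up...
zdb -l /dev/rdsk/c9t4d0s8
# ...and from the slice the pool is expected to live on
zdb -l /dev/rdsk/c9t4d0s0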
On Tue, Feb 16, 2010 at 23:24, Richard Elling wrote:
> On Feb 16, 2010, at 7:57 PM, Ethan wrote:
> > On Tue, Feb 16, 2010 at 22:35, Daniel Carosone wrote:
> > On Wed, Feb 17, 2010 at 02:30:28PM +1100, Daniel Carosone wrote:
> > > > c9t4d0s8 UNAVAIL corrupted data
> > > >
I've got the new controller and the new system disks running in the
system, for anybody keeping score at home.
So I'm looking at how to migrate to the new system disks. They're a
different size (160GB vs 80GB) and form factor (2.5" vs 3.5") from the
old disks (I've got a mirrored pool for my
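For anyone keeping score, the usual extra steps once the data is on the new mirror (if it is to be a boot pool) are roughly these; the pool, BE, and device names are guesses, not taken from David's setup:
# point the new pool at the dataset it should boot from
zpool set bootfs=rp2/ROOT/opensolaris rp2
# install boot blocks on both halves of the new mirror (x86/GRUB)
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c5t0d0s0
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c5t1d0s0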
On Feb 16, 2010, at 7:57 PM, Ethan wrote:
> On Tue, Feb 16, 2010 at 22:35, Daniel Carosone wrote:
> On Wed, Feb 17, 2010 at 02:30:28PM +1100, Daniel Carosone wrote:
> > > c9t4d0s8 UNAVAIL corrupted data
> > > c9t5d0s2 ONLINE
> > > c9t2d0s8 UNAVAIL corrupted
On Tue, Feb 16, 2010 at 22:35, Daniel Carosone wrote:
> On Wed, Feb 17, 2010 at 02:30:28PM +1100, Daniel Carosone wrote:
> > > c9t4d0s8 UNAVAIL corrupted data
> > > c9t5d0s2 ONLINE
> > > c9t2d0s8 UNAVAIL corrupted data
> > > c9t1d0s8 UNAVAIL
Anyone else got stats to share?
Note: the below is 4x Caviar Black 500GB drives, 1x Intel X25-M set up as both
ZIL and L2ARC, a decent ASUS mobo, and 2GB of fast RAM.
-marc
r...@opensolaris130:/tank/myfs# /usr/benchmarks/bonnie++/bonnie++ -u root -d
/tank/myfs -f -b
Using uid:0, gid:0.
Writing intelligen
On Tue, Feb 16, 2010 at 04:47:11PM -0800, Christo Kutrovsky wrote:
> One of the ideas that came up is to have a "max devices" property for
> each dataset, and limit how many mirrored devices a given dataset
> can be spread across. I mean if you don't need the performance, you can
> limit (minimize) the
On Wed, Feb 17, 2010 at 02:30:28PM +1100, Daniel Carosone wrote:
> > c9t4d0s8 UNAVAIL corrupted data
> > c9t5d0s2 ONLINE
> > c9t2d0s8 UNAVAIL corrupted data
> > c9t1d0s8 UNAVAIL corrupted data
> > c9t0d0s8 UNAVAIL corrupted data
On Tue, Feb 16, 2010 at 10:06:13PM -0500, Ethan wrote:
> This is the current state of my pool:
>
> et...@save:~# zpool import
> pool: q
> id: 5055543090570728034
> state: UNAVAIL
> status: One or more devices contains corrupted data.
> action: The pool cannot be imported due to damaged devi
On Sun, Feb 14, 2010 at 12:51 PM, Tracey Bernath wrote:
> I went from all four disks of the array at 100%, doing about 170 read
> IOPS/25MB/s
> to all four disks of the array at 0%, once hitting nearly 500 IOPS/65MB/s
> off the cache drive (@ only 50% load).
> And, keep in mind this was on less
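For anyone who wants to gather comparable numbers, per-device IOPS and bandwidth (including the cache device) can be watched with something like the following; tank is a placeholder pool name:
# per-vdev and per-device read/write ops and bandwidth, refreshed every 5 seconds
zpool iostat -v tank 5
# standard per-device utilisation and service times
iostat -xn 5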
On Tue, Feb 16, 2010 at 06:28:05PM -0800, Richard Elling wrote:
> The problem is that MTBF measurements are only one part of the picture.
> Murphy's Law says something will go wrong, so also plan on backups.
+n
> > Imagine this scenario:
> > You lost 2 disks, and unfortunately you lost the 2 side
This is the current state of my pool:
et...@save:~# zpool import
pool: q
id: 5055543090570728034
state: UNAVAIL
status: One or more devices contains corrupted data.
action: The pool cannot be imported due to damaged devices or data.
see: http://www.sun.com/msg/ZFS-8000-5E
config:
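One workaround sometimes suggested for an import failure like this (offered only as a hedge, not as a diagnosis) is to make zpool import search a directory containing only the device nodes you want it to consider, so stale labels on other slices stop confusing it; which slice is correct is exactly the open question in this thread:
# build a directory of symlinks to the slices the pool should actually use
mkdir /tmp/qdev
ln -s /dev/dsk/c9t5d0s2 /tmp/qdev/
ln -s /dev/dsk/c9t4d0s2 /tmp/qdev/
# ...and so on for the remaining disks...
# ask zpool to search only that directory
zpool import -d /tmp/qdev q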
On Feb 16, 2010, at 4:47 PM, Christo Kutrovsky wrote:
> Just finished reading the following excellent post:
>
> http://queue.acm.org/detail.cfm?id=1670144
>
> And started thinking what would be the best long term setup for a home
> server, given limited number of disk slots (say 10).
>
> I cons
On Tue, 16 Feb 2010, Christo Kutrovsky wrote:
The goal was to do "damage control" in a disk failure scenario
involving data loss. Back to the original question/idea.
Which would you prefer: lose a couple of datasets, or lose a
little bit of every file in every dataset?
This ignores the f
On Tue, 16 Feb 2010, Christo Kutrovsky wrote:
Just finished reading the following excellent post:
http://queue.acm.org/detail.cfm?id=1670144
A nice article, even if I don't agree with all of its surmises and
conclusions. :-)
In fact, I would reach a different conclusion.
I considered some
On Feb 16, 2010, at 12:39 PM, Daniel Carosone wrote:
> On Mon, Feb 15, 2010 at 09:11:02PM -0600, Tracey Bernath wrote:
>> On Mon, Feb 15, 2010 at 5:51 PM, Daniel Carosone wrote:
>>> Just be clear: mirror ZIL by all means, but don't mirror l2arc, just
>>> add more devices and let them load-balance.
Thanks for your feedback, James, but that's not the direction I wanted this
discussion to go.
The goal was not how to create a better solution for an enterprise.
The goal was to do "damage control" in a disk failure scenario involving data
loss. Back to the original question/idea.
Which
On Feb 16, 2010, at 9:44 AM, Brian E. Imhoff wrote:
> Some more back story. I initially started with Solaris 10 u8, and was
> getting 40ish MB/s reads, and 65-70MB/s writes, which was still a far cry
> from the performance I was getting with OpenFiler. I decided to try
> OpenSolaris 2009.06,
On Tue, Feb 16, 2010 at 6:47 PM, Christo Kutrovsky wrote:
> Just finished reading the following excellent post:
>
> http://queue.acm.org/detail.cfm?id=1670144
>
> And started thinking what would be the best long term setup for a home
> server, given limited number of disk slots (say 10).
>
> I con
On Tue, Feb 16, 2010 at 3:13 PM, Tiernan OToole wrote:
> Cool... Thanks for the advice! But why would it be a good idea to change
> layout on bigger disks?
On top of the reasons Bob gave, your current layout will be very
unbalanced after adding devices. You can't currently add more devices
to a
Just finished reading the following excellent post:
http://queue.acm.org/detail.cfm?id=1670144
And started thinking what would be the best long term setup for a home server,
given limited number of disk slots (say 10).
I considered something like simply doing a 2-way mirror. What are the chances fo
On Tue, 16 Feb 2010, Tiernan OToole wrote:
Cool... Thanks for the advice! But why would it be a good idea to
change layout on bigger disks?
Larger disks take longer to resilver, have a higher probability of
encountering an error during resilvering or normal use, and are often
slower. This m
Robert,
That would be pretty cool, especially if it makes it into the 2010.02 release. I
hope there are no weird special cases that pop up from this improvement.
Regarding the workaround:
That's not my experience, unless it behaves differently on ZVOLs and datasets.
On ZVOLs it appears the setting ki
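For context, the pool-wide workaround being contrasted with the hoped-for per-dataset control is the old global zil_disable tunable, roughly:
# persistent across reboots, but global: it affects every dataset on the host
echo "set zfs:zil_disable = 1" >> /etc/system
# or poke the running kernel (also global, lost at reboot);
# for filesystems it only takes effect when the dataset is (re)mounted
echo "zil_disable/W0t1" | mdb -kw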
Ok, now that you explained it, it makes sense. Thanks for replying, Daniel. I
feel better now :) Suddenly, that Gigabyte i-RAM is no longer a necessity but a
"nice to have" thing.
What would be really good to have is that per-dataset ZIL control in
2010.02. And perhaps add another mode "sync
On 16/02/2010 22:53, Christo Kutrovsky wrote:
Jeff, thanks for link, looking forward to per data set control.
6280630 zil synchronicity
(http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6280630)
It's been open for 5 years now :) Looking forward to not compromising my entire
storage
Cool... Thanks for the advice! But why would it be a good idea to change layout
on bigger disks?
-Original Message-
From: Brandon High
Sent: 16 February 2010 18:26
To: Tiernan OToole
Cc: Robert Milkowski ; zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Plan for upgrading a ZFS b
On Tue, Feb 16, 2010 at 02:53:18PM -0800, Christo Kutrovsky wrote:
> looking to answer for myself the following question:
> Do I need to roll back all my NTFS volumes on iSCSI to the last available
> snapshot every time there's a power failure involving the ZFS storage server
> with a disabled ZIL?
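Spelled out, the rollback being weighed would be per zvol, something like this (dataset and snapshot names are placeholders):
# discard everything written to the iSCSI-backing zvol since its newest snapshot
zfs rollback tank/iscsi/ntfsvol@lastgood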
Jeff, thanks for link, looking forward to per data set control.
6280630 zil synchronicity
(http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6280630)
It's been open for 5 years now :) Looking forward to not compromising my entire
storage with a disabled ZIL when I only need it on a few d
> People used fastfs for years in specific environments (hopefully
> understanding the risks), and disabling the ZIL is safer than fastfs.
> Seems like it would be a useful ZFS dataset parameter.
We agree. There's an open RFE for this:
6280630 zil synchronicity
No promise on date, but it will
Hi Bruno,
I've tried to reproduce this panic you are seeing. However, I had
difficulty following your procedure. See below:
On 02/08/10 15:37, Bruno Damour wrote:
On 02/ 8/10 06:38 PM, Lori Alt wrote:
Can you please send a complete list of the actions taken: The
commands you used to c
Darren J Moffat wrote:
You have done a risk analysis, and if you are happy that your NTFS
filesystems could be corrupt on those ZFS ZVOLs if you lose data, then
you could consider turning off the ZIL. Note though that it isn't
just those ZVOLs you are serving to Windows that lose access to a ZIL
On Tue, Feb 16, 2010 at 06:20:05PM +0100, Juergen Nickelsen wrote:
> Tony MacDoodle writes:
>
> > Mounting ZFS filesystems: (1/6)cannot mount '/data/apache': directory is not
> > empty
> > (6/6)
> > svc:/system/filesystem/local:default: WARNING: /usr/sbin/zfs mount -a
> > failed: exit status 1
>
On Mon, Feb 15, 2010 at 09:11:02PM -0600, Tracey Bernath wrote:
> On Mon, Feb 15, 2010 at 5:51 PM, Daniel Carosone wrote:
> > Just be clear: mirror ZIL by all means, but don't mirror l2arc, just
> > add more devices and let them load-balance. This is especially true
> > if you're sharing ssd wri
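In zpool terms, the distinction being drawn looks roughly like this; device names are placeholders:
# slog (ZIL device): mirroring is worthwhile, since losing it can cost recent synchronous writes
zpool add tank log mirror c3t0d0 c3t1d0
# L2ARC: don't mirror; just add devices and reads load-balance across them
zpool add tank cache c3t2d0 c3t3d0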
Eric, is this answer by George wrong?
http://opensolaris.org/jive/message.jspa?messageID=439187#439187
Are we to expect the fix soon or is there still no schedule?
Thanks,
Moshe
Hi,
when I delegate the ZFS roles to a user, the user can create a snapshot of a ZFS
filesystem, but cannot snapshot a zone contained in that filesystem.
The output is:
$ /usr/sbin/zfs snapshot tank/zones/dashboardbuild/ROOT/z...@1install
cannot create snapshot 'tank/zones/dashboardbuild/ROOT/z...
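If the snag is simply that the delegation doesn't reach the zone's child datasets, granting the permissions with descendant inheritance is the usual first thing to try; the user name below is a placeholder:
# grant snapshot (plus mount, which some snapshot operations also require)
# on the filesystem and all of its descendants
zfs allow -ld builduser snapshot,mount tank/zones/dashboardbuild
# show what is actually delegated at the level that failed
zfs allow tank/zones/dashboardbuild/ROOT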
I'm trying to import a pool into b132 which once had dedup enabled, after the
machine was shut down with an "init 5".
However, the import hangs the whole machine and I eventually get kicked off my
SSH sessions. As it's a VM, I can see that processor usage jumps up to near
100% very quickly, and
On Tue, Feb 16, 2010 at 8:25 AM, Tiernan OToole wrote:
> So, does that work with RAIDZ1 and 2 pools?
Yes. Replace all the disks in one vdev, and that vdev will become
larger. Your disk layout won't change though - You'll still have a
raidz vdev, a raidz2 vdev. It might be a good idea to revise th
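A sketch of that replace-to-grow cycle; device names are placeholders, and on builds with the autoexpand pool property the new space appears automatically once the last resilver finishes:
# let the vdev grow once every member has been replaced
zpool set autoexpand=on tank
# swap each disk in the vdev for a larger one, one at a time,
# waiting for the resilver to complete before starting the next
zpool replace tank c0t0d0 c2t0d0
zpool status tank
# when the last disk is done, the extra capacity shows up
zpool list tank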
Some more back story. I initially started with Solaris 10 u8, and was getting
40ish MB/s reads, and 65-70MB/s writes, which was still a far cry from the
performance I was getting with OpenFiler. I decided to try OpenSolaris
2009.06, thinking that since it was more "state of the art & up to dat
On Tue, 16 Feb 2010, Dave Pooser wrote:
If I go to 10x 2TB in a RAIDZ3, will the extra spindles increase
speed, or will the extra parity writes reduce speed, or will the two factors
offset and leave things a wash?
I should mention that the usage of this system is as storage for large
(5-300GB)
Tony MacDoodle writes:
> Mounting ZFS filesystems: (1/6)cannot mount '/data/apache': directory is not
> empty
> (6/6)
> svc:/system/filesystem/local:default: WARNING: /usr/sbin/zfs mount -a
> failed: exit status 1
>
> And yes, there is data in the /data/apache file system...
I think it is co
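Juergen's reply is cut off here, but the two usual ways out of the "directory is not empty" failure look roughly like this, assuming files ended up under /data/apache while the dataset was not mounted; the dataset name below is a guess, so check it with zfs list first:
# see what is sitting in the underlying directory
ls -la /data/apache
# option 1: move the stray files aside, then mount normally
mv /data/apache /data/apache.stray
zfs mount -a
# option 2: overlay-mount on top of the non-empty directory (its contents stay hidden)
zfs mount -O data/apache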
So, does that work with RAIDZ1 and 2 pools?
On Tue, Feb 16, 2010 at 1:47 PM, Robert Milkowski wrote:
>
>
> On Mon, 15 Feb 2010, Tiernan OToole wrote:
>
> Good morning all.
>>
>> I am in the process of building my V1 SAN for media storage in house, and I
>> am already thinking of the V2 build...
On Mon, 15 Feb 2010, Tracey Bernath wrote:
If the device itself was full, and items were falling off the L2ARC, then I
could see having two
separate cache devices, but since I am only at about 50% utilization of the
available capacity, and
maxing out the IO, then mirroring seemed smarter.
Am
On Feb 15, 2010, at 11:34 PM, Ragnar Sundblad wrote:
>
> On 15 feb 2010, at 23.33, Bob Beverage wrote:
>
>>> On Wed, Feb 10, 2010 at 10:06 PM, Brian E. Imhoff
>>> wrote:
>>> I've seen exactly the same thing. Basically, terrible transfer rates
>>> with Windows and the server sitting there
Why would I get the following error:
Reading ZFS config: done.
Mounting ZFS filesystems: (1/6)cannot mount '/data/apache': directory is not
empty
(6/6)
svc:/system/filesystem/local:default: WARNING: /usr/sbin/zfs mount -a
failed: exit status 1
And yes, there is data in the /data/apache file syste
> If I go to 10x 2TB in a RAIDZ3, will the extra spindles increase
> speed, or will the extra parity writes reduce speed, or will the two factors
> offset and leave things a wash?
I should mention that the usage of this system is as storage for large
(5-300GB) video files, so what's most important
I currently am getting good speeds out of my existing system (8x 2TB in a
RAIDZ2 exported over fibre channel) but there's no such thing as too much
speed, and these other two drive bays are just begging for drives in
them... If I go to 10x 2TB in a RAIDZ3, will the extra spindles increase
speed, o
On Mon, 15 Feb 2010, Tiernan OToole wrote:
Good morning all.
I am in the process of building my V1 SAN for media storage in house, and I
am already thinking of the V2 build...
Currently, there are 8 250GB HDDs and 3 500GB disks. The 8 250s are in a
RAIDZ2 array, and the 3 500s will be in RAIDZ
On Mon, Feb 15, 2010 at 5:51 PM, Daniel Carosone wrote:
> On Sun, Feb 14, 2010 at 11:08:52PM -0600, Tracey Bernath wrote:
> > Now, to add the second SSD ZIL/L2ARC for a mirror.
>
> Just be clear: mirror ZIL by all means, but don't mirror l2arc, just
> add more devices and let them load-balance.
I have booted up an osol-dev-131 live CD on a Dell Precision T7500,
and the AHCI driver successfully loaded, to give access
to the two SATA DVD drives in the machine.
(Unfortunately, I did not have the opportunity to attach
any hard drives, but I would expect that also to work.)
'scanpci' identif
Richard,
thanks for the heads-up. I found some material here that sheds a bit
more light on it:
http://en.wikipedia.org/wiki/ZFS
http://all-unix.blogspot.com/2007/04/transaction-file-system-and-cow.html
Regards,
heinz
Richard Elling wrote:
On Feb 15, 2010, at 8:43 PM, heinz zerbes wrote: