>The write cache is _not_ being disabled. The write cache is being marked
>as non-volatile.
Of course you're right :) Please filter my postings with a "sed 's/write
cache/write cache flush/g'" ;)
>BTW, why is a Sun/Oracle branded product not properly respecting the NV
>bit in the cache flush command?
>Oh, one more comment. If you don't mirror your ZIL, and your unmirrored SSD
>goes bad, you lose your whole pool. Or at least suffer data corruption.
Hmmm, I thought that in that case ZFS reverts to the "regular on-disk" ZIL?
With kind regards,
Jeroen
> > Just to make sure you know ... if you disable the ZIL altogether, and you
> > have a power interruption, failed cpu, or kernel halt, then you're likely to
> > have a corrupt unusable zpool, or at least data corruption. If that is
> > indeed acceptable to you, go nuts. ;-)
>
> I believe
> So you think it would be ok to shutdown, physically remove the log device,
> and then power back on again, and force import the pool? So although there
> may be no "live" way to remove a log device from a pool, it might still be
> possible if you offline the pool to ensure writes are all committed
> if you disable the ZIL altogether, and you have a power interruption, failed
> cpu, or kernel halt, then you're likely to have a corrupt unusable zpool
the pool will always be fine, no matter what.
> or at least data corruption.
yeah, it's a good bet that data sent to your file or zvol will be lost.
> If the ZIL device goes away then zfs might refuse to use the pool
> without user affirmation (due to potential loss of uncommitted
> transactions), but if the dedicated ZIL device is gone, zfs will use
> disks in the main pool for the ZIL.
>
> This has been clarified before on the list by top zfs developers.
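For what it's worth, on builds recent enough to support it there is also an
escape hatch for importing a pool whose separate log device has gone missing
(pool name below is made up):
zpool import -m tank
That accepts the loss of any uncommitted log records and brings the pool in
using the main pool disks for the ZIL.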
> Anyway, my question is, [...]
> as expected I can't import it because the pool was created
> with a newer version of ZFS. What options are there to import?
I'm quite sure there is no option to import or receive or downgrade a zfs
filesystem from a later version. I'm pretty sure your only option is to run a
build that supports that newer pool version.
On Tue, 30 Mar 2010, Edward Ned Harvey wrote:
If this is true ... Suppose you shutdown a system, remove the ZIL device,
and power back on again. What will happen? I'm informed that with current
versions of solaris, you simply can't remove a zil device once it's added to
a pool. (That's changed in more recent OpenSolaris builds, as I understand it.)
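On builds that do have log device removal (zpool version 19 and later, if I
remember right), taking the slog out is a one-liner; the device name here is
made up:
zpool remove tank c9t5d0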
> Again, we can't get a straight answer on this one..
> (or at least not 1 straight answer...)
>
> Since the ZIL logs are committed atomically they are either committed
> in FULL, or NOT at all (by way of rollback of incomplete ZIL applies at
> zpool mount time / or transaction rollbacks if th
On Tue, 30 Mar 2010, Edward Ned Harvey wrote:
But the speedup of disabling the ZIL altogether is appealing (and would
probably be acceptable in this environment).
Just to make sure you know ... if you disable the ZIL altogether, and you
have a power interruption, failed cpu, or kernel halt, then you're likely to
have a corrupt unusable zpool, or at least data corruption.
> The problem that I have now is that each created snapshot is always
> equal to zero... zfs is just not storing the changes that I have made to the
> file system before making a snapshot.
>
> r...@sl-node01:~# zfs list
> NAME USED AVAIL REFER MOUNTPOINT
> mypool01
What size is the gz file if you do an incremental send to a file?
Something like:
zfs send -i sn...@vol sn...@vol | gzip > /someplace/somefile.gz
> standard ZIL: 7m40s (ZFS default)
> 1x SSD ZIL: 4m07s (Flash Accelerator F20)
> 2x SSD ZIL: 2m42s (Flash Accelerator F20)
> 2x SSD mirrored ZIL: 3m59s (Flash Accelerator F20)
> 3x SSD ZIL: 2m47s (Flash Accelerator F20)
> 4x S
> But the speedup of disabling the ZIL altogether is appealing (and would
> probably be acceptable in this environment).
Just to make sure you know ... if you disable the ZIL altogether, and you
have a power interruption, failed cpu, or kernel halt, then you're likely to
have a corrupt unusable zpool, or at least data corruption.
Our backup system has a couple of datasets used for iscsi
that have somehow lost the baseline snapshots they share with the
live system. In fact zfs list -t snapshot doesn't show
any snapshots at all for them. We rotate backup and live
every now and then, so these datasets have been shared
at some time.
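If there really is no common snapshot left, the usual way out is to start over
with a full stream and keep that snapshot around as the new baseline. A rough
sketch, with made-up dataset names:
zfs snapshot live/iscsi01@base
zfs send live/iscsi01@base | zfs recv -F backup/iscsi01
Subsequent runs can then go back to incremental sends from @base.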
Richard Elling wrote:
On Mar 30, 2010, at 3:32 PM, Jeroen Roodhart wrote:
If you are going to trick the system into thinking a volatile cache is
nonvolatile, you might as well disable the ZIL -- the data corruption
potential is the same.
I'm sorry? I believe the F20 has a supercap or the like?
On Mar 30, 2010, at 3:32 PM, Jeroen Roodhart wrote:
>> If you are going to trick the system into thinking a volatile cache is
>> nonvolatile, you might as well disable the ZIL -- the data corruption
>> potential is the same.
>
> I'm sorry? I believe the F20 has a supercap or the like? The advice
>If you are going to trick the system into thinking a volatile cache is
>nonvolatile, you might as well disable the ZIL -- the data corruption
>potential is the same.
I'm sorry? I believe the F20 has a supercap or the like? The advice on:
http://wikis.sun.com/display/Performance/Tuning+ZFS+for+t
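For reference, the usual way to disable the ZIL globally at the time was the
zil_disable tunable (I believe that is also what the page above describes),
either persistently:
echo 'set zfs:zil_disable = 1' >> /etc/system    (then reboot)
or on a running system:
echo zil_disable/W0t1 | mdb -kw
Note that it affects every pool on the host, so it is an all-or-nothing switch.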
I'm running Windows 7 64bit and VMware Player 3 with Solaris 10 64bit
as a guest. I have added an additional hard drive to the virtual Solaris 10
machine as a physical (raw) drive. Solaris 10 can see and use an already
created zpool without problems. I could also create an additional zpool on the
other mounted raw device. I ca
On Mar 30, 2010, at 2:50 PM, Jeroen Roodhart wrote:
> Hi Karsten. Adam, List,
>
> Adam Leventhal wrote:
>
>> Very interesting data. Your test is inherently single-threaded so I'm not
>> surprised that the benefits aren't more impressive -- the flash modules on
>> the F20 card are optimized more for concurrent IOPS than single-threaded latency.
On 03/31/10 10:39 AM, Peter Tribble wrote:
I have a pool (on an X4540 running S10U8) in which a disk failed, and the
hot spare kicked in. That's perfect. I'm happy.
Then a second disk fails.
Now, I've replaced the first failed disk, and it's resilvered and I have my
hot spare back.
But: why hasn't it used the spare to cover the other failed disk?
Hi all,
yes it works with the partitions.
I think that I made a typo during the initial testing of adding a partition as
cache, probably swapped the 0 for an o.
Tested with the b134 gui and text installer on the x86 platform.
So here it goes:
Install opensolaris into a partition and leave some space
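The final step is then just adding the leftover partition as cache; a sketch
with made-up pool and device names:
zpool add mpool cache c8t1d0p2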
Hi Karsten, Adam, List,
Adam Leventhal wrote:
>Very interesting data. Your test is inherently single-threaded so I'm not
>surprised that the benefits aren't more impressive -- the flash modules on the
>F20 card are optimized more for concurrent IOPS than single-threaded latency.
Well, I actual
I have a pool (on an X4540 running S10U8) in which a disk failed, and the
hot spare kicked in. That's perfect. I'm happy.
Then a second disk fails.
Now, I've replaced the first failed disk, and it's resilvered and I have my
hot spare back.
But: why hasn't it used the spare to cover the other failed disk?
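If the spare really is just sitting there, it should be possible to press it
into service by hand; a sketch with made-up device names, where c5t3d0 is the
second failed disk and c5t7d0 is the available hot spare:
zpool replace tank c5t3d0 c5t7d0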
> "et" == Erik Trimble writes:
et> Add this zvol as the cache device (L2arc) for your other pool
doesn't bug 6915521 mean this arrangement puts you at risk of deadlock?
I've lost a few drives on a thumper I look after in the past week and
I've noticed a couple of issues with the resilver process that could be
improved (or maybe have, the system is running Solaris 10 update 8).
1) While the pool has been resilvering, I have been copying a large
(2TB) filesystem
Hello,
wanted to know if there are any updates on this topic ?
Regards,
Robert
Thanks - I have run it and it returns pretty quickly. Given the output (attached)
what action can I take?
Thanks
James
Dirty time logs:
tank
outage [300718,301073] length 356
outage [301138,301139] length 2
outage [301149,30
On 3/30/2010 2:44 PM, Adam Leventhal wrote:
> Hey Karsten,
>
> Very interesting data. Your test is inherently single-threaded so I'm not
> surprised that the benefits aren't more impressive -- the flash modules on
> the F20 card are optimized more for concurrent IOPS than single-threaded
> latency.
On Mon, 29 Mar 2010, Jim wrote:
Thanks for the suggestion, but have tried detaching but it refuses
reporting no valid replicas. Capture below.
Could you run: zdb -ddd tank | awk '/^Dirty/ {output=1} /^Dataset/ {output=0} {if (output) {print}}'
This will print the dirty time log of the pool.
F. Wessels wrote:
Hi,
as Richard Elling wrote earlier:
"For more background, low-cost SSDs intended for the boot market are
perfect candidates. Take a X-25V @ 40GB and use 15-20 GB for root
and the rest for an L2ARC. For small form factor machines or machines
with max capacity of 8GB of RAM (a t
Hey Karsten,
Very interesting data. Your test is inherently single-threaded so I'm not
surprised that the benefits aren't more impressive -- the flash modules on the
F20 card are optimized more for concurrent IOPS than single-threaded latency.
Adam
On Mar 30, 2010, at 3:30 AM, Karsten Weiss wrote:
On Mar 29, 2010, at 1:10 PM, F. Wessels wrote:
> Hi,
>
> as Richard Elling wrote earlier:
> "For more background, low-cost SSDs intended for the boot market are
> perfect candidates. Take a X-25V @ 40GB and use 15-20 GB for root
> and the rest for an L2ARC. For small form factor machines or machines
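As a concrete sketch of that split (device, slice and pool names made up):
give root a ~20 GB slice during install, leave the remainder as a second
slice, and hand that slice to the data pool as L2ARC:
zpool add tank cache c8t1d0s1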
Thanks for the details Edward, that is good to know.
Another quick question.
In my test setup I created the pool using snv_134 because I wanted to see how
things would run as the next release is supposed to be based off of snv_134
(from my understanding). However, I recently read that the 2010
OK, I see what the problem is: the /etc/zfs/zpool.cache file.
When the pool was split, the zpool.cache file was also split - and the split
happens prior to the config file being updated. So, after booting off the
split side of the mirror, zfs attempts to mount rpool based on the information
in the stale zpool.cache file.
> you can't use anything but a block device for the L2ARC device.
sure you can...
http://mail.opensolaris.org/pipermail/zfs-discuss/2010-March/039228.html
it even lives through a reboot (rpool is mounted before other pools)
zpool create -f test c9t3d0s0 c9t4d0s0
zfs create -V 3G rpool/cache
zpool add test cache /dev/zvol/dsk/rpool/cache
Just clarifying Darren's comment - we got bitten by this pretty badly so I
figure it's worth saying again here. ZFS will *allow* you to use a ZVOL of
one pool as a ZDEV in another pool, but it results in race conditions and an
unstable system. (At least on Solaris 10 update 8).
We tried to use a
http://fixunix.com/solaris-rss/570361-make-most-your-ssd-zfs.html
I think this is what you are looking for. GParted FTW.
Cheers,
_GP_
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.op
If you "zfs export" it will offline your pool. This is what you do when
you're going to intentionally remove disks from the live system.
If you suffered a hardware problem, and you're migrating your
uncleanly-unmounted disks to another system, then as Brandon described
below, you'll need the "
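A minimal sketch of the two paths, with a made-up pool name:
zpool export tank      # clean removal from the running system
zpool import tank      # on the new host; add -f only after an unclean move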
> On Mon, Mar 29, 2010 at 5:39 PM, Nicolas Williams
> wrote:
> > One really good use for zfs diff would be: as a way to index zfs send
> > backups by contents.
>
> Or to generate the list of files for incremental backups via NetBackup
> or similar. This is especially important for file systems w
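For illustration, and assuming the syntax that was being proposed at the time
(snapshot names made up):
zfs diff tank/home@monday tank/home@tuesday
would list the files created, removed, modified or renamed between the two
snapshots, which is exactly the file list an incremental backup would need.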
F. Wessels wrote:
Thank you Erik for the reply.
I misunderstood Dan's suggestion about the zvol in the first place. Now you
make the same suggestion also. Doesn't zfs prefer raw devices? When following
this route the zvol used as cache device for tank makes use of the ARC of rpool,
which doesn't seem right.
Thank you Darren.
So no zvols as L2ARC cache devices. That leaves partitions and slices.
When I tried to add a second partition (the first contained slices with the
root pool) as cache device, zpool refused: it reported that the device CxTyDzP2
(note the P2) wasn't supported. Perhaps I did something wrong.
Hi, I did some tests on a Sun Fire x4540 with an external J4500 array
(connected via two HBA ports). I.e. there are 96 disks in total configured as
seven 12-disk raidz2 vdevs (plus system, spares, unused disks) providing a
~63 TB pool with fletcher4 checksums.
The system was recently equipped with a Sun Flash Accelerator F20.
Thank you Erik for the reply.
I misunderstood Dan's suggestion about the zvol in the first place. Now you
make the same suggestion also. Doesn't zfs prefer raw devices? When following
this route the zvol used as cache device for tank makes use of the ARC of rpool,
which doesn't seem right. Or is
Darren J Moffat wrote:
On 30/03/2010 10:13, Erik Trimble wrote:
Add this zvol as the cache device (L2arc) for your other pool
# zpool create tank mirror c1t0d0 c1t1d0s0 cache rpool/zvolname
That won't work. L2ARC devices can not be a ZVOL of another pool, and they
can't be a file either. An L2ARC device must be a physical device.
On 30/03/2010 10:13, Erik Trimble wrote:
Add this zvol as the cache device (L2arc) for your other pool
# zpool create tank mirror c1t0d0 c1t1d0s0 cache rpool/zvolname
That won't work. L2ARC devices can not be a ZVOL of another pool, and they
can't be a file either. An L2ARC device must be a physical device.
Darren J Moffat wrote:
On 30/03/2010 10:05, Erik Trimble wrote:
F. Wessels wrote:
Thanks for the reply.
I didn't get very much further.
Yes, ZFS loves raw devices. If I had two devices I wouldn't be in
this mess.
I would simply install opensolaris on the first disk and add the
second ssd to
On 30/03/2010 10:05, Erik Trimble wrote:
F. Wessels wrote:
Thanks for the reply.
I didn't get very much further.
Yes, ZFS loves raw devices. If I had two devices I wouldn't be in
this mess.
I would simply install opensolaris on the first disk and add the
second ssd to the
data pool with a zp
F. Wessels wrote:
Thanks for the reply.
I didn't get very much further.
Yes, ZFS loves raw devices. If I had two devices I wouldn't be in this mess.
I would simply install opensolaris on the first disk and add the second ssd to
the
data pool with a zpool add mpool cache cxtydz Notice that no
Thanks for the reply.
I didn't get very much further.
Yes, ZFS loves raw devices. If I had two devices I wouldn't be in this mess.
I would simply install opensolaris on the first disk and add the second ssd to
the
data pool with a zpool add mpool cache cxtydz. Notice that no slices or
partitions are needed.
I'm running Solaris 10 Sparc with rather updated patches (as of ~30
days ago?) on a netra x1.
I had set up zfs root with two IDE 40GB hard disks. All was fine
until my secondary master died. No read/write errors; just dead.
No matter what I try (booting with the dead drive in place, booting