work in my
setup rather than simply ignoring it and moving on.
Didn't you read Richard's post? "You can have only one Solaris partition
at a time."
Your original example failed when you tried to add a second.
--
Ian.
to create two slices.
--
Ian.
Bob Friesenhahn wrote:
On Wed, 27 Feb 2013, Ian Collins wrote:
I am finding that rsync with the right options (to directly
block-overwrite) plus zfs snapshots is providing me with pretty
amazing "deduplication" for backups without even enabling
deduplication in zfs. Now backup stor
the same for all of our "legacy" operating system backups: take a
snapshot, then do an rsync - an excellent way of maintaining
incremental backups for those.
--
Ian.
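A minimal sketch of that snapshot-then-rsync cycle, assuming a backup
filesystem named tank/backup/legacy (all pool, path and host names here
are placeholders, not taken from the original posts):

  # Keep the current state as a snapshot, then let rsync overwrite only
  # changed blocks in place so unchanged blocks stay shared with snapshots.
  zfs snapshot tank/backup/legacy@$(date +%Y%m%d)
  rsync -a --inplace --no-whole-file --delete \
      legacyhost:/export/ /tank/backup/legacy/

The --inplace and --no-whole-file options are what make rsync rewrite
only the changed blocks of each file, which keeps the space referenced
by each snapshot small.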
It's been a long time, but I'm sure LU only supports UFS->ZFS for the
root pool.
--
Ian.
other bugs which are fixed in S11 and not in
Illumos (and vice-versa).
There may well be, but in seven+ years of using ZFS, this was the first
one to cost me a pool.
--
Ian.
Robert Milkowski wrote:
Solaris 11.1 (free for non-prod use).
But a ticking bomb if you use a cache device.
--
Ian.
get too close to filling the enlarged pool, you
will probably be OK performance-wise. The old data access times will be
no worse, and the new data will be better.
If you can spread some of your old data around after adding the new vdev,
do so.
--
Ian.
extended vdev).
--
Ian.
ake a
copy of a suitably large filesystem, then deleted the original and
renamed the copy. I had to do this a couple of times to redistribute
data, but it saved a lot of down time.
--
Ian.
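That copy-and-rename step might look roughly like this (dataset names
are placeholders, and it assumes there is room for two copies to exist
at once):

  # Rewriting the data spreads it across all of the pool's vdevs,
  # including the newly added one; then swap the copy into place.
  zfs snapshot tank/data@move
  zfs send tank/data@move | zfs receive tank/data.new
  zfs destroy -r tank/data
  zfs rename tank/data.new tank/data
  zfs destroy tank/data@move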
ch is to say, I understand there's stuff you can't talk about, and support
you can't give freely or openly. But to the extent you're still able to
discuss publicly known things, thank you.
+1.
--
Ian.
Richard Elling wrote:
On Feb 16, 2013, at 10:16 PM, Bryan Horstmann-Allen
wrote:
+--
| On 2013-02-17 18:40:47, Ian Collins wrote:
|
One of its main advantages is that it has been platform agnostic. We see
Solaris
, platform agnostic, home for this list.
--
Ian.
OI or Solaris 11.1)
and recover his data.
--
Ian.
Jim Klimov wrote:
On 2013-02-12 10:32, Ian Collins wrote:
Ram Chander wrote:
Hi Roy,
You are right, so it looks like a re-distribution issue. Initially there
were two vdevs with 24 disks (disks 0-23) for close to a year, after
which we added 24 more disks and created additional vdevs. The ...
Now, how do I find the files that are present on a particular vdev or
disk? That way I can remove and re-copy them to redistribute the data.
Is there any other way to solve this?
The only way is to avoid the problem in the first place by not mixing
vdev sizes in a pool.
--
Ian.
_1800 into
backup/vbox/windows@Wednesday_1800
received 380MB stream in 18 seconds (21.1MB/sec)
On the Solaris 11.1 sender:
zfs get -H version tank/vbox/windows
tank/vbox/windows version 5 -
Odd! I assume an error code was being misreporte
that can be done to most
Java processes :)
--
Ian.
aiting a long time for! I have to run a
periodic "fill the pool with zeros" cycle on a couple of iSCSI backed
pools to reclaim free space.
I guess the big question is: do Oracle storage appliances advertise SCSI
UNMAP?
--
Ian.
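The zero-fill cycle is roughly the following (pool name is a
placeholder; compression has to be off on the dataset being filled, or
the zeros never reach the backing storage):

  # Fill the free space with zeros so the iSCSI target can reclaim it,
  # then delete the scratch file to hand the space back to the pool.
  zfs create -o compression=off tank/zerofill
  dd if=/dev/zero of=/tank/zerofill/zeros bs=1M
  rm /tank/zerofill/zeros
  zfs destroy tank/zerofill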
VMs   zoned   off   default
Which is causing one of my scripts grief.
Does anyone know why these are showing up?
--
Ian.
hope this will be a public update. Within a week of
upgrading to 11.1 I hit this bug and I had to rebuild my main pool. I'm
still restoring backups.
Without this fix, 11.1 is a bomb waiting to go off!
--
Ian.
his might enable you to recreate the error for testing.
--
Ian.
ssion was first
introduced, it would cause writes on a Thumper to be CPU bound. It was
all but unusable on that machine. Today with better threading, I barely
notice the overhead on the same box.
There are very few situations where this option is better than the default lzjb.
That part I do agre
Ian Collins wrote:
I look after a remote server that has two iSCSI pools. The volumes for
each pool are sparse volumes and a while back the target's storage
became full, causing weird and wonderful corruption issues until they
managed to free some space.
Since then, one pool has been reaso
my part of the world, that isn't much fun.
Buy an equivalent JBOD and head unit and pretend you have a new Thumper.
--
Ian.
On 11/22/12 10:15, Ian Collins wrote:
I look after a remote server that has two iSCSI pools. The volumes for
each pool are sparse volumes and a while back the target's storage
became full, causing weird and wonderful corruption issues until they
managed to free some space.
Since then, one
file
removals and additions.
I'm currently zero filling the bad pool to recover space on the target
storage to see if that improves matters.
Has anyone else seen similar behaviour with previously degraded iSCSI
pools?
--
Ian.
magnitude too big...
--
Ian.
try, but for now, no...
SmartOS.
--
Ian.
On 10/31/12 23:35, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Ian Collins
I have a recently upgraded (to Solaris 11.1) test system that fails
to mount its filesystems on
fine in the original BE. The root (only) pool is a
single drive.
Any ideas?
--
Ian.
sync) to send a snapshot from a newer zpool to an
older one?
You have to create pools/filesystems with the older versions used by the
destination machine.
--
Ian.
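A hedged sketch of the sending side (version numbers and device names
are only examples; check zpool upgrade -v and zfs upgrade -v on the
destination to see what it actually supports):

  # Create the pool at a version the older destination host understands,
  # so streams sent from its filesystems can be received there.
  zpool create -o version=28 tank mirror c0t0d0 c0t1d0
  zpool upgrade -v    # run on the destination to list supported pool versions
  zfs upgrade -v      # likewise for filesystem versions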
nefits when cache is available, and less
benefits when it isn't?
Why bother with cache devices at all if you are moving the pool around?
As you hinted above, the cache can take a while to warm up and become
useful.
You should zpool remove the cache device
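For example (pool and device names are placeholders), since a cache
device holds no pool data it can simply be dropped before the move and
added back afterwards:

  zpool remove tank c1t5d0    # drop the L2ARC device
  zpool export tank           # move the pool to the other host
  # ... import it on the destination, then re-add a local cache device:
  zpool add tank cache c1t5d0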
On 10/13/12 22:13, Jim Klimov wrote:
2012-10-13 0:41, Ian Collins пишет:
On 10/13/12 02:12, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
There are at least a couple of solid reasons *in favor* of partitioning.
#1 It seems common, at least to me, that I'll build a s
sks to the main data pool.
How do you provision a spare in that situation?
--
Ian.
normal, or is there any difference?
It can be sent as normal.
--
Ian.
On 10/06/12 07:57, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Frank Cusack
On Fri, Oct 5, 2012 at 3:17 AM, Ian Collins wrote:
I do have to suffer a slow, glitchy WAN to a
would have been the leaf
filesystems themselves.
By spreading the data over more filesystems, the individual incremental
sends are smaller, so there is less data to resend if the link burps
during a transfer.
--
Ian.
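A rough sketch of sending each child filesystem separately instead of
one big recursive stream (dataset, snapshot and host names are
placeholders):

  # One small incremental stream per filesystem: a dropped connection
  # only costs the stream that was in flight at the time.
  for fs in $(zfs list -H -o name -r tank/home | grep '^tank/home/'); do
      zfs send -i @yesterday "$fs@today" | ssh remote zfs receive -d backup
  done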
ne is successfully
shared). The share and sharenfs properties on the origin filesystem are
unchanged.
I have to run zfs share on the origin filesystem to restore the share.
Feature or a bug??
--
Ian.
systems isn't as bad
as it was. On our original Thumper I had to amalgamate all our user
home directories into one filesystem due to slow boot. Now I have split
them again to send over a slow WAN...
Large numbers of snapshots (tens of thousands) don't appear to
ould I be panicking yet?
No.
Do you have compression on on one side but not the other? Either way,
let things run to completion.
--
Ian.
alk backwards through the remote
snaps until a common snapshot is found and destroy non-matching remote
snapshots"
That's what I do as part of my "destroy snapshots not on the source"
check. Over many years of managing various distributed systems, I've
discovered th
sing the standard library set container and algorithms.
--
Ian.
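A rough shell sketch of the "walk back to a common snapshot" check,
assuming snapshot names sort chronologically (like the timestamped
names used elsewhere in this thread); datasets and host are
placeholders:

  #!/bin/bash
  # Newest snapshot name present on both sides; use it as the -i source
  # and treat newer remote snapshots as candidates for destruction.
  common=$(comm -12 \
      <(zfs list -H -t snapshot -o name -r tank/fs | sed 's/.*@//' | sort) \
      <(ssh remote zfs list -H -t snapshot -o name -r backup/fs | sed 's/.*@//' | sort) \
      | tail -1)
  echo "latest common snapshot: @$common"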
reasonable tradeoff.
The 313 series looks like a consumer price SLC drive aimed at the recent
trend in windows cache drives.
Should be worth a look.
--
Ian.
on't usually have any space problems, so if I'm
going to enable the compression flag it has to be because of the write
speed improvements.
I always enable compression by default and only turn it off for
filesystems I know hold un-compressible data such as media files.
--
Ian.
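In practice that is just the following (dataset names are placeholders;
child filesystems inherit the top-level setting):

  zfs set compression=on tank          # inherited by every child filesystem
  zfs set compression=off tank/media   # already-compressed data: skip the CPU cost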
On 07/10/12 05:26 AM, Brian Wilson wrote:
Yep, thanks, and to answer Ian with more detail on what TruCopy does.
TruCopy mirrors between the two storage arrays, with software running on
the arrays, and keeps a list of dirty/changed 'tracks' while the mirror
is split. I think th
On 07/ 7/12 11:29 AM, Brian Wilson wrote:
On 07/ 6/12 04:17 PM, Ian Collins wrote:
On 07/ 7/12 08:34 AM, Brian Wilson wrote:
Hello,
I'd like a sanity check from people more knowledgeable than myself.
I'm managing backups on a production system. Previously I was using
another volu
hot
and back up the clone?
--
Ian.
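A sketch of the snapshot-and-clone approach (names are placeholders):
the backup tool reads a stable copy while the live filesystem keeps
changing.

  zfs snapshot tank/data@backup
  zfs clone tank/data@backup tank/data-backup
  # ... point the backup job at /tank/data-backup, then clean up:
  zfs destroy tank/data-backup
  zfs destroy tank/data@backup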
On 07/ 5/12 11:32 PM, Carsten John wrote:
-Original message-
To: Carsten John;
CC: zfs-discuss@opensolaris.org;
From: Ian Collins
Sent: Thu 05-07-2012 11:35
Subject:Re: [zfs-discuss] Sol11 missing snapshot facility
On 07/ 5/12 09:25 PM, Carsten John wrote:
Hi Ian
On 07/ 5/12 09:25 PM, Carsten John wrote:
Hi Ian,
yes, I already checked that:
svcs -a | grep zfs
disabled 11:50:39 svc:/application/time-slider/plugin:zfs-send
is the only service I get listed.
Odd.
How did you install?
Is the manifest there
(/lib/svc/manifest/system/filesystem
ed Jul_02 svc:/system/filesystem/zfs/auto-snapshot:weekly
disabled Jul_02 svc:/application/time-slider/plugin:zfs-send
--
Ian.
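If the instances exist but are disabled they can be enabled
individually; a hedged example using the FMRIs shown above (the exact
manifest path is an assumption):

  # If the auto-snapshot instances are missing entirely, re-importing the
  # manifest may bring them back:
  # svccfg import /lib/svc/manifest/system/filesystem/auto-snapshot.xml
  svcs -a | grep auto-snapshot
  svcadm enable svc:/system/filesystem/zfs/auto-snapshot:weekly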
On 07/ 1/12 08:57 PM, Ian Collins wrote:
On 07/ 1/12 10:20 AM, Fajar A. Nugraha wrote:
On Sun, Jul 1, 2012 at 4:18 AM, Ian Collins wrote:
On 06/30/12 03:01 AM, Richard Elling wrote:
Hi Ian,
Chapter 7 of the DTrace book has some examples of how to look at iSCSI
target
and initiator behaviour
On 05/29/12 08:42 AM, Richard Elling wrote:
On May 28, 2012, at 2:48 AM, Ian Collins wrote:
On 05/28/12 08:55 PM, Sašo Kiselkov wrote:
..
If the drives show up at all, chances are you only need to work around
the power-up issue in Dell HDD firmware.
Here's what I had to do to get the d
On 07/ 1/12 10:20 AM, Fajar A. Nugraha wrote:
On Sun, Jul 1, 2012 at 4:18 AM, Ian Collins wrote:
On 06/30/12 03:01 AM, Richard Elling wrote:
Hi Ian,
Chapter 7 of the DTrace book has some examples of how to look at iSCSI
target
and initiator behaviour.
Thanks Richard, I'll have a look.
On 06/30/12 03:01 AM, Richard Elling wrote:
Hi Ian,
Chapter 7 of the DTrace book has some examples of how to look at iSCSI
target
and initiator behaviour.
Thanks Richard, I'll have a look.
I'm assuming the pool is hosed?
-- richard
On Jun 28, 2012, at 10:47 PM, Ian Collins w
st
Any ideas how to determine the cause of the problem and remedy it?
--
Ian.
r to decouple my applications
from changes to the API. Generally the API has been stable for basic
operations such as iteration and accessing properties. Not so for send
and receive!
I have a simple (150 line) C++ wrapper that supports iteration and
property access I'm happy to
On 05/28/12 11:01 PM, Sašo Kiselkov wrote:
On 05/28/2012 12:59 PM, Ian Collins wrote:
On 05/28/12 10:53 PM, Sašo Kiselkov wrote:
On 05/28/2012 11:48 AM, Ian Collins wrote:
On 05/28/12 08:55 PM, Sašo Kiselkov wrote:
On 05/28/2012 10:48 AM, Ian Collins wrote:
To follow up, the H310 appears to
On 05/28/12 10:53 PM, Sašo Kiselkov wrote:
On 05/28/2012 11:48 AM, Ian Collins wrote:
On 05/28/12 08:55 PM, Sašo Kiselkov wrote:
On 05/28/2012 10:48 AM, Ian Collins wrote:
To follow up, the H310 appears to be useless in non-raid mode.
The drives do show up in Solaris 11 format, but they show
On 05/28/12 08:55 PM, Sašo Kiselkov wrote:
On 05/28/2012 10:48 AM, Ian Collins wrote:
To follow up, the H310 appears to be useless in non-raid mode.
The drives do show up in Solaris 11 format, but they show up as
unknown, unformatted drives. One oddity is the box has two SATA
SSDs which also
On 05/ 7/12 04:08 PM, Ian Collins wrote:
On 05/ 7/12 03:42 PM, Greg Mason wrote:
I am currently trying to get two of these things running Illumian. I don't have
any particular performance requirements, so I'm thinking of using some sort of
supported hypervisor, (either RHEL and KVM
ply refuse to die and I'm still using them in various
test systems.
--
Ian.
On 05/14/12 10:32 PM, Carson Gaspar wrote:
On 5/14/12 2:02 AM, Ian Collins wrote:
Adding the log was OK:
zpool add -f export log mirror c10t3d0s0 c10t4d0s0
But adding the cache fails:
zpool add -f export cache c10t3d0s1 c10t4d0s1
invalid vdev specification
the following errors must be
555200
1  unassigned  wm  2675 - 19931  103.22GB  (17257/0/0)  216471808
2  backup      wu     0 - 19931  119.22GB  (19932/0/0)  250027008
Is there a solution?
--
Ian.
On 05/11/12 02:01 AM, Mike Gerdts wrote:
On Thu, May 10, 2012 at 5:37 AM, Ian Collins wrote:
I have an application I have been using to manage data replication for a
number of years. Recently we started using a new machine as a staging
server (not that new, an x4540) running Solaris 11 with a
0 194K 0
tank  12.5T  6.58T   99  258  209K  1.50M
tank  12.5T  6.58T  196  296  294K  1.31M
tank  12.5T  6.58T  188  130  229K   776K
Can anyone offer any insight or further debugging tips?
Thanks.
--
Ian.
On 05/ 8/12 08:36 AM, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Ian Collins
On a Solaris 11 (SR3) system I have a zfs destroy process that appears
to be doing nothing and can't be killed. It has used 5 se
k position. My next attempt
would be SmartOS if I can't get the cards swapped (the R720 currently
has a Broadcom 5720 NIC).
--
Ian.
I'm trying to configure a DELL R720 (not a pleasant experience) which
has an H710p card fitted.
The H710p definitely doesn't support JBOD, but the H310 looks like it
might (the data sheet mentions non-RAID). Has anyone used one with ZFS?
Thank
his was an old problem that was fixed long ago in Solaris 10
(I had several temporary patches over the years), but it appears to be
alive and well.
Any hints?
--
Ian.
On 04/26/12 10:12 PM, Jim Klimov wrote:
On 2012-04-26 2:20, Ian Collins wrote:
On 04/26/12 09:54 AM, Bob Friesenhahn wrote:
On Wed, 25 Apr 2012, Rich Teer wrote:
Perhaps I'm being overly simplistic, but in this scenario, what would
prevent
one from having, on a single file server, /ex
nodes from the server? Much simpler
than
your example, and all data is available on all machines/nodes.
That assumes the data set will fit on one machine, and that machine won't be a
performance bottleneck.
Aren't those general considerations when specifying a
ta is read.
--
Ian.
ybe even pool loss) while the
new drive is resilvering.
I would only use raidz for unimportant data, or for a copy of data from
a more robust pool.
--
Ian.
On 04/12/12 09:51 AM, Peter Jeremy wrote:
On 2012-Apr-11 18:34:42 +1000, Ian Collins wrote:
I use an application with a fairly large receive data buffer (256MB) to
replicate data between sites.
I have noticed the buffer becoming completely full when receiving
snapshots for some filesystems
On 04/12/12 09:00 AM, Jim Klimov wrote:
2012-04-11 23:55, Ian Collins wrote:
Odd. The pool is a single iSCSI volume exported from a 7320 and there is
18TB free.
Lame question: is that 18TB free on the pool inside the
iSCSI volume, or on the backing pool on 7320?
I mean that as far as the
On 04/12/12 04:17 AM, Richard Elling wrote:
On Apr 11, 2012, at 1:34 AM, Ian Collins wrote:
I use an application with a fairly large receive data buffer (256MB)
to replicate data between sites.
I have noticed the buffer becoming completely full when receiving
snapshots for some filesystems
scattered.
Is there any way to improve this situation?
Thanks,
--
Ian.
the
time, and that's where we are now...).
Why is that configuration stupid? Your pool is a stripe of two-disk
mirrors, not two disk stripes.
--
Ian.
tly applied to the dataset)
but I'm afraid there's some kind of corruption.
Does zfs receive produce any warnings? Have you tried adding -v?
--
Ian.
y to-do list.
I have also seen the same issue (a long time ago) and the application I
use for replication still has a one second pause between sends to "fix"
the problem.
--
Ian.
On 03/10/12 01:48 AM, Jim Klimov wrote:
2012-03-09 9:24, Ian Collins wrote:
I sent the snapshot to a file, copied the file to the remote host and
piped the file into zfs receive. That worked and I was able to send
further snapshots with ssh.
Odd.
Is it possible that in case of "zfs
On 03/ 3/12 11:57 AM, Ian Collins wrote:
Hello,
I am having problems sending some snapshots between two fully up-to-date
Solaris 11 systems:
zfs send -i tank/live/fs@20120226_0705 tank/live/fs@20120226_1105 | ssh
remote zfs receive -vd fileserver/live
receiving incremental stream of tank/live/fs
.
Other filesystems that were upgraded yesterday receive fine, so I don't
think the problem is directly related to the upgrade.
Any ideas?
--
Ian.
On 02/28/12 12:53 PM, Ulrich Graef wrote:
Hi Ian,
On 26.02.12 23:42, Ian Collins wrote:
I had high hopes of significant performance gains using zfs diff in
Solaris 11 compared to my home-brew stat based version in Solaris 10.
However the results I have seen so far have been disappointing
minutes. I haven't tried my old tool, but I would
expect the same diff to take a couple of hours.
The box is well specified, an x4270 with 96G of RAM and a FLASH
accelerator card used for log and cache.
Are there any ways to improve diff performance?
-
ience. It gets bugfixes and new features sooner than commercial
Solaris.
Solaris 11 Express is long gone.
You don't just pay them for "An OS". Compare the sensible support
pricing for their Linux offering to the ridiculous price for Solaris.
--
Ian.
flags = 24
The source pool version is 31, the remote pool version is 33. Both the
source filesystem and parent on the remote box are version 5.
I've never seen this before, any clues?
--
Ian.
On 12/ 9/11 11:37 AM, Betsy Schwartz wrote:
On Dec 7, 2011, at 9:50 PM, Ian Collins wrote:
On 12/ 7/11 05:12 AM, Mark Creamer wrote:
Since the zfs dataset datastore/zones is created, I don't understand what the
error is trying to get me to do. Do I have to do:
zfs create datastore/
g on the stream. Deduplicated
streams cannot be received on systems that do not
support the stream deduplication feature.
Is there any more published information on how this feature works?
--
Ian.
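At the command line this is the -D flag to zfs send; a hedged example
(dataset and host names are placeholders, and the receiving system must
support deduplicated streams):

  zfs send -D -i tank/docs@monday tank/docs@tuesday | \
      ssh remote zfs receive -d backup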
eone can point
out my error for me. Thanks for your help!
You shouldn't have to, but it won't do any harm.
If you don't get any further, try zones-discuss.
--
Ian.
5': snapshot is cloned
It turns out there was a zfs receive writing to the filesystem.
A more sensible error would have been "dataset is busy".
--
Ian.
FS
all or not all of these bytes are backed by physical storage 1:1.
If you use "du" on the ZFS filesystem, you'll see the allocated
storage size, which takes into account compression and sparse
bytes. So the "du" size should not be greater than the "ls" size.
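A quick way to see that difference on a compressed dataset (paths are
placeholders):

  ls -lh /tank/docs/report.log   # logical file size
  du -h  /tank/docs/report.log   # blocks actually allocated, after compression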
On 11/14/11 04:00 AM, Jeff Savit wrote:
On 11/12/2011 03:04 PM, Ian Collins wrote:
It turns out this was a problem with e1000g interfaces. When we
swapped over to an igb port, the problem went away.
Ian, could you summarize what the e1000g problem was? It might be
interesting or useful
such as filesystems
with documents) changes.
--
Ian.
On 09/30/11 08:12 AM, Ian Collins wrote:
On 09/30/11 08:03 AM, Bob Friesenhahn wrote:
On Fri, 30 Sep 2011, Ian Collins wrote:
Slowing down replication is not a good move!
Do you prefer pool corruption? ;-)
Probably they fixed a dire bug and this is the cost of the fix.
Could be. I
On 11/11/11 08:52 PM, darkblue wrote:
2011/11/11 Ian Collins <i...@ianshome.com>
On 11/11/11 02:42 AM, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto
my supported (Oracle)
systems, but I've never had problems with my own build Solaris Express
systems.
I waste far more time on (now luckily legacy) fully supported Solaris 10
boxes!
--
Ian.
On 11/ 5/11 02:37 PM, Matthew Ahrens wrote:
On Wed, Oct 19, 2011 at 1:52 AM, Ian Collins <i...@ianshome.com> wrote:
I just tried sending from an oi151a system to a Solaris 10 backup
server and the server barfed with
zfs_receive: stream is unsupported version
spreading your IOPs. I haven't
tried an all SSD pool, but I have tried adding a lump of spinning rust
as a log to a pool of identical drives and it did give a small improvement
to NFS performance.
--
Ian.