faked ACL over NFS, modifies it and sends it back..
/Tomas
--
Tomas Ögren, st...@acc.umu.se, http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se
Memtest doesn't want potential errors to be hidden by ECC, so it
disables ECC to see them if they occur.
>
> You can enable it in the memtest menu.
>
> Casper
>
/Tomas
On 08 March, 2010 - Chris Banal sent me these 0,8K bytes:
> Assuming no snapshots. Do full backups (ie. tar or cpio) eliminate the need
> for a scrub?
No, it won't read redundant copies of the data, which a scrub will.
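Exercising that redundancy is what a scrub is for, e.g. (pool name hypothetical):
  zpool scrub tank
  zpool status -v tank    # progress, plus any files with unrecoverable errors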
/Tomas
/Tomas
On 08 March, 2010 - Bill Sommerfeld sent me these 0,4K bytes:
> On 03/08/10 12:43, Tomas Ögren wrote:
> So we tried adding 2x 4GB USB sticks (Kingston Data
>> Traveller Mini Slim) as metadata L2ARC and that seems to have pushed the
>> snapshot times down to about 30 seconds.
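For reference, the moving parts of such a setup look roughly like this (pool and device names hypothetical):
  zpool add tank cache c5t0d0 c5t1d0     # the USB sticks become L2ARC
  zfs set secondarycache=metadata tank   # only metadata is pushed to them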
Create a new pool with lun2, lun3 and a sparse file the same size as
lun2 and lun3.
Get rid of the file.
Copy the data over from lun1 (the old single-lun pool) to the raidz
(lun2, lun3, missing file).
Destroy the old pool.
Replace the missing file with lun1.
With this method, the pool is lacking redundancy from the moment the
sparse file is removed until lun1 has resilvered into its place.
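A minimal sketch of the whole dance, device names and sizes purely illustrative:
  mkfile -n 1000g /var/tmp/fake              # sparse file, same size as lun2/lun3
  zpool create newpool raidz c1t2d0 c1t3d0 /var/tmp/fake
  zpool offline newpool /var/tmp/fake        # raidz now runs degraded, file takes no space
  rm /var/tmp/fake
  # ... copy everything over from the old pool, then destroy it ...
  zpool replace newpool /var/tmp/fake c1t1d0 # lun1 resilvers in, redundancy restored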
6486009344
Roughly 6GB has been written to the device, and slightly less than 2.5GB
is actually in use.
> p 775528448
/Tomas
> http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6700597
Solaris 10 'man zfs', under 'receive':
-u      File system that is associated with the received
        stream is not mounted.
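In other words (dataset names hypothetical):
  zfs send tank/fs@snap | zfs receive -u backup/fs   # received filesystem stays unmounted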
/Tomas
The rest can and will be used if L2ARC needs it. It's not wasted, it's
just a number that doesn't match what you think it should be.
/Tomas
> understand that l2arc size reflected by zpool iostat is much larger
> because of COW and l2_size from kstat is the actual size of l2arc data.
>
> so can anyone tell me why I am losing my working set from l2_size actual
> data!!!
Maybe the data in the l2arc was invalidated, because the
ade it back to l2arc from the
> tail of ARC !!!
>
> Am I right
Sounds plausible.
/Tomas
after erasing) instead of just before
the timing-critical write(), you can make stuff go faster.
/Tomas
On 12 April, 2010 - David Magda sent me these 0,7K bytes:
> On Mon, April 12, 2010 10:48, Tomas Ögren wrote:
> > On 12 April, 2010 - Bob Friesenhahn sent me these 0,9K bytes:
> >
> >> Zfs is designed for high throughput, and TRIM does not seem to improve
> >> throughput.
On 21 April, 2010 - Justin Lee Ewing sent me these 0,3K bytes:
> So I can obviously see what zpools I have imported... but how do I see
> pools that have been exported? Kind of like being able to see deported
> volumes using "vxdisk -o alldgs list".
'zpool import' without any arguments will list exported pools that are
available for import.
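For example (pool name hypothetical):
  zpool import         # lists exported/importable pools and their devices
  zpool import tank    # actually imports one of them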
. Enabling EA on a file works, but
creating one with EA doesn't.. So it seems like a Finder bug..
Copying via terminal (and cp) works.
> At the moment I have a workaround: I use sftp to copy the files from the
> laptop to the server. But this is a pain in the ass and I'm sure the
mkdir or
creating a file can take 30 seconds.. Single write()s can take 5-30
seconds.. Without the scrub, it's perfectly fine. Local performance
during scrub is fine. NFS performance becomes useless.
This means we can't do a scrub, because doing so will basically disable
the NFS service
systems that live at 75% all day are obviously going to have
> more problems than people who live at 25%!
On 29 April, 2010 - Tomas Ögren sent me these 5,8K bytes:
> On 29 April, 2010 - Roy Sigurd Karlsbakk sent me these 10K bytes:
>
> > I got this hint from Richard Elling, but haven't had time to test it much.
> > Perhaps someone else could help?
> >
> > ro
et with zfs they
go downhill in some conditions..
> > Should have enough oompf, but when you combine snapshot with a
> > scrub/resilver, sync performance gets abysmal.. Should probably try
> > adding a ZIL when u9 comes, so we can remove it again if performance
> > goe
Sparse files will make this
differ, and you can't really tell them apart.
/Tomas
(as long as they don't start messing up some
bus or similar). They can be added/removed at any time as well.
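Both directions are one-liners (pool and device names hypothetical):
  zpool add tank cache c5t0d0     # start using the device as L2ARC
  zpool remove tank c5t0d0        # stop using it again, no harm done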
/Tomas
. It just behaves like a cache miss
for that specific block... If this happens often enough to become a
performance problem, then you should throw away that L2ARC device
because it's broken beyond usability.
/Tomas
filesystems due to
various scalability problems, esp if you're doing NFS as well. It will
be slow to create and slow when (re)booting, but other than that it
might be ok..
Look into the zfs userquota/groupquota instead.. That's what I did, and
it's partly because of these issues
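The userquota route is just per-dataset properties instead of per-user filesystems, e.g. (names and sizes hypothetical):
  zfs set userquota@alice=10G tank/home
  zfs set groupquota@staff=100G tank/home
  zfs userspace tank/home    # per-user usage against those quotas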
like 2-3 months.. Then I told smartd
to poll the disk every 5 seconds to prevent it from falling asleep.
/Tomas
reads.. It reads
the filesystem tree, not "block 0, block 1, block 2..". You won't get
60MB/s sustained, not even close.
/Tomas
Writes shouldn't take 1.3 seconds.
Some of your disks are not feeling well, possibly doing
block-reallocation like mad all the time, or block recovery of some
form. Service times should be closer to what sd1 and 2 are doing.
sd2,3,4 seems to be getting about the same amount of read+write, bu
On 20 May, 2010 - John Andrunas sent me these 0,3K bytes:
> Can I make a pool not mount on boot? I seem to recall reading
> somewhere how to do it, but can't seem to find it now.
zpool export thatpool
zpool import thatpool when you want it back.
/Tomas
> length, the interleave needed between two writes and the interleave if a
> track-to-track seek is involved. Of course you can always learn more about a
> disk, but that's a good starting point.
Since X, X+1, X+2 seems to be the optimally worst case, try just
skipping over
/Tomas
more data (which then will be beyond the original 100%) ..
and visiting blocks ...
* .. reaching the initial "last block", which since then has gotten lots
of new friends afterwards.
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6899970
/Tomas
e while
> following a reboot, or is it always constant once it builds the L2ARC once?
L2ARC is currently cleared at boot. There is an RFE to make it
persistent.
/Tomas
filesystem and then fill the
inner filesystem with zeros (dd if=/dev/zero of=file bs=1024k) and remove
that file, then remove compression (if you want). This is just a
temporary thing, as the filesystem will be used on the inside (with Copy
on Write), the outer one will grow back again.
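A minimal sketch of that sequence, with hypothetical names (outer dataset tank/images, inner filesystem mounted at /mnt/inner):
  zfs set compression=on tank/images               # zeros compress to almost nothing outside
  dd if=/dev/zero of=/mnt/inner/zerofill bs=1024k  # fill the inner filesystem
  rm /mnt/inner/zerofill
  zfs set compression=off tank/images              # optional, restore the old setting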
/Tomas
The only change is the name of the directory.
> This would, obviously, be fairly easy to test; and, if I removed the
> snapshots afterward, wouldn't take space permanently (have to make sure
> that the scheduler doesn't do one of my permanent snapshots during the
> test). But
ed to us that we stay with our current Vendor.
>
> * Will there be official Solaris 10, or OpenSolaris releases with ZFS
> User quotas? (Will 2010.02 contain ZFS User quotas?)
http://sparcv9.blogspot.com/2009/08/solaris-10-update-8-1009-is-comming.html
which is in no way official, says
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6740597
which also refers to
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=2178540
So it seems like it's fixed in snv114 and s10u8, which won't help your
s10u4 unless you update..
/Tomas
oot+root work fine
> on this machine?
Check for instance 'iostat -xnzmp 1' while doing this and see if any
disk is behaving badly, high service times etc.. Even your speedy
3-4MB/s is nowhere close to what you should be getting, unless you've
connected a bunch of floppy drives
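A rough example of what to look for (output abridged, numbers illustrative):
  # iostat -xnzmp 1
      r/s    w/s  ...  asvc_t  %w  %b device
    120.0   15.0  ...     8.3   0  45 c1t0d0
      3.0    2.0  ...   950.0   0  99 c1t3d0
A disk with asvc_t in the hundreds of milliseconds and %b pinned near 100
while its neighbours are idle is the one to suspect.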
'll work too.
> /usr/sbin/shutdown -g0 -i6 -y
/Tomas
you're expanding to in step 4.. But stuff shouldn't fail
this way IMO.. Maybe compare timestamps, see that labels 2/3 aren't
so hot anymore and ignore them, or something..
zdb -l and zpool import dumps at:
http://www.acc.umu.se/~stric/tmp/zdb-dump/
/Tomas
zpool
import" picks information from different labels and presents it as one
piece of info.
If I was using some SAN and my lun got increased, and the new storage
space had some old scrap data on it, I could get hit by the same issue.
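What each label claims can be inspected directly (device path hypothetical):
  zdb -l /dev/rdsk/c1t0d0s0    # dumps all four vdev labels on that device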
> Maybe I missed the point. Let me know.
>
> Cin
trigger it. My tests have
been as above.
Output from all of the above + zfs list, zfs get all, zfs userspace, ls
-l and zdb -vvv is at:
http://www.acc.umu.se/~stric/tmp/zfs-userquota.txt
/Tomas
On 20 October, 2009 - Matthew Ahrens sent me these 0,7K bytes:
> Tomas Ögren wrote:
>> On a related note, there is a way to still have quota used even after
>> all files are removed, S10u8/SPARC:
>
> In this case there are two directories that have not actually been
>
> and did not show the
> used quota. Does this feature only work with OpenSolaris or is it
> intended to work on Solaris 10?
ZFS userspace quota doesn't support rquotad reporting. (.. yet?)
/Tomas
ize to 1GB or so (due
to buffers currently being handled, setting primarycache=metadata will
give crap performance in my testing) and let metadata take as much as
it'd like.. Is there a chance of getting something like this?
/Tomas
SUNWPython-share
and it'll work. Some ZFS stuff (userspace, allow, ..) started using
python in u8.
/Tomas
http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/fs/zfs/arc.c#arc_reclaim_needed
See line 1956 .. I tried some tuning on a pure nfs server (although
s10u8) here, and got it to use a bit more of "the last 1GB" out of 8G..
I think it was swapfs_minfree that I poked with a sharp stick. No i
et browser
> cached files, etc.
Using extended attributes + cron, you could provide the same service
yourself and other similar (or not) things people would like to do
without developers providing it for you in the fs..
Start at 'man fsattr'
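On Solaris the extended attribute namespace is reached with runat(1), so a cron job could do something along these lines (file and attribute names purely illustrative):
  runat somefile cp /tmp/expire_policy expire_days   # stash a per-file attribute
  runat somefile cat expire_days                     # read it back later from cron
  runat somefile ls -l                               # list all attributes on the file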
/Tomas
index.php?option=com_content&task=view&id=392&Itemid=60&limit=1&limitstart=6
/Tomas
On 20 January, 2010 - Mr. T Doodle sent me these 1,0K bytes:
> I currently have one filesystem / (root), is it possible to put a quota on
> let's say /var? Or would I have to move /var to its own filesystem in the
> same pool?
Only filesystems can have different settings, so /var would need to be
its own filesystem (it can live in the same pool) to get its own quota.
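Once /var is its own dataset, the quota itself is trivial (name and size hypothetical):
  zfs set quota=10G rpool/var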
/dev/zvol/dsk/rpool/blahcache
LABEL 0
----
version=15
state=4
guid=6931317478877305718
...
It did indeed overwrite my formerly clean blahcache.
Smells like a serious bug.
/Tomas
--apparent-size
print apparent sizes, rather than disk usage;
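So to compare the two views (path hypothetical, GNU du assumed to be installed as gdu):
  du -sk /tank/dir                      # blocks actually allocated
  gdu --apparent-size -sk /tank/dir     # sum of file lengths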
/Tomas
/onnv/onnv-gate/usr/src/uts/common/fs/zfs/arc.c#arc_reclaim_needed
/Tomas
r posts won't catch newer files
with an old timestamp (which could happen for various reasons, like
being copied with kept timestamps from somewhere else).
/Tomas
> howto setup
> http://www.napp-it.org/napp-it.pdf
>
>
> gea
they do 1500 iops each which is
probably more than your current disks. Or if you can stick an Intel
X25-M/E in there through SATA/SAS.
You can add/remove L2ARCs at will and they don't need to be 100%
reliable either, so if you add several of them they will be raid0'd for
performance.
/Tomas
http://blogs.sun.com/brendan/entry/l2arc_screenshots
>
> And follow up, can you tell how much of each data set is in the arc or l2arc?
kstat -m zfs
(p, c, l2arc_size)
arc_stat.pl is good, but doesn't show l2arc..
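The raw counters are also available in parseable form (instance number may differ):
  kstat -p zfs:0:arcstats:p zfs:0:arcstats:c zfs:0:arcstats:l2_size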
/Tomas
s to
spare..
Thoughts?
/Tomas
On 21 February, 2010 - Felix Buenemann sent me these 0,7K bytes:
> Am 20.02.10 03:22, schrieb Tomas Ögren:
>> On 19 February, 2010 - Christo Kutrovsky sent me these 0,5K bytes:
>>> How do you tell how much of your l2arc is populated? I've been looking for
>>> a
On 21 February, 2010 - Richard Elling sent me these 1,3K bytes:
> On Feb 21, 2010, at 9:18 AM, Tomas Ögren wrote:
>
> > On 21 February, 2010 - Felix Buenemann sent me these 0,7K bytes:
> >
> >> Am 20.02.10 03:22, schrieb Tomas Ögren:
> >>> On 19 February,
> any strain for zfs at all although it can cause considerable stress on
> applications.
>
> 400 million tiny files is quite a lot and I would hate to use anything
> but mirrors with so many tiny files.
Another thought is "am I using the correct storage model for this data
pagesize
(2062222*4096=8446861312 right now)
set zfs:zfs_arc_max = 835000
set zfs:zfs_arc_meta_limit = 70
* some tuning
set ncsize = 50
set nfs:nrnode = 5
And I've done runtime modifications to swapfs_minfree to force usage of another
chunk of memory.
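Such runtime pokes are done with mdb; a minimal sketch, assuming a 64-bit kernel and a purely illustrative value:
  echo 'swapfs_minfree/Z 0t16384' | mdb -kw    # set to 16384 pages
  echo 'swapfs_minfree/E' | mdb -k             # read it back as decimal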
/Tomas
> free thin-provisioned zvol space can
> fill the unused blocks with 0s easily with simple tools (e.g. dd
> if=/dev/zero of=/MYFILE bs=1M; rm /MYFILE) and the space is freed again on
> the zvol side.
>
> Does anyone know why this is not incorporated into ZFS ?
What you can do
> Thanks!
>
> Here's what I'm seeing.
> zpool create datapool raidz1 c1t50060E800042AA70d0 c1t50060E800042AA70d1
Just fyi, this is an inefficient variant of a mirror. More cpu required
and lower performance.
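With only two devices, a plain mirror is the better fit (device names taken from the command above):
  zpool create datapool mirror c1t50060E800042AA70d0 c1t50060E800042AA70d1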
/Tomas
--
Tomas Ögren, st...@acc.umu.se, http://www.acc.umu
scratchy:~# zfs create -o dedup=on kaka/kex
cannot create 'kaka/kex': 'dedup' is readonly
scratchy:~# zfs set dedup=on kaka
cannot set property for 'kaka': 'dedup' is readonly
/Tomas
Of course your copy will be slow. Disk is probably having a hard time
reading the data or something.
/Tomas
start with a
non-raidz vdev).
You can expand a pool by adding more vdevs.
You cannot transform a raidz from one form to another.
You cannot remove a vdev.
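Expanding by adding a vdev looks like this (pool and device names hypothetical):
  zpool add tank raidz c2t0d0 c2t1d0 c2t2d0    # a second raidz vdev, striped with the first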
/Tomas
during resilver, resilver counter will not
> update accordingly, and it will show resilvering 100% for needed time
> to catch up.
I believe this was fixed recently, by displaying how many blocks it has
checked vs how many to check...
/Tomas
> Now, scrub would reveal corrupted blocks on the devices, but is there a way
> to identify damaged files as well?
Is this a trick question or something? The filenames are right over
your question..?
/Tomas
some information here...
> Calling:
>
> `startx /usr/bin/dbus-launch --exit-with-session gnome-session' from
> console. Which is how I've been starting X for some time.
This thread started out way off-topic from ZFS discuss (the filesystem)
and has continued off course.
/Tomas
ne how I could have done
> anything to set that bit. Is this a ZFS weirdness?
It's mkfile.
/Tomas
> "zfs set com.sun\:auto-snapshot=false tank",
> correct?), will see if the log messages disappear. Did the filesystem
> kill off some snapshots or something in an effort to free up space?
Probably.
zfs list -t all to see all the snapshots as well.
/Tomas
as ZFS does not have checkpoints (a checkpoint
is pretty much the same thing as a snapshot anyway).
/Tomas
On 05 December, 2010 - Chris Gerhard sent me these 0,3K bytes:
> Alas you are hosed. There is at the moment no way to shrink a pool which is
> what you now need to be able to do.
>
> back up and restore I am afraid.
.. or add a mirror to that drive, to keep some redundancy.
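Attaching a mirror to the existing single drive is (device names hypothetical):
  zpool attach tank c1t5d0 c2t5d0    # c2t5d0 mirrors c1t5d0 and starts resilvering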
/Tomas
> Short of doing a find | wc.
GNU df can show, and regular Solaris could too but chooses not to.
statvfs() should be able to report as well. In ZFS, you will run out of
inodes at the same time as you run out of space.
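With GNU df (assumed installed as gdf, mount point hypothetical):
  gdf -i /tank/fs    # IUsed/IFree simply track used/available space, since ZFS has no fixed inode table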
/Tomas
/Tomas
disk, it's just
the other way that's bad. I guess ZFS could start defaulting to 4k, but
ideally it should do the right thing depending on content (although
that's hard for disks that are lying).
/Tomas
Windows 7 machine that
> does backups of the server. Bios and drivers are available from the
> Silicon Image site, but nothing for Solaris.
The problem itself is sparc vs x86 and firmware for the card. AFAIK,
there is no sata card with drivers for solaris sparc. Use a SAS card.
/Tomas
on ZFS and mbuffer.
> -- richard
/Tomas
allowed to during
the current circumstances (arc size).
> Does this explain the hang?
No..
/Tomas
--
Tomas Ögren, st...@acc.umu.se, http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se
___
sy mail server. Full
> pools might cause a performance penalty, but no other issues.
>
>
>
> Dave
Self-test log
Num  Test         Status        segment  LifeTime  LBA_first_err  [SK ASC ASQ]
     Description                number   (hours)
# 1  Default      Completed        -        293    -              [-   -    -]
...
/Tomas
On 07 April, 2011 - Russ Price sent me these 0,7K bytes:
> On 04/05/2011 03:01 PM, Tomas Ögren wrote:
>> On 05 April, 2011 - Joe Auty sent me these 5,9K bytes:
>>> Has this changed, or are there any other techniques I can use to check
>>> the health of an individual SAT
> possible before thinking them through properly.
I can't think of any, so what are your uses?
/Tomas
4M
> c8t2d0          0  29.5G      0      0      0      0
Btw, this disk seems alone, unmirrored and a bit small..?
> cache             -      -      -      -      -      -
>   c8t3d0      59.4G  3.88M    113     64  6.51M  7.31M
>   c8t1d0      59.5G    48K     95     69  5.69M  8.08M
>
of my head.
>
> Anybody know the answer to that one?
zdb -bb pool
/Tomas
Using 2 fast USB sticks as l2arc, waiting for a
Vertex2EX and a Vertex3 to arrive for ZIL & L2ARC testing. IO to the
filesystems is quite low (50 writes, 500k data per sec on average), but
snapshot times go way up during backups.
/Tomas
n IBM RS/6000 43P with a PowerPC 604e
cpu, which had about 60MB/s memory bandwidth (which is kind of bad for a
332MHz cpu) and its disks could do 70-80MB/s or so.. in some other
machine..
/Tomas
of other PIDs] 20617tm [others] 20412cm [others]
> #fuser -c /opt
> /opt:
> #
>
> Nothing at all for /opt. So it's safe to unmount? Nope:
...
> Has anyone else seen something like this?
Try something less ancient, Solaris 10u9 reports it just fine for
example. ZFS was pretty
On 10 May, 2011 - Tomas Ögren sent me these 0,9K bytes:
> On 23 November, 2005 - Benjamin Lewis sent me these 3,0K bytes:
>
> > Hello,
> >
> > I'm running Solaris Express build 27a on an amd64 machine and
> > fuser(1M) isn't behaving
> > as I wou
t the various things under Disk and FS,
might help.
/Tomas
multaneously from multiple filesystems at once?
> >
> > Regards,
> > Gertjan Oude Lohuis
On 31 May, 2011 - Gertjan Oude Lohuis sent me these 0,9K bytes:
> On 05/31/2011 03:52 PM, Tomas Ögren wrote:
>> I've done a not too scientific test on reboot times for Solaris 10 vs 11
>> with regard to many filesystems...
>>
>
>> http://www8.cs.umu.se/~stric/tm
bout 700-800 writes/sec. on the hot spare as it resilvers.
> There is no other I/O activity on this box, as this is a remote
> replication target for production data. I have a the replication
> disabled until the resilver completes.
700-800 seq ones perhaps.. for random, you can divide by
>
> Since you can't mix vdev types in a single pool, you'll have to create
> a new pool. But you can use zfs send/recv to move the datasets, so
You can mix as much as you want to, but you can't remove a vdev (yet).
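Moving datasets with send/recv goes roughly like this (pool and snapshot names hypothetical):
  zfs snapshot -r tank@migrate
  zfs send -R tank@migrate | zfs receive -Fdu newtank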
/Tomas
performance problem.
And if pool usage is >90%, then there's another problem (the algorithm
for finding free space changes).
/Tomas
shazoo:~# gdd if=/dev/rdsk/c0t5E83A97F1471E0A4d0s0 of=/dev/null bs=1024k
count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 3.93114 s, 273 MB/s
This is in a x4170m2 with Solaris10.
/Tomas
blocks
delivered from the devices (SAN I guess?) were broken according to the
checksum. If you had raidz/mirror in zfs, it would have corrected the
problems and written back correct data to the malfunctioning device. Now
it does not. A scrub only reads the data and verifies that the data
matches the checksums.
/Tomas
ny. I was actually considering this :p
4-way mirror would be way more useful.
> But you have to admit, it would probably be somewhat reliable!
/Tomas
Matt Harrison wrote:
>Hi list,
>
>I want to monitor the read and write ops/bandwidth for a couple of
>pools
>and I'm not quite sure how to proceed. I'm using rrdtool so I either
>want an accumulated counter or a gauge.
>
>According to the ZFS admin guide, running zpool iostat without any
>pa
be regained.
Overwriting a previously used block requires a flash erase, and if that
can be done in the background, when the timing is not critical, instead
of just before you can actually write the block you want, performance
will increase.
/Tomas