Examples:
>
> Solaris ZFS already has support for 1MB block size.
>
> Support for SCSI UNMAP - both issuing it and honoring it when it is the
> backing store of an iSCSI target.
Would this apply to, say, a SATA SSD used as ZIL? (which we have, a
Vertex2EX with supercap)
/Tomas
--
Tom
know why these are showing up?
They're used for new style sharing in s11.1 as well.
zfs list -t share
/Tomas
--
Tomas Forsman, st...@acc.umu.se, http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se
make dedups intra-datasets (like
"the real thing").
/Tomas
--
Tomas Forsman, st...@acc.umu.se, http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se
apologize.
>
> cs
>
> On 01/14/13 14:02, Nico Williams wrote:
>> On Mon, Jan 14, 2013 at 1:48 PM, Tomas Forsman wrote:
>>>> https://bug.oraclecorp.com/pls/bug/webbug_print.show?c_rptno=15852599
>>>
>>> Host oraclecorp.com not found: 3(NXDOMAIN)
uitous.
>> -- richard
>>
>> --
>>
>> richard.ell...@richardelling.com
>> <mailto:richard.ell...@richardelling.com>
>> +1-760-896-4422
> to read it, and copy it over (eg. with zfs send | recv) onto a pool
> created with version 28.
>
> Jan
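Roughly like this, assuming the old pool is called oldpool and the new one
newpool (pool and disk names made up, untested):

  # create the destination pool at the older on-disk version
  zpool create -o version=28 newpool c1t1d0
  # recursive snapshot of everything on the source
  zfs snapshot -r oldpool@migrate
  # stream the whole hierarchy across; -d drops the old pool name,
  # -u keeps the received filesystems unmounted until you're ready
  zfs send -R oldpool@migrate | zfs recv -Fdu newpool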
> > working
> > on it as a Sev 1 Priority1 in sustaining engineering.
> >
> > I am thinking about switching
On 30 November, 2012 - Jim Klimov sent me these 2,3K bytes:
> On 2012-11-30 15:52, Tomas Forsman wrote:
>> On 30 November, 2012 - Albert Shih sent me these 0,8K bytes:
>>
>>> Hi all,
>>>
>>> I would like to know if with ZFS it's possible to do so
Pull out a disk, put in a new one and wait for the resilver (possibly tell it to
replace, if you don't have autoreplace on).
Repeat until done.
If you have the physical space, you can first put in a new disk, tell it
to replace and then remove the old.
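In command form it's roughly this (pool and device names made up):

  zpool offline tank c1t2d0   # take the old disk out of service
  # physically swap the disk in that slot, then:
  zpool replace tank c1t2d0   # resilver onto the new disk in the same slot
  zpool status tank           # wait for the resilver before doing the next disk

With the spare slot it's just 'zpool replace tank c1t2d0 c1t5d0' with both
disks attached.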
/Tomas
--
Tomas Forsman, st...@acc.umu.se, http://www.acc.umu.se/~stric/
|- Student at Comput
  system("/bin/chmod", "A=".$acl, $newfile);
}
/bin/find /export -acl -print0 | xargs -0 /blah/aclcopy.pl
/Tomas
--
Tomas Forsman, st...@acc.umu.se, http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se
shouldn't be a problem even if it fails
(could be a problem if it's half-failing and just being slow; if so,
get rid of it).
>
>
> --
> Michel Jansens
>
Log is an OCZ Vertex2EX (SLC & supercap), cache is an Intel 510.
Host has been rebooted 3 times after the pool was created, all the same
day as creation.
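(Attaching such devices is just zpool add, e.g., device names made up:

  zpool add tank log c2t0d0     # SLC SSD with supercap as separate ZIL
  zpool add tank cache c2t1d0   # Intel 510 as L2ARC
)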
/Tomas
--
Tomas Forsman, st...@acc.umu.se, http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at
[961655.460019] raid6: int64x8   1672 MB/s
[961655.528062] raid6: sse2x1     834 MB/s
[961655.596047] raid6: sse2x2    1273 MB/s
[961655.664028] raid6: sse2x4    2116 MB/s
[961655.664030] raid6: using algorithm sse2x4 (2116 MB/s)
So raid6 at 2Gbyte/s and raid5 at 6Gbyte/s should be enough on a 6+ year
s fixed in Illumos or even if Illumos was affected by
> this at all.
The code that affects bug 7111576 was introduced between s10 and s11.
/Tomas
--
Tomas Forsman, st...@acc.umu.se, http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.um
way. So you're
> guaranteed to have performance degradation with the dedup.
/Tomas
--
Tomas For
on.
So in ZFS, which normally uses 128kB blocks, you can instead store them
100% uniquely into 32 bytes.. A nice 4096x compression rate..
decompression is a bit slower though..
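(The arithmetic, for anyone checking: a 128 KiB block against a 32-byte
SHA-256 checksum:

  echo $((128 * 1024 / 32))   # -> 4096
)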
/Tomas
--
Tomas Forsman, st...@acc.umu.se, http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of
.jmcp.homeunix.com/blog
> -
> No virus found in this message.
Good to know, I better trust this info - just like spam that says it's
not spam :P
> Checked by AVG - www.avg.com
> Version: 2012.0.2178 / Virus Database: 2433/5062 - Release Date: 06/11/12
>
e, just serial? Or is it possible to run the scrub in parallel,
> so it takes 5h no matter how many disks?
It walks the filesystem/pool trees, so it's not just reading the disk
from track 0 to track 12345; it validates every copy of each block as it goes.
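Starting one and watching it is just (pool name made up):

  zpool scrub tank       # walks the pool, verifying every copy of every allocated block
  zpool status -v tank   # shows scrub progress and any errors found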
/Tomas
--
Tomas Forsman, st...@acc.umu.se, ht
It had its share of merits and bugs.
/Tomas
--
Tomas Forsman, st...@acc.umu.se, http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se
built-in..
/Tomas
--
Tomas Forsman, st...@acc.umu.se, http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se
th0.de/creating-raidz-with-missing-device/
.. copy data from temp to new pool, quite important step ;) (rough sketch of the whole procedure below)
> - destroy the temporary pool
> - replace the fake device with now-free disk
> - export the new pool
> - import the new pool and rename it in the process: "zpool import
>
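Rough sketch of the whole procedure (untested, sizes and device names made up):

  mkfile -n 2000g /var/tmp/fakedisk               # sparse file standing in for the missing disk
  zpool create newpool raidz c1t0d0 c1t1d0 c1t2d0 /var/tmp/fakedisk
  zpool offline newpool /var/tmp/fakedisk         # degrade it before any real data lands on the file
  rm /var/tmp/fakedisk
  # ... copy the data over, then destroy the temporary pool ...
  zpool replace newpool /var/tmp/fakedisk c1t3d0  # give the raidz its real fourth disk
  zpool export newpool
  zpool import newpool whatever-name              # rename on import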
Accepted
Severity: 2-High
Last Updated: 2012-01-07 00:00:00 GMT+00:00
I've filed an SR pointing at the same bug too, to get momentum.
/Tomas
--
Tomas Forsman, st...@acc.umu.se, http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmi
ref/onnv/onnv-gate/usr/src/uts/common/fs/zfs/arc.c#2089
>
> maybe
>
>
> dtrace -n 'fbt:zfs:arc_reclaim_needed:return { trace(arg1) }'
>
>
> Dave
>
/Tomas
--
Tomas Forsman, st
0 MB
> memory_throttle_count = 0
> meta_used = 499 MB
> meta_max = 1154 MB
> meta_limit = 0 MB
> arc_no_grow = 1
> arc_tempreserve = 0 MB
wmload
> so that the image can be copied to the system faster?
You should probably ask on some forum/list that's related to illumos..
This is about the ZFS filesystem..
/Tomas
--
Tomas Forsman, st...@acc.umu.se, http://www.acc.umu.se/~stric/
|- Student at Computing Science, University o
> " gives "chmod: ERROR: extended system attributes not
> supported for share" (even though it has the xattr=on property).
> What is the problem here, why cannot a Solaris 10 filesystem be shared via
> smb?
> And how can extended attributes be set on a zfs filesystem?
>
On 10 November, 2011 - Will Murnane sent me these 1,5K bytes:
> On Thu, Nov 10, 2011 at 14:12, Tomas Forsman wrote:
> > On 10 November, 2011 - Bob Friesenhahn sent me these 1,6K bytes:
> >> On Wed, 9 Nov 2011, Tomas Forsman wrote:
On 10 November, 2011 - Bob Friesenhahn sent me these 1,6K bytes:
> On Wed, 9 Nov 2011, Tomas Forsman wrote:
>>>
>>> At all times, if there's a server crash, ZFS will come back along at next
>>> boot or mount, and the filesystem will be in a consistent state
Client writes block 2.. waits.. waits.. server comes up, server
says OK and writes it to disk.
Now, from the view of the clients, blocks 0-2 are all OK'd by the server
with no visible errors.
On the server, block 1 never arrived on disk and you've got silent
corruption.
Too bad NFS is re
ng it.
>
> > zfs get snapdir xxx
> NAME PROPERTY VALUESOURCE
> xxx snapdir hidden default
>
> You would use "zfs set snapdir=hidden <filesystem>" to set the parameter.
.. which is the default.
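If you actually want .zfs to show up in directory listings, flip it the other
way (pool/fs being whatever filesystem it is):

  zfs set snapdir=visible pool/fs   # makes /pool/fs/.zfs visible to ls and friends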
/Tomas
--
Tomas Forsman, st...@acc.umu.se, http:/
e systems I look after!
I've found it useful time after time.. do things and then check atime
to see which files it actually looked at..
(yes, I know about truss and dtrace)
/Tomas
--
Tomas Forsman, st...@acc.umu.se, http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`-
ure.
>
> I also made another huge "mistake" which really got me into deep pain.
> I physically removed these two added devices because I thought raidz2 could
> tolerate it.
> But now the whole pool is corrupt. I don't know where to go from here ...
> Any help will be tremen
#!/bin/sh
zfs send pool/filesystem1@100911 > /backup/filesystem1.snap &
zfs send pool/filesystem2@100911 > /backup/filesystem2.snap
..?
> i need to incorporate these 2 into a single script with both commands
> running concurrently.
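If the script should also not exit until both sends are done, background both
and wait; a minimal sketch:

  #!/bin/sh
  zfs send pool/filesystem1@100911 > /backup/filesystem1.snap &
  zfs send pool/filesystem2@100911 > /backup/filesystem2.snap &
  wait   # blocks until both background sends have finished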
/Tomas
--
Tomas Forsman, st...@acc.umu.se, http:/
So don't use desktop drives in raid and don't use raid disks in a
desktop setup. Of course, this is just a config setting - but it's still
reality.
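(If the drive supports SCT ERC you can at least inspect/set the error recovery
timeout with smartctl; a sketch, device name made up, values in tenths of a
second:

  smartctl -l scterc /dev/rdsk/c1t0d0s0         # show current read/write recovery limits
  smartctl -l scterc,70,70 /dev/rdsk/c1t0d0s0   # cap recovery at 7.0 s for reads and writes

Many desktop drives refuse the command or forget the setting across power
cycles.)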
/Tomas
--
Tomas Forsman, st...@acc.umu.se, http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at
3323030303130363120d0s0
> # ln -s /jbod1-diskbackup/restore/deep_san01_lun.dd
> /dev/dsk/c4t526169645765622E436F6D202020202030303131383933323030303130354220d0s0
>
> # zpool import -d .
Just for fun, try an absolute path.
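i.e. something like (both paths taken from your mail):

  zpool import -d /jbod1-diskbackup/restore   # point -d straight at the directory with the dd image
  zpool import -d /dev/dsk                    # or at the directory holding the symlinks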
/Tomas
--
Tomas Forsman, st...@acc.umu.se, http://www.acc.