>forgive my ignorance, but what's the advantage of this new dedup over
>the existing compression option?
it may provide an additional space saving on top of compression. depending on
your data, the savings can be very significant.
>Wouldn't full-filesystem compression
>naturally de-dupe?
no, compression doesn't. compression shrinks each block on its own, while
dedup detects identical blocks across the whole pool and stores them only once.
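for reference, a minimal sketch of turning both on (pool name is illustrative;
dedup needs a pool/kernel recent enough to support it):
# dedup is a per-dataset property, inherited by children
zfs set dedup=on tank
# compression is an independent property and combines with dedup
zfs set compression=on tank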
via a posting on the zfs-fuse mailing list, i came across "zle" compression,
which seems to be part of the dedup commit from a few days ago:
http://hg.genunix.org/onnv-gate.hg/diff/e2081f502306/usr/src/uts/common/fs/zfs/zle.c
--snip
31 + * Zero-length encoding. This is a fast and simple algorithm to eliminate runs of zeroes.
regards
roland
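a minimal sketch of trying it, assuming you run bits that include the dedup
commit, where zle should be selectable like any other compression value
(dataset name is illustrative):
zfs set compression=zle tank/scratch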
thanks.
we will try that if the error happens again - we needed to reboot as a quick
fix, as the machine is in production
regards
roland
sdelete may be the easiest, but not the best tool here, since it's made for
secure deletion, not for quickly filling a disk with zeroes.
i have no windows box around for performance testing, but dd may perform
better:
http://www.chrysocome.net/dd
you should try "dd if=/dev/zero of=la
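the command above is cut off; a plausible completion - filename and block size
are my own illustration, not from the original post - writes zeroes until the
disk is full and then deletes the file:
dd if=/dev/zero of=zerofile bs=1048576
rm zerofile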
i have a problem which is perhaps related.
i installed opensolaris snv_130.
after adding 4 additional disks and creating a raidz on them with
compression=gzip and dedup enabled, i got a reproducible system freeze (not
sure, but the desktop/mouse cursor froze) directly after login - without active
seems my problem is unrelated.
after disabling the gui and working console-only, i see no freezes. so it must
be a problem of the desktop/X environment and not a kernel/zfs issue.
sorry for the noise.
making transactional, logging filesystems thin-provisioning aware should be
hard to do, as every new and every changed block is written to a new location.
so what applies to zfs should also apply to btrfs or nilfs or similar
filesystems.
i'm not sure if there is a good way to make zfs thin-provisioning
Hello,
the ZFS best practices guide at
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide says:
>* Run ZFS on a system that runs a 64-bit kernel
besides performance aspects, what are the cons of running zfs on 32-bit?
so, besides performance there COULD be some stability issues.
thanks for the answers - i think i'll stay with 32-bit, even if there COULD be
issues. (i'm happy to report and help fix those)
i don't have spare 64-bit hardware around for building storage boxes.
>the only problems i've run into are: slow (duh) and will not
>take disks that are bigger than 1tb
do you think that 1 TB limit is due to 32-bit solaris?
so, we have a 128-bit fs, but only support for 1 TB on 32-bit?
i'd call that a bug, wouldn't you? is there a bugid for this? ;)
>Solaris is NOT a super-duper-plays-in-all-possible-spaces OS.
yes, i know - but it's disappointing that not even 32-bit and 64-bit x86
hardware is handled the same.
1 TB limit on 32-bit, less stable on 32-bit.
sorry, but if you are used to linux, solaris is really weird.
an issue here, a limitation there
/backup1 (that's easy, just du -s -h
/zfs/backup1) and how much space do the snapshots need (that seems not so easy)
thanks
roland
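a hedged way to get the snapshot part, assuming the dataset is zfs/backup1 (as
in the du example above) and a zfs version that has the usedby* properties:
# space consumed only by snapshots, per dataset
zfs list -r -o name,used,usedbysnapshots,usedbydataset zfs/backup1
# or list the snapshots themselves with their individual usage
zfs list -r -t snapshot zfs/backup1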
great, will try it tomorrow!
thanks very much!
just a side-question:
>I fol* this thread with much interest.
what are these "*" for?
why is "followed" turned into "fol*" on this board?
>Dennis is correct in that there are significant areas where 32-bit
>systems will remain the norm for some time to come.
think of the hundreds of thousands of VMware ESX/Workstation/Player/Server
installations on non-VT-capable cpus - even if the cpu has 64-bit capability, a
VM cannot run in 64-bit mode
u4. If it does not work out of the
box, can i use that driver with opensolaris/snv ?
thanks
roland
mhh, i think i'm afraid too, as i also need to use zfs on a single, large lun.
>Running this kind of setup absolutely can give you NO garanties at all.
>Virtualisation, OSOL/zfs on WinXP. It's nice to play with and see it
>"working" but would I TRUST precious data to it? No way!
why not?
if i write some data through the virtualization layer which goes straight
through to the raw disk
>As soon as you have more then one disk in the equation, then it is
>vital that the disks commit their data when requested since otherwise
>the data on disk will not be in a consistent state.
ok, but doesn't that refer only to the most recent data?
why can i lose a whole 10TB pool including all t
thanks for the explanation !
one more question:
> there are situations where the disks doing strange things
>(like lying) have caused the ZFS data structures to become wonky. The
>'broken' data structure will cause all branches underneath it to be
>lost--and if it's near the top of the tree, it c
what's your disk controller?
Hello !
How can i export a filesystem /export1 so that sub-filesystems within that
filesystem will be available and usable on the client side without additional
"mount/share effort"?
this is possible with linux nfsd and i wonder how this can be done with
solaris nfs.
i'd like to use /export
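for reference, the linux nfsd feature referred to is a single export option; a
sketch of /etc/exports (host pattern is illustrative):
# crossmnt exposes child mounts under /export1 to nfs clients
/export1 *(rw,crossmnt)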
>IIRC the corruption (i.e. pool being not importable) was caused
>when I killed virtual box, because it was hung.
that scares me about using zfs inside virtual machines. is such an issue known
with vmware?
what exact type of sata controller do you use?
doesn't solaris have the great builtin dtrace for issues like these?
if we knew in which syscall or kernel thread the system is stuck, we might get
a clue...
unfortunately, i don't have any real knowledge of solaris kernel internals or
dtrace...
>Yes, but to see if a separate ZIL will make a difference the OP should
>try his iSCSI workload first with ZIL then temporarily disable ZIL and
>re-try his workload.
or you may use the zilstat utility
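zilstat is a dtrace-based script by Richard Elling; a typical invocation (the
arguments, interval in seconds and sample count, are illustrative):
./zilstat 1 10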
>Re-surfacing an old thread. I was wondering myself if there are any
>home-use commercial NAS devices with zfs. I did find that there is
>Thecus 7700. But, it appears to come with Linux, and use ZFS in FUSE,
>but I (perhaps unjustly) don't feel comfortable with :)
no, you justly feel uncomfortable
>I tried making my nfs mount to higher zvol level. But I cannot traverse to the
>sub-zvols from this mount.
i really wonder when someone will come up with a little patch which implements
the crossmnt option for solaris nfsd (like the one that exists for linux nfsd).
ok, even if it's a hack - if it works it
>SSDs with capacitor-backed write caches
>seem to be fastest.
how do you distinguish them from ssds without one?
i never saw this explicitly mentioned in the specs.
>I would like to duplicate this scheme using zfs commands.
you don't want to do that.
zfs is meant to be used as a filesystem on a backup server, not for long-term
storage of data on removable media
what you want is possible with linux nfs, but the solaris nfs developers don't
like this feature and will not implement it. see
http://www.opensolaris.org/jive/thread.jspa?threadID=109178&start=0&tstart=0
is it planned to add other compression algorithms to zfs?
lzjb is quite good and performs especially well, but i'd like to have better
compression (bzip2?) - no matter how badly performance drops with it.
regards
roland
> Take note though, that giving zfs the entire disk gives a possible
> performance win, as zfs will only enable the write cache for the disk
> if it is given the entire disk.
really?
why is this?
is this tunable somehow/somewhere? can i enable the write cache if only using
a dedicated partition?
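for reference, a hedged sketch: on solaris the per-disk write cache can be
inspected and toggled by hand in format's expert mode (device name is
illustrative; this applies to sd-style disks):
# format -e c1t0d0
format> cache
cache> write_cache
write_cache> display
write_cache> enable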
glish phrase or some running gag? i
have seen it a while ago on another blog, so i'm wondering
greetings from the beer and sausage nation ;)
roland
hey, thanks for your overwhelming private lesson in english colloquialisms :D
now back to the technical stuff :)
> # zfs create pool/gzip
> # zfs set compression=gzip pool/gzip
> # cp -r /pool/lzjb/* /pool/gzip
> # zfs list
> NAME       USED  AVAIL  REFER  MOUNTPOINT
> pool/gzip  64.9M  33.2G  64.9M
ably not the most elegant solution - but unstable?
could you back that claim up somehow?
i have used the loopback module for years and never had a problem.
anyway - it's getting a competitor: a bugfixed version of the dm-loop
device-mapper target was just posted on dm-devel today.
roland
for what purpose ?
please be cautious with these benchmarks and don't make early decisions based
on them.
> So, at this point in time that seems pretty discouraging for an everyday
> user, on Linux.
nobody said that zfs-fuse is ready for an everyday user in its current state!
;)
although it runs pretty stably for now, there still remain major issues and,
especially, it has not yet been optimized
hi !
i think i have read somewhere that zfs gzip compression doesn't scale well,
since the in-kernel compression isn't done multi-threaded.
is this true - and if so - will this be fixed?
what about the default lzjb compression - is it different regarding this
"issue"?
thanks
roland
>For what it's worth, at a previous job I actually ported LZO to an
>OpenFirmware
>implementation. It's very small, doesn't rely on the standard libraries, and
>would be
>trivial to get running in a kernel. (Licensing might be an issue, of course.)
just for my personal interest - are you speak
better speed and better
compression in comparison to lzjb.
nothing against lzjb compression - it's pretty nice - but why not take a
closer look here? maybe there is some room for improvement...
roland
an lzo in-kernel implementation for solaris/sparc?
your answer makes me believe it exists.
could you comment on that?
roland
the last number (2.99x) is the compression ratio and was much better than
lzjb.
not sure if there is some mistake here; i was quite surprised that it was so
much better than lzjb
also, eric dillmann said it would perform better than lzjb - at
least with zfs-fuse.
>licensing issues can be sorted out later..
good attitude ! :)
the zfs-fuse author/maintainer is Ricardo Correia and the lzo patch was done
by Eric Dillmann. i can provide contact data if you like.
roland
nice one!
i think this is one of the best and most comprehensive papers about zfs i
have seen
regards
roland
space
very quickly, even when using snapshots and even if only small parts of the
large file are changing.
comments?
regards
roland
>So, in your case, you get maximum
>space efficiency, where only the new blocks are stored, and the old
>blocks simply are referenced.
so - i assume that whenever some block is read from file A and written
unchanged to file B, zfs recognizes this and just creates a new reference to
file A's block?
thanks
whoops - i see i have posted the same thing several times.
this was due to an error message i got when posting, which made me think it
didn't get through
could some moderator please delete those duplicate posts?
meanwhile, i did some tests and have very weird results.
first off, i tried "--inplace" to updat
iting very different.
when i use rsync through the network stack (i.e. localhost:/localdestination)
it seems to work as expected.
need some more testing to be really sure, but for now things look more
promising
roland
gzip | 0m17.418s | 219.64x
regards
roland
some closed-source lzo professional which
is even more optimized.
maybe sun should think about lzo in zfs - despite those licensing issues. i'm
sure that could be resolved somehow, maybe by paying an appropriate amount of
bucks to mr. oberhumer.
roland
a whole zfs volume after turning
on compression or changing the compression scheme?
roland
nice idea! :)
>We plan to start with the development of a fast implementation of a Burrows
>Wheeler Transform based algorithm (BWT).
why not start with lzo first - it's already in zfs-fuse on linux and it looks
like it's just "in between lzjb and gzip" in terms of performance and
compression
>One thing ZFS is missing is the ability to select which files to compress.
yes.
there is also no filesystem-based approach to compressing/decompressing a
whole filesystem. you can have 499gb of data on a 500gb partition - and if you
need some more space you would think of turning on compression on
> Wouldn't ZFS's being an integrated filesystem make it
> easier for it to
> identify the file types vs. a standard block device
> with a filesystem
> overlaid upon it?
>
> I read in another post that with compression enabled,
> ZFS attempts to
> compress the data and stores it compressed if it
>6564677 oracle datafiles corrupted on thumper
wow, must be a huuge database server!
:D
could you give an example of what a 32-bit inode script is?
i don't have a solution for you - but at least some comments:
- i have read several complaints that esx iscsi is broken to some degree.
there are some known incompatibilities, and at least one ceo of a somewhat
popular iscsi software vendor recently made such a statement.
- i have read more than on
take a look at this one
http://www.opensolaris.org/jive/thread.jspa?messageID=98176
any news on additional compression schemes for zfs?
this is an interesting research topic, imho :)
so, some more real-world tests with zfs-fuse + the lzo patch:
-LZO
zfs set compression=lzo mypool
time cp /vmware/vserver1/vserver1.vmdk /mypool
real
what you're looking for is called a bind mount, and that's a linux kernel
feature.
i don't know if solaris has a perfect equivalent for this - maybe lofs is
what you need.
see "man lofs"
seems that standard drives are ok.
sun is using the Hitachi Deskstar 7K500 in its Sunfire x4500/thumper.
besides re-inventing the wheel, somebody at sun should wake up, go ask mr.
oberhumer, and pay him $$$ to get lzo into ZFS.
this is taken from http://www.oberhumer.com/opensource/lzo/lzodoc.php :
Copyright
-
LZO is Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004,
2
for those who are interested in lzo with zfs, i have made a special version
of the patch taken from the zfs-fuse mailing list:
http://82.141.46.148/tmp/zfs-fuse-lzo.tgz
this file contains the patch in unified diff format and also a broken-out
version (i.e. split into single files).
maybe this m
veritas is quite popular, but you need to spend lots of bucks for it.
maybe SAM-QFS?
regards
roland
and what about compression?
:D
being at $300 now - a friend of mine just added another $100
is there any pricing information available ?
*bump*
just wanted to keep this in the discussion. i think it could be important to
zfs if it could compress faster with a better compression ratio.
i have difficulty understanding this:
you say that the device gets lost whenever the I/O error occurs.
you say that you cannot use ext3 or xfs, but reiser works.
with reiser, the device doesn't get lost on I/O error?
that's very weird.
what's your distro/kernel version?
nothing new on this?
i'm really wondering why interest in alternative compression schemes is so
low, especially given that lzo seems to compress better and be faster than
lzjb.
has nobody at sun done further investigation?
>Try running iostat in another ssh window, you'll see it can't even gather
>stats every 5 seconds >(below is iostats every 5 seconds):
>Tue May 27 09:26:41 2008
>Tue May 27 09:26:57 2008
>Tue May 27 09:27:34 2008
that should not happen!
i'd call that a bug!
how does vmstat behave with lzjb compr
effective size on other fs (e.g. reiser3, because it stores small
files efficiently)
5. ???
TIA
roland k.
sysadmin
thanks for your feedback!
something i wouldn't have expected.
ok, i didn't expect the same size, but i never would have expected such a BIG
difference, since we are basically re-compressing data which is already
compressed.
what's causing this effect?
can someone explain it?
regards
roland
# zpool create 500megpool /home/roland/tmp/500meg.dat
cannot create '500megpool': name must begin with a letter
pool name may have been omitted
huh?
ok - no problem if special characters aren't allowed, but why _this_ weird
looking limitation?
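for reference: the restriction is only on the first character, so a name
starting with a letter works (name is illustrative):
# zpool create pool500m /home/roland/tmp/500meg.dat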
t those single-bit errors and correct them - but what about a
single-disk setup?
can zfs protect my data from such single-bit errors with a single drive?
regards
roland
thanks for your info!
> > can zfs protect my data from such single-bit errors with a single drive ?
> >
>nope.. but it can tell you that it has occurred.
can it also tell me (or can i use a tool to determine) which data/file is
affected by this error (and needs repair/restore from backup)?
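one hedged answer: after a scrub, zpool status with -v lists the files/objects
with permanent (unrecoverable) errors (pool name is illustrative):
# zpool scrub mypool
# zpool status -v mypool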
how long does it take to transmit 1 TiB over a 1 GB/sec transmission
link, assuming no overhead?
See?
hth
-- Roland
--
Roland Rambau Server and Solution Architects
Principal Field Technologist Global Systems
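a quick back-of-the-envelope for the quiz, reading "1 GB/sec" literally as
10^9 bytes/s (my arithmetic, not the poster's):
1 TiB = 2^40 bytes ~= 1.0995 x 10^12 bytes
1.0995 x 10^12 B / 10^9 B/s ~= 1100 s, i.e. about 18.3 minutes
that is roughly 10% longer than the 1000 s a naive "1 TB / 1 GB/s" gives -
exactly the decimal-vs-binary-prefix point being made.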
"transfer megabytes at 1Hz"
therefor its 1'000'000 B/s ( strictly speaking )
Of course usually some protocol overhead is much larger and so the small
1000:1024 difference is irrelevant anyway and can+will be neglected.
-- Roland
On 17.03.2010 04:45, Erik Trimble wrote:
gang,
actually a simpler version of that idea would be a "zcp":
if I just cp a file, I know that all blocks of the new file
will be duplicates; so the cp could take full advantage of
the dedup without a need to check/read/write any actual data
-- Roland
Per Baatrup wrote:
Michael,
michael schuster wrote:
Roland Rambau wrote:
gang,
actually a simpler version of that idea would be a "zcp":
if I just cp a file, I know that all blocks of the new file
will be duplicates; so the cp could take full advantage for
the dedup without a need to check/read
Per,
Per Baatrup wrote:
Roland,
Clearly an extension of "cp" would be very nice when managing large files.
Today we are relying heavily on snapshots for this, but this requires
discipline in storing files in separate zfs'es, avoiding snapshotting too
many files that change f
Load "chmod" builtin command
$ builtin chmod
3. View help
$ chmod --man
or
$ chmod --help
Does that work for you ?
Bye,
Roland
--
__ . . __
(o.\ \/ /.o) roland.ma...@nrubsig.org
\__\/\/__/ MPEG specialist, C&&JAVA&&Sun&&
it's on my todo list (the tricky part is to find the
person who originally added ACL support to Solaris's "chmod", since I
have a couple of questions...) ...
Bye,
Roland
ot.
they use an $85 PC motherboard - that does not have "meager 4x PCI-e slots",
it has one 16x and three 1x PCIe slots, plus 3 PCI slots (remember, long ago:
32-bit wide, 33 MHz, probably a shared bus).
Also it seems that all external traffic uses the single GbE motherboard
Hi!
Does anyone know offhand whether tmpfs supports ACLs - and if
"yes" - which type(s) of ACLs (e.g. NFSv4/ZFS, old POSIX draft ACLs
etc.) are supported by tmpfs?
----
Bye,
Roland
Norm Jacobs wrote:
> Roland Mainz wrote:
> > Does anyone know offhand whether tmpfs supports ACLs - and if
> > "yes" - which type(s) of ACLs (e.g. NFSv4/ZFS, old POSIX draft ACLs
> > etc.) are supported by tmpfs?
>
> I have some vague recollection t
Ian Collins wrote:
> Roland Mainz wrote:
> > Norm Jacobs wrote:
> >> Roland Mainz wrote:
> >>> Does anyone know offhand whether tmpfs supports ACLs - and if
> >>> "yes" - which type(s) of ACLs (e.g. NFSv4/ZFS, old POSIX draft ACLs
> >>> etc.) are supported by tmpfs?
Robert Thurlow wrote:
> Roland Mainz wrote:
>
> > Ok... does that mean that I have to create a ZFS filesystem to actually
> > test ([1]) an application which modifies ZFS/NFSv4 ACLs or are there any
> > other options ?
>
> By all means, test with ZFS. But it's
sooo much - Oracle has offloaded certain database functionality into
the storage nodes. I would not assume that there is a hybrid storage
pool with a file system - it is a distributed database that knows how to
utilize flash storage. I see it as a first quick step.
hth
-- Roland
PS
Chris,
well, "Thumper" is actually a reference to Bambi
The comment about being risque was refering to "Humper" as
a codename proposed for a related server
( and e.g. leo.org confirms that is has a meaning labelled as "[vulg.]" :-)
-- Roland
Chris Ridd sch
"$(cat /usr/bin/cat)" # (e.g. the
attempt to pass a giant binary string as an argument))) ... and I am
currently working on a new shell code style guideline at
http://www.opensolaris.org/os/project/shell/shellstyle/ with more stuff.
Bye,
Roland
Nicolas Williams wrote:
> On Wed, Jun 27, 2007 at 12:55:15AM +0200, Roland Mainz wrote:
> > Nicolas Williams wrote:
> > > On Sat, Jun 23, 2007 at 12:31:28PM -0500, Nicolas Williams wrote:
> > > > On Sat, Jun 23, 2007 at 12:18:05PM -0500, Nicolas Williams wrote:
We are having the same problem.
First with 125025-05 and then also with 125205-07.
Solaris 10 update 4 - now with all patches.
We opened a case and got
T-PATCH 127871-02
we installed the Marvell driver binary 3 days ago.
T127871-02/SUNWckr/reloc/kernel/misc/sata
T127871-02/SUNWmv88sx/reloc/ke
> > archives containing
> > utf8-incompatible filenames?
>
> Note that the normal ZFS behavior is exactly what you'd expect: you
> get the filenames you wanted; the same ones back you put in.
Does ZFS convert the strings to UTF-8 in this case or will it just store
the multibyte sequ
Tim Haley wrote:
> Roland Mainz wrote:
> > Bart Smaalders wrote:
> >> Marcus Sundman wrote:
> >>> I'm unable to find more info about this. E.g., what does "reject file
> >>> names" mean in practice? E.g., if a program tries to create a fil