On Sun, Aug 23 at 14:11, Tristan Ball wrote:
Hang on: reading that, his initial results were 50 writes a second with
the default XFS write barriers, which to me implies that the drive is
honouring the cache flush. The fact that the write rate jumps so
significantly when he turns off barriers
Is there a mechanism by which you can perform a zfs send | zfs receive
and not have the data uncompressed and recompressed at the other end?
I have a gzip-9 compressed filesystem that I want to backup to a
remote system and would prefer not to have to recompress everything
again at such gre
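For what it's worth, a hedged sketch: on OpenZFS releases that support compressed send streams, `zfs send -c` transmits blocks in their on-disk (compressed) form, so gzip-9 data is neither decompressed locally nor recompressed on the receiver. The dataset, snapshot, and host names below are made up:

```shell
# -c (--compressed) keeps blocks compressed for the whole trip; without
# it the stream carries uncompressed data that the receiving side has
# to recompress. Requires compressed-send support on the sending host.
zfs snapshot tank/archive@backup1
zfs send -c tank/archive@backup1 | ssh backuphost zfs receive pool/archive
```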
Ross Walker wrote:
[snip]
We turned up our X4540s, and this same tar unpack took over 17
minutes! We disabled the ZIL for testing, and we dropped this to
under 1 minute. With the X25-E as a slog, we were able to run this
test in 2-4 minutes, same as the old storage.
That's pretty impressive
On Fri, Aug 21, 2009 at 12:22 PM, Jason
Pfingstmann wrote:
> Any thoughts on this? I don't see why it shouldn't work, but I've only been
> tinkering with ZFS for 2 days now and this is all unexplored territory.
You shouldn't need to fake the size of your file-backed vdevs.
If you plan on having
Kris Larsen wrote:
>
> Thanks. It works for GNU-style chmod usage.
Erm... technically this isn't GNU "chmod", it's a different "chmod"
implementation which includes GNU+BSD+MacOSX options...
> But aren't ACL's supported?
No, not yet... but it's on my todo list (the tricky part is to find the
pe
On Aug 22, 2009, at 7:33 PM, Ross Walker wrote:
On Aug 22, 2009, at 5:21 PM, Neil Perrin wrote:
On 08/20/09 06:41, Greg Mason wrote:
Something our users do quite a bit of is untarring archives with a
lot of small files. Also, many small, quick writes are also one of
the many workloads
On Aug 22, 2009, at 5:21 PM, Neil Perrin wrote:
On 08/20/09 06:41, Greg Mason wrote:
Something our users do quite a bit of is untarring archives with a
lot of small files. Also, many small, quick writes are also one of
the many workloads our users have.
Real-world test: our old Linux-based
On Aug 22, 2009, at 3:47 PM, Kris Larsen wrote:
Thanks. It works for GNU-style chmod usage. But aren't ACL's
supported?
GNU chmod doesn't do ACLs.
Back when I was managing lots of users and needed to do such things,
find scripts seemed to be much more useful and flexible. -R options can
make
Thanks. It works for GNU-style chmod usage. But aren't ACL's supported?
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
On Aug 22, 2009, at 1:02 PM, Kees Nuyt wrote:
On Fri, 21 Aug 2009 18:04:49 -0700, Richard Elling
wrote:
You can get in the same ballpark with at least two top-level
raidz2 devs and copies=2. If you have three or more
top-level raidz2 vdevs, then you can even do better
with copies=3 ;-)
Please
Kris Larsen wrote:
>
> Hello!
>
> How can I prevent /usr/bin/chmod from following symbolic links? I can't find
> any -P option in the documentation (and it doesn't work either..). Maybe find
> can be used in some way?
[snip]
Try:
1. Start ksh93
$ ksh93
2. Load "chmod" builtin command
$ builtin chmod
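Since the poster asked whether find can be used: a minimal sketch, with made-up paths and modes for the demo. find's default behaviour is -P, i.e. it never follows symlinks, and `! -type l` also skips the link entries themselves, so chmod never sees a link or its target:

```shell
# Demo tree: a regular file inside "backup", plus a symlink pointing at
# a file outside it (standing in for the copied Linux root directory).
demo=$(mktemp -d)
mkdir "$demo/backup"
echo data > "$demo/outside";     chmod 644 "$demo/outside"
echo data > "$demo/backup/file"; chmod 644 "$demo/backup/file"
ln -s "$demo/outside" "$demo/backup/link"

# find does not follow symlinks by default (-P), and "! -type l"
# excludes the link entries themselves, so only real files and
# directories are passed to chmod.
find "$demo/backup" ! -type l -exec chmod go-rwx {} +
```

Afterwards `backup/file` is mode 600 while the link target `outside` is still 644.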
On 08/20/09 06:41, Greg Mason wrote:
Something our users do quite a bit of is untarring archives with a lot
of small files. Also, many small, quick writes are also one of the many
workloads our users have.
Real-world test: our old Linux-based NFS server allowed us to unpack a
particular tar
Hello!
How can I prevent /usr/bin/chmod from following symbolic links? I can't find
any -P option in the documentation (and it doesn't work either..). Maybe find
can be used in some way?
Background:
When I'm running chmod on my backup folder structure containing a copy of a
Linux root directory
On Fri, 21 Aug 2009 18:04:49 -0700, Richard Elling
wrote:
> You can get in the same ballpark with at least two top-level
> raidz2 devs and copies=2. If you have three or more
> top-level raidz2 vdevs, then you can even do better
> with copies=3 ;-)
Please note that copies=3 will be obsoleted so
I would like some input about the use of zfs snapshot.
The auto-snapshot service is nice on rpool, but for some of the other ZFS
filesystems I've created, that kind of frequency doesn't seem necessary.
However, generating my own cron setup for a dozen or so filesystems to
create snapshots, maybe only when data is transferred,
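One hand-rolled sketch of the cron approach (dataset name, schedule, and retention count are all assumptions, not from the thread): snapshot with a timestamp and prune the oldest ones in the same script:

```shell
#!/bin/sh
# Hypothetical snapshot rotation for one dataset; run from cron, e.g.
#   0 3 * * * /usr/local/bin/snap-projects.sh
FS=tank/projects   # dataset to snapshot (made-up name)
KEEP=8             # how many of these snapshots to retain
zfs snapshot "$FS@cron-$(date +%Y%m%d-%H%M)"
# List this script's snapshots oldest-first, then destroy all but $KEEP.
zfs list -H -t snapshot -o name -s creation | grep "^$FS@cron-" \
  | head -n -"$KEEP" | xargs -r -n 1 zfs destroy
```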
Scott Laird writes:
> Checksum all of the files using something like md5sum and see if
> they're actually identical. Then test each step of the copy and see
> which one is corrupting your files.
>
> On Fri, Aug 21, 2009 at 1:43 PM, Harry Putnam wrote:
[...]
I didn't do that since I've found th
On Thu, Aug 20, 2009 at 5:25 PM, Robert Milkowski wrote:
> Matthew Stevenson wrote:
>
> Ha ha, I know! Like I say, I do get COW principles!
>
> I guess what I'm after is for someone to look at my specific example (in txt
> file attached to first post) and tell me specifically how to find out wh
On Sat, 22 Aug 2009, dick hoogendijk wrote:
Probably a very easy question for ZFS experts. I have an external USB
drive running 24/7 but would like to turn it off once in a while.
It is on ZFS.
Is it enough to umount it and turn off the drive or do I have to
*export* the zfs filesystem first an
If you are talking about NFS, this is due to how ZFS filesystems work.
When you share a ZFS filesystem via NFS it will share everything in that
filesystem, but if you have two filesystems, it will only share the second
filesystem's mount point, not its contents.
what i mean is, if you have something like pool/filesystem
an
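A sketch of that behaviour with made-up pool and dataset names: each ZFS filesystem is its own NFS export, so a nested filesystem has to be shared, and mounted, separately:

```shell
# Two separate filesystems: sharing the parent exports /pool/fs, but
# clients only see an empty directory where pool/fs/child is mounted.
zfs set sharenfs=on pool/fs
zfs set sharenfs=on pool/fs/child   # the child needs its own share
# On the client, each export likewise gets its own mount:
#   mount server:/pool/fs       /mnt/fs
#   mount server:/pool/fs/child /mnt/fs/child
```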
I had something similar happen to me when I switched to ZFS, but it turned
out to be an error with cpio and the MKV format... I'm not sure exactly why,
but whenever I tried to back up MKV files with cpio onto ZFS it would give
me corrupted files.
On Fri, Aug 21, 2009 at 4:43 PM, Harry Putnam wrote:
Probably a very easy question for ZFS experts. I have an external USB
drive running 24/7 but would like to turn it off once in a while.
It is on ZFS.
Is it enough to umount it and turn off the drive or do I have to
*export* the zfs filesystem first and later import it again?
--
Dick Hoogendijk -
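The usual practice, as a hedged sketch (the pool name is made up, and this is not from the truncated replies above): export the pool before powering the drive off so everything is flushed and cleanly detached, then import it again after power-on:

```shell
zpool export usbpool   # unmounts datasets, flushes, marks pool exported
# ...power the drive down; later, power it back up...
zpool import usbpool   # pool comes back with its datasets mounted
```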
On Sat, Aug 22, 2009 at 12:00:42AM -0700, Jason Pfingstmann wrote:
> Thanks for the reply!
>
> The reason I'm not waiting until I have the disks is mostly because it will
> take me several months to get the funds together and in the meantime, I need
> the extra space 1 or 2 drives gets me. Since
On 21 Aug 2009, at 22:35, Scott Laird wrote:
Checksum all of the files using something like md5sum and see if
they're actually identical. Then test each step of the copy and see
which one is corrupting your files.
It might be worth checking if they've got funny Unicode chars in the
names.
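The checksum step suggested above can be sketched as follows (the two throwaway trees stand in for the real source and copy directories):

```shell
# Hash every file in each tree, sort by path so the manifests line up
# regardless of find order, then diff: no diff output means the trees
# are identical.
src=$(mktemp -d); dst=$(mktemp -d)
echo hello > "$src/a.mkv"
echo hello > "$dst/a.mkv"
(cd "$src" && find . -type f -exec md5sum {} + | sort -k 2) > /tmp/src.md5
(cd "$dst" && find . -type f -exec md5sum {} + | sort -k 2) > /tmp/dst.md5
diff /tmp/src.md5 /tmp/dst.md5 && echo "trees match"
```

If the copy step is corrupting files, the differing paths show up directly in the diff output.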
Thanks for the reply!
The reason I'm not waiting until I have the disks is mostly because it will
take me several months to get the funds together and in the meantime, I need
the extra space 1 or 2 drives gets me. Since the sparse files will only take
up the space in use, if I've migrated 2 of