On Tue, Mar 02, 2010 at 11:10:52AM -0800, Bill Sommerfeld wrote:
> On 03/02/10 08:13, Fredrich Maney wrote:
> >Why not do the same sort of thing and use that extra bit to flag a
> >file, or directory, as being an ACL only file and will negate the rest
> >of the mask? That accomplishes what Paul is
On Mon, Mar 01, 2010 at 09:04:58PM -0800, Paul B. Henson wrote:
> On Mon, 1 Mar 2010, Nicolas Williams wrote:
> > Yes, that sounds useful. (Group modebits could be applied to all ACEs
> > that are neither owner@ nor everyone@ ACEs.)
>
> That sounds an awful lot like the POSI
BTW, it should be relatively easy to implement aclmode=ignore and
aclmode=deny, if you like.
- $SRC/common/zfs/zfs_prop.c needs to be updated to know about the new
values of aclmode.
- $SRC/uts/common/fs/zfs/zfs_acl.c:zfs_acl_chmod()'s callers need to be
modified:
- in the create pat
On Thu, Mar 18, 2010 at 10:38:00PM -0700, Rob wrote:
> Can a ZFS send stream become corrupt when piped between two hosts
> across a WAN link using 'ssh'?
No. SSHv2 uses HMAC-MD5 and/or HMAC-SHA-1, depending on what gets
negotiated, for integrity protection. The chances of random on the wire
corr
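For reference, a minimal send-over-ssh pipeline looks like this (pool, dataset
and host names are hypothetical):

  zfs snapshot tank/data@tosend
  zfs send tank/data@tosend | ssh backuphost zfs recv -F backuppool/data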
On Thu, Mar 25, 2010 at 04:23:38PM +, Darren J Moffat wrote:
> If the data is in the L2ARC that is still better than going out to
> the main pool disks to get the compressed version.
Well, one could just compress it... If you'd otherwise put compression
in the ssh pipe (or elsewhere) then y
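For example (hypothetical names; either compress explicitly in the pipe, or
let ssh do it with -C):

  zfs send tank/data@tosend | gzip -c | ssh backuphost 'gunzip -c | zfs recv -F backuppool/data'
  zfs send tank/data@tosend | ssh -C backuphost zfs recv -F backuppool/data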
One really good use for zfs diff would be: as a way to index zfs send
backups by contents.
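A sketch of what that indexing could look like (hypothetical dataset and
snapshot names; -F adds the file type, -H makes the output tab-separated):

  zfs diff -FH tank/home@daily-1 tank/home@daily-2 > /backups/index/tank_home_daily-2.idx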
Nico
--
zfs diff is incredibly cool.
On Tue, Apr 06, 2010 at 11:53:23AM -0400, Tony MacDoodle wrote:
> Can I rollback a snapshot that I did a zfs send on?
>
> ie: zfs send testpool/w...@april6 > /backups/w...@april6_2010
That you did a zfs send does not prevent you from rolling back to a
previous snapshot. Similarly for zfs recv --
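In other words (hypothetical names), the send just writes a copy of the
stream; the snapshot itself is untouched and you can still roll back to it:

  zfs send tank/www@april6 > /backups/www_april6_2010
  zfs rollback tank/www@april6      # rolls the live filesystem back; the stream file is unaffected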
On Fri, Apr 16, 2010 at 01:54:45PM -0400, Edward Ned Harvey wrote:
> If you've got nested zfs filesystems, and you're in some subdirectory where
> there's a file or something you want to rollback, it's presently difficult
> to know how far back up the tree you need to go, to find the correct ".zfs"
On Fri, Apr 16, 2010 at 02:19:47PM -0700, Richard Elling wrote:
> On Apr 16, 2010, at 1:37 PM, Nicolas Williams wrote:
> > I've a ksh93 script that lists all the snapshotted versions of a file...
> > Works over NFS too.
> >
> > % zfshist /usr/bin/ls
> > H
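Not the script in question, but a minimal ksh93 sketch of the same idea (walk
up to the nearest .zfs directory, then glob the file's relative path in every
snapshot; works over NFS wherever .zfs is visible):

  #!/usr/bin/ksh93
  # usage: zfshist <path>
  f=$(cd "$(dirname "$1")" && pwd)/$(basename "$1")
  d=$(dirname "$f")
  while [[ $d != / && ! -d $d/.zfs/snapshot ]]; do
      d=$(dirname "$d")
  done
  [[ -d $d/.zfs/snapshot ]] || { print -u2 "no .zfs directory found"; exit 1; }
  ls -l "$d"/.zfs/snapshot/*/"${f#$d/}" 2>/dev/null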
On Fri, Apr 16, 2010 at 01:56:07PM -0400, Edward Ned Harvey wrote:
> The typical problem scenario is: Some user or users fill up the filesystem.
> They rm some files, but disk space is not freed. You need to destroy all
> the snapshots that contain the deleted files, before disk space is availabl
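For example (hypothetical names), the space only comes back once the snapshots
that still reference the deleted blocks are destroyed:

  zfs list -t snapshot -r tank/home
  zfs destroy tank/home@2010-04-01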
On Tue, Apr 20, 2010 at 04:28:02PM +, A Darren Dunham wrote:
> On Sat, Apr 17, 2010 at 09:03:33AM -0400, Edward Ned Harvey wrote:
> > > "zfs list -t snapshot" lists in time order.
> >
> > Good to know. I'll keep that in mind for my "zfs send" scripts but it's not
> > relevant for the case at
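For send scripts the ordering can also be made explicit rather than relied on
(hypothetical dataset):

  zfs list -t snapshot -o name,creation -s creation -r tank/home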
On Wed, Apr 21, 2010 at 10:45:24AM -0400, Edward Ned Harvey wrote:
> > From: Mark Shellenbaum [mailto:mark.shellenb...@oracle.com]
> > >
> > > You can create/destroy/rename snapshots via mkdir, rmdir, mv inside
> > the
> > > .zfs/snapshot directory, however, it will only work if you're running
> >
POSIX doesn't allow us to have special dot files/directories outside
filesystem root directories.
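For example (hypothetical dataset mounted at /tank/home, given sufficient
privileges and a server that supports it):

  mkdir /tank/home/.zfs/snapshot/before-upgrade        # = zfs snapshot
  mv /tank/home/.zfs/snapshot/before-upgrade \
     /tank/home/.zfs/snapshot/before-upgrade-old       # = zfs rename
  rmdir /tank/home/.zfs/snapshot/before-upgrade-old    # = zfs destroy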
Nico
--
On Wed, Apr 21, 2010 at 01:03:39PM -0500, Jason King wrote:
> ISTR POSIX also doesn't allow a number of features that can be turned
> on with zfs (even ignoring the current issues that prevent ZFS from
> being fully POSIX compliant today). I think an additional option for
> the snapdir property ('
On Thu, May 06, 2010 at 03:30:05PM -0500, Wes Felter wrote:
> On 5/6/10 5:28 AM, Robert Milkowski wrote:
>
> >sync=disabled
> >Synchronous requests are disabled. File system transactions
> >only commit to stable storage on the next DMU transaction group
> >commit which can be many seconds.
>
> Is
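For reference, it is a per-dataset property (hypothetical dataset; note that a
crash or power loss can then drop the last few seconds of acknowledged
writes):

  zfs set sync=disabled tank/scratch
  zfs get sync tank/scratch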
On Wed, May 19, 2010 at 05:33:05AM -0700, Chris Gerhard wrote:
> The reason for wanting to know is to try and find versions of a file.
No, there's no such guarantee. The same inode and generation number
pair is extremely unlikely to be re-used, but the inode number itself is
likely to be re-used.
On Wed, May 19, 2010 at 07:50:13AM -0700, John Hoogerdijk wrote:
> Think about the potential problems if I don't mirror the log devices
> across the WAN.
If you don't mirror the log devices then your disaster recovery
semantics will be that you'll miss any transactions that hadn't been
committed t
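A mirrored slog spanning both sites would be added like this (hypothetical
device names, the second one standing in for the remote iSCSI LUN):

  zpool add tank log mirror c1t5d0 c6t0d0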
On Wed, May 19, 2010 at 02:29:24PM -0700, Don wrote:
> "Since it ignores Cache Flush command and it doesn't have any
> persistant buffer storage, disabling the write cache is the best you
> can do."
>
> This actually brings up another question I had: What is the risk,
> beyond a few seconds of los
On Thu, May 20, 2010 at 04:23:49PM -0400, Thomas Burgess wrote:
> I know i'm probably doing something REALLY stupid... but for some reason i
> can't get send/recv to work over ssh. I just built a new media server and
> i'd like to move a few filesystem from my old server to my new server but
> fo
On Mon, May 24, 2010 at 05:48:56PM -0400, Thomas Burgess wrote:
> I recently got a new SSD (ocz vertex LE 50gb)
>
> It seems to work really well as a ZIL performance wise. My question is, how
> safe is it? I know it doesn't have a supercap so lets' say dataloss
> occursis it just dataloss or
On Fri, Jun 04, 2010 at 12:37:01PM -0700, Ray Van Dolson wrote:
> On Fri, Jun 04, 2010 at 11:16:40AM -0700, Brandon High wrote:
> > On Fri, Jun 4, 2010 at 9:30 AM, Ray Van Dolson wrote:
> > > The ISO's I'm testing with are the 32-bit and 64-bit versions of the
> > > RHEL5 DVD ISO's. While both ha
On Wed, Jun 16, 2010 at 04:44:07PM +0200, Arne Jansen wrote:
> Please keep in mind I'm talking about a usage as ZIL, not as L2ARC or main
> pool. Because ZIL issues nearly sequential writes, due to the NVRAM-protection
> of the RAID-controller the disk can leave the write cache enabled. This means
On Wed, Jun 30, 2010 at 01:35:31PM -0700, valrh...@gmail.com wrote:
> Finally, for my purposes, it doesn't seem like a ZIL is necessary? I'm
> the only user of the fileserver, so there probably won't be more than
> two or three computers, maximum, accessing stuff (and writing stuff)
> remotely.
It
On Thu, Jul 08, 2010 at 08:42:33PM -0700, Garrett D'Amore wrote:
> On Fri, 2010-07-09 at 10:23 +1000, Peter Jeremy wrote:
> > In theory, collisions happen. In practice, given a cryptographic hash,
> > if you can find two different blocks or files that produce the same
> > output, please publicise
On Wed, Jul 14, 2010 at 03:07:59PM -0600, Beau J. Bechdol wrote:
> So not sure if this is the correct list to email to or not. I am curious to
> know on my machine I have two hard drives (c8t0d0 and c8t1d0). Can someone
> explain to me what this exactly means? What does "c8" "t0" and "d0" actually
>
On Thu, Aug 12, 2010 at 07:48:10PM -0500, Norm Jacobs wrote:
> For single file updates, this is commonly solved by writing data to
> a temp file and using rename(2) to move it in place when it's ready.
For anything more complicated you need... a more complicated approach.
Note that "transactional
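A minimal sketch of the single-file case (hypothetical path and generator
command; the temp file must live in the same filesystem so that the final mv
is an atomic rename(2)):

  tmp=$(mktemp /export/data/app.conf.XXXXXX) || exit 1
  generate_new_config > "$tmp" &&
      mv "$tmp" /export/data/app.conf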
On Thu, Oct 01, 2009 at 11:03:06AM -0700, Rudolf Potucek wrote:
> Hmm ... I understand this is a bug, but only in the sense that the
> message is not sufficiently descriptive. Removing the file from the
> source filesystem will not necessarily free any space because the
> blocks have to be retained
On Mon, Oct 26, 2009 at 08:53:50PM -0700, Anil wrote:
> I haven't tried this, but this must be very easy with dtrace. How come
> no one mentioned it yet? :) You would have to monitor some specific
> syscalls...
DTrace is not reliable in this sense: it will drop events rather than
overburden the sy
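For the record, the sort of thing being suggested is a one-liner like this
(traces open(2) calls), but as noted DTrace will drop events under load rather
than slow the system down:

  dtrace -n 'syscall::open*:entry { printf("%s %s", execname, copyinstr(arg0)); }'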
On Mon, Nov 02, 2009 at 12:58:32PM -0500, Dennis Clarke wrote:
> Looking at FIPS-180-3 in sections 4.1.2 and 4.1.3 I was thinking that the
> major leap from SHA256 to SHA512 was a 32-bit to 64-bit step.
ZFS doesn't have enough room in blkptr_t for 512-bit hashes.
Nico
--
On Mon, Nov 02, 2009 at 11:01:34AM -0800, Jeremy Kitchen wrote:
> forgive my ignorance, but what's the advantage of this new dedup over
> the existing compression option? Wouldn't full-filesystem compression
> naturally de-dupe?
If you snapshot/clone as you go, then yes, dedup will do little
On Tue, Nov 10, 2009 at 03:33:22PM -0600, Tim Cook wrote:
> You're telling me a scrub won't actively clean up corruption in snapshots?
> That sounds absolutely absurd to me.
Depends on how much redundancy you have in your pool. If you have no
mirrors, no RAID-Z, and no ditto blocks for data, well
On Mon, Sep 07, 2009 at 09:58:19AM -0700, Richard Elling wrote:
> I only know of "hole punching" in the context of networking. ZFS doesn't
> do networking, so the pedantic answer is no.
But a VDEV may be an iSCSI device, thus there can be networking below
ZFS.
For some iSCSI targets (including ZV
On Thu, Dec 03, 2009 at 03:57:28AM -0800, Per Baatrup wrote:
> I would like to to concatenate N files into one big file taking
> advantage of ZFS copy-on-write semantics so that the file
> concatenation is done without actually copying any (large amount of)
> file content.
> cat f1 f2 f3 f4 f5 >
On Thu, Dec 03, 2009 at 12:44:16PM -0800, Per Baatrup wrote:
> >if any of f2..f5 have different block sizes from f1
>
> This restriction does not sound so bad to me if this only refers to
> changes to the blocksize of a particular ZFS filesystem or copying
> between different ZFSes in the same poo
On Thu, Dec 17, 2009 at 03:32:21PM +0100, Kjetil Torgrim Homme wrote:
> if the hash used for dedup is completely separate from the hash used for
> data protection, I don't see any downsides to computing the dedup hash
> from uncompressed data. why isn't it?
Hash and checksum functions are slow (h
On Thu, Jan 21, 2010 at 02:11:31PM -0800, Moshe Vainer wrote:
> >PS: For data that you want to mostly archive, consider using Amazon
> >Web Services (AWS) S3 service. Right now there is no charge to push
> >data into the cloud and its $0.15/gigabyte to keep it there. Do a
> >quick (back of the napk
On Thu, Feb 04, 2010 at 03:19:15PM -0500, Frank Cusack wrote:
> BTW, I could just install everything in the global zone and use the
> default "inheritance" of /usr into each local zone to see the data.
> But then my zones are not independent portable entities; they would
> depend on some non-defaul
On Thu, Feb 04, 2010 at 04:03:19PM -0500, Frank Cusack wrote:
> On 2/4/10 2:46 PM -0600 Nicolas Williams wrote:
> >In Frank's case, IIUC, the better solution is to avoid the need for
> >unionfs in the first place by not placing pkg content in directories
> >that one migh
On Fri, Feb 05, 2010 at 03:49:15PM -0500, c.hanover wrote:
> Two things, mostly related, that I'm trying to find answers to for our
> security team.
>
> Does this scenario make sense:
> * Create a filesystem at /users/nfsshare1, user uses it for a while,
> asks for the filesystem to be deleted
> *
On Fri, Feb 05, 2010 at 04:41:08PM -0500, Miles Nordin wrote:
> > "ch" == c hanover writes:
>
> ch> is there a way to a) securely destroy a filesystem,
>
> AIUI zfs crypto will include this, some day, by forgetting the key.
Right.
> but for SSD, zfs above a zvol, or zfs above a SAN tha
On Fri, Feb 05, 2010 at 05:08:02PM -0500, c.hanover wrote:
> In our particular case, there won't be snapshots of destroyed
> filesystems (I create the snapshots, and destroy them with the
> filesystem).
OK.
> I'm not too sure on the particulars of NFS/ZFS, but would it be
> possible to create a 1
On Mon, Feb 08, 2010 at 03:41:16PM -0500, Miles Nordin wrote:
> ch> In our particular case, there won't be
> ch> snapshots of destroyed filesystems (I create the snapshots,
> ch> and destroy them with the filesystem).
>
> Right, but if your zpool is above a zvol vdev (ex COMSTAR on ano
On Wed, Feb 24, 2010 at 02:09:42PM -0600, Bob Friesenhahn wrote:
> I have a directory here containing a million files and it has not
> caused any strain for zfs at all although it can cause considerable
> stress on applications.
The biggest problem is always the apps. For example, ls by default
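ls sorts its output by default, so it must read the entire directory before
printing anything; with a million entries that alone is painful. One way
around it (hypothetical path):

  ls -f /tank/bigdir | head        # -f: directory order, no sorting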
On Wed, Feb 24, 2010 at 03:31:51PM -0600, Bob Friesenhahn wrote:
> With millions of such tiny files, it makes sense to put the small
> files in a separate zfs filesystem which has its recordsize property
> set to a size not much larger than the size of the files. This should
> reduce waste, res
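For example (hypothetical dataset; recordsize takes powers of two from 512
bytes up to 128K):

  zfs create -o recordsize=8k tank/smallfiles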
On Fri, Feb 26, 2010 at 08:23:40AM -0800, Paul B. Henson wrote:
> So far it's been quite a struggle to deploy ACL's on an enterprise central
> file services platform with access via multiple protocols and have them
> actually be functional and reliable. I can see why the average consumer
> might gi
On Fri, Feb 26, 2010 at 02:50:05PM -0800, Paul B. Henson wrote:
> On Fri, 26 Feb 2010, Bill Sommerfeld wrote:
>
> > I believe this proposal is sound.
>
> Mere words can not express the sheer joy with which I receive this opinion
> from an @sun.com address ;).
I believe we can do a bit better.
A
On Fri, Feb 26, 2010 at 05:02:34PM -0600, David Dyer-Bennet wrote:
>
> On Fri, February 26, 2010 12:45, Paul B. Henson wrote:
>
> > I've already posited as to an approach that I think would make a pure-ACL
> > deployment possible:
> >
> >
> > http://mail.opensolaris.org/pipermail/zfs-discuss
On Fri, Feb 26, 2010 at 04:26:43PM -0800, Paul B. Henson wrote:
> On Fri, 26 Feb 2010, Nicolas Williams wrote:
> > I believe we can do a bit better.
> >
> > A chmod that adds (see below) or removes one of r, w or x for owner is a
> > simple ACL edit (the bit may turn
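For comparison, the existing chmod(1) ACL syntax can already express that kind
of single-ACE edit (hypothetical file name):

  ls -v file.txt                                           # show ACEs with their indices
  chmod A0=owner@:read_data/write_data/append_data:allow file.txt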
On Fri, Feb 26, 2010 at 03:00:29PM -0500, Miles Nordin wrote:
> >>>>> "nw" == Nicolas Williams writes:
>
> nw> What could we do to make it easier to use ACLs?
>
> 1. how about AFS-style ones where the effective permission is the AND
>of th
On Fri, Aug 20, 2010 at 09:23:56AM +1200, Ian Collins wrote:
> On 08/20/10 08:30 AM, Garrett D'Amore wrote:
> >There is no common C++ ABI. So you get into compatibility concerns
> >between code built with different compilers (like Studio vs. g++).
> >Fail.
>
> Which is why we have extern "C". Ju
On Fri, Aug 20, 2010 at 09:38:51AM +1200, Ian Collins wrote:
> On 08/20/10 09:33 AM, Nicolas Williams wrote:
> >Any driver C++ code would still need a C++ run-time. Either you must
> >statically link it in, or you'll have a problem with multiple drivers
> >using differ
On Fri, Aug 20, 2010 at 10:17:38AM +1200, Ian Collins wrote:
> On 08/20/10 09:48 AM, Nicolas Williams wrote:
> >And anyways, the temptation to build classes that can be used
> >elsewhere becomes rather strong. IMO C++ in the kernel is asking for
> >trouble. And C++ in u
On Sat, Aug 28, 2010 at 12:05:53PM +1200, Ian Collins wrote:
> Think of this from the perspective of an application. How would
> write failure be reported? open(2) returns EACCES if the file can
> not be written but there isn't a corresponding return from write(2).
> Any open file descriptors woul
On Tue, Sep 14, 2010 at 04:13:31PM -0400, Linder, Doug wrote:
> I recently created a test zpool (RAIDZ) on some iSCSI shares. I made
> a few test directories and files. When I do a listing, I see
> something I've never seen before:
>
> [r...@hostname anewdir] # ls -la
> total 6160
> drwxr-xr-x
On Wed, Sep 15, 2010 at 05:18:08PM -0400, Edward Ned Harvey wrote:
> It is absolutely not difficult to avoid fragmentation on a spindle drive, at
> the level I described. Just keep plenty of empty space in your drive, and
> you won't have a fragmentation problem. (Except as required by COW.) How
On Wed, Sep 22, 2010 at 07:14:43AM -0700, Orvar Korvar wrote:
> There was a guy doing that: Windows as host and OpenSolaris as guest
> with raw access to his disks. He lost his 12 TB data. It turned out
> that VirtualBox dont honor the write flush flag (or something
> similar).
VirtualBox has an o
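The option in question is set per virtual disk with VBoxManage (VM name and
LUN number here are hypothetical; 0 means honor the guest's flush requests
instead of ignoring them):

  VBoxManage setextradata "myvm" \
      "VBoxInternal/Devices/piix3ide/0/LUN#0/Config/IgnoreFlush" 0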
On Wed, Sep 22, 2010 at 12:30:58PM -0600, Neil Perrin wrote:
> On 09/22/10 11:22, Moazam Raja wrote:
> >Hi all, I have a ZFS question related to COW and scope.
> >
> >If user A is reading a file while user B is writing to the same file,
> >when do the changes introduced by user B become visible to
On Thu, Sep 23, 2010 at 06:58:29AM +, Markus Kovero wrote:
> > What is an example of where a checksummed outside pool would not be able
> > to protect a non-checksummed inside pool? Would an intermittent
> > RAM/motherboard/CPU failure that only corrupted the inner pool's block
> > before i
On Tue, Sep 28, 2010 at 12:18:49PM -0700, Paul B. Henson wrote:
> On Sat, 25 Sep 2010, Ralph Böhme wrote:
>
> > Darwin ACL model is nice and slick, the new NFSv4 one in 147 is just
> > braindead. chmod resulting in ACLs being discarded is a bizarre design
> > decision.
>
> Agreed. Wh
On Tue, Sep 28, 2010 at 02:03:30PM -0700, Paul B. Henson wrote:
> On Tue, 28 Sep 2010, Nicolas Williams wrote:
>
> > I've researched this enough (mainly by reading most of the ~240 or so
> > relevant zfs-discuss posts and several bug reports)
>
> And I think some
On Wed, Sep 29, 2010 at 10:15:32AM +1300, Ian Collins wrote:
> Based on my own research, experimentation and client requests, I
> agree with all of the above.
Good to know.
> I have be re-ordering and cleaning (deny) ACEs for one client for a
> couple of years now and we haven't seen any user com
On Wed, Sep 29, 2010 at 03:44:57AM -0700, Ralph Böhme wrote:
> > On 9/28/2010 2:13 PM, Nicolas Williams wrote:
> > The version of samba bundled with Solaris 10 seems to
> > insist on
> > chmod'ing stuff. I've tried all of the various
Just in case it's not
Keep in mind that Windows lacks a mode_t. We need to interop with
Windows. If a Windows user cannot completely change file perms because
there's a mode_t completely out of their reach... they'll be frustrated.
Thus an ACL-and-mode model where both are applied doesn't work. It'd be
nice, but it
On Wed, Sep 29, 2010 at 03:09:22PM -0700, Ralph Böhme wrote:
> > Keep in mind that Windows lacks a mode_t. We need to
> > interop with Windows.
>
> Oh my, I see. Another itch to scratch. Now at least Windows users are
> happy while me and mabye others are not.
Yes. Pardon me for forgetting to m
On Wed, Sep 29, 2010 at 05:21:51PM -0500, Nicolas Williams wrote:
> On Wed, Sep 29, 2010 at 03:09:22PM -0700, Ralph Böhme wrote:
> > > Keep in mind that Windows lacks a mode_t. We need to
> > > interop with Windows.
> >
> > Oh my, I see. Another itch to scratc
On Thu, Sep 30, 2010 at 02:55:26PM -0400, Miles Nordin wrote:
> >>>>> "nw" == Nicolas Williams writes:
> nw> Keep in mind that Windows lacks a mode_t. We need to interop
> nw> with Windows. If a Windows user cannot completely change file
On Thu, Sep 30, 2010 at 03:28:14PM -0500, Nicolas Williams wrote:
> Consider this chronologically-ordered sequence of events:
>
> 1) File is created via Windows, gets SMB/ZFS/NFSv4-style ACL, including
>inherittable ACEs. A mode computed from this ACL might be 664, say.
>
&
On Thu, Sep 30, 2010 at 08:14:24PM -0400, Miles Nordin wrote:
> >> Can the user in (3) fix the permissions from Windows?
>
> no, not under my proposal.
Then your proposal is a non-starter. Support for multiple remote
filesystem access protocols is key for ZFS and Solaris.
The impedance mism
On Thu, Sep 30, 2010 at 08:14:24PM -0400, Miles Nordin wrote:
> >> Can the user in (3) fix the permissions from Windows?
>
> no, not under my proposal.
Let's give it a whirl anyways:
> but it sounds like currently people cannot ``fix'' permissions through
> the quirky autotranslation anyway
On Mon, Oct 04, 2010 at 04:30:05PM -0600, Cindy Swearingen wrote:
> Hi Simon,
>
> I don't think you will see much difference for these reasons:
>
> 1. The CIFS server ignores the aclinherit/aclmode properties.
Because CIFS/SMB has no chmod operation :)
> 2. Your aclinherit=passthrough setting o
On Mon, Oct 04, 2010 at 02:28:18PM -0400, Miles Nordin wrote:
> >>>>> "nw" == Nicolas Williams writes:
>
> nw> I would think that 777 would invite chmods. I think you are
> nw> handwaving.
>
> it is how AFS worked. Since no file on
On Wed, Oct 06, 2010 at 04:38:02PM -0400, Miles Nordin wrote:
> >>>>> "nw" == Nicolas Williams writes:
>
> nw> The current system fails closed
>
> wrong.
>
> $ touch t0
> $ chmod 444 t0
> $ chmod A0+user:$(id -nu):write_data:allow t0
On Wed, Oct 06, 2010 at 05:19:25PM -0400, Miles Nordin wrote:
> >>>>> "nw" == Nicolas Williams writes:
>
> nw> *You* stated that your proposal wouldn't allow Windows users
> nw> full control over file permissions.
>
> me: I have a pr
On Sat, Oct 09, 2010 at 09:52:51PM -0700, Richard Elling wrote:
> Are we living in the past?
>
> In the bad old days, UNIX systems spoke NFS and Windows systems spoke
> CIFS. The cost of creating a file system was expensive -- slices,
> partitions, etc.
>
> With ZFS, file systems (datasets) are r
On Wed, Nov 17, 2010 at 01:58:06PM -0800, Bill Sommerfeld wrote:
> On 11/17/10 12:04, Miles Nordin wrote:
> >black-box crypto is snake oil at any level, IMNSHO.
>
> Absolutely.
As Darren said, much of the design has been discussed in public, and
reviewed by cryptographers. It'd be nicer if we ha
Also, when the IV is stored you can more easily look for accidental IV
re-use, and if you can find hash collisions, then you can even cause IV
re-use (if you can write to the filesystem in question). For GCM IV
re-use is rather fatal (for CCM it's bad, but IIRC not fatal), so I'd
not use GCM with
On Thu, Dec 23, 2010 at 09:32:13AM +, Darren J Moffat wrote:
> On 22/12/2010 20:27, Garrett D'Amore wrote:
> >That said, some operations -- and cryptographic ones in particular --
> >may use floating point registers and operations because for some
> >architectures (sun4u rings a bell) this can
On Thu, Dec 23, 2010 at 11:25:43AM +0100, Stephan Budach wrote:
> as I have learned from the discussion about which SSD to use as ZIL
> drives, I stumbled across this article, that discusses short
> stroking for increasing IOPs on SAS and SATA drives:
There was a thread on this a while back. I fo
On Sat, Dec 25, 2010 at 08:37:42PM -0500, Ross Walker wrote:
> On Dec 24, 2010, at 1:21 PM, Richard Elling wrote:
>
> > Latency is what matters most. While there is a loose relationship between
> > IOPS
> > and latency, you really want low latency. For 15krpm drives, the average
> > latency
>
On Mon, Dec 27, 2010 at 09:06:45PM -0500, Edward Ned Harvey wrote:
> > From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> > boun...@opensolaris.org] On Behalf Of Nicolas Williams
> >
> > > Actually I'd say that latency has a direct relationship to IOP
On Thu, Jan 06, 2011 at 11:44:31AM -0800, Peter Taps wrote:
> I have been told that the checksum value returned by Sha256 is almost
> guaranteed to be unique.
All hash functions are guaranteed to have collisions [for inputs larger
than their output anyways].
> In fact, if
On Thu, Jan 06, 2011 at 06:07:47PM -0500, David Magda wrote:
> On Jan 6, 2011, at 15:57, Nicolas Williams wrote:
>
> > Fletcher is faster than SHA-256, so I think that must be what you're
> > asking about: "can Fletcher+Verification be faster than
> > Sha256+NoV
On Fri, Jan 07, 2011 at 06:39:51AM -0800, Michael DeMan wrote:
> On Jan 7, 2011, at 6:13 AM, David Magda wrote:
> > The other thing to note is that by default (with de-dupe disabled), ZFS
> > uses Fletcher checksums to prevent data corruption. Add also the fact all
> > other file systems don't have
On Sat, Jan 15, 2011 at 10:19:23AM -0600, Bob Friesenhahn wrote:
> On Fri, 14 Jan 2011, Peter Taps wrote:
>
> >Thank you for sharing the calculations. In lay terms, for Sha256,
> >how many blocks of data would be needed to have one collision?
>
> Two.
Pretty funny.
In this thread some of you ar
On Tue, Jan 18, 2011 at 07:16:04AM -0800, Orvar Korvar wrote:
> BTW, I thought about this. What do you say?
>
> Assume I want to compress data and I succeed in doing so. And then I
> transfer the compressed data. So all the information I transferred is
> the compressed data. But, then you don't co
On Fri, Jan 28, 2011 at 01:38:11PM -0800, Igor P wrote:
> I created a zfs pool with dedup with the following settings:
> zpool create data c8t1d0
> zfs create data/shared
> zfs set dedup=on data/shared
>
> The thing I was wondering about was it seems like ZFS only dedup at
> the file level and not
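For what it's worth, ZFS dedup operates on blocks, not whole files, and its
pool-wide effect can be checked directly (pool name from the example above):

  zpool list -o name,size,allocated,dedupratio data
  zdb -DD data            # dedup table histogram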
[OT, I know.]
On Fri, Jul 25, 2008 at 07:14:09PM +0200, Justin Vassallo wrote:
> Meanwhile, I had to permit root login (obviously disabled passwd auth;
> PasswordAuthentication no; PAMAuthenticationViaKBDInt no).
Why obviously?
I think instead you may just want to:
PermitRootLogin without-password
On Thu, Jul 31, 2008 at 01:07:20PM -0500, Paul Fisher wrote:
> Stephen Stogner wrote:
> > True we could have all the syslog data be directed towards the host but the
> > underlying issue remains the same with the performance hit. We have used
> > nfs shares for log hosts and mail hosts and we ar
On Wed, Aug 06, 2008 at 02:23:44PM -0400, Will Murnane wrote:
> On Wed, Aug 6, 2008 at 13:57, Miles Nordin <[EMAIL PROTECTED]> wrote:
> > If that's really the excuse for this situation, then ZFS is not
> > ``always consistent on the disk'' for single-VDEV pools.
> Well, yes. If data is sent, but c
On Wed, Aug 06, 2008 at 03:44:08PM -0400, Miles Nordin wrote:
> > "re" == Richard Elling <[EMAIL PROTECTED]> writes:
>
> c> If that's really the excuse for this situation, then ZFS is
> c> not ``always consistent on the disk'' for single-VDEV pools.
>
> re> I disagree with your
On Fri, Aug 15, 2008 at 08:15:56PM -0600, Mark Shellenbaum wrote:
> We are currently investigating adding more functionality to libsec to
> provide many of the things you desire. We will have iterators, editing
> capabilities and so on.
I'm still ironing a design/architecture document out. I'l
On Thu, Aug 28, 2008 at 11:29:21AM -0500, Bob Friesenhahn wrote:
> Which of these do you prefer?
>
>o System waits substantial time for devices to (possibly) recover in
> order to ensure that subsequently written data has the least
> chance of being lost.
>
>o System immediately
On Thu, Aug 28, 2008 at 01:05:54PM -0700, Eric Schrock wrote:
> As others have mentioned, things get more difficult with writes. If I
> issue a write to both halves of a mirror, should I return when the first
> one completes, or when both complete? One possibility is to expose this
> as a tunable
On Wed, Sep 10, 2008 at 06:35:49PM -0700, Paul B. Henson wrote:
> I'd appreciate any feedback, particularly about things that don't work
> right :).
I bet you think it'd be nice if we had a public equivalent of
_getgroupsbymember()...
Even better if we just had utility functions to do ACL evaluat
On Thu, Sep 11, 2008 at 10:36:38AM -0700, Paul B. Henson wrote:
> On Thu, 11 Sep 2008, Nicolas Williams wrote:
>
> > I bet you think it'd be nice if we had a public equivalent of
> > _getgroupsbymember()...
>
> Indeed, that would be useful in numerous contexts. It
On Tue, Sep 30, 2008 at 06:09:30PM -0500, Tim wrote:
> On Tue, Sep 30, 2008 at 6:03 PM, Ahmed Kamal <
> [EMAIL PROTECTED]> wrote:
> > BTW, for everyone saying zfs is more reliable because it's closer to the
> > application than a netapp, well at least in my case it isn't. The solaris
> > box will b
On Tue, Sep 30, 2008 at 08:54:50PM -0500, Tim wrote:
> As it does in ANY fileserver scenario, INCLUDING zfs. He is building a
> FILESERVER. This is not an APPLICATION server. You seem to be stuck on
> this idea that everyone is using ZFS on the server they're running the
> application. That doe
On Tue, Sep 30, 2008 at 09:54:04PM -0400, Miles Nordin wrote:
> ok, I get that S3 went down due to corruption, and that the network
> checksums I mentioned failed to prevent the corruption. The missing
> piece is: belief that the corruption occurred on the network rather
> than somewhere else.
>