On Thu, 06 Jan 2011 22:42:15 PST Michael DeMan wrote:
> To be quite honest, I too am skeptical about using de-dupe just based
> on SHA-256. In prior posts it was asked that the potential adopter of
> the technology provide the mathematical reason to NOT use SHA-256 only.
> However, if [...]
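For a sense of the numbers behind that skepticism, a back-of-envelope
birthday bound helps. The sketch below is purely illustrative: it assumes
an ideal 256-bit hash over n unique blocks, nothing ZFS-specific:

    import math

    def collision_probability(n_blocks, hash_bits=256):
        # Birthday bound: P(collision) ~= 1 - exp(-n(n-1) / 2^(bits+1)).
        # Assumes an ideal hash; real-world caveats not modeled here.
        exponent = -n_blocks * (n_blocks - 1) / 2.0 ** (hash_bits + 1)
        return -math.expm1(exponent)   # numerically stable for tiny probabilities

    # A petabyte of unique 128 KB blocks is roughly 8e9 blocks.
    print(collision_probability(8e9))  # ~2.8e-58, dwarfed by hardware error rates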
On Mon, 20 Dec 2010 11:27:41 PST Erik Trimble wrote:
>
> The problem boils down to this:
>
> When ZFS does a resilver, it walks the METADATA tree to determine what
> order to rebuild things from. That means it resilvers the very first
> slab ever written, then the next oldest, etc. The problem [...]
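To make that ordering concrete, here is a toy model of the traversal.
The names (BlockPtr, birth_txg) are illustrative stand-ins, not the real
ZFS structures:

    from dataclasses import dataclass, field

    @dataclass
    class BlockPtr:
        birth_txg: int                      # transaction group the block was born in
        children: list = field(default_factory=list)

    def walk(bp):
        yield bp
        for child in bp.children:
            yield from walk(child)

    def resilver_order(root):
        # Rebuild oldest txg first: creation order, not disk-offset order,
        # which on an aged, fragmented pool degenerates into random I/O.
        return [bp.birth_txg for bp in sorted(walk(root), key=lambda b: b.birth_txg)]

    root = BlockPtr(9, [BlockPtr(2), BlockPtr(7, [BlockPtr(4)])])
    print(resilver_order(root))             # [2, 4, 7, 9]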
> The 45-byte score is the checksum of the top of the tree, isn't that
> right?
Yes. Plus an optional label.
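As a toy illustration of that idea (this is not Venti's actual on-disk
format), the score of an interior node can be defined as the hash of its
children's scores, so one top-level score pins down the entire tree:

    import hashlib

    def score(node) -> bytes:
        if isinstance(node, bytes):                  # leaf: a data block
            return hashlib.sha256(node).digest()
        child_scores = b"".join(score(c) for c in node)
        return hashlib.sha256(child_scores).digest() # interior: hash of child scores

    tree = [[b"block 0", b"block 1"], [b"block 2"]]
    print(score(tree).hex())   # changing any leaf changes this single root score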
> ZFS snapshots and clones save a lot of space, but the
> 'content-hash == address' trick means you could potentially save
> much more.
Especially if you carry around large files (disk images).
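A minimal sketch of the trick itself, assuming SHA-256 digests as block
addresses (the BlockStore class is hypothetical, nothing ZFS- or
Venti-specific):

    import hashlib

    class BlockStore:
        def __init__(self):
            self.blocks = {}                         # digest -> block contents

        def put(self, data: bytes) -> bytes:
            addr = hashlib.sha256(data).digest()     # the content hash IS the address
            self.blocks[addr] = data                 # identical blocks collapse to one
            return addr

        def get(self, addr: bytes) -> bytes:
            return self.blocks[addr]

    store = BlockStore()
    a = store.put(b"same contents")
    b = store.put(b"same contents")
    assert a == b and len(store.blocks) == 1         # two writes, one stored copy

Every identical block is stored once, however many files or disk images
reference it.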
> I have budget constraints, so I can only use user-level storage.
>
> Until I discovered ZFS I used Subversion and git, but neither is designed
> to manage gigabytes of data, some of it to be versioned, some to be
> unversioned.
>
> I can't afford silent data corruption and, if the final respons[...]
> Why are you providing disk space to students?
>
> When you solve this problem, the quota problem is moot.
>
> NB. I managed a large University network for several years, and
> am fully aware of the costs involved. I do not believe that the
> 1960s timeshare model will survive in such environments.
> > It seems to me that once you copy metadata, you can indeed
> > copy all live data sequentially.
>
> I don't see this, given the top-down strategy. For instance, if I
> understand the transactional update process, you can't commit the
> metadata until the data is in place.
>
> Can you explain [...]
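That commit ordering is the crux: in a copy-on-write tree, children must
be durable before any parent references them, so a crash at any point
leaves the old tree intact. A minimal sketch of the idea, using
hypothetical helpers rather than the real ZFS I/O pipeline:

    import hashlib

    log = []                                         # order of stable writes

    def write(payload: bytes) -> str:
        # Stand-in for a durable write; returns the block's "address".
        addr = hashlib.sha256(payload).hexdigest()[:8]
        log.append(addr)
        return addr

    def commit_txg(data_blocks):
        data_addrs = [write(b) for b in data_blocks]         # 1. leaf data first
        meta_addr = write(",".join(data_addrs).encode())     # 2. metadata pointing at it
        write(b"uberblock->" + meta_addr.encode())           # 3. new root, written last
        return meta_addr

    commit_txg([b"block A", b"block B"])
    print(log)   # data addresses precede the metadata and uberblock writes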
> Robert Milkowski wrote:
> > Hello Mario,
> >
> > Wednesday, May 9, 2007, 5:56:18 PM, you wrote:
> >
> > MG> I've read that it's supposed to go at full speed, i.e. as fast as
> > MG> possible. I'm doing a disk replace and what zpool reports kind of
> > MG> surprises me. The resilver goes on at 1[...]
> Pawel Jakub Dawidek wrote:
> > This is what I see on Solaris (hole is 4GB):
> >
> > # /usr/bin/time dd if=/ufs/hole of=/dev/null bs=128k
> > real 23.7
> > # /usr/bin/time dd if=/zfs/hole of=/dev/null bs=128k
> > real 21.2
> >
> > # /usr/bin/time dd if=/ufs/hole o[...]
[originally reported for ZFS on FreeBSD but Pawel Jakub Dawidek
says this problem also exists on Solaris, hence this email.]
Summary: on ZFS, the overhead of reading a hole seems far worse
than actually reading data from disk. Small buffers are used to
make this overhead more visible.
I ran the following [...]
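The script itself is cut off above. Purely as an illustration of this
kind of test (the path and sizes below are assumptions, not the original
script), a sparse file can be created and read back with a small buffer
like so:

    import os, time

    PATH = "/tmp/hole"            # hypothetical test path
    SIZE = 1 << 30                # 1 GB hole (the original test used 4 GB)

    with open(PATH, "wb") as f:
        f.truncate(SIZE)          # allocates no data blocks: the file is one big hole

    start = time.time()
    with open(PATH, "rb", buffering=0) as f:
        while f.read(512):        # tiny buffer makes per-call overhead dominate
            pass
    print("read %d MB of hole in %.1fs" % (SIZE >> 20, time.time() - start))
    os.remove(PATH)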