On Sat, Oct 3, 2009 at 6:50 PM, Jeff Haferman wrote:
>
> A user has 5 directories, each has tens of thousands of files; the
> largest directory has over a million files. The files themselves are
> not very large. Here is an "ls -lh" on the directories
> [these are all ZFS-based]:
>
> [r...@cluster]# ls -lh
> total 341M
> drwxr-xr-x+ 2 someone cluster 13K Sep 14 19
+--
| On 2009-10-03 18:50:58, Jeff Haferman wrote:
|
| I did an rsync of this directory structure to another filesystem
| [lustre-based, FWIW] and it took about 24 hours to complete. We have
| done rsyncs on other directo
With respect to relling's Oct 3, 2009 7:46 AM post:
> I think you are missing the concept of pools. Pools contain datasets.
> One form of dataset is a file system. Pools do not contain data per se;
> datasets contain data. Reviewing the checksums used with this
> hierarchy in mind:
> Pool
> Label
Responding to p...@paularcher.org's Sep 30, 2009 9:21 post:
For the entire file system, I have chosen zfs send/receive, per the thread "Best
way to convert checksums". I had concerns; they have been answered.
So my immediate need is answered. The question remains as to how to copy
portions of tree
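As a sketch of the send/receive approach referenced above: the `checksum` property only applies to newly written blocks, so converting existing data means rewriting it, e.g. by replicating into a dataset where sha256 is already in effect. The pool and dataset names below are hypothetical.

```shell
# Hypothetical pool/dataset names; sha256 must be in effect where the
# new blocks land, since the checksum property only governs new writes.
zfs set checksum=sha256 tank                     # children inherit sha256
zfs snapshot tank/data@convert                   # point-in-time source
zfs send tank/data@convert | zfs receive tank/data_new
# tank/data_new inherits checksum=sha256 from tank, and every block in
# it was freshly written, so the whole copy carries sha256 checksums.
```

These are administrative commands that require a live ZFS pool; treat them as a sketch, not a tested recipe.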
On Fri, Oct 2, 2009 at 10:57 PM, Richard Elling wrote:
>
> c is the current size of the ARC. c will change dynamically, as memory
> pressure and demand change.

How is the relative greediness of c determined? Is there a way to make it
more greedy on systems with lots of free memory?
>
>
> > Whe
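For readers wanting to watch c directly: on OpenSolaris the ARC's current target size and its bounds are exposed as kstats, and the usual (reboot-required) way to change how much memory the ARC may claim is the `zfs_arc_max` tunable in /etc/system. There is no single "greediness" knob; the bounds are what is tunable. The 4 GiB value below is only an example.

```shell
# Observe the ARC's current target size (c) and its floor/ceiling:
kstat -p zfs:0:arcstats:c zfs:0:arcstats:c_min zfs:0:arcstats:c_max

# To bound how much memory the ARC will claim, add a line like this to
# /etc/system and reboot (value in bytes; 0x100000000 = 4 GiB):
#   set zfs:zfs_arc_max=0x100000000
```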
On Oct 3, 2009, at 12:22 PM, Miles Nordin wrote:
"re" == Richard Elling writes:
re> If I was to refer to Fletcher's algorithm, I would use
re> Fletcher. When I am referring to the ZFS checksum setting of
re> "fletcher2" I will continue to use "fletcher2"
haha okay, so to clarify, when reading a Richard Elling post:
fletche
On Sat, 3 Oct 2009, Miles Nordin wrote:
re> The best I can tell, the comments are changed to indicate
re> fletcher2 is deprecated.
You are saying the ``fix'' was a change in documentation, nothing
else? The default is still fletcher2, and there is no correct
implementation of the Fletcher
I managed to solve this problem thanks to much help from Victor Latushkin.
Anyway, the problem is related to the following bug:
Bug ID: 6753869
Synopsis: labeling/shrinking a disk in raid-z vdev makes pool
un-importable
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id
On Fri, 2 Oct 2009, Ray Clark wrote:
With the current fletcher2 we have only a 50-50 chance of catching
these multi-bit errors. Probability of multiple bits being changed
is not
What is the current fletcher2? A while back I seem to recall reading
a discussion in the zfs-code forum about ho
Richard, with respect to:
"This has been answered several times in this thread already.
set checksum=sha256 filesystem
copy your files -- all newly written data will have the sha256
checksums."
I understand that. I understood it before the thread started. I did not ask
this. It is a fact that
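A minimal sketch of the advice quoted above, with a hypothetical dataset and file name: the property change is instant, but only data written afterwards gets sha256 checksums, so each pre-existing file has to be rewritten.

```shell
# Hypothetical dataset name. The property change takes effect
# immediately but only governs future writes:
zfs set checksum=sha256 tank/home

# One way to force a rewrite of an existing file (copy, then replace),
# so its blocks are written fresh under the new checksum:
cp -p /tank/home/report.dat /tank/home/report.dat.new
mv /tank/home/report.dat.new /tank/home/report.dat
```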
This is opensolaris on a Tecra M5 using a 128GB SSD as the boot device. This
device is partitioned into two roughly 60GB partitions.
I installed opensolaris 2009.06 into the first partition, then did an image
update to build 124 from the dev repository. All went well, so then I created a
zpo
Rudolf Potucek wrote:
once you break the model where a snapshot is a point-in-time picture, all sorts
of bad things can happen. You've changed a fundamental assumption of
snapshots, and this then impacts how we view them from all sorts of angles;
it's a huge loss to trade away for a very sma