Richard,
The sub-threads being woven with a couple of other people are very important,
though they are not my immediate issue. I really don't think you need us
debating with *you* about this - I think you could argue our point also. What
we need to get across is a perspective.
I am pretty su
With respect to relling's Oct 3 2009 7:46 AM Post:
> I think you are missing the concept of pools. Pools contain datasets.
> One form of dataset is a file system. Pools do not contain data per se,
> datasets contain data. Reviewing the checksums used with this
> heirarchy in mind:
> Pool
> Label
Responding to p...@paularcher.org's Sep 30 2009 9:21 post:
For the entire file system, I have chosen zfs send/receive, per thread "Best
way to convert checksums". I had concerns; they have been answered.
So my immediate need is answered. The question remains as to how to copy
portions of the tree
Richard, with respect to:
"This has been answered several times in this thread already.
set checksum=sha256 filesystem
copy your files -- all newly written data will have the sha256
checksums."
I understand that. I understood it before the thread started. I did not ask
this. It is a fact that
Let me try to refocus:
Given that I have a U4 system with a zpool created with Fletcher2:
What blocks in the system are protected by Fletcher2 (or even Fletcher4,
although that does not worry me so much)?
Given that I only have 1.6TB of data in a 4TB pool, what can I do to change
those blocks to
Re: relling's Oct 2 5:06 Post:
Re: analogy to ECC memory...
I appreciate the support, but the ECC memory analogy does not hold water. ECC
memory is designed to correct for multiple independent events, such as
electrical noise, bits flipped due to alpha particles from the DRAM package, or
cos
Re: Miles Nordin Oct 2, 2009 4:20:
Re: "Anyway, I'm glad the problem is both fixed..."
I want to know HOW it can be fixed. If they fixed it, that would invalidate
every pool that has not been changed from the default (probably almost all of
them!). This can't be! So what WAS done? In the int
Re: relling's Oct 2, 2009 3:26 Post:
(1) Is this list everything?
(2) Is this the same for U4?
(3) If I change the zpool checksum property on creation as you indicated in
your Oct 1, 12:51 post (evidently very recent versions only), does this change
the checksums used for this list? Why would n
Cindys Oct 2, 2009 2:59, Thanks for staying with me.
Re: "The checksums are aset on the file systems not the pool.":
But previous responses seem to indicate that I can set them for file stored in
the filesystem that appears to be the pool, at the pool level, before I create
any new ones. One
Replying to hakanson's Oct 2, 2009 2:01 post:
Thanks. I suppose it is true that I am not even trying to compare the
peripheral stuff, and simple presence of a file and the data matching covers
some of them.
Using it for moving data, one encounters a longer list: Sparse files, ACL
handling,
My pool was the default, with checksum=256. The default has two copies of all
metadata (as I understand it), and one copy of user data. It was a raidz2 with
eight 750GB drives, yielding just over 4TB of usable space.
I am not happy with the situation, but I recognize that I am 2x better off
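(Rough numbers, assuming the usual raidz2 layout: two drives' worth of parity
leaves six drives of data, so 6 x 750 GB = 4.5 TB raw, or about 6 x 698 GiB
~= 4.1 TiB after the GB-to-GiB conversion, which lines up with the "just over
4TB of usable space" figure above.)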
Replying to relling's October 1, 2009 3:34 post:
Richard, regarding "when a pool is created, there is only metadata which uses
fletcher4". Was this true in U4, or is this a new change of default with U4
using fletcher2? Similarly, did the Ubberblock use sha256 in U4? I am running
U4.
--Ray
Replying to Cindys Oct 1, 2009 3:34 PM post:
Thank you. The second part was my attempt to guess my way out of this. If
the fundamental structure of the pool (That which was created before I set the
checksum=sha256 property) is using fletcher2, perhaps as I use the pool all of
this structure
Apologies that the preceding post appears out of context. I expected it to
"indent" as I pushed the reply button on myxiplx' Oct 1, 2009 1:47 post. It
was in response to his question. I will try to remember to provide links
internal to my messages.
Data security. I migrated my organization from Linux to Solaris, driven away
from Linux by the shortfalls of fsck on TB-size file systems, and towards
Solaris by the features of ZFS.
At the time I tried to dig up information concerning tradeoffs associated with
Fletcher2 vs. 4 vs. SHA256 an
U4 zpool does not appear to support the -o option... A current zpool manpage
online lists the valid properties for zpool -o, and checksum is not one of
them. Are you mistaken or am I missing something?
Another thought is that *perhaps* all of the blocks that comprise an em
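For reference, and only on releases much newer than U4 if I read the current
docs right, zpool create has a capital -O that sets a file system property on
the pool's root dataset at creation time. A sketch (device names are just
placeholders):
zpool create -O checksum=sha256 tank raidz2 c1t0d0 c1t1d0 c1t2d0
That would not help an existing U4 pool, of course.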
Darren, thank you very much! Not only have you answered my question, you have
made me aware of a tool to verify, and probably do a lot more (zdb).
Can you comment on my concern regarding what checksum is used in the base zpool
before anything is created in it? (No doubt my terminology is wrong,
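The sort of check I have in mind, sketched out (the path and object number are
made up, and the zdb flags/output vary by release, so take this as a guess):
ls -i /zfs01/home/somefile        # the inode number is the ZFS object number
zdb -ddddd zfs01/home 12345       # the blkptr lines should name the checksum
                                  # (fletcher2 vs. sha256) for each block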
Joerg, Thanks. As you (of all people) know, this area is quite a quagmire. I
am confident that I don't have any sparse files, or if I do, they are small
and losing this property would not be a big impact. I have determined that
none of the files have extended attributes or ACLs. Some ar
Sinking feeling...
zfs01 was originally created with fletcher2. Doesn't this mean that the sort
of "root level" stuff in the zfs pool exists with fletcher2 and so is not well
protected?
If so, is there a way to fix this short of a backup and restore?
Dynamite!
I don't feel comfortable leaving things implicit. That is how
misunderstandings happen.
Would you please acknowledge that zfs send | zfs receive uses the checksum
setting on the receiving pool instead of preserving the checksum algorithm used
on the sending side?
Thanks a million
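If it helps, a small empirical test I could run to confirm it (names are made
up, and the zdb step is the same sort of guess as the sketch above):
zfs set checksum=sha256 zfs01
zfs send zfs01/h...@before | zfs receive zfs01/recvtest
ls -i /zfs01/recvtest/somefile    # object number of a received file
zdb -ddddd zfs01/recvtest 12345   # expect sha256 on the data block pointers if
                                  # receive really uses the receiving side's setting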
I made a typo... I only have one pool. I should have typed:
zfs snapshot zfs01/h...@before
zfs send zfs01/h...@before | zfs receive zfs01/home.sha256
Does that change the answer?
And independently of whether it does or not, zfs01 is a pool, and the property is on
the home zfs file system.
I can
It appears that I have waded into a quagmire. Every option I can find (cpio,
tar (Many versions!), cp, star, pax) has issues. File size and filename or
path length, and ACLs are common shortfalls. "Surely there is an easy answer"
he says naively!
I simply want to copy one zfs filesystem tree
The April 2009 "ZFS Administration Guide" states "...tar and cpio commands, to
save ZFS files. All of these utilities save and restore ZFS file attributes
and ACLs."
I am running 8/07 (U4). Was this true for the U4 version of ZFS and the tar
and cpio shipped with U4?
Also, I cannot seem to fi
When using zfs send/receive to do the conversion, the receive creates a new
file system:
zfs snapshot zfs01/h...@before
zfs send zfs01/h...@before | zfs receive afx01/home.sha256
Where do I get the chance to "zfs set checksum=sha256" on the new file system
before all of the files are writ
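One thought, hedged because I have not verified how property inheritance
interacts with receive on U4: since checksum is an inheritable property,
setting it on the top-level dataset before the receive should mean the newly
created file system inherits sha256 as it is created (pool name as corrected
elsewhere in the thread):
zfs set checksum=sha256 zfs01
zfs send zfs01/h...@before | zfs receive zfs01/home.sha256
zfs get checksum zfs01/home.sha256   # SOURCE should show it inherited from zfs01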
I didn't want my question to lead to an answer, but perhaps I should have put
more information. My idea is to copy the file system with one of the following:
cp -rp
zfs send | zfs receive
tar
cpio
But I don't know what would be the best.
Then I would do a "diff -r" on them before del
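For the verification step I have in mind something like the following
(mountpoints assumed to be the defaults; note that diff -r only compares names
and file contents, so it would not catch differences in ACLs, extended
attributes, or sparseness):
diff -r /zfs01/home /zfs01/home.sha256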
What is the "Best" way to convert the checksums of an existing ZFS file system
from one checksum to another? To me "Best" means safest and most complete.
My zpool is 39% used, so there is plenty of space available.
Thanks.
My understanding is that if I "zfs set checksum=" to change the
algorithm, this will change the checksum algorithm for all FUTURE data
blocks written, but will not in any way change the checksum for previously
written data blocks.
I need to corroborate this understanding. Could someone p
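For concreteness, the operation I am describing is just the following, with
zfs01/home standing in for the file system in question:
zfs set checksum=sha256 zfs01/home
zfs get checksum zfs01/home   # only blocks written after this point get sha256;
                              # existing blocks keep their old checksum until rewritten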
Re pantzer5's suggestion:
Memory is not a big problem for ZFS, address space is. You may have to
give the kernel more address space on 32-bit CPUs.
eeprom kernelbase=0x8000
This will reduce the usable address space of user processes though.
---
Would you please verify that I understand co
It completed copying 191,xxx MB without issue in 17 hours and 40 minutes,
average transfer rate of 3.0MB/Sec. During the copy (At least the first hour
or so, and an hour in the middle), the machine was reasonably responsive. It
was jerky to a greater or lesser extent, but nothing like even the
This has been an evolution; see defect.opensolaris.com's 5482, memory leak with
top. I have not run anything but gzip-9 since fixing that by only running top
in one-time mode.
I will start the same copy with compress=off in about 45 minutes (Got to go do
an errand now). Glad to run tests.
--R
Unless something new develops, from my perspective this thread has served its
purpose. I don't want to drag it out and waste people's time. If any "late
readers" have insights that should be captured for the archive please add them.
Thank you all VERY much for the discussion, insights, suggest
> > I think Chris has the right idea. This would give more little opportunities
> > for user
> > processes to get a word in edgewise. Since the blocks are *obviously*
> > taking a
> > LONG time, this would not be a big hit to efficiency in the bogged-down
> > condition.
> I still think you are exp
It would be extremely helpful to know what brands/models of disks lie and which
don't. This information could be provided diplomatically simply as threads
documenting problems you are working on, stating the facts. Use of a specific
string of words would make searching for it easy. There shou
I just want to interject here that I think, if memory serves me correctly, that
SPARC has been 64 bit for 10~15 years, and so had LOTS of address space to map
stuff. x86 brought a new restriction.
Regarding the practice of mapping files etc. into virtual memory that does not
exist, now I under
Re: I don't believe either are bundled. Search google for arcstat.pl and
nicstat.pl
--
Thanks. On a related note, there are at least 2 or 3 places proclaiming to
provide Solaris 10 packages. How do I know which ones are "safe", both in
terms of quality, and in terms of adhering to a consist
Re: Experimentation will show that the compression ratio does not
increase much at -9 so it is not worth it when you are short on time
or CPU.
---
Yes, and a large part of my experiment was to understand the cost (time) vs.
compression ratio curve. lzjb only gave me 7%, which to me is not worth
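For anyone repeating the experiment, a minimal recipe would be something along
these lines (dataset name is just an example):
zfs set compression=gzip-9 zfs01/home
zfs get compressratio zfs01/home   # ratio achieved for the dataset; only data
                                   # written after enabling compression is compressed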
Re: gzip compression works a lot better now the compression is threaded.
It's a shame userland gzip isn't!
---
What does "now than" mean? I assume you mean the zfs / kernel gzip (right?) at
some point became threaded. Is this in the past, or in a kernel post-2008.11 B2?
--Ray
I think Chris has the right idea. This would give more little opportunities
for user processes to get a word in edgewise. Since the blocks are *obviously*
taking a LONG time, this would not be a big hit to efficiency in the bogged-down
condition. It would however increase overhead in the well-be
Andrewk8 at 11:39 on 11/29 said:
Solaris reports "virtual memory" as the sum of physical memory and page file -
so this is where your strange vmstat output comes from. Running ZFS stress
tests on a system with only 768MB of memory is not a good idea since ZFS uses
large amounts of memory for its
Ref relling's 12:00 post:
My system does not have arcstat or nicstat. But it is the B2 distribution.
Would I expect these to be in the final distribution, or where do these come
from?
Thanks.
--Ray
Jeff,
Thank you for weighing in, as well as for the additional insight. It is good
to have confidence that I am on the right track.
I like your system ... a lot. Got work to do for it to be as slick as a recent
Linux distribution, but you are working on a solid core and just need some
touch
Tim,
I am trying to look at the whole picture. I don't see any unwarranted
assumptions, although I know so little about Solaris and I extrapolated all
over the place based on general knowledge, sort of draping it around and over
what you all said. I see quite a few misconceptions in the thread
Now that it has come out of its slump, I can watch what it is working on vs.
response. Whenever it is going through a folder with a lot of incompressible
stuff, it gets worse. .mp3 and .flac are horrible. .iso images and .gz and
.zip files are bad. It is sinking again, but still works. It de
Tim,
I don't think we would really disagree if we were in the same room. I think in
the process of the threaded communication that a few things got overlooked, or
the wrong thing attributed.
You are right that there are many differences. Some of them are:
- Tests done a year ago, I expect th
If I shut down the Linux box, I won't have a host to send stuff to the Solaris
box!
Also, the Solaris box can only support 1024MB. I did have 1024MB in it at one
time and had essentially the same performance. I might note that I had the
same problem with 1024MB, albeit with TOP eating memory
Thanks for the info about "Free Memory". That also links to another sub-thread
regarding kernel memory space. If disk files are mapped into memory space,
that would be a reason that the kernel could make use of address space larger
than virtual memory (RAM+Swap).
Regarding showing stuff as
I get 15 to/from (I don't remember which) Linux LVM to a USB disk. It does
seem to saturate there. I assume due to interrupt service time between
transfers. I appreciate the contention for the IDE, but in a 3MB/Sec system, I
don't think that it is my bottleneck, much less in a 100KByte/second
Pantzer5: Thanks for the "top" "size" explanation.
Re: eeprom kernelbase=0x8000
So this makes the kernel load at the 2G mark? What is the default, something
like C00... for 3G?
Are PCI and AGP space in there too, such that kernel space is 4G - (kernelbase
+ PCI_Size + AGP_Size) ? (Shot
tcook, zpool iostat shows 1.15MB/sec. Is this averaged since boot, or a recent
running average?
The two drives ARE on a single IDE cable, however again, with a 33MB/sec cable
rate and 8 or 16MB cache in the disk, 3 or 4 MB/sec should be able to
time-share the cable without a significant impact
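Partially answering my own question, as I understand zpool iostat (worth
double-checking against the manpage): without an interval it reports averages
accumulated since the pool was imported/booted; with an interval the first
report is that cumulative figure and subsequent reports are per-interval, e.g.:
zpool iostat zfs01 5   # first line cumulative, later lines averaged over each
                       # 5-second interval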
bfriesen,
Andrew brought up NVRAM by referring me to the following link:
Also, NFS to ZFS filesystems will run slowly under certain conditions
-including with the default configuration. See this link for more information:
http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#Cache
zpool status -v says "No known data errors" for both the root rpool (separate
non-mirrored 80GB drive) and my pool (mirrored 200GB drives).
It is getting very marginal (sluggish/unresponsive) again. Interestingly, top
shows 20~30% CPU idle with most of the remainder in the kernel. I wonder if everything i
bfriesen,
Ultimately stuff flows in and stuff flows out. Data is not reused, so a cache
does not do anything for us. As a buffer, it is simply a rubber band, a FIFO.
So if the client wrote something real quick it would complete quickly. But if
it is writing an unlimited amount of data (like
relling, Thank you, you gave me several things to look at. The one thing that
sticks out for me is that I don't see why you listed IDE. Compared to all of
the other factors, it is not the bottleneck by a long shot even if it is a slow
transfer rate (33MB/Sec) by today's standards. What don't
Servo / mg,
I *have* noticed these effects on my system with lzjb, but they are minor.
Things are a little grainy, not smooth.
Eliminating the algorithm that exposes the shortfall in how the compression is
integrated into the system does not change the shortfall (See opensolaris.com
bug 5483)
Hakimian,
So you had a similar experience to what I had with an 800 MHz P3 and 768MB, all
the way down to totally unresponsive. Probably 5 or 6 x the CPU speed
(assuming single core) and 5 x the memory. This can only be a real design
problem or bug, not just expected performance.
Is there a
tcook,
You bring up a good point: exponentially slow is very different from crashed,
though they may have the same net effect. Also that other factors like
timeouts would come into play.
Regarding services, I am new to administering "modern" solaris, and that is on
my learning curve. My imm
Andrewk8,
Thanks for the information. I have some questions.
[1] You said "zfs uses large amounts of memory for its cache". If I
understand correctly, it is not that it uses large amounts, it is that it uses
all memory available. If this is an accurate picture, then it should be just
as ha
Please help me understand what you mean. There is a big difference between
being unacceptably slow and not working correctly, or between being
unacceptably slow and having an implementation problem that causes it to
eventually stop. I expect it to be slow, but I expect it to work. Are you
sa
I am [trying to] perform a test prior to moving my data to Solaris and ZFS.
Things are going very poorly. Please suggest what I might do to understand
what is going on, file a meaningful bug report, fix it, whatever!
Both to learn what the compression could be, and to induce a heavy load to