I had a drive fail and replaced it with a new drive. During the resilvering
process the new drive had write faults and was taken offline. These faults were
caused by a broken SATA cable (the drive checked out fine with the
manufacturer's software). A new cable fixed the failure. However, now the drive
Yes - but it does nothing. The drive remains FAULTED.
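For reference, the usual sequence once the cabling is fixed looks roughly like
this (the device name is a placeholder - take the one shown as FAULTED in
zpool status); if the disk still shows FAULTED afterwards, the interrupted
replace itself may need to be restarted:
# zpool online tank <faulted-device>
# zpool clear tank <faulted-device>
# zpool status -x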
Thanks for the suggestion, but I have tried detaching and it refuses, reporting
no valid replicas. Capture below.
C3P0# zpool status
pool: tank
state: DEGRADED
scrub: none requested
config:
NAME          STATE     READ WRITE CKSUM
tank          DEGRADED     0     0     0
  raidz1      DEGRADED     0     0     0
    ad4       ONLINE       0     0     0
    ad6       ONLINE       0     0     0
    repla
Thanks - I have run it and it returns pretty quickly. Given the output (attached),
what action can I take?
Thanks
James
Dirty time logs:
tank
outage [300718,301073] length 356
outage [301138,301139] length 2
outage [301149,30
Has anyone here read the article "Why RAID 5 stops working in 2009" at
http://blogs.zdnet.com/storage/?p=162
Does RAIDZ have the same chance of unrecoverable read error as RAID5 in Linux
if the RAID has to be rebuilt because of a faulty disk? I imagine so because
of the physical constraints that p
It is unclear what you want to do - what is the goal of this exercise?
If you want to replace the pool with larger disks and the pool is a mirror or
raidz, you just replace one disk at a time and allow the pool to rebuild
itself. Once all the disks have been replaced, it will automatically realize the disk
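A sketch of that one-disk-at-a-time approach (device names below are
placeholders; wait for each resilver to finish before swapping the next disk).
On builds with the autoexpand pool property the extra space is picked up
automatically once the last disk is done; on older builds an export/import or
"zpool online -e" does the same:
# zpool replace tank <old-disk> <new-disk>
# zpool status tank        (wait for the resilver to complete, then repeat)
# zpool set autoexpand=on tank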
eed memory, ZFS will release memory being used by the ARC.
But, if no one else wants it
/jim
On Apr 27, 2010, at 9:07 PM, Brad wrote:
> What's the default size of the file system cache for Solaris 10 x86, and can it
> be tuned?
> I read various posts on the subject and it's confusing.
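To see what the ARC is currently using and what it is allowed to grow to, the
arcstats kstats work on both Solaris 10 and OpenSolaris; capping it is done in
/etc/system and needs a reboot. The 4 GB value below is only an example, not a
recommendation:
# kstat -p zfs:0:arcstats:size zfs:0:arcstats:c_max
# echo ::arc | mdb -k
(in /etc/system)
set zfs:zfs_arc_max = 0x100000000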
For this type of migration, downtime is required. However, it can be reduced
to anywhere from a few hours down to a few minutes, depending on how much
change needs to be synced.
I have done this many times on a NetApp Filer, but it can be applied to ZFS as well.
The first thing to consider is to only do the migration once, so
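The ZFS version of that sync-then-cutover approach is recursive snapshots plus
incremental sends; pool and dataset names below are placeholders. The bulk copy
runs while services are still up, and only the last small increment has to
happen inside the outage window (add -F to the final recv if the target
datasets were mounted and touched in between):
# zfs snapshot -r oldpool/data@sync1
# zfs send -R oldpool/data@sync1 | zfs recv -d newpool
(stop the applications, then)
# zfs snapshot -r oldpool/data@sync2
# zfs send -R -i @sync1 oldpool/data@sync2 | zfs recv -d newpool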
So, on the point of not needing a migration back:
Even at 144 disks, they won't all be in the same raid group. So figure out what
the best raid group size is for you, since ZFS doesn't support changing the
number of disks in a raidz yet. I usually use the number of slots per shelf, or
a good number is 7-10.
Sorry, I need to correct myself. Mirroring LUNs on the Windows side to switch
the storage pool under it is a great idea, and I think you can do this without
downtime.
I understand your point. However, in most production systems the shelves are
added incrementally, so it makes sense to relate the group size to the number
of slots per shelf. And in most cases, withstanding a shelf failure is too much
overhead on storage anyway.
For example, in his case he would have to configure 1+0 ra
Sorry for the double post, but I think this was better suited for the zfs forum.
I am running OpenSolaris snv_134 as a file server in a test environment,
testing deduplication. I am transferring a large amount of data from our
production server using rsync.
The data pool is on a separate raidz1-0
This is not a performance issue. The rsync hangs hard and one of the child
processes cannot be killed (I assume it's the one running on the ZFS side). By
"the commands get slower" I am referring to the output of the file system
commands (zpool, zfs, df, du, etc.) from a different shell. I left the
> 3 shelves with 2 controllers each. 48 drive per
> shelf. These are Fibrechannel attached. We would like
> all 144 drives added to the same large pool.
I would do either 12- or 16-disk raidz3 vdevs and spread the disks across
controllers within vdevs. You may also want to leave at least 1 spare.
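For illustration only - a pool cut into 12-disk raidz3 vdevs plus a hot spare;
the cXtYdZ names are placeholders, and the point is that each vdev takes its
disks from several controllers:
# zpool create tank \
    raidz3 c1t0d0 c2t0d0 c3t0d0 c1t1d0 c2t1d0 c3t1d0 \
           c1t2d0 c2t2d0 c3t2d0 c1t3d0 c2t3d0 c3t3d0 \
    raidz3 c1t4d0 c2t4d0 c3t4d0 c1t5d0 c2t5d0 c3t5d0 \
           c1t6d0 c2t6d0 c3t6d0 c1t7d0 c2t7d0 c3t7d0 \
    spare c1t8d0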
> Why would you recommend a spare for raidz2 or raidz3?
> -- richard
A spare is there to minimize the reconstruction time, because remember, a vdev
cannot start resilvering until a replacement disk is available. And with disks
as big as they are today, resilvering also takes many hours. I would rather have
> Would your opinion change if the disks you used took
> 7 days to resilver?
>
> Bob
That only makes a stronger case that a hot spare is absolutely needed.
This also makes a strong case for choosing raidz3 over raidz2, as well as
vdevs with a smaller number of disks.
Looks like I am hitting the same issue now
as in the earlier post that you responded to.
http://opensolaris.org/jive/thread.jspa?threadID=128532&tstart=15
I continued my test migration with dedup=off and synced a couple more file
systems.
I decided to merge two of the file systems together by copyi
uild 136..., the iSCSI Target Daemon (and ZFS shareiscsi)
are gone, so you will need to reconfigure your two ZVOLs 'vol01/zvol01' and
'vol01/zvol02' under COMSTAR soon.
http://wikis.sun.com/display/OpenSolarisInfo/How+to+Configure+iSCSI+Target+Port
Przem,
> Anybody has an idea what I can do about it?
zfs set shareiscsi=off vol01/zvol01
zfs set shareiscsi=off vol01/zvol02
Doing this will have no impact on the LUs if configured under COMSTAR.
This will also transparently go away with b136, when ZFS ignores the shareiscsi
property.
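If the LUs are not yet configured under COMSTAR, the rough sequence is
something like the following (see the wiki page above for the details; the
GUID comes from the sbdadm output):
# svcadm enable -r svc:/system/stmf:default
# svcadm enable -r svc:/network/iscsi/target:default
# sbdadm create-lu /dev/zvol/rdsk/vol01/zvol01
# stmfadm add-view <GUID-from-sbdadm>
# itadm create-target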
Okay, so after some tests with dedup on snv_134, I decided we cannot use the
dedup feature for the time being.
While unable to destroy a deduped file system, I decided to migrate the file
system to another pool and then destroy the pool. (see below)
http://opensolaris.org/jive/thread.jspa?threadI
size of snapshot?
r...@filearch1:/var/adm# zfs list mpool/export/projects/project1...@today
NAME                                      USED  AVAIL  REFER  MOUNTPOINT
mpool/export/projects/project1...@today 0 - 407G -
r...@filearch1:/var/adm# zfs list tank/export/projects/project1...@
I was expecting
zfs send tank/export/projects/project1...@today
would send everything up to @today. That is the only snapshot and I am not
using the -i option.
The thing that worries me is that tank/export/projects/project1_nb was the first
file system that I tested with full dedup and compression
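That expectation should hold as far as I know: a non-incremental send of
@today carries every block that snapshot references, no matter when it was
written; only -i/-I shrinks the stream to a delta. Dataset names below are
placeholders:
# zfs send pool/fs@today | zfs recv -v otherpool/fs
# zfs send -i @yesterday pool/fs@today | zfs recv otherpool/fs   (delta only)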
When I boot up without the disks in the slots, I manually bring the pool
online with
zpool clear
I believe that was what you were missing from your command. However, I did not
try to change the controller.
Hopefully you have only been unplugging disks while the system is turned off.
If that's the case, the
You may or may not need to add the log device back.
zpool clear should bring the pool online.
Either way, it shouldn't affect the data.
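If the log device really is gone for good, the usual cleanup looks roughly
like this (device names are placeholders; removing a log vdev needs a pool
version that supports it, 19 or later):
# zpool status tank            (identify the missing log vdev)
# zpool remove tank <old-log-device>
# zpool add tank log <new-log-device>
# zpool clear tank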
Hi all, is there any procedure to recover a filesystem from an offline pool, or
to bring a pool online quickly?
Here is my issue.
* One 700 GB zpool
* 1 filesystem with compression turned on (only using a few MB)
* Tried to migrate another filesystem from a different pool with a dedup stream,
with
zfs send
10 GB of memory and 5 days later, the pool was imported.
This file server is a virtual machine. I allocated 2 GB of memory and 2 CPU
cores, assuming this was enough to manage 6 TB (6x 1 TB disks). The pool I am
trying to recover is only 700 GB, not the 6 TB pool I am trying to migrate.
So I decided t
sets. I thought of local zones first, but most
people may init them by packages (though zoneadm says it is copying thousands
of files), so /etc/skel might be a better example of the use case - though
nearly useless ,)
jim
A solution to this problem would be my early Christmas present!
Here is how I lost access to an otherwise healthy mirrored pool two months ago:
A box running snv_130 with two disks in a mirror and an iRAM battery-backed
ZIL device was shut down in an orderly fashion and powered down normally. While I was away
o
times per
hour, plus updates to files' atime attr - and that particular scale of
operation will be greatly improved by an NVRAM ZIL.
If I were to use a ZIL again, I'd use something like the ACARD DDR-2 SATA
boxes, and not an SSD or an iRAM.
-- Jim
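For what it's worth, a dedicated log device is just another vdev added to the
pool, and on pool versions before log-device removal it cannot be taken out
again - one more reason to mirror it. Device names below are placeholders:
# zpool add tank log mirror c3t0d0 c3t1d0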
I have been looking at why a zfs receive operation is terribly slow, and one
observation that seemed directly linked to why it is slow is that at any one
time one of the CPUs is pegged at 100% sys while the other 5 (in my case) are
relatively quiet. I haven't dug any deeper than that, but was curi
Just an update, I had a ticket open with Sun regarding this and it looks like
they have a CR for what I was seeing (6975124).
I had found a way to get around the freeze, but I guess I just delayed the
freeze a little longer. I provided Oracle some explorer output and a crash
dump to analyze and this is the data they used to provide the information I
passed on.
Jim Barker
ld an international group in English for the Tokyo OSUG. There are
bi-lingual westerners and Japanese on both lists, and we have events in
Yoga as well.
http://mail.opensolaris.org/mailman/listinfo/ug-tsug (English )
http://mail.opensolaris.org/mailman/listinfo/ug-jposug (Japanese)
Jim
king store
device is not a ZVOL.
Note: For ZVOL support, there is a corresponding ZFS storage pool
change to support this functionality, so a "zpool upgrade ..." to
version 16 is required:
# zpool upgrade -v
.
.
16 stmf property support
- Jim
The options seem
Posting to zfs-discuss. There's no reason this needs to be
kept confidential.
5-disk RAIDZ2 - doesn't that equate to only 3 data disks?
Seems pointless - they'd be much better off using mirrors,
which is a better choice for random IO...
Looking at this now...
/jim
Jeff Savit
created inside an HSM volume,
so that I have the flexibility of ZFS and the offline-storage capabilities of HSM?
--
Thanks for any replies, including statements that my ideas are insane or my
views are outdated ;) But constructive ones are more appreciated ;)
//
Thanks for the link, but the main concern in spinning down drives of a ZFS pool
is that ZFS by default is not so idle. Every 5 to 30 seconds it closes a
transaction
group (TXG) which requires a synchronous write of metadata to disk.
I mentioned reading many blogs/forums on the matter, and some
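The TXG interval itself is tunable on builds that expose zfs_txg_timeout, for
anyone who wants to experiment with longer idle windows - this only stretches
the regular transaction-group commits, it does not stop writes triggered by
atime updates and the like, and the 300-second value is just an example:
(in /etc/system)
set zfs:zfs_txg_timeout = 300
(or live, until the next reboot)
# echo zfs_txg_timeout/W0t300 | mdb -kw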
r one?
In general, were there any stability issues with snv_128 during internal/BFU
testing?
TIA,
//Jim
rformance of the customer's workload.
As an aside, there's nothing about this that requires it be posted
to zfs-discuss-confidential. I posted to zfs-disc...@opensolaris.org.
Thanks,
/jim
Anthony Benenati wrote:
Jim,
The issue with using scan rate alone is if you are looking for why you
I think he's looking for a single, intuitively obvious, easy-to-access indicator
of memory usage along the lines of the vmstat free column (before ZFS) that
shows the current amount of free RAM.
On Dec 23, 2009, at 4:09 PM, Jim Mauro wrote:
> Hi Anthony -
>
> I don't get
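The closest things I know of to a single vmstat-free-style number on a ZFS box
are the ::memstat breakdown (which on recent builds includes a ZFS file data
line) and the arcstats kstats:
# echo ::memstat | mdb -k
# kstat -p zfs:0:arcstats:size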
We have a production SunFireV240 that had a zfs mirror until this week. One of
the drives (c1t3d0) in the mirror failed.
The system was shut down and the bad disk replaced without an export.
I don't know what happened next but by the time I got involved there was no
evidence that the remaining go
No. Only slice 6 from what I understand.
I didn't create this (the person who did has left the company) and all I know
is that the pool was mounted on /oraprod before it faulted.
Never mind.
It looks like the controller is flakey. Neither disk in the mirror is clean.
Attempts to backup and recover the remaining disk produced I/O errors that were
traced to the controller.
Thanks for your help Victor.
don't run them at 90% full.
Read the link Richard sent for some additional information.
Thanks,
/jim
Tony MacDoodle wrote:
Was wondering if anyone has had any performance issues with Oracle
running on ZFS as compa
o that any end-user OS
(not only ones directly supporting ZFS) would benefit from ZFS
resiliency, snapshots, caching, etc. with the simplicity of using a
RAID adapter's exported volumes.
Now, it is just a thought. But I wonder if it's possible... Or useful? :)
Or if anyone has already done
ble - to reduce wear and increase
efficiency - but the main idea is hopefully simple.
//Jim
ks ASAP.
So besides an invitation to bash these ideas and explain why they
are wrong and impossible - if they are - there is also a hope to
stir up a constructive discussion finally leading up to a working
"clustered ZFS" solution, and one more reliable than my ideas
above ;) I
aiting for a chance to write several metadata blocks
as well... Thus I think my second solution is viable.
//Jim
f capturing storage from
hosts which died, and avoiding corruptions - but this is
hopefully solved in the past decades of clustering tech's.
Nico also confirmed that "one node has to be a master of
all TXGs" - which is conveyed in both ideas of my original
post.
More directed replies
2011-10-14 15:53, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Jim Klimov
I guess Richard was correct about the usecase description -
I should detail what I'm thinking about, to give some illustration.
Hello, I was asked if the CF port in Thumpers can be accessed by the OS?
In particular, would it be a good idea to use a modern 600x CF card
(some reliable one intended for professional photography) as an L2ARC
device using this port?
Thanks,
//Jim
n, just as it was accessible
to the "old host".
Again. NFS/iscsi/IB = ok.
True, except that this is not an optimal solution in the described
use case - a farm of server blades with relatively dumb, fast raw
storage (but NOT an intelligent storage server)
ordan
On Fri, Oct 14, 2011 at 5:39 AM, Jim Klimov wrote:
Hello, I was asked if the CF port in Thumpers can be accessed by the OS?
In particular, would it be a good idea to use a modern 600x CF card (some
reliable one intended for professional photography) as an L2ARC device using
this port?
T
2011-10-14 23:57, Gregory Shaw wrote:
You might want to keep in mind that the X4500 was a ~2006 box, and had only
PCI-X slots.
Or, at least, that's what the 3 I've got have. I think the X4540 had PCIe, but
I never got one of those. :-(
I haven't seen any cache accelerator PCI-X cards.
Howe
ical consumer disks did get about 2-3 times faster for
linear RW speeds over the past decade; but for random access
they do still lag a lot. So, "agreed" ;)
//Jim
t ports of two managed
switch modules can also become the networking core for the deployment site.
Thanks,
//Jim
have been reported several times.
I think another rationale for SSD throttling was with L2ARC tasks -
to reduce the probable effects of write overdriving in SSD hardware
(less efficient and more wear on SSD cells).
//Jim
of like send-recv
in the same pool? Why is it not done yet? ;)
//Jim
CDROM spin up - by a characteristic buzz in the
headphones or on the loudspeakers. Whether other components
would fail or not under such EMI - that depends.
//Jim
oxide film is scratched off, and the cable works again, for a
few months more...
//Jim
|
| Evgeny Klimov, Jim Klimov |
| Technical Director (CTO) |
| JSC "COS&HT" |
||
| +7-903-7705859 (cel
e to work in Sol10 with little effort.
HTH,
//Jim
t the repair shell in order to
continue booting the OS.
* brute force - updating the boot archive (/platform/i86pc/boot_archive
and /platform/i86pc/amd64/boot_archive) manually as an FS image, with the
files listed in /boot/solaris/filelist.ramdisk. Usually a failure on boot
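When the affected boot environment can be mounted (say at /a from rescue
media), bootadm does the same update in one supported step instead of
hand-editing the archive:
# bootadm update-archive -R /a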
2011-10-19 17:54, Fajar A. Nugraha wrote:
On Wed, Oct 19, 2011 at 7:52 PM, Jim Klimov wrote:
Well, just for the sake of completeness: most of our systems are
using the zfs-auto-snap service, including Solaris 10 systems dating
from Sol10u6. Installation of relevant packages from SXCE (ranging
w that's doable ;)
//Jim
th the reverse of
"zfs destroy @snapshot", meaning that some existing
blocks would be reassigned as "owned" by a newly
embedded snapshot instead of being "owned" by the
live dataset or some more recent snapshot...
//Jim
2011-10-30 2:14, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Jim Klimov
summer, and came up with a new question. In short, is it
possible to add "restartability" to ZFS SEND, for example
Rather tha
2011-10-29 21:57, Jim Klimov wrote:
... In short, is it
possible to add "restartability" to ZFS SEND, for example
by adding artificial snapshots (of configurable increment
size) into already existing datasets [too large to be
zfs-sent successfully as one chunk of stream data]?
On a
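As far as I know there is no way yet to retro-fit snapshots into data that
already exists as one huge delta - which is exactly the gap being asked about.
The only workaround going forward is to snapshot often enough that each
increment stays cheap to retry; names below are placeholders:
# zfs snapshot pool/big@p1
# zfs send pool/big@p1 | zfs recv -d backup
# zfs snapshot pool/big@p2
# zfs send -i @p1 pool/big@p2 | zfs recv -d backup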
ond in kernel probes, the watchdog
program might not catch the problem soon enough to react.
http://thumper.cos.ru/~jim/freeram-watchdog-20110610-v0.11.tgz
Note that it WILL crash your system in case of RAM depletion,
without syncs or service shutdowns. Since the RAM depletion
happens quickly, it mi
2011-10-31 1:13, Jim Klimov wrote:
Sorry, I am late.
...
If my memory and GoogleCache don't fail me too much, I ended
up with the following incantations for pool-import attempts:
:; echo zfs_vdev_max_pending/W0t5 | mdb -kw
:; echo "aok/W 1" | mdb -kw
:; echo "zfs_re
2011-10-31 16:28, Paul Kraus wrote:
How big is / was the snapshot and dataset ? I am dealing with a 7
TB dataset and a 2.5 TB snapshot on a system with 32 GB RAM.
I had a smaller-scale problem, with datasets and snapshots sized
several hundred GB, but on an 8 GB RAM system. So proportionall
they WERE still a useful reference for many of us,
even if posted a few years back...
//Jim
                                                                0      -    22K  -
pool/export/distr@zfs-auto-snap:frequent-2011-11-05-17:00      0      -  4.81G  -
pool/export/home@zfs-auto-snap:frequent-2011-11-05-17:00       0      -   396M  -
pool/export/home/jim@zfs-auto-snap:frequent-2011-11-05-17:00   0      -  24.7M  -
If you only need filesystem
ferred
(and for what reason)?
Also, how do other list readers place and solve their
preferences with their OpenSolaris-based laptops? ;)
Thanks,
//Jim Klimov
0t1 | mdb -kw
In this case I am not very hesitant to recreate the rpool
and reinstall the OS - it was mostly needed to serve the
separate data pool. However, this option is not always an
acceptable one, so I wonder if anything can be done to
repair an inconsistent non-redundant pool - at
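Before rebuilding, it may be worth letting ZFS try a transaction rewind on
import, on builds that support recovery-mode import; the dry-run form reports
what would be discarded without changing anything (pool name as appropriate):
# zpool import -Fn <poolname>
# zpool import -F <poolname>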
2011-11-08 22:30, Jim Klimov wrote:
Hello all,
I have an oi_148a PC with a single root disk, and since
recently it fails to boot - hangs after the copyright
message whenever I use any of my GRUB menu options.
Thanks to my wife's sister, who is my hands and eyes near
the problematic PC, h
2011-11-08 23:36, Bob Friesenhahn wrote:
On Tue, 8 Nov 2011, Jim Klimov wrote:
Second question regards single-HDD reliability: I can
do ZFS mirroring over two partitions/slices, or I can
configure "copies=2" for the datasets. Either way I
think I can get protection from bad blocks o
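For completeness, the copies route is a one-liner, but it only applies to
blocks written after the property is set - existing data stays single-copy
until it is rewritten (the dataset name is an example):
# zfs set copies=2 rpool/export/home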
pool with both nodes
accessing all of the data instantly and cleanly.
Can this be true? ;)
If this is not a deeply-kept trade secret, can the Nexenta
people elaborate in technical terms how this cluster works?
[1] http://www.nexenta.com/corp/sbb?gclid=CIzBg-aEqKwCFUK9zAodCSscsA
d. I should've at least reported it ;)
Thanks for any ideas,
and good luck fixing it for the future ,)
//Jim Klimov
2011-11-22 10:24, Frank Cusack wrote:
On Mon, Nov 21, 2011 at 10:06 PM, Frank Cusack <fr...@linetwo.net> wrote:
grub does need to have an idea of the device path, maybe in vbox
it's seen as the 3rd disk (c0t2), so the boot device name written to
grub.conf is "disk3" (whatever
this:
# zfs snapshot -r pool/rpool-backup@2019-05
# zfs send -R pool/rpool-backup@2019-05 | zfs recv -vF rpool
Since the hardware was all the same, there was little else
to do. I revised "RPOOL/rpool/boot/grub/menu.lst" and
"RPOOL/etc/vfstab" just in case,
nts
and /etc/vfstab for that now on some systems, but would
like to avoid such complication if possible...
//Jim
size, or the
compressed filesize?
My gut tells me that since they inflated _so_ badly when I storage vmotioned
them, that they are the compressed values, but I would love to know for
sure.
-Matt Breitbach
HTH,
//Jim Klimov
sk anyway.
However, the original question was about VM datastores,
so large files were assumed.
//Jim
ounds reasonable due to practice. If so, the error
message "as is" happens to be valid.
But you're correct that it might be more informative
for this corner case as well... :)
//Jim
ated != referred); can that be better diagnosed or
repaired? Can this discrepancy of a few sectors' worth of size be a cause of, or be
caused by, that reported metadata error?
Thanks,
// Jim Klimov
sent from a mobile, pardon any typos ,)
An intermediate update to my recent post:
2011-11-30 21:01, Jim Klimov wrote:
Hello experts,
I've finally upgraded my troublesome oi-148a home storage box to oi-151a about a week ago
(using the pkg update method from the wiki page - I'm not certain if that repository is fixed
at relea
2011-12-02 18:25, Steve Gonczi wrote:
Hi Jim,
Try to run a "zdb -b poolname" ..
This should report any leaked or double allocated blocks.
(It may or may not run, it tends to run out of memory and crash on large
datasets)
I would be curious what zdb reports, and whether you are a
me theories, suggestions or requests to dig
up more clues - bring them on! ;)
2011-12-02 20:08, Nigel W wrote:
On Fri, Dec 2, 2011 at 02:58, Jim Klimov wrote:
My question still stands: is it possible to recover
from this error or somehow safely ignore it? ;)
I mean, without backing up data and
50 5
block traversal size 11986202624 != alloc 11986203136 (unreachable 512)
bp count: 405927
bp logical:   15030449664 avg: 37027
bp physical: 12995855872 avg: 32015 compression: 1.16
bp allocated: 13172434944 avg: 32450
, the electrical links just stopped working
after a while due to oxidization into the bulk of the metal blobs :)
Still, congratulations that the replacement hardware did solve the
problem! ;)
//Jim
space.
* (Technically, for very-often referenced blocks there is a
number of copies, controlled by ditto attribute).
HTH,
//Jim Klimov
ith no disk IO. And I would be very
surprised if speeds would be noticeably different ;)
//Jim
2011-12-12 19:03, Pawel Jakub Dawidek wrote:
On Sun, Dec 11, 2011 at 04:04:37PM +0400, Jim Klimov wrote:
I would not be surprised to see that there is some disk IO
adding delays for the second case (read of a deduped file
"clone"), because you still have to determine references
to t
parent_type = raidz
zio_err = 50
zio_offset = 0x6ecb163000
zio_size = 0x8000
zio_objset = 0x0
zio_object = 0x0
zio_level = 0
zio_blkid = 0x0
__ttl = 0x1
__tod = 0x4ed70849 0x1a17d120
2011-12-02 13:58, Jim Klimov wrote:
An interme
deep metadata error.
Now, can someone else please confirm this guess? If I were
to just calculate the correct checksum and overwrite the
on-disk version of the block with the "correct" one, would I
likely make matters worse or okay? ;)
Thanks to all that have already repl
bad,
* recommended return to 4Kb, we'll do 4*8K)
* greatly increases write speed in filled-up pools
set zfs:metaslab_min_alloc_size = 0x8000
set zfs:metaslab_smo_bonus_pct = 0xc8
**
These values were described in greater detail on the list
this summer, I think.
HTH,
..
Basically this should be equivalent to the "root-reserved 5%"
on traditional FSes like UFS, EXT3, etc. Would it be indeed?
Thanks,
//Jim
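One common way to get that effect is an empty, unmounted dataset holding a
reservation of roughly 5% of the pool; freeing the space in an emergency is
then just a matter of shrinking or destroying it (names and size are examples):
# zfs create -o mountpoint=none -o reservation=50G pool/reserved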