unlink(1M)?
cheers,
--justin
From: Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
To: Sami Tuominen; zfs-discuss@opensolaris.org
Sent: Monday, 26 November 2012, 14:57
Subject: Re: [zfs-discuss] Directory is not accessible
"panic: freeing free" and then
the ensuing fsck-athon convinces the user to just rebuild the fs in question.
cheers,
--justin
a given filesystem. I think the default is "panic"
anyway. Check the mount_ufs(1M) manpage for details.
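For example (just a sketch, with a made-up device and mount point), you can
override it per filesystem at mount time:
# mount -F ufs -o onerror=umount /dev/dsk/c0t0d0s6 /export/data
If I remember the manpage right, the accepted values are panic, lock and
umount, with panic being the default.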
Your answer is to take regular backups, rather than bury your head in the sand.
cheers,
--justin
> I think for the cleanness of the experiment, you should also include
> "sync" after the dd's, to actually commit your file to the pool.
OK that 'fixes' it:
finsdb137@root> dd if=/dev/random of=ob bs=128k count=1 && sync && while true
> do
> ls -s ob
> sleep 1
> done
0+1 records in
0+1 records out
>Can you check whether this happens from /dev/urandom as well?
It does:
finsdb137@root> dd if=/dev/urandom of=oub bs=128k count=1 && while true
> do
> ls -s oub
> sleep 1
> done
0+1 records in
0+1 records out
1 oub
1 oub
1 oub
1 oub
1 oub
4 oub
4 oub
4 oub
4 oub
4
1 ob
1 ob
1 ob
1 ob
1 ob
1 ob
1 ob
1 ob
1 ob
1 ob
1 ob
1 ob
1 ob
1 ob
1 ob
1 ob
1 ob
1 ob
1 ob
1 ob
1 ob
4 ob < changes here
4 ob
4 ob
^C
$ ls -l ob
-rw-r--r-- 1 justin staff 1040 Aug 3 09:28 ob
I was expecting
rn, wouldn't that be the win?
Perhaps I've missed something, but if there was *never* a collision, you'd have
stumbled across a rather impressive lossless compression algorithm. I'm pretty
sure there's some Big Mathematical Rules (Shannon?) that me
e one,
anyway.
Yes, dedup is expensive, but much like using O_SYNC it's a conscious decision
here to take a performance hit in order to be sure about our data. Moving the
actual reads to an async thread as I suggested should improve things.
cheers,
--justin
> The point is that hash functions are many-to-one, and I think the point
> was that verify isn't really needed if the hash function is good
> enough.
This is a circular argument really, isn't it? Hash algorithms are never
perfect, but we're trying to build a perfect one?
It seems to me
>>You do realize that the age of the universe is only on the order of
>>around 10^18 seconds, do you? Even if you had a trillion CPUs each
>>chugging along at 3.0 GHz for all this time, the number of processor
>>cycles you will have executed cumulatively is only on the order 10^40,
>>still 37 order
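For scale, a rough birthday-bound estimate: the chance of any collision among
N random 256-bit hashes is about N^2 / 2^257, so even a pool with 2^48 unique
blocks is looking at roughly 2^-161, far below any plausible hardware error
rate.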
WD Green
WD6400AACS (all of which I have tested on another system with the WD read-test
utility). I know these drives get their share of ridicule (and occasional
praise/satisfaction), but I'd appreciate any thoughts on proceeding with the
mirror upgrade. [Backups are a check.]
Jus
So I can obviously see what zpools I have imported... but how do I see
pools that have been exported? Kind of like being able to see deported
volumes using "vxdisk -o alldgs list".
Justin
Eric,
Thanks for your input, this has been a great learning experience for me on the
workings of ZFS. I will use your suggestion and create the metadevice and run
raidz across 5 "devices" for approximately the same total storage.
I have a question about using mixed vdevs in the same zpool and what the
community opinion is on the matter. Here is my setup:
I have four 1TB drives and two 500GB drives. When I first set up ZFS I was
under the assumption that it does not really care much about how you add devices
to the pool and
I'm not sure how there is mistreatment, given that Solaris 10 is the
current production-grade product and OpenSolaris, for all intents and
purposes, a beta product that is currently under active development. I
was actually surprised when Sun provided a level of support for
OpenSolaris above
its cousin, 1064[E] in
SPARC machines for many years. In fact, I can't think of a
SPARC machine in the current product line that does not use
either 1068 or 1064 (I'm sure someone will correct me, though ;-)
-- richard
Might be worth having a look at the T1000 to s
If I am correct, how do you create a concatenated zpool?
You can't.
ZFS dynamically stripes across top-level vdevs. In whichever order you add them to the pool, they will effectively be treated as a
stripe.
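For instance (device names purely illustrative):
# zpool create tank mirror c1t0d0 c1t1d0
# zpool add tank mirror c1t2d0 c1t3d0
New writes are now spread dynamically across both mirrors; there is no
concat/linear mode to pick instead.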
regards,
--justin
alright, alright, but it's your fault. You left your workstation logged on; what was
I supposed to do? Not chime in?
grotty yank
Howdy Matt, thanks for the response.
But I dunno, man... I think I disagree... I'm kind of the opinion that
regardless of what happens to hardware, an OS should be able to work around it,
if it's possible. If a sysadmin wants to yank a hard drive out of a motherboard
(despite the risk of damage
Howdy Matt. Just to make it absolutely clear, I appreciate your response. I
would be quite lost if it weren't for all of the input.
> Unplugging a drive (actually pulling the cable out) does not simulate a
> drive failure, it simulates a drive getting unplugged, which is
> something the hardwar
Aye mate, I had the exact same problem, but where I work we pay some pretty
serious dollars for a direct 24/7 line to some of Sun's engineers, so I decided
to call them up. After spending some time with tech support, I never really got
the thing resolved, and I instead ended up going back to de
>I know this is too late to help you now, but... Doesn't "zpool status -v"
>do what you want?
Hi,
No indeed it does not. At the top it just says that resilvering is happening
and that's it. Let me guess... it's to do with the zfs version I'm using?
(I'
attach it again tonight, when it won't affect users.
To do that I need to know which disk is inconsistent, but zpool status does
not show me any info in that regard.
Is there any way to identify which disk is inconsistent?
Thanks
justin
Thank you for the feedback
Justin
I was intending to run zfs over the hardware raid, but seeing your
comment I'm inclined to go back on that. Given the huge advances in zfs
since that version, is installing the latest zfs version from source an option I
should consider at all? Or am I better off discarding zfs altogether?
j
All 3 boxes I had disk failures on are SunFire x4200 M2 running
Solaris 10 11/06 s10x_u3wos_10 X86 w the zfs it comes with, ie v3
r, the fs speed will not be hindered by the slower drive?
justin
Is there a SCSI and/or zfs timeout setting I can tune to tell it to flag a
drive as faulty and stop attempting to access it?
I recently replaced some drives with WD drives, set up with a 7s TLER, but
this has not helped the issue!
justin
> It depends: if you'd like to be able to restore single files, zfs send/recv
> would not be appropriate.
Why not?
With zfs you can easily view any file/dir from a snapshot (via the .zfs
dir). You can also copy that instance of the file into your running fs with
cp.
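Something like this (dataset and snapshot names invented for the example):
# cp /tank/home/.zfs/snapshot/monday/report.odt /tank/home/report.odt
No need to restore a whole stream just to get one file back.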
justin
> Actually, my ideal setup would be:
>Shuttle XPC w/ 2x PCI-e x8 or x16 lanes
>2x PCI-e eSATA cards (each with 4 eSATA port multiplier ports)
Mike, may I ask which eSATA controllers you used? I searched the Solaris HCL
and found very few listed there
Thanks
justin
it worked, but I
did that based on a Q&A I googled.
Will be switching to KbdInteractiveAuthentication as per your
recommendation, since that seems to be the correct method.
justin
For the record, I am told that I will need to wait for S10 u6 for zfs
delegation; I can't upgrade before that.
Meanwhile, I had to permit root login (with password auth disabled, obviously:
PasswordAuthentication no; PAMAuthenticationViaKBDInt no).
Also, the -R option does not work on this zfs version, so
I'm running Solaris 10 8/07 s10x_u4wos_12b X86
justin
nly
option?
---justin
d/receive.
Any clues on what I am missing, or a howto anywhere?
justin
es of dedup are countered.
Neither of these hold true for SSDs though, do they? Seeks are essentially
free, and the devices are not cheap.
cheers,
--justin
cannot offline c3t2d0s0: no valid replicas
Given I have 2 partitions in this raidz1, I expected this vdev to act
similarly to a mirror and allow me to offline it
thanks
justin
Image Sil3114
Thanks
justin
> Does anyone know a tool that can look over a dataset and give
> duplication statistics? I'm not looking for something incredibly
> efficient but I'd like to know how much it would actually benefit our
Check out the following blog:
http://blogs.sun.com/erickustarz/entry/how_dedupalicious_
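On builds that already carry the dedup code, zdb can also simulate it for you
(assuming your zdb has the -S option; check first):
# zdb -S tank
which walks the pool and prints a block histogram plus an estimated dedup
ratio without changing anything on disk.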
n't think of a better spend of their time
than a scheduled dedup.
> Perhaps deduplication is a response to an issue which should be solved
> elsewhere?
I don't think you can make this generalisation. For most people, yes, but not
everyone.
er did.
Any suggestions?
Thanks
justin
When set up with multi-pathing to dual redundant controllers, is layering
zfs on top of the 6140 of any benefit? AFAIK this array does have internal
redundant paths up to the disk connection.
justin
Quiet Mountain, Peaceful Mountain
[EMAIL PROTECTED]:/# zfs set mountpoint=/backup external/backup
back to working condition
justin
Problem solved.
I did a zfs mount followed by a zfs unmount, and then the zone booted fine.
Thanks to William from zones-discuss and Mark Musante, both from Sun.
The more I work with zfs, the more confidence I get in it.
justin
# zoneadm list -cp
0:global:running:/
-:anzan:installed:/zones/anzan
Is that of any help?
justin
zone anzan failed to verify
Why is that, when my pool is healthy?
justin
-Original Message-
From: Justin Vassallo [mailto:[EMAIL PROTECTED]
Sent: 23 June 2008 17:30
To: zfs-discuss@opensolaris.org
Subject: RE: [zfs-discuss] zfs mirror broken?
To add:
zpool status -xv posted earlier ends with:
errors: No known data errors
# fmadm faulty
STATE RESOURCE / UUID
--
degraded zfs://pool=external
cbc49380-8ebc-cf10-a8c5-fcaa0c984117
--
be better
idea...
2) physically replace disk1 with ORIGINAL disk2 and attempt a scrub
justin
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Miles Nordin
Sent: 21 June 2008 02:46
To: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] zfs mirror broken?
# zoneadm -z ZONE boot
could not verify fs /data: could not access /tank/data: No such file or
directory
zoneadm: zone ZONE failed to verify
justin
. Is there any risk of breaking zfs?
justin
Thommy,
If I read correctly, your post stated that the pools did not automount on
startup, not that they would go corrupt. It seems to me that Paulo is
actually experiencing a corrupt fs.
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Thommy M.
Sent: 02 Ju
Hi,
My swap is on raidz1. df -k and swap -l are showing almost no usage of
swap, while zfs list and zpool list are showing me 96% capacity. Which
should I believe?
Justin
# df -hk
Filesystem             size   used  avail capacity  Mounted on
/dev/dsk/c3t0d0s1       14G   4.0G    10G
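If the swap device is a zvol, what zpool list is counting is most likely the
zvol's reservation rather than pages actually in use. Something along these
lines should show it (pool/dataset names are a guess here):
# swap -l
# zfs get volsize,reservation pool/swap
(on newer bits the space may sit in refreservation instead). swap -l reports
real usage; the zfs properties show the space set aside for the volume up front.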
ed and let the raid take care of recovering the data.
Even if there were no further technical reasons, this feature alone is a
great benefit for using these SATA drives in the enterprise
justin
Hi,
Is it possible to mirror a vdev within a zpool?
My aim is to replace a current raidz2 vdev with a mirror. I was wondering if
it is possible to create a mirrored vdev, use it to mirror my current vdev,
then when resilvering completes remove the old vdev
justin
c12t0d0p0 ONLINE 0 0 0
c13t0d0p0 ONLINE 0 0 0
errors: No known data errors
thanks
justin
<10%. Should I
think that I have an IO bottleneck, or would this fs locking be considered
weird zfs behavior?
Thanks
justin
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Vincent Fox
Sent: 06 March 2008 18:53
To: zfs-discuss@opensolaris.org
Subject: Re: [
help would be greatly appreciated.
justin
# zpool status external
pool: external
state: ONLINE
scrub: scrub in progress, 0.01% done, 161h29m to go
config:
NAME STATE READ WRITE CKSUM
external ONLINE 0 0 0
raidz2
Fixed: what was needed was an export, followed by an import -f.
From: Justin Vassallo [mailto:[EMAIL PROTECTED]
Sent: 29 February 2008 15:13
To: zfs-discuss@opensolaris.org
Subject: zfs pool unavailable!
Hello,
I have a zfs pool on 3 external disks, connected via usb. All 3 disks are
?
Thanks
justin
# rmformat
Looking for devices...
1. Volmgt Node: /vol/dev/aliases/cdrom0
Logical Node: /dev/rdsk/c0t0d0s2
Physical Node: /[EMAIL PROTECTED],0/pci108e,[EMAIL PROTECTED]/[EMAIL
PROTECTED]/[EMAIL PROTECTED],0
Connected Device: AMI Virtual CDROM
> UFS == Ultimate File System
> ZFS == Zettabyte File System
it's a nit, but..
UFS != Ultimate File System
ZFS != Zettabyte File System
cheers,
--justin
Hello,
1) If I create a raidz2 pool on some disks, start to use it, and then the disks'
controllers change, what will happen to my zpool? Will it be lost, or is
there some disk tagging which allows zfs to recognise the disks?
2) If I create a raidz2 on 3 HDs, do I have any resilience? If any one
Have you looked at AVS? (http://opensolaris.org/os/project/avs/)
Thanks for the response. I don't know enough about the semantics of the device
IDs; I hope it does not change, and that maybe zfs will see that the lun has
grown. Seeing that you can use a file system or file as a vdev (and can't
they change sizes?), you'd figure it could do the same with
I have searched high and low and cannot find the answer. I read about how zfs
uses a Device ID for identification, usually provided by the firmware of the
device. So if a controller presents an (array) lun with a unique device ID, what
would happen if I onlined the pool and suddenly that lun was
Simple test - mkfile 8gb now and see where the data goes... :)
Unless you've got compression=on, in which case you won't see anything!
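If compression is on, something along these lines gives data that won't just
compress away (size and path invented for the example):
# dd if=/dev/urandom of=/tank/testfile bs=1024k count=8192
# zfs get compressratio tank
and the allocated space should then show up in zfs list / zpool list.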
cheers,
--justin
ou can force a flush at a finer granularity than that. Anyone?
regards,
--justin
Is there a more elegant approach that tells rmvolmgr to leave certain
devices alone on a per disk basis?
I was expecting there to be something in rmmount.conf to allow a specific device
or pattern to be excluded but there appears to be nothing. Maybe this is an RFE?
from your automounter or change your
mountpoint.
cheers,
--justin
I have about a dozen two-disk systems that were all set up the same way using a
combination of SVM and ZFS.
s0 = / SVM mirror
s1 = swap
s3 = /tmp
s4 = metadb
s5 = zfs mirror
The system does boot, but once it gets to zfs, zfs fails and all
ave changed in a very large file rather than the whole
file regardless. If 'zfs send' doesn't do something we need to fix it rather
than avoid it, IMO.
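For the record, an incremental send between two snapshots is block-based, e.g.
(names invented):
# zfs send -i tank/data@mon tank/data@tue | ssh backuphost zfs recv -d backup
so only the blocks that changed inside the big file should go over the wire,
not the whole file.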
cheers,
--justin
Is this a bug?
                 capacity     operations    bandwidth
pool           used  avail   read  write   read  write
------------  -----  -----  -----  -----  -----  -----
zfs           14.2G  1.35T      0     62      0  5.46M
  raidz2      14.2G  1.35T      0     62      0  5.46M
    c0d0          -
Already making use of it, thank you!
http://www.justinconover.com/blog/?p=17
I took 6 x 250GB disks and tried raidz2/raidz/none
# zpool create zfs raidz2 c0d0 c1d0 c2d0 c3d0 c7d0 c8d0
df -h zfs
Filesystem             size   used  avail capacity  Mounted on
zfs                    915G    49K   915G
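That figure is roughly what you'd expect for the raidz2 case: two of the six
disks go to parity, leaving about 4 x 250 GB = 1000 GB, i.e. around 930 GiB,
and 915G in df is in that ballpark once a little overhead is taken off.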