unlink(1M)?
cheers,
--justin
From: Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
To: Sami Tuominen; zfs-discuss@opensolaris.org
Sent: Monday, 26 November 2012, 14:57
Subject: Re: [zfs-discuss] Directory is not accessible
> From: zfs-discu
> would be very annoying if ZFS barfed on a technicality and I had to reinstall
> the whole OS because of a kernel panic and an unbootable system.
Is this a known scenario with ZFS then? I can't recall hearing of this
happening.
I've seen plenty of UFS filesystems dying with "panic: freeing free ..."
> has only one drive. If ZFS detects something bad it might kernel panic and
> lose the whole system right?
What do you mean by "lose the whole system"? A panic is not a bad thing, and
also does not imply that the machine will not reboot successfully. It certainly
doesn't guarantee your OS will be lost.
> I think for the cleanness of the experiment, you should also include
"sync" after the dd's, to actually commit your file to the pool.
OK that 'fixes' it:
finsdb137@root> dd if=/dev/random of=ob bs=128k count=1 && sync && while true
> do
> ls -s ob
> sleep 1
> done
0+1 records in
0+1 records out
> Can you check whether this happens from /dev/urandom as well?
It does:
finsdb137@root> dd if=/dev/urandom of=oub bs=128k count=1 && while true
> do
> ls -s oub
> sleep 1
> done
0+1 records in
0+1 records out
1 oub
1 oub
1 oub
1 oub
1 oub
4 oub
4 oub
4 oub
4 oub
4 oub
While this isn't causing me any problems, I'm curious as to why this is
happening...:
$ dd if=/dev/random of=ob bs=128k count=1 && while true
> do
> ls -s ob
> sleep 1
> done
0+1 records in
0+1 records out
1 ob
1 ob
1 ob
1 ob
1 ob
1 ob
1 ob
1 ob
1 ob
1 ob
1 ob
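(One way to see when the buffered write is actually pushed out to disk is to
watch the pool from another terminal while the ls loop runs; a minimal sketch,
assuming the pool here is called "tank" -- substitute your own pool name:

$ zpool iostat tank 1

When the open transaction group commits you see a burst of write activity, and
that is the point at which ls -s starts reporting the allocated size.)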
> Since there is a finite number of bit patterns per block, have you tried to
> just calculate the SHA-256 or SHA-512 for every possible bit pattern to see
> if there is ever a collision? If you found an algorithm that produced no
> collisions for any possible block bit pattern, wouldn't that make verify unnecessary?
> This assumes you have low volumes of deduplicated data. As your dedup
> ratio grows, so does the performance hit from dedup=verify. At, say,
> dedupratio=10.0x, on average, every write results in 10 reads.
Well you can't make an omelette without breaking eggs! Not a very nice one,
anyway.
Yes
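For what it's worth, verify can be combined with a stronger checksum when
turning dedup on; a minimal sketch, assuming a pool called "tank" (substitute
your own pool name):

# zfs set dedup=sha256,verify tank

With that setting a block is only treated as a duplicate if the checksums match
*and* a byte-for-byte comparison of the two blocks succeeds, at the cost of the
extra reads discussed above.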
> The point is that hash functions are many to one and I think the point
> was about that verify wasn't really needed if the hash function is good
> enough.
This is a circular argument really, isn't it? Hash algorithms are never
perfect, but we're trying to build a perfect one?
It seems to me
>>You do realize that the age of the universe is only on the order of
>>10^18 seconds, don't you? Even if you had a trillion CPUs each
>>chugging along at 3.0 GHz for all this time, the number of processor
>>cycles you will have executed cumulatively is only on the order of 10^40,
>>still 37 orders of magnitude short of the roughly 10^77 (2^256) possible
>>SHA-256 values.
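(For the arithmetic: 10^12 CPUs x 3x10^9 cycles/s x 10^18 s is roughly 3x10^39,
i.e. on the order of 10^40 cycles, against 2^256, which is about 1.2x10^77.)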
Richard Elling wrote:
Miles Nordin wrote:
"ave" == Andre van Eyssen writes:
"et" == Erik Trimble writes:
"ea" == Erik Ableson writes:
"edm" == "Eric D. Mudama" writes:
ave> The LSI SAS controllers with SATA ports work nicely with
ave> SPARC.
I think what you mean is ``s
But, if mypool were a concatenation, things would get written onto c0t1d0
first, and if any one of the subsequent disks were to fail, I should be able to
recover everything off of mypool, as long as I had not filled up c0t1d0, since
things were written sequentially rather than across all of the disks.
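In practice ZFS dynamically stripes writes across all of the top-level vdevs
rather than filling one disk before moving on to the next; an easy way to see
where writes actually land (a sketch, assuming the pool above is called
mypool):

$ zpool iostat -v mypool 5

which breaks the I/O statistics down per vdev while data is being written.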
> with other Word files. You will thus end up seeking all over the disk
> to read _most_ Word files. Which really sucks.
> very limited, constrained usage. Disk is just so cheap, that you
> _really_ have to have an enormous amount of dup before the performance
> penalties of dedup are co
> Does anyone know a tool that can look over a dataset and give
> duplication statistics? I'm not looking for something incredibly
> efficient but I'd like to know how much it would actually benefit our
Check out the following blog:
http://blogs.sun.com/erickustarz/entry/how_dedupalicious_
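If you just want a rough estimate, zdb on builds with dedup support can also
simulate dedup against an existing pool; a sketch, assuming a pool named
"tank":

# zdb -S tank

which walks the pool, builds an in-core dedup table and prints a histogram
along with the overall dedup ratio you could expect.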
> Raw storage space is cheap. Managing the data is what is expensive.
Not for my customer. Internal accounting means that the storage team gets paid
for each allocated GB on a monthly basis. They have
stacks of IO bandwidth and CPU cycles to spare outside of their daily busy
period. I can't t
> UFS == Ultimate File System
> ZFS == Zettabyte File System
it's a nit, but..
UFS != Ultimate File System
ZFS != Zettabyte File System
cheers,
--justin
Simple test - mkfile 8gb now and see where the data goes... :)
Unless you've got compression=on, in which case you won't see anything!
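e.g. a quick sketch, assuming a dataset tank mounted at /tank with compression
enabled (adjust the names to your own):

# mkfile 8g /tank/testfile
# du -k /tank/testfile
# zfs get compressratio tank

mkfile writes zeros, so with compression on the blocks compress down to next to
nothing and du reports almost no space allocated.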
cheers,
--justin
zpool list doesn't reflect pool usage stats instantly. Why?
This is no different to how UFS behaves.
If you rm a file, this uses the system call unlink(2) to do the work, which is
asynchronous.
In other words, unlink(2) almost immediately returns a successful return code to
rm (which can then exit), while the blocks are actually freed later in the
background, so zpool list only catches up a few moments afterwards.
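You can watch this happen; a small sketch, assuming a pool named "tank" holding
a large file:

# rm /tank/bigfile
# while true; do zfs list -o name,used tank; sleep 1; done

The used figure (and zpool list) typically only drops a few seconds later, once
the transaction group containing the free has been committed to disk.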
Is there a more elegant approach that tells rmvolmgr to leave certain
devices alone on a per disk basis?
I was expecting there to be something in rmmount.conf to allow a specific device
or pattern to be excluded but there appears to be nothing. Maybe this is an RFE?
Matt,
Can't see anything wrong with that procedure. However, could the problem be that
you're trying to mount on /home which is usually used by the automounter?
e.g.
$ grep home /etc/auto_master
/home auto_home -nobrowse
Maybe you need to deconfigure this from your automounter.
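e.g. a sketch, assuming you no longer need the auto_home map: comment out the
/home entry in /etc/auto_master, restart autofs, then point your dataset at
/home:

# svcadm restart svc:/system/filesystem/autofs
# zfs set mountpoint=/home tank/home

(substitute your own dataset name for tank/home).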
> Why aren't you using amanda or something else that uses
> tar as the means by which you do a backup?
Using something like tar to take a backup forgoes the ability to do things like
the clever incremental backups that ZFS can achieve though; e.g. only backing
up the few blocks that have changed in between snapshots.
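e.g. a sketch of the snapshot-based equivalent, assuming a dataset tank/data,
a backup host called backuphost, and that an initial full send of @mon has
already been received there:

# zfs snapshot tank/data@tue
# zfs send -i tank/data@mon tank/data@tue | ssh backuphost zfs receive backup/data

Only the blocks that differ between the two snapshots go over the wire, whereas
tar has to at least stat every file just to work out what has changed.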