I am posting this once again as my previous post went into the middle of the
thread and may go unnoticed.
Ed,
Thank you for sharing the calculations. In lay terms, for Sha256, how many
blocks of data would be needed to have one collision?
Assuming each block is 4K in size, we probably can calculate the final data
size beyond which the collision may occur. This would enable us to make the
following
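For what it's worth, the back-of-envelope estimate the thread is after can be sketched with the standard birthday bound. This is a hedged approximation, assuming uniformly random 256-bit digests and the 4K block size from the question:

```python
import math

HASH_BITS = 256
BLOCK = 4096  # 4 KiB per block, as assumed in the question

# Birthday bound: P(collision) ~= n^2 / 2^(b+1) for n random b-bit hashes.
# Solving for a 50% collision probability gives n = sqrt(2^(b+1) * ln 2).
n = math.sqrt(2 ** (HASH_BITS + 1) * math.log(2))   # roughly 4e38 blocks
data_bytes = n * BLOCK

print(f"blocks for ~50% collision chance: {n:.2e}")
print(f"total data: about 2^{math.log2(data_bytes):.0f} bytes")
```

In other words, on the order of 2^128 four-kilobyte blocks, around 2^140 bytes of data, before a collision becomes likely; far beyond any real pool.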
Hello,
We are building a zfs-based storage system with generic but high-quality
components. We would like to test the new system under various loads. If we
find that the iometer throughput starts to drop under certain loads, I am
wondering what performance counters we should look for to ide
Thank you all for your help. I am the OP.
I haven't looked at the link that talks about the probability of collision.
Intuitively, I still wonder how the chances of collision can be so low. We are
reducing a 4K block to just 256 bits. If the chances of collision are so low,
*theoretically* it i
Folks,
I have been told that the checksum value returned by Sha256 is almost
guaranteed to be unique. In fact, if Sha256 fails in some cases, we have a
bigger problem, such as memory corruption. Essentially, adding verification
to sha256 is overkill.
Perhaps (Sha256+NoVerification) would
Hi,
Thank you for your help.
I actually had the script working. However, I just wanted to make sure that
spaces are not permitted within the field value itself. Otherwise, the regular
expression would break.
Regards,
Peter
--
This message posted from opensolaris.org
Folks,
Command "zpool get all poolName" does not provide any option to generate
parsable output. The returned output contains 4 fields - name, property, value
and source. These fields seem to be separated by spaces. I am wondering if it
is safe to assume that there are no spaces in the field v
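Pending a definitive answer on embedded spaces, a defensive parse can be sketched like this (the sample output is invented; if your zpool build supports the scripted mode `zpool get -H`, its tab-separated output avoids the question entirely):

```shell
sample_output() {   # stand-in for: zpool get all mypool
cat <<'EOF'
NAME  PROPERTY  VALUE  SOURCE
tank  size      68G    -
tank  capacity  0%     local
EOF
}
# Skip the header line, then split on whitespace. This assumes the VALUE
# field itself never contains spaces - exactly the OP's open question.
sample_output | awk 'NR > 1 { printf "%s=%s\n", $2, $3 }'
```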
Thank you for your help.
Regards,
Peter
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Folks,
I am trying to understand if there is a way to increase the capacity of a
root-vdev. After reading zpool man pages, the following is what I understand:
1. If you add a new disk by using "zpool add," this disk gets added as a new
root-vdev. The existing root-vdevs are not changed.
2. You
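The usual suggestion in these threads can be sketched as follows (device names are made up; this assumes a mirror vdev and a build that has the autoexpand pool property):

```shell
# You cannot add columns to an existing raidz/mirror vdev, but you can
# replace each member with a larger disk; once every member is larger,
# the root-vdev can grow to the new size.
zpool set autoexpand=on tank          # grow automatically after replacement
zpool replace tank c1t0d0 c2t0d0      # swap the first small disk
# ...wait for the resilver to finish (watch `zpool status tank`), then:
zpool replace tank c1t1d0 c2t1d0      # swap the second
```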
Thank you all for your help. Looks like "beadm" is the utility I was looking
for.
When I run "beadm list," it gives me the complete list and indicates which one
is currently active. It doesn't tell me which one is the "default" boot. Can I
assume that whatever is "active" is also the "default?"
Folks,
My understanding is that there is a way to create a zfs "checkpoint" before
doing any system upgrade or installing a new software. If there is a problem,
one can simply rollback to the stable checkpoint.
I am familiar with snapshots and clones. However, I am not clear on how to
manage
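A minimal sketch of the snapshot-based version of this (dataset names are assumptions):

```shell
# A hand-rolled "checkpoint" using a snapshot.
zfs snapshot rpool/export@pre-upgrade   # cheap, instant checkpoint
# ... perform the upgrade or install the new software ...
# If something breaks, discard everything written since the snapshot:
zfs rollback rpool/export@pre-upgrade
# For the OS itself, boot environments (beadm create / beadm activate on
# OpenSolaris, lucreate on Solaris 10) wrap the same snapshot+clone idea.
```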
Folks,
From zfs documentation, it appears that a "vdev" can be built from more vdevs.
That is, a raidz vdev can be built across a bunch of mirrored vdevs, and a
mirror can be built across a few raidz vdevs.
Is my understanding correct? Also, is there a limit on the depth of a vdev?
Thank
Hi Neil,
If the file offsets do not match, the chances that the checksums would match,
especially with sha256, are almost 0.
Maybe I am missing something. Let's say I have a file that contains 11 letters
- ABCDEFGHIJK. Let's say the block size is 5.
For the first file, the block contents are "ABCDE
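The alignment point can be illustrated with a toy script (a sketch, not ZFS itself: it just hashes fixed-size blocks the way dedup would see them):

```python
import hashlib

def block_hashes(data: bytes, bs: int):
    """SHA-256 of each fixed-size block, the way dedup sees a file."""
    return [hashlib.sha256(data[i:i + bs]).hexdigest()
            for i in range(0, len(data), bs)]

a = block_hashes(b"ABCDEFGHIJK", 5)   # blocks: ABCDE  FGHIJ  K
b = block_hashes(b"XABCDEFGHIJK", 5)  # same bytes shifted by one
print(len(set(a) & set(b)))           # 0 - no block-level match survives
```

Identical content at a different offset produces entirely different block hashes, so none of it dedups; only blocks that line up on the same boundaries with identical contents share a checksum.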
Folks,
Let's say I have a volume being shared over iSCSI. The dedup has been turned on.
Let's say I copy the same file twice under different names at the initiator
end. Let's say each file ends up taking 5 blocks.
For dedupe to work, each block for a file must match the corresponding block
fro
Folks,
If I have 20 disks to build a raidz3 pool, do I create one big raidz vdev or do
I create multiple raidz3 vdevs? Is there any advantage of having multiple
raidz3 vdevs in a single pool?
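For illustration, the multiple-vdev layout would look something like this (device names invented). The usual trade-off: more disks spent on parity, in exchange for smaller resilver domains and more top-level vdevs for ZFS to stripe writes across.

```shell
# One pool, two 10-disk raidz3 vdevs instead of a single 20-disk one.
zpool create tank \
  raidz3 c1t0d0  c1t1d0  c1t2d0  c1t3d0  c1t4d0 \
         c1t5d0  c1t6d0  c1t7d0  c1t8d0  c1t9d0 \
  raidz3 c1t10d0 c1t11d0 c1t12d0 c1t13d0 c1t14d0 \
         c1t15d0 c1t16d0 c1t17d0 c1t18d0 c1t19d0
```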
Thank you in advance for your help.
Regards,
Peter
Folks,
As I understand, the hash generated by sha256 is "almost" guaranteed not to
collide. I am thinking it is okay to turn off "verify" property on the zpool.
However, if there is indeed a collision, we lose data. "Scrub" cannot recover
such lost data.
I am wondering in real life when is it
Freddie,
Thank you very much for your help.
Regards,
Peter
Folks,
Command "zpool status" reports disk status that includes read errors, write
errors, and checksum errors. These values have always been 0 in our test
environment. Is there a tool out there that can inject such errors? At the
very least, we should be able to write to the disk directly and
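One common do-it-yourself approach, sketched with made-up names. This is destructive, so only ever on a scratch pool:

```shell
# Deliberately damage one side of a mirror to exercise the CKSUM counters.
# Assumes a throwaway pool "testpool" with mirror member c8t1d0.
# seek=1 (with bs=1024k) skips the first 1 MB, past the two 256K front
# vdev labels, so the pool itself stays importable.
dd if=/dev/urandom of=/dev/rdsk/c8t1d0s0 bs=1024k seek=1 count=64 conv=notrunc
zpool scrub testpool          # scrub re-reads all data and trips the errors
zpool status testpool         # the CKSUM column should now be non-zero
```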
Folks,
One of the zpool properties that is reported is "dedupditto." However, there is
no documentation available, either in man pages or anywhere else on the
Internet. What exactly is this property?
Thank you in advance for your help.
Regards,
Peter
Folks,
I am a bit confused about the dedup relationship between a filesystem and its
pool.
The dedup property is set on a filesystem, not on the pool.
However, the dedup ratio is reported on the pool and not on the filesystem.
Why is it this way?
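The usual explanation is that the dedup table is a pool-wide structure, so blocks can dedup across filesystems and only a pool-wide ratio is meaningful. A quick sketch of where each knob lives (pool and dataset names assumed):

```shell
# dedup is switched on per-dataset, but the dedup table (DDT) is shared
# by the whole pool, so the resulting ratio is reported at the pool level.
zfs set dedup=on tank/a
zfs get dedup tank/a            # the per-dataset knob
zpool get dedupratio tank       # the pool-wide result
```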
Thank you in advance for your help.
Regards,
P
Folks,
Here is the list of ZFS enhancements as mentioned for the latest Solaris 10
update:
* ZFS device replacement enhancements - namely autoexpand
* some changes to the zpool list command
* Holding ZFS snapshots
* Triple parity RAID-Z (raidz3)
* The logbias property
*
Neil,
Thank you for your help.
However, I don't see anything about l2cache under the "Cache devices" section
of the man pages.
To be clear, there are two different vdev types defined in zfs source code -
"cache" and "l2cache." I am familiar with "cache" devices. I am curious about
"l2cache" devices.
Regards,
Pe
Folks,
While going through zpool source code, I see a configuration option called
l2cache. What is this option for? It doesn't seem to be documented.
Thank you in advance for your help.
Regards,
Peter
Thank you all for your help.
Can properties be set on file systems as well as pools? When I try the "zpool
set" command with a local property, I get an error: "invalid property."
Regards,
Peter
Thank you all for your help.
It appears it is better to use "on" instead of "sha256." This way, you are
letting zfs decide the best option.
Regards,
Peter
Folks,
One of the articles on the net says that the following two commands are exactly
the same:
# zfs set dedup=on tank
# zfs set dedup=sha256 tank
Essentially, "on" is just an alias for "sha256," and "verify" is just an
alias for "sha256,verify."
Can someone please confirm if this is t
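One hedged way to probe this on your own build is to set each form and compare what `zfs get` reports back (dataset name assumed):

```shell
zfs set dedup=on tank
zfs get -o value dedup tank        # prints "on"
zfs set dedup=sha256 tank
zfs get -o value dedup tank        # whether this also reports "on" tells
                                   # you if the two are stored identically
```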
Hi James,
Appreciate your help.
Regards,
Peter
Folks,
When I create a zpool, I get to specify the vdev type - mirror, raidz1, raidz2,
etc. How do I get back this information for an existing pool? The status
command does not reveal this information:
# zpool status mypool
When this command is run, I can see the disks in use. However, I don't
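On many builds the vdev type does appear in the status output's config tree, as a grouping line such as "raidz2-0" or "mirror-0" above the member disks; a plain striped pool simply lists disks with no group line. A sketch that pulls those lines out (the sample output is invented):

```shell
sample_status() {        # stand-in for: zpool status mypool
cat <<'EOF'
  pool: mypool
 state: ONLINE
config:
        NAME        STATE     READ WRITE CKSUM
        mypool      ONLINE       0     0     0
          raidz2-0  ONLINE       0     0     0
            c1t0d0  ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0
EOF
}
# Print only the vdev grouping lines (mirror-N / raidzN-N).
sample_status | awk '$1 ~ /^(mirror|raidz[123]?)-[0-9]+$/ { print $1 }'
```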
Thank you, Eric. Your explanation is very clear.
Regards,
Peter
I am running ZFS file system version 5 on Nexenta.
Peter
Thank you all for your help. It appears my understanding of parity was rather
limited. I kept on thinking about parity in memory where the extra bit would be
used to ensure that the total of all 9 bits is always even.
In the case of zfs, the above type of checking is actually moved into the checksum.
Hi Eric,
Thank you for your help. At least one part is clear now.
I still am confused about how the system is still functional after one disk
fails.
Consider my earlier example of a 3-disk zpool configured for raidz-1. To keep
it simple, let's not consider block sizes.
Let's say I send a write
Hi,
I am working through the fundamentals of raidz. From the man pages,
a raidz configuration of P disks and N parity provides (P-N)*X storage space
where X is the size of the disk. For example, if I have 3 disks of 10G each and
I configure it with raidz1, I will have 20G of usable
Thank you all for your help. It turns out that I just need to ignore the ones
that have their mount points either not defined or are marked as "legacy."
It is good to learn about the history command. It could come in handy.
Regards,
Peter
Folks,
In my application, I need to present user-created filesystems. For my test, I
created a zfs pool called mypool and two file systems called cifs1 and cifs2.
However, when I run "zfs list," I see a lot more entries:
# zfs list
NAME USED AVAIL REFER MOUNTPOINT
mypool
Muchas gracias, Brad.
Regards,
Peter
Folks,
I need to store some application-specific settings for a ZFS filesystem. Is it
possible to extend a ZFS filesystem and add additional properties?
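ZFS user properties (any property name containing a colon) are the usual vehicle for exactly this; a sketch with invented names:

```shell
# User properties are stored and inherited like native properties, but
# their meaning is entirely up to your application.
zfs set com.myapp:backup-policy=nightly mypool/data
zfs get com.myapp:backup-policy mypool/data
zfs inherit com.myapp:backup-policy mypool/data   # remove it again
```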
Thank you in advance for your help.
Regards,
Peter
> Btw, if you want a commercially supported and maintained product, have
> you looked at NexentaStor? Regardless of what happens with OpenSolaris,
> we aren't going anywhere. (Full disclosure: I'm a Nexenta Systems
> employee. :-)
>
> -- Garrett
Hi Garrett,
I would like to know why you think Nex
Folks,
I now know that ZFS is capable of preserving AD account SIDs. I have verified
the scenario with CIFS integration.
I am now wondering if it is possible to achieve a similar AD integration over
WebDAV. Is it possible to retain security permissions on files and folders over
WebDAV?
Thank
Folks,
This is probably a very naive question.
Is it possible to set up zfs for bi-directional synchronization of data across two
locations? I am thinking this is almost impossible. Consider two files A and B
at two different sites. There are three possible cases that require
synchronization:
Folks,
I would appreciate it if you can create a separate thread for Mac Mini.
Back to the original subject.
NetApp has deep pockets. A few companies have already backed out of zfs as they
cannot afford to go through a lawsuit. I am in a stealth startup company and we
rely on zfs for our appli
Folks,
As you may have heard, NetApp has a lawsuit against Sun in 2007 (and now
carried over to Oracle) for patent infringement with the zfs file system. Now,
NetApp is taking a stronger stance and threatening zfs storage suppliers to
stop selling zfs-based storage.
http://www.theregister.co.u
Thank you all, especially Edward, for the enlightenment.
Regards,
Peter
Folks,
While going through a quick tutorial on zfs, I came across a way to create zfs
filesystem within a filesystem. For example:
# zfs create mytest/peter
where mytest is a zpool filesystem.
When done this way, the new filesystem has the mount point /mytest/peter.
When does it make sense
Awesome. Thank you, Cindy.
Regards,
Peter
Folks,
I am learning more about zfs storage. It appears a zfs pool can be created on a
raw disk. There is no need to create any partitions on the disk. Does
this mean there is no need to run "format" on a raw disk?
I have added a new disk to my system. It shows up as /dev/rdsk/c8t1d0s0. Do
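The whole-disk case can be sketched with the device name from the post (hedged: when given a bare disk, ZFS writes an EFI label itself, so no format step should be needed):

```shell
# Give zpool the bare disk name, with no slice suffix such as s0.
zpool create mypool c8t1d0
```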