Richard,
thanks for the heads-up. I found some material here that sheds a bit
more light on it:
http://en.wikipedia.org/wiki/ZFS
http://all-unix.blogspot.com/2007/04/transaction-file-system-and-cow.html
Regards,
heinz
Richard Elling wrote:
On Feb 15, 2010, at 8:43 PM, heinz zerbes wrote:
>
> Gents,
>
> We want to understand the mechanism of zfs a bit better.
>
> Q: what is the design/algorithm of zfs in terms of reclaiming unused blocks?
> Q: what criteria are there for zfs to start reclaiming blocks?
The answer to these questions
Gents,
We want to understand the mechanism of zfs a bit better.
Q: what is the design/algorithm of zfs in terms of reclaiming unused blocks?
Q: what criteria are there for zfs to start reclaiming blocks?
The issue at hand is an LDOM or zone running on a virtual (thin-provisioned)
disk on an NFS serv
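A quick way to see the effect being asked about is to compare the space
the pool reports inside the guest with what the thin-provisioned backing
file actually occupies on the NFS server. A minimal sketch, assuming the
pool is called tank and the backing vdisk file is
/export/ldoms/guest0/disk0.img (both names are illustrative):

# inside the LDOM/zone: space as ZFS sees it
zpool list tank
# on the NFS server: blocks actually allocated to the backing file
du -h /export/ldoms/guest0/disk0.img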
On Nov 25, 2009, at 11:55 AM, andrew.r...@sun.com wrote:
I am trying to understand the ARC's behavior based on different
permutations of (a)sync Reads and (a)sync Writes.
thank you, in advance
o does the data for a *sync-write* *ever* go into the ARC?
always
eg, my understanding is that
I am trying to understand the ARC's behavior based on different
permutations of (a)sync Reads and (a)sync Writes.
thank you, in advance
o does the data for a *sync-write* *ever* go into the ARC?
eg, my understanding is that the data goes to the ZIL (and
the SLOG, if present), but how does i
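One rough way to observe this is to watch the ARC statistics while a
synchronous-write workload runs; if sync-written data is cached, the ARC
size grows along with the writes. A minimal sketch using the arcstats
kstat (statistic name assumed to be the usual zfs:0:arcstats:size):

# sample the ARC size once a second while the workload runs elsewhere
while true; do
        kstat -p zfs:0:arcstats:size
        sleep 1
done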
Vdbench IS a Sun tool, and it is in the process of being open sourced.
You can find the latest GA version at
https://cds.sun.com/is-bin/INTERSHOP.enfinity/WFS/CDS-CDS_SMI-Site/en_US/-/USD/[EMAIL PROTECTED]
Henk.
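For anyone comparing it with vxbench: a vdbench run is driven by a small
parameter file. A minimal sketch, with a purely illustrative device path
and workload numbers (see the documentation bundled with the download):

# define one raw device, a 70% read random 8k workload, and a 60s run
cat > random_rw.parm <<'EOF'
sd=sd1,lun=/dev/rdsk/c1t0d0s0
wd=wd1,sd=sd1,xfersize=8k,rdpct=70,seekpct=100
rd=run1,wd=wd1,iorate=max,elapsed=60,interval=5
EOF
./vdbench -f random_rw.parm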
Thommy M. wrote:
> Richard Gilmore wrote:
>
>> Hello Zfs Community,
>>
>> I am trying to locate if zfs has a compatible tool to Veritas's
>> vxbench? Any ideas? I see a tool called vdbench that looks close, but
>> it is not a Sun tool, does Sun recommend something to customers moving
>> fro
Hello Zfs Community,
I am trying to find out whether ZFS has a tool comparable to Veritas's
vxbench. Any ideas? I see a tool called vdbench that looks close, but
it is not a Sun tool. Does Sun recommend something to customers moving
from Veritas to ZFS who like vxbench and its capabilities?
Thanks,
> 1. Can I create ZFS volumes on a ZFS file system from one server,
> attach the file system read-write to a different server (to load data),
> then detach the file system from that server and attach the file system
> read-only to multiple other servers?
I don't think so today. Th
On Thu, 14 Dec 2006, Dave Burleson wrote:
> 1. Can I create ZFS volumes on a ZFS file system from one server,
> attach the file system read-write to a different server (to load data),
> then detach the file system from that server and attach the file system
> read-only to multiple oth
I will have a file system in a SAN using ZFS. Can someone answer my
questions?
1. Can I create ZFS volumes on a ZFS file system from one server,
attach the file system read-write to a different server (to load data),
then detach the file system from that server and attach the file syst
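What does work is moving the pool between hosts one at a time with
export and import; a pool cannot be imported on more than one host at
once. A minimal sketch (pool name is illustrative):

# on the server that loaded the data
zpool export tank
# later, on exactly one other server at a time
zpool import tank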
Peter,
Are you sure your customer is not hitting this:
6456939 sd_send_scsi_SYNCHRONIZE_CACHE_biodone() can issue TUR which
calls biowait() and deadlock/hangs host
I have a fix that you could have your customer try.
Thanks,
George
Peter Wilk wrote:
IHAC that is asking the following. any tho
The current behavior depends on the implementation of the driver and
support for hotplug events. When a drive is yanked, one of two things
can happen:
- I/Os will fail, and any attempt to re-open the device will result in
failure.
- I/Os will fail, but the device can continue to be opened by
IHAC that is asking the following. Any thoughts would be appreciated.
Take two drives, and use zpool to make a mirror.
Remove a drive - and the server HANGS. Power off and reboot the server,
and everything comes up cleanly.
Take the same two drives (still Solaris 10). Install Veritas Volume
Manager (4.1).
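For reference, the reproduction described above is just a plain two-way
mirror; a sketch with illustrative device names:

# create the mirrored pool, then physically pull one of the two drives
zpool create tank mirror c1t0d0 c1t1d0
zpool status tank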
On Fri, Jul 28, 2006 at 10:52:50AM -0400, John Cecere wrote:
> Can someone explain to me what the 'volinit' and 'volfini' options to zfs
> do ? It's not obvious from the source code and these options are
> undocumented.
These are unstable private interfaces which create and destroy the
/dev/zvol
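In other words, usage amounts to no more than the following; a sketch
only, since these are private, unstable interfaces and not something to
build on:

# create the /dev/zvol/* entries for existing volumes
zfs volinit
# tear them down again
zfs volfini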
Can someone explain to me what the 'volinit' and 'volfini' options to zfs do? It's not obvious from the source code and these
options are undocumented.
Thanks,
John
--
John Cecere
Sun Microsystems
732-302-3922 / [EMAIL PROTECTED]
zfs depends on ldi_get_size(), which depends on the device being
accessed exporting one of the properties below. i guess the
devices generated by IBMsdd and/or EMCpower don't
generate these properties.
ed
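A quick way to check what a given device node actually exports, without
writing any DTrace, is to dump its properties; the device path below is
illustrative:

# list the properties this device node exports
prtconf -v /dev/dsk/vpath1c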
On Wed, Jul 26, 2006 at 01:53:31PM -0700, Eric Schrock wrote:
> On Wed, Jul 26, 200
Does format show these drives to be available and containing a non-zero
size?
Eric Schrock wrote:
On Wed, Jul 26, 2006 at 02:11:44PM -0600, David Curtis wrote:
Eric,
Here is the output:
# ./dtrace2.dtr
dtrace: script './dtrace2.dtr' matched 4 probes
CPU  ID  FUNCTION:
On Wed, Jul 26, 2006 at 02:11:44PM -0600, David Curtis wrote:
> Eric,
>
> Here is the output:
>
> # ./dtrace2.dtr
> dtrace: script './dtrace2.dtr' matched 4 probes
> CPU  ID  FUNCTION:NAME
> 0 17816 ldi_open_by_name:entry /dev/dsk/vpath1c
> 0 16197
Eric,
Here is the output:
# ./dtrace2.dtr
dtrace: script './dtrace2.dtr' matched 4 probes
CPU  ID  FUNCTION:NAME
0 17816 ldi_open_by_name:entry /dev/dsk/vpath1c
0 16197 ldi_get_otyp:return 0
0  15546  ldi_prop_exists:
So it does look like something's messed up here. Before we pin this
down as a driver bug, we should double check that we are indeed opening
what we think we're opening, and try to track down why ldi_get_size is
failing. Try this:
#!/usr/sbin/dtrace -s

ldi_open_by_name:entry
{
        /* print the path being opened */
        trace(stringof(arg0));
}
Eric,
Here is what the customer gets trying to create the pool using the
software alias: (I added all the ldi_open's to the script)
# zpool create -f extdisk vpath1c
# ./dtrace.script
dtrace: script './dtrace.script' matched 6 probes
CPU  ID  FUNCTION:NAME
0 7233
zfs should work fine with disks under the control of solaris mpxio.
i don't know about any of the other multipathing solutions.
if you're trying to use a device that's controlled by another
multipathing solution, you might want to try specifying the full
path to the device, ex:
zpool creat
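Something along these lines, reusing the pool and device names that
appear earlier in this thread (illustrative only):

# spell out the device path instead of the bare name
zpool create extdisk /dev/dsk/vpath1c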
This suggests that there is some kind of bug in the layered storage
software. ZFS doesn't do anything special to the underlying storage
device; it merely relies on a few ldi_*() routines. I would try running
the following dtrace script:
#!/usr/sbin/dtrace -s
vdev_disk_open:return,
ldi_open_by_name:return
{
        trace(arg1);
}
Please reply to [EMAIL PROTECTED]
** Background / configuration **
zpool will not create a storage pool on fibre channel storage. I'm
attached to an IBM SVC using the IBMsdd driver. I have no problem using
SVM metadevices and UFS on these devices.
List steps to reproduce th
On Mon, Jul 03, 2006 at 11:13:33PM +0800, Steven Sim wrote:
> Could someone elaborate more on the statement "metadata drives
> reconstruction"...
ZFS starts from the uberblock and works its way down (think recursive
tree traversal) the metadata to find all live blocks and rebuilds the
replaced v
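In practice that traversal is what runs when a device is replaced: only
blocks that are live in the metadata tree get rewritten onto the new
device. A sketch with illustrative pool and device names:

# swap a disk out and watch the metadata-driven rebuild
zpool replace tank c1t2d0 c1t3d0
zpool status -v tank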
>I understand the copy-on-write thing. That was very well illustrated in
>"ZFS The Last Word in File Systems" by Jeff Bonwick.
>
>But if every block is its own RAID-Z stripe, if the block is lost, how
>does ZFS recover the block???
You should perhaps not take "block" literally; the block is w
Hello Gurus;
I've been playing with ZFS and reading the materials, BLOGS and FAQs.
It's an awesome FS and I just wish that Sun would evangelize a little
bit more. But that's another story.
I'm writing here to ask a few very simple questions.
I am able to understand the RAID-5 write hole and
So, based on the below, there should be no reason why a flash-based
ZFS filesystem should need to do anything special to avoid problems.
That's a Good Thing.
I think that using flash as the system disk will be the way to go.
Using flash as read-only with a disk or memory for read-write wou
>Well operating systems that *do* get used to build devices *do*
>have these mount options for this purpose, so I imagine that
>someone who does this kind of thing thinks they're worthwhile.
Thinking that something is worthwhile and having done the analysis
to prove that it is worthwhile are two
Richard Elling wrote:
Dana H. Myers wrote:
What I do not know yet is exactly how the flash portion of these hybrid
drives is administered. I rather expect that a non-hybrid-aware OS may
not actually exercise the flash storage on these drives by default; or
should I say, the flash storage will o
[EMAIL PROTECTED] wrote:
Also, options such as "-nomtime" and "-noctime" have been introduced
alongside "-noatime" in some free operating systems to limit the amount
of meta data that gets written back to disk.
Those seem rather pointless. (mtime and ctime generally imply other
changes,
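For comparison, the knob ZFS itself exposes in this area is the
per-dataset atime property; the dataset name below is illustrative:

# stop access-time updates on reads for this dataset
zfs set atime=off tank/home
zfs get atime tank/home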
> I assume ZFS only writes something when there is actually data?
Right.
Jeff
And, this is a worst case, no?
If the device itself also does some funky stuff under the covers, and
ZFS only writes an update if there is *actually* something to write,
then it could be much much longer than 4 years.
Actually - That's an interesting point. I assume ZFS only writes something
when the
Dana H. Myers wrote:
What I do not know yet is exactly how the flash portion of these hybrid
drives is administered. I rather expect that a non-hybrid-aware OS may
not actually exercise the flash storage on these drives by default; or
should I say, the flash storage will only be available to a h
Eric Schrock wrote:
On Tue, Jun 20, 2006 at 11:17:42AM -0700, Jonathan Adams wrote:
On Tue, Jun 20, 2006 at 09:32:58AM -0700, Richard Elling wrote:
Flash is (can be) a bit more sophisticated. The problem is that they
have a limited write endurance -- typically spec'ed at 100k writes to
any sin
Richard Elling wrote:
> Erik Trimble wrote:
>> Oh, and the newest thing in the consumer market is called "hybrid
>> drives", which is a melding of a Flash drive with a Winchester
>> drive. It's originally targeted at the laptop market - think a 1GB
>> flash memory welded to a 40GB 2.5" hard dri
On Tue, Jun 20, 2006 at 02:18:34PM -0600, Gregory Shaw wrote:
> Wouldn't that be:
>
> 5 seconds per write = 86400/5 = 17280 writes per day
> 256 rotated locations for 17280/256 = 67 writes per location per day
>
> Resulting in (10^5/67) ~1492 days or 4.08 years before failure?
>
> That's still
Wouldn't that be:
5 seconds per write = 86400/5 = 17280 writes per day
256 rotated locations for 17280/256 = 67 writes per location per day
Resulting in (10^5/67) ~1492 days or 4.08 years before failure?
That's still a long time, but it's not 100 years.
On Jun 20, 2006, at 12:47 PM, Eric Sch
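The same estimate spelled out, using the figures quoted above (one write
every 5 seconds, 256 rotated locations, a 100,000-write endurance
rating):

echo $(( 86400 / 5 ))       # 17280 writes per day across the device
echo $(( 17280 / 256 ))     # ~67 writes per day per location
echo $(( 100000 / 67 ))     # ~1492 days, i.e. just over 4 years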
>Also, options such as "-nomtime" and "-noctime" have been introduced
>alongside "-noatime" in some free operating systems to limit the amount
>of meta data that gets written back to disk.
Those seem rather pointless. (mtime and ctime generally imply other
changes, often to the inode; atime doe
Jonathan Adams wrote:
On Tue, Jun 20, 2006 at 09:32:58AM -0700, Richard Elling wrote:
Flash is (can be) a bit more sophisticated. The problem is that they
have a limited write endurance -- typically spec'ed at 100k writes to
any single bit. The good flash drives use block relocation, spares,
On Tue, Jun 20, 2006 at 11:17:42AM -0700, Jonathan Adams wrote:
> On Tue, Jun 20, 2006 at 09:32:58AM -0700, Richard Elling wrote:
> > Flash is (can be) a bit more sophisticated. The problem is that they
> > have a limited write endurance -- typically spec'ed at 100k writes to
> > any single bit.
On Tue, Jun 20, 2006 at 09:32:58AM -0700, Richard Elling wrote:
> Flash is (can be) a bit more sophisticated. The problem is that they
> have a limited write endurance -- typically spec'ed at 100k writes to
> any single bit. The good flash drives use block relocation, spares, and
> write spreadin
Erik Trimble wrote:
That is, start out with adding the ability to differentiate between
access policy in a vdev. Generally, we're talking only about mirror
vdevs right now. Later on, we can consider the ability to migrate data
based on performance, but a lot of this has to take into considera
Saying "Solid State disk" in the storage arena means battery-backed DRAM
(or, rarely, NVRAM). It does NOT include the various forms of
solid-state memory (compact flash, SD, MMC, etc.); "Flash disk" is
reserved for those kinds of devices.
This is historical, since Flash disk hasn't been functio
On 6/17/06, Neil A. Wilson <[EMAIL PROTECTED]> wrote:
Darren Reed wrote:
> Solid state disk often has a higher failure rate than normal disk and a
> limited write cycle. Hence it is often desirable to try and redesign the
> filesystem to do fewer writes when it is on (for example) compact flash,
Darren Reed wrote:
Solid state disk often has a higher failure rate than normal disk and a
limited write cycle. Hence it is often desirable to try and redesign the
filesystem to do fewer writes when it is on (for example) compact flash,
so moving "hot blocks" to fast storage can have consequence
Mike Gerdts wrote:
On 6/17/06, Dale Ghent <[EMAIL PROTECTED]> wrote:
The concept of shifting blocks in a zpool around in the background as
part of a scrubbing process and/or on the order of an explicit command
to populate newly added devices seems like it could be right up ZFS's
alley. Perhaps
On 6/17/06, Dale Ghent <[EMAIL PROTECTED]> wrote:
The concept of shifting blocks in a zpool around in the background as
part of a scrubbing process and/or on the order of an explicit command
to populate newly added devices seems like it could be right up ZFS's
alley. Perhaps it could also be done
On Jun 16, 2006, at 11:40 PM, Richard Elling wrote:
Kimberly Chang wrote:
A couple of ZFS questions:
1. ZFS dynamic striping will automatically use newly added devices
when there are write requests. Customer has a *mostly read-only*
application with an I/O bottleneck; they wonder if there is a
Kimberly Chang wrote:
A couple of ZFS questions:
1. ZFS dynamic striping will automatically use newly added devices when
there are write requests. Customer has a *mostly read-only* application
with an I/O bottleneck; they wonder if there is a ZFS command or mechanism
to enable the manual rebalanci
A couple of ZFS questions:
1. ZFS dynamic striping will automatically use newly added devices when
there are write requests. Customer has a *mostly read-only* application
with an I/O bottleneck; they wonder if there is a ZFS command or mechanism
to enable the manual rebalancing of ZFS data when add
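Since dynamic striping only spreads new writes, the usual way to get
existing, read-mostly data onto newly added devices is simply to rewrite
it, for example with a snapshot plus send/receive; dataset names below
are illustrative:

# rewriting the data lets the allocator stripe it across all vdevs,
# including the newly added ones
zfs snapshot tank/data@move
zfs send tank/data@move | zfs receive tank/data.rebalanced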