On Jul 29, 2008, at 2:24 PM, Chris Cosby wrote:
>
>
> On Tue, Jul 29, 2008 at 5:13 PM, Stefano Pini <[EMAIL PROTECTED]>
> wrote:
> Hi guys,
> we are proposing to a customer a couple of X4500s (24 TB) used as NAS
> (i.e. NFS servers).
> Both servers will contain the same files and should be acces
I've filed specifically for ZFS:
6735425 some places where 64bit values are being incorrectly accessed
on 32bit processors
eric
On Aug 6, 2008, at 1:59 PM, Brian D. Horn wrote:
> In the most recent code base (both OpenSolaris/Nevada and S10Ux with
> patches)
> all the known marvell88sx probl
On Aug 7, 2008, at 10:25 PM, Anton B. Rang wrote:
>> How would you describe the difference between the file system
>> checking utility and zpool scrub? Is zpool scrub lacking in its
>> verification of the data?
>
> To answer the second question first, yes, zpool scrub is lacking, at
> least to
On Aug 13, 2008, at 5:58 AM, Moinak Ghosh wrote:
> I have to help setup a configuration where a ZPOOL on MPXIO on
> OpenSolaris is being used with Symmetrix devices with replication
> being handled via Symmetrix Remote Data Facility (SRDF).
> So I am curious whether anyone has used this confi
On Aug 21, 2008, at 9:51 AM, Brent Jones wrote:
> Hello,
> I have been experimenting with ZFS on a test box, preparing to
> present it to management.
> One thing I cannot test right now is our real-world application
> load. We write to CIFS shares currently in small files.
> We write about 25
Note that the bad disk on the node caused a normal reboot to hang.
I also verified that sync from the command line hung. I don't know
how ZFS (or Solaris) handles situations involving bad disks...does
a bad disk block proper ZFS/OS handling of all IO, even to the
other healthy disks?
On Jan 26, 2007, at 6:02 AM, Robert Milkowski wrote:
Hello zfs-discuss,
Is anyone working on that bug? Any progress?
For bug:
6343667 scrub/resilver has to start over when a snapshot is taken
I believe that is on Matt and Mark's radar, and they have made some
progress.
eric
For your reading pleasure:
http://blogs.sun.com/erickustarz/entry/damaged_files_and_zpool_status
eric
IIRC Bill posted here some time ago saying the problem with write cache
on the arrays is being worked on.
Yep, the bug is:
6462690 sd driver should set SYNC_NV bit when issuing SYNCHRONIZE
CACHE to
SBC-2 devices
We have a case going through PSARC that will make things work
correctly with
On Feb 6, 2007, at 10:43 AM, Robert Milkowski wrote:
Hello eric,
Tuesday, February 6, 2007, 5:55:23 PM, you wrote:
IIRC Bill posted here some time ago saying the problem with write
cache
on the arrays is being worked on.
ek> Yep, the bug is:
ek> 6462690 sd driver should set SYNC_NV bit
On Feb 8, 2007, at 10:53 AM, Robert Milkowski wrote:
Hello Trevor,
Thursday, February 8, 2007, 6:23:21 PM, you wrote:
TW> I am seeing what I think is very peculiar behaviour of ZFS
after sending a
TW> full stream to a remote host - the upshot being that I can't
send an
TW> incremental st
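For reference, the usual full-then-incremental send pattern looks
roughly like this (a sketch; 'tank/fs', 'backup/fs' and 'remotehost'
are hypothetical names):
# zfs snapshot tank/fs@a
# zfs send tank/fs@a | ssh remotehost zfs recv backup/fs
...later, an incremental from @a to @b...
# zfs snapshot tank/fs@b
# zfs send -i tank/fs@a tank/fs@b | ssh remotehost zfs recv backup/fs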
On Feb 12, 2007, at 8:05 AM, Robert Petkus wrote:
Some comments from the author:
1. It was a preliminary scratch report not meant to be exhaustive
and complete by any means. A comprehensive report of our findings
will be released soon.
2. I claim responsibility for any benchmarks gathere
On Feb 12, 2007, at 7:52 AM, Robert Milkowski wrote:
Hello Roch,
Monday, February 12, 2007, 3:54:30 PM, you wrote:
RP> Duh!.
RP> Long syncs (which delay the next sync) are also possible on
RP> write-intensive workloads. Throttling heavy writers, I
RP> think, is the key to fixing this.
W
ek> Have you increased the load on this machine? I have seen a
similar
ek> situation (new requests being blocked waiting for the sync
thread to
ek> finish), but that's only been when either 1) the hardware is
broken
ek> and taking too long or 2) the server is way overloaded.
I don't thin
I've been using it in another CR where destroying one of the snapshots
was helping performance. Nevertheless, here it is on that server:
Short period of time:
bash-3.00# ./metaslab-6495013.d
^C
Loops count
value - Distribution - count
-1 |
On Feb 15, 2007, at 6:08 AM, Robert Milkowski wrote:
Hello eric,
Wednesday, February 14, 2007, 5:04:01 PM, you wrote:
ek> I'm wondering if we can just lower the amount of space we're
trying
ek> to alloc as the pool becomes more fragmented - we'll lose a
little I/
ek> O performance, but i
On Feb 18, 2007, at 9:19 PM, Davin Milun wrote:
I have one that looks like this:
pool: preplica-1
state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwi
ek> If you were able to send over your complete pool, destroy the
ek> existing one and re-create a new one using recv, then that should
ek> help with fragmentation. That said, that's a very poor man's
ek> defragger. The defragmentation should happen automatically or at
ek> least while the pool
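As a rough sketch of that "poor man's defragger" (hypothetical names;
assumes enough free space for a second copy of the data):
# zfs snapshot tank/data@move
# zfs send tank/data@move | zfs recv newpool/data
...then verify the copy, destroy the original, and rename/repoint as needed.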
On Feb 20, 2007, at 10:43 AM, [EMAIL PROTECTED] wrote:
If you run a 'zpool scrub preplica-1', then the persistent error log
will be cleaned up. In the future, we'll have a background scrubber
to make your life easier.
eric
Eric,
Great news! Are there any details about how thi
On Feb 9, 2007, at 8:02 AM, Carisdad wrote:
I've seen very good performance on streaming large files to ZFS on
a T2000. We have been looking at using the T2000 as a disk storage
unit for backups. I've been able to push over 500MB/s to the
disks. Setup is EMC Clariion CX3 with 84 500GB SA
On Feb 22, 2007, at 10:01 AM, Carisdad wrote:
eric kustarz wrote:
On Feb 9, 2007, at 8:02 AM, Carisdad wrote:
I've seen very good performance on streaming large files to ZFS
on a T2000. We have been looking at using the T2000 as a disk
storage unit for backups. I've been ab
On Feb 27, 2007, at 2:35 AM, Roch - PAE wrote:
Jens Elkner writes:
Currently I'm trying to figure out the best zfs layout for a
thumper w.r.t. read AND write performance.
I did some simple mkfile 512G tests and found that on average ~
500 MB/s seems to be the maximum one can reach (
On Mar 16, 2007, at 1:29 PM, JS wrote:
I've been seeing this failure to cap on a number of (Solaris 10
update 2 and 3) machines since the script came out (arc hogging is
a huge problem for me, esp on Oracle). This is probably a red
herring, but my v490 testbed seemed to actually cap on 3 s
On Mar 19, 2007, at 7:26 PM, Jens Elkner wrote:
On Wed, Feb 28, 2007 at 11:45:35AM +0100, Roch - PAE wrote:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6460622
Any estimate of when we'll see a [feature] fix for U3?
Should I open a call to perhaps raise the priority f
OK, that's an idea, but then this becomes not so easy to manage. I
have made some attempts and found iscsi{,t}adm not that cool to use
compared to what the zfs/zpool interfaces provide.
hey Cedrice,
Could you be more specific here? What wasn't easy? Any suggestions
to improve it?
eric
On Mar 20, 2007, at 10:27 AM, [EMAIL PROTECTED] wrote:
Folks,
Is there any update on the progress of fixing the resilver/
snap/scrub
reset issues? If the bits have been pushed is there a patch for
Solaris
10U3?
http://bugs.opensolaris.org/view_bug.do?bug_id=6343667
Matt and
I just integrated into snv_62:
6529406 zpool history needs to bump the on-disk version
The original CR for 'zpool history':
6343741 want to store a command history on disk
was integrated into snv_51.
Both of these are planned to make s10u4.
But wait, 'zpool history' has existed for several mont
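For reference, checking and bumping a pool's on-disk version, and
looking at the stored history, goes roughly like this (hypothetical
pool name 'tank'; output varies by build):
# zpool upgrade -v      (lists the versions this build supports)
# zpool upgrade tank    (upgrades 'tank' to the current version)
# zpool history tank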
Is this the same panic I observed when moving a FireWire disk from
a SPARC
system running snv_57 to an x86 laptop with snv_42a?
6533369 panic in dnode_buf_byteswap importing zpool
Yep, thanks - i was looking for that bug :) I'll close it out as a dup.
eric
On Mar 23, 2007, at 6:13 AM, Łukasz wrote:
When I'm trying to do the following in the kernel via a zfs ioctl:
1. snapshot destroy PREVIOUS
2. snapshot rename LATEST->PREVIOUS
3. snapshot create LATEST
code is:
/* delete previous snapshot */
zfs_unmount_snap(snap_previous, NULL);
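For reference, the userland equivalent of those three steps would be
(hypothetical dataset 'tank/fs'):
# zfs destroy tank/fs@PREVIOUS
# zfs rename tank/fs@LATEST tank/fs@PREVIOUS
# zfs snapshot tank/fs@LATEST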
On Apr 9, 2007, at 2:20 AM, Dirk Jakobsmeier wrote:
Hello,
we use several CAD applications and with one of those we have
problems using zfs.
OS and hardware is SunOS 5.10 Generic_118855-36, Fire X4200, the
cad application is catia v4.
There are several configuration and data files sto
I can't find the bugid on this one but it exists. You can use the
'-F' flag to 'zfs recv' in the interim:
"
     -F      Force a rollback of the filesystem to the most
             recent snapshot before performing the receive
             operation.
"
eric
On Apr 12, 2007, at 2:30 AM,
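A sketch of using that flag (hypothetical names); -F rolls the target
back to its most recent snapshot before applying the stream:
# zfs send -i tank/fs@a tank/fs@b | ssh remotehost zfs recv -F backup/fs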
On Apr 18, 2007, at 2:35 AM, Richard L. Hamilton wrote:
Well, no; his quote did say "software or hardware". The theory is
apparently
that ZFS can do better at detecting (and with redundancy,
correcting) errors
if it's dealing with raw hardware, or as nearly so as possible.
Most SANs
_can
On Apr 19, 2007, at 1:38 AM, Ricardo Correia wrote:
Why doesn't "zpool status -v" display the byte ranges of permanent
errors anymore, like it used to (before snv_57)?
I think it was a useful feature. For example, I have a pool with 17
permanent errors in 2 files with 700 MB each, but no abili
On Apr 19, 2007, at 12:50 PM, Ricardo Correia wrote:
eric kustarz wrote:
Two reasons:
1) cluttered the output (as the path name is variable length). We
could perhaps add another flag (-V or -vv or something) to display
the
ranges.
2) i wasn't convinced that output was useful, espec
Has an analysis of most common storage system been done on how they
treat SYNC_NV bit and if any additional tweaking is needed? Would such
analysis be publicly available?
I am not aware of any analysis and would love to see it done (i'm
sure any vendors who are lurking on this list that supp
On Apr 18, 2007, at 9:33 PM, Robert Milkowski wrote:
Hello Robert,
Thursday, April 19, 2007, 1:57:38 AM, you wrote:
RM> Hello nfs-discuss,
RM> Does anyone have a dtrace script (or any other means) to
track which
RM> files are open/read/write (ops and bytes) by nfsd? To make
things
R
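One rough sketch - an untested assumption on my part, not a known
script - using the fsinfo provider to sum bytes read by nfsd per file:
# dtrace -n 'fsinfo:::read /execname == "nfsd"/ { @[args[0]->fi_pathname] = sum(arg1); }'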
On Apr 20, 2007, at 10:47 AM, Anton B. Rang wrote:
ZFS uses caching heavily as well; much more so, in fact, than UFS.
Copy-on-write and direct i/o are not related. As you say, data gets
written first, then the metadata which points to it, but this isn't
anything like direct I/O. In particu
On Apr 20, 2007, at 1:02 PM, Anton B. Rang wrote:
So if someone has a real world workload where having the ability
to purposely not cache user
data would be a win, please let me know.
Multimedia streaming is an obvious one.
assuming a single reader? or multiple readers at the same spot?
On Apr 20, 2007, at 7:54 AM, Robert Milkowski wrote:
Hello eric,
Friday, April 20, 2007, 4:01:46 PM, you wrote:
ek> On Apr 18, 2007, at 9:33 PM, Robert Milkowski wrote:
Hello Robert,
Thursday, April 19, 2007, 1:57:38 AM, you wrote:
RM> Hello nfs-discuss,
RM> Does anyone have a dtrace s
On Apr 23, 2007, at 10:56 AM, Andy Lubel wrote:
What I'm saying is ZFS doesn't play nice with NFS in all the
scenarios I could think of:
-Single second disk in a v210 (sun72g) write cache on and off =
~1/3 the performance of UFS when writing files using dd over an NFS
mount using the s
On Apr 18, 2007, at 6:44 AM, Yaniv Aknin wrote:
Hello,
I'd like to plan a storage solution for a system currently in
production.
The system's storage is based on code which writes many files to
the file system, with overall storage needs currently around 40TB
and expected to reach hund
On Apr 25, 2007, at 9:16 AM, Oliver Gould wrote:
Hello-
I was planning on sending out a more formal sort of introduction in a
few weeks, but.. hey- it came up.
I will be porting ZFS to NetBSD this summer. Some info on this
project
can be found at:
http://www.olix0r.net/bitbucket/i
RP> Correction, it's now Fix Delivered build snv_56.
RP> 4894692 caching data in heap inflates crash dump
Good to know.
I hope it will make it into U4.
Yep, it will. You know it's kinda silly we don't expose that info to
the public via:
http://bugs.opensolaris.org/view_bug.do?bug_
In order to prevent the so-called "poor man's cluster" from
corrupting your data, we now store the hostid and verify it upon
importing a pool.
Check it out at:
http://blogs.sun.com/erickustarz/entry/poor_man_s_cluster_end
This is bug:
6282725 hostname/hostid should be stored in the label
Av
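In practice this shows up at import time; a sketch with a hypothetical
pool name:
# zpool import tank      (refused if the pool looks active on another host)
# zpool import -f tank   (explicit override - only when you know the other
                          host is really down)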
On May 7, 2007, at 7:11 AM, Frank Batschulat wrote:
running a recent patched s10 system, zfs version 3, attempting to
dump the label information using zdb when the pool is online
doesn't seem to give
reasonable information; any particular reason for this?
# zpool status
pool: blade-
On May 10, 2007, at 10:04 PM, Matthew Flanagan wrote:
Hi,
I have a test server that I use for testing my different jumpstart
installations. This system is continuously installed and
reinstalled with different system builds.
For some builds I have a finish script that creates a zpool using
On May 12, 2007, at 2:12 AM, Matthew Flanagan wrote:
On May 10, 2007, at 10:04 PM, Matthew Flanagan wrote:
Hi,
I have a test server that I use for testing my
different jumpstart
installations. This system is continuously
installed and
reinstalled with different system builds.
For some b
On May 15, 2007, at 9:37 AM, XIU wrote:
Hey,
I'm currently running on Nexenta alpha 6 and I have some corrupted
data in a pool.
The output from sudo zpool status -v data is:
pool: data
state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption.
On May 15, 2007, at 4:49 PM, Nigel Smith wrote:
I seem to have got the same core dump, in a different way.
I had a zpool setup on a iscsi 'disk'. For details see:
http://mail.opensolaris.org/pipermail/storage-discuss/2007-May/
001162.html
But after a reboot the iscsi target was no longer ava
Don't take these numbers too seriously - those were only first tries to
see where my port is and I was using OpenSolaris for comparison, which
has debugging turned on.
Yeah, ZFS does a lot of extra work with debugging on (such as
verifying checksums in the ARC), so always do serious performa
Won't disabling ZIL minimize the chance of a consistent zfs-
filesystem
if - for some reason - the server did an unplanned reboot?
The ZIL in ZFS is only used to speed up various workloads; it has
nothing to
do with file system consistency. ZFS is always consistent on disk no
matter if you use
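For completeness, the tunable people used in that era to disable the
ZIL looked like this in /etc/system (a sketch; you lose only the last
few seconds of synchronous writes after a crash, never on-disk
consistency):
set zfs:zil_disable=1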
I've been looking into the performance impact of NCQ. Here's what i
found out:
http://blogs.sun.com/erickustarz/entry/ncq_performance_analysis
Curiously, there's not too much performance data on NCQ available via
a google search ...
enjoy,
eric
On May 29, 2007, at 1:25 PM, Lida Horn wrote:
Point one, the comments that Eric made do not give the complete
picture.
All the tests that Eric's referring to were done through the ZFS
filesystem.
When sequential I/O is done to the disk directly there is no
performance
degradation at all.
Do
2) Following Chris's advice to do more with snapshots, I
played with his cron-triggered snapshot routine:
http://blogs.sun.com/chrisg/entry/snapping_every_minute
Now, after a couple of days, zpool history shows almost
100,000 lines of output (from all the snapshots and
deletions..
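A minimal sketch of that kind of cron-driven snapshotting (hypothetical
script path and dataset). The script, say /usr/local/bin/snap-minute.sh:
#!/bin/sh
# take a timestamped snapshot of the dataset given as $1
zfs snapshot "$1@auto-`date +%Y%m%d%H%M`"
and a crontab entry such as:
* * * * * /usr/local/bin/snap-minute.sh tank/home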
On Jun 1, 2007, at 2:09 PM, John Plocher wrote:
eric kustarz wrote:
We specifically didn't allow the admin the ability to truncate/
prune the log as then it becomes unreliable - ooops i made a
mistake, i better clear the log and file the bug against zfs
I understand - auditing
On Jun 2, 2007, at 8:27 PM, Jesus Cea wrote:
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
Toby Thain wrote:
Sorry, I should have cited it. Blew my chance to moderate by
posting to
the thread :)
http://ask.slashdot.org/comments.pl?sid=236627&cid=19319903
I computed the FUD factor by sorti
Hi Jeff,
You should take a look at this:
http://blogs.sun.com/erickustarz/entry/poor_man_s_cluster_end
We added the hostid/hostname to the vdev label. What this means is
that we stop you from importing a pool onto multiple machines (which
would have led to corruption).
eric
On May 30, 2
Would be very nice if the improvements would be documented
anywhere :-)
Cindy has been doing a good job of putting the new features into the
admin guide:
http://www.opensolaris.org/os/community/zfs/docs/zfsadmin.pdf
Check out the "What's New in ZFS?" section.
eric
Just got the latest ;login: and Pawel has an article on "Porting the
Solaris ZFS File System to the FreeBSD Operating System".
Lots of interesting stuff in there, such as the differences between
OpenSolaris and FreeBSD, as well as getting ZFS to work with FreeBSD
jails (a new 'jailed' prope
On Jun 11, 2007, at 12:52 AM, Borislav Aleksandrov wrote:
Panic on snv_65&64 when:
#mkdir /disk
#mkfile 128m /disk/disk1
#mkfile 128m /disk/disk2
#zpool create data mirror /disk/disk1 /disk/disk2
#mkfile 128m /disk/disk1
#mkfile 128m /disk/disk2
At this point you have completely overwritten t
Over NFS to non-ZFS drive
-
tar xfvj linux-2.6.21.tar.bz2
real 5m0.211s, user 0m45.330s, sys 0m50.118s
star xfv linux-2.6.21.tar.bz2
real 3m26.053s, user 0m43.069s, sys 0m33.726s
star -no-fsync -x -v -f linux-2.6.21.tar.bz2
real
On Jun 12, 2007, at 12:57 AM, Roch - PAE wrote:
Hi Siegfried, just making sure you had seen this:
http://blogs.sun.com/roch/entry/nfs_and_zfs_a_fine
You have very fast NFS to non-ZFS runs.
That seems only possible if the hosting OS did not sync the
data when NFS required it or the
On Jun 13, 2007, at 9:22 PM, Siegfried Nikolaivich wrote:
On 12-Jun-07, at 9:02 AM, eric kustarz wrote:
Comparing a ZFS pool made out of a single disk to a single UFS
filesystem would be a fair comparison.
What does your storage look like?
The storage looks like:
NAME
On Jun 19, 2007, at 11:23 AM, Huitzi wrote:
Hi once again and thank you very much for your reply. Here is
another thread.
I'm planning to deploy a small file server based on ZFS. I want to
know if I can start with 2 RAIDs, and add more RAIDs in the future
(like the gray RAID in the attac
On Jun 20, 2007, at 1:25 PM, mario heimel wrote:
Linux is the first operating system that can boot from RAID-1+0,
RAID-Z or RAID-Z2 ZFS - really cool trick to put zfs-fuse in the
initramfs.
( Solaris can only boot from single-disk or RAID-1 pools )
http://www.linuxworld.com/news/2007/06180
On Jun 21, 2007, at 8:47 AM, Niclas Sodergard wrote:
Hi,
I was playing around with NexentaCP and its zfs boot facility. I tried
to figure out what commands to run, and I ran zpool history like
this
# zpool history
2007-06-20.10:19:46 zfs snapshot syspool/[EMAIL PROTECTED]
2007-06-20.10:20:
On Jun 21, 2007, at 3:25 PM, Bryan Wagoner wrote:
Quick question,
Are there any tunables, or is there any way to specify devices in a
pool to use for the ZIL specifically? I've been thinking through
architectures to mitigate performance problems on SAN and various
other storage technolog
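Later builds added separate intent log ('slog') devices; a hedged
sketch with hypothetical device names:
# zpool create tank mirror c1t0d0 c1t1d0 log c2t0d0
# zpool add tank log c2t1d0     (or add a log vdev to an existing pool)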
A data structure view of ZFS is now available:
http://www.opensolaris.org/os/community/zfs/structures/
We've only got one picture up right now (though it's a juicy one!),
but let us know what you're interested in seeing, and we'll try to
make that happen.
I see this as a nice supplement to t
On Jun 26, 2007, at 4:26 AM, Roshan Perera wrote:
Hi all,
I am after some help/feedback to the subject issue explained below.
We are in the process of migrating a big DB2 database from a
6900 24 x 200MHz CPU's with Veritas FS 8TB of storage Solaris 8 to
25K 12 CPU dual core x 1800Mhz with
On Jul 4, 2007, at 7:50 AM, Wout Mertens wrote:
>> A data structure view of ZFS is now available:
>> http://www.opensolaris.org/os/community/zfs/structures/
>>
>> We've only got one picture up right now (though it's a juicy one!),
>> but let us know what you're interested in seeing, and
>> we'll t
>
> However, I've one more question - do you guys think NCQ with short
> stroked zones help or hurt performance? I have this feeling (my
> gut, that is), that at a low queue depth it's a Great Win, whereas
> at a deeper queue it would degrade performance more so than without
> it. Any tho
On Jul 9, 2007, at 11:21 AM, Scott Lovenberg wrote:
> You sir, are a gentleman and a scholar! Seriously, this is exactly
> the information I was looking for, thank you very much!
>
> Would you happen to know if this has improved since build 63 or if
> chipset has any effect one way or the ot
On Jul 8, 2007, at 8:05 PM, Peter C. Norton wrote:
> List,
>
> Sorry if this has been done before - I'm sure I'm not the only person
> interested in this, but I haven't found anything with the searches
> I've done.
>
> I'm looking to compare nfs performance between nfs on zfs and a
> lower-end ne
On Jul 13, 2007, at 10:57 AM, Brian Wilson wrote:
> Hmm. Odd. I've got PowerPath working fine with ZFS with both
> Symmetrix and Clariion back ends.
> PowerPath Version is 4.5.0, running on leadville qlogic drivers.
> Sparc hardware. (if it matters)
>
> I ran one our test databases on ZFS
Here's some info on the changes we've made to the vdev cache (in
part) to help database performance:
http://blogs.sun.com/erickustarz/entry/vdev_cache_improvements_to_help
enjoy your properly inflated I/O,
eric
ntegrity. VxFS can't do that - your data is always at risk.
Hopefully you can articulate that to the decision makers...
eric
> --
> Sean
>
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of eric kustarz
> Sent: Thur
On Jul 22, 2007, at 7:39 PM, JS wrote:
> There a way to take advantage of this in Sol10/u03?
>
> "sorry, variable 'zfs_vdev_cache_max' is not defined in the 'zfs'
> module"
That tunable/hack will be available in s10u4:
http://bugs.opensolaris.org/view_bug.do?bug_id=6472021
wait about a month
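Once your build has the tunable, the usual way to set it is via
/etc/system (a sketch; the value is only an example):
set zfs:zfs_vdev_cache_max=16384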
On Jul 25, 2007, at 11:46 PM, asa wrote:
> Hello all,
> I am interested in getting a list of the changed files between two
> snapshots in a fast and zfs-y way. I know that zfs knows all about
> what blocks have been changed, but can one map that to a file list? I
> know this could be solved
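Much later builds grew exactly this as 'zfs diff'; a sketch with
hypothetical snapshot names:
# zfs diff tank/fs@yesterday tank/fs@today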
On Jul 26, 2007, at 10:00 AM, gerald anderson wrote:
> Customer question:
>
> Oracle 10
>
> Customer has a 6540 with 4 trays of 300G 10k drives. The raid sets
> are 3 + 1
>
> vertically striped on the 4 trays. Two 400G volumes are created on
> each
>
> raid set. Would it be best to put all o
I've filed:
6586537 async zio taskqs can block out userland commands
to track this issue.
eric
On Jul 31, 2007, at 5:44 AM, Orvar Korvar wrote:
> I have begun a scrub on a 1.5TB pool which has 600GB of data, and
> seeing that it will take 11h47min I want to stop it. I invoked
> "zpool scrub -s pool" and nothing happens. There is no message:
> "scub stopped" or something similar. The cur
On Aug 29, 2007, at 11:16 PM, Jeffrey W. Baker wrote:
> I have a lot of people whispering "zfs" in my virtual ear these days,
> and at the same time I have an irrational attachment to xfs based
> entirely on its lack of the 32000 subdirectory limit. I'm not
> afraid of
> ext4's newness, since
On Aug 30, 2007, at 12:33 PM, Jeffrey W. Baker wrote:
> On Thu, 2007-08-30 at 12:07 -0700, eric kustarz wrote:
>> Hey jwb,
>>
>> Thanks for taking up the task, its benchmarking so i've got some
>> questions...
>>
>> What does it mean to have an extern
On Sep 14, 2007, at 8:16 AM, Łukasz wrote:
> I have a huge problem with space maps on thumper. Space maps take
> over 3GB
> and write operations generates massive read operations.
> Before every spa sync phase zfs reads space maps from disk.
>
> I decided to turn on compression for pool ( only
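Turning compression on is a one-liner per dataset or pool root (a
sketch; hypothetical pool name - note it only affects newly written
blocks):
# zfs set compression=on tank
# zfs get -r compression tank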
This actually looks like a sd bug... forwarding it to the storage
alias to see if anyone has seen this...
eric
On Sep 14, 2007, at 12:42 PM, J Duff wrote:
> I’d like to report the ZFS related crash/bug described below. How
> do I go about reporting the crash and what additional information
torage benchmark utility, although the crash is not as frequent as
> when
> using my test app.
>
> Duff
>
> -Original Message-
> From: eric kustarz [mailto:[EMAIL PROTECTED]
> Sent: Monday, September 17, 2007 6:58 PM
> To: J Duff; [EMAIL PROTECTED]
> Cc: ZFS
On Sep 15, 2007, at 12:55 PM, Victor Latushkin wrote:
> I'm proposing new project for ZFS community - Block Selection
> Policy and
> Space Map Enhancements.
+1.
I wonder if some of this could look into a dynamic policy. For
example, a policy that switches when the pool becomes "too full".
On Sep 20, 2007, at 6:46 PM, Paul B. Henson wrote:
> On Thu, 20 Sep 2007, Gary Mills wrote:
>
>> You should consider a Netapp filer. It will do both NFS and CIFS,
>> supports disk quotas, and is highly reliable. We use one for 30,000
>> students and 3000 employees. Ours has never failed us.
>
On Sep 21, 2007, at 11:47 AM, Pawel Jakub Dawidek wrote:
> Hi.
>
> I gave a talk about ZFS during EuroBSDCon 2007, and because it won the
> the best talk award and some find it funny, here it is:
>
> http://youtube.com/watch?v=o3TGM0T1CvE
>
> a bit better version is here:
>
> http://p
On Sep 21, 2007, at 3:50 PM, Tim Spriggs wrote:
> Paul B. Henson wrote:
>> On Thu, 20 Sep 2007, Tim Spriggs wrote:
>>
>>
>>> The x4500 is very sweet and the only thing stopping us from
>>> buying two
>>> instead of another shelf is the fact that we have lost pools on
>>> Sol10u3
>>> servers a
On Oct 2, 2007, at 1:11 PM, David Runyon wrote:
> We are using MySQL, and love the idea of using zfs for this. We
> are used to using Direct I/O to bypass file system caching (let the
> DB do this). Does this exist for zfs?
Not yet, see:
6429855 Need way to tell ZFS that caching is a lost
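Later builds added per-dataset cache control, which covers much of
what direct I/O users want here; a sketch with a hypothetical dataset:
# zfs set primarycache=metadata tank/mysql   (cache only metadata in the ARC)
# zfs set recordsize=16k tank/mysql          (commonly matched to the InnoDB page size)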
>
> Anyhow, in the case of DBs, ARC indeed becomes a vestigial organ. I'm
> surprised that this is being met with skepticism considering that
> Oracle highly recommends direct IO be used, and, IIRC, Oracle
> performance was the main motivation to adding DIO to UFS back in
> Solaris 2.6. This isn't
On Oct 3, 2007, at 3:44 PM, Dale Ghent wrote:
> On Oct 3, 2007, at 5:21 PM, Richard Elling wrote:
>
>> Slightly off-topic, in looking at some field data this morning
>> (looking
>> for something completely unrelated) I notice that the use of directio
>> on UFS is declining over time. I'm not sur
>
> Client A
> - import pool make couple-o-changes
>
> Client B
> - import pool -f (heh)
>
> Client A + B - With both mounting the same pool, touched a couple of
> files, and removed a couple of files from each client
>
> Client A + B - zpool export
>
> Client A - Attempted import and dropped
On Oct 9, 2007, at 4:25 AM, Thomas Liesner wrote:
> Hi,
>
> i checked with $nthreads=20 which will roughly represent the
> expected load and these are the results:
Note, here is the description of the 'fileserver.f' workload:
"
define process name=filereader,instances=1
{
thread name=filere
Since you were already using filebench, you could use the
'singlestreamwrite.f' and 'singlestreamread.f' workloads (with
nthreads set to 20, iosize set to 128k) to achieve the same things.
With the latest version of filebench, you can then use the '-c'
option to compare your results in a nic
>
> That all said - we don't have a simple dd benchmark for random
> seeking.
Feel free to try out randomread.f and randomwrite.f - or combine them
into your own new workload to create a random read and write workload.
eric
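A sketch of driving those workloads from the filebench prompt
(variable names as used above; exact syntax may vary by filebench
version):
filebench> load singlestreamwrite
filebench> set $nthreads=20
filebench> set $iosize=128k
filebench> run 60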
On Oct 10, 2007, at 11:23 AM, Bernhard Duebi wrote:
> Hi everybody,
>
> I tested the following scenario:
>
> I have two machine attached to the same SAN LUN.
> Both machines run Solaris 10 Update 4.
> Machine A is active with zpool01 imported.
> Machine B is inactive.
> Machine A crashes.
> Machi
This looks like a bug in the sd driver (SCSI).
Does this look familiar to anyone from the sd group?
eric
On Oct 10, 2007, at 10:30 AM, Claus Guttesen wrote:
> Hi.
>
> Just migrated to zfs on opensolaris. I copied data to the server using
> rsync and got this message:
>
> Oct 10 17:24:04 zetta ^
On Oct 22, 2007, at 2:52 AM, Mertol Ozyoney wrote:
> I know I haven't defined my particular needs. However I am looking
> for a
> simple explanation of what is available today and what will be
> available in
> the short term.
>
> Example. One-to-one async replication is supported, many-to-one sync