On Aug 21, 2008, at 9:51 AM, Brent Jones wrote:
> Hello,
> I have been experimenting with ZFS on a test box, preparing to
> present it to management.
> One thing I cannot test right now is our real-world application
> load. We write to CIFS shares currently in small files.
> We write about 25
On Aug 13, 2008, at 5:58 AM, Moinak Ghosh wrote:
> I have to help setup a configuration where a ZPOOL on MPXIO on
> OpenSolaris is being used with Symmetrix devices with replication
> being handled via Symmetrix Remote Data Facility (SRDF).
> So I am curious whether anyone has used this confi
On Aug 7, 2008, at 10:25 PM, Anton B. Rang wrote:
>> How would you describe the difference between the file system
>> checking utility and zpool scrub? Is zpool scrub lacking in its
>> verification of the data?
>
> To answer the second question first, yes, zpool scrub is lacking, at
> least to
I've filed the following bug specifically for ZFS:
6735425 some places where 64bit values are being incorrectly accessed
on 32bit processors
eric
On Aug 6, 2008, at 1:59 PM, Brian D. Horn wrote:
> In the most recent code base (both OpenSolaris/Nevada and S10Ux with
> patches)
> all the known marvell88sx probl
On Jul 29, 2008, at 2:24 PM, Chris Cosby wrote:
>
>
> On Tue, Jul 29, 2008 at 5:13 PM, Stefano Pini <[EMAIL PROTECTED]>
> wrote:
> Hi guys,
> we are proposing to a customer a couple of X4500s (24 TB) used as NAS
> (i.e. NFS servers).
> Both servers will contain the same files and should be acces
On Jun 6, 2008, at 3:27 PM, Brian Hechinger wrote:
> On Fri, Jun 06, 2008 at 02:58:09PM -0700, eric kustarz wrote:
>>
>>>> clients do not. Without per-filesystem mounts, 'df' on the client
>>>> will not report correct data though.
>>>
>
On Jun 6, 2008, at 2:50 PM, Nicolas Williams wrote:
> On Fri, Jun 06, 2008 at 10:42:45AM -0500, Bob Friesenhahn wrote:
>> On Fri, 6 Jun 2008, Brian Hechinger wrote:
>>
>>> On Thu, Jun 05, 2008 at 12:02:42PM -0400, Chris Siebenmann wrote:
- as separate filesystems, they have to be separa
On Jun 3, 2008, at 11:16 AM, Chris Siebenmann wrote:
> Is there any way to set up ZFS on a system so that it will not
> automatically import all of the ZFS pools it had active when it was
> last
> running?
>
> The problem with automatic importation is preventing disasters in a
> failover situation
On May 8, 2008, at 12:31 PM, Carson Gaspar wrote:
> Luke Scharf wrote:
>> Dave wrote:
>>> On 05/08/2008 08:11 AM, Ross wrote:
>>>
It may be an obvious point, but are you aware that snapshots need
to be stopped any time a disk fails? It's something to consider
if you're plannin
On May 5, 2008, at 9:51 PM, Bill McGonigle wrote:
> Is it also true that ZFS can't be re-implemented in GPLv2 code
> because then the CDDL-based patent protections don't apply?
Some of it has already been done:
http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/grub/grub-0.95/stage2
On May 5, 2008, at 4:43 PM, Bob Friesenhahn wrote:
> On Mon, 5 May 2008, eric kustarz wrote:
>>
>> That's not true:
>> http://blogs.sun.com/erickustarz/entry/zil_disable
>>
>> Perhaps people are using "consistency" to mean different things
>
On May 5, 2008, at 1:43 PM, Bob Friesenhahn wrote:
> On Mon, 5 May 2008, Marcelo Leal wrote:
>
>> Hello, If you believe that the problem can be related to ZIL code,
>> you can try to disable it to debug (isolate) the problem. If it is
>> not a fileserver (NFS), disabling the zil should not impact
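For reference, on builds of that vintage the switch was the zil_disable tunable; a minimal sketch of how it was typically flipped for debugging only (this is the old pre-logbias tunable, and it generally needed the filesystems to be remounted, or a reboot, to take effect):

# on a live system, via mdb (then remount the filesystems in question):
echo "zil_disable/W 1" | mdb -kw
# or persistently across reboots, in /etc/system:
set zfs:zil_disable = 1

Remember to set it back to 0 once the problem is isolated.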
On Apr 27, 2008, at 4:39 PM, Carson Gaspar wrote:
> Ian Collins wrote:
>> Carson Gaspar wrote:
>
>>> If this is possible, it's entirely undocumented... Actually, fmd's
>>> documentation is generally terrible. The sum total of configuration
>>> information is:
>>>
>>> FILES
>>> /etc/fm/fmd
If you are really sure that disks c5t2d0 and c5t6d0 are not in use by
anyone and want to add them as spares, then dd'ing 0s over the labels
should suffice (front and back labels). Usual warnings about dd'ing
0s over a disk apply here. I'd probably do one at a time.
eric
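For anyone following along, a sketch of what that looks like in practice (device names are from the thread; the seek offset is a placeholder, since ZFS keeps two labels at the front of the device and two at the end, and slice naming varies between VTOC and EFI disks):

# make very sure the disk is unused first
zpool status
# zero the front labels
dd if=/dev/zero of=/dev/rdsk/c5t2d0s0 bs=1024k count=2
# zero the back labels; compute the seek from the real disk size
dd if=/dev/zero of=/dev/rdsk/c5t2d0s0 bs=1024k seek=<disk_size_in_MB - 2>
# then add it as a spare (pool name is a placeholder)
zpool add mypool spare c5t2d0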
On Apr 2, 2008, at
On Mar 27, 2008, at 9:24 AM, Bob Friesenhahn wrote:
> On Thu, 27 Mar 2008, Neelakanth Nadgir wrote:
>>
>> This causes the sync to happen much faster, but as you say,
>> suboptimal.
>> Haven't had the time to go through the bug report, but probably
>> CR 6429205 each zpool needs to monitor its th
> messages:
> # tail /var/adm/messages
> Mar 22 17:28:36 hancock genunix: [ID 936769 kern.info] fssnap0 is /
> pseudo/[EMAIL PROTECTED]
> Mar 22 17:28:36 hancock pseudo: [ID 129642 kern.info] pseudo-
> device: winlock0
> Mar 22 17:28:36 hancock genunix: [ID 936769 kern.info] winlock0 is /
> pseu
internal events to track them.
eric
>
> David
>
> On Fri, 2008-03-21 at 13:10 -0700, eric kustarz wrote:
>>> Also history only tells me what someone typed. It doesn't tell me
>>> what other changes may have occur
> Also history only tells me what someone typed. It doesn't tell me
> what other changes may have occurred.
What other changes were you thinking about?
eric
On Mar 20, 2008, at 3:59 PM, Robert Milkowski wrote:
> Hello Cyril,
>
> Thursday, March 20, 2008, 9:51:35 PM, you wrote:
>
> CP> On Thu, Mar 20, 2008 at 11:26 PM, Mark A. Carlson
> <[EMAIL PROTECTED]> wrote:
>>>
>>> I think the answer is that the configuration is hidden
>>> and cannot be back
On Mar 17, 2008, at 6:21 AM, Mertol Ozyoney wrote:
> Hi All ;
>
>
>
> I am not a Solaris or ZFS expert and I am in need of your help.
>
>
>
> When I run the following command
>
>
>
> zfs send -i [EMAIL PROTECTED] [EMAIL PROTECTED] | ssh 10.10.103.42 zfs
> receive -F data/data41
>
>
>
> if some
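For context, the full cycle that command belongs to looks roughly like this (the source dataset and snapshot names are made up here, since the archive obscured the originals):

# one-time full copy of the first snapshot
zfs snapshot tank/data@snap1
zfs send tank/data@snap1 | ssh 10.10.103.42 zfs receive -F data/data41
# afterwards, repeatedly send only the delta from the previous snapshot
zfs snapshot tank/data@snap2
zfs send -i tank/data@snap1 tank/data@snap2 | ssh 10.10.103.42 zfs receive -F data/data41

Note that -F on the receiving side rolls the target back to its most recent snapshot before applying the stream, discarding any local changes made there since the last receive.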
On Mar 12, 2008, at 12:35 PM, Ben Middleton wrote:
> Hi,
>
> Sorry if this is an RTM issue - but I wanted to be sure before
> continuing. I received a corrupted file error on one of my pools. I
> removed the file, and the status command now shows the following:
>
>> zpool status -v rpool
> p
On Mar 6, 2008, at 7:58 AM, Brian D. Horn wrote:
> Take a look at CR 6634371. It's worse than you probably thought.
The only place I see ZFS mentioned in that bug report is regarding
z_mapcnt. It's being atomically incremented/decremented in
zfs_addmap()/zfs_delmap(), so those are OK.
In zfs_frlock(), te
If you can't file an RFE yourself (with the attached diffs), then
yeah, I'd like to see them so I can do it.
cool stuff,
eric
On Feb 26, 2008, at 4:35 AM, [EMAIL PROTECTED] wrote:
> Hi All,
> I have modified zdb to do decompression in zdb_read_block. Syntax is:
>
> # zdb -R poolname:devid:blkn
On Feb 20, 2008, at 2:16 PM, Robert Milkowski wrote:
> Hello eric,
>
> Tuesday, February 12, 2008, 7:33:14 PM, you wrote:
>
> ek> On Feb 1, 2008, at 7:17 AM, Nicolas Dorfsman wrote:
>
>>> Hi,
>>>
>>> I wrote a Hobbit script around lunmap/hbamap commands to monitor
>>> SAN health.
>>> I'd like to
On Feb 16, 2008, at 5:26 PM, Bob Friesenhahn wrote:
> Some of us are still using Solaris 10 since it is the version of
> Solaris released and supported by Sun. The 'filebench' software from
> SourceForge does not seem to install or work on Solaris 10. The
> 'pkgadd' command refuses to recognize
On Feb 1, 2008, at 7:17 AM, Nicolas Dorfsman wrote:
> Hi,
>
> I wrote a Hobbit script around lunmap/hbamap commands to monitor
> SAN health.
> I'd like to add detail on what is being hosted by those luns.
>
> With SVM, metastat -p is helpful.
>
> With zfs, zpool status output is awful for scrip
On Feb 4, 2008, at 5:10 PM, Marion Hakanson wrote:
> [EMAIL PROTECTED] said:
>> FYI, you can use the '-c' option to compare results from various
>> runs and
>> have one single report to look at.
>
> That's a handy feature. I've added a couple of such comparisons:
> http://acc.ohsu.edu/
>
> While browsing the ZFS source code, I noticed that "usr/src/cmd/
> ztest/ztest.c" includes ztest_spa_rename(), a ZFS test which
> renames a ZFS storage pool to a different name, tests the pool
> under its new name, and then renames it back. I wonder why this
> functionality was not expo
On Feb 1, 2008, at 11:17 AM, Marion Hakanson wrote:
> [EMAIL PROTECTED] said:
>> Depending on needs for space vs. performance, I'd probably pick
>> either 5*9 or
>> 9*5, with 1 hot spare.
>
> [EMAIL PROTECTED] said:
>> How can you check the speed? (I'm a total newbie on Solaris)
>
> We're d
On Jan 25, 2008, at 6:06 AM, Niksa Franceschi wrote:
> Yes, the link explains quite well the issue we have.
> The only difference is that server1 can be manually rebooted, and while
> it's still down I can mount the ZFS pool on server2 even without the -f
> option, and yet server1 when booted up still mo
On Jan 22, 2008, at 5:39 PM, manoj nayak wrote:
>>
>> Manoj Nayak writes:
>>> Hi All,
>>>
>>> If any dtrace script is available to figure out the vdev_cache (or
>>> software track buffer) reads in kiloBytes ?
>>>
>>> The document says the default size of the read is 128k. However,
>>> vdev_cach
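Lacking a canned script, a rough starting point might be a one-liner like the following (a sketch only: the function is static and the argument typing relies on the CTF data of that era's vdev_cache.c, so double-check it against your build):

dtrace -n 'fbt::vdev_cache_read:entry { @bytes = sum(args[0]->io_size); }
           END { normalize(@bytes, 1024); printa("KB requested through the vdev cache: %@d\n", @bytes); }'

That counts the sizes requested through the cache; each miss is then inflated on disk to the configured vdev cache block size (1 << zfs_vdev_cache_bshift).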
On Jan 18, 2008, at 4:23 AM, Sengor wrote:
> On 1/17/08, Darren J Moffat <[EMAIL PROTECTED]> wrote:
>>> Pardon my ignorance, but is ZFS with compression safe to use in a
>>> production environment?
>>
>> Yes, why wouldn't it be? If it wasn't safe it wouldn't have been
>> delivered.
>
> Few reas
>
> I'm using raidz2 across 8 drives, but if I had it to do again, I'd
> probably just use mirroring. Unfortunately, raidz2 kills your random
> read and write performance, and that makes Time Machine really, really
> slow. I'm running low on space now, and considering throwing another
> 8 drives
On Jan 14, 2008, at 11:08 AM, Tim Cook wrote:
> www.mozy.com appears to have unlimited backups for 4.95 a month.
> Hard to beat that. And they're owned by EMC now so you know they
> aren't going anywhere anytime soon.
I just signed on and am trying Mozy out. Note, it's $5 per computer
an
On Jan 10, 2008, at 5:13 PM, Jim Dunham wrote:
> Eric,
>
>>
>> On Jan 10, 2008, at 4:50 AM, Łukasz K wrote:
>>
>>> Hi
>>> I'm using ZFS on a few X4500s and I need to back them up.
>>> The data on source pool keeps changing so the online replication
>>> would be the best solution.
>>>
>>>As I k
On Jan 10, 2008, at 9:32 AM, Carson Gaspar wrote:
> eric kustarz wrote:
>> On Jan 10, 2008, at 9:18 AM, Łukasz K wrote:
>>
>>> I need an automatic system. Now I'm using zfs send, but it
>>> takes too much human effort to control it.
>>
>> cron
On Jan 10, 2008, at 9:18 AM, Łukasz K wrote:
> Dnia 10-01-2008 o godz. 17:45 eric kustarz napisał(a):
>> On Jan 10, 2008, at 4:50 AM, Łukasz K wrote:
>>
>>> Hi
>>> I'm using ZFS on a few X4500s and I need to back them up.
>>> The data on sourc
On Jan 9, 2008, at 9:09 PM, Rob Logan wrote:
>
> fun example that shows NCQ lowers wait and %w, but doesn't have
> much impact on final speed. [scrubbing, devs reordered for clarity]
Here are the results I found when comparing random reads vs.
sequential reads for NCQ:
http://blogs.sun.com/eri
On Jan 10, 2008, at 4:50 AM, Łukasz K wrote:
> Hi
> I'm using ZFS on a few X4500s and I need to back them up.
> The data on source pool keeps changing so the online replication
> would be the best solution.
>
> As far as I know, AVS doesn't support ZFS - there is a problem with
> mounting the backup pool
This should work just fine with latest bits (Nevada 77 and later) via:
http://bugs.opensolaris.org/view_bug.do?bug_id=6425096
Its backport is currently targeted for an early build of s10u6.
eric
On Jan 8, 2008, at 7:13 AM, Andreas Koppenhoefer wrote:
> [I apologise for reposting this... but no
>
> So either we're hitting a pretty serious zfs bug, or they're purposely
> holding back performance in Solaris 10 so that we all have a good
> reason to
> upgrade to 11. ;)
In general, for ZFS we try to push all changes from Nevada back to
s10 updates.
In particular, "6535160 Lock contenti
On Dec 23, 2007, at 7:53 PM, David Dyer-Bennet wrote:
> Just out of curiosity, what are the dates ls -l shows on a snapshot?
> Looks like they might be the pool creation date.
The ctime and mtime are from the file system creation date. The
atime is the current time. See:
http://src.opensolar
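A quick way to see this for yourself (paths hypothetical):

ls -ld  /tank/home/.zfs/snapshot/mysnap    # -l shows mtime: filesystem creation time
ls -ldc /tank/home/.zfs/snapshot/mysnap    # -c shows ctime: filesystem creation time
ls -ldu /tank/home/.zfs/snapshot/mysnap    # -u shows atime: roughly the current time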
On Dec 12, 2007, at 3:03 PM, Robert Milkowski wrote:
> Hello zfs-discuss,
>
> http://sunsolve.sun.com/search/document.do?assetkey=1-1-6604198-1
>
>
> Is there a patch for S10? I thought it had been fixed.
It was fixed via "6460622 zio_nowait() doesn't live up to its name"
and that is in s10u
On Dec 5, 2007, at 8:38 PM, Anton B. Rang wrote:
> This might have been affected by the cache flush issue -- if the
> 3310 flushes its NVRAM cache to disk on SYNCHRONIZE CACHE commands,
> then ZFS is penalizing itself. I don't know whether the 3310
> firmware has been updated to support th
>
> Also... doesn't ZFS do some form of read ahead .. 64KB anyways?
>
I believe you are referring to the vdev cache here. Check out:
http://blogs.sun.com/erickustarz/entry/vdev_cache_improvements_to_help
eric
>
>>
>>> Basically, I want to know if somebody here on this list is using
>>> a ZFS
>>> file system for a proxy cache and what will its performance be?
>>> Will it
>>> improve or degrade Squid's performance? Or better still, is
>>> there any
>>> kind of benchmark tools for ZFS performance?
On Oct 26, 2007, at 3:21 AM, Matt Buckland wrote:
> Hi forum,
>
> I did something stupid the other day, managed to connect an
> external disk that was part of zpool A such that it appeared in
> zpool B. I realised as soon as I had done zpool status that zpool B
> should not have been online
On Oct 22, 2007, at 2:52 AM, Mertol Ozyoney wrote:
> I know I haven't defined my particular needs. However, I am looking
> for a
> simple explanation of what is available today and what will be
> available in
> the short term.
>
> Example: one-to-one asynch replication is supported, many-to-one sync
This looks like a bug in the sd driver (SCSI).
Does this look familiar to anyone from the sd group?
eric
On Oct 10, 2007, at 10:30 AM, Claus Guttesen wrote:
> Hi.
>
> Just migrated to zfs on opensolaris. I copied data to the server using
> rsync and got this message:
>
> Oct 10 17:24:04 zetta ^
On Oct 10, 2007, at 11:23 AM, Bernhard Duebi wrote:
> Hi everybody,
>
> I tested the following scenario:
>
> I have two machine attached to the same SAN LUN.
> Both machines run Solaris 10 Update 4.
> Machine A is active with zpool01 imported.
> Machine B is inactive.
> Machine A crashes.
> Machi
>
> That all said - we don't have a simple dd benchmark for random
> seeking.
Feel free to try out randomread.f and randomwrite.f - or combine them
into your own new workload to create a random read and write workload.
eric
Since you were already using filebench, you could use the
'singlestreamwrite.f' and 'singlestreamread.f' workloads (with
nthreads set to 20, iosize set to 128k) to achieve the same things.
With the latest version of filebench, you can then use the '-c'
option to compare your results in a nic
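If it helps, the flow from filebench's interactive shell looks roughly like this (a sketch from memory; the exact '-c' comparison invocation in particular may differ in your filebench version):

filebench> load singlestreamwrite
filebench> set $nthreads=20
filebench> set $iosize=128k
filebench> run 60
# repeat with singlestreamread, then compare the saved stats directories, e.g.:
# filebench -c <statsdir1> <statsdir2> > compare.html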
On Oct 9, 2007, at 4:25 AM, Thomas Liesner wrote:
> Hi,
>
> I checked with $nthreads=20, which will roughly represent the
> expected load, and these are the results:
Note, here is the description of the 'fileserver.f' workload:
"
define process name=filereader,instances=1
{
thread name=filere
>
> Client A
> - import pool make couple-o-changes
>
> Client B
> - import pool -f (heh)
>
> Client A + B - With both mounting the same pool, touched a couple of
> files, and removed a couple of files from each client
>
> Client A + B - zpool export
>
> Client A - Attempted import and dropped
On Oct 3, 2007, at 3:44 PM, Dale Ghent wrote:
> On Oct 3, 2007, at 5:21 PM, Richard Elling wrote:
>
>> Slightly off-topic, in looking at some field data this morning
>> (looking
>> for something completely unrelated) I notice that the use of directio
>> on UFS is declining over time. I'm not sur
>
> Anyhow, in the case of DBs, ARC indeed becomes a vestigial organ. I'm
> surprised that this is being met with skepticism considering that
> Oracle highly recommends direct IO be used, and, IIRC, Oracle
> performance was the main motivation to adding DIO to UFS back in
> Solaris 2.6. This isn't
On Oct 2, 2007, at 1:11 PM, David Runyon wrote:
> We are using MySQL, and love the idea of using zfs for this. We
> are used to using Direct I/O to bypass file system caching (let the
> DB do this). Does this exist for zfs?
Not yet, see:
6429855 Need way to tell ZFS that caching is a lost
On Sep 21, 2007, at 3:50 PM, Tim Spriggs wrote:
> Paul B. Henson wrote:
>> On Thu, 20 Sep 2007, Tim Spriggs wrote:
>>
>>
>>> The x4500 is very sweet and the only thing stopping us from
>>> buying two
>>> instead of another shelf is the fact that we have lost pools on
>>> Sol10u3
>>> servers a
On Sep 21, 2007, at 11:47 AM, Pawel Jakub Dawidek wrote:
> Hi.
>
> I gave a talk about ZFS during EuroBSDCon 2007, and because it won
> the best talk award and some find it funny, here it is:
>
> http://youtube.com/watch?v=o3TGM0T1CvE
>
> a bit better version is here:
>
> http://p
On Sep 20, 2007, at 6:46 PM, Paul B. Henson wrote:
> On Thu, 20 Sep 2007, Gary Mills wrote:
>
>> You should consider a Netapp filer. It will do both NFS and CIFS,
>> supports disk quotas, and is highly reliable. We use one for 30,000
>> students and 3000 employees. Ours has never failed us.
>
On Sep 15, 2007, at 12:55 PM, Victor Latushkin wrote:
> I'm proposing a new project for the ZFS community - Block Selection
> Policy and
> Space Map Enhancements.
+1.
I wonder if some of this could look into a dynamic policy. For
example, a policy that switches when the pool becomes "too full".
torage benchmark utility, although the crash is not as frequent as
> when
> using my test app.
>
> Duff
>
> -Original Message-
> From: eric kustarz [mailto:[EMAIL PROTECTED]
> Sent: Monday, September 17, 2007 6:58 PM
> To: J Duff; [EMAIL PROTECTED]
> Cc: ZFS
This actually looks like an sd bug... forwarding it to the storage
alias to see if anyone has seen this...
eric
On Sep 14, 2007, at 12:42 PM, J Duff wrote:
> I’d like to report the ZFS related crash/bug described below. How
> do I go about reporting the crash and what additional information
On Sep 14, 2007, at 8:16 AM, Łukasz wrote:
> I have a huge problem with space maps on thumper. Space maps take
> over 3GB,
> and write operations generate massive read operations.
> Before every spa sync phase zfs reads space maps from disk.
>
> I decided to turn on compression for pool ( only
On Aug 30, 2007, at 12:33 PM, Jeffrey W. Baker wrote:
> On Thu, 2007-08-30 at 12:07 -0700, eric kustarz wrote:
>> Hey jwb,
>>
>> Thanks for taking up the task; it's benchmarking, so I've got some
>> questions...
>>
>> What does it mean to have an extern
On Aug 29, 2007, at 11:16 PM, Jeffrey W. Baker wrote:
> I have a lot of people whispering "zfs" in my virtual ear these days,
> and at the same time I have an irrational attachment to xfs based
> entirely on its lack of the 32000 subdirectory limit. I'm not
> afraid of
> ext4's newness, since
On Jul 31, 2007, at 5:44 AM, Orvar Korvar wrote:
> I have begun a scrub on a 1.5TB pool which has 600GB of data, and
> seeing that it will take 11h47min I want to stop it. I invoked
> "zpool scrub -s pool" and nothing happens. There is no message:
> "scrub stopped" or something similar. The cur
I've filed:
6586537 async zio taskqs can block out userland commands
to track this issue.
eric
On Jul 25, 2007, at 11:46 PM, asa wrote:
> Hello all,
> I am interested in getting a list of the changed files between two
> snapshots in a fast and zfs-y way. I know that zfs knows all about
> what blocks have been changed, but can one map that to a file list? I
> know this could be solved
On Jul 26, 2007, at 10:00 AM, gerald anderson wrote:
> Customer question:
>
> Oracle 10
>
> Customer has a 6540 with 4 trays of 300G 10k drives. The raid sets
> are 3 + 1
>
> vertically striped on the 4 trays. Two 400G volumes are created on
> each
>
> raid set. Would it be best to put all o
On Jul 22, 2007, at 7:39 PM, JS wrote:
> Is there a way to take advantage of this in Sol10/u03?
>
> "sorry, variable 'zfs_vdev_cache_max' is not defined in the 'zfs'
> module"
That tunable/hack will be available in s10u4:
http://bugs.opensolaris.org/view_bug.do?bug_id=6472021
wait about a month
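Once you are on bits that have it, the usual ways to poke this kind of tunable are (the value below is purely illustrative, not a recommendation):

# persistently, in /etc/system:
set zfs:zfs_vdev_cache_max = 16384
# or on a live system:
echo "zfs_vdev_cache_max/W 0t16384" | mdb -kw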
ntegrity. VxFS can't do that - your data is always at risk.
Hopefully you can articulate that to the decision makers...
eric
> --
> Sean
>
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of eric kustarz
> Sent: Thur
Here's some info on the changes we've made to the vdev cache (in
part) to help database performance:
http://blogs.sun.com/erickustarz/entry/vdev_cache_improvements_to_help
enjoy your properly inflated I/O,
eric
On Jul 13, 2007, at 10:57 AM, Brian Wilson wrote:
> Hmm. Odd. I've got PowerPath working fine with ZFS with both
> Symmetrix and Clariion back ends.
> PowerPath Version is 4.5.0, running on leadville qlogic drivers.
> Sparc hardware. (if it matters)
>
> I ran one our test databases on ZFS
On Jul 8, 2007, at 8:05 PM, Peter C. Norton wrote:
> List,
>
> Sorry if this has been done before - I'm sure I'm not the only person
> interested in this, but I haven't found anything with the searches
> I've done.
>
> I'm looking to compare nfs performance between nfs on zfs and a
> lower-end ne
On Jul 9, 2007, at 11:21 AM, Scott Lovenberg wrote:
> You, sir, are a gentleman and a scholar! Seriously, this is exactly
> the information I was looking for, thank you very much!
>
> Would you happen to know if this has improved since build 63 or if
> chipset has any effect one way or the ot
>
> However, I've one more question - do you guys think NCQ with short-
> stroked zones helps or hurts performance? I have this feeling (my
> gut, that is), that at a low queue depth it's a Great Win, whereas
> at a deeper queue it would degrade performance more so than without
> it. Any tho
On Jul 4, 2007, at 7:50 AM, Wout Mertens wrote:
>> A data structure view of ZFS is now available:
>> http://www.opensolaris.org/os/community/zfs/structures/
>>
>> We've only got one picture up right now (though it's a juicy one!),
>> but let us know what you're interested in seeing, and
>> we'll t
On Jun 26, 2007, at 4:26 AM, Roshan Perera wrote:
Hi all,
I am after some help/feedback to the subject issue explained below.
We are in the process of migrating a big DB2 database from a
6900 with 24 x 200MHz CPUs, Veritas FS, and 8TB of storage on Solaris 8 to a
25K with 12 dual-core CPUs x 1800MHz with
A data structure view of ZFS is now available:
http://www.opensolaris.org/os/community/zfs/structures/
We've only got one picture up right now (though it's a juicy one!),
but let us know what you're interested in seeing, and we'll try to
make that happen.
I see this as a nice supplement to t
On Jun 21, 2007, at 3:25 PM, Bryan Wagoner wrote:
Quick question,
Are there any tunables, or is there any way to specify devices in a
pool to use for the ZIL specifically? I've been thinking through
architectures to mitigate performance problems on SAN and various
other storage technolog
On Jun 21, 2007, at 8:47 AM, Niclas Sodergard wrote:
Hi,
I was playing around with NexentaCP and its zfs boot facility. I tried
to figure out what commands to run, so I ran zpool history like
this:
# zpool history
2007-06-20.10:19:46 zfs snapshot syspool/[EMAIL PROTECTED]
2007-06-20.10:20:
On Jun 20, 2007, at 1:25 PM, mario heimel wrote:
Linux is the first operating system that can boot from RAID-1+0,
RAID-Z, or RAID-Z2 ZFS - a really cool trick of putting zfs-fuse in the
initramfs.
(Solaris can only boot from single-disk or RAID-1 pools.)
http://www.linuxworld.com/news/2007/06180
On Jun 19, 2007, at 11:23 AM, Huitzi wrote:
Hi once again and thank you very much for your reply. Here is
another thread.
I'm planning to deploy a small file server based on ZFS. I want to
know if I can start with 2 RAIDs, and add more RAIDs in the future
(like the gray RAID in the attac
On Jun 13, 2007, at 9:22 PM, Siegfried Nikolaivich wrote:
On 12-Jun-07, at 9:02 AM, eric kustarz wrote:
Comparing a ZFS pool made out of a single disk to a single UFS
filesystem would be a fair comparison.
What does your storage look like?
The storage looks like:
NAME
On Jun 12, 2007, at 12:57 AM, Roch - PAE wrote:
Hi Siegfried, just making sure you had seen this:
http://blogs.sun.com/roch/entry/nfs_and_zfs_a_fine
You have very fast NFS to non-ZFS runs.
That seems only possible if the hosting OS did not sync the
data when NFS required it or the
Over NFS to non-ZFS drive
-
tar xfvj linux-2.6.21.tar.bz2
real 5m0.211s, user 0m45.330s, sys 0m50.118s
star xfv linux-2.6.21.tar.bz2
real 3m26.053s, user 0m43.069s, sys 0m33.726s
star -no-fsync -x -v -f linux-2.6.21.tar.bz2
real
On Jun 11, 2007, at 12:52 AM, Borislav Aleksandrov wrote:
Panic on snv_65&64 when:
#mkdir /disk
#mkfile 128m /disk/disk1
#mkfile 128m /disk/disk2
#zpool create data mirror /disk/disk1 /disk/disk2
#mkfile 128m /disk/disk1
#mkfile 128m /disk/disk2
At this point you have completely overwritten t
Just got the latest ;login: and Pawel has an article on "Porting the
Solaris ZFS File System to the FreeBSD Operating System".
Lots of interesting stuff in there, such as the differences between
OpenSolaris and FreeBSD, as well as getting ZFS to work with FreeBSD
jails (a new 'jailed' prope
It would be very nice if the improvements were documented
somewhere :-)
Cindy has been doing a good job of putting the new features into the
admin guide:
http://www.opensolaris.org/os/community/zfs/docs/zfsadmin.pdf
Check out the "What's New in ZFS?" section.
eric
Hi Jeff,
You should take a look at this:
http://blogs.sun.com/erickustarz/entry/poor_man_s_cluster_end
We added the hostid/hostname to the vdev label. What this means is
that we stop you from importing a pool onto multiple machines (which
would have led to corruption).
eric
On May 30, 2
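In practice that protection shows up at import time, roughly like this (output paraphrased from memory rather than copied from a real system, and pool/host names are made up):

host2# zpool import tank
cannot import 'tank': pool may be in use from other system
use '-f' to import anyway
host2# zpool import -f tank    # only if you are certain the other host is done with it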
On Jun 2, 2007, at 8:27 PM, Jesus Cea wrote:
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
Toby Thain wrote:
Sorry, I should have cited it. Blew my chance to moderate by
posting to
the thread :)
http://ask.slashdot.org/comments.pl?sid=236627&cid=19319903
I computed the FUD factor by sorti
On Jun 1, 2007, at 2:09 PM, John Plocher wrote:
eric kustarz wrote:
We specifically didn't allow the admin the ability to truncate/
prune the log, as then it becomes unreliable - oops, I made a
mistake, I'd better clear the log and file the bug against zfs
I understand - auditing
2) Following Chris's advice to do more with snapshots, I
played with his cron-triggered snapshot routine:
http://blogs.sun.com/chrisg/entry/snapping_every_minute
Now, after a couple of days, zpool history shows almost
100,000 lines of output (from all the snapshots and
deletions..
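For reference, the crontab entry behind that sort of routine is a one-liner along these lines (dataset name hypothetical; the % signs must be escaped or cron treats them as line breaks):

0 * * * * /usr/sbin/zfs snapshot tank/home@auto-`date +\%Y\%m\%d\%H\%M`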
On May 29, 2007, at 1:25 PM, Lida Horn wrote:
Point one, the comments that Eric made do not give the complete
picture.
All the tests that Eric's referring to were done through the ZFS
filesystem.
When sequential I/O is done to the disk directly, there is no
performance
degradation at all.
Do
I've been looking into the performance impact of NCQ. Here's what I
found out:
http://blogs.sun.com/erickustarz/entry/ncq_performance_analysis
Curiously, there's not too much performance data on NCQ available via
a google search ...
enjoy,
eric
Won't disabling the ZIL minimize the chance of a consistent ZFS
filesystem
if - for some reason - the server did an unplanned reboot?
The ZIL in ZFS is only used to speed up various workloads; it has
nothing to
do with file system consistency. ZFS is always consistent on disk no
matter if you use
Don't take these numbers too seriously - those were only first tries to
see where my port is, and I was using OpenSolaris for comparison, which
has debugging turned on.
Yeah, ZFS does a lot of extra work with debugging on (such as
verifying checksums in the ARC), so always do serious performa
On May 15, 2007, at 4:49 PM, Nigel Smith wrote:
I seem to have got the same core dump, in a different way.
I had a zpool set up on an iscsi 'disk'. For details see:
http://mail.opensolaris.org/pipermail/storage-discuss/2007-May/
001162.html
But after a reboot the iscsi target was no longer ava
On May 15, 2007, at 9:37 AM, XIU wrote:
Hey,
I'm currently running on Nexenta alpha 6 and I have some corrupted
data in a pool.
The output from sudo zpool status -v data is:
pool: data
state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption.