Yes, I've learnt that I get the e-mail reply a long while before it appears on
the boards. I'm not entirely sure how these boards are run; it's certainly odd for
somebody used to forums rather than mailing lists, but they do seem to work
eventually :)
Thanks for the help, Vic, will try to get back into
Alan,
I'm using Nexenta Core RC4, which is based on Nevada build 81/82.
The ZFS casesensitivity property is set to 'insensitive'.
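For reference, casesensitivity can only be set when the file system is created,
and it can be verified afterwards with zfs get ('tank/fs' below is just a
placeholder dataset name):

# zfs create -o casesensitivity=insensitive tank/fs
# zfs get casesensitivity tank/fs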
Best regards.
Maurilio.
Maybe a basic zfs question ...
I have a pool:
# zpool status backup
pool: backup
state: ONLINE
scrub: none requested
config:
NAME          STATE     READ WRITE CKSUM
backup        ONLINE       0     0     0
  mirror      ONLINE       0     0     0
    c1t0d0s
Hi,
I would like to continue this (maybe a bit outdated) thread with two questions:
1. How do I create a netinstall image?
2. How do I write the netinstall image back as an ISO 9660 image on DVD
(after patching it for zfsboot)?
Roman
Hi Marc,
# cat /etc/release
Solaris 10 8/07 s10x_u4wos_12b X86
I don't know if my application uses synchronous I/O transactions... I'm using
Sun's Glassfish v2u1.
I've deleted the ZFS partition and have set up an SVM stripe/mirror just to see
if "ZFS" is getting in the wa
We run a cron job that does a 'zpool status -x' to check for any degraded
pools. We just happened to find a pool degraded this morning by running 'zpool
status' by hand and were surprised that it was degraded as we didn't get a
notice from the cron job.
# uname -srvp
SunOS 5.11 snv_78 i386
#
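For what it's worth, the check itself is only a few lines of shell. A minimal
sketch of what such a cron job might look like (the mail address is a
placeholder), relying on 'zpool status -x' printing "all pools are healthy"
when nothing is wrong:

#!/bin/sh
# Mail the output of 'zpool status -x' whenever it reports a problem.
status=`/usr/sbin/zpool status -x`
if [ "$status" != "all pools are healthy" ]; then
        echo "$status" | mailx -s "zpool problem on `hostname`" admin@example.com
fi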
>
> While browsing the ZFS source code, I noticed that "usr/src/cmd/
> ztest/ztest.c", includes ztest_spa_rename(), a ZFS test which
> renames a ZFS storage pool to a different name, tests the pool
> under its new name, and then renames it back. I wonder why this
> functionality was not expo
There is a write-up of similar findings, and more information about
sharemgr:
http://developers.sun.com/solaris/articles/nfs_zfs.html
Unfortunately, I don't see anything that says those changes will be in
u5.
Shawn
On Feb 5, 2008, at 8:21 PM, Paul B. Henson wrote:
>
> I was curious to see abo
On Solaris 10 u3 (11/06) I can execute the following:
bash-3.00# mdb -k
Loading modules: [ unix krtld genunix specfs dtrace ufs sd pcipsy ip sctp usba
nca md zfs random ipc nfs crypto cpc fctl fcip logindmux ptm sppp ]
> arc::print
{
anon = ARC_anon
mru = ARC_mru
mru_ghost = ARC_mru_g
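As an aside, the same ARC statistics are also exported as a kstat, so they can
be read without the kernel debugger; for example (size is the current ARC size,
c the current target size):

# kstat -p zfs:0:arcstats
# kstat -p zfs:0:arcstats:size zfs:0:arcstats:c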
I disabled file prefetch and there was no effect.
Here are some performance numbers. Note that, when the application server used
a ZFS file system to save its data, the transaction took TWICE as long. For
some reason, though, iostat is showing 5x as much disk writing (to the physical
disks) o
On Feb 4, 2008, at 5:10 PM, Marion Hakanson wrote:
> [EMAIL PROTECTED] said:
>> FYI, you can use the '-c' option to compare results from various
>> runs and
>> have one single report to look at.
>
> That's a handy feature. I've added a couple of such comparisons:
> http://acc.ohsu.edu/
Jure Pečar wrote:
> Maybe a basic zfs question ...
>
> I have a pool:
>
> # zpool status backup
> pool: backup
> state: ONLINE
> scrub: none requested
> config:
>
> NAME          STATE     READ WRITE CKSUM
> backup        ONLINE       0     0     0
>   mirror      ONLINE
Hello everybody,
I'm thinking of building out a second machine as a backup for our mail
spool, to which I would push regular filesystem snapshots, something like a
warm/hot spare situation.
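A rough sketch of the kind of snapshot push I have in mind (pool, file system
and host names below are just placeholders):

# one-time full copy to seed the spare
zfs snapshot mailpool/spool@base
zfs send mailpool/spool@base | ssh sparehost zfs recv -d backuppool

# periodic incremental from cron
zfs snapshot mailpool/spool@today
zfs send -i mailpool/spool@base mailpool/spool@today | \
        ssh sparehost zfs recv -d backuppool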
Our mail spool is currently running snv_67 and the new machine would
probably be running whatever the lates
On Feb 6, 2008 6:36 PM, William Fretts-Saxton
<[EMAIL PROTECTED]> wrote:
> Here are some performance numbers. Note that, when the
> application server used a ZFS file system to save its data, the
> transaction took TWICE as long. For some reason, though, iostat is
> showing 5x as much disk writin
It is a striped/mirror:
# zpool status
NAME          STATE     READ WRITE CKSUM
pool1         ONLINE       0     0     0
  mirror      ONLINE       0     0     0
    c0t2d0    ONLINE       0     0     0
    c0t3d0    ONLINE       0     0     0
  mirror      ONL
I now have improved sata and marvell88sx driver modules that
deal with various error conditions in a much more solid way.
Changes include reducing the number of required device resets,
properly reporting media errors (rather than "no additional sense"),
clearing aborted packets more rapidly so th
Solaris 10u4, eh?
Sounds a lot like the fsync issues we ran into trying to run Cyrus mail-server
spools on ZFS.
This was highlighted for us by the filebench varmail test.
OpenSolaris nv78, however, worked very well.
William Fretts-Saxton sun.com> writes:
>
> I disabled file prefetch and there was no effect.
>
> Here are some performance numbers. Note that, when the application server
> used a ZFS file system to save its data, the transaction took TWICE as long.
> For some reason, though, iostat is showing
Hi all, any thoughts on if and when ZFS, MySQL, and Lustre 1.8 (and
beyond) will work together and be supported as such by Sun?
- Network Systems Architect
Advanced Digital Systems Internet
Marc Bevand wrote:
> William Fretts-Saxton sun.com> writes:
>
>> I disabled file prefetch and there was no effect.
>>
>> Here are some performance numbers. Note that, when the application server
>> used a ZFS file system to save its data, the transaction took TWICE as long.
>> For some reason,
[EMAIL PROTECTED] said:
> Your findings for random reads with or without NCQ match my findings:
> http://blogs.sun.com/erickustarz/entry/ncq_performance_analysis
>
> Disabling NCQ looks like a very tiny win for the multi-stream read case. I
> found a much bigger win, but I was doing RAID-0 inst
[EMAIL PROTECTED] said:
> Here are some performance numbers. Note that, when the application server
> used a ZFS file system to save its data, the transaction took TWICE as long.
> For some reason, though, iostat is showing 5x as much disk writing (to the
> physical disks) on the ZFS partition. C
Neil Perrin Sun.COM> writes:
>
> The ZIL doesn't do a lot of extra IO. It usually just does one write per
> synchronous request and will batch up multiple writes into the same log
> block if possible.
Ok. I was wrong then. Well, William, I think Marion Hakanson has the
most plausible explanatio
Hey all -
I'm working on an interesting issue where I'm seeing ZFS be quite
cranky about O_SYNC writes.
Bottom line is that I have a small test case that does essentially this:
open file for writing -- O_SYNC
loop(
write() 8KB of random data
print time taken
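One way to see where the time goes from the shell is to trace the test program
with truss; './synctest' is just a placeholder name for the test binary:

# -D prints the elapsed time between trace lines, -E the time spent
# inside each system call, and -t write limits tracing to write(2)
truss -D -E -t write ./synctest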