Is there any change regarding fsflush, such as the autoup tunable, for ZFS?
Thanks
Do you want data availability, data retention, space, or performance?
-- richard
Robert Milkowski wrote:
Hello zfs-discuss,
While waiting for Thumpers to come I'm thinking how to configure
them. I would like to use raid-z. As the Thumper has 6 SATA controllers
with 8 ports each, maybe it would
As far as ZFS performance is concerned, O_DSYNC and O_SYNC are equivalent.
This is because ZFS saves all POSIX-layer transactions (e.g. WRITE,
SETATTR, RENAME, ...) in the log, so both metadata and data are always
re-created if a replay is needed.
Anton B. Rang wrote on 10/12/06 15:42:
fsync() sho
Hello zfs-discuss,
While waiting for Thumpers to come I'm thinking how to configure
them. I would like to use raid-z. As the Thumper has 6 SATA controllers
with 8 ports each, maybe it would make sense to create raid-z groups
of 6 disks, one from each controller. Then combine 7 such
group
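A rough sketch of that layout; the controller and target names (c0..c5, t0..t7) are hypothetical and will differ on a real Thumper:
  zpool create tank \
    raidz c0t0d0 c1t0d0 c2t0d0 c3t0d0 c4t0d0 c5t0d0 \
    raidz c0t1d0 c1t1d0 c2t1d0 c3t1d0 c4t1d0 c5t1d0 \
    raidz c0t2d0 c1t2d0 c2t2d0 c3t2d0 c4t2d0 c5t2d0
  # ...and so on for the remaining 6-disk groups, keeping two disks back for a boot mirror.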
On Oct 5, 2006, at 2:28 AM, George Wilson wrote:
Andreas,
The first ZFS patch will be released in the upcoming weeks. For
now, the latest available bits are the ones from s10 6/06.
George, will there at least be a T patch available?
I'm anxious for these because my ZFS-backed NFS server ju
Hello Anton,
Thursday, October 12, 2006, 11:45:40 PM, you wrote:
ABR> Yes, set the block size to 8K, to avoid a read-modify-write cycle inside
ZFS.
Unfortunately it won't help on s10 6/06 until a patch is released to fix a
bug (not reading the old block if it's "overwritten"). However, it still is
wise t
Yes, set the block size to 8K, to avoid a read-modify-write cycle inside ZFS.
As you suggest, using a separate mirror for the transaction log will only be
useful if you're on different disks -- otherwise you will be forcing the disk
head to move back and forth between slices each time you write.
fsync() should theoretically be better because O_SYNC requires that each
write() include writing not only the data but also the inode and all indirect
blocks back to the disk.
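A minimal sketch of that tuning, assuming a hypothetical pool named tank and an 8K database block size:
  zfs create tank/pgdata
  zfs set recordsize=8k tank/pgdata   # set before loading data; only newly written files pick it up
  # a separate area for the transaction log only pays off on separate physical disks,
  # e.g. its own small mirrored pool:
  zpool create logpool mirror c2t0d0 c2t1d0
  zfs create logpool/pglog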
This really bothers me too.. I was an early x2100 adopter and have been
waiting almost a year for this.. come on Sun, please release a patch
to fully support your own hardware on Solaris 10!!
On Oct 11, 2006, at 9:23 PM, Frank Cusack wrote:
On October 11, 2006 11:14:59 PM -0400 Dale Ghent
<[
Bart Smaalders wrote:
Sergey wrote:
+ a little addition to the original question:
Imagine that you have a RAID attached to a Solaris server. There's ZFS
on the RAID. And someday you lose your server completely (fried
motherboard, physical crash, ...). Is there any way to connect the
RAID to some an
Steven Goldberg wrote:
Thanks Matt. So is the config/meta info for the pool that is stored
within the pool kept in a file? Is the file user readable or binary?
It is not user-readable. See the on-disk format document, linked here:
http://www.opensolaris.org/os/community/zfs/docs/
--matt
Thanks Matt. So is the config/meta info for the pool that is stored
within the pool kept in a file? Is the file user readable or binary?
Steve
Matthew Ahrens wrote:
James McPherson wrote:
On 10/12/06, Steve Goldberg
<[EMAIL PROTECTED]> wrote:
Where is the ZFS configuration (
Brian Hechinger wrote:
Ok, previous threads have led me to believe that I want to make raidz
vdevs [0] either 3, 5 or 9 disks in size [1]. Let's say I have 8 disks.
Do I want to create a zfs pool with a 5-disk vdev and a 3-disk vdev?
Are there performance issues with mixing differently sized ra
Sergey wrote:
+ a little addition to the original question:
Imagine that you have a RAID attached to a Solaris server. There's ZFS on the RAID.
And someday you lose your server completely (fried motherboard, physical crash,
...). Is there any way to connect the RAID to another server and restore
James C. McPherson wrote:
Dick Davies wrote:
On 12/10/06, Matthew Ahrens <[EMAIL PROTECTED]> wrote:
FYI, /etc/zfs/zpool.cache just tells us what pools to open when you
boot
up. Everything else (mountpoints, filesystems, etc) is stored in the
pool itself.
Does anyone know of any plans or st
> I was asking if it was going to be replaced because it would really
> simplify ZFS root.
>
> Dick.
>
> [0] going from:
> http://solaristhings.blogspot.com/2006/06/zfs-root-on-solaris-part-3.html
I don't know about "replaced", but presumably with the addition of
hostid to the pool data, it co
John Sonnenschein wrote:
I *just* figured out this problem, and I'm looking for a potential solution (or at the
very least some validation that I'm not crazy)
Okay, so here's the deal. I've been using this terrible horrible no-good very bad hackup of a couple partitions spread across 3 drives as a zpo
On Thu, Joerg Schilling wrote:
> Spencer Shepler <[EMAIL PROTECTED]> wrote:
>
> > On Thu, Joerg Schilling wrote:
> > > Spencer Shepler <[EMAIL PROTECTED]> wrote:
> > >
> > > > The close-to-open behavior of NFS clients is what ensures that the
> > > > file data is on stable storage when close() re
Quite helpful, thank you.
I think I should set the ZFS mirror block size to 8K to match it with the db, right?
And do you think I should create another ZFS mirror for the transaction log of
pgsql? Or is this only useful if I create the ZFS mirror on a different set of
disks rather than slices?
Mete
Ceri Davies wrote:
On Thu, Oct 12, 2006 at 02:06:15PM +0100, Dick Davies wrote:
On 12/10/06, Ceri Davies <[EMAIL PROTECTED]> wrote:
On Wed, Oct 11, 2006 at 11:49:48PM -0700, Matthew Ahrens wrote:
FYI, /etc/zfs/zpool.cache just tells us what pools to open when you boot
up. Everything else (mou
Spencer Shepler <[EMAIL PROTECTED]> wrote:
> On Thu, Joerg Schilling wrote:
> > Spencer Shepler <[EMAIL PROTECTED]> wrote:
> >
> > > The close-to-open behavior of NFS clients is what ensures that the
> > > file data is on stable storage when close() returns.
> >
> > In the 1980s this was definit
Mirroring will give you the best performance for small write operations.
If you can get by with two disks, I’d divide each of them into two slices, s0
and s1, say. Set up an SVM mirror between d0s0 and d1s0 and use that for your
root. Set up a ZFS mirror between d0s1 and d1s1 and use that for yo
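A sketch of that two-disk split, using hypothetical device names (the rest of the root-mirroring procedure, metaroot and a reboot, is omitted):
  metadb -a -f c0t0d0s7 c0t1d0s7      # SVM state database replicas
  metainit -f d10 1 1 c0t0d0s0        # submirror on the first root slice
  metainit d20 1 1 c0t1d0s0
  metainit d0 -m d10                  # one-way mirror for root
  metattach d0 d20                    # attach the second submirror
  zpool create datapool mirror c0t0d0s1 c0t1d0s1   # ZFS mirror on the s1 slices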
The configuration data is stored on the disk devices themselves, at least
primarily.
There is also a copy of the basic configuration data in the file
/etc/zfs/zpool.cache on the boot device. If this file is missing, ZFS will not
automatically import pools, but you can manually import them.
(I’
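If the cache file is gone, the pools can still be brought back by hand; the pool name here is only an example:
  zpool import                    # scan attached devices and list importable pools
  zpool import tank               # import a pool by name (or by its numeric id)
  zpool import -d /dev/dsk tank   # point the device scan at a specific directory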
On Thu, 12 Oct 2006, Ian Collins wrote:
> Al Hopper wrote:
>
> >On Wed, 11 Oct 2006, Dana H. Myers wrote:
> >
> >
> >
> >>Al Hopper wrote:
> >>
> >>
> >>
> >>>Memory: DDR-400 - your choice but Kingston is always a safe bet. 2*512MB
> >>>sticks for a starter, cost-effective system. 4*512MB for a
On Thu, Joerg Schilling wrote:
> Spencer Shepler <[EMAIL PROTECTED]> wrote:
>
> > The close-to-open behavior of NFS clients is what ensures that the
> > file data is on stable storage when close() returns.
>
> In the 1980s this was definitely not the case. When did this change?
It has not. NFS
On Thu, Oct 12, 2006 at 08:52:34AM -0500, Al Hopper wrote:
> On Thu, 12 Oct 2006, Brian Hechinger wrote:
>
> > Ok, previous threads have led me to believe that I want to make raidz
> > vdevs [0] either 3, 5 or 9 disks in size [1]. Let's say I have 8 disks.
> > Do I want to create a zfs pool with
On Thu, Oct 12, 2006 at 02:54:05PM +0100, Ceri Davies wrote:
> On Thu, Oct 12, 2006 at 02:06:15PM +0100, Dick Davies wrote:
> > On 12/10/06, Ceri Davies <[EMAIL PROTECTED]> wrote:
> > >On Wed, Oct 11, 2006 at 11:49:48PM -0700, Matthew Ahrens wrote:
> >
> > >> FYI, /etc/zfs/zpool.cache just tells u
On Thu, Oct 12, 2006 at 07:53:37AM -0600, Mark Maybee wrote:
> Ceri Davies wrote:
> >On Wed, Oct 11, 2006 at 11:49:48PM -0700, Matthew Ahrens wrote:
> >
> >>James McPherson wrote:
> >>
> >>>On 10/12/06, Steve Goldberg <[EMAIL PROTECTED]> wrote:
> >>>
> Where is the ZFS configuration (zpools, mo
On Thu, Oct 12, 2006 at 02:06:15PM +0100, Dick Davies wrote:
> On 12/10/06, Ceri Davies <[EMAIL PROTECTED]> wrote:
> >On Wed, Oct 11, 2006 at 11:49:48PM -0700, Matthew Ahrens wrote:
>
> >> FYI, /etc/zfs/zpool.cache just tells us what pools to open when you boot
> >> up. Everything else (mountpoin
Ceri Davies wrote:
On Wed, Oct 11, 2006 at 11:49:48PM -0700, Matthew Ahrens wrote:
James McPherson wrote:
On 10/12/06, Steve Goldberg <[EMAIL PROTECTED]> wrote:
Where is the ZFS configuration (zpools, mountpoints, filesystems,
etc) data stored within Solaris? Is there something akin to vfs
On Thu, 12 Oct 2006, Brian Hechinger wrote:
> Ok, previous threads have led me to believe that I want to make raidz
> vdevs [0] either 3, 5 or 9 disks in size [1]. Let's say I have 8 disks.
> Do I want to create a zfs pool with a 5-disk vdev and a 3-disk vdev?
Personally I think that 5 disks fo
Dick Davies wrote:
On 11/10/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
Dick Davies wrote:
> On 11/10/06, Peter van Gemert <[EMAIL PROTECTED]> wrote:
>> You might want to check the HCL at http://www.sun.com/bigadmin/hcl to
>> find out which hardware is supported by Solaris 10.
> I tried th
Dick Davies wrote:
On 12/10/06, Michael Schuster <[EMAIL PROTECTED]> wrote:
James C. McPherson wrote:
> Dick Davies wrote:
>> On 12/10/06, Matthew Ahrens <[EMAIL PROTECTED]> wrote:
>>
>>> FYI, /etc/zfs/zpool.cache just tells us what pools to open when
you boot
>>> up. Everything else (mountpo
On 12/10/06, Ceri Davies <[EMAIL PROTECTED]> wrote:
On Wed, Oct 11, 2006 at 11:49:48PM -0700, Matthew Ahrens wrote:
> FYI, /etc/zfs/zpool.cache just tells us what pools to open when you boot
> up. Everything else (mountpoints, filesystems, etc) is stored in the
> pool itself.
What happens if
On Thu, Oct 12, 2006 at 05:46:24PM +1000, Nathan Kroenert wrote:
>
> A few of the RAID controllers I have played with have an option to
> 'rebuild' a raid set, which I get the impression (though I have never
> tried) allows you to essentially tell the controller there is a raid set
> there, and if
Ok, previous threads have led me to believe that I want to make raidz
vdevs [0] either 3, 5 or 9 disks in size [1]. Let's say I have 8 disks.
Do I want to create a zfs pool with a 5-disk vdev and a 3-disk vdev?
Are there performance issues with mixing differently sized raidz vdevs
in a pool? If
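For illustration only (device names are hypothetical), the 5-disk plus 3-disk split being asked about would look like:
  zpool create tank \
    raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 \
    raidz c2t0d0 c2t1d0 c2t2d0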
On Wed, Oct 11, 2006 at 11:49:48PM -0700, Matthew Ahrens wrote:
> James McPherson wrote:
> >On 10/12/06, Steve Goldberg <[EMAIL PROTECTED]> wrote:
> >>Where is the ZFS configuration (zpools, mountpoints, filesystems,
> >>etc) data stored within Solaris? Is there something akin to vfstab
> >>or per
Yeah, I looked at the tool. Unfortunately it doesn't help at all with choosing what to buy.
On 10/12/06, Dick Davies <[EMAIL PROTECTED]> wrote:
On 11/10/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> Dick Davies wrote:
> > On 11/10/06, Peter van Gemert <[EMAIL PROTECTED]> wrote:
> >> You might wa
On 12/10/06, Michael Schuster <[EMAIL PROTECTED]> wrote:
James C. McPherson wrote:
> Dick Davies wrote:
>> On 12/10/06, Matthew Ahrens <[EMAIL PROTECTED]> wrote:
>>
>>> FYI, /etc/zfs/zpool.cache just tells us what pools to open when you boot
>>> up. Everything else (mountpoints, filesystems, etc
James C. McPherson wrote:
Dick Davies wrote:
On 12/10/06, Matthew Ahrens <[EMAIL PROTECTED]> wrote:
FYI, /etc/zfs/zpool.cache just tells us what pools to open when you boot
up. Everything else (mountpoints, filesystems, etc) is stored in the
pool itself.
Does anyone know of any plans or str
Dick Davies wrote:
On 12/10/06, Matthew Ahrens <[EMAIL PROTECTED]> wrote:
FYI, /etc/zfs/zpool.cache just tells us what pools to open when you boot
up. Everything else (mountpoints, filesystems, etc) is stored in the
pool itself.
Does anyone know of any plans or strategies to remove this depe
"Frank Batschulat (Home)" <[EMAIL PROTECTED]> wrote:
> On Tue, 10 Oct 2006 01:25:36 +0200, Roch <[EMAIL PROTECTED]> wrote:
>
> > You tell me? We have 2 issues
> >
> > can we make 'tar x' over direct attach safe (fsync)
> > and posix compliant while staying close to current
> > perfor
Spencer Shepler <[EMAIL PROTECTED]> wrote:
> The close-to-open behavior of NFS clients is what ensures that the
> file data is on stable storage when close() returns.
In the 1980s this was definitely not the case. When did this change?
> The meta-data requirements of NFS is what ensures that fi
Roch <[EMAIL PROTECTED]> wrote:
> > Neither Sun tar nor GNU tar call fsync which is the only way to
> > enforce data integrity over NFS.
>
> I tend to agree with this although I'd say that in practice,
> from a performance perspective, calling fsync should be more
> relevant for direct attach.
Hi all,
I am going to have Solaris 06/06, a database (PostgreSQL), and an application server
(Tomcat or Sun App Server) on x86, with 2GB RAM and at most 3 SCSI (hardware RAID
possible) disks. There will be no e-mail or file serving.
The ratio of write/read operations to the database is >=2 (and there is no other
s
On 12/10/06, Matthew Ahrens <[EMAIL PROTECTED]> wrote:
FYI, /etc/zfs/zpool.cache just tells us what pools to open when you boot
up. Everything else (mountpoints, filesystems, etc) is stored in the
pool itself.
Does anyone know of any plans or strategies to remove this dependency?
--
Rasputin
On 11/10/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
Dick Davies wrote:
> On 11/10/06, Peter van Gemert <[EMAIL PROTECTED]> wrote:
>> You might want to check the HCL at http://www.sun.com/bigadmin/hcl to
>> find out which hardware is supported by Solaris 10.
> I tried that myself - th
On 10/11/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
So are there any PCIe SATA cards that are supported? I was hoping to go
with a Sempron64. Using old PCI seems like a waste.
I recently built an AM2 Sempron64-based ZFS box.
motherboard: ASUS M2NPV-MX
cpu: amd am2 sempron64 2800+
The
On Wed, Oct 11, 2006 at 06:36:28PM -0500, David Dyer-Bennet wrote:
> On 10/11/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
>
> >> The more I learn about Solaris hardware support, the more I see it as
> >> a minefield.
> >
> >
> >I've found this to be true for almost all open source platforms w
I'll take a crack at this.
First off, I'm assuming that the RAID you are talking about is provided
by the hardware and not by ZFS.
If that's the case, then it will depend on the way you created the raid
set, the bios of the controller, and whether or not these two things
match up with any ot
On 10/12/06, John Sonnenschein <[EMAIL PROTECTED]> wrote:
Well, it's an SiS 960 board, and it appears my only option to turn off probing
of the drives is to enable RAID mode (which makes them inaccessible to the OS)
I think the option is in the standard CMOS setup section, and allows you
to set
Hi Darren,
The Solaris Operating System for x86 Installation Check Tool 1.1 is
designed to report whether Solaris drivers are available for the
devices the tool detects on an x86 system, and to determine quickly whether
you are likely to be able to install the Solaris OS on your system. It is not
desig
+ a little addition to the original question:
Imagine that you have a RAID attached to a Solaris server. There's ZFS on the RAID.
And someday you lose your server completely (fried motherboard, physical crash,
...). Is there any way to connect the RAID to another server and restore
ZFS layout (no
Al Hopper wrote:
>On Wed, 11 Oct 2006, Dana H. Myers wrote:
>
>
>
>>Al Hopper wrote:
>>
>>
>>
>>>Memory: DDR-400 - your choice but Kingston is always a safe bet. 2*512MB
>>>sticks for a starter, cost-effective system. 4*512MB for a good long-term
>>>solution.
>>>
>>>
>>Due to fan-ou
Well, it's an SiS 960 board, and it appears my only option to turn off probing
of the drives is to enable RAID mode (which makes them inaccessible to the OS).
What would be my next (cheapest) option, a proper SATA add-in card? I've heard
good things about the Silicon Image 3132-based cards, but I'