On Mon, Mar 12, 2007 at 09:34:22AM +0100, Robert Milkowski wrote:
> Hello przemolicc,
>
> Monday, March 12, 2007, 8:50:57 AM, you wrote:
>
> ppf> On Sat, Mar 10, 2007 at 12:08:22AM +0100, Robert Milkowski wrote:
> >> Hello Carisdad,
> >>
> >> Friday, March 9, 2007, 7:05:02 PM, you wrote:
> >>
>
Working with a small txg_time means we are hit by the pool
sync overhead more often. This is why the per-second
throughput has smaller peak values.
With txg_time = 5, we have another problem which is that
depending on timing of the pool sync, some txg can end up
with too little data in them an
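For anyone who wants to experiment, a minimal sketch of reading and changing
the txg sync interval with mdb on a live system, assuming the tunable in your
build is still named txg_time (later builds rename it); the value 5 is only an
illustration and the change does not persist across a reboot:

    # read the current value (decimal)
    echo 'txg_time/D' | mdb -k

    # set it to 5 seconds on the running kernel
    echo 'txg_time/W 0t5' | mdb -kw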
Ayaz,
Ayaz Anjum wrote:
HI !
I have some concerns here. From my experience in the past, touching a
file (doing some I/O) will cause the ufs filesystem to fail over, unlike
zfs where it did not! Why is the behaviour of zfs different from ufs? Is
this not compromising data integrity?
As ot
Brian Hechinger wrote:
On Sun, Mar 11, 2007 at 11:21:13AM -0700, Frank Cusack wrote:
On March 11, 2007 6:05:13 PM + Tim Foster <[EMAIL PROTECTED]> wrote:
* ability to add disks to mirror the root filesystem at any time,
should they become available
Can't this be done with
Did you run "touch" from a client?
ZFS and UFS are different in general, but in response to a local "touch"
command neither needs to generate immediate I/O, while in response to a
client "touch" both do.
-r
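A rough way to see this for yourself, as a sketch only (the pool name and
mount points are placeholders): watch pool I/O while touching a file locally
and then from an NFS client.

    # terminal 1: watch physical I/O on the pool (assumed to be "tank")
    zpool iostat tank 1

    # terminal 2: a local touch usually shows no immediate writes;
    # the change goes out with the next txg sync
    touch /tank/fs/testfile

    # the same touch from an NFS client is committed synchronously,
    # so writes show up right away
    touch /net/server/tank/fs/testfile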
Ayaz Anjum writes:
> HI !
>
> Well as per my actual post, i created a zfs file as part
Frank Cusack writes:
> On March 7, 2007 8:50:53 AM -0800 Matt B <[EMAIL PROTECTED]> wrote:
> > Any thoughts on the best practice points I am raising? It disturbs me
> > that it would make a statement like "don't use slices for production".
>
> I think that's just a performance thing.
>
Ri
I am configuring my first thumper. Our goal is to reduce the odds that a single
failure will take down the file system. Does such a design exist? I cannot
find it.
Questions:
1) Do boot disks (currently controller 5, disk 0 and 4) have to be on one
controller or can they be split (e.g. contolle
> ZFS supports swap to /dev/zvol, however, I do not
> have data related to
> performance.
> Also note that ZFS does not support dump yet, see RFE
> 5008936.
I am getting ready to install a new server from scratch. While I had been
hoping to do a full-raidz2 system, from what I am understanding h
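For reference, a minimal sketch of putting swap on a zvol; the pool name,
volume name and 2g size below are placeholders only:

    # create a 2 GB volume in an existing pool
    zfs create -V 2g tank/swapvol

    # add it as swap and verify
    swap -a /dev/zvol/dsk/tank/swapvol
    swap -l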
Malachi de AElfweald wrote:
1) How do I at least mirror the root partition during install (instead of the
convoluted after-the-fact instructions all over the net)
Use Jumpstart. A profile to install your machine with mirroring should
be pretty short, simple, and easy to create. It will be do
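As a hedged sketch of what such a profile could look like (device names,
sizes and metadevice names are assumptions, not a tested profile):

    install_type    initial_install
    cluster         SUNWCXall
    partitioning    explicit
    # mirror / and swap across the two boot disks with SVM
    filesys         mirror:d10 c1t0d0s0 c1t1d0s0 free /
    filesys         mirror:d20 c1t0d0s1 c1t1d0s1 2048 swap
    # state database replicas on both disks
    metadb          c1t0d0s7 size 8192 count 3
    metadb          c1t1d0s7 size 8192 count 3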
On 12-Mar-07, at 11:28 AM, Malachi de AElfweald wrote:
ZFS supports swap to /dev/zvol, however, I do not
have data related to
performance.
Also note that ZFS does not support dump yet, see RFE
5008936.
I am getting ready to install a new server from scratch. While I
had been hoping to do a
> I am getting ready to install a new server from scratch. While I had
> been hoping to do a full-raidz2 system, from what I am understanding
> here, my best bet is to still do a UFS drive for the root/boot
> partition, since ZFS does not support dump. Is this correct?
There is no official suppor
I took a look at some Jumpstart instructions...
As a n00b to Solaris Administration, I think I am likely to screw that up at
the moment.
I know that with my FreeBSD system, I specified the RAID at the hardware
level then fdisk detected the volume as something I could install to...
Last time I tr
> I know that with my FreeBSD system, I specified the RAID at the hardware
> level then fdisk detected the volume as something I could install to...
> Last time I tried that with Solaris, it didn't detect any drives.
HW raid depends on the specific hardware and the existence of Solaris
drivers to
This is great news. A question crossed my mind. I'm sure it's a dumb one but I
thought I'd ask anyway...
How will Live Upgrade work when the boot partition is in the pool?
Gary
After the interesting revelations about the X2100 and its hot-swap abilities,
what are the abilities of the X2200-M2's disk subsystem, and is ZFS going to
tickle any weirdness out of them?
-brian
--
"The reason I don't use Gnome: every single other window manager I know of is
very powerfully ext
[sorry for the late reply, the original got stuck in the mail]
clarification below...
> > Ian Collins wrote:
> > > Thanks for the heads up.
> > >
> > > I'm building a new file server at the moment and
> > I'd like to make sure I
> > > can migrate to ZFS boot when it arrives.
> > >
> > > My current
Hi Brian,
To my understanding the X2100 M2 and X2200 M2 are basically the same
board OEM'd from Quanta...except the 2200 M2 has two sockets.
As to ZFS and their weirdness, it would seem to me that fixing it
would be more an issue of the SATA/SCSI driver. I may be wrong here.
-J
On 3/12/07, Bri
Richard Elling wrote:
[sorry for the late reply, the original got stuck in the mail]
clarification below...
Ian Collins wrote:
Thanks for the heads up.
I'm building a new file server at the moment and
I'd like to make sure I
can migrate to ZFS boot when it arrives.
My current plan is to cr
Jason J. W. Williams wrote:
Hi Brian,
To my understanding the X2100 M2 and X2200 M2 are basically the same
board OEM'd from Quanta...except the 2200 M2 has two sockets.
As to ZFS and their weirdness, it would seem to me that fixing it
would be more an issue of the SATA/SCSI driver. I may be wro
On 12-Mar-07, at 2:37 PM, Bart Smaalders wrote:
Jason J. W. Williams wrote:
Hi Brian,
To my understanding the X2100 M2 and X2200 M2 are basically the same
board OEM'd from Quanta...except the 2200 M2 has two sockets.
As to ZFS and their weirdness, it would seem to me that fixing it
would be mo
So what is the progress of the SATA Framework integration??
http://www.opensolaris.org/os/community/on/flag-days/pages/2006011301/
I don't have an X2200-M2, but I would love to run my SATA drives in SATA mode :)
Rayson
On 3/12/07, Toby Thain <[EMAIL PROTECTED]> wrote:
On 12-Mar-07, at 2:37
On March 12, 2007 2:50:14 PM -0400 Rayson Ho <[EMAIL PROTECTED]> wrote:
So what is the progress of the SATA Framework integration??
http://www.opensolaris.org/os/community/on/flag-days/pages/2006011301/
The framework is integrated in both Solaris and OpenSolaris, as demonstrated
by the fact th
On Mon, Mar 12, 2007 at 12:14:00PM -0700, Frank Cusack wrote:
> On March 12, 2007 2:50:14 PM -0400 Rayson Ho <[EMAIL PROTECTED]> wrote:
> >So what is the progress of the SATA Framework integration??
> >
> >http://www.opensolaris.org/os/community/on/flag-days/pages/2006011301/
>
> The framework is
On Mon, 2007-03-12 at 12:14 -0700, Frank Cusack wrote:
> On March 12, 2007 2:50:14 PM -0400 Rayson Ho <[EMAIL PROTECTED]> wrote:
> > So what is the progress of the SATA Framework integration??
> >
> > http://www.opensolaris.org/os/community/on/flag-days/pages/2006011301/
>
> The framework is integ
What about the nVidia MCP55PXE (Asus L1N64-SLI board)?
Malachi
On 3/12/07, Marty Faltesek <[EMAIL PROTECTED]> wrote:
On Mon, 2007-03-12 at 12:14 -0700, Frank Cusack wrote:
> On March 12, 2007 2:50:14 PM -0400 Rayson Ho <[EMAIL PROTECTED]>
wrote:
> > So what is the progress of the SATA Framewo
On 12/03/07, Darren Dunham <[EMAIL PROTECTED]> wrote:
> On Sun, Mar 11, 2007 at 11:21:13AM -0700, Frank Cusack wrote:
> > On March 11, 2007 6:05:13 PM + Tim Foster <[EMAIL PROTECTED]> wrote:
> > >* ability to add disks to mirror the root filesystem at any time,
> > > should they become avai
> > *if* you already have the root filesystem under SVM in the first place,
> > then no reboot should be required to add a mirror. And I assume that's
> > all we're talking about for the ZFS mirroring as well.
>
> Is there any reason you'd have SVM on just the one partition? I can
> see why you'd
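For the SVM case, a hedged sketch of attaching a second submirror to an
existing root mirror without a reboot (metadevice and slice names are
assumptions, and this presumes root is already a one-way mirror d0 with state
database replicas in place):

    # build a submirror on the newly available disk
    metainit d20 1 1 c1t1d0s0

    # attach it; the resync starts immediately, no reboot needed
    metattach d0 d20

    # watch the resync
    metastat d0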
What issues, if any, are likely to surface with using Solaris
inside vmware as a guest os, if I choose to use ZFS?
I'm assuming that ZFS's ability to maintain data integrity
will prevail and protect me from any problems that the
addition of vmware might introduce.
Are there likely to be any issu
> I am configuring my first thumper. Our goal is to
> reduce the odds that a single failure will take down
> the file system. Does such a design exist? I cannot
> find it.
It should ship this way. There should be a zpool created with redundancy.
> Questions:
> 1)Do boot disks (currently contro
My copies of zpool(1M) have three entries for zpool import:
zpool import [-d dir] [-D]
zpool import [-d dir] [-D] [-f] [-o opts] [-R root] pool | id [newpool]
zpool import [-d dir] [-D] [-f] [-a]
Shouldn't the last one be
zpool import [-d dir] [-D] [-f] -a
? That is, if the
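For what it's worth, a quick illustration of how the forms behave (the pool
name is a placeholder): without -a and without a pool name the command only
lists what it can find, which is why the third form arguably only makes sense
with -a.

    # list importable pools, importing nothing
    zpool import

    # import one specific pool
    zpool import tank

    # import every pool that can be found
    zpool import -f -a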
Hi Marty,
We'd love to beta the driver. Currently, have 5 X2100 M2s in
production and 1 in development.
Best Regards,
Jason
On 3/12/07, Marty Faltesek <[EMAIL PROTECTED]> wrote:
On Mon, 2007-03-12 at 12:14 -0700, Frank Cusack wrote:
> On March 12, 2007 2:50:14 PM -0400 Rayson Ho <[EMAIL PROTEC
> I have a setup with a T2000 SAN attached to 90 500GB SATA drives
> presented as individual luns to the host. We will be sending mostly
> large streaming writes to the filesystems over the network (~2GB/file)
> in 5/6 streams per filesystem. Data protection is pretty important, but
> we need
On 3/12/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
What issues, if any, are likely to surface with using Solaris
inside vmware as a guest os, if I choose to use ZFS?
works great in vmware server, IO rates suck.
I'm assuming that ZFS's ability to maintain data integrity
will prevail an
On Mon, 2007-03-12 at 20:53 -0600, James Dickens wrote:
>
>
> On 3/12/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> What issues, if any, are likely to surface with using Solaris
> inside vmware as a guest os, if I choose to use ZFS?
>
> works great in vmware server, IO rate
On 3/12/07, Erast Benson <[EMAIL PROTECTED]> wrote:
On Mon, 2007-03-12 at 20:53 -0600, James Dickens wrote:
>
>
> On 3/12/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> What issues, if any, are likely to surface with using Solaris
> inside vmware as a guest os, if I choose to
Len Zaifman wrote:
> 9 pools of 5 disk raidzs.
You should use only one pool, with multiple raidz vdevs. Perhaps you
just made a typo, as the below commands create one pool ("adp").
For maximum redundancy, consider using double-parity raidz (raidz2).
As Richard mentioned, don't worry about w
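As a sketch of the single-pool layout being suggested (device names are
placeholders, not the poster's actual commands): one pool built from several
raidz2 vdevs rather than many small pools.

    # one pool, three double-parity vdevs of five disks each
    zpool create adp \
        raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 \
        raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 \
        raidz2 c4t0d0 c4t1d0 c4t2d0 c4t3d0 c4t4d0

    zpool status adp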