Robert,
We are aiming to get patches out by late September, which will include
this and many other fixes. I'll post all the changes in another
thread.
Thanks,
George
Robert Milkowski wrote:
Hello Fred,
Friday, July 28, 2006, 12:37:22 AM, you wrote:
FZ> Hi Robert,
FZ> The fix for 6
Malahat Qureshi wrote:
Can anyone share with me the steps to configure hardware RAID in a T2000
server (LSI drivers) and use a hardware-mirrored root disk?
Hi Malahat,
please view and follow the documentation:
http://docs.sun.com/source/819-3249-11/erie-volume-man.html
http://docs.sun.com/app/docs
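For what it's worth, the procedure in those documents boils down to something like the following sketch (the device names c0t0d0 and c0t1d0 are placeholders; check your own with format(1M) and read raidctl(1M) first, since creating the volume destroys the data on both member disks):
# raidctl
(lists any existing RAID volumes on the onboard LSI controller)
# raidctl -c c0t0d0 c0t1d0
(creates a hardware RAID 1 volume; the first disk becomes the primary and the volume keeps its name)
# raidctl
(the new volume should now appear, initially resynchronizing)
Once the volume is in place you re-label it with format(1M) and install, or restore, the root file system onto it as a single disk.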
Hi Folks, does anyone have a comparison between ZFS and VxFS? I'm working on a presentation for my management on this. Thanks in advance, Malahat Qureshi
Sean,
This is looking better! Once you pick up the latest ZFS changes that we
just put back into s10, you will be able to upgrade to ZFS version 3, which
provides such key features as hot spares, double-parity RAID-Z (RAID-6),
clone promotion, and fast snapshots. Additionally, there are more performance gains that
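For readers who haven't used them yet, those version 3 features correspond roughly to commands like these (an illustrative sketch only; the pool and device names tank, tank2, c1t3d0, etc. are made up):
# zpool add tank spare c1t3d0
(attaches a hot spare that can take over when a device in the pool fails)
# zpool create tank2 raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0
(double-parity RAID-Z, the ZFS equivalent of RAID-6)
# zfs promote tank/home/clone
(swaps a clone and its origin so the original file system can later be destroyed)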
Hi George, life is better for us now.
We upgraded to s10s_u3wos_01 last Friday
on itsm-mpk-2.sfbay, the production Canary server http://canary.sfbay.
What do we look like now?
# zpool upgrade
This system is currently running ZFS version 2.
All pools are formatted using this version.
we ad
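For reference, once the newer bits are running, the move from version 2 looks roughly like this (a sketch; note that a pool upgrade is one-way, so older software can no longer import the pools afterwards):
# zpool upgrade -v
(lists all ZFS versions this software supports and what each one adds)
# zpool upgrade -a
(upgrades every pool on the system to the latest supported version)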
Hi all,
I recently replaced the drive in my Ferrari 4000 with a 7200 rpm drive and put
the original drive in a SilverStone USB enclosure. When I plug it in, vold puts the
icon on the desktop and I can see the root UFS filesystem, but I can't import
the zpool that held all my user data. ;(
I found
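A couple of generic things worth trying in that situation (not from the original thread; the pool name tank is a placeholder):
# zpool import
(scans /dev/dsk and lists any pools available for import)
# zpool import -d /dev/dsk
(rescans explicitly, which can help when the USB device node shows up late)
# zpool import -f tank
(forces the import if the pool still claims to be in use by the old system)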
Hello ZFS,
System was rebooted and after reboot server again
System is snv_39, SPARC, T2000
bash-3.00# ptree
7 /lib/svc/bin/svc.startd -s
163 /sbin/sh /lib/svc/method/fs-local
254 /usr/sbin/zfs mount -a
[...]
bash-3.00# zfs list|wc -l
46
Using df I can see most file syst
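A quick generic check to see which datasets exist versus which actually got mounted (not from the original post):
# zfs list -H -o name | wc -l
(all datasets ZFS knows about)
# zfs mount | wc -l
(datasets that are currently mounted)
# zfs mount -a
(retries mounting whatever is still missing)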
Brian Hechinger wrote:
On Fri, Jul 28, 2006 at 02:02:13PM -0700, Richard Elling wrote:
Joseph Mocker wrote:
Richard Elling wrote:
The problem is that there are at least 3 knobs to turn (space, RAS, and
performance) and they all interact with each other.
Good point. Then how about something mo
Andrew wrote:
Jeff Bonwick wrote:
For a synchronous write to a pool with mirrored disks, does the write
unblock after just one of the disks' write caches is flushed, or
only after all of the disks' caches are flushed?
The latter. We don't consider a write to be committed until the data is
on s
Jeff Bonwick wrote:
>> For a synchronous write to a pool with mirrored disks, does the write
>> unblock after just one of the disks' write caches is flushed,
>> or only after all of the disks' caches are flushed?
> The latter. We don't consider a write to be committed until
> the data is on stable
Torrey,
On 7/28/06 10:11 AM, "Torrey McMahon" <[EMAIL PROTECTED]> wrote:
> That said a 3510 with a raid controller is going to blow the door, drive
> brackets, and skin off a JBOD in raw performance.
I'm pretty certain this is not the case.
If you need sequential bandwidth, each 3510 only bring
On Fri, Jul 28, 2006 at 02:02:13PM -0700, Richard Elling wrote:
> Joseph Mocker wrote:
> >Richard Elling wrote:
> >>The problem is that there are at least 3 knobs to turn (space, RAS, and
> >>performance) and they all interact with each other.
> >
> >Good point. Then how about something more like
>