Richard Elling <[EMAIL PROTECTED]> writes:
> Vishal Dhuru wrote:
>> Hi,
>> I am looking for a customer-shareable presentation on ZFS vs. VxFS.
>> Any pointers to a URL or a directly attached prezo would be highly
>> appreciated!
>
> 40,000 foot level, one slide for PHBs, one slide for Dilberts :-)
> -- r
Hey Johansen,
I can confirm that 6566207 has been fixed by the new driver!
I haven't been able to reproduce 6565894, so I cannot confirm whether or not
this one is fixed.
Thanks for the fix, open source rocks :)
Murray
In an attempt to speed up progress on some of the si3124 bugs that Roger
reported, I've created a workspace with the fixes for:
6565894 sata drives are not identified by si3124 driver
6566207 si3124 driver loses interrupts
I'm attaching a driver that contains these fixes as well as a diff
Orvar,
I've seen around 50 to 60 MB/s when two disks are writing,
and around 100 MB/s when reading round-robin.
The limiting factor has been the old PCI bus (*not* the 32-bit
slot length) and, in another test, the 1-lane PCIe bus.
(SiI680/SiI3124-2 and SiI3132 chips)
So if you can see the difference
It looks like there is a problem dumping a kernel panic on an X4500.
During the self-induced panic, there were additional syslog messages
indicating a problem writing to the two disks that make up
/dev/md/dsk/d2 in my case. It is as if the SATA controllers are being
reset during the crash dump.
On Tue, Jul 17, 2007 at 03:08:44PM +1000, James C. McPherson wrote:
> >>Log a new case with Sun, and make sure you supply
> >>a crash dump so people who know ZFS can analyze
> >>the issue.
> >>
> >>You can use sync, sync, or
> >>
> >>reboot -dq
> >>
That does appear to have caused a panic/kernel
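For anyone following along, a minimal sketch of capturing a crash dump this
way; it assumes a dump device and savecore directory are already configured:

  dumpadm        # show the current dump device and savecore directory
  savecore -L    # optionally capture a live dump without panicking
  reboot -dq     # dump core, then reboot quickly without the usual shutdown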
On Jul 15, 2007, at 12:59 PM, Peter Tribble wrote:
> On 7/13/07, Torrey McMahon <[EMAIL PROTECTED]> wrote:
>> ZFS needs to use the top-level multipath device or bad things will
>> probably happen in a failover or in initial zpool creation. For
>> example: You'll try to use the device on two path
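A minimal sketch of putting a pool on the top-level MPxIO device; the device
and pool names here are hypothetical:

  stmsboot -L        # map plain device names to their MPxIO (scsi_vhci) names
  mpathadm list lu   # list the multipathed logical units
  zpool create tank c4t600A0B800012345600001234ABCDEF01d0   # the vhci device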
How are the drives connected? USB or SATA?
Also, is this hardware RAID or are you using raidz?
If SATA, what controller is being used?
DRM wrote:
> When I unmount all my ZFS mounts I'm still able to access the data under
> root... why is this?
>
>
Can you tell us more? How did you do the unmounts?
What "root" do you mean?
You mean a zfs root file system? If so, the unmount of
the root file system should have failed.
Lori
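A common cause is that, after the unmount, you are seeing the underlying
directory in the parent filesystem rather than the ZFS data. A minimal
sketch for checking; the dataset name tank/home is hypothetical:

  zfs umount tank/home          # unmount the dataset
  zfs mount | grep tank/home    # confirm it is no longer mounted
  df -h /tank/home              # see which filesystem now backs the path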
When I unmount all my ZFS mounts I'm still able to access the data under
root... why is this?
On 7/17/07, Darren J Moffat <[EMAIL PROTECTED]> wrote:
> It is what is integrated into OpenSolaris, and this is an OpenSolaris.org
> list, not an @sun.com support list for Solaris 10.
True enough. I stand corrected.
--
Just me,
Wire ...
Blog:
On Jul 16, 2007, at 6:06 PM, Torrey McMahon wrote:
Darren Dunham wrote:
If it helps at all, we're having a similar problem. Any LUNs
configured with their default owner set to SP B don't get along with
ZFS. We're running on a T2000, with Emulex cards and the ssd driver.
MPxIO seems
Wee Yeh Tan wrote:
> On 7/17/07, Darren J Moffat <[EMAIL PROTECTED]> wrote:
>> Wee Yeh Tan wrote:
>> > Firstly, zonepaths in ZFS are not yet supported. But this is the
>> > hacker's forum so...
>>
>> I don't think that is actually true, particularly given that you can use
>> zoneadm clone using a ZF
On 7/17/07, Darren J Moffat <[EMAIL PROTECTED]> wrote:
> Wee Yeh Tan wrote:
> > Firstly, zonepaths in ZFS are not yet supported. But this is the
> > hacker's forum so...
>
> I don't think that is actually true, particularly given that you can use
> zoneadm clone using a ZFS snapshot/clone to "copy"
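A minimal sketch of such a clone when the zonepath is a ZFS dataset; the
zone names and zonepath are hypothetical:

  zonecfg -z newzone "create -t oldzone; set zonepath=/zones/newzone"
  zoneadm -z newzone clone oldzone   # uses a ZFS snapshot/clone when it can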
On Tue, 17 Jul 2007, Nigel Smith wrote:
> You can see the status of the bug here:
>
> http://bugs.opensolaris.org/view_bug.do?bug_id=6566207
>
> Unfortunately, it's showing no progress since 20th June.
>
> This fix really needs to be in place for S10u4 and snv_70.
>
Drop Roger Fujii <[EMAIL PROTE
On Tue, 17 Jul 2007, Kwang-Hyun Baek wrote:
> # uname -a
> SunOS solaris-devx 5.11 opensol-20070713 i86pc i386 i86pc
>
> ===
> What's more interesting is that the ZFS version shows that it's 8. Does it
> even exist?
Yes, 8 was created to support
# uname -a
SunOS solaris-devx 5.11 opensol-20070713 i86pc i386 i86pc
===
What's more interesting is that the ZFS version shows that it's 8. Does it
even exist?
[EMAIL PROTECTED]:/# zpool upgrade
This system is currently running ZFS version 6.
Th
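A minimal sketch for checking and upgrading pool versions; the pool name
tank is hypothetical:

  zpool upgrade -v     # list every ZFS version the running bits support
  zpool upgrade tank   # upgrade one pool to the latest supported version
  zpool upgrade -a     # or upgrade all pools at once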
Wee Yeh Tan wrote:
> On 7/17/07, Mike Salehi <[EMAIL PROTECTED]> wrote:
>> Sorry, my question is not clear enough. These pools contain a zone each.
>
> Firstly, zonepaths in ZFS are not yet supported. But this is the
> hacker's forum so...
I don't think that is actually true, particularly given th
Hello zfs-discuss,
root@ # uname -a
SunOS XXX 5.10 Generic_125100-07 sun4u sparc SUNW,Sun-Fire-880 Solaris
root@ #
root@ # zpool status
  pool: zones
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption. Applications may be affected.
action:
One last question: when it comes to patching these zones, is it better to
patch them normally, or to destroy all the local zones, patch only the
global zone, and use a shell script to recreate all the zones?
On 7/17/07, Mike Salehi <[EMAIL PROTECTED]> wrote:
> Sorry, my question is not clear enough. These pools contain a zone each.
Firstly, zonepaths in ZFS are not yet supported. But this is the
hacker's forum so...
No change for importing the ZFS pool. Now you're gonna need to hack
the zones in.
Fo
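A minimal sketch of hacking the zones back in after the import; the pool and
zone names are hypothetical:

  zpool import zones                          # bring the pool onto the new host
  zonecfg -z myzone create -a /zones/myzone   # rebuild the config from the zonepath
  zoneadm -z myzone attach                    # attach the zone to this system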
On 7/17/07, Richard Elling <[EMAIL PROTECTED]> wrote:
> Performance-wise, these are pretty wimpy. You should be able to saturate
> the array controller, even without enabling RAID-5 on it. Note that the
> T3's implementation of RAID-0 isn't quite the same as other arrays, so it
> may perform some
You can see the status of the bug here:
http://bugs.opensolaris.org/view_bug.do?bug_id=6566207
Unfortunately, it's showing no progress since 20th June.
This fix really needs to be in place for S10u4 and snv_70.
Thanks
Nigel Smith
Hi
What is the general feeling for production readiness when it comes to:
ZFS
Oracle 10G R2
6140-type storage
OLTP workloads
1-3TB sizes
Running UFS with directio is stable and fast, and one can sleep at night.
Can the same be said for ZFS at this moment?
Should one hold out for Solaris 10 U4? (I b
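For what it's worth, a minimal sketch of the commonly cited Oracle-on-ZFS
tuning, not an official recommendation; dataset names are hypothetical:

  zfs create -o recordsize=8k tank/oradata   # match db_block_size for datafiles
  zfs create tank/oralog                     # default 128k recordsize for redo logs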
[EMAIL PROTECTED] wrote on 17/07/2007 02:36:06 PM:
> Running Solaris 10 Update 3 on an X4500, I have found that it is possible
> to reproducibly block all writes to a ZFS pool by running "chgrp -R"
> on any large filesystem in that pool. As can be seen in the zpool
> iostat output below, aft
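A minimal sketch for reproducing and watching the stall; the pool and
filesystem names are hypothetical:

  chgrp -R staff /tank/bigfs &   # kick off the metadata-heavy traversal
  zpool iostat tank 5            # watch pool write bandwidth every 5 seconds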