On 9/9/06, Dale Ghent <[EMAIL PROTECTED]> wrote:
On Sep 9, 2006, at 12:04 PM, David Dyer-Bennet wrote:
> Thanks, that seems fairly clear. So another approach I could take is
> to buy one of the supported controllers, if they're available on a
> card I could plug in.
The Silicon Image chipset i
On 9/9/06, Frank Cusack <[EMAIL PROTECTED]> wrote:
On September 9, 2006 10:51:30 AM -0500 David Dyer-Bennet <[EMAIL PROTECTED]>
wrote:
> On 9/9/06, Frank Cusack <[EMAIL PROTECTED]> wrote:
>> On September 8, 2006 9:34:29 PM -0500 David Dyer-Bennet <[EMAIL PROTECTED]>
>> wrote:
>> > My first real-h
On 9/9/06, Frank Cusack <[EMAIL PROTECTED]> wrote:
On September 9, 2006 10:51:30 AM -0500 David Dyer-Bennet <[EMAIL PROTECTED]>
wrote:
> I see suggestions on what might be a usable workaround (basically
> telling zfs manually to stop using the disk before physically removing
> it), and a hope tha
Background: We have a ZFS pool setup from LUNS which are from a SAN connected
StorageTek/Engenio Flexline 380 storage system. Just this past Friday the
storage environment went down causing the system to go down.
After looking at the storage environment, we had several volume groups which
ne
On 9/10/06, Matthew Ahrens <[EMAIL PROTECTED]> wrote:
Robert Milkowski wrote:
> Hi.
>
> bash-3.00# zfs get quota f3-1/d611
> NAME       PROPERTY  VALUE  SOURCE
> f3-1/d611  quota     400G   local
> bash-3.00#
>
> bash-3.00# zfs list |
Robert Milkowski wrote:
Hi.
bash-3.00# zfs get quota f3-1/d611
NAME       PROPERTY  VALUE  SOURCE
f3-1/d611  quota     400G   local
bash-3.00#
bash-3.00# zfs list | egrep "^f3-1 |f3-1/d611|AVA"
NAME USED AVAIL REF
Hi.
bash-3.00# zfs get quota f3-1/d611
NAME       PROPERTY  VALUE  SOURCE
f3-1/d611  quota     400G   local
bash-3.00#
bash-3.00# zfs list | egrep "^f3-1 |f3-1/d611|AVA"
NAME USED AVAIL REFER MOUNTPOINT
f3-1
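The egrep pattern in the listing above keeps three kinds of lines: the header (matched via "AVA" in AVAIL), the pool root ("^f3-1 " with a trailing space, so child datasets like f3-1/d610 are excluded), and the d611 dataset. A sketch of the filter against sample output (the rows and values below are made up for illustration; only the dataset names come from the listing):

```shell
# Sample "zfs list" output (values are invented for illustration)
sample='NAME        USED  AVAIL  REFER  MOUNTPOINT
f3-1        500G   1.2T    30K  /f3-1
f3-1/d610   100G   300G   100G  /f3-1/d610
f3-1/d611   350G    50G   350G  /f3-1/d611'

# "^f3-1 " (note the trailing space) matches only the pool root,
# "f3-1/d611" the dataset of interest, and "AVA" the header line;
# the f3-1/d610 row matches none of the alternatives and is dropped
printf '%s\n' "$sample" | egrep "^f3-1 |f3-1/d611|AVA"
```

The trailing space in "^f3-1 " is what keeps sibling datasets out of the report.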
On Sep 9, 2006, at 12:04 PM, David Dyer-Bennet wrote:
Thanks, that seems fairly clear. So another approach I could take is
to buy one of the supported controllers, if they're available on a
card I could plug in.
The Silicon Image chipset is pretty popular and can be found on many
SATA and e
On September 9, 2006 10:51:30 AM -0500 David Dyer-Bennet <[EMAIL PROTECTED]>
wrote:
On 9/9/06, Frank Cusack <[EMAIL PROTECTED]> wrote:
On September 8, 2006 9:34:29 PM -0500 David Dyer-Bennet <[EMAIL PROTECTED]>
wrote:
> My first real-hardware Solaris install. I've installed S10 u2 on a
> system
On September 9, 2006 10:51:30 AM -0500 David Dyer-Bennet <[EMAIL PROTECTED]>
wrote:
I see suggestions on what might be a usable workaround (basically
telling zfs manually to stop using the disk before physically removing
it), and a hope that full hot-swap might appear in a later release.
My exp
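The workaround described above — telling ZFS to stop using the disk before physically removing it — would look roughly like the following on S10U2. This is a sketch only: the pool name "tank", device c2t1d0, and attachment point sata1/1 are hypothetical placeholders, not values from the thread.

```shell
# Take the device offline so ZFS stops issuing I/O to it
# (pool "tank" and device c2t1d0 are hypothetical)
zpool offline tank c2t1d0

# Unconfigure the SATA port before pulling the drive
# (attachment point sata1/1 is hypothetical; list real ones with: cfgadm -al)
cfgadm -c unconfigure sata1/1

# After inserting the replacement, reverse the steps
cfgadm -c configure sata1/1
zpool online tank c2t1d0
```

Note that cfgadm only works here if the controller's driver supports SATA hot plug, which, per this thread, only a few controllers in S10U2 did.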
On 9/9/06, Frank Cusack <[EMAIL PROTECTED]> wrote:
Sorry about my previous message, and for starting a new thread (I'm a
fast deleter). S10U2 supports SATA hot plug but just for a few SATA
controllers (notably, not the one in the x2100, which is why I thought
support was absent altogether).
Jud
On 9/9/06, Dana H. Myers <[EMAIL PROTECTED]> wrote:
David Dyer-Bennet wrote:
Here's my take without looking anything up. While the drive is physically
hot-pluggable, the software stack doesn't support what you did. I *think*
the correct sequence of events would probably be:
1. Detach the mi
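The numbered sequence above is cut off after "Detach the mi", presumably "mirror". Assuming that reading, a hedged sketch of such a sequence for a mirrored pool (pool, device, and attachment-point names are invented for illustration):

```shell
# 1. Detach the mirror member from the pool
#    (pool "tank" and devices c2t0d0/c2t1d0 are hypothetical)
zpool detach tank c2t1d0

# 2. Unconfigure the SATA port so the drive can be pulled safely
cfgadm -c unconfigure sata1/1

# 3. After swapping drives, configure the port and re-attach
#    the new device as a mirror of the surviving one
cfgadm -c configure sata1/1
zpool attach tank c2t0d0 c2t1d0
```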
On 9/9/06, Frank Cusack <[EMAIL PROTECTED]> wrote:
On September 8, 2006 9:34:29 PM -0500 David Dyer-Bennet <[EMAIL PROTECTED]>
wrote:
> My first real-hardware Solaris install. I've installed S10 u2 on a
> system with an Asus M2n-SLI Deluxe nForce 570-SLI motherboard, Athlon
> 64 X2 dual core CPU
On Sep 9, 2006, at 1:32 AM, Frank Cusack wrote:
On September 7, 2006 12:25:47 PM -0700 "Anton B. Rang"
<[EMAIL PROTECTED]> wrote:
The bigger problem with system utilization for software RAID is the
cache, not the CPU cycles proper. Simply preparing to write 1 MB
of data
will flush half of a
One correction in the interest of full disclosure: the tests were conducted on a machine that differs from the server configuration indicated in my original post. Here's the server config used in the tests:
- E25K domain (1 board: 4P/8Way x 32GB)
- 2 2Gbps FC
- MPxIO
- Solaris 10 Update 2 (06/06); no ot
I finally got around to running a 'benchmark' using the AOL clickstream data
(2GB of text files and approximately 36 million rows). Here are the Oracle
settings during the test.
- Same Oracle settings for all tests
- All disks in question are 32GB EMC hypers
- I had the standard Oracle tablespac
16 matches