I'm attempting to expand a root pool for a VMware VM that is on an 8GB virtual
disk. I mirrored it to a 20GB disk and detached the 8GB disk. I ran
"installgrub" to install GRUB onto the second virtual disk, but I get a kernel
panic when booting. Is there an extra step I need to perform to get the new
disk to boot?
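A hedged sketch of the usual sequence (device names are hypothetical; one
common pitfall is that on x86 the root pool must live in a slice of an
SMI-labeled disk, and installgrub must point at that slice's raw device):

    # attach the 20GB disk's slice 0 as a mirror of the root pool
    zpool attach rpool c0t0d0s0 c0t1d0s0
    # wait for the resilver to complete before touching the old disk
    zpool status rpool
    # install GRUB stage1/stage2 on the new disk's slice
    installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0
    # only then detach the original 8GB disk
    zpool detach rpool c0t0d0s0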
Bob says:
"But a better solution is to assign a processor set to run only
the application -- a good idea any time you need a predictable
response."
Bob's suggestion above, along with "no interrupts on that pset" and a
fixed scheduling class for the application/processes in question, could
help deliver that predictable response.
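As a rough sketch of those three steps on Solaris (the CPU IDs, set ID, and
pid below are all hypothetical):

    psrset -c 2 3        # create a processor set from CPUs 2 and 3
    psradm -i 2 3        # stop routing interrupts to those CPUs
    psrset -b 1 12345    # bind the application's pid to set 1
    priocntl -s -c FX -m 60 -p 60 -i pid 12345   # move it to the fixed-priority class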
On Sat, 26 Jul 2008, Richard Elling wrote:
>
> Is it doing buffered or sync writes? I'll try it later today or
> tomorrow...
I have not seen the source code, but truss shows that this program is
doing more than expected, such as using send()/recv() to pass a message.
In fact, send(), pollsys(), recv(
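For anyone repeating the measurement, truss can both count the calls and show
per-call timing (the pid is a placeholder):

    truss -c -p 12345                          # per-syscall counts, summary on ^C
    truss -D -t send,recv,pollsys -p 12345     # time deltas for just those calls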
On Sat, 26 Jul 2008, Bob Friesenhahn wrote:
> I suspect that the maximum peak latencies have something to do with
> zfs itself (or something in the test program) rather than the pool
> configuration.
As confirmation that the reported timings have virtually nothing to do
with the pool configuration
On Sat, 26 Jul 2008, Tharindu Rukshan Bamunuarachchi wrote:
> It is impossible to simulate my scenario with iozone. iozone performs very
> well for ZFS. OTOH, iozone does not measure latency.
>
> Please find the attached tool (Solaris x86), which we have written to
> measure latency.
Very interesting
On Sat, 26 Jul 2008, Tharindu Rukshan Bamunuarachchi wrote:
>
> 1. Reconfigure the array with 12 independent disks
> 2. Allocate the disks to a raidz pool
Using raidz will penalize your transaction performance, since all disks
will need to perform I/O for each write. It is definitely better to
use load-shared mirrors.
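To make the contrast concrete, here is roughly what the two 12-disk layouts
would look like as zpool commands (device names are made up):

    # one raidz vdev: every write touches all 12 disks
    zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
        c1t6d0 c1t7d0 c1t8d0 c1t9d0 c1t10d0 c1t11d0

    # six mirrored vdevs: writes are load-shared across the six mirrors
    zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0 \
        mirror c1t4d0 c1t5d0 mirror c1t6d0 c1t7d0 \
        mirror c1t8d0 c1t9d0 mirror c1t10d0 c1t11d0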
Carson Gaspar wrote:
> Brandon High wrote:
>
>> All I know is that the x4540 uses the LSI 1068e chipset, and that the
>> X4500 used the Marvell 88SX chipset. Since buyers of both of these
>> systems probably have an expectation that they'll, well, work, I assume
>> that the drivers in Solaris s
Dear All,
Thank you very much for the continuous support.
Sorry for the late reply ...
I was trying to allocate a 2540 & 2 x 4600 to try out your
recommendations ...
Finally, I could reserve the 2540 disk array for testing purposes,
so I am free to try out each and every point you have emphasized
Brandon High wrote:
> On Fri, Jul 25, 2008 at 9:17 AM, David Collier-Brown <[EMAIL PROTECTED]>
> wrote:
>
>> And do you really have 4-sided RAID 1 mirrors, not 4-wide RAID 0 stripes???
>
> Or perhaps 4 RAID 1 mirrors concatenated?
>
I wondered that too, but he insists he doesn't have 0+1 or 1+0.
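In zpool terms, the two layouts being distinguished would look like this
(hypothetical devices; the first is a single 4-way mirror, redundancy only,
while the second is a 1+0-style stripe across two 2-way mirrors):

    zpool create tank mirror c1t0d0 c1t1d0 c1t2d0 c1t3d0
    zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0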
We have been using some 1068/1078-based cards (both RAID: AOC-USAS-H4IR
and JBOD: LSISAS3801E) with b87-b90 and in s10u5 without issue for some
time. Both the downloaded LSI driver and the bundled one have worked
fine for us for around 6 months of moderate usage. The LSI JBOD card is
similar to the
On Fri, Jul 25, 2008 at 1:02 PM, Matt Wreede <[EMAIL PROTECTED]> wrote:
> Howdy.
>
> My plan:
>
> I'm planning an iSCSI-target/NFS-serving box for ESX.
>
> I'm planning on using an Areca RAID card, as I've heard mixed things about
> hot-swapping with Solaris/ZFS, and I'd like the stability of a hardware RAID
Miles Nordin wrote:
>> "bh" == Brandon High <[EMAIL PROTECTED]> writes:
>
> bh> a system built around the Marvell or LSI chipsets
>
> according to The Blogosphere, source of all reliable information,
> there's some issue with LSI, too. The driver is not available in
> stable Solaris nor
Nico,
I require pubkey auth on *all* my ssh sessions, thus achieving two-factor
auth with minimal overhead. I'd thought this was widely implemented.
PAMAuthenticationViaKBDInt actually works in practice for me, disallowing
keyboard-interactive auth for all accounts. I've no clue how or why it worked, but
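For the archives, the sshd_config being described is roughly this (a sketch;
PAMAuthenticationViaKBDInt is the Sun SSH spelling, and the equivalent knobs
differ in stock OpenSSH):

    PubkeyAuthentication yes
    PasswordAuthentication no
    # keep PAM from offering keyboard-interactive as a password fallback
    PAMAuthenticationViaKBDInt no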