2012-05-15 19:17, casper@oracle.com wrote:
Your old release of Solaris (nearly three years old) doesn't support
disks over 2TB, I would think.
(A 3TB drive is 3E12 bytes, the 2TB limit is 2^41 bytes, and the difference is around 800GB.)
While this was proven correct by my initial experiments,
it seems that th
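(Checking that parenthetical with quick shell arithmetic; this assumes a shell whose $(( )) supports **, such as bash or ksh93:)

  $ echo $(( 3 * 10**12 ))            # a marketing "3TB" in bytes
  3000000000000
  $ echo $(( 2**41 ))                 # the 2TB limit in bytes
  2199023255552
  $ echo $(( 3 * 10**12 - 2**41 ))    # capacity above the limit
  800976744448                        # i.e. roughly 800GB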
On May 16, 2012, at 12:35 PM, "Paynter, Richard"
wrote:
> Does anyone know what the minimum value for zfs_arc_max should be set to?
> Does it depend on the amount of memory on the system, and – if so – is there
> a formula, or percentage, to use to determine what the minimum value is?
It dep
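(Not an answer to "how low is safe", just the mechanics: zfs_arc_max is given in bytes in /etc/system and takes effect at the next boot. A hypothetical 1 GiB cap would look like this; whether 1 GiB is a reasonable floor for your memory size and workload is exactly the open question above.)

  * /etc/system entry, value in bytes, takes effect at the next boot:
  set zfs:zfs_arc_max = 0x40000000

  # after rebooting, check what the kernel actually settled on:
  kstat -p zfs:0:arcstats:c_max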
Does anyone know what the minimum value for zfs_arc_max should be set
to? Does it depend on the amount of memory on the system, and - if so -
is there a formula, or percentage, to use to determine what the minimum
value is?
Thanks
Richard Paynter
2012-05-16 22:21, bofh wrote:
There's something going on then. I have 7x 3TB disks at home, in
raidz3, so about 12TB usable. 2.5TB actually used. Scrubbing takes
about 2.5 hours. I had done the resilvering as well, and that did not
take 15 hours/drive.
That is the critical moment ;)
The syst
bofh wrote:
> There's something going on then. I have 7x 3TB disk at home, in
> raidz3, so about 12TB usable. 2.5TB actually used. Scrubbing takes
> about 2.5 hours. I had done the resilvering as well, and that did not
> take 15 hours/drive. Copying 3TBs onto 2.5" SATA drives did take more
>
On Wed, 16 May 2012, Jim Klimov wrote:
Your idea actually evolved for me into another (#7?), which
is simple and apparent enough to be ingenious ;)
DO use the partitions, but split the "2.73Tb" drives into a
roughly "2.5Tb" partition followed by a "250Gb" partition of
the same size as vdevs of t
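(If I'm reading the idea right, each 3TB drive gets a large slice for the future big pool plus a small slice sized like the old pool's vdev components, so the small slices can be swapped straight into the old pool. A rough sketch with made-up device names; the exact slice sizes would come from format(1M) against the real old vdev size:)

  # per 3TB drive (here c2t0d0), label two slices in format:
  #   s0  ~2.5TB   building block for the new, larger pool
  #   s1  ~250GB   same size as the old pool's vdev components
  #
  # a small slice can then stand in for an old-pool disk, e.g.:
  zpool replace oldpool c1t0d0 c2t0d0s1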
There's something going on then. I have 7x 3TB disks at home, in
raidz3, so about 12TB usable. 2.5TB actually used. Scrubbing takes
about 2.5 hours. I had done the resilvering as well, and that did not
take 15 hours/drive. Copying 3TBs onto 2.5" SATA drives did take more
than a day, but a 2.5"
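(Back-of-the-envelope check on those numbers; integer shell arithmetic is enough:)

  $ echo $(( 2500 * 10**9 / 9000 ))   # 2.5TB scrubbed in ~2.5h (9000s)
  277777777                           # roughly 280 MB/s pool-wide
  $ echo $(( 277777777 / 7 ))         # spread over 7 spindles
  39682539                            # about 40 MB/s per drive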
Hello fellow BOFH,
I also went by that title in a previous life ;)
2012-05-16 21:58, bofh wrote:
Err, why go to all that trouble? Replace one disk per pool. Wait for
resilver to finish. Replace next disk. Once all/enough disks have
been replaced, turn on autoexpand, and you're done.
As I
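(The mechanics bofh describes, with made-up pool and device names; do one replacement at a time and let each resilver finish before starting the next:)

  zpool replace tank c1t0d0 c2t0d0    # swap one old disk for a new 3TB one
  zpool status tank                   # wait here until the resilver completes
  # ...repeat for the remaining disks...
  zpool set autoexpand=on tank        # then let the vdevs grow to the new size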
On Wed, May 16, 2012 at 1:45 PM, Jim Klimov wrote:
> Your idea actually evolved for me into another (#7?), which
> is simple and apparent enough to be ingenious ;)
> DO use the partitions, but split the "2.73Tb" drives into a
> roughly "2.5Tb" partition followed by a "250Gb" partition of
> the sa
2012-05-16 13:30, Joerg Schilling wrote:
Jim Klimov wrote:
We know that large redundancy is highly recommended for
big HDDs, so in-place autoexpansion of the raidz1 pool
onto 3Tb disks is out of the question.
Before I started to use my thumper, I reconfigured it to use RAID-Z2.
This allow
2012-05-16 6:18, Bob Friesenhahn wrote:
You forgot IDEA #6 where you take advantage of the fact that zfs can be
told to use sparse files as partitions. This is rather like your IDEA #3
but does not require that disks be partitioned.
This is somewhat the method of making "missing devices" when c
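(A sketch of the sparse-file trick Bob refers to, with invented paths, sizes and device names; the file only has to claim the capacity, not occupy it, and keeping it slightly smaller than a real 3TB drive leaves room to replace it with one later:)

  mkfile -n 2780g /export/fake0                  # sparse ~2.7TiB placeholder
  zpool create -f newpool raidz c2t0d0 c2t1d0 /export/fake0
  zpool offline newpool /export/fake0            # run degraded until a real
                                                 # disk can take its place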
On 05/16/2012 10:17 AM, Koopmann, Jan-Peter wrote:
>>
>>
>> One thing came up while trying this - I'm on a text install
>> image system, so my / is a ramdisk. Any ideas how I can change
>> the sd.conf on the USB disk or reload the driver configuration on
>> the fly? I tried looking for the file o
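(I can't speak to the ramdisk specifics, but the usual way to get sd to re-read its configuration on a live system, without a reboot, is update_drv:)

  # edit /kernel/drv/sd.conf (on the text-install image this sits in the
  # ramdisk, so the edit will not survive a reboot), then:
  update_drv -vf sd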
I have a small server at home (HP Proliant Micro N36) that I use
for file, DNS, DHCP, etc. services. I currently have a zpool of four
mirrored 1 TB Seagate ES2 SATA drives. Well, it was a zpool of four
until last night when one of the drives died. ZFS did its job and all
the data is still OK.
Hi Bruce,
My opinions and two cents are inline. Take them with appropriate
amounts of salt ;)
On 05/16/12 04:20 AM, Bruce McGill wrote:
Hi All,
I have FC LUNs from NetApp FAS 3240 mapped to two SPARC T4-2 clustered
nodes running Veritas Cluster Server software. For now, the
configuration on
IMHO
Just use the whole Veritas stack: VCS, VxVM, VxFS.
Sent from my iPhone
On May 16, 2012, at 5:20 AM, Bruce McGill wrote:
> Hi All,
>
> I have FC LUNs from NetApp FAS 3240 mapped to two SPARC T4-2 clustered
> nodes running Veritas Cluster Server software. For now, the
> configuration on NetApp is as
Jim Klimov wrote:
> We know that large redundancy is highly recommended for
> big HDDs, so in-place autoexpansion of the raidz1 pool
> onto 3Tb disks is out of the question.
Before I started to use my thumper, I reconfigured it to use RAID-Z2.
This allows me to just replace disks during operati
Hi All,
I have FC LUNs from NetApp FAS 3240 mapped to two SPARC T4-2 clustered
nodes running Veritas Cluster Server software. For now, the
configuration on NetApp is as follows:
/vol/EBUSApp/EBUSApp 100G
online MBSUN04 : 0 MBSUN05 : 0
/vol/EBUSBi
On 05/16/2012 09:45 AM, Koopmann, Jan-Peter wrote:
> Hi,
>
> are those DELL branded WD disks? DELL tends to manipulate the firmware of
> the drives so that power handling with Solaris fails. If this is the case
> here:
>
> Easiest way to make it work is to modify /kernel/drv/sd.conf and add an
>
On 05/16/2012 09:45 AM, Koopmann, Jan-Peter wrote:
> Hi,
>
> are those DELL branded WD disks? DELL tends to manipulate the
> firmware of the drives so that power handling with Solaris fails.
> If this is the case here:
>
> Easiest way to make it work is to modify /kernel/drv/sd.conf and
> add an
Hi,
are those DELL branded WD disks? DELL tends to manipulate the firmware of
the drives so that power handling with Solaris fails. If this is the case
here:
Easiest way to make it work is to modify /kernel/drv/sd.conf and add an
entry
for your specific drive similar to this
sd-config-list= "WD
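(For reference, a complete entry of that shape looks like the line below; the drive model here is just a placeholder, and the vendor/product string must match your drive's inquiry data exactly, padding included. iostat -En shows the Vendor and Product fields to copy from.)

  sd-config-list = "WD      WD2003FYYS-18W0B", "power-condition:false";

(As I understand it, power-condition:false stops sd from using the power-condition form of START STOP UNIT, which is what the Dell-tweaked firmware reportedly chokes on; a reboot, or update_drv -vf sd, is needed for the change to take effect.)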
Hi,
I'm getting weird errors while trying to install openindiana 151a on a
Dell R715 with a PERC H200 (based on an LSI SAS 2008). Any time the OS
tries to access the drives (for whatever reason), I get this dumped into
syslog:
genunix: WARNING: Device
/pci@0,0/pci1002,5a18@4/pci10b5,8424@0/pci10b5