On Aug 8, 2011, at 4:01 PM, Peter Jeremy wrote:
> On 2011-Aug-08 17:12:15 +0800, Andrew Gabriel wrote:
>> periodic scrubs to cater for this case. I do a scrub via cron once a
>> week on my home system. With the pool almost completely full, this was
>> taking about 24 hours. However, now
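For anyone wanting the same setup, a weekly scrub like that is usually driven
from root's crontab. A minimal sketch (the pool name 'tank' is just a
placeholder, not from the original post):

  # kick off a scrub of pool 'tank' every Sunday at 02:00
  0 2 * * 0 /usr/sbin/zpool scrub tank

zpool scrub returns immediately; progress can be checked later with
'zpool status tank'.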
On Aug 8, 2011, at 9:01 AM, John Martin wrote:
> Is there a list of zpool versions for development builds?
>
> I found:
>
> http://blogs.oracle.com/stw/entry/zfs_zpool_and_file_system
Since Oracle no longer shares that info, you might look inside the firewall :-)
>
> where it says Solaris 11
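Whatever the published list says, the authoritative answer for a given build
is available locally; these commands enumerate every pool and filesystem
version the running system supports:

  # list all zpool on-disk versions this build supports
  zpool upgrade -v

  # likewise for ZFS filesystem versions
  zfs upgrade -v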
I discovered my problem. I hadn't noticed that the base rpool dataset was
broken out and mounted as /rpool. After restoring /rpool, the machine booted
without error.
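For anyone who hits the same symptom, a quick check is to compare each
dataset's configured mountpoint against its actual mount state:

  # show every dataset under rpool with its mountpoint and mount status
  zfs list -r -o name,mountpoint,mounted rpool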
Thanks
Hiya,
Is there any reason (and anything to worry about) if disk target IDs don't
start at 0 (zero)? For some reason mine are like this (3 controllers: 1
onboard and 2 PCIe):
AVAILABLE DISK SELECTIONS:
0. c8t0d0
/pci@0,0/pci10de,cb84@5/disk@0,0
1. c8t1d0
/pci
Nothing to worry about.
As for dd, you need a slice (the s? part) in addition to c8t0d0.
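In other words, dd wants a slice of the raw device, not the bare disk name.
A sketch (s0 and the sizes are examples only):

  # read ~100 MB from slice 0 of the disk via the raw device node
  dd if=/dev/rdsk/c8t0d0s0 of=/dev/null bs=1024k count=100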
Sent from my iPad
Hung-Sheng Tsao (LaoTsao), Ph.D.
On Aug 9, 2011, at 4:51, Lanky Doodle wrote:
> Hiya,
>
> Is there any reason (and anything to worry about) if disk target IDs don't
> start at 0 (zero)? For some reason mine are
On Tue, Aug 9, 2011 at 7:51 AM, Lanky Doodle wrote:
> Is there any reason (and anything to worry about) if disk target IDs don't
> start at 0 (zero)? For some reason mine are like this (3 controllers: 1
> onboard and 2 PCIe):
>
> AVAILABLE DISK SELECTIONS:
> 0. c8t0d0
> /pci@
On Tue, Aug 9, 2011 at 8:20 AM, Paul Kraus wrote:
> Nothing to worry about here. Controller IDs (the cN part) are assigned
> based on the order the kernel probes the hardware. On SPARC
> systems you can usually change this in the firmware (OBP), but the IDs
> really don't _mean_ anything (other than the order in which the
> hardware was probed).
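The cXtYdZ names are only aliases: each one is a symlink into the /devices
tree that encodes the physical hardware path shown by format:

  # the logical name resolves to the physical device path
  ls -l /dev/dsk/c8t0d0s0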
Hello,
We just purchased two of the SC847E26-RJBOD1 units to be used in a
storage environment running Solaris 11 Express.
We are using Hitachi HUA723020ALA640 6 Gb/s drives with an LSI SAS
9200-8e HBA. We are not using failover/redundancy, meaning that one
port of the HBA goes to the primary front backplane.
On Tue, 9 Aug 2011, Gregory Durham wrote:
> Hello,
> We just purchased two of the SC847E26-RJBOD1 units to be used in a
> storage environment running Solaris 11 Express.
> root@cm-srfe03:/home/gdurham~# zpool destroy fooPool0
> root@cm-srfe03:/home/gdurham~# sh createPool.sh 4
What is 'createPool.sh'?
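For the archives, here is a guess at the sort of thing createPool.sh might be
doing. Everything in it (the disk-list file, pool name, layout) is an
assumption, not the poster's actual script:

  #!/bin/sh
  # hypothetical reconstruction: build a striped pool from the first
  # $1 disks named in a prepared list of device names
  COUNT=$1
  DISKS=`head -$COUNT /root/disklist.txt`   # disklist.txt is assumed
  zpool destroy fooPool0 2>/dev/null        # ignore error if pool absent
  zpool create -f fooPool0 $DISKS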
On Tue, Aug 9, 2011 at 8:45 PM, Gregory Durham wrote:
> For testing, we have done the following:
> Installed 12 disks in the front, 0 in the back.
> Created a stripe of different numbers of disks.
So you are creating one zpool with one disk per vdev and varying the
number of vdevs (the number of disks in the stripe).
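In zpool syntax that progression looks like this (device names illustrative):

  # two-disk stripe: two single-disk top-level vdevs
  zpool create -f testpool c1t0d0 c1t1d0

  # four-disk stripe: four single-disk top-level vdevs
  zpool create -f testpool c1t0d0 c1t1d0 c1t2d0 c1t3d0

Each bare disk becomes its own top-level vdev, so writes are striped across
all of them.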