Thanks, Ian.
If I understand correctly, the performance would then drop to the same level as
if I had set them up as separate volumes in the first place.
So, I get double the performance for 75% of my data and equal performance for
the other 25%, and my L2ARC will adapt to my working set across both.
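To check that behaviour once such a pool is running, per-vdev statistics show how reads and writes spread across the mirrors and how much any cache device absorbs. A minimal sketch, assuming the pool is named tank (the name is hypothetical):
Watch per-vdev bandwidth and IOPS, including any cache device, every 5 seconds:
# zpool iostat -v tank 5
Confirm the layout and that nothing is degraded:
# zpool status tank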
On 10/29/10 09:40 AM, Rob Cohen wrote:
I have a couple of drive enclosures:
15x 450GB 15k RPM SAS
15x 600GB 15k RPM SAS
I'd like to set them up like RAID10. Previously, I was using two hardware
RAID10 volumes, with the 15th drive as a hot spare, in each enclosure.
Using ZFS, it could be nice to make them a single volume, so that I could
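For what it's worth, a minimal sketch of one way to express that layout as a single pool of striped mirrors, with the two 15th drives as shared hot spares; the pool name and device names are hypothetical, and only the first mirror pairs per enclosure are shown:
# zpool create tank \
      mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0 \
      mirror c2t0d0 c2t1d0 mirror c2t2d0 c2t3d0 \
      spare c1t14d0 c2t14d0
An L2ARC device (assuming an SSD is available) can be added afterwards and caches the working set for the whole pool:
# zpool add tank cache c3t0d0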
Hi SR,
You can create a mirrored storage pool, but you can't mirror
an existing raidz2 pool nor can you convert a raidz2 pool
to a mirrored pool.
You would need to copy the data from the existing pool,
destroy the raidz2 pool, and create a mirrored storage
pool.
Cindy
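A minimal sketch of that copy/destroy/recreate sequence, assuming the raidz2 pool is named tank, the data lives in a dataset tank/data, and a second pool named backup exists to stage the copy (all three names are hypothetical):
Snapshot and replicate the data to the staging pool:
# zfs snapshot -r tank/data@migrate
# zfs send -R tank/data@migrate | zfs recv -F backup/data
Destroy the raidz2 pool and recreate it as mirrors on the same disks:
# zpool destroy tank
# zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0
Replicate the data back:
# zfs send -R backup/data@migrate | zfs recv -F tank/data
If the mirrored pool is built from different disks instead, the first send/receive can target it directly and the copy back is unnecessary.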
On 10/28/10 11:19, SR wrote:
I have a raidz2 zpool which I would like to create a mirror of.
Is it possible to create a mirror of a zpool?
I know I can create multi-way mirrors of vdevs, do zfs send/receive, etc., to
mirror data. But can I create a mirror at the zpool level?
Thanks
SR
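As a side note on the vdev-level mirroring mentioned above: attaching another disk to an existing mirror vdev is what makes it a multi-way mirror. A minimal sketch with hypothetical pool and device names:
Attach a third disk alongside an existing member; ZFS resilvers it automatically:
# zpool attach tank c1t0d0 c1t9d0
Watch the resilver progress:
# zpool status tank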
PS obviously these are home systems; in a real environment,
I'd only be sharing out filesystems with user or application
data, and not local system filesystems! But since it's just
me, I somewhat trust myself not to shoot myself in the foot.
I have sharesmb=on set for a bunch of filesystems,
including three that weren't mounted. Nevertheless,
all of them are advertised. Needless to say,
the ones that aren't mounted can't be accessed remotely,
even though, since they are advertised, it looks as if they could be.
# zfs list -o name,mountpoint,share
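One way to line up the share setting with the mount state, and to mount the stragglers so the advertised shares actually work (the dataset name is hypothetical):
# zfs list -o name,mounted,sharesmb
# zfs mount tank/export/music
or simply mount everything that has a mountpoint set:
# zfs mount -a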
On 8/7/2010 4:11 PM, Terry Hull wrote:
>
> It is just that lots of the PERC controllers do not do JBOD very well. I've
> done it several times making a RAID 0 for each drive. Unfortunately, that
> means the server has lots of RAID hardware that is
Hi.
I installed Solaris 10 x86 on a PowerEdge R510 with a PERC H700 without problems.
8 HDDs are configured as RAID 6.
My only question is how to monitor this controller.
Do you have any tools that allow you to monitor this controller and get HDD status?
Thank you for your help.
PS.
I know this is OpenSolaris, not
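One possibility, offered as an assumption rather than something verified on this setup: the H700 is an LSI MegaRAID-based controller, so LSI's MegaCli utility should be able to report adapter, virtual disk, and physical disk status if a build is available for your Solaris release. The install path below is also an assumption:
Adapter overview:
# /opt/MegaRAID/MegaCli -AdpAllInfo -aALL
State of the RAID 6 virtual disk:
# /opt/MegaRAID/MegaCli -LDInfo -Lall -aALL
Per-drive status, including error and predictive-failure counts:
# /opt/MegaRAID/MegaCli -PDList -aALL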
Thanks! I will try later today and report back the result.
Hi all,
I am running Netatalk on OpenSolaris snv_134 on a Dell R610 server with 32 GB RAM.
I am experiencing different speeds when writing to and reading from
the pool.
The pool itself consists of two FC LUNs that each form a vdev (no
comments on that please, we discussed that already! ;) ).
Now, I
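As a general note on chasing asymmetric speeds: a rough way to separate raw pool throughput from the Netatalk/AFP layer is a plain local write and read test while watching per-vdev activity. A minimal sketch, assuming the pool is mounted at /tank (hypothetical); the test file should be larger than RAM (32 GB here) so the read comes from disk rather than the ARC, and note that zeroes will compress away if compression is enabled on the dataset:
In one terminal, watch per-vdev throughput:
# zpool iostat -v tank 5
Sequential write, then read, then clean up:
# dd if=/dev/zero of=/tank/ddtest bs=1024k count=65536
# dd if=/tank/ddtest of=/dev/null bs=1024k
# rm /tank/ddtest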
On Oct 28, 2010, at 04:44, Jan Hellevik wrote:
So, my best action would be to delete the zpool.cache and then do a
zpool import?
Should I try to match the disks to the cables they were previously
connected to before I do the import? Will that make any difference?
BTW, ZFS version is 22.
I'd say
I think the 'corruption' is caused by the shuffling and mismatch of the disks.
One 1.5TB is now believed to be part of a mirror with a 2TB, a 1TB part of a
mirror with a 1.5TB and so on. It would be better if zfs would try to find the
second disk of each mirror instead of relying on what control
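On the mechanics discussed above: exporting the pool (or removing /etc/zfs/zpool.cache) and then importing makes ZFS re-read the on-disk labels and reassemble each mirror from those labels, regardless of which controller port or cable a disk now sits on. A minimal sketch, assuming the pool is named tank and its disks appear under /dev/dsk:
If the pool is currently imported, export it cleanly first:
# zpool export tank
Scan the disks and see what ZFS finds:
# zpool import -d /dev/dsk
Import by name (or by the numeric pool GUID shown by the previous command):
# zpool import tank
# zpool status tank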