Just went to Oracle's website and noticed that you can download Solaris 11 Express.
We have a weird issue with our ZFS pool and COMSTAR. The pool shows online with
no errors and everything looks good, but when we try to access zvols shared out
with COMSTAR, Windows reports that the devices have bad blocks. Everything had
been working great until last night, and no changes have been made.
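A first round of checks for this kind of symptom, assuming a pool named tank and COMSTAR-backed zvols (all names here are placeholders), might look like:

zpool status -v tank       # any read/write/checksum errors per vdev?
zpool scrub tank           # force a full re-read and checksum of every block
stmfadm list-lu -v         # LU state, backing zvol and block size
stmfadm list-target -v     # target port state as COMSTAR sees it

If the scrub comes back clean but Windows still reports bad blocks, the problem is more likely in the LU/target layer or the fabric than in the pool itself.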
We have the following setup configured. The drives are running on a couple of
PAC PS-5404s. Since these units do not support JBOD, we have configured each
individual drive as a RAID0 and shared out all 48 RAID0s per box. This is
connected to the Solaris box through a dual-port 4G Emulex fibrechannel HBA.
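A quick way to confirm that the Solaris box actually sees both HBA ports and all 48 LUNs from each shelf (commands below are generic Solaris tools, nothing specific to this setup):

fcinfo hba-port            # WWNs, link speed and state of the Emulex ports
cfgadm -al                 # attachment points on the FC fabric
echo | format              # non-interactive list of every LUN the OS can see
mpathadm list lu           # only if MPxIO multipathing is enabled

If the LUN count here does not match what the shelves export, the problem is below ZFS.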
We downloaded zilstat from
http://www.richardelling.com/Home/scripts-and-programs-1 but we never could get
the script to run. We are not really sure how to debug. :(
./zilstat.ksh
dtrace: invalid probe specifier
#pragma D option quiet
inline int OPT_time = 0;
inline int OPT_txg = 0;
inline
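A couple of checks that usually narrow down an "invalid probe specifier" error, assuming the script itself is unmodified: dtrace needs full privileges (without them the fbt provider is not visible, which produces exactly this error), and the fbt probes the script names must exist in the running kernel, since function renames between builds can break them.

pfexec ./zilstat.ksh                                 # or run it from a root shell
pfexec dtrace -n 'BEGIN { trace("ok"); exit(0); }'   # does dtrace work at all?
pfexec dtrace -l -n 'fbt::zil_commit:entry'          # is the zil_commit probe present?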
Cool, we can get the Intel X25-E's for around $300 apiece from HP with the
sled. I don't see the X25-M available, so we will look at 4 of the X25-E's.
Thanks :)
We are looking into the possibility of adding dedicated ZIL and/or L2ARC
devices to our pool. We are looking at getting four 32GB Intel X25-E SSD
drives. Would this be a good solution for slow write speeds? We are currently
sharing out different slices of the pool to Windows servers using COMSTAR.
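If the goal is to speed up the synchronous writes coming in through COMSTAR, the SSDs help most as a mirrored log device (slog); leftover devices can go in as L2ARC, which only helps reads. A minimal sketch, assuming the pool is called tank and using placeholder device names:

zpool add tank log mirror c2t0d0 c2t1d0    # mirrored ZIL / slog
zpool add tank cache c2t2d0 c2t3d0         # striped L2ARC read cache
zpool status tank                          # new "logs" and "cache" sections should appear

Log devices only need a few GB of very low-latency space, so 32GB X25-Es are plenty, and the write-optimized SLC E series makes a better slog than the M.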
Our server locked up hard yesterday and we had to hard power it off and back
on. The server locked up again while reading the ZFS config (I left it trying
to read the config for 24 hours). I went through and removed the drives for the
data pool we created, powered on the server, and it booted successfully.
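One way to get such a box back up without pulling drives, assuming the hang really is the data pool import at boot: keep the pool from auto-importing, then try the recovery import by hand. The pool name is a placeholder, and the -F/-n recovery options need a build that supports pool recovery (b128 or later):

mv /etc/zfs/zpool.cache /etc/zfs/zpool.cache.bad   # from single-user mode; stops data pools
                                                   # from auto-importing (rpool boots without it)
# after rebooting:
zpool import -nF tank      # dry run: reports whether a rewind could recover the pool
zpool import -F tank       # attempt the recovery import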
3 shelves with 2 controllers each, 48 drives per shelf. These are Fibrechannel
attached. We would like all 144 drives added to the same large pool.
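If the drives go in raw rather than behind hardware RAID0 LUNs, one possible shape for a single 144-drive pool is a stripe of raidz2 groups, for example 18 groups of 8 drives. Device names are placeholders and only the first two groups are written out here:

zpool create tank \
  raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 c3t6d0 c3t7d0 \
  raidz2 c3t8d0 c3t9d0 c3t10d0 c3t11d0 c3t12d0 c3t13d0 c3t14d0 c3t15d0
# ...repeat "zpool add tank raidz2 <8 drives>" for the remaining 16 groups,
# and consider keeping a couple of drives as hot spares: zpool add tank spare c5t46d0 c5t47d0

Building the raidz2 groups from raw drives (or single-drive LUNs) lets ZFS detect and repair failures per disk, instead of losing a whole 4-drive RAID0 LUN at once.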
> Mirrors are made with vdevs (LUs or disks), not pools. However, the vdev
> attached to a mirror must be the same size (or nearly so) as the original.
> If the original vdevs are 4TB, then a migration to a pool made with 1TB
> vdevs cannot be done by replacing vdevs (mirror method).
> --
> On Apr 28, 2010, at 6:37 AM, Wolfraider wrote:
> > The original drive pool was configured with 144 1TB drives and a hardware
> > raid 0 stripe across every 4 drives to create 4TB luns.
>
> For the archives, this is not a good idea...
Exactly, this is the reason I wan
We are running the latest dev release.
I was hoping to just mirror the zfs volumes and not the whole pool. The original
pool is around 100TB in size. The spare disks I have come up with will
total around 40TB. We only have 11TB of space in use on the original zfs pool.
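Since only the data has to move, the zvols can be copied one at a time with snapshots and zfs send/receive; the roughly 40TB of spare disks easily holds the 11TB in use. A sketch with placeholder pool and volume names:

zfs snapshot tank/vol1@migrate
zfs send tank/vol1@migrate | zfs receive temppool/vol1

# after the old pool has been destroyed and rebuilt, reverse the direction:
zfs snapshot temppool/vol1@back
zfs send temppool/vol1@back | zfs receive tank/vol1

A final incremental pass (zfs send -i) just before cutover keeps the window during which the COMSTAR LUs are offline short.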
The original drive pool was configured with 144 1TB drives and a hardware raid
0 stripe across every 4 drives to create 4TB luns. These luns were then
combined into 6 raidz2 vdevs and added to the zfs pool. I would like to delete
the original hardware raid 0 stripes and add the 144 drives directly.
We would like to delete and recreate our existing zfs pool without losing any
data. The way we thought we could do this was to attach a few HDDs and create a
new temporary pool, migrate our existing zfs volumes to the new pool, delete
and recreate the old pool, and migrate the zfs volumes back. The bi
We are sharing the LUNs out with COMSTAR from one big pool. In essence, we
created our own low-cost SAN. We currently have our Windows clients connected
with Fibrechannel to the COMSTAR target.
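For anyone searching the archives, the basic COMSTAR flow behind a setup like this is roughly the following (size, names and GUID are placeholders, default host and target groups assumed):

zfs create -V 2T tank/win-lun0                 # zvol to export (add -s for sparse)
sbdadm create-lu /dev/zvol/rdsk/tank/win-lun0  # prints the LU GUID
stmfadm add-view 600144f0...                   # make the LU visible (GUID from above)
stmfadm list-lu -v                             # confirm the LU is online

Host and target groups (stmfadm create-hg / create-tg plus the -h and -t options to add-view) control which initiators see which LUs when the defaults are not enough.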
> On Mar 25, 2010, at 7:20 AM, Wolfraider wrote:
> > This assumes that you have the storage to replicate or at least restore
> > all data to a DR site. While this is another way to do it, it is not
> > really cost effective in our situation.
>
> If the primary and DR site ar
This assumes that you have the storage to replicate or at least restore all
data to a DR site. While this is another way to do it, it is not really cost
effective in our situation.
What I am thinking is basically having two servers. One has the zpool attached
and is sharing out our data. The other i
It seems like zpool export will quiesce the drives and mark the pool as
exported. This would be good if we wanted to move the pool at that time, but we
are thinking of a disaster recovery scenario. It would be nice to export just
the config so that, if our controller dies, we can use the zpool in another box.
Sorry if this has been discussed before. I tried searching but I couldn't find
any info about it. We would like to export our ZFS configuration in case we
need to import the pool onto another box. We do not want to back up the actual
data in the zfs pool; that is already handled through another process.
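For what it is worth, the pool configuration is written into a label on every member disk, so nothing extra has to be exported for this to work; a replacement head only needs to see the same LUNs. A sketch of the recovery path (pool name is a placeholder):

zpool import            # scans devices and lists any importable pools found
zpool import -f tank    # -f because the dead box never cleanly exported it

To keep a human-readable record of the layout alongside the backups, the output of these is worth capturing periodically:

zpool status -v tank
zpool history tank
zdb -C tank             # cached on-disk configuration details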