Hi Ian,
Other than bug fixes, the only notable feature in the Solaris 10 5/09
release is that Solaris Live Upgrade supports additional zones configurations.
You can read about these configurations here:
http://docs.sun.com/app/docs/doc/819-5461/gigek?l=en&a=view
I hope someone else from the team
Is there a published list of updates to ZFS for Solaris 10 update 7?
I can't find anything specific in the release notes.
--
Ian.
On Fri, May 1 at 14:19, Miles Nordin wrote:
Secondly, I'm not sure I buy the USENIX claim that you can limp along
with one head gone. The last failed drive I took apart had indeed failed
on just one head, but it had scraped all the rust off the platter
(down to glass! it was really glass!), and the
On Fri, 1 May 2009, Eric D. Mudama wrote:
On Fri, May 1 at 11:44, Bob Friesenhahn wrote:
Hard drives are comprised of multiple platters, with typically an
independently navigated head on each side.
This is a gap in your assumptions I believe.
The headstack is a single physical entity, so all heads move in unison to the same position
Has the issue with "disappearing" single-LUN zpools causing corruption
been fixed?
I'd have to look up the bug, but I got bitten by this last year about
this time:
Config:
single LUN export from array to host, attached via FC.
Scenario:
(1) array is turned off while host is alive, but whil
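A minimal sketch of the setup being described, purely for illustration
(pool and device names hypothetical, not the original poster's commands):

   # one FC LUN from the array, used as the pool's only vdev:
   zpool create tank c4t0d0
   # zpool status then shows a single-device pool with no ZFS-level
   # redundancy, so if the LUN vanishes there is no second copy of the
   # data or metadata for ZFS to fall back on.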
> "edm" == Eric D Mudama writes:
>> Hard drives are comprised of multiple platters, with typically
>> an independently navigated head on each side.
edm> This is a gap in your assumptions I believe.
edm> The headstack is a single physical entity, so all heads move
edm> in unison to the same position
On 5/1/2009 2:01 PM, Miles Nordin wrote:
I've never heard of using multiple-LUN stripes for storage QoS before.
Have you actually measured some improvement in this configuration over
a single LUN? If so that's interesting.
Because of the way queuing works in the OS and in most array controllers
> "sl" == Scott Lawson writes:
> "wa" == Wilkinson, Alex writes:
> "dg" == Dale Ghent writes:
> "djm" == Darren J Moffat writes:
sl> Specifically I am talking of ZFS snapshots, rollbacks,
sl> cloning, clone promotion,
[...]
sl> Of course to take maximum advantage
On Fri, May 1 at 11:44, Bob Friesenhahn wrote:
Hard drives are comprised of multiple platters, with typically an
independently navigated head on each side.
This is a gap in your assumptions I believe.
The headstack is a single physical entity, so all heads move in unison
to the same position
This morning, as I was reading USENIX conference summaries suggesting
that maybe SATA/SAS is not an optimum interface for SSDs, it came to
mind that some out-of-the-box thinking is needed for hard
drives as well. Hard drive storage densities have been increasing
dramatically so that lates
Wilkinson, Alex wrote:
So, shall I forget ZFS and use UFS
I think the writing is on the wall, right next to "Romani ite domum" :-)
Today, laptops have 500 GByte drives, desktops have 1.5 TByte drives.
UFS really does not work well with SMI label and 1 TByte limitations.
-- richard
Dale Ghent wrote:
On May 1, 2009, at 4:01 AM, Ian Collins wrote:
Dale Ghent wrote:
On May 1, 2009, at 2:09 AM, Wilkinson, Alex wrote:
So, shall I forget ZFS and use UFS ?
Not at all. Just export lots of LUNs from your EMC to get the IO
scheduling win, not one giant one, and configure the zpool as a stripe.
On Fri, May 01, 2009 at 09:52:54AM -0400, Dale Ghent wrote:
>
> EMC. It's where data lives.
I thought it was, "EMC. It's where data goes to die." :-D
-brian
--
"Coding in C is like sending a 3 year old to do groceries. You gotta
tell them exactly what you want or you'll end up with a cupboard
On May 1, 2009, at 4:01 AM, Ian Collins wrote:
Dale Ghent wrote:
On May 1, 2009, at 2:09 AM, Wilkinson, Alex wrote:
So, shall I forget ZFS and use UFS ?
Not at all. Just export lots of LUNs from your EMC to get the IO
scheduling win, not one giant one, and configure the zpool as a
stripe.
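A minimal sketch of that layout, assuming four LUNs exported by the array
(pool and device names hypothetical):

   # plain stripe: each LUN is its own top-level vdev, so each one keeps
   # its own I/O queue in the OS and in the array controller
   zpool create tank c3t0d0 c3t1d0 c3t2d0 c3t3d0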
Ulrich Graef wrote:
Regarding ZFS encryption:
Will it be possible to have an encrypted root pool?
We don't encrypt pools, we encrypt datasets. This is the same as what
is done for compression.
It will be possible in the initial integration to have encrypted
datasets in the root pool. How
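For comparison, compression is already handled as a per-dataset property,
and the point above is that encryption will follow the same model. The
encryption line below is only a sketch; that syntax had not shipped yet:

   # existing per-dataset property:
   zfs set compression=on rpool/export/home
   # hypothetical per-dataset encryption in the same style:
   zfs create -o encryption=on rpool/export/secure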
On 6 Feb 2009, at 20:54, Ross Smith wrote:
Something to do with cache was my first thought. It seems to be able
to read and write from the cache quite happily for some time,
regardless of whether the pool is live.
If you're reading or writing large amounts of data, zfs starts
experiencing IO
Dale Ghent wrote:
On May 1, 2009, at 2:09 AM, Wilkinson, Alex wrote:
So, shall I forget ZFS and use UFS ?
Not at all. Just export lots of LUNs from your EMC to get the IO
scheduling win, not one giant one, and configure the zpool as a stripe.
What, no redundancy?
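For illustration, a redundant variant of the same multi-LUN idea, mirroring
pairs of LUNs instead of striping them flat (device names hypothetical):

   # two mirror vdevs; I/O is still spread across all four LUNs,
   # but the pool survives losing one LUN from each pair
   zpool create tank mirror c3t0d0 c3t1d0 mirror c3t2d0 c3t3d0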
--
Ian.
On May 1, 2009, at 2:09 AM, Wilkinson, Alex wrote:
On Thu, Apr 30, 2009 at 11:11:55AM -0500, Bob Friesenhahn wrote:
On Thu, 30 Apr 2009, Wilkinson, Alex wrote:
I currently have a single 17TB MetaLUN that I am about to present to an
OpenSolaris initiator and it will obviously be ZFS. H