2011/11/11 Ian Collins
> On 11/11/11 02:42 AM, Edward Ned Harvey wrote:
>
>>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>>> boun...@opensolaris.org] On Behalf Of darkblue
>>>
>>> 1 * XEON 5606
>>> 1 * supermicro X8DT3-LN4F
>>> 6 * 4G RECC RAM
>>> 22 * WD RE3 1T harddisk
On Nov 10, 2011, at 18:41, Daniel Carosone wrote:
> On Tue, Oct 11, 2011 at 08:17:55PM -0400, John D Groenveld wrote:
>> Under both Solaris 10 and Solaris 11x, I receive the evil message:
>> | I/O request is not aligned with 4096 disk sector size.
>> | It is handled through Read Modify Write but the performance is very low.
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Bob Friesenhahn
>
> Silent corruption (of zfs) does not occur, for the simple reason that
> all of the block writes are flushed and acknowledged by the disks
> before a new transaction group begins.
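The flush-before-commit ordering described above can be sketched as a toy model (this is illustrative Python, not ZFS code; all names are hypothetical). Data blocks are made durable first; only then is the single root pointer updated, so a crash at any earlier point leaves the previous, consistent state:

```python
# Toy model of transaction-group commit ordering (not ZFS code).
class Disk:
    def __init__(self):
        self.blocks = {}          # durable storage
        self.cache = {}           # volatile write cache
        self.uberblock = None     # points at the current tree root

    def write(self, addr, data):
        self.cache[addr] = data   # acknowledged, but not yet durable

    def flush(self):
        self.blocks.update(self.cache)  # make cached writes durable
        self.cache.clear()

    def commit(self, root_addr):
        self.uberblock = root_addr      # atomic single-block update


disk = Disk()
disk.write("root_v1", {"file": "old"})
disk.flush()
disk.commit("root_v1")

# New transaction group: write and flush the new blocks first...
disk.write("root_v2", {"file": "new"})
disk.flush()
# ...a crash here still leaves uberblock -> root_v1 (consistent).
assert disk.uberblock == "root_v1"
disk.commit("root_v2")
assert disk.blocks[disk.uberblock] == {"file": "new"}
```

The key property is that the uberblock update is the only commit point, and it is a single atomic write ordered after everything it references.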
On Tue, Oct 11, 2011 at 08:17:55PM -0400, John D Groenveld wrote:
> Under both Solaris 10 and Solaris 11x, I receive the evil message:
> | I/O request is not aligned with 4096 disk sector size.
> | It is handled through Read Modify Write but the performance is very low.
I got similar with 4k secto
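The alignment test behind that warning is simple arithmetic: a request falls back to read-modify-write when its offset or length is not a multiple of the 4096-byte physical sector size. A minimal sketch (a hypothetical helper, not the actual driver code):

```python
SECTOR = 4096  # physical sector size reported by the disk

def needs_rmw(offset, length, sector=SECTOR):
    """True if an I/O request would trigger read-modify-write,
    i.e. it does not start and end on a sector boundary."""
    return offset % sector != 0 or length % sector != 0

# A 512-byte write covers only part of a 4K sector:
assert needs_rmw(0, 512)
# An aligned 8K write at a 4K boundary does not:
assert not needs_rmw(4096, 8192)
```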
On 11/11/11 02:42 AM, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of darkblue
1 * XEON 5606
1 * supermicro X8DT3-LN4F
6 * 4G RECC RAM
22 * WD RE3 1T harddisk
4 * intel 320 (160G) SSD
1 * supermicro 846E1-900B chassis
On 10 November, 2011 - Will Murnane sent me these 1,5K bytes:
> On Thu, Nov 10, 2011 at 14:12, Tomas Forsman wrote:
> > On 10 November, 2011 - Bob Friesenhahn sent me these 1,6K bytes:
> >> On Wed, 9 Nov 2011, Tomas Forsman wrote:
>
> At all times, if there's a server crash, ZFS will come back along at
> next boot or mount, and the filesystem will be in a consistent state.
On Thu, Nov 10, 2011 at 14:12, Tomas Forsman wrote:
> On 10 November, 2011 - Bob Friesenhahn sent me these 1,6K bytes:
>> On Wed, 9 Nov 2011, Tomas Forsman wrote:
At all times, if there's a server crash, ZFS will come back along at next
boot or mount, and the filesystem will be in a consistent state, that was
indeed a valid state which the filesystem actually passed through at some
moment in time.
On 10 November, 2011 - Bob Friesenhahn sent me these 1,6K bytes:
> On Wed, 9 Nov 2011, Tomas Forsman wrote:
>>>
>>> At all times, if there's a server crash, ZFS will come back along at next
>>> boot or mount, and the filesystem will be in a consistent state, that was
>>> indeed a valid state which the filesystem actually passed through at
>>> some moment in time.
Hi John,
CR 7102272:
ZFS storage pool created on a 3 TB USB 3.0 device has device label
problems
Let us know if this is still a problem in the OS11 FCS release.
Thanks,
Cindy
On 11/10/11 08:55, John D Groenveld wrote:
In message <4e9db04b.80...@oracle.com>, Cindy Swearingen writes:
This is CR 7102272.
On 11/10/2011 06:38 AM, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Jeff Savit
Also, not a good idea for
performance to partition the disks as you suggest.
Not totally true. By default, if you partition the disks, then the disk
write cache gets disabled.
In message <4e9db04b.80...@oracle.com>, Cindy Swearingen writes:
>This is CR 7102272.
What is the title of this BugId?
I'm trying to attach my Oracle CSI to it but Chuck Rozwat
and company's support engineer can't seem to find it.
Once I get upgraded from S11x SRU12 to S11, I'll reproduce
on a mo
On Nov 9, 2011, at 6:08 PM, Francois Dion wrote:
> Some laptops have pc card and expresscard slots, and you can get an adapter
> for sd card, so you could set up your os non mirrored and just set up home on
> a pair of sd cards. Something like
> http://www.amazon.com/Sandisk-SDAD109A11-Digital-
On Wed, 9 Nov 2011, Tomas Forsman wrote:
At all times, if there's a server crash, ZFS will come back along at next
boot or mount, and the filesystem will be in a consistent state, that was
indeed a valid state which the filesystem actually passed through at some
moment in time. So as long as al
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of darkblue
>
> Why would you want your root pool to be on the SSD? Do you expect an
> extremely high I/O rate for the OS disks? Also, not a good idea for
> performance to partition the disks as you suggest.
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of darkblue
>
> 1 * XEON 5606
> 1 * supermicro X8DT3-LN4F
> 6 * 4G RECC RAM
> 22 * WD RE3 1T harddisk
> 4 * intel 320 (160G) SSD
> 1 * supermicro 846E1-900B chassis
I just want to say, this isn't
I have a Solaris 10 machine that I've been having an interesting time with
today. (Live Upgrade didn't work, stmsboot didn't work, I managed to rebuild
it with jumpstart at about the 10th attempt.)
Anyway, it looks like one of my drives has had its label overwritten by
fdisk,
pool: disk00
i
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Jeff Savit
>
> Also, not a good idea for
> performance to partition the disks as you suggest.
Not totally true. By default, if you partition the disks, then the disk write
cache gets disabled.
> From: Gregg Wonderly [mailto:gregg...@gmail.com]
>
> > There is no automatic way to do it.
> For me, this is a key issue. If there was an automatic rebalancing
> mechanism, that same mechanism would work perfectly to allow pools to
> have disk sets removed. It would provide the needed basic me
AFAIK, there is no change in open source policy for Oracle Solaris
On 11/9/2011 10:34 PM, Fred Liu wrote:
... so when will zfs-related improvement make it to solaris-
derivatives :D ?
I am also very curious about Oracle's policy about source code. ;-)
Fred