On Fri, May 7, 2010 at 4:57 AM, Brandon High wrote:
> I believe that the L2ARC behaves the same as a pool with multiple
> top-level vdevs. It's not typical striping, where every write goes to
> all devices. Writes may go to only one device, or may avoid a device
> entirely while using several othe
Hi Gary,
I would not remove this line in /etc/system.
We have been combatting this bug for a while now on our ZFS file system running
JES Commsuite 7.
I would be interested in finding out how you were able to pinpoint the
problem.
We seem to have no worries with the system currently, but whe
On 06/05/2010 21:45, Nicolas Williams wrote:
On Thu, May 06, 2010 at 03:30:05PM -0500, Wes Felter wrote:
On 5/6/10 5:28 AM, Robert Milkowski wrote:
sync=disabled
Synchronous requests are disabled. File system transactions
only commit to stable storage on the next DMU transaction grou
On Tue, May 4, 2010 at 11:34 AM, Brandon High wrote:
> On Tue, May 4, 2010 at 10:19 AM, Tony MacDoodle
> wrote:
> > How would one determine if I should have a separate ZIL disk? We are
> using
> > ZFS as the backend of our Guest Domains boot drives using LDom's. And we
> are
> > seeing bad/very
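(A hedged illustration, not from the original thread: if synchronous write latency from the guest domains turns out to be the bottleneck, a dedicated log (slog) device can be added; "tank" and the c#t#d# names below are placeholders.)
$ zpool add tank log c3t0d0                  # single slog device
$ zpool add tank log mirror c3t0d0 c3t1d0    # mirrored slog, survives a device failure
$ zpool status tank                          # slog devices show up under a "logs" section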
On May 6, 2010, at 11:08 AM, Michael Sullivan wrote:
> Well, if you are striping over multiple devices then your I/O should be spread
> over the devices and you should be reading them all simultaneously rather
> than just accessing a single device. Traditional striping would give 1/n
> performanc
Hi--
Even though the dedup property can be set on a file system basis,
dedup space usage is accounted for at the pool level by using the
zpool list command.
My non-expert opinion is that it would be near impossible to report
space usage for dedup and non-dedup file systems at the file system
level
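As a hedged illustration of the above (dataset and pool names are placeholders): dedup is enabled per dataset, but the savings only show up as a pool-wide ratio.
$ zfs set dedup=on tank/fs
$ zpool list tank             # the DEDUP column is the pool-wide ratio
$ zpool get dedupratio tank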
On Thu, May 06, 2010 at 03:30:05PM -0500, Wes Felter wrote:
> On 5/6/10 5:28 AM, Robert Milkowski wrote:
>
> >sync=disabled
> >Synchronous requests are disabled. File system transactions
> >only commit to stable storage on the next DMU transaction group
> >commit which can be many seconds.
>
> Is
On 5/6/10 5:28 AM, Robert Milkowski wrote:
sync=disabled
Synchronous requests are disabled. File system transactions
only commit to stable storage on the next DMU transaction group
commit which can be many seconds.
Is there a way (short of DTrace) to write() some data and get notified
when th
On Fri, 2010-05-07 at 03:10 +0900, Michael Sullivan wrote:
> This is interesting, but what about iSCSI volumes for virtual machines?
>
> Compress or de-dupe? Assuming the virtual machine was made from a clone of
> the original iSCSI or a master iSCSI volume.
>
> Does anyone have any real world
On 06/05/2010 19:08, Michael Sullivan wrote:
Hi Marc,
Well, if you are striping over multiple devices then your I/O should be
spread over the devices and you should be reading them all
simultaneously rather than just accessing a single device.
Traditional striping would give 1/n performance im
On Thu, May 6, 2010 at 11:31 AM, eXeC001er wrote:
> How can I get this info?
$ man zpool
$ zpool list
NAME    SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
rpool   111G  15.5G  95.5G  13%  1.00x  ONLINE  -
tank   7.25T  3.16T  4.09T  43%  1.12x  ONLINE  -
$ zpool get dedupratio tank
NAME
On Thu, May 6, 2010 at 11:08 AM, Michael Sullivan
wrote:
> The round-robin access I am referring to is the way the L2ARC vdevs appear
> to be accessed. So, any given object will be taken from a single device
> rather than from several devices simultaneously, thereby increasing the I/O
> throughp
Hi Bob,
You can review the latest Solaris 10 and OpenSolaris release dates here:
http://www.oracle.com/ocom/groups/public/@ocom/documents/webcontent/059542.pdf
Solaris 10 release, CY2010
OpenSolaris release, 1st half CY2010
Thanks,
Cindy
On 05/05/10 18:03, Bob Friesenhahn wrote:
On Wed, 5 M
On Fri, 7 May 2010, Michael Sullivan wrote:
Well, if you are striping over multiple devices then your I/O should be spread
over the devices and you
should be reading them all simultaneously rather than just accessing a single
device. Traditional
striping would give 1/n performance improvement r
Hi.
How can I get this info?
Thanks.
On Thu, May 6, 2010 at 1:18 AM, Edward Ned Harvey wrote:
> > From the information I've been reading about the loss of a ZIL device,
> What the heck? Didn't I just answer that question?
> I know I said this is answered in ZFS Best Practices Guide.
>
> http://www.solarisinternals.com/wiki/index.php
This is interesting, but what about iSCSI volumes for virtual machines?
Compress or de-dupe? Assuming the virtual machine was made from a clone of the
original iSCSI or a master iSCSI volume.
Does anyone have any real world data on this? I would think the iSCSI volumes
would diverge quite a bit
Hi Marc,
Well, if you are striping over multiple devices then your I/O should be spread
over the devices and you should be reading them all simultaneously rather than
just accessing a single device. Traditional striping would give 1/n
performance improvement rather than 1/1 where n is the number
Hi Michael,
What makes you think striping the SSDs would be faster than round-robin?
-marc
On Thu, May 6, 2010 at 1:09 PM, Michael Sullivan wrote:
> Everyone,
>
> Thanks for the help. I really appreciate it.
>
> Well, I actually walked through the source code with an associate today and
> we
Everyone,
Thanks for the help. I really appreciate it.
Well, I actually walked through the source code with an associate today and we
found out how things work by looking at the code.
It appears that L2ARC is just assigned in round-robin fashion. If a device
goes offline, then it goes to the
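A hedged sketch of what this looks like in practice (pool and device names are placeholders): cache devices are simply listed under the pool, and zpool iostat shows how the L2ARC feed spreads writes across them.
$ zpool add tank cache c2t0d0 c2t1d0
$ zpool iostat -v tank 5      # per-vdev view; watch the cache devices fill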
On Wed, May 5, 2010 at 8:47 PM, Michael Sullivan
wrote:
> While it explains how to implement these, there is no information regarding
> failure of a device in a striped L2ARC set of SSD's. I have been hard
> pressed to find this information anywhere, short of testing it myself, but I
> don't h
On 06/05/2010 15:31, Tomas Ögren wrote:
On 06 May, 2010 - Bob Friesenhahn sent me these 0,6K bytes:
On Wed, 5 May 2010, Edward Ned Harvey wrote:
In the L2ARC (cache) there is no ability to mirror, because cache device
removal has always been supported. You can't mirror a cache devic
Hi all,
It seems like the market has yet another type of SSD device, this time a
USB 3.0 portable SSD device by OCZ.
Going on the specs it seems to me that if this device has a good price
it might be quite useful for caching purposes on ZFS based storage.
Take a look at
http://www.ocztechnology.co
On 06 May, 2010 - Bob Friesenhahn sent me these 0,6K bytes:
> On Wed, 5 May 2010, Edward Ned Harvey wrote:
>>
>> In the L2ARC (cache) there is no ability to mirror, because cache device
>> removal has always been supported. You can't mirror a cache device, because
>> you don't need it.
>
> How do
On Wed, 5 May 2010, Edward Ned Harvey wrote:
In the L2ARC (cache) there is no ability to mirror, because cache device
removal has always been supported. You can't mirror a cache device, because
you don't need it.
How do you know that I don't need it? The ability seems useful to me.
Bob
--
B
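For context, a hedged example of the removal being referred to (names are placeholders): a cache device can be pulled from a live pool, and losing one only costs warm cache, since every L2ARC block still exists in the pool itself.
$ zpool remove tank c2t0d0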
On May 6, 2010, at 8:34 AM, Edward Ned Harvey
wrote:
From: Pasi Kärkkäinen [mailto:pa...@iki.fi]
In neither case do you have data or filesystem corruption.
ZFS probably is still OK, since it's designed to handle this (?),
but the data can't be OK if you lose 30 secs of writes.. 30 secs o
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Ragnar Sundblad
>
> But if you have an application, protocol and/or user that demands
> or expects persistent storage, disabling the ZIL of course could be fatal
> in case of a crash. Examples are
> From: Pasi Kärkkäinen [mailto:pa...@iki.fi]
>
> > In neither case do you have data or filesystem corruption.
> >
>
> ZFS probably is still OK, since it's designed to handle this (?),
> but the data can't be OK if you lose 30 secs of writes.. 30 secs of
> writes
> that have been ack'd being done
On Thu, May 06, 2010 at 01:15:41PM +0100, Robert Milkowski wrote:
> On 06/05/2010 13:12, Robert Milkowski wrote:
> >On 06/05/2010 12:24, Pawel Jakub Dawidek wrote:
> >>I read that this property is not inherited and I can't see why.
> >>If what I read is up-to-date, could you tell why?
> >
> >It is
On 06/05/2010 13:12, Robert Milkowski wrote:
On 06/05/2010 12:24, Pawel Jakub Dawidek wrote:
I read that this property is not inherited and I can't see why.
If what I read is up-to-date, could you tell why?
It is inherited. Sorry for the confusion but there was a discussion if
it should or sh
On 06/05/2010 12:24, Pawel Jakub Dawidek wrote:
I read that this property is not inherited and I can't see why.
If what I read is up-to-date, could you tell why?
It is inherited. Sorry for the confusion, but there was a discussion about whether
it should or should not be inherited, then we propose tha
On Wed, 2010-05-05 at 09:45 -0600, Evan Layton wrote:
> No that doesn't appear like an EFI label. So it appears that ZFS
> is seeing something there that it's interpreting as an EFI label.
> Then the command to set the bootfs property is failing due to that.
>
> To restate the problem the BE can't
On 5/05/10 10:42 PM, Bruno Sousa wrote:
Hi all,
I have faced yet another kernel panic that seems to be related to mpt
driver.
This time I was trying to add a new disk to a running system (snv_134)
and this new disk was not being detected... Following a tip, I ran the
lsitool to reset the bus and
On 06/05/2010 12:24, Pawel Jakub Dawidek wrote:
I read that this property is not inherited and I can't see why.
If what I read is up-to-date, could you tell why?
It is inherited, this changed as a result of the PSARC review.
--
Darren J Moffat
On Thu, May 06, 2010 at 11:28:37AM +0100, Robert Milkowski wrote:
> With the put back of:
>
> [PSARC/2010/108] zil synchronicity
>
> zfs datasets now have a new 'sync' property to control synchronous
> behaviour.
> The zil_disable tunable to turn synchronous requests into asynchronous
> requests
Based on comments, some people say nay, some say yah, so I decided
to give it a spin and see how I get on.
To make my mirror bootable I followed instructions posted here :
http://www.taiter.com/blog/2009/04/opensolaris-200811-adding-disk.html
I plan to do a quick write up myself of my
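A hedged sketch of the usual steps on x86 OpenSolaris, in case the linked write-up goes away (SPARC uses installboot instead of installgrub; device names are placeholders):
$ zpool attach rpool c0t0d0s0 c0t1d0s0
$ installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0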
Please find this thread for further info about this topic :
http://www.opensolaris.org/jive/thread.jspa?threadID=120824&start=0&tstart=0
In short, ZFS doesn't support thin reclamation today, although we have an RFE open
to implement it somewhere in the future.
Regards,
sendai
With the put back of:
[PSARC/2010/108] zil synchronicity
zfs datasets now have a new 'sync' property to control synchronous behaviour.
The zil_disable tunable to turn synchronous requests into asynchronous
requests (disable the ZIL) has been removed. For systems that use that switch
on upgrade
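A hedged sketch of the new interface (the dataset name is a placeholder); per the PSARC case the property takes standard, always or disabled, and it is inherited like other dataset properties:
$ zfs get sync tank/fs
$ zfs set sync=disabled tank/fs   # per-dataset equivalent of the old zil_disable behaviour
$ zfs set sync=standard tank/fs   # POSIX-compliant default
$ zfs inherit sync tank/fs        # revert to the inherited value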
On Thu, May 6, 2010 at 1:31 AM, Brandon High wrote:
> Any other way to fix it? There's no data in the zvol that I can't
> easily reproduce if it needs to be destroyed.
I did a rollback to the most recent snapshot, which seems to have worked.
-B
--
Brandon High : bh...@freaks.com
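A hedged sketch of that workaround (volume and snapshot names are placeholders):
$ zfs list -t snapshot -r tank/vol     # find the most recent snapshot
$ zfs rollback tank/vol@latest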
I'm unable to snapshot a dataset, receiving the error "dataset is
busy". Google and some bug reports suggest it's from a zil that hasn't
been completely replayed, and that mounting and unmounting the dataset
will fix it. Which is great, except it's a zvol.
Any other way to fix it? There's no data
On 6 maj 2010, at 08.17, Pasi Kärkkäinen wrote:
> On Wed, May 05, 2010 at 11:32:23PM -0400, Edward Ned Harvey wrote:
>>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>>> boun...@opensolaris.org] On Behalf Of Robert Milkowski
>>>
>>> if you can disable ZIL and compare the perfor
On Thu, May 6, 2010 at 2:06 AM, Richard Jahnel wrote:
> I've googled this for a bit, but can't seem to find the answer.
>
> What does compression bring to the party that dedupe doesn't cover already?
Compression will reduce the storage requirements for non-duplicate data.
As an example, I have a
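A hedged illustration of how the two interact (dataset and pool names are placeholders): compression shrinks unique data, dedup collapses identical blocks, and each reports its own ratio.
$ zfs set compression=on tank/fs
$ zfs get compressratio tank/fs
$ zpool get dedupratio tank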