The Discard/TRIM command is now also part of the SCSI standard.
From a SAN perspective, you will need a little of both: filesystems
need to be able to deallocate blocks, and the same deallocation should
then be passed down to the storage controller as a SCSI TRIM.
For a virtual
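To make the idea concrete, here is a toy sketch (in Python, with invented names; not a real driver interface) of a filesystem free propagating down to the storage layer as an UNMAP range:

```python
# Toy sketch: propagating filesystem block frees down to the storage
# layer as TRIM/UNMAP requests. All names here are illustrative only.

class StorageController:
    """Stands in for a SAN target that accepts SCSI UNMAP ranges."""
    def __init__(self):
        self.unmapped = []          # (lba, length) ranges we were told to reclaim

    def unmap(self, lba, length):
        # A real controller would queue erasure/reclamation at its leisure.
        self.unmapped.append((lba, length))

class Filesystem:
    def __init__(self, controller, block_size=4096):
        self.controller = controller
        self.block_size = block_size
        self.allocated = set()

    def free_blocks(self, blocks):
        """Deallocate blocks and pass the same ranges down as UNMAP."""
        for b in sorted(blocks):
            self.allocated.discard(b)
            # Translate one 4 KiB filesystem block into 512-byte sectors.
            self.controller.unmap(lba=b * self.block_size // 512,
                                  length=self.block_size // 512)

ctl = StorageController()
fs = Filesystem(ctl)
fs.allocated = {10, 11, 12}
fs.free_blocks({10, 11})
print(ctl.unmapped)   # each freed 4 KiB block became an 8-sector UNMAP range
```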
Under my OpenSolaris 2008.11 install I reached the conclusion that this
didn't work due to a combination of bugs. Is this fixed in 2009.06?
Specifically, I want to do a nightly incremental send from the most
recent common snapshot and apply it to an external (USB) drive also
formatted as ZFS. I wa
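For reference, the nightly job being described would look roughly like the pipeline below. This is only a sketch that builds the command line (pool and snapshot names are made up, and `-F` on the receive side is one common choice, not the only one):

```python
# Sketch of a nightly incremental send/receive from the most recent
# common snapshot, assuming a source pool "tank" and an external USB
# pool "backup". Builds the command string only; names are invented.
import datetime

def incremental_send_cmd(dataset, common_snap, new_snap, dest_pool):
    """zfs send -i <common> <new> | zfs receive into the external pool."""
    send = f"zfs send -i {dataset}@{common_snap} {dataset}@{new_snap}"
    recv = f"zfs receive -F {dest_pool}/{dataset}"
    return f"{send} | {recv}"

today = datetime.date(2009, 9, 7).isoformat()
cmd = incremental_send_cmd("tank/home", "2009-09-06", today, "backup")
print(cmd)
```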
On Mon, Sep 7, 2009 at 15:59, Henrik Johansson wrote:
> Hello Will,
> On Sep 7, 2009, at 3:42 PM, Will Murnane wrote:
>
> What can cause this kind of behavior, and how can I make my pool
> finish scrubbing?
>
>
> No idea what is causing this but did you try to stop the scrub?
I haven't done so yet
On Mon, Sep 7, 2009 at 12:05, Chris Gerhard wrote:
> Looks like this bug:
>
> http://bugs.opensolaris.org/view_bug.do?bug_id=6655927
>
> Workaround: Don't run zpool status as root.
I'm not, and yet the scrub continues. To be more specific, here's a
complete current interaction with zpool status:
Hello Will,
On Sep 7, 2009, at 3:42 PM, Will Murnane wrote:
What can cause this kind of behavior, and how can I make my pool
finish scrubbing?
No idea what is causing this, but did you try to stop the scrub? If so,
what happened? (Might not be a good idea, since this is not a normal
state.)
2009/9/7 Richard Elling :
> On Sep 7, 2009, at 10:20 AM, Bob Friesenhahn wrote:
>
>> The purpose of the TRIM command is to allow the FLASH device to reclaim
>> and erase storage at its leisure so that the writer does not need to wait
>> for erasure once the device becomes full. Otherwise the FLASH
On Sep 7, 2009, at 11:48 AM, Bob Friesenhahn wrote:
On Mon, 7 Sep 2009, Richard Elling wrote:
Yep, it is there to try and solve the problem of rewrites in a small area,
smaller than the bulk erase size. While it would be trivial to traverse the
spacemap and TRIM the free blocks, it might
On Mon, 7 Sep 2009, Richard Elling wrote:
Yep, it is there to try and solve the problem of rewrites in a small area,
smaller than the bulk erase size. While it would be trivial to traverse the
spacemap and TRIM the free blocks, it might not improve performance
for COW file systems. My crystal b
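A toy model of the traversal being described: walk a free-space map and emit TRIM ranges for the gaps. The spacemap format here is invented for illustration; a real ZFS spacemap is a log of alloc/free records, not a sorted extent list.

```python
# Toy model: derive TRIM ranges from a free-space map. Here the
# "spacemap" is simply a sorted list of (offset, length) ALLOCATED
# extents, and every gap between them is free space we could TRIM.

def trim_free_extents(spacemap, device_size):
    """Return the free gaps between allocated extents as TRIM ranges."""
    trims, cursor = [], 0
    for off, length in sorted(spacemap):
        if off > cursor:
            trims.append((cursor, off - cursor))   # free gap -> TRIM it
        cursor = max(cursor, off + length)
    if cursor < device_size:
        trims.append((cursor, device_size - cursor))
    return trims

# Two allocated extents on a 1000-sector device leave two free gaps.
print(trim_free_extents([(0, 100), (300, 50)], 1000))
```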
On Sep 7, 2009, at 10:20 AM, Bob Friesenhahn wrote:
On Mon, 7 Sep 2009, Richard Elling wrote:
This is an article about the new TRIM command. It would be important for
file systems which write their metadata to the same physical location or use
a MRU replacement algorithm. But ZFS is copy-o
On Mon, 7 Sep 2009, Richard Elling wrote:
This is an article about the new TRIM command. It would be important for
file systems which write their metadata to the same physical location or use
a MRU replacement algorithm. But ZFS is copy-on-write, so the metadata is
allocated from free space and
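The copy-on-write distinction can be illustrated with a toy store (invented for this thread, not ZFS internals): an update never overwrites the old block in place; it is written to freshly allocated free space.

```python
# Toy illustration of copy-on-write allocation: every update to a key
# goes to a NEWLY allocated block; the previous block is never
# overwritten in place. Not real ZFS code.

class CowStore:
    def __init__(self, nblocks):
        self.free = list(range(nblocks))   # free block numbers
        self.data = {}                     # key -> (block, value)

    def write(self, key, value):
        """Allocate a fresh block for every write (copy-on-write)."""
        blk = self.free.pop(0)
        old = self.data.get(key)           # previous (block, value), if any
        self.data[key] = (blk, value)
        return blk, old

s = CowStore(8)
first, _ = s.write("metadata", "v1")
second, old = s.write("metadata", "v2")
print(first, second, old)   # the rewrite landed on a different block
```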
On Sep 7, 2009, at 3:49 AM, Sriram Narayanan wrote:
Folks:
I gave a presentation last weekend on how one could use Zones, ZFS and
Crossbow to recreate deployment scenarios on one's computer (to the
extent possible).
I've received the following question, and would like to ask the ZFS
Community
Looks like this bug:
http://bugs.opensolaris.org/view_bug.do?bug_id=6655927
Workaround: Don't run zpool status as root.
--chris
--
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
On Sep 7, 2009, at 1:32 AM, James Lever wrote:
On 07/09/2009, at 10:46 AM, Ross Walker wrote:
zpool is RAIDZ2 comprised of 10 * 15kRPM SAS drives behind an LSI
1078 w/ 512MB BBWC exposed as RAID0 LUNs (Dell MD1000 behind PERC
6/E) with 2x SSDs each partitioned as 10GB slog and 36GB remain
On Mon, Sep 7, 2009 at 2:01 AM, Karel Gardas wrote:
> What's your uptime? Usually it scrubs memory during idle time and
> usually waits quite a long time, nearly till the deadline, which is IIRC
> 12 hours. So do you have more than 12 hours of uptime?
> --
>
10:43am up 30 days 6:47, 1 user,
Hi, I noticed that the counters do not get updated if the amount of data
increases during a scrub/resilver, so if an application has written new data
during the scrub, the counter will not give a realistic estimate.
This happens with both resilver and scrub; could somebody fix this?
Yours
Markus Kovero
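The arithmetic behind the complaint, as a toy calculation (numbers invented): if the estimate divides blocks scanned by the total measured at scrub start, any data written mid-scrub makes the displayed figure overstate progress.

```python
# Toy illustration: a percent-done counter that snapshots the total once
# at scrub start goes stale when new data is written during the scrub.

def progress(scanned, total):
    return 100.0 * scanned / total

total_at_start = 1000          # GB in the pool when the scrub began
scanned = 800                  # GB scanned so far
actual_total = 1200            # 200 GB were written during the scrub

stale = progress(scanned, total_at_start)     # what the counter shows
realistic = progress(scanned, actual_total)   # the true figure
print(round(stale, 1), round(realistic, 1))
```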
I have a pool composed of a single raidz2 vdev, which is currently
degraded (missing a disk):
config:

        NAME        STATE     READ WRITE CKSUM
        pool        DEGRADED     0     0     0
          raidz2    DEGRADED     0     0     0
            c8d1    ONLINE       0     0     0
Final rant on this.
Managed to get the box re-installed and the performance issue has vanished,
so there is a performance bug in ZFS somewhere.
Not sure whether to file a bug report, as I can't provide any more
information now.
http://www.scmagazineuk.com/Apache-publishes-detailed-report-about-security-breach-with-aims-to-prevent-a-recurrence/article/148282/
-- Sriram
Folks:
I gave a presentation last weekend on how one could use Zones, ZFS and
Crossbow to recreate deployment scenarios on one's computer (to the
extent possible).
I've received the following question, and would like to ask the ZFS
Community for answers.
-- Sriram
What's your uptime? Usually it scrubs memory during idle time and usually
waits quite a long time, nearly till the deadline, which is IIRC 12 hours.
So do you have more than 12 hours of uptime?