Robert,
That's great info.
Do you know how to check the number of CORRECTED ECC errors in
OpenSolaris?
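My own guess, which I haven't verified, is that FMA's error telemetry records corrected events, so something along these lines might show them:

    fmdump -e     # list logged error events (ereports); corrected errors should show up here
    fmdump -eV    # same, verbose, with the full event payload
    fmstat        # per-module fault manager statistics

If someone knows the proper kstat or counter for this, please correct me.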
Thanks Tomas,
I got what I needed, and more. The header size is of particular interest.
On Fri, Feb 19, 2010 at 9:22 PM, Tomas Ögren wrote:
> On 19 February, 2010 - Christo Kutrovsky sent me these 0,5K bytes:
>
> > Hello,
> >
> > How do you tell how much of your l2arc
Hello,
How do you tell how much of your L2ARC is populated? I've been looking for a
while now and can't seem to find it.
Must be easy, as this blog entry shows it over time:
http://blogs.sun.com/brendan/entry/l2arc_screenshots
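The closest I've come up with on my own (unverified guesses, pool name is a placeholder) is:

    kstat -p zfs:0:arcstats:l2_size     # bytes currently held in the L2ARC
    kstat -p zfs:0:arcstats:l2_hits     # L2ARC hit counter
    kstat -p zfs:0:arcstats:l2_misses   # L2ARC miss counter
    zpool iostat -v tank                # the cache device row shows alloc/free

but I'd like to know if there's a more direct way.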
And follow up, can you tell how much of each data set is in the arc or
Dan,
Exactly what I meant. An allocation policy that helps distribute the data so
that when one disk (an entire mirror) is lost, some data remains fully
accessible, as opposed to not being able to access pieces all over the
storage pool.
Dan,
"loose" was a typo. I meant "lose". Interesting how a typo (write error) can
cause a lot of confusion on what exactly I mean :) Resulting in corrupted
interpretation.
Note that my idea/proposal is targeted for a growing number of home users. To
those, value for money usually is a much mo
Bob,
Using a separate pool would impose other limitations, such as not being able to
use more space than what's allocated to that pool. You could "add" space as
needed, but you can't remove (move) devices freely.
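To make the asymmetry concrete (pool and device names are placeholders):

    zpool add tank mirror c2t0d0 c2t1d0   # growing a pool is easy
    zpool remove tank mirror-1            # but remove only works for spares, cache and log devices today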
By using a shared pool with a hint of desired vdev/space allocation policy, you
co
Thanks for your feedback James, but that's not the direction I wanted this
discussion to go.
The goal was not to create a better solution for the enterprise.
The goal was to do "damage control" in a disk failure scenario involving data
loss. Back to the original question/idea.
Which
Just finished reading the following excellent post:
http://queue.acm.org/detail.cfm?id=1670144
And I started thinking about what would be the best long-term setup for a home
server, given a limited number of disk slots (say 10).
I considered something like simply doing a 2-way mirror. What are the chances fo
Robert,
That would be pretty cool, especially if it makes it into the 2010.02 release. I
hope there are no weird special cases that pop up from this improvement.
Regarding the workaround:
That's not my experience, unless it behaves differently on ZVOLs and datasets.
On ZVOLs it appears the setting ki
Ok, now that you explained it, it makes sense. Thanks for replying Daniel.
I feel better now :) Suddenly, that Gigabyte i-RAM is no longer a necessity but a
"nice to have" thing.
What would be really good to have is that per-dataset ZIL control in
2010.02. And perhaps add another mode "sync
Jeff, thanks for the link, looking forward to per-dataset control.
6280630 zil synchronicity
(http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6280630)
It's been open for 5 years now :) Looking forward to not compromising my entire
storage with disabled ZIL when I only need it on a few d
Has anyone seen soft corruption in NTFS iSCSI ZVOLs after a power loss?
I mean, there is no guarantee writes will be executed in order, so in theory
one could corrupt its NTFS file system.
Would best practice be to roll back to the last snapshot before making those
iSCSI volumes available again?
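I.e. something like (volume and snapshot names made up):

    zfs rollback tank/ntfsvol@last-known-good
    # or, if newer snapshots exist and you're willing to lose them:
    zfs rollback -r tank/ntfsvol@last-known-good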
Darren, thanks for the reply.
Still not clear to me though.
The only purpose of the slog is to serve the ZIL. There may be many "ZIL"s on a
single slog.
From Milek's blog:
logbias=latency - data written to the slog first
logbias=throughput - data written directly to the dataset.
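For reference, this is the property I mean (dataset name is just an example):

    zfs get logbias tank/iscsivol
    zfs set logbias=throughput tank/iscsivol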
Here's my problem. I h
Eric,
I am confused. What's the difference between:
- turning off slogs (via logbias)
vs
- turning off ZIL (via kernel tunable)
Isn't that similar, just one is more granular?
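In other words (names are illustrative, and correct me if I have the tunable wrong):

    # per dataset: keep the ZIL, just stop using the slog for that dataset
    zfs set logbias=throughput tank/somevol

    # global: disable the ZIL entirely via /etc/system (needs a reboot)
    set zfs:zil_disable = 1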
Eric, thanks for clarifying.
Could you confirm the release for #1? As "today" can be misleading depending
on the reader.
Is there a schedule/target for #2?
And just to confirm: the alternative of turning off the ZIL globally is
equivalent to always throwing away some committed data on a crash/r
Me too, I would like to know the answer.
I am considering Gigabyte's i-RAM for the ZIL, but I don't want to worry about
what happens if the battery dies after a system crash.
Thanks Bill, that looks relevant. Note however that this only happens with gzip
compression, but it's definitely something I've experienced.
I've decided to wait for the next full release before upgrading. I was just
wondering if the problem was resolved.
I'll migrate to COMSTAR soon, I hope the k
Thanks for your replies.
I am aware of the 512-byte concept, hence my selection of 8 KB (matched with
the 8 KB NTFS cluster size). Even a 20% reduction is still good; that's like
having 20% extra RAM (for cache).
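For context, the volume setup looks roughly like this (size is a placeholder), and compressratio is an easy way to see the actual reduction:

    zfs create -V 100G -o volblocksize=8k -o compression=gzip-9 tank/ntfsvol
    zfs get compressratio,volblocksize,compression tank/ntfsvol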
I haven't experimented with the default lzjb compression. If I want to compress
something usually I w
Hello All,
I am running NTFS over iSCSI on a ZFS ZVOL volume with compression=gzip-9 and
blocksize=8K. The server is a 2-core P4 3.0 GHz with 5 GB of RAM.
Whenever I start copying files from Windows onto the ZFS disk, after about
100-200 MB have been copied the server starts to experience freezes. I h
On Fri, Jan 29, 2010 at 4:04 PM, Richard Elling wrote:
> On Jan 29, 2010, at 12:01 PM, Christo Kutrovsky wrote:
> > Hello,
> >
> > I have PDSMi board (
> http://www.supermicro.com/products/motherboard/PD/E7230/PDSMi.cfm) with
> Intel® ICH7R SATA2 (3 Gbps) controller built-in.
Hello,
I have PDSMi board
(http://www.supermicro.com/products/motherboard/PD/E7230/PDSMi.cfm) with Intel®
ICH7R SATA2 (3 Gbps) controller built-in.
I suspect NCQ is not working, as I never see "actv" bigger than 1.0 in iostat,
even though I have requests in "wait".
How can I verify the statu
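My own first guess (unverified) would be to check whether the controller is even bound to the ahci driver, since the board has to be in AHCI mode rather than legacy IDE for NCQ to work at all:

    prtconf -D | grep -i ahci   # any hit means the ahci driver is attached
    iostat -xnz 1               # then watch actv under concurrent load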
Thanks for the info Dan,
I will test it out, but it won't be anytime soon. Waiting for that SSD.
In the case of a ZVOL with the following settings:
primarycache=off, secondarycache=all
How does the L2ARC get populated if the data never makes it to the ARC? Is this
even a valid configuration?
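For clarity, these are the settings I mean (volume name is an example; if I have the syntax right the value is "none" rather than "off"):

    zfs set primarycache=none tank/ntfsvol
    zfs set secondarycache=all tank/ntfsvol
    zfs get primarycache,secondarycache tank/ntfsvol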
The reason I ask is that I have iSCSI volumes for NTFS and I intend to use an
SSD for L2ARC. If something is
I have the exact same questions.
I am very interested in the answers to those.
I am interested in this as well.
My machine has 5 GB of RAM, and will soon have an 80 GB SSD device.
My free memory hovers around 750 MB, and the ARC around 3 GB.
This machine doesn't do anything other than iSCSI/CIFS, so I wouldn't mind
using an extra 500 MB for caching.
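If I understand correctly the knob is zfs_arc_max in /etc/system; a guess at what I'd try (value made up for this 5 GB box, please double-check before relying on it):

    * /etc/system: allow the ARC to grow to ~3.5 GB, roughly 500 MB more than it uses now
    set zfs:zfs_arc_max = 0xE0000000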
And this becomes especi
(but Oracle database server)
on the system, a db_cache size in the 70 GiB range would be perfectly
acceptable.
Don't forget to set pga_aggregate_target to something reasonable too, like 20
GiB.
Christo Kutrovsky
Senior DBA
The Pythian Group
I Blog at: www.pythian.com/news
Hello,
Any hints on how to re-propagate all ACL entries from a given parent directory
down?
For example, you set your inheritable ACLs the way you want by running multiple
commands like:
chmod A+<who:perms>:dir_inherit/file_inherit:allow PARENT_DIR
Then what command would you run to "add" these to all already created f
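The best I've come up with myself so far is brute force with find; a completely hypothetical example (user "joe" and full_set are placeholders -- match the ACE to whatever you actually set on PARENT_DIR):

    # re-apply to existing directories, keeping the inherit flags
    find PARENT_DIR -type d -exec chmod A+user:joe:full_set:dir_inherit/file_inherit:allow {} \;
    # re-apply to existing files (no inherit flags on plain files)
    find PARENT_DIR -type f -exec chmod A+user:joe:full_set:allow {} \;

Is there a cleaner way?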