On 7 May 2009, at 04:03, Adam Leventhal wrote:
>> After all this discussion, I am not sure if anyone adequately answered the
>> original poster's question as to whether a 2540 with SAS 15K drives would
>> provide substantial synchronous write throughput improvement when used as
>> a L2ARC device.
>
> I was under the impression that the L2ARC
On May 6, 2009, at 20:46, Bob Friesenhahn wrote:
After all this discussion, I am not sure if anyone adequately
answered the original poster's question as to whether a 2540 with
SAS 15K drives would provide substantial synchronous write
throughput improvement when used as a L2ARC device.
On Thu, 7 May 2009, Scott Lawson wrote:
Something nice about the STK2540 solution is that if the server system
dies, the STK2540s can quickly be swung over to another system via a quick
'zpool import'.
Sure, provided they have it attached to a Fibre Channel switch or
have a nice long fibre lead.
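A minimal sketch of that swing-over, assuming the standby host can see the same STK2540 LUNs and the pool is called 'tank' (both names are placeholders):

  # on the original host, if it is still up:
  zpool export tank

  # on the standby host attached to the same LUNs:
  zpool import          # list pools visible on the attached devices
  zpool import tank     # add -f if the original host died without exporting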
Bob Friesenhahn wrote:
On Thu, 7 May 2009, Scott Lawson wrote:
A STK2540 storage array with this configuration:
* Tray 1: Twelve (12) 146 GB @ 15K SAS HDDs.
* Tray 2: Twelve (12) 1 TB @ 7200 SATA HDDs.
Just thought I would point out that these are hardware-backed RAID
arrays. You m
> "re" == Richard Elling writes:
re> We forget because it is no longer a problem ;-)
bug number?
re> I think it is disingenuous to compare an enterprise-class RAID
re> array with the random collection of hardware on which Solaris
re> runs.
compare with a Sun-integrated Sola
Miles Nordin wrote:
"djm" == Darren J Moffat writes:
djm> If you only present a single lun to ZFS it may not be able to
djm> repair any detected errors.
And also the problems with pools becoming corrupt and unimportable,
especially when the SAN reboots or loses connectivity
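A small illustration of Darren's point, with placeholder device and dataset names: ZFS can only repair what it has redundancy for.

  # let ZFS mirror two array LUNs so it can self-heal checksum errors:
  zpool create tank mirror c2t0d0 c3t0d0

  # on a single-LUN pool, ditto blocks give some protection against
  # bad blocks (but not against losing the whole LUN):
  zfs set copies=2 tank/data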
On Thu, 7 May 2009, Scott Lawson wrote:
A STK2540 storage array with this configuration:
* Tray 1: Twelve (12) 146 GB @ 15K SAS HDDs.
* Tray 2: Twelve (12) 1 TB @ 7200 SATA HDDs.
Just thought I would point out that these are hardware-backed RAID
arrays. You might be better off using t
Miles Nordin wrote:
"re" == Richard Elling writes:
re> Note: in the Caiman world, this is only an issue for the first
re> BE. Later BEs can easily have other policies. -- richard
AIUI the later BEs are clones of the first, and not all blocks will
be rewritten, so it's still an issue, no?
Roger Solano wrote:
Hello,
Does it make any sense to use a bunch of 15K SAS drives as L2ARC
cache for several TBs of SATA disks?
For example:
A STK2540 storage array with this configuration:
* Tray 1: Twelve (12) 146 GB @ 15K SAS HDDs.
* Tray 2: Twelve (12) 1 TB @ 7200 SATA HDDs.
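A rough sketch of how such a layout could be wired up, with purely illustrative device names and assuming each drive is presented as its own LUN:

  # main pool on the 1 TB SATA drives:
  zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0

  # add the 15K SAS drives as L2ARC cache devices:
  zpool add tank cache c2t0d0 c2t1d0

  # note: cache (L2ARC) devices accelerate reads only; synchronous
  # writes are helped by a separate log device instead:
  zpool add tank log c2t2d0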
On Wed, May 6, 2009 at 2:54 AM, wrote:
>
>>On Tue, May 5, 2009 at 6:09 PM, Ellis, Mike wrote:
>>> PS: At one point the old JumpStart code was encumbered, and the
>>> community wasn't able to assist. I haven't looked at the next-gen
>>> jumpstart framework that was delivered as part of the OpenSolaris
Ben Rockwood's written a very useful util called arc_summary:
http://www.cuddletech.com/blog/pivot/entry.php?id=979
It's really good for looking at ARC usage (including memory usage).
You might be able to make some guesses based on "kstat -n zfs_file_data"
and "kstat -n zfs_file_data_buf". Look
> "re" == Richard Elling writes:
re> Note: in the Caiman world, this is only an issue for the first
re> BE. Later BEs can easily have other policies. -- richard
AIUI the later BEs are clones of the first, and not all blocks will
be rewritten, so it's still an issue, no?
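A quick way to see this behaviour, assuming an OpenSolaris system with beadm (the BE name is made up):

  # creating a BE clones the active one; existing blocks are shared, so
  # properties set later only apply to blocks written afterwards:
  beadm create testBE
  zfs list -r -t snapshot rpool/ROOT   # shows the snapshot the clone hangs off
  zfs get -r compression rpool/ROOT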
On Wed, 6 May 2009, Richard Elling wrote:
Memory is meant to be used. 96% RAM use is good since it represents an
effective use of your investment.
Actually, I think a percentage of RAM is a bogus metric to measure.
For example, on a 2 TByte system, leaving 4% of RAM unused would mean wasting 80 GBytes.
Perhaps you
Bob Friesenhahn wrote:
On Wed, 6 May 2009, Troy Nancarrow (MEL) wrote:
Please forgive me if my searching-fu has failed me in this case, but
I've been unable to find any information on how people are going about
monitoring and alerting regarding memory usage on Solaris hosts using
ZFS.
The problem is not that the ZFS
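Two commands that can feed such monitoring, assuming a build recent enough that ::memstat breaks out ZFS file data:

  # kernel memory breakdown, including the ZFS file data bucket:
  echo ::memstat | mdb -k

  # current ARC size in bytes, easy to parse from a monitoring script:
  kstat -p zfs:0:arcstats:size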
On Wed, May 6, 2009 at 11:14 AM, Rich Teer wrote:
> On Wed, 6 May 2009, Richard Elling wrote:
>
>> popular interactive installers much more simplified. I agree that
>> interactive installation needs to remain as simple as possible.
>
> How about offering a choice at installation time: "Custom or
This sounds like a good idea to me, but it should be brought up
on the caiman-disc...@opensolaris.org mailing list, since this
is not just, or even primarily, a zfs issue.
Lori
Rich Teer wrote:
On Wed, 6 May 2009, Richard Elling wrote:
popular interactive installers much more simplified.
On Wed, 6 May 2009, Troy Nancarrow (MEL) wrote:
Please forgive me if my searching-fu has failed me in this case, but
I've been unable to find any information on how people are going about
monitoring and alerting regarding memory usage on Solaris hosts using
ZFS.
The problem is not that the ZFS
Fajar A. Nugraha wrote:
On Wed, May 6, 2009 at 1:08 PM, Troy Nancarrow (MEL)
wrote:
So how are others monitoring memory usage on ZFS servers?
I think you can get the amount of memory zfs arc uses with arcstat.pl.
http://www.solarisinternals.com/wiki/index.php/Arcstat
arcstat is a
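Typical usage of the arcstat.pl script from that page, for reference (the argument is the sampling interval in seconds):

  # print ARC size, target size, and hit/miss rates every 5 seconds:
  ./arcstat.pl 5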
On Wed, 6 May 2009, Richard Elling wrote:
> popular interactive installers much more simplified. I agree that
> interactive installation needs to remain as simple as possible.
How about offering a choice at installation time: "Custom or default"?
Those that don't want/need the interactive flex
Roger Solano wrote:
Hello,
Does it make any sense to use a bunch of 15K SAS drives as L2ARC
cache for several TBs of SATA disks?
For example:
A STK2540 storage array with this configuration:
* Tray 1: Twelve (12) 146 GB @ 15K SAS HDDs.
Alternatively, you can purchase non-Sun 500 GBy
Ellis, Mike wrote:
How about a generic "zfs options" field in the JumpStart profile?
(essentially an area where options can be specified that are all applied
to the boot pool (with provisions to deal with a broken-out /var))
We had this discussion a while back and, IIRC, it was expected that
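For context, the ZFS-root keywords a JumpStart profile already accepts look roughly like this (the disk slices, software cluster, and BE name are placeholders); a generic "zfs options" field would presumably extend this:

  install_type  initial_install
  cluster       SUNWCXall
  # pool <poolname> <poolsize> <swapsize> <dumpsize> <vdevs>
  pool          rpool auto auto auto mirror c0t0d0s0 c0t1d0s0
  bootenv       installbe bename firstBE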
Troy Nancarrow (MEL) wrote:
> Hi,
>
> Please forgive me if my searching-fu has failed me in this case, but
> I've been unable to find any information on how people are going about
> monitoring and alerting regarding memory usage on Solaris hosts using ZFS.
>
> The problem is not that the ZFS
On Wed, May 6, 2009 at 1:08 PM, Troy Nancarrow (MEL)
wrote:
> So how are others monitoring memory usage on ZFS servers?
I think you can get the amount of memory zfs arc uses with arcstat.pl.
http://www.solarisinternals.com/wiki/index.php/Arcstat
IMHO it's probably best to set a limit on ARC size
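One common way to do that on Solaris 10 / OpenSolaris of that era is the /etc/system tunable below (value in bytes, takes effect after a reboot); the 4 GB figure is only an example:

  * /etc/system: cap the ARC at 4 GB
  set zfs:zfs_arc_max = 0x100000000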
>On Tue, May 5, 2009 at 6:09 PM, Ellis, Mike wrote:
>> PS: At one point the old JumpStart code was encumbered, and the
>> community wasn't able to assist. I haven't looked at the next-gen
>> jumpstart framework that was delivered as part of the OpenSolaris SPARC
>> preview. Can anyone provide any