> > Does anyone know where I can find exact numbers for the RAM cost?
>
> Doesn't it also vary by RAID type in use? A calculator would be useful
> but may have to account for each version of zpool in the wild given
> that the source is available...
If we can get the numbers for a given zpool version...
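[Editorial note: a minimal sketch of such a calculator in Python, assuming the rules of thumb quoted later in this thread (arc max = RAM - 1GB, metadata max = arc_c_max / 4, roughly 320 bytes per DDT entry). The constants are assumptions and will vary by zpool version.]

# Hypothetical dedup RAM calculator based on this thread's rules of thumb.
# The 320B DDT entry size and the ARC limits are assumptions, not exact figures.
GIB = 1024**3

def ram_needed(pool_bytes, recordsize=128 * 1024, ddt_entry_bytes=320):
    entries = pool_bytes / recordsize        # roughly one DDT entry per block
    ddt_bytes = entries * ddt_entry_bytes    # the DDT must fit in ARC metadata
    arc_needed = ddt_bytes * 4               # metadata max is arc_c_max / 4
    return arc_needed + GIB                  # arc max is RAM - 1GB

for tb in (1, 8):
    print("%dTB pool: ~%.0fGB RAM" % (tb, ram_needed(tb * 1024**4) / GIB))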
On Apr 25, 2011, at 6:11 AM, Roy Sigurd Karlsbakk wrote:
> Does anyone know where I can find exact numbers for the RAM cost?
Doesn't it also vary by RAID type in use? A calculator would be useful but may
have to account for each version of zpool in the wild given that the source is
available...
> (2) L2arc is not simply a slower extension of L1arc as you seem to be
> thinking. Every entry in the L2arc requires an entry in the L1arc. I
> don't know what the multiplier ratio is, but I hear something between
> 10x and 20x. So if you have for example 20G of L2arc, that would
> consume something like 1-2G of your ram...
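[Editorial note: taking the quoted, admittedly hearsay, 10x-20x ratio at face value, the ARC cost of an L2ARC device works out as below.]

# L2ARC headers live in RAM; per the post above, L2ARC can be roughly
# 10x-20x the ARC it consumes. The ratio is hearsay, not a measured constant.
GIB = 1024**3
l2arc = 20 * GIB
print("20G L2ARC costs roughly %.0fG-%.0fG of ram"
      % (l2arc / 20 / GIB, l2arc / 10 / GIB))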
> From: Roy Sigurd Karlsbakk [mailto:r...@karlsbakk.net]
>
> > With the 8T storage... Add 8-16G ram to your server on top of your
> > baseline.
>
> I never got around to filling it. Already at about 1.2TB fill, it was dead
> slow, and for that the 8GB RAM should suffice.
>
> > I'm not sure how...
> In theory, dedup should accelerate performance when data is
> duplicated.
>
> In the specs you quoted above, you have not nearly enough ram. I would
> say you should consider 4G or 8G to be baseline if you have no dedup
> and no l2arc. But when you add the dedup and the l2arc, your ram
> requirements...
On Apr 23, 2011, at 12:29 PM, Gary Driggs wrote:
>> Unless you buy the fishworks based storage products, which I believe
>> include the dedup feature and are sold for production environments. I just
>> don't remember if the dedup feature is labelled experimental or similar.
>
> The unified storage...
On 04/23/11 06:01 AM, Edward Ned Harvey wrote:
>> From: Alan Coopersmith [mailto:alan.coopersm...@oracle.com]
>>
>> While I'm fairly sure Oracle disagrees with Mr. Harvey's claim that it's
>> not considered production worthy,
>
> Here's what I meant when I said that:
> The current production release is still Solaris 10, which does not include
> dedup yet...
On Apr 23, 2011, at 8:21 AM, Gregory Youngblood wrote:
> Unless you buy the fishworks based storage products, which I believe include
> the dedup feature and are sold for production environments. I just don't
> remember if the dedup feature is labelled experimental or similar.
The unified storage...
Unless you buy the fishworks based storage products, which I believe include
the dedup feature and are sold for production environments. I just don't
remember if the dedup feature is labelled experimental or similar.
Edward Ned Harvey wrote:
>> From: Alan Coopersmith...
On 23.04.2011, at 16:10, Edward Ned Harvey wrote:
>> From: Toomas Soome [mailto:toomas.so...@mls.ee]
>>
>> well, do a bit of math. if I'm correct, with 320B per DDT entry, 1.75GB of
>> ram can fit 5.8M entries; 1TB of data, assuming 128k recordsize, would
>> produce 8M entries. that's with the default metadata limit. unless I did
>> my calculations wrong, that will explain the slowdown.
> From: Tomas Bodzar [mailto:tomas.bod...@gmail.com]
>
> Isn't it too much?
"Too much ram" is an oxymoron.
Always add more ram. And then double the ram. Or else don't complain about
performance. ;-)
On Sat, Apr 23, 2011 at 3:06 PM, Edward Ned Harvey wrote:
>> From: Roy Sigurd Karlsbakk [mailto:r...@karlsbakk.net]
>>
>> That's theory. In practice, even with sufficient RAM/L2ARC and some amount
>> of SLOG, dedup slows writes down to a minimum. My test was done with 8TB
>> net storage, 8GB RAM, ...
> From: Toomas Soome [mailto:toomas.so...@mls.ee]
>
> well, do a bit of math. if I'm correct, with 320B per DDT entry, 1.75GB of
> ram can fit 5.8M entries; 1TB of data, assuming 128k recordsize, would
> produce 8M entries. that's with the default metadata limit. unless I did
> my calculations wrong, that will explain the slowdown.
> From: Roy Sigurd Karlsbakk [mailto:r...@karlsbakk.net]
>
> That's theory. In practice, even with sufficient RAM/L2ARC and some amount
> of SLOG, dedup slows writes down to a minimum. My test was done with 8TB
> net storage, 8GB RAM, and two 80GB x25-M SSDs divided into 2x4GB SLOG
> (mirrored) and the rest for L2ARC...
> From: Alan Coopersmith [mailto:alan.coopersm...@oracle.com]
>
> While I'm fairly sure Oracle disagrees with Mr. Harvey's claim that it's
> not considered production worthy,
Here's what I meant when I said that:
The current production release is still Solaris 10, which does not include
dedup yet...
well, do a bit of math. if I'm correct, with 320B per DDT entry, 1.75GB of
ram can fit 5.8M entries; 1TB of data, assuming 128k recordsize, would
produce 8M entries. that's with the default metadata limit. unless I did my
calculations wrong, that will explain the slowdown.
On 22.04.2011, at 21:19, Roy Sigurd Karlsbakk wrote: ...
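[Editorial note: Toomas's figures check out; a two-line sanity check under the same assumptions (320B per DDT entry, 128k recordsize):]

# 1.75GB of ARC metadata vs. the DDT entries a 1TB pool needs.
fits = 1.75 * 1024**3 / 320          # ~5.9M entries fit in the metadata limit
needed = 1024**4 / (128 * 1024)      # ~8.4M entries for 1TB at 128k records
print("%.1fM entries fit, %.1fM needed" % (fits / 1e6, needed / 1e6))

Once the table no longer fits, DDT lookups start missing ARC and hitting disk, which would explain the slowdown described below.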
That's theory. In practice, even with sufficient RAM/L2ARC and some amount of
SLOG, dedup slows writes down to a minimum. My test was done with 8TB net
storage, 8GB RAM, and two 80GB x25-M SSDs divided into 2x4GB SLOG (mirrored)
and the rest for L2ARC. Application tested was Bacula with the OI b...
On 04/22/11 07:43 AM, Gary Driggs wrote:
> On Apr 22, 2011, at 5:22 AM, Edward Ned Harvey wrote:
>> Even in solaris 11 express, which has a significantly newer version, dedup
>> isn't considered production worthy. So I would advise you to live without
>> it for now.
>
> Then what do you suppose Oracle is using for dedup in their ZFS based
> unified storage...
On 04/22/11 08:09 AM, Sriram Narayanan wrote:
> At least one other project at opensolaris.org continues to receive
> updates. This is the IPS project which I closely track. There may be
> others too - I've not checked.
The Caiman installers do as well, as do the projects to build & package
mostly-...
On 04/22/11 08:03 AM, Jerry Kemp wrote:
> Can someone "in the know" comment further on this?
Sorry, but no, we can't comment on that.
--
-Alan Coopersmith- alan.coopersm...@oracle.com
Oracle Solaris Platform Engineering: X Window System
On Fri, Apr 22, 2011 at 8:39 PM, Sriram Narayanan wrote:
> At least one other project at opensolaris.org continues to receive
> updates. This is the IPS project which I closely track. There may be
> others too - I've not checked.
>
I just remembered that I also track the caiman-installer project.
It appears that I misunderstood the memo.
Thanks for setting me straight.
Jerry
On 04/22/11 10:09, Sriram Narayanan wrote:
> There was a "leaked" Oracle memo which stated that Oracle would stop
> commits, and would make source code available only when the final
> binary release of Solaris 11...
The information about releasing the source was part of a leaked memo.
Oracle has made no official commitment to such a thing, and the author
of the memo is no longer at Oracle.
No Oracle employee could reveal that information, even if they had it,
without risking both their job and a lawsuit.
On Fri, Apr ...
Jerry Kemp writes:
> That was my understanding also. I thought that only the binary/distro
> rollouts were stopping.
>
> Can someone "in the know" comment further on this?
>
> On 04/22/11 08:23, Ben Taylor wrote:
> > I thought Oracle was going to continue to release source snapshots after
> > a binary release had been...
On Fri, Apr 22, 2011 at 8:33 PM, Jerry Kemp wrote:
> That was my understanding also. I thought that only the binary/distro
> rollouts were stopping.
>
> Can someone "in the know" comment further on this?
>
This has been discussed a lot last year and this year.
There was a "leaked" Oracle memo which stated that Oracle would stop
commits, and would make source code available only when the final
binary release of Solaris 11...
That was my understanding also. I thought that only the binary/distro
rollouts were stopping.
Can someone "in the know" comment further on this?
Jerry
On 04/22/11 08:23, Ben Taylor wrote:
>
> I thought Oracle was going to continue to release source snapshots after
> a binary release had been...
On Apr 22, 2011, at 5:22 AM, Edward Ned Harvey wrote:
> Even in solaris 11 express, which has a significantly newer version, dedup
> isn't considered production worthy. So I would advise you to live without
> it for now.
Then what do you suppose Oracle is using for dedup in their ZFS based unified
storage...
On Fri, Apr 22, 2011 at 8:22 AM, Edward Ned Harvey wrote:
>> From: James Kohout [mailto:jkoh...@yahoo.com]
>>
>> So looking to upgrade to oi148 to be able to enable deduplication. So
>> does anyone have any experience running a ZFS RaidZ2 pool with
>> deduplication in a production environment? Is ZFS deduplication in oi148
>> considered stable/production ready? I...
> From: James Kohout [mailto:jkoh...@yahoo.com]
>
> So looking to upgrade to oi148 to be able to enable deduplication. So
> does anyone have any experience running a ZFS RaidZ2 pool with
> deduplication in a production environment? Is ZFS deduplication in oi148
> considered stable/production ready? I...
On 21 Apr 2011, at 23:07, Toomas Soome wrote:
>
> the basic math behind the scenes is as follows (and not entirely determined):
>
> 1. DDT data is kept in the metadata part of ARC;
> 2. metadata default max is arc_c_max / 4.
>
> note that you can raise that limit.
>
> 3. arc max is RAM - 1GB.
>
the basic math behind the scenes is as follows (and not entirely determined):
1. DDT data is kept in the metadata part of ARC;
2. metadata default max is arc_c_max / 4.
note that you can raise that limit.
3. arc max is RAM - 1GB.
so, if you have 8GB of ram, your arc max is 7GB and max metadata is 1.75GB.
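[Editorial note: plugging in the 8GB example as a quick sketch, under the same assumed defaults:]

GIB = 1024**3
ram = 8 * GIB
arc_max = ram - GIB          # arc max is RAM - 1GB
meta_max = arc_max / 4       # metadata default max is arc_c_max / 4
print("arc max %.0fG, metadata max %.2fG" % (arc_max / GIB, meta_max / GIB))
# -> arc max 7G, metadata max 1.75G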
On Thu, Apr 21 at 14:12, James Kohout wrote:
All,
Been running opensolaris 134 with a 9T RaidZ2 array as a backup server
in a production environment. Whenever I tried to turn on ZFS
deduplication I always had crashes and other issues, which I most likely
attributed to the known ZFS dedup bugs in 134...
All,
Been running opensolaris 134 with a 9T RaidZ2 array as a backup server
in a production environment. Whenever I tried to turn on ZFS
deduplication I always had crashes and other issues, which I most likely
attributed to the known ZFS dedup bugs in 134. Once I rebuilt the pool
without...