> In other words, for random access across a working set larger (by say X%)
> than the SSD-backed L2 ARC, the cache is useless. This should asymptotically
> approach truth as X grows and experience shows that X=200% is where it's
> about 99% true.
>
Ummm, before we throw around phrases like
> Actually, it does seem to work quite
> well when you use a read optimized
> SSD for the L2ARC. In that case,
> "random" read workloads have very
> fast access, once the cache is warm.
One would expect so, yes. But the usefulness of this is limited to the cases
where the entire working set will fit in the L2ARC.
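For what it's worth, attaching a read-optimized SSD as L2ARC is just a cache
vdev; a minimal sketch (pool and device names below are placeholders):

  # Add an SSD to an existing pool as an L2ARC cache device:
  zpool add tank cache c2t0d0
  # Watch the cache device fill and begin serving reads as it warms up:
  zpool iostat -v tank 5

Once the SSD has warmed up, random reads that miss the ARC but hit the L2ARC
come back at SSD latency rather than disk latency.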
Chris Greer wrote:
> Right now we are not using Oracle...we are using iorate, so we don't have
> separate logs. When the testing was with Oracle the logs were separate.
> This test represents the 13 data LUNs that we had during those tests.
>
> The reason it wasn't striped with vxvm is that the original comparison test was ...
Luke Lonergan wrote:
> ZFS works marvelously well for data warehouse and analytic DBs. For lots of
> small updates scattered across the breadth of the persistent working set,
> it's not going to work well IMO.
>
Actually, it does seem to work quite well when you use a read optimized
SSD for the L2ARC. In that case, "random" read workloads have very fast
access, once the cache is warm.
On Sun, 23 Nov 2008, Tamer Embaby wrote:
>> That is the trade-off between "always consistent" and
>> "fast".
>>
> Well, does that mean ZFS is not best suited for database engines as the
> underlying filesystem? With databases the data will always be fragmented,
> hence slow performance?
Assuming that the
ZFS works marvelously well for data warehouse and analytic DBs. For lots of
small updates scattered across the breadth of the persistent working set, it's
not going to work well IMO.
Note that we're using ZFS to host databases as large as 10,000 TB - that's 10 PB
(!!). Solaris 10 U5 on X4540.
Kees Nuyt wrote:
> My explanation would be: Whenever a block within a file
> changes, zfs has to write it at another location ("copy on
> write"), so the previous version isn't immediately lost.
>
> Zfs will try to keep the new version of the block close to
> the original one, but after several changes the blocks end up scattered
> across the disk, i.e. the file becomes fragmented.
On Sat, 22 Nov 2008, Chris Greer wrote:
> zfs with the datafiles recreated after the recordsize change was 3079 IOPS
> So now we are at least in the ballpark.
ZFS is optimized for fast bulk data storage and data integrity and not
so much for transactions. It seems that adding a non-volatile hardware
write cache would help here.
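The recordsize tuning mentioned above is a one-line property change; a minimal
sketch, assuming an 8 KB Oracle block size and a hypothetical tank/oradata
dataset:

  # Match the dataset recordsize to the database block size. The property only
  # affects files written afterwards, which is why the datafiles were recreated.
  zfs set recordsize=8k tank/oradata
  zfs get recordsize tank/oradata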
On Fri, 21 Nov 2008, zerk wrote:
> I have OpenSolaris on an AMD64 Asus-A8NE with 2 GB of RAM and 4x320 GB SATA
> drives in raidz1.
>
> With dd, I can write at close to the disks' maximum speed of 80 MB/s each, for
> a total of 250 MB/s if I have no X session at all (only a console tty).
>
> But as soon as I start an X session, the write throughput drops.
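For reference, a sequential write test of that sort is usually just dd into a
file on the pool; the pool name and size below are assumptions:

  # Write 8 GB of zeroes; use a file much larger than RAM so the result
  # reflects the disks rather than the ARC.
  dd if=/dev/zero of=/tank/ddtest bs=1024k count=8192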
Right now we are not using Oracle...we are using iorate, so we don't have
separate logs. When the testing was with Oracle the logs were separate. This
test represents the 13 data LUNs that we had during those tests.
The reason it wasn't striped with vxvm is that the original comparison test was ...
> For those interested, we are using the iorate command from EMC for
> the benchmark. For the different tests, we have 13 LUNs presented.
> Each one is its own volume and filesystem, with a single file on those
> filesystems. We are running 13 iorate processes in parallel (there
> is no CPU bottleneck).
Are you putting your archive and redo logs on a separate zpool (not
just a different zfs filesystem in the same pool as your data files)?
Are you using direct I/O at all in any of the config scenarios you
listed?
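For reference, the separate-pool layout I'm describing would look something
like this (pool and device names are placeholders):

  # Keep redo/archive logs in their own pool so their latency-sensitive,
  # mostly sequential writes don't compete with datafile I/O:
  zpool create logpool mirror c4t0d0 c4t1d0
  zfs create logpool/redo
  zfs create logpool/arch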
/dale
On Nov 22, 2008, at 12:41 PM, Chris Greer wrote:
> So to give a little background on this, we have been benchmarking Oracle RAC
> on Linux vs. Oracle on Solaris. ...
My Supermicro H8DA3-2's onboard 1068E SAS chip isn't recognized in OpenSolaris,
and I'd like to keep this particular system "all Supermicro," so the L8i it is.
I know there have been issues with Supermicro-branded 1068E controllers, so
just wanted to verify that the stock mpt driver supports it.
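For what it's worth, the checks I'd run on the live box (a sketch; the output
obviously depends on the system):

  # Which PCI IDs does the stock mpt driver claim?
  grep mpt /etc/driver_aliases
  # Did a driver actually attach to the SAS controller?
  prtconf -D | grep -i mpt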
zfs with the datafiles recreated after the recordsize change was 3079 IOPS
So now we are at least in the ballpark.
that should be set zfs:zfs_nocacheflush=1
in the post above...that was my typo in the post.
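For the archives, the full tuning is an /etc/system entry, for example (takes
effect at the next reboot, and is only appropriate when the array's write
cache is non-volatile, e.g. a battery-backed array cache like the DMX):

  # Tell ZFS not to issue cache-flush requests to the array:
  echo 'set zfs:zfs_nocacheflush=1' >> /etc/system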
So to give a little background on this, we have been benchmarking Oracle RAC on
Linux vs. Oracle on Solaris. In the Solaris test, we are using vxvm and vxfs.
We noticed that the same Oracle TPC benchmark at roughly the same transaction
rate was causing twice as many disk I/Os to the back-end DMX array.
Great, it worked!
mlockall returned -1, probably because the system wasn't able to allocate blocks
of 512 MB contiguously... but using memset on each block committed the memory
and I saw the same ZFS perf problem as with X & VirtualBox.
Thanks a lot for the hint :)
Now I guess I'll have to buy more RAM.
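One way to see that effect directly (a sketch; this is the standard ZFS ARC
kstat) is to watch the ARC size shrink while the memory-hungry process runs:

  # Print the current ARC size in bytes, once per second:
  kstat -p zfs:0:arcstats:size 1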
>Hi,
>
>thanks for the reply
>
>I thought that was it too, so I wrote a C program that allocated 1 GB of
>RAM and did nothing with it. So the system was left with only 1 GB for ZFS
>and I saw absolutely no performance hit.
Lock it in memory and then try again; if you allocate the memory but never
touch it, the pages never actually get backed by physical RAM, so ZFS can
still use them.
Hi Pawel,
Yes, it did change in the last few months.
On older versions of Solaris the default for 'zfs list' was to show all
filesystems AND snapshots.
This got to be a real pain when you had lots of snapshots, as you couldn't
easily see what was what, so it was changed: the default for 'zfs list' now
shows only filesystems and volumes.
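With the new default, snapshots have to be requested explicitly, for example:

  # Show only snapshots:
  zfs list -t snapshot
  # Or show filesystems and snapshots together:
  zfs list -t filesystem,snapshot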
Hi,
thanks for the reply
I thought that was it too, so I wrote a C program that allocated 1 GB of RAM
and did nothing with it. So the system was left with only 1 GB for ZFS and I saw
absolutely no performance hit.
I tried the same thing for the CPU by running a loop taking 100% of one of
the cores ...
On 11/22/08, Jens Elkner <[EMAIL PROTECTED]> wrote:
> On Fri, Nov 21, 2008 at 03:42:17PM -0800, David Pacheco wrote:
> > Pawel Tecza wrote:
> > > But I still don't understand why `zfs list` doesn't display snapshots
> > > by default. I have seen it on the net many times in examples of zfs usage.
>