I would add that you didn't mention what optimizations, if any, you made
with vxfs. Specifically, a default vxfs file system will have a file
system block size of 1k, 2k, 4k, or 8k, depending on the file system
size. Since you are using Oracle, you should always set the file system
block size to 8k.
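As a rough sketch (the disk group and volume names below are made up, not
from the original posting), you can check an existing file system's block
size and create a new one with an 8k block size along these lines:

  # Show the superblock info, including bsize, of an existing vxfs file system
  fstyp -v /dev/vx/rdsk/oradg/oravol | grep bsize

  # Recreate the file system with an 8k block size (this destroys existing data)
  mkfs -F vxfs -o bsize=8192 /dev/vx/rdsk/oradg/oravol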
In the kernel threadlist (guds), I see txg_sync_thread is doing spa_sync() ->
dsl_pool_sync(), which is responsible for writing all dirty datasets. Due to NFS
operation, there may be a large number of synchronous writes, resulting in
frequent spa_sync(). The customer noticed that even disabling the ZFS ZIL
didn't show a
The reason rfs3_{write|create} is waiting longer in txg_wait_open() is that
there is a syncing txg taking longer to complete.
You may want to trace and track the syncing txg to get the reason for the
delay.
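For example, a minimal DTrace sketch (assuming the fbt probes for spa_sync
are available on this kernel) to see how long each sync takes:

  # Time each spa_sync() call; the latency distribution prints on Ctrl-C
  dtrace -n '
  fbt:zfs:spa_sync:entry { self->ts = timestamp; }
  fbt:zfs:spa_sync:return /self->ts/
  {
          @["spa_sync (ns)"] = quantize(timestamp - self->ts);
          self->ts = 0;
  }'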
--
Prabahar.
On Sun, Nov 23, 2008 at 05:51:44PM -0800, Amer Ather wrote:
> IHAC who is seei
IHAC who is seeing very slow NFS transactions over ZFS. rfs3_write(),
rfs3_create() and others are taking on the order of 17-20 seconds to
complete. Profiling these transactions shows most of the time is spent
in txg_wait_open() - waiting for the transaction group to open.
We tried "zfs_nocacheflush"
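For reference, a sketch of the usual ways that tunable is toggled on Solaris
(the mdb write affects only the running kernel and does not persist across
reboots):

  # Disable ZFS cache flushes persistently (takes effect after reboot)
  echo "set zfs:zfs_nocacheflush = 1" >> /etc/system

  # Or flip it on the running kernel with mdb (0t1 = decimal 1)
  echo "zfs_nocacheflush/W0t1" | mdb -kw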
On Sun, Nov 23, 2008 at 4:55 PM, James C. McPherson <[EMAIL PROTECTED]> wrote:
> On Sun, 23 Nov 2008 06:13:51 -0800 (PST)
> Ross <[EMAIL PROTECTED]> wrote:
>
> > I'd also like to know how easy it is to identify drives when you use
> > this card? Is it easy to know which is which after you've had
On Sun, 23 Nov 2008 06:13:51 -0800 (PST)
Ross <[EMAIL PROTECTED]> wrote:
> I'd also like to know how easy it is to identify drives when you use
> this card. Is it easy to know which is which after you've had a few
> failures & swapped drives around?
Hi Ross,
in general, it's just as easy to ident
On Sun, 23 Nov 2008, Bob Netherton wrote:
>> This argument can be proven by basic statistics without need to resort
>> to actual testing.
>
> Mathematical proof <> reality of how things end up getting used.
Right. That is a good thing since otherwise the technologies that Sun
has recently deplo
> This argument can be proven by basic statistics without need to resort
> to actual testing.
Mathematical proof <> reality of how things end up getting used.
> Luckily, most data access is not completely random in nature.
Which was my point exactly. I've never seen a purely mathematical
mod
I watched both the youtube video
http://www.youtube.com/watch?v=CN6iDzesEs0
and the one on http://www.opensolaris.com/, "ZFS – A Smashing Hit".
In the first one it is obvious that the app stops working when they smash the
drives; they have to physically detach the drive before the array
reconstru
On Sunday, 2008-11-23, at 18:14 +0100, Paweł Tęcza wrote:
> On Saturday, 2008-11-22, at 07:06 -0800, Simon Breden wrote:
> > Hi Pawel,
> >
> > Yes, it did change in the last few months.
>
> Hello Simon,
>
> Thanks a lot for your reply! I didn't know that, because I'm not such a
> long-time O
On Saturday, 2008-11-22, at 07:06 -0800, Simon Breden wrote:
> Hi Pawel,
>
> Yes, it did change in the last few months.
Hello Simon,
Thanks a lot for your reply! I didn't know that, because I'm not such a
long-time OpenSolaris user ;) I was confused, because recently I've
seen Roman Strobl's ZFS b
On Sat, 22 Nov 2008, Bob Netherton wrote:
>
>> In other words, for random access across a working set larger (by
>> say X%) than the SSD-backed L2 ARC, the cache is useless. This
>> should asymptotically approach truth as X grows and experience
>> shows that X=200% is where it's about 99% true
On Sunday, 2008-11-23, at 13:41 +0530, Sanjeev Bagewadi wrote:
> > The incomplete one - where is the '-t all' option? It's really annoying,
> > error-prone, and time-consuming to type stories on the command line ...
> > Does anybody remember the "keep it small and simple" thing?
> >
> This ch
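For anyone hitting the same thing, a quick sketch of the usual workarounds
(the pool name is just an example, and the listsnapshots property is only
present on newer zpool versions):

  # List snapshots explicitly
  zfs list -t snapshot

  # Make 'zfs list' include snapshots again by default for this pool
  zpool set listsnapshots=on tank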
On Sat, Nov 22, 2008 at 11:41 AM, Chris Greer <[EMAIL PROTECTED]> wrote:
> vxvm with vxfs we achieved 2387 IOPS
In this combination you should be using ODM, which comes as part of
the Storage Foundation for Oracle or Storage Foundation for Oracle RAC
products. It makes the database files on vxfs
I'd also like to know how easy it is to identify drives when you use this card.
Is it easy to know which is which after you've had a few failures & swapped
drives around?
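As an illustration of one common approach (not specific to this card), the
serial numbers and attachment points reported by the OS can be cross-checked
against the labels on the physical drives:

  # List attached disks and their controller/attachment points
  cfgadm -al

  # Show vendor, product, and serial number for each disk
  iostat -En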
> Great, it worked,
>
> mlockall returned -1 probably because the system wasn't able to allocate
> blocks of 512M contiguously... but using memset for each block committed
> the memory and I saw the same zfs perf problem as with X & VBox.
You need to have the proper privilege. Ordinary users can
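As a rough sketch of what that means on Solaris (the user name below is just
an example), mlockall() requires the proc_lock_memory privilege:

  # Check which privileges the current shell has
  ppriv $$

  # Grant the memory-locking privilege to a user (takes effect at next login)
  usermod -K defaultpriv=basic,proc_lock_memory oracle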
Jel,
Jens Elkner wrote:
> On Fri, Nov 21, 2008 at 03:42:17PM -0800, David Pacheco wrote:
>
>> Pawel Tecza wrote:
>>
>>> But I still don't understand why `zfs list` doesn't display snapshots
>>> by default. I've seen it on the Net many times in examples of zfs usage.
>>>
>> This was