After working with Sanjeev, and putting a number of timing statements
throughout the code, it turns out that file writes are NOT the bottleneck, as
one might assume.
It is actually reading the file into a byte buffer that is the culprit.
Specifically, this Java statement:
byteBuffer = file.get
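For reference, here is a minimal, self-contained sketch of the kind of timed
read I mean; the path and class name are made up, and it assumes a plain
FileChannel read into a heap ByteBuffer rather than whatever the library does
internally:

import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

public class ReadTiming {
    public static void main(String[] args) throws IOException {
        // Hypothetical path; the real files are the ~65k binaries discussed below.
        String path = args.length > 0 ? args[0] : "/pool1/data/sample.rrd";

        long start = System.nanoTime();
        RandomAccessFile file = new RandomAccessFile(path, "r");
        try {
            FileChannel channel = file.getChannel();
            ByteBuffer byteBuffer = ByteBuffer.allocate((int) channel.size());
            // Keep reading until the whole file is in the buffer (or EOF).
            while (byteBuffer.hasRemaining() && channel.read(byteBuffer) >= 0) {
                // read() advances the buffer position; nothing else to do
            }
        } finally {
            file.close();
        }
        long elapsedUs = (System.nanoTime() - start) / 1000L;
        System.out.println("read took " + elapsedUs + " us");
    }
}

Timing that loop in isolation is the kind of measurement that pointed at the
reads rather than the writes.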
I ran this dtrace script and got no output. Any ideas?
It does. The file size is limited to the original creation size, which is 65k
for files with 1 data sample.
Unfortunately, I have zero experience with dtrace and only a little with truss.
I'm relying on the dtrace scripts from people on this thread to get by for now!
Hi Daniel. I take it you are an RRD4J user?
I didn't see anything in the "performance issues" area that would help. Please
let me know if I'm missing something:
- The default of RRD4J is to use the NIO backend, so that is already in place
(a sketch of selecting it explicitly is below).
- Pooling won't help because there is almost never a time
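For completeness, here is roughly how that backend selection looks; this is a
sketch from memory, so the factory name ("NIO"), the
RrdBackendFactory.setDefaultFactory() call, and the file path should all be
treated as assumptions to check against the RRD4J version in use:

import java.io.IOException;

import org.rrd4j.core.RrdBackendFactory;
import org.rrd4j.core.RrdDb;

public class NioBackendCheck {
    public static void main(String[] args) throws IOException {
        // Select the java.nio backend before any RRD files are opened.
        // (This should be the default anyway; the call just makes it explicit.)
        RrdBackendFactory.setDefaultFactory("NIO");

        // Hypothetical path; any existing RRD file will do.
        RrdDb rrdDb = new RrdDb("/pool1/data/sample.rrd");
        try {
            System.out.println("last update: " + rrdDb.getLastUpdateTime());
        } finally {
            rrdDb.close();
        }
    }
}

Since NIO is already the default, this isn't a fix by itself; it just rules out
the backend choice as a variable.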
We are going to get a 6120 for this temporarily. If all goes well, we are
going to move to a 6140 SAN solution.
> The other thing to keep in mind is that the tunables like compression
> and recsize only affect newly written blocks. If you have a bunch of
> data that was already laid down on disk and then you change the tunable,
> this will only cause new blocks to have the new size. If you experime
To avoid making multiple posts, I'll just write everything here:
- Moving to nv_82 did not seem to do anything, so it doesn't look like fsync was
the issue.
- Disabling the ZIL didn't do anything either.
- Still playing with 'recsize' values but it doesn't seem to be doing much... I
don't think I have a g
Slight correction. 'recsize' must be a power of 2 so it would be 8192.
RRD4J isn't a DB, per se, so it doesn't really have a "record" size. In fact, I
don't even know whether the data written to the binary file is contiguous or
not, so the amount written may not directly correlate to a proper record size.
I did run your command and found the size patterns y
One thing I just observed is that the initial file size is 65796 bytes. When
it gets an update, the file size remains at 65796.
Is there a minimum file size?
I just installed nv82 so we'll see how that goes. I'm going to try the
recordsize idea above as well.
A note about UFS: I was told by our local Admin guru that ZFS turns on
write-caching for disks, which is something that a UFS file system should not
have turned on, so that if I convert the Z
Unfortunately, I don't know the record size of the writes. Is it as simple as
looking at the size of a file, before and after a client request, and noting the
difference in size (a rough sketch of that check is below)? This is binary data,
so I don't know if that makes a difference, but the average write size is a lot
smaller than th
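To make the question concrete, here is the naive before/after check I have in
mind (path and class name are made up); the catch, given the fixed 65796-byte
size mentioned above, is that a file updated in place never changes length, so
this would only ever catch appends:

import java.io.File;

public class SizeDelta {
    public static void main(String[] args) {
        // Hypothetical path to one of the files the app server writes.
        File f = new File(args.length > 0 ? args[0] : "/pool1/data/sample.rrd");

        long before = f.length();
        // ... trigger a single client request against the app server here ...
        long after = f.length();

        // For a file rewritten in place (fixed size), after == before, so this
        // says nothing about how many bytes were actually written per update.
        System.out.println("size delta: " + (after - before) + " bytes");
    }
}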
It is a striped/mirror:
# zpool status
        NAME        STATE     READ WRITE CKSUM
        pool1       ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c0t2d0  ONLINE       0     0     0
            c0t3d0  ONLINE       0     0     0
          mirror    ONL
I disabled file prefetch and there was no effect.
Here are some performance numbers. Note that, when the application server used
a ZFS file system to save its data, the transaction took TWICE as long. For
some reason, though, iostat is showing 5x as much disk writing (to the physical
disks) o
Hi Marc,
# cat /etc/release
Solaris 10 8/07 s10x_u4wos_12b X86
I don't know if my application uses synchronous I/O transactions... I'm using
Sun's GlassFish v2u1.
I've deleted the ZFS partition and have set up an SVM stripe/mirror just to see
if "ZFS" is getting in the wa
Some more information about the system. NOTE: CPU utilization never goes above
10%.
Sun Fire V40z
4 x 2.4 GHz processors
8 GB memory
3 x 146 GB Seagate drives (10k RPM)
1 x 146 GB Fujitsu drive (10k RPM)
This may not be a ZFS issue, so please bear with me!
I have 4 internal drives that I have striped/mirrored with ZFS and have an
application server which is reading/writing to hundreds of thousands of files
on it, thousands of files at a time.
If 1 client uses the app server, the transaction (rea
Okay, so back to this. What's the best way of getting per-user usage of a ZFS
file system?
This was asked before, but was not responded to. Is there a ZFS
equivalent to the 'quot' command?
feld wrote:
On Wed, 2006-08-16 at 11:49 -0400, Eric Enright wrote:
On 8/16/06, William Fretts-Saxton <[EMAIL PROTECTED]> wrote:
I'm having trouble finding information on any hooks into ZFS. Is
there information on a ZFS API so I can access ZFS information
directly as opposed to havin
I'm having trouble finding information on any hooks into ZFS. Is there
information on a ZFS API so I can access ZFS information directly as opposed to
having to constantly parse 'zpool' and 'zfs' command output?