On Mon, May 10 at 9:08, Erik Trimble wrote:
Abhishek Gupta wrote:
> Hi,
> I just installed OpenSolaris on my Dell Optiplex 755 and created a
> raidz2 with a few slices on a single disk. I was expecting good
> read/write performance, but I got speeds of 12-15 MB/s.
> How can I enhance the read/write performance of my raid?
> Thanks,
> Abhi.
You ab
After working with Sanjeev, and putting a bunch of timing statements
throughout the code, it turns out that file writes are NOT the bottleneck, as
one would assume.
It is actually reading the file into a byte buffer that is the culprit.
Specifically, this Java call:
byteBuffer = file.get
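For reference, the same bottleneck can be confirmed from outside the JVM with a
DTrace one-liner along these lines (a sketch; the execname predicate assumes the
application shows up as 'java'), which quantizes read(2) latency rather than
relying on timing statements in the code:

  # dtrace -n '
    syscall::read:entry  /execname == "java"/ { self->ts = timestamp; }
    syscall::read:return /self->ts/ {
        @["read(2) latency (ns)"] = quantize(timestamp - self->ts);
        self->ts = 0;
    }'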
> Is deleting the old files/directories in the ZFS file system
> sufficient or do I need to destroy/recreate the pool and/or file
> system itself? I've been doing the former.
The former should be sufficient; it's not necessary to destroy the pool.
-j
I ran this dtrace script and got no output. Any ideas?
It does. The file size is limited to the original creation size, which is 65k
for files with 1 data sample.
Unfortunately, I have zero experience with dtrace and only a little with truss.
I'm relying on the dtrace scripts from people on this thread to get by for now!
On Feb 5, 2008 9:52 PM, William Fretts-Saxton <[EMAIL PROTECTED]>
wrote:
> This may not be a ZFS issue, so please bear with me!
>
> I have 4 internal drives that I have striped/mirrored with ZFS and have an
> application server which is reading/writing to hundreds of thousands of
> files on it, th
Hello William,
Thursday, February 7, 2008, 7:46:51 PM, you wrote:
WFS> -Setting zfs_nocacheflush, though, got me drastically increased
WFS> throughput--client requests took, on average, less than 2 seconds each!
That's interesting - a bug in the SCSI driver for the v40z?
--
Best regards,
Robert
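For reference, zfs_nocacheflush is normally set in /etc/system and takes effect
at the next reboot (a sketch of the relevant line; as discussed elsewhere in the
thread, this is only appropriate when the write cache is nonvolatile, e.g. a
battery-backed array):

  set zfs:zfs_nocacheflush = 1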
William Fretts-Saxton wrote:
> Unfortunately, I don't know the record size of the writes. Is it as
> simple as looking @ the size of a file, before and after a client
> request, and noting the difference in size?
and
> The I/O is actually done by RRD4J, [...] a Java version of 'rrdtool'
If it b
Hi Daniel. I take it you are an RRD4J user?
I didn't see anything in the "performance issues" area that would help. Please
let me know if I'm missing something:
- The default of RRD4J is to use the NIO backend, so that is already in place.
- Pooling won't help because there is almost never a time
We are going to get a 6120 for this temporarily. If all goes well, we are
going to move to a 6140 SAN solution.
> The other thing to keep in mind is that the tunables like compression
> and recsize only affect newly written blocks. If you have a bunch of
> data that was already laid down on disk and then you change the tunable,
> this will only cause new blocks to have the new size. If you experime
William Fretts-Saxton wrote:
> Unfortunately, I don't know the record size of the writes. Is it as simple
> as looking @ the size of a file, before and after a client request, and
> noting the difference in size? This is binary data, so I don't know if that
> makes a difference, but the averag
> -Setting zfs_nocacheflush, though, got me drastically increased
> throughput--client requests took, on average, less than 2 seconds each!
>
> So, in order to use this, I should have a storage
> array, w/battery backup, instead of using the
> internal drives, correct? I have the option of using
> -Still playing with 'recsize' values but it doesn't seem to be doing
> much...I don't think I have a good understanding of what exactly is being
> written...I think the whole file might be overwritten each time
> because it's in binary format.
The other thing to keep in mind is that the tunables li
To avoid making multiple posts, I'll just write everything here:
-Moving to nv_82 did not seem to do anything, so it doesn't look like fsync was
the issue.
-Disabling ZIL didn't do anything either
-Still playing with 'recsize' values but it doesn't seem to be doing much...I
don't think I have a g
Slight correction. 'recsize' must be a power of 2 so it would be 8192.
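A minimal sketch of how that would be applied (the dataset name here is
hypothetical; as noted above, only newly written blocks pick up the new value,
so an existing .rrd file would have to be rewritten, e.g. copied to a new file
and renamed back, before it is laid down with the new record size):

  # zfs set recordsize=8192 pool1/appdata
  # zfs get recordsize pool1/appdata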
RRD4J isn't a DB, per se, so it doesn't really have a "record" size. In fact,
I don't even know whether the data written to the binary file is contiguous,
so the amount written may not directly correlate to a proper record size.
I did run your command and found the size patterns y
William,
It should be fairly easy to find the record size using DTrace. Take an
aggregation of the writes happening (aggregate on size for all the write(2)
system calls). This would give a fair idea of the I/O size pattern.
Does RRD4J have a record size mentioned? Usually if it is a
database
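A minimal sketch of that aggregation (the execname predicate is an assumption
about how the application server shows up in ps):

  # dtrace -n 'syscall::write:entry /execname == "java"/ { @["write(2) size (bytes)"] = quantize(arg2); }'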
One thing I just observed is that the initial file size is 65796 bytes. When
it gets an update, the file size remains @ 65796.
Is there a minimum file size?
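One way to see what is actually allocated on disk, as opposed to the logical
file size, is to compare ls and du on the file (the path here is hypothetical):

  # ls -l /pool1/appdata/sample.rrd
  # du -k /pool1/appdata/sample.rrd

If the file is fully written at creation time, ls will keep reporting 65796
bytes while du shows how many blocks ZFS has actually allocated for it.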
I just installed nv82 so we'll see how that goes. I'm going to try the
recordsize idea above as well.
A note about UFS: I was told by our local Admin guru that ZFS turns on
write-caching for disks, which is something a UFS file system should not have
turned on, so if I convert the Z
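For what it's worth, the per-disk write cache setting can usually be inspected
from format's expert mode on SCSI/SAS disks (a sketch; the exact menus depend
on the disk and driver):

  # format -e
  (select the disk)
  format> cache
  cache> write_cache
  write_cache> display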
Unfortunately, I don't know the record size of the writes. Is it as simple as
looking @ the size of a file, before and after a client request, and noting the
difference in size? This is binary data, so I don't know if that makes a
difference, but the average write size is a lot smaller than th
Neil Perrin Sun.COM> writes:
>
> The ZIL doesn't do a lot of extra IO. It usually just does one write per
> synchronous request and will batch up multiple writes into the same log
> block if possible.
Ok. I was wrong then. Well, William, I think Marion Hakanson has the
most plausible explanatio
[EMAIL PROTECTED] said:
> Here are some performance numbers. Note that, when the application server
> used a ZFS file system to save its data, the transaction took TWICE as long.
> For some reason, though, iostat is showing 5x as much disk writing (to the
> physical disks) on the ZFS partition. C
Marc Bevand wrote:
> William Fretts-Saxton sun.com> writes:
>
>> I disabled file prefetch and there was no effect.
>>
>> Here are some performance numbers. Note that, when the application server
>> used a ZFS file system to save its data, the transaction took TWICE as long.
>> For some reason,
William Fretts-Saxton sun.com> writes:
>
> I disabled file prefetch and there was no effect.
>
> Here are some performance numbers. Note that, when the application server
> used a ZFS file system to save its data, the transaction took TWICE as long.
> For some reason, though, iostat is showing
Solaris 10u4, eh?
Sounds a lot like the fsync issues we ran into trying to run Cyrus mail-server
spools on ZFS.
This was highlighted for us by the filebench 'varmail' test.
OpenSolaris nv78, however, worked very well.
It is a striped/mirror:
# zpool status
        NAME        STATE     READ WRITE CKSUM
        pool1       ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c0t2d0  ONLINE       0     0     0
            c0t3d0  ONLINE       0     0     0
          mirror    ONL
On Feb 6, 2008 6:36 PM, William Fretts-Saxton
<[EMAIL PROTECTED]> wrote:
> Here are some performance numbers. Note that, when the
> application server used a ZFS file system to save its data, the
> transaction took TWICE as long. For some reason, though, iostat is
> showing 5x as much disk writin
I disabled file prefetch and there was no effect.
Here are some performance numbers. Note that, when the application server used
a ZFS file system to save its data, the transaction took TWICE as long. For
some reason, though, iostat is showing 5x as much disk writing (to the physical
disks) o
Hi Marc,
# cat /etc/release
Solaris 10 8/07 s10x_u4wos_12b X86
I don't know if my application uses synchronous I/O transactions...I'm using
Sun's Glassfish v2u1.
I've deleted the ZFS partition and have setup an SVM stripe/mirror just to see
if "ZFS" is getting in the wa
William Fretts-Saxton sun.com> writes:
>
> Some more information about the system. NOTE: Cpu utilization never
> goes above 10%.
>
> Sun Fire v40z
> 4 x 2.4 GHz proc
> 8 GB memory
> 3 x 146 GB Seagate Drives (10k RPM)
> 1 x 146 GB Fujitsu Drive (10k RPM)
And what version of Solaris or what bui
Some more information about the system. NOTE: CPU utilization never goes above
10%.
Sun Fire v40z
4 x 2.4 GHz proc
8 GB memory
3 x 146 GB Seagate Drives (10k RPM)
1 x 146 GB Fujitsu Drive (10k RPM)