Robert,
The patches will be available sometime in late September. This may be a
week or so before s10u3 actually releases.
Thanks,
George
Robert Milkowski wrote:
Hello eric,
Thursday, July 27, 2006, 4:34:16 AM, you wrote:
ek> Robert Milkowski wrote:
Hello George,
Wednesday, July 26, 2006,
For S10U3, RR is 11/13/06 and GA is 11/27/06.
Gary
Bennett, Steve wrote:
Eric said:
For U3, these are the performance fixes:
6424554 full block re-writes need not read data in
6440499 zil should avoid txg_wait_synced() and use dmu_sync() to issue parallel IOs when fsyncing
64473
Eric said:
> For U3, these are the performance fixes:
> 6424554 full block re-writes need not read data in
> 6440499 zil should avoid txg_wait_synced() and use dmu_sync() to issue parallel IOs when fsyncing
> 6447377 ZFS prefetch is inconsistent
> 6373978 want to take lots of snapshots quickly
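The last item (6373978) is easy to exercise from the shell: mass snapshot creation in a loop. A minimal sketch, assuming a hypothetical tank/home filesystem (names are made up, plain Bourne shell):

    i=0
    while [ $i -lt 100 ]; do
            zfs snapshot tank/home@auto$i    # one snapshot per pass
            i=`expr $i + 1`
    done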
Hello eric,
Thursday, July 27, 2006, 4:34:16 AM, you wrote:
ek> Robert Milkowski wrote:
>>Hello George,
>>
>>Wednesday, July 26, 2006, 7:27:04 AM, you wrote:
>>
>>
>>GW> Additionally, I've just putback the latest feature set and bugfixes
>>GW> which will be part of s10u3_03. There were some add
Robert Milkowski wrote:
Hello George,
Wednesday, July 26, 2006, 7:27:04 AM, you wrote:
GW> Additionally, I've just putback the latest feature set and bugfixes
GW> which will be part of s10u3_03. There were some additional performance
GW> fixes which may really benefit plus it will provide h
Hello George,
Wednesday, July 26, 2006, 7:27:04 AM, you wrote:
GW> Additionally, I've just putback the latest feature set and bugfixes
GW> which will be part of s10u3_03. There were some additional performance
GW> fixes which may really benefit plus it will provide hot spares support.
GW> Once
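As a rough illustration of the hot spares support George mentions, once a build with that feature is running, a spare can be added to an existing pool or declared at creation time; pool and device names below are invented:

    # attach a hot spare to an existing pool
    zpool add tank spare c3t0d0

    # or declare one up front
    zpool create tank mirror c1t0d0 c2t0d0 spare c3t0d0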
On Wed, Jul 26, 2006 at 08:38:16AM -0600, Neil Perrin wrote:
>
>
> >GX620 on my desk at work and I run snv_40 on the Latitude D610 that I
> >carry with me. In both cases the machines only have one disk, so I need
> >to split it up for UFS for the OS and ZFS for my data. How do I turn on
> >writ
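For a single-disk split like that, the pool simply gets built on whichever slice is left over for data; a minimal sketch, assuming the OS sits on s0 and s7 was set aside for ZFS (the slice layout is hypothetical):

    # UFS root stays on c0t0d0s0; hand the data slice to ZFS
    zpool create tank c0t0d0s7

Because ZFS is only given a slice here, it will not enable the disk's write cache on its own, hence the question that follows.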
Neil Perrin wrote:
> I suppose if you know
> the disk only contains zfs slices then write caching could be
> manually enabled using "format -e" -> cache -> write_cache -> enable
When will we have write cache control over ATA/SATA drives? :-).
--
Je
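For reference, the sequence Neil describes looks roughly like this inside format's expert-mode menus (the device name is a placeholder, and again this is only safe if the disk carries nothing but ZFS slices):

    # format -e -d c0t0d0
    format> cache
    cache> write_cache
    write_cache> enable
    write_cache> quit
    cache> quit
    format> quit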
Brian Hechinger wrote On 07/26/06 06:49,:
On Tue, Jul 25, 2006 at 03:54:22PM -0700, Eric Schrock wrote:
If you give zpool(1M) 'whole disks' (i.e. no 's0' slice number) and let
it label and use the disks, it will automatically turn on the write
cache for you.
What if you can't give ZFS whol
On Tue, Jul 25, 2006 at 03:54:22PM -0700, Eric Schrock wrote:
>
> If you give zpool(1M) 'whole disks' (i.e. no 's0' slice number) and let
> it label and use the disks, it will automatically turn on the write
> cache for you.
What if you can't give ZFS whole disks? I run snv_38 on the Optiplex
GX
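To make Eric's distinction concrete (device names invented): given the whole disk, ZFS labels it itself and turns the write cache on; given only a slice, it leaves the cache alone.

    # whole disk: ZFS labels it and enables the write cache
    zpool create tank c1t0d0

    # slice: ZFS uses just that slice, write cache untouched
    zpool create tank c1t0d0s0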
Karen and Sean,
You mention ZFS version 6; do you mean that you are running s10u2_06? If
so, then you definitely want to upgrade to the RR version of s10u2, which
is s10u2_09a.
Additionally, I've just putback the latest feature set and bugfixes
which will be part of s10u3_03. There were some ad
Hi Torrey; we are the cobbler's kids. We borrowed this T2000 from
Niagara engineering after we did some performance tests for them. I am
trying to get a thumper to run this data set. This could take up to 3-4
months. Today we are watching 750 Sun Ray servers and 30,000 employees.
Let's see
1) Sol
Given the amount of I/O, wouldn't it make sense to get more drives
involved or something that has cache on the front end, or both? If you're
really pushing the amount of I/O you're alluding to - hard to tell
without all the details - then you're probably going to hit a limitation
on the drive IO
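A generic way to see whether the individual drives really are the limit (nothing specific to this setup; the pool name is a placeholder):

    # per-vdev bandwidth and IOPS, sampled every 5 seconds
    zpool iostat -v tank 5

    # per-device service times and utilization
    iostat -xn 5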
On Tue, Jul 25, 2006 at 03:39:11PM -0700, Karen Chau wrote:
> Our application Canary has approx 750 clients uploading to the server
> every 10 mins, that's approx 108,000 gzip tarballs per day writing to
> the /upload directory. The parser untars the tarball which consists of
> 8 ASCII files into
Our application Canary has approx 750 clients uploading to the server
every 10 mins, that's approx 108,000 gzip tarballs per day writing to
the /upload directory. The parser untars the tarball which consists of
8 ASCII files into the /archives directory. /app is our application and
tools (apache,
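For scale: 750 clients x 6 uploads per hour x 24 hours = 108,000 tarballs a day, which averages out to roughly 1.25 arrivals per second, each expanding into 8 files (on the order of 860,000 small files created per day).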