I've set up a RAIDZ2 pool with 5 SATA drives and added a 32GB SSD log device.
To see how well it works, I ran bonnie++, but I never saw any I/Os on the log
device (using iostat -nxce). Pool status is good - no issues or errors. Any ideas?
jmh
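The likely explanation: the ZIL - and therefore a dedicated log device - is only used for synchronous writes (O_DSYNC, fsync(), NFS commits, database commits). bonnie++ issues ordinary buffered writes, which go straight into the transaction group, so the slog sits idle. A minimal sketch to drive synchronous writes through the log device (the path /tank/fs/syncfile is hypothetical - point it at a file in your pool):

  #include <fcntl.h>
  #include <stdio.h>
  #include <string.h>
  #include <unistd.h>

  int
  main(void)
  {
          char buf[8192];
          int fd, i;

          (void) memset(buf, 'z', sizeof (buf));
          /* O_DSYNC forces each write() to be synchronous, routing it through the ZIL */
          fd = open("/tank/fs/syncfile", O_WRONLY | O_CREAT | O_DSYNC, 0644);
          if (fd < 0) {
                  perror("open");
                  return (1);
          }
          for (i = 0; i < 10000; i++) {
                  if (write(fd, buf, sizeof (buf)) != sizeof (buf)) {
                          perror("write");
                          return (1);
                  }
          }
          (void) close(fd);
          return (0);
  }

While this runs, iostat -nxce 1 should show write activity on the slog.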
Tim Haley wrote:
Ian Collins wrote:
Ian Collins wrote:
Tim Haley wrote:
Brent Jones wrote:
On the sending side, I CAN kill the ZFS send process, but the remote
side leaves its processes going, and I CANNOT kill -9 them. I also
cannot reboot the receiving system, at init 6, the system will ju
On Tue, 30 Jun 2009, MC wrote:
Any news on the ZFS deduplication work being done? I hear Jeff Bonwick might
speak about it this month.
Yes, it is definitely on the agenda for Kernel Conference Australia
(http://www.kernelconference.net) - you should come along!
--
Andre van Eyssen.
> On Tue, 30 Jun 2009, Bob Friesenhahn wrote:
> Note that this issue does not apply at all to NFS service, database
> service, or any other usage which does synchronous writes.
I see read starvation with NFS. I was using iometer on a Windows VM, connecting
to an NFS mount on a 2008.11 phy
On Tue, 30 Jun 2009, Rob Logan wrote:
CPU is smoothed out quite a lot
yes, but the area under the CPU graph is less, so the
rate of real work performed is less, so the entire
job took longer. (albeit "smoother")
For the purpose of illustration, the case showing the huge sawtooth
was when ru
Interesting to see that it makes such a difference, but I wonder what effect it
has on ZFS's write ordering, and its attempts to prevent fragmentation?
By reducing the write buffer, are you losing those benefits?
Although on the flip side, I guess this is no worse off than any other
filesystem
On Jun 30, 2009, at 14:08, Bob Friesenhahn wrote:
I have seen UPSs help quite a lot for short glitches lasting
seconds, or a minute. Otherwise the outage is usually longer than
the UPSs can stay up since the problem required human attention.
A standby generator is needed for any long outage.
On Tue, 30 Jun 2009, Brent Jones wrote:
Maybe there could be a supported ZFS tuneable (per file system even?)
that is optimized for 'background' tasks, or 'foreground'.
Beyond that, I will give this tuneable a shot and see how it impacts
my own workload.
Note that this issue does not apply at all to NFS service, database
service, or any other usage which does synchronous writes.
> "ms" == Monish Shah writes:
> "sl" == Scott Lawson writes:
> "np" == Neal Pollack writes:
ms> If you are on a UPS, is it OK to disable ZIL?
sl> I have seen numerous UPS' failures over the years,
yeah at my place in NYC we've had more problems with the UPS than with
the s
On Mon, 29 Jun 2009, Lejun Zhu wrote:
With ZFS write throttle, the number 2.5GB is tunable. From what I've
read in the code, it is possible to e.g. set
zfs:zfs_write_limit_override = 0x8000000 (bytes) to make it write
128M instead.
This works, and the difference in behavior is profound. No
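For anyone wanting to repeat this, a sketch of the two usual ways to set the tunable named in Lejun's message (the 0x8000000 value is the 128M figure quoted above; treat it as an experiment, not a recommendation). Persistently, in /etc/system:

  set zfs:zfs_write_limit_override = 0x8000000

Or live on a running system, assuming the variable is 64-bit so /Z writes 8 bytes (0t134217728 is mdb's decimal notation for 128 MiB):

  echo zfs_write_limit_override/Z0t134217728 | mdb -kw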
For what it is worth, I too have seen this behavior when load testing our zfs
box. I used iometer and the RealLife profile (1 worker, 1 target, 65% reads,
60% random, 8k, 32 IOs in the queue). When writes are being dumped, reads drop
close to zero, from 600-700 read IOPS to 15-30 read IOPS.
zpo
On Tue, 30 Jun 2009, Neal Pollack wrote:
Actually, they do quite a bit more than that. They create jobs,
generate revenue for battery manufacturers, and techs that change
batteries and do PM maintenance on the large units. Let's not
It sounds like this is a responsibility which should be mo
On Tue, 30 Jun 2009, Ross wrote:
However, it completely breaks any process like this that can't
afford 3-5s delays in processing, it makes ZFS a nightmare for
things like audio or video editing (where it would otherwise be a
perfect fit), and it's also horrible from the perspective of the end
On Sun, 28 Jun 2009, Bob Friesenhahn wrote:
Today I experimented with doubling this value to 688128 and was happy to
see a large increase in sequential read performance from my ZFS pool which
is based on six mirror vdevs. Sequential read performance
Monish Shah wrote:
A related question: If you are on a UPS, is it OK to disable ZIL?
The evil tuning guide says "The ZIL is an essential part of ZFS and
should never be disabled." However, if you have a UPS, what can go
wrong that really requires ZIL?
The UPS.
Opinions?
Monish
Monish Shah wrote:
A related question: If you are on a UPS, is it OK to disable ZIL?
I think the answer to this is no. UPSs do fail. If you have two
redundant units, the answer *might* be maybe. But prudence says *no*.
I have seen numerous UPS failures over the years, cascading UPS
failures
On Tue, 30 Jun 2009, Monish Shah wrote:
The evil tuning guide says "The ZIL is an essential part of ZFS and should
never be disabled." However, if you have a UPS, what can go wrong that
really requires ZIL?
Without addressing a single ZFS-specific issue:
* panics
* crashes
* hardware failures
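For context on what "disabling the ZIL" means in this discussion: on OpenSolaris of this era it was a kernel tunable, set in /etc/system - a sketch for testing only, given the Evil Tuning Guide's warning quoted above:

  set zfs:zil_disable = 1

It takes effect for datasets mounted after the change. With it set, clients that asked for synchronous semantics (NFS, databases) can lose acknowledged writes in any panic or crash - which, per the list above, a UPS does nothing to prevent.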
Haudy Kazemi wrote:
Hello,
I've looked around Google and the zfs-discuss archives but have not
been able to find a good answer to this question (and the related
questions that follow it):
How well does ZFS handle unexpected power failures? (e.g.
environmental power failures, power supply
I've seen enough people suffer from corrupted pools that a UPS is definitely
good advice. However, I'm running a (very low usage) ZFS server at home and
it's suffered through at least half a dozen power outages without any problems
at all.
I do plan to buy a UPS as soon as I can, but it seems
I'm trying to scrub a pool on a backup server running Solaris 10 Update
7 and the scrub restarts each time a snap is received.
I thought this was fixed in Update 6?
The machine was recently upgraded from Update 5, which did have the issue.
--
Ian.
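A minimal way to reproduce and observe the behaviour Ian describes, assuming a receiving pool named backup and a source dataset tank/fs with snapshots @a and @b (all names hypothetical):

  zpool scrub backup
  zpool status backup       # note the scrub's progress percentage
  zfs send -i tank/fs@a tank/fs@b | zfs recv backup/fs
  zpool status backup       # on affected builds, the scrub has restarted from 0%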
> backup windows using primarily iSCSI. When those
> writes occur to my RaidZ volume, all activity pauses until the writes
> are fully flushed.
The more I read about this, the worse it sounds. The thing is, I can see where
the ZFS developers are coming from - in theory this is a more efficient u