Depends.
a) Pool design
5 x SSD as raidz = the capacity of 4 SSDs, but the read I/O performance of one drive.
Adding 5 cheap 40 GB L2ARC devices (which are pooled) increases the read
performance for your working set of 200 GB.
If you have a pool of mirrors, adding L2ARC does not make sense.
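A minimal sketch of the commands for that layout, assuming a pool named tank and example device names (none of these names are from the original post):
zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0   # 5 SSDs, usable capacity of 4
zpool add tank cache c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0      # 5 x 40 GB devices as L2ARC
zpool status tank                                            # the cache devices show up under 'cache'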
b) SSD type
Is yo
Hi all,
I recently upgraded a T5120 to S10U8 using LU. The system had zones
configured, and at the time of the upgrade the zones were still alive
and working fine. The LU procedure ended successfully. The zones on the
system were installed on a ZFS filesystem. Here is the result at the end
of LU (ABE
Henrik
http://sparcv9.blogspot.com
On 9 jan 2010, at 04.49, bank kus wrote:
dd if=/dev/urandom of=largefile.txt bs=1G count=8
cp largefile.txt ./test/1.txt &
cp largefile.txt ./test/2.txt &
That's it; now the system is totally unusable after launching the two
8G copies. Until these copies
> Probably not, but ZFS only runs in userspace on Linux
> with fuse so it
> will be quite different.
I wasn't clear in my description; I'm referring to ext4 on Linux. In fact, on a
system with low RAM, even the dd command makes the system horribly unresponsive.
IMHO not having fairshare or timeslicing
Hello again,
I swapped out the PSU and replaced the cables and ran scrubs almost every day
(after hours) with no reported faults. I also upgraded to SNV_130 thanks to
Brock, and changed the cables and PSU after the suggestion from Richard. I owe
you both beers!
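As an aside, a regular scrub like that can be scheduled from cron; this is only an illustrative sketch (the pool name tank and the schedule are made-up examples):
# crontab entry: scrub the pool every night at 23:00
0 23 * * * /usr/sbin/zpool scrub tank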
We thought our troubles were re
I finally managed to resolve this. I received some useful info from Richard
Elling (without List CC):
>> (ME) However I still think the plain IDE driver also needs a timeout to
>> handle disk failures, because cables etc. can fail.
> (Richard) Yes, this is a little bit odd. The sd driver should be
On Sat, 9 Jan 2010, bank kus wrote:
Probably not, but ZFS only runs in userspace on Linux
with fuse so it
will be quite different.
I wasn't clear in my description; I'm referring to ext4 on Linux. In
fact, on a system with low RAM, even the dd command makes the system
horribly unresponsive.
I
> I am confused. Are you talking about ZFS under
> OpenSolaris, or are
> you talking about ZFS under Linux via Fuse?
???
> Do you have compression or deduplication enabled on
> the zfs
> filesystem?
Compression, no. I'm guessing 2009.06 doesn't have dedup.
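(A quick way to check, on builds that have the dedup property; the dataset name here is just an example:
zfs get compression,dedup rpool/export/home
On 2009.06 the dedup property simply isn't recognized, which would confirm it's not there.)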
> What sort of system are you using
On Jan 9, 2010, at 1:32 AM, Lutz Schumann wrote:
Depends.
a) Pool design
5 x SSD as raidz = the capacity of 4 SSDs, but the read I/O performance of one drive.
Adding 5 cheap 40 GB L2ARC devices (which are pooled) increases the
read performance for your working set of 200 GB.
An interesting thing happens when
> > I wasn't clear in my description; I'm referring to ext4 on Linux. In
> > fact, on a system with low RAM, even the dd command makes the system
> > horribly unresponsive.
> >
> > IMHO not having fairshare or timeslicing between different processes
> > issuing reads is frankly unacceptable given a
We just had our first x4500 disk failure (which of course had to happen
late Friday night). I've opened a ticket on it but don't expect a
response until Monday, so I was hoping to verify that the hot spare took over
correctly and that we still have redundancy pending device replacement.
This is an S10U6 box:
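A generic sketch of how one would check this (not actual output from that box; the pool name export1 is just an example):
zpool status -x           # lists only pools that currently have problems
zpool status export1      # an active hot spare shows up as 'spare' in the config,
                          # with the failed disk marked FAULTED or UNAVAIL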
On Jan 9, 2010, at 9:45 AM, Paul B. Henson wrote:
>
> If ZFS removed the drive from the pool, why does the system keep
> complaining about it?
It's not failing in the sense that it's returning I/O errors, but it's flaky,
so it's attaching and detaching. Most likely it decided to attach again a
Paul B. Henson wrote:
We just had our first x4500 disk failure (which of course had to happen
late Friday night). I've opened a ticket on it but don't expect a
response until Monday, so I was hoping to verify that the hot spare took over
correctly and that we still have redundancy pending device replacement.
On Sat, 9 Jan 2010, Eric Schrock wrote:
> > If ZFS removed the drive from the pool, why does the system keep
> > complaining about it?
>
> It's not failing in the sense that it's returning I/O errors, but it's
> flaky, so it's attaching and detaching. Most likely it decided to attach
> again and
On Fri, 08 Jan 2010 18:33:06 +0100, Mike Gerdts wrote:
> I've written a dtrace script to get the checksums on Solaris 10.
> Here's what I see with NFSv3 on Solaris 10.
JFYI, I've reproduced it as well using a Solaris 10 Update 8 SB2000 SPARC client
and NFSv4.
Much like you, I also get READ error
Ben,
I have found that booting from CD-ROM and importing the pool on the new host,
then booting from the hard disk, will prevent these issues.
That will reconfigure ZFS to use the new disk device.
Once the system is running, zpool detach the missing mirror device and attach a new one.
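Roughly like this, assuming the pool is rpool, the surviving half of the mirror is c0t0d0s0, the missing device was c0t1d0s0 and the replacement is c0t2d0s0 (all names are just examples):
zpool detach rpool c0t1d0s0             # drop the missing half of the mirror
zpool attach rpool c0t0d0s0 c0t2d0s0    # attach the new device to the surviving one
zpool status rpool                      # wait for the resilver to complete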
Mark.
On Jan 9, 2010, at 2:02 PM, bank kus wrote:
>> Probably not, but ZFS only runs in userspace on Linux
>> with fuse so it
>> will be quite different.
>
> I wasn't clear in my description; I'm referring to ext4 on Linux. In fact, on a
> system with low RAM, even the dd command makes the system horri
Hi Henrik,
I have 16 GB of RAM on my system; on a system with less RAM, dd does cause problems, as I
mentioned above. My __guess__ is that the dd output is probably sitting in some in-memory
cache, since du -sh doesn't show the full file size until I do a sync.
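Concretely, just to illustrate that observation (same file name as in the earlier dd example):
du -sh largefile.txt    # reports less than 8G while the data is still cached
sync                    # push the cached writes out
du -sh largefile.txt    # now shows the full size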
At this point I'm less looking for QA-type repro questions and/or
Btw, FWIW, if I redo the dd + 2 cp experiment on /tmp the result is far more
disastrous. The GUI stops moving and Caps Lock stops responding for long intervals;
no clue why.