On Aug 6, 2010, at 12:18 PM, Alxen4 wrote:
> Thank you very much for the answer.
>
> Yeah, that's what I was afraid of.
>
> There is something I really cannot understand about zpool structuring...
>
> What role do these 4 drives play in that tank pool with the current
> configuration?
They are membe
For ARC reasons if no other, I would max it out to the 8 GB regardless.
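(If you want to see how much of that RAM the ARC is actually using, the
arcstats kstats are a quick check; values are in bytes:)

  kstat -p zfs:0:arcstats:size   # current ARC size
  kstat -p zfs:0:arcstats:c_max  # upper bound the ARC will grow to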
Andrew,
Correct. The reason I initially opened the case was because I could
essentially hang a "zfs receive" operation and any further zfs commands issued
on the box would never come back. Just today I had one of my "slow" receives
just come to a screeching halt, where I saw 1 CPU spike al
Jim Barker wrote:
Just an update, I had a ticket open with Sun regarding this and it looks like
they have a CR for what I was seeing (6975124).
That would seem to describe a zfs receive which has stopped for 12 hours.
You described yours as slow, which is not the term I personally would
use
Thank you very much for the answer.
Yeah, that's what I was afraid of.
There is something I really cannot understand about zpool structuring...
What role do these 4 drives play in that tank pool with the current
configuration?
If they are not part of the raidz3 array, what is the point for Solaris to accept
Just an update, I had a ticket open with Sun regarding this and it looks like
they have a CR for what I was seeing (6975124).
> Ahh, that explains it all. God damn that base-1000
> standard, only useful for sales people :)
As much as it all annoys me too, the SI prefixes are used correctly pretty much
everywhere except in operating systems.
A kilometer is not 1024 meters and a megawatt is not 1048576 watts.
Us, the I
I have been looking at why a zfs receive operation is terribly slow, and one
observation that seems directly linked is that at any one time one of the
CPUs is pegged at 100% sys while the other 5 (in my case) are relatively
quiet. I haven't dug any deeper than that, but was curi
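(A minimal way to watch for that single pegged CPU with stock Solaris
tools; the pgrep pattern is just an example, adjust to taste:)

  mpstat 5                           # per-CPU usr/sys breakdown every 5s
  prstat -mL -p `pgrep -n zfs` 5     # per-thread microstates for the receive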
Thomas,
Enabling compression and filling the inner file system with nulls fixed
the problem.
I think I might leave compression on. I still need to do more testing on
that.
Thanks!
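(For the archives, the fill-with-nulls step is essentially the following;
the dataset name here is made up:)

  zfs set compression=on tank/inner
  dd if=/dev/zero of=/tank/inner/zerofile bs=1M
  rm /tank/inner/zerofile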
On 08/05/10 03:15, Tomas Ögren wrote:
On 04 August, 2010 - Karl Rossing sent me these 5,4K bytes:
Hi,
W
> From: Per Jorgensen
> Date: Fri, 06 Aug 2010 04:29:08 PDT
> To:
> Subject: [zfs-discuss] Disk space on Raidz1 configuration
>
> I have 7 * 1.5 TB disks in a raidz1 configuration, so the system (as I
> understand it) uses 1.5 TB (1 disk) for parity, but when I use "df" the
> availa
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of P-O Yliniemi
>
> Drives for storage: 16*1.5TB Seagate ST31500341AS, connected to two
> AOC-SAT2-MV8 controllers
> Drives for operating system: 2*80GB Intel X25-M (mirror)
>
> Is there any advan
Alas, the pool in question has a dozen-odd other ZFS datasets that range in
importance from "nice to have" to "let's not even think about it".
On the bright side, at about 14 hours in, my lights are still blinken. Here's
hoping the +RAM and -xVM were the difference.
One related question: In the unfort
Our experience has been that a new, out-of-the-box SSD works well for the ZIL,
but as soon as it's completely full, performance drops to slower than a regular
SAS hard drive due to the write-performance penalty inherent in their
fundamental design, their LBA map strategy, and the not yet released (to me at
Hi,
As already said above, the zfs property shareiscsi is obsolete and slow.
Use COMSTAR instead!
But be careful: if you switch to COMSTAR, your current iSCSI setup is no
longer available, so save the data first. And if you want to have it more
user-friendly, you could also try napp-it, my free web-GUI for opensolar
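(For reference, the COMSTAR route looks roughly like this; the zvol name is
an example, and you'll want the COMSTAR docs at hand before copying:)

  svcadm enable stmf
  zfs create -V 100G tank/iscsivol
  sbdadm create-lu /dev/zvol/rdsk/tank/iscsivol
  stmfadm add-view <GUID printed by sbdadm>
  svcadm enable -r svc:/network/iscsi/target:default
  itadm create-target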
Ahh, that explains it all. God damn that base-1000 standard, only useful for
sales people :)
Thanks for the help
/pej
On Fri, Aug 6, 2010 at 6:44 AM, P-O Yliniemi wrote:
> Hello!
>
> I have built an OpenSolaris / ZFS-based storage system for one of our
> customers. The configuration is about this:
>
> Motherboard/CPU: SuperMicro X7SBE / Xeon (something, sorry - can't remember
> and do not have my specification n
Hello,
I would say: it depends.
If you fill your pool with large videos or media files, I suppose it's OK,
but if you have things like databases or webservers, you will need good IOPS
values, much more than you can get from spindles.
(SSDs could be 100x better than disks for this use.)
In this ca
ZFS and "du" use binary byte multipliers (1kB = 1024 B, etc.), while
drive manufacturers use decimal conversion (1kB = 1000 B). So your 1.5TB
drives are in fact ~1.36 TiB (binary TB):
7 x 1,36 TiB = 9.52 TiB - 1,36 TiB for parity = 8.16 TiB
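(Worked out with a bit more precision, as a one-liner:)

  $ awk 'BEGIN { d = 1.5e12 / 2^40; printf "%.2f TiB/drive, %.2f TiB usable\n", d, 6*d }'
  1.36 TiB/drive, 8.19 TiB usable

which is in the same ballpark as the 8.0T that df reports; the remainder
presumably goes to pool metadata and rounding.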
--
Saso
I have 7 * 1.5 TB disks in a raidz1 configuration, so the system (as I
understand it) uses 1.5 TB (1 disk) for parity, but when I use "df" to check
the available space in my newly created pool, it says

Filesystem      Size  Used Avail Use% Mounted on
bf              8.0T   36K  8.0T   1
Hello!
I have built an OpenSolaris / ZFS-based storage system for one of our
customers. The configuration is about this:
Motherboard/CPU: SuperMicro X7SBE / Xeon (something, sorry - can't
remember and do not have my specification nearby)
RAM: 8GB ECC (X7SBE won't take more)
Drives for storag
On Fri, Aug 6, 2010 at 12:18 AM, Alxen4 wrote:
> I have a zpool like that:
>
>  pool: tank
>  state: ONLINE
>  scrub: none requested
> config:
>
>       NAME        STATE     READ WRITE CKSUM
>       tank        ONLINE       0     0     0
>         raidz3-0  ONLINE       0     0     0
>           c6t0d0
I have a zpool like that:

 pool: tank
 state: ONLINE
 scrub: none requested
config:

      NAME        STATE     READ WRITE CKSUM
      tank        ONLINE       0     0     0
        raidz3-0  ONLINE       0     0     0
          c6t0d0  ONLINE       0     0     0
          c6t1d0  ONLINE       0     0