On Wed, Apr 14, 2010 at 09:58:50AM -0700, Richard Elling wrote:
> On Apr 14, 2010, at 8:57 AM, Yariv Graf wrote:
>
> > From my experience dealing with > 4TB you stop writing after 80% of zpool
> > utilization
>
> YMMV. I have routinely completely filled zpools. There have been some
> improvement
zdb output for the zpool is below:
bash-3.00# zdb ttt
    version=15
    name='ttt'
    state=0
    txg=4
    pool_guid=4724975198934143337
    hostid=69113181
    hostname='cdc-x4100s8'
    vdev_tree
        type='root'
        id=0
        guid=4724975198934143337
        children[0]
            type
Yesterday I received a victim:
"SuperServer 5026T-3RF 19" 2U, Intel X58, 1x CPU LGA1366, 8x SAS/SATA hot-swap
drive bays, 8-port SAS LSI 1068E, 6-port SATA-II Intel ICH10R, 2x Gigabit
Ethernet"
and I have 2 options: Openfiler vs OpenSolaris :)
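If it ends up running OpenSolaris, a first pool on the eight hot-swap bays could look roughly like this (a sketch only; "tank" and the c-numbers are placeholder names, check format for the real device names):

# format < /dev/null
(lists the disks the LSI 1068E / ICH10R controllers present, then exits)
# zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0
# zpool status tank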
On Wed, Apr 14, 2010 at 09:04:50PM -0500, Paul Archer wrote:
> I realize that I did things in the wrong order. I should have removed the
> oldest snapshot first, then worked up to the newest, and then removed the
> data in the FS itself.
For the problem in question, this is irrelevant. As discussed in the
Daniel Carosone wrote:
On Wed, Apr 14, 2010 at 08:48:42AM -0500, Paul Archer wrote:
So I turned deduplication on on my staging FS (the one that gets mounted
on the database servers) yesterday, and since then I've been seeing the
mount hang for short periods of time off and on. (It lights nag
David Dyer-Bennet wrote:
On 14-Apr-10 22:44, Ian Collins wrote:
Hint: the southern hemisphere does exist!
I've even been there.
But the month/season relationship is too deeply built into too many
things I follow (like the Christmas books come out of the publisher's
fall list; for that matte
On Wed, Apr 14, 2010 at 08:48:42AM -0500, Paul Archer wrote:
> So I turned deduplication on on my staging FS (the one that gets mounted
> on the database servers) yesterday, and since then I've been seeing the
> mount hang for short periods of time off and on. (It lights nagios up
> like a Chris
On 14-Apr-10 22:44, Ian Collins wrote:
On 04/15/10 06:16 AM, David Dyer-Bennet wrote:
Because 132 was the most current last time I paid much attention :-). As
I say, I'm currently holding out for 2010.$Spring, but knowing how to get
to a particular build via package would be potentially interest
As I mentioned earlier, I removed the hardware-based RAID6 array, changed all
the disks to passthrough disks, and made a raidz2 pool using all the disks. I used
my backup program to copy 55GB of data to the pool, and now I have errors all
over the place.
# zpool status -v
pool: bigraid
state: DEGRADED
Hello all, I use OpenSolaris snv-133.
I use COMSTAR to share an IP SAN partition (werr/pwd) to a Windows client.
Now I copy a 1.05GB file to the formatted disk; the partition usage is then about
80%. Now I delete the file while the disk is idle and cancel the IP SAN share. I thought
the partition usage would drop to abo
On 04/15/10 06:16 AM, David Dyer-Bennet wrote:
Because 132 was the most current last time I paid much attention :-). As
I say, I'm currently holding out for 2010.$Spring, but knowing how to get
to a particular build via package would be potentially interesting for the
future still.
I hope it's
On 04/14/10 19:51, Richard Jahnel wrote:
This sounds like the known issue about the dedupe map not fitting in ram.
Indeed, but this is not correct:
When blocks are freed, dedupe scans the whole map to ensure each block is not
in use before releasing it.
That's not correct.
dedup uses a da
7:51pm, Richard Jahnel wrote:
This sounds like the known issue about the dedupe map not fitting in ram.
When blocks are freed, dedupe scans the whole map to ensure each block is not
in use before releasing it. This takes a veeery long time if the map doesn't
fit in RAM.
If you can try adding
On Wed, Apr 14 at 13:16, David Dyer-Bennet wrote:
I don't get random hangs in normal use; so I haven't done anything to "get
past" this.
Interesting. Win7-64 clients were locking up our 2009.06 server
within seconds while performing common operations like searching and
copying large directory
This sounds like the known issue about the dedupe map not fitting in ram.
When blocks are freed, dedupe scans the whole map to ensure each block is not
in use before releasing it. This takes a veeery long time if the map doesn't
fit in RAM.
If you can, try adding more RAM to the system.
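For what it's worth, you can get a feel for how big the dedup table actually is before adding RAM (a sketch; "tank" is a placeholder pool name, and the ~320 bytes per DDT entry figure is the commonly quoted rule of thumb, not an exact value):

# zdb -D tank
(prints DDT entry counts - duplicate and unique entries - plus the overall dedup ratio)
# zdb -DD tank
(adds a histogram of reference counts)

Multiply the total entry count by roughly 320 bytes; if that exceeds what the ARC can hold, frees and snapshot destroys will be dominated by DDT reads from disk, which matches the hangs described above.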
I have an approx 700GB (of data) FS that I had dedup turned on for. (See
previous posts.) I turned on dedup after the FS was populated, and was not
sure dedup was working. I had another copy of the data, so I removed the data,
and then tried to destroy the snapshots I had taken. The first two di
Richard Elling wrote:
On Apr 14, 2010, at 8:57 AM, Yariv Graf wrote:
From my experience dealing with > 4TB you stop writing after 80% of zpool
utilization
YMMV. I have routinely completely filled zpools. There have been some
improvements in performance of allocations when free space
On Wed, April 14, 2010 15:28, Miles Nordin wrote:
>> "dd" == David Dyer-Bennet writes:
>
> dd> Is it possible to switch to b132 now, for example?
>
> yeah, this is not so bad. I know of two approaches:
Thanks, I've filed and flagged this for reference.
> "dd" == David Dyer-Bennet writes:
dd> Is it possible to switch to b132 now, for example?
yeah, this is not so bad. I know of two approaches:
* genunix.org assembles livecd's of each b tag. You can burn
one, unplug from the internet, install it. It is nice to have a
livecd ca
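For the package-based route (I believe this is the other approach, but treat the publisher URL and version string below as from-memory and verify them first):

$ pfexec pkg set-publisher -O http://pkg.opensolaris.org/dev opensolaris.org
$ pfexec pkg install entire@0.5.11-0.132
(the 'entire' incorporation pins every package to the b132 versions)
$ beadm list
(reboot afterwards; beadm list shows the available boot environments)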
On 04/14/10 12:37, Christian Molson wrote:
First I want to thank everyone for their input, It is greatly appreciated.
To answer a few questions:
Chassis I have:
http://www.supermicro.com/products/chassis/4U/846/SC846E2-R900.cfm
Motherboard:
http://www.tyan.com/product_board_detail.aspx?pid=56
Just a quick update: tested using bonnie++ just during its "Intelligent write" phase:
my 5 vdevs of 4x1TB drives wrote around 300-350MB/sec in that test.
The 1 vdev of 4x2TB drives wrote more inconsistently, between 200-300MB/sec.
This is not a complete test... just looking at iostat output while bonnie++
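For anyone wanting to repeat the comparison, the sort of invocation meant here is roughly this (paths, the size argument and the unprivileged user are placeholders; bonnie++ option syntax can vary between versions):

# bonnie++ -d /tank/bench -s 32g -u nobody
(the sequential block write phase is the "Intelligent write" figure quoted above)
and in a second terminal:
# iostat -xn 5
(per-device throughput and %busy every 5 seconds while the benchmark runs)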
On Wed, Apr 14, 2010 at 5:41 AM, Dmitry wrote:
> Which build is the most stable, mainly for NAS?
> I plan a NAS with ZFS + CIFS and iSCSI
I'm using b133. My current box was installed with 118, upgraded to
128a, then 133.
I'm avoiding b134 due to changes in the CIFS service that affect ACLs.
http://bugs.o
First I want to thank everyone for their input, It is greatly appreciated.
To answer a few questions:
Chassis I have:
http://www.supermicro.com/products/chassis/4U/846/SC846E2-R900.cfm
Motherboard:
http://www.tyan.com/product_board_detail.aspx?pid=560
RAM:
24 GB (12 x 2GB)
10 x 1TB Seagates 7
? wrote:
> Is there no way in the SUS standard to determine if a file system is
> case insensitive, e.g. with pathconf?
SUS requires a case sensitive filesystem.
There is no need to request this from a POSIX view
Jörg
On Wed, April 14, 2010 12:29, Bob Friesenhahn wrote:
> On Wed, 14 Apr 2010, David Dyer-Bennet wrote:
Not necessarily for a home server. While mine so far is all mirrored
pairs of 400GB disks, I don't even think about "performance" issues, I
never come anywhere near the limits
On Wed, April 14, 2010 11:51, Tonmaus wrote:
>>
>> On Wed, April 14, 2010 08:52, Tonmaus wrote:
>> > safe to say: 2009.06 (b111) is unusable for the
>> purpose, and CIFS is dead
>> > in this build.
>>
>> That's strange; I run it every day (my home Windows
>> "My Documents" folder
>> and all my pho
Roy, I was looking for a C API which works for all types of file
systems, including ZFS, CIFS, PCFS and others.
Olga
On Wed, Apr 14, 2010 at 7:46 PM, Roy Sigurd Karlsbakk
wrote:
> r...@urd:~# zfs get casesensitivity dpool/test
> NAME        PROPERTY         VALUE      SOURCE
> dpool/test  cas
r...@urd:~# zfs get casesensitivity dpool/test
NAME        PROPERTY         VALUE      SOURCE
dpool/test  casesensitivity  sensitive  -
This seems to be settable only at create time, not later. See man zfs for more info.
Best regards
roy
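To illustrate, a dataset that should behave this way has to be created with the property up front, e.g. (the pool/dataset names here are just examples):

# zfs create -o casesensitivity=mixed dpool/winshare
# zfs get casesensitivity dpool/winshare
(should now report 'mixed'; an existing dataset cannot be switched after creation)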
Is there no way in the SUS standard to determine if a file system is
case insensitive, e.g. with pathconf?
Olga
On Wed, Apr 14, 2010 at 7:48 PM, Glenn Fowler wrote:
>
> On Wed, 14 Apr 2010 17:54:02 +0200 Olga Kryzhanovska wrote:
>> Can I use getconf to test if a ZFS file
On Wed, 14 Apr 2010, David Dyer-Bennet wrote:
Not necessarily for a home server. While mine so far is all mirrored
pairs of 400GB disks, I don't even think about "performance" issues, I
never come anywhere near the limits of the hardware.
I don't see how the location of the server has any bea
Jonathan,
For a different diagnostic perspective, you might use the fmdump -eV
command to identify what FMA indicates for this device. This level of
diagnostics is below the ZFS level and definitely more detailed so
you can see when these errors began and for how long.
Cindy
On 04/14/10 11:08,
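A sketch of what that looks like in practice (the time window is a placeholder; pick whatever covers the period in question):

# fmdump -e -t 10day
(one-line summary of error reports from the last 10 days; shows when the errors began)
# fmdump -eV
(full ereport detail, including the device path and error class for each event)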
> Do worry about media errors. Though this is the most common HDD
> error, it is also the cause of data loss.
> Fortunately, ZFS detected this and repaired it for you.
Right. I assume you do recommend swapping the faulted drive out though?
> Other file systems may not be so gracious.
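If you do swap it, the sequence is roughly this (the pool and device names below are placeholders for the real faulted disk and its replacement):

# zpool replace tank c7t1d0 c7t5d0
(resilvers onto the new disk)
# zpool status -x
(watch the resilver; it should eventually report all pools are healthy)
# zpool clear tank
(reset the old error counters once the resilver has completed)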
On Apr 14, 2010, at 5:13 AM, fred pam wrote:
> I have a similar problem that differs in a subtle way. I moved a zpool
> (single disk) from one system to another. Due to my inexperience I did not
> import the zpool but (doh!) 'zpool create'-ed it (I may also have used a -f
> somewhere in there..
[this seems to be the question of the day, today...]
On Apr 14, 2010, at 2:57 AM, bonso wrote:
> Hi all,
> I recently experienced a disk failure on my home server and observed checksum
> errors while resilvering the pool and on the first scrub after the resilver
> had completed. Now everything
comment below...
On Apr 14, 2010, at 1:49 AM, Richard Skelton wrote:
> Hi,
> I have installed OpenSolaris snv_134 from the iso at genunix.org.
> Mon Mar 8 2010 New OpenSolaris preview, based on build 134
> I created a zpool:-
>    NAME      STATE     READ WRITE CKSUM
>    tank      ONLINE
On Wed, April 14, 2010 12:06, Bob Friesenhahn wrote:
> On Wed, 14 Apr 2010, David Dyer-Bennet wrote:
>>> It should be "safe" but chances are that your new 2TB disks are
>>> considerably slower than the 1TB disks you already have. This should
>>> be as much cause for concern (or more so) than the
Yeah,
--
$smartctl -d sat,12 -i /dev/rdsk/c5t0d0
smartctl 5.39.1 2010-01-28 r3054 [i386-pc-solaris2.11] (local build)
Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net
Smartctl: Device Read Identity Failed (not an ATA/ATAPI device)
On Wed, 14 Apr 2010, David Dyer-Bennet wrote:
It should be "safe" but chances are that your new 2TB disks are
considerably slower than the 1TB disks you already have. This should
be as much cause for concern (or more so) than the difference in raidz
topology.
Not necessarily for a home server.
On Apr 14, 2010, at 9:56 AM, Jonathan wrote:
> I just ran 'iostat -En'. This is what was reported for the drive in question
> (all other drives showed 0 errors across the board).
>
> All drives indicated the "illegal request... predictive failure analysis"
> --
> I'm on snv 111b. I attempted to get smartmontools
> working, but it doesn't seem to want to work as
> these are all SATA drives.
Have you tried using '-d sat,12' when using smartmontools?
opensolaris.org/jive/thread.jspa?messageID=473727
On Apr 14, 2010, at 8:57 AM, Yariv Graf wrote:
> From my experience dealing with > 4TB you stop writing after 80% of zpool
> utilization
YMMV. I have routinely completely filled zpools. There have been some
improvements in performance of allocations when free space gets low in
the past 6-9 month
I just ran 'iostat -En'. This is what was reported for the drive in question
(all other drives showed 0 errors across the board).
All drives indicated the "illegal request... predictive failure analysis"
--
c7t1d0
>
> On Wed, April 14, 2010 08:52, Tonmaus wrote:
> > safe to say: 2009.06 (b111) is unusable for the
> purpose, and CIFS is dead
> > in this build.
>
> That's strange; I run it every day (my home Windows
> "My Documents" folder
> and all my photos are on 2009.06).
>
>
> -bash-3.2$ cat /etc/rele
On Apr 14, 2010, at 12:05 AM, Jonathan wrote:
> I just started replacing drives in this zpool (to increase storage). I pulled
> the first drive, and replaced it with a new drive and all was well. It
> resilvered with 0 errors. This was 5 days ago. Just today I was looking
> around and noticed t
From my experience dealing with > 4TB you stop writing after 80% of
zpool utilization
10
On Apr 14, 2010, at 6:53 PM, "eXeC001er" wrote:
20% - that is a lot of space for large volumes, right?
2010/4/14 Yariv Graf
Hi
Keep below 80%
10
On Apr 14, 2010, at 6:49 PM, "eXeC001er" wrote:
Hi A
I would like to see btrfs under a BSD license so that
FreeBSD/OpenBSD/NetBSD can adopt it, too.
Olga
2010/4/14 :
>
>>btrfs could be supported on OpenSolaris, too. IMO it could even
>>complement ZFS and spawn some concurrent development between both. ZFS
>>is too high end and works very poorly with
20% - that is a lot of space for large volumes, right?
2010/4/14 Yariv Graf
> Hi
> Keep below 80%
>
> 10
>
> On Apr 14, 2010, at 6:49 PM, "eXeC001er" wrote:
>
> Hi All.
>
> How much disk space do I need to reserve to preserve ZFS performance?
>
> any official doc?
>
> Thanks.
>
Can I use getconf to test if a ZFS file system is mounted in case
insensitive mode?
Olga
Hi
Keep below 80%
10
On Apr 14, 2010, at 6:49 PM, "eXeC001er" wrote:
Hi All.
How much disk space do I need to reserve to preserve ZFS performance?
any official doc?
Thanks.
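One way to make the ~80% ceiling stick, instead of just remembering it, is to park a reservation on an otherwise unused dataset (a sketch; the names and the 2T figure - roughly 20% of a 10T pool - are only examples):

# zfs create -o reservation=2T tank/headroom
# zfs list -o name,used,available,reservation tank tank/headroom
(the reserved space is subtracted from what the other datasets can allocate,
so writes start failing before the pool itself is completely full)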
>btrfs could be supported on OpenSolaris, too. IMO it could even
>complement ZFS and spawn some concurrent development between both. ZFS
>is too high end and works very poorly with less than 2GB while btrfs
>reportedly works well with 128MB on ARM.
Both have license issues; Oracle can now re-license
On 14/04/2010 16:04, John wrote:
Hello,
we set our ZFS filesystems to casesensitivity=mixed when we created them.
However, CIFS access to these files is still case sensitive.
Here is the configuration:
# zfs get casesensitivity pool003/arch
NAME          PROPERTY         VALUE  SOURCE
Hi All.
How much disk space do I need to reserve to preserve ZFS performance?
any official doc?
Thanks.
On Tue, April 13, 2010 10:38, Bob Friesenhahn wrote:
> On Tue, 13 Apr 2010, Christian Molson wrote:
>>
>> Now I would like to add my 4 x 2TB drives, I get a warning message
>> saying that: "Pool uses 5-way raidz and new vdev uses 4-way raidz"
>> Do you think it would be safe to use the -f switch h
No, the filesystem was created with b103 or earlier.
Just to add more details: the issue only occurred for the first direct access
to the file.
From a Windows client that has never accessed the file, you can issue:
dir \\filer\arch\myfolder\myfile.TXT and you will get file not found, if the
file i
On Tue, April 13, 2010 09:48, Christian Molson wrote:
>
> Now I would like to add my 4 x 2TB drives, I get a warning message saying
> that: "Pool uses 5-way raidz and new vdev uses 4-way raidz" Do you think
> it would be safe to use the -f switch here?
Yes. 4-way on the bigger drive is *more*
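For the record, the command under discussion would be along these lines (pool and device names are placeholders):

# zpool add -f tank raidz c8t0d0 c8t1d0 c8t2d0 c8t3d0
# zpool status tank
(the new 4-way raidz vdev shows up alongside the existing 5-way ones; new writes
are then striped across all vdevs)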
btrfs could be supported on OpenSolaris, too. IMO it could even
complement ZFS and spawn some concurrent development between both. ZFS
is too high end and works very poorly with less than 2GB while btrfs
reportedly works well with 128MB on ARM.
Olga
On Wed, Apr 14, 2010 at 5:31 PM, wrote:
>
>
>
Was b130 also the version that created the dataset?
-Tonmaus
On Wed, April 14, 2010 08:52, Tonmaus wrote:
> safe to say: 2009.06 (b111) is unusable for the purpose, and CIFS is dead
> in this build.
That's strange; I run it every day (my home Windows "My Documents" folder
and all my photos are on 2009.06).
-bash-3.2$ cat /etc/release
Hello,
we set our ZFS filesystems to casesensitivity=mixed when we created them.
However, CIFS access to these files is still case sensitive.
Here is the configuration:
# zfs get casesensitivity pool003/arch
NAME          PROPERTY         VALUE  SOURCE
pool003/arch  casesensitivity  mixed
Hi,
Maybe your ZFS box used for dedup is under heavy load, and that is what gives
the timeouts in the nagios checks?
I ask because I also suffer from that effect on a system with 2
Intel Xeon 3.0GHz CPUs ;)
Bruno
On 14-4-2010 15:48, Paul Archer wrote:
> So I turned deduplication on on my staging FS (the one th
Safe to say: 2009.06 (b111) is unusable for the purpose, and CIFS is dead in
this build.
I am using b133, but I am not sure if this is the best choice. I'd like to hear
from others as well.
-Tonmaus
So I turned deduplication on on my staging FS (the one that gets mounted on
the database servers) yesterday, and since then I've been seeing the mount
hang for short periods of time off and on. (It lights nagios up like a
Christmas tree 'cause the disk checks hang and timeout.)
I haven't turne
Which build is the most stable, mainly for NAS?
I plan a NAS with ZFS + CIFS and iSCSI
Thanks
I have a similar problem that differs in a subtle way. I moved a zpool (single
disk) from one system to another. Due to my inexperience I did not import the
zpool but (doh!) 'zpool create'-ed it (I may also have used a -f somewhere in
there...)
Interestingly the script still gives me the old u
Hi all,
I recently experienced a disk failure on my home server and observed checksum
errors while resilvering the pool and on the first scrub after the resilver had
completed. Now everything seems fine but I'm posting this to get help with
calming my nerves and detecting any possible future fault
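A reasonable way to settle the nerves is to scrub again and watch the counters (a sketch; "tank" stands in for the actual pool name):

# zpool scrub tank
(re-reads every allocated block and verifies it against its checksum)
# zpool status -v tank
(after the scrub finishes: per-device error counters and any affected files)
# zpool clear tank
(reset the counters once you are satisfied they stay at zero on later scrubs)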
On 04/ 2/10 10:25 AM, Ian Collins wrote:
Is this callstack familiar to anyone? It just happened on a Solaris
10 update 8 box:
genunix: [ID 655072 kern.notice] fe8000d1b830
unix:real_mode_end+7f81 ()
genunix: [ID 655072 kern.notice] fe8000d1b910 unix:trap+5e6 ()
genunix: [ID 655072 ke
Hi,
I have installed OpenSolaris snv_134 from the iso at genunix.org.
Mon Mar 8 2010 New OpenSolaris preview, based on build 134
I created a zpool:-
    NAME      STATE     READ WRITE CKSUM
    tank      ONLINE       0     0     0
      c7t4d0  ONLINE       0     0     0
On Wed, Apr 14, 2010 at 3:01 AM, Victor Latushkin
wrote:
>
> On Apr 13, 2010, at 9:52 PM, Cyril Plisko wrote:
>
>> Hello !
>>
>> I've had a laptop that crashed a number of times during last 24 hours
>> with this stack:
>>
>> panic[cpu0]/thread=ff0007ab0c60:
>> assertion failed: ddt_object_upda
I just started replacing drives in this zpool (to increase storage). I pulled
the first drive, and replaced it with a new drive and all was well. It
resilvered with 0 errors. This was 5 days ago. Just today I was looking around
and noticed that my pool was degraded (I see now that this occurred