Hi,
I have a pc with a MARVELL AOC-SAT2-MV8 controller and a pool made up of six
disks in a raid-z configuration with a hot spare.
-bash-3.2$ /sbin/zpool status
pool: nas
state: ONLINE
scrub: scrub in progress for 9h4m, 81.59% done, 2h2m to go
config:
NAME        STATE     READ WRITE CKSUM
Cindy:
I believe I may have been mistaken. When I recreated the zpools, you are
correct that you receive different numbers from "zpool list" and "zfs list" for
the sizes. I must have typed one command and then the other when creating the
different pools.
Thanks for the assist. Sheepish grin.
David
I created a raidz zpool and shares, and now the OS is very slow. I timed it and
I can get about eight seconds of use before I get ten seconds of a frozen
screen. I can be doing anything or barely anything (moving the mouse an inch
from side to side repeatedly). This makes the machine unusable. If
David,
Maybe you can use the iosnoop script from the DTrace Toolkit:
http://www.solarisinternals.com/wiki/index.php/DTraceToolkit#Scripts
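For example, a first pass could be (a sketch; check iosnoop -h for the exact
flags in your copy of the toolkit):
# ./iosnoop -eo
which adds the device name and the per-I/O delta time to each event, so a stall
should show up as a gap in events followed by very large delta times.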
..Remco
David Stewart wrote:
I created a raidz zpool and shares, and now the OS is very slow. I timed it and I can get about eight seconds of use before I get ten s
Maurilio Longo wrote:
Hi,
I have a pc with a MARVELL AOC-SAT2-MV8 controller and a pool made up of
six disks in a raid-z configuration with a hot spare.
...
Now, the problem is that issuing an
iostat -Cmnx 10
with this or any other time interval, I've sometimes seen a complete stall of disk
I/O due to a d
Carson,
the strange thing is that this is happening on several disks (can it be that
they are all failing?)
What is the controller bug you're talking about? I'm running snv_114 on this
pc, so it is fairly recent.
Best regards.
Maurilio.
Maurilio Longo wrote:
the strange thing is that this is happening on several disks (can it be that
they are all failing?)
Possible, but less likely. I'd suggest running some disk I/O tests, looking at
the drive error counters before/after.
What is the controller bug you're talking about? I'm run
> Possible, but less likely. I'd suggest running some
> disk I/O tests, looking at
> the drive error counters before/after.
>
These disks are only a few months old and are scrubbed weekly; no errors so far.
I did try to use smartmontools, but it cannot report SMART logs nor start SMART
tests,
Maurilio Longo wrote:
I did try to use smartmontools, but it cannot report SMART logs nor start
SMART tests, so I don't know how to look at their internal state.
Really? That's odd...
You could also have a firmware bug on your disks. You might try lowering
the number of tagged commands per d
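One way to experiment with that (a sketch; zfs_vdev_max_pending is the
ZFS-level queue-depth tunable on these builds, and 1 is a deliberately extreme
test value):
* in /etc/system, takes effect after a reboot
set zfs:zfs_vdev_max_pending = 1
The mdb write shown later in the thread makes the same change on the fly.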
Maurilio Longo wrote:
Carson,
the strange thing is that this is happening on several disks (can it be that
they are all failing?)
What is the controller bug you're talking about? I'm running snv_114 on this
pc, so it is fairly recent.
Best regards.
Maurilio.
See 'iostat -En' output.
Stuart Anderson wrote:
I am wondering if the following idea makes any sense as a way to get
ZFS to cache compressed data in DRAM?
In particular, given a 2-way zvol mirror of highly compressible data
on persistent storage devices, what would go wrong if I dynamically
added a ramdisk as a 3rd m
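Mechanically, the experiment might look like this (a sketch with hypothetical
names; note a ramdisk is volatile, so the third side of the mirror would have
to be re-attached and resilvered after every reboot):
# ramdiskadm -a zcache 4g
# zpool attach tank c0t0d0 /dev/ramdisk/zcache
zpool attach deepens the existing mirror by one side and resilvers the ramdisk
from the persistent copies.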
Osvald Ivarsson wrote:
On Thu, Oct 1, 2009 at 7:40 PM, Victor Latushkin wrote:
On 01.10.09 17:54, Osvald Ivarsson wrote:
I'm running OpenSolaris build snv_101b. I have 3 SATA disks connected to
my motherboard. The raid, a raidz, which is called "rescamp", has worked
fine until a power f
Max Holm wrote:
Hi,
We are seeing more long delays in zpool import, say, 4~5 or even
25~30 minutes, especially when backup jobs are running in the FC SAN
where the LUNs reside (no iSCSI LUNs yet). On the same node, for LUNs of the same array,
some pools take a few seconds, but minutes for some
Chris Ridd wrote:
On 1 Oct 2009, at 19:34, Andrew Gabriel wrote:
Pick a file which isn't in a snapshot (either because it's been
created since the most recent snapshot, or because it's been
rewritten since the most recent snapshot so it's no longer sharing
blocks with the snapshot version).
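A quick way to check a candidate (paths and snapshot name hypothetical):
# cmp /tank/fs/somefile /tank/fs/.zfs/snapshot/latest/somefile
If cmp reports a difference, the file has been rewritten since the snapshot; if
the snapshot copy doesn't exist at all, the file was created after it.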
Osvald Ivarsson wrote:
On Fri, Oct 2, 2009 at 2:36 PM, Victor Latushkin wrote:
Osvald Ivarsson wrote:
On Thu, Oct 1, 2009 at 7:40 PM, Victor Latushkin wrote:
On 01.10.09 17:54, Osvald Ivarsson wrote:
I'm running OpenSolaris build snv_101b. I have 3 SATA disks connected to
my motherboard. Th
Milek,
this is it
# iostat -En
c1t0d0 Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Vendor: ATA Product: ST3808110AS Revision: D Serial No:
Size: 80.03GB <80026361856 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 91 Predictive F
Carson,
they're Seagate ST31000340AS drives with firmware release CC1H, which from a quick
googling should have no firmware bugs.
Anyway, setting NCQ depth to 1
# echo zfs_vdev_max_pending/W0t1 | mdb -kw
did not solve the problem :(
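(For what it's worth, the value can be read back to confirm the write took, e.g.
# echo zfs_vdev_max_pending/D | mdb -k
which should print 1.)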
Maurilio.
Errata,
they're ST31000333AS and not 340AS
Maurilio.
Data security. I migrated my organization from Linux to Solaris, driven away
from Linux by the shortfalls of fsck on TB-sized file systems, and towards
Solaris by the features of ZFS.
At the time I tried to dig up information concerning tradeoffs associated with
Fletcher2 vs. 4 vs. SHA256 an
Apologies that the preceding post appears out of context. I expected it to
"indent" when I pushed the reply button on myxiplx's Oct 1, 2009 1:47 post. It
was in response to his question. I will try to remember to provide links
internal to my messages.
On 02 October, 2009 - Ray Clark sent me these 4,4K bytes:
> Data security. I migrated my organization from Linux to Solaris,
> driven away from Linux by the shortfalls of fsck on TB-sized file
> systems, and towards Solaris by the features of ZFS.
[...]
> Before taking rather disruptive actions
Replying to Cindys' Oct 1, 2009 3:34 PM post:
Thank you. The second part was my attempt to guess my way out of this. If
the fundamental structure of the pool (that which was created before I set the
checksum=sha256 property) is using fletcher2, perhaps as I use the pool all of
this structure
Replying to relling's October 1, 2009 3:34 post:
Richard, regarding "when a pool is created, there is only metadata which uses
fletcher4". Was this true in U4, or is this a new change of default, with U4
using fletcher2? Similarly, did the uberblock use sha256 in U4? I am running
U4.
--Ray
On Thu, Oct 1, 2009 at 7:40 PM, Victor Latushkin wrote:
> On 01.10.09 17:54, Osvald Ivarsson wrote:
>>
>> I'm running OpenSolaris build snv_101b. I have 3 SATA disks connected to
>> my motherboard. The raid, a raidz, which is called "rescamp", has worked
>> fine until a power failure yester
On Fri, Oct 2, 2009 at 2:36 PM, Victor Latushkin wrote:
> Osvald Ivarsson wrote:
>>
>> On Thu, Oct 1, 2009 at 7:40 PM, Victor Latushkin wrote:
>>>
>>> On 01.10.09 17:54, Osvald Ivarsson wrote:
I'm running OpenSolaris build snv_101b. I have 3 SATA disks connected to
my motherboar
On Fri, Oct 2, 2009 at 2:51 PM, Victor Latushkin wrote:
> Osvald Ivarsson wrote:
>>
>> On Fri, Oct 2, 2009 at 2:36 PM, Victor Latushkin wrote:
>>>
>>> Osvald Ivarsson wrote:
On Thu, Oct 1, 2009 at 7:40 PM, Victor Latushkin wrote:
>
> On 01.10.09 17:54, Osvald Ivarsson w
> > It seems like the appropriate solution would be to have a tool that
> > allows removing a file from one or more snapshots at the same time as
> > removing the source ...
>
> That would make them not really snapshots. And such a tool would have
> to "fix" clones too.
While I concur tha
Hi,
Is there a way or script that helps to find out what files have changed by
comparing two snapshots?
Thanks,
Simon
Interesting answer, thanks :)
I'd like to dig a little deeper if you don't mind, just to further my own
understanding (which is usually rudimentary compared to a lot of the guys on
here). My belief is that ZFS stores two copies of the metadata for any block,
so corrupt metadata really shouldn'
On Oct 2, 2009, at 5:05 AM, Robert Milkowski wrote:
Stuart Anderson wrote:
I am wondering if the following idea makes any sense as a way to
get ZFS to cache compressed data in DRAM?
In particular, given a 2-way zvol mirror of highly compressible
data on persistent storage devices, what wo
Simon Gao wrote:
Hi,
Is there a way or script that helps to find out what files have changed by
comparing two snapshots?
http://blogs.sun.com/chrisg/entry/zfs_versions_of_a_file
is something along those lines, but since the snapshots are visible
under .zfs/snapshot// as filesystems you coul
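For example, a dry-run rsync between two snapshot directories lists every file
added, removed, or changed between them without touching anything (pool and
snapshot names hypothetical):
# cd /tank/.zfs/snapshot
# rsync -avn --delete snap1/ snap2/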
I have an HP DL380G4 w/ 3GB of RAM and a slow MSA15 (SATA discs to a
single u320 interface). I was using this with 10u7 as an SMB-over-ZFS
file server for a few clients with mild needs. I never benched it, as
these unattended workstations just wrote a slow, steady stream of data and had
no issues.
I now need to
My pool was the default, with checksum=256. The default has two copies of all
metadata (as I understand it), and one copy of user data. It was a raidz2 with
eight 750GB drives, yielding just over 4TB of usable space.
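(The arithmetic, for anyone checking: raidz2 spends two drives on parity, so
eight 750GB drives leave 6 x 750GB = 4.5TB raw, which lands just over 4TB once
reported in binary units.)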
I am not happy with the situation, but I recognize that I am 2x better off
For the archives...
On Oct 2, 2009, at 12:41 AM, Maurilio Longo wrote:
Hi,
I have a pc with a MARVELL AOC-SAT2-MV8 controller and a pool made
up of six disks in a raid-z configuration with a hot spare.
-bash-3.2$ /sbin/zpool status
pool: nas
state: ONLINE
scrub: scrub in progress for 9h4m, 81.5
webcl...@rochester.rr.com said:
> To verify data, I cannot depend on existing tools since diff is not large
> file aware. My best idea at this point is to calculate and compare MD5 sums
> of every file and spot check other properties as best I can.
Ray,
I recommend that you use rsync's "-c" to
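A sketch of that kind of verification pass (paths hypothetical):
# rsync -rcn --itemize-changes /pool_a/data/ /pool_b/data/
Here -c compares files by checksum rather than size and mtime, and -n makes the
run report-only, so anything whose contents differ is listed without being
modified.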
Does the same thing apply for a "failing" drive? I have a drive that
has not failed, but by all indications it's about to. Can I do the
same thing here?
-dan
Jeff Bonwick wrote:
Yep, you got it.
Jeff
On Fri, Jun 19, 2009 at 04:15:41PM -0700, Simon Breden wrote:
Hi,
I have a ZFS st
Suppose I have a storagepool: /storagepool
And I have snapshots on it. Then I can access the snaps under
/storagepool/.zfs/snapshot
But is there any way to enable this within all the subdirs? For example,
cd /storagepool/users/eharvey/some/foo/dir
cd .zf
Yes, you can use the zpool replace process with any kind of drive:
failed, failing, or even healthy.
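For example (pool and device names hypothetical):
# zpool replace tank c1t3d0 c1t8d0
The pool resilvers onto the new device, and the old one stays attached as part
of a temporary "replacing" vdev until the resilver completes.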
cs
On 10/02/09 12:15, Dan Transue wrote:
Does the same thing apply for a "failing" drive? I have a drive that
has not failed, but by all indications it's about to. Can I do the
same thing
> zfs will use as much memory as is "necessary" but how is "necessary" calculated?
using arc_summary.pl from http://www.cuddletech.com/blog/pivot/entry.php?id=979
my tiny system shows:
Current Size: 4206 MB (arcsize)
Target Size (Adaptive): 4207 MB (c)
Mi
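If the adaptive target needs a hard cap rather than being left to grow, the
usual knob is zfs_arc_max (a sketch; the value is in bytes, and 0x80000000 =
2GB is just an example):
* in /etc/system
set zfs:zfs_arc_max = 0x80000000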
Stuart Anderson wrote:
On Oct 2, 2009, at 5:05 AM, Robert Milkowski wrote:
Stuart Anderson wrote:
I am wondering if the following idea makes any sense as a way to get
ZFS to cache compressed data in DRAM?
In particular, given a 2-way zvol mirror of highly compressible data
on persistent st
Ray,
The checksums are set on the file systems, not the pool.
If a new checksum is set and *you* rewrite the data, then the rewritten
data will contain the new checksum. If your pool has the space for you
to duplicate the user data and the new checksum is set, then the duplicated
data will have the
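A minimal sketch of that duplication, with hypothetical names:
# zfs set checksum=sha256 tank/data
# cp -rp /tank/data/projects /tank/data/projects.new
Everything written after the property change, the copies included, carries the
new checksum; the original blocks keep the old one until they are freed.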
On Oct 2, 2009, at 7:46 AM, Ray Clark wrote:
Replying to relling's October 1, 2009 3:34 post:
Richard, regarding "when a pool is created, there is only metadata
which uses fletcher4". Was this true in U4, or is this a new change
of default, with U4 using fletcher2? Similarly, did the uber
> "re" == Richard Elling writes:
> "r" == Ross writes:
re> The answer to this question must be known before the
re> effectiveness of a checksum can be evaluated.
...well...we can use math to know that a checksum is effective. What
you are really suggesting we evaluate ``empiri
Hi Miles, good to hear from you again.
On Oct 2, 2009, at 1:20 PM, Miles Nordin wrote:
"re" == Richard Elling writes:
"r" == Ross writes:
re> The answer to this question must be known before the
re> effectiveness of a checksum can be evaluated.
...well...we can use math to know that
Replying to hakanson's Oct 2, 2009 2:01 post:
Thanks. I suppose it is true that I am not even trying to compare the
peripheral stuff, and the simple presence of a file with matching data covers
some of them.
Using it for moving data, one encounters a longer list: Sparse files, ACL
handling,
Rudolf Potucek wrote:
It seems like the appropriate solution would be to have a tool that
allows removing a file from one or more snapshots at the same time as
removing the source ...
That would make them not really snapshots. And such a tool would have
to "fi
Replying to Cindys' Oct 2, 2009 2:59 post: Thanks for staying with me.
Re: "The checksums are set on the file systems, not the pool.":
But previous responses seem to indicate that I can set them for files stored in
the filesystem that appears to be the pool, at the pool level, before I create
any new ones. One
Re: relling's Oct 2, 2009 3:26 Post:
(1) Is this list everything?
(2) Is this the same for U4?
(3) If I change the zpool checksum property on creation as you indicated in
your Oct 1, 12:51 post (evidently very recent versions only), does this change
the checksums used for this list? Why would n
Re: Miles Nordin Oct 2, 2009 4:20:
Re: "Anyway, I'm glad the problem is both fixed..."
I want to know HOW it can be fixed. If they fixed it, this will invalidate
every pool that has not been changed from the default (probably almost all of
them!). This can't be! So what WAS done? In the int
On Oct 2, 2009, at 3:05 PM, Ray Clark wrote:
Re: relling's Oct 2, 2009 3:26 Post:
(1) Is this list everything?
AFAIK
(2) Is this the same for U4?
Yes. This hasn't changed in a very long time.
(3) If I change the zpool checksum property on creation as you
indicated in your Oct 1, 12:51
Re: relling's Oct 2 5:06 Post:
Re: analogy to ECC memory...
I appreciate the support, but the ECC memory analogy does not hold water. ECC
memory is designed to correct for multiple independent events, such as
electrical noise, bits flipped due to alpha particles from the DRAM package, or
cos
> "re" == Richard Elling writes:
re> By your logic, SECDED ECC for memory is broken because it only
re> corrects
ECC is not a checksum.
Go ahead, get out your dictionary, enter severe-pedantry-mode, but it
is relevantly different. In data transmission scenarios, for example,
FEC's
Let me try to refocus:
Given that I have a U4 system with a zpool created with Fletcher2:
What blocks in the system are protected by Fletcher2 (or even Fletcher4,
although that does not worry me so much)?
Given that I only have 1.6TB of data in a 4TB pool, what can I do to change
those blocks to
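With that much free space, one route is to rewrite each dataset through
send/receive (a sketch with hypothetical names; a plain send without -R or -p
doesn't carry the checksum property, so the received copy inherits it from its
new parent):
# zfs create tank/new
# zfs set checksum=sha256 tank/new
# zfs snapshot tank/data@move
# zfs send tank/data@move | zfs recv tank/new/data
After verifying the copy, the original can be destroyed and the new dataset
renamed into place.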
> NO. Snapshotting is sacred
LOL!
Ok, ok, I admit that snapshotting the whole ZFS root filesystem (yes, we have
ZFS root in production, oops) instead of creating individual snapshots for
*each* individual ZFS is against the code of good sysadmin-ing. I bow to the
developer gods and will only
On Fri, Oct 2, 2009 at 1:45 PM, Rob Logan wrote:
>> zfs will use as much memory as is "necessary" but how is "necessary"
>> calculated?
>
> using arc_summary.pl from
> http://www.cuddletech.com/blog/pivot/entry.php?id=979
> my tiny system shows:
> Current Size: 4206 MB (arcsize
On Oct 2, 2009, at 3:44 PM, Ray Clark wrote:
Let me try to refocus:
Given that I have a U4 system with a zpool created with Fletcher2:
What blocks in the system are protected by Fletcher2 (or even
Fletcher4, although that does not worry me so much)?
Given that I only have 1.6TB of data in a
On Oct 2, 2009, at 3:36 PM, Miles Nordin wrote:
"re" == Richard Elling writes:
re> By your logic, SECDED ECC for memory is broken because it only
re> corrects
ECC is not a checksum.
SHA-256 is not a checksum, either, but that isn't the point. The
concern is that corruption can be
On Oct 2, 2009, at 11:45 AM, Rob Logan wrote:
> zfs will use as much memory as is "necessary" but how is "necessary" calculated?
using arc_summary.pl from http://www.cuddletech.com/blog/pivot/entry.php?id=979
my tiny system shows:
Current Size: 4206 MB (arcsize)
Ta