[the dog jumped on the keyboard and wiped out my first reply, second attempt...]
On Apr 27, 2011, at 9:26 PM, Edward Ned Harvey wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Neil Perrin
>>
>> No, that's not true. The DDT is just
Hi guys, I've been struggling with this for days. I have a pool
that's full of 3-way mirrors backed by iSCSI targets. I botched the
machine (iSCSI target) that handles 1/3 of the mirrors, and now I'm
trying to detach the devices, which are UNAVAIL and nowhere to be
found in /dev anymore :)
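A sketch of the usual way out, with a hypothetical pool name 'tank' and a
made-up guid: when the device path no longer resolves, zpool status lists
the missing vdev by its numeric guid, and that guid can be passed to
zpool detach in place of the path:

# zpool status -v tank
# zpool detach tank 9203442553752001234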
> From: Tomas Ögren [mailto:st...@acc.umu.se]
>
> zdb -bb pool
Oy - this is scary - Thank you by the way for that command - I've been
gathering statistics across a handful of systems now ...
What does it mean, and what should you do, if you run that command and it
starts spewing messages like this...
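For reference, a minimal invocation (the pool name 'tank' is hypothetical);
on a healthy pool, zdb -bb traverses every block pointer and, if all space
accounts for itself, ends with a per-type block table whose header looks
roughly like this:

# zdb -bb tank
...
Blocks  LSIZE   PSIZE   ASIZE     avg    comp   %Total  Type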
On Thu, Apr 28, 2011 at 4:06 PM, Erik Trimble wrote:
> Which means, that while I can get a list of blocks which are deduped, it
> may not be possible to generate a list of files from that list of
> blocks.
Is it possible to determine which datasets the blocks are referenced from?
Since I have so
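One laborious approach, sketched here with hypothetical pool and dataset
names: zdb -DDDD dumps each DDT entry with its DVA, and zdb -ddddd on a
dataset dumps each object's block pointers, so the two listings can in
principle be cross-referenced by DVA:

# zdb -DDDD tank > ddt.txt
# zdb -ddddd tank/somefs > objects.txt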
On Thu, 2011-04-28 at 15:50 -0700, Brandon High wrote:
> On Thu, Apr 28, 2011 at 3:48 PM, Ian Collins wrote:
> > Dedup is at the block, not file level.
>
> Files are usually composed of blocks.
>
> -B
>
I think the point was, it may not be easy to determine which file a
given block is part of.
On Thu, Apr 28, 2011 at 3:50 PM, Edward Ned Harvey
wrote:
> When a block is scheduled to be written, the system computes its checksum and
> looks for a matching entry in the DDT in the ARC/L2ARC. In the event of an ARC/L2ARC
... which, if it's on L2ARC, is another read too. While most people
will be using a fa
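As a rough way to see how much DDT is in play, zdb -D prints a per-pool
summary (pool name 'tank' is hypothetical); its output includes entry
counts plus the per-entry sizes on disk and in core, in lines along the
lines of "DDT-sha256-zap-unique: N entries, size X on disk, Y in core":

# zdb -D tank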
On Thu, Apr 28, 2011 at 3:48 PM, Ian Collins wrote:
> Dedup is at the block, not file level.
Files are usually composed of blocks.
-B
--
Brandon High : bh...@freaks.com
> From: Brandon High [mailto:bh...@freaks.com]
> Sent: Thursday, April 28, 2011 5:33 PM
>
> On Wed, Apr 27, 2011 at 9:26 PM, Edward Ned Harvey
> wrote:
> > Correct me if I'm wrong, but the dedup sha256 checksum happens in addition
> > to (not instead of) the fletcher2 integrity checksum. So after bootup,
On 04/29/11 07:44 AM, Brandon High wrote:
Is there an easy way to find out what datasets have dedup'd data in
them? Even better would be to discover which files in a particular
dataset are dedup'd.
Dedup is at the block, not file level.
--
Ian.
On Thu, Apr 28, 2011 at 3:05 PM, Erik Trimble wrote:
> A careful reading of the man page seems to imply that there's no way to
> change the dedup checksum algorithm from sha256, as the dedup property
> ignores the checksum property, and there's no provided way to explicitly
> set a checksum algorithm
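For what it's worth, the values the dedup property does accept, per
zfs(1M), are on, off, verify, and sha256[,verify]; a sketch with a
hypothetical dataset name:

# zfs set dedup=on tank/fs              # dedup with the default sha256
# zfs set dedup=sha256,verify tank/fs   # same algorithm, plus byte-for-byte verify on match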
On Thu, 2011-04-28 at 14:33 -0700, Brandon High wrote:
> On Wed, Apr 27, 2011 at 9:26 PM, Edward Ned Harvey
> wrote:
> > Correct me if I'm wrong, but the dedup sha256 checksum happens in addition
> > to (not instead of) the fletcher2 integrity checksum. So after bootup,
>
> My understanding is that enabling dedup forces sha256.
On Wed, Apr 27, 2011 at 9:26 PM, Edward Ned Harvey
wrote:
> Correct me if I'm wrong, but the dedup sha256 checksum happens in addition
> to (not instead of) the fletcher2 integrity checksum. So after bootup,
My understanding is that enabling dedup forces sha256.
"The default checksum used for d
On Thu, 2011-04-28 at 13:59 -0600, Neil Perrin wrote:
> On 4/28/11 12:45 PM, Edward Ned Harvey wrote:
> >
> > In any event, thank you both for your input. Can anyone answer these
> > authoritatively? (Neil?) I'll send you a pizza. ;-)
> >
>
> - I wouldn't consider myself an authority on the d
On 4/28/11 12:45 PM, Edward Ned Harvey wrote:
From: Erik Trimble [mailto:erik.trim...@oracle.com]
OK, I just re-looked at a couple of things, and here's what I /think/ are
the correct numbers.
I just checked, and the current size of this structure is 0x178, or 376
bytes.
Each ARC entry, which points to either an L2ARC item
Is there an easy way to find out what datasets have dedup'd data in
them? Even better would be to discover which files in a particular
dataset are dedup'd.
I ran
# zdb -
which gave output like:
index 1055c9f21af63 refcnt 2 single DVA[0]=<0:1e274ec3000:2ac00:STD:1>
[L0 deduplicated block] sha
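Output in that per-entry format (index, refcnt, DVA, checksum) is what the
DDT dump produces at its highest verbosity; a sketch with a hypothetical
pool name:

# zdb -DDDD tank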
> From: Erik Trimble [mailto:erik.trim...@oracle.com]
>
> OK, I just re-looked at a couple of things, and here's what I /think/ are
> the correct numbers.
>
> I just checked, and the current size of this structure is 0x178, or 376
> bytes.
>
> Each ARC entry, which points to either an L2ARC item
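To put the 376-byte figure in perspective, a back-of-envelope estimate,
assuming a 1 TiB pool of entirely unique data at the default 128 KiB
recordsize:

  1 TiB / 128 KiB        =  8,388,608 blocks
  8,388,608 x 376 bytes  ~  3.15 GB (about 2.9 GiB of ARC for the DDT alone)

Smaller average block sizes scale that figure up proportionally.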
Thanks Peter,
On Apr 26, 2011, at 2:16 PM, Peter Tribble wrote:
> On Mon, Apr 25, 2011 at 11:58 PM, Richard Elling
> wrote:
>> Hi ZFSers,
>> I've been working on merging the Joyent arcstat enhancements with some of
>> my own, and am now to the point where it is time to broaden the requirement
On 28.04.11 15:16, Victor Latushkin wrote:
On Apr 28, 2011, at 5:04 PM, Stephan Budach wrote:
On 28.04.11 11:51, Markus Kovero wrote:
failed: space_map_load(sm, zfs_metaslab_ops, SM_FREE, smo,
spa->spa_meta_objset) == 0, file ../zdb.c, line 571, function dump_metaslab
Is this something I should worry about?
On Apr 28, 2011, at 5:04 PM, Stephan Budach wrote:
> Am 28.04.11 11:51, schrieb Markus Kovero:
>>> failed: space_map_load(sm, zfs_metaslab_ops, SM_FREE, smo,
>>> spa->spa_meta_objset) == 0, file ../zdb.c, line 571, function dump_metaslab
>>> Is this something I should worry about?
>>> uname -a
>>
On 28.04.11 11:51, Markus Kovero wrote:
failed: space_map_load(sm, zfs_metaslab_ops, SM_FREE, smo,
spa->spa_meta_objset) == 0, file ../zdb.c, line 571, function dump_metaslab
Is this something I should worry about?
uname -a
SunOS E55000 5.11 oi_148 i86pc i386 i86pc Solaris
I thought we were talking about Solaris 11 Express,
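Worth noting when interpreting such assertions: zdb is a userland tool that
reads the pool with no locking against the running kernel, so on an
imported, active pool its view can be inconsistent and a failure like this
can be transient. Re-running it, or pointing it at an exported pool, helps
separate that from real damage (pool name hypothetical):

# zpool export tank
# zdb -e tank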
Sync was disabled on the main pool and then left to inherit to everything else.
The reason for disabling this in the first place was to fix bad NFS write
performance (even with a ZIL on an X25-E SSD it was under 1MB/s).
I've also tried setting the logbias to throughput and latency, but they both
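For completeness, the knobs mentioned, shown with hypothetical names:
sync=standard restores normal synchronous-write semantics, and
logbias=latency keeps small synchronous writes going through the slog
rather than straight to the main pool:

# zfs set sync=standard tank
# zfs set logbias=latency tank/nfsshare
# zfs get sync,logbias tank/nfsshare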
> failed: space_map_load(sm, zfs_metaslab_ops, SM_FREE, smo,
> spa->spa_meta_objset) == 0, file ../zdb.c, line 571, function dump_metaslab
> Is this something I should worry about?
> uname -a
> SunOS E55000 5.11 oi_148 i86pc i386 i86pc Solaris
I thought we were talking about Solaris 11 Express,