On Fri, Jun 11, 2010 at 1:26 PM, Robert Milkowski wrote:
> On 11/06/2010 09:22, sensille wrote:
>
>> Andrey Kuzmin wrote:
>>
>>
>>> On Fri, Jun 11, 2010 at 1:54 AM, Richard Elling
>>> mailto:richard.ell...@gmail.com>> wrote:
On Fri, Jun 11, 2010 at 1:54 AM, Richard Elling wrote:
> On Jun 10, 2010, at 1:24 PM, Arne Jansen wrote:
>
> > Andrey Kuzmin wrote:
> >> Well, I'm more accustomed to "sequential vs. random", but YMMV.
> >> As to 67000 512 byte writes (this sounds su
Well, I'm more accustomed to "sequential vs. random", but YMMV.
As to 67000 512-byte writes (this sounds suspiciously close to 32 MB fitting
into cache), did you have write-back enabled?
Regards,
Andrey
On Fri, Jun 11, 2010 at 12:03 AM, Arne Jansen wrote:
> Andrey Kuzmin
On Thu, Jun 10, 2010 at 11:51 PM, Arne Jansen wrote:
> Andrey Kuzmin wrote:
>
>> As to your results, it sounds almost too good to be true. As Bob has
>> pointed out, h/w design targeted hundreds of IOPS, and it was hard to believe
>> it could scale 100x. Fantastic.
>>
As to your results, it sounds almost too good to be true. As Bob has pointed
out, h/w design targeted hundreds of IOPS, and it was hard to believe it
could scale 100x. Fantastic.
Regards,
Andrey
On Thu, Jun 10, 2010 at 6:06 PM, Robert Milkowski wrote:
> On 21/10/2009 03:54, Bob Friesenhahn wrote:
Sorry, my bad. _Reading_ from /dev/null may be an issue, but not writing to
it, of course.
Regards,
Andrey
On Thu, Jun 10, 2010 at 6:46 PM, Robert Milkowski wrote:
> On 10/06/2010 15:39, Andrey Kuzmin wrote:
>
> On Thu, Jun 10, 2010 at 6:06 PM, Robert Milkowski wrote:
>
>>
On Thu, Jun 10, 2010 at 6:06 PM, Robert Milkowski wrote:
> On 21/10/2009 03:54, Bob Friesenhahn wrote:
>
>>
>> I would be interested to know how many IOPS an OS like Solaris is able to
>> push through a single device interface. The normal driver stack is likely
>> limited as to how many IOPS it
I believe the name is Compellent Technologies,
http://www.google.com/finance?q=NYSE:CML.
Regards,
Andrey
On Wed, Apr 28, 2010 at 5:54 AM, Richard Elling
wrote:
> Today, Compellent announced their zNAS addition to their unified storage
> line. zNAS uses ZFS behind the scenes.
> http://www.compe
No, until all snapshots referencing the file in question are removed.
The simplest way to understand snapshots is to consider them as
references. Any file-system object (say, a file or a block) is only
removed when its reference count drops to zero.
Regards,
Andrey
On Sat, Apr 10, 2010 at 10:20 PM, R
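The reference-counting view above can be sketched as follows. This is a minimal illustration of the idea, not actual ZFS structures; `Block`, `hold`, and `release` are hypothetical names.

```python
# Minimal sketch of the "snapshots are references" model described above.
# Block, hold, and release are illustrative names, not ZFS structures.

class Block:
    def __init__(self, data):
        self.data = data
        self.refcount = 0   # live file system + snapshots holding this block

def hold(block):
    block.refcount += 1

def release(block):
    """Drop one reference; report True only when space is actually freed."""
    block.refcount -= 1
    return block.refcount == 0

blk = Block(b"file contents")
hold(blk)                       # the live file system references the block
hold(blk)                       # a snapshot references it too
assert release(blk) is False    # 'rm' the live file: snapshot still holds it
assert release(blk) is True     # destroy the snapshot: now the space is freed
```

The point of the sketch is that `rm` only drops one reference; the data survives until the last snapshot holding it is destroyed.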
There had been a discussion of the topic on this list about a month ago,
and I'd been told that similar ideas (compressed metadata/data in
ARC/L2ARC) are on the ZFS dev agenda.
Regards,
Andrey
On Sun, Mar 28, 2010 at 2:42 AM, Stuart Anderson
wrote:
>
> On Oct 2, 2009, at 11:54 AM, Robert Milkowski w
This is purely tactical, to avoid l2arc write penalty on eviction. You seem
to have missed the very next paragraph:
3644 *  2. The L2ARC attempts to cache data from the ARC before it is evicted.
3645 *     It does this by periodically scanning buffers from the eviction-end of
3646 *     the MFU a
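The feed behavior that comment describes can be sketched roughly as below. The deque, the `l2arc_feed` name, and the scan depth are illustrative stand-ins, not the actual arc.c implementation.

```python
# Sketch of the L2ARC feed behavior quoted above: periodically scan buffers
# from the eviction end of an ARC list and copy them to the L2ARC device
# *before* they are evicted, avoiding a write penalty at eviction time.
# All names here are illustrative, not the real arc.c code.
from collections import deque

arc_mfu = deque()   # left end = eviction end (closest to being evicted)
l2arc = set()       # stand-in for buffers already on the L2ARC device

def l2arc_feed(scan_depth):
    """Copy up to scan_depth buffers nearest eviction into the L2ARC."""
    for buf in list(arc_mfu)[:scan_depth]:
        l2arc.add(buf)   # cached ahead of eviction; eviction itself stays cheap

for b in ["b1", "b2", "b3", "b4"]:
    arc_mfu.append(b)
l2arc_feed(2)
assert l2arc == {"b1", "b2"}   # the two buffers closest to eviction
```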
On Thu, Feb 25, 2010 at 12:34 AM, Andrey Kuzmin
wrote:
> On Thu, Feb 25, 2010 at 12:26 AM, Steve wrote:
>> thats not the issue here, as they are spread out in a folder structure based
>> on an integer split into hex blocks... 00/00/00/01 etc...
>>
>> but the numbe
On Thu, Feb 25, 2010 at 12:26 AM, Steve wrote:
> thats not the issue here, as they are spread out in a folder structure based
> on an integer split into hex blocks... 00/00/00/01 etc...
>
> but the number of pointers involved with all these files, and directories
> (which are files)
> must have
On Wed, Feb 24, 2010 at 11:09 PM, Bob Friesenhahn
wrote:
> On Wed, 24 Feb 2010, Steve wrote:
>>
>> The overhead I was thinking of was more in the pointer structures...
>> (bearing in mind this is a 128 bit file system), I would guess that memory
>> requirements would be HUGE for all these files...
I don't see why this couldn't be extended beyond metadata (+1 for the
idea): if zvol is compressed, ARC/L2ARC could store compressed data.
The gain is apparent: if a user has compression enabled for the volume,
he/she expects the volume's data to be compressible at a good ratio,
yielding significant reduct
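The idea of keeping compressed data in the cache can be sketched as follows. A plain dict stands in for ARC/L2ARC and zlib for the pool's compression algorithm; all names are illustrative.

```python
# Sketch of caching compressed blocks and decompressing on read, as proposed
# above for ARC/L2ARC. The dict-based cache and zlib are stand-ins only.
import zlib

cache = {}   # block id -> compressed bytes

def cache_put(blkid, raw):
    cache[blkid] = zlib.compress(raw)   # store compressed: more blocks fit

def cache_get(blkid):
    comp = cache.get(blkid)
    return zlib.decompress(comp) if comp is not None else None

data = b"A" * 4096               # highly compressible, as the post assumes
cache_put(1, data)
assert cache_get(1) == data      # transparent to the reader
assert len(cache[1]) < len(data) # but far less cache memory consumed
```

For compressible data the effective cache capacity grows by roughly the compression ratio, at the cost of a decompress on every hit.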
Try an inexpensive MLC SSD (Intel/Micron) for L2ARC. It won't help
metadata updates, but it should boost reads.
Regards,
Andrey
On Thu, Feb 18, 2010 at 11:23 PM, Chris Banal wrote:
> We have a SunFire X4500 running Solaris 10U5 which does about 5-8k nfs ops
> of which about 90% are meta data. In hin
Just an observation: the panic occurs in avl_add when called from
find_ds_by_guid, which tries to add an existing snapshot id to the avl tree
(http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/fs/zfs/dmu_send.c#find_ds_by_guid).
HTH,
Andrey
On Tue, Feb 9, 2010 at 1:37 AM, Bruno D
On Fri, Feb 5, 2010 at 10:55 PM, Bob Friesenhahn
wrote:
> On Fri, 5 Feb 2010, Miles Nordin wrote:
>>
>> ls> r...@nexenta:/volumes# hdadm write_cache off c3t5
>>
>> ls> c3t5 write_cache> disabled
>>
>> You might want to repeat his test with X25-E. If the X25-E is also
>> dropping cache flush
On Wed, Feb 3, 2010 at 6:11 PM, Ross Walker wrote:
> On Feb 3, 2010, at 9:53 AM, Henu wrote:
>
>> Okay, so first of all, it's true that send is always fast and 100%
>> reliable because it uses blocks to see differences. Good, and thanks for
>> this information. If everything else fails, I can par
On Fri, Jan 22, 2010 at 7:19 AM, Mike Gerdts wrote:
> On Thu, Jan 21, 2010 at 2:51 PM, Andrey Kuzmin
> wrote:
>> Looking at dedupe code, I noticed that on-disk DDT entries are
>> compressed less efficiently than possible: key is not compressed at
>> all (I'd expect r
On Thu, Jan 21, 2010 at 10:00 PM, Richard Elling
wrote:
> On Jan 21, 2010, at 8:04 AM, erik.ableson wrote:
>
>> Hi all,
>>
>> I'm going to be trying out some tests using b130 for dedup on a server with
>> about 1,7Tb of useable storage (14x146 in two raidz vdevs of 7 disks). What
>> I'm trying
On Fri, Jan 15, 2010 at 2:07 AM, Christopher George
wrote:
>> Why not enlighten EMC/NTAP on this then?
>
> On the basic chemistry and possible failure characteristics of Li-Ion
> batteries?
>
> I will agree, if I had system level control as in either example, one could
> definitely help mitigate s
On Thu, Jan 14, 2010 at 10:02 PM, Christopher George
wrote:
>> That's kind of an overstatement. NVRAM backed by on-board LI-Ion
>> batteries has been used in storage industry for years;
>
> Respectfully, I stand by my three points of Li-Ion batteries as they relate
> to enterprise class NVRAM: ign
On Thu, Jan 14, 2010 at 11:35 AM, Christopher George
wrote:
>> I'm not sure about others on the list, but I have a dislike of AC power
>> bricks in my racks.
>
> I definitely empathize with your position concerning AC power bricks, but
> until the perfect battery is created, and we are far from it
600? I've heard 1.5GBps reported.
On 1/5/10, Eric D. Mudama wrote:
> On Mon, Jan 4 at 16:43, Wes Felter wrote:
>>Eric D. Mudama wrote:
>>
>>>I am not convinced that a general purpose CPU, running other software
>>>in parallel, will be able to be timely and responsive enough to
>>>maximize bandwi
And how do you expect the mirrored iSCSI volume to work after
failover, with the secondary (ex-primary) unreachable?
Regards,
Andrey
On Wed, Dec 23, 2009 at 9:40 AM, Erik Trimble wrote:
> Charles Hedrick wrote:
>>
>> Is ISCSI reliable enough for this?
>>
>
> YES.
>
> The original idea is a good o
It might be helpful to contact the SSD vendor, report the issue, and
ask whether wearing out in half a year is expected behavior for this
model. Further, if you have the option to replace one (or both) SSDs
with fresh ones, that could tell for sure whether they are the root cause.
Regards,
Andrey
On Mon, Dec
On Sat, Dec 19, 2009 at 7:20 PM, Bob Friesenhahn
wrote:
> On Sat, 19 Dec 2009, Colin Raven wrote:
>>
>> There is no original, there is no copy. There is one block with reference
>> counters.
>>
>> - Fred can rm his "file" (because clearly it isn't a file, it's a filename
>> and that's all)
>> - re
On Thu, Dec 17, 2009 at 6:14 PM, Kjetil Torgrim Homme
wrote:
> Darren J Moffat writes:
>> Kjetil Torgrim Homme wrote:
>>> Andrey Kuzmin writes:
>>>
>>>> Downside you have described happens only when the same checksum is
>>>> used for data prot
The downside you have described happens only when the same checksum is
used for both data protection and duplicate detection. This implies sha256,
BTW, since fletcher-based dedupe has been dropped in recent builds.
On 12/17/09, Kjetil Torgrim Homme wrote:
> Andrey Kuzmin writes:
>> Darren J Mof
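Duplicate detection via a strong checksum, as discussed above, can be sketched like this. The dict-based table is an illustration of the lookup idea, not the on-disk ZFS DDT format.

```python
# Sketch of sha256-based duplicate detection at block level: the digest of
# each written block is looked up in a dedup table, and a hit means the
# block is shared instead of written again. The dict stands in for the DDT.
import hashlib

ddt = {}   # sha256 digest -> (block address, refcount)

def write_block(addr, data):
    key = hashlib.sha256(data).digest()
    if key in ddt:                        # duplicate: bump refcount, no new write
        stored, refs = ddt[key]
        ddt[key] = (stored, refs + 1)
        return stored
    ddt[key] = (addr, 1)                  # first copy: allocate and record it
    return addr

a = write_block(100, b"same payload")
b = write_block(200, b"same payload")
assert a == b == 100   # the second write was deduplicated to the first block
```

With a 256-bit digest a collision between distinct blocks is considered practically impossible, which is why sha256 dedup can skip byte-for-byte verification while a weak checksum cannot.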
performance troubles are due to the calculation of
> two different checksums?
>
> Thanks,
> Chris
>
> -Original Message-
> From: cyril.pli...@gmail.com [mailto:cyril.pli...@gmail.com] On Behalf
> Of Cyril Plisko
> Sent: 16 December 2009 17:09
> To: An
On Wed, Dec 16, 2009 at 8:09 PM, Cyril Plisko wrote:
>>> I've set dedup to what I believe are the least resource-intensive
>>> settings - "checksum=fletcher4" on the pool, & "dedup=on" rather than
>>
>> I believe checksum=fletcher4 is acceptable in dedup=verify mode only.
>> What you're doing is s
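Why a weak checksum like fletcher4 needs `verify` can be sketched as follows: the checksum only nominates a duplicate candidate, and a byte-for-byte comparison confirms it before the block is shared. The trivial rolling sum below is a deliberately collision-prone stand-in for fletcher4; all names are illustrative.

```python
# Sketch of dedup=verify with a weak checksum: a checksum hit is only a
# *candidate* duplicate, confirmed by comparing the actual bytes.
# weak_checksum is an illustrative stand-in for fletcher4.

def weak_checksum(data):
    return sum(data) % 65521   # deliberately collision-prone

store = {}   # checksum -> list of stored blocks with that checksum

def write_verified(data):
    key = weak_checksum(data)
    for blk in store.get(key, []):
        if blk == data:          # verify step: confirm candidate byte-by-byte
            return "deduped"
    store.setdefault(key, []).append(data)
    return "stored"

assert write_verified(b"\x01\x02") == "stored"
assert write_verified(b"\x01\x02") == "deduped"   # a true duplicate
assert write_verified(b"\x02\x01") == "stored"    # checksum collides, verify catches it
```

Without the verify step, the third write above would have been silently "deduped" onto different data, which is exactly the corruption risk behind the advice in the quoted message.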
On Wed, Dec 16, 2009 at 7:46 PM, Darren J Moffat
wrote:
> Andrey Kuzmin wrote:
>>
>> On Wed, Dec 16, 2009 at 7:25 PM, Kjetil Torgrim Homme
>> wrote:
>>>
>>> Andrey Kuzmin writes:
>>>>
>>>> Yet again, I don't see how RAID-
On Wed, Dec 16, 2009 at 6:41 PM, Chris Murray wrote:
> Hi,
>
> I run a number of virtual machines on ESXi 4, which reside in ZFS file
> systems and are accessed over NFS. I've found that if I enable dedup,
> the virtual machines immediately become unusable, hang, and whole
> datastores disappear f
On Wed, Dec 16, 2009 at 7:25 PM, Kjetil Torgrim Homme
wrote:
> Andrey Kuzmin writes:
>> Yet again, I don't see how RAID-Z reconstruction is related to the
>> subject discussed (what data should be sha256'ed when both dedupe and
>> compression are enabled, raw or c
ut for now sha256 is used for duplicate candidates
look-up only).
Regards,
Andrey
On Wed, Dec 16, 2009 at 5:18 PM, Kjetil Torgrim Homme
wrote:
> Andrey Kuzmin writes:
>
>> Kjetil Torgrim Homme wrote:
>>> for some reason I, like Steve, thought the checksum was calculated on
&
On Tue, Dec 15, 2009 at 3:06 PM, Kjetil Torgrim Homme
wrote:
> Robert Milkowski writes:
>> On 13/12/2009 20:51, Steve Radich, BitShop, Inc. wrote:
>>> Because if you can de-dup anyway why bother to compress THEN check?
>>> This SEEMS to be the behaviour - i.e. I would suspect many of the
>>> file
On 12/14/09, Cyril Plisko wrote:
> On Mon, Dec 14, 2009 at 9:32 PM, Andrey Kuzmin
> wrote:
>>
>> Right, but 'verify' seems to be 'extreme safety' and thus rather rare
>> use case.
>
> Hmm, dunno. I wouldn't set anything, but scratch file sys
On Mon, Dec 14, 2009 at 9:53 PM, wrote:
>
>>On Mon, Dec 14, 2009 at 09:30:29PM +0300, Andrey Kuzmin wrote:
>>> ZFS deduplication is block-level, so to deduplicate one needs data
>>> broken into blocks to be written. With compression enabled, you don't
>>
On Sun, Dec 13, 2009 at 11:51 PM, Steve Radich, BitShop, Inc.
wrote:
> I enabled compression on a zfs filesystem with compression=gzip9 - i.e.
> fairly slow compression - this stores backups of databases (which compress
> fairly well).
>
> The next question is: Is the CRC on the disk based on t
On Mon, Dec 14, 2009 at 4:04 AM, Jens Elkner
wrote:
> On Sat, Dec 12, 2009 at 04:23:21PM +0000, Andrey Kuzmin wrote:
>> As to whether it makes sense (as opposed to two distinct physical
>> devices), you would have read cache hits competing with log writes for
>> bandwidth.
As to whether it makes sense (as opposed to two distinct physical
devices), you would have read cache hits competing with log writes for
bandwidth. I doubt both will be pleased :-)
On 12/12/09, Robert Milkowski wrote:
> Jens Elkner wrote:
>> Hi,
>>
>> just got a quote from our campus reseller, th
On Fri, Dec 11, 2009 at 11:43 PM, Nick wrote:
> No, it is not, for a couple of reasons. First of all, rumor is that SMC is
> being discontinued in favor
> of a WBEM/CIM- based management system.
Any specific implementation meant? Are there any plans wrt OpenPegasus?
Regards,
Andrey
Second,
On Wed, Dec 9, 2009 at 10:43 PM, Bob Friesenhahn
wrote:
> On Wed, 9 Dec 2009, Bruno Sousa wrote:
>>
>> Despite the fact that i agree in general with your comments, in reality
>> it all comes to money..
>> So in this case, if i could prove that ZFS was able to find X amount of
>> duplicated data, a
There are two calls to vfs_rele in the stack trace, which may explain
why the assertion failed.
Regards,
Andrey
On Tue, Dec 8, 2009 at 10:48 PM, Joep Vesseur wrote:
> Folks,
>
> I've been seeing this for a while, but never had the urge to ask, until now.
> When I take a snapshot of my current
Regards,
Andrey
>
> I hope i was somehow clear, but i can try to explain better if needed.
>
> Thanks,
> Bruno
>
> Andrey Kuzmin wrote:
>> On Wed, Dec 9, 2009 at 2:26 PM, Bruno Sousa wrote:
>>
>>> Hi all,
>>>
>>> Is there any way to generate s
On Wed, Dec 9, 2009 at 2:26 PM, Bruno Sousa wrote:
> Hi all,
>
> Is there any way to generate some report related to the de-duplication
> feature of ZFS within a zpool/zfs pool?
> I mean, its nice to have the dedup ratio, but it think it would be also
> good to have a report where we could see wha
On Tue, Dec 8, 2009 at 9:32 PM, Richard Elling wrote:
> FYI,
> Seagate has announced a new enterprise SSD. The specs appear
> to be competitive:
> + 2.5" form factor
> + 5 year warranty
> + power loss protection
> + 0.44% annual failure rate (AFR) (2M hours MTBF, IMHO
On Tue, Dec 8, 2009 at 7:02 PM, Bob Friesenhahn
wrote:
> On Mon, 7 Dec 2009, Michael DeMan (OA) wrote:
>>
>> Args for FreeBSD + ZFS:
>>
>> - Limited budget
>> - We are familiar with managing FreeBSD.
>> - We are familiar with tuning FreeBSD.
>> - Licensing model
>>
>> Args against OpenSolaris + ZF
On Sun, Dec 6, 2009 at 8:11 PM, Anurag Agarwal wrote:
> Hi,
>
> My reading of write code of ZFS (zfs_write in zfs_vnops.c), is that all the
> writes in zfs are logged in the ZIL. And if that indeed is the case, then
IIRC, there is some upper limit (1MB?) on writes that go to ZIL, with
larger ones
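The size-based ZIL decision being recalled above can be sketched as a simple threshold check. The 1 MB cutoff and the names below are illustrative (the post itself hedges on the exact limit); larger writes are commonly handled by writing the data in place and logging only a pointer.

```python
# Sketch of a size-based ZIL policy: small synchronous writes have their
# data copied into the log record, while writes above a threshold are
# written to their final location with only a pointer logged.
# The 1 MB cutoff is the (hedged) figure from the post, not a verified value.

ZIL_COPY_LIMIT = 1 << 20   # assumed 1 MB threshold

def zil_strategy(write_size):
    if write_size <= ZIL_COPY_LIMIT:
        return "copy data into ZIL record"
    return "write data in place, log a pointer (indirect)"

assert zil_strategy(8 * 1024) == "copy data into ZIL record"
assert zil_strategy(4 << 20) == "write data in place, log a pointer (indirect)"
```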