On Mar 16, 2013, at 7:01 PM, Andrew Werchowiecki wrote:
> It's a home setup, the performance penalty from splitting the cache devices
> is non-existent, and that workaround sounds like some pretty crazy amount of
> overhead where I could instead just have a mirrored slog.
>
> I'm less conc
On Mar 15, 2013, at 6:09 PM, Marion Hakanson wrote:
> Greetings,
>
> Has anyone out there built a 1-petabyte pool?
Yes, I've done quite a few.
> I've been asked to look
> into this, and was told "low performance" is fine, workload is likely
> to be write-once, read-occasionally, archive stora
On Feb 26, 2013, at 12:33 AM, Tiernan OToole wrote:
> Thanks all! I will check out FreeNAS and see what it can do... I will also
> check my RAID Card and see if it can work with JBOD... fingers crossed... The
> machine has a couple of internal SATA ports (think there are 2, could be 4) so I
> was
On Feb 21, 2013, at 8:02 AM, John D Groenveld wrote:
> # zfs list -t vol
> NAME          USED   AVAIL  REFER  MOUNTPOINT
> rpool/dump    4.00G  99.9G  4.00G  -
> rpool/foo128  66.2M   100G    16K  -
> rpool/swap    4.00G  99.9G  4.00G  -
>
> # zfs destroy rpool/foo128
> cannot destroy 'rpool/fo
On Feb 20, 2013, at 3:27 PM, Tim Cook wrote:
> On Wed, Feb 20, 2013 at 5:09 PM, Richard Elling
> wrote:
> On Feb 20, 2013, at 2:49 PM, Markus Grundmann wrote:
>
>> Hi!
>>
> My name is Markus and I'm living in Germany. I'm new to this list and I have a
On Feb 20, 2013, at 2:49 PM, Markus Grundmann wrote:
> Hi!
>
> My name is Markus and I'm living in Germany. I'm new to this list and I have a
> simple question
> related to zfs. My favorite operating system is FreeBSD and I'm very happy to
> use zfs on it.
>
> It's possible to enhance the pro
On Feb 16, 2013, at 10:16 PM, Bryan Horstmann-Allen wrote:
> +--
> | On 2013-02-17 18:40:47, Ian Collins wrote:
> |
>> One of its main advantages is it has been platform agnostic. We see
>> Solaris, Illumos, BSD and m
On Feb 6, 2013, at 5:17 PM, Gregg Wonderly wrote:
> This is one of the greatest annoyances of ZFS. I don't really understand
> how a zvol's space cannot be accurately enumerated from top to bottom of
> the tree in 'df' output etc. Why does a "zvol" divorce the space used from
> the root of
On Jan 29, 2013, at 6:08 AM, Robert Milkowski wrote:
>> From: Richard Elling
>> Sent: 21 January 2013 03:51
>
>> VAAI has 4 features, 3 of which have been in illumos for a long time. The
>> remaining
>> feature (SCSI UNMAP) was done by Nexenta and exists in their
On Jan 20, 2013, at 4:51 PM, Tim Cook wrote:
> On Sun, Jan 20, 2013 at 6:19 PM, Richard Elling
> wrote:
> On Jan 20, 2013, at 8:16 AM, Edward Harvey wrote:
> > But, by talking about it, we're just smoking pipe dreams. Cuz we all know
> > zfs is developmentally ch
On Jan 20, 2013, at 8:16 AM, Edward Harvey wrote:
> But, by talking about it, we're just smoking pipe dreams. Cuz we all know
> zfs is developmentally challenged now. But one can dream...
I disagree that ZFS is developmentally challenged. There is more development
now than ever in every way: #
bloom filters are a great fit for this :-)
-- richard
On Jan 19, 2013, at 5:59 PM, Nico Williams wrote:
> I've wanted a system where dedup applies only to blocks being written
> that have a good chance of being dups of others.
>
> I think one way to do this would be to keep a scalable Bloo
On Jan 19, 2013, at 7:16 AM, Edward Ned Harvey (opensolarisisdeadlongliveopensolaris) wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Bob Friesenhahn
>>
>> If almost all of the I/Os are 4K, maybe your ZVOLs should use a
>> volbl
On Jan 18, 2013, at 4:40 AM, Jim Klimov wrote:
> On 2013-01-18 06:35, Thomas Nau wrote:
If almost all of the I/Os are 4K, maybe your ZVOLs should use a
volblocksize of 4K? This seems like the most obvious improvement.
>>>
>>> 4k might be a little small. 8k will have less metadata ove
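For reference: volblocksize is fixed at zvol creation time, so trying a
different block size means creating a new zvol and migrating the data onto it.
A minimal sketch, assuming a pool named tank and a hypothetical zvol name:

  # zfs create -V 100G -o volblocksize=4k tank/iscsi-vol-4k
  # zfs get volblocksize tank/iscsi-vol-4k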
On Jan 17, 2013, at 9:35 PM, Thomas Nau wrote:
> Thanks for all the answers (more inline)
>
> On 01/18/2013 02:42 AM, Richard Elling wrote:
>> On Jan 17, 2013, at 7:04 AM, Bob Friesenhahn <bfrie...@simple.dallas.tx.us> wrote:
>>
>>
On Jan 17, 2013, at 8:35 AM, Jim Klimov wrote:
> On 2013-01-17 16:04, Bob Friesenhahn wrote:
>> If almost all of the I/Os are 4K, maybe your ZVOLs should use a
>> volblocksize of 4K? This seems like the most obvious improvement.
>
>> Matching the volume block size to what the clients are actua
>> default volblocksize of 8k to a bunch of Citrix Xen Servers through iSCSI.
>> The pool is made of SAS2 disks (11 x 3-way mirrored) plus mirrored STEC RAM
>> ZIL
>> SSDs and 128G of main memory
>>
>> The iSCSI access pattern (1 hour daytime average) looks like
On Tue, Jan 8, 2013 at 11:40 AM, Sašo Kiselkov wrote:
> On 01/08/2013 04:27 PM, mark wrote:
> >> On Jul 2, 2012, at 7:57 PM, Richard Elling wrote:
> >>
> >> FYI, HP also sells an 8-port IT-style HBA (SC-08Ge), but it is hard to
> >> locate
> >
On Jan 7, 2013, at 1:20 PM, Marion Hakanson wrote:
> Greetings,
>
> We're trying out a new JBOD here. Multipath (mpxio) is not working,
> and we could use some feedback and/or troubleshooting advice.
Sometimes the mpxio detection doesn't work properly. You can try to
whitelist them,
https://w
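A rough sketch of such a whitelist entry in scsi_vhci.conf (the vendor and
product strings below are placeholders; they must match the device's SCSI
inquiry data, with the vendor ID padded to 8 characters, and a reboot or
driver reload is needed afterwards):

  # tail -2 /kernel/drv/scsi_vhci.conf
  scsi-vhci-failover-override =
      "ACMECORPJBOD-MODEL-X", "f_sym";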
On Jan 5, 2013, at 9:42 AM, Russ Poyner wrote:
> I'm configuring a box with 24x 3Tb consumer SATA drives, and wondering about
> the best way to configure the pool. The customer wants capacity on the cheap,
> and I want something I can service without sweating too much about data loss.
> Due to
On Jan 3, 2013, at 8:38 PM, Geoff Nordli wrote:
> Thanks Richard, Happy New Year.
>
> On 13-01-03 09:45 AM, Richard Elling wrote:
>> On Jan 2, 2013, at 8:45 PM, Geoff Nordli wrote:
>>
>>> I am looking at the performance numbers for the Oracle VDI admin guide.
On Jan 4, 2013, at 11:12 AM, Robert Milkowski wrote:
>>
>> Illumos is not so good at dealing with huge memory systems but perhaps
>> it is also more stable as well.
>
> Well, I guess that it depends on your environment, but generally I would
> expect S11 to be more stable if only because the sh
On Jan 3, 2013, at 12:33 PM, Eugen Leitl wrote:
> On Sun, Dec 30, 2012 at 06:02:40PM +0100, Eugen Leitl wrote:
>>
>> Happy $holidays,
>>
>> I have a pool of 8x ST31000340AS on an LSI 8-port adapter as
>
> Just a little update on the home NAS project.
>
> I've set the pool sync to disabled, a
On Jan 2, 2013, at 8:45 PM, Geoff Nordli wrote:
> I am looking at the performance numbers for the Oracle VDI admin guide.
>
> http://docs.oracle.com/html/E26214_02/performance-storage.html
>
> From my calculations for 200 desktops running Windows 7 knowledge user (15
> iops) with a 30-70 read/
On Jan 2, 2013, at 2:03 AM, Eugen Leitl wrote:
> On Sun, Dec 30, 2012 at 10:40:39AM -0800, Richard Elling wrote:
>> On Dec 30, 2012, at 9:02 AM, Eugen Leitl wrote:
>
>>> The system is a MSI E350DM-E33 with 8 GByte PC1333 DDR3
>>> memory, no ECC. All the systems
On Dec 30, 2012, at 9:02 AM, Eugen Leitl wrote:
>
> Happy $holidays,
>
> I have a pool of 8x ST31000340AS on an LSI 8-port adapter as
> a raidz3 (no compression nor dedup) with reasonable bonnie++
> 1.03 values, e.g. 145 MByte/s Seq-Write @ 48% CPU and 291 MByte/s
> Seq-Read @ 53% CPU. It sc
On Dec 6, 2012, at 5:30 AM, Matt Van Mater wrote:
>
>
> I'm unclear on the best way to warm data... do you mean to simply `dd
> if=/volumes/myvol/data of=/dev/null`? I have always been under the
> impression that ARC/L2ARC has rate limiting how much data can be added to the
> cache per inte
bug fix below...
On Dec 5, 2012, at 1:10 PM, Richard Elling wrote:
> On Dec 5, 2012, at 7:46 AM, Matt Van Mater wrote:
>
>> I don't have anything significant to add to this conversation, but wanted to
>> chime in that I also find the concept of a QOS-like capability
On Dec 5, 2012, at 7:46 AM, Matt Van Mater wrote:
> I don't have anything significant to add to this conversation, but wanted to
> chime in that I also find the concept of a QOS-like capability very appealing
> and that Jim's recent emails resonate with me. You're not alone! I believe
> ther
On Dec 5, 2012, at 5:41 AM, Jim Klimov wrote:
> On 2012-12-05 04:11, Richard Elling wrote:
>> On Nov 29, 2012, at 1:56 AM, Jim Klimov <jimkli...@cos.ru> wrote:
>>
>>> I've heard a claim that ZFS relies too much on RAM caching, but
On Nov 29, 2012, at 1:56 AM, Jim Klimov wrote:
> I've heard a claim that ZFS relies too much on RAM caching, but
> implements no sort of priorities (indeed, I've seen no knobs to
> tune those) - so that if the storage box receives many different
> types of IO requests with different "administrati
On Dec 1, 2012, at 6:54 PM, "Nikola M." wrote:
> On 12/ 2/12 03:24 AM, Nikola M. wrote:
>> It is using Solaris Zones and throttling their disk usage on that level,
>> so you separate workload processes on separate zones.
>> Or even put KVM machines under the zones (Joyent and OI support
>> Joyen
On Nov 23, 2012, at 11:56 AM, Fabian Keil wrote:
>
> Just in case your GNU/Linux experiments don't work out, you could
> also try ZFS on Geli on FreeBSD which works reasonably well.
>
For illumos-based distros or Solaris 11, using ZFS with lofi has been
well discussed for many years. Prior to t
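A rough sketch of the lofi approach (file path and pool name are hypothetical;
check lofiadm(1M) for the exact key-handling options on your release):

  # mkfile 10g /export/backing.img
  # lofiadm -a /export/backing.img -c aes-256-cbc
  /dev/lofi/1
  # zpool create cryptpool /dev/lofi/1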
On Nov 13, 2012, at 12:08 PM, Peter Tripp wrote:
> Hi folks,
>
> I'm in the market for a couple of JBODs. Up until now I've been relatively
> lucky with finding hardware that plays very nicely with ZFS. All my gear
> currently in production uses LSI SAS controllers (3801e, 9200-16e, 9211-8i)
On Oct 19, 2012, at 4:59 PM, Edward Ned Harvey (opensolarisisdeadlongliveopensolaris) wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Richard Elling
>>
>>> At some point, people will bitterly regret s
On Oct 22, 2012, at 6:52 AM, Chris Nagele wrote:
>> If after it decreases in size it stays there it might be similar to:
>>
>> 7111576 arc shrinks in the absence of memory pressure
>
> After it dropped, it did build back up. Today is the first day that
> these servers are working under r
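One low-overhead way to watch for that is to sample the ARC kstats over time,
e.g. the current size and target size every 60 seconds:

  # kstat -p zfs:0:arcstats:size zfs:0:arcstats:c 60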
On Oct 19, 2012, at 12:16 AM, James C. McPherson wrote:
> On 19/10/12 04:50 PM, Jim Klimov wrote:
>> Hello all,
>>
>> I have one more thought - or a question - about the current
>> strangeness of rpool import: is it supported, or does it work,
>> to have rpools on multipathed devices?
>>
>> If
On Oct 19, 2012, at 6:37 AM, Eugen Leitl wrote:
> Hi,
>
> I would like to give a short talk at my organisation in order
> to sell them on zfs in general, and on zfs-all-in-one and
> zfs as remote backup (zfs send).
Googling will find a few shorter presos. I have full-day presos on
slideshare
ht
On Oct 19, 2012, at 1:04 AM, Michel Jansens wrote:
>> On 10/18/12 21:09, Michel Jansens wrote:
>>> Hi,
>>>
>>> I've been using a Solaris 10 update 9 machine for some time to replicate
>>> filesystems from different servers through zfs send|ssh zfs receive.
>>> This was done to store disaster r
On Oct 12, 2012, at 5:50 AM, Edward Ned Harvey (opensolarisisdeadlongliveopensolaris) wrote:
>> From: Richard Elling [mailto:richard.ell...@gmail.com]
>>
>> Pedantically, a pool can be made in a file, so it works the same...
>
> Pool can only be made in a file, by
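For anyone who hasn't tried it, a file-backed pool is a one-liner (useful for
testing only; the path and pool name here are just examples):

  # mkfile 1g /var/tmp/zfile
  # zpool create filepool /var/tmp/zfile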
On Oct 11, 2012, at 2:58 PM, Phillip Wagstrom wrote:
>
> On Oct 11, 2012, at 4:47 PM, andy thomas wrote:
>
>> According to a Sun document called something like 'ZFS best practice' I read
>> some time ago, best practice was to use the entire disk for ZFS and not to
>> partition or slice it in
On Oct 11, 2012, at 6:03 AM, Edward Ned Harvey (opensolarisisdeadlongliveopensolaris) wrote:
>> From: Richard Elling [mailto:richard.ell...@gmail.com]
>>
>> Read it again he asked, "On that note, is there a minimal user-mode zfs thing
>> that would allow
Hi John,
comment below...
On Oct 11, 2012, at 3:10 AM, Carsten John wrote:
> Hello everybody,
>
> I just wanted to share my experience with a (partially) broken SSD that was
> in use in a ZIL mirror.
>
> We experienced a dramatic performance problem with one of our zpools, serving
> home dir
On Oct 10, 2012, at 9:29 AM, Edward Ned Harvey (opensolarisisdeadlongliveopensolaris) wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Richard Elling
>>
>>>> If the recipient syste
On Oct 7, 2012, at 3:50 PM, Johannes Totz wrote:
> On 05/10/2012 15:01, Edward Ned Harvey
> (opensolarisisdeadlongliveopensolaris) wrote:
>>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>>> boun...@opensolaris.org] On Behalf Of Tiernan OToole
>>>
>>> I am in the process of pl
On Oct 5, 2012, at 1:57 PM, Albert Shih wrote:
> Hi all,
>
> I'm actually running ZFS under FreeBSD. I've a question about how many
> disks I «can» have in one pool.
>
> At this moment I'm running with one server (FreeBSD 9.0) with 4 MD1200
> (Dell) meaning 48 disks. I've configured it with 4 raid
On Oct 4, 2012, at 1:33 PM, "Schweiss, Chip" wrote:
> Again thanks for the input and clarifications.
>
> I would like to clarify the numbers I was talking about with ZiL performance
> specs I was seeing talked about on other forums. Right now I'm getting
> streaming performance of sync writ
Thanks Neil, we always appreciate your comments on ZIL implementation.
One additional comment below...
On Oct 4, 2012, at 8:31 AM, Neil Perrin wrote:
> On 10/04/12 05:30, Schweiss, Chip wrote:
>>
>> Thanks for all the input. It seems information on the performance of the
>> ZIL is sparse and
On Oct 4, 2012, at 9:07 AM, Dan Swartzendruber wrote:
> On 10/4/2012 11:48 AM, Richard Elling wrote:
>>
>> On Oct 4, 2012, at 8:35 AM, Dan Swartzendruber wrote:
>>
>>>
>>> This whole thread has been fascinating. I really wish we (OI) had the two
>
On Oct 4, 2012, at 8:58 AM, Jan Owoc wrote:
> Hi,
>
> I have a machine whose zpools are at version 28, and I would like to
> keep them at that version for portability between OSes. I understand
> that 'zpool status' asks me to upgrade, but so does 'zpool status -x'
> (the man page says it should
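To verify a pool stays at the intended on-disk version without upgrading it,
something like the following works (pool name is hypothetical):

  # zpool get version tank
  # zpool upgrade -v     (only lists what the installed software supports)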
On Oct 4, 2012, at 8:35 AM, Dan Swartzendruber wrote:
>
> This whole thread has been fascinating. I really wish we (OI) had the two
> following things that freebsd supports:
>
> 1. HAST - provides a block-level driver that mirrors a local disk to a
> network "disk" presenting the result as a
If you've been hiding under a rock, not checking your email, then you might
not have heard about the Next Big Whopper Event for ZFS Fans: ZFS Day!
The agenda is now set and the teams are preparing to descend towards San
Francisco's Moscone Center vortex for a full day of ZFS. I'd love to see y'all
On Sep 26, 2012, at 4:28 AM, Sašo Kiselkov wrote:
> On 09/26/2012 01:14 PM, Edward Ned Harvey
> (opensolarisisdeadlongliveopensolaris) wrote:
>>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>>> boun...@opensolaris.org] On Behalf Of Jim Klimov
>>>
>>> Got me wondering: how man
On Sep 26, 2012, at 10:54 AM, Edward Ned Harvey (opensolarisisdeadlongliveopensolaris) wrote:
> Here's another one.
>
> Two identical servers are sitting side by side. They could be connected to
> each other via anything (presently using crossover ethernet cable.) And
> obviously they bot
On Sep 25, 2012, at 1:46 PM, Jim Klimov wrote:
> 2012-09-24 21:08, Jason Usher wrote:
>>> Ok, thank you. The problem with this is, the
>>> compressratio only goes to two significant digits, which
>>> means if I do the math, I'm only getting an
>>> approximation. Since we may use these numbers
On Sep 25, 2012, at 1:32 PM, Jim Klimov wrote:
> 2012-09-26 0:21, Richard Elling wrote:
>>> Does this mean that importing a pool with iSCSI zvols
>>> on a fresh host (LiveCD instance on the same box, or
>>> via failover of shared storage to a different host)
>>
On Sep 25, 2012, at 11:17 AM, Jason Usher wrote:
>
> Ok - but from a performance point of view, I am only using
> ram/cpu resources for the deduping of just the individual
> filesystems I enabled dedupe on, right ? I hope that
> turning on dedupe for just one filesystem did not incur
> ram/cpu c
On Sep 25, 2012, at 12:30 PM, Jim Klimov wrote:
> Hello all,
>
> With original "old" ZFS iSCSI implementation there was
> a "shareiscsi" property for the zvols to be shared out,
> and I believe all configuration pertinent to the iSCSI
> server was stored in the pool options (I may be wrong,
> b
On Sep 24, 2012, at 10:08 AM, Jason Usher wrote:
> Oh, and one other thing ...
>
>
> --- On Fri, 9/21/12, Jason Usher wrote:
>
>>> It shows the allocated number of bytes used by the
>>> filesystem, i.e.
>>> after compression. To get the uncompressed size,
>>> multiply
>>> "used" by
>>> "compre
Hi Bogdan,
On Sep 21, 2012, at 4:00 AM, Bogdan Ćulibrk wrote:
> Greetings,
>
> I'm trying to achieve selective output of "zfs list" command for specific
> user to show only delegated sets. Anyone knows how to achieve this?
There are several ways, but no builtin way, today. Can you provide a u
On Sep 20, 2012, at 10:05 PM, Stefan Ring wrote:
> On Fri, Sep 21, 2012 at 6:31 AM, andy thomas wrote:
>> I have a ZFS filesystem and create weekly snapshots over a period of 5 weeks
>> called week01, week02, week03, week04 and week05 respectively. My question
>> is: how do the snapshots relate
On Sep 18, 2012, at 7:31 AM, Eugen Leitl wrote:
>
> Can I actually have a year's worth of snapshots in
> zfs without too much performance degradation?
I've got 6 years of snapshots with no degradation :-)
In general, there is not a direct correlation between snapshot count and
performance.
--
On Sep 15, 2012, at 6:03 PM, Bob Friesenhahn wrote:
> On Sat, 15 Sep 2012, Dave Pooser wrote:
>
>> The problem: so far the send/recv appears to have copied 6.25TB of
>> 5.34TB.
>> That... doesn't look right. (Comparing zfs list -t snapshot and looking at
>> the 5.34 ref for the snapshot v
On Sep 12, 2012, at 12:44 PM, Edward Ned Harvey (opensolarisisdeadlongliveopensolaris) wrote:
> I send a replication data stream from one host to another. (and receive).
> I discovered that after receiving, I need to remove the auto-snapshot
> property on the receiving side, and set the readon
For illumos-based distributions, there are "written" and "written@" properties
that show the amount of data written to each snapshot. This helps clear up the
confusion over the way the "used" property is accounted.
https://www.illumos.org/issues/1645
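Usage is straightforward; the dataset and snapshot names below are hypothetical:

  # zfs get written tank/fs@snap1        (space written in that snapshot)
  # zfs get written@snap1 tank/fs        (space written since snap1)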
-- richard
On Aug 29, 2012, at 11:12 AM,
On Aug 24, 2012, at 6:50 AM, Sašo Kiselkov wrote:
> This is something I've been looking into in the code and my take on your
> proposed points is this:
>
> 1) This requires many and deep changes across much of ZFS's architecture
> (especially the ability to sustain tlvdev failures).
>
> 2) Most of
On Aug 13, 2012, at 8:59 PM, Scott wrote:
> On Mon, Aug 13, 2012 at 10:40:45AM -0700, Richard Elling wrote:
>>
>> On Aug 13, 2012, at 2:24 AM, Sašo Kiselkov wrote:
>>
>>> On 08/13/2012 10:45 AM, Scott wrote:
>>>> Hi Saso,
>>>>
On Aug 13, 2012, at 2:24 AM, Sašo Kiselkov wrote:
> On 08/13/2012 10:45 AM, Scott wrote:
>> Hi Saso,
>>
>> thanks for your reply.
>>
>> If all disks are the same, is the root pointer the same?
>
> No.
>
>> Also, is there a "signature" or something unique to the root block that I can
>> search
On Aug 9, 2012, at 4:11 AM, joerg.schill...@fokus.fraunhofer.de (Joerg Schilling) wrote:
> Sašo Kiselkov wrote:
>
>> On 08/09/2012 01:05 PM, Joerg Schilling wrote:
>>> Sašo Kiselkov wrote:
>>>
> To me it seems that the "open-sourced ZFS community" is not open, or
> could you
> p
On Aug 2, 2012, at 5:40 PM, Nigel W wrote:
> On Thu, Aug 2, 2012 at 3:39 PM, Richard Elling
> wrote:
>> On Aug 1, 2012, at 8:30 AM, Nigel W wrote:
>>
>>
>> Yes. +1
>>
>> The L2ARC as it is currently implemented is not terribly useful for
>> sto
On Jul 31, 2012, at 8:05 PM, opensolarisisdeadlongliveopensolaris wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Richard Elling
>>
>> I believe what you meant to say was "dedup with HDDs sux."
On Aug 1, 2012, at 12:21 AM, Suresh Kumar wrote:
> Dear ZFS-Users,
>
> I am using Solaris x86 10u10. All the devices that belong to my zpool
> are in the available state,
> but I am unable to import the zpool.
>
> #zpool import tXstpool
> cannot import 'tXstpool': one or more devices is curr
On Aug 1, 2012, at 8:30 AM, Nigel W wrote:
> On Wed, Aug 1, 2012 at 8:33 AM, Sašo Kiselkov wrote:
>> On 08/01/2012 04:14 PM, Jim Klimov wrote:
>>> chances are that
>>> some blocks of userdata might be more popular than a DDT block and
>>> would push it out of L2ARC as well...
>>
>> Which is why I
On Aug 1, 2012, at 2:41 PM, Peter Jeremy wrote:
> On 2012-Aug-01 21:00:46 +0530, Nigel W wrote:
>> I think a fantastic idea for dealing with the DDT (and all other
>> metadata for that matter) would be an option to put (a copy of)
>> metadata exclusively on a SSD.
>
> This is on my wishlist as w
On Aug 1, 2012, at 8:04 AM, Jesse Jamez wrote:
> Hello,
>
> I recently rebooted my workstation and the disk names changed causing my ZFS
> pool to be unavailable.
What OS and release?
>
> I did not make any hardware changes. My first question is the obvious one: did
> I lose my data? Can I r
On Jul 31, 2012, at 10:07 AM, Nigel W wrote:
> On Tue, Jul 31, 2012 at 9:36 AM, Ray Arachelian wrote:
>> On 07/31/2012 09:46 AM, opensolarisisdeadlongliveopensolaris wrote:
>>> Dedup: First of all, I don't recommend using dedup under any
>>> circumstance. Not that it's unstable or anything, just
On Jul 30, 2012, at 12:25 PM, Tim Cook wrote:
> On Mon, Jul 30, 2012 at 12:44 PM, Richard Elling
> wrote:
> On Jul 30, 2012, at 10:20 AM, Roy Sigurd Karlsbakk wrote:
>> - Original message -
>>> On Mon, Jul 30, 2012 at 9:38 AM, Roy Sigurd Karlsbakk
>>>
On Jul 30, 2012, at 10:20 AM, Roy Sigurd Karlsbakk wrote:
> - Original message -
>> On Mon, Jul 30, 2012 at 9:38 AM, Roy Sigurd Karlsbakk
>> wrote:
> Also keep in mind that if you have an SLOG (ZIL on a separate
> device), and then lose this SLOG (disk crash etc), you will
>
On Jul 29, 2012, at 1:53 PM, Jim Klimov wrote:
> 2012-07-30 0:40, opensolarisisdeadlongliveopensolaris wrote:
>>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>>> boun...@opensolaris.org] On Behalf Of Jim Klimov
>>>
>>>For several times now I've seen statements on this list
On Jul 29, 2012, at 7:07 AM, Jim Klimov wrote:
> Hello, list
>
> For several times now I've seen statements on this list implying
> that a dedicated ZIL/SLOG device catching sync writes for the log,
> also allows for more streamlined writes to the pool during normal
> healthy TXG syncs, than is
that you
think is 4KB might look very different coming out of ESXi. Use nfssvrtop
or one of the many dtrace one-liners for observing NFS traffic to see what is
really on the wire. And I'm very interested to know if you see 16KB reads
during the "write-only" workload.
more below...
Important question, what is the interconnect? iSCSI? FC? NFS?
-- richard
On Jul 24, 2012, at 9:44 AM, matth...@flash.shanje.com wrote:
> Working on a POC for high IO workloads, and I’m running into a bottleneck
> that I’m not sure I can solve. Testbed looks like this:
>
> SuperMicro 6026-6R
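If nfssvrtop isn't handy, a rough DTrace sketch for watching requested NFSv3
read sizes on the server (assumes the nfsv3 provider; adjust to taste):

  # dtrace -n 'nfsv3:::op-read-start { @ = quantize(args[2]->count); }'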
On Jul 22, 2012, at 10:18 PM, Yuri Vorobyev wrote:
> Hello.
>
> I'm facing a strange performance problem with a new disk shelf.
> We've been using a ZFS system with SATA disks for a while.
What OS and release?
-- richard
> It is Supermicro SC846-E16 chassis, Supermicro X8DTH-6F motherboard with 96Gb
On Jul 16, 2012, at 2:43 AM, Michael Hase wrote:
> Hello list,
>
> did some bonnie++ benchmarks for different zpool configurations
> consisting of one or two 1tb sata disks (hitachi hds721010cla332, 512
> bytes/sector, 7.2k), and got some strange results, please see
> attachments for exact numb
On Jul 11, 2012, at 1:06 PM, Bill Sommerfeld wrote:
> on a somewhat less serious note, perhaps zfs dedup should contain "chinese
> lottery" code (see http://tools.ietf.org/html/rfc3607 for one explanation)
> which asks the sysadmin to report a detected sha-256 collision to
> eprint.iacr.org or the
On Jul 11, 2012, at 10:23 AM, Sašo Kiselkov wrote:
> Hi Richard,
>
> On 07/11/2012 06:58 PM, Richard Elling wrote:
>> Thanks Sašo!
>> Comments below...
>>
>> On Jul 10, 2012, at 4:56 PM, Sašo Kiselkov wrote:
>>
>>> Hi guys,
>>>
On Jul 11, 2012, at 10:11 AM, Bob Friesenhahn wrote:
> On Wed, 11 Jul 2012, Richard Elling wrote:
>> The last studio release suitable for building OpenSolaris is available in
>> the repo.
>> See the instructions at
>> http://wiki.illumos.org/display/illumos/How+To+Bui
Thanks Sašo!
Comments below...
On Jul 10, 2012, at 4:56 PM, Sašo Kiselkov wrote:
> Hi guys,
>
> I'm contemplating implementing a new fast hash algorithm in Illumos' ZFS
> implementation to supplant the currently utilized sha256.
No need to supplant, there are 8 bits for enumerating hash algorit
To amplify what Mike says...
On Jul 10, 2012, at 5:54 AM, Mike Gerdts wrote:
> ls(1) tells you how much data is in the file - that is, how many bytes
> of data that an application will see if it reads the whole file.
> du(1) tells you how many disk blocks are used. If you look at the
> stat struc
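A sparse file makes the ls-versus-du distinction obvious; a quick sketch
(mkfile -n records the size but allocates no blocks):

  # mkfile -n 1g /tank/sparse
  # ls -l /tank/sparse     (reports the full 1 GB logical size)
  # du -k /tank/sparse     (reports only the blocks actually allocated)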
First things first, the panic is a bug. Please file one with your OS supplier.
More below...
On Jul 6, 2012, at 4:55 PM, Ian Collins wrote:
> On 07/ 7/12 11:29 AM, Brian Wilson wrote:
>> On 07/ 6/12 04:17 PM, Ian Collins wrote:
>>> On 07/ 7/12 08:34 AM, Brian Wilson wrote:
Hello,
On Jul 2, 2012, at 2:40 PM, Edmund White wrote:
> This depends upon what you want to do. I've used G6 and G7 ProLiants
> extensively in ZFS deployments (Nexenta, mostly). I'm assuming you'd be
> using an external JBOD enclosure?
When I was at Nexenta, we qualed the DL380 G7, D2600, and D2700.
The
Hi Ian,
Chapter 7 of the DTrace book has some examples of how to look at iSCSI target
and initiator behaviour.
-- richard
On Jun 28, 2012, at 10:47 PM, Ian Collins wrote:
> I'm trying to work out the cause and a remedy for a very sick iSCSI pool on a
> Solaris 11 host.
>
> The volume is exported fr
On Jun 25, 2012, at 10:55 AM, Philip Brown wrote:
> I ran into something odd today:
>
> zfs destroy -r random/filesystem
>
> is mindbogglingly slow. But it seems to me it shouldn't be.
> It's slow, because the filesystem has two snapshots on it. Presumably, it's
> busy "rolling back" the snapshot
On Jun 20, 2012, at 5:08 PM, Jim Klimov wrote:
> 2012-06-21 1:58, Richard Elling wrote:
>> On Jun 20, 2012, at 4:08 AM, Jim Klimov wrote:
>>>
>>> Also by default if you don't give the whole drive to ZFS, its cache
>>> may be disabled upon pool import
On Jun 20, 2012, at 4:08 AM, Jim Klimov wrote:
>
> Also by default if you don't give the whole drive to ZFS, its cache
> may be disabled upon pool import and you may have to reenable it
> manually (if you only actively use this disk for one or more ZFS
> pools - which play with caching nicely).
T
On Jun 15, 2012, at 7:37 AM, Hung-Sheng Tsao Ph.D. wrote:
> by the way,
> when you format, start with cylinder 1, do not use 0
There is no requirement for skipping cylinder 0 for root on Solaris, and there
never has been.
-- richard
--
ZFS and performance consulting
http://www.RichardElling.com
On Jun 14, 2012, at 1:35 PM, Robert Milkowski wrote:
>> The client is using async writes, that include commits. Sync writes do
>> not need commits.
>>
>> What happens is that the ZFS transaction group commit occurs at more-
>> or-less regular intervals, likely 5 seconds for more modern ZFS
>> sys
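The interval is the zfs_txg_timeout tunable; a sketch for inspecting it on a
live illumos/Solaris kernel and, if really necessary, pinning it in
/etc/system:

  # echo zfs_txg_timeout/D | mdb -k
  # grep zfs_txg_timeout /etc/system
  set zfs:zfs_txg_timeout = 5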
[Phil beat me to it]
Yes, the 0s are a result of integer division in DTrace/kernel.
On Jun 14, 2012, at 9:20 PM, Timothy Coalson wrote:
> Indeed they are there, shown with 1 second interval. So, it is the
> client's fault after all. I'll have to see whether it is somehow
> possible to get the s
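The usual workaround in D (or any integer-only arithmetic) is to scale before
dividing rather than after, for example:

  (this->bytes * 100) / this->total     /* gives a whole-number percentage */
  (this->bytes / this->total) * 100     /* truncates to 0 whenever bytes < total */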
Hi Tim,
On Jun 14, 2012, at 12:20 PM, Timothy Coalson wrote:
> Thanks for the script. Here is some sample output from 'sudo
> ./nfssvrtop -b 512 5' (my disks are 512B-sector emulated and the pool
> is ashift=9, some benchmarking didn't show much difference with
> ashift=12 other than giving up 8
On Jun 13, 2012, at 4:51 PM, Daniel Carosone wrote:
> On Wed, Jun 13, 2012 at 05:56:56PM -0500, Timothy Coalson wrote:
>> client: ubuntu 11.10
>> /etc/fstab entry: :/mainpool/storage /mnt/myelin nfs
>> bg,retry=5,soft,proto=tcp,intr,nfsvers=3,noatime,nodiratime,async 0
>>0
>
>