On 08/30/2012 12:07 PM, Anonymous wrote:
> Hi. I have a spare off the shelf consumer PC and was thinking about loading
> Solaris on it for a development box since I use Studio @work and like it
> better than gcc. I was thinking maybe it isn't so smart to use ZFS since it
> has only one drive. If ZF
On 08/30/2012 04:08 PM, Nomen Nescio wrote:
>>> Hi. I have a spare off the shelf consumer PC and was thinking about loading
>>> Solaris on it for a development box since I use Studio @work and like it
>>> better than gcc. I was thinking maybe it isn't so smart to use ZFS since it
>>> has only one d
On 08/30/2012 04:22 PM, Anonymous wrote:
>> On 08/30/2012 12:07 PM, Anonymous wrote:
>>> Hi. I have a spare off the shelf consumer PC and was thinking about loading
>>> Solaris on it for a development box since I use Studio @work and like it
>>> better than gcc. I was thinking maybe it isn't so sma
On 09/05/2012 05:06 AM, Yaverot wrote:
> "What is the smallest sized drive I may use to replace this dead drive?"
>
> That information has to be someplace because ZFS will say that drive Q is too
> small. Is there an easy way to query that information?
I use fdisk to find this out. For instance
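(a sketch; device names are hypothetical):

# fdisk -G /dev/rdsk/c0t1d0p0
# prtvtoc /dev/rdsk/c0t1d0s2

fdisk -G prints the label geometry (cylinders/heads/sectors), and prtvtoc
reports the sector size and accessible sector count, from which the exact
capacity ZFS compares against follows.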
On 09/11/2012 03:32 PM, Dan Swartzendruber wrote:
> I think you may have a point. I'm also inclined to enable prefetch caching
> per Saso's comment, since I don't have massive throughput - latency is more
> important to me.
I meant to say the exact opposite: enable prefetch caching only if your
l
On 09/11/2012 03:41 PM, Dan Swartzendruber wrote:
> LOL, I actually was unclear, not you. I understood what you were saying,
> sorry for being unclear. I have 4 disks in raid10, so my max random read
> throughput is theoretically somewhat faster than the L2ARC device, but I
> never really do that
On 09/11/2012 04:06 PM, Dan Swartzendruber wrote:
> Thanks a lot for clarifying how this works.
You're very welcome.
> Since I'm quite happy
> having an SSD in my workstation, I will need to purchase another SSD :) I'm
> wondering if it makes more sense to buy two SSDs of half the size (e.g.
>
On 09/18/2012 04:31 PM, Eugen Leitl wrote:
>
> I'm currently thinking about rolling a variant of
>
> http://www.napp-it.org/napp-it/all-in-one/index_en.html
>
> with remote backup (via snapshot and send) to 2-3
> other (HP N40L-based) zfs boxes for production in
> our organisation. The systems t
Have you tried a zpool clear and subsequent scrub to see if the error
pops up again?
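E.g. (pool name hypothetical):

# zpool clear tank
# zpool scrub tank
# zpool status -v tank    # re-check once the scrub completes

If the error was transient, the counters should stay at zero after the
scrub.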
Cheers,
--
Saso
On 09/20/2012 09:45 AM, Stephan Budach wrote:
> Hi,
>
> a couple of days ago we had an issue with one of our FC switches which led
> to a switch restart. Due to this issue the zpool vdevs had been
>
On 09/21/2012 01:34 AM, Jason Usher wrote:
> Hi,
>
> I have a ZFS filesystem with compression turned on. Does the "used" property
> show me the actual data size, or the compressed data size ? If it shows me
> the compressed size, where can I see the actual data size ?
It shows the allocated n
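Either way, you can see both sides of it with something like (dataset
name hypothetical; logicalused exists only on newer builds):

# zfs get used,compressratio pool/fs
# zfs get logicalused pool/fs    # newer builds only

Multiplying used by compressratio approximates the uncompressed size;
logicalused, where available, reports it directly.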
On 09/25/2012 09:38 PM, Jim Klimov wrote:
> 2012-09-11 16:29, Edward Ned Harvey
> (opensolarisisdeadlongliveopensolaris) wrote:
>>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>>> boun...@opensolaris.org] On Behalf Of Dan Swartzendruber
>>>
>>> My first thought was everything is
On 09/26/2012 01:14 PM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Jim Klimov
>>
>> Got me wondering: how many reads of a block from spinning rust
>> suffice for it to ult
On 09/26/2012 05:08 PM, Matt Van Mater wrote:
> I've looked on the mailing list (the evil tuning wikis are down) and
> haven't seen a reference to this seemingly simple question...
>
> I have two OCZ Vertex 4 SSDs acting as L2ARC. I have a spare Crucial SSD
> (about 1.5 years old) that isn't gett
On 09/26/2012 05:18 PM, Matt Van Mater wrote:
>>
>> If the added device is slower, you will experience a slight drop in
>> per-op performance; however, if your working set needs another SSD,
>> overall it might improve your throughput (as the cache hit ratio will
>> increase).
>>
>
> Thanks for yo
On 10/25/2012 05:59 AM, Jerry Kemp wrote:
> I have just acquired a new JBOD box that will be used as a media
> center/storage for home use only on my x86/x64 box running OpenIndiana
> b151a7 currently.
>
> Its strictly a JBOD, no hw raid options, with an eSATA port to each drive.
>
> I am looking
On 10/25/2012 04:09 PM, Bob Friesenhahn wrote:
> On Thu, 25 Oct 2012, Sašo Kiselkov wrote:
>>
>> Look for Dell's "6Gbps SAS HBA" cards. They can be had new for <$100 and
>> are essentially rebranded LSI 9200-8e cards. Always try to look for OEM
>> card
On 10/25/2012 04:11 PM, Sašo Kiselkov wrote:
> On 10/25/2012 04:09 PM, Bob Friesenhahn wrote:
>> On Thu, 25 Oct 2012, Sašo Kiselkov wrote:
>>>
>>> Look for Dell's "6Gbps SAS HBA" cards. They can be had new for <$100 and
>>> are essentially r
On 10/25/2012 04:28 PM, Patrick Hahn wrote:
> On Thu, Oct 25, 2012 at 10:13 AM, Sašo Kiselkov wrote:
>
>> On 10/25/2012 04:11 PM, Sašo Kiselkov wrote:
>>> On 10/25/2012 04:09 PM, Bob Friesenhahn wrote:
>>>> On Thu, 25 Oct 2012, Sašo Kiselkov wrote:
On 10/25/2012 05:40 PM, Bob Friesenhahn wrote:
> On Thu, 25 Oct 2012, Sašo Kiselkov wrote:
>
>> On 10/25/2012 04:09 PM, Bob Friesenhahn wrote:
>>> On Thu, 25 Oct 2012, Sašo Kiselkov wrote:
>>>>
>>>> Look for Dell's "6Gbps SAS HBA" cards.
On 11/07/2012 12:39 PM, Tiernan OToole wrote:
> Morning all...
>
> I have a Dedicated server in a data center in Germany, and it has 2 3TB
> drives, but only software RAID. I have got them to install VMWare ESXi and
> so far everything is going ok... I have the 2 drives as standard data
> stores..
On 11/07/2012 01:16 PM, Eugen Leitl wrote:
> I'm very interested, as I'm currently working on an all-in-one with
> ESXi (using N40L for prototype and zfs send target, and a Supermicro
> ESXi box for production with guests, all booted from USB internally
> and zfs snapshot/send source).
Well, seein
We've got a SC847E26-RJBOD1. It takes a bit of getting used to that you
have to wire it yourself (plus you need to buy a pair of internal
SFF-8087 cables to connect the back and front backplanes - incredible
that SuperMicro doesn't provide those out of the box), but other than
that, never had a problem wit
On 11/14/2012 11:14 AM, Michel Jansens wrote:
> Hi,
>
> I've ordered a new server with:
> - 4x600GB Toshiba 10K SAS2 Disks
> - 2x100GB OCZ DENEVA 2R SYNC eMLC SATA (no expander so I hope no
> SAS/SATA problems). Specs:
> http://www.oczenterprise.com/ssd-products/deneva-2-r-sata-6g-2.5-emlc.html
>
On 01/07/2013 09:32 PM, Tim Fletcher wrote:
> On 07/01/13 14:01, Andrzej Sochon wrote:
>> Hello *Sašo*!
>>
>> I found you here:
>> http://mail.opensolaris.org/pipermail/zfs-discuss/2012-May/051546.html
>>
>> “How about reflashing LSI firmware to the card? I read on Dell's spec
>>
>> sheets that the
On 01/08/2013 04:27 PM, mark wrote:
>> On Jul 2, 2012, at 7:57 PM, Richard Elling wrote:
>>
>> FYI, HP also sells an 8-port IT-style HBA (SC-08Ge), but it is hard to
>> locate
>> with their configurators. There might be a more modern equivalent cleverly
>> hidden somewhere difficult to find.
>>
On 01/21/2013 02:28 PM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
>> From: Richard Elling [mailto:richard.ell...@gmail.com]
>>
>> I disagree that ZFS is developmentally challenged.
>
> As an IT consultant, 8 years ago before I heard of ZFS, it was always easy
> to sell Ontap,
On 01/22/2013 03:56 AM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
>> From: Sašo Kiselkov [mailto:skiselkov...@gmail.com]
>>
>> as far as incompatibility among products, I've yet to come
>> across it
>
> I was talking about ... install solar
On 01/22/2013 12:30 PM, Darren J Moffat wrote:
> On 01/21/13 17:03, Sašo Kiselkov wrote:
>> Again, what significant features did they add besides encryption? I'm
>> not saying they didn't, I'm just not aware of that many.
>
> Just a few examples:
>
> Sol
On 01/22/2013 02:20 PM, Michel Jansens wrote:
>
> Maybe 'shadow migration' ? (eg: zfs create -o shadow=nfs://server/dir
> pool/newfs)
Hm, interesting, so it works as a sort of replication system, except
that the data needs to be read-only and you can start accessing it on
the target before the i
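For completeness, the workflow would be something like (paths
hypothetical; if I recall correctly, shadowstat(1M) is the companion
tool in Solaris 11 for watching migration progress):

# zfs create -o shadow=nfs://server/dir pool/newfs
# shadowstat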
On 01/22/2013 02:39 PM, Darren J Moffat wrote:
>
> On 01/22/13 13:29, Darren J Moffat wrote:
>> Since I'm replying here are a few others that have been introduced in
>> Solaris 11 or 11.1.
>
> and another one I can't believe I missed since I was one of the people
> that helped design it and I did
On 01/22/2013 04:32 PM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
>> From: Darren J Moffat [mailto:darr...@opensolaris.org]
>>
>> Support for SCSI UNMAP - both issuing it and honoring it when it is the
>> backing store of an iSCSI target.
>
> When I search for scsi unmap, I c
On 01/22/2013 05:00 PM, casper@oracle.com wrote:
>> Some vendors call this (and things like it) "Thin Provisioning", I'd say
>> it is more "accurate communication between 'disk' and filesystem" about
>> in use blocks.
>
> In some cases, users of disks are charged by bytes in use; when not usi
On 01/22/2013 05:34 PM, Darren J Moffat wrote:
>
>
> On 01/22/13 16:02, Sašo Kiselkov wrote:
>> On 01/22/2013 05:00 PM, casper@oracle.com wrote:
>>>> Some vendors call this (and things like it) "Thin Provisioning", I'd say
>>>> it is more
On 01/22/2013 10:45 PM, Jim Klimov wrote:
> On 2013-01-22 14:29, Darren J Moffat wrote:
>> Preallocated ZVOLs - for swap/dump.
>
> Or is it also supported to disable COW for such datasets, so that
> the preallocated swap/dump zvols might remain contiguous on the
> faster tracks of the drive (i.e.
On 01/22/2013 11:22 PM, Jim Klimov wrote:
> On 2013-01-22 23:03, Sašo Kiselkov wrote:
>> On 01/22/2013 10:45 PM, Jim Klimov wrote:
>>> On 2013-01-22 14:29, Darren J Moffat wrote:
>>>> Preallocated ZVOLs - for swap/dump.
>>>
>>> Or is it also sup
On 01/29/2013 02:59 PM, Robert Milkowski wrote:
>>> It also has a lot of performance improvements and general bug fixes in
>>> the Solaris 11.1 release.
>>
>> Performance improvements such as?
>
>
> Dedup'ed ARC for one.
> 0 block automatically "dedup'ed" in-memory.
> Improvements to ZIL perfo
On 01/29/2013 03:08 PM, Robert Milkowski wrote:
>> From: Richard Elling
>> Sent: 21 January 2013 03:51
>> VAAI has 4 features, 3 of which have been in illumos for a long time. The
>> remaining feature (SCSI UNMAP) was done by Nexenta and exists in their
>> NexentaStor product, but the CEO made
On 01/31/2013 11:16 PM, Albert Shih wrote:
> Hi all,
>
> I'm not sure if the problem is with FreeBSD or ZFS or both so I cross-post
> (I know it's bad).
>
> Well I have a server running FreeBSD 9.0 with (don't count / on different
> disks) a zfs pool with 36 disks.
>
> The performance is very very g
On 02/05/2013 05:04 PM, Sašo Kiselkov wrote:
> On 01/31/2013 11:16 PM, Albert Shih wrote:
>> Hi all,
>>
>> I'm not sure if the problem is with FreeBSD or ZFS or both so I cross-post
>> (I know it's bad).
>>
>> Well I have a server running FreeB
On 02/11/2013 04:53 PM, Borja Marcos wrote:
>
> Hello,
>
> I'm updating Devilator, the performance data collector for Orca and FreeBSD
> to include ZFS monitoring. So far I am graphing the ARC and L2ARC size, L2ARC
> writes and reads, and several hit/misses data pairs.
>
> Any suggestions to i
On 02/10/2013 01:01 PM, Koopmann, Jan-Peter wrote:
> Why should it?
>
> I believe currently only Nexenta but correct me if I am wrong
The code was mainlined a while ago, see:
https://github.com/illumos/illumos-gate/blob/master/usr/src/uts/common/io/comstar/lu/stmf_sbd/sbd.c#L3702-L3730
http
On 02/13/2013 04:30 PM, Kiley, Heather L (IS) wrote:
> I am trying to replace a failed disk on my zfs system.
> I replaced the disk and while the physical drive status is now OK, my logical
> drive is still failed.
> When I do a zpool status, the new disk comes up as unavailable:
> spa
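The usual sequence once the new physical disk is in place is something
like (pool and device names hypothetical):

# zpool replace tank c0t3d0
# zpool status tank    # watch the resilver

If the new disk isn't visible at the old device path, a reconfigure
(devfsadm) may be needed first.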
On 02/15/2013 03:39 PM, Tyler Walter wrote:
> As someone who has zero insider information and feels that there isn't
> much push at oracle to develop or release new zfs features, I have to
> assume it's not coming. The only way I see it becoming a reality is if
> someone in the illumos community de
On 02/16/2013 06:44 PM, Tim Cook wrote:
> We've got Oracle employees on the mailing list, that while helpful, in no
> way have the authority to speak for company policy. They've made that
> clear on numerous occasions. And that doesn't change the fact that we
> literally have heard NOTHING from O
On 02/16/2013 09:49 PM, John D Groenveld wrote:
> Boot with kernel debugger so you can see the panic.
Sadly, though, without access to the source code, all he can do at that
point is log a support ticket with Oracle (assuming he has paid his
support fees) and hope it will get picked up by somebody
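(For reference, getting the debugger in place is just booting with -k:
on x86 add -k to the kernel$ line in GRUB, or from a running system

# reboot -- -k

so that, with kmdb loaded, a panic drops to the debugger prompt instead
of straight into a dump - a sketch, details vary by release.)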
On 02/16/2013 10:47 PM, James C. McPherson wrote:
> On 17/02/13 06:54 AM, Sašo Kiselkov wrote:
>> On 02/16/2013 09:49 PM, John D Groenveld wrote:
>>> Boot with kernel debugger so you can see the panic.
>>
>> Sadly, though, without access to the source code, all he can do
On 02/17/2013 06:40 AM, Ian Collins wrote:
> Toby Thain wrote:
>> Signed up, thanks.
>>
>> The ZFS list has been very high value and I thank everyone whose wisdom
>> I have enjoyed, especially people like you Sašo, Mr Elling, Mr
>> Friesenhahn, Mr Harvey, the distinguished Sun and Oracle engineers
On 02/21/2013 12:27 AM, Peter Wood wrote:
> Will adding another vdev hurt the performance?
In general, the answer is: no. ZFS will try to balance writes to
top-level vdevs in a fashion that assures even data distribution. If
your data is equally likely to be hit in all places, then you will not
in
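A quick way to watch the write balancing after adding a vdev (pool name
hypothetical):

# zpool iostat -v tank 5

The per-vdev write columns will show new writes favouring the emptier
top-level vdev until allocations even out.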
On 02/21/2013 04:02 PM, Markus Grundmann wrote:
> On 02/21/2013 03:34 PM, Jan Owoc wrote:
>> Does this do what you want? (zpool destroy is already undo-able) Jan
>
> Jan, that's not what I want.
> I want to set a property that enables/disables all modifications with zpool
> commands (e.g. "zfs destroy
On 02/26/2013 09:33 AM, Tiernan OToole wrote:
> As a follow up question: Data Deduplication: The machine, to start, will
> have about 5GB RAM. I read somewhere that 20TB storage would require about
> 8GB RAM, depending on block size...
The typical wisdom is that 1TB of dedup'ed data = 1GB of RAM.
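If you want a real number instead of the rule of thumb, zdb can simulate
dedup on the existing data (can take a long time on a big pool; pool
name hypothetical):

# zdb -S tank

Take the total block count it reports and figure very roughly 200-320
bytes of core per DDT entry - a commonly quoted ballpark, not a
guarantee.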
On 02/26/2013 03:51 PM, Gary Driggs wrote:
> On Feb 26, 2013, at 12:44 AM, "Sašo Kiselkov" wrote:
>
> I'd also recommend that you go and subscribe to z...@lists.illumos.org, since
> this list is going to get shut down by Oracle next month.
>
> Whose descrip
On 02/26/2013 05:57 PM, Eugen Leitl wrote:
> On Tue, Feb 26, 2013 at 06:51:08AM -0800, Gary Driggs wrote:
>> On Feb 26, 2013, at 12:44 AM, "Sašo Kiselkov" wrote:
>>
>> I'd also recommend that you go and subscribe to z...@lists.illumos.org, since
>
> I ca
On 02/27/2013 12:32 PM, Ahmed Kamal wrote:
> How is the quality of the ZFS Linux port today? Is it comparable to Illumos
> or at least FreeBSD? Can I trust production data to it?
Can't speak from personal experience, but a colleague of mine has been
running PPA builds on Ubuntu and has had, well, less t
On 01/07/2011 10:26 AM, Darren J Moffat wrote:
> On 06/01/2011 23:07, David Magda wrote:
>> On Jan 6, 2011, at 15:57, Nicolas Williams wrote:
>>
>>> Fletcher is faster than SHA-256, so I think that must be what you're
>>> asking about: "can Fletcher+Verification be faster than
>>> Sha256+NoVerifica
On 01/07/2011 01:15 PM, Darren J Moffat wrote:
> On 07/01/2011 11:56, Sašo Kiselkov wrote:
>> On 01/07/2011 10:26 AM, Darren J Moffat wrote:
>>> On 06/01/2011 23:07, David Magda wrote:
>>>> On Jan 6, 2011, at 15:57, Nicolas Williams wrote:
>>>>
>>
On 04/08/2011 05:20 PM, Mark Sandrock wrote:
>
> On Apr 8, 2011, at 7:50 AM, Evaldas Auryla wrote:
>
>> On 04/ 8/11 01:14 PM, Ian Collins wrote:
You have built-in storage failover with an AR cluster;
and they do NFS, CIFS, iSCSI, HTTP and WebDav
out of the box.
And you h
On 04/08/2011 06:59 PM, Darren J Moffat wrote:
> On 08/04/2011 17:47, Sašo Kiselkov wrote:
>> In short, I think the X4540 was an elegant and powerful system that
>> definitely had its market, especially in my area of work (digital video
>> processing - heavy on latency, thr
On 04/08/2011 07:22 PM, J.P. King wrote:
>
>> No, I haven't tried a S7000, but I've tried other kinds of network
>> storage and from a design perspective, for my applications, it doesn't
>> even make a single bit of sense. I'm talking about high-volume real-time
>> video streaming, where you strea
On 04/08/2011 07:45 PM, Sašo Kiselkov wrote:
> On 04/08/2011 07:22 PM, J.P. King wrote:
>>
>>> No, I haven't tried a S7000, but I've tried other kinds of network
>>> storage and from a design perspective, for my applications, it doesn't
>>> even
On 04/09/2011 01:41 PM, Edward Ned Harvey wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Julian King
>>
>> Actually I think our figures more or less agree. 12 disks = 7 mbits
>> 48 disks = 4x7mbits
>
> I know that sounds like terri
Hi all,
I'd like to ask whether there is a way to monitor disk seeks. I have an
application where many concurrent readers (>50) sequentially read a
large dataset (>10T) at a fairly low speed (8-10 Mbit/s). I can monitor
read/write ops using iostat, but that doesn't tell me how contiguous the
data
On 05/19/2011 03:35 PM, Tomas Ögren wrote:
> On 19 May, 2011 - Sašo Kiselkov sent me these 0,6K bytes:
>
>> Hi all,
>>
>> I'd like to ask whether there is a way to monitor disk seeks. I have an
>> application where many concurrent readers (>50) sequentially read a
>> large dataset (>10T) at a fai
On 05/19/2011 07:47 PM, Richard Elling wrote:
> On May 19, 2011, at 5:35 AM, Sašo Kiselkov wrote:
>
>> Hi all,
>>
>> I'd like to ask whether there is a way to monitor disk seeks. I have an
>> application where many concurrent readers (>50) sequentially read
On 05/24/2011 03:08 PM, a.sm...@ukgrid.net wrote:
> Hi,
>
> see the seeksize script on this URL:
>
> http://prefetch.net/articles/solaris.dtracetopten.html
>
> Not used it but looks neat!
>
> cheers Andy.
I already did and it does the job just fine. Thank you for your kind
suggestion.
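For the archives, the core of it fits in a short script (untested sketch
in the spirit of seeksize.d; distances are in 512-byte blocks):

io:::start
/last[args[0]->b_edev] != 0/
{
        this->off = args[0]->b_blkno;
        this->prev = last[args[0]->b_edev];
        @["seek distance (blocks)"] = quantize(
            this->off > this->prev ? this->off - this->prev
                                   : this->prev - this->off);
}

io:::start
{
        last[args[0]->b_edev] = args[0]->b_blkno +
            args[0]->b_bcount / 512;
}

Run it with dtrace -s; a histogram clustered at zero means
mostly-contiguous reads.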
BR,
-
Hi All,
I'd like to ask about whether there is a method to enforce a certain txg
commit frequency on ZFS. I'm doing a large amount of video streaming
from a storage pool while also slowly, continuously writing a constant
volume of data to it (using a normal file descriptor, *not* in O_SYNC).
When r
On 06/24/2011 02:29 PM, Sašo Kiselkov wrote:
> Hi All,
>
> I'd like to ask about whether there is a method to enforce a certain txg
> commit frequency on ZFS. I'm doing a large amount of video streaming
> from a storage pool while also slowly continuously writing a consta
On 06/26/2011 06:17 PM, Richard Elling wrote:
>
> On Jun 24, 2011, at 5:29 AM, Sašo Kiselkov wrote:
>
>> Hi All,
>>
>> I'd like to ask about whether there is a method to enforce a certain txg
>> commit frequency on ZFS. I'm doing a large amount of v
On 06/29/2011 02:33 PM, Sašo Kiselkov wrote:
>> Also there is a buffer-size limit, like this (384MB):
>> set zfs:zfs_write_limit_override = 0x18000000
>>
>> or on command-line like this:
>> # echo zfs_write_limit_override/W0t402653184 | mdb -kw
>
> Currently
On 06/27/2011 11:59 AM, Jim Klimov wrote:
>
>> I'd like to ask about whether there is a method to enforce a
>> certain txg
>> commit frequency on ZFS.
>
> Well, there is a timer frequency based on TXG age (i.e. 5 sec
> by default now), in /etc/system like this:
>
> set zfs:zfs_txg_synctime = 5
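(Assuming the symbol exists in your build, the same knob can also be
poked on a live system with the usual mdb pattern - a sketch, verify
the variable name on your kernel first:

# echo zfs_txg_synctime/W0t1 | mdb -kw

which would force a 1-second sync interval.)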
On 06/30/2011 01:10 PM, Jim Klimov wrote:
> 2011-06-30 11:47, Sašo Kiselkov wrote:
>> On 06/30/2011 02:49 AM, Jim Klimov wrote:
>>> 2011-06-30 2:21, Sašo Kiselkov wrote:
>>>> On 06/29/2011 02:33 PM, Sašo Kiselkov wrote:
>>>>>> Also there is a
On 06/30/2011 01:33 PM, Jim Klimov wrote:
> 2011-06-30 15:22, Sašo Kiselkov wrote:
>> I tried increasing this
>> value to 2000 or 3000, but without an effect - perhaps I need to set it
>> at pool mount time or in /etc/system. Could somebody with more
On 06/30/2011 11:56 PM, Sašo Kiselkov wrote:
> On 06/30/2011 01:33 PM, Jim Klimov wrote:
>> 2011-06-30 15:22, Sašo Kiselkov wrote:
>>> I tried increasing this
>>> value to 2000 or 3000, but without an effect - perhaps I need to set it
>>> at p