and ecc memory. I
> thought using this kind of hardware would prevent me (mainly the ecc memory)
> from errors.
> Can the problem come from the sas/sata controller? I have an ibm m1015 (sas,
> for the first vdev) and an lsi (a cheap one, sata, for the second)
>
>
>
> Le 2
On Oct 23, 2013, at 2:01 AM, Clement BRIZARD wrote:
> The disks are in a Fractal XL case which has little rubber pads to dampen
> vibrations.
> As long as it works I will leave it. When I have some money I will build
> a proper server.
>
> Once the resilvering is done, should I do a scrub?
On Oct 22, 2013, at 11:46 PM, Clement BRIZARD wrote:
> I cleared the "degraded" disk. We will see what happens in 131 hours.
Yes, clearing is the proper procedure.
The predicted time to complete is usually wildly inaccurate until you get near
the end
of resilvering or scrubbing. The estimated ti
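For reference, a minimal sketch of the usual sequence (assuming a pool named
"tank"; substitute your pool name):

  zpool clear tank       # clear the logged errors once the faulted device is healthy
  zpool status -v tank   # watch resilver progress; the ETA firms up near the end
  zpool scrub tank       # after the resilver completes, verify all checksums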
On Oct 20, 2013, at 6:52 AM, "Chris Murray" wrote:
> Hi all,
>
> I'm hoping for some troubleshooting advice. I have an OpenIndiana
> oi_151a8 virtual machine which was functioning correctly on vSphere 5.1
> but now isn't on vSphere 5.5 (ESXi-5.5.0-1331820-standard)
>
> A small corner of my net
On Sep 9, 2013, at 11:09 AM, Simon Toedt wrote:
> On Mon, Sep 9, 2013 at 7:52 PM, Peter Tribble wrote:
>> Hi,
>>
>> topic says it all. I want to install OpenIndiana on a UFS filesystem. No
>>> typo. I do not want to use ZFS on my boot disk. Can you choose what
>>> filesystem you want to use for
On Aug 19, 2013, at 4:02 AM, Edward Ned Harvey (openindiana)
wrote:
>> From: Steve Goldthorpe [mailto:openindi...@waistcoat.org.uk]
>> Sent: Sunday, August 18, 2013 12:23 PM
>>
>> No matter what I try I can't seem to get a 4K aligned root pool using the
>> OpenIndiana installer (oi151-a7 live i
Hi Willem,
On Aug 14, 2013, at 10:49 AM, w...@vandenberge.us wrote:
> Good morning,
> Last week we put three identical oi_151a7 systems into pre-production. Each
> system has 240 drives in 9-drive RAIDZ1 vdevs (I'm aware of the potential DR
> issues with this configuration and I'm ok with them in
On Aug 8, 2013, at 11:34 AM, Lionel Cons wrote:
> On 8 August 2013 17:11, Richard Elling
> wrote:
>> On Aug 7, 2013, at 2:50 PM, Jason Lawrence wrote:
>>
>>> This might be a better question for the Illumos group, so please let me
>>> know.
>>>
On Aug 7, 2013, at 2:50 PM, Jason Lawrence wrote:
> This might be a better question for the Illumos group, so please let me know.
>
> I have a zvol for a KVM instance which I felt was taking up too much space.
> After doing a little research, I stumbled upon
> http://support.freenas.org/ticket
>>>> opposed to a collection of discs in raidz or whatever it's called.
>>>>
>>>> My needs will be well handled by something like 3tb of storage so
>>>> something like 6 1tb discs for a mirrored setup.
>>>>
>>>> What else could I ge
On Aug 5, 2013, at 3:58 AM, Gary Gendel wrote:
> When I reboot my machine, fmstat always shows 12 counts for zfs-* categories.
> fmdump and fmdump -e don't report anything and I don't see anything in the
> logs of the current or previous BE (when applicable). I'm at a bit of a loss
> to fig
On Jul 25, 2013, at 3:21 PM, James Relph wrote:
> Hi Karl,
>
>> I think we need more information to be able to help.
>> Have you enabled mpxio? Have a look at the stmsboot command.
>
> mpxio is enabled.
>
>> What kind of Qlogic card do you have. Oem or original Qlogic, and model.
>> In "old"
On Jul 11, 2013, at 9:30 AM, Laurent Blume wrote:
> On 2013-07-11 6:56 PM, James Carlson wrote:
>> I've been using it for a while, first on OpenSolaris.
>
> Yes, me too, on and off until S11.1, when I dumped it for good because
> it annoyed me one time too many. I do know the thing :-)
>
>> Sim
On Jun 18, 2013, at 5:34 AM, Sebastian Gabler wrote:
> On 18.06.2013 06:15, openindiana-discuss-requ...@openindiana.org wrote:
>> Message: 7
>> Date: Mon, 17 Jun 2013 17:00:37 -0700
>> From: Richard Elling
>> To: Discussion list for OpenIndiana
>>
>
On Jun 17, 2013, at 1:36 PM, Sebastian Gabler wrote:
> Dear Bill, Peter, Richard, and Saso.
>
> Thanks for the great comments.
>
> Now, changing to reverse gear, isn't it more likely to lose data by having a
> pool that spans across multiple HBAs than if you connect all drives to a
> single H
On Jun 17, 2013, at 7:12 AM, Sebastian Gabler wrote:
> Hi,
>
> it occurred to me that obviously some ZFS Storage systems only feature a
> single SAS HBA, including the ZFSSA 7320. At least, as far as I understand.
> From what I saw in the 7320 documentation, each of the two HBA ports is
> conne
On Jun 16, 2013, at 3:11 PM, Alberto Picón Couselo wrote:
> Hi, Saso
>
>> I don't think there's any in-kernel support. But before you go out on a
>> software digging expedition into clustered filesystems, have you made sure
>> that you *really* need it? High Availability does not necessarily
On Apr 21, 2013, at 3:47 AM, Jim Klimov wrote:
> On 2013-04-21 06:13, Richard Elling wrote:
>> Terminology warning below…
>
>
>> BER is the term most often used in networks, where the corruption is
>> transient. For permanent
>> data faults, the equivalent
comment below…
On Apr 18, 2013, at 5:17 AM, Edward Ned Harvey (openindiana)
wrote:
>> From: Timothy Coalson [mailto:tsc...@mst.edu]
>>
>> Did you also compare the probability of bit errors causing data loss
>> without a complete pool failure? 2-way mirrors, when one device
>> completely
>> di
will be lost. On the other hand, raid-z2 will still
>>> >have available redundancy, allowing every single block to have a bad read
>>> >on any single component disk, without losing data. I haven't done the math
>>> >on this, but I seem to rec
[catching up... comment below]
On Apr 18, 2013, at 2:03 PM, Timothy Coalson wrote:
> On Thu, Apr 18, 2013 at 10:24 AM, Sebastian Gabler
> wrote:
>
>> On 18.04.2013 16:28, openindiana-discuss-request@**openindiana.org
>> wrote:
>>
>>> Message: 1
>>> Date: Thu, 18 Apr 2013 12:17:47 +
> Thanks for pointing at that. I stand corrected with my previous statement
> about Richard's MTTDL model excluding BER/UER. Asking Richard Elling to
> accept my apology.
No worries.
Unfortunately, Oracle totally hosed the older Sun blogs. I do have on my todo
list the
task
For the context of ZPL, easy answer below :-) ...
On Apr 16, 2013, at 4:12 PM, Timothy Coalson wrote:
> On Tue, Apr 16, 2013 at 6:01 PM, Jim Klimov wrote:
>
>> On 2013-04-16 23:56, Jay Heyl wrote:
>>
>>> result in more devices being hit for both read and write. Or am I wrong
>>> about reads b
clarification below...
On Apr 16, 2013, at 2:44 PM, Sašo Kiselkov wrote:
> On 04/16/2013 11:37 PM, Timothy Coalson wrote:
>> On Tue, Apr 16, 2013 at 4:29 PM, Sašo Kiselkov wrote:
>>
>>> If you are IOPS constrained, then yes, raid-zn will be slower, simply
>>> because any read needs to hit all d
Julien,
Good idea. Please file an RFE at illumos.org, thanks
-- richard
On Apr 14, 2013, at 7:44 AM, Julien Ramseier wrote:
> Hi there,
>
> I was playing with OI 151a7, and I noticed a strange noise
> coming from my hard drive each time the system was shut down.
>
> After some digging, it se
On Apr 14, 2013, at 8:15 AM, Wim van den Berge wrote:
> Hello,
>
> We have been running OpenIndiana (and its various predecessors) as storage
> servers in production for the last couple of years. Over that time the
> majority of our storage infrastructure has been moved to Open Indiana to the
> p
On Mar 28, 2013, at 11:04 PM, "Shvayakov A." wrote:
> I found this: https://www.illumos.org/issues/1437
>
> But I'm not sure that it will be without problems
>
> Anybody knows - what's the reason for this limitation?
WAG. The old limit was 32 because nobody would ever need more than 32 targets,
On Mar 16, 2013, at 5:02 PM, Richard Elling
wrote:
> there is a way to get this info from mdb... I added a knowledge base article
> on this at Nexenta a few years ago, lemme see if I can dig it up from my
> archives…
And the winner is:
echo "::mptsas -t" | mdb
there is a way to get this info from mdb... I added a knowledge base article on
this at Nexenta a few years ago, lemme see if I can dig it up from my
archives...
-- richard
On Mar 15, 2013, at 11:22 PM, "Richard L. Hamilton" wrote:
> Running on something older (SXCE snv_97 on SPARC, or ther
On Feb 16, 2013, at 3:59 PM, Sašo Kiselkov wrote:
> On 02/17/2013 12:52 AM, Grant Albitz wrote:
>> Yes Jim, I actually used something similar to enable the 9000 MTU; that's why
>> I wasn't familiar with the config file method.
>>
>> dladm set-linkprop -p mtu=9000 InterfaceName
>>
>>
>> Flowcontro
On Feb 8, 2013, at 6:33 AM, real-men-dont-cl...@gmx.net wrote:
> Hello,
>
> given the lack of encryption in current open-source zfs I came across the so
> called self-encrypting-disks (eg. HGST UltraStar A7K2000 BDE 1000GB).
>
> Did anybody try to use them under OI so far?
Soon, many, if not
This is a bug in the mpt_sas driver. I'm not sure of the RTI date, but I
believe it was
scheduled to be fixed soon. I've CC'ed Dan McDonald who has been working in this
area. He'll know for sure :-)
-- richard
On Feb 7, 2013, at 1:20 AM, Randy S wrote:
>
> Hi,
>
> Thanks for the link. I also
On Dec 28, 2012, at 12:40 AM, Jim Klimov wrote:
> On 2012-12-28 09:00, Ram Chander wrote:
>> Hi,
>>
>>
>> Is there a /boot/loader.conf in OI? I want to set the values below in that file; will they
>> take effect? If not, where should I specify them?
>>
>> vfs.zfs.zil_disable="0"
>> vfs.zfs.txg.timeout="5"
>> vfs.zfs
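illumos has no /boot/loader.conf; as a hedged sketch (not a tested recipe), the
rough equivalents are /etc/system for kernel tunables and a per-dataset property
for the ZIL:

  # /etc/system, takes effect after reboot
  set zfs:zfs_txg_timeout = 5

  # there is no global zil_disable any more; use the sync property instead
  zfs set sync=disabled tank/scratch   # hypothetical dataset name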
On Nov 30, 2012, at 9:59 AM, Nicholas Metsovon wrote:
> I've been building an OpenIndiana server to replace our existing Linux web
> server. I've always - since the 70's - wanted to run a real Unix server. I
> have the server almost built, and everything so far is working great. Glory
> be
On Nov 16, 2012, at 4:19 PM, Jim Klimov wrote:
> On 2012-11-17 00:46, Roel_D wrote:
>> How about teaming? Is it supported under OI?
>
>
> My memory serves me no worse than Google: teaming is one of the
> umbrella terms to describe what is implemented by LACP - a means
> of representing several
On Nov 1, 2012, at 1:24 AM, Jim Klimov wrote:
> On 2012-11-01 01:47, Richard Elling wrote:
>> Finally, a data point: using MTU of 1500 with ixgbe you can hit wire speed
>> on a
>> modern CPU.
>
>> There is no CSMA/CD on gigabit and faster available from any vendor
On Oct 31, 2012, at 3:37 AM, Jim Klimov wrote:
> 2012-10-31 13:58, Sebastian Gabler wrote:
>>> 2012-10-30 19:21, Sebastian Gabler wrote:
> Whereas that's relative: performance is still at a quite miserable 62
> MB/s through a gigabit link. Apparently, my environment has room for
> imp
On Oct 31, 2012, at 5:53 AM, Roy Sigurd Karlsbakk wrote:
>> 2012-10-30 19:21, Sebastian Gabler wrote:
>>> Whereas that's relative: performance is still at a quite miserable
>>> 62
>>> MB/s through a gigabit link. Apparently, my environment has room for
>>> improvement.
>>
>> Does your gigabit et
On Oct 28, 2012, at 5:10 AM, Robin Axelsson
wrote:
> On 2012-10-24 21:58, Timothy Coalson wrote:
>> On Wed, Oct 24, 2012 at 6:17 AM, Robin Axelsson<
>> gu99r...@student.chalmers.se> wrote:
>>> It would be interesting to know how you convert a raidz2 stripe to say a
>>> raidz3 stripe. Let's say t
On Oct 22, 2012, at 9:13 AM, James Carlson wrote:
> Daniel Kjar wrote:
>> I have this problem with any VM running on either Sol10 Nevada,
>> Opensolaris, openindiana. I have the ARC restricted now but for some
>> reason, and 'people at sun that know these things' have mentioned it
>> before when
On Oct 19, 2012, at 3:51 PM, "Dan Swartzendruber" wrote:
>
> Hi, all. I've got an issue that is bugging me. I've got an OI 151a7 VM and
> ssh to it takes 15 seconds or so, then I get a prompt. It's not the usual
> reverse dns or gssapi stuff, since my backup node is also OI 151a7 and it
> res
On Oct 15, 2012, at 3:00 PM, heinrich.vanr...@gmail.com wrote:
> Most of my storage background is with EMC CX and VNX, which are used in a
> vast number of datacenters.
> They run a process called sniffer that runs in the background and requests a
> read of all blocks on each disk individual
On Oct 8, 2012, at 2:07 PM, Roel_D wrote:
> I still think this whole discussion is like renting a 40 meter long truck to
> move your garden hose.
>
> We all know that it is possible to rent such a truck but nobody tries to roll
> up the hose
>
> SSD's are good for fast reads and occasi
On Oct 8, 2012, at 4:07 PM, Martin Bochnig wrote:
> Marilio,
>
>
> at first a reminder: never ever detach a disk before you have a third
> disk that already completed resilvering.
> The term "detach" is misleading, because it detaches the disk from the
> pool. Afterwards you cannot access the d
On Sep 27, 2012, at 5:15 PM, Reginald Beardsley wrote:
> --- On Thu, 9/27/12, Richard Elling wrote:
>>>
>>> zfs_scrub_delay = 100
>>
>> a bit extreme, but probably ok
>>
>>> zfs_scan_idle = 1000
>>
>> no, you'll want to
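As a hedged illustration of how these knobs are usually inspected and changed on
a live illumos kernel (the values are arbitrary examples, not recommendations):

  echo "zfs_scrub_delay/D" | mdb -k       # print the current value in decimal
  echo "zfs_scrub_delay/W0t4" | mdb -kw   # set it to 4 on the running kernel
  echo "zfs_scan_idle/D" | mdb -k

  # to persist across reboots, the same names go in /etc/system:
  set zfs:zfs_scrub_delay = 4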
On Sep 29, 2012, at 6:46 AM, Bryan N Iotti wrote:
> Hi all,
>
> thought you'd like to know the following...
>
> I have my rpool on a 146GB SCSI 15K rpm disk.
>
> I regularly back it up with the following sequence of commands:
> - zfs snapshot -r rpool@
> - cd to backup dir and su
> - zfs send
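The quoted sequence is cut off above; a hedged sketch of what such an rpool
backup typically looks like (snapshot name and target path are invented for
illustration):

  zfs snapshot -r rpool@backup-20130929
  zfs send -R rpool@backup-20130929 | gzip > /backup/rpool-20130929.zfs.gz
  # restore later with: gzcat /backup/rpool-20130929.zfs.gz | zfs receive -F newpool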
On Sep 27, 2012, at 3:24 PM, Reginald Beardsley wrote:
> --- On Thu, 9/27/12, Richard Elling wrote:
>
>> From: Richard Elling
>> Subject: Re: [OpenIndiana-discuss] Mitigating the performance impact of scrub
>> To: "Discussion list for OpenIndiana"
>>
> Reg
>
> --- On Thu, 9/27/12, Richard Elling wrote:
>
>> From: Richard Elling
>> Subject: Re: [OpenIndiana-discuss] Mitigating the performance impact of scrub
>> To: "Discussion list for OpenIndiana"
>> Date: Thursday, September 27, 201
On Sep 27, 2012, at 8:44 AM, Reginald Beardsley wrote:
> The only thing google turned up was "stop the scrub if it impacts performance
> too badly" which is not really all that helpful. Or ways to speed up scrubs &
> resilvers.
On modern ZFS implementations, scrub I/O is throttled to avoid imp
On Sep 25, 2012, at 11:41 AM, Peter Tribble wrote:
> On Tue, Sep 25, 2012 at 1:50 PM, Richard Elling
> wrote:
>>
>> Use what you need. Most people don't need or want to use swap. Why?
>> Because...
>> if you have to swap, performance will suck. Period
On Sep 24, 2012, at 2:22 AM, Gabriele Bulfon wrote:
> Hi,
> I noticed that I usually have to grow the default swap installed by OI or
> XStreamOS, because the
>> default text installer sets it up following some rules (stated inside the python
> sources):
> memorytype requiredsi
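For context, a hedged sketch of how the default swap zvol is usually grown after
install (paths assume the standard rpool/swap volume; 8G is an arbitrary size):

  swap -l                            # list current swap devices
  swap -d /dev/zvol/dsk/rpool/swap   # remove the zvol from swap
  zfs set volsize=8G rpool/swap      # resize it
  swap -a /dev/zvol/dsk/rpool/swap   # add it back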
On Sep 24, 2012, at 10:29 PM, Jaco Schoonen wrote:
>>
After studying all the information about 4K-disks I figured out that to
get more space in my server I need to create a new pool, consisting of
4K-disks and then moving everything from the old 512-byte pool to the new
on
On Sep 25, 2012, at 4:19 AM, Jim Klimov wrote:
> 2012-09-25 11:52, Armin Maier wrote:
>> Hello, is there an easy way to find out when the last update occurred to
>> a ZFS filesystem? My goal is to only make a backup of a filesystem when
>> something has changed. At this time I make it in a comma
On Sep 24, 2012, at 2:30 PM, Jaco Schoonen wrote:
> Dear all,
>
> After studying all the information about 4K-disks I figured out that to get
> more space in my server I need to create a new pool, consisting of 4K-disks
> and then moving everything from the old 512-byte pool to the new one.
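A hedged sketch of the data-migration half of that plan, once the new 4K-aligned
pool exists (pool and snapshot names are placeholders):

  zfs snapshot -r oldpool@migrate
  zfs send -R oldpool@migrate | zfs receive -Fdu newpool
  # quiesce writes, repeat with an incremental send (-I), then swap mountpoints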
On Sep 11, 2012, at 10:46 AM, Ray Arachelian wrote:
> On 09/10/2012 09:14 AM, Sašo Kiselkov wrote:
>> I recommend losing some large unused app blobs that nobody needs on a
>> Live CD. I don't know what you've got in there, but I recommend you
>> throw out stuff like image editing software and the
On Sep 6, 2012, at 8:08 AM, Roel_D wrote:
> Reading this reminds me of the old days when IRQs were important to
> systems.
> In those days my serial mouse could interfere with my modem.
>
> But I thought those days were way back..
Interrupt conflicts are syslogged at boot (and other times
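A hedged pointer to the usual tools for checking this; both ship with illumos:

  echo "::interrupts" | mdb -k   # show vector/IRQ assignments per device
  intrstat 5                     # per-CPU interrupt activity, sampled every 5 seconds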
Thanks for the update, Bryan! Well done!
-- richard
On Aug 28, 2012, at 10:06 AM, Bryan N Iotti wrote:
> Folks,
>
> just thought you'd like to know that the Veterinary Sciences Faculty of
> the University Of Torino, Italy, is now running an open source PACS
> DICOM server based on OpenIndian
On Aug 6, 2012, at 5:15 AM, James Carlson wrote:
>
> It's never been possible to mount NFS at boot.
Well, some of us old farts remember nd, and later, NFS-based diskless
workstations :-)
The current lack of support for diskless leaves an empty feeling in my heart :-P
-- richard
--
ZFS Perform
On Jul 24, 2012, at 9:11 AM, Jason Matthews wrote:
> are you missing a zero to the left of the decimal place?
Been there, done that, wrote a whitepaper. Add 2 zeros.
-- richard
> Sent from Jasons' hand held
>
> On Jul 23, 2012, at 8:57 PM, "John T. Bittner" wrote:
>
>> Subject: ZFS and AVS gu
On Jul 24, 2012, at 2:04 PM, Ray Arachelian wrote:
> On 07/24/2012 02:41 PM, Jim Klimov wrote:
>>
>> Did you try to "zpool import -o readonly=on" and using TXG rollback?
> Can't do that now, the pool is actually imported, and it won't let me
> export it, nor offline it.
>
>> If your drives wrote
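For reference, a hedged sketch of the read-only import being discussed; it only
applies once the pool is exported (or from a rescue boot), and -F is the
documented recovery option that discards the last few transactions:

  zpool import -o readonly=on -N tank   # -N skips mounting the datasets
  zpool import -F tank                  # last resort: rewind to a consistent txg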
On Jul 20, 2012, at 12:01 PM, Bob Friesenhahn wrote:
> On Fri, 20 Jul 2012, Ichiko Sakamoto wrote:
>
>> Hi, all
>>
>> I have a disk that has many bad sectors.
>> I created a zpool with this disk and expected that
>> zpool would tell me the disk has many errors.
>> But zpool told me everything was fine u
Hi Ichiko,
This behaviour does not appear to be correct. What version of the OS are
you running? (hint: cat /etc/release)
-- richard
On Jul 20, 2012, at 2:29 AM, Ichiko Sakamoto wrote:
> Hi, all
>
> I have a disk that has many bad sectors.
> I created zpool with this disk and expected that
> z
On Jul 19, 2012, at 11:41 AM, st...@linuxsuite.org wrote:
>> On Jul 19, 2012, at 9:03 AM, st...@linuxsuite.org wrote:
>>
>>> c3::dsk/c3t3d0   disk       connected   configured   unknown
>>> c4               scsi-sas   connected   configured   unknown
>>> c4::
On Jul 19, 2012, at 9:03 AM, st...@linuxsuite.org wrote:
>
> Howdy!
>
> I have a Dell 610 with LSI 9200-8e HBA connected to Supermicro 847
> (45 disk 4U JBOD)
> Each port on the LSI is connected by a separate cable to one of the 2
> BackPlanes on
> the SM847.
>
> How come format and cfga
On Jul 6, 2012, at 12:59 AM, Richard L. Hamilton wrote:
>
> It's been my impression on a few occasions that a disk with very limited
> damage might have any bad areas discovered and effectively repaired by a
> scan; even on a semi-modern (e.g. older Fibre Channel) disk, the
> manufacturer's and
On Jul 5, 2012, at 10:13 AM, Reginald Beardsley wrote:
> I had a power failure last night. The UPS alarms woke me up and I powered
> down the systems. (some day I really will automate shutdowns) It's also been
> quite hot (90 F) in the room where the computer is.
>
> At boot the BIOS on the H
On Jul 2, 2012, at 2:49 PM, Rich wrote:
> Hm, we appear to have been discussing a different problem, which is
> fascinating.
>
> I have a number of devices which are in the Supermicro SC846A-R1200
> chassis - which has no expanders, just 6 SFF-8087 ports on it, running
> into LSI 9201-16i contro
On Jun 26, 2012, at 10:36 AM, Dan Swartzendruber wrote:
> On 6/26/2012 1:15 PM, Richard Elling wrote:
>> On Jun 26, 2012, at 6:29 AM, Dan Swartzendruber wrote:
>>
>>
>>> Keep in mind this is almost 2 yrs old, though. I seem to recall a thread
>>>
On Jun 26, 2012, at 6:29 AM, Dan Swartzendruber wrote:
> Keep in mind this is almost 2 yrs old, though. I seem to recall a thread
> here or there that has pinned the SATA toxicity issues to an mpt driver bug
> or somesuch?
Not really. Search for other OSes and their tales of woe. In some cases,
UFS root certainly works, but not sure if the OI installer makes it easy?
-- richard
On Jun 25, 2012, at 7:37 PM, Gordon Ross wrote:
> UFS root should still work, also NFS root (convenient for ZFS debug work:)
>
> On Mon, Jun 25, 2012 at 9:00 PM, Jan Owoc wrote:
>> On Mon, Jun 25, 2012 at 6:55
On Jun 25, 2012, at 2:06 PM, Ray Arachelian wrote:
> On 06/25/2012 03:31 PM, michelle wrote:
>> I did a hard reset and moved the drive to another channel.
>>
>> The fault followed the drive so I'm certain it is the drive, as people
>> have said.
>>
>> The thing that bugs me is that this ZFS faul
> Renamed to:
> http://wiki.illumos.org/display/illumos/ZFS+and+Advanced+Format+disks
>
> In the course of discussion, and by further updates from Richard Elling,
> I was led to believe that the sd.conf overrides are part of the lowlevel
> illumos-gate shared by several distros, includin
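The wiki page above describes the sd.conf override; a hedged sketch of what such
an entry looks like (the vendor/product string is hypothetical and must match the
drive's inquiry data, including padding):

  # /kernel/drv/sd.conf
  sd-config-list = "ATA     WDC WD20EARS-00M", "physical-block-size:4096";
  # then reload with: update_drv -vf sd   (or reboot) before creating the pool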
On Jun 11, 2012, at 9:58 PM, Rich wrote:
> On Tue, Jun 12, 2012 at 12:50 AM, Richard Elling
> wrote:
>> On Jun 11, 2012, at 6:08 PM, Bob Friesenhahn wrote:
>>
>>> On Mon, 11 Jun 2012, Jim Klimov wrote:
>>>> ashift=12 (2^12 = 4096). For disks which do not
On Jun 11, 2012, at 6:08 PM, Bob Friesenhahn wrote:
> On Mon, 11 Jun 2012, Jim Klimov wrote:
>> ashift=12 (2^12 = 4096). For disks which do not lie, it
>> works properly out of the box. The patched zpool binary
>> forced ashift=12 at the user's discretion.
>
> It seems like new pools should provi
On Jun 10, 2012, at 7:45 AM, michelle wrote:
> The system seems to have hung again, any query to the ZFS system hangs the
> session.
>
> Nothing in /var/adm/messages.
Try fmdump -eV
-- richard
>
> I'm wondering whether the combination of having a mirrored zfs set with one
> drive on e-sata
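A brief hedged note on usage: fmdump reads the FMA logs, and the error log (-e)
often shows device retries that never reach /var/adm/messages:

  fmdump -e      # one-line summary per error event
  fmdump -eV     # full detail from the error telemetry log
  fmadm faulty   # list any diagnosed faults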
On Jun 6, 2012, at 4:22 AM, Matt Clark wrote:
> It's true you need a decent SSD, but they don't have to be expensive. The
> Intel 320 series has power loss protection and good performance. The 80GB
> (10,000 write IOPS, 10TB endurance) model is available for £100 in the UK.
> Every ZFS serv
On Jun 5, 2012, at 10:32 AM, Nick Hall wrote:
> On Mon, Jun 4, 2012 at 10:48 AM, Jan Owoc wrote:
>
>>
>> The data on the main pool is always consistent in that a certain
>> operation either made it to the disk or it didn't. However, if your
>> application depends on the fact that writes make it
On Jun 4, 2012, at 10:06 AM, Dan Swartzendruber wrote:
> On 6/4/2012 11:56 AM, Richard Elling wrote:
>> On Jun 4, 2012, at 8:24 AM, Nick Hall wrote:
>> For NFS workloads, the ZIL implements the synchronous semantics between
>> the NFS server and client. The best way to get
On Jun 4, 2012, at 8:48 AM, Jan Owoc wrote:
> On Mon, Jun 4, 2012 at 9:24 AM, Nick Hall wrote:
>> I'm considering buying a separate SSD drive for my ZIL as I do quite a bit
>> over NFS and would like the latency to improve. But first I'm trying to
>> understand exactly how the ZIL works and what h
On Jun 4, 2012, at 8:24 AM, Nick Hall wrote:
> I'm considering buying a separate SSD drive for my ZIL as I do quite a bit
> over NFS and would like the latency to improve. But first I'm trying to
> understand exactly how the ZIL works and what happens in case of a problem.
> I'll list my understan
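For concreteness, a hedged sketch of adding (and, if needed, removing) a dedicated
log device; the device names are placeholders:

  zpool add tank log c5t0d0                 # single slog
  zpool add tank log mirror c5t0d0 c5t1d0   # mirrored slog
  zpool remove tank c5t0d0                  # log devices are removable (pool version >= 19)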
On Jun 1, 2012, at 10:45 PM, Richard L. Hamilton wrote:
> In a non-COW filesystem, one would expect that rewriting an already allocated
> block would never fail for out-of-space (ENOSPC).
This seems like a rather broad assumption. It may hold for FAT or UFS, but
might not
hold for some of the m
idea at the bottom...
On May 29, 2012, at 12:56 PM, Jason Cox wrote:
> Let me start by saying that I am very new to OpenIndiana and Solaris
> 10/11 in general. I normally deal with Red Hat Linux. I wanted to use
> OI for ZFS support for a vmware shared storage server to mount LUNs on
> my SAN.
>
On May 23, 2012, at 8:31 PM, Jim Klimov wrote:
> 2012-05-24 3:50, Richard Elling wrote:
>>> As a side note, is it then possible to augment GRUB to be
>>> able to import and export an rpool and thus help IDE-SATA
>>> migrations?
>>
>> Go for it.
>
On May 23, 2012, at 2:37 AM, Jim Klimov wrote:
> 2012-05-23 8:00, Richard Elling wrote:
>> This procedure is far too complex. Let's edit it...
>
> Thanks... that seemed far too easy ;)
>
> As a side note, is it then possible to augment GRUB to be
> able to import
On May 22, 2012, at 1:41 PM, Robbie Crash wrote:
> Gaming iperf, you can get close to theoretical maximums on wired connections,
> but if you're just on a 10/100 network it looks like you've got everything
> working properly. Real world performance (for me) sits at around 400Mb/sec
> for medium (4-100
On May 22, 2012, at 12:36 PM, Jim Klimov wrote:
> 2012-05-22 23:29, Jim Klimov wrote:
>> There are workarounds, likely posted in archives of zfs-discuss
>> list and many other sources. If I google anything good up, I'll
>> post a link here :)
>
> What do you know? I posted some myself, and found t
On May 22, 2012, at 2:40 PM, Jason Matthews wrote:
> Let me get this straight...
>
> You installed the OS on the disk with the BIOS set to IDE. Later, you
> changed the BIOS to AHCI and the system crashes when booting. Is that about
> right?
Since the OS is not yet running, I don't consider it
On May 18, 2012, at 7:04 AM, Doug Hughes wrote:
> A third recommendation for iperf. It's the tool you want. Don't mess around
> with anything else.
+1
There has been some discussion recently about using Poisson distributed
interarrival
times instead of fixed interval. This could have an impact
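For anyone following along, a hedged example of the basic iperf (v2) invocation
being discussed; the host name and options are illustrative:

  iperf -s                        # on the receiver
  iperf -c storage01 -t 30 -P 4   # sender: 30-second run, 4 parallel streams
  iperf -c storage01 -u -b 900M   # UDP mode at a fixed offered load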
On May 2, 2012, at 12:25 AM, Mark wrote:
> There are two issues.
>
> The first is correct partition alignment, the second the ashift value.
>
> In "theory" (I haven't tested this yet), manually creating the slices with a
> start position at sector 64 and using slices instead of whole disks for the
On May 1, 2012, at 8:41 PM, Tim Dunphy wrote:
> hello list
>
> I have attempted to enable link aggregation on my oi 151 box using the
> command dladm create-aggr -d e1000g0 -d e1000g1 1 then I plumbed it
> with an address of 192.168.1.200 and echoed 192.168.1.1 >
> defaultrouter
>
> I noticed th
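A hedged sketch of a persistent way to do this on oi_151 (interface names and
addresses taken from the question; ipadm syntax assumed to be available):

  dladm create-aggr -l e1000g0 -l e1000g1 aggr0
  ipadm create-if aggr0
  ipadm create-addr -T static -a 192.168.1.200/24 aggr0/v4
  route -p add default 192.168.1.1
  # both switch ports generally need matching link-aggregation/LACP configuration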
On Apr 29, 2012, at 7:38 PM, Gordon Ross wrote:
> On Sun, Apr 29, 2012 at 8:46 PM, Richard Elling
> wrote:
>>
>> On Apr 29, 2012, at 11:45 AM, George Wilson wrote:
> [...]
>>>
>>> Speaking of 4K sectors, I've taken a slightly different approach that
On Apr 29, 2012, at 11:45 AM, George Wilson wrote:
>
> On Apr 29, 2012, at 1:28 PM, Roy Sigurd Karlsbakk wrote:
>
Also, I posted a bug report for it here
https://www.illumos.org/issues/2663
>>>
>>> Thanks :-). We can now track the progress of the OI-specific
>>> discussion about this
On Apr 24, 2012, at 12:35 PM, Roy Sigurd Karlsbakk wrote:
> Hi all
>
> There was a discussion some time back about some (or most?) SSDs not honoring
> cache flushes, that is, something is written to, say, the SLOG, and ZFS sends
> a flush(), the SSD issues a NOP and falsely acknowledges the flu
On Apr 23, 2012, at 6:27 AM, paolo marcheschi wrote:
> Hi
>
> I see that there is a variant of opensolaris known as OmniOS:
No, it is an illumos distribution.
>
> http://omnios.omniti.com/
>
> Is that related to OpenIndiana? Are there any advantages to it?
It is designed for the serve
On Apr 9, 2012, at 2:20 PM, Martin Frost wrote:
> Is there some issue with sharing via both SMB/CIFS and NFS?
NFSv3 does not have ACLs. NFSv4 does have ACLs. So there is not a
problem with NFS, per se, but the version your clients use might not
understand ACLs.
-- richard
--
ZFS Performance an
On Apr 1, 2012, at 12:48 PM, Hugh McIntyre wrote:
> On 3/30/12 8:41 AM, Richard Elling wrote:
>>
>> On Mar 30, 2012, at 2:01 AM, Harry Putnam wrote:
>>>> USB drives tend to ignore cache flush commands, which can appear as
>>>> unreliable disks. Shouldn
On Mar 30, 2012, at 2:01 AM, Harry Putnam wrote:
>
> Richard Elling writes:
>
>> On Mar 26, 2012, at 12:34 PM, Jonathan Adams wrote:
>>
>>> Probably not the most reliable, but definitely the easiest, way to get
>>> access to your data is to use USB dis
answer below...
On Mar 28, 2012, at 11:54 AM, Dan Swartzendruber wrote:
> On 3/28/2012 2:40 PM, Richard Elling wrote:
>> On Mar 28, 2012, at 11:24 AM, Dan Swartzendruber wrote:
>>
>>> On 3/28/2012 1:38 PM, Richard Elling wrote:
>>>> On Mar 28, 2012,
On Mar 28, 2012, at 11:24 AM, Dan Swartzendruber wrote:
> On 3/28/2012 1:38 PM, Richard Elling wrote:
>> On Mar 28, 2012, at 9:52 AM, Dan Swartzendruber wrote:
>>
>>
>>> So I have an M1015 and it works fine. I noticed the other day I hotplugged
>>>
On Mar 28, 2012, at 9:52 AM, Dan Swartzendruber wrote:
> So I have an M1015 and it works fine. I noticed the other day I hotplugged a
> crucial M4 into the last free port on the HBA, and later noticed in the dmesg
> output:
>
> Mar 27 17:55:40 nas genunix: [ID 483743 kern.info]
> /scsi_vhci/d