Re: [zfs-discuss] new google group for ZFS on OSX

2009-10-24 Thread Craig Morgan
Gruber (http://daringfireball.net/linked/2009/10/23/zfs) is normally  
well-informed and has some feedback … it seems possible that legal canned it.


--Craig

On 23 Oct 2009, at 20:42, Tim Cook wrote:




On Fri, Oct 23, 2009 at 2:38 PM, Richard Elling wrote:

FYI,
The ZFS project on MacOS forge (zfs.macosforge.org) has provided the
following announcement:

  ZFS Project Shutdown: 2009-10-23
  The ZFS project has been discontinued. The mailing list and repository
  will also be removed shortly.

The community is migrating to a new google group:
  http://groups.google.com/group/zfs-macos

-- richard


Any official word from Apple on the abandonment?

--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


--
Craig Morgan
Cinnabar Solutions Ltd

t: +44 (0)791 338 3190
f: +44 (0)870 705 1726
e: cr...@cinnabar-solutions.com
w: www.cinnabar-solutions.com



--
Craig

Craig Morgan
t: +44 (0)791 338 3190
f: +44 (0)870 705 1726
e: craig.mor...@sun.com

~ 


 NOTICE:  This email message is for the sole use of the intended
 recipient(s) and may contain confidential and privileged information.
 Any unauthorized review, use, disclosure or distribution is  
prohibited.

 If you are not the intended recipient, please contact the sender by
 reply email and destroy all copies of the original message.
~ 





___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Very poor pool performance - no zfs/controller errors?!

2011-12-18 Thread Craig Morgan
Try fmdump -e and then fmdump -eV; it could be a pathological disk just this
side of failure doing heavy retries that is dragging the pool down.
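
A quick sketch of what I mean (output will obviously vary per box):

   fmdump -e        # one line per error report (ereport) in the FMA error log
   fmdump -eV       # the same events in full detail, including the device path

A steady stream of ereports against one device path is usually the smoking gun.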

Craig

--
Craig Morgan


On 18 Dec 2011, at 16:23, Jan-Aage Frydenbø-Bruvoll  wrote:

> Hi,
> 
> On Sun, Dec 18, 2011 at 22:14, Nathan Kroenert  wrote:
>>  I know some others may already have pointed this out - but I can't see it
>> and not say something...
>> 
>> Do you realise that losing a single disk in that pool could pretty much
>> render the whole thing busted?
>> 
>> At least for me - the rate at which _I_ seem to lose disks, it would be
>> worth considering something different ;)
> 
> Yeah, I have thought that thought myself. I am pretty sure I have a
> broken disk, however I cannot for the life of me find out which one.
> zpool status gives me nothing to work on, MegaCli reports that all
> virtual and physical drives are fine, and iostat gives me nothing
> either.
> 
> What other tools are there out there that could help me pinpoint
> what's going on?
> 
> Best regards
> Jan
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS + Dell MD1200's - MD3200 necessary?

2012-01-06 Thread Craig Morgan
Ray,

If you are intending to go Nexenta then speak to your local Nexenta SE;
we've got HSL-qualified solutions which cover our h/w support, and we've
explicitly qualified some MD1200 configs with Dell for certain deployments
to guarantee support via both Dell h/w support and ourselves.

If you don't know who that would be, drop me a line and I'll find someone
local to you …

We tend to go with the LSI cards, but even there, there are some issues
with regard to Dell supply versus over-the-counter sourcing.

HTH

Craig

On 6 Jan 2012, at 01:28, Ray Van Dolson wrote:

> We are looking at building a storage platform based on Dell HW + ZFS
> (likely Nexenta).
> 
> Going Dell because they can provide solid HW support globally.
> 
> Are any of you using the MD1200 JBOD with head units *without* an
> MD3200 in front?  We are being told that the MD1200's won't "daisy
> chain" unless the MD3200 is involved.
> 
> We would be looking to use some sort of LSI-based SAS controller on the
> Dell front-end servers.
> 
> Looking to confirm from folks who have this deployed in the wild.
> Perhaps you'd be willing to describe your setup as well and anything we
> might need to take into consideration (thinking best option for getting
> ZIL/L2ARC devices into Dell R510 head units for example in a supported
> manner).
> 
> Thanks,
> Ray
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

—
Craig Morgan
e: cr...@nexenta.com
t: +44 (0)7913 383190
s: craig.morgan

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] SUNWzfsg missing in Solaris 11 express?

2010-11-18 Thread Craig Morgan
It was never a component that was released as part of OpenSolaris, and as
Sol11Exp is a derivative of that release rather than of the Solaris 10 line, I
guess it's not included.

The GUI was a plug-in to Sun WebConsole, which is/was a Solaris 10 feature … I
would expect some integration of that going forward, but you'd have to check
with Oracle on integration plans.

HTH

Craig

On 18 Nov 2010, at 18:10, SR wrote:

> SUNWzfsg (zfs admin gui) seems to be missing from Solaris 11 express.  Is 
> this no longer available or has it been integrated with something else?
> 
> Suresh
> -- 
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

-- 
Craig Morgan
Cinnabar Solutions Ltd

t: +44 (0)791 338 3190
f: +44 (0)870 705 1726
e: cr...@cinnabar-solutions.com
w: www.cinnabar-solutions.com



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS dedup success stories (take two)

2011-02-02 Thread Craig Morgan
Two caveats inline …

On 1 Feb 2011, at 01:05, Garrett D'Amore wrote:

> On 01/31/11 04:48 PM, Roy Sigurd Karlsbakk wrote:
>>> As I've said here on the list a few times earlier, the last on the
>>> thread 'ZFS not usable (was ZFS Dedup question)', I've been doing some
>>> rather thorough testing on zfs dedup, and as you can see from the
>>> posts, it wasn't very satisfactory. The docs claim 1-2GB memory usage
>>> per terabyte stored, ARC or L2ARC, but as you can read from the post,
>>> I don't find this very likely.
>>> 
>> Sorry about the initial post - it was wrong. The hardware configuration was 
>> right, but for initial tests, I use NFS, meaning sync writes. This obviously 
>> stresses the ARC/L2ARC more than async writes, but the result remains the 
>> same.
>> 
>> With 140GB worth of L2ARC on two X25-Ms and some 4GB partitions on the same 
>> devices, 4GB each, in a mirror, the write speed was reduced to something 
>> like 20% of the original speed. This was with about 2TB used on the zpool 
>> with a single data stream, no parallelism whatsoever. Still with 8GB ARC and 
>> 140GB of L2ARC on two SSDs, this speed is fairly low. I could not see 
>> substantially high CPU or I/O load during this test.
>>   
> 
> I would not expect good performance on dedup with write... dedup isn't going 
> to make writes fast - it's something you want on a system with a lot of 
> duplicated data that sustains a lot of reads.  (That said, highly duplicated 
> data with a DDT that fits entirely in RAM might see a benefit from not having 
> to write metadata frequently.  But I suspect an SLOG here is going to be 
> critical to get good performance since you'll still have a lot of synchronous 
> metadata writes.)
> 
>- Garrett

There is one circumstance where the write path could see an improvement: in a
system with data which is highly de-dupable *and* undergoing heavy write load,
it may be useful to forego the large data write and instead convert it into
smaller (and more frequent) metadata writes. SLOGs would then show more
benefit, and we'd release pressure on the back-end for throughput.

On a system with a high read ratio, de-duped data currently would be quite
efficient, but there is one pathology in current ZFS which impacts this
somewhat: last time I looked, each ARC reference to a de-duped block leads to
an inflated ARC copy of the data, hence a highly referenced block (20x, for
instance) could exist 20 times in an inflated state in the ARC after read
references to each occurrence. De-dup of the inflated data in the ARC was a
pending ZFS optimisation …

Craig
  
>> Vennlige hilsener / Best regards
>> 
>> roy
>> --
>> Roy Sigurd Karlsbakk
>> (+47) 97542685
>> r...@karlsbakk.net
>> http://blogg.karlsbakk.net/
>> --
>> I all pedagogikk er det essensielt at pensum presenteres intelligibelt. Det 
>> er et elementært imperativ for alle pedagoger å unngå eksessiv anvendelse av 
>> idiomer med fremmed opprinnelse. I de fleste tilfeller eksisterer adekvate 
>> og relevante synonymer på norsk.
>> _______
>> zfs-discuss mailing list
>> zfs-discuss@opensolaris.org
>> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>>   
> 
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

-- 
Craig Morgan
Cinnabar Solutions Ltd

t: +44 (0)791 338 3190
f: +44 (0)870 705 1726
e: cr...@cinnabar-solutions.com
w: www.cinnabar-solutions.com



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Format returning bogus controller info

2011-03-01 Thread Craig Morgan
Surprised that one of the most approachable outputs for any customer to
use, and one which would enable simple identification/resolution of many
of these discussions, didn't come up, namely:

cfgadm -al

which gives a reasonable physical mapping in which SAS/SATA drives are
relatively easy to map out by ID and, of course, by association
with their distinct physical controller paths.

And if you want to relate this view to the underlying physical
path to the device, then

cfgadm -alv

will clarify the physical mapping.

Both commands can take controller::device arguments if you
want to restrict the voluminous output somewhat.

BTW, you should also be looking at

mpathadm show LU <logical-unit>

to successfully decode the virtual device entries.
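
By way of illustration only (the controller id and LU name below are made up):

   cfgadm -al                                          # attachment points with receptacle/occupant state
   cfgadm -alv c2                                      # verbose view, restricted to controller c2
   mpathadm show LU /dev/rdsk/c0t600A0B800029E5A2d0s2  # paths and state of one multipathed LU

The verbose cfgadm listing also carries the Phys_Id, which is what ties a
cXtY name back to an actual controller port.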

Craig

On 1 Mar 2011, at 16:10, Cindy Swearingen wrote:

> (Dave P...I sent this yesterday, but it bounced on your email address)
> 
> A small comment from me would be to create some test pools and replace
> devices in the pools to see if device names remain the same or change
> during these operations.
> 
> If the device names change and the pools are unhappy, retest similar
> operations while the pools' are exported.
> 
> I've seen enough controller/device numbering wreak havoc on pool
> availability that I'm automatically paranoid when I see the controller
> numbering that you started with.
> 
> Thanks,
> 
> Cindy
> 
> 
> 
> On 02/28/11 22:39, Dave Pooser wrote:
>> On 2/28/11 4:23 PM, "Garrett D'Amore"  wrote:
>>> Drives are ordered in the order they are *enumerated* when they *first*
>>> show up in the system.  *Ever*.
>> Is the same true of controllers? That is, will c12 remain c12 or
>> /pci@0,0/pci8086,340c@5 remain /pci@0,0/pci8086,340c@5 even if other
>> controllers are active?
> _______
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

-- 
Craig Morgan
Cinnabar Solutions Ltd

t: +44 (0)791 338 3190
f: +44 (0)870 705 1726
e: cr...@cinnabar-solutions.com
w: www.cinnabar-solutions.com



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Good SLOG devices?

2011-03-02 Thread Craig Morgan
And I'd just add … make sure you are running a recent enough release of ZFS to
support importing a pool without the SLOG device being available, just in case
export/recovery of the pool needs to be attempted somewhere other than the
original server.
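
On releases that do have that support, recovery looks roughly like this (the
pool name is just a placeholder):

   zpool import -m tank     # import even though the separate log device is missing
   zpool status tank        # the absent slog is then visible and can be dealt with

Without that ability the import simply fails, which is the trap to avoid.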

Of course, PCIe-based devices also limit deployments in poor-man's cluster
scenarios and/or HA configs such as NexentaStor.

Craig
 
On 1 Mar 2011, at 16:35, Garrett D'Amore wrote:

> The PCIe based ones are good (typically they are quite fast), but check
> the following first:
> 
>   a) do you need an SLOG at all?  Some workloads (asynchronous ones) will
> never benefit from an SLOG.
> 
>   b) form factor.  at least one manufacturer uses a PCIe card which is
> not compliant with the PCIe form-factor and will not fit in many cases
> -- especially typical 1U boxes.
> 
>   c) driver support.
> 
>   d) do they really just go straight to ram/flash, or do they have an
> on-device SAS or SATA bus?  Some PCIe devices just stick a small flash
> device on a SAS or SATA controller.  I suspect that those devices won't
> see a lot of benefit relative to an external drive (although they could
> theoretically drive that private SAS/SATA bus at much higher rates than
> an external bus -- but I've not checked into it.)
> 
> The other thing with PCIe based devices is that they consume an IO slot,
> which may be precious to you depending on your system board and other
> I/O needs. 
> 
>   - Garrett
> 
> On Tue, 2011-03-01 at 17:03 +0100, Roy Sigurd Karlsbakk wrote:
>> Hi
>> 
>> I'm running OpenSolaris 148 on a few boxes, and newer boxes are getting 
>> installed as we speak. What would you suggest for a good SLOG device? It 
>> seems some new PCI-E-based ones are hitting the market, but will those 
>> require special drivers? Cost is obviously alsoo an issue here
>> 
>> Vennlige hilsener / Best regards
>> 
>> roy
>> --
>> Roy Sigurd Karlsbakk
>> (+47) 97542685
>> r...@karlsbakk.net
>> http://blogg.karlsbakk.net/
>> --
>> I all pedagogikk er det essensielt at pensum presenteres intelligibelt. Det 
>> er et elementært imperativ for alle pedagoger å unngå eksessiv anvendelse av 
>> idiomer med fremmed opprinnelse. I de fleste tilfeller eksisterer adekvate 
>> og relevante synonymer på norsk.
>> ___
>> zfs-discuss mailing list
>> zfs-discuss@opensolaris.org
>> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
> 
> 
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

-- 
Craig Morgan
Cinnabar Solutions Ltd

t: +44 (0)791 338 3190
f: +44 (0)870 705 1726
e: cr...@cinnabar-solutions.com
w: www.cinnabar-solutions.com



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Free space on ZFS file system unexpectedly missing

2011-03-10 Thread Craig Morgan
But even 'zfs list -o space' is now limited by not displaying snapshots
by default, so the catch-all is now

zfs list -o space -t all

which shouldn't miss anything then …

;-)
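
For instance (the pool name is only an example):

   zfs list -o space -t all -r tank   # NAME, AVAIL, USED, USEDSNAP, USEDDS,
                                      # USEDREFRESERV and USEDCHILD for every
                                      # dataset, snapshot and volume under 'tank'

The USEDSNAP column is normally where 'missing' space of this sort shows up.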

Craig

On 10 Mar 2011, at 03:38, Richard Elling wrote:

> 
> On Mar 9, 2011, at 4:05 PM, Tom Fanning wrote:
> 
>> On Wed, Mar 9, 2011 at 10:37 PM, Peter Jeremy
>>  wrote:
>>> On 2011-Mar-10 05:50:53 +0800, Tom Fanning  wrote:
>>>> I have a FreeNAS 0.7.2 box, based on FreeBSD 7.3-RELEASE-p1, running
>>>> ZFS with 4x1TB SATA drives in RAIDz1.
>>>> 
>>>> I appear to have lost 1TB of usable space after creating and deleting
>>>> a 1TB sparse file. This happened months ago.
>>> 
>>> AFAIR, ZFS on FreeBSD 7.x was always described as experimental.
>>> 
>>> This is a known problem (OpenSolaris bug id 6792701) that was fixed in
>>> OpenSolaris onnv revision 9950:78fc41aa9bc5 which was committed to
>>> FreeBSD as r208775 in head and r208869 in 8-stable.  The fix was never
>>> back-ported to 7.x and I am unable to locate any workaround.
>>> 
>>>> - Exported the pool from FreeBSD, imported it on OpenIndiana 148 -
>>>> but not upgraded - same problem, much newer ZFS implementation. Can't
>>>> upgrade the pool to see if the issue goes away since for now I need a
>>>> route back to FreeBSD and I don't have spare storage.
>>> 
>>> I thought that just importing a pool on a system with the bugfix would
>>> free the space.  If that doesn't work, your only options are to either
>>> upgrade to FreeBSD 8.1-RELEASE or later (preferably 8.2 since there
>>> are a number of other fairly important ZFS fixes since 8.1) and
>>> upgrade your pool to v15 or rebuild your pool (via send/recv or similar).
>>> 
>>> --
>>> Peter Jeremy
>> 
>> Well I never. Just by chance I did zfs list -t snapshot, and now it
>> shows a 1TB snapshot which it wasn't showing before.
> 
> There was a change where snapshots are no longer shown by default.
> This can be configured back to the old behaviour setting the zpool 
> "listsnapshots" property to "on"
> 
> Otherwise, you need to use the "-t snapshot" list.
> 
> But, a much better method of tracking this down is to use:
>   zfs list -o space
> 
> That will show the accounting for all dataset objects.
> -- richard
> 
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

-- 
Craig Morgan
Cinnabar Solutions Ltd

t: +44 (0)791 338 3190
f: +44 (0)870 705 1726
e: cr...@cinnabar-solutions.com
w: www.cinnabar-solutions.com



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Boot on SAN

2008-12-16 Thread Craig Morgan
I'd first resolve the OBP and HBA fcode issues, then I'd verify that  
you are starting from a cold-reset system. "Fast Data Access MMU Miss"  
is a notorious problem on SF280R and is very often associated with  
attempting to reboot after a warm cycle of the system.

We instituted a cold cycle of the platform years ago at every change  
to alleviate the problem, which seems more prevalent on very early  
issue mboards/CPU combos (we have a significant number of first  
release systems still doing sterling service!).

HTH

Craig

On 16 Dec 2008, at 15:28, Tim wrote:

> I'd start by upgrading the fcode on the QLogic adapter as well as  
> upgrading the obp on the server.
> http://filedownloads.qlogic.com/Files/TempDownlods/20340/qla23xxFcode2.12.tar.Z
>
> I'd also double check your LUN security on the storage array.  Seems  
> to me you might not have it configured properly, although if you  
> managed to get Solaris installed on it already from this systems,  
> that's probably a moot point.
>
> --Tim
>
>
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

-- 
Craig

Craig Morgan
t: +44 (0)791 338 3190
f: +44 (0)870 705 1726
e: craig.mor...@sun.com

~ 

  NOTICE:  This email message is for the sole use of the intended
  recipient(s) and may contain confidential and privileged information.
  Any unauthorized review, use, disclosure or distribution is  
prohibited.
  If you are not the intended recipient, please contact the sender by
  reply email and destroy all copies of the original message.
~ 




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How do you "re-attach" a 3 disk RAIDZ array to a new OS installation?

2009-01-20 Thread Craig Morgan
Luke,

You're looking for a `zpool import` (run with no arguments it scans for and
lists any importable pools), followed by a `zpool import <poolname>` once
Solaris has correctly recognised the attachment of the three original disks
(ie. they appear in `format` and/or `cfgadm -al`).

Complete docs here, now that you know what you are looking for ...
http://opensolaris.org/os/community/zfs/docs/zfsadmin.pdf

HTH

Craig

On 20 Jan 2009, at 13:05, Luke Scammell wrote:

> Hi,
>
> I'm completely new to Solaris, but have managed to bumble through  
> installing it to a single disk, creating an additional 3 disk RAIDZ  
> array and then copying over data from a separate NTFS formatted disk  
> onto the array using NTFS-3G.
>
> However, the single disk that was used for the OS installation has  
> since died (it was very old) and I have had to reinstall 2008.11  
> from scratch onto a new disk.  I would like to retain the data on  
> those 3 disks (the RAIDZ array) and "reattach" (what's the correct  
> terminology here?) them to the new OS installation without losing  
> any data.
>
> As I'm unsure of the terminology I should be using I've been unable  
> to find anything by searching either online or in the forums.  Any  
> assistance would be greatly received, thanks :)
> -- 
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

-- 
Craig

Craig Morgan
t: +44 (0)791 338 3190
f: +44 (0)870 705 1726
e: craig.mor...@sun.com

~ 

  NOTICE:  This email message is for the sole use of the intended
  recipient(s) and may contain confidential and privileged information.
  Any unauthorized review, use, disclosure or distribution is  
prohibited.
  If you are not the intended recipient, please contact the sender by
  reply email and destroy all copies of the original message.
~ 




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] "locate" disk command? locate broken disk?

2009-01-25 Thread Craig Morgan
There is an optional utility supplied by Sun (for all supported OSes) to map
the internal drives of the X4500/X4540 to their platform-specific device IDs.
It's called 'hd' and is on one of the support CDs supplied with the systems
(and can be downloaded if you've mislaid the disk!).

Documentation here (including link to download) ... 
http://docs.sun.com/source/820-1120-19/hdtool_new.html#0_64301

HTH

Craig

On 24 Jan 2009, at 18:39, Orvar Korvar wrote:

> If zfs says that one disk is broken, how do I locate it? It says  
> that disk c0t3d0 is broken. Which disk is that? I must locate them  
> during install?
>
> In Thumper it is possible to issue a ZFS command, and the  
> corresponding disk's lamp will flash? Is there any "zlocate" command  
> that will flash a particular disk's lamp?
> -- 
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

-- 
Craig

Craig Morgan
t: +44 (0)791 338 3190
f: +44 (0)870 705 1726
e: craig.mor...@sun.com

~ 

  NOTICE:  This email message is for the sole use of the intended
  recipient(s) and may contain confidential and privileged information.
  Any unauthorized review, use, disclosure or distribution is  
prohibited.
  If you are not the intended recipient, please contact the sender by
  reply email and destroy all copies of the original message.
~ 




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Problems using ZFS on Smart Array P400

2009-01-27 Thread Craig Morgan
>> scrub: resilver completed after 0h0m with 0 errors on Tue Jan 27  
>> 03:30:16 2009
>> config:
>>
>> NAME STATE READ WRITE CKSUM
>> test DEGRADED 0 0 0
>> raidz2 DEGRADED 0 0 0
>> c1t2d0p0 ONLINE 0 0 0
>> c1t3d0p0 ONLINE 0 0 0
>> c1t4d0p0 ONLINE 0 0 0
>> c1t5d0p0 UNAVAIL 0 0 0 cannot open
>> c1t6d0p0 ONLINE 0 0 0
>> c1t8d0p0 ONLINE 0 0 0
>>
>> errors: No known data errors
>> bash-3.00# zpool online test c1t5d0p0
>> warning: device 'c1t5d0p0' onlined, but remains in faulted state
>> use 'zpool replace' to replace devices that are no longer present
>>
>> bash-3.00# dmesg
>>
>> Jan 27 03:27:40 unknown cpqary3: [ID 823470 kern.notice] NOTICE:  
>> Smart Array
>> P400 Controller
>> Jan 27 03:27:40 unknown cpqary3: [ID 823470 kern.notice] Hot-plug  
>> drive
>> inserted, Port: 2I Box: 1 Bay: 3
>> Jan 27 03:27:40 unknown cpqary3: [ID 479030 kern.notice] Configured  
>> Drive ?
>> ... YES
>> Jan 27 03:27:40 unknown cpqary3: [ID 10 kern.notice]
>> Jan 27 03:27:40 unknown cpqary3: [ID 823470 kern.notice] NOTICE:  
>> Smart Array
>> P400 Controller
>> Jan 27 03:27:40 unknown cpqary3: [ID 834734 kern.notice] Media  
>> exchange
>> detected, logical drive 6
>> Jan 27 03:27:40 unknown cpqary3: [ID 10 kern.notice]
>> ...
>> Jan 27 03:36:24 unknown scsi: [ID 107833 kern.warning] WARNING:
>> /p...@38,0/pci1166,1...@10/pci103c,3...@0/s...@5,0 (sd6):
>> Jan 27 03:36:24 unknown SYNCHRONIZE CACHE command failed (5)
>> ...
>> Jan 27 03:47:58 unknown scsi: [ID 107833 kern.warning] WARNING:
>> /p...@38,0/pci1166,1...@10/pci103c,3...@0/s...@5,0 (sd6):
>> Jan 27 03:47:58 unknown drive offline
>> -- 
>> This message posted from opensolaris.org
>> ___
>> zfs-discuss mailing list
>> zfs-discuss@opensolaris.org
>> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
>
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

-- 
Craig

Craig Morgan
t: +44 (0)791 338 3190
f: +44 (0)870 705 1726
e: craig.mor...@sun.com

~ 

  NOTICE:  This email message is for the sole use of the intended
  recipient(s) and may contain confidential and privileged information.
  Any unauthorized review, use, disclosure or distribution is  
prohibited.
  If you are not the intended recipient, please contact the sender by
  reply email and destroy all copies of the original message.
~ 




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: [Fwd: What Veritas is saying vs ZFS]

2007-06-21 Thread Craig Morgan
It also introduces the Veritas sfop utility, which is the 'simplified'
front-end to VxVM/VxFS.


As "imitation is the sincerest form of flattery", this smacks of a  
desperate attempt to prove to their customers that Vx can be just as  
slick as ZFS.


More details at <http://www.symantec.com/enterprise/products/agents_options_details.jsp?pcid=2245&pvid=203_1&aoid=sf_simple_admin>,
including a ref. guide ...


Craig

On 21 Jun 2007, at 08:03, Selim Daoud wrote:




From: Ric Hall <[EMAIL PROTECTED]>
Date: 20 June 2007 22:46:48 BDT
To: DMA Ambassadors <[EMAIL PROTECTED]>
Cc: [EMAIL PROTECTED]
Subject: What Veritas is saying vs ZFS


Thought it might behoove us all to see this presentation from the
Veritas conference last week, and understand what they are saying vs ZFS
and our storage plans.

Some interesting performance claims to say the least

Ric


--
Craig

Craig Morgan
t: +44 (0)791 338 3190
f: +44 (0)870 705 1726
e: [EMAIL PROTECTED]

 
~

 NOTICE:  This email message is for the sole use of the intended
 recipient(s) and may contain confidential and privileged information.
 Any unauthorized review, use, disclosure or distribution is  
prohibited.

 If you are not the intended recipient, please contact the sender by
 reply email and destroy all copies of the original message.
 
~




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS needs a viable backup mechanism

2006-07-25 Thread Craig Morgan
Spare a thought also for the remote serviceability aspects of these systems:
if customers raise calls/escalations against such systems, then our remote
support/solution centre staff would find such an output useful in identifying
and verifying the config.

I don't have visibility of the Explorer development sites at the moment, but I
believe that the last publicly available Explorer I looked at (v5.4) still
didn't gather any ZFS-related info, which would scare me mightily for a FS
released in a production-grade Solaris 10 release ... how do we expect our
support personnel to engage??


Craig

On 18 Jul 2006, at 00:53, Matthew Ahrens wrote:


On Fri, Jul 07, 2006 at 04:00:38PM -0400, Dale Ghent wrote:

Add an option to zpool(1M) to dump the pool config as well as the
configuration of the volumes within it to an XML file. This file
could then be "sucked in" to zpool at a later date to recreate/
replicate the pool and its volume structure in one fell swoop. After
that, Just Add Data(tm).


Yep, this has been on our to-do list for quite some time:

RFE #6276640 "zpool config"
RFE #6276912 "zfs config"

--matt
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


--
Craig Morgan
Cinnabar Solutions Ltd

t: +44 (0)791 338 3190
f: +44 (0)870 705 1726
e: [EMAIL PROTECTED]
w: www.cinnabar-solutions.com



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS related kernel panic

2006-12-04 Thread Craig Morgan
If you take a look at these messages, the somewhat unusual condition that may
lead to unexpected behaviour (ie. fast give-up) is that whilst this is a SAN
connection it is achieved through a non-Leadville config; note the
fibre-channel and sd references. In a Leadville-compliant installation this
would be the ssd driver, hence you'd have to investigate the specific semantics
and driver tweaks that this system has applied to sd in this instance.


Maybe the sd retries have been `tuned` down ... ??

More info ... ie. an explorer would be useful ... before we jump to  
any incorrect conclusions.


Craig

On 4 Dec 2006, at 14:47, Douglas Denny wrote:


Last Friday, one of our V880s kernel panicked with the following
message.This is a SAN connected ZFS pool attached to one LUN. From
this, it appears that the SAN 'disappeared' and then there was a panic
shortly after.

Am I reading this correctly?

Is this normal behavior for ZFS?

This is a mostly patched Solaris 10 6/06 install. Before patching this
system we did have a couple of NFS related panics, always on Fridays!
This is the fourth panic, first time with a ZFS error. There are no
errors in zpool status.

Dec  1 20:30:21 foobar scsi: [ID 107833 kern.warning] WARNING:
/[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],1 (sd17):
Dec  1 20:30:21 foobar SCSI transport failed: reason 'incomplete':
retrying command
Dec  1 20:30:21 foobar scsi: [ID 107833 kern.warning] WARNING:
/[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],1 (sd17):
Dec  1 20:30:21 foobar SCSI transport failed: reason 'incomplete':
retrying command
Dec  1 20:30:21 foobar scsi: [ID 107833 kern.warning] WARNING:
/[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],1 (sd17):
Dec  1 20:30:21 foobar disk not responding to selection
Dec  1 20:30:21 foobar scsi: [ID 107833 kern.warning] WARNING:
/[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],1 (sd17):
Dec  1 20:30:21 foobar disk not responding to selection
Dec  1 20:30:21 foobar scsi: [ID 107833 kern.warning] WARNING:
/[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],1 (sd17):
Dec  1 20:30:21 foobar disk not responding to selection
Dec  1 20:30:21 foobar scsi: [ID 107833 kern.warning] WARNING:
/[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],1 (sd17):
Dec  1 20:30:21 foobar disk not responding to selection
Dec  1 20:30:22 foobar scsi: [ID 107833 kern.warning] WARNING:
/[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],1 (sd17):
Dec  1 20:30:22 foobar disk not responding to selection
Dec  1 20:30:22 foobar unix: [ID 836849 kern.notice]
Dec  1 20:30:22 foobar ^Mpanic[cpu2]/thread=2a100aedcc0:
Dec  1 20:30:22 foobar unix: [ID 809409 kern.notice] ZFS: I/O failure
(write on  off 0: zio 3004c0ce540 [L0 unallocated]
2L/2P DVA
[0]=<0:2ae190:2> fletcher2 uncompressed BE contiguous
birth=586818 fill=0
cksum=102297a2db39dfc:cc8e38087da7a38f:239520856ececf15:c2fd36
9cea9db4a1): error 5
Dec  1 20:30:22 foobar unix: [ID 10 kern.notice]
Dec  1 20:30:22 foobar genunix: [ID 723222 kern.notice]
02a100aed740 zfs:zio_done+284 (3004c0ce540, 0, a8, 70513bf0, 0,
60001374940)
Dec  1 20:30:22 foobar genunix: [ID 179002 kern.notice]   %l0-3:
03006319fc80 70513800 0005 0005
Dec  1 20:30:22 foobar   %l4-7: 7b224278 0002
0008f442 0005
Dec  1 20:30:22 foobar genunix: [ID 723222 kern.notice]
02a100aed940 zfs:zio_vdev_io_assess+178 (3004c0ce540, 8000, 10, 0,
0, 10)
Dec  1 20:30:22 foobar genunix: [ID 179002 kern.notice]   %l0-3:
0002 0001  0005
Dec  1 20:30:22 foobar   %l4-7: 0010 35a536bc
 00043d7293172cfc
Dec  1 20:30:22 foobar genunix: [ID 723222 kern.notice]
02a100aeda00 genunix:taskq_thread+1a4 (600012a0c38, 600012a0be0,
50001, 43d72c8bfb810,
2a100aedaca, 2a100aedac8)
Dec  1 20:30:22 foobar genunix: [ID 179002 kern.notice]   %l0-3:
0001 0600012a0c08 0600012a0c10 0600012a0c12
Dec  1 20:30:22 foobar   %l4-7: 030060946320 0002
 0600012a0c00
Dec  1 20:30:22 foobar unix: [ID 10 kern.notice]
Dec  1 20:30:22 foobar genunix: [ID 672855 kern.notice] syncing  
file systems...

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] weird thing with zfs

2006-12-05 Thread Craig Morgan
Given your description of the physical installation, I'd initially suspect a
marginal SCSI bus before proceeding further. What is the bus type, and what
length are the cables?


If you've got 7 devices and hence 7 individual enclosures, with associated
wiring between them, you may have exceeded the working length of the SCSI bus,
or have an issue with one of the later devices due to sync.


Have you tried moving the suspect drive to a different position in the chain
(ZFS will identify the disk irrespective of its Solaris device path)?

What card (or onboard) and platform are you running ...

Craig

On 5 Dec 2006, at 16:01, Krzys wrote:



ok, two weeks ago I did notice one of my disk in zpool got problems.
I was getting "Corrupt label; wrong magic number" messages, then  
when I looked in format it did not see that disk... (last disk) I  
had that setup running for few months now and all of the sudden  
last disk failed. So I ordered another disk, had it replaced like a  
week ago, I did issue replace command after disk replacement, it  
was resilvering disks since forever, then I got hints from this  
group that snaps could be causing it so yesterday I did disable  
snaps, and this morning I did notice the same disk that I replaced  
is gone... Does it seem weird that this disk would fail? It's a new  
disk... I have Solaris 10 U2, 4 internal drives and then 7 external  
drives which are in single enclousures connected via scsi chain to  
each other... So it seems like last disk is failing. Those nipacks  
from sun have self termination so there is no terminator at the  
end... Any ideas what should I do? Do I need to order another drive  
and replace that one too? Or will it happen again? What do you  
think could be the problem? Ah, when I look at that enclosure I do  
see green light on it so it seems like it did not fail...


format
Searching for disks...
efi_alloc_and_init failed.
done


AVAILABLE DISK SELECTIONS:
   0. c1t0d0 sec 809>

  /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
   1. c1t1d0 sec 809>

  /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
   2. c1t2d0 
  /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
   3. c1t3d0 
  /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
   4. c3t0d0 
  /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
   5. c3t1d0 
  /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
   6. c3t2d0 
  /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
   7. c3t3d0 
  /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
   8. c3t4d0 
  /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
   9. c3t5d0 
  /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
  10. c3t6d0 
  /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0



zpool status -v
  pool: mypool
 state: ONLINE
 scrub: none requested
config:

NAMESTATE READ WRITE CKSUM
mypool  ONLINE   0 0 0
  mirrorONLINE   0 0 0
c1t2d0  ONLINE   0 0 0
c1t3d0  ONLINE   0 0 0

errors: No known data errors

  pool: mypool2
 state: DEGRADED
 scrub: resilver completed with 0 errors on Mon Dec  4 22:34:57 2006
config:

NAME  STATE READ WRITE CKSUM
mypool2   DEGRADED 0 0 0
  raidz   DEGRADED 0 0 0
c3t0d0ONLINE   0 0 0
c3t1d0ONLINE   0 0 0
c3t2d0ONLINE   0 0 0
c3t3d0ONLINE   0 0 0
c3t4d0ONLINE   0 0 0
c3t5d0ONLINE   0 0 0
replacing UNAVAIL  0   775 0  insufficient  
replicas

  c3t6d0s0/o  UNAVAIL  0 0 0  cannot open
  c3t6d0  UNAVAIL  0   940 0  cannot open

errors: No known data errors

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Need Clarification on ZFS quota property.

2006-12-13 Thread Craig Morgan
This is probably an attempt to 'short-stroke' a larger disk with the intention
of utilising only a small amount of the disk surface; as a technique it used to
be quite common for certain apps (notably DBs). Hence you saw deployments of
quite large disks with perhaps only 1/4-1/2 physical utilisation.


As the industry has moved toward HW RAID, it's less prevalent, but it still
has some merits on occasion.
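
If the original aim is simply to cap how much of the pool one filesystem can
consume, the ZFS-native route is the quota/reservation properties rather than
carving up the disk (the names below are only illustrative):

   zfs set quota=2g tank/data          # hard ceiling on space the dataset and its descendants may use
   zfs set reservation=2g tank/data    # guarantee that much space is held back for it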


Craig

On 13 Dec 2006, at 16:08, Darren Dunham wrote:


  $mkfs -F vxfs -o bsize=1024 /dev/rdsk/c5t20d9s2 2048000

  The above command creates vxfs file system on first 2048000  
blocks (each block size is 1024 bytes)  of  /dev/rdsk/c5t20d9s2 .


Like this is there a option to limit the size of ZFS file system.? if
so what it is ? how it is ?


No, there's nothing similar.

Space is managed at a pool level.  Writes by any filesystem may occur
anywhere in the pool.

Can I ask why this would be useful to you?  What can you accomplish by
limiting the filesystem to a particular location?  There might be
alternatives.

--
Darren Dunham
[EMAIL PROTECTED]
Senior Technical Consultant    TAOS    http://www.taos.com/
Got some Dr Pepper?    San Francisco, CA bay area

 < This line left intentionally blank to confuse you. >
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss