If you are attaching a new enclosure, make a new zpool in that enclosure with a
temporary name and 'zfs send' snapshots from the old pool to the new pool,
reading in with 'zfs recv'.
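A rough sketch of that flow (pool names, devices and the snapshot name here are made up):
  zpool create newpool raidz c5t0d0 c5t1d0 c5t2d0       # temporary pool on the new enclosure
  zfs snapshot -r oldpool@migrate                       # recursive snapshot of the old pool
  zfs send -R oldpool@migrate | zfs recv -Fd newpool    # replicate it into the new pool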
Craig Cory
Senior Instructor
ExitCertified.com
On Dec 1, 2012, at 3:20 AM, Albert Shih
scuss/2009-December/034721.html.
Regards,
Craig
When viewing a raidz|raidz1|raidz2 pool, 'zpool list|status' will report the
total "device" space; i.e., three 1TB drives in a raidz will show approx. 3TB of space.
'zfs list' will show available FILESYSTEM space; i.e., three 1TB raidz disks give approx.
2TB of space.
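Roughly, for a hypothetical pool 'tank' built from three 1TB disks in a raidz:
  zpool list tank    # SIZE shows raw device space, approx. 3T
  zfs list tank      # USED+AVAIL show filesystem space after parity, approx. 2T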
Logic wrote:
> Ian Collins (i...@ianshome.com) wrote:
Gruber (http://daringfireball.net/linked/2009/10/23/zfs) is normally
well-informed and has some feedback; it seems possible that legal canned it.
--Craig
On 23 Oct 2009, at 20:42, Tim Cook wrote:
On Fri, Oct 23, 2009 at 2:38 PM, Richard Elling wrote:
FYI,
The ZFS project on MacOS fo
Hi Chris,
It sounds like there is some confusion with the recommendation about raidz?
vdevs. It is recommended that each raidz? TLD be a "single-digit" number of
disks - so up to 9. The total number of these single-digit TLDs is not
practically limited.
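For instance, a larger pool might be built from several 9-disk top-level vdevs (device names below are made up):
  zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 c1t8d0
  zpool add    tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0 c2t8d0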
Craig
Christopher White wrote:
Try 'fmdump -e' and then 'fmdump -eV'; it could be a pathological disk just this
side of failure doing heavy retries that is dragging the pool down.
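Something along these lines (the disk name is only an example):
  fmdump -e                 # summary of logged error telemetry
  fmdump -eV | less         # full detail; look for repeated transport/retry events
  iostat -En c2t3d0         # per-disk hard/soft/transport error counters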
Craig
--
Craig Morgan
On 18 Dec 2011, at 16:23, Jan-Aage Frydenbø-Bruvoll wrote:
> Hi,
>
> On Sun, Dec 18, 2011 at 22:14, Nathan Kroene
you don't know who that would be, drop me a line and I'll find someone
local to you …
We tend to go with the LSI cards, but even there, there are some issues
with regard to Dell supply or buying over the counter.
HTH
Craig
On 6 Jan 2012, at 01:28, Ray Van Dolson wrote:
> We are lookin
g-term storage reliability and minimal
maintenance). I want to set up the zpools correctly the first time to avoid any
future issues.
Thanks for your help and insight.
-- Craig
server?
Thanks,
Craig
My config:
[u]Use:[/u] Home File Server supporting PC and Mac machines -- audio/video
files, home directories (documents & photos), and backup images. Will not run
any other services.
[u]Objective:[/u] adequate performance for home use; maximize protec
group) and Macs; running
OpenSolaris B134.
Thanks,
Craig
html
>
> A recap of the history at:
>
> http://www.theregister.co.uk/2010/09/09/oracle_netapp_zfs_dismiss/
>
>
> evice2 to associate
> with the new name i.e. newpool.
>
> I want to do the same for a snapshot device created using an array/hardware
> snapshot.
>
> Thanks & Regards,
> sridhar.
going forward, but you'd have to check
with Oracle on integration plans.
HTH
Craig
On 18 Nov 2010, at 18:10, SR wrote:
> SUNWzfsg (zfs admin gui) seems to be missing from Solaris 11 express. Is
> this no longer available or has it been integrated with something else?
>
> Su
data currently would be quite
efficient, but there is one pathology in current ZFS which impacts this
somewhat: last time I looked, each ARC ref to a de-duped block leads to an
inflated ARC copy of the data, hence a highly ref'ed block (20x for instance)
could exist 20x in an inflated state in AR
inous output somewhat.
BTW, you should also be looking at
mpathadm show LU
to successfully decode the virtual device entries.
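For example (the LU path shown is purely illustrative):
  mpathadm list lu                                                  # enumerate the multipathed logical units
  mpathadm show lu /dev/rdsk/c0t600A0B800012ABCD0000000000000000d0s2   # decode one entry back to its paths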
Craig
On 1 Mar 2011, at 16:10, Cindy Swearingen wrote:
> (Dave P...I sent this yesterday, but it bounced on your email address)
>
> A small comment fr
ts in poor-man's cluster scenarios and/or HA
configs such
as NexentaStor.
Craig
On 1 Mar 2011, at 16:35, Garrett D'Amore wrote:
> The PCIe based ones are good (typically they are quite fast), but check
> the following first:
>
> a) do you need an SLOG at all? Some worklo
But even 'zfs list -o space' is now limited by not displaying snapshots
by default, so the catch-all is now
zfs list -o space -t all
shouldn't miss anything then …
;-)
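For reference, '-o space' is shorthand for roughly these columns:
  zfs list -o space -t all
  # NAME  AVAIL  USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD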
Craig
On 10 Mar 2011, at 03:38, Richard Elling wrote:
>
> On Mar 9, 2011, at 4:05
dd larger disks to a mirrored pool, you can replace the mirror
members, one at a time, with the larger disk and wait for resilver to
complete. Then replace the other disk, resilver again.
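A sketch with made-up device names, replacing a c0t1d0/c0t2d0 mirror with larger disks c0t3d0/c0t4d0:
  zpool replace tank c0t1d0 c0t3d0    # swap in the first larger disk
  zpool status tank                   # wait until the resilver completes
  zpool replace tank c0t2d0 c0t4d0    # then swap the second, and resilver again
  # on recent builds, 'zpool set autoexpand=on tank' (or 'zpool online -e')
  # lets the pool grow to the new size once both sides are replaced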
Craig
--
Craig Cory
Senior Instructor :: ExitCertified
: Oracle/Sun Certified System Administrator
: O
ctly what you'll need to do. Without the -f, zpool will stop and
warn you that you have a "mismatch" in reliability. So, to get the space:
zpool add -f
Then later,
zpool attach
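Fleshed out with hypothetical device names:
  zpool add -f tank c1t2d0            # forces the unmirrored disk in, to get the space
  zpool attach tank c1t2d0 c1t3d0     # later, attach a second disk to turn it into a mirror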
HTH
Craig
--
Craig Cory
Senior Instructor :: ExitCertified
: Oracle/Sun Certified Sy
two pools, one ~500GB and one ~300GB.
As long as the mirrored pairs match, they do not all have to be the same size across
the pool.
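For example, a pool mixing a ~500GB pair and a ~300GB pair (hypothetical devices):
  zpool create tank mirror c0t0d0 c0t1d0 mirror c1t0d0 c1t1d0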
Craig
Tiernan OToole wrote:
> Thanks for the info. need to rebuild my machine and ZFS pool kind of new
> to this and realized i built it as a stripe, not a mirr
uted a cold cycle of the platform years ago at every change
to alleviate the problem, which seems more prevalent on very early-issue
motherboard/CPU combos (we have a significant number of first-release
systems still doing sterling service!).
HTH
Craig
On 16 Dec 2008, at 15:28, Tim wrote:
>
s.org/os/community/zfs/docs/zfsadmin.pdf
HTH
Craig
On 20 Jan 2009, at 13:05, Luke Scammell wrote:
> Hi,
>
> I'm completely new to Solaris, but have managed to bumble through
> installing it to a single disk, creating an additional 3 disk RAIDZ
> array and then copying
e mislaid the
disk!).
Documentation here (including link to download) ...
http://docs.sun.com/source/820-1120-19/hdtool_new.html#0_64301
HTH
Craig
On 24 Jan 2009, at 18:39, Orvar Korvar wrote:
> If zfs says that one disk is broken, how do I locate it? It says
> that disk c0t3d0 is broken.
AID sub-system will be
capable of automated recovery in most circumstances of simple failures.
Craig
On 27 Jan 2009, at 13:00, Edmund White wrote:
> I'm testing the same thing on a DL380 G5 with P400 controller. I set
> individual RAID 0 logical drives for each disk. I ended up with
NLINE 0 0 0
/300d1 ONLINE 0 0 0
/300d2 ONLINE 0 0 0
/300d3 ONLINE 0 0 0
errors: No known data errors
---
Does this describe what y
ity representation, some
computation will be needed to render the live data. This would have to add
*some* overhead to the I/O.
Craig Cory
http://www.symantec.com/enterprise/products/agents_options_details.jsp?pcid=2245&pvid=203_1&aoid=sf_simple_admin
including a ref. guide ...
Craig
On 21 Jun 2007, at 08:03, Selim Daoud wrote:
From: Ric Hall <[EMAIL PROTECTED]>
Date: 20 June 2007 22:46:48 BDT
To: DMA Am
The GUI is an implementation of the webmin tool. You must be running the
server - started with
/usr/sbin/smcwebserver start
Then, access it with
https://<hostname>:6789/zfs
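If memory serves, the same webconsole service can also be managed through SMF:
  svcadm enable svc:/system/webconsole:console
  svcs webconsole        # confirm it is online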
Regards
Craig Cory
Senior Instructor :: ExitCertified
: Sun Certified System Administrator
: Sun Certified Network
Foot is in mouth! My mistake - confused the two tools. That's what I get for
answering off the cuff.
The rest still stands, as confirmed elsewhere.
Craig
In response to Boyd Adamson, who said:
> "Craig Cory" <[EMAIL PROTECTED]> writes:
>> The GUI is an implemen
Hello, I am fairly new to Solaris and ZFS. I am testing both out in a sandbox
at work. I am playing with virtual machines running on a Windows front-end that
connects to a ZFS back-end for its data needs. As far as I know my two options
are sharesmb and shareiscsi for data sharing. I have a cou
to make this fit
well in a Windows world.
-Craig
> sharesmb presents ntfs to windows, so you're still hampered by that file
> system's 'features' such as lots of broadcast packets and a long timeout.
>
> One other option you should consider is using NFS, for which you
I have had a brief introduction to ZFS and while discussing it with some other
folks the question came up about use with multipathed storage. What, if any,
configuration or interaction does ZFS have with a multipathed storage setup -
however it may be managed.
thanks!
Craig Cory
Senior
rt personnel to
engage??
Craig
On 18 Jul 2006, at 00:53, Matthew Ahrens wrote:
On Fri, Jul 07, 2006 at 04:00:38PM -0400, Dale Ghent wrote:
Add an option to zpool(1M) to dump the pool config as well as the
configuration of the volumes within it to an XML file. This file
could then be "su
rrect conclusions.
Craig
On 4 Dec 2006, at 14:47, Douglas Denny wrote:
Last Friday, one of our V880s kernel panicked with the following
message.This is a SAN connected ZFS pool attached to one LUN. From
this, it appears that the SAN 'disappeared' and then there was a panic
shortly after
u may have exceeded the working
length of the scsi bus, or have an issue with one of the later
devices due to sync.
Have you tried the same drive moved in the chain (as ZFS will id the
disk irrespective of its solaris path)?
What card (or onboard) and platform are you running ...
Craig
On
2 physical utilisation.
As the industry has moved toward HW RAID, it's less prevalent, but
still has some merits on occasion.
Craig
On 13 Dec 2006, at 16:08, Darren Dunham wrote:
$mkfs -F vxfs -o bsize=1024 /dev/rdsk/c5t20d9s2 2048000
The above command creates vxfs file system on fir
(incapacitated) ngz admins from the
gz admins.
Regards,
Craig
On Wed, May 3, 2006 3:05 pm, Eric Schrock said:
> On Wed, May 03, 2006 at 02:47:57PM -0700, eric kustarz wrote:
>> Jason Schroeder wrote:
>>
>> >eric kustarz wrote:
>> >
>> >>The following cas
I haven't seen any mention of it in this forum yet, so FWIW you might be
interested in the details of ZFS deduplication mentioned in this recently-filed
case.
Case log: http://arc.opensolaris.org/caselog/PSARC/2009/571/
Discussion: http://www.opensolaris.org/jive/thread.jspa?threadID=115507
V
Sad to hear that Apple is apparently going in another direction.
http://www.macrumors.com/2009/10/23/apple-shuts-down-open-source-zfs-project/
-cheers, CSB
I just stumbled across a clever visual representation of deduplication:
http://loveallthis.tumblr.com/post/166124704
It's a flowchart of the lyrics to "Hey Jude". =-)
Nothing is compressed, so you can still read all of the words. Instead, all of
the duplicates have been folded together. -ch
Great stuff, Jeff and company. You all rock. =-)
A potential topic for the follow-up posts: auto-ditto, and the philosophy
behind choosing a default threshold for creating a second copy.
On a related note, it looks like Constantin is developing a nice SMF service
for auto scrub:
http://blogs.sun.com/constantin/entry/new_opensolaris_zfs_auto_scrub
This is an adaptation of the well-tested auto snapshot service. Amongst other
advantages, this approach means that you don't have t
Tristan, there's another dedup system for "zfs send" in PSARC 2009/557. This
can be used independently of whether the in-pool data was deduped.
Case log: http://arc.opensolaris.org/caselog/PSARC/2009/557/
Discussion: http://www.opensolaris.org/jive/thread.jspa?threadID=115082
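A minimal sketch, assuming the deduplicated-stream flag (-D) that case describes; dataset names are made up:
  zfs send -D tank/data@snap | zfs recv backup/data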
So I believe your
Joerg just posted a lengthy answer to the fsck question:
http://www.c0t0d0s0.org/archives/6071-No,-ZFS-really-doesnt-need-a-fsck.html
Good stuff. I see two answers to "nobody complained about lying hardware
before ZFS".
One: The user has never tried another filesystem that tests for end-to-en
Roman, I like to check here for recent putbacks:
http://hg.genunix.org/onnv-gate.hg/shortlog
To see new cases: http://arc.opensolaris.org/caselog/PSARC/
Also, to see what should appear in upcoming builds (although not recently
updated): http://hub.opensolaris.org/bin/view/Community+Group+on/
I don't have any problem with a rewrite, but please allow a non-GUI-dependent
solution for headless servers. Also please add rsync as an option, rather than
replacing zfs send/recv. Thanks.
You may be interested in PSARC 2009/670: "Read-Only Boot from ZFS Snapshot".
Here's the description from:
http://arc.opensolaris.org/caselog/PSARC/2009/670/20091208_joep.vesseur
> Allow for booting from a ZFS snapshot. The boot image will be read-only.
> Early in boot a clone of the root is cr
I am also accustomed to seeing diluted properties such as compressratio. IMHO
it could be useful (or perhaps just familiar) to see a diluted dedup ratio for
the pool, or maybe see the size / percentage of data used to arrive at
dedupratio.
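In the meantime the raw pool-wide ratio is at least exposed as a pool property, e.g. (pool name is hypothetical):
  zpool get dedupratio tank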
As Jeff points out, there is enough data available to
Mike, I believe that ZFS treats runs of zeros as holes in a sparse file, rather
than as regular data. So they aren't really present to be counted for
compressratio.
http://blogs.sun.com/bonwick/entry/seek_hole_and_seek_data
http://mail.opensolaris.org/pipermail/zfs-discuss/2008-April/017565.htm
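A quick way to see the effect on a given file (path is made up): compare the apparent size with the blocks actually allocated:
  ls -l /tank/fs/image.raw      # logical (apparent) size
  du -h /tank/fs/image.raw      # space actually allocated; holes are not counted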
I take your point Mike. Yes, this seems to be an inconsistency in accounting.
I have simply become accustomed to this (esp. when dealing with virtual disk
images), so I just don't think about it, but it *is* harder to balance accounts.
For instance, if my guest cleans up its vdisk by writing