provide an invitation to the new list shortly.
Thanks for your patience.
Cindy
On 03/20/13 15:05, Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
wrote:
I can't seem to find any factual indication that opensolaris.org mailing lists
are going away, and I can't even find the reference to whoever said it was EOL
in a few weeks ... a few weeks ago.
So ... are these mailing lists going bye-bye?
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Hans J. Albertsson
>
> I'm looking for something that would make me afterwards understand what,
> say, commands like zpool import ... or zfs send ... actually do, and
> some idea as to why, so
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Andrew Werchowiecki
>
> muslimwookie@Pyzee:~$ sudo zpool add aggr0 cache c25t10d1p2
> Password:
> cannot open '/dev/dsk/c25t10d1p2': I/O error
> muslimwookie@Pyzee:~$
>
> I have two SSDs in th
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
>
> Tim, Simon, Volker, Chris, and Erik - How do you use it?
> I am making the informed guess that you're using it primarily on non-
> laptops, which have second hard drives,
> From: Tim Cook [mailto:t...@cook.ms]
>
> We can agree to disagree.
>
> I think you're still operating under the auspices of Oracle wanting to have an
> open discussion. This is patently false.
I'm just going to respond to this by saying thank you, Cindy, Casper, Neil, and
others, for all t
> From: Tim Cook [mailto:t...@cook.ms]
>
> Why would I spend all that time and
> energy participating in ANOTHER list controlled by Oracle, when they have
> shown they have no qualms about eliminating it with basically 0 warning, at
> their whim?
From an open source, community perspective, I und
> From: Tim Cook [mailto:t...@cook.ms]
> Sent: Friday, February 15, 2013 11:14 AM
>
> I have a few coworkers using it. No horror stories and it's been in use
> about 6
> months now. If there were any showstoppers I'm sure I'd have heard loud
> complaints by now :)
So, I have discovered a *coup
> From: cindy swearingen [mailto:cindy.swearin...@gmail.com]
>
> This was new news to us too and we were just talking over some options
> yesterday
> afternoon so please give us a chance to regroup and provide some
> alternatives.
>
> This list will be shut down but we can start a new one on java.n
> From: Bob Friesenhahn [mailto:bfrie...@simple.dallas.tx.us]
>
> Good for you. I am sure that Larry will be contacting you soon.
hehehehehe... he knows better. ;-)
> Previously Oracle announced and invited people to join their
> discussion forums, which are web-based and virtually dead.
> From: sriram...@gmail.com [mailto:sriram...@gmail.com] On Behalf Of
> Sriram Narayanan
>
> Or, given that this is a weekend, we assume that someone within Oracle
> would see this mail only on Monday morning Pacific Time, then send out
> some mails within, and be able to respond in public only by
> From: Tim Cook [mailto:t...@cook.ms]
>
> That would be the logical decision, yes. Not to poke fun, but did you really
> expect an official response after YEARS of nothing from Oracle? This is the
> same company that refused to release any Java patches until the DHS issued
> a national warning
Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris)
Sent: Friday, February 15, 2013 11:00 AM
To: zfs-discuss@opensolaris.org
Subject: [zfs-discuss] zfs-discuss mailing list & opensolaris EOL
So, I hear, in a couple weeks' time, opensolaris.org is shutting down. What
does that mean
Anybody using maczfs / ZEVO? Have good or bad things to say, in terms of
reliability, performance, features?
My main reason for asking is this: I have a mac, I use Time Machine, and I
have VM's inside. Time Machine, while great in general, has the limitation of
being unable to intelligently
So, I hear, in a couple weeks' time, opensolaris.org is shutting down. What
does that mean for this mailing list? Should we all be moving over to
something at illumos or something?
I'm going to encourage somebody in an official capacity at opensolaris to
respond...
I'm going to discourage uno
> From: Pasi Kärkkäinen [mailto:pa...@iki.fi]
>
> What's the correct way of finding out what actually uses/reserves that 1023G
> of FREE in the zpool?
Maybe this isn't exactly what you need, but maybe:
for fs in `zfs list -H -o name` ; do echo $fs ; zfs get
reservation,refreservation,usedbyrefr
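A hedged guess at the complete one-liner (the preview cuts it off; the third property name and the trailing arguments are my assumption):

for fs in `zfs list -H -o name` ; do echo $fs ; zfs get reservation,refreservation,usedbyrefreservation $fs ; done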
> From: Gregg Wonderly [mailto:gregg...@gmail.com]
>
> This is one of the greatest annoyances of ZFS. I don't really understand how
> a zvol's space cannot be accurately enumerated from top to bottom of the
> tree in 'df' output etc. Why does a "zvol" divorce the space used from the
> root of
I have a bunch of VM's, and some samba shares, etc, on a pool. I created the
VM's using zvol's, specifically so they would have an appropriate
refreservation and never run out of disk space, even with snapshots. Today, I
ran out of disk space, and all the VM's died. So obviously it didn't wor
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
>
> I can tell you I've had terrible everything rates when I used dedup.
So, the above comment isn't fair, really. The truth is here:
http://mail.opensolaris.org/pipermail/zf
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Koopmann, Jan-Peter
>
> all I can tell you is that I've had terrible scrub rates when I used dedup.
I can tell you I've had terrible everything rates when I used dedup.
> The
> DDT was a bi
> From: Robert Milkowski [mailto:rmilkow...@task.gda.pl]
>
> That is one thing that always bothered me... so it is ok for others, like
> Nexenta, to keep stuff closed and not in open, while if Oracle does it they
> are bad?
Oracle, like Nexenta, and my own company CleverTrove, and Microsoft, and
> From: Gary Mills [mailto:gary_mi...@fastmail.fm]
>
> > In solaris, I've never seen it swap out idle processes; I've only
> > seen it use swap for the bad bad bad situation. I assume that's all
> > it can do with swap.
>
> You would be wrong. Solaris uses swap space for paging. Paging out
> u
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Nico Williams
>
> As for swap... really, you don't want to swap. If you're swapping you
> have problems.
For clarification, the above is true in Solaris and derivatives, but it's not
unive
> From: Darren J Moffat [mailto:darr...@opensolaris.org]
>
> Support for SCSI UNMAP - both issuing it and honoring it when it is the
> backing store of an iSCSI target.
When I search for scsi unmap, I come up with all sorts of documentation that
... is ... like reading a medical journal when all
> From: Sašo Kiselkov [mailto:skiselkov...@gmail.com]
>
> as far as incompatibility among products, I've yet to come
> across it
I was talking about ... install solaris 11, and it's using a new version of zfs
that's incompatible with anything else out there. And vice-versa. (Not sure
if featu
> From: Richard Elling [mailto:richard.ell...@gmail.com]
>
> I disagree the ZFS is developmentally challenged.
As an IT consultant, 8 years ago before I heard of ZFS, it was always easy to
sell Ontap, as long as it fit into the budget. 5 years ago, whenever I told
customers about ZFS, it was
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Nico Williams
>
> To decide if a block needs dedup one would first check the Bloom
> filter, then if the block is in it, use the dedup code path, else the
> non-dedup codepath and insert the bl
> From: Richard Elling [mailto:richard.ell...@gmail.com]
> Sent: Saturday, January 19, 2013 5:39 PM
>
> the space allocation more closely resembles a variant
> of mirroring,
> like some vendors call "RAID-1E"
Awesome, thank you. :-)
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Nico Williams
>
> I've wanted a system where dedup applies only to blocks being written
> that have a good chance of being dups of others.
>
> I think one way to do this would be to keep a sca
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Jim Klimov
>
> And regarding the "considerable activity" - AFAIK there is little way
> for ZFS to reliably read and test "TXGs newer than X"
My understanding is like this: When you make a sn
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Stephan Budach
>
> I am always experiencing chksum errors while scrubbing my zpool(s), but
> I never experienced chksum errors while resilvering. Does anybody know
> why that would be?
When y
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Bob Friesenhahn
>
> If almost all of the I/Os are 4K, maybe your ZVOLs should use a
> volblocksize of 4K? This seems like the most obvious improvement.
Oh, I forgot to mention - The above log
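For reference, a minimal sketch of Bob's suggestion (hypothetical pool and zvol names; volblocksize can only be set when the zvol is created, so an existing zvol would have to be recreated or re-sent):

# zfs create -V 100G -o volblocksize=4K tank/vm-disk0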
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Eugen Leitl
>
> I have a pool of 8x ST31000340AS on an LSI 8-port adapter as
> a raidz3 (no compression nor dedup) with reasonable bonnie++
> 1.03 values, e.g. 145 MByte/s Seq-Write @ 48% CPU
> From: Cindy Swearingen [mailto:cindy.swearin...@oracle.com]
>
> Which man page are you referring to?
>
> I see the zfs receive -o syntax in the S11 man page.
Oh ... It's the latest openindiana. So I suppose it must be a new feature
post-rev-28 in the non-open branch...
But it's no big deal
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
>
> zfs send foo/bar@42 | zfs receive -o compression=on,sync=disabled biz/baz
>
> I have not yet tried this syntax. Because you mentioned it, I looked for it
> in
> the man
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of bob netherton
>
> You can, with recv, override any property in the sending stream that can
> be
> set from the command line (ie, a writable).
>
> # zfs send repo/support@cpu-0412 | zfs recv
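The preview cuts off the receive side; a hedged sketch of the complete pipeline (hypothetical target dataset, with one -o flag per overridden property, per the receive -o syntax discussed elsewhere in this thread):

# zfs send repo/support@cpu-0412 | zfs recv -o compression=on -o sync=disabled tank/support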
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Bob Netherton
>
> At this point, the only thing would be to use 11.1 to create a new pool at
> 151's
> version (-o version=) and top level dataset (-O version=). Recreate the file
> system h
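A minimal sketch of that command (hypothetical pool and device names; 28 and 5 are, to my knowledge, the pool and filesystem versions an oi_151 system understands):

# zpool create -o version=28 -O version=5 newpool c0t1d0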
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of sol
>
> I added a 3TB Seagate disk (ST3000DM001) and ran the 'format' command but
> it crashed and dumped core.
>
> However the zpool 'create' command managed to create a pool on the whole
> d
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Fred Liu
>
> BTW, has anyone played with NDMP in Solaris? Or is it feasible to transfer snapshots via the
> NDMP protocol?
I've heard you could, but I've never done it. Sorry I'm not much help, except
as
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Freddie Cash
>
> On Thu, Dec 6, 2012 at 12:35 AM, Albert Shih wrote:
> On 01/12/2012 at 08:33:31 -0700, Jan Owoc wrote:
>
> > 2) replace the disks with larger ones one-by-one, waiting for a
>
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Enda o'Connor - Oracle Ireland -
>
> Say I have an ldoms guest that is using zfs root pool that is mirrored,
> and the two sides of the mirror are coming from two separate vds
> servers, that i
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Jim Klimov
>
> this is
> the part I am not certain about - it is roughly as cheap to READ the
> gzip-9 datasets as it is to read lzjb (in terms of CPU decompression).
Nope. I know LZJB is not
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Chris Dunbar - Earthside, LLC
>
> # zpool replace tank c11t4d0
> # zpool clear tank
I would expect this to work, or detach/attach. You should scrub periodically,
and ensure no errors after s
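A minimal sketch of that periodic check, using the pool name from the quoted message:

# zpool scrub tank
# zpool status -xv tank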
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Freddie Cash
>
> And you can try 'zpool online' on the failed drive to see if it comes back
> online.
Be cautious here - I have an anecdote, which might represent a trend in best
practice, or
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Jim Klimov
>
> I really hope someone better versed in compression - like Saso -
> would chime in to say whether gzip-9 vs. lzjb (or lz4) sucks in
> terms of read-speeds from the pools. My HDD-b
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Eugen Leitl
>
> can I make e.g. LSI SAS3442E
> directly do SSD caching (it says something about CacheCade,
> but I'm not sure it's an OS-side driver thing), as it
> is supposed to boost IOPS? U
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Sami Tuominen
>
> How can one remove a directory containing corrupt files or a corrupt file
> itself? For me rm just gives input/output error.
I was hoping to see somebody come up with an answ
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Jim Klimov
>
> I wonder if it would make weird sense to get the boxes, forfeit the
> cool-looking Fishworks, and install Solaris/OI/Nexenta/whatever to
> get the most flexibility and bang for
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Ian Collins
>
> I look after a remote server that has two iSCSI pools. The volumes for
> each pool are sparse volumes and a while back the target's storage
> became full, causing weird and won
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Jim Klimov
>
> As for ZIL - even if it is used with the in-pool variant, I don't
> think your setup needs any extra steps to disable it (as Edward likes
> to suggest), and most other setups don
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Nathan Kroenert
>
> I chopped into a few slices - p0 (partition table), p1 128GB, p2 60gb.
>
> As part of my work, I have used it both as a RAW device (cxtxdxp1) and
> wrapped partition 1 with
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
>
> An easier event to trigger is the starting of the virtualbox guest. Upon vbox
> guest starting, check the service properties for that instance of vboxsvc, and
> chmod if
> From: Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
>
> > Found quite a few posts on
> > various
> > forums of people complaining that RDP with external auth doesn't work (or
> > not reliably),
>
> Actually, it does work, and it works
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Jim Klimov
>
> Well, as a simple stone-age solution (to simplify your SMF approach),
> you can define custom attributes on dataset, zvols included. I think
> a custom attr must include a colon
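A minimal sketch of Jim's suggestion (hypothetical property name and dataset; user-defined properties must contain a colon, which is what distinguishes them from native ones):

# zfs set org.example:vbox-guest=myguest tank/zvol1
# zfs get org.example:vbox-guest tank/zvol1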
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Geoff Nordli
>
> Instead of using vdi, I use comstar targets and then use vbox built-in scsi
> initiator.
Based on my recent experiences, I am hesitant to use the iscsi ... I don't know
if it
When I google around for anyone else who cares and may have already solved the
problem before I came along - it seems we're all doing the same thing for the
same reason. If by any chance you are running VirtualBox on a solaris /
opensolaris / openindiana / whatever ZFS host, you could of course
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Dan Swartzendruber
>
> Well, I think I give up for now. I spent quite a few hours over the last
> couple of days trying to get gnome desktop working on bare-metal OI,
> followed by virtualbox.
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Eugen Leitl
>
> On Thu, Nov 08, 2012 at 04:57:21AM +, Edward Ned Harvey
> (opensolarisisdeadlongliveopensolaris) wrote:
>
> > Yes you can, with the help of Dell,
> From: Karl Wagner [mailto:k...@mouse-hole.com]
>
> If I was doing this now, I would probably use the ZFS aware OS bare metal,
> but I still think I would use iSCSI to export the ZVols (mainly due to the
> ability
> to use it across a real network, hence allowing guests to be migrated simply)
Y
> From: Dan Swartzendruber [mailto:dswa...@druber.com]
>
> I have to admit Ned's (what do I call you?) idea is interesting. I may give
> it a try...
Yup, officially Edward, most people call me Ned.
I contributed to the OI VirtualBox instructions. See here:
http://wiki.openindiana.org/oi/Virtual
> From: Dan Swartzendruber [mailto:dswa...@druber.com]
>
> Now you have me totally confused. How does your setup get data from the
> guest to the OI box? If thru a wire, if it's gig-e, it's going to be
> 1/3-1/2 the speed of the other way. If you're saying you use 10gig or
> some-such, we're ta
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Karl Wagner
>
> I am just wondering why you export the ZFS system through NFS?
> I have had much better results (albeit spending more time setting up) using
> iSCSI. I found that performance wa
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Jim Klimov
>
> the VM running "a ZFS OS" enjoys PCI-pass-through, so it gets dedicated
> hardware access to the HBA(s) and harddisks at raw speeds, with no
> extra layers of lags in between.
> From: Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
>
> Stuff like that. I could go on, but it basically comes down to: With
> openindiana, you can do a lot more than you can with ESXi. Because it's a
> complete OS. You simply have more freedom, bett
> From: Dan Swartzendruber [mailto:dswa...@druber.com]
>
> I'm curious here. Your experience is 180 degrees opposite from mine. I
> run an all in one in production and I get native disk performance, and
> ESXi virtual disk I/O is faster than with a physical SAN/NAS for the NFS
> datastore, since
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Tiernan OToole
>
> I have a Dedicated server in a data center in Germany, and it has 2 3TB
> drives,
> but only software RAID. I have got them to install VMWare ESXi and so far
> everything is
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Ian Collins
>
> >> ioctl(3, ZFS_IOC_OBJECT_STATS, 0xF706BBB0)
> >>
> >> The system boots up fine in the original BE. The root (only) pool in a
> >> single drive.
> >>
> >> Any ideas?
> > devfs
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Ian Collins
>
> I have a recently upgraded (to Solaris 11.1) test system that fails
> to mount its filesystems on boot.
>
> Running zfs mount -a results in the odd error
>
> # zfs mount -a
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Jim Klimov
>
> I tend to agree that parity calculations likely
> are faster (even if not all parities are simple XORs - that would
> be silly for double- or triple-parity sets which may use dif
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Fajar A. Nugraha
>
> So my
> suggestion is actually just present one huge 25TB LUN to zfs and let
> the SAN handle redundancy.
Oh - No
Definitely let zfs handle the redundancy. Because Z
> From: Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
>
> Performance is much better if you use mirrors instead of raid. (Sequential
> performance is just as good either way, but sequential IO is unusual for most
> use cases. Random IO is much better with mirrors, an
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Jim Klimov
>
> Logically, yes - I agree this is what we expect to be done.
> However, at least with the normal ZFS reading pipeline, reads
> of redundant copies and parities only kick in if the
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Karl Wagner
>
> I can only speak anecdotally, but I believe it does.
>
> Watching zpool iostat it does read all data on both disks in a mirrored
> pair.
>
> Logically, it would not make sense
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Karl Wagner
>
> The only thing I think Oracle should have done differently is to allow
> either a downgrade or creating a send stream in a lower version
> (reformatting the data where necessary
> From: Richard Elling [mailto:richard.ell...@gmail.com]
>
> At some point, people will bitterly regret some "zpool upgrade" with no way
> back.
>
> uhm... and how is that different than anything else in the software world?
>
> No attempt at backward compatibility, and no downgrade path, not eve
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Jim Klimov
>
>One idea I have is that a laptop which only has a single HDD slot,
> often has SD/MMC cardreader slots. If populated with a card for L2ARC,
> can it be expected to boost the l
> From: Jim Klimov [mailto:jimkli...@cos.ru]
> Sent: Monday, October 22, 2012 7:26 AM
>
> Are you sure that the system with failed mounts came up NOT in a
> read-only root moment, and that your removal of /etc/zfs/zpool.cache
> did in fact happen (and that you did not then boot into an earlier
> B
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
>
> If you rm /etc/zfs/zpool.cache and reboot... The system is smart enough (at
> least in my case) to re-import rpool, and another pool, but it didn't figure
> out
> to re-
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Gary Mills
>
> On Sun, Oct 21, 2012 at 11:40:31AM +0200, Bogdan Ćulibrk wrote:
> >Follow up question regarding this: is there any way to disable
> >automatic import of any non-rpool on
> From: Timothy Coalson [mailto:tsc...@mst.edu]
> Sent: Friday, October 19, 2012 9:43 PM
>
> A shot in the dark here, but perhaps one of the disks involved is taking a
> long
> time to return from reads, but is returning eventually, so ZFS doesn't notice
> the problem? Watching 'iostat -x' for b
If you rm /etc/zfs/zpool.cache and reboot... The system is smart enough (at
least in my case) to re-import rpool, and another pool, but it didn't figure
out to re-import some other pool.
How does the system decide, in the absence of zpool.cache, which pools it's
going to import at boot?
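For reference, a hedged sketch of how to inspect and recover by hand (hypothetical pool name; the cachefile pool property points at /etc/zfs/zpool.cache by default):

# zpool get cachefile rpool
# zpool import
# zpool import tank

The bare "zpool import" only scans devices and lists importable pools; naming a pool actually imports it and records it in the cache file again.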
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Richard Elling
>
>> At some point, people will bitterly regret some "zpool upgrade" with no way
>> back.
>
> uhm... and how is that different than anything else in the software world?
No atte
Yikes, I'm back at it again, and so frustrated.
For about 2-3 weeks now, I had the iscsi mirror configuration in production, as
previously described. Two disks on system 1 mirror against two disks on system
2, everything done via iscsi, so you could zpool export on machine 1, and then
zpoo
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of James C. McPherson
>
> As far as I'm aware, having an rpool on multipathed devices
> is fine.
Even a year ago, a new system I bought from Oracle came with multipath devices
for all devices b
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Ian Collins
>
> You have to create pools/filesystems with the older versions used by the
> destination machine.
Apparently "zpool create -d -o version=28" you might want to do on the new
syst
Can anyone explain to me what the openindiana-1 filesystem is all about? I
thought it was the "backup" copy of the openindiana filesystem, when you apply
OS updates, but that doesn't seem to be the case...
I have time-slider enabled for rpool/ROOT/openindiana. It has a daily snapshot
(amongst
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Paul van der Zwan
>
> What was c5t2 is now c7t1 and what was c4t1 is now c5t2.
> Everything seems to be working fine, it's just a bit confusing.
That ... Doesn't make any sense. Did you resh
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
>
> A solid point. I don't.
>
> This doesn't mean you can't - it just means I don't.
This response was kind of long-winded. So here's a simpler version:
Suppose 6 disks i
> From: Ian Collins [mailto:i...@ianshome.com]
>
> On 10/13/12 02:12, Edward Ned Harvey
> (opensolarisisdeadlongliveopensolaris) wrote:
> > There are at least a couple of solid reasons *in favor* of partitioning.
> >
> > #1 It seems common, at least to me, that
Jim, I'm trying to contact you off-list, but it doesn't seem to be working.
Can you please contact me off-list?
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of andy thomas
>
> According to a Sun document called something like 'ZFS best practice' I
> read some time ago, best practice was to use the entire disk for ZFS and
> not to partition or slice it
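For illustration, a minimal sketch of the two layouts that document contrasts (hypothetical device names):

# zpool create tank c0t2d0       (whole disk, as recommended)
# zpool create tank c0t2d0s0     (a single slice, which the document advises against)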
> From: Richard Elling [mailto:richard.ell...@gmail.com]
>
> Pedantically, a pool can be made in a file, so it works the same...
A pool can only be made in a file by a system that is able to create a pool.
Point is, his receiving system runs linux and doesn't have any zfs; his
receiving system
> From: Richard Elling [mailto:richard.ell...@gmail.com]
>
> Read it again he asked, "On that note, is there a minimal user-mode zfs thing
> that would allow
> receiving a stream into an image file?" Something like:
> zfs send ... | ssh user@host "cat > file"
He didn't say he wanted to cat
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Sami Tuominen
>
> Unfortunately there aren't any snapshots.
> The version of zpool is 15. Is it safe to upgrade that?
> Is zpool clear -F supported or of any use here?
The only thing that will
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Richard Elling
>
> >> If the recipient system doesn't support "zfs receive," [...]
> >
> > On that note, is there a minimal user-mode zfs thing that would allow
> > receiving a stream into an i
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Albert Shih
>
> I'm actually running ZFS under FreeBSD. I've a question about how many
> disks I have in one pool.
>
> At this moment I'm running with one server (FreeBSD 9.0) with 4 MD1200
>
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Frank Cusack
>
> On Fri, Oct 5, 2012 at 3:17 AM, Ian Collins wrote:
> I do have to suffer a slow, glitchy WAN to a remote server and rather than
> send stream files, I broke the data on the re
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
>
> I must be missing something - I don't see anything above that indicates any
> required vs optional dependencies.
Ok, I see that now. (Thanks to the SMF FAQ).
A dependenc
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Tiernan OToole
>
> I am in the process of planning a system which will have 2 ZFS servers, one on
> site, one off site. The on site server will be used by workstations and
> servers
> in house
> From: Neil Perrin [mailto:neil.per...@oracle.com]
>
> In general - yes, but it really depends. Multiple synchronous writes of any
> size
> across multiple file systems will fan out across the log devices. That is
> because there is a separate independent log chain for each file system.
>
> Also
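To illustrate Neil's point, a minimal sketch (hypothetical device names): with two log devices in the pool, synchronous writes to different file systems can be spread across both, because each file system keeps its own independent log chain.

# zpool add tank log c2t0d0 c2t1d0
# zpool status tank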