> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Jim Klimov
>
>I wondered if the "copies" attribute can be considered sort
> of equivalent to the number of physical disks - limited to seek
> times though. Namely, for the same amount of st
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Jim Klimov
>
>For several times now I've seen statements on this list implying
> that a dedicated ZIL/SLOG device catching sync writes for the log,
> also allows for more streamlined writes
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Jim Klimov
>
> Thanks... but doesn't your description imply that the sync writes
> would always be written twice?
That is correct, regardless of whether you have a slog or not. In the case of
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Tristan Klocke
>
> I want to switch to ZFS, but still want to encrypt my data. Native Encryption
> for ZFS was added in "ZFS Pool Version Number 30", but I'm using ZFS on
> FreeBSD with Version
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Roy Sigurd Karlsbakk
>
> > > Also keep in mind that if you have an SLOG (ZIL on a separate
> > > device), and then lose this SLOG (disk crash etc), you will probably
> > > lose the pool. So if
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Tim Cook
>
> I would think a flag to allow you to automatically continue with a disclaimer
> might be warranted (default behavior obviously requiring human input).
This already exists. It's c
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Nico Williams
>
> The copies thing is a really only for laptops, where the likelihood of
> redundancy is very low
ZFS also stores multiple copies of things that it considers "extra important."
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Ray Arachelian
>
> One thing you can do is enable dedup when you copy all your data from
> one zpool to another, then, when you're done, disable dedup. It will no
> longer waste a ton of memor
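For anyone who wants to try that, a rough sketch of the workflow (pool and
dataset names below are made up):

    zfs set dedup=on newpool                  # turn dedup on for the destination before the copy
    zfs snapshot -r oldpool/data@migrate
    zfs send -R oldpool/data@migrate | zfs receive -d newpool
    zfs set dedup=off newpool                 # new writes are no longer deduplicated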
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Richard Elling
>
> I believe what you meant to say was "dedup with HDDs sux." If you had
> used fast SSDs instead of HDDs, you will find dedup to be quite fast.
> -- richard
Yes, but this is
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Jim Klimov
>
> Availability of the DDT is IMHO crucial to a deduped pool, so
> I won't be surprised to see it forced to triple copies.
Agreed, although the DDT is also paramount to performa
> From: Sašo Kiselkov [mailto:skiselkov...@gmail.com]
> Sent: Wednesday, August 01, 2012 9:56 AM
>
> On 08/01/2012 03:35 PM, opensolarisisdeadlongliveopensolaris wrote:
> >> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> >> boun...@opensolari
> From: opensolarisisdeadlongliveopensolaris
> Sent: Wednesday, August 01, 2012 2:08 PM
>
> L2ARC is a read cache. Hence the "R" and "C" in "L2ARC."
> This means two major things:
> #1 Writes don't benefit,
> and
> #2 There'
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Jim Klimov
>
> Well, there is at least a couple of failure scenarios where
> copies>1 are good:
>
> 1) A single-disk pool, as in a laptop. Noise on the bus,
> media degradation, or any oth
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Jim Klimov
>
> 2012-08-01 22:07, opensolarisisdeadlongliveopensolaris wrote:
> > L2ARC is a read cache. Hence the "R" and "C" in "L2ARC."
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Jim Klimov
>
> 2012-08-01 23:40, opensolarisisdeadlongliveopensolaris wrote:
>
> > Agreed, ARC/L2ARC help in finding the DDT, but whenever you've got a
> snaps
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Jim Klimov
>
> In some of my cases I was "lucky" enough to get a corrupted /sbin/init
> or something like that once, and the box had no other BE's yet, so the
> OS could not do anything reasona
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Joerg Schilling
>
> Jim Klimov wrote:
>
> > In the end, the open-sourced ZFS community got no public replies
> > from Oracle regarding collaboration or lack thereof, and decided
> > to part w
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Joerg Schilling
>
> Well, why then has there been a discussion about a "closed zfs mailing list"?
> Is this no longer true?
Oracle can do anything internally they want. I would presume they h
> From: Joerg Schilling [mailto:joerg.schill...@fokus.fraunhofer.de]
> Sent: Thursday, August 09, 2012 11:35 AM
>
> > > From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> > > boun...@opensolaris.org] On Behalf Of Joerg Schilling
> > >
> > > Well, why then has there been a discussion
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Murray Cullen
>
> I've copied an old home directory from an install of OS 134 to the data
> pool on my OI install. Opensolaris apparently had wine installed as I
> now have a link to / in my da
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Anonymous
>
> Hi. I have a spare off the shelf consumer PC and was thinking about loading
> Solaris on it for a development box since I use Studio @work and like it
> better than gcc. I was thi
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Dan Swartzendruber
>
> My first thought was everything is
> hitting in ARC, but that is clearly not the case, since it WAS gradually
> filling up
> the cache device.
When things become cold
I send a replication data stream from one host to another (and receive it there).
I discovered that after receiving, I need to remove the auto-snapshot property
on the receiving side, and set the readonly property on the receiving side, to
prevent accidental changes (including auto-snapshots).
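For reference, the receive-side settings I mean look roughly like this (dataset
name is made up; com.sun:auto-snapshot is the property the auto-snapshot service
checks, at least on my builds):

    zfs set readonly=on tank/backup                      # block accidental writes on the replica
    zfs set com.sun:auto-snapshot=false tank/backup      # keep time-slider from snapshotting it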
Question
> From: Richard Elling [mailto:richard.ell...@gmail.com]
>
> Question #2: What's the best way to find the latest matching snap on both
> the source and destination? At present, it seems, I'll have to build a list
> of
> sender snaps, and a list of receiver snaps, and parse and search them, till
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
>
> Question #2: What's the best way to find the latest matching snap on both
> the source and destination? At present, it seems, I'll have to build a list
> of
> sender sn
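A minimal sketch of that parse-and-compare approach, assuming a single
filesystem and that snapshot names (after the @) match on both sides; host and
dataset names are made up:

    # newest-first snapshot names on the receiving side
    ssh backuphost "zfs list -H -t snapshot -o name -S creation -r tank/data" \
        | grep '^tank/data@' | sed 's/.*@//' > /tmp/dst.snaps
    # the first one that also exists on the sender is the newest common snapshot
    while read snap; do
        zfs list "tank/data@$snap" > /dev/null 2>&1 && { echo "tank/data@$snap"; break; }
    done < /tmp/dst.snaps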
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Dave Pooser
>
> Unfortunately I did not realize that zvols require disk space sufficient
> to duplicate the zvol, and my zpool wasn't big enough. After a false start
> (zpool add is dangerous w
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Bill Sommerfeld
>
> > But simply creating the snapshot on the sending side should be no
> problem.
>
> By default, zvols have reservations equal to their size (so that writes
> don't fail due
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Jim Klimov
>
> Got me wondering: how many reads of a block from spinning rust
> suffice for it to ultimately get into L2ARC? Just one so it
> gets into a recent-read list of the ARC and then ex
When I create a 50G zvol, it gets a "volsize" of 50G, and it gets "used" and
"refreservation" of 51.6G.
I have some filesystems already in use, hosting VM's, and I'd like to mimic the
refreservation setting on the filesystem, as if I were smart enough from the
beginning to have used the zvol. So my que
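For what it's worth, refreservation can be set on an ordinary filesystem too, so
mimicking the zvol behaviour is roughly (name and size are made up; pad a little
above the nominal size, like the 51.6G the 50G zvol reserved):

    zfs set refreservation=52G tank/vmstore
    zfs get used,refreservation tank/vmstore     # confirm the space is now held back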
Here's another one.
Two identical servers are sitting side by side. They could be connected to
each other via anything (presently using a crossover ethernet cable). And
obviously they both connect to the regular LAN. You want to serve VM's from at
least one of them, and even if the VM's aren't
> From: Tim Cook [mailto:t...@cook.ms]
> Sent: Wednesday, September 26, 2012 3:45 PM
>
> I would suggest if you're doing a crossover between systems, you use
> infiniband rather than ethernet. You can eBay a 40Gb IB card for under
> $300. Quite frankly the performance issues should become almos
Formerly, if you interrupted a zfs receive, it would leave a clone with a % in
its name, and you could find it via "zdb -d" and then you could destroy the
clone, and then you could destroy the filesystem you had interrupted receiving.
That was considered a bug, and it was fixed, I think by Sun.
I am confused, because I would have expected a 1-to-1 mapping: if you create an
iscsi target on some system, you would have to specify which LUN it connects
to. But that is not the case...
I read the man pages for sbdadm, stmfadm, itadm, and iscsiadm. I read some
online examples, where you fi
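As far as I understand it, the mapping is indirect: you create a logical unit
from the backing store, and a view ties that LU to host/target groups (or to
everything); the target itself never names a LUN. Roughly (the zvol path is
made up, and <lu-guid> stands for whatever sbdadm prints):

    sbdadm create-lu /dev/zvol/rdsk/tank/vol1     # prints the GUID of the new LU
    stmfadm add-view <lu-guid>                    # no group options = visible to all targets/hosts
    itadm create-target                           # the target is created independently of any LU
    stmfadm list-view -l <lu-guid>                # shows who can see the LU, and on which LUN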
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Jim Klimov
>
> > If they are close enough for "crossover cable" where the cable is UTP,
> > then they are
> > close enough for SAS.
>
> Pardon my ignorance, can a system easily serve its local
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>
> --- How how will improving ZIL latency improve performance of my pool that
> is used as a NFS share to ESXi hosts which forces sync writes only (i.e will
> it be
> noticeable in an end-to-end context)?
Just perform a bunch of w
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Ariel T. Glenn
>
> I have the same issue as described by Ned in his email. I had a zfs
> recv going that deadlocked against a zfs list; after a day of leaving
> them hung I finally had to hard
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
>
> it doesn't work right - It turns out, iscsi
> devices (And I presume SAS devices) are not removable storage. That
> means, if the device goes offline and comes back onlin
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Schweiss, Chip
>
> How can I determine for sure that my ZIL is my bottleneck? If it is the
> bottleneck, is it possible to keep adding mirrored pairs of SSDs to the ZIL to
> make it faster? O
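For reference, adding another mirrored pair of log devices is just (device
names are made up):

    zpool add tank log mirror c1t4d0 c1t5d0
    zpool iostat -v tank 5        # per-vdev view; watch how busy the log devices actually are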
> From: Andrew Gabriel [mailto:andrew.gabr...@cucumber.demon.co.uk]
>
> > Temporarily set sync=disabled
> > Or, depending on your application, leave it that way permanently. I know,
> for the work I do, most systems I support at most locations have
> sync=disabled. It all depends on the workload
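For the record, it's a per-dataset setting and easy to flip back (dataset name
is made up):

    zfs set sync=disabled tank/nfsshare    # acknowledge sync writes immediately; a crash can lose
                                           # the last few seconds a client believed were committed
    zfs set sync=standard tank/nfsshare    # revert to honouring sync requests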
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Schweiss, Chip
>
> The ZIL can have any number of SSDs attached, either mirrored or
> individually. ZFS will stripe across these in a raid0 or raid10 fashion,
> depending on how you configure it.
> From: Jim Klimov [mailto:jimkli...@cos.ru]
>
> Well, on my system that I complained a lot about last year,
> I've had a physical pool, a zvol in it, shared and imported
> over iscsi on loopback (or sometimes initiated from another
> box), and another pool inside that zvol ultimately.
Ick. And
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Jim Klimov
>
> There are also loops ;)
>
> # svcs -d filesystem/usr
> STATE          STIME    FMRI
> online         Aug_27   svc:/system/scheduler:default
> ...
>
> # svcs -d scheduler
> STAT
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Schweiss, Chip
>
> If I get to build it this system, it will house a decent size VMware
> NFS storage w/ 200+ VMs, which will be dual connected via 10Gbe. This is all
> medical imaging resear
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Neil Perrin
>
> The ZIL code chains blocks together and these are allocated round robin
> among slogs or
> if they don't exist then the main pool devices.
So, if somebody is doing sync writes
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Jim Klimov
>
> Well, it seems just like a peculiar effect of required vs. optional
> dependencies. The loop is in the default installation. Details:
>
> # svcprop filesystem/usr | grep schedul
> From: Neil Perrin [mailto:neil.per...@oracle.com]
>
> In general - yes, but it really depends. Multiple synchronous writes of any
> size
> across multiple file systems will fan out across the log devices. That is
> because there is a separate independent log chain for each file system.
>
> Also
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Tiernan OToole
>
> I am in the process of planning a system which will have 2 ZFS servers, one on
> site, one off site. The on site server will be used by workstations and
> servers
> in house
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
>
> I must be missing something - I don't see anything above that indicates any
> required vs optional dependencies.
Ok, I see that now. (Thanks to the SMF FAQ).
A dependenc
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Frank Cusack
>
> On Fri, Oct 5, 2012 at 3:17 AM, Ian Collins wrote:
> I do have to suffer a slow, glitchy WAN to a remote server and rather than
> send stream files, I broke the data on the re
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Albert Shih
>
> I'm actually running ZFS under FreeBSD. I've a question about how many
> disks I have in one pool.
>
> At this moment I'm running with one server (FreeBSD 9.0) with 4 MD1200
>
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Richard Elling
>
> >> If the recipient system doesn't support "zfs receive," [...]
> >
> > On that note, is there a minimal user-mode zfs thing that would allow
> > receiving a stream into an i
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Sami Tuominen
>
> Unfortunately there aren't any snapshots.
> The version of zpool is 15. Is it safe to upgrade that?
> Is zpool clear -F supported or of any use here?
The only thing that will
> From: Richard Elling [mailto:richard.ell...@gmail.com]
>
> Read it again he asked, "On that note, is there a minimal user-mode zfs thing
> that would allow
> receiving a stream into an image file?" Something like:
> zfs send ... | ssh user@host "cat > file"
He didn't say he wanted to cat
> From: Richard Elling [mailto:richard.ell...@gmail.com]
>
> Pedantically, a pool can be made in a file, so it works the same...
A pool can only be made in a file by a system that is able to create a pool.
The point is, his receiving system runs Linux and doesn't have any ZFS; his
receiving system
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of andy thomas
>
> According to a Sun document called something like 'ZFS best practice' I
> read some time ago, best practice was to use the entire disk for ZFS and
> not to partition or slice it
Jim, I'm trying to contact you off-list, but it doesn't seem to be working.
Can you please contact me off-list?
> From: Ian Collins [mailto:i...@ianshome.com]
>
> On 10/13/12 02:12, Edward Ned Harvey
> (opensolarisisdeadlongliveopensolaris) wrote:
> > There are at least a couple of solid reasons *in favor* of partitioning.
> >
> > #1 It seems common, at least to me, that
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
>
> A solid point. I don't.
>
> This doesn't mean you can't - it just means I don't.
This response was kind of long-winded. So here's a simpler version:
Suppose 6 disks i
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Paul van der Zwan
>
> What was c5t2 is now c7t1 and what was c4t1 is now c5t2.
> Everything seems to be working fine, it's just a bit confusing.
That... doesn't make any sense. Did you resh
Can anyone explain to me what the openindiana-1 filesystem is all about? I
thought it was the "backup" copy of the openindiana filesystem, when you apply
OS updates, but that doesn't seem to be the case...
I have time-slider enabled for rpool/ROOT/openindiana. It has a daily snapshot
(amongst
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Ian Collins
>
> You have to create pools/filesystems with the older versions used by the
> destination machine.
Apparently "zpool create -d -o version=28" you might want to do on the new
syst
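i.e. something along these lines, if I understand it right (disk names are
made up):

    zpool create -d -o version=28 newpool mirror c0t0d0 c0t1d0
    zpool get version newpool     # should report 28, which the older destination understands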
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of James C. McPherson
>
> As far as I'm aware, having an rpool on multipathed devices
> is fine.
Even a year ago, a new system I bought from Oracle came with multipath devices
for all devices b
Yikes, I'm back at it again, and so frustrated.
For about 2-3 weeks now, I had the iscsi mirror configuration in production, as
previously described. Two disks on system 1 mirror against two disks on system
2, everything done via iscsi, so you could zpool export on machine 1, and then
zpoo
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Richard Elling
>
>> At some point, people will bitterly regret some "zpool upgrade" with no way
>> back.
>
> uhm... and how is that different than anything else in the software world?
No atte
If you rm /etc/zfs/zpool.cache and reboot, the system is smart enough (at
least in my case) to re-import rpool and another pool, but it didn't figure
out that it should re-import some other pool.
How does the system decide, in the absence of zpool.cache, which pools it's
going to import at boot?
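My understanding (possibly wrong) is that the boot-time import list comes from
that cachefile, so a pool that's missing from it can be put back with something
like (pool name is made up):

    zpool import datapool                                # a normal import rewrites the default cachefile
    zpool get cachefile datapool                         # '-' means the default /etc/zfs/zpool.cache
    zpool set cachefile=/etc/zfs/zpool.cache datapool    # force it, if the property had been set to none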
> From: Timothy Coalson [mailto:tsc...@mst.edu]
> Sent: Friday, October 19, 2012 9:43 PM
>
> A shot in the dark here, but perhaps one of the disks involved is taking a
> long
> time to return from reads, but is returning eventually, so ZFS doesn't notice
> the problem? Watching 'iostat -x' for b
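On Solaris-ish systems the per-device view that makes such a disk stand out is
something like:

    iostat -xn 5    # watch asvc_t (average service time) and %b; one device sitting far
                    # above its siblings while the rest idle is the usual suspect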
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Gary Mills
>
> On Sun, Oct 21, 2012 at 11:40:31AM +0200, Bogdan Ćulibrk wrote:
> >Follow up question regarding this: is there any way to disable
> >automatic import of any non-rpool on
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
>
> If you rm /etc/zfs/zpool.cache and reboot, the system is smart enough (at
> least in my case) to re-import rpool and another pool, but it didn't figure
> out
> to re-
> From: Jim Klimov [mailto:jimkli...@cos.ru]
> Sent: Monday, October 22, 2012 7:26 AM
>
> Are you sure that the system with failed mounts came up NOT in a
> read-only root moment, and that your removal of /etc/zfs/zpool.cache
> did in fact happen (and that you did not then boot into an earlier
> B
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Jim Klimov
>
>One idea I have is that a laptop which only has a single HDD slot,
> often has SD/MMC cardreader slots. If populated with a card for L2ARC,
> can it be expected to boost the l
> From: Richard Elling [mailto:richard.ell...@gmail.com]
>
> At some point, people will bitterly regret some "zpool upgrade" with no way
> back.
>
> uhm... and how is that different than anything else in the software world?
>
> No attempt at backward compatibility, and no downgrade path, not eve
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Karl Wagner
>
> The only thing I think Oracle should have done differently is to allow
> either a downgrade or creating a send stream in a lower version
> (reformatting the data where necessary
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Karl Wagner
>
> I can only speak anecdotally, but I believe it does.
>
> Watching zpool iostat it does read all data on both disks in a mirrored
> pair.
>
> Logically, it would not make sense
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Jim Klimov
>
> Logically, yes - I agree this is what we expect to be done.
> However, at least with the normal ZFS reading pipeline, reads
> of redundant copies and parities only kick in if the
> From: Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
>
> Performance is much better if you use mirrors instead of raid. (Sequential
> performance is just as good either way, but sequential IO is unusual for most
> use cases. Random IO is much better with mirrors, an
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Fajar A. Nugraha
>
> So my
> suggestion is actually just present one huge 25TB LUN to zfs and let
> the SAN handle redundancy.
Oh, no.
Definitely let zfs handle the redundancy. Because Z
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Jim Klimov
>
> I tend to agree that parity calculations likely
> are faster (even if not all parities are simple XORs - that would
> be silly for double- or triple-parity sets which may use dif
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Ian Collins
>
> Have have a recently upgraded (to Solaris 11.1) test system that fails
> to mount its filesystems on boot.
>
> Running zfs mount -a results in the odd error
>
> #zfs mount -a
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Ian Collins
>
> >> ioctl(3, ZFS_IOC_OBJECT_STATS, 0xF706BBB0)
> >>
> >> The system boots up fine in the original BE. The root (only) pool in a
> >> single drive.
> >>
> >> Any ideas?
> > devfs
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Tiernan OToole
>
> I have a Dedicated server in a data center in Germany, and it has 2 3TB
> drives,
> but only software RAID. I have got them to install VMWare ESXi and so far
> everything is
> From: Dan Swartzendruber [mailto:dswa...@druber.com]
>
> I'm curious here. Your experience is 180 degrees opposite from mine. I
> run an all in one in production and I get native disk performance, and
> ESXi virtual disk I/O is faster than with a physical SAN/NAS for the NFS
> datastore, since
> From: Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
>
> Stuff like that. I could go on, but it basically comes down to: With
> openindiana, you can do a lot more than you can with ESXi. Because it's a
> complete OS. You simply have more freedom, bett
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Jim Klimov
>
> the VM running "a ZFS OS" enjoys PCI-pass-through, so it gets dedicated
> hardware access to the HBA(s) and harddisks at raw speeds, with no
> extra layers of lags in between.
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Karl Wagner
>
> I am just wondering why you export the ZFS system through NFS?
> I have had much better results (albeit spending more time setting up) using
> iSCSI. I found that performance wa
> From: Dan Swartzendruber [mailto:dswa...@druber.com]
>
> Now you have me totally confused. How does your setup get data from the
> guest to the OI box? If thru a wire, if it's gig-e, it's going to be
> 1/3-1/2 the speed of the other way. If you're saying you use 10gig or
> some-such, we're ta
> From: Dan Swartzendruber [mailto:dswa...@druber.com]
>
> I have to admit Ned's (what do I call you?) idea is interesting. I may give
> it a try...
Yup. Officially Edward, but most people call me Ned.
I contributed to the OI VirtualBox instructions. See here:
http://wiki.openindiana.org/oi/Virtual
> From: Karl Wagner [mailto:k...@mouse-hole.com]
>
> If I was doing this now, I would probably use the ZFS aware OS bare metal,
> but I still think I would use iSCSI to export the ZVols (mainly due to the
> ability
> to use it across a real network, hence allowing guests to be migrated simply)
Y
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Eugen Leitl
>
> On Thu, Nov 08, 2012 at 04:57:21AM +, Edward Ned Harvey
> (opensolarisisdeadlongliveopensolaris) wrote:
>
> > Yes you can, with the help of Dell,
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Dan Swartzendruber
>
> Well, I think I give up for now. I spent quite a few hours over the last
> couple of days trying to get gnome desktop working on bare-metal OI,
> followed by virtualbox.
When I google around for anyone else who cares and may have already solved the
problem before I came along, it seems we're all doing the same thing for the
same reason. If by any chance you are running VirtualBox on a solaris /
opensolaris / openindiana / whatever ZFS host, you could of course
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Geoff Nordli
>
> Instead of using vdi, I use comstar targets and then use vbox built-in scsi
> initiator.
Based on my recent experiences, I am hesitant to use the iscsi ... I don't know
if it
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Jim Klimov
>
> Well, as a simple stone-age solution (to simplify your SMF approach),
> you can define custom attributes on dataset, zvols included. I think
> a custom attr must include a colon
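Right: a user property just needs a module:name form with a colon in it, e.g.
(property name and value are made up):

    zfs set org.example:vboxvm=winxp-dev tank/vms/winxp    # attach your own metadata to the dataset
    zfs get -s local org.example:vboxvm tank/vms/winxp     # read back only locally-set values
    zfs inherit org.example:vboxvm tank/vms/winxp          # remove it again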
> From: Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
>
> > Found quite a few posts on
> > various
> > forums of people complaining that RDP with external auth doesn't work (or
> > not reliably),
>
> Actually, it does work, and it works
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
>
> An easier event to trigger is the starting of the virtualbox guest. Upon vbox
> guest starting, check the service properties for that instance of vboxsvc, and
> chmod if
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Nathan Kroenert
>
> I chopped it into a few slices - p0 (partition table), p1 128GB, p2 60GB.
>
> As part of my work, I have used it both as a RAW device (cxtxdxp1) and
> wrapped partition 1 with
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Jim Klimov
>
> As for ZIL - even if it is used with the in-pool variant, I don't
> think your setup needs any extra steps to disable it (as Edward likes
> to suggest), and most other setups don
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Ian Collins
>
> I look after a remote server that has two iSCSI pools. The volumes for
> each pool are sparse volumes and a while back the target's storage
> became full, causing weird and won
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Jim Klimov
>
> I wonder if it would make weird sense to get the boxes, forfeit the
> cool-looking Fishworks, and install Solaris/OI/Nexenta/whatever to
> get the most flexibility and bang for
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Sami Tuominen
>
> How can one remove a directory containing corrupt files or a corrupt file
> itself? For me rm just gives input/output error.
I was hoping to see somebody come up with an answ
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Eugen Leitl
>
> can I make e.g. LSI SAS3442E
> directly do SSD caching (it says something about CacheCade,
> but I'm not sure it's an OS-side driver thing), as it
> is supposed to boost IOPS? U