On 13-01-04 02:08 PM, Richard Elling wrote:
All of these IOPS <--> VDI user guidelines are wrong. The problem is that
the variability of response time is too great for a HDD. The only hope we
have of getting the back-of-the-napkin calculations to work is to reduce
the variability by using a
Thanks Richard, Happy New Year.
On 13-01-03 09:45 AM, Richard Elling wrote:
On Jan 2, 2013, at 8:45 PM, Geoff Nordli <geo...@gnaa.net> wrote:
I am looking at the performance numbers for the Oracle VDI admin guide.
http://docs.oracle.com/html/E26214_02/performance-storage.html
I am looking at the performance numbers for the Oracle VDI admin guide.
http://docs.oracle.com/html/E26214_02/performance-storage.html
From my calculations for 200 desktops running a Windows 7 knowledge-user
workload (15 IOPS each) with a 30/70 read/write split, it comes to 5100
IOPS. Using 7200 rpm disks the
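For reference, the 5100 figure works out if a write penalty of 2 (e.g. mirrored
vdevs) is assumed; the penalty factor is an assumption here, not something the
guide states:
200 desktops x 15 IOPS      = 3000 front-end IOPS
reads:  30% x 3000          =  900 back-end IOPS
writes: 70% x 3000 x 2      = 4200 back-end IOPS (write penalty of 2)
total                       = 5100 back-end IOPS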
On 12-11-16 03:02 AM, Jim Klimov wrote:
On 2012-11-15 21:43, Geoff Nordli wrote:
Instead of using VDI files, I use COMSTAR targets and then use the VBox
built-in iSCSI initiator.
Out of curiosity: in this case are there any devices whose ownership
might get similarly botched, or you've tested that
On 12-11-15 11:57 AM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
When I google around for anyone else who cares and may have already
solved the problem before I came along - it seems we're all doing the
same thing for the same reason. If by any chance you are running
Virtu
Dan,
If you are going to do the all-in-one with VBox, you probably want to look
at:
http://sourceforge.net/projects/vboxsvc/
It manages starting and stopping VBox VMs via SMF.
Kudos to Jim Klimov for creating and maintaining it.
Geoff
On Thu, Nov 8, 2012 at 7:32 PM, Dan Swartzendruber wrote:
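A minimal sketch of what vboxsvc usage looks like once a VM is registered
with it (the instance name below is a placeholder; the exact FMRI depends
on how vboxsvc registers the VM):
svcs -a | grep -i vbox           # find the SMF instance vboxsvc created for the VM
svcadm enable <vbox-instance>    # start the VM under SMF control
svcadm disable <vbox-instance>   # stop (or save the state of) the VM, per its vboxsvc settings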
I am running NexentaOS_134f
This is really weird, but for some reason the defer_destroy property
is being set on new snapshots and I can't turn it off. Normally it
should only be enabled when using the zfs destroy -d command. The property
doesn't seem to be inherited from anywhere.
It seems to
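defer_destroy is a read-only snapshot property, so it cannot be cleared with
zfs set; a quick sketch for seeing where it is coming from (pool and dataset
names are placeholders):
zfs get -r -t snapshot defer_destroy,userrefs tank   # list the flag and any user holds per snapshot
zfs destroy -d tank/fs@snap                          # the operation that normally sets defer_destroy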
I am trying to figure out a reliable way to identify drives to make sure I
pull the right drive when there is a failure. These will be smaller
installations (<16 drives).
I am pretty sure the WWN on a SAS device is preassigned like a MAC
address, but I just want to make sure. Is there any sc
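A sketch of one way to map a faulted vdev back to a physical drive on
Solaris/illumos (pool name is a placeholder):
zpool status tank    # note the c#t#d# (or WWN-based) name ZFS reports as faulted
iostat -En           # maps each device name to vendor, model and serial number
The serial number can then be matched against the label printed on the drive.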
>-Original Message-
>From: Thierry Delaitre
>Sent: Wednesday, February 23, 2011 4:42 AM
>To: zfs-discuss@opensolaris.org
>Subject: [zfs-discuss] Using Solaris iSCSI target in VirtualBox iSCSI
Initiator
>
>Hello,
>
>I'm using ZFS to export some iSCSI targets for the VirtualBox iSCSI
initiator
>From: Darren J Moffat
>Sent: Monday, December 20, 2010 4:15 AM
>Subject: Re: [zfs-discuss] a single nfs file system shared out twice with
different
>permissions
>
>On 18/12/2010 07:09, Geoff Nordli wrote:
>> I am trying to configure a system where I have two different N
>From: Richard Elling
>Sent: Monday, December 20, 2010 8:14 PM
>Subject: Re: [zfs-discuss] a single nfs file system shared out twice with
different
>permissions
>
>On Dec 20, 2010, at 11:26 AM, "Geoff Nordli" wrote:
>
>>> From: Edward Ned Harvey
>
>From: Edward Ned Harvey
>Sent: Monday, December 20, 2010 9:25 AM
>Subject: RE: [zfs-discuss] a single nfs file system shared out twice with
different
>permissions
>
>> From: Richard Elling
>>
>> > zfs create tank/snapshots
>> > zfs set sharenfs=on tank/snapshots
>>
>> "on" by default sets the NFS
>-Original Message-
>From: Edward Ned Harvey
>[mailto:opensolarisisdeadlongliveopensola...@nedharvey.com]
>Sent: Saturday, December 18, 2010 6:13 AM
>To: 'Geoff Nordli'; zfs-discuss@opensolaris.org
>Subject: RE: [zfs-discuss] a single nfs file system sha
I am trying to configure a system where I have two different NFS shares
which point to the same directory. The idea is that if you come in via one
path, you will have read-only access and can't delete any files; if you come
in via the 2nd path, then you will have read/write access.
For example, create the
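One sketch of an alternative that reaches the same goal with a single share,
by granting read-only or read/write access per client rather than per path
(dataset and host names are placeholders):
zfs create tank/data
zfs set sharenfs=ro=studentbox,rw=adminbox tank/data   # share_nfs ro=/rw= access lists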
I am running the latest version of Nexenta Core 3.0 (b134 + extra
backports).
The time to run zfs list is starting to increase as the number of datasets
increases; it now takes almost 30 seconds to return roughly 1500 datasets.
root@zfs1:/etc# time zfs list -t all | wc -l
1491
real
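A sketch of how to cheapen the listing when only names are needed; much of
the time in "zfs list -t all" tends to go into enumerating snapshots (pool
name is a placeholder):
zfs list -H -o name -t filesystem,volume -r tank   # skip snapshots and extra columns
zfs list -H -o name -d 1 tank                      # or limit the recursion depth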
>From: Richard Elling
>Sent: Monday, September 27, 2010 1:01 PM
>
>On Sep 27, 2010, at 11:54 AM, Geoff Nordli wrote:
>>
>> Are there any properties I can set on the clone side?
>
>Each clone records its origin snapshot in the "origin" property.
>
>
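For example (dataset name is a placeholder):
zfs get -H -o value origin tank/clone01   # prints the origin snapshot, or '-' if the dataset is not a clone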
>From: Darren J Moffat
>Sent: Monday, September 27, 2010 11:03 AM
>
>
>On 27/09/2010 18:14, Geoff Nordli wrote:
>> Is there a way to find out if a dataset has children or not using zfs
>> properties or other scriptable method?
>>
>> I am looking for a more
Is there a way to find out if a dataset has children or not using zfs
properties or other scriptable method?
I am looking for a more efficient way to delete datasets after they are
finished being used. Right now I use a custom property to set delete=1 on a
dataset, and then I have a script that r
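A minimal sketch of the child check, using only zfs list (dataset name is a
placeholder):
# -d 1 lists the dataset plus its direct children; more than one line means it has children
children=$(zfs list -H -o name -r -d 1 tank/lab/vm01 | wc -l)
if [ "$children" -gt 1 ]; then
    echo "tank/lab/vm01 has child datasets"
fi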
I am running Nexenta NCP 3.0 (134f).
My stmf configuration was corrupted. I was getting errors like these in
/var/adm/messages:
Sep 1 10:32:04 llift-zfs1 svc-stmf[378]: [ID 130283 user.error] get
property view_entry-0/all_hosts failed - entity not found
Sep 1 10:32:04 llift-zfs1 svc.startd[9]: [ID 6
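A sketch of the first SMF-level checks (output will differ per system):
svcs -xv svc:/system/stmf:default     # shows why the service is unhappy and where its log file is
svcadm clear svc:/system/stmf:default # clear maintenance state once the configuration is repaired
stmfadm list-lu -v                    # sanity-check the logical units after the service comes back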
>From: Edward Ned Harvey [mailto:sh...@nedharvey.com]
>Sent: Monday, August 09, 2010 4:25 AM
>
>> boun...@opensolaris.org] On Behalf Of Geoff Nordli
>> >
>> >I have an R710... Not quite the same, but similar.
>> >
>> Thanks Edward.
>>
>From: Edward Ned Harvey [mailto:sh...@nedharvey.com]
>Sent: Sunday, August 08, 2010 8:34 PM
>
>> boun...@opensolaris.org] On Behalf Of Geoff Nordli
>>
>> Anyone have any experience with a R510 with the PERC H200/H700
>> controller with ZFS?
>>
>> My p
>-Original Message-
>From: Terry Hull [mailto:t...@nrg-inc.com]
>Sent: Saturday, August 07, 2010 1:12 PM
>
>> From: Geoff Nordli
>> Date: Sat, 7 Aug 2010 08:39:46 -0700
>>
>>> From: Brian Hechinger [mailto:wo...@4amlunch.net]
>>> Sent:
>-Original Message-
>From: Brian Hechinger [mailto:wo...@4amlunch.net]
>Sent: Saturday, August 07, 2010 8:10 AM
>To: Geoff Nordli
>Subject: Re: [zfs-discuss] PowerEdge R510 with PERC H200/H700 with ZFS
>
>On Sat, Aug 07, 2010 at 08:00:11AM -0700, Geoff Nordli wrot
Anyone have any experience with a R510 with the PERC H200/H700 controller
with ZFS?
My perception is that Dell doesn't play well with OpenSolaris.
Thanks,
Geoff
>-Original Message-
>From: Erik Trimble
>Sent: Friday, July 09, 2010 6:45 PM
>Subject: Re: [zfs-discuss] block align SSD for use as a l2arc cache
>
>On 7/9/2010 5:55 PM, Geoff Nordli wrote:
>I have an Intel X25-M 80GB SSD.
>
>For optimum performance
I have an Intel X25-M 80GB SSD.
For optimum performance, I need to block align the SSD device, but I am not
sure exactly how I should do it.
If I run format -> fdisk, it allows me to partition based on a cylinder,
but I don't think that is sufficient.
Can someone tell me how
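A sketch of one way around the cylinder granularity: give ZFS the whole
device so it writes its own EFI label, then verify where the first slice
actually starts (pool and device names are placeholders):
zpool add tank cache c2t1d0   # add the entire SSD as an L2ARC device
prtvtoc /dev/rdsk/c2t1d0      # a 'First Sector' divisible by 8 means 4K alignment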
> Actually, I think the rule-of-thumb is 270 bytes/DDT entry. It's 200
> bytes of ARC for every L2ARC entry.
>
> DDT doesn't count for this ARC space usage
>
> E.g.: I have 1TB of 4k files that are to be deduped, and it turns
> out that I have about a 5:1 dedup ratio. I'd also
> lik
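Plugging the quoted example into the 270 bytes/DDT-entry figure (rounded,
and assuming the 5:1 ratio means one unique block per five written):
1 TB / 4 KB per block           ~ 250 million blocks written
250 million / 5 (dedup ratio)   ~  50 million unique blocks, i.e. DDT entries
50 million x 270 bytes          ~  13-14 GB of DDT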
>From: Arne Jansen
>Sent: Friday, June 25, 2010 3:21 AM
>
>Now the test for the Vertex 2 Pro. This was fun.
>For more explanation please see the thread "Crucial RealSSD C300 and cache
>flush?"
>This time I made sure the device is attached via 3GBit SATA. This is also
>only a short test. I'll rete
>-Original Message-
>From: Linder, Doug
>Sent: Friday, June 18, 2010 12:53 PM
>
>Try doing inline quoting/response with Outlook, where you quote one section,
>reply, quote again, etc. It's impossible. You can't split up the quoted section to
>add new text - no way, no how. Very infuriati
>From: Fco Javier Garcia
>Sent: Tuesday, June 15, 2010 11:21 AM
>
>> Realistically, I think people are overly enamored with dedup as a
>> feature - I would generally only consider it worthwhile in cases
>> where you get significant savings. And by significant, I'm talking an
>> order of magnitude
>
>Brandon High wrote:
>On Tue, Jun 8, 2010 at 10:33 AM, besson3c wrote:
>
>
>What VM software are you using? There are a few knobs you can turn in VBox
>which will help with slow storage. See
>http://www.virtualbox.org/manual/ch12.html#id2662300 for instructions on
>reducing the flush interval.
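For reference, the knob described there is set through VBoxManage extradata;
the key below is quoted from memory of that manual section and should be
checked against it, and the VM name and interval value are placeholders:
VBoxManage setextradata "WinVM" "VBoxInternal/Devices/piix3ide/0/LUN#0/Config/FlushInterval" 1000000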
> On Behalf Of Joe Auty
>Sent: Tuesday, June 08, 2010 11:27 AM
>
>
>I'd love to use Virtualbox, but right now it (3.2.2 commercial which I'm
>evaluating, I haven't been able to compile OSE on the CentOS 5.5 host yet) is
>giving me kernel panics on the host while starting up VMs which are obviousl
-Original Message-
From: Matt Connolly
Sent: Wednesday, May 26, 2010 5:08 AM
I've set up an iSCSI volume on OpenSolaris (snv_134) with these commands:
sh-4.0# zfs create rpool/iscsi
sh-4.0# zfs set shareiscsi=on rpool/iscsi
sh-4.0# zfs create -s -V 10g rpool/iscsi/test
The underlying z
>-Original Message-
>From: Edward Ned Harvey [mailto:solar...@nedharvey.com]
>Sent: Monday, May 17, 2010 6:29 AM
>>
>> I was messing around with a ramdisk on a pool and I forgot to remove
>> it before I shut down the server. Now I am not able to mount the
>> pool. I am not concerned wit
I was messing around with a ramdisk on a pool and I forgot to remove it
before I shut down the server. Now I am not able to mount the pool. I am
not concerned with the data in this pool, but I would like to try to figure
out how to recover it.
I am running Nexenta 3.0 NCP (b134+).
I have trie
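A sketch of the usual sequence when a pool will not come up because a device
is missing (pool name is a placeholder; -m requires a build recent enough to
support importing without a log device):
zpool import           # with no arguments, shows how the pool looks and what is missing
zpool import -m tank   # if the ramdisk was only a separate log device
zpool import -F tank   # recovery mode: rolls back to an earlier txg, discarding recent writes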
>From: James C. McPherson [mailto:james.mcpher...@oracle.com]
>Sent: Wednesday, May 12, 2010 2:28 AM
>
>On 12/05/10 03:18 PM, Geoff Nordli wrote:
>
>> I have been wondering what the compatibility is like on OpenSolaris.
>> My perception is basic network driver sup
On Behalf Of James C. McPherson
>Sent: Tuesday, May 11, 2010 5:41 PM
>
>On 12/05/10 10:32 AM, Michael DeMan wrote:
>> I agree on the motherboard and peripheral chipset issue.
>>
>> This, and the last generation AMD quad/six core motherboards
>> all seem to use the AMD SP56x0/SP5100 chipset, whic
>-Original Message-
>From: Brandon High [mailto:bh...@freaks.com]
>Sent: Monday, May 10, 2010 5:56 PM
>
>On Mon, May 10, 2010 at 3:53 PM, Geoff Nordli wrote:
>> Doesn't this alignment have more to do with aligning writes to the
>> stripe/segment size of a
>-Original Message-
>From: Brandon High [mailto:bh...@freaks.com]
>Sent: Monday, May 10, 2010 3:12 PM
>
>On Mon, May 10, 2010 at 1:53 PM, Geoff Nordli wrote:
>> You are right, I didn't look at that property, and instead I was
>> focused on the record size
>-Original Message-
>From: Brandon High [mailto:bh...@freaks.com]
>Sent: Monday, May 10, 2010 9:55 AM
>
>On Sun, May 9, 2010 at 9:42 PM, Geoff Nordli wrote:
>> I am looking at using 8K block size on the zfs volume.
>
>8k is the default for zvols.
>
You ar
I am using ZFS as the backing store for an iSCSI target running a virtual
machine.
I am looking at using an 8K block size on the zfs volume.
I was looking at the COMSTAR iSCSI settings and there is also a blk size
configuration, which defaults to 512 bytes. That would make me believe that
al
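A sketch of where the two block sizes get set (dataset name and sizes are
placeholders; the blk LU property is worth double-checking in stmfadm(1M)):
zfs create -V 20g -o volblocksize=8k tank/vm01          # volblocksize is fixed at creation time
stmfadm create-lu -p blk=512 /dev/zvol/rdsk/tank/vm01   # blk is the block size the initiator sees (default 512)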
>-Original Message-
>From: Ross Walker [mailto:rswwal...@gmail.com]
>Sent: Friday, April 23, 2010 7:08 AM
>>
>> We are currently porting over our existing Learning Lab Infrastructure
>> platform from MS Virtual Server to VBox + ZFS. When students
>> connect into
>> their lab environment it
>From: Ross Walker [mailto:rswwal...@gmail.com]
>Sent: Thursday, April 22, 2010 6:34 AM
>
>On Apr 20, 2010, at 4:44 PM, Geoff Nordli wrote:
>
>
>If you combine the hypervisor and storage server and have students
>connect to the VMs via RDP or VNC or XDM then you will
>From: matthew patton [mailto:patto...@yahoo.com]
>Sent: Tuesday, April 20, 2010 12:54 PM
>
>Geoff Nordli wrote:
>
>> With our particular use case we are going to do a "save
>> state" on their
>> virtual machines, which is going to write 100-400 MB
up their socks or they will be joining San Jose on the sidelines. With
Ottawa and Montreal on the way out too, it could be a tough spring for
Canadian hockey fans.
>
>On Apr 18, 2010, at 11:21 PM, Geoff Nordli wrote:
>
>> Hi Richard.
>>
>> Can you explain in a little bit m
>On Apr 13, 2010, at 5:22 AM, Tony MacDoodle wrote:
>
>> I was wondering if any data was lost while doing a snapshot on a
>> running system?
>
>ZFS will not lose data during a snapshot.
>
>> Does it flush everything to disk or would some stuff be lost?
>
>Yes, all ZFS data will be committed to disk a