On Jun 16, 2011, at 7:23 PM, Erik Trimble wrote:
> On 6/16/2011 1:32 PM, Paul Kraus wrote:
>> On Thu, Jun 16, 2011 at 4:20 PM, Richard Elling
>> wrote:
>>
>>> You can run OpenVMS :-)
>> Since *you* brought it up (I was not going to :-), how does VMS'
>> versioning FS handle those issues?
>>
On Jun 17, 2011, at 7:06 AM, Edward Ned Harvey
wrote:
> I will only say, that regardless of whether or not that is or ever was true,
> I believe it's entirely irrelevant. Because your system performs read and
> write caching and buffering in ram, the tiny little ram on the disk can't
> possibly
On Mar 16, 2011, at 8:13 AM, Paul Kraus wrote:
> On Tue, Mar 15, 2011 at 11:00 PM, Edward Ned Harvey
> wrote:
>
>> BTW, what is the advantage of the kernel cifs server as opposed to samba?
>> It seems, years ago, somebody must have been standing around and saying
>> "There is a glaring deficien
On Dec 24, 2010, at 1:21 PM, Richard Elling wrote:
> Latency is what matters most. While there is a loose relationship between
> IOPS
> and latency, you really want low latency. For 15krpm drives, the average
> latency
> is 2ms for zero seeks. A decent SSD will beat that by an order of magni
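(For reference, the 2 ms figure is just rotational latency: a 15,000 rpm spindle takes 60 s / 15,000 = 4 ms per revolution, and on average the target sector is half a revolution away, i.e. 2 ms, before any seek time is added. A decent SSD answering in roughly 0.1-0.2 ms is indeed about an order of magnitude lower.)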
On Dec 15, 2010, at 6:48 PM, Bob Friesenhahn
wrote:
> On Wed, 15 Dec 2010, Linder, Doug wrote:
>
>> But it sure would be nice if they spared everyone a lot of effort and
>> annoyance and just GPL'd ZFS. I think the goodwill generated
>
> Why do you want them to "GPL" ZFS? In what way would
On Dec 8, 2010, at 11:41 PM, Edward Ned Harvey
wrote:
> For anyone who cares:
>
> I created an ESXi machine. Installed two guest (centos) machines and
> vmware-tools. Connected them to each other via only a virtual switch. Used
> rsh to transfer large quantities of data between the two guest
On Dec 7, 2010, at 9:49 PM, Edward Ned Harvey
wrote:
>> From: Ross Walker [mailto:rswwal...@gmail.com]
>>
>> Well besides databases there are VM datastores, busy email servers, busy
>> ldap servers, busy web servers, and I'm sure the list goes on and on.
>>
On Dec 7, 2010, at 12:46 PM, Roy Sigurd Karlsbakk wrote:
>> Bear a few things in mind:
>>
>> iops is not iops.
>
>
> I am totally aware of these differences, but it seems some people think RAIDz
> is nonsense unless you don't need speed at all. My testing shows (so far)
> that the speed is q
On Wed, Nov 17, 2010 at 3:00 PM, Pasi Kärkkäinen wrote:
> On Wed, Nov 17, 2010 at 10:14:10AM +, Bruno Sousa wrote:
>> Hi all,
>>
>> Let me tell you all that the MC/S *does* make a difference...I had a
>> windows fileserver using an ISCSI connection to a host running snv_134
>> with
On Nov 16, 2010, at 7:49 PM, Jim Dunham wrote:
> On Nov 16, 2010, at 6:37 PM, Ross Walker wrote:
>> On Nov 16, 2010, at 4:04 PM, Tim Cook wrote:
>>> AFAIK, esx/i doesn't support L4 hash, so that's a non-starter.
>>
>> For iSCSI one just needs to have a s
On Nov 16, 2010, at 4:04 PM, Tim Cook wrote:
>
>
> On Wed, Nov 17, 2010 at 7:56 AM, Miles Nordin wrote:
> > "tc" == Tim Cook writes:
>
>tc> Channeling Ethernet will not make it any faster. Each
>tc> individual connection will be limited to 1gbit. iSCSI with
>tc> mpxio may wo
On Nov 1, 2010, at 3:33 PM, Mark Sandrock wrote:
> Hello,
>
> I'm working with someone who replaced a failed 1TB drive (50% utilized),
> on an X4540 running OS build 134, and I think something must be wrong.
>
> Last Tuesday afternoon, zpool status reported:
>
> scrub: resilver in progre
On Nov 1, 2010, at 5:09 PM, Ian D wrote:
>> Maybe you are experiencing this:
>> http://opensolaris.org/jive/thread.jspa?threadID=11942
>
> It does look like this... Is this really the expected behaviour? That's just
> unacceptable. It is so bad it sometimes drops connections and fails copies and
On Oct 19, 2010, at 4:33 PM, Tuomas Leikola wrote:
> On Mon, Oct 18, 2010 at 8:18 PM, Simon Breden wrote:
>> So are we all agreed then, that a vdev failure will cause pool loss ?
>> --
>
> unless you use copies=2 or 3, in which case your data is still safe
> for those datasets that have this op
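For reference, the copies property mentioned above is set per dataset; a minimal sketch, with a hypothetical dataset name:

zfs set copies=2 tank/important    # keep two copies of every block in this dataset
zfs get copies tank/important      # confirm the setting

Note that extra copies only apply to data written after the property is set, and they guard against localized corruption on a vdev, not against losing the vdev (or the pool) outright.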
On Oct 15, 2010, at 5:34 PM, Ian D wrote:
>> Has anyone suggested either removing L2ARC/SLOG
>> entirely or relocating them so that all devices are
>> coming off the same controller? You've swapped the
>> external controller but the H700 with the internal
>> drives could be the real culprit. Coul
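A rough sketch of relocating a cache device as suggested here, with placeholder device names (removing a separate log device additionally requires a pool version that supports log removal, 19 or later):

zpool remove tank c2t3d0       # detach the L2ARC device from the suspect controller
zpool add tank cache c1t3d0    # re-add it (or a replacement) behind the other controller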
On Oct 15, 2010, at 9:18 AM, Stephan Budach wrote:
> Am 14.10.10 17:48, schrieb Edward Ned Harvey:
>>
>>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>>> boun...@opensolaris.org] On Behalf Of Toby Thain
>>>
I don't want to heat up the discussion about ZFS managed discs v
On Oct 12, 2010, at 8:21 AM, "Edward Ned Harvey" wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Stephan Budach
>>
>> c3t211378AC0253d0 ONLINE 0 0 0
>
> How many disks are there inside of c3t211378
On Sep 9, 2010, at 8:27 AM, Fei Xu wrote:
>>
>> Service times here are crap. Disks are malfunctioning
>> in some way. If
>> your source disks can take seconds (or 10+ seconds)
>> to reply, then of
>> course your copy will be slow. Disk is probably
>> having a hard time
>> reading the data or som
On Aug 27, 2010, at 1:04 AM, Mark wrote:
> We are using a 7210, 44 disks I believe, 11 stripes of RAIDz sets. When I
> installed I selected the best bang for the buck on the speed vs capacity
> chart.
>
> We run about 30 VM's on it, across 3 ESX 4 servers. Right now, its all
> running NFS,
On Aug 21, 2010, at 4:40 PM, Richard Elling wrote:
> On Aug 21, 2010, at 10:14 AM, Ross Walker wrote:
>> I'm planning on setting up an NFS server for our ESXi hosts and plan on
>> using a virtualized Solaris or Nexenta host to serve ZFS over NFS.
>
> Please follow
On Aug 21, 2010, at 2:14 PM, Bill Sommerfeld wrote:
> On 08/21/10 10:14, Ross Walker wrote:
>> I am trying to figure out the best way to provide both performance and
>> resiliency given the Equallogic provides the redundancy.
>
> (I have no specific experience with Equallo
I'm planning on setting up an NFS server for our ESXi hosts and plan on using a
virtualized Solaris or Nexenta host to serve ZFS over NFS.
The storage I have available is provided by Equallogic boxes over 10Gbe iSCSI.
I am trying to figure out the best way to provide both performance and
resil
On Aug 19, 2010, at 9:26 AM, joerg.schill...@fokus.fraunhofer.de (Joerg
Schilling) wrote:
> "Edward Ned Harvey" wrote:
>
>> The reasons for ZFS not in Linux must be more than just the license issue.
>
> If Linux has ZFS, then it would be possible to do
>
> -I/O performance analysis based
On Aug 18, 2010, at 10:43 AM, Bob Friesenhahn
wrote:
> On Wed, 18 Aug 2010, Joerg Schilling wrote:
>>
>> Linus is right with his primary decision, but this also applies for static
>> linking. See Lawrence Rosen for more information; the GPL does not distinguish
>> between static and dynamic linkin
On Aug 17, 2010, at 5:44 AM, joerg.schill...@fokus.fraunhofer.de (Joerg
Schilling) wrote:
> Frank Cusack wrote:
>
>> On 8/16/10 9:57 AM -0400 Ross Walker wrote:
>>> No, the only real issue is the license and I highly doubt Oracle will
>>> re-release ZFS under
On Aug 16, 2010, at 11:17 PM, Frank Cusack wrote:
> On 8/16/10 9:57 AM -0400 Ross Walker wrote:
>> No, the only real issue is the license and I highly doubt Oracle will
>> re-release ZFS under GPL to dilute its competitive advantage.
>
> You're saying Oracle wan
On Aug 15, 2010, at 9:44 PM, Peter Jeremy
wrote:
> Given that both provide similar features, it's difficult to see why
> Oracle would continue to invest in both. Given that ZFS is the more
> mature product, it would seem more logical to transfer all the effort
> to ZFS and leave btrfs to die.
On Aug 16, 2010, at 9:06 AM, "Edward Ned Harvey" wrote:
> ZFS does raid, and mirroring, and resilvering, and partitioning, and NFS, and
> CIFS, and iSCSI, and device management via vdevs, and so on. So ZFS steps
> on a lot of Linux people's toes. They already have code to do this, or that,
On Aug 14, 2010, at 8:26 AM, "Edward Ned Harvey" wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
>>
>> #3 I previously believed that vmfs3 was able to handle sparse files
>> amazingly well, like, when you create
On Aug 5, 2010, at 2:24 PM, Roch Bourbonnais wrote:
>
> Le 5 août 2010 à 19:49, Ross Walker a écrit :
>
>> On Aug 5, 2010, at 11:10 AM, Roch wrote:
>>
>>>
>>> Ross Walker writes:
>>>> On Aug 4, 2010, at 12:04 PM, Roch wrote:
>>>>
On Aug 5, 2010, at 11:10 AM, Roch wrote:
>
> Ross Walker writes:
>> On Aug 4, 2010, at 12:04 PM, Roch wrote:
>>
>>>
>>> Ross Walker writes:
>>>> On Aug 4, 2010, at 9:20 AM, Roch wrote:
>>>>
>>>>>
>>>>
On Aug 4, 2010, at 12:04 PM, Roch wrote:
>
> Ross Walker writes:
>> On Aug 4, 2010, at 9:20 AM, Roch wrote:
>>
>>>
>>>
>>> Ross Asks:
>>> So on that note, ZFS should disable the disks' write cache,
>>> not enable t
On Aug 4, 2010, at 9:20 AM, Roch wrote:
>
>
> Ross Asks:
> So on that note, ZFS should disable the disks' write cache,
> not enable them despite ZFS's COW properties because it
> should be resilient.
>
> No, because ZFS builds resiliency on top of unreliable parts. It's able to
> deal
On Aug 4, 2010, at 3:52 AM, Roch wrote:
>
> Ross Walker writes:
>
>> On Aug 3, 2010, at 12:13 PM, Roch Bourbonnais
>> wrote:
>>
>>>
>>> Le 27 mai 2010 à 07:03, Brent Jones a écrit :
>>>
>>>> On Wed, May 26, 2010 at 5:08 A
On Aug 3, 2010, at 5:56 PM, Robert Milkowski wrote:
> On 03/08/2010 22:49, Ross Walker wrote:
>> On Aug 3, 2010, at 12:13 PM, Roch Bourbonnais
>> wrote:
>>
>>
>>> Le 27 mai 2010 à 07:03, Brent Jones a écrit :
>>>
>>>
>>>
On Aug 3, 2010, at 12:13 PM, Roch Bourbonnais wrote:
>
> Le 27 mai 2010 à 07:03, Brent Jones a écrit :
>
>> On Wed, May 26, 2010 at 5:08 AM, Matt Connolly
>> wrote:
>>> I've set up an iScsi volume on OpenSolaris (snv_134) with these commands:
>>>
>>> sh-4.0# zfs create rpool/iscsi
>>> sh-4.0#
On Jul 26, 2010, at 2:51 PM, Dav Banks wrote:
> I wanted to test it as a backup solution. Maybe that's crazy in itself but I
> want to try it.
>
> Basically, once a week detach the 'backup' pool from the mirror, replace the
> drives, add the new raidz to the mirror and let it resilver and sit
On Jul 23, 2010, at 10:14 PM, Edward Ned Harvey wrote:
>> From: Arne Jansen [mailto:sensi...@gmx.net]
>>>
>>> Can anyone else confirm or deny the correctness of this statement?
>>
>> As I understand it that's the whole point of raidz. Each block is its
>> own
>> stripe.
>
> Nope, that doesn't
On Jul 22, 2010, at 2:41 PM, Miles Nordin wrote:
>> "sw" == Saxon, Will writes:
>
>sw> 'clone' vs. a 'copy' would be very easy since we have
>sw> deduplication now
>
> dedup doesn't replace the snapshot/clone feature for the
> NFS-share-full-of-vmdk use case because there's no equi
On Jul 20, 2010, at 6:12 AM, v wrote:
> Hi,
> for zfs raidz1, I know that for random io the iops of a raidz1 vdev equal those of one
> physical disk, since raidz1 is like raid5. So does raid5 have the same
> performance as raidz1? i.e. random iops equal to one physical disk's iops.
On reads, no, any part of
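As a rough worked example of the numbers behind this: five drives that each manage ~200 random IOPS give a raidz1 vdev roughly ~200 small random read IOPS, because each block is spread across all the data disks and the full stripe is read and checksummed together, whereas a conventional raid5 serving independent small reads from individual disks can approach 5 x 200.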
The whole disk layout should be copied from disk 1 to 2, then the slice on disk
2 that corresponds to the slice on disk 1 should be attached to the rpool which
forms an rpool mirror (attached not added).
Then you need to add the grub bootloader to disk 2.
When it finishes resilvering then you
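A minimal sketch of that procedure, assuming hypothetical devices c0t0d0 (existing) and c0t1d0 (new) on an x86 install:

prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t1d0s2          # copy the disk layout to disk 2
zpool attach rpool c0t0d0s0 c0t1d0s0                                  # attach (not add) to form the mirror
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0    # put grub on disk 2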
On Jul 11, 2010, at 5:11 PM, Freddie Cash wrote:
> ZFS-FUSE is horribly unstable, although that's more an indication of
> the stability of the storage stack on Linux.
Not really, more an indication of the pseudo-VFS layer implemented in fuse.
Remember fuse provides its own VFS API separate fro
On Jul 10, 2010, at 5:46 AM, Erik Trimble wrote:
> On 7/10/2010 1:14 AM, Graham McArdle wrote:
>>> Instead, create "Single Disk" arrays for each disk.
>>>
>> I have a question related to this but with a different controller: If I'm
>> using a RAID controller to provide non-RAID single-disk
On Jun 24, 2010, at 10:42 AM, Robert Milkowski wrote:
> On 24/06/2010 14:32, Ross Walker wrote:
>> On Jun 24, 2010, at 5:40 AM, Robert Milkowski wrote:
>>
>>
>>> On 23/06/2010 18:50, Adam Leventhal wrote:
>>>
>>>>> Does i
On Jun 24, 2010, at 5:40 AM, Robert Milkowski wrote:
> On 23/06/2010 18:50, Adam Leventhal wrote:
>>> Does it mean that for dataset used for databases and similar environments
>>> where basically all blocks have fixed size and there is no other data all
>>> parity information will end-up on one
On Jun 23, 2010, at 1:48 PM, Robert Milkowski wrote:
>
> 128GB.
>
> Does it mean that for dataset used for databases and similar environments
> where basically all blocks have fixed size and there is no other data all
> parity information will end-up on one (z1) or two (z2) specific disks?
W
On Jun 22, 2010, at 8:40 AM, Jeff Bacon wrote:
>> The term 'stripe' has been so outrageously severely abused in this
>> forum that it is impossible to know what someone is talking about when
>> they use the term. Seemingly intelligent people continue to use wrong
>> terminology because they thin
On Jun 16, 2010, at 9:02 AM, Carlos Varela
wrote:
Does the machine respond to ping?
Yes
If there is a gui does the mouse pointer move?
There is no GUI (nexentastor)
Does the keyboard numlock key respond at all ?
Yes
I just find it very hard to believe that such a
situation cou
On Jun 13, 2010, at 2:14 PM, Jan Hellevik
wrote:
Well, for me it was a cure. Nothing else I tried got the pool back.
As far as I can tell, the way to get it back should be to use
symlinks to the fdisk partitions on my SSD, but that did not work
for me. Using -V got the pool back. What is
On Jun 11, 2010, at 2:07 AM, Dave Koelmeyer
wrote:
I trimmed, and then got complained at by a mailing list user that
the context of what I was replying to was missing. Can't win :P
If at a minimum one trims the disclaimers, footers and signatures,
that's better than nothing.
On long th
On Jun 10, 2010, at 5:54 PM, Richard Elling
wrote:
On Jun 10, 2010, at 1:24 PM, Arne Jansen wrote:
Andrey Kuzmin wrote:
Well, I'm more accustomed to "sequential vs. random", but YMMW.
As to 67000 512 byte writes (this sounds suspiciously close to
32Mb fitting into cache), did you have w
On Jun 8, 2010, at 1:33 PM, besson3c wrote:
Sure! The pool consists of 6 SATA drives configured as RAID-Z. There
are no special read or write cache drives. This pool is shared to
several VMs via NFS, these VMs manage email, web, and a Quickbooks
server running on FreeBSD, Linux, and Wind
On Jun 7, 2010, at 2:10 AM, Erik Trimble
wrote:
Comments in-line.
On 6/6/2010 9:16 PM, Ken wrote:
I'm looking at VMWare, ESXi 4, but I'll take any advice offered.
On Sun, Jun 6, 2010 at 19:40, Erik Trimble
wrote:
On 6/6/2010 6:22 PM, Ken wrote:
Hi,
I'm looking to build a virtualiz
On Jun 2, 2010, at 12:03 PM, zfsnoob4 wrote:
Wow thank you very much for the clear instructions.
And Yes, I have another 120GB drive for the OS, separate from A, B
and C. I will repartition the drive and install Solaris. Then maybe
at some point I'll delete the entire drive and just instal
On May 20, 2010, at 7:17 PM, Ragnar Sundblad wrote:
On 21 maj 2010, at 00.53, Ross Walker wrote:
On May 20, 2010, at 6:25 PM, Travis Tabbal wrote:
use a slog at all if it's not durable? You should
disable the ZIL
instead.
This is basically where I was going. There only seems
On May 20, 2010, at 6:25 PM, Travis Tabbal wrote:
use a slog at all if it's not durable? You should
disable the ZIL
instead.
This is basically where I was going. There only seems to be one SSD
that is considered "working", the Zeus IOPS. Even if I had the
money, I can't buy it. As my ap
On May 12, 2010, at 7:12 PM, Richard Elling
wrote:
On May 11, 2010, at 10:17 PM, schickb wrote:
I'm looking for input on building an HA configuration for ZFS. I've
read the FAQ and understand that the standard approach is to have a
standby system with access to a shared pool that is impo
On May 12, 2010, at 3:06 PM, Manoj Joseph
wrote:
Ross Walker wrote:
On May 12, 2010, at 1:17 AM, schickb wrote:
I'm looking for input on building an HA configuration for ZFS. I've
read the FAQ and understand that the standard approach is to have a
standby system with access t
On May 12, 2010, at 1:17 AM, schickb wrote:
I'm looking for input on building an HA configuration for ZFS. I've
read the FAQ and understand that the standard approach is to have a
standby system with access to a shared pool that is imported during
a failover.
The problem is that we use Z
On May 6, 2010, at 8:34 AM, Edward Ned Harvey
wrote:
From: Pasi Kärkkäinen [mailto:pa...@iki.fi]
In neither case do you have data or filesystem corruption.
ZFS probably is still OK, since it's designed to handle this (?),
but the data can't be OK if you lose 30 secs of writes.. 30 secs o
On Apr 22, 2010, at 11:03 AM, Geoff Nordli wrote:
From: Ross Walker [mailto:rswwal...@gmail.com]
Sent: Thursday, April 22, 2010 6:34 AM
On Apr 20, 2010, at 4:44 PM, Geoff Nordli
wrote:
If you combine the hypervisor and storage server and have students
connect to the VMs via RDP or VNC
On Apr 20, 2010, at 4:44 PM, Geoff Nordli wrote:
From: matthew patton [mailto:patto...@yahoo.com]
Sent: Tuesday, April 20, 2010 12:54 PM
Geoff Nordli wrote:
With our particular use case we are going to do a "save
state" on their
virtual machines, which is going to write 100-400 MB
per VM v
On Apr 20, 2010, at 12:13 AM, Sunil wrote:
Hi,
I have a strange requirement. My pool consists of 2 500GB disks in
stripe which I am trying to convert into a RAIDZ setup without data
loss but I have only two additional disks: 750GB and 1TB. So, here
is what I thought:
1. Carve a 500GB s
On Apr 19, 2010, at 12:50 PM, Don wrote:
Now I'm simply confused.
Do you mean one cachefile shared between the two nodes for this
zpool? How, may I ask, would this work?
The rpool should be in /etc/zfs/zpool.cache.
The shared pool should be in /etc/cluster/zpool.cache (or wherever
you p
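In command form, that arrangement might look like the following, using the path named above and a placeholder pool name:

zpool set cachefile=/etc/cluster/zpool.cache sharedpool          # keep the shared pool out of the default cachefile
zpool import -o cachefile=/etc/cluster/zpool.cache sharedpool    # on the node taking the pool over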
On Fri, Apr 2, 2010 at 8:03 AM, Edward Ned Harvey
wrote:
>> > Seriously, all disks configured WriteThrough (spindle and SSD disks
>> > alike)
>> > using the dedicated ZIL SSD device, very noticeably faster than
>> > enabling the
>> > WriteBack.
>>
>> What do you get with both SSD ZIL and WriteBack
On Thu, Apr 1, 2010 at 10:03 AM, Darren J Moffat
wrote:
> On 01/04/2010 14:49, Ross Walker wrote:
>>>
>>> We're talking about the "sync" for NFS exports in Linux; what do they
>>> mean
>>> with "sync" NFS exports?
>>
>>
On Apr 1, 2010, at 8:42 AM, casper@sun.com wrote:
Is that what "sync" means in Linux?
A sync write is one in which the application blocks until the OS
acks that
the write has been committed to disk. An async write is given to
the OS,
and the OS is permitted to buffer the write to di
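A crude way to see the difference from a shell, assuming GNU dd (the oflag option is not in Solaris /usr/bin/dd) and a scratch path:

dd if=/dev/zero of=/tank/testfile bs=8k count=1000 oflag=sync    # synchronous: each write waits for stable storage
dd if=/dev/zero of=/tank/testfile bs=8k count=1000               # asynchronous: the OS may buffer and ack immediately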
On Mar 31, 2010, at 11:58 PM, Edward Ned Harvey
wrote:
We ran into something similar with these drives in an X4170 that
turned
out to
be an issue of the preconfigured logical volumes on the drives. Once
we made
sure all of our Sun PCI HBAs were running the exact same version of
firmware
a
On Mar 31, 2010, at 11:51 PM, Edward Ned Harvey
wrote:
A MegaRAID card with write-back cache? It should also be cheaper than
the F20.
I haven't posted results yet, but I just finished a few weeks of
extensive
benchmarking various configurations. I can say this:
WriteBack cache is much
On Mar 31, 2010, at 10:25 PM, Richard Elling
wrote:
On Mar 31, 2010, at 7:11 PM, Ross Walker wrote:
On Mar 31, 2010, at 5:39 AM, Robert Milkowski
wrote:
On Wed, Mar 31, 2010 at 1:00 AM, Karsten Weiss
Use something other than Open/Solaris with ZFS as an NFS
server? :)
I don
On Mar 31, 2010, at 5:39 AM, Robert Milkowski wrote:
On Wed, Mar 31, 2010 at 1:00 AM, Karsten Weiss
Use something other than Open/Solaris with ZFS as an NFS
server? :)
I don't think you'll find the performance you paid for with ZFS and
Solaris at this time. I've been trying to more tha
On Mar 20, 2010, at 11:48 AM, vikkr wrote:
Thanks Ross, I plan on exporting each drive individually over iSCSI.
In this case, writes as well as reads will go to all 6 discs
at once, right?
The only question is how to calculate the fault tolerance of such a
system if the discs are all different
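For what it's worth, assuming each of the three servers exports two of the six iSCSI disks, the arithmetic is simple: raidz2 tolerates any two device failures, so losing one whole server (two disks) leaves the pool degraded but importable, while losing two servers (four disks) loses the pool. Mismatched disk sizes only cost capacity, since every member of the vdev is treated as if it were the size of the smallest one.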
On Mar 20, 2010, at 10:18 AM, vikkr wrote:
Hi, sorry for the bad English and picture :).
Would such a setup work?
Three OpenFiler servers each export their drives (2 x 1 TB) over iSCSI to an
OpenSolaris server.
On OpenSolaris they are assembled into a RAID-Z with double parity.
The OpenSolaris server provides NFS access to this array, and du
On Mar 17, 2010, at 2:30 AM, Erik Ableson wrote:
On 17 mars 2010, at 00:25, Svein Skogen wrote:
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
On 16.03.2010 22:31, erik.ableson wrote:
On 16 mars 2010, at 21:00, Marc Nicholas wrote:
On Tue, Mar 16, 2010 at 3:16 PM, Svein Skogen mailto
On Mar 15, 2010, at 11:10 PM, Tim Cook wrote:
On Mon, Mar 15, 2010 at 9:10 PM, Ross Walker
wrote:
On Mar 15, 2010, at 7:11 PM, Tonmaus wrote:
Being an iscsi
target, this volume was mounted as a single iscsi
disk from the solaris host, and prepared as a zfs
pool consisting of this
On Mar 15, 2010, at 7:11 PM, Tonmaus wrote:
Being an iscsi
target, this volume was mounted as a single iscsi
disk from the solaris host, and prepared as a zfs
pool consisting of this single iscsi target. ZFS best
practices tell me that to be safe in case of
corruption, pools should always be m
On Mar 15, 2010, at 12:19 PM, Ware Adams
wrote:
On Mar 15, 2010, at 12:13 PM, Gabriele Bulfon wrote:
Well, I actually don't know what implementation is inside this
legacy machine.
This machine is an AMI StoreTrends ITX, but maybe it has been built
around IET, don't know.
Well, maybe I s
On Mar 15, 2010, at 10:55 AM, Gabriele Bulfon
wrote:
Hello,
I'd like to check for any guidance about using zfs on iscsi storage
appliances.
Recently I had an unlucky situation with an unlucky storage machine
freezing.
Once the storage was up again (rebooted) all other iscsi clients
were
On Mar 11, 2010, at 12:31 PM, Andrew wrote:
Hi Ross,
Ok - as a Solaris newbie... I'm going to need your help.
Format produces the following:-
c8t4d0 (VMware-Virtualdisk-1.0 cyl 65268 alt 2 hd 255 sec 126) /
p...@0,0/pci15ad,1...@10/s...@4,0
what dd command do I need to run to reference thi
On Mar 11, 2010, at 8:27 AM, Andrew wrote:
Ok,
The fault appears to have occurred regardless of the attempts to
move to vSphere as we've now moved the host back to ESX 3.5 from
whence it came and the problem still exists.
Looks to me like the fault occurred as a result of a reboot.
Any
On Mar 9, 2010, at 1:42 PM, Roch Bourbonnais
wrote:
I think this is highlighting that there is an extra CPU requirement to
manage small blocks in ZFS.
The table would probably turn over if you go to 16K zfs records and
16K reads/writes from the application.
The next step for you is to figure
On Mar 8, 2010, at 11:46 PM, ольга крыжановская anov...@gmail.com> wrote:
tmpfs lacks features like quota and NFSv4 ACL support. May not be the
best choice if such features are required.
True, but if the OP is looking for those features they are more than
likely not looking for an in-memory fi
On Feb 25, 2010, at 9:11 AM, Giovanni Tirloni
wrote:
On Thu, Feb 25, 2010 at 9:47 AM, Jacob Ritorto wrote:
It's a kind gesture to say it'll continue to exist and all, but
without commercial support from the manufacturer, it's relegated to
hobbyist curiosity status for us. If I even mention
On Feb 19, 2010, at 4:57 PM, Ragnar Sundblad wrote:
On 18 feb 2010, at 13.55, Phil Harman wrote:
...
Whilst the latest bug fixes put the world to rights again with
respect to correctness, it may be that some of our performance
workarounds are still unsafe (i.e. if my iSCSI client assumes a
On Feb 9, 2010, at 1:55 PM, matthew patton wrote:
The cheapest solution out there that isn't a Supermicro-like server
chassis, is DAS in the form of HP or Dell MD-series which top out at
15 or 16 3" drives. I can only chain 3 units per SAS port off a HBA
in either case.
The new Dell MD11
On Feb 8, 2010, at 4:58 PM, Edward Ned Harvey wrote:
How are you managing UID's on the NFS server? If user eharvey
connects to
server from client Mac A, or Mac B, or Windows 1, or Windows 2, or
any of
the linux machines ... the server has to know it's eharvey, and
assign the
correct UID'
On Feb 5, 2010, at 10:49 AM, Robert Milkowski wrote:
Actually, there is.
One difference is that when writing to a raid-z{1|2} pool compared
to raid-10 pool you should get better throughput if at least 4
drives are used. Basically it is due to the fact that in RAID-10 the
maximum you can g
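A back-of-the-envelope illustration of that point: with 4 drives that each stream at B MB/s, a raid-10 layout writes at roughly 2 x B (half the spindles hold mirror copies), while a raidz1 across the same disks can approach 3 x B (one disk's worth of bandwidth goes to parity), which is why the advantage only appears once at least 4 drives are used.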
Interesting, can you explain what zdb is dumping exactly?
I suppose you would be looking for blocks referenced in the snapshot
that have a single reference and print out the associated file/
directory name?
-Ross
On Feb 4, 2010, at 7:29 AM, Darren Mackay wrote:
Hi Ross,
zdb - f..
system
functions offered by OS. I scan every byte in every file manually
and it
^^^
On February 3, 2010 10:11:01 AM -0500 Ross Walker
wrote:
Not a ZFS method, but you could use rsync with the dry run option
to list
all changed fi
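Something along these lines, assuming the dataset's .zfs/snapshot directory is visible and with hypothetical snapshot names:

rsync -avn --delete /tank/fs/.zfs/snapshot/monday/ /tank/fs/.zfs/snapshot/tuesday/

The -n keeps it a dry run, and --delete makes removals show up alongside additions and changes.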
On Feb 3, 2010, at 8:59 PM, Frank Cusack
wrote:
On February 3, 2010 6:46:57 PM -0500 Ross Walker
wrote:
So was there a final consensus on the best way to find the difference
between two snapshots (files/directories added, files/directories
deleted
and file/directories changed)?
Find
On Feb 3, 2010, at 12:35 PM, Frank Cusack <z...@linetwo.net> wrote:
On February 3, 2010 12:19:50 PM -0500 Frank Cusack wrote:
If you do need to know about deleted files, the find method still may
be faster depending on how ddiff determines whether or not to do a
file diff. The docs don't expla
On Feb 3, 2010, at 9:53 AM, Henu wrote:
Okay, so first of all, it's true that send is always fast and 100%
reliable because it uses blocks to see differences. Good, and thanks
for this information. If everything else fails, I can parse the
information I want from send stream :)
But am I
On Jan 30, 2010, at 2:53 PM, Mark wrote:
I have a 1U server that supports 2 SATA drives in the chassis. I
have 2 750 GB SATA drives. When I install opensolaris, I assume it
will want to use all or part of one of those drives for the install.
That leaves me with the remaining part of disk 1
On Jan 21, 2010, at 6:47 PM, Daniel Carosone wrote:
On Thu, Jan 21, 2010 at 02:54:21PM -0800, Richard Elling wrote:
+ support file systems larger than 2GiB including 32-bit UIDs and GIDs
file systems, but what about individual files within?
I think the original author meant files bigger than 2
On Jan 14, 2010, at 10:44 AM, "Mr. T Doodle"
wrote:
Hello,
I have played with ZFS but not deployed any production systems using
ZFS and would like some opinions
I have a T-series box with 4 internal drives and would like to
deploy ZFS with availability and performance in mind ;
On Jan 11, 2010, at 2:23 PM, Bob Friesenhahn wrote:
On Mon, 11 Jan 2010, bank kus wrote:
Are we still trying to solve the starvation problem?
I would argue the disk I/O model is fundamentally broken on Solaris
if there is no fair I/O scheduling between multiple read sources
until that
On Wed, Jan 6, 2010 at 4:30 PM, Wes Felter wrote:
> Michael Herf wrote:
>
>> I agree that RAID-DP is much more scalable for reads than RAIDZx, and
>> this basically turns into a cost concern at scale.
>>
>> The raw cost/GB for ZFS is much lower, so even a 3-way mirror could be
>> used instead of n
On Mon, Jan 4, 2010 at 2:27 AM, matthew patton wrote:
> I find it baffling that RaidZ(2,3) was designed to split a record-size block
> into N (N=# of member devices) pieces and send the uselessly tiny requests to
> spinning rust when we know the massive delays entailed in head seeks and
> rotat
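Concretely, with the default 128 KB recordsize on a 5-disk raidz1, one logical block becomes four 32 KB data chunks plus a parity chunk, so every spindle in the vdev is involved in (and must seek for) each random read of that block; that is exactly the "uselessly tiny requests" complaint.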
On Sun, Jan 3, 2010 at 1:59 AM, Brent Jones wrote:
> On Wed, Dec 30, 2009 at 9:35 PM, Ross Walker wrote:
>> On Dec 30, 2009, at 11:55 PM, "Steffen Plotner"
>> wrote:
>>
>> Hello,
>>
>> I was doing performance testing, validating zvol performa