On Mar 8, 2010, at 11:46 PM, ольга крыжановская <anov...@gmail.com> wrote:
tmpfs lacks features like quota and NFSv4 ACL support. May not be the
best choice if such features are required.
True, but if the OP is looking for those features they are more than
unlikely looking for an in-memory fi
On Mar 9, 2010, at 1:42 PM, Roch Bourbonnais
wrote:
I think this is highlighting that there is an extra CPU requirement to
manage small blocks in ZFS.
The table would probably turn over if you go to 16K zfs records and
16K reads/writes from the application.
Next step for you is to figure
On Mar 11, 2010, at 8:27 AM, Andrew wrote:
Ok,
The fault appears to have occurred regardless of the attempts to
move to vSphere as we've now moved the host back to ESX 3.5 from
whence it came and the problem still exists.
Looks to me like the fault occurred as a result of a reboot.
Any
On Mar 11, 2010, at 12:31 PM, Andrew wrote:
Hi Ross,
Ok - as a Solaris newbie, I'm going to need your help.
Format produces the following:
c8t4d0 (VMware-Virtualdisk-1.0 cyl 65268 alt 2 hd 255 sec 126) /
p...@0,0/pci15ad,1...@10/s...@4,0
what dd command do I need to run to reference thi
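A hedged illustration of the kind of read-only check dd can make against the
device shown by format above (the slice and byte counts are assumptions, not
from the thread):

  # non-destructive read test against the raw whole-disk device
  dd if=/dev/rdsk/c8t4d0s2 of=/dev/null bs=1M count=100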
On Mar 15, 2010, at 10:55 AM, Gabriele Bulfon
wrote:
Hello,
I'd like to check for any guidance about using zfs on iscsi storage
appliances.
Recently I had an unlucky situation with an unlucky storage machine
freezing.
Once the storage was up again (rebooted) all other iscsi clients
were
On Mar 15, 2010, at 12:19 PM, Ware Adams
wrote:
On Mar 15, 2010, at 12:13 PM, Gabriele Bulfon wrote:
Well, I actually don't know what implementation is inside this
legacy machine.
This machine is an AMI StoreTrends ITX, but maybe it has been built
around IET, don't know.
Well, maybe I s
On Mar 15, 2010, at 7:11 PM, Tonmaus wrote:
Being an iscsi
target, this volume was mounted as a single iscsi
disk from the solaris host, and prepared as a zfs
pool consisting of this single iscsi target. ZFS best
practices tell me that to be safe in case of
corruption, pools should always be m
On Mar 15, 2010, at 11:10 PM, Tim Cook wrote:
On Mon, Mar 15, 2010 at 9:10 PM, Ross Walker
wrote:
On Mar 15, 2010, at 7:11 PM, Tonmaus wrote:
Being an iscsi
target, this volume was mounted as a single iscsi
disk from the solaris host, and prepared as a zfs
pool consisting of this
On Mar 17, 2010, at 2:30 AM, Erik Ableson wrote:
On 17 March 2010, at 00:25, Svein Skogen wrote:
On 16.03.2010 22:31, erik.ableson wrote:
On 16 March 2010, at 21:00, Marc Nicholas wrote:
On Tue, Mar 16, 2010 at 3:16 PM, Svein Skogen mailto
On Mar 20, 2010, at 10:18 AM, vikkr wrote:
Hi, sorry for the bad English and the picture :).
Is such a setup possible?
3 Openfiler servers each export their drives (2 x 1 TB) over iSCSI to
OpenSolaris.
On OpenSolaris I assembled a RAID-Z with double parity.
The OpenSolaris server provides NFS access to this array, and du
On Mar 20, 2010, at 11:48 AM, vikkr wrote:
Thanks Ross, I plan on exporting each drive individually over iSCSI.
In this case, writes, as well as reads, will go to all 6 disks
at once, right?
The only question is how to calculate the fault tolerance of such a
system if the disks are all different
On Mar 31, 2010, at 5:39 AM, Robert Milkowski wrote:
On Wed, Mar 31, 2010 at 1:00 AM, Karsten Weiss
Use something other than Open/Solaris with ZFS as an NFS
server? :)
I don't think you'll find the performance you paid for with ZFS and
Solaris at this time. I've been trying to more tha
On Mar 31, 2010, at 10:25 PM, Richard Elling
wrote:
On Mar 31, 2010, at 7:11 PM, Ross Walker wrote:
On Mar 31, 2010, at 5:39 AM, Robert Milkowski
wrote:
On Wed, Mar 31, 2010 at 1:00 AM, Karsten Weiss
Use something other than Open/Solaris with ZFS as an NFS
server? :)
I don
On Mar 31, 2010, at 11:51 PM, Edward Ned Harvey
wrote:
A MegaRAID card with write-back cache? It should also be cheaper than
the F20.
I haven't posted results yet, but I just finished a few weeks of
extensive
benchmarking various configurations. I can say this:
WriteBack cache is much
On Mar 31, 2010, at 11:58 PM, Edward Ned Harvey
wrote:
We ran into something similar with these drives in an X4170 that
turned
out to
be an issue of the preconfigured logical volumes on the drives. Once
we made
sure all of our Sun PCI HBAs were running the exact same version of
firmware
a
On Apr 1, 2010, at 8:42 AM, casper@sun.com wrote:
Is that what "sync" means in Linux?
A sync write is one in which the application blocks until the OS
acks that
the write has been committed to disk. An async write is given to
the OS,
and the OS is permitted to buffer the write to di
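A minimal sketch of the distinction from the application side, using GNU dd on
a Linux client (file name and sizes are illustrative):

  # synchronous: each write blocks until the data is on stable storage
  dd if=/dev/zero of=testfile bs=128k count=1000 oflag=sync
  # flush once at the end instead of on every write
  dd if=/dev/zero of=testfile bs=128k count=1000 conv=fsync
  # asynchronous: dd returns as soon as the OS has buffered the data
  dd if=/dev/zero of=testfile bs=128k count=1000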
On Thu, Apr 1, 2010 at 10:03 AM, Darren J Moffat
wrote:
> On 01/04/2010 14:49, Ross Walker wrote:
>>>
>>> We're talking about the "sync" for NFS exports in Linux; what do they
>>> mean
>>> with "sync" NFS exports?
>>
>>
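For context, the Linux export options being discussed live in /etc/exports; a
minimal example (paths and client range are illustrative):

  # "sync": the server must commit data to stable storage before replying
  /export/data     192.168.0.0/24(rw,sync,no_subtree_check)
  # "async": the server may reply before the data reaches disk
  /export/scratch  192.168.0.0/24(rw,async,no_subtree_check)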
On Fri, Apr 2, 2010 at 8:03 AM, Edward Ned Harvey
wrote:
>> > Seriously, all disks configured WriteThrough (spindle and SSD disks
>> > alike)
>> > using the dedicated ZIL SSD device, very noticeably faster than
>> > enabling the
>> > WriteBack.
>>
>> What do you get with both SSD ZIL and WriteBack
On Apr 19, 2010, at 12:50 PM, Don wrote:
Now I'm simply confused.
Do you mean one cachefile shared between the two nodes for this
zpool? How, may I ask, would this work?
The rpool should be in /etc/zfs/zpool.cache.
The shared pool should be in /etc/cluster/zpool.cache (or wherever
you p
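A minimal sketch of pointing a shared pool at the alternate cachefile
suggested above, assuming the pool is named sharedpool:

  zpool set cachefile=/etc/cluster/zpool.cache sharedpool
  zpool get cachefile sharedpool rpool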
On Apr 20, 2010, at 12:13 AM, Sunil wrote:
Hi,
I have a strange requirement. My pool consists of 2 500GB disks in
stripe which I am trying to convert into a RAIDZ setup without data
loss but I have only two additional disks: 750GB and 1TB. So, here
is what I thought:
1. Carve a 500GB s
On Apr 20, 2010, at 4:44 PM, Geoff Nordli wrote:
From: matthew patton [mailto:patto...@yahoo.com]
Sent: Tuesday, April 20, 2010 12:54 PM
Geoff Nordli wrote:
With our particular use case we are going to do a "save
state" on their
virtual machines, which is going to write 100-400 MB
per VM v
On Apr 22, 2010, at 11:03 AM, Geoff Nordli wrote:
From: Ross Walker [mailto:rswwal...@gmail.com]
Sent: Thursday, April 22, 2010 6:34 AM
On Apr 20, 2010, at 4:44 PM, Geoff Nordli
wrote:
If you combine the hypervisor and storage server and have students
connect to the VMs via RDP or VNC
On May 6, 2010, at 8:34 AM, Edward Ned Harvey
wrote:
From: Pasi Kärkkäinen [mailto:pa...@iki.fi]
In neither case do you have data or filesystem corruption.
ZFS probably is still OK, since it's designed to handle this (?),
but the data can't be OK if you lose 30 secs of writes.. 30 secs o
On May 12, 2010, at 1:17 AM, schickb wrote:
I'm looking for input on building an HA configuration for ZFS. I've
read the FAQ and understand that the standard approach is to have a
standby system with access to a shared pool that is imported during
a failover.
The problem is that we use Z
On May 12, 2010, at 3:06 PM, Manoj Joseph
wrote:
Ross Walker wrote:
On May 12, 2010, at 1:17 AM, schickb wrote:
I'm looking for input on building an HA configuration for ZFS. I've
read the FAQ and understand that the standard approach is to have a
standby system with access t
On May 12, 2010, at 7:12 PM, Richard Elling
wrote:
On May 11, 2010, at 10:17 PM, schickb wrote:
I'm looking for input on building an HA configuration for ZFS. I've
read the FAQ and understand that the standard approach is to have a
standby system with access to a shared pool that is impo
On May 20, 2010, at 6:25 PM, Travis Tabbal wrote:
use a slog at all if it's not durable? You should
disable the ZIL
instead.
This is basically where I was going. There only seems to be one SSD
that is considered "working", the Zeus IOPS. Even if I had the
money, I can't buy it. As my ap
On May 20, 2010, at 7:17 PM, Ragnar Sundblad wrote:
On 21 May 2010, at 00:53, Ross Walker wrote:
On May 20, 2010, at 6:25 PM, Travis Tabbal wrote:
use a slog at all if it's not durable? You should
disable the ZIL
instead.
This is basically where I was going. There only seems
On Jun 2, 2010, at 12:03 PM, zfsnoob4 wrote:
Wow thank you very much for the clear instructions.
And Yes, I have another 120GB drive for the OS, separate from A, B
and C. I will repartition the drive and install Solaris. Then maybe
at some point I'll delete the entire drive and just instal
On Jun 7, 2010, at 2:10 AM, Erik Trimble
wrote:
Comments in-line.
On 6/6/2010 9:16 PM, Ken wrote:
I'm looking at VMWare, ESXi 4, but I'll take any advice offered.
On Sun, Jun 6, 2010 at 19:40, Erik Trimble
wrote:
On 6/6/2010 6:22 PM, Ken wrote:
Hi,
I'm looking to build a virtualiz
On Jun 8, 2010, at 1:33 PM, besson3c wrote:
Sure! The pool consists of 6 SATA drives configured as RAID-Z. There
are no special read or write cache drives. This pool is shared to
several VMs via NFS, these VMs manage email, web, and a Quickbooks
server running on FreeBSD, Linux, and Wind
On Jun 10, 2010, at 5:54 PM, Richard Elling
wrote:
On Jun 10, 2010, at 1:24 PM, Arne Jansen wrote:
Andrey Kuzmin wrote:
Well, I'm more accustomed to "sequential vs. random", but YMMV.
As to 67000 512 byte writes (this sounds suspiciously close to
32Mb fitting into cache), did you have w
On Jun 11, 2010, at 2:07 AM, Dave Koelmeyer
wrote:
I trimmed, and then got complained at by a mailing list user that
the context of what I was replying to was missing. Can't win :P
If at a minimum one trims the disclaimers, footers and signatures,
that's better than nothing.
On long th
On Jun 13, 2010, at 2:14 PM, Jan Hellevik
wrote:
Well, for me it was a cure. Nothing else I tried got the pool back.
As far as I can tell, the way to get it back should be to use
symlinks to the fdisk partitions on my SSD, but that did not work
for me. Using -V got the pool back. What is
On Jun 16, 2010, at 9:02 AM, Carlos Varela
wrote:
Does the machine respond to ping?
Yes
If there is a gui does the mouse pointer move?
There is no GUI (nexentastor)
Does the keyboard numlock key respond at all ?
Yes
I just find it very hard to believe that such a
situation cou
On Jun 22, 2010, at 8:40 AM, Jeff Bacon wrote:
>> The term 'stripe' has been so outrageously severely abused in this
>> forum that it is impossible to know what someone is talking about when
>> they use the term. Seemingly intelligent people continue to use wrong
>> terminology because they thin
On Jun 23, 2010, at 1:48 PM, Robert Milkowski wrote:
>
> 128GB.
>
> Does it mean that for datasets used for databases and similar environments
> where basically all blocks have a fixed size and there is no other data, all
> parity information will end up on one (z1) or two (z2) specific disks?
W
On Jun 24, 2010, at 5:40 AM, Robert Milkowski wrote:
> On 23/06/2010 18:50, Adam Leventhal wrote:
>>> Does it mean that for datasets used for databases and similar environments
>>> where basically all blocks have a fixed size and there is no other data, all
>>> parity information will end up on one
On Jun 24, 2010, at 10:42 AM, Robert Milkowski wrote:
> On 24/06/2010 14:32, Ross Walker wrote:
>> On Jun 24, 2010, at 5:40 AM, Robert Milkowski wrote:
>>
>>
>>> On 23/06/2010 18:50, Adam Leventhal wrote:
>>>
>>>>> Does i
On Jul 10, 2010, at 5:46 AM, Erik Trimble wrote:
> On 7/10/2010 1:14 AM, Graham McArdle wrote:
>>> Instead, create "Single Disk" arrays for each disk.
>>>
>> I have a question related to this but with a different controller: If I'm
>> using a RAID controller to provide non-RAID single-disk
On Jul 11, 2010, at 5:11 PM, Freddie Cash wrote:
> ZFS-FUSE is horribly unstable, although that's more an indication of
> the stability of the storage stack on Linux.
Not really, more an indication of the pseudo-VFS layer implemented in fuse.
Remember fuse provides its own VFS API separate fro
The whole disk layout should be copied from disk 1 to 2, then the slice on disk
2 that corresponds to the slice on disk 1 should be attached to the rpool which
forms an rpool mirror (attached not added).
Then you need to add the grub bootloader to disk 2.
When it finishes resilvering then you
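A hedged sketch of those steps on OpenSolaris, assuming the existing disk is
c0t0d0 and the new one is c0t1d0:

  # copy the slice layout from disk 1 to disk 2
  prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t1d0s2
  # attach (not add) the matching slice to form the rpool mirror
  zpool attach rpool c0t0d0s0 c0t1d0s0
  # once resilvering completes, install the grub bootloader on disk 2
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0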
On Jul 20, 2010, at 6:12 AM, v wrote:
> Hi,
> for zfs raidz1, I know that for random io the iops of a raidz1 vdev equal one
> physical disk's iops, since raidz1 is like raid5, so does raid5 have the same
> performance as raidz1? i.e. random iops equal to one physical disk's iops.
On reads, no, any part of
On Jul 22, 2010, at 2:41 PM, Miles Nordin wrote:
>> "sw" == Saxon, Will writes:
>
>sw> 'clone' vs. a 'copy' would be very easy since we have
>sw> deduplication now
>
> dedup doesn't replace the snapshot/clone feature for the
> NFS-share-full-of-vmdk use case because there's no equi
On Jul 23, 2010, at 10:14 PM, Edward Ned Harvey wrote:
>> From: Arne Jansen [mailto:sensi...@gmx.net]
>>>
>>> Can anyone else confirm or deny the correctness of this statement?
>>
>> As I understand it that's the whole point of raidz. Each block is its
>> own
>> stripe.
>
> Nope, that doesn't
On Jul 26, 2010, at 2:51 PM, Dav Banks wrote:
> I wanted to test it as a backup solution. Maybe that's crazy in itself but I
> want to try it.
>
> Basically, once a week detach the 'backup' pool from the mirror, replace the
> drives, add the new raidz to the mirror and let it resilver and sit
On Aug 3, 2010, at 12:13 PM, Roch Bourbonnais wrote:
>
> On 27 May 2010 at 07:03, Brent Jones wrote:
>
>> On Wed, May 26, 2010 at 5:08 AM, Matt Connolly
>> wrote:
>>> I've set up an iScsi volume on OpenSolaris (snv_134) with these commands:
>>>
>>> sh-4.0# zfs create rpool/iscsi
>>> sh-4.0#
On Aug 3, 2010, at 5:56 PM, Robert Milkowski wrote:
> On 03/08/2010 22:49, Ross Walker wrote:
>> On Aug 3, 2010, at 12:13 PM, Roch Bourbonnais
>> wrote:
>>
>>
>>> On 27 May 2010 at 07:03, Brent Jones wrote:
>>>
>>>
>>>
On Aug 4, 2010, at 3:52 AM, Roch wrote:
>
> Ross Walker writes:
>
>> On Aug 3, 2010, at 12:13 PM, Roch Bourbonnais
>> wrote:
>>
>>>
>>> On 27 May 2010 at 07:03, Brent Jones wrote:
>>>
>>>> On Wed, May 26, 2010 at 5:08 A
On Aug 4, 2010, at 9:20 AM, Roch wrote:
>
>
> Ross Asks:
> So on that note, ZFS should disable the disks' write cache,
> not enable them despite ZFS's COW properties because it
> should be resilient.
>
> No, because ZFS builds resiliency on top of unreliable parts. It's able to
> deal
On Aug 4, 2010, at 12:04 PM, Roch wrote:
>
> Ross Walker writes:
>> On Aug 4, 2010, at 9:20 AM, Roch wrote:
>>
>>>
>>>
>>> Ross Asks:
>>> So on that note, ZFS should disable the disks' write cache,
>>> not enable t
On Aug 5, 2010, at 11:10 AM, Roch wrote:
>
> Ross Walker writes:
>> On Aug 4, 2010, at 12:04 PM, Roch wrote:
>>
>>>
>>> Ross Walker writes:
>>>> On Aug 4, 2010, at 9:20 AM, Roch wrote:
>>>>
>>>>>
>>>>
On Aug 5, 2010, at 2:24 PM, Roch Bourbonnais wrote:
>
> On 5 August 2010 at 19:49, Ross Walker wrote:
>
>> On Aug 5, 2010, at 11:10 AM, Roch wrote:
>>
>>>
>>> Ross Walker writes:
>>>> On Aug 4, 2010, at 12:04 PM, Roch wrote:
>>>>
On Aug 14, 2010, at 8:26 AM, "Edward Ned Harvey" wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
>>
>> #3 I previously believed that vmfs3 was able to handle sparse files
>> amazingly well, like, when you create
On Aug 16, 2010, at 9:06 AM, "Edward Ned Harvey" wrote:
> ZFS does raid, and mirroring, and resilvering, and partitioning, and NFS, and
> CIFS, and iSCSI, and device management via vdev's, and so on. So ZFS steps
> on a lot of linux peoples' toes. They already have code to do this, or that,
On Aug 15, 2010, at 9:44 PM, Peter Jeremy
wrote:
> Given that both provide similar features, it's difficult to see why
> Oracle would continue to invest in both. Given that ZFS is the more
> mature product, it would seem more logical to transfer all the effort
> to ZFS and leave btrfs to die.
On Sep 27, 2009, at 3:19 AM, Paul Archer wrote:
So, after *much* wrangling, I managed to take one of my drives
offline, relabel/repartition it (because I saw that the first sector
was 34, not 256, and realized there could be an alignment issue),
and get it back into the pool.
Problem is t
On Sep 27, 2009, at 11:49 AM, Paul Archer wrote:
Problem is that while it's back, the performance is horrible. It's
resilvering at about (according to iostat) 3.5MB/sec. And at some
point, I was zeroing out the drive (with 'dd if=/dev/zero of=/dev/dsk/c7d0'),
and iostat showed me that the
On Sep 27, 2009, at 1:44 PM, Paul Archer wrote:
My controller, while normally a full RAID controller, has had its
BIOS turned off, so it's acting as a simple SATA controller. Plus,
I'm seeing this same slow performance with dd, not just with ZFS.
And I wouldn't think that write caching wou
On Sep 27, 2009, at 8:41 PM, Ron Watkins wrote:
I have a box with 4 disks. It was my intent to place a mirrored root
partition on 2 disks on different controllers, then use the
remaining space and the other 2 disks to create a raid-5
configuration from which to export iscsi luns for use by
On Sep 27, 2009, at 10:05 PM, Ron Watkins wrote:
My goal is to have a mirrored root on c1t0d0s0/c2t0d0s0, another
mirrored app fs on c1t0d0s1/c2t0d0s1 and then a 3+1 Raid-5 accross
c1t0d0s7/c1t1d0s7/c2t0d0s7/c2t1d0s7.
There is no need for the 2 mirrors both on c1t0 and c2t0; one mirrored
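A hedged sketch of the layout being discussed, reusing the slice names from
the post (in practice the mirrored root pool is created by the installer, not
by hand):

  # mirrored root pool across the two controllers
  zpool create rpool mirror c1t0d0s0 c2t0d0s0
  # mirrored application pool
  zpool create apps mirror c1t0d0s1 c2t0d0s1
  # 3+1 raidz (the ZFS analogue of raid-5) across the s7 slices
  zpool create data raidz c1t0d0s7 c1t1d0s7 c2t0d0s7 c2t1d0s7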
On Tue, Sep 29, 2009 at 10:35 AM, Richard Elling
wrote:
>
> On Sep 29, 2009, at 2:03 AM, Bernd Nies wrote:
>
>> Hi,
>>
>> We have a Sun Storage 7410 with the latest release (which is based upon
>> opensolaris). The system uses a hybrid storage pool (23 1TB SATA disks in
>> RAIDZ2 and 1 18GB SSD as
On Tue, Sep 29, 2009 at 5:30 PM, David Stewart wrote:
> Before I try these options you outlined I do have a question. I went in to
> VMWare Fusion and removed one of the drives from the virtual machine that was
> used to create a RAIDZ pool (there were five drives, one for the OS, and four
> f
On Sep 30, 2009, at 10:40 AM, Brian Hubbleday wrote:
Just realised I missed a rather important word out there, that could
confuse.
So the conclusion I draw from this is that the --incremental--
snapshot simply contains every written block since the last snapshot
regardless of whether the
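For reference, the incremental stream being described is the one produced by
zfs send -i (dataset and snapshot names are illustrative):

  # sends only the blocks written between snapshots 01 and 02
  zfs send -i tank/fs@01 tank/fs@02 | zfs receive backup/fs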
But this is concerning reads not writes.
-Ross
On Oct 20, 2009, at 4:43 PM, Trevor Pretty
wrote:
Gary
Were you measuring the Linux NFS write performance? It's well known
that Linux can use NFS in a very "unsafe" mode and report the write
complete when it is not all the way to safe s
' which is unsafe.
-Ross
Ross Walker wrote:
But this is concerning reads not writes.
-Ross
On Oct 20, 2009, at 4:43 PM, Trevor Pretty
wrote:
Gary
Were you measuring the Linux NFS write performance? It's well
known that Linux can use NFS in a very "unsafe" mo
On Nov 2, 2009, at 2:38 PM, "Paul B. Henson" wrote:
On Sat, 31 Oct 2009, Al Hopper wrote:
Kudos to you - nice technical analysis and presentation. Keep
lobbying
your point of view - I think interoperability should win out if it
comes
down to an arbitrary decision.
Thanks; but so far tha
On Nov 6, 2009, at 11:23 PM, "Paul B. Henson" wrote:
NFSv3 gss:
damien cfservd # mount -o sec=krb5p ike.unx.csupomona.edu:/export/
user/henson /mnt
hen...@damien /mnt/sgid_test $ ls -ld
drwx--s--x+ 2 henson iit 2 Nov 6 20:14 .
hen...@damien /mnt/sgid_test $ mkdir gss
hen...@damien /mnt/
On Nov 8, 2009, at 12:09 PM, Tim Cook wrote:
Why not just convert the VM's to run in virtualbox and run Solaris
directly on the hardware?
Or use OpenSolaris xVM (Xen) with either qemu img files on zpools for
the VMs or zvols.
-Ross
On Nov 27, 2009, at 12:55 PM, Carsten Aulbert wrote:
On Friday 27 November 2009 18:45:36 Carsten Aulbert wrote:
I was too fast, now it looks completely different:
scrub: resilver completed after 4h3m with 0 errors on Fri Nov 27 18:46:33 2009
[...]
s13:~# zpool status
pool: atlashome
state
On Dec 2, 2009, at 6:57 AM, Brian McKerr
wrote:
Hi all,
I have a home server based on SNV_127 with 8 disks;
2 x 500GB mirrored root pool
6 x 1TB raidz2 data pool
This server performs a few functions;
NFS : for several 'lab' ESX virtual machines
NFS : mythtv storage (videos, music, recordi
On Dec 11, 2009, at 4:17 AM, Alexander Skwar wrote:
Hello Jeff!
Could you (or anyone else, of course *G*) please show me how?
Situation:
There shall be 2 snapshots of a ZFS called rpool/rb-test
Let's call those snapshots "01" and "02".
$ sudo zfs create rpool/rb-test
$ zfs list rpool/rb-t
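A minimal sketch of creating the two snapshots named above, assuming
rpool/rb-test already exists:

  zfs snapshot rpool/rb-test@01
  # ...change some data...
  zfs snapshot rpool/rb-test@02
  zfs list -t snapshot -r rpool/rb-test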
On Dec 11, 2009, at 3:26 PM, Alexander Skwar wrote:
Hi!
On Fri, Dec 11, 2009 at 15:55, Fajar A. Nugraha
wrote:
On Fri, Dec 11, 2009 at 4:17 PM, Alexander Skwar
wrote:
$ sudo zfs create rpool/rb-test
$ zfs list rpool/rb-test
NAME USED AVAIL REFER MOUNTPOINT
rpool/rb-test
On Dec 21, 2009, at 4:09 PM, Michael Herf wrote:
Anyone who's lost data this way: were you doing weekly scrubs, or
did you find out about the simultaneous failures after not touching
the bits for months?
Scrubbing on a routine basis is good for detecting problems early, but
it doesn't so
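A minimal example of scrubbing on a routine basis from cron (the schedule and
pool name are illustrative):

  # root crontab entry: scrub the pool every Sunday at 03:00
  0 3 * * 0 /usr/sbin/zpool scrub tank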
On Dec 21, 2009, at 11:56 PM, Roman Naumenko wrote:
On Dec 21, 2009, at 4:09 PM, Michael Herf
wrote:
Anyone who's lost data this way: were you doing
weekly scrubs, or
did you find out about the simultaneous failures
after not touching
the bits for months?
Scrubbing on a routine basis i
On Dec 22, 2009, at 11:46 AM, Bob Friesenhahn wrote:
On Tue, 22 Dec 2009, Ross Walker wrote:
Raid10 provides excellent performance and if performance is a
priority then I recommend it, but I was under the impression that
resiliency was the priority, as raidz2/raidz3 provide grea
On Dec 22, 2009, at 8:40 PM, Charles Hedrick
wrote:
It turns out that our storage is currently being used for
* backups of various kinds, run daily by cron jobs
* saving old log files from our production application
* saving old versions of java files from our production application
Most of
On Dec 22, 2009, at 8:58 PM, Richard Elling
wrote:
On Dec 22, 2009, at 5:40 PM, Charles Hedrick wrote:
It turns out that our storage is currently being used for
* backups of various kinds, run daily by cron jobs
* saving old log files from our production application
* saving old versions o
On Dec 22, 2009, at 9:08 PM, Bob Friesenhahn wrote:
On Tue, 22 Dec 2009, Ross Walker wrote:
I think zil_disable may actually make sense.
How about a zil comprised of two mirrored iSCSI vdevs formed from a
SSD on each box?
I would not have believed that this is a useful idea except t
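A hedged sketch of adding such a mirrored log device, assuming the two
iSCSI-backed SSD LUNs appear as c4t0d0 and c5t0d0:

  zpool add tank log mirror c4t0d0 c5t0d0
  zpool status tank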
On Dec 25, 2009, at 6:01 PM, Jeroen Roodhart
wrote:
Hi Freddie, list,
Option 4 is to re-do your pool, using fewer disks per raidz2 vdev,
giving more vdevs to the pool, and thus increasing the IOps for the
whole pool.
14 disks in a single
On Dec 29, 2009, at 7:55 AM, Brad wrote:
Thanks for the suggestion!
I have heard mirrored vdev configurations are preferred for Oracle
but what's the difference between a raidz mirrored vdev vs a raid10
setup?
A mirrored raidz provides redundancy at a steep cost to performance
and might
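For comparison, a raid10-style setup in ZFS is a stripe of mirrored vdevs; a
minimal sketch (pool and device names are illustrative):

  zpool create oradata mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0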
On Dec 29, 2009, at 12:36 PM, Bob Friesenhahn wrote:
On Tue, 29 Dec 2009, Ross Walker wrote:
A mirrored raidz provides redundancy at a steep cost to performance
and might I add a high monetary cost.
I am not sure what a "mirrored raidz" is. I have never heard of
such a
On Dec 29, 2009, at 5:37 PM, Brad wrote:
Hi! I'm attempting to understand the pros/cons between raid5 and
raidz after running into a performance issue with Oracle on zfs (http://opensolaris.org/jive/thread.jspa?threadID=120703&tstart=0).
I would appreciate some feedback on what I've und
On Wed, Dec 30, 2009 at 12:35 PM, Bob Friesenhahn
wrote:
> On Tue, 29 Dec 2009, Ross Walker wrote:
>>
>>> Some important points to consider are that every write to a raidz vdev
>>> must be synchronous. In other words, the write needs to complete on all the
>>
On Dec 30, 2009, at 11:55 PM, "Steffen Plotner"
wrote:
Hello,
I was doing performance testing, validating zvol performance in
particular, and found zvol write performance to be slow,
~35-44MB/s at 1MB blocksize writes. I then tested the underlying zfs
file system with the same te
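A hedged sketch of that kind of comparison (pool, volume and file names are
assumptions, not from the post):

  # write to a zvol through its raw device node
  zfs create -V 10g tank/testvol
  dd if=/dev/zero of=/dev/zvol/rdsk/tank/testvol bs=1M count=4096
  # same-sized write to a file on a plain zfs file system
  dd if=/dev/zero of=/tank/fs/testfile bs=1M count=4096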
On Sun, Jan 3, 2010 at 1:59 AM, Brent Jones wrote:
> On Wed, Dec 30, 2009 at 9:35 PM, Ross Walker wrote:
>> On Dec 30, 2009, at 11:55 PM, "Steffen Plotner"
>> wrote:
>>
>> Hello,
>>
>> I was doing performance testing, validating zvol performa
On Mon, Jan 4, 2010 at 2:27 AM, matthew patton wrote:
> I find it baffling that RaidZ(2,3) was designed to split a record-size block
> into N (N=# of member devices) pieces and send the uselessly tiny requests to
> spinning rust when we know the massive delays entailed in head seeks and
> rotat
On Wed, Jan 6, 2010 at 4:30 PM, Wes Felter wrote:
> Michael Herf wrote:
>
>> I agree that RAID-DP is much more scalable for reads than RAIDZx, and
>> this basically turns into a cost concern at scale.
>>
>> The raw cost/GB for ZFS is much lower, so even a 3-way mirror could be
>> used instead of n
On Jan 11, 2010, at 2:23 PM, Bob Friesenhahn wrote:
On Mon, 11 Jan 2010, bank kus wrote:
Are we still trying to solve the starvation problem?
I would argue the disk I/O model is fundamentally broken on Solaris
if there is no fair I/O scheduling between multiple read sources
until that
On Jan 14, 2010, at 10:44 AM, "Mr. T Doodle"
wrote:
Hello,
I have played with ZFS but not deployed any production systems using
ZFS and would like some opinions
I have a T-series box with 4 internal drives and would like to
deploy ZFS with availability and performance in mind ;
On Jan 21, 2010, at 6:47 PM, Daniel Carosone wrote:
On Thu, Jan 21, 2010 at 02:54:21PM -0800, Richard Elling wrote:
+ support file systems larger than 2GiB include 32-bit UIDs and GIDs
file systems, but what about individual files within?
I think the original author meant files bigger than 2
On Jan 30, 2010, at 2:53 PM, Mark wrote:
I have a 1U server that supports 2 SATA drives in the chassis. I
have 2 750 GB SATA drives. When I install opensolaris, I assume it
will want to use all or part of one of those drives for the install.
That leaves me with the remaining part of disk 1
On Feb 3, 2010, at 9:53 AM, Henu wrote:
Okay, so first of all, it's true that send is always fast and 100%
reliable because it uses blocks to see differences. Good, and thanks
for this information. If everything else fails, I can parse the
information I want from send stream :)
But am I
On Feb 3, 2010, at 12:35 PM, Frank Cusack <z...@linetwo.net> wrote:
On February 3, 2010 12:19:50 PM -0500 Frank Cusack wrote:
If you do need to know about deleted files, the find method still may
be faster depending on how ddiff determines whether or not to do a
file diff. The docs don't expla
On Feb 3, 2010, at 8:59 PM, Frank Cusack
wrote:
On February 3, 2010 6:46:57 PM -0500 Ross Walker
wrote:
So was there a final consensus on the best way to find the difference
between two snapshots (files/directories added, files/directories
deleted
and file/directories changed)?
Find
system
functions offered by OS. I scan every byte in every file manually
and it
^^^
On February 3, 2010 10:11:01 AM -0500 Ross Walker
wrote:
Not a ZFS method, but you could use rsync with the dry run option
to list
all changed fi
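A minimal sketch of that rsync dry run against the two snapshot directories
(dataset path and snapshot names are illustrative; the trailing slashes
matter):

  # itemize what changed going from snapshot 01 to 02 without copying anything
  rsync -ain --delete /tank/fs/.zfs/snapshot/02/ /tank/fs/.zfs/snapshot/01/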
Interesting, can you explain what zdb is dumping exactly?
I suppose you would be looking for blocks referenced in the snapshot
that have a single reference and print out the associated file/
directory name?
-Ross
On Feb 4, 2010, at 7:29 AM, Darren Mackay wrote:
Hi Ross,
zdb - f..
On Feb 5, 2010, at 10:49 AM, Robert Milkowski wrote:
Actually, there is.
One difference is that when writing to a raid-z{1|2} pool compared
to a raid-10 pool you should get better throughput if at least 4
drives are used. Basically it is due to the fact that in RAID-10 the
maximum you can g
On Feb 8, 2010, at 4:58 PM, Edward Ned Harvey wrote:
How are you managing UID's on the NFS server? If user eharvey
connects to
server from client Mac A, or Mac B, or Windows 1, or Windows 2, or
any of
the linux machines ... the server has to know it's eharvey, and
assign the
correct UID'