Not myself yet - but here is some really interesting reading on it:
http://hardforum.com/showthread.php?p=1035820555
On Aug 12, 2010, at 7:03 PM, valrh...@gmail.com wrote:
> Has anyone bought one of these cards recently? It seems to list for around
> $170 at various places, which seems like quite a decent deal.
Has anyone bought one of these cards recently? It seems to list for around $170
at various places, which seems like quite a decent deal. But no well-known
reputable vendor I know seems to sell these, and I want to be able to have
someone backing the sale if something isn't perfect. Where do you
Script attached.
Cheers,
Marty
[Attachment: zfs_sync (binary data)]
On Thu, Aug 12, 2010 at 07:48:10PM -0500, Norm Jacobs wrote:
> For single file updates, this is commonly solved by writing data to
> a temp file and using rename(2) to move it in place when it's ready.
For anything more complicated you need... a more complicated approach.
Note that "transactional
For single file updates, this is commonly solved by writing data to a
temp file and using rename(2) to move it in place when it's ready.
-Norm
On 08/12/10 04:51 PM, Jason wrote:
Has any thought been given to exposing some sort of transactional API
for ZFS at the user level (even if just consolidation private)?
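A minimal sketch of the write-to-a-temp-file-then-rename(2) pattern Norm describes, in Python for illustration; the path, the temp-file handling, and the fsync before the rename are assumptions on my part rather than details from the thread:

import os
import tempfile

def atomic_write(path, data):
    # Write to a temp file in the same directory, fsync it, then rename(2) it
    # over the target, so readers see either the old file or the complete new
    # one, never a truncated one.
    dirname = os.path.dirname(path) or "."
    fd, tmp = tempfile.mkstemp(dir=dirname)   # temp file on the same filesystem
    try:
        with os.fdopen(fd, "w") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())              # data on disk before the rename
        os.rename(tmp, path)                  # rename(2): atomic replacement
    except Exception:
        os.unlink(tmp)
        raise

# Illustrative path only; the thread's real-world example was nsswitch.conf.
atomic_write("/tmp/nsswitch.conf.example", "hosts: files dns\n")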
Guys,
Need your help. My DEV134 OSOL build with my 30TB disk system got really
screwed due to my fat fingers :-(
I added 3 drives to my pool with the intent to add them to my RAIDz2...
This is what my zpool status looks like:
pool: rzpool2
state: ONLINE
scrub: none requested
config:
Has any thought been given to exposing some sort of transactional API
for ZFS at the user level (even if just consolidation private)?
Just recently, it would seem that a poorly timed, unscheduled poweroff while
NWAM was attempting to update nsswitch.conf left me with a 0-byte
nsswitch.conf (which when t
> "sw" == Saxon, Will writes:
sw> It was and may still be common to use RDM for VMs that need
sw> very high IO performance. It also used to be the only
sw> supported way to get thin provisioning for an individual VM
sw> disk. However, VMware regularly makes a lot of noise about ...
Thank you everyone for your answers.
Cost is a factor, but the main obstacle is that the chassis will only support
four SSDs (and that's with using the spare 5.25" bay for a 4x2.5" hot-swap bay).
My plan now is to buy the SSDs and do extensive testing. I want to focus my
performance efforts on
On Wed, August 11, 2010 15:11, Paul Kraus wrote:
> On Wed, Aug 11, 2010 at 10:36 AM, David Dyer-Bennet wrote:
Am I looking for too much here? I *thought* I was doing something that
should be simple and basic and frequently used nearly everywhere, and
hence certain to work.
We are going to be migrating to a new EMC frame using Open Replicator.
ZFS is sitting on volumes that are running MPxIO, so the controller number/disk
number is going to change when we reboot the server. I would like to know if
anyone has done this, and whether the ZFS filesystems will "just work" and fi
> People are always tempted to put more than one log onto an SSD because
> "Hey, the system could never use more than 8G, but I've got a 32G drive!
> What a waste of money!" Which has some truth in it. But the line of thought
> you should have is "Hey, the system will do its best to max out the
We are using ZFS-backed fibre targets for ESXi 4.1 (and previously 4.0) and have
had good performance with no issues. The fibre LUNs were formatted with VMFS by
the ESXi boxes.
SQLIO benchmarks from a guest system running on a fibre-attached ESXi host:
File Size (MB)  Threads  Read/Write  Duration
I want to transfer a lot of ZFS data from an old OpenSolaris ZFS mirror
(v22) to a new FreeBSD-8.1 ZFS mirror (v14).
If I boot off the OpenSolaris boot CD and import both mirrors, will the
copying from v22 ZFS to v14 ZFS be harmless?
I'm not sure if this is the right mailing list for this question.
On 12 Aug 2010, at 15:10, Marty Scholes wrote:
> Say the word and I'll send you a copy.
pretty please :)
thanks
(meanwhile, I created the top dataset on the backup pool, set compression to
gzip-2, removed any local compression setting on the source dataset children
and I am
> Hello,
>
> I would like to back up my main zpool (originally
> called "data") inside an equally originally named
> "backup" zpool, which will also hold other kinds of
> backups.
>
> Basically I'd like to end up with
> backup/data
> backup/data/dataset1
> backup/data/dataset2
> backup/otherthing
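One hedged way to produce that layout is a recursive send/receive, sketched below from Python. The snapshot name data@tobackup and the use of zfs receive -d are assumptions on my part (and it assumes backup/data does not already exist), so try it on scratch datasets first:

import subprocess

# Assumes a recursive snapshot of the source pool was taken first, e.g.:
#   zfs snapshot -r data@tobackup          (snapshot name is illustrative)
snap = "data@tobackup"

# zfs send -R streams the whole dataset tree; zfs receive -d backup/data strips
# the leading pool name ("data") from each sent name and appends the rest under
# backup/data, giving backup/data, backup/data/dataset1, backup/data/dataset2, ...
send = subprocess.Popen(["zfs", "send", "-R", snap], stdout=subprocess.PIPE)
recv = subprocess.Popen(["zfs", "receive", "-d", "backup/data"], stdin=send.stdout)
send.stdout.close()   # so zfs send sees a broken pipe if the receive side dies
recv.wait()
send.wait()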
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Chris Twa
>
> I have three zpools on a server and want to add a mirrored pair of
> SSDs for the ZIL. Can the same pair of SSDs be used for the ZIL of
> all three zpools, or is it one ZIL SLOG device per zpool?
We are doing NFS in VMware 4.0U2 production, 50K users, using OpenSolaris
SNV_134 on SuperMicro boxes with SATA drives. Yes, I am crazy. Our experience
has been that iSCSI for ESXi 4.x is fast and works well with minimal fussing
until there is a problem. When that problem happens, getting to data
I am guessing you're experiencing CPU or memory failure. Or motherboard, or
disk controller.
> -----Original Message-----
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Michael Anderson
> Sent: Thursday, August 12, 2010 3:46 AM
> To: zfs
On Wed, Aug 11, 2010 at 6:15 PM, Saxon, Will wrote:
>
> It really depends on your VM system, what you plan on doing with VMs and how
> you plan to do it.
>
> I have the vSphere Enterprise product and I am using the DRS feature, so VMs
> are vmotioned around
> my cluster all throughout the day.
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Simone Caldana
>
> I would like to back up my main zpool (originally called "data") inside
> an equally originally named "backup" zpool, which will also hold other
> kinds of backups.
>
> Basically I'd like to end up with
> In my case, it gives an error that I need at least 11 disks (which I don't)
> but the point is that raidz parity does not seem to be limited to 3. Is this
> not true?
RAID-Z is limited to 3 parity disks. The error message is giving you false hope,
and that's a bug. If you had plugged in 11 disks ...
> -----Original Message-----
> From: Mark J Musante [mailto:mark.musa...@oracle.com]
> Sent: Wednesday, August 11, 2010 5:03 AM
> To: Seth Keith
> Cc: zfs-discuss@opensolaris.org
> Subject: Re: [zfs-discuss] zfs replace problems please please help
>
> On Tue, 10 Aug 2010, seth keith wrote:
>
>
On 11.08.10 00:40, Peter Taps wrote:
> Hi,
>
> I am working through the fundamentals of raidz. From the man
> pages, a raidz configuration of P disks and N parity disks provides (P-N)*X
> storage space, where X is the size of each disk. For example, if I have 3
> disks of 10G each and I c
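To make the man-page formula concrete (the example in the message is cut off, so the parity counts and numbers below are illustrative):

# usable = (P - N) * X, per the man-page description quoted above
def raidz_usable(p_disks, n_parity, disk_size_gb):
    return (p_disks - n_parity) * disk_size_gb

print(raidz_usable(3, 1, 10))   # raidz1 on three 10G disks -> 20 (GB usable)
print(raidz_usable(3, 2, 10))   # raidz2 on the same disks  -> 10 (GB usable)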
I have an 8-disk raidz2 pool (ZP02) that I am having some issues with.
The pool is using WD20EADS 2TB drives connected to an Intel SASUC8I
controller (LSI 1068E chip).
The pool was originally created when the machine was running SXDE 1/08.
I later installed OpenSolaris 2009.06 and imported the pool ...
On 12/08/2010 07:27, Chris Twa wrote:
I have three zpools on a server and want to add a mirrored pair of SSDs for
the ZIL. Can the same pair of SSDs be used for the ZIL of all three zpools or
is it one ZIL SLOG device per zpool?
Only if you partition it up and give slices to the pools, however ...
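For illustration, here is what "give slices to the pools" could look like, scripted in Python; the pool names, the slice device paths, and the idea of mirroring one slice from each SSD per pool are assumptions, not commands given in the thread:

import subprocess

# Hypothetical layout: both SSDs are partitioned into three slices (s0-s2),
# and each pool gets a mirrored log made of one slice from each SSD.
pools_and_slices = [
    ("pool1", "c1t0d0s0", "c1t1d0s0"),
    ("pool2", "c1t0d0s1", "c1t1d0s1"),
    ("pool3", "c1t0d0s2", "c1t1d0s2"),
]

for pool, slice_a, slice_b in pools_and_slices:
    # "zpool add <pool> log mirror <devA> <devB>" attaches a mirrored slog
    subprocess.run(["zpool", "add", pool, "log", "mirror", slice_a, slice_b],
                   check=True)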
Hello,
I've been getting warnings that my ZFS pool is degraded. At first it was
complaining about a few corrupt files, which were listed as hex numbers instead
of filenames, e.g.
VOL1:<0x0>
After a scrub, a couple of the filenames appeared - turns out they were in
snapshots I don't really need ...
Thanks to the help from many people on this board, I finally got my
OpenSolaris-based NAS box up and running.
I have a Dell T410 with a Xeon E5504 2.0 GHz (Nehalem) quad-core processor and
8 GB of RAM. I have six 2TB Hitachi Deskstar (HD32000IDK/7K) SATA drives, set
up as stripes across three mirrors ...