On 07/23/10 04:38 PM, Chris wrote:
Apologies if this question has been answered before, and sorry if this is in
the wrong forum (I couldn't find communities >> zfs >> discuss in the list) but
I haven't been able to find the answer despite extensive searching.
I have a zpool consisting of 3 x 1TB disks. I would like to add 1 x 1TB
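For context, a minimal sketch of the usual options, assuming a pool named "tank" and a new disk c2t3d0 (both names hypothetical); note that a raidz vdev itself cannot be widened by adding a single disk:

  # add the disk as a new top-level vdev, striped with the existing ones
  # (zpool warns about mismatched redundancy if the pool is raidz)
  zpool add tank c2t3d0

  # or, if the existing disks are plain (non-raidz), mirror one of them
  zpool attach tank c0t0d0 c2t3d0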
On Thu, Jul 22, 2010 at 11:14 AM, Miles Nordin wrote:
> reboots. Brandon, have you actually set it yourself, or are you just
> aggregating forum discussion?
I'm using an older revision of WD10EADS drives that allow TLER to be
enabled via WDTLER.EXE. I have not had a drive fail in this
environment
On Jul 22, 2010, at 2:41 PM, Miles Nordin wrote:
>> "sw" == Saxon, Will writes:
>
>sw> 'clone' vs. a 'copy' would be very easy since we have
>sw> deduplication now
>
> dedup doesn't replace the snapshot/clone feature for the
> NFS-share-full-of-vmdk use case because there's no equivalent of
> 'zfs rollback'
I've got a system running s10x_u7wos_08 with only half of the disks
provisioned. When performing a dry run of a zpool add, I'm seeing some strange
output:
root# zpool add -n vst raidz2 c0t2d0 c5t1d0 c4t1d0 c4t5d0 c7t1d0 c7t5d0 c6t1d0
c6t5d0 c1t1d0 c1t5d0 c0t1d0 raidz2 c0t5d0 c0t5d0 c4t4d0 c7t0d
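(Note that c0t5d0 appears twice in the device list above. A quick way to catch accidental duplicates in a long device list before running the add, using nothing ZFS-specific; device names abbreviated for illustration:

  echo c0t2d0 c5t1d0 c4t1d0 c0t5d0 c0t5d0 | tr ' ' '\n' | sort | uniq -d
  # uniq -d prints only names that occur more than once

If a name comes back, that would explain odd-looking dry-run output.)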
On 07/22/10 04:00, Orvar Korvar wrote:
Ok, so the bandwidth will be cut in half, and some people use this
configuration. But how bad is it to have the bandwidth cut in half?
Will it be hardly noticeable?
For a home server, I doubt you'll notice.
I've set up several systems (desktop & home server) as
On Jul 21, 2010, at 7:56 AM, Hernan F wrote:
> Hi,
> Out of pure curiosity, I was wondering, what would happen if one tries to use
> a regular 7200RPM (or 10K) drive as slog or L2ARC (or both)?
My rule of thumb is that if the latency of the slog (write latency) or L2ARC
(random read)
is 10x bet
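The rule of thumb aside, trying it is cheap; a sketch with hypothetical names (pool "tank", spare disks c3t0d0 and c3t1d0):

  zpool add tank log c3t0d0      # dedicated slog device
  zpool add tank cache c3t1d0    # L2ARC device
  zpool remove tank c3t1d0       # cache devices can be removed at any time;
                                 # log device removal needs pool version 19+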
You haven't stated what you intend to use your PC for or what your
requirements are. Without that, I don't see how anyone can come up with
an optimal configuration. So... what do you plan to do with your PC?
Do you want the fastest performance and don't care about anything else?
Use all SSDs.
I'm using a new email client and didn't notice that I hadn't replied to
the list. Since it might be helpful to others, here are the missing bits.
On 7/21/2010 5:07 PM, Freddie Cash wrote:
We use the 500 GB versions attached to 3Ware controllers (configured
as Single Disk arrays). They work quite ni
Hi Garrett,
Since my problem turned out to be a debug kernel in my compilations,
I booted back into the Nexenta 3 RC2 CD and let a scrub run for about
half an hour to see if I just hadn't waited long enough the first time
around. It never made it past 159 MB/s. I finally rebooted into my
145 n
----- Original Message -----
> On Wed, 2010-07-21 at 09:42 -0700, Orvar Korvar wrote:
> > Are there any drawbacks to partition a SSD in two parts and use
> > L2ARC on one partition, and ZIL on the other? Any thoughts?
>
> Its probably a reasonable approach. The ZIL can be fairly small...
> only
>
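A sketch of the two-partition layout being discussed, assuming the SSD is c1t0d0 and has already been sliced into s0 and s1 with format(1M) (names hypothetical):

  zpool add tank log c1t0d0s0     # small slice as the separate ZIL
  zpool add tank cache c1t0d0s1   # the rest as L2ARC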
> "sw" == Saxon, Will writes:
sw> 'clone' vs. a 'copy' would be very easy since we have
sw> deduplication now
dedup doesn't replace the snapshot/clone feature for the
NFS-share-full-of-vmdk use case because there's no equivalent of
'zfs rollback'
I'm tempted to say, ``vmware needs
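For readers following the thread, the operations dedup has no answer for, sketched against a hypothetical dataset tank/vmstore full of vmdk files:

  zfs snapshot tank/vmstore@golden        # point-in-time copy, free until blocks diverge
  zfs clone tank/vmstore@golden tank/vm2  # writable clone sharing blocks with the snapshot
  zfs rollback tank/vmstore@golden        # throw away everything written since the snapshot

Dedup can reproduce the shared-blocks effect of the clone, but there is no dedup equivalent of the rollback.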
> "bh" == Brandon High writes:
bh> Recent versions no longer support enabling TLER or ERC. To
bh> the best of my knowledge, Samsung and Hitachi drives all
bh> support CCTL, which is yet another name for the same thing.
once again, I have to ask, has anyone actually found these f
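One way to check for yourself on drives that accept SCT commands (needs a recent smartmontools; device name hypothetical and shown Linux-style):

  smartctl -l scterc /dev/sda          # read the current error-recovery timeouts
  smartctl -l scterc,70,70 /dev/sda    # set read/write recovery to 7.0 seconds

Drives with the feature removed simply report the command as unsupported.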
Hello,
I've recently joined this list, primarily because of a thread I found from late
April ("Is file cloning anywhere on ZFS roadmap") asking about file-level
cloning in ZFS. Based on that thread I understand that it's not currently
possible to 'clone' files instead of 'copying' them, but the
On 22/07/2010 03:25, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Robert Milkowski
I had a quick look at your results a moment ago.
The problem is that you used a server with 4GB of RAM + a RAID card
w
On Wed, Jul 21, 2010 at 7:43 PM, Edward Ned Harvey wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of John Andrunas
>>
>> I know this is potentially a loaded question, but what is generally
>> considered the optimal disk configuration
I didn't mean to imply that I use it for my media storage, just that I
occasionally encounter situations when it could be useful.
BR,
--
Saso
On 07/22/2010 11:23 AM, Roy Sigurd Karlsbakk wrote:
> ----- Original Message -----
>> I do encounter situa
> I'm currently planning on running FreeBSD with ZFS, but I wanted to
> double-check how much memory I'd need for it to be stable. The ZFS
> wiki currently says you can go as low as 1 GB, but recommends 2 GB;
> however, elsewhere I've seen someone claim that you need at least 4 GB.
> ...
> How a
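For what it's worth, the usual trick for keeping a low-memory FreeBSD+ZFS box stable in that era was to cap the ARC in /boot/loader.conf; a sketch, with values that are only illustrative:

  # /boot/loader.conf
  vm.kmem_size="1024M"      # enlarge the kernel address space (mainly 32-bit installs)
  vfs.zfs.arc_max="512M"    # keep the ARC from starving the rest of the system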
----- Original Message -----
> I do encounter situations when I (or somebody from my family)
> accidentally create multiple copies of photo albums. :-)
I wouldn't recommend using dedup on this system. Dedup requires lots of RAM or
L2ARC, and I don't think it is suitable for your needs. You may wa
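Before ruling dedup in or out, its benefit can be estimated without enabling it: zdb can walk the pool and simulate the dedup table (pool name hypothetical):

  zdb -S tank    # prints a simulated DDT histogram and an estimated dedup ratio

A ratio close to 1.00x means dedup would buy almost nothing for the RAM it costs.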
Hi all,
That's what I have, so I'm probably on the right track :)
Basically I have a Sun X4240 with 2 Sun HBAs attached to 2 Sun J4400s,
each of them with 12 SATA 1TB disks.
The configuration is
- ZFS mirrored pool with 22x2 + 2 spares, with 1 disk on JBOD A attached
to HBA A and the other disk
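For readers, a trimmed sketch of creating such a layout, with hypothetical device names where c2* sits on JBOD A and c3* on JBOD B:

  zpool create vpool \
    mirror c2t0d0 c3t0d0 \
    mirror c2t1d0 c3t1d0 \
    spare c2t11d0 c3t11d0
  # ...continue the mirror pairs up to 22, one side per JBOD

Losing an entire JBOD then degrades every mirror but loses no data.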
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Robert Milkowski
> >
> I had a quick look at your results a moment ago.
> The problem is that you used a server with 4GB of RAM + a RAID card
> with 256MB of cache.
> Then your filesize for
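The practical fix is to size the benchmark's working set several times larger than RAM plus controller cache; a crude sketch (paths hypothetical, compression assumed off since /dev/zero compresses trivially):

  # 16 GiB test file, ~4x the 4GB of RAM in the server above
  dd if=/dev/zero of=/tank/bench/testfile bs=1M count=16384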
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of John Andrunas
>
> I know this is potentially a loaded question, but what is generally
> considered the optimal disk configuration for ZFS. I have 48 disks on
> 2 RAID controllers (2x24). The
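One common starting point for 48 disks is several modest raidz2 vdevs in a single pool; a sketch with hypothetical device names, two of eight 6-disk vdevs shown:

  zpool create tank \
    raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
    raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0
  # ...six more 6-disk raidz2 vdevs, alternating across the two controllers

More vdevs mean more random IOPS; wider vdevs mean more usable space.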
> I wanted to build a small back up (maybe also NAS) server using
A common question that I am trying to get answered (and have a few) here:
http://www.opensolaris.org/jive/thread.jspa?threadID=102368&tstart=0
Rob
On Fri, Jul 16, 2010 at 11:32 AM, Jordan McQuown wrote:
> I’m curious to know what other people are running for HDs in white box
> systems? I’m currently looking at Seagate Barracudas and Hitachi Deskstars.
> I’m looking at the 1TB models. These will be attached to an LSI expander in
> a SC847E2
Ok, so the bandwidth will be cut in half, and some people use this
configuration. But how bad is it to have the bandwidth cut in half? Will it
be hardly noticeable?
(Just ordinary home server, some media files, ebooks, etc)
> I'm building my new storage server, all the parts should come in this week.
> ...
Another answer is here:
http://eonstorage.blogspot.com/2010/03/whats-best-pool-to-build-with-3-or-4.html
Rob
> Hi guys, I am about to reshape my data pool and am wondering what
> performance difference I can expect from the new config vs. the old.
>
> The old config is a pool with a single vdev of 8 disks in raidz2.
> The new pool config is 2 vdevs of 7-disk raidz2 in a single pool.
>
> I understand it should
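Once the new pool exists, the difference is easy to see directly rather than estimate; with a hypothetical pool name:

  zpool iostat -v tank 5    # per-vdev ops and bandwidth, sampled every 5 seconds

With two raidz2 vdevs, writes stripe across both, so random IOPS should roughly double versus the single-vdev layout; sequential bandwidth gains will be smaller.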