Eric Andersen wrote:
I find Erik Trimble's statement regarding a 1 TB limit on drives to be very
bold. I don't have the knowledge or the inclination to argue the
point, but I am betting that we will continue to see advances in storage
technology on par with what we have seen in t
On Apr 8, 2010, at 9:06 PM, Daniel Carosone wrote:
> On Thu, Apr 08, 2010 at 08:36:43PM -0700, Richard Elling wrote:
>> On Apr 8, 2010, at 6:19 PM, Daniel Carosone wrote:
>>>
>>> As for error rates, this is something zfs should not be afraid
>>> of. Indeed, many of us would be happy to get drives
I thought I might chime in with my thoughts and experiences. For starters, I
am very new to both OpenSolaris and ZFS, so take anything I say with a grain of
salt. I have a home media server / backup server very similar to what the OP
is looking for. I am currently using 4 x 1TB and 4 x 2TB dr
On Thu, Apr 08, 2010 at 08:36:43PM -0700, Richard Elling wrote:
> On Apr 8, 2010, at 6:19 PM, Daniel Carosone wrote:
> >
> > As for error rates, this is something zfs should not be afraid
> > of. Indeed, many of us would be happy to get drives with less internal
> > ECC overhead and complexity for
On Apr 8, 2010, at 6:19 PM, Daniel Carosone wrote:
>
> As for error rates, this is something zfs should not be afraid
> of. Indeed, many of us would be happy to get drives with less internal
> ECC overhead and complexity for greater capacity, and tolerate the
> resultant higher error rates, specif
On Thu, 8 Apr 2010, Jason S wrote:
One thing I have noticed that seems a little different from my
previous hardware RAID controller (Areca) is that the data is not
constantly being written to the spindles. For example, I am copying
some large files to the array right now (approx 4 GB a file) and
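One way to watch that batching from the command line (a minimal sketch; the pool name tank is a placeholder) is to poll the pool's I/O statistics and look for the periodic write bursts:

zpool iostat tank 5    # prints bandwidth per 5-second interval; ZFS writes arrive in txg-sized bursts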
Well, I would like to thank everyone for their comments and ideas.
I finally have this machine up and running with Nexenta Community Edition and
am really liking the GUI for administering it. It suits my needs perfectly and
is running very well. I ended up going with 2 x 7 RAID-Z2 vdevs in one poo
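For anyone curious, a layout like that can be built in a single command (a sketch only; the pool and disk names below are placeholders):

# one pool made of two 7-disk raidz2 top-level vdevs
zpool create tank \
    raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 \
    raidz2 c1t7d0 c1t8d0 c1t9d0 c1t10d0 c1t11d0 c1t12d0 c1t13d0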
On Thu, Apr 08, 2010 at 03:48:54PM -0700, Erik Trimble wrote:
> Well
To be clear, I don't disagree with you; in fact for a specific part of
the market (at least) and a large part of your commentary, I agree. I
just think you're overstating the case for the rest.
> The problem is (and this i
On 04/ 9/10 10:48 AM, Erik Trimble wrote:
Well
The problem is (and this isn't just a ZFS issue) that resilver and scrub
times /are/ very bad for >1TB disks. This goes directly to the problem
of redundancy - if you don't really care about resilver/scrub issues,
then you really shouldn't bothe
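For anyone who wants to measure this on their own pool (a minimal sketch; tank is a placeholder name), a scrub plus the status output shows how long a full pass takes:

zpool scrub tank     # start a full scrub of the pool
zpool status tank    # shows scrub/resilver progress and, once complete, the elapsed time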
> Do the following ZFS stats look ok?
>
>> ::memstat
> Page Summary                Pages                MB  %Tot
> Kernel                     106619               832   28%
> ZFS File Data               79817               623   21%
> Anon                        28553               223    7%
> Exec and libs                3055                23    1%
> Page cache                  18024               140    5%
> Free (cachelist)             2880                22    1%
> Fre
Do the following ZFS stats look ok?
> ::memstat
Page Summary                Pages                MB  %Tot
Kernel                     106619               832   28%
ZFS File Data               79817               623   21%
Anon                        28553               223    7%
Exec and libs                3055                23    1%
Page cache                  18024               140    5%
Free (cachelist)             2880                22    1%
Free (freelist)            146309               114
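For reference, that summary is the output of the mdb ::memstat dcmd run against the live kernel, e.g.:

echo ::memstat | mdb -k    # requires root; summarizes physical memory usage by consumer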
On 8 apr 2010, at 23.21, Miles Nordin wrote:
>> "rs" == Ragnar Sundblad writes:
>
> rs> use IPSEC to make IP address spoofing harder.
>
> IPsec with channel binding is win, but not until SA's are offloaded to
> the NIC and all NIC's can do IPsec AES at line rate. Until this
> happens y
On Apr 8, 2010, at 3:23 PM, Tomas Ögren wrote:
> On 08 April, 2010 - Abdullah Al-Dahlawi sent me these 12K bytes:
>
>> Hi Richard
>>
>> Thanks for your comments. OK, ZFS is COW, I understand, but this also means
>> a waste of valuable space on my L2ARC SSD device; more than 60% of the space
>> is
mingli writes:
> Thanks, Erik, and I will try it, but the new question is that the root
> of the NFS server is mapped as nobody at the NFS client.
>
> For this issue, I set up a new test NFS server and NFS client, and
> with the same options, in this test environment, the file owner is
> mapped correctly,
On 08 April, 2010 - Abdullah Al-Dahlawi sent me these 12K bytes:
> Hi Richard
>
> Thanks for your comments. OK, ZFS is COW, I understand, but this also means
> a waste of valuable space on my L2ARC SSD device; more than 60% of the space
> is consumed by COW! I do not get it.
The rest can an
On Fri, 2010-04-09 at 08:07 +1000, Daniel Carosone wrote:
> On Thu, Apr 08, 2010 at 12:14:55AM -0700, Erik Trimble wrote:
> > Daniel Carosone wrote:
> >> Go with the 2x7 raidz2. When you start to really run out of space,
> >> replace the drives with bigger ones.
> >
> > While that's great in theor
On Thu, Apr 08, 2010 at 12:14:55AM -0700, Erik Trimble wrote:
> Daniel Carosone wrote:
>> Go with the 2x7 raidz2. When you start to really run out of space,
>> replace the drives with bigger ones.
>
> While that's great in theory, there's getting to be a consensus that 1TB
> 7200RPM 3.5" Sata dr
Hi Richard
Thanks for your comments. OK, ZFS is COW, I understand, but this also means
a waste of valuable space on my L2ARC SSD device; more than 60% of the space
is consumed by COW! I do not get it.
On Sat, Apr 3, 2010 at 11:35 PM, Richard Elling wrote:
> On Apr 1, 2010, at 9:41 PM, Abdul
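One way to see how much of the cache device the ARC is actually tracking (a rough sketch, assuming the standard arcstats kstats) is to compare the L2ARC counters against the raw device usage:

kstat -p zfs:0:arcstats:l2_size       # bytes of cached data the ARC references on the L2ARC
kstat -p zfs:0:arcstats:l2_hdr_size   # ARC memory spent on L2ARC headers
zpool iostat -v                       # the cache device line shows raw allocated space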
> "rs" == Ragnar Sundblad writes:
rs> use IPSEC to make IP address spoofing harder.
IPsec with channel binding is win, but not until SA's are offloaded to
the NIC and all NIC's can do IPsec AES at line rate. Until this
happens you need to accept there will be some protocols used on SAN
2, or /usr/pkg-gcc and
/usr/pkg-spro. pkgsrc will also build pkg_add, pkg_info, etc. under
/usr/pkg-gcc/bin which will point to /var/db/pkg-gcc or whatever to
track what's installed, so you can have more than one pkg_add on a
single system pointing to different sets of directories. You
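If I remember the bootstrap options correctly (a sketch only, not verified against a current pkgsrc tree), the parallel trees come from something like:

cd pkgsrc/bootstrap
./bootstrap --prefix=/usr/pkg-gcc --pkgdbdir=/var/db/pkg-gcc   # repeat with the -spro paths for the Studio build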
Ray,
Here is my short list of Performance Metrics I track on 7410 Performance
Rigs via 7000 Analytics.
Cheers,
Joel.
m:analytics datasets> ls
Datasets:
DATASET      STATE    INCORE  ONDISK  NAME
dataset-000  active    1016K   75.9M  arc.accesses[hit/miss]
dataset-001  active     390K   37.9M  arc.l2_acc
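On systems without the 7000-series Analytics, roughly comparable counters can be scraped from the standard ARC kstats for trending (a minimal sketch; statistic names taken from the usual arcstats set):

kstat -p zfs:0:arcstats:hits zfs:0:arcstats:misses
kstat -p zfs:0:arcstats:l2_hits zfs:0:arcstats:l2_misses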
We're starting to grow our ZFS environment and really need to start
standardizing our monitoring procedures.
OS tools are great for spot troubleshooting and sar can be used for
some trending, but we'd really like to tie this into an SNMP-based
system that can generate graphs for us (via RRD or oth
On 12 mar 2010, at 03.58, Damon Atkins wrote:
...
> Unfortunately DNS spoofing exists, which means forward lookups can be poisoned.
And IP address spoofing, and...
> The best (maybe only) way to make NFS secure is NFSv4 and Kerb5 used together.
Amen!
DNS is NOT an authentication system!
IP is NO
On Apr 8, 2010, at 8:52 AM, Bob Friesenhahn wrote:
> On Thu, 8 Apr 2010, Erik Trimble wrote:
>> While that's great in theory, there's getting to be a consensus that 1TB
>> 7200RPM 3.5" Sata drives are really going to be the last usable capacity.
I doubt that 1TB (or even 1.5TB) 3.5" disks are be
On Thu, 8 Apr 2010, Erik Trimble wrote:
While that's great in theory, there's getting to be a consensus that 1TB
7200RPM 3.5" Sata drives are really going to be the last usable capacity.
Agreed. The 2.5" form factor is rapidly emerging. I see that
enterprise 6-Gb/s SAS drives are available w
On 08 April, 2010 - Cindy Swearingen sent me these 2.6K bytes:
> Hi Daniel,
>
> D'oh...
>
> I found a related bug when I looked at this yesterday but I didn't think
> it was your problem because you didn't get a busy message.
>
> See this RFE:
>
> http://bugs.opensolaris.org/bugdatabase/view_bug.d
Hi Daniel,
D'oh...
I found a related bug when I looked at this yesterday but I didn't think
it was your problem because you didn't get a busy message.
See this RFE:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6700597
Cindy
On 04/07/10 17:59, Daniel Bakken wrote:
We have found
If you're getting nobody:nobody on an NFS mount, you have an NFS version
mismatch (usually between v3 and v4). To get around this, use the following
mount options on the client:
hard,bg,intr,vers=3
e.g:
mount -o hard,bg,intr,vers=3 server:/pool/zfs /mountpoint
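If changing every client is impractical, the same effect can probably be had on an OpenSolaris server by capping the served NFS version (a sketch, assuming the sharectl NFS properties):

sharectl get nfs                      # show the current NFS version range
sharectl set -p server_versmax=3 nfs  # serve NFSv3 at most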
Daniel Carosone wrote:
Go with the 2x7 raidz2. When you start to really run out of space,
replace the drives with bigger ones. You will run out of space
eventually regardless; this way you can replace 7 at a time, not 14 at
a time. With luck, each replacement will last you long enough that
th
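The mechanics of the swap are simple enough (a minimal sketch; pool and device names are placeholders), and with the autoexpand property set the pool grows once the last disk in a vdev has been replaced:

zpool set autoexpand=on tank
zpool replace tank c1t0d0      # repeat for each disk in the vdev
zpool status tank              # confirm the resilver completed before swapping the next disk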