Thanks. I guess I am in an 'if it ain't broken, don't fix it' frame of mind for my NFS setup.
Thanks,
Greg
On Jul 20, 2010, at 6:14 PM, Gregory Gee wrote:
> To further this question, I have been searching for a while and can't find
> any reference to the differences and relative benefits of zfs sharenfs versus
> a plain nfs share. Currently I believe I am using standard NFS.
>
> share -F nfs -o anon=0,sec=sys,rw=x
To further this question, I have been searching for a while and can't find any
reference to the differences and relative benefits of zfs sharenfs versus a
plain nfs share. Currently I believe I am using standard NFS.
share -F nfs -o anon=0,sec=sys,rw=xenserver0:xenserver1 /files/VM
ad...@nas:/files$ zfs list
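For comparison, the zfs sharenfs route attaches the same share options to the
dataset itself, so they follow the pool across export/import and reboots instead
of living in /etc/dfs/dfstab. A minimal sketch, assuming the dataset backing
/files/VM is called files/VM (the dataset name is a guess, not from the post):

  zfs set sharenfs='anon=0,sec=sys,rw=xenserver0:xenserver1' files/VM
  zfs get sharenfs files/VM    # verify the option string took effect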
Last week my FreeNAS server began to beep constantly, so I rebooted it through
the web GUI. When the machine finished booting I logged back in to the web GUI
and noted that my zpool (RAIDZ) was faulted. Most of the data on this pool
is replaceable, but I had some pictures on this pool that were n
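For a faulted pool, the usual first step is to capture its status before trying
anything else (standard ZFS commands; pool names vary):

  zpool status -xv    # show only unhealthy pools, with per-device error detail
  zpool import        # if the pool shows as unavailable, list what is importable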
> No, the pool tank consists of 7 physical drives (5 Seagate and 2 Western
> Digital). See the output below.
I think you are looking at the disk label name, and this is confusing you. I had
a similar thing happen where the label name from a 64GB SSD got written onto a
1TB HD.
That output in format
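To cross-check what the pool records against the physical disks and their
labels, the standard commands are (pool name taken from the quote above):

  zpool status tank    # device names exactly as the pool references them
  echo | format        # print the disk list (and current labels) non-interactively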
Could it somehow not be compiling 64-bit support?
--
Brent Jones
I thought about that, but it says when it boots up that it is 64-bit, and I'm
able to run 64-bit binaries. I wonder if it's compiling for the wrong processor
optimization, though? Maybe if it is missing some of the newer
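One quick sanity check on the running system (standard Solaris commands, not
something from the original post):

  isainfo -kv    # reports the kernel ISA, e.g. "64-bit amd64 kernel modules"
  isainfo -x     # lists the instruction-set extensions the CPU advertises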
On 07/21/10 03:12 AM, Richard Jahnel wrote:
On the receiver
/opt/csw/bin/mbuffer -m 1G -I Ostor-1:8000 | zfs recv -F e...@sunday
in @ 0.0 kB/s, out @ 0.0 kB/s, 43.7 GB total, buffer 100% full
cannot receive new filesystem stream: invalid backup stream
mbuffer: error: outputThread: error writin
Your config makes me think this is an atypical ZFS configuration. As a
result, I'm not as concerned. But I think the multithread/concurrency
may be the biggest concern here. Perhaps the compilers are doing
something different that causes significant cache issues. (Perhaps the
compilers themsel
On 07/20/10 14:10, Marcelo H Majczak wrote:
It also seems to be issuing a lot more
writes to rpool, though I can't tell what. In my case it causes a
lot of read contention, since my rpool is a USB flash device with no
cache. iostat says something like up to 10w/20r per second. Up to 137
the perfo
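To see the per-device rates and service times behind that contention, the usual
invocation is (interval is arbitrary):

  iostat -xn 5    # extended per-device stats, with device names, every 5 seconds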
If I can help narrow the variables: I compiled both 137 and 144 (137 is the
minimum required to build 144) using the same recommended compiler and lint,
nightly options, etc. 137 works fine but 144 suffers the slowness reported.
System-wise, I'm using only the 32-bit non-debug version on an "old" single-co
So the next question is, let's figure out what richlowe did
differently. ;-)
- Garrett
So I've tried both the ASUS U3S6, and the Koutech IO-PESA-A230R, recommended by
the helpful blog:
http://blog.zorinaq.com/?e=10
In BOTH cases, the SSD appears in the card's BIOS screen at bootup, so that the
card sees it and recognizes it properly.
I'm running EON 0.60 (SNV130), and once I log
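The message is cut off above, but if the symptom is that the SSD never shows up
inside EON even though the card's BIOS sees it, the usual first checks would be
something like this (standard commands; the symptom itself is my guess, not
stated in the post):

  devfsadm -Cv    # rebuild /dev links and report what was added or removed
  echo | format   # see whether the OS enumerates the disk at all
  cfgadm -al      # check controller and attachment-point status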
On Tue, Jul 20, 2010 at 10:45:58AM -0700, Brent Jones wrote:
> On Tue, Jul 20, 2010 at 10:29 AM, Chad Cantwell wrote:
> > No, this wasn't it. A non-debug build with the same NIGHTLY_OPTIONS
> > as Rich Lowe's 142 build is still very slow...
> >
> > On Tue, Jul 20, 2010 at 09:52:10AM -0700, Chad C
On Tue, 20 Jul 2010, Roy Sigurd Karlsbakk wrote:
Mostly, yes. Traditional RAID-5 is likely to be faster than ZFS
because of ZFS doing checksumming, having the ZIL, etc., but then,
traditional RAID-5 won't have the safety offered by ZFS.
The biggest difference is almost surely that ZFS will always
const
On Mon, 19 Jul 2010, Haudy Kazemi wrote:
Yup, but that's *per release*. Solaris (for instance) has binary
compatibility and library compatibility all the way back to Solaris 2.0
in 1991. AIX and HPUX are similar. *Very* few things ever break between
releases on professional UNIX systems. Thos
On Tue, Jul 20, 2010 at 10:29 AM, Chad Cantwell wrote:
> No, this wasn't it. A non-debug build with the same NIGHTLY_OPTIONS
> as Rich Lowe's 142 build is still very slow...
>
> On Tue, Jul 20, 2010 at 09:52:10AM -0700, Chad Cantwell wrote:
>> Yes, I think this might have been it. I missed the N
No, this wasn't it. A non-debug build with the same NIGHTLY_OPTIONS
as Rich Lowe's 142 build is still very slow...
On Tue, Jul 20, 2010 at 09:52:10AM -0700, Chad Cantwell wrote:
> Yes, I think this might have been it. I missed the NIGHTLY_OPTIONS variable
> in
> opensolaris and I think it was c
On Mon, Jul 19, 2010 at 9:40 PM, devsk wrote:
>> On Sat, Jun 26, 2010 at 12:20 AM, Ben Miles
>> wrote:
>> > What supporting applications are there on Ubuntu
>> for RAIDZ?
>>
>> None. Ubuntu doesn't officially support ZFS.
>>
>> You can kind of make it work using the ZFS-FUSE
>> project. But it'
Yes, I think this might have been it. I missed the NIGHTLY_OPTIONS variable in
opensolaris and I think it was compiling a debug build. I'm not sure what the
ramifications are of this or how much slower a debug build should be, but I'm
recompiling a release build now so hopefully all will be well.
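For reference, a hedged sketch of the relevant line in the nightly env file: per
nightly(1ONBLD), the D flag requests DEBUG bits and F suppresses the non-DEBUG
build, so a release-only build omits both. The exact flag set below is
illustrative, not taken from the post:

  export NIGHTLY_OPTIONS="-nCAlmprt"    # no D/F: build only non-DEBUG (release) bits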
I'll try an export/import and scrub of the receiving pool and see what that
does. I can't take the sending pool offline to try that stuff though.
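For the record, that sequence on the receiving pool would look like (pool name
is illustrative):

  zpool export tank
  zpool import tank
  zpool scrub tank
  zpool status tank    # watch scrub progress and any errors it turns up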
On the receiver
/opt/csw/bin/mbuffer -m 1G -I Ostor-1:8000 | zfs recv -F e...@sunday
in @ 0.0 kB/s, out @ 0.0 kB/s, 43.7 GB total, buffer 100% full
cannot receive new filesystem stream: invalid backup stream
mbuffer: error: outputThread: error writing to at offset 0xaedf6a000:
Broken pipe
sum
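For completeness, the sending side of such a pipeline typically looks like the
sketch below; the dataset/snapshot name is an assumption, since only the
receiver command appears in the post:

  zfs send tank/fs@sunday | /opt/csw/bin/mbuffer -m 1G -O Ostor-1:8000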
On Jul 19, 2010, at 5:26 PM, Gregory Gee wrote:
> I am using OpenSolaris to host VM images over NFS for XenServer. I'm looking
> for tips on what parameters can be set to help optimize my ZFS pool that
> holds my VM images. I am using XenServer which is running the VMs from an
> NFS storage o
On Jul 20, 2010, at 3:46 AM, Roy Sigurd Karlsbakk wrote:
> - Original Message -
>> Hi,
>> for zfs raidz1, I know that for random I/O the IOPS of a raidz1 vdev equal
>> one physical disk's IOPS, since raidz1 is like RAID-5. So does RAID-5 have
>> the same performance as raidz1? i.e. random IOPS equal to on
On Jul 20, 2010, at 3:09 AM, v wrote:
> Hi,
> A basic question regarding how zil works:
The seminal blog on how the ZIL works is
http://blogs.sun.com/perrin/entry/the_lumberjack
> For asynchronous write, will zil be used?
No.
> For synchronous write, and if io is small, will the whole io be p
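Related commands for seeing how a pool and dataset are set up for synchronous
writes (standard ZFS; pool/dataset names are illustrative):

  zpool status tank          # a "logs" section lists any dedicated ZIL (slog) device
  zfs get logbias tank/fs    # latency (default, uses the ZIL/slog) vs. throughput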
Michael Shadle wrote:
>Actually I guess my real question is why iostat hasn't logged any
> errors in its counters even though the device has been bad in there
> for months?
One of my arrays had a drive in slot 4 fault -- lots of reset-something-or-other
errors. I cleared the errors and the po
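For per-device error counters, iostat's -E output is the usual place to look
(a generic example, not output from the poster's array):

  iostat -En    # per device: Soft/Hard/Transport error counts, vendor, product, serial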
Well, this really is a production server with 300 users and 12 VMs running on it,
so I definitely won't play with the firmware :)
I can easily identify which drive is which by physically looking at it.
It's just sad to realize that I cannot trust Solaris anymore.
I never noticed this problem before be
Thanks Haudy, I really appreciate your help.
This is a Supermicro server.
I really don't remember the controller model; I set it up about 3 years ago. I
just remember that I needed to reflash the controller firmware to make it work
in JBOD mode.
I ran the script you suggested:
But it looks like it's still
Hello.
I have two Solaris 10 servers (release 10/09). The first one is a Sun M4000
Server with SPARC technology. The other one is a Sun Fire X4170 with x86 Intel
architecture. Both servers are attached via SAN to the same EMC Storage system.
The disks from the M4000 Server are cloned every nigh
- Original Message -
> On Jul 20, 2010, at 6:12 AM, v wrote:
>
> > Hi,
> > for zfs raidz1, I know that for random I/O the IOPS of a raidz1 vdev equal
> > one physical disk's IOPS, since raidz1 is like RAID-5. So does RAID-5 have
> > the same performance as raidz1? i.e. random IOPS equal to one physical
On Jul 20, 2010, at 6:12 AM, v wrote:
> Hi,
> for zfs raidz1, I know that for random I/O the IOPS of a raidz1 vdev equal one
> physical disk's IOPS, since raidz1 is like RAID-5. So does RAID-5 have the same
> performance as raidz1? i.e. random IOPS equal to one physical disk's IOPS.
On reads, no, any part of
> I'm surprised you're even getting 400MB/s on the "fast"
> configurations, with only 16 drives in a Raidz3 configuration.
> To me, 16 drives in Raidz3 (single Vdev) would do about 150MB/sec, as
> your "slow" speeds suggest.
That'll be for random i/o. His i/o here is sequential, so the i/o is spre
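To see how the load is actually spread across the vdev during a run like this,
a live per-vdev view helps (pool name and interval are illustrative):

  zpool iostat -v tank 5    # per-vdev and per-disk operations and bandwidth, every 5 seconds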
On 20/07/2010 11:46, Roy Sigurd Karlsbakk wrote:
- Original Message -
Hi,
for zfs raidz1, I know that for random I/O the IOPS of a raidz1 vdev equal
one physical disk's IOPS, since raidz1 is like RAID-5. So does RAID-5 have
the same performance as raidz1? i.e. random IOPS equal to one physical
disk's i
- Original Message -
> Hi,
> for zfs raidz1, I know that for random I/O the IOPS of a raidz1 vdev equal
> one physical disk's IOPS, since raidz1 is like RAID-5. So does RAID-5 have
> the same performance as raidz1? i.e. random IOPS equal to one physical
> disk's IOPS.
Mostly, yes. Traditional RAID-5 is li
Hi,
for zfs raidz1, I know that for random I/O the IOPS of a raidz1 vdev equal one
physical disk's IOPS, since raidz1 is like RAID-5. So does RAID-5 have the same
performance as raidz1? i.e. random IOPS equal to one physical disk's IOPS.
Regards
Victor
Hi,
A basic question regarding how the ZIL works:
For an asynchronous write, will the ZIL be used?
For a synchronous write, if the I/O is small, will the whole I/O be placed on
the ZIL, or just a pointer saved into the ZIL? What about large I/Os?
Regards
Victor
On Tue, Jul 20, 2010 at 12:59 AM, Edward Ned Harvey wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Richard Jahnel
>>
>> I've also tried mbuffer, but I get broken pipe errors part way through
>> the transfer.
>
> The standard answer
On 20/07/2010 04:41, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Richard L. Hamilton
I would imagine that if it's read-mostly, it's a win, but
otherwise it costs more than it saves. Even more conventional
compress
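For reference, compression is cheap to test on a dataset, and ZFS reports what
it actually buys you (dataset name is illustrative):

  zfs set compression=on tank/fs
  zfs get compression,compressratio tank/fs    # compressratio shows the achieved ratio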
On 20/07/2010 07:59, Chad Cantwell wrote:
I've just compiled and booted into snv_142, and I experienced the same slow dd
and scrubbing as I did with my 142 and 143 compilations and with the Nexenta 3
RC2 CD.
So, this would seem to indicate a build environment/process flaw rather than a
regress
On Mon, Jul 19, 2010 at 07:01:54PM -0700, Chad Cantwell wrote:
> On Tue, Jul 20, 2010 at 10:54:44AM +1000, James C. McPherson wrote:
> > On 20/07/10 10:40 AM, Chad Cantwell wrote:
> > >FYI, everyone, I have some more info here. In short, Rich Lowe's 142 works
> > >correctly (fast) on my hardware,