On 7 January 2011 08:16, Rick Macklem wrote:
> When I said I recalled that they didn't do TCP because of excessive
> overhead, I forgot to mention that my recollection could be wrong.
> Also, I suspect you are correct w.r.t. the above statement. (ie. Sun's
> official position vs something I heard
Hi
On 7 January 2011 00:45, Daniel Kalchev wrote:
> For pure storage, that is, a place where you just send/store files, you don't
> really need the ZIL. You also only need the L2ARC if you read the same dataset
> over and over again and it is larger than the available ARC (the ZFS cache memory).
> Both will not
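(For anyone skimming the archives, a minimal sketch of how a separate log (ZIL)
and cache (L2ARC) device get attached and detached. The pool name "tank" and the
SSD partitions ada1p1/ada1p2 are made up, not the layout discussed here:)

# zpool add tank log ada1p1       (dedicated slog device)
# zpool add tank cache ada1p2     (L2ARC device)
# zpool remove tank ada1p1        (both can be removed again on v28)
# zpool remove tank ada1p2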
On 6 January 2011 22:26, Chris Forgeron wrote:
> You know, these days I'm not as happy with SSDs for ZIL. I may blog about
> some of the speed results I've been getting over the last 6-12 months that I've
> been running them with ZFS. I think people should be using hardware RAM
> drives. You can g
On 7 January 2011 12:42, Jeremy Chadwick wrote:
> DDRdrive:
> http://www.ddrdrive.com/
> http://www.engadget.com/2009/05/05/ddrdrives-ram-based-ssd-is-snappy-costly/
>
> ACard ANS-9010:
> http://techreport.com/articles.x/16255
>
> GC-RAMDISK (i-RAM) products:
> http://us.test.giga-byte.com/Pr
Hi.
I have a raidz2 pool on which I'm trying to replace two of the drives.
It is now showing:
[r...@server4 ~]# zpool import
pool: pool
id: 890764434375195435
state: DEGRADED
action: The pool can be imported despite missing or damaged devices. The
fault tolerance of the pool may
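(For context, replacing drives in a raidz2 is normally done one at a time; the
device names below are made up:)

# zpool replace pool ada2 ada6
# zpool status pool               (wait for the resilver to finish)
# zpool replace pool ada3 ada7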
Hi
Responding to myself here...
I ran zpool scrub on it.
It started to resilver; but then I got:
Problems with ZFS: pool: pool
state: DEGRADED
status: One or more devices has experienced an error resulting in data
corruption. Applications may be affected.
action: Restore the file in
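(The usual way to see exactly which files are affected, and to clear the errors
once they have been restored, is roughly the following, assuming the pool can
be imported:)

# zpool status -v pool            (lists the damaged files)
# zpool clear pool                (after restoring them from backup)
# zpool scrub pool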
Hi
On 9 January 2011 16:08, Adam Vande More wrote:
> On Sat, Jan 8, 2011 at 10:49 PM, Jean-Yves Avenard
> wrote:
>>
>> Any pointers on what I should do now?
>>
>> All my data seems fine :(
>
> The web page mentioned in the error message contains further info
Hi
On 9 January 2011 19:44, Matthew Seaman wrote:
> Not without backing up your current data, destroying the existing
> zpool(s) and rebuilding from scratch.
>
> Note: raidz2 on 4 disks doesn't really win you anything over 2 x mirror
> pairs of disks, and the RAID10 mirror is going to be rather m
On 9 January 2011 21:03, Matthew Seaman wrote:
>
> So you sacrifice performance 100% of the time based on the very unlikely
> possibility of drives 1+2 or 3+4 failing simultaneously, compared to the
> similarly unlikely possibility of drives 1+3 or 1+4 or 2+3 or 2+4
But this is not what you firs
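(For reference, the two layouts being compared would be created roughly like
this, using hypothetical disks ada2-ada5:)

# zpool create tank raidz2 ada2 ada3 ada4 ada5
  (two disks worth of capacity; survives any two failures)
# zpool create tank mirror ada2 ada3 mirror ada4 ada5
  (same capacity, better random I/O; lost if both halves of one mirror fail)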
On Friday, 14 January 2011, Pete French wrote:
> I build code using static linking for deployment across a set of
> machines. For me this has a lot of advantages - I know that the
> code will run, no matter what the state of the ports is on the
> machine, and if there is a need to upgrade a librar
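(A trivial sketch of what that looks like with the base toolchain: just the
flag, plus a quick check that the result really is static:)

# cc -static -o myprog myprog.c
# ldd myprog
  (should report that it is not a dynamic executable)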
Hi
On 15 January 2011 23:48, Jilles Tjoelker wrote:
>
> The approach has been used by Debian for some time.
>
> Links:
> http://chris.dzombak.name/blog/2010/03/building-openssl-with-symbol-versioning/
> http://chris.dzombak.name/files/openssl/openssl-0.9.8l-symbolVersioning.diff
> http://rt.open
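(A minimal sketch of the technique those links describe, with made-up library
and symbol names: a GNU ld version script plus the matching link flag:)

# cat libfoo.map
FOO_1.0 {
    global:
        foo_init; foo_read;
    local:
        *;
};
# cc -shared -fPIC -Wl,--version-script=libfoo.map -o libfoo.so.1 foo.c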
Hi
On 16 January 2011 02:17, Jean-Yves Avenard wrote:
> Hi
>
> On 15 January 2011 23:48, Jilles Tjoelker wrote:
>
>>
>> The approach has been used by Debian for some time.
>>
>> Links:
>> http://chris.dzombak.name/blog/2010/03/buildi
On 7 February 2011 20:03, Jeremy Chadwick wrote:
> They're discussed practically on a monthly basis on the mailing lists
> (either freebsd-fs or freebsd-stable). Keeping track of them is almost
> impossible at this point, which is also probably why the Wiki is
> outdated.
I like Sun's take on t
Hi there.
I used stable-8-zfsv28-20101223-nopython.patch.xz from
http://people.freebsd.org/~mm/patches/zfs/v28/
simply because it was the most recent at this location.
Is this the one to use?
Just asking because the file server I installed it on has stopped
responding this morning and doing a rem
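(For anyone wanting to try the same patch, roughly the steps involved, assuming
it is rooted at /usr/src and applies with -p0 like mm@'s earlier patches; adjust
the patch level if needed, then rebuild kernel and world as usual:)

# cd /usr/src
# fetch http://people.freebsd.org/~mm/patches/zfs/v28/stable-8-zfsv28-20101223-nopython.patch.xz
# xz -d stable-8-zfsv28-20101223-nopython.patch.xz
# patch -E -p0 < stable-8-zfsv28-20101223-nopython.patch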
On 27 December 2010 09:55, Jean-Yves Avenard wrote:
> Hi there.
>
> I used stable-8-zfsv28-20101223-nopython.patch.xz from
> http://people.freebsd.org/~mm/patches/zfs/v28/
I did the following:
# zpool status
pool: pool
state: ONLINE
scan: none requested
config:
Rebooting in single-user mode.
zpool status pool
or zpool scrub pool
hangs just the same ... and there's no disk activity either ...
Will download a liveCD of OpenIndiana, hopefully it will show me what's wrong :(
Jean-Yves
Tried to force a zpool import and
got a kernel panic:
panic: solaris assert: weight >= space && weight <= 2 * space, file:
/usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/metaslab.c,
line: 793
cpuid = 5
KDB: stack backtrace
#0: 0xff805f64be at kdb_backtrace
#1 .. pa
Responding to myself again :P
On 27 December 2010 13:28, Jean-Yves Avenard wrote:
> tried to force a zpool import
>
> got a kernel panic:
> panic: solaris assert: weight >= space && weight <= 2 * space, file:
> /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris
Hi
On 27 December 2010 16:04, jhell wrote:
>
> Before anything else can you: (in FreeBSD)
>
> 1) Set vfs.zfs.recover=1 at the loader prompt (OK set vfs.zfs.recover=1)
> 2) Boot into single user mode without opensolaris.ko and zfs.ko loaded
> 3) ( mount -w / ) to make sure you can remove and also
Hi
On Tuesday, 28 December 2010, Freddie Cash wrote:
> On Sun, Dec 26, 2010 at 4:43 PM, Jean-Yves Avenard
> wrote:
>> On 27 December 2010 09:55, Jean-Yves Avenard wrote:
>>> Hi there.
>>>
>>> I used stable-8-zfsv28-20101223-nopython.patch.xz from
>&g
Well
Today I added the log device:
zpool add pool log /dev/ada1s1 (an 8GB slice on an Intel X25 SSD),
then added the cache (a 32GB slice):
zpool add pool cache /dev/ada1s2
So far so good.
zpool status -> all good.
Reboot: it hangs.
Booted in single-user mode, zpool status:
ZFS filesystem version 5
ZF
Hi
On 27 December 2010 16:04, jhell wrote:
> 1) Set vfs.zfs.recover=1 at the loader prompt (OK set vfs.zfs.recover=1)
> 2) Boot into single user mode without opensolaris.ko and zfs.ko loaded
> 3) ( mount -w / ) to make sure you can remove and also write new
> zpool.cache as needed.
> 3) Remove /
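(Spelled out as commands, just a sketch, assuming / is UFS and zpool.cache is
in its default location /boot/zfs/zpool.cache:)

OK unset zfs_load                 (at the loader prompt, so the modules stay unloaded)
OK set vfs.zfs.recover=1
OK boot -s
# mount -w /
# rm /boot/zfs/zpool.cache
# kldload zfs                     (pulls in opensolaris.ko as a dependency)
# zpool import -f pool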
On 28 December 2010 08:56, Freddie Cash wrote:
> Is that a typo, or the actual command you used? You have an extra "s"
> in there. Should be "log" and not "logs". However, I don't think
> that command is correct either.
>
> I believe you want to use the "detach" command, not "remove".
>
> # zp
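(Freddie's example got cut off above. For what it's worth, my understanding on
v28 is that "remove" is the verb for standalone log and cache vdevs, while
"detach" only applies to one side of a mirrored log:)

# zpool remove pool ada1s1        (standalone log device)
# zpool remove pool ada1s2        (cache device)
# zpool detach pool ada1s1        (only if the log were a mirror)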
On 29 December 2010 03:15, Jean-Yves Avenard wrote:
> # zpool import
> load: 0.00 cmd: zpool 405 [spa_namespace_lock] 15.11r 0.00u 0.03s 0% 2556k
> load: 0.00 cmd: zpool 405 [spa_namespace_lock] 15.94r 0.00u 0.03s 0% 2556k
> load: 0.00 cmd: zpool 405 [spa_namespace_lock] 16.57r 0
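(Those "load:" lines are just the Ctrl+T / SIGINFO output. To see where the
process is actually stuck inside the kernel, something like the following from
another terminal should help, assuming the zpool pid really is 405 as shown:)

# procstat -kk 405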
Hi
On Wednesday, 29 December 2010, Martin Matuska wrote:
> Please don't consider these patches as production-ready.
> What we want to do is find and resolve as many bugs as possible.
>
> To help us fix these bugs, a way to reproduce the bug from a clean start
> (e.g. in virtualbox) would be great
On Wednesday, 29 December 2010, jhell wrote:
>
> Another note too: I think I read that you mentioned using the L2ARC and
> slog device on the same disk. You simply shouldn't do this; it could
> be contributing to the real cause, and there is absolutely no gain in
> either sanity or performance a
On Thursday, 30 December 2010, Matthew Seaman
wrote:
> No -- the on-disk format is different. ZFS will run fine with the older
> on-disk formats, but you won't get the full benefits without updating
> them. You'll need to run both 'zpool update -a' and 'zfs update -a' --
I believe it's "upgrad
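That is, something like:

# zpool upgrade -a                (upgrades every pool to the newest on-disk version)
# zfs upgrade -a                  (upgrades every file system)

(and if you boot from ZFS, the boot code on the disks should be reinstalled
afterwards as well)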
On 2 January 2011 02:11, Damien Fleuriot wrote:
> I remember getting rather average performance on v14 but Jean-Yves
> reported good performance boosts from upgrading to v15.
That was v28 :)
I saw no major difference between v14 and v15.
JY
Hi
On 4 January 2011 10:50, Rick Macklem wrote:
> If the above 2 lines are in your /etc/exports file and "/" is a ufs
> file system, then the above should work. For a zfs "/" you must either:
> - export / as well as /data
> or
> - use "v4: /data" so that the nfsv4 root is at /data
>
> Also, make
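(A minimal /etc/exports sketch for the second option; the network numbers are
hypothetical, and the ordinary export line is still required because the V4:
line only sets the NFSv4 root:)

V4: /data
/data -maproot=root -network 192.168.183.0 -mask 255.255.255.0

(then restart mountd and nfsd, and mount from the client with
"mount_nfs -o nfsv4 server:/ /mnt", since client paths are relative to the
V4: root)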
Hi
On 5 January 2011 12:09, Rick Macklem wrote:
> You can also do the following:
> For /etc/exports
> V4: /
> /usr/home -maproot=root -network 192.168.183.0 -mask 255.255.255.0
>
> Then mount:
> # mount_nfs -o nfsv4 192.168.183.131:/usr/home /marek_nfs4/
> (But only if the file system for "/" is
activity LED will be lit for about 10s and after that goes
off.
The longer this goes on, the more I'm thinking of reverting to i386; the
amd64 kernel just keeps crashing about once a fortnight.
Jean-Yves
---
Jean-Yves Avenard
Hydrix Pty Ltd - Embedding the net
www.hydrix.com | fax +61 3 9572 2686 | pho
t have physical
access to the server until Monday and I don't want to take the risk of
starting a kernel that will hang.
rely on nextboot my friends!)
When I got there I saw that it went a little bit further, so it seems
that it doesn't just hang... it is just very very slow
Jean-Yves
bsd?? -mjm
I didn't notice much speed increase when I moved from i386 to amd64 so
it's not going to be too much of an issue to revert back (except
the Subversion server: for some reason it's much faster on amd64)
Jean-Yves
On 04/03/2005, at 7:50 AM, Scott Long wrote:
Jean-Yves Avenard wrote:
Well,