Brian Hechinger wrote:
> I think someone needs to port ZFS to NetBSD, they are in dire need of
> a better filesystem. ;)
Already being done:
http://code.google.com/soc/netbsd/about.html
Isn't the Google SoC great? :)
Ricardo Correia wrote:
> compression | time-real | time-user | time-sys | compressratio
> ------------+-----------+-----------+----------+--------------
> lzo         | 6m39.603s | 0m1.288s  | 0m6.055s | 2.99x
> gzip        | 7m46.875s | 0m1.275s  | 0m6.312s | 3.41x
> lzjb
Forwarding some simple benchmarks, just to pique your curiosity.
Very interesting results.
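For anyone who wants to reproduce this, the procedure is basically a timed copy into datasets that differ only in their compression setting. A minimal sketch, assuming a scratch pool and test file with made-up names (and note that compression=lzo only exists with the zfs-fuse LZO patch; stock ZFS accepts lzjb and gzip):

    # Names are hypothetical; compression=lzo needs the zfs-fuse LZO patch.
    for alg in lzjb gzip; do
        zfs create -o compression=$alg testpool/$alg
        time sh -c "cp /var/tmp/testfile /testpool/$alg/ && sync"  # sync, so async writes are counted
        zfs get compressratio testpool/$alg
    done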
Original Message
Subject: [zfs-fuse] probably some better numbers ;)
Date: Thu, 19 Apr 2007 22:07:31 +0200
From: roland <[EMAIL PROTECTED]>
Reply-To: [EMAIL PROTECTED]
To: <[EMAIL PROTECTED]>
eric kustarz wrote:
> Two reasons:
> 1) cluttered the output (as the path name is variable length). We
> could perhaps add another flag (-V or -vv or something) to display the
> ranges.
> 2) i wasn't convinced that output was useful, especially to most
> users/admins.
>
> If we did provide the ran
Why doesn't "zpool status -v" display the byte ranges of permanent
errors anymore, like it used to (before snv_57)?
I think it was a useful feature. For example, I have a pool with 17
permanent errors in 2 files with 700 MB each, but no ability to see
directly which one has the most errors or whic
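A crude way to recover that information by hand, while the ranges aren't printed, is to read each affected file and note where the reads fail (file name hypothetical):

    zpool status -v pool                       # lists the files with permanent errors
    dd if=/pool/fs/file1 of=/dev/null bs=128k
    # dd stops at the first unreadable block: the "N+M records in" count
    # times the block size gives the approximate byte offset, and adding
    # conv=noerror lets it read past the damage to find further bad ranges.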
Darren J Moffat wrote:
> Until someone tries though you really never know what needs to be done.
> I would highly recommend getting LZO up and running as a Solaris
> kernel module - it might not be trivial or you could be lucky and it
> will "just work" . Writing a library and writing a kernel modu
Darren J Moffat wrote:
>
> I'd also highly recommend checking that this actually works for ZFS in
> the kernel - which also means porting miniLZO to be a kernel module on
> Solaris.
I don't see why it shouldn't. From the LZO homepage:
# Decompression is simple and */very/* fast.
# Requires no memory for decompression.
Hi,
I don't know if this has been discussed before, but have you thought
about adding LZO compression to ZFS?
One zfs-fuse user has provided a patch which implements LZO compression,
and he claims better compression ratios *and* better speed than lzjb.
The miniLZO library is licensed under the GPL.
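If the patch were integrated, using it should presumably look like any other compression setting; a sketch (the lzo property value is hypothetical, it exists only in the zfs-fuse patch):

    zfs set compression=lzo pool/home            # value exists only with the patch
    zfs get compression,compressratio pool/home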
Erblichs wrote:
> So, if the license issues are removed, I am sure
> that ZFS could be ported over to Linux. It is just
> time and effort...
I believe you are right, there seems to be a lot of interest in porting
ZFS to the Linux kernel. The main problem is, no doubt, the license
rring to when he/she said it was going to be
fixed in the near future).
Background scrubbing and the ability to see a list of corrupted
files have already been working for a long time.
That FAQ entry might scare some people away from trying ZFS, so I think it
should be fixed :)
Regards,
Ricardo Correia
Hi Pawel,
Pawel Jakub Dawidek wrote:
> Other than that, ZFS should be fully-functional.
Congratulations, nice work! :)
I'm interested in the cross-platform portability of ZFS pools, so I have
one question: did you implement the Solaris ZFS whole-disk support
(specifically, the creation and recog
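For reference, the distinction I'm asking about is between these two forms (device names illustrative):

    zpool create tank c1t0d0      # whole disk: Solaris ZFS writes an EFI label
                                  # and manages the entire device
    zpool create tank c1t0d0s0    # single slice: the existing label is left alone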
Isn't it more likely that these are errors on data as well? I think zfs
retries read operations when there's a checksum failure, so maybe these
are transient hardware problems (faulty cables, high temperature..)?
This would explain the non-existence of unrecoverable errors.
Robert Milkowski wrote:
Hi Tor,
Tor wrote:
> Dang, I think I'm dead as far as Solaris goes. I checked the HCL and the Java
> compatibility check, and none of the two controllers I would need to use, one
> PCI IDE and one S-ATA on the KT-4 motherboard, will work with OpenSolaris.
> Annoying as heck, but it looks like I
On Friday 15 December 2006 20:02, Dave Burleson wrote:
> Does anyone have a document that describes ZFS in a pure
> SAN environment? What will and will not work?
>
> From some of the information I have been gathering
> it doesn't appear that ZFS was intended to operate
> in a SAN environment.
Th
On Friday 15 December 2006 21:54, Eric Schrock wrote:
> Ah, you're running into this bug:
>
> 650054 ZFS fails to see the disk if devid of the disk changes due to driver
> upgrade
You mean 6500545 ;)
>
> Basically, if we have the correct path but the wrong devid, we bail out
> of vdev_disk_open()
With the help of dtrace, I found out that in vdev_disk_open() (in
vdev_disk.c), the ddi_devid_compare() function was failing.
I don't know why the devid has changed, but simply doing zpool export ; zpool
import did the trick - the pool imported correctly and the contents seem to
be intact.
Ex
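For the record, the probe was nothing elaborate; something along these lines shows the return value (this sketch assumes the fbt provider can instrument the function):

    dtrace -n 'fbt::ddi_devid_compare:return { trace(arg1); }'
    # arg1 is the return value of ddi_devid_compare(); non-zero means
    # the devids did not match.
    zpool export pool && zpool import pool   # the workaround described above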
Not sure if this is helpful, but anyway:
[EMAIL PROTECTED]:~# zdb -bb pool
Traversing all blocks to verify nothing leaked ...
No leaks (block sum matches space maps exactly)
bp count: 1617816
bp logical: 91235889152 avg: 56394
bp physical: 8
This might help diagnosing the problem: zdb successfully traversed the pool.
Here's the output:
[EMAIL PROTECTED]:~# zdb -c pool
Traversing all blocks to verify checksums and verify nothing leaked ...
zdb_blkptr_cb: Got error 50 reading <5, 3539, 0, 12e7> -- skipping
Error counts:
err
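In case it helps to interpret that: the tuple in the error is a ZFS bookmark, <objset, object, level, blkid>, so <5, 3539, 0, 12e7> is a level-0 block of object 3539 in objset 5. Assuming objset 5 maps to one of the pool's datasets, zdb should be able to dump the damaged object (dataset name hypothetical):

    zdb -d pool                     # lists datasets with their objset IDs
    zdb -ddddd pool/somefs 3539     # dump object 3539 in the matching dataset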
On Friday 15 December 2006 16:27, Anton B. Rang wrote:
> This is $7FFF, which is MAXOFFSET_T, aka UNKNOWN_SIZE. Not sure
> why...a damaged label on this device?
'format' seems to show the partition table correctly:
[EMAIL PROTECTED]:~# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
On Friday 15 December 2006 15:28, Trevor Watson wrote:
> Does anyone have any ideas or suggestions as to how I might try to figure
> out what's wrong?
I have no idea, but I had the same thing happen to me yesterday (see
http://www.opensolaris.org/jive/thread.jspa?threadID=20294&tstart=0 ).
Wh
Hi,
I've been using a ZFS pool inside a VMware'd NexentaOS, on a single real disk
partition, for a few months in order to store some backups.
Today I noticed that there were some directories missing inside 2 separate
filesystems, which I found strange. I went to the backup logs (also stored
in
Wow, congratulations, nice work!
I'm the one porting ZFS to FUSE, and seeing you make progress this fast is
very encouraging :)
Hi,
How are the statistics in 'zpool iostat -v' computed? Is this an
x-minute-average? I noticed that if there's no I/O for a while, the numbers
keep decreasing and the zpool manpage doesn't say anything about this.
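For reference, the interval form samples over fixed windows instead of printing one cumulative figure, which is why I'd expect it to behave differently:

    zpool iostat -v pool 5          # one report every 5 seconds
    zpool iostat -v pool 5 10       # ten reports, then exit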
By the way, the manpage links in
http://www.opensolaris.org/os/community/zfs/d
On Tuesday 18 July 2006 01:06, Jason Hoffman wrote:
> 2) Filebench RAIDZ of 3x3 vs "RAID0" vs RAIDZ of 1x9 vs RAIDZ of 2x9
> a) Varmail (50:50 reads-writes):
> - 2473.0 ops/s (RAIDZ of 3x3)
> - 4316.8 ops/s (RAID0),
>
I'm still having problems.
The specific link that I'm looking at is
http://docs.sun.com/app/docs/doc/816-5175/6mbba7f4u?a=view
Which has, for example, a link to zoneadm(1M) with the address
http://docs.sun.com/app/docs/doc/REFMAN1M%20Version%205.0
Which gives the "Error: The requested document
There are still a few problems in docs.sun.com.
If you go to the zones(5) manpage and click on any link, it says "Error: The
requested document could not be found."
I think there was also a link in one of the zones commands or libc functions
that pointed to the wrong manpage, but I don't remember.
Hi,
On Tuesday 02 May 2006 22:41, Eric Schrock wrote:
> Folks -
>
> Several people have vocalized interest in porting ZFS to operating
> systems other than solaris. While our 'mentoring' bandwidth may be
> small, I am hoping to create a common forum where people could share
> their experiences an
http://wizy.org/zfs-on-fuse.pdf
Anyway, I'm not planning on being mentored too much, but it's possible I will
need some help understanding the ZFS code.
Comments? :)
On Monday 01 May 2006 05:58, Ricardo Correia wrote:
> Hi, I'm a computer engineering student from Portugal,