Re: [zfs-discuss] ZFS: unreliable for professional usage?

2009-02-18 Thread Tomasz Torcz
On Fri, Feb 13, 2009 at 9:47 PM, Richard Elling
 wrote:
> It has been my experience that USB sticks use FAT, which is an ancient
> file system that contains few of the features you expect from modern
> file systems. As such, it really doesn't do any write caching. Hence, it
> seems to work ok for casual users. I note that neither NTFS, ZFS, reiserfs,
> nor many of the other high-performance file systems are used by default
> for USB devices. Could it be that anyone not using FAT for USB devices
> is straining against architectural limits?

  There are no architectural limits. USB sticks can be used with whatever
you throw at them. On sticks I use to interchange data with Windows machines
I have NTFS; on others, different filesystems: ZFS, ext4, btrfs, often
encrypted at the block level.
   USB sticks are generally very simple -- no discard commands or other
fancy features -- but overall they are block devices just like disks,
arrays, SSDs...
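
To make the "just a block device" point concrete, here is a minimal
sketch (my own illustration, not from the original thread): whatever
filesystem or encryption layer you put on a stick, underneath it is the
same raw sector interface. The device path is an assumption --
substitute your own (e.g. /dev/rdsk/c5t0d0p0 on Solaris, /dev/sdb on
Linux):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    unsigned char sector[512];
    int fd = open("/dev/sdb", O_RDONLY);   /* hypothetical USB stick */

    if (fd < 0) { perror("open"); return 1; }
    if (read(fd, sector, sizeof sector) != (ssize_t)sizeof sector) {
        perror("read");
        close(fd);
        return 1;
    }
    /* Dump the first bytes: with FAT you'd typically see a boot
     * sector/BPB here; with ZFS, ext4, btrfs or a LUKS header,
     * something else entirely. */
    for (int i = 0; i < 16; i++)
        printf("%02x ", sector[i]);
    putchar('\n');
    close(fd);
    return 0;
}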

-- 
Tomasz Torcz
xmpp: zdzich...@chrome.pl
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Sun Flash Modules

2009-04-18 Thread Tomasz Torcz
On Sat, Apr 18, 2009 at 10:38 PM, Andrew Gabriel
 wrote:
> Bob Friesenhahn wrote:
>>
>> On Sat, 18 Apr 2009, Eric D. Mudama wrote:
>>>
>>> What is tall about the SATA stack?  There's not THAT much overhead in
>>> SATA, and there's no reason you would need to support any legacy
>>> transfer modes or commands you weren't interested in.
>>
>> If SATA is much more than a memcpy() then it is excessive overhead for a
>> memory-oriented device.  In fact, since the "device" is actually comprised
>> of quite a few independent memory modules, it should be possible to schedule
>> I/O for each independent memory module in parallel.  A large storage system
>> will be comprised of tens, hundreds or even thousands of independent memory
>> modules so it does not make sense to serialize access via legacy protocols.
>>  The larger the storage device, the more it suffers from a serial protocol.
>
> It's a mistake to think that flash looks similar to RAM. It doesn't in lots
> of ways -- actually it looks more similar to a hard disk in many respects;-)

 That's true, but flash isn't a hard disk either. Flash is flash, and I
believe the poster meant exposing it directly for the OS to consume. That
way the OS can grow a generic Flash Translation Layer for wear levelling
and block remapping, and the filesystem could use flash features directly.
For example, TRIM commands could then be implemented in this FTL layer
instead of being hidden in proprietary firmware. The less magic and
black-box firmware, and the more open source code, the better.
 If I am not clear, here is a longer article on this topic:
http://lwn.net/Articles/276025/
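
To sketch what I mean (a toy illustration of my own, not code from any
real FTL or from the LWN article -- all names below are made up): at its
core a translation layer is just a logical-to-physical page map plus
bookkeeping for stale pages and wear, exactly the kind of logic that
could live in open kernel code rather than behind a vendor firmware blob.

#include <stdint.h>
#include <string.h>

#define PAGES    1024
#define UNMAPPED UINT32_MAX

struct ftl {
    uint32_t l2p[PAGES];     /* logical -> physical page map */
    uint8_t  stale[PAGES];   /* physical pages awaiting erase */
    uint32_t erases[PAGES];  /* wear counters (bumped by GC, omitted) */
    uint32_t next_free;      /* naive free-page cursor */
};

static void ftl_init(struct ftl *f)
{
    memset(f, 0, sizeof *f);
    for (int i = 0; i < PAGES; i++)
        f->l2p[i] = UNMAPPED;
}

/* Remap a logical write: pick a fresh physical page, retire the old one.
 * A real FTL would choose the target page by wear, not round-robin. */
static uint32_t ftl_write(struct ftl *f, uint32_t logical)
{
    uint32_t phys = f->next_free++ % PAGES;
    if (f->l2p[logical] != UNMAPPED)
        f->stale[f->l2p[logical]] = 1;   /* old copy becomes garbage */
    f->l2p[logical] = phys;
    return phys;
}

/* TRIM/discard: the filesystem tells the FTL a block is dead, so its
 * physical page can simply be erased instead of copied during GC. */
static void ftl_trim(struct ftl *f, uint32_t logical)
{
    if (f->l2p[logical] != UNMAPPED) {
        f->stale[f->l2p[logical]] = 1;
        f->l2p[logical] = UNMAPPED;
    }
}

int main(void)
{
    struct ftl f;
    ftl_init(&f);
    uint32_t p1 = ftl_write(&f, 7);   /* first write of logical page 7 */
    uint32_t p2 = ftl_write(&f, 7);   /* rewrite lands on a new page */
    ftl_trim(&f, 7);                  /* filesystem discards page 7 */
    return p1 == p2;                  /* 0 = remapped, as expected */
}

The point of ftl_trim() above is the one I made: when the filesystem can
talk to an open FTL directly, a discard becomes a cheap map update
instead of a guess inside a black box.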


-- 
Tomasz Torcz
xmpp: zdzich...@chrome.pl
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] snv63: kernel panic on import

2007-05-15 Thread Tomasz Torcz

Hi,

I have a problem with a ZFS filesystem on an array. The pool was
created by Solaris 10 U2. Some glitches with the array made
Solaris panic on boot. I've installed snv_63 (as snv_60 contains some
important fixes); the system boots, but the kernel panics when
I try to import the pool. This is with zfs_recover=1.
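
(For completeness: zfs_recover is a kernel tunable, so I enabled it the
usual way via /etc/system -- roughly:

  set zfs:zfs_recover = 1

followed by a reboot.)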

Configuration is as follows (on snv63):
# zpool import
  pool: macierz
    id: 15960555323673164597
 state: ONLINE
status: The pool is formatted using an older on-disk version.
action: The pool can be imported using its name or numeric identifier, though
        some features will not be available without an explicit 'zpool upgrade'.
config:

        macierz     ONLINE
          c2t0d0    ONLINE
          c2t1d0    ONLINE
          c2t2d0    ONLINE
          c2t3d0    ONLINE

Those are 1.8 TB logical volumes exported by the array.

And the panic:
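
(The output below is mdb run against the saved crash dump -- opened with
something like: # mdb unix.0 vmcore.0)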

Loading modules: [ unix genunix specfs dtrace cpu.AuthenticAMD.15 uppc
pcplusmp scsi_vhci ufs ip hook neti sctp arp usba fctl lofs zfs random
md cpc crypto fcip fcp logindmux ptm ipc ]

::status

debugging crash dump vmcore.0 (64-bit) from boraks
operating system: 5.11 snv_63 (i86pc)
panic message:
ZFS: bad checksum (read on  off 0: zio fffed258b880 [L0
SPA space map] 1000L/600P DVA[0]=<1:fe78108600:600> DVA[
1]=<2:166f85c200:600> fletcher4 lzjb LE contiguous birth=2484644 fill=1 ck
dump content: kernel pages only

*panic_thread::findstack -v

stack pointer for thread ff00101d5c80: ff00101d58f0
ff00101d59e0 panic+0x9c()
ff00101d5a40 zio_done+0x17c(fffed258b880)
ff00101d5a60 zio_next_stage+0xb3(fffed258b880)
ff00101d5ab0 zio_wait_for_children+0x5d(fffed258b880, 11,
fffed258bad8)
ff00101d5ad0 zio_wait_children_done+0x20(fffed258b880)
ff00101d5af0 zio_next_stage+0xb3(fffed258b880)
ff00101d5b40 zio_vdev_io_assess+0x129(fffed258b880)
ff00101d5b60 zio_next_stage+0xb3(fffed258b880)
ff00101d5bb0 vdev_mirror_io_done+0x29d(fffed258b880)
ff00101d5bd0 zio_vdev_io_done+0x26(fffed258b880)
ff00101d5c60 taskq_thread+0x1a7(fffec27490f0)
ff00101d5c70 thread_start+8()

I've uploaded crash dump here:
http://www.crocom.com.pl/~tomek/boraks-zpool-import-crash.584MB.tar.bz2

The archive is 55 MB; it unpacks to almost 600 MB.
I'd be happy to provide additional details -- this is my
first serious issue with ZFS.
And yes, I know I should have backups.
--
Tomasz Torcz
[EMAIL PROTECTED]
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: Re: Need guidance on RAID 5, ZFS, and RAIDZ on home file server

2007-05-25 Thread Tomasz Torcz

On 5/24/07, Tom Buskey <[EMAIL PROTECTED]> wrote:

>  Linux and Windows
> as well as the BSDs) are all relative newcomers to
> the 64-bit arena.

> The 2nd non-x86 port of Linux was to the Alpha in 1999 (98?) by Linus no less.


In 1994, to be precise. In 1999, Linux 2.2 was released, which added
support for a few more 64-bit platforms.

--
Tomasz Torcz
[EMAIL PROTECTED]
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs corruption w/ sil3114 sata controllers

2007-10-30 Thread Tomasz Torcz
On 10/30/07, Neal Pollack <[EMAIL PROTECTED]> wrote:
> > I'm experiencing major checksum errors when using a Syba Silicon Image 3114
> > based PCI SATA controller w/ non-RAID firmware.  I've tested by copying data
> > via sftp and smb.  With everything I've swapped out, I can't fathom this
> > being a hardware problem.
> Even before ZFS, I've had numerous situations where various si3112 and
> 3114 chips
> would corrupt data on UFS and PCFS, with very simple  copy and checksum
> test scripts, doing large bulk transfers.

  Those SiI chips are really broken when used with certain Seagate drives.
But I have had data corrupted by them with a WD drive as well.
Linux can work around this bug by reducing transfer sizes (and thus
dramatically impacting speed); Solaris probably doesn't have a workaround.
With this quirk enabled (on Linux), I get at most 20 MB/s from the drives,
but ZFS does not report any corruption. Before, I had corruption hourly.

More info about the SiI issue: http://home-tj.org/wiki/index.php/Sil_m15w
I have an SiI 3112, but despite SiI's claims, other chips seem to be
affected as well.
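
The shape of the Linux workaround, as I understand it, is a drive
blacklist plus a clamp on sectors per request. A rough sketch of the
idea (my own illustration, not the actual drivers/ata/sata_sil.c code;
the model strings and names are placeholders):

#include <stdio.h>
#include <string.h>

#define QUIRK_MOD15WRITE 0x01

struct quirk_entry {
    const char *model_prefix;   /* hypothetical matching rule */
    unsigned    flags;
};

/* Hypothetical blacklist; the real kernel keeps a table of exact
 * model strings of drives known to be affected. */
static const struct quirk_entry blacklist[] = {
    { "ST3", QUIRK_MOD15WRITE },   /* placeholder Seagate prefix */
    { NULL, 0 }
};

/* Returns the max sectors per request for a given drive model. */
static unsigned max_sectors_for(const char *model)
{
    for (const struct quirk_entry *q = blacklist; q->model_prefix; q++)
        if (strncmp(model, q->model_prefix, strlen(q->model_prefix)) == 0
            && (q->flags & QUIRK_MOD15WRITE))
            return 15;   /* clamp: avoids the corrupting transfer sizes */
    return 65535;        /* otherwise allow large (LBA48-class) transfers */
}

int main(void)
{
    printf("ST3120026AS -> max %u sectors per request\n",
           max_sectors_for("ST3120026AS"));
    printf("WDC WD2500  -> max %u sectors per request\n",
           max_sectors_for("WDC WD2500"));
    return 0;
}

With 512-byte sectors, a 15-sector cap is about 7.5 KB per request,
which is why throughput collapses to the ~20 MB/s I see.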


-- 
Tomasz Torcz
[EMAIL PROTECTED]
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Backport of vfs_zfsacl.c to samba 3.0.26a

2007-11-03 Thread Tomasz Torcz
On 11/2/07, Carson Gaspar <[EMAIL PROTECTED]> wrote:
> As 3.2.0 isn't released yet, and I didn't want to wait, I've backported
> vfs_zfsacl.c from SAMBA_3_2.

 What about licenses? (L)GPLv2/v3 compatibility?

--
Tomasz Torcz
[EMAIL PROTECTED]
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss