[zfs-discuss] crash during snapshot operations

2007-03-23 Thread Łukasz
When I try to do, in the kernel in a zfs ioctl: 1. destroy snapshot PREVIOUS 2. rename snapshot LATEST->PREVIOUS 3. create snapshot LATEST, the code is: /* delete previous snapshot */ zfs_unmount_snap(snap_previous, NULL); dmu_objset_destroy(snap_previous
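For reference, a minimal userland sketch of the same rotation (the ioctl above performs the in-kernel equivalent); the pool and filesystem names are hypothetical:

    # rotate snapshots: drop PREVIOUS, demote LATEST, take a fresh LATEST
    zfs destroy  tank/fs@PREVIOUS
    zfs rename   tank/fs@LATEST tank/fs@PREVIOUS
    zfs snapshot tank/fs@LATEST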

[zfs-discuss] Re: asize is 300MB smaller than lsize - why?

2007-03-23 Thread Łukasz
> How it got that way, I couldn't really say without looking at your code. It works like this: in the new ioctl operation zfs_ioc_replicate_send(zfs_cmd_t *zc) we open the filesystem (not the snapshot): dmu_objset_open(zc->zc_name, DMU_OST_ANY, DS_MODE_STANDARD | DS_MODE_READON

[zfs-discuss] Re: crash during snapshot operations

2007-03-23 Thread Łukasz
Thanks for the advice. I removed my buffers snap_previous and snap_latest and it helped. I'm now using zc->value as the buffer.

[zfs-discuss] ZFS filesystem online backup question

2007-03-27 Thread Łukasz
I have to back up many filesystems, which keep changing while the machines are heavily loaded. The idea is to back up online - this should avoid I/O read operations from disk, since the data should come from the cache. Right now I'm using a script that takes a snapshot and runs zfs send (a minimal sketch follows below). I want to automate this operation and add new op
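A minimal sketch of such a script, assuming a timestamped snapshot name and a hypothetical dataset tank/data; zfs send then reads the just-written blocks from the ARC rather than from disk:

    #!/bin/ksh
    # online backup: snapshot, then stream the snapshot with zfs send
    FS=tank/data                        # hypothetical filesystem
    SNAP=backup-$(date '+%Y%m%d%H%M')   # timestamped snapshot name
    zfs snapshot "$FS@$SNAP"
    zfs send "$FS@$SNAP" > "/backup/${SNAP}.zsend"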

[zfs-discuss] Re: Re: asize is 300MB smaller than lsize - why?

2007-03-27 Thread Łukasz
I have another question about replication in this thread: http://www.opensolaris.org/jive/thread.jspa?threadID=27082&tstart=0

[zfs-discuss] Re: ZFS filesystem online backup question

2007-03-27 Thread Łukasz
>Out of curiosity, what is the timing difference between a userland script >and performing the operations in the kernel? [EMAIL PROTECTED] ~]# time zfs destroy solaris/[EMAIL PROTECTED] ; time zfs rename solaris/[EMAIL PROTECTED] solaris/[EMAIL PROTECTED]; time zfs snapshot solaris/[EMAIL PROTEC

[zfs-discuss] ZFS performance and memory consumption

2007-07-05 Thread Łukasz
Hello, I'm investigating a problem with ZFS over NFS. The problems started about 2 weeks ago; most NFS threads are hanging in txg_wait_open. The sync thread is consuming one processor all the time, and the average spa_sync time from entry to return is 2 minutes. I can't use dtrace to examine the prob

Re: [zfs-discuss] ZFS performance and memory consumption

2007-07-06 Thread Łukasz
The field ms_smo.smo_objsize in the metaslab struct is the size of the space map data on disk. I checked the size of the metaslabs in memory: ::walk spa | ::walk metaslab | ::print struct metaslab ms_map.sm_root.avl_numnodes - I got 1GB. But only some metaslabs are loaded: ::walk spa | ::walk metaslab | ::print struct metaslab
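A sketch of running those dcmd pipelines non-interactively from a root shell; the sm_loaded field name in the second pipeline is my assumption for the part the archive cut off:

    # count in-memory space-map segments across all metaslabs
    echo '::walk spa | ::walk metaslab | ::print struct metaslab ms_map.sm_root.avl_numnodes' | mdb -k

    # check which metaslabs have their space maps loaded (field name assumed)
    echo '::walk spa | ::walk metaslab | ::print struct metaslab ms_map.sm_loaded' | mdb -k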

Re: [zfs-discuss] ZFS performance and memory consumption

2007-07-06 Thread Łukasz
After a few hours with dtrace and source code browsing I found that in my space map there are no 128K blocks left. Try this on your ZFS: dtrace -n fbt::metaslab_group_alloc:return'/arg1 == -1/{}' If you get probes firing, then you have the same problem. Allocating from the space map works like th

Re: [zfs-discuss] ZFS performance and memory consumption

2007-07-06 Thread Łukasz
If you want to know which block sizes you can no longer allocate: dtrace -n fbt::metaslab_group_alloc:entry'{ self->s = arg1; }' -n fbt::metaslab_group_alloc:return'/arg1 != -1/{ self->s = 0 }' -n fbt::metaslab_group_alloc:return'/self->s && (arg1 == -1)/{ @s = quantize(self->s); self->s = 0; }' -n tick-10s'{
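The archive truncates the final clause; a completed version, assuming the tick-10s probe simply prints the aggregation:

    dtrace -n 'fbt::metaslab_group_alloc:entry { self->s = arg1; }' \
           -n 'fbt::metaslab_group_alloc:return /arg1 != -1/ { self->s = 0; }' \
           -n 'fbt::metaslab_group_alloc:return /self->s && arg1 == -1/
               { @s = quantize(self->s); self->s = 0; }' \
           -n 'tick-10s { printa(@s); }'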

Re: [zfs-discuss] ZFS performance and memory consumption

2007-07-07 Thread Łukasz
> When tuning recordsize for things like databases, we try to recommend that the customer's recordsize match the I/O size of the database record. On this filesystem I have: - file links, which are rather static - small files (about 8kB) that keep changing - big files (1MB - 20 MB
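For the small-file part of a mix like this, matching the recordsize to the 8kB write size would look as follows; the dataset name is hypothetical, and recordsize only affects files written after the change:

    zfs set recordsize=8k tank/smallfiles
    zfs get recordsize tank/smallfiles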

[zfs-discuss] ZFS pool fragmentation

2007-07-10 Thread Łukasz
I have a huge problem with ZFS pool fragmentation. I started investigating the problem about 2 weeks ago: http://www.opensolaris.org/jive/thread.jspa?threadID=34423&tstart=0 I found a workaround for now - changing the recordsize - but I want a better solution. The best solution would be a defragmenter to

[zfs-discuss] ZFS send needs optimization

2007-07-23 Thread Łukasz
ZFS send is very slow. The dmu_sendbackup function traverses the dataset in one thread, and in the traverse callback function (backup_cb) we wait for data in arc_read called with the ARC_WAIT flag. I want to parallelize zfs send to make it faster. dmu_sendbackup could allocate a buffer that will be u

Re: [zfs-discuss] ZFS send needs optimization

2007-07-24 Thread Łukasz
> > Ł> I want to parallelize zfs send to make it faster. > > Ł> dmu_sendbackup could allocate a buffer that will be used for buffering output. > > Ł> A few threads can traverse the dataset; a few threads would be used for async read operations. > > Ł> I think it could speed up the zfs send operation 1

Re: [zfs-discuss] Snapshots impact on performance

2007-07-27 Thread Łukasz
> Same problem here (snv_60). > Robert, did you find any solutions? > > gino Check this: http://www.opensolaris.org/jive/thread.jspa?threadID=34423&tstart=0 Check the spa_sync function time - remember to change POOL_NAME! dtrace -q -n fbt::spa_sync:entry'/(char *)(((spa_t*)arg0)->spa_name) == "POOL_
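The one-liner is cut off in the archive; a completed sketch that times each spa_sync pass for one pool. The pool name "tank" is hypothetical, and stringof() is used for the string comparison:

    dtrace -q -n '
    fbt::spa_sync:entry
    /stringof(((spa_t *)arg0)->spa_name) == "tank"/
    { self->ts = timestamp; }

    fbt::spa_sync:return
    /self->ts/
    { printf("spa_sync took %d ms\n", (timestamp - self->ts) / 1000000);
      self->ts = 0; }'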

Re: [zfs-discuss] ZFS+NFS on storedge 6120 (sun t4)

2007-08-06 Thread Łukasz
I think you have a problem with pool fragmentation. We had the same problem and changing the recordsize helped. You have to set a smaller recordsize for the pool (all filesystems must have the same or a smaller recordsize). First check if you have problems with finding free blocks with this dtrace script: #

[zfs-discuss] ZFS Space Map optimization

2007-09-14 Thread Łukasz
I have a huge problem with space maps on a thumper. The space maps take over 3GB, and write operations generate massive read operations: before every spa sync phase, ZFS reads the space maps from disk. I decided to turn on compression for the pool (only for the pool, not the filesystems) and it helped. Now the space maps, intent log, and spa history are compressed.

Re: [zfs-discuss] ZFS Space Map optimization

2007-09-24 Thread Łukasz
> On Sep 14, 2007, at 8:16 AM, Łukasz wrote: > > I have a huge problem with space maps on a thumper. Space maps take over 3GB and write operations generate massive read operations. Before every spa sync phase zfs reads space maps

[zfs-discuss] ZFS data recovery

2008-03-14 Thread Łukasz
I have a problem with zpool import after having problems with 2 disks in RAID 5 (hardware RAID). There are some bad blocks on those disks. #zpool import .. state: FAULTED status: The pool metadata is corrupted. .. #zdb -l /dev/rdsk/c4t600C0FF009258F4855B59001d0s0 is OK. I managed t

Re: [zfs-discuss] ZFS data recovery

2008-03-19 Thread Łukasz
I managed to recover my data after 3 days of fighting. A few system changes: - disable the ZIL - enable read-only mode - disable zil_replay during mount - change the function that chooses the uberblock. On snv_78: #mdb -kw > zil_disable/W 1 zil_disable: 0 = 0x1 > spa_mode/W 1 spa_mode:
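A hedged sketch of the first two mdb writes described above (snv_78); the annotations are my reading of the steps listed, and spa_mode = 1 forcing read-only pool opens is an assumption, not a verified recovery procedure:

    # attach to the live kernel with writes enabled
    mdb -kw
    > zil_disable/W 1      (step 1: disable the ZIL)
    > spa_mode/W 1         (step 2: assumed to force read-only pool opens)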

Re: [zfs-discuss] Metadata corrupted

2008-04-30 Thread Łukasz
Did you see http://www.opensolaris.org/jive/thread.jspa?messageID=220125 I managed to recover my lost data with simple mdb commands. --Lukas

Re: [zfs-discuss] ZFS data recovery

2008-04-30 Thread Łukasz
> Hi There, > > Is there any chance you could go into a little more detail, perhaps even document the procedure, for the benefit of others experiencing a similar problem? I have some spare time this weekend and will try to give more details.

[zfs-discuss] Re: Kernel panic at zpool import

2008-08-11 Thread Łukasz K
On 7-08-2008 at 13:20, Borys Saulyak wrote: > Hi, > > I have a problem with Solaris 10. I know that this forum is for > OpenSolaris but maybe someone will have an idea. > My box is crashing on any attempt to import a zfs pool. The first crash > happened on an export operation and since then I can

[zfs-discuss] Re: Snapshots impact on performance

2007-07-27 Thread Łukasz K
On 26-07-2007 at 13:31, Robert Milkowski wrote: > Hello Victor, > > Wednesday, June 27, 2007, 1:19:44 PM, you wrote: > > VL> Gino wrote: > >> Same problem here (snv_60). > >> Robert, did you find any solutions? > > VL> A couple of weeks ago I put together an implementation of space maps

[zfs-discuss] Re: Is ZFS efficient for large collections of small files?

2007-08-21 Thread Łukasz K
> Is ZFS efficient at handling huge populations of tiny-to-small files - > for example, 20 million TIFF images in a collection, each between 5 > and 500k in size? > > I am asking because I could have sworn that I read somewhere that it > isn't, but I can't find the reference. It depends on what typ

[zfs-discuss] Re: zfs destroy takes long time

2007-08-24 Thread Łukasz K
On 23-08-2007 at 22:15, Igor Brezac wrote: > We are on Solaris 10 U3 with relatively recent recommended patches > applied. zfs destroy of a filesystem takes a very long time; 20GB of usage > and about 5 million objects takes about 10 minutes to destroy. The zfs pool > is a 2-drive stripe, not

Re: [zfs-discuss] Snapshots impact on performance

2007-08-24 Thread Łukasz K
then I'll be able to > provide you with my changes in some form. Hope this will happen next week. > > Cheers, > Victor > > Łukasz K wrote: > > On 26-07-2007 at 13:31, Robert Milkowski wrote: > >> Hello Victor, > >> > >> Wednesday, Ju

Re: [zfs-discuss] ZFS Space Map optimization

2007-10-11 Thread Łukasz K
> > Now space maps, intent log, spa history are compressed. > > All normal metadata (including space maps and spa history) is always > compressed. The intent log is never compressed. Can you tell me where the space map is compressed? The buffer is filled with: 468 *entry++ = SM_

[zfs-discuss] Re: Slow file system access on zfs

2007-11-07 Thread Łukasz K
Hi, I think your problem is filesystem fragmentation. When the available space drops below 40%, ZFS might have problems finding free blocks. Use this script to check it: #!/usr/sbin/dtrace -s fbt::space_map_alloc:entry { self->s = arg1; } fbt::space_map_alloc:return /arg1 != -1/ { self-
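The script is truncated by the archive; a likely completion, mirroring the metaslab_group_alloc version posted earlier in this digest (the aggregation name and the 10-second print interval are assumptions):

    #!/usr/sbin/dtrace -s
    /* quantize the sizes of allocations the space map failed to satisfy */
    fbt::space_map_alloc:entry  { self->s = arg1; }
    fbt::space_map_alloc:return /arg1 != -1/ { self->s = 0; }
    fbt::space_map_alloc:return /self->s && arg1 == -1/
    { @failed = quantize(self->s); self->s = 0; }
    tick-10s { printa(@failed); }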

Re: [zfs-discuss] Slow file system access on zfs

2007-11-08 Thread Łukasz K
There are problems with the zfs sync phase. Run #dtrace -n fbt::txg_wait_open:entry'{ stack(); ustack(); }' and wait 10 minutes. Also give more information about the pool: #zfs get all filer - I assume 'filer' is your pool name. Regards, Lukas On 11/7/07, Łukasz K <[EMAIL PROTECTED]> wrote: Hi,

[zfs-discuss] Backup/replication system

2008-01-10 Thread Łukasz K
Hi, I'm using ZFS on a few X4500s and I need to back them up. The data on the source pool keeps changing, so online replication would be the best solution. As far as I know, AVS doesn't support ZFS - there is a problem with mounting the backup pool. Other backup systems (disk-to-disk or block-to-block)
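A minimal sketch of snapshot-based replication with zfs send/recv, the approach discussed throughout this digest; host and dataset names are hypothetical:

    # full initial replication to the backup host
    zfs snapshot tank/data@backup1
    zfs send tank/data@backup1 | ssh backuphost zfs recv -F backup/data

    # later, send only the blocks changed since the previous snapshot
    zfs snapshot tank/data@backup2
    zfs send -i tank/data@backup1 tank/data@backup2 | ssh backuphost zfs recv backup/data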

Re: [zfs-discuss] Backup/replication system

2008-01-10 Thread Łukasz K
On 10-01-2008 at 16:11, Jim Dunham wrote: > Łukasz K wrote: > > Hi, > > I'm using ZFS on a few X4500s and I need to back them up. > > The data on the source pool keeps changing, so online replication > > would be the best solution.

Re: [zfs-discuss] Backup/replication system

2008-01-10 Thread Łukasz K
On 10-01-2008 at 17:45, eric kustarz wrote: > On Jan 10, 2008, at 4:50 AM, Łukasz K wrote: > > Hi, > > I'm using ZFS on a few X4500s and I need to back them up. > > The data on the source pool keeps changing, so online replication > > would be the b