Morning!

> Personally I assume the peaks were triggered by resilvering the pool.
> It's not uncommon to have a high load if your pool is resilvering.
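(For anyone following along: whether a resilver is the cause of the load can be confirmed from the pool status. A minimal check, assuming a pool named `data` as in the history below, might be:)

```shell
# Show pool health, including any resilver in progress and its scan status
zpool status data

# Or report only pools with problems; prints "all pools are healthy" otherwise
zpool status -x
```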
Yes, the peaks were triggered by the resilver.

> Best practice in this case would have been creating a new zpool, e.g.
> raidz..
>
> zfs send from $oldpool zfs receive $newpool...

:-( I don't have enough connections to hook everything together at the same time.

Also, I performed the trick with memory sticks. Here is a demo video I made: http://www.youtube.com/watch?v=RmtPpnSrJ6w

Once all the 4 GB sticks were replaced with 8 GB sticks, simply exporting and then importing the pool resulted in the extra space becoming available. For some reason, it didn't want to work with the 2 TB drives.

> I have three Questions for you.
>
> First.. is this a production Box?

It is my home server, the one that keeps all my data: photography RAW files, music, programs, code, everything going back to when I started with computers in my teens.

> Second.. could you provide the output of zpool history?

mich@jaguar:/root# zpool history
History for 'backup':
2011-01-20.07:27:55 zpool create backup c3d0p0

History for 'data':
2010-01-14.20:50:32 zpool create data raidz c10t0d0p0 c11t0d0p0 c12t0d0p0
2010-01-14.20:54:22 zfs set mountpoint=/mirror data
2010-01-14.21:08:52 zfs set sharesmb=on data
2010-01-14.21:14:38 zfs set sharesmb=on data
2010-01-14.21:14:43 zfs set sharesmb=on data
2010-01-14.21:21:28 zfs set sharenfs=on data
2010-01-14.21:25:14 zfs set sharenfs=on data
2010-01-14.21:25:35 zfs set sharesmb=on data
2010-01-14.21:30:16 zfs set sharesmb=on data
2010-01-14.21:30:21 zfs set sharenfs=on data
2010-01-14.21:34:29 zfs set sharesmb=on data
2010-01-16.17:38:07 zpool import -f data
2010-01-17.17:13:56 zpool import -f data
2010-01-17.17:23:31 zpool export data
2010-01-17.17:32:34 zpool import data
2010-01-17.18:06:15 zpool export data
2010-01-17.18:07:18 zpool import data
2010-01-17.18:14:38 zpool export data
2010-01-17.18:14:55 zpool import data
2010-01-17.19:32:29 zpool export data
2010-01-17.20:33:34 zpool import data
2010-01-17.20:47:38 zpool export data
2010-01-17.20:56:05 zpool import data
2010-01-17.21:21:50 zpool export data
2010-01-17.21:35:24 zpool import data
2010-01-17.21:37:52 zfs set sharesmb=on data
2010-01-17.21:38:00 zfs set sharenfs=on data
2010-01-17.21:41:06 zfs set sharesmb=name=mirror data
2010-01-17.22:00:06 zpool export data
2010-01-17.22:02:44 zpool import data
2010-01-17.22:08:34 zpool export data
2010-01-17.22:31:27 zpool import data
2010-01-17.22:31:55 zpool export data
2010-01-18.07:39:30 zpool import data
2010-01-18.20:04:06 zfs set sharenfs=off data
2010-01-18.22:24:18 zpool export -f data
2010-01-18.22:26:17 zpool import data
2010-01-27.18:47:27 zpool export data
2010-01-27.20:36:11 zpool import data
2010-01-27.20:37:09 zpool export data
2010-01-31.09:26:24 zpool import data
2010-01-31.09:26:59 zpool export data
2010-02-03.19:19:57 zpool import data
2010-02-03.23:09:06 zpool export data
2010-02-03.23:13:25 zpool import data
2010-02-03.23:56:02 zpool export data
2010-02-16.19:41:11 zpool import data
2010-02-17.07:34:01 zpool upgrade data
2010-02-21.15:51:24 zpool export data
2010-02-21.18:35:57 zpool import data
2010-02-27.20:03:46 zpool scrub data
2010-03-05.04:00:04 zpool scrub data
2010-04-05.04:00:04 zpool scrub data
2010-05-05.04:00:04 zpool scrub data
2010-06-01.13:48:17 zpool clear data
2010-06-05.04:00:03 zpool scrub data
2010-07-05.04:00:03 zpool scrub data
2010-08-05.04:00:05 zpool scrub data
2010-08-05.22:26:45 zpool clear data
2010-08-19.15:01:05 zpool scrub data
2010-09-05.04:00:03 zpool scrub data
2010-09-15.19:44:14 zpool clear data
2010-09-15.22:41:29 zpool clear data
2010-10-05.04:00:04 zpool scrub data
2010-10-06.08:22:05 zpool clear data
2010-10-08.22:45:10 zpool clear data
2010-11-05.04:00:04 zpool scrub data
2010-12-05.04:00:04 zpool scrub data
2010-12-22.14:46:27 zpool scrub data
2010-12-23.08:08:27 zpool replace -f data c6t0d0p0 c6t2d0p0
2010-12-23.20:27:17 zpool scrub data
2010-12-23.20:55:59 zpool replace -f data c6t2d0p0 c6t0d0p0
2010-12-24.01:34:37 zpool clear data
2010-12-24.01:51:21 zpool offline data c6t0d0p0
2010-12-24.01:53:05 zpool online data c6t0d0p0
2010-12-24.01:54:59 zpool export data
2010-12-24.01:55:57 zpool import data
2010-12-24.07:22:34 zpool scrub -s data
2010-12-24.07:37:08 zpool detach data c6t0d0p0
2010-12-24.07:38:14 zpool scrub data
2010-12-24.07:38:43 zpool scrub -s data
2010-12-24.07:39:15 zpool scrub data
2010-12-24.12:08:01 zpool clear data
2010-12-24.12:24:49 zpool clear data
2010-12-24.12:27:10 zpool clear data
2010-12-24.12:30:33 zpool scrub -s data
2010-12-24.12:31:05 zpool clear data
2010-12-24.12:37:56 zpool replace data c6t0d0p0 c6t2d0p0
2010-12-25.07:02:28 zpool scrub data
2010-12-25.07:34:09 zpool scrub -s data
2010-12-25.07:34:18 zpool clear data
2010-12-25.10:55:36 zpool replace data c6t2d0p0 c4t1d0
2010-12-25.20:01:19 zpool clear data
2010-12-25.20:01:52 zpool scrub data
2010-12-25.20:10:32 zpool detach data c6t2d0p0
2010-12-26.05:55:48 zpool scrub data
2010-12-26.12:23:19 zpool scrub data
2010-12-31.09:54:06 zpool import -f data
2010-12-31.09:54:36 zpool upgrade data
2010-12-31.10:02:50 zfs set sharesmb=on data
2010-12-31.10:12:17 zfs set sharesmb=on data sharemgr=name=mirror
2010-12-31.10:13:03 zfs set sharesmb=name=mirror data
2010-12-31.10:18:06 zfs set readonly=off data
2010-12-31.13:19:46 zpool scrub data
2010-12-31.18:54:22 zpool replace data c2t5d0p0 c2t2d0
2011-01-01.07:48:06 zpool scrub data
2011-01-01.07:48:14 zpool clear data
2011-01-05.00:00:09 zpool scrub data
2011-01-13.19:14:42 zpool scrub data
2011-01-13.20:20:56 zpool scrub -s data
2011-01-16.06:35:43 zpool scrub data
2011-01-16.12:01:46 zpool export data
2011-01-16.12:02:29 zpool import data
2011-01-16.12:02:33 zpool export data
2011-01-16.12:02:46 zpool import data
2011-01-16.12:03:29 zpool replace data c3d0p0 c2t4d0
2011-01-16.20:09:31 zpool clear data
2011-01-17.05:47:31 zpool scrub data
2011-01-17.16:57:14 zpool scrub -s data
2011-01-17.16:57:35 zpool export data
2011-01-17.17:01:43 zpool import data
2011-01-17.17:02:11 zpool replace data c4d0 c2t3d0
2011-01-18.07:38:31 zpool replace data c2t4d0 c3d0
2011-01-18.16:13:42 zpool clear data
2011-01-18.23:24:34 zpool clear data
2011-01-19.05:52:49 zpool clear data
2011-01-19.07:50:22 zpool clear data
2011-01-19.09:35:56 zpool clear data
2011-01-19.11:02:32 zpool scrub data
2011-01-19.11:12:34 zpool import data
2011-01-19.11:13:18 zpool replace data c3d0 c2t4d0
2011-01-19.19:12:32 zpool clear data
2011-01-20.06:30:41 zpool export data
2011-01-20.06:56:10 zpool import data

History for 'rpool':
2010-12-31.07:50:38 zpool create -f rpool c2t0d0s0
2010-12-31.07:50:38 zfs set org.openindiana.caiman:install=busy rpool
2010-12-31.07:50:38 zfs create -b 4096 -V 1979m rpool/swap
2010-12-31.07:50:39 zfs create -b 131072 -V 1979m rpool/dump
2010-12-31.07:50:39 zfs set mountpoint=/a/export rpool/export
2010-12-31.07:50:39 zfs set mountpoint=/a/export/home rpool/export/home
2010-12-31.07:50:40 zfs set mountpoint=/a/export/home/mich rpool/export/home/mich
2010-12-31.08:00:56 zpool set bootfs=rpool/ROOT/openindiana rpool
2010-12-31.08:02:00 zfs set org.openindiana.caiman:install=ready rpool
2010-12-31.08:02:00 zfs set mountpoint=/export/home/mich rpool/export/home/mich
2010-12-31.08:02:00 zfs set mountpoint=/export/home rpool/export/home
2010-12-31.08:02:01 zfs set mountpoint=/export rpool/export
2010-12-31.08:45:08 zpool attach -f rpool c2t0d0s0 c2t1d0s0
2011-01-09.00:00:08 zpool scrub rpool

> Third, no offence, but do you have proper literature for ZFS?
> If not, please have a look at Solarisinternals:
>
> http://www.solarisinternals.com/wiki/index.php/Solaris_Internals_and_Performance_FAQ
> http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
>
> Or have a look at the Open Solaris Bible.

I was on a Sun course (prior to the Oracle takeover) which covered ZFS as part of it. It really captured my imagination. Hence I did a load of testing with memory sticks and even made videos to say, "Hey, how cool is this!"
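(For anyone reading the archive: the migration the quoted advice describes, sending the old pool's data into a freshly created pool, would look roughly like the sketch below. The pool, snapshot, and device names here are hypothetical placeholders, not taken from my system.)

```shell
# Create the new pool with the desired layout (device names are placeholders)
zpool create newpool raidz c5t0d0 c5t1d0 c5t2d0

# Take a recursive snapshot of the old pool, then replicate the whole
# hierarchy (properties and snapshots included) into the new pool
zfs snapshot -r oldpool@migrate
zfs send -R oldpool@migrate | zfs receive -F -d newpool
```

The catch, as I said above, is that this needs the old and new disks connected at the same time, which I don't have ports for.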
But it is odd things like this which are really throwing me curve balls.

> Fourth... what would you like to achieve with OI ?

<engage violins> On a small wage, I cannot do much. After dealing with cheap RAID cards over the years, I lost a lot of data because the embedded controller was near impossible to work with. I had backups, fortunately, but as data sizes grew, I couldn't afford the larger tape backup systems. ZFS seemed like the perfect answer for one as poor as I. <end violins>

With the killing of OpenSolaris for the home user (the Express 11 licence prohibits running a production box and makes no distinction for home users at all, so I had to move from OpenSolaris), I did a bit of research and believe that OI is the obvious replacement. Not because of the technology, but because of the community. The e-mails and the help I have received over IRC have been wonderfully supportive, and tolerant of me.

So, I have chosen to use OI to keep all my data safe with ZFS: eSATA cards for an external backup system that I can put two drives in, so I can back up to a ZFS mirror set. It seems like the ideal solution, one that will grow with me. I've actually achieved what I want with OI, a home server that seems stable. It is just odd behaviours that are throwing me a little.

> Sorry, now I made four Questions out of it. ;)

No problem :-) Many thanks for any help you can give me on this.

Michelle.

_______________________________________________
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss