Take the new disk out as well; a foreign/bad non-zero disk label may cause
trouble too.
I've experienced tool core dumps with a foreign disk (partition) label, which
might be the case if it is a recycled replacement disk. (In my case it was
fixed by plugging the disk into a Linux desktop and "blanking" it.)
I would like zpool iostat to take a "-p" option to output parsable statistics
with absolute counters/figures that could, for example, be fed to MRTG, RRD, et
al.
The "zpool iostat [-v] POOL 60 [N]" form is great for humans but not very
API-friendly; N=2 is a bit of overkill and unreliable. Is this info available…
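Until such a flag exists, one fragile workaround is to scrape the one-shot human
output; a minimal awk sketch (the sample text and field positions below are my
assumptions about Solaris 10-era "zpool iostat" layout, not output from a real
pool):

```shell
# Sample standing in for `zpool iostat tank` output (assumed layout:
# name, used, avail, read ops, write ops, read bw, write bw).
sample='              capacity     operations    bandwidth
pool         used  avail   read  write   read  write
-----------  -----  -----  -----  -----  -----  -----
tank         1.2T   600G     15     30  1.5M  2.1M'

# Emit "pool read_ops write_ops" in a form an MRTG/RRD script can consume.
echo "$sample" | awk 'NR > 3 { print $1, $4, $5 }'
```

Note these are interval averages, not the absolute counters a real "-p" option
should give you.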
Why do they throw these fancy RAID controllers at us when we have plenty of CPU
power to do ZFS mirror and even raidz1 and raidz2?
I have 12 SATA disks and would like to prepare to add 12 new internal SATA
disks to my home server.
The cabinet (Lian Li Modular Cube
http://www.microplex.no/aspx/prod
1) I would use soft-mirror (SVM):
During install, dedicate s7 to the metadb (~10MB is plenty)
# cat /etc/lvm/md.tab
/dev/md/dsk/d0 -m /dev/md/dsk/d10
/dev/md/dsk/d10 1 1 /dev/dsk/c0d0s0
/dev/md/dsk/d20 1 1 /dev/dsk/c0d1s0
# metadb -a -c 3 /dev/dsk/c0d0s7 /dev/dsk/c0d1s7
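For completeness, the md.tab entries above would then be activated roughly like
this (a root-mirroring sketch from memory; verify against metainit(1M) and
metaroot(1M) before trusting it):

# metainit d10
# metainit d20
# metainit d0        (one-sided mirror, per the md.tab line)
# metaroot d0        (updates /etc/vfstab and /etc/system for the root mirror)
(reboot)
# metattach d0 d20   (attach the second submirror; it resyncs in the background)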
Your boot sector is lost (or not found).
Have you checked that the BIOS is trying to boot from the correct disk?
My MSI card bit me exactly like this last time I plugged in an additional disk.
I had rebooted, but not power-cycled.
When I power-cycled, the BIOS detected new HW and came up with the incredi
What if your HW RAID controller dies? In, say, 2 years or more..
What will read your disks as a configured RAID? Do you know how to
(re)configure the controller or restore the config without destroying your
data? Do you know for sure that a spare part and firmware will be identical, or
at least compatible?
Try mounting the other way around, so you read from NFS and write to ZFS
(~DAS). That should perform significantly better.
NFS write is slow (compared to read) because of the synchronous ack.
If for some reason you can't mount the other way around, then you may want to
play with the NFS mount options for write-buffer size.
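If you are stuck with that direction, the knobs would look something like this
(hypothetical sizes; the useful values and limits depend on your NFS client):

new# mount -F nfs -o vers=3,rsize=32768,wsize=32768 old:/my/data /mnt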
Sorry, I realize I was a bit misleading in the path handling and need to
correct this part:
new# mount -r old:/my/data /mnt
new# mkdir -p /my/data
new# cd /mnt ; rsync -aRHDn --delete ./ /my/data/    (dry run first)
new# cd /mnt ; rsync -aRHD --delete ./ /my/data/
new# umount /mnt
..
new# cd /mnt ; rsync -aRHD --delete ./ /my/data/
I would use rsync; over NFS if possible, otherwise over ssh:
(NFS performs significantly better on read than on write, so preferably share
from the old host and mount on the new one.)
old# share -F nfs -o [EMAIL PROTECTED],[EMAIL PROTECTED] /my/data
(or edit /etc/dfs/dfstab and shareall)
new# mount -r old:/my/
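If NFS isn't an option, the same copy over ssh would look roughly like this
(sketch only, run on the new host; -R is dropped since the destination path is
given explicitly):

new# rsync -aHD --delete -e ssh old:/my/data/ /my/data/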
Just my problem too ;) And ZFS disappointed me big time here!
I know ZFS is new and every desired feature isn't implemented yet. I hope and
believe more features are coming "soon", so I think I'll stay with ZFS and
wait..
My idea was to start out with just as many state-of-the-art size disks I r
I use the Supermicro AOC-SAT2-MV8.
It is 8-port SATA2, JBOD only, literally plug&play (Solaris 10 U3), and just
~100 EUR.
It is PCI-X, but mine is plugged into a plain PCI slot/mobo and works fine.
(I don't know how much better it would perform in a PCI-X slot/mobo.)
I bought mine here:
http://www.mullet.se