I had a problem like that on my laptop, which also has an rge interface: ping
worked fine, but ssh and ftp didn't. To get around it I had to add
set ip:dohwcksum = 0
to /etc/system and reboot.
That worked and is worth a try for you :)
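One way to apply it, as root (any editor works on /etc/system; this append-and-reboot is just a sketch — the tunable disables IP-layer hardware checksum offload, which works around broken checksum offload in the rge driver):
bash-3.00# echo 'set ip:dohwcksum = 0' >> /etc/system
bash-3.00# init 6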
Cheers,
Alan
--
This message posted from opensolaris.org
__
Hi Al,
Thanks for the tips. I've maxed the memory on the board now (up to 8GB from
4GB) and you are dead right about it being cheap to do so. I'd upgraded the
power supply as I thought that was an issue, since the original couldn't provide
enough start-up current, but that didn't make much difference
Hello All,
In a moment of insanity I've upgraded from a 5200+ to a Phenom 9600 on my
zfs server and I've had a lot of problems with hard hangs when accessing the
pool.
The motherboard is an Asus M2N32-WS, which has had the latest available BIOS
upgrade installed to support the Phenom.
bash-
Having recently upgraded from snv_57 to snv_73 I've noticed some strange
behaviour with the -v option to zpool iostat.
Without the -v option on an idle pool things look reasonable.
bash-3.00# zpool iostat 1
               capacity     operations    bandwidth
pool         used  avail   read  write   read  writ
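For reference, the -v form is invoked the same way; it just adds a per-vdev breakdown beneath the pool totals (invocation from the zpool man page, not a capture from this box):
bash-3.00# zpool iostat -v 1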
Hi Ben,
Your sar output shows one core pegged pretty much constantly! From the Solaris
Performance and Tools book, that SLP state value covers "the remainder of important
events such as disk and network waits, along with other kernel wait
events". Kernel locks or condition variables also a
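A quick way to confirm which core is pegged and what the hot thread is waiting on (standard Solaris tools, not output from your box):
bash-3.00# mpstat 1 5
bash-3.00# prstat -mL 1
mpstat gives the per-CPU view, and prstat -mL shows per-thread microstates (LCK/SLP columns) so you can see whether it's lock or sleep time.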
Hold fire on the re-init until one of the devs chips in, maybe I'm barking up
the wrong tree ;)
--a
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-di
Hi Jim,
That looks interesting, though. I'm not a zfs expert by any means, but look at
some of the properties of the child elements of the mirror:
version=3
name='zmir'
state=0
txg=770
pool_guid=5904723747772934703
vdev_tree
    type='root'
    id=0
    guid=5904723747772934703
    children[0]
        type='mirror'
        id
PxFS performance improvements on the order of 5-6x are possible, depending
on the workload, using the Fastwrite option.
Fantastic! Has this been targeted at directory operations? We've had issues
with large directories full of small files being very slow to handle over PxFS.
Are there plans fo
Eh, maybe it's not a problem after all; the scrub has completed fine...
--a
bash-3.00# zpool status -v
pool: raidpool
state: ONLINE
scrub: scrub completed with 0 errors on Tue May 9 21:10:55 2006
config:
        NAME        STATE     READ WRITE CKSUM
        raidpool    ONLINE       0     0
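For reference, that was just the standard scrub invocation (no special options assumed):
bash-3.00# zpool scrub raidpool
...with progress checked afterwards via the same zpool status -v as above.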
I'm not sure exactly what happened with my box here, but something caused a
hiccup on multiple sata disks...
May 9 16:40:33 sol scsi: [ID 107833 kern.warning] WARNING: /[EMAIL
PROTECTED],0/pci10de,[EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED]
(ata6):
May 9 16:47:43 sol scsi: [ID 10783