It was fine after the reboot...so even though zfs destroy threw up the errors, it
did remove them...it just needed a reboot to refresh/remove
it from the zpool list.
Thanks.
--
Hi guys, I physically removed disks from a pool without offlining the pool
first...(yes, I know). Anyway, I now want to delete/destroy the pool...but zpool
destroy -f dvr says "cannot open 'dvr': no such pool".
I can not offline it or delete it!
I want to reuse the name dvr - how do I do this?
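In case anyone hits the same thing, the commands involved would look roughly like this (just a sketch - it assumes the pool still shows up in zpool list):
# zpool list                 <- check whether 'dvr' is still listed
# zpool status dvr           <- see what state ZFS thinks the missing vdevs are in
# zpool destroy -f dvr       <- force-destroy it if it is listed
# zpool export dvr           <- or try exporting the faulted pool instead
If the destroy has actually gone through but stale state lingers, a reboot may be what finally clears it from zpool list - which is how this one ended up being resolved.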
I agree, I get appalling NFS speeds compared to CIFS/Samba...i.e. CIFS/Samba of
95-105MB/s and NFS of 5-20MB/s.
Not to hijack the thread, but I assume an SSD ZIL will similarly improve an iSCSI
target...as I am getting 2-5MB/s on that too.
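On the ZIL question: adding an SSD as a separate log device is a one-liner - a sketch with an invented pool/device name, not an actual layout from this thread:
# zpool add tank log c9t0d0
NFS and iSCSI both issue a lot of synchronous writes, which is why a fast ZIL device tends to help them; CIFS/Samba traffic is mostly asynchronous, which would explain the gap you are seeing.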
--
There is a lot there to reply to...but I will try to help...
Re. TLER: do not worry about TLER when using ZFS. ZFS will handle it either way
and will NOT time out and drop the drive...it may wait a long time, but it will
not drop the drive - nor will it have an issue if you do enable
Thanks, seems simple.
--
Hi guys, I am about to reshape my data pool and am wondering what performance
difference I can expect from the new config vs. the old.
The old config is a pool with a single vdev of 8 disks in raidz2.
The new config is 2 vdevs of 7-disk raidz2 in a single pool.
I understand it should be better with
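For reference, the new layout would be built with something along these lines (pool and disk names are invented for the sketch):
# zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 \
               raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0
ZFS stripes writes across the two raidz2 vdevs, so random IOPS should scale roughly with the number of vdevs, while each vdev still survives two disk failures.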
"If you see the workload on the wire go through regular patterns of fast/slow
response
then there are some additional tricks that can be applied to increase the
overall
throughput and smooth the jaggies. But that is fodder for another post..."
Can you pls. elaborate on what can be done here as I
Hi,
I suspect mine are already IT mode...not sure how to confirm that
though...I have had no issues.
My controller is showing as C8...odd, isn't it? It's in the 16x PCIe slot at the
moment...I am not sure how it gets the number...
--
Glad you got it humming!
I got my (2x) 8-port LSI cards from here for $130 USD...
http://cgi.ebay.com/BRAND-NEW-SUPERMICRO-AOC-USASLP-L8I-UIO-SAS-RAID_W0QQitemZ280397639429QQcmdZViewItemQQptZLH_DefaultDomain_0?hash=item4149006f05
Works perfectly.
--
Thanks guys,
It's all working perfectly so far, and very easy too.
Given that my boot disks (consumer laptop drives) cost only ~$60 AUD each, it's
a cheap way to maintain high availability and backup.
ZFS does not seem to mind having one of the 3 offline, so I'd recommend this to
others too.
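For anyone copying the setup, taking the third mirror disk offline and bringing it back is just (a sketch - c8t2d0s0 is a placeholder device name):
# zpool offline rpool c8t2d0s0      <- before pulling the disk for offsite storage
# zpool online rpool c8t2d0s0       <- after reinserting it
# zpool status rpool                <- watch the resilver catch the disk up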
Hi Jason, I spent months trying different OSes for my server and finally
settled on OpenSolaris.
The OS is just as easy to install, learn and use as any of the Linux
variants...and ZFS beats mdadm hands down.
I had a server up and sharing files in under an hour. Just do it - (you'll
know s
Hi guys,
On my home server (2009.06) I have 2 HDDs in a mirrored rpool.
I just added a 3rd to the mirror and made all disks bootable (i.e. installgrub
on the mirror disks).
My thought is this: I remove the 3rd mirror disk and offsite it as a backup.
That way, if I mess up the rpool, I can get
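For the record, the steps for the 3rd disk were roughly these (device names are examples, not necessarily the exact ones used here):
# zpool attach rpool c8t0d0s0 c8t2d0s0     <- add the 3rd disk to the existing mirror
# zpool status rpool                       <- wait for the resilver to complete first
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c8t2d0s0
The installgrub step is what makes the new disk bootable; without it the disk carries the data but the BIOS cannot boot from it.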
Ahh, interesting...once I get the data relatively stable in some of those
sub-folders I will create a file system, move the data in there and set up
snapshots for those that are relatively static...now I just need to do a load of
reading about snapshots!
Thanks again...so much to learn.
--
Thanks for that, Erik.
--
Hi guys, on my home server I have a variety of directories under a single
pool/filesystem, Cloud.
Things like:
cloud/movies -> 4TB
cloud/music -> 100GB
cloud/winbackups -> 1TB
cloud/data -> 1TB
etc.
After doing some reading, I see recommendations to have separate filesystems to
improve p
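In case it helps the next person, converting one of those directories into its own dataset looks roughly like this (a sketch - it assumes the data can be moved aside first, and mv across datasets is really a copy, so it takes time and space):
# zfs create cloud/music_new              <- new dataset with a temporary name
# mv /cloud/music/* /cloud/music_new/     <- move the data into the dataset
# rmdir /cloud/music                      <- remove the now-empty directory
# zfs rename cloud/music_new cloud/music  <- rename; the inherited mountpoint follows
The main practical win is per-dataset properties - quotas, compression, recordsize and snapshot schedules per area - rather than raw throughput.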
Just an update on this: I was seeing high CPU utilisation (100% on all 4 cores)
for ~10 seconds every 20 seconds when transferring files to the server using
Samba under build 133.
So I rebooted and selected build 111b, and I no longer have the issue. Interestingly,
the rpool is still in place...as it should be
Yes, I am glad that I learned this lesson now, rather than in 6 months when I
have re-purposed the existing drives...it makes me all the more committed to
maintaining an up-to-date remote backup.
The reality is that I cannot afford to mirror the 8TB in the zpool, so I'll
balance the risk and just
Well, both I guess...
I thought the dataset name was based upon the file system...so I was assuming
that if I renamed the ZFS filesystem (with zfs rename) it would also rename the
dataset...
i.e....
#zfs create tank/fred
gives...
NAME        USED  AVAIL  REFER  MOUNTPOINT
tank/fred
OK, I know NOW that I should have used zfs rename...but just for the record,
and to give you folks a laugh, this is the mistake I made...
I created a ZFS file system, cloud/movies, and shared it.
I then filled it with movies and music.
I then decided to rename it, so I used rename in the Gnome to
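For the record, the right way is a one-liner - sketching with a made-up target name:
# zfs rename cloud/movies cloud/media
That renames the dataset and, if the mountpoint is inherited, remounts it at the new path in one step; renaming the folder in a file manager just shuffles files around inside the dataset and leaves the dataset name alone.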
Thanks, David.
Re. the starting cylinder: it was more that on c8t0d0 the partition started at
0 and on c8t1d0 it started at 1.
i.e.:
c8t0d0:
Partition   Status    Type          Start   End     Length    %
=========   ======    ============  =====   =====   ======    ===
    1       Active    Solaris2          0   30401    30402    100
c8t1d0:
Partition   Status    Type
Thanks for that.
It seems strange, though, that the two disks, which are from the same
manufacturer, same model, same firmware and similar batch/serials, behave
differently.
I am also puzzled that the rpool disk appears to start at cylinder 0 and not 1.
I did find this quote after googling for t
Looks like an issue with the start/length of the partition table...
These are the disks from "format"...
8. c8t0d0
/p...@0,0/pci8086,3...@1f,2/d...@0,0
9. c8t1d0
/p...@0,0/pci8086,3...@1f,2/d...@1,0
Looking at the partitions, the existing rpool disk is formatted
Thanks for the replies.
1) Both are on the same controller (Intel motherboard) - one on port 0, the
other on port 1 (I think this is demonstrated by c8t0d0 and c8t1d0?)
2) I have tried adding them without doing the format, and no joy.
Any other ideas?
--
Hi guys,
I have just installed OpenSolaris 2009.06 on my server using a 250GB laptop
drive (using the entire drive).
I now want to add another identical 250GB laptop drive to the rpool to
create a mirror. There have been no other changes to hardware (x86 - Intel entry-level
server board).
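For anyone finding this thread later, the usual recipe looks like this (treat it as a sketch - it assumes the second disk already has a Solaris fdisk partition, and the c8t0d0/c8t1d0 names just follow the ones mentioned in the thread):
# prtvtoc /dev/rdsk/c8t0d0s2 | fmthard -s - /dev/rdsk/c8t1d0s2   <- copy the slice table to the new disk
# zpool attach rpool c8t0d0s0 c8t1d0s0                           <- attach the new slice to form the mirror
# zpool status rpool                                             <- wait for the resilver to finish
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c8t1d0s0
The label-copy step is often the missing piece when the attach is rejected - the new disk needs an SMI label with a matching s0 slice before zpool attach will take it.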