On Sat, 20 Jun 2009, Cindy Swearingen wrote:
I wish we had a zpool destroy option like this:
# zpool destroy -really_dead tank2
Cindy,
The moment we implemented such a thing, there would be a rash of requests
saying:
a) I just destroyed my pool with -really_dead - how can I get my data
b
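For what it's worth, a pool destroyed the ordinary way is not necessarily gone:
as long as its disks have not been reused, it can usually be recovered. A
minimal sketch, assuming the tank2 pool named above:
# zpool import -D        # list destroyed pools that can still be imported
# zpool import -D tank2  # re-import the destroyed pool, data intact
A -really_dead option would presumably close even that escape hatch.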
Andrew Watkins wrote:
[I did post this in NFS, but I think it should be here]
I am playing with ACLs on an snv_114 (and Storage 7110) system and I have
noticed that strange things are happening to ACLs, or am I doing something
wrong.
When you create a new sub-directory or file, the ACLs seem to be
Dave,
If I knew I would tell you, which is the problem. :-)
I see a good follow-up about device links, but probably
more is lurking.
I generally don't trust anything I haven't tested myself,
and I know that the manual process hasn't always worked.
I think Scott Dickson's instructions would hav
Hi Kent,
This is what I do in similar situations:
1. Import the pool to be destroyed by using the ID. In your case,
like this:
# zpool import 3280066346390919920
If tank already exists you can also rename it:
# zpool import 3280066346390919920 tank2
Then destroy it:
# zpool destroy tank2
I
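If you don't know the numeric ID yet, running zpool import with no arguments
scans the attached devices and lists every importable pool with its name, ID
and state; the ID is what lets you tell apart two pools that share a name.
Roughly:
# zpool import                             # list importable pools (name, id, state)
# zpool import 3280066346390919920 tank2   # then import the one you want, renamed if needed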
Over the course of multiple OpenSolaris installs, I first created a
pool called "tank" and then later, reusing some of the same
drives, I created another pool called "tank". I can `zpool export tank`,
but when I `zpool import tank`, I get:
bash-3.2# zpool import tank
cannot import 'tan
[I did post this in NFS, but I think it should be here]
I am playing with ACLs on an snv_114 (and Storage 7110) system and I have
noticed that strange things are happening to ACLs, or am I doing something
wrong.
When you create a new sub-directory or file, the ACLs seem to be incorrect.
# zfs crea
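When inherited ACLs look wrong, a useful first step is to check the dataset's
ACL properties and then inspect the entries a freshly created child actually
received. A minimal sketch, with tank/data as a placeholder dataset name:
# zfs get aclinherit,aclmode tank/data   # how ACEs are inherited and how chmod affects them
# mkdir /tank/data/sub
# ls -dv /tank/data/sub                  # show the full ACL the new directory ended up with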
Neil,
Thanks.
That makes sense. Maybe the zpool man page could say that it is a rate, as the
iostat man page does. I think the reads are from the zpool iostat command
itself; zpool iostat doesn't capture that.
Thanks
On 06/20/09 11:14, tester wrote:
Hi,
Does anyone know the difference between zpool iostat and iostat?
dd if=/dev/zero of=/test/test1/trash count=1 bs=1024k;sync
The pool only shows 236K of I/O and 13 write ops, whereas iostat correctly
shows a meg of activity.
The zfs numbers are per second as we
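One way to compare the two tools on an equal footing is to run both with the
same interval and ignore the first sample from each, since that first line is
an average since boot (or since the pool was imported) rather than current
activity. A rough sketch, using the test pool from the original post:
# zpool iostat -v test 5   # per-second rates at the pool/vdev level
# iostat -xnz 5            # per-second rates at the device level, skipping idle devices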
Thank you !
This is exactly what I was looking for, and although this is ZFS (not a Windows
FAT) the time it takes to create a new pool (instantaneous) means all the data
is still there and only the table of contents was maybe erased. As Unix
directories are files, I suspect even the old structure ma
Hi,
Does anyone know the difference between zpool iostat and iostat?
dd if=/dev/zero of=/test/test1/trash count=1 bs=1024k;sync
The pool only shows 236K of I/O and 13 write ops, whereas iostat correctly
shows a meg of activity.
zpool iostat -v test 5
capac
> Working on the assumption that you are going to be adding more drives to
your server, why not just add the new drives to the Supermicro
controller and keep the existing pool (well vdev) where it is?
That's not a bad idea. I just thought that the AOC-SAT2-MV8 has 2 more SATA
ports than my mobo (
Also, b57 is about 2 years old and misses the improvements in performance,
especially in scrub performance.
-- richard
Tomas Ögren wrote:
On 19 June, 2009 - Joe Kearney sent me these 3,8K bytes:
I've got a Thumper running snv_57 and a large ZFS pool. I recently
noticed a drive throwing so
OK, that should work then, as my boot drive is currently an old IDE drive,
which I'm hoping to replace with a SATA SSD.
OK, thanks again Jeff.
Cheers,
Simon
Hi, Miles!
Hope the weather is fine at your place. :-)
On Sat, Jun 20, 2009 at 5:09 AM, Miles Nordin wrote:
> I understood Bogdan's post was a trap: ``provide bug numbers. Oh,
> they're fixed? nothing to see here then. no bugs? nothing to see
> here then.''
Would be great if you do not put a wor
Great, thanks a lot Jeff.
Cheers,
Simon
Hi Charles,
Works fine.
I did just that with my home system. I have 2x .5 TB disks which I
didn't want to dedicate to rpool, and I wanted to create a second pool
on those disks which could be expanded. I set up the rpool to be
100GB and that left me with a 400GB partition to make into an
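A rough sketch of that kind of layout, with placeholder device names (the
actual slice numbers depend on how the disks are labelled in format):
# zpool create data c0t0d0s4   # second pool on the slice left over after rpool
# zpool add data c0t1d0        # later, grow it by adding a whole disk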
On 20/06/2009, at 9:55 PM, Charles Hedrick wrote:
I have a USB disk, to which I want to do a backup. I've used send |
receive. It works fine until I try to reboot. At that point the
system fails to come up because the backup copy is set to be mounted
at the original location so the system
I have a USB disk, to which I want to do a backup. I've used send | receive. It
works fine until I try to reboot. At that point the system fails to come up
because the backup copy is set to be mounted at the original location, so the
system tries to mount two different things in the same place. I gu
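The usual way around that is to keep the received copy from mounting itself on
top of the original. A minimal sketch, with usbpool and rpool/export as
placeholder names:
# zfs snapshot -r rpool/export@backup
# zfs send -R rpool/export@backup | zfs recv -u -d usbpool   # -u: don't mount what is received
# zfs set canmount=noauto usbpool/export                     # keep the copy from mounting at boot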
I have a small system that is going to be a file server. It has two disks. I'd
like just one pool for data. Is it possible to create two pools on the boot
disk, and then add the second disk to the second pool? The result would be a
single small pool for root, and a second pool containing the res
On Sat, Jun 20, 2009 at 2:53 AM, Miles Nordin wrote:
>> "fan" == Fajar A Nugraha writes:
>> "et" == Erik Trimble writes:
>
> fan> The N610N that I have (BCM3302, 300MHz, 64MB) isn't even
> fan> powerful enough to saturate either the gigabit wired
>
> I can't find that device. Did you
On Sat, Jun 20, 2009 at 9:18 AM, Dave Ringkor wrote:
> What would be wrong with this:
> 1) Create a recursive snapshot of the root pool on homer.
> 2) zfs send this snapshot to a file on some NFS server.
> 3) Boot my 220R (same architecture as the E450) into single user mode from a
> DVD.
> 4) Cre
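A rough sketch of the first two steps, with placeholder names for the snapshot
and the NFS path:
# zfs snapshot -r rpool@migrate
# zfs send -R rpool@migrate > /net/nfsserver/backup/homer-rpool.zfs   # -R keeps the whole tree and its properties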
A couple questions out of pure curiosity.
Working on the assumption that you are going to be adding more drives to
your server, why not just add the new drives to the Supermicro
controller and keep the existing pool (well vdev) where it is?
Reading your blog, it seems that you need one (or two if
On Fri, 19 Jun 2009 16:42:43 -0700
Jeff Bonwick wrote:
> Yep, right again.
That is, if the boot drives are not one of those.. ;-)
--
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | nevada / OpenSolaris 2009.06 release
+ All that's really worth doing is what we do for others (L