That was it. Thanks, Matt.
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Thank you Al for your quick response.
I will forward this info to the customer and inform him about it.
Arlina-
Russell Blaine wrote:
The zdb object -> path trick doesn't give me a path name:
errors: The following persistent errors have been detected:

        DATASET  OBJECT  RANGE
        13       a51b    lvl=0 blkid=9

bash-3.00# zdb -vvv mypool/rab a51b
Try 0xa51b.
--matt
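Matt's tip appears to be that the persistent-error table prints object numbers in hex without a "0x" prefix, so passing the bare value to zdb misparses it. A small sketch of the conversion (the variable names are made up for illustration):

```shell
obj_hex=a51b                        # value copied from the OBJECT column

# Prepend 0x for zdb, or convert to decimal with printf:
obj_dec=$(printf '%d' "0x${obj_hex}")
echo "pass 0x${obj_hex} (decimal ${obj_dec}) to zdb"
# e.g.: zdb -vvv mypool/rab "0x${obj_hex}"   # commented out: needs a live pool
```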
The zdb object -> path trick doesn't give me a path name:
errors: The following persistent errors have been detected:

        DATASET  OBJECT  RANGE
        13       a51b    lvl=0 blkid=9
bash-3.00# zdb mypool | grep "ID 19,"
Dataset mypool/rab [ZPL], ID 19, cr_txg 6, last_txg 4391649, 80.3
On Wed, Sep 27, 2006 at 02:12:48PM -0700, Darren Dunham wrote:
>
> Is this only a concern with *imported* pools? I was assuming Neelakanth
> was trying to destroy exported pools (or at least one of them was
> exported).
Yes, that is correct.
> I hope ZFS won't get too worried about them if I do
Hey,
I did a similar test a couple of months ago, albeit on a smaller system,
and 'only' 10,000 users. I saw a similar delay at boot time, but also
saw a large amount of memory utilisation.
I didn't notice the major memory usage, but the box had no other use,
than to mount this mass of empty
On Wed, 27 Sep 2006, Arlina Goce-Capiral wrote:
> All,
>
>
> Customer would like to confirm whether this is supported or not.
>
> =
> I've created non global zones with ZFS underneath the root filesystem
> for a new SAP
> environment that's approaching production next week. Then I
On Wed, 27 Sep 2006, Gino Ruopolo wrote:
> Thank you Bill for your clear description.
>
> Now I have to find a way to justify myself to my head office: after
> spending 100k+ on hw and migrating to "the most advanced OS", we are running
> about 8 times slower :)
>
> Anyway I have a problem
I was trying to import the pool, but got an error that there
were 2 pools with the same name (in exported state) and I had
to import by "id". Thus I wanted to destroy the other pool
-neel
Some time ago, Darren Dunham said:
> On Wed, Sep 27, 2006 at 09:53:32AM -0700, Neelakanth Nadgir wrote:
> > Is it possible to destroy a pool by ID? I created two pools with the
> > same name, and want to destroy one of them
>
> How did you manage to do that? This should be impossible, and is a bug
> in ZFS somewhere. The internal
All,
Customer would like to confirm whether this is supported or not.
=
I've created non global zones with ZFS underneath the root filesystem
for a new SAP
environment that's approaching production next week. Then I read that
it's not supported,
but many people in the discussi
Sorry, but I was able to dd zeros to the devices "zpool status"
gave for the incorrect zpool, and made it disappear. There are
two possible cases that could have made this happen:
1. When I first created the zpool (zpool create zfsdata ...)
I control-C'd the command. I ran it to completion th
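The dd trick works because ZFS writes four 256 KB labels on every vdev, two at the front and two at the back; zeroing them makes the device unrecognizable as a pool member. A hedged sketch on a scratch file standing in for a disk (the path is made up, and the layout claim is my understanding of the on-disk format, not from this thread):

```shell
VDEV=/tmp/fake_vdev.$$
dd if=/dev/zero of="$VDEV" bs=1M count=64 2>/dev/null    # 64 MB file vdev

# Clobber labels L0/L1 (first 512 KB) and L2/L3 (last 512 KB):
dd if=/dev/zero of="$VDEV" bs=256k count=2 conv=notrunc 2>/dev/null
dd if=/dev/zero of="$VDEV" bs=256k count=2 conv=notrunc \
   seek=$((64 * 4 - 2)) 2>/dev/null                       # last two 256 KB slots
wc -c < "$VDEV"                                           # size is unchanged
rm -f "$VDEV"
```

conv=notrunc matters: without it the second dd would truncate the file instead of overwriting in place.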
On Tue, 2006-09-26 at 16:13 -0700, Noel Dellofano wrote:
> I can also reproduce this on my test machines and have opened up CR
> 6475506 panic in dmu_recvbackup due to NULL pointer dereference
> to track this problem. This is most likely due to recent changes
> made in the snapshot code for -F.
Neel,
Is it possible to destroy a pool by ID? I created two pools with the
same name, and want to destroy one of them
Could you please cut and paste (i.e. not re-type) the output from the command "zpool
list | col -b" and post it here?
Thanks... Sean.
On Wed, Sep 27, 2006 at 09:53:32AM -0700, Neelakanth Nadgir wrote:
> Is it possible to destroy a pool by ID? I created two pools with the
> same name, and want to destroy one of them
How did you manage to do that? This should be impossible, and is a bug
in ZFS somewhere. The internal AVL tree is
>
> More generally, I could suggest that we use an odd
> number of vdevs
> for raidz and an even number for mirrors and raidz2.
> Thoughts?
Uhm... we found serious performance problems also using a RAID-Z of 3 LUNs...
Gino
Richard Elling - PAE wrote:
More generally, I could suggest that we use an odd number of vdevs
for raidz and an even number for mirrors and raidz2.
Thoughts?
Sounds good to me. I'd make sure it's in the same section of the BP
guide as "Align the block size with your app..." type notes.
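A rough sketch of the alignment intuition behind the odd-vdev-count suggestion (this is only the back-of-the-envelope version; the real RAID-Z allocator also rounds allocations, so treat it as illustration):

```python
# A 128 KiB record is 256 sectors of 512 B.  With an N-disk raidz1, each
# stripe row holds N-1 data sectors plus one parity sector.  An odd N makes
# N-1 even, so 256 data sectors fill every row exactly; an even N leaves a
# partial last row.
RECORD_SECTORS = 256  # 128 KiB / 512 B


def raidz1_rows(ndisks: int) -> tuple[int, int]:
    """Return (full stripe rows, leftover data sectors in the last row)."""
    data_cols = ndisks - 1
    return divmod(RECORD_SECTORS, data_cols)


for n in (3, 4, 5, 6):
    rows, leftover = raidz1_rows(n)
    print(f"{n} disks: {rows} full rows, {leftover} leftover sectors")
# 3 and 5 disks divide evenly; 4 and 6 strand a partial row.
```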
observations below...
Bill Moore wrote:
Thanks, Chris, for digging into this and sharing your results. These
seemingly stranded sectors are actually properly accounted for in terms
of space utilization, since they are actually unusable while maintaining
integrity in the face of a single drive f
Thank you Bill for your clear description.
Now I have to find a way to justify myself to my head office: after
spending 100k+ on hw and migrating to "the most advanced OS", we are running
about 8 times slower :)
Anyway I have a problem much more serious than rsync process speed. I hope
yo
Is it possible to destroy a pool by ID? I created two pools with the
same name, and want to destroy one of them
-neel
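The workaround that emerges from this thread: when two exported pools share a name, import the unwanted one by its numeric id under a temporary name, then destroy it. A transcript sketch; the id and names below are made up for illustration:

```shell
zpool import                      # lists both exported pools with their ids
zpool import 6789012345 doomed    # import the duplicate under a new name
zpool destroy doomed
```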
> So recently, I decided to test out some of the ideas I've been toying
> with, and decided to create 50,000 and 100,000 filesystems. The test
> machine was a nice V20Z with dual 1.8GHz Opterons, 4GB RAM, connected to a
> SCSI 3310 RAID array via two SCSI controllers.
I did a similar test a couple o
Fixed. Thank you for the heads up on that.
Noel
On Sep 27, 2006, at 1:04 AM, Victor Latushkin wrote:
Hi All,
I've noticed that the link to dmu_txg.c from the ZFS Source Code tour
is broken. It looks like dmu_txg.c should be changed to dmu_tx.c.
Please take care of this.
- Victor
Hi,
*sigh*, one of the issues we recognized, when we introduced the new
cheap/fast file system creation, was that this new model would stress
the scalability (or lack thereof) of other parts of the operating
system. This is a prime example. I think the notion of an automount
option for zfs dir
Patrick wrote:
Hi,
So recently, I decided to test out some of the ideas I've been toying
with, and decided to create 50,000 and 100,000 filesystems. The test
machine was a nice V20Z with dual 1.8GHz Opterons, 4GB RAM, connected to a
SCSI 3310 RAID array via two SCSI controllers.
Now creating the ma
Hi,
So recently, I decided to test out some of the ideas I've been toying
with, and decided to create 50,000 and 100,000 filesystems. The test
machine was a nice V20Z with dual 1.8GHz Opterons, 4GB RAM, connected to a
SCSI 3310 RAID array via two SCSI controllers.
Now creating the mass of filesystem
Some people have privately asked me the configuration details when the problem
was encountered. Here they are:
zonecfg:bluenile> info
zonepath: /zones/bluenile
autoboot: false
pool:
inherit-pkg-dir:
dir: /lib
inherit-pkg-dir:
dir: /platform
inherit-pkg-dir:
dir: /sbin
inhe
> > Hello Matthew,
> > Tuesday, September 12, 2006, 7:57:45 PM, you
> wrote:
> > MA> Ben Miller wrote:
> > >> I had a strange ZFS problem this morning. The
> > entire system would
> > >> hang when mounting the ZFS filesystems. After
> > trial and error I
> > >> determined that the problem was wit
On Tue, Sep 12, 2006 at 03:56:00PM -0700, Matthew Ahrens wrote:
> Matthew Ahrens wrote:
[...]
> Given the overwhelming criticism of this feature, I'm going to shelve it for
> now.
I'd really like to see this feature. You say ZFS should change our view
on filesystems; I say, be consistent.
In ZFS
Hi All,
I've noticed that the link to dmu_txg.c from the ZFS Source Code tour is
broken. It looks like dmu_txg.c should be changed to dmu_tx.c.
Please take care of this.
- Victor