Haudy Kazemi wrote:
I think a better question would be: what kind of tests would be most
promising for turning some subclass of these lost pools reported on
the mailing list into an actionable bug?
my first bet would be writing tools that test for ignored sync cache
commands leading to lost writes, and apply them
The device tree for your 250 might be different, so you may need to
hack the path_to_inst and /devices and /dev to make it boot successfully.
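A hedged sketch of that device-tree fixup, assuming a reconfiguration boot is enough for your hardware (the flags shown are illustrative, not a tested recipe for the 250):
ok boot -r                  (reconfiguration boot: rebuild the device tree)
# devfsadm -Cv              (clean up stale /devices entries and /dev links)
If the root device path still differs, hand-editing /etc/path_to_inst before the next boot is the fallback the poster is describing.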
On Jun 20, 2009, at 10:18 AM, Dave Ringkor wrote:
Cindy, my question is about what "system specific info" is
maintained that would need to be changed?
Dave Ringkor wrote:
- Boasting to the unconverted. We still have a lot of VxVM and SVM on Solaris, and LVM
on AIX, in the office. The other admins are always having issues with storage
migrations, full filesystems, Live Upgrade, corrupted root filesystems, etc. I love
being able to offer s
I'll start:
- The commands are easy to remember -- all two of them. Which is easier, SVM
or ZFS, to mirror your disks? I've been using SVM for years and still have to
break out the manual to use metadb, metainit, metastat, metattach, metadetach,
etc. I hardly ever have to break out the ZFS m
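To make that comparison concrete, here is a hedged side-by-side sketch of mirroring a single slice; device, metadevice, and pool names are placeholders:
SVM:
# metadb -a -f -c 3 c0t1d0s7        (state database replicas first)
# metainit d11 1 1 c0t0d0s0
# metainit d12 1 1 c0t1d0s0
# metainit d10 -m d11
# metattach d10 d12
ZFS:
# zpool attach mypool c0t0d0s0 c0t1d0s0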
Cindy, my question is about what "system specific info" is maintained that
would need to be changed? To take my example, my E450, "homer", has disks that
are failing and it's a big clunky server anyway, and management wants to
decommission it. But we have an old 220R racked up doing nothing, a
Yep, right again.
Jeff
On Fri, Jun 19, 2009 at 04:21:42PM -0700, Simon Breden wrote:
> Hi,
>
> I'm using 6 SATA ports from the motherboard but I've now run out of SATA
> ports, and so I'm thinking of adding a Supermicro AOC-SAT2-MV8 8-port SATA
> controller card.
>
> What is the procedure for
Yep, you got it.
Jeff
On Fri, Jun 19, 2009 at 04:15:41PM -0700, Simon Breden wrote:
> Hi,
>
> I have a ZFS storage pool consisting of a single RAIDZ2 vdev of 6 drives, and
> I have a question about replacing a failed drive, should it occur in future.
>
> If a drive fails in this double-parity
Hi,
I'm using 6 SATA ports from the motherboard but I've now run out of SATA ports,
and so I'm thinking of adding a Supermicro AOC-SAT2-MV8 8-port SATA controller
card.
What is the procedure for migrating the drives to this card?
Is it a simple case of (1) issuing a 'zpool export pool_name' com
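A hedged sketch of that migration, which is the generally recommended way to move a whole pool between controllers (pool name is a placeholder):
# zpool export tank
(shut down, recable the drives to the new controller, boot)
# zpool import tank
If the pool name has been forgotten, a plain 'zpool import' lists the pools it can see.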
Hi,
I have a ZFS storage pool consisting of a single RAIDZ2 vdev of 6 drives, and I
have a question about replacing a failed drive, should it occur in future.
If a drive fails in this double-parity vdev, then am I correct in saying that I
would need to (1) unplug the old drive once I've identif
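A hedged sketch of the usual replacement sequence for a raidz2 vdev (pool and device names are placeholders; hot-swap details depend on the controller):
# zpool offline tank c1t3d0        (optional, if the failing disk is still attached)
(swap the physical drive in the same bay)
# zpool replace tank c1t3d0
# zpool status tank                (watch the resilver complete)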
The Dell SAS controller probably has an on-board write cache, which helps with
write performance (write commit).
Based on my limited understanding, the 7110 does not have a write cache on its
SAS controller.
On 19 June, 2009 - Joe Kearney sent me these 3,8K bytes:
> I've got a Thumper running snv_57 and a large ZFS pool. I recently
> noticed a drive throwing some read errors, so I did the right thing
> and zfs replaced it with a spare.
Are you taking snapshots periodically? If so, you're using a bui
On Fri, 19 Jun 2009 11:50:07 PDT, stephen bond wrote:
>Kees,
>
>is it possible to get at least the contents of /export/home ?
>
>that is supposedly a separate file system.
That doesn't mean that data is in one particular spot on the
disk. The blocks of the ZFS filesystems can be interspersed.
>is
On Fri, Jun 19, 2009 at 04:09:29PM -0400, Miles Nordin wrote:
> Also, as I said elsewhere, there's a barrier controlled by Sun to
> getting bugs accepted. This is a useful barrier: the bug database is
> a more useful drive toward improvement if it's not cluttered. It also
> means, like I said, so
> "th" == Tim Haley writes:
th> The second is marked as a duplicate of 6784395, fixed in
th> snv_107, 20 weeks ago.
Yeah nice sleuthing. :/
I understood Bogdan's post was a trap: ``provide bug numbers. Oh,
they're fixed? nothing to see here then. no bugs? nothing to see
here the
> "fan" == Fajar A Nugraha writes:
> "et" == Erik Trimble writes:
fan> The N610N that I have (BCM3302, 300MHz, 64MB) isn't even
fan> powerful enough to saturate either the gigabit wired
I can't find that device. Did you misspell it or something? BCM
probably means Broadcom, and
Miles Nordin wrote:
"bmm" == Bogdan M Maryniuk writes:
bmm> OK, so what is the status of your bugreport about this?
That's a good question if it's meant genuinely, and not to be
obstructionist. It's hard to report one bug with clear information
because the problem isn't well-isolated yet.
> "bmm" == Bogdan M Maryniuk writes:
bmm> OK, so what is the status of your bugreport about this?
That's a good question if it's meant genuinely, and not to be
obstructionist. It's hard to report one bug with clear information
because the problem isn't well-isolated yet.
In my notes: 6
I've got a Thumper running snv_57 and a large ZFS pool. I recently noticed a
drive throwing some read errors, so I did the right thing and zfs replaced it
with a spare.
Everything went well, but the resilvering process seems to be taking an
eternity:
# zpool status
pool: bigpool
state: ONL
> "ic" == Ian Collins writes:
>> Access to the bug database is controlled.
ic> No, the bug database is open.
no, it isn't. Not all the bugs are visible, and after submitting a
bug it has to be approved. Neither is true of the mailing list.
Kees,
is it possible to get at least the contents of /export/home?
That is supposedly a separate file system. Is there a way to look for files
using some low-level disk reading tool? If you are old enough to remember the
80s, there was stuff like PCTools that could read anywhere on the disk. I
Generally, yes. Test it with your workload and see how it works out for you.
-Scott
Hi all.
Because the compression property can decrease file size, file I/O
will be reduced as well.
So, would compression increase ZFS I/O throughput?
For example:
I turned on gzip-9 on a server with 2 quad-core Xeons and 8 GB RAM.
It could compress my files with a compression ratio of 2.5x+
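A minimal sketch of that experiment (dataset name is a placeholder); note that compression only applies to blocks written after the property is set:
# zfs set compression=gzip-9 tank/data
# zfs get compression,compressratio tank/data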
I would think you would run into the same problem I have, where you can't
view child zvols from a parent zvol NFS share.
> From: Scott Meilicke
> Date: Fri, 19 Jun 2009 08:29:29 PDT
> To:
> Subject: Re: [zfs-discuss] ZFS, ESX ,and NFS. oh my!
>
> So how are folks getting around the NFS speed
So how are folks getting around the NFS speed hit? Using SSD or battery backed
RAM ZILs?
Regarding limited NFS mounts, underneath a single NFS mount, would it work to:
* Create a new VM
* Remove the VM from inventory
* Create a new ZFS file system underneath the original
* Copy the VM to that fi
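A hedged sketch of those steps (paths and names are placeholders); whether ESX can see the child filesystem through the original parent mount is exactly the open question here:
# zfs create tank/vmware/vm01
# zfs set sharenfs=on tank/vmware/vm01        (or let it inherit from the parent)
# cp -rp /tank/vmware/vm01-old/* /tank/vmware/vm01/
(then re-register the copied .vmx in the ESX inventory)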
Bill Sommerfeld wrote:
On Wed, 2009-06-17 at 12:35 +0200, casper@sun.com wrote:
> I still use "disk swap" because I have some bad experiences
> with ZFS swap. (ZFS appears to cache and that is very wrong)
I'm experimenting with running zfs swap with the primarycache attribute
set to "metadata" instead of the defau
Scott Meilicke wrote:
> Obviously iSCSI and NFS are quite different at the storage level, and I
> actually like NFS for the flexibility over iSCSI (quotas, reservations,
> etc.)
Another key difference between them is that with iSCSI, the VMFS filesystem
(built on the zvol presented as a block dev
On 18 June 2009 at 20:23, Richard Elling wrote:
Cor Beumer - Storage Solution Architect wrote:
Hi Jose,
Well, it depends on the total size of your zpool and how often these
files are changed.
...and the average size of the files. For small files, it is likely
that the default recordsize
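A hedged illustration of checking and tuning that property (dataset name and value are placeholders, not advice from this thread); recordsize only affects files written after the change:
# zfs get recordsize tank/smallfiles
# zfs set recordsize=8K tank/smallfiles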
Hi,
I'd like to understand a thing or two ... :)
I have a zpool on which I've created a zvol, then I've snapshotted the zvol and
I've created a clone out of that snapshot.
Now, what happens if I do a
zfs send mycl...@mysnap > myfile?
I mean, is this stream enough to recover the clone (does i
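A hedged sketch of the two stream types involved, with invented names standing in for the obscured ones (origin snapshot tank/myvol@base, clone snapshot tank/myclone@mysnap):
# zfs send tank/myclone@mysnap > myfile
(a full stream: receivable on its own, but it restores as an independent
dataset rather than as a clone of the origin)
# zfs send -i tank/myvol@base tank/myclone@mysnap > myfile.incr
(an incremental from the origin: much smaller, but the receiving pool must
already have tank/myvol@base)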
Richard Elling writes:
> George would probably have the latest info, but there were a number of
> things which circled around the notorious "Stop looking and start ganging"
> bug report,
> http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6596237
Indeed: we were seriously bitten by this
On Thu, Jun 18, 2009 at 8:01 AM, Cesar Augusto Suarez wrote:
> I have Ubuntu Jaunty already installed on my PC; on the second HD, I've
> installed OS2009.
> Now I can't share info between these 2 OSes.
> I downloaded and installed ZFS-FUSE on Jaunty, but the version is 6, while in
> OS2009 the ZFS version
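One hedged workaround, assuming the zfs-fuse build really is limited to pool version 6 (device and pool names are placeholders): create the shared pool at the older on-disk version from OpenSolaris, and export/import it when switching OSes:
# zpool create -o version=6 shared c1d1
# zpool export shared                (before rebooting into Ubuntu)
(under zfs-fuse) # zpool import shared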