Well, I checked and it is 8k
volblocksize 8K
Any other suggestions on how to begin debugging such an issue?
On Mon, Dec 15, 2008 at 2:44 AM, Bob Friesenhahn <
bfrie...@simple.dallas.tx.us> wrote:
> On Mon, 15 Dec 2008, Ahmed Kamal wrote:
>
>>
>> RandomWrite-8k: 0.9M/s
>> SingleStreamWriteD
I have moved the zpool image file to an OpenSolaris machine running 101b.
r...@opensolaris:~# uname -a
SunOS opensolaris 5.11 snv_101b i86pc i386 i86pc Solaris
Here I am able to attempt an import of the pool and at least the OS does not
panic.
r...@opensolaris:~# zpool import -d /mnt
pool: zo
I am currently developing an agent that will monitor Solaris machines. We are
using WBEM services to access the system information.
When I query for the Solaris_LocalFileSystem class, the only file systems it
displays are UFS and HSFS. (I am using cimworkshop to look at this information.)
I need to know if th
I don't know if this is relevant or merely a coincidence but the zdb command
fails an assertion in the same txg_wait_synced function.
r...@opensolaris:~# zdb -p /mnt -e zones
Assertion failed: tx->tx_threads == 2, file ../../../uts/common/fs/zfs/txg.c,
line 423, function txg_wait_synced
Abort (
Forgive me for not understanding the details, but couldn't you also
work backwards through the blocks with ZFS and attempt to recreate the
uberblock?
So if you lost the uberblock, could you (memory and time allowing)
start scanning the disk, looking for orphan blocks that aren't
referenced anywhere
On Mon, 15 Dec 2008 06:12:19 PST, Nathan Hand wrote:
> I have moved the zpool image file to an
> OpenSolaris machine running 101b.
>
> r...@opensolaris:~# uname -a
> SunOS opensolaris 5.11 snv_101b i86pc i386 i86pc Solaris
>
> Here I am able to attempt an import of the pool and at
> least the OS
Hi All,
Is there a way to get event notification for zfs filesystems and
snapshot creation/deletion?
I looked at HAL and event ports but couldn't find anything.
Does such a feature exist for zfs?
Thanks in advance,
Erwann
--
Erwann Ché
>I think the problem for me is not that there's a risk of data loss if
>a pool becomes corrupt, but that there are no recovery tools
>available. With UFS, people expect that if the worst happens, fsck
>will be able to recover their data in most cases.
Except, of course, that fsck lies. In "fixe
Hi all,
A while back, I posted here about the issues ZFS has with USB hotplugging
of ZFS formatted media when we were trying to plan an external media backup
solution for time-slider:
http://www.opensolaris.org/jive/thread.jspa?messageID=299501
As well as the USB issues in the subject we became
Does anyone know of a way to specify the creation of ZFS file systems for a ZFS
root pool during a JumpStart installation? For example, creating the following
during the install:
Filesystem Mountpoint
rpool/var /var
rpool/var
On Sat, 2008-12-13 at 12:18 -0500, Sebastien Roy wrote:
> I sent the following to indiana-disc...@opensolaris.org, but perhaps
> someone here can get to the bottom of this. Why must zfs trash my
> system so often with this hostid nonsense? How do I recover from this
> situation? (I have no OpenS
I think the problem for me is not that there's a risk of data loss if a pool
becomes corrupt, but that there are no recovery tools available. With UFS,
people expect that if the worst happens, fsck will be able to recover their
data in most cases.
With ZFS you have no such tools, yet Victor ha
Hello all.
I'm doing a course project to evaluate recovery time of RAID-Z.
One of my tests is to examine the impact of aging on recovery speed.
I've used PostMark to stress the file-system but I didn't observe any
noticeable slowdown.
Is there a better way to "age" a ZFS file-system?
Does ZFS
On Mon, Dec 15, 2008 at 6:06 PM, Brad Hudson wrote:
> Does anyone know of a way to specify the creation of ZFS file systems for a
> ZFS root pool during a JumpStart installation? For example, creating the
> following during the install:
>
> Filesystem Mountpoin
On Mon, 15 Dec 2008, Ross wrote:
> My concern is that ZFS has all this information on disk, it has the
> ability to know exactly what is and isn't corrupted, and it should
> (at least for a system with snapshots) have many, many potential
> uberblocks to try. It should be far, far better than
Just put commands to create them in the finish script.
I create several and set options on them like so:
### Create ZFS additional filesystems
echo "setting up additional filesystems."
zfs create -o compression=on -o mountpoint=/ucd rpool/ucd
zfs create -o compression=on -o mountpoint=/local/d01
I'm not sure I follow how that can happen, I thought ZFS writes were
designed to be atomic? They either commit properly on disk or they
don't?
On Mon, Dec 15, 2008 at 6:34 PM, Bob Friesenhahn
wrote:
> On Mon, 15 Dec 2008, Ross wrote:
>
>> My concern is that ZFS has all this information on disk,
On Mon, 15 Dec 2008, Ross Smith wrote:
> I'm not sure I follow how that can happen, I thought ZFS writes were
> designed to be atomic? They either commit properly on disk or they
> don't?
Yes, this is true. One reason why people complain about corrupted ZFS
pools is because they have hardware
On Mon, Dec 15, 2008 at 01:36:46PM -0600, Bob Friesenhahn wrote:
> On Mon, 15 Dec 2008, Ross Smith wrote:
>
> > I'm not sure I follow how that can happen, I thought ZFS writes were
> > designed to be atomic? They either commit properly on disk or they
> > don't?
>
> Yes, this is true. One reaso
Thanks for the reply. I tried the following:
$ zpool import -o failmode=continue -d /mnt -f zones
But the situation did not improve. It still hangs on the import.
Thanks for the response Peter. However, I'm not looking to create a different
boot environment (bootenv). I'm actually looking for a way within JumpStart to
separate out the ZFS filesystems from a new installation to have better control
over quotas and reservations for applications that usuall
Over the weekend, I installed opensolaris 2008.11 onto a removable harddisk on
my laptop (having previously removed the zfs bootable snv_103 imaged
harddrive). - side note: opensolaris could do with a companion cd as it takes a
long long time to install the useful/essential software from the re
These issues are discussed on the install-discuss forum. You'll have
better luck getting to the right audience there.
http://www.opensolaris.org/jive/forum.jspa?forumID=107
Also see the various design docs in the install community.
http://www.opensolaris.org/os/community/install
-- richard
Brad
> "nw" == Nicolas Williams writes:
nw> Your thesis is that all corruption problems observed with ZFS
nw> on SANs are: a) phantom writes that never reached the rotating
nw> rust, b) not bit rot, corruption in the I/O paths, ...
nw> Correct?
yeah.
by ``all'' I mean the sever
richard.ell...@sun.com said:
> L2ARC arrived in NV at the same time as ZFS boot, b79, November 2007. It was
> not back-ported to Solaris 10u6.
You sure? Here's output on a Solaris-10u6 machine:
cyclops 4959# uname -a
SunOS cyclops 5.10 Generic_137138-09 i86pc i386 i86pc
cyclops 4960# zpool upgra
> A while back, I posted here about the issues ZFS has with USB hotplugging
> of ZFS formatted media when we were trying to plan an external media backup
> solution for time-slider:
> http://www.opensolaris.org/jive/thread.jspa?messageID=299501
[...]
> There are a few minor issues however which I
> "bc" == Bryan Cantrill writes:
> "jz" == Joseph Zhou writes:
bc> most of the people I talk to are actually _using_ NetApp's
bc> technology, a practice that tends to leave even the most
bc> stalwart proponents realistic about the (many) limitations of
bc> NetApp's
same
Brad Hudson wrote:
> Thanks for the response Peter. However, I'm not looking to create a
> different boot environment (bootenv). I'm actually looking for a way within
> JumpStart to separate out the ZFS filesystems from a new installation to have
> better control over quotas and reservations f
I've had some success.
I started with the ZFS on-disk format PDF.
http://opensolaris.org/os/community/zfs/docs/ondiskformat0822.pdf
The uberblocks all have magic value 0x00bab10c. I used od -x to find that value
in the vdev.
r...@opensolaris:~# od -A x -x /mnt/zpool.zones | grep "b10c 00ba"
0200
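(Not from the original post, but for anyone repeating this search: a small
Python sketch that scans a pool image for the little-endian ub_magic value and
prints every offset it finds. The image path matches the od command above;
everything else here is an assumption, not the poster's procedure.)

import struct

MAGIC = 0x00bab10c            # uberblock magic from the on-disk format doc
IMAGE = "/mnt/zpool.zones"    # same image used with od above

with open(IMAGE, "rb") as f:
    data = f.read()

# ub_magic is a 64-bit field; on an x86 (little-endian) pool it appears on
# disk as 0c b1 ba 00 00 00 00 00.
needle = struct.pack("<Q", MAGIC)
count = 0
offset = data.find(needle)
while offset != -1:
    print("uberblock magic at offset 0x%x" % offset)
    count += 1
    offset = data.find(needle, offset + 1)
print("%d candidate uberblocks found" % count)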
On Mon, Dec 15, 2008 at 05:04:03PM -0500, Miles Nordin wrote:
> As Tim said, the one-filesystem-per-user thing is not working out.
For NFSv3 clients that truncate MOUNT protocol answers (and v4 clients
that still rely on the MOUNT protocol), yes, one-filesystem-per-user is
a problem. For NFSv4 cl
On Mon, 15 Dec 2008 14:23:37 PST, Nathan Hand wrote:
[snip]
> Initial inspection of the filesystems are promising.
> I can read from files, there are no panics,
> everything seems to be intact.
Good work, congratulations, and thanks for the clear
description of the process. I hope I never need
>
> Maybe the format allows unlimited O(1) snapshots, but it's at best
> O(1) to take them. All over the place it's probably O(n) or worse to
> _have_ them: to boot with them, to scrub with them.
Why would a scrub be O(n snapshots)?
The O(n filesystems) effects reported from time to time in
O
I emailed one of the more popular low-cost PCI card vendors and asked them
about [maybe 4/8 port???], PCIe x4 [or more] cards in their product roadmap.
They replied positively that they were working on a PCIe x4 card, with both
internal and eSATA options. They said it's cooking in the lab, and
Marion Hakanson wrote:
> richard.ell...@sun.com said:
>
>> L2ARC arrived in NV at the same time as ZFS boot, b79, November 2007. It was
>> not back-ported to Solaris 10u6.
>>
>
> You sure? Here's output on a Solaris-10u6 machine:
>
Yes, I'm sure. It was pulled late :-(. We've documen
Typically you want to do something like this:
(a) Write 1,000,000 files of varying length.
(b) Randomly select and remove 500,000 of those files.
Repeat (a) creating files and (b) randomly removing files until your file
system is full enough for your test, or you run out of time.
That's a pretty
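(Not from the original reply, but a rough Python sketch of that aging loop.
The target directory, files per pass, size range, and fullness threshold are
all placeholders to tune for your pool and your timebox.)

import os
import random

TARGET = "/testpool/aging"       # placeholder mountpoint of the fs to age
FILES_PER_PASS = 100000          # scale up toward 1,000,000 for a longer run
MAX_SIZE = 128 * 1024            # files vary from 1 byte to 128 KB
FULL_ENOUGH = 0.80               # stop when the fs is ~80% full

def fs_used_fraction(path):
    st = os.statvfs(path)
    return 1.0 - float(st.f_bavail) / st.f_blocks

os.makedirs(TARGET, exist_ok=True)
serial = 0
while fs_used_fraction(TARGET) < FULL_ENOUGH:
    created = []
    # (a) create files of varying length
    for _ in range(FILES_PER_PASS):
        name = os.path.join(TARGET, "f%010d" % serial)
        serial += 1
        with open(name, "wb") as f:
            f.write(os.urandom(random.randint(1, MAX_SIZE)))
        created.append(name)
    # (b) randomly remove about half of the files just created
    for name in random.sample(created, len(created) // 2):
        os.remove(name)

Running it repeatedly (or raising FILES_PER_PASS) gets closer to the
million-file scale suggested above.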
So - will it be arriving in a patch? :)
Nathan.
Richard Elling wrote:
> Marion Hakanson wrote:
>> richard.ell...@sun.com said:
>>
>>> L2ARC arrived in NV at the same time as ZFS boot, b79, November 2007. It was
>>> not back-ported to Solaris 10u6.
>>>
>> You sure? Here's output on a So
>
> The following versions are supported:
>
> VER DESCRIPTION
> ---
> 1 Initial ZFS version
> 2 Ditto blocks (replicated metadata)
> 3 Hot spares and double parity RAID-Z
> 4 zpool history
> 5 Compression using the gzip algor
On Tue, 16 Dec 2008 16:42:02 +1100
Nathan Kroenert wrote:
>
>
> So - will it be arriving in a patch? :)
no - we need a hook to get customers to use whatever
we package NV up as. Or buy fishworks kit :-)
James
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/j
On Tue, 16 Dec 2008 16:19:05 +1000
"James C. McPherson" wrote:
> On Tue, 16 Dec 2008 16:42:02 +1100
> Nathan Kroenert wrote:
>
> >
> >
> > So - will it be arriving in a patch? :)
>
> no - we need a hook to get customers to use whatever
> we package NV up as. Or buy fishworks kit :-)
Ahem...