> It would be nice if the 32bit osol kernel supported
> 48bit LBA
It is already supported, and has been for many years (otherwise
disks with a capacity >= 128GB could not be
used with Solaris) ...
> (similar to linux, not sure if 32bit BSD
> supports 48bit LBA ), then the drive would probably
> work - perhaps late
Why does zfs produce a batch of writes every 30 seconds on opensolaris b134
(5 seconds on a post b142 kernel), when the system is idle?
On an idle OpenSolaris 2009.06 (b111) system, /usr/demo/dtrace/iosnoop.d
shows no i/o activity for at least 15 minutes.
The same dtrace test on an idle b134 sys
> Why does zfs produce a batch of writes every 30 seconds on opensolaris b134
> (5 seconds on a post b142 kernel), when the system is idle?
It was caused by b134 gnome-terminal. I had an iostat
running in a gnome-terminal window, and the periodic
iostat output is written to a temporary file by gno
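To see which process generates those periodic writes, a DTrace one-liner along these lines aggregates I/O starts per process (a minimal sketch; the 30-second tick simply mirrors the observed interval):

# dtrace -n 'io:::start { @[execname, args[1]->dev_statname] = count(); }
  tick-30sec { printa("%-16s %-12s %@8d\n", @); trunc(@); }'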
> I have a functional OpenSolaris x64 system on which I need to physically
> move the boot disk, meaning its physical device path will change and
> probably its cXdX name.
>
> When I do this the system fails to boot
...
> How do I inform ZFS of the new path?
...
> Do I need to boot from the Li
> So.. it seems that data is deduplicated, zpool has
> 54.1G of free space, but I can use only 40M.
>
> It's x86, ONNV revision 10924, debug build, bfu'ed from b125.
I think I'm observing the same (with changeset 10936) ...
I created a 2GB file, and a "tank" zpool on top of that file,
with compr
> I think I'm observing the same (with changeset 10936) ...
# mkfile 2g /var/tmp/tank.img
# zpool create tank /var/tmp/tank.img
# zfs set dedup=on tank
# zfs create tank/foobar
> dd if=/dev/urandom of=/tank/foobar/file1 bs=1024k count=512
512+0 records in
512+0 records out
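For what it's worth, how much space dedup actually saved can be checked afterwards roughly like this (pool name from the example above; the exact zdb output format varies between builds):

# zpool get dedupratio tank
# zdb -DD tank        # dedup table (DDT) statistics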
> But: Isn't there an implicit expectation for a space guarantee associated
> with a
> dataset? In other words, if a dataset has 1GB of data, isn't it natural to
> expect to be able to overwrite that space with other
> data?
Is there such a space guarantee for compressed or cloned zfs?
> Well, then you could have more "logical space" than
> "physical space", and that would be extremely cool,
I think we already have that, with zfs clones.
I often clone a zfs onnv workspace, and everything
is "deduped" between zfs parent snapshot and clone
filesystem. The clone (initially) needs
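For example (dataset names are placeholders), a fresh clone shares all of its blocks with the parent snapshot, so the additional space used starts out near zero:

# zfs snapshot files/onnv@base
# zfs clone files/onnv@base files/onnv-clone
# zfs list -o name,used,referenced files/onnv files/onnv-clone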
> > I wasn't clear in my description; I'm referring to ext4 on Linux. In
> > fact, on a system with low RAM, even the dd command makes the system
> > horribly unresponsive.
> >
> > IMHO not having fairshare or timeslicing between different processes
> > issuing reads is frankly unacceptable given a
> I just installed opensolaris build 130 which I
> downloaded from genunix. The install went
> fine and the first reboot after install seemed to
> work, but when I powered down and rebooted fully, it
> locks up as soon as I log in.
Hmm, seems you're asking in the wrong forum.
Sounds more like
> > in the build 130 announcement you can find this:
> > 13540 Xserver crashes and freezes a system installed with LiveCD on bld 130
>
> It is for sure this bug. This is OK, I
> can do most of what I need via ssh. I just
> wasn't sure if it was a bug or if I had done
> something wrong. I had tri
> I have a USB flash drive which boots up my
> opensolaris install. What happens is that whenever I
> move to a different machine,
> the root pool is lost because the devids don't match
> with what's in /etc/zfs/zpool.cache and the system
> just can't find the rpool.
See defect 4755 or defect 5484
> - Original Message -
...
> > r...@tos-backup:~# format
> > Searching for disks...Arithmetic Exception (core dumped)
> This error also seems to occur on osol 134. Any idea
> what this might be?
What stack backtrace is reported for that core dump ("pstack core") ?
> r...@tos-backup:~# pstack /dev/rdsk/core
> core '/dev/rdsk/core' of 1217: format
> fee62e4a UDiv (4, 0, 8046c80, 80469a0, 8046a30, 8046a50) + 2a
> 08079799 auto_sense (4, 0, 8046c80, 0) + 281
> ...
Seems that one function call is missing in the back trace
between auto_sense and UDiv, because U
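A backtrace that walks the frame pointers might show the missing frame; something like this could work (assuming the format binary is /usr/sbin/format, and using the core file path from the post):

# mdb /usr/sbin/format /dev/rdsk/core
> $C          (prints the stack with frame pointers)
> ::quit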
> I ran a scrub on a root pool after upgrading to snv_94, and got checksum
> errors:
Hmm, after reading this, I started a zpool scrub on my mirrored pool,
on a system that is running post snv_94 bits: It also found checksum errors
# zpool status files
  pool: files
 state: DEGRADED
status: One
> > I ran a scrub on a root pool after upgrading to snv_94, and got checksum
> > errors:
>
> Hmm, after reading this, I started a zpool scrub on my mirrored pool,
> on a system that is running post snv_94 bits: It also found checksum errors
...
> OTOH, trying to verify checksums with zdb -c did
Miles Nordin wrote:
> "jk" == Jürgen Keil <[EMAIL PROTECTED]> writes:
> jk> And a zpool scrub under snv_85 doesn't find checksum errors, either.
> how about a second scrub with snv_94? are the checksum errors gone
> the second time around?
Nope.
I
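A second pass is simply another scrub of the same pool followed by a status check (pool name from the output above):

# zpool scrub files
# zpool status -v files      # check the CKSUM column once the scrub completes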
Bill Sommerfeld wrote:
> On Fri, 2008-07-18 at 10:28 -0700, Jürgen Keil wrote:
> > > I ran a scrub on a root pool after upgrading to snv_94, and got checksum
> > > errors:
> >
> > Hmm, after reading this, I started a zpool scrub on my mirrored pool,
> >
Rustam wrote:
> I'm living with this error for almost 4 months and probably have record
> number of checksum errors:
> # zpool status -xv
> pool: box5
...
> errors: Permanent errors have been detected in the
> following files:
>
> box5:<0x0>
>
> I've Sol 10 U5 though.
I suspect that
Bill Sommerfeld wrote:
> On Fri, 2008-07-18 at 10:28 -0700, Jürgen Keil wrote:
> > > I ran a scrub on a root pool after upgrading to snv_94, and got checksum
> > > errors:
> >
> > Hmm, after reading this, I started a zpool scrub on my mirrored pool,
> >
> Recently, I needed to move the boot disks containing a ZFS root pool in an
> Ultra 1/170E running snv_93 to a different system (same hardware) because
> the original system was broken/unreliable.
>
> To my dismay, unlike with UFS, the new machine wouldn't boot:
>
> WARNING: pool 'root' could no
I wrote:
> Bill Sommerfeld wrote:
> > On Fri, 2008-07-18 at 10:28 -0700, Jürgen Keil wrote:
> > > > I ran a scrub on a root pool after upgrading to snv_94, and got
> > > > checksum errors:
> > >
> > > Hmm, after reading this, I started a zpool
> I have OpenSolaris (snv_95) installed on my laptop (single sata disk)
> and today I updated my pool with:
>
> # zpool upgrade -V 11 -a
>
> and after I start a scrub into the pool with:
>
> # zpool scrub rpool
>
> # zpool status -vx
>
> NAME        STATE     READ WRITE CKSUM
> rpool
> On 08/21/08 17:26, Jürgen Keil wrote:
> > Looks like bug 6727872, which is fixed in build 96.
> > http://bugs.opensolaris.org/view_bug.do?bug_id=6727872
>
> that pool contains normal OpenSolaris mountpoints,
Did you upgrade the opensolaris installation in the past?
W. Wayne Liauh wrote:
> If you are running B95, that "may" be the problem. I
> have no problem booting B93 (& previous builds) from
> a USB stick, but B95, which has a newer version of
> ZFS, does not allow me to boot from it (& the USB
> stick was of course recognized during installation of
> B9
> What Windows utility are you talking about? I have
> used the Sandisk utility program to remove the U3
> Launchpad (which creates a permanent hsfs partition
> in the flash disk), but it does not help the problem.
That's the problem; most USB sticks don't require any
special software and just work
> The lock I observed happened inside the BIOS of the card, after the main board
> BIOS jumped into the card's BIOS. This was before any bootloader had been
> involved.
Is there a disk using a zpool with an EFI disk label? Here's a link to an old
thread about systems hanging in BIOS POST when the
> Cannot mount root on /[EMAIL PROTECTED],0/pci103c,[EMAIL PROTECTED],2/[EMAIL
> PROTECTED],0:a fstype zfs
Is that physical device path correct for your new system?
Or is this the physical device path (stored on-disk in the zpool label)
from some other system? In this case you may be able to
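In that case a forced import from the new system usually rewrites the paths stored in the labels; a minimal sketch (the pool name is an assumption):

# zpool import            # scan attached devices and list importable pools
# zpool import -f rpool   # force-import; the vdev labels pick up the new device paths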
> Again, what I'm trying to do is to boot the same OS from a physical
> drive - once natively on my notebook, the other time from within
> VirtualBox. There are two problems, at least. The first is the bootpath:
> VirtualBox emulates the disk as IDE, while booting natively it is SATA.
When I started exp
> bash-3.00# zfs mount usbhdd1
> cannot mount 'usbhdd1': E/A-Fehler   (German locale message for "I/O error")
> bash-3.00#
Why is there an I/O error?
Is there any information logged to /var/adm/messages when this
I/O error is reported? E.g. timeout errors for the USB storage device?
> The problem was with the shell. For whatever reason,
> /usr/bin/ksh can't rejoin the files correctly. When
> I switched to /sbin/sh, the rejoin worked fine, the
> cksum's matched, ...
>
> The ksh I was using is:
>
> # what /usr/bin/ksh
> /usr/bin/ksh:
> Version M-11/16/88i
> SunOS 5.1
> besides performance aspects, what are the cons of
> running zfs on 32 bit?
The default 32 bit kernel can cache a limited amount of data
(< 512MB) - unless you lower the "kernelbase" parameter.
In the end the small cache size on 32 bit explains the inferior
performance compared to the 64 bit kern
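Lowering kernelbase is done through the x86 boot properties; a hedged example (the value is only an illustration, and lowering it reduces the address space available to 32-bit user processes):

# eeprom kernelbase=0x80000000     # takes effect after the next reboot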
> Not a ZFS bug. IIRC, the story goes something like this: a SMI
> label only works to 1 TByte, so to use > 1 TByte, you need an
> EFI label. For older x86 systems -- those which are 32-bit -- you
> probably have a BIOS which does not handle EFI labels. This
> will become increasingly irritatin
> I had a system with its boot drive
> attached to a backplane which worked fine. I tried
> moving that drive to the onboard controller and a few
> seconds into booting it would just reboot.
In certain cases zfs is able to find the drive on the
new physical device path (IIRC: the disk's "devid"
> > 32 bit Solaris can use at most 2^31 as disk address; a disk block is
> > 512bytes, so in total it can address 2^40 bytes.
> >
> > A SMI label found in Solaris 10 (update 8?) and OpenSolaris has been
> > enhanced
> > and can address 2TB but only on a 64 bit system.
>
> is what the problem is.
> The GRUB menu is presented, no problem there, and
> then the opensolaris progress bar. But im unable to
> find a way to view any details on whats happening
> there. The progress bar just keep scrolling and
> scrolling.
Press the ESC key; this should switch back from
graphics to text mode and mos
> I've found it only works for USB sticks up to 4GB :(
> If I tried a USB stick bigger than that, it didn't boot.
Works for me on 8GB USB sticks.
It is possible that the stick you've tried has some
issues with the Solaris USB drivers, and needs to
have one of the workarounds from the
scsa2usb.con
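The workarounds are enabled in /kernel/drv/scsa2usb.conf; a hedged example of the override line (the vid/pid match and the attribute to override depend on the stick, see the comments in that file):

attribute-override-list = "vid=* reduced-cmd-support=true";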
> Well, here is the error:
>
> ... usb stick reports(?) scsi error: medium may have changed ...
That's strange. The media in a flash memory
stick can't be changed - although most sticks
report that they do have removable media.
Maybe this stick needs one of the workarounds
that can be enabled i
> How can i implement that change, after installing the
> OS? Or do I need to build my own livecd?
Boot from the livecd, attach the usb stick,
open a terminal window, "pfexec bash" starts
a root shell, "zpool import -f rpool" should
find and import the zpool from the usb stick.
Mount the root fi
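Roughly, the whole sequence from the live CD looks like this (the root dataset name rpool/ROOT/opensolaris is an assumption; -R keeps the import from shadowing the live CD's own filesystems):

$ pfexec bash
# zpool import -f -R /mnt rpool
# zfs list -r rpool                  # find the root dataset
# zfs mount rpool/ROOT/opensolaris   # it shows up under /mnt for editing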
> Nah, that didn't seem to do the trick.
>
> After unmounting
> and rebooting, i get the same error msg from my
> previous post.
Did you get these scsi error messages during installation
to the usb stick, too?
Another thing that confuses me: the unit attention /
medium may have changed message
> > Are there any message with "Error level: fatal" ?
>
> Not that I know of, however, i can check. But im
> unable to find out what to change in grub to get
> verbose output rather than just the splashimage.
Edit the grub commands, delete all splashimage,
foreground and background lines, and d
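A verbose menu entry would then look roughly like this (standard OpenSolaris menu.lst layout; the findroot argument is an assumption and must match your pool):

title OpenSolaris (verbose boot)
findroot (pool_rpool,0,a)
kernel$ /platform/i86pc/kernel/$ISADIR/unix -v -B $ZFS-BOOTFS
module$ /platform/i86pc/$ISADIR/boot_archive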
> No there was no error level fatal.
>
> Well, here is what I have tried since:
>
> a) I've tried to install a custom grub as described here:
> http://defect.opensolaris.org/bz/show_bug.cgi?id=4755#c28
> With that in place, I just get the grub prompt. I've
> tried to zpool import -f rpool when
> Does this give you anything?
>
> [url=http://bildr.no/view/460193][img]http://bildr.no/thumb/460193.jpeg[/img][/url]
That looks like the zfs mountroot panic you
get when the root disk was moved to a different
physical location (e.g. different usb port).
In this case the physical device path rec
I have my /usr filesystem configured as a zfs filesystem,
using a legacy mountpoint. I noticed that the system boots
with atime updates temporarily turned off (and doesn't record
file accesses in the /usr filesystem):
# df -h /usr
Filesystem size used avail capacity Mounted on
fil
> I still haven't got any "warm and fuzzy" responses
> yet solidifying ZFS in combination with Firewire or USB enclosures.
I was unable to use zfs (that is "zpool create" or "mkfs -F ufs") on
firewire devices, because scsa1394 would hang the system as
soon as multiple concurrent write commands are
> We are running Solaris 10 11/06 on a Sun V240 with 2
> CPUS and 8 GB of memory. This V240 is attached to a
> 3510 FC that has 12 x 300 GB disks. The 3510 is
> configured as HW RAID 5 with 10 disks and 2 spares
> and it's exported to the V240 as a single LUN.
>
> We create iso images of our produ
> > That's probably bug 6382683 "lofi is confused about sync/async I/O",
> > and AFAIK it's fixed in current opensolaris releases.
> >
> According to Bug Database bug 6382683 is in
> 1-Dispatched state, what does that mean? I wonder if
> the fix is available (or will be available) as a
> Solaris 1
> I just had a quick play with gzip compression on a filesystem and the
> result was the machine grinding to a halt while copying some large
> (.wav) files to it from another filesystem in the same pool.
>
> The system became very unresponsive, taking several seconds to echo
> keystrokes. The box
> The reason you are busy computing SHA1 hashes is you are using
> /dev/urandom. The implementation of drv/random uses
> SHA1 for mixing,
> actually strictly speaking it is the swrand provider that does that part.
Ahh, ok.
So, instead of using dd reading from /dev/urandom all the time,
I've no
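The idea is something along these lines: generate the random data once, then copy that pre-generated file for each write test, so swrand's SHA1 mixing no longer dominates the profile (paths are only examples):

# dd if=/dev/urandom of=/var/tmp/random.dat bs=1024k count=512
# ptime cp /var/tmp/random.dat /tank/gzipped/file1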
> I'm not quite sure what this test should show ?
For me, the test shows how writing to a gzip-compressed
pool completely kills interactive desktop performance,
at least when using a USB keyboard and mouse.
(I've not yet tested with a PS/2 keyboard & mouse, or
a SPARC box.)
> Compressing random
> A couple more questions here.
...
> What do you have zfs compression set to? The gzip level is
> tunable, according to zfs set, anyway:
>
> PROPERTY       EDIT  INHERIT   VALUES
> compression    YES   YES       on | off | lzjb | gzip | gzip-[1-9]
I've used the "default" gzip compression level
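For reference, the level is selected per dataset, e.g. (dataset name is a placeholder):

# zfs set compression=gzip tank/fs      # default level, equivalent to gzip-6
# zfs set compression=gzip-1 tank/fs    # faster, lighter compression
# zfs get compression,compressratio tank/fs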
Roch Bourbonnais wrote
> with recent bits ZFS compression is now handled concurrently, with
> many CPUs working on different records.
> So this load will burn more CPUs and achieve its results
> (compression) faster.
Is this done using the taskq's, created in spa_activate()?
http://src.opens
> A couple more questions here.
...
> You still have idle time in this lockstat (and mpstat).
>
> What do you get for a lockstat -A -D 20 sleep 30?
>
> Do you see anyone with long lock hold times, long
> sleeps, or excessive spinning?
Hmm, I ran a series of "lockstat -A -l ph_mutex -s 16 -D 20 s
> A couple more questions here.
>
> [mpstat]
>
> > CPU minf mjf xcal  intr ithr  csw icsw migr smtx  srw syscl  usr sys  wt idl
> >   0    0   0 3109  3616  316  196    5   17   48   45   245    0  85   0  15
> >   1    0   0 3127  3797  592  217    4   17   63   46   176    0  84   0  15
> > CPU minf mjf xcal  intr ithr  csw icsw migr smtx  srw syscl  usr sys
> with recent bits ZFS compression is now handled concurrently, with
> many CPUs working on different records.
> So this load will burn more CPUs and achieve its results
> (compression) faster.
>
> So the observed pauses should be consistent with that of a load
> generating high system time.
up to its name
> >
> > Which was surprised to find was fixed by Eric in build 59.
> >
>
> It was pointed out by Jürgen Keil that using ZFS compression
> submits a lot of prio 60 tasks to the system task queues;
> this would clobber interactive performance.
Actually
> Would you mind also doing:
>
> ptime dd if=/dev/dsk/c2t1d0 of=/dev/null bs=128k count=1
>
> to see the raw performance of underlying hardware.
This dd command is reading from the block device,
which might cache data and probably splits requests
into "maxphys" pieces (which happens to be 56K o
> > Or if you do want to use bfu becaus
Has anyone else noticed a significant zfs performance deterioration
when running recent opensolaris bits?
My 32-bit / 768 MB Toshiba Tecra S1 notebook was able to do a
full opensolaris release build in ~ 4 hours 45 minutes (gcc shadow
compilation disabled; using an lzjb compressed zpool / zfs on
I wrote
> Has anyone else noticed a significant zfs performance
> deterioration when running recent opensolaris bits?
>
> My 32-bit / 768 MB Toshiba Tecra S1 notebook was able
> to do a full opensolaris release build in ~ 4 hours 45
> minutes (gcc shadow compilation disabled; using an lzjb
> comp
> > Patching zfs_prefetch_disable = 1 has helped
> It's my belief this mainly aids scanning metadata. my
> testing with rsync and yours with find (and seen with
> du & ; zpool iostat -v 1 ) pans this out..
> mainly tracked in bug 6437054 vdev_cache: wise up or die
> http://www.opensolaris.org/jive/
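For anyone who wants to try the same tuning, zfs_prefetch_disable can be flipped on the running kernel or set persistently (a hedged sketch; the mdb write takes effect immediately, the /etc/system entry at the next boot):

# echo 'zfs_prefetch_disable/W 1' | mdb -kw
# echo 'set zfs:zfs_prefetch_disable = 1' >> /etc/system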
I wrote
> Instead of compiling opensolaris for 4-6 hours, I've now used
> the following find / grep test using on-2007-05-30 sources:
>
> 1st test using Nevada build 60:
>
> % cd /files/onnv-2007-05-30
> % repeat 10 /bin/time find usr/src/ -name "*.[hc]" -exec grep FooBar {} +
This find + grep
> Hello Jürgen,
>
> Monday, June 4, 2007, 7:09:59 PM, you wrote:
>
> >> > Patching zfs_prefetch_disable = 1 has helped
> >> It's my belief this mainly aids scanning metadata. my
> >> testing with rsync and yours with find (and seen with
> >> du & ; zpool iostat -v 1 ) pans this out..
> >> mainly
> You are right... I shouldn't post in the middle of
> the night... nForce chipsets don't support AHCI.
Btw. does anybody have a status update for bug 6296435,
"native sata driver needed for nVIDIA mcp04 and mcp55 controllers"
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6296435
?
C
> i think i have read somewhere that zfs gzip
> compression doesn't scale well, since the in-kernel
> compression isn't done multi-threaded.
>
> is this true - and if so - will this be fixed?
If you're writing lots of data, zfs gzip compression
might not be a good idea for a desktop machine, bec
> I used a zpool on a usb key today to get some core files off a non-networked
> Thumper running S10U4 beta.
>
> Plugging the stick into my SXCE b61 x86 machine worked fine; I just had to
> 'zpool import sticky' and it worked ok.
>
> But when we attach the drive to a blade 100 (running s10u3), it
> Shouldn't S10u3 just see the newer on-disk format and
> report that fact, rather than complain it is corrupt?
Yep, I just tried it, and it refuses to "zpool import" the newer pool,
telling me about the incompatible version. So I guess the pool
format isn't the correct explanation for the Dick D
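The versions supported on each side can be compared like this (output differs between releases):

# zpool upgrade -v     # list every on-disk version this zfs implementation supports
# zpool upgrade        # show imported pools that are below the current version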
Yesterday I was surprised because an old snv_66 kernel
(installed as a new zfs rootfs) refused to mount.
The error message was:
Mismatched versions: File system is version 2 on-disk format,
which is incompatible with this software version 1!
I tried to prepare that snv_66 rootfs when running
> I think I have ran into this bug, 6560174, with a firewire drive.
And 6560174 might be a duplicate of 6445725
> > And 6560174 might be a duplicate of 6445725
>
> I see what you mean. Unfortunately there does not
> look to be a work-around.
Nope, no work-around. This is a scsa1394 bug; it
has some issues when it is used from interrupt context.
I have some source code diffs, that are supposed to
fix the
> > Nope, no work-around.
>
> OK. Then I have 3 questions:
>
> 1) How do I destroy the pool that was on the firewire
> drive? (So that zfs stops complaining about it)
Even if the drive is disconnected, it should be possible
to "zpool export" it, so that the OS forgets about it
and doesn't try
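So the cleanup would be roughly the following (pool name is a placeholder for whatever was on the firewire drive):

# zpool export mypool      # forget the pool even though its device is gone
# zpool status             # it should no longer be listed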
> > 3) Can your code diffs be integrated into the OS on my end to use this
> > drive, and if so, how?
>
> I believe the bug is still being worked on, right Jürgen ?
The opensolaris sponsor process for fixing bug 6445725 seems
to have gotten stuck. I pinged Alan P. about the state of that bug...
> By coincidence, I spent some time dtracing 6560174 yesterday afternoon on
> b62, and these bugs are indeed duplicates. I never noticed 6445725 because my
> system wasn't hanging, but as the notes say, the fix for 6434435 changes the
> problem, and instead the error that gets propagated back fro
> I'm running snv 65 and having an issue
> much like this:
> http://osdir.com/ml/solaris.opensolaris.help/2006-11/msg00047.html
Bug 6414472?
> Has anyone found a workaround?
You can try to patch my suggested fix for 6414472 into the ata binary
and see if it helps:
http://www.opensolaris.org/
> in my setup i do not install the ufsroot.
>
> i have 2 disks
> -c0d0 for the ufs install
> -c1d0s0 which is my zfs root i want to exploit
>
> my idea is to remove the c0d0 disk when the system will be ok
Btw. if you're trying to pull the ufs disk c0d0 from the system, and
physically move the
> it seems i have the same problem after zfs boot
> installation (following this setup on a snv_69 release
> http://www.opensolaris.org/os/community/zfs/boot/zfsboot-manual/ ).
Hmm, in step 4., wouldn't it be better to use ufsdump / ufsrestore
instead of find / cpio to clone the ufs root into the
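I.e. something like this instead of the find/cpio pipeline (assuming the new zfs root is mounted on /zfsroot):

# ufsdump 0f - / | (cd /zfsroot && ufsrestore rf -)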
> I managed to create a link in a ZFS directory that I can't remove.
>
> # find . -print
> .
> ./bayes_journal
> find: stat() error ./bayes.lock.router.3981: No such
> file or directory
> ./user_prefs
> #
>
>
> ZFS scrub shows no problems in the pool. Now, this
> was probably caused when I was
> using hyperterm, I captured the panic message as:
>
> SunOS Release 5.11 Version snv_69 32-bit
> Copyright 1983-2007 Sun Microsystems, Inc. All
> rights reserved.
> Use is subject to license terms.
>
> panic[cpu0]/thread=fec1ede0: Can't handle mwait size
> 0
>
> fec37e70 unix:mach_alloc_mwait
I tried to copy an 8GB Xen domU disk image from a zvol device
to an image file on a ufs filesystem, and was surprised that
reading from the zvol character device doesn't detect "EOF".
On snv_66 (sparc) and snv_73 (x86) I can reproduce it, like this:
# zfs create -V 1440k tank/floppy-img
# dd if=
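A hedged reconstruction of the rest of the test (the character device path follows the usual /dev/zvol/rdsk/<pool>/<volume> layout, and the destination file is only an example); with a working EOF, dd should stop after 1440 KB instead of reading on forever:

# dd if=/dev/zvol/rdsk/tank/floppy-img of=/var/tmp/floppy.img bs=1k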
> I tried to copy an 8GB Xen domU disk image from a zvol device
> to an image file on a ufs filesystem, and was surprised that
> reading from the zvol character device doesn't detect "EOF".
>
> On snv_66 (sparc) and snv_73 (x86) I can reproduce it, like this:
>
> # zfs create -V 1440k tank/floppy
> I tried to copy an 8GB Xen domU disk image from a zvol device
> to an image file on a ufs filesystem, and was surprised that
> reading from the zvol character device doesn't detect "EOF".
I've filed bug 6596419...
> > I tried to copy an 8GB Xen domU disk image from a zvol device
> > to an image file on a ufs filesystem, and was surprised that
> > reading from the zvol character device doesn't detect "EOF".
>
> I've filed bug 6596419...
Requesting a sponsor for bug 6596419...
http://bugs.opensolaris.org/bug
Yesterday I tried to clone a xen dom0 zfs root filesystem and hit this panic
(probably Bug ID 6580715):
System is running last week's opensolaris bits (but I'm also accessing the zpool
using the xen snv_66 bits).
files/s11-root-xen: is an existing version 1 zfs
files/[EMAIL PROTECTED]: new snap
>
> Using build 70, I followed the zfsboot instructions
> at http://www.opensolaris.org/os/community/zfs/boot/zfsboot-manual/
> to the letter.
>
> I tried first with a mirror zfsroot, when I try to boot to zfsboot
> the screen is flooded with "init(1M) exited on fatal signal 9"
Could be this
> I would like confirm that Solaris Express Developer Edition 09/07
> b70, you can't have /usr on a separate zfs filesystem because of
> broken dependencies.
>
> 1/ Part of the problem is that /sbin/zpool is linked to
> /usr/lib/libdiskmgt.so.1
Yep, in the past this happened on several occas
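The dependency is easy to confirm on an installed system (the exact library list varies by build):

# ldd /sbin/zpool | grep /usr/lib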
> Should I bfu to the latest bits to fix this
> problem or do I also need to install b72?
bfu to b72 (or newer) should be OK, iff there really is
a difference with shared library dependencies between
b70 and b72. I'm not sure about b70; but b72 with
just an empty /usr directory in the root files
> Regarding compression, if I am not mistaken, grub
> cannot access files that are compressed.
There was a bug where grub was unable to access files
on zfs that contained holes:
Bug ID:   6541114
Synopsis: GRUB/ZFS fails to load files from a default compressed (lzjb) root
http://bu
> how does one free segment(offset=77984
A few weeks ago, I wrote:
> Yesterday I tried to clone a xen dom0 zfs root
> filesystem and hit this panic (probably Bug ID 6580715):
>
>
> > ::status
> debugging crash dump vmcore.6 (64-bit) from moritz
> operating system: 5.11 wos_b73 (i86pc)
> panic message: freeing free segment (vdev=0 offse
> I've got Solaris Express Community Edition build 75
> (75a) installed on an Asus P5K-E/WiFI-AP (ip35/ICH9R
> based) board. CPU=Q6700, RAM=8Gb, disk=Samsung
> HD501LJ and (older) Maxtor 6H500F0.
>
> When the O/S is running on bare metal, ie no xVM/Xen
> hypervisor, then everything is fine.
>
>
> I wanted to resurrect an old dual P3 system with a couple of IDE drives
> to use as a low power quiet NIS/DHCP/FlexLM server so I tried installing
> ZFS boot from build 90.
> Jun 28 16:09:19 zack scsi: [ID 107833 kern.warning] WARNING: /[EMAIL
> PROTECTED],0/[EMAIL PROTECTED],1/[EMAIL PROTECTED
Mike Gerdts wrote
> By default, only kernel memory is dumped to the dump device. Further,
> this is compressed. I have heard that 3x compression is common and
> the samples that I have range from 3.51x - 6.97x.
My samples are in the range 1.95x - 3.66x. And yes, I lost
a few crash dumps on a b
> http://www.opensolaris.org/jive/thread.jspa?messageID=36229#36229
The problem is back, on a different system: a laptop running on-20060605 bits.
Compared to snv_29, the error message has improved, though:
# zfs snapshot hdd/[EMAIL PROTECTED]
cannot snapshot 'hdd/[EMAIL PROTECTED]': dataset is b
> What about ATA disks?
>
> Currently (at least on x86) the ata driver enables the write cache
> unconditionally on each drive and doesn't support the ioctl to flush the
> cache
> (although the function is already there).
DKIOCFLUSHWRITECACHE?
It is implemented here, for x86 ata:
http://cvs.
> > What throughput do you get for the full untar (untared size / elapse time) ?
> # tar xf thunderbird-1.5.0.4-source.tar  2.77s user 35.36s system 33% cpu 1:54.19
>
> 260M/114 =~ 2.28 MB/s on this IDE disk
IDE disk?
Maybe it's this sparc ide/ata driver issue:
Bug ID: 6421427
Synopsis: netr
> Further testing revealed
> that it wasn't an iSCSI performance issue but a zvol
> issue. Testing on a SATA disk locally, I get these
> numbers (sequentual write):
>
> UFS: 38MB/s
> ZFS: 38MB/s
> Zvol UFS: 6MB/s
> Zvol Raw: ~6MB/s
>
> ZFS is nice and fast but Zvol performance just drops
> off
I've tried to use "dmake lint" on on-src-20060731, and was running out of swap
on my
Tecra S1 laptop, 32-bit x86, 768MB main memory, with a 512MB swap slice.
The "FULL KERNEL: global crosschecks:" lint run consumes lots (~800MB) of space
in /tmp, so the system was running out of swap space.
To fi
I made some powernow experiments on a dual core amd64 box, running the
64-bit debug on-20060828 kernel. At some point the kernel seemed to
make no more progress (probably a bug in the multiprocessor powernow
code), the gui was stuck, so I typed (blind) F1-A + $ ::status
debugging crash dump vmcore
> We are trying to obtain a mutex that is currently held
> by another thread trying to get memory.
Hmm, reminds me a bit of the zvol swap hang I got
some time ago:
http://www.opensolaris.org/jive/thread.jspa?threadID=11956&tstart=150
I guess if the other thread is stuck trying to get memory, then
The disks in that Blade 100, are these IDE disks?
The performance problem is probably bug 6421427:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6421427
A fix for the issue was integrated into the Opensolaris 20060904 source
drop (actually closed binary drop):
http://dlc.sun.com/os