Hi Tom and all,
Tom Bird wrote:
> Hi,
> Have a problem with a ZFS on a single device, this device is 48 1T SATA
> drives presented as a 42T LUN via hardware RAID 6 on a SAS bus which had
> a ZFS on it as a single device.
> There was a problem with the SAS bus which caused various errors
> including the inevitable kernel panic, the thing came back up with 3 out
> of 4 zfs mounted.
> "nw" == Nicolas Williams <[EMAIL PROTECTED]> writes:
nw> Without ZFS the OP would have had silent, undetected (by the
nw> OS that is) data corruption.
It sounds to me more like the system would have panicked as soon as he
pulled the cord, and when it rebooted, it would have rolled t
> "re" == Richard Elling <[EMAIL PROTECTED]> writes:
re> If your pool is not redundant, the chance that data
re> corruption can render some or all of your data inaccessible is
re> always present.
1. data corruption != unclean shutdown
2. other filesystems do not need a mirror
> so finally, I gathered up some courage and
> "installgrub /boot/grub/stage1 /boot/grub/stage2
> /dev/rdsk/c2d0s0" seemed to write out what I assume
> is a new MBR.
Not the MBR - the stage1 and 2 files are written to the boot area of the
Solaris FDISK partition.
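For reference, a sketch of the relevant installgrub invocations (using the
same device name as in the quote above; adjust for your own disk):

Without -m, stage1 goes to the partition boot sector and stage2 to the boot
area of the Solaris fdisk partition; the MBR itself is left alone:
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c2d0s0
Add -m only if you also want stage1 written to the master boot record:
# installgrub -m /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c2d0s0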
> tried to also installgrub on
Yes, there have been bugs with heavy I/O and ZFS running the system
out of memory. However, there was contention in the thread about
whether it might be due to marvell88sx driver bugs (most likely not).
Further, my mention of 32-bit Solaris being unsafe at any speed is still
true. Without analysi
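One mitigation that often comes up for ZFS driving a 32-bit kernel out of
memory, not something claimed in this message but a commonly cited
workaround, is capping the ARC in /etc/system and rebooting. The value
below is only an example; size it to leave headroom for the rest of the
kernel heap:

set zfs:zfs_arc_max = 0x10000000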
On Wed, Aug 06, 2008 at 03:44:08PM -0400, Miles Nordin wrote:
> > "re" == Richard Elling <[EMAIL PROTECTED]> writes:
>
> c> If that's really the excuse for this situation, then ZFS is
> c> not ``always consistent on the disk'' for single-VDEV pools.
>
> re> I disagree with your
On Wed, Aug 06, 2008 at 02:23:44PM -0400, Will Murnane wrote:
> On Wed, Aug 6, 2008 at 13:57, Miles Nordin <[EMAIL PROTECTED]> wrote:
> > If that's really the excuse for this situation, then ZFS is not
> > ``always consistent on the disk'' for single-VDEV pools.
> Well, yes. If data is sent, but c
> As others have explained, if ZFS does not have a
> config with data redundancy - there is not much that
> can be learned - except that it "just broke".
Plenty can be learned by just looking at the pool.
Unfortunately ZFS currently doesn't have tools which
make that easy; as I understand it, zdb
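For what it's worth, the kind of poking around meant here looks roughly like
the following (pool and device names are placeholders, and the exact options
vary by build, since zdb is an unstable and largely undocumented tool):

Dump the vdev labels, and the uberblocks they contain, from a device:
# zdb -l /dev/rdsk/c2d0s0
Walk a pool's datasets and object metadata (more d's, more detail):
# zdb -dddd mypool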
> From the ZFS Administration Guide, Chapter 11, Data Repair section:
> Given that the fsck utility is designed to repair known pathologies
> specific to individual file systems, writing such a utility for a file
> system with no known pathologies is impossible.
That's a fallacy (and is incorrect
On Thu, Aug 7, 2008 at 5:53 AM, Marc Bevand <[EMAIL PROTECTED]> wrote:
> Bryan, Thomas: these hangs of 32-bit Solaris under heavy (fs, I/O) loads are a
> well-known problem. They are caused by memory contention in the kernel heap.
> Check 'kstat vmem::heap'. The usual recommendation is to change the
> kernelbase. It worked for me.
Bryan, Thomas: these hangs of 32-bit Solaris under heavy (fs, I/O) loads are a
well-known problem. They are caused by memory contention in the kernel heap.
Check 'kstat vmem::heap'. The usual recommendation is to change the
kernelbase. It worked for me. See:
http://mail.opensolaris.org/pipermai
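A rough sketch of what checking and changing this looks like (the 0x80000000
value is just a commonly cited setting, not an official recommendation):

See how much of the 32-bit kernel heap arena is in use versus its total size:
# kstat -p vmem::heap:mem_inuse vmem::heap:mem_total
Lower kernelbase so the kernel gets a larger share of the 32-bit address
space (this shrinks user process address space), then reboot:
# eeprom kernelbase=0x80000000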
On Thu, Aug 7, 2008 at 5:32 AM, Peter Bortas <[EMAIL PROTECTED]> wrote:
> On Wed, Aug 6, 2008 at 7:31 PM, Bryan Allen <[EMAIL PROTECTED]> wrote:
>>
>> Good afternoon,
>>
>> I have a ~600GB zpool living on older Xeons. The system has 8GB of RAM. The
>> pool is hanging off two LSI Logic SAS3041X-Rs (no RAID configured).
Tom Bird wrote:
> Richard Elling wrote:
>
>
>> I see no evidence that the data is or is not correct. What we know is that
>> ZFS is attempting to read something and the device driver is returning EIO.
>> Unfortunately, EIO is a catch-all error code, so more digging to find the
>> root cause is needed.
On Wed, Aug 6, 2008 at 6:22 PM, Carson Gaspar <[EMAIL PROTECTED]> wrote:
> Brian D. Horn wrote:
>> In the most recent code base (both OpenSolaris/Nevada and S10Ux with patches)
>> all the known marvell88sx problems have long ago been dealt with.
>
> Not true. The working marvell patches still have not been released for
> Solaris. They're still just IDRs.
As far as I can tell from the patch web pages:
For Solaris 10 x86, 138053-01 should have the fixes (it does
depend on other earlier patches, though). I find it very difficult
to tell what the story is with patches, as the patch numbers
seem to have very little in them to correlate them to
code changes.
Brian D. Horn wrote:
> In the most recent code base (both OpenSolaris/Nevada and S10Ux with patches)
> all the known marvell88sx problems have long ago been dealt with.
Not true. The working marvell patches still have not been released for
Solaris. They're still just IDRs. Unless you know somethi
On Wed, Aug 6, 2008 at 8:20 AM, Tom Bird <[EMAIL PROTECTED]> wrote:
> Hi,
>
> Have a problem with a ZFS on a single device, this device is 48 1T SATA
> drives presented as a 42T LUN via hardware RAID 6 on a SAS bus which had
> a ZFS on it as a single device.
>
> There was a problem with the SAS bus
> The other changes that will appear in 0.11 (which is
> nearly done) are:
Still looking forward to seeing .11 :)
Think we can expect a release soon? (or at least svn access so that others can
check out the trunk?)
Oh, I have 'played' with them all: VirtualBox, VMware, KVM...
But now I need to set up a production system for various Linux & Windows
guests. And none of the 3 mentioned are 100% perfect, so the choice is
difficult...
My first choice would be KVM+RAIDZ, but since KVM only works on Linux, and
RA
Brian D. Horn wrote:
> In the most recent code base (both OpenSolaris/Nevada and S10Ux with patches)
> all the known marvell88sx problems have long ago been dealt with.
>
> However, I've said this before. Solaris on 32-bit platforms has problems and
> is not to be trusted. There are far, far too
In the most recent code base (both OpenSolaris/Nevada and S10Ux with patches)
all the known marvell88sx problems have long ago been dealt with.
However, I've said this before. Solaris on 32-bit platforms has problems and
is not to be trusted. There are far, far too many places in the source
code
For what it's worth, I see this as well on 32-bit Xeons, 1 GB RAM, and
dual AOC-SAT2-MV8 cards (large amounts of I/O sometimes result in a lockup
requiring a reboot, though my setup is Nexenta b85). Nothing shows up in the
logs, nor does the load average increase significantly. It could be the
regular Marvell driver iss
On Wed, 6 Aug 2008, Will Murnane wrote:
> I've got a pool which I'm currently syncing a few hundred gigabytes to
> using rsync. The source machine is pretty slow, so it only goes at
> about 20 MB/s. Watching "zpool iostat -v local-space 10", I see a
> pattern like this (trimmed to take up less space):
Michael Hale wrote:
> A bug report I've submitted for a zfs-related kernel crash has been
> marked incomplete and I've been asked to provide more information.
>
> This CR has been marked as "incomplete" by
> for the reason "Need More Info". Please update the CR
> providing the information re
A bug report I've submitted for a zfs-related kernel crash has been
marked incomplete and I've been asked to provide more information.
This CR has been marked as "incomplete" by
for the reason "Need More Info". Please update the CR
providing the information requested in the Evaluation and/or C
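If what is being requested is the usual panic data, gathering it looks
something like this (a guess at the request; the dump file numbers and
paths are the usual defaults, not taken from the CR):

After the panic reboot, make sure the crash dump was saved (savecore writes
unix.N and vmcore.N under /var/crash/<hostname>):
# savecore
Then pull the basics out of the dump for the bug report:
# cd /var/crash/`hostname`
# echo "::status" | mdb unix.0 vmcore.0     (panic string and dump summary)
# echo "::stack" | mdb unix.0 vmcore.0      (panic thread stack trace)
# echo "::msgbuf" | mdb unix.0 vmcore.0     (console messages leading up to it)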
> "re" == Richard Elling <[EMAIL PROTECTED]> writes:
c> If that's really the excuse for this situation, then ZFS is
c> not ``always consistent on the disk'' for single-VDEV pools.
re> I disagree with your assessment. The on-disk format (any
re> on-disk format) necessarily
Richard Elling wrote:
> I see no evidence that the data is or is not correct. What we know is that
> ZFS is attempting to read something and the device driver is returning EIO.
> Unfortunately, EIO is a catch-all error code, so more digging to find the
> root cause is needed.
I'm currently check
Miles Nordin wrote:
>> "re" == Richard Elling <[EMAIL PROTECTED]> writes:
>> "tb" == Tom Bird <[EMAIL PROTECTED]> writes:
>>
>
> tb> There was a problem with the SAS bus which caused various
> tb> errors including the inevitable kernel panic, the thing came
> tb> back up with 3 out of 4 zfs mounted.
On Wed, Aug 6, 2008 at 13:57, Miles Nordin <[EMAIL PROTECTED]> wrote:
>> "re" == Richard Elling <[EMAIL PROTECTED]> writes:
>> "tb" == Tom Bird <[EMAIL PROTECTED]> writes:
>
> tb> There was a problem with the SAS bus which caused various
> tb> errors including the inevitable kernel panic, the thing came
> tb> back up with 3 out of 4 zfs mounted.
> "re" == Richard Elling <[EMAIL PROTECTED]> writes:
> "tb" == Tom Bird <[EMAIL PROTECTED]> writes:
tb> There was a problem with the SAS bus which caused various
tb> errors including the inevitable kernel panic, the thing came
tb> back up with 3 out of 4 zfs mounted.
re> I
On Wed, Aug 6, 2008 at 13:31, Bryan Allen <[EMAIL PROTECTED]> wrote:
> I have a ~600GB zpool living on older Xeons. The system has 8GB of RAM. The
> pool is hanging off two LSI Logic SAS3041X-Rs (no RAID configured).
You might try taking out 4gb of the ram (!). Some 32-bit drivers have
problems do
I've got a pool which I'm currently syncing a few hundred gigabytes to
using rsync. The source machine is pretty slow, so it only goes at
about 20 MB/s. Watching "zpool iostat -v local-space 10", I see a
pattern like this (trimmed to take up less space):
capacity operations
Good afternoon,
I have a ~600GB zpool living on older Xeons. The system has 8GB of RAM. The
pool is hanging off two LSI Logic SAS3041X-Rs (no RAID configured).
When I put a moderate amount of load on the zpool (like, say, copying many
files locally, or deleting a large number of ZFS fs), the sys
Tom Bird wrote:
> Hi,
>
> Have a problem with a ZFS on a single device, this device is 48 1T SATA
> drives presented as a 42T LUN via hardware RAID 6 on a SAS bus which had
> a ZFS on it as a single device.
>
> There was a problem with the SAS bus which caused various errors
> including the inevitable kernel panic, the thing came back up with 3 out
> of 4 zfs mounted.
Ross,
Thanks, I have updated the bug with this info.
Neil.
Ross Smith wrote:
> Hmm... got a bit more information for you to add to that bug I think.
>
> Zpool import also doesn't work if you have mirrored log devices and
> either one of them is offline.
>
> I created two ramdisks with:
> #
Almost. I did exactly the same thing to my system -- upgrading ZFS.
The 2008.11 development snapshot CD I found is based on snv_93 and doesn't yet
support ZFS v.11, so it refuses to import the pool. My system doesn't have a DVD
drive, so I cannot boot the SXCE snv_94 DVD. I guess I have to trac
After some errors were logged about a problem with a ZFS file system,
I ran zpool status followed by zpool status -v...
# zpool status
  pool: ehome
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore
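The usual follow-up here (standard zpool usage, not something taken from the
original post; the pool name is just the one shown above) is:

List the individual files affected by the permanent errors:
# zpool status -v ehome
After restoring those files from backup, clear the error counters and
re-verify every checksum in the pool:
# zpool clear ehome
# zpool scrub ehome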
I use an Intel Q9450 + P45 mobo + ATI 4850 + ZFS + VirtualBox.
I have installed WinXP. It works well and is stable. There are features not
implemented yet, though, for instance USB.
I suggest you try VB yourself. It is ~20MB and installs quickly. I used it on a
1GB RAM P4 machine. It worked fine. If
Hi,
Have a problem with a ZFS on a single device, this device is 48 1T SATA
drives presented as a 42T LUN via hardware RAID 6 on a SAS bus which had
a ZFS on it as a single device.
There was a problem with the SAS bus which caused various errors
including the inevitable kernel panic, the thing came back up with 3 out
of 4 zfs mounted.
Hmm... got a bit more information for you to add to that bug I think.
Zpool import also doesn't work if you have mirrored log devices and either one
of them is offline.
I created two ramdisks with:
# ramdiskadm -a rc-pool-zil-1 256m
# ramdiskadm -a rc-pool-zil-2 256m
And added them to the pool.
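Presumably the setup and the failure case looked something like the following
(the pool name rc-pool and the exact zpool commands are my guess from the
ramdisk names; only the ramdiskadm lines above are from the original message):

Attach the two ramdisks as a mirrored log device:
# zpool add rc-pool log mirror /dev/ramdisk/rc-pool-zil-1 /dev/ramdisk/rc-pool-zil-2
Export the pool, lose one half of the log mirror, then try to re-import:
# zpool export rc-pool
# ramdiskadm -d rc-pool-zil-2
# zpool import rc-pool     (reportedly fails despite the surviving mirror half)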