Any suggestions? I would like to restore redundancy ASAP.
Can you send the output of the attached D script when running 'zpool
status'?
- Eric
On Thu, Dec 04, 2008 at 02:58:54PM -0800, Brett wrote:
> As a result of a power spike during a thunderstorm I lost a SATA controller
> card. This card supported my zfs pool called newsan, which is a 4 x Samsung
> 1TB SATA2 disk raid-z.
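The attached script itself isn't included in the digest; a rough sketch of how such a D script might be invoked alongside the command (the script filename below is only a placeholder):

    # dtrace -s runs the D script; -c traces it for the lifetime of the given command
    pfexec dtrace -s zpool_status.d -c 'zpool status newsan'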
Thanks for the tips. I'm not sure if they will be relevant, though. We don't
talk directly with the AMS1000. We are using a USP-VM to virtualize all of our
storage and we didn't have to add anything to the drv configuration files to
see the new disk (mpxio was already turned on). We are usin
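One hedged way to sanity-check that MPxIO is presenting the LUNs carved from the USP-VM, without editing any drv configuration files:

    # List multipathed logical units and their path counts
    mpathadm list lu
    # Show path state for a specific LUN (substitute the real device name)
    mpathadm show lu <logical-unit>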
On Thu, Dec 4, 2008 at 4:52 PM, Ed Spencer <[EMAIL PROTECTED]> wrote:
> Yes, I've seen them on nfs filesystems on solaris10 using a Netapp nfs
> server.
> Here's a link to a solution that I just implemented on a solaris10
> server:
> https://equoria.net/index.php/Value_too_large_for_defined_data_type
Richard Elling wrote:
>>
>>asc = 0x29
>>ascq = 0x0
>
> ASC/ASCQ 29/00 is POWER ON, RESET, OR BUS DEVICE RESET OCCURRED
> http://www.t10.org/lists/asc-num.htm#ASC_29
>
> [this should be more descriptive as the codes are, more-or-less,
> standardized, I'll try to file an RFE, unless
As a result of a power spike during a thunderstorm I lost a SATA controller
card. This card supported my zfs pool called newsan, which is a 4 x Samsung 1TB
SATA2 disk raid-z. I replaced the card and the devices have the same
controller/disk numbers, but I now have the following issue.
-bash-3.2$
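A hedged sketch of the usual sequence for getting redundancy back after a controller swap, assuming the disks reappear under their old names (the device name below is a placeholder):

    # See which vdevs are FAULTED or UNAVAIL and whether a resilver is pending
    zpool status -v newsan
    # Clear the errors accumulated while the controller was dead; ZFS retries the devices
    zpool clear newsan
    # If a disk is still shown as offline/unavailable, bring it back explicitly
    zpool online newsan c2t0d0
    # Confirm the pool comes back ONLINE and watch any resilver run to completion
    zpool status newsan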
Yes, I've seen them on nfs filesystems on solaris10 using a Netapp nfs
server.
Here's a link to a solution that I just implemented on a solaris10
server:
https://equoria.net/index.php/Value_too_large_for_defined_data_type
On Thu, 2008-12-04 at 15:31, Scott Williamson wrote:
> Has anyone seen files created on a linux client with negative or zero
> creation timestamps on zfs+nfs exported datasets?
Has anyone seen files created on a linux client with negative or zero
creation timestamps on zfs+nfs exported datasets?
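A quick, hedged way to hunt for such files from the Linux client (the mount point is a placeholder):

    # Print epoch mtime and path for every file, keep only those at or before the epoch
    find /mnt/zfs-export -type f -printf '%T@ %p\n' | awk '$1 <= 0'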
[EMAIL PROTECTED] said:
> I think we found the choke point. The silver lining is that it isn't the
> T2000 or ZFS. We think it is the new SAN, a Hitachi AMS1000, which has
> 7200RPM SATA disks with the cache turned off. This system has a very small
> cache, and when we did turn it on for one of
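A hedged way to confirm the latency is coming from the back-end LUNs rather than the T2000 or ZFS itself:

    # Extended per-device statistics, ten samples at five-second intervals; look for
    # high asvc_t and %b on the LUNs backed by the AMS1000 SATA disks
    iostat -xnz 5 10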
Ethan Erchinger wrote:
>
>
> Richard Elling wrote:
>>
>> I've seen these symptoms when a large number of errors were reported
>> in a short period of time and memory was low. What does "fmdump -eV"
>> show?
>>
> fmdump -eV shows lots of messages like this, and yeah, I believe that
> to be sd16, which is the SSD:
Tim wrote:
>
>
> Are you leaving ANY RAM for zfs to do its thing? If you're consuming
> ALL system memory for just this file/application, I would expect the
> system to fall over and die.
>
Hmm. I believe that the kernel should manage that relationship for me.
If the system cannot manage swa
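If the concern is the ZFS ARC competing with the application for memory, one common approach from this era is to cap the ARC in /etc/system; the value below is only a placeholder and takes effect after a reboot:

    # Cap the ZFS ARC at 1 GiB (0x40000000 bytes); tune to the workload
    echo "set zfs:zfs_arc_max = 0x40000000" >> /etc/system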
On Thu, Dec 4, 2008 at 11:55 AM, Ethan Erchinger <[EMAIL PROTECTED]> wrote:
>
>
> Ross wrote:
> > I'm no expert, but the first thing I'd ask is whether you could repeat
> > that test without using compression? I'd be quite worried about how a
> > system is going to perform when it's basically running off a 50GB
> > compressed file.
Ross wrote:
> I'm no expert, but the first thing I'd ask is whether you could repeat that
> test without using compression? I'd be quite worried about how a system is
> going to perform when it's basically running off a 50GB compressed file.
>
>
Yes, this does occur with compression off, but
Richard Elling wrote:
>
> I've seen these symptoms when a large number of errors were reported
> in a short period of time and memory was low. What does "fmdump -eV"
> show?
>
fmdump -eV shows lots of messages like this, and yeah, I believe that to
be sd16, which is the SSD:
Dec 03 2008 08:31:11
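To get a quicker sense of whether sd16 dominates the error reports, a hedged summary pass over the ereport log:

    # Count ereport classes (the class is the last column of 'fmdump -e' output)
    fmdump -e | awk '{ print $NF }' | sort | uniq -c | sort -rn
    # List any faults the diagnosis engines have actually declared
    fmadm faulty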
Ethan Erchinger wrote:
> Hi all,
>
> First, I'll say my intent is not to spam a bunch of lists, but after
> posting to opensolaris-discuss I had someone communicate with me offline
> that these lists would possibly be a better place to start. So here we
> are. For those on all three lists, sorr
I can see a number of closed bugs, and some discussion on this, but I can't
tell whether there's an outstanding bug for it. Can anybody find one, or should I
create a new bug?
In snv_103 I've seen resilvers restarting several times. I have been running
"zpool status" as root, but most of the time
Keep in mind that if you use ZFS you get a lot of additional functionality,
such as snapshots, compression, and clones.
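A few illustrative commands (the dataset names are placeholders, not from this thread):

    zfs snapshot tank/data@before-upgrade         # point-in-time snapshot
    zfs clone tank/data@before-upgrade tank/test  # writable clone of that snapshot
    zfs set compression=on tank/data              # transparent compression for new writes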
I'm no expert, but the first thing I'd ask is whether you could repeat that
test without using compression? I'd be quite worried about how a system is
going to perform when it's basically running off a 50GB compressed file.
There seem to be a lot of variables here, on quite a few new systems, a