Edward wrote:
> That is really weird. What are you calling "failed?" If you're getting
> either a red blinking light, or a checksum failure on a device in a zpool...
> You should get your replacement with no trouble.
Yes, failed, with all the normal "failed" signs, cfgadm not finding it,
"FAULTED"
Frank wrote:
> Have you dealt with RedHat "Enterprise" support? lol.
Have you dealt with Sun/Oracle support lately? lololol It's a disaster.
We've had a failed disk in a fully supported Sun system for over 3 weeks,
Explorer data turned in, and been given the runaround forever. The 7000
series su
I would probably tune lotsfree down as well. At 72G of RAM, the default of
physmem/64 means it's probably reserving around 1.1GB:
http://docs.sun.com/app/docs/doc/819-2724/6n50b07bk?a=view
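A sketch of what that tuning might look like; lotsfree is set in pages, and
the value below is purely illustrative (at 72G of RAM the physmem/64 default
works out to roughly 294912 4K pages, i.e. the ~1.1GB above):

* /etc/system -- illustrative only: hold lotsfree at ~256MB (65536 x 4K pages)
set lotsfree=65536

$ kstat -p unix:0:system_pages:lotsfree   # verify the value after a reboot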
Ethan
-----Original Message-----
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] O
> I have installed OpenSolaris, build 111. I also added some packages
> from www.sunfreeware.com to my system and other tools (compiled by me)
> to /opt.
> The problem is that all new data (added by me) gets lost after some days.
> The disk looks as if (for example) the packages from sunfreeware were
> never installed.
> > http://opensolaris.org/jive/thread.jspa?threadID=105702&tstart=0
>
> Yes, this does sound very similar. It looks to me like data from read
> files is clogging the ARC so that there is no more room for more
> writes when ZFS periodically goes to commit unwritten data.
I'm wondering if chang
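One quick way to sanity-check that theory while the box is busy (nothing
here is specific to this system):

# current ARC size and its growth target
$ kstat -p zfs:0:arcstats:size
$ kstat -p zfs:0:arcstats:c
# kernel vs. anon vs. free memory breakdown (needs privileges)
$ echo ::memstat | pfexec mdb -k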
>
> > correct ratio of arc to l2arc?
>
> from http://blogs.sun.com/brendan/entry/l2arc_screenshots
>
Thanks Rob. Hmm...that ratio isn't awesome.
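For anyone else wanting to eyeball the same ratio, a sketch using the
arcstats counters (field names as they appear on recent builds):

$ kstat -p zfs:0:arcstats:size         # ARC bytes held in RAM
$ kstat -p zfs:0:arcstats:l2_size      # bytes currently cached on the L2ARC
$ kstat -p zfs:0:arcstats:l2_hdr_size  # ARC memory spent on L2ARC headers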
> This is a MySQL database server, so if you are wondering about the
> smallish arc size, it's being artificially limited by "set
> zfs:zfs_arc_max = 0x8000" in /etc/system, so that the majority of
> RAM can be allocated to InnoDB.
>
I was told offline that it's likely because my arc size ha
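For reference, a sketch of how that kind of split is usually expressed; the
values below are placeholders, not the ones from this server:

* /etc/system -- cap the ARC (placeholder: 2G)
set zfs:zfs_arc_max = 0x80000000

# my.cnf -- hand most of the remaining RAM to InnoDB (placeholder size)
[mysqld]
innodb_buffer_pool_size = 48G

$ kstat -p zfs:0:arcstats:c_max   # confirm the ARC cap took effect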
Hi all,
Since we've started running 2009.06 on a few servers we seem to be
hitting a problem with l2arc that causes it to stop receiving evicted
arc pages. Has anyone else seen this kind of problem?
The filesystem contains about 130G of compressed (lzjb) data, and looks
like:
$ zpool status -v d
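A sketch of the arcstats that show whether the cache device is still being
fed, in case anyone wants to compare numbers:

$ kstat -p zfs:0:arcstats:l2_size        # bytes held on the cache device
$ kstat -p zfs:0:arcstats:l2_hits        # reads served from the L2ARC
$ kstat -p zfs:0:arcstats:l2_misses      # reads that fell through to disk
$ kstat -p zfs:0:arcstats:l2_write_bytes # cumulative bytes fed to the device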
Ethan Erchinger wrote:
> Here is a sample set of messages at that time. It looks like timeouts
> on the SSD for various requested blocks. Maybe I need to talk with
> Intel about this issue.
>
Keeping everyone up-to-date, for those who care, I've RMA'd the Intel
drive, and
Richard Elling wrote:
> The answer may lie in the /var/adm/messages file which should report
> if a reset was received or sent.
Here is a sample set of messages at that time. It looks like timeouts
on the SSD for various requested blocks. Maybe I need to talk with
Intel about this issue.
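Something along these lines pulls the relevant entries out of the log (sd16
being the instance named above):

# reset/timeout chatter for that one device
$ grep -i sd16 /var/adm/messages | egrep -i 'reset|timeout'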
Ethan
Richard Elling wrote:
>>
>>asc = 0x29
>>ascq = 0x0
>
> ASC/ASCQ 29/00 is POWER ON, RESET, OR BUS DEVICE RESET OCCURRED
> http://www.t10.org/lists/asc-num.htm#ASC_29
>
> [this should be more descriptive as the codes are, more-or-less,
> standardized, I'll try to file an RFE, unless
Tim wrote:
>
>
> Are you leaving ANY RAM for ZFS to do its thing? If you're consuming
> ALL system memory for just this file/application, I would expect the
> system to fall over and die.
>
Hmm. I believe that the kernel should manage that relationship for me.
If the system cannot manage swa
Ross wrote:
> I'm no expert, but the first thing I'd ask is whether you could repeat that
> test without using compression? I'd be quite worried about how a system is
> going to perform when it's basically running off a 50GB compressed file.
>
>
Yes, this does occur with compression off, but
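For what it's worth, a sketch of the compression on/off comparison (dataset
names are made up):

# scratch datasets for an A/B run of the same workload
$ pfexec zfs create -o compression=off  tank/test_nocomp
$ pfexec zfs create -o compression=lzjb tank/test_lzjb
# copy the ~50GB file into each, repeat the test, and compare iostat/timings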
Richard Elling wrote:
>
> I've seen these symptoms when a large number of errors were reported
> in a short period of time and memory was low. What does "fmdump -eV"
> show?
>
fmdump -eV shows lots of messages like this, and yea, I believe that to
be sd16, which is the SSD:
Dec 03 2008 08:31:11
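The same ereports can be narrowed down by class or time rather than paging
through everything; the class pattern below is an assumption about how these
SCSI transport errors get reported:

# verbose dump of SCSI-related ereports only
$ fmdump -eV -c 'ereport.io.scsi.*'
# or bound it by time (see fmdump(1M) for the accepted time formats)
$ fmdump -eV -t 03Dec08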
Hi all,
First, I'll say my intent is not to spam a bunch of lists, but after
posting to opensolaris-discuss I had someone communicate with me offline
that these lists would possibly be a better place to start. So here we
are. For those on all three lists, sorry for the repetition.
Second, thi
William Bauer wrote:
I've done some more research, but would still greatly appreciate someone
helping me understand this.
It seems that only writes to the home directory of the person logged in to the
console suffer from degraded performance. If I write to a subdirectory
beneath my home, or
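A simple way to show the difference, for anyone who wants to reproduce it
(paths and sizes are placeholders):

# write a 1GB file directly into the home directory
$ dd if=/dev/zero of=$HOME/ddtest bs=1024k count=1024
# same write into a subdirectory beneath it
$ mkdir -p $HOME/sub
$ dd if=/dev/zero of=$HOME/sub/ddtest bs=1024k count=1024
$ rm $HOME/ddtest $HOME/sub/ddtest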
Sorry for the first incomplete send, stupid Ctrl-Enter. :-)
Hello,
I've looked quickly through the archives and haven't found mention of
this issue. I'm running SXCE (snv_99), which uses zfs version 13. I
had an existing zpool: