> Have been there and gave up at the end[1]. Could reproduce (even though
> it took a bit longer) under most Linux versions (incl. using latest LSI
> drivers) and LSI 3081E-R HBA.
>
> Is it just mpt causing the errors or also mpt_sas?
This is anecdotal, but I would say that the LSI1068 cards and
> Anybody who has worked on a SPARC system for the past 15 years is well
> aware of NUMAness. We've been living in a NUMA world for a very long time,
> a world where the processors were slow and far memory latency was much, much
> worse than we see in the x86 world.
>
> I look forward to seeing the
> I'd be interested in the results of such tests. You can change the
> primarycache parameter on the fly, so you could test it in less time than it
> takes for me to type this email :-)
> -- Richard
Tried that. Performance headed south like a cat with its tail on fire. We
didn't bother quanti
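For anyone wanting to repeat that experiment, the primarycache property can indeed be flipped on a live dataset and reverted just as quickly, no remount needed. A minimal sketch, assuming a hypothetical dataset named tank/db:

    # check the current setting
    zfs get primarycache tank/db
    # cache only metadata in the ARC for this dataset
    zfs set primarycache=metadata tank/db
    # run the workload, then revert to the default
    zfs set primarycache=all tank/db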
> I'm getting sub-optimal performance with an mmap based database
> (mongodb) which is running on ZFS on Solaris 10u9.
>
> System is Sun-Fire X4270-M2 with 2xX5680 and 72GB (6 * 8GB + 6 * 4GB)
> RAM (installed so it runs at 1333MHz) and 2 * 300GB 15K RPM disks
>
> - a few mongodb instances ar
> Out of curiosity, are there any third-party hardware vendors
> that make server/storage chassis (Supermicro et al) who make
> SATA backplanes with the SAS interposers soldered on?
There doesn't seem to be much out there, though I haven't looked.
> Would that make sense, or be cheaper/more reli
> In general, mixing SATA and SAS directly behind expanders (eg without
> SAS/SATA interposers) seems to be bad juju that an OS can't fix.
In general I'd agree. Just mixing them on the same box can be problematic,
I've noticed - though I think it's as much as anything the firmware
on the 3Gb/s exp
> 2012-03-21 16:41, Paul Kraus wrote:
> > I have been running ZFS in a mission critical application since
> > zpool version 10 and have not seen any issues with some of the vdevs
> > in a zpool full while others are virtually empty. We have been running
> > commercial Solaris 10 releases. The
Let me throw two cents into the mix here.
Background: I have probably 8 different ZFS boxes, BYO using SMC
chassis. The standard config now looks like this:
- CSE847-E26-1400LPB main chassis, X8DTH-iF board, dual X5670 CPUs, 96G
RAM (some have 144G)
- Intel X520 dual-10G card
- 2 LSI 9211-8i co
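On the vdev-imbalance point quoted above: the per-vdev allocation can be read straight out of the verbose iostat view. A quick sketch, for a hypothetical pool named tank:

    # capacity (alloc/free) is broken out per vdev, then per disk
    zpool iostat -v tank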
> I've also started conversations with Pogo about offering an OpenIndiana
> based workstation, which might be another option if you prefer more of a
> general purpose solution.
>
> - Garrett
Just to highlight a point that seems often lost here - not everyone uses
Solaris/ZFS as a "file stor
> Date: Mon, 28 Feb 2011 22:02:37 +0100 (CET)
> From: Roy Sigurd Karlsbakk
>
> > > I cannot but agree. On Linux and Windoze (haven't tested FreeBSD),
> > > drives connected to an LSI9211 show up in the correct order, but not
> > > on OI/osol/S11ex (IIRC), and fmtopo doesn't always show a mapping
> On 02/28/11 22:39, Dave Pooser wrote:
> > On 2/28/11 4:23 PM, "Garrett D'Amore" wrote:
> >
> >> Drives are ordered in the order they are *enumerated* when they *first*
> >> show up in the system. *Ever*.
> >
> > Is the same true of controllers? That is, will c12 remain c12 or
> > /pci@0,0/pci
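One way to answer the cN question is to look at what the device links actually point to: on Solaris-family systems the /dev/dsk entries are symlinks into the /devices physical path tree. A sketch, assuming a hypothetical controller c12:

    # the symlink targets show the physical (PCI) path behind the c12 names
    ls -l /dev/dsk | grep c12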
> From: Edward Ned Harvey
>
> To: "'Khushil Dep'"
> Cc: Richard Elling ,
> zfs-discuss@opensolaris.org
> Subject: Re: [zfs-discuss] A few questions
> Message-ID: <000201cbada5$a3678270$ea3687...@nedharvey.com>
> Content-Type: text/plain; charset="utf-8"
>
> > From: Khushil Dep [mailt
> Another alternative to try would be setting primarycache=metadata on the
> ZFS dataset that contains the mmap files. That way you are only turning
> off the ZFS ARC cache of the file content for that one dataset rather
> than clamping the ARC.
Yeah, you'd think that would be the right thing to d
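To make the contrast concrete, a minimal sketch assuming a hypothetical dataset tank/db and an illustrative 16 GB cap:

    # per-dataset: keep only metadata in the ARC for the mmap'ed files
    zfs set primarycache=metadata tank/db

    # the alternative being argued against: clamping the whole ARC via
    # /etc/system on Solaris (takes effect after a reboot)
    echo "set zfs:zfs_arc_max = 17179869184" >> /etc/system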
One thing I've been confused about for a long time is the relationship
between ZFS, the ARC, and the page cache.
We have an application that's a quasi-database. It reads files by
mmap()ing them. (writes are done via write()). We're talking 100TB of
data in files that are 100k->50G in size (the fi
Wow, sounds familiar - binderedondat. I thought it was just when using
expanders... guess it's just anything 1068-based. Lost a 20TB pool to
having the controller basically just hose up what it was doing and write
scragged data to the disk.
1) The suggestion of using the serial number of the drive t
So, Best Practices says "use (2^N)+2 disks for your raidz2".
I wanted to use 7 disk stripes not 6, just to try to balance my risk
level vs available space.
Doing some testing on my hardware, it's hard to say there's a ton of
difference one way or the other - seek/create/delete is a bit faster on
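For concreteness, the two layouts being compared, sketched with hypothetical device names:

    # 6-disk raidz2 vdev: 4 data + 2 parity (data disks a power of two)
    zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0

    # 7-disk raidz2 vdev: 5 data + 2 parity (more usable space per vdev)
    zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0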
So, when you add a log device to a pool, it initiates a resilver.
What is it actually doing, though? Isn't the slog a copy of the
in-memory intent log? Wouldn't it just simply replicate the data that's
in the other log, checked against what's in RAM? And presumably there
isn't that much data in t
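For reference, the sequence in question, sketched with hypothetical device names:

    # attach a mirrored log device to an existing pool
    zpool add tank log mirror c2t0d0 c2t1d0
    # the resilver being asked about shows up here
    zpool status -v tank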
> looks similar to a crash I had here at our site a few months ago. Same
> symptoms, no actual solution. We had to recover from an rsync backup
> server.
Thanks Carsten. And on Sun hardware, too. Boy, that's comforting.
Three way mirrors anyone?
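For what it's worth, a three-way mirror is a one-line change, sketched here with hypothetical disks:

    # three-way mirrored vdev: survives any two disk failures in the vdev
    zpool create tank mirror c1t0d0 c1t1d0 c1t2d0
    # or grow an existing two-way mirror (c1t0d0/c1t1d0) by attaching a third side
    zpool attach tank c1t0d0 c1t2d0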
> > All of this would be ok... except THOSE ARE THE ONLY DEVICES THAT WERE
> > PART OF THE POOL. How can it be missing a device that didn't exist?
>
> The device(s) in question are probably the logs you refer to here:
There is a log, with a different GUID, from another pool from long ago.
It isn'
I have a bunch of sol10U8 boxes with ZFS pools, most all raidz2 8-disk
stripe. They're all supermicro-based with retail LSI cards.
I've noticed a tendency for things to go a little bonkers during the
weekly scrub (they all scrub over the weekend), and that's when I'll
lose a disk here and there. O
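For context, the kind of schedule being described, sketched as a cron-driven weekly scrub of a hypothetical pool tank:

    # root crontab: start a scrub every Saturday at 02:00
    0 2 * * 6 /usr/sbin/zpool scrub tank
    # afterwards, report only pools with problems
    zpool status -x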
> I'm using iozone to get some performance numbers and I/O hangs when
> it's doing the writing phase.
>
> This pool has:
>
> 18 x 2TB SAS disks as 9 data mirrors
> 2 x 32GB X-25E as log mirror
> 1 x 160GB X-160M as cache
>
> iostat shows "2" I/O operations active and SSDs at 100% busy when
> it'
Gack, that's the same message we're seeing with the mpt controller with
SATA drives. I've never seen it with a SAS drive before.
Has anyone noticed a trend of 2TB SATA drives en masse not working well
with the LSI SASx28/x36 expander chips? I can seemingly reproduce it on
demand - hook > 4 2TB d
> > Have I missed any changes/updates in the situation?
>
> I've been getting very bad performance out of an LSI 9211-4i card
> (mpt_sas) with Seagate Constellation 2TB SAS disks, SM SC846E1 and
> Intel X-25E/M SSDs. Long story short, I/O will hang for over 1 minute
> at random under heavy load.
Hm
> The term 'stripe' has been so outrageously severely abused in this
> forum that it is impossible to know what someone is talking about when
> they use the term. Seemingly intelligent people continue to use wrong
> terminology because they think that protracting the confusion somehow
> helps new
I know that this has been well-discussed already, but it's been a few months -
WD caviars with mpt/mpt_sas generating lots of retryable read errors, spitting
out lots of beloved "Log info 3108 received for target" messages, and just
generally not working right.
(SM 836EL1 and 836TQ chassi
Is there a version of lsiutil that works for the LSI2008 controllers? I have a
mix of both, and lsiutil is nifty, but not as nifty if it only works on half my
controllers. :)