if you get rid of the HBA and log device, and run with ZIL
> disabled (if your workload is compatible with a disabled ZIL).
By "get rid of the HBA" I assume you mean put in a battery-backed RAID
card instead?
-J
I just witnessed a resilver that took 4 hours for 27 GB of data. The setup is 3x RAID-Z2
stripes with 6 disks per RAID-Z2. Disks are 500 GB in size. No checksum errors.
It seems like an exorbitantly long time. The other 5 disks in the stripe with
the replaced disk were at 90% busy and ~150 IO/s each during
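(For reference, the busy/IOPS numbers above are the sort of thing you can watch during a resilver with the stock tools; the pool name here is just an example:
   zpool status tank     # resilver progress and estimated completion
   iostat -xn 5          # per-disk %b and ops/s
)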
Upgrading is definitely an option. What is the current snv favorite for ZFS
stability? I apologize, with all the Oracle/Sun changes I haven't been paying
as close attention to bug reports on zfs-discuss as I used to.
-J
Sent via iPhone
On Sep 26, 2010, at 10:22, Roy
134 it is. This is an OpenSolaris rig that's going to be replaced within the
next 60 days, so just need to get it to something that won't throw false
checksum errors like the 120-123 builds do, and has decent rebuild times.
Future boxes will be NexentaStor.
Thank you guys. :)
-J
On Sun, Sep 26
Err...I meant Nexenta Core.
-J
On Mon, Sep 27, 2010 at 12:02 PM, Jason J. W. Williams <
jasonjwwilli...@gmail.com> wrote:
> 134 it is. This is an OpenSolaris rig that's going to be replaced within
> the next 60 days, so just need to get it to something that won't throw
If one was sticking with OpenSolaris for the short term, is something older
than 134 more stable/less buggy? Not using de-dupe.
-J
On Thu, Sep 23, 2010 at 6:04 PM, Richard Elling wrote:
> Hi Charles,
> There are quite a few bugs in b134 that can lead to this. Alas, due to the
> new
> regime, the
Hi,
I just replaced a drive (c12t5d0 in the listing below). For the first 6
hours of the resilver I saw no issues. However, sometime during the last
hour of the resilver, the new drive and two others in the same RAID-Z2 stripe
threw a couple of checksum errors. Also, two of the other drives in the str
Thanks Tuomas. I'll run the scrub. It's an aging X4500.
-J
On Thu, Sep 30, 2010 at 3:25 AM, Tuomas Leikola wrote:
> On Thu, Sep 30, 2010 at 9:08 AM, Jason J. W. Williams <
> jasonjwwilli...@gmail.com> wrote:
>
>>
>> Should I be worried about these check
Just for history as to why Fishworks was running on this box...we were
in the beta program and have upgraded along the way. This box is an
X4240 with 16x 146GB disks running the Feb 2010 release of FW with
de-dupe.
We were getting ready to re-purpose the box and getting our data off.
We then delet
WD's drives have gotten better over the last few years, but their quality is still
not very good. I doubt they test their drives extensively for heavy-duty server
configs, particularly since you don't see them inside any of the major server
manufacturers' boxes.
Hitachi in particular does well in mas
This might be related to your issue:
http://blog.mpecsinc.ca/2010/09/western-digital-re3-series-sata-drives.html
On Saturday, August 6, 2011, Roy Sigurd Karlsbakk wrote:
>> In my experience, SATA drives behind SAS expanders just don't work.
>> They "fail" in the manner you
>> describe, sooner or
Since iSCSI is block-level, I don't think the iSCSI intelligence at
the file level you're asking for is feasible. VSS is used at the
file-system level on either NTFS partitions or over CIFS.
-J
On Wed, Jan 7, 2009 at 5:06 PM, Mr Stephen Yum wrote:
> Hi all,
>
> If I want to make a snapshot of an
Hi Neal,
We've been getting pretty good performance out of RAID-Z2 with 3x
6-disk RAID-Z2 stripes. More stripes mean better performance all
around...particularly on random reads. But as a file-server that's
probably not a concern. With RAID-Z2 it seems to me that 2 hot spares are
more than sufficient, but I
I believe the SmartArray is an LSI like the Dell PERC isn't it?
Best Regards,
Jason
On 1/23/07, Robert Suh <[EMAIL PROTECTED]> wrote:
People trying to hack together systems might want to look
at the HP DL320s
http://h10010.www1.hp.com/wwpc/us/en/ss/WF05a/15351-241434-241475-241475-f79-3232017
Hi Peter,
Perhaps I'm a bit dense, but I've been befuddled by the x+y notation
myself. Is it X stripes consisting of Y disks?
Best Regards,
Jason
On 1/23/07, Peter Tribble <[EMAIL PROTECTED]> wrote:
On 1/23/07, Neal Pollack <[EMAIL PROTECTED]> wrote:
> Hi: (Warning, new zfs user question
Hi Peter,
Ah! That clears it up for me. Thank you.
Best Regards,
Jason
On 1/23/07, Peter Tribble <[EMAIL PROTECTED]> wrote:
On 1/23/07, Jason J. W. Williams <[EMAIL PROTECTED]> wrote:
> Hi Peter,
>
> Perhaps I'm a bit dense, but I've been befuddled by the
Hi All,
This is a bit off-topic...but since the Thumper is the poster child
for ZFS, I hope it's not too off-topic.
What are the actual origins of the Thumper? I've heard varying stories
in word and print. It appears that the Thumper was the original server
Bechtolsheim designed at Kealia as a mas
Hi Prashanth,
My company did a lot of LVM+XFS vs. SVM+UFS testing in addition to
ZFS. Overall, LVM's overhead is abysmal. We witnessed performance hits
of 50%+. SVM only reduced performance by about 15%. ZFS was similar,
though a tad higher.
Also, my understanding is you can't write to a ZFS sna
Wow. That's an incredibly cool story. Thank you for sharing it! Does
the Thumper today pretty much resemble what you saw then?
Best Regards,
Jason
On 1/23/07, Bryan Cantrill <[EMAIL PROTECTED]> wrote:
> This is a bit off-topic...but since the Thumper is the poster child
> for ZFS I hope its no
Hi Prashanth,
This was about a year ago. I believe I ran bonnie++ and IOzone tests.
Tried also to simulate an OLTP load. The 15-20% overhead for ZFS was
vs. UFS on a raw disk...UFS on SVM was almost exactly 15% lower
performance than raw UFS. UFS and XFS on raw disk were pretty similar
in terms o
'Cept the 3511 is highway robbery for what you get. ;-)
Best Regards,
Jason
On 1/24/07, Richard Elling <[EMAIL PROTECTED]> wrote:
Peter Eriksson wrote:
>> too much of our future roadmap, suffice it to say that one should expect
>> much, much more from Sun in this vein: innovative software and i
Hi Wee,
Having snapshots in the filesystem that work so well is really nice.
How are y'all quiescing the DB?
Best Regards,
J
On 1/24/07, Wee Yeh Tan <[EMAIL PROTECTED]> wrote:
On 1/25/07, Bryan Cantrill <[EMAIL PROTECTED]> wrote:
> ...
> after all, what was ZFS going to do with that expensive
Hi Jeff,
We're running a FLX210 which I believe is an Engenio 2884. In our case
it also is attached to a T2000. ZFS has run VERY stably for us with
data integrity issues at all.
We did have a significant latency problem caused by ZFS flushing the
write cache on the array after every write, but t
Correction: "ZFS has run VERY stably for us with data integrity
issues at all." should read "ZFS has run VERY stably for us with NO
data integrity issues at all."
On 1/26/07, Jason J. W. Williams <[EMAIL PROTECTED]> wrote:
Hi Jeff,
We're running a FLX210 whi
To be fair, you can replace vdevs with same-sized or larger vdevs online.
The issue is that you cannot replace with smaller vdevs nor can you
eliminate vdevs. In other words, I can migrate data around without
downtime, I just can't shrink or eliminate vdevs without send/recv.
This is where the ph
You could use SAN zoning of the affected LUNs to keep multiple hosts
from seeing the zpool. When failover time comes, you change the zoning
to make the LUNs visible to the new host, then import. When the old
host reboots, it won't find any zpool. Better safe than sorry
Or change the LUN
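(A rough sketch of the import side of that failover; the pool name is hypothetical, and -f is only needed if the old host went down without exporting:
   zpool import          # list pools visible on the newly-zoned LUNs
   zpool import -f tank  # force-import a pool that was not cleanly exported
)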
Could the replication engine eventually be integrated more tightly
with ZFS? That would be slick alternative to send/recv.
Best Regards,
Jason
On 1/26/07, Jim Dunham <[EMAIL PROTECTED]> wrote:
Project Overview:
I propose the creation of a project on opensolaris.org, to bring to the community
Hi Jeff,
Maybe I mis-read this thread, but I don't think anyone was saying that
using ZFS on top of an intelligent array risks more corruption. Given
my experience, I wouldn't run ZFS without some level of redundancy,
since it will panic your kernel in a RAID-0 scenario where it detects
a LUN is
Thank you for the detailed explanation. It is very helpful to
understand the issue. Is anyone successfully using SNDR with ZFS yet?
Best Regards,
Jason
On 1/26/07, Jim Dunham <[EMAIL PROTECTED]> wrote:
Jason J. W. Williams wrote:
> Could the replication engine eventually be integr
Hi Guys,
I seem to remember the Massive Array of Independent Disk guys ran into
a problem I think they called static friction, where idle drives would
fail on spin up after being idle for a long time:
http://www.eweek.com/article2/0,1895,1941205,00.asp
Would that apply here?
Best Regards,
Jason
Hi Jim,
Thank you very much for the heads up. Unfortunately, we need the
write-cache enabled for the application I was thinking of combining
this with. Sounds like SNDR and ZFS need some more soak time together
before you can use both to their full potential?
Best Regards,
Jason
On 1/2
ork. Would dramatically cut down on the power. What do y'all think?
Best Regards,
Jason
On 1/29/07, Toby Thain <[EMAIL PROTECTED]> wrote:
On 29-Jan-07, at 11:02 PM, Jason J. W. Williams wrote:
> Hi Guys,
>
> I seem to remember the Massive Array of Independent Disk guys ran
Hi Nicholas,
ZFS itself is very stable and very effective as a fast FS in our
experience. If you browse the archives of the list you'll see that NFS
performance is pretty acceptable, with some performance/RAM quirks
around small files:
http://www.opensolaris.org/jive/message.jspa?threadID=19858
ht
Hi Nicholas,
Actually Virtual Iron; they have a nice system at the moment with live
migration of Windows guests.
Ah. We looked at them for some Windows DR. They do have a nice product.
3. Which leads to: coming from Debian, how easy is system updates? I
remember with OpenBSD system updates u
Hi Eric,
Everything Mark said.
We as a customer ran into this running MySQL on a Thumper (and T2000).
We solved it on the Thumper by limiting the ARC to 4GB:
/etc/system: set zfs:zfs_arc_max = 0x100000000 #4GB
This has worked marvelously over the past 50 days. The ARC stays
around 5-6GB now. L
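(A quick way to confirm the cap took effect and to watch actual ARC usage afterwards, assuming the arcstats kstats are present on your build:
   kstat -p zfs:0:arcstats:c_max   # the configured ceiling
   kstat -p zfs:0:arcstats:size    # current ARC size
)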
Hi Przemol,
I think Casper had a good point bringing up the data integrity
features when using ZFS for RAID. Big companies do a lot of things
"just because that's the certified way" that end up biting them in the
rear. Trusting your SAN arrays is one of them. That all being said,
the need to do m
Hi Gino,
We've noticed similar strangeness with ZFS on MPXIO. If you actively
fail over the path, everything works hunky dory. However, if one of
the paths disappears unexpectedly (i.e. FC switch dies...or an array
controller konks out) then ZFS will panic. UFS on MPXIO in a similar
situation do
Hi Gino,
Was there more than one LUN in the RAID-Z using the port you disabled?
-J
On 2/26/07, Gino Ruopolo <[EMAIL PROTECTED]> wrote:
Hi Jason,
Saturday we made some tests and found that disabling an FC port under heavy load
(MPXIO enabled) often leads to a panic. (using a RAID-Z !)
No prob
-)
-J
On 2/27/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
On Thu, Feb 22, 2007 at 12:21:50PM -0700, Jason J. W. Williams wrote:
> Hi Przemol,
>
> I think Casper had a good point bringing up the data integrity
> features when using ZFS for RAID. Big companies do a lot of
Hi Brian,
To my understanding the X2100 M2 and X2200 M2 are basically the same
board OEM'd from Quanta...except the 2200 M2 has two sockets.
As to ZFS and their weirdness, it would seem to me that fixing it
would be more an issue of the SATA/SCSI driver. I may be wrong here.
-J
On 3/12/07, Bri
Hi Marty,
We'd love to beta the driver. Currently, we have 5 X2100 M2s in
production and 1 in development.
Best Regards,
Jason
On 3/12/07, Marty Faltesek <[EMAIL PROTECTED]> wrote:
On Mon, 2007-03-12 at 12:14 -0700, Frank Cusack wrote:
> On March 12, 2007 2:50:14 PM -0400 Rayson Ho <[EMAIL PROTEC
Hi Jim,
My understanding is that the DNLC can consume quite a bit of memory
too, and the ARC limitations (and memory culler) don't clean the DNLC
yet. So if you're working with a lot of smaller files, you can still
go way over your ARC limit. Anyone, please correct me if I've got that
wrong.
-J
Hi Rainer,
While I would recommend upgrading to Build 54 or newer to use the
system tunable, it's not that big of a deal to set the ARC on boot up.
We've done it on a T2000 for a while, until we could take it down for
an extended period of time to upgrade it.
Definitely WOULD NOT run a database on
Hi Guys,
Rather than starting a new thread I thought I'd continue this thread.
I've been running Build 54 on a Thumper since mid-January and wanted
to ask a question about the zfs_arc_max setting. We set it to
"0x100000000 #4GB"; however, it's creeping over that till our kernel
memory usage is nea
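(A rough way to see where the kernel memory is actually going, assuming the ::memstat and ::arc dcmds are available in mdb on that build:
   echo ::memstat | mdb -k   # overall kernel memory breakdown
   echo ::arc | mdb -k       # ARC size versus its configured max
)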
Hey All,
Is it possible (or even technically feasible) for zfs to have a
"destroy to" feature? Basically destroy any snapshot older than a
certain date?
Best Regards,
Jason
Hi Mark,
Thank you very much. That's what I was kind of afraid of. It's fine to
script it; it just would be nice to have a built-in function. :-) Thank
you again.
Best Regards,
Jason
On 5/11/07, Mark J Musante <[EMAIL PROTECTED]> wrote:
On Fri, 11 May 2007, Jason J. W. Williams wrot
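(For anyone scripting it, a minimal sketch; the dataset name and cutoff are hypothetical, and it assumes a build where "zfs get -p" can emit the creation time as an epoch value:
   CUTOFF=1178841600   # destroy snapshots created before this epoch time
   zfs list -H -t snapshot -o name -r tank/fs | while read SNAP; do
     CREATED=`zfs get -Hp -o value creation "$SNAP"`
     [ "$CREATED" -lt "$CUTOFF" ] && zfs destroy "$SNAP"
   done
)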
Hello All,
Awhile back (Feb '07) when we noticed ZFS was hogging all the memory
on the system, y'all were kind enough to help us use the arc_max
tunable to attempt to limit that usage to a hard value. Unfortunately,
at the time a sticky problem was that the hard limit did not include
DNLC entries
Hi Dale,
We're testing out the enhanced arc_max enforcement (track DNLC
entries) using Build 72 right now. Hopefully, it will fix the memory
creep, which is the only real downside to ZFS for DB work it seems to
me. Frankly, our DB loads have improved performance with ZFS. I
suspect it's because
Hey Guys,
It's not possible yet to fracture a snapshot or clone into a
self-standing filesystem, is it? Basically, I'd like to fracture a
snapshot/clone into its own FS so I can roll back past that snapshot in
the original filesystem and still keep that data.
Thank you in advance.
Best Regards,
Jaso
A (getting H), promote H,
> then delete C, D, and E. That would leave you with:
>
> A -- H
> \
> -- B -- F -- G
>
> Is that anything at all like what you're after?
>
>
> --Bill
>
> On Wed, Oct 17, 2007 at 10:00:03PM -0600, Jason J. W. Williams wrote:
>
F).
4.) Promote clone_B.
5.) If clone_Bs data doesn't work out, promote clone_F to roll forward.
Thank you in advance.
Best Regards,
Jason
On 10/18/07, Jason J. W. Williams <[EMAIL PROTECTED]> wrote:
> Hi Bill,
>
> You've got it 99%. I want to roll E back to say B, and keep G
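(The mechanics being discussed, in command form; the names are hypothetical:
   zfs clone tank/fs@B tank/fs_B   # clone off the older snapshot
   zfs promote tank/fs_B           # clone becomes the parent; snapshots up to @B move with it
After the promote, the original filesystem becomes a clone of tank/fs_B@B, so both lines of history stay intact.)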
> A quick Google of ext3 fsck did not yield obvious examples of why people
> needed to run fsck on ext3, though it did remind me that by default ext3 runs
> fsck just for the hell of it every N (20?) mounts - could that have been part
> of what you were seeing?
I'm not sure if that's what Robe
Hi Guys,
Someone asked me how to count the number of inodes/objects in a ZFS
filesystem and I wasn't exactly sure. "zdb -dv <filesystem>" seems
like a likely candidate but I wanted to find out for sure. As to why
you'd want to know this, I don't know their reasoning but I assume it
has to do with the maximum
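(For reference, the dataset header alone is usually enough for a count, without dumping every object; the dataset name is just an example:
   zdb -d tank/fs     # header line ends with the current object count
   zdb -dv tank/fs    # same, plus a line per object (much more output)
)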
Hi Darren,
> Ah, your "CPU end" was referring to the NFS client cpu, not the storage
> device CPU. That wasn't clear to me. The same limitations would apply
> to ZFS (or any other filesystem) when running in support of an NFS
> server.
>
> I thought you were trying to describe a qualitative diff
Hey Guys,
Have any of y'all seen a condition where the ILOM considers a disk
faulted (status is 3 instead of 1), but ZFS keeps writing to the disk
and doesn't report any errors? I'm going to do a scrub tomorrow and
see what comes back. I'm curious what caused the ILOM to fault the
disk. Any advice
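(For reference, a rough checklist around that scrub; the pool name is just an example:
   zpool scrub tank && zpool status -v tank   # any checksum/read/write errors?
   fmdump -eV | less                          # what the fault manager logged against the disk
)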
ount seems a
little fishy...like iostat -E doesn't like the X4500 for some reason.
Thank you again for your help.
Best Regards,
Jason
On Dec 4, 2007 2:54 AM, Ralf Ramge <[EMAIL PROTECTED]> wrote:
> Jason J. W. Williams wrote:
> > Have any of y'all seen a condition where
Seconded. Redundant controllers means you get one controller that
locks them both up, as much as it means you've got backup.
Best Regards,
Jason
On Mar 21, 2007 4:03 PM, Richard Elling <[EMAIL PROTECTED]> wrote:
> JS wrote:
> > I'd definitely prefer owning a sort of SAN solution that would basica
Hello,
There seems to be a persistent issue we have with ZFS where one of the
SATA disk in a zpool on a Thumper starts throwing sense errors, ZFS
does not offline the disk and instead hangs all zpools across the
system. If it is not caught soon enough, application data ends up in
an inconsistent s
Hi Albert,
Thank you for the link. ZFS isn't offlining the disk in b77.
-J
On Jan 3, 2008 3:07 PM, Albert Chin
<[EMAIL PROTECTED]> wrote:
>
> On Thu, Jan 03, 2008 at 02:57:08PM -0700, Jason J. W. Williams wrote:
> > There seems to be a persistent issue we have with ZFS wh
timeouts independent
>of SCSI timeouts.
>
> Neither of these is trivial, and both potentially compromise data
> integrity, hence the lack of such features. There's no easy solution to
> the problem, but we're happy to hear ideas.
>
> - Eric
>
> On Thu, Jan 0
se situations?
> How about '::zio_state'?
>
> - Eric
>
>
> On Thu, Jan 03, 2008 at 03:11:39PM -0700, Jason J. W. Williams wrote:
> > Hi Albert,
> >
> > Thank you for the link. ZFS isn't offlining the disk in b77.
> >
> > -J
> >
Hey Y'all,
I've posted the program (SnapBack) my company developed internally for
backing up production MySQL servers using ZFS snapshots:
http://blogs.digitar.com/jjww/?itemid=56
Hopefully, it'll save other folks some time. We use it a lot for
standing up new MySQL slaves as well.
Best Regards,
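(The general pattern a tool like this automates looks roughly like the following sketch; it is not SnapBack's actual code, and the dataset name is hypothetical. The read lock has to be held while the snapshot is taken, hence running it from inside the mysql session:
   mysql <<'EOF'
   FLUSH TABLES WITH READ LOCK;
   system zfs snapshot tank/mysql@backup
   UNLOCK TABLES;
   EOF
A real tool would also record the binary log position, e.g. SHOW MASTER STATUS, for standing up slaves.)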
Hey Thiago,
SVM is a direct replacement for LVM. Also, you'll notice about a 30%
performance boost if you move from LVM to SVM. At least we did when we
moved a couple of years ago.
-J
On Jan 21, 2008 8:09 AM, Thiago Sobral <[EMAIL PROTECTED]> wrote:
> Hi folks,
>
> I need to manage volumes like
It'd be a really nice feature. Combined with baked-in replication it
would be a nice alternative to our DD appliances.
-J
On Jan 21, 2008 2:03 PM, John Martinez <[EMAIL PROTECTED]> wrote:
>
> Great question. I've been wondering this myself over the past few
> weeks, as de-dup is becoming more pop
X4500 problems seconded. Still having issues with port resets due to
the Marvell driver. Though they seem considerably more transient and
less likely to lock up the entire system in the most recent (>b72)
OpenSolaris builds.
-J
On Feb 12, 2008 9:35 AM, Carson Gaspar <[EMAIL PROTECTED]> wrote:
>
Hi Robert,
Out of curiosity would it be possible to see the same test but hitting
the disk with write operations instead of read?
Best Regards,
Jason
On 11/2/06, Robert Milkowski <[EMAIL PROTECTED]> wrote:
Hello zfs-discuss,
Server: x4500, 2x Opteron 285 (dual-core), 16GB RAM, 48x500GB
file
Hi Louwtjie,
Are you running FC or SATA-II disks in the 6140? How many spindles too?
Best Regards,
Jason
On 11/3/06, Louwtjie Burger <[EMAIL PROTECTED]> wrote:
Hi there
I'm busy with some tests on the above hardware and will post some scores soon.
For those that do _not_ have the above avail
Hi Yuen,
Not to my knowledge. I believe this project is working on it though:
http://zfs-on-fuse.blogspot.com/
Best Regards,
Jason
On 11/6/06, Yuen L. Lee <[EMAIL PROTECTED]> wrote:
I'm curious whether there is a version of Linux 2.6 ZFS available?
Many thanks.
Hi there,
I've been comparing using the ZFS send/receive function over SSH to
simply scp'ing the contents of a snapshot, and have found for me the
performance is 2x faster for scp.
Has anyone else noticed ZFS send/receive to be noticeably slower?
Best Regards,
Jason
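(For context, the two approaches being compared look roughly like this; the host, pool, and snapshot names are hypothetical. In practice the SSH cipher is often the bottleneck, so it's worth ruling that out:
   zfs send tank/fs@snap | ssh backuphost zfs receive backup/fs
   scp -r /tank/fs/.zfs/snapshot/snap/ backuphost:/backup/fs/
)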
Listman,
What's the average size of your files? Do you have many file
deletions/moves going on? I'm not that familiar with how Perforce
handles moving files around.
XFS is bad at small files (worse than most file systems), as SGI
optimized it for larger files (> 64K). You might see a performance
hat's going on.
Best Regards,
Jason
On 11/15/06, Darren J Moffat <[EMAIL PROTECTED]> wrote:
Jason J. W. Williams wrote:
> Hi there,
>
> I've been comparing using the ZFS send/receive function over SSH to
> simply scp'ing the contents of snapshot, and have found
Do both RAID-Z and Mirror redundancy use checksums on ZFS? Or just RAID-Z?
Thanks in advance,
J
On 11/28/06, David Dyer-Bennet <[EMAIL PROTECTED]> wrote:
On 11/28/06, Elizabeth Schwartz <[EMAIL PROTECTED]> wrote:
> So I rebuilt my production mail server as Solaris 10 06/06 with zfs, it ran
> fo
Hello,
Is it possible to non-destructively change RAID types in zpool while
the data remains on-line?
-J
sentially I've got 12 disks to work with.
Anyway, long form of trying to convert from RAID-Z to RAID-1. Any help
is much appreciated.
Best Regards,
Jason
On 11/28/06, Richard Elling <[EMAIL PROTECTED]> wrote:
Jason J. W. Williams wrote:
> Is it possible to non-destructively change RAI
from 200 to 20 once we
cut the replication. Since the masters and slaves were using the same
volume groups and RAID-Z was striping across all of them on both
the masters and slaves, I think this was a big problem.
Any comments?
Best Regards,
Jason
On 11/29/06, Richard Elling <[EMAIL PROT
to me that there is some detailed information which would
be needed for a full analysis. So, to keep the ball rolling, I'll
respond generally.
Jason J. W. Williams wrote:
> Hi Richard,
>
> Been watching the stats on the array and the cache hits are < 3% on
> these volumes. We
Hi all,
Having experienced this, it would be nice if there was an option to
offline the filesystem instead of kernel panicking on a per-zpool
basis. If it's a system-critical partition like a database, I'd prefer
it to kernel panic and thereby trigger a failover of the
application. However, if it
Any chance we might get a short refresher warning when creating a
striped zpool? O:-)
Best Regards,
Jason
On 12/4/06, Matthew Ahrens <[EMAIL PROTECTED]> wrote:
Jason J. W. Williams wrote:
> Hi all,
>
> Having experienced this, it would be nice if there was an option to
> offl
Hi Luke,
We've been using MPXIO (STMS) with ZFS quite solidly for the past few
months. Failover is instantaneous when a write operation occurs
after a path is pulled. Our environment is similar to yours, dual-FC
ports on the host, and 4 FC ports on the storage (2 per controller).
Depending on y
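(For anyone setting up the same thing, MPxIO/STMS is enabled host-wide with stmsboot, roughly:
   stmsboot -e   # enable STMS (MPxIO); requires a reboot
   stmsboot -L   # list the non-STMS to STMS device name mappings
)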
lped prove it wasn't ZFS or the storage.
Does this help?
Best Regards,
Jason
On 12/6/06, Douglas Denny <[EMAIL PROTECTED]> wrote:
On 12/6/06, Jason J. W. Williams <[EMAIL PROTECTED]> wrote:
> We've been using MPXIO (STMS) with ZFS quite solidly for the past few
> mon
<[EMAIL PROTECTED]> wrote:
On 12/6/06, Jason J. W. Williams <[EMAIL PROTECTED]> wrote:
> The configuration is a T2000 connected to a StorageTek FLX210 array
> via Qlogic QLA2342 HBAs and Brocade 3850 switches. We currently RAID-Z
> the LUNs across 3 array volume groups. For performan
ng with clusters, we have a 5 minute
failover requirement on the entire cluster to move.
Therefore, it would be ideal to not have STMS(mpxio)
enabled on the machines.
Luke Schwab
--- "Jason J. W. Williams" <[EMAIL PROTECTED]>
wrote:
> Hi Doug,
>
> The configuration is a
Hi Luke,
Is the 4884 using two or four ports? Also, how many FSs are involved?
Best Regards,
Jason
On 12/6/06, Luke Schwab <[EMAIL PROTECTED]> wrote:
I, too, experienced a long delay while importing a zpool on a second machine. I
do not have any filesystems in the pool. Just the Solaris 10 Op
Hi Dale,
Are you using MyISAM or InnoDB? Also, what's your zpool configuration?
Best Regards,
Jason
On 12/7/06, Dale Ghent <[EMAIL PROTECTED]> wrote:
Hey all, I run a netra X1 as the mysql db server for my small
personal web site. This X1 has two drives in it with SVM-mirrored UFS
slices for
resides
instead of zfs looking through every disk attached to
the machine.
Luke Schwab
--- "Jason J. W. Williams" <[EMAIL PROTECTED]>
wrote:
> Hey Luke,
>
> Do you have IM?
>
> My Yahoo IM ID is [EMAIL PROTECTED]
> -J
>
> On 12/6/06, Luke Schwab <[EMAIL P
That's gotta be what it is. All our MySQL IOP issues have gone away
once we moved to RAID-1 from RAID-Z.
-J
On 12/7/06, Anton B. Rang <[EMAIL PROTECTED]> wrote:
This does look like the ATA driver bug rather than a ZFS issue per se.
(For the curious, the reason ZFS triggers this when UFS doesn't
Hi Dale,
For what it's worth, the SX releases tend to be pretty stable. I'm not
sure if snv_52 has made an SX release yet. We ran for over 6 months on
SX 10/05 (snv_23) with no downtime.
Best Regards,
Jason
On 12/7/06, Dale Ghent <[EMAIL PROTECTED]> wrote:
On Dec 7, 2006, at 6:14 PM, Anton B. Ra
Hi Luke,
I wonder if it is the HBA. We had issues with Solaris and LSI HBAs
back when we were using an Xserve RAID.
Haven't had any of the issues you're describing between our LSI array
and the Qlogic HBAs we're using now.
If you have another type of HBA I'd try it. MPXIO and ZFS haven't ever
c
pr 29 2006 c1t0d0s1 ->
/dev/dsk/c1t0d0s1
lrwxrwxrwx 1 cx158393 staff 18 Apr 29 2006 c1t16d0s1 ->
/dev/dsk/c1t16d0s1
lrwxrwxrwx 1 cx158393 staff 18 Apr 29 2006 c1t17d0s1 ->
/dev/dsk/c1t17d0s1
Then:
zpool import -d /mydkslist mypool
Hope that helps...
-r
Hi Kory,
It depends on the capabilities of your array in our experience...and
also the zpool type. If you're going to do RAID-Z in a write intensive
environment you're going to have a lot more I/Os with three LUNs than
a single large LUN. Your controller may go nutty.
Also, (Richard can address
Hi Folks,
Roch Bourbonnais and Richard Elling helped me tremendously with the
issue of ZFS killing performance on arrays with battery-backed cache.
Since this seems to have been mentioned a bit recently, and there are
no instructions on how to fix it on Sun StorageTek/Engenio arrays, I
wanted to
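(On newer Nevada builds there is also a host-side switch for this; a one-line sketch, assuming the zfs_nocacheflush tunable exists on your build:
   /etc/system:
   set zfs:zfs_nocacheflush = 1   # stop ZFS sending cache-flush requests to the array
The older alternative was configuring the array itself to ignore cache-sync requests.)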
Hi Jeremy,
It would be nice if you could tell ZFS to turn off fsync() for ZIL
writes on a per-zpool basis. That being said, I'm not sure there's a
consensus on that...and I'm sure not smart enough to be a ZFS
contributor. :-)
The behavior is a reality we had to deal with and workaround, so I
pos
It seems to me that the optimal scenario would be network filesystems
on top of ZFS, so you can get the data portability of a SAN, but let
ZFS make all of the decisions. Short of that, ZFS on SAN-attached
JBODs would give a similar benefit. Having benefited tremendously from
being able to easily d
I do see this note in the 3511 documentation: "Note - Do not use a Sun StorEdge 3511
SATA array to store single instances of data. It is more suitable for use in
configurations where the array has a backup or archival role."
My understanding of this particular scare-tactic wording (its also in
> Shouldn't there be a big warning when configuring a pool
> with no redundancy and/or should that not require a -f flag ?
why? what if the redundancy is below the pool .. should we
warn that ZFS isn't directly involved in redundancy decisions?
Because if the host controller port goes flaky an
Hi Roch,
That sounds like a most excellent resolution to me. :-) I believe
Engenio devices support SBC-2. It seems to me making intelligent
decisions for end-users is generally a good policy.
Best Regards,
Jason
On 12/19/06, Roch - PAE <[EMAIL PROTECTED]> wrote:
Jason J. W. Williams
Hi Robert,
I don't think its about assuming the admin is an idiot. It happened to
me in development and I didn't expect it...I hope I'm not an idiot.
:-)
Just observing the list, a fair amount of people don't expect it. The
likelihood you'll miss this one little bit of very important
information
Hi Robert
I didn't take any offense. :-) I completely agree with you that zpool
striping leverages standard RAID-0 knowledge in that if a device
disappears your RAID group goes poof. That doesn't really require a
notice...was just trying to be complete. :-)
The surprise to me was that detecting
n nice
behavior to have, since popping and reinserting triggered a rebuild of
the drive.
Best Regards,
Jason
On 12/19/06, Toby Thain <[EMAIL PROTECTED]> wrote:
On 19-Dec-06, at 2:42 PM, Jason J. W. Williams wrote:
>> I do see this note in the 3511 documentation: "Note - Do not
Hi Robert,
I agree with others here that the kernel panic is undesired behavior.
If ZFS would simply offline the zpool and not kernel panic, that would
obviate my request for an informational message. It'd be pretty darn
obvious what was going on.
Best Regards,
Jason
On 12/20/06, Robert Milkows