Even if it might not be the best technical solution, I think what a lot of
people are looking for when this comes up is a knob they can use to say "I only
want X IOPS per vdev" (in addition to low prioritization) while scrubbing.
Having such a knob would probably help them feel more at ease that the
Someone on this list threw out the idea a year or so ago to just set up 2
ramdisk servers, export a ramdisk from each, and create a mirrored slog from them.
Assuming newer-version zpools, this sounds like it could be even safer, since
there is (supposedly) less of a chance of catastrophic failure if
40k IOPS sounds like "best case, you'll never see it in the real world"
marketing to me. There are a few benchmarks if you Google, and they all seem to
indicate the performance is within about +/- 10% of an Intel X25-E. I would
personally trust Intel over one of these drives.
Is it even possible
On the PCIe side, I noticed there's a new card coming from LSI that claims
150,000 random 4k write IOPS. Unfortunately, this might end up being an OEM-only
card.
I also noticed on the DDRdrive site that they now have an OpenSolaris driver and
are offering it in a beta program.
Is there a best practice for keeping a backup of the zpool.cache file? Is it
possible? Does it change with changes to vdevs?
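For what it's worth, a minimal sketch of one approach (the destination path is a placeholder): the cache file lives at /etc/zfs/zpool.cache and is rewritten whenever the vdev configuration changes, so any copy is only current until the next change.

```shell
# Keep a dated copy of the cache file after any vdev change.
cp /etc/zfs/zpool.cache /backup/zpool.cache.$(date +%Y%m%d)
```

Note that `zpool import` can also rebuild a pool's configuration by scanning the devices, so the cache file is a convenience for boot-time import rather than the only record of the pool layout.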
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/
Very interesting. This could be useful for a number of us. Would you be willing
to share your work?
Hi,
I have a raidz1 consisting of six 5400 RPM drives in this zpool. I have stored
some media in one FS, and about 200k files in another. Neither FS is written to
much. The pool is 85% full.
Could this issue also be the reason that playback lags when I am playing
(reading) some media?
OSOL ips_111
No snapshots running. I have only 21 filesystems mounted. Blocksize is the
default. I don't think the disks are slow, because I get read and write rates of
about 350 MB/s. The BIOS is the latest, and I also tried splitting the pool
across two controllers; none of this helped.
which gap?
http://mail.opensolaris.org/pipermail/onnv-notify/2009-July/009872.html
For the second bug, it's the same link as in the first post.
Hmm... I guess that's what I've heard as well.
I do run compression and believe a lot of others do as well. So then, it seems
to me that if I have guests that run a filesystem formatted with 4k blocks, for
example, I'm inevitably going to have this overlap when using ZFS network
storage?
So if
> I think it is a great idea, assuming the SSD has good write performance.
> This one claims up to 230MB/s read and 180MB/s write and it's only $196.
>
> http://www.newegg.com/Product/Product.aspx?Item=N82E16820609393
>
> Compared to this one (250MB/s read and 170MB/s write) which is $699.
>
> A
That is an interesting bit of kit. I wish a "white box" manufacturer would
create something like this (hint, hint, Supermicro).
Are there *any* consumer drives that don't stop responding for a long time while
trying to recover from an error? In my experience they all behave this way,
which has been a nightmare on hardware RAID controllers.
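As an aside, on drives and smartctl versions that support SCT ERC (the knob WD brands as TLER), the recovery timeout can be queried, and on some drives capped; a sketch, with the device path as a placeholder:

```shell
# Report the current SCT ERC read/write timeouts (in deciseconds).
smartctl -l scterc /dev/rdsk/c5t0d0

# On drives that allow it, cap both at 7 seconds:
smartctl -l scterc,70,70 /dev/rdsk/c5t0d0
```

Many consumer drives refuse the set operation or silently reset it on power cycle, which is part of why they misbehave behind RAID controllers.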
> I'll admit, I was cheap at first and my
> fileserver right now is consumer drives. You
> can bet all my future purchases will be of the enterprise grade. And
> guess what... none of the drives in my array are less than 5 years old, so
> even
> if they did die, and I had bought the enterprise v
Hi Richard,
> So you have to wait for the sd (or other) driver to
> timeout the request. By
> default, this is on the order of minutes. Meanwhile,
> ZFS is patiently awaiting a status on the request. For
> enterprise class drives, there is a limited number
> of retries on the disk before it repor
For whatever it's worth to have someone post on a list.. I would *really*
like to see this improved as well. The time it takes to iterate over
both thousands of filesystems and thousands of snapshots makes me very
cautious about taking advantage of some of the built-in zfs features in
an HA environ
dd to the system. And I want
to go for file security.
How can I get the best out of this setup. Is there a way of mirroring the data
automatically between those three drives?
Any help is appreciated but please don't tell me I have to delete anything ;)
Thanks a lot,
Thomas
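For what it's worth, if the three drives were empty (which, given the "don't make me delete anything" constraint, they may not be), the direct way to get automatic mirroring is a three-way mirror; the device names here are placeholders:

```shell
zpool create tank mirror c1t0d0 c1t1d0 c1t2d0
```

Every block is then written to all three drives, so any two can fail without data loss.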
Thanks... works perfectly!
Currently it's resilvering. That is all too easy ;)
Thanks again,
Thomas
nks a lot,
Thomas
2010/3/2 Cindy Swearingen
> Hi Thomas,
>
> I see that Richard has suggested mirroring your existing pool by
> attaching slices from your 1 TB disk if the sizing is right.
>
> You mentioned file security and I think you mean protecting your data
> from
On Thu, Mar 4, 2010 at 4:46 AM, Dan Dascalescu <
bigbang7+opensola...@gmail.com > wrote:
> Please recommend your up-to-date high-end hardware components for building
> a highly fault-tolerant ZFS NAS file server.
>
> I've seen various hardware lists online (and I've summarized them at
> http://wik
No; if you don't use redundancy, each disk you add makes the pool that much
more likely to fail. This is the entire point of raidz.
ZFS stripes data across all vdevs.
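A sketch of the difference (device names are placeholders):

```shell
# Striped pool: no redundancy; losing ANY one disk loses the whole pool,
# so each added disk increases the chance of total failure.
zpool create tank c0t0d0 c0t1d0 c0t2d0

# raidz pool: single parity; survives the loss of any one disk.
zpool create tank raidz c0t0d0 c0t1d0 c0t2d0
```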
On Thu, Mar 4, 2010 at 12:32 PM, Travis Tabbal wrote:
> I have a small stack of disks that I was considering putting in a box
gated gigabit
ethernet cables.
It's very nice.
On Thu, Mar 4, 2010 at 3:03 PM, Michael Shadle wrote:
> On Thu, Mar 4, 2010 at 4:12 AM, Thomas Burgess wrote:
>
> > I got a norco 4020 (the 4220 is good too)
> >
> > Both of those cost around 300-350 dollars. That is a
ke an ac window
unit.
On Thu, Mar 4, 2010 at 3:27 PM, Michael Shadle wrote:
> If I had a decently ventilated closet or space to do it in I wouldn't
> mind noise, but I don't, that's why I had to build my storage machines
> the way I did.
>
> On Thu, Mar 4, 2010 at 12
                            257      0  32.0M      0
sumpf         804G   124G     0      0      0      0
Why are there so many 0s in this chart? No wonder I only get 15 MB/s max...
Thanks for helping a Solaris beginner. Your help is very appreciated.
Thomas
r the beginning).
Thanks for all your input and support.
Thomas
I scrub once a week.
I think the general rule is:
once a week for consumer-grade drives,
once a month for enterprise-grade drives.
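If you want to automate that, a cron entry along these lines would do it (the pool name and schedule are placeholders):

```shell
# root's crontab: scrub "tank" every Sunday at 02:00.
0 2 * * 0 /usr/sbin/zpool scrub tank
```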
On Sat, Mar 13, 2010 at 3:29 PM, Tony MacDoodle wrote:
> When would it be necessary to scrub a ZFS filesystem?
> We have many "rpool", "datapool", and a NAS 7130,
n the write() request and the bit
being written on a piece of hardware. I wouldn't trust any numbers from
syscall/sec benchmarks to be relevant in my environment.
Thomas
I was wondering if anyone had any first hand knowledge of compatibility with
any asus pike slot expansion cards and OpenSolaris.
I would guess this should work:
http://www.newegg.com/Product/Product.aspx?Item=N82E16816110042
because it's based on the LSI 1068E, but I'm curious if anyone knows for s
2010-05-10 05:58, Bob Friesenhahn wrote:
On Sun, 9 May 2010, Edward Ned Harvey wrote:
So, Bob, rub it in if you wish. ;-) I was wrong. I knew the behavior in
Linux, which Roy seconded as "most OSes," and apparently we both
assumed the
same here, but that was wrong. I don't know if solaris and o
I was looking at building a new ZFS-based server for my media files, and I
was wondering if this CPU was supported... I googled and I couldn't find much
info about it.
I'm specifically looking at this motherboard:
http://www.newegg.com/Product/Product.aspx?Item=N82E16813182230
I'd hate to buy i
y
regards
--
====
Johnson Thomas
Technical Support Engineer
Sun Solution Centre- APAC
Global Customer Services, Sun Microsystems, Inc.
Email- johnson.tho...@sun.com
Toll Free /Hotline:
Australia:1800 555 786 New Zealand:0800 275 786
Singapore:1800 339 2786 India:160
The onboard SATA is a secondary issue. If I need to, I'll boot from the
onboard USB slots. I have 2 LSI 1068E-based SAS controllers which I will be
using.
On Tue, May 11, 2010 at 8:40 PM, James C. McPherson wrote:
> On 12/05/10 10:32 AM, Michael DeMan wrote:
>
>> I agree on the motherboard and
Well, I went ahead and ordered the board. I will report back soon with the
results... I'm pretty excited. These CPUs seem great on paper.
On Tue, May 11, 2010 at 9:02 PM, Thomas Burgess wrote:
> the onboard sata is a secondary issue. If i need to, i'll boot from the
>
This is how I understand it.
I know the network cards are well supported, and I know my storage cards are
supported. The onboard SATA may work and it may not. If it does, great,
I'll use it for booting; if not, this board has 2 onboard bootable USB
sticks. Luckily, USB seems to work regardless.
>
>
>>
> Now wait just a minute. You're casting aspersions on
> stuff here without saying what you're talking about,
> still less where you're getting your info from.
>
> Be specific - put up, or shut up.
>
>
I think he was just trying to tell me that my cpu should be fine, that the
only thing whic
I ordered it. It should be here Monday or Tuesday. When I get everything
built and installed, I'll report back. I'm very excited. I am not
expecting problems now that I've talked to Supermicro about it. Solaris 10
runs for them, so I would imagine OpenSolaris should be fine too.
On Thu, May 13
remember
right, this happened on other machines as well.
On Thu, May 13, 2010 at 9:56 AM, Thomas Burgess wrote:
> I ordered it. It should be here monday or tuesday. When i get everything
> built and installed, i'll report back. I'm very excited. I am not
> expecting problems n
The Intel SASUC8I is a pretty good deal: around 150 dollars for 8 SAS/SATA
channels. This card is identical to the LSI SAS3081E-R for a lot less
money. It doesn't come with cables, but this leaves you free to buy the
type you need (in my case, I needed SFF-8087 to SFF-8087 cables; some people
wil
Well, I haven't had a lot of time to work with this... but I'm having trouble
getting the onboard SATA to work in anything but native IDE mode.
I'm not sure exactly what the problem is... I'm wondering if I bought the
wrong cable (I have a Norco 4220 case, so the drives connect via a SAS
sff-8087 o
, it shows what it should show at the
bottom)
I'll capture all that later and post it.
On Sat, May 15, 2010 at 8:35 PM, Dennis Clarke wrote:
> - Original Message -
> From: Thomas Burgess
> Date: Saturday, May 15, 2010 8:09 pm
> Subject: Re: [zfs-discuss] Opteron 610
age -
> From: Thomas Burgess
> Date: Saturday, May 15, 2010 8:09 pm
> Subject: Re: [zfs-discuss] Opteron 6100? Does it work with opensolaris?
> To: Orvar Korvar
> Cc: zfs-discuss@opensolaris.org
>
>
> > Well i just wanted to let everyone know that preliminary results are
bled, because I thought I
needed it in order to use both SATA and IDE... I think now it's something
else.
I'm going to try to boot without it on; if it doesn't work, I'll try to
reinstall with it disabled.
On Sun, May 16, 2010 at 8:18 PM, Ian Collins wrote:
> On 05/17
at 4:04 PM, Brandon High wrote:
> On Mon, May 17, 2010 at 12:51 PM, Thomas Burgess
> wrote:
> > In the bios i can select from:
> > Native IDE
> > AMD_AHCI
>
> This is probably what you want. AHCI is supposed to be chipset agnostic.
>
> > I also have an option cal
I'd have to agree. Option 2 is probably the best.
I recently found myself in need of more space... I had to build an entirely
new server. My first one was close to full (it has 20 1TB drives in 3
raidz2 groups, 7/7/6, and I was down to 3 TB). I ended up going with a whole
new server... with 2TB dri
Wow, that's a truly excellent question.
If you COULD do it, it might work with a simple import,
but I have no idea... I'd love to know myself.
On Tue, May 18, 2010 at 7:06 AM, Demian Phillips
wrote:
> Is it possible to recover a pool (as it was) from a set of disks that
> were replaced during a c
A really great alternative to the UIO cards, for those who don't want the
headache of modifying the brackets or cases, is the Intel SASUC8I.
This is a rebranded LSI SAS3081E-R.
It can be flashed with the LSI IT firmware from the LSI website and is
physically identical to the LSI card. I
I know I'm probably doing something REALLY stupid... but for some reason I
can't get send/recv to work over ssh. I just built a new media server and
I'd like to move a few filesystems from my old server to my new server, but
for some reason I keep getting strange errors...
At first I'd see somethi
also, i forgot to say:
one server is b133, the new one is b134
On Thu, May 20, 2010 at 4:23 PM, Thomas Burgess wrote:
> I know i'm probably doing something REALLY stupid.but for some reason i
> can't get send/recv to work over ssh. I just built a new media server and
>
I seem to be getting decent speed with arcfour (this was what I was using to
begin with).
Thanks for all the help... this honestly was just me being stupid. Looking
back on yesterday, I can't even remember what I was doing wrong now... I was
REALLY tired when I asked this question.
On Fri, May 2
supported_frequencies_Hz
8:10:12:15:20
supported_max_cstates 0
vendor_id AuthenticAMD
On Mon, May 17, 2010 at 5:55 PM, Dennis Clarke wrote:
>
> >On 05-17-10, Thomas Burgess wrote:
> >ps
Something I've been meaning to ask:
I'm transferring some data from my older server to my newer one. The older
server has a socket 775 Intel Q9550, 8 GB DDR2-800, and 20 1TB drives in raidz2 (3
vdevs, 2 with 7 drives, one with 6) connected to 3 AOC-SAT2-MV8 cards, spread
as evenly across them as I coul
Is running 3 zfs recvs at once random I/O?
On Fri, May 21, 2010 at 10:03 PM, Brandon High wrote:
> On Fri, May 21, 2010 at 5:54 PM, Thomas Burgess
> wrote:
> > shouldn't the newer server have LESS load?
> > Please forgive my ubernoobness.
>
> Depends on what it's doi
, if I switched to 2 wider
stripes instead of 3, I'd gain another TB or two... for my use I don't think
that would be a horrible thing.
On Fri, May 21, 2010 at 10:03 PM, Brandon High wrote:
> On Fri, May 21, 2010 at 5:54 PM, Thomas Burgess
> wrote:
> > shouldn't the new
so long for the server to come up?
It's stuck on "Reading ZFS config"
and there is a FLURRY of hard drive lights blinking (all 10 in sync).
On Sat, May 22, 2010 at 12:26 AM, Brandon High wrote:
> On Fri, May 21, 2010 at 7:57 PM, Thomas Burgess
> wrote:
> > is 3 zfs recv
Yeah, it seems that rsync is faster for what I need anyway... at least right
now...
On Sat, May 22, 2010 at 1:07 AM, Ian Collins wrote:
> On 05/22/10 04:44 PM, Thomas Burgess wrote:
>
>> I can't tell you for sure
>>
>> For some reason the server lost power an
   r/s    w/s    kr/s     kw/s  wait  actv  wsvc_t  asvc_t  %w  %b  device
                                              3.1     4.2    6  13  c6t6d0
   0.9  201.9    34.2  25338.0   3.8   0.5    18.9     2.6  51  52  c8t5d0
   0.0    0.0     0.0      0.0   0.0   0.0     0.0     0.0   0   0  c4t7d0
On Sat, May 22, 2010 at 12:26 AM, Brandon High wrote:
> On Fri, May 21, 2010 at 7:57 PM, Thomas Burgess
> wrote:
> &
Well, it wasn't.
It was running pretty slow.
I had one "really big" filesystem... with rsync I'm able to do multiple
streams and it's moving much faster.
On Sat, May 22, 2010 at 1:45 AM, Ian Collins wrote:
> On 05/22/10 05:22 PM, Thomas Burgess wrote:
>
>&g
source. If you don't, there's nothing you can do.
>
> It probably taking a while to restart because the sends that were
> interrupted need to be rolled back.
>
> Sent from my Nexus One.
>
> On May 21, 2010 9:44 PM, "Thomas Burgess" wrote:
>
> I can
Install smartmontools.
There is no package for it, but it's EASY to install.
Once you do, you can get output like this:
pfexec /usr/local/sbin/smartctl -d sat,12 -a /dev/rdsk/c5t0d0
smartctl 5.39.1 2010-01-28 r3054 [i386-pc-solaris2.11] (local build)
Copyright (C) 2002-10 by Bruce Allen, http://sm
I don't think there is, but it's dirt simple to install.
I followed the instructions here:
http://cafenate.wordpress.com/2009/02/22/setting-up-smartmontools-on-opensolaris/
On Sat, May 22, 2010 at 3:19 AM, Andreas Iannou <
andreas_wants_the_w...@hotmail.com> wrote:
>
I only care about the most recent snapshot, as this is a growing video
collection.
I do have snapshots, but I only keep them for when/if I accidentally delete
something or rename something wrong.
On Sat, May 22, 2010 at 3:43 AM, Brandon High wrote:
> On Fri, May 21, 2010 at 10:22 PM, Tho
If you install OpenSolaris with the AHCI setting off, then switch it on,
it will fail to boot.
I had to reinstall with the setting correct.
The best way to tell if AHCI is working is to use cfgadm:
if you see your drives there, AHCI is on;
if not, then you may need to reinstall with it on (for
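For example (the Ap_Id below is illustrative; the names vary by controller):

```shell
cfgadm -a
# With AHCI active, the SATA ports appear as attachment points, roughly:
#   Ap_Id                 Type   Receptacle   Occupant     Condition
#   sata0/0::dsk/c5t0d0   disk   connected    configured   ok
```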
Just to make sure I understand what is going on here:
you have an rpool which is having performance issues, and you discovered AHCI
was disabled?
You enabled it, and now it won't boot, correct?
This happened to me, and the solution was to export my storage pool and
reinstall my rpool with the ah
s, turns out it
was basically an IDE emulation mode for SATA; long story short, I ended up
with OpenSolaris installed in IDE mode.
I had to reinstall. I tried the livecd/import method and it still failed to
boot.
On Sat, May 22, 2010 at 5:30 PM, Ian Collins wrote:
> On 05/23/10 08:52 AM, Th
This old thread has info on how to switch from IDE to SATA mode:
http://opensolaris.org/jive/thread.jspa?messageID=448758
On Sat, May 22, 2010 at 5:32 PM, Ian Collins wrote:
> On 05/23/10 08:43 AM, Brian wrote:
>
>> Is there a way within opensolaris to detect if AHCI is being used by
>> vario
GREAT, glad it worked for you!
On Sat, May 22, 2010 at 7:39 PM, Brian wrote:
> Ok. What worked for me was booting with the live CD and doing:
>
> pfexec zpool import -f rpool
> reboot
>
> After that I was able to boot with AHCI enabled. The performance issues I
> was seeing are now also gone
I'm confused... I have a filesystem on server 1 called tank/nas/dump.
I made a snapshot called first:
zfs snapshot tank/nas/dump@first
Then I did a zfs send/recv like:
zfs send tank/nas/dump@first | ssh wonsl...@192.168.1.xx "/bin/pfexec
/usr/sbin/zfs recv tank/nas/dump"
this worked fine, next
On Sat, May 22, 2010 at 9:26 PM, Ian Collins wrote:
> On 05/23/10 01:18 PM, Thomas Burgess wrote:
>
>>
>> this worked fine, next today, i wanted to send what has changed
>>
>> i did
>> zfs snapshot tank/nas/dump@second
>>
>> now, heres w
Will the new recv'd filesystem be identical to the original forced snapshot,
or will it be a combination of the two?
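For reference, a sketch of the forced incremental receive being discussed (the host is a placeholder; snapshot names follow the thread's example): with -F, the receiving side is first rolled back to the last common snapshot, discarding any local changes, so the result matches the source's @second rather than a combination of the two.

```shell
zfs send -i tank/nas/dump@first tank/nas/dump@second | \
    ssh user@host "pfexec /usr/sbin/zfs recv -F tank/nas/dump"
```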
On Sat, May 22, 2010 at 11:50 PM, Edward Ned Harvey
wrote:
> > From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> > boun...@opensolaris.org] On
OK, so forcing just basically makes it drop whatever "changes" were made.
That's what I was wondering... this is what I expected.
On Sun, May 23, 2010 at 12:05 AM, Ian Collins wrote:
> On 05/23/10 03:56 PM, Thomas Burgess wrote:
>
>> let me ask a question though.
&
Did this come out?
http://cr.opensolaris.org/~gman/opensolaris-whats-new-2010-05/
I was googling trying to find info about the next release and ran across
this.
Does this mean it's actually about to come out before the end of the month,
or is this something else?
Never mind... just found more info on this. Should have held back from
asking.
On Mon, May 24, 2010 at 1:26 AM, Thomas Burgess wrote:
> did this come out?
>
> http://cr.opensolaris.org/~gman/opensolaris-whats-new-2010-05/
>
> i was googling trying to find info about the n
I recently got a new SSD (OCZ Vertex LE 50GB).
It seems to work really well as a ZIL, performance-wise. My question is, how
safe is it? I know it doesn't have a supercap, so let's say data loss
occurs... is it just data loss, or is it pool loss?
Also, does the fact that I have a UPS matter?
the nu
>
>
> ZFS is always consistent on-disk, by design. Loss of the ZIL will result
> in loss of the data in the ZIL which hasn't been flushed out to the hard
> drives, but otherwise, the data on the hard drives is consistent and
> uncorrupted.
>
>
>
> This is what i thought. I have read this list on
>
>
> Not familiar with that model
>
>
It's a SandForce SF-1500 model but without a supercap... here's some info on
it:
Maximum Performance
- Max Read: up to 270MB/s
- Max Write: up to 250MB/s
- Sustained Write: up to 235MB/s
- Random Write 4k: 15,000 IOPS
- Max 4k IOPS: 50,00
>
>
>
> From earlier in the thread, it sounds like none of the SF-1500 based
> drives even have a supercap, so it doesn't seem that they'd necessarily
> be a better choice than the SLC-based X-25E at this point unless you
> need more write IOPS...
>
> Ray
>
I think the upcoming OCZ Vertex 2 Pro wi
I was just wondering:
I added a SLOG/ZIL to my new system today... I noticed that the L2ARC shows
up under its own heading, but the SLOG/ZIL doesn't... is this correct?
see:
               capacity     operations    bandwidth
pool         alloc   free   read  write   read  write
--
The last couple times I've read this question, people normally responded
with:
it depends.
You might not even NEED a slog; there is a script floating around which can
help determine that...
If you could benefit from one, it's going to be IOPS which help you... so if
the USB drive has more io
i am running the last release from the genunix page
uname -a output:
SunOS wonslung-raidz2 5.11 snv_134 i86pc i386 i86pc Solaris
On Tue, May 25, 2010 at 10:33 AM, Cindy Swearingen <
cindy.swearin...@oracle.com> wrote:
> Hi Thomas,
>
> This looks like a display bug. I'm se
On Tue, May 25, 2010 at 11:27 AM, Edward Ned Harvey
wrote:
> > From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> > boun...@opensolaris.org] On Behalf Of Nicolas Williams
> >
> > > I recently got a new SSD (ocz vertex LE 50gb)
> > >
> > > It seems to work really well as a ZIL perform
>
>
> At least to me, this was not clearly "not asking about losing zil" and was
> not clearly "asking about power loss." Sorry for answering the question
> you
> thought you didn't ask.
>
I was only responding to your response of WRONG!!! The guy wasn't wrong in
regards to my questions. I'm s
On Tue, May 25, 2010 at 12:38 PM, Bob Friesenhahn <
bfrie...@simple.dallas.tx.us> wrote:
> On Mon, 24 May 2010, Thomas Burgess wrote:
>
>>
>> It's a sandforce sf-1500 model but without a supercapheres some info
>> on it:
>>
>> Maximum Performan
Also, let me note, it came with a 3-year warranty, so I expect it to last at
least 3 years... but if it doesn't, I'll just return it under the warranty.
On Tue, May 25, 2010 at 1:26 PM, Thomas Burgess wrote:
>
>
> On Tue, May 25, 2010 at 12:38 PM, Bob Friesenhahn <
> bfr
On Wed, May 26, 2010 at 5:47 PM, Brandon High wrote:
> On Sat, May 15, 2010 at 4:01 AM, Marc Bevand wrote:
> > I have done quite some research over the past few years on the best (ie.
> > simple, robust, inexpensive, and performant) SATA/SAS controllers for
> ZFS.
>
> I've spent some time lookin
I thought it did... I couldn't imagine Sun using that chip in the original
Thumper if it didn't support NCQ. Also, I've read where people have had to
DISABLE NCQ on this driver to fix one bug or another (as a workaround).
wrote:
> On Wed, 2010-05
>
>
> Yeah, this is what I was thinking too...
>
> Is there anyway to retain snapshot data this way? I've read about the ZFS
> replay/mirror features, but my impression was that this was more so for a
> development mirror for testing rather than a reliable backup? This is the
> only way I know of
On Sun, Jun 13, 2010 at 12:18 AM, Joe Auty wrote:
> Thomas Burgess wrote:
>
>
>> Yeah, this is what I was thinking too...
>>
>> Is there anyway to retain snapshot data this way? I've read about the ZFS
>> replay/mirror features, but my impression was that
in production after pulling data from
the backup tapes. Scrubbing didn't show any error so any idea what's
behind the problem? Any chance to fix the FS?
Thomas
---
panic[cpu3]/thread=ff0503498400: BAD TRAP: type=e (#pf Page fault)
rp=ff001e937320 addr=20 occurred in module "
Thanks for the link Arne.
On 06/13/2010 03:57 PM, Arne Jansen wrote:
> Thomas Nau wrote:
>> Dear all
>>
>> We ran into a nasty problem the other day. One of our mirrored zpool
>> hosts several ZFS filesystems. After a reboot (all FS mounted at that
>> time an in
Arne,
On 06/13/2010 03:57 PM, Arne Jansen wrote:
> Thomas Nau wrote:
>> Dear all
>>
>> We ran into a nasty problem the other day. One of our mirrored zpool
>> hosts several ZFS filesystems. After a reboot (all FS mounted at that
>> time an in use) the machine pan
On Mon, Jun 14, 2010 at 4:41 AM, Arne Jansen wrote:
> Hi,
>
> I known it's been discussed here more than once, and I read the
> Evil tuning guide, but I didn't find a definitive statement:
>
> There is absolutely no sense in having slog devices larger than
> main memory, because it will neve
>
>
>
> Also, the disks were replaced one at a time last year from 73GB to 300GB to
> increase the size of the pool. Any idea why the pool is showing up as the
> wrong size in b134 and have anything else to try? I don't want to upgrade
> the pool version yet and then not be able to revert back...
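One thing worth checking here (an assumption on my part, not a confirmed diagnosis): in recent pool versions, growing a vdev after replacing all its disks is gated by the pool's autoexpand property, which defaults to off. Something like:

```shell
zpool get autoexpand tank      # "tank" is a placeholder pool name
zpool set autoexpand=on tank   # allow the pool to grow into the new space
zpool online -e tank c0t0d0    # or expand a specific device in place
```

If autoexpand was off when the last disk was replaced, toggling it (or running zpool online -e per device) should let the pool pick up the extra capacity without upgrading the pool version.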
On Fri, Jun 18, 2010 at 4:42 AM, Pasi Kärkkäinen wrote:
> On Fri, Jun 18, 2010 at 01:26:11AM -0700, artiepen wrote:
> > Well, I've searched my brains out and I can't seem to find a reason for
> this.
> >
> > I'm getting bad to medium performance with my new test storage device.
> I've got 24 1.5T
On Fri, Jun 18, 2010 at 6:34 AM, Curtis E. Combs Jr. wrote:
> Oh! Yes. dedup. not compression, but dedup, yes.
dedup may be your problem... it requires a lot of RAM and/or a decent L2ARC,
from what I've been reading.
>
>
> Conclusion: This device will make an excellent slog device. I'll order
> them today ;)
>
>
I have one and I love it... I sliced it, though: 9 GB for the ZIL and the
rest for L2ARC (my server is on a smallish network with about 10 clients).
It made a huge difference in NFS performance and other
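A sketch of that layout, assuming the SSD shows up as c7t0d0 and has been partitioned so slice 0 is ~9 GB (device and pool names are placeholders):

```shell
zpool add tank log   c7t0d0s0   # small slice as the slog (ZIL)
zpool add tank cache c7t0d0s1   # remainder as L2ARC
```

Unlike the slog, losing the L2ARC slice is harmless; it's just a read cache.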
I've found the Seagate 7200.12 1TB drives and the Hitachi 7K2000 2TB drives to
be by far the best.
I've read lots of horror stories about any WD drive with 4k
sectors... it's best to stay away from them.
I've also read plenty of people say that the green drives are terrible.
On Wed, Jul 21, 2010 at 12:42 PM, Orvar Korvar <
knatte_fnatte_tja...@yahoo.com> wrote:
> Are there any drawbacks to partition a SSD in two parts and use L2ARC on
> one partition, and ZIL on the other? Any thoughts?
On Fri, Jul 23, 2010 at 3:11 AM, Sigbjorn Lie wrote:
> Hi,
>
> I've been searching around on the Internet to fine some help with this, but
> have been
> unsuccessfull so far.
>
> I have some performance issues with my file server. I have an OpenSolaris
> server with a Pentium D
> 3GHz CPU, 4GB of
On Fri, Jul 23, 2010 at 5:00 AM, Sigbjorn Lie wrote:
> I see I have already received several replies, thanks to all!
>
> I would not like to risk losing any data, so I believe a ZIL device would
> be the way for me. I see
> these exists in different prices. Any reason why I would not buy a cheap