As a home user, here are my thoughts.
WD = ignore (TLER issues, parking issues, etc)
I recently built up a server on Osol running Samsung 1.5TB drives. They are
"green", but don't seem to have the irritating "features" found on the WD
"green" drives. They are 5400RPM, but seem to transfer data
NFS writes on ZFS blows chunks performance wise. The only way to increase the
write speed is by using an slog, the problem is that a "proper" slog device
(one that doesn't lose transactions) does not exist for a reasonable price. The
least expensive SSD that will work is the Intel X25-E, and eve
> use a slog at all if it's not durable? You should
> disable the ZIL
> instead.
This is basically where I was going. There only seems to be one SSD that is
considered "working", the Zeus IOPS. Even if I had the money, I can't buy it.
As my application is a home server, not a datacenter, thin
> On May 19, 2010, at 2:29 PM, Don wrote:
> The data risk is a few moments of data loss. However,
> if the order of the
> uberblock updates is not preserved (which is why the
> caches are flushed)
> then recovery from a reboot may require manual
> intervention. The amount
> of manual interventio
Disable ZIL and test again. NFS does a lot of sync writes and kills
performance. Disabling ZIL (or using the synchronicity option if a build with
that ever comes out) will prevent that behavior, and should get your NFS
performance close to local. It's up to you if you want to leave it that way.
When I did a similar upgrade a while back I did #2. Create a new pool raidz2
with 6 drives, copy the data to it, verify the data, delete the old pool, add
old drives + some new drives to another 6 disk raidz2 in the new pool.
Performance has been quite good, and the migration was very smooth.
Thanks! I might just have to order a few for the next time I take the server
apart. Not that my bent up versions don't work, but I might as well have them
be pretty too. :)
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-di
> I've got an OCZ Vertex 30gb drive with a 1GB stripe
> used for the slog
> and the rest used for the L2ARC, which for ~ $100 has
> been a nice
> boost to nfs writes.
What about the Intel X25-V? I know it will likely be fine for L2ARC, but what
about ZIL/slog?
--
This message posted from openso
> > From: zfs-discuss-boun...@opensolaris.org
> [mailto:zfs-discuss-
> > boun...@opensolaris.org] On Behalf Of Travis Tabbal
>
> Oh, one more thing. Your subject says "ZIL/L2ARC"
> and your message says "I
> want to speed up NFS writes."
>
>
> If your clients are mounting "async" don't bother.
> If the clients are
> ounting async, then all the writes are done
> asynchronously, fully
> accelerated, and never any data written to ZIL log.
I've tried async, things run well until you get to the end of the job, then the
process hangs unt
I have a few old drives here that I thought might help me a little, though not
at much as a nice SSD, for those uses. I'd like to speed up NFS writes, and
there have been some mentions that even a decent HDD can do this, though not to
the same level a good SSD will.
The 3 drives are older LVD S
Thanks. That's what I expected the case to be. Any reasons this shouldn't work
for strictly backup purposes? Obviously, one disk down kills the pool, but as I
only ever need to care if I'm restoring, that doesn't seem to be such a big
deal. It will be a secondary backup destination for local mac
I have a small stack of disks that I was considering putting in a box to build
a backup server. It would only store data that is duplicated elsewhere, so I
wouldn't really need redundancy at the disk layer. The biggest issue is that
the disks are not all the same size. So I can't really do a rai
Supermicro USAS-L8i controllers.
I agree with you, I'd much rather have the drives respond properly and promptly
than save a little power if that means I'm going to get strange errors from the
array. And these are the "green" drives, they just don't seem to cause me any
problems. The issues pe
smartmontools doesn't work with my controllers. I can try it again when the 2
new drives I've ordered arrive. I'll try connecting to the motherboard ports
and see if that works with smartmontools.
I haven't noticed any sleeping with the drives. I don't get any lag accessing
the array or any er
On Sun, Jan 17, 2010 at 8:14 PM, Richard Elling wrote:
> On Jan 16, 2010, at 10:03 PM, Travis Tabbal wrote:
>
> > Hmm... got it working after a reboot. Odd that it had problems before
> that. I was able to rename the pools and the system seems to be running well
> now. Irritati
HD154UI/1AG01118
They have been great drives for a home server. Enterprise users probably need
faster drives for most uses, but they work great for me.
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
I've been having good luck with Samsung "green" 1.5TB drives. I have had 1 DOA,
but I currently have 10 of them, so that's not so bad. In that size purchase,
I've had one bad from just about any manufacturer. I've avoided WD for RAID
because of the error handling stuff kicking drives out of arra
Hmm... got it working after a reboot. Odd that it had problems before that. I
was able to rename the pools and the system seems to be running well now.
Irritatingly, the settings for sharenfs, sharesmb, quota, etc. didn't get
copied over with the zfs send/recv. I didn't have that many filesystem
r...@nas:~# zpool export -f raid
cannot export 'raid': pool is busy
I've disabled all the services I could think of. I don't see anything accessing
it. I also don't see any of the filesystems mounted with mount or "zfs mount".
What's the deal? This is not the rpool, so I'm not booted off it or
> Everything I've seen you should stay around 6-9
> drives for raidz, so don't do a raidz3 with 12
> drives. Instead make two raidz3 with 6 drives each
> (which is (6-3)*1.5 * 2 = 9 TB array.)
So the question becomes, why? If it's performance, I can live with lower IOPS
and max throughput. If i
Interesting discussion. I know the bias here is generally toward enterprise
users. I was wondering if the same recommendations hold for home users that are
generally more price sensitive. I'm currently running OpenSolaris on a system
with 12 drives. I had split them into 3 sets of 4 raidz1 array
To be fair, I think it's obvious that Sun people are looking into it and that
users are willing to help diagnose and test. There were requests for particular
data in those threads you linked to, have you sent yours? It might help them
find a pattern in the errors.
I understand the frustration
Perhaps. As I noted though, it also occurs on the onboard NVidia SATA
controller when MSI is enabled. I had already put a line in /etc/system to
disable MSI for that controller per a forum thread and it worked great. I'm now
running with all MSI disabled via XVM as the mpt controller is giving m
Just an update, my scrub completed without any timeout errors in the log. XVM
with MSI disabled globally.
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/z
If someone from Sun will confirm that it should work to use the mpt driver from
2009.06, I'd be willing to set up a BE and try it. I still have the snapshot
from my 2009.06 install, so I should be able to mount that and grab the files
easily enough.
--
This message posted from opensolaris.org
_
> (1) disabling MSI support in xVM makes the problem go
> away
Yes here.
> (6) mpt(7d) without MSI support is sloow.
That does seem to be the case. It's not so bad overall, and at least the
performance is consistent. It would be nice if this were improved.
> For those of you who have b
> o The problems are not seen with Sun's version of
> this card
Unable to comment as I don't have a Sun card here. If Sun would like to send me
one, I would be willing to test it compared to the cards I do have. I'm running
Supermicro USAS-L8i cards (LSI 1068e based).
> o The problems are not
> Travis Tabbal wrote:
> > I have a possible workaround. Mark Johnson
> has
> > been emailing me today about this issue and he
> proposed the
> > following:
> >
> >> You can try adding the following to /etc/system,
> then rebooting...
> >> s
>
> On Nov 23, 2009, at 7:28 PM, Travis Tabbal wrote:
>
> > I have a possible workaround. Mark Johnson
>
> > has been emailing me today about this issue and he
> proposed the
> > following:
> >
> >> You can try adding the follow
I have a possible workaround. Mark Johnson has been
emailing me today about this issue and he proposed the following:
> You can try adding the following to /etc/system, then rebooting...
> set xpv_psm:xen_support_msi = -1
I have been able to format a ZVOL container from a VM 3 times while oth
> I will give you all of this information on monday.
> This is great news :)
Indeed. I will also be posting this information when I get to the server
tonight. Perhaps it will help. I don't think I want to try using that old
driver though, it seems too risky for my taste.
Is there a command
> The latter, we run these VMs over NFS anyway and had
> ESXi boxes under test already. we were already
> separating "data" exports from "VM" exports. We use
> an in-house developed configuration management/bare
> metal system which allows us to install new machines
> pretty easily. In this case we
> > I'm running nv126 XvM right now. I haven't tried
> it
> > without XvM.
>
> Without XvM we do not see these issues. We're running
> the VMs through NFS now (using ESXi)...
Interesting. It sounds like it might be an XvM specific bug. I'm glad I
mentioned that in my bug report to Sun. Hopefully
> Have you tried wrapping your disks inside LVM
> metadevices and then used those for your ZFS pool?
I have not tried that. I could try it with my spare disks I suppose. I avoided
LVM as it didn't seem to offer me anything ZFS/ZPOOL didn't.
--
This message posted from opensolaris.org
___
> What type of disks are you using?
I'm using SATA disks with SAS-SATA breakout cables. I've tried different cables
as I have a couple spares.
mpt0 has 4x1.5TB Samsung "Green" drives.
mpt1 has 4x400GB Seagate 7200 RPM drives.
I get errors from both adapters. Each adapter has an unused SAS cha
I submitted a bug on this issue, it looks like you can reference other bugs
when you submit one, so everyone having this issue could possibly link mine and
submit their own hardware config. It sounds like it's widespread though, so I'm
not sure if that would help or hinder. I'd hate to bury the
On Wed, Nov 11, 2009 at 10:25 PM, James C. McPherson
wrote:
>
> The first step towards "acknowledging" that there is a problem
> is you logging a bug in bugs.opensolaris.org. If you don't, we
> don't know that there might be a problem outside of the ones
> that we identify.
>
I apologize if I of
> Have you tried another SAS-cable?
I have. 2 identical SAS cards, different cables, different disks (brand, size,
etc). I get the errors on random disks in the pool. I don't think it's hardware
related as there have been a few reports of this issue already.
--
This message posted from opensol
> Hi, you could try LSI itmpt driver as well, it seems
> to handle this better, although I think it only
> supports 8 devices at once or so.
>
> You could also try more recent version of opensolaris
> (123 or even 126), as there seems to be a lot fixes
> regarding mpt-driver (which still seems to
I am also running 2 of the Supermicro cards. I just upgraded to b126 and it
seems improved. I am running a large file copy locally. I get these warnings in
the dmesg log. When I do, I/O seems to stall for about 60sec. It comes back up
fine, but it's very annoying. Any hints? I have 4 disks per c
Hmm.. I expected people to jump on me yelling that it's a bad idea. :)
How about this, can I remove a vdev from a pool if the pool still has enough
space to hold the data? So could I add it in and mess with it for a while
without losing anything? I would expect the system to resliver the data o
> - How can I effect OCE with ZFS? The traditional
> 'back up all the data somewhere, add a drive,
> re-establish the file system/pools/whatever, then
> copy the data back' is not going to work because
> there will be nowhere to temporarily 'put' the
> data.
Add devices to the pool. Preferably in
I have a new array of 4x1.5TB drives running fine. I also have the old array of
4x400GB drives in the box on a separate pool for testing. I was planning to
have the old drives just be a backup file store, so I could keep snapshots and
such over there for important files.
I was wondering if it
> I am after suggestions of motherboard, CPU and ram.
> Basically I want ECC ram and at least two PCI-E x4
> channels. As I want to run 2 x AOC-USAS_L8i cards
> for 16 drives.
Asus M4N82 Deluxe. I have one running with 2 USAS-L8i cards just fine. I don't
have all the drives loaded in yet, but t
45 matches
Mail list logo