I am currently trying to get two of these things running Illumian. I don't have
any particular performance requirements, so I'm thinking of using some sort of
supported hypervisor (either RHEL with KVM, or VMware ESXi) to get around the
driver support issues, and passing the disks through to an Illumian guest.
Recent versions of Linux (e.g. RHEL 6) are a bit better at
NFSv4, but I'm not holding my breath.
--
Greg Mason
HPC Administrator
Michigan State University
Institute for Cyber Enabled Research
High Performance Computing Center
web: www.icer.msu.edu
email: gma...@msu.edu
As an alternative, I've been taking a snapshot of rpool on the golden
system, sending it to a file, and creating a boot environment from the
archived snapshot on target systems. After fiddling with the snapshots a
little, I then either appropriately anonymize the system or provide it
with its identity.
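In case it helps, a rough sketch of that flow (pool, BE, and path names are just placeholders):

  # on the golden system
  zfs snapshot rpool/ROOT/golden@deploy
  zfs send rpool/ROOT/golden@deploy > /export/images/golden-deploy.zfs
  # on the target system, receive it as a new boot environment dataset
  zfs recv rpool/ROOT/golden-be < /export/images/golden-deploy.zfs

plus whatever beadm/activation steps your release needs to make the new BE bootable.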
How about the bug "removing slog not possible"? What if this slog fails? Is
there a plan for such a situation (the pool becomes inaccessible in this case)?
You can "zpool replace" a bad slog device now.
-Greg
Of course, I would welcome a reply from anyone who has experience
with this, not just Greg.
Monish
- Original Message - From: "Greg Mason"
To: "HUGE | David Stahl"
Cc: "zfs-discuss"
Sent: Thursday, August 20, 2009 4:04 AM
Subject: Re: [zfs-discuss] Ssd
The catch with using the third-party parts is that the involved support
organizations for the software/hardware will make it very clear that
such a configuration is quite unsupported. That said, we've had pretty
good luck with them.
-Greg
--
Greg Mason
System Administrator
High Performance Computing
on a test file system resolved both bugs, as well as
other known issues that our users have been running into. All the
various known issues this caused can be found at the MSU HPCC wiki:
https://wiki.hpcc.msu.edu/display/Issues/Known+Issues, under "Home
Directory file system."
-Greg
filesystems around to different systems. If you had only one filesystem
in the pool, you could then safely destroy the original pool. This does
mean you'd need 2x the size of the LUN during the transfer though.
For replication of ZFS filesystems, we use a similar process, just with a
lot of incremental sends.
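A rough sketch of that kind of move (pool and filesystem names are placeholders):

  zfs snapshot tank/fs@move
  zfs send tank/fs@move | zfs recv newpool/fs
  # catch up with changes made since the first snapshot
  zfs snapshot tank/fs@move2
  zfs send -i @move tank/fs@move2 | zfs recv newpool/fs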
D is an MLC device. The Intel SSD is an SLC device.
That right there accounts for the cost difference. The SLC device (Intel
X25-E) will last quite a bit longer than the MLC device.
-Greg
--
Greg Mason
System Administrator
Michigan State University
High Performance Computing Center
Thanks for the link, Richard.
I guess the next question is, how safe would it be to run snv_114 in
production? Running something that would be technically "unsupported"
makes a few folks here understandably nervous...
-Greg
On Thu, 2009-07-09 at 10:13 -0700, Richard Elling wrote:
being able to utilize ZFS user quotas, as we're
having problems with NFSv4 on our clients (SLES 10 SP2). We'd like to be
able to use NFSv3 for now (one large ZFS filesystem, with user quotas
set), until the flaws with our Linux NFS clients can be addressed.
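For what it's worth, ZFS user quotas on one large filesystem look roughly like this (dataset and user names are just examples):

  zfs set userquota@jdoe=10G tank/home
  zfs get userquota@jdoe,userused@jdoe tank/home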
--
Greg Mason
System Administrator
In my testing, I've seen that trying to duplicate zpool disks with dd
often results in a disk that's unreadable. I believe it has something to
do with the block sizes of dd.
In order to make my own slog backups, I just used cat instead. I plugged
the slog SSD into another system (not a necessary s
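For reference, the two approaches look something like this (device and file names are hypothetical):

  # dd with an explicit block size
  dd if=/dev/rdsk/c2t4d0s0 of=/backup/slog.img bs=1024k
  # or just stream the raw device with cat
  cat /dev/rdsk/c2t4d0s0 > /backup/slog.img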
And it looks like the Intel fragmentation issue is fixed as well:
http://techreport.com/discussions.x/16739
FYI, Intel recently had a new firmware release. IMHO, odds are that
this will be as common as HDD firmware releases, at least for the
next few years.
http://news.cnet.com/8301-13924_3-10
Harry,
ZFS will only store a block compressed if compressing it saves at least
12.5% (one eighth) of the space. If ZFS can't get at least that much
compression, it doesn't bother and just stores the block uncompressed.
Also, the default ZFS compression algorithm is lzjb, which is deliberately
lightweight.
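For example (dataset name is a placeholder):

  zfs set compression=on tank/data
  zfs get compression,compressratio tank/data

The compressratio property shows how much space compression is actually saving you.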
Francois,
Your best bet is probably a stripe of mirrors, i.e. a zpool made of many
mirrors.
This way you have redundancy and fast reads as well, and you'll enjoy
pretty quick resilvering in the event of a disk failure.
For even faster reads, you can add dedicated L2ARC cache devices.
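A minimal sketch, with hypothetical device names:

  # a pool striped across three 2-disk mirrors
  zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0 mirror c1t4d0 c1t5d0
  # add an SSD as a dedicated L2ARC cache device
  zpool add tank cache c2t0d0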
Just my $0.02, but would pool shrinking be the same as vdev evacuation?
I'm quite interested in vdev evacuation as an upgrade path for
multi-disk pools. This would be yet another reason for folks to use
ZFS at home (you only have to buy cheap disks), but it would also be
good to have that
On Thu, Feb 12, 2009 at 10:33:40AM -0500, Greg Mason wrote:
What I'm looking for is a faster way to do this than format -e -d
-f
Are you sure that the write cache is back on after restart?
Yes, I've checked with format -e, on each drive.
When disabling the write cache with format, it also gives a warning
stating this is the case.
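For reference, the per-disk procedure being scripted here looks roughly like this; the format menu entries are from memory, so treat it as a sketch:

  # wce-off.cmd: walk format's expert-mode cache menu
  cache
  write_cache
  disable
  quit
  quit

  # run it against one disk (device name is hypothetical)
  format -e -d c1t0d0 -f /tmp/wce-off.cmd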
What I'm looking for is a faster way to do this than format -e -d
-f
We use several X4540s over here as well. What type of workload do you
have, and how much of a performance increase did you see by disabling the
write caches?
We see the difference between our tests completing in around 2.5 minutes
(with write caches) and around a minute and a half without them.
We're using some X4540s with OpenSolaris 2008.11.
In my testing, I've determined that we get the best performance for our
specific workload with the write cache disabled on every disk, and with
zfs:zfs_nocacheflush=1 set in /etc/system.
The only issue is s
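For reference, the /etc/system entry in question looks like this (it takes effect after a reboot):

  * tell ZFS not to issue cache flush commands to the disks
  set zfs:zfs_nocacheflush = 1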
Tony,
I believe you want to use "zfs recv -F" to force a rollback on the
receiving side.
I'm wondering if your ls is updating the atime somewhere, which would
indeed be a change...
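Something along these lines, with placeholder names:

  # roll the receiving side back to the last common snapshot, then apply the incremental
  zfs send -i tank/fs@snap1 tank/fs@snap2 | ssh backuphost zfs recv -F backup/fs
  # turning off atime on the receiving side avoids that sort of churn
  zfs set atime=off backup/fs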
-Greg
Orvar,
In my testing, I've seen a 5x improvement in small file creation
when working specifically with NFS. This is after I added an SSD for the
ZIL.
I recommend Richard Elling's zilstat (he posted links earlier). It'll
let you see if a dedicated device for the ZIL will help your specific workload.
I'll give this script a shot a little later today.
For ZIL sizing, I'm using either 1 or 2 32G Intel X25-E SSDs in my
tests, which, according to what I've read, is 2-4 times larger than the
maximum that ZFS can possibly use. We've got 32G of system memory in
these Thors, and (if I'm not mistaken
> If there was a latency issue, we would see such a problem with our
> existing file server as well, which we do not. We'd also have much
> greater problems than just file server performance.
>
> So, like I've said, we've ruled out the network as an issue.
I should also add that I've tested the
Jim Mauro wrote:
>
>> This problem only manifests itself when dealing with many small files
>> over NFS. There is no throughput problem with the network.
> But there could be a _latency_ issue with the network.
If there was a latency issue, we would see such a problem with our
existing file server
7200 RPM SATA disks.
Tim wrote:
>
>
> On Fri, Jan 30, 2009 at 8:24 AM, Greg Mason <gma...@msu.edu> wrote:
>
> A Linux NFS file server, with a few terabytes of fibre-attached disk,
> using XFS.
>
> I'm trying to get these Thors to p
I should also add that this "creating many small files" issue is the
ONLY case where the Thors are performing poorly, which is why I'm
focusing on it.
Greg Mason wrote:
> A Linux NFS file server, with a few terabytes of fibre-attached disk,
> using XFS.
>
> I
A Linux NFS file server, with a few terabytes of fibre-attached disk,
using XFS.
I'm trying to get these Thors to perform at least as well as the current
setup. A performance hit is very hard to explain to our users.
> Perhaps I missed something, but what was your previous setup?
> I.e. what di
This problem only manifests itself when dealing with many small files
over NFS. There is no throughput problem with the network.
I've run tests with the write cache disabled on all disks, and the cache
flush disabled. I'm using two Intel SSDs for ZIL devices.
This setup is faster than using the
The funny thing is that I'm showing a performance improvement over write
caches + cache flushes.
The only way these pools are being accessed is over NFS. Well, at least
the only way I care about when it comes to high performance.
I'm pretty sure it would give a performance hit locally, but I do
So, I'm still beating my head against the wall, trying to find our
performance bottleneck with NFS on our Thors.
We've got a couple of Intel SSDs, both in use as dedicated ZIL devices.
Cache flushing is still enabled, as are the write caches on all 48 disk
devices.
What I'm thinking of doing i
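For reference, attaching the SSDs as a mirrored dedicated log looks roughly like this (pool and device names are hypothetical):

  zpool add tank log mirror c4t0d0 c4t1d0
  zpool status tank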
How were you running this test?
Were you running it locally on the machine, or were you running it over
something like NFS?
What is the rest of your storage like? Just direct-attached disks (SAS or
SATA, for example), or are you using a higher-end RAID controller?
-Greg
kristof wrote:
> Kebab
If I'm not mistaken (and somebody please correct me if I'm wrong), the
Sun 7000 series storage appliances (the Fishworks boxes) use enterprise
SSDs with DRAM caching. One such product is made by STEC.
My understanding is that the Sun appliances use one SSD for the ZIL, and
one as a read cache.
We're evaluating the possibility of speeding up NFS operations on our
X4540s with dedicated log devices. What we are specifically evaluating
is replacing one or two of our spare SATA disks with SATA SSDs.
Has anybody tried using SSD device(s) as dedicated ZIL devices in an
X4540? Are there any kno
>
> Good idea. Thor has a CF slot, too, if you can find a high speed
> CF card.
> -- richard
We're already using the CF slot for the OS. We haven't really found
any CF cards that would be fast enough anyways :)
So, what we're looking for is a way to improve performance without
disabling the ZIL, as it's my understanding that disabling the ZIL
isn't exactly a safe thing to do. In other words, we want the best way
to improve performance without sacrificing too much of the safety of
the data.
The current
or the log device?
And, yes, I already know that turning off the ZIL is a Really Bad Idea.
We do, however, need to provide our users with a certain level of
performance, and what we've got with the ZIL on the pool is completely
unacceptable.
Thanks for any pointers you may have...
--
Greg Mason
zfs-auto-snapshot (SUNWzfs-auto-snapshot) is what I'm using. The only trick
is that on the other end, we have to manage our own retention of the
snapshots we send to our offsite/backup boxes.
zfs-auto-snapshot can handle the sending of snapshots as well.
We're running this in OpenSolaris 2008.11 (s
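A rough sketch of the kind of cleanup we script on the receiving side (dataset and snapshot names are just examples; the auto-snapshot naming varies between releases):

  # list what has accumulated, oldest first
  zfs list -H -t snapshot -o name,creation -s creation -r backup/home
  # then destroy whatever falls outside the retention window, e.g.
  zfs destroy backup/home@zfs-auto-snap:daily-2009-01-05-00:00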