l boot the system if an OS disk fails.
Once Illumos is better supported on the R720 and the PERC H310, I plan to get
rid of the hypervisor silliness and run Illumos on bare metal.
-Greg
Sent from my iPhone
t versions of Linux (i.e. RHEL 6) are a bit better at
NFSv4, but I'm not holding my breath.
--
Greg Mason
HPC Administrator
Michigan State University
Institute for Cyber Enabled Research
High Performance Computing Center
web: www.icer.msu.edu
email: gma...@msu.edu
On Wed, Jun 9, 2010 at 8:17 PM, devsk wrote:
> $ swap -s
> total: 473164k bytes allocated + 388916k reserved = 862080k used, 6062060k
> available
>
> $ swap -l
> swapfile             dev    swaplo    blocks      free
> /dev/dsk/c6t0d0s1    215,1       8  12594952  12594952
>
> Can someone please do
Hey Scott,
Thanks for the information. I doubt I can drop that kind of cash, but back to
getting Bacula working!
Thanks again,
Greg
Yes, it would; however, we only have the restore/verify portion. Unless of course
I am overlooking something.
Thanks,
Greg
Hey Miles,
Do you have any idea if there is a way to back up a zvol in the manner you speak
of with Bacula? Is dd a safe way to do this, or are there better methods?
Otherwise I will just use dd.
Thanks again!
Greg
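Something like this is what I had in mind (pool, volume, and target paths are
made up): snapshot the zvol so the copy is crash-consistent, then dd the
snapshot device to an ordinary file that Bacula can pick up.

# point-in-time copy of the zvol (made-up names), streamed out with dd
zfs snapshot tank/myvol@backup
dd if=/dev/zvol/dsk/tank/myvol@backup of=/backup/myvol.img bs=1024k
zfs destroy tank/myvol@backup

Running dd against the live zvol device would also work, but without the
snapshot there is no guarantee the image is consistent.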
Thank you for such a thorough look into my issue. As you said, I guess I am
down to trying to back up to a zvol and then backing that up to tape. Has anyone
tried this solution? I would be very interested to find out. Anyone else with
any other solutions?
Thanks!
Greg
a lot of questions, but I thought the
solution would work perfectly in my environment.
Thanks,
Greg
will then be written to tape with Bacula. I hope I am
posting this in the correct place.
Thanks,
Greg
rvers to the new box and we are up and running. The next issue is then
backing this all up to tape and making it so that it is not impossible to
recover if people do their standard boneheaded things. Does anyone
have any ideas on how to do this? I was first thinking rsync or zfs
send/re
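Roughly what I had in mind with send/receive (host, pool, and snapshot names
are made up):

# replicate a recursive snapshot to a backup host (made-up names), then send only deltas
zfs snapshot -r tank/home@monday
zfs send -R tank/home@monday | ssh backuphost zfs receive -Fd backup
zfs snapshot -r tank/home@tuesday
zfs send -R -i monday tank/home@tuesday | ssh backuphost zfs receive -Fd backup

The full stream could just as easily be written to a file (or piped to tape)
instead of to another host.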
h its identity. When it boots up, it's ready to go.
The only downside to my method is that I still have to run the full
OpenSolaris installer, and I can't exclude anything in the archive.
Essentially, it's a poor man's flash archive.
-Greg
cindy.swearin...@sun.com wrote:
H
I have tried to unmount the zfs volume and remount it. However, this does not
help the issue.
This also occurs when I do a zfs destroy.
Thanks!
Hello all,
I am having a problem: when I do a zfs promote or a zfs rollback, I get a
"dataset is busy" error. I am now doing an image update to see if there is an
issue with the image I have. Does anyone have an idea how to fix this?
Thanks,
Greg
sxi 4 so I updated it, but again to no avail. If anyone has any ideas, it would
be helpful!
Thanks!
Greg
> How about the bug "removing slog not possible"? What if this slog fails? Is
> there a plan for such a situation (the pool becomes inaccessible in this case)?
You can "zpool replace" a bad slog device now.
-Greg
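For example (pool and device names made up), it looks just like replacing any
other vdev:

# made-up pool and device names
zpool replace tank c3t0d0 c4t0d0
zpool status tank   # the new device should show up under the "logs" section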
will help your workload or not. I didn't know about
this script at the time of our testing, so it ended up being some trial
and error, running various tests on different hardware setups (which
means creating and destroying quite a few pools).
-Greg
Jorgen Lundman wrote:
Does un-taring so
using the third-party parts is that the involved support
organizations for the software/hardware will make it very clear that
such a configuration is quite unsupported. That said, we've had pretty
good luck with them.
-Greg
--
Greg Mason
System Administrator
High Performance Computing
on a test file system resolved both bugs, as well as
other known issues that our users have been running into. All the
various known issues this caused can be found at the MSU HPCC wiki:
https://wiki.hpcc.msu.edu/display/Issues/Known+Issues, under "Home
Directory file system."
-Greg
ilesystems around to different systems. If you had only one filesystem
in the pool, you could then safely destroy the original pool. This does
mean you'd need 2x the size of the LUN during the transfer though.
For replication of ZFS filesystems, we use a similar process, with just a
lot of inc
D is an MLC device. The Intel SSD is an SLC device.
That right there accounts for the cost difference. The SLC device (Intel
X25-E) will last quite a bit longer than the MLC device.
-Greg
--
Greg Mason
System Administrator
Michigan State University
High Performance Computing Center
Thanks for the link, Richard.
I guess the next question is, how safe would it be to run snv_114 in
production? Running something that would be technically "unsupported"
makes a few folks here understandably nervous...
-Greg
On Thu, 2009-07-09 at 10:13 -0700, Richard Elling wrote:
being able to utilize ZFS user quotas, as we're
having problems with NFSv4 on our clients (SLES 10 SP2). We'd like to be
able to use NFSv3 for now (one large ZFS filesystem, with user quotas
set), until the flaws with our Linux NFS clients can be addressed.
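What we're after is essentially this (dataset and user names made up):

# one large filesystem (made-up names), with per-user limits instead of
# per-user filesystems
zfs set userquota@alice=10G tank/home
zfs set userquota@bob=25G tank/home
zfs userspace tank/home   # per-user usage report against those limits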
--
Greg Mason
System Admi
erday backup-wise, or are those snapshots useless and I am up to
last week.
Thanks for helping!
Greg
breaking edge of
everything so I was thinking of using it.
Thanks!
Greg
,
Greg
sk. I've tested this method of
replacing a slog, and the zpool is imported on boot, like nothing
happened, even though the physical hardware has changed.
A question I have is, does "zpool replace" now work for slog devices as
of snv_111b?
-Greg
On Fri, 2009-06-05 at 20:57 -0700, Paul
-10218245-64.html?tag=mncol
It should also be noted that the Intel X25-M != the Intel X25-E. The
X25-E hasn't had any of the performance and fragmentation issues.
The X25-E is an SLC SSD and the X25-M is an MLC SSD, hence the X25-M's more
complex firmware.
efault ZFS compression algorithm isn't gzip, so you aren't
going to get the greatest compression possible, but it is quite fast.
Depending on the type of data, it may not compress well at all, leading
ZFS to store that data completely uncompressed.
-Greg
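To make that concrete (dataset names made up):

# made-up dataset names
zfs set compression=on tank/scratch       # default algorithm, cheap on CPU
zfs set compression=gzip-9 tank/archive   # much more CPU, better ratio
zfs get compressratio tank/scratch tank/archive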
All good info thanks. Still one
cache devices (folks
typically use SSDs or very fast (15k RPM) SAS drives for this).
-Greg
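P.S. Adding a cache device is a one-liner (pool and device names made up):

# made-up pool and device names
zpool add tank cache c2t0d0
zpool iostat -v tank 5   # watch how much of the read load the cache absorbs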
Francois wrote:
Hello list,
What would be the best zpool configuration for a cache/proxy server
(probably based on squid) ?
In other words, with which zpool configuration could I expect the best
reading perfor
to upgrade a pool in-place, and in production.
Basically, what I'm thinking is:
zpool remove mypool
Allow time for ZFS to vacate the vdev(s), and then light up the "OK to
remove" light on each evacuated disk.
-Greg
Blake Irvin wrote:
Shrinking pools would also solve the
as we saw in the
recent thread about "Unreliable for professional usage," it is possible
to have issues. Likewise with database systems.
Regards,
Greg
discussion of database
recovery into the discussion seems to me to only be increasing the FUD
factor.
Regards,
Greg
Richard Elling wrote:
Greg Palmer wrote:
Miles Nordin wrote:
gm> That implies that ZFS will have to detect removable devices
gm> and treat them differently than fixed devices.
please, no more of this garbage, no more hidden unchangeable automatic
condescending behavior. The whole for
" and
those who have educated themselves use it. I seldom turn it on unless
I'm doing heavy I/O to a USB hard drive; otherwise, the performance
difference is just not that great.
Regards,
Greg
abled, something makes its way into the write cache, then
the cache is disabled. Does this mean the write cache is flushed to disk
when the cache is disabled? If so, then I guess it's less critical when
it happens in the bootup process or if it's permanent...
-Greg
A Darren Dunham wrot
ches reads but not
writes. If you enable them, you will lose data if you pull the stick out
before all the data is written. This is the type of safety measure that
needs to be implemented in ZFS if it is to support the average user
instead of just the IT professionals.
Rega
> Are you sure that write cache is back on after restart?
Yes, I've checked with format -e, on each drive.
When disabling the write cache with format, it also gives a warning
stating that this is the case.
What I'm looking for is a faster way to do this than format -e -d
-f
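Roughly, what I'm after is a loop along these lines (the command file assumes
format -e's cache -> write_cache -> disable menu path, so treat it as a sketch):

# command file that walks the expert-mode cache menu (assumed menu names)
cat > /tmp/wc_off <<EOF
cache
write_cache
disable
quit
quit
quit
EOF
# run it against every disk format knows about
for disk in $(format </dev/null 2>/dev/null | awk '/^ *[0-9]+\. /{print $2}'); do
  format -e -d "$disk" -f /tmp/wc_off
done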
them, in
one instance.
I'm trying to optimize our machines for a write-heavy environment, as
our users will undoubtedly hit this limitation of the machines.
-Greg
the write cache on every disk in an X4540?
Thanks,
-Greg
verse and
human stupidity. I'm not sure about the first one" - Albert Einstein
Regards,
Greg
Tony,
I believe you want to use "zfs recv -F" to force a rollback on the
receiving side.
I'm wondering if your ls is updating the atime somewhere, which would
indeed be a change...
-Greg
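For example (dataset and host names made up), -F rolls the receiving side back
to its latest snapshot before applying the increment, and disabling atime on
the target avoids the spurious change in the first place:

# made-up dataset and host names
zfs send -i tank/data@snap1 tank/data@snap2 | ssh backuphost zfs receive -F backup/data
zfs set atime=off backup/data   # an ls on the target no longer dirties it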
this
was for accelerating the ZIL, not for use on the L2ARC, so YMMV.
Fishworks does this. They use an SSD both for the read cache and for
the ZIL.
-Greg
Orvar Korvar wrote:
> So are there no guidelines on how to add an SSD disk as a home user? Which is
> the best SSD disk to add? What per
Orvar Korvar wrote:
> Ok. Just to confirm: a modern disk already has some spare capacity which is
> not normally utilized by ZFS, UFS, etc. If the spare capacity is exhausted,
> then the disk should be replaced.
>
Yup, that is the case.
uld I be facing in such a
situation? Would I simply risk losing that in-play data, or could
more serious things happen? I know disabling the ZIL is an Extremely Bad
Idea, but I need to tell people exactly why...
-Greg
Jim Mauro wrote:
> You have SSD's for the ZIL (logzilla) enabled, and
> If there was a latency issue, we would see such a problem with our
> existing file server as well, which we do not. We'd also have much
> greater problems than just file server performance.
>
> So, like I've said, we've ruled out the network as an issue.
I should also add that I've tested the
Jim Mauro wrote:
>
>> This problem only manifests itself when dealing with many small files
>> over NFS. There is no throughput problem with the network.
> But there could be a _latency_ issue with the network.
If there was a latency issue, we would see such a problem with our
existing file ser
7200 RPM SATA disks.
Tim wrote:
>
>
> On Fri, Jan 30, 2009 at 8:24 AM, Greg Mason <gma...@msu.edu> wrote:
>
> A Linux NFS file server, with a few terabytes of fibre-attached disk,
> using XFS.
>
> I'm trying to get these Thors to p
I should also add that this "creating many small files" issue is the
ONLY case where the Thors are performing poorly, which is why I'm
focusing on it.
Greg Mason wrote:
> A Linux NFS file server, with a few terabytes of fibre-attached disk,
> using XFS.
>
> I
A Linux NFS file server, with a few terabytes of fibre-attached disk,
using XFS.
I'm trying to get these Thors to perform at least as well as the current
setup. A performance hit is very hard to explain to our users.
> Perhaps I missed something, but what was your previous setup?
> I.e. what di
r...
I've done my homework on this issue, I've ruled out the network as an
issue, as well as the NFS clients. I've narrowed my particular
performance issue down to the ZIL, and how well ZFS plays with NFS.
-Greg
Jim Mauro wrote:
> Multiple Thors (more than 2?), with performanc
The funny thing is that I'm showing a performance improvement over write
caches + cache flushes.
The only way these pools are being accessed is over NFS. Well, at least
the only way I care about when it comes to high performance.
I'm pretty sure it would give a performance hit locally, but I do
inking of doing is disabling all write caches, and disabling
the cache flushing.
What would this mean for the safety of data in the pool?
And, would this even do anything to address the performance issue?
-Greg
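For reference, the cache-flush side of this is the zfs_nocacheflush tunable
(shown here as a sketch; it is only sane when every device in the pool has a
non-volatile write cache):

# sketch: disable ZFS cache-flush requests
echo 'set zfs:zfs_nocacheflush = 1' >> /etc/system   # persistent, next boot
echo 'zfs_nocacheflush/W0t1' | mdb -kw               # flip it on a live system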
How were you running this test?
Were you running it locally on the machine, or were you running it over
something like NFS?
What is the rest of your storage like? Just direct-attached (SAS or
SATA, for example) disks, or are you using a higher-end RAID controller?
-Greg
kristof wrote
read cache. For the 7210 (which is basically a Sun Fire X4540),
that gives you 46 disks and 2 SSDs.
-Greg
Bob Friesenhahn wrote:
> On Thu, 22 Jan 2009, Ross wrote:
>
>> However, now I've written that, Sun use SATA (SAS?) SSD's in their
>> high end fishworks stora
e any known technical issues with using an SSD in an X4540?
-Greg
>
> Good idea. Thor has a CF slot, too, if you can find a high speed
> CF card.
> -- richard
We're already using the CF slot for the OS. We haven't really found
any CF cards that would be fast enough anyways :)
ty of the data.
The current solution we are considering is disabling the cache
flushing (as per a previous response in this thread), and adding one
or two SSD log devices, as this is similar to the Sun storage
appliances based on the Thor. Thoughts?
-Greg
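Concretely, something like this (pool and device names made up), with the log
devices mirrored so a single dead SSD can't take out the ZIL:

# made-up pool and device names
zpool add tank log mirror c4t0d0 c4t1d0
zpool status tank   # the "logs" section should now list the mirror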
On Jan 19, 2009, at 6:24 PM, Richard El
or the log device?
And, yes, I already know that turning off the ZIL is a Really Bad Idea.
We do, however, need to provide our users with a certain level of
performance, and what we've got with the ZIL on the pool is completely
unacceptable.
Thanks for any pointers you may have...
--
Gre
is 2008.11 (snv_100).
Another use I've seen is using zfs-auto-snapshot to take and manage
snapshots on both ends, using rsync to replicate the data, but that's
less than ideal for most folks...
-Greg
Ian Mather wrote:
> Fairly new to ZFS. I am looking to replicate data between two
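To spell out the zfs-auto-snapshot route mentioned above (SMF instance names
as shipped with 2008.11, dataset name made up):

# made-up dataset name; SMF instance names as shipped with 2008.11
zfs set com.sun:auto-snapshot=true tank/home
svcadm enable svc:/system/filesystem/zfs/auto-snapshot:hourly
svcs '*auto-snapshot*'   # confirm the instances are online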
Perhaps I misunderstand, but the issues below are all based on Nevada,
not Solaris 10.
Nevada isn't production code. For real ZFS testing, you must use a
production release, currently Solaris 10 (update 5, soon to be update 6).
In the last 2 years, I've stored everything in my environment (
Could anyone explain where the capacity % comes from for this df -h output (or
where to read to find out, having scoured the man page for df and ZFS admin
guide already)?
# df -h -F zfs
Filesystem size used avail capacity Mounted on
jira-pool/artifactory
4
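For comparison, here is the kind of zfs list output I'm lining the df numbers
up against (mountpoint assumed to match the dataset name). My rough
understanding is that df reports capacity as used / (used + avail), and that
for ZFS the reported size is itself used + avail, but I'd like to confirm:

# dataset from the df output above; mountpoint assumed
df -h /jira-pool/artifactory
zfs list -o name,used,avail,refer jira-pool/artifactory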
James C. McPherson wrote:
> Bill Sommerfeld wrote:
>
>> On Wed, 2007-09-26 at 08:26 +1000, James C. McPherson wrote:
>>
>>> How would you gather that information?
>>>
>> the tools to use would be dependent on the actual storage device in use.
>> luxadm for A5x00 and V8x0 internal s
It would be a manual process. As with any arbitrary name, it's a useful
tag, not much more.
James C. McPherson wrote:
> Gregory Shaw wrote:
>
>> Hi. I'd like to request a feature be added to zfs. Currently, on
>> SAN attached disk, zpool shows up with a big WWN for the disk. If
>> ZFS