transfer all of my data to another
machine and build the RAID-Z from scratch, then transfer the data back?
Thanks for any advice,
Ben
it off then back on?
Thanks for any advice,
Ben
Good to hear that people are
still working on it. I may have to pluck up the courage to put it on my Mac
Pro if I do a rebuild anytime soon.
Thanks again,
Ben
attach that, wait for it to resilver, then detach c5d0s0 and add another 1TB
drive and attach that to the zpool, will that increase the storage capacity of
the pool?
Thanks very much,
Ben
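For anyone following the thread, a hedged sketch of the attach/resilver/detach sequence being described; the pool name and the new device names are invented:
zpool attach tank c5d0s0 c6d0     # mirror the existing slice onto the new 1TB drive
zpool status tank                 # wait here until the resilver completes
zpool detach tank c5d0s0          # drop the smaller device
zpool attach tank c6d0 c7d0       # attach a second 1TB drive to restore redundancy
On releases of that era the extra capacity typically only shows up once the smaller device is gone and the pool is exported and re-imported; later builds expose an autoexpand pool property for this.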
losing my configurations).
Many thanks for both of your replies,
Ben
Many thanks Thomas,
I have a test machine so I shall try it on that before I try it on my main
system.
Thanks very much once again,
Ben
Thanks very much everyone.
Victor, I did think about using VirtualBox, but I have a real machine and a
supply of hard drives for a short time, so I'll test it out using that if I
can.
Scott, of course, at work we use three mirrors and it works very well, and it
has saved us on occasion where we have
Hi all,
we have just bought a Sun X2200 M2 (4 GB RAM / 2 Opteron 2214 / 2 x 250 GB
SATA2 disks, Solaris 10 update 4)
and a Sun STK 2540 FC array (8 x 146 GB SAS disks, 1 RAID controller).
The server is attached to the array with a single 4 Gb Fibre Channel link.
I want to make a mirror using ZFS with this
Hi,
I would like to offline an entire storage pool (not just some devices);
I want to stop all I/O activity to the pool.
Maybe it could be implemented with a command like:
zpool offline -f tank
which would implicitly do a 'zfs unmount tank'.
I use ZFS with Solaris 10 update 4.
Thanks,
Ben
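As far as I know there is no whole-pool offline today; the closest hedged approximation with existing commands is to export the pool, which unmounts its datasets and stops all I/O to its devices:
zpool export tank     # unmounts the pool's filesystems and releases its devices
zpool import tank     # bring it back online later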
Hi,
I know that Sun does not recommend using ZFS on 32-bit machines, but
what are the real consequences of doing so?
I have an old dual-processor Xeon server (6 GB RAM, 6 disks),
and I would like to build a raidz with 4 disks on Solaris 10 update 4.
Thanks,
Ben
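Before worrying about it, it's worth confirming which kernel the box actually boots; a quick check:
isainfo -kv     # prints, e.g., "32-bit i386 kernel modules" or "64-bit amd64 kernel modules"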
I'm doing a little research study on ZFS benchmarking and performance
profiling. Like most, I've had my favorite methods, but I'm
re-evaluating my choices and trying to be a bit more scientific than I
have in the past.
To that end, I'm curious if folks wouldn't mind sharing their work on
the subject.
On 4/21/10 2:15 AM, Robert Milkowski wrote:
> I haven't heard from you in a while! Good to see you here again :)
>
> Sorry for stating obvious but at the end of a day it depends on what
> your goals are.
> Are you interested in micro-benchmarks and comparison to other file
> systems?
>
> I think th
On 5/7/10 9:38 PM, Giovanni wrote:
> Hi guys,
>
> I have a quick question, I am playing around with ZFS and here's what I did.
>
> I created a storage pool with several drives. I unplugged 3 out of 5 drives
> from the array, currently:
>
> NAME        STATE     READ WRITE CKSUM
> gpool
On 5/8/10 3:07 PM, Tony wrote:
> Let's say I have two servers, both running OpenSolaris with ZFS. I basically
> want to be able to create a filesystem where the two servers have a common
> volume, that is mirrored between the two. Meaning, each server keeps an
> identical, real time backup of the
The drive (c7t2d0) is bad and should be replaced. The second drive
(c7t5d0) is either bad or going bad. This is exactly the kind of
problem that can force a Thumper to its knees: ZFS performance is
horrific, and as soon as you drop the bad disks things magically return to
normal.
My first recommendation
else to try? I don't want to upgrade
the pool version yet and then not be able to revert back...
thanks,
Ben
Cindy,
The other two pools are 2 disk mirrors (rpool and another).
Ben
Cindy Swearingen wrote:
Hi Ben,
Any other details about this pool, like how it might be different from
the other two pools on this system, might be helpful...
I'm going to try to reproduce this problem.
How much of a difference is there in supporting applications between Ubuntu
and OpenSolaris?
I was not considering Ubuntu until OpenSolaris would not load onto my machine...
Any info would be great. I have not been able to find any sort of comparison of
ZFS on Ubuntu and OpenSolaris.
Thanks.
(My cur
What supporting applications are there on Ubuntu for RAIDZ?
I tried to post this question on the Ubuntu forum.
Within 30 minutes my post was on the second page of new posts...
Yeah. I'm really not down with using Ubuntu on my server here. But I may be
forced to.
0GB disks.
AFAIK, it's only non-rpool disks that use the "whole disk",
and I doubt there's some sort of specific feature with
an SSD, but I could be wrong.
I like your idea of a reasonably sized root rpool and the
rest used for the ZIL. But if you're going to do LU,
you s
On 8/13/10 9:02 PM, "C. Bergström" wrote:
> Erast wrote:
>>
>>
>> On 08/13/2010 01:39 PM, Tim Cook wrote:
>>> http://www.theregister.co.uk/2010/08/13/opensolaris_is_dead/
>>>
>>> I'm a bit surprised at this development... Oracle really just doesn't
>>> get it. The part that's most disturbing to m
On 8/14/10 1:12 PM, Frank Cusack wrote:
>
> Wow, what leads you guys to even imagine that S11 wouldn't contain
> comstar, etc.? *Of course* it will contain most of the bits that
> are current today in OpenSolaris.
That's a very good question actually. I would think that COMSTAR would
stay becau
in there.
I'm just looking for a clean way to remove the old BE, and then remove the old
snapshot, without interfering with Live Upgrade in the future.
Many thanks,
Ben
> + dev=`echo $dev | sed 's/mirror.*/mirror/'`
Thanks for the suggestion, Kurt. However, I'm not running a mirror on that pool,
so I'm guessing this won't help in my case.
I'll try and pick my way through the lulib script if I get any time.
Ben
the problems). This also
happened with hot spares, which caused support to spend some time
with back-line to figure out a procedure to clear those faulted disks
which had the same ctd# as a working hot spare...
Ben
> Ben,
> I have found that booting from cdrom and importing
> the pool on the new host, then boot the hard disk
> will prevent these issues.
> That will reconfigure the zfs to use the new disk
> device.
> When running, zpool detach the missing mirror device
> and attach
NAME    SIZE  ALLOC   FREE   CAP  DEDUP  HEALTH    ALTROOT
pool2  31.8T  13.8T  17.9T   43%  1.65x  DEGRADED  -
The slog is a mirror of two SLC SSDs and the L2ARC is an MLC SSD.
thanks,
Ben
On 09/20/10 10:45 AM, Giovanni Tirloni wrote:
On Thu, Sep 16, 2010 at 9:36 AM, Ben Miller <bmil...@mail.eecis.udel.edu> wrote:
I have an X4540 running b134 where I'm replacing 500GB disks with 2TB
disks (Seagate Constellation) and the pool seems sick now. The pool
On 09/21/10 09:16 AM, Ben Miller wrote:
On 09/20/10 10:45 AM, Giovanni Tirloni wrote:
On Thu, Sep 16, 2010 at 9:36 AM, Ben Miller <bmil...@mail.eecis.udel.edu> wrote:
I have an X4540 running b134 where I'm replacing 500GB disks with 2TB
disks (Seagate Constellation) and the
On 09/22/10 04:27 PM, Ben Miller wrote:
On 09/21/10 09:16 AM, Ben Miller wrote:
I had tried a clear a few times with no luck. I just did a detach and that
did remove the old disk and has now triggered another resilver which
hopefully works. I had tried a remove rather than a detach before
If you're still having issues go into the BIOS and disable C-States, if you
haven't already. It is responsible for most of the problems with 11th Gen
PowerEdge.
zfs list is mighty slow on systems with a large number of objects, but there is
no foreseeable plan that I'm aware of to solve that "problem".
Nevertheless, you need to do a zfs list; therefore, do it once and work from
that:
zfs list > /tmp/zfs.out
for i in `grep mydataset@ /tmp/zfs.out`;
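A hedged completion of the loop sketched above; the dataset name and the per-snapshot action are placeholders:
zfs list -H -o name -t snapshot > /tmp/zfs.out
for snap in `grep '^mydataset@' /tmp/zfs.out`; do
    echo "$snap"        # replace with whatever per-snapshot work is needed
done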
Would someone "in the know" be willing to write up (preferably blog) definitive
definitions/explanations of all the arcstats provided via kstat? I'm
struggling with proper interpretation of certain values, namely "p",
"memory_throttle_count", and the mru/mfu+ghost hit vs demand/prefetch hit
counters.
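Until such a write-up exists, the raw counters in question can at least be pulled directly from kstat; a hedged example:
kstat -p zfs:0:arcstats | sort     # dump every arcstats counter
kstat -p zfs:0:arcstats:p          # just the adaptive MRU/MFU target "p"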
Thanks, not as much as I was hoping for but still extremely helpful.
Can you, or others have a look at this: http://cuddletech.com/arc_summary.html
This is a Perl script that uses kstats to drum up a report such as the
following:
System Memory:
Physical RAM: 32759 MB
Free Memory:
It's a starting point, anyway. The key is to try to draw useful conclusions
from the info to answer the torrent of "why is my ARC 30GB???"
There are several things I'm unclear on as to whether or not I'm properly
interpreting them, such as:
* As you state, the anon pages. Even the comment in code is, to
A new version (v0.2) is available:
* Fixes a divide-by-zero
* Includes tuning from /etc/system in the output
* If prefetch is disabled, it explicitly says so
* Accounts for the jacked anon count (still needs improvement here)
* Adds friendly explanations for the MRU/MFU and ghost list counts
Page and examp
I've got an Intel DP35DP motherboard, a Q6600 proc (Intel 2.4 GHz, 4 cores), 4 GB of
RAM and a
couple of SATA disks, running on ICH9. S10U5, patched about a week ago or so...
I have a zpool on a single slice (haven't added a mirror yet, was getting to
that) and have
started to suffer regular hard resets a
I've been struggling to fully understand why disk space seems to vanish. I've
dug through bits of code and reviewed all the mails on the subject that I can
find, but I still don't have a proper understanding of what's going on.
I did a test with a local zpool on snv_97... zfs list, zpool list,
No takers? :)
benr.
Is there some hidden way to coax zdb into not just displaying data based on a
given DVA but rather to dump it in raw usable form?
I've got a pool with large amounts of corruption. Several directories are
toast and I get "I/O Error" when trying to enter or read the directory...
however I can re
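A hedged sketch of the kind of zdb spelunking being asked about; flags vary by build, so check zdb(1M) on your system, and the dataset, object number and block address here are invented:
zdb -dddd tank/fs 12345        # dump the block pointers for one object
zdb -R tank 0:400000:20000     # read a raw block by vdev:offset:size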
pool online' on the disk it resilvered
in fine. Any ideas why 'zpool status -x' reports all healthy while 'zpool
status' shows a pool in degraded mode?
thanks,
Ben
> We run a cron job that does a 'zpool status -x' to
> check for any degraded pools. We
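The cron check mentioned above is roughly this shape (a hedged sketch; the mail recipient is illustrative):
#!/bin/sh
out=`/usr/sbin/zpool status -x`
if [ "$out" != "all pools are healthy" ]; then
    echo "$out" | mailx -s "zpool problem on `hostname`" root
fi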
I just put in a (low priority) bug report on this.
Ben
> This post from close to a year ago never received a
> response. We just had this same thing happen to
> another server that is running Solaris 10 U6. One of
> the disks was marked as removed and the pool
> degraded, but &
errors: No known data errors
%
Ben
> I just put in a (low priority) bug report on this.
>
> Ben
>
> > This post from close to a year ago never received
> a
> > response. We just had this same thing happen to
> > another server that is running Solaris 10 U6. One
The pools are upgraded to version 10. Also, this is on Solaris 10u6.
# zpool upgrade
This system is currently running ZFS pool version 10.
All pools are formatted using this version.
Ben
> What's the output of 'zfs upgrade' and 'zpool
> upgrade'? (I
We haven't done any 'zfs upgrade ...' yet. I'll give that a try the next time the
system can be taken down.
Ben
> A little gotcha that I found in my 10u6 update
> process was that 'zpool
> upgrade [poolname]' is not the same as 'zfs upgrade
> [poolnam
cannot unmount '/var/mysql': Device busy
cannot unmount '/var/postfix': Device busy
6 filesystems upgraded
821 filesystems already at this version
Ben
> You can upgrade live. 'zfs upgrade' with no
> arguments shows you the
> zfs version status of filesystems present w
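For anyone repeating this, a hedged outline of the commands involved; the dataset name in the last step is illustrative:
zfs upgrade                    # with no arguments, just reports filesystem versions
zfs upgrade -a                 # upgrades everything that isn't held busy
# stop whatever service has the busy filesystem open, then retry it individually:
zfs upgrade rpool/var/postfix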
# zpool status -xv
all pools are healthy
Ben
> What does 'zpool status -xv' show?
>
> On Tue, Jan 27, 2009 at 8:01 AM, Ben Miller
> wrote:
> > I forgot the pool that's having problems was
> recreated recently so it's already at zfs version 3.
> I
Ya, I agree that we need some additional data and testing. The iostat
data in itself doesn't suggest to me that the process (dd) is slow but
rather that most of the data is being retrieved elsewhere (ARC). An
fsstat would be useful to correlate with the iostat data.
One thing that also comes to
On Jan 24, 2007, at 12:37 PM, Shannon Roddy wrote:
I went with a third party FC/SATA unit which has been flawless as
a direct attach for my ZFS JBOD system. Paid about $0.70/GB.
What did you use, if you don't mind my asking?
--
Ben
Robert Milkowski wrote:
CLSNL> but if I click, say E, it has F's contents, F has Gs contents, and no
CLSNL> mail has D's contents that I can see. But the list in the mail
CLSNL> client list view is correct.
I don't believe it's a problem with the NFS/ZFS server.
Please try with simple dtrace script
I've been playing with replication of a ZFS Zpool using the recently released
AVS. I'm pleased with things, but just replicating the data is only part of
the problem. The big question is: can I have a zpool open in 2 places?
What I really want is a Zpool on node1 open and writable (productio
Is there an existing RFE for, what I'll wrongly call, "recursively visible
snapshots"? That is, .zfs in directories other than the dataset root.
Frankly, I don't need it available in all directories, although it'd be nice,
but I do have a need for making it visible one directory down from the dataset
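For reference, the only related knob today is per-dataset visibility of .zfs at the dataset root, which doesn't cover the sub-directory case; a hedged example with an invented dataset name:
zfs set snapdir=visible tank/home     # .zfs shows up at the root of tank/home only
zfs get snapdir tank/home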
Jim Dunham wrote:
Robert,
Hello Ben,
Monday, February 5, 2007, 9:17:01 AM, you wrote:
BR> I've been playing with replication of a ZFS Zpool using the
BR> recently released AVS. I'm pleased with things, but just
BR> replicating the data is only part of the problem. The
Robert Milkowski wrote:
I haven't tried it but what if you mounted ro via loopback into a zone
/zones/myzone01/root/.zfs is loop mounted in RO to /zones/myzone01/.zfs
That is so wrong. ;)
Besides just being evil, I doubt it'd work. And if it does, it probably
shouldn't.
Darren J Moffat wrote:
Ben Rockwood wrote:
Robert Milkowski wrote:
I haven't tried it but what if you mounted ro via loopback into a zone
/zones/myzone01/root/.zfs is loop mounted in RO to /zones/myzone01/.zfs
That is so wrong. ;)
Besides just being evil, I doubt
s.
http://developer.apple.com/technotes/tn2007/tn2173.html
--
Ben
libzfs_mount.c, line 380,
function zfs_share
If I re-enable nfs/server after the system is up it's fine. The system was
recently upgraded to use zfs and this has happened on the last two reboots. We
have lots of other systems that share nfs through zfs fine and I didn't see a
similar p
It does seem like an ordering problem, but nfs/server should be starting up
late enough with SMF dependencies. I need to see if I can duplicate the
problem on a test system...
S, it would be best to uncheck that box.
Yes, that's the setting you're looking for. The Xserve RAID works
great with ZFS IME.
--
Ben
I just rebooted this host this morning and the same thing happened again. I
have the core file from zfs.
[ Apr 26 07:47:01 Executing start method ("/lib/svc/method/nfs-server start") ]
Assertion failed: pclose(fp) == 0, file ../common/libzfs_mount.c, line 380, func
tion zfs_share
Abort - core dumped
I was able to duplicate this problem on a test Ultra 10. I put in a workaround
by adding a service that depends on /milestone/multi-user-server which does a
'zfs share -a'. It's strange this hasn't happened on other systems, but maybe
it's related to slower systems
I just threw in a truss in the SMF script and rebooted the test system and it
failed again.
The truss output is at http://www.eecis.udel.edu/~bmiller/zfs.truss-Apr27-2007
thanks,
Ben
doing in this situation? We
have also set up alternate filesystems for users with transient data that we do
not take snapshots on, but we still have this problem on home directories.
thanks,
Ben
Has anyone else run into this situation? Does anyone have any solutions other
than removing snapshots or increasing the quota? I'd like to put in an RFE to
reserve some space so files can be removed when users are at their quota. Any
thoughts from the ZFS team?
Ben
> We have aro
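For completeness, the quota-bump workaround mentioned above looks roughly like this (a hedged sketch; the dataset name and sizes are invented):
zfs get quota tank/home/user          # note the current quota
zfs set quota=11G tank/home/user      # raise it just enough for the rm to proceed
# ... user deletes files and/or old snapshots are destroyed ...
zfs set quota=10G tank/home/user      # restore the original quota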
Peter Schuller wrote:
Hello,
with the advent of clones and snapshots, one will of course start
creating them. Which also means destroying them.
Am I the only one who is *extremely* nervous about doing "zfs destroy
some/fs@snapshot"?
This goes for doing it both manually and automatically in a script. I
Diego Righi wrote:
Hi all, I just built a new ZFS server for home and, being a long-time and avid
reader of this forum, I'm going to post my config specs and my benchmarks
hoping this could be of some help to others :)
http://www.sickness.it/zfspr0nserver.jpg
http://www.sickness.it/zfspr0nser
May 25 23:32:59 summer unix: [ID 836849 kern.notice]
May 25 23:32:59 summer ^Mpanic[cpu1]/thread=1bf2e740:
May 25 23:32:59 summer genunix: [ID 335743 kern.notice] BAD TRAP: type=e (#pf
Page fault) rp=ff00232c3a80 addr=490 occurred in module "unix" due to a
NULL pointer dereference
May
I'm trying to test an install of ZFS to see if I can backup data from one
machine to another. I'm using Solaris 5.10 on two VMware installs.
When I do the zfs send | ssh zfs recv part, the file system (folder) is getting
created, but none of the data that I have in my snapshot is sent. I can
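For comparison, a hedged sketch of the pipeline being described; host, pool and snapshot names are invented, and the full path to zfs avoids PATH issues in the non-interactive ssh shell:
zfs snapshot tank/data@backup1
zfs send tank/data@backup1 | ssh otherhost /usr/sbin/zfs recv backup/data
ssh otherhost /usr/sbin/zfs list -r backup/data    # confirm the data arrived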
George wrote:
> I have set up an iSCSI ZFS target that seems to connect properly from
> the Microsoft Windows initiator in that I can see the volume in MMC
> Disk Management.
>
>
> When I shift over to Mac OS X Tiger with globalSAN iSCSI, I am able to
> set up the Targets with the target name
Other specs are one of those Intel
E6750 1333 MHz FSB CPUs and 2 GB of matched memory.
Ben.
> > > Hello Matthew,
> > > Tuesday, September 12, 2006, 7:57:45 PM, you
> > wrote:
> > > MA> Ben Miller wrote:
> > > >> I had a strange ZFS problem this morning.
> The
> > entire system would
> > >> hang when mounting the Z
Dick Davies wrote:
> On 04/10/2007, Nathan Kroenert <[EMAIL PROTECTED]> wrote:
>
>
>> Client A
>> - import pool make couple-o-changes
>>
>> Client B
>> - import pool -f (heh)
>>
>
>
>> Oct 4 15:03:12 fozzie ^Mpanic[cpu0]/thread=ff0002b51c80:
>> Oct 4 15:03:12 fozzie genunix: [
Dale Ghent wrote:
> ...and eventually in a read-write capacity:
>
> http://www.macrumors.com/2007/10/04/apple-seeds-zfs-read-write-
> developer-preview-1-1-for-leopard/
>
> Apple has seeded version 1.1 of ZFS (Zettabyte File System) for Mac
> OS X to Developers this week. The preview updates a p
I've run across an odd issue with ZFS quotas. This is an snv_43 system with
several zones/zfs datasets, but only one is affected. The dataset shows 10 GB
used, 12 GB referenced, but counting the files turns up only 6.7 GB of data:
zones/ABC    10.8G   26.2G   12.0G   /zones/ABC
zones/[EMAIL PROTECTED]
Today, suddenly, without any apparent reason that I can find, I'm
getting panics during zpool import. The system panicked earlier today
and has been suffering since. This is snv_43 on a thumper. Here's the
stack:
panic[cpu0]/thread=99adbac0: assertion failed: ss != NULL, file:
../..
I made a really stupid mistake... having trouble removing a hot spare
marked as failed I was trying several ways to put it back in a good
state. One means I tried was to 'zpool add pool c5t3d0'... but I forgot
to use the proper syntax "zpool add pool spare c5t3d0".
Now I'm in a bind. I've got
Eric Schrock wrote:
> There's really no way to recover from this, since we don't have device
> removal. However, I'm surprised that no warning was given. There are at
> least two things that should have happened:
>
> 1. zpool(1M) should have warned you that the redundancy level you were
>atte
Robert Milkowski wrote:
> If you can't re-create a pool (+backup&restore your data) I would
> recommend waiting for device removal in ZFS, and in the meantime I would
> attach another drive to it so you've got mirrored configuration and
> remove them once there's a device removal. Since you're alread
the following to /etc/system:
set sata:sata_max_queue_depth = 0x1
If you don't, life will be highly unpleasant and you'll believe that disks are
failing everywhere when in fact they are not.
benr.
Ben Rockwood wrote:
> Today, suddenly, without any apparent reason that I can find, I'
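A hedged illustration of applying that tuning and checking it after the reboot (the mdb check assumes the sata module is loaded):
echo 'set sata:sata_max_queue_depth = 0x1' >> /etc/system
# after the reboot, confirm the value took effect:
echo 'sata`sata_max_queue_depth/X' | mdb -k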
Does this look like a bug with 'zpool status -x'?
Ben
Experts,
Do you know where I could find the list of all the ZFS patches that will
be released with Solaris 10 U5? My customer told me that they've seen
such list for prior update releases. I've not been able to find anything
like it in the usual places
the following files:
rpool/duke:<0x8237>
I tried running "zpool clear rpool" to clear the error, but it persists in the
status output. Should a "zpool scrub rpool" get rid of this error?
Thanks,
Ben
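For context, the usual sequence for an error like this is roughly the following (a hedged sketch; an entry referring to an already-removed object sometimes only drops off after a second scrub):
zpool scrub rpool          # re-verify the pool; the error list is refreshed afterwards
zpool status -v rpool      # see whether the damaged object is still listed
zpool clear rpool          # reset the counters once the entry is gone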
Hi,
I can't seem to delete a file in my zpool that has permanent errors:
zpool status -vx
pool: rpool
state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption. Applications may be affected.
action: Restore the file in question if possible. Other
Hello again,
I'm not making progress on this.
Every time I run a zpool scrub rpool I see:
$ zpool status -vx
pool: rpool
state: DEGRADED
status: One or more devices has experienced an error resulting in data
corruption. Applications may be affected.
action: Restore the file in questi
Hi Marc,
$ : > 09 - Check.mp3
bash: 09 - Check.mp3: I/O error
$ cd ..
$ rm -rf BAD
$ rm: cannot remove `BAD/09 - Check.mp3': I/O error
I'll try shuffling the cables - but as you see above it occasionally reports on
a different disk - so I imagine the cables are OK. Also, the new disk I added has
Can someone please clarify the ability to utilize ACLs over NFSv3 from a ZFS
share? I can "getfacl" but I can't "setfacl". I can't find any documentation
in this regard. My suspicion is that ZFS shares must be NFSv4 in order to
utilize ACLs, but I'm hoping this isn't the case.
Can anyon
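On the ZFS side these are NFSv4-style ACLs, manipulated with ls and chmod rather than getfacl/setfacl; a hedged illustration with an invented user and path:
ls -V /tank/share/file                                           # show the ACL
chmod A+user:alice:read_data/write_data:allow /tank/share/file   # add an entry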
Thanks Marc - I'll run memtest on Monday, and re-seat memory/CPU/cards etc. If
that fails, I'll try moving the devices onto a different SATA controller.
Failing that I'll rebuild from scratch. Failing that, I'll get a new
motherboard!
Ben
tes. Still getting I/O errors trying to delete that
file.
Ben
Assuming I don't find any obvious hardware issues - wouldn't this be regarded
as a flaw in ZFS (i.e. no way of clearing such an error without a rebuild)?
Would I be safer rebuilding to a pair of mirrors rather than a 3-disk raidz +
hot spare?
Ben
Sent response by private message.
Today's findings are that the cksum errors appear on the new disk on the other
controller too - so I've ruled out controllers & cables. It's probably as Jeff
says - now I just have to figure out how to prove the memory is duff.
Ben
run, followed by a rebuild of the errored pool.
I'll have a read around to see if there's any way of making the memory more
stable on this mobo.
Ben
switched it back to Auto
speed&timing for now. I'll just hope that it was a one-off glitch that
corrupted the pool.
I'm going to rebuild the pool this weekend.
Thanks for all the suggestions.
Ben
products?
Best Regards,
Ben
I've run into an odd problem which I lovingly refer to as a "black hole
directory".
On a Thumper used for mail stores we've found that finds take an exceptionally
long time to run. There are directories that have as many as 400,000 files, which I
immediately considered the culprit. However, und
Hello,
I'm curious if anyone would mind sharing their experiences with zvol's. I
recently started using zvol as an iSCSI backend and was supprised by the
performance I was getting. Further testing revealed that it wasn't an iSCSI
performance issue but a zvol issue. Testing on a SATA disk l
plug another one in and
run some zfs commands to make it part of the mirror?
Ben
0 0
          raidz      ONLINE       0     0     0
            c1t8d0   ONLINE       0     0     0
            c1t9d0   ONLINE       0     0     0
            c1t10d0  ONLINE       0     0     0
            c1t11d0  ONLINE       0     0     0
errors: No known data errors
Ben
> Hello Matthew,
> Tuesday, September 12, 2006, 7:57:45 PM, you wrote:
> MA> Ben Miller wrote:
> >> I had a strange ZFS problem this morning. The
> entire system would
> >> hang when mounting the ZFS filesystems. After
> trial and error I
> >> de
> > Hello Matthew,
> > Tuesday, September 12, 2006, 7:57:45 PM, you
> wrote:
> > MA> Ben Miller wrote:
> > >> I had a strange ZFS problem this morning. The
> > entire system would
> > >> hang when mounting the ZFS filesystems. After
> &