First a little background: I'm running b130, and I have a zpool with two
raidz1 "arrays" (vdevs?), each of 4 drives, all WD RE4-GPs. They're in a
Norco-4220 case (a "home" server), which just consists of SAS backplanes
(AOC-USAS-L8i -> 8087 -> backplane -> SATA drives). A couple of the drives are
showing a
I just started replacing drives in this zpool (to increase storage). I pulled
the first drive and replaced it with a new drive, and all was well. It
resilvered with 0 errors. That was 5 days ago. Just today I was looking around
and noticed that my pool was degraded (I see now that this occurred
I just ran 'iostat -En'. This is what was reported for the drive in question
(all other drives showed 0 errors across the board).
All drives indicated the "illegal request... predictive failure analysis"
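For anyone following along, these are roughly the commands involved (a sketch; device names are from my setup):

# per-device soft/hard/transport error counters, plus the
# "predictive failure analysis" message, come from:
iostat -En

# pool health, and which device is degrading it:
zpool status -xv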
--
c7t1d0
Yeah,
--
$ smartctl -d sat,12 -i /dev/rdsk/c5t0d0
smartctl 5.39.1 2010-01-28 r3054 [i386-pc-solaris2.11] (local build)
Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net
Smartctl: Device Read Identity Failed (not an ATA/ATAPI device)
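A workaround sometimes suggested (untested here, so treat it as an assumption) is to let smartctl talk to the disk with a different device type when the SAT probe fails:

# fall back to a plain SCSI inquiry instead of SAT pass-through
smartctl -d scsi -a /dev/rdsk/c5t0d0
# or try the sat type without forcing 12-byte commands
smartctl -d sat -i /dev/rdsk/c5t0d0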
> Do worry about media errors. Though this is the most common HDD
> error, it is also the cause of data loss. Fortunately, ZFS detected this
> and repaired it for you.

Right. I assume you do recommend swapping the faulted drive out though?

> Other file systems may not be so gracious.
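For completeness, the swap itself would look something like this (a sketch; the pool name "tank" is a placeholder, and c7t1d0 is the suspect drive from earlier in the thread):

# after physically swapping the disk, rebuild onto the new one
zpool replace tank c7t1d0
# watch the resilver and confirm it finishes with 0 errors
zpool status -v tank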
Can anyone confirm that my action plan is the proper way to do this? The reason
I'm doing this is that I want a 2x raidz2 pool instead of expanding my current
2x raidz1 pool. So I'll create a 1x raidz2 vdev, migrate my current 2x raidz1
pool over, destroy that pool, and then add it as a 1x raidz2 vdev
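In command form the plan would be roughly the following (a sketch only; pool and disk names are placeholders, and the new raidz2 needs its own set of disks):

# 1. build the new pool from the new disks as a single raidz2 vdev
zpool create tank2 raidz2 c8t0d0 c8t1d0 c8t2d0 c8t3d0 c8t4d0 c8t5d0

# 2. copy everything over with a recursive snapshot
zfs snapshot -r tank@migrate
zfs send -R tank@migrate | zfs receive -F tank2

# 3. once verified, free the old disks and add them back as a second raidz2 vdev
zpool destroy tank
zpool add tank2 raidz2 c7t0d0 c7t1d0 c7t2d0 c7t3d0 c7t4d0 c7t5d0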
basis in reality until it's about 1% done or so. I think there is some
bookkeeping or something ZFS does at the start of a scrub or resilver that
throws off the time estimate for a while. That's just my experience with
it, but it's been like that pretty consistently for me.
Jonathan
If you start seeing hundreds of errors, be sure to check things like the
cable. I had a SATA cable come loose on a home ZFS fileserver and a scrub
was throwing hundreds of errors even though the drive itself was fine. I
don't want to think about what could have happened with UFS...
H
It's easier just to spend the money on enough
hardware to do it properly without the chance of data loss and the
extended downtime. "Doesn't invest the time in" may be a better
phrase than "avoids", though. I doubt Sun actually goes out of its way
to make things harder for people.
Hope that helps,
Jonathan
Michael Shadle wrote:
> On Sat, Mar 28, 2009 at 1:37 AM, Peter Tribble wrote:
>
>> zpool add tank raidz1 disk_1 disk_2 disk_3 ...
>>
>> (The syntax is just like creating a pool, only with add instead of
>> create.)
>
> so I can add individual disks to the existing tank zpool anytime I want?
Using th
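For example, growing the pool later is just the following (a sketch; disk names are placeholders):

# append another raidz1 vdev to the existing pool
zpool add tank raidz1 c2t0d0 c2t1d0 c2t2d0 c2t3d0
# confirm the new vdev shows up alongside the old one
zpool status tank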
> blocks will be allocated for the new files. That's because rsync will
> write an entirely new file and rename it over the old one.

ZFS will allocate new blocks either way; check here
http://all-unix.blogspot.com/2007/03/zfs-cow-and-relate-features.html
for more information about how
Daniel Rock wrote:
> Jonathan wrote:
>> OpenSolaris Forums wrote:
>>> if you have a snapshot of your files and rsync the same files again,
>>> you need to use "--inplace" rsync option , otherwise completely new
>>> blocks will be allocated for the
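A sketch of the rsync invocation being discussed (paths are placeholders):

# rewrite changed blocks inside the existing files instead of writing a
# new temp file and renaming it over the old one; unchanged blocks keep
# sharing space with the snapshot
rsync -a --inplace /source/dir/ /tank/backup/dir/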
Thoughts on what I should be looking at?
much thanks,
Jonathan.
Hi,
I would really appreciate it if any of you could help me get the modified mdb and zdb
(in any version of OpenSolaris) for digital forensic research purposes.
Thank you.
Jonathan Cifuentes
Will the GUID for each pool get found by
the system from the partitioned log drives?
Please give me your sage advice. Really appreciate it.
Jon
--
Jonathan Loran
On Aug 2, 2010, at 8:18 PM, Edward Ned Harvey wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Jonathan Loran
>>
> Because you're at pool v15, it does not matter if the log device fails while
> you
The real problem for us comes down to the fact that ufsdump and ufsrestore
handled tape spanning and zfs send does not.
We looked into having a wrapper write "zfs send" output to a file and running
gtar (which does support tape spanning), or cpio ... then we looked at the amount we
started storing
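The wrapper idea was roughly this (a sketch that never went anywhere; tape device, paths and tape length are placeholders):

# stage the replication stream on disk...
zfs send -R tank@backup > /staging/tank.zsend

# ...then let GNU tar handle spanning across tapes
# (--tape-length is in units of 1024 bytes)
gtar --create --multi-volume --tape-length=400000000 \
     --file=/dev/rmt/0n /staging/tank.zsend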
/work with the LSI-SAS
expander in the Supermicro chassis. Using a 1068e-based HBA works fine and
works well with osol.
Jonathan
Hey all,
New to ZFS, I made a critical error when migrating data and
configuring zpools according to needs - I stored a snapshot stream to
a file using "zfs send -R [filesystem]@[snapshot] >[stream_file]".
When I attempted to receive the stream onto the newly configured
pool, I ended up with a
>> New to ZFS, I made a critical error when migrating data and
>> configuring zpools according to needs - I stored a snapshot stream to
>> a file using "zfs send -R [filesystem]@[snapshot] >[stream_file]".
>
> Why is this a critical error? I thought you were supposed to be
> able to save the outp
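For reference, the round trip being discussed looks like this (a sketch; names are placeholders). The catch is that the file has to survive bit-for-bit, because receive aborts on any checksum error in the stream:

# save the replication stream to a file
zfs send -R tank/data@snap > /backup/data.zsend

# later, replay it into the new pool
zfs receive -F newpool/data < /backup/data.zsend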
Hello all,
I'm building a file server (or just storage that I intend to access via
Workgroup from primarily Windows machines) using ZFS raidz2 and OpenIndiana
148. I will be using this to stream Blu-ray movies and other media, so I will
be happy if I get just 20MB/s reads, which seems like a pr
Do you mean that OI148 might have a bug that Solaris 11 Express might solve? I
will download the Solaris 11 Express LiveUSB and give it a shot.
Never mind this; I destroyed the RAID volume, then checked each hard drive one
by one, and when I put it back together, the problem fixed itself. I'm now
getting 30-60MB/s read and write, which is still slow as heck, but works well
for my application.
> Would it be possible to have a number of possible places to store this
> log? What I'm thinking is that if the system drive is unavailable,
> ZFS could try each pool in turn and attempt to store the log there.
>
> In fact e-mail alerts or external error logging would be a great
> addition to ZFS. Surely it makes sense that filesy
e best position to monitor the device.
> >
> > The primary goal of ZFS is to be able to correctly read data which was
> > successfully committed to disk. There are programming interfaces
> > (e.g. fsync(), msync()) which may be used to en
Miles Nordin wrote:
>> "s" == Steve <[EMAIL PROTECTED]> writes:
>>
>
> s> http://www.newegg.com/Product/Product.aspx?Item=N82E16813128354
>
> no ECC:
>
> http://en.wikipedia.org/wiki/List_of_Intel_chipsets#Core_2_Chipsets
>
This MB will take these:
http://www.inte
it's not so!), why can't I at least have the 20GB of data that
it can restore before it bombs out with that checksum error?
Thanks for any help with this!
Jonathan
Jorgen Lundman wrote:
> # /usr/X11/bin/scanpci | /usr/sfw/bin/ggrep -A1 "vendor 0x11ab device 0x6081"
> pci bus 0x0001 cardnum 0x01 function 0x00: vendor 0x11ab device 0x6081
> Marvell Technology Group Ltd. MV88SX6081 8-port SATA II PCI-X Controller
>
> But it claims resolved for our version:
other helpful chap pointed out, if tar encounters an error in the
bitstream it just moves on until it finds usable data again. Can ZFS not do
something similar?
I'll take whatever I can get!
Jonathan
over the /home fs
from the pre-zfsroot.zfs dump? Since there seems to be a problem with the first
fs (faith/virtualmachines), I need to find a way to skip restoring that zfs, so
it can focus on the faith/home fs.
How can this be achieved with zfs receive?
Jonathan
ID=220125
It's way over my head, but if anyone can tell me the mdb commands I'm happy to
try them, even if they do kill my cat. I don't really have anything to lose
with a copy of the data, and I'll do it all in a VM anyway.
Thanks,
Jonathan
e a chance of being recovered. If
it stops half way, it has _no_ chance of recovering that data, so I favor my
odds of letting it go on to at least try :)
Or is that an entirely new CR itself?
Jonathan
value of a failure in one year:
Fe = 46% failures/month * 12 months = 5.52 failures
Jon
--
Jonathan Loran, IT Manager, Space Sciences Laboratory
s requires me to a) type more; and b) remember where the top of
the filesystem is in order to split the path. This is obviously more
of a pain if the path is 7 items deep, and the split means you can't
just use $PWD.
[My choice of .snapshot/nightly.0 is a deliberate nod to the
On 25 Sep 2008, at 17:14, Darren J Moffat wrote:
> Chris Gerhard has a zfs_versions script that might help:
> http://blogs.sun.com/chrisg/entry/that_there_is
Ah. Cool. I will have to try this out.
Jonathan
two vdevs out
of two raidz to see if you get twice the throughput, more or less. I'll
bet the answer is yes.
Jon
--
Jonathan Loran, IT Manager
Hi
Please see the query below. Appreciate any help.
Rgds
jonathan
-------- Original Message --------
Would you mind helping me ask your tech guy whether there will be
repercussions when I try to run this command, in view of the situation below:
# zpool add -f zhome raidz
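One low-risk check first (a sketch; the disk list after "raidz" got cut off above, so the names here are placeholders) is zpool's dry-run flag, which prints the layout that would result without changing anything:

# -n shows what the pool would look like, without actually adding anything
zpool add -n zhome raidz c9t1d0 c9t2d0 c9t3d0
# only drop -n (and keep -f) once you understand what it is warning about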
tools, resilience of the platform, etc.)..
>
> .. Of course though, I guess a lot of people who may have never had a
> problem wouldn't even be signed up on this list! :-)
>
>
> Thanks!
y, give it a go and see what happens. I'm sure I can still dimly
recall a time when 500MHz/512MB was a kick-ass system...
Jonathan
(*) This machine can sustain 110MB/s off of the 4-disk RAIDZ1 set,
which is substantially more than I can get over my 100Mb network.
___
the system board for this machine would make use of ECC
memory either, which is not good from a ZFS perspective. How many SATA
plugs are there on the MB in this guy?
Jon
--
Jonathan Loran, IT Manager
not quite .. it's 16KB at the front and 8MB at the back of the disk (16384
sectors) for the Solaris EFI label - so you need to zero out both of these.
of course, since these drives are <1TB, i find it's easier to format
to SMI (vtoc) .. with format -e (choose SMI, label, save, validate -
then choose EFI
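A sketch of the zeroing step, using the sizes quoted above (destructive - double-check the device, and use the whole-disk node, e.g. p0 on x86; DISKSECTORS stands for the total sector count reported by format/prtvtoc):

# wipe the 16KB at the front of the disk
dd if=/dev/zero of=/dev/rdsk/c5t0d0p0 bs=512 count=32

# wipe the 16384 sectors (8MB) at the end
dd if=/dev/zero of=/dev/rdsk/c5t0d0p0 bs=512 \
   seek=$((DISKSECTORS - 16384)) count=16384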
On Mar 6, 2009, at 8:58 AM, Andrew Gabriel wrote:
Jim Dunham wrote:
ZFS the filesystem is always on disk consistent, and ZFS does
maintain filesystem consistency through coordination between the
ZPL (ZFS POSIX Layer) and the ZIL (ZFS Intent Log). Unfortunately
for SNDR, ZFS caches a lot o
es intact?
I'm going to perform a full backup of this guy (not so easy on my
budget), and I would rather only get the good files.
Thanks,
Jon
--
Jonathan Loran
On Jun 1, 2009, at 2:41 PM, Paul Choi wrote:
"zpool clear" just clears the list of errors (and # of checksum
errors) from its stats. It does not modify the filesystem in any
manner. You run "zpool clear" to make the zpool forget that it ever
had any issues.
-Paul
Jonat
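In practice that usually pairs with a scrub to re-verify everything on disk afterwards (a sketch; pool name assumed):

# forget the old error counts and error list
zpool clear tank
# then re-read and re-checksum every block to see whether problems persist
zpool scrub tank
zpool status -v tank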
he zfs layer, and also do backups.
Unfortunately for me, penny pinching has precluded both for us until
now.
Jon
On Jun 1, 2009, at 4:19 PM, A Darren Dunham wrote:
On Mon, Jun 01, 2009 at 03:19:59PM -0700, Jonathan Loran wrote:
Kinda scary then. Better make sure we delete all the bad fil
i've seen a problem where periodically a 'zfs mount -a' and sometimes
a 'zpool import ' can create what appears to be a race condition
on nested mounts .. that is .. let's say that i have:
FS mountpoint
pool/export
pool/fs1
On Jul 4, 2009, at 12:03 AM, Bob Friesenhahn wrote:
% ./diskqual.sh
c1t0d0 130 MB/sec
c1t1d0 130 MB/sec
c2t202400A0B83A8A0Bd31 13422 MB/sec
c3t202500A0B83A8A0Bd31 13422 MB/sec
c4t600A0B80003A8A0B096A47B4559Ed0 191 MB/sec
c4t600A0B80003A8A0B096E47B456DAd0 192 MB/sec
c4t600A0B80003A8A0B00
On Jul 4, 2009, at 11:57 AM, Bob Friesenhahn wrote:
This brings me to the absurd conclusion that the system must be
rebooted immediately prior to each use.
see Phil's later email .. an export/import of the pool or a remount of
the filesystem should clear the page cache - with mmap'd files
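i.e. something along these lines (a sketch; pool and filesystem names assumed):

# bounce the pool to drop cached pages for mmap'd files
zpool export tank
zpool import tank

# or just remount the one filesystem
zfs umount tank/data
zfs mount tank/data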
>
> > We have a SC846E1 at work; it's the 24-disk, 4u
> version of the 826e1.
> > It's working quite nicely as a SATA JBOD enclosure.
> We'll probably be
> buying another in the coming year to have more
> capacity.
> Good to hear. What HBA(s) are you using against it?
>
I've got one too and it
On Aug 14, 2009, at 11:14 AM, Peter Schow wrote:
On Thu, Aug 13, 2009 at 05:02:46PM -0600, Louis-Frédéric Feuillette
wrote:
I saw this question on another mailing list, and I too would like to
know. And I have a couple questions of my own.
== Paraphrased from other list ==
Does anyone have a
On Sep 9, 2009, at 9:29 PM, Bill Sommerfeld wrote:
On Wed, 2009-09-09 at 21:30 +, Will Murnane wrote:
Some hours later, here I am again:
scrub: scrub in progress for 18h24m, 100.00% done, 0h0m to go
Any suggestions?
Let it run for another day.
A pool on a build server I manage takes ab
Roch
I've been chewing on this for a little while and had some thoughts
On Jan 15, 2007, at 12:02, Roch - PAE wrote:
Jonathan Edwards writes:
On Jan 5, 2007, at 11:10, Anton B. Rang wrote:
DIRECT IO is a set of performance optimisations to circumvent
shortcomings of a given files
On Jan 24, 2007, at 09:25, Peter Eriksson wrote:
too much of our future roadmap, suffice it to say that one should
expect
much, much more from Sun in this vein: innovative software and
innovative
hardware working together to deliver world-beating systems with
undeniable
economics.
Yes p
On Jan 24, 2007, at 06:54, Roch - PAE wrote:
[EMAIL PROTECTED] writes:
Note also that for most applications, the size of their IO
operations
would often not match the current page size of the buffer, causing
additional performance and scalability issues.
Thanks for mentioning this, I forgo
On Jan 24, 2007, at 12:41, Bryan Cantrill wrote:
well, "Thumper" is actually a reference to Bambi
You'd have to ask Fowler, but certainly when he coined it, "Bambi"
was the
last thing on anyone's mind. I believe Fowler's intention was "one
that
thumps" (or, in the unique parlance of a
On Jan 25, 2007, at 10:16, Torrey McMahon wrote:
Albert Chin wrote:
On Wed, Jan 24, 2007 at 10:19:29AM -0800, Frank Cusack wrote:
On January 24, 2007 10:04:04 AM -0800 Bryan Cantrill
<[EMAIL PROTECTED]> wrote:
On Wed, Jan 24, 2007 at 09:46:11AM -0800, Moazam Raja wrote:
Well, he did sa
On Jan 25, 2007, at 14:34, Bill Sommerfeld wrote:
On Thu, 2007-01-25 at 10:16 -0500, Torrey McMahon wrote:
So there's no way to treat a 6140 as JBOD? If you wanted to use a
6140
with ZFS, and really wanted JBOD, your only choice would be a RAID 0
config on the 6140?
Why would you want to
On Jan 25, 2007, at 17:30, Albert Chin wrote:
On Thu, Jan 25, 2007 at 02:24:47PM -0600, Al Hopper wrote:
On Thu, 25 Jan 2007, Bill Sommerfeld wrote:
On Thu, 2007-01-25 at 10:16 -0500, Torrey McMahon wrote:
So there's no way to treat a 6140 as JBOD? If you wanted to use
a 6140
with ZFS, an
On Jan 26, 2007, at 13:52, Marion Hakanson wrote:
[EMAIL PROTECTED] said:
. . .
realize that the pool is now in use by the other host. That leads
to two
systems using the same zpool which is not nice.
Is there any solution to this problem, or do I have to get Sun
Cluster 3.2 if
I want to
On Jan 26, 2007, at 09:16, Jeffery Malloch wrote:
Hi Folks,
I am currently in the midst of setting up a completely new file
server using a pretty well loaded Sun T2000 (8x1GHz, 16GB RAM)
connected to an Engenio 6994 product (I work for LSI Logic so
Engenio is a no brainer). I have config
On Jan 29, 2007, at 14:17, Jeffery Malloch wrote:
Hi Guys,
SO...
From what I can tell from this thread, ZFS is VERY fussy about
managing writes, reads and failures. It wants to be bit perfect.
So if you use the hardware that comes with a given solution (in my
case an Engenio 6994) to ma
On Feb 2, 2007, at 15:35, Nicolas Williams wrote:
Unlike traditional journalling replication, a continuous ZFS send/recv
scheme could deal with resource constraints by taking a snapshot and
throttling replication until resources become available again.
Replication throttling would mean losing s
On Feb 3, 2007, at 02:31, dudekula mastan wrote:
After creating the ZFS file system on a VTOC labeled disk, I am
seeing the following warning messages.
Feb 3 07:47:00 scoobyb Corrupt label; wrong magic number
Feb 3 07:47:00 scoobyb scsi: [ID 107833 kern.warning] WARNING: /
scsi_vhci/[
On Feb 6, 2007, at 06:55, Robert Milkowski wrote:
Hello zfs-discuss,
It looks like when ZFS issues write cache flush commands, the se3510
actually honors them. I do not have a spare se3510 right now to be 100%
sure, but comparing an nfs/zfs server with se3510 to another nfs/ufs
server with se3510 w
On Feb 6, 2007, at 11:46, Robert Milkowski wrote:
Does anybody know how to tell the se3510 not to honor write cache flush
commands?
JE> I don't think you can .. DKIOCFLUSHWRITECACHE *should* tell the
array
JE> to flush the cache. Gauging from the amount of calls that zfs
makes to
JE>
Roch
what's the minimum allocation size for a file in zfs? I get 1024B by
my calculation (1 x 512B block allocation (minimum) + 1 x 512B inode/
znode allocation) since we never pack file data in the inode/znode.
Is this a problem? Only if you're trying to pack a lot files small
byte fil
On Feb 20, 2007, at 15:05, Krister Johansen wrote:
what's the minimum allocation size for a file in zfs? I get 1024B by
my calculation (1 x 512B block allocation (minimum) + 1 x 512B inode/
znode allocation) since we never pack file data in the inode/znode.
Is this a problem? Only if you're t
right on for optimizing throughput on solaris .. a couple of notes
though (also mentioned in the QFS manuals):
- on x86/x64 you're just going to have an sd.conf so just increase
the max_xfer_size for all with a line at the bottom like:
sd_max_xfer_size=0x80;
(note: if you look at
be very much appreciated.
Thanks,
Jon
--
Jonathan Loran, IT Manager, Space Sciences Laboratory, UC Berkeley
You know you've got
an empty label if you get stderr entries at the top of the format
output, or syslog messages around "corrupt label - bad magic number"
Jonathan
On May 15, 2007, at 13:13, Jürgen Keil wrote:
Would you mind also doing:
ptime dd if=/dev/dsk/c2t1d0 of=/dev/null bs=128k count=1
to see the raw performance of underlying hardware.
This dd command is reading from the block device,
which might cache data and probably splits requests
into
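To take the OS cache out of the picture, the same test is usually pointed at the raw device instead (a sketch; the count is arbitrary):

# /dev/rdsk/* is the character (raw) device, so reads go straight to
# the driver instead of through the cached block device
ptime dd if=/dev/rdsk/c2t1d0 of=/dev/null bs=128k count=10000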
On Jun 1, 2007, at 18:37, Richard L. Hamilton wrote:
Can one use a spare SCSI or FC controller as if it were a target?
we'd need an FC or SCSI target mode driver in Solaris .. let's just
say we
used to have one, and leave it mysteriously there. smart idea though!
---
.je
B file write of
zeros .. or use a better opensource tool like iozone to get a better
fix on single thread vs multi-thread, read/write mix, and block size
differences for your given filesystem and storage layout
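For example, something along these lines (a sketch; sizes, thread count and paths are arbitrary placeholders):

# sequential write/rewrite (-i 0) and read/reread (-i 1), 128k records,
# 2GB per thread, 4 threads, one test file per thread
iozone -i 0 -i 1 -r 128k -s 2g -t 4 -F /tank/t1 /tank/t2 /tank/t3 /tank/t4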
jonathan
'll need the following in the smb.conf [public]
section:
    vfs objects = zfsacl
    nfs4: mode = special
and for other issues around samba and the zfs_acl patch you should
really watch jurasek's blog:
http://blogs.sun.com/jurasek/
jonathan
On Sep 6, 2007, at 14:48, Nicolas Williams wrote:
>> Exactly the articles point -- rulings have consequences outside of
>> the
>> original case. The intent may have been to store logs for web server
>> access (logical and prudent request) but the ruling states that
>> RAM albeit
>> working m
> Will I see the benefit of compression on the blocks
> that are copied by the mirror being resilvered?
No; resilvering just re-copies the existing blocks, in whatever compression
state they are in. You need to re-write the files *at the filesystem layer*
to get the blocks compressed.
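A sketch of what re-writing at the filesystem layer means in practice (paths assumed; anything that rewrites the file contents will do):

# make sure compression is enabled on the dataset first
zfs set compression=on tank/data

# then rewrite each file so new (compressed) blocks get allocated
# (note: this resets timestamps and breaks any hard links)
cp /tank/data/file /tank/data/file.tmp && mv /tank/data/file.tmp /tank/data/file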
Cheer
ks!
> Kent
--
Jonathan Loran
C-SAT2-MV8.cfm)
> for about $100 each
>
>> Good luck,
> Getting there - can anybody clue me in on how much CPU/memory ZFS
> needs? I have an old 1.2GHz box with 1GB of memory lying around - would
> it be sufficient?
>
>
> Thanks!
> Kent
On Sep 21, 2007, at 14:57, eric kustarz wrote:
>> Hi.
>>
>> I gave a talk about ZFS during EuroBSDCon 2007, and because it won
>> the
>> the best talk award and some find it funny, here it is:
>>
>> http://youtube.com/watch?v=o3TGM0T1CvE
>>
>> a bit better version is here:
>>
>> http:
roblem of worrying about where a user's
files are when they want to access them :(.
--
Jonathan Loran, IT Manager
Paul B. Henson wrote:
On Sat, 22 Sep 2007, Jonathan Loran wrote:
My gut tells me that you won't have much trouble mounting 50K file
systems with ZFS. But who knows until you try. My questions for you is
can you lab this out?
Yeah, after this research phase has been comp
On Sep 25, 2007, at 19:57, Bryan Cantrill wrote:
>
> On Tue, Sep 25, 2007 at 04:47:48PM -0700, Vincent Fox wrote:
>> It seems like ZIL is a separate issue.
>
> It is very much the issue: the separate log device work was done exactly
> to make better use of this kind of non-volatile memory.
On Sep 26, 2007, at 14:10, Torrey McMahon wrote:
> You probably don't have to create a LUN the size of the NVRAM
> either. As
> long as its dedicated to one LUN then it should be pretty quick. The
> 3510 cache, last I checked, doesn't do any per LUN segmentation or
> sizing. It's a simple front
SCSI based, but solid and cheap enclosures if you don't care about
support:
http://search.ebay.com/search/search.dll?satitle=Sun+D1000
On Oct 1, 2007, at 12:15, Andy Lubel wrote:
> I gave up.
>
> The 6120 I just ended up not doing zfs. And for our 6130 since we
> don't
> have santricity or t
rites enough to
make a difference? Possibly not.
Anton
Nicolas Williams wrote:
On Thu, Oct 04, 2007 at 10:26:24PM -0700, Jonathan Loran wrote:
I can envision a highly optimized, pipelined system, where writes and
reads pass through checksum, compression, encryption ASICs, that also
locate data properly on disk. ...
I've argued b
--
Jonathan Loran, IT Manager, Space Sciences Laboratory, UC Berkeley
http://milek.blogspot.com
--
Jonathan Loran
Richard Elling wrote:
> Jonathan Loran wrote:
...
> Do not assume that a compressed file system will send compressed.
> IIRC, it does not.
Let's say, if it were possible to detect the remote compression support,
couldn't we send it compressed? With higher compression
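Until something like that exists, the usual workaround is to compress the stream in transit (a sketch; host, dataset and snapshot names are placeholders):

# compress the stream on the wire, decompress on the far side
zfs send tank/data@snap | gzip -c | \
    ssh backuphost "gunzip -c | zfs receive -F backup/data"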
On Oct 18, 2007, at 11:57, Richard Elling wrote:
> David Runyon wrote:
>> I was presenting to a customer at the EBC yesterday, and one of the
>> people at the meeting said using df in ZFS really drives him crazy
>> (no,
>> that's all the detail I have). Any ideas/suggestions?
>
> Filter it. T
On Oct 18, 2007, at 13:26, Richard Elling wrote:
>
> Yes. It is true that ZFS redefines the meaning of available space.
> But
> most people like compression, snapshots, clones, and the pooling
> concept.
> It may just be that you want zfs list instead, df is old-school :-)
exactly - i'm not
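The zfs list view would be e.g. (a sketch; pool name assumed):

# dataset-level space accounting, instead of df's per-mount numbers
zfs list -r tank
# raw pool capacity and usage
zpool list tank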
On Oct 20, 2007, at 20:23, Vincent Fox wrote:
> To my mind ZFS has a serious deficiency for JBOD usage in a high-
> availability clustered environment.
>
> Namely, inability to tie spare drives to a particular storage group.
>
> Example in clustering HA setups you would would want 2 SAS JBOD
>
Hey Bill:
what's an object here? or do we have a mapping between "objects" and
block pointers?
for example a zdb -bb might show:
th37 # zdb -bb rz-7
Traversing all blocks to verify nothing leaked ...
No leaks (block sum matches space maps exactly)
bp count: 47
On Nov 10, 2007, at 23:16, Carson Gaspar wrote:
> Mattias Pantzare wrote:
>
>> As the fsid is created when the file system is created it will be the
>> same when you mount it on a different NFS server. Why change it?
>>
>> Or are you trying to match two different file systems? Then you also
>> ha
think it should be too bad (for ::memstat), given that (at least in
Nevada), all of the ZFS caching data belongs to the "zvp" vnode, instead of
"kvp". The work that made that change was:
4894692 caching data in heap inflates crash dump
Of course, this so-called "fr
ata buffers are attached to zvp; however, we still keep metadata in
> the crashdump. At least right now, this means that cached ZFS metadata
> has kvp as its vnode.
>
>
Still, it's better than what you get currently.
Cheers,
- jonathan
were 1-1.5MB JPEGs, and
the errors moved around, so I could have just copied a file repeatedly
until I got a good copy, but that would have been a lot of work.
Jonathan
revised indentation:
mirror2 / # zpool status
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          raidz2    ONLINE       0     0     0
            c0t1d0  ONLINE       0     0     0
--
ith c4t0d0 plus some more disks
since there are more than the recommended number of disks in tank
already.
jonathan soons