On Wed, May 05, 2010 at 11:32:23PM -0400, Edward Ned Harvey wrote:
> > From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> > boun...@opensolaris.org] On Behalf Of Robert Milkowski
> >
> > if you can disable ZIL and compare the performance to when it is off it
> > will give you an estimate
On May 5, 2010, at 8:35 PM, Richard Jahnel wrote:
> Hmm...
>
> To clarify.
>
> Every discussion or benchmarking that I have seen always shows both off,
> compression only, or both on.
>
> Why never compression off and dedup on?
I've seen this quite often. The decision to compress is based on t
On 6 May 2010, at 13:18 , Edward Ned Harvey wrote:
>> From: Michael Sullivan [mailto:michael.p.sulli...@mac.com]
>>
>> While it explains how to implement these, there is no information
>> regarding failure of a device in a striped L2ARC set of SSD's. I have
>
> http://www.solarisinternals.com/w
> From: Michael Sullivan [mailto:michael.p.sulli...@mac.com]
>
> My Google is very strong and I have the Best Practices Guide committed
> to bookmark as well as most of it to memory.
>
> While it explains how to implement these, there is no information
> > regarding failure of a device in a striped L2ARC set of SSD's.
Hi Ed,
Thanks for your answers. They seem to make sense, sort of…
On 6 May 2010, at 12:21 , Edward Ned Harvey wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Michael Sullivan
>>
>> I have a question I cannot seem to find an answer to
OK, I've installed OpenSolaris DEV 134 and created 2 files.
mkfile 128m /disk1
mkfile 127m /disk2
zpool create stapler /disk1
zpool attach stapler /disk1 /disk2
cannot attach /disk2 to /disk1: device is too small (that's what she said..
lol)
But, if I created 128m and 128m - 10 bytes, it works. I
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Steven Stallion
>
> I had a question regarding how the ZIL interacts with zpool import:
>
> Given that the intent log is replayed in the event of a system failure,
> does the replay behavior differ if -f is passed to zpool import?
On 05/ 6/10 03:35 PM, Richard Jahnel wrote:
Hmm...
To clarify.
Every discussion or benchmarking that I have seen always shows both off,
compression only, or both on.
Why never compression off and dedup on?
After some further thought... perhaps it's because compression works at the
byte level
Hmm...
To clarify.
Every discussion or benchmarking that I have seen always shows both off,
compression only, or both on.
Why never compression off and dedup on?
After some further thought... perhaps it's because compression works at the
byte level and dedup is at the block level. Perhaps I hav
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Robert Milkowski
>
> if you can disable ZIL and compare the performance to when it is off it
> will give you an estimate of what's the absolute maximum performance
> increase (if any) by having
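For what it's worth, the usual way to run that test on builds of this era was the
zil_disable tunable. A minimal sketch, on a test box only (the setting is
system-wide and is only picked up when a dataset is mounted, so remount the
filesystem between benchmark runs):

echo zil_disable/W0t1 | mdb -kw    # disable the ZIL at runtime
echo zil_disable/W0t0 | mdb -kw    # re-enable it afterwards

or persistently, by adding "set zfs:zil_disable = 1" to /etc/system and rebooting.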
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Michael Sullivan
>
> I have a question I cannot seem to find an answer to.
Google for ZFS Best Practices Guide (on solarisinternals). I know this
answer is there.
> I know if I set up ZIL
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Ray Van Dolson
>
> Well, being able to remove ZIL devices is one important feature
> missing. Hopefully in U9. :)
I did have a support rep confirm for me that both the log device removal,
and
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Bob Friesenhahn
>
> From a zfs standpoint, Solaris 10 does not seem to be behind the
> currently supported OpenSolaris release.
I'm sorry, I'll have to disagree with you there. In Solaris 10,
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
Thanks to Victor, here is at least a proof of concept that yes, it is possible
to reverse-resolve an inode number --> pathname, and yes, it is almost
infinitely faster than doing
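For anyone curious, one way to do that kind of lookup is with zdb, since the
inode number stat reports for a file on ZFS is its object number. A sketch,
with 12345 standing in for the inode number and tank/fs for the dataset (both
are just examples):

find /tank/fs -inum 12345     # the slow way: walks the whole tree
zdb -dddd tank/fs 12345       # direct lookup; the object dump includes a "path" line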
One of the big things to remember with dedup is that it is
block-oriented (as is compression) - it deals with things in discrete
chunks, (usually) not the entire file as a stream. So, let's do a
thought-experiment here:
File A is 100MB in size. From ZFS's standpoint, let's say it's made up
of 100
Another thought is this: _unless_ the CPU is the bottleneck on
a particular system, compression (_when_ it actually helps) can
speed up overall operation, by reducing the amount of I/O needed.
But storing already-compressed files in a filesystem with compression
is likely to result in wasted effort.
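An easy way to see whether compression is earning its keep on a given dataset
is the compressratio property. A quick sketch, with dataset names that are only
examples:

zfs set compression=on tank/docs
cp -r /usr/share/man /tank/docs       # text compresses well; expect a ratio noticeably above 1.00x
zfs get compressratio tank/docs

zfs set compression=on tank/media     # JPEGs, MP3s and the like are already compressed
zfs get compressratio tank/media      # expect roughly 1.00x here, i.e. mostly wasted CPU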
> I've googled this for a bit, but can't seem to find
> the answer.
>
> What does compression bring to the party that dedupe
> doesn't cover already?
>
> Thank you for your patience and answers.
That almost sounds like a classroom question.
Pick a simple example: large text files, of which each
Dedup came much later than compression. Also, compression saves both
space and therefore load time even when there's only one copy. It is
especially good for e.g. HTML or man page documentation which tends to
compress very well (versus binary formats like images or MP3s that
don't).
It gi
I've googled this for a bit, but can't seem to find the answer.
What does compression bring to the party that dedupe doesn't cover already?
Thank you for your patience and answers.
On Wed, May 5, 2010 at 5:00 PM, Ian Collins wrote:
> Have you hot swapped any drives? I had a similar oddity after swapping
> drives and running cfgadm.
No hot-swapping. I'd imported & exported both pools from a LiveCD
environment, but I'd also rebooted at least twice since then.
-B
--
Brando
On Wed, May 05, 2010 at 05:09:40PM -0700, Erik Trimble wrote:
> On Wed, 2010-05-05 at 19:03 -0500, Bob Friesenhahn wrote:
> > On Wed, 5 May 2010, Ray Van Dolson wrote:
> > >>
> > >> From a zfs standpoint, Solaris 10 does not seem to be behind the
> > >> currently supported OpenSolaris release.
> >
On Wed, 2010-05-05 at 19:03 -0500, Bob Friesenhahn wrote:
> On Wed, 5 May 2010, Ray Van Dolson wrote:
> >>
> >> From a zfs standpoint, Solaris 10 does not seem to be behind the
> >> currently supported OpenSolaris release.
> >
> > Well, being able to remove ZIL devices is one important feature
> >
On Wed, 5 May 2010, Ray Van Dolson wrote:
From a zfs standpoint, Solaris 10 does not seem to be behind the
currently supported OpenSolaris release.
Well, being able to remove ZIL devices is one important feature
missing. Hopefully in U9. :)
While the development versions of OpenSolaris are
You are right; the system does not really care that it cannot mount it
automatically, but it still tries since it sees the zpool data.
pfexec zdb -l /dev/rdsk/c7t0d0s2
--------------------------------------------
LABEL 0
--------------------------------------------
failed to unpack label 0
On 05/ 6/10 11:48 AM, Brandon High wrote:
I know for certain that my rpool and tank pool are not both using
c6t0d0 and c6t1d0, but that's what zpool status is showing.
It appears to be an output bug, or a problem with the zpool.cache,
since format shows my rpool devices at c8t0d0 and c8t1d0.
I know for certain that my rpool and tank pool are not both using
c6t0d0 and c6t1d0, but that's what zpool status is showing.
It appears to be an output bug, or a problem with the zpool.cache,
since format shows my rpool devices at c8t0d0 and c8t1d0.
What's the right way to fix this? Do nothing?
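One low-risk thing to try is to export and re-import the data pool; the import
re-scans the devices and rewrites the cached paths, which usually clears up
stale names (don't try this on the root pool, obviously). A sketch using your
pool name:

zpool export tank
zpool import tank          # or: zpool import -d /dev/dsk tank, to force a scan of /dev/dsk

then check zpool status again.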
On Wed, May 05, 2010 at 04:31:08PM -0700, Bob Friesenhahn wrote:
> On Thu, 6 May 2010, Ian Collins wrote:
> >> Bob and Ian are right. I was trying to remember the last time I installed
> >> Solaris 10, and the best I can recall, it was around late fall 2007.
> >> The fine folks at Oracle have been
On Thu, 6 May 2010, Ian Collins wrote:
Bob and Ian are right. I was trying to remember the last time I installed
Solaris 10, and the best I can recall, it was around late fall 2007.
The fine folks at Oracle have been making improvements to the product
since then, even though no new significant f
On Thu, 6 May 2010, Daniel Carosone wrote:
That said, I'd also recommend a scrub on a regular basis, once the
resilver has completed, and that will trawl through all the data and
take all that time you were worried about anyway. For a 200G disk,
full, over usb, I'd expect around 4-5 hours. Tha
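For reference, a minimal sketch of what a regular scrub looks like in practice
(the pool name is just an example):

zpool scrub tank       # start one by hand
zpool status tank      # shows scrub progress and, later, the completion time and error count

and, to schedule it, a root crontab entry such as a weekly run early Sunday morning:

0 3 * * 0 /usr/sbin/zpool scrub tank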
On Wed, 5 May 2010, Edward Ned Harvey wrote:
Here are the obstacles I think you'll have with your proposed solution:
#1 I think the entire used portion of the filesystem needs to resilver
every time. I don't think there's any such thing as an incremental
resilver.
It sounds like you are
On May 5, 2010, at 12:59 PM, nich romero wrote:
> Stupid question time.
>
> I have a CF card on which I placed a ZFS volume. Now I want to put a UFS volume
> on it instead, but I cannot seem to get rid of the ZFS information on the
> drive. I have tried clearing and recreating the partition table
Hi Euan,
You might find some of this useful:
http://breden.org.uk/2009/08/29/home-fileserver-mirrored-ssd-zfs-root-boot/
http://breden.org.uk/2009/08/30/home-fileserver-zfs-boot-pool-recovery/
I backed up the rpool to a single file which I believe is frowned upon, due to
the consequences of an
On Wed, May 05, 2010 at 04:34:13PM -0400, Edward Ned Harvey wrote:
> The suggestion I would have instead, would be to make the external drive its
> own separate zpool, and then you can incrementally "zfs send | zfs receive"
> onto the external.
I'd suggest doing both, to different destinations :)
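A minimal sketch of that second destination, assuming a separate pool on the
external disk (device, pool, and snapshot names are only examples):

zpool create extpool c9t0d0                   # one-time: a pool on the external disk
zfs snapshot rpool/export/home@mon
zfs send rpool/export/home@mon | zfs receive -F extpool/home          # first, full copy
zfs snapshot rpool/export/home@tue
zfs send -i @mon rpool/export/home@tue | zfs receive -F extpool/home  # later, incremental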
Glenn Lagasse wrote:
How about ease of use: all you have to do is plug in the USB disk and
ZFS will 'do the right thing'. You don't have to remember to run zfs
send | zfs receive, or bother with figuring out what to send/recv etc
etc etc.
It should be possible to automate that via syseventd/s
On 05/ 6/10 05:32 AM, Richard Elling wrote:
On May 4, 2010, at 7:55 AM, Bob Friesenhahn wrote:
On Mon, 3 May 2010, Richard Elling wrote:
This is not a problem on Solaris 10. It can affect OpenSolaris, though.
That's precisely the opposite of what I thought. Care to explain?
* Edward Ned Harvey (solar...@nedharvey.com) wrote:
> > From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> > boun...@opensolaris.org] On Behalf Of Matt Keenan
> >
> > Just wondering whether mirroring a USB drive with main laptop disk for
> > backup purposes is recommended or not.
> >
I got a suggestion to check what fmdump -eV gave, to look for PCI errors in case
the controller might be broken.
Attached you'll find the fmdump -eV from the last panic. It indicates that ZFS can't
open the drives. That might suggest a broken controller, but my slog is on the
motherboard's internal controller
On Wed, May 5, 2010 at 1:36 PM, Matt Cowger wrote:
> It probably put an EFI label on the disk. Try wiping the first AND
> last 2MB.
>
>
If nothing else works, the following should definitely do it:
dd if=/dev/zero of=/dev/whatever bs=1M
That will write zeroes to every bit of the drive
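And a sketch of the narrower wipe suggested above, for when zeroing the whole
drive would take too long (the device name and SIZE_MB are placeholders; get the
disk size from format or prtvtoc first):

dd if=/dev/zero of=/dev/rdsk/c1t0d0p0 bs=1024k count=2                        # first 2 MB
dd if=/dev/zero of=/dev/rdsk/c1t0d0p0 bs=1024k seek=$((SIZE_MB - 2)) count=2  # last 2 MB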
It probably put an EFI label on the disk. Try wiping the first AND
last 2MB.
--M
-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of nich romero
Sent: Wednesday, May 05, 2010 1:00 PM
To: zfs-discuss@opensolaris.
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Matt Keenan
>
> Just wondering whether mirroring a USB drive with main laptop disk for
> backup purposes is recommended or not.
>
> Plan would be to connect the USB drive, once or twice a week
Stupid question time.
I have a CF card on which I placed a ZFS volume. Now I want to put a UFS volume on
it instead, but I cannot seem to get rid of the ZFS information on the drive.
I have tried clearing and recreating the partition table with fdisk. I have
tried clearing the labels and VTOC
On 05/05/2010 20:45, Steven Stallion wrote:
All,
I had a question regarding how the ZIL interacts with zpool import:
Given that the intent log is replayed in the event of a system failure,
does the replay behavior differ if -f is passed to zpool import? For
example, if I have a system which fai
All,
I had a question regarding how the ZIL interacts with zpool import:
Given that the intent log is replayed in the event of a system failure,
does the replay behavior differ if -f is passed to zpool import? For
example, if I have a system which fails prior to completing a series of
writes and
Support for thin reclamation depends on the SCSI "WRITE SAME" command; see this
draft of a document from T10:
http://www.t10.org/ftp/t10/document.05/05-270r0.pdf.
I spent some time searching the source code for support for "WRITE SAME", but I
wasn't able to find much. I assume that if it
We have a pair of opensolaris systems running snv_124. Our main zpool
'z' is running ZFS pool version 18.
Problem:
#zfs destroy -f z/Users/harri...@zfs-auto-snap:daily-2010-04-09-00:00
cannot destroy 'z/Users/harri...@zfs-auto-snap:daily-2010-04-09-00:00':
dataset is busy
I have tried:
Un
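Two other things worth checking, since pool version 18 is the one that introduced
snapshot user holds; a sketch against the snapshot from the error above:

zfs holds z/Users/harri...@zfs-auto-snap:daily-2010-04-09-00:00   # a user hold will keep a snapshot 'busy'
zfs list -t all -o name,origin | grep daily-2010-04-09-00:00      # a clone based on the snapshot will too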
On May 4, 2010, at 7:55 AM, Bob Friesenhahn wrote:
> On Mon, 3 May 2010, Richard Elling wrote:
This is not a problem on Solaris 10. It can affect OpenSolaris, though.
>>>
>>> That's precisely the opposite of what I thought. Care to explain?
>>
>> In Solaris 10, you are stuck with Live
On 5/5/10 10:22 AM, Christian Thalinger wrote:
On Wed, 2010-05-05 at 09:45 -0600, Evan Layton wrote:
No that doesn't appear like an EFI label. So it appears that ZFS
is seeing something there that it's interpreting as an EFI label.
Then the command to set the bootfs property is failing due to th
Thanks for your reply! I ran memtest86 and it did not report any errors. I have
not replaced the disk controller yet. The server is up in multi-user mode
with the broken pool in an un-imported state. format now works and properly
lists all my devices without panicking. zpool import panics the b
On 5/5/10 1:44 AM, Christian Thalinger wrote:
On Tue, 2010-05-04 at 16:19 -0600, Evan Layton wrote:
Can you try the following and see if it really thinks it's an EFI label?
# dd if=/dev/dsk/c12t0d0s2 of=x skip=512 bs=1 count=10
# cat x
This may help us determine if this is another instance of b
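For context on what that read is looking for: a GPT/EFI label starts with the
8-byte ASCII signature "EFI PART" at byte offset 512 of the device (the start of
the second 512-byte sector), so if those bytes come back as that string the disk
really does carry an EFI label. A slightly easier-to-read variant of the same check:

dd if=/dev/dsk/c12t0d0s2 bs=1 skip=512 count=8 2>/dev/null | od -c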
Hi all,
I would like to install a "virtual SAN" using OpenSolaris b134 under an ESXi 4
host. Instead of using vmfs datastores I would like to use local raw disks on the
ESXi 4 host: http://www.mattiasholm.com/node/33.
Has anybody tried this? Any problems doing it? Or is it better to use vmfs than
Hi James,
Thanks for the information, and if there's any test/command to be run
on this server, just let me know.
Regards,
Bruno
On 5-5-2010 15:38, James C. McPherson wrote:
>
> On 5/05/10 10:42 PM, Bruno Sousa wrote:
>> Hi all,
>>
>> I have faced yet another kernel panic that seems to be r
On 5/05/10 10:42 PM, Bruno Sousa wrote:
Hi all,
I have faced yet another kernel panic that seems to be related to the mpt
driver.
This time I was trying to add a new disk to a running system (snv_134)
and this new disk was not being detected... Following a tip, I ran the
lsitool to reset the bus and
Przem,
> Anybody has an idea what I can do about it?
zfs set shareiscsi=off vol01/zvol01
zfs set shareiscsi=off vol01/zvol02
Doing this will have no impact on the LUs if configured under COMSTAR.
This will also transparently go away with b136, when ZFS ignores the shareiscsi
property.
- Jim
Hi all,
I have faced yet another kernel panic that seems to be related to the mpt
driver.
This time I was trying to add a new disk to a running system (snv_134)
and this new disk was not being detected... Following a tip, I ran the
lsitool to reset the bus and this led to a system panic.
MPT driver :
Hi. It definitely seems like a hardware-related issue, as panics from common
tools like format aren't to be expected.
Anyhow, you might want to start by getting all your disks to show up in iostat /
cfgadm before trying to import the pool. You should replace the controller if you have
not already done so, a
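Concretely, those checks look something like this (nothing here touches the pool):

iostat -En      # per-device error counters and identify strings
cfgadm -al      # attachment-point status for each controller and disk
zpool import    # with no arguments it only lists importable pools; it does not import anything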
This is what the output of my zpool import command looks like (attached you'll
find the output of zdb -l for each device):
  pool: tank
    id: 10904371515657913150
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

        tank        ONLINE
          raidz1-0  ONLINE