Read the man page for zpool. Specifically, zpool attach.
On 4/10/07, Martin Girard <[EMAIL PROTECTED]> wrote:
Hi,
I have a zpool with only one disk. No mirror.
I have some data in the file system.
Is it possible to make my zpool redundant by adding a new disk in the pool
and making it a mirror
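(A minimal sketch of the attach, device names hypothetical:
  # zpool attach mypool c0t0d0 c0t1d0
This attaches c0t1d0 as a mirror of the existing c0t0d0 and resilvers the
existing data onto the new disk.)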
On 2/26/07, Thomas Garner <[EMAIL PROTECTED]> wrote:
Since I have been unable to find the answer online, I thought I would
ask here. Is there a knob to turn on a zfs filesystem to put the .zfs
snapshot directory into all of the child directories of the
filesystem, like the .snapshot directories
Something similar was proposed here before and IIRC someone even has a
working implementation. I don't know what happened to it.
That would be me. AFAIK, no one really wanted it. The problem that it
solves can equally be solved by taking periodic snapshots from a cron job.
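(For example, a crontab entry that takes an hourly snapshot; dataset name
hypothetical, and note that % must be escaped in a crontab command:
  0 * * * * /usr/sbin/zfs snapshot tank/home@`date +\%Y-\%m-\%d-\%H`
Pruning old snapshots would be a separate job using zfs destroy.)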
--
Regards,
Jeremy
On 2/1/07, Nathan Essex <[EMAIL PROTECTED]> wrote:
I am trying to understand if zfs checksums apply at a file or a block level.
We know that zfs provides end-to-end checksum integrity, and I assumed that
when I write a file to a zfs filesystem, the checksum was calculated at the
file level, as
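(For reference: ZFS checksums every block and stores the checksum in the
parent block pointer, so the granularity is per block, not per file. The
algorithm is a per-dataset property; the pool name below is hypothetical:
  # zfs get checksum tank
  # zfs set checksum=sha256 tank )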
On 1/30/07, Jeremy Teo <[EMAIL PROTECTED]> wrote:
Hello,
On 1/30/07, Robert Milkowski <[EMAIL PROTECTED]> wrote:
> Hello zfs-discuss,
>
> I had a pool with only two disks in a mirror. I detached one disk
> and later erased the first disk. Now I would really like
Hello,
On 1/30/07, Robert Milkowski <[EMAIL PROTECTED]> wrote:
Hello zfs-discuss,
I had a pool with only two disks in a mirror. I detached one disk
and later erased the first disk. Now I would really like to quickly
get data from the second disk available again. Other than detaching
t
On 1/25/07, Tim Cook <[EMAIL PROTECTED]> wrote:
Just want to verify: if I have, say, one 160GB disk, can I format it so that the
first, say, 40GB is my main UFS partition with the base OS install, and then make
the rest of the disk zfs? Or even better yet, for testing purposes make two
60GB partitions
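(Yes, as long as ZFS is handed its own slice(s); a minimal sketch, slice
names hypothetical:
  # zpool create tank c0d0s4
  # zpool create testpool c0d0s5 c0d0s6
Note that ZFS only enables the disk write cache when it is given a whole
disk, not a slice.)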
On 1/25/07, ComCept Net GmbH Andrea Soliva <[EMAIL PROTECTED]> wrote:
Hi Jeremy
Did I understand correctly that there is no workaround or patch available to
resolve this situation?
Do not misunderstand me, but this issue (and it is not a small issue) has been
open since September 2006?
Is this being worked on, or..?
This is 6456939:
sd_send_scsi_SYNCHRONIZE_CACHE_biodone() can issue TUR which calls
biowait() and deadlock/hangs host
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6456939
(Thanks to Tpenta for digging this up)
--
Regards,
Jeremy
System specifications please?
On 1/25/07, ComCept Net GmbH Soliva <[EMAIL PROTECTED]> wrote:
Hello
now I was configuring my system with RaidZ and with spares (explained below).
I would like to test the configuration; that means after a successful config of
ZFS I pulled out a disk of one of the RaidZ'
I'm defining "zpool split" as the ability to divide a pool into 2
separate pools, each with identical FSes. The typical use case would
be to split a N disk mirrored pool into a N-1 pool and a 1 disk pool,
and then transport the 1 disk pool to another machine.
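(A sketch of how such a subcommand might be invoked; this is only the
proposal, not an existing command, and all names are hypothetical:
  # zpool split tank newtank c0t1d0
  ...move the disk, then on the other machine:
  # zpool import newtank )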
While contemplating "zpool split" fun
On the issue of the ability to remove a device from a zpool, how
useful/pressing is this feature? Or is this more along the line of
"nice to have"?
--
Regards,
Jeremy
On 1/11/07, Erik Trimble <[EMAIL PROTECTED]> wrote:
Just a thought: would it be theoretically possible to designate some
device as a system-wide write cache for all FS writes? Not just ZFS,
but for everything... in a manner similar to the way we currently use
extra RAM as a cache for FS reads (
On 12/16/06, Richard Elling <[EMAIL PROTECTED]> wrote:
Jason J. W. Williams wrote:
> Hi Jeremy,
>
> It would be nice if you could tell ZFS to turn off fsync() for ZIL
> writes on a per-zpool basis. That being said, I'm not sure there's a
> consensus on that...and I'm sure not smart enough to be a
The instructions will tell you how to configure the array to ignore
SCSI cache flushes/syncs on Engenio arrays. If anyone has additional
instructions for other arrays, please let me know and I'll be happy to
add them!
Wouldn't it be more appropriate to allow the administrator to disable
ZFS from
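(For reference, Solaris later exposed a global tunable that stops ZFS from
issuing cache-flush requests at all; a sketch assuming /etc/system on
Solaris, followed by a reboot:
  set zfs:zfs_nocacheflush = 1
This is a system-wide setting, not per-pool, and is only safe when the
array's write cache is non-volatile, e.g. battery backed.)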
Yes. But it's going to be a few months.
I'll presume that we will get background disk scrubbing for free once
you guys get bookmarking done. :)
--
Regards,
Jeremy
The whole RAID does not fail -- we are talking about corruption
here. If you lose some inodes, your whole partition is not gone.
My ZFS pool could not be salvaged -- poof, the whole thing was gone (granted,
it was a test pool and not a raidz or mirror yet). But still, for
what happened, I cannot believe t
On 12/5/06, Bill Sommerfeld <[EMAIL PROTECTED]> wrote:
On Mon, 2006-12-04 at 13:56 -0500, Krzys wrote:
> mypool2/[EMAIL PROTECTED] 34.4M - 151G -
> mypool2/[EMAIL PROTECTED] 141K - 189G -
> mypool2/d3 492G 254G 11.5G legacy
>
> I am so confused with all o
On 11/14/06, Bill Sommerfeld <[EMAIL PROTECTED]> wrote:
On Tue, 2006-11-14 at 03:50 -0600, Chris Csanady wrote:
> After examining the source, it clearly wipes the vdev label during a detach.
> I suppose it does this so that the machine can't get confused at a later date.
> It would be nice if the
This is the same problem described in
6343653 : want to quickly "copy" a file from a snapshot.
On 10/30/06, eric kustarz <[EMAIL PROTECTED]> wrote:
Pavan Reddy wrote:
> This is the time it took to move the file:
>
> The machine is an Intel P4 - 512MB RAM.
>
> bash-3.00# time mv ../share/pav.tar .
On 10/25/06, Jonathan Edwards <[EMAIL PROTECTED]> wrote:
On Oct 24, 2006, at 12:26, Dale Ghent wrote:
> On Oct 24, 2006, at 12:33 PM, Frank Cusack wrote:
>
>> On October 24, 2006 9:19:07 AM -0700 "Anton B. Rang"
>> <[EMAIL PROTECTED]> wrote:
Our thinking is that if you want more redundancy
Hello,
Shrinking the vdevs requires moving data. Once you move data, you've
got to either invalidate the snapshots or update them. I think that
will be one of the more difficult parts.
Updating snapshots would be non-trivial, but doable. Perhaps some sort
of reverse mapping or brute force s
Hello all,
Isn't a large block size a simple case of prefetching? In other words,
if we possessed an intelligent prefetch implementation, would there
still be a need for large block sizes? (Thinking aloud)
:)
--
Regards,
Jeremy
Kudos Eric! :)
On 10/17/06, eric kustarz <[EMAIL PROTECTED]> wrote:
Hi everybody,
Yesterday I putback into nevada:
PSARC 2006/288 zpool history
6343741 want to store a command history on disk
This introduces a new subcommand to zpool(1m), namely 'zpool history'.
Yes, team ZFS is tracking what
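(A quick illustration of the new subcommand; pool name hypothetical and
output abridged:
  # zpool history tank
  History for 'tank':
  2006-10-17.10:15:02 zpool create tank mirror c0t0d0 c0t1d0
  2006-10-17.10:16:30 zfs create tank/home )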
Heya Anton,
On 10/17/06, Anton B. Rang <[EMAIL PROTECTED]> wrote:
No, the reason to try to match recordsize to the write size is so that a small
write does not turn into a large read + a large write. In configurations where
the disk is kept busy, multiplying 8K of data transfer up to 256K hur
Heya Roch,
On 10/17/06, Roch <[EMAIL PROTECTED]> wrote:
-snip-
Oracle will typically create its files with 128K writes,
not recordsize ones.
Darn, that makes things difficult doesn't it? :(
Come to think of it, maybe we're approaching things from the wrong
perspective. Databases such as Oracl
Would it be worthwhile to implement heuristics to auto-tune
'recordsize', or would that not be worth the effort?
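(Until such heuristics exist, recordsize is tuned by hand per dataset; a
minimal sketch for a database doing 8K I/O, dataset name hypothetical:
  # zfs set recordsize=8k tank/db
  # zfs get recordsize tank/db
The setting only affects files created after the change.)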
--
Regards,
Jeremy
Hello,
I'm having problems tracking down where the code for performing
readahed in vdev_cache is. Could someone give me a clue to where it
is?
Thanks!
--
Regards,
Jeremy
A couple of use cases I was considering off hand:
1. Oops, I truncated my file.
2. Oops, I saved over my file.
3. Oops, an app corrupted my file.
4. Oops, I rm -rf'd the wrong directory.
All of which can be solved by periodic snapshots, but versioning gives
us immediacy.
So is immediacy worth it to you
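(For reference, recovering the prior version of a file from a snapshot is a
one-liner; pool, snapshot and file names hypothetical:
  # cp /tank/home/.zfs/snapshot/hourly-12/report.txt /tank/home/report.txt )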
Hello,
On 10/6/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
On Fri, Oct 06, 2006 at 01:14:23AM -0600, Chad Leigh -- Shire.Net LLC wrote:
>
> But I would dearly like to have a versioning capability.
Me too.
Example (real life scenario): there is a samba server for about 200
concurrent connec
What would versioning of files in ZFS buy us over a "zfs snapshots +
cron" solution?
I can think of one:
1. The usefulness of the ability to get the prior version of anything
at all (as richlowe puts it)
Any others?
--
Regards,
Jeremy
What would a versioning FS buy us that cron + zfs snapshots doesn't?
--
Regards,
Jeremy
I keep thinking that it would be useful to be able to define a zfs file system
where all calls to mkdir resulted not just in a directory but in a file system.
Clearly such a property would not be inherited but in a number of situations
here it would be a really useful feature.
Any example us
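(Today the equivalent has to be done explicitly for each directory; a
minimal sketch, dataset names hypothetical:
  # zfs create tank/home/alice        # instead of mkdir /tank/home/alice
  # zfs set quota=10g tank/home/alice )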
Hello,
how are writes distributed as the free space within a pool reaches a
very small percentage?
I understand that when free space is available, ZFS will batch writes
and then issue them in sequential order, maximising write bandwidth.
When free space reaches a minimum, what happens?
Thanks!
Hello,
this is with reference to bug #6343653, "want to quickly "copy" a file
from a snapshot".
After a short dig through source code, the issue is that 'mv' will do
a copy because the rename syscall fails for files on different
filesystems (snapshots are mounted as separate filesystems from the
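(Illustrative only; pool and file names hypothetical, truss output abridged.
rename(2) returns EXDEV across filesystem boundaries, so mv falls back to a
full data copy:
  # truss -f -t rename mv /tank/.zfs/snapshot/snap1/big.tar /tank/
  rename("/tank/.zfs/snapshot/snap1/big.tar", "/tank/big.tar") Err#18 EXDEV )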
Hello,
What I wanted to point out is Al's example: he wrote about damaged data. The
data were damaged by the firmware, _not_ the disk surface! In such a case ZFS
doesn't help. ZFS can detect (and repair) errors on the disk surface, bad
cables, etc., but it cannot detect and repair errors in its own (ZFS) code.
I
Hello Constantin,
On 5/29/06, Constantin Gonzalez <[EMAIL PROTECTED]> wrote:
Hi,
the current discussion on how to implement "undo" seems to revolve around
concepts and tweaks for replacing any "rm"-like action with "mv" and then
fixing the problems associated with namespaces, ACLs, etc.
Why not
Hello,
with reference to bug id #4852821: user undo
I have implemented a basic prototype that has the current functionality:
1) deleted files/directories are moved to /your_pool/your_fs/.zfs/deleted
Unfortunately, it is non-trivial to completely reproduce the namespace
of deleted files: for now
Hello,
while testing some code changes, I managed to fail an assertion while
doing a zfs create.
My zpool is now invulnerable to destruction. :(
bash-3.00# zpool destroy -f test_undo
internal error: unexpected error 0 at line 298 of ../common/libzfs_dataset.c
bash-3.00# zpool status
pool: tes
Hello Eric,
On 5/3/06, Eric Schrock <[EMAIL PROTECTED]> wrote:
Folks -
Several people have vocalized interest in porting ZFS to operating
systems other than solaris. While our 'mentoring' bandwidth may be
small, I am hoping to create a common forum where people could share
their experiences