Bob Friesenhahn writes:
> On Fri, 28 May 2010, Gregory J. Benscoter wrote:
>>
>> I’m primarily concerned with the possibility of a bit flip. If
>> this occurs, will the stream be lost? Or will only the file in
>> which the bit flip occurred be degraded? Lastly, how does the
>> relia
lly looked into documentation.
-- Juergen Nickelsen
Edward Ned Harvey writes:
> There are legitimate specific reasons to use separate filesystems
> in some circumstances. But if you can't name one reason why it's
> better ... then it's not better for you.
Having separate filesystems per user lets you create user-specific
quotas and reservations,
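For instance, a minimal sketch (the pool name "tank" and the user
"alice" are made up):

  # zfs create tank/home/alice
  # zfs set quota=10G tank/home/alice
  # zfs set reservation=2G tank/home/alice
  # zfs get quota,reservation tank/home/alice

With that, alice can use at most 10 GB, and 2 GB are guaranteed to
remain available to her even when the rest of the pool fills up.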
joerg.schill...@fokus.fraunhofer.de (Joerg Schilling) writes:
> The netapps patents contain claims on ideas that I invented for my Diploma
> thesis work between 1989 and 1991, so the netapps patents only describe prior
> art. The new ideas introduced with "wofs" include the ideas on how to use CO
Is there any limit on the number of snapshots in a file system?
The documentation -- manual page, admin guide, troubleshooting guide
-- does not mention any. That seems to confirm my assumption that
there is probably no fixed limit, but there may still be a practical
one, just like there is no lim
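For what it's worth, counting the snapshots that already exist is
easy enough (a sketch, assuming a pool named "tank"):

  # zfs list -H -t snapshot -o name -r tank | wc -l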
Lutz Schumann writes:
> When importing a pool with many snapshots (which happens during
> reboot also) the import may take a long time (example: 1
> snapshots ~ 1-2 days).
>
> I've not tested the new release of Solaris (snv_125++), which
> fixes this issue. So a test with osol 1
Tony MacDoodle writes:
> Mounting ZFS filesystems: (1/6)cannot mount '/data/apache': directory is not
> empty
> (6/6)
> svc:/system/filesystem/local:default: WARNING: /usr/sbin/zfs mount -a
> failed: exit status 1
>
> And yes, there is data in the /data/apache file system...
I think it is co
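A sketch of how one might dig into it (the dataset name data/apache
is an assumption here; it could just as well be something like
pool/apache):

  # ls -lA /data/apache                  # see what sits in the underlying directory
  # mkdir /var/tmp/apache-stash
  # mv /data/apache/* /var/tmp/apache-stash/
  # zfs mount data/apache                # or "zfs mount -O data/apache" to mount over it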
Stephen Quintero writes:
> I am running OpenSolaris 2008.05 as a PV guest under Xen. If you
> import the bootable root pool of a VM into another Solaris VM, the
> root pool is no longer bootable.
I had a similar problem: After installing and booting Opensolaris
2008.05, I managed to lock myself out through some passwd/shadow
inconsistency (totally my own fault). Not a problem, I thought -- I
booted from the install disk
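In case someone finds this thread later, a rough sketch of that
recovery path (pool and boot-environment names are guesses;
rpool/ROOT/opensolaris is merely the 2008.05 default):

  # zpool import -f -R /a rpool
  # zfs mount rpool/ROOT/opensolaris   # the root dataset has canmount=noauto
  # vi /a/etc/shadow                   # repair the passwd/shadow inconsistency
  # zpool export rpool
  # reboot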
Hello all,
in the setup I try to build I want to have snapshots of a file
system replicated from host "replsource" to host "repltarget" and
from there NFS-mounted on host "nfsclient" to access snapshots
directly:
replsource# zfs create pool1/nfsw
replsource# mkdir /pool1/nfsw/lala
rep
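The rest of the setup would presumably look roughly like this (a
sketch only; snapshot names, the sharing step, and the mount are
assumptions, not the exact commands from the original setup):

  replsource# zfs snapshot pool1/nfsw@snap1
  replsource# zfs send pool1/nfsw@snap1 | ssh repltarget zfs recv pool1/nfsw
  repltarget# zfs set sharenfs=on pool1/nfsw
  nfsclient#  mount repltarget:/pool1/nfsw /mnt
  nfsclient#  ls /mnt/.zfs/snapshot/snap1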
David Finberg writes:
>> JN> I had a similar problem: After installing and booting Opensolaris
>> JN> 2008.05, I managed to lock myself out through some passwd/shadow
>> JN> inconsistency (totally my own fault). Not a problem, I thought -- I
>> JN> booted from the install disk
(Haven't I already written an answer to this? Anyway, I cannot find it.)
Nils Goroll writes:
>> In a snoop I see that, when the access(2) fails, the nfsclient gets
>> a "Stale NFS file handle" response, which gets translated to an
>> ENOENT.
>
> What happens if you use the noac NFS mount option on the client?
(I found the saved draft of the answer I thought I had sent; I am
sending it now just for completeness's sake.)
Nils Goroll writes:
> What happens if you use the noac NFS mount option on the client?
That does not seem to change the behaviour. (I have not tried
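(For anyone trying to reproduce this: a noac mount on the client
would look something like the following, with hypothetical paths.)

  nfsclient# mount -o noac repltarget:/pool1/nfsw /mnt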
"Timh Bergström" <[EMAIL PROTECTED]> writes:
> Unfortunately, I can only agree with the doubts about running ZFS
> in production environments. I've lost ditto blocks, I've gotten
> corrupted pools, and a bunch of other failures, even in
> mirror/raidz/raidz2 setups with or without hardware mirrors/raid5
Ian Collins writes:
>> I suspect that a 'zfs copy' or some such would be a nice utility
>> when wanting to shove a parent and all of its snapshots to
>> another system.
>>
> If that's what you want, do an incremental send (-I).
To be a bit more detailed, first create the file system on the
target machine by sending the first snapshot that you want to have
replicated in full. After that, send each of
Juergen Nickelsen writes:
>> If that's what you want, do an incremental send (-I).
>
> To be a bit more detailed, first create the file system on the
> target machine by sending the first snapshot that you want to have
> replicated in full. After that, send each of
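Spelled out as commands, that amounts to something like this (a
sketch; pool, dataset, and snapshot names are invented):

  source# zfs send tank/data@snap1 | ssh target zfs recv -d backup
  source# zfs send -I @snap1 tank/data@snap5 | ssh target zfs recv -d backup

The first line creates the file system on the target from the full
initial snapshot; the second sends everything between that snapshot
and the newest one, including all intermediate snapshots, because of
the -I option.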
Harry Putnam writes:
> www.jtan.com/~reader/SDDToolReport-chub-OpenSolaris.html
I see the following there:
    Solaris Bundled Driver: vgatext / radeon
    Video: ATI Technologies Inc R360 NJ [Radeon 9800 XT]
I *think* this is the same driver used with my work laptop (which I
don't have at hand to check, unfortunately), also with ATI graph
Juergen Nickelsen writes:
>     Solaris Bundled Driver: vgatext / radeon
>     Video: ATI Technologies Inc R360 NJ [Radeon 9800 XT]
>
> I *think* this is the same driver used with my work laptop (which I
> don't have at hand to check, unfortunately), also with ATI graph
Ketan writes:
> I had a pool which was exported, and due to some issues on my SAN I
> was never able to import it again. Can anyone tell me how I can
> destroy the exported pool to free up the LUN?
I did that once; I *think* that was with the "-f" option to "zpool
destroy".
Regards, Juergen.
DL Consulting writes:
> It takes daily snapshots and sends them to another machine as a
> backup. The sending and receiving is scripted and run from a
> cronjob. The problem is that some of the snapshots disappear from
> monster after they've been sent to the backup machine.
Do not use the snaps