Alas, you are hosed. There is at the moment no way to shrink a pool, which is
what you would now need to be able to do.
Back up and restore, I am afraid.
Just to close this: it turns out you can't get the crtime over NFS, so without
access to the NFS server there is only limited checking that can be done.
I filed
CR 6956379 Unable to open extended attributes or get the crtime of files in
snapshots over NFS.
--chris
The reason for wanting to know is to try to find versions of a file.
If a file is renamed, then the only way to know that the renamed file is the
same as a file in a snapshot would be if the inode numbers matched. However,
for that to be reliable it would require that inode numbers are not reused.
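For example (paths and names purely illustrative, assuming the file system is
mounted at /tank/fs and has a snapshot called snap1), the check would be a
simple comparison of inode numbers:
ls -i /tank/fs/newname
ls -i /tank/fs/.zfs/snapshot/snap1/oldname
A matching inode number only proves the files are the same object if the number
cannot have been reused while the snapshot exists, which is the question below.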
If I create a file in a file system, then snapshot the file system, and then
delete the file, is it guaranteed that while the snapshot exists no new file
will be created with the same inode number as the deleted file?
--chris
> One of my pools (backup pool) has a disk which I
> suspect may be going south. I have a replacement disk
> of the same size. The original pool was using one of
> the partitions towards the end of the disk. I want to
> move the partition to the beginning of the disk on
> the new disk.
>
> Does ZF
>
> I'll say it again: neither 'zfs send' nor (s)tar is an
> enterprise (or even home) backup system on its own; one or both can
> be components of the full solution.
>
Up to a point. zfs send | zfs receive does make a very good backup scheme for
the home user with a moderate amount of s
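A minimal sketch of the kind of scheme I mean, with the pool, dataset and
snapshot names made up:
zfs snapshot -r tank/home@backup-today
zfs send -R tank/home@backup-today | zfs receive -Fd backuppool
Once both pools share a common snapshot you can switch to incremental sends
with -i.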
Your pool is on a device that requires a 16-byte CDB to address the entire LUN.
That is, the LUN is more than 2TB in size. However, the host bus adapter driver
that is being used does not support 16-byte CDBs.
Quite how you got into this situation, i.e. how you could create the volume, I
don't know,
TMPFS was not in the first release of 4.0. It was introduced to boost the
performance of diskless clients, which no longer had the old network disk for
their root file systems and hence /tmp was now over NFS.
Whether there was a patch that brought it back into 4.0 I don't recall, but I
don't think so.
Not that I have seen. I use them, they work.
--chris
Alas you need the fix for:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=4852783
Until that arrives, mirror the disk or rebuild the pool.
--chris
Looks like this bug:
http://bugs.opensolaris.org/view_bug.do?bug_id=6655927
Workaround: Don't run zpool status as root.
--chris
My current solution is a -d option that takes a colon-separated pair of
arguments, min:max, giving the minimum and maximum depth, so
zfs list -d 1:1 tank
behaves as zfs list -c is described to and only lists the direct children of
tank.
zfs list -d 1: tank
will list all the descendants of tank.
zfs list -
To improve the performance of scripts that manipulate ZFS snapshots, and the zfs
snapshot service in particular, there needs to be a way to list all the
snapshots for a given object, and only the snapshots for that object.
There are two RFEs filed that cover this:
http://bugs.opensolaris.org/view_
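With something like the -d option described above, listing only the snapshots
of a single dataset might look like this (dataset name made up, and assuming -t
and -d can be combined):
zfs list -t snapshot -d 1:1 tank/home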
Should SMF have a THAW method that fires when a system is woken from being
hibernated?
I think it should. The zfs snapshot service could use this to snapshot on thaw,
and while it would be possible to leave a daemon around to catch the signal,
that would potentially leave lots of daemons around f
If you have a separate ZIL device, is there any way to scrub the data in it?
I appreciate that the data in the ZIL is only there for a short time, but since
it is never read, if you had a misbehaving ZIL device that was just throwing the
data away you could potentially run like this for many months
Richard Elling wrote:
Chris Gerhard wrote:
My home server running snv_94 is tipping with the same assertion when
someone lists a particular file:
Failed assertions indicate software bugs. Please file one.
http://en.wikipedia.org/wiki/Assertion_(computing)
A colleague pointed out that it
My home server running snv_94 is tipping with the same assertion when someone
lists a particular file:
::status
Loading modules: [ unix genunix specfs dtrace cpu.generic
cpu_ms.AuthenticAMD.15 uppc pcplusmp scsi_vhci ufs md ip hook neti sctp arp
usba qlc fctl nca lofs zfs audiosup sd cpc random
Tim Foster wrote:
Chris Gerhard wrote:
Not quite. I want to make the default for all pools, imported or not,
to be not to do this, and then turn it on where it makes sense and won't do
harm.
Aah I see. That's the complete opposite of what the desktop folk wanted
then - you want to opt-i
A slight nit.
Using cat(1) to read the file to /dev/null will not actually cause the data to
be read, thanks to the magic that is mmap(). If you use dd(1) to read the file
then yes, you will either get the data, and thus know its blocks match their
checksums, or dd will give you an error if yo
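For example (path and block size made up):
dd if=/tank/fs/somefile of=/dev/null bs=128k
forces every block of the file to actually be read and so checked against its
checksum.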
would still be respected so to turn it on you just set the
property to true in the root of the pool.
--
Chris Gerhard. __o __o __o
Systems TSC Chief Technologist_`\<,`\<,`\<,_
Tim Foster wrote:
Hi Chris,
Chris Gerhard wrote:
> How can you disable the auto-snapshot service[s] by default without
> disabling the timeslider as well, which appears to be the case if you
> disable the smf services?
Not sure I follow - time slider depends on the auto-snapshot serv
ating new pools.
With auto-snapshot still on, are all the snapshots taken as a single
transaction, as they would be with a recursive snapshot?
Alas I've had to downgrade as Nautilus is not usable:
http://blogs.sun.com/chrisg/entry/brief_visit_to_build_100
--
Chr
How can you disable the auto-snapshot service[s] by default without disabling
the timeslider as well, which appears to be the case if you disable the smf
services?
Setting the property in the root pool is OK except for removable media, which I
don't want to have snapshots taken in the time betwee
the case. Fortunately none of the users would know
how to run the sync command.
--
Chris Gerhard. __o __o __o
Systems TSC Chief Technologist_`\<,`\<,`\<,_
Sun Microsystem
Is there any way to control the resilver speed? Having attached a third disk
to a mirror (so I can replace the other disks with larger ones) the resilver
goes at a fraction of the speed of the same operation using Disk Suite. However,
it still renders the system pretty much unusable for anything
Victor Latushkin wrote:
On 28.08.08 15:06, Chris Gerhard wrote:
I have a USB disk with a pool on it called removable. On one laptop
zpool import removable works just fine but on another with the same
disk attached it tells me there is more than one matching pool:
: sigma TS 6 $; pfexec zpool
I have a USB disk with a pool on it called removable. On one laptop zpool
import removable works just fine but on another with the same disk attached it
tells me there is more than one matching pool:
: sigma TS 6 $; pfexec zpool import removable
cannot import 'removable': more than one matching
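For anyone hitting the same message: as far as I know the way out is to run
zpool import with no arguments to list the candidate pools along with their
numeric ids, and then import by id (the id below is made up):
pfexec zpool import
pfexec zpool import 1234567890123456 removable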
Also http://blogs.sun.com/chrisg/entry/a_faster_zfs_snapshot_massacre which I
run every night. Lots of snapshots are not a bad thing; it is keeping them for
a long time that takes space. I'm still snapping every 10 minutes and it is
great.
The thing I discovered was that I really wanted to be
> > Starfox wrote:
>
> None of the scripts that I looked at seemed to
> offer any sort of error recovery. I think I'll be
> able to use this as a starting point (and maybe the
> man pages can be updated to include that you can use
> any common snapshot to send -i - that fact is not
> obvious t
Oddly enough, I posted a script that does what you want, albeit without sending
it to a remote system, to my blog on Friday
(http://blogs.sun.com/chrisg/entry/rolling_incremental_backups), which I use to
back up my system to an external USB drive.
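The idea, roughly, is an incremental send between a snapshot both pools already
have and the newest one; with invented snapshot and pool names:
zfs send -i tank/fs@yesterday tank/fs@today | zfs receive -F removable/fs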
--chris
It is not possible to use send and receive if the pool is not imported. It is,
however, possible to use send and receive when the file system is not mounted.
--chris
You can do this using zfs send and receive. See
http://blogs.sun.com/chrisg/entry/recovering_my_laptop_using_zfs for an
example. If the file system were remote then you would need to squeeze some ssh
commands into the script, but the concept is the same.
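Something along these lines, with the host, dataset and snapshot names made up
(the remote user needs the privileges to receive):
zfs send tank/home@snap | ssh backuphost zfs receive -F backuppool/home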
>
> On the other hand personally I just don't see the
> need for this since
> the @ char isn't special to the shell so I don't see
> where the original
> problem came from.
It is the combination of the fear of doing something bad and the
consequence of doing that something bad that makes pe
You are not alone.
My preference would be for an optional -t option to zfs destroy:
zfs destroy -t snapshot tank/fs@snap
or
zfs destroy -t snapshot -r tank/fs
would delete all the snapshots below tank/fs
I'm not sure what you want that the file system does not already provide.
You can use cp to copy files out, or find(1) to find them based on time or any
other attribute, and then cpio to copy them out.
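For example, to pull yesterday's versions of files out of a snapshot (snapshot
name and destination made up):
cd /tank/home/.zfs/snapshot/yesterday
find . -mtime -1 | cpio -pdm /var/tmp/restore
cpio in pass mode recreates the directory hierarchy under the destination.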
I've used ZFS to back up my laptops to an external USB disk that formed one
half of the mirror for a long while.
See: http://blogs.sun.com/chrisg/entry/external_usb_disk_drive
I recently stopped doing that in favour of doing the same to iSCSI LUNs hosted
on ZFS zvols on a server:
http://blogs.
While I would really like to see a zpool dump and zpool restore so that I could
throw a whole pool to tape, it is not hard to script the recursive zfs send /
zfs receive. I had to do so when I had to recover my laptop.
http://blogs.sun.com/chrisg/entry/recovering_my_laptop_using_zfs
--chris
As has been pointed out you want to mirror (or get more disks).
I would suggest you think carefully about the layout of the disks so that you
can take advantage of ZFS boot when it arrives. See
http://blogs.sun.com/chrisg/entry/new_server_arrived for a suggestion.
--chris
You have to mount the file system using NFS v3 or v2 for this trick to work.
See http://blogs.sun.com/chrisg/entry/fixing_a_covered_mount_point
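On Solaris that just means forcing the NFS version on the mount, something like
(server and paths made up):
mount -F nfs -o vers=3 server:/export/home /mnt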
--chris
There are two things that could be coming into play here:
1) if you have snapshots then you have more live data to scrub so it could just
take longer.
2) When you take snapshots the scrub starts again from scratch. This has been
discussed before:
http://www.opensolaris.org/jive/thread.jspa?mes
pshots we had so that
the system became usable again. An engineer is looking into reproducing
this on a lab system. I hope they will turn up on this thread with a
progress report.
--
Chris Gerhard. __o __o __o
Principal Engineer _`
One of our file servers internal to Sun reproduces this running nv53;
here is the dtrace output:
unix`mutex_vector_enter+0x120
zfs`metaslab_group_alloc+0x1a0
zfs`metaslab_alloc_dva+0x10c
zfs`metaslab_alloc+0x3c
zfs`zio_dv
What OS is this?
What is the hardware?
Can you try running format with efi_debug set? You have to run format under a
debugger and patch the variable. Here is how, using mdb (set a breakpoint in
main so that the dynamic linker has done its stuff, then update the value of
efi_debug to be 1, the
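From memory the session looks something like this, though the exact details may
differ:
mdb /usr/sbin/format
> main:b
> :r
> efi_debug/W 1
> :c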
>
> An alternate way will be to use NFSv4. When an NFSv4
> client crosses
> a mountpoint on the server, it can detect this and
> mount the filesystem.
> It can feel like a "lite" version of the automounter
> in practice, as
> you just have to mount the root and discover the
> filesystems as neede
One question that keeps coming up in my discussions about ZFS is the lack of
user quotas.
Typically this comes from people who have many tens of thousands (30,000 -
100,000) of users where they feel that having a file system per user will not
be manageable. I would agree that today that is the
Thank you Eric, Doug.
Is there any more information about the sharemgr project out on opensolaris.org?
Searching for it just finds this thread.
To have user quotas with ZFS you have to have a file system per user. However,
this leads to a very large number of file systems on a large server. I
understand there is work already in hand to make sharing a large number of file
systems faster; however, even mounting a large number of file system
Mark Shellenbaum wrote:
Chris Gerhard wrote:
I'm trying to create a directory hierarchy where whenever a file is
created it is created mode 664, with directories 775.
Now I can do this with chmod to create the ACL on UFS and it behaves
as expected; however on ZFS it does not.
So
I'm trying to create a directory hierarchy where whenever a file is created it
is created mode 664, with directories 775.
Now I can do this with chmod to create the ACL on UFS and it behaves as
expected; however on ZFS it does not.
: pearson TS 68 $; mkdir ~/tmp/acl
: pearson TS 69 $; df -h ~/
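For reference, the sort of inheritable ACL I am talking about looks something
like this on ZFS (the access and inheritance flags here are from memory, and the
result also depends on the aclinherit/aclmode properties):
chmod A+group@:read_data/write_data:file_inherit/dir_inherit:allow ~/tmp/acl
chmod A+everyone@:read_data:file_inherit/dir_inherit:allow ~/tmp/acl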
> Why map it to mkdir rather than using zfs create ? Because mkdir means
> it will work over NFS or CIFS.
Also your users and applications don't need to know. The administrator sets the
policy and then it just happens, plus the resulting "directories" would end up
with the correct ownership mode
than directories plus snapshot based on activity in the case.
--
Chris Gerhard. __o __o __o
Sun Microsystems Limited_`\<,`\<,`\<,_
Phone: +44 (0) 1252 426033 (
I keep thinking that it would be useful to be able to define a zfs file system
where all calls to mkdir resulted not just in a directory but in a file system.
Clearly such a property would not be inherited but in a number of situations
here it would be a really useful feature.
I can see there
After giving a demo of ZFS I was asked if there is any way to protect the
.zfs/snapshot directory and/or the snapshots in it.
The reason was to cover the case where a user creates a file that is mode 644
and later realises this is not correct and changes it to mode 600. If a
snapshot happens wh
Having just upgraded to nv42, zpool status tells me I need to upgrade the
on-disk version.
zpool upgrade -v points me at
http://www.opensolaris.org/os/community/zfs/version/3 :
: sigma TS 6 $; zpool upgrade -v
This system is currently running ZFS version 3.
The following versions are supported:
VER
I've been playing with offlining an external USB disk as a way of having a
backup of a laptop drive. However, when I online the device and scrub it I
always get cksum errors.
So I just built a v880 in the lab with a mirrored zpool. I offlined 2 disks
that form the mirror and then created a new f
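For the curious, the sequence I am describing is essentially this (pool and
device names made up):
zpool offline tank c2t0d0
zpool online tank c2t0d0
zpool scrub tank
zpool status -v tank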
When unpacking the solaris source onto a local disk on a system running build
39 I got the following panic:
panic[cpu0]/thread=d2c8ade0:
really out of space
d2c8a7b4 zfs:zio_write_allocate_gang_members+3e6 (e4385ac0)
d2c8a7d0 zfs:zio_dva_allocate+81 (e4385ac0)
d2c8a7e8 zfs:zio_next_stage+66 (e