On Tue, Feb 26, 2013 at 7:42 PM, Bob Friesenhahn wrote:
> On Wed, 27 Feb 2013, Ian Collins wrote:
>>>
>>> I am finding that rsync with the right options (to directly
>>> block-overwrite) plus zfs snapshots is providing me with pretty
>>> amazing "deduplication" for backups without even enabling
>>
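The "right options" are cut off in the quote above; a minimal sketch of the in-place-overwrite approach, with hypothetical source path and dataset names, could look like:

# rewrite changed blocks in place so unchanged blocks stay shared with older snapshots
rsync -a --inplace --no-whole-file /data/source/ /tank/backup/
# snapshot the backup dataset; only the blocks rsync actually rewrote consume new space
zfs snapshot tank/backup@$(date +%Y-%m-%d)

The --inplace flag avoids rsync's default create-temp-and-rename behavior, which would otherwise allocate every file anew and defeat the block sharing between snapshots.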
On Tue, Jan 22, 2013 at 5:29 AM, Darren J Moffat wrote:
> Preallocated ZVOLs - for swap/dump.
>
Darren, good to hear about the cool stuff in S11.
Just to clarify, is this preallocated ZVOL different than the preallocated
dump which has been there for quite some time (and is in Illumos)? Can you
Arne, I took a look at far.c in
http://cr.illumos.org/~webrev/sensille/far-send/. Here are some
high-level comments:
Why did you choose to do this all in the kernel? As opposed to the
way "zfs diff" works, where the kernel generates the list of changed
items and then userland sorts out what exac
In general, you can force the unmount with the "-f" flag.
As to your specific question of changing the mountpoint to somewhere that
it can't currently be mounted, it should set the mountpoint property but
not remount it. E.g.:
# zfs set mountpoint=/ rpool/test
cannot mount '/': directory is not
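A sketch of the full sequence, reusing the dataset from the example above and a hypothetical new mountpoint:

zfs unmount -f rpool/test               # force the unmount if processes still hold the filesystem
zfs set mountpoint=/newpath rpool/test  # the property changes even if the remount fails
zfs mount rpool/test                    # remount once the target directory is available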
On Thu, Oct 25, 2012 at 2:25 AM, Jim Klimov wrote:
> Hello all,
>
> I was describing how raidzN works recently, and got myself wondering:
> does zpool scrub verify all the parity sectors and the mirror halves?
>
Yes. The ZIO_FLAG_SCRUB instructs the raidz or mirror vdev to read and
verify all
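To start that verification and watch it run, the usual commands are (pool name hypothetical):

zpool scrub tank        # reads all mirror sides and all raidz data and parity columns
zpool status -v tank    # shows scrub progress and any checksum errors that were repaired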
On Sat, Oct 20, 2012 at 1:24 PM, Tim Cook wrote:
>
>
> On Sat, Oct 20, 2012 at 2:54 AM, Arne Jansen wrote:
>
>> On 10/20/2012 01:10 AM, Tim Cook wrote:
>> >
>> >
>> > On Fri, Oct 19, 2012 at 3:46 PM, Arne Jansen wrote:
On Sat, Oct 20, 2012 at 1:23 AM, Arne Jansen wrote:
> On 10/20/2012 01:21 AM, Matthew Ahrens wrote:
> > On Fri, Oct 19, 2012 at 1:46 PM, Arne Jansen wrote:
> >
> > On 10/19/2012 09:58 PM, Matthew Ahrens wrote:
> > >
On Fri, Oct 19, 2012 at 1:46 PM, Arne Jansen wrote:
> On 10/19/2012 09:58 PM, Matthew Ahrens wrote:
> > Please don't bother changing libzfs (and proliferating the copypasta
> > there) -- do it like lzc_send().
> >
>
> ok. It would be easier though if zfs_send
On Wed, Oct 17, 2012 at 5:29 AM, Arne Jansen wrote:
> We have finished a beta version of the feature. A webrev for it
> can be found here:
>
> http://cr.illumos.org/~webrev/sensille/fits-send/
>
> It adds a command 'zfs fits-send'. The resulting streams can
> currently only be received on btrfs,
On Wed, Sep 26, 2012 at 10:28 AM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
> When I create a 50G zvol, it gets "volsize" 50G, and it gets "used" and "
> refreservation" 51.6G
>
> I have some filesystems alr
On Fri, Sep 21, 2012 at 4:00 AM, Bogdan Ćulibrk wrote:
> Greetings,
>
> I'm trying to achieve selective output of "zfs list" command for specific
> user to show only delegated sets. Anyone knows how to achieve this?
> I've checked "zfs allow" already but it only helps in restricting the user
> to
On Thu, Aug 30, 2012 at 1:11 PM, Timothy Coalson wrote:
> Is there a way to get the total amount of data referenced by a snapshot
> that isn't referenced by a specified snapshot/filesystem? I think this is
> what is really desired in order to locate snapshots with offending space
> usage.
Try
On Sat, Sep 15, 2012 at 2:07 PM, Dave Pooser wrote:
> The problem: so far the send/recv appears to have copied 6.25TB of 5.34TB.
> That... doesn't look right. (Comparing zfs list -t snapshot and looking at
> the 5.34 ref for the snapshot vs zfs list on the new system and looking at
> space used.)
On Fri, Sep 14, 2012 at 11:07 PM, Bill Sommerfeld wrote:
> On 09/14/12 22:39, Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
> wrote:
>
>>> From: zfs-discuss-bounces@opensolaris.org [mailto:zfs-discuss-
>>> boun...@opensolaris.org] On Behalf Of Dave Pooser
>>>
>>> Unfortu
On Wed, May 2, 2012 at 3:28 PM, Fred Liu wrote:
>>
>>The size accounted for by the userused@ and groupused@ properties is the
>>"referenced" space, which is used as the basis for many other space
>>accounting values in ZFS (e.g. "du" / "ls -s" / stat(2), and the zfs
>>accounting
>>properties "ref
2012/4/25 Richard Elling :
> On Apr 25, 2012, at 8:14 AM, Eric Schrock wrote:
>
> ZFS will always track per-user usage information even in the absence of
> quotas. See the zfs 'userused@' properties and 'zfs userspace' command.
>
>
> tip: zfs get -H -o value -p userused@username filesystem
>
>
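A couple of examples of pulling those per-user numbers, with hypothetical user and dataset names:

zfs get -H -o value -p userused@alice tank/home   # raw byte count for one user
zfs userspace -o type,name,used,quota tank/home   # table of all users on the dataset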
On Mon, Jan 16, 2012 at 11:34 AM, Jim Klimov wrote:
> 2012-01-16 23:14, Matthew Ahrens wrote:
>
>> On Thu, Jan 12, 2012 at 5:00 PM, Jim Klimov wrote:
>>
>>While reading about zfs on-disk formats, I wondered once again
>
On Fri, Jan 13, 2012 at 4:49 PM, Matt Banks wrote:
> I'm sorry to be asking such a basic question that would seem to be easily
> found on Google, but after 30 minutes of "googling" and looking through
> this lists' archives, I haven't found a definitive answer.
>
> Is the L2ARC caching scheme bas
On Thu, Jan 5, 2012 at 6:53 AM, sol wrote:
>
> I would have liked to think that there was some good-will between the ex-
> and current-members of the zfs team, in the sense that the people who
> created zfs but then left Oracle still care about it enough to want the
> Oracle version to be as bug-
On Thu, Jan 5, 2012 at 7:17 PM, Ivan Rodriguez wrote:
> Dear list,
>
> I'm about to upgrade a zpool from 10 to 29 version, I suppose that
> this upgrade will improve several performance issues that are present
> on 10, however
> inside that pool we have several zfs filesystems all of them are
>
On Thu, Jan 12, 2012 at 5:00 PM, Jim Klimov wrote:
> While reading about zfs on-disk formats, I wondered once again
> why is it not possible to create a snapshot on existing data,
> not of the current TXG but of some older point-in-time?
>
It is not possible because the older data may no longer
On Mon, Dec 12, 2011 at 11:04 PM, Erik Trimble wrote:
> On 12/12/2011 12:23 PM, Richard Elling wrote:
>>
>> On Dec 11, 2011, at 2:59 PM, Mertol Ozyoney wrote:
>>
>>> Not exactly. What is dedup'ed is the stream only, which is in fact not
>>> very
>>> efficient. Real dedup aware replication is taking
On Fri, Nov 4, 2011 at 6:49 PM, Ian Collins wrote:
> On 11/ 5/11 02:37 PM, Matthew Ahrens wrote:
>
> On Wed, Oct 19, 2011 at 1:52 AM, Ian Collins wrote:
>>
>> I just tried sending from a oi151a system to a Solaris 10 backup
>>
On Wed, Oct 19, 2011 at 1:52 AM, Ian Collins wrote:
> I just tried sending from a oi151a system to a Solaris 10 backup server
> and the server barfed with
>
> zfs_receive: stream is unsupported version 17
>
> I can't find any documentation linking stream version to release, so does
> anyone know
On Sat, Oct 29, 2011 at 10:57 AM, Jim Klimov wrote:
> In short, is it
> possible to add "restartability" to ZFS SEND
In short, yes.
We are working on it here at Delphix, and plan to contribute our changes
upstream to Illumos.
You can read more about it in the slides I link to in this blog pos
On Thu, Aug 11, 2011 at 11:14 AM, Test Rat wrote:
> After replicating a pool with zfs send/recv I've found out I cannot
> perform some zfs on those datasets anymore. The datasets had permissions
> set via `zfs allow'.
>
...
>
> So, what are permissions if not properties?
Properties are things
ressratio" as the long name and
> "refratio" as the short name would make sense, as that matches
> "compressratio". Matt?
>
> - Eric
>
>
> On Mon, Jun 6, 2011 at 7:08 PM, Haudy Kazemi wrote:
>
>> On 6/6/2011 5:02 PM, Richard Elling wrote:
>
I have implemented a new property for ZFS, "refratio", which is the
compression ratio for referenced space (the "compressratio" is the ratio for
used space). We are using this here at Delphix to figure out how much space
a filesystem would use if it was not compressed (ignoring snapshots). I'd
li
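For example, assuming a hypothetical dataset name, the two ratios can be compared side by side:

zfs get refratio,compressratio tank/db   # refratio covers referenced space; compressratio covers used space, including snapshots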
On Sat, Jun 4, 2011 at 12:51 PM, Harry Putnam wrote:
> But I also see a massive list of files with a letter `m' prefixed on
> each line, which is supposed to mean modified. They cannot all really
> be modified, so I'm thinking it's something to do with rsyncing files
> from a windows XP machine to
On Tue, May 31, 2011 at 6:52 AM, Tomas Ögren wrote:
>
> On a different setup, we have about 750 datasets where we would like to
> use a single recursive snapshot, but when doing that all file access
> will be frozen for varying amounts of time (sometimes half an hour or
> way more). Splitting it
>
> On Thu, May 12, 2011 at 08:52:04PM +1000, Daniel Carosone wrote:
> > Other than the initial create, and the most
> > recent scrub, the history only contains a sequence of auto-snapshot
> > creations and removals. None of the other commands I'd expect, like
> > the filesystem creations and recv,
On Wed, May 25, 2011 at 8:01 PM, Matt Weatherford wrote:
> pike# zpool get version internal
> NAME      PROPERTY  VALUE  SOURCE
> internal  version   28     default
> pike# zpool get version external-J4400-12x1TB
> NAME                   PROPERTY  VALUE  SOURCE
> external-J4400-12x1TB  versi
On Wed, May 25, 2011 at 2:23 PM, Edward Ned Harvey wrote:
> I've finally returned to this dedup testing project, trying to get a handle
> on why performance is so terrible. At the moment I'm re-running tests and
> monitoring memory_throttle_co
On Wed, May 25, 2011 at 3:08 PM, Peter Jeremy wrote:
> On 2011-May-26 03:02:04 +0800, Matthew Ahrens wrote:
>
> Looks good.
>
Thanks for taking the time to look at this. More comments inline below.
> >pool open ("zpool imp
On Wed, May 25, 2011 at 12:55 PM, Deano wrote:
>
> Hi Matt,
>
> That looks really good, I've been meaning to implement a ZFS compressor
> (using a two pass, LZ4 + Arithmetic Entropy), so nice to see a route with
> which this can be done.
>
Cool! New compression algorithms are definitely some
The community of developers working on ZFS continues to grow, as does
the diversity of companies betting big on ZFS. We wanted a forum for
these developers to coordinate their efforts and exchange ideas. The
ZFS working group was formed to coordinate these development efforts.
The working group e
On Thu, Jan 13, 2011 at 4:36 AM, fred wrote:
> Thanks for this explanation
>
> So there is no real way to estimate the size of the increment?
Unfortunately not for now.
> Anyway, for this particular filesystem, i'll stick with rsync and yes, the
> difference was 50G!
Why? I would expect rsync
On Mon, Jan 10, 2011 at 2:40 PM, fred wrote:
> Hello,
>
> I'm having a weird issue with my incremental setup.
>
> Here is the filesystem as it shows up with zfs list:
>
> NAME USED AVAIL REFER MOUNTPOINT
> Data/FS1 771M 16.1T 116M /Da
On Thu, Dec 9, 2010 at 5:31 PM, Ian Collins wrote:
> On 12/10/10 12:31 PM, Moazam Raja wrote:
>> So, is it OK to send/recv while having the receive volume write enabled?
> A write can fail if a filesystem is unmounted for update.
True, but ZFS recv will not normally unmount a filesystem. It co
"usedsnap" is the amount of space consumed by all snapshots. Ie, the
amount of space that would be recovered if all snapshots were to be
deleted.
The space "used" by any one snapshot is the space that would be
recovered if that snapshot was deleted. Ie, the amount of space that
is unique to that
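A short illustration with hypothetical dataset and snapshot names:

zfs list -o name,used,usedbysnapshots,usedbydataset tank/fs   # usedbysnapshots = space freed if every snapshot went away
zfs get used tank/fs@monday    # space unique to this one snapshot, freed if just it is destroyed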
On Wed, Dec 1, 2010 at 10:30 AM, Don Jackson wrote:
>
> # zfs send -R naspool/open...@xfer-11292010 | zfs receive -Fv npool/openbsd
> receiving full stream of naspool/open...@xfer-11292010 into
> npool/open...@xfer-11292010
> received 23.5GB stream in 883 seconds (27.3MB/sec)
> cannot receive ne
I verified that this bug exists in OpenSolaris as well. The problem is that
we can't destroy the old filesystem "a" (which has been renamed to
"rec2/recv-2176-1"
in this case). We can't destroy it because it has a child, "b". We need to
rename "b" to be under the new "a". However, we are not re
That's correct.
This behavior is because the send|recv operates on the DMU objects,
whereas the recordsize property is interpreted by the ZPL. The ZPL
checks the recordsize property when a file grows. But the recv
doesn't grow any files, it just dumps data into the underlying
objects.
--matt
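A small sketch of that distinction, with hypothetical dataset names; the receive below lands blocks at whatever sizes the sender wrote them, regardless of the recordsize set on the receiving side:

zfs set recordsize=8k tank/recv                       # only affects files written or grown through the ZPL
zfs send tank/src@snap | zfs receive tank/recv/copy   # DMU objects arrive with the sender's block sizes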
O
Jordan Schwartz wrote:
ZFSfolk,
Pardon the slightly offtopic post, but I figured this would be a good
forum to get some feedback.
I am looking at implementing zfs group quotas on some X4540s and
X4140/J4400s, 64GB of RAM per server, running Solaris 10 Update 8
servers with IDR143158-06.
There
Tom Hall wrote:
Re the DDT, can someone outline it's structure please? Some sort of
hash table? The blogs I have read so far dont specify.
It is stored in a ZAP object, which is an extensible hash table. See
zap.[ch], ddt_zap.c, ddt.h
--matt
This is RFE 6425091 "want 'zfs diff' to list files that have changed between
snapshots", which covers both file & directory changes, and file
removal/creation/renaming. We actually have a prototype of zfs diff.
Hopefully someday we will finish it up...
--matt
Henu wrote:
Hello
Is there a p
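At the time of this message 'zfs diff' was only a prototype; once it eventually shipped, the usage is roughly (hypothetical dataset and snapshot names):

zfs diff tank/home@monday tank/home@tuesday   # one line per created, removed, renamed, or modified file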
Michael Schuster wrote:
Mike Gerdts wrote:
On Tue, Jan 5, 2010 at 4:34 AM, Mikko Lammi wrote:
Hello,
As a result of one badly designed application running loose for some
time,
we now seem to have over 60 million files in one directory. Good thing
about ZFS is that it allows it without any i
John Meyer wrote:
Looks like this part got cut off somehow:
the filesystem mount point is set to /usr/local/local. I just want to
do a simple backup/restore, can anyone tell me something obvious that I'm not
doing right?
Using OpenSolaris development build 130.
Sounds like bug 6916662, fixe
Gaëtan Lehmann wrote:
Hi,
On opensolaris, I use du with the -b option to get the uncompressed size
of a directory):
r...@opensolaris:~# du -sh /usr/local/
399M    /usr/local/
r...@opensolaris:~# du -sbh /usr/local/
915M    /usr/local/
r...@opensolaris:~# zfs list -o space,refer,rat
Len Zaifman wrote:
We have just update a major file server to solaris 10 update 9 so that we can
control user and group disk usage on a single filesystem.
We were using qfs and one nice thing about samquota was that it told you your
soft limit, your hard limit and your usage on disk space and
Brandon High wrote:
I'm playing around with snv_128 on one of my systems, and trying to
see what kinda of benefits enabling dedup will give me.
The standard practice for reprocessing data that's already stored to
add compression and now dedup seems to be a send / receive pipe
similar to:
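The command itself is cut off in this snippet; a generic sketch of such a pipe, with hypothetical dataset names and the desired properties set on the destination's parent so the rewritten copy picks them up, would be:

zfs create -o compression=on -o dedup=on tank/new      # destination parent carries the desired settings
zfs snapshot tank/old@migrate
zfs send tank/old@migrate | zfs receive tank/new/old   # data is recompressed/deduped as it is written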
The dedup=fletcher4,verify functionality has been removed. We will investigate
whether it's possible to fix these issues and re-enable this functionality.
--matt
Matthew Ahrens wrote:
If you did not do "zfs set dedup=fletcher4,verify " (which is
available in build 128 and nightly bits since then), you
Andrew Gabriel wrote:
Kjetil Torgrim Homme wrote:
Daniel Carosone writes:
Would there be a way to avoid taking snapshots if they're going to be
zero-sized?
I don't think it is easy to do, the txg counter is on a pool level,
AFAIK:
# zdb -u spool
Uberblock
magic = 00bab
If you did not do "zfs set dedup=fletcher4,verify " (which is available
in build 128 and nightly bits since then), you can ignore this message.
We have changed the on-disk format of the pool when using
dedup=fletcher4,verify with the integration of:
6903705 dedup=fletcher4,verify doesn't byt
Tomas Ögren wrote:
On 20 October, 2009 - Matthew Ahrens sent me these 0,7K bytes:
Tomas Ögren wrote:
On a related note, there is a way to still have quota used even after
all files are removed, S10u8/SPARC:
In this case there are two directories that have not actually been
removed. They have
Tomas Ögren wrote:
On a related note, there is a way to still have quota used even after
all files are removed, S10u8/SPARC:
In this case there are two directories that have not actually been removed.
They have been removed from the namespace, but they are still open, eg due to
some process's
Alastair Neil wrote:
On Tue, Oct 20, 2009 at 12:12 PM, Matthew Ahrens wrote:
Alastair Neil wrote:
However, the user or group quota is applied when a clone or a
snapshot is created from a file system that has a user or
The user/group used can be out of date by a few seconds, same as the "used"
and "referenced" properties. You can run sync(1M) to wait for these values
to be updated. However, that doesn't seem to be the problem you are
encountering here.
Can you send me the output of:
zfs list zpool1/sd01_m
Alastair Neil wrote:
However, the user or group quota is applied when a clone or a
snapshot is created from a file system that has a user or group quota.
"applied to a clone" I understand what that means, "applied to a
snapshot" - not so clear does it mean enforced on the original datas
Peter Wilk wrote:
tank/apps will be mounted as /apps -- needs to be set with 10G
tank/apps/data1 will need to be mounted as /apps/data1, needs to be set
with 20G alone.
The question is:
If refquota is being used to set the filesystem sizes on /apps and
/apps/data1. /apps/data1 will not be in
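A sketch of the settings being discussed (the 10G and 20G figures come from the question above):

zfs set refquota=10G tank/apps         # caps space referenced by /apps itself, not by descendants or snapshots
zfs set refquota=20G tank/apps/data1   # /apps/data1 gets its own independent cap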
Thanks for reporting this. I have fixed this bug (6822816) in build
127. Here is the evaluation from the bug report:
The problem is that the clone's dsobj does not appear in the origin's
ds_next_clones_obj.
The bug can occur under certain circumstances if there was a
"botched upg
Erik Trimble wrote:
From a global perspective, multi-disk parity (e.g. raidz2 or raidz3) is
the way to go instead of hot spares.
Hot spares are useful for adding protection to a number of vdevs, not a
single vdev.
Even when using raidz2 or 3, it is useful to have hot spares so that
reconstru
Brandon,
Yes, this is something that should be possible once we have bp rewrite (the
ability to move blocks around). One minor downside to "hot space" would be
that it couldn't be shared among multiple pools the way that hot spares can.
Also depending on the pool configuration, hot space may
Tristan Ball wrote:
OK, Thanks for that.
From reading the RFE, it sounds like having a faster machine on the
receive side will be enough to alleviate the problem in the short term?
That's correct.
--matt
Tristan Ball wrote:
Hi Everyone,
I have a couple of systems running opensolaris b118, one of which sends
hourly snapshots to the other. This has been working well, however as of
today, the receiving zfs process has started running extremely slowly,
and is running at 100% CPU on one core, comp
Brian Kolaci wrote:
So Sun would see increased hardware revenue stream if they would just
listen to the customer... Without [pool shrink], they look for alternative
hardware/software vendors.
Just to be clear, Sun and the ZFS team are listening to customers on this
issue. Pool shrink has be
Jorgen Lundman wrote:
Oh I forgot the more important question.
Importing all the user quota settings; Currently as a long file of "zfs
set" commands, which is taking a really long time. For example,
yesterday's import is still running.
Are there bulk-import solutions? Like zfs set -f file.tx
Jorgen Lundman wrote:
I have been playing around with osol-nv-b114 version, and the ZFS user
and group quotas.
First of all, it is fantastic. Thank you all! (Sun, Ahrens and anyone
else involved).
Thanks for the feedback!
I was unable to get ZFS quota to work with rquota. (Ie, NFS mount t
Joep Vesseur wrote:
I was wondering why "zfs destroy -r" is so excruciatingly slow compared to
parallel destroys.
This issue is bug # 6631178.
The problem is that "zfs destroy -r " destroys each filesystem
and snapshot individually, and each one must wait for a txg to sync (0.1 - 10
seconds)
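A sketch of the parallel alternative for snapshots, with a hypothetical filesystem name; each destroy still waits for a txg sync, but the waits overlap instead of queueing:

# fire off one destroy per snapshot in the background, then wait for all of them
for snap in $(zfs list -H -o name -t snapshot -r tank/fs); do
    zfs destroy "$snap" &
done
wait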
Edward Pilatowicz wrote:
hey all,
so recently i wrote some zones code to manage zones on zfs datasets.
the code i wrote did things like rename snapshots and promote
filesystems. while doing this work, i found a few zfs behaviours that,
if changed, could greatly simplify my work.
the primary is
Paul Kraus wrote:
Sorry in advance if this has already been discussed, but I did not
find it in my archives of the list.
According to the ZFS documentation, a resilver operation
includes what is effectively a dirty region log (DRL) so that if the
resilver is interrupted, by a snapshot or
Ed,
"zfs destroy [-r] -p" sounds great.
I'm not a big fan of the "-t template". Do you have conflicting snapshot
names due to the way your (zones) software works, or are you concerned about
sysadmins creating these conflicting snapshots? If it's the former, would
it be possible to change th
Enrico Maria Crisostomo wrote:
# zfs send -R -I @20090329 mypool/m...@20090330 | zfs recv -F -d
anotherpool/anotherfs
I experienced core dumps and the error message was:
internal error: Arg list too long
Abort (core dumped)
This is 6801979, fixed in build 111.
--matt
Mike Gerdts wrote:
On Tue, Mar 31, 2009 at 7:12 PM, Matthew Ahrens wrote:
River Tarnell wrote:
Matthew Ahrens:
ZFS user quotas (like other zfs properties) will not be accessible over
NFS;
you must be on the machine running zfs to manipulate them.
does this mean that without an account on
Robert Milkowski wrote:
Hello Matthew,
Tuesday, March 31, 2009, 9:16:42 PM, you wrote:
MA> Robert Milkowski wrote:
Hello Matthew,
Excellent news.
Wouldn't it be better if logical disk usage would be accounted and not
physical - I mean when compression is enabled should quota be
accounted bas
River Tarnell wrote:
Matthew Ahrens:
ZFS user quotas (like other zfs properties) will not be accessible over NFS;
you must be on the machine running zfs to manipulate them.
does this mean that without an account on the NFS server, a user cannot see his
current disk use / quota?
That
Tomas Ögren wrote:
On 31 March, 2009 - Matthew Ahrens sent me these 10K bytes:
FYI, I filed this PSARC case yesterday, and expect to integrate into
OpenSolaris in April. Your comments are welcome.
http://arc.opensolaris.org/caselog/PSARC/2009/204/
Quota reporting over NFS or for userland
Nicolas Williams wrote:
We could also
disallow them from doing "zfs get useru...@name pool/zoned/fs", just make
it an error to prevent them from seeing something other than what they
intended.
I don't see why the g-z admin should not get this data.
They can of course still get the data by d
Nicolas Williams wrote:
On Tue, Mar 31, 2009 at 02:37:02PM -0500, Mike Gerdts wrote:
The user or group is specified using one of the following forms:
posix name (eg. ahrens)
posix numeric id (eg. 126829)
sid name (eg. ahr...@sun)
sid numeric id (eg. S-1-12345-12423-125829)
How does this work with zones?
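A sketch of how those forms are used in practice, reusing the example identifiers from the list above and a hypothetical dataset:

zfs set userquota@ahrens=10G tank/home     # posix name
zfs set userquota@126829=10G tank/home     # posix numeric id
zfs get userused@ahrens tank/home          # space charged to that user so far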
Robert Milkowski wrote:
Hello Matthew,
Excellent news.
Wouldn't it be better if logical disk usage would be accounted and not
physical - I mean when compression is enabled should quota be
accounted based by a logical file size or physical as in du?
The compressed space *is* the amount of spa
Microsystems
1. Introduction
1.1. Project/Component Working Name:
ZFS user/group quotas & space accounting
1.2. Name of Document Author/Supplier:
Author: Matthew Ahrens
1.3 Date of This Document:
30 March, 2009
4. Technical Description
ZFS user/group s
José Gomes wrote:
Can we assume that any snapshot listed by either 'zfs list -t snapshot'
or 'ls .zfs/snapshot' and previously created with 'zfs receive' is
complete and correct? Or is it possible for a 'zfs receive' command to
fail (corrupt/truncated stream, sigpipe, etc...) and a corrupt or
Jorgen Lundman wrote:
Great! Will there be any particular limits on how many uids, or size of
uids in your implementation? UFS generally does not, but I did note that
if uids go over 1000 it "flips out" and changes the quotas file to
128GB in size.
All UIDs, as well as SIDs (from the SMB s
Gavin Maltby wrote:
Hi,
The manpage says
Specifically, used = usedbychildren + usedbydataset +
usedbyrefreservation + usedbysnapshots. These properties
are only available for datasets created on zpool
"version 13" pools.
.. and I now realize that
Bob Friesenhahn wrote:
On Thu, 12 Mar 2009, Jorgen Lundman wrote:
User-land will then have a daemon, whether or not it is one daemon per
file-system or really just one daemon does not matter. This process
will open '/dev/quota' and empty the transaction log entries
constantly. Take the uid,gi
Jorgen Lundman wrote:
In the style of a discussion over a beverage, and talking about
user-quotas on ZFS, I recently pondered a design for implementing user
quotas on ZFS after having far too little sleep.
It is probably nothing new, but I would be curious what you experts
think of the feas
Greg Mason wrote:
Just my $0.02, but would pool shrinking be the same as vdev evacuation?
Yes.
basically, what I'm thinking is:
zpool remove mypool <vdev>
Allow time for ZFS to vacate the vdev(s), and then light up the "OK to
remove" light on each evacuated disk.
That's the goal.
--matt
Blake wrote:
zfs send is great for moving a filesystem with lots of tiny files,
since it just handles the blocks :)
I'd like to see:
pool-shrinking (and an option to shrink disk A when i want disk B to
become a mirror, but A is a few blocks bigger)
I'm working on it.
install to mirror fro
David Magda wrote:
Given the threads that have appeared on this list lately, how about
codifying / standardizing the output of "zfs send" so that it can be
backed up to tape? :)
We will soon be changing the manpage to indicate that the zfs send stream
will be receivable on all future versions
These stack traces look like 6569719 (fixed in s10u5).
For update 5, you could start with the kernel stack of the hung commands.
(use ::pgrep and ::findstack) We might also need the sync thread's stack
(something like ::walk spa | ::print spa_t
spa_dsl_pool->dp_txg.tx_sync_thread | ::findstack
Ian,
I couldn't find any bugs with a similar stack trace. Can you file a bug?
--matt
Ian Collins wrote:
> The system was an x4540 running Solaris 10 Update 6 acting as a
> production Samba server.
>
> The only unusual activity was me sending and receiving incremental dumps
> to and from another
Are you sure that you don't have any refreservations?
--matt
Paul wrote:
> I apologize for lack of info regarding to previous post.
>
> # zpool list
>
> NAME         SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
> gwvm_zpool  3.35T  3.16T   190G  94%  ONLINE  -
> rpool        135G   27.5
Andreas Koppenhoefer wrote:
> Hello,
>
> occasionally we got some solaris 10 server to panic in zfs code while doing
> "zfs send -i [EMAIL PROTECTED] [EMAIL PROTECTED] | ssh remote zfs receive
> poolname".
> The race condition(s) get triggered by a broken data transmission or killing
> sending zf
Ben Rockwood wrote:
> I've been struggling to fully understand why disk space seems to vanish.
> I've dug through bits of code and reviewed all the mails on the subject that
> I can find, but I still don't have a proper understanding of whats going on.
>
> I did a test with a local zpool on s
Indeed. This happens when the scrub started "in the future" according to the
timestamp. Then we get a negative amount of time passed, which gets printed
like this. We should check for this and at least print a more useful message.
--matt
Sanjeev Bagewadi wrote:
> Mike,
>
> Indeed an interes
Robert Lawhead wrote:
> Apologies up front for failing to find related posts...
> Am I overlooking a way to get 'zfs send -i [EMAIL PROTECTED] [EMAIL
> PROTECTED] | zfs receive -n -v ...' to show the contents of the stream? I'm
> looking for the equivalent of ufsdump 1f - fs ... | ufsrestore tv
Sumit Gupta wrote:
> The /dev/[r]dsk nodes implement the O_EXCL flag. If a node is opened using
> the O_EXCL, subsequent open(2) to that node fail. But I dont think the
> same is true for /dev/zvol/[r]dsk nodes. Is that a bug (or maybe RFE) ?
Yes, that seems like a fine RFE. Or a bug, if there'
I believe this is because sharemgr does an O(number of shares) operation
whenever you try to share/unshare anything (retrieving the list of shares
from the kernel to make sure that it isn't/is already shared). I couldn't
find a bug on this (though it's been known for some time), so feel free to