+1 for zfsdump/zfsrestore
Julian Regel wrote:
When we brought it up last time, I think we found that no one knows of a
userland tool similar to 'ufsdump' that's capable of serializing a ZFS
along with holes, large files, "attribute" forks, Windows ACLs, and
checksums of its own, and then rest
zpool split
http://blogs.sun.com/mmusante/entry/seven_years_of_good_luck
I came across this around noon today, originally on http://c0t0d0s0.org.
More here:
http://opensolaris.org/jive/thread.jspa?threadID=113685&tstart=60
Too bad this probably won't make it to the final release of OpenSola
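For reference, a sketch of how the new command is meant to be used (the pool
names below are made up; see the blog post above for the authoritative
description): it breaks one half of each mirror off into a new, importable pool.
zpool split tank tankcopy      # detach one side of every mirror into 'tankcopy'
zpool import tankcopy          # the split-off half comes up as its own pool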
A ZFS file system reports 1007GB being used (df -h / zfs list). When doing a
'du -sh' on the filesystem root, I only get approx. 300GB, which is the correct
size.
The file system became full during Christmas and I increased the quota from 1TB
to 1.5TB to 2TB, then decreased it to 1.5TB. No reservatio
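A sketch of the commands commonly used to see where the "missing" space is
hiding (snapshots and reservations are the usual suspects when du and zfs list
disagree); the dataset name below is a placeholder, not from the post:
zfs list -o space tank/export        # splits used space into snapshots/dataset/children/refreservation
zfs list -t snapshot -r tank/export  # space pinned by individual snapshots
zfs get quota,refquota,reservation,refreservation tank/export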
I used the defaults when creating the zpool with one disk drive. I guess it is a
RAID 0 configuration.
Thanks,
Giri
> Hi Giridhar,
>
> The size reported by ls can include things like holes
> in the file. What space usage does the zfs(1M)
> command report for the filesystem?
>
> Adam
>
> On Dec 16, 2009, at 10:33 PM, Giridhar K R wrote:
>
> > Hi,
> >
>
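A quick way to compare the logical length that ls reports with the blocks
actually allocated (a generic sketch, not from Adam's reply; the file name is
a placeholder):
ls -l  bigfile    # logical size, holes included
ls -s  bigfile    # blocks actually allocated
du -h  bigfile    # allocated space, human readable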
Hi,
Reposting as I have not gotten any response.
Here is the issue. I created a zpool with 64k recordsize and enabled dedupe on
it.
-->zpool create -O recordsize=64k TestPool device1
-->zfs set dedup=on TestPool
I copied files onto this pool over NFS from a Windows client.
Here is the output o
As I have noted above after editing the initial post, it's the same locally too.
>>I found that the "ls -l" on the zpool also reports 51,193,782,290 bytes
Hi,
Created a zpool with 64k recordsize and enabled dedupe on it.
zpool create -O recordsize=64k TestPool device1
zfs set dedup=on TestPool
I copied files onto this pool over NFS from a Windows client.
Here is the output of zpool list
Prompt:~# zpool list
NAME    SIZE  ALLOC   FREE    CAP
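The preview cuts the output off; for reference, a hedged example of the
figures worth comparing on a dedup-enabled pool (none of these values or
commands are taken from the original post):
zpool list TestPool              # SIZE/ALLOC/FREE/CAP plus the DEDUP ratio column
zpool get dedupratio TestPool    # pool-wide dedup ratio
zfs get used,referenced TestPool # dataset-level accounting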
am using
> Sun DSEE 7.0 and I'm
> facing a heck of a lot of problems with the LDAP DIT
> structure.
>
Let me know how and where we can discuss this.
Thanks,
Venkatesh K
o wait for U8
to be released.)
I will update the CR with this information.
Lori
On 02/18/09 09:12, Jerry K wrote:
Hello Lori,
Any update on this issue, and can you speculate as to whether it will be a
patch to Solaris 10u6, or part of 10u7?
Thanks again,
Jerry
Lori Alt wrote:
This is in t
There is a pretty active Apple ZFS SourceForge group that provides RW
bits for 10.5.
Things are oddly quiet concerning 10.6. I am curious about how this
will turn out myself.
Jerry
Rich Teer wrote:
It's not pertinent to this sub-thread, but zfs (albeit read-only)
is already in currently s
This is wrt Postgres 8.4 beta1, which has a new effective_io_concurrency
tunable that uses posix_fadvise:
http://www.postgresql.org/docs/8.4/static/runtime-config-resource.html
(Go to the bottom)
Quote:
Asynchronous I/O depends on an effective posix_fadvise function, which
some operating s
Where is the boot-interest mailing list?
A review of the mailing lists here:
http://mail.opensolaris.org/mailman/listinfo/
does not show a boot-interest mailing list, or anything similar. Is it
on a different site?
Thanks
Richard Elling wrote:
Uwe Dippel wrote:
C. wrote:
I've worked hard t
.
In the meantime, you might try this:
http://blogs.sun.com/scottdickson/entry/flashless_system_cloning_with_zfs
- Lori
On 01/09/09 12:28, Jerry K wrote:
I understand that currently, at least under Solaris 10u6, it is not
possible to jumpstart a new system with a zfs root using a flash
archive
It was rumored that Nevada build 105 would have ZFS encrypted file
systems integrated into the main source.
In reviewing the Change logs (URLs below) I did not see any mention
that this had come to pass. It's going to be another week
before I have a chance to play with b105.
Does anyon
I understand that currently, at least under Solaris 10u6, it is not
possible to jumpstart a new system with a zfs root using a flash archive
as a source.
Can anyone comment on whether this restriction will be lifted in the near
term, or if it will be a while (6+ months) before this will be possi
Hello Thomas,
What is mbuffer? Where might I go to read more about it?
Thanks,
Jerry
>
> yesterday, I released a new version of mbuffer, which also enlarges
> the default TCP buffer size. So everybody using mbuffer for network data
> transfer might want to update.
>
> For everybody unfam
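mbuffer is a buffering/rate-measuring pipe tool that is often put between
zfs send and zfs receive over TCP. A typical invocation looks roughly like
this (a sketch; host, port, dataset names and buffer sizes are assumptions,
not taken from Thomas' post):
# on the receiving host
mbuffer -s 128k -m 1G -I 9090 | zfs receive -F tank/backup
# on the sending host
zfs send tank/fs@snap | mbuffer -s 128k -m 1G -O recvhost:9090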
http://www.fusionio.com/Products.aspx
Looks like a cool SSD to go with ZFS
Has anybody tried ZFS with Fusion-IO storage? For that matter, even with
Solaris?
-Jignesh
--
Jignesh Shah http://blogs.sun.com/jkshah
Sun Microsystems, Inc.   http://sun.com/postgresql
Ming into
this.
Jerry K.
Bob Friesenhahn wrote:
> On Wed, 3 Sep 2008, Jerry K wrote:
>
>> How would this work for servers that support only (2) drives, or systems
>> that are configured to have pools of (2) drives, i.e. mirrors, and
>> there is no additional space to have
How would this work for servers that support only (2) drives, or systems
that are configured to have pools of (2) drives, i.e. mirrors, and
there is no additional space to add a new disk, as shown in the sample
below.
I still support lots of V490s, which hold only (2) drives.
Thanks,
Jerr
On 7-08-2008 at 13:20, Borys Saulyak wrote:
> Hi,
>
> I have a problem with Solaris 10. I know that this forum is for
> OpenSolaris, but maybe someone will have an idea.
> My box is crashing on any attempt to import a zfs pool. The first crash
> happened on an export operation and since then I can
No, the data in question must be moved or copied from where it is to a different
ZFS.
Raquel
Thanks, glad someone else thought of it first.
I guess I will have to do things the hard way.
Raquel
I've run across something that would save me days of trouble.
Situation: the contents of one ZFS file system need to be moved to another
ZFS file system. The destination can be in the same zpool, even a brand-new
ZFS file system. A command to move the data from one ZFS file system to
another, WITH
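There is no single built-in "move" command, but the usual approach is a
snapshot plus send/receive inside the pool (a sketch; the dataset names are
placeholders, and tank/dst must not already exist):
zfs snapshot tank/src@move
zfs send tank/src@move | zfs receive tank/dst
# verify the copy, then reclaim the source:
zfs destroy -r tank/src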
resting read anyways. :)
>
> Nathan.
>
>
>
> Nicolas Williams wrote:
>> On Wed, Apr 09, 2008 at 11:38:03PM -0400, Jignesh K. Shah wrote:
>>> Can zfs send utilize multiple-streams of data transmission (or some
>>> sort of multipleness)?
>>>
>&
Can zfs send utilize multiple-streams of data transmission (or some sort
of multipleness)?
Interesting read for background
http://people.planetpostgresql.org/xzilla/index.php?/archives/338-guid.html
Note: zfs send takes 3 days for 1TB to another system
Regards,
Jignesh
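A single zfs send produces one stream; a common workaround (a sketch of one
approach, with invented dataset and host names) is to split the data across
several child datasets and send them concurrently:
for fs in data1 data2 data3 data4
do
    zfs send tank/$fs@snap | ssh otherhost zfs receive -d backup &
done
wait    # all four streams run in parallel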
On 10-01-2008 at 17:45, eric kustarz wrote:
> On Jan 10, 2008, at 4:50 AM, Łukasz K wrote:
>
> > Hi
> > I'm using ZFS on few X4500 and I need to backup them.
> > The data on source pool keeps changing so the online replication
> > would be the b
On 10-01-2008 at 16:11, Jim Dunham wrote:
> Łukasz K wrote:
>
> > Hi
> >I'm using ZFS on few X4500 and I need to backup them.
> > The data on source pool keeps changing so the online replication
> > would be the best solution.
> >
> >
Hi
I'm using ZFS on a few X4500s and I need to back them up.
The data on the source pool keeps changing, so online replication
would be the best solution.
As far as I know, AVS doesn't support ZFS - there is a problem with
mounting the backup pool.
Other backup systems (disk-to-disk or block-to-block)
On 26/12/2007, at 2:43 AM, Mike Gerdts wrote:
> On Dec 25, 2007 1:33 PM, K <[EMAIL PROTECTED]> wrote:
>>
>> if (fclose (file)) {
>> fprintf (stderr, "fatal: unable to close temp file: %s\n",
>> strerror (errno));
>> exit (1)
if (fclose (file)) {
    fprintf (stderr, "fatal: unable to close temp file: %s\n",
             strerror (errno));
    exit (1);
}
I don't understand why the above piece of code is failing...
fatal: unable to close file: File too large
and of course my code fails at 2G... The output should b
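Guesswork, since the preview is cut off: if this is a 32-bit binary built
without large-file support, the buffered write that fclose flushes fails
with EFBIG at 2GB. The usual Solaris fix is to build with the large-file
compilation environment (or open the file with fopen64 explicitly):
cc `getconf LFS_CFLAGS` -o mytool mytool.c `getconf LFS_LDFLAGS` `getconf LFS_LIBS`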
I haven't seen anything about this recently, or I may have missed it.
Can anyone share what the current status of ZFS boot partitions on SPARC is?
Thanks,
Jerry K
1/ Anchor VNICs, the equivalent of Linux dummy interfaces; we need more
flexibility in the way we set up Xen networking. What is sad is that
the code is already available in the unreleased Crossbow bits... but
it won't appear in Nevada until Q1 2008 :(
This is a real blocker for me as my ISP
> kugutsum
>
> I tried with just 4GB in the system, and the same issue. I'll try
> 2GB tomorrow and see if it's any better. (PS: how did you determine
> that was the problem in your case?)
sorry, I wasn't monitoring this list for a while. My machine has 8GB
of RAM and I remembered that some
I have an xVM b75 server and use zfs for storage (zfs root mirror and a
raid-z2 datapool).
I see everywhere that it is recommended to have a lot of memory on a
zfs file server... but I also need to relinquish a lot of my memory to
be used by the domUs.
What would be a good value for dom0_mem o
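A sketch of the two knobs usually mentioned together for a ZFS-backed dom0
(the values and file paths below are illustrative assumptions, not a
recommendation from this thread):
# cap dom0 memory on the xVM kernel line in menu.lst, e.g.
#   kernel$ /boot/$ISADIR/xen.gz dom0_mem=2048M
# then cap the ZFS ARC so it fits inside that, in /etc/system:
set zfs:zfs_arc_max = 0x40000000    # 1 GB, adjust to taste; takes effect after reboot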
There are problems with the zfs sync phase. Run:
#dtrace -n fbt::txg_wait_open:entry'{ stack(); ustack(); }'
and wait 10 minutes. Also give more information about the pool:
#zfs get all filer
I assume 'filer' is your pool name.
Regards
Lukas
On 11/7/07, Łukasz K <[EMAIL PROTECTED]> wrote:
Hi,
#!/bin/sh
echo '::spa' | mdb -k | grep ACTIVE \
| while read pool_ptr state pool_name
do
echo "checking pool map size [B]: $pool_name"
echo "${pool_ptr}::walk metaslab|::print -d struct metaslab
ms_smo.smo_objsize" \
| mdb -k \
| nawk '{sub("^0t"
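The listing truncates the script; a reconstruction of how the loop presumably
finishes (the nawk completion and the closing 'done' are my assumption, not
verbatim from the post) - it strips mdb's "0t" decimal prefix and sums the
space map object sizes per pool:
#!/bin/sh
echo '::spa' | mdb -k | grep ACTIVE \
| while read pool_ptr state pool_name
do
  echo "checking pool map size [B]: $pool_name"
  echo "${pool_ptr}::walk metaslab|::print -d struct metaslab ms_smo.smo_objsize" \
    | mdb -k \
    | nawk '{sub("^0t","",$3); sum+=$3} END {print sum}'
done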
> > Now space maps, intent log, spa history are compressed.
>
> All normal metadata (including space maps and spa history) is always
> compressed. The intent log is never compressed.
Can you tell me where the space map is compressed?
Buffer is filled up with:
468 *entry++ = SM_
then I'll be able to
> provide you with my changes in some form. Hope this will happen next week.
>
> Cheers,
> Victor
>
> Łukasz K wrote:
> > On 26-07-2007 at 13:31, Robert Milkowski wrote:
> >> Hello Victor,
> >>
> >> Wednesday, Ju
drive stripe, nothing too fancy. We do not have any snapshots.
>
> Any ideas?
Maybe your pool is fragmented and the pool space map is very big.
Run this script:
#!/bin/sh
echo '::spa' | mdb -k | grep ACTIVE \
| while read pool_ptr state pool_name
do
echo "checking pool
> Is ZFS efficient at handling huge populations of tiny-to-small files -
> for example, 20 million TIFF images in a collection, each between 5k
> and 500k in size?
>
> I am asking because I could have sworn that I read somewhere that it
> isn't, but I can't find the reference.
It depends, what typ
On 26-07-2007 at 13:31, Robert Milkowski wrote:
> Hello Victor,
>
> Wednesday, June 27, 2007, 1:19:44 PM, you wrote:
>
> VL> Gino wrote:
> >> Same problem here (snv_60).
> >> Robert, did you find any solutions?
>
> VL> A couple of weeks ago I put together an implementation of space maps
er on.
Regards,
Jignesh
Jonathan Edwards wrote:
On Dec 8, 2006, at 05:20, Jignesh K. Shah wrote:
Hello ZFS Experts
I have two ZFS pools zpool1 and zpool2
I am trying to create a bunch of zvols such that their paths are
similar except for a consistent numbering scheme without reference to the
z
Hello ZFS Experts
I have two ZFS pools zpool1 and zpool2
I am trying to create a bunch of zvols such that their paths are similar except for a consistent
numbering scheme without reference to the zpools they actually belong to. (This will allow me to have
common references in my setup scripts.)
If I
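One way to get pool-independent paths (a sketch; the volume sizes, names and
target directory are invented for illustration) is to create the zvols per
pool and then symlink their device nodes under a single neutral directory
that the setup scripts reference:
zfs create -V 10g zpool1/vol01
zfs create -V 10g zpool2/vol02
mkdir -p /myvols
ln -s /dev/zvol/rdsk/zpool1/vol01 /myvols/vol01
ln -s /dev/zvol/rdsk/zpool2/vol02 /myvols/vol02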