adventure, and I got
the info that I needed. Thanks to all that took the time to reply.
-Matt Breitbach
-Original Message-
From: Donal Farrell [mailto:vmlinuz...@gmail.com]
Sent: Wednesday, November 23, 2011 10:42 AM
To: Matt Breitbach
Subject: Re: [zfs-discuss] Compression
is this over NFS or iSCSI?
Currently using NFS to access the datastore.
-Matt
-Original Message-
From: Richard Elling [mailto:richard.ell...@gmail.com]
Sent: Tuesday, November 22, 2011 11:10 PM
To: Matt Breitbach
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Compression
Hi Matt,
On Nov 22, 2011, at 7:39 PM, Matt Breitbach wrote:
> So I'm looking at files on my ZFS volume that are compressed, and I'm
> wondering to myself, "self, are the values shown here the size on disk, or
> are they the pre-compressed values". Google gives me no great results on
> the first few pages, so I headed here.
2011-11-23 8:21, Ian Collins wrote:
If you use "du" on the ZFS filesystem, you'll see the logical
storage size, which takes into account compression and sparse
bytes. So the "du" size should not be greater than the "ls" size.
It can be significantly bigger:
ls -sh x
2 x
du -sh x
1K x
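A quick way to see the different views side by side on a compressed dataset
(dataset and file names here are hypothetical; output will vary with the data):
# ls -l /tank/fs/file                # logical size in bytes, as written
# du -h /tank/fs/file                # blocks actually allocated on disk
# zfs get compressratio tank/fs      # dataset-wide compression ratio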
Pun accepted.
On 11/23/11 04:58 PM, Jim Klimov wrote:
2011-11-23 7:39, Matt Breitbach wrote:
So I'm looking at files on my ZFS volume that are compressed, and I'm
wondering to myself, "self, are the values shown here the size on disk, or
are they the pre-compressed values". Google gives me no great results on
the first few pages, so I headed here.
2011-11-23 7:39, Matt Breitbach wrote:
So I'm looking at files on my ZFS volume that are compressed, and I'm
wondering to myself, "self, are the values shown here the size on disk, or
are they the pre-compressed values". Google gives me no great results on
the first few pages, so I headed here.
So I'm looking at files on my ZFS volume that are compressed, and I'm
wondering to myself, "self, are the values shown here the size on disk, or
are they the pre-compressed values". Google gives me no great results on
the first few pages, so I headed here.
This really relates to my VMware environment.
On Wed, 15 Sep 2010, Brandon High wrote:
When using compression, are the on-disk record sizes determined before
or after compression is applied? In other words, if record size is set
to 128k, is that the amount of data fed into the compression engine,
or is the output size trimmed to fit? I think it's the former, but I'm
not certain.
When using compression, are the on-disk record sizes determined before
or after compression is applied? In other words, if record size is set
to 128k, is that the amount of data fed into the compression engine,
or is the output size trimmed to fit? I think it's the former, but I'm
not certain.
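It is the former: the recordsize caps the logical block that is fed into the
compressor, and the (smaller) output is what gets allocated. A rough way to
confirm this empirically (pool and file names hypothetical; all-zero data is
an extreme case, since ZFS compresses it almost entirely away):
# zfs create -o recordsize=128k -o compression=lzjb tank/demo
# dd if=/dev/zero of=/tank/demo/zeros bs=128k count=8
# sync
# ls -l /tank/demo/zeros   # 1048576 bytes: eight full 128k logical records
# du -h /tank/demo/zeros   # far smaller: compression ran after each record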
On Wed, Apr 7, 2010 at 10:47 AM, Daniel Bakken
wrote:
> When I send a filesystem with compression=gzip to another server with
> compression=on, compression=gzip is not set on the received filesystem. I am
> using:
Is compression set on the dataset, or is it being inherited from a
parent dataset?
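The SOURCE column of zfs get answers exactly this question (dataset names
taken from the thread; output is illustrative, not from a real system):
# zfs get -r -o name,property,value,source compression sas
NAME         PROPERTY     VALUE  SOURCE
sas          compression  on     local
sas/archive  compression  on     inherited from sas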
On 08 April, 2010 - Cindy Swearingen sent me these 2,6K bytes:
> Hi Daniel,
>
> D'oh...
>
> I found a related bug when I looked at this yesterday but I didn't think
> it was your problem because you didn't get a busy message.
>
> See this RFE:
>
> http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6700597
Hi Daniel,
D'oh...
I found a related bug when I looked at this yesterday but I didn't think
it was your problem because you didn't get a busy message.
See this RFE:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6700597
Cindy
On 04/07/10 17:59, Daniel Bakken wrote:
We have found
Daniel Bakken wrote:
We have found the problem. The mountpoint property on the sender was at
one time changed from the default, then later changed back to defaults
using zfs set instead of zfs inherit. Therefore, zfs send included these
local "non-default" properties in the stream, even though
Daniel Bakken wrote:
Here is the info from zstreamdump -v on the sending side:
BEGIN record
hdrtype = 2
features = 0
magic = 2f5bacbac
creation_time = 0
type = 0
flags = 0x0
toguid = 0
fromguid = 0
toname = promise1/arch...@daily.1
Daniel Bakken wrote:
The receive side is running build 111b (2009.06), so I'm not sure if
your advice actually applies to my situation.
The advice regarding received vs local properties definitely does not
apply. You could still confirm the presence of the compression property
in the send stream.
Daniel Bakken wrote:
When I send a filesystem with compression=gzip to another server with
compression=on, compression=gzip is not set on the received filesystem.
I am using:
zfs send -R promise1/arch...@daily.1 | zfs receive -vd sas
The zfs manpage says regarding the -R flag: "When received, all properties,
snapshots, descendent file systems, and clones are preserved."
We have found the problem. The mountpoint property on the sender was at one
time changed from the default, then later changed back to defaults using zfs
set instead of zfs inherit. Therefore, zfs send included these local
"non-default" properties in the stream, even though the local properties are
Here is the info from zstreamdump -v on the sending side:
BEGIN record
hdrtype = 2
features = 0
magic = 2f5bacbac
creation_time = 0
type = 0
flags = 0x0
toguid = 0
fromguid = 0
toname = promise1/arch...@daily.1
nvlist version
The receive side is running build 111b (2009.06), so I'm not sure if your
advice actually applies to my situation.
Daniel Bakken
On Tue, Apr 6, 2010 at 10:57 PM, Tom Erickson wrote:
> After build 128, locally set properties override received properties, and
> this would be the expected behavior.
I worked around the problem by first creating a filesystem of the same name
with compression=gzip on the target server. Like this:
zfs create sas/archive
zfs set compression=gzip sas/archive
Then I used zfs receive with the -F option:
zfs send -vR promise1/arch...@daily.1 | zfs receive -F -vd sas
Hi Daniel,
I tried to reproduce this by sending from a b130 system to a s10u9
system, which vary in pool versions, but this shouldn't matter. I've
been sending/receiving streams between latest build systems and
older s10 systems for a long time. The zfs send -R option to send a
recursive snapshot
Cindy,
The source server is OpenSolaris build 129 (zpool version 22) and the
destination is stock OpenSolaris 2009.06 (zpool version 14). Both
filesystems are zfs version 3.
Mystified,
Daniel Bakken
On Wed, Apr 7, 2010 at 10:57 AM, Cindy Swearingen <
cindy.swearin...@oracle.com> wrote:
> Danie
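To check the same versions on your own systems (pool and dataset names from
the thread; output omitted):
# zpool get version promise1
# zfs get version promise1/archive
# zpool upgrade -v     # lists every pool version this build supports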
Daniel,
Which Solaris release is this?
I can't reproduce this on my lab system that runs the Solaris 10 10/09
release.
See the output below.
Thanks,
Cindy
# zfs destroy -r tank/test
# zfs create -o compression=gzip tank/test
# zfs snapshot tank/t...@now
# zfs send -R tank/t...@now | zfs receive
When I send a filesystem with compression=gzip to another server with
compression=on, compression=gzip is not set on the received filesystem. I am
using:
zfs send -R promise1/arch...@daily.1 | zfs receive -vd sas
The zfs manpage says regarding the -R flag: "When received, all properties,
snapshots, descendent file systems, and clones are preserved."
With the default compression scheme (LZJB), how does one calculate the ratio
or amount compressed ahead of time when allocating storage?
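There is no formula for predicting the LZJB ratio; it depends entirely on the
data. The usual approach is to copy a representative sample onto a compressed
dataset and read the measured ratio back (names hypothetical):
# zfs create -o compression=lzjb tank/sample
# cp -r /data/representative/. /tank/sample/
# zfs get compressratio tank/sample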
Bill Sommerfeld wrote:
On Wed, 2009-06-17 at 12:35 +0200, casper@sun.com wrote:
I still use "disk swap" because I have some bad experiences
with ZFS swap. (ZFS appears to cache and that is very wrong)
I'm experimenting with running zfs swap with the primarycache attribute
set to "metadata
On Wed, 2009-06-17 at 12:35 +0200, casper@sun.com wrote:
> I still use "disk swap" because I have some bad experiences
> with ZFS swap. (ZFS appears to cache and that is very wrong)
I'm experimenting with running zfs swap with the primarycache attribute
set to "metadata" instead of the defau
On Thu, 18 Jun 2009, Haudy Kazemi wrote:
for text data, LZJB compression had negligible performance benefits (task
times were unchanged or marginally better) and less storage space was
consumed (1.47:1).
for media data, LZJB compression had negligible performance benefits (task
times were unchanged or marginally better).
Bob Friesenhahn wrote:
On Wed, 17 Jun 2009, Haudy Kazemi wrote:
usable with very little CPU consumed.
If the system is dedicated to serving files rather than also being
used interactively, it should not matter much what the CPU usage is.
CPU cycles can't be stored for later use. Ultimately, it (mostly*) does not
matter.
On Wed, 17 Jun 2009, Haudy Kazemi wrote:
usable with very little CPU consumed.
If the system is dedicated to serving files rather than also being used
interactively, it should not matter much what the CPU usage is. CPU cycles
can't be stored for later use. Ultimately, it (mostly*) does not matter.
David Magda wrote:
On Tue, June 16, 2009 15:32, Kyle McDonald wrote:
So the cache saves not only the time to access the disk but also the CPU
time to decompress. Given this, I think it could be a big win.
Unless you're in GIMP working on JPEGs, or doing some kind of MPEG video
editing--or ripping audio (MP3 / AAC / FLAC) stuff.
Bob Friesenhahn wrote:
On Mon, 15 Jun 2009, Bob Friesenhahn wrote:
On Mon, 15 Jun 2009, Rich Teer wrote:
You actually have that backwards. :-) In most cases, compression is very
desirable. Performance studies have shown that today's CPUs can compress
data faster than it takes for the uncompressed data to be read or written.
On Wed, June 17, 2009 06:15, Fajar A. Nugraha wrote:
>>> Perhaps compressing /usr could be handy, but why bother enabling
>>> compression if the majority (by volume) of user data won't do
>>> anything but burn CPU?
>
> How do you define "substantial"? My opensolaris snv_111b installation
> has 1.4
On Wed, June 17, 2009 06:03, Kjetil Torgrim Homme wrote:
> I'd be interested to see benchmarks on MySQL/PostgreSQL performance
> with compression enabled. my *guess* would be it isn't beneficial
> since they usually do small reads and writes, and there is little gain
> in reading 4 KiB instead of 8 KiB.
"Monish Shah" writes:
>> I'd be interested to see benchmarks on MySQL/PostgreSQL performance
>> with compression enabled. my *guess* would be it isn't beneficial
>> since they usually do small reads and writes, and there is little
>> gain in reading 4 KiB instead of 8 KiB.
>
> OK, now you have s
Unless you're in GIMP working on JPEGs, or doing some kind of MPEG
video editing--or ripping audio (MP3 / AAC / FLAC) stuff. All of
which are probably some of the largest files in most people's
homedirs nowadays.
indeed. I think only programmers will see any substantial benefit
from compression, since both the code itself and the object files
generated are easily compressible.
"Fajar A. Nugraha" writes:
> Kjetil Torgrim Homme wrote:
>> indeed. I think only programmers will see any substantial benefit
>> from compression, since both the code itself and the object files
>> generated are easily compressible.
>
>>> Perhaps compressing /usr could be handy, but why bother enabling
>>> compression if the majority (by volume) of user data won't do
>>> anything but burn CPU?
>On Wed, Jun 17, 2009 at 5:03 PM, Kjetil Torgrim Homme wrote:
>> indeed. I think only programmers will see any substantial benefit
>> from compression, since both the code itself and the object files
>> generated are easily compressible.
>
>>> Perhaps compressing /usr could be handy, but why bother enabling
>>> compression if the majority (by volume) of user data won't do
>>> anything but burn CPU?
On Wed, Jun 17, 2009 at 5:03 PM, Kjetil Torgrim Homme wrote:
> indeed. I think only programmers will see any substantial benefit
> from compression, since both the code itself and the object files
> generated are easily compressible.
>> Perhaps compressing /usr could be handy, but why bother enabling
>> compression if the majority (by volume) of user data won't do anything
>> but burn CPU?
"David Magda" writes:
> On Tue, June 16, 2009 15:32, Kyle McDonald wrote:
>
>> So the cache saves not only the time to access the disk but also
>> the CPU time to decompress. Given this, I think it could be a big
>> win.
>
> Unless you're in GIMP working on JPEGs, or doing some kind of MPEG
> video editing--or ripping audio (MP3 / AAC / FLAC) stuff.
Hello Richard,
Monish Shah wrote:
What about when the compression is performed in dedicated hardware?
Shouldn't compression be on by default in that case? How do I put in an
RFE for that?
Is there a bugs.intel.com? :-)
I may have misled you. I'm not asking for Intel to add hardware
compression
On Tue, June 16, 2009 15:32, Kyle McDonald wrote:
> So the cache saves not only the time to access the disk but also the CPU
> time to decompress. Given this, I think it could be a big win.
Unless you're in GIMP working on JPEGs, or doing some kind of MPEG video
editing--or ripping audio (MP3 / AAC / FLAC) stuff.
Darren J Moffat wrote:
Kyle McDonald wrote:
Bob Friesenhahn wrote:
On Mon, 15 Jun 2009, Thommy M. wrote:
In most cases compression is not desirable. It consumes CPU and
results in uneven system performance.
IIRC there was a blog about I/O performance with ZFS stating that it was
faster with compression ON as it didn't have to wait for so much data
from the disks.
Monish Shah wrote:
Hello,
I would like to add one more point to this.
Everyone seems to agree that compression is useful for reducing load
on the disks and the disagreement is about the impact on CPU
utilization, right?
What about when the compression is performed in dedicated hardware?
Shouldn't compression be on by default in that case?
Kyle McDonald wrote:
Bob Friesenhahn wrote:
On Mon, 15 Jun 2009, Thommy M. wrote:
In most cases compression is not desirable. It consumes CPU and
results in uneven system performance.
IIRC there was a blog about I/O performance with ZFS stating that it was
faster with compression ON as it didn't have to wait for so much data
from the disks.
Bob Friesenhahn wrote:
On Mon, 15 Jun 2009, Thommy M. wrote:
In most cases compression is not desirable. It consumes CPU and
results in uneven system performance.
IIRC there was a blog about I/O performance with ZFS stating that it was
faster with compression ON as it didn't have to wait for so much data
from the disks.
On Mon, 15 Jun 2009, Bob Friesenhahn wrote:
On Mon, 15 Jun 2009, Rich Teer wrote:
You actually have that backwards. :-) In most cases, compression is very
desirable. Performance studies have shown that today's CPUs can compress
data faster than it takes for the uncompressed data to be read or written.
On Mon, 15 Jun 2009, Bob Friesenhahn wrote:
On Mon, 15 Jun 2009, Thommy M. wrote:
In most cases compression is not desirable. It consumes CPU and
results in uneven system performance.
IIRC there was a blog about I/O performance with ZFS stating that it was
faster with compression ON as it didn't have to wait for so much data
from the disks.
Hello,
I would like to add one more point to this.
Everyone seems to agree that compression is useful for reducing load on the
disks and the disagreement is about the impact on CPU utilization, right?
What about when the compression is performed in dedicated hardware?
Shouldn't compression be on by default in that case?
On Mon, 15 Jun 2009, Rich Teer wrote:
You actually have that backwards. :-) In most cases, compression is very
desirable. Performance studies have shown that today's CPUs can compress
data faster than it takes for the uncompressed data to be read or written.
Do you have a reference for such studies?
> On Mon, 15 Jun 2009, dick hoogendijk wrote:
>
>> IF at all, it certainly should not be the DEFAULT.
>> Compression is a choice, nothing more.
>
> I respectfully disagree somewhat. Yes, compression should be a
> choice, but I think the default should be for it to be enabled.
I agree that "Comp
On Mon, 15 Jun 2009, Bob Friesenhahn wrote:
> In most cases compression is not desirable. It consumes CPU and results in
> uneven system performance.
You actually have that backwards. :-) In most cases, compression is very
desirable. Performance studies have shown that today's CPUs can compress
data faster than it takes for the uncompressed data to be read or written.
On Mon, 15 Jun 2009, Thommy M. wrote:
In most cases compression is not desirable. It consumes CPU and
results in uneven system performance.
IIRC there was a blog about I/O performance with ZFS stating that it was
faster with compression ON as it didn't have to wait for so much data
from the disks and that the CPU was fast at unpacking data.
On Mon, 15 Jun 2009, dick hoogendijk wrote:
> IF at all, it certainly should not be the DEFAULT.
> Compression is a choice, nothing more.
I respectfully disagree somewhat. Yes, compression should be a
choice, but I think the default should be for it to be enabled.
--
Rich Teer, SCSA, SCNA, SC
On Mon, 15 Jun 2009 22:51:12 +0200
"Thommy M." wrote:
> IIRC there was a blog about I/O performance with ZFS stating that it
> was faster with compression ON as it didn't have to wait for so much
> data from the disks and that the CPU was fast at unpacking data. But
> sure, it uses more CPU (and
* Shannon Fiume (shannon.fi...@sun.com) wrote:
> Hi,
>
> I just installed 2009.06 and found that compression isn't enabled by
> default when filesystems are created. Does it make sense to have an
> RFE open for this? (I'll open one tonight if need be.) We keep telling
> people to turn on compression.
Bob Friesenhahn wrote:
> On Mon, 15 Jun 2009, Shannon Fiume wrote:
>
>> I just installed 2009.06 and found that compression isn't enabled by
>> default when filesystems are created. Does it make sense to have an
>> RFE open for this? (I'll open one tonight if need be.) We keep telling
>> people to turn on compression.
On Mon, 15 Jun 2009, Shannon Fiume wrote:
I just installed 2009.06 and found that compression isn't enabled by
default when filesystems are created. Does it make sense to have an
RFE open for this? (I'll open one tonight if need be.) We keep
telling people to turn on compression. Are there any situations where
turning on compression is not desirable?
Hi,
I just installed 2009.06 and found that compression isn't enabled by default
when filesystems are created. Does it make sense to have an RFE open for this?
(I'll open one tonight if need be.) We keep telling people to turn on
compression. Are there any situations where turning on compression is not
desirable?
>I'll call bull* on that. Microsoft has an admirably simple installation
>and 88% of the market. Apple has another admirably simple installation
>and 10% of the market. Solaris has less than 1% of the market and has
>had a very complex installation process. You can't win that battle by
>increasing installation complexity.
Carson Gaspar wrote:
Richard Elling wrote:
Miles Nordin wrote:
AIUI the later BE's are clones of the first, and not all blocks
will be rewritten, so it's still an issue. no?
In practice, yes, they are clones. But whether it is an issue
depends on what the "issue" is. As I see it, the issue is that someone wants
Richard Elling wrote:
Miles Nordin wrote:
AIUI the later BE's are clones of the first, and not all blocks
will be rewritten, so it's still an issue. no?
In practice, yes, they are clones. But whether it is an issue
depends on what the "issue" is. As I see it, the issue is that
someone wants
Miles Nordin wrote:
"re" == Richard Elling writes:
re> Note: in the Caiman world, this is only an issue for the first
re> BE. Later BEs can easily have other policies. -- richard
AIUI the later BE's are clones of the first, and not all blocks will
be rewritten, so it's still an issue. no?
On Wed, May 6, 2009 at 2:54 AM, wrote:
>
>>On Tue, May 5, 2009 at 6:09 PM, Ellis, Mike wrote:
>>> PS: At one point the old JumpStart code was encumbered, and the
>>> community wasn't able to assist. I haven't looked at the next-gen
>>> jumpstart framework that was delivered as part of the OpenSolaris
>>> SPARC preview.
> "re" == Richard Elling writes:
re> Note: in the Caiman world, this is only an issue for the first
re> BE. Later BEs can easily have other policies. -- richard
AIUI the later BE's are clones of the first, and not all blocks will
be rewritten, so it's still an issue. no?
On Wed, May 6, 2009 at 11:14 AM, Rich Teer wrote:
> On Wed, 6 May 2009, Richard Elling wrote:
>
>> popular interactive installers much more simplified. I agree that
>> interactive installation needs to remain as simple as possible.
>
> How about offering a choice at installation time: "Custom or default?"
This sounds like a good idea to me, but it should be brought up
on the caiman-disc...@opensolaris.org mailing list, since this
is not just, or even primarily, a zfs issue.
Lori
Rich Teer wrote:
On Wed, 6 May 2009, Richard Elling wrote:
popular interactive installers much more simplified. I agree that
interactive installation needs to remain as simple as possible.
On Wed, 6 May 2009, Richard Elling wrote:
> popular interactive installers much more simplified. I agree that
> interactive installation needs to remain as simple as possible.
How about offering a choice at installation time: "Custom or default?"
Those that don't want/need the interactive flexibility
background/doc-link on that new
JumpStart framework?
-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Torrey McMahon
Sent: Tuesday, May 05, 2009 6:38 PM
To: zfs-discuss@opensolaris.org
Subject: [zfs-discuss] Compression/copies on root pool RFE
>On Tue, May 5, 2009 at 6:09 PM, Ellis, Mike wrote:
>> PS: At one point the old JumpStart code was encumbered, and the
>> community wasn't able to assist. I haven't looked at the next-gen
>> jumpstart framework that was delivered as part of the OpenSolaris SPARC
>> preview. Can anyone provide any background/doc-link on that new
>> JumpStart framework?
On Tue, May 5, 2009 at 6:09 PM, Ellis, Mike wrote:
> PS: At one point the old JumpStart code was encumbered, and the
> community wasn't able to assist. I haven't looked at the next-gen
> jumpstart framework that was delivered as part of the OpenSolaris SPARC
> preview. Can anyone provide any background/doc-link on that new
> JumpStart framework?
JumpStart framework?
-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Torrey McMahon
Sent: Tuesday, May 05, 2009 6:38 PM
To: zfs-discuss@opensolaris.org
Subject: [zfs-discuss] Compression/copies on root pool RFE
Before I put one in ...
Before I put one in ... anyone else seen one? Seems we support
compression on the root pool but there is no way to enable it at install
time outside of a custom script you run before the installer. I'm
thinking it should be a real install time option, have a jumpstart
keyword, etc. Same with copies.
Hello Krzys,
Wednesday, November 5, 2008, 5:41:16 AM, you wrote:
K> compression is not supported for rootpool?
K> # zpool create rootpool c1t1d0s0
K> # zfs set compression=gzip-9 rootpool
K> # lucreate -c ufsBE -n zfsBE -p rootpool
K> Analyzing system configuration.
K> ERROR: ZFS pool does not support boot environments
Krzys wrote:
> compression is not supported for rootpool?
>
> # zpool create rootpool c1t1d0s0
> # zfs set compression=gzip-9 rootpool
>
I think gzip compression is not supported on zfs root. Try compression=on.
Regards,
Fajar
compression is not supported for rootpool?
# zpool create rootpool c1t1d0s0
# zfs set compression=gzip-9 rootpool
# lucreate -c ufsBE -n zfsBE -p rootpool
Analyzing system configuration.
ERROR: ZFS pool does not support boot environments
#
Why? Are there any plans to have compression on that disk?
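Per Fajar's reply elsewhere in the thread, gzip is the unsupported part, not
compression as such. An untested sketch of the same sequence with the default
algorithm:
# zpool create rootpool c1t1d0s0
# zfs set compression=on rootpool
# lucreate -c ufsBE -n zfsBE -p rootpool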
> On 9/12/07, Mike DeMarco <[EMAIL PROTECTED]> wrote:
>
> > Striping several disks together with a stripe width that is tuned
> > for your data model is how you could get your performance up.
> > Striping has been left out of the ZFS model for some reason. While
> > it is true that RAIDZ will stripe the data
Mike DeMarco wrote:
> I/O bottlenecks are usually caused by a slow disk or one that has heavy
> workloads reading many small files. Two factors that need to be considered
> are head seek latency and spin latency. Head seek latency is the amount
> of time it takes for the head to move to the track.
On 9/12/07, Mike DeMarco <[EMAIL PROTECTED]> wrote:
> Striping several disks together with a stripe width that is tuned for
> your data model is how you could get your performance up. Striping has
> been left out of the ZFS model for some reason. While it is true that
> RAIDZ will stripe the data
> On 11/09/2007, Mike DeMarco <[EMAIL PROTECTED]> wrote:
> > > I've got 12Gb or so of db+web in a zone on a ZFS
> > > filesystem on a mirrored zpool.
> > > Noticed during some performance testing today that
> > > it's I/O bound but using hardly
> > > any CPU, so I thought turning on compression would be a quick win.
On 11/09/2007, Mike DeMarco <[EMAIL PROTECTED]> wrote:
> > I've got 12Gb or so of db+web in a zone on a ZFS
> > filesystem on a mirrored zpool.
> > Noticed during some performance testing today that
> > it's I/O bound but using hardly
> > any CPU, so I thought turning on compression would be a quick win.
> I've got 12Gb or so of db+web in a zone on a ZFS
> filesystem on a mirrored zpool.
> Noticed during some performance testing today that
> it's I/O bound but using hardly
> any CPU, so I thought turning on compression would be
> a quick win.
If it is I/O bound, won't compression make it worse?
>
On 9/11/07, Dick Davies <[EMAIL PROTECTED]> wrote:
>
> I've got 12Gb or so of db+web in a zone on a ZFS filesystem on a mirrored
> zpool.
> Noticed during some performance testing today that it's I/O bound but
> using hardly
> any CPU, so I thought turning on compression would be a quick win.
>
> I
I've got 12Gb or so of db+web in a zone on a ZFS filesystem on a mirrored zpool.
Noticed during some performance testing today that it's I/O bound but
using hardly
any CPU, so I thought turning on compression would be a quick win.
I know I'll have to copy files for existing data to be compressed, s
On Tue, Aug 22, 2006 at 04:04:50PM -0500, Neil A. Wilson wrote:
> Do both compression and fixed record sizes work together?
Yes.
> Our Directory Server uses a fixed page size (8KB by default) for
> database records, so I'm in the habit of setting the ZFS recordsize to
> equal the database page size.
Do both compression and fixed record sizes work together?
Our Directory Server uses a fixed page size (8KB by default) for
database records, so I'm in the habit of setting the ZFS recordsize to
equal the database page size. However, we also typically use
compression because it often helps improve performance.
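A sketch of that combination (dataset name hypothetical): an 8k recordsize to
match the database page, with compression layered on top:
# zfs create -o recordsize=8k -o compression=on tank/ds
# zfs get recordsize,compression,compressratio tank/ds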