Tom Lane wrote:
> Alvaro Herrera writes:
> > Excerpts from Alvaro Herrera's message of Fri Oct 28 16:47:13 -0300 2011:
> >> BTW we had previous discussions about dropping pg_database's toast
> >> table. Maybe this is a good time to do it, even if there's no risk of
> >> this bug (or the hypotheti
On Fri, Oct 28, 2011 at 3:11 PM, Tom Lane wrote:
> Alvaro Herrera writes:
>> Excerpts from Alvaro Herrera's message of Fri Oct 28 16:47:13 -0300 2011:
>>> BTW we had previous discussions about dropping pg_database's toast
>>> table. Maybe this is a good time to do it, even if there's no risk of
Alvaro Herrera writes:
> Excerpts from Alvaro Herrera's message of Fri Oct 28 16:47:13 -0300 2011:
>> BTW we had previous discussions about dropping pg_database's toast
>> table. Maybe this is a good time to do it, even if there's no risk of
>> this bug (or the hypothetical circularity detoasting
"Kevin Grittner" writes:
> If we made the commit sequence number more generally available,
> incrementing it at the point of visibility change under cover of
> ProcArrayLock, and including the then-current value in a Snapshot
> object when built, would that help with this at all?
No, because we n
Tom Lane wrote:
> The risk factor comes in when we hold a syscache entry across
> transactions; then this guarantee is lost. (In both the original
> example and the pg_proc case above, the second backend is only at
> risk when it starts its transaction after the first one's commit,
> and in bet
Excerpts from Alvaro Herrera's message of Fri Oct 28 16:47:13 -0300 2011:
> BTW we had previous discussions about dropping pg_database's toast
> table. Maybe this is a good time to do it, even if there's no risk of
> this bug (or the hypothetical circularity detoasting problem) showing up
> ther
Excerpts from Tom Lane's message of Fri Oct 28 15:37:43 -0300 2011:
> The main concern I had about detoast before caching is the risk of
> circularity, ie, needing detoastable cache entries in order to figure
> out how to detoast. But I think it's probably okay. The current list
> of catalogs w
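A quick way to see which system catalogs currently have TOAST tables, for readers following along (an illustrative query against pg_class, not part of the thread):
    SELECT c.relname AS catalog, t.relname AS toast_table
    FROM pg_class c
    JOIN pg_namespace n ON n.oid = c.relnamespace
    JOIN pg_class t ON t.oid = c.reltoastrelid
    WHERE n.nspname = 'pg_catalog'
    ORDER BY c.relname;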
Robert Haas writes:
> Wait a minute: I'm confused. What's at issue here, at least AIUI, is
> taking a TOAST pointer and fetching the corresponding value. But that
> must involve reading from the TOAST table, and our usual paradigm for
> reading from system catalogs is (1) take AccessShareLock, (
On Tue, Oct 25, 2011 at 11:24 PM, Tom Lane wrote:
> Even given your recent changes to reduce the overhead of checking for
> sinval messages, I'm not sure that it'd be practical to move the sinval
> message processing to just-before-we-look-up-a-cache-entry.
Wait a minute: I'm confused. What's at
On Wed, Oct 26, 2011 at 4:24 AM, Tom Lane wrote:
> Even given your recent changes to reduce the overhead of checking for
> sinval messages, I'm not sure that it'd be practical to move the sinval
> message processing to just-before-we-look-up-a-cache-entry. Right now,
> we do AcceptInvalidationMe
Robert Haas writes:
> On Tue, Oct 25, 2011 at 9:06 PM, Tom Lane wrote:
>> Ordinarily, sending out sinval messages post-commit is okay because we
>> don't release locks until after that, and we suppose that our locks
>> prevent any other transactions from getting to the point of using
>> syscache
On Tue, Oct 25, 2011 at 9:06 PM, Tom Lane wrote:
> What has happened here is that the second ANALYZE has marked itself
> committed in pg_clog and no longer running in the ProcArray, so VACUUM
> feels entitled to remove toast tuples that the ANALYZE deleted. However,
> the ANALYZE has not yet sent
I believe I have reproduced the behavior described by Andrew Hammond in
http://archives.postgresql.org/pgsql-general/2011-10/msg00928.php
This is using the regression database:
1. In session 1, do
set default_statistics_target TO 1;
analyze tenk1;
(We need the large stats targ
On Mon, Sep 5, 2011 at 1:01 PM, Bruce Momjian wrote:
> Alvaro Herrera wrote:
>> Excerpts from Tom Lane's message of Tue Mar 01 19:03:35 -0300 2011:
>> > Alvaro Herrera writes:
>> > > Strangely, we made pg_database have a toast table, and the only reason
>> > > for this is datacl. Should we creat
Alvaro Herrera wrote:
> Excerpts from Tom Lane's message of Tue Mar 01 19:03:35 -0300 2011:
> > Alvaro Herrera writes:
> > > Strangely, we made pg_database have a toast table, and the only reason
> > > for this is datacl. Should we create toast tables for the remaining
> > > catalogs?
> >
> > As
- Original Message -
From: "Bruce Momjian"
To: "Lou Picciano"
Cc: pgsql-hackers@postgresql.org
Sent: Wednesday, August 31, 2011 10:38:01 PM
Subject: Re: [HACKERS] toast error after pg_upgrade 9.0.4 -> 9.1rc1
Lou Picciano wrote:
> After running
Bruce Momjian wrote:
> Lou Picciano wrote:
> > After running an essentially uneventful* pg_upgrade from 9.0.4 ->
> > 9.1rc1, we are seeing some toast errors logged on the new cluster:
> >
> > All are of this pattern: ERROR: missing chunk number 0 for toast value
> > 130087 in pg_toast_34735
> >
>
Lou Picciano wrote:
> After running an essentially uneventful* pg_upgrade from 9.0.4 ->
> 9.1rc1, we are seeing some toast errors logged on the new cluster:
>
> All are of this pattern: ERROR: missing chunk number 0 for toast value
> 130087 in pg_toast_34735
>
> Have seen the same pattern for a f
After running an essentially uneventful* pg_upgrade from 9.0.4 -> 9.1rc1, we
are seeing some toast errors logged on the new cluster:
All are of this pattern: ERROR: missing chunk number 0 for toast value 130087
in pg_toast_34735
Have seen the same pattern for a few of the databases in the 9.1
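To find which table a reported toast relation such as pg_toast_34735 belongs to, a lookup along these lines should work (an illustrative sketch, not part of the report above; the numeric suffix is normally the owning table's OID):
    SELECT c.oid::regclass AS owning_table, t.relname AS toast_table
    FROM pg_class c
    JOIN pg_class t ON t.oid = c.reltoastrelid
    WHERE t.relname = 'pg_toast_34735';
    -- or, relying on the naming convention:
    SELECT 34735::regclass;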
Excerpts from Tom Lane's message of Tue Mar 01 19:03:35 -0300 2011:
> Alvaro Herrera writes:
> > Strangely, we made pg_database have a toast table, and the only reason
> > for this is datacl. Should we create toast tables for the remaining
> > catalogs?
>
> As I commented on your blog, this is n
Alvaro Herrera writes:
> Strangely, we made pg_database have a toast table, and the only reason
> for this is datacl. Should we create toast tables for the remaining
> catalogs?
As I commented on your blog, this is nonsense. pg_database has a TOAST
table because we thought it might need one for
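For context, a rough way to check how much space datacl actually takes on a given cluster (illustrative only; pg_column_size reports the stored, possibly compressed, size):
    SELECT datname, pg_column_size(datacl) AS datacl_bytes
    FROM pg_database
    ORDER BY pg_column_size(datacl) DESC NULLS LAST;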
Hi,
Someone on IRC recently noticed that you can't grant USAGE privileges on
a table to a large number of roles. (My experiment says 2466 works,
2467 doesn't).
Of course, this is wrong and all that. I wrote a blog article about
this:
http://www.commandprompt.com/blogs/alvaro_herrera/2011/03/gra
Hmm,
ok, then I don't know exactly what happens. I have some values in my composite
type. The size of the index tuple (if coming from one table and being copied to
the target table) is about 15x bytes. Otherwise, if I insert the same tuple
directly, it is about 12kb. So, I don't know exactly what
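One way to see whether a stored value ended up compressed or out of line is to compare its on-disk size with its raw length; this is a sketch with placeholder names (my_table, doc), not Carsten's actual schema:
    SELECT pg_column_size(doc) AS stored_bytes,  -- on-disk size, possibly compressed
           octet_length(doc)   AS raw_bytes      -- uncompressed length
    FROM my_table;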
Carsten Kropf writes:
> However, it is a little bit strange, that I get toasted values (when
> inserting from another table) and untoasted values, if I insert items
> directly. Could anybody please explain this to me?
Huh? An index will never ever get passed an externally-toasted value.
See th
Oh,
thanks. I didn't know about index tables not having access to associated toast
values. The index access method is (at least in the textual part) similar to
GIN, however, I needed to do some changes to it. Saving the whole document is
actually only important for vacuum. I think, I will find s
Carsten Kropf writes:
> I have a strange issue using a custom built index structure. My index access
> method supports a document type composed of words (as tsvector) and points
> (1-dimensional arrays of them). For internal reasons, I have to save the
> documents as a whole inside my structure (f
Hi *,
I have a strange issue using a custom built index structure. My index access
method supports a document type composed of words (as tsvector) and points
(1-dimensional arrays of them). For internal reasons, I have to save the
documents as a whole inside my structure (for proper reorganisations
On Mon, 2010-05-03 at 23:36 -0400, Jan Wieck wrote:
> Suppose we had something similar to the NULL value bitmap, specifying
> plain or compressed values (not TOAST references), that are moved to a
> shadow tuple inside the toast table. Suppose further we had some
> statistics about how often att
On 5/2/2010 10:34 AM, Tom Lane wrote:
> Simon Riggs writes:
> > Not commenting further on that patch, but I notice that when we UPDATE
> > the toasting algorithm takes no account of the available freespace on
> > the current block. If we are updating and the space available would make
> > a difference to the row
On Sun, 2010-05-02 at 10:34 -0400, Tom Lane wrote:
> Simon Riggs writes:
> > Not commenting further on that patch, but I notice that when we UPDATE
> > the toasting algorithm takes no account of the available freespace on
> > the current block. If we are updating and the space available would make
Simon Riggs writes:
> Not commenting further on that patch, but I notice that when we UPDATE
> the toasting algorithm takes no account of the available freespace on
> the current block. If we are updating and the space available would make
> a difference to the row length chosen, it seems like it
On Wed, 2009-07-22 at 01:21 +, Tom Lane wrote:
> Tweak TOAST code so that columns marked with MAIN storage strategy are
> not forced out-of-line unless that is necessary to make the row fit on a
> page. Previously, they were forced out-of-line if needed to get the row
> down to the default ta
Simon Riggs wrote:
> Why do we protect against this?
>
> postgres=# alter table test set (nonexistent = on);
> ERROR: unrecognized parameter "nonexistent"
Eh? The parameter doesn't exist. It's not exactly the same as defining
a value for a parameter that exists but is unused.
--
Alvaro Herr
Simon Riggs wrote:
> Also, this seems not to work?
>
> postgres=# alter table test set (my.expectation = on);
> ERROR: unrecognized parameter namespace "my"
>
> I thought we had enabled custom table options in this release?
> Or was that deferred, or even rejected completely?
IIRC you can defi
On Wed, 2010-04-28 at 21:11 -0400, Tom Lane wrote:
> Simon Riggs writes:
> > It's possible to set toast reloptions for tables that don't have toast
> > tables at all. e.g.
>
> > create table test (c1 integer);
> > ALTER TABLE test SET (toast.autovacuum_enabled = off);
>
> > Why?
>
> 1. Why no
Simon Riggs writes:
> It's possible to set toast reloptions for tables that don't have toast
> tables at all. e.g.
> create table test (c1 integer);
> ALTER TABLE test SET (toast.autovacuum_enabled = off);
> Why?
1. Why not?
2. They might have toast tables later, after an ALTER ADD COLUMN
fo
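A minimal sketch of that point: reltoastrelid in pg_class stays zero until a toastable column forces a TOAST table to be created.
    CREATE TABLE test (c1 integer);
    SELECT reltoastrelid FROM pg_class WHERE relname = 'test';  -- 0: no TOAST table yet
    ALTER TABLE test ADD COLUMN note text;
    SELECT reltoastrelid FROM pg_class WHERE relname = 'test';  -- now a real OID
    ALTER TABLE test SET (toast.autovacuum_enabled = off);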
It's possible to set toast reloptions for tables that don't have toast
tables at all. e.g.
create table test (c1 integer);
ALTER TABLE test SET (toast.autovacuum_enabled = off);
Why?
--
Simon Riggs www.2ndQuadrant.com
Alvaro Herrera wrote on Fri, 06 Mar 2009 at 10:32 -0300:
>
> The servers are running 8.3.5.
>
> Does this ring a bell for anyone?
look here:
http://archives.postgresql.org/pgsql-bugs/2008-11/msg00113.php
It looks like there could be a problem with btree WAL.
Zdenek
Alvaro Herrera writes:
> 14903 2009-02-28 22:22:02 EST kangaroo app 49a9fe2e.3a37 ERROR: index
> "pg_toast_42299_index" contains unexpected zero page at block 23768
>
> There were actually several different instances of the first error
> involving different toast values.
Do you know how big this
We have a customer running a very high-velocity database. We have set
up a PITR replica that's restoring correctly using JD's PITRtools and
pg_standby.
However, upon stopping the recovery and starting it for production (for
"switchover") it promptly gets into these problems:
14895 2009-02-28 22:
Zdenek Kotala wrote:
> Alvaro Herrera wrote:
>> Heikki Linnakangas wrote:
>>
>>> Hmm, you're right. I think it can be made to work by storing the
>>> *end* offset of each chunk. To find the chunk containing offset X,
>>> search for the first chunk with end_offset > X.
>>
>> FWIW I'm trying
Tom Lane wrote:
It's kind of annoying to introduce a search when it's so seldom needed,
though. How about something like
    /* fast path for common case */
    if (key[i].sk_attno == irel->rd_index->indkey.values[i])
        key[i].sk_attno = i + 1;
    else
        ... sea
Heikki Linnakangas <[EMAIL PROTECTED]> writes:
> Basically, this comment and code in genam.c:
> ...
> is wrong, because it assumes that there's only one scankey per index
> column, but that's not true for toast_fetch_datum_slice(), which uses
> two scankeys for the chunkid, to fetch a range. Atta
A bug was introduced a while ago by this patch:
> commit 447f7364dd7227a32b58a2aff24f587dd7d7051a
> Author: Tom Lane <[EMAIL PROTECTED]>
> Date: Sat Apr 12 23:14:21 2008 +
>
> Create new routines systable_beginscan_ordered, systable_getnext_ordered,
> systable_endscan_ordered that
[EMAIL PROTECTED] ("Joshua D. Drake") writes:
> Alvaro Herrera wrote:
>> Joshua D. Drake wrote:
>>> -BEGIN PGP SIGNED MESSAGE-
>>> Hash: SHA1
>>>
>>> Heikki Linnakangas wrote:
Joshua D. Drake wrote:
>
>>> I guess my point is, if the patch looks good and does not appear
>>> to hurt anyt
> My next suggestion would be to leave EXTERN_TUPLES_PER_PAGE
> as is, but:
> Split data wider than a page into page sized chunks as long
> as they fill whole pages.
> Split the rest with EXTERN_TUPLES_PER_PAGE (4) as now.
> This would not waste more space than currently, but improve
> performa
"Bruce Momjian" <[EMAIL PROTECTED]> writes:
> Gregory Stark wrote:
>>
>> Is this still testing with all data fitting in RAM?
>
> Yes. Having things out of RAM is going to make access even slower, but
> it is going to allow the heap to be in RAM more often.
It would let us measure the actual imp
Gregory Stark wrote:
> "Bruce Momjian" <[EMAIL PROTECTED]> writes:
>
> > Well, it is summarized here:
> >
> > http://momjian.us/expire/TOAST/SUMMARY.html
> >
> > It made non-TOAST access 2x faster, but TOAST 7x slower, and that seemed
> > like a good compromise.
>
> Is this still testing with
> > While I agree that 2 might be a good compromise with low risk for
> > now, I think that toasting all rows down to ~512 bytes is too narrowly
> > targeted at not reading wider columns.
>
> Well, it is summarized here:
>
> http://momjian.us/expire/TOAST/SUMMARY.html
>
> It made non-T
"Bruce Momjian" <[EMAIL PROTECTED]> writes:
> Well, it is summarized here:
>
> http://momjian.us/expire/TOAST/SUMMARY.html
>
> It made non-TOAST access 2x faster, but TOAST 7x slower, and that seemed
> like a good compromise.
Is this still testing with all data fitting in RAM?
--
Gregor
Zeugswetter Andreas ADI SD wrote:
>
> > > No, you misunderstood. Bruce was suggesting changing the target to 512.
> > > That means if a row is wider than ~2k, toaster will try to toast until
> > > the base row is ~512 bytes. I would not do that part for 8.3.
> >
> > OK, what do you su
> > No, you misunderstood. Bruce was suggesting changing the target to 512.
> > That means if a row is wider than ~2k, toaster will try to toast until
> > the base row is ~512 bytes. I would not do that part for 8.3.
>
> OK, what do you suggest for 8.3? Attached are my suggestion
> to use
Zeugswetter Andreas ADI SD wrote:
>
> > > The big question is do we want to drop the target tuple size down to
> > > 512, and increase the chunk size to 8k for 8.3? Dropping the tuple
> > > size down to 512 is going to give us some smaller TOAST values to fill
> > > in free space created
> > The big question is do we want to drop the target tuple size down to
> > 512, and increase the chunk size to 8k for 8.3? Dropping the tuple
> > size down to 512 is going to give us some smaller TOAST values to fill
> > in free space created by the 8k chunk size, assuming you have both
>
"Jim C. Nasby" <[EMAIL PROTECTED]> writes:
> On Fri, Jun 01, 2007 at 01:50:12PM -0400, Bruce Momjian wrote:
>> The big question is do we want to drop the target tuple size down to
>> 512, and increase the chunk size to 8k for 8.3?
> If we do that people could see their disk space usage increase by
On Fri, Jun 01, 2007 at 01:50:12PM -0400, Bruce Momjian wrote:
> I think the long-term solution is to go to a 2k/8k fragment/block model,
> but that isn't going to happen for 8.3.
There might well have been lessons learned since UFS (anyone know what
ZFS does in this regard?), but I agree that we
Zeugswetter Andreas ADI SD wrote:
> It is good. It shows, that we even see a small advantage in the
> everything cached case.
>
> What we don't have yet is numbers for whether EXTERN_TUPLES_PER_PAGE=1
> substantially increases the toast table size for real life scenarios,
> what happens in the wor
Gregory Stark wrote:
> "Gregory Stark" <[EMAIL PROTECTED]> writes:
>
> > "Bruce Momjian" <[EMAIL PROTECTED]> writes:
> >
> >> shared_buffers again was 32MB so all the data was in memory.
> >
> > The case where all the data is in memory is simply not interesting. The cost
> > of TOAST is the random
> I tested EXTERN_TUPLES_PER_PAGE for values 4(default), 2, and 1:
>
> 4 15.596
> 2 15.197
> 1 14.6
>
> which is basically a 3% decrease from 4->2 and 2->1. The
> test script and result are here:
>
> http://momjian.us/expire/TOAST2/
>
> shared_buffer
"Gregory Stark" <[EMAIL PROTECTED]> writes:
> "Bruce Momjian" <[EMAIL PROTECTED]> writes:
>
>> shared_buffers again was 32MB so all the data was in memory.
>
> The case where all the data is in memory is simply not interesting. The cost
> of TOAST is the random access seeks it causes. You seem to
"Bruce Momjian" <[EMAIL PROTECTED]> writes:
> shared_buffers again was 32MB so all the data was in memory.
The case where all the data is in memory is simply not interesting. The cost
of TOAST is the random access seeks it causes. You seem to be intentionally
avoiding testing the precise thing we
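As a sanity check along those lines, one can compare the size of the test table (heap plus TOAST plus indexes) against shared_buffers before timing anything; toast_test is a placeholder name, not Bruce's actual test table:
    SHOW shared_buffers;
    SELECT pg_size_pretty(pg_total_relation_size('toast_test'));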
I tested EXTERN_TUPLES_PER_PAGE for values 4(default), 2, and 1:
4 15.596
2 15.197
1 14.6
which is basically a 3% decrease from 4->2 and 2->1. The test script
and result are here:
http://momjian.us/expire/TOAST2/
shared_buffers again was 32MB
Bruce Momjian wrote:
> What you will see is that pushing TEXT to a TOAST column allows quick
> access to non-TOAST values and single-row TOAST values, but accessing
> all TOAST columns is slower than accessing them in the heap, by a factor
> of 3-18x.
Looking at the chart, it seems 512 is the proper bre
"Bruce Momjian" <[EMAIL PROTECTED]> writes:
> Uh, am I supposed to be running more TOAST tests? Would someone explain
> what they want tested?
If you want my opinion I would say we need two tests:
1) For TOAST_TUPLE_TARGET:
We need to run the test scripts you have already for sizes that cause
Tom Lane wrote:
> Gregory Stark <[EMAIL PROTECTED]> writes:
> > Whereas if you set toast_tuples_per_page to 8k then the only option for
> > Postgres will be to put each datum in its own page and waste 1-3k on every
> > page.
>
> No, because actually the code is designed to make the toast chunk siz
> > Whereas if you set toast_tuples_per_page to 8k then the only option
> > for Postgres will be to put each datum in its own page and waste 1-3k
> > on every page.
>
> No, because actually the code is designed to make the toast
> chunk size just enough less than 8K that the tuples fit.
He
"Tom Lane" <[EMAIL PROTECTED]> writes:
> The shorter-than-normal tuples carrying the last chunk of any particular
> datum are going to result in wasted space to the extent that we can't
> pack them together on a page, but that's true now. Right now, if you
> have a large toasted datum, it mostly
Gregory Stark <[EMAIL PROTECTED]> writes:
> Whereas if you set toast_tuples_per_page to 8k then the only option for
> Postgres will be to put each datum in its own page and waste 1-3k on every
> page.
No, because actually the code is designed to make the toast chunk size
just enough less than 8K t
"Tom Lane" <[EMAIL PROTECTED]> writes:
> "Zeugswetter Andreas ADI SD" <[EMAIL PROTECTED]> writes:
>> My expectation would be, that we want to allow a toast tuple to fill a
>> whole page (TOAST_TUPLES_PER_PAGE = 1),
I thought that previously but then I started thinking maybe that's not true.
Pic
"Zeugswetter Andreas ADI SD" <[EMAIL PROTECTED]> writes:
> My expectation would be, that we want to allow a toast tuple to fill a
> whole page (TOAST_TUPLES_PER_PAGE = 1),
I've been wondering about that too. It certainly needs to be
experimented with, independently of TOAST_TUPLE_THRESHOLD, now t
> I reran the tests with hashtext(), and created a SUMMARY.HTML chart:
>
> http://momjian.us/expire/TOAST/
>
> What you will see is that pushing TEXT to a TOAST column
> allows quick access to non-TOAST values and single-row TOAST
> values, but accessing all TOAST columns is slower than
Tom Lane wrote:
> Bruce Momjian <[EMAIL PROTECTED]> writes:
> > I reran the tests with hashtext(), and created a SUMMARY.HTML chart:
> > http://momjian.us/expire/TOAST/
>
> I don't understand what the numbers in this chart are?
They are taken from the test script and output files that are als
Bruce Momjian <[EMAIL PROTECTED]> writes:
> I reran the tests with hashtext(), and created a SUMMARY.HTML chart:
> http://momjian.us/expire/TOAST/
I don't understand what the numbers in this chart are?
regards, tom lane
Bruce Momjian wrote:
> Gregory Stark wrote:
> > "Bruce Momjian" <[EMAIL PROTECTED]> writes:
> >
> > >> No, we did substring() too :)
> > >
> > > Uh, I looked at text_substring(), and while there is an optimization to
> > > do character counting for encoding length == 1, it is still accessing
> > >
Gregory Stark wrote:
> "Bruce Momjian" <[EMAIL PROTECTED]> writes:
>
> >> No, we did substring() too :)
> >
> > Uh, I looked at text_substring(), and while there is an optimization to
> > do character counting for encoding length == 1, it is still accessing
> > the data.
>
> Sure but it'll only a
"Bruce Momjian" <[EMAIL PROTECTED]> writes:
>> No, we did substring() too :)
>
> Uh, I looked at text_substring(), and while there is an optimization to
> do character counting for encoding length == 1, it is still accessing
> the data.
Sure but it'll only access the first chunk. There are two ch
Gregory Stark wrote:
> "Bruce Momjian" <[EMAIL PROTECTED]> writes:
>
> > Gregory Stark wrote:
> >
> >> Is your database initialized with C locale? If so then length(text) is
> >> optimized to not have to detoast:
> >>
> >>if (pg_database_encoding_max_length() == 1)
> >>PG_RETURN_I
Alvaro Herrera wrote:
> Bruce Momjian wrote:
>
> > My test uses random data, which I figured was a close to real-world as I
> > could get, and I have a test that makes sure the data was pushed to the
> > TOAST table. Should I still try EXTERNAL?
>
> My point is that you probably want to measure
Bruce Momjian wrote:
> My test uses random data, which I figured was a close to real-world as I
> could get, and I have a test that makes sure the data was pushed to the
> TOAST table. Should I still try EXTERNAL?
My point is that you probably want to measure separately the effect of
compression
"Bruce Momjian" <[EMAIL PROTECTED]> writes:
> Gregory Stark wrote:
>
>> Is your database initialized with C locale? If so then length(text) is
>> optimized to not have to detoast:
>>
>> if (pg_database_encoding_max_length() == 1)
>> PG_RETURN_INT32(toast_raw_datum_size(str) - VA
Alvaro Herrera wrote:
> > > >> > Strangely, 128 bytes seems to be the break-even point for TOAST and
> > > >> > non-TOAST, even for sequential scans of the entire heap touching all
> > > >> > long row values. I am somewhat confused why TOAST has faster access
> > > >> > than inline heap data.
> >
Bruce Momjian wrote:
> Gregory Stark wrote:
> > "Bruce Momjian" <[EMAIL PROTECTED]> writes:
> >
> > > Gregory Stark wrote:
> > >> "Bruce Momjian" <[EMAIL PROTECTED]> writes:
> > >>
> > >> > I tested TOAST using a method similar to the above method against CVS
> > >> > HEAD, with default shared_bu
Gregory Stark wrote:
> "Bruce Momjian" <[EMAIL PROTECTED]> writes:
>
> > Gregory Stark wrote:
> >> "Bruce Momjian" <[EMAIL PROTECTED]> writes:
> >>
> >> > I tested TOAST using a method similar to the above method against CVS
> >> > HEAD, with default shared_buffers = 32MB and no assert()s. I cre
"Gregory Stark" <[EMAIL PROTECTED]> writes:
> Is your database initialized with C locale? If so then length(text) is
> optimized to not have to detoast:
>
> if (pg_database_encoding_max_length() == 1)
> PG_RETURN_INT32(toast_raw_datum_size(str) - VARHDRSZ);
Of course I got tha
"Bruce Momjian" <[EMAIL PROTECTED]> writes:
> Gregory Stark wrote:
>> "Bruce Momjian" <[EMAIL PROTECTED]> writes:
>>
>> > I tested TOAST using a method similar to the above method against CVS
>> > HEAD, with default shared_buffers = 32MB and no assert()s. I created
>> > backends with power-of-2
Gregory Stark wrote:
> "Bruce Momjian" <[EMAIL PROTECTED]> writes:
>
> > I tested TOAST using a method similar to the above method against CVS
> > HEAD, with default shared_buffers = 32MB and no assert()s. I created
> > backends with power-of-2 settings for TOAST_TUPLES_PER_PAGE (4 (default),
> >
"Bruce Momjian" <[EMAIL PROTECTED]> writes:
> I tested TOAST using a method similar to the above method against CVS
> HEAD, with default shared_buffers = 32MB and no assert()s. I created
> backends with power-of-2 settings for TOAST_TUPLES_PER_PAGE (4 (default),
> 8, 16, 32, 64) which gives TOAST/
[EMAIL PROTECTED] (Bruce Momjian) writes:
> The results are here:
>
> http://momjian.us/expire/TOAST/
I'll take a look and see if there's anything further it makes sense
for me to try testing. Thanks for following up so quickly; what with
the cold I have had, I haven't yet gotten back to th
bruce wrote:
> > > * Re: [HACKERS] Modifying TOAST thresholds /Tom Lane/
> > >
> > > At this point it seems nothing will be done about this issue for 8.3.
> >
> > I'm not sure anyone has an idea how to test it. TPCC isn't really useful
> > because it has a fixed size (500 byte) string buffer. Per
On Wed, 13 Dec 2006 11:29:44 +, "Simon Riggs" <[EMAIL PROTECTED]> wrote:
> Is there a big reason why TOAST tables are called such cryptic names?
>
> e.g. pg_toast.pg_toast_16399
>
> Wouldn't it be more pleasant to have them called the same thing as their
> parent
>
> e.g. pg_toast._toast
>
W
Is there a big reason why TOAST tables are called such cryptic names?
e.g. pg_toast.pg_toast_16399
Wouldn't it be more pleasant to have them called the same thing as their
parent
e.g. pg_toast._toast
This would be very convenient for most purposes, though it would mean
we'd have to do something
"Josh Berkus" wrote
> > While I'm waiting to figure out how to get the size of the toast table,
at
> > least I can provide the speed of query with/without assumed compression
on
> > the 6K text columns.
>
> Check out the table_size view in the newsysviews project. Andrew computed
the
> regular,
Luke,
> While I'm waiting to figure out how to get the size of the toast table, at
> least I can provide the speed of query with/without assumed compression on
> the 6K text columns.
Check out the table_size view in the newsysviews project. Andrew computed the
regular, toast, and index sizes as
While I'm waiting to figure out how to get the size of the toast table, at
least I can provide the speed of query with/without assumed compression on
the 6K text columns.
To ensure that we're actually accessing the data in the rows, I do a regexp
query on the TOASTed rows:
mpptestdb=# select coun
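For reference, the toast table size can also be read through pg_class.reltoastrelid on reasonably recent servers; my_wide_table is a placeholder:
    SELECT c.relname,
           pg_size_pretty(pg_relation_size(c.oid::regclass))            AS heap_size,
           pg_size_pretty(pg_relation_size(c.reltoastrelid::regclass))  AS toast_size
    FROM pg_class c
    WHERE c.relname = 'my_wide_table';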
Josh,
On 2/26/06 8:04 PM, "Josh Berkus" wrote:
> Check out SET STORAGE.
I just altered the MIVP data generator in Bizgres MPP to produce the usual
15 column table but with a 6K row size. You'd only expect a few tens of
bytes variance around the 6K, and the data is randomly chosen words from a
Luke,
> As Jim pointed out, we would need a real test to confirm the behavior,
> I'm not yet acquainted with the toast compression, so it's harder for me
> to compose a real test.
Check out SET STORAGE.
--
--Josh
Josh Berkus
Aglio Database Solutions
San Francisco
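SET STORAGE is per column; a minimal sketch with placeholder names (docs, body) showing the options:
    -- EXTENDED (default for text): out of line and compressed
    -- EXTERNAL: out of line, never compressed
    -- MAIN: compressed, moved out of line only as a last resort
    -- PLAIN: neither compressed nor stored out of line
    ALTER TABLE docs ALTER COLUMN body SET STORAGE EXTERNAL;
EXTERNAL also makes substring() on large values cheaper, since slices can be fetched without decompressing the whole datum.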
Hannu,
On 2/26/06 12:19 PM, "Hannu Krosing" <[EMAIL PROTECTED]> wrote:
>> On DBT-3 data, I've just run some tests meant to simulate the speed
>> differences of compression versus native I/O. My thought is that an
>> external use of gzip on a binary dump file should be close to the speed of
>> LZ
On Sun, 2006-02-26 at 09:31, Luke Lonergan wrote:
> Jim,
>
> On 2/26/06 8:00 AM, "Jim C. Nasby" <[EMAIL PROTECTED]> wrote:
>
> > Any idea on how decompression time compares to IO bandwidth? In other
> > words, how long does it take to decompress 1MB vs read that 1MB vs read
> >
Luke Lonergan wrote:
> Jim,
>
> On 2/26/06 10:37 AM, "Jim C. Nasby" <[EMAIL PROTECTED]> wrote:
>
> > So the cutover point (on your system with very fast IO) is 4:1
> > compression (is that 20 or 25%?).
>
> Actually the size of the gzipp'ed binary file on disk was 65MB, compared to
> 177.5MB unco
Jim,
On 2/26/06 10:37 AM, "Jim C. Nasby" <[EMAIL PROTECTED]> wrote:
> So the cutover point (on your system with very fast IO) is 4:1
> compression (is that 20 or 25%?).
Actually the size of the gzipp'ed binary file on disk was 65MB, compared to
177.5MB uncompressed, so the compression ratio is 3