On 18-1-2007 23:11 Tom Lane wrote:
Increase work_mem? It's not taking the hash because it thinks it won't
fit in memory ...
When I increase it to 128MB in the session (an arbitrarily selected,
relatively large value), it indeed switches to the other plan.
Best regards,
Arjen
--
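For anyone wanting to try the same experiment, the session-level change
Arjen describes looks like this (a minimal sketch; only the 128MB value
comes from the message above):

    -- Session-local override; resets when the connection closes.
    -- 8.2 accepts unit suffixes such as MB here.
    SET work_mem = '128MB';
    -- Re-run EXPLAIN on the query to see whether the planner now
    -- chooses the hash-based plan.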
Hi Brian,
On Thu, 18 Jan 2007, Brian Hurt wrote:
> Is there any experience with Postgresql and really huge tables? I'm
> talking about terabytes (plural) here in a single table. Obviously the
> table will be partitioned, and probably spread among several different
> file systems. Any other tricks I should know about?
Chris,
On 1/18/07 1:42 PM, "Chris Mair" <[EMAIL PROTECTED]> wrote:
> A lot of data, but not a lot of records... I don't know if that's
> valid. I guess the people at Greenplum and/or Sun have more exciting
> stories ;)
You guess correctly :-)
Given that we're Postgres 8.2, etc compatible, that
Arjen van der Meijden <[EMAIL PROTECTED]> writes:
> PS, In case any of the planner-hackers are reading, here are the plans
> of the first two queries, just to see if something can be done to
> decrease the differences between them.
Increase work_mem? It's not taking the hash because it thinks it won't
fit in memory ...
Well - it hadn't run on any table in over 24 hours (according to
pg_stat_user_tables). My tables are constantly being inserted into and
deleted from, and the autovacuum settings are pretty aggressive. I also
had not seen the autovac process running in the past 24 hours (although
I wasn't watching
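A sketch of the kind of check described here: on 8.2,
pg_stat_user_tables records when each table was last vacuumed and
analyzed, both manually and by autovacuum.

    -- Shows the last (auto)vacuum and (auto)analyze time per table.
    SELECT relname, last_vacuum, last_autovacuum,
           last_analyze, last_autoanalyze
    FROM pg_stat_user_tables
    ORDER BY relname;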
Brian Hurt <[EMAIL PROTECTED]> writes:
> Is there any experience with Postgresql and really huge tables? I'm
> talking about terabytes (plural) here in a single table.
The 2MASS sky survey point-source catalog
http://www.ipac.caltech.edu/2mass/releases/allsky/doc/sec2_2a.html
is 470 million rows
Is there any experience with Postgresql and really huge tables? I'm
talking about terabytes (plural) here in a single table. Obviously the
table will be partitioned, and probably spread among several different
file systems. Any other tricks I should know about?
We have a problem of that f
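For context, partitioning on 8.2 means table inheritance plus CHECK
constraints that the planner can use once constraint_exclusion is
enabled; a minimal sketch (table names, columns, and date ranges are
illustrative, not from the thread):

    -- Parent table holds no data itself; children carry CHECK
    -- constraints usable when constraint_exclusion = on.
    CREATE TABLE measurements (
        id      bigint NOT NULL,
        logdate date   NOT NULL,
        payload text
    );

    CREATE TABLE measurements_2007m01 (
        CHECK (logdate >= DATE '2007-01-01'
           AND logdate <  DATE '2007-02-01')
    ) INHERITS (measurements);

Individual child tables can be placed on different file systems by
creating them in separate tablespaces.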
"Jeremy Haile" <[EMAIL PROTECTED]> writes:
> No tables have been vacuumed or analyzed today. I had thought that this
> problem was due to my pg_autovacuum changes, but perhaps not. I
> restarted PostgreSQL (in production - yikes). About a minute after being
> restarted, the autovac process fired
On 18-1-2007 18:28 Jeremy Haile wrote:
I once had a query which would operate on a recordlist and
see whether there were any gaps larger than 1 between consecutive
primary keys.
Would you mind sharing the query you described? I am attempting to do
something similar now.
Well it was over a
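Arjen's query itself is cut off above; as one possible reconstruction
(not his original), here is an 8.2-compatible way to find gaps larger
than 1 between consecutive primary keys, assuming a table recordlist
with an integer primary key id:

    -- For each id, find the next-higher id via a self-join (8.2
    -- predates window functions); report pairs more than 1 apart.
    SELECT t1.id AS gap_starts_after, MIN(t2.id) AS gap_ends_before
    FROM recordlist t1
    JOIN recordlist t2 ON t2.id > t1.id
    GROUP BY t1.id
    HAVING MIN(t2.id) - t1.id > 1;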
On Thu, 2007-01-18 at 14:31, Brian Hurt wrote:
> Is there any experience with Postgresql and really huge tables? I'm
> talking about terabytes (plural) here in a single table. Obviously the
> table will be partitioned, and probably spread among several different
> file systems. Any other tricks I should know about?
Jeremy Haile wrote:
Also, are other auto-vacuums and auto-analyzes showing up in the
pg_stats table? Maybe it's a stats system issue.
No tables have been vacuumed or analyzed today. I had thought that this
problem was due to my pg_autovacuum changes, but perhaps not. I
restarted PostgreSQL
Brian Hurt wrote:
> Is there any experience with Postgresql and really huge tables? I'm
> talking about terabytes (plural) here in a single table. Obviously the
> table will be partitioned, and probably spread among several different
> file systems. Any other tricks I should know about?
>
> We
Is there any experience with Postgresql and really huge tables? I'm
talking about terabytes (plural) here in a single table. Obviously the
table will be partitioned, and probably spread among several different
file systems. Any other tricks I should know about?
We have a problem of that for
> Unless it's just a bug, my only guess is that autovacuum may be getting
> busy at times (vacuuming large tables for example) and hasn't had a
> chance to even look at that table for a while, and by the time it gets
> to it, there have been tens of thousands of inserts. Does that sound
> plausible?
Jeremy Haile wrote:
I changed the table-specific settings so that the ANALYZE base threshold
was 5000 and the ANALYZE scale factor was 0. According to the documented
formula: analyze threshold = analyze base threshold + analyze scale
factor * number of tuples, I assumed that this would cause the table to
be analyzed after every 5000 row changes.
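Plugging those settings into the formula gives a fixed threshold:
analyze threshold = 5000 + 0 * number of tuples = 5000, i.e. an ANALYZE
after every 5000 row changes regardless of table size. On 8.2 the
table-specific settings live in the pg_autovacuum system catalog; a
hedged sketch of such an entry (my_big_table is a stand-in name, and -1
means "fall back to the global default"):

    INSERT INTO pg_autovacuum
        (vacrelid, enabled, vac_base_thresh, vac_scale_factor,
         anl_base_thresh, anl_scale_factor, vac_cost_delay,
         vac_cost_limit, freeze_min_age, freeze_max_age)
    VALUES
        ('my_big_table'::regclass, true, -1, -1, 5000, 0, -1, -1, -1, -1);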
Some of my very large tables (10 million rows) need to be analyzed by
autovacuum on a frequent basis. Rather than specifying this as a
percentage of table size + base threshold, I wanted to specify it as an
explicit number of rows.
I changed the table-specific settings so that the ANALYZE base threshold
> I once had a query which would operate on a recordlist and
> see whether there were any gaps larger than 1 between consecutive
> primary keys.
Would you mind sharing the query you described? I am attempting to do
something similar now.
On 18-1-2007 17:20 Scott Marlowe wrote:
Besides that, MySQL rewrites the entire table for most table-altering
statements you do (including indexes).
Note that this applies to the MyISAM table type. InnoDB works quite
differently. It is more like pgsql in behaviour, and is an MVCC storage
engine.
On Thu, 2007-01-18 at 04:24, Arjen van der Meijden wrote:
> On 18-1-2007 0:37 Adam Rich wrote:
> > 4) Complex queries that might take advantage of the MySQL "Query Cache"
> > since the base data never changes
>
> Have you ever compared MySQL's performance with complex queries to
> PostgreSQL's? I once had a query which would operate on a recordlist
> and see whether there were any gaps larger than 1 between consecutive
> primary keys.
On Wed, 2007-01-17 at 18:27, Steve wrote:
> > Generally speaking, once you've gotten to the point of swapping, even a
> > little, you've gone too far. A better approach is to pick some
> > conservative number, like 10-25% of your RAM for shared_buffers, and 1
> > gig or so for maintenance_work_mem
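As a rough illustration of that advice in postgresql.conf terms (the
machine size and exact values are assumed, not taken from the thread):

    # conservative starting point on a hypothetical 4 GB machine
    shared_buffers = 1GB             # roughly 25% of RAM
    maintenance_work_mem = 1GB       # for VACUUM, CREATE INDEX, etc.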
You will need to properly tune the VACUUM and ANALYZE thresholds for
the autovacuum process, so that you do not need to increase
max_fsm_pages as often...
-
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)
On 1/18/07, Bill Moran <[EMAIL PROTECTED]> wrote:
In response to
In response to "Gauri Kanekar" <[EMAIL PROTECTED]>:
> On 1/18/07, Michael Glaesemann <[EMAIL PROTECTED]> wrote:
> >
> > On Jan 18, 2007, at 22:24 , Gauri Kanekar wrote:
> >
> > > is autovacuum similar to vacuum full analyse verbose?
> >
> > http://www.postgresql.org/docs/8.2/interactive/routine-vacuuming.html#AUTOVACUUM
Hi
Thanks.
We have autovacuum ON, but the Postgres server still warns us to
increase the max_fsm_pages value.
Does autovacuum release space after it finishes?
How can we tackle this?
On 1/18/07, Michael Glaesemann <[EMAIL PROTECTED]> wrote:
On Jan 18, 2007, at 22:24 , Gauri Kanekar wrote:
> is autovacuum similar to vacuum full analyse verbose?
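On the max_fsm_pages question: a database-wide VACUUM VERBOSE on 8.2
reports in its final lines how many free-space-map page slots are
needed versus what max_fsm_pages allows, which is one way to size the
setting (a sketch of the procedure, not from the thread):

    -- Run as superuser; the closing INFO lines summarize free space
    -- map usage for the whole cluster.
    VACUUM VERBOSE;
    -- If the needed page slots exceed max_fsm_pages, raise
    -- max_fsm_pages in postgresql.conf and restart (on 8.2 the
    -- setting only takes effect at server start).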
On Jan 18, 2007, at 22:24 , Gauri Kanekar wrote:
is autovacuum similar to vacuum full analyse verbose?
http://www.postgresql.org/docs/8.2/interactive/routine-vacuuming.html#AUTOVACUUM
Apparently, no FULL, no VERBOSE (which is only really useful if you
want to see the results, not for routine use).
Hi List,
Can anybody help me out with this -
is autovacuum similar to vacuum full analyse verbose?
--
Regards
Gauri
On 18-1-2007 0:37 Adam Rich wrote:
4) Complex queries that might take advantage of the MySQL "Query Cache"
since the base data never changes
Have you ever compared MySQL's performance with complex queries to
PostgreSQL's? I once had a query which would operate on a recordlist and
see whether there were any gaps larger than 1 between consecutive
primary keys.
Suggested it in case he wants to do a log switch after a certain
amount of time...
---
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)
On 1/18/07, Simon Riggs <[EMAIL PROTECTED]> wrote:
On Wed, 2007-01-17 at 23:03 +0500, Shoaib Mir wrote:
> archive_timeout (came in ver 8.2) might help you with customizing
> the size for log files.
On Wed, 2007-01-17 at 23:03 +0500, Shoaib Mir wrote:
> archive_timeout (came in ver 8.2) might help you with customizing the
> size for log files.
I'm not sure that it will.
If anything it could produce more log files, which could lead to a
backlog if the archive_command isn't functioning for some reason.
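For reference, the parameter under discussion, as it would appear in
postgresql.conf (the 60-second value is illustrative only):

    # force a WAL segment switch after this many seconds, so a segment
    # is handed to archive_command at least this often
    archive_timeout = 60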