"Bitmap Heap scan" first?
Are there too many rows in this table?
PS: sorry for my English, I'm French.
--
Paul.
Thank you for your answer.
Now I have to find out how to reduce the size of the table.
Paul.
On Tuesday, 28 August 2007 at 12:55 -0400, Tom Lane wrote:
> Paul <[EMAIL PROTECTED]> writes:
> > Why, in the first case, does pgsql use the "better" index, and if I search
> > r_service
Hello
I'm running pgsql 8.1.11 (from Debian stable) on a server with 16GB RAM
(Linux helios 2.6.18-6-amd64 #1 SMP Tue Aug 19 04:30:56 UTC 2008 x86_64
GNU/Linux).
I have a table "tickets" with 1,000,000 inserts per month (~2,600 every 2 hours)
(at the moment 1300 rows for 5GB)
and I have to extr
Thanks,
Unfortunately, I can't upgrade pgsql to 8.3 since it's not in Debian stable.
So I'm going to play with work_mem & shared_buffers.
With big shared_buffers pgsql tells me
shmget(cle=5432001, taille=11183431680, 03600).
so I do "echo 13183431680 > /proc/sys/kernel/shmmax" (10GB + 2GB just
in
I played with work_mem; setting work_mem above 256000 does not change
the performance.
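For what it's worth, work_mem can also be raised just for one session rather than globally; a quick sketch (the value is arbitrary):
SET work_mem = 262144;  -- in kB on 8.1, so this is 256MB for this session
-- run the big sort/hash query here
RESET work_mem;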
I tried to upgrade to 8.3 using etch-backports, but it's a new install, not an
upgrade. So I have to create users and permissions and import data again; that
scared me, so I want to find another solution first.
But now
SHMMAX to 134217728 (i.e. 128 MB)
What should SHMALL be?
The current system values are
[EMAIL PROTECTED]:~/data$ cat /proc/sys/kernel/shmmax
33554432
[EMAIL PROTECTED]:~/data$ cat /proc/sys/kernel/shmall
2097152
i.e. SHMALL seems to be 1/16 of SHMMAX (but note SHMALL is counted in pages, not bytes)
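Since SHMMAX is in bytes while SHMALL is in pages, the two numbers aren't directly comparable. A quick sanity check of what the current SHMALL allows, assuming the usual 4kB page size:
SELECT 2097152 * 4096 AS shmall_bytes;  -- 8589934592, i.e. 8GB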
Paul
[1] http://www.po
a multi-key index on them both.
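Something like this, a sketch using the "test" table described below (the index name is invented):
CREATE INDEX test_time_source_idx ON test ("time", source);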
Paul
Kjell Tore Fossbakk wrote:
Hello!
I use FreeBSD 4.11 with PostgreSQL 7.3.8.
I got a huge database with roughly 19 million records. There is just one
table, with a time field, a few ints and a few strings.
table test
fields time (timestamp), source (string),
effective_cache_size
----------------------
 1000
(1 row)
I have used the manual pages on postgresql, postmaster, and so on, but
I can't find anywhere to specify which config file Pg is to use. I'm
not entirely sure if it uses the one I'm editing
(/usr/local/etc/postgresql.conf).
Any hints,
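One way to check from psql, a sketch (the config_file setting only exists on newer releases; on 7.x the server simply reads postgresql.conf from the data directory given with -D):
SHOW config_file;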
gain.
Has anyone experienced real performance gains by moving the pg_xlog files?
Thanks in anticipation,
Paul.
ndle disk?
In cases such as this, where an external storage array with a hardware
RAID controller is used, the normal advice to separate the data from the
pg_xlog seems to come unstuck, or are we missing something?
Cheers,
Paul Johnson.
ue key violation leave dead rows in the table? If so, why?
I really appreciate any information you guys can give me. I'm convinced
that PG is the best database for our needs, but I need to be able to get
this database performing well enough to convince the bigwigs.
Regards,
Paul Lathrop
System
You'll find that PostGIS does a pretty good job of selectivity
estimation.
P
On 13-Feb-07, at 9:09 AM, Guillaume Smet wrote:
Hi all,
Following the work of Mark Stosberg on this list (thanks Mark!), I
optimized our slow proximity queries by using cube, earthdistance
(shipped with contrib) an
I can't just set an arbitrarily big
limit to use the index.
Any ideas? To me it looks like a bug in the planner. I can't think of
any logical reason not to use an existing index to retrieve a sorted
listing of the data.
Paul
VPOP3
At 16:26 04/05/2007, you wrote:
Paul Smith wrote:
Why doesn't it use the other index? If use 'set enable_seqscan=0'
then it does.
Just a guess, but is the table clustered on column a? Maybe not
explicitly, but was it loaded from data that was sorted by a?
I wouldn't ha
scan" first ?
There ara too much rows in this table ?
I'm doing something wrong ?
PS: sorry for my english, i'm french.
--
Paul.
r the other, such as
allocating shared_buffers to a much larger percentage (such as 90-95%
of expected 'free' memory).
Paul
(Apologies if two copies of this email arrive, I sent the first from
an email address that wasn't directly subscribed to the list so it was
blocked).
8.3 on a 64-bit OS with 64GB of memory, but
with Postgres still tuned for the 8GB the servers originally had
and under a VM).
Paul
at least 4 bytes to be stored ?
Thanks,
Paul
busy, but it has more or less as many writes as
reads.
I have not seen more than 10-15 simultaneous queries.
Any idea why an idle postmaster consumes 3-5% CPU?
This is a FreeBSD 5.4-RELEASE server with 2x3G Xeon CPUs, 2G memory,
RAID1 mirrored U320 drives.
Thanx
Paul
ed. Could the problem be that an index on a function result is not cached,
or less well cached?
Thanks,
Paul
I think that having an API for backup functionality would definitely be useful.
Just my 2 cents...
Paul
On 6/8/06, Christopher Kings-Lynne <[EMAIL PROTECTED]> wrote:
> Personally I think it would be neat. For example the admin-tool guys
> would be able to get a dump without
Take a look at: http://www.icp-vortex.com/english/index_e.htm
They have always made good RAID controllers.
Cheers
Paul
On Thu, 2006-07-06 at 11:10 -0700, Kenji Morishige wrote:
> Thanks for the suggestion Mark, though the server chassis I am trying to
> utilize already has 4 10,000 RP
stado = 'A') or (Estado = 'I')),
PRIMARY KEY(CodBanco)
);
select * from icc_m_banco where codbanco = 1;
select * from icc_m_banco where codbanco = 1::int2;
--
Paul Thomas
Thomas Micro Systems Limited
hrough the client-side operations to see what could be eating up the 13
seconds.
Given that the client and server are on different machines, I'm wondering
if the bulk of the 13 seconds is due to a network misconfiguration
want
I will gladly give you the information.
Googling threw up
http://spider.tm/apr2004/cstory2.html
Interesting and possibly relevant quote:
"Benchmarks have shown that in certain conditions the anticipatory
algorithm is almost 10 times faster than what 2.4 kernel supports".
HTH
bout what the value should be..)
(b) is determined by the dastardly trick of actually sampling the data in
the table!!! That's what analyze does. It samples your table(s) and uses
the result to feed into its decision about when to flip between
sequential and index scans.
Hope thi
I tried to test how this is related to cache coherency, by forcing
affinity of the two test_run.sql processes to the two cores (pipelines?
threads) of a single hyperthreaded xeon processor in an smp xeon box.
When the processes are allowed to run on distinct chips in the smp box,
the CS storm h
.)
On Apr 20, 2004, at 1:02 PM, Paul Tuckfield wrote:
I tried to test how this is related to cache coherency, by forcing
affinity of the two test_run.sql processes to the two cores
(pipelines? threads) of a single hyperthreaded xeon processor in an
smp xeon box.
When the processes are allowed to
.4.
Yes, I've seen other benchmarks which also show that.
--
Paul Thomas
Thomas Micro Systems Limited
http://www.thomas-micro-systems-ltd.co.uk
y underestimated for today's
average hardware configuration (1+ GHz, 0.5+ GB RAM, fast FSB, fast HDD).
It seems to me a better strategy to force that 1% of users to "downgrade"
their config than vice versa.
regards
ch
This has been discussed many times before. Check the
Dave:
Why would test and set increase context switches:
Note that it *does not increase* context switches when the two threads
are on the two cores of a single Xeon processor. (use taskset to force
affinity on linux)
Scenario:
If the two test and set processes are testing and setting the same bi
2004, Paul Tuckfield wrote:
If you are having a "write storm" or bursty writes that's burying
performance, a SCSI RAID controller with writeback cache will greatly
improve the situation, but I do believe they run around $1-2k. If
it's write specific problem, the cache matte
The king of statistics in these cases is probably vmstat. One can
drill down on specific things from there, but first you should send
some vmstat output.
Reducing cache -> reducing IO suggests to me the OS might be paging out
shared buffers. This is indicated by activity in the "si" and "so
I'm guessing you have a 4 cpu box:
1 99 percent busy process on a 4 way box == about 25% busy overall.
On May 5, 2004, at 6:03 AM, Tom Lane wrote:
"Cyrille Bonnet" <[EMAIL PROTECTED]> writes:
Should I be worried that Postgres is eating up 99% of my CPU??? Or is
this
*expected* behaviour?
It's not
it's very good to understand specific choke points you're trying to
address by upgrading so you don't get disappointed. Are you truly CPU
constrained, or is it memory footprint or IO throughput that makes you
want to upgrade?
IMO The best way to begin understanding system choke points is vmstat
o
I'm confused why you say the system is 70% busy: the vmstat output
shows 70% *idle*.
The vmstat you sent shows good things and ambiguous things:
- si and so are zero, so you're not paging/swapping. That's always step
1; you're fine.
- bi and bo (physical IO) shows pretty high numbers for how many
One big caveat re. the "SAME" striping strategy is that readahead can
really hurt an OLTP workload.
Mind you, if you're going from a few disks to a caching array with many
disks, it'll be hard to not have a big improvement
But if you push the envelope of the array with a "SAME" configuration,
read
04','182','153','6','2004','0')
DESC
OFFSET 0 LIMIT 20;
I expect that pg is having to evaluate your function every time it does a
compare within its sort. Something like
SELECT t1.value1,t1.value2,
getday_total(..) AS
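Presumably the suggestion continues along these lines: compute the function once per row in a subquery, then sort on the result. A hypothetical completion (the argument values are invented):
SELECT value1, value2, day_total
FROM (SELECT t1.value1, t1.value2,
             getday_total('104', '182') AS day_total  -- invented arguments
      FROM t1) AS sub
ORDER BY day_total DESC
OFFSET 0 LIMIT 20;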
On 17/06/2004 12:10 Adam Witney wrote:
Will this run on other platforms? OSX maybe?
It's a Java app so it runs on any platform with a reasonably modern
Java VM.
--
Paul Thomas
+--+-+
| Thomas Micro Systems Li
On 17/06/2004 17:54 Vitaly Belman wrote:
Is it possible to download the Visual Explain only (link)? I only see
that you can download the whole ISO (which I hardly need).
You can get it from CVS and build it yourself.
--
Paul Thomas
her queries
are slightly slower than under Oracle on the same
hardware but nothing like this.
Usual questions:
have you vacuumed the table recently?
what are your postgresql.conf settings?
can you show us EXPLAIN ANALYZE output rather than just EXPLAIN output?
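For example, a sketch with an invented table:
EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42;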
or 2 years now and have yet to discover any key
sequence which makes any sense. But then I don't do drugs, so my perception
is probably at odds with the originators of Emacs ;)
--
Paul Thomas
+--+-+
| Thomas Micro Systems Lim
Can anyone give a good reference site/book for getting the most out of
your postgres server?
All I can find is contradictory theories on how to work out your settings.
This is what I followed to setup our db server that serves our web
applications.
http://www.phpbuilder.com/columns/smith2001082
On 04/08/2004 13:45 Paul Serby wrote:
Can anyone give a good reference site/book for getting the most out of
your postgres server?
All I can find is contradictory theories on how to work out your
settings.
This is what I followed to setup our db server that serves our web
applications.
http
eID_key" on "tblForumMessages"
~ (cost=0.00..8037.33 rows=2150 width=223) (actual time=0.153..0.153
rows=0 loops=1)
~ Index Cond: ("fk_iParentMessageID" = 90)
~ Total runtime: 0.323 ms
SELECT COUNT(*) FROM "tblForumMessages" WHERE "fk_iParentMessageID"
Having trouble with one table (see time to count records below!).
Fairly new to postgres so any help much appreciated.
It only contains 9,106 records - as you can see from:
select count(id) from project;
 count
 -----
  9106
1 row(s)
Total runtime: 45,778.813 ms
There are only 3 fields:
id
integer
nex
index but a type mismatch (e.g., an int4 field referencing
an int8 field).
Either of these will cause a sequential table scan and poor performance.
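A sketch of the workaround with made-up names; on older releases the explicit cast lets the int8 index be used:
SELECT * FROM child WHERE parent_id = 42::int8;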
--
Paul Thomas
Thomas Micro Systems Limited
relationship between PvA and PvB on a
row-by-row basis.
Have you considered using cursors?
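A minimal sketch (table and cursor names invented):
BEGIN;
DECLARE pv_cur CURSOR FOR SELECT * FROM pva ORDER BY id;
FETCH FORWARD 100 FROM pv_cur;  -- process this batch, then FETCH again
CLOSE pv_cur;
COMMIT;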
--
Paul Thomas
Thomas Micro Systems Limited
http://www.thomas-micro-systems-ltd.co.uk
ve to take into account your
expected number of concurrent connections.
Paul
Hi all,
I have a table which has more than 20 records. I need to get
the records which match this:
where today::date = '2004-11-05';
This is the only condition in the query. There is a btree index on the
column today.
Is there any way to optimise it?
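If "today" is actually a timestamp column, the ::date cast keeps the btree index from being used; rewriting the condition as a range is one workaround (a sketch, table name invented):
SELECT * FROM t WHERE today >= '2004-11-05' AND today < '2004-11-06';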
rgds
A
performance? If creating an index can help, then
how should the index be created: on lower case or upper case?
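One common approach, a sketch with invented names, is a functional index that matches the query's case-folding:
CREATE INDEX t_lower_name_idx ON t (lower(name));
SELECT * FROM t WHERE lower(name) = lower('Antony');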
rgds
Antony Paul
I used PreparedStatements to avoid SQL injection attacks, and it is the
best way to do it in JDBC.
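The server-side equivalent is PREPARE/EXECUTE; a minimal sketch with an invented query:
PREPARE find_user(int) AS SELECT * FROM users WHERE id = $1;
EXECUTE find_user(42);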
rgds
Antony Paul
On Mon, 24 Jan 2005 09:01:49 -0500, Merlin Moncure
<[EMAIL PROTECTED]> wrote:
> Russell wrote:
> > I am not sure what the effect of it being prepared will be, however
://archives.postgresql.org/pgsql-sql/2004-11/msg00285.php
It says that an index is not used if the search string begins with a % symbol.
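That is the usual btree rule: only left-anchored patterns can use the index (and, at least in the C locale or with text_pattern_ops, they do). A sketch with invented names:
SELECT * FROM t WHERE name LIKE 'abc%';  -- left-anchored, can use an index on name
SELECT * FROM t WHERE name LIKE '%abc';  -- leading %, forces a scan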
rgds
Antony Paul
On Mon, 24 Jan 2005 20:58:54 +1100, Russell Smith <[EMAIL PROTECTED]> wrote:
> On Mon, 24 Jan 2005 08:18 pm, Antony Paul wrote:
> > Hi,
&
Actually the query is created like this.
User enters the query in a user interface. User can type any character
in the query criteria. ie. % and _ can be at any place. User have the
freedom to choose query columns as well. The query is against a single
table.
rgds
Antony Paul
On Tue, 25 Jan
467
Swap: 501 0 501
rgds
Antony Paul
.
rgds
Antony Paul.
On further investigation I found that an index scan is not used if the query
has a function in it like lower(), even when an index exists on the lower()
of the column.
rgds
Antony Paul
On Mon, 7 Feb 2005 14:37:15 +0530, Antony Paul <[EMAIL PROTECTED]> wrote:
> Hi all,
> I am facing a strange proble
Sorry I forgot to mention it. I am using 7.3.3. I will try it in 8.0.0
rgds
Antony Paul
On Mon, 7 Feb 2005 12:46:05 +0100, Steinar H. Gunderson
<[EMAIL PROTECTED]> wrote:
> On Mon, Feb 07, 2005 at 04:44:07PM +0530, Antony Paul wrote:
> > On more investigation I found that in
I ran analyze; several times.
rgds
Antony Paul
On Mon, 07 Feb 2005 12:53:30 +0100, Jan Poslusny wrote:
> It depends on many circumstances, but, at first, simple question: Did
> you run vacuum analyze?
> I am satisfied with functional indexes - it works in my pg 7.4.x.
>
> An
Hi all, we have a Sun E3500 running Solaris 9. It's got 6x336MHz CPUs and
10GB RAM.
I would like to know what /etc/system and postgresql_conf values are
recommended to deliver as much system resource as possible to Postgres. We
use this Sun box solely for single user Postgres data warehousing
work
memory available via /etc/system, and having read all we could find
on various web sites.
Should I knock it down to 400MB as you suggest?
I'll check out that URL.
Cheers,
Paul.
> Paul,
>> I would like to know what /etc/system and postgresql_conf values are
recommended to deliver
00 setup that
we both run.
Many thanks,
Paul.
> Hi, Paul
>
> Josh helped my company with this issue -- PG doesn't use shared memory
> like Oracle, it depends more on the OS buffers. Making shared mem
> too large a fraction is disastrous and seriously impacts performance.
>
minute when they find a cheaper system.
--
Paul Lambert
Database Administrator
AutoLedgers
>> Never take advice from a guy who top posts... A friend of mine just went
>> through an OTG course and had good things to say, and I've heard other
>> speak well of it too, so I'd probably recommend them, but there are
'0'::text)))::integer))"
" -> Index Scan using pk_fincompany_dealer_company on
fincompany (cost=0.01..8.28 rows=1 width=61) (actual time=0.007..0.009
rows=1 loops=17227)"
"Index Cond: (((fincompany.dealer
Paul Lambert wrote:
This part of the query alone takes a significant part of the time:
SELECT DISTINCT ON (finbalance.dealer_id, finbalance.year_id,
finbalance.subledger_id, finbalance.account_id)
finbalance.year_id AS year
Gregory Stark wrote:
"Richard Huxton" <[EMAIL PROTECTED]> writes:
Paul Lambert wrote:
" -> Sort (cost=30197.98..30714.85 rows=206748 width=16) (actual >>
time=5949.691..7018.931 rows=206748 loops=1)"
"Sort Key: dealer_id, year_id, subled
therwise silly) for a bitmap fast index scan/bitmap
conversion for similar dramatic results.
For "large" tables, Oracle is not going to be as fast as MyISAM tables
in MySQL, even with these optimizations, since MyISAM doesn't have to
scan even index pages to get a count(*) answer again
Mark Mielke wrote:
Josh Berkus wrote:
Count() on Oracle and MySQL is almost instantaneous, even for very
large tables. So why can't Postgres do what they do?
AFAIK the above claim is false for Oracle. They have the same
transactional issues we do.
Nope. Oracle's MVCC is implemen
econd that. PITR is IMHO the way to go, and I believe you'll be
pleasantly surprised how easy it is to do. As always, test your backup
strategy by restoring. Even better, make a point of periodically
testing a restore of production backups to a non-production system.
Paul
gs, or you can keep a table of active partitions, which the script that
drops old partitions and generates new ones keeps updated with the
oldest/newest partition dates. Or any number of other solutions, whatever
you find cleanest for your purposes.
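A sketch of such a bookkeeping table (the names are invented):
CREATE TABLE active_partitions (
    partition_name text PRIMARY KEY,
    range_start    date NOT NULL,
    range_end      date NOT NULL
);
-- the maintenance script inserts a row when it creates a partition
-- and deletes the row when it drops one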
Paul
On 2008-08-28, at 21:31, Emi Lu wrote:
Good morning,
Tried to compare Table1 based on Table2:
  update table1.col = false
  if table1.pk_cols not in table2.pk_cols
For the following two ways, (2) always performs better than (1), right?
I need your inputs.
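One way to express that update, a sketch (the real key column names will differ):
UPDATE table1 SET col = false
WHERE NOT EXISTS (SELECT 1 FROM table2 WHERE table2.pk = table1.pk);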
On 26-Sep-2009, at 10:16 PM, Claus Guttesen wrote:
I have a big performance problem in my SQL select query:
select * from event where user_id in
(500,499,498,497,496,495,494,493,492,491,490,489,488,487,486,485,484,483,482,481,480,479,478,477,476,475,474
, you really should consider using a connection pool
as it removes the overhead of creating and closing connections.
--
Paul Thomas
Thomas Micro Systems Limited
using connection pooling, try reducing the maximum number
of connections. This will take some of the stress off the database.
--
Paul Thomas
Thomas Micro Systems Limited
ttp://www.redhat.com/docs/manuals/database/. I'd welcome your opinions
on this product.
Thank you for your comments.
It looks like they just wrote a number of GUI versions of the command line
utilities. From what I can tell, it's still a standard postgresql database
beh
ooses a seq scan for small tables,
as the whole table can often be brought into memory with one IO, whereas
reading the index and then the table would be 2 IOs.
HTH
--
Paul Thomas
Thomas Micro Systems Limited
find that the third query is also a lot faster than the
first query.
HTH
--
Paul Thomas
Thomas Micro Systems Limited
http://www.thomas-micro-systems-ltd.co.uk
dscan, enable_sort, enable_nestloop,
enable_mergejoin or enable_hashjoin have been set to false. Looking at the
source, that's the only way I can see that such large numbers can be
produced.
HTH
--
Paul Thomas
Thomas Micro Systems Limited
should be increased for your situation?
HTH
--
Paul Thomas
Thomas Micro Systems Limited
http://www.thomas-micro-systems-ltd.co.uk
k (faster maybe but could cause data corruption). Try modifying your
program to have connection.setAutoCommit(false) and do a
connection.commit() after say every 100 inserts.
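The SQL-level equivalent is wrapping each batch in one transaction; a sketch with an invented table:
BEGIN;
INSERT INTO t (id, val) VALUES (1, 'a');
INSERT INTO t (id, val) VALUES (2, 'b');
-- ... roughly 100 inserts per batch, then
COMMIT;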
HTH
--
Paul Thomas
Thomas Micro Systems Limited
to me. I'd start off
with something like
shared_buffers = 2000
sort_mem = 1024
max_connections = 100
and see how it performs under normal business loading.
--
Paul Thomas
Thomas Micro Systems Limited
gle-processor Intel/AMD
based hardware.
Selfishness and silliness aside, I'm sure your tests will be of interest to
us all. Thanks in advance
--
Paul Thomas
Thomas Micro Systems Limited
st a thought..
Interestingly, float8 indexes do work OK (float8col = 99). I spent a large
part of yesterday grepping through the sources to try and find out why
this should be so. No luck so far, but I'm going to keep on trying
RAM) or do you want to use moderate
settings?"
Something like this, you get the idea.
ISTR reading that 7.4 will use a default of shared_buffers = 1000 if the
machine can support it (most can). This alone should make a big difference
in out
ntation of the facts as you see them, and if you want to put in
your opinion also, that's fine, just make a note!
TIA.
Paul...
--
plinehan__AT__yahoo__DOT__com
C++ Builder 5 SP1, Interbase 6.0.1.6 IBX 5.04 W2K Pro
Please do not top-post.
(hope I'm posting this correctly)
You wrote:
>First question is do we gain anything by moving the RH Enterprise
>version of Linux in terms of performance, mainly in the IO realm as we
>are not CPU bound at all? Second and more radical, has anyone run
>postgreSQL on the new Apple G5 with an XRaid
Not that I'm offering to do the programming, mind you :) but . .
In the case of select count(*), one optimization is to do a scan of the
primary key, not the table itself, if the table has a primary key. In a
certain commercial, lesser database, this is called an "index fast full
scan". It wou
n
their site.
What, exactly, is the story with this?
Paul...
--
plinehan y_a_h_o_o and d_o_t com
C++ Builder 5 SP1, Interbase 6.0.1.6 IBX 5.04 W2K Pro
Please do not top-post.
"XML avoids the fundamental question of what we should do,
by focusing entirely on how we should do it."
his
email) which has still got its rpm binaries. My other machines have all
been upgraded from source.
--
Paul Thomas
Thomas Micro Systems Limited
rpm disk, whilst it will probably have a lower seek time than a 10K rpm
disk, won't have a proportionately (i.e., 2/3rds) lower seek time.
- likelihood of page to be cached in memory by the kernel
That's effective cache s
eriences anyone?
--
Paul Thomas
Thomas Micro Systems Limited
http://www.thomas-micro-systems-ltd.co.uk
This is a good one :)
Here is a "brief" description of our issue (Postgres 9.0):
Tables:
main fact table:
Table "public.parent_fact"
 Column | Type | Modifiers
--------+------+-----------
 etime  | date | not n
val, then
using a view that includes a calculation using CURRENT_DATE.
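Something along these lines, a sketch reusing the parent_fact/etime names from above (the view name and the 30-day window are invented):
CREATE VIEW recent_facts AS
SELECT * FROM parent_fact
WHERE etime >= CURRENT_DATE - 30;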
Regards,
Paul Bort
Systems Engineer
TMW Systems, Inc.
pb...@tmwsystems.com
216 831 6606 x2233
216 8313606 (fax)
On Feb 12, 2015 9:17 PM, "Tom Lane" wrote:
> The planner might produce such a result if there's an opportunity
> to perform the sorting via an index on "alpha" (ie, the ORDER BY
> matches some index). If it has to do an explicit sort it's gonna
> do the join first.
>
> (If you have such an index
Hi!
How can I speed up my server's performance when I use OFFSET and LIMIT
clauses?
For example I have a query:
SELECT * FROM table ORDER BY id, name OFFSET 10 LIMIT 1
This query takes a long time about more than 2 minutes.
If my query is:
SELECT * FROM table ORDER BY id, name OFFSET 500
5000
Thread 3 : gets offset 1 limit 5000
And so on...
Would there be any other, faster way than what I thought of?
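One faster alternative is keyset pagination, which avoids scanning and discarding all the OFFSET rows; a sketch with invented values (needs a version with row-value comparison, 8.2 or later):
SELECT * FROM "table"
WHERE (id, name) > (12345, 'smith')  -- last row of the previous page
ORDER BY id, name
LIMIT 5000;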
-Original Message-
From: PFC [mailto:[EMAIL PROTECTED]
Sent: Thursday, May 11, 2006 7:06 AM
To: Christian Paul Cosinas; pgsql-performance@postgresql.org
Subject: Re
imitated: move pg_xlog before anything else.
Now about the "client side", I recommend you install and use pgpool, see:
http://pgpool.projects.postgresql.org/ . Because "pgpool caches the
connection to PostgreSQL server to reduce the overhead to establish the
connection to it". Always g