Tom Lane wrote:
Miroslav Šulc <[EMAIL PROTECTED]> writes:
seriously, I am far below this level of knowledge. But I can contribute
a test that (maybe) can help. I have rewritten the query so it JOINs the
varchar() fields (in fact all fields except the IDPK) at the last INNER JOIN ...
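If it helps anyone reading along, a minimal sketch of what such a rewrite might look like (only AdDevicesSites, IDPK, and the FK name come from this thread; the region table is made up for illustration):

  SELECT k.IDPK, wide.*
  FROM (
      -- inner query: do all the joining on the narrow integer keys only
      SELECT a.IDPK
      FROM AdDevicesSites a
      LEFT JOIN AdDevicesSiteRegions r
             ON a.AdDevicesSiteRegionIDFK = r.IDPK
      ORDER BY a.IDPK
  ) AS k
  -- outer query: attach the wide varchar() columns in one last INNER JOIN
  INNER JOIN AdDevicesSites wide ON wide.IDPK = k.IDPK;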
Tom Lane wrote:
So I have some results. I have tested the query on both PostgreSQL 8.0.1
and MySQL 4.1.8 with LIMIT set to 30 and OFFSET set to 6000. PostgreSQL
result is 11,667.916 ms, MySQL result is 448.4 ms.
That's a fairly impressive discrepancy :-(, and even the slot_getattr()
patch ...
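For anyone wanting to reproduce that kind of measurement, one way to time it in psql (a sketch; the thread's full column list is elided, so a single column stands in):

  \timing
  SELECT AdDevicesSites.IDPK
  FROM AdDevicesSites
  ORDER BY AdDevicesSites.IDPK
  LIMIT 30 OFFSET 6000;

Prefixing the same statement with EXPLAIN ANALYZE gives the per-node timings quoted elsewhere in this thread.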
Kaloyan Iliev Iliev wrote:
Hi,
I have an idea about your problem. Would it be difficult to change
not the entire code but only the queries? You could change the type in
Postgres to bool. Then, when selecting data, you can use a CASE..WHEN to
return 'Y' or 'N', or even write a little function which accepts ...
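A sketch of both variants (the column name "visible" is hypothetical, and assumes the column has already been converted to bool):

  -- CASE..WHEN at select time
  SELECT CASE WHEN visible THEN 'Y' ELSE 'N' END AS visible
  FROM AdDevicesSites;

  -- or the little function, so the queries stay terse
  -- (note: NULLs come out as 'N' here)
  CREATE FUNCTION yn(bool) RETURNS text AS $$
      SELECT CASE WHEN $1 THEN 'Y' ELSE 'N' END
  $$ LANGUAGE sql IMMUTABLE;

  -- usage: SELECT yn(visible) FROM AdDevicesSites;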
Tom Lane wrote:
Just FYI, I did a quick search-and-replace on your dump to replace
varchar(1) by "char", which makes the column fixed-width without any
change in the visible data. This made hardly any difference in the
join speed though :-(. So that is looking like a dead end.
I'll try to change ...
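The same change can be made in place rather than by editing a dump; 8.0's new ALTER COLUMN ... TYPE supports it (column name hypothetical, and note the quotes: "char" is the internal one-byte type, not char(1)):

  ALTER TABLE AdDevicesSites
      ALTER COLUMN visible TYPE "char" USING visible::"char";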
Harald Fuchs wrote:
Sure I could. The problem is our project still supports both MySQL and
PostgreSQL. We used enum('Y','N') in MySQL so there would be a lot of
changes in the code if we changed to the BOOL data type.
Since BOOL is exactly what you want to express and since MySQL also
supports ...
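For what it's worth, a sketch of portable DDL (hypothetical table): MySQL parses BOOL as an alias for TINYINT(1), so one definition can serve both systems:

  CREATE TABLE example_flags (
      idpk    INTEGER PRIMARY KEY,
      visible BOOL    -- native boolean in PostgreSQL, TINYINT(1) in MySQL
  );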
Tom Lane wrote:
Miroslav Šulc <[EMAIL PROTECTED]> writes:
As there are a lot of varchar(1) columns in the AdDevicesSites table, wouldn't
it be helpful to change them to char(1)? Would it solve the variable-width
problem at least for some fields and speed the query up?
No, because ...
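The catalogs show why char(1) doesn't help: char(n) (internally bpchar) is still a variable-width type with a length header, and only the quoted "char" type is a fixed single byte:

  SELECT typname, typlen
  FROM pg_type
  WHERE typname IN ('char', 'bpchar', 'varchar');
  -- typlen is 1 for "char"; -1 (variable) for bpchar and varchar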
PFC wrote:
Instead of a varchar(1) containing 'y' or 'n' you could use a BOOL
or an integer.
Sure I could. The problem is our project still supports both MySQL and
PostgreSQL. We used enum('Y','N') in MySQL so there would be a lot of
changes in the code if we changed to the BOOL data type ...
Christopher Kings-Lynne wrote:
1) in PostgreSQL I use 'varchar(1)' for a lot of fields and in MySQL
I use 'enum'
2) in PostgreSQL, in some cases I use join columns that are not
of the same type (smallint <-> integer (SERIAL)); in MySQL I use the
same types
Well, both those things will make Postgres ...
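On point 2, a sketch of keeping the key types aligned (table names guessed from the FK names in the thread): a SERIAL primary key is an integer, so the referencing column should be integer too, not smallint:

  CREATE TABLE AdDevicesSiteRegions (
      IDPK SERIAL PRIMARY KEY
  );

  CREATE TABLE AdDevicesSites (
      IDPK SERIAL PRIMARY KEY,
      AdDevicesSiteRegionIDFK INTEGER
          REFERENCES AdDevicesSiteRegions (IDPK)   -- integer, matching the SERIAL
  );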
John Arbash Meinel wrote:
In fact, on MySQL I didn't see any slow responses, so I didn't measure
and inspect it. But I can try it if I figure out how to copy the
database from PostgreSQL to MySQL.
I figured you still had a copy of the MySQL database around to compare to. You
probably don't need to spend too ...
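If it's of any use, one hedged starting point for that copy (database and file names made up): dumping the data as column-qualified INSERT statements produces about the most MySQL-digestible SQL pg_dump can emit, though types and quoting will still need hand-fixing:

  pg_dump --data-only --column-inserts mydb > mydb-data.sql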
Tom Lane wrote:
...
I think the reason this is popping to the top of the runtime is that the
joins are so wide (an average of ~85 columns in a join tuple according
to the numbers above). Because there are lots of variable-width columns
involved, most of the time the fast path for field access doesn't apply ...
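A sketch of what that fast path means in practice (hypothetical table; my reading of the tuple-access code): a column's byte offset can be cached only while everything before it is fixed-width, so fixed-width columns declared first stay cheap, and every column after the first varchar has to be found by walking the row:

  CREATE TABLE sites_layout_demo (
      idpk        INTEGER,     -- fixed width: offset cacheable
      size_idfk   INTEGER,     -- fixed width: offset cacheable
      visible     "char",      -- fixed width: offset cacheable
      description VARCHAR,     -- variable width begins here ...
      note        VARCHAR      -- ... located by scanning each tuple
  );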
Tom Lane wrote:
I wrote:
Since ExecProject operations within a nest of joins are going to be
dealing entirely with Vars, I wonder if we couldn't speed matters up
by having a short-circuit case for a projection that is only Vars.
Essentially it would be a lot like execJunk.c, except able to cope ...
Tom Lane wrote:
=?windows-1250?Q?Miroslav_=8Aulc?= <[EMAIL PROTECTED]> writes:
shared_buffers = 48000          # min 16, at least max_connections*2, 8KB each
work_mem = 1024                 # min 64, size in KB
maintenance_work_mem = 16384    # min 1024, size in KB
max_stack_depth = 2048          # min 100, size in KB
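Given work_mem sitting at its 1024 kB default there, one low-risk experiment is raising it for a single session before re-running the query (8.0 takes the value in kB; 16384 is just an illustrative figure, not a recommendation):

  SET work_mem = 16384;   -- 16 MB for this session only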
John Arbash Meinel wrote:
Is there a reason to use varchar(1) instead of char(1)? There probably
is 0 performance difference; I'm just curious.
No, not at all. I'm just not used to char().
Well, with cursors you can also do "FETCH ABSOLUTE 1 FROM
<cursorname>", which sets the cursor position, and then you can ...
Tom Lane wrote:
John Arbash Meinel <[EMAIL PROTECTED]> writes:
How about a quick side track.
Have you played around with your shared_buffers, maintenance_work_mem,
and work_mem settings?
Indeed. The hash joins seem unreasonably slow considering how little
data they are processing (unless ...
John Arbash Meinel wrote:
It's actually more of a question as to why you are doing left outer
joins, rather than simple joins.
Are the tables not fully populated? If so, why not?
Some records do not consist of full information (they are collected from
different sources which use different approaches ...
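The difference in miniature (region table name guessed from the FK): LEFT JOIN keeps the incomplete records, padding the missing side with NULLs, where a plain JOIN would silently drop them:

  SELECT s.IDPK, r.IDPK AS region_idpk
  FROM AdDevicesSites s
  LEFT JOIN AdDevicesSiteRegions r
         ON s.AdDevicesSiteRegionIDFK = r.IDPK;
  -- sites with a NULL AdDevicesSiteRegionIDFK still appear,
  -- with region_idpk shown as NULL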
John Arbash Meinel wrote:
How about a quick side track.
Have you played around with your shared_buffers, maintenance_work_mem,
and work_mem settings?
I have tried setting shared_buffers to 48000 now, but there is no speedup
(11,098.813 ms on the third try). The others are still at their defaults. I'll
check the documentation and ...
Hi Ragnar,
Ragnar Hafstað wrote:
[snip output of EXPLAIN ANALYZE]
for those of us who have not yet reached the level where one can
infer it from the query plan, how about showing us the actual
query too?
I thought it would be sufficient to show where the main bottleneck is.
And in fact, the ...
Hi John,
thank you for your response.
John Arbash Meinel wrote:
You really need to post the original query, so we can see *why* postgres
thinks it needs to run the plan this way.
Here it is:
SELECT AdDevicesSites.IDPK, AdDevicesSites.AdDevicesSiteSizeIDFK,
       AdDevicesSites.AdDevicesSiteRegionIDFK, ...
[snip]
  ->  Hash  (cost=2.61..2.61 rows=61 width=34) (actual time=0.118..0.118 rows=0 loops=1)
        ->  Seq Scan on partners partnersmaintainer  (cost=0.00..2.61 rows=61 width=34) (actual time=0.003..0.075 rows=61 loops=1)
  ->  Hash  (cost=2.61..2.61 ro...
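For anyone puzzling over fragments like the above: each plan node prints the planner's estimate first (cost=startup..total in abstract units, estimated rows, and width in bytes per row) and the measured result second (actual time in milliseconds, real rows, and loops). The Seq Scan above was estimated at 61 rows of 34 bytes and really did return 61 rows in 0.075 ms; a Hash node showing actual rows=0 is normal, since its input goes into the hash table rather than being passed up. Comparing estimated rows with actual rows node by node is the standard way to find where the planner went wrong. The output comes from simply prefixing the query:

  EXPLAIN ANALYZE
  SELECT AdDevicesSites.IDPK FROM AdDevicesSites;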