So the issue is that instead of taking 174 seconds the query now takes
201?
I'm guessing that SQL server might be using index covering, but that's
just a guess. Posting query plans (preferably with actual timing info;
EXPLAIN ANALYZE on PostgreSQL and whatever the equivalent would be for
MSSQL) m
Who needs a paginated view with 100,000 pages?
- Select min(date) and max(date) from your table
- Present a nifty date selector to choose the records from any day,
hour, minute, second
- Show them, with "next day" and "previous day" buttons
- It's more useful to the user (a rough sketch of this approach follows below)
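Assuming a table t with an indexed date column (both names are placeholders for
whatever the real schema uses), that approach might look roughly like:

  -- Range the date selector should cover
  SELECT min(date), max(date) FROM t;

  -- One day's worth of records; "next day" / "previous day" buttons
  -- just shift both bounds by one day
  SELECT *
  FROM t
  WHERE date >= '2005-10-26'
    AND date <  '2005-10-27'
  ORDER BY date;

With an index on date, each page is a small range scan instead of an
ever-growing OFFSET.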
We have a GUI that lets users browse through the records page by page,
about 25 records at a time. (Don't ask me why, but we have to have this
GUI.) This translates to something like
select count(*) from table <-- to give feedback about the DB size
select * from table order by date limit 25 o
On Wed, Oct 26, 2005 at 08:51:03PM -0400, Tom Lane wrote:
> I have some ideas in the back of my head about supporting
> cross-data-type hashing. Essentially this would require that the hash
> functions for two types be compatible in that they generate the same
> hash value for two values that woul
"Steinar H. Gunderson" <[EMAIL PROTECTED]> writes:
> Aha!
> Figured out the "start" column wasn't the problem after all. The problem was
> the "stopp" column, which was timestamp on one side and date on the other...
Ah-hah.
> So, it can be fixed for this instance, but this feels a bit like the p
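The per-instance fix implied here is presumably just to make both sides of the
comparison the same type, e.g. by casting the date column up to timestamp
(t1/t2 are placeholder table names, not the real query):

  -- With t2.stopp cast to timestamp, the NOT IN comparison becomes
  -- timestamp = timestamp, which the planner knows how to hash
  SELECT count(*)
  FROM t1
  WHERE t1.stopp NOT IN (SELECT t2.stopp::timestamp FROM t2);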
On Wed, Oct 26, 2005 at 07:53:02PM -0400, Tom Lane wrote:
> I don't think you're getting a correct reading for optup, but OID
> 2373 is timestamp = date:
>
> [...]
>
> My recollection is that there was no such operator in 7.4; probably in
> 7.4 the IN ended up using timestamp = timestamp which is
"Steinar H. Gunderson" <[EMAIL PROTECTED]> writes:
> On Wed, Oct 26, 2005 at 07:06:15PM -0400, Tom Lane wrote:
>> AFAICS, subplan_is_hashable() is testing the same conditions in 7.4 and
>> HEAD, so this isn't clear. Want to step through it and see where it's
>> deciding not to hash?
> (gdb) print
I DON'T KNOW WHAT TO DO WITH THESE QUERIES...
Comparison with SQL Server: SQL Server wins!!!
Table sizes:
archivos: 40MB
fotos: 55MB
select count(1) from fotos f where not exists (select a.archivo from
archivos a where a.archivo=f.archivo)
173713 ms.
110217 ms.
83122 ms.
select count(*) from
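For comparison, the same anti-join can also be written as a LEFT JOIN with an
IS NULL filter (equivalent as long as archivos.archivo is never NULL); whether
it is any faster is exactly what the query plans would show:

  -- Count fotos rows that have no matching archivos row
  SELECT count(*)
  FROM fotos f
  LEFT JOIN archivos a ON a.archivo = f.archivo
  WHERE a.archivo IS NULL;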
On Wed, Oct 26, 2005 at 07:06:15PM -0400, Tom Lane wrote:
> AFAICS, subplan_is_hashable() is testing the same conditions in 7.4 and
> HEAD, so this isn't clear. Want to step through it and see where it's
> deciding not to hash?
Line 639, i.e.:
635 if (!optup->oprcanhash || optup->
"Edward Di Geronimo Jr." <[EMAIL PROTECTED]> writes:
> ... I'd like to know exactly what causes
> the bottleneck in the original query, and if there are other approaches
> to solving the issue in case I need them in future queries.
This is fairly hard to read ... it would help a lot if you had s
"Steinar H. Gunderson" <[EMAIL PROTECTED]> writes:
> Any good ideas why 8.1 would refuse to do this, when 7.4 would do it? It does
not matter how high I set my work_mem; even at 2,000,000 it refused to hash
> the subplan.
AFAICS, subplan_is_hashable() is testing the same conditions in 7.4 and
HE
On Oct 19, 2005, at 9:51 AM, Katherine Stoovs wrote:
I want to correlate two index rows of different tables to find an
offset so that
table1.value = table2.value AND table1.id = table2.id + offset
is true for a maximum number of rows.
To achieve this, I have the two tables and a table with p
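One direct way to express that search in SQL (a sketch based only on the
table1/table2 names and id/value columns given above) is to join on value and
group by the implied offset:

  -- For each candidate offset, count how many rows line up,
  -- then keep the offset with the most matches
  SELECT t1.id - t2.id AS id_offset, count(*) AS matches
  FROM table1 t1
  JOIN table2 t2 ON t1.value = t2.value
  GROUP BY t1.id - t2.id
  ORDER BY matches DESC
  LIMIT 1;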
I am running Postgres 7.4 on FreeBSD. The main table has 2 million records
(we would like to do at least 10 million or more). It is mainly a FIFO
structure with maybe 200,000 new records coming in each day that displace
the older records.
I'm so sorry, but I have to rant XDDD
People wh
Hi,
I finally found what I believe is the root cause for the hopeless
performance, after a lot of query rewriting:
> Subquery Scan mdb_effektiv_tilgang  (cost=19821.69..4920621.69 rows=1 width=48)
>   Filter: ((NOT (hashed subplan)) AND (NOT (subplan)))
The problem here is simply that 8
aurora <[EMAIL PROTECTED]> writes:
> It would still be helpful if select count(*) can perform well.
If you can settle for an approximate count, pg_class.reltuples might
help you.
regards, tom lane
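Concretely, that approximate count can be read straight from the catalog
(mytable is a placeholder; reltuples is the row estimate refreshed by VACUUM
and ANALYZE):

  SELECT reltuples::bigint AS approx_rows
  FROM pg_class
  WHERE relname = 'mytable';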
You could also create your own index, so to speak: a table that
simply contains a list of primary keys and an order value field that
you can use as your offset. This can be kept in sync with the master
table using triggers pretty easily. 2 million is not very much if you
only have an integer pkey
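A minimal sketch of that idea, with all table and column names invented for
illustration:

  -- Side table mapping a dense position number to each primary key
  CREATE TABLE t_order (
      pos  serial PRIMARY KEY,
      t_id integer NOT NULL REFERENCES t (id)
  );

  -- A page of 25 rows (here rows 1001 to 1025) becomes an indexed range scan
  SELECT t.*
  FROM t_order o
  JOIN t ON t.id = o.t_id
  WHERE o.pos BETWEEN 1001 AND 1025
  ORDER BY o.pos;

Keeping t_order in step on insert and delete is what the triggers mentioned
above would do; with a FIFO table the positions also need periodic renumbering
as old rows fall off.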
>> select * from table order by date limit 25 offset 0
> Do you have an index on the date column? Can you post an EXPLAIN
> ANALYZE for the slow query?
Wow! Now that I look again, there are actually 2 date fields. One is
indexed and one is not. The ORDER BY was done on the column without the
index. Usi
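If the sort genuinely has to be on the currently unindexed column, the usual
fix is simply to index it (names here are placeholders):

  CREATE INDEX t_date2_idx ON t (date2);

With that in place, ORDER BY date2 LIMIT 25 can read the first rows straight
off the index instead of sorting the whole table.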
> We have a GUI that lets users browse through the records page by page,
> about 25 records at a time. (Don't ask me why, but we have to have this
> GUI.) This translates to something like
>
> select count(*) from table <-- to give feedback about the DB size
Do you have an integer field that is a
On Wed, 2005-10-26 at 15:41, aurora wrote:
> I am running Postgres 7.4 on FreeBSD. The main table has 2 million
> records (we would like to do at least 10 million or more). It is mainly a
> FIFO structure with maybe 200,000 new records coming in each day that
> displace the older records.
>
> We have a
Do you have an index on the date column? Can you post an EXPLAIN
ANALYZE for the slow query?
-- Mark Lewis
On Wed, 2005-10-26 at 13:41 -0700, aurora wrote:
> I am running Postgres 7.4 on FreeBSD. The main table has 2 million
> records (we would like to do at least 10 million or more). It is mainly a
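For reference, getting that output just means prefixing the slow statement
(t stands in for the real table name):

  EXPLAIN ANALYZE
  SELECT * FROM t ORDER BY date LIMIT 25 OFFSET 0;

EXPLAIN ANALYZE actually runs the query and reports real row counts and timings
next to the planner's estimates, which is what makes it so much more useful
than plain EXPLAIN.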
I am running Postgres 7.4 on FreeBSD. The main table has 2 million
records (we would like to do at least 10 million or more). It is mainly a
FIFO structure with maybe 200,000 new records coming in each day that
displace the older records.
We have a GUI that lets users browse through the records page by p
I currently have an infrastructure that's based around SQL Server 2000.
I'm trying to move some of the data over to Postgres, partly to reduce
the load on the SQL Server machine and partly because some queries I'd
like to run are too slow to be usable on SQL Server. Most likely over
time mor
On Tue, Oct 25, 2005 at 22:24:06 -0600,
Sidar López Cruz <[EMAIL PROTECTED]> wrote:
> Where can I find best practices for tuning PostgreSQL?
You should first read the documentation. For 8.1, that would be here:
http://developer.postgresql.org/docs/postgres/runtime-config.html
There is also go
Hi there.
I am currently building a system where it would be nice to use multiple
levels of views upon each other (it is a statistics system, where
traceability is important).
Is there any significant performance reduction in, say, 10 levels of views
instead of one giant, nested sql-statemen
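For concreteness, "multiple levels of views" here just means definitions
stacked on top of each other, along these lines (all names invented):

  CREATE VIEW level1 AS
      SELECT id, value FROM measurements WHERE value IS NOT NULL;

  CREATE VIEW level2 AS
      SELECT id, avg(value) AS avg_value FROM level1 GROUP BY id;

  -- ...and so on, each level selecting from the one below

Since each view is expanded into the referencing query before planning, the
real question is how well the planner copes with the resulting large nested
statement rather than the number of views as such.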
On Wed, Oct 26, 2005 at 08:05:21AM -0400, Merlin Moncure wrote:
> select count(*) from fotos f where not exists (select archivo from archivos a
> where a.archivo = f.archivo)
This was an optimization before 7.4, but probably isn't anymore.
/* Steinar */
--
Homepage: http://www.sesse.net/
> look at this:
> select count(*) from fotos where archivo not in (select archivo from
> archivos)
> Aggregate  (cost=4899037992.36..4899037992.37 rows=1 width=0)
>   ->  Seq Scan on fotos  (cost=22598.78..4899037338.07 rows=261716 width=0)
>         Filter: (NOT (subplan))
>         SubPlan
>
On Tue, Oct 25, 2005 at 10:26:43PM -0600, Sidar López Cruz wrote:
> look at this:
> select count(*) from fotos where archivo not in (select archivo from
> archivos)
> Aggregate  (cost=4899037992.36..4899037992.37 rows=1 width=0)
>   ->  Seq Scan on fotos  (cost=22598.78..4899037338.07 rows=261716 wi
Tom Lane wrote:
Rich Doughty <[EMAIL PROTECTED]> writes:
Tom Lane wrote:
The reason these are different is that the second case constrains only
the last-to-be-joined table, so the full Cartesian product of t and h1
has to be formed. If this wasn't what you had in mind, you might be
able to r
2005/10/26, Sidar López Cruz <[EMAIL PROTECTED]>:
> Where can I find best practices for tuning PostgreSQL?
http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html
--
Jean-Max Reymond
CKR Solutions Open Source
Nice France
http://www.ckr-solutions.com