On Mon, Oct 11, 2010 at 12:38 AM, AI Rumman wrote:
> I need to join two tables, say TAB_A and TAB_B, where TAB_A is larger than
> TAB_B in size and record count.
> Which table should I put first in the join order?
If it's a regular old inner join it doesn't matter, the query planner
will figure it out. If …
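To see that the planner ignores the textual join order for a plain inner join, you can compare the plans for both orderings; a sketch, assuming a reachable database `$DB` and hypothetical tables `tab_a` (large) and `tab_b` (small):

```shell
# Both statements should yield the same plan: the planner reorders
# inner joins itself, typically hashing the smaller table and
# scanning the larger one. $DB, tab_a, and tab_b are placeholders.
psql "$DB" -c 'EXPLAIN SELECT * FROM tab_a a JOIN tab_b b ON a.id = b.id;'
psql "$DB" -c 'EXPLAIN SELECT * FROM tab_b b JOIN tab_a a ON a.id = b.id;'
```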
I need to join two tables, say TAB_A and TAB_B, where TAB_A is larger than
TAB_B in size and record count.
Which table should I put first in the join order?
Any ideas, please.
On Sunday 10 October 2010 15:41:16 you wrote:
> On 10/11/2010 01:14 AM, Mladen Gogala wrote:
> > I can provide measurements, but from Oracle RDBMS. Postgres doesn't
> > allow tuning of that aspect, so no measurement can be done. Would the
> > numbers from Oracle RDBMS be acceptable?
>
Well, they'd tell me a lot about Oracle's performance as I/O chunk size …
On 11/10/10 11:14, Mladen Gogala wrote:
> On 10/10/2010 8:27 PM, Joshua Tolley wrote:
>> It was asserted that reading bigger chunks would help performance; a
>> response
>> suggested that, at least in Linux, setting readahead on a device would
essentially do the same thing. Or that's what I got from the thread, anyway. …
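For reference, the Linux readahead setting mentioned above is adjusted with blockdev; a sketch, assuming root and a placeholder device:

```shell
# Readahead is expressed in 512-byte sectors; /dev/sda is a placeholder
# for the device backing the PostgreSQL data directory.
blockdev --getra /dev/sda        # show the current readahead
blockdev --setra 4096 /dev/sda   # set ~2 MiB of readahead
```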
On Sun, Oct 10, 2010 at 11:14:43PM -0400, Mladen Gogala wrote:
> The fact is, however, that the question
> about slow sequential scan appears with some regularity on PostgreSQL
> forums.
Definitely. Whether that's because there's something pathologically wrong with
sequential scans, or just be…
On 10/10/2010 8:27 PM, Joshua Tolley wrote:
It was asserted that reading bigger chunks would help performance; a response
suggested that, at least in Linux, setting readahead on a device would
essentially do the same thing. Or that's what I got from the thread, anyway.
I'm interested to know how …
On 10/11/2010 08:27 AM, Joshua Tolley wrote:
One thing a test program would have to take into account is multiple
concurrent users. What speeds up the single user case may well hurt the
multi user case, and the behaviors that hurt single user cases may have been
put in place on purpose to allow …
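One way to measure both cases is pgbench with varying client counts; a sketch, assuming a database `$DB` already initialized with `pgbench -i`:

```shell
# Compare single-client vs concurrent throughput on the same box.
# $DB is a placeholder for an initialized pgbench database.
pgbench -c 1 -T 30 "$DB"          # one client
pgbench -c 16 -j 4 -T 30 "$DB"    # sixteen concurrent clients, four threads
```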
On Mon, Oct 11, 2010 at 06:41:16AM +0800, Craig Ringer wrote:
> On 10/11/2010 01:14 AM, Mladen Gogala wrote:
>
>> I can provide measurements, but from Oracle RDBMS. Postgres doesn't
>> allow tuning of that aspect, so no measurement can be done. Would the
>> numbers from Oracle RDBMS be acceptable?
On Sun, Oct 10, 2010 at 12:14 PM, Mladen Gogala wrote:
>
> In other words, when I batched the sequential scan to do 128-block I/O, it
> was 4 times faster than when I did single-block I/O.
> Does that provide enough evidence and, if not, why not?
These numbers tell us nothing because …
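The single-block vs batched-read difference can be illustrated outside the database with dd, which is roughly what the measurement above did; a minimal, self-contained sketch:

```shell
# Create a 64 MiB test file, then read it back with small and large
# block sizes; the larger block size usually wins on a cold cache.
dd if=/dev/zero of=/tmp/seqscan.dat bs=1M count=64 2>/dev/null
time dd if=/tmp/seqscan.dat of=/dev/null bs=8k 2>/dev/null   # single 8 kB blocks
time dd if=/tmp/seqscan.dat of=/dev/null bs=1M 2>/dev/null   # 128-block batches
```

On a warm page cache the gap shrinks to nothing; for a fair comparison, drop caches between runs or use a file larger than RAM.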
On 10/11/2010 01:14 AM, Mladen Gogala wrote:
I can provide measurements, but from Oracle RDBMS. Postgres doesn't
allow tuning of that aspect, so no measurement can be done. Would the
numbers from Oracle RDBMS be acceptable?
Well, they'd tell me a lot about Oracle's performance as I/O chunk size …
On Sunday 10 October 2010 05:02:03 Віталій Тимчишин wrote:
> 2010/10/10 Neil Whelchel
> > On Saturday 09 October 2010 18:47:34 Scott Marlowe wrote:
> > > On Sat, Oct 9, 2010 at 5:26 PM, Neil Whelchel wrote:
> > > > I know that there have been many discussions on the slowness of …
On 10/10/2010 11:02 AM, Reid Thompson wrote:
On Sat, Oct 9, 2010 at 5:26 PM, Neil Whelchel
On the other hand, I copied a table out of one of my production servers that
has about 60,000 rows with 6 columns (numeric, numeric, bool, bool, timestamp,
text). The first numeric column has numbers evenly spread between 0 and 100 …
Craig Ringer wrote:
On 10/10/2010 9:54 AM, Mladen Gogala wrote:
Unfortunately, the problem is in the rather primitive way PostgreSQL
does I/O. It didn't change in 9.0 so there is nothing you could gain by
upgrading. If you execute strace -o /tmp/pg.out -e read and inspect the file /tmp/pg.out …
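The strace recipe quoted above, spelled out as commands; assumes you can attach to the backend process running the query (the pid is a placeholder):

```shell
# Trace only the read() calls of a running backend into /tmp/pg.out
# for ten seconds, then count them; each read is typically an 8 kB
# request. $PG_BACKEND_PID is a placeholder for the backend's pid.
timeout 10 strace -o /tmp/pg.out -e trace=read -p "$PG_BACKEND_PID"
grep -c '^read(' /tmp/pg.out
```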
On 10/10/2010 6:29 AM, Neil Whelchel wrote:
On Saturday 09 October 2010 23:56:15 Craig Ringer wrote:
On 10/10/2010 11:02 AM, Neil Whelchel wrote:
On Saturday 09 October 2010 18:47:34 Scott Marlowe wrote:
On Sat, Oct 9, 2010 at 5:26 PM, Neil Whelchel wrote:
I know that there have been many …
On 10/10/2010 6:29 PM, Neil Whelchel wrote:
On the other hand, I copied a table out of one of my production servers that
has about 60,000 rows with 6 columns (numeric, numeric, bool, bool, timestamp,
text). The first numeric column has numbers evenly spread between 0 and 100
and it is indexed. I …
On 10/10/2010 7:45 AM, Florian Weimer wrote:
Some people use RDBMSs mostly for the *M* part, to get a consistent
administration experience across multiple applications. And even with
asynchronous commits, PostgreSQL will maintain a consistent state of
the database.
Both Postgres and Oracle have …
2010/10/10 Neil Whelchel
> On Saturday 09 October 2010 18:47:34 Scott Marlowe wrote:
> > On Sat, Oct 9, 2010 at 5:26 PM, Neil Whelchel wrote:
> > > I know that there have been many discussions on the slowness of
> > > count(*) even when an index is involved because the visibility of the row …
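The visibility point above is why an index alone doesn't speed up count(*) in these versions: every candidate row must still be checked in the heap. A sketch, with placeholder names:

```shell
# Even with an index on the counted column, the plan visits the heap
# per row to test visibility, so count(*) stays roughly proportional
# to table size. $DB and big_table are placeholders.
psql "$DB" -c 'EXPLAIN ANALYZE SELECT count(*) FROM big_table;'
```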
* Mladen Gogala:
> I have a logical problem with asynchronous commit. The "commit"
> command should instruct the database to make the outcome of the
> transaction permanent. The application should wait to see whether the
> commit was successful or not. Asynchronous behavior in the commit
> statement …
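For context, PostgreSQL exposes exactly this trade-off per session or per transaction via synchronous_commit; a sketch with placeholder names:

```shell
# With synchronous_commit = off, COMMIT returns before the WAL flush:
# a crash can lose the last few committed transactions, but the
# database state stays consistent. $DB and audit_log are placeholders.
psql "$DB" <<'SQL'
SET synchronous_commit = off;
BEGIN;
INSERT INTO audit_log(msg) VALUES ('fast path');  -- audit_log is a placeholder
COMMIT;
SQL
```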
On 10 October 2010 11:51, AI Rumman wrote:
> I already sent the mail earlier, but added the wrong explain, so I am mailing
> it again.
>
> I have more than 130 records in the crm table and I partitioned the table
> with deleted = 0 as the key.
> It is working fine except that after partitioning the query is taking more time …
I already sent the mail earlier, but added the wrong explain, so I am mailing
it again.
I have more than 130 records in the crm table and I partitioned the table
with deleted = 0 as the key.
It is working fine except that after partitioning the query is taking more time
than the previous one.
I already set constraint_exclusion …
On Saturday 09 October 2010 23:56:15 Craig Ringer wrote:
> On 10/10/2010 11:02 AM, Neil Whelchel wrote:
> > On Saturday 09 October 2010 18:47:34 Scott Marlowe wrote:
> >> On Sat, Oct 9, 2010 at 5:26 PM, Neil Whelchel wrote:
> >>> I know that there have been many discussions on the slowness …
I have more than 130 records in the crm table and I partitioned the table
with deleted = 0 as the key.
It is working fine except that after partitioning the query is taking more time
than the previous one.
I already set constraint_exclusion = on; my DB version is PostgreSQL 8.1.
I added the explain analyze for both …
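To confirm that constraint_exclusion is actually pruning partitions, compare the plans with the setting on; a sketch using the poster's table name and assuming each child carries a CHECK constraint on deleted:

```shell
# If each child table has a CHECK (deleted = ...) constraint, the plan
# should scan only the matching partition rather than all children.
# $DB is a placeholder.
psql "$DB" <<'SQL'
SET constraint_exclusion = on;
EXPLAIN SELECT * FROM crm WHERE deleted = 0;
SQL
```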
On 10/10/2010 2:55 PM, Mladen Gogala wrote:
On 10/10/2010 2:43 AM, Craig Ringer wrote:
Some of the other flavours of non-SQL databases, both those that've been
around forever (PICK/UniVerse/etc, Berkeley DB, Cache, etc) and those
that're new and fashionable (Cassandra, CouchDB, etc), provide some …
On 10/10/2010 9:54 AM, Mladen Gogala wrote:
Unfortunately, the problem is in the rather primitive way PostgreSQL
does I/O. It didn't change in 9.0 so there is nothing you could gain by
upgrading. If you execute strace -o /tmp/pg.out -e read and inspect the
file /tmp/pg.out when the query finishes …