On Tue, Oct 2, 2012 at 9:18 AM, Merlin Moncure wrote:
> > Yes but that is irrelevant to the discussion. I am comparing the speed of
> > repeated table existence checks with the speed of repeated exception
> > blocks that access said table.
>
> Both approaches have to do a catalog scan (even if
On Mon, Oct 1, 2012 at 3:58 PM, Moshe Jacobson wrote:
> On Mon, Oct 1, 2012 at 12:22 PM, Merlin Moncure wrote:
>>
>> >> *) Functions without exception blocks are faster than those with.
>> >> *) Therefore, CREATE/IF NOT EXISTS is probably faster (test to be sure)
>> >
>> > I don't think that can be assumed by your premise above.
On Mon, Oct 1, 2012 at 12:22 PM, Merlin Moncure wrote:
> >> *) Functions without exception blocks are faster than those with.
> >> *) Therefore, CREATE/IF NOT EXISTS is probably faster (test to be sure)
> >
> > I don't think that can be assumed by your premise above. Essentially we
> > are comparing the speed of repeated table existence checks with the speed
> > of repeated exception blocks that access said table.
On Mon, Oct 1, 2012 at 11:22 AM, Merlin Moncure wrote:
>> We currently do use permanent tables using pg_backend_pid(). It's because of
>> the connection pooling specifically that we are having problems with stale
>> data. I have been unable to find a way to automatically clear that data upon
>> st
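The stale-data problem described here (per-backend rows keyed by pg_backend_pid() surviving when a pooled connection is reused) is commonly handled by having the pool reset session state whenever a connection is recycled. A minimal sketch, assuming a pooler such as pgbouncer that lets you configure a reset query:

```sql
-- Run by the pool when a connection is handed back (e.g. pgbouncer's
-- server_reset_query). Drops temp tables, resets session GUCs, and
-- deallocates prepared statements in one statement:
DISCARD ALL;
```

With this in place, session-scoped state (temp tables or session variables) can never leak from one logical client session into the next.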
On Mon, Oct 1, 2012 at 10:21 AM, Moshe Jacobson wrote:
> Merlin,
>
> On Mon, Oct 1, 2012 at 10:28 AM, Merlin Moncure wrote:
>>
>>
>> Couple points:
>> *) Functions without exception blocks are faster than those with.
>
>
> Clearly.
>
>>
>> *) Therefore, CREATE/IF NOT EXISTS is probably faster (test to be sure)
Merlin,
On Mon, Oct 1, 2012 at 10:28 AM, Merlin Moncure wrote:
>
> Couple points:
> *) Functions without exception blocks are faster than those with.
>
Clearly.
> *) Therefore, CREATE/IF NOT EXISTS is probably faster (test to be sure)
>
I don't think that can be assumed by your premise above.
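The two approaches being compared can be sketched as follows (the table and function names are illustrative, not from the thread). The first traps the error with an exception block, which forces a subtransaction on every call; the second relies on CREATE TABLE IF NOT EXISTS (available since PostgreSQL 9.1) and needs no exception handler:

```sql
-- Approach 1: exception block. Every call sets up a subtransaction
-- so the duplicate_table error can be caught and ignored.
CREATE OR REPLACE FUNCTION ensure_session_table_exc() RETURNS void AS $$
BEGIN
    CREATE TEMP TABLE session_vars (name text PRIMARY KEY, value text);
EXCEPTION WHEN duplicate_table THEN
    NULL;  -- table already exists; nothing to do
END;
$$ LANGUAGE plpgsql;

-- Approach 2: IF NOT EXISTS. No exception block, no subtransaction;
-- both variants still check the catalogs for the table's existence.
CREATE OR REPLACE FUNCTION ensure_session_table_ine() RETURNS void AS $$
BEGIN
    CREATE TEMP TABLE IF NOT EXISTS session_vars
        (name text PRIMARY KEY, value text);
END;
$$ LANGUAGE plpgsql;
```

As the thread notes, both variants must scan the catalogs either way, so any difference should be measured rather than assumed.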
On Mon, Oct 1, 2012 at 8:36 AM, Moshe Jacobson wrote:
> I am working on an audit logging trigger that gets called for every row
> inserted, updated or deleted on any table.
> For this, I need to store a couple of temporary session variables such as
> the ID of the user performing the change, which can be set at the start of
> the session.
I am working on an audit logging trigger that gets called for every row
inserted, updated or deleted on any table.
For this, I need to store a couple of temporary session variables such as
the ID of the user performing the change, which can be set at the start of
the session.
Until now I have been
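One common way to hold such per-session values without any table at all is a custom configuration variable via set_config()/current_setting(). The "audit.user_id" name below is illustrative (any two-part name works; on servers before 9.2 the class must be declared in custom_variable_classes):

```sql
-- At the start of the session, store the acting user's ID;
-- the third argument false makes it session-scoped, not transaction-scoped.
SELECT set_config('audit.user_id', '42', false);

-- Inside the audit trigger function, read it back (returned as text):
SELECT current_setting('audit.user_id');
```

Because the value lives in the backend's session state, it disappears automatically when the session ends, avoiding the stale-data problem that permanent tables keyed by backend PID have under connection pooling.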
On Thu, Feb 09, 2006 at 10:52:03AM -0800, Uwe C. Schroeder wrote:
> Depending on your keys neither.
> Rather let the DB handle the resultset. count(*) is quite slow.
>
> How about something like
>
> select blablabla from _complex_query order by _key_ (optional DESC or ASC)
> OFFSET xxx LIMIT 15
Depending on your keys, neither.
Rather, let the DB handle the result set; count(*) is quite slow.
How about something like
select blablabla from _complex_query order by _key_ (optional DESC or ASC)
OFFSET xxx LIMIT 15
where your offset would be a parameter from the php side and is basically the
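Putting the suggestion above into a concrete query (keeping the thread's placeholder names _complex_query, _key_, and blablabla), with the total row count fetched in the same round trip via a window function (PostgreSQL 8.4+):

```sql
-- Page 3 of 15-row pages: offset = (page - 1) * 15 = 30.
-- count(*) OVER () is evaluated before LIMIT, so every returned row
-- carries the full result-set count for rendering the page links.
SELECT blablabla,
       count(*) OVER () AS total_rows
FROM _complex_query
ORDER BY _key_ DESC
OFFSET 30 LIMIT 15;
```

Note that large OFFSET values still force the server to generate and discard all skipped rows, so for deep pagination an ORDER BY on a key with a WHERE _key_ < last-seen-value condition scales better.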
Silas Justiniano wrote:
Hello all!
I'm performing a query that returns hundreds of records... but I
need to cut them into pages of 15 items! (using PHP)
So, is it faster:
select blablabal from _complex_query
if (count($result) > 15) show_pages;
show_only_15_rows($result);
or:
select count(*) from _complex_query
Hi,
Second option.
For the first option, if your query returns, say, 10,000 rows, then PHP
has to fetch all 10,000 rows from the database before anything else
happens, even though you're only displaying 15.
Silas Justiniano wrote:
Hello all!
I'm performing a query that returns hundreds of records... but I
need to cut them into pages of 15 items! (using PHP)
Hello all!
I'm performing a query that returns hundreds of records... but I
need to cut them into pages of 15 items! (using PHP)
So, is it faster:
select blablabal from _complex_query
if (count($result) > 15) show_pages;
show_only_15_rows($result);
or:
select count(*) from _complex_query
On Fri, Dec 10, 2004 at 06:15:50PM -0800, Eric Brown wrote:
> Option 1:
> create table a (id serial, hosts text[]);
>
> OR
>
> Option 2:
> create table a (id serial primary key);
> create table hosts (id int references a, host text);
Option 2 will save a lot of developer & query time, as it's much more
standard.
Option 1:
create table a (id serial, hosts text[]);
OR
Option 2:
create table a (id serial primary key);
create table hosts (id int references a, host text);
Table 'a' will have about 500,000 records. There will probably be about
20 reads for every write. Each id has approximately 1.1 hosts. If I use
the arr
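For the read-heavy workload described (roughly 20 reads per write, ~1.1 hosts per id), Option 2 turns the per-id lookup into an ordinary indexed scan rather than an array search. A sketch, with the supporting index added (the index name is illustrative):

```sql
-- Option 2, normalized: the REFERENCES clause requires a primary key
-- on a.id, and an index on hosts.id serves the frequent reads.
CREATE TABLE a (id serial PRIMARY KEY);
CREATE TABLE hosts (id int REFERENCES a, host text);
CREATE INDEX hosts_id_idx ON hosts (id);

-- All hosts for one record: a plain indexed lookup.
SELECT host FROM hosts WHERE id = 123;
```

The normalized form also lets individual hosts be constrained, indexed, and updated without rewriting a whole array value.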
On Mon, Dec 11, 2000 at 04:28:24AM +0100, some SMTP stream spewed forth:
> Hi,
>
> I'm wondering what might be faster on SELECTs: a column with an index
> which is NOT NULL and takes the value 0, or a column which can take
> the NULL value instead of 0, also with an index.
>
> My feeling says that 0 and NOT NULL should be a lot faster, but
> perhaps that's not true?
Hi,
I'm wondering what might be faster on SELECTs: a column with an index
which is NOT NULL and takes the value 0, or a column which can take
the NULL value instead of 0, also with an index.
My feeling says that 0 and NOT NULL should be a lot faster, but
perhaps that's not true?
bye
Al
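Two points worth knowing here: in PostgreSQL a NULL costs essentially nothing in the row (it is tracked in the null bitmap), whereas an int 0 occupies its full 4 bytes; and if most rows hold the "empty" value, a partial index sidesteps the question entirely by indexing only the interesting rows. A sketch with illustrative names:

```sql
-- Most rows hold the "empty" value (here 0); the partial index stays
-- small because it excludes them entirely.
CREATE TABLE t (id serial PRIMARY KEY, val int NOT NULL DEFAULT 0);
CREATE INDEX t_val_idx ON t (val) WHERE val <> 0;

-- Lookups for real values can use the small partial index:
SELECT id FROM t WHERE val = 5;
```

The same WHERE val IS NOT NULL predicate works for the nullable-column variant, so the SELECT performance of the two designs ends up very similar; the choice is better driven by semantics (does "no value" mean zero, or unknown?).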