Thanks,
Hasnul
Michael Fuhr wrote:
On Mon, Feb 14, 2005 at 04:01:20PM +0800, Hasnul Fadhly bin Hasan wrote:
Hi,
I am just wondering: by default, autocommit is enabled for every client
connection. The documentation states that we should use BEGIN
and END or COMMIT so as to increase performance by not using autocommit.
My question is, when we use the BEGIN and END statements, is autocommit
unset/disabled?
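The performance point here is about batching many statements into a single transaction instead of letting each one commit on its own; a sketch of the pattern (table and values are illustrative, not from the original post):

```sql
-- Without an explicit BEGIN/COMMIT, each INSERT is its own transaction.
-- Wrapping a batch in one transaction pays the commit cost only once:
BEGIN;
INSERT INTO sometable (id, srcip) VALUES (1, 3232235777);
INSERT INTO sometable (id, srcip) VALUES (2, 3232235778);
COMMIT;
```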
Hi,
I am trying to build a function that would extend triggers in general:
small tidbits that would only track count changes for table rows.
The one I am trying to build would check which column and value should
be tracked.
E.g., below would be the tracker.
CREATE TABLE "public"."" (
"tables"
Hi Richard,
Thanks for the reply. Is that the case? I thought it would apply
the WHERE condition first,
and after that format the output to what we want.
Hasnul
Richard Huxton wrote:
Hasnul Fadhly bin Hasan wrote:
Hi,
Just want to share with all of you a weird thing that I found when I
tested it.
I was doing a query that calls a function long2ip to convert bigints
to IPs,
so the query looks something like this:
select id, long2ip(srcip), long2ip(dstip) from sometable
where timestamp between timestamp
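For reference, the bigint-to-dotted-quad conversion that a long2ip function performs can be sketched in Python; this illustrates the arithmetic only, not the original function's source:

```python
def long2ip(n: int) -> str:
    """Convert an unsigned 32-bit integer to dotted-quad IPv4 notation."""
    if not 0 <= n <= 0xFFFFFFFF:
        raise ValueError("value out of range for an IPv4 address")
    # Extract the four octets, most significant first.
    return ".".join(str((n >> shift) & 0xFF) for shift in (24, 16, 8, 0))

print(long2ip(3232235777))  # 192.168.1.1
```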
Hi,
I am not sure if this is the place to ask this question, but since the
question is about trying to improve performance, I guess I am not that
far off.
My question is: if there were a query design that queried multiple
servers simultaneously, would that improve performance?
To make it c
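For what it's worth, one way to reach a second server from inside a query in that era is the contrib/dblink module; a sketch, where the connection string and table definition are purely illustrative:

```sql
-- contrib/dblink lets one server run a query on another and treat the
-- result as a row source. Connection details and names are illustrative.
SELECT *
FROM dblink('host=server2 dbname=logs user=report',
            'SELECT id, srcip FROM sometable')
     AS remote(id integer, srcip bigint);
```

Whether splitting a query across servers helps depends on whether the work is bounded by CPU/disk on one box or by the network round trips dblink adds.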
Hi Bryan,
Just wondering: I ran vacuumdb but didn't get the information that you
got about the free space, even when I set the verbose option. How did
you get that?
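If the free space map summary is what is meant, my understanding is that in the 7.4/8.0-era releases it is printed at the end of a database-wide verbose vacuum run as a superuser, e.g. from psql:

```sql
-- Run in psql as a superuser; the free space map summary appears at the
-- end of a database-wide verbose vacuum in 7.4/8.0-era releases.
VACUUM VERBOSE;
```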
Thanks,
Hasnul
Bryan wrote:
PostgreSQL is the backbone of our spam filtering system. Currently the
performance is OK. Wanted to kn
Hi,
I was reading a lot about the specs used by those who run
Postgres. I was wondering, is there a more structured method of
determining the required hardware specs? The project that I am
doing can populate about a few million records a day (worst case).
Based on what I read, this
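As a rough starting point for sizing, the stated worst case can be turned into a sustained insert rate; the 5-million-rows-per-day figure below is an assumption for illustration, since the post only says "a few million":

```python
# Back-of-envelope sizing: turn a daily row volume into a sustained
# insert rate. The 5 million/day worst case is an assumed figure.
rows_per_day = 5_000_000
seconds_per_day = 24 * 60 * 60         # 86,400 seconds in a day
rate = rows_per_day / seconds_per_day  # average sustained rows per second
print(round(rate, 1))  # ~57.9 rows/sec
```

Peak load is usually far above the daily average, so a sizing exercise would multiply this by an assumed burst factor before picking disks and RAM.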