On 01/06/2010 08:49 PM, Greg Smith wrote:
> Yan Cheng Cheok wrote:
>> The time taken to perform a measurement per unit is on the order of ~30
>> milliseconds. We need to record the measurement result for every
>> single unit. Hence, the time taken to record the measurement
>> result shall be far
Greg Smith writes:
> If you're OK with the possibility of losing a measurement in the case of a
> system crash
Then I'd say use synchronous_commit = off for the transactions doing
that, trading durability (the 'D' of ACID) against write
performances. That requires 8.3 at least, and will not fsync
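A minimal sketch of what that looks like per transaction (the table and column
names here are invented for illustration; they are not from the thread):

-- Trade durability for commit latency on just these transactions (8.3+).
BEGIN;
SET LOCAL synchronous_commit = off;   -- applies to this transaction only
INSERT INTO measurement (unit_id, value) VALUES (1, 0.123);
COMMIT;  -- returns before the WAL is flushed; a crash may lose this row,
         -- but the database itself stays consistent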
Thanks and Regards
Yan Cheng CHEOK
--- On Thu, 1/7/10, Greg Smith wrote:
> From: Greg Smith
> Subject: Re: [GENERAL] PostgreSQL Write Performance
> To: "Yan Cheng Cheok"
> Cc: "Dann Corbit" , pgsql-general@postgresql.org
> Date: Thursday, January 7, 2010, 12:49 PM
> Y
Yan Cheng Cheok wrote:
The time taken to perform a measurement per unit is on the order of ~30 milliseconds.
We need to record the measurement result for every single unit. Hence, the
time taken to record the measurement result shall be far less than a
millisecond, so that it will have
On Thu, Jan 7, 2010 at 3:13 AM, Dimitri Fontaine wrote:
> Tim Uckun writes:
>> Is there a command like COPY which will insert the data but skip all
>> triggers and optionally integrity checks.
>
> pg_bulkload does that AFAIK.
>
That's a great utility. Unfortunately since it bypasses the WAL I
c
Tim Uckun writes:
> Is there a command like COPY which will insert the data but skip all
> triggers and optionally integrity checks.
pg_bulkload does that AFAIK.
http://pgbulkload.projects.postgresql.org/
Regards,
--
dim
On Wed, 2010-01-06 at 15:30 +1300, Tim Uckun wrote:
> > I, for one, would loudly and firmly resist the addition of such a
> > feature. Almost-as-fast options such as intelligent re-checking of
>
> Even if it was not the default behavior?
>
> >
> > If you really want to do that, look at the manual
On Tue, 2010-01-05 at 22:29 -0800, Yan Cheng Cheok wrote:
> Thanks for the information. I performed benchmarking on a very simple table, on
> a local database (1 table, 2 fields: one is bigserial, the other is text).
>
>
> INSERT IN
> -Original Message-
> From: Yan Cheng Cheok [mailto:ycch...@yahoo.com]
> Sent: Tuesday, January 05, 2010 10:30 PM
> To: Craig Ringer
> Cc: Dann Corbit; pgsql-general@postgresql.org
> Subject: Re: [GENERAL] PostgreSQL Write Performance
>
> Thanks for th
Tim Uckun wrote:
Is there a command like COPY which will insert the data but skip all
triggers and optionally integrity checks.
I'm curious if it would be worth COPYing the data into dummy tables with
no constraints, and then using INSERT INTO ... SELECT statements to feed
from those tables
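A rough sketch of that staging-table approach (the staging table, target table,
and file path are assumptions, not details from the thread; LIKE copies column
definitions but not triggers or foreign keys):

CREATE TEMP TABLE measurement_staging (LIKE measurement INCLUDING DEFAULTS);
COPY measurement_staging FROM '/tmp/measurements.csv' WITH CSV;
-- Constraint and trigger work happens once, in bulk, at this step:
INSERT INTO measurement SELECT * FROM measurement_staging;
DROP TABLE measurement_staging;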
> Subject: Re: [GENERAL] PostgreSQL Write Performance
> To: "Yan Cheng Cheok"
> Cc: "Dann Corbit" , pgsql-general@postgresql.org
> Date: Tuesday, January 5, 2010, 7:20 PM
> On 5/01/2010 3:30 PM, Yan Cheng Cheok
> wrote:
> >>> What is the actual problem you are t
Thanks for the information. I wrote a plain C program to test the performance.
Its time measurement is very much different from pgAdmin's.
Thanks and Regards
Yan Cheng CHEOK
--- On Wed, 1/6/10, Andres Freund wrote:
> From: Andres Freund
> Subject: Re: [GENERAL] PostgreSQL Write Performance
Tim Uckun wrote:
>> I, for one, would loudly and firmly resist the addition of such a
>> feature. Almost-as-fast options such as intelligent re-checking of
>
> Even if it was not the default behavior?
Even if it was called
COPY (PLEASE BREAK MY DATABASE) FROM ...
... because there are *better
> I, for one, would loudly and firmly resist the addition of such a
> feature. Almost-as-fast options such as intelligent re-checking of
Even if it was not the default behavior?
>
> If you really want to do that, look at the manual for how to disable
> triggers, but understand that you are throwi
Tim Uckun wrote:
>> Technically you *can* disable triggers, including RI checks, but it's VERY
>> unwise and almost completely defeats the purpose of having the checks. In
>> most such situations you're much better off dropping the constraints then
>> adding them again at the end of the load.
>
>
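For anyone wondering what "disable triggers, including RI checks" means at the
SQL level, it is roughly the following (table name assumed), which also shows
why it is so dangerous:

ALTER TABLE measurement DISABLE TRIGGER ALL;  -- superuser; also disables FK enforcement
COPY measurement FROM '/tmp/measurements.csv' WITH CSV;
ALTER TABLE measurement ENABLE TRIGGER ALL;   -- does NOT re-check the rows loaded above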
On Jan 5, 2010, at 3:46 PM, Tim Uckun wrote:
pg_dump has a --disable-triggers option too.
[...]
It doesn't seem like an outrageous expectation that the COPY command
or something similar should have that option.
Well, whether an expectation is "outrageous" or not is a matter of
viewpoint.
>
> Technically you *can* disable triggers, including RI checks, but it's VERY
> unwise and almost completely defeats the purpose of having the checks. In
> most such situations you're much better off dropping the constraints then
> adding them again at the end of the load.
I know that the SQL se
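The drop-and-re-add variant mentioned above, as a sketch (constraint, column,
and table names are hypothetical); re-adding the constraint re-validates every
row once, in bulk:

ALTER TABLE measurement DROP CONSTRAINT fk_measurement_unit;
COPY measurement FROM '/tmp/measurements.csv' WITH CSV;
ALTER TABLE measurement ADD CONSTRAINT fk_measurement_unit
    FOREIGN KEY (unit_id) REFERENCES unit (unit_id);  -- all existing rows checked here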
On 6/01/2010 6:21 AM, Tim Uckun wrote:
You might use the copy command instead of insert, which is far faster.
If you want the fastest possible inserts, then probably copy is the way
to go instead of insert.
Here is copy command via API:
http://www.postgresql.org/docs/current/static/libpq-copy.html
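At the SQL level the same path looks like this (libpq's PQputCopyData and
psql's \copy both feed a COPY ... FROM STDIN command; the table and columns
below are assumptions):

COPY measurement (unit_id, value) FROM STDIN WITH CSV;
1,0.123
2,0.456
\.
-- From a client machine, psql's \copy streams a local file over the connection:
-- \copy measurement (unit_id, value) FROM 'measurements.csv' WITH CSV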
Hi,
On Tuesday 05 January 2010 04:36:10 Yan Cheng Cheok wrote:
> I make the following single write operation through pgAdmin :
...
> It takes 16ms to write a single row according to "Query Editor" (bottom
> right corner)
In my experience the times presented by pgAdmin vary wildly and seldom do
Tim Uckun wrote:
Is there a command like COPY which will insert the data but skip all
triggers and optionally integrity checks.
Nope, skipping integrity checks is MySQL talk. When doing a bulk
loading job, it may make sense to drop constraints and triggers though;
there's more notes on th
> You might use the copy command instead of insert, which is far faster.
> If you want the fastest possible inserts, then probably copy is the way
> to go instead of insert.
> Here is copy command via API:
> http://www.postgresql.org/docs/current/static/libpq-copy.html
> Here is copy command via SQL
On 5/01/2010 3:30 PM, Yan Cheng Cheok wrote:
What is the actual problem you are trying to solve?
I am currently developing a database system for a high speed measurement
machine.
The time taken to perform a measurement per unit is on the order of ~30 milliseconds.
We need to record the measure
On 5 Jan 2010, at 8:30, Yan Cheng Cheok wrote:
>>> What is the actual problem you are trying to solve?
>
> I am currently developing a database system for a high speed measurement
> machine.
>
> The time taken to perform a measurement per unit is on the order of ~30
> milliseconds. We need to record
> -Original Message-
> From: Yan Cheng Cheok [mailto:ycch...@yahoo.com]
> Sent: Monday, January 04, 2010 11:30 PM
> To: Dann Corbit
> Cc: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] PostgreSQL Write Performance
>
> >> What is the actual problem you
t file. However, using a flat file is quite a mess
when it comes to generating reports for customers.
Thanks and Regards
Yan Cheng CHEOK
--- On Tue, 1/5/10, Dann Corbit wrote:
> From: Dann Corbit
> Subject: Re: [GENERAL] PostgreSQL Write Performance
> To: "Yan Cheng Cheok"
> -Original Message-
> From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
> ow...@postgresql.org] On Behalf Of Yan Cheng Cheok
> Sent: Monday, January 04, 2010 9:05 PM
> To: Scott Marlowe
> Cc: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] PostgreSQL Write Performance
Yan Cheng Cheok wrote:
Instead of sending 1000++ INSERT statements in one shot, which would require my
application to keep track of the INSERT statements,
is it possible that I can tell PostgreSQL,
"OK. I am sending you INSERT statements. But do not perform any actual write
operation. Only perform the write operation when the pending statements have
reached 1000"?
Thanks and Regards
Yan Cheng CHEOK
--- On Tue, 1/5/10, Scott Marlowe wrote:
> From: Scott Marlowe
> Subject: Re: [GENERAL] PostgreSQL Write Performance
> To: "Yan Cheng Cheok"
> Cc: pgsql-general@postgresql.org
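The batching asked about above is essentially what a transaction already
provides; a sketch, with table and column names assumed:

BEGIN;
INSERT INTO measurement (unit_id, value) VALUES (1, 0.123);
INSERT INTO measurement (unit_id, value) VALUES (2, 0.456);
-- ... up to ~1000 INSERTs issued by the application ...
COMMIT;  -- one WAL flush covers the whole batch
-- A multi-row INSERT (8.2+) also cuts down the round trips:
INSERT INTO measurement (unit_id, value) VALUES (1, 0.123), (2, 0.456), (3, 0.789);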
On Mon, Jan 4, 2010 at 8:36 PM, Yan Cheng Cheok wrote:
> I am not sure whether I am benchmarking the correct way.
>
> I have the following table:
>
> CREATE TABLE measurement_type
> (
> measurement_type_id bigserial NOT NULL,
> measurement_type_name text NOT NULL,
> CONSTRAINT pk_measurement_type_id PRIMARY KEY (measurement_type_id),
I am not sure whether I am benchmarking the correct way.
I have the following table:
CREATE TABLE measurement_type
(
measurement_type_id bigserial NOT NULL,
measurement_type_name text NOT NULL,
CONSTRAINT pk_measurement_type_id PRIMARY KEY (measurement_type_id),
CONSTRAINT measurem
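One way to benchmark insert speed on a table like this without pgAdmin's
display overhead is a single server-side bulk insert timed from psql; a sketch
(the row count is arbitrary):

\timing on
INSERT INTO measurement_type (measurement_type_name)
SELECT 'type_' || g FROM generate_series(1, 100000) AS g;
-- Compare against the same rows issued as individual, individually committed
-- INSERTs to see how much of the per-row cost is round trips and commits.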