> -Original Message-
> From: Yan Cheng Cheok [mailto:ycch...@yahoo.com]
> Sent: Tuesday, January 05, 2010 10:30 PM
> To: Craig Ringer
> Cc: Dann Corbit; pgsql-general@postgresql.org
> Subject: Re: [GENERAL] PostgreSQL Write Performance
>
> Thanks for the information. I perform benchmarking
Tim Uckun wrote:
Is there a command like COPY which will insert the data but skip all
triggers and, optionally, integrity checks?
I'm curious if it would be worth COPYing the data into dummy tables with
no constraints, and then using INSERT INTO ... SELECT statements to feed
from those tables
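A minimal sketch of that staging-table idea; table and file names here are hypothetical:

```sql
-- Bulk-load into an unconstrained temp table, then let constraints and
-- triggers fire once when the rows hit the real table.
BEGIN;
CREATE TEMP TABLE measurement_staging (LIKE measurement INCLUDING DEFAULTS);
COPY measurement_staging FROM '/tmp/measurements.csv' WITH CSV;
INSERT INTO measurement SELECT * FROM measurement_staging;
COMMIT;
```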
Thanks for the information. I performed benchmarking on a very simple table in
a local database (one table, two fields: one bigserial, the other text).
INSERT INTO measurement_type(measurement_type_name) VALUES ('Hello')
Thanks for the information. I wrote a plain C program to test the performance.
Its time measurements are very different from pgAdmin's.
Thanks and Regards
Yan Cheng CHEOK
--- On Wed, 1/6/10, Andres Freund wrote:
> From: Andres Freund
> Subject: Re: [GENERAL] PostgreSQL Write Performance
> To
Stephen Frost writes:
> This raises the question of whether this is something PG should just allow
> rather than denying the update.
AFAICT, throwing a permissions error for "UPDATE SET foo = foo" is
required by the SQL standard. There's nothing in there about "it's
okay depending on what you assign to
I am getting this WARNING in the log file:
Jan 6 11:19:54 dev04 postgres[14624]: [1622-1] DEBUG: name: unnamed;
blockState: DEFAULT; state: INPROGR, xid/subid/cid: 0/1/0, nestlvl: 1,
children:
Jan 6 11:19:54 dev04 postgres[14624]: [1623-1] LOG: could not send data to
client: Broken pipe
Jan 6 11
Stephen Frost wrote:
> * Craig Ringer (cr...@postnewspapers.com.au) wrote:
>> The issue with column privs is that Hibernate lists all columns, even
>> ones it hasn't set or altered, in the INSERT and UPDATE statements it
>> issues. Column privileges are checked based on the INSERT or UPDATE
>> colu
* Craig Ringer (cr...@postnewspapers.com.au) wrote:
> The issue with column privs is that Hibernate lists all columns, even
> ones it hasn't set or altered, in the INSERT and UPDATE statements it
> issues. Column privileges are checked based on the INSERT or UPDATE
> column list, not the actual val
I have a function that's working for what I needed it to do, but now I
need to call it for every id in a different table... and I'm not sure
what the syntax should be.
Here is an example:
create or replace function test(uid integer, out vhrs integer, out phrs
integer, out fhrs integer)
retur
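One way to call such an OUT-parameter function for every id; the table name `some_table` below is hypothetical, assuming it has an integer `id` column:

```sql
-- Expand the composite result in the select list:
SELECT t.id, (test(t.id)).vhrs, (test(t.id)).phrs, (test(t.id)).fhrs
FROM some_table t;

-- Or evaluate the function once per row and expand afterwards:
SELECT id, (r).vhrs, (r).phrs, (r).fhrs
FROM (SELECT id, test(id) AS r FROM some_table) s;
```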
Craig Ringer wrote:
> One of the apps using the database uses the Hibernate ORM system for
> Java. While excellent in most ways, it's giving me some trouble when it
> comes to inserts/updates on tables with column privileges.
>
> It's easy enough to tell Hibernate that certain columns are not
> in
Tim Uckun wrote:
>> I, for one, would loudly and firmly resist the addition of such a
>> feature. Almost-as-fast options such as intelligent re-checking of
>
> Even if it was not the default behavior?
Even if it was called
COPY (PLEASE BREAK MY DATABASE) FROM ...
... because there are *better
> I, for one, would loudly and firmly resist the addition of such a
> feature. Almost-as-fast options such as intelligent re-checking of
Even if it was not the default behavior?
>
> If you really want to do that, look at the manual for how to disable
> triggers, but understand that you are throwi
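For reference, the trigger-disabling approach the manual describes looks roughly like this; it is unsafe because it also disables foreign-key enforcement, and nothing re-checks the loaded rows afterwards. Names are hypothetical:

```sql
-- Requires superuser; FK triggers are disabled along with user triggers.
ALTER TABLE measurement DISABLE TRIGGER ALL;
COPY measurement FROM '/tmp/measurements.csv' WITH CSV;
ALTER TABLE measurement ENABLE TRIGGER ALL;
```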
On Wed, 30 Dec 2009 20:04:51 -0600,
Seb wrote:
> On Wed, 30 Dec 2009 19:39:15 -0600,
> Seb wrote:
> CREATE RULE footwear_nothing_upd AS
>> ON UPDATE TO footwear DO INSTEAD NOTHING; CREATE RULE
>> footwear_newshoelaces_upd AS ON UPDATE TO footwear WHERE NEW.sl_name
>> <> OLD.sl_name AND OLD.sl_n
Tim Uckun wrote:
>> Technically you *can* disable triggers, including RI checks, but it's VERY
>> unwise and almost completely defeats the purpose of having the checks. In
>> most such situations you're much better off dropping the constraints and then
>> adding them again at the end of the load.
>
>
Poking around in the 8.5 Devel Documentation section 8.13.1, the XML Type, I
noticed that:
"The xml type does not validate input values against a document type
declaration (DTD), even when the input value specifies a DTD"
I suppose the same is true in the case that the XML should validate agains
On Jan 5, 2010, at 3:46 PM, Tim Uckun wrote:
pg_dump has a --disable-triggers option too.
[...]
It doesn't seem like an outrageous expectation that the COPY command
or something similar should have that option.
Well, whether an expectation is "outrageous" or not is a matter of
viewpoint.
Jeff Ross wrote:
I'm trying to put a new server on line and I'm having a problem
getting any kind of decent performance from it. pgbench yields around
4000 tps until scale and clients both are above 21, then I see the
following:
NOTICE: ALTER TABLE / ADD PRIMARY KEY will create implicit index
>
> Technically you *can* disable triggers, including RI checks, but it's VERY
> unwise and almost completely defeats the purpose of having the checks. In
> most such situations you're much better off dropping the constraints and then
> adding them again at the end of the load.
I know that the SQL se
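The drop-and-re-add pattern sketched with hypothetical names; re-adding the foreign key revalidates the whole table in one pass:

```sql
ALTER TABLE measurement DROP CONSTRAINT measurement_type_fk;
COPY measurement FROM '/tmp/measurements.csv' WITH CSV;
-- Re-adding the constraint checks every row in a single scan:
ALTER TABLE measurement
  ADD CONSTRAINT measurement_type_fk
  FOREIGN KEY (measurement_type_id)
  REFERENCES measurement_type (measurement_type_id);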
# jayadevan.maym...@ibsplc.com / 2010-01-04 10:03:29 +0530:
> From: neuhauser+pgsql-general#postgresql@sigpipe.cz
> >
> > this fails with "duplicate key value":
> >
> > CREATE TABLE x (
> > i INT NOT NULL UNIQUE
> > );
> > INSERT INTO x (i) VALUES (1), (2), (3);
> > UP
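With the deferrable unique constraints mentioned elsewhere in this digest (new in 8.5alpha3), a shift-by-one update of this kind can succeed; a sketch:

```sql
CREATE TABLE x (
    i INT NOT NULL UNIQUE DEFERRABLE INITIALLY DEFERRED
);
INSERT INTO x (i) VALUES (1), (2), (3);
-- Transient duplicates during the statement no longer abort it;
-- uniqueness is enforced at commit.
UPDATE x SET i = i + 1;
```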
On 6/01/2010 6:21 AM, Tim Uckun wrote:
You might use the copy command instead of insert, which is far faster.
If you want the fastest possible inserts, then probably copy is the way
to go instead of insert.
Here is copy command via API:
http://www.postgresql.org/docs/current/static/libpq-copy.htm
Hi,
I'm trying to put a new server on line and I'm having a problem getting any
kind of decent performance from it. pgbench yields around 4000 tps until
scale and clients both are above 21, then I see the following:
...
218 tuples done.
219 tuples done.
220 tuples done.
set prima
Hi,
On Tuesday 05 January 2010 04:36:10 Yan Cheng Cheok wrote:
> I make the following single write operation through pgAdmin :
...
> It takes 16ms to write a single row according to "Query Editor" (bottom
> right corner)
In my experience the times presented by pgAdmin vary wildly and seldom do
(*Sorry if this posts twice, I sent it from the wrong account the first
time*)
I have a function that's working for what I needed it to do, but now I
need to call it for every id in a different table... and I'm not sure
what the syntax should be.
Here is an example:
create or replace functi
Tim Uckun wrote:
Is there a command like COPY which will insert the data but skip all
triggers and, optionally, integrity checks?
Nope, skipping integrity checks is MySQL talk. When doing a bulk
loading job, it may make sense to drop constraints and triggers, though;
there are more notes on th
Tim Uckun wrote:
I guess the next thing to try is to tail the log file and create a new
log file with the timestamps.
See, told you it was harder than it looked :)
I knew there was a gotcha here in the seemingly easy way to approach
this but just couldn't remember the details of why it fel
> You might use the copy command instead of insert, which is far faster.
> If you want the fastest possible inserts, then probably copy is the way
> to go instead of insert.
> Here is copy command via API:
> http://www.postgresql.org/docs/current/static/libpq-copy.html
> Here is copy command via SQ
>
> If that works, great. I'm not sure if you'll run afoul of output buffering
> in this situation. Clearly you've got the right idea, just need to make
> sure it behaves as you expect and doesn't clump the line reads into larger
> chunks.
Actually I could not get it to send the output to the pi
> "JRP" == John R Pierce writes:
>> effective_cache_size = 128MB
JRP> that's rather small unless your system is very memory
JRP> constrained. assuming postgres is the primary disk IO consumer
JRP> on this system, take a look at the 'cached' value in top or
JRP> whatever af
On 05/01/2010 20:36, Guillaume Lelarge wrote:
> On Tue, Jan 5, 2010 at 6:24 PM, Guillaume Lelarge
> wrote:
> [...]
>>> Is there a good reason to set PGSTAT_ACTIVITY_SIZE at 256 and is my
>>> only option to recompile the server? Is there a practical
>>> limit/drawback to making the variable say 4 o
On 05-01-10 18:00, sabrina miller wrote:
You can also use inheritance from animal; in that case you can partition
the different animals into different tables, each with its special
attributes.
then you will have:
create table animal(
id serial,
name varchar(20),
age integer
)
On Tue, Jan 5, 2010 at 6:24 PM, Guillaume Lelarge
wrote:
[...]
>> Is there a good reason to set PGSTAT_ACTIVITY_SIZE at 256 and is my
>> only option to recompile the server? Is there a practical
>> limit/drawback to making the variable say 4 or 8 times the default?
[...]
> On current releases, it
On Tue, Jan 5, 2010 at 12:49 PM, Merlin Moncure wrote:
> is this behavior correct?
>
> create or replace function error() returns text as
> $$
> begin
> raise exception 'test!';
> end;
> $$ language plpgsql;
>
>
> create or replace function test() returns text as
> $$
> my $res = spi_query("sele
On 05/01/2010 18:02, frank joerdens wrote:
> From my angle, it looks as if the default for PGSTAT_ACTIVITY_SIZE is
> too small, or rather that it ought to be configurable at least, so
> that longer current_query strings that are shown via
> pg_stat_get_backend_activity() in the system view pg_st
is this behavior correct?
create or replace function error() returns text as
$$
begin
raise exception 'test!';
end;
$$ language plpgsql;
create or replace function test() returns text as
$$
my $res = spi_query("select error()"); # this error is ignored
my $res = spi_query("something stupid
Milan Zamazal wrote:
PS> and the value of effective_cache_size?
effective_cache_size = 128MB
that's rather small unless your system is very memory constrained.
assuming postgres is the primary disk IO consumer on this system, take a
look at the 'cached' value in top or whatever after it
From my angle, it looks as if the default for PGSTAT_ACTIVITY_SIZE is
too small, or rather that it ought to be configurable at least, so
that longer current_query strings that are shown via
pg_stat_get_backend_activity() in the system view pg_stat_activity
aren't truncated.
The use case is to wat
You can also use inheritance from animal; in that case you can partition
the different animals into different tables, each with its special
attributes.
then you will have:
create table animal(
id serial,
name varchar(20),
age integer
);
create table elephant
(some_attr_that_
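A hypothetical completion of that child table; the attribute name is made up:

```sql
-- elephant inherits id, name and age from animal.
CREATE TABLE elephant (
    trunk_length_cm integer
) INHERITS (animal);

INSERT INTO elephant (name, age, trunk_length_cm) VALUES ('Dumbo', 4, 150);
SELECT * FROM animal;       -- includes elephant rows (common columns only)
SELECT * FROM ONLY animal;  -- excludes child tables
```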
> Ah, there it is... but now what do I do with it to disable the auto-start?
sudo launchctl unload -w blahblahblah.plist
The -w option causes it to not only unload the item, but also write a
key into it which will stop it from loading at launch.
--
Scott Ribe
scott_r...@killerbytes.com
http:/
shulkae wrote:
We have a few tables which we would like to convert to XML and store.
Another requirement is to convert the stored XML back to the
original tables; this helps us to clone a system. I was thinking of
using the Perl XML::Simple module to generate XML files.
How do I then re-create
> "PS" == Pavel Stehule writes:
PS> and the value of effective_cache_size?
effective_cache_size = 128MB
PS> what is CREATE INDEX stament for index?
create index foo2_value_idx on foo2(value);
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to you
howardn...@selestial.com wrote:
Happy new year everyone!
I am trying to create a column to extract the file extension from a
column in a table and set another column to this value. E.g. foo.doc
-> doc
I am using the regex E'\\.([[:alnum:]])*$', but I cannot seem to find
the correct update c
2010/1/5 Milan Zamazal :
>> "PS" == Pavel Stehule writes:
>
> PS> Have you original values random_page_cost and seq_page_cost in
> PS> postgres.conf?
>
> Yes. To be sure I uncommented the values in postgresql.conf
>
> seq_page_cost = 1.0 # measured on an arbitrary sca
Adrian Klaver writes:
> From what I could see in the source code
> (src/backend/utils/adt/formatting.c) the year portion of the string is
> not run through the FM modifier. A fix would mean a patch to the above
> AFAIK.
Should it be? Can anyone check how this works on Oracle?
> "PS" == Pavel Stehule writes:
PS> Have you original values random_page_cost and seq_page_cost in
PS> postgres.conf?
Yes. To be sure I uncommented the values in postgresql.conf
seq_page_cost = 1.0 # measured on an arbitrary scale
random_page_cost = 4.0
2010/1/5 Milan Zamazal :
>> "PS" == Pavel Stehule writes:
>
> PS> please EXPLAIN ANALYZE Pavel
>
> I see, I'm sorry. Here are the results:
Have you original values random_page_cost and seq_page_cost in postgres.conf?
it is strange.
Pavel
>
> set enable_seqscan = on;
> explain analy
On Tue, Jan 5, 2010 at 3:30 PM, Antonio Goméz Soto
wrote:
> Hello,
>
> I have a column in a table that contains the name of another table,
> and the id in that table.
>
> I would like to use this in a join statement. Is that possible?
not possible I'm afraid.
But have a look at concept of inheri
erobles writes:
> I compiled Postgres 8.4.0 on SCO OpenServer, but when I want to install
> it, I get the following error:
> ./zic: wild compilation-time specification of zic_t
> What can I do to fix that error?
Get a modern operating system ... one with a working 64-bit integer
type. (Actually,
Hello,
I have a column in a table that contains the name of another table,
and the id in that table.
I would like to use this in a join statement. Is that possible?
example:
create table animal (
id serial,
table_name varchar(8) default 'monkey' CHECK (table_name IN ('monkey',
'elephant')),
On Saturday 02 January 2010 10:14:07 am Andrus wrote:
> Command:
>
> select to_char(DATE'2009-1-1','FMDD.FMMM.FMYY')
>
> Result observed:
>
> 1.1.09
>
> Result expected:
>
> 1.1.9
>
> How to fix ?
>
> Andrus.
>From what I could see in the source code (src/backend/utils/adt/formatting.c)
the year
Hi,
I've just realised that I'm performing the same rewrite on lots of my
queries to get performance reasonable. They take the form of something
like:
SELECT a.x, b.y, COUNT(*) AS n
FROM foo a, bar b
WHERE a.z = b.z
GROUP BY a.x, b.y;
And I rewrite them to:
SELECT a.x, b.y, SUM(b.cou
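The rewritten query is cut off above; guessing at the intent, it presumably pre-aggregates bar before the join, something like:

```sql
-- Hypothetical reconstruction: aggregate bar first, then join the
-- (much smaller) result. The column name cnt is made up.
SELECT a.x, b.y, SUM(b.cnt) AS n
FROM foo a
JOIN (SELECT z, y, COUNT(*) AS cnt FROM bar GROUP BY z, y) b
  ON a.z = b.z
GROUP BY a.x, b.y;
```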
Merlin,
There is a set of databases whose columns are differently named and
have different meanings, but which are processed similarly (the data
is all floats, and some dot products need to be made).
The processing to be done can all be described by other tables which
provide coefficients based on the
Happy new year everyone!
I am trying to create a column to extract the file extension from a
column in a table and set another column to this value. E.g. foo.doc -> doc
I am using the regex E'\\.([[:alnum:]])*$', but I cannot seem to find
the correct update combination.
My best attempt to d
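One way the update might look; note that the regex quoted above puts the quantifier outside the capture group, so it would capture only the last character of the extension. Table and column names here are hypothetical:

```sql
-- substring(text FROM pattern) returns the first capture group.
UPDATE files
SET extension = substring(filename FROM E'\\.([[:alnum:]]+)$')
WHERE filename ~ E'\\.[[:alnum:]]+$';
```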
> "GJ" == Grzegorz Jaśkiewicz writes:
GJ> Do you seriously need to walk the user through a couple of million
GJ> rows of data?
Typically not. Data can be of any size. Some tables may be large and
I'd like to understand what happens. It is a general data browser.
> "AL" == Albe Laurenz writes:
AL> Did you try to reduce the cursor_tuple_fraction parameter?
No, good idea, thanks. It helps.
The question is whether it's a good idea to reduce cursor_tuple_fraction
universally, without knowing the table size before (and I'm not going to
use SELECT CO
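For reference, a sketch of what reducing the parameter looks like; lowering it tells the planner that only a small fraction of the cursor's result will be fetched, which favors fast-start (e.g. index scan) plans over a full sort:

```sql
SET cursor_tuple_fraction = 0.01;  -- default is 0.1
BEGIN;
DECLARE c CURSOR FOR SELECT * FROM foo2 ORDER BY value;
FETCH 100 FROM c;
CLOSE c;
COMMIT;
```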
> "PS" == Pavel Stehule writes:
PS> please EXPLAIN ANALYZE Pavel
I see, I'm sorry. Here are the results:
set enable_seqscan = on;
explain analyze declare c cursor for select * from foo2 order by value;
QUERY PLAN
I actually tackled a problem very much like this in the distant past with a
different DB. I think one of the practical questions you have to ask is
whether or not you really need all that detailed data, or would storing
summarized data serve.
If (for example) the components are fabricated on w
On Tue, Jan 5, 2010 at 8:53 AM, Steve White wrote:
> Hi,
>
> I ran into a roadblock at the very bottom of a fairly large database
> design implementation. It will be at least messy to fix, unless there is
> a neat solution.
>
> The roadblock is:
>
> There is a record, which may be from any of a s
On Mon, Jan 04, 2010 at 12:45:00PM -0500, Tom Lane wrote:
> Sam Mason writes:
> > Um, I think the OP is right. Notice he does: ...
> > showing that PG is auto-magically inserting a cast from BIGINT to OID.
>
> Yes, as a quick look into pg_cast will show you, bigint -> oid is an
> implicit cast:
2010/1/4 Daniel Verite :
> David Fetter wrote:
>
>> The DEFERRED uniqueness constraints in 8.5alpha3 fix this problem
>
> That fix has a drawback: when the unique constraint is violated, the rest of
> the transaction runs with data that is somehow corrupted, with duplicate
> values being vis
8.4.2 is already available, with many critical bugs fixed. As for your
main question, maybe someone else here knows something about SCO
systems.
The array type is first and foremost a storage type. It can be used to
add a third dimension of access to the database, but that will always
come at a certain price.
Hi!
I compiled Postgres 8.4.0 on SCO OpenServer, but when I want to install
it, I get the following error:
./zic: wild compilation-time specification of zic_t
What can I do to fix that error?
On Mon, Jan 04, 2010 at 05:12:56PM -0800, Yan Cheng Cheok wrote:
> Measurement table will have 24 * 50 million rows in 1 day
> Is it efficient to design that way?
>
> **I wish to have super fast write speed, and reasonable fast read speed from
> the database.**
When writing software there's (al
Steve White wrote:
> The roadblock is:
>
> There is a record, which may be from any of a set of similar databases.
> Field names for this record are computed as strings. Using these strings,
> I need to access fields of the record.
>
> But I can't find the syntax for it.
You can't do this wi
Ayo wrote:
> Inserting a new release and its first revision is done as explained with the
> following pseudocode:
>
> $rid = INSERT INTO releases DEFAULT VALUES RETURNING id;
> $rev = INSERT INTO revisions (type, uid) VALUES('r', $uid) RETURNING id;
> INSERT INTO releases_rev VALUES($rev, $rid
On Mon, Jan 04, 2010 at 07:46:29AM -0800, shulkae wrote:
> We have few tables which we would like to convert to XML and store it.
Not sure if you've seen them, but Postgres provides a few built-in functions
that will help with simple tasks:
http://www.postgresql.org/docs/current/static/functio
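For example, the built-in XML mapping functions (available since 8.3) can export a table and its schema directly; using the measurement_type table mentioned earlier in this digest:

```sql
-- table_to_xml(tbl, nulls, tableforest, targetns)
SELECT table_to_xml('measurement_type', true, false, '');
-- Matching XML Schema, usable to re-create the structure:
SELECT table_to_xmlschema('measurement_type', true, false, '');
```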
Hi,
I ran into a roadblock at the very bottom of a fairly large database
design implementation. It will be at least messy to fix, unless there is
a neat solution.
The roadblock is:
There is a record, which may be from any of a set of similar databases.
Field names for this record are computed
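In PL/pgSQL the usual workaround for computed field names is dynamic SQL; a hypothetical sketch, assuming each table has an integer id column:

```sql
CREATE OR REPLACE FUNCTION get_field(tbl text, col text, rec_id integer)
RETURNS text AS $$
DECLARE
    result text;
BEGIN
    -- quote_ident() guards the computed identifiers; EXECUTE ... USING
    -- passes the id value safely (8.4+).
    EXECUTE 'SELECT ' || quote_ident(col) || '::text FROM '
            || quote_ident(tbl) || ' WHERE id = $1'
        INTO result
        USING rec_id;
    RETURN result;
END;
$$ LANGUAGE plpgsql;
```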
shulkae writes:
> Another requirement is to convert the stored XML file back to the
> original tables. This helps us to clone a system. I was thinking to
> use Perl XML Simple module to generate XML files.
What about using pg_dump and pg_restore for the cloning, or maybe a
replication solution?
please EXPLAIN ANALYZE
Pavel
2010/1/5 Milan Zamazal :
>> "PS" == Pavel Stehule writes:
>
> PS> please, send explain result
>
> For ~ 10 million rows table:
>
> explain declare c cursor for select * from foo2 order by value;
> QUERY PLAN
> -
2010/1/5 Milan Zamazal :
> Cursors are very convenient for me, because they allow easy browsing
> data in the user interface (fetching limited sets of rows while seeking
> forward and backward) and they prevent contingent seeking and other
> troubles when concurrent updates happen.
>
Sounds to me
Milan Zamazal wrote:
> My problem is that retrieving sorted data from large tables
> is sometimes
> very slow in PostgreSQL (8.4.1, FWIW).
>
> I typically retrieve the data using cursors, to display them in UI:
>
> BEGIN;
> DECLARE ... SELECT ... ORDER BY ...;
> FETCH ...;
> ...
>
> On
> "PS" == Pavel Stehule writes:
PS> please, send explain result
For ~ 10 million rows table:
explain declare c cursor for select * from foo2 order by value;
QUERY PLAN
-
> "FR" == Filip Rembiałkowski writes:
FR> 2010/1/5 Milan Zamazal
>> - Is it a good idea to set enable_seqscan or enable_sort to "off"
>> globally in my case? Or to set them to "off" just before working
>> with large tables? My databases contain short and long tables,
>>
Hello
please, send explain result
postgres=# explain analyze declare x cursor for select * from foo;
QUERY PLAN
-
Seq Scan on foo (cost=0.00..34.00 rows=240
2010/1/5 Milan Zamazal
> My problem is that retrieving sorted data from large tables is sometimes
> very slow in PostgreSQL (8.4.1, FWIW).
>
>
> I typically retrieve the data using cursors, to display them in UI:
>
> BEGIN;
> DECLARE ... SELECT ... ORDER BY ...;
> FETCH ...;
> ...
>
> On a n
On 5/01/2010 3:30 PM, Yan Cheng Cheok wrote:
What is the actual problem you are trying to solve?
I am currently developing a database system for a high speed measurement
machine.
The time taken to perform a measurement per unit is on the order of ~30
milliseconds. We need to record the measure
On 5 Jan 2010, at 4:26, Yan Cheng Cheok wrote:
> Can you please provide me an example of a stored procedures to achieve that?
>
> Thanks and Regards
> Yan Cheng CHEOK
Sure. The one below should even protect you against concurrent inserts. I
didn't test it, though; there may be some typos etc.
C
On 5 Jan 2010, at 8:30, Yan Cheng Cheok wrote:
>>> What is the actual problem you are trying to solve?
>
> I am currently developing a database system for a high speed measurement
> machine.
>
> The time taken to perform a measurement per unit is on the order of ~30
> milliseconds. We need to record
My problem is that retrieving sorted data from large tables is sometimes
very slow in PostgreSQL (8.4.1, FWIW).
I typically retrieve the data using cursors, to display them in UI:
BEGIN;
DECLARE ... SELECT ... ORDER BY ...;
FETCH ...;
...
On a newly created table of about 10 million rows