Thanks Laurenz, very good point!
Luckily (phew!) the business scenario is such that race conditions cannot
occur (and the transaction table is append-only). There is a business
workflow to address duplicates, but
1) it occurs extremely rarely (it would be a deliberate sabotage if it
occurs)
2) there
Anil Menon wrote:
> I would like to ask from your experience which would be the best "generic"
> method for checking if row
> sets of a certain condition exists in a PLPGSQL function.
>
> I know of 4 methods so far (please feel free to add if I missed out any
> others)
[...]
Are you aware that
Thanks Adrian
On Thu, Nov 20, 2014 at 3:46 AM, Adrian Klaver
wrote:
> On 11/19/2014 08:26 AM, Anil Menon wrote:
>
>> Hello,
>>
>> I would like to ask from your experience which would be the best
>> "generic" method for checking if row sets of a certain condition exists
>> in a PLPGSQL function.
On 11/19/2014 08:26 AM, Anil Menon wrote:
Hello,
I would like to ask from your experience which would be the best
"generic" method for checking if row sets of a certain condition exists
in a PLPGSQL function.
I know of 4 methods so far (please feel free to add if I missed out any
others)
1) ge
Hi Anil:
On Wed, Nov 19, 2014 at 5:26 PM, Anil Menon wrote:
> Cons : It seems doing a count(*) is not the best option for PG
>
For this and some of the following options, if you are going to just test
for existence, I would consider adding a limit 1 somewhere on the query,
to let the optimize
Hello,
I would like to ask from your experience which would be the best "generic"
method for checking if row sets of a certain condition exist in a PLPGSQL
function.
I know of 4 methods so far (please feel free to add if I missed out any
others)
1) get a count (my previous experience with ORCL
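The message is truncated here, but the four patterns usually compared for this task can be sketched as fragments of a PL/pgSQL function body (a sketch only; `my_table`, `status`, and the variables are hypothetical names, not from the thread):

```sql
-- 1) count(*): counts every matching row even though one is enough.
SELECT count(*) INTO v_cnt FROM my_table WHERE status = 'open';
IF v_cnt > 0 THEN RAISE NOTICE 'found'; END IF;

-- 2) PERFORM plus the implicit FOUND variable:
PERFORM 1 FROM my_table WHERE status = 'open' LIMIT 1;
IF FOUND THEN RAISE NOTICE 'found'; END IF;

-- 3) SELECT INTO a dummy variable, then test FOUND:
SELECT 1 INTO v_dummy FROM my_table WHERE status = 'open' LIMIT 1;
IF FOUND THEN RAISE NOTICE 'found'; END IF;

-- 4) EXISTS: evaluation stops at the first matching row.
IF EXISTS (SELECT 1 FROM my_table WHERE status = 'open') THEN
    RAISE NOTICE 'found';
END IF;
```

The `LIMIT 1` in patterns 2 and 3 reflects the advice given later in the thread; `EXISTS` gets the same early-exit behaviour without needing it.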
On Sat, Dec 24, 2011 at 11:34 AM, Darren Duncan wrote:
>
>
> You can do it correctly while reusing all of your code; you just have
> different arguments at connect time and otherwise your code uses the
> connection handle in the same way afterwards. Its fine to have flags in the
> app so the app
Chris Angelico wrote:
On Sat, Dec 24, 2011 at 11:46 PM, vinny wrote:
How about using a database user that has its create/update/delete rights
revoked? That will cause an error if the supposedly read-only routine
does try to change data.
Also, we want to minimize debugging time by having
both
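vinny's rights-restricted user can be sketched roughly as follows (the role name, schema, and password are made up for illustration):

```sql
-- A login role with write rights withheld: any INSERT/UPDATE/DELETE
-- from a supposedly read-only code path then fails with a permission error.
CREATE ROLE readonly_app LOGIN PASSWORD 'secret';
GRANT USAGE ON SCHEMA public TO readonly_app;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO readonly_app;

-- Also cover tables created later:
ALTER DEFAULT PRIVILEGES IN SCHEMA public
    GRANT SELECT ON TABLES TO readonly_app;
```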
On Sun, Dec 25, 2011 at 12:00 AM, vinny wrote:
> So, your read-only mode is basically a flag that forces your code to
> always issue a rollback at the end, instead of a commit for read/write
> mode.
>
> I find that a bit scary. :-)
It's three things:
1) BEGIN TRANSACTION READ ONLY instead of BEG
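The first item is cut off; the read-only transaction mode it names looks like this (a minimal sketch, `some_table` is hypothetical):

```sql
BEGIN TRANSACTION READ ONLY;
SELECT count(*) FROM some_table;   -- reads work normally
-- An INSERT/UPDATE/DELETE here would fail with:
--   ERROR:  cannot execute INSERT in a read-only transaction
ROLLBACK;   -- with no writes made, ROLLBACK and COMMIT are equivalent
```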
On Sat, 2011-12-24 at 23:49 +1100, Chris Angelico wrote:
> On Sat, Dec 24, 2011 at 11:46 PM, vinny wrote:
> > The actual rollback won't hurt as long as you have not made any
> > modifications to any records. But opening the transaction could have side
> > effects for other processes that want to mo
On Sat, Dec 24, 2011 at 11:46 PM, vinny wrote:
> The actual rollback won't hurt as long as you have not made any
> modifications to any records. But opening the transaction could have side
> effects for other processes that want to modify the records that you want
> to protect in your read-only tran
On Sun, 2011-12-18 at 14:05 +1100, Chris Angelico wrote:
> On Sun, Dec 18, 2011 at 2:02 PM, Chris Travers
> wrote:
> > I do not believe there are performance penalties for either. All
> > commit or rollback does is determine visibility of changes made.
>
> Thanks. (And thanks for the incredibly
On Sun, Dec 18, 2011 at 2:02 PM, Chris Travers wrote:
> I do not believe there are performance penalties for either. All
> commit or rollback does is determine visibility of changes made.
Thanks. (And thanks for the incredibly quick response!)
My framework has a "read-only mode" (determined by
On Sat, Dec 17, 2011 at 6:53 PM, Chris Angelico wrote:
> Some of my code involves transactions which will not have significant
> effect on the database. It might be a read-only transaction (possibly
> declared as one, but possibly not), or perhaps a completely empty
> transaction - I have a framew
Some of my code involves transactions which will not have significant
effect on the database. It might be a read-only transaction (possibly
declared as one, but possibly not), or perhaps a completely empty
transaction - I have a framework that will always open a transaction,
then call on other code
On Thu, Aug 16, 2007 at 08:52:02AM -0300, Marcelo de Moraes Serpa wrote:
> Hello list,
>
> If I've got a trigger that calls a function each time there is a DELETE or
> UPDATE operation on a table in my system, and in this function I retrieve
> some boolean information from another table and based o
Hello list,
If I've got a trigger that calls a function each time there is a DELETE or
UPDATE operation on a table in my system, and in this function I retrieve
some boolean information from another table and based on this information,
additional code will be ran or not in this function. Could the
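A minimal sketch of the setup described (the trigger, the `settings` table, and its boolean column are all hypothetical names, not from the thread):

```sql
CREATE OR REPLACE FUNCTION check_before_change() RETURNS trigger AS $$
DECLARE
    enabled boolean;
BEGIN
    -- Fetch the boolean flag from another table.
    SELECT s.auditing_enabled INTO enabled FROM settings s LIMIT 1;
    IF enabled THEN
        -- The additional code runs only when the flag is set.
        RAISE NOTICE 'change on % noted', TG_TABLE_NAME;
    END IF;
    -- Let the original operation proceed.
    IF TG_OP = 'DELETE' THEN
        RETURN OLD;
    ELSE
        RETURN NEW;
    END IF;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER guard_changes
    BEFORE UPDATE OR DELETE ON my_table
    FOR EACH ROW EXECUTE PROCEDURE check_before_change();
```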
Mike <[EMAIL PROTECTED]> writes:
> I see. Thank you for the elaborate response. I have a clearer idea of
> what is going on now. In designing my application I was thinking of
> storing pieces of my data as serialized python data structures into a
> binary field (no more than 15KB), while a friend
I see. Thank you for the elaborate response. I have a clearer idea of
what is going on now. In designing my application I was thinking of
storing pieces of my data as serialized python data structures into a
binary field (no more than 15KB), while a friend was arguing I should
store the data in oth
"Mike" <[EMAIL PROTECTED]> writes:
> I am designing my database and I was wondering whether my table row
> size affects the performance of querying my table.
yes
If your table is large and you're reading all the rows then you'll be limited
by the i/o rate. If your rows are twice as big it will
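The I/O-bound point can be illustrated with a common schema split: moving a rarely-read wide column into a 1:1 side table keeps the frequently-scanned rows narrow (table and column names are hypothetical):

```sql
-- The hot table stays narrow, so scans touch fewer pages:
CREATE TABLE item (
    id     bigint PRIMARY KEY,
    status text,
    owner  integer
);

-- The ~15 kB serialized blob lives in a 1:1 side table,
-- fetched only when actually needed:
CREATE TABLE item_payload (
    item_id bigint PRIMARY KEY REFERENCES item(id),
    payload bytea
);
```

Note that PostgreSQL's TOAST mechanism already compresses and stores very large values out of line automatically, so an explicit split like this matters most for medium-width columns that stay inline.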
On 7/9/07, Mike <[EMAIL PROTECTED]> wrote:
I am designing my database and I was wondering whether my table row
size affects the performance of querying my table. Please note that my
table is being designed to hold high volume of records and I do not
plan to do (select *) for retrieving them. That
Hi,
I am designing my database and I was wondering whether my table row
size affects the performance of querying my table. Please note that my
table is being designed to hold high volume of records and I do not
plan to do (select *) for retrieving them. That is I plan to only
query a few of those
In article <[EMAIL PROTECTED]>,
Terry Lee Tucker <[EMAIL PROTECTED]> wrote:
% elements of 50 thousand records on 8 structurally identical databases. We
% threw together the script and decided to just delete the record and re-insert
% it with the data that was brought into sync. Now the question:
On Wednesday 14 June 2006 03:57 pm, Terry Lee Tucker <[EMAIL PROTECTED]> thus
communicated:
--> Hello List:
-->
--> I've been told that an update to a record is equivalent to a delete and
--> insert operation. We have a utility written in Perl that brings into
--> sync certain elements of 50 tho
On Wednesday 14 June 2006 13:24, Greg Stark <[EMAIL PROTECTED]> wrote:
> One way it would be unequal is if you can do your DELETE as a single
> query and the insert operation as using a single large COPY FROM.
This is definitely the fastest way to update tens of thousands of rows if
you know the
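That bulk pattern can be sketched like this (the tables and file path are placeholders, not from the thread):

```sql
BEGIN;
-- Remove the out-of-sync rows in one statement...
DELETE FROM target_table
WHERE id IN (SELECT id FROM ids_to_refresh);

-- ...then reload them in one bulk COPY instead of row-by-row INSERTs:
COPY target_table FROM '/tmp/refreshed_rows.csv' WITH CSV;
COMMIT;
```

(Server-side `COPY FROM` a file requires appropriate privileges; from a client, psql's `\copy` does the same thing over the connection.)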
Terry Lee Tucker <[EMAIL PROTECTED]> writes:
> Hello List:
>
> I've been told that an update to a record is equivalent to a delete and
> insert
> operation. We have a utility written in Perl that brings into sync certain
> elements of 50 thousand records on 8 structurally identical databases.
Terry Lee Tucker <[EMAIL PROTECTED]> writes:
> Hello List:
>
> I've been told that an update to a record is equivalent to a delete
> and insert operation. We have a utility written in Perl that brings
> into sync certain elements of 50 thousand records on 8 structurally
> identical databases. We t
Hello List:
I've been told that an update to a record is equivalent to a delete and insert
operation. We have a utility written in Perl that brings into sync certain
elements of 50 thousand records on 8 structurally identical databases. We
threw together the script and decided to just delete th
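Under MVCC an UPDATE indeed writes a new row version rather than changing the row in place, which can be observed via the system columns (a sketch; table `t` is hypothetical):

```sql
-- ctid is the row version's physical location; xmin the inserting transaction.
SELECT ctid, xmin FROM t WHERE id = 1;
UPDATE t SET val = val WHERE id = 1;    -- even a "no-op" update writes a new version
SELECT ctid, xmin FROM t WHERE id = 1;  -- ctid and xmin have changed
```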
vishal saberwal <[EMAIL PROTECTED]> writes:
> The query I am actually trying to optimize is long and has a few joins (for
> normalization) and hence didn't copy it here.
> The function structure is similar to the one above.
> (a) Am i right in thinking that if I eliminate the for loop, some
hi,
I have this performance question.
create view test_v as select 'text'::varchar as Field1, 'text'::varchar as Field2;
create or replace function test() returns setof test_v as $$
declare
    res test_v%ROWTYPE;
begin
    for res in
        select t1.field1, t1.field2 from table1 t1
    loop
        return next res;
    end loop;
    return;
end;
$$ language plpgsql;
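On PostgreSQL 8.3 and later the explicit loop above can be replaced by RETURN QUERY, which avoids the per-row RETURN NEXT (a sketch against the same hypothetical `table1`):

```sql
create or replace function test() returns setof test_v as $$
begin
    -- Stream the whole result set back in one statement.
    return query select t1.field1, t1.field2 from table1 t1;
end;
$$ language plpgsql;
```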
On Mon, 2005-05-23 at 14:07, LiSim: Rainer Mokros wrote:
> Hello,
> Has anyone tried to install Postgres on an AMD Opteron (dual core) box with 4
> CPUs and 64GB main memory and 3 TB RAID 10 (24 disks), Linux (2.6), to run a
> data warehouse of 1TB.
> Any points
No, but if you buy my ticket I'd fly ou
Hello,
Has anyone tried to install Postgres on an AMD Opteron (dual core) box with 4
CPUs and 64GB main memory and 3 TB RAID 10 (24 disks), Linux (2.6), to run a
data warehouse of 1TB.
Any points
Kind regards
Rainer
Tom Lane wrote:
> Oleg Bartunov <[EMAIL PROTECTED]> writes:
> > On Wed, 2 Jul 2003, Tom Lane wrote:
> >> You might find it useful to read the slides from my talk at last
> >> year's O'Reilly conference about this and related concurrency
> >> problems:
> >> http://conferences.oreillynet.com/cs/os200
Tom Lane wrote:
Yes. Last year I asked Vince to put those slides up somewhere on the
postgresql.org website, but he never got around to it (I think he got
stuck wondering where they should go). Bruce has materials for several
different talks he's given that should be there somewhere, too.
Perhaps
Oleg Bartunov <[EMAIL PROTECTED]> writes:
> On Wed, 2 Jul 2003, Tom Lane wrote:
>> You might find it useful to read the slides from my talk at last
>> year's O'Reilly conference about this and related concurrency
>> problems:
>> http://conferences.oreillynet.com/cs/os2002/view/e_sess/2681
> I'd li
I'm trying to convince another open-source project (phpOpenTracker) to
modify their current INSERT sql queries. Currently they just do an
INSERT into a table without first checking if their might be a record
with the same primary key.
The reason for this that they need fast inserts and most use
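This feature did not exist at the time of the thread, but on any PostgreSQL release from 9.5 onward the clean answer is `INSERT ... ON CONFLICT`, which keeps the fast single-statement insert while tolerating duplicates (table and columns are made up for illustration):

```sql
-- Ignore the row if the primary key already exists:
INSERT INTO tracker (visitor_id, first_seen)
VALUES (42, now())
ON CONFLICT (visitor_id) DO NOTHING;

-- Or update the existing row instead ("upsert"):
INSERT INTO tracker (visitor_id, first_seen)
VALUES (42, now())
ON CONFLICT (visitor_id) DO UPDATE
    SET first_seen = EXCLUDED.first_seen;
```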
On Thu, Sep 20, 2001 at 11:10:02AM +0200, Herbert Liechti wrote:
> I tried it. See my actions below. The main performance boost is
> reached by creating an index and disabling the sequential scan:
> Without any index;
> real    0m18.128s
> user    0m0.010s
> sys     0m0.010s
>
> Same statement wi
Hi Andreas,
I'm running PostgreSQL 7.1.3 here on a PC with nearly a gig of ram, and
running Linux Mandrake 8.0
First thing I did was to increase the amount of shared memory and stuff
which Linux allows things to use :
echo "kernel.shmall = 134217728" >> /etc/sysctl.conf
echo "kernel.shmmax = 13
On Thu, 20 Sep 2001, Einar Karttunen asked me for query plans for
both M$ SQL and postgresql:
M$ SQL:
|--Compute Scalar(DEFINE:([Expr1002]=Convert([Expr1005])))
     |--Stream Aggregate(GROUP BY:([Hauptdaten_Fall].[MeldeKategorie])
        DEFINE:([Expr1005]=Count(*)))
          |--Index Scan(OBJE
Hi Andreas,
Sorry, I haven't seen the history of this thread. One question which
might be relevant is, have you adjusted the postgresql.conf file from
the default memory settings to be something better?
If these are the times you're getting from a default configuration, you
might be able to get
On Thu, 20 Sep 2001, Justin Clift wrote:
> Sorry, I haven't seen the history of this thread. One question which
> might be relevant is, have you adjusted the postgresql.conf file from
> the default memory settings to be something better?
I adjusted two parameters:
shared_buffers = 2048
(When I
On Thu, 20 Sep 2001, Herbert Liechti wrote:
> I tried it. See my actions below. The main performance boost is
> reached by creating an index and disabling the sequential scan:
Thanks. I tried this and it helps indeed (see below).
> ---
> crea
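Herbert's steps are truncated above, but they usually amount to something like the following, using the query from this thread (the index name is made up; `enable_seqscan = off` is a testing knob, not a production setting):

```sql
-- An index matching the WHERE clause:
CREATE INDEX hauptdaten_fall_istaktuell_idx
    ON Hauptdaten_Fall (IstAktuell);

-- For testing only: discourage sequential scans so the
-- index actually gets exercised.
SET enable_seqscan = off;

EXPLAIN
SELECT MeldeKategorie, Count(ID) AS Anz
FROM Hauptdaten_Fall
WHERE IstAktuell = 20
GROUP BY MeldeKategorie
ORDER BY MeldeKategorie;
```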
On Wed, 19 Sep 2001, Tom Lane wrote:
> No. In the first place, there's no extra sort: the planner is well
> aware that our current GROUP BY implementation produces ordered output.
> In the second place, there's no guarantee that GROUP BY will always
> produce ordered output in the future --- we
Tille, Andreas writes:
> SELECT Hauptdaten_Fall.MeldeKategorie, Count(Hauptdaten_Fall.ID) AS Anz
> FROM Hauptdaten_Fall WHERE (((Hauptdaten_Fall.IstAktuell)=20)) GROUP BY
> Hauptdaten_Fall.MeldeKategorie ORDER BY Hauptdaten_Fall.MeldeKategorie;
> (which should just measure the time needed for th
On Mon, 10 Sep 2001, Tille, Andreas wrote:
> On Mon, 10 Sep 2001 [EMAIL PROTECTED] wrote:
>
> > Use explain. Explain tells you the query plan of the optimizer.
> >
> > explain SELECT .;
> Thanks I just found the thread "Index usage question" and tried to make
> some profit from it:
>
> exp
On Mon, 10 Sep 2001, Tille, Andreas wrote:
> Hello,
>
> Now I started some performance comparisons and did the following statement:
> The MS-SQL server presents the result "immediately" - you just don't notice
> any delay. If I do it on the PostgreSQL server it takes 30s on comparable
> hardwa
On Mon, Sep 10, 2001 at 02:34:25PM +0200, Tille, Andreas wrote:
> On Mon, 10 Sep 2001 [EMAIL PROTECTED] wrote:
>
> > Use explain. Explain tells you the query plan of the optimizer.
> >
> > explain SELECT .;
> Thanks I just found the thread "Index usage question" and tried to make
> some profi
On Mon, 10 Sep 2001 [EMAIL PROTECTED] wrote:
> Use explain. Explain tells you the query plan of the optimizer.
>
> explain SELECT .;
Thanks I just found the thread "Index usage question" and tried to make
some profit from it:
explain SELECT Hauptdaten_Fall.MeldeKategorie, Count(Hauptdaten_Fa
On Mon, 10 Sep 2001, Tille, Andreas wrote:
> Hello,
>
> I have ported a database from MS SQL Server to PostgreSQL. The database has
> 40 tables and 117 indexes which I defined for the same fields as in MS SQL.
> I converted the data using some SQL output from MS SQL server and inserted
> it with
Hello,
I have ported a database from MS SQL Server to PostgreSQL. The database has
40 tables and 117 indexes which I defined for the same fields as in MS SQL.
I converted the data using some SQL output from MS SQL server and inserted
it with psql.
Now I started some performance comparisons and
> ... I know all the stats show MySQL is faster...
A story which, if and when true, stays true only for cases with one or a
very few users. We used to just accept MySQL's claims in this regard
without question, but it is finally dawning on us that they are not
doing representative tests for
Arg! Thanks to everyone for their help. I followed a few suggestions I
got from here, and installed Time::HiRes on my box (that part was my
idea actually ;)), to monitor how long the queries take. I know all the
stats show MySQL is faster, but in my situation, postgres is really
kicking ass
Title: VS: [GENERAL] performance question
>Tom Lane wrote:
>>
>> ernie cline <[EMAIL PROTECTED]> writes:
>> > Quick question. I am using postgres 7.0. When running a select query,
>> > does postgres lock the row (or table) while doing a select? Only
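The question is cut off, but the general MVCC answer is that a plain SELECT takes only a weak shared lock that does not block ordinary reads or writes; row locks must be requested explicitly (a sketch; the `accounts` table is hypothetical):

```sql
-- Plain SELECT: readers do not block writers, writers do not block readers.
SELECT balance FROM accounts WHERE id = 1;

-- Explicit row lock, e.g. for a read-modify-write cycle:
BEGIN;
SELECT balance FROM accounts WHERE id = 1 FOR UPDATE;
UPDATE accounts SET balance = balance - 10 WHERE id = 1;
COMMIT;
```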
On Sun, 19 Mar 2000, martin wrote:
> Hi there, I have been doing some performance testing with ODBC and have
> found that ODBC is much slower than using C to call the Psql API
> I don't understand why the results are like this, I thought ODBC would
> slow things down a bit but not by much, all
Hi there, I have been doing some performance testing with ODBC and have
found that ODBC is much slower than using C to call the Psql API.
I don't understand why the results are like this, I thought ODBC would
slow things down a bit but not by much; all it's doing is sending off SQL
straight to the serv