[GENERAL] Trigger Performance

2011-01-15 Thread Randall Smith
Hi,

I've created a trigger that checks the uniqueness of two columns in a
table.  Traditionally, one would use a unique constraint, but in my
case, the size of the unique index would be too large and some
performance loss is acceptable.  However, the trigger performance seems
to be far below what's indicated by an explain analyze of the query used
in the trigger.

The unique fields consist of an indexed int8 (volume_id) and a text
field (name).  The average ratio of volume_id to name is 1 to 10,000.
The query I'm using to check uniqueness in the trigger is:

...
IF (SELECT EXISTS (SELECT 1 FROM t1 WHERE
volume_id = NEW.volume_id AND name = NEW.name)) THEN
RAISE EXCEPTION '% already exists on volume', NEW.name;
END IF;
...

This trigger is called only BEFORE INSERTS.

In testing, without the trigger, inserts are at 10,000 every 2 seconds.
With the trigger, they are 10,000 every 40 seconds.  The output of
explain analyze suggests that this shouldn't be taking so long.

EXPLAIN ANALYZE SELECT EXISTS (
SELECT 1 FROM t1 WHERE volume_id = 300 AND name = 'whodat');

 Result  (cost=8.35..8.36 rows=1 width=0) (actual time=0.019..0.020 rows=1 loops=1)
   InitPlan 1 (returns $0)
     ->  Index Scan using volume_id_idx on t1  (cost=0.00..8.35 rows=1 width=0) (actual time=0.016..0.016 rows=0 loops=1)
           Index Cond: (volume_id = 300)
           Filter: (name = 'whodat'::text)
 Total runtime: 0.053 ms
(6 rows)

0.053 ms/record / 1,000 ms/sec * 10,000 records = .53 seconds

According to that stat, this lookup should be adding about 0.5 seconds
to 10,000 records, far from the actual 38 seconds it is adding.  I've
tried changing up the query in the trigger to see if I could get
different results, without much luck.  Any idea what might be taking up
the extra time or what I can do to troubleshoot?
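For what it's worth, the gap can be quantified directly from the numbers in
this message: the EXPLAIN estimate predicts about 0.53 s of lookup time over
10,000 inserts, while the measured slowdown implies several milliseconds per
insert.  A quick back-of-the-envelope check (using only figures quoted above):

```python
# Figures quoted above: EXPLAIN ANALYZE total runtime per lookup, and
# wall-clock time for 10,000 inserts with and without the trigger.
explain_ms_per_lookup = 0.053
inserts = 10_000
seconds_without_trigger = 2
seconds_with_trigger = 40

# Predicted total lookup overhead from the EXPLAIN estimate.
predicted_overhead_s = round(explain_ms_per_lookup * inserts / 1000, 2)

# Overhead actually observed, expressed per insert.
observed_overhead_s = seconds_with_trigger - seconds_without_trigger
observed_ms_per_lookup = round(observed_overhead_s / inserts * 1000, 1)

print(predicted_overhead_s)   # predicted seconds of overhead (about 0.5)
print(observed_ms_per_lookup) # observed ms per insert (several ms)
```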

Thanks.

-Randall



 


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Trigger Performance

2011-01-15 Thread Randall Smith
Before reading: this is solved.  It was an error on my part.

On Sun, 2011-01-16 at 03:46 +, Jasen Betts wrote:
> In plpgsql IF is an implicit select. 
> 
> 
>  IF EXISTS (SELECT 1 FROM t1 WHERE
>  volume_id = NEW.volume_id AND name = NEW.name) THEN
>  RAISE EXCEPTION '% already exists on volume', NEW.name;
>  END IF;

Thanks.  Certainly more concise.

> 
> > 0.053 ms/record / 1,000 ms/sec * 10,000 records = .53 seconds
> 
> huh?
> 
> > According to that stat, this lookup should be adding about 0.5 seconds
> > to 10,000 records,
> 
> why?  what are you doing to 10,000 records?


Inserting them.  Sorry, that was a critical omission on my part.  The
trigger check is for inserts and I'm testing its performance by
inserting 10,000 records.

Turns out my EXPLAIN ANALYZE times were so low because I was querying
for a volume_id that had only a few rows.  When I query for a volume_id
with 10,000 rows, it changes to 7 ms, which matches the performance I'm
seeing.  That's acceptable to me because that's probably at the upper
end of what I'll see.  7 ms to check 10,000 text fields is actually
impressive to me.


> 
> how much disk (in bytes, and dollars) are you hoping to save by not
> using the index.
> 

I need to be able to fit my indexes in RAM.  This table will have a few
billion records and I have several other indexes with billions of
records and I'd like my DB to run well on a machine with 20G (preferred)
to 60G (max) of RAM and not have to resort to sharding.  These text fields can
be up to 1k each.  A 1 billion row int8 index comes in around 2G.
Adding the text field to the index would probably put it at over 20G per
billion records.

Thanks.

-Randall




[GENERAL] append_array causes rapid growth according to pg_total_relation_size

2010-08-21 Thread Randall Smith
Hi,

Using Postgres 8.4, I create this table:

rcs_test=> \d array_big1;
Table "public.array_big1"
  Column   |   Type   | Modifiers 
---+--+---
 id| integer  | not null
 chunk_ids | bigint[] | 
Indexes:
"array_big1_pkey" PRIMARY KEY, btree (id)

It's initially populated with a single record with chunk_ids having a
1024^2 sized array of big integers.  Immediately after creation, things
look as expected.

rcs_test=> select pg_total_relation_size('array_big1');
 pg_total_relation_size 
------------------------
                8757248
(1 row)

rcs_test=> select pg_column_size(chunk_ids) from array_big1;
 pg_column_size 
----------------
        8388628
(1 row)



Now, after I append the array with a single big integer, the table size
nearly doubles according to pg_total_relation_size, but pg_column_size
looks OK.


rcs_test=> update array_big1 set chunk_ids = array_append(chunk_ids,
'12345678912345') where id = 0;
UPDATE 1

rcs_test=> select pg_total_relation_size('array_big1');
 pg_total_relation_size 
------------------------
               17448960
(1 row)

rcs_test=> select pg_column_size(chunk_ids) from array_big1;
 pg_column_size 
----------------
        8388636
(1 row)


And if I append again:

rcs_test=> select pg_total_relation_size('array_big1');
 pg_total_relation_size 
------------------------
               26255360
(1 row)


A few more and it gets out of hand:

rcs_test=> select pg_total_relation_size('array_big1');
 pg_total_relation_size 
------------------------
              104529920
(1 row)


VACUUM ANALYZE doesn't seem to help, but VACUUM FULL does.  Though
according to the docs, VACUUM FULL is bad.


I'm wondering if this is something that Postgres will take care of on
its own or something that could cause trouble.
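The growth pattern is consistent with MVCC: each UPDATE writes a complete new
copy of the roughly 8 MB TOASTed array value and leaves the old copy behind as
a dead tuple until vacuum can reclaim it.  Checking that against the sizes
reported above:

```python
# pg_total_relation_size readings quoted above, in bytes.
initial = 8_757_248            # one live ~8 MB array value
after_one_append = 17_448_960
after_two_appends = 26_255_360

live_value = 8_388_628         # pg_column_size of the array, from above

# Each append should add roughly one more full (now dead) copy of the
# array's TOAST data, plus some page overhead.
for before, after in [(initial, after_one_append),
                      (after_one_append, after_two_appends)]:
    delta = after - before
    assert abs(delta - live_value) / live_value < 0.06
```

Each update adds close to one full copy of the value, which is why size grows
linearly with appends until vacuum gets a chance to reuse the space.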


If you're wondering why I'm using this approach instead of using another
table, that's probably for another thread, but in short, the overhead of
another table plus indexes is enormous compared to the array, which has
almost no overhead.


-Randall




Re: [GENERAL] append_array causes rapid growth according to pg_total_relation_size

2010-08-22 Thread Randall Smith
More information including a workaround.

VACUUM ANALYZE is not freeing space after an array_append, but it does
after replacing a specific element in the array.

So if I do this:

update array_big1 set chunk_ids = array_append(chunk_ids, '1') where id = 0;
update array_big1 set chunk_ids[1] = chunk_ids[1] where id = 0;

VACUUM ANALYZE reclaims the growth caused by array_append.

-Randall






Re: [GENERAL] Sequence reset

2010-08-23 Thread Randall Smith
I can't tell what a "roll" is from your question (rows, perhaps?), but
here is how to set a sequence to a specific value.

SELECT setval('foo', 42);   -- the next nextval will return 43

http://www.postgresql.org/docs/8.4/interactive/functions-sequence.html

-Randall

On Mon, 2010-08-23 at 05:04 -0500, SUNDAY A. OLUTAYO wrote:
> How can I reset a sequence column due to deletion of some rolls?
> 
> Sunday Olutayo
> Sadeeb Technologies Ltd
> 





[GENERAL] connecting to server process via sockets

2005-05-16 Thread Randall Smith
For fun and learning, I would like to connect to the Postgresql backend 
and issue queries using sockets.  I'm using Python's socket module.  I'm 
new to socket programming, but I'm experienced with Python and 
Postgresql.  I've been using the JDBC driver and the online 
documentation as a guide, but I'm afraid my ignorance has led me to 
failure thus far.

This is what I think I understand.
1.  Send the startup message as such:
    a.  length of message
    b.  protocol major (3)
    c.  protocol minor (0) (don't know what this is)
    d.  message
    e.  send 0 (don't know why?)
O.K.  Here I show my ignorance.
#!/usr/bin/python
import socket
# Connection string
cnstring = 'user=randall, database=dws, client_encoding=UNICODE, 
DateStyle=ISO'  # This just wrapped in my email.
msg_len = str(len(cnstring))
protocol_major = '3'
protocol_minor = '0'

pgsocket = socket.socket()
pgsocket.connect(('localhost', 5432))
pgsocket.send(msg_len)
pgsocket.send(protocol_major)
pgsocket.send(protocol_minor)
pgsocket.send(cnstring)
pgsocket.send('0')
pgsocket.close()
When I run this, this is what shows up in the logs.
2005-05-16 10:11:34 [2638] LOG:  connection received: host=127.0.0.1 
port=42607
2005-05-16 10:11:34 [2638] LOG:  invalid length of startup packet

Please do not recommend that I use an existing API.  I'm doing this for 
fun and maybe to come up with a simple pure python database driver.
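For what it's worth, the v3 StartupMessage has a specific binary framing: a
4-byte big-endian length that includes itself, the 4-byte protocol version
196608 (that is, 3 << 16, which is where the "minor" 0 lives), then
NUL-terminated parameter name/value pairs, closed by one extra NUL byte.
Everything is raw bytes, not ASCII digits, which is why sending the length as
a string produces "invalid length of startup packet".  A sketch of building
that packet (a minimal illustration, not a full driver):

```python
import struct

def startup_message(params):
    """Build a PostgreSQL v3 StartupMessage: Int32 length (including
    itself), Int32 protocol version 196608 (3 << 16), then NUL-terminated
    key/value pairs, terminated by a single extra NUL byte."""
    body = struct.pack('!i', 196608)
    for key, value in params.items():
        body += key.encode() + b'\x00' + value.encode() + b'\x00'
    body += b'\x00'
    return struct.pack('!i', len(body) + 4) + body

packet = startup_message({'user': 'randall', 'database': 'dws'})

# To actually talk to the server (untested here):
# import socket
# s = socket.socket(); s.connect(('localhost', 5432)); s.sendall(packet)
```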

Randall
---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faq


[GENERAL] quoted identifier behaviour

2007-03-14 Thread Randall Smith
Are there plans to make Postgresql's behavior SQL compliant with regards
to quoted identifiers?  My specific need is to access tables named in
lower case by an uppercase quoted identifier, which is in line with the
SQL standard.  So if I created table foo, I should be able to access it
like "FOO".  Is there a configuration option that modifies this
behavior, or can I change the names in the system tables?
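The underlying rule is simple: quoted identifiers are taken verbatim, while
unquoted ones are case-folded, to upper case per the SQL standard and to lower
case in PostgreSQL.  A toy sketch of the two folding behaviors (not real
parser code):

```python
def fold(identifier, dialect="postgres"):
    """Resolve an identifier to its catalog name.  Quoted identifiers
    keep their exact spelling; unquoted ones are folded to lower case by
    PostgreSQL and to upper case by the SQL standard."""
    if len(identifier) >= 2 and identifier[0] == '"' and identifier[-1] == '"':
        return identifier[1:-1]
    return identifier.lower() if dialect == "postgres" else identifier.upper()

# A table created as unquoted foo:
fold('foo', 'postgres')      # 'foo' in the catalog
fold('"FOO"', 'postgres')    # looks up 'FOO' -- misses PG's 'foo'
fold('foo', 'standard')      # 'FOO' in the catalog
fold('"FOO"', 'standard')    # 'FOO' -- matches, the behavior wanted here
```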


Randall




Re: [GENERAL] quoted identifier behaviour

2007-03-14 Thread Randall Smith

Albe Laurenz wrote:
> Randall Smith wrote:
>> Are there plans to make Postgresql's behavior SQL compliant with regards
>> to quoted identifiers?  My specific need is to access tables named in
>> lower case by an uppercase quoted identifier, which is in line with the
>> SQL standard.  So if I created table foo, I should be able to access it
>> like "FOO".  Is there a configuration option that modifies this
>> behavior or can I change the names in the system tables?
>
> I don't think that there are plans to change this as it would be
> hard to maintain compatibility.

It would be nice to have it as a configuration option though.  That way,
you could have both SQL compliance and backward compatibility.

I'm trying to get Postgresql to work with software that uses JDBC and
Oracle for a large government project.  So I have to report that the
application won't work with Postgresql because it (PG) doesn't adhere to
the standard.  That's usually something I say about Oracle and MySQL.
I'm an advocate of Postgresql and usually tout SQL compliance as a
strength, so it bothers me that this is not in line to be corrected.

> You must not change the names of system tables, but you can
> certainly create upper case views for them.
>
> Yours,
> Laurenz Albe

That's my current solution.  I imagine this is not very efficient, but I
could be wrong.

Thanks.

Randall







[GENERAL] pre-parser query manipulation

2007-03-14 Thread Randall Smith

Hi,

I'm attempting to process a query written in the form:

SELECT fields FROM table WHERE conditions FOR UPDATE OF field_name

when Postgresql supports:

SELECT fields FROM table WHERE conditions FOR UPDATE OF table_name

I'd like to remove the OF clause in the case only one table is queried.

Anyone know of a solution to this?

Randall




Re: [GENERAL] quoted identifier behaviour

2007-03-14 Thread Randall Smith

Tom Lane wrote:
> Randall Smith <[EMAIL PROTECTED]> writes:
>> I'm an advocate of Postgresql and usually tout SQL compliance as a
>> strength, so it bothers me that this is not in line to be corrected.
>
> It's not that it's not on the radar screen, it's just that no one sees
> a way to do it that's going to be acceptable.  We're not willing to give
> up the current behavior, both for backwards-compatibility reasons and
> because most of us just plain like it better (ALL UPPER CASE IS UGLY AND
> HARDER TO READ...).  So we'd need to support both, and that's hard.
> Easy answers like "make it a configuration option" don't work because
> they break too much stuff, including a whole lot of client-side code
> that we don't control.
>
> There are a couple of long threads in the pghackers archives discussing
> pros and cons of different possibilities, if you're interested.
>
> regards, tom lane



Thanks Tom.  I understand your points and I gather that the cost of 
making the change (even as an option) outweighs the benefits of SQL 
conformance for the developers.  Though I'm still of the same opinion.


I'll give the pghackers archives a visit, and since I'm already on the
subject here, I'll make a direct comparison of the situation.
Microsoft's Internet Explorer web browser is known to have poor support
for the CSS standard, but Microsoft refuses to fix it, saying that it's
too difficult and would break existing websites.  Many developers,
myself included, prefer to code to the standard and have the HTML/CSS
render correctly in all browsers.  In the long run, it's better to do it
right even if that means breaking things today.  If the standard is good
(there are poor standards), there's no reason not to use it.


Thanks again.

Randall




Re: [GENERAL] quoted identifier behaviour

2007-03-14 Thread Randall Smith

This is the last statement I found on the issue, which is somewhat hopeful.

http://archives.postgresql.org/pgsql-hackers/2006-11/msg00347.php

Randall




Re: [GENERAL] quoted identifier behaviour

2007-03-14 Thread Randall Smith

Thomas Kellerer wrote:
> Randall Smith wrote on 14.03.2007 18:59:
>> I'm trying to get Postgresql to work with software that uses JDBC and
>> Oracle for a large government project.  So I have to report that the
>> application won't work with Postgresql because it (PG) doesn't adhere
>> to the standard.  That's usually something I say about Oracle and
>> MySQL.  I'm an advocate of Postgresql and usually tout SQL compliance
>> as a strength, so it bothers me that this is not in line to be corrected.
>
> Then the software is not written well :)
>
> DatabaseMetaData.storesLowerCaseIdentifiers() correctly tells the
> program that PG stores everything in lowercase.  So if a (JDBC based)
> software wants to be truly multi-DBMS enabled, it *has* to retrieve
> those things from the driver.
>
> Thomas




I don't write Java software so I can't discuss what you said.  However, 
the SQL the app sends IS SQL compliant and Postgresql is not.  A table 
named foo should be accessible as both foo and "FOO" according to the 
standard.


Randall




Re: [GENERAL] [Bulk] Re: quoted identifier behaviour

2007-03-14 Thread Randall Smith

Ted Byers wrote:
> - Original Message - From: "Randall Smith" <[EMAIL PROTECTED]>
>> I'll give the pghackers forum a visit and since I'm already on the
>> subject here, I'll make a direct comparison of the situation.
>> Microsoft's Internet Explorer web browser is known to have poor
>> support for the CSS standard, but refuses to fix it saying that it's
>> too difficult and would break existing websites.  Many developers,
>> myself included, prefer to code to the standard and have the html/css
>> render correctly in all browsers.  In the long run, it's better to do
>> it right even if that means breaking things today.  If the standard is
>> good (there are poor standards), there's no reason to not use the
>> standard.
>
> In principle, I don't disagree with you.  However, I tend to be much
> more pragmatic.  Often we have little choice in the tools we use.  I
> have seen occasions where developers were constrained to use MS Visual
> Studio.  It is an outstanding product now (I have, and use, the
> professional edition of MS VS 2005).  However, MS VS v6 was mediocre at
> best, and a majority of WIN32 developers had to use it.  It was
> popular despite its problems.  It had very poor compliance with the ANSI
> C++ standards at the turn of the century, although that isn't too
> surprising given its age.  But many people thought they were writing
> standards compliant code, and that code broke big time once they tried
> to use it with later, better compilers.  That is just one example.
> Yours is another, and there are examples relating to FORTRAN (is there
> anyone other than me old enough to remember that? - I recently
> encountered IT professionals who didn't know what punch cards were or
> how they were used).  Yet much FORTRAN code that is four or five decades
> old is still valuable a) because it was written to a standard and b)
> there are tools to automate converting it into C, allowing it to be used
> while an old C++ programmer like myself adapts it to take advantage of
> key C++ programming idioms and language features (templates, template
> metaprogramming).  This is because in many cases the core algorithms
> developed at the time have not been supplanted by better ones, and they
> are still essential for intensive number crunching.  I have seen
> technologies and standards come and go (or better, be upgraded), and
> while we ought to write our code to be as compliant as possible to the
> relevant standards, if there is one (there isn't one, last I checked,
> for VB, Java or Perl), in the end our tools are the final arbiters of
> what is legal and what isn't.  Therefore, when the investment in the
> code in question is significant, and only then, the most viable approach
> to trying to preserve the value of the code is to use the highest common
> factor between the standard and what the tools say the standard is (it
> isn't unusual for tool developers to disagree about what the standard
> really means, especially when dealing with some of the harder issues,
> and nothing is as simple as it appears at first glance).
>
> It is important to remember that standards change.  Just compare the
> different versions of the ANSI standard for SQL that have existed over
> the years.  Or do the same for some of the other languages such as
> FORTRAN, C and C++.  Since it is certain that even the best standards
> will change, and that some of those changes will break existing code, I
> do not believe it is worth getting paranoid or worried or upset just
> because one or another of our favourite tools lacks perfect compliance
> with the standard.  I do not define what is right by what a standard has
> to say.  Rather, I define it according to whether or not the correct
> answers are obtained and whether or not the application does for the
> user what the user needs it to do.  In this view, then, there are
> multiple right options, and this takes us back to our tools being the
> final arbiter of whether or not our code is correct.
>
> So, I say that good standards are valuable tools that can be used to
> preserve the value of code that carries significant investment.  At the
> same time, they are only tools, and the value they provide may well pale
> relative to other considerations.  I disagree with you when you say
> there's no reason not to use the (good) standard.  Often tool developers
> put considerable effort into providing features in addition to what the
> standard specifies, often without compromising compliance to the
> standard, since often details are specified to be implementation
> dependent.  Regardless of their motives for doing so, there are good,
> economic and practical reasons to use such features.  The trick, if one
> must worry about using the code fifty years from now, is to ensure that
> implementation specific code is well separated from standard compliant
> code and that it is well documented.  That way, when the technology
> evolves in a way that will likely break your cod

Re: [GENERAL] [Bulk] Re: quoted identifier behaviour

2007-03-14 Thread Randall Smith

Scott Marlowe wrote:
> This whole discussion is reminding me of one of my personal mantras, and
> that is that relying on "artifacts" of behaviour is generally a bad
> idea.
>
> For instance, many databases accept != for not equal, but the sql
> standard quite clearly says it's <>.
>
> If you're relying on case folding meaning that you don't have to
> consistently use the same capitalization when referring to variables,
> table names, people, or anything else, you're asking for trouble down
> the line, and for little or no real gain today.
>
> I know that a lot of times we are stuck with some commercial package
> that we can't do anything to fix, so I'm not aiming this comment at the
> average dba, but at the developer.



Yea, this is a commercial package, but it's actually doing it right. 
Since it doesn't know how a user will name a table or column, it always 
calls them as quoted strings in upper case which is standards compliant, 
but doesn't work with PG.  So if a user names a table 55 and mine, it 
calls "55 AND MINE" and for foo, it calls "FOO". Looks like they did it 
right to me.


Randall




Re: [GENERAL] [Bulk] Re: quoted identifier behaviour

2007-03-14 Thread Randall Smith

Alvaro Herrera wrote:
> Randall Smith wrote:
>> Scott Marlowe wrote:
>>> This whole discussion is reminding me of one of my personal mantras, and
>>> that is that relying on "artifacts" of behaviour is generally a bad
>>> idea.
>>>
>>> For instance, many databases accept != for not equal, but the sql
>>> standard quite clearly says it's <>.
>>>
>>> If you're relying on case folding meaning that you don't have to
>>> consistently use the same capitalization when referring to variables,
>>> table names, people, or anything else, you're asking for trouble down
>>> the line, and for little or no real gain today.
>>>
>>> I know that a lot of times we are stuck with some commercial package
>>> that we can't do anything to fix, so I'm not aiming this comment at the
>>> average dba, but at the developer.
>>
>> Yea, this is a commercial package, but it's actually doing it right.
>> Since it doesn't know how a user will name a table or column, it always
>> calls them as quoted strings in upper case which is standards compliant,
>> but doesn't work with PG.  So if a user names a table 55 and mine, it
>> calls "55 AND MINE" and for foo, it calls "FOO".  Looks like they did it
>> right to me.
>
> So what's the problem?  Just create the tables as all uppercase and you
> should be fine, since the application must be systematic about quoting.



The toolkit uses quoted identifiers and the application doesn't.  I have 
a solution, which is to use views, but this thread is now about whether 
or not Postgresql should and will support the current SQL standard, 
which is to convert non-quoted identifiers to upper case.  Postgresql 
converts them to lower case and that is the issue.  Since posting this 
thread I've found this topic hashed out before in pgsql-hackers (thanks 
Tom).


Randall




Re: [GENERAL] [Bulk] Re: quoted identifier behaviour

2007-03-14 Thread Randall Smith

Stephan Szabo wrote:
> On Wed, 14 Mar 2007, Randall Smith wrote:
>> Scott Marlowe wrote:
>>> This whole discussion is reminding me of one of my personal mantras, and
>>> that is that relying on "artifacts" of behaviour is generally a bad
>>> idea.
>>>
>>> For instance, many databases accept != for not equal, but the sql
>>> standard quite clearly says it's <>.
>>>
>>> If you're relying on case folding meaning that you don't have to
>>> consistently use the same capitalization when referring to variables,
>>> table names, people, or anything else, you're asking for trouble down
>>> the line, and for little or no real gain today.
>>>
>>> I know that a lot of times we are stuck with some commercial package
>>> that we can't do anything to fix, so I'm not aiming this comment at the
>>> average dba, but at the developer.
>>
>> Yea, this is a commercial package, but it's actually doing it right.
>> Since it doesn't know how a user will name a table or column, it always
>> calls them as quoted strings in upper case which is standards compliant,
>> but doesn't work with PG.  So if a user names a table 55 and mine, it
>> calls "55 AND MINE" and for foo, it calls "FOO".  Looks like they did it
>> right to me.
>
> Maybe, but the 55 and mine example may or may not actually work.  55 and
> mine isn't a valid regular identifier.  "55 and mine" would be a valid
> identifier, but that's not the same identifier as "55 AND MINE".



You're right.  It's not a correct example.  I think the point is clear, though.

Randall




Re: [GENERAL] pre-parser query manipulation

2007-03-14 Thread Randall Smith

Erik Jones wrote:
> This may be a dupe.  I sent it and had some issues with my mail client,
> and afterwards it was sitting all alone in my outbox; if you've already
> seen this, sorry for the resend...
>
> On Mar 14, 2007, at 2:08 PM, Randall Smith wrote:
>> Hi,
>>
>> I'm attempting to process a query written in the form:
>>
>> SELECT fields FROM table WHERE conditions FOR UPDATE OF field_name
>>
>> when Postgresql supports:
>>
>> SELECT fields FROM table WHERE conditions FOR UPDATE OF table_name
>>
>> I'd like to remove the OF clause in the case only one table is queried.
>>
>> Anyone know of a solution to this?
>
> The OF clause is optional in postgres.  The semantics are thus:
>
> Without OF clause: lock all rows in all tables in the from clause that
> contribute data to the rows returned by your select statement.  So, if
> I'm understanding what you're asking here, just chop off the OF clause
> and you're set.

Didn't know that.  Thanks.

Chopping off the OF clause is the problem.  I have control of the DB,
but not of the application.  I do have control over the JDBC driver, so I
could attack it there, but it would be neat to intercept the query
before the parser gets it and chop off the OF.  Any idea how to do that?
Rules work after the parser, right?
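If the interception does end up living in the driver, the rewrite itself is
small string surgery.  A rough sketch (hypothetical wrapper code with a
deliberately conservative single-table check, not a real SQL parser):

```python
import re

def strip_for_update_of(sql):
    """Drop the 'OF ...' part of a trailing FOR UPDATE OF clause, but
    only when the FROM clause names exactly one table (no commas, no
    joins), so the lock scope is unchanged."""
    m = re.search(r'\bFROM\s+(\S+)(\s+WHERE|\s+FOR|\s*$)', sql, re.IGNORECASE)
    if m and ',' not in m.group(1):
        sql = re.sub(r'\bFOR\s+UPDATE\s+OF\s+\S+', 'FOR UPDATE', sql,
                     flags=re.IGNORECASE)
    return sql
```

A single-table query like `SELECT a FROM t WHERE x = 1 FOR UPDATE OF a` is
rewritten to end in plain `FOR UPDATE`, while multi-table queries pass through
untouched.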




> With OF clause: lock only rows from the specified tables that contribute
> data to the rows returned by your select statement.
>
> There isn't any way that I know of to lock specific columns on a table
> as it seems the first query you listed is doing.


Oracle likes it for some reason even though I think it only does row 
level locking.


Randall




Re: [GENERAL] pre-parser query manipulation

2007-03-15 Thread Randall Smith

Erik Jones wrote:
> This may be a dupe.  I sent it and had some issues with my mail client,
> and afterwards it was sitting all alone in my outbox; if you've already
> seen this, sorry for the resend...
>
> On Mar 14, 2007, at 2:08 PM, Randall Smith wrote:
>> Hi,
>>
>> I'm attempting to process a query written in the form:
>>
>> SELECT fields FROM table WHERE conditions FOR UPDATE OF field_name
>>
>> when Postgresql supports:
>>
>> SELECT fields FROM table WHERE conditions FOR UPDATE OF table_name
>>
>> I'd like to remove the OF clause in the case only one table is queried.
>>
>> Anyone know of a solution to this?
>
> The OF clause is optional in postgres.  The semantics are thus:
>
> Without OF clause: lock all rows in all tables in the from clause that
> contribute data to the rows returned by your select statement.  So, if
> I'm understanding what you're asking here, just chop off the OF clause
> and you're set.
>
> With OF clause: lock only rows from the specified tables that contribute
> data to the rows returned by your select statement.
>
> There isn't any way that I know of to lock specific columns on a table
> as it seems the first query you listed is doing.
>
> erik jones <[EMAIL PROTECTED] <mailto:[EMAIL PROTECTED]>>
> software developer
> 615-296-0838
> emma(r)




I'm at a loss as to how to remove the OF clause at the server.  Is it
possible to do it with a rule?  I don't have control over the
application and it's written to work with Oracle.  I do have control
over the JDBC driver, so maybe I should look for a solution there,
maybe a wrapper or something.


Randall




[GENERAL] oracle synchronization strategy

2004-10-31 Thread Randall Smith
I am going to sync a schema in postgres with one in an oracle db.  The 
tables are simple, but there are 200 of them.  I would like to try to 
keep the sync lag < 1 minute.  Here is my idea.  Please critique/suggest.

1. Set up a stored proc on Oracle that records an INSERT, UPDATE, or DELETE
SQL action taken on a table into a log table.
2. Program reads the log table on oracle and issues the same SQL command 
on the postgres db.  In the same transaction, postgres writes to a log 
showing the command has been executed.
3. The program will query the oracle log table on some frequency ~30 
seconds.
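Steps 2 and 3 amount to a small polling loop.  A sketch of its shape (all
table shapes and function names here are assumptions for illustration, not
working Oracle/Postgres code):

```python
import time

def replay_pass(fetch_pending, execute_on_pg, mark_replayed):
    """One polling pass: read unreplayed statements from the Oracle-side
    log table, run each on Postgres, and record completion (which, in the
    real setup, happens in the same Postgres transaction as the replay)."""
    applied = 0
    for entry_id, statement in fetch_pending():
        execute_on_pg(statement)
        mark_replayed(entry_id)
        applied += 1
    return applied

def run(sync_pass, interval_seconds=30):
    """Poll on the chosen frequency (~30 seconds in the plan above)."""
    while True:
        sync_pass()
        time.sleep(interval_seconds)
```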

What are your thoughts?
Randall


Re: [GENERAL] oracle synchronization strategy

2004-11-01 Thread Randall Smith
Thanks Joachim,
The mirror only has to go from oracle to pgsql and the schema/tables 
never change.  I'm going to take a look at dbmirror.  Thanks for the advice.

Randall
Joachim Wieland wrote:
> Hi Randall,
>
> On Sun, Oct 31, 2004 at 11:25:46PM -0600, Randall Smith wrote:
>> 1. Set up stored proc on oracle that records a INSERT, UPDATE, DELETE
>> SQL action taken on a table into a log table.
>> 2. Program reads the log table on oracle and issues the same SQL command
>> on the postgres db.  In the same transaction, postgres writes to a log
>> showing the command has been executed.
>> 3. The program will query the oracle log table on some frequency ~30
>> seconds.
>
> It depends on what you're trying to achieve.
>
> Your way might work if you only want to mirror oracle -> pgsql but not
> vice versa.  Furthermore you need to do manual maintenance on the pgsql
> side if you change your schema on the oracle side (create/drop/change
> tables, ...)
>
> I've done something similar with MS SQL -> pgsql and perl some years ago.
> Shout if you're interested.
>
> There's also dbmirror in contrib/ that works in a similar way.
>
> Joachim


Re: [GENERAL] oracle synchronization strategy

2004-11-03 Thread Randall Smith
For anyone interested, the below procedure worked well.
Randall
Randall Smith wrote:
> I am going to sync a schema in postgres with one in an oracle db.  The
> tables are simple, but there are 200 of them.  I would like to try to
> keep the sync lag < 1 minute.  Here is my idea.  Please critique/suggest.
>
> 1. Set up stored proc on oracle that records a INSERT, UPDATE, DELETE
> SQL action taken on a table into a log table.
> 2. Program reads the log table on oracle and issues the same SQL command
> on the postgres db.  In the same transaction, postgres writes to a log
> showing the command has been executed.
> 3. The program will query the oracle log table on some frequency ~30
> seconds.
>
> What are your thoughts?
> Randall


Re: [GENERAL] cross-table unique constraint

2004-01-18 Thread Randall Smith
So how might I go about doing this?

3 tables: a, b, and c

When these 3 tables are joined, there is a combination of 3 fields, one 
field from each table, that should be unique.

Should there be a trigger that checks this condition on inserts/updates 
on each of these tables?

Randall

Bruno Wolff III wrote:
On Tue, Jan 06, 2004 at 07:10:28 +,
  Randall Smith <[EMAIL PROTECTED]> wrote:
Is it possible to create a unique constraint on multiple fields in 
related tables?


Not with the normal UNIQUE constraint. That will do multiple columns in
one table, but won't span tables.
You can probably do what you want using triggers.
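
A minimal sketch of that trigger approach, on an entirely hypothetical schema where a(x, b_id) references b(id, y), b references c(id, z), and the joined combination (a.x, b.y, c.z) must stay unique. The same function would be attached to all three tables; note the dollar quoting needs PostgreSQL 8.0 or later, and concurrent inserts would still need explicit locking to be airtight:

```sql
CREATE FUNCTION check_abc_unique() RETURNS trigger AS $$
BEGIN
    -- Reject the change if any joined combination now occurs twice.
    IF EXISTS (
        SELECT 1
        FROM a
        JOIN b ON b.id = a.b_id
        JOIN c ON c.id = b.c_id
        GROUP BY a.x, b.y, c.z
        HAVING count(*) > 1
    ) THEN
        RAISE EXCEPTION 'combination (a.x, b.y, c.z) is not unique';
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER a_abc_unique AFTER INSERT OR UPDATE ON a
    FOR EACH ROW EXECUTE PROCEDURE check_abc_unique();
-- ...and likewise on b and c.
```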





[GENERAL] zero-length delimited identifier in pg_dumpall file

2004-10-01 Thread Randall Smith
Got this error on a pg_dumpall file dumped from a postgresql-7.4-0.3PGDG 
database and imported into a 7.4.5-3 database.

ERROR:  zero-length delimited identifier at or near  at character 37

From looking at the output, I think maybe this is the line that caused 
the error.

CREATE SCHEMA "MSV80" AUTHORIZATION "";

There is another statement a few lines down that looks like this:

CREATE SCHEMA msv80 AUTHORIZATION dba;
Is this a bug in the pg_dumpall program?
Randall
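
The failing line names a zero-length owner, which the target release rejects; the usual workaround is to hand-edit the dump so AUTHORIZATION names a role that exists on the target. Here dba is only a guess, based on the other CREATE SCHEMA statement in the same dump:

```sql
-- As dumped (rejected: "" is a zero-length identifier):
-- CREATE SCHEMA "MSV80" AUTHORIZATION "";

-- Hand-edited before restoring:
CREATE SCHEMA "MSV80" AUTHORIZATION dba;
```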


[GENERAL] initdb on Debian

2004-10-01 Thread Randall Smith
I would like to be able to run initdb to get a clean database for testing 
backups.  Reading the Debian version of the Postgresql README, I see 
that I should use the --debian-conffile option with initdb.  Problem 
though: I have to run initdb as user postgres, but I need to be root to 
edit the stuff in /etc.  So I can't do the whole job as either root or 
postgres; I lose either way.  Also, since the configs are symlinked into 
/etc, is there an elegant way to handle the configs in /etc when I'm 
running more than one postgresql instance?

Randall
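
For what it's worth, the usual split is to run the initdb step as postgres via su while root, then fix up /etc afterwards as root. The --debian-conffile flag is the one named in the Debian README; the data-directory path below is purely illustrative:

```
# As root: run initdb as the postgres user (data directory path illustrative)
su - postgres -c "initdb --debian-conffile -D /var/lib/postgres/testdata"

# Then, still as root, point the /etc symlinks at the new cluster's configs.
```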


Re: [GENERAL] PostgreSQL on Cobalt Raq and Cold Fusion on NT using ODBC?

2000-09-25 Thread Randall Smith

The machines are on the same subnet, nearly side by side. I thought maybe I
could telnet into port 5432 but still no luck... it's as if postmaster isn't
taking tcp connections of any kind. Both machines have proper
forward/reverse dns and no packet filtering exists between them. How can I
verify postmaster is accepting tcp connections?
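
For reference, postmasters of that vintage do not listen on TCP/IP at all unless started with the -i flag (later releases use tcpip_socket = true in postgresql.conf), so the startup command is the first thing to check. Paths and hostnames here are illustrative:

```
# TCP/IP connections are only accepted when the postmaster is started with -i:
postmaster -i -D /usr/local/pgsql/data

# From the NT box, a raw connection attempt on the port in use:
telnet db-host 5432
```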

Randy Smith
Tiger Mountain Technologies

- Original Message -
From: Tom Lane <[EMAIL PROTECTED]>
To: Randall Smith <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Monday, September 25, 2000 1:06 PM
Subject: Re: [GENERAL] PostgreSQL on Cobalt Raq and Cold Fusion on NT using
ODBC?


> [EMAIL PROTECTED] (Randall Smith) writes:
> > This is the error I get:
> > ODBC Error Code = S1000 (General error)
> > Could not connect to the server; Could not connect to remote socket.
>
> > Any thoughts?
>
> DNS problems?  Firewall packet filtering?  It sounds like you're never
> getting as far as talking to the postmaster.
>
> regards, tom lane

Re: [GENERAL] PostgreSQL on Cobalt Raq and Cold Fusion on NT using ODBC?

2000-09-25 Thread Randall Smith

Thanks to everyone who helped. I'm making progress. A quick port scan
revealed that postmaster was running on 5583; I guess Cobalt is
non-standard. I can now talk to postmaster but I'm getting an authentication
error like:

ODBC Error Code = S1000 (General error)
The database does not exist on the server or user authentication failed.

I suspect that I need to review the permissions of the database but I'm
stuck again.

Thanks in advance,

Randy Smith
Tiger Mountain Technologies
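
That error usually means either the database named in the DSN doesn't exist or pg_hba.conf turns the client host away. A sketch of a pg_hba.conf entry in the old pre-7.3 format, with an invented client address:

```
# pg_hba.conf, old format: type  database  client-IP  netmask  auth-method
host    all    192.168.1.10    255.255.255.255    password
```

After editing pg_hba.conf, the postmaster has to be signalled (or restarted) to re-read it.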