Re: [GENERAL] ISO something like "#if 0 ... #endif" for SQL code

2008-03-20 Thread Rick
In article <[EMAIL PROTECTED]>,
Sam Mason <[EMAIL PROTECTED]> wrote:

>I'm not quite sure if this would help your use case, but a few editors
>allow you to send blocks of text to other processes.  For example, under
>Emacs I can hit Ctrl+C twice and it will grab the current paragraph
>and send it off to psql, showing the results in another window.  Once
>I'm happy with the statement I leave it and move on to the next job
>(committing changes to some SCM when appropriate).

You can do that with vi (or vim) as well. Sending the current paragraph
would be !}psql, which filters the paragraph through psql and replaces it
with the output.




Re: [GENERAL] CREATE TEMP TABLE AS ... ON COMMIT DROP fails

2005-11-06 Thread Rick
In article <[EMAIL PROTECTED]>, Andrus Moor <[EMAIL PROTECTED]> wrote:
>I need to create temporary table with data which is dropped at end of 
>transaction.
>
>CREATE TABLE t1 ( c1 INTEGER ) ;
>INSERT INTO t1 VALUES (1);
>CREATE TEMP TABLE t2 AS SELECT * FROM t1 ON COMMIT DROP;
>
>Causes ERROR:  syntax error at or near "ON" at character 104
>
>How to implement this ?

Can't you just drop the table explicitly right before the commit?
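For example, a sketch of that workaround, using the tables from your example:

BEGIN;
CREATE TEMP TABLE t2 AS SELECT * FROM t1;
-- ... work with t2 ...
DROP TABLE t2;
COMMIT;

Depending on your version, CREATE TABLE AS may also accept the ON COMMIT
clause if it is placed before AS rather than after the query:

CREATE TEMP TABLE t2 ON COMMIT DROP AS SELECT * FROM t1;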

--
http://yosemitenews.info/



[GENERAL] newbie

2004-10-25 Thread Rick
I installed Mandrake 10 on my PC and chose PostgreSQL as the database. I had
installed it before on Windows, but on Linux I have no idea what happens. I
remembered that I should create a user, but on Linux the process is automatic,
and when I try to connect with pgAdmin I don't know the user and password =(

I entered the user postgres and "trusted server", but it says:

Error connecting to the server: could not connect to server: Connection
refused
Is the server running on host 127.0.0.1 and accepting
TCP/IP connections on port 5432?

And the server is up, the service is running. Who can help me, please? =(



[GENERAL] foreign keys to foreign tables

2015-06-22 Thread Rick Otten
Hello pgsql-general,

I'd like to set up a foreign key constraint to a foreign table from a local
table.

ie, I have a column in a local table that I'd like to ensure has a value in
the foreign table.

alter mytable
add column  some_column_id   uuid references myforeigntable(some_column_id)
;

Unfortunately I get a "not a table" error when I try this.

ERROR:  referenced relation "myforeigntable" is not a table

I'm thinking I'll have to write a function that checks for the existence of
the ids in the foreign table, and then put a CHECK constraint on using that
function, but I thought I'd ask first if there was a better way.
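For the archives, here is a rough sketch of what I have in mind (function
and constraint names invented; note that a CHECK like this is only evaluated
when the local row is inserted or updated):

CREATE FUNCTION id_exists_in_foreign(p_id uuid) RETURNS boolean AS $$
    SELECT EXISTS (SELECT 1 FROM myforeigntable
                   WHERE some_column_id = p_id);
$$ LANGUAGE sql STABLE;

ALTER TABLE mytable
    ADD CONSTRAINT some_column_id_exists
    CHECK (id_exists_in_foreign(some_column_id));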

Thanks.

--
Rick Otten
rottenwindf...@gmail.com


Re: [GENERAL] foreign keys to foreign tables

2015-06-22 Thread Rick Otten
Obviously the foreign server will be able to delete those rows, since it is
completely unaware of this dependency.

So it is the implied reverse constraint (of sorts), which can't be enforced,
that makes an FK-based definition impossible.

For my particular use case, this shouldn't be a problem.  The foreign table
is a reference table which does not typically experience deletes.  I'll go
with a function for now.  Since this happens to be a PostgreSQL-PostgreSQL
mapping I'll also consider mapping my table back the other way and then
putting a delete trigger on the foreign reference table to either cascade
or stop the delete once I decide which I'd rather do.
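Roughly what I have in mind for the trigger, as a sketch (to run on the
server that owns the reference table, assuming my local table has been
mapped back there as mytable_remote; all names hypothetical):

CREATE FUNCTION stop_referenced_delete() RETURNS trigger AS $$
BEGIN
    IF EXISTS (SELECT 1 FROM mytable_remote
               WHERE some_column_id = OLD.some_column_id) THEN
        RAISE EXCEPTION 'id % is still referenced remotely',
            OLD.some_column_id;
    END IF;
    RETURN OLD;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER guard_reference_delete
    BEFORE DELETE ON reference_table
    FOR EACH ROW EXECUTE PROCEDURE stop_referenced_delete();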

Thanks for the help!





On Mon, Jun 22, 2015 at 12:21 PM, Tom Lane  wrote:

> Rick Otten  writes:
> > Hello pgsql-general,
> > I'd like to set up a foreign key constraint to a foreign table from a
> local
> > table.
>
> > ie, I have a column in a local table that I'd like to ensure has a value
> in
> > the foreign table.
>
> > alter mytable
> > add column  some_column_id   uuid references
> myforeigntable(some_column_id)
> > ;
>
> > Unfortunately I get a "not a table" error when I try this.
>
> > ERROR:  referenced relation "myforeigntable" is not a table
>
> > I'm thinking I'll have to write a function that checks for existance of
> the
> > ids in the foreign table, and then put a CHECK constraint on using that
> > function, but I thought I'd as first if there was a better way.
>
> What's going to happen when the foreign server decides to delete some rows
> from its table?
>
> regards, tom lane
>


Re: [GENERAL] foreign keys to foreign tables

2015-06-22 Thread Rick Otten
Thanks Will!

I had been considering setting up replication (using SymmetricDS, which we
already use between other databases in our environment), but decided it was
too much trouble for this one check.  I may change my mind on that point
if I end up with a lot of dependencies like this or run into performance
issues.




On Mon, Jun 22, 2015 at 1:06 PM, William Dunn  wrote:

> Hello Rick,
>
> As I understand it you are correct. Oracle, DB2, and Postgres (and, I
> think, the SQL standards) do not implement constraints against tables on
> foreign servers. Although it would be possible to develop a DBMS to handle
> such constraints in a heterogeneous distributed environment, it would be
> unwise because of the poor performance and reliability of data sent over
> networks, so DBMSs do not implement it. You would, as you suspected, have
> to use stored procedures to emulate some of the functionality of a foreign
> key, but definitely think twice about the performance bottlenecks you would
> introduce. A more clever thing to do is use Slony, BDR, or triggers to
> replicate the foreign table and create the constraint against the local
> copy. In some other DBMSs the clever thing to do is create a materialized
> view and constraints against the materialized view (which achieves the
> same), but Postgres does not yet support such constraints against
> materialized views.
>
> Will J. Dunn
> willjdunn.com
>
> On Mon, Jun 22, 2015 at 12:21 PM, Tom Lane  wrote:
>
>> Rick Otten  writes:
>> > Hello pgsql-general,
>> > I'd like to set up a foreign key constraint to a foreign table from a
>> local
>> > table.
>>
>> > ie, I have a column in a local table that I'd like to ensure has a
>> value in
>> > the foreign table.
>>
>> > alter mytable
>> > add column  some_column_id   uuid references
>> myforeigntable(some_column_id)
>> > ;
>>
>> > Unfortunately I get a "not a table" error when I try this.
>>
>> > ERROR:  referenced relation "myforeigntable" is not a table
>>
>> > I'm thinking I'll have to write a function that checks for existance of
>> the
>> > ids in the foreign table, and then put a CHECK constraint on using that
>> > function, but I thought I'd as first if there was a better way.
>>
>> What's going to happen when the foreign server decides to delete some rows
>> from its table?
>>
>> regards, tom lane
>>
>>
>
>


[GENERAL] ALTER TABLE and vacuum

2016-06-06 Thread Rick Widmer

Do I need to vacuum after an alter table command?

Does it matter if there is a default or if we accept NULLs for the new 
field?





[GENERAL] Identify ldap connections in logfiles

2016-07-14 Thread Rick Widmer

Is there a way to identify ldap connections to pgsql 9.3 from log files?


Thanks,
Rick




Re: [GENERAL] overwrite column data select - Postgres 9.2

2016-09-20 Thread Rick Widmer


> I was hoping that in a SELECT I could replace the data from the email
> column with something else...
>
> Maybe email = name_first + name_last? Is that possible?
>
> I can't overwrite the data in that column... it has to be done by
> the SELECT (if possible).

SELECT col1, col2, ..., name_first || ' ' || name_last AS email
FROM ...

You can't use SELECT *; you must list every column except email.




[GENERAL] Permission denied for sequence

2007-05-31 Thread Hackenberg, Rick
I am currently having a problem with an application that has been
working fine for the past few months. Whenever I try to add a new entry
into a table I receive the following message:

ERROR: permission denied for sequence contractid

I have checked the permission for this sequence as well as the other
sequences that are contained in my database and it appears that my
permissions are ok.

Does anyone have any suggestions for any other settings I should verify?
Thanks in advance.

Rick Hackenberg
Business Systems Analyst
PRIMEDIA 
Enthusiast Media
6405 Flank Drive
Harrisburg, PA 17112
717-540-6643(Office)
717-657-9552 (FAX)
717-215-0450(Cell)



[GENERAL] Multiple customers sharing one database?

2007-06-01 Thread Rick Schumeyer
I'm developing an application that will be used by several independent 
customers.  Conceptually, the data from one customer has no relation at 
all to another customer.  In fact, each customer's data is private, and 
you would never combine data from different customers.  I'm trying to 
decide whether to:


a) give each customer a separate database

or

b) put everyone in the same database, but take steps to ensure that 
customer #1 cannot see data from customer #2.


I was talking to an Oracle expert who suggested (based on his Oracle
background) option (b), with each customer having their own schema.  His
feeling was that each database has significant memory overhead, which
then gets multiplied under option (a).  At least it does with Oracle.
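As I understand it, option (b) would look something like this sketch (role
and schema names made up):

CREATE SCHEMA customer1;
CREATE ROLE customer1_app LOGIN;
GRANT USAGE ON SCHEMA customer1 TO customer1_app;
ALTER ROLE customer1_app SET search_path = customer1;
-- make sure other customers' roles can't even look inside
REVOKE ALL ON SCHEMA customer1 FROM PUBLIC;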


Does anyone have any suggestions?




[GENERAL] yet another image: db or filesystem ? question

2007-07-17 Thread Rick Schumeyer
I've read the earlier threads on whether to store images in the database 
or filesystem.  I think I understand the pros and cons of each method, 
but I have a question on a specific use case.


Let's say I have a web app, and I want to display the images in a web page.

a) if the images are in the filesystem (and also under the web root), no
problem.  Just point an <img src="..."> tag at the file.

b) if the images are in the database...do I need to create a temporary
file first in order to use the <img> tag?  Or is there some other HTML
way of doing this?






[GENERAL] JOB WITH SUN MICROSYSTEMS!!!

2007-11-13 Thread Rick Grandy
We are looking for a Postgresql Architect/Developer to do some mentoring,
training, knowledge transfer and implementation for a large media outlet in
Los Angeles.  Travel is fully funded and the rates are great!  Please reply
to [EMAIL PROTECTED]  Thank you for your time!!!

 

Regards,

 

Ricky Grandy Sr. Recruiter

949-724-0304 (O) | 949-545-8885 (M) | 909-494-4185 (eFax)

[EMAIL PROTECTED]

www.crescentsolutions.net



 


[GENERAL] Tips for upgrading from 7.4

2008-01-23 Thread Rick Schumeyer
I'm working with someone who is about to upgrade from (I believe) pg 7.4 
to pg 8.1.


What would be the best resource to read to look for "gotchas"?  Release 
notes?  At the moment, we don't care about performance problems, only 
things that might break.






Re: [GENERAL] Tips for upgrading from 7.4

2008-01-23 Thread Rick Schumeyer

Tom Lane wrote:
> Rick Schumeyer <[EMAIL PROTECTED]> writes:
>> I'm working with someone who is about to upgrade from (I believe) pg 7.4
>> to pg 8.1.
>>
>> What would be the best resource to read to look for "gotchas"?  Release
>> notes?  At the moment, we don't care about performance problems, only
>> things that might break.
>
> Reading the release notes is good, but you really really should test the
> application(s) against a test 8.1 installation before you go live ...
>
> regards, tom lane

We plan on testing.  I'm asking if there is anything that "everyone
knows" will break that we might as well fix before testing.




Re: [GENERAL] Tips for upgrading from 7.4

2008-01-23 Thread Rick Schumeyer

Steve Crawford wrote:
> BTW, why not upgrade to the latest version?
>
> Cheers,
> Steve

Mostly because it's not my server :-)  I've suggested that; we'll see.

I appreciate the comments regarding type casting.  I'll be sure to look
out for that.






Re: [GENERAL] PostgreSQL For Beginners

2011-02-03 Thread rick chu
I think an intro presentation for Oracle developers should include:
-- an sqlplus-to-psql comparison
-- how to set up OS-authenticated users in PostgreSQL
-- the purpose of pg_hba.conf
-- schemas and search_path
-- where to find things in the catalog: information_schema or pg_tables

Rick


Re: [GENERAL] Unable to connect to Postgresql

2017-04-10 Thread Rick Widmer

On 4/10/2017 2:23 PM, John Iliffe wrote:
> On Monday 10 April 2017 11:53:35 Daniel Verite wrote:
>> Given that you set two directories: /tmp and /var/pgsql,
>> I would think you can let the other apps use /tmp as before
>> and have only Apache use /var/pgsql ?
>
> Yes, I will do that, but there are several hundred PHP web page scripts to
> be updated.  Presumably if one script opens two different databases then
> both of the pg_connect() instances will need to be updated.

How about creating a PHP file with the connect code, once, then
including that file from all of the hundreds of web pages?  It won't
help much with the first set of changes, but it will help with
portability later.



Rick





Re: [GENERAL] PostgreSQL with PowerBuilder, and Identity keys (serials)

2017-08-05 Thread Rick Widmer

On 8/5/2017 6:06 PM, Dan Cooperstock at Software4Nonprofits wrote:
> (I wish Postgres had a web-based community board, rather than just this
> mailing list with no history available!) I will post it to Appeon as an
> actual bug if I get stuck.


Have you looked here?

https://www.postgresql.org/list/

or here:

https://www.postgresql.org/list/pgsql-general/




[GENERAL] Looking for Silicon Valley/Peninsula/San Francisco users group

2011-04-29 Thread Rick Genter
I've been using PostgreSQL pretty steadily for the past year and am
interesting in joining/attending a users group meeting. I've searched for a
users group in the San Francisco/Peninsula/Silicon Valley area (I'm in
Redwood City), but all I've found are references to a San Francisco group
where the last update/meeting was 2 years ago. Is there such a group in this
area? Thanks.

-- 
Rick Genter
rick.gen...@gmail.com


Re: [GENERAL] Pass age function to extract function

2011-05-02 Thread Rick Genter
Try removing the keyword "interval" (i.e., EXTRACT(MINUTE FROM
TestRunTime)). Since TestRunTime is a column name, I think if you wanted to
cast it as an INTERVAL you'd specify EXTRACT(MINUTE FROM
TestRunTime::INTERVAL), but since TestRunTime is already an INTERVAL, the
cast is redundant.
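In other words, something like this trimmed-down sketch of your query:

SELECT starttime,
       EXTRACT(MINUTE FROM testruntime) AS runtime
FROM (SELECT age(endtime, starttime) AS testruntime,
             starttime
      FROM dtprfrm764.orion_tests) x;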

On Mon, May 2, 2011 at 4:16 PM, Adam Bruss  wrote:

>  Hello,
>
>
>
> How can one pass a non string literal interval to the extract function?
>
>
>
> For example:
>
>
>
> SELECT starttime, extract(minute from interval testruntime) as runtime
> from ( select age(endtime, starttime) as testruntime,
> ref_testnames_serial, starttime, endtime, dense_rank() over (order by
> starttime desc) dr from dtprfrm764.orion_tests where ref_testnames_serial =
> 389 and pass = true) x WHERE dr <= 20 ORDER BY starttime asc
>
>
>
> Does ‘minute from interval’ only accept string literals?
>
>
>
> Thanks,
>
> Adam
>
>
>
> Adam Bruss
>
> Development Engineer
>
> AWR Corporation/Simulation Technology & Applied Research
>
> 11520 N. Port Washington Rd., Suite 201
>
> Mequon, WI  53092  USA
>
> P: 1.262.240.0291 x104
>
> F: 1.262.240.0294
>
> E: abr...@awrcorp.com
>
> W: http://www.awrcorp.com
>
>
>



-- 
Rick Genter
rick.gen...@gmail.com


Re: [GENERAL] pervasiveness of surrogate (also called synthetic) keys

2011-05-03 Thread Rick Genter

On May 2, 2011, at 10:52 PM, Craig Ringer wrote:

> SSN? What if they don't live in the US or aren't a citizen?

Non-citizens can have SSNs (they have to if they work in the US).
--
Rick Genter
rick.gen...@gmail.com




Re: [GENERAL] Multiple table relationship constraints

2011-05-05 Thread Rick Genter
On Thu, May 5, 2011 at 3:20 PM, Jack Christensen wrote:

> What is the best way to handle multiple table relationships where
> attributes of the tables at the ends of the chain must match?
>
> Example:
>
> CREATE TABLE achievements(
> achievement_id serial PRIMARY KEY,
> ...
> );
>
> CREATE TABLE achievement_versions(
> achievement_version_id serial PRIMARY KEY,
> achievement_id integer NOT NULL REFERENCES achievements,
> ...
> );
>
> CREATE TABLE achievement_attempts(
> achievement_attempt_id serial PRIMARY KEY,
> achievement_version_id integer NOT NULL REFERENCES achievement_versions,
> ...
> );
>
> CREATE TABLE actions(
> action_id serial PRIMARY KEY,
> ...
> )
>
> CREATE TABLE achievement_attempt_actions(
> achievement_attempt_id integer NOT NULL REFERENCES achievement_attempts,
> action_id integer NOT NULL REFERENCES actions,
> PRIMARY KEY( achievement_attempt_id, action_id)
> );
>
>
> The achievement_attempt_actions table links actions to
> achievement_attempts. For a link to be valid a number of attributes of
> actions must match attributes of achievements and achievement_attempts. This
> means an update to any of these 5 tables could invalidate the chain. How can
> I eliminate the possibility for this type of erroneous data?
>

I might not be understanding your question, but isn't that what your foreign
key references do? For example, you can't update achievement_attempt_id in
the achievement_attempt table if there is an achievement_attempt_actions
record that refers to it since that would break the reference. (Not that you
want to be updating primary key values in the first place...)
-- 
Rick Genter
rick.gen...@gmail.com


Re: [GENERAL] psql tunneling thru a middle server

2011-05-05 Thread Rick Chu
On Thu, 2011-05-05 at 15:05 -0400, jtke...@verizon.net wrote:
> Im trying to connect my WS to a postgresql database (destination) via
> a middle server 
> I.e.
> WS   > Middle > Database 
> server  server 
>   172.x.2.4   172.x.4.12
> 
> 
> I can create a SSH tunnel from my WS to the middle server but not sure
> how or if I can make the middle server pass connections to and from my
> ws and database.  BTW,  they run the DB on a non standard postrgres
> port.  Any help would be appreciated
> Thanks 
> 

1. From your WS open two terminal(ssh) windows.

2. From Terminal#1 startup the ssh tunnel using: 
ssh -L 5432:database.com:8432 yo...@middle.com -p 20

3. From Terminal#2 connect to the database via the tunnel:
psql -p 5432 mydatabasename -U you12 -h localhost


RickC





Re: [GENERAL] Multiple table relationship constraints

2011-05-05 Thread Rick Genter
On Thu, May 5, 2011 at 3:50 PM, Jack Christensen wrote:

> The trick is there are additional attributes of actions and achievements
> such as a category that must match for the link to be valid. These
> attributes are not part of the primary key of either record and can and do
> change.
>

So your data is denormalized? (The "category" appears in 2 tables?) Don't do
that. Create a view that joins your two tables together instead if you need
a single entity that contains data from multiple sources. Then you won't
have any of the data integrity issues you're worried about.
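For example, a generic sketch (table and column names invented):

CREATE VIEW combined AS
SELECT a.a_id,
       a.category,     -- the one authoritative copy of the column
       b.other_column
FROM table_a a
JOIN table_b b ON b.a_id = a.a_id;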

-- 
Rick Genter
rick.gen...@gmail.com


Re: [GENERAL] Multiple table relationship constraints

2011-05-05 Thread Rick Genter
On Thu, May 5, 2011 at 4:14 PM, Jack Christensen wrote:

> It's not denormalized. It is an attribute that both tables have that have
> to match for it to be a valid link.
>
> Here's a contrived example:
>
> CREATE TABLE dorms(
>   dorm_id serial PRIMARY KEY,
>   gender varchar NOT NULL,
>   ...
> );
>
> CREATE TABLE people(
>   person_id serial PRIMARY KEY,
>   gender varchar NOT NULL,
>   ...
> );
>
> CREATE TABLE room_assignments(
>   person_id integer NOT NULL REFERENCES people,
>   dorm_id integer NOT NULL REFERENCES dorms,
>   ...
> );
>
> Men should only be assignable to men's dorms and women should only be
> assignable to women's dorms. On occasion a person's or dorm's gender needs
> to be updated. I want to make sure that doesn't cause a room assignment to
> become invalid. In this example, adding gender to room_assignments and using
> composite foreign keys is fairly straight forward -- but in my actual domain
> I have 5+ tables and 2+ attributes involved in the relationship.
>

Hm. I think the way I would handle this is to put the business logic for
inserting/updating into the room_assignments table into one or more
functions and have a special user that owns the tables and owns the
functions and declare the functions to be SECURITY DEFINER. Revoke
INSERT/UPDATE/DELETE access to the tables from all other users. Then you
grant your regular users EXECUTE access to the functions. The functions run
as the user that created them, so they will have direct INSERT/UPDATE/DELETE
access to the tables while your regular users won't.
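A minimal sketch of what I mean, using your dorms example (app_user is a
stand-in for your regular role):

CREATE FUNCTION assign_room(p_person_id integer, p_dorm_id integer)
RETURNS void AS $$
BEGIN
    -- enforce the business rule inside the function
    IF (SELECT gender FROM people WHERE person_id = p_person_id)
       <> (SELECT gender FROM dorms WHERE dorm_id = p_dorm_id) THEN
        RAISE EXCEPTION 'person % does not match dorm %',
            p_person_id, p_dorm_id;
    END IF;
    INSERT INTO room_assignments (person_id, dorm_id)
    VALUES (p_person_id, p_dorm_id);
END;
$$ LANGUAGE plpgsql SECURITY DEFINER;

REVOKE INSERT, UPDATE, DELETE ON room_assignments FROM PUBLIC;
REVOKE EXECUTE ON FUNCTION assign_room(integer, integer) FROM PUBLIC;
GRANT EXECUTE ON FUNCTION assign_room(integer, integer) TO app_user;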

-- 
Rick Genter
rick.gen...@gmail.com


[GENERAL] Growing a live database

2011-05-06 Thread Rick Genter
I have a PostgreSQL 8.4.4 database that is running 24/7. The drive that the
database is on is becoming full and I need to expand it. We are currently
doing log-shipping of the WAL files to a slave system to run in a hot
standby mode. I have two servers: S1 (currently running as master) and S2
(currently running as slave)

My current plan is to do the following:

- fail S1 over to S2 by doing pg_ctl stop on S1, then ending the recovery
process on S2 and let it come up as the master
- add a new larger drive to S1
- swap roles; back up S2 and restore it on S1, then start log shipping from
S2 to S1
- let the S1 "catch up" on the log files from S2
- fail S2 back over to S1 by doing pg_ctl stop on S2, then ending the
recovery process on S1 and let it come up as the master
- add a new larger drive to S2
- back up S1 and restore it on S2, then start log shipping from S1 to S2

I believe that this accomplishes the goal (increasing available drive space)
with a minimum amount of down time. Am I thinking correctly, or have I
missed something?

-- 
Rick Genter
rick.gen...@gmail.com


Re: [GENERAL] Memcached for Database server

2011-05-16 Thread Rick Genter

On May 16, 2011, at 10:09 PM, Adarsh Sharma wrote:

> Dear all,
> 
> I need to research on Memcache in the next few days.
> 
> What I want to know is it worth to have memcahed enable in our Mysql/ 
> Postgres Production Servers.
> We have databases from 20 to 230 GB and it's not the OLTP just a simple OLAP 
> where data is fetched and stored in some meaningful format.
> 
> 
> What are benefits & why we used memcahed?
> 
> What are the bottlenecks to meet?

You need to read about memcached. Memcached is not something you "enable". You 
have to program to it.
--
Rick Genter
rick.gen...@gmail.com




Re: [GENERAL] Memcached for Database server

2011-05-16 Thread Rick Genter

On May 16, 2011, at 10:31 PM, Adarsh Sharma wrote:

> Rick Genter wrote:
>> 
>> On May 16, 2011, at 10:09 PM, Adarsh Sharma wrote:
>> 
>>   
>>> Dear all,
>>> 
>>> I need to research on Memcache in the next few days.
>>> 
>>> What I want to know is it worth to have memcahed enable in our Mysql/ 
>>> Postgres Production Servers.
>>> We have databases from 20 to 230 GB and it's not the OLTP just a simple 
>>> OLAP where data is fetched and stored in some meaningful format.
>>> 
>>> 
>>> What are benefits & why we used memcahed?
>>> 
>>> What are the bottlenecks to meet?
>>> 
>> 
>> You need to read about memcached. Memcached is not something you "enable". 
>> You have to program to it.
>>   
> 
> Thanks Rick, just one question..
> 
>  At what stage we need memcached & what is the purpose of using it.
> 
> I just want to know whether it is worth to use memcahced or not as per our 
> requirements.


I can't tell you; you didn't state your requirements. I recommend that you go 
to the memcached web site and research it.

http://www.memcached.org

--
Rick Genter
rick.gen...@gmail.com




[GENERAL] PostgreSQL and SSIS

2011-05-23 Thread Bailey, Rick
I have a Microsoft SQL Server IS job that has been running in production for
several years.  It pulls data from a PostgreSQL 8.4.2 database on Linux into an 
SQL Server 2005 installation on Windows  Server 2003 (all 32-bit).  I am 
re-writing this in a test environment that consists of the same PostgreSQL 
database and an SQL Server 2008R2 installation on Windows Server 2008R2 (all 
64-bit).
On the Windows Server 2008R2 machine, I installed 64 bit Postgres ODBC drivers 
and found that I could not see them when creating an ADO.net connection manager 
in BIDS.  A bit of googling later, I removed the 64 bit drivers and installed 
32-bit Postgres ODBC drivers and set up DSNs using
windows\SysWOW64\odbcad32.exe.  When setting up the DSNs, clicking the test 
button returned 'Connection successful'.
Back to BIDS, create a new ADO.net connection manager, ODBC Data Provider, 
select the DSN name in 'Use user or system data source name', hit the test 
connection button.  It returns 'Test connection succeeded'.  Create  a data 
flow task, edit, add an ADO.NET source, edit, select the new connection 
manager, Data access mode is set to 'Table or view', click the drop down for 
'Name of the table or view:', it says loading and will sit there like that 
forever.  If I click it again, it returns the following error message:
'Could not retrieve the table information for the connection manager 
'PostgreSQL30'.  Object reference not set to an instance of an object. 
(Microsoft.DataWarehouse)'
If I select 'SQL Command' as the Data Access mode, and enter any SQL Command 
(eg select * from PostgresTable) and hit the Preview button, the expected data 
is returned.
My question is why can it not return the list of tables, but it can return data.
Any help would be appreciated.


Rick Bailey
Database Specialist
Materials Research Institute
123 Land & Water Building
University Park, PA 16802
814-863-1294



Re: [GENERAL] how to start a procedure after postgresql started.

2011-05-23 Thread Rick Genter

On May 23, 2011, at 9:46 PM, jun yang wrote:

> thanks for the info,i am just not have such deep learn of pg internal,
> i am on user level,not hacker,so the mail is in pgsql-general,not
> hacker list.

What you are asking to do is not a typical user function. It would be more 
appropriate for a "hacker list".
--
Rick Genter
rick.gen...@gmail.com




Re: [GENERAL] Miidpoint between two long/lat points? (earthdistance?)

2011-05-25 Thread Rick Genter
On Wed, May 25, 2011 at 9:47 AM, Carlo Stonebanks <
stonec.regis...@sympatico.ca> wrote:

>  I need to calculate the long/lat values between a line demarcated by two
> long/lat points.
>
>
>
> The points will be very close, but there is the 180 degree problem to
> consider, so a simple average won’t work.
>
>
>
> Does anyone know of a function or have a formula that will work using geo
> long/lat values? I don’t see anything obvious in the earthdistance module.
>

The simplest way to deal with "the 180 degree problem" is to remember that
you can add 360 degrees to a long and get a value that should continue to
work. So, assuming "West" is negative, -175 (175 degrees West) is the same
as -175+360 = 185 (185 degrees East). Then you don't have to worry about
wraparound. If the result is > 180, subtract 360.
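For instance, a sketch of the longitude half of that idea (latitude can be
a plain average for points this close):

CREATE FUNCTION mid_longitude(lon1 float8, lon2 float8) RETURNS float8 AS $$
DECLARE
    l2  float8 := lon2;
    mid float8;
BEGIN
    -- shift one endpoint by 360 degrees when the pair straddles 180
    IF abs(lon2 - lon1) > 180 THEN
        l2 := l2 + CASE WHEN lon2 < lon1 THEN 360 ELSE -360 END;
    END IF;
    mid := (lon1 + l2) / 2;
    -- normalize back into (-180, 180]
    IF mid > 180 THEN
        mid := mid - 360;
    ELSIF mid <= -180 THEN
        mid := mid + 360;
    END IF;
    RETURN mid;
END;
$$ LANGUAGE plpgsql IMMUTABLE;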
-- 
Rick Genter
rick.gen...@gmail.com


Re: [GENERAL] syntax for updating an aliased table

2011-05-26 Thread Rick Genter
The UPDATE statement when multiple tables are involved always drives me
nuts.

I think what you need to do is remove all of the "old." from the SET clause
and use "triple." in the WHERE clause instead of "old." - and remove the old
table alias from the UPDATE.
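In other words, something like this (untested, using the column names from
your statement):

UPDATE triple
   SET obln  = new.obln,  ointv = new.ointv,
       otime = new.otime, oflt  = new.oflt,
       ostr  = new.ostr,  oint  = new.oint,
       oda   = new.oda,   uasid = new.uasid
  FROM triple_update AS new
 WHERE triple.s = new.s
   AND triple.g = new.g
   AND (triple.obln  <> new.obln  OR triple.ointv <> new.ointv OR
        triple.otime <> new.otime OR triple.oflt  <> new.oflt  OR
        triple.ostr  <> new.ostr  OR triple.oint  <> new.oint  OR
        triple.oda   <> new.oda   OR triple.uasid <> new.uasid);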

On Thu, May 26, 2011 at 9:38 AM, Andy Chambers  wrote:

> I'm confused about the correct syntax for updating an aliased table.  I
> want to update triple from triple_updates
> where the data is different and tried to use the following
>
> update triple old
>set
>   old.obln = new.obln,  old.ointv = new.ointv,
>  old.otime = new.otime,  old.oflt = new.oflt,  old.ostr = new.ostr,
>  old.oint = new.oint,  old.oda = new.oda,  old.uasid = new.uasid
>from triple_update as new
>   where (old.s = new.s and
>  old.g = new.g) and
> ( old.obln <> new.obln or  old.ointv <>
> new.ointv or  old.otime <> new.otime or  old.oflt <> new.oflt or  old.ostr
> <> new.ostr or  old.oint <> new.oint or  old.oda <> new.oda or  old.uasid <>
> new.uasid)
>
>
> ...but postgres complains about not having column "old" in the triple
> table.  Putting an "as" between triple and old on the first line didn't make
> any difference.  If
> I leave out the old alias, it complains about the columns being ambiguous.
>  How should the query above be changed to be syntactically correct?
>
> Thanks,
> Andy
>



-- 
Rick Genter
rick.gen...@gmail.com


Re: [GENERAL] proper regex_replace() syntax

2011-06-01 Thread Rick Genter
On Wed, Jun 1, 2011 at 10:22 AM, Geoffrey Myers  wrote:

> I want to use regex_replace() to replace characters in multiple records.
>
> What I would like to do is this:
>
> select regex_replace((select fname from table), 'z', 'Z'));
>
>
> The problem is, the subquery returns more then one row.
>
> So, is there a way to do what I'm trying to do?  That is, replace the same
> character in multiple records using regex_replace() ?
>

I think what you want is:

SELECT regexp_replace(fname, 'z', 'Z') FROM table;

(note the function is spelled regexp_replace, with a "p"). This should
return a recordset where each row has one column which is the result of
regexp_replace() on the corresponding row of table.
-- 
Rick Genter
rick.gen...@gmail.com


[GENERAL] trouble building user defined agg function with plpython

2011-06-14 Thread Rick Harding
I'm trying to test out a user-defined aggregate function. The problem I'm
getting is that the state is never passed to the function after the first
call. I'm wondering if this is an issue with having my function defined as
a plpython function or something.

Each call I get an UnboundLocalError exception. I can try/catch it and set
a default value for the state, but then it's triggered on every invocation
during execution.

A small demo function:

CREATE TYPE group_data AS (
id  integer,
weight  decimal
);

CREATE OR REPLACE FUNCTION mysum(curr integer, vals group_data)
 RETURNS integer
 AS $$

try:
curr = curr + vals['weight']
except UnboundLocalError:
plpy.notice("UNBOUND")
curr = 0

return curr

 $$ LANGUAGE plpythonu;

CREATE AGGREGATE mysumagg (group_data)
(
sfunc = mysum,
stype = integer
);


I get the following when testing it:

-- SELECT  
--mysumagg(ROW(res.idx, 1)::group_data)
-- FROM (
-- SELECT r."D6" as idx
-- FROM t_fct_respondent r
-- LIMIT 2
--  ) AS res;

-- NOTICE:  UNBOUND
--  CONTEXT:  PL/Python function "mysum"
-- NOTICE:  UNBOUND
--  CONTEXT:  PL/Python function "mysum"
 mysumagg 
--
0
(1 row)


Thanks for any pointers on what I'm missing from the way to handle the
agggregate definition.

-- 

Rick Harding
@mitechie
http://blog.mitechie.com
http://lococast.net



Re: [GENERAL] trouble building user defined agg function with plpython

2011-06-16 Thread Rick Harding
On Tue, 14 Jun 2011, Tom Lane wrote:

> Rick Harding  writes:
> > CREATE OR REPLACE FUNCTION mysum(curr integer, vals group_data)
> >  RETURNS integer
> >  AS $$
> > try:
> > curr = curr + vals['weight']
> > except UnboundLocalError:
> > plpy.notice("UNBOUND")
> > curr = 0
> > return curr
> >  $$ LANGUAGE plpythonu;
> 
> This function doesn't work when called manually; it's got nothing to do
> with the aggregate context.  You should read the last half of this page:
> 
> http://www.postgresql.org/docs/9.0/static/plpython-funcs.html
> 
>   regards, tom lane

Thanks so much, I had missed that part in the docs about reassigning the
function parameters. That helped point me in the right direction.
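For anyone finding this in the archives, the corrected function looks
roughly like this (a sketch, following the docs' advice to rebind the
parameter before assigning to it):

CREATE OR REPLACE FUNCTION mysum(curr integer, vals group_data)
 RETURNS integer
 AS $$

global curr                 # rebind the aggregate state explicitly
if curr is None:            # state starts out NULL (no initcond)
    curr = 0

return int(curr + vals['weight'])

 $$ LANGUAGE plpythonu;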

Rick



Re: [GENERAL] In a view, how do I cause fields with possible NULLs to be treated as a blank string in a replace operation?

2011-06-27 Thread Rick Genter
Either select '' in place of some_type, or wrap it as COALESCE(some_type, '').
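Applied to the view from your message, that would be something like:

CREATE OR REPLACE VIEW formatted_table AS
  SELECT name,
         replace(some_template, '@', COALESCE(some_type, '')) AS some_field
    FROM some_table;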

On Mon, Jun 27, 2011 at 9:53 AM, Asfand Qazi (Sanger Institute) <
aq2.san...@gmail.com> wrote:

> Hello,
>
> So I have am playing with a view to test the feasibility of a
> technique for storing some data.
>
> It basically goes something like this:
>
> CREATE VIEW formatted_table AS
>   SELECT name,
>  replace(some_template, '@', some_type) AS some_field
> FROM some_table;
>
> some_template is something like 'foo@bar' or 'foobar' (note the
> missing template character).
>
> some_type is a single letter like 'a' or 'b', or it can be NULL.
>
> The above view works fine for rows where some_type is a letter, and
> some_field ends up as 'fooabar' or whatever.
>
> However, when some_type is NULL, some_field ends up as NULL as well.
> I understand that this is expected behaviour, but how do I cause the
> view to treat a some_type of NULL as an empty string, so that
> some_field simply ends up as 'foobar'?
>
> Hope that was clear.
>



-- 
Rick Genter
rick.gen...@gmail.com


Re: [GENERAL] DROP TABLE Appears to Fail

2011-06-28 Thread Rick Genter
Silly question, but did you try it with a semicolon after the drop table?

# drop table station_type;

I've noticed that if you are in the middle of a statement and issue a \
command, psql ignores the SQL you've typed in and just does the \ command.


On Tue, Jun 28, 2011 at 3:34 PM, Rich Shepard wrote:

>  I cannot recall issuing a DROP TABLE command from psql that did not work,
> but seem to have this as a new experience.
>
>  When I look at the database table list with '\d' I see
>
> public | station_type  | table| rshepard
> public | station_type_statype_seq  | sequence | rshepard
>
> and I want to drop and recreate these with a typo fixed. But, the drop
> commands do not appear to work:
>
> # drop table station_type
>
> # \d
>
>  public | station_type  | table| rshepard
>  public | station_type_statype_seq  | sequence | rshepard
>
>  I fail to see what I'm doing incorrectly and would appreciate a clue stick
> to set me on the proper path.
>
> TIA,
>
> Rich
>



-- 
Rick Genter
rick.gen...@gmail.com


Re: [GENERAL] DROP TABLE Appears to Fail

2011-06-28 Thread Rick Genter
After issuing the \d you are still in the middle of your command. Witness
the following copy/paste of a terminal session:

bash-3.2$ ./psql
Password:
psql (8.4.4)
Type "help" for help.

postgres=# create table foo (bar int);
CREATE TABLE
postgres=# drop table foo
postgres-# \d
List of relations
 Schema | Name | Type  |  Owner
+--+---+--
 public | foo  | table | postgres
(1 row)

postgres-# drop table foo;
ERROR:  syntax error at or near "drop"
LINE 2: drop table foo;
^
postgres=# drop table foo;
DROP TABLE
postgres=#

This is on 8.4.4. The semicolon is required.

On Tue, Jun 28, 2011 at 3:53 PM, Rich Shepard wrote:

> On Tue, 28 Jun 2011, Rick Genter wrote:
>
>  Silly question, but did you try it with a semicolon after the drop table?
>>
>
> Rick,
>
>  See my answer to Andy: that's incorrect syntax and psql complains.
>
>
>  I've noticed that if you are in the middle of a statement and issue a \
>> command, psql ignores the SQL you've typed in and just does the \ command.
>>
>
>  But there is no continuation command just 'drop table '.
>
> Thanks,
>
>
> Rich
>



-- 
Rick Genter
rick.gen...@gmail.com


Re: [GENERAL] Insufficient privileges.

2011-07-07 Thread Rick Genter
On Thu, Jul 7, 2011 at 10:03 AM, Dave Coventry  wrote:

> I am getting the following error message in my Drupal install.
>
> PDOException: SQLSTATE[42501]: Insufficient privilege: 7 ERROR:
> permission denied for sequence currenttest_id_seq: INSERT INTO
> currentTest (score) VALUES (:db_insert_placeholder_0);
>
> This is a table that I created using the postgres super user.
>
> I have tried to grant the drupal user (drupaluser) privileges to the table
> with:
>
> GRANT ALL ON currentTest to drupaluser;
>
> but this fails to resolve the issue.
>
> Can anyone suggest a way forward?
>

From the message I'd say that the drupal user doesn't have access to the
sequence, which is a separate object from the table.
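Something along these lines should fix it (sequence name taken from the
error message):

GRANT USAGE, SELECT ON SEQUENCE currenttest_id_seq TO drupaluser;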

-- 
Rick Genter
rick.gen...@gmail.com


Re: [GENERAL] Insufficient privileges.

2011-07-07 Thread Rick Genter
On Thu, Jul 7, 2011 at 10:26 AM, Dave Coventry  wrote:

> Hi Rick,
>
> Thanks for the response.
>
> What is "the sequence"? and how do I grant the privileges needed to
> insert data into the database?
>
> Is it a postgres issue?
>

Yes. I don't know drupal, so I don't know the correct way to fix this. My
guess is that something wasn't installed/configured correctly.
-- 
Rick Genter
rick.gen...@gmail.com


Re: [GENERAL] Unexpected results when joining on date fields

2011-07-12 Thread Rick Genter
I don't think you understand what JOIN does. Think of it as a double-nested
FOR loop: for each record that has the value on the left side of the JOIN,
it will match all records on the right side of the JOIN that meet the ON
criteria. For example, if I have two tables:

A (i int, j int):

i   j
1 1
2 1
3 2
4 2
5 3
6 3


and

B (k int, j int)
k j
10   1
11   1
12   2
13   2
14   3
15   3

Then if I do

SELECT COUNT(*) FROM A JOIN B ON A.j = B.j

I'll get 12. Each record in A matches 2 records in B on the value of j.
Study the following transcript:

bash-3.2$ bin/psql -d g2_master
Password:
psql (8.4.4)
Type "help" for help.

g2_master=# CREATE TABLE A (i int, j int);
CREATE TABLE
g2_master=# CREATE TABLE B (k int, j int);
CREATE TABLE
g2_master=# INSERT INTO A VALUES (1, 1), (2, 1), (3, 2), (4, 2), (5, 3), (6,
3);
INSERT 0 6
g2_master=# INSERT INTO B VALUES (10, 1), (11, 1), (12, 2), (13, 2), (14,
3), (15, 3);
INSERT 0 6
g2_master=# SELECT COUNT(*) FROM A JOIN B ON A.j = B.j;
 count
---
12
(1 row)

g2_master=# SELECT * FROM A JOIN B ON A.j = B.j;
 i | j | k  | j
---+---++---
 1 | 1 | 10 | 1
 1 | 1 | 11 | 1
 2 | 1 | 10 | 1
 2 | 1 | 11 | 1
 3 | 2 | 12 | 2
 3 | 2 | 13 | 2
 4 | 2 | 12 | 2
 4 | 2 | 13 | 2
 5 | 3 | 14 | 3
 5 | 3 | 15 | 3
 6 | 3 | 14 | 3
 6 | 3 | 15 | 3
(12 rows)

g2_master=#


On Sun, Jul 10, 2011 at 4:58 PM, Tim Uckun  wrote:

> I have two tables, traffic and sales. Each one has a date field and
> lists the traffic and sales broken down by various parameters
> (multiple rows for each date).
>
> If I run  select (select count(*) from traffic) as traffic, (select
> count(*) from sales) as sales; I get the following  49383;167807
>
> if I run   select count(*) from traffic t inner join sales s on t.date
> = s.date  I get 24836841.
>
> If I change the join to a left join, right join, full join I get the
> same number of records.
>
> So I created a data table which just has the dates in it and ran this
> query.
>
> select count(d.date) from dates d
> inner join traffic t on t.date = d.date
> inner join sales s on s.date = d.date
>
> And I get the same number 24836841
>
> Same goes for right joins on the above query. Left joins of course
> give a different answer as there are more dates in the date table than
> there are in the other tables.
>
> I am a bit perplexed by what is happening here.
>
> Cheers
>



-- 
Rick Genter
rick.gen...@gmail.com


Re: [GENERAL] Error Importing CSV File

2011-07-15 Thread Rick Genter
I think the COPY is expecting the first line of the file to be a header and
it doesn't find a column named "96799" in the table. Try putting a line at
the top of the file that looks like this:

zip_code,latitude,longitude,city,state,county

On Fri, Jul 15, 2011 at 10:10 AM, Susan Cassidy wrote:

> There seems to be no reason it should be looking for an integer, if your
> table definition as shown is correct.  You don't have any integers listed.
>
> Also, why does it think that the column id is 96799?
>
> Stupid question, but are you logged into the right database?  Maybe a
> different db has a different table definition for that table name?
>
> Maybe it is using a different file than you think it is?  I had to specify
> the full path to get the COPY to work on my test database.
>
> When I created the same table as you specified, and created a test CSV
> file, it worked fine both with and without quotes.
>
> Susan
>
> -Original Message-
> From: pgsql-general-ow...@postgresql.org [mailto:
> pgsql-general-ow...@postgresql.org] On Behalf Of Bryan Nelson
> Sent: Friday, July 15, 2011 9:04 AM
> To: pgsql-general@postgresql.org
> Subject: [GENERAL] Error Importing CSV File
>
> I am having problems importing a CSV file of sample data for testing
> in a web app.
>
> Columns & Types
> ---
> zip_code - text
> lattitude - float8
> longitude - float8
> city - text
> state - text
> county - text
>
> Some Sample Data From CSV File
> --
> 96799,-7.209975,-170.7716,PAGO PAGO,AS,AMERICAN SAMOA
> 96941,7.138297,151.503116,POHNPEI,FM,FEDERATED STATES OF MICRO
> 96942,7.138297,151.503116,CHUUK,FM,FEDERATED STATES OF MICRO
>
> COPY Command
> 
> COPY geo_data FROM 'geo_data2.csv' DELIMITERS ',' CSV;
>
> Error Message
> -
> ERROR: invalid input syntax for integer: "96799"
> CONTEXT: COPY geo_data, line 1, column id: "96799"
>
> I can't figure out why it keeps choking with "invalid syntax for
> integer" since the field was created as "text".
>
> Any and all help greatly appreciated!
>



-- 
Rick Genter
rick.gen...@gmail.com


Re: [GENERAL] Slow query with sub-select

2011-07-16 Thread Rick Genter

On Jul 16, 2011, at 4:14 PM, - - wrote:

> I would like to count rows in q whose mid does not exist in t.

I would write such a query like this:

SELECT COUNT(*)
   FROM q
  LEFT OUTER JOIN t
 ON (t.mid = q.mid)
WHERE t.mid IS NULL;

And I would make sure there was an index on t.mid. (And for 9.2, as I 
understand it, q.mid as well, since I believe in 9.2 PostgreSQL will be able to 
compute the result strictly from the indexes without hitting the base tables.)

--
Rick Genter
rick.gen...@gmail.com



Re: [GENERAL] Another unexpected behaviour

2011-07-20 Thread Rick Genter
On Wed, Jul 20, 2011 at 9:58 AM, Rob Richardson
wrote:

> It seems to me that it is inherently wrong to perform any operation on a
> database that depends on the order in which records are retrieved,
> without specifying that order in an ORDER BY clause.  The "update t1 set
> f1 = f1 + 1" assumes that the operation will be performed in an order
> that guarantees that the highest unchanged record will be the next
> record processed.  I don't believe that any database system should be
> required to support an action like this.
>
> RobR
>

I disagree. I think it depends upon all records being modified before any
are constraint-checked, which may or may not be a reasonable requirement. If
you think of it as a true set operation, it seems like a perfectly
reasonable thing to do ("increment the value of column N in each of the
records of this set"). It seems odd that this should work:

-- drop unique index
-- single update statement
-- apply unique index

But just "single update statement" won't.

-- 
Rick Genter
rick.gen...@gmail.com


[GENERAL] Database design confusing pg_restore, and misc pg_restore issues

2010-05-08 Thread Rick Yorgason

Hey everyone,

I run a website that sells videogames, and different games have 
different registration systems, so I have a database design that goes 
something like this:



registration_type enum('none', 'regtype1', 'regtype2')

products(product_id, registration_type)

order_item(order_id, product_id, check(order_item_has_reginfo(order_id, 
product_id)))

regtype1_reginfo(order_id, product_id, misc rows)

regtype2_reginfo(order_id, product_id, orthogonally misc rows)

function order_item_has_reginfo(text, text) returns boolean as $$
select exists(
select 1 from products where product_id = $2
and (
(reg_type = 'none')
or (reg_type = 'regtype1' and (select exists(select 1 
from regtype1_reginfo where order_id = $1 and product_id = $2)))
or (reg_type = 'regtype2' and (select exists(select 1 
from regtype2_reginfo where order_id = $1 and product_id = $2)))
)
)
$$ LANGUAGE 'SQL';


In other words, (order_id, product_id) of order_item is a foreign key to 
either reginfo1, reginfo2, or nothing, depending on which product it is.


This works really well, until I try to use pg_dump/pg_restore, because it
attempts to restore order_items before the reginfo tables.  To get it to 
work properly, I need to load the schema, disable the check, load the 
data, then re-enable the check.


I'm interested in either a more painless way of importing backups, or a 
better design.


Incidentally, using --disable-triggers didn't disable checks, and 
--use-list didn't seem to actually work on my dev machine (Vista x64); 
it just pretends like everything went fine, without inserting any data. 
 Here's what PowerShell prints out:



PS D:\projects\backup> & 'C:\Program Files 
(x86)\PostgreSQL\8.4\bin\pg_restore.exe' -v --use-list=backup.list -U blahblah -p  
-C backup.db
--
-- PostgreSQL database dump
--

-- Started on 2010-05-07 22:22:02

SET statement_timeout = 0;
SET client_encoding = 'UTF8';
SET standard_conforming_strings = off;
SET check_function_bodies = false;
SET client_min_messages = warning;
SET escape_string_warning = off;

-- Completed on 2010-05-08 01:15:01

--
-- PostgreSQL database dump complete
--

pg_restore.exe : pg_restore: implied data-only restore
At line:1 char:2
+ & <<<<  'C:\Program Files (x86)\PostgreSQL\8.4\bin\pg_restore.exe' -v 
--use-list=backup.list -U blahblah -p  -C backup.db
+ CategoryInfo  : NotSpecified: (pg_restore: implied data-only 
restore:String) [], RemoteException
+ FullyQualifiedErrorId : NativeCommandError


Thanks for your help,

-Rick-



Re: [GENERAL] Database design confusing pg_restore, and misc pg_restore issues

2010-05-08 Thread Rick Yorgason

On 08/05/2010 10:33 PM, Tom Lane wrote:
> Since you say that --disable-triggers doesn't help, I guess that you're
> applying that function not in a trigger but in a CHECK constraint?
> That's pretty horrid in itself: CHECK is *not* meant to enforce anything
> except local properties of the newly inserted/updated row itself.
> Aside from the ordering problems that you've already run into some of,
> consider what happens when the referenced row gets deleted.  (Hint:
> nothing.)

Luckily, they never get deleted :)

Okay, well, I guess one solution is to replace the checks with triggers 
on all tables involved.  That's not pretty, and really doesn't express 
the concept of a constraint very clearly, but I guess it would work.



> Sure you can't find a way to unify reginfo1/reginfo2 into one table?
> If you have some side information that doesn't fit conveniently into
> that table, maybe making an auxiliary table that's foreign-keyed to
> the master reginfo table would help.  But you really need a structure
> that allows you to declare the order_item table with a regular foreign
> key for reginfo.


So, your first suggestion would look like this:


reginfo(order_id, product_id, reginfo1_columns, reginfo2_columns, FOREIGN 
KEY(order_id, product_id) REFERENCES order_items)


For the sake of illustration, let's say that order_item's foreign key to 
this table is NOT NULL.


So, if the product in question uses regtype1, then the reginfo2 columns 
are NULL, and vice versa.  If the product doesn't use any registration, 
then both the reginfo1 and reginfo2 columns are NULL.


The problem is, how do I express that requirement in a constraint?  And 
without updating the schema every time I add a new product?



Your second suggestion would look like this:


reginfo(order_id, product_id, FOREIGN KEY(order_id, product_id) REFERENCES 
order_items)

reginfo1(order_id, product_id, reginfo1_columns, FOREIGN KEY(order_id, 
product_id) REFERENCES reginfo)

reginfo2(order_id, product_id, reginfo2_columns, FOREIGN KEY(order_id, 
product_id) REFERENCES reginfo)


Well, at that point, the reginfo table is redundant, and the reginfo1 
and reginfo2 tables may as well reference order_items directly, which is 
exactly what I have, minus my problematic constraint.


My assumption is that most people would simply give up and assume that 
this constraint is too difficult to express in SQL, and just rely on the 
business logic never being wrong.  I was hoping that wasn't the case :)


Thanks,

-Rick-



[GENERAL] left outer join fails because "column .. does not exist in left table?"

2010-06-30 Thread Rick . Casey
I have a JOIN error that is rather opaque...at least to me.

I've used other JOIN queries on this project that seem very similar to
this one, which looks like:

SELECT S.subjectid,STY.studyabrv,labID,boxnumber,wellrow,wellcolumn
FROM DNASample D, IBG_Studies STY, Subjects S, ibg_projects P
  LEFT OUTER JOIN ibg_ps_join IPJ USING (dnasampleid)
WHERE
D.subjectidkey=S.id
AND STY.studyindex=D.studyindex
AND IPJ.projects_index=P.ibg_projects_index
ORDER BY studyabrv,boxnumber,wellcolumn,wellrow

But when I run it I get this error:

ERROR:  column "dnasampleid" specified in USING clause does not exist in
left table

I am rather mystified by this, since the field is definitely in the
dnasample table, as its primary key. Nor do I see how to start debugging
such an error. Any suggestions appreciated...

--Rick





[GENERAL] open_sync fails

2008-07-24 Thread Rick Weber

Basic system setup:

Linux 2.4 kernel (heavily modified)
Dual core Athlon Opteron
4GB ECC RAM
SW RAID 10 configuration with 8 750 Gb disks (using only 500Gb of each 
disk) connected via LSISAS1068 based card



While working on tuning my database, I was experimenting with changing 
the wal_sync_method to try to find the optimal value.  The really odd 
thing is when I switch to open_sync (O_SYNC), Postgres immediately fails 
and gives me an error message of:


2008-07-22 11:22:37 UTC 19411 akamai [local] PANIC:  could not write to
log file 101, segment 40 at offset 12558336, length 2097152: No space left
on device

Even running the test_fsync tool on this system gives me an error 
message indicating O_SYNC isn't supported, and it promptly bails.


So I'm wondering what the heck is going on.  I've found a bunch of posts 
that indicate O_SYNC may provide some extra throughput, but nothing 
indicating that O_SYNC doesn't work.


Can anybody provide me any pointers on this?

Thanks

--Rick






Re: [GENERAL] open_sync fails

2008-07-24 Thread Rick Weber

Definitely believable.  It gives me an internal avenue to chase down.

Thanks

--Rick



Alvaro Herrera wrote:
> Rick Weber wrote:
>> While working on tuning my database, I was experimenting with changing
>> the wal_sync_method to try to find the optimal value.  The really odd
>> thing is when I switch to open_sync (O_SYNC), Postgres immediately fails
>> and gives me an error message of:
>>
>> 2008-07-22 11:22:37 UTC 19411 akamai [local] PANIC:  could not write to
>> log file 101, segment 40 at offset 12558336, length 2097152: No space
>> left on device
>
> Sounds like a kernel bug to me, particularly because the segment is most
> likely already 16 MB in length; we're only rewriting the contents, not
> enlarging it.  Perhaps the kernel wanted to report a problem and chose
> the wrong errno.




[GENERAL] optimizing import of large CSV file into partitioned table?

2010-03-28 Thread Rick Casey
After careful research, I would like to post the following problem I'm having
with the importing of a large (16Gb) CSV file. Here is a brief synopsis:
- this is running on Postgres (PG) version: PostgreSQL 8.3.9 on
i486-pc-linux-gnu, compiled by GCC gcc-4.3.real (Ubuntu 4.3.2-1ubuntu11)
4.3.2
- it is running on a Ubuntu (small) server instance at Amazon Web Services
(AWS), with a 320Gb volume mounted for the PG data directory
- the database was created using the partition example in the documentation,
with an insert trigger and a function to direct which table where records
get inserted.
(see below for code on my table and trigger creation)

After some days of attempting to import the full 16Gb CSV file, I decided to
split it up, using the split utility in Linux. This seemed to improve
things; once I had split the CSV files into roughly 10Mb files, I finally
got my first successful import of about 257,000 recs. However, this is going
to be a rather labor-intensive process for the full 16Gb file, if I
have to split it up manually and import each smaller file separately.

So, I am wondering if there is any way to optimize this process? I have been
using Postgres for several years, but have never had to partition or
optimize it for files of this size until now.
Any comments or suggestions would be most welcomed from this excellent
forum.

(I might add that I spent several weeks prior to this trying to get this to
work in MySQL, which I finally had to abandon.)

Sincerely,
Rick

Details of the code follow:

Here is the basic COPY command, which I run as the postgres user, to import
the CSV files:

COPY allcalls FROM '/data/allcalls-csv/sub3ab' WITH CSV;

Here is what some sample data look like in the files:
3153371867,2008-02-04 16:11:00,1009,1,40
2125673062,2008-02-04 16:11:00,1009,1,41
5183562377,2008-02-04 16:11:00,1009,1,50
...

Here are the basic scripts that created the partition table and insert
trigger:
CREATE TABLE allcalls (
phonenum bigint,
callstarted timestamp without time zone,
status int,
attempts int,
duration int
);
CREATE TABLE allcalls_0 (
CHECK ( phonenum < 1000000000 )
) INHERITS (allcalls);
...(repeat this 9 more times, for 10 subpartition tables)

CREATE INDEX allcalls_0_phonenum ON allcalls_0 (phonenum);
..(repeat this 9 more times, for indexes on the 10 subpartition tables)
CREATE OR REPLACE FUNCTION allcalls_insert_trigger()
RETURNS TRIGGER AS $$
BEGIN
IF ( NEW.phonenum < 1000000000 ) THEN
INSERT INTO allcalls_0 VALUES (NEW.*);
ELSIF ( NEW.phonenum >= 1000000000 AND NEW.phonenum < 2000000000 ) THEN
INSERT INTO allcalls_1 VALUES (NEW.*);
...(again, repeat for rest of the parition tables)

CREATE TRIGGER insert_phonenum_trigger
BEFORE INSERT ON allcalls
FOR EACH ROW EXECUTE PROCEDURE allcalls_insert_trigger();
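
One hedged optimization for the setup above (the file name here is
hypothetical): since the CSV is being split anyway, split it on the
phonenum ranges and COPY each piece straight into the matching child
table, bypassing the insert trigger entirely:

COPY allcalls_3 FROM '/data/allcalls-csv/range3.csv' WITH CSV;

Dropping each child's index before its load and recreating it afterward
is usually faster than maintaining the index row by row.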




Re: [GENERAL] optimizing import of large CSV file into partitioned table?

2010-03-29 Thread Rick Casey
Thanks Dim; I was not aware of pgloader. This, and the other suggestions,
have helped a lot; thanks everyone.

--rick

On Mon, Mar 29, 2010 at 7:41 AM, Dimitri Fontaine wrote:

> Rick Casey  writes:
>
> > So, I am wondering if there is any to optimize this process? I have been
> using Postgres for several years, but have never had to partition or
> optimize it for files
> > of this size until now.
> > Any comments or suggestions would be most welcomed from this excellent
> forum.
>
> The pgloader tool will import your data as batches of N lines, you get
> to say how many lines you want to consider in each transaction. Plus,
> you can have more than one python thread importing your big file, either
> sharing one writer and having the other threads doing the parsing and
> COPY, or having N independent threads doing the reading/parsing/COPY.
>
>  http://pgloader.projects.postgresql.org/
>
> Hope this helps,
> --
> dim
>



-- 
----
Rick Casey :: caseyr...@gmail.com :: 303.345.8893


[GENERAL] recovery_target_timeline and multiple slave behavior when master fails

2011-12-18 Thread Rick Pufky
79:LOG:  selected new timeline ID: 2
2011-12-15 12:08:10.078 EST2579:LOG:  archive recovery complete
2011-12-15 12:08:10.491 EST2580:LOG:  restartpoint complete: wrote 2692
buffers (87.6%); 0 transaction log file(s) added, 0 removed, 0 recycled;
write=112.051 s, sync=0.274 s, total=112.361 s; sync files=2, longest=0.273
s, average=0.137 s
2011-12-15 12:08:10.492 EST2580:LOG:  recovery restart point at
0/330BE0B0
2011-12-15 12:08:10.492 EST2580:DETAIL:  last completed transaction was
at log time 2011-12-15 12:05:28.868118-05
2011-12-15 12:08:10.493 EST2580:LOG:  checkpoint starting:
end-of-recovery immediate wait
2011-12-15 12:08:10.497 EST2580:LOG:  checkpoint complete: wrote 0
buffers (0.0%); 0 transaction log file(s) added, 0 removed, 0 recycled;
write=0.000 s, sync=0.000 s, total=0.005 s; sync files=0, longest=0.000 s,
average=0.000 s
2011-12-15 12:08:10.951 EST2651:LOG:  autovacuum launcher started
2011-12-15 12:08:10.960 EST2577:LOG:  database system is ready to
accept connections


Slave 2 was still attempting to connect to the master, so I modified its
recovery.conf file to look like this (connect to slave 1 instead of master)
and restarted pg on slave 2:
standby_mode = 'on'
primary_conninfo = 'host=192.168.56.6 port=5432 user=postgres'
trigger_file = '/pgsql/omnipitr/finish.recovery'
recovery_target_timeline = latest

Relevant log snippets from slave 2:
2011-12-15 12:05:54.992 EST2626:LOG:  recovery restart point at
0/2D0BE0B0
2011-12-15 12:05:54.992 EST2626:DETAIL:  last completed transaction was
at log time 2011-12-15 12:05:28.868118-05
2011-12-15 12:05:56.295 EST2626:LOG:  restartpoint starting: xlog
2011-12-15 12:06:05.979 EST2627:FATAL:  could not receive data from WAL
stream: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.

2011-12-15 12:06:06.191 EST2625:LOG:  unexpected pageaddr 0/28E4 in
log file 0, segment 50, offset 14942208
2011-12-15 12:06:06.218 EST2626:LOG:  restartpoint complete: wrote 185
buffers (6.0%); 0 transaction log file(s) added, 0 removed, 3 recycled;
write=9.692 s, sync=0.222 s, total=9.923 s; sync files=2, longest=0.204 s,
average=0.111 s
2011-12-15 12:06:06.218 EST2626:LOG:  recovery restart point at
0/300BE0B0
2011-12-15 12:06:06.218 EST2626:DETAIL:  last completed transaction was
at log time 2011-12-15 12:05:28.868118-05
2011-12-15 12:06:21.419 EST2652:FATAL:  could not connect to the
primary server: could not connect to server: Connection refused
Is the server running on host "192.168.56.5" and accepting
TCP/IP connections on port 5432?
... more of the same primary server connection refused error ...
2011-12-15 12:08:44.924 EST2623:LOG:  received fast shutdown request
2011-12-15 12:08:44.935 EST2623:LOG:  aborting any active transactions
2011-12-15 12:08:44.935 EST2626:LOG:  shutting down
2011-12-15 12:08:44.945 EST2626:LOG:  database system is shut down
2011-12-15 12:08:46.802 EST2737:LOG:  database system was shut down in
recovery at 2011-12-15 12:08:44 EST
2011-12-15 12:08:46.804 EST2737:LOG:  entering standby mode
2011-12-15 12:08:46.818 EST2737:LOG:  redo starts at 0/300BE0B0
2011-12-15 12:08:47.313 EST2737:LOG:  consistent recovery state reached
at 0/32E3FFF0
2011-12-15 12:08:47.313 EST2735:LOG:  database system is ready to
accept read only connections
2011-12-15 12:08:47.313 EST2737:LOG:  unexpected pageaddr 0/28E4 in
log file 0, segment 50, offset 14942208
2011-12-15 12:08:47.437 EST2740:FATAL:  timeline 2 of the primary does
not match recovery target timeline 1
2011-12-15 12:08:52.323 EST2742:FATAL:  timeline 2 of the primary does
not match recovery target timeline 1
... repeated continuously ...


Any thoughts on the above snippets? Am I interpreting the documentation
correctly? Is there any further information needed to debug this?

Thanks,
Rick


Re: [GENERAL] recovery_target_timeline and multiple slave behavior when master fails

2011-12-19 Thread Rick Pufky
Thanks for the comments. I'm not actually running with an archive directory
in this configuration (archiving is disabled), however, scp'ing the new
history file and the last WAL File from the new master allowed the other
slave to just continue replay from where it left off.

This is expected in the SR only setup configuration case?

On Sun, Dec 18, 2011 at 9:51 PM, Fujii Masao  wrote:

> On Fri, Dec 16, 2011 at 3:59 AM, Rick Pufky  wrote:
> > Any thoughts on the above snippets? Am I interpreting the documentation
> > correctly? Is there any further information needed to debug this?
>
> You need to share the archive directory between all three nodes to use that
> trick.
>
> To follow the timeline change that occurs at failover to another standby,
> the standby needs to read the timeline history file. This file is created
> and
> archived at failover by new master (i.e., another standby). This file is
> not
> shipped via replication, so the standby needs to read it from the archive.
> So you need to have the shared archive directory.
>
> Regards,
>
> --
> Fujii Masao
> NIPPON TELEGRAPH AND TELEPHONE CORPORATION
> NTT Open Source Software Center
>



-- 
Rick Pufky
OmniTI Computer Consulting Inc.
Database Administrator


[GENERAL] Full text search strategy for names

2009-04-16 Thread Rick Schumeyer
I want to be able to search a list of articles for title words as well as
author names.  I understand how to do the title words with the full text
searching.  But I'm not sure the best strategy for the names.  The full text
parser "parses" the names giving undesirable results.

For example,

select to_tsvector('claude Jones');
to_tsvector

 'jone':2 'claud':1


Is there a way to tell the parser to index the words in a column without
trying to determine word roots?

Or what is the best way to index names for efficient searching?
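
One hedged approach: a configuration with no stemming dictionary, such
as the built-in "simple" one, indexes the words verbatim (lowercased
only):

select to_tsvector('simple', 'claude Jones');
-- 'claude':1 'jones':2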


Re: [GENERAL] Full text search strategy for names

2009-04-17 Thread Rick Schumeyer
You can get extra (undesirable) results, depending on the name.  For
example, if you are searching for the last name of "Ricks", you will also
find all authors whose first name is "Rick".

I also noticed that the directions for indexing multiple columns don't seem
to be quite right.

In section 12.2.2:

UPDATE pgweb SET textsearchable_index_col =
 to_tsvector('english', coalesce(title,'') || coalesce(body,''));


I found that the last word of title is joined with the first word of body,
which gives strange results.  I ended up added a space which gave better
results:

 to_tsvector('english', coalesce(title,'') || ' ' || coalesce(body,''));


On Fri, Apr 17, 2009 at 1:04 AM, Tom Lane  wrote:

> Rick Schumeyer  writes:
> > I want to be able to search a list of articles for title words as well as
> > author names.  I understand how to do the title words with the full text
> > searching.  But I'm not sure the best strategy for the names.  The full
> text
> > parser "parses" the names giving undesirable results.
>
> > For example,
>
> > select to_tsvector('claude Jones');
> > to_tsvector
> > 
> >  'jone':2 'claud':1
>
> Er ... why is that a problem?  As long as the index stems the words in
> the same way that the query does, why should you care?
>
>regards, tom lane
>


[GENERAL] basic trigger using OLD not working?

2005-02-24 Thread Rick Casey
Hello all,
I am trying to do a simple thing: create a log history of deletes and 
updates, but I am having trouble getting it to work in PG 7.4.7 
(under Debian Linux 2.6.8).

I have reduced my code to the following trivial case:
Here is the code that creates the delete trigger:
create trigger PEDIGREES_hist_del_trig
AFTER DELETE
on PEDIGREES
EXECUTE PROCEDURE logPedigreesDel();
Here is the trigger code: (famindid is an integer field in the Pedigrees 
table):

CREATE OR REPLACE FUNCTION logPedigreesDel() RETURNS TRIGGER AS '
DECLARE
   test integer;
begin
   test := OLD.famindid;
   RAISE EXCEPTION ''OLD.famindid = '', test;
   return OLD;
end;
' LANGUAGE plpgsql;
Here is the error message returned:
psql:testphdtrig.sql:1: ERROR:  record "old" is not assigned yet
DETAIL:  The tuple structure of a not-yet-assigned record is indeterminate.
CONTEXT:  PL/pgSQL function "logpedigreesdel" line 4 at assignment
Would *really appreciate* any suggestions! This could help us decide 
whether to use Postgres for a major project...

thanks --rick


Re: [GENERAL] basic trigger using OLD not working?

2005-02-24 Thread Rick Casey
I am going to answer my own post here since I found more info in some 
threads posted here.

My real question now is: does the OLD variable work at all in plpgsql?
If not, then some major documentation, books and many posted messages 
appear to be wrong!

Here is what the PostgreSQL 7.4.6 Documentation says:
-
37.10. Trigger Procedures
PL/pgSQL can be used to define trigger procedures. A trigger procedure 
is created with the CREATE FUNCTION command, declaring it as a function 
with no arguments and a return type of trigger. Note that the function 
must be declared with no arguments even if it expects to receive 
arguments specified in CREATE TRIGGER --- trigger arguments are passed 
via TG_ARGV, as described below.

When a PL/pgSQL function is called as a trigger, several special 
variables are created automatically in the top-level block. They are:

NEW
Data type RECORD; variable holding the new database row for 
INSERT/UPDATE operations in row-level triggers. This variable is null in 
statement-level triggers.

OLD
Data type RECORD; variable holding the old database row for 
UPDATE/DELETE operations in row-level triggers. This variable is null in 
statement-level triggers.
...
-

This certainly implies that OLD and NEW can be used with plpgsql.
Does anyone know the answer to this?
Are there ways to get the OLD and NEW variables to work in plpgsql?
I am just amazed if so much documentation could be so misleading if this 
is not the case.

Regards,
Rick
Rick Casey, Research Associate
Institute for Behavioral Genetics
[EMAIL PROTECTED]
303.735.3518

Rick Casey wrote:
Hello all,
I am trying to a simple thing: create a log history of deletes, and 
updates; but which I am having trouble getting to work in PG 7.4.7 
(under Debian Linux 2.6.8).

I have reduced my code to the following trivial case:
Here is the code that creates the delete trigger:
create trigger PEDIGREES_hist_del_trig
AFTER DELETE
on PEDIGREES
EXECUTE PROCEDURE logPedigreesDel();
Here is the trigger code: (famindid is an integer field in the 
Pedigrees table):

CREATE OR REPLACE FUNCTION logPedigreesDel() RETURNS TRIGGER AS '
DECLARE
   test integer;
begin
   test := OLD.famindid;
   RAISE EXCEPTION ''OLD.famindid = '', test;
   return OLD;
end;
' LANGUAGE plpgsql;
Here is the error message returned:
psql:testphdtrig.sql:1: ERROR:  record "old" is not assigned yet
DETAIL:  The tuple structure of a not-yet-assigned record is 
indeterminate.
CONTEXT:  PL/pgSQL function "logpedigreesdel" line 4 at assignment

Would *really appreciate* any suggestions! This could help us decide 
whether to PostGres for a major project...

thanks --rick


Re: [GENERAL] basic trigger using OLD not working?

2005-02-24 Thread Rick Casey
Thanks much! That met with partial success; but getting closer.

The error message about OLD went away (thankfully!), but there is still
no data from the OLD variable. Here is the code again:

create trigger PEDIGREES_hist_del_trig
AFTER DELETE
on PEDIGREES
FOR EACH ROW
EXECUTE PROCEDURE logPedigreesDel();


CREATE OR REPLACE FUNCTION logPedigreesDel() RETURNS TRIGGER AS '
begin
    RAISE EXCEPTION ''OLD.famindid = '', OLD.famindid;
    return OLD;
end;
' LANGUAGE plpgsql;

Which when it fires results in:
cgb_lab_data=# \i testphdtrig.sql
psql:testphdtrig.sql:1: ERROR:  OLD.famindid =

Regards,
rick



Tom Lane wrote:

Rick Casey <[EMAIL PROTECTED]> writes:

Here is the code that creates the delete trigger:
create trigger PEDIGREES_hist_del_trig
AFTER DELETE
on PEDIGREES
EXECUTE PROCEDURE logPedigreesDel();

I think you forgot FOR EACH ROW.  By default, the above creates a
STATEMENT trigger, in which you don't have access to individual rows.

			regards, tom lane

[GENERAL] Is this correct behavior for ON DELETE rule?

2005-02-25 Thread Rick Schumeyer
I have two related tables, "item" and "book".  I have defined a view,
"bookview" that contains fields from item and book.

My goal was to have all inserts, updates, and deletes performed on
bookview rather than on the tables directly.  I was able to do this
with ON INSERT and ON UPDATE rules easily.

I am having trouble with the ON DELETE rule.  When a row is deleted
from bookview, the appropriate row should be deleted from item and
from book.  The example below only deletes the row from book.

Is this expected behavior, and if so, what is the right way to do
this?  At the moment I have defined an ON DELETE rule on item which
works.  But I would prefer if this could be done on the view.

Thanks for any help.

--
drop table book cascade;
drop table item cascade;

-- "parent" table
create table item
  (id    serial primary key,
   type  varchar(8),
   title varchar(20)
);

-- "child" table
create table book
  (id     integer references item primary key,
   title  varchar(20),
   author varchar(20)
);

-- combine stuff from item and book tables
create view bookview as
  select i.id, b.title, b.author from item i, book b
  where i.id=b.id;

-- insert to item and book instead of bookview
create rule bookviewins as on insert to bookview do instead (
  insert into item (type, title)
  values ('book', new.title);
  insert into book (id, title, author)
  values (currval('item_id_seq'), new.title, new.author);
);

-- delete to item and book instead of bookview
create rule bookviewdel as on delete to bookview do instead (
  delete from book where id=old.id;
  delete from item where id=old.id;
);

-- everyone has access to bookview
grant all on bookview to public;

insert into bookview (title, author) values ('Dune','Herbert');
insert into bookview (title, author) values ('Hobbit','Tolkein');

select * from bookview;

delete from bookview where author='Tolkein';
-- "DELETE 0"

select * from bookview;
-- looks correct

select * from item;
-- shows both books

select * from book;
-- looks correct


Re: [GENERAL] basic trigger using OLD not working?

2005-02-25 Thread Rick . Casey
Yes, thank you, I corrected my function from statement level to row level.
This did get rid of the error message. However, I still get no output from
an OLD variable that should contain data: see the test variable in the
simple case below.

How else can I test OLD variables? This is the simplest test case I can
think of. Any suggestions would be appreciated!

Thanks,
Rick

> I think you have created a statement level trigger (If they existed in
> 7.4.7...)  by not including FOR EACH ROW in your create statement.  In
> statement level triggers, there is no OLD or NEW.
>
>>>> Rick Casey <[EMAIL PROTECTED]> 02/24/05 1:22 PM >>>
> Hello all,
>
> I am trying to a simple thing: create a log history of deletes, and
> updates; but which I am having trouble getting to work in PG 7.4.7
> (under Debian Linux 2.6.8).
>
> I have reduced my code to the following trivial case:
>
> Here is the code that creates the delete trigger:
> create trigger PEDIGREES_hist_del_trig
> AFTER DELETE
> on PEDIGREES
> EXECUTE PROCEDURE logPedigreesDel();
>
>
> Here is the trigger code: (famindid is an integer field in the Pedigrees
>
> table):
>
> CREATE OR REPLACE FUNCTION logPedigreesDel() RETURNS TRIGGER AS '
> DECLARE
> test integer;
> begin
> test := OLD.famindid;
> RAISE EXCEPTION ''OLD.famindid = '', test;
> return OLD;
> end;
> ' LANGUAGE plpgsql;
>
>
> Here is the error message returned:
> psql:testphdtrig.sql:1: ERROR:  record "old" is not assigned yet
> DETAIL:  The tuple structure of a not-yet-assigned record is
> indeterminate.
> CONTEXT:  PL/pgSQL function "logpedigreesdel" line 4 at assignment
>
> Would *really appreciate* any suggestions! This could help us decide
> whether to PostGres for a major project...
>
> thanks --rick
>
>




Re: [GENERAL] Is this correct behavior for ON DELETE rule?

2005-02-25 Thread Rick Schumeyer

I suspected that might be part of the answer.

Would some combination of triggers work instead?  I've played
with those too, but without success.

> 
> This is an ancient gotcha: as soon as you delete the book row, there is
> no longer any such entry in the bookview view ... and "old.id" is
> effectively a reference to the bookview view, so the second delete
> finds no matching rows.
> 




Re: [GENERAL] Is this correct behavior for ON DELETE rule?

2005-02-25 Thread Rick Schumeyer
I tried that, but I get a "...violates foreign-key constraint" error.

> -Original Message-
> From: [EMAIL PROTECTED] [mailto:pgsql-general-
> [EMAIL PROTECTED] On Behalf Of Bruce Momjian
> Sent: Friday, February 25, 2005 6:23 PM
> To: Rick Schumeyer
> Cc: 'PgSql General'
> Subject: Re: [GENERAL] Is this correct behavior for ON DELETE rule?
> 
> 
> Uh, because of your REFERENCES clause you have to delete from 'item'
> first, then 'book':
> 
> > -- delete to item and book instead of bookview
> > create rule bookviewdel as on delete to bookview do instead (
> >   delete from item where id=old.id;
> >   delete from book where id=old.id;
> > );
> 
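
A hedged way out of this ordering trap, reusing the schema above:
declare the foreign key with ON DELETE CASCADE, so the rule only has to
touch item, and the bookview row is still visible when that single
statement runs:

create table book
  (id     integer primary key references item on delete cascade,
   title  varchar(20),
   author varchar(20)
);

create rule bookviewdel as on delete to bookview do instead
  delete from item where id=old.id;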




Re: [GENERAL] Newbie: help with FUNCTION

2005-02-25 Thread Rick Apichairuk
> > On Thu, Feb 24, 2005 at 09:00:46PM +0000, Charl Gerber wrote:
> > I'm trying to create  a function that takes 1
> > paramater (eg an integer) as input, then does 5
> > database updates or deletes in 5 different SQL
> > statements and returns 5 integers (in one resultset)
> > indicating how many rows were affected by the various
> > updates/deletes.
> >
> > How do I do this?

You could return an array like elein recommended

> > How can I specify the names of the 5 output colums?

but you won't have "names" associated with the output columns. To have
something other than integers as the index (such as a string), you
need to use an associative array.

sub foo
{
my $integer = shift;

my $ret1 = $dbh->do('some sql here');

my $ret2 = $dbh->do('some sql here');

# etc for 5 statements.

my %hash = (
'Column Label 1' => $ret1,
'Column Label 2' => $ret2,
#. etc...
);

 return \%hash;
}

So, you should read about associative arrays (aka hashes) and references.
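
A hedged plpgsql alternative for 8.1 or later (the table names and
statements here are invented): OUT parameters name the result columns,
and GET DIAGNOSTICS captures each affected-row count.

CREATE OR REPLACE FUNCTION do_updates(p_id integer,
                                      OUT rows_updated integer,
                                      OUT rows_deleted integer) AS '
BEGIN
    UPDATE t1 SET flag = true WHERE id = p_id;
    GET DIAGNOSTICS rows_updated = ROW_COUNT;
    DELETE FROM t2 WHERE ref_id = p_id;
    GET DIAGNOSTICS rows_deleted = ROW_COUNT;
END;
' LANGUAGE plpgsql;

select * from do_updates(42);   -- one row: rows_updated | rows_deleted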

Rick



Re: [GENERAL] basic trigger using OLD not working? >>THANKS!

2005-02-28 Thread Rick Casey
Hey, thanks to everyone who replied to my questions: problem solved!
I needed to: 1) do BEFORE DELETE to see the OLD variables, and 2) use a 
placeholder in my format string in the RAISE EXCEPTION/NOTICE statement.

Open source newsgroups rock!
--rick
Rick Casey, Research Associate
Institute for Behavioral Genetics
[EMAIL PROTECTED]
303.735.3518

Sven Willenberger wrote:

[EMAIL PROTECTED] presumably uttered the following on 02/25/05 
19:14:

Yes, thank you, I corrected my function from statement level to row 
level.
This did get rid of the error message. However, I still get no output 
from
an OLD variable that should contain data: see the test variable in the
simple case below.

How else can I test OLD variables? This is the simplest test case I can
think of. Any suggestions would be appreciated!
Thanks,
Rick

I think you have created a statement level trigger (If they existed in
7.4.7...)  by not including FOR EACH ROW in your create statement.  In
statement level triggers, there is no OLD or NEW.

Rick Casey <[EMAIL PROTECTED]> 02/24/05 1:22 PM >>>

Hello all,
I am trying to a simple thing: create a log history of deletes, and
updates; but which I am having trouble getting to work in PG 7.4.7
(under Debian Linux 2.6.8).
I have reduced my code to the following trivial case:
Here is the code that creates the delete trigger:
create trigger PEDIGREES_hist_del_trig
AFTER DELETE
on PEDIGREES
EXECUTE PROCEDURE logPedigreesDel();
Here is the trigger code: (famindid is an integer field in the 
Pedigrees

table):
CREATE OR REPLACE FUNCTION logPedigreesDel() RETURNS TRIGGER AS '
DECLARE
   test integer;
begin
   test := OLD.famindid;
   RAISE EXCEPTION ''OLD.famindid = '', test;
   return OLD;
end;
' LANGUAGE plpgsql;


Need a place holder for your variable in your RAISE expression (like a 
printf syntax):

RAISE EXCEPTION ''OLD.famindid = %'', test;
btw, if you just want to see the variable without having your function 
bail on you, try RAISE NOTICE ''OLD.famindid = %'', test;

Sven



[GENERAL] postgresql vs mysql performance comparison

2005-03-07 Thread Rick Schumeyer
I'm interested in comparing the performance of postgresql and mysql
on various combinations of user loads and database sizes.  I have seen
a few statements to the effect of "mysql is faster for small, low use
applications, but postgresql is better for bigger stuff".  I would like
to run some independent tests to see if this is true.

Before I do this, has anyone done this already, with the latest
versions of both?  I think I've seen some comparisons that are
somewhat dated.

Does anyone have any pointers on what to do or not do?  Or
would anyone be willing to comment on an experiment plan
and suggest improvements?  Any help or references are appreciated.


Re: [GENERAL] postgresql vs mysql performance comparison

2005-03-08 Thread Rick Casey
This will not answer your question, but documents some of the evidence 
for you:

http://www.geocities.com/mailsoftware42/db/
Rick Casey, Research Associate
Institute for Behavioral Genetics
[EMAIL PROTECTED]
303.735.3518

Rick Schumeyer wrote:
I'm interested in comparing the performance of postgresql and mysql
on various combinations of user loads and database sizes. I have seen
a few statements to the effect of "mysql is faster for small, low use
applications, but postgresql is better for bigger stuff". I would like
to run some independent tests to see if this is true.
Before I do this, has anyone done this already, with the latest
versions of both? I think I've seen some comparisons that are
somewhat dated.
Does anyone have any pointers on what to do or not do? Or
would anyone be willing to comment on an experiment plan
and suggest improvements? Any help or references are appreciated.


[GENERAL] preliminary performance comparison pgsql vs mysql

2005-03-14 Thread Rick Schumeyer
Below are some PRELIMINARY results in comparing the performance of pgsql and
mysql.

These results are for a single process populating a table with 934k rows,
and then performing some selects.  I also compared the effect of creating 
indexes on some of the columns.

I have not yet done any testing of transactions, multiple concurrent
processes, etc.

I did not make any changes to the default config settings.  I can do
so if someone has some suggestions.

My machine is a 3.0 GHz P4 with 1 GB ram, running FC 3.

I used pg 8.0.1 and mysql 5.0.2 alpha.

I compiled pg from source, but I downloaded a binary for mysql.  If 
someone thinks this is significant, and can point me to a good
binary for pg, I will give it a try.

All timings are as reported by the db.

I included the pg script below..

Finally, I don't have an axe to grind either way.  I'm trying to be
fair, but I am the first to admit I am not an expert in db tuning.
I welcome constructive comments and advice.

 data and queries

The data comes from some experimental data I have been working with.

I load data into a table with relevant columns fid, rid, x.
The combination of fid,rid is unique.
x is a location, and is not unique.

I loaded the data using COPY (pg) and LOAD (mysql).

The queries were:

select count(*) from data where fid=2 and rid=6; count = 100
select count(*) from data where x > 5000 and x < 5500;   count = 35986
select count(*) from data where x > 5000 and x < 5020;   count = 1525

* preliminary conclusions

As suspected, MyISAM is very fast.  In the tested case (only
one process, only one big load and some selects) MyISAM tables are
much faster than pg or InnoDB.

For queries, InnoDB and pg are roughly equivalent.  In some cases
one or the other is a little faster, but they are mostly in the
same ballpark.  The one exception seems to be that pg has an edge
in seq scans.

pg is slower loading data when it has to create an index.  Also,
I found that it is critical to run "vacuum analyze" in pg.  Running
"analyze" in mysql did not seem to make much difference.  I'm guessing
that mysql builds statistics while it is loading data, and does not
actually run an analyze since the table has not changed.

*** preliminary results ***
*** all times in seconds ***

note: input table has 934500 rows.

                                             mysql 5.0.2 alpha
                                  PG 8.0.1   MyISAM   InnoDB
            NO INDEXES
Load file                         22.3       3.9      22.1
select count fid=?,rid=?          3.0        0.23     2.07
select count x > 5000, x < 5500   1.2        0.27     1.59
select count x > 5000, x < 5020   0.63       0.29     1.58

            INDEXES on (fid,rid)
Load file                         36.        13.5     30.1
vacuum analyze                    3.6
select count fid=?,rid=?          0.0        0.00     0.02
select count x > 5000, x < 5500   0.702      0.29     2.07
select count x > 5000, x < 5020   0.713      0.28     1.59

            INDEXES on (fid,rid) and (x)
Load file                         202.       24.      151.
vacuum analyze                    11.
select count fid=?,rid=?          0.002      0.00     0.02
select count x > 5000, x < 5500   0.9        0.06     0.75
select count x > 5000, x < 5020   0.048      0.01     0.01

* PG-SQL script 

\timing

--
-- Load table, no indexes
--

drop table data cascade;

create table data (
fid integer,
rid integer,
range   real,
x   real,
y   real,
z   real,
bs  real,
snr real,
rvelreal,
    cfarsmallint);

COPY data (fid,rid,range,snr,bs,rvel,cfar,x,y,z) FROM
'/home/rick/bed/data/data.dat';
select count(*) from data where fid=2 and rid=6;
select count(*) from data where x > 5000 and x < 5500;
select count(*) from data where x > 5000 and x < 5020;

--
-- Load table, index on (fid,rid)
--

drop table data cascade;

create table data (
fid integer,
rid integer,
range   real,
x   real,
y   real,
z   real,
bs  real,
snr real,
rvelreal,
cfarsmallint);

create index fidrid_data on data (fid,rid);
COPY data (fid,rid,range,snr,bs,rvel,cfar,x,y,z) FROM
'/home/rick/bed/data/data.dat';
vacuum analyze data;
select count(*) from data where fid=2 and rid=6;
select count(*) from data where x > 5000 and x < 5500;
select count(*) from data where x > 5000 and x < 5020;

--
-- Load table, index on (fid,rid) and (x)
--

drop table data cascade;

create table data 

Re: [GENERAL] preliminary performance comparison pgsql vs mysql

2005-03-14 Thread Rick Schumeyer
That site produces some sort of php error.

I don't suppose this information is available elsewhere?

> Stop now. I've not looked at your test results, and frankly there is no
> point. As it ships, PG should run fine on a small corner of an old
> laptop. It will not perform well with any sort of serious workload on
> any sort of serious hardware. You're wasting your time if you want to
> get any sort of meaningful result.
>
> Take 30 minutes to read through the article below. It covers the basics
> of how to manage your configuration settings.
>http://www.powerpostgresql.com/PerfList
>




[GENERAL] performance: pg vs pg!

2005-03-18 Thread Rick Schumeyer
At the suggestion of several people, I have increased the default
settings in postgresql.conf before continuing my postgresql vs mysql
performance tests.

To date, I have only been loading a (roughly) million-row file,
creating indexes during the load, running a vacuum analyze, and a
couple of simple queries.

I do intend on performing more complicated tests, but I did not want
to do that until people stopped telling me my methodology for simple
tests was...flawed.

I ran a thorough series of tests, varying shared_buffers from 1000 to
9000, work_mem from 1 to 9 and maintenance_work_mem from 1 to 9.
The complete results are long (I will provide them if anyone is
interested) so I am only including a small selection.

Before I do that, I will confess that increasing memory made more of
a difference than I thought it would.  I know many of you are thinking
"no kidding" but I thought it would only be important for big
complicated queries, or a server with multiple concurrent requests.
No, it makes a big difference for "merely" loading a million rows and
indexing them.

                                        Time in seconds
shared_buffers  work_mem  m_work_mem    COPY      VACUUM
 1000           1         1             186.154   9.814
 3000           1         1             64.404    4.526
 5000           5         5             65.036    3.435
 9000           9         9             63.664    2.218

-- The relevant commands
create table data (
  fid   integer,
  rid   integer,
  range real,
  x     real,
  y     real,
  z     real,
  bs    real,
  snr   real,
  rvel  real,
  cfar  smallint);

create index fidrid_data on data (fid,rid);
create index fidx on data (x);

-- COPY a table with 934500 rows
COPY data (fid,rid,range,snr,bs,rvel,cfar,x,y,z)
FROM '/home/rick/bed/data/data.dat';

-- VACUUM
vacuum analyze data;
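
For reference, a hedged note on the units involved (in 8.0 these are
raw numbers, not byte sizes; the values below are only examples):

shared_buffers = 9000          # counts 8 kB buffers, so about 70 MB
work_mem = 90000               # kB, per sort or hash
maintenance_work_mem = 90000   # kB, used by CREATE INDEX and VACUUM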

[GENERAL] tsearch2 installation question

2005-03-23 Thread Rick Schumeyer
I'm trying to install the version of tsearch2 distributed with pg
8.0.1.  "make" and "make install" run with no apparent problems, but
"make installcheck" fails.

Looking at the diffs, I would guess that the differences are
inconsequential.  But I would like to be sure before proceeding.

Is this a known issue, or do I really have a problem?  If I really
have a problem, any suggestions?

==== output of "diff results/tsearch2.out expected/tsearch2.out" ====

[The diff is mangled in the archive; what survives (hunks at lines
2092-2146 of tsearch2.out) shows only differing column widths and
dashed underlines around the "headline" header, i.e. whitespace-only
differences.]


Re: [GENERAL] Oracle buys Innobase

2005-10-09 Thread Rick Morris

Chris Browne wrote:

[EMAIL PROTECTED] ("Uwe C. Schroeder") writes:


On Saturday 08 October 2005 21:07, Chris Browne wrote:


2.  The code base was pretty old, pretty creaky, and has a *really*
heavy learning curve.

It was pretty famous as being *really* difficult to build; throw
together such things as:
 - It uses a custom set of build tools that were created for a
   mainframe environment and sorta hacked into Python
 - Naming conventions for files, variables, and functions combine
   pseudo-German with an affinity for 8 character names that are
   anything but mnemonic.  (Think: "Germans developing on MVS.")
 - I seem to recall there being a Pascal translator to transform
   some of the code into C++...


WOW - careful now. I'm german - but then, there's a reason why I
immigrated to the US :-)



I'm 1/4 German, and a couple brothers married German girls, so I'm not
trying to be mean, by any stretch.

The bad Procrustean part is the "8 character mainframe" aspect, as it
takes things that might have been mnemonic, at least to those knowing
German, and distills things down in size so as to lose even that.

It truly *was* Germans developing on MVS (or TSO or OS/360 or such)...



Doing substantial revisions to it seems unlikely.  Doing terribly
much more than trying to keep it able to compile on a few
platforms of interest seems unlikely.

When they announced at OSCON that MySQL 5.0 would have all of the
features essential to support SAP R/3, that fit the best theories
available as to why they took on "MaxDB", namely to figure out the
minimal set of additions needed to get MySQL to be able to host R/3.

If that be the case, then Oracle just took about the minimal action
necessary to take the wind out of their sails :-).


SAPdb (aka Adabas D) is something I worked with quite a while ago. And you're 
right, the naming schemes and restrictions, as well as severe 
incompatibilities with the SQL standard where one of my major reasons to drop 
that database in favor of Informix (at that time) and PostgreSQL later on.
It was kind of tough to generate explanatory table names with those kind of 
limitations. Nonetheless back then (maybe around 1993) Adabas D was a quite 
powerful and considerably cheap alternative to anything serious at the market 
- and it was easy to sell to customers (back in germany) just because this 
was THE database powering SAP R/3.



And SAP R/3 has its own "8 character mainframe limits," often
involving somewhat Germanic things, abbreviated :-).



But you may be right - considering what the codebase of SAPdb must
look like it's probably unlikely MySQL AB can make any considerable
improvements in the time available.



When Slashdot sorts of people propose "Oh, that can just be another
storage engine!", well, I'll believe it if I see someone implement the
refactoring.

In one of the recent discussions, someone proposed the thought of
MySQL AB adopting the PostgreSQL storage engine as Yet Another One Of
Their Engines.  Hands up, anyone that thinks that's likely tomorrow
:-).

What would seem interesting to me would be the idea of building a
PostgreSQL front end for "Tutorial D" as an alternative to SQL.  I
don't imagine that will be happening tomorrow, either.  :-)


But much more interesting to consider, indeed.



Re: [GENERAL] Oracle buys Innobase

2005-10-09 Thread Rick Morris

Marc G. Fournier wrote:

Stupid question, but what does MySQL bring to the equation?


MySQL brings to the table an impressive AI interface that knows what you
really meant to do and thus does away with those pesky error messages.

After all, who wants to be told that 0000-00-00 is not a date, or that
you tried to insert a value of 7 into a SMALLINT column?

Why not just use PostgreSQL in the first place?

On Sun, 9 Oct 2005, CSN wrote:


Look what somebody suggested!

---

If the worst happens and Oracle tries to squash
InnoDB, there may already be such an alternative out
there.

I wonder what it would take to add (and optimize)
Postgres storage engine support to MySQL? I don't know
exactly how current versions of MySQL and Postgres
maesure up performance-wise, but PgSQL seems to have
made steady progress on performance improvements.

Maybe this is a crazy idea, I don't know how
technically or legally feasible it is, but I really
like the idea of the two open-source communities
uniting to battle Oracle.

http://jeremy.zawodny.com/blog/archives/005490.html#comment-21233








Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664









[GENERAL] problem converting from 8.0.4 to 8.1beta3: character encoding

2005-10-14 Thread Rick Schumeyer
I have a database in pg 8.0.4 that is encoded as UNICODE.  Somehow,
some "strange" characters made it in there.  Mostly, several
instances of ' (apostrophe) are really some extended character.  They
display in a text editor as \222 (or something similar).  I'm not
sure how that happened, but they display properly in a web
application; in psql they seem to be ignored.

In any event, if I pg_dump the database and attempt to load it into
8.1beta3, I get an error for each of these characters.  The beta3
database is encoded in UTF8, which I understand is the same as
UNICODE.  Perhaps this is incorrect?

I'm not familiar with the character encoding issues, so any help is
appreciated!
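
A hedged recovery sketch, assuming the stray \222 bytes turn out to be
Windows-1252 curly quotes and the dump is consistently in that
encoding: re-encode the dump before restoring it.

iconv -f WINDOWS-1252 -t UTF-8 olddb.dump > olddb-utf8.dump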


Re: [GENERAL] Anybody using PostGIS?

2005-10-17 Thread Rick Morris

CSN wrote:

I've been meaning to try out PostGIS and see what it
is capable of. Is anybody using it? Do you have
accompanying URLs?


Yes: www.anisite.com (see http://www.anisite.com/state.php?state=FL). I 
am using it to serve dynamic layers for UMN Mapserver. Setup was fairly 
easy, and it chugs along great even on a low-end Sempron with 384 MB 
RAM. Provides a nice way to get interactive info into a mapping system.


Regards,

Rick



Thanks,
CSN











Re: 'a' == 'a ' (Was: RE: [pgsql-advocacy] [GENERAL] Oracle buys Innobase)

2005-10-20 Thread Rick Morris

Richard Huxton wrote:

Dann Corbit wrote:



I can see plenty of harm and absolutely no return.  We are talking about
blank padding before comparison.  Do you really want 'Danniel '
considered distinct from 'Danniel  ' in a comparison?  In real life,
what does that buy you?



100% YES!

If two values are the same, then any function on those two values should 
return the same result. Otherwise what does "equals" mean? At the very 
least length() is broken by your argument.


I agree completely. I would much rather be precise than intuitive. And, 
I have done applications where the padding length was important, 
especially when working with remote batch processing in the credit 
industries and the like. Writing queries to create and process 
fixed-width batch files is much easier if you can rely on these kinds of 
behaviors.
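
A hedged demo of the behavior being argued over, as Postgres
implements it:

select 'Danniel '::char(10) = 'Danniel  '::char(10);  -- true: trailing blanks ignored
select 'Danniel '::varchar  = 'Danniel  '::varchar;   -- false: trailing blanks count
select length('Danniel  '::char(10));                 -- 7: padding stripped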




Here it's CHAR that's broken IMHO - spawn of some punch-card spawned 
data processing rules of the 70s.

--
  Richard Huxton
  Archonet Ltd








Re: [GENERAL] Why database is corrupted after re-booting

2005-10-27 Thread Rick Ellis
In article <[EMAIL PROTECTED]>,
Welty, Richard <[EMAIL PROTECTED]> wrote:

>crappy disk drives and bad windows file systems, nothing more.

Could even be crappy memory.

--
http://yosemitecampsites.com/



[GENERAL] tsearch2: more than one index per table?

2005-11-22 Thread Rick Schumeyer
Is there something in tsearch2 that prevents more than one index per
table?  I would like an index on field A, and a separate index on
field B.  The index builds fine for A, but gives an error for B.  The
error text is

ERROR:  could not find tsearch config by locale

The code below is taken almost verbatim from the tsearch2
documentation.

Any help is appreciated!

\i /home/rick/ftp/postgresql-8.1.0/contrib/tsearch2/tsearch2.sql

CREATE TABLE t (a varchar(20), b varchar(20));
INSERT INTO t (a,b) VALUES ('hello world','quick brown fox');

--
-- A
--

ALTER TABLE t ADD COLUMN idxA tsvector;
UPDATE t SET idxA=to_tsvector('default', a);
VACUUM FULL ANALYZE;
CREATE INDEX idxA_idx ON t USING gist(idxA);
VACUUM FULL ANALYZE;

CREATE TRIGGER ts_A_Update BEFORE UPDATE OR INSERT ON t
FOR EACH ROW EXECUTE PROCEDURE tsearch2(idxA, a);

--
-- B
--

ALTER TABLE t ADD COLUMN idxB tsvector;

--
-- The next line gives: ERROR:  could not find tsearch config by locale
--
UPDATE t SET idxB=to_tsvector('default', b);

VACUUM FULL ANALYZE;
CREATE INDEX idxB_idx ON t USING gist(idxB);
VACUUM FULL ANALYZE;

CREATE TRIGGER ts_B_Update BEFORE UPDATE OR INSERT ON t
FOR EACH ROW EXECUTE PROCEDURE tsearch2(idxB, b);


Re: [GENERAL] tsearch2: more than one index per table?

2005-11-23 Thread Rick Schumeyer
I apologize if I'm being dense, but I'm not completely following the
explanation.  It is true that my pg_ts_cfg.locale is set to en_US.UTF-8.

It was my understanding that specifying "default" as in

UPDATE t SET idxB=to_tsvector('default', b);

should give tsearch enough information.

It is not clear to me why the first time works, but not the second time
with almost identical statements.

I thought that I only had to follow the procedure in the docs if I want
to do the following:

UPDATE t SET idxB=to_tsvector(b); -- no 'default'

Perhaps I am wrong about this?

> -Original Message-
> From: [EMAIL PROTECTED] [mailto:pgsql-general-
> [EMAIL PROTECTED] On Behalf Of Andrew J. Kopciuch
> Sent: Wednesday, November 23, 2005 12:08 AM
> To: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] tsearch2: more than one index per table?
> 
> On Tuesday 22 November 2005 21:45, Rick Schumeyer wrote:
> > Is there something in tsearch2 that prevents more than one index per
> table?
> >
> > I would like an index on field A, and a separate index on field B.
> >
> > The index builds fine for A, but gives an error for B.  The error text
> is
> >
> >
> >
> > ERROR:  could not find tsearch config by locale
> >
> >
> 
> 
> This is not a problem with the index creation ... your tsearch2
> installation
> is not configured for the locale your server is running.
> 
> http://www.sai.msu.su/~megera/postgres/gist/tsearch/V2/docs/tsearch-V2-
> intro.html
> 
> See the section "TSEARCH2 CONFIGURATION".  It explains, and has examples
> on
> how to set this up.
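
A hedged sketch of the fix those docs describe: map the server's locale
onto an existing configuration in tsearch2's pg_ts_cfg table, e.g.:

UPDATE pg_ts_cfg SET locale = 'en_US.UTF-8' WHERE ts_name = 'default';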




[GENERAL] pg web hosting with tsearch2?

2006-09-29 Thread Rick Schumeyer
I hope pg-general is the correct forum for this question...if not
please let me know the correct location.

I have a pg application that uses tsearch2.  I would like to move
this application off my local machine and onto a web host somewhere.
I have some questions regarding this:

1) What is the preferred postgresql text searching tool these days?
   Is it still tsearch2?

2) Can someone suggest a web host service that includes tsearch2 (or
   an equivalent text searching component)?

3) All the web hosts I am aware of are still offering only pg 7.4.
   Does anybody offer pg 8.x ?


Re: [GENERAL] pg web hosting with tsearch2?

2006-09-29 Thread Rick Schumeyer
I guess I should have mentioned this initially...I also need a web host that
offers that other database (my***).  We are using the Joomla content
management system which only works with my***.  Although, I'm not as picky
about which version of my*** is offered.

I'm examining that commandprompt...my initial guess is they do not support
my*** ?

-Original Message-
From: Joshua D. Drake [mailto:[EMAIL PROTECTED] 
Sent: Friday, September 29, 2006 5:05 PM
To: Rick Schumeyer
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] pg web hosting with tsearch2?

Rick Schumeyer wrote:
> I hope pg-general is the correct forum for this question.if not please let
> me know the correct location.
> 
>  
> 
> I have a pg application that uses tsearch2.  I would like to move this
> application off my local machine and onto a web host somewhere.  I have
some
> questions regarding this:
> 
>  
> 
> 1)   What is the preferred postgresql text searching tool these days?
> Is it still tsearch2?


Yes and pg_trgm (similar but different)

> 
>  
> 
> 2)   Can someone suggest a web host service that includes tsearch2 (or
> an equivalent text searching component)?

www.commandprompt.com

> 
>  
> 
> 3) All the web hosts I am aware of are still offering only pg 7.4.
> Does anybody offer pg 8.x ?

8.0.x and 8.1.x only.

Joshua D. Drake



> 
>  
> 
>  
> 
> 


-- 

   === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
   Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/






[GENERAL] hardware failure - data recovery

2006-10-18 Thread Rick Gigger
To make a long story short, let's just say that I had a bit of a
hardware failure recently.

If I got an error like this when trying to dump a db from the mangled
data directory, is it safe to say it's totally hosed or is there some
chance of recovery?


pg_dump: ERROR:  could not open relation 1663/18392/18400: No such  
file or directory
pg_dump: SQL command to dump the contents of table "file" failed:  
PQendcopy() failed.
pg_dump: Error message from server: ERROR:  could not open relation  
1663/18392/18400: No such file or directory
pg_dump: The command was: COPY public.file (vfs_id, vfs_type,  
vfs_path, vfs_name, vfs_modified, vfs_owner, vfs_data) TO stdout;


Thanks,

Rick



Re: [GENERAL] hardware failure - data recovery

2006-10-18 Thread Rick Gigger

Ron Johnson wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 10/18/06 19:57, Rick Gigger wrote:

To make a long story short lets just say that I had a bit of a hardware
failure recently.

If I got an error like this when trying to dump a db from the mangled
data directory is it safe to say it's totally hosed or is there some
chance of recovery?

pg_dump: ERROR:  could not open relation 1663/18392/18400: No such file
or directory
pg_dump: SQL command to dump the contents of table "file" failed:
PQendcopy() failed.
pg_dump: Error message from server: ERROR:  could not open relation
1663/18392/18400: No such file or directory
pg_dump: The command was: COPY public.file (vfs_id, vfs_type, vfs_path,
vfs_name, vfs_modified, vfs_owner, vfs_data) TO stdout;


What happens when you fsck the relevant partitions?


Errors about a bunch of duplicate inodes, missing inodes, etc.  Should I 
do it again and get some of the exact text for you?




Re: [GENERAL] hardware failure - data recovery

2006-10-18 Thread Rick Gigger

Rick Gigger wrote:

Ron Johnson wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 10/18/06 19:57, Rick Gigger wrote:

To make a long story short lets just say that I had a bit of a hardware
failure recently.

If I got an error like this when trying to dump a db from the mangled
data directory is it safe to say it's totally hosed or is there some
chance of recovery?

pg_dump: ERROR:  could not open relation 1663/18392/18400: No such file
or directory
pg_dump: SQL command to dump the contents of table "file" failed:
PQendcopy() failed.
pg_dump: Error message from server: ERROR:  could not open relation
1663/18392/18400: No such file or directory
pg_dump: The command was: COPY public.file (vfs_id, vfs_type, vfs_path,
vfs_name, vfs_modified, vfs_owner, vfs_data) TO stdout;


What happens when you fsck the relevant partitions?


Errors about a bunch of duplicate inodes, missing inodes, etc.  Should I 
do it again and get some of the exact text for you?


Also this is an example of the type of errors that were being logged 
before it died:


LOG:  checkpoint record is at 26/41570488
LOG:  redo record is at 26/41570488; undo record is at 0/0; shutdown TRUE





Re: [GENERAL] hardware failure - data recovery

2006-10-18 Thread Rick Gigger

Ron Johnson wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 10/18/06 23:52, Rick Gigger wrote:

Rick Gigger wrote:

Ron Johnson wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 10/18/06 19:57, Rick Gigger wrote:

To make a long story short lets just say that I had a bit of a hardware
failure recently.

If I got an error like this when trying to dump a db from the mangled
data directory is it safe to say it's totally hosed or is there some
chance of recovery?

pg_dump: ERROR:  could not open relation 1663/18392/18400: No such file
or directory
pg_dump: SQL command to dump the contents of table "file" failed:
PQendcopy() failed.
pg_dump: Error message from server: ERROR:  could not open relation
1663/18392/18400: No such file or directory
pg_dump: The command was: COPY public.file (vfs_id, vfs_type, vfs_path,
vfs_name, vfs_modified, vfs_owner, vfs_data) TO stdout;

What happens when you fsck the relevant partitions?

Errors about a bunch of duplicate inodes, missing inodes, etc.  Should
I do it again and get some of the exact text for you?

Also this is an example of the type of errors that were being logged
before it died:

LOG:  checkpoint record is at 26/41570488
LOG:  redo record is at 26/41570488; undo record is at 0/0; shutdown TRUE


What does Google say about these error messages and your fs?


Not much that is useful.  I think this is a little beyond that scope.  A 
hardware failure basically left the fs and the db in an inconsistent 
state.  There is one table in one database that has a bunch of data in 
it that I need to get out.  I'm guessing I'm going to need to find 
someone who understands the internal structure of the files to go in 
and pull out whatever data is still intact.


I have been poking around and as far as I can tell, although one of the 
toast indexes is gone, the actual table files appear to be intact.  That 
is, they are still in the file system.  I don't know if they are ok 
internally.


I also get this error when trying to access the non-toasted data:

ERROR:  could not access status of transaction 307904873
DETAIL:  could not open file "pg_clog/0125": No such file or directory

I'm guessing that this means that I may have to get someone to pull out all 
versions of a given tuple because I have lost some of the visibility 
info.  This shouldn't matter as most likely very few tuples would have 
had more than one version when the system went down.


I just hope that the relations I need are intact and that there is 
someone out there who can help me get it out.




Re: [GENERAL] hardware failure - data recovery

2006-10-18 Thread Rick Gigger

Ron Johnson wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 10/19/06 00:46, Rick Gigger wrote:

Ron Johnson wrote:

On 10/18/06 23:52, Rick Gigger wrote:

Rick Gigger wrote:

Ron Johnson wrote:

On 10/18/06 19:57, Rick Gigger wrote:

[snip]

Not much that is useful.  I think this is a little beyond that scope.  A
hardware failure basically left the fs and the db in an inconsistent
state.  There is one table in one database that has a bunch of data in
it that I need to get out.  I'm guessing I'm going to need to find
someone who understands the internal structure of the files to go in
and pull out whatever data is still intact.


So, no backup tapes?


I have been poking around and as far as I can tell, although one of the
toast indexes is gone, the actual table files appear to be intact.  That
is, they are still in the file system.  I don't know if they are ok
internally.

I also get this error when trying to access the non-toasted data:

ERROR:  could not access status of transaction 307904873
DETAIL:  could not open file "pg_clog/0125": No such file or directory

I'm guessing that this means that I may have to get someone to pull out all
versions of a given tuple because I have lost some of the visibility
info.  This shouldn't matter as most likely very few tuples would have
had more than one version when the system went down.

I just hope that the relations I need are intact and that there is
someone out there who can help me get it out.


What kernel, version & fs?


Linux 2.4.22, with ext3.



Re: [GENERAL] hardware failure - data recovery

2006-10-19 Thread Rick Gigger

Rick Gigger wrote:

Ron Johnson wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 10/19/06 00:46, Rick Gigger wrote:

Ron Johnson wrote:

On 10/18/06 23:52, Rick Gigger wrote:

Rick Gigger wrote:

Ron Johnson wrote:

On 10/18/06 19:57, Rick Gigger wrote:

[snip]

Not much that is useful.  I think this is a little beyond that scope.  A
hardware failure basically left the fs and the db in an inconsistent
state.  There is one table in one database that has a bunch of data in
it that I need to get out.  I'm guessing I'm going to need to find
someone who understands the internal structure of the files to go in
and pull out whatever data is still intact.


So, no backup tapes?


Oh.  Yeah.  Everything was backed up but this one thing. Hence the need 
to get the data from the messed up db.


I think we've got it figured out though.  We were able to patch up the db 
enough to extract the data with some help from google and old postings 
from Tom.


Thanks,

Rick



Re: [GENERAL] hardware failure - data recovery

2006-10-23 Thread Rick Gigger
I could have my developer do this if it would be useful to someone
else.  But in general I think my time would be much better served
fixing my backup situation and monitoring the backups so that this
CAN'T happen again.  It shouldn't have happened this time.


On Oct 19, 2006, at 8:35 AM, Ray Stell wrote:


On Thu, 19 Oct 2006 06:14:46 -0600, Rick Gigger wrote:

I think we've got it figured out though. We were able to patch up the
db enough to extract the data with some help from google and old
postings from Tom.



It would be really great if you put down the specifics of what you
googled and which old postings from Tom helped.  You might need it next
time.  I know I'd like to be ready.



Re: [GENERAL] hardware failure - data recovery

2006-10-23 Thread Rick Gigger

Rick Gigger <[EMAIL PROTECTED]> writes:


To make a long story short, let's just say that I had a bit of a
hardware failure recently.

If I got an error like this when trying to dump a db from the mangled
data directory, is it safe to say it's totally hosed, or is there some
chance of recovery?


Why don't you try dumping just the critical tables using pg_dump -t?


There was only one table in the db.


Also perhaps use psql and try looking up in pg_class for the damaged
tables by OID.  You may be able to drop just a few tables and then
dump the DB normally.  This assumes the damaged table(s) are
non-critical...


The table I needed was damaged.  I dropped the indexes from it that
were also damaged, but then the table itself had to be repaired.



I suggest you stop Pg first, take an FS backup of the entire cluster
before this, so you can try various approaches if needed.


That was the first thing I did.

Thanks everyone for the help.  Luckily one of my developers was able
to patch up the table and get 99% of the data out.  All of the truly
critical data was in another database and was backed up, so the 99%
was enough to get through the crisis.
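
For anyone who hits the same thing: the pg_class lookup Tom mentioned
looks roughly like this (a sketch; the table name is hypothetical):

SELECT oid, relname, relfilenode
FROM pg_class
WHERE relname = 'mytable';

relfilenode is the name of the table's file under the database directory,
so it lets you match on-disk damage back to a specific table before
deciding what to drop or dump.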


Thanks,

Rick



[GENERAL] I know the bad way...what is the good way?

2006-11-03 Thread Rick Schumeyer

I confess to having once written code that prints something like
"Items # 1 to 10 of 45"
by using select count(*) from t where condition; and select * from t 
where condition limit 10 offset x;


I now know this is "bad", I guess because of the count() and the offset.

So what is the preferred way?  If it matters, my new application is 
servlet based, so I believe (but am not positive) this means cursors are 
an option.  Is this a better way to do this?
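
One commonly suggested alternative (a sketch using the placeholder names
above; it assumes t has an indexed key "id", and the last-seen id of 42
is made up) is keyset pagination: remember the last key shown and filter
on it, so the planner can walk an index instead of counting and then
discarding OFFSET rows:

SELECT *
FROM t
WHERE condition AND id > 42   -- id of the last row on the previous page
ORDER BY id
LIMIT 10;

For the "of 45" part, a cached or occasional count(*) is usually cheaper
than an exact count on every page view.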




[GENERAL] database dump then restore on another system?

2006-11-09 Thread Rick Schumeyer

To date I have always used pg on a system where I had pg superuser status.
I'm trying to move a database from such a system to one where I am just 
a user, and I'm having a couple of problems.


The first is, the output of pg_dump has a lot of lines like:

ALTER FUNCTION some_function OWNER TO rick;

The user 'rick' does not exist on the target system.  I've looked at the 
help for pg_dump...it looks like specifying "no-owner" will skip these 
lines.  Are there any side effects I need to be aware of?


The second problem is the statement:

CREATE PROCEDURAL LANGUAGE plpgsql;

Apparently I need to be a pg superuser to do this?  Do I need to get the 
db admin to run this statement before I load the database?  What if, 
during testing, I need to drop/create the database?  Do I need the admin 
to run something every time?
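
For reference, a minimal sketch of the ownerless dump being discussed
(the database name is made up):

pg_dump --no-owner mydb > mydb.sql

--no-owner (equivalently -O) suppresses the ALTER ... OWNER TO lines, so
restored objects simply end up owned by whichever user runs the restore.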




[GENERAL] Can non-superuser install c functions ?

2006-11-09 Thread Rick Schumeyer
I am transferring a database from a system where I am a pg superuser to 
one where I am not.


The database uses tsearch2.  I am unable to install any of the 
functions.  For example:


CREATE FUNCTION gtsvector_in(cstring) RETURNS gtsvector
   AS '$libdir/tsearch2', 'gtsvector_in'
   LANGUAGE c STRICT;

results in: permission denied for language c

Do I need to get the pg administrator to install my database?
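
For context, LANGUAGE c is untrusted, and registering C functions does
require superuser rights.  A sketch of what the administrator would run
(the role name and script path are assumptions that vary by
installation):

psql -U postgres -d mydb -f tsearch2.sql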







[GENERAL] encoding advice requested

2006-11-11 Thread Rick Schumeyer

My database locale is en_US, and by default my databases are UTF8.

My application code allows the user to paste text into a box and submit 
it to the database.  Sometimes the pasted text contains non-UTF8 
characters, typically the "fancy" forms of quotes and apostrophes.  The 
database does not appreciate it when the application attempts to store 
these characters.


What is the best option to deal with this problem?

a) I think I could re-create the database with a LATIN1 encoding.  I'm 
not really experienced with different encodings; are there any issues with 
combining en_US and LATIN1?


b) I can issue a SET CLIENT_ENCODING TO 'LATIN1'; statement every time I 
open a connection.  A brief test indicates this will work.


c) I can figure out how to filter the text in the application 
program...but this seems like wasted work considering I could use either 
option A or B.


Opinions?



Re: [GENERAL] encoding advice requested

2006-11-13 Thread Rick Schumeyer

Albe Laurenz wrote:

My database locale is en_US, and by default my databases are UTF8.

My application code allows the user to paste text into a box and
submit it to the database.  Sometimes the pasted text contains non-UTF8
characters, typically the "fancy" forms of quotes and apostrophes.  The
database does not appreciate it when the application attempts to store
these characters.

What is the best option to deal with this problem?

a) I think I could re-create the database with a LATIN1 encoding.  I'm
not really experienced with different encodings; are there any issues
with combining en_US and LATIN1?

b) I can issue a SET CLIENT_ENCODING TO 'LATIN1'; statement every time I
open a connection.  A brief test indicates this will work.

Be aware that "fancy" quotes and apostrophes are not representable in
LATIN1; the closest character set in which they are is probably
WIN1252.  See http://en.wikipedia.org/wiki/Windows-1252, especially
characters in the 0x91-0x94 range.
Maybe your application implicitly uses this encoding, especially
if it runs under Windows, in which case the more appropriate
solution to your problem would be to set the client_encoding to
WIN1252 while keeping your database in UTF8.



This is good advice!

To add an answer to your second question:

You can
ALTER ROLE username SET client_encoding = WIN1252
to make this encoding the default for this user.

If you want to change the setting for all users connecting
to this database, you can also
ALTER DATABASE mydb SET client_encoding = WIN1252

Yours,
Laurenz Albe

I will have to try the WIN1252 encoding.

On the client side, my application is a web browser.  On the server 
side, it is PHP scripts on a Linux box.  The data comes from copying 
data from a browser window (pointing to another web site) and pasting it 
into an HTML textarea, which is then submitted.


Given this, would you still suggest the WIN1252 encoding?



[GENERAL] SQL subquery question

2006-11-15 Thread Rick Schumeyer

I think this can be done with one SQL statement, but I'm not sure.

I have two tables: table t contains key k, another field f, and a bunch 
of other stuff.


In a poor design decision, table tsubset contains a small number of 
"pointers" to t.  I should have used the k column; instead I used the f 
column (it is unique, but not the primary key).


I want to fix this.

I altered tsubset to have two columns, f and k, where k will be a 
foreign key referencing t(k).


I now need to copy all the k values from t to tsubset.

I think I want to do something like this:

foreach f in tsubset
 update tsubset set k=(select k from t, tsubset where t.f=f);
end

Can this be done with one SQL statement?




Re: [GENERAL] SQL subquery question

2006-11-15 Thread Rick Schumeyer

Thanks for the suggestion...it needed only one small change:

update tsubset set k = t.k from t where t.f=tsubset.f;

Thanks!

Alban Hertroys wrote:

Rick Schumeyer wrote:

foreach f in tsubset
 update tsubset set k=(select k from t, tsubset where t.f=f);
end

Can this be done with one SQL statement?

I think you mean
update tsubset set k = t.k from t where t.f = f;


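As a follow-up sketch (not from the thread): with the k values filled in,
the foreign key described in the original post could be added like this
(the constraint name and the NOT NULL step are assumptions about the
intended design):

ALTER TABLE tsubset ALTER COLUMN k SET NOT NULL;
ALTER TABLE tsubset
    ADD CONSTRAINT tsubset_k_fkey FOREIGN KEY (k) REFERENCES t(k);
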


[GENERAL] tsearch2: pg8.1 to pg8.2

2006-12-07 Thread Rick Schumeyer

I am trying to copy a database that uses tsearch2 from pg8.1 to pg8.2.

All I did to install tsearch2 was cd to the contrib/tsearch2 directory, 
then make, make install.


I then dumped the database from pg8.1 and used psql -f filename db to 
load it into pg8.2.


Attempting a query gives an error:

lib2=# select * from item where idxTitle @@ to_tsquery('default', 'money');
ERROR:  No dictionary with name 'en_stem'

Is there a document that describes the necessary steps to convert to the 
upgraded tsearch2?






Re: [GENERAL] tsearch2: pg8.1 to pg8.2

2006-12-07 Thread Rick Schumeyer
It was my understanding that running pg_dump creates a file that contains 
all the necessary commands to use tsearch2.  That approach has worked for 
me to transfer my database from one pg8.1 server to another.  I now see 
that it does *not* work from pg8.1 to pg8.2.


At your suggestion I loaded tsearch2.sql before loading the pg_dump 
output.  I get some errors in the second part, I believe because it 
attempts to load tsearch2 stuff from the pg8.1 database that conflicts 
with the pg8.2 stuff from tsearch2.sql.  But the queries seem to work.


So perhaps the answer is: load tsearch2.sql, then load the result of 
running pg_dump on the 8.1 database, and ignore the errors?
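
A sketch of that sequence (the database name comes from the error above;
the tsearch2.sql path is an assumption that depends on where contrib was
installed):

createdb lib2
psql -d lib2 -f /usr/local/pgsql/share/contrib/tsearch2.sql
psql -d lib2 -f dump_from_8.1.sql 2> restore_errors.log

Redirecting stderr at least collects the duplicate-object errors for
review instead of discarding them outright.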


Oleg Bartunov wrote:

Rick,

did you load tsearch2 itself into your database ?

Oleg
On Thu, 7 Dec 2006, Rick Schumeyer wrote:


[snip]



Regards,
Oleg
_
Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),
Sternberg Astronomical Institute, Moscow University, Russia
Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(495)939-16-83, +007(495)939-23-83



[GENERAL] How would you handle updating an item and related stuff all at once?

2007-02-16 Thread Rick Schumeyer

This may be bad design on my part, but...

I have three tables of interest...Account, Employee, and 
AccountEmployeeRelation.  There is a many-to-many relationship between 
accounts and employees.  The join table also contains a column 
indicating what role the employee plays on this account.


My interface is a web app (I'm trying out Ruby on Rails).  On the "edit 
account" screen I want to edit account attributes AND be able to 
add/delete employees in one form.  The GUI part seems to work.


BUT, when I update I'm not sure how to handle updating the 
AccountEmployeeRelation table.  During the update, relations may have 
been added or deleted, and existing relations may have been changed.  It 
seems to me the easiest thing to do is delete all the relations for the 
account and create all new ones with the data submitted from the form.  
This seems wasteful, but the alternative would be a pain.  Or is this 
really the best way?
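
A sketch of the delete-and-recreate approach inside one transaction
(table and column names are guesses at the schema described above; the
multi-row VALUES form needs 8.2 or later):

BEGIN;
DELETE FROM account_employee_relation WHERE account_id = 123;
INSERT INTO account_employee_relation (account_id, employee_id, role)
VALUES (123, 7, 'manager'),
       (123, 9, 'auditor');
COMMIT;

Because the whole thing commits atomically, concurrent readers never see
the account with its relations half-rebuilt, which takes some of the
sting out of the "wasteful" objection.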


Thanks for any advice.

Completely off topic (but not worth a separate post): I have been forced 
to use a little bit of mysql lately...did you know that if you use 
transaction and foreign key syntax with MyISAM tables, it does not 
complain...it just silently ignores your requests for transactions and 
foreign key checks.  Yikes!  I had incorrectly assumed I would get an 
error message indicating that transactions are not supported.  Oh well.




[GENERAL] one-to-one schema design question and ORM

2007-03-09 Thread Rick Schumeyer
I'm developing a system using Ruby on Rails (with ActiveRecord) and 
postgres.  (Although I think my question is still relevant for, say, 
Java with Hibernate.)


I have two classes (tables): users and employees.   A user is an account 
that can log on to the system, while an employee is...umm...an employee.


When someone is logged in, they will want to run queries like, "give me 
a list of my accounts".  This means I need to link the users table with 
the employees table.

From a business rules perspective:
  Some users are not employees (like an admin user)
  Some employees are not users

I can think of two ways to do this:

1) a 1-1 relationship where the user table contains a FK to the employee 
table.  Since not all users will be employees, the FK will sometimes be 
null.
In Rails, the user class would "belongs_to :employee" while employee 
"has_one :user".


2) Create a link table that has FKs to both the user and employee 
table.  This makes sense because I'm not sure that the concept of "there 
might be a linked employee" belongs in the user table.  This moves it to 
a separate table designed for that purpose.  But then again, it may just 
be a needless extra table.


Would you prefer one solution over the other?
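
A sketch of option 1 in SQL (all names assumed; the UNIQUE constraint is
what keeps the relationship one-to-one rather than many-to-one, and
UNIQUE still allows multiple NULLs for the non-employee users):

CREATE TABLE employees (
    employee_id serial PRIMARY KEY,
    name        text NOT NULL
);

CREATE TABLE users (
    user_id     serial PRIMARY KEY,
    login       text NOT NULL UNIQUE,
    -- NULL when the user (e.g. an admin) is not an employee
    employee_id integer UNIQUE REFERENCES employees (employee_id)
);

Option 2 would drop users.employee_id and instead add a users_employees
link table whose two FK columns are each declared UNIQUE.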
