Re: [GENERAL] remove indexes on a column?

2008-09-09 Thread Vance Maverick
…jid = tab.oid AND pd.refobjsubid = col.attnum AND pd.objid = dep.oid AND dep.relkind = 'i'; Thanks. Vance On Wed, 2008-09-10 at 00:23 -0400, Tom Lane wrote: > "Vance Maverick" <[EMAIL PROTECTED]> writes: > > I'd like to write a SQL script, possibly with some…
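
The quoted fragment appears to be the tail of a pg_depend join. A possible reconstruction of the full query, with hypothetical table and column names, might look like this (a sketch, not the exact query from the thread):

    SELECT dep.relname AS index_name
    FROM pg_depend pd
    JOIN pg_class tab     ON pd.refobjid = tab.oid
    JOIN pg_attribute col ON col.attrelid = tab.oid
                         AND pd.refobjsubid = col.attnum
    JOIN pg_class dep     ON pd.objid = dep.oid
    WHERE tab.relname = 'mytable'      -- hypothetical name
      AND col.attname = 'mycolumn'     -- hypothetical name
      AND dep.relkind = 'i';           -- only index dependents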

[GENERAL] remove indexes on a column?

2008-09-09 Thread Vance Maverick
I'd like to write a SQL script, possibly with some PL/pgSQL, that can find all indexes on a column -- so I can remove them, and set up exactly the indexes I want. (I know what indexes are *supposed* to be there, but depending on the migration history of the specific instance, the names may vary…
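
Another way to enumerate the indexes touching a given column is through pg_index. A minimal sketch, assuming hypothetical names mytable/mycolumn:

    SELECT i.relname AS index_name
    FROM pg_index ix
    JOIN pg_class i     ON i.oid = ix.indexrelid
    JOIN pg_class t     ON t.oid = ix.indrelid
    JOIN pg_attribute a ON a.attrelid = t.oid
                       AND a.attnum = ANY (ix.indkey)
    WHERE t.relname = 'mytable'
      AND a.attname = 'mycolumn';

Each name found can then be dropped inside a PL/pgSQL loop with EXECUTE 'DROP INDEX ' || quote_ident(index_name), before recreating the intended indexes.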

Re: [GENERAL] triggers: dynamic references to fields in NEW and OLD?

2008-05-16 Thread Vance Maverick
Thanks! Your solution clearly works, but it requires the shared function to enumerate all possible column names. In my real case, there are 8-10 distinct names, so that's a bit ugly, but it works. Vance -Original Message- If you just need which table triggered the function, then …
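
A sketch of the enumeration approach described here, with hypothetical column names: the per-table column name is passed as a trigger argument, and the function branches on it, since PL/pgSQL of this era cannot dereference NEW by a dynamic name. Branches that are never taken are never planned, so each table only needs its own column to exist:

    CREATE OR REPLACE FUNCTION shared_trig() RETURNS trigger AS $$
    DECLARE
        colname text := TG_ARGV[0];  -- column name, set per table
        val     text;
    BEGIN
        -- enumerate the known column names:
        IF    colname = 'order_ref' THEN val := NEW.order_ref;
        ELSIF colname = 'item_ref'  THEN val := NEW.item_ref;
        ELSE RAISE EXCEPTION 'unexpected column %', colname;
        END IF;
        -- ... act on val here ...
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;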

Re: [GENERAL] triggers: dynamic references to fields in NEW and OLD?

2008-05-16 Thread Vance Maverick
…15, 2008 8:06 PM To: Vance Maverick Cc: pgsql-general@postgresql.org Subject: Re: [GENERAL] triggers: dynamic references to fields in NEW and OLD? Vance Maverick wrote: > I have a bunch of tables that are similar in some ways, and I'm about > to put triggers on them. The triggers…

[GENERAL] triggers: dynamic references to fields in NEW and OLD?

2008-05-15 Thread Vance Maverick
I have a bunch of tables that are similar in some ways, and I'm about to put triggers on them. The triggers will all do essentially the same thing -- the only wrinkle is that the name of the column they operate on varies from table to table. I'd like to have just one trigger function, written 'dynamically'…
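
The single shared function sketched above can be attached to each table with the varying column name supplied as a trigger argument (table and column names hypothetical):

    CREATE TRIGGER orders_trig BEFORE INSERT OR UPDATE ON orders
        FOR EACH ROW EXECUTE PROCEDURE shared_trig('order_ref');

    CREATE TRIGGER items_trig BEFORE INSERT OR UPDATE ON items
        FOR EACH ROW EXECUTE PROCEDURE shared_trig('item_ref');

(On much later PostgreSQL versions, 9.5 and up, to_jsonb(NEW) ->> TG_ARGV[0] gives fully dynamic access without the enumeration.)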

Re: [GENERAL] table as log (multiple writers and readers)

2008-04-21 Thread Vance Maverick
Thanks to all for your help. I've adopted the scheme involving a "staging" table -- the writer processes insert into that, then a single "publisher" process pulls from that and writes to the log, giving a clean serial order for any reader of the log. Vance On Mon, 2008-04-21 at 23:59 +0200, …
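
A minimal sketch of that scheme, with hypothetical names. The single-statement move uses a writable CTE (PostgreSQL 9.1+); on earlier versions the publisher can select then delete the same ids within one transaction:

    -- writers insert here concurrently:
    CREATE TABLE log_staging (
        created timestamptz NOT NULL DEFAULT clock_timestamp(),
        payload text        NOT NULL
    );

    -- readers see a clean serial order here:
    CREATE TABLE log (
        id      bigserial PRIMARY KEY,
        payload text      NOT NULL
    );

    -- the single publisher moves rows in one atomic statement:
    WITH moved AS (
        DELETE FROM log_staging RETURNING created, payload
    )
    INSERT INTO log (payload)
    SELECT payload FROM moved ORDER BY created;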

Re: [GENERAL] table as log (multiple writers and readers)

2008-04-16 Thread Vance Maverick
Craig Ringer wrote: > brian wrote: > > > Use a timestamp column also. > > That's subject to the same issues. [...] > I don't think it's even OK in the case of a single-statement INSERT (where the > transaction is implicit) and/or with the use of clock_timestamp() ... though > I'm less sure about…

[GENERAL] table as log (multiple writers and readers)

2008-04-16 Thread Vance Maverick
I want to create a table with the semantics of a log. There may be multiple concurrent writers, because log entries will be generated by triggers. And there will be multiple concurrent readers -- mostly remote processes watching the log over time. I'd like to guarantee that each of those readers…
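
With ids assigned in a clean serial order by a single publisher (as in the scheme adopted above), each reader only needs to remember the last id it has seen. A sketch against the hypothetical log table:

    -- poll for entries past the reader's saved position:
    SELECT id, payload
    FROM log
    WHERE id > 41        -- the reader's last-seen id (hypothetical)
    ORDER BY id;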

[GENERAL] size cost for null fields

2007-12-14 Thread Vance Maverick
I have a table with lots and lots of rows (into the millions), and I want to add some information to it. The new data consists of a VARCHAR and a BYTEA, and it will almost always be null -- let's say only one row in 10,000 will have non-null values. I'm trying to decide whether to add the new…
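
A NULL value occupies only a bit in the row's null bitmap, not storage in the data area, so almost-always-null columns should add negligible per-row overhead. One way to check on real rows, with a hypothetical table name (pg_column_size is available from 8.1):

    -- compare average stored row size before and after adding the columns:
    SELECT avg(pg_column_size(t.*)) AS avg_row_bytes
    FROM big_table t;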

[GENERAL] SUBSTRING performance for large BYTEA

2007-08-20 Thread Vance Maverick
I'm working on reading large BYTEA fields from PostgreSQL 8.1. (For legacy reasons, it's unattractive to move them to large objects.) I'm using JDBC, and as various people have pointed out, the standard stream-style access method ru…
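
The direction this thread points in is reading the value piecewise with SUBSTRING. For a slice to be fetched without detoasting the whole value, the column must be stored out of line but uncompressed. A sketch with hypothetical names:

    -- store out of line, uncompressed, so slices are cheap to fetch
    -- (affects newly written rows only):
    ALTER TABLE blobs ALTER COLUMN data SET STORAGE EXTERNAL;

    -- read one 64 KB chunk (offsets are 1-based):
    SELECT substring(data FROM 1 FOR 65536) FROM blobs WHERE id = 42;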

Re: [GENERAL] SUBSTRING performance for large BYTEA

2007-08-18 Thread Vance Maverick
Karsten Hilbert writes: > Well, in my particular case it isn't so much that I *want* > to access bytea in chunks but rather that under certain > not-yet-pinned-down circumstances windows clients tend to go > out-of-memory on the socket during *retrieval* (insertion is > fine, as is put/get access f…

[GENERAL] SUBSTRING performance for large BYTEA

2007-08-18 Thread Vance Maverick
I'm working on reading large BYTEA fields from PostgreSQL 8.1. (For legacy reasons, it's unattractive to move them to large objects.) I'm using JDBC, and as various people have pointed out, the standard stream-style access method ru…

Re: [GENERAL] UUID as primary key

2006-08-31 Thread Vance Maverick
Can you give a link to the PostgreSQL binding? I haven't been able to find it there. All I see is C, Perl and PHP bindings for the generation of UUIDs, and nothing about storage. (For my application, as it happens, I don't need to generate UUIDs in the database, but I recognize that would be useful…

Re: [GENERAL] UUID as primary key

2006-08-22 Thread Vance Maverick
Thanks, I've gotten this working nicely now (after some offline exchanges with Harald). In JDBC, inet values can be read a couple of ways -- the easiest is to call ResultSet.getString(). And to write them, the easiest is to use prepared statements of the form INSERT INTO xxx VALUES (?::inet, …

Re: [GENERAL] UUID as primary key

2006-08-22 Thread Vance Maverick
Thanks for the good suggestion. I spent some time last night verifying that it works, and that I can get the values in and out correctly through JDBC. (When going from Java to the database, for example, it's easiest to compose the value directly into the SQL query rather than using a parameter.)

[GENERAL] UUID as primary key

2006-08-21 Thread Vance Maverick
I'm considering using a UUID as a primary / foreign key for my schema, to help ensure portability of data in a multi-master context. Does anyone have experience with this? There's a project on Gborg (pguuid) to create a "native" UUID type, but it looks stagnant (and I'd prefer to use PostgreSQL…
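
At the time of this thread there was no built-in type (hence pguuid); PostgreSQL 8.3 later added a native uuid type, which makes the schema side trivial. A sketch with hypothetical names, with the UUID still generated client-side as in the thread:

    -- native uuid type (PostgreSQL 8.3 and later):
    CREATE TABLE items (
        id   uuid PRIMARY KEY,
        name text NOT NULL
    );

    INSERT INTO items VALUES ('a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11', 'example');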

[GENERAL] SQL parsing suggestions?

2006-07-11 Thread Vance Maverick
I have a PostgreSQL schema definition. I'd like to be able to use it as the basis for code generation in a software build process -- specifically, I want to generate Java enums corresponding to the table definitions. However, it would be inconvenient to have to connect to a running database during…
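
If loading the schema into a scratch database during the build turns out to be acceptable after all, the catalog itself is the easiest "parser". A sketch of the query a generator could walk to emit the enums:

    -- one row per column, in declaration order:
    SELECT table_name, column_name, data_type
    FROM information_schema.columns
    WHERE table_schema = 'public'
    ORDER BY table_name, ordinal_position;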