For reference:
https://wiki.postgresql.org/wiki/Streaming_Replication
Assume a master -> slave streaming replication configuration, PostgreSQL 9.2.
Assume that the master has been chugging away, but the slave PG service
has been offline for a while and the WAL archive has updated enough that th
I think this is the first time I've ever reported a PG crash, which is notable
since I've been using PG for over 10 years. ;)
Using the 9.4 Beta RPMs on CentOS 6.X/64, we're experiencing a reproducible
crash when running a query with a left outer join, partially collapsed.
TRAP: FailedAsserti
Our app makes extensive use of temp tables, and this causes a
significant amount of bloat that can often only be cleared with a manual
vacuum process. We're looking for a better way that doesn't involve
locking, we found pg_repack and pg_reorg and were wondering if anybody
here could weigh in
On 01/15/2014 04:24 PM, Tom Lane wrote:
Lists writes:
Our app makes extensive use of temp tables, and this causes a
significant amount of bloat that can often only be cleared with a manual
vacuum process. We're looking for a better way that doesn't involve
locking, we found pg_
On 04/02/2014 02:55 PM, Bret Stern wrote:
Care to share the SSD hardware you're using?
I've used none to date, and have some critical data I would like
to put on a development server to test with.
Regards,
Bret Stern
SSDs are ridiculously cheap when you consider the performance
difference.
We're getting ready to do system upgrades to our postgresql hosting
cluster. Currently, we're using
8-core servers with 32 GB of RAM in each to host approximately 75 end
user databases per server. I'm wondering
if there are particular performance bottlenecks we should be aware of as
we scale the
We upgraded from 8.4 to 9.1 over the summer, to new DB
servers with more disk space and memory. Unexpectedly, the DB servers
have steadily increased their disk space usage since. Reported system
load doesn't seem to be affected. It's happening to all our DB servers
running 9.1.
Jeff, thanks for the feedback!
On 11/05/2012 08:51 PM, Jeff Janes wrote:
My first guesses would be things that are not directly under the
databases control, such as:
1) your server logs are accumulating and you aren't doing anything about them
I'm guessing that this is not the case:
[root@de
I followed your example, the result is at the bottom. Based on this it
would seem that there are 3-4 databases that seem to be the culprit. How
could I get more depth/detail on what specifically is the problem?
-Ben
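For more depth than du gives, the database can report its own space accounting; a sketch (these built-ins exist on 9.1):

```sql
-- Per-database size as PostgreSQL accounts for it, largest first:
SELECT datname,
       pg_size_pretty(pg_database_size(datname)) AS size
FROM pg_database
ORDER BY pg_database_size(datname) DESC;

-- Then, connected to a suspect database, the largest relations
-- (table + indexes + TOAST):
SELECT relname,
       pg_size_pretty(pg_total_relation_size(oid)) AS total_size
FROM pg_class
WHERE relkind = 'r'
ORDER BY pg_total_relation_size(oid) DESC
LIMIT 10;
```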
On 11/05/2012 07:10 PM, Scott Marlowe wrote:
What does du -sh have to say abou
On 11/07/2012 09:01 AM, Jeff Janes wrote:
Ben, did you ever figure out where the space was going?
I think we've found where the space is going, but I still don't yet know
how to resolve it. I modified your query thusly in order to get a total
of space used, and got an answer that matches clos
On 11/07/2012 12:42 PM, Tom Lane wrote:
So you've turned off autovacuum, and are carefully not vacuuming the
system catalogs. That's your problem all right. Is there a
particularly good reason why this script isn't a one-liner "VACUUM"?
Back in the 8.x days, we experienced "vacuum full analyz
On 11/07/2012 12:58 PM, Scott Marlowe wrote:
My experience is that if autovac is causing problems with stalled
queries etc you're either A: running ancient pg versions (pre 8.3), B:
Running WAY too aggressive settings in autovac (100 threads, no nap
time, cost limit of 10 etc.) or C: Your I
On 11/07/2012 11:56 AM, Igor Neyman wrote:
The only 100% fool-proof test would be to restore from your backup files.
Regards,
Igor Neyman
Our internal process is to back up production databases regularly, and
then use the backups offsite to populate copies of databases for
developer use. This
On 11/07/2012 12:42 PM, Tom Lane wrote:
... because it
>occasionally causes transactions and queries to hang when an update
>causes a vacuum mid-day, effectively taking us offline randomly.
I suspect this claim is based on ancient and no longer very relevant
experience.
Even so, if I felt the ne
As I've spent a considerable amount of time trying to sort this out, I'm
posting it for the benefit other users.
I've experienced persistent, ongoing issues with autovacuum in a mixed
read/write environment with midrange hardware (16 core Xeon, 128 GB RAM,
200 GB SATA3 6 Gb SSDs for disk I/O o
On 11/09/2012 05:26 PM, Steve Crawford wrote:
Bloat in pg_attribute would correlate with A) (or any constant
creation/destruction of tables). You can vacuum and/or reindex the
system tables if you are connected as the superuser but you are better
off preventing bloat by appropriate adjustment o
On 11/10/2012 02:21 PM, Jeff Janes wrote:
On Fri, Nov 9, 2012 at 4:28 PM, Lists wrote:
...
3) For each of the tables from #2, run the commands
REINDEX TABLE $table;
VACUUM FULL ANALYZE $table;
The end result is a squeaky-clean database server with expected disk usage.
NOTES:
...
2) It
The good news is that we have now resolved our critical problem (disk
space overuse) with a somewhat hackish, slow answer that is nonetheless
good enough for now.
Now I'd like to work out how to get autovacuum to work smoothly within
our cluster. I'm happy to try to clarify my notes and post t
On 11/12/2012 01:31 PM, Jeff Janes wrote:
On Mon, Nov 12, 2012 at 10:38 AM, Lists wrote:
On 11/10/2012 02:21 PM, Jeff Janes wrote:
On Fri, Nov 9, 2012 at 4:28 PM, Lists wrote:
2) It was sheer chance that I discovered the need to reindex prior to
vacuum
in order to get the disk space back
This issue always stumps me. I need to set the default date for a column in
postgres. A value for today, today + x days, now (timestamp), now + x amount of
days.
I never seem to be able to locate it in the documentation or online. Can anyone
post it here once and for all for posterity?
I will b
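For posterity, then: a column default can be any expression, evaluated at insert time. A sketch (the table and column names are made up):

```sql
CREATE TABLE example (
    created_date date      DEFAULT current_date,               -- today
    due_date     date      DEFAULT current_date + 7,           -- today + 7 days
    created_at   timestamp DEFAULT now(),                      -- current timestamp
    expires_at   timestamp DEFAULT now() + interval '30 days'  -- now + 30 days
);
```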
What is the query for retrieving a tables and a views definition from the system
tables (in 7.4).
I am trying to find my way around the system tables and I am finding it very
confusing, especially the way the tables are linked.
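A sketch of the usual approach ('myview' and 'mytable' are placeholders): pg_get_viewdef() reconstructs a view's definition, and for a table you list its columns from pg_attribute rather than reassembling DDL.

```sql
-- View definition (the two-argument pretty-printing form is available in 7.4):
SELECT pg_get_viewdef('myview'::regclass, true);

-- Table columns and types (ordinary, non-dropped columns only):
SELECT a.attname, format_type(a.atttypid, a.atttypmod) AS type
FROM pg_attribute a
WHERE a.attrelid = 'mytable'::regclass
  AND a.attnum > 0
  AND NOT a.attisdropped;
```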
On Tue, 10 Feb 2004, CSN wrote:
> I have a pretty simple select query that joins a table
> (p) with 125K rows with another table (pc) with almost
> one million rows:
>
> select p.*
> from product_categories pc
> inner join products p
> on pc.product_id = p.id
> where pc.category_id = $category_id
I have an update statement that affects every row in a given table.
For that table it changes the value in a single column, which itself
has a foreign key constraint. The table has an additional 9 foreign
keys, some of which reference large tables.
My expectation would be that only the changed colu
, 2009 at 9:20 AM, Tom Lane wrote:
> j-lists writes:
>> I have an update statement that affects every row in a given table.
>> For that table it changes the value in a single column, which itself
>> has a foreign key constraint. The table has an additional 9 foreign
&
List,
I've got a problem where I need to make a table that is going to grow by an average of 230,000 records per day. There
are only 20 columns in the table, mostly char and integer. It's FK'd in two places to another table for import/export
transaction id's and I have a serial primary key an
So the details of it:
I'm using PostgreSQL 8.0.3 on a Gentoo kernel 2.6.12 on a P4/2.8G+HT proc, with kernel RAID0 on some SATA drives and 1G
RAM. Don't know the bus speed. I'm thinking that my queries are not using indexes correctly and therefore taking longer
to complete than they should. I'
I am trying to run the following:
COPY departments (id, issue_id, title, description, feature_type) FROM
stdin;
23	4	Local Buzz	Things to do, people to see, places to go.	aspen
I get back:
COPY departments (id, issue_id, title, description, feature_type) FROM
stdin;
23 4 Local Buz
Interesting.
How would I go about solving that?
I inserted an extra line between the two, no dice.
> From: Scott Marlowe <[EMAIL PROTECTED]>
> Date: Tue, 20 Dec 2005 13:53:37 -0600
> To: Hunter's Lists <[EMAIL PROTECTED]>
> Cc: PostgreSQL
> Subject: Re: [GENE
owe <[EMAIL PROTECTED]>
> Date: Tue, 20 Dec 2005 16:41:32 -0600
> To: Hunter's Lists <[EMAIL PROTECTED]>
> Cc: PostgreSQL
> Subject: Re: [GENERAL] Syntax Error Inserting From STDIN?
>
> quick answer, try a different editor.
>
> Are you editing in the sa
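The underlying gotcha: COPY ... FROM stdin expects literal tab characters between columns, and many editors silently convert tabs to spaces when pasting. A sketch of the expected input, with real tabs between the five fields and the terminating backslash-period:

```sql
COPY departments (id, issue_id, title, description, feature_type) FROM stdin;
23	4	Local Buzz	Things to do, people to see, places to go.	aspen
\.
```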
>>In article <[EMAIL PROTECTED]>, Felix Finch <[EMAIL PROTECTED]> writes:
> I have a perl test program ... and began getting this complaint
> from psql:
> Can't drop schemas: Illegal seek
Ahhh yesss... otherwise known as the subroutine not returning any
specific value and the caller expecti
I currently have a working 9.2 master + hot standby setup, using
asynchronous replication and WAL archiving (via a shared NFS mount),
running in our colocated datacenter.
I need to migrate this to AWS, with as little downtime as possible. My
plan is to create two cascading standbys, "daisy-cha
Stephen Carville wrote:
I have been asked to evaluate Oracle, mysql and postgresql as a possible
replacement for our existing Oracle and MsSQL databases. Oracle and
mysql I pretty much have covered. PostgreSQL, OTOH, is somewhat less
cooperative.
I have the software (v 8.4.13) installed on 64 b
John DeSoi wrote:
I have a 9.2 hot standby setup with replication via rsync. For the second time,
it has stopped working with no apparent error on the primary or standby. Last
time this happened I fixed it by restarting the primary. Yesterday I started a
new base backup around noon and it repl
It would be nice if PostgreSQL supported column aliases in WHERE
clauses, eg:
SELECT left(value, 1) AS first_letter
FROM some_table
WHERE first_letter > 'a';
Is this the proper mailing list for such feature requests?
Thanks in advance,
Daniel Serodio
--
Sent via pgsql-general mailing list (p
Ryan Kelly wrote:
On Mon, Sep 17, 2012 at 06:18:53PM -0300, Daniel Serodio (lists) wrote:
It would be nice if PostgreSQL supported column aliases in WHERE
clauses, eg:
SELECT left(value, 1) AS first_letter
FROM some_table
WHERE first_letter> 'a';
Is this the proper mailing
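Until such a feature exists, the standard workaround is to wrap the expression in a subquery so the alias is in scope for the outer WHERE:

```sql
SELECT first_letter
FROM (
    SELECT left(value, 1) AS first_letter
    FROM some_table
) AS s
WHERE first_letter > 'a';
```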
We are preparing a PostgreSQL database for production usage and we need
to estimate the storage size for this database. We're a team of
developers with low expertise on database administration, so we are
doing research, reading manuals and using our general IT knowledge to
achieve this.
We ha
I was reading the documentation for pg_basebackup and it states that
resulting backups "can be used both for point-in-time recovery and as
the starting point for a log shipping or streaming replication standby
server".
Should pg_basebackup also be used for "regular" backups, or only for
PITR
Jasen Betts wrote:
On 2012-10-08, Daniel Serodio (lists) wrote:
We are preparing a PostgreSQL database for production usage and we need
to estimate the storage size for this database. We're a team of
developers with low expertise on database administration, so we are
doing research, re
John R Pierce wrote:
On 10/08/12 1:39 PM, Daniel Serodio (lists) wrote:
3) Estimate the size of the transaction log
** We've got no idea how to estimate this, need advice **
postgres doesn't have a 'transaction log', it has the WAL (Write-Ahead
Logs). These are typi
Magnus Hagander wrote:
On Mon, Oct 15, 2012 at 10:15 PM, Daniel Serodio (lists)
wrote:
I was reading the documentation for pg_basebackup and it states that
resulting backups "can be used both for point-in-time recovery and as the
starting point for a log shipping or streaming replic
Craig Ringer wrote:
On 10/14/2012 05:53 AM, Heine Ferreira wrote:
Hi
Are there any best practices for avoiding database
corruption?
* Maintain rolling backups with proper ageing. For example, keep one a
day for the last 7 days, then one a week for the last 4 weeks, then
one a month for the
Tulio wrote:
You can use streaming replication with hot standby (native) to have the
same data and access on both servers (but no updates or inserts on the
slave, only selects), and create a virtual IP using Heartbeat: configure
the master to use the virtual IP, and when it loses that IP, the second serv
Shaun Thomas wrote:
On 10/17/2012 12:53 PM, Daniel Serodio (lists) wrote:
I've come across a few mentions of Heartbeat being used for PostgreSQL
failover, do have any links to more information about this?
This was the subject of my talk at PG Open this year. I've got the
ent
I'm trying to register at postgresql.org so I can edit the wiki to fix a
broken link. I received a link on my e-mail so I could set my password,
but when I submit the "Change password" form I get an error:
Forbidden (403)
CSRF verification failed. Request aborted.
More information is available
I have a table "t" with a timestamp column "s". I am trying to issue a query
to find all rows where s is exactly some number of days old (rounded off).
I have tried this:
select * from t where date_part('day', age('now', s)) = ?
But this only looks at the day of the month; e.g. if my parameter
Whoops, I had a typo in my translation; the second query I quoted should
read as follows:
select * from t where date_part('day', age('now', s)) = ? and
date_part('month', age('now', s)) = 0
Thanks,
Bryan
- Original Message -
From: "
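An alternative that sidesteps age()'s month/day decomposition entirely is to compare calendar dates, assuming "exactly N days old, rounded off" means "falls on the date N days ago":

```sql
-- Rows whose timestamp s falls on the date exactly 30 days ago:
SELECT * FROM t WHERE s::date = current_date - 30;
```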
On 7 Jan 2011, at 22:02, "THOMPSON, JARED (ATTBAPCO)" wrote:
> I assume when people use dashboards they are not being queried every
> second for updating but maybe every minute?
>
> Are there any tools that work good on top of postgres?
>
> (I see in the stock market (though I am looking at/for
Hi,
I’m trying to use dblink to connect from database1 to database2 on the same
machine. The Postgres version is 9.3.5.
Firstly, I’ve created a user to use with the connection, while logged in as the
postgres superuser to database2:
> create user db_link with password 'mypassword';
> grant sel
>>
>>> select * from dblink('dbname=database2 username=db_link
>>> password=mypassword','select username, email from appuser') as t1(username
>>> text, email text);
>
> I think the problem is the above- ^^^
>
> username=db_link should be user=db_link
>
> The accepted key
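With that connection-string key corrected (user= rather than username=), the call would read:

```sql
SELECT *
FROM dblink('dbname=database2 user=db_link password=mypassword',
            'select username, email from appuser')
     AS t1(username text, email text);
```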
Hello,
We have a system that allows users to create views containing
calculations but divisions by zero are commonly a problem.
A simple example calculation in SQL would be
SELECT cost / pack_size AS unit_cost from products;
Either variable could be null or zero.
I don't think there's a w
On 4 Jun 2009, at 13:11, Sam Mason wrote:
You need to take care of only one case here: denominator == 0; rest
of the
cases will be handled sanely by the database.
CASE WHEN packet_size = 0 THEN null ELSE cost/packet_size END as
unit_cost
Yes; or even shorter:
cost/nullif(packet_size,
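Spelled out: NULLIF(a, b) returns NULL when a = b, so a zero denominator propagates NULL through the division instead of raising an error, and a NULL in either column already yields NULL:

```sql
SELECT cost / NULLIF(packet_size, 0) AS unit_cost FROM products;
```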
On 8 Jun 2009, at 17:23, Merlin Moncure wrote:
Is there a way when creating a table to limit it to one row? That
is,
without using a stored procedure?
I searched the documentation, but didn't find anything.
CREATE TABLE x (...);
CREATE UNIQUE INDEX x_only_one_row ON x ((1));
very cleve
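Filled out as a complete sketch: a unique index on a constant expression means every row has the same index key, so at most one row can ever exist:

```sql
CREATE TABLE x (setting text);
CREATE UNIQUE INDEX x_only_one_row ON x ((1));

INSERT INTO x VALUES ('first');   -- succeeds
INSERT INTO x VALUES ('second');  -- fails with a unique-violation error
```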
On 8 Jun 2009, at 19:01, David Fetter wrote:
Hello!
I've the following data:
datetime | val1 | val2
time1    |    4 | 40%
time2    |    7 | 30%
time3    |   12 | 20%
...
I'd like to sum up the following:
(7-4)*30% + (12-7)*20% + ...
This is best done in 8.4 using Windowing. Sadly, it's an
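A sketch of the 8.4 window-function approach, assuming a table data(datetime, val1, val2) with val2 stored as a numeric fraction (0.30 rather than the literal '30%'): lag() pairs each row with its predecessor so the differences can be multiplied and summed.

```sql
SELECT sum((val1 - prev_val1) * val2) AS total
FROM (
    SELECT val1,
           lag(val1) OVER (ORDER BY datetime) AS prev_val1,
           val2
    FROM data
) s
WHERE prev_val1 IS NOT NULL;   -- the first row has no predecessor
```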
Hello,
Divide by zero errors have come up a couple of times on this list
(once raised by me). I wonder if I could propose a feature for
discussion. Could the result of a division by zero be treated as
infinity or null, rather than raising an error? Floating point types
already have the co
On 31 Jul 2009, at 19:49, Jeff Davis wrote:
Yes -- you can do what you want anyway. If you want it to be more
readable, you can redefine the division operator yourself -- it's
just a
function and operator that happens to be named "/" (although that
might
not be wise). Maybe you can call it "
On 27 Aug 2009, at 17:11, pgsql-general-ow...@postgresql.org wrote:
From: Kelly Jones
Date: 27 August 2009 14:43:51 BST
To: pgsql-general@postgresql.org
Subject: Viable alternatives to SQL?
Many sites let you search databases of information, but the search
queries are very limited.
I'm creat
On 25 Sep 2009, at 07:22, InterRob wrote:
I guess it IS quite overengineered indeed...
What I'm trying to do is to facilitate different fieldwork
methodologies for archaeological research (on project basis); there
is no final agreement on data structure and semantics; however, on a
meta-
On 27 Sep 2009, at 21:10, InterRob wrote:
Peter, may I invite you to privately share some more details on the
system you are using and the design of it? Did you implement it
using PostgreSQL? Looking forward to your reply.
(And with respect to your previous message: whom are you actually
Hello,
Given a string with certain words surrounded by stars, e.g.
The *quick* *brown* fox jumped over the *lazy* dog
can you transform the words surrounded by stars with uppercase versions, i.e.
The QUICK BROWN fox jumped over the LAZY dog
Given text in a column sentence in table sentences, I
On 9 Sep 2013, at 14:41, David Johnston wrote:
> Oliver Kohll - Mailing Lists wrote
>> select regexp_replace(sentence,'\*(.*?)\*','' || upper('\1'),'g') from
>> sentences;
>
> Yeah, you cannot embed a function-call result in the &qu
On 9 Sep 2013, at 21:03, Alvaro Herrera wrote:
> select string_agg(case when words like '*%*' then upper(btrim(words, '*'))
> else words end, ' ')
> from regexp_split_to_table('The *quick* *brown* fox jumped over the *lazy*
> dog', ' ') as words;
>
> string_agg
I'm sure no one else on this list has done anything like this, but here's a
cautionary tale.
I wanted to synchronise data in two tables (issue lists) - i.e. whenever a
record is added into one, add a similar record into the other. The two tables
are similar in format but not exactl
Hi,
I understand random() is a volatile function and runs multiple times for
multiple rows returned by a SELECT, however is there a way of getting it to run
multiple times *within* another function call and in the same row. i.e.
something like
select regexp_replace('+1 555 555 555', E'\\d', tr
Many thanks both, those solutions are great and have gone in my wiki for future
ref.
Regards
Oliver
On 21 Sep 2011, at 21:56, Szymon Guz wrote:
>
>
> >> Short answer is: yes. More information you can find
> >> here
> >> http://simononsoftware.com/problem-with-random-in-postgresql-subselect/
>
On 21 Aug 2012, at 13:32, Vincent Veyron wrote:
>>
>> Since Amazon has added new high I/O instance types and EBS volumes,
>> anyone has done some benchmark of PostgreSQL on them ?
>>
>
> I wonder : is there a reason why you have to go through the complexity
> of such a setup, rather than simpl
Here's a bit of positive news spin - in a backhanded way perhaps, but still a
compliment:
http://www.theregister.co.uk/2012/08/31/postgresql_too_cool_for_school/
Oliver
www.agilebase.co.uk
Hi,
I'm getting a problem where my application hangs in the process of adding a
field to a table. The app adds the field, then creates an index on the field.
It hangs for ages (minutes) until I cancel the query. My investigation so far
has been
Look at current queries:
agilebasedata=# SELECT
On 19 Oct 2012, at 13:28, Frank Lanitz wrote:
> Just an idea without bigger investigation: Whare are the idle in
> transactions are doing? Maybe they are blocking the create index.
>
> Cheers,
> Frank
Good question, I don't know. The app runs on Java / Apache Tomcat, which
maintains a connecti
On 19 Oct 2012, at 15:09, "Albe Laurenz" wrote:
> You should always include the list in your replies -
> other people might be interested in the solution.
Oops, thought I had.
>
> There must be at least two transactions involved
> to create a locking problem like you describe.
> But since CRE
Scott,
I implemented a system exactly like this for the app in my signature below.
Basically the system stores metadata containing each view definition and the
joins between them. When someone needs to alter a view, say to remove a column,
a DROP CASCADE is performed then each view is recreated
On 25 Apr 2010, at 07:34, Scott Bailey wrote:
>
>> I can point you to the relevant code in GitHub if you're interested (it's
>> Java).
>
> Absolutely. Thanks
>
> Scott Bailey
Line 813 of
http://github.com/okohll/agileBase/blob/master/gtpb_server/src/com/gtwm/pb/model/manageSchema/DatabaseDefn
Hello,
Many thanks to andreas.kretschmer for this helpful reply about how to set up a
window function to perform a running total:
http://archives.postgresql.org/pgsql-general/2010-03/msg01122.php
It works perfectly with the simple test data but I've just got back to work,
tried implementing it
On 29 Apr 2010, at 10:01, Magnus Hagander wrote:
>>
>> select extract(year from signup_date),
>> count(email_address),
>> sum(count(email_address)) over (partition by 1 order by 1 asc rows
>> unbounded preceding)
>> from email_list group by 1 order by 1;
>>
>> Does anyone have any other ideas
>>
>> Aren't you looking for something along the line of:
>>
>> SELECT year, sum(c) over (order by year)
>> FROM (
>> SELECT extract(year from signup_date) AS year, count(email_address) AS c
>> FROM email_list
>> GROUP BY extract(year from signup_date)
>> )
>>
>> (adjust for typos, I didn't t
>
> Curious note - how does the non-subselect version and the subselect
> version compare performance-wise?
Magnus,
On a test table with 12,000 rows there's not much in it, the subselect has a
simpler plan but they both take practically the same time.
The two plans (note I've been rewriting th
Hello,
I'm interested in using WAL shipping / replication for backup purposes but have
no interest in failover. Currently my situation is:
I have two servers, live and backup, which are in different cities. The backup
server is also a test/development machine.
Backups of my most important data
On 21 Jul 2010, at 23:14, Joe Conway wrote:
> If you want something simple, and not requiring PostGIS, but plpgsql
> instead, see:
>
> http://archives.postgresql.org/pgsql-sql/2003-12/msg00193.php
For completeness, the earthdistance module also provides the distance between
two lat/longs, the
On 22 Jul 2010, at 12:57, Geoffrey wrote:
>> For completeness, the earthdistance module also provides the distance
>> between two lat/longs, the point<@>point syntax is simple to use:
>> http://www.postgresql.org/docs/8.3/static/earthdistance.html
>
> Disgregard my last post, Surely as soon as
Hi,
>
> So the issue seems to be with the last line on IPv6, to be honest i dont
> even know what IPv6 is, could disabling IPv6 do the trick or do i need to
> add something?
Your machine is connecting to PostgreSQL using IPv6's "localhost" address.
IPv6 is a newer format of IP addressing that
> We just upgraded a customer's PostgreSQL installation from 8.3 to 8.4.1. I
> wanted to make a small change to the database. I wanted to take a backup
> before I did it, just in case. When I run pgAdmin from the
> PostgreSQL/8.4/bin folder and try to take a backup, the backup fails. The
>
Hello,
I've just come across this in an output from a cron backup script:
/etc/cron.hourly/gtwm_backup_databases.sh:
pg_dump: SQL command failed
pg_dump: Error message from server: ERROR: could not open relation with OID
572838
pg_dump: The command was: SELECT
pg_catalog.pg_get_viewdef('57283
Hello,
May I take a lull between PG releases to ask an upgrade Q? Documentation and
people often recommend that when doing a major version upgrade, to dump data
from the old server with the new version of pg_dump, since the new pg_dump may
have bugfixes not available to the old.
I've never don
On 29 Mar 2010, at 14:33, Clemens Eisserer wrote:
> Hi,
>
> Sorry for beeing a bit off-topic.
> Recently I've done some database-fontends, which I used
> java+swingset+netbeans-gui-builder for.
> Compared to plain java development is quite fast - however I wonder
> wether you could recommend db-
Hello,
I'm still reasonably new to windowing functions, having used a few since 8.4
came out. I wonder if anyone can help with this one.
I've got a table of email addresses in a CRM system similar to the following:
CREATE TABLE test(
signup_date timestamp,
email_address varchar(1000)
);
INS
Hi,
I have a query I'm trying to optimise. It takes just under a second to run, not
too bad for my users but I'm worried that as the size of the data increases, it
will get worse. Of course the plan may change when that happens but I'd also
like to learn a bit more about optimisation anyway.
T
JOIN dbvcalc_delivery_charges ON b2deliveryorders.idb2deliveryorders =
dbvcalc_delivery_charges.idb2deliveryorders
WHERE b2deliveryorders.complete = false AND b2deliveryorders.invoiced = false
ORDER BY b2deliveryorders.expectedby NULLS FIRST;
Oliver
Begin forwarded message:
> From: Oliver Kohll - Maili
T 0) x WHERE x.invoiced = false
>
> Regards
>
> Pavel Stehule
>
> 2011/11/5 Oliver Kohll - Mailing Lists :
>> b2deliveryorders.complete = false AND b2deliveryorders.invoiced = false
On 28 Jan 2012, at 15:27, "Greg Sabino Mullane" wrote:
>> Is this a case for multi master do you think?
>> I.e. running one on the internet, one locally.
>
> Yes, could be.
>
>> b) changing schemas (new tables, fields, views etc.) as well as data
>
> That's a tall order; I don't think anythin
Hello,
I'm doing some SELECTs from information_schema.views to find views with
dependencies on other views, i.e.
SELECT table_name FROM information_schema.views WHERE view_definition ILIKE
'%myviewname%';
and each is taking about 1/2 a second, which is getting a bit slow for my use.
There are
On 22 Mar 2012, at 10:17, Albe Laurenz wrote:
>> Or is there a better way of finding view dependencies? I see there's a
> pg_catalog entry for tables
>> that a view depends on but that's not what I'm after.
>
> You can use pg_depend and pg_rewrite as follows:
>
> SELECT DISTINCT r.ev_class::reg
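The completed catalog query would be along these lines (a sketch only, since the quoted answer is cut off: it joins pg_depend to pg_rewrite to find views whose rewrite rules reference the given view; 'myviewname' is a placeholder):

```sql
SELECT DISTINCT dependent.oid::regclass AS dependent_view
FROM pg_depend d
JOIN pg_rewrite r        ON r.oid = d.objid
JOIN pg_class  dependent ON dependent.oid = r.ev_class
WHERE d.refobjid = 'myviewname'::regclass
  AND dependent.oid <> 'myviewname'::regclass;
```

Unlike the ILIKE scan of information_schema.views, this uses the dependency catalogs directly, so it is both faster and immune to false matches on similar view names.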
Hi,
Wondering who has any practical experience of open sourcing a software
product? I made a small internal database tool open source a while ago
and that worked quite well - www.gtportalbase.com/opensource. People
got involved in adding to the code and some people even paid a bit for
our
Hello,
Some of the new language features in 8.4 seem like pretty major
additions. I know that the window functions will be very useful. There
have been many times in the past when I've wanted to aggregate in this
way:
http://elegantcode.com/2009/01/04/sql-window-clause/
If this is possib
Hello,
I have asked about this before, but I just haven't been able to get
anywhere with it yet.. I'm hoping someone can help me?
Here is my original function and trigger:
CREATE OR REPLACE VIEW items_category AS select count(*) AS count
,b.category,nlevel(b.category) AS level,
subpath(b.cate
>Greg wrote:
> > is "DROP TRIGGER" transaction safe?... I mean, could I do:
> >
> > BEGIN
> > DROP TRIGGER category_mv_ut;
> > ... insert rows
> > ... update materialized view table
> > CREATE TRIGGER category_mv_ut AFTER UPDATE ON items
> > FOR EACH ROW EXECUTE PROCEDURE update_ut();
> > COMMIT;
Hello,
I have a question about inherited tables w/r to triggers... Hopefully
this will make some sense... (I'll try to keep the schema example as
simple as possible):
Given the follow schema/tables:
-- public stuff
SET search_path = public, pg_catalog;
CREATE TABLE customer(
customer_id SER
>a proposal that contained a user_id contained in test.customer, but not
>test2.customer? I just don't understand the documentation on this
>issue of foreign keys and what is actually inherited...
>
>Is there someplace I can look for a more thorough explanation of how
>post
>It is the only known way to control triggers though it isn't regularly
>tested by the developers.
I think I've come up with another way.. I posted this recently, but did
not get any feedback on it so I'm not sure how dumb it is... It is
working really great for me though
All I did was add
>Net Virtual Mailing Lists wrote:
>> All I did was add an extra column to my table (I called it
>> "batch_process"). Then in
>> the trigger do something like (in whichever function you are calling):
>>
>> IF NEW.batch_update IS NOT NULL AND NEW.batch
I have the following three tables and my inserts are blocking each other
in a way I just can't understand. Can someone point me in the
direction as to what is causing this?
jobs=> \d master.locations
Table "master.locations"
Column|
I have a rather peculiar performance observation and would welcome any
feedback on this.
First off, the main table (well, part of it.. it is quite large..):
Table "table1"
Column | Type |