Hi all,
System: postgres 8.1.3p, not a released version but a CVS checkout
somewhere after 8.1.3 was released.
I have a table (quite big one, 2 columns, 2166381 DB relpages, 364447136
reltuples as reported by pg_class) which is mostly inserted into, very
rarely deleted from.
I also have a primar
> If that table has a TOAST table you might need to mark the toast table
> as disabled too. Or maybe it's forcing a vacuum because it's worried
> about XID wraparound?
OK, that might be the case, as I guess there are tables which were not
successfully vacuumed in the last few months (DB wide vacu
On Fri, 2007-06-01 at 15:58, Alvaro Herrera wrote:
> Try reindexing the table; that should make the problem go away. Also,
> update to the latest of the 8.1 branch ASAP.
Reindexing won't work, it would mean hours of downtime. I plan to move
the DB to 8.2 via slony in ~2-3 weeks, that should take
> How would reindexing a table imply hours of downtime?
Simply, all meaningful activities on this system will sooner or later
insert into this table :-)
So given that we use a connection pool we end up pretty soon with all
connections waiting for an insert on this table, and at that point
nothing
[snip]
> You don't need to reindex, just update.
I was pretty sure I've seen the error reported and the fix too, the
thing is relatively harmless except the vacuum troubles. And considering
that the table itself does not need vacuuming, it would be even more
harmless if it wouldn't block other stu
On Fri, 2007-06-01 at 17:40, Alvaro Herrera wrote:
> Huh, why do you say that it doesn't need any vacuuming?
This table is only inserted and rarely deleted, so other than the XID
wraparound vacuuming it is not critical. Of course it will need vacuum
finally, but it can survive months without it -
> select age(datfrozenxid) from pg_database where datname = 'your database'
>
> 2 billions and you are screwed. Autovacuum starts panicking way before
> that, to have enough slack.
dbname=# select age(datfrozenxid) from pg_database where datname = 'dbname';
    age
------------
 1648762992
(1 row)
On Thu, 2007-06-21 at 16:45, Scott Marlowe wrote:
> Another option is to use your favorite scripting language and throw an
> excel header then the data in tab delimited format. Or even in excel
> xml format.
Why would you need any scripting language ? COPY supports CSV output
pretty well, it ca
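The COPY variant referred to here can be sketched as follows; table and column names are made up for illustration (CSV output needs 8.0+, and the `COPY (SELECT ...)` form needs 8.2+):

```sql
-- server-side export of a whole table to a file:
COPY mytable TO '/tmp/mytable.csv' WITH CSV HEADER;

-- or client-side via psql, for an arbitrary query (8.2+):
-- \copy (SELECT id, name FROM mytable ORDER BY id) TO 'out.csv' WITH CSV HEADER
```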
Hi all,
I just had the following error on one of our data bases:
ERROR: could not access status of transaction 1038286848
DETAIL: could not open file "pg_clog/03DE": No such file or directory
I researched on the mailing list and it looks like the usual suspect is
disk page corruption. There ar
On Thu, 2007-07-12 at 16:18, Simon Riggs wrote:
> The corruption could only migrate if the WAL records themselves caused
> the damage, which is much less likely than corruption of the data blocks
> at hardware level. ISTM that both Slony and Log shipping replication
> protect fairly well against bl
Hi all,
This subject was touched a few times in the past, I looked into the
archives... the result is invariably key developers saying such a
feature is unsafe because the result is unpredictable, while the people
requesting is saying it is OK that way, it is expected... but no
compelling use case
First of all, thanks for all the suggestions.
> put a SERIAL primary key on the table
Or:
> Maybe add OIDs to the table, and delete based on the OID number?
No, this is not acceptable, it adds overhead to the insertions. Normally
the overhead will be small enough, but on occasions it is noticeable.
> How about using the following?
>
> delete from <table>
> where ctid in (select ctid from <table> limit <n>);
>
I actually checked this out before starting this thread, and the plan
looked like:
> explain delete from my_table where ctid in (select ctid from my_table
> limit 10);
> Unfortunately the stuff that makes a ctid=<value> scan nice doesn't seem to be
> used when you're doing an in. It's possible that a function that does
> something like
> for rec in select ctid from my_table limit 10 loop
> delete from my_table where ctid=rec.ctid;
> end loop
> might do okay, but I haven'
On Tue, 2007-07-24 at 18:19, Tom Lane wrote:
> [ thinks for a bit ... ] Actually, you can do it as of 8.2 or so,
> by abusing the ScalarArrayOp stuff: turn the subquery into an array.
> An example in the regression database:
>
> regression=# explain update tenk1 set ten=ten+1
> regression-# whe
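The quoted example is cut off; a hedged reconstruction of the same trick on a made-up table (8.2+, since it relies on `ctid = ANY(array)` using a TID scan):

```sql
-- delete a bounded batch by collecting ctids into an array first;
-- the planner can then use a TID scan instead of a sequential scan
DELETE FROM my_table
WHERE ctid = ANY(ARRAY(SELECT ctid FROM my_table LIMIT 10));
```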
On Tue, 2007-07-24 at 19:06, Stephan Szabo wrote:
> > Unfortunately I don't think this will work. Multiple backends will happily
> > pick up the same ctid in their selects and then try to delete the same
> > records.
>
> I'm pretty sure he said that the batch processing (and the delete) would
> on
Andrew,
Thanks for your input, comments below.
On Thu, 2007-07-26 at 13:30, Andrew Kroeger wrote:
> After reading through this thread, I have an idea that should accomplish
> what I believe are your 3 main goals (avoid any negative performance
> impact on the user's inserts, do not lose any data
> Dear Santa,
>
> I'd like my database to have functionality analogous to how LIMIT works,
> but for other - non-sequential - algorithms.
There was some discussion before to possibly reuse the algorithm ANALYZE
is using for sampling some given percentage of the table data and
provide this for some
Marc,
On Tue, 2007-10-23 at 13:58 +0200, Marc Schablewski wrote:
> We had some corrupted data files in the past (missing clog, see
> http://archives.postgresql.org/pgsql-bugs/2007-07/msg00124.php) and are
> thinking about setting up a warm standby system using WAL replication.
>
> Would an error
[snip]
> With Oracle we do it with: delete from <tname> where <condition> and rownum <
> Y;
> Can we have the same goody on Postgres?
The attached message is Tom's response to a similar question, in any
case it would work fine in your case too (assuming you have postgres
8.2).
HTH,
Csaba.
--- Begin Message ---
On Thu, 2007-11-15 at 17:13 +, Gregory Stark wrote:
> DELETE
> FROM atable AS x
> USING (SELECT ctid FROM atable LIMIT 5) AS y
> WHERE x.ctid = y.ctid;
Have you tried to EXPLAIN this one ? Last time I tried to do something
similar it was going for a sequential scan on atable with a fi
On Fri, 2007-11-23 at 11:37 +0100, Martijn van Oosterhout wrote:
> On Fri, Nov 23, 2007 at 09:33:13AM +, Peter Childs wrote:
> > I tend to agree that primary keys should be single fields if they need to be
> > referenced but should also be natural if at all possible. ie use car number
> > plate
This is the problem:
> ... t2.idr(p1, p4, p6, p7, p9, 'HLA-DRB5*0101')>2;
What is that t2.idr ? Based on the syntax postgres must look for a
function (because of the parentheses), and it thinks t2 is the schema
where it must look for it.
Cheers,
Csaba.
On Tue, 2008-01-15 at 16:32 +0100, Luca Arzeni wrote:
> In all cases I'm using locale LATIN9 during DB creation, but I tested also
> with ASCII, UTF8 and LATIN1 encoding.
I guess this has nothing to do with the encoding, but with the collation
rules used, which is governed by "lc_collate" paramet
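A quick way to check what the cluster actually uses (lc_collate is fixed at initdb time in these versions; the VALUES list form needs 8.2+):

```sql
SHOW lc_collate;

-- compare the sort order against what the C locale would give:
SELECT c FROM (VALUES ('a'), ('B'), ('b'), ('A')) AS t(c) ORDER BY c;
```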
On Mon, 2008-02-04 at 08:31 -0800, Joshua D. Drake wrote:
> On Mon, 04 Feb 2008 12:18:55 +
> Gregory Stark <[EMAIL PROTECTED]> wrote:
> > "Joshua D. Drake" <[EMAIL PROTECTED]> writes:
> > I was going to suggest pruning the mailing lists down to just 3-4
> > already. The last thing we need to b
On Mon, 2008-02-04 at 12:18 +, Gregory Stark wrote:
> "Joshua D. Drake" <[EMAIL PROTECTED]> writes:
>
> > Guys, with respect this thread does nothing for us unless it is on the
> > certification list.
>
> Do we really need a separate mailing list for every thread? It's already kind
> of crazy
I found an old post regarding the subject, where modifying the
pg_constraint entry was recommended:
http://archives.postgresql.org/pgsql-admin/2003-04/msg00339.php
Is this still safe to do ? The pertinent docs don't say anything pro or
contra:
http://www.postgresql.org/docs/8.2/static/catalog-pg-
While upgrading our schema between application versions, we also had a
few constraint changes. Some of those changes were dropping NOT NULL
constraints on some columns. Our schema had a few such NOT NULL
constraints, which were created using the named variant of the column
constraint clause (someth
On Tue, 2008-02-05 at 12:11 -0500, Tom Lane wrote:
> Csaba Nagy <[EMAIL PROTECTED]> writes:
> > I found an old post regarding the subject, where modifying the
> > pg_constraint entry was recommended:
>
> > http://archives.postgresql.org/pgsql-admin/2003-04/msg00339
On Wed, 2008-02-06 at 11:03 -0500, Tom Lane wrote:
> IIRC, the name of an FK constraint also appears in the arguments for its
> triggers. If you didn't update those too, the behavior might be a bit
> inconsistent, depending on which PG version you're using. In particular
> I'm not sure which name
http://www.theserverside.com/news/thread.tss?thread_id=48339
The interesting part is where somebody asks why NOT to use postgres; the
answers could give some additional hints to those interested in what
people find missing from postgres before adopting it.
Just to summarize some of the answers:
* majo
On Wed, 2008-02-13 at 13:29 +0100, Dawid Kuroczko wrote:
> > * no direct table cache control;
>
> Could you elaborate more on this one?
Well, I was just summarizing what other people wrote :-)
But I guess they refer to table level control of how much cache memory
to use. I think there are DBMSs
On Wed, 2008-02-13 at 13:39 +0100, Csaba Nagy wrote:
> On Wed, 2008-02-13 at 13:29 +0100, Dawid Kuroczko wrote:
> > > * no direct table cache control;
> >
> > Could you elaborate more on this one?
>
OK, re-reading what I just wrote makes me think it was not clear en
On Wed, 2008-02-13 at 13:56 +0100, Magnus Hagander wrote:
> I don't think these people are comparing to other opensource ones...
> They're comparing to the commercial ones (at least in this case)
Yes, that's definitely the case. And that can actually be taken as a
compliment to the already attain
On Fri, 2008-03-14 at 08:43 -0700, Steve Crawford wrote:
> > Also, it is MVCC-safe only from 8.3 upwards; on older versions
> > it (incorrectly) deletes dead tuples that are still visible to old
> > transactions.
> >
> >
> More interesting. I may have a broken mental-model. I *thought* that
>
Hi all,
Is there an easy way to copy a large object to a new one, having a new
OID and its content independent from the original ?
The large object API functions would only allow to stream the original
to the client and stream it back to the server for the copy, which is
not really optimal... I
> Is there an easy way to copy a large object to a new one, having a new
> OID and its content independent from the original ?
So my current solution would be:
CREATE OR REPLACE FUNCTION copy_blob(p_blobId OID)
RETURNS OID AS '
DECLARE
v_NewOID BIGINT;
BEGIN
SELECT lo_create(0) INTO v_Ne
[snip]
> DECLARE
> v_NewOID BIGINT;
^^
small correction: v_NewOID should be of type OID:
DECLARE
v_NewOID OID;
BIGINT would of course work too as long as there is an implicit cast
to/from OID, which seems to be the case on 8.2.
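For reference, a complete version of the function might look like this; the direct insert into pg_largeobject is an assumption on my part (it needs appropriate privileges on the system catalog), not necessarily the exact solution from the thread:

```sql
CREATE OR REPLACE FUNCTION copy_blob(p_blobId OID)
RETURNS OID AS $$
DECLARE
    v_NewOID OID;
BEGIN
    -- allocate a new, empty large object and remember its OID
    v_NewOID := lo_create(0);
    -- copy all pages of the original server-side, without streaming
    -- anything to the client
    INSERT INTO pg_largeobject (loid, pageno, data)
        SELECT v_NewOID, pageno, data
        FROM pg_largeobject
        WHERE loid = p_blobId;
    RETURN v_NewOID;
END;
$$ LANGUAGE plpgsql;
```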
Cheers,
Csaba.
On Sun, 2008-04-20 at 11:32 -0600, Scott Marlowe wrote:
> On Sun, Apr 20, 2008 at 11:12 AM, Scott Ribe <[EMAIL PROTECTED]> wrote:
> > > I am going to play with this and see where it breaks, but it's going to be
> > > an enormous time investment to babysit it.
One more suggestion: if you happen to
We also have such a queue here, and our solution is an algorithm like
this:
1. get the next processor_count * 2 queue ids which are not marked as
taken;
2. choose randomly one of these ids;
3. lock for update with nowait;
4. if locking succeeds:
4.1. check again the item, as it could have
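The steps above can be sketched in SQL; table and column names (queue(id, taken)) are invented for illustration:

```sql
-- step 1: fetch a window of candidate ids (here 2 * 10 processors)
SELECT id FROM queue WHERE NOT taken LIMIT 20;

-- steps 2-3: the application picks one id at random, then tries to lock
-- it without blocking; if NOWAIT fails, retry with another candidate
BEGIN;
SELECT * FROM queue WHERE id = 42 FOR UPDATE NOWAIT;
-- step 4.1: re-check the row here, then mark it taken
UPDATE queue SET taken = true WHERE id = 42;
COMMIT;
```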
Hi all,
I just want to report that we had here almost exactly the same problem
as reported here:
http://archives.postgresql.org/pgsql-hackers/2008-01/msg00134.php
The whole scenario repeated the same: production DB refused to work,
restarted in single user mode, run vacuum (few hours), postgres
Hi all,
On Thu, 2009-07-30 at 11:02 +0200, Greg Stark wrote:
> On Thu, Jul 30, 2009 at 7:43 AM, Craig
> Ringer wrote:
> > On Wed, 2009-07-29 at 14:56 +0100, Greg Stark wrote:
> > What does work well is occasionally poking the socket with recv(...,
> > MSG_DONTWAIT) while doing other work. Program
On Thu, 2009-07-30 at 11:41 +0200, Greg Stark wrote:
> I know this is a popular feeling. But you're throwing away decades of
> work in making TCP reliable. You would change feelings quickly if you
> ever faced this scenario too. All it takes is some bad memory or a bad
> wire and you would be turni
On Thu, 2009-07-30 at 13:22 +0200, Craig Ringer wrote:
> So, barring network breaks (wifi down / out of range, ethernet cable
> fell out, etc etc) how is the OP managing to leave backends running
> queries? Hard-resetting the machine?
It happened to us when a client box went out of memory and st
On Thu, 2009-07-30 at 13:40 +0200, Craig Ringer wrote:
> > A simple ping to the client would have
> > cleared the fact that the client is not there anymore.
>
> Yep. It'd also stop PostgreSQL working for clients with software
> firewalls, since most of them drop ICMP ECHO ("ping").
I wasn't mea
[just to make things clear, I'm not the one who brought up this
discussion, only that I was also bitten once by zombie connections]
On Thu, 2009-07-30 at 13:29 +0200, Craig Ringer wrote:
> Idle? I thought your issue was _active_ queries running, servicing
> requests from clients that'd since cea
[this is getting off topic]
On Thu, 2009-07-30 at 13:44 +0200, Craig Ringer wrote:
> A host with a runaway process hogging memory shouldn't be dying. It
> should really be killing off the problem process, or the problem process
> should be dying its self after failing to allocate requested memor
Hi Andor,
On Wed, 2009-08-05 at 14:15 +0200, Schindler Andor wrote:
> Can anyone tell me, how this is possible? If we insert 12 on the end, then
> it decreases, but 11 remains the same. The problem only occurs, when the
> where condition contains "sorrend > 9" or less.
I bet the "sorrend" column
Hi Thom,
I would like to review it, but I get "403 - Forbidden" when clicking:
http://downloads.joomlacode.org/trackeritem/4/5/0/45041/postgresql.php
Not sure what that means, probably I need some kind of login to the
joomla tracker system, and I don't have one, and I would prefer not to
create
Hi Thom,
Sorry for the delay, I got sick in the meantime. I see that others
already did some review, I will do a quick one too, later maybe I'll
actually try it out... so after a quick review:
* on line 218, the " ENCODING '$DBname')" part feels wrong, you probably
want hardcoded UTF8 encoding th
On Thu, 2008-07-03 at 19:56 +0530, cinu wrote:
> Could anyone please tell me where I am going wrong and if there is a
> way I can get the same behaviour that I am getting while I am
> executing the through psql prompt.
Your mistake is that you think a transaction is related to your
terminal, but
On Thu, 2008-07-03 at 16:59 +0200, Csaba Nagy wrote:
> If you'll say what you really want
> to do, I bet you'll get a lot more useful advices...
Oh, and you should use the general list only for these kind of
questions, hackers is for discussion about hacking on the postgres code
i
On Thu, 2008-07-03 at 16:59 +0200, Csaba Nagy wrote:
> [snip] It is likely possible to do (using PREPARE TRANSACTION) [snip]
I was wrong, you can't do it with that either, see:
http://www.postgresql.org/docs/8.2/static/sql-prepare-transaction.html
Maybe there is some feature to attach/
On Wed, 2008-07-09 at 00:48 -0400, Tom Lane wrote:
> "Jonathan Bond-Caron" <[EMAIL PROTECTED]> writes:
> > It would be quite useful to implement a database queue. Although FOR UPDATE
> > NOWAIT and trying again can work as well as other techniques,
>
> > just skipping over the locks has its advan
On Wed, 2008-07-09 at 16:23 +0800, Craig Ringer wrote:
> Especially if it returned an updated row count or supported the
> RETURNING clause, so you could find out after the fact what was or
> wasn't done.
Well, it is supposed to be used as "SELECT ... FOR UPDATE SKIP LOCKED",
so you can in fact pu
On Tue, 2008-07-29 at 19:25 +0400, Ivan Zolotukhin wrote:
> Any clues? Can anybody suggest how to debug this? Is it possible to
> get an explain of the query within the trigger?
I bet it's the difference between prepared/not prepared plans. The
trigger prepares the plan without considering the act
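If the generic prepared plan is indeed the problem, the usual workaround in plpgsql is to build the statement dynamically so it gets planned with the actual values; a minimal sketch with invented names:

```sql
CREATE OR REPLACE FUNCTION touch_counts() RETURNS trigger AS $$
BEGIN
    -- EXECUTE forces a fresh plan on each call, using the concrete value
    EXECUTE 'UPDATE counts SET n = n + 1 WHERE grp = '
            || quote_literal(NEW.grp);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;
```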
On Fri, 2008-10-17 at 13:40 -0700, John Skillings wrote:
> Hello Csaba,
>
> Back in the month of April, I noticed that you posted a similar
> request on copying blobs between two tables, having separate OID.
>
> Can you let me know your final solution please.
The final version I'm using is this
Hi Scara,
This should work just fine:
select num, min(mydate) from mytab group by num order by num desc limit 1;
If you have an index on 'num' it will also be fast.
Cheers,
Csaba.
On Wed, 2008-11-19 at 08:47 -0600, Scara Maccai wrote:
> Hi all,
>
> suppose I have a table like:
>
> CREATE TAB
On Thu, 2008-11-27 at 09:20 -0800, Ioana Danes wrote:
> I've been wondering if anybody tried to use the postgresql csv log file to
> replicate sql statements.
> I've been looking into it in the past days and after a brief testing it
> doesn't look bad at all...
Try to execute something like:
UP
Hi David,
On Thu, 2011-11-03 at 15:30 -0700, David Kerr wrote:
> I suspect that it has to be a transaction, and that further up in the TX is
> an update to one of
> the reference tables in each TX.
This is your cause - updating the referenced table in the same
transaction. That will want an excl
Hi David,
On Wed, 2011-11-09 at 09:52 -0800, David Kerr wrote:
> So, aside from removing the PKs do i have any other options?
Sure you have: order the inserts by primary key inside each transaction.
Then you will not get deadlocks, but inserting the same key again will
fail of course (but that's
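The point about ordering can be illustrated on a made-up table: if every transaction inserts in ascending key order, two transactions can never hold their key locks in opposite order, which is what produces the deadlock:

```sql
-- two sessions both doing this cannot deadlock on the unique index,
-- because they always acquire the key locks in the same order
BEGIN;
INSERT INTO t (id, val) VALUES (1, 'a');
INSERT INTO t (id, val) VALUES (2, 'b');
INSERT INTO t (id, val) VALUES (3, 'c');
COMMIT;
```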
On Thu, 2008-12-18 at 05:29 -0800, gerhard wrote:
> I suspect the foreign key constraint of downtime_detail to slow down
> the delete process. Is this a bug, probably fixed in latest version
> (8.1.x) or should I drop the constraint and recreate after deletion -
> which I only see as workaround ?
Hi Ray,
On Wed, 2009-05-27 at 14:24 +0200, Iv Ray wrote:
> When the db is refreshed, the first click inside the web application
> that uses it (php 5.x, Apache 2.x), and some of the next clicks (i. e.
> the 3rd, 5th, 8th, 12th) result in the following error -
> PGSQL ERROR: server closed the c
Hi Markus,
Have you considered installing slony and replicate from the old version
to the new one ? It can do that, and when the replica is up to date, the
switchover will take minutes I guess.
Note that I have never done that so please ask others about that too, or
research for yourself ;-)
Chee
[snip]
> We hope to have shared row locks in 8.1 for this usage.
This feature is one I'm really waiting for ! Is there any estimate when
8.1 comes out ?
I know, I know, the preferred answer is "whenever it's ready", but if
8.1 is not that far away, and shared row locking will be in it, I won't
bo
You might want to take a look at
http://www.daffodildb.com/replicator.html
It claims to be able to replicate between different databases, including
"Sql server" and postgres. If you try it please share your experience
with us, I haven't seen yet a review involving replication between
different DB
> DELETE FROM Temp1 WHERE Test = 'test3';
> ERROR: syntax error at or near "$2" at character 44
> QUERY: INSERT INTO Temp2 (ID, test) VALUES ( $1 $2 )
> CONTEXT: PL/pgSQL function "temp1_trigger_delete" line 2 at SQL statement
> LINE 1: INSERT INTO Temp2 (ID, test) VALUES ( $1 $2 )
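The quoted error points at a missing comma between the two values in the trigger function; a guess at the intended statement (the OLD.* column sources are assumptions):

```sql
-- inside the plpgsql trigger function temp1_trigger_delete:
INSERT INTO Temp2 (ID, test) VALUES (OLD.ID, OLD.test);
```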
Please post to the list too when replying...
On Wed, 2005-03-09 at 17:29, Mican Bican wrote:
> Thank you for your answer. The reason of so long transactions is that
> I first Insert a row than get the last ID (primary key) and select it
> for update from the database and wait for user to commit it
Whoops, posted to the wrong mailing list... but it might fit anyway.
Hi Chris,
Just a thought: if you have to clear the table anyway, wouldn't it work
for you to use truncate ? That should be faster than delete.
HTH,
Csaba.
On Thu, 2005-04-07 at 04:11, Chris Smith wrote:
> I don't care about the order in my particular case, just that I have to
> clear the table.
Costin,
You could probably use one of the functions listed here:
file:///usr/share/doc/postgresql-7.4.2/html/functions-formatting.html
If you want the varchar -> date conversion to happen automatically, you
should study this chapter:
file:///usr/share/doc/postgresql-7.4.2/html/typeconv.html
HTH,
Hi all,
We have a situation where a varchar column was limited to a too small
maximum length in the design phase... shit happens, we will fix our db
schema in our next database downtime or software version deployment,
whichever comes first (we are using 7.4 and on the fly column type
change is onl
> Csaba Nagy <[EMAIL PROTECTED]> writes:
> > We have a situation where a varchar column was limited to a too small
> > maximum length in the design phase... shit happens, we will fix our db
> > schema in our next database downtime or software version deployment,
> > wh
Dear Hugo,
It is possible using cron and psql. Just "man cron" on unix if you're
not already familiar with that. The command line you should use is
something like:
psql [connection options] -f sql_commands_file.sql
Also "man psql" for the options you can use...
Starting the procedure should be
On Wed, 2005-06-08 at 05:31, Joseph Shraibman wrote:
> I want to do the following:
>
> BEGIN;
> SELECT ... FROM table WHERE a = 1 FOR UPDATE;
> UPDATE table SET ... WHERE a = 1;
> if that resturns zero then
> INSERT INTO table (...) VALUES (...);
> END;
>
> The problem is that I need to avoid rac
[snip]
> Do say, are there foreign keys on those tables?
>
> If there are, that may explain the deadlocks. This is a known problem,
> fixed in the development version, for which there is no complete
Wow, that's good news :-)
Time to drop that nasty patch we'r
Your problem is that the trigger's "found" check will not see the row
inserted by a concurrent transaction. In other words, your insert
actually fails, the record what you see was inserted by another
concurrent transaction, and the "found" check didn't work because the
other transaction started aft
[snip]
> If you have more than one client, this can always happen. You have two
> choices:
> 1. Use a lock to stop two clients interacting like this
This won't work unless you make all the clients serialized, or you have
all the ip's already inserted in the data base... you can't lock on an
unk
That would work indeed. But I guess the savepoint solution will be the
simplest and fastest if the OP has or can install 8.0 version.
Cheers,
Csaba.
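The savepoint approach can equivalently be written as a plpgsql exception block on 8.0+ (which uses a savepoint internally); names here are invented:

```sql
CREATE OR REPLACE FUNCTION upsert_t(p_a integer, p_val text)
RETURNS void AS $$
BEGIN
    INSERT INTO t (a, val) VALUES (p_a, p_val);
EXCEPTION WHEN unique_violation THEN
    -- another transaction inserted the row first: update it instead
    UPDATE t SET val = p_val WHERE a = p_a;
END;
$$ LANGUAGE plpgsql;
```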
On Mon, 2005-06-13 at 17:49, Richard Huxton wrote:
> Csaba Nagy wrote:
> > [snip]
> >
> >>If you have more than one client
Gavin,
Are you ordering the updates by id inside one transaction ? You should
order the execution of the statements by id inside a transaction, and
the deadlocks should go away.
HTH,
Csaba.
On Wed, 2005-06-15 at 13:10, Gavin Love wrote:
> I am getting a number of deadlock errors in my log files
ain why sorting by ID would make a difference as I don't
> understand why what you are suggesting should make a difference?
>
> Thanks,
>
> Gavin
>
> Csaba Nagy wrote:
> > Gavin,
> >
> > Are you ordering the updates by id inside one transaction ? You should
Hi all,
Jumping in directly to the subject, this is what I get:
explain SELECT bigint_col_1, bigint_col_2 FROM big_table WHERE
bigint_col_2 in (12132131, null, null, null,
null);
QUERY PLAN
---
Seq Scan
Hi all,
Sorry for the cross-post, I need the bigger exposure, and I'm not sure
if it's JDBC issue or not.
This is a follow-up from an earlier post, with a new twist.
We have a Java application which is designed initially to work with
Oracle, and then gradually adapted to also work with postgres,
surprises for us coming from the
oracle/postgres incompatibility twilight zone... (crossing fingers).
Cheers,
Csaba.
On Fri, 2005-07-29 at 12:29, Oliver Jowett wrote:
> Csaba Nagy wrote:
>
> > However, some of the queries still take the sequential scan route. The
> > most puzzl
Hi all,
I have a postgres system where we just migrated a fairly big data set.
The application accessing it is a cluster of servers which do burst-like
processing, i.e. when they have some work to do, it will be distributed
in the cluster and the data base will be under fairly high load.
On our fi
Oh, I've forgot to mention that the DB is version 8.0.3 running on
linux.
Cheers,
Csaba.
On Tue, 2005-08-09 at 15:19, Csaba Nagy wrote:
> Hi all,
>
> I have a postgres system where we just migrated a fairly big data set.
> The application accessing it is a cluster of serve
ssages = info", and that message might be a
debug level one.
In any case it seems my system can readily reproduce the issue whenever
I place a bigger load on it...
Cheers,
Csaba.
On Tue, 2005-08-09 at 15:51, Michael Fuhr wrote:
> On Tue, Aug 09, 2005 at 03:19:46PM +0200, Csaba Nagy wrot
[snip]
> > I've executed a "select pg_stat_reset();" as superuser, and all went
> > away except the offending row...
>
> That only resets the I/O counts (and only for one database), not the
> backend activity info.
>
> regards, tom lane
This reminds me I've forgot to ask, i
[snip]
> He's not spam.
But anybody could easily forge his address when sending to the group,
so the sender address alone doesn't make the mail any less spam.
The best way to handle those mails currently is dump and ignore them.
Otherwise threads like this one pop up...
Cheers,
Csaba.
Is it possible that your application is not closing connections, and
the server has a limit on connection count, and that is reached in a few
days ? Actually I'm not sure if the "max_connections" parameter is
applicable to local connections too, cause in that case you shouldn't be
able to connect
[snip]
> To find the nearest value in number_column to some CONSTANT (where you
> replace constant with a number), try:
>
> select *,(number_column - CONSTANT)^2 as d from tablename order by d limit
> 1;
>
This will scan the whole table and sort the results... and then pick
just one of it. Watch
The only problem is that you can't use the order by/limit syntax inside
the union queries I guess, cause the query you proposed is giving a
syntax error. I also thought first to do it like this, but it won't
work. If it would, then you could wrap the thing in another query which
orders by the diffe
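For the record, postgres does accept ORDER BY/LIMIT inside a union branch if the branch itself is parenthesized, so the wrapped form can be sketched as follows (CONSTANT and the names are placeholders, as in the quoted proposal):

```sql
-- pick the nearest value using two index-friendly one-row probes
SELECT * FROM (
    (SELECT * FROM tablename WHERE number_column >= CONSTANT
     ORDER BY number_column ASC  LIMIT 1)
    UNION ALL
    (SELECT * FROM tablename WHERE number_column <  CONSTANT
     ORDER BY number_column DESC LIMIT 1)
) AS candidates
ORDER BY abs(number_column - CONSTANT) LIMIT 1;
```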
On Wed, 2005-08-17 at 17:57, Bruno Wolff III wrote:
> On Wed, Aug 17, 2005 at 17:35:37 +0200,
> Csaba Nagy <[EMAIL PROTECTED]> wrote:
> > The only problem is that you can't use the order by/limit syntax inside
> > the union queries I guess, cause the query you proposed is
Well, I do have a use case for it.
Context:
We have data coming in from web requests, which must be fast, so we just
insert them in temporary tables without any verification. Then they are
periodically processed by a background task, but even that one will
process just a chunk at a time to avoid
You're misusing the case construct, and try to compare a text result
(from the 'nullif') with a boolean result (from the 'not null'). You
probably should use the other form of case:
SELECT CASE WHEN NULLIF(btrim(' A string', ' '), '') IS NULL THEN NULL
ELSE 6 END AS type_id;
I guess you were looki
Well, AFAICT this kind of replication in postgres can hardly be called "out
of the box". Setting up the whole thing to work, and testing it to really
work reliably is not exactly trivial, and you could have a number of
surprises (like when there's little activity, the last WAL stays at the
server and so th
Hi all,
I am running a data import in a Postgres 8.0.1 data base. The target
table is one with ~ 100 million rows. The imported data is around 40
million rows. The import is inserting 1 rows per transaction. The
table has a few indexes on it, a few foreign constraints, and one insert
trigger w
s ? In this case is there
any way I could order the incoming data so that index update performance
is optimal ? Or may the server not optimally be tuned and I should look
in some tuning ?
TIA,
Csaba.
On Mon, 2005-09-19 at 11:18, Csaba Nagy wrote:
> Hi all,
>
> I am running a data import
You could use the system wide crontab, which is configured via
/etc/crontab, and there you can specify what user should execute the
command, without needing passwords. The system wide crontab is executed
as root and it will su to the user you specify.
For vacuuming, you could try to use the autova
You might be looking for a DEFERRED constraint.
You can declare your constraint to be checked only at transaction end,
and then make all your data changes, in one transaction.
You will find details here:
http://www.postgresql.org/docs/8.0/static/sql-createtable.html
Search for DEFERRED.
HTH,
Csab
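A minimal example of such a constraint (tables invented for illustration):

```sql
CREATE TABLE parent (id integer PRIMARY KEY);
CREATE TABLE child (
    parent_id integer REFERENCES parent (id)
        DEFERRABLE INITIALLY DEFERRED
);

-- the FK is now checked at COMMIT, so the insert order doesn't matter:
BEGIN;
INSERT INTO child VALUES (1);
INSERT INTO parent VALUES (1);
COMMIT;
```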