Hi David,
On Wed, 2011-11-09 at 09:52 -0800, David Kerr wrote:
> So, aside from removing the PKs do i have any other options?
Sure you do: order the inserts by primary key inside each transaction.
Then you will not get deadlocks, but inserting the same key again will
fail of course (but that's
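To illustrate the ordering trick, a minimal sketch (the table and key values here are made up, not from the original thread):

create table items (id integer primary key, payload text);

-- each session sorts its batch by the primary key before inserting:
begin;
insert into items values (17, 'a');
insert into items values (42, 'b');
commit;
-- a concurrent session inserting the overlapping batch (17, 42) in the same
-- ascending order simply blocks on id 17 and then gets a duplicate-key error,
-- instead of deadlocking the way two differently-ordered batches can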
Hi David,
On Thu, 2011-11-03 at 15:30 -0700, David Kerr wrote:
> I suspect that it has to be a transaction, and that further up in the TX is
> an update to one of
> the reference tables in each TX.
This is your cause - updating the referenced table in the same
transaction. That will want an excl
Hi Thom,
Sorry for the delay, I got sick in the meantime. I see that others
already did some review, I will do a quick one too, later maybe I'll
actually try it out... so after a quick review:
* on line 218, the " ENCODING '$DBname')" part feels wrong, you probably
want hardcoded UTF8 encoding th
Hi Thom,
I would like to review it, but I get "403 - Forbidden" when clicking:
http://downloads.joomlacode.org/trackeritem/4/5/0/45041/postgresql.php
Not sure what that means, probably I need some kind of login to the
joomla tracker system, and I don't have one, and I would prefer not to
create
Hi Andor,
On Wed, 2009-08-05 at 14:15 +0200, Schindler Andor wrote:
> Can anyone tell me, how this is possible? If we insert 12 on the end, then
> it decreases, but 11 remains the same. The problem only occurs, when the
> where condition contains "sorrend > 9" or less.
I bet the "sorrend" column
[this is getting off topic]
On Thu, 2009-07-30 at 13:44 +0200, Craig Ringer wrote:
> A host with a runaway process hogging memory shouldn't be dying. It
> should really be killing off the problem process, or the problem process
> should be dying its self after failing to allocate requested memor
[just to make things clear, I'm not the one who brought up this
discussion; it's just that I was also bitten once by zombie connections]
On Thu, 2009-07-30 at 13:29 +0200, Craig Ringer wrote:
> Idle? I thought your issue was _active_ queries running, servicing
> requests from clients that'd since cea
On Thu, 2009-07-30 at 13:40 +0200, Craig Ringer wrote:
> > A simple ping to the client would have
> > cleared the fact that the client is not there anymore.
>
> Yep. It'd also stop PostgreSQL working for clients with software
> firewalls, since most of them drop ICMP ECHO ("ping").
I wasn't mea
On Thu, 2009-07-30 at 13:22 +0200, Craig Ringer wrote:
> So, barring network breaks (wifi down / out of range, ethernet cable
> fell out, etc etc) how is the OP managing to leave backends running
> queries? Hard-resetting the machine?
It happened to us when a client box ran out of memory and st
On Thu, 2009-07-30 at 11:41 +0200, Greg Stark wrote:
> I know this is a popular feeling. But you're throwing away decades of
> work in making TCP reliable. You would change feelings quickly if you
> ever faced this scenario too. All it takes is some bad memory or a bad
> wire and you would be turni
Hi all,
On Thu, 2009-07-30 at 11:02 +0200, Greg Stark wrote:
> On Thu, Jul 30, 2009 at 7:43 AM, Craig
> Ringer wrote:
> > On Wed, 2009-07-29 at 14:56 +0100, Greg Stark wrote:
> > What does work well is occasionally poking the socket with recv(...,
> > MSG_DONTWAIT) while doing other work. Program
Hi Ray,
On Wed, 2009-05-27 at 14:24 +0200, Iv Ray wrote:
> When the db is refreshed, the first click inside the web application
> that uses it (php 5.x, Apache 2.x), and some of the next clicks (i. e.
> the 3rd, 5th, 8th, 12th) result in the following error -
> PGSQL ERROR: server closed the c
On Thu, 2008-12-18 at 05:29 -0800, gerhard wrote:
> I suspect the foreign key constraint of downtime_detail to slow down
> the delete process. Is this a bug, probably fixed in latest version
> (8.1.x) or should I drop the constraint and recreate after deletion -
> which I only see as workaround ?
On Thu, 2008-11-27 at 09:20 -0800, Ioana Danes wrote:
> I've been wondering if anybody tried to use the postgresql csv log file to
> replicate sql statements.
> I've been looking into it in the past days and after a brief testing it
> doesn't look bad at all...
Try to execute something like:
UP
Hi Scara,
This should work just fine:
select num, min(mydate) from mytab group by num order by num desc limit 1;
If you have an index on 'num' it will also be fast.
Cheers,
Csaba.
On Wed, 2008-11-19 at 08:47 -0600, Scara Maccai wrote:
> Hi all,
>
> suppose I have a table like:
>
> CREATE TAB
On Fri, 2008-10-17 at 13:40 -0700, John Skillings wrote:
> Hello Csaba,
>
> Back in the month of April, I noticed that you posted a similar
> request on copying blobs between two tables, having separate OID.
>
> Can you let me know your final solution please.
The final version I'm using is this
On Tue, 2008-07-29 at 19:25 +0400, Ivan Zolotukhin wrote:
> Any clues? Can anybody suggest how to debug this? Is it possible to
> get an explain of the query within the trigger?
I bet it's the difference between prepared/not prepared plans. The
trigger prepares the plan without considering the act
On Wed, 2008-07-09 at 16:23 +0800, Craig Ringer wrote:
> Especially if it returned an updated row count or supported the
> RETURNING clause, so you could find out after the fact what was or
> wasn't done.
Well, it is supposed to be used as "SELECT ... FOR UPDATE SKIP LOCKED",
so you can in fact pu
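For reference, the syntax discussed here eventually shipped in PostgreSQL 9.5; a queue-consumer sketch with a made-up job_queue table could look like this:

with next_job as (
    select id
      from job_queue
     order by id
     limit 1
     for update skip locked
)
delete from job_queue
 using next_job
 where job_queue.id = next_job.id
returning job_queue.*;
-- concurrent consumers skip each other's locked rows instead of blocking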
On Wed, 2008-07-09 at 00:48 -0400, Tom Lane wrote:
> "Jonathan Bond-Caron" <[EMAIL PROTECTED]> writes:
> > It would be quite useful to implement a database queue. Although FOR UPDATE
> > NOWAIT and trying again can work as well as other techniques,
>
> > just skipping over the locks has its advan
On Thu, 2008-07-03 at 16:59 +0200, Csaba Nagy wrote:
> [snip] It is likely possible to do (using PREPARE TRANSACTION) [snip]
I was wrong, you can't do it with that either, see:
http://www.postgresql.org/docs/8.2/static/sql-prepare-transaction.html
Maybe there is some feature to attach/
On Thu, 2008-07-03 at 16:59 +0200, Csaba Nagy wrote:
> If you say what you really want to do, I bet you'll get a lot more
> useful advice...
Oh, and you should use the general list for this kind of question;
hackers is for discussion about hacking on the postgres code itself.
On Thu, 2008-07-03 at 19:56 +0530, cinu wrote:
> Could anyone please tell me where I am going wrong and if there is a
> way I can get the same behaviour that I am getting while I am
> executing the through psql prompt.
Your mistake is thinking that a transaction is tied to your
terminal, but
Hi all,
I just want to report that we had here almost exactly the same problem
as reported here:
http://archives.postgresql.org/pgsql-hackers/2008-01/msg00134.php
The whole scenario repeated itself: production DB refused to work, we
restarted in single user mode, ran vacuum (a few hours), postgres
We also have such a queue here, and our solution is an algorithm like
this:
1. get the next processor_count * 2 queue ids which are not marked as
taken;
2. choose randomly one of these ids;
3. lock for update with nowait;
4. if locking succeeds:
4.1. check again the item, as it could have
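A rough plpgsql sketch of the algorithm above, with made-up names (job_queue, id, taken) and simplified error handling:

CREATE OR REPLACE FUNCTION grab_queue_item(p_candidates integer)
RETURNS bigint AS $$
DECLARE
    v_id bigint;
BEGIN
    LOOP
        -- steps 1 and 2: pick one random id among the first candidates not taken
        SELECT id INTO v_id
          FROM (SELECT id FROM job_queue
                 WHERE NOT taken
                 ORDER BY id
                 LIMIT p_candidates) AS cand
         ORDER BY random()
         LIMIT 1;
        IF v_id IS NULL THEN
            RETURN NULL;   -- queue is empty
        END IF;
        BEGIN
            -- step 3: lock for update with nowait
            PERFORM 1 FROM job_queue WHERE id = v_id FOR UPDATE NOWAIT;
            -- step 4.1: re-check, somebody may have taken it before we locked it
            UPDATE job_queue SET taken = true WHERE id = v_id AND NOT taken;
            IF FOUND THEN
                RETURN v_id;
            END IF;
        EXCEPTION WHEN lock_not_available THEN
            NULL;   -- locked by somebody else, loop and try another candidate
        END;
    END LOOP;
END;
$$ LANGUAGE plpgsql;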
On Sun, 2008-04-20 at 11:32 -0600, Scott Marlowe wrote:
> On Sun, Apr 20, 2008 at 11:12 AM, Scott Ribe <[EMAIL PROTECTED]> wrote:
> > > I am going to play with this and see where it breaks, but it's going to be
> > > an enormous time investment to babysit it.
One more suggestion: if you happen to
[snip]
> DECLARE
> v_NewOID BIGINT;
^^
small correction: v_NewOID should be of type OID:
DECLARE
v_NewOID OID;
BIGINT would of course work too as long as there is implicit cast from
it to/from OID, which seems to be the case on 8.2.
Cheers,
Csaba.
> Is there an easy way to copy a large object to a new one, having a new
> OID and its content independent of the original?
So my current solution would be:
CREATE OR REPLACE FUNCTION copy_blob(p_blobId OID)
RETURNS OID AS '
DECLARE
v_NewOID BIGINT;
BEGIN
SELECT lo_create(0) INTO v_Ne
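The rest of the function is cut off here and the final version isn't shown, but a loop-based sketch using the server-side lo_* functions (chunk size and the $$ quoting are my choices, not necessarily the original's) would look something like:

CREATE OR REPLACE FUNCTION copy_blob(p_blobId OID)
RETURNS OID AS $$
DECLARE
    v_NewOID  OID;
    v_SrcFd   INTEGER;
    v_DstFd   INTEGER;
    v_Chunk   BYTEA;
BEGIN
    SELECT lo_create(0) INTO v_NewOID;          -- new large object, server-assigned OID
    v_SrcFd := lo_open(p_blobId, 262144);       -- 262144 = INV_READ
    v_DstFd := lo_open(v_NewOID, 131072);       -- 131072 = INV_WRITE
    LOOP
        v_Chunk := loread(v_SrcFd, 8192);
        EXIT WHEN length(v_Chunk) = 0;
        PERFORM lowrite(v_DstFd, v_Chunk);
    END LOOP;
    PERFORM lo_close(v_DstFd);
    PERFORM lo_close(v_SrcFd);
    RETURN v_NewOID;                            -- everything stays server-side
END;
$$ LANGUAGE plpgsql;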
Hi all,
Is there an easy way to copy a large object to a new one, having a new
OID and its content independent of the original?
The large object API functions would only allow streaming the original
to the client and streaming it back to the server for the copy, which is
not really optimal... I
On Fri, 2008-03-14 at 08:43 -0700, Steve Crawford wrote:
> > Also, it is MVCC-safe only from 8.3 upwards; on older versions
> > it (incorrectly) deletes dead tuples that are still visible to old
> > transactions.
> >
> >
> More interesting. I may have a broken mental-model. I *thought* that
>
On Wed, 2008-02-13 at 13:56 +0100, Magnus Hagander wrote:
> I don't think these people are comparing to other opensource ones...
> They're comparing to the commercial ones (at least in this case)
Yes, that's definitely the case. And that can actually be taken as a
compliment to the already attain
On Wed, 2008-02-13 at 13:39 +0100, Csaba Nagy wrote:
> On Wed, 2008-02-13 at 13:29 +0100, Dawid Kuroczko wrote:
> > > * no direct table cache control;
> >
> > Could you elaborate more on this one?
>
OK, re-reading what I just wrote makes me think it was not clear en
On Wed, 2008-02-13 at 13:29 +0100, Dawid Kuroczko wrote:
> > * no direct table cache control;
>
> Could you elaborate more on this one?
Well, I was just summarizing what other people wrote :-)
But I guess they refer to table level control of how much cache memory
to use. I think there are DBMSs
http://www.theserverside.com/news/thread.tss?thread_id=48339
The interesting part is where somebody asks why NOT use postgres; the
answers could give some additional hints to those interested in what
people find missing from postgres before adopting it.
Just to summarize some of the answers:
* majo
On Wed, 2008-02-06 at 11:03 -0500, Tom Lane wrote:
> IIRC, the name of an FK constraint also appears in the arguments for its
> triggers. If you didn't update those too, the behavior might be a bit
> inconsistent, depending on which PG version you're using. In particular
> I'm not sure which name
On Tue, 2008-02-05 at 12:11 -0500, Tom Lane wrote:
> Csaba Nagy <[EMAIL PROTECTED]> writes:
> > I found an old post regarding the subject, where modifying the
> > pg_constraint entry was recommended:
>
> > http://archives.postgresql.org/pgsql-admin/2003-04/msg00339
While upgrading our schema between application versions, we also had a
few constraint changes. Some of those changes were dropping NOT NULL
constraints on some columns. Our schema had a few such NOT NULL
constraints, which were created using the named variant of the column
constraint clause (someth
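A hypothetical example of the kind of constraint described (the table is made up): the named variant looks like

create table accounts (
    balance integer constraint balance_not_null not null
);

and dropping it goes through the column rather than the constraint name:

alter table accounts alter column balance drop not null;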
I found an old post regarding the subject, where modifying the
pg_constraint entry was recommended:
http://archives.postgresql.org/pgsql-admin/2003-04/msg00339.php
Is this still safe to do ? The pertinent docs don't say anything pro or
contra:
http://www.postgresql.org/docs/8.2/static/catalog-pg-
On Mon, 2008-02-04 at 08:31 -0800, Joshua D. Drake wrote:
> On Mon, 04 Feb 2008 12:18:55 +
> Gregory Stark <[EMAIL PROTECTED]> wrote:
> > "Joshua D. Drake" <[EMAIL PROTECTED]> writes:
> > I was going to suggest pruning the mailing lists down to just 3-4
> > already. The last thing we need to b
On Mon, 2008-02-04 at 12:18 +, Gregory Stark wrote:
> "Joshua D. Drake" <[EMAIL PROTECTED]> writes:
>
> > Guys, with respect this thread does nothing for us unless it is on the
> > certification list.
>
> Do we really need a separate mailing list for every thread? It's already kind
> of crazy
On Tue, 2008-01-15 at 16:32 +0100, Luca Arzeni wrote:
> In all cases I'm using locale LATIN9 during DB creation, but I tested also
> with ASCII, UTF8 and LATIN1 encoding.
I guess this has nothing to do with the encoding, but with the collation
rules used, which are governed by the "lc_collate" paramet
This is the problem:
> ... t2.idr(p1, p4, p6, p7, p9, 'HLA-DRB5*0101')>2;
What is that t2.idr ? Based on the syntax postgres must look for a
function (because of the parentheses), and it thinks t2 is the schema
where it must look for it.
Cheers,
Csaba.
On Fri, 2007-11-23 at 11:37 +0100, Martijn van Oosterhout wrote:
> On Fri, Nov 23, 2007 at 09:33:13AM +, Peter Childs wrote:
> > I tend to agree that primary keys should be single fields if they need to be
> > referenced but should also be natural if at all possible. ie use car number
> > plate
On Thu, 2007-11-15 at 17:13 +, Gregory Stark wrote:
> DELETE
> FROM atable AS x
> USING (SELECT ctid FROM atable LIMIT 5) AS y
> WHERE x.ctid = y.ctid;
Have you tried to EXPLAIN this one ? Last time I tried to do something
similar it was going for a sequential scan on atable with a fi
[snip]
> With Oracle we do it with: delete from <tname> where <condition> and rownum < Y;
> Can we have the same goody on Postgres?
The attached message is Tom's response to a similar question, in any
case it would work fine in your case too (assuming you have postgres
8.2).
HTH,
Csaba.
--- Begin Message -
Marc,
On Tue, 2007-10-23 at 13:58 +0200, Marc Schablewski wrote:
> We had some corrupted data files in the past (missing clog, see
> http://archives.postgresql.org/pgsql-bugs/2007-07/msg00124.php) and are
> thinking about setting up a warm standby system using WAL replication.
>
> Would an error
> Dear Santa,
>
> I'd like my database to have functionality analogue to how LIMIT works,
> but for other - non-sequential - algorithms.
There was some discussion before about possibly reusing the algorithm
ANALYZE uses for sampling a given percentage of the table data and
providing this for some
Andrew,
Thanks for your input, comments below.
On Thu, 2007-07-26 at 13:30, Andrew Kroeger wrote:
> After reading through this thread, I have an idea that should accomplish
> what I believe are your 3 main goals (avoid any negative performance
> impact on the user's inserts, do not lose any data
On Tue, 2007-07-24 at 19:06, Stephan Szabo wrote:
> > Unfortunately I don't think this will work. Multiple backends will happily
> > pick up the same ctid in their selects and then try to delete the same
> > records.
>
> I'm pretty sure he said that the batch processing (and the delete) would
> on
On Tue, 2007-07-24 at 18:19, Tom Lane wrote:
> [ thinks for a bit ... ] Actually, you can do it as of 8.2 or so,
> by abusing the ScalarArrayOp stuff: turn the subquery into an array.
> An example in the regression database:
>
> regression=# explain update tenk1 set ten=ten+1
> regression-# whe
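The example is cut off here, but presumably it continues along these lines, which is also how the trick applies to the batch-delete case discussed in this thread (my_table is a placeholder):

explain update tenk1 set ten = ten + 1
 where ctid = any (array(select ctid from tenk1 limit 10));

delete from my_table
 where ctid = any (array(select ctid from my_table limit 10));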
> Unfortunately the stuff that makes a ctid= nice doesn't seem to be
> used when you're doing an in. It's possible that a function that does
> something like
> for rec in select ctid from my_table limit 10 loop
> delete from my_table where ctid=rec.ctid;
> end loop
> might do okay, but I haven'
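A runnable sketch of that quoted pseudocode (table name and batch size are placeholders):

CREATE OR REPLACE FUNCTION delete_batch(p_limit integer)
RETURNS integer AS $$
DECLARE
    rec record;
    cnt integer := 0;
BEGIN
    FOR rec IN SELECT ctid FROM my_table LIMIT p_limit LOOP
        DELETE FROM my_table WHERE ctid = rec.ctid;  -- "ctid = <value>" can use a TID scan
        cnt := cnt + 1;
    END LOOP;
    RETURN cnt;
END;
$$ LANGUAGE plpgsql;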
> How about using the following?
>
> delete from <table> where ctid in (select ctid from <table> limit <count>);
>
I actually checked this out before starting this thread, and the plan
looked like:
> explain delete from my_table where ctid in (select ctid from my_table limit 10);
First of all, thanks for all the suggestions.
> put a SERIAL primary key on the table
Or:
> Maybe add OIDs to the table, and delete based on the OID number?
No, this is not acceptable, it adds overhead to the insertions. Normally
the overhead will be small enough, but on occasions it is noticeabl
Hi all,
This subject was touched a few times in the past, I looked into the
archives... the result is invariably key developers saying such a
feature is unsafe because the result is unpredictable, while the people
requesting it say it is OK that way, it is expected... but no
compelling use case
On Thu, 2007-07-12 at 16:18, Simon Riggs wrote:
> The corruption could only migrate if the WAL records themselves caused
> the damage, which is much less likely than corruption of the data blocks
> at hardware level. ISTM that both Slony and Log shipping replication
> protect fairly well against bl
Hi all,
I just had the following error on one of our data bases:
ERROR: could not access status of transaction 1038286848
DETAIL: could not open file "pg_clog/03DE": No such file or directory
I researched on the mailing list and it looks like the usual suspect is
disk page corruption. There ar
On Thu, 2007-06-21 at 16:45, Scott Marlowe wrote:
> Another option is to use your favorite scripting language and throw an
> excel header then the data in tab delimited format. Or even in excel
> xml format.
Why would you need any scripting language ? COPY supports CSV output
pretty well, it ca
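A minimal sketch (the query and the file path are made up; COPY from an arbitrary SELECT works from 8.2 on):

copy (select * from sales order by sale_date) to '/tmp/sales.csv' with csv header;
-- or client-side through psql, without needing access to the server's filesystem:
-- \copy (select * from sales order by sale_date) to 'sales.csv' with csv header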
> select age(datfrozenxid) from pg_database where datname = 'your database'
>
> 2 billions and you are screwed. Autovacuum starts panicking way before
> that, to have enough slack.
dbname=# select age(datfrozenxid) from pg_database where datname = 'dbname';
    age
------------
 1648762992
(1 ro
On Fri, 2007-06-01 at 17:40, Alvaro Herrera wrote:
> Huh, why do you say that it doesn't need any vacuuming?
This table is only inserted and rarely deleted, so other than the XID
wraparound vacuuming it is not critical. Of course it will need vacuum
finally, but it can survive months without it -
[snip]
> You don't need to reindex, just update.
I was pretty sure I had seen the error reported, and the fix too; the
thing is relatively harmless except for the vacuum troubles. And considering
that the table itself does not need vacuuming, it would be even more
harmless if it wouldn't block other stu
> How would reindexing a table imply hours of downtime?
Simply, all meaningful activities on this system will sooner or later
insert into this table :-)
So given that we use a connection pool we end up pretty soon with all
connections waiting for an insert on this table, and at that point
nothing
On Fri, 2007-06-01 at 15:58, Alvaro Herrera wrote:
> Try reindexing the table; that should make the problem go away. Also,
> update to the latest of the 8.1 branch ASAP.
Reindexing won't work, it would mean hours of downtime. I plan to move
the DB to 8.2 via slony in ~2-3 weeks, that should take
> If that table has a TOAST table you might need to mark the toast table
> as disabled too. Or maybe it's forcing a vacuum because it's worried
> about XID wraparound?
OK, that might be the case, as I guess there are tables which were not
successfully vacuumed in the last few months (DB wide vacu
Hi all,
System: postgres 8.1.3p, not a released version but a CVS checkout
somewhere after 8.1.3 was released.
I have a table (quite a big one: 2 columns, 2166381 relpages, 364447136
reltuples as reported by pg_class) which is mostly inserted into, very
rarely deleted from.
I also have a primar
> [snip] Take the example of a query "UPDATE tablename SET x = x + 1".
> When this query is erroneously issued twice, data corruption will occur.
Huh? I thought Slony replicates data, not queries... what on the
master is "UPDATE tablename SET x = x + 1" will translate to "UPDATE
tablename SET
> If you are updating a large portion of your tree, you will probably want to
> throw in a vacuum in
> between the two updates. This should reduce the bloat caused by dead tuples
> in both your index
> and table.
... but that will only work if you can commit the first set of changes
before you
> Other than hard disk space, are there any disadvantages on
> having a large number of page slots?
It's not using hard disk space, it's using shared memory, so you might
want to adjust it to make that memory available for other purposes...
> DETAIL: A total of 2233472 page slots are in use (in
Sorry Jiří, that was a hasty answer on my part...
> I haven't used Slony by now. Do you have some advices or articles they
> may help? I have no ssh access at FreeBSD server box when PostgreSQL
> 8.0 server runs - so I hope this is no problem...
Slony is quite a heavy beast to install, I'm pret
Jiří,
I guess you should be able to do it via slony. Usually one wants to
upgrade using slony, but it should work the other way around too :-)
Cheers,
Csaba.
On Fri, 2007-04-13 at 15:09, Jiří Němec wrote:
> Hello,
>
> I need to import PostgreSQL DB from 8.2.3 server 8.0.8 server. Is
> there som
> I agree with everything you said except the point about the GWT.
> Using a framework like this you can have your query in the javascript,
> and pass it through directly the database and pass the data back using
> extremely simple (think 10 line) php or perl rpc that renders query
> result back in
(see below the original mail I forwarded to the list)
Zsombor,
As far as I know there's no way to disable the MVCC features in
postgres; that's a fundamental design choice in its workings.
However, your problem can be solved by properly running vacuum on the
tables you update frequently. A firs
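A minimal sketch of what that means in practice (the table name is a placeholder; recent versions can let autovacuum handle this instead):

vacuum analyze frequently_updated_table;  -- run regularly, e.g. every few minutes from cron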
> And due at least in part to government (and other institutions operated by
> damned fools) opting for the least expensive provider rather than paying for
> someone who actually knows what they're doing. Just as buying cheap junk
> always comes back to get you, hiring incompetent fools that do
> I would like to convert a mysql database with 5 million records and
> growing, to a pgsql database.
I wouldn't qualify 5 million as "huge". We have several tables here with
100 million rows and more, and I still think it's a medium-sized DB...
> All the stuff I have come across on the net has things like
>
On Tue, 2007-02-20 at 03:43, Tom Lane wrote:
> Nonsense. pg_stat_activity + "kill -TERM" should solve this problem
> reasonably well. Some of us don't trust kill -TERM 100%, which is why
> it's not currently exposed as a standard function, but if you're using
> a reasonably recent PG release it's
> Why irrecoverably? If the command fails, you just wait and try it
> again.
> You could use the pg_stat tables to work out who is connected and use
> pg_cancel_backend() to kill them. You could "kill -INT" them yourself.
> You could change the pg_hba.conf to forbid logging in and then bouncing
>
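For reference, a sketch of the pg_stat_activity / pg_cancel_backend route mentioned above (8.2-era column names; procpid and current_query were later renamed to pid and query):

select procpid, usename, current_query
  from pg_stat_activity
 where datname = current_database()
   and procpid <> pg_backend_pid();

select pg_cancel_backend(procpid)
  from pg_stat_activity
 where datname = current_database()
   and procpid <> pg_backend_pid();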
> [snip] I believe this is fixed as of 8.2 --- can you duplicate it
> there? (No, backpatching the fix is not practical.)
No, I was not able to duplicate it on 8.2, so I think it's fixed (given
that on 8.1 the errors are triggered in almost 100% of the runs).
> How sure are you about that uninterlo
[Update: the post didn't make it to the list, probably due to the attachment, so
I'm resending it inlined... and I was not able to trigger the same behavior on 8.2,
so it might have been already fixed.]
[snip]
> Well, if you can show a reproducible test case, I'd like to look at it.
OK, I have a test
On Tue, 2007-01-23 at 15:43, Tom Lane wrote:
> Csaba Nagy <[EMAIL PROTECTED]> writes:
> > The responsible code is in src/backend/commands/trigger.c, and I
> > think it only happens if you manage to create/drop a new trigger (which
> > also could be a FK trigger c
On Tue, 2007-01-23 at 14:49, Brandon Aiken wrote:
> Nevertheless, the database should be able to handle any combination of
> syntactically correct SQL statements without throwing errors and
> maintaining the database in a consistent state. If what you're saying
> is right, the error thrown here is
On Mon, 2007-01-22 at 20:56, Lenorovitz, Joel wrote:
[snip]
> ERROR: too many trigger records found for relation "item"
I've got this error on a development database where we were
continuously creating new child tables referencing the same parent
table. The responsible code is in src/backend/com
On Fri, 2007-01-19 at 20:03, Jeremy Haile wrote:
> Is it feasible to add a "reindex concurrently" that doesn't lock the
> table for the rebuild, then locks the table when doing a second pass to
> pickup rows that were changed after the first pass? Or something like
> that
IIRC, the objection
[snip]
> I afraid I don't see how any of the answers I saw discussed fit a
> 24x7 operation. Reindex, drop index, vacuum full, ... they all
> block production queries of one sort or another for significant
> periods of time (minutes) on large (multi/tens of GB) tables,
> and thus are infeasibl
[snip]
> Come to think of it, an auto-reindex option might be nice in core someday.
> TODO item?
Marry it with autovacuum + online index build, and it will be cool ;-)
BTW, having a privileged background thread doing the reindex could be a
solution to most of the objections regarding online reind
On Tue, 2007-01-09 at 17:36, Csaba Nagy wrote:
> On Tue, 2007-01-09 at 17:31, Matthew T. O'Connor wrote:
> > Without getting into all the details, the autovacuum naptime is a GUC
> > variable right now, so it can be much more frequent than the current
> > default whi
On Tue, 2007-01-09 at 17:31, Matthew T. O'Connor wrote:
> Without getting into all the details, the autovacuum naptime is a GUC
> variable right now, so it can be much more frequent than the current
> default which is 60 seconds.
Hmm, for some reason I thought the granularity is minutes, but it
On Mon, 2007-01-08 at 20:11, Ron Mayer wrote:
[snip]
> That's unlikely to work anyway. Organizations protecting valuable data
> using technical approaches (DVDs, etc) find it gets out anyway.
> Since you'll ship a client that can decrypt the data anyway, anyone with
> a debugger could decrypt it (u
On Mon, 2007-01-08 at 22:29, Chris Browne wrote:
[snip]
> Based on the three policies I've seen, it could make sense to assign
> worker policies:
>
> 1. You have a worker that moves its way through the queue in some sort of
>sequential order, based on when the table is added to the queue, to
>
On Sun, 2006-12-24 at 03:03, Christopher Browne wrote:
[snip]
> Seems to me that you could get ~80% of the way by having the simplest
> "2 queue" implementation, where tables with size < some threshold get
> thrown at the "little table" queue, and tables above that size go to
> the "big table" queu
On Thu, 2006-12-21 at 18:41, Alvaro Herrera wrote:
> > From all the discussion here I think the most benefit would result from
> > a means to assign tables to different categories, and set up separate
> > autovacuum rules per category (be it time window when vacuuming is
> > allowed, autovacuum pro
Ben,
On Thu, 2006-12-21 at 20:10, Benjamin Smith wrote:
> I'm breaking up a database into several sets of data with similar layout. (we
> currently have multiple customers using a single database and tableset, we're
> splitting it out to give us more "wiggle room")
We have here a very similar
On Thu, 2006-12-21 at 18:03, Matthew T. O'Connor wrote:
> The maintenance window design as I understand it (Alvaro chime in if I
> get this wrong) is that we will be able to specify blocks of time that
> are assigned specific autovacuum settings. For example we might define
> a maintenance wind
> Alternatively, perhaps a threshold so that a table is only considered
> for vacuum if:
>(table-size * overall-activity-in-last-hour) < threshold
> Ideally you'd define your units appropriately so that you could just
> define threshold in postgresql.conf as 30% (of peak activity in last 100
From my POV, autovacuum is doing a very good job, with the exception of:
> - There might be time-based exclusions to the effect that large tables
> oughtn't be processed during certain periods (backup time?)
Either (per table!) exception or permission based control of when a
table can be vacuu
On Tue, 2006-12-19 at 07:28, Tom Lane wrote:
> Glen Parker <[EMAIL PROTECTED]> writes:
> > I am still trying to roll my own auto vacuum thingy.
>
> Um, is this purely for hack value? What is it that you find inadequate
> about regular autovacuum? It is configurable through the pg_autovacuum
> ca
On Wed, 2006-12-13 at 17:12, Tom Lane wrote:
> Csaba Nagy <[EMAIL PROTECTED]> writes:
> > So an implementation which optimistically builds the new index
> > concurrently while holding no lock, and then hopes for the 3rd
> > transaction to be able to get the exclusive l
> Yeah, we could add defenses one by one for the cases we could think of,
> but I'd never feel very secure that we'd covered them all.
What you all forget in this discussion is that reindexing concurrently
would have to be a highly administrative task, controlled by the DB
admin... so whoever has
On Fri, 2006-12-08 at 17:39, Bernhard Weisshuhn wrote:
> On Fri, Dec 08, 2006 at 05:26:22PM +0100, Harald Armin Massa <[EMAIL PROTECTED]> wrote:
>
> > Now we just need fast, stable and native replication for " The Girl
> > That Every Man Secretly Wishes He Was Married To!"
>
> I want replicati
> FOR daycnt IN 1..31 LOOP
How about months with fewer than 31 days? What do you get for those when
the day is 31?
Cheers,
Csaba.
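For what it's worth, the usual way to sidestep the day-31 problem is to loop over real dates instead of day numbers; a sketch (the month is made up; generate_series over timestamps exists from 8.0 on):

select d::date as day
  from generate_series('2007-02-01'::date,
                       ('2007-02-01'::date + interval '1 month') - interval '1 day',
                       interval '1 day') as g(d);
-- yields 28 rows for February 2007, 30 or 31 for the longer months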
Hi all,
While postgres 8.x improved the locking around foreign keys a lot, the
locking used is still stronger than needed.
The following test case deadlocks on postgres but goes through on
oracle:
preparation of tables:
create table test1(col_fk integer primary key, col_1 integer);
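The rest of the test case is cut off here; what follows is only an assumed reconstruction of the kind of two-session sequence that deadlocks on PostgreSQL of that era (before the FOR KEY SHARE locks of 9.3) but goes through on Oracle:

create table test2(col_fk integer references test1(col_fk), col_2 integer);
insert into test1 values (1, 0);

-- session A:
begin;
insert into test2 values (1, 10);   -- FK check takes a shared lock on the test1 row
-- session B:
begin;
insert into test2 values (1, 20);   -- also takes a shared lock, which is fine
-- session A:
update test1 set col_1 = 1 where col_fk = 1;  -- needs an exclusive row lock,
                                              -- waits for B's shared lock
-- session B:
update test1 set col_1 = 2 where col_fk = 1;  -- waits for A: deadlock detected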
> I'm confused, what does statement timeout have to do with this? I was
> assuming you would edit "autovacuum = off" to "autovacuum = on" wouldn't
> that work?
Sorry for the confusion, I was thinking about the procedure of
programmatically changing the config file and reloading it, all this from a
[snip]
> I think the idea is to edit the postgresql.conf file on the fly and send
> a SIGHUP to the postmaster. I haven't ever heard of anyone doing that,
> but I don't see any reason why it wouldn't work.
It works, I did it for a while with the statement_timeout to change it
globally over night
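For completeness, a sketch of the reload step once the file has been edited (pg_reload_conf() exists from 8.1 on; sending SIGHUP via pg_ctl reload does the same):

select pg_reload_conf();   -- returns true if the signal was sent to the postmaster
show statement_timeout;    -- verify that the new value is in effect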