Hi,
When I run DELETE FROM table WHERE ...;
I get the number of rows deleted from that table.
How do I get the total number of rows deleted elsewhere in the database by
foreign keys with the ON DELETE CASCADE option?
Thanks,
Daniel
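There is no built-in counter for cascaded deletes; one workaround (a sketch, not from this thread, using a hypothetical child table whose parent_id references parent(id) ON DELETE CASCADE) is to count the dependent rows in the same transaction before deleting:

```sql
begin;
-- count the rows that the cascade will remove,
-- using the same predicate as the delete below
select count(*) as will_cascade
from child
where parent_id in (select id from parent where ...);
delete from parent where ...;
commit;
```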
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
On Sun, Aug 6, 2017 at 2:43 PM, Karsten Hilbert wrote:
>
> Yes. Been there done that.
>
> Karsten
>
Thanks Karsten, it worked!
Daniel
lder before sending
the SQL to Postgres. The latter is a clear PowerBuilder problem, so I
have submitted it to them as a bug.
So I think we can now put this to rest on this mailing list. Thanks, all.
----
Dan Cooperstock
DONA
Hi,
I'm upgrading my database from 9.4 to 9.6 (Debian Jessie to Stretch). I
think it is a good opportunity to turn on data checksums.
I don't have experience with cluster creation or moving a DB to a new cluster.
I'll use pg_upgradecluster, but I don't see any option to turn on data checksums.
problem of
using Postgres with PowerBuilder, and getting identity retrieval to work in
PowerBuilder, are not at this point worthwhile.
----
Dan Cooperstock
DONATION and ACCOUNTS web site: http://www.Software4Nonprofits.com
Email: i...@software4
point in anyone else replying to this thread unless
you have experience with both programs together, and have solved this problem
for yourself. Thanks.
Dan Cooperstock
DONATION and ACCOUNTS web site: <http://www.Software4Nonprofits.
bout the combination of
PowerBuilder and PostgreSQL, not PostgreSQL alone.
Dan Cooperstock
DONATION and ACCOUNTS web site: http://www.Software4Nonprofits.com
Email: i...@software4nonprofits.com
Phone: 416-423-7722
Mail: 57 Gledhill Ave., Toronto
CT currval('GEN_CATEGORY'), it gives me the correct value,
which is also what got saved in that column.
----
Dan Cooperstock
DONATION and ACCOUNTS web site: <http://www.Software4Nonprofits.com>
E
can give me some pointers
on how to get this to work?
Thanks.
----
Dan Cooperstock
DONATION and ACCOUNTS web site: http://www.Software4Nonprofits.com
Email: i...@software4nonprofits.com
Phone:
OK, I fixed it by changing to a 64-bit compile, which was necessary anyway
because it has to work with a 64-bit install of PostgreSQL.
I'm still curious how I would have fixed it if I needed the 32-bit
version, though.
----
pect it's something
about the calling convention, but I've tried both __stdcall and _cdecl and I
get the same error.
I'm pre-declaring the function with PGDLLEXPORT to make sure it gets
exported.
Any thoughts? Thanks.
------
about how to do this, or where to search. (Maybe i'm just
looking in all the wrong places.)
dan
uch higher than just building an ETL job. Are
you so certain your situation is so special that you can't use what the rest of
the industry uses?
-Dan
Yes I meant equivalence in the roundtrip conversion sense.
And of course the "feature complete" solution which can handle deep
structures would be really nice to have.
Best Regards
Dan S
2016-02-23 21:11 GMT+01:00 David G. Johnston :
> On Tue, Feb 23, 2016 at 12:54 PM, To
store it works well.
I have this table, data and query:
create table test
(
id int,
txt text,
txt_arr text[],
f float
);
insert into test values
(1,'jkl','{abc,def,fgh}',3.14159),
(2,'hij','{abc,def,fgh}',3.14159),
(2,null,null,null),
(3,'def',null,0);
select j, json_populate_record(null::test, j)
from
(
select to_json(t) as j from test t
) r;
Best Regards
Dan S
2014-12-27 6:43 GMT+01:00 Merlin Moncure :
> On Fri, Dec 26, 2014 at 11:41 PM, Merlin Moncure
> wrote:
> > On Fri, Dec 26, 2014 at 1:19 PM, Dan S wrote:
> >> Well I'm trying to implement a window-function that works on
> range_types and
> >> produce
an be used to answer questions like what are the maximum
number of simultaneously overlapping ranges and at which ranges the maximum
occurs)
Best Regards
Dan S
2014-12-26 18:57 GMT+01:00 Tom Lane :
> Dan S writes:
> > I'm trying to write a window function in C .
> > In the fu
n is, how do I shut down the tuplesort properly after the last
call to my window function ?
I'm running "PostgreSQL 9.3.5 on i686-pc-linux-gnu, compiled by
gcc-4.4.real (Debian 4.4.5-8) 4.4.5, 32-bit"
Best Regards
Dan S
Hi,
I encountered a deadlock while running 'reindex table TABLE1' on
PostgreSQL version 9.2.4.
The PostgreSQL logs show the two offending processes.
The 1st process was running reindex table TABLE1,
waiting for an AccessExclusiveLock on the primary key index of TABLE1.
The 2nd process was running stored procedur
al to require such a workaround for what seems like a common need.
Thanks for any insight you might have!
Sincerely,
Dan
Ahh yes, I understand now.
Thanks !
Best Regards
Dan S
2014-07-28 18:28 GMT+02:00 Tom Lane :
> Dan S writes:
> > I've run into a strange problem with a query.
> > I wonder if it might be a bug or a misunderstanding from my side.
>
> > Steps to recreate th
clause expression is volatile ?
Best Regards
Dan S
P.S.
I've since rewritten the query like below to get the expected results but I
still thought I should ask if it is a bug.
with cte as
(
select generate_series, (random()*999.0)::int + 1 as id
from generate_series(1,1000)
)
select (select
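For context, the surprise with volatile expressions can be reproduced with a minimal sketch like this (hypothetical, not the original query): a volatile function in a WHERE clause is re-evaluated for every row, so there is no single "value" being compared against.

```sql
-- random() is volatile: it is re-evaluated for each of the 1000 rows scanned,
-- so this query may return zero, one, or several rows on any given run
select g
from generate_series(1,1000) g
where g = (random()*999.0)::int + 1;
```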
Same version of DB for dump & restore? If not, was the dump done via the
pg_dump from the newer version? If not, please do that.
--
Dan Langille
http://langille.org/
On Aug 28, 2013, at 2:56 AM, Torello Querci wrote:
> Interesting .
>
> while trying to restore the databas
Update: I have successfully used this configuration with a month's worth of
WALs (tens of thousands), run a test restore, and everything appears to
have worked as expected. So at least based on that test, this
configuration seems fine.
-Dan
On Fri, May 24, 2013 at 4:42 PM, Dan Birken
We have backed up $PGDATA, but had to re-initialize the slave.
We also have the WALs from the day this happened.
Thanks,
Dan
-----Original Message-----
From: Lonni J Friedman [mailto:netll...@gmail.com]
Sent: Saturday, June 22, 2013 10:09 PM
To: Dan Kogan
Cc: pgsql-general@postgresql.org
Message-
From: Lonni J Friedman [mailto:netll...@gmail.com]
Sent: Saturday, June 22, 2013 4:11 PM
To: Dan Kogan
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Standby stopped working after PANIC: WAL contains
references to invalid pages
Looks like some kind of data corruption
I am in the process of doing that now. I'll reply again with results once that
is done.
-----Original Message-----
From: Lonni J Friedman [mailto:netll...@gmail.com]
Sent: Saturday, June 22, 2013 4:11 PM
To: Dan Kogan
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Standby st
Bytes per WAL segment:         16777216
Maximum length of identifiers: 64
Maximum columns in an index:   32
Maximum size of a TOAST chunk: 1996
Date/time type storage:        64-bit integers
Float4 argument passing:       by value
Float8 argument passing:       by value
root@ip-10-148-131-236:~#
Thanks again.
Dan
sure be aware of, or some other incompatibility that makes this setup not
work.
The goal is to have PITR using a combination of pg_basebackup (which is
part of 9.1) and pg_receivexlog.
Thanks,
Dan
Achilleas Mantzios wrote:
> On Wed 20 Mar 2013 15:15:23 Dan Thomas wrote:
>
>>
>> We actually have another FreeBSD8.3/PG9.1 machine under different (but
>> similar) load that *doesn't* demonstrate this behaviour. There's
>> nothing obvious in the difference
behaviour. There's
nothing obvious in the differences in usage patterns that we can see
(we're not using any exotic features or anything), but it certainly
suggests that it's *something* related to PG or our usage of it.
On 20 March 2013 14:11, Vick Khera wrote:
>
> On Wed, M
anything up. However, next reboot I'll certainly do that.
> That said, i think you might consider posting on freebsd-[questions|stable]
> as well.
Yes I think that might be a good plan :)
Dan
On 20 March 2013 12:30, Achilleas Mantzios wrote:
> Did you do a detailed du during the suppo
> ll /usr/local/pgsql/data/pg_xlog
lrwxr-xr-x 25B Oct 19 10:48 pg_xlog -> /usr/local/pglog/pg_xlog/
I've exhausted everything I can think of to try to solve this one. Has
anyone got any ideas on how to go about debugging this?
Thanks,
Dan
I got another unexpected behaviour of the call stack with this invocation:
select testfunc2(true) from generate_series(1,10);
The first call stack is different from the nine following.
Shouldn't it be identical to the others?
Best Regards
Dan S
2012/12/11 Pavel Stehule
> Hello
>
It would be nice to have consistent behaviour, with the call stack always
looking the same despite different causes of the exception.
I think it violates the 'principle of least astonishment'.
Best Regards
Dan S
2012/12/11 Pavel Stehule
> Hello
>
> 2012/12/10 Dan S :
&
On 8/11/2012 2:21 PM, Raymond O'Donnell wrote:
On 11/08/2012 04:32, Dan Halbert wrote:
1. select count(t1_id) from t1 where t1_id not in (select distinct t1_id
from t2 limit 1103) ==> 13357 [CORRECT result]
2. select count(t1_id) from t1 where t1_id not in (select distinct t1_id
ast on this list about the
efficiency of "NOT IN (SELECT DISTINCT ...)", but I haven't yet found any bug
reports about incorrect results. I'm sorry I haven't been able to create a
reproducible test case yet.
Thanks,
Dan
1. Query pla
On 2012-Jan-21, at 6:39 PM, Scott Marlowe wrote:
> On Sat, Jan 21, 2012 at 1:37 AM, Dan Charrois wrote:
>> Hi everyone. I'm currently in the situation of administering a rather large
>> PostgreSQL database which for some reason seems to be even much larger than
>>
first place.
Until a few days ago, I hadn't even heard of TOAST tables, and just presumed
all the data was stuffed into the database I created directly. From what I've
read about them since, they sound like a great idea - but I never anticipated
them, or their effect on trying to sor
h anything so far
to shed some light on this. Any help someone could provide on how to figure
out where this substantial amount of extra disk space is being used would be
greatly appreciated!
Thanks!
Dan
--
Syzygy Research & Technology
Box 83, Legal, AB T0G 1L0 Canada
P
Hi,
Sorry for the late response on this.
On Wed, Aug 31, 2011 at 09:40, Tomas Vondra wrote:
> On 31 Srpen 2011, 1:07, Dan Scott wrote:
>> On Tue, Aug 30, 2011 at 13:52, Daniel Verite
>> wrote:
>>> Dan Scott wrote:
>>>
>>>> the insert
On Tue, Aug 30, 2011 at 13:21, Scott Ribe wrote:
> On Aug 30, 2011, at 8:22 AM, Dan Scott wrote:
>
>> Perhaps because I'm locking the table with my query?
>
> Do you mean you're explicitly locking the table? If so, why???
No, not explicitly. I just thought of it a
On Tue, Aug 30, 2011 at 13:52, Daniel Verite wrote:
> Dan Scott wrote:
>
>> the insert process is unable to insert new rows into the database
>
> You should probably provide the error message on insert or otherwise describe
> how it's not working. Normally readin
I'd like to ensure that the DB stops whatever else it's doing so that
the insert can occur. What is the best way of doing this?
Thanks,
Dan Scott
http://danieljamesscott.org
I'll try that .
Thank you very much for your help.
Best Regards
Dan S
2011/5/21 Pavel Stehule
> 2011/5/21 Dan S :
> >
> > Is there any examples of how to join the system tables to get the same
> > information as I was trying to get from the function ?
>
> you c
Is there any examples of how to join the system tables to get the same
information as I was trying to get from the function ?
Best Regards
Dan S
2011/5/21 Pavel Stehule
> 2011/5/21 Dan S :
> > So is there always an underscore prepended to the type name of an array ?
> > fo
So is there always an underscore prepended to the type name of an array ?
for example float[] would then be _float right ?
Best Regards
Dan S
2011/5/21 Pavel Stehule
> Hello
>
> type "array of text" has name "_text"
>
> Regards
>
> Pavel Stehule
&
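The underscore convention can be checked directly against the catalog; note that in SQL, float[] means float8[], so the array type's name is _float8 rather than _float (a sketch):

```sql
select '_text'::regtype;     -- resolves to text[]
select '_float8'::regtype;   -- resolves to double precision[]
```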
on_schema.parameters p
where r.routine_name = 'test'
and p.specific_name = r.specific_name
and p.specific_catalog=r.specific_catalog
and p.specific_schema=r.specific_schema
Best Regards
Dan S
Yes, throwing an error would probably be good to catch these kinds of mistakes,
which silently give you the wrong answer otherwise.
Best Regards
Dan S
2011/5/21 Tom Lane
> Dan S writes:
> > And yes I do know that I can fix the problem by renaming the output
> column
> > to s
create or replace function dynamic_query(i int)
returns setof tbl1 as $$
declare
stmt text;
cond text;
begin
stmt := 'select * from tbl1 ';
IF (i IS NOT NULL) THEN cond := ' col1 < $1 '; END IF;
IF (cond IS NOT NULL) THEN stmt := stmt || 'where ' || cond; END IF;
RETURN QUERY EXECUTE stmt USING i;
RETURN;
END;
$$ language plpgsql;
select * from dynamic_query(4);
Best Regards
Dan S
AL
files to finish a particular transaction.
-Dan
On Fri, May 13, 2011 at 11:28 AM, bubba postgres
wrote:
> What I mean is if I do pg_dump on slave I get the " ERROR: canceling
> statement due to conflict with recovery".
> So I googled and tried the solution listed in the link
ultiple times with the same tar
archive with the same results (on different systems).
Thanks,
Dan
way then.
Thank you Merlin and Pavel for your quick answers
Dan S
2011/3/18 Merlin Moncure
> On Fri, Mar 18, 2011 at 2:20 PM, Dan S wrote:
> > Hi !
> >
> > Is there a way to use plpgsql copy type to get an array of a certain type
> ?
> >
> > For example if I
Hi !
Is there a way to use the plpgsql copy-type syntax to get an array of a certain type?
For example, if I have a type sample%TYPE,
how can I declare a variable that is an array of sample%TYPE?
I can't get it to work; is there a way to do it?
Best Regards
Dan S
ines can fall behind while still being able to recover without
archiving.
-Dan
On Tue, Feb 8, 2011 at 6:51 PM, Ogden wrote:
>
> On Feb 8, 2011, at 8:47 PM, Ray Stell wrote:
>
> >
> > pg_controldata command is helpful.
> >
> > Archiving wal not required, b
cation> (which
I am assuming is the doc OP is referring to positively) is that it only
includes details about streaming replication, thus you don't have to
constantly be dodging information that doesn't apply to you.
-Dan
On Wed, Jan 26, 2011 at 7:04 AM, Bruce Momjian wrote:
>
I'm wrong.
-Dan
On Wed, Jan 12, 2011 at 12:32 PM, Vick Khera wrote:
> On Wed, Jan 12, 2011 at 12:03 AM, Dan Birken wrote:
> > If I commit asynchronously and then follow that with
> a synchronous commit,
> > does that flush the asynchronous commit as well?
>
> I'
ack in
the order that transactions return on the server, regardless of whether they
are asynchronous or synchronous?
Thanks,
Dan
Can anyone recommend a PostgreSQL compatible free tool that I can use
to generate some schema diagrams of an existing database?
Thanks
Dan
Thanks for that - yes very helpful. Good to know what is possible.
Dan
On Tue, 2010-11-23 at 10:27 +0100, Matthieu Huin wrote:
> A similar question was discussed here about 3 weeks ago :
> http://archives.postgresql.org/pgsql-general/2010-11/msg00110.php
>
> The "UPSERT&qu
Hi,
I'm using Pg for bioinformatic work and I want to be able to insert,
uniquely, biological sequences into a table, returning the sequence id -
this part is fine. However, if the sequence already exists in the table
I want to return its id.
At the moment it seems to me that I should do a
SELECT
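On modern PostgreSQL (9.5 and later, so not available when this thread was written), the insert-or-return-existing-id pattern can be sketched with ON CONFLICT, assuming a hypothetical sequences(id, seq) table with a unique constraint on seq:

```sql
-- returns the id whether the row is newly inserted or already present;
-- the no-op DO UPDATE makes RETURNING fire for the existing row too
insert into sequences (seq) values ('ACGTACGT')
on conflict (seq) do update set seq = excluded.seq
returning id;
```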
's an implicitly-created
index for the composite primary key.
Thanks,
Dan
> Ouch. Are you running Slony by any chance?
Nope, just a stock install. Both servers are running CentOS.
> Do you have *any* idea what caused this?
Nothing definitive unfortunately. Looking back through SVN logs for
code updates there was mention of disabling triggers to perform a
large delete
Ds
and counts, etc, have changed, and that'd just cause other issues.
If someone can provide any assistance or info, that'd be tops!
Thanks!
Dan Herbert
ks now.
Is there a description or manual page on how to install a beta in parallel
with my 8.3.10 installation?
regards
//Dan
2010/4/30 Tom Lane
> Dan S writes:
> > I did a test but it looks like date doesn't support infinity as a value.
>
> Try 8.4 or later.
>
>regards, tom lane
>
sion give an error?
It would be really nice to be able to use infinity with the date
type.
Regards
//Dan
On Mar 9, 2010, at 11:00 AM, Tom Lane wrote:
Dan Fitzpatrick writes:
The rule is creating a new value from the sequence a_a_id_seq for
"new.options_id" on each UPDATE call. How do I use the variable
new.options_id in the three update statements without incrementing the sequ
I think I found the problem with this. The rule:
CREATE OR REPLACE RULE insert_options AS
ON INSERT TO options DO INSTEAD
(INSERT INTO a (a_id, type_id, name)
VALUES (new.options_id, 6, new.options_name);
UPDATE a_item SET val = new.quantity
WHERE a_item.a_id = new.options_id
and 3 from insert_options).
The first 3 have null vals and the second 3 have the correct vals.
It should be:
options_id | options_name | quantity | price | discount
+--+--+---+--
1 | Test | | |
2 | Test 2 |1 | 2 |3
Any ideas why this is or if there is another approach?
Thanks,
Dan
Hello folks,
PGCon 2010 will be held 20-21 May 2010, in Ottawa at the University of
Ottawa. It will be preceded by two days of tutorials on 18-19 May 2010.
We are now accepting proposals for talks.
If you are doing something interesting with PostgreSQL, please submit
a proposal. You might be
hat I initially
thought you were referring to in your first post.) This would essentially
be a soft lock.
thanks again
Dan
On Wed, 2009-12-09 at 00:28 -0500, Merlin Moncure wrote:
> Advisory locks are basically only useful if the locker of the resource
> maintains a database session (that is
kes ~40-60 hours, and unfortunately the test
case takes about 20-30 minutes - I suspect that this problem would not
arise with more convenient tests).
cheers
Dan
On Wed, 2009-12-09 at 00:22 -0500, Tom Lane wrote:
> That seems unlikely. My best guess at this point is that for some
> reason t
ently because it is not attached
to a terminal (this has caused problems for others on that list with
sqlite and mysql).
Can anyone confirm/refute this? And if it is the case, is there
something that I can do about it?
thanks again.
Dan
On Mon, 2009-12-07 at 18:33 -0500, Merlin Moncure wrote:
>
Thanks to everyone who has answered this. The short answer is that
torque is not behaving the way I expected and not the way I have ever
seen it behave in the past. The I/O binding of these jobs may have
something to do with this, but I will look into it further.
cheers
On Mon, 2009-12-07 at 13:2
Thanks for that, that should help me sort it out. I haven't used the
autocommit option in pgdbi. I'll have a look to see if DBI::do has an
option to wait for command completion.
cheers
On Mon, 2009-12-07 at 16:12 -0500, Tom Lane wrote:
> It's not. What you want is to COMMIT and make sure you've
Yes, they are separate perl files (I'm thinking that perhaps this wasn't
the best way to do it now, but for the moment I'm going to have to stick
with it).
In the case of the manual testing it's just a matter of command-line
calls. The automated runs call each script as part of a PBS torque
script
Hi, this is a bit of a noob question.
I am using PGSql to perform some large analyses, with the clients being
a sequentially run set of perl scripts (one to set up and populate
tables, and then downstream scripts to query the database for the
results).
During manual testing everything works, but
Thanks again.
On Mon, 2009-10-12 at 21:14 -0400, Stephen Frost wrote:
> > Seems like the way to go, though it will be significantly slower
> than
> > psql or superuser reads (a couple of tables have ~10s-100sM rows).
>
> Erm, really? You've tested that and found it to be that much slower?
Sorry
Thanks for that.
On Mon, 2009-10-12 at 20:21 -0400, Stephen Frost wrote:
> * Dan Kortschak (dan.kortsc...@adelaide.edu.au) wrote:
> > $dbh->do("COPY chromosome_data FROM '".chromosomes(\%options)."' CSV");
>
> > Does anyone have any suggestions
suggest why it is possible to create a database but not
COPY to/from a file as a non-superuser?
thanks
Dan
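Server-side COPY reads and writes files with the server process's permissions, which is why it is restricted to superusers; psql's client-side \copy meta-command works for any role because the file is opened by the client. A sketch with a hypothetical file path:

```sql
-- runs COPY chromosome_data FROM STDIN under the hood,
-- reading the file as the client-side user
\copy chromosome_data from '/home/user/chromosomes.csv' csv
```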
n does not do
any evaluation of what's inside, e.g.
SELECT '{1,2,1+2}'::INT[];
doesn't work, but
SELECT ARRAY[1,2,1+2]::INT[];
works fine.
Dan
weird cases are below. If someone could explain what the
SQL parser is really looking for, and what the "best" or "most correct" way is,
I would be grateful.
Thanks,
Dan
Examples:
db=# create temporary table x (p point);
CREATE TABLE
Can't use
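For reference, two forms that do work for point values (a sketch continuing the temporary table above):

```sql
insert into x values ('(1,2)');      -- string literal, parsed by the point input function
insert into x values (point(1,2));   -- point(float8, float8) constructor function
select p from x;
```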
On Mon, Jul 13, 2009 at 3:53 PM, Dan
Armbrust wrote:
>> So this thought leads to a couple of other things Dan could test.
>> First, see if turning off full_page_writes makes the hiccup go away.
>> If so, we know the problem is in this area (though still not exactly
>> which
;m happy to test things if you send me
patches or custom code.
Thanks,
Dan
> So this thought leads to a couple of other things Dan could test.
> First, see if turning off full_page_writes makes the hiccup go away.
> If so, we know the problem is in this area (though still not exactly
> which reason); if not we need another idea. That's not a good perma
> Hm, I'm not sure I believe any of that except the last bit, seeing that
> he's got plenty of excess CPU capability. But the last bit fits with
> the wimpy-I/O problem, and it also offers something we could test.
> Dan, please see what happens when you vary the wal_buffer
r system into a state where I
could build postgres (I was using the binary install) I built a 8.3.4,
using your patch - but I didn't see any change in the behaviour. I
see hiccups that appear to be the same length as I saw on the binary
build of 8.3.4.
Thanks,
Dan
> However, the latest report says that he
> managed that, and yet there's still a one-or-two-second transient of
> some sort. I'm wondering what's causing that. If it were at the *end*
> of the checkpoint, it might be the disk again (failing to handle a bunch
> of fsyncs, perhaps). But if it rea
itional slow queries logged while
the checkpoint process runs.
My takeaway is that starting the checkpoint process is really
expensive - so I don't want to start it very frequently. And the only
downside to longer intervals between checkpoints is a longer recovery
time if the system crashes?
On Wed, Jul 8, 2009 at 1:23 PM, Tom Lane wrote:
> Dan Armbrust writes:
>> With checkpoint_segments set to 10, the checkpoints appear to be
>> happening due to checkpoint_timeout - which I've left at the default
>> of 5 minutes.
>
> Well, you could increase both
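The "increase both" suggestion translates to settings along these lines (illustrative values only, for a pre-9.5 postgresql.conf where checkpoint_segments still exists):

```
checkpoint_segments = 30            # default 3; more segments => fewer forced checkpoints
checkpoint_timeout = 15min          # default 5min
checkpoint_completion_target = 0.9  # spread checkpoint writes across the interval
```

The trade-off noted in the thread applies: fewer checkpoints mean more WAL to replay, hence longer recovery after a crash.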
On Wed, Jul 8, 2009 at 12:50 PM, Tom Lane wrote:
> Dan Armbrust writes:
>> However, once the checkpoint process begins, I get a whole flood of
>> queries that take between 1 and 10 seconds to complete. My throughput
>> crashes to near nothing. The checkpoint takes betwee
here anything I can do to prevent (or minimize) the performance
impact of the checkpoint?
Thanks,
Dan
> These reports seem to come up a bit, with disk full issues resulting in
> the need to pg_resetxlog, dump, and re-initdb, but I wouldn't be too
> shocked if they all turned out to be on xfs or something like that.
>
My particular disk-full condition was on ext2. Nothing exotic. Also,
the proces
em fine, but
bits and pieces of documentation I've seen for pg_resetxlog also
recommend initdb, and starting over. Is that necessary?
Thanks,
Dan
is looking for.
Is this DB toast? Or is there something I could do to get the DB back
into a state where it will start, without losing everything?
Thanks,
Dan
Excellent! Thanks. One other quick question... What would happen if I
didn't delete the recovery.conf file? Is that step just to prevent
accidentally restarting the server with it there?
On Tue, Apr 14, 2009 at 6:26 PM, Erik Jones wrote:
>
> On Apr 14, 2009, at 3:47 PM, Dan
I've followed the implementation instructions at 24.4.2:
http://www.postgresql.org/docs/current/static/warm-standby.html
And I've used the archive/restore commands from the example in F23.2:
http://www.postgresql.org/docs/current/static/pgstandby.html
This all works great. The primary backs up t
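For readers following along, the linked warm-standby setup amounts to a pair of settings like these (paths are hypothetical; this is a sketch, not the poster's exact configuration):

```
# postgresql.conf on the primary
archive_mode = on
archive_command = 'cp %p /mnt/archive/%f'

# recovery.conf on the standby
restore_command = 'pg_standby /mnt/archive %f %p %r'
```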
t's stable enough to
announce to the world.
Nice, how about doing the same for http://identi.ca? Support the open
source alternatives...
That was done about 22 hours ago according to this:
http://twitter.com/PGSQL_Announce/status/1281937759
--
Dan Langille
BSDCan - The Technical
d for months, until the problem randomly crops up again.
I'm still looking into it, but, at the same time, we have enough
workarounds to the issue now (scheduled reindex, install a newer OS,
upgrade to Postgres 8.3) that this is becoming a low priority
mystery, rather than the high priority on
pdates, or deletes followed by a replacement add) has doubled from 2
seconds to 4 seconds. A reindex brings the time back down to 2
seconds.
Dan