On 02/19/2014 02:14 AM, Antman, Jason (CMG-Atlanta) wrote:
> Juergen,
>
> I've seen this quite a lot in the past, as we do this multiple times a day.
>
> Here's the procedure we use to prevent it:
> 1) read the PID from postmaster.pid in the data directory
> 2) Issue "service postgresql-9.0 sto
Does anyone know if there are plans to support plpython in Amazon's RDS? I
(approximately) understand the issue, but I don't know whether there's any
effort to remedy the problem or whether I shouldn't bother hoping.
Thanks,
Reece
Juergen,
I've seen this quite a lot in the past, as we do this multiple times a day.
Here's the procedure we use to prevent it:
1) read the PID from postmaster.pid in the data directory
2) Issue "service postgresql-9.0 stop" (this does a fast shutdown with
-t 600)
3) loop until the PID is no longer running
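The three steps above can be sketched as a short script. This is a minimal illustration under assumptions (the `postmaster.pid` path, polling interval, and helper name are not from the original post); `os.kill(pid, 0)` delivers no signal and only checks whether the process still exists.

```python
import os
import time

def wait_for_pid_exit(pid, timeout=600, interval=0.5):
    """Poll until process `pid` has exited; return True if it did.

    os.kill(pid, 0) sends no signal -- it only checks existence.
    ProcessLookupError means the postmaster is gone, i.e. the fast
    shutdown has finished.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            os.kill(pid, 0)            # raises if the PID is gone
        except ProcessLookupError:
            return True                # shutdown complete
        except PermissionError:
            pass                       # exists, but owned by another user
        time.sleep(interval)
    return False                       # timed out; investigate by hand

# Step 1: read the PID from postmaster.pid (path is an assumption), e.g.:
#   pid = int(open("/var/lib/pgsql/9.0/data/postmaster.pid").readline())
# Step 2: issue "service postgresql-9.0 stop" (e.g. via subprocess).
# Step 3: wait_for_pid_exit(pid)
```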
On Fri, Feb 14, 2014 at 10:15 AM, Merlin Moncure wrote:
> yeah -- you could do this with some gymnastics and some dynamic SQL.
> If I were lazy (check), I would just encode the order in the name of
> the view somehow.
>
Thanks. That's exactly what I do already. Apparently, I'm even lazier than
you.
On 2014-02-18 17:59:35 Tom Lane wrote:
> Samuel Gilbert writes:
> > All of this was done on PostgreSQL 9.2.0 64-bit compiled from the official
> > source. Significant changes in postgresql.conf:
> Why in the world are you using 9.2.0? You're missing a year and a half
> worth of bug fixes, some of them quite serious.
> "The modification date must be updated if any row is modified in any way."
>
> If that is the case shouldn't the trigger also cover UPDATE?
You're completely right about that! I actually have both configured, but I
focused only on the INSERT to try to keep the length of my post as short as
possible.
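In outline, the point of the exchange above is that a "last modified" timestamp needs a trigger on UPDATE as well as INSERT. Here is a self-contained SQLite sketch of that idea (the thread itself concerns a PostgreSQL plpgsql trigger; the table and column names below are invented, not from the original post):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE doc (
        id       INTEGER PRIMARY KEY,
        body     TEXT,
        modified TEXT
    );
    -- Stamp new rows.
    CREATE TRIGGER doc_touch_insert AFTER INSERT ON doc
    BEGIN
        UPDATE doc SET modified = datetime('now') WHERE id = NEW.id;
    END;
    -- Without this second trigger, edits would keep a stale timestamp.
    CREATE TRIGGER doc_touch_update AFTER UPDATE OF body ON doc
    BEGIN
        UPDATE doc SET modified = datetime('now') WHERE id = NEW.id;
    END;
""")
conn.execute("INSERT INTO doc (id, body) VALUES (1, 'first draft')")
```

In PostgreSQL the equivalent is a single trigger declared `BEFORE INSERT OR UPDATE ... FOR EACH ROW` whose plpgsql function sets `NEW.modified := now()`.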
On 02/18/2014 02:42 PM, Samuel Gilbert wrote:
> On 2014-02-18 14:25:59 Adrian Klaver wrote:
> > On 02/18/2014 02:10 PM, Samuel Gilbert wrote:
> > > I have a data warehousing DB with 2 fairly big tables: one contains about
> > > 200 million rows and the other one contains about 4 billion rows. Some
> > > queries are now taking way too long to run (> 13 hours).
Samuel Gilbert writes:
> All of this was done on PostgreSQL 9.2.0 64-bit compiled from the official
> source. Significant changes in postgresql.conf:
Why in the world are you using 9.2.0? You're missing a year and a half
worth of bug fixes, some of them quite serious.
> INSERT ... RETURNING
On 2014-02-18 14:25:59 Adrian Klaver wrote:
> On 02/18/2014 02:10 PM, Samuel Gilbert wrote:
> > I have a data warehousing DB with 2 fairly big tables: one contains about
> > 200 million rows and the other one contains about 4 billion rows. Some
> > queries are now taking way too long to run (> 13 hours).
Tom,
I will not claim I've totally observed all the fallout of attempting this.
You may be correct about subsequent local0 output being bollixed but I
certainly have seen some continued output to local0 after the trigger.
I am not committed to this method. It was primarily an experiment for proo
On 02/18/2014 02:10 PM, Samuel Gilbert wrote:
> I have a data warehousing DB with 2 fairly big tables: one contains about 200
> million rows and the other one contains about 4 billion rows. Some queries
> are now taking way too long to run (> 13 hours). I need to get these queries
> to run in an hour or so.
I have a data warehousing DB with 2 fairly big tables: one contains about 200
million rows and the other one contains about 4 billion rows. Some queries
are now taking way too long to run (> 13 hours). I need to get these queries
to run in an hour or so. The slowdown was gradual, but I eventually h
"Day, David" writes:
> Should I be able to run two syslog facilities simultaneously ( postgres
> local0, and a trigger function to local3 ) successfully ?
Probably not. libc's support for writing to syslog is not re-entrant.
> I have created an insert trigger on one of my datatables using p
Hi,
Should I be able to run two syslog facilities simultaneously ( postgres local0,
and a trigger function to local3 ) successfully ?
I have PostgreSQL 9.3.1 running on FreeBSD and sending errors to
"syslog_facility = local0".
That works fine.
I have created an insert trigger on one o
Thanks, Magnus. I checked that too before coming here. I'm really looking
for an explanation from someone who may have more information, and
potentially another download location for pg_bulkload 3.1.5+.
http://ftp.postgresql.org/pub/projects/pgFoundry/pgbulkload/pg_bulkload-3.1/
only goes to 3.1.4.
On Tue, Feb 18, 2014 at 6:07 PM, Purdon wrote:
> It seems that pgfoundry.org is down? Is this the case for everyone?
>
According to http://www.downforeveryoneorjustme.com/pgfoundry.org it is,
yeah.
--
Magnus Hagander
Me: http://www.hagander.net/
Work: http://www.redpill-linpro.com/
It seems that pgfoundry.org is down? Is this the case for everyone?
If so, is there any other location to get pg_bulkload 3.1.5+? (It is not on
the FTP mirror of pgFoundry.)
Thanks,
*Kyle W. Purdon*
Research Assistant | Center for Remote Sensing of Ice Sheets (CReSIS)
https://www.cresis.ku.edu/~kpu
On Mon, Feb 17, 2014 at 8:45 AM, Herouth Maoz wrote:
> I have a production system using Postgresql 9.1.2.
>
> The system basically receives messages, puts them in a queue, and then
> several parallel modules, each in its own thread, read from that queue, and
> perform two inserts, then release th
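The pipeline described above (a receiver fills a queue; parallel worker threads each pull a message and perform the inserts) can be sketched with standard threading primitives. A hypothetical illustration only: the worker count, names, and the stand-in for the two INSERTs are all made up, not taken from the original post.

```python
import queue
import threading

NUM_WORKERS = 4
inbox = queue.Queue()            # the queue the receiver fills
results = []                     # stand-in for the database
results_lock = threading.Lock()

def store_message(msg):
    # Placeholder for the two INSERTs; in the real system these would
    # run in one transaction, after which the connection is released.
    return [("insert_one", msg), ("insert_two", msg)]

def worker():
    while True:
        msg = inbox.get()
        if msg is None:          # sentinel: tell this worker to exit
            return
        rows = store_message(msg)
        with results_lock:
            results.extend(rows)

threads = [threading.Thread(target=worker) for _ in range(NUM_WORKERS)]
for t in threads:
    t.start()
for i in range(100):             # the receiver enqueues 100 messages
    inbox.put(i)
for _ in threads:                # one shutdown sentinel per worker
    inbox.put(None)
for t in threads:
    t.join()
```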
On Fri, Feb 14, 2014 at 7:35 PM, Behrang Saeedzadeh wrote:
> Hi,
>
> I just stumbled upon this article from 2012 [1], according to which
> (emphasis mine):
>
> Window functions offer yet another way to implement pagination in SQL. This
> is a flexible, and above all, standards-compliant method. Ho
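The window-function pagination the quoted article refers to can be sketched like this. SQLite is used here only to keep the demo self-contained, and the table is invented; the same `ROW_NUMBER()` query runs unchanged on PostgreSQL.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE post (id INTEGER PRIMARY KEY, title TEXT)")
conn.executemany("INSERT INTO post (id, title) VALUES (?, ?)",
                 [(i, f"post {i}") for i in range(1, 26)])

PAGE_SIZE = 10

def page(n):
    """Fetch 1-based page `n`, ordered by id, via a window function."""
    return conn.execute("""
        SELECT id, title FROM (
            SELECT id, title,
                   ROW_NUMBER() OVER (ORDER BY id) AS rn
            FROM post
        )
        WHERE rn > ? AND rn <= ?
    """, ((n - 1) * PAGE_SIZE, n * PAGE_SIZE)).fetchall()
```

Like OFFSET, this still reads past the skipped rows, so for deep pages keyset pagination (`WHERE id > last_seen ORDER BY id LIMIT n`) is generally cheaper.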
Is there a more appropriate place to ask this question? Or was my question
unclear?
I dug into some data, and it seems that whenever messages come in at a rate of
75,000 per hour, they start picking up delays of up to 10 minutes. If I go up
to 100,000, delays go up to about 20 minutes. And for 300,000 i
2014-02-18 11:53 GMT+01:00 Dmitriy Igrishin :
>
>
> 2014-02-18 13:44 GMT+04:00 邓尧 :
>
>> When single row mode is enabled, after retrieving part of the result set,
>> I'm no longer interested in the rest of it (due to error handling or other
>> reasons). How can I discard the result set without rep
2014-02-18 13:44 GMT+04:00 邓尧 :
> When single row mode is enabled, after retrieving part of the result set,
> I'm no longer interested in the rest of it (due to error handling or other
> reasons). How can I discard the result set without repeatedly calling
> PQgetResult() in such a situation?
> The
When single row mode is enabled, after retrieving part of the result set,
I'm no longer interested in the rest of it (due to error handling or other
reasons). How can I discard the result set without repeatedly calling
PQgetResult() in such a situation?
The result set may be quite large and it's ine
On Monday 17 February 2014 21:14:35 Tom Lane wrote:
> Kevin Grittner writes:
> > Perhaps we should arrange for a DROP DATABASE command to somehow
> > signal all backends to close files from that backend?
>
> See commit ff3f9c8de, which was back-patched into 9.1.x as of 9.1.7.
>
> Unfortunately,