On Tue, Mar 11, 2014 at 12:04 PM, Anand Kumar, Karthik <
karthik.anandku...@classmates.com> wrote:
> Hi all,
>
> We're running postgres 9.3.2, server configuration below.
>
> Seemingly randomly, we will see the number of active queries in postgres
> go up until we hit max_connections. The DB w
On 3/9/2014 6:52 PM, Aggarwal, Ajay wrote:
Our replication timeout is default 60 seconds. If we increase the
replication time to say 180 seconds, we see better results but backups
still fail occasionally.
so increase it to 300 seconds, or whatever. That's an upper limit; it
needs to be big e
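For reference, on 9.3 the timeout being discussed here is the wal_sender_timeout
GUC (sighup context, so no restart is needed). A minimal sketch, assuming you can
edit postgresql.conf on the primary:

SHOW wal_sender_timeout;   -- default is 60s
-- edit postgresql.conf, e.g. wal_sender_timeout = '300s', then:
SELECT pg_reload_conf();
SHOW wal_sender_timeout;   -- confirm the new value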
If I don't include WAL files as part of my backup, I do not run into this
issue. But a backup without WAL files is not what I want.
From: Aggarwal, Ajay
Sent: Monday, March 10, 2014 9:46 PM
To: Haribabu Kommi
Cc: pgsql-general@postgresql.org
Subject: RE: [G
I have already tried experimenting with Linux dirty_ratio etc. You can only
fine-tune up to a point. The backup process still fills up the buffer cache
very quickly. Yes, my database is about 5-6 GB in size and will grow bigger
over time.
I wish there was a way to slow down pg_basebackup or fo
Hi all,
We're running postgres 9.3.2, server configuration below.
Seemingly randomly, we will see the number of active queries in postgres go up
until we hit max_connections. The DB will recover after a few minutes.
We had the issue a couple of times in Feb 2014. We then upgraded the postgres
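A quick way to see how close the server is getting to the limit while a spike is
in progress (a sketch against 9.3's pg_stat_activity, run from a reserved
superuser connection):

SELECT state, count(*) FROM pg_stat_activity GROUP BY state ORDER BY count(*) DESC;
SHOW max_connections;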
On Tue, Mar 11, 2014 at 7:07 AM, Aggarwal, Ajay wrote:
> Thanks Hari Babu.
>
> I think what is happening is that my dirty cache builds up quickly for the
> volume where I am backing up. This triggers a flush of these dirty pages
> to the disk. While this flush is going on, pg_basebackup tries to
Yes. It is the "writer process". It is still the background writer
process; it has been displayed as "writer process" since PostgreSQL 8.0.
Venkata Balaji N
Sr. Database Administrator
Fujitsu Australia
On Tue, Mar 11, 2014 at 10:32 AM, Matthew Chambers
wrote:
>
> This is postgres 9.3.2.
>
> This i
This is postgres 9.3.2.
This is what the log shows.
Mar 11 08:16:29 jupiter521 postgres[2026]: [8-1] 2014-03-11 08:16:29
NZDT LOG: received SIGHUP, reloading configuration files
Mar 11 08:16:29 jupiter521 postgres[2026]: [9-1] 2014-03-11 08:16:29
NZDT LOG: parameter "bgwriter_lru_maxpages" ch
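To double-check what the reload actually applied, one could query pg_settings (a
minimal sketch):

SELECT name, setting, source
FROM pg_settings
WHERE name = 'bgwriter_lru_maxpages';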
On Tue, Mar 11, 2014 at 8:30 AM, Matthew Chambers wrote:
> Hi, just wondering if this is normal, DB is operating just fine.
>
> I upped bgwriter_lru_maxpages to 200 and issued a reload. Normally, I'd
> see the bgwriter constantly churning as one of my main I/O using processes,
> but now I have:
>
On Mon, Mar 10, 2014 at 4:16 PM, Jeff Janes wrote:
> ... Or maybe it is mangling the "ROLLBACK;" into some form
> the database doesn't recognize. Look in the postgres log files to see what
> the events look like from PostgreSQL's perspective.
Well that's the clue I needed. I was misinterpreting
Hi, just wondering if this is normal; the DB is operating just fine.
I upped bgwriter_lru_maxpages to 200 and issued a reload. Normally, I'd
see the bgwriter constantly churning as one of my main I/O-using
processes, but now I have:
postgres: wal writer process
postgres: checkpointer process
Th
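Whether the background writer is still doing work can be seen from its cumulative
statistics rather than from the process titles; a hedged sketch using 9.3's
pg_stat_bgwriter view:

SELECT buffers_clean, maxwritten_clean, buffers_checkpoint, buffers_backend
FROM pg_stat_bgwriter;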
On Mon, Mar 10, 2014 at 1:41 PM, Brian Crowell wrote:
> I feel dumb asking this question, but I can't seem to find the answer
> online.
>
> I'm running serializable transactions, and so naturally, they will
> sometimes fail with the error "could not serialize access due to
> concurrent update."
>
I feel dumb asking this question, but I can't seem to find the answer online.
I'm running serializable transactions, and so naturally, they will
sometimes fail with the error "could not serialize access due to
concurrent update."
But then I try to issue a ROLLBACK so I can continue using the
conn
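For reference, the usual pattern: once the serialization failure is raised the
transaction is aborted, so a plain ROLLBACK (nothing else) clears it and the work
can be retried on the same connection. A sketch with a made-up table name:

BEGIN ISOLATION LEVEL SERIALIZABLE;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
-- ERROR: could not serialize access due to concurrent update
ROLLBACK;   -- the session is usable again; retry the transaction from BEGIN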
Thanks Hari Babu.
I think what is happening is that my dirty cache builds up quickly for the
volume where I am backing up. This triggers a flush of these dirty pages to
the disk. While this flush is going on, pg_basebackup tries to do fsync() on a
received WAL file and gets blocked.
While in
matshyeq wrote:
> The only solution is cursor-based, which I find an awkward low-level hack
> compared to the elegant option supported by the native library.
That's not the only solution; even with the current DBD::Pg
you could do:
$dbh->do("COPY (sql-query) TO STDOUT");
my $data;
while ($dbh->pg_getcopydata($data) >= 0) {
    # process one COPY line in $data
}
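And for completeness, the cursor-based approach dismissed above as a low-level
hack looks roughly like this in plain SQL (table and cursor names are made up):

BEGIN;
DECLARE big_cur CURSOR FOR SELECT * FROM big_table;
FETCH 10000 FROM big_cur;   -- repeat until it returns no rows
CLOSE big_cur;
COMMIT;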
Given a view, I need to extract the tables and the join columns (the ON
conditions). I need to do this analysis because, from the view (together with
its join conditions and WHERE clause), I can say whether there is a foreign key.
Example:
CREATE OR REPLACE VIEW x_customer AS
SELECT a.asset_id, a.client_id
FROM asset_d ad, asset
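For the base tables alone, the information_schema already tracks what a view
depends on; a sketch using the view name from the example above:

SELECT table_schema, table_name
FROM information_schema.view_table_usage
WHERE view_name = 'x_customer';

The join (ON) and WHERE conditions themselves still have to be parsed out of the
view definition, e.g. from pg_get_viewdef('x_customer'::regclass, true).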
On Mon, Mar 10, 2014 at 10:16 AM, Tim Kane wrote:
> But what I really wanted to do, is unnest multiple sets of array values as
> returned from a table/query..
>
Craig Ringer posted an interesting answer to a somewhat related question a
few months ago on Stack Overflow: http://stackoverflow.com/a
I think I may have misunderstood the use case of this..
I can do the following:
select * from unnest(array[1,2,3], array[1,3,4]);
 unnest | unnest
--------+--------
      1 |      1
      2 |      3
      3 |      4
(3 rows)
But what I really wanted to do, is unnest multiple sets of array val
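What that last sentence is after can be written in 9.4 by putting the
multi-argument unnest in the FROM clause next to the table that supplies the
arrays (a sketch; the table and column names are invented):

SELECT t.id, u.a, u.b
FROM some_table t,
     unnest(t.array_col1, t.array_col2) AS u(a, b);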
On Sun, Mar 9, 2014 at 6:43 AM, matshyeq wrote:
> Hello,
>
> I've found an issue when I tried to implement fetching rows from a big table
> (2 million rows) in my app.
> Basically I don't find an elegant and easy way (other than always using
> cursors) to limit the number of rows returned.
> This causes my
On 10 March 2014 15:32, hubert depesz lubaczewski wrote:
> On Thu, Mar 06, 2014 at 06:03:54PM +0100, hubert depesz lubaczewski wrote:
>> On Thu, Mar 06, 2014 at 12:02:50PM -0500, Tom Lane wrote:
>> > hubert depesz lubaczewski writes:
>> > > I didn't have a chance to do it. Can try if there is a w
On Thu, Mar 06, 2014 at 06:03:54PM +0100, hubert depesz lubaczewski wrote:
> On Thu, Mar 06, 2014 at 12:02:50PM -0500, Tom Lane wrote:
> > hubert depesz lubaczewski writes:
> > > I didn't have a chance to do it. Can try if there is a way to get trace
> > > *without* making core (sorry, my c/gdb kn
Hmm. So it is.
My bad, thanks Tom.
I hadn’t noticed the documentation where it clearly says "This is only
allowed in the FROM clause"
xml_test=# select unnest(*) from (select array[1,2],array[1,2,3]) foo;
ERROR: function unnest() does not exist
And, yes.. I was expecting the function signatu
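For reference, a form of that failing query that does run once the
multi-argument unnest is moved into the FROM clause, as the documentation
requires (the column aliases are made up):

SELECT u.a, u.b
FROM (SELECT ARRAY[1,2] AS x, ARRAY[1,2,3] AS y) foo,
     unnest(foo.x, foo.y) AS u(a, b);
-- the shorter array is padded with NULLs: (1,1), (2,2), (NULL,3)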
matshyeq writes:
> If 'SingleRowMode' goes row-by-row then again it's not a solution,
> especially given that this particular issue applies to rather large row
> sets.
Perhaps you should actually experiment with that solution instead of
rejecting it out of hand. Or at least RTFM about it.
It do
On 03/10/2014 04:51 AM, matshyeq wrote:
Albe Laurenz wrote:
I believe the stackoverflow question referred to
(http://stackoverflow.com/questions/21960121/perl-script-fails-when-selecting-data-from-big-postgresql-table)
explains the issue well.
You can retrieve the full result set,
not
Tim Kane writes:
> I decided to have a play with postgresql-9.4devel as I wanted to explore the
> functionality provided by
> unnest (anyarray, anyarray [, …])
> I’ve taken the nightly snapshot, compiled, installed.. All good. (Obtained
> from http://ftp.postgresql.org/pub/snapshot/dev/ )
> H
>Albe Laurenz wrote:
I believe the stackoverflow question referred to (
http://stackoverflow.com/questions/21960121/perl-script-fails-when-selecting-data-from-big-postgresql-table)
explains the issue well.
> You can retrieve the full result set,
not an option because of client memory limita
Fully agree with Laurenz.
LIMIT in some (limited!) cases could be seen as a workaround but it's far
from being elegant (what if your end user types the query?)
If 'SingleRowMode' goes row-by-row then again it's not a solution,
especially given that this particular issue applies to rather large row
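The LIMIT workaround mentioned above would look roughly like this (a sketch;
big_table, its id column, and the :last_seen_id psql-style placeholder are all
invented), preferably keyset-style so later pages don't get slower:

SELECT * FROM big_table ORDER BY id LIMIT 10000;
-- remember the last id returned, then fetch the next page keyset-style:
SELECT * FROM big_table WHERE id > :last_seen_id ORDER BY id LIMIT 10000;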
Hi all,
I decided to have a play with postgresql-9.4devel as I wanted to explore the
functionality provided by
unnest (anyarray, anyarray [, …])
I’ve taken the nightly snapshot, compiled, installed.. All good. (Obtained
from http://ftp.postgresql.org/pub/snapshot/dev/ )
However it seems the ex
On Mon, Mar 10, 2014 at 11:09 PM, Anh Pham wrote:
> Hi everyone,
> I have created a new data type using SQL in a database:
>
> CREATE TYPE type1 AS (
> score integer,
> time integer[]
> );
>
> I also created a server extension module (C code), and I was able to
> retrieve a Datum value of type1
Hi everyone,
I have created a new data type using SQL in a database:
CREATE TYPE type1 AS (
score integer,
time integer[]
);
I also created a server extension module (C code), and I was able to
retrieve a Datum value of type1 from the database (using SPI_ functions)
My question is: how can
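At the SQL level the composite type above behaves like a one-row record, which is
sometimes enough for testing; a sketch (the C-side question of pulling fields out
of the Datum is a separate matter, typically handled via GetAttributeByName or
heap_deform_tuple on the tuple):

SELECT (v).score, (v)."time"
FROM (SELECT ROW(42, ARRAY[10, 20])::type1 AS v) AS s;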
On Mon, Mar 10, 2014 at 06:58:26AM +, Albe Laurenz wrote:
> Daniel Verite wrote:
> > matshyeq wrote:
>
> [ runs out of memory on the client because all results from a large query are
> retrieved at once ]
>
> >> "Unfortunately, this is a limitation in the underlying driver (libpq)
> >> rath
Thank you for your suggestion.
I actually took a look at the SPI_ functions before. However, I believe using
them would add quite a lot of overhead.
I am trying to modify the sequential scan node so that each time the scan
returns a tuple,
we'll take some information from this tuple, and finally issue anoth
On 3/9/2014 6:43 AM, matshyeq wrote:
Hello,
I've found an issue when I tried to implement fetching rows from a big
table (2 million rows) in my app.
Basically I don't find an elegant and easy way (other than always using
cursors) to limit the number of rows returned.
This causes my application to break d
matshyeq wrote:
> PostgreSQL has for a good while been perceived as one of the best (or just
> simply the best!?)
> available open source DB solutions, so I'm really surprised this functionality
> is not yet supported...
You can retrieve the full result set,
you can retrieve it row by row,
you ca