Hi,
Up until parallel execution, when a node in explain analyze showed
"loops=x" with x greater than 1, it meant that the "actual time" had to be
multiplied by loops to get the real time spent in that node.
For example, check step 13 in https://explain.depesz.com/s/gNBd
It shows time of 3ms, but loops
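To illustrate with made-up numbers (not the values from the linked plan): a node reporting
actual time=3.000 with loops=1000 really accounts for roughly 3 ms * 1000 = 3 seconds of runtime.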
On Tue, Sep 12, 2017 at 02:56:26PM +0200, Chris Travers wrote:
> Normally I would not email the general list over this but it has been over
> a day and the google group for pgxn seems low enough traffic I figure I
> would mention it here.
>
> manager.pgxn.org is giving internal server errors (whic
On Wed, Feb 15, 2017 at 02:58:08PM +, Albe Laurenz wrote:
> Maybe the following answer can help:
> http://stackoverflow.com/a/39204667/6464308
>
> I don't really know how stable that (undocumented) behaviour will be, though.
Yeah, I'd rather not depend on things like xids for production
(some
On Wed, Feb 15, 2017 at 06:44:09AM -0800, Adrian Klaver wrote:
> On 02/15/2017 06:27 AM, hubert depesz lubaczewski wrote:
> >On Wed, Feb 15, 2017 at 06:24:14AM -0800, Adrian Klaver wrote:
> >>On 02/15/2017 06:05 AM, hubert depesz lubaczewski wrote:
> >>>Hi,
> &g
On Wed, Feb 15, 2017 at 06:24:14AM -0800, Adrian Klaver wrote:
> On 02/15/2017 06:05 AM, hubert depesz lubaczewski wrote:
> >Hi,
> >I have a function, in PostgreSQL 9.6, which does:
> >
> >INSERT INTO table () values (...)
> >ON CONFLICT DO UPDATE ...;
> >
&
Hi,
I have a function, in PostgreSQL 9.6, which does:
INSERT INTO table () values (...)
ON CONFLICT DO UPDATE ...;
The thing is that the function should return information on whether the row
was modified or created - and currently it seems that this is not
available. Or am I missing something?
Bes
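For illustration, a sketch of the undocumented xmax trick discussed further up this listing
(in the reply citing the StackOverflow answer); the table and column names are hypothetical:
INSERT INTO t (k, v)
VALUES ('some-key', 'some-value')
ON CONFLICT (k) DO UPDATE SET v = EXCLUDED.v
RETURNING (xmax = 0) AS was_inserted;  -- true: row was freshly inserted; false: existing row was updated
As noted in the follow-ups, relying on xmax like this is undocumented behaviour, so treat it as a sketch only.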
On Thu, Nov 03, 2016 at 11:28:57AM +0100, Tom DalPozzo wrote:
> What am I missing?
David already explained, but you might also want to read:
https://www.depesz.com/2011/07/14/write-ahead-log-understanding-postgresql-conf-checkpoint_segments-checkpoint_timeout-checkpoint_warning/
depesz
--
The b
On Thu, Oct 27, 2016 at 04:43:55PM +0200, Marcin Giedz wrote:
> Hello all
>
> I'm wondering if there is any natural implementation of heartbeats in
> libpq library? We've been facing specially in firewall env
> occasionally session drops between client and server. Extending
> session timeout dire
On Wed, Oct 26, 2016 at 10:42:29AM +0200, Frank Millman wrote:
> Hi all
>
> I am designing an inventory application, and I want to ensure that the stock
> level of any item cannot go negative.
>
> Instead of maintaining a running balance per item, I store the original
> quantities received in o
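One possible sketch (not necessarily what was suggested in the thread; the items, receipts and
allocations tables are hypothetical, and the per-item row lock serializes concurrent changes):
BEGIN;
-- take a per-item lock so concurrent allocations for the same item queue up
SELECT id FROM items WHERE id = 42 FOR UPDATE;
INSERT INTO allocations (item_id, qty) VALUES (42, 5);
-- re-check the balance; the application rolls back if it went negative, otherwise commits
SELECT (SELECT coalesce(sum(qty), 0) FROM receipts    WHERE item_id = 42)
     - (SELECT coalesce(sum(qty), 0) FROM allocations WHERE item_id = 42) AS remaining;
COMMIT;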
On Mon, Sep 26, 2016 at 08:22:11PM +0200, Alexander Farber wrote:
> ERROR: syntax error at or near "IF"
> LINE 11: IF LENGTH(TRIM(in_msg)) > 0 AND
> ^
Of course it doesn't like it, because SQL doesn't have an "if" command.
If you want to use such syntax, you have to use plp
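For example, the same check works fine inside a plpgsql DO block (a minimal sketch with a made-up in_msg value):
DO $$
DECLARE
    in_msg text := '  hello  ';
BEGIN
    IF LENGTH(TRIM(in_msg)) > 0 THEN
        RAISE NOTICE 'message is not empty';
    END IF;
END
$$;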
On Thu, Sep 22, 2016 at 02:23:20PM +0200, Sylvain Marechal wrote:
> is there a way to monitor the size of the pg_xlog directory in SQL? The
Assuming you have superuser privileges, this will most likely work:
select sum( (f).size ) from (select pg_stat_file('pg_xlog/' || pg_ls_dir) as f
from pg_ls_
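For reference, the whole query looks along these lines (a sketch; both functions require superuser on these versions):
SELECT sum( (f).size )
FROM (
    SELECT pg_stat_file('pg_xlog/' || pg_ls_dir) AS f
    FROM pg_ls_dir('pg_xlog')
) x;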
Hi,
So, we have this situation, where there is a cluster with 5 smallish
databases:
$ select oid, pg_database_size(oid) from pg_database;
  oid  | pg_database_size
-------+-------------------
     1 |           6752440
 12035 |           6760632
 16428 |       59779475640
 16427 |         294947000
1203
On Thu, Sep 08, 2016 at 03:19:59PM +, PICCI Guillermo SNOOP wrote:
> Hi,
> we are trying to install pgaudit in order to check its funcionality, and we'd
> like to know if there is any available rpm to do this.
as far as a quick Google search shows, pgaudit is a tool from 2ndQuadrant,
a
On Wed, Sep 07, 2016 at 03:05:38PM +0200, Alexander Farber wrote:
> ERROR: syntax error at or near "in_until"
> LINE 69: ... banned_until = CURRENT_TIMESTAMP + INTERVAL in_until,
>^
> Is there please a better way here?
Why don'
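A sketch of two ways around it, meant to live inside the function (the users table and the
in_until / in_uid parameters are assumptions; declaring the parameter as interval is the cleaner fix):
-- if in_until stays numeric (seconds), multiply instead of splicing it into an INTERVAL literal:
UPDATE users
   SET banned_until = CURRENT_TIMESTAMP + in_until * INTERVAL '1 second'
 WHERE id = in_uid;
-- or declare in_until as interval in the function signature and simply add it:
-- SET banned_until = CURRENT_TIMESTAMP + in_until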
On Mon, Aug 29, 2016 at 01:13:17PM -0300, Alvaro Herrera wrote:
> > > This happens on Pg 9.5. Are there any plans to make getting schema
> > > faster for such cases? Either by parallelization, or at least by getting
> > > schema for all tables "at once", and having pg_dump "sort it out",
> > > inst
Hi,
we have a rather uncommon case - a DB with ~ 50GB of data, but this is
spread across ~ 8 tables.
Running pg_dump -Fd -jxx dumps in parallel, but only the data, and MOST of
the time is spent on queries that run sequentially and which, as far as I can
tell, get the schema of tables and sequence values.
This
On Mon, Aug 15, 2016 at 06:27:06PM +0500, Михаил wrote:
> I need to escape double quotes only:
> test=# select regexp_replace('"""{Performer,"Boomwacker ""a""
> Recording""}"""', '([^"])"{2}([^"])', '\1\"\2', 'g');
> regexp_replace
> ---
On Tue, Jul 12, 2016 at 10:23:24AM +0200, Bjørn T Johansen wrote:
> I am trying to move a small system from Oracle to PostgreSQL and
> I have come upon a sql that runs really slow compared to on the Oracle
> database and I am not able to interpret why this is slow.
I loaded your explain analyze to
On Mon, May 30, 2016 at 11:05:17AM -0700, Jeff Janes wrote:
> So my theory is that you deleted a huge number of entries off from
> either end of the index, that transaction committed, and that commit
> became visible to all. Planning a mergejoin needs to dig through all
> those tuples to probe the
On Sat, May 28, 2016 at 02:15:07PM -0400, Tom Lane wrote:
> hubert depesz lubaczewski writes:
> > Does that help us in any way?
>
> Not terribly. That confirms that the processes are contending for a
> spinlock, but we can't tell which one. Can you collect a few st
On Sat, May 28, 2016 at 10:32:15AM -0700, Jeff Janes wrote:
> > any clues on where to start diagnosing it?
>
> I'd start by using strace (with -y -ttt -T) on one of the processes
> and see what it is doing. A lot of IO, and one what file? A lot of
> semop's?
So, I did:
sudo strace -o bad.log -
On Sat, May 28, 2016 at 08:04:43AM +0200, Pavel Stehule wrote:
> > > you should to install debug info - or compile with dubug symbols
> > Installed debug info, and the problem stopped.
OK. Got the problem back.
Ps looked like this:
USER      PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME
On Sat, May 28, 2016 at 07:46:52AM +0200, Pavel Stehule wrote:
> you should to install debug info - or compile with dubug symbols
Installed debug info, and the problem stopped.
Don't think it's related - it could be just timing. I'll report back
if/when the problem re-appears.
Best regards,
On Sat, May 28, 2016 at 07:25:18AM +0200, Pavel Stehule wrote:
> It is looking like spinlock issue.
> try to look there by "perf top"
First results look like:
Samples: 64K of event 'cpu-clock', Event count (approx.): 2394094576
hi,
we have the following situation:
pg 9.3.11 on ubuntu.
we have master and slave.
the db is large-ish, but we're removing *most* of its data from all
across the tables, and lots of tables too.
while we're doing it, sometimes we get LOTS of processes, but only on the
slave, never on the master, that spend l
On Mon, Apr 25, 2016 at 08:55:21AM -0500, Cal Heldenbrand wrote:
> The only outside tool it requires is lsof to determine the hostname of the
> remote socket. Otherwise it uses plain stuff like awk / sec and bash tools.
Why would you need lsof to get the hostname for a remote connection, when you
can u
On Wed, Mar 23, 2016 at 03:27:45AM -0700, zh1029 wrote:
> Hi,
> we are using PostgreSQL 9.3.6 version and observe data file size is not
> decreased after we deleted records from the table. It looks quite abnormal.
> Is it as PostreSQL designed?
>
> DBTestPostgres=# select count (*) from test_da
On Tue, Feb 16, 2016 at 09:41:18AM -0600, Suresh Raja wrote:
> I use the function name in the body of pgsql code of the same function. Is
> the function name set to any variable that i can easily reference. Now I'm
> hard coding the function name in the code.
> In the above Raise info i use selec
On Tue, Oct 20, 2015 at 07:00:04AM -0700, pinker wrote:
> Maybe somebody have an idea how to escape text string for use in hstore
> column? I have tried $$ and quote_literal in audit trigger function, but
> still db won't let me pass values with // or ' to the hstore...INSERT FROM
> trigger functio
On Mon, Sep 28, 2015 at 12:53:37PM -0600, Scott Marlowe wrote:
> The issue was reported as omnipitr-cleanup is SLOOOW, so we run
> purgewal by hand, because the cleanup is so slow it can't keep up. But
> running it by hand is not supported.
>
> We fixed the problem though, we wrote out own script
On Mon, Sep 28, 2015 at 08:54:54AM -0600, Scott Marlowe wrote:
> Look up WAL-E. It's works really well. We tried using OmniPITR and
> it's buggy and doesn't seem to get fixed very quickly (if at all).
Any examples? I'm the developer of OmniPITR, and as far as I know there are
(currently) no unfixed bu
On Mon, Sep 21, 2015 at 11:55:23AM +0200, Alex Magnum wrote:
> Hello,
>
> I have a float array holding geo location information.
>
> geoloc
> ---
> {5.3443133704554,100.29457569122}
> {5.3885574294704,100.29601335526}
> {3.1654978750403,101.6091585
On Wed, Sep 09, 2015 at 07:51:48PM -0400, Melvin Davidson wrote:
> *O/S = Windows 10, PostgreSQL 9.2.10, compiled by Visual C++ build 1600,
> 32-bit*
> http://www.postgresql.org/docs/9.1/interactive/sql-createtable.html
>
> and like_option is:
> { INCLUDING | EXCLUDING } { DEFAULTS | CONSTRAINTS | I
On Tue, May 12, 2015 at 04:07:52PM +0800, Anil Menon wrote:
> Thank you very much - looks like I will have to prefix all cols.
You should anyway.
Queries with unaliased columns are impossible to analyze without
in-depth knowledge of the database.
Consider:
select c1, c2, c3, c4, c5
from t1 j
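A fuller illustration of the point (the tables and columns are hypothetical):
-- without aliases nobody can tell which table each column comes from:
SELECT c1, c2, c3, c4, c5
FROM t1
JOIN t2 ON t1.id = t2.t1_id;

-- with aliases (or table prefixes) there is no guessing:
SELECT a.c1, a.c2, b.c3, b.c4, b.c5
FROM t1 AS a
JOIN t2 AS b ON b.t1_id = a.id;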
On Tue, May 12, 2015 at 12:26:15AM +0800, Anil Menon wrote:
> manualscan=> select count(*) From msgtxt where msgid in (
> manualscan(> select msgid From courier where org_id=3
> manualscan(> )
> manualscan-> ;
> count
> ---
> 10225
> (1 row)
> manualscan=> select count(*) Fro
On Mon, Feb 02, 2015 at 05:16:40PM +1100, Sam Saffron wrote:
> Even this is fast, and logically equiv as id is primary key unique
> select * from topic
> where id = 1000
> union all
> select * from (
> select * from topics
> where id <> 1000
> order by bumped_at desc
> limit 30
> ) as x
> l
On Wed, Sep 24, 2014 at 2:45 PM, Chris Bandy wrote:
> I would like to create a new type for version strings that sorts
> numerically. The composite type below was quick to write and does not
> require superuser privileges. However, it doesn't respond to type casts the
> way I'd like.
>
> Is there
On Fri, Sep 19, 2014 at 8:35 AM, Philipp Kraus
wrote:
> Is there a buildin way to clone the "database_source" with all structure
> and data into a new database "database1..150" ?
>
assuming you're using a bash shell, this should work:
for i in {1..150}; do createdb -T database_source database$i; done
On Fri, Aug 22, 2014 at 6:55 PM, Jeff Janes wrote:
> What transaction isolation level is being used?
>
Sorry for the late reply - the user was away for parts of Friday, I was away over the
weekend, and just now got an answer - it's read committed.
depesz
On Fri, Aug 22, 2014 at 9:21 PM, Alvaro Herrera
wrote:
> FWIW this problem was reported also by Andrew Sackville-West at
>
> http://www.postgresql.org/message-id/20140731233051.GN17765@andrew-ThinkPad-X230
> I strongly suspect now that the problem is related to the locking of
> updated versions a
On Fri, Aug 22, 2014 at 8:33 PM, Adrian Klaver
wrote:
> Not sure, just the combination of parallel operations and remote
> connections seemed to be an avenue to explore. Given that everything is
> local, turns out it was dead end.
> Looking at the pastebin log again, am I reading it right that th
On Fri, Aug 22, 2014 at 7:54 PM, Adrian Klaver
wrote:
> Which in itself might be a clue.
>
> Is all the code/data running on/coming from that machine or is some coming
> in remotely?
>
> Where network latency might be an issue?
>
All locally, but hey - how could network latency be a problem? Tra
On Fri, Aug 22, 2014 at 7:49 PM, Tom Lane wrote:
> You have not shown us the full sequence of events leading up to the
> deadlock failure, but I hypothesize that there were yet other transactions
> that updated that same row in the very recent past. That might allow
> there to be more than one t
On Fri, Aug 22, 2014 at 7:43 PM, Adrian Klaver
wrote:
> Which begs the question, what is different about that machine?
>
No idea. I can pass along any questions you might have, but I'm ~ 6000 miles
away from any machine running this code.
depesz
On Fri, Aug 22, 2014 at 7:20 PM, Adrian Klaver
wrote:
> So why are different processes running the exact same queries coming in on
> different ports?
>
the importer is parallelized, and sometimes two processes handle batches of
data that happen to update the same "top level row".
but the deadlo
On Fri, Aug 22, 2014 at 7:29 PM, John R Pierce wrote:
> On 8/22/2014 9:29 AM, hubert depesz lubaczewski wrote:
>
>> select * from table where pkey = limit 1 for update;
>>
> why is there a limit 1 in there?pkey=somevalue should only return a
> single row. if it
On Fri, Aug 22, 2014 at 6:45 PM, Adrian Klaver
wrote:
> So process 66017 and 66014 are blocking each because they are running the
> exact same queries. The interesting part is the process with the lower pid
> is starting later then the none with the higher pid.
>
Locking is obvious. But why dead
I have a developer with pg 9.3.5 who is reporting something really strange.
He runs importer, which does, in single transaction:
begin;
select * from table where pkey = limit 1 for update;
update table set ... where pkey = ;
commit;
and two backends running the same transaction deadlock.
I che
Hi,
we had a weird situation today.
Pg 9.1.13, on ubuntu, on AWS.
At ~ 7am one of the db servers stopped responding, and got restarted using the AWS
console.
After reboot everything seemed to work fine.
But later on I found that one of the segments was not archived.
Segment ...7 was written (mtime of f
On Thu, Jul 24, 2014 at 7:13 PM, Vasudevan, Ramya <
ramya.vasude...@classmates.com> wrote:
> >> You could compare list of tables before (_start) and after (_end) the
> ddl. Doing it in plpgsql will be tricky, but if you'd use some other
> language - like plperl - it's relatively simple:
> http://
On Thu, Jul 24, 2014 at 3:35 AM, wrote:
> I have a suggestion for a table checksumming facility within PostgreSQL.
> The applications are reasonably obvious - detecting changes to tables,
> validating data migrations, unit testing etc. A possible algorithm is as
> follows:
>
> 1. For each row of
On Thu, Jul 24, 2014 at 2:22 AM, Vasudevan, Ramya <
ramya.vasude...@classmates.com> wrote:
> CREATE TABLE log_ddl_info(ddl_tag text, ddl_event text, ddl_time
> timestamp);
>
> CREATE OR REPLACE FUNCTION log_ddl_execution()
>
> RETURNS event_trigger AS $$
>
> DECLARE
>
> insertquery TEXT;
>
> BEGI
I don't know what collect actually does, but just guessing, I would say
that you're looking for string_agg()
depesz
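For example, something along these lines (a sketch, with a hypothetical hr.customers table):
SELECT string_agg(cust_name, ', ' ORDER BY cust_name) AS employees
FROM hr.customers;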
On Wed, Jul 23, 2014 at 6:12 PM, Ramesh T
wrote:
> Hi,
>
> SELECT hr.tab_to_largestring(CAST(COLLECT(cust_name) AS
> t_varchar2_tab)) AS employees
> FROM hr.custome
On Wed, Jul 9, 2014 at 3:28 PM, Ramesh T
wrote:
> Yes,not an error it is a warning and archive is working.
>
How can you say that the archive is working when the logs contain:
> 2014-07-09 18:53:33 IST LOG: archive command failed with exit code 1
> 2014-07-09 18:53:33 IST DETAIL: The fail
Hi,
First - do not take the messages off the list. Someone else might be able to
help you if you mail the list; I'm not always there, and others might
have better ideas.
On Wed, Jul 9, 2014 at 2:43 PM, Ramesh T
wrote:
> i rune problem is when i run the pg_stop_backup() its return error in
On Wed, Jul 9, 2014 at 12:45 PM, Ramesh T
wrote:
> HI,
> I have a problem when i take backup using pg_basebackup and
> pg_start_backup('base');
> following ,
>
What is the problem? You didn't say. There are just some outputs and config, but
you never mentioned what the problem itself is.
i did ran it in caesius database and not prod01 db that was a typo
>
> there is no long running transactions. i just ran this command select
> min(xact_start) from pg_stat_activity where xact_start is not null; to make
> sure
>
> Thanks
>
>
> On Tue, Jul 8, 2014 at 4:43 A
First question - are you sure you ran vacuum in the correct database? I.e.
in caesius?
Second - is there any long running transaction? select min(xact_start) from
pg_stat_activity where xact_start is not null; should tell you.
depesz
On Tue, Jul 8, 2014 at 12:44 PM, Prabhjot Sheena <
prabhjot.
How many rows do you plan on having in this table? Why this particular key
range?
depesz
On Fri, Jul 4, 2014 at 3:24 PM, Kynn Jones wrote:
> I'm looking for a way to implement pseudorandom primary keys in the range
> 10..99.
>
> The randomization scheme does not need to be cryptographi
On Tue, Jul 1, 2014 at 3:58 PM, Rich Shepard
wrote:
> I'm developing a new application and want to take advantage of postgres
> features such as triggers and stored procedures and put as much of the
> middleware 'business logic' into the database engine as is practical.
> Is it possible, or p
On Thu, Jun 26, 2014 at 10:03 PM, Anand Kumar, Karthik <
karthik.anandku...@classmates.com> wrote:
> We run postgres 9.3.3 on Centos 6.3, kernel 2.6.32-431.3.1. Every once in
> a while, we see postgres processes spinning on semop:
>
it might be a long shot, but when we had problems with lots of bac
On Wed, Jun 25, 2014 at 5:18 PM, James Le Cuirot
wrote:
> > Also - I have no idea what "peer authentication" has to do with Pg
> > gem - care to elaborate? The gem is for client, and authentication
> > happens in server, so ... ?
> Right but peer authentication is all to do with the operating sys
Perhaps you can explain what functionality you want to achieve, as
I, for one, don't understand. Do you want transactions? Or not?
Also - I have no idea what "peer authentication" has to do with Pg gem -
care to elaborate? The gem is for the client, and authentication happens in
the server, so ...
On Tue, Jun 10, 2014 at 8:13 PM, David Wall wrote:
> Is it safe to assume that my working PG 8.3 archive command on the master
> and recovery.conf (using contrib's pg_standby) on the standby will work the
> same under 9.3?
>
Yes, it will work just fine. Of course you can't load 9.3 xlogs into 8.
localedef.
depesz
On Sat, Jun 7, 2014 at 9:57 PM, Adrian Klaver
wrote:
> On 06/07/2014 12:43 PM, hubert depesz lubaczewski wrote:
>
>> Seems I now have the locale. Too bad I had to delete the "bad" databases
>> earlier.
>>
>
> So, just for completeness
Seems I now have the locale. Too bad I had to delete the "bad" databases
earlier.
Thanks Adrian,
depesz
On Sat, Jun 7, 2014 at 7:16 PM, Adrian Klaver
wrote:
> On 06/07/2014 08:17 AM, hubert depesz lubaczewski wrote:
>
>> localedef --no-archive, requires additional argum
And nothing changes.
depesz
On Sat, Jun 7, 2014 at 2:23 AM, Adrian Klaver
wrote:
> On 06/06/2014 04:36 PM, hubert depesz lubaczewski wrote:
>
>> locale-gen just regenerates the locale - which I have. It's just that
>> PostgreSQL doesn't see it. When I run locale-g
locale-gen just regenerates the locale - which I have. It's just that
PostgreSQL doesn't see it. When I run locale-gen, it just shows that all
locales (listed by locale -a) are "up to date".
depesz
On Fri, Jun 6, 2014 at 10:04 PM, Adrian Klaver
wrote:
> On 06/06/2014
Hi,
I'm running Pg 9.3.4 on Ubuntu Linux 12.04/Precise. Pg is installed from
PGDG repo (http://apt.postgresql.org/pub/repos/apt/).
It somehow got a database created in a locale that it can't now open:
$ psql
psql: FATAL: database locale is incompatible with operating system
DETAIL: The database was
OmniPITR (https://github.com/omniti-labs/omnipitr) has working backups off
a slave. Working as in - we have been using it in production since 8.3 at least.
depesz
On Thu, May 29, 2014 at 8:02 PM, Andres Freund
wrote:
> Hi,
>
> On 2014-05-16 12:49:25 +0530, chiru r wrote:
> > Yes, It is possible to ex
On Thu, Mar 06, 2014 at 06:03:54PM +0100, hubert depesz lubaczewski wrote:
> On Thu, Mar 06, 2014 at 12:02:50PM -0500, Tom Lane wrote:
> > hubert depesz lubaczewski writes:
> > > I didn't have a chance to do it. Can try if there is a way to get trace
> > > *witho
On Thu, Mar 06, 2014 at 12:02:50PM -0500, Tom Lane wrote:
> hubert depesz lubaczewski writes:
> > I didn't have a chance to do it. Can try if there is a way to get trace
> > *without* making core (sorry, my c/gdb knowledge is very, very limited).
>
> Sure, you
On Thu, Mar 06, 2014 at 11:56:06AM -0500, Tom Lane wrote:
> hubert depesz lubaczewski writes:
> > Thread 1 (LWP 21422):
> > #0 0x7ffa60da2dc7 in semop () from /lib/x86_64-linux-gnu/libc.so.6
> > No symbol table info available.
> > #1 0x005f65e8 in PGSem
Hi,
I have the following situation:
amazon, but not RDS - just plain EC2, with ubuntu system.
zone reclaiming is disabled (at least in my system, no idea about host).
and there is no "transparent huge page support".
today we had at least 3 cases where a bunch of backends (~40) got
"interlocked".
Some of
On Thu, Jan 30, 2014 at 12:52:35PM -0800, bricklen wrote:
> > Since Postgres does not consider a table as a dependency of a function if
> > that table is referenced in the function (probably a good reason), I often
> > find myself in a position of asking "is this table/sequence/index
> > referenced
On Tue, Nov 26, 2013 at 12:24:08PM -0500, David Rysdam wrote:
> I'm not really looking for information on how to speed this query
> up. I'm just trying to interpret the output enough to tell me which step
> is slow:
You might want to read this:
http://www.depesz.com/tag/unexplainable/
Best regard
On sob, lis 16, 2013 at 08:44:52 +0530, Jayadevan M wrote:
> I was trying different options of log_line_prefix. I am making chnages,
> doing a pg_ctl reload and checking the output in the log files. For some
> reason ,the changes seem to have no impact. What am I doing wrong? Here is
> the output f
On czw, paź 31, 2013 at 07:25:21 -0700, si24 wrote:
> Can some one please give me a bit more of a better explanation on how exactly
> the pgbouncer works as I am now lost.
>
> I'm not sure if it is pooling the connections cause surely if its not being
> used the connections should go down not up i
On pon, paź 21, 2013 at 08:38:52 -0400, Robert James wrote:
> I have a table of event_id, event_time. Many times, several events
> happen in a row. I'd like a query which replaces all of those events
> with a single record, showing the count.
>
> Eg: Take A,A,A,B,C,A,D,A,A,D,D,B,C,C and return:
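The usual window-function trick for this looks roughly like below (a sketch, assuming a table events(event_id, event_time)):
SELECT event_id, count(*) AS run_length
FROM (
    SELECT event_id,
           event_time,
           row_number() OVER (ORDER BY event_time)
         - row_number() OVER (PARTITION BY event_id ORDER BY event_time) AS grp
    FROM events
) runs
GROUP BY event_id, grp
ORDER BY min(event_time);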
On sob, paź 12, 2013 at 06:30:51 +0200, jane...@web.de wrote:
>What does it mean if the Error: 'argument exceeds the maximum length of
> 1024 bytes' is raising using psql.
>How can I increase this limit?
How did you achieve it? Can you show a screenshot of what you did, and how
the error wa
On nie, wrz 29, 2013 at 02:09:54 +0100, António M. Rodrigues wrote:
> The code is the following:
>
> -
> DO $$
> DECLARE
> i integer;
> BEGIN
> FOR i IN (select nn from numeros)
> LOOP
> EXECUTE
> 'create table contagio' || i || ' as
> SELECT
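A complete variant of the same idea, using format() to build the statements (a sketch; the
SELECT body is a placeholder, since the original was cut off here):
DO $$
DECLARE
    i integer;
BEGIN
    FOR i IN SELECT nn FROM numeros LOOP
        EXECUTE format(
            'CREATE TABLE contagio%s AS SELECT * FROM source_table WHERE group_id = %s',
            i, i
        );
    END LOOP;
END
$$;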
On Thu, Aug 08, 2013 at 12:01:17PM +1000, Victor Hooi wrote:
> I'm just wondering if this is still the case?
Yes. ORDER BY random() is slow and, most likely, will stay slow. I'm not sure if
there is any engine that could make it fast.
> I just ran those benchmarks on my system (Postgres 9.2.4), and using OR
On Fri, Jul 12, 2013 at 12:30:22PM +0530, Amit Langote wrote:
> >> Increasing the wal_keep_segments ?
> > I know that I can increase wal_keep_segments to "solve" it, but
> > shouldn't it be *impossible* to happen with synchronous replication?
> > After all - all commits should wait for slave to be
On Thu, Jul 11, 2013 at 11:29:24PM +0530, Raghavendra wrote:
> On Thu, Jul 11, 2013 at 11:18 PM, hubert depesz lubaczewski <
> dep...@depesz.com> wrote:
>
> > We are seeing situation like this:
> > 1. 9.2.4 database
> > 2. Master settings:
> >
We are seeing a situation like this:
1. 9.2.4 database
2. Master settings:
            name            |   setting
----------------------------+--------------
 fsync                      | on
 synchronize_seqscans       | on
 synchronous_commit         | remote_write
 synchronous_standby_names  | *
On Mon, Jul 08, 2013 at 09:09:26AM -0400, Robert James wrote:
> I have two relations, where each relation has two fields, one
> indicating a name and one indicating a position. That is, each
> relation defines a sequence.
>
> I need to determine their longest common subsequence. Yes, I can do
>
On Fri, Jul 05, 2013 at 08:35:22AM +0200, Luca Ferrari wrote:
> On Fri, Jul 5, 2013 at 5:09 AM, Jayadevan M
> wrote:
>
> >
> > So each student may get counted many times, someone with 99 will be counted
> > 10 times. Possible to do this with a fat query? The table will have many
> > thousands of
On Fri, Jun 28, 2013 at 02:21:10PM +0530, Atri Sharma wrote:
> How would this be helpful for general use cases? Querying on tids on a
> specific page doesn't seem too useful for any other case than the one
> you mentioned above, and IMHO it seems to be the job of vacuum.
> I may be missing somethin
Hi,
while working on removing bloat from some table, I had to use lots of
logic simply because there are no (indexable) inequality scans for
ctids.
Is it just because no one thought about adding them, or are there some
more fundamental issues?
I could imagine that things like:
select * from table
Hi,
We have the following situation:
Pg master on US east coast.
Pg slave on US west coast.
Replication set using omnipitr + streaming replication.
Setup on slave:
postgres=# select name, setting from pg_settings where name ~ '(checkpo|wal)';
name | setting
---
Not sure if it helps, but it's apparently not a very rare thing.
A quick analysis of data from explain.depesz.com showed that > 12% of
plans with a nested loop have such an estimate.
Couple of examples:
http://explain.depesz.com/s/Qm4
http://explain.depesz.com/s/qmW
http://explain.depesz.com/s/qnG
http:
On Wed, Apr 24, 2013 at 12:48:44PM -0700, Stephen Scheck wrote:
> I have a UDF (written in C) that returns SETOF RECORD of an anonymous
> record type
> (defined via OUT parameters). I'm trying to use array_agg() to transform
> its output to
> an array:
> pg_dev=# SELECT array_agg((my_setof_record_r
On Tue, Apr 23, 2013 at 03:08:52PM -0400, François Beausoleil wrote:
> I used omnipitr to launch a base backup, but I fumbled a couple of
> things, so I Ctrl+C'd *once* the console where I had
> omnipitr-backup-master running. omnipitr-backup-master correctly
> launched pg_stop_backup, but pg_stop_
On Wed, Apr 10, 2013 at 07:36:59AM +, Albe Laurenz wrote:
> What libraries are loaded in this backend (lsof)?
> Maybe it's something non-PostgreSQL that's hogging the memory.
I don't have this particular backend anymore, and I don't have lsof. But
in smaps there are libraries listed, so:
Stil
Hi,
So, I checked a backend on Linux, and found this:
2ba63c797000-2ba63fa68000 rw-p 2ba63c797000 00:00 0
Size:             52036 kB
Rss:              51336 kB
Shared_Clean:         0 kB
Shared_Dirty:         0 kB
Private_Clean:        0 kB
Private_Dirty:    51336 kB
Swap:                 0 k
On Thu, Mar 28, 2013 at 08:50:50PM +, Severn, Chris wrote:
> What I want to do is return items that have 'Robocop' or 'Robocop and
> DVD' or 'Robocop and Collection' or 'Robocop and DVD and collection'
Based on the criteria above, I would say that:
SELECT m.* FROM movies m WHERE to_tsvector(m.
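Spelled out, the condition would look roughly like this (a sketch; the column name and the
text search configuration are assumptions):
SELECT m.*
FROM movies m
WHERE to_tsvector('english', m.title) @@ to_tsquery('english', 'robocop');
Matching on just 'robocop' already covers all four combinations listed above.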
On Sun, Jan 13, 2013 at 11:36:11AM -0500, AI Rumman wrote:
> Its already installed.
> I am runninf Postgresql 9.0 with uuid successfully in this server.
Most likely you installed just part of the library. Not sure what
OS/distribution you're using, but on debian, for example - there is
distinction
On Sun, Jan 13, 2013 at 10:18:50AM -0500, AI Rumman wrote:
> I am trying to build Postgresql 9.2
>
> ./configure --prefix=/usr/pgsql-9.2 --with-ossp-uuid --with-libxml
>
> Got the error at config.log:
>
> configure:9747: result: no
> configure:9752: checking for uuid_export in -luuid
>
On Tue, Jan 08, 2013 at 04:19:59PM -0600, Kirk Wythers wrote:
> I have a column of type TIMESTAMP, I'd like to query all records from 2011.
> If it were text I could use a partial such as:
>
> WHERE
> text ~ '2011'
>
> There must be a simple way to pull the year part out of a timestamp for
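Two common ways to do it (a sketch; the table and column names are made up, and the range
form can use a plain index on the timestamp column):
-- simple, but not index friendly:
SELECT * FROM records WHERE extract(year FROM ts) = 2011;

-- range condition, index friendly:
SELECT * FROM records
WHERE ts >= '2011-01-01' AND ts < '2012-01-01';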
On Thu, Dec 20, 2012 at 02:50:12PM +0200, Heikki Linnakangas wrote:
> Hmm, that's a different error than you got before. Thom also
> reported a "requested WAL segment ... has already been removed"
> error, but in his test case, and as far as I could reproduce it, the
> error doesn't reoccur when th