On 11/11/2012 08:54 AM, Craig Ringer wrote:
>
> Now follow Tom's advice:
>> In gdb,
>> call MemoryContextStats(TopMemoryContext)
>> should produce some useful information on the process's stderr file.
>
Oh, I forgot to explain how to actually get the output.
stderr goes to the PostgreSQL log.
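Putting it together, the sequence might look like this (a sketch; the
PID 12345 is hypothetical):

    # Get the PID of the backend that will run the query:
    psql -c "SELECT pg_backend_pid();"
    # (or look it up in pg_stat_activity while the query is running)

    # Attach to that backend and dump its memory contexts:
    gdb -p 12345
    (gdb) call MemoryContextStats(TopMemoryContext)
    (gdb) detach
    (gdb) quit

    # The per-context report is written to the backend's stderr,
    # which normally ends up in the PostgreSQL server log.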
On 11/11/2012 12:50 AM, Carlos Henrique Reimer wrote:
> Hi,
>
> What is the best way to attach a debugger to the SELECT and identify
> why it is exhausting server storage?
This page is more focused on getting a stack trace after a crash, but
provides some information about how to identify the backend.
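To identify the backend running the SELECT, pg_stat_activity is usually
enough. A sketch (on 9.1 the columns are procpid and current_query; 9.2
renames them to pid and query):

    psql -c "SELECT procpid, datname, current_query FROM pg_stat_activity;"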
Sounds like a file-sharing issue; in other words, two separate pg_dump
streams writing to the same file at the same time. Perhaps add a
timestamp variable to the file-name call below and see if the error
goes away.
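For example, something along these lines (a sketch; the variables and
target directory are assumptions):

    # Give each concurrent dump its own output file by embedding a timestamp:
    STAMP=$(date +%Y%m%d_%H%M%S)
    pg_dump -Fc -f "/backups/${DB}_${STAMP}.dmp" "${DB}"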
On Sat, 2012-11-10 at 08:03 -0600, Tefft, Michael J wrote:
> We have several Postgres 9.4 databases on Solaris 10 that are structural
> clones but with different data.
On 11/10/2012 02:23 PM, Scott Marlowe wrote:
When in doubt, there are the docs. :)
http://www.postgresql.org/docs/9.1/interactive/routine-vacuuming.html#AUTOVACUUM
"The "autovacuum daemon" actually consists of multiple processes. There is a
persistent daemon process, called the autovacuum launc
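The launcher and its workers are visible as separate processes, e.g.
(illustrative output; process titles as of 9.1, may vary by platform):

    ps -ef | grep autovacuum
    # postgres ... postgres: autovacuum launcher process
    # postgres ... postgres: autovacuum worker process   mydb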
On Sat, Nov 10, 2012 at 3:20 PM, Adrian Klaver wrote:
> On 11/10/2012 02:08 PM, Scott Marlowe wrote:
>>
>> On Sat, Nov 10, 2012 at 2:12 PM, Jeff Janes wrote:
>>>
>>> On Fri, Nov 9, 2012 at 5:56 PM, Scott Marlowe
>>> wrote:
As well, since the default nap time is 1 minute, it will take at least
50 minutes to vacuum each db as nap time is how long autovac waits
between databases.
On Fri, Nov 9, 2012 at 4:28 PM, Lists wrote:
...
> 3) For each of the tables from #2, run the commands (see the sketch after this quote)
> REINDEX TABLE $table;
> VACUUM FULL ANALYZE $table;
>
> The end result is a squeaky-clean database server with expected disk usage.
>
> NOTES:
...
>
>
> 2) It was sheer chance that I discove
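A loop implementing step 3 of the quoted procedure might look like this
(a minimal sketch; $TABLES and the database name mydb are assumptions):

    for table in $TABLES; do
        psql -d mydb -c "REINDEX TABLE $table;"
        psql -d mydb -c "VACUUM FULL ANALYZE $table;"
    done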
On 11/10/2012 02:08 PM, Scott Marlowe wrote:
On Sat, Nov 10, 2012 at 2:12 PM, Jeff Janes wrote:
On Fri, Nov 9, 2012 at 5:56 PM, Scott Marlowe wrote:
As well, since the default nap time is 1 minute, it will take at least
50 minutes to vacuum each db as nap time is how long autovac waits
between databases.
On Sun, Nov 11, 2012 at 8:05 AM, Jeff Janes wrote:
> Totally not. With default settings and default pgbench, the easiest
> way for host B to beat host A is by lying about the durability of
> fsync.
True. Without the ability to brutally cut the power to a cloud
instance or other remote (and in so
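For context, a typical default-ish pgbench comparison, plus a check of
the durability settings a host could be fudging (a sketch; scale and
duration are arbitrary):

    pgbench -i -s 10 bench       # initialize, scale factor 10
    pgbench -c 10 -T 60 bench    # 10 clients for 60 seconds
    psql -c "SELECT name, setting FROM pg_settings
             WHERE name IN ('fsync', 'synchronous_commit');"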
On Sat, Nov 10, 2012 at 2:12 PM, Jeff Janes wrote:
> On Fri, Nov 9, 2012 at 5:56 PM, Scott Marlowe wrote:
>>
>> As well, since the default nap time is 1 minute, it will take at least
>> 50 minutes to vacuum each db as nap time is how long autovac waits
>> between databases.
>
> That isn't how it works.
On Fri, Nov 9, 2012 at 5:56 PM, Scott Marlowe wrote:
>
> As well, since the default nap time is 1 minute, it will take at least
> 50 minutes to vacuum each db as nap time is how long autovac waits
> between databases.
That isn't how it works. The naptime is per database, not per
cluster. If the
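For reference, the settings in play can be inspected on a running
server like this (a sketch):

    psql -c "SELECT name, setting FROM pg_settings WHERE name LIKE 'autovacuum%';"
    # Key defaults: autovacuum_naptime = 60 (seconds), autovacuum_max_workers = 3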
On Fri, Nov 9, 2012 at 6:17 PM, Chris Angelico wrote:
> On Sat, Nov 10, 2012 at 12:26 PM, Steve Crawford
> wrote:
>> Don't do that. Defaults are good for ensuring that PostgreSQL will start on
>> the widest reasonable variety of systems. They are *terrible* for
>> performance and are certainly wr
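For example, a quick look at a few settings that usually need raising
from their defaults (illustrative; good values depend entirely on the
hardware and workload):

    psql -c "SELECT name, setting, unit FROM pg_settings
             WHERE name IN ('shared_buffers', 'work_mem', 'effective_cache_size');"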
Carlos Henrique Reimer writes:
> What is the best way to attach a debugger to the SELECT and identify why it
> is exhausting server storage?
In gdb,
call MemoryContextStats(TopMemoryContext)
should produce some useful information on the process's stderr file.
regards, tom lane
Hi,
What is the best way to attach a debugger to the SELECT and identify why it
is exhausting server storage?
Thank you in advance!
On Fri, Nov 9, 2012 at 4:10 AM, Craig Ringer wrote:
> On 11/08/2012 11:35 PM, Carlos Henrique Reimer wrote:
> > Hi Craig,
> >
> > work_mem is defined with 10MB and
On 11/09/2012 11:35 PM, Boris Epstein wrote:
> Hello there,
>
> Once in a while, as I am trying to run various versions of the Postgres
> DB engine I get a message on startup indicating that my control file
> is not up to snuff. Last time it happened with Postgres 9.1 on
> OpenIndiana 11.
What's th
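One way to inspect the control file directly is pg_controldata (a
sketch; the data-directory path is an assumption):

    pg_controldata /var/pgsql/data
    # Among other fields this reports the pg_control version number and
    # catalog version number, which must match the server binaries in use.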
We have several Postgres 9.4 databases on Solaris 10 that are structural
clones but with different data. While running multiple concurrent
pg_dump exports for these databases, we get sporadic errors like this:
pg_dump: dumping contents of table attachment
pg_dump: [custom archiver] could not
"Tefft, Michael J" writes:
> We have several Postgres 9.4 databases on Solaris 10 that are structural
> clones but with different data. While running multiple concurrent
> pg_dump exports for these databases, we get sporadic errors like this:
> pg_dump: dumping contents of table attachment
> pg_
On 11/10/2012 06:19 AM, Tefft, Michael J wrote:
Correction, that should have been Postgres 9.0.4 not 9.4.
For some reason I did not get the original question. Found it in the
archives.
In that post you show the following:
The command used to invoke pg_dump is as follows:
${currentCodeDir}
Correction, that should have been Postgres 9.0.4 not 9.4.
- Original Message -
From: Igor Romanchenko
>On Fri, Nov 9, 2012 at 9:21 PM, George Weaver wrote:
>>Hi Everyone,
>>I have a view made up of a local query unioned with a view comprised of a
>>dblink query.
>>If the dblink query cannot establish a connection, I get the "could not
>>
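The pattern being described, a local query UNIONed with a dblink query,
might look like this (a sketch; every object and connection-string name
is hypothetical):

    psql -d mydb <<'SQL'
    -- assumes the dblink module is installed
    CREATE VIEW combined AS
        SELECT a, b FROM local_table
        UNION ALL
        SELECT a, b
        FROM dblink('host=remotehost dbname=remotedb',
                    'SELECT a, b FROM remote_table') AS t(a integer, b text);
    SQL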