> My problem is that the server runs very slowly.
Someone may chime in with general advice, but for more detailed help, can
you be more specific? E.g.,
http://wiki.postgresql.org/wiki/Slow_Query_Questions
---
Maciek Sakrejda | System Architect | Truviso
1065 E. Hillsdale Blvd., Suite 215
Foster City, CA
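For reference, the kind of detail that page asks for is easy to collect up front. A minimal sketch (the table and query are made-up placeholders for whatever is slow on the OSM server):

  -- Server version and the settings most relevant to performance
  SELECT version();
  SHOW shared_buffers;
  SHOW effective_cache_size;

  -- Timing and I/O for the slow statement (BUFFERS is available in 9.0+)
  EXPLAIN (ANALYZE, BUFFERS)
  SELECT count(*) FROM planet_osm_point WHERE amenity = 'school';

Posting the EXPLAIN output together with the hardware and configuration details usually gets much more concrete answers.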
I know there were a lot of performance issues with ext4, but I don't know what
state it is in now.
I have a private OpenStreetMap server installed on an Ubuntu 11.10 64-bit PC with
both partitions (/ and /home) formatted with ext4. My problem is that the server
runs very slowly.
On Fri, Nov 11, 2011 at 10:01 PM, Ruslan Zakirov wrote:
> Hello,
>
> A table has two columns id and EffectiveId. First is primary key.
> EffectiveId is almost always equal to id (95%) unless records are
> merged. Many queries have id = EffectiveId condition. Both columns are
> very distinct and Pg
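To make the setup concrete, a sketch of the shape being described (table and index names are invented; the partial index is just one possible way to help the planner with the id = EffectiveId restriction, not necessarily what this thread concluded):

  CREATE TABLE tickets (
      id          integer PRIMARY KEY,
      effectiveid integer NOT NULL   -- equal to id for ~95% of rows
  );

  -- Typical query shape from the report
  SELECT *
    FROM tickets
   WHERE id = effectiveid
     AND id = 1234;

  -- One option: index only the unmerged rows
  CREATE INDEX tickets_unmerged_idx
      ON tickets (id)
   WHERE id = effectiveid;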
On Nov 14, 2011, at 4:42 PM, Cody Caughlan wrote:
> We have anywhere from 60-80 background worker processes connecting to
> Postgres, performing a short task and then disconnecting. The lifetime
> of these tasks averages 1-3 seconds.
[snip]
> Is this something that I should look into or is it no
On Mon, Nov 14, 2011 at 4:59 PM, Ben Chobot wrote:
> On Nov 14, 2011, at 4:42 PM, Cody Caughlan wrote:
>
>> We have anywhere from 60-80 background worker processes connecting to
>> Postgres, performing a short task and then disconnecting. The lifetime
>> of these tasks averages 1-3 seconds.
>
> [snip]
We have anywhere from 60-80 background worker processes connecting to
Postgres, performing a short task and then disconnecting. The lifetime
of these tasks averages 1-3 seconds.
I know that there is some connection overhead to Postgres, but I don't
know what would be the best way to measure this overhead.
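One rough way to put a number on it, assuming pgbench is available (the database name is a placeholder): run a trivial select-only workload once with a fresh connection per transaction and once over persistent connections, then compare the transaction rates.

  # Reconnect for every transaction (-C), select-only workload (-S)
  pgbench -S -C -c 8 -T 30 appdb

  # Same workload over persistent connections
  pgbench -S -c 8 -T 30 appdb

The gap between the two is roughly the connection setup/teardown cost; if it is significant, a pooler such as PgBouncer in front of the short-lived workers is the usual remedy.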
On Mon, Nov 14, 2011 at 2:57 PM, Tomas Vondra wrote:
> On 14 November 2011, 22:58, Cody Caughlan wrote:
>>> Seems reasonable, although I'd bump up the checkpoint_timeout (the 5m is
>>> usually too low).
>>
>> Ok, will do.
>
> Yes, but find out what that means and think about the possible impact
>
On 14 November 2011, 22:58, Cody Caughlan wrote:
>> Seems reasonable, although I'd bump up the checkpoint_timeout (the 5m is
>> usually too low).
>
> Ok, will do.
Yes, but find out what that means and think about the possible impact
first. It usually improves the checkpoint behaviour but increases
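For context, the settings under discussion, with illustrative values (not a recommendation from this thread): a longer checkpoint_timeout means fewer, larger checkpoints and therefore a longer crash-recovery window, since more WAL has to be replayed.

  # postgresql.conf -- illustrative values
  checkpoint_segments = 32
  checkpoint_timeout = 15min            # default 5min; fewer but larger checkpoints
  checkpoint_completion_target = 0.9    # spread checkpoint writes across the interval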
Hi all. I've been using Postgres for years, and lurking on this newsgroup for a
short while now to gain the benefit of your expertise and experience and learn
how to get the most out of PostgreSQL.
I do a fair bit of work on tables using composite keys. I have discovered
a couple of things
Thanks for your response. Please see below for answers to your questions.
On Mon, Nov 14, 2011 at 11:22 AM, Tomas Vondra wrote:
> On 14 November 2011, 19:16, Cody Caughlan wrote:
>> shared_buffers = 3584MB
>> wal_buffers = 16MB
>> checkpoint_segments = 32
>> max_wal_senders = 10
>> checkpoint_completion_target = 0.9
On 11/14/2011 03:47 PM, Kevin Grittner wrote:
This sounds like it might be another manifestation of something that
confused me a while back:
http://archives.postgresql.org/pgsql-hackers/2009-11/msg01754.php
http://archives.postgresql.org/pgsql-hackers/2009-12/msg00060.php
Interesting. That wa
Shaun Thomas wrote:
> Why on earth is it sending IN_CLOSE_WRITE messages for 0014, 1145,
> and 0061?
This sounds like it might be another manifestation of something that
confused me a while back:
http://archives.postgresql.org/pgsql-hackers/2009-11/msg01754.php
http://archives.postgresql.org/pgsql-hackers/2009-12/msg00060.php
On 11/11/2011 04:21 PM, Shaun Thomas wrote:
So my real question: is this safe?
So to answer my own question: no, it's not safe. The PG backends
apparently write to the xlog files periodically and *close* them after
doing so. There's no open filehandle that gets written until the file is
full.
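The behaviour is easy to observe directly, assuming inotify-tools is installed (the data directory path is a guess at a default Ubuntu 9.1 layout):

  # Backends repeatedly open, write and close WAL segments, so a
  # CLOSE_WRITE event does not mean a segment is finished.
  inotifywait -m -e close_write /var/lib/postgresql/9.1/main/pg_xlog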
On 14 November 2011, 19:16, Cody Caughlan wrote:
> shared_buffers = 3584MB
> wal_buffers = 16MB
> checkpoint_segments = 32
> max_wal_senders = 10
> checkpoint_completion_target = 0.9
> wal_keep_segments = 1024
> maintenance_work_mem = 256MB
> work_mem = 88MB
> shared_buffers = 3584MB
> effective_ca
Hi, running Postgres 9.1.1 on an EC2 m1.xlarge instance. Machine is a
dedicated master with 2 streaming replication nodes.
The machine has 16GB of RAM and 4 cores.
We're starting to see some slow queries, especially COMMITs that are
happening more frequently. The slow queries are against seemingl
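For orientation, the posted settings against a 16GB / 4-core machine (comments are general rules of thumb, not conclusions from this thread; the effective_cache_size value is illustrative, since it is cut off in the quote above):

  shared_buffers = 3584MB        # ~22% of RAM; 20-25% is a common starting range
  effective_cache_size = 10GB    # planner hint, often set to 50-75% of RAM
  work_mem = 88MB                # per sort/hash node per backend; adds up quickly
                                 # with 60-80 worker connections
  maintenance_work_mem = 256MB
  checkpoint_segments = 32
  checkpoint_completion_target = 0.9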
"Anibal David Acosta" wrote:
> I am doing asynchronous commit, but sometimes I think that there are
> so many "things" involved in an insert/update transaction for a table
> whose information is not that important.
>
> My table is a statistics counters table, so I can live with a
> partial data loss
I am doing asynchronous commit, but sometimes I think that there are so many
"things" involved in an insert/update transaction for a table whose information
is not that important.
My table is a statistics counters table, so I can live with a partial data
loss, but not with a full data loss because man
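If only the counters can tolerate loss, asynchronous commit can be limited to just those transactions rather than the whole server (a sketch; the table and column names are invented):

  BEGIN;
  SET LOCAL synchronous_commit TO off;   -- risk only this transaction
  UPDATE hit_counters
     SET hits = hits + 1
   WHERE page_id = 42;
  COMMIT;

On a crash, only transactions committed within roughly the last few wal_writer_delay intervals can be lost; the database itself stays consistent.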
"Anibal David Acosta" wrote:
> Maybe an option like "Recover from file" would be useful.
> So, for example, some process does a daily COPY of the entire table
> to a file.
>
> In case of a crash, Postgres recovers the content from the file.
If you need to recover file contents on a crash, then an unlogged
t
Maybe an option like "Recover from file" would be useful.
So, for example, some process does a daily COPY of the entire table to a file.
In case of a crash, Postgres recovers the content from the file.
:)
-----Original Message-----
From: Sergey Konoplev [mailto:gray...@gmail.com]
Sent: Monday, November 14
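Spelling the suggestion out as a sketch (the table name and path are invented; a daily COPY is only a point-in-time snapshot, so everything written after the last copy is still lost):

  -- Daily snapshot of the counters table (server-side COPY)
  COPY stats_counters TO '/var/backups/stats_counters.copy';

  -- After a crash, when the unlogged table comes back empty,
  -- reload the last snapshot
  COPY stats_counters FROM '/var/backups/stats_counters.copy';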
On 14 November 2011 14:17, Richard Huxton wrote:
> On 14/11/11 10:08, Sergey Konoplev wrote:
>>
>> On 14 November 2011 12:58, Richard Huxton wrote:
> Let's say you were doing something like "UPDATE unlogged_table SET x=1 WHERE
> y=2". If a crash occurs during this command, there's no guarantee th
On 14/11/11 10:08, Sergey Konoplev wrote:
On 14 November 2011 12:58, Richard Huxton wrote:
Because they bypass the transaction-log (WAL), hence unlogged.
There's no way to know whether there were partial updates applied when the
system restarts.
I probably did not understand the "truncate" me
On 14 November 2011 12:58, Richard Huxton wrote:
> Because they bypass the transaction-log (WAL), hence unlogged.
> There's no way to know whether there were partial updates applied when the
> system restarts.
I probably did not understand the meaning of "truncate" correctly. It
truncates all the records
On 14/11/11 08:10, Sergey Konoplev wrote:
Hi,
On 12 November 2011 00:18, Stephen Frost wrote:
In a crash, unlogged tables are automatically truncated.
BTW, I wonder why they are truncated in a crash case?
Because they bypass the transaction-log (WAL), hence unlogged.
There's no way to know whether there were partial updates applied when the
system restarts.
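In miniature, the behaviour being described here (names invented): nothing about the table's contents goes through the WAL, so after a crash there is no way to tell which writes made it to disk intact, and PostgreSQL empties the table instead.

  -- Available from 9.1: no WAL is written for this table's data
  CREATE UNLOGGED TABLE session_cache (
      key   text PRIMARY KEY,
      value text
  );

  INSERT INTO session_cache VALUES ('abc', 'hello');

  -- Survives a clean shutdown and restart.
  -- After a crash or unclean shutdown the table is truncated:
  --   SELECT count(*) FROM session_cache;   --> 0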
Hi,
On 12 November 2011 00:18, Stephen Frost wrote:
> In a crash, unlogged tables are automatically truncated.
BTW, I wonder why they are truncated in a crash case?
--
Sergey Konoplev
Blog: http://gray-hemp.blogspot.com
LinkedIn: http://ru.linkedin.com/in/grayhemp
JID/GTalk: gray...@gmail