Our setup:
* DB version: Postgres 9.3.4
* OS: CentOS 5.6
* Kernel version: Linux 2.6.18-238.19.1.el5 #1 SMP Fri Jul 15 07:31:24 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux
* Memory: 256 GB
* We have Slony replicating to this DB from the primary server
I haven't been able to find clear information online about what happens to
existing indexes on a post-9.0 Postgres database (we specifically use 9.3)
after a vacuum full.
There is a lot of information on how a vacuum differs from a vacuum full, but
my question is: is a re-index needed after a vacuum full?
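For what it's worth, post-9.0 VACUUM FULL rewrites the table and rebuilds its
indexes as part of the rewrite, so a separate REINDEX should not be needed. A
minimal way to confirm that on one table (the Slony log table name below is
just an assumed example):

    # Each index's relfilenode changes if VACUUM FULL rebuilt it.
    psql -c "SELECT indexrelid::regclass, relfilenode
             FROM pg_index JOIN pg_class ON pg_class.oid = indexrelid
             WHERE indrelid = 'sl_log_1'::regclass;"
    psql -c "VACUUM FULL sl_log_1;"
    psql -c "SELECT indexrelid::regclass, relfilenode
             FROM pg_index JOIN pg_class ON pg_class.oid = indexrelid
             WHERE indrelid = 'sl_log_1'::regclass;"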
>Are there a lot of tuples in the table that have been inserted or
>updated by still-open transactions?
Yes, there are likely to be inserts. That table is a log capture table
used by our replication software, so essentially every
update/delete/insert will have a record inserted into the table. It
>1. disabling zone_reclaim (echo 0 > /proc/sys/vm/zone_reclaim_mode)
> 2. disabling transparent hugepage support - this has various names on
> different kernel/distributions, but "find /sys | grep -i
> transparent.*hugepage.*enable" will find it, and then just echo never there.
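As a concrete sketch of those two quoted steps (the THP path below is the
stock one; on RHEL/CentOS kernels it may be
/sys/kernel/mm/redhat_transparent_hugepage instead, which is what the quoted
find command is for):

    # Not persistent; add to /etc/rc.local or equivalent to survive reboots.
    echo 0 > /proc/sys/vm/zone_reclaim_mode
    echo never > /sys/kernel/mm/transparent_hugepage/enabled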
Thank you, yes,
Hi,
We run postgres 9.3.3 on CentOS 6.3, kernel 2.6.32-431.3.1. Every once in a
while, we see postgres processes spinning on semop.
Here is output from an strace on a delete process:
root@site-db01a:~ # strace -p 53744
Process 53744 attached - interrupt to quit
semop(21692498, {{6, 1, 0}}, 1
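One way to see what that process looks like from the database side, assuming
53744 is a regular backend pid (on 9.3 pg_stat_activity still exposes the
boolean "waiting" column rather than wait events):

    psql -x -c "SELECT pid, state, waiting, now() - query_start AS runtime, query
                FROM pg_stat_activity WHERE pid = 53744;"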
We use symmetricDS for this. Works pretty well.
http://www.symmetricds.org/
From: Serge Fonville <serge.fonvi...@gmail.com>
Date: Wednesday, May 7, 2014 at 2:49 AM
To: Geoff Montee <geoff.mon...@gmail.com>
Cc: Sameer Kumar <sameer.ku...@ashnik.com>,
PostgreSQL General Discussion
size: 393215 MB
node 1 free: 319 MB
node distances:
node   0   1
  0:  10  20
  1:  20  10
root@site-db01b:~ # cat /proc/sys/vm/zone_reclaim_mode
0
Thanks,
Karthik
On 3/26/14 1:54 PM, "Bruce Momjian" wrote:
>On Wed, Mar 26, 2014 at 08:22:01PM +, Anand Kumar, Karthik
On 3/14/14 3:37 PM, "Anand Kumar, Karthik" wrote:
>For anyone that's still following - we tried upgrading to postgres 9.3.3 -
>that hasn't helped.
>
>Running an strace on the pid that was consuming the highest CPU at the
>time of the outage shows:
>
blogspot.com/2012/06/perf-good-bad-ugly.html - will see what
that has to show.
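For example, sampling the busy backend with perf while the problem is
happening (the pid is a placeholder):

    perf top -p <pid>
    # or capture 30 seconds for offline analysis:
    perf record -g -p <pid> -- sleep 30 && perf report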
Thanks,
Karthik
On 3/11/14 1:06 PM, "John R Pierce" wrote:
>On 3/11/2014 10:20 AM, Anand Kumar, Karthik wrote:
>> We typically see about 500-700 active queries at a time
>
>if these are prima
down our applications (they maintain persistent connections once established,
so any connection overhead slows them down), and transaction-level pooling
simply did not work.
Thanks,
Karthik
From: Jeff Janes <jeff.ja...@gmail.com>
Date: Tuesday, March 11, 2014 9:23 AM
To: "
ed on recommendations from postgres user groups, and via testing with
setting it up and running explain analyze on queries. None of the settings had
changed when this problem began.
Thanks,
Karthik
From: Venkata Balaji Nagothi <vbn...@gmail.com>
Date: Monday, March 10, 2014 7:35 PM
To:
Hi all,
We're running postgres 9.3.2, server configuration below.
Seemingly randomly, we will see the number of active queries in postgres go up
until we hit max_connections. The DB will recover after a few minutes.
We had the issue a couple of times in Feb 2014. We then upgraded the postgres
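One cheap way to watch for the ramp-up toward max_connections is to log
backend counts by state once a minute (a sketch; adjust connection options as
needed):

    while true; do
        psql -At -c "SELECT now(), state, count(*)
                     FROM pg_stat_activity GROUP BY 1, 2"
        sleep 60
    done >> /var/tmp/pg_activity.log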
Thank you.
It's a 'yes' on 2 there. I did copy the config file from the 9.1
installation. Thank you for mentioning that, will fix it.
Thanks,
Karthik
On 1/30/14 4:17 PM, "Tom Lane" wrote:
>Adrian Klaver writes:
>> On 01/30/2014 03:17 PM, Anand Kumar, Karthik wrote:
Hi,
We just upgraded our postgres database from 9.1 to 9.3, and noticed that the
timezone changed from PST to GMT.
Is that known behavior? Has anyone else run into it, or am I just missing
something?
I've verified the server's timezone is right, and nothing in the postgres
user's profile is ch
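A quick way to see what the server is actually using and where the setting
comes from (since 9.2 a missing timezone line in postgresql.conf falls back to
GMT instead of being probed from the environment, which would fit copying over
a 9.1 config file):

    psql -c "SHOW timezone;"
    psql -c "SELECT name, setting, source, sourcefile
             FROM pg_settings WHERE name = 'TimeZone';"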
Thanks all for your suggestions. Looks like disabling transparent huge
pages fixed this issue for us. We haven't had it occur in two days now
after the change.
Thanks,
Karthik
The reason we're on ext2 is to get around
http://www.postgresql.org/message-id/CED87E13.C57E7%karthik.anandkumar@memorylane.com
We had pretty severe index and table corruption that would occur randomly
- this was on ext3, and centos 5.6, 2.6.18 kernel. The problems got fixed
after we upgraded th
Hi,
We run postgres 9.1.11 on CentOS 6.3, with an ext2 filesystem.
Everything will run along okay, and every few hours, for about a couple of
minutes, postgres will slow way down. A "select 1" query takes between 10 and
15 seconds to run, and the box in general gets lethargic.
This causes a pil
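A crude probe that can help correlate the slow windows with whatever else the
box is doing at the time (a sketch; adjust connection options as needed):

    while true; do
        ( time psql -qAt -c 'SELECT 1' >/dev/null ) 2>&1 | grep real
        sleep 5
    done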
We use symmetricDS pretty extensively, across oracle and postgres
databases. It has its flaws and its strengths. It shines when there are
multiple database platforms involved and the volume of transactions is
not too high, and it supports multi-master. It's optimized for WAN topologies,
so it's great if
>
>Thanks Shaun!
>
>Yes, we're getting synchronous_commit on right now.
>
>The log_min_duration was briefly set to 0 at the time I sent out the post,
>just to see what statements were logged right before everything went to
>hell. Didn't yield much since we very quickly realized we couldn't cope
>wi
Thank you. pg_dump is what we will use.
We did re-assign the LUN to a new system last night, and are monitoring.
Too early to say anything, but so far, we haven't seen the corruption.
And yes, we will get the dump from the new system.
We do not have the hardware to move the backups to a differen
Hi,
We have an issue with possibly corrupt data in our postgresql server. Errors
like:
ERROR: index "photos_p00_n2" contains unexpected zero page at block 0
ERROR: invalid page header in block 12707 of relation
pg_tblspc/5020557/PG_9.1_201105231/16393/9014673
Thanks to all the suggestions fr
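If the damage turns out to be confined to an index, rebuilding just that index
is usually enough once the underlying storage is trusted again, and reading
every table end to end will surface heap-level damage like the bad page header
(a sketch, not something decided in this thread; the database name is assumed):

    psql -c "REINDEX INDEX photos_p00_n2;"
    pg_dump photos_db > /dev/null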
Hi Jerry,
Thanks for the suggestion
Yes, until about a month ago, we weren't wrapping our snapshots with
pg_start_backup and pg_stop_backup. Same reason as you mentioned, the
database would start up and "trivial checks" would be okay, and so we
figured "why write a script?".
However we did chang
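For reference, the wrapper amounts to something like this around the snapshot
(the label and the snapshot step itself are placeholders):

    psql -c "SELECT pg_start_backup('nightly_snapshot');"
    # ... take the LUN / filesystem snapshot here ...
    psql -c "SELECT pg_stop_backup();"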
Thanks Shaun!
Yes, we're getting synchronous_commit on right now.
The log_min_duration was briefly set to 0 at the time I sent out the post,
just to see what statements were logged right before everything went to
hell. Didn't yield much since we very quickly realized we couldn't cope
with the vol
Hi,
We're looking for help with possible corruption of our indexes and tables.
Seemingly in the middle of normal operations, we will run into errors like
the below:
ERROR: index "mv_visits_p03_n2" contains unexpected zero page at block
15939
ERROR: invalid page header in block 344713 of relati
Hi All,
I'm looking for case studies, reference sites, or benchmarks for
Solaris/Linux and PostgreSQL,
and application-level benchmarks like JBoss/Tomcat with Postgres as the
backend.
It would be great if anybody could help me.
Regards
Anand Kumar.