On 5.11.2013 23:19, Merlin Moncure wrote:
On Tue, Nov 5, 2013 at 3:52 PM, Jan Walter wrote:
On 5.11.2013 20:51, Merlin Moncure wrote:
On Tue, Nov 5, 2013 at 11:30 AM, Merlin Moncure wrote:
On Tue, Nov 5, 2013 at 6:25 AM, Jan Walter wrote:
Hi,
I am in need of a very robust (esp. fast in read, non-blocking in update)
On Sat, Nov 2, 2013 at 12:54 PM, Евгений Селявка wrote:
SNIP
> max_connections = 350
SNIP
> work_mem = 256MB
These two settings together are quite dangerous.
1: Look into a DB pooler to get the number of connections you need down to no
more than 2x the number of cores in your machine. I recommend pgbouncer.
2: You
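The risk with the settings quoted above: 350 connections x 256MB work_mem is
roughly 87GB if enough sorts run at once, far beyond the 32GB of RAM mentioned
below. A minimal pgbouncer.ini sketch of the pooling suggestion (all values are
illustrative; port 5433 simply matches the postgresql.conf quoted elsewhere in
the thread, and default_pool_size = 32 assumes roughly a 16-core box):

    [databases]
    * = host=127.0.0.1 port=5433

    [pgbouncer]
    listen_addr = *
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    pool_mode = transaction
    max_client_conn = 350      ; clients may still open 350 connections...
    default_pool_size = 32     ; ...but only ~2x cores reach PostgreSQL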
On Tue, Nov 5, 2013 at 8:37 AM, Евгений Селявка wrote:
> I set work_mem to 1/4 of available RAM. I have 32GB of RAM, so I set
> shared_buffers to 8GB.
I am sure you mean shared_buffers here, not work_mem. work_mem is a
per-operation parameter, so if you are using an
operation involving
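A quick way to see the per-operation behaviour (the table and column below are
made up for illustration):

    SET work_mem = '64MB';          -- can be set per session or per query
    EXPLAIN (ANALYZE)
      SELECT * FROM some_big_table ORDER BY some_column;
    -- the Sort node reports "Sort Method: quicksort  Memory: ..." or
    -- "external merge  Disk: ..."; every sort/hash node in a plan gets its
    -- own work_mem allowance, so one query can use several multiples of it
    RESET work_mem;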
On Tue, Nov 5, 2013 at 3:52 PM, Jan Walter wrote:
>
> On 5.11.2013 20:51, Merlin Moncure wrote:
>
> On Tue, Nov 5, 2013 at 11:30 AM, Merlin Moncure wrote:
>
> On Tue, Nov 5, 2013 at 6:25 AM, Jan Walter wrote:
>
> Hi,
>
> I am in need of a very robust (esp. fast in read, non-blocking in update)
On 5.11.2013 20:51, Merlin Moncure wrote:
On Tue, Nov 5, 2013 at 11:30 AM, Merlin Moncure wrote:
On Tue, Nov 5, 2013 at 6:25 AM, Jan Walter wrote:
Hi,
I am in need of a very robust (esp. fast in read, non-blocking in update)
tree structure storage (95% of trees are of depth <4, current max. is 12).
> PostgreSQL 9.1.6 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.6
> 20120305 (Red Hat 4.4.6-4), 64-bit
First, you should be using the latest update version. You are currently
missing multiple patch updates.
> listen_addresses = '*'
> port = 5433
> max_connections = 350
> shared_buffer
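Regarding the patch-update advice above, a quick way to confirm exactly what is
running and compare it against the latest 9.1.x minor release:

    SELECT version();
    -- or equivalently:
    SHOW server_version;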
On Tue, Nov 5, 2013 at 11:30 AM, Merlin Moncure wrote:
> On Tue, Nov 5, 2013 at 6:25 AM, Jan Walter wrote:
>> Hi,
>>
>> I am in need of a very robust (esp. fast in read, non-blocking in update)
>> tree structure storage (95% of trees are of depth <4, current max. is 12). We
>> have 10k-100k trees
On Tue, Nov 5, 2013 at 6:25 AM, Jan Walter wrote:
> Hi,
>
> I am in need of a very robust (esp. fast in read, non-blocking in update)
> tree structure storage (95% of trees are of depth <4, current max. is 12). We
> have 10k-100k trees now, millions in the future.
> I made many tests, benchmarks of
On Mon, Nov 4, 2013 at 2:10 PM, Caio Casimiro wrote:
>
> You said that I would need B-Tree indexes on the fields for which I want the
> planner to use an index-only scan, and I think I already have them on the
> tweet table:
>
> "tweet_ios_index" btree (id, user_id, creation_time)
>
> Shouldn't the tweet_
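A hedged sketch of how this is usually verified; the tweet table and its id,
user_id and creation_time columns are from the message above, but the index
name, WHERE values and column order shown here are illustrative only:

    -- An index-only scan needs (a) an index containing every column the query
    -- touches, ideally leading with the columns in the WHERE clause, and
    -- (b) a reasonably fresh visibility map, i.e. a recently vacuumed table.
    CREATE INDEX tweet_user_time_idx ON tweet (user_id, creation_time, id);
    VACUUM ANALYZE tweet;

    EXPLAIN (ANALYZE, BUFFERS)
    SELECT id
    FROM tweet
    WHERE user_id = 42
      AND creation_time BETWEEN '2013-11-01' AND '2013-11-04';
    -- "Index Only Scan using tweet_user_time_idx" in the plan (with a low
    -- "Heap Fetches" count) confirms it is actually being used.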
From: Caio Casimiro [mailto:casimiro.lis...@gmail.com]
Sent: Monday, November 04, 2013 4:33 PM
To: Igor Neyman
Cc: Jeff Janes; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Slow index scan on B-Tree index over timestamp field
These are the parameters I have set in postgresql.conf:
wor
Hi,
I am in need of a very robust (esp. fast in read, non-blocking in
update) tree structure storage (95% of trees are of depth <4, current max.
is 12). We have 10k-100k trees now, millions in the future.
I ran many tests and benchmarks of the usual operations, and in the end,
materialized path ('1.5.
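A minimal materialized-path sketch, assuming a plain text path column; the
table and column names are illustrative, not the poster's schema:

    CREATE TABLE tree_node (
        id      bigserial PRIMARY KEY,
        path    text NOT NULL,      -- e.g. '1.5.23': ancestor ids joined by '.'
        payload text
    );
    -- text_pattern_ops lets a btree index serve left-anchored LIKE patterns
    CREATE INDEX tree_node_path_idx ON tree_node (path text_pattern_ops);

    -- read a whole subtree (node '1.5' plus all of its descendants)
    SELECT *
    FROM tree_node
    WHERE path = '1.5' OR path LIKE '1.5.%';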
Add a job_id column to the translation table and create an index on the
job_id and translation_id columns.
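A hedged sketch of that change; the foreign-key column names used here
(translation_unit_id on TRANSLATION, document_id on TRANSLATION_UNIT, job_id on
DOCUMENT) are guesses at the model in the linked image, not confirmed names:

    ALTER TABLE "TRANSLATION" ADD COLUMN job_id bigint;

    -- backfill job_id by walking TRANSLATION -> TRANSLATION_UNIT -> DOCUMENT
    UPDATE "TRANSLATION" t
    SET    job_id = d.job_id
    FROM   "TRANSLATION_UNIT" tu
           JOIN "DOCUMENT" d ON d.id = tu.document_id
    WHERE  tu.id = t.translation_unit_id;

    CREATE INDEX translation_job_idx ON "TRANSLATION" (job_id, id);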
I suggest using denormalization: add a job_id column to the translation table
Hello desmodemone, I have looked again and again through my sar statistics,
and I don't think my DB is swapping during the freezes. For example:
sar -B
12:00:02 AM   pgpgin/s  pgpgout/s    fault/s  majflt/s  pgfree/s  pgscank/s  pgscand/s  pgsteal/s  %vmeff
09:40:01 PM      66.13     352.43  195070.33      0.00  7
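sar -B only shows paging activity, which also happens during normal file I/O;
if the question is specifically whether the box swaps during the freezes, the
swap counters are more direct:

    sar -W          # pswpin/s, pswpout/s: pages swapped in/out per second
    vmstat 1 5      # watch the si/so columns while a freeze happens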
I have a model like this:
http://i.stack.imgur.com/qCZpD.png
with approximately these table sizes (rows):
JOB: 8k
DOCUMENT: 150k
TRANSLATION_UNIT: 14.5M
TRANSLATION: 18.3M
Now the following query takes about 90 seconds to finish.
select translation.id
from "TRANSLATION" translation
inner join
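To say more about a 90-second query like this, the full plan is usually needed;
a sketch of how it is typically collected (the query body is truncated above,
so only the shape is shown):

    EXPLAIN (ANALYZE, BUFFERS)
    SELECT translation.id
    FROM "TRANSLATION" translation
    INNER JOIN ...;   -- the rest of the original joins and filters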