Hi
what is your random_page_cost and seq_page_cost?
Regards
Pavel Stehule
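These two planner cost constants can be inspected and adjusted per session without touching postgresql.conf. A minimal sketch (the value shown is illustrative, not a recommendation from this thread):

```sql
-- Check the current planner cost settings:
SHOW random_page_cost;   -- default 4.0
SHOW seq_page_cost;      -- default 1.0

-- On SSDs or a mostly-cached database, lowering random_page_cost
-- makes index scans look cheaper to the planner:
SET random_page_cost = 1.5;
```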
2015-03-19 7:23 GMT+01:00 Jake Magner :
> I am having problems with a join where the planner picks a merge join and
> an
> index scan on one of the tables. Manually disabling merge joins and running
> the query both ways
I am having problems with a join where the planner picks a merge join and an
index scan on one of the tables. Manually disabling merge joins and running
the query both ways shows the merge join takes over 10 seconds while a hash
join takes less than 100ms. The planner's total cost estimate favors the
Hi,
>>
>> Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz
>> avgqu-sz await r_await w_await svctm %util
>> sda 0.00 0.00 0.00 5.00 0.00 2056.00 822.40
>> 0.00 0.00 0.00 0.00 0.00 0.00
>> sdb 0.00 0.00 1055.00 549.00 41166.50 22840.00 79.81
>> 5.28 3.28 4.94 0.10 0.62 100.00
> Your sdb is saturated
"Carson, Leonard" writes:
> There is only one server at this point. The 8.4 machine was upgraded to 9.3
> about a year ago and we have no 8.4 backups so it's difficult if not
> impossible to recreate the 8.4 environment AFAIK. One of our developers
> pointed out the discrepancy in execution t
There is only one server at this point. The 8.4 machine was upgraded to 9.3
about a year ago and we have no 8.4 backups so it's difficult if not impossible
to recreate the 8.4 environment AFAIK. One of our developers pointed out the
discrepancy in execution times. I decomposed a slow view an
Hi
2015-03-18 20:07 GMT+01:00 Vivekanand Joshi :
> Hi Team,
>
>
>
> I don't know which section this question comes under, so I am posting
> this question to both the Admin and Performance mailing lists. Apologies in
> advance.
>
>
>
> Objective:
>
>
>
> We are planning to use PostgreSQL instead o
Hi,
On 18.3.2015 20:07, Vivekanand Joshi wrote:
> Hi Team,
>
> I don't know which section this question comes under, so I am
> posting this question to both the Admin and Performance mailing lists.
> Apologies in advance.
Let's keep this in pgsql-performance.
>
> Objective:
>
> We are plannin
On 03/18/2015 12:07 PM, Vivekanand Joshi wrote:
Here are a few questions:
1.) I don't need a load balancing solution. It must be a high-availability
server, and I can work with asynchronous replication. The most important
thing here is that recovery should be as fast as possible.
What approach w
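For the 9.x versions current at the time, asynchronous streaming replication to a hot standby is the usual starting point for this. A minimal configuration sketch (hostnames and values are illustrative):

```
# primary, postgresql.conf
wal_level = hot_standby
max_wal_senders = 3

# standby, postgresql.conf
hot_standby = on

# standby, recovery.conf (pre-10 layout)
standby_mode = 'on'
primary_conninfo = 'host=primary.example port=5432 user=replicator'
```

Failover then amounts to promoting the standby (pg_ctl promote), which is typically a matter of seconds once the standby has replayed its WAL backlog.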
Hi,
On 18.3.2015 18:31, Vivekanand Joshi wrote:
> So, here is the first taste of success, which gives me the
> confidence that, if properly worked out with good hardware and
> proper tuning, PostgreSQL could be a good replacement.
>
> Out of the 9 reports which need to be migrated to Postgre
Hi Team,
I don't know which section this question comes under, so I am posting
this question to both the Admin and Performance mailing lists. Apologies in
advance.
Objective:
We are planning to use PostgreSQL instead of Netezza for our data warehouse
as well as database solutions. Right no
You can set it for the DB user or use a stored procedure.
Best regards, Vitalii Tymchyshyn
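The per-role approach works because role-level settings are applied to every new connection that role opens, which covers all of Jasper's sessions at once. A sketch, with a hypothetical role name and work_mem standing in for whatever parameter the thread was tuning:

```sql
-- Applied automatically at connection time for this role:
ALTER ROLE jasper_app SET work_mem = '256MB';

-- Verify what the role will inherit:
SELECT rolname, rolconfig FROM pg_roles WHERE rolname = 'jasper_app';
```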
Wed, 18 Mar 2015 14:48, Vivekanand Joshi wrote:
> The issue here is that the queries are running inside Jasper Reports, so
> we cannot set this for just a single query.
>
> We are accessing our reports from
The issue here is that the queries are running inside Jasper Reports, so
we cannot set this for just a single query.
We are accessing our reports from a web browser, which in turn runs the
report from the application server (Jasper). This server connects to the
PostgreSQL server.
Inside a JRXML(Jas
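One more option worth mentioning, assuming a reasonably recent pgjdbc driver: its `options` connection parameter passes server settings at connect time, so a GUC can be set in the JDBC URL that Jasper's data source uses, without changing the report or the database. A sketch (host, database, and the work_mem value are illustrative):

```
jdbc:postgresql://dbhost:5432/reports?options=-c%20work_mem%3D256MB
```

The value is URL-encoded (`%20` = space, `%3D` = `=`); the server applies it to every session opened through that data source.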
Vivekanand Joshi writes:
> So, here is the first taste of success, which gives me the confidence
> that, if properly worked out with good hardware and proper tuning,
> PostgreSQL could be a good replacement.
>
> Out of the 9 reports which need to be migrated to PostgreSQL, 3 are now
> runnin
2015-03-18 14:31 GMT-03:00 Vivekanand Joshi :
> So, here is the first taste of success, which gives me the confidence
> that, if properly worked out with good hardware and proper tuning,
> PostgreSQL could be a good replacement.
>
> Out of the 9 reports which need to be migrated to PostgreSQL
So, here is the first taste of success, which gives me the confidence
that, if properly worked out with good hardware and proper tuning,
PostgreSQL could be a good replacement.
Out of the 9 reports which need to be migrated to PostgreSQL, 3 are now
running.
Report 4 was giving an issue and I
"Carson, Leonard" wrote:
> While acknowledging that nested loops and sequential table scans
> account for 85% of the execution time which suggests that a
> better query may be needed, why would the same query run in
> seconds on 8.x but take minutes on 9.x?
First, please show the output of this
We have views that run 20x slower, or fail to complete at all, in 9.4.1,
whereas they finished in seconds in 8.4.7 on the same platform.
After upgrading from 8.4 to 9.4, I ran ANALYZE on the entire DB. Performance
improved for some but not all of the views.
Here is the explain-analyze output fro
Hi,
On Wed, Mar 18, 2015 at 12:21 PM, Steven Jones
wrote:
> Hi,
> iostat -x snapshot:
>
> avg-cpu:  %user %nice %system %iowait %steal %idle
>            0.50  0.00    2.35   15.09   0.00 82.05
>
> Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz
> avgqu-sz
On Wed, Mar 18, 2015 at 1:21 PM, Steven Jones
wrote:
> #autovacuum = on# Enable autovacuum subprocess? 'on'
> #log_autovacuum_min_duration = -1 # -1 disables, 0 logs all actions and
> #autovacuum_max_workers = 3 # max number of autovacuum
> subprocesses
Autovacuum - default:
#autovacuum = on # Enable autovacuum subprocess? 'on'
#log_autovacuum_min_duration = -1 # -1 disables, 0 logs all actions and
#autovacuum_max_workers = 3 # max number of autovacuum subprocesses
#autovacuum_naptime = 1min
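For a write-heavy data warehouse these defaults are usually too timid; a possible starting point, with illustrative values that are not from this thread:

```
# postgresql.conf: autovacuum tuned for heavy periodic loads (sketch)
autovacuum = on
log_autovacuum_min_duration = 0      # log every autovacuum run while tuning
autovacuum_max_workers = 6           # more workers for many large tables
autovacuum_vacuum_cost_limit = 2000  # let workers keep pace with bulk loads
```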
Thanks.
Kernel is: 3.5.0-36-generic #57~precise1-Ubuntu
Also - xlog is on ext4; db partition is zfs (on top of hardware RAID1+0).
Regards,
Steve
> From: ilya.kosmodemian...@postgresql-consulting.com
> Date: Wed, 18 Mar 2015 13:13:36 +0100
> Subject: Re:
On Wed, Mar 18, 2015 at 12:21 PM, Steven Jones
wrote:
> - typical DW - relatively constant periodic data loads - i.e. heavy write
> - we receive large CSV files ~ 5-10Gb every 15 minutes spread out across
> 5-7 minutes
> - DB size is ~2.5Tb; rotating load of 30 days keeps the database stable
On Wed, Mar 18, 2015 at 12:58 PM, Steven Jones
wrote:
> Yes, the BBU is on the controller; 1024Mb. It is an HP P410i controller,
> with write caching turned on at the controller and off at the disk level.
vm.dirty_background_bytes=67108864 and vm.dirty_bytes=536870912 look
reasonable for a 512MB BBU; you can cal
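The two suggested values follow a simple pattern: cap the kernel's dirty pages at roughly the size of the controller's write cache, and start background writeback at about one eighth of that. A sketch of the arithmetic:

```shell
# Derive kernel writeback limits from the BBU cache size (512MB here).
BBU_CACHE_MB=512
DIRTY_BYTES=$((BBU_CACHE_MB * 1024 * 1024))   # cap dirty pages at cache size
DIRTY_BG_BYTES=$((DIRTY_BYTES / 8))           # start background writeback early
echo "vm.dirty_bytes=$DIRTY_BYTES"                    # 536870912
echo "vm.dirty_background_bytes=$DIRTY_BG_BYTES"      # 67108864
# Apply (as root):
#   sysctl -w vm.dirty_bytes=$DIRTY_BYTES
#   sysctl -w vm.dirty_background_bytes=$DIRTY_BG_BYTES
```

Byte values (rather than the ratio-based vm.dirty_ratio knobs) are preferable on large-RAM machines, where even a few percent of 128GB would far exceed the cache.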
Hi Ilya,
Thank you for the response.
Yes, the BBU is on the controller; 1024Mb. It is an HP P410i controller,
with write caching turned on at the controller and off at the disk level.
2x 15k SAS SFF for WAL and 12x 10k SAS SFF for DB
We have tried longer settings for checkpoint_timeout, but not 1hr; so we wil
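Stretching checkpoints is the standard first step for this symptom; one possible shape of the change (values are illustrative, the thread only says longer timeouts were tried):

```
# postgresql.conf: spread checkpoint writes over a longer window (sketch)
checkpoint_timeout = 30min
checkpoint_completion_target = 0.9   # finish writes just before the next checkpoint
checkpoint_segments = 256            # pre-9.5 setting; roughly 4GB of WAL
```

The trade-off is longer crash recovery, since more WAL must be replayed after a failure.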
Hi Steven,
On Wed, Mar 18, 2015 at 12:21 PM, Steven Jones
wrote:
> - system is HP blade; 128Gb RAM, 2x 8-core, 12x 10k RPM RAID1+0 (database)
Do you have a BBU on your controller? And how is your controller
configured: I mean cache mode, IO mode, disk write cache mode. You have
15K SAS (which form fac
Apologies about the formatting; resending again as plain-text.
Regards,
Steve
From: steven.jones1...@outlook.com
To: pgsql-performance@postgresql.org
Subject: [PERFORM] Very slow checkpoints
Date: Wed, 18 Mar 2015 11:21:08 +
Hi,
We have a so far (to us) unexplainable issue on our productio
Hi,
We have a so far (to us) unexplained issue on our production systems after we
roughly doubled the amount of data we import daily. We should be fine on pure
theoretical hardware performance, but we are seeing some odd IO counters while
the actual write throughput is very low. The use