>> Assuming the table's NOT bloated, you may do well to increase the
>> effective_cache_size, which doesn't allocate anything,
>
>> try setting it to something like 512MB or so.
>
> It's currently set to 1000MB.
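(As a hedged aside: the setting can be inspected and tried per session like
this; the memory-unit syntax needs 8.2 or newer and the figure is only
illustrative.)
SHOW effective_cache_size;
SET effective_cache_size = '1000MB';  -- planner hint only, allocates no memory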
>
>
>> If your table is bloating, and you don't have idle transactions hanging
>>
> That's about 32% dead rows. Might be worth scheduling a vacuum full,
> but it's not as bad as I was afraid it might be. It looks to me like you
> could probably use a faster I/O subsystem in that machine though.
>
> If lowering the random page cost fixes your issues, then I'd just run
> with it.
> And finally, any ideas on planner constants? Here's what I'm using:
>
> seq_page_cost = 0.5 # measured on an arbitrary scale
> random_page_cost = 1.0 # same scale as above
> cpu_tuple_cost = 0.001 # same scale as above
> cpu_index_tuple_cost
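As a hedged sketch, cost settings like these can be tried in a single session
before committing them to postgresql.conf (seq_page_cost exists from 8.2 on;
substitute the real problem query for the placeholder):
BEGIN;
SET LOCAL seq_page_cost = 0.5;
SET LOCAL random_page_cost = 1.0;
EXPLAIN ANALYZE SELECT 1;  -- placeholder: run the slow query here instead
ROLLBACK;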
>
> Hi,
>
> I find various references in the list to this issue of queries
> being too slow because the planner miscalculates things and
> decides to go for a sequential scan when an index is available
> and would lead to better performance.
>
> Is this still an issue with the latest version?
> Can anybody help me out
>
> I just wanted to know what the configuration settings would be for a
> partitioned table so as to make inserts faster on the partitioned tables.
Well, that depends. Many questions are important here. Will you insert
into several partitions or only to a single one? Do yo
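For reference, a minimal sketch of the 8.x-era approach (inheritance plus a
routing trigger), with hypothetical table and column names - this is not the
poster's schema:
CREATE TABLE measurements (id bigserial, logdate date NOT NULL, value int);
CREATE TABLE measurements_2011_01
    (CHECK (logdate >= DATE '2011-01-01' AND logdate < DATE '2011-02-01'))
    INHERITS (measurements);
CREATE OR REPLACE FUNCTION measurements_insert_trigger() RETURNS trigger AS $$
BEGIN
    IF NEW.logdate >= DATE '2011-01-01' AND NEW.logdate < DATE '2011-02-01' THEN
        INSERT INTO measurements_2011_01 VALUES (NEW.*);
    ELSE
        RAISE EXCEPTION 'no partition for date %', NEW.logdate;
    END IF;
    RETURN NULL;  -- the row goes to the child table, not the parent
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER insert_measurements BEFORE INSERT ON measurements
    FOR EACH ROW EXECUTE PROCEDURE measurements_insert_trigger();
Keeping the per-row trigger logic to a few static IF branches matters for
insert speed.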
> Following is one of the update queries and its explain plan, which takes
> about 6 mins to execute. I am trying to find a way to execute it faster.
> The functions used in the update statement are simple if-then-else tests
> that return one value or the other.
>
If you want to disable it only for some tables, you can put special
values into pg_autovacuum. This won't disable the autovacuum daemon, but
some of the tables won't be vacuumed.
Tomas
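For reference, a hedged sketch of such a per-table override on an 8.1-era
server (the column names should be double-checked against the docs for your
release, -1 means "use the global default", and 'my_table' is hypothetical;
pg_autovacuum was later removed in favour of per-table storage parameters):
INSERT INTO pg_autovacuum
    (vacrelid, enabled, vac_base_thresh, vac_scale_factor,
     anl_base_thresh, anl_scale_factor, vac_cost_delay, vac_cost_limit)
VALUES ('my_table'::regclass, false, -1, -1, -1, -1, -1, -1);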
Not quite a performance question, but I can't seem to find a simple
answer to this. We're using 8.1.4 and th
On 17.11.2010 05:47, Pavel Stehule wrote:
> 2010/11/17 Humair Mohammed :
>>
>> There are no indexes on the tables either in SQL Server or Postgresql - I am
>> comparing apples to apples here. I ran ANALYZE on the postgresql tables,
Actually no, you're not comparing apples to apples. You've pr
On 16.2.2011 21:51, Chris Hoover wrote:
> All,
>
> I'm trying to estimate the size of my hot data set, and wanted to get
> some validation that I'm doing this correctly.
>
> Basically, I'm using the sum(heap_blks_read + idx_blks_read) from
> pg_statio_all_tables, and diffing the numbers over
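As a hedged sketch, the snapshot being diffed looks roughly like this (the
coalesce guards tables that have no indexes):
SELECT sum(heap_blks_read + coalesce(idx_blks_read, 0)) AS blocks_read,
       sum(heap_blks_hit  + coalesce(idx_blks_hit,  0)) AS blocks_hit
FROM pg_statio_all_tables;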
Hello,
we have some performance problems with postgres 8.0.4, more precisely
with vacuuming a 'large' database with a lot of deleted rows.
We had a 3.2 GB database, consisting mainly of 4 large tables, two of
them (say table A and B) having about 14.000.000 of rows and 1 GB of
size each, and two
Tom Lane wrote:
> Tomas Vondra <[EMAIL PROTECTED]> writes:
>> 1) drop, create and fill table B (aggregated data from A, C, D)
>> 2) copy 'old' data from A, C and D to A_old, C_old a D_old
>> 3) delete old data from A, C, D
>> 4) dump data from A_old, C_
> Probably the indexes are bloated after the vacuum full. I think the
> best way to get rid of the "fat" is to recreate both tables and indexes
> anew. For this the best tool would be to CLUSTER the tables on some
> index, probably the primary key. This will be much faster than
> VACUUMing the t
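For illustration, a hedged sketch using the pre-8.3 CLUSTER syntax (the table
and index names are hypothetical):
CLUSTER a_pkey ON a;   -- rewrites table a and rebuilds its indexes in index order
ANALYZE a;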
> I guess you're right. I forgot to mention there are 12 composite indexes
> on the largest (and not deleted) table B, having about 14.000.000 rows
> and 1 GB of data. I'll try to dump/reload the database ...
Aaargh, the problem probably is not caused by the largest table, as it
was dropped, filled
> Huh, I didn't suggest to dump/reload. I suggested CLUSTER. You need to
> apply it only to tables where you have lots of dead tuples, which IIRC
> are A, C and D.
Sorry, I should read more carefully. Will clustering a table according
to one index solve problems with all the indexes on the table
> Hi Tomas,
>
> Tomas wrote:
> We've decided to remove unneeded 'old' data, which means removing about
> 99.999% of rows from tables A, C and D (about 2 GB of data). At the
> beginning, the B table (containing data aggregated from A, C and D) was emptied
> (dropped and created) and filled in with curre
If you want to benefit from an index, the query has to contain some
WHERE conditions (on the indexed columns). This is a 'select all' query -
there is no way to speed it up using an index.
Tomas
> > Why does Postgres 8.1 not use the makse_kuupaev_idx index in the
> > following query?
>
More precisely - Postgres could use the index to speed up the
sorting, but in this case the sorting is very fast (less than one
second according to the output), so Postgres probably decided not
to use the index because it would be slower.
Btw. have you run ANALYZE on the table recently? What i
It depends what is the purpose of the function. If it's mainly a
container for a heap of SQL queries along with some simple IF, ELSE
etc. then I use two simple ways to analyze the performance (or lack
of performance):
1) I use a lot of debug messages
2) I print out all the SQL and then execute EXPLAIN on it
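A minimal sketch of approach (1), with a hypothetical function and table
(clock_timestamp() is available from 8.2 on):
CREATE OR REPLACE FUNCTION process_orders() RETURNS void AS $$
DECLARE
    t0 timestamptz := clock_timestamp();
BEGIN
    PERFORM count(*) FROM orders;  -- stand-in for one of the real queries
    RAISE NOTICE 'step 1 took %', clock_timestamp() - t0;
END;
$$ LANGUAGE plpgsql;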
I understand query tuning and table design play a large role in
performance, but taking that factor away
and focusing on just hardware, what is the best hardware to get for Pg
to work at the highest level
(meaning speed at returning results)?
Depends heavily on the particular application, but mo
The index is not used for an IS NULL condition:
create index makse_dokumnr_idx on makse(dokumnr);
explain select
sum( summa)
from MAKSE
where dokumnr is null
"Aggregate (cost=131927.95..131927.96 rows=1 width=10)"
" -> Seq Scan on makse (cost=0.00..131927.94 rows=1 width=10)"
"F
This is the critical point. You have this line:
There were 132969 unused item pointers.
Which says there's 132k or so dead rows in your table. Which means
vacuum / autovacuum isn't keeping up. Did you try and stop the update
several times? Each time it starts then gets killed it creates dead
The explain plan tree only shows the time to fetch/compute the new rows,
not to actually perform the update, update indexes, or fire triggers.
If there is a big discrepancy then the extra time must be going into
one of those steps.
8.1 does show trigger execution time separately, so the most ob
Well, you're obviously right - I didn't know this. I guess I've found
that the index is not used for null values, and deduced somehow that
NULL values are not stored in the index.
Thanks, it's nice to find out a 'bug' before it's too late :-)
regards
Tomas
Are you sure NULL values are not sto
I know that it's much faster (for us) to run many smaller queries than
one large query, and I think that it's primarily because of your reason
a. Most of our problems come from Pg misunderstanding the results of a
join and making a bad plan decision. Batching dramatically reduces the
> likelihood o
Mark Roberts wrote:
On Tue, 2008-11-18 at 17:38 +0100, [EMAIL PROTECTED] wrote:
I bet there is no 'critical' length - this is just another case of index
scan vs. seqscan. The efficiency depends on the size of the table / row,
amount of data in the table, variability of the column used in the
Thank you.
My 8.1.4 postgresql.conf does not contain such an option, so
vacuum_cost_delay is probably off.
Since the docs do not recommend any value, I planned to use 2000.
Will a value of 30 allow other clients to work while VACUUM FULL is running?
No, as someone already noted the VACUUM FULL is bloc
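For reference, a hedged sketch of the cost-delay setting; it throttles plain
VACUUM only and does not remove the exclusive lock taken by VACUUM FULL (the
table name is hypothetical):
SET vacuum_cost_delay = 20;   -- milliseconds; 0, the default, disables the throttling
VACUUM ANALYZE some_table;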
2. Run the following commands periodically in this order:
VACUUM FULL;
vacuum full pg_shdepend;
CLUSTER rid on (toode);
CLUSTER dok on (kuupaev);
REINDEX DATABASE mydb;
REINDEX SYSTEM mydb;
ANALYZE;
Are all those commands required, or can something be left out?
Running CLUSTER after VACUUM FULL
Risky to try on the prod server. Requires creating a randomly distributed
product_id test case to measure the difference.
What should I do next?
I guess you have backups - take them, restore the database on a
different machine (preferably with the same / similar hw config) and
tune the queries on it.
My test computer has PostgreSQL 8.3, 4 GB RAM, SSD disks and an Intel X2Extreme
CPU, so it is much faster than this prod server.
No idea how to emulate that environment.
I can create a new db on the prod server as a copy of the old one, but it can
be used late at night only.
Well, a faster but comparable system may not
Scott,
thank you.
> work_mem = 512
This is very easy to try. You can change work_mem for just a single
session, and this can in some cases help performance quite a bit, and in
others not at all.
I would not recommend having it lower than at least 4MB on a server like
that unless you have a lo
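For example (a hedged sketch; the values are only illustrative, and the plain
numbers are kilobytes - the '64MB' unit syntax needs 8.2 or newer):
SET work_mem = 8192;           -- affects only the current session
SET LOCAL work_mem = 65536;    -- inside a transaction, affects only that transaction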
On 6.4.2011 17:33, Kevin Grittner wrote:
> "Maria L. Wilson" wrote:
>
>> Autovacuum is not running - but regular vacuums are being done
>> twice daily.
>
> Is the ANALYZE keyword used on those VACUUM runs? What version of
> PostgreSQL is this? If it's enough to need fsm settings, do you
On 4.4.2011 16:32, Kevin Grittner wrote:
> Nothing there makes a write glut on checkpoint less likely to be the
> cause. Without a BBU write-back cache it is actually *more* likely,
> and having enough RAM to hold the whole database makes it *more*
> likely. If you haven't placed your pg_xlo
On 31.3.2011 19:26, Laszlo Nagy wrote:
> For this query:
>
> select pp.id,pp.product_id,pp.selling_site_id,pp.asin
> from product_price pp
> where
> (pp.asin is not null and pp.asin<>'')
> and (pp.upload_status_id<>1)
> and pp.selling_site_id in (8,7,35,6,9)
> and (pp.last_od < 'now'::timesta
There's a quite nice description in the docs:
http://www.postgresql.org/docs/9.0/interactive/row-estimation-examples.html
and a some more details for indexes:
http://www.postgresql.org/docs/9.0/interactive/index-cost-estimation.html
A bit more info about how this is used is available in this pr
On 12.4.2011 19:23, Ogden wrote:
>
> On Apr 12, 2011, at 12:18 PM, Andreas Kretschmer wrote:
>
>> Ogden wrote:
>>
>>> I have been wrestling with the configuration of the dedicated Postges 9.0.3
>>> server at work and granted, there's more activity on the production server,
>>> but
>>> the
On 12.4.2011 20:28, Ogden wrote:
>
> On Apr 12, 2011, at 1:16 PM, Tomas Vondra wrote:
>
>> On 12.4.2011 19:23, Ogden wrote:
>>>
>>> On Apr 12, 2011, at 12:18 PM, Andreas Kretschmer wrote:
>>>
>>>> Ogden wrote:
>>>>
On 12.4.2011 23:19, Ogden wrote:
>
> On Apr 12, 2011, at 4:09 PM, Tomas Vondra wrote:
>
>> On 12.4.2011 20:28, Ogden wrote:
>>>
>>> On Apr 12, 2011, at 1:16 PM, Tomas Vondra wrote:
>>>
>>>> On 12.4.2011 19:23, Ogden wrote
On 12.4.2011 09:33, Claudio Freire wrote:
> I actually implemented a statistical system for measuring these kinds
> of correlations.
>
> It's complex, but it might be adaptable to pgsql. Furthermore, one of
> the latest projects of mine was to replace the purely statistical
> approach with SV
On 12.4.2011 01:22, Radhya sahal wrote:
> Thanks Mr Nathan Boley ,
> I want these equations in order to solve these equations of parameters and
> total time, to get a formula for each parameter.
> I need these formulas in my experiments; it is very important to know the
> rate for each parameter in total
On 14.4.2011 00:05, Nathan Boley wrote:
>>> If you model the costing to reflect the reality on your server, good
>>> plans will be chosen.
>>
>> Wouldn't it be "better" to derive those costs from actual performance
>> data measured at runtime?
>>
>> Say, pg could measure random/seq page cost,
On 14.4.2011 01:10, Claudio Freire wrote:
> On Thu, Apr 14, 2011 at 12:19 AM, Tomas Vondra wrote:
>>
>> Another issue is that when measuring multiple values (processing of
>> different requests), the decisions may be contradictory so it really
>> can't be
On 24.4.2011 11:38, Stefan Keller wrote:
> AFAIK it helps at least bulk loading my data every other time.
Yes, but this thread was about setting the DB for read-only workload, so
those settings were a bit strange.
> So I'm confused and backup again: Given a single-disk virtual Linux
> system
On 26.4.2011 07:35, Robert Haas wrote:
> On Apr 13, 2011, at 6:19 PM, Tomas Vondra wrote:
>> Yes, I've had some lectures on non-linear programming so I'm aware that
>> this won't work if the cost function has multiple extremes (valleys /
>> hills etc.)
On 27.4.2011 20:56, Kevin Grittner wrote:
> Greg Smith wrote:
>
>> The reason no work can be done in this area is because there are
>> no standardized benchmarks of query execution in PostgreSQL being
>> run regularly right now. Bringing up ideas for changing the
>> computation is easy; p
On 27.4.2011 23:55, Greg Smith wrote:
> The easiest place to start is by re-using the work already done by the
> TPC for benchmarking commercial databases. There are ports of the TPC
> workloads to PostgreSQL available in the DBT-2, DBT-3, and DBT-5 tests;
> see http://wiki.postgresql.org/wi
On 2.5.2011 06:53, Adarsh Sharma wrote:
> I am also in need of proper documentation that explains how to set the
> SHMMAX and SHMALL variables for Postgres.
>
> What things need to be taken into consideration before doing that?
> What should the values of SHMMAX & SHMALL be if you have 16 GB RAM for Postgre
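As a purely illustrative, hedged sketch for a 16 GB Linux box with 4 kB pages
(adjust the numbers to the shared_buffers you actually plan to use):
# /etc/sysctl.conf
kernel.shmmax = 8589934592    # largest single shared memory segment, in bytes (8 GB)
kernel.shmall = 4194304       # total shared memory, in pages (16 GB / 4 kB)
# apply with: sysctl -p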
On 10.5.2011 18:22, Shaun Thomas wrote:
> On 05/10/2011 10:06 AM, Maciek Sakrejda wrote:
>
>>> I have an 8-core server. I wanted to ask whether a query can be divided
>>> across multiple processors or cores, and if it can, what to do in postgresql
>>
>> No, at this time (and for the foreseeable fut
On 12.5.2011 17:39, Willy-Bas Loos wrote:
> Hi,
>
> We have some indexes that don't seem to be used at all.
> I'd like to know since when they have not been used.
> That is, the time when postgres started counting to reach the number
> that is in pg_stat_user_indexes.idx_scan
>
> Is there a
On 12.5.2011 22:03, Willy-Bas Loos wrote:
> Then, are the index scans counted in a memory variable and written at
> analyze time?
No, I believe raghu mixed two things - stats used by the planner and
stats about access to the data (how many tuples were read using an
index, etc.)
Stats for the
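For completeness, a hedged sketch of how to list indexes with no recorded
scans since the statistics were last reset:
SELECT schemaname, relname, indexrelname, idx_scan
FROM pg_stat_user_indexes
WHERE idx_scan = 0
ORDER BY schemaname, relname;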
On 19.5.2011 23:13, Strange, John W wrote:
> Am I reading this right in that the sort is taking almost 8 seconds?
You're probably reading it wrong. The sort itself takes about 1 ms (just
subtract the numbers in "actual="). If you include all the overhead it
takes about 2.3 seconds (the hash j
On 23.5.2011 19:01, Maciek Sakrejda wrote:
>> You're probably reading it wrong. The sort itself takes about 1 ms (just
>> subtract the numbers in "actual=").
>
> I thought it was cost=startup_cost..total_cost. That is not quite the
> same thing, since startup_cost is effectively "cost to prod
On 23.5.2011 15:30, Shaun Thomas wrote:
> On 05/17/2011 07:45 AM, Andrey Vorobiev wrote:
>
>> 2011-05-17 18:55:51 NOVST LOG: checkpoint starting: xlog
>> 2011-05-17 18:57:20 NOVST LOG: checkpoint complete: wrote 62861 buffers
>> (24.0%); 0 transaction log file(s) added, 0 removed, 0 recycled;
On 23.5.2011 21:05, Robert Haas wrote:
> On Mon, May 23, 2011 at 2:46 PM, Tomas Vondra wrote:
>> Really? He already has 64 checkpoint segments, which is about 1GB of
>> xlog data. The real problem is that the amount of buffers to write is
>> constantly growing. At the be
On 24.5.2011 07:24, Terry Schmitt wrote:
> As near as I can tell from your test configuration description, you have
> JMeter --> J2EE --> Postgres.
> Have you ruled out the J2EE server as the problem? This problem may not
> be the database.
> I would take a look at your app server's health and
On 20.6.2011 18:04, Sushant Sinha wrote:
>
> On Mon, 2011-06-20 at 10:58 -0500, Kevin Grittner wrote:
>> Sushant Sinha wrote:
>>
>>> I have a tsvector column docvector and a gin index on it
>>> docmeta1_docvector_idx
>>>
>>> I have a simple query "select * from docmeta1 where docvector @@
On 20.6.2011 09:57, Didik Prasetyo wrote:
> hi friends, I have a performance problem with my Postgres database. How do
> I find slow queries in Postgres? I want to kill or stop the queries that
> make Postgres slow. In the server status view in pgAdmin, the query and
> how long the query has been running sometimes do not
On 27.6.2011 17:58, Jenish wrote:
>
> Hi,
>
> I have already checked all the statements present in the trigger; none of
> them takes more than 20 ms.
>
> I am using an 8-processor, quad-core server; CPU utilization is more than
> 90-95 % for all of them (htop result)
So all cores are 95% utilized? Tha
On 27.6.2011 17:01, Jenish wrote:
> Hi,
>
> DB : POSTGRES 8.4.8
> OS : Debian
> HD : SAS 10k rpm
>
> shared_buffers is 4096 (25% of RAM), effective_cache_size is 8GB (75% of RAM)
How much data are we talking about? Does that fit into the shared
buffers or is it significantly larger? Do the trigger
On 27.6.2011 22:14, Jenish wrote:
> And I am testing the limit for concurrent active users. When I am running
> my test with 400 concurrent users, i.e. active connections, I am getting
> good performance, but when I am running the same test for 950 concurrent
> users I am getting very
On 28.6.2011 23:28, Craig McIlwee wrote:
> Daily table explain analyze: http://explain.depesz.com/s/iLY
> Half month table explain analyze: http://explain.depesz.com/s/Unt
Are you sure those two queries are exactly the same? Because the daily
case output says the width is 50B, while the half-
On 29.6.2011 01:26, Craig McIlwee wrote:
>> On 28.6.2011 23:28, Craig McIlwee wrote:
>> Are you sure those two queries are exactly the same? Because the daily
>> case output says the width is 50B, while the half-month case says it's
>> 75B. This might be why the sort/aggregate steps are s
On 29.6.2011 01:50, Craig McIlwee wrote:
>> > work_mem: 512MB
>> > shared_buffers: 64MB, 512MB, and 1024MB, each yielded the same query
>> > plan and took the same amount of time to execute give or take a few
>> > seconds
>>
>> shared_buffers doesn't normally impact the query plan; it impacts
On 6.7.2011 15:30, bakkiya wrote:
> Any help, please?
According to the EXPLAIN ANALYZE output (please, don't post it to the
mailing list directly - use something like explain.depesz.com, I've done
that for you this time: http://explain.depesz.com/s/HMN), you're doing a
UNIQUE over a lot of da
On 18.7.2011 22:11, k...@rice.edu wrote:
>> > In my testing I have a 32bit CentOS on the x3450, but a 64bit CentOS
>> > on the E5335. Can this make such a big difference, or should they
>> > perform fairly close to the same speed? Both servers have 8GB of
>> > RAM, and the database I tested wi
On 25.7.2011 09:11, Adarsh Sharma wrote:
> I went through the link, so it is impossible to get the data back.
> I have following files in my pg_xlog directory :
>
> 000100070091
> 000100070092
> 000100070093
> 000100070094
> 0001000700
On 10 August 2011, 16:26, Grzegorz Blinowski wrote:
> Now, the query above takes about 60sec to execute; exactly: 70s for the
> first run and 60s for the next runs. In my opinion this is too long: It
> should take 35 s to read the whole table into RAM (assuming 100 MB/s
> transfers - half the HDD b
On 10.8.2011 19:01, Grzegorz Blinowski wrote:
> However, changing shared_mem didn't help. We also checked system I/O
> stats during the query - and in fact there is almost no IO (even with
> suboptimal shared_memory). So the problem is not disk transfer/access
> but rather the way Postgres han
Hi,
I've run a lot of pgbench tests recently (trying to compare various fs,
block sizes etc.), and I've noticed several really strange results.
Each benchmark consists of three simple steps:
1) set-up the database
2) read-only run (10 clients, 5 minutes)
3) read-write run (10 clients, 5 minutes
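In terms of commands, those three steps look roughly like this (a hedged
sketch; the scale factor and database name are illustrative):
pgbench -i -s 100 bench         # 1) set up the test database
pgbench -S -c 10 -T 300 bench   # 2) read-only run: SELECT-only, 10 clients, 5 minutes
pgbench -c 10 -T 300 bench      # 3) read-write run: default TPC-B-like mix, 10 clients, 5 minutes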
On 17 August 2011, 3:35, Ogden wrote:
> Hope all is well. I have received tremendous help from this list prior and
> therefore wanted some more advice.
>
> I bought some new servers and instead of RAID 5 (which I think greatly
> hindered our writing performance), I configured 6 SCSI 15K drives with
On 17 August 2011, 16:28, Ogden wrote:
> I was wondering what the best parameters would be with my new setup. The
> work_mem obviously will increase as will everything else as it's a 64Gb
> machine as opposed to a 16Gb machine. The configuration I posted was for
> a 16Gb machine but this new one is
On 17 August 2011, 18:39, Ogden wrote:
>> Yes, but it greatly depends on the amount of WAL and your workload. If
>> you
>> need to write a lot of WAL data (e.g. during bulk loading), this may
>> significantly improve performance. It may also help when you have a
>> write-heavy workload (a lot of cli
On 17 August 2011, 21:22, Ogden wrote:
>> This is a very important point. I've found on most machines with
>> hardware caching RAID and 8 or fewer 15k SCSI drives it's just as
>> fast to put it all on one big RAID-10 and if necessary partition it to
>> put the pg_xlog on its own file system. Afte
On 18 August 2011, 0:40, hyelluas wrote:
> Should I start with replacing the sql with procedures?
>
> Should I start with replacing the views with the procedures to save time
> on
> recreating an execution plan and parsing?
>
> Should I start with tuning server parameters ?
Yes, you should start b
On 24 August 2011, 20:48, gnuo...@rcn.com wrote:
> It's worth knowing exactly what that means. Turns out that NAND quality
> is price specific. There's gooduns and baduns. Is this a failure in the
> controller(s) or the NAND?
Why is that important? It's simply a failure of electronics and it ha
On 24 August 2011, 21:41, Merlin Moncure wrote:
> On Wed, Aug 24, 2011 at 2:32 PM, Tomas Vondra wrote:
>> On 24 August 2011, 20:48, gnuo...@rcn.com wrote:
>>> Also, given that PG is *nix centric and support for TRIM is win
>>> centric,
>>> having that m
On 24 August 2011, 21:42, gnuo...@rcn.com wrote:
>
>
> Original message
>>Date: Wed, 24 Aug 2011 21:32:16 +0200
>>From: pgsql-performance-ow...@postgresql.org (on behalf of "Tomas Vondra"
>> )
>>Subject: Re: [PERFORM] Reports from SSD
On 29 August 2011, 11:13, Tasdassa Asdasda wrote:
> Hi. I have a table called work (id bigserial, userid int4, kind1 enum,
> kind2 enum, kind3 enum, value bigint, modified timestamp)
> Table will have about 2*10^6 rows (at same time - overall it can have
> higher IDs but old records are eventually d
On 31 August 2011, 13:19, Jayadevan M wrote:
> Hello,
>
>> >
>> > Please run EXPLAIN ANALYZE on the query and post that, it's hard to say
>> > what's wrong from just the query plan, without knowing where the time is
>> > actually spent.
>> Here is the explain analyze
>> http://explain.depesz.com
On 31 August 2011, 15:59, Andy Colson wrote:
> I assume:
> Buckets: 16384 Batches: 1 Memory Usage: 4531kB
>
> That means a total of 4.5 meg of ram was used for the hash, so if my
> work_mem was lower than that it would swap? (or choose a different plan?)
Why don't you try that? Just set the work
On 4 September 2011, 20:06, Jayadevan wrote:
> I don't think I understood all that. Anyway, is there a way to fix this -
> either by rewriting the query or by creating an index? The output does
> match
> what I am expecting. It does take more than 10 times the time taken by
> Oracle for the same result
On 3 September 2011, 9:26, Gerhard Wohlgenannt wrote:
> Dear list,
>
> we are encountering serious performance problems with our database.
> Queries which took around 100ms or less last week now take several
> seconds.
>
> The database runs on Ubuntu Server 10.4.3 (kernel: 2.6.32-33) on
> hardware as fo
On 5 September 2011, 15:51, Andy Colson wrote:
> On 09/05/2011 02:48 AM, Tomas Vondra wrote:
>> That seems a bit slow ... 27MB/s for writes and 41MB/s for reads is a bit
>> slow with 8 drives.
>>
>> Tomas
>>
>>
>
> Agreed, that's really slow. A single
On 5 September 2011, 16:08, Gerhard Wohlgenannt wrote:
> Below please find the results of vmstat 2 over some period of time ..
> with normal database / system load.
What does a "normal load" mean? Does that mean a time when the queries are
slow?
Are you sure the machine really has 48GB of RAM? Becaus
On 5 September 2011, 21:07, Andy Colson wrote:
> On 09/05/2011 01:45 PM, Scott Marlowe wrote:
>> On Mon, Sep 5, 2011 at 8:08 AM, Gerhard Wohlgenannt
>> wrote:
>>> Below please find the results of vmstat 2 over some period of time ..
>>> with
>>> normal database / system load.
>>>
> 2 1 1344204 240924
On 6 September 2011, 10:26, Gerhard Wohlgenannt wrote:
> Thanks a lot to everybody for their helpful hints!!!
>
> I am running all these benchmarks while the VMs are up .. with the
> system under something like "typical" loads ..
>
> The RAID is hardware based. One of my colleagues will check if there is
On 6 September 2011, 10:55, Gerhard Wohlgenannt wrote:
>
>> That's why I love dstat, just do this
>>
>> $ dstat -C 0,1,2,3,4,5,6,7
>>
>> and you know all you need.
>
> dstat looks like a very nice tool, results below ..
> (now the system load seems a bit lower than before when generating
> results for v
On 8 September 2011, 14:51, Anibal David Acosta wrote:
> Hi!
>
>
>
> I have a table, not too big, with approx. 5 million rows; this table
> must handle 300 to 400 selects per second, but must also handle 10~20
> delete/insert/update per second.
>
> So, I need to know if the insert/delete/update really a
On 12.9.2011 00:44, Anthony Presley wrote:
> We've currently got PG 8.4.4 running on a whitebox hardware set up,
> with (2) 5410 Xeon's, and 16GB of RAM. It's also got (4) 7200RPM
> SATA drives, using the onboard IDE controller and ext3.
>
> A few weeks back, we purchased two refurb'd HP DL3
On 14 September 2011, 15:09, MirrorX wrote:
> here is the explain analyze output->
> server=# explain analyze select count(*) from temp_by_hour where xid > 100
> and xdate > now() - interval '1 week';
> QUERY PLAN
> --
On 14 September 2011, 17:14, MirrorX wrote:
> Thanks for the answer.
>
> - What is the problem, i.e. what behaviour you expect?
> - How much data is the table?
> - What portion of it matches the conditions?
> - What is the index definition?
>
> I think in my first post I provided most of these details but
On 15.9.2011 01:40, Tom Lane wrote:
> Stefan Keller writes:
>> 2011/9/14 Tom Lane writes:
>>> (...) I think that
>>> the current state of affairs is still what depesz said, namely that
>>> there might be cases where they'd be a win to use, except the lack of
>>> WAL support is a killer. I i
On 28 September 2011, 9:05, Greg Smith wrote:
> Venkat Balaji wrote:
>>
>> 1. Big Full Table Scans
>> 2. Table with high IOs (hot tables)
>> 3. Highly used Indexes
>> 4. Tables undergoing high DMLs with index scans 0 (with unused indexes)
>> 5. Index usage for heap blk hits
>> 6. Tracking Checkpoints
>
Hi,
On 28 October 2011, 19:27, Sorbara, Giorgio (CIOK) wrote:
> Dear all,
>
> I am new to PG but I have a solid background on tuning in Oracle and
> MSSQL.
> I have a query coming out from a piece of software from our SW-Stack (I
> can't change it) and of course it takes a large amount of time.
>
>
On 28 October 2011, 18:11, Merlin Moncure wrote:
> On Fri, Oct 28, 2011 at 10:28 AM, Marcus Engene wrote:
>> Hi list,
>>
>> Every now and then I have write peaks which causes annoying delay on my
>> website. No particular reason it seems, just that laws of probability
>> dictates that there will be
Hi,
On 28 October 2011, 17:28, Marcus Engene wrote:
> Hi list,
>
> Every now and then I have write peaks which causes annoying delay on my
> website. No particular reason it seems, just that laws of probability
> dictates that there will be peaks every now and then.
>
> Anyway, thinking of ways to m
On 28 October 2011, 20:40, Merlin Moncure wrote:
> sure, but then you have to have a more complicated setup with a
> drive(s) designated for WAL, another for storage, etc. Also, your
> argument falls away if the WAL is shared with another drive. The era
> of the SSD is here. All new systems I plan w
On 1 November 2011, 10:57, Marcus Engene wrote:
> Hi Hashim,
>
> One workaround I've done is if something looking like this
>
> select
> ...
> from
> table_linking_massive_table tlmt
> ,massive_table mt
> ,some_table1 st1
> ,some_table2 st2
> ,some_table3 st3
> ,so
On 3 November 2011, 16:02, Mario Weilguni wrote:
> On 02.11.2011 08:12, Mohamed Hashim wrote:
>> Dear All
>>
>> Thanks for your suggestions & replies.
>>
>> Below is the sample query which I ran for one particular bill_id
>>
>> EXPLAIN ANALYZE SELECT abd.bill_no as
>> bill_no,to_char(abd.bil
On 3 November 2011, 16:52, siva palanisamy wrote:
> I basically have 3 tables. One being the core table and the other 2 depend
> on the 1st. I have the requirement to add up to 7 records in the
> tables.
> I do have constraints (primary & foreign keys, index, unique etc) set for
> the tables. I