Hi Martin, please CC the mailing list,
then others can reply ;)
Cédric Villemain (13:59 2008-03-31):
> On Monday 31 March 2008, Martin Kjeldsen wrote:
> > I've done the same query on an 8.2.5 database. The first one is prepared
> > first and the other is executed directly.
> >
> > I understand
Hi there,
I have an application accessing a postgres database, and I need to estimate
the following parameters:
- read / write ratio
- reads/second on typical load / peak load
- writes/second on typical load / peak load
Is there any available tool to achieve that?
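No dedicated tool is named in this thread, but a common low-tech approach is to sample the cumulative counters in pg_stat_database twice and compute deltas. A minimal sketch in Python (the tup_* columns exist in newer PostgreSQL releases; the snapshot values and the 60-second interval below are invented for illustration):

```python
# Estimate read/write rates from two samples of PostgreSQL's
# cumulative statistics (e.g. pg_stat_database). The snapshot
# values are made-up illustrations, not real measurements.

def rates(before, after, interval_s):
    """Return (reads/sec, writes/sec, read/write ratio) between
    two counter snapshots taken interval_s seconds apart."""
    reads = after["tup_returned"] - before["tup_returned"]
    writes = (after["tup_inserted"] - before["tup_inserted"]
              + after["tup_updated"] - before["tup_updated"]
              + after["tup_deleted"] - before["tup_deleted"])
    ratio = reads / writes if writes else float("inf")
    return reads / interval_s, writes / interval_s, ratio

# Two hypothetical snapshots, 60 seconds apart:
before = {"tup_returned": 1_000_000, "tup_inserted": 10_000,
          "tup_updated": 5_000, "tup_deleted": 1_000}
after = {"tup_returned": 1_600_000, "tup_inserted": 22_000,
         "tup_updated": 11_000, "tup_deleted": 3_000}

r, w, ratio = rates(before, after, 60)
print(r, ratio)  # 10000.0 reads/sec, read/write ratio 30.0
```

Sampling at typical and peak hours gives the typical-load and peak-load figures separately.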
TIA,
Sabin
Hi,
the following statement retrieves 16358 rows through a cursor, fetching
1024 rows at a time, on an 8.2.4 server:
DECLARE curs_285058224 CURSOR FOR SELECT objid, attrid, aggrid, lineid,
objval FROM atobjval WHERE objid IN
(281479288456304,281479288456359,281479288456360,281479288456384,281479288456
On Tue, 1 Apr 2008, Sabin Coanda wrote:
> I have an application accessing a postgres database, and I need to estimate
> the following parameters:
> - read / write ratio
> - reads/second on typical load / peak load
> - writes/second on typical load / peak load
> Is there any available tool to achieve that?
"Hell, Robert" <[EMAIL PROTECTED]> writes:
> When we use 20 as default_statistics_target the retrieval of the data
> takes 7.5 seconds - with 25 as default_statistics_target (with restart
> and analyze) it takes 0.6 seconds.
> The query plan is identical in both situations (row estimation differs a
Hi everyone,
I am running a test with 1 thread calling a stored
procedure in an endless loop. The stored procedure
inserts 1000 records in a table that does not have
indexes or constraints.
In the log file I see that the time to execute the
procedure sometimes jumps from 100 ms to 700 ms.
The a
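As a side note, jitter like this is easy to spot mechanically once per-call durations are logged. A minimal sketch for flagging slow calls (the sample durations and the 3x-median threshold are invented for illustration, not from the post):

```python
# Flag calls whose duration is far above the typical (median)
# duration, as with the 100 ms -> 700 ms jumps described above.
from statistics import median

def outliers(durations_ms, factor=3.0):
    """Return (index, duration) pairs exceeding factor x median."""
    m = median(durations_ms)
    return [(i, d) for i, d in enumerate(durations_ms) if d > factor * m]

# Invented log of per-call durations in milliseconds:
log = [100, 105, 98, 700, 102, 99, 710, 101]
print(outliers(log))  # [(3, 700), (6, 710)]
```

Periodic spikes at a fixed interval often line up with checkpoints; correlating the flagged timestamps with checkpoint logging is a natural next step.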
That's it - I found a simpler statement which has the same problem
(0.02 seconds vs. 6 seconds):
With cursor (6 seconds):
appcooelakdb2=> explain DECLARE curs_1 CURSOR FOR SELECT DISTINCT
t2.objid FROM atobjval t2 WHERE t2.aggrid = 0 AND t2.attrid =
281479288455385 ORDER BY t2.objid;
"Hell, Robert" <[EMAIL PROTECTED]> writes:
> That's it - I found a simpler statement which has the same problem
> (0.02 seconds vs. 6 seconds):
This isn't necessarily the very same problem --- what are the plans for
your original case with the two different stats settings?
> What's the differ
Here are the query plans for the original query - looks very similar (to
me):
EXPLAIN SELECT objid, attrid, aggrid, lineid, objval FROM atobjval WHERE
objid IN (281479288456304, ..., 285774255837674) ORDER BY
objid, attrid, aggrid, lineid;
QUERY PLAN
--
"Hell, Robert" <[EMAIL PROTECTED]> writes:
> That's CURSOR_OPT_FAST_PLAN, isn't it? Our application reads the full
> results of most cursors.
Just out of curiosity, why use a cursor at all then? But anyway, you
might want to consider running a custom build with a higher setting for
tuple_fraction.
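For context on why tuple_fraction matters here: when planning a cursor, the planner compares plans on startup cost plus a fraction of the remaining run cost, so a fast-start plan can win even when its total cost is much higher. A sketch with invented plan costs (the 0.1 fraction is the cursor default under discussion in this thread):

```python
# Sketch of the fractional-cost comparison behind cursor planning.
# The plan costs below are invented numbers for illustration.

def fractional_cost(startup, total, fraction):
    """Estimated cost of fetching `fraction` of a plan's result."""
    return startup + fraction * (total - startup)

# An index scan starts cheap but is expensive in total; a
# sort-based plan pays nearly everything up front.
indexscan = (5.0, 10000.0)   # (startup_cost, total_cost)
sortplan = (1200.0, 1300.0)

for frac in (0.1, 1.0):
    ix = fractional_cost(*indexscan, frac)
    so = fractional_cost(*sortplan, frac)
    winner = "indexscan" if ix < so else "sort"
    print(f"fraction={frac}: indexscan={ix:.1f} sort={so:.1f} -> {winner}")
```

At fraction 0.1 the index scan wins (1004.5 vs 1210.0), but an application that always reads the whole result effectively pays the fraction-1.0 cost, where the sort plan is far cheaper; that is why a fast-start plan can look disastrous when every cursor is read to completion.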
On Tue, Apr 01, 2008 at 12:42:03PM -0400, Tom Lane wrote:
>> That's CURSOR_OPT_FAST_PLAN, isn't it? Our application reads the full
>> results of most cursors.
> Just out of curiosity, why use a cursor at all then?
This isn't the same scenario as the OP, but I've used a cursor in cases where
I c
Hi
I am getting the error below when running my program.
ERROR: cannot have more than 2^32-1 commands in a transaction
SQL state: 54000
If I am not wrong, this error occurs when there are too many statements
executing in one single transaction.
But this error is occurring in a function that I am lea
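For scale: the per-transaction command counter is a 32-bit value, which is where the 2^32-1 limit comes from, and each statement executed inside a function also consumes a command id. A quick back-of-envelope check of how fast that limit is reached (the statement rate is an assumed figure):

```python
# The per-transaction command counter is 32 bits, so a single
# transaction can run at most 2**32 - 1 commands.
LIMIT = 2**32 - 1
print(LIMIT)  # 4294967295

stmts_per_sec = 50_000  # assumed rate, for illustration only
hours = LIMIT / stmts_per_sec / 3600
print(f"~{hours:.1f} hours at {stmts_per_sec} statements/sec")
```

Even at a very high statement rate this takes roughly a day of continuous work in one transaction, so hitting it usually points at a loop in a function issuing commands far more often than intended.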
Looks much better when using 0.0 for tuple_fraction for a cursor instead
of 0.1.
But why are the first 15 fetches (15360 rows) processed in 0.5 seconds while
the last fetch (998 rows) takes 7 seconds?
Are we just unlucky that the last fetch takes that long?
EXPLAIN SELECT objid, attrid, a
"Hell, Robert" <[EMAIL PROTECTED]> writes:
> But why are the first 15 fetches (15360 rows) processed in 0.5 seconds
> while the last fetch (998 rows) takes 7 seconds?
> Are we just unlucky that the last fetch takes that long?
Well, the indexscan plan is going to scan through all the rows in objid
order
Tried harder to find info on the write cycles: found some CFs that claim
2 million cycles, and found the Mtron SSDs which claim to have very advanced
wear levelling and a suitably long lifetime as a result, even with an
assumption that the underlying flash can do 100k writes only.
The 'consumer'
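A back-of-envelope lifetime estimate under those numbers, assuming perfect wear levelling (the 100k cycle figure is from the post; the drive capacity and sustained write rate are invented assumptions):

```python
# Rough flash lifetime under perfect wear levelling.
capacity_gb = 32         # assumed drive size
cycles = 100_000         # per-cell write/erase cycles (from the post)
write_rate_mb_s = 10     # assumed sustained write rate

total_mb = capacity_gb * 1024 * cycles   # total MB writable before wear-out
seconds = total_mb / write_rate_mb_s
years = seconds / (3600 * 24 * 365)
print(f"~{years:.1f} years")  # roughly a decade under these assumptions
```

In practice write amplification and imperfect wear levelling eat into this, which is why the quality of the controller's wear-levelling matters as much as the raw cycle count.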
My colleague has tested a single Mtron Mobo and a set of 4. He also
mentioned the write performance was pretty bad compared to a Western
Digital Raptor. He had a solution for that, however: just plug the SSD into
a RAID controller with decent cache performance (his favorites are the
Areca controllers).