Hi,
My database seems to be taking too long for a SELECT count(*).
I think there are a lot of dead rows. I do a VACUUM FULL and it improves,
but again the performance drops in a short while.
Can anyone please tell me if anything is wrong with my FSM settings?
current fsm=55099264 (not sure how I calculate
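A minimal sketch of how one might check whether the free space map is big enough and keep count(*) fast without resorting to VACUUM FULL (the table name and the settings values below are only examples, assuming a 7.3/7.4-era server):

VACUUM VERBOSE;
-- Look at the per-table removable/nonremovable counts (and, in newer
-- releases, the free-space-map summary printed at the end).
-- max_fsm_pages in postgresql.conf should be at least the number of pages
-- holding reusable free space, and max_fsm_relations should cover the
-- number of tables and indexes, for example (illustrative values only):
--   max_fsm_pages     = 200000
--   max_fsm_relations = 1000
SHOW max_fsm_pages;

-- SELECT count(*) always scans the whole table, so frequent plain VACUUMs
-- (keeping dead-row bloat down) matter more than occasional VACUUM FULLs.
SELECT count(*) FROM profiles;  -- "profiles" is just an example table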
On Friday 14 November 2003 12:51, Rajesh Kumar Mallah wrote:
> Hi,
>
> My database seems to be taking too long for a SELECT count(*).
> I think there are a lot of dead rows. I do a VACUUM FULL and it improves,
> but again the performance drops in a short while.
> Can anyone please tell me if anything is wrong with my FSM settings?
Hi Everyone,
I am using PostgreSQL 7.3.2 and have used earlier versions (7.1.x onwards),
and with all of them I noticed the same problem with INSERTs when there is a
large data set. Just so you guys can compare the time it takes to insert
one row into a table when there are only a few rows present and w
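Not from the original mail, but a simple hedged way to reproduce that comparison in psql (the table and column names are invented; \timing only reports client-side wall-clock time):

\timing
INSERT INTO jobs_small (job_id, status) VALUES (1, 'queued');
INSERT INTO jobs_large (job_id, status) VALUES (1, 'queued');
-- If the large table is much slower, check what extra per-row work it has:
-- indexes, foreign keys and triggers all add cost to every INSERT.
\d jobs_large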
Heya,
FYI just spotted this and thought I would pass it on, for all those who are
looking at new boxes.
http://www.theinquirer.net/?article=12665
http://www.promise.com/product/product_detail_eng.asp?productId=112&familyId=2
Looks like a four-channel hot-swap IDE (SATA) hardware RAID controller
On Fri, 14 Nov 2003 20:38:33 +1100 (EST)
Slavisa Garic <[EMAIL PROTECTED]> wrote:
> Any help would be greatly appreciated, even pointing me in the right
> direction on where to ask this question. By the way, I designed the
> database this way as my application that uses PGSQL a lot during the
> executi
Martha Stewart called it a Good Thing when [EMAIL PROTECTED] (Rajesh Kumar Mallah)
wrote:
> INFO: "profiles": found 0 removable, 369195 nonremovable row versions in 43423 pages
> DETAIL: 246130 dead row versions cannot be removed yet.
> Nonremovable row versions range from 136 to 2036 bytes long
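A hedged aside: "dead row versions cannot be removed yet" usually means some backend is holding an old open transaction, so VACUUM must keep those versions visible to it. One way to look for the culprit (column names are the 7.3/7.4-era ones, and stats_command_string must be on to see query text):

SELECT procpid, usename, current_query
FROM pg_stat_activity;
-- Backends sitting in "<IDLE> in transaction" for a long time are the
-- usual suspects; once those sessions commit or close, a later VACUUM
-- can actually reclaim the dead rows.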
Dear Gurus,
I have two SQL functions that produce different times and I can't understand
why. Here is the basic difference between them:
CREATE FUNCTION test_const_1234 () RETURNS int4 AS '
SELECT ... 1234 ... 1234 1234 ...
' LANGUAGE 'SQL';
CREATE FUNCTION test_param (int4) RETURNS int4 AS '
> Does actor_case_assignment contain more columns than just the two ids?
> If yes, do these additional fields account for ca. 70 bytes per tuple?
> If not, try
> VACUUM FULL ANALYSE actor_case_assignment;
actor_case_assignment has its own primary key and a "role" field in addition
to the id
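A rough way to sanity-check the average on-disk row width from the system catalogs (assuming the default 8 kB block size and statistics that are reasonably fresh from VACUUM/ANALYZE):

SELECT relpages,
       reltuples,
       relpages * 8192.0 / reltuples AS approx_bytes_per_row
FROM pg_class
WHERE relname = 'actor_case_assignment';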
Josh Berkus <[EMAIL PROTECTED]> writes:
> The only thing you're adding to the query is a second SORT step, so it
> shouldn't require any more time/memory than the query's first SORT
> did.
Interesting -- I wonder if it would be possible for the optimizer to
detect this and avoid the redundant inner sort ... (/me muses to himself)
On Fri, 14 Nov 2003 11:00:38 -0500, "Nick Fankhauser"
<[EMAIL PROTECTED]> wrote:
>Good question... I've never used clustering in PostgreSQL before, so I'm
>unsure. I presume this is like clustering in Oracle where the table is
>ordered to match the index?
Yes, something like that. With the except
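For reference, the 7.3/7.4-era syntax looks like this (the index and table names below are only examples):

CLUSTER actor_case_assignment_case_id ON actor_case_assignment;
-- Rewrites the table in the order of the named index.  The ordering is not
-- maintained for later inserts and updates, so CLUSTER has to be re-run
-- periodically, and it takes an exclusive lock while rewriting the table.
ANALYZE actor_case_assignment;  -- refresh statistics after the rewrite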
Neil Conway <[EMAIL PROTECTED]> writes:
> Interesting -- I wonder if it would be possible for the optimizer to
> detect this and avoid the redundant inner sort ... (/me muses to
> himself)
I think the ability to generate two sort steps is a feature, not a bug.
This has been often requested in conn
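The original query is not shown in this digest, but a generic shape that legitimately needs two Sort nodes would be something like the following (purely illustrative table and column names):

EXPLAIN
SELECT *
FROM (SELECT DISTINCT ON (case_id) case_id, actor_id
      FROM actor_case_assignment
      ORDER BY case_id, actor_id) AS sub
ORDER BY actor_id;
-- The inner ORDER BY feeds DISTINCT ON and the outer ORDER BY re-sorts the
-- reduced result, so both sorts appear in the plan and neither is redundant.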
Christopher Browne kirjutas R, 14.11.2003 kell 16:13:
> Martha Stewart called it a Good Thing when [EMAIL PROTECTED] (Rajesh Kumar Mallah)
> wrote:
> > INFO: "profiles": found 0 removable, 369195 nonremovable row versions in 43423 pages
> > DETAIL: 246130 dead row versions cannot be removed yet.
Hannu Krosing <[EMAIL PROTECTED]> writes:
> Christopher Browne kirjutas R, 14.11.2003 kell 16:13:
>> I have seen this happen somewhat-invisibly when a JDBC connection
>> manager opens transactions for each connection, and then no processing
>> happens to use those connections for a long time. The
Hannu Krosing wrote:
Christopher Browne kirjutas R, 14.11.2003 kell 16:13:
Martha Stewart called it a Good Thing when [EMAIL PROTECTED] (Rajesh Kumar Mallah) wrote:
INFO: "profiles": found 0 removable, 369195 nonremovable row versions in 43423 pages
DETAIL: 246130 dead row versions cannot be removed yet.
After a long battle with technology, [EMAIL PROTECTED] (Hannu Krosing), an earthling,
wrote:
> Christopher Browne kirjutas R, 14.11.2003 kell 16:13:
>> Martha Stewart called it a Good Thing when [EMAIL PROTECTED] (Rajesh Kumar Mallah)
>> wrote:
>> > INFO: "profiles": found 0 removable, 369195 nonremovable row versions in 43423 pages
Will LaShell <[EMAIL PROTECTED]> writes:
> Hannu Krosing wrote:
>> Can't the backend be made to delay the "real" start of transaction until
>> the first query gets executed?
> That seems counterintuitive, doesn't it? Why write more code in the
> server when the client is the thing that has the
"=?iso-8859-2?B?U1rbQ1MgR+Fib3I=?=" <[EMAIL PROTECTED]> writes:
> I have two SQL functions that produce different times and I can't understand
> why.
The planner often produces different plans when there are constants in
WHERE clauses than when there are variables, because it can get more
accurate
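A minimal illustration of that difference, modelled on the two functions quoted earlier (the table and column here are invented; the point is only that the planner sees the literal 1234 in the first body but only a parameter symbol in the second):

-- constant folded into the body: planned with the actual value and its stats
CREATE FUNCTION test_const_1234 () RETURNS int4 AS '
    SELECT count(*)::int4 FROM orders WHERE customer_id = 1234;
' LANGUAGE 'SQL';

-- parameterized: the planner only sees $1 and must use a generic estimate
CREATE FUNCTION test_param (int4) RETURNS int4 AS '
    SELECT count(*)::int4 FROM orders WHERE customer_id = $1;
' LANGUAGE 'SQL';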
Hi-
I'm seeing estimates for n_distinct that are way off for a large table
(8,700,000 rows). They get better by setting the stats target higher, but
are still off by a factor of 10 with the stats set to 1000. I've noticed and
reported a similar pattern before on another table. Because this follows
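For reference, the commands involved (the statistics target of 1000 is the one mentioned above; the table and column names are placeholders):

ALTER TABLE big_table ALTER COLUMN some_col SET STATISTICS 1000;
ANALYZE big_table;

-- compare the planner's estimate with the real value
SELECT n_distinct FROM pg_stats
WHERE tablename = 'big_table' AND attname = 'some_col';
SELECT count(DISTINCT some_col) FROM big_table;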
On Fri, Nov 14, 2003 at 02:16:56PM -0500, Christopher Browne wrote:
> otherwise-unoccupied connection in the pool, in effect, doing a sort
> of "vacuum" of the connections. I don't get very favorable reactions
> when I suggest that, though...
Because it's a kludge on top of another kludge, perhaps
Slavisa Garic wrote:
> Hi Everyone,
> I am using PostgreSQL 7.3.2 and have used earlier versions (7.1.x onwards),
> and with all of them I noticed the same problem with INSERTs when there is a
> large data set. Just so you guys can compare the time it takes to insert
> one row into a table when
When I execute a transaction using embedded SQL statements in a C program,
I get the error "Error in transaction processing". I could see from the
documentation that it means "Postgres signalled to us that we cannot start,
commit or rollback the transaction".
I don't find any mistakes in the trans
"Nick Fankhauser" <[EMAIL PROTECTED]> writes:
> I'm seeing estimates for n_distinct that are way off for a large table
Estimating n_distinct from a small sample is inherently a hard problem.
I'm not surprised that the estimates would get better as the sample size
increases. But maybe we can do better
Does VACUUM ANALYZE help with the analysis, or does it also speed up the
process? I know I could try that before I ask, but the experiment is running
now and I am too curious to wait :).
Anyway, thanks for the hint,
Slavisa
On Fri, 14 Nov 2003, George Essig wrote:
> Slavisa Garic wrote:
>
> > Hi Everyone,
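A hedged note on the VACUUM ANALYZE question above: plain VACUUM reclaims the space held by dead rows, ANALYZE only refreshes the planner's statistics, and VACUUM ANALYZE does both in one pass (the table name below is just the made-up one from the earlier sketch):

VACUUM jobs_large;          -- reclaim dead-row space so scans stay fast
ANALYZE jobs_large;         -- update planner statistics only
VACUUM ANALYZE jobs_large;  -- both in a single pass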
The only thing you're adding to the query is a second SORT step, so it
shouldn't require any more time/memory than the query's first SORT
did.
Interesting -- I wonder if it would be possible for the optimizer to
detect this and avoid the redundant inner sort ... (/me muses to
himself)
That's som