I've had to write queries to get trial balance values out of the GL
transaction table, and I'm not happy with its performance. The table has
76K rows, growing about 1000 rows per working day, so the performance is
not that great: it takes about 20 to 30 seconds to get all the records
for the table.
Gregory Williamson wrote:
Justin --
You wrote:
>
> I've had to write queries to get trial balance values out of the GL
> transaction table and I'm not happy with its performance
>
>
> The table has 76K rows growing about 1000 rows per working day so the
> performance is not that great it takes about 20 to 30 seconds
Yes, the cross join is intentional.
Thanks. Creating the two-column index dropped processing time to 15 to
17 seconds, and brought the per-period query down to 1 second.
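For anyone following along, here is a minimal sketch of what such a
two-column index might look like; the table and column names (gl_trans,
period_id, accnt_id) are illustrative guesses, not the actual schema:

    # Hypothetical names; substitute the real table and columns.
    psql -d mydb -c "CREATE INDEX gl_trans_period_accnt_idx
                         ON gl_trans (period_id, accnt_id);"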
Scott Marlowe wrote:
You're joining these two tables: period, accnt, but I'm not seeing an
on () clause or a where clause joining them. Is the cross product
intentional?
Justin --
You wrote:
>
> I've had to write queries to get trial balance values out of the GL
> transaction table and I'm not happy with its performance
>
>
> The table has 76K rows growing about 1000 rows per working day so the
> performance is not that great it takes about 20 to 30 seconds
On Mon, 5 May 2008, Craig James wrote:
pgbench -i -s 20 -U test
That's way too low to expect you'll see a difference in I/O schedulers.
A scale of 20 is giving you a 320MB database, you can fit the whole thing
in RAM and almost all of it in your controller cache. What's there to
schedule?
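To put numbers on that: at roughly 16MB per scale unit, a run that
actually stresses the scheduler needs a database well beyond RAM and
controller cache. A hedged sketch, assuming a machine with about 8GB of
RAM (the scale and client counts are arbitrary choices):

    # Assumption: ~8GB of RAM, so target a database several times larger.
    # At roughly 16MB per scale unit, scale 2000 is about 32GB.
    pgbench -i -s 2000 -U test bench
    # Run long enough to get past cache warm-up; -t is transactions per client.
    pgbench -c 8 -t 50000 -U test bench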
You're joining these two tables: period, accnt, but I'm not seeing an
on () clause or a where clause joining them. Is the cross product
intentional?
But what I'm seeing that seems like the lowest-hanging fruit would be
two-column indexes on the bits that are showing up in those bitmap
scans. Li
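To make that concrete, here is a sketch of how to spot the pattern; the
query and names are illustrative assumptions, not the original poster's
schema:

    # Illustrative query: a "BitmapAnd" of two single-column bitmap index
    # scans in the plan output is the hint that one two-column index can
    # replace them.
    psql -d mydb -c "EXPLAIN ANALYZE
                     SELECT sum(amount) FROM gl_trans
                     WHERE period_id = 1 AND accnt_id = 42;"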
On Mon, May 5, 2008 at 5:33 PM, Craig James <[EMAIL PROTECTED]> wrote:
>
> (It also reinforces what the pgbench man page says: Short runs aren't
> useful. Even these two-minute runs have a lot of variability. Before I
> turned off AutoVacuum, the variability was more like 50% between runs.)
I'd
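One way to act on that advice is to repeat several longer runs and
compare them; a rough sketch (run counts and parameters are arbitrary):

    # Repeat a few longer runs and compare, rather than trusting one short run.
    for i in 1 2 3; do
        pgbench -c 8 -t 20000 -U test bench | tail -n 4
    done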
Mario Weilguni <[EMAIL PROTECTED]> writes:
> I can confirm this for a considerably larger result set (4020 rows) for a DB
> with 410 tables and a lot of foreign key constraints.
> ...
> This is Postgresql 8.2.4, on a Dual-Core XEON 3.6GHz. With nested_loops
> off, I get a very fast response (330ms).
FW
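For anyone wanting to reproduce Mario's comparison, the planner toggle
involved is the enable_nestloop setting (the spelling above is
shorthand); a session-local sketch with a stand-in query:

    # Session-local experiment: does the plan improve without nested loops?
    echo "SET enable_nestloop = off;
          EXPLAIN ANALYZE
          SELECT * FROM information_schema.key_column_usage;" | psql -d mydb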
I had the opportunity to do more testing on another new server to see whether
the kernel's I/O scheduling makes any difference. Conclusion: on a
battery-backed RAID 10 system, the kernel's I/O scheduling algorithm has no
effect. This makes sense, since a battery-backed cache will supersede any
scheduling the kernel does.
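For anyone repeating this test, the Linux scheduler can be inspected and
switched per block device at runtime; the device name sda here is an
assumption:

    # Show available schedulers; the bracketed one is active.
    cat /sys/block/sda/queue/scheduler
    # Switch (as root), e.g. to deadline, then re-run the benchmark.
    echo deadline > /sys/block/sda/queue/scheduler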
On Mon, May 5, 2008 at 2:03 PM, Greg Smith <[EMAIL PROTECTED]> wrote:
> On Mon, 5 May 2008, Campbell, Lance wrote:
>
>
> > We currently backup all of our database tables per schema using pg_dump
> > every half hour. We have been noticing that the database performance
> > has been very poor during the backup process. How can I improve the
> > performance?
On Mon, 5 May 2008, Scott Marlowe wrote:
Other than that, you might wanna look at a faster / better RAID
controller in the future. One with lots of battery backed cache set to
write back.
Hopefully Lance's PERC 6/I SAS already has its battery installed. The 6/I
with 256MB of cache is decent.
On Mon, 5 May 2008, Campbell, Lance wrote:
We currently backup all of our database tables per schema using pg_dump
every half hour. We have been noticing that the database performance
has been very poor during the backup process. How can I improve the
performance?
Uh, don't do that? pg_dump
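The reply is cut off before the alternative is named, but a common one
(my assumption, not necessarily what was suggested here) is WAL
archiving plus occasional base backups in place of half-hourly dumps. A
sketch, with illustrative paths:

    # postgresql.conf addition (path is illustrative):
    #   archive_command = 'cp %p /backup/wal/%f'
    # Then take occasional base backups instead of dumping every 30 minutes:
    psql -c "SELECT pg_start_backup('base');"
    rsync -a --exclude=pg_xlog "$PGDATA"/ /backup/base/
    psql -c "SELECT pg_stop_backup();"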
On Mon, May 5, 2008 at 11:19 AM, Campbell, Lance <[EMAIL PROTECTED]> wrote:
> Scott,
> When I do a df -h I see that the database takes up a total of 8Gig of
> disk space. This is not the size of the backup file of the database.
Ok. Just wanted to make sure. Looking at vmstat, with little or n
On Mon, 2008-05-05 at 09:10 -0700, Craig James wrote:
> Campbell, Lance wrote:
> > We currently backup all of our database tables per schema using pg_dump
> > every half hour. We have been noticing that the database performance
> > has been very poor during the backup process. How can I improve the
> > performance?
On Mon, May 5, 2008 at 10:11 AM, Campbell, Lance <[EMAIL PROTECTED]> wrote:
> Scott,
> The last 6 entries are when the system is not backing up. The system
> was running fine. But the other entries are when it was backing up.
> Reads seem to be fine but any operations that need to write data j
Campbell, Lance wrote:
We currently backup all of our database tables per schema using pg_dump
every half hour. We have been noticing that the database performance
has been very poor during the backup process. How can I improve the
performance?
It sounds like the goal is to have frequent, n
On Mon, May 5, 2008 at 9:10 AM, Campbell, Lance <[EMAIL PROTECTED]> wrote:
> Scott,
> The server is a Dell PowerEdge 2900 II with the standard Perc 6/I SAS
> controller with 256 MB cache.
It's probably not gonna win any awards, but it's not too terrible.
What does vmstat 1 say during your backup?
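A small sketch of one way to capture that while the dump runs (the
schema, database, and file names are placeholders):

    # Sample once per second in the background while the dump runs.
    vmstat 1 > vmstat-during-backup.log &
    VMSTAT_PID=$!
    pg_dump -n some_schema -f some_schema.sql mydb
    kill $VMSTAT_PID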
Scott,
The server is a Dell PowerEdge 2900 II with the standard Perc 6/I SAS
controller with 256 MB cache.
Lance Campbell
Project Manager/Software Architect
Web Services at Public Affairs
University of Illinois
217.333.0382
http://webservices.uiuc.edu
-----Original Message-----
From: Scott Marlowe
On Mon, May 5, 2008 at 8:59 AM, Campbell, Lance <[EMAIL PROTECTED]> wrote:
>
> PostgreSQL: 8.2.4
You should update to 8.2.7 as a matter of periodic maintenance. It's
a very short and easy update.
> We currently backup all of our database tables per schema using pg_dump
> every half hour. We have been noticing that the database performance
> has been very poor during the backup process.
PostgreSQL: 8.2.4
We currently backup all of our database tables per schema using pg_dump
every half hour. We have been noticing that the database performance
has been very poor during the backup process. How can I improve the
performance?
Server Specs:
Dedicated DB server
Database takes up a total of 8 Gig of disk space
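For context, a half-hourly per-schema dump like the one described might
be driven by something like this illustrative crontab entry (schema,
database, and path are guesses):

    # Illustrative crontab entry: one schema dumped every 30 minutes.
    # */30 * * * * pg_dump -n accounting -f /backup/accounting.sql mydb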
Tom Lane wrote:
Ernesto <[EMAIL PROTECTED]> writes:
> I'm wondering why this query would take about 90 seconds to return 74 rows?
EXPLAIN ANALYZE might tell you something.
Is this really the query you're running? Because these two columns
don't exist:
INFORMATION_SCHEMA.KEY_COLUM
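Following Tom's suggestion, the pattern is simply to prefix the slow
query with EXPLAIN ANALYZE; the query below is a stand-in, not Ernesto's
actual one:

    # Stand-in query; replace with the actual 90-second query.
    psql -d mydb -c "EXPLAIN ANALYZE
                     SELECT table_name, column_name
                       FROM information_schema.columns
                      LIMIT 74;"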