On 2013-03-20, jg wrote:
> Hi,
>
> I have a PostgreSQL database with 50 tables.
> Every minute, a batch sequentially loads 10,000 rows of 250 bytes into each table with a COPY.
>
> After a day, I get a database of 50 tables, each with 1,440 sets of 10,000 rows.
> The tables are cleanly and naturally clustered by the inserted timestamp.
Hi,
After 30 minutes on my Linux computer, with 2 files filled one after the other,
I got fragmented files with many backward steps:
# /usr/sbin/filefrag -v 24586
Filesystem type is: ef53
File size of 24586 is 822231040 (200740 blocks, blocksize 4096)
ext logical physical expected length flags
0 …
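
For anyone reproducing this: the file name passed to filefrag (24586 above) is the table's relfilenode, which can be resolved from SQL first. A minimal sketch, assuming a table named a (the database OID 16384 in the output is hypothetical):

postgres=# select pg_relation_filepath('a');
 pg_relation_filepath
----------------------
 base/16384/24586
(1 row)

The returned path is relative to the data directory.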
Hi,
I created a test case on Linux:
postgres=# create table a (v int);
postgres=# create table b (v int);
Then I ran a while(true) loop over the following script, where 24577 and 24580
are the files of tables a and b:
#!/bin/sh
psql test -c 'insert into a select generate_series(1,10,1);'
psql test -c 'insert into b select generate_series(1,10,1);'
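
For completeness, the outer loop is just this (a sketch; the script above is assumed to be saved as fill.sh):

#!/bin/sh
# alternate small inserts into a and b forever,
# so their data files grow interleaved on disk
while true; do
    ./fill.sh
done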
jg, 20.03.2013 12:13:
I suspect the heavily fragmented files to be the cause of the growing
IO wait (PostgreSQL on Windows).
How to cope with that?
I would first investigate whether it's *really* the fragmentation.
As a database does a lot of random IO, fragmentation isn't such a big issue.
You could u…
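
One way to check the actual fragmentation on Windows is Sysinternals' Contig in analyze-only mode; a sketch, with a hypothetical data-directory path:

REM report the fragment count of one table's data file
contig -a "C:\pgdata\base\16384\24586"

If the fragment count stays low, the growing IO wait likely has another cause.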
Hi,
> That doesn't make sense then, to have fragmentation if you are creating new
> tables with fresh data copied into them. The files should be pretty much
> sequentially written.
>
> Oh, I see. You're using Windows. Maybe you need some OS with a better
> file system that doesn't fragment.
On 03/20/2013 07:14 AM, Vick Khera wrote:
On Wed, Mar 20, 2013 at 9:53 AM, jg <j...@rilk.com> wrote:
The rotation script, as explained, just drops tables and creates
empty ones.
That doesn't make sense then, to have fragmentation if you are creating
new tables with fresh data copied into them. The files should be pretty much
sequentially written.
On Wed, Mar 20, 2013 at 9:53 AM, jg wrote:
> The rotation script, as explained, just drops tables and creates empty ones.
>
That doesn't make sense then, to have fragmentation if you are creating new
tables with fresh data copied into them. The files should be pretty much
sequentially written.
Hi,
> It sounds like you are using partitioned tables. Your partitions should be
> divided up such that they help optimize your queries; that is, minimize the
> number of partitions you need to scan for any given query.
>
> That said, try to make it so that this cleanup script purges whole
> partitions.
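
Concretely, rotation then stays cheap because dropping a partition is a metadata-only operation, unlike a bulk DELETE. A sketch with hypothetical table names, using the inheritance-style partitioning of 2013-era PostgreSQL and assuming a timestamp column ts:

-- purge the oldest day in one cheap step instead of deleting rows
drop table measurements_2013_03_13;

-- recreate an empty partition for the new day
create table measurements_2013_03_21 (
    check (ts >= '2013-03-21' and ts < '2013-03-22')
) inherits (measurements);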
On Wed, Mar 20, 2013 at 7:13 AM, jg wrote:
> Now, there is a partition rotation script that drops old tables when
> some size limit is reached.
> Let's suppose that this script runs and drops only one table with a few
> days of data, then recreates a new empty one.
>
It sounds like you are using partitioned tables.
Hi,
I have a PostgreSQL database with 50 tables.
Every minute, a batch sequentially loads 10,000 rows of 250 bytes into each table with a COPY.
After a day, I get a database of 50 tables, each with 1,440 sets of 10,000 rows.
The tables are cleanly and naturally clustered by the inserted timestamp.
Each table has data…
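
For concreteness, the per-minute load looks something like this (a sketch; file and table names are hypothetical):

#!/bin/sh
# run once a minute from cron: one COPY per table, in sequence;
# \copy feeds a client-side file through COPY
for t in t01 t02 t03; do    # ...through t50
    psql test -c "\\copy $t from '/data/batch_$t.csv' with csv"
done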