On Thu, Jan 05, 2006 at 06:50:22PM -0800, David Lang wrote:
> On Thu, 5 Jan 2006, Mark Liberman wrote:
>
> >Obviously, I will be testing this - but it might take a few days, as I
> >haven't figured out how to simulate the "period of inactivity" to get
> >the data flushed out of the cache ...
"Mark Liberman" <[EMAIL PROTECTED]> wrote
>
> Now, my follow-up question / assumption. I am assuming that the IO time
> is so long on that index because it has to read the entire index (for
> that file_id) into memory.
>
> Any confirmation / corrections to my assumptions are greatly appreciated.
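A back-of-envelope check of that assumption can be sketched in shell. The 313 ms figure is the bitmap index scan timing quoted later in this thread; the ~8 ms random-read latency is an assumed figure for a 2006-era disk, not something from the thread:

```shell
# If the cold-cache bitmap index scan spent its ~313 ms doing random
# 8 kB page reads at ~8 ms per seek (assumed latency), that is only on
# the order of a few dozen index pages - consistent with fetching just
# the btree pages for one file_id, not reading the entire index.
observed_ms=313
seek_ms=8
pages=$((observed_ms / seek_ms))
echo "$pages"   # prints 39
```

If the whole index had to be read, the cold-run time would scale with total index size rather than with the handful of pages matching the condition.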
On Thu, 5 Jan 2006, Mark Liberman wrote:
> Obviously, I will be testing this - but it might take a few days, as I
> haven't figured out how to simulate the "period of inactivity" to get the
> data flushed out of the cache ... so I have to run this each morning.

cat large_file >/dev/null

will probably ...
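Expanding on that suggestion with a sketch (file name and size are placeholders): reading a file larger than physical RAM through the page cache evicts whatever was cached before it, which simulates the overnight period of inactivity without waiting for it. A 16 MB file is used here only for illustration; in practice the count must exceed installed memory (e.g. count=2048 for a 2 GB file with bs=1M):

```shell
# Create a throwaway file and pull it through the OS page cache,
# displacing the previously cached table and index pages.
dd if=/dev/zero of=large_file bs=1M count=16 2>/dev/null
cat large_file > /dev/null   # read it once; older cached pages get evicted
rm large_file                # the file itself is no longer needed
echo "cache churned"
```

After running this (with a genuinely RAM-sized file), the next report run should show cold-cache timings comparable to the first run of the morning.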
On Thursday 05 January 2006 15:12, Qingqing Zhou wrote:
> "Mark Liberman" <[EMAIL PROTECTED]> wrote
>
> > First run, after a night of inactivity:
> >
> > ->  Bitmap Index Scan on 1min_events_file_id_begin_idx
> >     (cost=0.00..37.85 rows=3670 width=0)
> >     (actual time=313.468..313.468 rows=11082 loops=1)
"Mark Liberman" <[EMAIL PROTECTED]> wrote
>
> First run, after a night of inactivity:
>
> ->  Bitmap Index Scan on 1min_events_file_id_begin_idx
>     (cost=0.00..37.85 rows=3670 width=0)
>     (actual time=313.468..313.468 rows=11082 loops=1)
>       Index Cond:
Hello,
We have a web-application running against a postgres 8.1 database, and
basically, every time I run a report after no other reports have been run
for several hours, the report will take significantly longer (e.g. 30
seconds) than if I re-run the report again, or run the report when the w
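A common workaround for this cold-cache pattern is to keep the report's table and index pages warm by re-running a cheap version of its query on a schedule, so the first real run of the day never pays the disk I/O. This is only a sketch: the database name, table name, and script path are placeholders, and the real report's query should be substituted:

```shell
# Write a tiny warm-up script; the psql invocation inside it uses a
# placeholder database ("reports") and table ("1min_events").
cat > warm_report.sh <<'EOF'
#!/bin/sh
# Touch the pages the report needs so they stay in the OS cache.
psql -d reports -c 'SELECT count(*) FROM "1min_events"' > /dev/null
EOF
chmod +x warm_report.sh
# It could then be run hourly from cron, e.g.:
#   0 * * * * /path/to/warm_report.sh
echo "created warm_report.sh"
```

The trade-off is that the warm-up query competes for cache with whatever else the machine does, so it helps most when the report's working set is a modest fraction of RAM.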