Jeff,
Some off-topic questions:
Is it possible to boot the OS from the ioDrive? If so, is the difference in
boot-up time noticeable?
Also, how does the ioDrive impact compilation time for a moderately large code
base? What about application startup times?
Cheers,
Behrang
Jeffrey Baker wrote:
"Matt Smiley" <[EMAIL PROTECTED]> writes:
> So an Index Scan is always going to have a higher cost estimate than
> an equivalent Seq Scan returning the same result rows (unless
> random_page_cost is < 1). That's why I think the planner is always
> preferring the plan that uses a Seq Scan.
If
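The cost-estimate reasoning above can be checked directly. As a diagnostic sketch (not from this thread; table and column names assumed from the query discussed later), one can temporarily disable sequential scans for a session and compare the two plans:

```sql
-- Compare the planner's choice with sequential scans allowed and disallowed.
EXPLAIN ANALYZE SELECT * FROM my_table WHERE a IS NULL AND b = '21' LIMIT 15;

-- Disable seq scans for this session only; the planner can still fall back
-- to one if no other plan exists, but it will strongly prefer an index.
SET enable_seqscan = off;
EXPLAIN ANALYZE SELECT * FROM my_table WHERE a IS NULL AND b = '21' LIMIT 15;

-- Restore the default afterwards.
RESET enable_seqscan;
```

If the forced index plan is much faster in actual time despite a higher estimated cost, that points at the cost parameters (e.g. random_page_cost) rather than at a missing index.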
Hi David,
Early in this thread, Pavel suggested:
> you should partial index
>
> create index foo(b) on mytable where a is null;
Rather, you might try the opposite partial index (where a is NOT null) as a
replacement for the original unqualified index on column A. This new index
will be ignored
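Putting the two suggestions side by side (a sketch; table and column names as used elsewhere in this thread, index names invented for illustration):

```sql
-- Pavel's suggestion: index only the rows the query actually wants,
-- i.e. those where a IS NULL.
CREATE INDEX idx_mytable_b_a_null ON mytable (b) WHERE a IS NULL;

-- The opposite partial index, as a replacement for a full index on a:
-- it skips the many NULL entries, so it stays much smaller and cheaper
-- to maintain, while still serving lookups on non-NULL values of a.
CREATE INDEX idx_mytable_a_not_null ON mytable (a) WHERE a IS NOT NULL;
```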
Jessica Richard wrote:
> for a short test purpose, I would like to see what queries are running
> and how long each of them takes, by reconfiguring postgresql.conf on
> the server level.
>
> log_statement = 'all' is giving me the query statements.. but I don't
> know where I can turn "timing"
for a short test purpose, I would like to see what queries are running and how
long each of them takes, by reconfiguring postgresql.conf on the server level.
log_statement = 'all' is giving me the query statements... but I don't know
where I can turn "timing" on just like what I can run from the
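For logging durations server-wide, a minimal postgresql.conf sketch (settings I believe apply here; a reload is needed after changing them):

```
# Log every statement together with its run time. A threshold of 0 ms
# logs all statements with their duration; raise it (e.g. 100) to log
# only the slow ones.
log_min_duration_statement = 0

# Alternatively, keep log_statement = 'all' and also log durations:
log_duration = on
```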
On Tue, 2 Sep 2008, Guillaume Lelarge wrote:
AFAICT, the status of 32K transactions fits on a single (8KB) page.
You're right, I had that right on the referred-to page but mangled it when
writing the e-mail.
262144 bytes are 2 Mbit (megabits), so if a status is 2 bits, [a clog
file] holds
Greg Smith wrote:
> [...]
>> - When, or in what case is a new clog file produced?
>
> Every 32K transactions.
Are you sure about this?
My clog files get up to 262144 bytes. Which means 1M transactions'
status: 262144 bytes are 2 Mbit (megabits), so if a status is 2 bits, it
holds 1M transactions
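The two figures are actually consistent: 32K statuses fit on one 8KB page, and a 256KB clog segment holds 32 such pages. A quick check of the arithmetic (assuming 2 bits per transaction status):

```sql
-- 8192-byte page * 8 bits/byte / 2 bits per xact = statuses per page
SELECT 8192 * 8 / 2 AS statuses_per_page;        -- 32768  (32K)

-- 262144-byte clog segment * 8 bits/byte / 2 bits per xact
SELECT 262144 * 8 / 2 AS statuses_per_segment;   -- 1048576 (1M)
```

So "every 32K transactions" describes when a new clog *page* is started, while a new clog *file* appears roughly every 1M transactions.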
Thanks very much for your help Guillaume, I appreciate you spending time on
this.
> Well, if you have 95% of NULL actorid_ and 10% for each value of
> pooledactor_, then it makes sense to assume it will have to fetch
> about 150 rows to find the 15 awaited ones...
This is only true if the data i
"David West" writes:
> INFO: "jbpm_taskinstance": moved 1374243 row versions, truncated 166156 to
> 140279 pages
nothing that would explain the planner being so far off :/
> Yep, the table is from the jboss jbpm (business process management) schema.
I then tried that kind of test, but it didn't
On Mon, 1 Sep 2008, Scott Carey wrote:
Thanks for the info on the patch to support it -- however the versions
posted there are rather old...
Over here, we're using an extremely old patched version of the JDBC
driver. That's the patch I sent to some mailing list a couple of years
ago. It works
>A single VACUUM may not report how bloated your table is, if it's
>been VACUUM'ed some before, but not frequently enough. If you
>have time for it, and you can afford a full lock on the table,
>only a VACUUM FULL VERBOSE will tell you the previous bloat (the
>"table .. truncated to .." line IIRC)
Here are the results of 'vacuum analyse verbose' on my table:
INFO: vacuuming "public.jbpm_taskinstance"
INFO: scanned index "jbpm_taskinstance_pkey" to remove 928153 row versions
DETAIL: CPU 0.70s/2.40u sec elapsed 46.49 sec.
INFO: scanned index "idx_tskinst_tminst" to remove 928153 row versions
"David West" writes:
> Yes I inserted values in big batches according to a single value of 'B', so
> indeed a sequence scan may have to scan forward many millions of rows before
> finding the required value.
That may well be why the seqscan is so slow to give your results;
that said, it doesn't
"Limit (cost=0.00..3.68 rows=15 width=128) (actual
time=85837.043..85896.140 rows=15 loops=1)"
" -> Seq Scan on my_table this_ (cost=0.00..258789.88 rows=1055580
width=128) (actual time=85837.038..85896.091 rows=15 loops=1)"
"Filter: ((A IS NULL) AND ((B)::t
Yes I inserted values in big batches according to a single value of 'B', so
indeed a sequence scan may have to scan forward many millions of rows before
finding the required value.
I have been doing regular analyse commands on my table. I don't think my
table is bloated, I haven't been performing
Russell Smith writes:
> Pavel Stehule wrote:
>> Hello
>>
>> 2008/9/1 David West <[EMAIL PROTECTED]>:
>>
>>> Thanks for your suggestion but the result is the same.
>>>
>>> Here is the explain analyse output from different queries.
>>> Select * from my_table where A is null and B = '21' limit 15
On Tue, 2 Sep 2008, Thomas Finneid wrote:
You should try setting this to open_sync; that can be considerably faster
for some write-heavy situations. Make sure to test that thoroughly though;
there are occasional reports of issues with that setting under Linux.
The production machine is Solaris
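The setting being discussed is presumably wal_sync_method (the only standard setting that accepts open_sync). A postgresql.conf sketch of the suggested change:

```
# Default varies by platform. open_sync can be faster for some
# write-heavy workloads, but test thoroughly before relying on it;
# behavior differs between Linux and Solaris.
wal_sync_method = open_sync
```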
2008/9/2 Guillaume Cottenceau <[EMAIL PROTECTED]>:
> "Pavel Stehule" writes:
>
>> Hello
>>
>> 2008/9/1 David West <[EMAIL PROTECTED]>:
>>> Thanks for your suggestion but the result is the same.
>>>
>>> Here is the explain analyse output from different queries.
>>> Select * from my_table where A is
"Pavel Stehule" writes:
> Hello
>
> 2008/9/1 David West <[EMAIL PROTECTED]>:
>> Thanks for your suggestion but the result is the same.
>>
>> Here is the explain analyse output from different queries.
>> Select * from my_table where A is null and B = '21' limit 15
>>
>> "Limit (cost=0.00..3.68 ro