(truncated iostat output: activity on c1t0d0s1 (/var) and c1t0d0s6 (/usr), with c1t0d0s6 showing 100 in the %b column)
Thank you in advance for your help!
Jun
On 8/30/06, Junaili Lie <[EMAIL PROTECTED]> wrote:
I have tried this to no avail.
I have also tried changing the bgwriter_delay parameter to 10.
s /usr/demo/dtrace/whoio.d
> It will tell you the pids doing the io activity and on which devices.
> There are more scripts in that directory like iosnoop.d, iotime.d
> and others which also will give other details like file accessed,
> time it took for the io etc.
Hi Jignesh,
Thank you for your reply.
I have the setting just like what you described:
wal_sync_method = fsync
wal_buffers = 128
checkpoint_segments = 128
bgwriter_all_percent = 0
bgwriter_maxpages = 0
I ran the dtrace script and found the following:
During the i/o busy time, there are postgres processe
Hi everyone,
We have PostgreSQL 8.1 installed on Solaris 10. It is running fine.
However, for the past couple days, we have seen the i/o reports
indicating that the i/o is busy most of the time. Before this, we only
saw i/o being busy occasionally (very rare). So far, there has been no
performanc
.3.14 rows=1 width=6)
   Index Cond: (id = 1023::bigint)
(2 rows)
Thanks,
J
On 4/25/06, Tom Lane <[EMAIL PROTECTED]> wrote:
"Junaili Lie" <[EMAIL PROTECTED]> writes:
> ie. delete from scenario where id='1023' is very fast, but delete from
> scenario where id=
alues.
ie. delete from scenario where id='1023' is very fast, but delete from scenario where id='1099' is running forever.
Any ideas?
J
On 4/25/06, Tom Lane <[EMAIL PROTECTED]> wrote:
"Junaili Lie" <[EMAIL PROTECTED]> writes:
> we encounter issues when
QUERY PLAN
---------------------------------------------------------------------
Index Scan using scenario_pkey on scenario (cost=0.00..3.17
rows=1 width=64) (actual time=0.016..0.017 rows=1 loops=1)
Index Cond: (id = 1099::bigint)
Total runtime: 0.072 ms
(3 rows)
On 4/25/06, Junaili Lie <[EMAIL PROTECTED]> w
Hi all,
we encounter issues when deleting from a table based on id (primary
key). For certain 'id' values, the delete takes forever and the i/o is
100% busy.
Table scenario has around 1400 entries. It is the parent of 3 other tables.
Table "public.scenario"
Column
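For a table this small, a DELETE that pegs the i/o for some ids but not others usually points at the foreign-key checks: PostgreSQL does not automatically index the referencing columns in the child tables, so each deleted parent row can force a sequential scan of every child. A hedged sketch of the usual fix (the child table and column names here are hypothetical, not taken from the thread):

```sql
-- Hypothetical child tables; the real names are not shown above.
-- The parent's primary key is indexed automatically, but the
-- referencing columns in the children are NOT.
CREATE INDEX child_a_scenario_id_idx ON child_a (scenario_id);
CREATE INDEX child_b_scenario_id_idx ON child_b (scenario_id);
CREATE INDEX child_c_scenario_id_idx ON child_c (scenario_id);

-- With the indexes in place, the per-row FK check behind
--   DELETE FROM scenario WHERE id = '1099';
-- becomes an index probe instead of a sequential scan of each child.
```

This would also fit the symptom that the EXPLAIN of the SELECT looks fine: the cost is not in finding the parent row, but in the referential-integrity checks fired by the delete.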
t works for inner or outer joins and works
> regardless of how complex the logic for picking the best choice is. I
> think one reason this tends to optimize well is that an EXISTS test can
> finish as soon as it finds one matching row.
>
> -Kevin
>
>
> >>> Junaili L
B for sorting and more than 80% for
effective_cache_size and shared_buffers = 32768.
Any further ideas are much appreciated.
On 6/8/05, Bruno Wolff III <[EMAIL PROTECTED]> wrote:
> On Wed, Jun 08, 2005 at 15:48:27 -0700,
> Junaili Lie <[EMAIL PROTECTED]> wrote:
> > Hi,
8/05, Tobias Brox <[EMAIL PROTECTED]> wrote:
> [Junaili Lie - Wed at 12:34:32PM -0700]
> > select f.p_id, max(f.id) from person p, food f where p.id=f.p_id group
> > by f.p_id will work.
> > But I understand this is not the most efficient way. Is there another
> >
Hi,
I have the following table:
person - primary key id, and some attributes
food - primary key id, foreign key p_id reference to table person.
table food stores all the food that a person is eating. The more recent
food is indicated by the higher food.id.
I need to find what is the most recent fo
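The pattern being discussed here is the classic greatest-n-per-group query. Besides the max()/GROUP BY form quoted above, PostgreSQL has DISTINCT ON, which keeps only the first row per group in the given sort order. A sketch against the schema described above (food(id, p_id) referencing person(id)):

```sql
-- Most recent food row per person, using PostgreSQL's DISTINCT ON.
-- Sorting by f.id DESC within each p_id means the kept row is the
-- most recent one.
SELECT DISTINCT ON (f.p_id)
       f.p_id, f.id
FROM   food f
ORDER  BY f.p_id, f.id DESC;

-- An index on the grouping and ordering columns may help the planner
-- avoid a full sort:
CREATE INDEX food_pid_id_idx ON food (p_id, id);
```

Whether this beats the GROUP BY form depends on the data distribution and the planner version, so it is worth comparing both with EXPLAIN ANALYZE.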
Hi all,
I would also like to know if there is a way to force the use of a
specific index for a specific query. I am currently using PostgreSQL
7.4.6.
In my case I have a relatively big table (several million records)
that is frequently joined with other tables (explicit joins or
through vie
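For the record, PostgreSQL (7.4 included) has no per-query index hints. What it does offer for diagnosing plan choices are the session-local enable_* settings, which discourage whole plan types. A sketch, intended for testing rather than production (the SELECT is a placeholder for the query under investigation):

```sql
-- Session-local: discourage sequential scans so the planner picks an
-- index scan whenever one is possible at all.
SET enable_seqscan = off;

EXPLAIN ANALYZE
SELECT ...;   -- the problem query goes here

-- Restore the default afterwards.
SET enable_seqscan = on;
```

If the forced plan is genuinely faster, the longer-term fix is usually adjusting cost parameters such as random_page_cost (or keeping statistics current with ANALYZE) rather than leaving enable_seqscan off.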
Hi guys,
We are in the process of buying a new dell server.
Here is what we need to be able to do:
- we need to be able to run queries on tables that have 10-20 million
records (around 40-60 bytes per row) in less than 5-7 seconds.
We also need the hardware to be able to handle up to 50 million