[PERFORM] Deadlocks...

2004-03-14 Thread postgres
Hi,

I have two functions: one is called by page visitors (roughly 2,000 times per hour), the other only by the admin (say once per hour, or per login).
I often get a deadlock error after calling the admin function.
Yes, they access the same tables somewhere in the function code.
The admin function can take up to 20-30 seconds; the visitor function takes just 20-30 ms.
What can I do here? Do I have to study table locking in depth, or is it something else?
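A minimal sketch of one common remedy, assuming both functions write to the same two tables (the table names below are hypothetical, not from the original post): take self-conflicting locks in the same fixed order at the start of each function, so the admin and visitor functions serialize instead of deadlocking. A PL/pgSQL function runs inside the caller's transaction, so the locks are held until that transaction ends.

-- At the top of BOTH functions, in the same order (hypothetical names):
LOCK TABLE visitors IN SHARE ROW EXCLUSIVE MODE;
LOCK TABLE stats IN SHARE ROW EXCLUSIVE MODE;
-- ... the function's updates against visitors and stats follow ...

The cost is concurrency: visitor calls queue behind the 20-30 second admin function while it holds the locks.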

Thanks for your help
regards,
Boris


[PERFORM] Function & Update/Insert Problem and Performance

2004-01-22 Thread postgres
Hi there,

I'm new to this list. I read a little here looking for a solution to my problem, but couldn't find one.

My problem is:
I'm programming a counter in PHP with Postgres as the DB. I had about 6,000 visitors per day, so at first everything worked fine.
Yesterday I got 80K users on my sites; that was the point where everything crashed.

I have an Intel Celeron 2.2 GHz server with 1 GB RAM.
I have PHP 4.2.0 and Postgres 7.3.4.
The connection to the DB from PHP goes through the PEAR DB class.
 
I wrote a Postgres function which takes up to 17 parameters, such as OS, browser, referer, and so on.
This function tries to update the row in the table that matches the hour of the current datetime.
If that returns NOT FOUND, I do an insert.
So I have a few tables, and the update-insert procedure is done a few times (about 15-17 times). At the end I collect some data and return it, to be shown as the counter banner.
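A minimal sketch of the update-or-insert pattern described above, with hypothetical table and column names (Postgres 7.3 has no built-in upsert, so PL/pgSQL has to do it by hand):

UPDATE hits
   SET count = count + 1
 WHERE account_id = a_account_id
   AND hour = date_trunc('hour', now());
IF NOT FOUND THEN
    -- Race hazard: two concurrent sessions can both reach this branch
    -- and both insert, producing exactly the duplicate rows described
    -- further down in this message.
    INSERT INTO hits (account_id, hour, count)
    VALUES (a_account_id, date_trunc('hour', now()), 1);
END IF;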

I called the function with explain analyze, and it showed a duration of around 222 ms.

My first question: what about transactions? Do I have to take care of them myself? I read that a function always runs as a single transaction,
so if something fails, the whole function is undone with a rollback.

My second problem: the update only works when there are not too many visitors/postmaster processes...
When there are too many postmasters, I get lots of errors in my data (the update seems not to work; it doesn't find the existing row for the current hour, so I think it does inserts instead).
Do I have to take care of permissions? For example, I have set "lock table os in exclusive mode" for all the tables I work with.

The next problem: I'm expecting to reach 2,000,000 visitors per day.
So I will have to change something in the Postgres config, right? But what exactly? max_connections, and what else is important? Maybe something in Apache too?
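A hedged sketch of 7.3-era postgresql.conf settings that are commonly raised for higher concurrency; the values below are illustrative starting points, not numbers tuned for this machine:

# postgresql.conf (7.3 syntax: shared_buffers and effective_cache_size
# are counted in 8 kB pages, sort_mem in kB)
max_connections = 200
shared_buffers = 8192          # about 64 MB
sort_mem = 4096                # per-sort memory (renamed work_mem in 8.0)
effective_cache_size = 65536   # about 512 MB; a hint to the planner

On the Apache side, MaxClients should stay at or below what max_connections can serve, or connection attempts will start failing under load.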

I hope I can get some ideas, because everything works except for the performance, and the way concurrent visitors corrupt the data is making me sick!
I know that's a lot of questions, but every idea you guys have will help me, thanks.

Thank you so far

Boris


[PERFORM] Persistent Connections

2004-01-24 Thread postgres
Hi

I have a PHP script and I use pg_pconnect.
If I make 4-10 pg_query calls in that script,
do I have to close the connection at the end of the script?
(I would say yes; is that right?)

Sorry, I'm a little confused about the persistence thing!
Is it smart to use persistent connections at all if I expect 100K users to hit the script within an hour, and the script calls up to 10-15 pg functions?
At the moment I have one function, but the server needs 500 ms for it, which I think is a little too much, and it crashed when I had 20K users.

Thanks
Bye


[PERFORM] Select-Insert-Query

2004-02-27 Thread postgres
Hi,

What is the most performant way to select, say, the first 99 rows of a table and insert them into another table?

At the moment I do this:

for userrecord in
    select * from table
     where account_id = a_account_id
       and counter_id = userrecord.counter_id
       and visitortable_id between a_minid and a_maxid
     limit 99
loop
    insert into lastusers (account_id, counter_id, date, ip, hostname)
    values (a_account_id, userrecord.counter_id, userrecord.date,
            userrecord.ip, userrecord.hostname);
end loop;

i think "limit" is a performance killer, is that right? but what to do instead

thanks
bye


Re: [PERFORM] Select-Insert-Query

2004-03-01 Thread postgres
Hi,

Nobody has an idea? :-(

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On behalf of [EMAIL PROTECTED]
Sent: Friday, 27 February 2004 17:53
To: [EMAIL PROTECTED]
Subject: [PERFORM] Select-Insert-Query




[PERFORM] 121+ million record table perf problems

2007-05-18 Thread cyber-postgres

I need some help on recommendations to solve a perf problem.

I've got a table with ~121 million records in it. A select count on it 
currently takes ~45 minutes, and an update to the table to set a value on 
one of the columns I finally killed after it had run 17 hours without 
completing. Queries against the table are painfully slow.
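A hedged aside on the count itself: when an exact number is not required, the planner's row estimate from the system catalog is nearly free (it is refreshed by ANALYZE and autovacuum):

select reltuples::bigint as approx_rows
  from pg_class
 where relname = 'res';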


System:   SUSE LINUX 10.0 (X86-64)
Postgresql:   PostgreSQL 8.2.1
Index type:   btree

A select count took ~48 minutes before I made some changes to the 
postgresql.conf, going from default values to these:

shared_buffers = 24MB
work_mem = 256MB
maintenance_work_mem = 512MB
random_page_cost = 100
stats_start_collector = off
stats_row_level = off

As a test I am trying to do an update on state using the following queries:
update res set state=5001;
select count(resid) from res;

The update query that started all this I had to kill after 17 hours; it 
should have updated all 121+ million records. That brought my select 
count down to 19 minutes, but that is still a far cry from acceptable.
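A hedged sketch of how a whole-table update like this is often made tractable, assuming res_id values are reasonably dense: updating in bounded slices keeps each transaction short and lets VACUUM reclaim the dead row versions between slices (under MVCC every updated row leaves a dead version behind, so a single 121-million-row update roughly doubles the table's on-disk footprint):

update res set state = 5001
 where res_id >= 1 and res_id < 1000000;
vacuum res;
-- repeat for res_id 1000000..2000000, and so on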


The system has 2GB of RAM (more is already on order), but top doesn't show 
it running away with RAM; if anything, I don't think it's using enough, as 
I only see about 6 processes using 26-27 MB each. It is running on a single 
disk (I guess I will at minimum have to go to RAID 1). The workload will 
primarily be queries against the indices (which is why there are so many of 
them) plus updates to single records from about 10 clients, where each such 
record has its md5, state, rval, speed, audit, and date columns updated. 
Those updates don't seem to be a problem, and are generally processed in 
batches of 500 to 5000 at a time.


Here is the schema for the table giving me problems:

CREATE TABLE res
(
  res_id integer NOT NULL DEFAULT nextval('result_id_seq'::regclass),
  res_client_id integer NOT NULL,
  "time" real DEFAULT 0,
  error integer DEFAULT 0,
  md5 character(32) DEFAULT 0,
  res_tc_id integer NOT NULL,
  state smallint DEFAULT 0,
  priority smallint,
  rval integer,
  speed real,
  audit real,
  date timestamp with time zone,
  gold_result_id integer,
  CONSTRAINT result_pkey PRIMARY KEY (res_id),
  CONSTRAINT unique_res UNIQUE (res_client_id, res_tc_id)
)
WITHOUT OIDS;
ALTER TABLE res OWNER TO postgres;

CREATE INDEX index_audit
  ON res
  USING btree
  (audit);

CREATE INDEX index_event
  ON res
  USING btree
  (error);

CREATE INDEX index_priority
  ON res
  USING btree
  (priority);

CREATE INDEX index_rval
  ON res
  USING btree
  (rval);

CREATE INDEX index_speed
  ON res
  USING btree
  (speed);

CREATE INDEX index_state
  ON res
  USING btree
  (state);

CREATE INDEX index_tc_id
  ON res
  USING btree
  (res_tc_id);

CREATE INDEX index_time
  ON res
  USING btree
  ("time");



Re: [PERFORM] filesystem option tuning

2004-05-28 Thread share-postgres
Hi!

On Mon, May 17, 2004 at 06:04:54PM +0100, Richard Huxton wrote:
> [EMAIL PROTECTED] wrote:
> > [...]
> 
> In no official capacity whatsoever, welcome aboard.

Thanks ;-)

> > There is just one point where I found the documentation lacking any
> > description and practical hints (as opposed to all other topics), namely
> > that of how to tune a setup for maximum performance regarding the layout of
> > partitions on hard-disks and their mount options.
> 
> I'm not a Sun user, so I can't give any OS-specific notes, but in general:
>   - Don't bypass the filesystem, but feel free to tinker with mount 
> options if you think it will help

Right, raw partitions are too low-level for me these days anyhow...
I assume that all postgres partitions can be mounted with noatime?

>   - If you can put WAL on separate disk(s), all the better.

Does that mean only the xlog, or also the clog? As far as I understand, the
clog contains some metadata about the xlog, so presumably it is flushed to
disk synchronously together with the xlog? That would mean they each need a
separate disk to keep either disk from seeking too often...?

>   - Battery-backed write-cache for your SCSI controller can be a big 
> performance win

I probably won't be able to get such a setup for this project; that's why I
am bothering about which disk will be seeking how often.

>   - Tablespaces _should_ be available in the next release of PG, we'll 
> know for sure soon. That might make life simpler for you if you do want 
> to spread your database around by hand,

Ok, I think tablespaces are not the important thing - at least for this
project of ours.

> > What I should add is that reliability is a premium for us, we do not want to
> > sacrifice integrity for speed, and that we are tuning for a high commit rate
> > of small, simple transactions...
> 
> Make sure the WAL is on fast disks I'd suggest. At a guess that'll be 
> your bottleneck.
> 
> For more info, your best bet is to check the archives on the 
> pgsql-performance list, and then post there. People will probably want 
> to know more about your database size/number of concurrent 
> transactions/disk systems etc.

Here goes ... we are talking about a database cluster with two tables where
things are happening. One is a kind of log that is simply "appended" to and
is expected to reach several million entries within the retention window;
the other is a persistent backing store for application data that will
mostly see read-modify-writes of single records. There are two writers to
the history table and one writer to the data table. The volume of data is
not very high, and RAM is plentiful...

If any more information is required feel free to ask - I would really
appreciate getting this disk layout sorted out.

Thanks,
Colin



[PERFORM] 8.0 vs. 7.4 benchmarks

2004-12-06 Thread Postgres Learner
Hi all!
Has anyone done any performance benchmarking of PostgreSQL 7.4 vs 8.0?
Are there any scenarios where 8.0 can be expected to be faster? I
would love to get my hands on any numbers someone might have.
Also, does anyone know how long it will be until a stable release of
8.0 arrives? Given the load of additional features in 8.0, I can't
wait to use it in production. :-)

thanks a lot everyone!!!
ps
