Re: [PERFORM] Writing a "search engine" for a pgsql DB
Charles Sprickman wrote:
> On Tue, 27 Feb 2007, Dave Page wrote:
>> Magnus Hagander wrote:
>>> Just as a datapoint, we did try to use mnogosearch for the
>>> postgresql.org website+archives search, and it fell over completely.
>>> Indexing took way too long, and we had search times several thousand
>>> times longer than with tsearch2.
>>>
>>> That said, I'm sure there are cases when it works fine :-)
>>
>> There are - in fact before your time the site did use Mnogosearch. We
>> moved to our own port of ASPSeek when we outgrew Mnogo's capabilities,
>> and then to your TSearch code when we outgrew ASPSeek.
>
> At risk of pulling this way too far off topic, may I ask how many
> documents (mail messages) you were dealing with when things started to
> fall apart with mnogo?

I honestly don't remember now, but it would have been in the tens or maybe
low hundreds of thousands. Don't get me wrong, I've built sites where Mnogo
is still running fine and does a great job - it just doesn't scale well.

> We're looking at it for a new project that will hopefully get bigger and
> bigger. We will be throwing groups of mailing lists into their own mnogo
> config/tables... If we should save ourselves the pain and look at
> something more homebrew, then we'll start investigating "Tsearch".

Well, put it this way: the PostgreSQL mailing list archives outgrew Mnogo
years ago, and even ASPSeek was beginning to struggle when it got removed a
few months back.

Regards, Dave
Re: [PERFORM] Writing a "search engine" for a pgsql DB
On Tue, Feb 27, 2007 at 01:33:47PM +, Dave Page wrote:
> When we outgrow PostgreSQL & Tsearch2, then, well, we'll need to stop
> pretending to be Google...

Just for the record: Google has been known to sponsor sites in need with
Google Minis and such earlier -- I don't know what their[1] policy is on
the matter, but if tsearch2 should at some point stop being usable for
indexing postgresql.org, asking them might be worth a shot.

[1] Technically "our", as I start working there in July. I do not speak
for Google, etc., blah blah. :-)

/* Steinar */
--
Homepage: http://www.sesse.net/
Re: [PERFORM] Writing a "search engine" for a pgsql DB
Steinar H. Gunderson wrote:
> On Tue, Feb 27, 2007 at 01:33:47PM +, Dave Page wrote:
>> When we outgrow PostgreSQL & Tsearch2, then, well, we'll need to stop
>> pretending to be Google...
>
> Just for the record: Google has been known to sponsor sites in need with
> Google Minis and such earlier -- I don't know what their[1] policy is on
> the matter, but if tsearch2 should at some point stop being usable for
> indexing postgresql.org, asking them might be worth a shot.

I think if postgresql.org outgrows tsearch2 then the preferred solution
would be to improve tsearch2/postgresql, but thanks for the tip :-)

> [1] Technically "our", as I start working there in July.

Congratulations :-)

Regards, Dave
Re: [PERFORM] Writing a "search engine" for a pgsql DB
On Wed, 28 Feb 2007, Dave Page wrote:
> Steinar H. Gunderson wrote:
>> On Tue, Feb 27, 2007 at 01:33:47PM +, Dave Page wrote:
>>> When we outgrow PostgreSQL & Tsearch2, then, well, we'll need to stop
>>> pretending to be Google...
>>
>> Just for the record: Google has been known to sponsor sites in need with
>> Google Minis and such earlier -- I don't know what their[1] policy is on
>> the matter, but if tsearch2 should at some point stop being usable for
>> indexing postgresql.org, asking them might be worth a shot.
>
> I think if postgresql.org outgrows tsearch2 then the preferred solution
> would be to improve tsearch2/postgresql, but thanks for the tip :-)

Guys, the current tsearch2 should work with millions of documents.
Actually, the performance killer is the need to consult the heap to
calculate rank, which is unavoidably slow, since one has to read all the
matching records. The search itself is incredibly fast! If we find a way to
store additional information in the index and work out the visibility
issue, full text search will be damn fast.

Regards,
Oleg
_____
Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),
Sternberg Astronomical Institute, Moscow University, Russia
Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(495)939-16-83, +007(495)939-23-83
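To make the heap-versus-index point concrete, here is a rough sketch of the
two query shapes being contrasted, run through psql from the shell. The
database, table, and column names (archives, messages, msg_id, idxfti) are
made up for illustration, and the functions shown are the 8.2-era tsearch2
contrib names -- adjust them to whatever your installed version provides.

# Pure match: the full-text index alone can answer "which rows match?".
psql archives -c "SELECT msg_id
                    FROM messages
                   WHERE idxfti @@ to_tsquery('default', 'vacuum & performance');"

# Ranked match: rank() needs the tsvector of every matching row, so the heap
# tuples must be visited as well -- this is the slow part described above.
psql archives -c "SELECT msg_id, rank(idxfti, q) AS score
                    FROM messages, to_tsquery('default', 'vacuum & performance') AS q
                   WHERE idxfti @@ q
                ORDER BY score DESC
                   LIMIT 10;"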
Re: [PERFORM] Writing a "search engine" for a pgsql DB
Oleg Bartunov wrote:
> Guys, the current tsearch2 should work with millions of documents.
...
> The search itself is incredibly fast!

Oh, I know - you and Teodor have done a wonderful job.

Regards, Dave.
[PERFORM] Upgraded to 8.2.3 --- still having performance issues
As the subject says. A quite puzzling situation: we not only upgraded the
software, but also the hardware:

Old system: PG 7.4.x on Red Hat 9 (yes, it's not a mistake!!!)
P4 HT 3GHz with 1GB of RAM and IDE hard disk (120GB, I believe)

New system: PG 8.2.3 on Fedora Core 4
Athlon64 X2 4200+ with 2GB of RAM and SATA hard disk (250GB)

I would have expected a mind-blowing increase in responsiveness and overall
performance. However, that's not the case --- if I didn't know better, I'd
probably tend to say that it is indeed the opposite (performance seems to
have deteriorated).

I wonder if some configuration parameters have a somewhat different meaning,
or if the considerations around them are different? Here's what I have in
postgresql.conf (the ones I believe are relevant):

max_connections = 100
shared_buffers = 1024MB
#temp_buffers = 8MB
#max_prepared_transactions = 5
#work_mem = 1MB
#maintenance_work_mem = 16MB
#max_stack_depth = 2MB
max_fsm_pages = 204800
checkpoint_segments = 10

Here's my eternal confusion --- the kernel settings for shmmax and shmall.
I did the following in /etc/rc.local, before starting postgres:

echo -n "1342177280" > /proc/sys/kernel/shmmax
echo -n "83886080" > /proc/sys/kernel/shmall

I still haven't found any docs that clarify this issue (I know it's not
PG-specific, but Linux kernel specific, or maybe even distro-specific??).
For shmall, I read "if in bytes, then ..., if in pages, then ...", and I see
a reference to PAGE_SIZE (if memory serves --- no pun intended!). How would
I know if the setting has to be given in bytes or in pages? And if in pages,
how can I know the page size?? I put it like this to maintain the ratio
between the numbers that were there by default. But I'm still puzzled by
this. PostgreSQL does start (which it wouldn't if I put shmmax too low),
which suggests to me that the setting is ok... Somehow, I'm extremely
uncomfortable with having to settle for "seems like it's fine".

The system does very frequent insertions and updates --- the longest table
has, perhaps, some 20 million rows, and it's indexed (the primary key is the
combination of two integer fields). This longest table only has inserts (and
much less frequent selects), at a peak rate of maybe one or a few insertions
per second.

The commands top and ps seem to indicate that postgres is quite comfortable
in terms of CPU (CPU idle time rarely goes below 95%). vmstat indicates
activity, but it all looks quite smooth (si and so are always 0 --- without
exception).

However, I'm seeing in the logs of my application that right now the app is
inserting records from last night around midnight (that's a 12-hour delay).

Any help/tips/guidance in troubleshooting this issue? It will be much
appreciated!

Thanks,

Carlos
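On the page-size question: a minimal sketch, assuming a stock Linux kernel
where shmmax is given in bytes and shmall in pages, of deriving one from the
other. The 1342177280 figure is the value from the message above; getconf
PAGE_SIZE is the standard way to read the page size, but verify the units
against your distro's documentation.

#!/bin/sh
# Sketch only: derive kernel.shmall (in pages) from a desired shmmax (in bytes).

SHMMAX=1342177280                 # max size of a single shared memory segment, bytes
PAGE_SIZE=$(getconf PAGE_SIZE)    # typically 4096 on x86/x86_64

# shmall is a system-wide total expressed in pages; make it at least large
# enough to cover shmmax (here: exactly shmmax worth of pages).
SHMALL=$((SHMMAX / PAGE_SIZE))

echo "$SHMMAX" > /proc/sys/kernel/shmmax
echo "$SHMALL" > /proc/sys/kernel/shmall

# Verify:
cat /proc/sys/kernel/shmmax /proc/sys/kernel/shmall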
Re: [PERFORM] Upgraded to 8.2.3 --- still having performance issues
Carlos Moreno <[EMAIL PROTECTED]> writes:
> I would have expected a mind-blowing increase in responsiveness and
> overall performance. However, that's not the case --- if I didn't know
> better, I'd probably tend to say that it is indeed the opposite
> (performance seems to have deteriorated)

Did you remember to re-ANALYZE everything after loading up the new
database? That's a frequent gotcha ...

regards, tom lane
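For completeness, a minimal sketch of doing that from the shell after a
dump/restore; vacuumdb's --all and --analyze switches are standard, and mydb
is a placeholder database name.

# Recompute planner statistics for every database in the cluster:
vacuumdb --all --analyze

# Or just for one database:
vacuumdb --analyze mydb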
Re: [PERFORM] Upgraded to 8.2.3 --- still having performance issues
Tom Lane wrote:
> Carlos Moreno <[EMAIL PROTECTED]> writes:
>> I would have expected a mind-blowing increase in responsiveness and
>> overall performance. However, that's not the case --- if I didn't know
>> better, I'd probably tend to say that it is indeed the opposite
>> (performance seems to have deteriorated)
>
> Did you remember to re-ANALYZE everything after loading up the new
> database? That's a frequent gotcha ...

I had done it, even though I was under the impression that it wouldn't be
necessary with 8.2.x (I still chose to do it just in case).

I've since discovered a problem that *may* be related to the deterioration
of the performance *now* --- but that still does not explain the machine
choking since last night, so any comments or tips are still most welcome.

Thanks,

Carlos
Re: [PERFORM] Upgraded to 8.2.3 --- still having performance issues
Carlos Moreno wrote:
> Tom Lane wrote:
>> Carlos Moreno <[EMAIL PROTECTED]> writes:
>>> I would have expected a mind-blowing increase in responsiveness and
>>> overall performance. However, that's not the case --- if I didn't know
>>> better, I'd probably tend to say that it is indeed the opposite
>>> (performance seems to have deteriorated)
>>
>> Did you remember to re-ANALYZE everything after loading up the new
>> database? That's a frequent gotcha ...
>
> I had done it, even though I was under the impression that it wouldn't be
> necessary with 8.2.x (I still chose to do it just in case).
>
> I've since discovered a problem that *may* be related to the deterioration
> of the performance *now* --- but that still does not explain the machine
> choking since last night, so any comments or tips are still most welcome.
>
> Thanks,
>
> Carlos

And the problem that *may* be related is? All the information is required
so someone can give you good information...
Re: [PERFORM] Upgraded to 8.2.3 --- still having performance issues
Rodrigo Gonzalez wrote:
>> I've since discovered a problem that *may* be related to the
>> deterioration of the performance *now* --- but that still does not
>> explain the machine choking since last night, so any comments or tips
>> are still most welcome.
>
> [...]
>
> And the problem that *may* be related is? All the information is required
> so someone can give you good information...

You are absolutely right, of course --- it was an instance of "making a
long story short" for everyone's benefit :-)

To make the story as short as possible: I was running a program that does
clean up on the database (move records older than 60 days). That program
creates log files, and it exhausted the available space on the /home
partition (don't ask! :-)).

The thing is, all of postgres's data is below the /var partition (which has
a total of 200GB, and still around 150GB available) --- in particular, the
postgres home directory is /var/users/postgres, and the database cluster's
data directory is /var/users/postgres/data --- that tells me that this
issue with the /home partition should not make postgres itself choke; the
clean-up program was totally choking, of course.

And yes, after realizing that, I moved the cleanup program to some place
below the /var directory, and /home now has 3.5GB available.

Thanks,

Carlos
Re: [PERFORM] Upgraded to 8.2.3 --- still having performance issues
Are there any issues with the client libraries' version mismatching the
backend version? I'm just realizing that the client software is still
running on the same machine as before (not the machine where the new PG is
running), which has PG 7.4 installed on it, and so it is using the 7.4
client libraries.

Any chance that this may be causing trouble on the performance side? (I had
been monitoring the logs to watch for SQLs now failing when they worked
before... But I was thinking rather of incompatibilities on the backend
side ...)

Thanks,

Carlos
Re: [PERFORM] Two hard drives --- what to do with them?
On Sun, Feb 25, 2007 at 23:11:01 +0100, Peter Kovacs <[EMAIL PROTECTED]> wrote:
> A related question:
> Is it sufficient to disable write cache only on the disk where pg_xlog
> is located? Or should write cache be disabled on both disks?

With recent Linux kernels you may also have the option to use write
barriers instead of disabling caching. You need to make sure all of your
stacked block devices will handle it, and most versions of software raid
(other than 1) won't. This won't be a lot faster, since at sync points the
OS needs to order a cache flush, but it does give the disks a chance to
reorder some commands in between flushes.
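As a sketch of what enabling barriers can look like on ext3: the mount point
and device below are placeholders, barrier=1 is the ext3 mount option for
this, and the software-RAID caveat above still applies -- check your kernel
and filesystem documentation before relying on it.

# Remount an existing ext3 filesystem with write barriers enabled
# (placeholder mount point; use wherever pg_xlog actually lives):
mount -o remount,barrier=1 /var/lib/pgsql

# Or make it permanent in /etc/fstab (placeholder device and mount point):
#   /dev/sdb1  /var/lib/pgsql  ext3  defaults,barrier=1  1 2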
Re: [PERFORM] Two hard drives --- what to do with them?
On Tue, Feb 27, 2007 at 15:35:13 +1030, Shane Ambler <[EMAIL PROTECTED]> wrote:
> From all that I have heard this is another advantage of SCSI disks -
> they honor these settings as you would expect - many IDE/SATA disks
> often say "sure I'll disable the cache" but continue to use it or don't
> retain the setting after restart.

It is easy enough to test whether your disks lie about disabling the cache.
I doubt that it is all that common for modern disks to do that.
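One rough way to run such a test, sketched here with placeholder device and
file names: hdparm -W toggles the drive's write cache on Linux, and the
~120 writes-per-second ceiling assumes a 7200 RPM drive.

#!/bin/bash
# With the write cache truly disabled, rewriting the same block synchronously
# can complete at most about once per platter revolution (~120/sec at 7200 RPM).
DEV=/dev/sda            # placeholder device
FILE=/var/tmp/synctest  # test file on a filesystem backed by $DEV

hdparm -W0 "$DEV"       # ask the drive to disable its write cache

# Rewrite the same 8k block 500 times, forcing each write to stable storage.
time for i in $(seq 500); do
    dd if=/dev/zero of="$FILE" bs=8k count=1 conv=notrunc oflag=dsync 2>/dev/null
done

# Finishing in well under ~4 seconds (i.e. far more than ~120-250 writes/sec)
# strongly suggests the drive is still caching writes despite the setting.
rm -f "$FILE"
hdparm -W1 "$DEV"       # re-enable the cache afterwards if desired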
Re: [PERFORM] Two hard drives --- what to do with them?
On Wed, Feb 28, 2007 at 05:21:41 +1030, Shane Ambler <[EMAIL PROTECTED]> wrote:
> The difference between SCSI and IDE/SATA in this case is that a lot, if
> not all, IDE/SATA drives tell you that the cache is disabled when you ask
> them to, but they either don't actually disable it or they don't retain
> the setting, so you get caught later. SCSI disks can be trusted when you
> set this option.

I have some Western Digital Caviars and they don't lie about disabling
write caching.
[PERFORM] increasing database connections
Hi,

I am sorry if it is a repeat question, but I want to know if database
performance will decrease if I increase max_connections to 2000. At present
it is 100.

I have a requirement where the client wants 2000 simultaneous users, and
the only option we have now is to increase the number of database
connections, but I am unable to find any document which indicates whether
this is a good or a bad practice.

Thanks for your help and time.

regards
shiva
Re: [PERFORM] increasing database connections
On 3/1/07, Shiva Sarna <[EMAIL PROTECTED]> wrote:
> I am sorry if it is a repeat question, but I want to know if database
> performance will decrease if I increase max_connections to 2000. At
> present it is 100.

Most certainly. Adding connections over 200 will degrade performance
dramatically. You should look into pgpool or connection pooling from the
application.

--
Jonah H. Harris, Software Architect | phone: 732.331.1324
EnterpriseDB Corporation            | fax: 732.331.1301
33 Wood Ave S, 3rd Floor            | [EMAIL PROTECTED]
Iselin, New Jersey 08830            | http://www.enterprisedb.com/
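To sketch what pooling in front of the database can look like, here is a
minimal example using pgbouncer as the pooler (pgpool is another option).
The database name, user, password, addresses, and pool sizes are
placeholders, and the parameter names come from pgbouncer's ini format --
check the documentation of whichever pooler you actually pick.

# Minimal pooling setup: the application connects to port 6432 with up to
# 2000 client connections, while the pooler keeps only a small number of
# real server connections open to PostgreSQL on port 5432.
cat > /etc/pgbouncer/pgbouncer.ini <<'EOF'
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = trust
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
max_client_conn = 2000
default_pool_size = 50
EOF

# Placeholder credentials for the auth file ("username" "password" per line):
echo '"appuser" "apppassword"' > /etc/pgbouncer/userlist.txt

pgbouncer -d /etc/pgbouncer/pgbouncer.ini   # run the pooler as a daemon

The application then connects to port 6432 instead of 5432, and the server's
max_connections can stay modest.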
Re: [PERFORM] increasing database connections
Jonah H. Harris wrote:
> On 3/1/07, Shiva Sarna <[EMAIL PROTECTED]> wrote:
>> I am sorry if it is a repeat question, but I want to know if database
>> performance will decrease if I increase max_connections to 2000. At
>> present it is 100.
>
> Most certainly. Adding connections over 200 will degrade performance
> dramatically. You should look into pgpool or connection pooling from
> the application.

Huh? That is certainly not my experience. I have systems that show no
appreciable performance hit on even 1000+ connections. To be fair to the
discussion, these are on systems with 4+ cores (usually 8+) and significant
RAM --- 16/32 gigs.

Sincerely,

Joshua D. Drake

--
=== The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive PostgreSQL solutions since 1997
http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/
Re: [PERFORM] increasing database connections
Joshua D. Drake wrote:
> Jonah H. Harris wrote:
>> On 3/1/07, Shiva Sarna <[EMAIL PROTECTED]> wrote:
>>> I am sorry if it is a repeat question, but I want to know if database
>>> performance will decrease if I increase max_connections to 2000. At
>>> present it is 100.
>>
>> Most certainly. Adding connections over 200 will degrade performance
>> dramatically. You should look into pgpool or connection pooling from
>> the application.
>
> Huh? That is certainly not my experience. I have systems that show no
> appreciable performance hit on even 1000+ connections. To be fair to the
> discussion, these are on systems with 4+ cores (usually 8+) and
> significant RAM --- 16/32 gigs.

Yeah - I thought that somewhere closer to 1 connections is where you get
hit with socket-management-related performance issues.

Cheers

Mark