[PERFORM] create index with substr function

2004-10-20 Thread Ray



Hi All,
 
I have a table in my postgres:
Table: doc
 
 Column      | Type                  | Modifiers
-------------+-----------------------+-----------
 doc_id      | bigint                | not null
 comp_grp_id | bigint                | not null
 doc_type    | character varying(10) | not null
 doc_urn     | character varying(20) | not null

I want to create an index on the doc_urn column using the substr function,
like this:

CREATE INDEX idx_doc_substr_doc_urn ON doc USING btree (SUBSTR(doc_urn,10));
 
but there is an error:
ERROR:  parser: parse error at or near "10" at character 68
 
What is wrong with this SQL?  I have looked at some references on the
internet, but I can't find anything wrong with it.
 
Thanks
Ray


Re: [PERFORM] create index with substr function

2004-10-20 Thread Ray
Thank you all for your kind responses. : )

I am currently using postgres 7.3.  Is there any example or workaround for
creating an index with the substr function on a version earlier than 7.4?

Thanks,
Ray


- Original Message - 
From: "Stephan Szabo" <[EMAIL PROTECTED]>
To: "Ray" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Thursday, October 21, 2004 10:57 AM
Subject: Re: [PERFORM] create index with substr function


>
> On Thu, 21 Oct 2004, Ray wrote:
>
> > Hi All,
> >
> > I have a table in my postgres:
> > Table: doc
> >  Column |Type | Modifiers
> >  ---+-+---
> >  doc_id  | bigint  | not null
> >  comp_grp_id | bigint  | not null
> >  doc_type  | character varying(10)| not null
> >  doc_urn| character varying(20)| not null
> >
> > I want to create an index on the doc_urn column using the substr
> > function, like this:
> > CREATE INDEX idx_doc_substr_doc_urn ON doc USING btree (SUBSTR(doc_urn,10));
> >
> > but there is an error:
> >
> > ERROR:  parser: parse error at or near "10" at character 68
> >
> > what's wrong for this SQL? As I have found some reference on the
> > internet, I can't find anything wrong in this SQL.
>
> What version are you using? If you're using anything previous to 7.4 then
> the above definitely won't work, and the only workaround I know of is to
> make another function which takes only the column argument and calls
> substr with the 10 constant.
>
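A minimal sketch of the pre-7.4 workaround Stephan describes, assuming
PostgreSQL 7.3 (the function name substr_from_10 is illustrative, not from
the thread):

  -- Wrap the constant inside an immutable SQL function...
  CREATE FUNCTION substr_from_10(character varying)
  RETURNS character varying
  AS 'SELECT substr($1, 10);'
  LANGUAGE sql IMMUTABLE;

  -- ...and build the functional index on that wrapper (7.3 only accepts a
  -- plain function call over columns here, no extra constants).
  CREATE INDEX idx_doc_substr_doc_urn ON doc (substr_from_10(doc_urn));

  -- Queries must use the same wrapper for the index to be considered, e.g.:
  -- SELECT * FROM doc WHERE substr_from_10(doc_urn) = 'some-suffix';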




Re: [PERFORM] create index with substr function

2004-10-20 Thread Ray
Sorry, it doesn't work, as my postgres is 7.3, not 7.4. Is there any other
alternative solution that doesn't require 7.4 or later?

Thanks,
Ray : )

- Original Message - 
From: "Rosser Schwarz" <[EMAIL PROTECTED]>
To: "Ray" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Thursday, October 21, 2004 11:34 AM
Subject: Re: [PERFORM] create index with substr function


> while you weren't looking, Ray wrote:
>
> > CREATE INDEX idx_doc_substr_doc_urn ON doc USING btree (SUBSTR(doc_urn,10));
> >
> CREATE INDEX idx_doc_substr_doc_urn ON doc USING btree ((SUBSTR(doc_urn,10)));
>
> You need an additional set of parens around the SUBSTR() call.
>
> /rls
>
> -- 
> :wq
>
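For 7.4 and later, the extra parentheses make this a general expression index;
a hedged example, assuming the query repeats the indexed expression (the
literal value is illustrative):

  CREATE INDEX idx_doc_substr_doc_urn ON doc USING btree ((substr(doc_urn, 10)));

  -- The planner can only use this index when the query contains the same
  -- expression:
  EXPLAIN SELECT * FROM doc WHERE substr(doc_urn, 10) = 'ABC123';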




Re: [PERFORM] how to plan for vacuum?

2007-01-25 Thread Ray Stell
On Thu, Jan 25, 2007 at 07:29:20PM +0900, Galy Lee wrote:
> so what is the principle to set them?
>  - keep dead space lower than some disk limit
>  - or keep the garbage rate lower than fillfactor
>  or any other general principle?


How do you measure "dead space" and "garbage rate?"

I'm a newbie and don't even know what these terms mean, but if I can measure
them, perhaps it will gel.  Really, if you can't measure the effect of a
setting change, what have you got?  I would hope any discussion of autovacuum
parameters would include some metric evaluation techniques.  Thanks.
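
For what it's worth, one way to put a number on "dead space", assuming the
contrib module pgstattuple is installed (the table name is illustrative):

  -- Reports live vs. dead tuple counts/bytes and free space for one table.
  SELECT * FROM pgstattuple('public.mytable');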



Re: [HACKERS] [PERFORM] how to plan for vacuum?

2007-01-25 Thread Ray Stell
On Thu, Jan 25, 2007 at 08:04:49AM -0800, Joshua D. Drake wrote:
> 
> It really depends on the system. Most of our systems run anywhere from
> 10-25ms. I find that any more than that, Vacuum takes too long.


How do you measure the impact of setting it to 12 as opposed to 15?
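
One crude way to compare two settings, assuming you can run the test against
a comparable copy of the data (the table name is illustrative):

  -- In psql, \timing prints elapsed time; also watch iostat/sar during each
  -- run to see the effect on concurrent I/O.
  \timing
  SET vacuum_cost_delay = 12;
  VACUUM VERBOSE mytable;
  SET vacuum_cost_delay = 15;
  VACUUM VERBOSE mytable;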



Re: [PERFORM] How to debug performance problems

2007-02-21 Thread Ray Stell

I'd like to have a toolbox prepared for when performance goes south.
I'm clueless.  Would someone mind providing some detail about how to
measure these four items Craig listed:

1. The first thing is to find out which query is taking a lot of time.

2. A long-running transaction keeps vacuum from working.

3. A table grows just enough to pass a threshold in the
   planner and a drastically different plan is generated.

4. An index has become bloated and/or corrupted, and you
   need to run the REINDEX command.

Thx.
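
A rough starting point for items 1 and 2, assuming an 8.x-era pg_stat_activity
(column names changed in later releases) and that stats_command_string is on:

  -- 1. What is running right now, longest-running first:
  SELECT procpid, usename, now() - query_start AS runtime, current_query
  FROM pg_stat_activity
  ORDER BY query_start;

  -- 2. Sessions stuck "<IDLE> in transaction" keep vacuum from reclaiming rows:
  SELECT procpid, usename, query_start
  FROM pg_stat_activity
  WHERE current_query = '<IDLE> in transaction'
  ORDER BY query_start;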





On Wed, Aug 30, 2006 at 11:45:06AM -0700, Jeff Frost wrote:
> On Wed, 30 Aug 2006, Joe McClintock wrote:
> 
> >I ran a vacuum, analyze and reindex on the database with no change in 
> >performance; query time was still 37+ sec, a little worse. On our test 
> >system I found that a db_dump from production and then restore brought the 
> >database back to full performance. So in desperation I shut down the 
> >production application, backed up the production database, renamed the 
> >production db, created a new empty production db and restored the 
> >production backup to the empty db. After a successful db restore and 
> >restart of the web application, everything was then up and running like a 
> >top.
> 
> Joe,
> 
> I would guess that since the dump/restore yielded good performance once 
> again, a VACUUM FULL would have also fixed the problem.  How are your FSM 
> settings in the conf file?  Can you run VACUUM VERBOSE and send us the last 
> 10 or so lines of output?
> 
> A good article on FSM settings can be found here:
> 
> http://www.pervasive-postgres.com/instantkb13/article.aspx?id=10087&cNode=5K1C3W
> 
> You probably should consider setting up autovacuum and definitely should 
> upgrade to at least 8.0.8 if not 8.1.4 when you get the chance.
> 
> When you loaded the new data did you delete or update old data or was it 
> just a straight insert?
> 
> -- 
> Jeff Frost, Owner <[EMAIL PROTECTED]>
> Frost Consulting, LLC http://www.frostconsultingllc.com/
> Phone: 650-780-7908   FAX: 650-649-1954
> 







--

On Mon, Feb 19, 2007 at 10:02:46AM -0800, Craig A. James wrote:
> Andreas Tille wrote:
> >My web application was running fine for years without any problem
> >and the performance was satisfying.  Some months ago I added a
> >table containing 450 data rows ...
> >
> >Since about two weeks the application became *drastically* slower
> >and I urgently have to bring back the old performance.  As I said
> >I'm talking about functions accessing tables that did not increase
> >over several years and should behave more or less the same.
> 
> Don't assume that the big table you added is the source of the problem.  It 
> might be, but more likely it's something else entirely.  You indicated that 
> the problem didn't coincide with creating the large table.
> 
> There are a number of recurring themes on this discussion group:
> 
>  * A long-running transaction keeps vacuum from working.
> 
>  * A table grows just enough to pass a threshold in the
>planner and a drastically different plan is generated.
>  
>  * An index has become bloated and/or corrupted, and you
>need to run the REINDEX command.
> 
> And several other common problems.
> 
> The first thing is to find out which query is taking a lot of time.  I'm no 
> expert, but there have been several explanations on this forum recently on 
> how to find your top time-consuming queries.  Once you find them, EXPLAIN 
> ANALYZE should get you started.
> Craig
> 

-- 
You have no chance to survive make your time.



Re: [PERFORM] How to debug performance problems

2007-02-21 Thread Ray Stell
On Wed, Feb 21, 2007 at 08:09:49AM -0800, Craig A. James wrote:
> I hope I didn't give the impression that these were the only thing to look 
> at ... those four items just popped into my head, because they've come up 
> repeatedly in this forum.  There are surely more things that could be 
> suspect; perhaps others could add to your list.

I'm only clueless about the details of pg, not db perf concepts.  Really,
a mechanism to determine where the system is spending the response
time is key.  As you pointed out, the added table may not be the issue.
In fact, if you can't measure where the db time is being spent
you will be lucky to fix a performance issue, since you don't really
know what resources need to be addressed.  


> so you have to dig in and find it yourself.

this afternoon, maybe.
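
One low-effort mechanism for that, assuming superuser access and an
illustrative 250 ms threshold: log every statement that runs longer than the
threshold, then aggregate the log to find the top time consumers and EXPLAIN
ANALYZE the worst of them.

  -- postgresql.conf:  log_min_duration_statement = 250
  -- or per session (superuser only):
  SET log_min_duration_statement = 250;
  SHOW log_min_duration_statement;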



Re: [PERFORM] Feature Request --- was: PostgreSQL Performance Tuning

2007-04-27 Thread Ray Stell
On Fri, Apr 27, 2007 at 02:40:07PM -0400, Kevin Hunter wrote:
> out that many run multiple postmasters or have other uses for the  
> machines in question), but perhaps it could send a message (email?)  
> along the lines of "Hey, I'm currently doing this many of X  
> transactions, against this much of Y data, and working under these  
> constraints.  You might get better performance (in this area ... ) if  
> you altered the configuration options like so: ..."


Or store the values in the db for later trending analysis; witness
Oracle's Statspack.
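
A bare-bones sketch of that idea in plain SQL (names are illustrative;
pg_stat_database is just one of several stats views worth capturing):

  -- One-time setup: snapshot table with a capture timestamp, structure only.
  CREATE TABLE stats_snapshot AS
  SELECT now() AS snapped_at, * FROM pg_stat_database WHERE false;

  -- Run periodically (e.g. from cron via psql):
  INSERT INTO stats_snapshot SELECT now(), * FROM pg_stat_database;

  -- Later, trend the raw counters over time for one database:
  SELECT snapped_at, xact_commit, blks_read, blks_hit
  FROM stats_snapshot WHERE datname = 'mydb' ORDER BY snapped_at;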



Re: [PERFORM] Volunteer to build a configuration tool

2007-06-19 Thread Ray Stell
On Mon, Jun 18, 2007 at 04:35:11PM -0700, Steve Atkins wrote:
> 
> On Jun 18, 2007, at 4:09 PM, [EMAIL PROTECTED] wrote:
> 
> The tricky bits are going to be defining the problem and creating the
> algorithm to do the maths from input to output.


Why not methodically discuss the algorithms on pgsql-performance, thus
improving the chance of being on target up front?  Plus, we newbies get to
see what you are thinking, which expands our universe.  I know I'd read
every word.

Thanks for doing this, btw.



Re: [PERFORM] PostgreSQL Configuration Tool for Dummies

2007-06-19 Thread Ray Stell
On Tue, Jun 19, 2007 at 10:49:01AM -0700, Y Sidhu wrote:
> On 6/19/07, Francisco Reyes <[EMAIL PROTECTED]> wrote:
> >
> >Campbell, Lance writes:
> >
> >> Francisco and Richard,
> >> Why ask about disk or raid?  How would that impact any settings in
> >> postgresql.conf?
> >
> >If the user has 2 disks and says that he will do a lot of updates he could
> >put pg_xlog in the second disk.
> >
> >
> 
> Let's not ask about disk or raid at this level of sanity tuning. It is
> important for a newbie to take the right first step. When it comes to disks,
> we start talking I/O, SATA, SCSI and the varying degrees of SATA and SCSI,
> and controller cards. Then we throw in RAID and the different levels
> therein. Add to that, we can talk about drivers controlling these drives and
> which OS is faster, more stable, etc. As you can see, a newbie would get
> drowned. So, please keep it simple. I know many people on this list are
> Gurus. We know you are the best in this field, but we are not and are just
> trying to improve what we have.



Ignoring the i/o subsystem in db configuration, there's an idea.

You could request some bonnie++ output (easy to acquire) as a baseline,
do your magic analysis based on it, and skip that step, with a warning, if
the output is not provided.  Of course, the magic may be harder to come by.



Re: [PERFORM] PostgreSQL publishes first real benchmark

2007-07-13 Thread Ray Stell


I had such great hopes for this thread.  "Alas, poor Yorick! I 
knew him, Horatio ..."




On Thu, Jul 12, 2007 at 11:00:54AM -0400, Greg Smith wrote:
> On Thu, 12 Jul 2007, Gregory Stark wrote:
> 
> >In any case I wouldn't think the use case for a feature like this would
> >actually apply in the case of a benchmark.
> 
> I've also seen a tiny setting for commit_delay (like the 10 they used) as 
> helping improve throughput under a heavy commit load with many processors. 
> I'm not sure why a quick yield of the processor at that point helps, but 
> there seem to be cases where it does.  Don't think it has anything to do 
> with the originally intended use for this parameter, probably some sort of 
> OS scheduler quirk.
> 
> >The use case where something like this is needed is where there are not 
> >enough concurrent requests to keep the server busy during the fsync of 
> >the wal.
> 
> I've actually finished a long investigation of this recently that will be 
> on my web page soon.  On a non-caching controller where you'd think 
> there's the most benefit here, I was only able to get about 10% more 
> commits at low client loads by setting the delay to about 1/2 of the fsync 
> time, and a few percent more at high loads by setting a delay longer than 
> the fsync time.  It's really a slippery setting though--very easy to set 
> in a way that will degrade performance significantly if you're not very 
> systematic about testing it many times at various client counts.
> 
> --
> * Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD
> 


Re: [PERFORM] Postgres insert performance and storage requirement compared to Oracle

2010-10-25 Thread Ray Stell

On Mon, Oct 25, 2010 at 11:39:30AM -0700, Divakar Singh wrote:
> Thanks Ray,
> Already seen that, but it does not say anything about storage requirements 
> compared to Oracle. I find it takes twice the space that Oracle does. 
> 
> 
>  Best Regards,
> Divakar
> ____
> From: Ray Stell 
> To: Divakar Singh 
> Sent: Tue, October 26, 2010 12:05:23 AM
> Subject: Re: [PERFORM] Postgres insert performance and storage requirement 
> compared to Oracle
> 
> On Mon, Oct 25, 2010 at 11:12:40AM -0700, Divakar Singh wrote:
> > 
> > 2. What are the average storage requirements of postgres compared to 
> > Oracle? I 
> 
> > inserted upto 1 million records. The storage requirement of postgreSQL is 
> >almost 
> >
> > double than that of Oracle.
> 
> there's a fine manual:
> http://www.postgresql.org/docs/9.0/interactive/storage.html


Maybe compare to Oracle's storage documentation:

 
http://download.oracle.com/docs/cd/E11882_01/server.112/e17118/sql_elements001.htm#SQLRF30020
 
http://download.oracle.com/docs/cd/E11882_01/server.112/e17120/schema007.htm#ADMIN11622

I don't believe for a second that the byte count is double in pg, but that's
just a religious expression; I've never counted.
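
For anyone who does want to count, a quick way on 8.1 and later (the table
name is illustrative):

  -- Heap only, and heap plus indexes and TOAST:
  SELECT pg_size_pretty(pg_relation_size('mytable'))       AS table_only,
         pg_size_pretty(pg_total_relation_size('mytable')) AS with_indexes_and_toast;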



Re: [PERFORM] Really really slow select count(*)

2011-02-06 Thread Ray Stell
On Sun, Feb 06, 2011 at 11:48:50AM +0100, felix wrote:
> BRUTAL
> 

Did the changes work in your test environment?



Re: [PERFORM] Bottleneck?

2009-08-06 Thread Ray Stell
On Thu, Aug 06, 2009 at 12:50:51PM +1000, Ip Wing Kin John wrote:
> (running DTrace tool kit iofile.d script to show I/O wait time by
> filename and process)

Is the dtrace toolkit a viable product for a Linux environment, or
is it strictly Sun/Oracle?



Re: [PERFORM] Bottleneck?

2009-08-06 Thread Ray Stell
On Thu, Aug 06, 2009 at 11:01:52AM -0400, Tom Lane wrote:
> 
> dtrace is available on Solaris and Mac OS X and probably a couple
> other platforms, but not Linux.  

I wondered if anyone had given this a go:

 http://amitksaha.blogspot.com/2009/03/dtrace-on-linux.html



Re: [PERFORM] Bottleneck?

2009-08-06 Thread Ray Stell
On Thu, Aug 06, 2009 at 09:12:22AM -0700, Joshua D. Drake wrote:
> Why aren't you using systemtap again? 

1. significant Solaris responsibilities
2. significant Linux responsibilities
3. tool consolidation delusions

Can you drive the DTrace toolkit via SystemTap?



Re: [PERFORM] Stalls on PGSemaphoreLock

2014-03-25 Thread Ray Stell

On Mar 25, 2014, at 8:46 AM, Matthew Spilich wrote:

> The symptom:   The database machine (running postgres 9.1.9 on CentOS 6.4) is 
> running at low utilization most of the time, but once every day or two, it 
> will appear to slow down to the point where queries back up and clients are 
> unable to connect.  Once this event occurs, there are lots of concurrent 
> queries and I see slow queries appear in the logs, but there doesn't appear to 
> be anything abnormal that I have been able to see that causes this behavior.
...
> Has anyone on the forum seen something similar?   Any suggestions on what to 
> look at next?  If it is helpful to describe the server hardware, it's got 2 
> E5-2670 CPUs and 256 GB of RAM, and the database is hosted on 1.6TB RAID 10 
> local storage (15K 300 GB drives).  



I could be way off here, but years ago I experienced something like this (in 
Oracle land), and after some stressful chasing, a marginal failure of the RAID 
controller revealed itself.  Same kind of event: steady traffic, and then some 
I/O would not complete and normal operations would stack up.  Anyway, what you 
report reminded me of that event.  The E5 is a few years old; I wonder if the 
RAID controller firmware needs a patch?  I suppose a marginal power supply 
might cause a similar "hang."  Marginal failures are very painful.  Have you 
checked sar or OS logging at event time?
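
Also worth capturing from inside the database while the stall is happening;
a sketch assuming 9.1 catalogs (the column names changed in 9.2).  It only
shows heavyweight-lock waits, but it helps rule those in or out:

  -- Backends blocked on a lock and how long their query has been running:
  SELECT procpid, usename, now() - query_start AS runtime, current_query
  FROM pg_stat_activity
  WHERE waiting
  ORDER BY query_start;

  -- Ungranted lock requests:
  SELECT locktype, relation::regclass, mode, pid
  FROM pg_locks
  WHERE NOT granted;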