On Thu, Nov 14, 2013 at 10:44 PM, Alexander Farber
wrote:
> sysctl.conf:
>
>kernel.shmmax=17179869184
>kernel.shmall=4194304
You do not need those settings in sysctl.conf since 9.3, as consumption
of System V shared memory has been reduced with this commit:
http://git.postgresql.org/gitweb/?p=post
On 14 November 2013, 16:09, Alexander Farber wrote:
> pgtune has produced the following for my server (the specs:
> http://www.hetzner.de/en/hosting/produkte_rootserver/px60ssd ):
>
> default_statistics_target = 50
> maintenance_work_mem = 1GB
> constraint_exclusion = on
> checkpoint_completion_tar
On Thu, Nov 14, 2013 at 5:44 AM, Alexander Farber
wrote:
> sysctl.conf:
>
>kernel.shmmax=17179869184
>kernel.shmall=4194304
You can also consult this document about kernel and OS settings:
https://github.com/grayhemp/pgcookbook/blob/master/database_server_configuration.md
--
Kind regards,
On 11/14/2013 10:09 AM, Alexander Farber wrote:
pgtune has produced the following for my server (the specs:
http://www.hetzner.de/en/hosting/produkte_rootserver/px60ssd ):
default_statistics_target = 50
maintenance_work_mem = 1GB
constraint_exclusion = on
checkpoint_completion_target = 0.9
effec
pgtune has produced the following for my server (the specs:
http://www.hetzner.de/en/hosting/produkte_rootserver/px60ssd ):
default_statistics_target = 50
maintenance_work_mem = 1GB
constraint_exclusion = on
checkpoint_completion_target = 0.9
effective_cache_size = 22GB
work_mem = 192MB
wal_buffer
And pgtune is 4 years old...
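For what it's worth, a quick way to compare suggestions like the above with what the server is actually running is to query pg_settings (a sketch; the parameter list simply mirrors the ones mentioned above):

    SELECT name, setting, unit
      FROM pg_settings
     WHERE name IN ('default_statistics_target', 'maintenance_work_mem',
                    'effective_cache_size', 'work_mem', 'shared_buffers');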
Hello, my problem with
http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server
is that it never mentions larger amounts of RAM,
so while reading it I always wonder how up-to-date it is...
Hi,
On Thu, Nov 14, 2013 at 7:14 PM, Alexander Farber <
alexander.far...@gmail.com> wrote:
> Hello,
>
> do these changes look okay, please, for PostgreSQL 9.3 running on a CentOS
> 6.4 server with 32 GB RAM (with Drupal 7 and a few custom PHP scripts)?
>
> postgresql.conf:
>
>shared_buffers = 4096MB
Hello,
do these changes look okay, please, for PostgreSQL 9.3 running on a CentOS
6.4 server with 32 GB RAM (with Drupal 7 and a few custom PHP scripts)?
postgresql.conf:
shared_buffers = 4096MB
work_mem = 32MB
checkpoint_segments = 32
log_min_duration_statement = 1
sysctl.conf:
On May 16, 2013, at 5:56 PM, Ramsey Gurley wrote:
Hi All,
I tried bumping my read ahead up to 4096. Instead of having faster reads, it seems it actually slowed things down. In fact, most of the tuning suggestions I've tried have made little to no difference in the results I get from bonnie++. I've run
Hi All,
I tried bumping my read ahead up to 4096. Instead of having faster reads, it
seems it actually slowed things down. In fact, most of the tuning suggestions
I've tried have made little to no difference in the results I get from
bonnie++. I'll include a table of values in html. I'm wonderi
On May 16, 2013, at 6:01 AM, Shaun Thomas wrote:
> On 05/15/2013 08:04 PM, Ramsey Gurley wrote:
>
>> My question: Is that advice just for the database drive, or should I
>> increase read ahead on the OS/WAL disk as well?
>
> Definitely the database drive, but it doesn't hurt to do both. It does
On 05/15/2013 08:04 PM, Ramsey Gurley wrote:
My question: Is that advice just for the database drive, or should I
increase read ahead on the OS/WAL disk as well?
Definitely the database drive, but it doesn't hurt to do both. It
doesn't mention it in the book, but if you have a Debian or Ubunt
Hi all,
I've just gotten into my new database server yesterday and I've started doing
database setup and tuning.
I'm on a Rackspace Linux server with two raid arrays. Both are ext3. One is a
two disk RAID1 I plan on using for WAL and OS, the other is a four disk RAID10
I will use for the data.
On 11/07/2012 01:29 PM, Bryan Montgomery wrote:
> Thanks for the insight. I know why some of the settings were set, but
> there are others that have been highlighted that now don't make sense.
> I'll strongly recommend adjusting those now.
Good. If you've been running with fsync=off all this time I
Thanks for the insight. I know why some of the settings were set, but there
are others that have been highlighted that now don't make sense. I'll
strongly recommend adjusting those now.
On Tue, Nov 6, 2012 at 8:54 PM, Craig Ringer wrote:
> Thanks for including your configuration and version; it
Thanks for including your configuration and version; it makes things
much easier.
Reply follows inline.
On 11/06/2012 09:04 PM, Bryan Montgomery wrote:
> I'm wondering what general ways there are to monitor and improve
> performance? We look at pgadmin's server status but that only sees the
> fun
Bryan Montgomery wrote:
> We have a system, that due to recent events is getting a lot
> heavier use. The application makes extensive use of functions.
> These functions would typically run sub-second but now can take
> several seconds.
> The server is running on suse 11.4 with 8 vcpu and 32Gb ra
Hello experts! (and others like me).
We have a system, that due to recent events is getting a lot heavier use.
The application makes extensive use of functions. These functions would
typically run sub-second but now can take several seconds.
I'm wondering what general ways there are to monitor
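One hedged option for the function-heavy workload described above (an assumption on my part, not something stated in the thread: it requires 8.4 or later with track_functions enabled in postgresql.conf) is to ask the statistics collector which functions are eating the time:

    -- assumes track_functions = 'all' (or 'pl') has been set and the config reloaded
    SELECT funcname, calls, total_time, self_time
      FROM pg_stat_user_functions
     ORDER BY total_time DESC
     LIMIT 10;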
On 11/06/11 8:51 AM, René Fournier wrote:
Just wondering what I can do to squeeze out more performance of my
database application? Here's my configuration:
- Mac mini server
- Core i7 quad-core at 2GHz
- 16GB memory
- Dedicated fast SSD (two SSDs in the server)
- Mac OS X
Just wondering what I can do to squeeze out more performance of my database
application? Here's my configuration:
- Mac mini server
- Core i7 quad-core at 2GHz
- 16GB memory
- Dedicated fast SSD (two SSDs in the server)
- Mac OS X 10.7.2 (*not* using OS X Server)
- PostgreSQL 9.0.5
- PostGIS 1.5.3
2011/10/6 Carlos Mennens :
> I read all the time that most DBAs are required to, or should, tune their
> DBMS, which obviously in my case would be PostgreSQL, but I'm curious
> what exactly is involved when tuning a DBMS like PostgreSQL. What are
> some of the basic functions involved when tuning? Are th
I read all the time that most DBAs are required to, or should, tune their
DBMS, which obviously in my case would be PostgreSQL, but I'm curious
what exactly is involved when tuning a DBMS like PostgreSQL. What are
some of the basic functions involved when tuning? Are there specific
things I should tweak
Steve Atkins wrote:
> On Jun 20, 2011, at 10:49 PM, CSS wrote:
>
>> Hello,
>>
>> I couldn't find much information in the archives on this -- perhaps this
>> is a bit of a specialized need, but I was hoping for some input from
>> some experienced postgres admins.
>>
>> I'm moving some DNS servers f
Greg Smith wrote:
> On 06/21/2011 01:49 AM, CSS wrote:
>> Some raw numbers: We're only looking at a total of about six tables in
>> one db. In total there are going to be well under 10,000 records in ALL
>> tables. That might increase to at most 100,000 in the next few years.
>> Our raw DNS queri
On Jun 20, 2011, at 10:49 PM, CSS wrote:
> Hello,
>
> I couldn't find much information in the archives on this -- perhaps this
> is a bit of a specialized need, but I was hoping for some input from
> some experienced postgres admins.
>
> I'm moving some DNS servers from djbdns/tinydns to PowerD
On 06/21/2011 01:49 AM, CSS wrote:
Some raw numbers: We're only looking at a total of about six tables in
one db. In total there are going to be well under 10,000 records in ALL
tables. That might increase to at most 100,000 in the next few years.
Our raw DNS queries/second tops out around 50 q
On Tuesday, 21 June 2011 at 01:49 -0400, CSS wrote:
> I couldn't find much information in the archives on this -- perhaps this
> is a bit of a specialized need, but I was hoping for some input from
> some experienced postgres admins.
>
Hi,
I am not an experienced postgres admin, but I am an exper
Hello,
I couldn't find much information in the archives on this -- perhaps this
is a bit of a specialized need, but I was hoping for some input from
some experienced postgres admins.
I'm moving some DNS servers from djbdns/tinydns to PowerDNS. While it
supports many backends, postgresql seems li
Hello,
I couldn't find much information in the archives on this -- perhaps this
is a bit of a specialized need, but I was hoping for some input from
some experienced pgsql admins.
I'm moving some DNS servers from djbdns/tinydns to PowerDNS. While it
supports many backends, postgres seems like th
On 27/04/11 01:34, Joel Reymont wrote:
On Apr 26, 2011, at 4:31 PM, Scott Marlowe wrote:
What's your work_mem and max_connections set to?
I have the default settings, e.g. work_mem = 1MB and max_connections = 100.
I'm looking to process 400 requests per second, though. What should I use for
On Tue, Apr 26, 2011 at 9:34 AM, Joel Reymont wrote:
>
> On Apr 26, 2011, at 4:31 PM, Scott Marlowe wrote:
>
>> It's a reasonable start. However, if you're consistently using less than
>> that in aggregate then lowering it is fine.
>
> Is there a way to tell if I consistently use less than that in a
On Apr 26, 2011, at 4:31 PM, Scott Marlowe wrote:
> It's a reasonable start. However, if you're consistently using less than
> that in aggregate then lowering it is fine.
Is there a way to tell if I consistently use less than that in aggregate?
> What's your work_mem and max_connections set to?
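As for telling whether the shared buffers are consistently less than fully used, one hedged way (it assumes the contrib module pg_buffercache is installed, which the thread does not say) is to count how many shared buffers actually hold a page:

    SELECT count(relfilenode) AS buffers_in_use,
           count(*)           AS buffers_total
      FROM pg_buffercache;

If buffers_in_use stays well below buffers_total under normal load, shared_buffers is probably larger than the working set.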
On Tue, Apr 26, 2011 at 8:59 AM, Joel Reymont wrote:
> I'm running pgsql on an m1.large EC2 instance with 7.5gb available memory.
>
> The free command shows 7gb of free+cached. My understanding from the docs is
> that I should dedicate 1.75gb to shared_buffers (25%) and set
> effective_cache_size t
I'm running pgsql on an m1.large EC2 instance with 7.5gb available memory.
The free command shows 7gb of free+cached. My understanding from the docs is that
I should dedicate 1.75gb to shared_buffers (25%) and set effective_cache_size
to 7gb.
Is this correct? I'm running 64-bit Ubuntu 10.10, e.g
On Mon, Feb 7, 2011 at 7:43 PM, Vinubalaji Gopal wrote:
> Thank you. I will try to run some performance tests using the async
> commit option. Is there an easy way to find the lost transactions or
> does it have to be handled by the application?
>
By definition, your application has to be able to
Thank you. I will try to run some performance tests using the async
commit option. Is there an easy way to find the lost transactions or
does it have to be handled by the application?
On Mon, Feb 7, 2011 at 6:23 AM, Vick Khera wrote:
> On Thu, Feb 3, 2011 at 7:08 PM, Vinubalaji Gopal wrote:
>> a
On Thu, Feb 3, 2011 at 7:08 PM, Vinubalaji Gopal wrote:
> already does this. I looked at the WAL parameters and the new async
> commit but not sure if I am looking at the right place. Say i have 10
> clients connecting and each client is inserting a record. I want to
>
You want the async commit
Hi,
I was wondering if PostgreSQL can be tuned so that it writes to
disk every n seconds, or when the buffer reaches a certain threshold,
to improve performance. This is a special case where the application
wouldn't care if there is a data loss of 1 second or less. I would be
interested to k
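The asynchronous commit being suggested in the replies is, roughly, the following (a sketch, available in 8.3 and later); it can also be set per transaction, so only the loss-tolerant parts of the application need to opt in:

    -- commits return before the WAL is flushed; the WAL writer flushes on its own
    -- schedule (about wal_writer_delay), so a crash can lose the last fraction of a
    -- second of commits, but it never corrupts the database
    SET synchronous_commit TO off;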
> Hi (sorry for the double posting, I thought I'd used the wrong email
> address but both have been posted anyway). As far as the db is concerned
> I'm just reading data then writing the data to a Lucene search index (which
> is outside of the database), but my laptop is just a test machine I want
> to
On Thu, 2010-12-09 at 17:40 +, Paul Taylor wrote:
> what
> have I got to be careful of.
I think that was in reference to turning fsync off, not work_mem values.
On Thu, Dec 9, 2010 at 5:25 AM, Paul Taylor wrote:
> Hi, I'm using Postgres 8.3 on a MacBook Pro laptop.
> I'm using the database with just one db connection to build a Lucene search
> index from some of the data, and I'm trying to improve performance. The key
> thing is that I'm only a single user bu
On 09/12/2010 15:51, t...@fuzzy.cz wrote:
Hi, I'm using Postgres 8.3 on a MacBook Pro laptop.
I'm using the database with just one db connection to build a Lucene
search index from some of the data, and I'm trying to improve
performance. The key thing is that I'm only a single user but
manipulating l
On 12/09/2010 12:36 PM, Paul Taylor wrote:
> On 09/12/2010 15:12, Reid Thompson wrote:
>> On 12/09/2010 09:59 AM, Andy Colson wrote:
>>> On 12/9/2010 8:50 AM, Andy Colson wrote:
On 12/9/2010 6:25 AM, Paul Taylor wrote:
You need to bump up your SHMMAX is your OS.
>>> sorry: SHMMAX _in_ your OS.
On 09/12/2010 15:12, Reid Thompson wrote:
On 12/09/2010 09:59 AM, Andy Colson wrote:
On 12/9/2010 8:50 AM, Andy Colson wrote:
On 12/9/2010 6:25 AM, Paul Taylor wrote:
You need to bump up your SHMMAX is your OS.
sorry: SHMMAX _in_ your OS.
its an OS setting not a PG one.
-Andy
scroll down
> Hi, I'm using Postgres 8.3 on a MacBook Pro laptop.
> I'm using the database with just one db connection to build a Lucene
> search index from some of the data, and I'm trying to improve
> performance. The key thing is that I'm only a single user but
> manipulating large amounts of data, i.e. process
Hi, I'm using Postgres 8.3 on a MacBook Pro laptop.
I'm using the database with just one db connection to build a Lucene
search index from some of the data, and I'm trying to improve
performance. The key thing is that I'm only a single user but
manipulating large amounts of data, i.e. processing ta
On 12/09/2010 09:59 AM, Andy Colson wrote:
> On 12/9/2010 8:50 AM, Andy Colson wrote:
>> On 12/9/2010 6:25 AM, Paul Taylor wrote:
>> You need to bump up your SHMMAX is your OS.
>
> sorry: SHMMAX _in_ your OS.
>
> its an OS setting not a PG one.
>
> -Andy
>
>
scroll down to the section on OSX
On 12/9/2010 8:50 AM, Andy Colson wrote:
On 12/9/2010 6:25 AM, Paul Taylor wrote:
Hi, I'm using Postgres 8.3 on a MacBook Pro laptop.
I'm using the database with just one db connection to build a Lucene
search index from some of the data, and I'm trying to improve
performance. The key thing is that
On 12/9/2010 6:25 AM, Paul Taylor wrote:
Hi, I'm using Postgres 8.3 on a MacBook Pro laptop.
I'm using the database with just one db connection to build a Lucene
search index from some of the data, and I'm trying to improve
performance. The key thing is that I'm only a single user but
manipulating la
Hi, I'm using Postgres 8.3 on a MacBook Pro laptop.
I'm using the database with just one db connection to build a Lucene
search index from some of the data, and I'm trying to improve
performance. The key thing is that I'm only a single user but
manipulating large amounts of data, i.e. processing ta
Ben Chobot wrote:
Hm, my shared_buffers is already 10GB, but I'm using about 80GB for filesystem
cache. Would a larger shared_buffers make sense? I thought I read somewhere
that 10GB is on the high end of the useful size for shared_buffers.
Yeah, I figured that out when I was analyzing your
On Feb 22, 2010, at 6:47 PM, Greg Smith wrote:
> Ben Chobot wrote:
>> Am I reading it correctly to say that the bgwriter probably wouldn't help
>> much, because a majority of the dirty pages appear to be popular?
>
> Yes. The background writer cleaner process only does something useful if
> t
Ben Chobot wrote:
Am I reading it correctly to say that the bgwriter probably wouldn't
help much, because a majority of the dirty pages appear to be popular?
Yes. The background writer cleaner process only does something useful
if there are pages with low usage counts it can evict. You woul
On Feb 17, 2010, at 6:38 PM, Greg Smith wrote:
> Ben Chobot wrote:
>> Is there a way to tell if I really am just keeping the same few pages dirty
>> throughout every checkpoint? I wouldn't have expected that, but given our
>> application I suppose it is possible.
>
> You can install pg_buffercache
Ben Chobot wrote:
Is there a way to tell if I really am just keeping the same few pages dirty
throughout every checkpoint? I wouldn't have expected that, but given our
application I suppose it is possible.
You can install pg_buffercache and look at what's in the cache to check
your theory.
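A sketch of that kind of check, assuming the pg_buffercache contrib module is installed (on 8.4 that means running its contrib SQL script): group the buffer cache by usage count and dirty flag to see whether the dirty pages really are the popular ones.

    SELECT usagecount, isdirty, count(*)
      FROM pg_buffercache
     GROUP BY usagecount, isdirty
     ORDER BY usagecount, isdirty;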
On Feb 17, 2010, at 3:41 PM, Greg Smith wrote:
> Ben Chobot wrote:
>> As I understand things, assuming I don't keep updating the same pages then
>> buffers_backend should be a small percentage of buffers_alloc, and
>> buffers_clean should be larger than it is compared to buffers_checkpoint. Is
Ben Chobot wrote:
As I understand things, assuming I don't keep updating the same pages
then buffers_backend should be a small percentage of buffers_alloc,
and buffers_clean should be larger than it is compared to
buffers_checkpoint. Is my understanding correct?
Sure; your buffers_clean is re
On Feb 14, 2010, at 10:25 AM, Ben Chobot wrote:
> We recently upgraded to 8.4.2 and I'm trying to make sure our bgwriter is
> working as well as it can. Based on:
>
> # select * from pg_stat_bgwriter ;
> checkpoints_timed | checkpoints_req | buffers_checkpoint | buffers_clean |
> maxwritten_cl
We recently upgraded to 8.4.2 and I'm trying to make sure our bgwriter is
working as well as it can. Based on:
# select * from pg_stat_bgwriter ;
checkpoints_timed | checkpoints_req | buffers_checkpoint | buffers_clean |
maxwritten_clean | buffers_backend | buffers_alloc
---+--
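The ratios being discussed can be pulled straight from that view; a minimal sketch:

    SELECT round(100.0 * buffers_clean   / nullif(buffers_checkpoint, 0), 1) AS clean_vs_checkpoint_pct,
           round(100.0 * buffers_backend / nullif(buffers_alloc, 0), 1)      AS backend_vs_alloc_pct
      FROM pg_stat_bgwriter;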
Just wanted to let everyone know that I set up partitioning and got my
30 days of data in. This sucker is FAST. And it wasn't nearly as
complicated to make happen as I thought it would be. I didn't use the
triggers or rules to make sure data goes in the right table on insert,
as there's only
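For readers following along, a minimal sketch of the kind of per-day child table being described (table, column and file names are made up; this is the 8.x/9.x inheritance style of partitioning, and it matches the no-trigger approach above because each hourly COPY targets the right child directly):

    CREATE TABLE log_lines (
        logtime  timestamptz NOT NULL,
        message  text
    );
    CREATE TABLE log_lines_2009_05_21 (
        CHECK (logtime >= DATE '2009-05-21' AND logtime < DATE '2009-05-22')
    ) INHERITS (log_lines);

    -- load the hourly flat file straight into the matching child, no trigger needed
    COPY log_lines_2009_05_21 FROM '/path/to/logs/2009-05-21-14.log';

    -- with constraint_exclusion on, date-bounded reports scan only the relevant children
    SET constraint_exclusion = on;
    SELECT count(*) FROM log_lines
     WHERE logtime >= DATE '2009-05-21' AND logtime < DATE '2009-05-22';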
On Thu, May 21, 2009 at 3:37 PM, Alex Thurlow wrote:
> I was hoping to not have to change all my code to automate the partitioning
> table creation stuff, but if that's really the best way, I'll check it out.
> Thanks for the advice.
About 18 months ago we split a large table with 300+ million
I was hoping to not have to change all my code to automate the
partitioning table creation stuff, but if that's really the best way,
I'll check it out. Thanks for the advice.
Alex Thurlow
Blastro Networks
http://www.blastro.com
http://www.roxwel.com
http://www.yallwire.com
On 5/21/2009 2:2
On Thu, May 21, 2009 at 1:39 PM, Alex Thurlow wrote:
> I should also specify how my inserts are happening I guess. I'm actually
> doing the logs to flat files and then inserting them into the database on an
> hourly basis using COPY, so I don't need to worry as much about the log
> insert speed a
On Thu, May 21, 2009 at 1:36 PM, Scott Marlowe wrote:
>
Below, I meant with a logging / reporting database...
> With a logging database you're optimizing two often opposing actions.
> Lots of small inserts in a stream that HAVE to get processed and put
> in efficiently. This is often accomplish
I should also specify how my inserts are happening I guess. I'm
actually doing the logs to flat files and then inserting them into the
database on an hourly basis using COPY, so I don't need to worry as much
about the log insert speed as I do the reporting.
On 5/21/2009 2:36 PM, Scott Marlowe
On Thu, May 21, 2009 at 1:13 PM, Alex Thurlow wrote:
> I have a postgresql database that I'm using for logging of data. There's
> basically one table where each row is a line from my log files. It's
> getting to a size where it's running very slow though. There are about 10
> million log lines p
I have a postgresql database that I'm using for logging of data.
There's basically one table where each row is a line from my log files.
It's getting to a size where it's running very slow though. There are
about 10 million log lines per day and I keep 30 days of data in it.
All the columns
On Thu, May 21, 2009 at 3:13 PM, Alex Thurlow wrote:
> I have a postgresql database that I'm using for logging of data. There's
> basically one table where each row is a line from my log files. It's
> getting to a size where it's running very slow though. There are about 10
> million log lines p
I have a postgresql database that I'm using for logging of data.
There's basically one table where each row is a line from my log files.
It's getting to a size where it's running very slow though. There are
about 10 million log lines per day and I keep 30 days of data in it.
All the columns I
On Tue, 2007-12-04 at 14:21 -0500, Gauthier, Dave wrote:
> Is there something like a freeware windows client app that does DBA
> stuff for a remote server? Sort of like TOAD for Oracle?
pgadmin3?
and please don't hijack threads
On Tue, Dec 04, 2007 at 02:21:43PM -0500, Gauthier, Dave wrote:
> Is there something like a freeware windows client app that does DBA
> stuff for a remote server? Sort of like TOAD for Oracle?
TOAD either does or used to work for Postgres.
On Dec 4, 2007, at 11:17 AM, Konrad Neuwirth wrote:
> Hello,
>
> I have a hopefully simple question. I've found documentation about
> the meanings of the various variables in postgres.conf -- when it comes
> to memory consump
On Dec 4, 2007, at 11:17 AM, Konrad Neuwirth wrote:
Hello,
I have a hopefully simple question. I've found documentation about
the meanings of the various variables in postgres.conf -- when it comes
to memory consumption and buffer size. I've also found hints that
making them too large decrease
Hello,
I have a hopefully simple question. I've found documentation about
the meanings of the various variables in postgres.conf -- when it comes
to memory consumption and buffer size. I've also found hints that
making them too large decreases performance. But -- how can I measure
how well the c
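One common, if rough, way to put a number on how well the buffer settings are working (a sketch, not something from the thread itself) is the overall cache hit ratio:

    SELECT sum(blks_hit)::float / nullif(sum(blks_hit) + sum(blks_read), 0) AS cache_hit_ratio
      FROM pg_stat_database;

Bear in mind it only counts PostgreSQL's own shared buffers; reads served from the OS page cache still show up as blks_read.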
Roman Neuhauser wrote:
# [EMAIL PROTECTED] / 2006-08-11 10:12:40 -0400:
I think my Dell Precision 650 has SATA on the motherboard. The boss says
I can order one drive, so what should I get? How much faster is RAID 0+1
than a single drive?
If you can order one drive, get a drive that is twice
# [EMAIL PROTECTED] / 2006-08-11 10:12:40 -0400:
> I think my Dell Precision 650 has SATA on the motherboard. The boss says
> I can order one drive, so what should I get? How much faster is RAID 0+1
> than a single drive?
You need 4 disks for 0+1 (or 1+0, also called 10).
On 8/11/06, Tom Laudeman <[EMAIL PROTECTED]> wrote:
Merlin,
The problem is disk. I've got a WD Caviar. hdparm says it does 44MB/sec
(I ran that in single user mode so there was nothing interfering). A WD
Caviar SE SATA in one of my servers at home gets 56MB/sec on a quiescent
system at runlevel 3
Merlin,
The problem is disk. I've got a WD Caviar. hdparm says it does 44MB/sec
(I ran that in single user mode so there was nothing interfering). A WD
Caviar SE SATA in one of my servers at home gets 56MB/sec on a quiescent
system at runlevel 3. What kind of values does hdparm give for a SATA
Michael Fuhr <[EMAIL PROTECTED]> writes:
> Excerpt from the 8.1 Release Notes:
> * Allow nonconsecutive index columns to be used in a multicolumn
> index (Tom)
> For example, this allows an index on columns a,b,c to be used in
> a query with WHERE a = 4 and c = 10.
> If you're quer
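A minimal sketch of that release-note example (the table and data are made up for illustration):

    CREATE TABLE t (a int, b int, c int);
    CREATE INDEX t_a_b_c_idx ON t (a, b, c);
    -- on 8.1 and later the planner can still use t_a_b_c_idx here,
    -- even though the middle column b is not constrained
    EXPLAIN SELECT * FROM t WHERE a = 4 AND c = 10;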
On Thu, Aug 10, 2006 at 11:00:00AM -0400, Tom Laudeman wrote:
> As far as I can tell (from running actual tests) Postgres will not use a
> multi-column index when the SELECT is constraining on only one of the
> columns in the index.
Excerpt from the 8.1 Release Notes:
* Allow nonconsecutive i
Reece,
We have some multi-column indexes. Speed of insert, update, and delete
are not an issue since this data is essentially static: write-once,
read-many.
As far as I can tell (from running actual tests) Postgres will not use
a multi-column index when the SELECT is constraining on only one o
On 8/9/06, Tom Laudeman <[EMAIL PROTECTED]> wrote:
The speed of the query is (as Michael implies) limited to the rate at which
the disk can seek and read. I have done experiments with views and cursors;
there was no improvement in speed. I've also tried only pulling back
primary keys in the ho
Richard Broersma Jr wrote:
Views certainly help in managing complexity. They do nothing to improve
query-speed.
Querying a view gets rewritten to queries to the underlying tables on the
fly.
(as long as there are no materialized views, which are still on the TODO
list)
Would partial indexes o
On Wed, 2006-08-09 at 16:54 -0400, Tom Laudeman wrote:
Great suggestion. I've read about CLUSTER, but never had a chance to use it. The only problem is that this table with 9 million records has 5 or 6 indexes. It is hard to pick the most used, but I'll bet CLUSTER will make at least one o
On Wed, Aug 09, 2006 at 04:54:00PM -0400, Tom Laudeman wrote:
> I'll try CLUSTER (I'm looking forward to that test), but if we really
> need speed, it will probably be necessary to create copies of the table,
> or copy portions of the table elsewhere (essentially creating
> materialized views, I
Michael,
Great suggestion. I've read about CLUSTER, but never had a chance to
use it. The only problem is that this table with 9 million records has
5 or 6 indexes. It is hard to pick the most used, but I'll bet CLUSTER
will make at least one of the queries run very fast, especially for an
inde
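For reference, a sketch of what that would look like (table and index names are hypothetical; this is the 8.x spelling, and newer releases also accept CLUSTER tablename USING indexname). CLUSTER rewrites the whole table in index order and takes an exclusive lock while it runs, so only the one chosen ordering benefits:

    CLUSTER big_table_date_idx ON big_table;
    ANALYZE big_table;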
On Wed, 2006-08-09 at 14:58, louis gonzales wrote:
> I'm not so sure about that, when you create a view on a table - at least
> with Oracle - which is a subset (the trivial or 'proper' subset is the
> entire table view) of th
I'm not so sure about that: when you create a view on a table - at least
with Oracle - which is a subset (the trivial or 'proper' subset is the
entire table view) of the information on a table, then when a select is
issued against a table, Oracle at least determines if there is a view
already on a t
> Views certainly help in managing complexity. They do nothing to improve
> query-speed.
>
> Querying a view gets rewritten to queries to the underlying tables on the
> fly.
> (as long as there are no materialized views, which are still on the TODO
> list)
Would partial indexes on the most queri
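To make the question concrete, a partial index looks like the following sketch (table and column names are hypothetical): it indexes only the slice that is queried constantly, so it stays small and cheap to maintain.

    CREATE INDEX active_rows_idx ON big_table (lookup_col)
     WHERE status = 'active';

It only helps if the queries repeat the same WHERE condition, so the planner can prove the index covers them.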
Louis,
Views certainly help in managing complexity. They do nothing to improve
query-speed.
Querying a view gets rewritten to queries to the underlying tables on the fly.
(as long as there are no materialized views, which are still on the TODO list)
--
GHUM Harald Massa
persuadere et programmare
What about creating views on areas of the table that are queried often?
I don't know if you have access or the ability to find what type of
trends the table has, in terms of queries, but if you create some views
on frequently visited information, this could also help.
Tom Laudeman wrote:
Hi
On Wed, Aug 09, 2006 at 03:46:38PM +0200, Martijn van Oosterhout wrote:
> On Wed, Aug 09, 2006 at 09:19:31AM -0400, Tom Laudeman wrote:
> > Is there a tuning parameter I can change to increase speed of selects?
> > Clearly, there's already some buffering going on since selecting an
> > indexed ~5
On Wed, Aug 09, 2006 at 09:19:31AM -0400, Tom Laudeman wrote:
> Hi,
>
> I'm running PostgreSQL version 8 on a dual 2.4GHz Xeon with 1GB of RAM
> and an IDE hard drive. My big table has around 9 million records.
>
> Is there a tuning parameter I can change to increase speed of selects?
> Clearly
Hi,
I'm running PostgreSQL version 8 on a dual 2.4GHz Xeon with 1GB of RAM
and an IDE hard drive. My big table has around 9 million records.
Is there a tuning parameter I can change to increase speed of selects?
Clearly, there's already some buffering going on since selecting an
indexed ~50,
Mike Nolan wrote:
Maybe you could return a refcursor pointing to the EXPLAIN ANALYZE of
the query inside the function.
The raw materials exist to do this: if you know which elements of a
query will be replaced by plpgsql variables, you can duplicate the
results via
PREPARE foo(...) AS ...
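Spelled out, that suggestion looks roughly like this (the query, parameter type and value are made up for illustration):

    PREPARE foo(int) AS SELECT * FROM bar WHERE id = $1;
    EXPLAIN ANALYZE EXECUTE foo(42);
    DEALLOCATE foo;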
"Joshua D. Drake" <[EMAIL PROTECTED]> writes:
> Mike Nolan wrote:
>> That part I get, but I cannot seem to get an 'explain select' to return
>> the explain output inside a function.
> Oh interesting. Hmmm. Alvaro can you think of a way to execute the
> result into a variable and return it as
Mike Nolan wrote:
Mike Nolan wrote:
select * from foo('bar','debug')
But how do I do that inside a pl/pgsql function? 'select into' doesn't
seem to work properly.
You would have to code it. For example:
IF $2 = ''debug'' THEN:
That part I get, but I cannot seem to get an 'explain select' to
> Mike Nolan wrote:
> >>select * from foo('bar','debug')
> >
> >
> > But how do I do that inside a pl/pgsql function? 'select into' doesn't
> > seem to work properly.
>
>
> You would have to code it. For example:
>
> IF $2 = ''debug'' THEN:
That part I get, but I cannot seem to get an 'expla
Mike Nolan wrote:
select * from foo('bar','debug')
But how do I do that inside a pl/pgsql function? 'select into' doesn't
seem to work properly.
You would have to code it. For example:
IF $2 = ''debug'' THEN:
I would have to check to be able to include a timestamp at the beginning
of each notice
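Putting the pieces of this thread together, a made-up example of such a debug switch, using the old-style doubled quotes shown above and a timestamp at the start of each notice (timeofday() is used because, unlike now(), it is not frozen for the duration of the transaction):

    CREATE OR REPLACE FUNCTION foo(text, text) RETURNS integer AS '
    BEGIN
        IF $2 = ''debug'' THEN
            RAISE NOTICE ''% entering foo(%)'', timeofday(), $1;
        END IF;
        -- ... the real work of the function goes here ...
        RETURN 1;
    END;
    ' LANGUAGE plpgsql;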