Giuseppe Broccolo wrote:
> I'm not sure about the '%' operator, but I'm sure that the GIST
> index will never be used in the
>
> SELECT * FROM users WHERE lower(name) LIKE '%john%';
>
> query; it is used for left or right anchored search, such as
> 'john%' or '%john'.
It *will* use a *trigram*
Jeff, I'm not seeing that limitation.
On Thu, Dec 18, 2014 at 10:33 AM, Jeff Janes wrote:
>
> On Thu, Dec 18, 2014 at 10:00 AM, Giuseppe Broccolo <
> giuseppe.brocc...@2ndquadrant.it> wrote:
>>
>> I'm not sure about the '%' operator, but I'm sure that the GIST index
>> will never be used in the
>
On Thu, Dec 18, 2014 at 10:00 AM, Giuseppe Broccolo <
giuseppe.brocc...@2ndquadrant.it> wrote:
>
> I'm not sure about the '%' operator, but I'm sure that the GIST index will
> never be used in the
>
> SELECT * FROM users WHERE lower(name) LIKE '%john%';
>
> query; it is used for left or right anc
I'm pretty sure '%John%' uses the index.
explain analyze verbose SELECT name FROM wai_users WHERE lower(name) LIKE
'%john%';
QUERY PLAN
--
I'm not sure about the '%' operator, but I'm sure that the GIST index will
never be used in the
SELECT * FROM users WHERE lower(name) LIKE '%john%';
query; it is used for left or right anchored search, such as 'john%' or
'%john'.
Giuseppe.
--
Giuseppe Broccolo - 2ndQuadrant Italy
PostgreSQL T
I know! I was surprised that % 'John' or % 'JOHN' or even % 'jOhn' all
returned the same result.
Besides readability, would there be any technical difference between a GiST
index on lower(name) and one on name, and between using LIKE and the % operator?
Thanks!
On Thu, Dec 18, 2014 at 9:18 AM, Tom Lane wrote:
>
> Robert DiFa
Robert DiFalco writes:
> So, for my use case I simply need to search for a case insensitive
> substring. It need not be super exact. It seems like there are two ways I
> can do this:
> CREATE INDEX idx_users_name ON users USING GIST(lower(name) gist_trgm_ops);
> SELECT * FROM users WHERE lower(na
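For readers following along, the pattern under discussion can be sketched like
this (a sketch assuming the pg_trgm extension is installed; the users table and
idx_users_name index name are taken from the thread):

```sql
-- Requires the pg_trgm contrib module.
CREATE EXTENSION IF NOT EXISTS pg_trgm;

-- Trigram GiST index on the lower-cased name:
CREATE INDEX idx_users_name ON users USING GIST (lower(name) gist_trgm_ops);

-- Both forms can use the trigram index, including the non-anchored
-- '%john%' pattern (trigram indexes are not limited to anchored LIKE):
EXPLAIN SELECT * FROM users WHERE lower(name) LIKE '%john%';
EXPLAIN SELECT * FROM users WHERE lower(name) % 'john';
```

Note that the % similarity operator matches on shared trigrams rather than
exact substrings, and as far as I know pg_trgm extracts trigrams
case-insensitively by default, which is consistent with the observation above
that % 'John', % 'JOHN', and % 'jOhn' all behave alike.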
On 18 March 2014 22:26, Yu Zhao wrote:
> In PostgreSQL 9.3.3 Documentation 11.8. Partial Indexes Example 11-2
> (http://www.postgresql.org/docs/9.3/interactive/indexes-partial.html),
> the partial index is created
>
> CREATE INDEX orders_unbilled_index ON orders (order_nr) WHERE billed
> is not t
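The documentation example being quoted, completed here as a sketch for context
(this is Example 11-2 from the partial-indexes page; the final query is an
illustrative use of it):

```sql
-- Index only the unbilled orders:
CREATE INDEX orders_unbilled_index ON orders (order_nr)
    WHERE billed IS NOT TRUE;

-- The planner considers this index only when it can prove the query's
-- WHERE clause implies the index predicate, e.g.:
SELECT * FROM orders WHERE billed IS NOT TRUE AND order_nr < 10000;
```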
On 25.04.2013 02:56, Kelphet Xiong wrote:
In all the experiments, the lineitem and partsupp tables reside in memory
because there is no io activities observed from iotop.
Since there is enough network bandwidth (1Gb/s or 128MB/s) between
client and server,
I would like to know what determines the
On Sat, Mar 30, 2013 at 11:00 PM, Kelphet Xiong wrote:
> I guess it is because postgres only uses a single thread to read
> the data or “pushing the data around in RAM” according to Kevin’s statement.
> Then my question is actually why postgres can not use the remaining
> 93.4%CPU.
postgres can u
Thanks a lot for the replies from Kevin, Ken, and Ants Aasma. I really
appreciate your suggestions and comments.
My server configuration is two physical quad-core CPUs with
hyper-threading enabled.
Each CPU is Intel(R) Xeon(R) CPU E5620@2.40GHz. Physical memory is 16GB.
I set shared_buffers as 4GB,
On Mar 28, 2013 9:07 PM, "kelphet xiong" wrote:
> explain analyze select * from inventory;
> QUERY PLAN
>
>
--
>
> Seq S
On Thu, Mar 28, 2013 at 02:03:42PM -0700, Kevin Grittner wrote:
> kelphet xiong wrote:
>
> > When I use postgres and issue a simple sequential scan for a
> > table inventory using query "select * from inventory;", I can see
> > from "top" that postmaster is using 100% CPU, which limits the
> > qu
kelphet xiong wrote:
> When I use postgres and issue a simple sequential scan for a
> table inventory using query "select * from inventory;", I can see
> from "top" that postmaster is using 100% CPU, which limits the
> query execution time. My question is that, why CPU is the
> bottleneck here an
Thanks a lot for your feedback.
It helped me a lot, and I now have a much better overview of some very specific
hints which I wasn't able to find in any documentation.
Cheers Markus
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http
On Thu, Aug 30, 2012 at 11:34 AM, Markus Innerebner
wrote:
> Hi Laurenz,
>
>
> In your approach 1 to 3, what do you mean with "load into main memory"?
>
>
>
> I forgot to say: I use Java and connect with JDBC.
>
> in approach 1 I do an initial loading of the entire relation, by executing 1
> SQL q
On Thu, Aug 30, 2012 at 10:34 AM, Markus Innerebner
wrote:
>
> > To flush the filesystem cache (from Linux 2.6.16 on), use
> > sync; echo 3 > /proc/sys/vm/drop_caches
>
>
> I started to do that , and
> yes, this solves my problem!!
>
> I assume that deleting file system cache implies that also pos
Hi Laurenz,
>
> In your approach 1 to 3, what do you mean with "load into main memory"?
I forgot to say: I use Java and connect with JDBC.
in approach 1 I do an initial loading of the entire relation, by executing 1
SQL query to load all edges in main memory, where I create my main memory
st
Markus Innerebner wrote:
> I am doing some runtime experiments in my implementation, which is
computing multi-modal range queries
> for a query point (if you want to know details check the website:
www.isochrones.inf.unibz.it).
> The network is explored using Dijkstra Shortest Path algorithm that
s
Josh Berkus wrote:
> On 12/5/11 1:36 PM, Kevin Grittner wrote:
>> I understand the impulse to run autovacuum less frequently or
>> less aggressively. When we first started running PostgreSQL the
>> default configuration was very cautious.
>
> The default settings are deliberately cautious, as de
On 12/5/11 1:36 PM, Kevin Grittner wrote:
> I understand the impulse to run autovacuum less frequently or less
> aggressively. When we first started running PostgreSQL the default
> configuration was very cautious.
The default settings are deliberately cautious, as default settings
should be.
Bu
On Mon, Dec 5, 2011 at 11:36 AM, Kevin Grittner
wrote:
> Ernesto Quiñones wrote:
>> vacuum_cost_limit 200
> We've boosted this to 600. Once you're in a "steady state", this is
> the setting you might want to adjust up or down as needed to make
> cleanup aggressive enough without putting a notic
Ernesto Quiñones wrote:
I understand the impulse to run autovacuum less frequently or less
aggressively. When we first started running PostgreSQL the default
configuration was very cautious. A lot of bloat would accumulate
before it kicked in, at which point there was a noticeable
performance h
no problem Scott, thanks for your appreciations
2011/12/5 Scott Marlowe :
> On Mon, Dec 5, 2011 at 10:42 AM, Scott Marlowe
> wrote:
>> On Mon, Dec 5, 2011 at 10:19 AM, Ernesto Quiñones wrote:
>>> vacuum_cost_delay 1s
>>> vacuum_cost_limit 200
>>
>> Those are insane settings for va
On Mon, Dec 5, 2011 at 10:42 AM, Scott Marlowe wrote:
> On Mon, Dec 5, 2011 at 10:19 AM, Ernesto Quiñones wrote:
>> vacuum_cost_delay 1s
>> vacuum_cost_limit 200
>
> Those are insane settings for vacuum costing, even on a very slow
> machine. Basically you're starving vacuum and auto
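To make the point concrete, here is an illustrative postgresql.conf fragment
contrasting the reported settings with a more conventional shape (values are
examples, not recommendations for any particular machine):

```ini
# Reported in the thread -- a 1-second cost delay effectively starves vacuum:
#vacuum_cost_delay = 1s
#vacuum_cost_limit = 200

# More conventional: delay measured in milliseconds, and a higher limit
# (Kevin mentions boosting it to 600) if cleanup isn't aggressive enough:
vacuum_cost_delay = 10ms
vacuum_cost_limit = 600
```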
On Mon, Dec 5, 2011 at 10:19 AM, Ernesto Quiñones wrote:
> Hi Kevin, comments after your comments
>
> 2011/12/3 Kevin Grittner :
>> Ernesto Quiñones wrote:
>>> Scott Marlowe wrote:
Ernesto Quiñones wrote:
>>
> I want to know if it's possible to predict (calculate), how long
> a VACU
Hi Kevin, comments after your comments
2011/12/3 Kevin Grittner :
> Ernesto Quiñones wrote:
>> Scott Marlowe wrote:
>>> Ernesto Quiñones wrote:
>
I want to know if it's possible to predict (calculate), how long
a VACUUM FULL process will consume in a table?
>
> I don't think you said w
On Sat, Dec 3, 2011 at 6:11 AM, Ernesto Quiñones wrote:
> Thanks for the answer Scott, actually my autovacuum_naptime is 1h ..
> but I don't find naptime parameter for a manual vacuum
That's really high, but what I meant to ask was what your
vacuum_cost_delay was set to. Also vacuum_cost_limit.
Ernesto Quiñones wrote:
> Scott Marlowe wrote:
>> Ernesto Quiñones wrote:
>>> I want to know if it's possible to predict (calculate), how long
>>> a VACUUM FULL process will consume in a table?
I don't think you said what version of PostgreSQL you're using.
VACUUM FULL prior to version 9.0 i
Thanks for the answer Scott; actually my autovacuum_naptime is 1h,
but I can't find a naptime parameter for a manual vacuum
thanks again
2011/12/2 Scott Marlowe :
> On Fri, Dec 2, 2011 at 8:32 PM, Ernesto Quiñones wrote:
>> Hi friends
>>
>> I want to know if it's possible to predict (calculate)
On Fri, Dec 2, 2011 at 8:32 PM, Ernesto Quiñones wrote:
> Hi friends
>
> I want to know if it's possible to predict (calculate), how long a
> VACUUM FULL process will consume in a table?
>
> can I apply some formula to calculate this?
If you look at what iostat is doing while the vacuum full is r
On 05/10/2011 01:28 PM, Tory M Blue wrote:
AMD Opteron(tm) Processor 4174 HE vs Intel(R) Xeon(R) CPU E5345 @ 2.33GHz
I'm wondering if there is a performance difference running postgres on
fedora on AMD vs Intel (the 2 listed above).
I have an 8-way Intel Xeon box and a 12-way AMD box and
On Jan 4, 2011, at 8:48 AM, Merlin Moncure wrote:
> On Mon, Jan 3, 2011 at 9:13 PM, Greg Smith wrote:
>> Strange, John W wrote:
>>>
>>> Has anyone had a chance to recompile and try a larger blocksize
>>> than 8192 with pSQL 8.4.x?
>>
>> While I haven't done the actual experiment you're
: Ben Chobot; Merlin Moncure
Cc: pgsql-performance@postgresql.org Performance
Subject: Re: [PERFORM] Question: BlockSize > 8192 with FusionIO
This has gotten a lot better with the 2.x drivers as well.
I'm completely aware of the FusionIO and its advantages/disadvantages.. I'm
M
To: Merlin Moncure
Cc: pgsql-performance@postgresql.org Performance
Subject: Re: [PERFORM] Question: BlockSize > 8192 with FusionIO
On Jan 4, 2011, at 8:48 AM, Merlin Moncure wrote:
>
> most flash drives, especially mlc flash, use huge blocks anyways on
> physical level.
On Jan 4, 2011, at 8:48 AM, Merlin Moncure wrote:
>
> most flash drives, especially mlc flash, use huge blocks anyways on
> physical level. the numbers claimed here
> (http://www.fusionio.com/products/iodrive/) (141k write iops) are
> simply not believable without write buffering. i didn't se
On Mon, Jan 3, 2011 at 9:13 PM, Greg Smith wrote:
> Strange, John W wrote:
>>
>> Has anyone had a chance to recompile and try a larger blocksize
>> than 8192 with pSQL 8.4.x?
>
> While I haven't done the actual experiment you're asking about, the problem
> working against you here is how WA
Strange, John W wrote:
Has anyone had a chance to recompile and try a larger blocksize than
8192 with pSQL 8.4.x?
While I haven't done the actual experiment you're asking about, the
problem working against you here is how WAL data is used to protect
against partial database writes. S
On Tue, Nov 30, 2010 at 3:23 PM, T.H. wrote:
> Just looking into it now, thanks for the suggestion. Is there a reason that
> EXISTS is generally faster than IN for this sort of query?
>
> -Tristan
Exists will return immediately upon finding a match -- assuming there is one.
On 11/30/10 5:54 PM, Kevin Grittner wrote:
"T.H." wrote:
Also, are there any better ways you can think of doing such an IN
query, using non-subselect means that might be more efficient?
Have you tried the EXISTS predicate?
-Kevin
Just looking into it now, thanks for the suggestion. Is th
"T.H." wrote:
> Also, are there any better ways you can think of doing such an IN
> query, using non-subselect means that might be more efficient?
Have you tried the EXISTS predicate?
-Kevin
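A sketch of the rewrite being suggested (table and column names here are
illustrative, not from the thread):

```sql
-- IN form: the subselect's full result may be computed first.
SELECT o.* FROM orders o
WHERE o.customer_id IN (SELECT c.id FROM customers c WHERE c.active);

-- EXISTS form: can return as soon as one matching row is found.
SELECT o.* FROM orders o
WHERE EXISTS (
    SELECT 1 FROM customers c
    WHERE c.id = o.customer_id AND c.active
);
```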
>Kaloyan Iliev Iliev wrote:
> Kevin Grittner wrote:
>> Out of curiosity, what happens if you consistently use JOIN
>> clauses, rather than mixing that with commas?:
> The plan improves. So can you explain why?
Commas in a FROM clause bind more loosely than JOIN clauses,
rearrangement from on
Kaloyan Iliev Iliev wrote:
> The 8.2.15 plan was on an empty database.
> On a full database the plan was almost the same. So the question
> is could I speed up the plan?
Since this is an entirely new query which doesn't include a LEFT
JOIN, it's not good to just tack it onto the other thread.
Sorry for the spam.
The 8.2.15 plan was on an empty database.
On a full database the plan was almost the same. So the question is
could I speed up the plan?
Why the "Hash Cond: (dp.person1_id = p.id)" isn't used for index scan on
that table?
Best regards,
Kaloyan Iliev
Here is the plan on a fu
Hello again,
I have another query whose performance drops drastically after the PG upgrade.
I cannot improve the plan no matter how hard I try. I tried creating new
indexes and rewriting the query with JOIN .. ON instead of commas, but
nothing changed.
I will appreciate any suggestions.
Best regards,
K
Hi,
The plan improves. So can you explain why?
Thanks in advance.
Kaloyan
QUERY PLAN
---
Kaloyan Iliev Iliev wrote:
> I think they should be accessed only if there are rows from the
> WHERE. Why does the left join execute first?
Out of curiosity, what happens if you consistently use JOIN clauses,
rather than mixing that with commas?:
explain analyze
SELECT
DD.debtid,
ADD.amount
Kaloyan Iliev Iliev writes:
> I have a query which behaves strangely (according to me).
> According to the first plan, PG makes absolutely unnecessary seq scans on
> tables "invoices" and "domeini", etc.
I think you might get better results if you could get this rowcount
estimate a bit more in l
"Benjamin Krajmalnik" writes:
> That is what I thought.
> The trigger calls a 3000 row stored procedure which does all of the
> calculations to aggregate data into 3 separate tables and then insert the raw
> data point into a 4th table.
Youch. Seems like you might want to rethink the idea of d
day, July 15, 2010 4:47 PM
> To: Benjamin Krajmalnik; pgsql-performance@postgresql.org
> Subject: Re: [PERFORM] Question of using COPY on a table with triggers
>
> > Essentially, we insert a set of columns into a table, and each row
> fires
> > a trigger function which ca
Essentially, we insert a set of columns into a table, and each row fires
a trigger function which calls a very large stored procedure
For inserting lots of rows, COPY is much faster than INSERT because it
parses data (a lot) faster and is more "data-stream-friendly". However the
actual inse
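A minimal sketch of the COPY form (the file path and measurements table are
illustrative assumptions):

```sql
-- Bulk-load with COPY instead of row-by-row INSERT. Parsing overhead is
-- much lower, but per-row triggers still fire once per row either way.
COPY measurements (ts, value)
    FROM '/tmp/measurements.csv' WITH CSV;
```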
partition tables are handled and how the order by
/ limit is applied in this scenario.
Thanks,
Ranga
> Date: Tue, 6 Jul 2010 16:26:23 -0400
> From: sfr...@snowman.net
> To: ranga_gopa...@hotmail.com
> CC: pgsql-performance@postgresql.org
> Subject: Re: [PERFORM] Question about p
Ranga,
* Ranga Gopalan (ranga_gopa...@hotmail.com) wrote:
> It seems that this is an issue faced by others as well - Please see this
> link:
> http://stackoverflow.com/questions/2236776/efficient-querying-of-multi-partition-postgres-table
>
> Is this a known bug? Is this something that someone
On Tue, Jul 6, 2010 at 12:30 PM, Ranga Gopalan
wrote:
> It seems that this is an issue faced by others as well - Please see this
> link:
> http://stackoverflow.com/questions/2236776/efficient-querying-of-multi-partition-postgres-table
>
> Is this a known bug? Is this something that someone is work
Hi,
It seems that this is an issue faced by others as well - Please see this link:
http://stackoverflow.com/questions/2236776/efficient-querying-of-multi-partition-postgres-table
Is this a known bug? Is this something that someone is working on or is there a
known work around?
Thanks,
Ranga
In postgresql.conf, what are your settings for constraint_exclusion?
There are 3 settings - on, off, or partition.
Mine are set as follows:
constraint_exclusion = on    # on, off, or partition
Under 8.4.4 I had it set to partition, but the behavior was not what I
expected so I set
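For reference, the three settings as they would appear in postgresql.conf
(since 8.4 the default is 'partition'):

```ini
constraint_exclusion = partition  # examine constraints only for partitioned/inheritance queries (default)
#constraint_exclusion = on        # examine constraints for all queries
#constraint_exclusion = off       # never use constraints to skip tables
```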
David Kerr writes:
> Fortunately the network throughput issue is not mine to solve.
> Would it be fair to say that with the pgbench output i've given so far
> that if all my users clicked "go" at the same time (i.e., worst case
> scenario), i could expect (from the database) about 8 second respo
Tom Lane wrote:
Simon Riggs writes:
On Fri, 2009-04-03 at 16:34 -0700, David Kerr wrote:
400 concurrent users doesn't mean that they're pulling 1.5 megs /
second every second.
There's a world of difference between 400 connected and 400 concurrent
users. You've been testing 400 concurrent us
Simon Riggs writes:
> On Fri, 2009-04-03 at 16:34 -0700, David Kerr wrote:
>> 400 concurrent users doesn't mean that they're pulling 1.5 megs /
>> second every second.
> There's a world of difference between 400 connected and 400 concurrent
> users. You've been testing 400 concurrent users, yet w
On Fri, 2009-04-03 at 16:34 -0700, David Kerr wrote:
> 400 concurrent users doesn't mean that they're pulling 1.5 megs /
> second every second. Just that they could potentially pull 1.5 megs at
> any one second. most likely there is a 6 (minimum) to 45 second
> (average) gap between each individu
On Fri, Apr 03, 2009 at 10:35:58PM -0400, Greg Smith wrote:
- On Fri, 3 Apr 2009, Tom Lane wrote:
-
- and a bunch of postmaster ones, with "-c" (or by hitting "c" while top is
- running) you can even see what they're all doing. If the pgbench process
- is consuming close to 100% of a CPU's time
On Fri, 3 Apr 2009, Tom Lane wrote:
However, I don't think anyone else has been pgbench'ing transactions
where client-side libpq has to absorb (and then discard) a megabyte of
data per xact. I wouldn't be surprised that that eats enough CPU to
make it an issue. David, did you pay any attention
Gah - sorry, setting up pgbouncer for my Plan B.
I meant -pgbench-
Dave Kerr
On Fri, Apr 03, 2009 at 04:34:58PM -0700, David Kerr wrote:
- On Fri, Apr 03, 2009 at 06:52:26PM -0400, Tom Lane wrote:
- - Greg Smith writes:
- - > pgbench is extremely bad at simulating large numbers of clients. Th
On Fri, Apr 03, 2009 at 06:52:26PM -0400, Tom Lane wrote:
- Greg Smith writes:
- > pgbench is extremely bad at simulating large numbers of clients. The
- > pgbench client operates as a single thread that handles both parsing the
- > input files, sending things to clients, and processing their r
Greg Smith writes:
> pgbench is extremely bad at simulating large numbers of clients. The
> pgbench client operates as a single thread that handles both parsing the
> input files, sending things to clients, and processing their responses.
> It's very easy to end up in a situation where that bo
David Kerr writes:
> On Fri, Apr 03, 2009 at 04:43:29PM -0400, Tom Lane wrote:
> - How much more "real" is the target hardware than what you have?
> - You appear to need about a factor of 10 better disk throughput than
> - you have, and that's not going to be too cheap.
> The hardware i'm using i
On Fri, 3 Apr 2009, David Kerr wrote:
Here is my transaction file:
\setrandom iid 1 5
BEGIN;
SELECT content FROM test WHERE item_id = :iid;
END;
Wrapping a SELECT in a BEGIN/END block is unnecessary, and it will
significantly slow down things for two reasons: the transaction overhead
an
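Applying that advice, the transaction file shrinks to the following (a single
statement is already its own transaction, so the explicit wrapper adds nothing):

```sql
-- trans.sql, without the redundant BEGIN/END wrapper:
\setrandom iid 1 5
SELECT content FROM test WHERE item_id = :iid;
```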
On Fri, Apr 3, 2009 at 1:53 PM, David Kerr wrote:
> Here is my transaction file:
> \setrandom iid 1 5
> BEGIN;
> SELECT content FROM test WHERE item_id = :iid;
> END;
>
> and then i executed:
> pgbench -c 400 -t 50 -f trans.sql -l
>
> The results actually have surprised me, the database isn't
On Fri, Apr 03, 2009 at 04:43:29PM -0400, Tom Lane wrote:
- > I'm not really sure how to evaulate the tps, I've read in this forum that
- > some folks are getting 2k tps so this wouldn't appear to be good to me.
-
- Well, you're running a custom transaction definition so comparing your
- number to
David Kerr writes:
> The results actually have surprised me, the database isn't really tuned
> and I'm not working on great hardware. But still I'm getting:
> scaling factor: 1
> number of clients: 400
> number of transactions per client: 50
> number of transactions actually processed: 2/2
Many thanks for your answer. I did see a comment about this in the
documentation on the link I posted below.
My main question remains though: Is it necessary to cluster after a restore?
Thanks again!
On Thu, Jan 22, 2009 at 2:58 PM, Kenneth Marshall wrote:
> On Thu, Jan 22, 2009 at 02:52:12PM
On Thu, Jan 22, 2009 at 02:52:12PM -0500, Harold A. Giménez Ch. wrote:
> Hi list,
>
> Clustering my indexes dramatically improves the query performance of many of
> my queries. Also, the actual clustering takes a very long time for big
> databases, roughly 20 hours. I have two questions about how
Hi guys !!!
Sorry for the wrong spelling. =)
I could see that PostgreSQL will support my application, but I have to
configure my server well.
Thanks for the answers; now I will look for information about PostgreSQL on
OpenSolaris 2008.05
Have a nice week,
Levi
2008/7/4 Chris Browne <[E
[EMAIL PROTECTED] ("Leví Teodoro da Silva") writes:
> Hi guys, How are you ?
> I am from Brazil and i work for a little company and it company is working is
> medium-big project and we want to use PostGree like the DataBase
> system, but i got some questions.
> I want to know if the PostGree has l
I want to know if the PostGree has limitations about the concurrent
access,
because a lot of people will access this database at the same time.
PostgreSQL has excellent concurrency provided you use it correctly.
But what do you mean by concurrent access?
* Number o
On Wed, Jul 2, 2008 at 12:31 PM, Leví Teodoro da Silva
<[EMAIL PROTECTED]> wrote:
> Hi guys, How are you ?
>
>
> I am from Brazil and I work for a little company, and the company is working
> on a medium-big project, and we want to use PostGree as the database system,
> but I got some questions.
> I w
On Wed, 2008-07-02 at 15:31 -0300, Leví Teodoro da Silva wrote:
> we want to use PostGree like the DataBase system,
> but i got some questions.
First of all: Please learn the correct spelling: It is PostgreSQL, or
Postgres.
> I want to know if the PostGree has limitations about the concurrent
> a
An index scan looks through the index and pulls in each page as it sees it.
A bitmap index scan looks through the index and makes a sorted list of all
the pages it needs, and then the bitmap heap scan reads all the pages.
If your data is scattered then you may as well do the index scan, but
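One way to see both plan shapes on the same query, sketched with illustrative
names (enable_bitmapscan is a planner toggle meant for experimentation, not
production use):

```sql
SET enable_bitmapscan = off;
EXPLAIN SELECT * FROM big_table WHERE k BETWEEN 1 AND 10000;  -- plain Index Scan

SET enable_bitmapscan = on;
EXPLAIN SELECT * FROM big_table WHERE k BETWEEN 1 AND 10000;  -- may choose Bitmap Heap Scan
```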
On Thu, Apr 24, 2008 at 12:56 PM, PFC <[EMAIL PROTECTED]> wrote:
>
> Our ~600,000,000
>> row table is changed very infrequently and is on a 12 disk software raid-6
>> for historical reasons using an LSI Logic / Symbios Logic SAS1068 PCI-X
>> Fusion-MPT SAS Our ~50,000,000 row staging table is o
Our ~600,000,000 row table is changed very infrequently and is on a 12 disk
software raid-6 for historical reasons using an LSI Logic / Symbios Logic
SAS1068 PCI-X Fusion-MPT SAS. Our ~50,000,000 row staging table is on a
12 disk hardware raid-10 using a Dell PowerEdge Expandable RAID con
On Thu, 24 Apr 2008, Nikolas Everett wrote:
The setup is kind of a beast.
No kidding.
When I run dstat I see only around 2M/sec and it is not consistent at all.
Well, it is having to seek over the disc a little. Firstly, your table may
not be wonderfully ordered for index scans, but goodne
In response to "Mark Steben" <[EMAIL PROTECTED]>:
> Bill,
> Thanks for your quick response.
> We are at version 8.2.5 - just recently upgraded from 7.4.5.
> This strategy using truncate was just implemented yesterday.
> Now I will revisit the vacuum full strategy. Does seem to
> be redundant.
> Is
-
From: Bill Moran [mailto:[EMAIL PROTECTED]
Sent: Wednesday, March 19, 2008 9:35 AM
To: Mark Steben
Cc: 'Chris'; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] question on TRUNCATE vs VACUUM FULL
In response to "Mark Steben" <[EMAIL PROTECTED]>:
>
> I kn
Mark Steben wrote:
> My confusion lies in the fact that we empty table C after
> function D finishes. There aren't any current data or records
> to touch on the table. The MVCC leftovers are all purely dead
> rows that should be deleted.
Not if there are open transactions that might want to l
eport.
>
> I've attached my original memo to the bottom.
>
>
>
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of Chris
> Sent: Tuesday, March 18, 2008 9:11 PM
> To: Mark Steben
> Cc: pgsql-performance@postgresql.org
> Subject: R
erformance@postgresql.org
Subject: Re: [PERFORM] question on TRUNCATE vs VACUUM FULL
>
> So my question is this: Shouldn't VACUUM FULL clean Table C and reclaim
> all its space?
You've got concepts mixed up.
TRUNCATE deletes all of the data from a particular table (and
So my question is this: Shouldn’t VACUUM FULL clean Table C and reclaim
all its space?
You've got concepts mixed up.
TRUNCATE deletes all of the data from a particular table (and works in
all dbms's).
http://www.postgresql.org/docs/8.3/interactive/sql-truncate.html
VACUUM FULL is a p
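The distinction, sketched on a throwaway table c:

```sql
TRUNCATE c;      -- removes every row and reclaims the space immediately

DELETE FROM c;   -- removes rows but leaves dead tuples behind
VACUUM c;        -- marks that dead space reusable within the table
VACUUM FULL c;   -- compacts the table itself (slow, takes an exclusive lock)
```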
On 21-Feb-08, at 12:13 AM, bh yuan wrote:
Hi
I am using Postgres 8.3 on an 8G memory, Xeon X5355 Quad Core x 2
processor RH5 machine with 10G data (with some tables which have
about 2,000,000~5,000,000 rows).
I have two questions.
1. how to set the shared_buffers and other postgresql.conf parame
On Wed, Feb 20, 2008 at 11:13 PM, bh yuan <[EMAIL PROTECTED]> wrote:
> Hi
>
> I am using Postgres 8.3 on an 8G memory, Xeon X5355 Quad Core x 2
> processor RH5 machine with 10G data (with some tables which have
> about 2,000,000~5,000,000 rows)
>
> I have two questions.
> 1. how to set the share
On Mon, Feb 11, 2008 at 03:33:37PM -0600, Scott Marlowe wrote:
> On Feb 11, 2008 2:03 PM, salman <[EMAIL PROTECTED]> wrote:
> > I'm planning to cluster a few large tables in our database but I'm
> > unable to find any recommendations/documentation on best practices --
> > Mainly, whether it's bette
On Feb 11, 2008 2:03 PM, salman <[EMAIL PROTECTED]> wrote:
> Hello,
>
> I'm planning to cluster a few large tables in our database but I'm
> unable to find any recommendations/documentation on best practices --
> Mainly, whether it's better to use an index which has a higher idx_scan
> value, a hig
Jason Lustig wrote:
I have some questions about the performance of certain types of SQL
statements.
What sort of speed increase is there usually with binding parameters
(and thus preparing statements) v. straight sql with interpolated
variables? Will Postgresql realize that the following quer
On Mon, Jun 04, 2007 at 11:18:30PM -0400, Jason Lustig wrote:
> I have some questions about the performance of certain types of SQL
> statements.
>
> What sort of speed increase is there usually with binding parameters
> (and thus preparing statements) v. straight sql with interpolated
> var
What sort of speed increase is there usually with binding parameters
(and thus preparing statements) v. straight sql with interpolated
variables? Will Postgresql realize that the following queries are
effectively the same (and thus re-use the query plan) or will it think
they are differen
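Server-side preparation can also be done explicitly; a sketch with an assumed
users(id) table:

```sql
PREPARE user_by_id (int) AS
    SELECT * FROM users WHERE id = $1;

EXECUTE user_by_id(1);   -- plan built once...
EXECUTE user_by_id(2);   -- ...reused with a different parameter

DEALLOCATE user_by_id;
```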
Ron <[EMAIL PROTECTED]> writes:
> One of the reasons for the wide variance in suggested values for pg
> memory use is that pg 7.x and pg 8.x are =very= different beasts.
> If you break the advice into pg 7.x and pg 8.x categories, you find
> that there is far less variation in the suggestions.
>
On Friday 13 April 2007 14:53:53 Carlos Moreno wrote:
> How does PG take advantage of the available memory? I mean, if I have a
> machine with, say, 4 or 8GB of memory, how would those GBs end
> up being used? They just do?? (I mean, I would find that a valid
> answer;
On linux the files
Steve wrote:
Common wisdom in the past has been that values above a couple of hundred
MB will degrade performance.
The annotated config file talks about setting shared_buffers to a third
of the
available memory --- well, it says "it should be no more than 1/3 of the
total
amount of memory
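As a concrete illustration of that rule of thumb applied to the 32 GB machine
from the thread (values are illustrative, not tuning advice):

```ini
# "No more than 1/3 of total memory": on a 32GB box that caps
# shared_buffers well below the 16GB that was configured.
shared_buffers = 8GB
effective_cache_size = 24GB   # illustrative; tells the planner about OS cache
```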
At 12:38 PM 4/13/2007, Steve wrote:
Really?
Wow!
Common wisdom in the past has been that values above a couple of hundred
MB will degrade performance. Have you done any benchmarks on 8.2.x that
show that you get an improvement from this, or did you just take the
"too much of a good thing is wo
Really?
Wow!
Common wisdom in the past has been that values above a couple of hundred
MB will degrade performance. Have you done any benchmarks on 8.2.x that
show that you get an improvement from this, or did you just take the
"too much of a good thing is wonderful" approach?
Not to be rude
On Tue, 2007-04-10 at 15:28 -0400, Steve wrote:
>
> I'm trying to tune the memory usage of a new machine that has a -lot- of
> memory in it (32 gigs).
...
>
> shared_buffers = 16GB
Really?
Wow!
Common wisdom in the past has been that values above a couple of hundred
MB will degrade performan