Hi,
I have a colleague who is convinced that the website is faster if
enable_seqscan is turned OFF.
I'm convinced of the opposite (better to leave it ON), but I would like to
prove it to him.
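One way to make the comparison concrete is to time the same query once with each
setting in a single session; enable_seqscan can be changed per session, so nothing
has to be touched in postgresql.conf. A minimal sketch, with a made-up table and column:

-- both runs in the same session, so caching effects are comparable
EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42;

SET enable_seqscan = off;   -- affects only this session
EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42;

RESET enable_seqscan;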
Now the first query we tried would do a bitmap heap scan instead of a
seqscan when the latter w
@Pierre: I know, but first I'd have to find such a query from real life.
Also, I'm convinced that this query would be faster with a seqscan if
the data weren't cached.
@Arjen: thanks, that helps.
But that's only the OS cache. There's also shared_buffers, which is a
Postgres-specific thi
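For what it's worth, the contents of shared_buffers can be inspected with the
pg_buffercache contrib module; a sketch, assuming the module is installed:

-- relations with the most pages currently sitting in shared_buffers
SELECT c.relname, count(*) AS buffers
FROM pg_buffercache b
JOIN pg_class c ON b.relfilenode = c.relfilenode
WHERE b.reldatabase = (SELECT oid FROM pg_database WHERE datname = current_database())
GROUP BY c.relname
ORDER BY buffers DESC
LIMIT 10;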
t daily.
df -h on the data dir gives me 143G
We're growing.
There are many queries that should be optimized.
The seqscan option is in the connection string, not in postgresql.conf.
Cheers,
On Fri, Aug 27, 2010 at 7:57 PM, Scott Marlowe wrote:
> On Thu, Aug 26, 2010 at 4:32 AM, Willy-
> I found one query that did a seqscan anyway (with enable_seqscan off),
> because doing an index scan would be more than 1M points more
> expensive (to the planner).
Hmm, I guess that says it all :)
Hi,
I have a query that runs for about 16 hours; it should run at least weekly.
There are also clients connecting via a website, and we don't want to keep them
waiting because of long DSS queries.
We use Debian Lenny.
I've noticed that renicing the process really lowers the load (in "top"),
though i
On Tue, Jan 19, 2010 at 3:09 PM, Jean-David Beyer wrote:
> Willy-Bas Loos wrote:
>
>
>>
>> On Tue, Jan 19, 2010 at 2:28 PM, Jean-David Beyer
>> <jeandav...@verizon.net> wrote:
>>
>>It could make sense.
>>
>>I once had
Hi,
I'm using PostgreSQL 8.4 (and also 8.3).
A partial index like this:
CREATE INDEX table2_field1_idx
ON table2 (field1)
WHERE NOT field1 ISNULL;
will not be used when selecting one record from 100K records:
explain select * from table2 where field1 = 256988
'Seq Scan on table2 (cost=0.00..16
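For what it's worth, the equality test can only be true for non-NULL values, so
in principle the partial index covers this query; whether the planner proves that
and whether it considers the index worth using is another matter. One way to check
is to disable seqscans for the session and look at the plan again (sketch):

SET enable_seqscan = off;
EXPLAIN SELECT * FROM table2 WHERE field1 = 256988;
RESET enable_seqscan;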
Hi,
Our database has gotten rather large and we are running out of disk space.
Our disks are 15K rpm SAS disks in RAID 10.
We are going to rent some space on a FibreChannel SAN.
That gives us the opportunity to separate the data and the indexes.
Now i thought it would be best to move the indexes
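For reference, moving indexes to a separate volume is done with tablespaces;
a sketch with made-up paths and names:

-- create a tablespace on the SAN mount point (path is made up)
CREATE TABLESPACE san_idx LOCATION '/mnt/san/pg_indexes';

-- move an existing index there
ALTER INDEX some_big_idx SET TABLESPACE san_idx;

-- or place new indexes there directly
CREATE INDEX orders_customer_idx ON orders (customer_id) TABLESPACE san_idx;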
Are you saying that, generally speaking, moving the data would be better
unless the SAN performs worse than the disks?
Besides your point that it depends on what our end looks like, I mean.
(And what do you mean by "the DAS way"? Sorry, I'm not a native speaker.)
cheers,
wbl
On Wed, May 4, 2011 at 6:43 AM,
read sequentially and then, when
retrieving the data, there is a lot of random I/O.
If it's a long story, any tips for more info about this (a book or website)?
cheers,
wbl
On Wed, May 4, 2011 at 7:33 AM, Jesper Krogh wrote:
> On 2011-05-04 07:25, Willy-Bas Loos wrote:
>
>> are you
Hi,
We have some indexes that don't seem to be used at all.
I'd like to know since when they have not been used.
That is, the time when Postgres started counting to reach the number that is
in pg_stat_user_indexes.idx_scan.
Is there a way to retrieve that from the database?
Cheers,
WBL
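A sketch of the kind of query involved; note that a recorded reset time for these
counters (pg_stat_database.stats_reset) only exists from PostgreSQL 9.1 on. Before
that, the counters date from the last manual pg_stat_reset() or from whenever the
cluster's statistics file was last removed:

-- indexes that have not been scanned since the counters were last reset
SELECT schemaname, relname, indexrelname, idx_scan
FROM pg_stat_user_indexes
WHERE idx_scan = 0
ORDER BY schemaname, relname, indexrelname;

-- 9.1+: when the per-database counters were last reset
SELECT datname, stats_reset
FROM pg_stat_database
WHERE datname = current_database();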
Then, are the index scans counted in a memory variable and written at
analyze time?
On Thu, May 12, 2011 at 8:22 PM, raghu ram wrote:
>
> "Analyze" activity will update the statistics of each catalog table.
> --Raghu Ram
>
>
On Mon, May 16, 2011 at 10:19 AM, Robert Klemme
wrote:
> On Fri, May 13, 2011 at 9:04 PM, Robert Haas
> wrote:
> Separating index and tables might not be a totally good idea
> generally. Richard Foote has an excellent article about Oracle but I
> assume at least a few things do apply to PostgreS
Hi,
Is there a way to change the sample size for the statistics that ANALYZE
gathers?
It is said to be 10%. I would like to raise that, because we are getting bad
estimates for n_distinct.
Cheers,
WBL
On Fri, Jun 10, 2011 at 9:58 PM, Josh Berkus wrote:
> It's not 10%. We use a fixed sample size, which is configurable on the
> system, table, or column basis.
>
It seems that you are referring to "alter column set statistics" and
"default_statistics_target", which are the number of percentiles
On Thu, Aug 4, 2011 at 6:56 AM, Adarsh Sharma wrote:
> After this I changed my pg_xlog directory to a separate directory (other than
> the data directory) by symlinking.
>(...)
> Please let me know if I missing any other important configuration.
Moving the pg_xlog to a different directory only helps whe
Hi,
When benchmarking inserts, can there be caching effects?
I insert, delete again, and insert again.
Does anything cache the things that I deleted?
(Postgres 8.4 on Debian)
cheers,
WBL
--
"Patriotism is the conviction that your country is superior to all others
because you were born in it."
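One way to take the two main effects in such an insert/delete/insert benchmark
(dead tuples left behind by the DELETE, and pages already sitting in cache) out
of the comparison is to vacuum between runs, or to use TRUNCATE instead of
DELETE; a sketch with a made-up table:

-- made-up benchmark table
CREATE TABLE bench (id bigint PRIMARY KEY, payload text);

INSERT INTO bench SELECT g, 'x' FROM generate_series(1, 1000000) g;   -- run 1
DELETE FROM bench;
VACUUM bench;   -- make the dead row versions reusable before timing the next run
INSERT INTO bench SELECT g, 'x' FROM generate_series(1, 1000000) g;   -- run 2
-- (TRUNCATE bench; would instead give run 2 a completely fresh table)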
Hi,
I've read this:
http://wiki.postgresql.org/wiki/Prioritizing_databases_by_separating_into_multiple_clusters
But it doesn't really say anything about memory.
If I can fit an extra cluster into its shared_buffers, it should have fast
reads, right?
Even if I don't have separate spindles and the
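A quick way to see whether a cluster could fit is to compare its total size with
the shared_buffers setting; a sketch:

SHOW shared_buffers;

SELECT pg_size_pretty(sum(pg_database_size(datname))::bigint) AS total_cluster_size
FROM pg_database;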
On Wed, Jun 27, 2012 at 9:34 AM, Hannu Krosing wrote:
> Check if you are CPU-bound. On a database which fits fully you may
> already be.
>
> Being CPU-bound is my goal.
That makes your answer a yes to me.
Only I'm afraid that this solution is not optimal,
because I am stealing more resources fro
On Wed, Jun 27, 2012 at 12:01 PM, Willy-Bas Loos wrote:
> I cannot follow that reasoning completely. Who needs OS level file cache
> when postgres' shared_buffers is better? The efficiency should go up again
> after passing 50% of shared buffers, where you would be caching ever
On Wed, Jun 27, 2012 at 1:28 PM, Willy-Bas Loos wrote:
>
> * need fast writes on one cluster, so steal some memory to fit the DB in
> shared_buffers
>
> correction: READs, not writes. sry.
--
"Quality comes from focus and clarity of purpose" -- Mark Shuttleworth
Thank you.
Cheers,
WBL
On 27 Jun 2012 14:59, "Ants Aasma" wrote the following:
> On Jun 27, 2012 2:29 PM, "Willy-Bas Loos" wrote:
> > Should I use a larger shared_buffers for the other cluster(s) too, so
> that I bypass the inefficient OS file cache?
>
>
Hi,
I want to force defaults, and I wonder about the performance.
The trigger I use (below) makes the query (also below) take 45% more time.
The result is the same now, but I do have a use for the trigger (see
"background info").
Isn't there a more efficient way to force the defaults (in the
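(The trigger itself is cut off here; for context, a trigger that forces a default
usually looks something like the sketch below, with made-up table and column names.
It fires for each row, which is why it costs noticeably more than a plain column
DEFAULT, but unlike a DEFAULT it also overrides explicitly supplied NULLs.)

CREATE OR REPLACE FUNCTION force_created_at() RETURNS trigger AS $$
BEGIN
    IF NEW.created_at IS NULL THEN
        NEW.created_at := now();
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER force_created_at_trg
    BEFORE INSERT ON mytable
    FOR EACH ROW EXECUTE PROCEDURE force_created_at();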
teger);
create table d2 (id bigint primary key, gid bigint);
--insert into g2 (...)
--insert into d2 (...)
create index g_blok on g2(blok);
create index d_gid on d2(gid);
alter table d2 add constraint d_g_fk foreign key (gid) references g2 (gid);
analyze d2;
analyze g2;
Any advice?
Cheers,
Willy-
ws=121 loops=1)
> Index Cond: (k = 1942)
> Total runtime: 481.600 ms
> Here's the DDL:
> create table g2 (gid bigint primary key, k integer);
> create table d2 (id bigint primary key, gid bigint);
> --insert into g2 (...)
> --insert into d2 (...)
> c
g.geo_id)
-> Bitmap Index Scan on bmp_data_geo_idx (cost=0.00..5.82
rows=179 width=0) (actual time=1.206..1.206 rows=0 loops=121)
Index Cond: (geo_id = g.geo_id)
Total runtime: 208.850 ms
On Wed, Jun 26, 2013 at 9:08 PM, Igor Neyman wrote:
>
>
> From: Willy-B
On Wed, Jun 26, 2013 at 9:30 PM, Igor Neyman wrote:
>
> How much RAM you have on this machine?
>
16 GB
> What else is this machine being used for (besides being a db server)?
>
It's my laptop by now, but I was working on a server before that. The
laptop gives me some liberties to play around.
I
On Wed, Jun 26, 2013 at 10:31 PM, Jeff Janes wrote:
>
> Why is it retrieving 3.1 million, when it only needs 17?
>
>
> That's because of the sequential scan; it reads all the data.
cheers,
willy-bas
--
"Quality comes from focus and clarity of purpose" -- Mark Shuttleworth
On Wed, Jun 26, 2013 at 11:20 PM, Willy-Bas Loos wrote:
> On Wed, Jun 26, 2013 at 10:55 PM, Sergey Konoplev wrote:
>
>>
>>
>> These are plans of two different queries. Please show the second one
>> (where d2, g2, etc. are) with seqscans off.
>>
>>
help!!
Cheers,
Willy-Bas
On Wed, Jun 26, 2013 at 10:46 PM, Victor Yegorov wrote:
> 2013/6/26 Willy-Bas Loos
>
>> postgres does a seqscan, even though there is an index present and it
>> should be much more efficient to use it.
>> I tried to synthetically reproduce i
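For reference, a synthetic setup along the lines of the DDL quoted earlier in the
thread might look roughly like this; the row counts, the index names and the final
query are made up, only the table shapes follow the quoted DDL:

create table g2 (gid bigint primary key, k integer);
create table d2 (id bigint primary key, gid bigint);
create index g2_k_idx on g2 (k);
create index d2_gid_idx on d2 (gid);

insert into g2 select g, g % 5000 from generate_series(1, 100000) g;
insert into d2 select g, (g % 100000) + 1 from generate_series(1, 3000000) g;

analyze g2;
analyze d2;

explain analyze
select d2.* from d2 join g2 using (gid) where g2.k = 1942;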