Sebastian Hennebrueder <[EMAIL PROTECTED]> writes:
> ...
>
> "Nested Loop (cost=1349.13..1435.29 rows=1 width=2541) (actual
> time=1640.000..3687.000 rows=62 loops=1)"
> " Join Filter: ("inner".fid = "outer".faufgaben_id)"
> " -> Inde
After a long battle with technology, [EMAIL PROTECTED] (Sebastian
Hennebrueder), an earthling, wrote:
> I could not find any recommendations for the level of set statistics
> and what a specific level actually means.
> What is the difference between 1, 50 and 100? What is recommended for
> a table or column?
Sebastian Hennebrueder wrote:
> Hello,
> I could not find any recommendations for the level of set statistics and
> what a specific level actually means.
> What is the difference between 1, 50 and 100? What is recommended for a
> table or column?
The default, I believe, is 10. The higher the number, the more samples
ANALYZE collects for the column and the more detailed the statistics
(most-common values and histogram buckets) it stores, at the cost of
slower ANALYZE runs.
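For reference, the per-column setting under discussion is raised like
this (just a sketch; the table and column are borrowed from the join
filter quoted earlier in the thread, and whether they are the right ones
to raise is an assumption):

ALTER TABLE taufgaben_mitarbeiter
    ALTER COLUMN faufgaben_id SET STATISTICS 100;
-- The new target only takes effect at the next ANALYZE of the table.
ANALYZE taufgaben_mitarbeiter;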
Hello,
I could not find any recommendations for the level of set statistics and
what a specific level actually means.
What is the difference between 1, 50 and 100? What is recommended for a
table or column?
--
Kind Regards / Viele Grüße
Sebastian Hennebrueder
-
http://www.laliluna.de/tut
The solution I thought I had found did not hold up. I integrated the
query into a view and the query plan became very bad once again.
The reason is that when I am using the view, the joins are in a
different order.
Does anyone have an idea how to solve this?
Sebastian
a) bad order, but the one I have in my application
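For what it's worth, one knob that bears on join order inside views (a
sketch assuming a 7.4/8.0-era server like those elsewhere in this
thread; whether it rescues this particular view is an assumption):

-- With join_collapse_limit = 1 the planner keeps explicit JOINs in the
-- order they are written, so a view can be authored with the join
-- order that worked for the hand-written query.
SET join_collapse_limit = 1;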
Solution to my problem:
I added indexes on each foreign key (some had been missing). I will try
tomorrow by daylight what influence this actually had. The indexes alone
did not change anything, even with lower random_page_cost and higher
shared memory.
The big change was the following:
I created
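A sketch of the kind of foreign-key indexing described above; PostgreSQL
does not index the referencing side of a foreign key automatically. The
names follow the join filter quoted earlier in the thread, but the exact
schema is an assumption:

CREATE INDEX taufgaben_mitarbeiter_faufgaben_id_idx
    ON taufgaben_mitarbeiter (faufgaben_id);
-- Refresh planner statistics so the new index gets realistic costs.
ANALYZE taufgaben_mitarbeiter;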
On 12.05.2005, at 16:15, Tom Lane wrote:
> David Teran <[EMAIL PROTECTED]> writes:
>> Any hint how to force postgres to use the index even with more OR parts?
> More up-to-date statistics would evidently help; the thing is estimating
> hundreds of rows returned and actually finding none.
I always do a 'vacuum analyze'
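A minimal sketch of that maintenance step (the table name is borrowed
from David's query later in the thread, purely as an example):

-- Reclaims dead space and refreshes the planner's statistics in one go.
VACUUM ANALYZE attribute_value;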
Hello,
I am facing a problem optimizing the query shown below.
Most queries in the application find only about 20 to 100 matching rows.
The query joins the table taufgaben_mitarbeiter to taufgaben, on which a
condition like the following "where clause" is frequently used:
where
am.fmitarbeit
On Wed, 11 May 2005 16:15:16 -0500, "Jim C. Nasby" <[EMAIL PROTECTED]>
wrote:
>> This is divided by the number of index columns, so the index correlation
>> is estimated to be 0.219.
>
> That seems like a pretty bad assumption to make.
Any assumption we make without looking at entire index tuples h
Ross,
> Memcached is a PG memory store, I gather,
Nope. It's a hyperfast resident-in-memory hash that allows you to stash stuff
like user session information and even materialized query set results.
Thanks to SeanC, we even have a plugin, pgmemcached.
> but...what is squid, lighttpd?
> anyt
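As an aside, a sketch of how such a plugin is typically driven from SQL;
the function names follow pgmemcache's memcache_set/memcache_get, but
treat the exact signatures, key, and value here as assumptions:

-- Stash a piece of session state under a key, then read it back,
-- keeping the disposable data out of the database proper.
SELECT memcache_set('session:42', 'user=alice;cart=7');
SELECT memcache_get('session:42');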
On 12 May 2005, at 18:33, Josh Berkus wrote:
> People,
>> In general I think your point is valid. Just remember that it probably
>> also matters how you count page views. Because technically images are a
>> separate page (and this thread did discuss serving up images). So if
>> there are 20 graphics on a specific page, that is 20 server hits just
People,
> In general I think your point is valid. Just remember that it probably
> also matters how you count page views. Because technically images are a
> separate page (and this thread did discuss serving up images). So if
> there are 20 graphics on a specific page, that is 20 server hits just
Marc,
> 1) use 3 different disks for:
>
> -1: source data
> -2: index tablespaces
> -3: data tablespaces
Others have already told you about the importance of relocating WAL. If you
are going to be building indexes on the imported data, you might find it
beneficial to relocate
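Tablespaces are new in 8.0, so for what it's worth, a sketch of giving
the post-import index builds their own disk along those lines (the path
and names are placeholders):

-- One-time setup: a tablespace on the dedicated index disk.
CREATE TABLESPACE index_space LOCATION '/disk2/pg_indexes';
-- Build the indexes into it once the import finishes.
CREATE INDEX import_table_key_idx ON import_table (key_col)
    TABLESPACE index_space;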
100 hits a second = 8,640,000 hits a day. I work on a site which does >
100 million dynamic pages a day. In comparison, Yahoo probably does >
100,000,000,000 (100 billion) views a day, if I am interpreting Alexa's
charts correctly. That is about 1,150,000 a second.
Read the help on Alexa
Alex Turner wrote:
> Ok - my common sense alarm is going off here...
> There are only 6.446 billion people worldwide. 100 billion page views
> would require every person in the world to view roughly 16 pages of
> yahoo every day. Not very likely.
> http://www.internetworldstats.com/stats.htm
> suggests that there are around 1 billion
Ok - my common sense alarm is going off here...
There are only 6.446 billion people worldwide. 100 billion page views
would require every person in the world to view roughly 16 pages of
yahoo every day. Not very likely.
http://www.internetworldstats.com/stats.htm
suggests that there are around 1 billion
On 12 May 2005, at 15:08, Alex Turner wrote:
> Having local sessions is unnecessary, and here is my logic:
> Generally most people have less than 100Mb of bandwidth to the internet.
> If you make the assertion that you are transferring equal or less
> session data between your session server (let's say an
Marc Mamin wrote:
Hello,
I'm not an expert, but I'll give some suggestions.
> I'd like to tune Postgres for large data import (using COPY FROM).
I believe COPY FROM a file is supposed to be faster than COPY FROM
STDIN, but the file must be readable by the backend process. If you can
do it, you should think a
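For what it's worth, a sketch of the two COPY variants being compared
(the path and table are placeholders):

-- Server-side COPY: the backend opens the file itself, so the path
-- must exist and be readable on the database server machine.
COPY import_table FROM '/disks/source/data.csv' WITH CSV;
-- The STDIN form (what psql's \copy uses) instead streams the rows
-- over the client connection, adding client-side overhead.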
"Marc Mamin" <[EMAIL PROTECTED]> writes:
> 1) use 3 different disks for:
> -1: source data
> -2: index tablespaces
> -3: data tablespaces
It's probably much more important to know where you put the WAL.
regards, tom lane
David Teran <[EMAIL PROTECTED]> writes:
> Any hint how
> to force postgres to use the index even with more OR parts?
More up-to-date statistics would evidently help; the thing is estimating
hundreds of rows returned and actually finding none.
regards, tom lane
--
Having local sessions is unnecessary, and here is my logic:
Generally most people have less than 100Mb of bandwidth to the internet.
If you make the assertion that you are transferring equal or less
session data between your session server (let's say an RDBMS) and the
app server than you are between
Hello,
I'd like to tune Postgres for large data import (using COPY FROM).
Here are a few steps already done:
1) use 3 different disks for:
-1: source data
-2: index tablespaces
-3: data tablespaces
2) define all foreign keys as initially deferred
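For step 2, a sketch of what "initially deferred" looks like in the DDL
(table and column names are placeholders):

-- A deferred constraint is checked at COMMIT rather than statement by
-- statement, which lets a bulk load insert parents and children in any
-- order within one transaction.
ALTER TABLE detail
    ADD CONSTRAINT detail_master_fk
    FOREIGN KEY (master_id) REFERENCES master (id)
    DEFERRABLE INITIALLY DEFERRED;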
> machines. Which has its own set of issues entirely. I am not entirely
> sure that memcached actually does serialize data when it's committed into
I think it does, ie. it's a simple mapping of [string key] => [string
value].
> memcached either, although I could be wrong, I have not looked at the
Filesystems with many filesystem objects can be slow to open and read
data.
On my laptop, lighttpd handles up to 15,000 hits per second on static
2-3 KB files (tested with ApacheBench 2).
Apache is slower, of course: 3,000-4,000 hits per second, which is not
that bad.
Using a
Hi,
PostgreSQL 8.0.1, Mac OS X 10.3.9
I have a select with multiple ORs combined with one AND:
explain analyze SELECT t0.ATTRIBUTE_TYPE FROM ATTRIBUTE_VALUE t0 WHERE
(((t0.ATTRIBUTE_TYPE = 'pb'::varchar(10) OR t0.ATTRIBUTE_TYPE =
'po'::varchar(10) OR t0.ATTRIBUTE_TYPE = 'pn'::varchar(10) OR
t0.AT
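The statement is cut off above, but for long OR chains over a single
column a common rewrite is an IN list (just a sketch, not something
proposed in the thread; the AND part of the original predicate is
truncated above and omitted here):

explain analyze SELECT t0.ATTRIBUTE_TYPE FROM ATTRIBUTE_VALUE t0
WHERE t0.ATTRIBUTE_TYPE IN ('pb', 'po', 'pn');
-- Semantically identical to the OR chain; with up-to-date statistics
-- the planner can still use an index on ATTRIBUTE_TYPE when the list
-- is selective.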
On 11 May 2005, at 23:35, PFC wrote:
> However, memcached (and for us, pg_memcached) is an excellent way to
> improve horizontal scalability by taking disposable data (like session
> information) out of the database and putting it in protected RAM.
So, what is the advantage of such a system ve