Palle Girgensohn <[EMAIL PROTECTED]> writes:
> Is it always bad to create index xx on yy (field1, field2, field3);
All generalisations are false...
Seriously, it's true that the longer your index key gets, the harder and
harder it is to justify. That doesn't mean they're always w
Bruno Wolff III <[EMAIL PROTECTED]> writes:
> I have seen exactly this happen a number of times over the last several
> years. However there is still only one Tom Lane implementing the
> improvements.
Ob: Well clearly the problem is we need more Tom Lanes.
--
greg
---
"Joshua D. Drake" <[EMAIL PROTECTED]> writes:
> > while($howmany--) { push @morepgGurus, $pgGuru; }
>
> This is just wrong...
yeah, it would have been much clearer written as:
push @morepgGurus, ($pgGuru) x $howmany;
Or at least the perlish:
for (1..$howmany)
instead of C style while syntax.
O
> stats=# explain analyze SELECT work_units, min(raw_rank) AS rank FROM
> Trank_work_overall GROUP BY work_units;
>
> ...
>
> raw_rank | bigint |
> work_units | bigint |
If you create a copy of the same table using regular integers does that run
fast? And a copy of the table using bigin
Tom Lane <[EMAIL PROTECTED]> writes:
> We could change the hash function, perhaps, but then we'd just have
> different cases where there's a problem ... hashing will always fail on
> *some* set of inputs.
Sure, but completely ignoring part of the input seems like an unfortunate
choice of hash fu
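To see why ignoring part of the input is unfortunate, here's a toy hash (not PostgreSQL's actual function) that uses only the low 32 bits of a 64-bit key; any set of keys differing only in the high half collides completely:

```python
# Toy hash that ignores the high 32 bits of a 64-bit key.
def low_bits_hash(key64, nbuckets):
    return (key64 & 0xFFFFFFFF) % nbuckets

# 1000 keys that differ only in the high half of the word.
keys = [(i << 32) | 7 for i in range(1000)]
buckets = {low_bits_hash(k, 1024) for k in keys}
print(len(buckets))  # 1 -- every key lands in the same bucket
```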
Tom Lane <[EMAIL PROTECTED]> writes:
> Greg Stark <[EMAIL PROTECTED]> writes:
> > Tom Lane <[EMAIL PROTECTED]> writes:
> >> (Also, I have been harboring some notions of supporting cross-type hash
> >> joins for integer types, which will not work unless
Tom Lane <[EMAIL PROTECTED]> writes:
> So in the short term I think we have to tell people that Xeon MP is not
> the most desirable SMP platform to run Postgres on. (Josh thinks that
> the specific motherboard chipset being used in these machines might
> share some of the blame too. I don't hav
Dennis Bjorklund <[EMAIL PROTECTED]> writes:
> On Sun, 18 Apr 2004, Bruno Wolff III wrote:
>
> > Another option would be to put the numbers into two int4s. For int4 or
> > smaller types one of these would be zero. int8s would be split between
> > the two. The hash function would then be defined
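A minimal sketch of that suggestion, splitting the 64-bit value into two 32-bit halves so both influence the bucket (the multiplier is just an arbitrary odd mixing constant for illustration, not anything from the Postgres source):

```python
# Split a 64-bit key into two 32-bit halves and combine both in the hash.
def split_hash(key64, nbuckets):
    hi = (key64 >> 32) & 0xFFFFFFFF
    lo = key64 & 0xFFFFFFFF
    # mix the high half with an odd constant so it affects the low bits too
    return ((hi * 2654435761) ^ lo) % nbuckets

keys = [(i << 32) | 7 for i in range(1000)]  # differ only in the high half
buckets = {split_hash(k, 1024) for k in keys}
print(len(buckets))  # spreads across many buckets instead of colliding on one
```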
Tom Lane <[EMAIL PROTECTED]> writes:
> Greg Stark <[EMAIL PROTECTED]> writes:
> > If the hash tables were made a power of two then it would be possible to mix
> > the bits of the 32 bit value and just mask off the unneeded bits. I've found
> > one page via
Dave Cramer <[EMAIL PROTECTED]> writes:
> Here's an interesting link that suggests that hyperthreading would be
> much worse.
Uh, this is the wrong thread.
--
greg
---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL
Joseph Shraibman <[EMAIL PROTECTED]> writes:
> scott.marlowe wrote:
>
> > I think you might be interested in materialized views. You could create this
> > as a materialized view which should be very fast to just select * from.
>
> That seems to be the count table I envisioned. It just hides th
Tom Lane <[EMAIL PROTECTED]> writes:
> Modding by a *non* power of 2 (esp. a prime) mixes the bits quite well,
> and is likely faster than any multiple-instruction way to do the same.
>
> The quoted article seems to be by someone who has spent a lot of time
> counting assembly cycles and none at
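The mod-by-a-prime point is easy to demonstrate: with keys whose low bits are all zero, a power-of-two modulus wastes most of the table, while a prime modulus spreads them out (toy numbers, chosen for illustration):

```python
# Keys that are multiples of 8: the low 3 bits carry no information.
keys = [8 * i for i in range(1000)]

pow2_buckets = {k % 16 for k in keys}   # power-of-two modulus
prime_buckets = {k % 13 for k in keys}  # prime modulus

print(sorted(pow2_buckets))   # only {0, 8} ever get used
print(len(prime_buckets))     # all 13 buckets get used
```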
Joseph Shraibman <[EMAIL PROTECTED]> writes:
> J. Andrew Rogers wrote:
>
> > Do these features make a difference? Far more than you would imagine. On one
> > postgres server I just upgraded, we went from a 3Ware 8x7200-RPM
> > RAID-10 configuration to an LSI 320-2 SCSI 3x10k RAID-5, with 256M
>
Vitaly Belman <[EMAIL PROTECTED]> writes:
> With all that said, VMWare is badly suited for running a database,
> while CoLinux can be run as a service (didn't try it yet though),
> VMWare always sits there, it is slow to go up, slow to go down and
> generally feels like a system hog.
Uhm
"Matthew Nuzum" <[EMAIL PROTECTED]> writes:
> I have colinux running on a Fedora Core 1 image. I have the rhdb 3 (or
> PostgreSQL RedHat Edition 3) on it running. Here are tests with fsync on
> and off:
> FSYNC OFF FSYNC ON RUN
> 136.9 142.0 124.5 149.1 1
> 122.1
<[EMAIL PROTECTED]> writes:
> Currently we are running a dual cpu dell blade server on redhat linux
> (2.4?) and PostgreSQL 7.3.4 on i686-pc-linux-gnu, compiled by GCC 2.96,
> raid5 and am using sqlrelay for connection pooling. It works fine under
> ordinary load but bogs down too much under the
"Michael Nonemacher" <[EMAIL PROTECTED]> writes:
> Agreed.
>
> We originally created the indexes this way because we sometimes do
> searches where one of the columns is constrained using =, and the other
> using a range search, but it's not clear to me how much Postgres
> understands multi-column
"Scott Marlowe" <[EMAIL PROTECTED]> writes:
> >-> Seq Scan on rules
> > (cost=0.00..22296.32 rows=11294 width=12)
> > (actual time=540.149..2047.308 rows=1 loops=1)
> Simple, the planner is choosing a sequential scan when it should be
> choosing an index scan. This is u
"William Carney" <[EMAIL PROTECTED]> writes:
> The machines used are P4s running FreeBSD 5.2.1. The Postgres version is
> 7.4.3. Can anyone tell me why there's such a big difference?
You're going to have to run tcpdump and see where the delays are. It might be
hard to decode the postgres protoco
"Ross J. Reedstrom" <[EMAIL PROTECTED]> writes:
> In the new schema, the same thing is:
>
> SELECT * from content where 42 = ANY (authors);
>
> Works fine, but for the life of me I can't find nor figure out how to
> build an index that will be used to speed this along. Any ideas?
Well that's b
"Ross J. Reedstrom" <[EMAIL PROTECTED]> writes:
> These groups _really are_ ideal for Joe Conway's work on arrays: we need
> ordered vectors, so we'd be sorting all the time, otherwise. They're
> static, and they're read only. The one thing they're not is fixed, known
> size (Sorry Merlin). They w
Tom Lane <[EMAIL PROTECTED]> writes:
> > I still don't really know why it failed, but after two days building the
> > index I gave up.
>
> Sounds like a bug to me. Could you put together a test case?
At the time I contacted one of the GiST authors and we went over things for a
while. They diag
> Interestingly, it is possible to rewrite the above query by switching
> AND with OR and >= with >. However when written that way, the planner
> almost never gets it right.
Well, note it's still not really getting it right even in your case. It's
doing an index scan on a>=a1 but if you have lo
"Merlin Moncure" <[EMAIL PROTECTED]> writes:
> > do it for multi-column keys. It seems it would be nice if some syntax
> > similar to (a,b,c) > (a1,b1,c1) worked for this.
>
> 'nice' would be an understatement...
>
> if the above syntax is not defined in the standard, I would humbly suggest,
> w
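For reference, the expansion being discussed can be generated mechanically. A sketch (the column and value names a, b, c, a1, b1, c1 are just the ones from the thread):

```python
def row_gt(cols, vals):
    """Expand (c1,...,cn) > (v1,...,vn) into the plain AND/OR
    comparisons one has to write out by hand without row-value support."""
    head_col, head_val = cols[0], vals[0]
    if len(cols) == 1:
        return f"{head_col} > {head_val}"
    rest = row_gt(cols[1:], vals[1:])
    return f"({head_col} > {head_val} OR ({head_col} = {head_val} AND {rest}))"

expanded = row_gt(["a", "b", "c"], ["a1", "b1", "c1"])
print(expanded)
# (a > a1 OR (a = a1 AND (b > b1 OR (b = b1 AND c > c1))))
```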
Stephan Szabo <[EMAIL PROTECTED]> writes:
> Given the comment on make_row_op,
> /*
>* XXX it's really wrong to generate a simple AND combination for < <=
>* > >=. We probably need to invent a new runtime node type to handle
>* those correctly. For the moment, though, keep on doing
Greg Stark <[EMAIL PROTECTED]> writes:
> Fixing it to write out complex boolean expressions wouldn't be too hard, but
> I'm not clear it would be worth it, since I suspect the end result would be as
> the comment indicates, to introduce a new runtime node.
Just to prove
Tom Lane <[EMAIL PROTECTED]> writes:
> The only reason the code in parse_expr.c appears new is that the
> functionality used to be in gram.y.
Ah, that was what I was missing. Though it's odd since it seems there was code
in parse_expr.c to handle the "=" case specially.
> I'd like to see this f
Tom Lane <[EMAIL PROTECTED]> writes:
> One thing I did not like about your first pass is that it makes
> unsupportable assumptions about there being a semantic relationship
> between operators named, say, '<' and '<='.
Hm, I think I even had caught that issue on the mailing list previously.
I
"Merlin Moncure" <[EMAIL PROTECTED]> writes:
> Well, if and when the rowtype comparison can be made to work over multi
> part keys (and the optimizer is made to do tricks there), postgres can
> be made to give much better generalized ISAM access. In the meantime,
> I'll just troubleshoot specifi
"Merlin Moncure" <[EMAIL PROTECTED]> writes:
> However, this would work:
> Create index on t(stackparam(array[a::text,b::text,c::text],
> array['char(2)', 'int', 'date']));
Well, I fear not all datatypes sort properly when treated as text. Notably
integers don't. "10" sorts before "2" for example
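The text-vs-integer ordering problem is easy to see outside the database too (nothing Postgres-specific here):

```python
text_order = sorted(["10", "2", "33"])  # lexicographic: "10" before "2"
num_order = sorted([10, 2, 33])         # numeric

print(text_order)  # ['10', '2', '33']
print(num_order)   # [2, 10, 33]

# Zero-padding to a fixed width is one workaround if you must sort as text:
padded_order = sorted(["10", "2", "33"], key=lambda s: s.zfill(10))
print(padded_order)  # ['2', '10', '33']
```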
To the person who was looking for a $5k midlevel SSD drive (sorry, I hit 'd'
too fast):
http://www.tigicorp.com/tigijet_exp_s.htm
I found this via this interesting survey of SSD products:
http://www.storagesearch.com/ssd-buyers-guide.html
Incidentally it seems the popular Platypus SSD PCI
Bruce Momjian <[EMAIL PROTECTED]> writes:
> Updated TODO item:
>
> o Automatically maintain clustering on a table
>
> This would require some background daemon to maintain clustering
> during periods of low usage. It might also require tables to be only
> partial
Bruce Momjian <[EMAIL PROTECTED]> writes:
> Agreed. What I am wondering is with our system where every update gets
> a new row, how would this help us? I know we try to keep an update on
> the same row as the original, but is there any significant performance
> benefit to doing that which would
Bruce Momjian <[EMAIL PROTECTED]> writes:
> but is there any significant performance benefit to doing that which would
> offset the compaction advantage?
Just as a side comment. Setting PCTFREE 0 PCTUSED 100 on tables that have no
updates on them has an astonishingly big effect on speed. So the
Bruce Momjian <[EMAIL PROTECTED]> writes:
> > Don't you think this will permit also to avoid extra disk seek and cache
> > invalidation? If you are updating the row (0,1) I think is less expensive
> > put the new version in (0,2) instead of thousand line far from that point.
Well if the other bu
Mr Pink <[EMAIL PROTECTED]> writes:
> AFAIK postgres doesn't peek at values used in a query when optimizing
Of course it does.
However sometimes things don't work perfectly.
To get good answers rather than just guesses we'll need two things:
. What version of postgres are you using.
. The out
"Scott Marlowe" <[EMAIL PROTECTED]> writes:
> Also, count(*) is likely to always generate a seq scan due to the way
> aggregates are implemented currently in pgsql. you might want to try:
Huh? I'm curious to know what you're talking about here.
> select somefield from sometable where timestamp
"Scott Marlowe" <[EMAIL PROTECTED]> writes:
> PostgreSQL has a "generic" aggregate method. Imagine instead doing a
> select count(id1+id2-id3) from table where ... In that instance, it's
> not a simple shortcut to just grab the number of rows anymore. Since
> PostgreSQL uses a generic aggregat
[I'm actually responding to the previous post from Tom Lane, but I've deleted
it and the archives seem to be down again.]
The assumption being made is that the first provided result is representative
of all future results. I don't see any reason that making this assumption of
all stable functio
Another option here is to use a partial index. You can index on some other
column -- perhaps the column you want the results ordered by where the where
clause is true.
Something like:
create index t_idx on t (name) where c>0 and d>0;
then any select with a matching where clause can use the inde
Tom Lane <[EMAIL PROTECTED]> writes:
> Greg Stark <[EMAIL PROTECTED]> writes:
> > However I have the complementary reaction. I find peeking at the first
> > bind parameter to be scary as hell. Functions seem slightly less scary.
>
> FWIW, we only do it in the
Markus Schaber <[EMAIL PROTECTED]> writes:
> logigis=# explain select count(id) from (select ref_in_id as id from streets union
> select nref_in_id as id from streets) as blubb;
>QUERY PLAN
>
> ---
Tom Lane <[EMAIL PROTECTED]> writes:
> Neil Conway <[EMAIL PROTECTED]> writes:
>
> > How many cycles are we really talking about, though? I have a patch
> > which I'll send along in a few days which implements a similar
> > optimization: if a subselect is referenced by EXISTS or IN, we can
>
"J. Andrew Rogers" <[EMAIL PROTECTED]> writes:
> We do something very similar, also using table inheritance
I have a suspicion postgres's table inheritance will end up serving as a good
base for a partitioned table feature. Is it currently possible to query which
subtable a record came from thou
Patrick Clery <[EMAIL PROTECTED]> writes:
> Method 3 is the only one that used the index, but the only really acceptable
> method here is Method 1.
>
> My questions are...
> - Is there any hope in getting this to use an efficient index?
> - Any mathmaticians know if there is a way to reorder my
Patrick Clery <[EMAIL PROTECTED]> writes:
> Here's the structure of the marital status table:
Also I find it very odd that you have a "marital status table". Marital status
is just one attribute of a member. Do you expect to have more than one marital
status bitfield per member? How would you dist
Patrick Clery <[EMAIL PROTECTED]> writes:
> PLAN
> -
> Limit
Gaetano Mendola <[EMAIL PROTECTED]> writes:
> hash_join = on
> -> Seq Scan on lookup_tipo_evento le (cost=0.00..1.16 rows=16
> width=32) (actual time=0.017..0.038 rows=16 loops=1)
>
> hash_join = off
> -> Seq Scan on lookup_tipo_evento le (cost=0.00..1.1
Dennis Bjorklund <[EMAIL PROTECTED]> writes:
> On 22 Sep 2004, Greg Stark wrote:
>
> > Actually this looks like it's arguably a bug to me. Why does the hash
> > join execute the sequential scan at all? Shouldn't it also like the
> > merge join recognize
Tom Lane <[EMAIL PROTECTED]> writes:
> Yeah, I was just looking at doing that.
Well I imagine it takes you as long to read my patch as it would to write it
yourself. But anyway it's still useful to me as an exercise.
> It would also be interesting to prefetch one row from the outer table and fall
Andrew Hammond <[EMAIL PROTECTED]> writes:
> My goal is to tune the disk / filesystem on our prototype system. It's
> an EMC disk array, so sectors on disk are 512 bytes of usable space.
> We've decided to go with RAID 10 since the goal is to maximize
> performance. Currently the raid element siz
"Jason Coene" <[EMAIL PROTECTED]> writes:
> All of our "postgres" processes end up in the "semwai" state - seemingly
> waiting on other queries to complete. If the system isn't taxed in CPU or
> disk, I have a good feeling that this may be the cause.
Well, it's possible contention of some sor
John Meinel <[EMAIL PROTECTED]> writes:
> As Janning mentioned, sometimes sequential scans *are* faster. If the number of
> entries that will be found is large compared to the number of total entries (I
> don't know the percentages, but probably >30-40%),
Actually 30%-40% is unrealistic. The tr
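A back-of-envelope model shows why the real crossover is so much lower than 30-40% (the constants here are assumptions for illustration, not the planner's actual cost model):

```python
# Rough sketch: a heap of `pages` pages holding `rows_per_page` rows each.
def seq_cost(pages):
    return pages  # one sequential page read per page

def index_cost(pages, rows_per_page, selectivity, random_page_cost=4.0):
    # pessimistic: every matching row triggers a random heap page fetch
    return selectivity * pages * rows_per_page * random_page_cost

pages, rows_per_page = 10_000, 100
for sel in (0.001, 0.01, 0.1, 0.3):
    idx = index_cost(pages, rows_per_page, sel)
    print(f"{sel:>6}: {'index' if idx < seq_cost(pages) else 'seq scan'} wins")
```

Under these assumptions the index only wins below a fraction of a percent selectivity; at 30% the sequential scan is cheaper by a wide margin.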
Jan Wieck <[EMAIL PROTECTED]> writes:
> Which would require that shared memory is not allowed to be swapped out, and
> that is allowed in Linux by default IIRC, not to completely distort the entire
> test.
Well if it's getting swapped out then it's clearly not being used effectively.
There are A
Jan Wieck <[EMAIL PROTECTED]> writes:
> On 10/8/2004 10:10 PM, Christopher Browne wrote:
>
> > [EMAIL PROTECTED] (Josh Berkus) wrote:
> >> I've been trying to peg the "sweet spot" for shared memory using
> >> OSDL's equipment. With Jan's new ARC patch, I was expecting that
> >> the desired amoun
Russell Smith <[EMAIL PROTECTED]> writes:
> The Index does not store NULL values
This is false.
Though the fact that NULL values are indexed in postgres doesn't help with
this poster's actual problem.
Alan Stange <[EMAIL PROTECTED]> writes:
> A few quick random observations on the Xeon v. Opteron comparison:
>
> - running a dual Xeon with hyperthreading turned on really isn't the same as
> having a quad cpu system. I haven't seen postgresql specific benchmarks, but
> the general case has been
Pierre-Frédéric Caillaud <[EMAIL PROTECTED]> writes:
> I don't know WHY (oh why) postgres does not use this kind of strategy
> when distinct'ing an indexed field... Anybody got an idea ?
Well there are two questions here. Why given the current plans available does
postgres choose a sequent
Patrick Clery <[EMAIL PROTECTED]> writes:
> PART OF THE QUERY PLAN:
> Index Scan using people_attributes_pkey on people_attributes pa (cost=0.00..5.32
> rows=1 width=20)
> Index Cond: (pa.person_id = "outer".person_id)
> Filter: (((ARRAY[age, gender, orientation, children, drin
Tom Lane <[EMAIL PROTECTED]> writes:
> Greg Stark <[EMAIL PROTECTED]> writes:
> > why isn't a "skip index scan" plan available? Well, nobody's written the code
> > yet.
>
> I don't really think it would be a useful plan anyway.
Well
Pierre-Frédéric Caillaud <[EMAIL PROTECTED]> writes:
> I see this as a minor annoyance only because I can write GROUP BY
> instead of DISTINCT and get the speed boost. It probably annoys people
> trying to port applications to postgres though, forcing them to rewrite
> their queries.
Yeah,
Allen Landsidel <[EMAIL PROTECTED]> writes:
> QUERY PLAN
> ---
> Index Scan using sname_unique on "testtable" (cost=0.00..34453.74
> rows=8620 width=20) (ac
William Yu <[EMAIL PROTECTED]> writes:
> Biggest speedup I've found yet is the backup process (PG_DUMP --> GZIP). 100%
> faster in 64-bit mode. This drastic speed might be more the result of 64-bit
> GZIP though as I've seen benchmarks in the past showing encryption/compression
> running 2 or 3 ti
Josh Berkus <[EMAIL PROTECTED]> writes:
> So memcached becomes a very good place to stick data that's read often but
> not
> updated often, or alternately data that changes often but is disposable. An
> example of the former is a user+ACL list; and example of the latter is web
> session info
Alexandre Leclerc <[EMAIL PROTECTED]> writes:
> Thanks for those tips. I'll print and keep them. So in my case, the
> product_code being varchar(24) is:
> 4 bytes + string size (so possibly up to 24) = possible 28 bytes. I
> did the good thing using a serial. For my shorter keys (4 bytes + up
> to
Tom Lane <[EMAIL PROTECTED]> writes:
> Greg Stark <[EMAIL PROTECTED]> writes:
> > This could hurt if they ever reuse an old previously retired postal code,
> > which isn't an entirely impossible case. As far as I know it hasn't happened
> > yet thoug
Bryan Vest <[EMAIL PROTECTED]> writes:
> Currently the database is at a size of 87.6Gig. A Vacuum Analyze runs every
> night and has been taking 4 or 5 hours to complete. Everything seems to run
> fine for a while, then at some point the load goes through the roof and the
> iowait % also goes way
Josh Berkus <[EMAIL PROTECTED]> writes:
> Stacy,
>
> > Thanks again for the reply. So it sounds like the answer to my original
> > question is that it's expected that the pseudo-partitioning would introduce
> > a fairly significant amount of overhead. Correct?
>
> Correct. For that matter, O
Josh Berkus <[EMAIL PROTECTED]> writes:
> > But I'm a bit puzzled. Why would Append have any significant cost? It's
> > just taking the tuples from one plan node and returning them until they run
> > out, then taking the tuples from another plan node. It should have no i/o
> > cost and hardly any
Richard Huxton <[EMAIL PROTECTED]> writes:
> Not going to do anything in this case. The planner is roughly right about how
> many rows will be returned, it's just not expecting everything to be in RAM.
That doesn't make sense or else it would switch to the index at
random_page_cost = 1.0. If it w
Tom Lane <[EMAIL PROTECTED]> writes:
> Greg Stark <[EMAIL PROTECTED]> writes:
> > Postgres is also more pessimistic about the efficiency of index scans. It's
> > willing to use a sequential scan down to well below 5% selectivity when
> > other
> > data
Pallav Kalva <[EMAIL PROTECTED]> writes:
> >> I had a setup a cronjob couple of weeks ago to run vacuum analyze every 3
> >> hours on this table and still my stats are totally wrong. This is affecting
> >> the performance of the queries running on this table very badly.
> >> How can i fix this pro
Madison Kelly <[EMAIL PROTECTED]> writes:
> Without it, specially in a failure state, the performance can collapse as
> the CPU performs all that extra math.
It's really not the math that makes raid 5 hurt. It's that in order to
calculate the checksum block the raid controller needs to read in t
Frank Wiles <[EMAIL PROTECTED]> writes:
> Now that's rich. I don't think I've ever seen a database perform
> worse after it was normalized. In fact, I can't even think of a
> situation where it could!
Just remember. All generalisations are false.
Rosser Schwarz <[EMAIL PROTECTED]> writes:
> Try also the Appro 1U 4-way Opteron server, at:
> http://www.appro.com/product/server_1142h.asp
Back in the day, we used to have problems with our 1U dual pentiums. We
attributed it to heat accelerating failure. I would fear four opterons in 1U
would b
"Merlin Moncure" <[EMAIL PROTECTED]> writes:
> heh, our apps do tend to be CPU bound. Generally, I think the extra CPU
> horsepower is worth the investment until you get to the really high end
> cpus.
I find that while most applications I work with shouldn't be cpu intensive
they do seem to end up
Alex Turner <[EMAIL PROTECTED]> writes:
> Infact the cache hit ratio that Oracle suggests is the minimum good
> value is 95%. Anything below that is bad news.
Well that seems very workload dependent. No amount of cache is going to be
able to achieve that for a DSS system chugging sequentially
Jan Dittmer <[EMAIL PROTECTED]> writes:
> You could always do raid 1 over raid 0, with newer kernels (2.6ish)
> there is even a dedicated raid10 driver.
Aren't you much better off doing raid 0 over raid 1?
With raid 1 over raid 0 you're mirroring two stripe sets. That means if any
drive from th
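Enumerating the two-drive failure cases for a hypothetical four-drive array makes the difference concrete (toy model, no particular controller assumed):

```python
from itertools import combinations

drives = range(4)

def mirror_of_stripes_ok(failed):
    # RAID 1 over RAID 0: stripe sets (0,1) and (2,3) mirror each other.
    # Data survives only if at least one whole stripe set is intact.
    return not (failed & {0, 1}) or not (failed & {2, 3})

def stripe_of_mirrors_ok(failed):
    # RAID 0 over RAID 1: mirror pairs (0,1) and (2,3) are striped.
    # Data survives unless some mirror pair loses both of its drives.
    return failed != {0, 1} and failed != {2, 3}

pairs = [set(p) for p in combinations(drives, 2)]
mos = sum(mirror_of_stripes_ok(f) for f in pairs)
som = sum(stripe_of_mirrors_ok(f) for f in pairs)
print(mos, som)  # 2 4 -- striping over mirrors survives more failure pairs
```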
Josh Berkus writes:
> Merlin,
>
> > I think the danger about SATA is that many SATA components are not
> > server quality, so you have to be more careful about what you buy. For
> > example, you can't just assume your SATA backplane has hot swap lights
> > (got bit by this one myself, heh).
>
"Matt Casters" <[EMAIL PROTECTED]> writes:
> I've been reading up on partitioned tabes on pgsql, will the performance
> benefit will be comparable to Oracle partitioned tables?
Postgres doesn't have any built-in support for partitioned tables. You can do
it the same way people did it on Oracle u
Steve Wampler <[EMAIL PROTECTED]> writes:
> Hervé Piedvache wrote:
>
> > No ... as I have said ... how I'll manage a database getting a table of may
> > be 250 000 000 records ? I'll need incredible servers ... to get quick
> > access
> > or index reading ... no ?
>
> Probably by carefully part
"Andrei Bintintan" <[EMAIL PROTECTED]> writes:
> > If you're using this to provide "pages" of results, could you use a cursor?
> What do you mean by that? Cursor?
>
> Yes I'm using this to provide "pages", but If I jump to the last pages it goes
> very slow.
The best way to do pages for is not t
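One common alternative to OFFSET is keyset paging: remember the last key shown and start the next page there. A sketch with a hypothetical table t and key column id:

```python
def offset_page(page, per_page=50):
    # OFFSET paging: the server scans and discards page*per_page rows
    # before returning anything, so late pages get slower and slower.
    return f"SELECT * FROM t ORDER BY id LIMIT {per_page} OFFSET {page * per_page}"

def keyset_page(last_id, per_page=50):
    # Keyset paging: the index scan starts right after the last row shown,
    # so every page costs roughly the same.
    return f"SELECT * FROM t WHERE id > {last_id} ORDER BY id LIMIT {per_page}"

print(offset_page(200))
print(keyset_page(10_000))
```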
Hervé Piedvache <[EMAIL PROTECTED]> writes:
> Le Jeudi 20 Janvier 2005 19:09, Bruno Almeida do Lago a écrit :
> > Could you explain us what do you have in mind for that solution? I mean,
> > forget the PostgreSQL (or any other database) restrictions and explain us
> > how this hardware would be. W
ken <[EMAIL PROTECTED]> writes:
> From my understanding, all the data for these columns in all the child
> tables will be stored in this one parent table
No, all the data is stored in the child table.
> and that, furthermore, there is a "hidden" column in the parent table called
> tableoid th
> > Now I read all the posts and I have some answers.
> >
> > Yes, I have a web aplication. I HAVE to know exactly how many pages I have
> > and I have to allow the user to jump to a specific page(this is where I
> > used limit and offset). We have this feature and I cannot take it out.
I'm afra
ken <[EMAIL PROTECTED]> writes:
> On Fri, 2005-01-21 at 08:14, Greg Stark wrote:
> > ken <[EMAIL PROTECTED]> writes:
> >
> > > From my understanding, all the data for these columns in all the child
> > > tables will be stored in this one parent t
Ioannis Theoharis <[EMAIL PROTECTED]> writes:
> Can you explain me in more details what kind of optimization is missing in
> that case?
Uh, no I can't really. It was mentioned on the mailing list with regards to
UNION ALL specifically. I think it applied to inherited tables as well but I
wouldn't
Dawid Kuroczko <[EMAIL PROTECTED]> writes:
> Quick thought -- would it be possible to implement a 'partial VACUUM'
> per analogiam to partial indexes?
No.
But it gave me another idea. Perhaps equally infeasible, but I don't see why.
What if there were a map of modified pages. So every time a
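The idea sketches out as something like this (a toy in-memory version; nothing like this existed in Postgres at the time, and the names are made up):

```python
class DirtyPageMap:
    """Track which heap pages were modified since the last vacuum,
    so a partial vacuum could visit only those pages."""
    def __init__(self, npages):
        self.dirty = bytearray(npages)  # one flag per page

    def mark(self, page):
        self.dirty[page] = 1

    def pages_to_vacuum(self):
        return [p for p, d in enumerate(self.dirty) if d]

    def clear(self):
        for p in self.pages_to_vacuum():
            self.dirty[p] = 0

m = DirtyPageMap(1_000_000)
for page in (3, 3, 70, 512):   # a few scattered updates
    m.mark(page)
print(m.pages_to_vacuum())  # [3, 70, 512] -- only these need visiting
```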
Alex Turner <[EMAIL PROTECTED]> writes:
> I am also very interested in this question. Is there any way to
> declare a persistant cursor that remains open between pg sessions?
> This would be better than a temp table because you would not have to
> do the initial select and insert into a f
Alex Turner <[EMAIL PROTECTED]> writes:
> The problem with this approach is TTFB (Time to first Byte). The
> initial query is very slow, but additional requests are fast. In most
> situations we do not want the user to have to wait a disproportionate
> amount of time for the initial query. If
Oleg Bartunov writes:
> On Thu, 27 Jan 2005, PFC wrote:
>
> >
> > > > beware that SELECT * FROM table WHERE id =ANY( array ) won't use an
> > > > index,
> > > contrib/intarray provides index access to such queries.
> >
> > Can you provide an example of such a query ? I've looked at the operato
PFC <[EMAIL PROTECTED]> writes:
> intset(x) seems to be like array[x] ?
> Actually what I want is the opposite.
What you want is called UNNEST. It didn't get done in time for 8.0. But if
what you have is an array of integers the int_array_enum() function I quoted
in the other post i
"Merlin Moncure" <[EMAIL PROTECTED]> writes:
> what about
> CREATE AGGREGATE array_accum (
> sfunc = array_append,
> basetype = anyelement,
> stype = anyarray,
> initcond = '{}'
> );
huh, that is faster. It's only 14x slower than the C implementation.
For completeness, here are
William Yu <[EMAIL PROTECTED]> writes:
> 1 beefy server w/ 32GB RAM = $16K
>
> I know what I would choose. I'd get the mega server w/ a ton of RAM and skip
> all the trickyness of partitioning a DB over multiple servers. Yes your data
> will grow to a point where even the XXGB can't cache everyt
Dirk Lutzebaeck <[EMAIL PROTECTED]> writes:
> Below is the query and results for EXPLAIN and EXPLAIN ANALYZE. All
> tables have been analyzed before.
Really? A lot of the estimates are very far off. If you really just analyzed
these tables immediately prior to the query then perhaps you should t
Tom Lane <[EMAIL PROTECTED]> writes:
> It was 700k rows to sort, not 22k.
Oops, missed that.
> > The temporary table does need to store three copies of the records at
> > a given time, but still it sounds like an awful lot.
>
> Huh?
Am I wrong? I thought the disk sort algorithm was the polyp
I gave a bunch of "explain analyze select" commands to test estimates for
individual columns. What results do they come up with? If those are inaccurate
then raising the statistics target is a good route. If those are accurate
individually but the combination is inaccurate then you have a more dif
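A toy demonstration of why per-column accuracy isn't enough: with perfectly correlated columns, multiplying the individual selectivities (the independence assumption) badly underestimates the combination.

```python
import random
random.seed(1)

# Toy data where column b always equals column a.
rows = [(a, a) for a in (random.randrange(10) for _ in range(10_000))]

sel_a = sum(1 for a, b in rows if a == 3) / len(rows)
sel_b = sum(1 for a, b in rows if b == 3) / len(rows)
actual = sum(1 for a, b in rows if a == 3 and b == 3) / len(rows)

# Each per-column estimate is exact, yet their product is ~10x too low.
print(sel_a, sel_b, sel_a * sel_b, actual)
```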
"Leeuw van der, Tim" <[EMAIL PROTECTED]> writes:
> I don't think EXPLAIN ANALYZE puts that much overhead on a query.
EXPLAIN ANALYZE does indeed impose a significant overhead. What percentage of
the time is overhead depends heavily on how much i/o the query is doing.
For queries that are primar
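The per-row instrumentation cost can be mimicked in miniature: reading the clock twice per "row", roughly as EXPLAIN ANALYZE does around each node, adds measurable overhead to a tight loop (pure Python, so the proportions are illustrative only):

```python
import time

N = 200_000

# Plain loop: no instrumentation.
t0 = time.perf_counter()
total = 0
for _ in range(N):
    total += 1
plain = time.perf_counter() - t0

# Instrumented loop: two clock reads per "row".
t0 = time.perf_counter()
total = 0
for _ in range(N):
    start = time.perf_counter()    # per-row clock read before the work
    total += 1
    _elapsed = time.perf_counter() - start  # and one after
timed = time.perf_counter() - t0

print(timed > plain)  # True: the instrumented loop is measurably slower
```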
John Arbash Meinel <[EMAIL PROTECTED]> writes:
> >-> Hash (cost=1418.68..1418.68 rows=3226 width=4) (actual
> > time=77.062..77.062 rows=0 loops=1)
>
> This seems to be at least one of the problems. The planner thinks there
> are going to be 3000+ rows, but in reality there are 0.
No, that
Chris Kratz <[EMAIL PROTECTED]> writes:
> We continue to tune our individual queries where we can, but it seems we
> still
> are waiting on the db a lot in our app. When we run most queries, top shows
> the postmaster running at 90%+ constantly during the duration of the request.
>
> The d