"Medora Schauer" <[EMAIL PROTECTED]> writes:
> Merge Join  (cost=0.00..287726.10 rows=100221 width=58) (actual time=61.60..5975.63 rows=100425 loops=1)
>   Merge Cond: (("outer".shot_line_num = "inner".shot_line_num) AND ("outer".shotpoint = "inner".shotpoint))
>   ->  Index Scan using hs
Rob Nagler <[EMAIL PROTECTED]> writes:
> Incorrect. If the tuples are smaller, Oracle does the right thing. If
> there's enough space in the page, it shifts the tuples to make room.
> That's what pctfree, pctused and pctincrease allow you to control.
> It's all in memory so it's fast, and I don't thi
Rob Nagler <[EMAIL PROTECTED]> writes:
> Mario Weilguni writes:
> > of course both approaches have advantages, it simply depends on the usage
> > pattern. A case where oracle really rules over postgresql is m<-->n
> > connection tables where each record consists of two foreign keys, the
> > ov
Rob Nagler <[EMAIL PROTECTED]> writes:
> Greg Stark writes:
> > Note that pctfree/pctused are a big performance drain on the usual case. Try
> > setting them to 0/100 on a table that doesn't get updates (like a many-many
> > relation table) and see how much
"John Pagakis" <[EMAIL PROTECTED]> writes:
> UPDATE baz SET customer_id = '1234' WHERE baz_key IN( SELECT baz_key FROM
> baz WHERE customer_id IS NULL LIMIT 1000 );
Do an "explain analyze" on this query. I bet it's doing two sequential scans.
Unfortunately in 7.3 the WHERE IN type of clause is p
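A minimal sketch of one common 7.3-era workaround (my addition, not from the
original thread): materialize the candidate keys first, then update by joining
against them, so the planner never has to plan the IN (sub-SELECT) form and the
LIMIT 1000 batching is preserved. Names are taken from the query quoted above.

  BEGIN;

  SELECT baz_key
    INTO TEMP tmp_keys
    FROM baz
   WHERE customer_id IS NULL
   LIMIT 1000;

  UPDATE baz
     SET customer_id = '1234'
    FROM tmp_keys
   WHERE baz.baz_key = tmp_keys.baz_key;

  DROP TABLE tmp_keys;
  COMMIT;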
Dror Matalon <[EMAIL PROTECTED]> writes:
> explain analyze select count(*) from items where channel < 5000;
> QUERY PLAN
> ---
Christopher Browne <[EMAIL PROTECTED]> writes:
> In the last exciting episode, [EMAIL PROTECTED] (Dror Matalon) wrote:
> > I was answering an earlier response that suggested that maybe the actual
> > counting took time so it would take quite a bit longer when there are
> > more rows to count.
Tha
Rob Nagler <[EMAIL PROTECTED]> writes:
> I didn't find ALTER SESSION for postgres (isn't that Oracle?), so I
> set sort_mem in the conf file to 512000, restarted postgres. Reran
> the simpler query (no name) 3 times, and it was still 27 secs.
Sorry, I don't know how that bubbled up from the dep
Neil Conway <[EMAIL PROTECTED]> writes:
> On Sun, 2003-10-26 at 22:49, Greg Stark wrote:
> > What version of postgres is this? In 7.4 (and maybe 7.3?) count() uses an
> > int8 to store its count so it's not limited to 4 billion records.
> > Unfortunately int8 is s
Damien Dougan <[EMAIL PROTECTED]> writes:
> Our batch upload is performing a number of stored procedures to insert data on
> the database. Initially, this results in quite good performance, but rapidly
> spirals down to approximately 1 per second after some minutes.
It's fairly unlikely anyone
Tom Lane <[EMAIL PROTECTED]> writes:
> Greg is correct. int8 is a pass-by-reference datatype
Just to keep the conversation on track: the evidence from this particular post
seems to indicate that my theory was wrong and the overhead for count(*) is
_not_ a big time sink. It seems to be at most 1
In fact the number of records seems to be almost irrelevant. A sequential scan
takes almost exactly the same amount of time up until a critical region (for
me around 10 records) at which point it starts going up very quickly.
It's almost as if it's doing some disk i/o, but I'm watching vmstat
Rob Nagler <[EMAIL PROTECTED]> writes:
> One of the reasons postgres is faster on the q1-4 is that postgres
> supports OFFSET/LIMIT, and oracle doesn't. q7 and q8 are the queries
> that I've referred to recently (avg of group by).
Well the way to do offset/limit in Oracle is:
SELECT *
FROM (
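The quote is cut off here; presumably it continued with the classic nested
ROWNUM idiom, something along these lines (a reconstruction with hypothetical
table and column names, not the original post's text):

  SELECT *
    FROM (SELECT inner_q.*, ROWNUM AS rn
            FROM (SELECT * FROM some_table ORDER BY some_col) inner_q
           WHERE ROWNUM <= 120)   -- offset + limit
   WHERE rn > 100;                -- offset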
Tom Lane <[EMAIL PROTECTED]> writes:
> (I'd argue that the SQL generator is broken anyway ;-) if it generates
> such horrible conditions as that. Or maybe the real problem is that
> the database schema is a mess and needs rethinking.)
I had the same reaction when I first saw those queries. But
Rajesh Kumar Mallah <[EMAIL PROTECTED]> writes:
> -> Seq Scan on tickets main (cost=0.00..465.62 rows=1 width=164)
> Filter: ((effectiveid = id) AND (("type")::text = 'ticket'::text) AND
> (((status)::text = 'new'::text) OR ((status)::text = 'open'::text)))
This query has to read through
Rajesh Kumar Mallah <[EMAIL PROTECTED]> writes:
> rt3=# explain
>
> SELECT DISTINCT main.*
> FROM (((
> (Tickets main JOIN Groups as Groups_1 ON ( main.id = Groups_1.Instance))
> JOIN Principals as Principals_2 ON ( Groups_1.id = Principals_2.ObjectId)
> ) JOIN C
Rajesh Kumar Mallah <[EMAIL PROTECTED]> writes:
> Nope, the queries are not equivalent; the earlier one returns 4 rows and the
> one below returns none.
Sorry, I lowercased a string constant and dropped the lower() on email.
Try this:
SELECT *
  FROM tickets
 WHERE id IN (
         SELECT groups.instance
Well, you might want to try the EXISTS version. I'm not sure if it'll be
faster or slower though. In theory it should be the same.
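For reference, the EXISTS form would presumably look something like this (a
sketch only; everything beyond tickets.id and groups.instance is hypothetical,
since the IN query above is truncated):

  SELECT *
    FROM tickets t
   WHERE EXISTS (SELECT 1
                   FROM groups g
                  WHERE g.instance = t.id
                    -- ... plus whatever join/filter conditions the original
                    -- IN subquery carried
                );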
Hum, I didn't realize the principals table was the largest table. But Postgres
knew that, so one would expect it to have found a better plan. The IN/EXISTS
handling wa
Bill Moran <[EMAIL PROTECTED]> writes:
> Just for an additional viewpoint. I'm finishing up a project based on FreeBSD
> and PostgreSQL. The target server is a Dual 2.4G Intel machine. I have tested
> the application with hyperthreading enabled and disabled. To all appearances,
> enabling hype
William Yu <[EMAIL PROTECTED]> writes:
> Rob Sell wrote:
>
> > Not being one to hijack threads, but I haven't heard of this performance hit
> > when using HT. I have what should by all rights be a pretty fast server: dual
> > 2.4 Xeons with HT, a 205GB RAID 5 array, 1 gig of memory. And it is only 50%
Dennis Bjorklund <[EMAIL PROTECTED]> writes:
> On Mon, 10 Nov 2003, Marc G. Fournier wrote:
>
> >
> > explain analyze SELECT ts.company_id, company_name, SUM(ts.bytes) AS total_traffic
> > FROM company c, traffic_logs ts
> > WHERE c.company_id = ts.company_id
> > AND month_trunc(ts.
"Marc G. Fournier" <[EMAIL PROTECTED]> writes:
> On Tue, 11 Nov 2003, Greg Stark wrote:
>
> > Actually you might be able to get the same effect using function indexes
> > like:
> >
> > create index i on traffic_log (month_trunc(runtime), company_id
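Presumably the quoted statement ended along these lines (the archive cuts it
off mid-line), with a query shaped like the EXPLAIN fragment quoted earlier
then able to use the index. Table names are copied as they appear in the
quoted fragments, and the date literal is just a placeholder:

  create index i on traffic_log (month_trunc(runtime), company_id);

  SELECT ts.company_id, SUM(ts.bytes) AS total_traffic
    FROM traffic_logs ts
   WHERE month_trunc(ts.runtime) = '2003-11-01'
   GROUP BY ts.company_id;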
"Marc G. Fournier" <[EMAIL PROTECTED]> writes:
> Just as a side note, just doing a straight scan for the records, with no
> SUM()/GROUP BY involved, with the month_trunc() index is still >8k msec:
Well so the problem isn't the query at all, you just have too much data to
massage online. You can
William Yu <[EMAIL PROTECTED]> writes:
> > You're right, though, mirroring a solid state drive is pretty pointless; if
> > power fails, both mirrors are dead.
>
> Actually no. Solid state memory is non-volatile; it retains data even without
> power.
Note that flash RAM only has a finite number
Is it possible another connection has updated the record and not committed,
and it takes a minute for the connection to time out and commit or roll back?
--
greg
Erik Norvelle <[EMAIL PROTECTED]> writes:
> Here's the query I am running:
> update indethom
>    set query_counter = nextval('s2.query_counter_seq'), -- Just for keeping track of how fast the query is running
>        sectref = (select clavis from s2.sectiones where
>                   s2.sectio
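A general observation (my addition, not taken from the truncated reply): a
correlated subquery like that is re-evaluated for every row of indethom, and
can often be rewritten as an UPDATE ... FROM join. The join column below is a
hypothetical placeholder, since the original WHERE clause is cut off above.

  UPDATE indethom
     SET sectref = s.clavis
    FROM s2.sectiones s
   WHERE s.some_key = indethom.some_key;  -- hypothetical join column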
Jared Carr <[EMAIL PROTECTED]> writes:
> Furthermore, I noticed that in the following query plan it is doing the
> sequential scan on quotes first, and then doing the sequential scan on zips. IMHO
> this should be the other way around, since the result set for zips is
> considerably smaller especially giv
Jared Carr <[EMAIL PROTECTED]> writes:
> Greg Stark wrote:
>
> >
> >> Merge Cond: ("outer"."?column7?" = "inner"."?column5?")
> >>
> >
> >Well it looks like you have something strange going on. Wha
Jared Carr <[EMAIL PROTECTED]> writes:
> Greg Stark wrote:
>
> > Well it looks like you have something strange going on. What data type is
> > car_id in each table?
> >
> car_id is a varchar(10) in both tables.
Huh. The following shows something strange. It
Greg Stark <[EMAIL PROTECTED]> writes:
> Huh. The following shows something strange.
Worse, with enable_hashjoin off it's even more obvious something's broken:
test=# set enable_hashjoin = off;
SET
test=# explain select * from a,b where a.x=b.x;
Tom Lane <[EMAIL PROTECTED]> writes:
> Define "no longer works well". varchar doesn't have its own comparison
> operators anymore, but AFAIK that makes no difference.
Well it seems to completely bar the use of a straight merge join between two
index scans:
test=# set enable_seqscan = off;
SET
"scott.marlowe" <[EMAIL PROTECTED]> writes:
> > 3) Estimated number of transactions to be written into the Postgresql db is
> > around 15000 records per day.
> >
> > The growth rate in terms of number of connections is around 10% per year
> > and the data retention is kept on average at least fo
Jared Carr <[EMAIL PROTECTED]> writes:
> The patch definitely makes things more consistent...unfortunately it is more
> consistent toward the slower execution times. Of course I am looking at this
> simply from a straight performance standpoint and not a viewpoint of what
> *should* be happening.
Ivar Zarans <[EMAIL PROTECTED]> writes:
> > qry = "UPDATE table1 SET status = %s WHERE recid = '%s'"
> > cursor.execute(qry, status, recid)
>
> Yes, this helps. But then it sort of obsoletes PyPgSQL's own quoting
> logic. I would prefer to take care of this all by myself or trust some
> underlyin
Christopher Browne <[EMAIL PROTECTED]> writes:
> Here's a scheme for query optimization that probably needs to be
> avoided in that it would run afoul of a patent held by Oracle...
What does this have to do with Hervé Piedvache's post "Why memory is not
used?" ?
--
greg
-
Clive Page <[EMAIL PROTECTED]> writes:
> SELECT myfunc(mycol) FROM table LIMIT 50 OFFSET 1 ;
> It looks as if OFFSET is implemented just by throwing away the results
> until the OFFSET has been reached.
>
> It would be nice if OFFSET could be implemented in some more efficient
> way.
You
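One commonly suggested alternative (a sketch of the general technique, not
necessarily what the truncated reply went on to say): remember the last key
displayed and page with an indexed WHERE clause instead of OFFSET, so the
skipped rows are never produced at all. The table name, id column, and the
1234 value below are hypothetical:

  SELECT myfunc(mycol)
    FROM mytable
   WHERE id > 1234        -- last id shown on the previous page
   ORDER BY id
   LIMIT 50;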
Does anyone have any data to support arguing for a particular stripe size in
RAID-0? Do large stripe sizes allow drives to stream data more efficiently or
defeat read-ahead?
--
greg
"D. Dante Lorenso" <[EMAIL PROTECTED]> writes:
> Any thoughts? Sure, the PHP function I'm using above 'works', but is it
> the most efficient? I hope I'm not actually pulling all 100,000 records
> across the wire when I only intend to show 10 at a time. See what I'm
> getting at?
I tend to do
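The reply is cut off, but the usual answer is to let the database do the
paging so only the rows actually displayed cross the wire. A sketch with
hypothetical table and column names:

  SELECT *
    FROM some_table
   ORDER BY some_col
   LIMIT 10 OFFSET 20;   -- third page of 10 rows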
Richard van den Berg <[EMAIL PROTECTED]> writes:
> Hi there,
>
> I am quite new to postgresql, and love the explain feature. It enables us to
> predict which SQL queries need to be optimized before we see any problems.
> However, I've run into an issue where explain tells us the costs of a qu
"Shea,Dan [CIS]" <[EMAIL PROTECTED]> writes:
> Indexes:
> "forecastelement_vrwi_idx" btree (valid_time,region_id.wx_element.issue_time)
>
> explain analyze
> SELECT DISTINCT ON (valid_time)
>        to_char(valid_time, 'MMDDHH24MISS') AS valid_time,
>        value
>   from (
>
Christopher Browne <[EMAIL PROTECTED]> writes:
> Ah, but there is a bit of a 'problem' nonetheless; XFS is not
> 'officially supported' as part of the Linux kernel until version 2.6,
> which is still pretty "bleeding edge." Until 2.6 solidifies a bit
> more (aside: based on experiences with 2.6.
Tom Lane <[EMAIL PROTECTED]> writes:
> Jack Coates <[EMAIL PROTECTED]> writes:
> > yup -- here it is. It will probably be a nasty mess after linewrap gets
> > done with it,
>
> yup, sure is :-( If I was familiar with the layout I could probably
> decipher where the line breaks are supposed to b
The other posts about using explicit joins and using stored procedures are
both good points. But I have a few other comments to make:
"Eric Brown" <[EMAIL PROTECTED]> writes:
> WHERE
> w0.wid > 0 AND
> w0.pinyin = 'zheng4' AND
> w0.def_exists = 't' AND
> w0.sequence = 0 AND
>
Sorry, I forgot a key clause there:
Greg Stark <[EMAIL PROTECTED]> writes:
> select w8.wid,
>        w8.variant,
>        w8.num_variants,
>        sum_text(w8.unicode) as unicodes,
>        sum_text(w8.pinyin) as pinyins
> from (
> select wid,vari
On 19 Sep 2014 19:40, "Josh Berkus" wrote:
>
> On 09/19/2014 10:15 AM, Merlin Moncure wrote:
> > On Wed, Sep 17, 2014 at 7:11 PM, Josh Berkus wrote:
> >> This is the core issue with abort-early plans; they depend on our
> >> statistics being extremely accurate, which we know they are not. And if
On Fri, Sep 26, 2014 at 9:06 AM, Simon Riggs wrote:
> If we can at least agree it is a problem, we can try to move forwards.
Well, that's a good question. I don't think we do, and I think the
reason is that we haven't actually pinned down exactly what the
problem is.
The real problem here i
On Thu, Oct 2, 2014 at 8:56 PM, Josh Berkus wrote:
> Yes, it's only intractable if you're wedded to the idea of a tiny,
> fixed-size sample. If we're allowed to sample, say, 1% of the table, we
> can get a MUCH more accurate n_distinct estimate using multiple
> algorithms, of which HLL is one. W
On Wed, Oct 15, 2014 at 7:02 PM, Tomas Vondra wrote:
> If you know the title of the article, it's usually available elsewhere
> on the web - either at the university site, or elsewhere. I found these
> two articles about block-based sampling:
>
>
> http://ranger.uta.edu/~gdas/websitepages/preprint
On Sat, Oct 18, 2014 at 6:01 PM, Tomas Vondra wrote:
> Hmmm. I have 0 experience with handling patents and related issues. Any
> idea how to address that?
Well there's no real way to address it. But to summarize: 1) We should
not go searching for patents, knowing that something is patented
incre