not looking for NULL rows or anything. In fact, it looks like
> mysql is smart enough to know that you've negated the OUTER JOIN by putting
> conditions on the joined tables in the WHERE clause, and convert them to
> INNER JOINS. Don't rely on that! Use the correct join type.
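To make that point concrete, here is a small runnable sketch (Python/SQLite; the tables are invented, since the original schema isn't shown) of how a WHERE condition on the outer-joined table quietly turns a LEFT JOIN into an inner join, and how moving the condition into the ON clause preserves the outer-join behavior:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE orders    (id INTEGER, customer_id INTEGER);
CREATE TABLE customers (id INTEGER, country TEXT);
INSERT INTO customers VALUES (1, 'IE'), (2, 'US');
INSERT INTO orders    VALUES (10, 1), (11, NULL);
""")

# WHERE on the right-hand table discards the NULL-extended rows,
# so the LEFT JOIN behaves exactly like an INNER JOIN:
where_rows = cur.execute("""
    SELECT o.id FROM orders o
    LEFT JOIN customers c ON c.id = o.customer_id
    WHERE c.country = 'IE'
    ORDER BY o.id
""").fetchall()

# Putting the condition in the ON clause keeps the unmatched rows:
on_rows = cur.execute("""
    SELECT o.id, c.country FROM orders o
    LEFT JOIN customers c ON c.id = o.customer_id AND c.country = 'IE'
    ORDER BY o.id
""").fetchall()

print(where_rows)  # [(10,)]
print(on_rows)     # [(10, 'IE'), (11, None)]
```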
Those queries
Regards,
Gavin Towey
-----Original Message-----
From: Ciaran Lee [mailto:ciaran@gmail.com]
Sent: Tuesday, September 22, 2009 1:32 PM
To: mysql@lists.mysql.com
Subject: query optimization question (my struggle against 'using temporary;
using filesort')
Hi,
I hope this is the right place to ask a question about query optimization.
Background:
I have a database which has events, which occur in places (listings). Places
have addresses, and addresses belong to a city. I can select the latest
event within a particular city very efficiently (less tha
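For reference, the general shape of that "latest event in a city" query can be sketched like this (Python/SQLite; the table and column names are guesses, not the poster's actual schema) — the ORDER BY ... LIMIT 1 is what an index on the event date can serve without a sort:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE cities    (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE addresses (id INTEGER PRIMARY KEY, city_id INTEGER);
CREATE TABLE listings  (id INTEGER PRIMARY KEY, address_id INTEGER);
CREATE TABLE events    (id INTEGER PRIMARY KEY, listing_id INTEGER, starts_at TEXT);
INSERT INTO cities    VALUES (1, 'Dublin');
INSERT INTO addresses VALUES (1, 1);
INSERT INTO listings  VALUES (1, 1);
INSERT INTO events    VALUES (1, 1, '2009-09-01'), (2, 1, '2009-09-21');
""")

# Latest event in one city: walking an index on events(starts_at)
# backwards and stopping at the first match avoids a filesort.
row = cur.execute("""
    SELECT e.id, e.starts_at
    FROM events e
    JOIN listings  l ON l.id = e.listing_id
    JOIN addresses a ON a.id = l.address_id
    WHERE a.city_id = ?
    ORDER BY e.starts_at DESC
    LIMIT 1
""", (1,)).fetchone()
print(row)  # (2, '2009-09-21')
```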
I asked a question yesterday that was quite onerous, so I have been
working out some of the details today, and would like to re-post my
question under a different light.
I have a table called data__ProcessedDataFrames that can grow to be
gigantic, and I need to speed up the query for pulling the d
Hey! I figured out this one myself:
On Apr 19, Adam Wolff wrote:
> * Question 2:
> Why does introducing an extra WHERE clause make things slower?
> If I do this:
> SELECT * FROM contacts WHERE fullname LIKE "j%" AND user_id=1
> ORDER BY fullname LIMIT 10;
>
It's because the index needs t
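One common fix (a sketch, assuming the contacts schema from the thread) is a composite index whose leading column is the equality filter, so both the WHERE and the ORDER BY can be satisfied from the same index without a separate sort:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE contacts (id INTEGER, fullname TEXT, email TEXT, user_id INTEGER)")
cur.executemany("INSERT INTO contacts VALUES (?,?,?,?)", [
    (1, "james", "j@example.com",  1),
    (2, "jane",  "ja@example.com", 1),
    (3, "joe",   "jo@example.com", 2),
])
# Equality column first, sort column second: rows for user_id = 1 are
# already stored in fullname order inside the index.
cur.execute("CREATE INDEX idx_user_name ON contacts (user_id, fullname)")

rows = cur.execute("""
    SELECT fullname FROM contacts
    WHERE fullname LIKE 'j%' AND user_id = 1
    ORDER BY fullname LIMIT 10
""").fetchall()
print(rows)  # [('james',), ('jane',)]
```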
Thanks for the replies. The database is basically read-only at the moment,
so OPTIMIZE TABLE didn't do anything.
When I force the key to be fullname for the second problem, it runs even
worse. It's not practical to create an additional email,fullname index
because in my app I actually have 3! c
On 4/20/06, Adam Wolff <[EMAIL PROTECTED]> wrote:
> How can I optimize the case where I filter on one key but sort on another?
> This is fast:
>SELECT * FROM contacts WHERE fullname LIKE "j%" ORDER BY fullname LIMIT 10;
>
> But this is slow:
>SELECT * FROM contacts WHERE fullname LIKE "j%"
Hi,
2006/4/20, Adam Wolff <[EMAIL PROTECTED]>:
> Hi. I'm new to database optimization and I have a couple of questions.
>
> I have a table like this:
>
> +----+----------+-------+---------+
> | id | fullname | email | user_id |
> +----+----------+-
Hi. I'm new to database optimization and I have a couple of questions.
I have a table like this:
+----+----------+-------+---------+
| id | fullname | email | user_id |
+----+----------+-------+---------+
Where fullname and email are varchar(100) a
[EMAIL PROTECTED] wrote:
Yes, a ranged query should respond faster than a negation. In some cases
you can seriously improve query performance for a negation query if you
split it into two range queries unioned together. Here is a pseudo
example:
This query should be slow due to the table scan
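The pseudo example was cut off above; a runnable approximation (SQLite sketch, made-up table) of splitting a negation into two unioned ranges, which gives the optimizer two index-friendly intervals instead of a `<>` it typically cannot range-scan:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, type INTEGER)")
cur.executemany("INSERT INTO t VALUES (?, ?)", [(i, i % 4) for i in range(1, 9)])
cur.execute("CREATE INDEX idx_type ON t (type)")

# Negation form: usually forces a scan, since <> is not a range.
neg = cur.execute("SELECT id FROM t WHERE type <> 0 ORDER BY id").fetchall()

# Same rows, expressed as two ranges glued together with UNION ALL.
ranged = cur.execute("""
    SELECT id FROM t WHERE type < 0
    UNION ALL
    SELECT id FROM t WHERE type > 0
    ORDER BY id
""").fetchall()

assert neg == ranged
print(ranged)  # [(1,), (2,), (3,), (5,), (6,), (7,)]
```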
Robert DiFalco wrote:
> In a previous database engine I was using, an IN was more optimal than a
> <>. So, for example:
>
> SELECT * FROM table WHERE table.type IN (1,2,3);
>
> Where the possible values of type are 0-3, was appreciably faster than:
>
> SELECT * FROM table WHERE table.type <
>
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
> Sent: Monday, March 13, 2006 6:37 AM
> To: Robert DiFalco
> Cc: mysql@lists.mysql.com
> Subject: Re: Query Optimization Question
>
>
>
>
> "Robert DiFalco" <[EMAIL PROTECTED]>
Shawn,
Any performance gains for specifying "type > 0" over "type <> 0"?
R.
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Sent: Monday, March 13, 2006 6:37 AM
To: Robert DiFalco
Cc: mysql@lists.mysql.com
Subject: Re
"Robert DiFalco" <[EMAIL PROTECTED]> wrote on 03/11/2006 12:43:43 PM:
> In a previous database engine I was using, an IN was more optimal than a
> <>. So, for example:
>
> SELECT * FROM table WHERE table.type IN (1,2,3);
>
> Where the possible values of type are 0-3, was appreciably faster th
Robert DiFalco wrote:
In a previous database engine I was using, an IN was more optimal than a
<>. So, for example:
SELECT * FROM table WHERE table.type IN (1,2,3);
Where the possible values of type are 0-3, was appreciably faster than:
SELECT * FROM table WHERE table.type <> 0;
I
In a previous database engine I was using, an IN was more optimal than a
<>. So, for example:
SELECT * FROM table WHERE table.type IN (1,2,3);
Where the possible values of type are 0-3, was appreciably faster than:
SELECT * FROM table WHERE table.type <> 0;
I've been playing with the
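A quick sanity check that the two predicates agree when type really is confined to 0-3 (SQLite sketch; whether IN actually beats <> depends on the engine's range optimizer):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, type INTEGER)")
cur.executemany("INSERT INTO items VALUES (?, ?)", [(i, i % 4) for i in range(1, 9)])

# IN gives the optimizer an explicit list of values to probe;
# <> forces it to reason about everything-except-one-value.
in_rows  = cur.execute("SELECT id FROM items WHERE type IN (1,2,3) ORDER BY id").fetchall()
neq_rows = cur.execute("SELECT id FROM items WHERE type <> 0  ORDER BY id").fetchall()

assert in_rows == neq_rows  # identical results when type is confined to 0-3
print(in_rows)  # [(1,), (2,), (3,), (5,), (6,), (7,)]
```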
0/04/2004 10:33:22 AM:
> Query optimization question
>
> I am selecting from a single table but it has a lot of rows and it has
> a very involved calculation. What I really want to do is
> FIRST restrict the number of rows so that the big calculation is only
> performed
Gerald Taylor wrote:
Query optimization question
I am selecting from a single table but it has a lot of rows and it
has a very involved calculation. What I really want to do is
FIRST restrict the number of rows so that the big calculation is only
performed on the ones that are within 3
Query optimization question
I am selecting from a single table but it has a lot of rows and it has
a very involved calculation. What I really want to do is
FIRST restrict the number of rows so that the big calculation is only
performed on the ones that are within 3 degrees.
Using 4.0.20
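One standard shape for that (hedged sketch; the table, columns, and the squared-distance stand-in for "the big calculation" are all invented) is to put a cheap, index-friendly bounding box in the WHERE clause so the expensive expression is only evaluated for the rows that survive the 3-degree restriction:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE places (id INTEGER PRIMARY KEY, lat REAL, lon REAL)")
cur.executemany("INSERT INTO places VALUES (?,?,?)",
                [(1, 10.0, 10.0), (2, 11.5, 10.5), (3, 50.0, 50.0)])

lat, lon = 10.0, 10.0
# The cheap rectangular pre-filter runs first; the "big calculation"
# (here a stand-in squared distance) only runs on rows inside the box.
rows = cur.execute("""
    SELECT id,
           (lat - ?) * (lat - ?) + (lon - ?) * (lon - ?) AS dist2
    FROM places
    WHERE lat BETWEEN ? - 3 AND ? + 3
      AND lon BETWEEN ? - 3 AND ? + 3
    ORDER BY id
""", (lat, lat, lon, lon, lat, lat, lon, lon)).fetchall()
print(rows)  # [(1, 0.0), (2, 2.5)] -- place 3 never reaches the calculation
```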
Hi,
I currently store some boolean values in TINYINT(1) unsigned fields. When
running PROCEDURE ANALYSE() on my tables, I am told that the best data
type for these fields would be an ENUM('0','1'). Is that true? How is that
faster than tinyints?
Also, does specifying the length of an integer-type
Using 3.23.44 compiled from source from mysql.com on Linux Kernel 2.4.16
from RedHat distro.
I'm using a cross table to support many-to-many relationship between two
tables.
EXPLAIN shows that two of the joins aren't optimized.
All columns used in the two straight joins are of the same type and
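For a many-to-many cross table, the usual cure when EXPLAIN shows an unoptimized join is an index for each direction of traversal: the composite primary key covers one side, and a second index with the columns reversed covers the other. A sketch with invented tables:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE books   (id INTEGER PRIMARY KEY, title TEXT);
-- cross table: the PK serves author->book lookups,
-- the extra index serves book->author lookups
CREATE TABLE author_book (
    author_id INTEGER,
    book_id   INTEGER,
    PRIMARY KEY (author_id, book_id)
);
CREATE INDEX idx_book_author ON author_book (book_id, author_id);
INSERT INTO authors VALUES (1, 'Knuth');
INSERT INTO books   VALUES (1, 'TAOCP'), (2, 'Concrete Mathematics');
INSERT INTO author_book VALUES (1, 1), (1, 2);
""")

titles = cur.execute("""
    SELECT b.title
    FROM authors a
    JOIN author_book ab ON ab.author_id = a.id
    JOIN books b        ON b.id = ab.book_id
    WHERE a.name = 'Knuth'
    ORDER BY b.title
""").fetchall()
print(titles)  # [('Concrete Mathematics',), ('TAOCP',)]
```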
On Tue, Sep 11, 2001 at 03:17:47PM -0500, Weslee Bilodeau wrote:
> > > They average around 500 connections/second at any given time, two
> > > have a master/slave setup.
> >
> > Connections/sec or Queries/sec? That's a lot of connections per
> > second? Can you use persistent connections? It wo
> > They average around 500 connections/second at any given time, two
> > have a master/slave setup.
>
> Connections/sec or Queries/sec? That's a lot of connections per
> second? Can you use persistent connections? It would save a lot of
> overhead.
Half and half, at the moment .. We had a pro
On Mon, Sep 10, 2001 at 04:54:57PM -0500, Weslee Bilodeau wrote:
>
> Basic (maybe?) question on some optimal variables for MySQLd's my.cnf
> configuration ..
>
> Right now, I have three MySQL servers, each with 2 GB ram, dual-CPU
> P3 1ghz.
Nice. :-)
> They average around 500 connections/second
Basic (maybe?) question on some optimal variables for MySQLd's my.cnf
configuration ..
Right now, I have three MySQL servers, each with 2 GB RAM, dual-CPU P3 1GHz.
They average around 500 connections/second at any given time, two have a
master/slave setup.
MySQL is pretty much the only thing running
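For a 3.23-era server with 2 GB of RAM, a starting point often looked roughly like the sample below (adapted from the my-huge.cnf sample shipped with 3.23; every value here is illustrative, not tuned for this workload — and at 500 connections/second, persistent connections would matter more than any of these numbers):

```ini
# my.cnf sketch -- MySQL 3.23-era syntax; values are illustrative only
[mysqld]
set-variable = key_buffer=384M       # MyISAM index cache; usually the biggest win
set-variable = table_cache=512       # open table handles
set-variable = sort_buffer=2M        # allocated per connection, keep modest
set-variable = thread_cache=64       # reuse threads at high connect rates
set-variable = max_connections=600
```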
Subject: Re: Optimization question and possible bug
> In the database i have a merge-table that merges 10 myisam tables.
> Sometimes the loadaverage of the system raise above 50 and the
> long-query-log is
> filled with some query accessing the merge table. This happens 2-3
times
Stefan Pinkert wrote:
> Examine that query with explain select... show that the indexes are not
> used. The keyfields are unsigned tiny int where every bit has a different
> meaning. Any clue how I can rewrite this query in a way where the indexes
> are
> used?
If MySQL thinks it will be faster
Hi,
I'm using MySQL 3.23.37 on a Linux system with an average of about
20 queries/second.
First of all, I have a little question about how I can optimize a query.
The query is like that:
select * from tbl where key_field1 & constant1 and key_field2 &
constant2
Examine that query with explain select.
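Since `col & constant` is not a range, a B-tree index on the column generally cannot be used for it. One workaround (sketch; the table, field, and mask are invented) is to enumerate the small set of flag values that satisfy the mask and rewrite the bit test as an explicit IN list, which the optimizer can treat as index lookups:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE tbl (id INTEGER PRIMARY KEY, flags INTEGER)")
cur.executemany("INSERT INTO tbl VALUES (?, ?)", [(i, i % 8) for i in range(1, 17)])
cur.execute("CREATE INDEX idx_flags ON tbl (flags)")

MASK = 0b010  # hypothetical "bit 1 set" test

# Bitwise form: the predicate defeats the index.
slow = cur.execute("SELECT id FROM tbl WHERE flags & ? ORDER BY id", (MASK,)).fetchall()

# Enumerate the values in the column's small domain that match the mask,
# then let the index handle a plain IN list instead.
values = [v for v in range(8) if v & MASK]
marks = ",".join("?" * len(values))
fast = cur.execute(f"SELECT id FROM tbl WHERE flags IN ({marks}) ORDER BY id",
                   values).fetchall()

assert slow == fast  # same rows either way
print(fast)
```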