On 7/12/07, mos <[EMAIL PROTECTED]> wrote:
BTW, joins will work faster if you load one or more tables into a MEMORY
table before you do the join.
Well, if your tables are so small that you can load them entirely into
memory, it probably doesn't matter how you code the query.
- Perrin
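The MEMORY-table idea above can be sketched roughly as follows (the prod/price table and column names are assumed from later messages in this thread, not confirmed):

```sql
-- Copy a small lookup table into the MEMORY (HEAP) engine, then join
-- against the in-memory copy; the original table stays untouched.
CREATE TABLE price_mem ENGINE=MEMORY SELECT * FROM price;

SELECT p.prod_num, pm.prod_price
FROM prod AS p
JOIN price_mem AS pm ON pm.prod_id = p.prod_id;
```

Note that MEMORY tables are lost on server restart and default to hash rather than B-tree indexes, so this only pays off for small, hot lookup tables.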
On 7/12/07, Jerry Schwartz <[EMAIL PROTECTED]> wrote:
Since the "rows" is identical except for the last bit, where mine is 4 and
yours is 2, does that mean yours is roughly more efficient by a 2:1 ratio?
For the most part, MySQL will do better with LEFT JOIN than an IN
subquery. You can read a
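The LEFT JOIN-over-IN advice can be sketched with the prod/price tables from this thread (the exact columns are assumptions):

```sql
-- An IN subquery such as:
--   SELECT prod_num FROM prod
--   WHERE prod_id IN (SELECT prod_id FROM price WHERE prod_curr <> 'YEN');
-- can often be expressed as a plain join, which MySQL of this era
-- (before the 5.6 subquery optimizations) usually executes far better:
SELECT DISTINCT prod.prod_num
FROM prod
JOIN price ON price.prod_id = prod.prod_id
WHERE price.prod_curr <> 'YEN';
```

The DISTINCT replaces the implicit deduplication that IN performs when a product has several matching prices.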
On 7/12/07, Jerry Schwartz <[EMAIL PROTECTED]> wrote:
I think that will give me one record for every price that is not Yen, so if
a product has a price in USD and a price in GBP it will show up twice.
That would happen if you removed the 'USD' condition from the first
JOIN. Like I said, I'm no
At 04:26 PM 7/12/2007, Mukul Sabharwal wrote:
Hello,
Is the query_cache a cache for *exact* queries, where "exact" is
determined by the query plan? Or is it word for word?
It is word for word, and is case sensitive.
Query in question, SELECT * FROM tbl WHERE ... LIMIT 100;
as o
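A minimal illustration of the word-for-word matching (assuming the query cache is enabled):

```sql
-- These three statements are logically identical but occupy three
-- separate query-cache entries, because the cache compares the raw
-- query text byte for byte before any parsing happens:
SELECT * FROM tbl WHERE id = 1;
select * from tbl where id = 1;
SELECT  * FROM tbl WHERE id = 1;  -- extra space: still a cache miss
```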
Thanks so much Mario, this solved the problem
Thanks again
Hector
-----Original Message-----
From: Mario Guenterberg [mailto:[EMAIL PROTECTED]
Sent: Monday, July 09, 2007 05:41 PM
To: mysql@lists.mysql.com
Subject: Re: error in login
On Mon, Jul 09, 2007 at 04:33:04PM -0500, Héctor
Hello,
Is the query_cache a cache for *exact* queries, where "exact" is
determined by the query plan? Or is it word for word?
Query in question, SELECT * FROM tbl WHERE ... LIMIT 100;
as opposed to SELECT * FROM tbl WHERE ... LIMIT 10
Apparently the second query is d
My apologies, you were correct: I left out a line from my query, so it would
have given bogus results except for the fortunate fact that every product
having at least one price has a USD price.
The EXPLAIN output didn't change.
Regards,
Jerry Schwartz
The Infoshop by Global Information Incorpora
I never thought of putting an additional condition on the LEFT JOIN. That
seems to do the trick.
My original query, with the sub-SELECT, does work. Both your technique and
mine generate identical results.
I did an EXPLAIN on each technique, but I don't know enough to interpret it.
Since the "row
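For readers following along, the "additional condition on the LEFT JOIN" pattern looks roughly like this (column names are assumptions based on the rest of the thread):

```sql
-- Moving the currency test into the ON clause keeps products that have
-- no matching price row; the IS NULL test then selects exactly those,
-- turning the LEFT JOIN into an anti-join:
SELECT prod.prod_num
FROM prod
LEFT JOIN price
  ON price.prod_id = prod.prod_id AND price.prod_curr = 'YEN'
WHERE price.prod_id IS NULL;
```

Had the currency test been in the WHERE clause instead, it would discard the NULL-extended rows and silently turn the LEFT JOIN back into an inner join.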
Baron,
Thank you for your answer. The information you referred me to is
very interesting and will take some time to study. However most of what
it appears to cover is getting the search data into a table, which is
already accomplished. Basically the query I am trying to come up with
would be
Hi,
John Mancuso wrote:
I am running a fairly large Innodb database on mysql 5.x and would like
to know what would be the best variables to monitor (for Nagios)- for
instance: threads
connected,Innodb_data_pending_reads,Innodb_data_pending_writes, slow
queries etc
This is kind of a bottoml
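The counters John mentions are exposed through SHOW STATUS, which is what a Nagios check would typically poll (a sketch, not a complete monitoring setup; SHOW GLOBAL STATUS needs 5.0.2 or later):

```sql
-- Each returns Variable_name / Value pairs a monitoring script can parse:
SHOW GLOBAL STATUS LIKE 'Threads_connected';
SHOW GLOBAL STATUS LIKE 'Innodb_data_pending%';
SHOW GLOBAL STATUS LIKE 'Slow_queries';
```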
Hi,
If at all possible, normalizing the data is far and away the best solution.
Failing that, you can use regular expressions
RLIKE REPLACE(col, ' ', '|')
or INSTR(). Another solution is fulltext search, but it would have many
limitations depending on what you really need to do. Othe
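Baron's RLIKE trick can be sketched as follows (table and column names here are hypothetical, not from the thread):

```sql
-- If keywords.words holds a space-separated list such as 'foo bar baz',
-- REPLACE turns it into the alternation 'foo|bar|baz', so RLIKE matches
-- any document whose text contains at least one of the words:
SELECT d.id
FROM documents AS d
JOIN keywords  AS k
  ON d.body RLIKE REPLACE(k.words, ' ', '|');
```

Note this matches substrings, not whole words; anchoring with the word-boundary markers [[:<:]] and [[:>:]] from MySQL's regex syntax tightens it at some cost in readability.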
I think that will give me one record for every price that is not Yen, so if
a product has a price in USD and a price in GBP it will show up twice. A
GROUP BY might help, I'll have to chew on that. It seems too simple.
Regards,
Jerry Schwartz
The Infoshop by Global Information Incorporated
195 F
Hello all. I have a query I am trying to implement and I can't wrap my
mind around it somehow. Can anyone help?
I have two tables within a database.
Within the first table, I have a blob field that basically has a large
amount of text.
Within the second table, I have a list of st
On 7/12/07, Jerry Schwartz <[EMAIL PROTECTED]> wrote:
I believe this query
will do it, but can it be redone without the sub-query by using JOINs?
Yes, use a LEFT JOIN.
Would that be more efficient?
Yes.
SELECT prod.prod_num, price.prod_price
FROM prod JOIN price
WHERE prod.prod_id = price
Try this
SELECT prod.prod_num, price.prod_price
FROM prod JOIN price
WHERE prod.prod_id = price.prod_id
AND price.prod_curr !='YEN';
On 7/12/07, Jerry Schwartz <[EMAIL PROTECTED]> wrote:
I've been banging my head against the walls for hours, so I hope somebody
can help. I know similar questi
I've been banging my head against the walls for hours, so I hope somebody
can help. I know similar questions have been answered in the past.
I have two tables, prod and price. Stripping out the non-essential fields,
they are pretty simple:
prod
---
prod_num (int)
prod_id (char 15)
price
I also tried with long_query_time = 5
but got the same error :(
On 7/12/07, thomas Armstrong <[EMAIL PROTECTED]> wrote:
Hi.
Using MySQL 3.23.58, I'm trying to log slow queries, so I added:
log-slow-queries = /usr/local/mysql/log/slow-queries.log
long-query-time = 5
(into my '/etc/my.conf' file)
Hi.
Using MySQL 3.23.58, I'm trying to log slow queries, so I added:
log-slow-queries = /usr/local/mysql/log/slow-queries.log
long-query-time = 5
(into my '/etc/my.conf' file)
However, I get this error message:
unrecognized option `--long-query-time=5'
Is this parameter supported in MySQL 3?
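If memory serves, the 3.23 series did support long_query_time, but numeric server variables had to be assigned through the old set-variable syntax; bare long_query_time=5 lines only became valid option-file syntax in MySQL 4.0. A hedged sketch of what the config might need to look like:

```ini
# /etc/my.cnf sketch for the 3.23 series: both directives belong in the
# [mysqld] section, and numeric variables go through set-variable.
[mysqld]
log-slow-queries = /usr/local/mysql/log/slow-queries.log
set-variable     = long_query_time=5
```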
--
At 03:31 AM 7/12/2007, Olav Mørkrid wrote:
steve
i'm happy to hear your optimism, handling billions of rows sounds
amazing. but i'd like to be fully assured.
a frequent use of the table will perform selects that show:
a) people you have seen
b) people you haven't seen yet
an average user will
steve
i'm happy to hear your optimism, handling billions of rows sounds
amazing. but i'd like to be fully assured.
a frequent use of the table will perform selects that show:
a) people you have seen
b) people you haven't seen yet
an average user will quite quickly build a list of thousands of
At 8:37 AM +0200 7/12/07, Olav Mørkrid wrote:
say you want to keep track of who knows whose name at a party, storing
one table row per instance explodes into n*(n-1) rows (a million rows
for thousand people).
a) can mysql cope with this, or is it more a question of storage and
processing power?
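The n*(n-1) figure comes from counting ordered pairs: with n = 1000 people, 1000 x 999 = 999,000 rows, roughly a million. A schema sketch for such a pairwise table (names are hypothetical):

```sql
-- One row per ordered (viewer, viewed) pair; the composite primary key
-- keeps pairs unique and makes the "people I have seen" lookup an index
-- range scan rather than a table scan:
CREATE TABLE seen (
  viewer_id INT UNSIGNED NOT NULL,
  viewed_id INT UNSIGNED NOT NULL,
  PRIMARY KEY (viewer_id, viewed_id)
);

-- a) people viewer 42 has seen:
SELECT viewed_id FROM seen WHERE viewer_id = 42;

-- b) people viewer 42 has not seen yet (anti-join):
SELECT p.id FROM people AS p
LEFT JOIN seen AS s ON s.viewer_id = 42 AND s.viewed_id = p.id
WHERE s.viewed_id IS NULL AND p.id <> 42;
```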