pgpool for mysql? Or, a gateway that redirects select queries to a host of servers?

2006-04-23 Thread Philip Hallstrom

Hi all -

I'm looking for a way to have a group of mysql servers that are all slaves 
to a single master.  That part I can do.


However, what I'd like is a way to put another server in front of the 
entire cluster and have that pretend to be the mysql database, except that 
it would simply redirect all update/insert/deletes to the master and all 
selects to one of the slaves (randomly or weighted or whatever).


There's an app for postgres called pgpool that does this, but I haven't 
found anything for mysql yet.


http://pgpool.projects.postgresql.org/

I'd rather not get into multi-master if I can help it and I don't think I 
need it.  And I don't want to get into ndb cluster either...  Yes, I'm 
picky :-)


It's also not as simple as changing my front end: we're using Rails, and 
from what I can tell Rails can't do this unless I break all my models 
into a ReadModel and a WriteModel, which defeats the whole purpose.
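For what it's worth, the dispatch such a gateway would do is simple to sketch. Here's a minimal, hypothetical example in Python (the class name and host names are all made up, and real routing would also have to handle transactions and replication lag):

```python
import random

class QueryRouter:
    """Toy dispatcher: writes go to the master, reads to a random slave.

    The host names below are hypothetical placeholders."""

    WRITE_VERBS = ("insert", "update", "delete", "replace")

    def __init__(self, master, slaves):
        self.master = master
        self.slaves = slaves

    def pick_host(self, sql):
        # Route on the first word of the statement.
        verb = sql.lstrip().split(None, 1)[0].lower()
        if verb in self.WRITE_VERBS:
            return self.master
        # SELECTs (and everything else) go to a randomly chosen slave.
        return random.choice(self.slaves)

router = QueryRouter("master.example.com",
                     ["slave1.example.com", "slave2.example.com"])
```

A weighted choice instead of `random.choice` would cover the "weighted or whatever" case.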


Any ideas?

Thanks!

-philip




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Date comparisons

2006-07-14 Thread Philip Hallstrom
I've found something that works (in MySQL 5, anyway), but I don't know 
whether it's accepted practice.


If I want to find all records with a date in, say, March 2006, it works if I 
use "datefield like '2006-03%'" because it's a string.


This seems kind of obvious and a lot tidier than doing "datefield >= 
'2006-03-01' and datefield <= '2006-03-31'", but are there pitfalls I should 
know about?


Speed would be my first thought... I'd time them.  I'd also prepend 
'explain' as I'm guessing the first won't use an index and the second will 
(assuming you have indexes).
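To illustrate the difference (SQLite standing in for MySQL here, with a throwaway table): both forms return the same rows, but the range form compares the column directly, so on a real DATE column MySQL can use an index, whereas the LIKE form forces each date to be converted to a string first:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (datefield TEXT)")
conn.execute("CREATE INDEX idx_date ON t (datefield)")
conn.executemany("INSERT INTO t VALUES (?)",
                 [("2006-02-28",), ("2006-03-01",),
                  ("2006-03-31",), ("2006-04-01",)])

# String-pattern version: on a DATE column, MySQL has to stringify
# every value before matching, so the index is skipped.
like_rows = conn.execute(
    "SELECT datefield FROM t WHERE datefield LIKE '2006-03%' "
    "ORDER BY datefield").fetchall()

# Range version: compares the column as-is, so an index applies.
range_rows = conn.execute(
    "SELECT datefield FROM t WHERE datefield >= '2006-03-01' "
    "AND datefield <= '2006-03-31' ORDER BY datefield").fetchall()
```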


-p




Re: Monitoring Slow Queries

2006-08-02 Thread Philip Hallstrom

Though I understand very well that it has been discussed lots of times
before, I don't have time to browse through the previous archives
and dig out the stuff I need.  So, guys, I would be thankful if you
could give me your valuable advice right now.

I just need to know

1)  What parameters I need to set in my.cnf to
log slow queries so that they stick out conspicuously and get noticed,
and

2) How I can find out from the log that MySQL
creates as a result of 1) as to
which queries are running slow.


mysqlsla is kind of handy...

http://hackmysql.com/mysqlsla

mysqlsla analyzes general, slow, and raw MySQL statement logs. Formerly 
called mysqlprofile, the new name reflects what the script really does: 
combined MySQL Statement Log Analysis. mysqlsla can read multiple MySQL 
general and slow logs (and logs containing raw SQL statements), combine 
them, then run various analyses on all the queries.
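As for part 1), the my.cnf settings for a 5.0-era server look something like this (option names changed in later versions, so double-check the manual for yours; the path is just an example):

```ini
[mysqld]
# log any query that takes longer than N seconds
long_query_time = 2
# 4.x/5.0-style option; 5.1+ renamed this to slow_query_log / slow_query_log_file
log-slow-queries = /var/log/mysql/mysql-slow.log
# also log queries that use no index at all
log-queries-not-using-indexes
```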





Re: Database design question

2006-08-07 Thread Philip Hallstrom
I want to design a database for lots of users.  Each user will be managing 
their own messages.  Does it make sense to create a table for each user after 
they've registered?
Or should I just create one MESSAGES table and store messages there keyed off 
of their user_id?


If I create a table for each user (I can potentially have hundreds of 
thousands of users), will MySQL be able to handle this?
If I just have one table, I could potentially have millions of records in one 
table.  Will MySQL be able to handle this?


My gut feel is that MySQL will be fine with one table and millions of 
records.


One table, with a user_id field.  If you're worried about searching 
through millions of records, perhaps you could have archival tables that 
don't normally get searched and move messages from one to the other after 
they get "old"...


-p




Re: Incrementing using Max(Field) ?

2006-08-15 Thread Philip Hallstrom

On 8/15/06, Visolve DB TEAM <[EMAIL PROTECTED]> wrote:


Hello William

Try the below Query to insert next maximum value of  the field into same
table

INSERT  INTO  Sample(id)  SELECT MAX(id)+1 FROM  Sample



Wooww it works :D ... i didn't know that...  great !

Wouldn't this cause a problem if more than one person were trying to insert a 
record into that table at the same time?  Couldn't that cause the ID to be the 
same for 2 records?


Yes.  It would.
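The safe fix is to let the database hand out the id itself: AUTO_INCREMENT in MySQL. SQLite's `INTEGER PRIMARY KEY AUTOINCREMENT` behaves the same way for this sketch, so two concurrent inserts can never collide:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Sample (id INTEGER PRIMARY KEY AUTOINCREMENT)")

# Two inserts: each is assigned its own id by the engine,
# with no SELECT MAX(id)+1 race window in between.
c1 = conn.execute("INSERT INTO Sample DEFAULT VALUES")
c2 = conn.execute("INSERT INTO Sample DEFAULT VALUES")
ids = (c1.lastrowid, c2.lastrowid)
```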




Re: MySQL Cluster 5.0.24 (Import) Slow

2006-08-21 Thread Philip Hallstrom

Hi everybody

I am running Linux AS-4 with the 5.0.24 max version of MySQL Cluster.  I am able
to create all the tables as ndb, but when it comes to the import I am not able
to import 20 lakhs of records for a table.  Please help to solve the problem.


20 lakhs => 2 million rows?

My memory is that cluster can only do operations in batches of about 
30,000 rows at a time.  So, if that import is using extended inserts 
(typical if it's mysqldump output) it won't work.  You need to insert 
them in batches of no more than 30,000.
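A sketch of the chunking (the 30,000 figure is from memory, so treat the batch size as tunable):

```python
def chunked(rows, size=30000):
    """Yield successive batches of at most `size` rows."""
    for start in range(0, len(rows), size):
        yield rows[start:start + size]

# Example: 2,000,000 rows split into batches a cluster import can swallow.
# Each batch would become one multi-row INSERT statement.
rows = list(range(2_000_000))
batches = list(chunked(rows))
```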


-philip




Re: Zip Code & Distance

2006-08-29 Thread Philip Hallstrom
This is not necessarily SQL Specific, but it will impact my MySQL 
application.


Using the zip code, how can you determine what's in a (say) 50 mile radius. 
In other words, if I'm looking for schools within a 50 mile radius of my 
location (Zip Code 33881), How can I determine this by the zip code?  I have 
seen lots of search engines that do this, but don't know how they determine 
the distance.  I figured that I can use the first 4 digits as a match, then 
the first 3 digits which will be a little further away, but I don't know how 
accurate that would be.  Does anyone have any ideas?


Don't use the zip code itself.  It might work for some areas, but it 
certainly doesn't work in western Washington.  It's just cut up too 
weird.


We bought a zip code database from um... www.zipcodedownload.com which has 
a big list of cities, zips, and lat/long coordinates.


Once you have the lat/long you can do the math easily...

Not sure if they have a zip code only version... but the one we bought was 
like $30 and works just fine...
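"Do the math" here usually means the haversine formula. A standalone version (assuming the mean Earth radius, which is fine for a 50-mile store locator):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two lat/long points."""
    EARTH_RADIUS_KM = 6371.0
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = (sin(dlat / 2) ** 2
         + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))
```

In practice you'd first narrow the candidates with a cheap bounding-box WHERE on the lat/long columns, then apply the formula only to the survivors.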


-philip




Re: query cache question

2006-08-30 Thread Philip Hallstrom


What else would prevent queries from getting into the cache?



Anything with NOW() in it, or any of its related functions...
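One common workaround is to compute the timestamp in the application and send it as a literal, so the query text is byte-for-byte stable and cacheable. A sketch (the table and column here are hypothetical, and rounding to the minute is what gives identical query strings long enough for cache hits):

```python
from datetime import datetime

def cacheable_recent_query(now=None):
    """Build a query with a literal timestamp instead of NOW().

    Rounding to the minute means the generated SQL is identical for up
    to a minute at a time, which lets the query cache match it.
    (Shown with inline interpolation for clarity; escape properly in
    real code.)"""
    now = now or datetime.now()
    rounded = now.replace(second=0, microsecond=0)
    return ("SELECT * FROM posts WHERE created_at > '%s'"
            % rounded.strftime("%Y-%m-%d %H:%M:%S"))
```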




Re: Mass E-mail Options

2006-08-30 Thread Philip Hallstrom
Again, I know this is not necessarily a MySQL specific question, and sorry 
for getting off target, but this is a pretty smart and experienced bunch, and 
I'm sure that someone out there will be able to help me.


We have a web application in which we have a mass e-mail function.  It allows 
the user to compose a simple text-only e-mail and send to everyone in our 
database (MySQL).  Problem is that our e-mail server seems to be getting 
overloaded, and it just shuts down, causing an error.  We're using ArgoSoft 
Mail server, which works very well for our normal needs.  We do not want to 
change to Microsoft's Exchange Server.  I hear it's expensive, and difficult 
to set up and get working properly.


I was wondering if anyone knows of any alternative mass e-mail options.  I 
don't want to use servers that spammers use, because first, and foremost, 
this is NOT spam, and second, some recipients may have these servers black 
listed.  What other alternatives are there?


http://www.lyris.com/products/mailengine/




Re: Mass E-mail Options

2006-08-30 Thread Philip Hallstrom

I would also try to unorder the email addresses, so that large groups of
yahoo and aol etc emails do not hit the outbound queue at the same time.
They tend to look down on that activity, and throttle you back, making it
take even longer to get your messages out.


You might put special-case conditions on specific domains, but actually 
you're much better off ordering by domain, because your server will end up 
sending _ONE_ message addressed to however many recipients there are at 
that domain.


But some providers will block based on number of recipients per message... 
so don't go crazy sending one message to 50,000 aol addresses, etc...
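The grouping-plus-cap idea, sketched (the 50-recipients-per-message cap is an arbitrary example, not a known provider limit):

```python
from collections import defaultdict

def batch_by_domain(addresses, max_rcpt=50):
    """Group addresses by domain, then split each group into
    batches of at most max_rcpt recipients per message."""
    by_domain = defaultdict(list)
    for addr in addresses:
        by_domain[addr.rsplit("@", 1)[1].lower()].append(addr)
    batches = []
    for domain, rcpts in by_domain.items():
        for i in range(0, len(rcpts), max_rcpt):
            batches.append((domain, rcpts[i:i + max_rcpt]))
    return batches
```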





Re: query to find duplicate rows

2006-09-12 Thread Philip Hallstrom

Hi all, a tough query problem for me...

I have a table with 2 columns that matter: url and id

If url and id are the same in 2 rows, then that's no good (bad data).

I need to find all the rows that are duplicates. I can't think of how
to approach the sql for this.. any pointers?



Select COUNT(*) as num_entries, url from table WHERE num_entries>1 GROUP BY 
url


Untested, but the concept should work for you.


That would mark rows that have different id's, but the same url...

SELECT id, url, COUNT(*) AS num
FROM table
GROUP BY id, url
HAVING COUNT(*) > 1

That flags only the (id, url) pairs that appear more than once.  Expensive 
if you have a lot of rows, but should work I think... if my memory is 
right :)





Re: query to find duplicate rows

2006-09-13 Thread Philip Hallstrom

Philip Hallstrom wrote:

Hi all, a tough query problem for me...

I have a table with 2 columns that matter: url and id

If url and id are the same in 2 rows, then that's no good (bad data).

I need to find all the rows that are duplicates. I can't think of how
to approach the sql for this.. any pointers?



Select COUNT(*) as num_entries, url from table WHERE num_entries>1 GROUP 
BY url


Untested, but the concept should work for you.


That would mark rows that have different id's, but the same url...


No it wouldn't, he's grouping by the url.


Yes, it would.  As your example shows.  Parent said "If url and id are the 
same in 2 rows".  In your example you have 1/google and 2/google and that 
is showing up in your final query.


Which parent doesn't want.. unless I'm mis-reading his request...



mysql> create table a(a int, b text);
Query OK, 0 rows affected (0.02 sec)

mysql> insert into a(a, b) values (1, 'google.com');
Query OK, 1 row affected (0.00 sec)

mysql> insert into a(a, b) values (2, 'google.com');
Query OK, 1 row affected (0.00 sec)

mysql> insert into a(a, b) values (3, 'yahoo.com');
Query OK, 1 row affected (0.00 sec)

mysql> select count(*) as num, b from a group by b;
+-----+------------+
| num | b          |
+-----+------------+
|   2 | google.com |
|   1 | yahoo.com  |
+-----+------------+
2 rows in set (0.00 sec)


The final query should be:

mysql> select count(*) as num, b from a group by b having count(*) > 1;
+-----+------------+
| num | b          |
+-----+------------+
|   2 | google.com |
+-----+------------+
1 row in set (0.00 sec)






Re: Running a Staging and Development DB on the same server?

2006-09-29 Thread Philip Hallstrom

I'm setting up our one server for staging and development.  I need to set
up MySQL to have 2 copies of the same database on the same server.  Could
anyone direct me to some documentation or tell me how to do this?


Is there a reason you can't have one instance of the mysql server and 
simply have "foo_staging" and "foo_development" databases?


Unless you are tweaking server parameters, this should work just fine...

-philip




Re: MYSQL: Unlimited Forums, Scalablity Issues, advice please?

2006-02-27 Thread Philip Hallstrom

Hi there,
I am in the midst of creating a forums hosting site.

1. It has to be highly scalable, so I doubt a single MYSQL db with TONS of 
"subforums that simulate full forums" would do.


2. Also regarding scalablity, I hope to "Add capacity" as and when its 
needed. So i'll have one server running initially, and when it gets too 
crowded, i'll get two servers etc.


3. I will be providing a user with a "dashboard" that allows him to view all 
his subscribed posts across ALL forums. So lets say a user is a member of 25 
forums, this dashboard view will allow the user to view all his posts across 
all the forums.


Does anyone have advice that could point me in the right direction?
I have solved the scalability issue WITHIN a forum (the code can handle a 
million+ posts easily), but I haven't solved the issue of scaling MULTIPLE 
separate forums.


What about having a single write master with many read-only slaves?  Then 
modify your code so that posts go to the master and everything else 
happens on the slaves?


Also, does there exist any php package that helps ease the process of 
"deciding which Server/database to connect to"?
For example, someone accesses FORUM A, so the script would automatically know 
to direct all queries to the DB in SERVER 1 etc, and if i try to access FORUM 
J, it would connect to SERVER 2 etc. I could easily hard code this, but I was 
thinking "what if internal IP addresses change, or I decide to migrate a busy 
forum to a server of its own etc", so perhaps there is a better available 
packaged solution designed for this task.


Create a table on a "central" server that contains this mapping.  This 
server could also hold the login tables as well...
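That mapping table can be tiny. A sketch (SQLite standing in; the forum and host names are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE forum_shard (forum TEXT PRIMARY KEY, db_host TEXT)")
conn.executemany("INSERT INTO forum_shard VALUES (?, ?)",
                 [("forum_a", "server1.internal"),
                  ("forum_j", "server2.internal")])

def host_for(forum):
    # One indexed lookup on the central server; moving a busy forum
    # to its own box is then just an UPDATE on this row.
    row = conn.execute(
        "SELECT db_host FROM forum_shard WHERE forum = ?", (forum,)).fetchone()
    return row[0] if row else None
```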


Just a thought.




Re: performance between InnoDB vs MyISAM

2006-03-08 Thread Philip Hallstrom
Just want to share and confirm my findings on a performance issue I've been 
experiencing.


My database is strictly non-transactional, but it's got about 200,000 records 
in this particular table. The table has a primary index, and 2 integers - one 
for the date and the other for the time. Among the other fields there's a 
text field which usually stores a huge amount of text.


One thing I notice, is that under MyISAM running the following sql:
select id from mytable where id in (#subselect to extract a set of ids)
order by mydate desc, mytime desc

The time taken is really bad, like > 90 secs. But in InnoDB it is usually <8 
secs. The time difference is too crazy to ignore.


Can anyone explain this? Is there something in InnoDB that creates the magic?


InnoDB clusters the table data around the primary key... which is what 
you're searching on.  So your query can go right to the spot and 
read the whole row, as opposed to MyISAM, which would need to look it up in 
the index to find the position of the row, then go read the table itself 
to get the row.


among other reasons I suppose.




Re: Replication and AUTO_INCREMENT; is it safe?

2007-10-23 Thread Philip Hallstrom

Reading How AUTO_INCREMENT Handling Works in InnoDB[1] makes me
wonder how is it possible to replicate AUTO_INCREMENTed value to slaves.

According to the doc,

"If you specify an AUTO_INCREMENT column for an InnoDB table, the
table handle in the InnoDB data dictionary contains a special counter
called the auto-increment counter that is used in assigning new values
for the column. This counter is stored only in main memory, not on
disk."

Let's say there are two servers, A and B. A replicates its data to B, the slave.
A and B have a table that looks like this (column 'id' is the auto_increment field):

id value
1  a
2  b
3  c
4  d

Suppose that after "delete from table where id = 4" runs and mysqld is
restarted on server B, "insert into table (value) values ('e')" is executed on server A.



Why would you delete data from the slave?




In this case, because A's internal counter is 4, table on A would be
1 a
2 b
3 c
5 e

But B's would be different because restarting mysqld flushed InnoDB's
internal counter.
1 a
2 b
3 c
4 e

Is this correct?
or MySQL is smart enough to handle this problem?

Thanks.

[1]http://dev.mysql.com/doc/refman/4.1/en/innodb-auto-increment-handling.html








Re: short e-mail survey on load-balance technology used

2007-11-01 Thread Philip Hallstrom

1) Do you use a load-balancer for your MySQL Deployment?


Yes. Well, up till just a little while ago...


2) What load-balancer (product name) do you use for your MySQL Deployment?


We used to run MySQL's NDB Cluster behind a hardware load balancer (don't 
remember the name, but it's not that relevant).



3) Do you use the default mechanisms of the load-balancer to negotiate traffic
to your MySQL deployment, or have you created your own custom mechanism for the
load-balancer to use?


Default.


4) (a) Other than your current load-balancer, have you tried to use any other
load-balancers with success or failure?
(b) Or is there another load-balancer you are looking into possibly using?
(Some Examples: MySQL Proxy, Continuent, Sequoia (C-JDBC), Linux Virtual Server,
F5 Networks BigIP, EddieDNS, or even Heartbeat, Pen, Python Director, 
Distributor)


We tried Continuent about two years ago and had bad experiences with it. 
It was a three node setup and as long as we didn't write to it it was 
fine, but otherwise it would regularly hang.  Frequently we had to reboot 
everything to get it unstuck.  We never put it into production.


But that was two years ago and I don't know anything about it since.


5) How do you primarily use the load balancer?
 1) load-balance read-only SQL queries
 2) load-balance read-write SQL queries
 3) other? (like some custom setup)


2.


6) When it comes to scaling MySQL and the use of load-balancing, what do you
feel is a technology that is missing that the MySQL community should create?
(I.e. perhaps some new technical item in the MySQL database server software, or
something on the load-balancing technology side)


Something similar to pgpool that can automatically redirect writes to the 
master and reads to a pool of slaves.


http://pgpool.projects.postgresql.org/

If that already exists and you know about it, link please! :)




Re: short e-mail survey on load-balance technology used

2007-11-01 Thread Philip Hallstrom

pgpool is a connection pool server for PostgreSQL. pgpool runs between
PostgreSQL's clients(front ends) and servers(back ends).

Well there are a few solutions out there.

The first comparable product for MySQL is MySQL Proxy.
 http://forge.mysql.com/wiki/MySQL_Proxy


Almost, but the FAQ says it's not quite there (auto-splitting 
reads/writes).


http://forge.mysql.com/wiki/MySQL_Proxy_FAQ#In_load_balancing.2C_how_can_I_separate_reads_from_writes.3F

Will be nice once it is though!

-philip




Re: Select rows containing identical values in two columns

2007-11-16 Thread Philip Hallstrom
I have a table that has a column with the id of the person that created the 
row.  In another column in the same row there is a column with the id of the 
person that modified that row.


Is there a way to write a SELECT statement that will return all the rows 
where the value in the creation column equals the value in the modification 
column?  I don't want to specify a specific id in either of the columns.


SELECT * FROM your_table WHERE created_by_id = updated_by_id;

?




Re: "show slave staus"

2007-11-27 Thread Philip Hallstrom

mysql> show slave status

returns a number of fields, with information on the status of the slave. is
there a way to only return the field(s) i'm interested in...

i thought that i had figured this out, but i can't recall, and it's been
awhile since i've played with this!


My slave server isn't nearby, but...

show status like '%threads%';

will return just the results from "show status" that match on "threads".

Might work for slave status as well.
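If the LIKE form turns out not to apply to SHOW SLAVE STATUS, the usual fallback is to take the vertical (\G) output and grep for the field you want. Simulated here with printf so it's self-contained; in real life the first command would be `mysql -e 'SHOW SLAVE STATUS\G'`:

```shell
# Pull one field out of the vertical SHOW SLAVE STATUS output.
printf 'Slave_IO_Running: Yes\nSeconds_Behind_Master: 0\n' \
  | grep 'Seconds_Behind_Master'
```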

-philip




Re: Easiest Way To Replicate DB

2007-01-25 Thread Philip Hallstrom

I am doing tests so I want to easy take my DB and make a full copy of
it into a test db everytime I want to test something against the
non-produciton version of DB.  What is the easiest way to do this.

So I have a DB called "backlog" and I want to copy its structure and
data into "backlog_test" with ease :-).  Any sexy suggestions? :-)


This is what I do...

mysqldump -v --lock-tables=false backlog | mysql backlog_test

Not practical if your dataset is huge, but if it's not that big, works 
great.


-philip




Re: implementation of SELECT ... ORDER BY RAND() LIMIT 1

2007-02-07 Thread Philip Hallstrom

I have a question about the combination of RAND and LIMIT 1. If I have
a query like:

SELECT  ... ORDER BY RAND() LIMIT 1

with the ... replaced with a normal query on one table. How is this
implemented? Is this optimized for the fact that it only needs one
entry?


Try prefixing your query with "EXPLAIN" and see what it says it's going to 
do.  Pretty sure it's going to look at *every* row in the table, compute a 
random value, sort it, then return the first one.


So, for a table with a good number of rows, the above is going to be 
horrifically inefficient.  It would be a lot faster to do something like:


rowcount = select count(*) from table
random_value = something between 0 and rowcount - 1
select ... LIMIT 1 OFFSET random_value
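Those three lines, made concrete with SQLite standing in for MySQL (the table and data are throwaway; note the count can go stale if rows are deleted between the two statements):

```python
import random
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (v TEXT)")
conn.executemany("INSERT INTO t VALUES (?)",
                 [("a",), ("b",), ("c",), ("d",)])

# 1. rowcount = select count(*) from table
rowcount = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
# 2. random_value = something between 0 and rowcount - 1
random_value = random.randint(0, rowcount - 1)
# 3. select ... LIMIT 1 OFFSET random_value
row = conn.execute("SELECT v FROM t LIMIT 1 OFFSET ?",
                   (random_value,)).fetchone()
```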

-philip



And what about when there is a combination of tables

SELECT a.a, b.b FROM a,b WHERE a.b = b.id ORDER BY RAND() LIMIT 1

And in the case of

SELECT a.a, b.b FROM a LEFT JOIN b ON a.b = b.id ORDER BY RAND() LIMIT 1

Some say that especially in the last two cases, it is faster to just
retrieve the entire list and then select randomly.

And what if the case is that the limit is larger than 1, but smaller
than the entire table?

I am asking because we have various of these queries in our code and
serious issues with speed, and I was wondering whether I am assuming
optimization in the mysql code where they don't actually exist.

Any help on this would be much appreciated.

Regards,

Jos
http://www.cantr.net








Re: INSERT ... ON DUPLICATE KEY UPDATE seems so close and yet so far...

2007-04-27 Thread Philip Hallstrom

Maybe this is some SQL standard implementation and that's why it is what it
is, but to me it seems completely retarded that you have to explicitly call
out the columns...

http://dev.mysql.com/doc/refman/5.0/en/insert-on-duplicate.html

Why doesn't it work in a more convenient and sane way?!

So instead of this cumbersome incantation that makes you want to rip out
your hair and punch your cube-mate dead in the nose:

insert into table1 (c1,c2,c3) values (v1,v2,v3) on duplicate key
update c1=v1, c2=v2, c3=v3;

Just allow a more sane and logical:

insert into table1 (c1,c2,c3) values (v1,v2,v3) on duplicate key update;


Because if you wanted that, you'd use REPLACE, which is MySQL specific... 
which is okay, since it's MySQL you're using, I guess.


And in my case, I have a stats table... I either want to insert a row with 
hits=1, or I want to set hits=hits+1.  So I need to specify what I want.  And I 
don't want to change *any* of the other columns (such as the date for the 
hit or the id, etc.)
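The hit-counter case, concretely. MySQL's spelling is `INSERT ... ON DUPLICATE KEY UPDATE hits = hits + 1`; the sketch below uses SQLite's equivalent `ON CONFLICT` clause (SQLite 3.24+) just so it's runnable:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stats (page TEXT PRIMARY KEY, hits INTEGER)")

def record_hit(page):
    # First hit inserts hits=1; any later hit only bumps the counter,
    # leaving every other column of the row untouched.
    conn.execute(
        "INSERT INTO stats (page, hits) VALUES (?, 1) "
        "ON CONFLICT(page) DO UPDATE SET hits = hits + 1", (page,))

record_hit("/home")
record_hit("/home")
hits = conn.execute("SELECT hits FROM stats WHERE page = '/home'").fetchone()[0]
```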


ANyway... not saying they couldn't do it both ways, but there is a reason.




Re: How to optimize this long query

2007-06-21 Thread Philip Hallstrom

Hello,

I have several tables storing item information, keyword (mainly for searching), 
category and subcategory (also for searching). The query I am using now is:

SELECT i.*, it.*, ic.*, c.*, cs.*, s.*, st.* FROM item i
LEFT JOIN iteminfo it ON i.id = it.id
LEFT JOIN itemkeyword ik ON i.id = ik.id
LEFT JOIN state st ON it.state = st.id
LEFT JOIN itemcategory ic ON i.id = ic.id
LEFT JOIN subcategory s ON ic.sid = s.id
LEFT JOIN catsubcat cs ON cs.sid = s.id
LEFT JOIN category c ON c.id = cs.cid
WHERE (((UCASE(i.english) LIKE UCASE('% bank %') OR UCASE(i.desceng) LIKE 
UCASE('% bank %') OR UCASE(s.english) LIKE UCASE('% bank %') OR 
UCASE(c.english) LIKE UCASE('% bank %') OR UCASE(it.street1) LIKE UCASE('% bank 
%') OR UCASE(it.street2) LIKE UCASE('% bank %') OR UCASE(it.garden) LIKE 
UCASE('% bank %')))
OR ((UCASE(i.english) LIKE UCASE('bank %') OR UCASE(i.desceng) LIKE UCASE('bank 
%') OR UCASE(s.english) LIKE UCASE('bank %') OR UCASE(c.english) LIKE 
UCASE('bank %') OR UCASE(it.street1) LIKE UCASE('bank %') OR UCASE(it.street2) 
LIKE UCASE('bank %') OR UCASE(it.garden) LIKE UCASE('bank %')))
OR ((UCASE(i.english) LIKE UCASE('% bank') OR UCASE(i.desceng) LIKE UCASE('% 
bank') OR UCASE(s.english) LIKE UCASE('% bank') OR UCASE(c.english) LIKE 
UCASE('% bank') OR UCASE(it.street1) LIKE UCASE('% bank') OR UCASE(it.street2) 
LIKE UCASE('% bank') OR UCASE(it.garden) LIKE UCASE('% bank')))
OR (UCASE(ik.keyword) LIKE UCASE('%bank%')))
AND i.duedate > 1182339575 GROUP BY it.iid ORDER BY i.credit DESC, it.hits 
DESC, i.english LIMIT 0, 10;


You can drop all the UCASE's, as LIKE is case insensitive with the default 
collations.  That should help a little, since it won't have to upper-case 
all the fields in your WHERE clause.




Here is the EXPLAIN table:

id select_type table type possible_keys key key_len ref rows Extra
1 SIMPLE i ALL NULL NULL NULL NULL 1820 Using where; Using temporary; Using 
filesort
1 SIMPLE it ref id id 8 item.i.id 19
1 SIMPLE ik ref id id 8 item.i.id 19
1 SIMPLE st eq_ref PRIMARY PRIMARY 2 item.it.state 1
1 SIMPLE ic ref id id 8 item.i.id 19
1 SIMPLE s eq_ref PRIMARY PRIMARY 4 item.ic.sid 1
1 SIMPLE cs ref sid sid 4 item.s.id 2
1 SIMPLE c eq_ref PRIMARY PRIMARY 3 item.cs.cid 1 Using where

Now I have the questions below:
1) Is it possible to shorten the WHERE clause?
2) Sometimes, the keyword that I use to search takes a long time (over 6 
seconds). What is the main cause of this problem?
3) If I would like to sort the data by the `category.english` (if the keyword 
found in category english name) and then following by the other criteria, how 
do I write the ORDER BY clause?

Thank you very much for your help~

22-06-2007






Re: Geographic math problem

2007-06-28 Thread Philip Hallstrom
Not being very strong at math, I have a little problem that I'm not sure how 
to solve. Maybe someone can help me.


Basically, given a point (latitude, longitude) and a radius (100 meters) 
(think circle), I need to compute an equivalent square: That is, two points 
that would correspond to two corners of the square.


From:  51, -114          (radius 100 meters)
To:    51.005, -114.005  (NE corner)
       50.995, -113.995  (SW corner)

Now, the above is not really accurate, of course, since the earth is 
spherical (well, at least most people think so), and I would like this 
computation to run in MySQL query, e.g.:


UPDATE places SET ne_latitude = (*), ne_longitude = (*), sw_latitude = (*), 
sw_longitude = (*)


In the above table, there are already three columns with the centre latitude 
and longitude and radius. Any ideas? Thanks.


http://www.mathforum.com/library/drmath/view/51711.html
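The short version of what pages like that derive: the radius converts to about radius_km / 111.195 degrees of latitude, and the longitude span widens by 1/cos(latitude). A sketch using that spherical approximation:

```python
from math import cos, radians

def bounding_box(lat, lon, radius_km):
    """NE and SW corners of the square enclosing a circle of radius_km.

    Assumes a spherical Earth (about 111.195 km per degree of latitude);
    good enough for small radii away from the poles."""
    dlat = radius_km / 111.195
    dlon = radius_km / (111.195 * cos(radians(lat)))
    return (lat + dlat, lon + dlon), (lat - dlat, lon - dlon)

ne, sw = bounding_box(51.0, -114.0, 0.1)   # 100 m radius
```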






Re: ordering dates

2007-09-12 Thread Philip Hallstrom
$result= mysql_query("SELECT date_format(date, '%d/%m/%Y') as date, title, 
id, display FROM news ORDER BY date DESC ");


I have the query above the problem is oders them like so

30/05/2007
29/07/2007
25/0/2007


The order is taken by the first number. Is there any way to order them 
properly without a timestamp?


You're ordering by "date" but previously you turn "date" into a string by 
calling date_format on it.  Change the "as date" to something else and 
then the ordering will be chronologically descending.  Like this:


SELECT date_format(date, '%d/%m/%Y') as formatted_date, title, id, display 
FROM news ORDER BY date DESC
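A runnable demonstration of the alias shadowing (SQLite in memory, which resolves ORDER BY aliases the same way here; the table is throwaway):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE news (date TEXT, title TEXT)")
conn.executemany("INSERT INTO news VALUES (?, ?)",
                 [("2007-05-30", "a"), ("2007-07-29", "b"),
                  ("2007-07-25", "c")])

# Aliasing the formatted string back to "date" shadows the real column,
# so ORDER BY date sorts the dd/mm/yyyy *strings* (day first -- wrong):
bad = conn.execute(
    "SELECT strftime('%d/%m/%Y', date) AS date FROM news "
    "ORDER BY date DESC").fetchall()

# Rename the alias and ORDER BY resolves to the real column again:
good = conn.execute(
    "SELECT strftime('%d/%m/%Y', date) AS formatted_date FROM news "
    "ORDER BY date DESC").fetchall()
```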






Re: Efficiently finding a random record

2005-05-13 Thread Philip Hallstrom
I have a db of about 300,000 records and when I try to find one random record 
like this:

select * from table order by rand() limit 1;
it can take several minutes. My Sherlock Holmes instincts tell me that what 
I'm doing is somehow inefficient. What is the primary culprit here?

The culprit is that mysql has to assign a random value (via rand()) to
each of the 300,000 rows, then order all 300,000 rows by that random value 
and return the first one.

So even though you're just retrieving one value, you're processing 300,000 
rows.

You'd be better off doing something like this in your application...

row_count = get row count;
random_row = get random number from 0 to row_count - 1;
result = db query LIMIT 1 OFFSET random_row

or something like that...

-philip


Re: Efficiently finding a random record

2005-05-16 Thread Philip Hallstrom
Michael Stassen wrote:
For example, if the selected random id is missing, we take the next id we 
find, like this:

  SELECT @rand_id:= CAST( 1 + MAX(id)*RAND() AS UNSIGNED) FROM history;
  SELECT * FROM history WHERE id >= @rand_id LIMIT 1;
That will have a possibly undesired effect.  Records that have gaps in the 
IDs before them will be twice, three times, etc. (depending on the size of 
the gap), as likely to be selected as records with no preceding gaps.

Replace MAX with COUNT and the WHERE clause with an OFFSET and the gap
problem should go away...

-philip