Dehua,
Thanks. You are correct. Perhaps I was just over-thinking it.
-Hank
On Fri, Oct 19, 2012 at 9:48 PM, Dehua Yang wrote:
> Hi Hank
>
> I can only think of a table like this:
>
> CREATE TABLE xyz (
> hashtag VARCHAR(...) NOT NULL,
> comment_id ... NOT NULL,
> user_id
aid, I could sit down and design it
myself pretty quickly, but I would like to see what other people have
*actually done* to solve the problem before.
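For reference, a minimal sketch of one common many-to-many layout (all table
and column names here are hypothetical, and this is only a typical design, not
something anyone has reported using):

  CREATE TABLE hashtags (
    hashtag_id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    tag        VARCHAR(140) NOT NULL,
    UNIQUE KEY (tag)
  );

  CREATE TABLE comment_hashtags (
    comment_id INT UNSIGNED NOT NULL,
    hashtag_id INT UNSIGNED NOT NULL,
    PRIMARY KEY (comment_id, hashtag_id),         -- tags on a given comment
    KEY tag_to_comment (hashtag_id, comment_id)   -- comments carrying a given tag
  );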
-Hank
On Fri, Oct 19, 2012 at 2:42 PM, Rick James wrote:
> Many-to-many? That is, can a comment have many different hashtags? And a
> has
difficult), but
I'd like to see what other people have done in terms of storage and
features.
I'm also looking for a solid basic implementation, not something
overly complex.
Thanks,
-Hank
ll that I could see
was that 5.5.25 mysteriously disappeared to be replaced by 5.5.24.
-Hank
they retracted 5.5.25?
thanks,
-Hank
They are regular words. I was hoping someone would already know how
to do it. I was trying to avoid rolling my own solution using the
string functions. It gets really messy, really quick.
-Hank
On Thu, Mar 8, 2012 at 8:18 PM, Michael Dykman wrote:
> If your words need to be regu
rtant to spend that much time on. I'd like one
SQL statement to do it.
Thanks!
-Hank
mand on the slave.
There are a few FEDERATED tables on the slave.. is that what would
cause a communication packet error?
If not, what else could cause this on a "flush tables" command?
Thanks.
-Hank
n Fri, Sep 30, 2011 at 11:08 PM, Jan Steinman wrote:
> Okay, I've reviewed the online man page for date and time functions, and I've
> played with several likely candidates, and I am still having trouble
> subtracting two arbitrary Datetimes to get something that is useful. A simple
> subtracti
Check out the GET_LOCK and RELEASE_LOCK virtual lock functions in MySQL.
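A minimal sketch of how they are used (the lock name and timeout here are
arbitrary examples):

  SELECT GET_LOCK('app_login_lock', 10);   -- returns 1 if acquired, 0 on timeout
  -- ... do the critical work here ...
  SELECT RELEASE_LOCK('app_login_lock');   -- returns 1 if this connection held it

The lock is also released automatically if the connection drops.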
-Hank
On Wed, Sep 28, 2011 at 9:15 AM, Alex Schaft wrote:
> Hi,
>
> We're busy moving legacy apps from foxpro tables to mysql. User logins were
> tracked via a record in a table which the app then l
I've been watching this page to see when my anti-virus tool was
updated to scan for this specific virus/threat:
http://www.virustotal.com/file-scan/report.html?id=d761babcb55d21b467dd698169c921995bf58eac5e9912596693fee52c8690a1-1317175019
I use AVG
-Hank
On Wed, Sep 28, 2011 at 8:
Hello Johan,
Just an update. Using the "load index into cache" statement for the
200 million row indexed "source" table, my correlated update statement
ran in 1 hour, 45 minutes to update 144 million rows. A 50% increase
in performance!
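The statements involved look roughly like this (the key cache name and size are
just examples; "source" is the table mentioned above):

  SET GLOBAL hot_cache.key_buffer_size = 256*1024*1024;  -- optional dedicated key cache
  CACHE INDEX source IN hot_cache;                       -- assign the table's indexes to it
  LOAD INDEX INTO CACHE source IGNORE LEAVES;            -- preload index blocks into memory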
Thank you very much,
-Hank
On Fr
Hello Johan,
 Thanks for your comprehensive reply. I'll try to answer each of your
questions below.
-Hank
> > But if seeing some SQL will make you happy, here is just one example:
> >
> > UPDATE dest d straight_join source s set d.seq=s.seq WHERE d.key=s.key;
>
> S
On Thu, Sep 22, 2011 at 3:11 PM, Hassan Schroeder <
hassan.schroe...@gmail.com> wrote:
> On Thu, Sep 22, 2011 at 11:51 AM, Hank wrote:
> > Like I said, the problem is not just one particular SQL statement. It is
> > several dozen statements operating on tables with se
that index is built once
the update is complete. This query takes about 3.5 hours when I don't use
LOCK TABLES, and over 4 hours when I do use LOCK TABLES.
-Hank
On Thu, Sep 22, 2011 at 2:18 PM, Ananda Kumar wrote:
> Maybe if you can let the audience know a snippet of your SQL, some can
Sorry, but you do not understand my original issue or question.
-Hank
On Thu, Sep 22, 2011 at 2:10 PM, Ananda Kumar wrote:
> No,
> Use a cursor(select to be used in where condition of update
> stmt), loop through it for each update.
>
> regards
> anandkl
>
>
> On
be much faster than your individual update statement.
>
> regards
> anandkl
>
> On Thu, Sep 22, 2011 at 8:24 PM, Hank wrote:
>
>> That is what I'm doing. I'm doing a correlated update on 200 million
>> records. One UPDATE statement.
>>
>> Also, I'm not ask
nce for large update statements on MYISAM tables when it is
supposed to increase performance on exactly the type of queries I am
performing.
If you can't help answer *that* question, please stop lecturing me on the
reasons not to use LOCK TABLES. Thanks.
-Hank
On Thu, Sep 22, 2011 at 10:19 AM, Anton
at
might be the case.
-Hank
On Thu, Sep 22, 2011 at 12:42 AM, Antony T Curtis
wrote:
> LOCK TABLES...WRITE is very likely to reduce performance if you are using a
> transactional storage engine, such as InnoDB/XtraDB or PBXT. The reason is
> that only one connection is holding the write
on a single
user-box and mysql instance, that locking tables would cause these DML
statements to slow down compared to not locking the tables?
Thanks,
-Hank
in the array $M to be
inserted, and have a function like this to escape them all at once:
foreach ($M as &$val) $val = mysql_real_escape_string($val);
then your method starts to make more sense.
-Hank
Best of both worlds:
> $username=$_POST['username'];
> // do some stuff with username here
> $M=array(); // Array of things to be inserted into MySQL
> $M[username]=mysql_real_escape_string($username); // Everything that
> goes into $M is escaped
> $query="INSERT INTO table (username) VALUES ('{$M
>
> What ugly style - if it is not numeric and you throw it at the database,
> you are one of the many with a SQL injection, because if invalid values get
> that far, you have done no sanitizing before and you do not do it here either.
>
>
It's a matter of opinion. I never said the data wasn't sanitized (it is
>
> > Exactly - I can't create an index on the table until I remove the
> > duplicate records.
>
> I was under the impression you were seeing this during a myisamchk run -
> which indicates you should *already* have a key on that field. Or am I
> interpreting that wrong?
>
>
I'm trying to rebuild a
On Mon, Sep 19, 2011 at 7:19 AM, Johan De Meersman wrote:
> - Original Message -
> > From: "Hank"
> >
> > While running a -rq on a large table, I got the following error:
> >
> > myisamchk: warning: Duplicate key for record at 54381140 agains
On Sun, Sep 18, 2011 at 12:28 PM, Dotan Cohen wrote:
> On Sun, Sep 18, 2011 at 17:44, Brandon Phelps wrote:
> > Personally I don't use any quotes for the numeric types, and single
> quotes
> > for everything else. Ie:
> >
>
> Thanks, Brandon. I understand then that quote type is a matter of
> t
While running a -rq on a large table, I got the following error:
myisamchk: warning: Duplicate key for record at 54381140 against
record at 54380810
How do I find which records are duplicated (without doing the typical
self-join or "having cnt(*)>1" query)? This table has 144 million rows,
Given the choice between doing right the first time, or having the second
largest site on the internet, I'll take the latter, and deal with the
problems of not doing it right the first time.
-Hank
On Tue, Jul 12, 2011 at 10:45 AM, Jerry Schwartz wrote:
> Let this be a lesson to all
Sveta Smirnova at Mysql just confirmed this bug in 5.5.13:
http://bugs.mysql.com/45670
On Wed, Jun 15, 2011 at 5:38 PM, Claudio Nanni wrote:
> No worries!
>
> I think I would have figured that out!
>
> I'll feedback you tomorrow.
>
> Thanks again
>
> Claud
Oops... big typo in above steps... add the following line:
replicate-ignore-table=db.log
to the SLAVE my.cnf, and restart the SLAVE server.
The master does not need to be restarted or changed. Just the SLAVE.
Sorry about that.
-Hank Eskin
On Wed, Jun 15, 2011 at 5:19 PM, Claudio Nanni
this bug goes away and correct results are reported on the slave.
-Hank Eskin
On Wed, Jun 15, 2011 at 4:38 PM, Hank wrote:
>
> This is a follow-up to my previous post. I have been narrowing down what
> is causing this bug. It is a timing issue of a replication ignored table
> with an
;-- should be "1", but has values from "log" on the master
| 1 | 8 | <-- should be "2"
| 2 | 9 | <-- should be "1"
| 2 | 44450 | <-- should be "2"
++---+
If there is the slightest delay between the inserts into "log" and "test",
the replication happens correctly.
Thoughts?
-Hank Eskin
ent the last-insert-id of the replication *ignored* table on
the slave
Yeah, pretty strange, I know. But totally repeatable.
-Hank
2011/6/14 Halász Sándor
> >>>> 2011/06/13 22:38 -0400, Hank >>>>
> But that bug report was closed two years ago. I have no idea if it
That is the slave relay log dump I posted (and mis-labeled). Thanks.
-Hank
On Tue, Jun 14, 2011 at 2:34 AM, Claudio Nanni wrote:
> You should also have a look at the slave relay log.
>
> But in any case sounds like a bug.
>
> Claudio
> On Jun 14, 2011 5:18 AM, "Hank"
@@session.lc_time_names=0/*!*/;
SET @@session.collation_database=DEFAULT/*!*/;
BEGIN
/*!*/;
use test/*!*/;
SET TIMESTAMP=1308012505/*!*/;
insert into test values (1,null)
/*!*/;
SET TIMESTAMP=1308012505/*!*/;
COMMIT
/*!*/;
-Hank
On Mon, Jun 13, 2011 at 10:38 PM, Hank wrote:
>
> Yes, it's b
se on
the slave error, it clearly is getting this statement: "insert into test
values (1,null)" to replicate, but when it is executed, the "null" is
converted into a random number. But it's happening on all of my slaves, a
mix of 32 and 64 bit 5.5.8 and 5.5.11 boxes.
<ht
would
work, but replicated statements do not.
Nothing really changed on my system, but for some reason, this all started
happening about a week or so ago. I've been running this 5.5.8/5.5.11
configuration for months now (since 5.5.8 was released). The PHP code
that does this hasn't changed one bit, and this is a simplified version of
the database and code that is running in production.
Additional note: If I drop the 'id' field, and the primary key is just the
auto-increment field, it works correctly in replication.
Any ideas? Can anyone else replicate these results?
-Hank
it works perfectly (and as it did in 4.x).
How can I get this to work in 5.5.x?
Thanks,
-Hank
What is the highest version of MySQL available for a 2.4 kernel (Redhat/CentOS 3.5)?
And where can I find it to download?
Thanks,
-Hank
e large MYISAM tables and indexes (I
have posted about this before).
-Hank
Never mind -- it's working absolutely perfectly between 5.5.8 and 4.1.x.
Thanks again for the push.
-Hank
On Tue, Jan 4, 2011 at 5:14 PM, Hank wrote:
>
> Also, can I do this:
>
> insert into federated_table select * from local_table?
>
> -Hank
>
>
> On Tue, Jan
Also, can I do this:
insert into federated_table select * from local_table?
-Hank
On Tue, Jan 4, 2011 at 4:15 PM, Shawn Green (MySQL) <
shawn.l.gr...@oracle.com> wrote:
> On 1/4/2011 15:53, Hank wrote:
>
>> Hello,
>>
>>I have a background process that runs
Wow, that might just work! I've seen "Federated" tables mentioned around,
but I never knew that's what they were there for... thanks.
Can I have a host (remote) table on a MySQL 4.1.x server, and the federated
table on a 5.5.8 server?
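For reference, a federated table is declared something like this (the
connection details and columns here are made up; the column definitions must
match the remote table):

  CREATE TABLE remote_copy (
    id   INT NOT NULL,
    name VARCHAR(50),
    PRIMARY KEY (id)
  ) ENGINE=FEDERATED
    CONNECTION='mysql://fed_user:secret@remote-host:3306/mydb/source_table';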
-Hank
On Tue, Jan 4, 2011 at 4:15 PM
o the other server using
another mysql command line client command. I'd like to find something
cleaner than that.
I'm using 5.5.8.
thanks,
-Hank
i.e. just try this:
mysql> select 4E5664736F400E8B482EA7AA67853D13;
ERROR 1367 (22007): Illegal double '4E5664736' value found during parsing
-Hank
On Mon, Dec 20, 2010 at 12:50 PM, Hank wrote:
>
> Here's my 5 second guess..
>
> 4E5664736... is being interpr
Here's my 5 second guess..
4E5664736... is being interpreted as a number in scientific notation ..
i.e. 4*10^5664736 and the parser doesn't like that as a field name.
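A minimal illustration of the workaround (table name hypothetical): quoting the
identifier with backticks stops the parser from reading it as a number:

  SELECT `4E5664736F400E8B482EA7AA67853D13` FROM results_table;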
-Hank
On Mon, Dec 20, 2010 at 12:43 PM, Ramsey, Robert L
wrote:
> I am having the hardest time gettin
Sorry...
One small correction to my above post..
'FLUSH TABLES' should be issued between steps 8 and 9.
My 200+ million record table completed in 71 minutes.
-Hank
tables would take days to complete.
So why can't the REPAIR TABLE command do something like this in the
background for large MYISAM tables?
-Hank
Mysql 5.5 -- when is it going to go GA?
And when it does, which version will it be? 5.5.8 or 5.5.6rc?
Thanks,
-Hank
ables" and then unlock.
Your table will be unreadable until you rebuild the index with REPAIR
TABLE or myisamchk. The MYD file will remain intact.
If your MYI file is smaller than 200k, then just reduce the count=#.
-Hank
> On Tue, Oct 19, 2010 at 7:53 PM, Steve Staples wrote:
>
me)%6) as dtime ,count(*)
from table
group by dhour,dtime;
-Hank
On Wed, Oct 6, 2010 at 4:22 PM, Johan De Meersman wrote:
> Two people already who suggested a text-based approach vs. my numeric
> approach.
>
> Analysing, my method takes a single function call per record (to_u
Here's what I came up with:
select concat(left(DATE_FORMAT(start_time, "%Y-%m-%d %H:%i"), 15), "0") as time,
       count(*)
from `table`
group by time;
-Hank
>>
>> How would one go about to construct a query that counts items within an
>> increment or span
optimization issues...
but sure, that doesn't mean they will never exist for other
applications, but it has worked well for me.
Best,
-Hank
On Fri, Oct 1, 2010 at 4:50 PM, BMBasal wrote:
> Your suggestion seems more elegant. However, you missed the mathematical
> meaning of "BETWEE
On Fri, Oct 1, 2010 at 2:34 PM, Joerg Bruehe wrote:
> Hi!
>
>
> Hank wrote:
>> On Wed, Sep 29, 2010 at 8:33 AM, Patrice Olivier-Wilson
>> wrote:
>>> On 9/28/10 8:33 PM, Chris W wrote:
>>>
>>>> SELECT *
>>>> FROM anno
> 2. Don't stare at the screen. Start it, script the process & have it email
> your phone when it's done. Do something else in the mean time.
I don't literally stare at the screen -- of course I script it and do
other things... but when I have a resource-limited environment, it sure
would be ni
On Wed, Sep 29, 2010 at 8:33 AM, Patrice Olivier-Wilson
wrote:
> On 9/28/10 8:33 PM, Chris W wrote:
>
>>
>> SELECT *
>> FROM announcements
>> WHERE announcements_expiredate > CURDATE()
>> AND announcements_postdate <= CURDATE()
>> ORDER BY announcements_expiredate ASC
Or how about something like
old MyISAM counterparts. (I am using single-file-per-table). Is
this normal? If not, how can I adjust the space requirements for
these tables so they don't take up so much additional space?
I'm sure I'll have more questions later, but many thanks for your
comments and thoughts.
-Han
On Mon, Sep 20, 2010 at 7:36 AM, Shawn Green (MySQL)
wrote:
> Hello Hank,
>
> On 9/18/2010 9:35 PM, Hank wrote:
>>
>> I have the following pseudo code running on mysql 4.x:
>>
>> set @cnt:=0;
>> insert ignore into dest_table
>> select t1.fiel
"in order"... so somehow mysql is
inserting the rows in some strange order.
How can I fix this so it works in both mysql 4.x and 5.x?
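One common fix, sketched below, is to force the order explicitly instead of
relying on the implicit insert order (the column and table names here are
hypothetical, since the original statement is cut off above):

  set @cnt := 0;
  insert ignore into dest_table
  select x.some_key, (@cnt := @cnt + 1) as seq
  from (select some_key from source_table order by some_key) as x;

The derived table is materialized first, so the counter follows that order.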
Many thanks.
-Hank
On 02/09/2010 8:30 p, Hank wrote:
>>
>> Simple question about views:
>>
>>
> Hank,
> Have you tried "running away from the problem :-)" by doing...
>
> CREATE PROCEDURE `combo`(theid INT)
> BEGIN
>(SELECT * FROM table1 WHERE id = theid)
to do.
(I've also tried "UNION ALL" with the same results).
Any suggestions on how to query both tables using the indexes and the
view at the same time? That was my intention.
-Hank
onfig. I think with 4GB of
memory, the settings can be better than this example:
[myisamchk]
key_buffer = 256M
sort_buffer_size = 256M
read_buffer = 2M
write_buffer = 2M
Any suggestions? Thanks,
-Hank
Thank you,
> :)
Assuming you are using MYISAM tables, all you really need to do is (a)
use a LOCK TABLE before the first UPDATE statement and UNLOCK TABLES
after, and (b) put a LIMIT clause on the UPDATE statement. Other than
that, what you outlined is exactly what I do for a very similar
process, although right now I only have one "worker" process, but if I
wanted to add more, it's already built to handle that.
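A minimal sketch of that pattern, with hypothetical table and column names:

  LOCK TABLES work_queue WRITE;
  UPDATE work_queue
     SET claimed_by = CONNECTION_ID()
   WHERE claimed_by IS NULL
   LIMIT 100;              -- each worker claims a bounded batch
  UNLOCK TABLES;

With MyISAM, the write lock keeps two workers from grabbing the same rows.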
-Hank
d about this
recently).
Second, I like your second creative solution (I never would have come up
with that), but in order for it to work, mysql would have to sort 180
million records before creating the table or retrieve them out of the table
via the concatenated index, both of which I think will take a long time...
but I'll certainly give it a shot tomorrow and let you know how it goes.
Thanks again.
-Hank
ain that trying to do this in PHP
one record at a time would take much longer than a SQL solution.
Thanks,
-Hank
looking for a better
solution (if one exists). thanks.
-Hank
On Wed, Sep 2, 2009 at 7:50 PM, Gavin Towey wrote:
> Do you know that if you create seq column on the original table as an
> auto_increment primary key, it will fill in the numbers automatically?
> There's no need to crea
n records with a correlated
update query? And I'm fairly certain that trying to do this in PHP
one record at a time would take much longer than a SQL solution.
Thanks,
-Hank
On Fri, Aug 28, 2009 at 9:18 AM, Shawn Green wrote:
> Hank wrote:
>
>> Hello All,
>> I'm in the process of upgrading my database from 4.1 to 5.0 on CentOS.
>> I've been testing the "mysqlcheck --check-upgrade --auto-repair"
>> command,
>
thing to "upgrade" the tables, instead of using
mysqlcheck, which seems to be rebuilding the table row-by-row, instead of
sorting (which myisamchk does).
thanks.
-Hank
connecting are one of three or four hosts behind the
same firewall.
thanks.
-Hank
ke (may or may not work):
mysql_real_escape_string($_REQUEST["Assign_Engineer[$id]['Job_Title']"]);
Since this is a PHP problem, and you can't figure it out, I'd suggest moving
your request to a PHP list.
-Hank
#1
to setup Slave #2 in the CHANGE MASTER TO command?
Thanks.
-Hank
I used to use UltraEdit, but then switched to EditPlus because it can edit
remote files almost transparently. (Opening a file FTPs it down, you edit a
local copy, and saving FTPs it back.)
FYI - UltraEdit can do this - It uses SFTP over SSH to edit remote files.
Works like a charm.
-
Are the other fields in the update statement actually changing the
data? I don't know for sure, but if the data on disk is the same as
the update statement, mysql won't actually update the record, and
therefore might not update the last_updated field also. Just a
thought.
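A quick way to see this (hypothetical table):

  CREATE TABLE items (
    id INT PRIMARY KEY,
    name VARCHAR(50),
    last_updated TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
  );
  INSERT INTO items (id, name) VALUES (1, 'widget');
  UPDATE items SET name = 'widget' WHERE id = 1;
  -- reports "Rows matched: 1  Changed: 0", and last_updated keeps its old value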
to:
$sql="select count(*) from my_table where cid=123"
and still using the mysql_numrows() to get the result, that is your
error. You'll need to use mysql_result() or some other fetch function
to get the results of the query.
That's my guess.
-Hank
Don't you want the queries to be "outer join" and not "left join"?
ts
> WHERE projects.id = '1'
> AND projects_teams.project_id = projects.id
> AND teams.id = projects_teams.team_id
> AND users_teams.user_id = users.id
>
> gives me ALL the users who are on any team... even teams not assigned
> to that project.
>
> What gives? My
to
have to help out here on the way to maximize the memory or minimize
the time for the internal sorting of the result set, if it's possible.
-Hank
retrieve and print the rest of the product info. Sorting 300,000+
records in that huge result set is going to take some time (although
it shouldn't take 10 minutes).
-Hank
dex key on `salesrank` on the
product table?
-Hank
Grant,
You can just do a "desc pn_pricecompare_catprod" and "desc
pn_pricecompare_product" and post the results. The CREATE TABLE
statements would be OK, but the describes are better.
To flush the query cache, I think you can just do a "flush tables".
-Hank
on where the
problem is being introduced.
Also, table descriptions of both tables would be helpful in locating
the problem.
-Hank
You can also create a sym-link for the separate databases/directories,
and leave my.cnf as-is. I've been doing that since 3.23, and it's
never caused me any problems.
the
straight line distance you provided.
--
-Hank
now what it is now, though,
and have (or already have) considered adding support for it in MySQL.
-Hank
On 10/5/05, C.R. Vegelin <[EMAIL PROTECTED]> wrote:
> Hi Hank,
> You are quite right.
> I need separate non-unique indices on a, b, c, d, e and f to avoid table
> scans.
> An
L,
f smallint NOT NULL,
PRIMARY KEY (a,b,c,d,e,f),
KEY b (b),
KEY c (c),
KEY d (d),
KEY e (e),
KEY f (f)
);
--
-Hank
--
-Hank
> I'll be setting up a second master to do this same
> thing once per day to act as my daily backup.
Oops...I meant to say "second slave".
-Hank
> The long story short is we use the fact that MySQL has the ability to
> run the SQL thread and the IO thread of replication separately, and
> control them individually.
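For reference, the per-thread control being described is just a pair of
statements each way (a sketch of the statements only, not the whole procedure):

  STOP SLAVE SQL_THREAD;    -- keep downloading relay logs, stop applying them
  START SLAVE SQL_THREAD;   -- resume applying
  STOP SLAVE IO_THREAD;     -- stop fetching from the master
  START SLAVE;              -- start both threads again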
I'm fairly green with replication, but I have a simple cron job that
starts a PHP program that issues a "slave start", watches
eally feasible.
myisamchk --block-search # looked promising, but I can't find any
documentation on how to use it properly.
thanks.
--
-Hank
> I've read way too many articles about the 3 kids/guys/etc... who managed to
> get $10 million in funding for essentially a basic idea, but they had/have
> traffic/eyeballs!!!
Welcome to 1999. Blind reliance on "Traffic and Eyeballs" as a
business plan was what (in part) caused the great runups
ed.
The system currently has MySQL version 4.0.1, so I can't use
subqueries (i.e. NOT IN (...)).
Any suggestions would be greatly appreciated. thanks.
-Hank
I think you should review the very recent thread "why NOT NULL in
PRIMARY key??" which might shed some light on your particular issue.
In a nutshell, NULL != NULL, so the database engine cannot detect the
duplicate rows, as is expected.
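A tiny demonstration (hypothetical table):

  CREATE TABLE t (x INT, UNIQUE KEY (x));
  INSERT INTO t VALUES (NULL), (NULL);   -- both rows are accepted
  SELECT COUNT(*) FROM t;                -- 2: the UNIQUE key cannot treat the NULLs as duplicates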
-Hank
On 5/4/05, Dennis Fogg <[EMAIL PROT
> No, those indexes were intentional. If you read the section of the manual
> on optimizing queries, you will encounter a page that mentions what are
> known as "covering indexes". The advantage to a covering index is that if
> your data is numeric and in the index,
Except that the zip code fie
ated by someone pretty green with SQL.
-Hank
53293) *
COS(b.lat*0.017453293) *
POWER(SIN(((a.lng-b.lng)*0.017453293)/2),2) AS distance
FROM zips a, zips b
WHERE
a.zip_code = '90210'
GROUP BY distance
HAVING distance <= 5;
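A complete version of the haversine approach, as a sketch (0.017453293 converts
degrees to radians; 3956 is roughly the earth's radius in miles, so the HAVING
clause means "within about 5 miles"):

  SELECT b.zip_code,
         2 * 3956 * ASIN(SQRT(
             POWER(SIN(((a.lat - b.lat) * 0.017453293) / 2), 2) +
             COS(a.lat * 0.017453293) * COS(b.lat * 0.017453293) *
             POWER(SIN(((a.lng - b.lng) * 0.017453293) / 2), 2)
         )) AS distance
  FROM zips a, zips b
  WHERE a.zip_code = '90210'
  GROUP BY b.zip_code
  HAVING distance <= 5;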
-Hank
> Applying this same thing to apply to the 80k estimated US zipcodes
> currently
Just for the record, there are about 43,000 distinct US zip codes...
and 56,000 zip codes if you double count the zips with multiple city
names (when zip codes cross city limits).
-Hank
(not zips, though), here's a
place to download it for free:
http://www.maxmind.com/app/worldcities
For $50, you can get the addition of population of each city.
-Hank
still
> couldn't figure out why 0's got inserted instead of a meaningful current
> time stamp. I would greatly appreciate if someone can let me know what
> the correct way is. Unfortunately I cannot recreate the table.
>
> Thank you very much!
>
> Regards,
>
as to use fsockopen()/fclose() to simply ping the
database? I do NOT want to use mysql_connect in this case - I just want to
ping 3306 without causing MySQL to count them as broken connections and
disconnect the client host.
Thanks,
-Hank