My understanding is that SCSI offers a faster sustained transfer rate,
which matters for moving large files, while a busy database needs fast
random access for numerous small reads all over the disk. Two
different, unrelated things.
I am more than willing to be called Wrong, slapped, and cast from a
bridge.
I'm tracking site visitors with a database - one field is the IP
address, the other is the page they've visited. I don't want any
duplicates of this combination. Is there a MySQL statement similar to
INSERT IGNORE but will ignore only if the combination is duplicated?
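A minimal sketch of one way to do this (table and column names are
assumptions): put a UNIQUE index across the pair, and INSERT IGNORE
will then skip only rows that duplicate the combination:

  ALTER TABLE visits ADD UNIQUE KEY ip_page (ip, page);

  -- silently skipped if this (ip, page) pair already exists
  INSERT IGNORE INTO visits (ip, page) VALUES ('10.0.0.1', '/index.php');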
I'm searching a database of geopoints, and when two records have the
same latitude and longitude, I only want to return one of them -
basically just find all the unique locations. How do you set up a
select like this? Thanks...
Many different records will be returned though, I just don't want any
dupes where both lat/lon is the same.
:)
On Sep 12, 2006, at 12:20 PM, Hiep Nguyen wrote:
select * from table where ... limit 1
that would do it if you don't care which one it returns
JC
On Tue, 12 Sep 2
Lat & lon are two different fields. Either can be duplicated, but not
both.
On Sep 12, 2006, at 12:33 PM, Steve Musumeche wrote:
Select DISTINCT(lat_long_field) from table where...
Steve Musumeche
CIO, Internet Retail Connection
[EMAIL PROTECTED]
Brian Dunning wrote:
Many diffe
Select DISTINCT(CONCAT(lat, lon)) from table where ...
Steve Musumeche
CIO, Internet Retail Connection
[EMAIL PROTECTED]
Brian Dunning wrote:
Lat & lon are two different fields. Either can be duplicated, but
not both.
On Sep 12, 2006, at 12:33 PM, Steve Musumeche wrote:
Select DISTINCT(lat_long_field) from table w
Never mind, I figured it out:
select distinct(concat(lat,lon)), lat, lon where
On Sep 13, 2006, at 6:57 AM, Brian Dunning wrote:
But if I do this, how do I still get lat and lon as two different
fields? This finds the right record set, but it returns both fields
concatenated into a
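For the record, the CONCAT() trick isn't needed: DISTINCT applies to
the whole select list, so naming both columns returns them as separate
fields and still de-dupes the pair (it also avoids false matches like
CONCAT('12.3','4.5') = CONCAT('12.34','.5')). A sketch with an assumed
table name:

  SELECT DISTINCT lat, lon FROM geopoints;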
Hi Agrapin -
This sounds great. Could you please post some of your timber products
here to the list? Many of us are really looking for a break from this
boring MySQL stuff. Thanks, and our kind regards to you too.
- Brian
On Sep 11, 2006, at 7:57 PM, Agrapin S.A. - Timber Industry and
T
We need a guy for some hourly PHP/MySQL work. Large project to start
with, plenty of incremental stuff down the road. Looking for an expert
with a flexible schedule who can make hours available when we need
them. Your regular hourly rate.
Also - only interested in someone local to sou
I have a very busy 14,000,000 record table. I made a new version of
the table, that's more efficient, and now I just need to copy over
the data. Just about anything I try swamps the machine and locks up
MySQL with "too many connections" because it's so damn busy. Can
anyone suggest the most
I have a need to insert a record only when the combination of 3
fields is unique. I do this by having an index with all 3 fields, and
doing an INSERT IGNORE. This works fine.
Here's the element I can't figure out how to add: When there is a pre-
existing record, I want to update two of its c
when you
re-enable them.
You might try this workaround, where you're copying into a duplicate
of your new table structure.
- CREATE TABLE newtable2 LIKE newtable
- INSERT INTO newtable2 SELECT * from oldtable /* or however you're
copying */
- RENAME TABLE newtable TO newtable_bkup, newtabl
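Spelled out end to end (names from the thread; the final RENAME is
atomic, so clients never see a missing table):

  CREATE TABLE newtable2 LIKE newtable;
  INSERT INTO newtable2 SELECT * FROM oldtable;
  RENAME TABLE newtable TO newtable_bkup, newtable2 TO newtable;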
Thanks Dan, I believe that's exactly what I was looking for. Thanks
for not saying "RTFM" even though it clearly applies. :)
On Sep 27, 2006, at 12:57 PM, Dan Julson wrote:
Brian,
Look at the ON DUPLICATE KEY UPDATE syntax within the INSERT SYNTAX
of the
Docs. That should give you wha
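A minimal sketch of that syntax, with assumed column names (a UNIQUE
index spans a, b, c; two columns get updated when the combination
already exists):

  INSERT INTO t (a, b, c, hits, last_seen)
  VALUES ('x', 'y', 'z', 1, NOW())
  ON DUPLICATE KEY UPDATE hits = hits + 1, last_seen = NOW();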
Before I do this, I just wanted to check with you all to see if this
is the correct command:
/etc/rc.d/init.d/mysqld restart --log-slow-queries
If so, where exactly will I find the slow query log?
Will the slow query log be turned off by default next time I restart it?
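For what it's worth, init scripts generally don't pass extra flags
through to mysqld; on MySQL 5.0-era servers the option belongs in
my.cnf, which also makes it survive restarts (the path here is an
assumption):

  [mysqld]
  log-slow-queries = /var/lib/mysql/slow-queries.log
  long_query_time  = 2

With no path given, the log defaults to host_name-slow.log in the data
directory.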
I have a 17,000,000 record table that I'm trying to duplicate in
order to make some changes and improvements, then I'll rename it and
drop the original table. So I need this duplicate to be a live table
in the same database as the original.
I tried the "copy table" function in the Operation
Thanks Chris, this sounds great but when I read about mysqlhotcopy I
didn't see a way to make it create a live table that's open within
the same database, it seems to want only to create a separate backup
file in some directory.
On Sep 27, 2006, at 6:10 PM, Wagner, Chris (GEAE, CBTS) wrote
/* or however you're
copying */
- RENAME TABLE newtable TO newtable_bkup, newtable2 TO newtable
Dan
On 9/27/06, Brian Dunning <[EMAIL PROTECTED]> wrote:
I have a very busy 14,000,000 record table. I made a new version of
the table, that's more efficient, and now I just ne
phpMyAdmin is giving me the following warning:
PRIMARY and INDEX keys should not both be set for column `referer`
Here is what the table looks like:
CREATE TABLE `myspacemap_visitors_2` (
`creation` timestamp NOT NULL default CURRENT_TIMESTAMP on update
CURRENT_TIMESTAMP,
`lat` double NOT
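If the warning is accurate, `referer` carries both the PRIMARY KEY and
a separate secondary index, and the secondary one is redundant. A
hedged sketch (the index name is an assumption; check SHOW INDEX
first):

  SHOW INDEX FROM myspacemap_visitors_2;
  ALTER TABLE myspacemap_visitors_2 DROP INDEX referer;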
I'm trying to delete all but the newest n records.
DELETE FROM tablename ORDER BY creation DESC LIMIT n
This does the opposite of what I want. Is there some way to tell it
to start the delete after n and delete all the remaining records?
The offset is what I was thinking of - that would be the simplest -
but as far as I can tell, delete doesn't support the offset. It's not
documented, and it gives me an error when I try it. I was hoping to
avoid two queries but it sounds like that's what I might have to do.
On Oct 4, 2006,
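One two-query way to do it (column names assumed): grab the creation
time of the n-th newest row into a user variable, then delete
everything older:

  -- n = 100; returns no row if the table has fewer than 100
  SELECT creation INTO @cutoff
  FROM tablename ORDER BY creation DESC LIMIT 99, 1;
  DELETE FROM tablename WHERE creation < @cutoff;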
I've got a GoDaddy virtual dedicated server and I'm trying to run a
LOAD DATA INFILE, but I keep getting "Can't get stat of '/home/httpd/
vhosts/04planet.info/httpdocs/test.txt' (Errcode: 13)"
As you can see from that error message, I've uploaded my data file to
the httpdocs directory and tr
I just solved my own problem. For the benefit of others, I only
needed to set permissions to 755 for the directory containing my data
file. Works fine now.
You have to purchase a database. ZIP codes are not geographic, you
can't calculate lat/lon from them. Here is one source:
http://www.zipwise.com
On Jun 26, 2005, at 6:43 PM, Scott Gifford wrote:
Jack Lauman <[EMAIL PROTECTED]> writes:
A couple of months ago there was a discussion about ZI
Dude, that's more than 5 years old.
On Jun 26, 2005, at 8:27 PM, Scott Gifford wrote:
Brian Dunning <[EMAIL PROTECTED]> writes:
You have to purchase a database. ZIP codes are not geographic, you
can't calculate lat/lon from them. Here is one source:
http://www.zipwise.
real business that needs their app to give correct
results. :)
If anyone does know a free source of CURRENT data (updated at least
monthly) please post it - just cuz I can't find one doesn't mean it's
not out there! :)
On Jun 27, 2005, at 8:21 AM, Scott Gifford wrote:
A lot cheaper here:
http://www.zipwise.com/database-download-now.php
On Jun 28, 2005, at 5:30 AM, Mattias Håkansson wrote:
This one goes for $169, and you get longitudes and latitudes.
http://www.buyzips.com/platinum-expanded.htm
Regards,
Mattias Håkansson
- Original Message -
From
http://www.buyzips.com/platinum-expanded.htm
This one also says it's only updated every 6 months. Ouch!! Another
reason I recommend Zipwise instead. Cheaper and fresher data:
http://www.zipwise.com
Hi all -
I have an urgent need to update several million records. There is a
URL stored in a MySQL 'text' field. Many of the records contain
"1234" like this:
http://www.domain.com?etc=etc&arg=1234&etc=etc
Any occurrence of "1234" has to be changed to "5678" like this:
http://www.domain.co
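A sketch of the usual answer (table and column names are assumptions);
REPLACE() leaves rows without the substring untouched, and the WHERE
just keeps the update from rewriting every row:

  UPDATE pages
  SET url = REPLACE(url, 'arg=1234', 'arg=5678')
  WHERE url LIKE '%arg=1234%';

If the target is always that one parameter, anchoring on 'arg=1234'
rather than bare '1234' avoids clobbering other numbers that merely
contain 1234.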
Wow - so easy! What a dork. Thanks guys. :) :)
I am cross-posting this to the PHP and the MySQL lists because I'm
not sure in which technology my solution will lie.
I have a pretty busy PHP/MySQL site that executes the following query
a lot:
select count(*) as `count` from terms;
My MySQL account was disabled by my ISP because this que
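One common workaround, sketched with assumed names: keep the count in
a one-row summary table refreshed on a schedule, so page loads read a
cached value instead of re-counting:

  CREATE TABLE terms_count (n INT UNSIGNED NOT NULL);
  INSERT INTO terms_count VALUES (0);
  -- run periodically, e.g. from cron:
  UPDATE terms_count SET n = (SELECT COUNT(*) FROM terms);
  -- pages then run:
  SELECT n FROM terms_count;

(On MyISAM an unfiltered COUNT(*) is already near-instant, so this
mainly pays off for InnoDB or filtered counts.)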
If I have a table with 200K records, is there an easy way to split it
into two separate tables with 100K records each?
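A sketch assuming an auto-increment id column: find the id that starts
the second half, copy those rows into a clone, then delete them from
the original:

  SELECT id INTO @cutoff FROM t1 ORDER BY id LIMIT 100000, 1;
  CREATE TABLE t2 LIKE t1;
  INSERT INTO t2 SELECT * FROM t1 WHERE id >= @cutoff;
  DELETE FROM t1 WHERE id >= @cutoff;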
My RaQ4 is throwing the "Too many connections" error. Now, it is
getting hit pretty hard by traffic, but I wonder if my coding might
be exacerbating this error.
Due to my relatively lazy structure of includes and functions, my
pages might have half a dozen or so of these (most are to the sa
In one case I do need to jump back and forth between databases that
are on different physical servers. What's the most efficient way to
handle this?
On Aug 2, 2005, at 11:14 AM, Devananda wrote:
Brian Dunning wrote:
My RaQ4 is throwing the "Too many connections" er
I'm searching a table of people who own properties, and I want to
also include the total count of related properties, and the count of
related properties whose (status is 'Active' and approval is
'Active'). I've got:
select accounts.name, count(properties.property_id) as totalcount
from a
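The usual shape of the answer (join column names are assumptions): a
LEFT JOIN plus a conditional sum alongside the plain count:

  SELECT accounts.name,
         COUNT(properties.property_id) AS totalcount,
         SUM(IF(properties.status = 'Active'
                AND properties.approval = 'Active', 1, 0)) AS activecount
  FROM accounts
  LEFT JOIN properties ON properties.account_id = accounts.account_id
  GROUP BY accounts.name;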
That's exactly what I'm looking for, thanks Eugene. :)
On Aug 5, 2005, at 12:46 AM, Eugene Kosov wrote:
Brian Dunning wrote:
I'm searching a table of people who own properties, and I want to
also include the total count of related properties, and the count
of related p
I am hosting on a RaQ4 which is terribly underpowered. Within a
minute of a reboot its CPU and RAM are both red in the admin screen.
The app is optimized as much as possible but it just gets too much
traffic, too many MySQL connections. It's maxed out at 512MB RAM.
I have a generic 2.5GHz P
Same machine, any performance difference?
http://www.briandunning.com/helios.shtml
Sorry this is WAY OFF TOPIC, but it's a pretty darn scary picture.
I have a table of questions, and a table of answers. Each question
has a related answer record for each person who has answered that
question, and each answer is timestamped. How do I find a list of
questions where the MOST RECENT answer is less than 30 days ago?
(Basically trying to exclud
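A sketch of the simple version (names assumed): group by question and
filter on the MAX of the answer timestamps:

  SELECT q.id, MAX(a.created) AS last_answer
  FROM questions q
  JOIN answers a ON a.question_id = q.id
  GROUP BY q.id
  HAVING last_answer >= NOW() - INTERVAL 30 DAY;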
So simple - I was trying to WAY overcomplicate it. Thanks. :)
On Aug 17, 2005, at 10:05 AM, Jon Drukman wrote:
Brian Dunning wrote:
I have a table of questions, and a table of answers. Each
question has a related answer record for each person who has
answered that question, and each
My web site has always worked, I didn't change anything, so whatever
is causing this error happened all by itself:
#1017 - Can't find file: './kessler/products.frm' (errno: 13)
In phpMyAdmin, it shows the tables are "in use" and there's no way to
execute a repair or anything. I don't know wh
this machine?
do you have administrator access?
does kessler/products.frm still exist? [likely: /var/lib/mysql/
kessler/products.frm]
On Sat, 3 Sep 2005, Brian Dunning wrote:
My web site has always worked, I didn't change anything, so
whatever is causing this error happened all by itsel
On Sep 3, 2005, at 1:43 PM, Jason Pyeron wrote:
google pulls up many instances of this error from several
'different' servers, you might look into the software.
http://www.google.com/search?num=50&hl=en&lr=lang_en&safe=off&q=%27kessler%2Fproducts.frm%27&btnG=Search&lr=
The reason for that is
On Sep 3, 2005, at 1:43 PM, Jason Pyeron wrote:
it could be that the permissions are wrong, too.
The permissions for all the files in there are -rwxrwxr-x ...I don't
know if that's what they should be or not.
FIXED...
I did a chmod +rwx on the directory, and now all is well.
I got a 12-hour invoice from a consultant who was tasked to do the
following:
- Install a Red Hat machine from absolute scratch for PHP/MySQL/Apache
- Copy over some MySQL databases
- Have mod_rewrite working via htaccess, and have wildcard DNS
I realize there are a billion different variabl
How do I append one table's contents to another? Both have identical
structure. Problem is I don't have shell access, only phpAdmin or a
PHP file I write & upload myself.
INSERT into tbl1 SELECT * FROM tbl2
Thanks to both of you, but this is not working. Since one of the
fields is a primary key that's duplicated in both tables (both tables
have records numbered 1, 2, 3...), it won't allow the duplicate
entries. Fortunately I do not need those primary key va
On Oct 11, 2005, at 8:24 AM, Jose Miguel Pérez wrote:
INSERT INTO table1 (field1, field2)
SELECT field1, field2 FROM table2
Jose's solution worked perfectly. Thanks everyone, sorry for being so
dense today. :)
I'm in the midst of the age-old problem of finding the right SQL to
make a Dealer Locator by ZIP code. Like Steffan's posts from a couple
weeks ago, I have the ZIP code data in one table with latitudes and
longitudes, and the table of Dealers with ZIP codes in a separate table.
While resear
If I say this, I get all my data:
But if I say this, I get no results at all:
Seems pretty straightforward. What am I missing?
I got it to work - turns out it was a stupid typo on my part (hate it
when that happens - and hate wasting the list's time even more!). How
long does this temporary table persist for - just the execution of
the one page, or will it live on the server (using resources) until I
explicitly mak
That's awesome! I love it. Even though it didn't include me... :(
How are you doing the geotargeting?
I'm trying to find a list of customers including a count of all their
invoices, but it's not including customers who have no invoices - and
it should. What's broken?
SELECT customers.company, count(invoices.id) as invcount
FROM customers, invoices
WHERE customers.id= invoices.customer_id
GROU
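What's broken is the implicit inner join: the WHERE-clause join drops
customers with no matching invoice rows. A LEFT JOIN keeps them, and
COUNT over the nullable column comes out 0 for them:

  SELECT customers.company, COUNT(invoices.id) AS invcount
  FROM customers
  LEFT JOIN invoices ON invoices.customer_id = customers.id
  GROUP BY customers.company;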
Thanks very much to all of you!
Obviously I need to learn more about joins. Appreciate the kick in
the pants.
:)
If I say:
SELECT *
FROM tablename
ORDER BY rand()
LIMIT 50
Will that give me the first 50 records from the table and randomize
their order, or will it give me 50 records randomly from throughout
the entire table?
On Nov 2, 2005, at 6:03 PM, Matt Babineau wrote:
It will do your whole table. Then give you 50 records.
But there won't be any duplicates, right? It won't include the same
records more than once. I know that sounds stupid, just trying to
debug an odd problem.
This simple find is taking 4 to 7 seconds. Way too long!! (This is a
geotargeting query using the database from IP2location.)
select lat,lon from geocodes where ipFROM<=1173020467 and
ipTO>=1173020467
The database looks like this (how IP2location recommends):
CREATE TABLE `geocodes` (
`i
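A common rewrite for this kind of range table, since a B-tree index
can't bound both ipFROM and ipTO at once: anchor on one side, take the
nearest range, and check the other side on that single row (assumes an
index on ipFROM):

  SELECT lat, lon
  FROM geocodes
  WHERE ipFROM <= 1173020467
  ORDER BY ipFROM DESC
  LIMIT 1;
  -- then confirm ipTO >= 1173020467 on the returned row, since
  -- addresses with no match fall in the gaps between ranges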
I'd like to delete all but the newest 100 records from a table. I
have a timestamp field. I can't figure out how to do this in a single
SQL statement - can anyone help? Thanks!
The problem is the most recent 100 records won't be sequential. There
are records with many different identifiers, in random order mixed
with other records that I don't want deleted, and each time I do this
I'm going to be limiting each subset of identified records to only
the most recent 1
On Nov 6, 2005, at 6:05 PM, Jeremy Cole wrote:
Jeremy Cole
MySQL Geek, Yahoo! Inc.
Desk: 408 349 5104
Yahoo employs MySQL geeks? Always wondered what db that big index
runs on... :)
On Nov 6, 2005, at 6:05 PM, Jeremy Cole wrote:
DELETE FROM tbl ORDER BY ts DESC LIMIT 100, 99;
That's exactly what I needed: I was not aware of the OFFSET option. So
I tried this, which appears correct to me:
DELETE FROM table
WHERE field='somevalue'
ORDER BY timestamp DESC
LIMIT 100,
On Nov 7, 2005, at 11:04 AM, Scott Noyes wrote:
delete from x where ID not in (select ID from x order by timestamp
desc limit
100);
It's a good suggestion, I'm just shying away from it because it seems
more resource intensive than using an offset, and my ISP is super
anal about resources
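For the record, stock MySQL of this era rejects LIMIT inside an IN
subquery, but wrapping the keeper ids in a derived table gets past
that (and past the can't-modify-the-table-you-select-from rule); names
here are from the thread:

  DELETE FROM tbl
  WHERE field = 'somevalue'
    AND id NOT IN (
      SELECT id FROM (
        SELECT id FROM tbl
        WHERE field = 'somevalue'
        ORDER BY ts DESC
        LIMIT 100
      ) AS keepers
    );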
Interesting thought. I just tried it with and even 999 - same
error! I'm sure that 999 records is not too large for MySQL... :) :)
On Nov 7, 2005, at 4:52 PM, Joseph Cochran wrote:
The quoted line is 99 but you're using , which I
presume is too big for the system to reco
I'm using the following to find the nearest 10 records by proximity
to the ZIP code $zip. The problem is that it only returns one record
per ZIP code, even when there are multiple records with the same ZIP:
(Note this uses a temp table, but I already double-checked that all
the desired reco
You're exactly right, and that solved it! Thank you.
On Nov 22, 2005, at 7:49 AM, [EMAIL PROTECTED] wrote:
Brian Dunning <[EMAIL PROTECTED]> wrote on 11/22/2005 10:43:13
AM:
> I'm using the following to find the nearest 10 records by proximity
> to the ZIP code $zi
OK, I'm bad - I have curly quotes in my db that I failed to eliminate
prior to the insert. Now that they're in there, is there a way to
replace them with straight quotes? Everything I try fails to find them.
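A hedged sketch, assuming a latin1/cp1252 table where the Windows
smart quotes are single bytes (145-148); in UTF-8 they are multi-byte
and you would match the literal characters instead:

  UPDATE articles SET body =
    REPLACE(REPLACE(REPLACE(REPLACE(body,
      CHAR(147), '"'), CHAR(148), '"'),      -- curly doubles
      CHAR(145), '\''), CHAR(146), '\'');    -- curly singles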
I have a DATE field that includes a date sometime within the past 30
days. I'm trying to show the number of days until the expiration
date, which is 30 days in the future from the date in the field. I've
tried a bunch of permutations of something like this:
select (30 - SUBDATE(CURDATE() -
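The expression that seems to be wanted, sketched with an assumed
column name (DATEDIFF requires MySQL 4.1.1+, which is what trips
things up below):

  SELECT DATEDIFF(date_field + INTERVAL 30 DAY, CURDATE()) AS days_left
  FROM t;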
Thanks Peter, that appears to be exactly what I'm looking for, but it
still gives an error and I've been through it with a fine-toothed
comb, tried different versions, parens, etc. Here is the exact SQL
statement I'm using, with your suggestion:
select
accounts.username,
mee
On Jan 5, 2006, at 9:38 PM, Michael Stassen wrote:
DATEDIFF was added in 4.1.1. What version of mysql do you have?
Thanks Michael - that was indeed the problem. Some ancient-ass
version that's been on my development server for who knows how many
years, in accordance with the "If it ain't
I'm logging IP addresses and also searching for existing ones. Does
anyone happen to know which is more resource intensive: storing the
IP's as integers using INET_NTOA() and INET_ATON() to convert them at
each query; or simply storing the IP as a varchar? In each case the
IP field would be
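Both functions are built in, and the integer form is generally the
win: 4 bytes (INT UNSIGNED) versus up to 15 for a dotted-quad varchar,
with a smaller, faster index; the per-query conversion cost is
negligible. A sketch:

  CREATE TABLE hits (ip INT UNSIGNED NOT NULL, KEY (ip));
  INSERT INTO hits VALUES (INET_ATON('10.0.5.9'));
  SELECT INET_NTOA(ip) FROM hits WHERE ip = INET_ATON('10.0.5.9');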
I have a bunch of databases - some are really big, 2GB - on a number
of different accounts at PowWeb. I am buying a Rackspace server and
want to move everything over -- hopefully all in one night. Can
anyone suggest the best way to do this? Would it be to use the Export
command in phpMyAdmi
I have a really simple two-column database:
domain_name (primary key)
timestamp
I'm trying to keep track of the referrer of every visit to a web
site, and I'm looking for a single SQL statement (since my ISP limits
the total number of calls I can make in a day) that will either
insert a ne
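This is the same ON DUPLICATE KEY UPDATE pattern as the earlier
thread, leaning on domain_name being the primary key (table name is an
assumption):

  INSERT INTO referrers (domain_name, `timestamp`)
  VALUES ('example.com', NOW())
  ON DUPLICATE KEY UPDATE `timestamp` = NOW();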
Pretend I'm Netflix and I want to return a list of found movies,
including the average of related ratings for each movie. Something
like this:
select movies.*, average(ratings.rating) from movies, ratings where
movies.movie_id=ratings.movie_id
I'm sure that's wrong in about 10 different w
SELECT movies.*, AVG(ratings.rating) FROM movies
INNER JOIN ratings ON movies.movie_id=ratings.movie_id
GROUP BY movies.movie_id
Change the INNER JOIN to a LEFT JOIN if you want all movies, even
those with no ratings.
Brent Baisley
On Mon, Dec 22, 2008 at 11:13 AM, Brian Dunning wrote:
Pretend I'm Ne
I have a table of projects with several columns for the IDs of some
people in various roles, and then a table of the people. How can I get
all the people for a given project, but keep their roles straight?
Projects
--
Project_ID
Boss_ID
Admin_ID
Assistant_ID
People
ID
Name
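The usual approach is joining People once per role column, each under
its own alias (LEFT JOINs in case a role is unfilled):

  SELECT p.Project_ID,
         boss.Name      AS boss_name,
         admin.Name     AS admin_name,
         assistant.Name AS assistant_name
  FROM Projects p
  LEFT JOIN People boss      ON boss.ID      = p.Boss_ID
  LEFT JOIN People admin     ON admin.ID     = p.Admin_ID
  LEFT JOIN People assistant ON assistant.ID = p.Assistant_ID;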
Sorry if this is a n00b question, I did read the docs for mysqldump
before posting, but couldn't figure it out.
I'm successfully doing a mysqldump, but I want only a few of the
columns, and want them in a different order. Is this possible without
substantially slowing it down? The table has
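mysqldump itself can't subset or reorder columns; SELECT ... INTO
OUTFILE can, which matches the "ticket" mentioned later in this
archive. A sketch with assumed names:

  SELECT col3, col1
  FROM bigtable
  INTO OUTFILE '/tmp/partial.csv'
  FIELDS TERMINATED BY ',' ENCLOSED BY '"'
  LINES TERMINATED BY '\n';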
Hi all -
I have a table of PEOPLE, and a table of CARS owned by various people,
and a table of PETS owned by various people. Each person may have 0 or
more pets, and each person may have 0 or more cars. I'm trying to
return a list of all the people, showing how many pets each person
has,
Thanks, this solved it!
On Nov 2, 2009, at 12:37 AM, Michael Dykman wrote:
I suspect 'distinct' might help you out here.
SELECT
people.*,
count(distinct cars.car_id) as car_count,
count(distinct pets.pet_id) as pet_count
Johnny - Your solution might actually help me solve my next step,
which is to also return a count of pets bought only within the last 7
days. Something like this:
SELECT people.*,
SUM(IF(cars.id IS NULL,0,1)) AS car_count,
SUM(IF(pets.id IS NULL,0,1)) AS pet_count,
SUM ( IF ( pets.d
Darn, it's not working after all.
SELECT
people.*,
COUNT ( DISTINCT cars.car_id ) AS car_count,
COUNT ( DISTINCT pets.pet_id ) AS pet_count,
SUM ( IF ( pets.date_bought > NOW() - INTERVAL 7 DAY, 1, 0 ) ) AS
new_pet_count
WHERE...etc
car_count and pet_count are calculating correctly, but new_p
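The likely culprit is join fan-out: with both cars and pets joined,
each pet row repeats once per car, so SUM() overcounts. Counting
distinct pet ids that meet the condition sidesteps it, since
COUNT(DISTINCT ...) ignores NULLs (join columns here are assumptions):

  SELECT people.*,
         COUNT(DISTINCT cars.car_id) AS car_count,
         COUNT(DISTINCT pets.pet_id) AS pet_count,
         COUNT(DISTINCT IF(pets.date_bought > NOW() - INTERVAL 7 DAY,
                           pets.pet_id, NULL)) AS new_pet_count
  FROM people
  LEFT JOIN cars ON cars.owner_id = people.id
  LEFT JOIN pets ON pets.owner_id = people.id
  GROUP BY people.id;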
I thought I could find the answer to this by googling, but had no luck.
How do I show a list of customers who:
(a) Have placed no orders within 14 days,
(b) Have been a customer for at least 14 days. (They do not have to
have placed any orders, ever, to be a customer.)
I'm trying to show inac
select into outfile was the ticket. Thanks!
More info. If the tables look like this:
customers
--
cust_id
signup_date
orders
--
order_id
cust_id
order_date
Then the SQL needs to accomplish something like this:
select * from customers
left join orders on customers.cust_id = orders.cust_id
where signup_date
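One way to finish that thought: push the 14-day condition into the
join, then keep only customers with no matching recent order (an
anti-join), plus the account-age requirement:

  SELECT c.cust_id
  FROM customers c
  LEFT JOIN orders o ON o.cust_id = c.cust_id
       AND o.order_date >= NOW() - INTERVAL 14 DAY
  WHERE c.signup_date <= NOW() - INTERVAL 14 DAY
    AND o.order_id IS NULL;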
Thanks. I added orders.order_date to the list of fields being
selected, but it still returns "invalid use of group function".
On Nov 3, 2009, at 1:09 PM, Martin Gainty wrote:
any column used in a group function such as SUM also needs to be
included in the select list, e.g.
select foo from tab
Hey all -
I need a few million sample contact records - name, company, address, email,
web, phone, fax. ZIP codes and area codes and street addresses should be
correct and properly formatted, but preferably not real people or companies or
email addresses. But they'd work if you did address vali
zips, exchange, so will work for mapping, phone or
address validation, whatever your needs are. Hope someone finds it useful.
http://www.briandunning.com/sample-data/
On Jan 28, 2010, at 3:52 PM, Brian Dunning wrote:
> I need a few million sample contact records - name, company, address, email
Hey all -
I have a table listing references for chapters in a book. I'm trying to find
all the cases where a single chapter lists more than one reference from the
same author. In this example table, I want it to find IDs 1 and 2, because
they're both from the same author, and both in chapter 1
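A sketch of the usual GROUP BY/HAVING answer (table and column names
assumed):

  SELECT chapter, author, COUNT(*) AS refs_in_chapter
  FROM book_refs
  GROUP BY chapter, author
  HAVING COUNT(*) > 1;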
My brain just squirted out my ear trying to figure this out, so maybe one of
you can help me scoop it back up.
I have a table of accounts, and a table of hits-per-day per account. I'm trying
to sort my accounts by the most hits-per-day, averaged over the preceding 7
days (8 days ago through yes
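A sketch of one way to do it, dividing by a fixed 7 so hitless days
count as zero, which is the point Chris raises in the next message
(table and column names assumed; adjust the window boundaries to
taste):

  SELECT a.account_id,
         COALESCE(SUM(h.hits), 0) / 7 AS avg_hits_per_day
  FROM accounts a
  LEFT JOIN hits_per_day h ON h.account_id = a.account_id
       AND h.day BETWEEN CURDATE() - INTERVAL 7 DAY
                     AND CURDATE() - INTERVAL 1 DAY
  GROUP BY a.account_id
  ORDER BY avg_hits_per_day DESC;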
You're exactly right, that's an important point that I neglected when putting
together my example. Good catch.
On Mar 18, 2010, at 11:27 AM, Chris W wrote:
> I think I would change the math. Since there are several days in there where
> there are no hits, that should in my opinion count again
Hey all -
I have a table with 12,000,000 records spread over about 6 years. I'm trying to
delete all but the last 2 years, but no matter how small of a group I try to
delete at a time, it keeps hanging up the server and I eventually have to
restart MySQL. The table looks like this:
`creation
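The usual fix is chunking, so no single statement holds locks long
enough to stall everything; repeat until it affects zero rows:

  DELETE FROM big_table
  WHERE creation < NOW() - INTERVAL 2 YEAR
  LIMIT 10000;

The other standby: copy the rows to keep into a new table and
atomically RENAME it into place, as in the swap recipe earlier in this
archive.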
Hey all -
I've just uploaded some free Canada sample data to complement the US data that
was already available. Testing apps with a representative amount of sample data
is crucial to evaluate performance. Download the data here:
http://www.briandunning.com/sample-data/
Enjoy,
- Brian
Can someone tell me if what I'm trying to do can be done more efficiently? I
just got off the phone with Rackspace when my server was hung up, and they
found a whole bunch of instances of this same query stacked up,
waiting and locked.
Here's the query:
$query = "insert ignore into
I have similar (not identical) tables in two different db's. I want to
merge all the data from one into the other, and will be dropping the
original. There will be some duplication of keys so some rows will
need to be ignored. Problem is these are on two different servers on
different machi
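A sketch of the usual shape (names assumed): move a dump of the source
table to the other server however you like, load it into a staging
copy there, then do a keyed merge and let the unique keys drop the
dupes:

  INSERT IGNORE INTO main_table
  SELECT * FROM staging_copy;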
OK, I have all my data on the new machine in a SQL file with 664,000
insert statements. But when I try to run it like this from the mysql
command line:
source filename.sql;
It starts to work, but after a few seconds the server freezes up with
"too many connections". How do I avoid this?
I am an idiot. table_a and table_b have exactly the same structure.
How do I say this in SQL:
INSERT (all records from table_a) into table_b where table_a.customer
= '12'
Just trying to eventually duplicate the whole table, one customer's
set of records at a time. Thanks.
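The one-liner being hinted at, using the thread's own names:

  INSERT INTO table_b
  SELECT * FROM table_a
  WHERE customer = '12';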
Thanks to everyone who replied. So simple I couldn't see it. :-)
Can someone spot a way to improve the performance of this query? I see
that every time it runs, it's "Copying to tmp table" and then
"Creating sort index" and taking way too long.
select count(distinct(stats.ip)) as popcount, stats.id, episodes.title
from stats, episodes where stats.id=epis
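A hedged guess at relief, assuming the truncated join condition is on
an episode id held in stats.id: a composite index covering both the
grouping column and the DISTINCT column can avoid the
tmp-table-plus-filesort pass:

  ALTER TABLE stats ADD INDEX idx_episode_ip (id, ip);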