Hi all -
I'm looking for a way to have a group of mysql servers that are all slaves
to a single master. That part I can do.
However, what I'd like is a way to put another server in front of the
entire cluster and have that pretend to be the mysql database, except that
it would simply redirect …
I've found something that works (in MySQL 5, anyway), but I don't know
whether it's accepted practice.
If I want to find all records with a date in, say, March 2006, it works if I
use "datefield like '2006-03%'" because it's a string.
This seems kind of obvious and a lot tidier than doing "datefield …"
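For the record, the range form that comparison is heading toward looks like this (table name is illustrative); unlike the LIKE version, it compares dates as dates and can use an index on the column:

  -- Range comparison: works on a DATE/DATETIME column and stays indexable
  SELECT *
  FROM mytable
  WHERE datefield >= '2006-03-01'
    AND datefield <  '2006-04-01';

LIKE forces the date to be converted to a string for every row, so the optimizer can't use an index on datefield.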
Though I understand very well that it has been discussed lots of times
before, I don't have time to browse through the previous archives
and dig out the stuff I need. So, guys, I would be thankful if you
could give me the valuable advice that I need right now.
I just need to know …
I want to design a database for lots of users. Each user will be managing
their own messages. Does it make sense to create a table for each user after
they've registered?
Or should I just create one MESSAGES table and store messages there keyed off
of their user_id?
If I create a table for each user …
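The usual answer is one shared table keyed by user_id; thousands of per-user tables strain the table cache and make cross-user queries impossible. A minimal sketch (table and column names are illustrative, not from the original post):

  CREATE TABLE messages (
    id      INT UNSIGNED NOT NULL AUTO_INCREMENT,
    user_id INT UNSIGNED NOT NULL,
    subject VARCHAR(255),
    body    TEXT,
    created DATETIME NOT NULL,
    PRIMARY KEY (id),
    INDEX (user_id)
  );

  -- One user's messages stay cheap to fetch via the user_id index:
  SELECT * FROM messages WHERE user_id = 42 ORDER BY created DESC;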
On 8/15/06, Visolve DB TEAM <[EMAIL PROTECTED]> wrote:
Hello William
Try the below Query to insert next maximum value of the field into same
table
INSERT INTO Sample(id) SELECT MAX(id)+1 FROM Sample
Wow, it works :D ... I didn't know that... great!
Wouldn't this cause a problem if …
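Presumably the concern is concurrency: two sessions can run MAX(id)+1 at the same moment, read the same maximum, and insert the same id. A sketch of the usual fix, assuming id is not already the table's primary key:

  -- Let the server assign ids atomically instead of computing MAX(id)+1:
  ALTER TABLE Sample MODIFY id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY;
  INSERT INTO Sample () VALUES ();  -- id is generated without a race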
Hi everybody,
I am running Linux AS-4 with MySQL Cluster 5.0.24 (max version). I am able to
create all the tables as NDB, but when it comes to
the import I am not able to import 20 lakhs of records into a table. Please help
me solve the problem.
20 lakhs => 2 million rows?
My memory is that cluster …
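If the guess is right that NDB's in-memory limits are the problem: in MySQL Cluster 5.0 all NDB table data lives in RAM, and the default DataMemory (80 MB) won't hold 2 million rows of any real width. The knobs live in the cluster's config.ini; the sizes below are illustrative, not a recommendation:

  [ndbd default]
  DataMemory  = 512M   # room for row data on each data node
  IndexMemory = 64M    # room for hash indexes (primary/unique keys)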
This is not necessarily SQL Specific, but it will impact my MySQL
application.
Using the zip code, how can you determine what's in a (say) 50-mile radius?
In other words, if I'm looking for schools within a 50-mile radius of my
location (Zip Code 33881), how can I determine this by the zip code?
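The standard recipe is a lookup table mapping zip codes to coordinates, then a great-circle distance against the origin zip. A sketch using the spherical law of cosines, assuming a hypothetical table zips(zip, lat, lon) in decimal degrees:

  -- 3959 ~ Earth's radius in miles
  SELECT z.zip,
         3959 * ACOS( SIN(RADIANS(o.lat)) * SIN(RADIANS(z.lat))
                    + COS(RADIANS(o.lat)) * COS(RADIANS(z.lat))
                    * COS(RADIANS(z.lon) - RADIANS(o.lon)) ) AS miles
  FROM zips z
  JOIN zips o ON o.zip = '33881'   -- the origin zip from the question
  HAVING miles <= 50
  ORDER BY miles;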
What else would prevent queries from getting into the cache?
Anything with NOW() in it or any of its related functions...
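The query cache refuses any statement whose result could change between identical calls, so NOW(), CURDATE(), RAND(), LAST_INSERT_ID() and friends all disqualify a query. The workaround is to have the application substitute a literal (table name is illustrative):

  -- Not cacheable: NOW() is nondeterministic
  SELECT * FROM news WHERE posted > NOW() - INTERVAL 1 DAY;

  -- Cacheable: a literal timestamp, ideally rounded (e.g. to the minute)
  -- so many requests reuse the same cache entry
  SELECT * FROM news WHERE posted > '2006-08-15 12:34:00' - INTERVAL 1 DAY;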
Again, I know this is not necessarily a MySQL-specific question, and sorry
for getting off target, but this is a pretty smart and experienced bunch, and
I'm sure that someone out there will be able to help me.
We have a web application in which we have a mass e-mail function. It allows
the user …
I would also try to unorder the email addresses, so that large groups of
yahoo and aol etc. addresses don't hit the outbound queue at the same time.
Those providers tend to look down on that activity and throttle you back,
making it take even longer to get your messages out.
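One cheap way to scramble the send order at query time (table and column names are illustrative):

  -- Hashing the address shuffles providers together deterministically:
  SELECT email FROM mailing_list ORDER BY MD5(email);

  -- Or a fresh random order on every run:
  SELECT email FROM mailing_list ORDER BY RAND();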
You might put special-case conditions …
Hi all, a tough query problem for me...
I have a table with 2 columns that matter: url and id.
If url and id are the same in 2 rows, then that's no good (bad data).
I need to find all the rows that are duplicates. I can't think of how
to approach the SQL for this.. any pointers?
SELECT COUNT(*) …
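The reply is cut off; the standard pattern it is starting groups on the pair and keeps only groups with more than one row:

  SELECT url, id, COUNT(*) AS dupes
  FROM mytable            -- table name is illustrative
  GROUP BY url, id
  HAVING COUNT(*) > 1;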
Philip Hallstrom wrote:
Hi all, a tough query problem for me...
I have a table with 2 columns that matter: url and id.
If url and id are the same in 2 rows, then that's no good (bad data).
I need to find all the rows that are duplicates. I can't think of how
to approach the SQL for …
I'm setting up our one server for staging and development. I need to set
up MySQL to have 2 copies of the same database on the same server. Could
anyone direct me to some documentation or tell me how to do this?
Is there a reason you can't have one instance of the mysql server and
simply have two databases in it?
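Within one instance this is just a second database; structure and data can be copied table by table (mysqldump piped back in works too). A sketch, with illustrative database names:

  CREATE DATABASE staging;
  -- Repeat per table in the source database:
  CREATE TABLE staging.mytable LIKE production.mytable;          -- structure + indexes
  INSERT INTO staging.mytable SELECT * FROM production.mytable;  -- data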
Hi there,
I am in the midst of creating a forums hosting site.
1. It has to be highly scalable, so I doubt a single MySQL db with TONS of
"subforums that simulate full forums" would do.
2. Also regarding scalability, I hope to "add capacity" as and when it's
needed. So I'll have one server running …
Just want to share and confirm my findings on a performance issue I've been
experiencing.
My database is strictly non-transactional, but it's got about 200,000 records
in this particular table. The table has a primary index, and 2 integers - one
for the date and the other for the time. Among t
Reading How AUTO_INCREMENT Handling Works in InnoDB[1] makes me
wonder how it is possible to replicate AUTO_INCREMENTed values to slaves.
According to the doc,
"If you specify an AUTO_INCREMENT column for an InnoDB table, the
table handle in the InnoDB data dictionary contains a special counter
called the auto-increment counter …
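For what it's worth, with MySQL 5.0's statement-based replication the counter itself never needs to travel: the master writes the value it generated into the binary log just before the statement, and the slave reuses it. mysqlbinlog output shows it as roughly (statement is illustrative):

  SET INSERT_ID=1234;
  INSERT INTO t (name) VALUES ('foo');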
1) Do you use a load-balancer for your MySQL Deployment?
Yes. Well, up till just a little while ago...
2) What load-balancer (product name) do you use for your MySQL Deployment?
We used to run MySQL's NDB Cluster behind a hardware load balancer (don't
remember the name, but it's not that re
pgpool is a connection pool server for PostgreSQL. pgpool runs between
PostgreSQL's clients (front ends) and servers (back ends).
Well there are a few solutions out there.
The first comparable product for MySQL is MySQL Proxy.
http://forge.mysql.com/wiki/MySQL_Proxy
Almost, but the FAQ says it'…
I have a table with a column holding the id of the person that created the
row, and another column in the same row holding the id of the person that
last modified it.
Is there a way to write a SELECT statement that will return all the rows
where the value in the creation column …
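If the cut-off question is about comparing the two columns, it's a plain column-to-column test (column names are guesses, not from the original post):

  -- Rows created and last modified by the same person:
  SELECT * FROM mytable WHERE created_by = modified_by;

  -- Or the opposite, guarding against an unset modifier:
  SELECT * FROM mytable
  WHERE modified_by IS NOT NULL AND created_by <> modified_by;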
mysql> show slave status
returns a number of fields with information on the status of the slave. Is
there a way to only return the field(s) I'm interested in...
I thought that I had figured this out, but I can't recall, and it's been
a while since I've played with this!
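SHOW SLAVE STATUS has no way to pick columns server-side, so the usual trick is to filter on the client, e.g. from the shell (credential flags omitted):

  mysql -e 'SHOW SLAVE STATUS\G' \
    | grep -E 'Slave_(IO|SQL)_Running|Seconds_Behind_Master'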
My slave server isn't n
I am doing tests, so I want an easy way to take my DB and make a full copy of
it into a test DB every time I want to test something against the
non-production version of the DB. What is the easiest way to do this?
So I have a DB called "backlog" and I want to copy its structure and
data into "backlog_test".
I have a question about the combination of RAND and LIMIT 1. If I have
a query like:
SELECT ... ORDER BY RAND() LIMIT 1
with the ... replaced with a normal query on one table. How is this
implemented? Is this optimized for the fact that it only needs one
entry?
Try prefixing your query with "
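The cut-off suggestion is presumably EXPLAIN, which answers the question directly: the plan shows that ORDER BY RAND() is not optimized for LIMIT 1 (table name is illustrative):

  EXPLAIN SELECT * FROM mytable ORDER BY RAND() LIMIT 1;
  -- The Extra column reads 'Using temporary; Using filesort':
  -- every row gets a random number and the whole set is sorted
  -- before a single row is returned.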
Maybe this is some SQL-standard behavior and that's why it is what it
is, but to me it seems completely backwards that you have to explicitly call
out the columns...
http://dev.mysql.com/doc/refman/5.0/en/insert-on-duplicate.html
Why doesn't it work in a more convenient and sane way?!
So i…
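For reference, the syntax the linked manual page describes: every column to be refreshed must be named in the UPDATE clause, with VALUES() standing in for what the INSERT would have used (table and columns are illustrative):

  INSERT INTO stats (id, hits, last_seen)
  VALUES (1, 1, NOW())
  ON DUPLICATE KEY UPDATE
    hits      = hits + VALUES(hits),   -- each column listed explicitly;
    last_seen = VALUES(last_seen);     -- there is no 'update all' shorthand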
Hello,
I have several tables storing item information, keyword (mainly for searching),
category and subcategory (also for searching). The query I am using now is:
SELECT i.*, it.*, ic.*, c.*, cs.*, s.*, st.* FROM item i
LEFT JOIN iteminfo it ON i.id = it.id
LEFT JOIN itemkeyword ik ON i.id = ik…
Not being very strong at math, I have a little problem that I'm not sure how
to solve. Maybe someone can help me.
Basically, given a point (latitude, longitude) and a radius (100 meters)
(think circle), I need to compute an equivalent square: that is, two points
that would correspond to two corners …
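For a radius this small a flat-earth approximation is fine: a degree of latitude is about 111,320 m everywhere, and a degree of longitude shrinks by cos(latitude). A sketch with illustrative coordinates:

  SET @lat = 48.8566, @lon = 2.3522, @r = 100;      -- point and radius (m)
  SET @dlat = @r / 111320;                          -- latitude degrees
  SET @dlon = @r / (111320 * COS(RADIANS(@lat)));   -- longitude degrees
  SELECT @lat - @dlat AS south, @lat + @dlat AS north,
         @lon - @dlon AS west,  @lon + @dlon AS east;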
$result= mysql_query("SELECT date_format(date, '%d/%m/%Y') as date, title,
id, display FROM news ORDER BY date DESC ");
I have the query above; the problem is it orders them like so:
30/05/2007
29/07/2007
25/0/2007
The order is taken by the first number. Is there any way to order them
properly with…
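What's biting here is that the alias date shadows the column: MySQL resolves ORDER BY against the select-list aliases first, so it sorts the formatted string day-first. Qualify the column (or pick a different alias) and the dates sort correctly:

  SELECT DATE_FORMAT(date, '%d/%m/%Y') AS display_date, title, id, display
  FROM news
  ORDER BY news.date DESC;   -- sorts on the real date column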
I have a db of about 300,000 records and when I try to find one random record
like this:
select * from table order by rand() limit 1;
it can take several minutes. My Sherlock Holmes instincts tell me that what
I'm doing is somehow inefficient. What is the primary culprit here?
The culprit is that …
Michael Stassen wrote:
For example, if the selected random id is missing, we take the next id we
find, like this:
SELECT @rand_id:= CAST( 1 + MAX(id)*RAND() AS UNSIGNED) FROM history;
SELECT * FROM history WHERE id >= @rand_id LIMIT 1;
That will have a possibly undesired effect: records that follow gaps in the
id sequence are more likely to be chosen, so the selection isn't uniform.
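A sketch of one way to avoid that bias, at the cost of occasional retries: demand an exact id match, and simply rerun both statements when the chosen id doesn't exist, so every surviving row is equally likely:

  SELECT @rand_id := CAST(1 + MAX(id) * RAND() AS UNSIGNED) FROM history;
  SELECT * FROM history WHERE id = @rand_id;   -- empty result => retry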