Re: stuck on a query

2008-12-17 Thread Gavin Towey
Three solutions. The first one is not recommended; I just showed it for fun. I
think the last one is the most efficient:


mysql> show create table job \G
*** 1. row ***
  Table: job
Create Table: CREATE TABLE `job` (
 `job_id` int(10) unsigned NOT NULL auto_increment,
 `name` varchar(12) default NULL,
 PRIMARY KEY  (`job_id`)
) ENGINE=InnoDB AUTO_INCREMENT=4 DEFAULT CHARSET=latin1
1 row in set (0.00 sec)

mysql> show create table color \G
*** 1. row ***
  Table: color
Create Table: CREATE TABLE `color` (
 `color_id` int(10) unsigned NOT NULL auto_increment,
 `color` varchar(32) default NULL,
 `job_id` int(10) unsigned default NULL,
 PRIMARY KEY  (`color_id`)
) ENGINE=InnoDB AUTO_INCREMENT=9 DEFAULT CHARSET=latin1
1 row in set (0.02 sec)


mysql> select * from job;
+--------+-------+
| job_id | name  |
+--------+-------+
|      1 | job 1 |
|      2 | job 2 |
|      3 | job3  |
+--------+-------+
3 rows in set (0.00 sec)

mysql> select * from color;
+----------+---------+--------+
| color_id | color   | job_id |
+----------+---------+--------+
|        1 | yellow  |      1 |
|        2 | cyan    |      1 |
|        3 | black   |      1 |
|        4 | cyan    |      2 |
|        5 | magenta |      2 |
|        6 | black   |      2 |
|        7 | yellow  |      2 |
|        8 | cyan    |      3 |
+----------+---------+--------+
8 rows in set (0.00 sec)

Method #1

mysql> select job_id, GROUP_CONCAT(color) as colors FROM job j JOIN color c 
USING (job_id) GROUP BY job_id HAVING colors LIKE '%cyan%magenta';
+--------+---------------------------+
| job_id | colors                    |
+--------+---------------------------+
|      2 | black,yellow,cyan,magenta |
+--------+---------------------------+
1 row in set (0.01 sec)


Method #2
SELECT j.job_id, c1.color, c2.color FROM job j JOIN color c1 ON 
j.job_id=c1.job_id AND c1.color='cyan' JOIN color c2 ON j.job_id=c2.job_id AND 
c2.color='magenta';
+--------+-------+---------+
| job_id | color | color   |
+--------+-------+---------+
|      2 | cyan  | magenta |
+--------+-------+---------+
1 row in set (0.00 sec)

Method #3
mysql> SELECT job_id, BIT_OR(CASE WHEN c.color='cyan' THEN 1 WHEN 
c.color='magenta' THEN 2 END) as colors  FROM job j JOIN color c USING (job_id) 
GROUP BY job_id HAVING colors=3;
+--------+--------+
| job_id | colors |
+--------+--------+
|      2 |      3 |
+--------+--------+
1 row in set (0.00 sec)
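Method #3's bitmask trick (relational division) can be reproduced outside MySQL too. A minimal Python/sqlite3 sketch of the same data — SQLite has no BIT_OR aggregate, so SUM(DISTINCT ...) over power-of-two flags stands in for it, which gives the same result since 1|2 == 1+2 when each flag is distinct:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE color (color_id INTEGER PRIMARY KEY, color TEXT, job_id INTEGER);
INSERT INTO color (color, job_id) VALUES
  ('yellow',1),('cyan',1),('black',1),
  ('cyan',2),('magenta',2),('black',2),('yellow',2),
  ('cyan',3);
""")

# SUM(DISTINCT ...) over distinct power-of-two flags emulates BIT_OR:
# a job has both colors exactly when the flags OR (here: sum) to 3.
rows = conn.execute("""
    SELECT job_id,
           SUM(DISTINCT CASE WHEN color='cyan'    THEN 1
                             WHEN color='magenta' THEN 2 END) AS colors
    FROM color
    GROUP BY job_id
    HAVING colors = 3
""").fetchall()
print(rows)  # [(2, 3)]
```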


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: How to set proper ibdata file?

2009-05-14 Thread Gavin Towey
David,

The only metric is "as big as it needs to be."

Personally, I recommend setting innodb-file-per-table in the my.cnf, then
configuring the ibdata file to be some small size with the autoextend option
enabled.  That way it will take up only as much space as it needs, and you'll
be able to reclaim space effectively by dropping tables.
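A minimal my.cnf sketch of that setup (the 10M starting size is illustrative; pick whatever small value suits you):

```ini
[mysqld]
# One .ibd tablespace per table, so dropping a table returns its space
innodb_file_per_table
# Small shared tablespace that grows only as needed
innodb_data_file_path = ibdata1:10M:autoextend
```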

Regards,
Gavin Towey






 Hi.

   I want to know if there's a metric to adjust the InnoDB ibdata file

size.

Any reply is appreciated.

--

David Yeung,

MySQL Senior Support Engineer,

Sun Gold Partner.

My Blog: http://yueliangdao0608.cublog.cn

Company: http://www.actionsky.com




The information contained in this transmission may contain privileged and 
confidential information. It is intended only for the use of the person(s) 
named above. If you are not the intended recipient, you are hereby notified 
that any review, dissemination, distribution or duplication of this 
communication is strictly prohibited. If you are not the intended recipient, 
please contact the sender by reply email and destroy all copies of the original 
message.


RE: Select Into OUTFILE problem

2009-05-14 Thread Gavin Towey
Hi Bruce,



SELECT ... INTO OUTFILE always creates the file local to the database server.  
If you want to dump results where your perl script is running you'll have to 
use another method such as receiving the results of the query normally and 
writing the file in the perl script.
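A sketch of that client-side approach (Python with sqlite3 standing in for the real MySQL driver; in Perl the equivalent would use DBI's fetch methods). The point is that the rows travel over the connection and the file is written wherever the script runs, not on the database server:

```python
import csv
import io
import sqlite3

# sqlite3 is a stand-in here for a real MySQL connection.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE alerts (stamp TEXT, status TEXT, reason TEXT);
INSERT INTO alerts VALUES ('2009-05-14 10:00:00', 'OK', 'none');
""")

rows = conn.execute(
    "SELECT stamp, status, reason FROM alerts ORDER BY stamp DESC")

buf = io.StringIO()  # in a real script: open('/application/result.csv', 'w')
writer = csv.writer(buf, quoting=csv.QUOTE_ALL)  # mimics ENCLOSED BY '"'
writer.writerows(rows)
print(buf.getvalue().strip())  # "2009-05-14 10:00:00","OK","none"
```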



Regards,

Gavin Towey





I have a bit of perl code that ends with an error:



$sql="SELECT convert_tz( a.stamp,'GMT','$tz' ) as ts, a.status,

 a.reason, a.tl

  INTO OUTFILE  '/application/result.csv'

  FIELDS TERMINATED BY ','

  ENCLOSED BY '\"'

  LINES TERMINATED BY '\n'

  FROM alerts a

  WHERE a.stamp BETWEEN ? AND ?

  ORDER BY a.stamp DESC";



 $sth = $dbh->prepare($sql);

 $rv = $sth->execute;



DBD::mysql::st execute failed: Can't create/write to file

'/application/result.csv' (Errcode: 2)



The database is remote from the system where the perl is executing.



The SQL works as expected when fed to the mysql command line client

(i.e. mysql -h remote).  The outfile ends up in the application

directory of the machine running the mysql client.



What I'd found is, when the perl code runs the file tries to drop on the

database server and the application directory doesn't exist there giving

me the error.



Any suggestions to get the outfile to drop in the right place would be

appreciated.



Bruce







RE: MAC address as primary key - BIGINT or CHAR(12)

2009-05-14 Thread Gavin Towey
A MAC address is just a number, it doesn't contain letters unless you're doing 
something silly like storing the HEX representation of it. Do not use CHAR!  
This does DOUBLE for all of you storing IP addresses!



Since a MAC address is going to be between 48 and 64 bits, then BIGINT is 
appropriate.  When you select the value you can do SELECT HEX(mac) FROM table; 
to get a more readable version of it.   When you're storing values you can do: 
INSERT INTO table (mac) VALUES (x'FFEEDDCCBBAA9988');  to convert a hex string 
to the numeric value.
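The same round trip can be sketched in plain application code; the helper names below are my own, not from any library:

```python
def mac_to_int(mac: str) -> int:
    """Parse 'FF:EE:DD:CC:BB:AA' (or dashed/bare forms) into a 48-bit int."""
    return int(mac.replace(":", "").replace("-", ""), 16)

def int_to_mac(n: int) -> str:
    """Format a 48-bit int back into colon-separated hex, byte by byte."""
    return ":".join(f"{(n >> s) & 0xFF:02X}" for s in range(40, -1, -8))

n = mac_to_int("FF:EE:DD:CC:BB:AA")
print(n == 0xFFEEDDCCBBAA)  # True
print(int_to_mac(n))        # FF:EE:DD:CC:BB:AA
```

Storing the integer (8 bytes as BIGINT) instead of CHAR(12) (12+ bytes) also makes the primary key smaller and comparisons cheaper.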





Date: Thu, 14 May 2009 09:53:58 -0400

To: Ilia KATZ 

From: Fish Kungfu 

Cc: mysql@lists.mysql.com

Subject: Re: MAC address as primary key - BIGINT or CHAR(12)




Since MAC addreses also contain letters, BIGINT wouldn't work.  So, yes, I

would say go with CHAR(12).



On May 14, 2009 9:43 AM, "Ilia KATZ"  wrote:



Hi.

Currently I have a table:

1. MAC address defined as BIGINT

2. MAC address set as primary key



Should I consider changing it to CHAR(12)?



Replies will be appreciated.

Ilia





Re: MAC address as primary key - BIGINT or CHAR(12)

2009-05-14 Thread Gavin Towey
"I've run up on the rock of a binary (meaning: indecipherable) field."





SELECT hex(some_binary_field) FROM table;



Solved.





RE: mysql not able to import mysqldump file

2009-05-19 Thread Gavin Towey
Hi Niel,

What version is the mysql dump from?  Are you importing to a different version?

Could you show the line from the file that is generating the error?

Regards,
Gavin Towey


-Original Message-
From: Neil Aggarwal [mailto:n...@jammconsulting.com]
Sent: Tuesday, May 19, 2009 6:56 AM
To: mysql@lists.mysql.com
Subject: mysql not able to import mysqldump file

Hello:

I have a database with several double columns
in the tables.

I used mysqldump to dump the data from the primary
database and I am trying to import it into a
secondary database.

I am importing the data by passing the generated
sql file to the mysql command line client.

When I do that, I am getting this error:

Illegal double '1.79769313486232e+308' value found during parsing

Any ideas how to fix this?

Thanks,
Neil


--
Neil Aggarwal, (832)245-7314, www.JAMMConsulting.com
Eliminate junk email and reclaim your inbox.
Visit http://www.spammilter.com for details.








RE: load data into temporary table

2009-05-19 Thread Gavin Towey
Hi Alex,

It is true that to use LOAD DATA INFILE you need to know the schema of the
table.  I'm not sure how useful it would be to import arbitrary data if you
don't have some expectations about what that data is.  There are a couple of
options for you:

1. Make sure your users upload a CSV in a specific format; reject
non-conforming input.

2. Let your script transform the user uploaded CSV file into the format the 
database is expecting.

3. Have your script simply parse the user uploaded CSV and generate insert 
statements as needed.  Just because you want to accept CSV from your app, does 
not mean you must use LOAD DATA INFILE to get the data into MySQL.
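Option 3 might look like this in Python (sqlite3 and the two-column schema are stand-ins for illustration): parse the upload in the app and insert with a parameterized statement, so no LOAD DATA INFILE (and no server-side FILE privilege) is needed.

```python
import csv
import io
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE upload (name TEXT, qty INTEGER)")

# Stand-in for the user-uploaded file; a real app would read the upload.
user_csv = io.StringIO("widget,3\ngadget,5\n")

# Validate/convert each row in the app, then bulk-insert with placeholders.
rows = [(name, int(qty)) for name, qty in csv.reader(user_csv)]
conn.executemany("INSERT INTO upload (name, qty) VALUES (?, ?)", rows)

print(conn.execute("SELECT SUM(qty) FROM upload").fetchone()[0])  # 8
```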

Regards,
Gavin Towey

-Original Message-
From: alex.ksi...@gmail.com [mailto:alex.ksi...@gmail.com] On Behalf Of Alex K
Sent: Tuesday, May 19, 2009 5:43 AM
To: MySQL General List
Subject: Re: load data into temporary table

Thank you but the real problem occurs when you don't know the schema
of the table in advance. If data.txt has two columns columns how can I
still load it in a temporary table? I'm asking this question because
I'd like to add an import csv feature to a web application. I know
that you can load data infile into table without specifying the schema
of this table but it does not look like you can do load data infile
into a temporary table.

Thank you,

Alex

2009/5/19 Janek Bogucki :
> Hi,
>
> mysql> create temporary table t(i int);
>
> mysql> \! echo 1 > /tmp/data.txt
>
> mysql> load data infile '/tmp/data.txt' into table t;
> Query OK, 1 row affected (0.00 sec)
> Records: 1  Deleted: 0  Skipped: 0  Warnings: 0
>
> mysql> select * from t;
> +------+
> | i    |
> +------+
> |    1 |
> +------+
> 1 row in set (0.00 sec)
>
> Best Regards,
> -Janek, CMDEV 5.0.
> StudyLink. Helping People Realise Their Potential.
> http://studylink.com
>
>
> On Tue, 2009-05-19 at 02:57 +0100, Alex K wrote:
>> Hello,
>>
>> Would anyone know how to load data infile into a temporary table?
>>
>> Thank you,
>>
>> Alex
>>
>
>
>
>







RE: SOS mysql signal syntax error

2009-05-20 Thread Gavin Towey
Interesting.  This syntax is only supposed to be available as of 5.4, but it 
doesn't even work there.  The reference I found was at :
http://dev.mysql.com/tech-resources/articles/mysql-54.html

But I couldn't find other references to the new signal support.

This is listed as the example on that page, but it doesn't work in 5.4.0-beta

CREATE PROCEDURE p (divisor INT)
BEGIN
 DECLARE divide_by_zero CONDITION FOR SQLSTATE '22012';
 IF divisor = 0 THEN
  SIGNAL divide_by_zero;
END IF;
END

Methinks someone forgot to include this feature in the release!


-Original Message-
From: Alex Katebi [mailto:alex.kat...@gmail.com]
Sent: Wednesday, May 20, 2009 10:58 AM
To: mysql
Subject: SOS mysql signal syntax error

Hi Folks,

  I am getting syntax error with the mysql signal. I have a trigger
that needs a signal for raising an error condition if a row with
specific value  is removed.

  CREATE TRIGGER my_trig BEFORE DELETE ON my_tbl
  FOR EACH ROW BEGIN
  DECLARE mysig CONDITION FOR SQLSTATE '45000';
  IF OLD.name = 'base' THEN
 SIGNAL mysig SET MESSAGE_TEXT='base row removal is not allowed';
  END IF;
  END







RE: mySQL slave IO Running and SQL Running

2009-05-20 Thread Gavin Towey
Please note that this is *NOT* a way to "get them synched again".

In fact if you have to skip a replication statement on the slave then it is 
usually a sign your slave has different data than you master already.  Skipping 
statements/errors may keep replication running, but you're just masking 
problems.



-Original Message-
From: Claudio Nanni [mailto:claudio.na...@gmail.com]
Sent: Wednesday, May 20, 2009 12:49 PM
To: Daevid Vincent
Cc: mysql@lists.mysql.com
Subject: Re: mySQL slave IO Running and SQL Running

Yeah Daevid!  I know very well the issue!

first set the slave to READ ONLY

[mysqld]
read-only

then there is a configuration option to tell the server to skip some type of
errors automatically

slave-skip-errors=

http://dev.mysql.com/doc/refman/5.1/en/replication-options-slave.html#option_mysqld_slave-skip-errors


But, But, BUT!

What I did was remove the constraint on the slave's table, so that you can
control things better: if you systematically skip the foreign-key errors, you
will skip them on every table, whereas if you remove just that one constraint
on that one table, you keep the situation more under control.

I think one of these two is enough; the cron job is definitely not recommended!

Ciao

Claudio



2009/5/20 Daevid Vincent 

> We have a master / slave setup and as you know, one bad query can ruin your
> whole day. Or if you accidentally write to the slave when you meant to
> write
> to the master, or any number of other things that break the fragility of a
> replication setup.
>
> The magic incantation to get them synched again seems to be to login to the
> slave and do this (over and over again until the Slave_IO_Running and
> Slave_SQL_Running both say "Yes"):
>
> mysql> stop slave; SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 1; start slave; show
> slave status\G
>
> Is there a way to automate this a little bit. Maybe some bash script that
> uses "mysql -e" and parses for those two strings?
> Is this dangerous to do?
> Is there a setting to have the slave do this already?
>
> In every case I've ever seen, it's always some SQL that got out of whack
> like this:
>
> Last_Error: Error 'Duplicate key name 'id_operator'' on query. Default
> database: 'core'. Query: 'ALTER TABLE `user_has_notification` ADD INDEX
> `id_operator` (`id_operator`)'
>





RE: Left join query

2009-06-01 Thread Gavin Towey
A LEFT OUTER JOIN in that query is not necessary.  An inner join should be used.
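A quick way to see why: once the WHERE clause filters on columns of the right-hand table, the NULL rows a LEFT JOIN would preserve get discarded anyway, so the two joins return identical results. A small Python/sqlite3 sketch (schema and data are simplified stand-ins for the tables below):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE airport (code TEXT, status INTEGER);
CREATE TABLE common  (code TEXT, status INTEGER, type TEXT);
INSERT INTO airport VALUES ('CHN',1),('HYD',0),('WAK',1);
INSERT INTO common  VALUES ('CHN',1,'airport'),('HYD',0,'airport'),
                           ('WAK',1,'airport');
""")

where = "a.status=1 AND c.status=1 AND c.type='airport'"
left = conn.execute(f"""SELECT a.code FROM airport a
                        LEFT OUTER JOIN common c ON a.code=c.code
                        WHERE {where} ORDER BY a.code""").fetchall()
inner = conn.execute(f"""SELECT a.code FROM airport a
                         JOIN common c ON a.code=c.code
                         WHERE {where} ORDER BY a.code""").fetchall()
print(left == inner, inner)  # True [('CHN',), ('WAK',)]
```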

-Original Message-
From: Shiv [mailto:shiv...@gmail.com]
Sent: Saturday, May 30, 2009 10:18 PM
To: bharani kumar
Cc: mysql
Subject: Re: Left join query

Hi,
   Along with tables, you should also provide details on how they are
related.  Assuming "Code" is unique in both tables and left joined on
Airport table, you can do something like this

SELECT A.Code, C.Code, A.SlNo, C.SlNo, A.Name, C.Location, A.status,
C.status, C.type
FROM Airport A
LEFT OUTER JOIN Common C
ON A.Code = C.Code
WHERE A.status = 1 and C.status = 1 and C.type = 'airport'
Cheers,
Shiva


On Fri, May 29, 2009 at 8:52 PM, bharani kumar <
bharanikumariyer...@gmail.com> wrote:

> Airport table
>
> SlNo  Name       Code  Auth  Location  status
> 1     Chennai    CHN   Yes   India     1
> 2     Hydarabed  HYD   Yes   India     0
> 3     walker     WAK   Yes   uk        1
>
>
> common table
>
> SlNo  Name         Code  Type     Location  status
> 1     Chennai      CHN   Airport  India     1
> 2     guest Road   GR    Seaport  India     1
> 3     Hyderabad    HYD   Airport  India     0
> 4     John garden  JG    Seaport  India     0
> 5     walker       WAK   airport  uk        1
>
>
> Hi All
>
> Can u please tell the query for the above table ,
>
> Display the record which satisfy below condtions ,
>
> 1.Need to display airport name , Where airport status = 1 and common
> table status = 1 and common table type = airport
>
> Output somthing like below
>
> AirportCode  CommonCode  AirportSlNo  CommonSlNo  AirportName  CommonLocation  Status  Type
>
> CHN          CHN         1            1           Chennai      india           1       airport
>
> WAK          WAK         3            5           walker       uk              1       airport
>
>
> Any idea ,
>
>
>
>
> Thnaks
>
>
>
>
>
>
>
> --
> Regards
> B.S.Bharanikumar
> http://php-mysql-jquery.blogspot.com/
>
>
>





RE: mysql 5.0.67

2009-06-03 Thread Gavin Towey
Niel,

I don't use ASP, but it sounds more like a syntax error in your script.
http://classicasp.aspfaq.com/general/how-do-i-prevent-invalid-use-of-null-errors.html

However, if you're still having problems, please give us the exact query you're 
running, the table structure, and the Code snippet from your script.

Regards,
Gavin Towey

-Original Message-
From: Tompkins Neil [mailto:neil.tompk...@googlemail.com]
Sent: Wednesday, June 03, 2009 7:26 AM
To: [MySQL]
Subject: mysql 5.0.67

Hi

I'm using version 5.0.67 and have a strange problem.  I have a text field
and have a query which returned 193 characters (with spaces).  In the mySQL
query browser I can see the field content fine.

However on my classic ASP page, I get nothing when I reponse the data to the
screen. Infact if I try to perform a replace function on the field data I
get


Microsoft VBScript runtime error '800a005e'

Invalid use of Null: 'replace'



Any ideas what the problem might be ?
Cheers
Neil





RE: MySQL tuning

2009-07-01 Thread Gavin Towey
That's impossible to say with the limited information you've provided; however, 
try this script for some basic tuning suggestions.  It should make sure you're 
not doing anything crazy:

https://launchpad.net/mysql-tuning-primer

Regards,
Gavin Towey

-Original Message-
From: fa so [mailto:fak...@yahoo.com]
Sent: Tuesday, June 30, 2009 10:11 PM
To: mysql@lists.mysql.com
Subject: MySQL tuning

Hi all,
I have a MySQL server with 1.7 GB of RAM, Some of the tables in my database use 
MyISAM and others use InnoDB

Do you think the following mysql configurations are Ok? or should I change 
something?

innodb_buffer_pool_size =712M
innodb_additional_mem_pool_size =20M
innodb_log_buffer_size =8M
Current tmp_table_size  = 32 M
key_buffer_size   = 712M
max_allowed_packet  = 16M
thread_stack= 128K
thread_cache_size   = 128
myisam-recover  = BACKUP
max_connections= 1000
query_cache_limit   = 1M
query_cache_size= 64M
expire_logs_days= 10
max_binlog_size = 100M

thank you









RE: CONCAT with IF?

2009-07-08 Thread Gavin Towey
Something like:

SET @version = "6.0";
SELECT CASE direction WHEN '>' THEN IF(@version > version, 'Y', 'N') WHEN '<'
THEN IF(@version < version, 'Y', 'N') ... END AS operation FROM test;
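One caveat with comparing versions as strings either way: '6.10' sorts before '6.2' lexically. If the comparison can live in the application instead, a sketch like this (the helper names are my own) avoids both the CONCAT pitfall and the string-ordering one by dispatching on the stored operator:

```python
import operator

OPS = {">": operator.gt, ">=": operator.ge,
       "<": operator.lt, "<=": operator.le, "=": operator.eq}

def applies(current: str, direction: str, version: str) -> bool:
    # Compare as numeric tuples so "6.10" correctly sorts after "6.2".
    key = lambda v: tuple(int(p) for p in v.split("."))
    return OPS[direction](key(current), key(version))

# Rows as (direction, version), mirroring the test table above.
rows = [("=", "6.0"), (">=", "6.0"), ("<", "6.1"), (">", "6.2")]
print([applies("6.0", d, v) for d, v in rows])  # [True, True, True, False]
```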

-Original Message-
From: Matt Neimeyer [mailto:m...@neimeyer.org]
Sent: Wednesday, July 08, 2009 2:45 PM
To: mysql@lists.mysql.com
Subject: CONCAT with IF?

I want to store the product version that an article applies to and a
comparison operator in my news system. But I can't wrap my head around
the where clause...

Here's what I've tried...

CREATE TABLE test (version char(10), direction char(2));

select concat("6.0",direction,version) as operation from test;

+-----------+
| operation |
+-----------+
| 6.0=6.0   |
| 6.0>=6.0  |
| 6.0<6.1   |
| 6.0>6.2   |
+-----------+

But when I do select if(concat("6.0",direction,version),"Y","N") from
test I ALWAYS get "Y"... which means I wouldn't get any meaningful
results if I used it in a where clause...

Any ideas on ways I can do this?

Thanks!

Matt







RE: [PHP] mysql cache query as xml

2009-07-10 Thread Gavin Towey
This is more of an application architecture question than a mysql question.  
Though I'll offer my two cents:

What are you going to do with the XML file?  Write it to disk, so you can later 
read it, and parse it just to get your result set back?  That sounds a lot 
slower than fetching it from mysql again.  You can test it though -- write a 
script, time a bunch of iterations and prove to yourself if it's a good idea or 
not.

Generally the way to optimize reading xml faster is to not do it.

IMO if you want caching then use something that was meant for it like memcached.

Regards,
Gavin Towey


-Original Message-
From: paras...@gmail.com [mailto:paras...@gmail.com] On Behalf Of Daniel Brown
Sent: Friday, July 10, 2009 9:37 AM
To: workerho...@studysite.eu
Cc: PHP; MYSQL General List
Subject: Re: [PHP] mysql cache query as xml

Chris;

From my understanding of your question, your message (included
below in its entirety) is better sent to the MySQL General list, which
I've CC'd on this reply.  If you haven't yet, please subscribe there
at mysql-subscr...@lists.mysql.com to follow the thread for responses.

If I'm misunderstanding and you're asking a PHP-related question,
please rephrase your question.

[Full original message follows.]


On Fri, Jul 10, 2009 at 12:22,
workerho...@studysite.eu wrote:
> hi guys, i need some help optimizing performance.
> my problem is that i need a lot of rows across the whole site (don't ask, i
> really need the rows :-) )
> this is about ~4000 rows, loaded from the mysql database in 0.3
> seconds
> my idea was to cache this rows in a xml file like for example:
>
> 
>   some hash id
>   category title 
> 
> ..
>
> that is: load the query from MySQL first, save the result to XML, use the
> cached file for 6 hours, then erase it and run the query against MySQL again.
> but loading the same number of rows from XML takes more than 3 seconds,
> whereas MySQL needs just 0.3 seconds.
>
> how can i optimize the reading from xml faster?
>
> server design:
> 2 mysql server (Master & Slave with Replication  )
> 8 Applikation Server with connect to the 2 mysql server
>
> this is the reason why i want to cache this query! other queries just
> need about 0.0004 seconds, but this is the slowest query!
> i hope someone can help me or has a better idea to solve this problem!
>
> thanks chris
>
>
> --
> PHP General Mailing List (http://www.php.net/)
> To unsubscribe, visit: http://www.php.net/unsub.php
>
>



--

daniel.br...@parasane.net || danbr...@php.net
http://www.parasane.net/ || http://www.pilotpig.net/
Check out our great hosting and dedicated server deals at
http://twitter.com/pilotpig







RE: Any tool convert ibdata1/ib_logfile0/ib_logfile1 files to myisam data offline

2009-07-15 Thread Gavin Towey
Or you could just do an export with
mysqldump --single-transaction > backup.sql

Then replace all the ENGINE=InnoDB with ENGINE=MyISAM in the .sql file.
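The substitution itself is a one-liner, whether done with sed or in a small script. A hedged Python sketch (the dump line is invented; on a real dump you would stream the file and only rewrite lines containing table options, so data rows are never touched):

```python
import re

dump = "CREATE TABLE t (id INT) ENGINE=InnoDB DEFAULT CHARSET=latin1;"
# Word boundaries keep the match anchored to the exact engine token.
converted = re.sub(r"\bENGINE=InnoDB\b", "ENGINE=MyISAM", dump)
print(converted)  # CREATE TABLE t (id INT) ENGINE=MyISAM DEFAULT CHARSET=latin1;
```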

-Original Message-
From: mos [mailto:mo...@fastmail.fm]
Sent: Wednesday, July 15, 2009 11:58 AM
To: mysql@lists.mysql.com
Subject: Re: Any tool convert ibdata1/ib_logfile0/ib_logfile1 files to myisam 
data offline

At 07:53 AM 7/15/2009, you wrote:
>hi all,
>
>I have backup all the database files(innodb engine).
>Now I want to recover it as myisam engine offline.
>
>Any suggestions?

Why can't you copy the InnoDb files to another machine and open the
database with another MySQL server? (You could do it on your current server
but it will be CPU & disk intensive).

  Then do a:

create table MyTable1 engine=MYISAM select * from InnoTable1;

If these tables are large, then you could use:

create table MyTable1 engine=MYISAM select * from InnoTable1 limit 0;
insert into MyTable1 select * from InnoTable1 limit 0,100000;
insert into MyTable1 select * from InnoTable1 limit 100000,100000;

to import just 100k rows at a time.

Now build the indexes for MyTable1.

Mike








RE: ordering search results

2009-07-17 Thread Gavin Towey
From your description of your problem, it's very difficult to understand what 
you're doing and what results you're trying to get.

Making some assumptions, I think you might need to use a technique described in
this link to get the results you want: 
http://jan.kneschke.de/projects/mysql/groupwise-max
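That groupwise-max technique, in one common form (Python/sqlite3 stand-in; the book/edition schema is invented for illustration): join each row against its group's maximum so only one row per group survives.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE book (title TEXT, edition INTEGER);
INSERT INTO book VALUES ('Alpha',1),('Alpha',2),('Beta',1);
""")

# The derived table m holds each group's max; the join keeps matching rows.
rows = conn.execute("""
    SELECT b.title, b.edition
    FROM book b
    JOIN (SELECT title, MAX(edition) AS max_ed
          FROM book GROUP BY title) m
      ON b.title = m.title AND b.edition = m.max_ed
    ORDER BY b.title
""").fetchall()
print(rows)  # [('Alpha', 2), ('Beta', 1)]
```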

Otherwise, for better help:
1) Show us the exact query you are doing, and the structure of the tables.
2) Show a small sample data set if possible
3) Show what the result set you want would look like.

Regards,
Gavin Towey

-Original Message-
From: PJ [mailto:af.gour...@videotron.ca]
Sent: Friday, July 17, 2009 8:07 AM
To: MySql
Subject: ordering search results

Can't find anything on the web that deals with my problem(s).
I have to display thousands of book listings by title, sub_title with 10
books per page. The php/mysql code works fine - except:
ASC or DESC does not change one iota.
I have checked by commandline and find that it is not working at all how
I would expect.
From the command line, using just title and switching between ASC & DESC gives
totally different results rather than displaying the same data in
reverse order.
The display is, as mentioned above, 10 books per output page: so, from
what appears to me, the ordering seems to be done on the entire db & not
just on the search results (this is basically from a SELECT statement).
Furthermore, not all the data is in 1 table; authors, categories &
publishers are in separate tables because of 1 to many & many to 1
relationships.
Still another problem is the use of a number of foreign languages which
have those strange accents on many letters that do not sort very well.
Now, that I have spewed out my problems, would it be possible that there
is someone out there who could suggest how to go about figuring this out?
Thanks in advance.

--
Hervé Kempf: "Pour sauver la planète, sortez du capitalisme."
-
Phil Jourdan --- p...@ptahhotep.com
   http://www.ptahhotep.com
   http://www.chiccantine.com/andypantry.php




The information contained in this transmission may contain privileged and 
confidential information. It is intended only for the use of the person(s) 
named above. If you are not the intended recipient, you are hereby notified 
that any review, dissemination, distribution or duplication of this 
communication is strictly prohibited. If you are not the intended recipient, 
please contact the sender by reply email and destroy all copies of the original 
message.


RE: Slave log files going nuts...

2009-07-17 Thread Gavin Towey
The binlogs are closed and reopened every time you do a FLUSH LOGS; command, or 
when the server restarts.  Is your server crashing continuously?  Take a look 
at your error log as well.
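For reference, the rotation can be watched from the client; FLUSH LOGS here stands in for whatever is rotating the slave's logs:

```sql
-- List the current binary logs and their sizes:
SHOW BINARY LOGS;
-- Close and reopen the binlog by hand - the same thing a
-- 'mysqladmin refresh' or a server restart does:
FLUSH LOGS;
-- A new, nearly empty file should now appear at the end of the list:
SHOW BINARY LOGS;
```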

Regards,
Gavin Towey

-Original Message-
From: Gary Smith [mailto:g...@primeexalia.com]
Sent: Thursday, July 16, 2009 3:16 PM
To: Todd Lyons
Cc: mysql@lists.mysql.com
Subject: RE: Slave log files going nuts...

I'll have to look into that.  We have a single cronjob that just does a 
mysqldump daily but not during the time of the log file generation, but that's 
all I can think of off the top of my head.

Gary


From: tly...@sitehelp.org [tly...@sitehelp.org] On Behalf Of Todd Lyons 
[tly...@ivenue.com]
Sent: Thursday, July 16, 2009 2:41 PM
To: Gary Smith
Cc: mysql@lists.mysql.com
Subject: Re: Slave log files going nuts...

On Thu, Jul 16, 2009 at 1:18 PM, Gary Smith wrote:
> I have a new slave I setup against a new master server.  The master server 
> has 4 log files in it, the most current being updated on the 16th.  The slave 
> server on the other hand has several files, many which seem to be blank.  
> This slave is set to slave the master and act as a master for downstream 
> slaves.  Note, there is no master/master on this configuration, even though 
> the master itself could do it.
>
> Any ideas?

Something is issuing several 'mysqladmin refresh' (or related) commands
in quick succession.  Look at your cron jobs that start or end around
the time that all those "empty" binlogs are being created.

--
Regards...  Todd




RE: ordering search results

2009-07-17 Thread Gavin Towey
Please echo the query and show the actual result.  We have no way to know what 
your program puts in your variables.  The problem is often that a programming 
error causes the code to construct a query that's different from the one you 
assume is being run.

Regards,
Gavin Towey

-Original Message-
From: PJ [mailto:af.gour...@videotron.ca]
Sent: Friday, July 17, 2009 1:25 PM
To: Darryle Steplight
Cc: MySql
Subject: Re: ordering search results

Here's the query:
"SELECT * FROM book ORDER BY $sort $dir LIMIT $offset, $records_per_page";
$sort == 'title, sub_title'; $dir==ASC (or DESC); $offset== (any
multiple of 10); $records_per_page== 10;
I could let the user change these, but I decided it is simpler to keep
it at 10 books/page. Only the $dir is user settable.
The character set is utf8-general. (I just went through some
self-torture changing all those foreign curlicues to :&cutesy_stuff;
fortunately, the mysql replace() function helped - but six languages
have quite a few weird accents.)
The title field can be fairly long - 182 chars; sub_title is 128, but it
looks like I have to lengthen that too; found some truncated inputs.

Another problem is the leading "The"s - how do you exclude them
from the ordering?

Also: one output with just title for field returned a series of The
Art.., The Birds..., The Birth...etc. in ASC; whereas DESC returned:
Boats, Black Cumin, Birds of..., Biological..., Bioarchaeology..,
Avaris... etc.



Darryle Steplight wrote:
> You might have to change the collation you are currently using to one
> that best match the language of those weird accents you are referring
> too. That's part of the reason you may be getting unexpected results
> with your ORDER BY statement. Also, can you show us your select
> statements?
>
> On Fri, Jul 17, 2009 at 11:06 AM, PJ wrote:
>
>> Can't find anything on the web that deals with my problem(s).
>> I have to display thousands of book listings by title, sub_title with 10
>> books per page. The php/mysql code works fine - except:
>> ASC or DESC does not change one iota.
>> I have checked by commandline and find that it is not working at all how
>> I would expect.
>> From commandline, using just title and switching between ASC & DESC give
>> totally different results rather than displaying the same data in
>> reverse order.
>> The display is, as mentioned above, 10 books per output page: so, from
>> what appears to me, the ordering seems to be done on the entire db & not
>> just on the search results (this is basically from a SELECT statement).
>> Furthermore, not all the data is in 1 table; authors, categories &
>> publishers are in separate tables because of 1 to many & many to 1
>> relationships.
>> Still another problem is the use of a number of foreign languages which
>> have those strange accent on many letters that do not order very well.
>> Now, that I have spewed out my problems, would it be possible that there
>> is someone out there who could suggest how to go about figuring this out?
>> Thanks in advance.
>>
>> --
>> Hervé Kempf: "Pour sauver la planète, sortez du capitalisme."
>> -
>> Phil Jourdan --- p...@ptahhotep.com
>>    http://www.ptahhotep.com
>>    http://www.chiccantine.com/andypantry.php
>>
>>
>>

--
Hervé Kempf: "Pour sauver la planète, sortez du capitalisme."
-
Phil Jourdan --- p...@ptahhotep.com
   http://www.ptahhotep.com
   http://www.chiccantine.com/andypantry.php




RE: ordering search results

2009-07-17 Thread Gavin Towey
I think I see it anyway:

ORDER BY title DESC, sub_title ASC

Each column in the ORDER BY list takes its own ASC/DESC.  Appending a single 
direction keyword, as your statement does, changes the sort order of only the 
last column, not both.
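A sketch of the fix for the query PJ posted, assuming $dir is meant to apply to both columns: interpolate the direction after each column instead of once at the end.

```sql
-- A single trailing $dir produces 'ORDER BY title, sub_title DESC',
-- which sorts only sub_title descending.  Repeat it per column:
SELECT title, sub_title
FROM book
ORDER BY title DESC, sub_title DESC
LIMIT 0, 10;
```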



-Original Message-
From: PJ [mailto:af.gour...@videotron.ca]
Sent: Friday, July 17, 2009 1:25 PM
To: Darryle Steplight
Cc: MySql
Subject: Re: ordering search results

Here's the query:
"SELECT * FROM book ORDER BY $sort $dir LIMIT $offset, $records_per_page";
$sort == 'title, sub_title'; $dir==ASC (or DESC); $offset== (any
multiple of 10); $records_per_page== 10;
I could let the user change these, but I decided it is simpler to keep
it at 10 books/page. Only the $dir is user settable.
The character set is utf8-general. (I just went through some
self-torture changing all those foreign curlicues to :&cutesy_stuff;
fortunately, the mysql replace() function helped - but six languages
have quite a few weird accents.)
The title field can be fairly long - 182 chars; sub_title is 128, but it
looks like I have to lengthen that too; found some truncated inputs.

Another problem is the leading "The"s - how do you exclude them
from the ordering?

Also: one output with just title for field returned a series of The
Art.., The Birds..., The Birth...etc. in ASC; whereas DESC returned:
Boats, Black Cumin, Birds of..., Biological..., Bioarchaeology..,
Avaris... etc.



Darryle Steplight wrote:
> You might have to change the collation you are currently using to one
> that best match the language of those weird accents you are referring
> too. That's part of the reason you may be getting unexpected results
> with your ORDER BY statement. Also, can you show us your select
> statements?
>
> On Fri, Jul 17, 2009 at 11:06 AM, PJ wrote:
>
>> Can't find anything on the web that deals with my problem(s).
>> I have to display thousands of book listings by title, sub_title with 10
>> books per page. The php/mysql code works fine - except:
>> ASC or DESC does not change one iota.
>> I have checked by commandline and find that it is not working at all how
>> I would expect.
>> From commandline, using just title and switching between ASC & DESC give
>> totally different results rather than displaying the same data in
>> reverse order.
>> The display is, as mentioned above, 10 books per output page: so, from
>> what appears to me, the ordering seems to be done on the entire db & not
>> just on the search results (this is basically from a SELECT statement).
>> Furthermore, not all the data is in 1 table; authors, categories &
>> publishers are in separate tables because of 1 to many & many to 1
>> relationships.
>> Still another problem is the use of a number of foreign languages which
>> have those strange accent on many letters that do not order very well.
>> Now, that I have spewed out my problems, would it be possible that there
>> is someone out there who could suggest how to go about figuring this out?
>> Thanks in advance.
>>
>> --
>> Hervé Kempf: "Pour sauver la planète, sortez du capitalisme."
>> -
>> Phil Jourdan --- p...@ptahhotep.com
>>    http://www.ptahhotep.com
>>    http://www.chiccantine.com/andypantry.php
>>
>>
>>

--
Hervé Kempf: "Pour sauver la planète, sortez du capitalisme."
-
Phil Jourdan --- p...@ptahhotep.com
   http://www.ptahhotep.com
   http://www.chiccantine.com/andypantry.php




RE: Query_cache instance creation

2009-07-21 Thread Gavin Towey
You might have better luck on the mysql-internals list

-Original Message-
From: Rajarshi Chowdhury [mailto:mailtorajar...@gmail.com]
Sent: Tuesday, July 21, 2009 4:58 AM
To: mysql@lists.mysql.com
Subject: Query_cache instance creation

Hi,

MySQL query cache implementation is based on the Query_cache object (ref:
sql_cache.cc). But I cannot find where the instance for the object is
created ... (like new Query_cache qcache ...). Can anybody point me to the
file please?

Regards,
Raja




RE: Selecting from the range of serial numbers

2009-07-24 Thread Gavin Towey
WHERE 70 BETWEEN start_sequence AND end_sequence
 AND 'C' BETWEEN start_batch AND end_batch;

Though what you should really do is add a unique auto-increment ID to every 
serial number; then you only have to do a simple range check.

Or use a fully relational structure, with a one-to-many or many-to-many 
relationship to your stock table.
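Another sketch, since check-in ranges can cross batches (like 9B - 00010C above): map each serial to a single linear key, batch first, so one BETWEEN covers the whole range.  This assumes 5-digit sequences and single-letter batches A-Z.

```sql
-- Serial 00070B -> (ASCII('B') - ASCII('A')) * 100000 + 70 = 100070.
-- A nonzero count means the serial is already checked in:
SELECT COUNT(*) AS checked_in
FROM STOCK_CHECKIN
WHERE (ASCII('B') - ASCII('A')) * 100000 + 70
      BETWEEN (ASCII(start_batch) - ASCII('A')) * 100000 + start_sequence
          AND (ASCII(end_batch)   - ASCII('A')) * 100000 + end_sequence;
```

Unlike comparing sequence and batch independently, the linear key correctly handles a range like 9B - 00010C that spans a batch boundary.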

Regards,
Gavin Towey

-Original Message-
From: hezjing [mailto:hezj...@gmail.com]
Sent: Friday, July 24, 2009 10:27 AM
To: mysql@lists.mysql.com
Subject: Selecting from the range of serial numbers

Hi

My stock serial number format is 1A - 9A, 1B - 9B ... 1Z
- 9Z.

These stocks are checked in to the warehouse in a different order, so the worker
may check in the stocks like

1A - 00100A
9B - 00010C
00051B - 00070B


I have a table to keep track the stock check in transaction:


STOCK_CHECKIN

start_sequence int(10) unsigned
start_batch char(1)
end_sequence int(10) unsigned
end_batch char(1)


and the data for the above example will look like this:

start_sequence start_batch end_sequence end_batch
1 A  00100 A
9 B  00010 C
00051 B  00070 B


I couldn't figure out how to determine if a specific serial number is
already checked in to the warehouse.
Based on the above data, the serial numbers 00010A, 1C and 00070B are
already checked in.

What would the effective SQL look like?


Thank you!


--

Hez



RE: SELECT query question

2009-07-27 Thread Gavin Towey
Should be more efficient to do something like:

SELECT Main_ID FROM Table1 WHERE Source1_Name = 'name'
UNION
SELECT Main_ID FROM Table2 WHERE Source2_Name = 'name'
UNION
SELECT Main_ID FROM Table3 WHERE Source3_Name = 'name'
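If the Main_Name is needed as well, the same UNION can serve as a derived table (a sketch, keeping the literal 'name' placeholder from above):

```sql
-- Collect the matching IDs once, then join back for the name:
SELECT m.Main_ID, m.Main_Name
FROM MainTable m
JOIN (
    SELECT Main_ID FROM Table1 WHERE Source1_Name = 'name'
    UNION
    SELECT Main_ID FROM Table2 WHERE Source2_Name = 'name'
    UNION
    SELECT Main_ID FROM Table3 WHERE Source3_Name = 'name'
) hits USING (Main_ID);
```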


-Original Message-
From: João Cândido de Souza Neto [mailto:j...@consultorweb.cnt.br]
Sent: Monday, July 27, 2009 1:09 PM
To: mysql@lists.mysql.com
Subject: Re: SELECT query question

SELECT *
FROM MainTable MT
LEFT JOIN Table1 T1 ON MT.Main_ID = T1.Main_ID
LEFT JOIN Table2 T2 ON MT.Main_ID = T2.Main_ID
LEFT JOIN Table3 T3 ON MT.Main_ID = T3.Main_ID
WHERE
    T1.Source1_Name = "anything" OR
    T2.Source2_Name = "anything" OR
    T3.Source3_Name = "anything"

Not tested.

--
João Cândido de Souza Neto
SIENS SOLUÇÕES EM GESTÃO DE NEGÓCIOS
Fone: (0XX41) 3033-3636 - JS
www.siens.com.br

"Rytsareva, Inna (I)" wrote in message
news:3c9bdf0e91897443ad3c8b34ca8bdca80218f...@usmdlmdowx028.dow.com...
Hello.

I have 4 tables:

MainTable (Main_ID, Main_Name)
Table1 (Source1_ID, Source1_Name, Main_ID)
Table2 (Source2_ID, Source2_Name, Main_ID)
Table3 (Source3_ID, Source3_Name, Main_ID)

And a search box.

A user can type any names from Source1_Name or Source2_Name or
Source3_Name.
I need to get Main_ID

How to make it?

Thanks, Inna







RE: User Defined Types

2009-07-28 Thread Gavin Towey
Nope

-Original Message-
From: Jeff Lanzarotta [mailto:delux256-my...@yahoo.com]
Sent: Tuesday, July 28, 2009 11:37 AM
To: mysql@lists.mysql.com
Subject: User Defined Types

Hello,

I am evaluating MySQL. I am coming from Microsoft SQL Server 2005.

This may have been discussed before but I have not been able to find it. In SQL 
Server you are able to define a user-defined type like this:

CREATE TYPE [dbo].[status] FROM INT NOT NULL

This will allow you to then define columns like this:

CREATE TABLE [dbo].[tableName]
(
[statusColumn] [dbo].[status]
)

Is there such a thing in MySQL?

Thanks,

Jeff




RE: Replication recovery on restart

2009-07-30 Thread Gavin Towey
Hi Bryan,

Please define "out of whack."  Tell us exactly what you're doing when you 
restart, and what the replication state is before and after, and where the 
updates are coming from.

Regards,
Gavin Towey

-Original Message-
From: Cantwell, Bryan [mailto:bcantw...@firescope.com]
Sent: Thursday, July 30, 2009 11:00 AM
To: mysql@lists.mysql.com
Subject: Replication recovery on restart

I have 2 machines 'master' and 'slave'. I have the mysql 5.0.51a-log databases 
both replicating wonderfully. They are configured in a dual master scenario so 
that one can take over for the other in my HA environment I've built. All is 
working great until... If one or the other box reboots or the mysql restarts, 
the replication gets out of whack. Especially if I simulate both of them 
crashing in a worst case scenario, they are then both trying to sync from the 
wrong Master_log_file and Read_Master_Log_Pos...

Since catastrophe WILL happen eventually (hence the need for HA), how do I 
direct the newly restarted boxes to the right position in the correct files on 
restart?

Thanks




RE: Replication recovery on restart

2009-07-31 Thread Gavin Towey
Bryan,

How are you restarting mysql? In the case where a master crashes, it's 
definitely common for the slave to miss the fact that the master is now using a 
different binlog.  The slave advances to a position past the end of the previous 
binlog, and stops with an error like "tried to read impossible position."  In 
this case you do have to intervene, but that's an easy enough case to write a 
script to handle.

When restarting mysql normally, you shouldn't have this problem: i.e. service 
mysql restart or /etc/init.d/mysql restart.

Regards,
Gavin Towey

-Original Message-
From: Cantwell, Bryan [mailto:bcantw...@firescope.com]
Sent: Friday, July 31, 2009 10:08 AM
To: mysql@lists.mysql.com
Subject: RE: Replication recovery on restart

Before I simulate a total server failure, master1 is using binary file 
mysql-bin1 position 2231467, and its slave master2 is following the 
correct file at the correct position. This is after initial setup. Once I 
restart master1, it will then start to use mysql-bin2 position 98, while 
master2 is still trying to follow mysql-bin1 position 2231467.

And since I have this as a dual-master setup, if I simulate both boxes 
restarting in a total catastrophe, both masters change files and both slaves 
keep trying to follow the old information.



-----Original Message-
From: Gavin Towey [mailto:gto...@ffn.com]
Sent: Thursday, July 30, 2009 5:08 PM
To: Cantwell, Bryan; mysql@lists.mysql.com
Subject: RE: Replication recovery on restart

Hi Bryan,

Please define "out of whack."  Tell us exactly what you're doing when you 
restart, and what the replication state is before and after, and where the 
updates are coming from.

Regards,
Gavin Towey

-Original Message-
From: Cantwell, Bryan [mailto:bcantw...@firescope.com]
Sent: Thursday, July 30, 2009 11:00 AM
To: mysql@lists.mysql.com
Subject: Replication recovery on restart

I have 2 machines 'master' and 'slave'. I have the mysql 5.0.51a-log databases 
both replicating wonderfully. They are configured in a dual master scenario so 
that one can take over for the other in my HA environment I've built. All is 
working great until... If one or the other box reboots or the mysql restarts, 
the replication gets out of whack. Especially if I simulate both of them 
crashing in a worst case scenario, they are then both trying to sync from the 
wrong Master_log_file and Read_Master_Log_Pos...

Since catastrophe WILL happen eventually (hence the need for HA), how do I 
direct the newly restarted boxes to the right position in the correct files on 
restart?

Thanks




RE: logging slow queries with time

2009-07-31 Thread Gavin Towey
Entries in the slow log have a timestamp.  You can read the file directly, but 
it's much easier to use a tool like maatkit for parsing the results of the log.

Try this:
http://www.maatkit.org/doc/mk-query-digest.html
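Independently of the log, sampling the server during the noon window can show what is running at that moment; a quick check, assuming you can still connect during the stall:

```sql
-- What is each connection doing right now (full query text)?
SHOW FULL PROCESSLIST;
-- How many threads are actively executing?
SHOW GLOBAL STATUS LIKE 'Threads_running';
```

Running these from cron every minute around noon gives a crude but effective picture of which queries pile up when the iowait spikes.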

Regards,
Gavin Towey


-Original Message-
From: Milan Andric [mailto:mand...@gmail.com]
Sent: Friday, July 31, 2009 11:15 AM
To: mysql@lists.mysql.com
Subject: logging slow queries with time

Hello,

I'm serving a burly Drupal install and at some points throughout the
day the mysql threads go way up and iowait peaks.  I'm not sure which
is causing which but during this time the server is unresponsive.  I
would like to determine if there is a poorly optimized query causing
this.  I'm logging slow queries but is there a way to see when the
slow queries take place also?  I'd like to know what queries are being
processed during this window of poor response time, usually around
noon local time.

Thanks in advance,

--
Milan




RE: Replication recovery on restart

2009-07-31 Thread Gavin Towey
Bryan,

When the slave encounters that error, you can simply set it to replicate from 
the next binlog file in the sequence starting at position 98.  It should be 
easy to have a script automate this process.
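A sketch of what such a script would issue on the slave. The binlog file name here is hypothetical; take the real next-in-sequence name from the slave's error message or from SHOW SLAVE STATUS.

```sql
-- Repoint the slave at the start of the next binlog after the master
-- restarted; 98 is the first usable position in a MySQL 5.0 binlog.
STOP SLAVE;
CHANGE MASTER TO
    MASTER_LOG_FILE = 'mysql-bin.000002',
    MASTER_LOG_POS  = 98;
START SLAVE;
-- Verify Slave_IO_Running and Slave_SQL_Running are both 'Yes':
SHOW SLAVE STATUS\G
```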

Regards,
Gavin Towey

-Original Message-
From: Cantwell, Bryan [mailto:bcantw...@firescope.com]
Sent: Friday, July 31, 2009 12:51 PM
To: mysql@lists.mysql.com
Subject: RE: Replication recovery on restart

Yes, I am trying to simulate total failure. In this test case I am using 2 
virtual machines; I just kill one, and when it comes back I have the challenge 
described.
How can I go about getting the slave back in tune with the newly restarted 
master?

Thanks

-Original Message-
From: Gavin Towey [mailto:gto...@ffn.com]
Sent: Friday, July 31, 2009 1:21 PM
To: Cantwell, Bryan; mysql@lists.mysql.com
Subject: RE: Replication recovery on restart

Bryan,

How are you restarting mysql? In the case where a master crashes, it's 
definitely common for the slave to miss the fact that the master is now using a 
different binlog.  The slave advances to a position past the end of the previous 
binlog, and stops with an error like "tried to read impossible position."  In 
this case you do have to intervene, but that's an easy enough case to write a 
script to handle.

When restarting mysql normally, you shouldn't have this problem: i.e. service 
mysql restart or /etc/init.d/mysql restart.

Regards,
Gavin Towey

-Original Message-
From: Cantwell, Bryan [mailto:bcantw...@firescope.com]
Sent: Friday, July 31, 2009 10:08 AM
To: mysql@lists.mysql.com
Subject: RE: Replication recovery on restart

Before I simulate a total server failure, master1 is using binary file 
mysql-bin1 position 2231467, and its slave master2 is following the 
correct file at the correct position. This is after initial setup. Once I 
restart master1, it will then start to use mysql-bin2 position 98, while 
master2 is still trying to follow mysql-bin1 position 2231467.

And since I have this as a dual-master setup, if I simulate both boxes 
restarting in a total catastrophe, both masters change files and both slaves 
keep trying to follow the old information.



-----Original Message-
From: Gavin Towey [mailto:gto...@ffn.com]
Sent: Thursday, July 30, 2009 5:08 PM
To: Cantwell, Bryan; mysql@lists.mysql.com
Subject: RE: Replication recovery on restart

Hi Bryan,

Please define "out of whack."  Tell us exactly what you're doing when you 
restart, and what the replication state is before and after, and where the 
updates are coming from.

Regards,
Gavin Towey

-Original Message-
From: Cantwell, Bryan [mailto:bcantw...@firescope.com]
Sent: Thursday, July 30, 2009 11:00 AM
To: mysql@lists.mysql.com
Subject: Replication recovery on restart

I have 2 machines 'master' and 'slave'. I have the mysql 5.0.51a-log databases 
both replicating wonderfully. They are configured in a dual master scenario so 
that one can take over for the other in my HA environment I've built. All is 
working great until... If one or the other box reboots or the mysql restarts, 
the replication gets out of whack. Especially if I simulate both of them 
crashing in a worst case scenario, they are then both trying to sync from the 
wrong Master_log_file and Read_Master_Log_Pos...

Since catastrophe WILL happen eventually (hence the need for HA), how do I 
direct the newly restarted boxes to the right position in the correct files on 
restart?

Thanks


RE: Table advice.

2009-08-03 Thread Gavin Towey
Another trick is  to reverse the domain when you store it.  For example instead 
of storing www.facebook.com, store it as com.facebook.www.  That way you write 
a query like:

WHERE domain LIKE 'com.facebook%'

This will use the index, since you're not using a wildcard at the beginning of 
the string, and the results will include all subdomains for the given domain 
you're looking for.
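Against the event table from this thread, the reversed-domain version of the earlier query would look like this (assuming domains are stored reversed at insert time):

```sql
-- Left-anchored pattern, so the index on `domain` can be used;
-- matches facebook.com and all of its subdomains:
SELECT domain, COUNT(domain) AS count
FROM event
WHERE domain LIKE 'com.facebook%'
GROUP BY domain
ORDER BY count DESC;
```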

Regards,
Gavin Towey

-Original Message-
From: Alexander Kolesen [mailto:kolese...@mail.by]
Sent: Saturday, August 01, 2009 11:10 AM
To: mysql@lists.mysql.com
Subject: Re: Table advice.

Hello.
Your query performs a full table scan, because when the pattern starts with a 
'%' wildcard, MySQL can't use an index. Try an external full-text search engine 
such as Sphinx (http://www.sphinxsearch.com/) or Lucene 
(http://lucene.apache.org).
> I have a database that I am (will) be using to track URL's. The table
> structure looks like this:
>
> CREATE TABLE event
> (
>   eid   INT UNSIGNED NOT NULL AUTO_INCREMENT,
>   timestamp INT(10) UNSIGNED NOT NULL DEFAULT 0,
>   ipINT(10) UNSIGNED NOT NULL DEFAULT 0,
>   fqdn  VARCHAR(255),
>   domainVARCHAR(63),
>   tld   VARCHAR(63),
>   actionVARCHAR(4),
>   request   TEXT,
>   referrer  TEXT,
>   clientVARCHAR(255),
>   INDEX eid (eid),
>   INDEX timestamp (timestamp),
>   INDEX ip (ip),
>   INDEX fqdn (fqdn),
>   INDEX domain (domain),
>   INDEX tld (tld)
> );
>
> There is no real logic behind the indexes; the table was hobbled
> together by looking at examples. Currently I am trying queries on about
> 300 million records and the results are pretty crappy. for example, a
> query like this:
>
> select domain,count(domain) as count from event where domain like
> '%facebook%' group by domain order by count desc;
>
> takes about 5 minutes to complete.
>
> Most of the queries will be like that above but probably with
> additional filters like date constraints or IP constraints or a
> mixture of both. I can also see searches through the requests for
> filetypes etc.
>
> Any suggestions or comments would be appreciated.
>
> Thanks.
>




RE: Remote connection

2009-08-04 Thread Gavin Towey


1.) Remove bind-address= and skip-networking from my.cnf.
2.) Grant permission to the external 'user'@'host'.
3.) Remove any firewall rules blocking port 3306.
4.) Make sure there are no overrides on the mysqld command line.

See  http://hashmysql.org/index.php?title=Remote_Clients_Cannot_Connect
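
A hedged sketch of steps 1-2 (user, host, and database names are
placeholders):

```sql
-- Step 1: in the [mysqld] section of my.cnf, comment out
--   skip-networking
--   bind-address = 127.0.0.1
-- and restart mysqld.

-- Step 2: create a grant that matches the *client's* address:
GRANT ALL PRIVILEGES ON mydb.* TO 'appuser'@'192.168.12.%'
  IDENTIFIED BY 'secret';
FLUSH PRIVILEGES;
```

Note that ERROR 1045 (access denied) means the TCP connection already reached
the server, so it usually points at step 2 (a missing or mismatched grant)
rather than a networking problem.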


If you continue to have problems, give us the exact steps you have tried and 
the exact error message you are receiving.  Please try to connect using the 
mysql command line.

Regards,
Gavin Towey

-Original Message-
From: Hugo Leonardo Ferrer Rebello [mailto:hugo.rebe...@t-systems.com.br]
Sent: Tuesday, August 04, 2009 1:09 PM
To: mysql@lists.mysql.com
Subject: Remote connection

Hello Guys,

I have a question: I'm trying to access the MySQL database remotely, but I
can't.

I have changed the skip-networking option in the my.cnf file, but it doesn't
work.

I have also tried bind_address = 0.0.0.0, but it still doesn't work. I did
comment out the skip-networking option before enabling bind_address.

I don't know what else I must do. Can anybody help me?

Look at the error message below:

# mysql -u root -p -h 192.168.12.50
Enter password:
ERROR 1045 (28000): Access denied for user 'root'@'192.168.12.50' (using
password: YES)

Cheers,

Hugo









RE: Question about MySQL

2009-08-07 Thread Gavin Towey
I always accepted that NFS was unacceptably slow for database access, until I
actually tested it.  It turns out that with lots of RAM and properly tuned
caches, you're optimizing for minimal IO anyway, and a good file server will
have massive IOPS.  On top of that, if you're using GigE over a few hops, then
it's really not slower than local disks.

Remember: benchmark and test your assumptions!

Regards,
Gavin Towey

-Original Message-
From: joerg.bru...@sun.com [mailto:joerg.bru...@sun.com]
Sent: Friday, August 07, 2009 1:19 AM
To: Peter Chacko
Cc: mysql
Subject: Re: Question about MySQL

Hi Peter, all,


let me just concentrate on the NFS aspect:


Peter Chacko wrote:
> [[...]]
>
> Another question is, what's the general experience of running MySQL
> servers on NFS shares?

I would *never* use NFS storage for any DBMS (except for some testing):
NFS access is slower than local disk access, and it adds more components
to the critical path. So your operations get slower, reliability
decreases, and (in case of any trouble) analysis becomes more difficult.

I cannot imagine any setup where you have a machine strong enough to run
your DBMS on it, but not capable of driving sufficient local disks.

The typical argument for having centralized disks serving many machines
is based on economies of scale (huge disks), flexibility (partitioning),
and centralized management (RAID replacement, backup).
There may be some merit to this in a specialized setup (NAS systems -
I'm not convinced of them, but don't claim expert knowledge about them),
but IMO not using general-purpose machines and NFS.

Whatever the economical advantages during normal operation may be, you
should not forget the huge costs you would incur if any in-between
component breaks and your database stops operating.
This may be tolerable for some applications, depending on the required
availability, but simply intolerable for others.


> We are working on developing a custom
> protocol for MySQL clustering that takes care of all file management
> as part of the database clustering protocol, rather than a storage
> engine feature.

Sorry, I don't get the meaning of this in sufficient detail to comment.


> What is the likely setup of database storage? Is
> it on a raw partition or on a file system?

That may depend on the storage engine used.
MyISAM uses a file-per-table approach, so must be run on a file system;
InnoDB may be able to use a raw partition (I'm not sure there);
for others, I can't tell.


>   Will ocfs2  be better used
> for mySQL as well, in a clustered environment ?

I strongly doubt it.

AIUI, the big advantage of cluster file systems is that they allow all
machines to access and modify shared data.
With a DBMS, you don't want to share data across machines; you want the
database to be accessed only by the one machine (possibly multi-CPU, but
shared memory) running the database processes, because on that machine
you have the various database caches.
Then, that machine makes the data available to all clients, so you get a
logical sharing on a higher protocol level (SQL).

To have multiple machines accessing the same database storage, you would
first need some protocol to ensure cache coherency, and that is not
contained in MySQL (in the general server).
To use MySQL on multiple machines for the same data, you set up replication.

The alternative approach would be to use MySQL Cluster, which is
designed to hold the data in main memory (for extremely low latency) and
to use the disk only for backup purposes.


>
> I would appreciate if any one share with me their thoughts on this.

My comments above are based on my experience during DBMS development
(including distributed DBMS), but not on any financial calculations or
DBA work. Weigh them with other answers.


Regards,
Jörg

--
Joerg Bruehe,  MySQL Build Team,  joerg.bru...@sun.com
Sun Microsystems GmbH,   Komturstraße 18a,   D-12099 Berlin
Geschaeftsfuehrer: Thomas Schroeder, Wolfgang Engels, Wolf Frenkel
Vorsitzender des Aufsichtsrates: Martin Haering Muenchen: HRB161028





RE: Question about MySQL

2009-08-11 Thread Gavin Towey
Hi Banyan,

I'm really just talking about basic optimization techniques:

1. Install lots of RAM
2. Convert all tables to InnoDB
3. Allocate about 80% of memory to innodb_buffer_pool_size

If you haven't seen this script yet, I suggest you start here:
https://launchpad.net/mysql-tuning-primer
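
For points 2 and 3, a minimal my.cnf sketch (the 12G figure is a placeholder
for a dedicated 16 GB host, not a value from this thread):

```ini
[mysqld]
default-storage-engine = InnoDB
# roughly 80% of RAM on a dedicated database server (placeholder value)
innodb_buffer_pool_size = 12G
```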

Regards,
Gavin Towey

-Original Message-
From: Banyan He [mailto:ban...@rootong.com]
Sent: Friday, August 07, 2009 11:12 AM
To: Gavin Towey; joerg.bru...@sun.com; Peter Chacko
Cc: mysql
Subject: Re: Question about MySQL

Hi Gavin,

I am interested in the things you made for the optimization. Can you share
with us such things?

Thanks a lot,

--
Banyan He
Network & System Security Infrastructure
Mail: ban...@rootong.com
Blog: http://www.rootong.com/blog
LinkedIn: http://www.linkedin.com/in/banyanhe
Website: http://www.rootong.com


> From: Gavin Towey 
> Date: Fri, 7 Aug 2009 11:07:19 -0700
> To: "joerg.bru...@sun.com" , Peter Chacko
> 
> Cc: mysql 
> Subject: RE: Question about MySQL
>
> I always accepted that NFS was unacceptably slow for database access, until I
> actually tested it.  Turns out that with lots of RAM and properly tuned
> caches, you're optimizing for minimal IO anyway. A good file server will have
> massive amounts of IO OPS.  On top of that if you're using GigE over few hops,
> then it's really not slower than local disks.
>
> Remember: benchmark and test your assumptions!
>
> Regards,
> Gavin Towey

RE: Replication - connecting a slave to a master on the same host via a port or socket

2009-08-11 Thread Gavin Towey
Andrew,

Yes, it's true: when you specify localhost, you're using the local
socket file.  The port only has meaning for TCP connections.
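
One hedged workaround for the original question (port and credentials are
placeholders): point the slave at 127.0.0.1 instead of localhost, which forces
a TCP connection:

```sql
CHANGE MASTER TO
    MASTER_HOST = '127.0.0.1',   -- an IP forces TCP; 'localhost' means the socket
    MASTER_PORT = 3307,          -- placeholder: the master instance's port
    MASTER_USER = 'repl',        -- placeholder credentials
    MASTER_PASSWORD = 'secret';
```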

Regards,
Gavin Towey

-Original Message-
From: Andrew Braithwaite [mailto:andrew.braithwa...@lovefilm.com]
Sent: Tuesday, August 11, 2009 8:38 AM
To: mysql@lists.mysql.com
Subject: RE: Replication - connecting a slave to a master on the same host via 
a port or socket

Ah.  I have found that if you use 'localhost' to connect, you cannot
specify a port, it silently fails...

You can connect using a hostname (even though it's the same server),
specifying a port...

Andrew

-Original Message-
From: Andrew Braithwaite [mailto:andrew.braithwa...@lovefilm.com]
Sent: 11 August 2009 16:12
To: mysql@lists.mysql.com
Subject: Replication - connecting a slave to a master on the same host
via a port or socket

Hi,

I have 2 mysql instances running on a server on different ports with
different datadirs and different .sock files.

I can connect locally via the sock with the -S flag to mysql but I
cannot connect locally via port (-P flag).

Does anyone know if there is a way to configure a mysql slave to use a
socket to connect to a master on localhost?

If not; does anyone know a way to connect to another mysql instance on
the same linux server using a port?

Thanks for your help,

Andrew




RE: Slow performance Query

2009-08-11 Thread Gavin Towey
Have you tried removing the quotes from around the value in:

user_id='1421767810'

The column is defined as BIGINT, but you're comparing it to a string.  I just
saw a case where comparing a float value to a string column in a query caused
it to take a long time.

-Original Message-
From: Dan Nelson [mailto:dnel...@allantgroup.com]
Sent: Tuesday, August 11, 2009 1:09 PM
To: Tachu(R)
Cc: mysql@lists.mysql.com
Subject: Re: Slow performance Query

In the last episode (Aug 11), Tachu(R) said:
> Hi guys, I've been having some slow performance on queries that should
> otherwise be pretty fast.  I've checked my indexes etc. and can't see what
> could cause it; here is an example.  This one is taking long in the sending
> data step, although it's running on localhost so it's not a
> network issue.  I sometimes have queries take long in the statistics
> step, although I cannot find a reliable document that says what
> statistics means.  Can anyone throw some help here?

Is the system serving a lot of other queries at the same time?  On an idle
system that query should take a fraction of a second.  One way to speed it
up would be to add another index on (user_id,app_id).  That will group all
the data you need together in one block in the index so mysql won't have to
seek into the table at all.  Your `app_id` index has the necessary columns,
but your WHERE clause needs an index with user_id first so it has to fall
back to the `user_id` index, which doesn't have the app_id column.
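
The covering index Dan describes, as a hedged one-liner (the index name is
illustrative):

```sql
ALTER TABLE app_user ADD INDEX user_app (user_id, app_id);
```

With user_id first, the WHERE clause can use the index, and because app_id is
also in it, the SELECT is satisfied from the index alone (EXPLAIN would show
"Using index" in the Extra column).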

> select app_id from app_user where user_id='1421767810' limit 3;
> +--------+
> | app_id |
> +--------+
> | 100876 |
> |  46888 |
> |  93166 |
> +--------+
> 3 rows in set (1.16 sec)

> mysql> show create table app_user;
> | app_user | CREATE TABLE `app_user` (
>`app_user_id` int(10) unsigned NOT NULL AUTO_INCREMENT,
>`app_id` int(10) unsigned NOT NULL,
>`user_id` bigint(20) unsigned NOT NULL,
>`num_sent` int(10) unsigned NOT NULL,
>PRIMARY KEY (`app_user_id`),
>KEY `app_id` (`app_id`,`user_id`),
>KEY `user_id` (`user_id`),
>
> mysql> explain select app_id from app_user where user_id='1421767810'
> limit 3;
> +----+-------------+----------+------+---------------+---------+---------+-------+------+-------+
> | id | select_type | table    | type | possible_keys | key     | key_len | ref   | rows | Extra |
> +----+-------------+----------+------+---------------+---------+---------+-------+------+-------+
> |  1 | SIMPLE      | app_user | ref  | user_id       | user_id | 8       | const |    5 |       |
> +----+-------------+----------+------+---------------+---------+---------+-------+------+-------+
> 1 row in set (0.01 sec)

--
Dan Nelson
dnel...@allantgroup.com




RE: Erratic query performance

2009-08-13 Thread Gavin Towey
Hi Leo,

Also include the EXPLAIN SELECT ...;  output, and the SHOW CREATE TABLE table\G 
for each table used in the query.

Have you considered that your query's execution time depends on other activity 
on the server?  Have you tried skipping the cache ( SELECT SQL_NO_CACHE ... ) 
and see if you get consistent times?  What about running this directly through 
the mysql cli?

Regards,
Gavin Towey

-Original Message-
From: Leo Siefert [mailto:lsief...@sbcglobal.net]
Sent: Thursday, August 13, 2009 1:10 PM
To: mysql@lists.mysql.com
Subject: Erratic query performance

I have a moderate sized database set up and a program that allows users to 
create ad-hoc queries into the data based on entries in a form, so that I, as 
the programmer, have control over the actual construction of the queries and 
can do what is needed to optimize queries. I also keep a log of all queries 
that are run so that I can easily see the exact query that may have caused a 
problem.

For the most part, unless a query is quite complex, there is no problem with 
the response time - from a few seconds up to a minute or two for more complex 
queries or one returning very large result sets. Recently a seemingly very 
simple query has resulted in unacceptably long processing times.

After playing around with the query in PhpMyAdmin I am totally perplexed as to 
what could be causing the problem. Sometimes the query will execute in less 
than 30 seconds, but other times it takes from 4 to 10 or more minutes. It 
never seems to complete in between 30 seconds and 4 minutes.

To try to isolate the problem today I did a lot of testing on an isolated 
server - nothing on it but MySql and this database and no one but me has access 
to it.  Tried rearranging the joins and eliminating one of the joins as well as 
everything else I could think of to figure out what could be causing the issue. 
Through all of the testing I got consistent results in the four minute range 
for all of the variations I tried - repeated attempts with the same query 
varied by only a second or two.

Then I went back to my program and ran the original query on the "public"
database - the same place the problem had originally been found - and
instead of timing out the gateway (five-minute limit) as it had done
consistently over the past few days, it ran successfully in about 20 seconds.
I was able to repeat this many times, both using the program and by entering
the query into PhpMyAdmin. It still takes 4 minutes on the private server, though.

A couple of hours later - shortly before starting this message - I tried again 
on the public server and again the response time was under 30 seconds. Trying 
again now and it's on its way to timing out again. Checked and there are no 
other processes running on the server - volume is usually low as there are less 
than 100 users total.

Any ideas of what could be causing the varied response time on a simple query 
when everything on the server appears to be identical from one run to another? 
Are there settings that can be made on the server to tweak response time for a 
database/query like this?

Here are stats on the files involved in the query and the actual query I am 
trying to run. Note that the number of receipts with amount >= 1 is very
small compared to the total number of records.

Main: 900,000 records, 500 Mb (886,361 where recordtype = "INDIVIDUAL")
  Primary key: id (int)
Receipt: 4,500,000 records, 700 Mb (6,817 where amount <= 1)
  Primary key: id (int)
  Indexed on: mainid (int)
Committee: 4,500 records, 600Kb (1,476 where party = "D")
  Primary key: id (int)
  Indexed on: Comm_id (varchar(6))

create temporary table tmp type = heap
select distinct 3 filterid, m.id, "" GroupLevel, 0 GroupCum
from main m
left join receipt r on m.id = r.mainid
left join campaccommon.committee c on r.comm_id = c.Comm_id
where
   recordtype = "INDIVIDUAL"
   and c.party = "D"
   and r.amount >= 1

Returns 294 records.

Thanks for any insight you can offer.

 - Leo Siefert




RE: foreign keys: Cannot create InnoDB table

2009-08-14 Thread Gavin Towey
Run:
SHOW ENGINE INNODB STATUS \G

And look for the "LATEST FOREIGN KEY ERROR" section.  It'll explain the reason 
for the (errno: 150) message.
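
In the DDL quoted below, the likely cause is visible even without the status
output: the REFERENCES list names the parent columns as (col1, col3, col2),
but InnoDB requires the referenced columns to match an index on the parent
table in the same order, and tbl2's PRIMARY KEY is (col1, col2, col3). A
hedged fix:

```sql
CONSTRAINT `fk_test`
  FOREIGN KEY (`col1` , `col2` , `col3`)
  REFERENCES `test_fk`.`tbl2` (`col1` , `col2` , `col3`)  -- same order as the PK
  ON DELETE NO ACTION
  ON UPDATE NO ACTION
```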

Regards,
Gavin Towey

-Original Message-
From: wabiko.takuma [mailto:wab...@sysrdc.ns-sol.co.jp]
Sent: Friday, August 14, 2009 3:35 AM
To: mysql@lists.mysql.com
Subject: foreign keys: Cannot create InnoDB table

Hi, All,

I can't create an InnoDB table with foreign key constraints using more than 3
columns.
When I create table `test_fk`.`tbl1`, it gives me:

Can't create table 'test_fk.tbl1' (errno: 150)

why? CREATE TABLE syntax looks perfectly right to me.

Any suggestions are welcome.

Thank you,
wabi

-- -----------------------------------------------------
-- DDL
CREATE  TABLE IF NOT EXISTS `test_fk`.`tbl2` (
  `col1` VARCHAR(2) NOT NULL ,
  `col2` VARCHAR(2) NOT NULL ,
  `col3` VARCHAR(2) NOT NULL ,
  PRIMARY KEY (`col1`, `col2`, `col3`) )
ENGINE = InnoDB;

CREATE  TABLE IF NOT EXISTS `test_fk`.`tbl1` (
  `tbl1_id` VARCHAR(12) NOT NULL ,
  `col1` VARCHAR(2) NULL ,
  `col2` VARCHAR(2) NULL ,
  `col3` VARCHAR(2) NULL ,
  PRIMARY KEY (`tbl1_id`) ,
  INDEX `fk_test` (`col1` ASC, `col2` ASC, `col3` ASC) ,
  CONSTRAINT `fk_test`
FOREIGN KEY (`col1` , `col2` , `col3` )
REFERENCES `test_fk`.`tbl2` (`col1` , `col3` , `col2` )
ON DELETE NO ACTION
ON UPDATE NO ACTION)
ENGINE = InnoDB;



-- -----------------------------------------------------
-- mysql Output
mysql> SELECT VERSION(),NOW() FROM DUAL\G
*** 1. row ***
VERSION(): 5.1.31sp1-enterprise-gpl-advanced
NOW(): 2009-08-14 18:04:00
1 row in set (0.00 sec)

mysql> DROP DATABASE `test_fk` ;
ERROR 1008 (HY000): Can't drop database 'test_fk'; database doesn't exist
mysql> CREATE DATABASE IF NOT EXISTS `test_fk` ;
Query OK, 1 row affected (0.00 sec)

mysql> SHOW WARNINGS;
+-------+------+-------------------------------------------------------+
| Level | Code | Message                                               |
+-------+------+-------------------------------------------------------+
| Error | 1008 | Can't drop database 'test_fk'; database doesn't exist |
+-------+------+-------------------------------------------------------+
1 row in set (0.00 sec)

mysql> USE `test_fk`;
Database changed
mysql>
mysql> DROP TABLE IF EXISTS `test_fk`.`tbl2` ;
Query OK, 0 rows affected, 1 warning (0.00 sec)

mysql> SHOW WARNINGS;
+-------+------+----------------------+
| Level | Code | Message              |
+-------+------+----------------------+
| Note  | 1051 | Unknown table 'tbl2' |
+-------+------+----------------------+
1 row in set (0.00 sec)

mysql> CREATE  TABLE IF NOT EXISTS `test_fk`.`tbl2` (
->   `col1` VARCHAR(2) NOT NULL ,
->   `col2` VARCHAR(2) NOT NULL ,
->   `col3` VARCHAR(2) NOT NULL ,
->   PRIMARY KEY (`col1`, `col2`, `col3`) )
-> ENGINE = InnoDB;
Query OK, 0 rows affected (0.00 sec)

mysql> SHOW WARNINGS;
Empty set (0.00 sec)

mysql>
mysql> DROP TABLE IF EXISTS `test_fk`.`tbl1` ;
Query OK, 0 rows affected, 1 warning (0.00 sec)

mysql> SHOW WARNINGS;
+-------+------+----------------------+
| Level | Code | Message              |
+-------+------+----------------------+
| Note  | 1051 | Unknown table 'tbl1' |
+-------+------+----------------------+
1 row in set (0.00 sec)

mysql> CREATE  TABLE IF NOT EXISTS `test_fk`.`tbl1` (
->   `tbl1_id` VARCHAR(12) NOT NULL ,
->   `col1` VARCHAR(2) NULL ,
->   `col2` VARCHAR(2) NULL ,
->   `col3` VARCHAR(2) NULL ,
->   PRIMARY KEY (`tbl1_id`) ,
->   INDEX `fk_test` (`col1` ASC, `col2` ASC, `col3` ASC) ,
->   CONSTRAINT `fk_test`
-> FOREIGN KEY (`col1` , `col2` , `col3` )
-> REFERENCES `test_fk`.`tbl2` (`col1` , `col3` , `col2` )
-> ON DELETE NO ACTION
-> ON UPDATE NO ACTION)
-> ENGINE = InnoDB;
ERROR 1005 (HY000): Can't create table 'test_fk.tbl1' (errno: 150)
mysql> SHOW WARNINGS;
+-------+------+------------------------------------------------+
| Level | Code | Message                                        |
+-------+------+------------------------------------------------+
| Error | 1005 | Can't create table 'test_fk.tbl1' (errno: 150) |
+-------+------+------------------------------------------------+
1 row in set (0.00 sec)

mysql> Bye
# perror 150
MySQL error code 150: Foreign key constraint is incorrectly formed
#




RE: Picking Collation Confusion

2009-08-14 Thread Gavin Towey
Hi Matt,

You need to worry about consistent collations if you want consistent behavior 
for sorting and comparing fields.  That sounds pretty important to me.

Note that latin1 can hold accented characters as well.

Regards,
Gavin Towey

-Original Message-
From: Matt Neimeyer [mailto:m...@neimeyer.org]
Sent: Friday, August 14, 2009 9:52 AM
To: mysql@lists.mysql.com
Subject: Picking Collation Confusion

First off... I've read chapter 9.1.3 on character sets and collations
and I'm still confused... :) (that could just be because today is
Friday)

Our application is installed at several different sites some running
on Mac OS, some Windows and a few Linux which I suspect is what led to
this situation.

To deploy our app we basically do the following...

1. "create and test"
2. (on test server) mysqldump > export.sql
3. (on deployment server) mysql < export.sql

Now I need to move a set of changes from the test server to the
deployment server and I'm using mysqldiff to find the differences.

I've noticed that 90% of the changes are simply to align the collation
of fields and default collations for tables. Usually it's bouncing
between utf8_general_ci and latin1_swedish_ci.

99.99% of the records in our various customers databases will be
"normal" U.S. names and addresses but I know of a few customers that
target their base fairly narrowly and might POTENTIALLY need to enter
"foreign" names with accents and the like.

Ultimately what it comes down to is... how worried should I be about
making collations "universal" across at least a given customers
instances of the application? (If not all copies of the application
for all customers)

I already have a routine that I call "normalize database" that makes
sure default indexes are applied, etc... so it would be "easy" to add
to that routine to check for and "correct" collations but then do I
need to worry about existing data?

Thanks for the advice!

Matt




RE: Query Question

2009-08-18 Thread Gavin Towey
To further emphasize this point:  A table has no order by itself, and you 
should make no assumptions about the order of rows you will get back in a 
select statement, unless you use an ORDER BY clause.
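
For the chunked-scan question that prompted this (the table name is a
placeholder; uid/appid are the primary-key columns from the original post, and
the keyset variant is an assumption, not something proposed in the thread):

```sql
-- Deterministic paging: always order by the full primary key
SELECT uid, appid FROM big_table
ORDER BY uid, appid
LIMIT 100000 OFFSET 0;

-- Keyset variant: resume after the last row seen, so rows inserted
-- meanwhile cannot shift the offsets between passes
SELECT uid, appid FROM big_table
WHERE (uid, appid) > (@last_uid, @last_appid)
ORDER BY uid, appid
LIMIT 100000;
```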

Regards,
Gavin Towey

-Original Message-
From: walterh...@gmail.com [mailto:walterh...@gmail.com] On Behalf Of Walter 
Heck - OlinData.com
Sent: Tuesday, August 18, 2009 9:51 AM
To: b...@arbucklellc.com
Cc: mysql@lists.mysql.com
Subject: Re: Query Question

Bill,

if you use an order by clause in your query, the limit will pick the first
100K rows in that order. That way you can ensure that all rows will be
processed in (wait for it...) order :)

Cheers,

Walter

On Tue, Aug 18, 2009 at 18:44, Bill Arbuckle  wrote:

> I am in need of some help for the following:
>
>
>
> Say I have a table with 1M rows.  Users are being added constantly (not
> deleted) during the queries that I am about to explain.  The pk is uid and
> appid.  I need to run queries in increments of 100K rows until reaching the
> end without duplicating rows in the queries.  I am using a select statement
> with a limit of row_index and row_count.  This start row is where my
> question arises.
>
>
>
> If I make a query with limit 0,10 then 2 minutes later 10,10
> then 2 minutes later 30,10 and so on.  My question is: are new rows
> added to the end of the table, or will they randomly appear in my queries?
> If they are added to the end of the table, that is fine because I will pick
> them up in my final pass.
>
>
>
> I hope this is clear enough.  If not, let me know and I will provide more
> information.  Thanks!
>
>


--
Walter Heck, Engineer @ Open Query (http://openquery.com)
Affordable Training and ProActive Support for MySQL & related technologies

Follow our blog at http://openquery.com/blog/
OurDelta: free enhanced builds for MySQL @ http://ourdelta.org



RE: text records and cross referencing

2009-08-19 Thread Gavin Towey
Hi Pol,

MySQL supports FULLTEXT indexes and natural-language searches, including
Boolean conditions.  This may help you; however, you will have to adjust the
default behavior of the index by changing server settings.  By default there
is a minimum word length, which you will have to adjust, and a list of stopwords
(words to ignore).  Both of these will prevent you from indexing each and every
word, and even then, words that appear in most or all records are ignored
(I don't know if you can change that behavior).  The alternative is using
unindexed lookups, which would be very slow for any non-trivial application.
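A minimal sketch of such a setup (table name and word length are illustrative; changing ft_min_word_len requires a server restart and an index rebuild):

```sql
-- my.cnf: ft_min_word_len = 3   (restart, then REPAIR TABLE passages QUICK)
CREATE TABLE passages (
  id   INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  body TEXT,
  FULLTEXT KEY ft_body (body)
) ENGINE=MyISAM;   -- FULLTEXT requires MyISAM in MySQL 5.0

-- Natural-language search:
SELECT id FROM passages WHERE MATCH(body) AGAINST('vocabulary');

-- Boolean mode: must contain one word, must not contain another:
SELECT id FROM passages
WHERE MATCH(body) AGAINST('+espressione -niente' IN BOOLEAN MODE);
```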

See:
http://dev.mysql.com/doc/refman/5.0/en/tutorial.html
http://dev.mysql.com/doc/refman/5.0/en/fulltext-search.html

The highlighting and switching behavior is up to your application.  MySQL just
stores data; it's up to you to write programs that manipulate it.

In all honesty, if this really is a "personal" application, you may be better 
off using another, simpler method.

Regards,
Gavin Towey

-Original Message-
From: news [mailto:n...@ger.gmane.org] On Behalf Of Pol
Sent: Wednesday, August 19, 2009 2:38 AM
To: mysql@lists.mysql.com
Subject: text records and cross referencing

Hi

i am very new to mysql. I am playing around with it to test it as a
personal vocabulary and language expression manager.
Main field would be a passage from a text work, where each word is
searchable.
Any experience and reports about such use?

A further questions is about the possibility to make cross references, that
is
1) highlighting all records that have been connected (by the user) to the
displayed record
2) switching from a record to another one, within the same database.

Thank you

---Pol





RE: Scaling Mysql

2009-08-21 Thread Gavin Towey
The RENAME statement is atomic, and you can specify multiple tables to rename at
once.

Instead of two statements, do this:
rename table send_sms to send_sms_full, send_sms_empty to
send_sms;

There will be no "gap" in-between.

-Original Message-
From: Jerry Schwartz [mailto:jschwa...@the-infoshop.com]
Sent: Friday, August 21, 2009 8:32 AM
To: 'mos'; 'MySQL'
Subject: RE: Scaling Mysql

>
>Krishna,
> Rather than copying rows from one table to another, and deleting the
>previous rows, why not just do:
>
>1) create table send_sms_empty like send_sms;
>
>2) rename table send_sms to send_sms_full;rename send_sms_empty to
send_sms;
>
>3) insert into alt_send_sms select * from send_sms_full; drop table
>send_sms_full;
>
>because step #2 is two sql statements, they will get executed together and
>will take just 1 or 2 ms and now you have an empty table that continues to
>get filled. This eliminates the insert delete table locking. Plus you
>always start with an empty optimized table.
>
>Step #3 uses a drop table which is much faster than trying to delete the
rows.
>
[JS] You'd have to make sure that the application, which is after all
pounding the database pretty hard, doesn't gag. As fast as that operation
might be, the application is likely to collide with it.

You cannot rename a locked table, so I'm not sure how you could do what you
are suggesting in an ACID way. You'd need some kind of semaphore somewhere.

Regards,

Jerry Schwartz
The Infoshop by Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341

www.the-infoshop.com





>Mike
>
>








RE: Scaling Mysql

2009-08-24 Thread Gavin Towey
Have you looked at MySQL Cluster?  It was created specifically for telco needs.


-Original Message-
From: Krishna Chandra Prajapati [mailto:prajapat...@gmail.com]
Sent: Friday, August 21, 2009 9:23 PM
To: wha...@bfs.de
Cc: MySQL
Subject: Re: Scaling Mysql

Hi wharms,

You are right. It's some kind of queue mechanism. Right now I am working in a
telco company (we send SMS).

Users will be inserting records into send_sms at 30,000 msg/min.  Those
records will then be updated, moved to alt_send_sms, and deleted from send_sms.
After that, 30,000 msg/min will be taken out of alt_send_sms for processing and
sending to clients.  All of the above tasks happen concurrently.  We will be
dealing with millions of records per hour.

On Fri, Aug 21, 2009 at 6:11 PM, walter harms  wrote:

>
>
> Krishna Chandra Prajapati schrieb:
> > Hi list,
> >
> > I have two tables, send_sms and alt_send_sms. Users are inserting records
> > into send_sms @ 500/sec, i.e. 30,000/min. After applying some updates to
> > send_sms, data are transferred to alt_send_sms and deleted from send_sms.
> > The same thing is happening with the alt_send_sms table.
> >
> > Is it possible to insert 1000 records/sec into the send_sms table and take
> > them out at the rate of 1000 records/sec from alt_send_sms?
> >
> > Which engine is better for the above scenario?
> >
>
> Hi Krishna,
> I see you are using some kind of queue mechanism, but
> to get a useful answer you need to be more specific:
> e.g. what are your safety requirements?

After moving the data to next stage Data is deleted from current table.

> Tables in RAM are very fast.
> e.g. do you need foreign keys?

No

>
>
> When will data be copied (send->alt) ?

30,000 records/min

> after 1 day ? 1 hour ?
> how long to you need to store data at alt ?

Min 1 minute (we need to process immediately and send to the users as SMS)

>
> how often is the access ?

per/sec

>
>
> If speed is a concern do you need a database at all ? (KISS)
>
> where does the current system spend its time ? and why ?
>
> You see, your request is far from simple and demands detailed knowledge about
> your requirements, going beyond what can be done in such a ML
> (and this is only software; there is also hardware and economics).
> Here you can ask "how can I improve SQL statement XX?"
>
> re,
>  wh
>
Thanks
krishna




RE: Natural Join Issue: column names are equal but doesn't work anyways

2009-08-25 Thread Gavin Towey
Hi Deviad,

NATURAL JOIN uses all column names that are the same between both tables as 
conditions.

select * from rappresentanti NATURAL JOIN clienti;

is the same as:

select * from rappresentanti r JOIN clienti c ON r.cognome=c.cognome AND
r.nome=c.nome AND r.via=c.via AND r.citta=c.citta AND r.prov=c.prov AND
r.cap=c.cap AND r.CodRappr=c.CodRappr;
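Because NATURAL JOIN silently picks up every shared column name, it is usually safer to name the intended join column explicitly. A sketch using the tables from this thread:

```sql
-- Join only on the foreign key you actually mean:
SELECT *
FROM rappresentanti r
JOIN clienti c ON c.CodRappr = r.CodRappr;

-- Equivalently, restrict the shared column by hand:
SELECT * FROM rappresentanti JOIN clienti USING (CodRappr);
```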


Regards,
Gavin Towey

-Original Message-
From: Deviad [mailto:dev...@msn.com]
Sent: Monday, August 24, 2009 6:27 PM
To: mysql@lists.mysql.com
Subject: Re: Natural Join Issue: column names are equal but doesn't work anyways

Hi again,
since I'm not someone who gives up easily, I have restyled that old code
(it's actually from an example my teacher wrote back in 2002; I saw that
she has since changed the way she codes this stuff a bit). I restyled the code
just to be sure it isn't some parsing issue or whatever.
http://pastebin.com/f50d77dcf

On that database, this query works:
select CodCliente, Cognome, Nome from Ordini NATURAL JOIN Clienti where
Data='2002-09-05';

whereas this one does not:

select *
from rappresentanti NATURAL JOIN clienti;



I pasted the database in there.

Deviad ha scritto:
> Hello,
> I have been training for a test on Database Design and MySQL.
> The following is inside a file we have to run before starting to code
> what the excercises require us to.
> Call the file as you wish and run it if it helps to understand the
> reason behind my problem.
> ---
> DROP DATABASE IF EXISTS premiere;
>
> CREATE DATABASE premiere;
>
> USE premiere;
>
> create table if not exists Articoli(
>   NroArt char(4) primary key,
>   descrizione char(20),
>   giacenza int,
>   categoria char (2),
>PrezzoUnitario decimal(8,2)
> ) TYPE=INNODB;
>
> create table if not exists Rappresentanti(
>  CodRappr char(2) primary key,
>cognome char(10),
>nome char(8),
>via char (15),
>citta char(15),
>prov char(2),
>cap char (5),
>  TotProvv decimal(8,2),
>PerProvv decimal(8,2)
> ) TYPE=INNODB;
>
>
> create table if not exists clienti(
> CodCliente char(3) primary key,
> cognome char(10),
> nome char(8),
> via char (15),
> citta char(15),
> prov char(2),
> cap char (5),
>   saldo decimal(8,2),
> fido decimal(8,2),
> CodRappr char(2) not null references
> Rappresentanti(CodRappr)
> ) TYPE=INNODB;
>
>
> create table if not exists Ordini(NroOrdine char(6) primary key,
> data date,
>   CodCliente char(3) not null
> references Clienti(CodClienti)
> ) TYPE=INNODB;
>
>
> insert into articoli
>   values ('AX12','ferro da stiro',104,'cs',24.95);
> insert into articoli
>   values ('AZ52','freccette',20,'sp',12.95);
> insert into articoli
> values ('BA74','pallone',40,'sp',29.95);
> insert into articoli
>   values ('BH22','tritatutto',05,'cs',24.95);
> insert into articoli
>   values ('BT04','forno',11,'el',149.49);
> insert into articoli
>   values ('BZ66','lavatrice',52,'el',399.99);
> insert into articoli
>   values ('CA14','setaccio',78,'cs',39.99);
> insert into articoli
>   values ('CB03','bicicletta',44,'sp',299.99);
> insert into articoli
>   values ('CX11','frullino',142,'cs',22.95);
> insert into articoli
>   values ('CZ81','tavola pesi',68,'sp',349.95);
>
>
> insert into Rappresentanti
> values('03','Jones','Mary','123 Main','Grant','MI','49219',215,5);
> insert into Rappresentanti
> values('06','Smith','William','102
> Raymond','Ada','MI','49441',49412.5,7);
> insert into Rappresentanti
> values('12','Diaz','Miguel','419 Harper','Lansing','MI','49224',2150,5);
>
>
> insert into clienti
>   values
> ('124','Adams','Sally','481Oak','Lansing','MI','492

RE: Got error 124 from storage engine

2009-08-25 Thread Gavin Towey
Which version of MySQL are you using?  In MySQL 4, you could get away with some
differences between the definition of the merge table and the underlying tables.

As you've discovered, the structure and index definitions must now be exactly 
the same, otherwise you will get errors.
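A sketch of that constraint (table and column names are illustrative):

```sql
CREATE TABLE a1 (id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT, KEY (id))
  ENGINE=MyISAM;
CREATE TABLE a2 (id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT, KEY (id))
  ENGINE=MyISAM;

-- The MERGE definition must repeat the columns and keys exactly; any
-- mismatch with the underlying tables can surface as errors such as
-- "Got error 124 from storage engine" at query time.
CREATE TABLE a (id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT, KEY (id))
  ENGINE=MERGE UNION=(a1, a2) INSERT_METHOD=LAST;
```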

Regards,
Gavin Towey

-Original Message-
From: stutiredboy [mailto:stutired...@gmail.com]
Sent: Tuesday, August 25, 2009 12:23 AM
To: mysql@lists.mysql.com
Subject: Got error 124 from storage engine

hi, all:

i have met an question as below:

table A1,A2

A1 has been *packed by myisampack, and its index rebuilt by myisamchk*.

A2 is a normal table, and the structure of A1 and A2 is exactly the same.

Table A is the merge table of A1 and A2.

While I use:

mysql> select max(id) from A;
ERROR 1030 (HY000): Got error 124 from storage engine

+-------+---------------------+------+-----+---------+----------------+
| Field | Type                | Null | Key | Default | Extra          |
+-------+---------------------+------+-----+---------+----------------+
| id    | bigint(20) unsigned | NO   | MUL | NULL    | auto_increment |
+-------+---------------------+------+-----+---------+----------------+


But when I try another set of merged tables (B1, B2, B), it works fine:

mysql> select max(id) from loot;
+---------+
| max(id) |
+---------+
|  110415 |
+---------+
1 row in set (0.00 sec)

The only difference is that in table A the id field is auto_increment, and in
table B it is not:

+-------+---------------------+------+-----+---------+-------+
| Field | Type                | Null | Key | Default | Extra |
+-------+---------------------+------+-----+---------+-------+
| id    | bigint(20) unsigned | NO   | MUL | NULL    |       |
+-------+---------------------+------+-----+---------+-------+


And if I do not use myisampack/myisamchk, everything works fine.
Our system is FreeBSD 7.2; the MySQL version is 5.0.84
(Server version: 5.0.84 Source distribution).

thanks for your reply

tiredboy







RE: Viable alternatives to SQL?

2009-08-27 Thread Gavin Towey
Isn't that basically the same as pointing phpMyAdmin at a read-only copy of the
data?

There's a reason most searches are limited: when you give most people too many
options, it's confusing.  Good interfaces hide complexity from the end user.


-Original Message-
From: Jerry Schwartz [mailto:jschwa...@the-infoshop.com]
Sent: Thursday, August 27, 2009 8:08 AM
To: 'Kelly Jones'; mysql@lists.mysql.com
Subject: RE: Viable alternatives to SQL?

>-Original Message-
>From: Kelly Jones [mailto:kelly.terry.jo...@gmail.com]
>Sent: Thursday, August 27, 2009 9:44 AM
>To: mysql@lists.mysql.com
>Subject: Viable alternatives to SQL?
>
>Many sites let you search databases of information, but the search
>queries are very limited.
>
>I'm creating a site that'll allow arbitrary SQL queries to my data (I
>realize I'll need to handle injection attacks).
>
[JS] Taking you literally, wouldn't that presume that your users have
knowledge of the underlying database structure?

If not, then I think you'll need to go with "fill in the blanks" type of
query form.

Regards,

Jerry Schwartz
The Infoshop by Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341

www.the-infoshop.com




>Are there other viable ways to query data? I read a little on
>"Business System 12" (BS12), Tutorial D, and even something called
>T-SQL (I think), but they all seem theoretical and not fully
>implemented.
>
>I want a query language that non-techies can use easily, but also
>supports arbitrarily complex queries. Does such a language exist?
>
>--
>We're just a Bunch Of Regular Guys, a collective group that's trying
>to understand and assimilate technology. We feel that resistance to
>new ideas and technology is unwise and ultimately futile.
>








RE: Speeding up a pretty simple correlated update query

2009-09-02 Thread Gavin Towey
Do you know that if you create the seq column on the original table as an
auto_increment primary key, it will fill in the numbers automatically?  There's
no need to create the values in another table and update with a join.
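A sketch of that idea against the item_trans table described below (an assumption: this numbers each row independently, so it only applies if every row may get its own seq rather than one value shared per (itemid, category) pair):

```sql
-- Rebuilds the table once and assigns sequential values to every row;
-- no second table or correlated update needed.
ALTER TABLE item_trans
  MODIFY seq INT UNSIGNED NOT NULL AUTO_INCREMENT,
  ADD UNIQUE KEY (seq);
```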

Regards,
Gavin Towey

-Original Message-
From: Hank [mailto:hes...@gmail.com]
Sent: Wednesday, September 02, 2009 4:35 PM
To: mysql@lists.mysql.com
Subject: Speeding up a pretty simple correlated update query

Hello All,
  I have a legacy application which was written using a compound primary key
of an item number (non unique) along with a category ID. The combination of
the item number and category ID make the records unique.

  I am in the process of replacing the compound (VARCHAR) keys with a
unique integer key in these tables.

So I have created an item_seq table and assigned a unique sequence number to
each compound key -- it looks like this (all tables are myisam tables, and
mysql version 5.0)

desc item_seq;
+---+--+--+-+-++
| Field | Type | Null | Key | Default | Extra  |
+---+--+--+-+-++
| seq   | int(10) unsigned | NO   | PRI | NULL| auto_increment |
| itemid| char(11) | NO   | MUL | ||
| category  | char(4)  | NO   | | ||
+---+--+--+-+-++

I also have my main transactional table with about 180,000,000 rows -- it
looks like this:

desc item_trans;
+-+---+--+-+-+---+
| Field   | Type  | Null | Key | Default |
Extra |
+-+---+--+-+-+---+
| seq | int(10) unsigned  | NO   | MUL | |
|
| itemid  | char(11)  | NO   | PRI | |
|
| category| char(4)   | NO   | PRI | |
|
| transid | int(10)   | NO   | PRI | |
|

Currently the "seq" field is null for the entire table.  So of course, I
want to update the main transaction table with the new sequence number.

So I've disabled all the keys on the "item_trans" table -- since I am
updating every row, it wouldn't (shouldn't) be using the index anyway.

Here is my correlated update query:

 update item_trans i, item_seq s
 set i.seq=s.seq
 where s.itemid=i.itemid and s.category=i.category;

  If I run an explain on the select version of the update, this is what I
get:

+----+-------------+------------+--------+---------------+--------+---------+-------------------------------------------+-----------+-------+
| id | select_type | table      | type   | possible_keys | key    | key_len | ref                                       | rows      | Extra |
+----+-------------+------------+--------+---------------+--------+---------+-------------------------------------------+-----------+-------+
|  1 | SIMPLE      | item_trans | ALL    | PRIMARY       | NULL   | NULL    | NULL                                      | 178948797 |       |
|  1 | SIMPLE      | item_seq   | eq_ref | itemid        | itemid | 20      | g.item_trans.itemid,g.item_trans.category |         1 |       |
+----+-------------+------------+--------+---------------+--------+---------+-------------------------------------------+-----------+-------+

... which is exactly what I would expect it to do.  Update every record of
the item_trans table, and do a full index lookup on the items_seq table.

SO... I've been running this query to update item_trans, and it's been
running for 5 days now.

I've also tried running this with the primary key index on the item_trans
table (but not the seq index), and that ran slower in my initial tests.

Are there any faster ways to update 180 million records with a correlated
update query?  And I'm fairly certain that trying to do this in PHP
one-record at a time would take much longer than a SQL solution.

Thanks,

-Hank




RE: a better way, code technique?

2009-09-04 Thread Gavin Towey
You could mash it into two lines.  Though I think the verbose syntax is more 
readable.

$uid = mysql_fetch_array( mysql_query("SELECT uid FROM users WHERE
 users.username='".mysql_real_escape_string($username)."'") );
$u = $uid['uid'];

However, do you really think that four lines is too much to make a (possible)
network call to an external resource, ask it to parse a statement and retrieve
a specific piece of data, then return it to you and assign it to a variable?

For one, that's what functions are for: write it once, then call your function!

Two, you should look at how much code is already hidden from you in those few
functions! =P

Regards,
Gavin Towey



-Original Message-
From: Brent Baisley [mailto:brentt...@gmail.com]
Sent: Friday, September 04, 2009 6:21 AM
To: AndrewJames
Cc: mysql@lists.mysql.com
Subject: Re: a better way, code technique?

You should store the current user id in a session variable. Then you
don't have to hit the database at all.

There really is no short way of doing it. Normally you would create a
function (i.e. runQuery) that you pass the query too. Then it handles
running the query, fetching the data, error checking, etc. That way
you don't have to rewrite the same lines every time you want to run a
query.

Brent Baisley

On Fri, Sep 4, 2009 at 6:51 AM, AndrewJames wrote:
> is there a better way (hopefully simpler) to code this?
>
> i want to get the user id of the logged in user to use in my next statement.
>
> $q1 = sprintf("SELECT uid FROM users WHERE users.username='$username'");
> $result1 = mysql_query($q1);
> $uid = mysql_fetch_array($result1);
> $u = $uid['uid'];
>
> it seems like a long way around to get 1 bit of data??
>




RE: Fwd: Help with Timestamp invalid value error

2009-09-04 Thread Gavin Towey
MySQL doesn't store sub-second values.

try
2008-03-09 02:56:34
Instead of
2008-03-09 02:56:34.737

Regards,
Gavin Towey

-Original Message-
From: Proemial [mailto:proem...@gmail.com]
Sent: Friday, September 04, 2009 8:37 AM
To: John Daisley
Cc: mysql@lists.mysql.com
Subject: Re: Fwd: Help with Timestamp invalid value error

I had already tried that, actually.  Produces the same error.  I
should have mentioned that as well, sorry!

The version is 5.1.34

thanks for the help, btw!
Martin

On Fri, Sep 4, 2009 at 11:14 AM, John
Daisley wrote:
> Is your table innodb? If so i think the 'STRICT_TRANS_TABLES' sql_mode 'may' 
> be causing the problem.
>
> Try inserting the value as
>
>  '2008-03-09 02:56:34.737'
>
> Do you get the same error?
>
> What mysql version is your server?
>
>
>
> Regards
>
> John Daisley
> Mobile +44(0)7812 451238
> Email j...@butterflysystems.co.uk
>
> Certified MySQL 5 Database Administrator (CMDBA)
> Certified MySQL 5 Developer
> Cognos BI Developer
>
> ---
> Sent from HP IPAQ mobile device.
>
>
>
> -Original Message-
> From: Proemial 
> Sent: Friday, September 04, 2009 3:39 PM
> To: mysql@lists.mysql.com
> Subject: Fwd: Help with Timestamp invalid value error
>
> Currently set to:
> NO_AUTO_VALUE_ON_ZERO,STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION
>
> On Fri, Sep 4, 2009 at 10:24 AM, John
> Daisley wrote:
>> What is your sql_mode set to?
>>
>> I tried inserting that value into a timestamp column on our test server and 
>> it works fine.
>>
>> Regards
>>
>> John Daisley
>> Mobile +44(0)7812 451238
>> Email j...@butterflysystems.co.uk
>>
>> Certified MySQL 5 Database Administrator (CMDBA)
>> Certified MySQL 5 Developer
>> Cognos BI Developer
>>
>> ---
>> Sent from HP IPAQ mobile device.
>>
>>
>>
>
>
> [The entire original message is not included]
>



--
---
This is a signature.




RE: Queue / FIFO in MySQL?

2009-09-08 Thread Gavin Towey
You can add a LIMIT n to your update clause.
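A sketch of the claim step using the state/worker columns from the quoted scheme (the table name and the use of CONNECTION_ID() as the worker id are assumptions; note that without an ORDER BY, which rows a LIMITed UPDATE touches is not defined):

```sql
-- Each worker atomically claims at most 100 new records:
UPDATE jobs
SET worker = CONNECTION_ID(), state = 'working'
WHERE state = 'new'
ORDER BY id
LIMIT 100;

-- ...then fetches only the rows it claimed:
SELECT * FROM jobs
WHERE worker = CONNECTION_ID() AND state = 'working';
```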

Regards,
Gavin Towey

-Original Message-
From: Allen Fowler [mailto:allen.fow...@yahoo.com]
Sent: Monday, September 07, 2009 5:18 PM
To: mysql@lists.mysql.com
Subject: Queue / FIFO in MySQL?

Hello,

I need to create a system where records are generated by a "producer" process 
and processed by several "worker" processes.

I was thinking about something like:

Producer:
1) Producer INSERTs new records with "state" = "new" & "worker" = "null"
2) Producer sleeps and loops back to step #1

Worker(s):
1) Worker UPDATEs all records with "worker" = "pid" & "state" = "working" where 
"state" == "new"
2) Worker SELECTs all records where "worker" = "pid" & "state" = "working"
3) For each record that is done, worker updates record with "state" = "done"
4) Worker loops back to step #1

Note: In this scheme the worker winds up with all "new" records generated since 
the last worker claimed any. Not sure how else to guarantee atomicity. I would 
prefer "only n records per request". Ideas?

I am sure something like this must have been done before.  Can anyone point me
to example code, libraries, and/or refinements on the scheme?  (Preferably
using Python...)

Thank you,
:)







RE: DB/table problem

2009-09-10 Thread Gavin Towey
When using innodb, your data is stored in the data/ibdata* files.  Copying the 
database directory is not a valid backup.

In the future, I would suggest using mysqldump.  However, if you want to make a
file-system copy backup with InnoDB, you need to:

1. Shut down the server.
2. Copy *everything* in the data directory.
3. Store a copy of your my.ini with the backup as well.
4. Restart mysql.

Regards,
Gavin Towey

-Original Message-
From: Néstor [mailto:rot...@gmail.com]
Sent: Thursday, September 10, 2009 11:28 AM
To: mysql@lists.mysql.com
Subject: DB/table problem

I am running mysql 5.0 on a windows 2003 host with php and apache
I am trying to restore a database from 08/28/2009.  The backup is done by
windows and all I do is copy the  "data/scroller612" folder to the DATA
folder in the MYSQL directory.

when I restore/copy it I get the following errors my .err file found on the
data directory:
==
090910 11:04:03 [ERROR] Cannot find table scroller612/cache
from the internal data dictionary
of InnoDB though the .frm file for the table exists. Maybe you
have deleted and recreated InnoDB data files but have forgotten
to delete the corresponding .frm files of InnoDB tables, or you
have moved .frm files to another database?
See http://dev.mysql.com/doc/refman/5.0/en/innodb-troubleshooting.html
how you can resolve the problem.
=

When I go to the above link, it says to run the "check table" statement,
but when I do this, I get the response below:
===
mysql> check table scroller612.access;
+--------------------+-------+----------+-------------------------------------------+
| Table              | Op    | Msg_type | Msg_text                                  |
+--------------------+-------+----------+-------------------------------------------+
| scroller612.access | check | Error    | Table 'scroller612.access' doesn't exist  |
| scroller612.access | check | error    | Corrupt                                   |
+--------------------+-------+----------+-------------------------------------------+
2 rows in set (0.00 sec)


It says that the table is corrupt.  I can actually log into mysql and see the
list of tables, but I can NOT desc or select from them.

How do I restore the backup?
Do I need to remove the .frm files?  But that would kill my data.

Thanks,

Nestor :-)




RE: DB/table problem

2009-09-10 Thread Gavin Towey
No, the .frm files don't contain data; they contain only the table structure.  You
should always test your backup and restore procedure before you need to use it.

Regards,
Gavin Towey

From: Néstor [mailto:rot...@gmail.com]
Sent: Thursday, September 10, 2009 4:35 PM
To: Gavin Towey
Cc: mysql@lists.mysql.com
Subject: Re: DB/table problem

OK, All I have then is a bunch of .frm files.

Can I recover the data from the FRM files if I install mysql 5.0  without 
INNODB?

Thanks,

On Thu, Sep 10, 2009 at 2:49 PM, Gavin Towey <gto...@ffn.com> wrote:
When using innodb, your data is stored in the data/ibdata* files.  Copying the 
database directory is not a valid backup.

In the future, I would suggest using mysqldump.  However if you want to make a 
file-system copy backup with innodb you need to 1. Shutdown the server 2. Copy 
*everything* in the data directory 3. Store a copy of your my.ini with the 
backup as well. 4. Restart mysql
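Steps 2 and 3 of that procedure can be sketched in a few lines. This is only an illustration; the function name and paths are made up, and stopping/restarting mysqld (steps 1 and 4) is left to the caller:

```python
import shutil
from pathlib import Path

def cold_backup(datadir, my_ini, backup_dir):
    """Steps 2-3 of a cold backup: copy *everything* in the data
    directory (ibdata*, ib_logfile*, .frm files, ...) plus the config
    file. Assumes mysqld has already been shut down (step 1);
    restarting it afterward (step 4) is up to the caller."""
    backup = Path(backup_dir)
    shutil.copytree(datadir, backup / "data")  # full data directory copy
    shutil.copy2(my_ini, backup / "my.ini")    # keep the config with the backup
    return backup
```

Restoring is the reverse: stop the server, put the whole data directory back, and restart.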

Regards,
Gavin Towey

-Original Message-
From: Néstor [mailto:rot...@gmail.com<mailto:rot...@gmail.com>]
Sent: Thursday, September 10, 2009 11:28 AM
To: mysql@lists.mysql.com<mailto:mysql@lists.mysql.com>
Subject: DB/table problem

I am running mysql 5.0 on a windows 2003 host with php and apache
I am trying to restore a database from 08/28/2009.  The backup is done by
windows and all I do is copy the  "data/scroller612" folder to the DATA
folder in the MYSQL directory.

when I restore/copy it I get the following errors my .err file found on the
data directory:
==
090910 11:04:03 [ERROR] Cannot find table scroller612/cache
from the internal data dictionary
of InnoDB though the .frm file for the table exists. Maybe you
have deleted and recreated InnoDB data files but have forgotten
to delete the corresponding .frm files of InnoDB tables, or you
have moved .frm files to another database?
See http://dev.mysql.com/doc/refman/5.0/en/innodb-troubleshooting.html
how you can resolve the problem.
=

When I go to the above link it says to run the "check table" statement,
but when I do this, I get the response below:
===
mysql> check table scroller612.access;
+--------------------+-------+----------+------------------------------------------+
| Table              | Op    | Msg_type | Msg_text                                 |
+--------------------+-------+----------+------------------------------------------+
| scroller612.access | check | Error    | Table 'scroller612.access' doesn't exist |
| scroller612.access | check | error    | Corrupt                                  |
+--------------------+-------+----------+------------------------------------------+
2 rows in set (0.00 sec)


It says that the table is corrupt. I can actually log into mysql and see the
list
of tables, but I can NOT desc or select the tables.

How do I restore the backup?
Do I need to remove the .frm files? But that would kill my data.

Thanks,

Nestor :-)





RE: DB/table problem

2009-09-10 Thread Gavin Towey
Hi Todd,

I didn't mention xtrabackup because I have no experience with it =)  I know the 
other methods work, and honestly, for most mysql users just starting out, it's 
easier to use mysqldump.  It's simple, and it's easy to get help with, and it 
works across storage engines.

Regards,
Gavin Towey

-Original Message-
From: tly...@sitehelp.org [mailto:tly...@sitehelp.org] On Behalf Of Todd Lyons
Sent: Thursday, September 10, 2009 3:44 PM
To: Gavin Towey
Cc: mysql@lists.mysql.com
Subject: Re: DB/table problem

On Thu, Sep 10, 2009 at 2:49 PM, Gavin Towey  wrote:
> When using innodb, your data is stored in the data/ibdata* files.  Copying 
> the database directory is not a valid backup.
>
> In the future, I would suggest using mysqldump.  However if you want to make 
> a file-system copy backup with innodb you need to 1. Shutdown the server 2. 
> Copy *everything* in the data directory 3. Store a copy of your my.ini with 
> the backup as well. 4. Restart mysql

Do you exclude xtrabackup as a potential solution because he's using
Windows (it doesn't build on Windows) or because you have some issue
or problem with xtrabackup?  I'm not aware of any show-stopper bugs.
I'm just curious, because it works well for us.  The backup is online
with the only lock held occurring while it's copying over the *.frm and
*.MY* files, which is not very much for us.
--
Regards...  Todd





RE: Importing CSV into MySQL

2009-09-16 Thread Gavin Towey
Hi Tim,

Try using LOAD DATA INFILE from the mysql CLI.  PMA can often introduce 
unexpected behavior for export/import.
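For what it's worth, \xE8 is the latin-1 byte for è, which suggests the file is latin-1 encoded while the table expects utf-8; LOAD DATA INFILE accepts a CHARACTER SET clause to tell the server the file's real encoding. A small Python sketch of the mismatch itself (only the byte values come from the error message; the rest is illustrative):

```python
# The failing byte sequence from the error: latin-1 bytes for "ès"
raw = b"Rad\xe8s"

# Decoded as latin-1 (the file's likely real encoding) it is valid text...
assert raw.decode("latin-1") == "Radès"

# ...but as utf-8 it is malformed, which is exactly MySQL's complaint:
# 0xE8 opens a multi-byte utf-8 sequence that 's' cannot continue
try:
    raw.decode("utf-8")
except UnicodeDecodeError as e:
    print("incorrect string value:", e.object[e.start:])  # b'\xe8s'
```

So the fix is to declare the file's encoding at import time rather than re-encode hundreds of MB of csv in PHP first.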

Regards,
Gavin Towey

-Original Message-
From: Tim Thorburn [mailto:webmas...@athydro.com]
Sent: Wednesday, September 16, 2009 7:14 AM
To: mysql@lists.mysql.com
Subject: Importing CSV into MySQL

Hi,

I'm sure I'm missing something quite obvious here, but the caffeine
hasn't quite kicked in yet.  As the subject says, I'm importing a csv
file into MySQL 5.1.36 on WinXP using phpMyAdmin 3.3.2 (Apache 2.2.11
and PHP 5.3.0 should it matter).  I've done this many times, however I'm
now getting a strange error regarding a character in my csv file.  I've
tried both MyISAM and InnoDB tables (I don't think that would make any
difference) and switching between utf8_general_ci and utf8_unicode_ci
(thought this might make a difference, but sadly no).

The error message I get is #1366 - Incorrect string value: '\xE8s' when
it attempts to insert the word Radès.  I'm using a simple varchar,
nothing really special there.  After googling for awhile it seems the
most common option is to use PHP to convert the character into UTF8
format before putting it into MySQL - the problem is some of these csv's
are several hundred mb a piece so I'd much rather simply import than go
this route.

Any thoughts?

TIA,
-Tim





RE: Datediff function

2009-09-16 Thread Gavin Towey
Hi John,

You can't use aggregate functions in the WHERE clause, because they aren't 
evaluated until after the WHERE clause is applied.

Wouldn't it be much easier to simply keep a last_tweet_date field updated 
somewhere then simply do
SELECT USER_NAME FROM USERS WHERE last_tweet_date < NOW()-INTERVAL 7 DAY; ?
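For completeness, the original query can also be fixed in place by moving the aggregate condition into HAVING, which is evaluated after grouping. A runnable sketch using SQLite for portability (julianday arithmetic stands in for MySQL's DATEDIFF; the data is invented, table and column names from the thread):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users  (user_id INTEGER PRIMARY KEY, user_name TEXT);
CREATE TABLE tweets (user_id INTEGER, tweet_createdat TEXT);
INSERT INTO users VALUES (1, 'fresh'), (2, 'stale');
INSERT INTO tweets VALUES (1, datetime('now')),
                          (2, datetime('now', '-30 days'));
""")

# Aggregates are computed after WHERE, so the 7-day cutoff belongs
# in HAVING, where MAX(...) is available
rows = conn.execute("""
    SELECT u.user_name
      FROM users u JOIN tweets t USING (user_id)
     GROUP BY u.user_id
    HAVING julianday('now') - julianday(MAX(t.tweet_createdat)) > 7
""").fetchall()
print(rows)  # only the user silent for more than 7 days
```

The last_tweet_date column Gavin suggests avoids the GROUP BY entirely, which matters once the tweets table gets large.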

Regards,
Gavin Towey

-Original Message-
From: John Meyer [mailto:johnme...@pueblocomputing.com]
Sent: Wednesday, September 16, 2009 12:52 PM
To: mysql@lists.mysql.com
Subject: Datediff function

I'm trying to pull up a list of users who haven't tweeted in 7 or more
days, and I'm trying to use this statement:
SELECT USER_NAME, MAX(TWEET_CREATEDAT) FROM USERS NATURAL JOIN TWEETS
WHERE DATEDIFF(NOW(),MAX(TWEET_CREATEDAT)) > 7 GROUP BY USERS.USER_ID

But it says "invalid group function".  How should I reword this query?







RE: Problem with MySQL user

2009-09-16 Thread Gavin Towey
Hi John,

You can turn off name resolution by adding skip-name-resolve to the [mysqld] 
section of your my.cnf file.
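In context, the change is one line in the config (assuming a stock my.cnf/my.ini layout):

```ini
[mysqld]
# Skip reverse-DNS lookups on connecting clients; grants must then
# match by IP address rather than hostname
skip-name-resolve
```

Note that with name resolution off, any existing hostname-based grants stop matching; that is fine here, since the working grant is already defined for 'user'@'172.16.1.2'.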

Regards,
Gavin Towey

-Original Message-
From: John Oliver [mailto:joli...@john-oliver.net]
Sent: Wednesday, September 16, 2009 4:24 PM
To: mysql@lists.mysql.com
Subject: Problem with MySQL user

I'm working with two VMs, one a web server, one a MySQL database server.
In mysql, I added a 'user'@'172.16.1.2' with privileges appropriate for
the web site, and that works. The VMs got shipped off to a hosting
facility. They got the 172.16.1.X network between the two VMs up, but
when they try to "mysql -h 172.16.1.1 -uuser -p" and enter the password,
they get an error denying 'user'@'hostname'

Why is MySQL resolving the IPs? I figured it would be quicker to fix
that than it would be to look up the specific privileges for that user,
add 'user'@'hostname', and then go in and start modifying the config
(it's set in settings.php as well as several places in the httpd.conf)

I do have /etc/hosts entries on both VMs resolving both 172.16.1.1 / 2
to known hostnames.

--
***
* John Oliver http://www.john-oliver.net/ *
* *
***







RE: Datediff function

2009-09-17 Thread Gavin Towey
Hi John,

If judicious transformation of data makes it easier to do the queries you want, 
then you should consider it.  ETL isn't a common acronym in the database world 
just because we like three letters =)  Though it depends on how often you're 
doing this, if it's one-off then it's probably not worth it, though I was 
making the assumption you're probably going to be using that query frequently.

Regards,
Gavin Towey

-Original Message-
From: John Meyer [mailto:johnme...@pueblocomputing.com]
Sent: Wednesday, September 16, 2009 4:51 PM
To: Gavin Towey
Cc: mysql@lists.mysql.com
Subject: Re: Datediff function

Gavin Towey wrote:
> Hi John,
>
> You can't use aggregate function in the WHERE clause, because they aren't 
> evaluated until after the WHERE clause is applied.
>
> Wouldn't it be much easier to simply keep a last_tweet_date field updated 
> somewhere then simply do
> SELECT USER_NAME FROM USERS WHERE last_tweet_date < NOW()-INTERVAL 7 DAY; ?
>
> Regards,
> Gavin Towey
>


I don't know if that would be so simple. I'd have to run programming
logic when I fetch the information off the twitter server. I just hoped
that there was a way to do it through SQL.





RE: [ERROR] /usr/sbin/mysqld: unknown option '--ndbcluster'

2009-09-18 Thread Gavin Towey
Edit your /etc/my.cnf file and remove the option.

Regards,
Gavin Towey

-Original Message-
From: Manoj Burande [mailto:manoj.bura...@artificialmachines.com]
Sent: Friday, September 18, 2009 7:17 AM
To: mysql@lists.mysql.com
Subject: [ERROR] /usr/sbin/mysqld: unknown option '--ndbcluster'

Dear All,

Can anyone brief me about this error. I am unable to start the mysql so
please suggest me work around on how to fix this error.

I am installing "mysql  Ver 14.14 Distrib 5.1.38, for pc-linux-gnu (i686)
using readline 5.1"


[ERROR] /usr/sbin/mysqld: unknown option '--ndbcluster'


--
Manoj M. Burande,
Artificial Machines Pvt Ltd,
System Administrator.









RE: incremental name search?

2009-09-21 Thread Gavin Towey
Last time I did this for a small app, I just selected all the names and 
returned them as a javascript array.  All the auto-complete as you type logic 
was just done in the UI.  If the list is large-ish you could simply have a cron 
process that creates a .js file which the web page would include.
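The cron job that writes that .js file can be very small. Everything in this sketch (table, column, and file names) is hypothetical, with SQLite standing in for MySQL:

```python
import json
import sqlite3

def dump_names_js(conn, out_path):
    """Write all names as a JS array literal that the web page can
    include with a <script src=...> tag; the UI filters it as you type."""
    names = [row[0] for row in
             conn.execute("SELECT name FROM people ORDER BY name")]
    with open(out_path, "w") as f:
        # json.dumps produces a valid JS array literal
        f.write("var NAMES = %s;\n" % json.dumps(names))
    return names

# Demo with an in-memory database and invented rows
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (name TEXT)")
conn.executemany("INSERT INTO people VALUES (?)", [("Ann",), ("Bob",)])
dump_names_js(conn, "names.js")
```

Regenerating the file on a schedule keeps every keystroke off the database entirely, which addresses the load concern raised earlier in the thread.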

Regards,
Gavin Towey

-Original Message-
From: Jerry Schwartz [mailto:jschwa...@the-infoshop.com]
Sent: Monday, September 21, 2009 2:29 PM
To: 'Mike Spreitzer'
Cc: 'Michael Dykman'; mysql@lists.mysql.com
Subject: RE: incremental name search?

Beyond being leery of the whole project, I can't really say how well or poorly
it would work. You are, after all, talking about at least one query per
keystroke. On top of that you're going to be doing some tricky stuff in a PHP
(or some such) back-end, to figure out what query to use - again, one
execution per keystroke.

It doesn't matter if it's only one user, if they type fast enough it will
generate many server hits: several per second, I should think.

I wonder how Google handles their search suggestion gimmick - I never gave it
much thought. That's as close as I can come to a similar application, off the
top of my head.

Regards,

Jerry Schwartz
The Infoshop by Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341

www.the-infoshop.com


>-Original Message-
>From: Mike Spreitzer [mailto:mspre...@us.ibm.com]
>Sent: Monday, September 21, 2009 4:11 PM
>To: Jerry Schwartz
>Cc: 'Michael Dykman'; mysql@lists.mysql.com; Mike Spreitzer
>Subject: RE: incremental name search?
>
>Ah, yes, I forgot to describe the server and the load.  Suppose my web app
>and MySQL are done via shared hosting by some common hosting business.  I
>do expect multiple people to be using my web app, but generally only one
>(usually zero, sometimes one, maybe occasionally a few) at a time.  Is
>this going to fly, in terms of latency for the incremental lookups and
>overall load at the hosting site?
>
>Thanks
>
>
>
>
>"Jerry Schwartz" 
>09/21/09 03:47 PM
>
>To
>"'Michael Dykman'" , Mike Spreitzer/Watson/i...@ibmus
>cc
>
>Subject
>RE: incremental name search?
>
>
>
>
>
>
>SoundEx doesn't do much for names, or non-English words for that matter.
>
>Although you could use AJAX to handle the web part of this, I can't
>imagine it
>being able to handle much of a load. I think you'll beat the system to
>death,
>to little avail.
>
>Regards,
>
>Jerry Schwartz
>The Infoshop by Global Information Incorporated
>195 Farmington Ave.
>Farmington, CT 06032
>
>860.674.8796 / FAX: 860.674.8341
>
>www.the-infoshop.com
>
>>-Original Message-
>>From: Michael Dykman [mailto:mdyk...@gmail.com]
>>Sent: Monday, September 21, 2009 12:21 PM
>>To: Mike Spreitzer
>>Cc: mysql@lists.mysql.com
>>Subject: Re: incremental name search?
>>
>>Perhaps this  could help you out..
>>
>>http://dev.mysql.com/doc/refman/5.1/en/string-functions.html#function_soundex
>>
>> - michael dykman
>>
>>On Mon, Sep 21, 2009 at 11:51 AM, Mike Spreitzer 
>wrote:
>>> Suppose I have a table of a few thousand people, with a FirstName field
>>> and a LastName field.  Sadly, my people are not so regular.  Some names
>>> have three parts (e.g., due to marriage) crammed into the two fields
>>> ("Hillary Rodham Clinton").  Some even have titles ("Dir, gastroent.
>>> dept., Fubar hosp. OurTown") wedged in there.  I want to make a web app
>>> that searches this table incrementally as I type into a web page in my
>>> browser.  I am thinking I will have to do something like continuously
>>> display the top 10 matches to what I have typed so far.  Of course,
>when I
>>> am typing I do not know exactly what is in the database.  I generally
>know
>>> only some of the parts of the name when I am typing (e.g., I am looking
>up
>>> "Mary Jones" without knowing whether Jones is her maiden name).
>Sometimes
>>> I am even typing something that is a spelled a bit wrong ("Schiller"
>vs.
>>> "Shiller") or variantly ("Lizzie" vs. "Elizabeth").  This seems pretty
>far
>>> from what MySQL can do directly.  I know about "LIKE" matching.  I know
>>> the wider SQL community has something called "soundex", but I have not
>yet
>>> found it in MySQL.  I have a hard time imagining what will help me with
>>> variants on a name.  I do not see any easy way to find the "top 10"
>>>

RE: query optimization question (my struggle against 'using temporary; using filesort')

2009-09-24 Thread Gavin Towey
Hi Ciaran,

So I think there's a couple things going on:

1. The explain plan for your "slow" query looks wrong, such as mysql is 
confused.  It's possible your index statistics are incorrect.  Try ANALYZE 
TABLE  on listings and addresses.

I think a sure way to fix it is to add STRAIGHT_JOIN to force the join order.  
That should get rid of the temp table and filesort operations and give faster 
results.

SELECT
  STRAIGHT_JOIN
  listings.*, addresses.*
  FROM
`listings`
JOIN
  `addresses` ON `addresses`.addressable_id = `listings`.id
  AND
  `addresses`.addressable_type = 'Listing'
  WHERE (addresses.parent_city_id = 3)
  ORDER BY listings.id DESC LIMIT 1



2. I need to make some comments about your table design:

This column is AWFUL:
`addressable_type` varchar(255) DEFAULT NULL,

Why have a field that holds up to 255 characters and put a little string in it 
like "Listing"? Why does it matter?  Well, it makes your indexes disastrously 
bloated:

KEY `index_addresses_on_parent_city_id_and_addressable_type`
(`parent_city_id`,`addressable_type`),


If you noticed in the explain plan, that index is 733 bytes *per row*.  
Especially using utf8 means each character takes 3 bytes in the index.  That's 
terrible. That type field should be a foreign key tinyint or at the very least 
be a much much shorter varchar field (such as 8 or 10)

You have lots of those varchar(255) fields, which looks like lazy design -- by 
not gathering correct requirements and designing accordingly you will hurt your 
database performance, waste disk space and cause yourself all kinds of future 
problems.

3.  Why are you using OUTER JOIN?

It looks to me like you're using it because you don't know the difference, 
since you're not looking for NULL rows or anything.  In fact, it looks like 
mysql is smart enough to know that you've negated the OUTER JOIN by putting 
conditions on the joined tables in the WHERE clause, and convert then to INNER 
JOINS.  Don't rely on that!  Use the correct join type.
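The point about a WHERE condition negating an OUTER JOIN is easy to demonstrate. A SQLite sketch with invented rows (column names borrowed from the thread):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE listings  (id INTEGER PRIMARY KEY);
CREATE TABLE addresses (addressable_id INTEGER, parent_city_id INTEGER);
INSERT INTO listings VALUES (1), (2);   -- listing 2 has no address
INSERT INTO addresses VALUES (1, 3);
""")

# LEFT JOIN alone keeps the unmatched listing as a NULL-extended row...
left = conn.execute("""
    SELECT l.id, a.parent_city_id FROM listings l
    LEFT JOIN addresses a ON a.addressable_id = l.id
""").fetchall()

# ...but a WHERE condition on the joined table discards that NULL row,
# making the query behave exactly like an INNER JOIN
filtered = conn.execute("""
    SELECT l.id, a.parent_city_id FROM listings l
    LEFT JOIN addresses a ON a.addressable_id = l.id
    WHERE a.parent_city_id = 3
""").fetchall()

print(left)      # includes the (2, None) row
print(filtered)  # the NULL-extended row is gone
```

Writing INNER JOIN where you mean it states your intent and doesn't depend on the optimizer noticing the conversion.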

Regards,
Gavin Towey

-Original Message-
From: Ciaran Lee [mailto:ciaran@gmail.com]
Sent: Tuesday, September 22, 2009 1:32 PM
To: mysql@lists.mysql.com
Subject: query optimization question (my struggle against 'using temporary; 
using filesort')

Hi,

I hope this is the right place to ask a question about query optimization.

Background:
I have a database which has events, which occur in places (listings). Places
have addresses, and addresses belong to a city. I can select the latest
event within a particular city very efficiently (less than 2ms), but
selecting the latest listing within a city is REALLY slow (10-20 seconds)
despite being almost a subset of the event query.

I have been working on this for about a day, and have tried all sorts of
tweaks to the indexes but to no avail. I always seem to end up with 'using
temporary; using filesort' as the 'extra' content in the explain result. If
anyone has a suggestion for what I might do to fix this, I'd really
appreciate it. If not, I could further de-normalize the database for
performance reasons, but I would feel dirty for doing so.

Here is the fast query (select the latest event within a particular city),
and it's explain.
SELECT
  events.*, listings.*, addresses.*
  FROM
`events`
LEFT OUTER JOIN
  `listings` ON `listings`.id = `events`.listing_id
LEFT OUTER JOIN
  `addresses` ON `addresses`.addressable_id = `listings`.id
  AND
  `addresses`.addressable_type = 'Listing'
  WHERE (addresses.parent_city_id = 3)
  ORDER BY events.id DESC LIMIT 1

  
  +----+-------------+-----------+--------+----------------------------+---------+---------+------------------------------+------+-------------+
  | id | select_type | table     | type   | possible_keys              | key     | key_len | ref                          | rows | Extra       |
  +----+-------------+-----------+--------+----------------------------+---------+---------+------------------------------+------+-------------+
  |  1 | SIMPLE      | events    | index  | index_events_on_listing_id | PRIMARY | 4       | NULL                         |    1 |             |
  |  1 | SIMPLE      | listings  | eq_ref | PRIMARY                    | PRIMARY | 4       | ratemyarea.events.listing_id |    1 | Using where |
  |  1 | SIMPLE      | addresses | ref    | index_addresses_on_parent_city_id_and_addressable_type,index_addresses_on_addressable_type_and_addressable_id,addressabl

RE: Stupid GROUP BY question

2009-09-25 Thread Gavin Towey
Commonly referred to as a "groupwise max":

http://dev.mysql.com/doc/refman/5.0/en/example-maximum-column-group-row.html
http://jan.kneschke.de/projects/mysql/groupwise-max/
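Applied to the Cust/Notes example from the question, the correlated-subquery form of the groupwise max looks like this. A SQLite sketch with invented data; the LEFT JOIN preserves customers that have no notes at all:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE cust  (cust_id INTEGER PRIMARY KEY);
CREATE TABLE notes (note_id INTEGER PRIMARY KEY, cust_id INTEGER,
                    note_time TEXT, note_text TEXT);
INSERT INTO cust VALUES (1), (2);      -- customer 2 has no notes
INSERT INTO notes VALUES
  (10, 1, '2009-09-01', 'old note'),
  (11, 1, '2009-09-25', 'newest note');
""")

# Join each customer only to the note whose time equals that customer's
# MAX(note_time); this carries note_text along without MAX(NoteText)
rows = conn.execute("""
    SELECT c.cust_id, n.note_time, n.note_text
      FROM cust c
      LEFT JOIN notes n ON n.cust_id = c.cust_id
       AND n.note_time = (SELECT MAX(note_time) FROM notes
                           WHERE cust_id = c.cust_id)
     ORDER BY c.cust_id
""").fetchall()
print(rows)
```

One caveat worth knowing: if two notes for the same customer share the exact same note_time, this form returns both rows.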

Regards,
Gavin Towey

-Original Message-
From: Jerry Schwartz [mailto:jschwa...@the-infoshop.com]
Sent: Friday, September 25, 2009 1:28 PM
To: mysql@lists.mysql.com
Subject: Stupid GROUP BY question

It must be too late in the week…

Suppose table Cust has one field, CustID. Suppose table Notes has four fields: 
NoteID (unique), CustID, NoteTime, and NoteText.

A customer can have zero or more notes. Now here’s the seemingly simple problem 
that I’m trying to solve: I want to find the newest note (if any) for each 
customer.

If all I want is the date, then I can do

SELECT Cust.CustID, MAX(Notes.NoteTime) FROM Cust LEFT JOIN Notes ON 
Cust.CustID = Notes.CustID GROUP BY Cust.CustID;

That will work just fine, but now I also want the NoteText associated with the 
newest note. Obviously I can’t use MAX(NoteText). I could do this using a 
temporary table, but it seems like there should be another way.

Regards,

Jerry Schwartz
The Infoshop by Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341

www.the-infoshop.com






RE: Master/Slave Replication Question

2009-09-25 Thread Gavin Towey
You'd be surprised how many places use this as an actual solution. All 
arguments aside about what level of the architecture should be doing what, it 
simply works well.  Moreover, it works today, as opposed to waiting until the 
end of time for the database developers to add features like that (MySQL 
Cluster is already a distributed database, and the devs have said they're not 
interested in trying to turn regular MySQL into a distributed product; 
instead they want to focus on what it does best).

I would love to see a drop-in solution that requires no application changes, 
and doesn't introduce any additional complications such as adding excessive 
overhead, or have yet another single point of failure, but none (or at least no 
good ones) exist.  This is because optimization is a very specific process.

Applications like PHPBB, Drupal and WordPress should have their database access 
encapsulated well enough that making the changes to split reads and writes is 
trivial.  If not, then it's bad design on their part. Scalability is very much 
a part of application design as well, and just because lots of people use those 
apps, doesn't mean they were designed to scale well.

Regards,
Gavin Towey

-Original Message-
From: Tim Gustafson [mailto:t...@soe.ucsc.edu]
Sent: Friday, September 25, 2009 2:44 PM
To: mysql@lists.mysql.com
Subject: Re: Master/Slave Replication Question

> Another thought would be at the application layer, sending
> all the updates (insert,delete, update, etc) to server A,
> and the selects to the local slave servers.

This has been suggested before, and I'm totally against it.  Applications like 
PHPBB, Drupal, WordPress, etc can't be easily configured to do this, and I'd 
really like to use those applications in a more robust and redundant 
environment.

Pushing the work of this sort of master/slave relationship to the application 
level means that most applications will not support it.  Replication is a 
database server problem, not an application problem.

Tim Gustafson
Baskin School of Engineering
UC Santa Cruz
t...@soe.ucsc.edu
831-459-5354





RE: Master/Slave Replication Question

2009-09-25 Thread Gavin Towey
Cluster is not supposed to be a universal solution, for a reason.  Universal 
solutions tend not to be very performant.

"If each application handles this sort of thing differently, then when I run 
all these applications on my server (and I do - we host about 175 web sites 
altogether) I have to configure each application separately, and I have to 
instruct all my users (many of them inexperienced grad students) to remember 
that "writes go here, reads go there" when they write their own PHP code."

Do you want geographic redundancy or do you want to scale reads?  In this case 
you're talking about scaling reads for a bunch of apps all running together.  
If you want performance in that case, then first you'd want to isolate the apps 
from each other.


"And, of course, handling this sort of thing at the application level means 
that some applications will never support it, and therefore never be able to be 
geographically redundant."

Geographical redundancy is different:  a dns record with a zero ttl, with a 
master->slave replication setup. Point the record a the master and if it fails, 
change the dns entry to point to the slave.  Your applications never need to 
know about replication.

That’s even if you don't want to go with the more complex Linux HA or hardware 
based ip takeover solutions.  There are many ways you could add redundancy 
without modifying the apps.

That's the great thing about open source software and techniques. They're like 
building blocks, and you can put them together however you want.  I find this 
much more preferable to the all-in-one black-box solution.

Regards,
Gavin Towey

-Original Message-
From: Tim Gustafson [mailto:t...@soe.ucsc.edu]
Sent: Friday, September 25, 2009 4:18 PM
To: Gavin Towey
Cc: mysql@lists.mysql.com
Subject: Re: Master/Slave Replication Question

> Moreover, it works today as opposed to waiting until the end
> of time for the database developers to add features like that
> (which mysql cluster is already a distributed database, and
> the devs have said they're not interested in trying to turn
> the regular mysql into a distributed product, instead they
> want to focus on what it does best)

With all due respect to the mySQL cluster people, setting up a mySQL cluster 
just isn't in the cards for lots of organizations.  It's just too much.  
There's a huge implementation gap between a single mySQL server and a mySQL 
Cluster.  I've also heard from people who have tried to implement mySQL 
clustering that wide-area cluster replication is hard or impossible (I can't 
remember which), so the ability to provide geographic redundancy (one of my 
requirements here) isn't workable.

I think saying that I'd have to wait until the end of time is a bit harsh.  
Sure, it's not going to happen tomorrow, but I wasn't expecting that anyhow.

I'm not sure if you've looked at the database integration for things like 
Drupal, but there will probably never be a way for Drupal to use an "updates go 
to this server, reads go to this server" configuration, as there are thousands 
of Drupal modules and almost all of them use the database directly, and each 
would have to be re-coded to work with the read/write split configuration.

And anyhow, I think that suggestion is missing the point:

If each application handles this sort of thing differently, then when I run all 
these applications on my server (and I do - we host about 175 web sites 
altogether) I have to configure each application separately, and I have to 
instruct all my users (many of them inexperienced grad students) to remember 
that "writes go here, reads go there" when they write their own PHP code.

And, of course, handling this sort of thing at the application level means that 
some applications will never support it, and therefore never be able to be 
geographically redundant.

So yeah, maybe lots of custom-written software handles the read/write split 
configuration well, but there's lots more that doesn't.  I don't know of a 
single open source application that does.

So again, I go back to my original statement: replication is a database server 
problem, not an application problem.  :)

Tim Gustafson
Baskin School of Engineering
UC Santa Cruz
t...@soe.ucsc.edu
831-459-5354



RE: Newbie question: importing cvs settings

2009-09-28 Thread Gavin Towey
1. Try opening the csv file in a text editor; viewing it in a spreadsheet may 
be hiding extra formatting or lines that are causing problems.

2. Try importing through the mysql CLI.  From the screenshot you posted, it 
looks like PMA is parsing the file and creating an insert statement for each 
line.  It may be incorrectly handling some of that data.  Using the CLI you'll 
get better feedback about what the error is, if any.

Regards,
Gavin Towey

-Original Message-
From: Patrice Olivier-Wilson [mailto:b...@biz-comm.com]
Sent: Saturday, September 26, 2009 11:02 AM
To: 'mysql'
Subject: Re: Newbie question: importing cvs settings

Yep, typo ...:-(

I did some screen shots of 2 tests. A workaround solution is to make a
final entry in the csv file that I don't really need. Then everything up
to that point gets imported ok using CSV method. The LOAD DATA method
did not enter anything.

My earlier assumption about line 17 was false. It was dependent on how
many rows, and I had been using tests with same amount of data.
As I said, very beginner level, so thanks for the patience.

screenshots at
http://biz-comm.com/mysqlprojects/

thank you

John wrote:
> I assume you mean csv not cvs!
>
> What is the error you get when the import fails? What version of MySQL are
> you using? Can you post the output of SHOW CREATE TABLE for the table you
> are trying to load the file in to and a sample of the csv which is failing
> to load?
>
> Do you get the same error if you try and load the files using MySQL client
> and the LOAD DATA INFILE command? (See this link for details on how to use
> LOAD DATA INFILE http://dev.mysql.com/doc/refman/5.1/en/load-data.html)
>
> Regards
>
> John Daisley
> MySQL & Cognos Contractor
>
> Certified MySQL 5 Database Administrator (CMDBA)
> Certified MySQL 5 Developer (CMDEV)
> IBM Cognos BI Developer
>
> Telephone +44 (0)7812 451238
> Email j...@butterflysystems.co.uk
>
> -Original Message-
> From: Patrice Olivier-Wilson [mailto:b...@biz-comm.com]
> Sent: 26 September 2009 17:08
> To: mysql
> Subject: Newbie question: importing cvs settings
>
> Greetings:
>
> I have a project for which need to import cvs files into db.
>
> I can do so up to a point. The import will only do 16 lines,
> consistently. Error is failing at line 17.
>
> Steps:
>
> create table fields in Excel document, where they all match database fields
> enter information in several of the columns, but not all as client will
> be filling it in online (leaving ID blank)
> save excel to .cvs
> log into phpMyAdmin
> import cvs
>
>
> I've experimented with several settings in the import, but consistently,
> it fails at line 17, even with different .cvs files.
>
>
> Any guidance, most appreciated.
>
>


--
Patrice Olivier-Wilson
888-385-7217
http://biz-comm.com


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=gto...@ffn.com






RE: upgrading from 4.1 to 5.4

2009-10-01 Thread Gavin Towey

Using mysqldump and loading directly into 5.4 *might* work, but you should 
never do any of this on your production system without testing.

Get another box, start with 4.1, and do the upgrade on a test server -- even 
test your queries, as there are a few incompatible changes between 4 and 5.  Once 
you know the exact steps you need to take, and how much time they take, then you 
can plan the upgrade accordingly on your live system.

Regards,
Gavin Towey

-Original Message-
From: monem mysql [mailto:monem.my...@gmail.com]
Sent: Thursday, October 01, 2009 9:31 AM
To: mysql@lists.mysql.com
Subject: upgrading from 4.1 to 5.4

Hello


I have to upgrade many mysql databases from mysql 4 and 4.1 to 5.4, with a
large total size of 2.7 TB.

All tables use the MyISAM engine.

I have to make that update on live system with minimal down time possible.



The official method takes too much time. But I've read that we can use 'dump
and reload' to upgrade directly to 5.1; will it work with 5.4?

Also, the tables contain many charsets. Will they be altered?



It's the first time that I've done this.

Are there any better solutions, and any precautions to take?



thanks for your help.



monem





RE: Nested Joins

2009-10-01 Thread Gavin Towey
Joins aren't nested like that, unless you use a subquery.  I think you just 
need to remove the parens around the second join.

For better help:
1. show the real SQL -- echo the statement.  Most people here don't like 
looking at app code because your variables could contain anything.

2. Give the exact error message

3. If there's no error, explain what you expect and what you're getting

4. Include table schema

5. Explain what you're trying to accomplish.

Regards,
Gavin Towey



-Original Message-
From: Victor Subervi [mailto:victorsube...@gmail.com]
Sent: Thursday, October 01, 2009 1:34 PM
To: mysql@lists.mysql.com
Subject: Nested Joins

Hi;
I'm new to join statements. Here's my python syntax:

  cursor.execute('select * from %s left join products on
%s.Item=products.Item (left join categories on
products.Category=categories.ID);' % (client, client))

I believe it's clear how I want to nest, but I don't know how to repair my
syntax.
TIA,
V





RE: Nested Joins

2009-10-01 Thread Gavin Towey
Victor,

Thank you for the information, that was helpful.

At least part of the problem is the variables you are replacing in that string, 
which we can't see.

The statement should be something like:

select * from ben_franklin_planners c join products p on c.Item=p.Item join 
categories cat on p.Category=cat.ID

Make your code produce the above, and you should be fine.  I suspect you don't 
need LEFT JOIN there, an inner join will suffice.
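A quick way to see why the inner join suffices here is to compare the two on toy data. This sketch uses Python's `sqlite3` as a stand-in (the tables and rows are invented for illustration; MySQL's join semantics match for this case):

```python
import sqlite3

# Toy versions of the tables in this thread (schemas simplified).
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE products (ID INTEGER PRIMARY KEY, Category INTEGER, Item TEXT);
CREATE TABLE categories (ID INTEGER PRIMARY KEY, Category TEXT);
INSERT INTO categories VALUES (1, 'planners');
INSERT INTO products VALUES (10, 1, 'refill');
INSERT INTO products VALUES (11, NULL, 'unfiled');
""")

# LEFT JOIN keeps products with no matching category (NULL in its place)...
left = db.execute("""SELECT p.Item, c.Category FROM products p
                     LEFT JOIN categories c ON p.Category = c.ID
                     ORDER BY p.ID""").fetchall()
# ...while a plain (inner) JOIN drops them, which is fine when every
# row is expected to have a matching category.
inner = db.execute("""SELECT p.Item, c.Category FROM products p
                      JOIN categories c ON p.Category = c.ID
                      ORDER BY p.ID""").fetchall()

print(left)   # [('refill', 'planners'), ('unfiled', None)]
print(inner)  # [('refill', 'planners')]
```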

For more info on joins:
http://hashmysql.org/index.php?title=Introduction_to_Joins
For more indepth info: 
http://dev.mysql.com/tech-resources/articles/mysql-db-design-ch5.pdf
http://dev.mysql.com/doc/refman/5.1/en/join.html

Regards,
Gavin Towey

From: Victor Subervi [mailto:victorsube...@gmail.com]
Sent: Thursday, October 01, 2009 2:25 PM
To: Gavin Towey; mysql@lists.mysql.com
Subject: Re: Nested Joins

On Thu, Oct 1, 2009 at 4:03 PM, Gavin Towey <gto...@ffn.com> wrote:
Joins aren't nested like that, unless you use a subquery.  I think you just 
need to remove the parens around the second join.

I tried that and no go :(

For better help:
1. show the real SQL -- echo the statement.  Most people here don't like 
looking at app code because your variables could contain anything.
ProgrammingError: (1064, "You have an error in your SQL syntax; check the 
manual that corresponds to your MySQL server version for the right syntax to 
use near ') left join categories on products.Category=categories.ID)' at line 
1"), referer: http://13gems.com/global_solutions/spreadsheet_edit.py
2. Give the exact error message
Isn't that the same thing?

3. If there's no error, explain what you expect and what you're getting

4. Include table schema
DESCRIBE `ben_franklin_planners`
ID  int(4) unsigned  NULL
Item int(4) unsigned NULL
Discount int(2) unsigned NULL

DESCRIBE categories
ID  int(3)  primary key not NULL  auto_increment
Category varchar(20) unique NULL

describe products
ID  int(4)  primary key not NULL
Category int(3) NULL
Item varchar(20) UNIQUE NULL
Description varchar(255) NULL
UOM varchar(20) NULL
Price float(7,2) NULL


5. Explain what you're trying to accomplish.
  cursor.execute('select * from %s left join products on %s.Item=products.Item 
left join categories on products.Category=categories.ID;' % (client, client))

The "client" in this case is ben_franklin_planners
ben_franklin_planners has an item # that is the same as the item # in products, 
where all the information about the products is to be found, EXCEPT the name of 
the category. For that, we have to go to the categories table.

Hope that makes it clear.
TIA,
V





RE: Nested Joins

2009-10-01 Thread Gavin Towey
Victor,

Just noticed, the join condition from client to products should be changed:


select * from ben_franklin_planners c join products p on c.Item=p.ID join 
categories cat on p.Category=cat.ID

If you're still getting syntax errors you need to check your variables.  Try 
assigning the query you're building to a string, then printing it out so you 
know *exactly* what you're sending to mysql.
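A minimal sketch of that debugging step (the table name and query shape are the ones from this thread; the variable names are illustrative) -- build the query into a string, print it, and only then execute it:

```python
# Illustrative sketch: print *exactly* what you send to MySQL before
# blaming the SQL syntax.  The table name comes from a trusted internal
# mapping here, never directly from user input.
client = "ben_franklin_planners"

query = (
    "SELECT * FROM {0} c "
    "JOIN products p ON c.Item=p.ID "
    "JOIN categories cat ON p.Category=cat.ID".format(client)
)

print(query)              # inspect this output first
# cursor.execute(query)   # only after the printed SQL looks right
```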

Regards,
Gavin Towey

From: Victor Subervi [mailto:victorsube...@gmail.com]
Sent: Thursday, October 01, 2009 3:04 PM
To: Gavin Towey; mysql@lists.mysql.com
Subject: Re: Nested Joins

Well, your syntax is *exactly* what I had (with a few cosmetic changes). I've 
been over the MySQL manual on joins with no luck. I'll read over your resources 
tonight. Any other ideas would be appreciated.
Thanks,
V
On Thu, Oct 1, 2009 at 4:49 PM, Gavin Towey <gto...@ffn.com> wrote:

Victor,



Thank you for the information, that was helpful.



At least part of the problem is the variables you are replacing in that string, 
which we can't see.



The statement should be something like:



select * from ben_franklin_planners c join products p on c.Item=p.Item join 
categories cat on p.Category=cat.ID



Make your code produce the above, and you should be fine.  I suspect you don't 
need LEFT JOIN there, an inner join will suffice.



For more info on joins:

http://hashmysql.org/index.php?title=Introduction_to_Joins

For more indepth info: 
http://dev.mysql.com/tech-resources/articles/mysql-db-design-ch5.pdf

http://dev.mysql.com/doc/refman/5.1/en/join.html



Regards,

Gavin Towey



From: Victor Subervi [mailto:victorsube...@gmail.com]
Sent: Thursday, October 01, 2009 2:25 PM
To: Gavin Towey; mysql@lists.mysql.com
Subject: Re: Nested Joins



On Thu, Oct 1, 2009 at 4:03 PM, Gavin Towey <gto...@ffn.com> wrote:

Joins aren't nested like that, unless you use a subquery.  I think you just 
need to remove the parens around the second join.

I tried that and no go :(

For better help:
1. show the real SQL -- echo the statement.  Most people here don't like 
looking at app code because your variables could contain anything.

ProgrammingError: (1064, "You have an error in your SQL syntax; check the 
manual that corresponds to your MySQL server version for the right syntax to 
use near ') left join categories on products.Category=categories.ID)' at line 
1"), referer: http://13gems.com/global_solutions/spreadsheet_edit.py

2. Give the exact error message

Isn't that the same thing?


3. If there's no error, explain what you expect and what you're getting

4. Include table schema

DESCRIBE `ben_franklin_planners`
ID  int(4) unsigned  NULL
Item int(4) unsigned NULL
Discount int(2) unsigned NULL

DESCRIBE categories
ID  int(3)  primary key not NULL  auto_increment
Category varchar(20) unique NULL

describe products
ID  int(4)  primary key not NULL
Category int(3) NULL
Item varchar(20) UNIQUE NULL
Description varchar(255) NULL
UOM varchar(20) NULL
Price float(7,2) NULL



5. Explain what you're trying to accomplish.

  cursor.execute('select * from %s left join products on %s.Item=products.Item 
left join categories on products.Category=categories.ID;' % (client, client))

The "client" in this case is ben_franklin_planners
ben_franklin_planners has an item # that is the same as the item # in products, 
where all the information about the products is to be found, EXCEPT the name of 
the category. For that, we have to go to the categories table.

Hope that makes it clear.
TIA,
V









RE: The Execute from the command line

2009-10-02 Thread Gavin Towey
mysql -vv

See mysql --help for more info

Regards,
Gavin Towey


-Original Message-
From: lucas.ctr.heu...@faa.gov [mailto:lucas.ctr.heu...@faa.gov]
Sent: Friday, October 02, 2009 2:20 PM
To: mysql@lists.mysql.com
Subject: The Execute from the command line

I am using


linuxmachine:/var/wwwdev/lucas# mysql -udatabase -ppassword -e "LOAD DATA
LOCAL INFILE  '/var/wwwdev/lucas/cardreaderimport/update.csv' into table
Project.testtest fields terminated by ',' lines terminated by '\n'
(Id,LastName,MiddleName,FirstName,TimeStamp,Ext,Status,Location,Expire,BadgeNum);"

and it works great, I just get no verification that anything has been
loaded in.. if I login it gives me back a little message saying it was
completed and how many are duplicated. How can I get this information from
the command line?

Wishing you the best you know you deserve,





RE: Mysql Perl DBI DBD Version Compatability for MAC OS 10.6

2009-10-05 Thread Gavin Towey
MySQL Server can be 32 or 64 bit, and it shouldn't make a difference to Perl.  
However, if you can run 64 bit, you should.

Perl and its modules all need to be the same architecture.  It doesn't matter 
whether they're 32 bit or 64 bit, as long as everything is consistent.

I see no reason why those versions should have a problem interacting; what are 
you trying, and what is the error you get?

Regards
Gavin Towey

-Original Message-
From: Hagen [mailto:finha...@comcast.net]
Sent: Monday, October 05, 2009 11:58 AM
To: mysql@lists.mysql.com
Cc: finha...@comcast.net
Subject: Mysql Perl DBI DBD Version Compatability for MAC OS 10.6

After installing Apple's MAC Snow Leopard OS 10.6 I haven't been able to get my 
DBD:mysql module to work. I am kind of beyond trying to fix that issue directly 
so I am requesting assistance defining a compatible version set which is known 
to work. Between various releases and 32/64 bit options I have to confess there 
are too many choices and not enough information on how to choose for me to 
trust my own judgment anymore.

Currently, I am running:

1.  mysql Server Version 5.4.1-beta MySQL Community Server

2.  perl version 5.8.9 built for Darwin 2level

3.  perl DBI version 1.609

4.  perl DBD:mysql 4.012

I am not positive which of the above are 32 or 64 bit but help discerning which 
32/64 versions are required would also be appreciated. I am happy ;-( to 
uninstall and reinstall any of the above to get to a working configuration. 
Thanks in advance.

Hagen Finley
Boulder, CO






RE: Questions on un-index searches and slow-query-log

2009-10-05 Thread Gavin Towey
See the log-queries-not-using-indexes option in my.cnf, used with the slow log.

http://dev.mysql.com/doc/refman/5.1/en/slow-query-log.html
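For example, a my.cnf fragment along these lines (option names as in MySQL 5.1; the file path is illustrative, and older 5.0 servers use log-slow-queries instead of the slow_query_log variables):

```ini
[mysqld]
slow_query_log            = 1
slow_query_log_file       = /var/log/mysql/slow.log
long_query_time           = 2
log-queries-not-using-indexes
```

For question (2), the mysqldumpslow utility bundled with the server summarizes and filters the slow log by query pattern.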


Regards,
Gavin Towey


-Original Message-
From: Brown, Charles [mailto:cbr...@bmi.com]
Sent: Monday, October 05, 2009 12:59 PM
To: Mysql List
Cc: John Meyer; Mark Phillips
Subject: Questions on un-index searches and slow-query-log

Questions Folks:
(1) What do you about un-index searches. How can one report and monitor them?
(2) What do you do with the slow-query log. Are there any utilities or scripts 
out there to filter and manage this log?

Thanks




This message is intended only for the use of the Addressee and
may contain information that is PRIVILEGED and CONFIDENTIAL.

If you are not the intended recipient, you are hereby notified
that any dissemination of this communication is strictly prohibited.

If you have received this communication in error, please erase
all copies of the message and its attachments and notify us
immediately.

Thank you.








RE: Mysql Perl DBI DBD Version Compatability for MAC OS 10.6

2009-10-05 Thread Gavin Towey
I'm using:
DBD::mysql 3.007
DBI 1.52
Perl 5.8.8


That output looks like missing shared library dependencies to me, but maybe 
someone else has a better idea.

Try:
ldconfig -p | grep mysql

And see if you see something like:
libmysqlclient.so (libc6,x86-64) => /usr/lib64/libmysqlclient.so

in the output.  If not you may have to find it and make sure ldconfig knows 
about it.

You should also be able to use ldd to check shared lib dependencies.

Regards,
Gavin Towey

-Original Message-
From: Hagen Finley [mailto:finha...@comcast.net]
Sent: Monday, October 05, 2009 1:30 PM
To: Gavin Towey
Cc: mysql@lists.mysql.com
Subject: Re: Mysql Perl DBI DBD Version Compatability for MAC OS 10.6

Here is the error I am receiving ( I posted this issue in the recent past):

dyld: lazy symbol binding failed: Symbol not found: _mysql_init
  Referenced from:
/Library/Perl/5.10.0/darwin-thread-multi-2level/auto/DBD/mysql/mysql.bundle
  Expected in: flat namespace

dyld: Symbol not found: _mysql_init
  Referenced from:
/Library/Perl/5.10.0/darwin-thread-multi-2level/auto/DBD/mysql/mysql.bundle
  Expected in: flat namespace

Trace/BPT trap


Could you send me the versions of Perl, DBD & DBI you are using?

Hagen


On 10/5/09 1:16 PM, "Gavin Towey"  wrote:

> MySQL Server can be 32 or 64 bit and it shouldn't make a difference to PERL.
> However, if you can run 64 bit, you should.
>
> PERL and it's modules all need to be the same architecture.  It doesn't matter
> if they're 32 bit or 64 bit, as long as it's consistent with itself.
>
> I see no reason why those versions should have a problem interacting; what are
> you trying, and what is the error you get?
>
> Regards
> Gavin Towey
>
> -Original Message-
> From: Hagen [mailto:finha...@comcast.net]
> Sent: Monday, October 05, 2009 11:58 AM
> To: mysql@lists.mysql.com
> Cc: finha...@comcast.net
> Subject: Mysql Perl DBI DBD Version Compatability for MAC OS 10.6
>
> After installing Apple's MAC Snow Leopard OS 10.6 I haven't been able to get
> my DBD:mysql module to work. I am kind of beyond trying to fix that issue
> directly so I am requesting assistance defining a compatible version set which
> is known to work. Between various releases and 32/64 bit options I have to
> confess there are too many choices and not enough information on how to choose
> for me to trust my own judgment anymore.
>
> Currently, I am running:
>
> 1.  mysql Server Version 5.4.1-beta MySQL Community Server
>
> 2.  perl version 5.8.9 built for Darwin 2level
>
> 3.  perl DBI version 1.609
>
> 4.  perl DBD:mysql 4.012
>
> I am not positive which of the above are 32 or 64 bit but help discerning
> which 32/64 versions are required would also be appreciated. I am happy ;-( to
> uninstall and reinstall any of the above to get to a working configuration.
> Thanks in advance.
>
> Hagen Finley
> Boulder, CO
>
>
>
>







RE: mysql.socket location problem

2009-10-05 Thread Gavin Towey
Hi Scott,

Change the socket = line in the [client] section of your my.cnf as well.
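For example, using the path from the original message (the command-line client reads the [client] group, so both sections must agree):

```ini
[mysqld]
socket = /var/lib/mysql/mysql.sock

[client]
socket = /var/lib/mysql/mysql.sock
```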

Regards
Gavin Towey

-Original Message-
From: Scott Wagner [mailto:gildedp...@comcast.net]
Sent: Monday, October 05, 2009 3:21 PM
To: mysql@lists.mysql.com
Subject: mysql.socket location problem

Hi
I just did a new installation of Suse 11.2 and found that I couldn't
connect to mysql from a php script because mysql.sock was
in /var/run/mysql/mysql.sock rather than /var/lib/mysql/mysql.sock.  I
changed all the lines in /etc/my.cnf for the socket
to /var/lib/mysql/mysql.sock.  Now my php scripts connect but I can't
connect to  mysql from the command line.  I get the error can't connect
through socket /var/run/mysql/mysql.sock. What do I have to change to
make the konsole use /var/lib/mysql/mysql.sock?

Thanks,
Scott








RE: Replication recovery

2009-10-07 Thread Gavin Towey
In the case that one machine has a power failure, then starts a new binlog, you 
just have to set the slave to start replicating from the beginning of that 
binlog.  That's easy to detect and repair with a daemon script.  Even if both 
machines die, it'll be a similar scenario.

Regards,
Gavin Towey

-Original Message-
From: Bryan Cantwell [mailto:bcantw...@firescope.com]
Sent: Wednesday, October 07, 2009 10:47 AM
To: mysql@lists.mysql.com
Subject: Replication recovery

I have 2 - 5.0.51a mysql databases setup in a dual master scenario. A is
master of B and vise versa...
In Linux 2.6.26 (if that matters).
Everything is great while all is running normally. But, when I am
testing the system by creating disasterous scenarios, I find some
challenges I hope to get overcome.

Let's say 'A' machine's plug gets kicked out of the wall and so when
mysql restarts it starts fresh bin-log and the slave 'B' does not
realize this change and we are now out of sync. 'A', however will simply
catch up to 'B' and there MAY not be a problem.

Even worse, 'A' dies and no one does anything about it, then later 'B'
dies. Now Someone finally comes along and restarts both machines at the
same time and neither are on the 'same page' and are totally out of sync.

How, without re-copying the datafiles and starting over (after
determining the most up to date machine to use), can I bring both 'A'
and 'B' to the same point so I can move forward?

Thanks,

Bryancan







RE: Replication recovery

2009-10-07 Thread Gavin Towey
B should be the only one with a bad replication position, since it was 
replicating when A crashed.  So just adjust B, and A should catch up as normal 
(provided you have the last 24 hours of binlogs on B for A to read )

Regards,
Gavin Towey

-Original Message-
From: Bryan Cantwell [mailto:bcantw...@firescope.com]
Sent: Wednesday, October 07, 2009 11:12 AM
To: Gavin Towey
Cc: mysql@lists.mysql.com
Subject: Re: Replication recovery

When only one machine dies I do send the new master position info to the
still running slave, and yes, it does the trick.
My main challenge is when A dies and is dead for 24 hours and then B
dies too. Now A is already out of synch with B and now B has a new log
position... Doesnt this make A now have a huge gap in data? How do I get
A up to date with B?

thx,
Bryancan

On 10/07/2009 12:53 PM, Gavin Towey wrote:
> In the case that one machine has a power failure, then starts a new binlog, 
> you just have to set the slave to start replicating from the beginning of 
> that binlog.  That's easy to detect and repair with a daemon script.  Even if 
> both machines die, it'll be a similar scenario.
>
> Regards,
> Gavin Towey
>
> -Original Message-
> From: Bryan Cantwell [mailto:bcantw...@firescope.com]
> Sent: Wednesday, October 07, 2009 10:47 AM
> To: mysql@lists.mysql.com
> Subject: Replication recovery
>
> I have 2 - 5.0.51a mysql databases setup in a dual master scenario. A is
> master of B and vise versa...
> In Linux 2.6.26 (if that matters).
> Everything is great while all is running normally. But, when I am
> testing the system by creating disasterous scenarios, I find some
> challenges I hope to get overcome.
>
> Let's say 'A' machine's plug gets kicked out of the wall and so when
> mysql restarts it starts fresh bin-log and the slave 'B' does not
> realize this change and we are now out of sync. 'A', however will simply
> catch up to 'B' and there MAY not be a problem.
>
> Even worse, 'A' dies and no one does anything about it, then later 'B'
> dies. Now Someone finally comes along and restarts both machines at the
> same time and neither are on the 'same page' and are totally out of sync.
>
> How, without re-copying the datafiles and starting over (after
> determining the most up to date machine to use), can I bring both 'A'
> and 'B' to the same point so I can move forward?
>
> Thanks,
>
> Bryancan
>
>
>
>






RE: Some MySQL questions

2009-10-08 Thread Gavin Towey
Try the tutorial:

http://dev.mysql.com/doc/refman/5.0/en/tutorial.html
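For the two questions below, the mysql command-line client itself covers both; a sketch (the `users` table and `id` column are only guesses at the poster's schema):

```sql
-- 1) Page long result sets one screen at a time (client-side command):
--      mysql> pager less
--      mysql> SELECT * FROM users;
--      mysql> nopager
-- 2) Show just the column names and types, with no rows:
DESCRIBE users;
-- Then, once the key column is known, drop the single row:
DELETE FROM users WHERE id = 1898;
```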

Regards,
Gavin Towey


-Original Message-
From: John Oliver [mailto:joli...@john-oliver.net]
Sent: Thursday, October 08, 2009 2:19 PM
To: mysql@lists.mysql.com
Subject: Some MySQL questions

1) When I select * from whatever; is there a way to have the results go
by one screen at a time?

2) In reference to the above, is there a way to just display the row
that shows the names of each column?

I need to drop one row from a table with a few thousand rows.  I'm
guessing I want to:

delete from `users` where COLUMN_NAME = 1898;

Is that right?  I'm not sure if COLUMN_NAME is "uid" or "id" or maybe
something else, and since I can't pipe it through more or less... :-)

--
***
* John Oliver http://www.john-oliver.net/ *
* *
***







RE: Req. suitable .cnf file for Server used by 2000 users daily

2009-10-14 Thread Gavin Towey
Hi,

This script might help with some tuning suggestions, run it after you have some 
production traffic running against your database.
https://launchpad.net/mysql-tuning-primer

Also you should enable the slow query log, so you can capture queries to be 
optimized:
http://dev.mysql.com/doc/mysql/en/Slow_query_log.html

http://dev.mysql.com/doc/refman/5.0/en/mysql-indexes.html


Regards,
Gavin Towey


-Original Message-
From: jeetendra.ran...@sampatti.com [mailto:jeetendra.ran...@sampatti.com]
Sent: Wednesday, October 14, 2009 3:21 AM
To: mysql@lists.mysql.com
Subject: Req. suitable .cnf file for Server used by 2000 users daily

Hi,
Will you please guide me?
We are about to launch one website whose database is in MySQL. I am very excited 
about the server settings, especially the .cnf file.
I have the hardware and .cnf details below. Will you please tell me whether the 
.cnf settings are sufficient for the current hardware?
Initially 2000 users will visit this site every day.

Hardware and OS
*
Operating System : Red Hat Fedora Core 8
Processor  : Intel Core 2 Quad - 2.83 GHz,
RAM  : 4 GB
Total Disk Space : 600 GB (300 GB usable)
RAID  : RAID1
Disk Drive(s) : 300 GB (Drive #1), 300 GB (Drive #2)
Bandwidth Quota : 500 GB
Firewall  : PIX 501
Version : 5.0.81-community-log
Version_comment : MySQL Community Edition (GPL)
Version Compile Machine : i686
Version Compile OS  : pc-linux-gnu
my.cnf details
***
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
user=mysql
# Default to using old password format for compatibility with mysql 3.x
# clients (those using the mysqlclient10 compatibility package).
old_passwords=1
key_buffer = 16M
key_buffer_size=4M
sort_buffer_size=2M
query_cache_size=64M
log-bin
log_queries_not_using_indexes=1
long_query_time=1
log_slow_queries=slowQry.log
join_buffer_size=4M
max_connections=150
max_allowed_packet = 32M
table_cache = 256
net_buffer_length = 8K
read_buffer_size = 2M
read_rnd_buffer_size = 2M
myisam_sort_buffer_size = 8M
thread_stack=5M
thread_cache_size=128M
connect_timeout=30
query_cache_limit=32M
log-error
# Comment the following if you are using InnoDB tables
innodb_data_home_dir = /var/lib/mysql/
innodb_data_file_path = ibdata1:10M:autoextend
innodb_log_group_home_dir = /var/lib/mysql/
# You can set .._buffer_pool_size up to 50 - 80 %
# of RAM but beware of setting memory usage too high
innodb_buffer_pool_size = 4M
innodb_additional_mem_pool_size = 2M
# Set .._log_file_size to 25 % of buffer pool size
innodb_log_file_size = 16M
innodb_log_buffer_size = 8M
innodb_flush_log_at_trx_commit = 1
innodb_lock_wait_timeout = 50
[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
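The InnoDB values above are far below what the file's own comments suggest (buffer pool at 50 - 80 % of RAM, log file at 25 % of the pool), and thread_cache_size is a count of threads, not a byte value. A hedged sketch sized per those comments for a 4 GB host, not a tuned recommendation for any specific workload:

```ini
# Sketch only - sized per the comments in the file above, for a 4 GB host
innodb_buffer_pool_size = 2G
innodb_log_file_size    = 512M
# thread_cache_size is a count of cached threads, not a byte value
thread_cache_size       = 16
```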
Thanks in advance
Regards
Jeetendra Ranjan






RE: Inserting an Image

2009-10-15 Thread Gavin Towey
"Image in string form" sounds like you're not inserting binary data, but rather 
some sort of encoded data.  Even if it is binary, you'll have to escape at 
least the quote characters; you can see clearly at the top of your data there 
is:
'ÿØÿà JFIF ÿÛC "" $(4,$&1'


Another thing to think about: storing image data directly in a database is 
often not a good idea. See these links for more info:

http://mysqldump.azundris.com/archives/36-Serving-Images-From-A-Database.html 
and http://hashmysql.org/index.php?title=Storing_files_in_the_database
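If the data is going into the database anyway, the usual client-side fix is to pass the value as a query parameter so the driver escapes quotes and binary bytes itself, instead of interpolating it into the SQL string. A sketch in the poster's Python (MySQLdb-style DB-API; connection details, file name, and table names are illustrative):

```python
import MySQLdb  # assumed driver; any DB-API module works the same way

conn = MySQLdb.connect(host="localhost", user="user",
                       passwd="secret", db="shop")
cursor = conn.cursor()

# Read the image as raw bytes.
pic1 = open("photo.jpg", "rb").read()

# %s here is a DB-API placeholder, not Python string interpolation:
# the driver escapes the binary value safely before sending it.
cursor.execute("UPDATE productsX SET pic1 = %s WHERE ID = %s", (pic1, 2))
conn.commit()
```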

Regards,
Gavin Towey

-Original Message-
From: Victor Subervi [mailto:victorsube...@gmail.com]
Sent: Thursday, October 15, 2009 12:05 PM
To: mysql@lists.mysql.com
Subject: Inserting an Image

Hi;
I have successfully inserted images, like yesterday, before into MySQL with
the following code:

  sql = 'update productsX set pic1="%s" where ID=2;' % pic1
  cursor.execute(sql)
where pic1 is simply an image uploaded through a form then sent over without
any alteration to another (python) script that uploads it.  Printing it out
looks like this (other variables included):

update productsX set Name=%s, Title=%s, Description=%s, Price=%s,
Bedrooms=%s, Bathrooms=%s, Conditions=%s, Acreage=%s, Construction=%s,
Location=%s, Estate=%s, Address=%s, Furnished=%s, pic1=%s, pic2=%s, pic3=%s,
pic4=%s, pic5=%s, pic6=%s where ID=%s;', ('name1', 'title1', 'descr1',
'1.1', '2', '1', 'New', '1.5', 'New', 'arbor', 'abor', 'abor', 'Furnished',
'ÿØÿà JFIF ÿÛC "" $(4,$&1' [... raw binary JPEG data omitted; the message is truncated here in the archive ...]

RE: insert random number into table

2009-10-16 Thread Gavin Towey
Don't try to give them a random number, instead use a table with a primary key 
that is AUTO_INCREMENT.  Then you just insert the record, and afterwards SELECT 
LAST_INSERT_ID(); to retrieve the id for the record created.

With random numbers, you're going to have more collisions when you add more 
records.
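A minimal sketch of that pattern (table and column names are illustrative):

```sql
CREATE TABLE registrant (
  id   INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  name VARCHAR(100)
);

INSERT INTO registrant (name) VALUES ('Ray');

-- Returns the id generated for the row just inserted; it is
-- per-connection, so it is safe under concurrent inserts.
SELECT LAST_INSERT_ID();
```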

Regards,
Gavin Towey

-Original Message-
From: Ray [mailto:r...@stilltech.net]
Sent: Friday, October 16, 2009 8:43 AM
To: mysql@lists.mysql.com
Subject: insert random number into table

Hello All,
I am trying to insert a unique random number* with each row when inserting
into a table. I can do it in the application layer (php) but just wondering if
there is an easier way in the database layer.
Thanks
Ray

* for example, when registering a person for a contest, I want to give them a
unique, random identifier.







RE: Mysql Upgrade from version 4 to 5

2009-10-21 Thread Gavin Towey
See:
http://dev.mysql.com/doc/refman/5.0/en/upgrade.html
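The dump-and-reload route asked about below is, in outline (hosts, paths, and credentials illustrative):

```shell
# On the 4.x server: dump everything to a file
mysqldump -u root -p --all-databases > dump_v4.sql

# On the 5.x server: reload, then rebuild the system tables
mysql -u root -p < dump_v4.sql
mysql_upgrade -u root -p
```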


-Original Message-
From: Tharanga Abeyseela [mailto:tharanga.abeyse...@gmail.com]
Sent: Wednesday, October 21, 2009 2:34 PM
To: mysql@lists.mysql.com
Subject: Mysql Upgrade from version 4 to 5

Hi Guys,
I am going to upgrade MySQL version 4 to version 5.x on Red Hat ES4. Is
there any particular way to do that? If I take a mysqldump of the database
and simply restore it onto version 5, will that work? Are there any differences
in syntax/db structure etc. between those 2 versions?
Thanks in advance,
Tha





RE: Help! Can't get my MySQL service started!

2009-10-22 Thread Gavin Towey
Do you have a program called mysql_install_db?  It looks like you need that to 
create the initial mysql system database in your data directory.
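On Unix-like installs the bootstrap Gavin mentions is a one-liner; a sketch with illustrative paths (Windows packages normally ship the system tables pre-built in the data directory instead):

```shell
# Create the initial mysql system database in the data directory
mysql_install_db --user=mysql --datadir=/var/lib/mysql
```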

Regards,
Gavin Towey

-Original Message-
From: Matthew Laurence [mailto:mattlaure...@gmail.com]
Sent: Thursday, October 22, 2009 10:24 AM
To: mysql@lists.mysql.com
Subject: Help! Can't get my MySQL service started!

I'm trying to configure and start a MySQL 5.1 service for the first time,
and I'm having trouble. The MySQL Server 5.1 system was installed on a
separate drive from the system drive (if that's an issue). My GUI tools are
on the C: drive, and the server installation is on the E: drive.


Anyway, I'm using mysql administrator to try and create, configure and start
a new MySQL database. I've never done this before, and I'm having trouble
finding any clear guidance on how to do it online. I think I'm close, but
here is the log of my last three attempts to start my new service:


SERVICE SETTINGS:

Display Name: MySQL

Service Description: MySQLESAI


CONFIGURATION FILE:

Config filename: E:\Program Files\MySQL\MySQL Server 5.1\my-large.ini

 (I chose this one at random to have something to start with)

Section Name: mysqld


PATH TO BINARY: E:\Program Files\MySQL\MySQL Server 5.1\bin\mysqld


This all seems pretty clear and ok... so here we go:


Attempt 1:

Trying to start the server ...

Server could not be started.


Can't create test file E:\Program Files\MySQL\MySQL Server
5.1\data\mail.lower-test


Can't create test file E:\Program Files\MySQL\MySQL Server
5.1\data\mail.lower-test


Aborting


E:\Program Files\MySQL\MySQL Server 5.1\bin\mysqld: Shutdown complete



I then created a "data" folder in that directory, and tried to start it
again:



Attempt 2:

Trying to start the server ...

Server could not be started.


Plugin 'FEDERATED' is disabled.


Can't open the mysql.plugin table. Please run mysql_upgrade to create it.



I ran mysql_upgrade, which seemed to work (it gave me no indication there were
any problems). Then I tried again to start through Administrator:



Attempt 3:

Trying to start the server ...

Server could not be started.


Plugin 'FEDERATED' is disabled.


Can't open the mysql.plugin table. Please run mysql_upgrade to create it.


Recovering after a crash using mysql-bin


Starting crash recovery...


Crash recovery finished.


Fatal error: Can't open and lock privilege tables: Table 'mysql.host'
doesn't exist




So this sounds bad. I don't know if this has anything to do with it, but I
followed the advice at this site:


http://dev.mysql.com/doc/refman/5.1/en/windows-start-service.html


and added the MySQL bin path (E:\Program Files\MySQL\MySQL Server 5.1\bin)
to my system PATH... any issues there?


Any help or guidance would be very very helpful - I need to get this thing
running ASAP! Thank you so much in advance - please send any responses to
mattlaure...@gmail.com

--
Matt
--
Matt Laurence
mattlaure...@gmail.com
http://www.mattlaurence.com/

Senior Interactive Art Director
http://www.linkedin.com/in/mattlaurence

Bassist, Publicist, Humorist
www.secretagentmen.com

Pix: www.flickr.com/mattlaurence
Follow me at: http://twitter.com/Mattynabib
DJ Nabib: http://blip.fm/Mattynabib





RE: trigger

2009-11-04 Thread Gavin Towey
1. Triggers must have FOR EACH ROW -- it's described in the manual: 
http://dev.mysql.com/doc/refman/5.0/en/create-trigger.html

So the correct syntax would be:
CREATE TRIGGER greylist_ins AFTER INSERT on greylist FOR EACH ROW delete from 
greylist where first_seen < NOW()-60*60*24*5;

BEGIN/END and DELIMITER are not needed for single-statement triggers.

2. However you still can't do that.  You can't update the table used in the 
trigger.  What you really want is either a separate cron process, or a mysql 
event (if using 5.1)
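A sketch of the event-based purge suggested above (needs 5.1+ with the event scheduler enabled; treating first_seen as a unix timestamp is an assumption based on its int(11) type and the arithmetic in the original post):

```sql
SET GLOBAL event_scheduler = ON;

CREATE EVENT purge_greylist
  ON SCHEDULE EVERY 1 HOUR
  DO
    DELETE FROM greylist
    WHERE first_seen < UNIX_TIMESTAMP(NOW() - INTERVAL 5 DAY);
```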

Regards
Gavin Towey


-Original Message-
From: freedc@gmail.com [mailto:freedc@gmail.com] On Behalf Of Phil
Sent: Wednesday, November 04, 2009 11:42 AM
To: Mysql
Subject: Re: trigger

You are missing a BEGIN in the trigger

delimiter |

CREATE TRIGGER greylist AFTER INSERT on greylist
BEGIN
delete from greylist where first_seen < NOW()-60*60*24*5;
END;
|
delimiter ;

Phil


On Wed, Nov 4, 2009 at 2:28 PM, Stefan Onken  wrote:

> Hello,
>
> I am new to using triggers in mysql. I am using mysql 5.1.37  and would
> like to setup a trigger like:
>
> CREATE TRIGGER greylist AFTER INSERT on greylist
> delete from greylist where first_seen < NOW()-60*60*24*5;
> END;
>
> When typing this into mysql I am getting an error. Where is my mistake?
>
>
> mysql> show fields from greylist;
> +---+---+--+-+-+
> | Field | Type  | Null | Key | Default |
> +---+---+--+-+-+
> | id| int(11)   | NO   | PRI | NULL|
> | SenderIP  | varchar(15)   | NO   | MUL | NULL|
> | SenderAddress | varchar(1024) | NO   | MUL | NULL|
> | first_seen| int(11)   | NO   | | NULL|
> +---+---+--+-+-+
> 4 rows in set (0,00 sec)
>
> I would like to achieve that after every insert in the greylist table I am
> purging the oldest xx records.
>
> Stefan
>
>
>
> www.stonki.de : My, myself and I
> www.kbarcode.net : barcode solution for KDE
> www.krename.net : renamer for KDE
> www.proftpd.de : a FTP server...
>
>
>


--
Distributed Computing stats
http://stats.free-dc.org





RE: trigger

2009-11-04 Thread Gavin Towey
Oops, one more mistake:

NOW()-60*60*24*5 isn't the way to do date math.  It should be: NOW() - INTERVAL 
5 DAY

-Original Message-
From: Gavin Towey
Sent: Wednesday, November 04, 2009 2:33 PM
To: 'Phil'; Mysql; 'Stefan Onken'
Subject: RE: trigger

1. Triggers must have FOR EACH ROW -- it's described in the manual: 
http://dev.mysql.com/doc/refman/5.0/en/create-trigger.html

So the correct syntax would be:
CREATE TRIGGER greylist_ins AFTER INSERT on greylist FOR EACH ROW delete from 
greylist where first_seen < NOW()-60*60*24*5;

BEGIN/END and DELIMITER are not needed for single-statement triggers.

2. However you still can't do that.  You can't update the table used in the 
trigger.  What you really want is either a separate cron process, or a mysql 
event (if using 5.1)

Regards
Gavin Towey


-Original Message-
From: freedc@gmail.com [mailto:freedc@gmail.com] On Behalf Of Phil
Sent: Wednesday, November 04, 2009 11:42 AM
To: Mysql
Subject: Re: trigger

You are missing a BEGIN in the trigger

delimiter |

CREATE TRIGGER greylist AFTER INSERT on greylist
BEGIN
delete from greylist where first_seen < NOW()-60*60*24*5;
END;
|
delimiter ;

Phil


On Wed, Nov 4, 2009 at 2:28 PM, Stefan Onken  wrote:

> Hello,
>
> I am new to using triggers in mysql. I am using mysql 5.1.37  and would
> like to setup a trigger like:
>
> CREATE TRIGGER greylist AFTER INSERT on greylist
> delete from greylist where first_seen < NOW()-60*60*24*5;
> END;
>
> When typing this into mysql I am getting an error. Where is my mistake?
>
>
> mysql> show fields from greylist;
> +---+---+--+-+-+
> | Field | Type  | Null | Key | Default |
> +---+---+--+-+-+
> | id| int(11)   | NO   | PRI | NULL|
> | SenderIP  | varchar(15)   | NO   | MUL | NULL|
> | SenderAddress | varchar(1024) | NO   | MUL | NULL|
> | first_seen| int(11)   | NO   | | NULL|
> +---+---+--+-+-+
> 4 rows in set (0,00 sec)
>
> I would like to achieve that after every insert in the greylist table I am
> purging the oldest xx records.
>
> Stefan
>
>
>
> www.stonki.de : My, myself and I
> www.kbarcode.net : barcode solution for KDE
> www.krename.net : renamer for KDE
> www.proftpd.de : a FTP server...
>
>
>


--
Distributed Computing stats
http://stats.free-dc.org





RE: cannot find my.cnf file

2009-11-12 Thread Gavin Towey
Also note that mysql doesn't need a my.cnf file and will happily run with 
default values.  It's possible that there is none and you'll have to create it.

To see where your mysqld is configured to check for the config file do:
mysql --verbose --help | grep -C3 my.cnf

This will give you a list of paths it checks in order.
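Once a location from that list is chosen, the line the original poster is after (logging every query) looks like this in 5.1 syntax; a sketch, log path illustrative:

```ini
[mysqld]
general_log      = 1
general_log_file = /var/log/mysql/query.log
```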

Regards,
Gavin Towey

-Original Message-
From: John Daisley [mailto:john.dais...@butterflysystems.co.uk]
Sent: Thursday, November 12, 2009 10:30 AM
To: Sydney Puente
Cc: mysql@lists.mysql.com
Subject: Re: cannot find my.cnf file

should be in

/etc/my.cnf

or try the following at the command line

locate my.cnf

That should give you the location


On Thu, 2009-11-12 at 18:10 +, Sydney Puente wrote:
> Hello,
> I want to log all sql queries made against a mysql db.
> Googled and found I should add a line to my.cnf.
>
> However I cannot find a my.cnf file
> [r...@radium init.d]# ps -ef | grep mysql
> root 13614 1  0 Sep24 ?00:00:00 /bin/sh /usr/bin/mysqld_safe 
> --datadir=/var/lib/mysql --pid-file=/var/lib/mysql/object01.pid
> mysql13669 13614  0 Sep24 ?00:21:40 /usr/sbin/mysqld --basedir=/ 
> --datadir=/var/lib/mysql --user=mysql --log-error=/var/lib/mysql/object01.err 
> --pid-file=/var/lib/mysql/object01.pid
> root 23050 22746  0 19:05 pts/000:00:00 grep mysql
> [r...@radium init.d]# locate cnf
> /usr/share/doc/MySQL-server-community-5.1.39/my-huge.cnf
> /usr/share/doc/MySQL-server-community-5.1.39/my-innodb-heavy-4G.cnf
> /usr/share/doc/MySQL-server-community-5.1.39/my-large.cnf
> /usr/share/doc/MySQL-server-community-5.1.39/my-medium.cnf
> /usr/share/doc/MySQL-server-community-5.1.39/my-small.cnf
> /usr/share/man/man8/cnfsheadconf.8.gz
> /usr/share/man/man8/cnfsstat.8.gz
> /usr/share/ssl/openssl.cnf
> /usr/share/mysql/my-large.cnf
> /usr/share/mysql/my-huge.cnf
> /usr/share/mysql/my-innodb-heavy-4G.cnf
> /usr/share/mysql/my-medium.cnf
> /usr/share/mysql/my-small.cnf
> Any ideas?
> I might add i did not install mysql and I did not start it and the guy who 
> did is in holiday!
>
> TIA
>
> Syd
>
>
>
>
>








RE: cannot find my.cnf file

2009-11-13 Thread Gavin Towey
Did you remove the my.cnf file and then run /etc/init.d/mysql stop?  The my.cnf 
probably had non-default paths for the pid file, so if you remove the config 
file, now the startup script is looking in the wrong location.


Also, for your password issue, please show us the exact command you're using to 
try to log in, and the exact error message you get.

Regards
Gavin Towey

-Original Message-
From: Sydney Puente [mailto:sydneypue...@yahoo.com]
Sent: Friday, November 13, 2009 5:31 AM
To: mysql@lists.mysql.com
Subject: Re: cannot find my.cnf file

Yes I enter the password manually into the remote mysql client (actually 
Oracle's SQL developer) when I login.

I thought I would restart mysql with the /etc/init.d/mysql script and go back 
to the original "default" settings without any my.cnf present. Just to check it 
was some setting in my.cnf that caused the log in problem.

# ./mysql status
MySQL is running but PID file could not be found   [FAILED]
# ./mysql stop
MySQL manager or server PID file could not be found!   [FAILED]

In fact I cannot find a pid file anywhere on the box.
# ps -ef | grep mysql
root  6517 1  0 10:10 pts/000:00:00 /bin/sh /usr/bin/mysqld_safe 
--datadir=/var/lib/mysql --pid-file=/var/lib/mysql/radium01.pid
mysql 6623  6517  0 10:10 pts/000:00:24 /usr/sbin/mysqld --basedir=/ 
--datadir=/var/lib/mysql --user=mysql --log-error=/var/lib/mysql/object01.err 
--pid-file=/var/lib/mysql/object01.pid --socket=/var/lib/mysql/mysql.sock 
--port=3306
# locate pid | grep mysql
/usr/share/man/man1/mysql_waitpid.1.gz
/usr/bin/mysql_waitpid

I have no idea why a pid file would be missing!

any ideas

Syd








RE: Selecting data from multiple tables

2009-11-15 Thread Gavin Towey
These table names are extraordinarily confusing, especially since your schema 
is de-normalized.  One of these tables should have (user_id INT UNSIGNED NOT NULL 
AUTO_INCREMENT PRIMARY KEY, username VARCHAR(100)); all the rest should 
be using user_id.

Anyway, to answer your first question:

select * from Table_1 left join Table_2 using (photo_uid) where
Table_1.username != 'dopey' and Table_2.username != 'dopey';

You need to move the conditions on Table_2 into the join clause:

select * from Table_1 left join Table_2 on Table_1.photo_uid = Table_2.photo_uid 
AND Table_2.username != 'dopey' where
Table_1.username != 'dopey';

Regards,
Gavin Towey

-Original Message-
From: Ashley M. Kirchner [mailto:kira...@gmail.com]
Sent: Sunday, November 15, 2009 4:38 AM
To: mysql@lists.mysql.com
Subject: Selecting data from multiple tables

Hi folks,

I'm trying to, possibly do the impossible here.  I have to select data from
4 different tables to come up with the right information and I'm having one
heck of time trying to figure it out.  This is going to be a long email ...

Table_1:
+---+--+--+-+-+---+
| Field | Type | Null | Key | Default | Extra |
+---+--+--+-+-+---+
| photo_uid | int(7) unsigned zerofill | NO   | PRI | NULL|   |
| username  | varchar(100) | NO   | | NULL|   |
| votes | int(5)   | YES  | | 0   |   |
+---+--+--+-+-+---+

Table_2:
+---+--+--+-+-+---+
| Field | Type | Null | Key | Default | Extra |
+---+--+--+-+-+---+
| photo_uid | int(7) unsigned zerofill | NO   | UNI | NULL|   |
| username  | varchar(100) | NO   | PRI | NULL|   |
| vote  | int(2)   | NO   | | 0   |   |
| voted_on  | datetime | NO   | | NULL|   |
+---+--+--+-+-+---+

Table_3:
++-+--+-+-++
| Field  | Type| Null | Key | Default | Extra  |
++-+--+-+-++
| username   | varchar(100)| NO   | UNI | NULL||
| info   | varchar(100)| NO   | | NULL||
++-+--+-+-++

Table_4:
+---+--+--+-+-++
| Field | Type | Null | Key | Default |
Extra  |
+---+--+--+-+-++
| photo_uid | int(7) unsigned zerofill | NO   | PRI | NULL|
auto_increment |
| username  | varchar(100) | NO   | | NULL
||
| photo | varchar(100) | NO   | | NULL
||
+---+--+--+-+-++

Data used for query:
username=foo


The goal here is several.
  1. query Table 3 for info where username=foo  (always 1 record)

  2. query Table 3 for username where info = (result of Q1 above) EXCLUDING
username=foo
 (results in 0 to many records)

  3. query Table 1 for photo_uid where username= (all records in query from
Q2 above)

  4. query Table 4 for photo_uid from Q2 above and EXCLUDING username=foo


Now, I started fiddling with LEFT JOIN and came up with this:

select * from Table_1 left join Table_2 using (photo_uid) where
Table_1.username != 'dopey';
+---+--+---+--+--+-+
| photo_uid | username | votes | username | vote | voted_on|
+---+--+---+--+--+-+
|   011 | bashful  | 0 | NULL | NULL | NULL|
|   010 | bashful  | 0 | NULL | NULL | NULL|
|   005 | bashful  | 0 | dopey|1 | 2009-11-15 03:56:30 |
|   003 | bashful  | 0 | NULL | NULL | NULL|
|   001 | bashful  | 0 | NULL | NULL | NULL|
|   014 | grumpy   | 0 | bashful  |1 | 2009-11-15 03:48:55 |
+---+--+---+--+--+-+

Close, I need to also set Table_2.username != 'dopey', however the moment I
do that, I get exactly 1 record returned:

+---+--+---+--+--+-+
| photo_uid | username | votes | username | vote | voted_on|
+---+--+---+--+--+-+

RE: DELETE DATA FROM TABLE

2009-11-19 Thread Gavin Towey
Assuming you're using either myisam tables, or innodb with file-per-table 
option turned on, then dropping a whole partition at a time will allow you to 
reclaim disk space.

If you're using innodb with a single tablespace currently, then unfortunately, 
you would have to export all your data, shutdown mysql, change your my.cnf & 
delete the tablespace & ib_log files, then restart and re-import all your data. 
 If you need to do this, you should probably seek a bit more information 
from this list or other sources.

Regards,
Gavin Towey

-Original Message-
From: Krishna Chandra Prajapati [mailto:prajapat...@gmail.com]
Sent: Thursday, November 19, 2009 12:13 AM
To: MySQL
Subject: DELETE DATA FROM TABLE

Hi Experts,

I have a crm table where 12 million records are inserted per day. We are running
report queries on this table and using partitioning features for faster
results. We have to maintain 45 days of data, i.e. 540 million records. As per
my calculation, 540 million records will use 1.8 TB of disk space. Total disk space
available is 2.3TB.

Deleting data doesn't free up the disk space, so I was thinking of rotating
the table, but I don't have enough disk space for that.

Any Idea, how this task can be performed.

Any idea or suggestion is highly appreciated.

Thanks & Regards,
Krishna Ch. Prajapati





RE: DELETE DATA FROM TABLE

2009-11-19 Thread Gavin Towey
Hi Krishna,

Drop partition should be very quick - much faster than doing a DELETE on the 
same amount of data.  Internally, it will be the same as doing a drop table for 
that partition.
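As a sketch of what that looks like (table, column, and partition names here are hypothetical, not from the original post):

```sql
-- Hypothetical table partitioned by day; in MySQL 5.1 the partitioning
-- column must be part of every unique key, hence the composite PK.
CREATE TABLE crm_events (
  id      BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
  created DATETIME NOT NULL,
  payload VARCHAR(255),
  PRIMARY KEY (id, created)
) ENGINE=InnoDB
PARTITION BY RANGE (TO_DAYS(created)) (
  PARTITION p20091001 VALUES LESS THAN (TO_DAYS('2009-10-02')),
  PARTITION p20091002 VALUES LESS THAN (TO_DAYS('2009-10-03')),
  PARTITION pmax      VALUES LESS THAN MAXVALUE
);

-- Dropping the oldest day is internally a DROP TABLE on that partition:
ALTER TABLE crm_events DROP PARTITION p20091001;
```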

Regards,
Gavin Towey

From: Krishna Chandra Prajapati [mailto:prajapat...@gmail.com]
Sent: Thursday, November 19, 2009 1:15 AM
To: Gavin Towey
Cc: MySQL
Subject: Re: DELETE DATA FROM TABLE

Hi Gavin,

I am using InnoDB with file-per-table. I agree with you that dropping a
partition will reclaim disk space:

alter table  drop partition 

But my concern is that "alter table  drop partition " on a very big table
might take a lot of time. (Although I haven't tested it.)

Thanks for the immediate response.

Thanks & Regards,
Krishna Ch. Prajapati
On Thu, Nov 19, 2009 at 2:22 PM, Gavin Towey <gto...@ffn.com> wrote:
Assuming you're using either MyISAM tables, or InnoDB with the file-per-table
option turned on, then dropping a whole partition at a time will allow you to
reclaim disk space.

If you're currently using InnoDB with a single shared tablespace, then
unfortunately you would have to export all your data, shut down MySQL, change
your my.cnf, delete the tablespace & ib_log files, then restart and re-import
all your data.  If you need to do this, you should probably seek a bit more
information from this list or other sources.

Regards,
Gavin Towey






RE: Strange problem with mysqldump / automysqlbackup (ERROR 1300)

2009-11-20 Thread Gavin Towey
Have you tried dumping that table manually using mysqldump on the command line 
to confirm it's not an issue with automysqlbackup?
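For example (the database name below is hypothetical; the table name is the one from the report; this of course needs the live server):

```shell
# Dump just the suspect table with mysqldump directly, bypassing
# automysqlbackup, then inspect the dump around the offending value.
# (The line number will differ from the full dump's line 426.)
mysqldump -u root -p mydb bandwidth > bandwidth_manual.sql
grep '32562206' bandwidth_manual.sql
```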

Regards,
Gavin Towey

-Original Message-
From: René Fournier [mailto:m...@renefournier.com]
Sent: Friday, November 20, 2009 8:31 AM
To: mysql
Subject: Strange problem with mysqldump / automysqlbackup (ERROR 1300)

I've been using automysqlbackup 2.5 for years on a particular database, and
it's always performed great. Recently, however, I've been encountering
problems when trying to re-import one of its dumped SQL files. (Not sure if it
matters, but the database in question is large and growing -- about 10GB. The
other databases automysqlbackup backs up are fine.)

Basically on the import, MySQL fails and returns an error indicating a problem 
with the dump file:

mysql -u root -p < dump_file.sql  (~10GB)
Enter password:
ERROR 1300 (HY000) at line 426: Invalid utf8 character string: '?03422'

Sure enough, I look at the line in dump_file.sql, which should contain two 
unsigned ints, and two unsigned small ints:

[...],(32562206,1228?03422,1641,135),[...]

And yup, there's a question mark in the middle of the second unsigned int, for 
some strange reason. Not in any of the other rows in that statement. When I 
look at the existing database from which the dump file was made, that row is 
fine:

mysql> SELECT * FROM bandwidth WHERE id = 32562206;
+--++---+---+
| id   | time_sec   | device_id | bytes |
+--++---+---+
| 32562206 | 1228803422 |  1641 |   135 |
+--++---+---+
1 row in set (0.00 sec)


So... it appears that either mysqldump or automysqlbackup is having a problem
dumping a true copy of the database.

Anyone else run into this sort of thing? Any suggestions? Thanks.

...Rene




RE: Creating Table Through Union

2009-11-22 Thread Gavin Towey
The form would be like:

CREATE TABLE products
SELECT b0basics, b0fieldValues, s0prescriptions,
s0prescriptions0doctors, s0prescriptions0patient, pics FROM table1
UNION
SELECT b0basics, b0fieldValues, s0prescriptions,
s0prescriptions0doctors, s0prescriptions0patient, pics FROM table2
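One caveat worth adding: UNION removes duplicate rows. If the two source tables may hold identical rows and both copies should survive, use UNION ALL (same hypothetical column names as above):

```sql
CREATE TABLE products AS
  SELECT b0basics, b0fieldValues, s0prescriptions,
         s0prescriptions0doctors, s0prescriptions0patient, pics
    FROM table1
  UNION ALL
  SELECT b0basics, b0fieldValues, s0prescriptions,
         s0prescriptions0doctors, s0prescriptions0patient, pics
    FROM table2;
```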

Regards,
Gavin Towey

-Original Message-
From: Victor Subervi [mailto:victorsube...@gmail.com]
Sent: Sunday, November 22, 2009 10:56 AM
To: mysql@lists.mysql.com
Subject: Creating Table Through Union

Hi;
I would like to create a table out of merging the fields in other,
previously created tables. I have the following syntax which doesn't work:

create table products union (b0basics, b0fieldValues, s0prescriptions,
s0prescriptions0doctors, s0prescriptions0patient, pics);

Please advise.
TIA,
Victor




RE: Here's an Idea for Re-Syncing Master and Slave During Production Hours without Interrupting Users (Much)

2009-12-04 Thread Gavin Towey
I think he's trying to say that this method wouldn't work for innodb, unless 
you copied files from an LVM snapshot, or something similar.

I would say that it's very important to know why data is getting out of sync 
between your master and slave.  Fixing those root causes would eliminate the 
need for this.  There are cases where non-deterministic queries will produce 
different results, but that's what row based replication is supposed to solve =)

There are ways to resync data that don't involve all this as well:  Maatkit has 
some tools that compare data between servers, and can fix them with queries.  
No stopping the slave or locking the master necessary.  I've used them in 
production with good results.

Regards,
Gavin Towey



-Original Message-
From: Robinson, Eric [mailto:eric.robin...@psmnv.com]
Sent: Friday, December 04, 2009 9:00 AM
To: Tom Worster; mysql@lists.mysql.com
Subject: RE: Here's an Idea for Re-Syncing Master and Slave During Production 
Hours without Interrupting Users (Much)

> (1) innodb?

It's an off-the-shelf application that uses MyISAM tables. It is
possible to convert to innodb, but I have not been sold on innodb in
terms of its  performance characteristics for this particular
application. Maybe I've been reading the wrong stuff. Do you have
general thoughts on the differences with respect to performance?

> (2) why delete slave logs when you can
> restart the slave with --skip-slave and
> then use CHANGE MASTER TO?

Well... I guess mainly because I didn't know about that option! I
thought I needed to "fake out" mysql on this, but it sounds like I can
just do 'flush tables with read lock;reset master;' on the master and
'change master to...;' on the slave. So cool. Thanks for the input!
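As a sketch of that sequence (the binlog file name and position are illustrative -- read the real values from SHOW MASTER STATUS; this variant avoids RESET MASTER entirely):

```sql
-- On the master: quiesce writes and note the binlog coordinates
FLUSH TABLES WITH READ LOCK;
SHOW MASTER STATUS;          -- e.g. File: mysql-bin.000042, Position: 107
-- ... copy the data files to the slave while the lock is held ...
UNLOCK TABLES;

-- On the slave: point replication at those coordinates
CHANGE MASTER TO
  MASTER_LOG_FILE = 'mysql-bin.000042',
  MASTER_LOG_POS  = 107;
START SLAVE;
```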

--
Eric Robinson


Disclaimer - December 4, 2009
This email and any files transmitted with it are confidential and intended 
solely for Tom Worster,my...@lists.mysql.com. If you are not the named 
addressee you should not disseminate, distribute, copy or alter this email. Any 
views or opinions presented in this email are solely those of the author and 
might not represent those of . Warning: Although  has taken reasonable 
precautions to ensure no viruses are present in this email, the company cannot 
accept responsibility for any loss or damage arising from the use of this email 
or attachments.
This disclaimer was added by Policy Patrol: http://www.policypatrol.com/



This message contains confidential information and is intended only for the 
individual named.  If you are not the named addressee, you are notified that 
reviewing, disseminating, disclosing, copying or distributing this e-mail is 
strictly prohibited.  Please notify the sender immediately by e-mail if you 
have received this e-mail by mistake and delete this e-mail from your system. 
E-mail transmission cannot be guaranteed to be secure or error-free as 
information could be intercepted, corrupted, lost, destroyed, arrive late or 
incomplete, or contain viruses. The sender therefore does not accept liability 
for any loss or damage caused by viruses or errors or omissions in the contents 
of this message, which arise as a result of e-mail transmission. [FriendFinder 
Networks, Inc., 220 Humbolt court, Sunnyvale, CA 94089, USA, FriendFinder.com




RE: Here's an Idea for Re-Syncing Master and Slave During Production Hours without Interrupting Users (Much)

2009-12-04 Thread Gavin Towey
> I would never have any confidence that the replication is solid
> enough to use the slave server for backup purposes.

I agree completely there.  That's the other reason I like filesystem
snapshots: they let you take a backup from the master relatively painlessly.

-Original Message-
From: Robinson, Eric [mailto:eric.robin...@psmnv.com]
Sent: Friday, December 04, 2009 1:24 PM
To: Gavin Towey; Tom Worster; mysql@lists.mysql.com
Subject: RE: Here's an Idea for Re-Syncing Master and Slave During Production 
Hours without Interrupting Users (Much)

> I would say that it's very important to know why data
> is getting out of sync between your master and slave.

Ultimately, I agree. But since it's a canned application, getting to
that point might be hard, and once it is resolved, new issues might
arise. I would never have any confidence that the replication is solid
enough to use the slave server for backup purposes. (Which, by the way,
is the real reason I'm doing this. In the middle of the night, when
there are few users on the system, I want to backup the slave, but first
I want to make sure I have a 100% reliable copy of the data.)

> There are ways to resync data that don't involve all
> this as well:  Maatkit has some tools

I've looked with great interest at Maatkit, but their tools are replete
with warnings about dangers, bugs, and crashes. They certainly do not
inspire confidence.

--
Eric Robinson







RE: login problem from django script, using python2.5/MySQLdb/connections.py

2009-12-09 Thread Gavin Towey
Access denied means you're using an incorrect username and password
combination, or (as with error 1044 here) the user lacks privileges on that
database.  Test your credentials using the mysql CLI.  You can log in to mysql
as root to make grants as necessary, or supply the correct user/pass from your
script.

Regards,
Gavin Towey

-Original Message-
From: John Griessen [mailto:j...@industromatic.com]
Sent: Wednesday, December 09, 2009 10:16 AM
To: mysql@lists.mysql.com
Subject: login problem from django script, using 
python2.5/MySQLdb/connections.py

If I can log in from a shell, what could stop a script from logging in?

I'm following a newbie tutorial for django, a web content mgt. system.

The following user and password are good if I use them from the same shell the
script launches from.

Here's the error message from a django script using a python module about mysql:


File "/usr/lib/pymodules/python2.5/MySQLdb/connections.py", line 170, in 
__init__
 super(Connection, self).__init__(*args, **kwargs2)
_mysql_exceptions.OperationalError: (1044, "Access denied for user 
'django_editor'@'%' to database 'django_server'")



Any ideas?

thanks,

John






RE: stored procedure and random table name -> temp table, merge, prepared statement

2009-12-10 Thread Gavin Towey
Creating a temporary merge table works fine for me on 5.0.

Your table isn't InnoDB, is it?  That will fail with an error like the one
you're getting.
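If the table is InnoDB, the fallback is the prepared-statement route the original poster was trying to avoid. A minimal sketch (tbl_name stands for the procedure's parameter; some_table and the column names come from the example below):

```sql
-- Build each statement as a string, then prepare and execute it.
SET @sql = CONCAT('INSERT INTO some_table (value) ',
                  'SELECT something FROM ', tbl_name);
PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;
```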

Regards,
Gavin Towey

-Original Message-
From: Dante Lorenso [mailto:da...@lorenso.com]
Sent: Thursday, December 10, 2009 3:20 PM
To: mysql@lists.mysql.com
Subject: stored procedure and random table name -> temp table, merge, prepared 
statement

All,

I have a stored procedure that I'm writing where I need to run a lot of
queries against a particular table.  The name of the table will be a
parameter to the stored procedure ... example:

CALL normalize_data('name_of_table_here');

Since I want to run queries against this table, I don't want to have to use
prepared statements for all the queries because treating my queries as
strings gets ugly.  Ideally I want to use the table name as a variable in
the stored procedure, but as a hack around that, I thought about trying this
trick instead: give the table name an alias.

-- remove our temporary table if it already exists
DROP TABLE IF EXISTS dante;
--
-- clone the table structure
CREATE TEMPORARY TABLE dante LIKE name_of_table_here;
--
-- change the temporary table to a merge table which references the named
table
ALTER TABLE dante ENGINE=MERGE UNION(name_of_table_here);

Once these 3 statements were run, the "merge" table would essentially just
be a view on the underlying table and all my following queries could
reference the "dante" table and not the strangely named random table.

Note, that queries above that use "name_of_table_here" would need to be
prepared and executed using the string concat approach.

The problem I am having is that this strategy is not working.  After running
the statements above, I check my new "dante" table and it doesn't work:

DESC dante;
Error Code : 1168
Unable to open underlying table which is differently defined or of
non-MyISAM type or doesn't exist

So, how can I accomplish what I am trying to do?  I just want to alias a
random table to a fixed name (preferably as a temporary table name so that
it won't conflict with other connections running similar code
simultaneously) so that I can avoid having to use prepared statements
through my whole stored procedure.  I may potentially perform 20-30 queries
to the table which is passed in and want to keep this code looking clean.

I could avoid this problem altogether if I can assign an alias to a table:

ALIAS dante TO name_of_table_here;

or use a variable table name in a query inside a stored procedure:

SET @table_name = 'name_of_table_here';

INSERT INTO some_table (value)
SELECT something
FROM @table_name
WHERE ...;

I am using MySQL 5.1.36.  Any pointers?

-- Dante



RE: errno: 13

2009-12-11 Thread Gavin Towey
The MySQL daemon runs as the 'mysql' user:

chown -R mysql:mysql /storage/mysql/data

Regards,
Gavin Towey

-Original Message-
From: Carl [mailto:c...@etrak-plus.com]
Sent: Friday, December 11, 2009 11:55 AM
To: mysql@lists.mysql.com
Subject: errno: 13

Fresh install of 5.1.41 on a brand new (Slackware 13 - 64 bit)  machine.  
Installed from tar.  Directory structure is: basedir=/usr/local/mysql and 
datadir=/storage/mysql/data.  I am currently running as root.  The permissions 
on the directories in /storage/mysql/data are 766 (I have double and triple 
checked this.)  I have created the mysql data tables by running 
mysql_install_db... it seemed to complete without error:

r...@mysql3:/usr/local/mysql/scripts# ./mysql_install_db 
--datadir=/storage/mysql/data --basedir=/usr/local/mysql -uroot
Installing MySQL system tables...
OK
Filling help tables...
OK

To start mysqld at boot time you have to copy
support-files/mysql.server to the right place for your system

PLEASE REMEMBER TO SET A PASSWORD FOR THE MySQL root USER !
To do so, start the server, then issue the following commands:

/usr/local/mysql/bin/mysqladmin -u root password 'new-password'
/usr/local/mysql/bin/mysqladmin -u root -h 10.10.10.31 password 'new-password'

Alternatively you can run:
/usr/local/mysql/bin/mysql_secure_installation

which will also give you the option of removing the test
databases and anonymous user created by default.  This is
strongly recommended for production servers.

See the manual for more instructions.

You can start the MySQL daemon with:
cd /usr/local/mysql ; /usr/local/mysql/bin/mysqld_safe &

You can test the MySQL daemon with mysql-test-run.pl
cd /usr/local/mysql/mysql-test ; perl mysql-test-run.pl

Please report any problems with the /usr/local/mysql/scripts/mysqlbug script!

The latest information about MySQL is available at http://www.mysql.com/
Support MySQL by buying support/licenses from http://shop.mysql.com/

I then ran

/usr/local/mysql/bin/mysqld_safe -uroot &

which produced the following error report:

091211 13:19:18 mysqld_safe Starting mysqld daemon with databases from 
/storage/mysql/data
091211 13:19:18 [Warning] Ignoring user change to 'root' because the user was 
set to 'mysql' earlier on the command line

091211 13:19:18 [Note] Plugin 'FEDERATED' is disabled.
091211 13:19:18 [Warning] /usr/local/mysql/bin/mysqld: ignoring option 
'--innodb-use-sys-malloc' due to invalid value 'ON'
^G/usr/local/mysql/bin/mysqld: Can't find file: './mysql/plugin.frm' (errno: 13)
091211 13:19:18 [ERROR] Can't open the mysql.plugin table. Please run 
mysql_upgrade to create it.
InnoDB: The InnoDB memory heap is disabled
InnoDB: Mutexes and rw_locks use GCC atomic builtins
091211 13:19:19  InnoDB: Operating system error number 13 in a file operation.
InnoDB: The error means mysqld does not have the access rights to
InnoDB: the directory.
InnoDB: File name /storage/mysql/data/ibdata1
InnoDB: File operation call: 'create'.
InnoDB: Cannot continue operation.
091211 13:19:19 mysqld_safe mysqld from pid file /storage/mysql/data/mysql3.pid 
ended
~

The relevant portions of the /storage/mysql/data/mysql directory are (for the 
first error):

-rw-rw 1 root root  0 2009-12-11 13:17 plugin.MYD
-rw-rw 1 root root   1024 2009-12-11 13:17 plugin.MYI
-rw-rw 1 root root   8586 2009-12-11 13:17 plugin.frm
-

It appears to me that mysqld is looking for the plugin.frm in the 
/usr/local/mysql/mysql directory which doesn't exist because my data directory 
is /storage/mysql/data.

The second error,

InnoDB: File name /storage/mysql/data/ibdata1
InnoDB: File operation call: 'create'

is probably the show stopper.  The relevant portions of my.cnf are:

# The MySQL server
[mysqld]
port= 3306
socket  = /var/run/mysql/mysql.sock
skip-locking
key_buffer = 384M
max_allowed_packet = 20M
max_sp_recursion_depth = 100
table_cache = 512
sort_buffer_size = 2M
read_buffer_size = 2M
myisam_sort_buffer_size = 64M
thread_cache = 8
query_cache_size = 32M
# Try number of CPU's*2 for thread_concurrency
thread_concurrency = 8
basedir=/usr/local/mysql
datadir=/storage/mysql/data
wait_timeout = 10800
max_connections = 600

and

# Uncomment the following if you are using InnoDB tables
innodb_data_home_dir = /storage/mysql/data
innodb_data_file_path = ibdata1:2000M;ibdata2:10M:autoextend
#innodb_log_group_home_dir = /var/lib/mysql/
#innodb_log_arch_dir = /var/lib/mysql/
ignore_builtin_innodb
plugin-load=innodb=ha_innodb.so;innodb_trx=ha_innodb.so;innodb_locks=ha_innodb.so;innodb_lock_waits=ha_innodb.so;innodb_cmp=ha_innodb.so;innodb_cmp_reset=ha_innodb.so;innodb_cmpmem=ha_innodb.so;innodb_cmpmem_reset=ha_innodb.so
#

Note: ha_innodb.so is in the 'plugins' directory.

This error makes no sense to me.

Can anyone kick me in the right direction?

Thanks,

Carl





RE: Are you serious? mySQL 5.0 does NOT have a RENAME DATABASE?

2009-12-11 Thread Gavin Towey
Don't forget triggers, stored routines, views, database/table specific user 
permissions, and replication/binlog options!

Regards,
Gavin Towey

-Original Message-
From: Saravanan [mailto:suzuki_b...@yahoo.com]
Sent: Friday, December 11, 2009 2:02 PM
To: MySql; Michael Dykman
Subject: Re: Are you serious? mySQL 5.0 does NOT have a RENAME DATABASE?

If you have only MyISAM tables, you can rename the folder of the database;
that works like a database rename. If you have InnoDB tables, you have to move
the tables one by one, because their details are stored in the InnoDB shared
tablespace; moving the folder won't work.

Thanks,
Saravanan

--- On Fri, 12/11/09, Michael Dykman  wrote:

From: Michael Dykman 
Subject: Re: Are you serious? mySQL 5.0 does NOT have a RENAME DATABASE?
To: "MySql" 
Date: Friday, December 11, 2009, 10:54 PM

No, not a loophole.  Just a plain-old management feature..  there is
nothing particularly hacky about it.. this is not trying to leverage
undocumented features: this has been a published part of the API for
at least a couple of years.

On the same file system, yes, it should be pretty damned fast.
Depending on how your data is stored, it might not be 'quite' as
simple as a unix 'mv' command.  If this is a production system, I
would recommend you do a dry run with a replicant/slave. No amount of
theorizing will tell as much as the experiment.

 - michael dykman

On Fri, Dec 11, 2009 at 4:40 PM, Daevid Vincent  wrote:
> Will this work in 5.0?
>
> If I'm reading this right, it seems like this is some kind of trick or
> loophole then, right? If it works and solves my dilemma, I'm fine with that,
> but I'm just curious.
>
> How fast is this? I mean, if I have an 80GB database, is it like a real
> unix 'mv' command where it simply changing pointers or is it a full on
> copy/rm? (Assume same filesystem/directory)
>
>> -Original Message-
>> From: Michael Dykman [mailto:mdyk...@gmail.com]
>> Sent: Friday, December 11, 2009 6:08 AM
>> To: MySql
>> Subject: Re: Are you serious? mySQL 5.0 does NOT have a
>> RENAME DATABASE?
>>
>> If you want to move the database atomically,  a RENAME TABLE statement
>> may have multiple clauses.
>>
>> RENAME TABLE
>>  olddb.foo to newdb.foo,
>>  olddb.bar to newdb.bar;
>>
>> Here,  I hot-swap a  new lookup table 'active.geo' into a live system
>> confident that, at any given point, some version of this table always
>> exists:
>>
>> RENAME TABLE
>>  active.geo to archive.geo,
>>  standby.geo to active.geo;
>>
>>  - michael dykman
>>
>>
>> On Fri, Dec 11, 2009 at 8:58 AM, Johan De Meersman
>>  wrote:
>> > On Fri, Dec 11, 2009 at 1:56 PM, Ken D'Ambrosio
>>  wrote:
>> >
>> >> > rename table oldschema.table to newschema.table;
>> >>
>> >> Just to be 100% clear -- I assume you have to first create
>> the destination
>> >> database, and then do this for all the tables in the
>> source database?
>> >>
>> >
>> > Yep. Easily scriptable, though :-)
>> >
>>
>>
>>
>> --
>>  - michael dykman
>>  - mdyk...@gmail.com
>>
>> "May you live every day of your life."
>> Jonathan Swift
>>
>> Larry's First Law of Language Redesign: Everyone wants the colon.
>>
>> --
>> MySQL General Mailing List
>> For list archives: http://lists.mysql.com/mysql
>> To unsubscribe:
>> http://lists.mysql.com/mysql?unsub=dae...@daevid.com
>>
>
>



--
 - michael dykman
 - mdyk...@gmail.com

"May you live every day of your life."
Jonathan Swift

Larry's First Law of Language Redesign: Everyone wants the colon.




RE: Optimization suggestions

2009-12-14 Thread Gavin Towey
id should probably be an AUTO_INCREMENT INT; if you still need a unique text
identifier, then I would make that a separate field.  Though mine isn't the
only view; there is much debate on natural vs. surrogate keys.

I would normalize "folderid" into a lookup in another table, and make folderid 
an INT value.
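A sketch of that folder lookup (table and column names are illustrative):

```sql
CREATE TABLE folders (
  folder_id INT UNSIGNED NOT NULL AUTO_INCREMENT,
  name      VARCHAR(64) NOT NULL,
  PRIMARY KEY (folder_id),
  UNIQUE KEY (name)
) ENGINE=MyISAM;

-- messages2009.folderid then becomes INT UNSIGNED and joins back
-- to folders whenever the folder name is needed:
SELECT m.id, f.name
  FROM messages2009 m
  JOIN folders f ON f.folder_id = m.folderid;
```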

Threadid is another field that would probably be better as an INT.

As for your indexes, they depend completely on what type of queries you're 
going to be running.  Once you know that, then you can test them using sample 
data and EXPLAIN.

http://dev.mysql.com/doc/refman/5.0/en/explain.html
http://dev.mysql.com/doc/refman/5.0/en/mysql-indexes.html

About normalization:
http://dev.mysql.com/tech-resources/articles/intro-to-normalization.html


Regards,
Gavin Towey


-Original Message-
From: Sudhir N [mailto:sudhir_nima...@yahoo.com]
Sent: Monday, December 14, 2009 10:31 AM
To: Mysql
Subject: Optimization suggestions

I have the following table structure; I have to use the MERGE storage engine.
Please have a look, and provide feedback if there's something wrong or if
there's room for optimization.


/*Table structure for table `messages2009` */

CREATE TABLE `messages2009` (
  `id` varchar(36) NOT NULL default '',
  `folderid` varchar(36) NOT NULL default '',
  `fromid` int(11) NOT NULL default '0',
  `fromtype` varchar(10) NOT NULL default '',
  `toid` int(11) NOT NULL default '0',
  `totype` varchar(10) NOT NULL default '',
  `subject` varchar(255) default NULL,
  `body` text,
  `readbyrecipient` tinyint(1) NOT NULL default '0',
  `deletedbyauthor` tinyint(1) NOT NULL default '0',
  `deletedbyrecipient` tinyint(1) NOT NULL default '0',
  `threadid` varchar(36) NOT NULL default '',
  `senttime` timestamp NOT NULL default CURRENT_TIMESTAMP,
  PRIMARY KEY  (`id`),
  KEY `folderid` (`folderid`),
  KEY `threadid` (`threadid`),
  KEY `inboxfolderindex` (`folderid`,`toid`,`totype`),
  KEY `sentitemsindex` (`fromid`,`fromtype`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;

/*Table structure for table `messages` */
/*Merge table definition that covers all message tables*/

CREATE TABLE `messages` (
  `id` varchar(36) NOT NULL default '',
  `folderid` varchar(36) NOT NULL default '',
  `fromid` int(11) NOT NULL default '0',
  `fromtype` varchar(10) NOT NULL default '',
  `toid` int(11) NOT NULL default '0',
  `totype` varchar(10) NOT NULL default '',
  `subject` varchar(255) default NULL,
  `body` text,
  `readbyrecipient` tinyint(1) NOT NULL default '0',
  `deletedbyauthor` tinyint(1) NOT NULL default '0',
  `deletedbyrecipient` tinyint(1) NOT NULL default '0',
  `threadid` varchar(36) NOT NULL default '',
  `senttime` timestamp NOT NULL default CURRENT_TIMESTAMP,
  PRIMARY KEY  (`id`),
  KEY `folderid` (`folderid`),
  KEY `threadid` (`threadid`),
  KEY `inboxfolderindex` (`folderid`,`toid`,`totype`),
  KEY `sentitemsindex` (`fromid`,`fromtype`)
) ENGINE=MRG_MyISAM DEFAULT CHARSET=latin1 INSERT_METHOD=LAST 
UNION=(`messages2009`);






Sudhir Nimavat, Senior software engineer.
Quick start global PVT LTD.
Baroda - 390007
Gujarat, India

Personally I'm always ready to learn, although I do not always like being taught



This message contains confidential information and is intended only for the 
individual named.  If you are not the named addressee, you are notified that 
reviewing, disseminating, disclosing, copying or distributing this e-mail is 
strictly prohibited.  Please notify the sender immediately by e-mail if you 
have received this e-mail by mistake and delete this e-mail from your system. 
E-mail transmission cannot be guaranteed to be secure or error-free as 
information could be intercepted, corrupted, lost, destroyed, arrive late or 
incomplete, or contain viruses. The sender therefore does not accept liability 
for any loss or damage caused by viruses or errors or omissions in the contents 
of this message, which arise as a result of e-mail transmission. [FriendFinder 
Networks, Inc., 220 Humbolt court, Sunnyvale, CA 94089, USA, FriendFinder.com


RE: Count records in join

2009-12-15 Thread Gavin Towey
Hi Miguel,

You'll need to use LEFT JOIN: it returns every row from the first table, with all 
columns from the second table NULL where there is no match.  Then you can pick out 
those unmatched rows in your WHERE clause.
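A sketch of that pattern, using the PROGS and EVENTS tables from the quoted message below:

```sql
-- Progs with their event counts; COUNT(e.id) is 0 where nothing matched.
SELECT p.id_prog, p.name, COUNT(e.id) AS num_events
FROM PROGS p
LEFT JOIN EVENTS e ON e.id_prog = p.id_prog
GROUP BY p.id_prog, p.name;

-- Or just the empty progs: the unmatched rows have e.id_prog IS NULL.
SELECT p.id_prog, p.name
FROM PROGS p
LEFT JOIN EVENTS e ON e.id_prog = p.id_prog
WHERE e.id_prog IS NULL;
```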

Regards,
Gavin Towey

-Original Message-
From: Miguel Vaz [mailto:pagong...@gmail.com]
Sent: Tuesday, December 15, 2009 10:43 AM
To: mysql@lists.mysql.com
Subject: Count records in join

Hi,

I am stuck with a supposedly simple query:

- I have two tables:

PROGS
id_prog
name

EVENTS
id
id_prog
name

How can I list all records from PROGS with a count of how many events each
has? I want to find the progs that are empty.

I remember something about using NULL, but I can't remember how. :-P

Thanks.

MV


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: Cannot created stored procedure (Using example from mysql manual) -- mysql 5.1.37 -- Ubuntu 9.10

2009-12-16 Thread Gavin Towey
You need to use

DELIMITER //

or some other symbol besides ; to change the client's end-of-statement delimiter.  
Otherwise the client ends the statement at the first ; inside the procedure body, 
before the CREATE PROCEDURE statement is complete.

This is described in the manual on that same page.
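For example, wrapping the procedure from the quoted message below:

```sql
DELIMITER //

CREATE DEFINER = 'walton'@'localhost' PROCEDURE account_count()
BEGIN
  -- The ; here no longer terminates the CREATE PROCEDURE statement.
  SELECT 'Number of accounts:', COUNT(*) FROM mysql.user;
END//

-- Restore the default delimiter afterwards.
DELIMITER ;
```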

Regards
Gavin Towey

-Original Message-
From: Walton Hoops [mailto:wal...@vyper.hopto.org]
Sent: Wednesday, December 16, 2009 10:46 AM
To: mysql@lists.mysql.com
Subject: Cannot created stored procedure (Using example from mysql manual) -- 
mysql 5.1.37 -- Ubuntu 9.10

Hi all.



I am running into a very frustrating problem trying to create a stored
procedure.

I had originally assumed I was using bad syntax, but even examples copied
and pasted

directly from the manual are giving the same error.





mysql> CREATE DEFINER = 'walton'@'localhost' PROCEDURE account_count()

-> BEGIN

-> SELECT 'Number of accounts:', COUNT(*) FROM mysql.user;

ERROR 1064 (42000): You have an error in your SQL syntax; check the manual
that corresponds to your MySQL server version for the right syntax to use
near '' at line 3

mysql>



This example can be found at:

http://dev.mysql.com/doc/refman/5.1/en/create-procedure.html

Google has failed me on this one.



Can anyone advise me as to what I need to do to troubleshoot this?  Also, if
it is an error in the documentation, how would I go about notifying someone so
it can be corrected?



Any help would be greatly appreciated.



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: Importing large databases faster

2009-12-16 Thread Gavin Towey
There are scripts out there such as the Maatkit mk-parallel-dump/restore that 
can speed up this process by running in parallel.

However, if you're doing this every week on that large a dataset, I'd just 
use filesystem snapshots.  Your backup/restore would then only take as long 
as it takes to scp the database from one machine to another.
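If you do stick with the plain mysql < backup.sql load described in the quoted message below, a common trick is to turn off the per-row checks for the importing session; a sketch (this mainly helps InnoDB tables, since mysqldump output already wraps MyISAM tables in ALTER TABLE ... DISABLE KEYS):

```sql
-- Run in the mysql client session doing the import.
SET autocommit = 0;
SET unique_checks = 0;
SET foreign_key_checks = 0;

SOURCE /path/to/backup.sql;

COMMIT;
SET foreign_key_checks = 1;
SET unique_checks = 1;
```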

Regards,
Gavin Towey


-Original Message-
From: Madison Kelly [mailto:li...@alteeve.com]
Sent: Wednesday, December 16, 2009 12:56 PM
To: mysql@lists.mysql.com
Subject: Importing large databases faster

Hi all,

I've got a fairly large set of databases I'm backing up each Friday. The
dump takes about 12.5h to finish, generating a ~172 GB file. When I try
to load it though, *after* manually dumping the old databases, it takes
1.5~2 days to load the same databases. I am guessing this is, at least
in part, due to indexing.

My question is: given an empty target DB and a dump file generated via:

ssh r...@server "mysqldump --all-databases -psecret" > /path/to/backup.sql

How can I go about efficiently loading it into a new database?
Specifically, can I disable triggers, indexes and what not until after
load finishes? I can only imagine that a single "ok, go create your
indexes now" at the end would be faster. Perhaps some way to hold off
commits from happening as often? The target server has 32Gb of RAM, so I
suspect I should be able to hold things in memory and commit to disk
relatively rarely.

I am currently loading via this command:

mysql -psecret < /path/to/backup.sql

The source and destination MySQL versions are:

Source:
mysql  Ver 14.13 Distrib 5.1.19-beta, for unknown-linux-gnu (x86_64)
using readline 5.0

Dest:
mysql  Ver 14.12 Distrib 5.0.77, for redhat-linux-gnu (x86_64) using
readline 5.1

The reason for the discrepancy is that the old server was setup from
source on CentOS 4.x by a previous tech and the destination server is
the stock version from CentOS 5.x. The source server will be phased out
soon, so no real attempt at maintaining matching versions was done.

Thanks!

Madi

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=gto...@ffn.com




RE: Spatial extensions

2009-12-16 Thread Gavin Towey
Yes, spatial indexes are very fast:

Query would be something like:

SET @center = GeomFromText('POINT(37.372241 -122.021671)');

SET @radius = 0.005;

SET @bbox = GeomFromText(CONCAT('POLYGON((',
  X(@center) - @radius, ' ', Y(@center) - @radius, ',',
  X(@center) + @radius, ' ', Y(@center) - @radius, ',',
  X(@center) + @radius, ' ', Y(@center) + @radius, ',',
  X(@center) - @radius, ' ', Y(@center) + @radius, ',',
  X(@center) - @radius, ' ', Y(@center) - @radius, '))')
  );

select id, astext(coordinates), Distance(@center,line_segment) as dist
FROM places where MBRContains(@bbox, line_segment) order by dist limit 10;
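Note that @radius above is in degrees, not metres.  As a rough illustration (one degree of latitude is about 111 km), a box of roughly 500 m would use:

```sql
-- Illustrative only: 500 metres expressed in degrees of latitude (~0.0045).
SET @radius = 500 / 111000;
```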

Regards,
Gavin Towey


-Original Message-
From: René Fournier [mailto:m...@renefournier.com]
Sent: Wednesday, December 16, 2009 4:32 PM
To: mysql
Subject: Spatial extensions

I have table with 2 million rows of geographic points (latitude, longitude).
Given a location -- say, 52º, -113.9º -- what's the fastest way to query the 10 
closest points (records) from that table? Currently, I'm using a simple 
two-column index to speed up queries:

CREATE TABLE `places` (
 `id` mediumint(8) unsigned NOT NULL AUTO_INCREMENT,
 `latitude` decimal(10,8) NOT NULL,
 `longitude` decimal(12,8) NOT NULL,
 PRIMARY KEY (`id`),
 KEY `latlng` (`latitude`,`longitude`)
) ENGINE=MyISAM AUTO_INCREMENT=50 DEFAULT CHARSET=latin1 
COLLATE=latin1_general_ci;

My current query is fairly quick:

SELECT SQL_NO_CACHE * FROM places WHERE latitude BETWEEN 51.98228037384 AND 
52.033153677 AND longitude BETWEEN -113.94770681881 AND -113.86685484296;

But I wonder a couple things:

1. Would MySQL's [seemingly anemic] spatial extensions speed things up if 
I added a column of type POINT (and a corresponding spatial INDEX)?

CREATE TABLE `places` (
 `id` mediumint(8) unsigned NOT NULL AUTO_INCREMENT,
 `latitude` decimal(10,8) NOT NULL,
 `longitude` decimal(12,8) NOT NULL,
 `coordinates` point NOT NULL,
 PRIMARY KEY (`id`),
 KEY `latlng` (`latitude`,`longitude`),
 SPATIAL KEY `coord` (`coordinates`)
) ENGINE=MyISAM AUTO_INCREMENT=50 DEFAULT CHARSET=latin1 
COLLATE=latin1_general_ci;

2. How would I write the query?

...Rene


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=gto...@ffn.com



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: Importing large databases faster

2009-12-16 Thread Gavin Towey
I don't think so; I'm pretty sure you have to use mk-parallel-dump to get the 
data in the format it wants.  The docs are online though.

Regards,
Gavin Towey

-Original Message-
From: Madison Kelly [mailto:li...@alteeve.com]
Sent: Wednesday, December 16, 2009 4:35 PM
To: Gavin Towey
Cc: mysql@lists.mysql.com
Subject: Re: Importing large databases faster

Gavin Towey wrote:
> There are scripts out there such as the Maatkit mk-parallel-dump/restore that 
> can speed up this process by running in parallel.
>
> However, if you're doing this every week on that large a dataset, I'd just 
> use filesystem snapshots.  Your backup/restore would then only take as long 
> as it takes to scp the database from one machine to another.
>
> Regards,
> Gavin Towey

Thanks! Will the Maatkit script work on a simple --all-databases dump?

As for the copy, it's a temporary thing. This is just being done weekly
while we test out the new server. Once it's live, the new server will
indeed be backed up via LVM snapshots. :)

Madi

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=gto...@ffn.com




RE: Spatial extensions

2009-12-17 Thread Gavin Towey
Not only is it 5.1, but there's a special branch that has improved GIS 
functions not found in the regular MySQL.  I'm not sure if/when they're 
planning on rolling them back into mysql:

http://downloads.mysql.com/forge/mysql-5.1.35-gis/

If it's not possible to use that version, then you can still implement a 
Distance function yourself as a stored procedure or UDF.  Just google for 
mysql+haversine or something similar.
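A stored-function sketch of the haversine formula (names and the km-based result are my choices, not from a particular tutorial; inputs are latitude/longitude in degrees, 6371 is the Earth's radius in km):

```sql
DELIMITER //

CREATE FUNCTION haversine_km(lat1 DOUBLE, lon1 DOUBLE,
                             lat2 DOUBLE, lon2 DOUBLE)
RETURNS DOUBLE DETERMINISTIC
BEGIN
  -- Great-circle distance: d = 2R * asin(sqrt(a)), with
  -- a = sin^2(dlat/2) + cos(lat1)*cos(lat2)*sin^2(dlon/2)
  RETURN 6371 * 2 * ASIN(SQRT(
    POW(SIN(RADIANS(lat2 - lat1) / 2), 2) +
    COS(RADIANS(lat1)) * COS(RADIANS(lat2)) *
    POW(SIN(RADIANS(lon2 - lon1) / 2), 2)));
END//

DELIMITER ;
```

You'd then use it in place of Distance() in the earlier query, keeping the MBRContains box cull to avoid computing it for every row.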

The important part, though, is the MBRContains, which does an efficient box cull 
and uses the spatial index.  Oops, I forgot to change a couple of occurrences of 
"line_segment" to "coordinates"; line_segment was just the column name I was 
using in my original query.

Regards,
Gavin Towey

-Original Message-
From: René Fournier [mailto:m...@renefournier.com]
Sent: Thursday, December 17, 2009 8:54 AM
To: Gavin Towey
Cc: mysql
Subject: Re: Spatial extensions

Awesome, this is what I was trying to find, as you succinctly wrote it. I 
*really* appreciate getting pointed in the right direction, since I haven't 
found a lot of MySQL's GIS tutorials directed at what I'm trying to do.

Still, a couple questions, the Distance() function you included, that must 
require 5.1 or higher right? 5.0.88  on my box throws an error:

"Function places.Distance does not exist"

Also, where does line_segment come from in the below query?
Thanks.

...Rene

On 2009-12-17, at 8:45 AM, Gavin Towey wrote:

> Yes, spatial indexes are very fast:
>
> Query would be something like:
>
> SET @center = GeomFromText('POINT(37.372241 -122.021671)');
>
> SET @radius = 0.005;
>
> SET @bbox = GeomFromText(CONCAT('POLYGON((',
>  X(@center) - @radius, ' ', Y(@center) - @radius, ',',
>  X(@center) + @radius, ' ', Y(@center) - @radius, ',',
>  X(@center) + @radius, ' ', Y(@center) + @radius, ',',
>  X(@center) - @radius, ' ', Y(@center) + @radius, ',',
>  X(@center) - @radius, ' ', Y(@center) - @radius, '))')
>  );
>
> select id, astext(coordinates), Distance(@center,line_segment) as dist
> FROM places where MBRContains(@bbox, line_segment) order by dist limit 10;
>
> Regards,
> Gavin Towey
>
>
> -Original Message-
> From: René Fournier [mailto:m...@renefournier.com]
> Sent: Wednesday, December 16, 2009 4:32 PM
> To: mysql
> Subject: Spatial extensions
>
> I have table with 2 million rows of geographic points (latitude, longitude).
> Given a location -- say, 52º, -113.9º -- what's the fastest way to query the 
> 10 closest points (records) from that table? Currently, I'm using a simple 
> two-column index to speed up queries:
>
> CREATE TABLE `places` (
> `id` mediumint(8) unsigned NOT NULL AUTO_INCREMENT,
> `latitude` decimal(10,8) NOT NULL,
> `longitude` decimal(12,8) NOT NULL,
> PRIMARY KEY (`id`),
> KEY `latlng` (`latitude`,`longitude`)
> ) ENGINE=MyISAM AUTO_INCREMENT=50 DEFAULT CHARSET=latin1 
> COLLATE=latin1_general_ci;
>
> My current query is fairly quick:
>
> SELECT SQL_NO_CACHE * FROM places WHERE latitude BETWEEN 51.98228037384 AND 
> 52.033153677 AND longitude BETWEEN -113.94770681881 AND -113.86685484296;
>
> But I wonder a couple things:
>
> 1. Would MySQL's [seemingly anemic] spatial extensions speed things up 
> if I added a column of type POINT (and a corresponding spatial INDEX)?
>
> CREATE TABLE `places` (
> `id` mediumint(8) unsigned NOT NULL AUTO_INCREMENT,
> `latitude` decimal(10,8) NOT NULL,
> `longitude` decimal(12,8) NOT NULL,
> `coordinates` point NOT NULL,
> PRIMARY KEY (`id`),
> KEY `latlng` (`latitude`,`longitude`),
> SPATIAL KEY `coord` (`coordinates`)
> ) ENGINE=MyISAM AUTO_INCREMENT=50 DEFAULT CHARSET=latin1 
> COLLATE=latin1_general_ci;
>
> 2. How would I write the query?
>
> ...Rene
>
>
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:http://lists.mysql.com/mysql?unsub=gto...@ffn.com
>
>
