select recipe_id,max(maxdatetime) from data_csmeta group by recipe_id
having recipe_id=19166;
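Since recipe_id is the grouping key, the same filter can go in a WHERE clause instead of HAVING, so rows are pruned before aggregation rather than after; an equivalent form:

select recipe_id, max(maxdatetime)
from data_csmeta
where recipe_id = 19166
group by recipe_id;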
On Mon, Sep 23, 2013 at 4:15 PM, shawn green wrote:
> Hi Larry,
>
>
> On 9/23/2013 3:58 PM, Larry Martell wrote:
>
>> On Mon, Sep 23, 2013 at 1:51 PM, Sukhjinder K. Narula
>> wrote:
>>
>> Hi,
>>>
>>> I
If you have LVM, the lock is held only for the duration of taking the
snapshot, which would be a few minutes if there is very little activity on the db.
On Wed, Aug 28, 2013 at 3:08 PM, Ed L. wrote:
> On 8/28/13 2:00 PM, Ananda Kumar wrote:
>
>
> Why don't u try snapshot backup
Why don't you try snapshot backups, where the lock is held for a shorter duration?
Or can't you take mysqldump backups at night, when there is less db activity?
On Thursday, August 29, 2013, Ed L. wrote:
>
> Mysql newbie here, looking for some help configuring 5.0.45 master-slave
replication. Here's my sce
oomsToSell',4,4, NOW());
> SELECT * FROM tempHotelRateAvailability;
>
>
> On Wed, May 29, 2013 at 2:57 PM, Ananda Kumar wrote:
>
>> did u check if data is getting inserted into tempHotelRateAvailability
>>
>>
>> On Wed, May 29, 2013 at 7:21 PM, Nei
s call in the Trigger and change a value in the table
> it works fine;
>
> INSERT INTO AuditTrail
> (AuditTrailId,UserId,ActionType,TableName,RowKey,FieldName,OldValue,NewValue,
> LoggedOn)
> VALUES (UUID(),1,'UPDATE','HotelRateAvailability', 1,'RoomsToSell',
Can you please share the code of the trigger? Are you getting any kind of error?
On Wed, May 29, 2013 at 6:49 PM, Neil Tompkins wrote:
> Hi,
>
> I've a trigger that writes some data to a temporary table; and at the end
> of the trigger writes all the temporary table data in one insert to our
> norm
Does your query use proper indexes?
Does your query scan a small number of blocks/rows?
Can you share the EXPLAIN plan of the SQL?
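A sketch of how to capture that plan, with hypothetical table and column names:

EXPLAIN
SELECT col1, col2
FROM some_table
WHERE indexed_col = 42\G

The \G terminator in the mysql client prints the plan vertically, which is easier to paste into a mail.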
On Tue, Apr 16, 2013 at 2:23 PM, Ilya Kazakevich <
ilya.kazakev...@jetbrains.com> wrote:
> Hello,
>
> I have 12Gb DB and 1Gb InnoDB pool. My query takes 50 seconds when it r
Hello Guys,
I am trying to setup a mysql-cluster with two data nodes and one management
node.
The sequence of steps I followed is:
Ran 'ndb_mgmd' on the management node
Ran 'ndbd --initial' on both data nodes
Ran 'mysqld' on both data nodes
Then the status of the cluster on manageme
When I used MSSQL, I used the mail agent, so I am expecting something similar in MySQL.
On Mon, Apr 8, 2013 at 4:02 PM, Johan De Meersman wrote:
> - Original Message -
> > From: "Bharani Kumar"
> >
> > How to enable mail agent service in MYSQL. and what are the n
rsh
> Stefan
>
>
> On Wed, Mar 13, 2013 at 8:28 PM, Johan De Meersman wrote:
>
> > --
> >
> > From: "Ananda Kumar"
> > Subject: Re: Retrieve most recent of multiple rows
> >
> >
> >
> > select qid,max(atimestamp) from
not all the rows, only the distinct q_id,
On Wed, Mar 13, 2013 at 8:28 PM, Johan De Meersman wrote:
> --
>
> From: "Ananda Kumar"
> Subject: Re: Retrieve most recent of multiple rows
>
>
>
> select qid,max(atimestamp) from kkk
can you please share the sql that you executed to fetch the above data
On Wed, Mar 13, 2013 at 7:19 PM, Johan De Meersman wrote:
> - Original Message -
> > From: "Norah Jones"
> > Subject: Retrieve most recent of multiple rows
> >
> > 4    10    Male    3    1363091019
>
select * from tab where answer_timestamp in (select max(answer_timestamp)
from tab group by q_id);
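Note that the IN-on-MAX approach can return extra rows when one group's maximum timestamp happens to equal a timestamp in a different group. A join against the per-group maxima avoids that; a sketch using the thread's table and column names:

select t.*
from tab t
join (select q_id, max(answer_timestamp) as max_ts
      from tab
      group by q_id) m
  on m.q_id = t.q_id
 and m.max_ts = t.answer_timestamp;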
On Wed, Mar 13, 2013 at 6:48 PM, Norah Jones wrote:
> I have a table which looks like this:
>
> answer_id q_id answer qscore_id answer_timestamp
)
--
---
11 13-MAR-13 02.04.04.00 PM
10 13-MAR-13 02.03.36.00 PM
12 13-MAR-13 02.03.48.00 PM
On Wed, Mar 13, 2013 at 7:28 PM, Ananda Kumar wrote:
> can you please share the sql that you executed to fetch the above d
BY PASSWORD
> '*4EF5..6' |
> | GRANT SELECT, INSERT, UPDATE, DELETE ON `mydb`.* TO 'myuserid'@'%'
> |
>
> +---+
> 2 rows in set (0.00 sec)
>
> mys
You can use a checksum to make sure there is no corruption in the file.
On Wed, Nov 7, 2012 at 6:39 PM, Claudio Nanni wrote:
> Gary,
>
> It is always a good practice to test the whole solution backup/restore.
> So nothing is better than testing a restore, actually it should be a
> periodic procedu
Why don't you create a symlink?
On Tue, Oct 30, 2012 at 11:05 PM, Tim Johnson wrote:
> * Reindl Harald [121030 08:49]:
> > >The drupal mysql datafiles are located at
> > > /Applications/drupal-7.15-0/mysql/data
> > >
> > > as opposed to /opt/local/var/db/mysql5 for
> > > 'customary' mysql.
> >
> >
ke on any other unix machine.
>
> how did i connect mysql to what exactly?
>
>
>
> On 10/18/12 6:42 AM, Ananda Kumar wrote:
>
>> how did u connect mysql on your laptop
>>
>> On Thu, Oct 18, 2012 at 1:19 AM, kalin wrote:
; but i still don't get the necessity of "local". i have never used it
> before.
>
> this is all on os x - 10.8.2...
>
>
>
>
> On 10/17/12 1:25 PM, Ananda Kumar wrote:
>
>> also try using "load data local infile 'file path' and see if
Also try using LOAD DATA LOCAL INFILE 'file path' and see if it works.
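A minimal sketch with a hypothetical path and table name; LOCAL makes the mysql client read the file and send it to the server, which sidesteps server-side filesystem permissions:

LOAD DATA LOCAL INFILE '/path/to/data.csv'
INTO TABLE target_table
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n';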
On Wed, Oct 17, 2012 at 10:52 PM, Ananda Kumar wrote:
> does both directory have permission "777"
>
>
> On Wed, Oct 17, 2012 at 9:27 PM, Rick James wrote:
>
>> SELinux ?
>&
Do both directories have permission "777"?
On Wed, Oct 17, 2012 at 9:27 PM, Rick James wrote:
> SELinux ?
>
> > -Original Message-
> > From: Lixun Peng [mailto:pengli...@gmail.com]
> > Sent: Tuesday, October 16, 2012 9:03 PM
> > To: kalin
> > Cc: Michael Dykman; mysql@lists.mysql.com
> >
> I have also gone through the firewall settings and that is only rules for
> connections.
>
>
>
>
>
> On 09/10/2012 02:40 PM, Ananda Kumar wrote:
>
> did u check if there any firewall settings, forbidding you to create
> files, check if " SELinux is disable
> we have even tried to create a temp table with only one field in order
> to insert one row for testing, but we are currently not able to create any
> temporary tables whatsoever as even the simplest form of table still gives
> the same error.
>
> Regards
>
>
>
How many rows will this temp table hold, and what would its size be?
On Mon, Sep 10, 2012 at 5:03 PM, Machiel Richards - Gmail <
machiel.richa...@gmail.com> wrote:
> Hi,
> We confirmed that the /tmp directory permissions is set to rwxrwxrwxt
> and is owned by root , the same as all our other serv
start with 500MB and try
On Mon, Sep 10, 2012 at 3:31 PM, Machiel Richards - Gmail <
machiel.richa...@gmail.com> wrote:
> Hi, the sort_buffer_size was set to 8Mb as well as 32M for the session
> (currently 1M) and retried with same result.
>
>
>
>
>
> On 09/10/201
other transactions overwrite the info, or there is nothing logged.
>
> We even tried running the create statement and immediately running
> Show innodb status, but nothing for that statement.
>
> Regards
>
>
>
>
>
> On 09/10/2012 11:05 AM, Ananda Kumar wrote:
Try this command and see if you can get more information about the error:
SHOW INNODB STATUS\G
(On 5.1 and later the syntax is SHOW ENGINE INNODB STATUS\G.)
On Mon, Sep 10, 2012 at 2:25 PM, Machiel Richards - Gmail <
machiel.richa...@gmail.com> wrote:
> Hi All
>
> I am hoping someone can point me in the right direction.
>
> We have a mysql 5.0 database whi
If the server is offline, what kind of operations happen on it?
On Thu, Aug 2, 2012 at 11:31 AM, Pothanaboyina Trimurthy <
skd.trimur...@gmail.com> wrote:
> Hi everyone
> i have 4 mysql servers out of those one server will
> be online always and the remaining will be offline and
eb Ananda Kumar:
> > so. its more of inactive connections, right.
> > What do you mean by NEVER LOGOUT
> >
>
> The programms watch certain states in the database,
> the connect automatic at db startup, disconnecting
> is an error case.
>
> re,
> wh
So it's more a case of inactive connections, right?
What do you mean by NEVER LOGOUT?
On Mon, Jul 23, 2012 at 8:17 PM, walter harms wrote:
>
>
> Am 23.07.2012 16:37, schrieb Ananda Kumar:
> > why dont u setup a staging env, which is very much similar to your
> > production and tune
Why don't you set up a staging environment that closely mirrors your
production, and tune all long-running SQL there?
On Mon, Jul 23, 2012 at 8:02 PM, walter harms wrote:
>
>
> Am 23.07.2012 16:10, schrieb Ananda Kumar:
> > you can check the slow query log, this will give you all the sql&
You can check the slow query log; this will give you all the SQL statements
that take a long time to execute.
On Mon, Jul 23, 2012 at 7:38 PM, walter harms wrote:
>
>
> Am 23.07.2012 15:47, schrieb Ananda Kumar:
> > you can set this is in application server.
> > You can
You can set this in the application server.
You can also set this parameter in my.cnf:
wait_timeout=120 (in seconds).
But this parameter applies only to inactive sessions.
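The same variable can also be changed at runtime without a restart; a minimal sketch (it affects connections opened after the change):

SHOW VARIABLES LIKE 'wait_timeout';
SET GLOBAL wait_timeout = 120;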
On Mon, Jul 23, 2012 at 6:18 PM, walter harms wrote:
> Hi list,
> is there a switch where i can restrict the connect/execution t
SQL> select * from orddd;

   ORDERID     PRODID
---------- ----------
         2          5
         1          3
         1          2
         2          7
         1          5

SQL> select prodid,count(*) from orddd group by PRODID having count(*) > 1;

    PRODID   COUNT(*)
---------- ----------
The column used in the ORDER BY clause should be the leading column of the
index for the index to be used for sorting.
On Wed, Jul 11, 2012 at 3:16 PM, Reindl Harald wrote:
>
>
> Am 11.07.2012 11:43, schrieb Ewen Fortune:
> > Hi,
> >
> > On Wed, Jul 11, 2012 at 10:31 AM, Reindl Harald
> wrote:
> >> the m
You are using a function, LOWER(), which prevents the use of the unique key
index on ksd.
MySQL does not support function-based indexes, hence your query is doing a
FULL TABLE scan and taking more time.
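A sketch of the difference, reusing the thread's table and column names (the values are hypothetical); note that with MySQL's default case-insensitive collations the LOWER() call is usually unnecessary anyway:

SELECT * FROM books WHERE LOWER(ksd) = 'abc123';  -- function on the column: full table scan
SELECT * FROM books WHERE ksd = 'abc123';         -- bare column: can use the unique index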
On Tue, Jul 10, 2012 at 4:46 PM, Darek Maciera wrote:
> 2012/7/10 Ananda Kumar :
> > c
Can you show the EXPLAIN plan for your query?
On Tue, Jul 10, 2012 at 2:59 PM, Darek Maciera wrote:
> Hello,
>
> I have table:
>
> mysql> DESCRIBE books;
>
> |id |int(255) | NO | PRI |
> NULL | auto_increment |
> | idu
It looks like the value you gave for myisam_max_sort_file_size is not enough
for the index creation, hence it is doing a "REPAIR WITH KEYCACHE".
Use the query below to find the minimum values required for
myisam_max_sort_file_size to avoid "repair with keycache":
select
a.index_name as index_name,
mysqldump --databases test --tables ananda > test.dmp
mysql> show create table ananda\G;
*** 1. row ***
Table: ananda
Create Table: CREATE TABLE `ananda` (
`id` int(11) DEFAULT NULL,
`name` varchar(20) DEFAULT NULL
) ENGINE=InnoDB DEFAULT
I have MySQL 5.5.
I was able to use mysqldump to export data with quotes, and the dump had
escape characters, as seen below:
LOCK TABLES `ananda` WRITE;
/*!40000 ALTER TABLE `ananda` DISABLE KEYS */;
INSERT INTO `ananda` VALUES
(1,'ananda'),(2,'aditi'),(3,'thims'),(2,'aditi'),(3,'thims'),(2,'aditi'),(3
Did you try using the "IGNORE" keyword with the LOAD DATA INFILE command?
It will skip rows that duplicate an existing unique key and let the load proceed.
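A minimal sketch with a hypothetical path and table name; with IGNORE, rows that collide on a unique key are dropped instead of aborting the whole load:

LOAD DATA INFILE '/path/to/data.csv'
IGNORE INTO TABLE target_table
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n';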
On Fri, Jun 15, 2012 at 11:05 AM, Keith Keller <
kkel...@wombat.san-francisco.ca.us> wrote:
> On 2012-06-14, Gary Aitken wrote:
> >
> > So... I wa
> table:         xl
> type:          ref
> possible_keys: idx_unique_key_ib_xml, index_message_id
> key:           idx_unique_key_ib_xml
> key_len:       153
> ref:           reports.pl.Message_Id
> rows:          1
> Extra:         Using where
>
> Sorry for the previous mail... this is my execution plan
t. In my database I am having 8 innodb tables and at the same time
> I am joining 4 tables to get the report.
>
> I am maintaining 60days records because the user will try to generate the
> report out of 60 days in terms of second, minute, hourly, weekly and
> Monthly report also.
>
Did you try MyISAM tables?
They are supposed to be good for reporting requirements.
On Wed, Jun 13, 2012 at 11:52 PM, Rick James wrote:
> I'll second Johan's comments.
>
> "Count the disk hits!"
>
> One minor change: Don't store averages in the summary table; instead
> store the SUM(). That
Is the iptables service running on the db server? If yes, try stopping it
and check.
On Wed, Jun 13, 2012 at 5:04 PM, Claudio Nanni wrote:
> 2012/6/13 Johan De Meersman
>
> >
> > - Original Message -
> > > From: "Claudio Nanni"
> > >
> > > @Johan, you say "I'm having trouble with clients abor
Or you can check the application logs to see why the client lost
connectivity from the app.
On Tue, Jun 12, 2012 at 5:12 PM, Ananda Kumar wrote:
> is there anything you can see in /var/log/messages
>
>
> On Tue, Jun 12, 2012 at 5:08 PM, Claudio Nanni wrote:
>
>> Johan,
>>
Is there anything you can see in /var/log/messages?
On Tue, Jun 12, 2012 at 5:08 PM, Claudio Nanni wrote:
> Johan,
>
> "Print out warnings such as Aborted connection... to the error log."
> the dots are not telling if they comprise Aborted clients as well.
> I find the MySQL error log extremely po
When you say redundancy:
do you just want replication like master-slave, which will be active-passive,
or
master-master, which will be active-active?
Master-slave works just as DR: when your current master fails, you can
fail over to the slave, with NO load balancing.
Master-master allows load balancing.
On Mo
e in
> > central server.
> >
> >
> > How can we achive this ? solution needs very much real time data
> > accepting nework lags.
> >
> >
> > Solution
> >
> > Collect all changes in other 49 server into 1 central server(How can we
> > collect data)
> >
> >
> > 49 keeps updating data into local database from central server(Using
> > Repliation Can be done)
> >
> >
> >
> > --Anupam
> >
: my.cnf information
> > >> # mysqlext_20120522131034.log : variable and status information from
> > >> mysqladmin
> > >>
> > >> I have 2 database working with high load.
> > >>
> > >> I wanted to speed up my s
Is the central database server just ONE server, to which the apps in all
your 50 data centres connect?
On Thu, May 24, 2012 at 2:47 PM, Anupam Karmarkar
wrote:
> Hi All,
>
>
> I need architectural help for our requirement,
>
>
> We have nearly 50 data centre through out different cities from these data
>
Hi,
However much tuning you do in my.cnf, it will not help much if you do not
tune your SQL.
Your first priority should be to tune the SQL; that will give you good
performance even with modest memory allocations and other settings.
regards
anandkl
On Wed, May 23, 2012 at 3:45 PM, Andrew Moore wrote:
Or it could be that your buffer size is too small, and MySQL is spending a
lot of CPU time compressing and uncompressing.
On Tue, May 22, 2012 at 5:45 PM, Ananda Kumar wrote:
> Is you system READ intensive or WRITE intensive.
> If you have enable compression for WRITE intensive data, then CP
Is your system READ intensive or WRITE intensive?
If you have enabled compression for WRITE-intensive data, the CPU cost will
be higher.
On Tue, May 22, 2012 at 5:41 PM, Johan De Meersman wrote:
>
>
> - Original Message -
> > From: "Reindl Harald"
> >
> > interesting because i have here a d
Yes, Barracuda is limited to innodb_file_per_table.
And yes, there is a CPU cost, but it is small.
To gain some, you have to lose some.
On Tue, May 22, 2012 at 5:07 PM, Johan De Meersman wrote:
> --
>
> *From: *"Ananda Kumar"
>
>
> yes, there some
Yes, there are some new features you can use to improve performance.
If you are using MySQL 5.5 and above, with innodb_file_per_table, you can
enable the Barracuda file format, which provides data compression
and the dynamic row format, both of which reduce IO.
For more benefits, read the docs. A sketch of the settings follows.
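A minimal sketch for MySQL 5.5, assuming file-per-table tablespaces; the table names are hypothetical:

SET GLOBAL innodb_file_per_table = ON;
SET GLOBAL innodb_file_format = Barracuda;
ALTER TABLE t1 ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;  -- compressed pages
ALTER TABLE t2 ROW_FORMAT=DYNAMIC;                      -- dynamic row format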
On Tue, May 22, 20
table and
> doing the optimization will reduce the size of the datafile size ? If yes,
> then why this not possible on the datafile (one single file) too ?
> *
> *
> thanks & regards,
> Kishore Kumar Vaishnav
> On Tue, May 22, 2012
On Tue, May 22, 2012 at 2:58 PM, Kishore Vaishnav
wrote:
> Right now one tablespace datafile. But does it matters if i have one file
> per table.
>
> thanks & regards,
> Kishore Kumar Vaishnav
> On Tue, May 22, 2012 at 2:56 PM, Ananda K
d why can't
> it gets decreased ?
>
> thanks & regards,
> Kishore Kumar Vaishnav
> On Tue, May 22, 2012 at 1:40 PM, Claudio Nanni wrote:
>
> > Kishore,
> > No, as already explained, it is not possible, Innodb datafiles *nev
Why are you not using any WHERE condition in the UPDATE statement?
On Wed, May 16, 2012 at 1:24 PM, GF wrote:
> Good morning,
> I have an application where the user ids were stored lowercase.
> Some batch import, in the user table some users stored a uppercase
> id, and for some applicative logic, in
I used to have these issues in mysql version 5.0.41.
On Mon, May 14, 2012 at 8:13 PM, Johan De Meersman wrote:
> - Original Message -
> > From: "Ananda Kumar"
> >
> > If numeric, then why are u using quotes. With quotes, mysql will
> > igno
r now dev team is
> updating the batch process from long secuencial process with huge slow
> inserts, to small parallel task with burst of inserts...
>
>
>
>
> On Mon, May 14, 2012 at 8:18 AM, Ananda Kumar wrote:
>
>> is accountid a number or varchar column
>>
Is accountid a numeric or a varchar column?
On Sat, May 12, 2012 at 7:38 PM, Andrés Tello wrote:
> While doning a batch process...
>
> show full processlist show:
>
> | 544 | prod | 90.0.0.51:51262 | tmz2012 | Query |6 |
> end | update `account` set `balance`= 0.00 +
>
Which version of MySQL are you using?
Is this a secondary index?
On Mon, May 7, 2012 at 12:07 PM, Zhangzhigang wrote:
> hi all:
>
> I have a question:
>
> Creating indexes after inserting massive data rows is faster than before
> inserting data rows.
> Please tell me why.
>
ion you have to restart the Server.
>
>
>
> Am 02.05.2012 um 13:58 schrieb vishesh kumar :
>
> > I am getting following in error log
> >
> >
>
:56:04 mysqld ended
-
Thanks
~Vishesh
On Wed, May 2, 2012 at 4:54 PM, vishesh kumar wrote:
> Thanks for response .
>
> I didn't set any open_files limit
Thanks for the response.
I didn't set any open_files limit in my.cnf.
For testing I set open_files_limit to 300, but MySQL still crashes after
128.
~Vishesh
On Wed, May 2, 2012 at 4:28 PM, Reindl Harald wrote:
>
>
> Am 02.05.2012 12:52, schrieb vishesh kumar:
> > Hi Members
Do you just want to replace the current value in the client column with "NEW"?
You can write a stored proc with a cursor, loop through the cursor, and
update each table.
regards
anandkl
On Mon, Apr 30, 2012 at 2:47 PM, Pothanaboyina Trimurthy <
skd.trimur...@gmail.com> wrote:
> Hi all,
> i have one
Did you check the permissions of the file /var/run/mysqld/mysqld.sock?
On Wed, Apr 11, 2012 at 9:48 AM, Larry Martell wrote:
> On Wed, Apr 11, 2012 at 2:51 AM, Ganesh Kumar wrote:
> > Hi Guys,
> >
> > I am using debian squeeze it's working good, I am trying to install
>
>
> /*!40101 SET SQL_MODE=@OLD_SQL_MODE */;
> /*!40101 SET CHARACTER_SET_CLIENT=@OLD_CHARACTER_SET_CLIENT */;
> /*!40101 SET CHARACTER_SET_RESULTS=@OLD_CHARACTER_SET_RESULTS */;
> /*!40101 SET COLLATION_CONNECTION=@OLD_COLLATION_CONNECTION */;
> /*!40111 SET SQL_NOTES=@OLD_SQL_NOTES */;
>
> -- Dump completed on 2011-04-18 4:14:26
>
; a better way to do live backups, or have a hot space in the event of a
> > > catastrophe? Is there 3rd party software that would better achieve
> data
> > > integrity or something?
> > >
> > > Any help here would be appreciated.
CREATE PROCEDURE qrtz_purge()
BEGIN
  DECLARE l_id BIGINT(20);
  DECLARE NO_DATA INT DEFAULT 0;
  -- cursor over the ids to purge
  DECLARE LST_CUR CURSOR FOR SELECT id FROM table_name WHERE id > 123;
  DECLARE CONTINUE HANDLER FOR NOT FOUND SET NO_DATA = -1;
  OPEN LST_CUR;
  SET NO_DATA = 0;
  FETCH LST_CUR INTO l_id;
  WHILE NO_DATA = 0 DO
    DELETE FROM table_name WHERE id = l_id;
    FETCH LST_CUR INTO l_id;
  END WHILE;
  CLOSE LST_CUR;
END
Why don't you create a new table with only the rows where id < 2474,
then rename the original table to "_old" and the new table to the actual
table name? (A sketch follows below.)
Or write a stored proc to loop through the rows and delete in batches,
which will be faster.
A single plain "delete" statement for deleting huge amounts of data will
take ages.
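A minimal sketch of the first option, with a hypothetical table name t:

CREATE TABLE t_new LIKE t;
INSERT INTO t_new SELECT * FROM t WHERE id < 2474;
RENAME TABLE t TO t_old, t_new TO t;  -- atomic swap
DROP TABLE t_old;                     -- once the new table is verified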
regards
o monitor the UPDATE/INSERT performance, check out if there's any
> performance bottleneck, for example:
> slow INSERT/UPDATE
> more I/O where execute INSERT
>
> Regards
>
> Thanks
> J.W
>
top accessing some tables to open others.
>
> http://dev.mysql.com/doc/refman/5.5/en/not-enough-file-handles.html
>
> --
>Dan Nelson
>dnel...@allantgroup.com
>
hours when I do use LOCK TABLES.
>
> -Hank
>
>
>
> On Thu, Sep 22, 2011 at 2:18 PM, Ananda Kumar wrote:
>
>> May be if u can let the audience know a sip-net of ur sql, some can help u
>>
>>
>> On Thu, Sep 22, 2011 at 11:43 PM, Hank wrote:
>>
>>
Your outer query, "select cpe_mac,max(r3_dt) from rad_r3cap", is doing a
full table scan; you might want to check this and add a "WHERE" condition
on an indexed column.
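A sketch of what that could look like, assuming an index on r3_dt and a hypothetical date cutoff:

SELECT cpe_mac, MAX(r3_dt)
FROM rad_r3cap
WHERE r3_dt >= '2011-09-01'   -- prunes rows via the index instead of scanning all 24M
GROUP BY cpe_mac;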
On Fri, Sep 23, 2011 at 12:14 AM, supr_star wrote:
>
>
> I have a table with 24 million rows, I need to figure out how to op
Maybe if you let the audience see a snippet of your SQL, someone can help you.
On Thu, Sep 22, 2011 at 11:43 PM, Hank wrote:
>
> Sorry, but you do not understand my original issue or question.
>
> -Hank
>
>
>
> On Thu, Sep 22, 2011 at 2:10 PM, Ananda Kumar wrote:
>
mmit.
>
>
>
>
> On Thu, Sep 22, 2011 at 1:48 PM, Ananda Kumar wrote:
>
>> Hi,
>> Why dont u use a stored proc to update rows ,where u commit for every 1k
>> or 10k rows.
>> This will be much faster than ur individual update stmt.
>>
>> regards
Hi,
Why don't you use a stored proc to update the rows, committing every 1k or
10k rows?
This will be much faster than your individual UPDATE statements.
regards
anandkl
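A minimal sketch of such a proc, with hypothetical table and column names; the WHERE predicate must exclude already-updated rows so each batch makes progress:

DELIMITER //
CREATE PROCEDURE batch_update()
BEGIN
  DECLARE n INT DEFAULT 1;
  WHILE n > 0 DO
    UPDATE big_table SET processed = 1 WHERE processed = 0 LIMIT 10000;
    SET n = ROW_COUNT();   -- rows touched by the last UPDATE
    COMMIT;                -- release locks and undo after every batch
  END WHILE;
END//
DELIMITER ;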
On Thu, Sep 22, 2011 at 8:24 PM, Hank wrote:
> That is what I'm doing. I'm doing a correlated update on 200 million
> records. One U
Or you can use a "for" loop: list only the databases to be exported, and
pass each one to --databases to run a mysqldump per database.
On Thu, Sep 15, 2011 at 6:27 PM, Carsten Pedersen wrote:
> On 15-09-2011 10:31, Chris Tate-Davies wrote:
>
>> Adarsh,
>>
>> 1)
>>
>> When restoring a mysqldump you
umber of rows you cite, but it works beautifully and it is quick as
> lightning.
>
> HTH,
> Arthur
>
>
> On Wed, Sep 14, 2011 at 9:24 AM, Ananda Kumar wrote:
>
>> Dr. Doctor,
>> What kind of 10 entries? Is it insert,update delete etc.
>>
>> regards
Dr. Doctor,
What kind of 10 entries? Are they INSERT, UPDATE, DELETE, etc.?
regards
anandkl
On Wed, Sep 14, 2011 at 6:30 PM, The Doctor wrote:
> Question:
>
> How can you optimise MySQL for 10 entires?
>
> Just running OSCemmerce and it is slow to pull up a who catalogue.
>
> --
> Member - Libe
Can you let us know the output of:
select * from user_info where user_id=16078845;
On Thu, Sep 8, 2011 at 1:02 PM, umapathi b wrote:
> I wanted to change the login_date of one user . The original data of that
> user is like this ..
>
> select * from user_info where user_id = 16078845 \G
which
> would have troubles?
>
> we have upgraded some hundret webspaces and two dbmail-servers
> in februray becaus we know our self written applications and
> having test-environments, if you can do this can nobody say
ell> mv host_name.err-old backup-directory
>
> (Bug #29751)
>
> See also Bug #56821.
> "
> --
> Paul DuBois
> Oracle Corporation / MySQL Documentation Team
> Madison, Wisconsin, USA
> www.mysql.com
>
Is this a production setup?
If not, take a complete dump of all the databases, drop the xYZ database,
and see whether you can still see all the objects under XYZ.
Since the xYZ database could be created at all, it is clear that database
names are case sensitive here, so objects from XYZ will not show up while
you are under xYZ.
Can you please
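A minimal sketch of the behaviour, assuming a Unix server with lower_case_table_names=0 (the default there) and a hypothetical table:

CREATE DATABASE XYZ;
CREATE TABLE XYZ.t1 (id INT);
CREATE DATABASE xYZ;   -- a second, distinct database, not an alias of XYZ
SHOW TABLES IN xYZ;    -- empty: t1 lives only in XYZ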