> We have a PSE05 "Master" and PSE06 "Slave" (PRODUCTION servers); both
> are Ubuntu 32-bit.
> We have a third slave PSE07 which is Ubuntu 64-bit. This is our 'live
> backup' so to speak. We take mysqld down daily on there and tarball the
> /var/lib/mysql and /var/log/mysql as snapshots (since mys
Little, Timothy wrote:
> We have a 20 gig db (that includes the MYIs and MYDs and FRMs).
>
> We are wondering how long LVM snapshots take.. in that how long might
> the DB be read-locked? Do we have to read-lock it and flush tables?
Take a look at mylvmbackup, which takes care of flushing tables,
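For anyone curious what mylvmbackup does under the hood, the sequence is roughly the one below. This is only a sketch: the volume names (vg0/mysql), mountpoint, and snapshot size are assumptions for your setup. The read lock is held only for the lvcreate itself — typically a second or two — not for the whole copy.

```shell
#!/bin/sh
# Sketch of an LVM snapshot backup; LV names and paths are assumptions.
mysql -uroot -p"$PASS" <<'EOF'
FLUSH TABLES WITH READ LOCK;
-- the DB is read-locked only while the snapshot is created:
system lvcreate --snapshot --size 1G --name mysql-snap /dev/vg0/mysql
UNLOCK TABLES;
EOF

# The live DB is unlocked again; tar the snapshot at leisure, then drop it.
mount /dev/vg0/mysql-snap /mnt/snap
tar czf /backup/mysql-$(date +%F).tar.gz -C /mnt/snap .
umount /mnt/snap
lvremove -f /dev/vg0/mysql-snap
```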
Glyn Astill wrote:
>> Begone Postgres troll!
>
> Oh the hostility of a scorned mysql user. Joshua has posted no more FUD
> than you mysql chaps have done yourselves over the past few days. You were
> worried about the future and he's posted a few ideas of how you can
> prepare.
No he didn't. He p
Joshua D. Drake wrote:
> I would expect that MySQL in two years likely won't exist except on the
> most tertiary level. Most new projects will be developed in either
> PostgreSQL, Interbase or one of the forks (MariaDB, Drizzle).
>
> Sincerely,
>
> Joshua D. Drake
>
> --
> PostgreSQL - XMPP: jdr
> Right now if you want a more scalable *current* version of
> MySQL, you need to look to the Google patches, the Percona builds (and
> Percona XtraDB, a fork of InnoDB), or OurDelta builds.
Is there a webpage somewhere that compares and contrasts the above patchsets?
I thought the Google patches
Hi all,
I'm just creating my first partitioned table and have run into a bit of a
snag. The table primary key is a double and I want to create partitions based
on ranges of the key.
I have read the Partition Limitations section in the docs which states that
the partition key must be, or resolve
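If the restriction being hit is the usual one — the partitioning expression must resolve to an integer, and FLOOR()/CEILING() are only allowed on exact numeric types — then a DOUBLE key can't be partitioned directly. A hypothetical sketch of a common workaround, switching the key to DECIMAL so that FLOOR() is legal in the partitioning expression:

```sql
-- Hypothetical table; names and ranges are made up for illustration.
CREATE TABLE measurements (
  id DECIMAL(20,6) NOT NULL,
  payload VARCHAR(255),
  PRIMARY KEY (id)
)
PARTITION BY RANGE (FLOOR(id)) (
  PARTITION p0   VALUES LESS THAN (1000),
  PARTITION p1   VALUES LESS THAN (2000),
  PARTITION pmax VALUES LESS THAN MAXVALUE
);
```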
Gmail User wrote:
> I had perfectly working complex queries both with LEFT JOIN and without
> and they were returning results in under a second. After upgrade to
> 5.0.x, the same queries would return results in 20-30 second range.
I had a similar problem once (not related to 4.x->5.x though), it
The version I'm using is:
Your MySQL connection id is 6 to server version: 4.1.22
I log on as root. I then:
mysql> show grants for 'WP_INT_BASEBALL'@'localhost';
++
| Grants
> It makes no sense to use a default value with an auto_increment
> attribute, which means, the default value is the auto-incremented
> value.
>
> Carlos
Hmmm... I see your point. I sort of figured that the default was an
initial value, and from there, it incremented when accessed.
Sort of weir
I'm trying to install ProBIND, and I'm running mysql 4.1.20.
One of the ProBIND install scripts calls for tables to be created in MySQL, and
I've culled it down to this:
mysql> CREATE TABLE zones (
-> id INT(11) DEFAULT '1' NOT NULL AUTO_INCREMENT,
-> PRIMARY KEY (id)
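As noted elsewhere in this thread, the DEFAULT '1' clause is what trips MySQL up: an AUTO_INCREMENT column cannot have an explicit default. A minimal edit to the culled script that should create cleanly:

```sql
CREATE TABLE zones (
  id INT(11) NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (id)
);
```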
I shut down a database using:
mysqladmin -uroot -pxxx shutdown
and the db shutdown as expected. But then it restarted! My only guess
is that mysqlmanager can't tell the difference between a clean shutdown
and a crash.
Maybe this is expected? But then what good is the shutdown command
availabl
I've switched over to the mysqlmanager startup system instead of the old
mysqld_safe because that's the only supported method in mysql5.
I needed to restart a DB so I did a `/etc/init.d/mysqlmanager restart`
which seemed to work, but there were some problems:
- the daemon was no longer accepting c
Hi Juan,
The default (and recommended) value is 2. The log files save the
transactions to disk in circular order; these files are like the redo
log files in Oracle. The log files are useful when recovering your
database after a crash, for example, or when you use MySQL replication.
innodb_buffer_
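For concreteness, the settings in question live in my.cnf; the values below are illustrative placeholders, not recommendations:

```ini
# my.cnf fragment (illustrative values only)
[mysqld]
innodb_log_files_in_group = 2      # the default
innodb_log_file_size      = 256M   # total redo space = group * file size
innodb_buffer_pool_size   = 1G
```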
Is there any benefit/reason to set innodb_log_files_in_group to
something other than 2?
Thanks,
ds
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
Jochem van Dieten wrote:
> On 12/19/06, David Sparks wrote:
>> I noticed an interesting benchmark at tweakers.net that shows mysql not
>> scaling very well on hyperthreading and multicore cpus (see links at end
>> of email).
>>
>> Does anyone know what engi
I noticed an interesting benchmark at tweakers.net that shows mysql not
scaling very well on hyperthreading and multicore cpus (see links at end
of email).
Does anyone know what engine they are using for their tests? (Innodb,
myisam, berkdb heheh)
In fact they seem to show that postgres is a fast
Mathieu Bruneau wrote:
> I never experienced any dump that was slow due to the indexes. The indexes
> aren't dumped anyway; they will be recreated when you import them back, so
> it shouldn't matter. (And that would cause problems if the db is running.)
> So I wouldn't drop the indexes on your table if I were
David Sparks wrote:
> I want to move 3 100GB .ibd files into a new DB.
>
> I followed the instructions here:
>
> http://dev.mysql.com/doc/refman/5.0/en/multiple-tablespaces.html
>
> But it doesn't work:
>
> mysql> alter table reports discard tablespace;
&g
Kevin Old wrote:
> Hello everyone,
>
> We have a 4 CPU master server running the 5.0.27 RPM x86_64 version of
> MySQL with a mix of InnoDB and MyISAM tables.
>
> We normally run at 1500 queries/per second and lately, the server will
> all of a sudden lock up and we are forced to restart mysql.
T
I want to move 3 100GB .ibd files into a new DB.
I followed the instructions here:
http://dev.mysql.com/doc/refman/5.0/en/multiple-tablespaces.html
But it doesn't work:
mysql> alter table reports discard tablespace;
Query OK, 0 rows affected (0.04 sec)
mysql> alter table reports import tablesp
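For anyone attempting the same move, the manual's sequence is roughly the following; the copy path is a placeholder, and in 5.0 IMPORT TABLESPACE is picky — the table definition and the InnoDB tablespace id must match, which is often why the import step fails:

```sql
-- In the destination database (table definition must match the source):
ALTER TABLE reports DISCARD TABLESPACE;

-- Copy the .ibd into place from the shell between the two ALTERs, e.g.:
--   cp /backup/reports.ibd /var/lib/mysql/newdb/reports.ibd

ALTER TABLE reports IMPORT TABLESPACE;
```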
I have a table with ~100,000,000 rows. I recently discovered that I
need to start using one of the non-indexed columns in WHERE clauses. As
expected the performance is horrid. I decided to bite the bullet and
create an index (innodb):
mysql> show full processlist\G
*** 1
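For reference, the statement that kicks off such a build is just an ALTER — table and column names below are assumed — and on ~100,000,000 rows it rebuilds the table, so a multi-hour entry in the processlist is normal:

```sql
-- Assumed names; on MySQL of this era the ALTER rebuilds the whole
-- table and blocks writes for the duration.
ALTER TABLE big_table ADD INDEX idx_newcol (newcol);
```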
Basically, I'm new to mysql (or to any database for that matter).
I have an old version installed on my linux machine. I thought, as a
learning exercise I'd take 2 files (tab separated tables) load them
into mysql and then merge or join them.
So what are the steps? The first thing I tried was t
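A sketch of the usual first steps, with file names and columns assumed; LOAD DATA's default input format is tab-separated, which matches the files described:

```sql
CREATE TABLE t1 (id INT, name VARCHAR(64));
CREATE TABLE t2 (id INT, score INT);

-- Tab-separated, one row per line, is LOAD DATA's default:
LOAD DATA LOCAL INFILE '/tmp/file1.tsv' INTO TABLE t1;
LOAD DATA LOCAL INFILE '/tmp/file2.tsv' INTO TABLE t2;

-- "Merge or join" them on the shared key:
SELECT t1.id, t1.name, t2.score
FROM t1 JOIN t2 ON t1.id = t2.id;
```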
Here is a config diff that made mysql usable again. As the database
grew in size, buffer sizes in the config were increased to try to boost
mysql performance.
Unfortunately it didn't work as expected. As the config was tweaked,
mysql slowed down even more. Removing all settings from the my.
> or you have such a cool server :)! Please send the
> output of 'SHOW VARIABLES' statement, 'SHOW STATUS' statement and your
> configuration file. Include the amount of physical memory.
>
> David Sparks wrote:
>
>> mysql usually crashes when being sh
I forgot to include the output of show variables and show status in the
last message :(
mysql> show variables\G
*** 1. row ***
Variable_name: back_log
Value: 50
*** 2. row ***
Variable_name:
Gleb Paharenko wrote:
Hello.
> = 77591546 K
Really - something is wrong with your memory settings - MySQL is using
about 77G of memory (or you have such a cool server :)! Please send the
output of 'SHOW VARIABLES' statement, 'SHOW STATUS' statement and your
configuration file. Include the a
mysql usually crashes when being shutdown. The machine is a dual AMD64
w 8GB RAM running mysql-4.1.14 on Gentoo linux with a ~40GB database. I
had similar crashes running 4.0.24 on an x86 running a ~275GB database.
I always use `mysqladmin shutdown` rather than the init scripts to
shutdown t
Hi all!
Gleb Paharenko wrote:
> Hello.
>
>> I have a query that is taking days to complete (not good). If I change
>
> Really, not good. What does SHOW PROCESSLIST report about the thread of
> this query?
The query has been running for ~5 days now:
Id: 27977
User: root
Ho
I have a query that is taking days to complete (not good). If I change
the query so that it selects less rows it runs fast.
I ran an explain on both queries and it didn't give any hints as to why
the one query is taking days to run. In fact explain knows how many
rows each query will examine.
P
db1 corruption # cat > my.sql
DROP TABLE IF EXISTS service_contacts;
CREATE TABLE service_contacts (
croeated datetime NOT NULL default '0000-00-00 00:00:00'
) ENGINE=MyISAM DEFAULT CHARSET=latin1;
INSERT INTO service_contacts VALUES ('2006-06-14 10:27:40');
db1 corruption # mysqladmin -u root -
Petr,
Thanks for the reply!
> I think you are looking for --extern option of the test suite. I
> corrected the README file. The changes should be propagated to the
> public repository soon, but you could check the commit mail for more
> details right now:
> http://lists.mysql.com/internals/26266
According to the README, mysql-test-run supports an --external option:
db1 mysql-test # grep -a1 external README
If you want to run the test with a running MySQL server use the --external
option to mysql-test-run.
However it doesn't actually support it:
db1 mysql-test # ./mysql-test-run --exte
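Per the reply above, the option the suite actually implements is --extern, not the --external the README claimed; an invocation against a running server might look like this (host, port, and user here are assumptions):

```shell
# --extern (not --external) runs the suite against an external server
./mysql-test-run --extern --host=127.0.0.1 --port=3306 --user=root
```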
Hi,
I want to have an automatic backup done of my SQL databases, but can't quite
figure out how to use mysqldump to do this properly...
Ideally, I would like the database backed up, then FTP'd to my home server
Can someone lend me a hand with this please!
Thanks!
Sparks...
--
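A minimal cron-able sketch — the database name, credentials, and FTP host are all placeholders, and plain FTP sends the password in the clear, so scp would be preferable if the home server supports it:

```shell
#!/bin/sh
# Nightly dump + upload; every name and credential here is a placeholder.
DB=mydb
DUMP=/tmp/${DB}-$(date +%F).sql.gz

mysqldump -u backup -p'secret' "$DB" | gzip > "$DUMP"

# Non-interactive FTP session
ftp -n home.example.com <<EOF
user ftpuser ftppass
binary
put $DUMP
bye
EOF

rm -f "$DUMP"
```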
Paul,
It is not an error for $category to be 'no' in all records. The warning
is just telling me that it didn't find any 'yes' records.
I need to read up and find out how to error check the SELECT statement,
I guess...
-tom
I am doing a mysql_num_rows after a SELECT statement and am getting
the following warning message:
Warning: Supplied argument is not a valid MySQL result resource in {pathname to
program} on line 40
Line 40 - $result = mysql_num_rows($res);
The SELECT statement:
$res = mysql_query("SELECT * FRO
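The warning means mysql_query() returned FALSE (a query error) rather than a result resource, so mysql_num_rows() had nothing to count. The usual guard, sketched here with an assumed table name since the original query is truncated:

```php
// Check the result before using it; mysql_query() returns FALSE on error.
$res = mysql_query("SELECT * FROM mytable WHERE category = 'yes'");
if ($res === false) {
    die('Query failed: ' . mysql_error());
}
$num = mysql_num_rows($res);   // safe now
```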
an apostrophe
> Description: If I try to INSERT data containing an apostrophe into a
> field defined as mediumtext it works ok. However, if I try to insert the
> same data into a field defined as varchar, the INSERT fails. And not
> only does it not work... it does nothing at all... no error message of a