Ian, I ran into a similar situation not long ago when setting up replication. I don't recall whether the error message was identical, but it was certainly close.

My problem was that we had customized our original server (and mysqldump) to allow 16 MB "packets" - essentially the setting governing the largest SQL statement allowed. When I did the install on the slave machine, I forgot to apply those same settings, so the initial import kept failing until I did.

I see in your my.cnf you've got an 8M setting under [mysqld] and a 16M setting under [mysqldump] - first off, set those two to the same value.

As for your import problem - is it possible this 3.2 GB dumpfile contains some large binary data? Or that, for some other reason, the people at the source drastically increased the max_allowed_packet setting on their server and in their mysqldump?

The numbers you show below (the largest failed allocation was 178,723,240 bytes, i.e. roughly 170M) suggest a max_allowed_packet setting of 180M might work for you, allowing you to finish the import.

Note that you need to set the larger size for the server AND the client, AND for mysqldump if you're using that too.
http://dev.mysql.com/doc/refman/5.0/en/packet-too-large.html
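
For example, a minimal sketch of the relevant my.cnf sections on the importing machine (256M is just an illustrative round number comfortably above the ~180M your errors point at - adjust to your data):

[mysqld]
max_allowed_packet = 256M

[mysql]
max_allowed_packet = 256M

[mysqldump]
max_allowed_packet = 256M

The [mysqld] change needs a server restart; the [mysql] and [mysqldump] sections are read each time those tools start.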

HTH,
Dan



Ian Barnes wrote:
Sorry, forgot to send to the list as well. My reply is at the bottom.

-----Original Message-----
From: Ian Barnes [mailto:[EMAIL PROTECTED]
Sent: 08 June 2006 09:58 PM
To: 'Kishore Jalleda'
Subject: RE: Importing 3Gb File



-----Original Message-----
From: Kishore Jalleda [mailto:[EMAIL PROTECTED]
Sent: 08 June 2006 06:18 PM
To: Ian Barnes
Cc: mysql@lists.mysql.com
Subject: Re: Importing 3Gb File

On 6/8/06, Ian Barnes <[EMAIL PROTECTED]> wrote:
Hi,

I am trying to import a 3.2Gb sql dump file back into my sql server (4.1.12) and I'm coming across the following error:

mysql: Out of memory (Needed 178723240 bytes)
mysql: Out of memory (Needed 178719144 bytes)

That error comes up after about 30 minutes' worth of import, I would guess about half way through. The file in question is a mysqldump --all-databases file from another server that I'm trying to import onto my desktop machine. I have tried to alter the my.cnf file a bit, and this is what it looks like:

[client]
#password       = your_password
port            = 3306
socket          = /tmp/mysql.sock

[mysqld]
port            = 3306
socket          = /tmp/mysql.sock
skip-locking
key_buffer = 64M
max_allowed_packet = 8M
table_cache = 512
sort_buffer_size = 8M
net_buffer_length = 8M
myisam_sort_buffer_size = 45M
set-variable=max_connections=300

# Replication Master Server (default)
# binary logging is required for replication
#log-bin

# required unique id between 1 and 2^32 - 1
# defaults to 1 if master-host is not set
# but will not function as a master if omitted
server-id       = 1

#bdb_cache_size = 4M
#bdb_max_lock = 10000

# Uncomment the following if you are using InnoDB tables
#innodb_data_home_dir = /var/db/mysql/
#innodb_data_file_path = ibdata1:10M:autoextend
#innodb_log_group_home_dir = /var/db/mysql/
#innodb_log_arch_dir = /var/db/mysql/
# You can set .._buffer_pool_size up to 50 - 80 %
# of RAM but beware of setting memory usage too high
#innodb_buffer_pool_size = 16M
#innodb_additional_mem_pool_size = 2M
# Set .._log_file_size to 25 % of buffer pool size
#innodb_log_file_size = 5M
#innodb_log_buffer_size = 8M
#innodb_flush_log_at_trx_commit = 1
#innodb_lock_wait_timeout = 50

[mysqldump]
quick
max_allowed_packet = 16M

[mysql]
no-auto-rehash
# Remove the next comment character if you are not familiar with SQL
#safe-updates

[isamchk]
key_buffer = 10M
sort_buffer_size = 20M
read_buffer = 2M
write_buffer = 2M

[myisamchk]
key_buffer = 10M
sort_buffer_size = 20M
read_buffer = 2M
write_buffer = 2M

[mysqlhotcopy]
interactive-timeout


Does anyone have any advice as to what I could change to make the import run through without breaking half way? The command I'm running to import is: mysql -n -f -p < alldb.sql

Thanks in advance,
Ian


Assuming you dumped your databases using mysqldump, what options did you give it? As of 4.1, "--opt" is enabled by default, and it turns on the "--quick" option, which forces mysqldump to retrieve one row at a time instead of buffering the whole table into memory and then writing out the result.

So if you had the --quick option enabled in mysqldump, you should not be getting the out-of-memory errors. I also see you are using the -n option with the mysql command-line tool, which flushes SQL statements/queries instead of buffering them in memory; but if the dump itself packs large amounts of row data into one huge SQL statement, the mysql tool will still treat that as a single query, so I am fairly sure you have to change the way you dump your tables...
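
If re-dumping is an option, something along these lines (untested here) should keep each INSERT statement small - --skip-extended-insert writes one INSERT per row, while net_buffer_length caps the length of each multi-row INSERT:

mysqldump -uuser -p --all-databases --skip-extended-insert > alldb.sql

or

mysqldump -uuser -p --all-databases --net_buffer_length=1M > alldb.sql

Bear in mind --skip-extended-insert makes the dump much larger and the import slower, so the net_buffer_length route is usually the better trade-off.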

Kishore Jalleda
http://kjalleda.googlepages.com/projects

Hi Kishore,

Thanks for the info!

I don't know how the file was dumped (I know it was via mysqldump), but I assume it was the default dump method (mysqldump -uuser -p --all-databases > alldb.sql). I have tried running the import with the -q option and it still fails with that message. (mysql -f -q -p < /home/iandb.sql)

Any other ideas?

Cheers
Ian



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:    http://lists.mysql.com/[EMAIL PROTECTED]
