
How to Dump and Restore Databases with mysqldump in an RDS Instance in AWS

A guide on dumping and restoring databases with mysqldump in an RDS instance in AWS.



The topic of this article is fairly basic, but it can be useful to anyone who has never run into these problems while performing a simple task like this one.

We have all used mysqldump at least once, some of us manually in the old-school sysadmin way, some in an automated way within scripts or pipelines.
For testing purposes, and above all to have the same databases as our acceptance environment, I was asked to dump two databases from an RDS instance and restore them into a new RDS instance.

What should be a very simple activity actually turned up several problems: security-related changes in the MySQL client, encoding issues, dumps too large to be restored in one go, and so on.

The purpose of this article is therefore only to illustrate the various workarounds I used, probably saving you some research on Google :)

So, let's start. For obvious reasons, I'll obscure the real values in the commands I've used (DB names, passwords, etc.).

First step: Dumping the DB

I only needed to dump one database from an RDS instance that hosts multiple databases. The command I used is the one you probably already know:

mysqldump -h <rds-endpoint> -u root -p --triggers --routines --events dbname > dbname_dump.sql

Easy right? Nothing new so far… let's try to restore this dump into another RDS instance.

Second step: Restoring the DB

Again, this is nothing you haven't seen before:

mysql -u root -p -h <rds-endpoint> dbname < dbname_dump.sql

Smooth as silk right? Not really, you may get:

ERROR: ASCII '\0' appeared in the statement, but this is not allowed unless option --binary-mode is enabled and mysql is run in non-interactive mode. Set --binary-mode to 1 if ASCII '\0' is expected.

Well, it seems that this file is not what MySQL expected. Let's check the file:

$ file dbname_dump.sql

And this is what you (may) get:

db.sql: Little-endian UTF-16 Unicode text, with very long lines, with CRLF line terminators

In my case, this happened because I use Windows 10 and PowerShell's default encoding for redirected output is UTF-16, but you can get a similar error whenever your dump is not UTF-8 encoded, so it's handy to know.
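One way to avoid the problem at the source is to let mysqldump write the file itself instead of redirecting its output through PowerShell, using its --result-file option (documented precisely for Windows line-ending/encoding issues). As everywhere in this article, `<rds-endpoint>` is a placeholder, not a real hostname:

```shell
# Let mysqldump write the file directly, bypassing PowerShell's
# redirection (which re-encodes the stream as UTF-16).
# <rds-endpoint> is a placeholder for your real RDS hostname.
mysqldump -h <rds-endpoint> -u root -p --triggers --routines --events \
  --result-file=dbname_dump.sql dbname
```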

To get rid of this annoying error, you can convert your existing dump file to UTF-8 using the iconv Linux command:

iconv -f utf-16 -t utf-8 dbname_dump.sql > dbname_dump_utf8.sql
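If you want to sanity-check the conversion before running it against a large dump, you can reproduce the whole situation on a tiny sample file (the file names here are made up for illustration):

```shell
# Simulate a UTF-16 file with a BOM, as PowerShell redirection produces.
printf 'CREATE TABLE t (id INT);\n' | iconv -f utf-8 -t utf-16 > sample_utf16.sql
file sample_utf16.sql   # should report UTF-16 Unicode text

# Convert it back to UTF-8, exactly as for the real dump.
iconv -f utf-16 -t utf-8 sample_utf16.sql > sample_utf8.sql
cat sample_utf8.sql
```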

Well, we solved our encoding issue, let's try again to restore our dump:

mysql -u root -p -h <rds-endpoint> dbname < dbname_dump_utf8.sql

This went even worse than the previous one:

mysqldump: Error: 'Access denied; you need (at least one of) the PROCESS privilege(s) for this operation' when trying to dump tablespaces

After a quick investigation I found out what was causing the issue: stored programs run with the credentials and privileges of their DEFINER, and specifying a user other than the logged-in one as DEFINER requires the SUPER privilege, which RDS does not grant by default because it would allow arbitrary privilege escalation. (The PROCESS-privilege message itself comes from newer mysqldump clients, which query tablespace information during a dump; re-dumping with the --no-tablespaces option avoids that part.)

Good to know, but since we still need to restore our dump, we have to find a workaround here.

The fix is just some old-school Linux sed, since we need to remove all the DEFINER occurrences from our dump file:

sed 's/\sDEFINER=`[^`]*`@`[^`]*`//g' -i dbname_dump_utf8.sql
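To convince yourself that the sed expression really catches everything, you can try it on a small fragment first (the user and routine names below are made up):

```shell
# A fragment of a dump containing DEFINER clauses, in the two forms
# mysqldump typically emits them.
cat > sample_dump.sql <<'EOF'
/*!50013 DEFINER=`admin`@`%` SQL SECURITY DEFINER */
CREATE DEFINER=`admin`@`%` PROCEDURE `demo`() BEGIN END
EOF

# The same substitution as on the real dump file.
sed 's/\sDEFINER=`[^`]*`@`[^`]*`//g' -i sample_dump.sql
cat sample_dump.sql
```

After the substitution, no `DEFINER=` clause remains, while the rest of each statement is untouched.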

With this magic, you'll get a proper dump ready to be restored in RDS.

Third step: Bonus

What also happened to me while trying to restore a large SQL file (with very big INSERT queries) is:

ERROR 2006 (HY000) at line 1: MySQL server has gone away

Here the problem is that some of the queries in the SQL file are too large for the server to accept, so they can't be executed, and the client fails on the first error it encounters.

There are multiple ways to solve this issue, for example:

  1. Add the force option (-f) so mysql keeps going and executes the rest of the queries;
  2. Increase **max_allowed_packet** and **wait_timeout** on the server (on RDS, server settings are changed through a DB parameter group rather than a my.cnf file);
  3. Dump the database using the --skip-extended-insert option to break the large multi-row INSERTs into one INSERT per row, then import it again (this is actually the solution that worked for me);
  4. Try the --max-allowed-packet option of the mysql client.
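As a sketch, the two options I'd reach for first look like this (`<rds-endpoint>` is a placeholder and 512M is an illustrative value, not a recommendation):

```shell
# Raise the client-side packet limit for the import.
mysql --max-allowed-packet=512M -u root -p -h <rds-endpoint> dbname < dbname_dump_utf8.sql

# Or re-dump with one INSERT statement per row, so no single
# statement can exceed the server's max_allowed_packet.
mysqldump -h <rds-endpoint> -u root -p --skip-extended-insert \
  --triggers --routines --events dbname > dbname_dump.sql
```

Note that --skip-extended-insert makes the dump larger and the restore slower, since every row becomes its own statement, but it sidesteps the packet limit entirely.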

You can find more info about this in the official MySQL docs.

And there you have it. Thank you for reading.
