We've just finished moving every message in every user's message center — all 6 billion of them — to a brand new set of database servers, and we managed to do it without anyone noticing. We didn't find much published material about the best way to do this, so we wanted to share a few details with you. It does get a little bit technical though so I won't be offended if you skip over some parts!

The data that powers message center was stored on one of our main sharded MySQL database clusters. That same cluster powers a lot of different features of the site, and we recognised that a large share of its traffic was caused by this relatively simple but large set of data.

To free up capacity on that cluster, and help keep the site quick as we continue to grow, we decided to move the message center data to its own servers.

Some fun figures:
  • ~6 billion rows of data in 13 tables across 16 shards
  • Average 5k, daily peak of 10k queries per second for this data across the cluster
  • Average 100k, daily peak of over 150k rows read per second
  • Average 500, daily peak of over 3k rows written per second

So we had a pretty large set of data, and the high traffic to it meant we'd have to be very careful about how we copied it over without losing any data, taking the message center down for a few days, or otherwise affecting users.

To add a bit of fun to the project, the old tables were still using the MyISAM storage engine, as they've been around since before InnoDB's more recent rise to dominance. We decided it would be a good time to switch to InnoDB. If this doesn't mean much to you, it just means we decided to switch to a different type of database storage which has become a much better choice than it was a few years ago when we first designed message center.

How Much Hardware?

We started out not totally sure of how much hardware we would need because it was quite hard to get meaningful measurements of our current database server load when it was only a part of the traffic on the old cluster.

So to enable us to experiment with and prove the new setup, we developed an extension for our PHP database API that let us duplicate query traffic from the current tables to the new cluster. We made it possible to control the percentage of read, write and delete queries mirrored, individually for each type, through our distributed config system, and ensured that failures in the new cluster would not impact the real traffic.
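
To give a flavour of the idea, here is a minimal sketch of the mirroring logic. This is not our real code: the Config, Db and Log helpers (and the config key names) are made-up stand-ins for our internal PHP database API.

<?php
// Sketch only: duplicate a query onto the new cluster for a configurable
// percentage of traffic. Config::get(), Db::queryOnCluster() and Log::warn()
// are hypothetical stand-ins.
function mirrorQuery($sql, $type)
{
    // $type is 'read', 'write' or 'delete'; each has its own mirror percentage
    // that we can change live through the distributed config system.
    $percent = (int) Config::get("messages.mirror.{$type}_percent");
    if (mt_rand(1, 100) > $percent) {
        return; // only mirror the configured fraction of this query type
    }

    try {
        Db::queryOnCluster('messages_new', $sql);
    } catch (Exception $e) {
        // A failure on the new cluster must never affect the real request.
        Log::warn('mirror query failed: ' . $e->getMessage());
    }
}

The important property is that the mirrored query is fire-and-forget: the new cluster sees realistic load, but a failure there can never slow down or break the request the user is actually waiting on.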

After some experimentation we found a setup which worked well and gave us much more capacity to grow.

The new cluster has 8 servers as masters, replicating to 8 identical machines as a hot-backup.

Each machine has these specs:
  • 12 Cores
  • 96GB RAM
  • 200GB Intel SSDs

Since our existing cluster is sharded over 16 masters, we kept the data split into 16 logical shards and put 2 on each master machine in separate databases.
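
For the curious, the layout works out roughly like the sketch below. The hostnames and database names here are made up, and I'm assuming for illustration that the folder id is the shard key.

<?php
// Illustrative only: mapping the 16 logical shards onto the 8 new masters,
// two shards per machine, each shard in its own database.
function messageShardLocation($folderId)
{
    $shard    = $folderId % 16;                    // same 16-way split as the old cluster
    $host     = 'msgdb' . (int) floor($shard / 2); // two logical shards per physical master
    $database = 'messages_shard_' . $shard;        // separate database per shard
    return array('host' => $host, 'database' => $database);
}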

Syncing the data

Satisfied with the hardware setup, we needed to find a way to synchronise all the data. The initial import step was made more difficult by the move to InnoDB - it took several days to convert the most recent backup to InnoDB before the new cluster could start receiving new write traffic.

This left us with a gap in the data between the old and new clusters and we needed to find a way to get them back in sync without impacting the performance of the live cluster.

The nature of the data meant that even if we could efficiently copy only rows inserted within the missing period, that would still leave the data inconsistent where deletes happened during that time. People would start to see notifications they had already dismissed popping back up again.

So to be thorough, we needed to crawl all the data and ensure its consistency. Rather than copying everything again, we looked into ways to compare consistency between the live data and the new cluster without impacting users.


We couldn't find any documented technologies or techniques used by others that would work for us without taking the production cluster off-line, so we came up with a solution that, although not perfect, seems to have proven useful in this case.

It's all based on checksums.

A checksum is a number obtained by doing some calculations on the numbers that make up any piece of data on a computer. In general, if the checksums for two pieces of data are the same, they are very, very likely to be identical; if the checksums are different, the two pieces of data are guaranteed to be different. The important thing is that using a checksum we can compare many thousands of rows with just one query to each cluster and very little data transferred.

We came up with a query that could calculate the CRC32 checksum of all the primary keys in a batch of in-boxes surprisingly efficiently. In this dataset, rows are never updated - just inserted or deleted - so primary keys are the only data points needed to ensure consistency.

The query looks something like this:

SELECT CRC32(SUM(row_cs)) as cs
  FROM (
      SELECT CRC32(CONCAT(folderid, type, itemid)) as row_cs
        FROM table
       WHERE folderid IN (1,2,3,4,...)
    ORDER BY folderid, type, itemid
  ) as tmp

[edit] rehael pointed out correctly in the comments below that the ORDER BY is unnecessary here. SUM is commutative so it doesn't actually matter if natural order is inconsistent in the inner query. This must have slipped in from an earlier attempt to accumulate rows without the subquery. In this case the sort is on the primary key so I guess it made little difference, but a good spot. I've left it in for posterity.[/edit]

[edit2] Another interesting potential flaw here is that CONCAT could produce false positives on some data sets. A more correct solution would be to use CONCAT_WS() with a delimiter char such that a row like (123, 456, 78) would not falsely checksum equal to (12, 345, 678). I'll credit pt-table-sync for this realisation (see bottom of article).[/edit2]
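
For completeness, the safer variant would look something like this. It is a sketch only, not the query we actually ran, and $folderIds is a hypothetical array of the folder ids in the batch.

<?php
// Sketch of the checksum query using CONCAT_WS() with a delimiter, so that
// a row like (123, 456, 78) can no longer checksum equal to (12, 345, 678).
$in  = implode(',', array_map('intval', $folderIds));
$sql = "SELECT CRC32(SUM(row_cs)) AS cs
          FROM (
              SELECT CRC32(CONCAT_WS('#', folderid, type, itemid)) AS row_cs
                FROM table
               WHERE folderid IN ($in)
          ) AS tmp";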

Using MySQL's EXPLAIN showed that the checksum query didn't create a temporary table and could select and sort efficiently using the primary key. This meant it was more likely to run quickly and not cause problems for real users of the site. In testing, we found it ran in under 200ms even for the biggest batches of folders we could find.

Despite the checksum being apparently quick, we still couldn't actually copy the out-of-sync data from big in-boxes from our master servers without locking extremely high-traffic tables, so we ran the checksums and copied the data from our hot backups.

To balance the amount of data copied against the number of checksums needed, we first checksummed an entire batch of 50 in-boxes at a time. If there was a mismatch, we checksummed each in-box in the batch in turn; for any that didn't match, we deleted all of its rows from the new cluster and copied all of its rows over from the old. There are more sophisticated options we could have used at this level to optimise for big in-boxes where not a lot changed, but none that were without downsides.
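
Roughly speaking, each batch went through something like the sketch below. The helpers clusterChecksum() (which runs the CRC32 query above against one cluster) and copyFolder() (which deletes an in-box's rows on the new cluster and re-copies them from the old) are hypothetical stand-ins, not our actual job code.

<?php
// Rough sketch of one unit of the sync job: check a whole batch of in-boxes
// with a single checksum per cluster, and only drill down to individual
// in-boxes (and copy data) when the batch checksums disagree.
function syncFolderBatch(array $folderIds)
{
    // Cheap case: one checksum per cluster covers the whole batch of ~50 in-boxes.
    if (clusterChecksum('old', $folderIds) === clusterChecksum('new', $folderIds)) {
        return;
    }

    // Something in the batch differs: narrow it down one in-box at a time.
    foreach ($folderIds as $folderId) {
        if (clusterChecksum('old', array($folderId)) !== clusterChecksum('new', array($folderId))) {
            copyFolder($folderId); // delete on new, re-copy every row from old
        }
    }
}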

In reality the checksum query turned out not to be super-fast all the time, but we found more than 99.5% of the few million we ran in the end executed in under 200ms and over 99.8% in less than 1 second. In general checksum query load was minute compared to the load of the data-copying queries for big in-boxes.

You may not want to try this at home

There are a few flaws in this technique.

Firstly, the fact that we had to sync from replication slaves meant that any replication lag between master and slave caused inconsistency in the copied data. Once there were a few slow copy queries running, the replication lag would get worse. We found generally it kept up very well but we couldn't guarantee 100% consistency in one pass.

Secondly, even ignoring the replication lag, checksums were not atomic between the clusters - a user may have deleted or received a message in between the time we got the checksum from one cluster and from the other.

So we knew a single pass through the data would not be sufficient. However, thanks to the checksum trick, we could continue running the sync job through the dataset and each time a few orders of magnitude less data would be found out of sync and need to be copied. Each pass also got substantially quicker as less data needed to be transferred.

After just 3 passes (several days for the first, a few hours for subsequent ones) we measured 99.9995% consistency. We called the job good and switched users over to read and write primarily from the new cluster. We're still writing to the old one for a week or so until we are totally sure of the new cluster.

Obviously this sort of slightly fuzzy consistency would not be suitable for other data that requires absolute integrity (like customers' orders). For us, it's acceptable because:
  • a significant proportion of the < 0.0005% of in-boxes still measured as out of sync will actually be fine - just reported due to the measurement error noted above
  • statistically it is likely that any in-boxes genuinely out of sync are very large ones (100k+ notifications) and that their owners never look through and curate or remove them
  • in the highly unlikely case that a user did notice a discrepancy, it is more likely to be a notice they deleted re-appearing (they can fix that in one click)
  • these are not personal messages or unique content, just notifications of activity elsewhere on site

The useful take-aways

Despite not being a perfect solution to all live migration problems, I hope this has been somewhat interesting and may even help others with similar tasks. The exact plan we used is pretty specific to our situation but I think there are a couple of more general points to take away from this:
  • Migrating large amounts of data from one MySQL server cluster to another without affecting operation or losing writes is hard and apparently no one has come up with a really good solution yet
  • Under different circumstances, it may be possible to be more robust than this although probably at the expense of taking longer, requiring more resources, or making the service unavailable for some time
  • Checksum queries, although not a panacea, are a potentially useful tool that are not widely discussed for on-line MySQL consistency checking

Big thanks to chris and randomduck for their hard work on this upgrade.

[edit] A number of people pointed us at pt-table-sync from Percona's Toolkit. That script actually ends up doing something almost exactly like what we did here (with different checksumming techniques). I didn't know of it and somehow didn't find it when looking for similar solutions. It looks like a much more robust solution in general although would not (easily) have been able to take advantage of our distributed job cluster which enabled our migration to run sync queries on many folder chunks in parallel, so it probably would have taken much longer.[/edit]

Comments
I-Do-Care Featured By Owner Apr 14, 2012
Who says tech stuff isn't an art form! An elegant solution if ever I saw one. Thanks for sharing.
Sonamyperv Featured By Owner Apr 11, 2012  Professional General Artist
96GB Of RAM?!?!?! OMG The most Ram My Cluster server has is 78 :jawdrop:
JosephTimbury Featured By Owner Mar 1, 2012  Hobbyist Photographer
I'm glad I studied website development for a year, so it all makes sense! I have to give a huge applause for all that hard work and in a timely fashion. Well done! :dance:
Vocable Featured By Owner Feb 25, 2012   Writer
Great job, guys! I actually kind of just nodded along for quite a bit of the tech talk, but it all sounds very impressive and what you've accomplished is amazing.

Nice explanation and article.
Garyantart Featured By Owner Feb 23, 2012
You may want to have a look at [link] that solves live migrations, with consistency.
banks Featured By Owner Feb 24, 2012
I'm going to assume that you are not working for chronicdb and posted that link as a genuine suggestion ;)

I am pretty sceptical that their services would scale to meet our needs and still be able to maintain perfect atomic, consistent database migrations whilst keeping all live queries in the < 4ms range.

If that was possible, it would be very interesting technology, but even then a separately hosted service would not work for us.

In fact the very concept of a versioned database would not work with this data set. We churn through the entire data set within a year at most, meaning that after 6 months the data set would be > 9 billion rows and after a year > 12 billion -- and that's ignoring growth from increasing usage.

For other situations it's an interesting approach though.

I didn't see any technical articles or open-source sharing of their migration code; perhaps if you have more info on that we could all learn from, you could share it with us?
iLantiis Featured By Owner Feb 23, 2012  Hobbyist Digital Artist
I work in IT, and we have never handled anything this big. Bravo, especially since none of us noticed! Interesting way to do it too. I can honestly say I don't understand it all, but I am quite impressed and absolutely inspired to learn even more in my field now!
rehael Featured By Owner Feb 23, 2012  Hobbyist Photographer
Interesting read. But I have two points:

:pointr: Why the ORDER BY for an aggregate query? The effect of the SORT operation is lost — did the optimizer cut it (good) or was it executed (bad optimizer, bad, no cookie)?

:pointr: Really, no one came with a solution? Usually (read: three big migrations) I've seen it done the following way: up NEW box, split INSERTs to NEW, DELETE AS INSERT INTO table_delete on NEW, make sure it works, snapshot OLD, load snapshot on NEW (live traffic still flowing), turn on DELETE on NEW and disable DELETE AS INSERT on NEW, execute DELETE FROM table, table_delete WHERE =, switch traffic to NEW, utilize OLD. DELETE AS INSERT can be query rewrite, trigger or something like that. I've also seen transaction log syncing (for MyISAM to InnoDB wouldn't work), and convoluted variations of the above scenario. YMMV. ;)
banks Featured By Owner Feb 23, 2012
"Why the ORDER BY for an aggregate query"

Because we are comparing checksums of the rows one by one, if the rows are not in deterministic order, the checksums will be different even if the rows are the same. We can't guarantee order without the ORDER BY, especially between different DB engines.

"Really, no one came with a solution?"

Actually I'm sure plenty of people came up with solutions generally but we had a specific problem of doing the migration under relatively heavy write load and without being able to degrade performance on LIVE cluster at all.

For sure there are other ways, this was just one we found.

Your technique sounds interesting and could work in many cases. In ours, we would have had to have the full write/delete traffic going to the new cluster during the incredibly expensive process of converting to InnoDB which would have made it very likely the new cluster would have errors on queries that would then be missing from the final set on new one (and a sync similar to what we ended up doing would be needed again).

It's a good question though - we had some interesting limitations here and there are certainly alternative solutions.

Thanks for reading!
banks Featured By Owner Feb 23, 2012
Wait SUM() is commutative so my argument does not hold :) Well spotted. It is likely that the order by can be dropped. Either the optimiser was clever and did this, or it worked well despite the extra sort.

In this case we were always sorting on the primary key (and only primary key columns, in the right order), so it was not an issue.
rehael Featured By Owner Feb 23, 2012  Hobbyist Photographer
First, thanks for the answers. I guess we like our optimisers to be clever. And it's good you didn't (I hope) hit any collisions — CRC32 for 6 billion rows is risky business (p(collision)=39.6% for 6b. in short scale) — but I guess you compared these 16 shards and 13 tables separately, not as one 6b. dataset. :D

I feel I need to add some more thoughts about the migration. Without going into too much detail ($DEITY bless NDAs) — recently we finished a rather large migration project: several years of worldwide market data (raw, not aggregated dimensions), changing DB engines, instance placement and even a datacenter switch along the way. All of it with a live system, without degrading performance for end users. The full load took about a week («Monitoring: saving lives, assuring sanity» :giggle:). The scenario I gave up there is bulletproof, every single time I see it used. Split, Snap, Load, Housekeep, Switch, Done. It's true it puts the NEW under much stress, but only for the initial load of the snapshot, and it's a somewhat offline process without an impact on the LIVE one.

I don't get why „the incredibly expensive process of converting to InnoDB which would have made it very likely the new cluster would have errors on queries that would then be missing from the final set on new one” (emphasis mine) — could you elaborate on this one a little more?

Anyway — I really love reading the #dt technical journals — keep 'em coming please. :XD: And in future projects — maybe source knowledge from deviants here, I think you can find some smart souls in the community.
banks Featured By Owner Feb 23, 2012
CRC32 collisions could come into play across the whole dataset, but remember we were only comparing the "same" relatively small (less than 150k rows) batch on each cluster at a time. It's very unlikely for a collision to occur in this case, especially given that a large folder almost certainly changed between the multiple passes we ran (and so should be picked up on a subsequent pass EVEN if it collided once). Also the birthday paradox doesn't come in since we are not comparing any batch with any other. But you are right -- it is important to consider in each situation.

To clarify my point about the InnoDB conversion greatly increasing the likelihood of degraded performance and dropped queries: as far as I understand (and I wasn't the person in charge of this aspect), the conversion from MyISAM to InnoDB took several days on each server at pretty much full disk IO. The new cluster would have been totally unable to keep up with our actual write traffic, which would have meant queries hanging and causing too many connection errors (which would be silently ignored so as not to impact real users). Basically the new cluster would not have enough disk IO to cope with both the data conversion and the real write traffic. Throttling the import would perhaps be a possibility, but it probably would then have taken a lot longer overall to complete the process.

Your solution seems very sound and robust in general though and would be much more widely applicable to other situations.

Thanks for sharing. If you have any thoughts on ways we can get technical deviants more involved, I'm sure we'd be interested to hear them!
rehael Featured By Owner Feb 23, 2012  Hobbyist Photographer
Ok, thanks for the clarification, got your point now. I haven't played with MySQL for a looong time — currently only Oracle and PGSQL (and some esoteric ones) are around me.

As for getting tech–savvy deviants' attention — advertising the #dt globally from time to time could help. And then — just post a journal (one a month?) with a problem to solve, and wait for comments. It should work, don't expect miracles tho'. ;)

:salute: :ahoy:
tleach0608 Featured By Owner Feb 23, 2012  Hobbyist Photographer
Though I never handled the back end of things at my old job in IT, I nonetheless could at least follow what you said and am very very impressed. Great job, everyone, the hard work was very much worth it and paid off! You guys rock!
raquelvonkaminaru Featured By Owner Feb 23, 2012  Hobbyist Photographer

Now THAT'S impressive!!! ;)
namenotrequired Featured By Owner Feb 22, 2012  Student Interface Designer
I read it all :D Awesome job :clap:
Branchewski Featured By Owner Feb 22, 2012  Hobbyist Photographer
Amazing, just amazing :worship:
TimberClipse Featured By Owner Feb 22, 2012  Professional Filmographer
My 17,000 notes could not possibly have helped with that :lol:

Anyway great job guys! But the truth is...

I noticed

I just didn't say anything :eyes:

Sublime-Feline Featured By Owner Feb 22, 2012  Hobbyist Digital Artist
You all really are SHEER GENIUS!!! And thanks for explaining this to all of us! :heart:
endosage Featured By Owner Feb 22, 2012  Professional Interface Designer
Every time one of these dT blogs is posted, I know it's going to be entertaining. I love hearing all the little details behind what goes on in dT and what you guys have to deal with –– always interesting and incredibly informative. Thanks for the great read, Paul! (and thanks for making nice cozy homes for our data to live in!)
deviant-garde Featured By Owner Feb 22, 2012
I doubt I'll have any projects any time soon dealing with data under this much activity, but this is still a really interesting article. :nod:
rotane Featured By Owner Feb 22, 2012  Professional
You guys are awesome! :worship:
Ikue Featured By Owner Feb 22, 2012  Hobbyist Digital Artist
TBBishiXO Featured By Owner Feb 22, 2012  Hobbyist General Artist
Love your signature tag thing... ^^
Ikue Featured By Owner Feb 23, 2012  Hobbyist Digital Artist
:D Thanks! Would you believe I've had it since I first joined the site 8 years ago? :giggle:
TBBishiXO Featured By Owner Feb 23, 2012  Hobbyist General Artist
WOAH!!!!!!!!! that's amazing!!!
PuRpLeAcE Featured By Owner Feb 22, 2012  Hobbyist General Artist
great job guys :#1:
i will definitely be using this method. thanks for explaining :)
Photosbykev Featured By Owner Feb 22, 2012  Hobbyist Photographer
I'm surprised dA didn't run out of pixels writing the new dB
becomm Featured By Owner Feb 22, 2012
Hardcore stuff :)
Thanks for the interesting read and long live dA !!!
Mathness Featured By Owner Feb 22, 2012
I think there is a strange bug somewhere, for some reason I got a ton of Santa's correspondences with women, uhm, err, "admirers". :slow:

Great job on doing this without users noticing. :clap:
HaruTotetsu Featured By Owner Feb 22, 2012  Hobbyist Writer
deviantART vs Other Art sites...

deviantART 1
Other Art Sites 0
lost-angle Featured By Owner Feb 22, 2012  Hobbyist General Artist
banks, this was a wonderful article, great read, great explanation, great wording!
banks Featured By Owner Feb 22, 2012
Thanks, it's hard to make very technical stuff readable so I appreciate your kind words :)
lost-angle Featured By Owner Feb 22, 2012  Hobbyist General Artist
I appreciate the difficulty, I've had to do it myself in the past ><

MidnightExigent Featured By Owner Feb 22, 2012  Hobbyist Digital Artist
Great work, guys! This was great to read. :D
exarobibliologist Featured By Owner Feb 22, 2012  Professional Photographer
Very impressive! I got all the technical bits. That checksum calculation was very impressive!
fritsenator Featured By Owner Feb 22, 2012  Professional Interface Designer
Very impressive. Keep up the amazing work, guys! :#1:
andreiVV Featured By Owner Feb 22, 2012
nice.. and MFW for the machines
Elandria Featured By Owner Feb 22, 2012  Professional General Artist
Magic and rainbows. Awesome stuff! Well done DT!!
Oughter Featured By Owner Feb 22, 2012  Hobbyist General Artist
not gonna lie, i skimmed. but amazing nonetheless.
inazar Featured By Owner Feb 22, 2012
This one time, I had to type a Z-modem transfer program into a DOS machine as a hex dump; I ended up with the same method of checking integrity as $banks, which was to CRC32 each chunk of the program.

It's surprising how useful checksumming is, and in how many contexts it crops up.
20after4 Featured By Owner Feb 27, 2012  Hobbyist Photographer
After the long walk, three miles in the snow, uphill both ways, just to get to the keyboard...
reederda Featured By Owner Feb 22, 2012  Hobbyist General Artist
Well done. No problems whatsoever.
SparkLum Featured By Owner Feb 22, 2012
And the site DIDN'T break during that. That's a very impressive feat indeed