
I have a large database (~180GB of data + indexes; a couple of very tall but thin tables of ~200-400M rows, and two ~20M-row "wide" tables) and don't have the resources to simply throw money at the problem and buy tons of RAM. It's running on a dedicated box with 2x500GB 15k drives in RAID1 and 32GB of DDR2 memory. For some operations it's fine now, but since the indexes don't fit in memory, if the sections of interest to a particular query haven't been read recently, it can be slow as hell. Percona Server 5.5 + InnoDB. I've tuned the InnoDB settings. Are there any good hacks or tips?

Assessing your IO, memory, and CPU loads is the first step: knowing where the bottleneck is makes it much easier to suggest a strategy.
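As a minimal sketch of that kind of snapshot, assuming the third-party psutil package is available (on Linux you could just as well use iostat, vmstat, or top):

```python
# Quick load snapshot: CPU, memory, and disk IO counters.
# Assumes `pip install psutil`; tool choice is illustrative, not prescriptive.
import psutil

print("CPU %:", psutil.cpu_percent(interval=1))

mem = psutil.virtual_memory()
print("Memory used: %.1f GB of %.1f GB" % (mem.used / 1e9, mem.total / 1e9))

io = psutil.disk_io_counters()
print("Disk reads:", io.read_count, "writes:", io.write_count)
```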
Joins are often handy when you first create your DB because they pull your data back in a tidy, normalized result. As your data grows, those joins can become troublesome: you are joining multiple large tables (meaning index lookups and loops through all the pertinent indexes), and the whole query can be bound to a single thread. Sometimes breaking that joined query into multiple queries can be useful, since they can leverage separate threads and reduce the memory and the number of combined indexes referenced at any one instant between calls; a sketch of this approach follows below. I have not used Percona, however, so it may have its own functionality that mitigates these concerns.
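Here is a rough sketch of splitting one JOIN into two smaller queries that run on separate connections and are re-joined in application code. The package (mysql-connector-python), the table names (events, accounts), and the column names are assumptions for illustration only; adjust them to your schema.

```python
# Run two halves of a former JOIN in parallel, then combine in Python.
from concurrent.futures import ThreadPoolExecutor
import mysql.connector

# Hypothetical connection settings.
DB = dict(host="localhost", user="app", password="secret", database="prod")

def fetch(sql, params=()):
    # Each worker gets its own connection, so the two queries can run
    # concurrently instead of sharing a single MySQL thread.
    conn = mysql.connector.connect(**DB)
    try:
        cur = conn.cursor()
        cur.execute(sql, params)
        return cur.fetchall()
    finally:
        conn.close()

with ThreadPoolExecutor(max_workers=2) as pool:
    events_future = pool.submit(
        fetch,
        "SELECT account_id, metric FROM events WHERE created_at >= %s",
        ("2014-01-01",),
    )
    accounts_future = pool.submit(fetch, "SELECT id, name FROM accounts")

# Join the two result sets in application code instead of in MySQL.
names = {row[0]: row[1] for row in accounts_future.result()}
joined = [(names.get(acct), metric) for acct, metric in events_future.result()]
```

Whether this actually helps depends on where the bottleneck is; it trades MySQL-side join work for more round trips and some application memory, so it tends to pay off when the join itself is what pins a single thread or thrashes the buffer pool.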


Answered 11 years ago
