Last time I checked how TokuDB can be used as a drop-in replacement for InnoDB. First impressions were jolly good: way less disk space usage, and the TokuDB host can be part of the current replication cluster.
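On a TokuDB-enabled build (e.g. Percona Server with the plugin loaded), converting an existing table is a single ALTER - a sketch, with a hypothetical table name:

```sql
-- Check that the TokuDB engine is available
SHOW ENGINES;

-- Convert an existing InnoDB table in place (table name is hypothetical)
ALTER TABLE posts ENGINE=TokuDB;

-- Compare the on-disk footprint afterwards
SELECT table_name, engine,
       ROUND(data_length / 1024 / 1024) AS data_mb,
       ROUND(index_length / 1024 / 1024) AS index_mb
FROM information_schema.tables
WHERE table_schema = DATABASE();
```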
After TokuDB was announced as a new storage engine for MySQL, it made me very curious, but I hadn't tried it out until now.
MySQL replication is great and fairly reliable, but sometimes it can get messed up. The good news is that we can handle this.
When you have to drop a large database, you'll run into some problems, mainly replication lag. Now I'll show you how to avoid this.
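One common approach (a sketch, not necessarily the exact method from the post) is to drop the tables one by one with pauses instead of issuing a single huge DROP DATABASE, so the slaves can keep up; the schema name below is hypothetical:

```sql
-- Generate the individual DROP statements first:
SELECT CONCAT('DROP TABLE bigdb.', table_name, ';')
FROM information_schema.tables
WHERE table_schema = 'bigdb';

-- Then run them one at a time, waiting for slave lag to settle
-- between batches, instead of:
-- DROP DATABASE bigdb;   -- one massive, lag-inducing event

DROP TABLE bigdb.table_one;
DROP TABLE bigdb.table_two;
-- ...finally remove the now-empty schema:
DROP DATABASE bigdb;
```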
We love graphs.
In my last article I showed how to fix replication errors on slaves, but I made a mistake: my example wasn't a good one. After skipping the statement or inserting an empty transaction, the datasets differed because of a timestamp column with a CURRENT_TIMESTAMP default. Fixing the error solved…
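To illustrate the pitfall with a hypothetical table: a CURRENT_TIMESTAMP default is evaluated at execution time, so replaying a statement by hand on one host yields different data than the original write:

```sql
CREATE TABLE t (
  id      INT PRIMARY KEY,
  created TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

-- If this event is skipped on the slave and the row is re-inserted
-- manually later, the slave evaluates CURRENT_TIMESTAMP at that later
-- moment, so `created` ends up different from the master's value --
-- the error is gone, but the datasets have silently diverged.
INSERT INTO t (id) VALUES (1);
```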
Every MySQL DBA has to deal with the situation when there was an accidental write on one of the slaves. Switching replication to GTID changes the way we should deal with that problem.
We are in the middle of switching from the good old logfile & log position based replication to GTID-based replication.
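On MySQL 5.6+, once gtid_mode is ON, pointing a slave at its master no longer needs file/position coordinates - a sketch, with a hypothetical host name:

```sql
-- Old, binlog-position-based way:
CHANGE MASTER TO
  MASTER_HOST='db-master',
  MASTER_LOG_FILE='mysql-bin.000042',
  MASTER_LOG_POS=107;

-- GTID-based way: the slave negotiates its own position
CHANGE MASTER TO
  MASTER_HOST='db-master',
  MASTER_AUTO_POSITION=1;
START SLAVE;
```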
Yesterday I put some new features into Ansible's mysql_replication module, because we are planning to move from the good old binlog position based replication to GTID-based replication, and the module wasn't aware of it.
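With GTID support in the module, a playbook task can skip the log file and position parameters entirely - a sketch assuming the module's master_auto_position option; the host name is hypothetical:

```yaml
- name: Point the slave at the master using GTID auto-positioning
  mysql_replication:
    mode: changemaster
    master_host: db-master
    master_auto_position: yes

- name: Start replication
  mysql_replication:
    mode: startslave
```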
Upgrading MySQL to a newer version is a very simple thing: just replace the binaries, and run:
Creating backups and compressing files are always time-consuming tasks; for example, creating the daily backup of the Kinja-related databases took about 6.5 hours every day. The first part, creating the backup itself, is about a 40-minute task - that's the runtime of innobackupex, and applying the…
In the recent past we had a small hiccup in service, and now I will tell you why it happened. It was caused by our MySQL replication configuration and a small mistake I had made long before the hiccup. The lesson was learned, and now I'll try to show the caveats of MySQL replication configuration.
As we have said so many times, we love to use ♥Ansible♥.
I don't know how common this problem is, but it is good to know from time to time how much storage space each table needs at a given moment. For example, you can catch a piece of software running amok and writing your database full. Or - as you will soon see - you can catch some code that doesn't work as…
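A query along these lines (a sketch; the original script may differ) can be sampled periodically to track per-table growth:

```sql
SELECT table_schema, table_name,
       ROUND((data_length + index_length) / 1024 / 1024, 2) AS total_mb,
       NOW() AS sampled_at
FROM information_schema.tables
WHERE table_schema NOT IN
      ('mysql', 'information_schema', 'performance_schema')
ORDER BY total_mb DESC
LIMIT 20;
```

Storing the output of each run with its timestamp lets you diff consecutive samples and spot the table that suddenly starts growing.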
Currently at Kinja we are in the middle of a big architectural change on our database servers, and I have run into a problem regarding this. Sometimes I have to check the current connections on the database servers to see which schemas are in use, which servers are using a given DB server, or even which users are connected to a database…
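The processlist table in information_schema gives a quick aggregated view - a sketch of the kind of query that answers those questions:

```sql
-- Which users are connected, from which hosts, against which schemas
SELECT user,
       SUBSTRING_INDEX(host, ':', 1) AS client_host,
       db,
       COUNT(*) AS connections
FROM information_schema.processlist
GROUP BY user, client_host, db
ORDER BY connections DESC;
```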
Yesterday Dominis mentioned shell-foo, and one cool thing came to my mind that one of my ex-colleagues showed me a few years ago. (Hi Pali!)
On HP servers, the iLO remote management console can be a real pain in the ass sometimes. It works fine when it's needed, but the default settings can give you hard days if you are a UNIX admin like us. I can say that when it's accessed through SSH it is fine, but when you need to use it from a browser, it can make your life…
We maintain a lot of servers at Kinja, so we have to use orchestration software to perform tasks across many servers. We use Ansible, because it is cool.
First, I have to tell you that "fragmentation" is not the best word I could use, but it was the closest to what I wanted to say. The basis of the fragmentation checker script is Peter Zaitsev's 2008 article about information_schema; I used that query to get the results I needed.
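The information_schema query behind such a check is likely along these lines (a sketch based on the data_free column, not necessarily the exact query from the article):

```sql
SELECT table_schema, table_name,
       ROUND(data_free / 1024 / 1024, 2) AS free_mb,
       ROUND(data_free / (data_length + index_length + data_free) * 100, 1)
         AS fragmented_pct
FROM information_schema.tables
WHERE data_free > 0
  AND table_schema NOT IN ('information_schema', 'mysql')
ORDER BY free_mb DESC;
```

Tables with a high fragmented_pct are candidates for OPTIMIZE TABLE (or a null ALTER) to reclaim the unused space.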