First:
Formatting an array as RAID 5 is a very big mistake when setting up a MySQL database server.
In case you missed that:
Unless you are running mission-critical banking transactions that cannot under any circumstances be lost, or else a whole bunch of people are going to die and RAID 5 is the only solution that works, do not use RAID 5 for any database server where you wish to retain a modicum of performance in the long term.
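The reason, briefly, is the RAID 5 small-write penalty: every random write must read the old data block and old parity, then write the new data and new parity, so each logical write costs four disk I/Os, versus two on RAID 10 (one per side of the mirror). A back-of-the-envelope sketch of what that does to write throughput (the drive counts and per-drive IOPS here are illustrative assumptions, not measurements from our boxes):

```python
# Rough effective random-write IOPS for RAID 5 vs RAID 10.
# Assumptions (illustrative only): 7 active spindles in the RAID 5 set,
# 8 in the RAID 10 set, ~150 IOPS per drive. Real numbers depend heavily
# on controller cache, stripe size, and workload.

def effective_write_iops(drives, iops_per_drive, penalty):
    """Total raw IOPS divided by the per-write I/O penalty."""
    return drives * iops_per_drive / penalty

raid5  = effective_write_iops(7, 150, 4)  # read data + read parity + write data + write parity
raid10 = effective_write_iops(8, 150, 2)  # one write to each half of the mirror

print(f"RAID 5  random-write IOPS: {raid5:.0f}")
print(f"RAID 10 random-write IOPS: {raid10:.0f}")
```

Roughly a 2x gap even before the controller starts doing read-modify-write on partial stripes, which is exactly the pattern a write-heavy database produces.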
Really. I work on a, shall we say, very high-transaction marketing tracking service which processes over 242 million transactions a month. And when I say “transactions”, I mean hits. And while the hits are not, say, as intense as downloading an image file, they add up.
Some of it is discarded traffic from inactive sites, or refresh pages, but most of the data ends up being pumped into the database, constantly.
I have considered myself somewhat of a MySQL aficionado up until recently when I actually bought High Performance MySQL by Schwartz, Zaitsev, Tkachenko, Zawodny, et al. which is worth every penny of the cover cost, for sure. The section on key buffers in MyISAM is worth its weight in gold, and other sections reinforced a lot of stuff which I kind of knew but never really followed through on, and a ton of stuff which I never knew, and you really need to be a developer of MySQL to understand well, or be a full time DBA of MySQL, which I am afraid to say, I am becoming more and more.
In February 2008 we upgraded our server farm and purchased two honkin’ ProLiant DL380 G5 boxes loaded with memory and disk. Our system administration consultant, whom I’ll lovingly call “Jark”, set them up with RAID 5, seven drives to the array plus a hot spare, giving over 1.6 terabytes of storage, a vast increase over our prior 300 gigabytes. Fine and dandy. Everything ran smoothly for the first ten months.
Fast forward to December, when our client who receives the most traffic (10 million page views per month) starts to slow everyone down. For a period of about three days, our data (which is real-time, ahem) was behind by six to eight hours during the day, and only caught up to about four hours behind in the low periods in the morning.
I optimized, I profiled, I did everything I could to try and remedy the situation.
There’s a handy dandy little tool I have which shows how long it takes to pump a minute’s worth of data into the database. On a good day, it takes about 10 seconds. The week in question, it was taking 60 to 100 seconds. Those rocket scientists out there can see that this was a losing proposition.
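The tool is nothing fancy. A minimal sketch of the idea (the names and structure here are mine, not the actual production code) is just a stopwatch around each minute's import batch:

```python
import time

def timed_import(batch, import_fn):
    """Run import_fn over one minute's worth of rows and report wall-clock
    seconds. The key invariant: if importing a minute of data takes more
    than 60 seconds, the pipeline is falling behind in real time."""
    start = time.monotonic()
    import_fn(batch)
    elapsed = time.monotonic() - start
    status = "keeping up" if elapsed < 60 else "FALLING BEHIND"
    print(f"{len(batch)} rows imported in {elapsed:.1f}s ({status})")
    return elapsed
```

At 60 to 100 seconds per minute of data, the backlog can only grow; at 10 (or, later, 2 to 3) seconds, it drains.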
And, to cap it all off I read in High Performance MySQL how businesses which fail to scale tend to fail. Gulp.
Oddly, the traffic patterns hadn’t changed significantly from the prior month. True, December is a banner month for most merchants, and our customers are no different, but I was starting to sweat. Jark (the administration consultant who set up the boxes originally) offered his high hourly rate to diagnose and check out the situation, which I begrudgingly paid … to no avail.
In the meantime, I asked our development team if there was some software solution for the issue. They have, in the works, a modified data import engine which eliminates the bottleneck for slow sites. However, this code was two months out, sayeth the lead developer. Could we ship it in the next few days? Not without a SNAFU afterwards because we haven’t tested it all. Ok.
Finally, after a few no-op downtimes with Jark where he did nothing, apparently, I asked him about the RAID configuration, having remembered a post on a blog (or MySQL.com) somewhere about which RAID to choose. I found a humorous (or not-so-humorous, depending on your mood) organization called BAARF, which I forwarded to him. His suggestion: “Maybe we should’ve done RAID 1.”
It was in reading this email that I believe sparks were flying from my back teeth.
Needless to say, Jark is no longer with us.
After painstakingly backing up the master database and reformatting and reinstalling the slave system with RAID 10 (1+0), we were back up on the newly formatted drives. When I pressed the magic button which starts up the database feeding, I watched the counter.
2 seconds. 3 seconds. 2 seconds. To import each minute.
On occasion, when things work too fast, I worry. It couldn’t be that fast. But the speed improvement was nothing short of miraculous.
The eight-hour delay was caught up in less than an hour.