The difference between the RR4312S and RR4312X is the 10G ports: the former has SFP+ optical ports, the latter 10GBase-T RJ45 copper ports.
First, the outer packaging: a huge NETGEAR cardboard box. Since the RR3312 and RR4312 share the same chassis and dimensions, the box is labeled with both model numbers.
Opening the box reveals the accessories: rack-mount rails, power cords, the front bezel, and a redundant power supply.
Taking the unit out and setting it on the desk: a standard 2U rack-mount device.
On top is the nameplate, which lists the drive-bay layout, the unit's serial number, the MAC addresses of all network ports, and other essential information.
Sliding back the top cover exposes the interior.
The hardware layout is very sensible. Heat is exhausted by three 8 cm high-speed fans, and the CPU has an air duct that guides the fans' airflow across the CPU heatsink and the heatsink of the adjacent 10G NIC chip.
Next to the large CPU heatsink are the memory slots, oriented perpendicular to the fans so that airflow passes over them. Two 8GB ECC DIMMs are installed, giving 16GB in a dual-channel configuration.
Beside the CPU is the 10GbE NIC, based on the Intel X710 chip.
On one side of the CPU air duct are three SFF-8087 connectors; all twelve front drive bays connect to the mainboard through them.
Turning to the rear of the unit: on the left are two redundant power supplies, each rated 550W at 80 Plus Platinum efficiency.
The rear panel has four Gigabit RJ45 LAN ports, two SFP+ 10GbE ports, two USB 3.0 ports, and two eSATA ports for attaching external drive enclosures.
For whole-chassis cooling, the top is covered with ventilation holes, so overheating is not a concern.
Back to the front of the machine: the twelve drive bays.
All of NETGEAR's rack-mount NAS drive trays are vibration-damped, so even with a full complement of drives you need not worry about vibration affecting the disks or the unit; your data is well protected.
Next, simply mount the drives in the trays and slide them into the NAS to complete the hardware installation.
After power-on, the LED status lights on the drive trays come on.
That concludes the unboxing portion for the RR4312S.
Most NAS performance tests use IOmeter driven by multiple high-performance servers, workstations, or desktops, with as many SSDs or SATA drives attached as possible (36, 60, or more). The numbers produced that way are usually out of reach at the ordinary-user level. That approach does have value: it extracts the "limit performance" of the entire storage solution, and vendors make clear it reflects a lab environment most customers cannot reproduce.

My tests here instead take a perspective closer to a typical user, covering both SATA and SSD. One goal is to gauge the RR4312S's I/O performance; another is to briefly compare SATA vs. SSD, different RAID levels, and different feature sets. I use FIO for the tests. It is not an industry-standard benchmark tool like IOmeter or NASPT, but it has one advantage: with drives of a similar class, the numbers you measure should come out close to mine, rather than my measuring 2 GB/s while you can only reach 200 MB/s.
Fio was written by Jens Axboe, maintainer of the Linux kernel block layer and many I/O schedulers, currently working at Facebook. FIO grew largely out of his day-to-day Linux I/O testing needs. The introduction below is taken from GitHub (https://github.com/axboe/fio):
Fio was originally written to save me the hassle of writing special test case programs when I wanted to test a specific workload, either for performance reasons or to find/reproduce a bug. The process of writing such a test app can be tiresome, especially if you have to do it often. Hence I needed a tool that would be able to simulate a given I/O workload without resorting to writing a tailored test case again and again.
A test work load is difficult to define, though. There can be any number of processes or threads involved, and they can each be using their own way of generating I/O. You could have someone dirtying large amounts of memory in an memory mapped file, or maybe several threads issuing reads using asynchronous I/O. fio needed to be flexible enough to simulate both of these cases, and many more.
Fio spawns a number of threads or processes doing a particular type of I/O action as specified by the user. fio takes a number of global parameters, each inherited by the thread unless otherwise parameters given to them overriding that setting is given. The typical use of fio is to write a job file matching the I/O load one wants to simulate.
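The global-parameter inheritance described above is easiest to see in a tiny job file. The file name and values below are hypothetical, for illustration only (not the jobs used in this review):

```shell
# Write a hypothetical two-job fio file: both jobs inherit bs/ioengine/
# direct/size from [global]; [rand-write] additionally overrides iodepth.
cat > /tmp/example.fio <<'EOF'
[global]
bs=4k
ioengine=libaio
direct=1
size=256M

[rand-read]
rw=randread

[rand-write]
rw=randwrite
iodepth=8
EOF
grep -c '^\[' /tmp/example.fio   # prints 3: one global plus two job sections
```

With fio installed, the file would be run as `fio /tmp/example.fio`.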
| Device | Model | Qty |
| --- | --- | --- |
| 2U rack-mount NAS | RR4312S | 1 |
| Switch | M4300-28G | 1 |
| SATA HDD | WD WD4000F9YZ | 10 |
| SSD | Intel SSD DC S3710 | 6 |
| SSD | Intel SSD 710 | 4 |
| 10G DAC cable | NETGEAR AXC761 | 1 |
RAID levels tested: RAID5, RAID50, RAID10
Test 1: dd run directly on the RR4312S
RAID configuration 1: RR4312, 8 × SATA, RAID10
root@nas-E7-4F-68:/data/test# dd if=/dev/zero of=test.img bs=1M count=10000
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 20.3509 s, 515 MB/s
RAID configuration 2: RR4312, 8 × SATA, RAID50
root@nas-E7-4F-68:/data/test# dd if=/dev/zero of=test.img bs=1M count=10000
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 14.7158 s, 713 MB/s
RAID configuration 3: RR4312S, 12 × SSD, RAID5
root@nas-E7-4F-68:/data/test# dd if=/dev/zero of=test2.img bs=1M count=20000
20000+0 records in
20000+0 records out
20971520000 bytes (21 GB, 20 GiB) copied, 11.8209 s, 1.8 GB/s
The Linux dd command is the crudest quick-and-dirty speed test there is. From the results above, writing a 20GB file to the SSD array reaches roughly 1.8 GB/s. As for the SATA tests, RAID50 came out faster than RAID10? This is plausible: RAID50 (like RAID5) performs quite well on large sequential writes.
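One caveat worth noting: dd from /dev/zero without a sync flag can report speeds inflated by data still sitting in the page cache. A more cautious variant looks like this (path and size are illustrative, not from the test above):

```shell
# conv=fdatasync makes dd flush to disk before reporting the rate, so the
# page cache cannot inflate the number; oflag=direct would bypass the
# cache entirely. Path and size here are illustrative only.
dd if=/dev/zero of=/tmp/ddtest.img bs=1M count=256 conv=fdatasync
rm -f /tmp/ddtest.img
```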
Backup source: RR4312, SATA RAID50
Backup target: RN526X, SSD RAID5
Just under 200 MB/s, which is most likely limited by the SATA array on the backup source.
Since the RR4312 has 16GB of RAM, the test files must be at least larger than memory to keep caching and similar effects from unduly skewing the results.
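The aggregate working set can be checked arithmetically against RAM. The job sizes are from the configurations used in this review; the comparison itself is my own back-of-envelope sketch:

```shell
# 16 GiB RAM on the RR4312S; the random jobs use 32 files of 512 MiB,
# the sequential jobs 8 files of 4 GiB.
ram_gib=16
rand_gib=$(( 32 * 512 / 1024 ))   # 32 jobs x 512M each
seq_gib=$(( 8 * 4 ))              # 8 jobs x 4G each
echo "random: ${rand_gib} GiB, sequential: ${seq_gib} GiB, RAM: ${ram_gib} GiB"
```

So the sequential jobs clearly exceed RAM, while the random jobs sit right at the 16 GiB boundary.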
The test job configurations are as follows:
Random read
[Random_Read]
rw=randread
bs=4k
direct=0
size=512M
numjobs=32
iodepth=16
ioengine=libaio
runtime=240
group_reporting=1
directory=/data/test
Random write
[Random_Write]
rw=randwrite
bs=4k
direct=0
size=512M
numjobs=32
iodepth=16
ioengine=libaio
runtime=240
group_reporting=1
directory=/data/test
Sequential read
[Seq_Read]
rw=read
bs=512K
size=4G
numjobs=8
iodepth=1
ioengine=libaio
runtime=240
group_reporting=1
directory=/data/test
Sequential write
[Seq_Write]
rw=write
bs=512K
size=4G
numjobs=8
iodepth=1
ioengine=libaio
runtime=240
group_reporting=1
directory=/data/test
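Each of these job files could equally be expressed as a one-shot fio command line, which can be handy for ad-hoc runs. Actually running it requires fio installed and a writable /data/test; the sketch below only assembles and prints the command:

```shell
# Equivalent command-line form of the Seq_Read job (sketch only; each
# job-file parameter maps to a --flag of the same name).
cmd="fio --name=Seq_Read --rw=read --bs=512k --size=4G --numjobs=8 \
--iodepth=1 --ioengine=libaio --runtime=240 --group_reporting \
--directory=/data/test"
echo "$cmd"
```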
FIO test results
The command-line output looks like the following. I will paste two examples here and summarize the rest in tables, otherwise the article would run far too long.
RR4312S, RAID5 (10 × SATA), 512K sequential read: around 950 MB/s
Seq_Read: (g=0): rw=read, bs=512K-512K/512K-512K/512K-512K, ioengine=libaio, iodepth=1
…
fio-2.1.11
Starting 8 processes
Seq_Read: Laying out IO file(s) (1 file(s) / 4096MB)
Seq_Read: Laying out IO file(s) (1 file(s) / 4096MB)
Seq_Read: Laying out IO file(s) (1 file(s) / 4096MB)
Seq_Read: Laying out IO file(s) (1 file(s) / 4096MB)
Seq_Read: Laying out IO file(s) (1 file(s) / 4096MB)
Seq_Read: Laying out IO file(s) (1 file(s) / 4096MB)
Seq_Read: Laying out IO file(s) (1 file(s) / 4096MB)
Seq_Read: Laying out IO file(s) (1 file(s) / 4096MB)
Seq_Read: (groupid=0, jobs=8): err= 0: pid=27325: Tue Aug  8 15:37:28 2017
  read : io=32768MB, bw=950174KB/s, iops=1855, runt= 35314msec
    slat (usec): min=44, max=510594, avg=4231.24, stdev=26369.82
    clat (usec): min=0, max=7, avg= 0.37, stdev= 0.53
     lat (usec): min=44, max=510595, avg=4231.72, stdev=26370.04
    clat percentiles (usec):
     |  1.00th=[    0],  5.00th=[    0], 10.00th=[    0], 20.00th=[    0],
     | 30.00th=[    0], 40.00th=[    0], 50.00th=[    0], 60.00th=[    0],
     | 70.00th=[    1], 80.00th=[    1], 90.00th=[    1], 95.00th=[    1],
     | 99.00th=[    2], 99.50th=[    2], 99.90th=[    3], 99.95th=[    3],
     | 99.99th=[    4]
    bw (KB  /s): min= 2355, max=272532, per=12.68%, avg=120522.40, stdev=32883.72
    lat (usec) : 2=98.00%, 4=1.97%, 10=0.03%
  cpu          : usr=0.02%, sys=2.36%, ctx=8160, majf=0, minf=1067
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=65536/w=0/d=0, short=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
   READ: io=32768MB, aggrb=950173KB/s, minb=950173KB/s, maxb=950173KB/s, mint=35314msec, maxt=35314msec
RR4312S, RAID5 (10 × SATA), 4K random read: 1512 IOPS.
Random_Read: (g=0): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=16
…
fio-2.1.11
Starting 32 processes
Random_Read: Laying out IO file(s) (1 file(s) / 512MB)
[the line above repeats 32 times, once per job]
Random_Read: (groupid=0, jobs=32): err= 0: pid=27106: Tue Aug  8 15:31:29 2017
  read : io=1418.4MB, bw=6049.5KB/s, iops=1512, runt=240080msec
    slat (usec): min=43, max=1651.2K, avg=21152.26, stdev=49526.06
    clat (usec): min=0, max=2407.1K, avg=316815.10, stdev=194304.98
     lat (msec): min=4, max=2420, avg=337.97, stdev=200.85
    clat percentiles (msec):
     |  1.00th=[  112],  5.00th=[  135], 10.00th=[  149], 20.00th=[  174],
     | 30.00th=[  198], 40.00th=[  225], 50.00th=[  255], 60.00th=[  297],
     | 70.00th=[  351], 80.00th=[  429], 90.00th=[  570], 95.00th=[  701],
     | 99.00th=[ 1045], 99.50th=[ 1172], 99.90th=[ 1483], 99.95th=[ 1598],
     | 99.99th=[ 1975]
    bw (KB  /s): min=    3, max=  482, per=3.23%, avg=195.68, stdev=82.96
    lat (usec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%
    lat (msec) : 10=0.01%, 20=0.01%, 50=0.01%, 100=0.30%, 250=48.25%
    lat (msec) : 500=37.42%, 750=10.01%, 1000=2.78%, 2000=1.20%, >=2000=0.01%
  cpu          : usr=0.01%, sys=0.06%, ctx=363158, majf=0, minf=689
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=99.9%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=363089/w=0/d=0, short=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=16

Run status group 0 (all jobs):
   READ: io=1418.4MB, aggrb=6049KB/s, minb=6049KB/s, maxb=6049KB/s, mint=240080msec, maxt=240080msec
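With this many configurations, pulling the headline numbers out of fio 2.x's classic text output by hand gets tedious. A small grep/cut sketch (the sample line is taken from the random-read run above):

```shell
# Extract the iops figure from a fio 2.x "read :" summary line.
line='  read : io=1418.4MB, bw=6049.5KB/s, iops=1512, runt=240080msec'
iops=$(printf '%s\n' "$line" | grep -o 'iops=[0-9]*' | cut -d= -f2)
echo "$iops"    # prints 1512
```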
Because ReadyNAS supports Bit Rot protection as well as compression, the tests below cover many combinations: RAID10 vs. RAID50, Bit Rot protection on or off, compression on or off, and SATA vs. SSD.
Also, random read/write results list only IOPS, and sequential read/write results list only throughput.
Legend:
R = RAID, so R10 means RAID10
B = Bit-Rot protection enabled
C = Compression enabled
10-12 SATA drives is probably the most common way to populate a 12-bay unit. With 10 SATA drives configured, the random-I/O tests show that adding drives has a clear effect on IOPS: roughly 20% more than with 8 drives.
With 10 drives in RAID5, sequential throughput came to 950 MB/s read and 805 MB/s write. Roughly speaking, an ordinary SATA drive sustains about 100 MB/s on large sequential transfers, so after accounting for the RAID5 overhead these are very efficient yet entirely plausible numbers.
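As a back-of-envelope check (my own arithmetic, using the roughly 100 MB/s per-drive assumption above): RAID5 stores data on n-1 of its n drives, so ten drives give roughly a 900 MB/s streaming ceiling, which brackets the measured 805 MB/s write and is close to the 950 MB/s read.

```shell
# Rough RAID5 streaming estimate: (n - 1) data drives x per-drive rate.
disks=10
per_disk_mbs=100   # assumed SATA large-sequential rate (see text)
echo "$(( (disks - 1) * per_disk_mbs )) MB/s"    # prints "900 MB/s"
```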
Here I compared several parameters that could affect performance: RAID level, Bit Rot protection, and compression. From the results:
Sequential results with 8 SATA drives.
Six Intel SSD DC S3710s: pricier, newer SSDs really do perform on another level. 190K read IOPS and 60K write IOPS.
Sequential read/write on the SSDs: 1400 MB/s read and 900 MB/s write throughput.
Now let's try 12 SSDs, even though the SSD models are mixed:
RAID5 vs. RAID10 performance
Taking the two test items above together, RAID5 and RAID10 do not differ as much on the ReadyNAS as one might expect: RAID5 is better for large sequential files, RAID10 for small random I/O. So should you use RAID10 on a 12-bay NAS? Half the drives would go to mirroring, which seems a rather large overhead. Comparing against the 6-SSD test above, adding drives improves sequential throughput markedly, but random performance actually dropped; the likely reason is that several of the 12 drives in that test were early Intel SSDs, whose performance is considerably worse than the S3710 series.
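The capacity trade-off just mentioned can be made concrete. A sketch assuming hypothetical 4TB drives and a two-group RAID50 layout (both are my assumptions, not details from the review):

```shell
# Usable capacity for 12 drives of 4 TB under the three RAID levels tested.
disks=12; tb=4
raid10=$(( disks * tb / 2 ))     # half the drives are mirrors
raid5=$(( (disks - 1) * tb ))    # one drive's worth of parity
raid50=$(( (disks - 2) * tb ))   # two 6-drive RAID5 groups, one parity each
echo "RAID10: ${raid10} TB, RAID5: ${raid5} TB, RAID50: ${raid50} TB"
```

So RAID10 gives up 20 TB of usable space relative to RAID5 in exchange for its random-I/O advantage.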
This article approaches the product mainly from the hardware and performance angle, so it does not dig deeply into software and features. It gauges the rough performance ceiling of a single RR4312S main chassis, and examines how RAID level and features affect performance.
On the hardware side, it uses a comparatively recent CPU for its product class, an Intel Xeon E3-1245 v5 (4 cores, 3.5 GHz), with 16GB of DDR4 ECC memory standard (expandable to 64GB), plus two SFP+ (or 10GBase-T) ports. For a NAS of this class, that is a very high hardware specification.
On performance:
1) SSD speed and response times far exceed SATA HDDs (this hardly needs a test to establish)
2) The read throughput ceiling of the RR4312S main chassis should be above 2 GB/s, and the write ceiling above 1 GB/s
3) Performance differs substantially between Intel SSDs of different generations