ETERNUS DX8700 S3 / DX8900 S3 Features
May 2015
FUJITSU LIMITED
Copyright 2015 FUJITSU LIMITED

1. Improvement of Reliability
- Data protection with the block guard feature
- All the components in the system are redundant
- Availability increased by reverse connection of DEs
- Excellent reliability with redundant configuration and hot maintenance
- Securing redundancy of the DE access path at the time of a CM abnormality

Data Protection with Block Guard Feature
The validity of all stored data is guaranteed, and the cache is ECC protected.
- WRITE: the Controller Module adds an 8-byte check code (CC) to every 512-byte block of user data, and the check code is verified again when the data is written to the drive.
- READ: the check code is verified when the data is read from the drive, then verified once more and removed before the data is returned to the host.
[Figure: write and read flow of user data blocks A0-A2 with their check codes CC, passing between the Controller Module and the disk inside an ETERNUS DX]
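The principle can be illustrated with a short sketch. This is not the ETERNUS on-disk check code format (the slides do not define it); a CRC-32 padded into an 8-byte field simply stands in for the real check code.

    # Minimal sketch of a block-guard-style check code, assuming a CRC-32
    # padded to 8 bytes per 512-byte block; the real ETERNUS check code
    # format is not described in the slides.
    import struct
    import zlib

    BLOCK = 512          # user data block size
    CC_LEN = 8           # check code size appended to each block

    def protect(block: bytes) -> bytes:
        """Append an 8-byte check code to a 512-byte block (write path)."""
        assert len(block) == BLOCK
        cc = struct.pack(">Q", zlib.crc32(block))   # CRC-32 in an 8-byte field
        return block + cc

    def verify(protected: bytes) -> bytes:
        """Verify and strip the check code (read path); raise on corruption."""
        block, cc = protected[:BLOCK], protected[BLOCK:]
        if struct.pack(">Q", zlib.crc32(block)) != cc:
            raise IOError("block guard check failed: data corrupted in transit")
        return block

    data = bytes(512)                # one all-zero block
    stored = protect(data)           # what would be written to the drive
    assert verify(stored) == data    # what the host gets back on read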

All the Components in the System Are Redundant
Even if a failure occurs in a CE, users can still access their data.
- The CEs are redundant. Cache data is distributed between CEs, so when an error occurs in one CE, the other, normal CE keeps the operation running. Data is likewise distributed between DEs.
- All the components in an FE are redundant (enhanced): not only the controllers, power supply units, and fans, but also components that rarely fail, such as the FE midplane (MP) and the cables. Even if an FE midplane fails, operation continues.
- Each component can be hot replaced, so the system keeps operating even when an error occurs.
CE: Controller Enclosure, FE: Frontend Enclosure, DE: Drive Enclosure, CM: Controller Module, FRT: Frontend Router, SVC: Service Controller, MP: Midplane, PSU: Power Supply Unit
[Figure: an ETERNUS DX8700 S3 / DX8900 S3 with a failed CE; the RAID1 data behind the failed CE cannot be accessed through it, but can still be accessed through the surviving CE]

The Availability Was Increased by Reverse Connection of DEs (new)
CE: Controller Enclosure, DE: Drive Enclosure
- Existing connection (cascade connection, each DE connects to the DE directly above it): when a DE (DE#01 in the figure) fails, the DEs in the subsequent stages can no longer be accessed.
- Reverse connection (the second path connects from the CE to the last DE in the chain): even when a DE (DE#01 in the figure) fails, access to the DEs in the subsequent stages is maintained through the reverse path.
[Figure: DE#00 to DE#03 chained to a CE, shown with cascade cabling and with reverse cabling, before and after a DE#01 failure]
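Why reverse cabling helps can be checked with a toy reachability model; this is only an illustration of the cabling idea, not the actual SAS backend topology.

    # Toy reachability check for cascade vs. reverse DE cabling.
    # DEs are numbered 0..n-1; the CE reaches DE#0 directly, and with
    # reverse cabling it also reaches DE#(n-1) directly.
    def reachable_des(n_des: int, failed: int, reverse: bool) -> set[int]:
        ok = set()
        # forward path: CE -> DE#0 -> DE#1 -> ...
        for de in range(n_des):
            if de == failed:
                break
            ok.add(de)
        if reverse:
            # reverse path: CE -> DE#(n-1) -> DE#(n-2) -> ...
            for de in range(n_des - 1, -1, -1):
                if de == failed:
                    break
                ok.add(de)
        return ok

    print(reachable_des(4, failed=1, reverse=False))  # {0}: DE#02/#03 cut off
    print(reachable_des(4, failed=1, reverse=True))   # {0, 2, 3}: still reachable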

Excellent Reliability with Redundant Configuration and Hot Maintenance
Redundant configuration
- The main components (parts), such as the controllers and power supply units, are redundant.
- Data mirroring is realized through memory redundancy.
- The connections between the controllers and the drives are redundant.
Hot replacement
- The main components can be hot replaced in a failure, without stopping the system.
- Drives can be hot added while the system is in operation.
[Figure: ETERNUS DX8700 S3 / DX8900 S3 block diagram with redundant SVCs, FRTs, FEs, CEs (CM#0/CM#1 with CPU, memory, CAs, PFM, BUD, IOCs, EXPs, BBU, CPSU) and DEs (dual IOMs, dual DPSUs, midplane), plus batteries, cooling fans, and input power feeds]
CE: Controller Enclosure, FE: Frontend Enclosure, DE: Drive Enclosure, CM: Controller Module, CA: Channel Adapter, PFM: PCIe Flash Module, BUD: Boot-up and Utility Device, FRT: Frontend Router, SVC: Service Controller, SW: Switch, IOC: I/O Controller, INF: Interface, EXP: SAS Expander, MP: Midplane, BBU: Battery Backup Unit, CPSU: CE Power Supply Unit, DPSU: DE Power Supply Unit, IOM: I/O Module, FPSU: FE Power Supply Unit, FANU: Fan Unit

Securing Redundancy of the DE Access Path at the Time of a CM Abnormality
Data is maintained in a failure.
- If its CM Expander is still normal, a failed CM automatically continues to be used as a DE access path ("backend CM" operation).
- Backend CM operation therefore keeps the DE access path redundant even while a CM is abnormal: the surviving CM handles everything other than DE access, while the CM Expander of the failed CM keeps controlling DE access.
CM Expander: a component built into the CM that controls DE access.
[Figure: a CE with CM#0 and CM#1; CM#1 has failed, but its CM Expander is still used on the path to the DEs]

2. Cache Mechanism
- Cache management
- Assigned Controller Module (CM)

Cache Management (1/2)
- The cache memory on each CM is divided into two managed areas: a local area and a mirror area.
- After data is written into the local area of one CM, it is duplicated into the mirror area of the other CM, so the data always exists on both CMs.
- At the time of a power failure, data is saved as follows. For the DX8700 S3 / DX8900 S3, the BBU supplies power while the cache data is saved to the BUD on the CM.
[Figure: CM0 and CM1, each with a CPU and a cache split into a local area and a mirror area]

Cache Management (2/2) (enhanced)
Cache memory mirroring must form a cyclic configuration across the CEs to prevent the system from going down in a CE failure: each CM's local area is mirrored by a CM in another CE, so operation can continue even when an entire CE is down.
Example of the cyclic mirror layout across three CEs:
- CE#0-CM0: local 0-0, holds mirror of 2-1;  CE#0-CM1: local 0-1, holds mirror of 2-0
- CE#1-CM0: local 1-0, holds mirror of 0-0;  CE#1-CM1: local 1-1, holds mirror of 0-1
- CE#2-CM0: local 2-0, holds mirror of 1-0;  CE#2-CM1: local 2-1, holds mirror of 1-1
When CE#1 goes down, its cached data is still available from the mirror copies held in the other CEs, so the operation can be continued.
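The cyclic layout amounts to a simple assignment rule. The sketch below pairs each CM with the same-numbered CM in the next CE; the deck's actual pairing swaps CM numbers at the wrap-around, so treat this purely as an illustration of the idea.

    # Sketch of a cyclic cache-mirror assignment across CEs: a CM in the
    # next CE holds the mirror of this CM's local area, so losing one whole
    # CE never removes both copies of any cached data.
    def mirror_map(n_ces: int, cms_per_ce: int = 2) -> dict[str, str]:
        holds_mirror_of = {}
        for ce in range(n_ces):
            for cm in range(cms_per_ce):
                owner = f"CE#{ce}-CM{cm}"                    # holds local area {ce}-{cm}
                partner = f"CE#{(ce + 1) % n_ces}-CM{cm}"    # holds its mirror copy
                holds_mirror_of[partner] = owner
        return holds_mirror_of

    for holder, owner in mirror_map(3).items():
        print(f"{holder} mirrors the local area of {owner}")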

Assigned Controller Module (CM) (1/2)
- Each RAID group has an assigned CM which handles the I/O to that RAID group.
- If an I/O arrives at a non-assigned CM, the assigned CM performs the disk I/O after cache mirroring. Such I/O is therefore slower than I/O issued to the assigned CM's port.
- The assigned CM can be set manually or automatically when the RAID group is created, and can be changed later if required.
[Figure: RAID#0 with assigned CM = CM0 and RAID#1 with assigned CM = CM1; path #1 is I/O arriving at the assigned CM's port, path #2 is I/O arriving at a non-assigned CM's port]
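A minimal sketch of the routing rule, using an assumed in-memory model (class and function names are illustrative, not ETERNUS firmware interfaces): I/O that arrives at the non-assigned CM is mirrored to cache and then handed to the assigned CM, which performs the disk I/O.

    # Sketch of assigned-CM I/O handling: the assigned CM owns disk I/O for
    # its RAID groups; a request received by the other CM is forwarded to the
    # assigned CM after cache mirroring (the slower path in the slide).
    class ControllerModule:
        def __init__(self, name: str):
            self.name = name

        def disk_io(self, raid_group: str, op: str) -> str:
            return f"{self.name} performs {op} on {raid_group}"

    def handle_io(receiving_cm, assigned_cm, raid_group, op):
        if receiving_cm is assigned_cm:
            return assigned_cm.disk_io(raid_group, op)           # fast path
        # slow path: mirror the request into both caches, then forward it
        mirrored = f"mirror {op} for {raid_group} into both caches"
        return mirrored + "; " + assigned_cm.disk_io(raid_group, op)

    cm0, cm1 = ControllerModule("CM0"), ControllerModule("CM1")
    assigned = {"RAID#0": cm0, "RAID#1": cm1}
    print(handle_io(cm0, assigned["RAID#0"], "RAID#0", "write"))  # assigned port
    print(handle_io(cm1, assigned["RAID#0"], "RAID#0", "write"))  # non-assigned port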

Assigned Controller Module (CM) (2/2)
When a CM fails, its assignment is taken over by the paired CM, and the processing of the RAID group is continued by the new assigned CM.
[Figure: four CMs (CE0-CM0/CM1 and CE1-CM0/CM1), each with a local area and a mirroring area; after CE0-CM1 fails and is under maintenance, the cache is reconfigured for 3 CMs, and after the repair it is reconfigured for 4 CMs again]

24、ea11Data in a frequently-used area is pressed in the PFM beforehand so that the response time is reduced by reading data from PFM if there is the data in it.The PFM (Secondary cache) can provide higher Hit performance than the DRAM memory (Primary cache) because its capacity is larger (up to 5.6TB p

25、er CE).Realizing performance requirement with a small number of disks to achieve low cost and low power consumption.Only for Read I/O, supporting Extreme Cache.Extreme CacheCMDRAM memory= Primary cacheAchieving the secondary cache function (Extreme Cache with PFM (PCIe Flash Module) connected to CM
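The read path can be sketched as a two-level lookup; dictionaries stand in for DRAM, PFM, and HDDs, so this is a conceptual model only, not the controller's actual caching algorithm.

    # Conceptual two-level read cache: DRAM (primary), PFM (secondary), HDD.
    # Only reads populate the PFM, matching the "read I/O only" restriction.
    dram, pfm, hdd = {}, {}, {"blk7": b"cold data"}

    def read(block_id: str) -> bytes:
        if block_id in dram:               # primary cache hit (fastest)
            return dram[block_id]
        if block_id in pfm:                # secondary cache hit (PFM, still fast)
            data = pfm[block_id]
        else:                              # miss: go to the HDDs
            data = hdd[block_id]
            pfm[block_id] = data           # stage frequently used data into PFM
        dram[block_id] = data              # keep the hottest copy in DRAM
        return data

    def write(block_id: str, data: bytes) -> None:
        dram[block_id] = data              # writes are cached in DRAM only,
        hdd[block_id] = data               # not in the PFM (read-only Extreme Cache)

    read("blk7")   # first read: served from the HDD, staged into PFM and DRAM
    read("blk7")   # second read: served from cache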

3. Supported RAID Level
- Supported RAID list
- RAID5+0 support
- RAID6 support
- Comparison between RAID5, 5+0, and 6
- Fast Recovery function

Supported RAID List
ETERNUS DX S3 supports the following RAID configurations.
[Figure: block layouts for each RAID level, showing how data blocks A-P and parity blocks P (and P1/P2 for RAID6) are distributed across the member disks]

- RAID 0: divides data into blocks and writes them to multiple disks in a dispersed manner (striping).
- RAID 1: writes data to two disks simultaneously (mirroring).
- RAID 1+0: combination of RAID 0 and RAID 1; mirrors the striped data.
- RAID 5: writes striped data together with generated parity data and distributes the parity across multiple disks; can correct one disk failure in the RAID array.
- RAID 5+0: stripes across two RAID 5 groups.
- RAID 6: distributes two types of parity to different disks (double parity); can correct two disk failures in the RAID array.
- RAID6-FR: a RAID level exclusively for Fast Recovery, based on RAID 6.
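To make the single-parity recovery of RAID 5 concrete, the following sketch XORs the data blocks of one stripe into a parity block and rebuilds a lost block from the survivors (block sizes and layout are simplified).

    # RAID 5 parity in one stripe: P = D1 xor D2 xor D3, so any single lost
    # block (data or parity) can be recomputed from the remaining ones.
    from functools import reduce

    def xor_blocks(blocks):
        return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

    d1, d2, d3 = b"AAAA", b"BBBB", b"CCCC"     # data blocks of one stripe (3+1)
    p = xor_blocks([d1, d2, d3])               # parity block

    # The disk holding d2 fails: rebuild it from the surviving blocks and parity.
    rebuilt_d2 = xor_blocks([d1, d3, p])
    assert rebuilt_d2 == d2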

RAID5+0 Support - High Performance, Large Capacity
- With the use of striping in RAID5+0, the data transfer rate is improved compared with RAID5: a RAID5+0 (3+1) x 2 group stripes data across two sub-groups, each equivalent to RAID5 (3D+1), using 8 drives instead of the 4 drives of a single RAID5 (3+1).
- Large capacities can be configured: RAID5 supports (2+1) to (15+1), and RAID5+0 supports (2+1) x 2 to (15+1) x 2.
D: data area, P: parity area
[Figure: block layout of RAID5 (3+1) on 4 drives versus RAID5+0 (3+1) x 2 on 8 drives]
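How striping over two RAID 5 sub-groups spreads consecutive stripes across all eight drives can be sketched as a simple address mapping (the real on-disk parity rotation may differ):

    # Map stripe number -> RAID5+0 sub-group, then lay the stripe out inside
    # that (3+1) sub-group with rotating parity, so 8 drives serve the I/O.
    def raid50_layout(stripe: int, data_disks: int = 3):
        group = stripe % 2                      # alternate between the two RAID5 groups
        inner = stripe // 2                     # stripe index inside the chosen group
        width = data_disks + 1                  # 3 data + 1 parity
        parity_disk = (width - 1 - inner) % width   # rotate the parity position
        data_layout = [d for d in range(width) if d != parity_disk]
        return group, parity_disk, data_layout

    for s in range(4):
        g, p, d = raid50_layout(s)
        print(f"stripe {s}: group {g}, parity on disk {p}, data on disks {d}")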

RAID5+0 Support - Improved Reliability
Example: RAID5, RAID5+0, and RAID6 configured so that each holds the same amount of user data.
- RAID5 (6+1), 7 drives: one parity disk per array; can recover one disk failure, but the rebuild time becomes long because of the larger number of drives in the array.
- RAID5+0 (3+1) x 2, 8 drives: stripes multiple RAID5 groups, each equivalent to RAID5 (3D+1); each sub-group contains a small number of disks, so the rebuild time is shorter and performance is higher than RAID5 or RAID6, and one disk failure can be recovered in each RAID5 sub-group.
- RAID6 (6+2), 8 drives: two parity disks are created, so two disk failures can be recovered, but the write load is high.
D: data area, P, Q: parity area
[Figure: block layouts of RAID5 (6+1), RAID5+0 (3+1) x 2, and RAID6 (6+2)]

RAID6 Support
RAID6 (6D+2P) is supported: even when two disks in a RAID group fail at the same time, the system can be restored using the two types of parity. The reliability was increased by supporting RAID6.
D: data area, P, Q: parity area
[Figure: a RAID6 (6D+2P) stripe across 8 disks with data blocks and P/Q parity]

Comparison Between RAID5, 5+0, and 6

                Reliability*1    Data efficiency*2    Write performance*3
    RAID5       OK               Very good            Good
    RAID5+0     Good             Good                 Very good
    RAID6       Very good        Good                 OK

*1: RAID5 can recover one disk failure on the same stripe. RAID5+0 may be able to sustain two disk failures on the same stripe (one drive failure per RAID5 sub-array can be recovered). RAID6 can sustain two disk failures on the same stripe.
*2: User data area in all volumes, compared across RAID arrays holding the same user volume, e.g. RAID5 (6+1) vs. RAID5+0 (3+1) x 2 vs. RAID6 (6+2).
*3: RAID5+0 is significantly superior to a RAID5 array with the same group configuration, e.g. RAID5 (3+1) vs. RAID5+0 (3+1) x 2, and is also superior to a RAID5 array holding the same user volume, e.g. RAID5 (6+1) vs. RAID5+0 (3+1) x 2.
[Figure: radar charts comparing reliability, data efficiency, and write performance for RAID5, RAID5+0, and RAID6]
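Footnote *2 can be made concrete with a quick capacity-efficiency calculation, user data disks divided by total disks, for the three example arrays:

    # Data efficiency = fraction of drives that hold user data rather than parity.
    def efficiency(data_disks: int, parity_disks: int, groups: int = 1) -> float:
        total = (data_disks + parity_disks) * groups
        return data_disks * groups / total

    print(f"RAID5   (6+1):     {efficiency(6, 1):.2f}")            # 0.86
    print(f"RAID5+0 (3+1) x 2: {efficiency(3, 1, groups=2):.2f}")  # 0.75
    print(f"RAID6   (6+2):     {efficiency(6, 2):.2f}")            # 0.75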

Fast Recovery Function (new)
The risk of stopping operation due to a disk failure was decreased by reducing the rebuild time.
- The rebuild time after a disk failure is reduced by a factor of two to six compared with the conventional model.
- Disk capacity efficiency of 55% to 80% is achieved, while keeping performance equal to the conventional model.
- The user can select the most suitable type among multiple RAID configurations, giving more priority either to disk capacity efficiency or to rebuild speed.
How it works: the logical volume is divided into multiple partitions which are distributed, together with reserved areas, across many HDDs. When a disk fails, the rebuild is sped up by writing into the multiple reserved areas at the same time.
(Note) The RAID level exclusive to Fast Recovery (RAID6-FR) is required, and the reserved areas consume part of the user area.
[Figure: a RAID6-FR layout across Disk#0 to Disk#n with data, parity, and reserved areas; after a failure, the lost fragments are rebuilt into reserved areas on several disks at once]
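The speed-up comes from writing the rebuilt data into reserved areas on many surviving disks in parallel. A rough model with assumed, purely illustrative numbers:

    # Rough model: rebuild time ~ data to restore / aggregate write bandwidth.
    # A classic hot spare gives one target disk; RAID6-FR writes into reserved
    # areas on many surviving disks at once.
    def rebuild_hours(failed_tb: float, disk_mb_s: float, targets: int) -> float:
        bandwidth = disk_mb_s * targets          # MB/s available for rebuild writes
        return failed_tb * 1e6 / bandwidth / 3600

    print(f"to a single hot spare : {rebuild_hours(4.0, 150, targets=1):.1f} h")
    print(f"to 6 reserved areas   : {rebuild_hours(4.0, 150, targets=6):.1f} h")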

4. RAID Control
- Roles of RAID control
- Detection of disk failure signs
- Copyback-less operation
- Drive shield function
- Automatic hot spare assignment
- Global hot spare and dedicated hot spare
- Rebuild priority setting
- Quick format
- Drive patrol
- Stripe size tuning
- Wide striping

Roles of RAID Control
[Figure: rebuild without a hot spare (failed disk, disk replacement, rebuild, redundancy recovered) versus rebuild and copyback with a hot spare (failed disk, rebuild to the HS, disk replacement, copyback, redundancy recovered)]

RAID control recovers data and redundancy in the event of a drive failure.
- Rebuild: recalculates the data of a failed drive from the normal drives and writes it to a hot spare or to the replaced drive. The redundancy of the RAID group is recovered once the rebuild is complete.
- Copyback: while the redundancy of the RAID group is maintained, the data is copied from the hot spare back to the replaced drive.
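The conventional sequence can be modelled in a few lines; byte strings stand in for drive contents and the data structures are illustrative only:

    # Model of "rebuild to a hot spare, then copy back to the replaced drive"
    # for a mirrored (RAID1) pair.
    drives = {"Disk#0": b"user data", "Disk#1": b"user data", "HS": None}
    raid1 = ["Disk#0", "Disk#1"]

    def rebuild(failed: str, spare: str) -> None:
        survivor = next(d for d in raid1 if d != failed)
        drives[spare] = drives[survivor]        # recompute the lost data onto the HS
        raid1[raid1.index(failed)] = spare      # redundancy is recovered at this point

    def copyback(spare: str, replaced: str) -> None:
        drives[replaced] = drives[spare]        # copy from the HS to the new drive,
        raid1[raid1.index(spare)] = replaced    # while the group stays redundant
        drives[spare] = None                    # the hot spare is free again

    drives["Disk#1"] = None                     # Disk#1 fails
    rebuild(failed="Disk#1", spare="HS")
    copyback(spare="HS", replaced="Disk#1-new")
    print(raid1)                                # ['Disk#0', 'Disk#1-new']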

Detection of Disk Failure Signs (Redundant Copy)
- When signs of a disk failure are detected, a rebuild is started automatically: the data of the suspect disk is restored to a hot spare disk while the RAID group still retains redundancy (Redundant Copy).
- When the rebuild completes, the hot spare is incorporated into the RAID group and the disk showing signs of failure is removed.
HS: Hot Spare
[Figure: a RAID5 (4+1) group in which one disk shows signs of failure; its data is copied to a hot spare while redundancy is retained, and the hot spare then takes its place]

Copyback-Less Operation (new)
- A copyback operation after the rebuild or redundant copy is no longer required: when the rebuild or redundant copy completes, the configuration definition is switched so that the hot spare disk becomes a member disk of the RAID group, and the failed drive, once replaced, becomes the new hot spare.
- There is no I/O performance degradation from a copyback, and the time until a hot spare is available again is reduced, which improves the availability of the RAID group.
[Figure: conventional flow (rebuild / redundant copy to the HS, drive replacement, then copyback) versus the copyback-less flow (rebuild / redundant copy to the HS, then the configuration definition is switched so the HS becomes a RAID member and the replaced drive becomes the HS; no copyback is needed)]
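Copyback-less recovery is essentially a metadata change rather than a data copy. A minimal sketch under assumed data structures (a RAID group as a list of member slots plus a hot spare list):

    # Copyback-less recovery: after rebuilding onto the hot spare, swap roles
    # in the configuration definition instead of copying data back.
    def copyback_less_recover(members: list[str], hot_spares: list[str],
                              failed: str) -> None:
        hs = hot_spares.pop(0)                      # hot spare used as the rebuild target
        members[members.index(failed)] = hs         # the HS becomes a RAID member for good
        replacement = f"new drive in {failed}'s slot"
        hot_spares.append(replacement)              # the replaced drive becomes the new HS

    raid1 = ["DE#00-Disk0", "DE#10-Disk0"]
    spares = ["DE#10-HS"]
    copyback_less_recover(raid1, spares, failed="DE#10-Disk0")
    print(raid1)    # ['DE#00-Disk0', 'DE#10-HS']  <- the HS is now a member disk
    print(spares)   # ["new drive in DE#10-Disk0's slot"]  <- becomes the hot spare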

Drive Shield (new)
- The newly supported drive shield function reduces the effective hard disk failure rate: temporarily isolated hard disk drives can be re-installed.
- Drive shield executes the equivalent of a "Force enable (Motor On)" operation on drives that were isolated because of an error, recovers them, and re-installs them, typically as hot spare disks.
[Figure: after a copyback-less rebuild, the isolated drive is forcibly re-enabled by the drive shield operation and returns to service as a hot spare]

Automatic Hot Spare Assignment
When a disk fails, an appropriate hot spare is searched for and assigned to the RAID group, in the following order of priority:
(Search 1) The dedicated hot spare of the RAID group is searched first (if no dedicated hot spare exists, global hot spares are searched).
(Search 2) Hot spares in DE cascades that do not contain any of the normal disks of the target RAID group are preferred.
(Search 3) A hot spare with the same capacity and the same rotational speed as the failed disk is searched for.
(Search 4) A hot spare with the same rotational speed and a larger capacity is searched for; among the larger hot spares, priority is given to the one whose capacity is closest to that of the failed disk.
(Search 5) A hot spare with a different rotational speed is searched for; faster drives are selected preferentially.
DHS: Dedicated Hot Spare, GHS: Global Hot Spare
[Figure: example with a failed 2.5" 300 GB 10 krpm disk and candidate hot spares of various sizes and speeds, numbered in selection priority: the dedicated HS first, then global HS in a different DE, same DE, same size, larger or similar size, faster rotation, slower rotation]
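The search order can be expressed as a ranked sort. The sketch below is a simplified model of the rules above with illustrative field names; it folds Search 1-5 into one sort key and assumes only spares at least as large as the failed disk are usable.

    # Simplified ranking of hot-spare candidates following Search 1-5:
    # dedicated HS first, then a DE without other member disks, then the same
    # capacity/speed, then larger (closest) capacity, then faster drives.
    def pick_hot_spare(candidates, failed, member_des):
        def rank(hs):
            return (
                0 if hs["dedicated"] else 1,                                      # Search 1
                0 if hs["de"] not in member_des else 1,                           # Search 2
                0 if (hs["gb"], hs["rpm"]) == (failed["gb"], failed["rpm"]) else 1,  # Search 3
                0 if hs["rpm"] == failed["rpm"] else 1,                           # Search 4
                abs(hs["gb"] - failed["gb"]),                                     # closest capacity
                -hs["rpm"],                                                       # Search 5: faster first
            )
        usable = [hs for hs in candidates if hs["gb"] >= failed["gb"]]
        return min(usable, key=rank) if usable else None

    failed = {"gb": 300, "rpm": 10000}
    spares = [
        {"de": "DE#01", "gb": 600, "rpm": 10000, "dedicated": False},
        {"de": "DE#02", "gb": 300, "rpm": 10000, "dedicated": False},
        {"de": "DE#03", "gb": 450, "rpm": 15000, "dedicated": False},
    ]
    print(pick_hot_spare(spares, failed, member_des={"DE#00", "DE#01"}))
    # -> the 300 GB / 10 krpm spare in DE#02 (same size and speed, different DE)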

Global Hot Spare and Dedicated Hot Spare
- Dedicated HS: assigned to one specific RAID group. When a disk of a RAID group that has a dedicated HS fails, the dedicated HS is selected as the restoration destination and used to rebuild the RAID group.
- Global HS: can be used by all RAID groups. When a disk fails in a RAID group that has no available dedicated HS, a global HS is selected as the restoration destination and used to rebuild the RAID group.
[Figure: RAID Groups A-E sharing two global hot spares (GHS), with RAID Groups A and B each also having a dedicated hot spare (DHS)]

Rebuild Priority Setting (1/2)
- A rebuild priority level can be set for each RAID group; select "Low", "Middle", or "High". Copyback and Redundant Copy also operate according to this priority setting.
- If the setting is changed while a Rebuild, Copyback, or Redundant Copy is in progress, the change is reflected immediately.
- The table below shows the unit's operation under each setting. When "High" is set and host I/O is present, Rebuild / Copyback / Redundant Copy runs about 2 to 3.5 times faster than with the default "Low" setting.

    Rebuild priority setting    Operation with I/O    Operation w/o I/O
    "Low" (default)             ...                   ...
