Mars: Building a Large-Scale Tensor Computation System in Practice

Agenda
- What is Mars, and why we built Mars
- Mars architecture and execution process: preparing the graph, scheduling strategy, execution on the Worker side
- Comparison and outlook

What is Mars
Mars provides APIs similar to NumPy / SciPy / Pandas / scikit-learn, so that Python users can write distributed code in a way they already know. Examples:
- From NumPy to Mars Tensor
- From Pandas DataFrame to Mars DataFrame
- From scikit-learn to Mars Learn

The examples start from a plain NumPy implementation of the Black-Scholes pricing formula:

import numpy as np
from scipy.special import erf

def black_scholes(P, S, T, rate, vol):
    a = np.log(P / S)
    b = T * -rate
    z = T * (vol * vol * 2)
    c = 0.25 * z
    y = 1.0 / np.sqrt(z)
    w1 = (a - b + c) * y
    w2 = (a - b - c) * y
    d1 = 0.5 + 0.5 * erf(w1)
    d2 = 0.5 + 0.5 * erf(w2)
    Se = np.exp(b) * S
    call = P * d1 - Se * d2
    put = call - P + Se
    return call, put

N = 50000000
price = np.random.uniform(10.0, 50.0, N)
strike = np.random.uniform(10.0, 50.0, N)
t = np.random.uniform(1.0, 2.0, N)
print(black_scholes(price, strike, t, 0.1, 0.2))

From Numpy to Mars Tensor
Moving to mars.tensor only changes the import, the functions that were called on np (log, sqrt, exp, the erf from the special module), and the final call that triggers execution:

import mars.tensor as mt
from mars.tensor.special import erf

def black_scholes(P, S, T, rate, vol):
    a = mt.log(P / S)
    b = T * -rate
    z = T * (vol * vol * 2)
    c = 0.25 * z
    y = 1.0 / mt.sqrt(z)
    w1 = (a - b + c) * y
    w2 = (a - b - c) * y
    d1 = 0.5 + 0.5 * erf(w1)
    d2 = 0.5 + 0.5 * erf(w2)
    Se = mt.exp(b) * S
    call = P * d1 - Se * d2
    put = call - P + Se
    return call, put

N = 50000000
price = mt.random.uniform(10.0, 50.0, N)
strike = mt.random.uniform(10.0, 50.0, N)
t = mt.random.uniform(1.0, 2.0, N)
print(mt.ExecutableTuple(black_scholes(price, strike, t, 0.1, 0.2)).execute())
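Unlike the NumPy version, the mars.tensor code computes nothing while the expressions are being written; it only builds a graph, and execute() is what kicks off tiling, scheduling, and computation. A minimal sketch of that lazy behaviour (the shape and chunk_size below are arbitrary illustration):

import mars.tensor as mt

a = mt.random.rand(10000, 10000, chunk_size=1000)   # builds a graph node, no data yet
b = (a + 1).sum(axis=0)                             # still lazy: only more graph nodes
print(b.execute())                                  # triggers tiling, scheduling and execution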

From Pandas DataFrame to Mars DataFrame

pandas version:

import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.rand(100000000, 4),
                  columns=list('abcd'))
print(df.sum())

Mars DataFrame version:

import mars.tensor as mt
import mars.dataframe as md

df = md.DataFrame(mt.random.rand(100000000, 4),
                  columns=list('abcd'))
print(df.sum().execute())

From scikit-learn to Mars Learn

scikit-learn version:

from sklearn.datasets.samples_generator import make_blobs
from sklearn.decomposition.pca import PCA

X, y = make_blobs(n_samples=100000000, n_features=3,
                  centers=[[3, 3, 3], [0, 0, 0],
                           [1, 1, 1], [2, 2, 2]],
                  cluster_std=[0.2, 0.1, 0.2, 0.2],
                  random_state=9)
pca = PCA(n_components=3)
pca.fit(X)
print(pca.explained_variance_ratio_)
print(pca.explained_variance_)

Mars Learn version (only the PCA import and the .execute() calls change):

from sklearn.datasets.samples_generator import make_blobs
from mars.learn.decomposition.pca import PCA

X, y = make_blobs(n_samples=100000000, n_features=3,
                  centers=[[3, 3, 3], [0, 0, 0],
                           [1, 1, 1], [2, 2, 2]],
                  cluster_std=[0.2, 0.1, 0.2, 0.2],
                  random_state=9)
pca = PCA(n_components=3)
pca.fit(X)
print(pca.explained_variance_ratio_.execute())
print(pca.explained_variance_.execute())

Why we built Mars
- Python is popular.
- NumPy and Pandas are the de facto standard for data processing in Python.
- To reach higher execution efficiency, a new framework is needed.

Python is popular, as the TIOBE Index (October 2019) and worldwide Google Trends both show, and NumPy and Pandas are the de facto standard for data processing in Python.
[Figure: Google Trends (worldwide), 2014-2018, comparing search interest in pandas, NumPy, TensorFlow and Apache Spark.]

To reach higher execution efficiency, a new framework is needed. In existing distributed frameworks such as Spark, an operation like matrix multiplication forces a shuffle, yet the shuffle is not actually necessary: with finer-grained dependencies, each output chunk only depends on the input chunks it really needs, which can improve efficiency significantly.
[Figure: chunk-level operator graph of a matrix multiplication built from dot and sum operators.]

Mars architecture and execution process
- Mars architecture: the Actor model
- Execution on the Scheduler side: preparing the graph; scheduling strategy
- Execution on the Worker side: computation and data storage

Mars is built on a simplified Actor system that we developed ourselves, which makes it convenient to implement a distributed computing framework.
Actor Model: every actor (for example w:1:actor1) lives in one process of an actor pool; a caller in the same process reaches it with a plain function call, a caller in another process goes through an asynchronous IPC call routed by the dispatch process, and a caller in another pool goes through an asynchronous RPC call.
[Figure: dispatch process routing calls to w:1:actor1 and w:2:actor1 in Process 1 / Process 2: function call, async IPC call, async RPC call.]
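To make the actor idea concrete, here is a deliberately tiny, purely local sketch. It is not the real mars.actors API (the class and method names below are invented); it only shows the shape of the abstraction: an actor owns its state, and callers only talk to it through messages, so the same calling code works whether the actor sits in the same process or behind an IPC/RPC channel.

class Actor:
    def on_receive(self, message):
        raise NotImplementedError

class CounterActor(Actor):
    def __init__(self):
        self.count = 0
    def on_receive(self, message):           # all state changes happen inside the actor
        self.count += message
        return self.count

class LocalActorPool:
    def __init__(self):
        self._actors = {}
    def create_actor(self, actor_cls, uid):  # uid such as 'w:1:counter'
        self._actors[uid] = actor_cls()
        return uid
    def send(self, uid, message):
        # same process: a plain function call; in another process or pool this
        # would become an asynchronous IPC or RPC call routed by a dispatcher
        return self._actors[uid].on_receive(message)

pool = LocalActorPool()
ref = pool.create_actor(CounterActor, 'w:1:counter')
print(pool.send(ref, 5))    # -> 5
print(pool.send(ref, 3))    # -> 8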

Mars architecture
[Figure: Mars architecture. The client-side expression layer (Tensor (NumPy), DataFrame (Pandas), Learn (SKLearn)) with expression optimization talks to a Web frontend; the Scheduler hosts Resource, Assigner, OperandGraph, Session and Meta services; the Worker hosts Execution, Storage Layer, Quota, Transfer and Calc; everything is built on the Actor System.]

Preparing the graph: from code to a coarse-grained graph

In [1]: import mars.tensor as mt
In [2]: import mars.dataframe as md
In [3]: a = mt.ones((10, 10), chunk_size=5)
In [4]: a[5, 5] = 8
In [5]: df = md.DataFrame(a)
In [6]: s = df.sum()
In [7]: s.execute()
Out[7]:
0    10.0
1    10.0
2    10.0
3    10.0
4    10.0
5    17.0
6    10.0
7    10.0
8    10.0
9    10.0
dtype: float64

The coarse-grained graph is built on the client side as the user writes code. Each user-facing object (the Tensor a, the DataFrame df, the Series s) holds a reference to a data node, and operands connect those data nodes: Ones produces a TensorData, IndexSetValue (indexes: (5, 5), value: 8) produces the updated TensorData, FromTensor produces a DataFrameData, and Sum produces the SeriesData.
[Figure: the client-side coarse-grained (tileable) graph built for this session.]
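A much simplified picture of what such a coarse-grained graph could look like in memory. The class and field names below are invented for illustration; the real Mars operands and tileable data carry far more metadata (dtype, index values, chunk layout, and so on).

from dataclasses import dataclass, field

@dataclass
class Operand:
    name: str                     # e.g. 'Ones', 'IndexSetValue', 'FromTensor', 'Sum'
    params: dict = field(default_factory=dict)

@dataclass
class TileableData:
    op: Operand                   # the operand that produces this data node
    inputs: list = field(default_factory=list)   # upstream TileableData nodes
    shape: tuple = ()

# mirrors: a = mt.ones((10, 10), chunk_size=5); a[5, 5] = 8; s = md.DataFrame(a).sum()
ones    = TileableData(Operand('Ones'), shape=(10, 10))
setitem = TileableData(Operand('IndexSetValue', {'indexes': (5, 5), 'value': 8}),
                       inputs=[ones], shape=(10, 10))
frame   = TileableData(Operand('FromTensor'), inputs=[setitem], shape=(10, 10))
total   = TileableData(Operand('Sum'), inputs=[frame], shape=(10,))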

Preparing the graph: from the coarse-grained graph to a fine-grained graph
The coarse-grained graph is tiled on the client side, serialized, and submitted to the Scheduler through the Web frontend. Tiling turns every tileable node into chunk-level nodes: with chunk_size=5 the 10 x 10 tensor becomes four 5 x 5 chunks with indices (0, 0), (0, 1), (1, 0) and (1, 1), each produced by its own Ones operand; the IndexSetValue operand (value 8 at local indexes (0, 0)) only applies to the single chunk that contains element (5, 5); each tensor chunk then goes through FromTensor and a per-chunk Sum, and the partial results are combined by Concat and a final Sum into the chunks of the result series.
[Figure: the fine-grained (chunk-level) graph after tiling, serialized and submitted from the client through the Web frontend to the Scheduler.]
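A small sketch of the arithmetic behind tiling, using hypothetical helper names, only to make the chunk indices above concrete:

import itertools

def tile(shape, chunk_size):
    # chunk indices of a dense tensor split into chunk_size-sized blocks per axis
    grid = [range((s + chunk_size - 1) // chunk_size) for s in shape]
    return list(itertools.product(*grid))

def locate(index, chunk_size):
    chunk_idx = tuple(i // chunk_size for i in index)   # which chunk holds the element
    local_idx = tuple(i % chunk_size for i in index)    # the index inside that chunk
    return chunk_idx, local_idx

print(tile((10, 10), 5))    # [(0, 0), (0, 1), (1, 0), (1, 1)]
print(locate((5, 5), 5))    # ((1, 1), (0, 0)) -> the setitem only touches chunk (1, 1)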

Preparing the graph: optimizing the fine-grained graph (operator fusion)
On the Scheduler, adjacent chunk operators are fused into Compose nodes, so a whole group can later run as one job without materializing its intermediate chunks. In this example the operators that produce each DataFrame chunk are fused into one Compose node, and the operators that reduce them into each result series chunk are fused into another.
[Figure: the chunk graph before and after the Fuse step on the Scheduler; after fusion only Compose nodes remain, feeding the DataFrame chunks and the two result series chunks.]
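A rough sketch of one common fusion rule: collapse straight-line chains, where every node has a single input and a single output, into one composed job. The data structures and the rule itself are simplified stand-ins, not Mars's actual fusion logic.

def fuse_linear_chains(nodes, successors, predecessors):
    def only_succ(n):
        s = successors.get(n, [])
        return s[0] if len(s) == 1 else None
    def only_pred(n):
        p = predecessors.get(n, [])
        return p[0] if len(p) == 1 else None

    fused = []
    for node in nodes:
        pred = only_pred(node)
        if pred is not None and only_succ(pred) == node:
            continue                    # interior of a chain; its head will pick it up
        chain, cur = [node], node
        while True:
            nxt = only_succ(cur)
            if nxt is None or only_pred(nxt) != cur:
                break
            chain.append(nxt)
            cur = nxt
        fused.append(('Compose', chain) if len(chain) > 1 else ('Single', chain))
    return fused

succ = {'Ones': ['IndexSetValue'], 'IndexSetValue': ['FromTensor'], 'FromTensor': ['Sum']}
pred = {'IndexSetValue': ['Ones'], 'FromTensor': ['IndexSetValue'], 'Sum': ['FromTensor']}
print(fuse_linear_chains(['Ones', 'IndexSetValue', 'FromTensor', 'Sum'], succ, pred))
# -> [('Compose', ['Ones', 'IndexSetValue', 'FromTensor', 'Sum'])]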

Preparing the graph: from the fine-grained graph to an executable graph
To make a job executable, its inputs that are produced by other jobs are replaced with Fetch nodes, and the Scheduler submits the resulting Compose subgraphs to the Workers, where processors read and write data through shared storage backed by disk.
[Figure: the fused graph with Fetch nodes added, submitted by the Scheduler to Worker processors that use shared storage and disk.]

Scheduling strategy: initial job assignment
When the amount of computation is the same, IO overhead becomes an important factor in performance. One way to reduce IO overhead: assigning the initial jobs sensibly across Workers can significantly reduce data copying.
[Figure: the same chunk graph assigned across Worker 1 and Worker 2.]

Scheduling strategy: successor job selection
Ideally, data produced by a predecessor job is consumed by its successor jobs as soon as it is produced. Depth-first strategy: when choosing what to run next, pick the ready job with the greatest depth, as sketched below.
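A small sketch of that depth-first choice, assuming hypothetical job names and a plain dict for the dependency structure; the real scheduler of course tracks much more state:

import heapq

def compute_depths(nodes, predecessors):
    depths = {}
    def depth(n):
        if n not in depths:
            preds = predecessors.get(n, [])
            depths[n] = 0 if not preds else 1 + max(depth(p) for p in preds)
        return depths[n]
    for n in nodes:
        depth(n)
    return depths

def pick_next(ready, predecessors):
    depths = compute_depths(ready, predecessors)
    heap = [(-depths[n], n) for n in ready]     # negate so the deepest ready job pops first
    heapq.heapify(heap)
    return heapq.heappop(heap)[1]

preds = {'dot_0': [], 'sum_0': ['dot_0'], 'ones_1': []}
print(pick_next(['sum_0', 'ones_1'], preds))    # -> 'sum_0' (depth 1) beats 'ones_1' (depth 0)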

Scheduling strategy: job dispatch
Worker-centric dispatch: the Scheduler submits ready jobs to the Workers; each Worker manages its own job queue (a TaskQueueActor) and grabs a job to execute whenever it has free resources.
- Advantage: the Worker knows its own resources best, so dispatch latency is low.
- Disadvantage: there is no global view, so jobs easily pile up on one Worker.
[Figure: AssignerActor (assigns by data distribution), OperandActor, TaskQueueActor, ExecutionActor and StatusActor, with the steps: assign, submit, grab, submit for execution.]

Scheduling strategy: job dispatch (continued)
Scheduler-centric dispatch: the Scheduler tracks the remaining resources of every Worker and is responsible for assigning and submitting jobs, as in the assignment sketch below.
- Advantage: it has global information and can schedule larger workloads.
- Disadvantage: its view of Worker resources lags behind, so there can be extra scheduling latency (which can be mitigated).
[Figure: ResourceActor, AssignerActor and OperandActor, with periodic resource updates, assignment requests, and job submission.]
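A toy sketch of such an assignment decision, preferring a free Worker that already holds the job's inputs. The structures and the rule are illustrative assumptions, not the actual AssignerActor logic:

def assign(job_inputs, data_location, free_slots):
    """job_inputs: {chunk_key: nbytes}; data_location: {chunk_key: worker};
    free_slots: {worker: free CPU slots}."""
    candidates = [w for w, slots in free_slots.items() if slots > 0]
    if not candidates:
        return None                        # nothing is free: the job stays queued
    def local_bytes(worker):               # bytes of this job's inputs already on `worker`
        return sum(nbytes for key, nbytes in job_inputs.items()
                   if data_location.get(key) == worker)
    return max(candidates, key=local_bytes)

free = {'worker-1': 2, 'worker-2': 0}
where = {'chunk-a': 'worker-2', 'chunk-b': 'worker-1'}
print(assign({'chunk-a': 4096, 'chunk-b': 1 << 20}, where, free))   # -> 'worker-1'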

Computation and data storage
A Worker consists of a control process and several processes responsible for computation and IO; the processes exchange data through shared memory.
[Figure: the Worker layout. A control process hosts MemQuotaActor, ExecutionActor and DispatcherActor; a disk IO process hosts IORunnerActor; a networking process hosts SenderActor and ReceiverActor talking to remote ends; CPU calc processes host CpuCalcActor and a CUDA calc process hosts CudaCalcActor; all of them share data through the Plasma Store in shared memory, backed by disk.]
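Chunk data moves between these processes through the Plasma object store, so a result produced in one process can be read in another without copying. A minimal sketch using pyarrow's plasma client of that era; it assumes a plasma_store server is already listening on the socket path below, which is just a placeholder:

import numpy as np
import pyarrow.plasma as plasma

client = plasma.connect("/tmp/plasma")      # socket of the running plasma_store server
object_id = client.put(np.ones((5, 5)))     # e.g. a calc process stores a chunk result
# any other process connected to the same store can now read it without a copy
chunk = client.get(object_id)
print(chunk.sum())                          # -> 25.0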

Computation and data storage
Based on the information in the executable graph, the Worker estimates how much memory the job will need and requests that amount of memory quota, as in the sketch below.
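A sketch of that step, with invented class names and a crude size estimate; the real quota logic also accounts for spilling and for quota held by other jobs:

import math

class MemQuota:
    def __init__(self, total_bytes):
        self.total = total_bytes
        self.allocated = 0
    def request(self, nbytes):              # reserve memory for a job, or refuse
        if self.allocated + nbytes > self.total:
            return False                    # over budget: the job has to wait
        self.allocated += nbytes
        return True
    def release(self, nbytes):              # give the reservation back after the job
        self.allocated -= nbytes

def estimate_bytes(chunk_shapes, itemsize=8):
    # crude estimate: the dense size of every input and output chunk of the job
    return sum(itemsize * math.prod(shape) for shape in chunk_shapes)

quota = MemQuota(total_bytes=4 * 1024 ** 3)        # e.g. a 4 GB budget per worker
need = estimate_bytes([(5, 5), (5, 5), (5,)])      # two input chunks and one output
print(quota.request(need))                         # True -> the job may start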

Computation and data storage
Depending on whether an input chunk is already in the Plasma Store, the Worker either uses it directly, loads it from local disk, or fetches it from another Worker, roughly as sketched below.
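A small sketch of that lookup order, with hypothetical helpers standing in for the real storage and transfer actors:

def load_chunk(key, shared_memory, disk, fetch_remote):
    if key in shared_memory:            # already in the Plasma store: use it directly
        return shared_memory[key]
    if key in disk:                     # spilled to local disk earlier: read it back
        value = disk[key]
    else:                               # not on this worker at all: fetch from its owner
        value = fetch_remote(key)
    shared_memory[key] = value          # keep it in shared memory for the computation
    return value

shm, disk = {}, {'chunk-a': [1, 2, 3]}
print(load_chunk('chunk-a', shm, disk, fetch_remote=lambda key: None))   # -> [1, 2, 3]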

Computation and data storage
The computation itself is carried out with ordinary single-machine libraries (such as NumPy and pandas) inside a calc process, and the results are written back to the Plasma Store.

Because a Mars Worker is made up of multiple processes, a crash of a single process only requires rebuilding the related actors to recover. If a whole Worker fails, Mars Sch…
