Performance Analysis.ppt

Parallel Programming with MPI and OpenMP
Michael J. Quinn

Chapter 7: Performance Analysis

Learning Objectives
- Predict the performance of parallel programs
- Understand barriers to higher performance

Outline
- General speedup formula
- Amdahl's Law
- Gustafson-Barsis's Law
- The Karp-Flatt metric
- The isoefficiency metric

Speedup Formula

Execution Time Components
- Inherently sequential computations: σ(n)
- Potentially parallel computations: φ(n)
- Communication operations: κ(n,p)

Speedup Expression

    ψ(n,p) ≤ (σ(n) + φ(n)) / (σ(n) + φ(n)/p + κ(n,p))

Speedup Plot
[Figure: speedup versus number of processors; the curve rises, then levels off and declines ("elbowing out") as communication cost grows.]

Efficiency
- Efficiency is sequential execution time divided by processors used times parallel execution time: ε(n,p) = ψ(n,p)/p, so

    ε(n,p) ≤ (σ(n) + φ(n)) / (p·σ(n) + φ(n) + p·κ(n,p))

- 0 < ε(n,p) ≤ 1
- All terms > 0 ⇒ ε(n,p) > 0
- Denominator ≥ numerator ⇒ ε(n,p) ≤ 1

Amdahl's Law
- Let f = σ(n) / (σ(n) + φ(n)) be the inherently sequential fraction of the computation
- Ignoring communication cost, the speedup is bounded by

    ψ ≤ 1 / (f + (1 − f)/p)

Example 1
95% of a program's execution time occurs inside a loop that can be executed in parallel. What is the maximum speedup we should expect from a parallel version of the program executing on 8 CPUs?

Example 2
20% of a program's execution time is spent within inherently sequential code. What is the limit to the speedup achievable by a parallel version of the program?

Pop Quiz
An oceanographer gives you a serial program and asks how much faster it might run on 8 processors. You can find only one function amenable to a parallel solution. Benchmarking on a single processor reveals that 80% of the execution time is spent inside this function. What is the best speedup a parallel version is likely to achieve on 8 processors?

Pop Quiz
A computer animation program generates a feature movie frame by frame. Each frame can be generated independently and is output to its own file. If it takes 99 seconds to render a frame and 1 second to output it, how much speedup can be achieved by rendering the movie on 100 processors?

Limitations of Amdahl's Law
- Ignores κ(n,p)
- Overestimates the achievable speedup

Amdahl Effect
- Typically κ(n,p) has lower complexity than φ(n)/p
- As n increases, φ(n)/p dominates κ(n,p)
- As n increases, the speedup increases

Illustration of Amdahl Effect
[Figure: speedup versus processors for several problem sizes; larger n yields curves closer to linear speedup.]

Review of Amdahl's Law
- Treats problem size as a constant
- Shows how execution time decreases as the number of processors increases

Another Perspective
We often use faster computers to solve larger

problem instances. Let us instead treat time as a constant and allow the problem size to increase with the number of processors.

Gustafson-Barsis's Law
- Let s = σ(n) / (σ(n) + φ(n)/p) be the fraction of the parallel execution time spent on inherently sequential computation
- Then the scaled speedup is

    ψ ≤ p + (1 − p)s

Gustafson-Barsis's Law (continued)
- Begin with the parallel execution time
- Estimate the sequential execution time to solve the same problem
- Problem size is an increasing function of p
- Predicts scaled speedup

Example 1
An application running on 10 processors spends 3% of its time in serial code. What is the scaled speedup of the application?

Example 2
What is the maximum fraction of a program's parallel execution time that can be spent in serial code if it is to achieve a scaled speedup of 7 on 8 processors?

Pop Quiz
A parallel program executing on 32 processors spends 5% of its time in sequential code. What is the scaled speedup of this program?

The Karp-Flatt Metric
- Amdahl's Law and Gustafson-Barsis's Law ignore κ(n,p)
- They can overestimate speedup or scaled speedup
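Before turning to the Karp-Flatt metric, the predictions of the two laws above can be checked numerically. A minimal Python sketch (the helper names `amdahl_speedup` and `scaled_speedup` are mine, not the slides'):

```python
def amdahl_speedup(f, p):
    """Amdahl's Law bound: psi <= 1 / (f + (1 - f)/p),
    where f is the inherently sequential fraction sigma/(sigma + phi)."""
    return 1.0 / (f + (1.0 - f) / p)

def scaled_speedup(s, p):
    """Gustafson-Barsis's Law: psi = p + (1 - p)*s, where s is the
    serial fraction of the *parallel* execution time."""
    return p + (1 - p) * s

# Amdahl Example 1: 95% of the time is parallelizable, 8 CPUs.
print(round(amdahl_speedup(0.05, 8), 1))    # 5.9
# Amdahl Example 2: 20% sequential; as p grows, the limit is 1/f.
print(1.0 / 0.2)                            # 5.0
# Gustafson-Barsis Example 1: 10 processors, 3% serial time.
print(round(scaled_speedup(0.03, 10), 2))   # 9.73
# Gustafson-Barsis Example 2, inverted: s = (p - psi)/(p - 1).
print(round((8 - 7) / (8 - 1), 2))          # 0.14
```

The same helpers answer the pop quizzes: `amdahl_speedup(0.2, 8)` ≈ 3.3 for the oceanographer's program, and `scaled_speedup(0.05, 32)` ≈ 30.45 for the 32-processor program.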

Karp and Flatt proposed another metric.

Experimentally Determined Serial Fraction
- The serial fraction e is the ratio of the inherently serial component of the parallel computation plus processor communication and synchronization overhead to the single-processor execution time
- It can be computed from a measured speedup ψ on p processors:

    e = (1/ψ − 1/p) / (1 − 1/p)

Experimentally Determined Serial Fraction (continued)
- Takes parallel overhead into account
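The definition above is a one-line computation. A small Python sketch (the name `karp_flatt` is mine), using speedup values that appear in the examples and pop quiz below:

```python
def karp_flatt(psi, p):
    """Experimentally determined serial fraction from a measured
    speedup psi on p processors: e = (1/psi - 1/p) / (1 - 1/p)."""
    return (1.0 / psi - 1.0 / p) / (1.0 - 1.0 / p)

# Speedup 2.5 on 3 processors (from Example 1's table).
print(round(karp_flatt(2.5, 3), 2))         # 0.1

# The pop quiz data: e grows with p, suggesting rising overhead,
# so a speedup of 10 on 12 processors looks unlikely.
for p, psi in [(4, 3.9), (8, 6.5)]:
    print(p, round(karp_flatt(psi, p), 3))  # 4 0.009, then 8 0.033
```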

- Detects other sources of overhead or inefficiency ignored in the speedup model:
  - Process startup time
  - Process synchronization time
  - Imbalanced workload
  - Architectural overhead

Example 1

    p  |   2 |   3 |   4 |   5 |   6 |   7 |   8
    ψ  | 1.8 | 2.5 | 3.1 | 3.6 | 4.0 | 4.4 | 4.7
    e  | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 | 0.1

What is the primary reason for a speedup of only 4.7 on 8 CPUs?
Since e is constant, a large serial fraction is the primary reason.

Example 2

    p  |     2 |     3 |     4 |     5 |     6 |     7 |     8
    ψ  |   1.9 |   2.6 |   3.2 |   3.7 |   4.1 |   4.5 |   4.7
    e  | 0.070 | 0.075 | 0.080 | 0.085 | 0.090 | 0.095 | 0.100

What is the primary reason for a speedup of only 4.7 on 8 CPUs?
Since e is steadily increasing, parallel overhead is the primary reason.

Pop Quiz
Is this program likely to achieve a speedup of 10 on 12 processors?

    p  |   4 |   8 | 12
    ψ  | 3.9 | 6.5 |  ?

Isoefficiency Metric
- Parallel system: a parallel program executing on a parallel computer
- Scalability of a parallel system: a measure of its ability to increase performance as the number of processors increases
- A scalable system maintains efficiency as processors are added
- Isoefficiency is a way to measure scalability

Isoefficiency Derivation Steps
- Begin with the speedup formula
- Compute the total amount of overhead
- Assume efficiency remains constant
- Determine the relation between sequential execution time and overhead
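The steps above can be illustrated numerically with a toy cost model. In this Python sketch the model σ(n) = 0, φ(n) = n, κ(n,p) = log₂ p is my own choice (a reduction-like pattern), not taken from the slides:

```python
import math

def parallel_time(n, p):
    """T(n,p) = sigma(n) + phi(n)/p + kappa(n,p), with sigma = 0,
    phi(n) = n, kappa(n,p) = log2(p)."""
    return n / p + math.log2(p)

def efficiency(n, p):
    """epsilon(n,p) = T(n,1) / (p * T(n,p)); here T(n,1) = n."""
    return n / (p * parallel_time(n, p))

# Holding efficiency at 0.5 makes C = eps/(1 - eps) = 1, so the
# isoefficiency relation T(n,1) >= C*T0(n,p) becomes n >= p*log2(p).
for p in (2, 8, 64):
    n = p * math.log2(p)          # grow the problem with p
    print(p, efficiency(n, p))    # efficiency stays at 0.5
```

Keeping n fixed while raising p makes the efficiency fall, which is exactly why the relation ties problem size to processor count.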

Deriving the Isoefficiency Relation
- Determine the total overhead: T₀(n,p) = (p − 1)·σ(n) + p·κ(n,p)
- Substitute the overhead into the speedup equation
- Substitute T(n,1) = σ(n) + φ(n) and assume efficiency is constant

Isoefficiency Relation

    T(n,1) ≥ C·T₀(n,p), where C = ε(n,p) / (1 − ε(n,p))

Scalability Function
- Suppose the isoefficiency relation is n ≥ f(p)
- Let M(n) denote the memory required for a problem of size n
- M(f(p))/p shows how memory usage per processor must increase to maintain the same efficiency
- We call M(f(p))/p the scalability function

Meaning of Scalability Function
- To maintain efficiency when increasing p, we must increase n
- The maximum problem size is limited by available memory, which is linear in p
- The scalability

function shows how memory usage per processor must grow to maintain efficiency
- A constant scalability function means the parallel system is perfectly scalable

Interpreting the Scalability Function
[Figure: memory needed per processor versus number of processors for scalability functions Cp·log p, Cp, C·log p, and C, against the available memory size per processor; the slowly growing functions can maintain efficiency, while the rapidly growing ones cannot.]

Example 1: Reduction
- Sequential algorithm complexity: T(n,1) = Θ(n)
- Parallel algorithm:
  - Computational complexity = Θ(n/p)
  - Communication complexity = Θ(log p)
- Parallel overhead: T₀(n,p) = Θ(p log p)

Reduction (continued)
- Isoefficiency relation: n ≥ C·p log p
- We ask: to maintain the same level of efficiency, how must n increase when p increases?
- M(n) = n, so the scalability function is M(Cp log p)/p = C log p
- The system has good scalability

Example 2: Floyd's Algorithm
- Sequential time complexity: Θ(n³)
- Parallel computation time: Θ(n³/p)
- Parallel communication time: Θ(n² log p)
- Parallel overhead: T₀(n,p) = Θ(p·n² log p)

Floyd's Algorithm (continued)
- Isoefficiency relation: n³ ≥ C·p n² log p, i.e. n ≥ C·p log p
- M(n) = n², so the scalability function is M(Cp log p)/p = C²p² log²p / p = C²·p log²p
- The parallel system has poor scalability
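The contrast between the two examples above can be made concrete. A Python sketch (my own, with the constant C arbitrarily set to 1) evaluating the two scalability functions as p grows:

```python
import math

C = 1  # the constant from the isoefficiency relation; value assumed

def mem_per_proc_reduction(p):
    """Reduction: M(n) = n and n >= C*p*log2(p), so M(f(p))/p = C*log2(p)."""
    return C * math.log2(p)

def mem_per_proc_floyd(p):
    """Floyd's algorithm: M(n) = n**2 and n >= C*p*log2(p),
    so M(f(p))/p = (C*p*log2(p))**2 / p = C**2 * p * log2(p)**2."""
    return (C * p * math.log2(p)) ** 2 / p

for p in (4, 16, 64):
    # log-growth (good scalability) vs. p*log^2(p) growth (poor scalability)
    print(p, mem_per_proc_reduction(p), mem_per_proc_floyd(p))
```

For the reduction, per-processor memory grows only logarithmically in p; for Floyd's algorithm it grows faster than linearly, so available memory is exhausted quickly as processors are added.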
