Computing Platform Deployment and Testing (Bilingual) Courseware - Project 3: AI Vision Algorithm Platform Deployment (《計(jì)算平臺(tái)部署與測(cè)試(雙語(yǔ))》課件 - 項(xiàng)目三:AI視覺(jué)算法平臺(tái)部署)

Understanding Deep Learning

Are you familiar with these scenarios? What are the core technological principles that support modern intelligent systems?

Deep Learning: Feature Learning Based on Deep Neural Networks
Technical features:
- An important branch of machine learning technology
- Based on a multi-layer neural network architecture
- Capable of end-to-end feature learning
- Supports large-scale distributed data processing
(Diagram: Input Layer -> Hidden Layer -> Output Layer)
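As an illustration of the Input Layer -> Hidden Layer -> Output Layer structure above, the following minimal sketch builds a two-layer network in PyTorch (the framework introduced in the next section). It is not part of the courseware; the layer sizes and the input are arbitrary assumptions chosen for illustration.

```python
import torch
import torch.nn as nn

# Minimal network mirroring the Input -> Hidden -> Output diagram.
# The layer sizes (4 -> 8 -> 2) are arbitrary, chosen only for illustration.
model = nn.Sequential(
    nn.Linear(4, 8),  # input layer to hidden layer
    nn.ReLU(),        # non-linearity that enables feature learning
    nn.Linear(8, 2),  # hidden layer to output layer
)

x = torch.randn(1, 4)  # one sample with 4 input features
print(model(x))        # raw scores from the 2 output units
```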

Artificial Intelligence Technology Architecture
- Deep learning: the core technological innovation area
- Machine learning: the mid-level technological implementation path
- Artificial intelligence: the outer-layer technological scope
Technological positioning: deep learning is an important direction in the development of machine learning technologies, and together they form the core technological foundation of artificial intelligence.

Deep Learning Technology Evolution
- 1950-1970: Theoretical foundation stage – the perceptron model proposed
- 1980-1990: Algorithm breakthrough stage – development of the backpropagation algorithm
- 2006-2012: Technological revival stage – proposal of the deep belief network
- 2012-present: Application boom stage – breakthrough achievements of AlexNet
In 1986, Geoffrey Hinton and his colleagues published a paper in Nature explaining how to effectively train neural networks using the backpropagation algorithm.

Core Driving Factors of Technological Development
- Data resources: ultra-large-scale training datasets generated in the Internet era
- Computing power: parallel computing architectures enabled by GPUs and other specialized processors
- Algorithmic innovation: continuous optimization and improvement of network architectures and training methods

Technical Analysis of Deep Learning Development Frameworks

Analysis of Deep Learning Application Areas

Summary
Deep learning technology will continue to drive the intelligent transformation of various industries. Establishing a systematic understanding of the technology is an essential foundation for professional learning.

Understanding PyTorch

Artificial intelligence is all around us.

Introduction to PyTorch
- What PyTorch is: an open-source deep learning framework
- Developer: Facebook AI Research (FAIR)
- Core features: dynamic computation graph, ease of use, and an active community

Components of PyTorch
Core components:
- Torch Tensor: multidimensional arrays
- Autograd: automatic differentiation
- nn module: neural network construction
- optim module: optimizers
- DataLoader & Dataset: data processing

PyTorch Basics: What Is a Tensor
- A tensor is similar to a multidimensional array
- Tensor operations: creation, indexing, and computation

PyTorch Basics: Computational Graph & Autograd
- Computational graph: nodes represent operations, and edges represent data
- Autograd: the automatic differentiation mechanism
(A combined code sketch of these components follows at the end of this section.)

A Brief History of PyTorch
- 2016: the precursor to PyTorch 1.0 was released
- 2018: the stable version was released and gained widespread popularity
- After 2020: widely applied in academia and industry

PyTorch Installation and Configuration
- Installation methods: via pip or conda
- CUDA support (GPU acceleration)
- Verify the installation: import torch; print(torch.__version__)

Practical Applications

Summary
- PyTorch concept and components
- Basics: tensors, computational graphs, and automatic differentiation
- Development history
- Installation and configuration methods
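To tie the components above together, here is a minimal illustrative sketch (not taken from the courseware) that verifies the installation, creates tensors, differentiates through a small computational graph with autograd, and runs one training step with the nn and optim modules. The data, layer sizes, and learning rate are arbitrary assumptions.

```python
import torch
import torch.nn as nn

print(torch.__version__)          # verify the installation
print(torch.cuda.is_available())  # True if CUDA (GPU acceleration) is usable

# Tensors: creation, indexing, and computation
a = torch.tensor([[1.0, 2.0], [3.0, 4.0]])
print(a[0, 1])   # indexing -> tensor(2.)
print(a @ a.T)   # matrix multiplication

# Autograd: build a computational graph and differentiate through it
x = torch.tensor(3.0, requires_grad=True)
y = x ** 2 + 2 * x   # nodes are operations; edges carry tensor data
y.backward()         # dy/dx = 2x + 2
print(x.grad)        # tensor(8.)

# nn + optim: one training step of a tiny model on a random batch
model = nn.Sequential(nn.Linear(2, 4), nn.ReLU(), nn.Linear(4, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

inputs = torch.randn(8, 2)    # stand-in for a batch from a DataLoader
targets = torch.randn(8, 1)

optimizer.zero_grad()
loss = loss_fn(model(inputs), targets)
loss.backward()   # autograd computes gradients for all parameters
optimizer.step()  # the optimizer updates the parameters
print(loss.item())
```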
Understanding YOLO

The "Eyes" in Our Daily Lives
Behind these scenarios, what technology enables machines to automatically "see" and "recognize" objects?

Object Detection = Localization + Classification
In an image, localize the position of an object (localization) and classify what that object is (classification).

Early Object Detection Methods
Representative algorithms:
- R-CNN (2014): region-proposal-based approach
- Fast R-CNN and Faster R-CNN: speed gradually improved
Limitations:
- Slow detection speed
- Complex structure, difficult to deploy

YOLO: A Unified Real-Time Object Detection Framework
(Diagram - left: the traditional two-stage detection pipeline (region proposal + classification and regression); right: YOLO single-stage detection)
Core advantages of YOLO:
- Reframes object detection as a single regression problem.
- Processes the entire image only once to directly predict object locations and classes.
- Significantly improves detection speed, enabling real-time applications.

Evolution and Optimization of the YOLO Series Models
- 2015: YOLOv1 – proposed a unified framework for real-time object detection.
- 2016: YOLOv2 – introduced anchor boxes, improving recall and multi-scale detection capability.
- 2018: YOLOv3 – adopted multi-scale prediction, achieving a good balance between speed and accuracy.
- 2020: YOLOv4 / YOLOv5 – integrated multiple training tricks; YOLOv5 became widely used for its engineering friendliness.
- 2022-present: YOLOv6 / YOLOv7 / YOLOv8 ... – continuous innovation in network architectures and training strategies.
- 2025.09: YOLOv26 – a new architecture designed for edge computing and low-power devices.

Environment Configuration Requirements
Configuration requirements:
- Python 3.7 or above
- PyCharm IDE development environment
- Ultralytics YOLO package
Purpose: provides the foundational platform for running and developing the algorithm.
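With the environment above in place, running detection with the Ultralytics YOLO package typically looks like the minimal sketch below. This is an illustrative example rather than part of the courseware: the yolov8n.pt weights name and the image path are assumptions (Ultralytics downloads pretrained weights automatically if they are not found locally).

```python
# Assumes the Ultralytics package is installed, e.g. via `pip install ultralytics`.
from ultralytics import YOLO

# Load a small pretrained detection model (weights name is an assumption).
model = YOLO("yolov8n.pt")

# Run detection on an image; the path is a placeholder.
results = model("bus.jpg")

# Print detected class names, confidence scores, and bounding boxes.
for box in results[0].boxes:
    cls_id = int(box.cls[0])        # predicted class index
    score = float(box.conf[0])      # confidence score
    xyxy = box.xyxy[0].tolist()     # box corners [x1, y1, x2, y2]
    print(results[0].names[cls_id], score, xyxy)
```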
