《人工智能與數(shù)據(jù)挖掘教學(xué)》 (Teaching Artificial Intelligence and Data Mining)

Chapter 8
Neural Networks
Part III: Advanced Data Mining Techniques

Content
1. What & why ANN (8.1 Feedforward Neural Network)
2. How ANN works: the working principle (8.2.1 Supervised Learning)
3. The most popular ANN: the Backpropagation Network (8.5.1 The Backpropagation Algorithm: An Example)

1. What & Why ANN: Artificial Neural Networks (ANN)
- An ANN is an information-processing technology that emulates a biological neural network.
- The biological-to-artificial analogy:
  - Neuron (神經(jīng)元) vs. node (transformation)
  - Dendrite (樹突) vs. input
  - Axon (軸突) vs. output
  - Synapse (神經(jīng)鍵) vs. weight
- Work on ANN started in the 1970s and became very popular in the 1990s because of the advancement of computer technology.

What Is ANN: Basics
- Types of ANN are distinguished by network structure (see Figures 17.9 & 17.10 in Turban, 2000, 5th edition, p. 663):
  - number of hidden layers
  - number of hidden nodes
  - feedforward or feedbackward (the latter for time-dependent problems)
  - links between nodes (presence or absence of links)
- The ultimate objective of training: obtain a set of weights that makes all the instances in the training data predicted as correctly as possible.
- Back-propagation is one type of ANN that can be used for classification and estimation:
  - multi-layer: input layer, hidden layer(s), output layer
  - fully connected
  - feedforward
  - error back-propagation

2. How ANN Works: The Working Principle (I)
- Step 1: Collect data.
- Step 2: Separate the data into training and test sets, for network training and validation respectively.
- Step 3: Select the network structure, learning algorithm, and parameters:
  - set the initial weights, either by rules or randomly
  - set the rate of learning (the pace at which weights are adjusted)
  - select a learning algorithm (more than a hundred learning algorithms are available for various situations and configurations)
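Step 2 above, separating the data into training and test sets, can be sketched as follows. This is a minimal sketch with a made-up dataset; the 75/25 split ratio and the helper name are my assumptions, not from the slides.

```python
import random

def train_test_split(data, test_fraction=0.25, seed=42):
    """Shuffle the data, then split it into a training set and a test set."""
    rng = random.Random(seed)
    shuffled = data[:]          # copy so the original order is untouched
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_fraction)
    return shuffled[n_test:], shuffled[:n_test]   # (training, test)

# Hypothetical dataset: 100 instances of (feature vector, class label)
data = [([i / 100, (100 - i) / 100], i % 2) for i in range(100)]
train_set, test_set = train_test_split(data)
print(len(train_set), len(test_set))  # 75 25
```

The training set is used in Step 4 to adjust the weights; the held-out test set is used in Step 5 to estimate accuracy on data the network has never seen.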

2. ANN Working Principle (II)
- Step 4: Train the network.
  - Compute the outputs.
  - Compare the outputs with the desired targets; the difference between the outputs and the desired targets is called the delta.
  - Adjust the weights and repeat the process to minimize the delta. The objective of training is to minimize the delta (error); the final result of training is a set of weights.
- Step 5: Test the network. Use the test set, comparing test results to historical results, to find out the accuracy of the network.
- Step 6: Deploy the developed network application if the test accuracy is acceptable.

2. ANN Working Principle (III): Example
Example 1: the OR operation (see the table below), with two input elements, X1 and X2.

  Case   X1   X2   Desired result
   1      0    0    0
   2      0    1    1 (positive)
   3      1    0    1 (positive)
   4      1    1    1 (positive)

2. ANN Working Principle (IV): Example (cont.)
- Network structure: one layer (see the next slide).
- Learning algorithm:
  - weighted sum (summation function): Y1 = ΣXiWi
  - transformation (transfer) function: if Y1 is less than the threshold, Y = 0; otherwise Y = 1
  - Delta = Z − Y
  - Wi(final) = Wi(initial) + Alpha × Delta × Xi
- Initial parameters:
  - rate of learning: alpha = 0.2
  - threshold = 0.5
  - initial weights: 0.1 and 0.3
- Notes: weights are initially random, and the value of the learning rate, alpha, is set low at first.

Processing Information in an Artificial Neuron
(Diagram: inputs x1 and x2, with weights w1j and w2j, feed neuron j; the summation Σwij·xi is passed through a transfer function to produce the output Yj.)

3. The Back-propagation Network
- Network topology:
  - multi-layer: input layer, hidden layer(s), output layer
  - fully connected
  - feedforward
  - error back-propagation
  - weights initialized with random values
(Diagram: an input vector xi enters the input nodes; weighted links wij lead through the hidden nodes to the output nodes, producing the output vector.)

The Back-propagation Algorithm
For each node:
1. Compute the net input to the unit using the summation function.
2. Compute the output value using the activation function (i.e., the sigmoid function).
3. Compute the error.
4. Update the weights (and the bias) based on the error.
5. Terminating conditions:
  - all Δwij in the previous epoch (周期) were so small as to be below some specified threshold, or
  - the percentage of samples misclassified in the previous epoch is below some threshold, or
  - a pre-specified number of epochs has expired.
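The one-layer OR example, using the summation function, threshold transfer function, and delta-rule weight update given above, can be sketched in Python. The parameters (alpha = 0.2, threshold = 0.5, initial weights 0.1 and 0.3) are the slide's; the variable names are mine.

```python
# Single-node network learning the OR function with the delta rule.
cases = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]  # OR truth table
weights = [0.1, 0.3]
alpha, threshold = 0.2, 0.5

def predict(x, w, threshold):
    """Weighted sum Y1 = sum(Xi * Wi), then threshold transfer: Y = 0 or 1."""
    y1 = sum(xi * wi for xi, wi in zip(x, w))
    return 1 if y1 >= threshold else 0

for epoch in range(10):
    errors = 0
    for x, z in cases:
        y = predict(x, weights, threshold)
        delta = z - y                       # Delta = Z - Y
        if delta != 0:
            errors += 1
            for i, xi in enumerate(x):      # Wi <- Wi + Alpha * Delta * Xi
                weights[i] += alpha * delta * xi
    if errors == 0:                         # stop once an epoch has no errors
        break

print(weights)  # [0.5, 0.5]
```

After three epochs both weights reach 0.5, so any case with at least one active input sums to the 0.5 threshold and fires, which is exactly the OR behavior in the truth table.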

Backpropagation Error: Output Layer
(The formula on this slide was an image and is not in the extracted text.)

Backpropagation Error: Hidden Layer
(The formula on this slide was an image and is not in the extracted text.)

The Delta Rule
(Formula not extracted.)

Root Mean Squared Error
(Formula not extracted.)

3. Back-propagation (cont.)
- To increase network accuracy and training speed, adjust:
  - the network topology:
    - number of nodes in the input layer
    - number of hidden layers (usually one, no more than two)
    - number of nodes in each hidden layer
    - number of nodes in the output layer
  - the initial weights, the learning parameter, and the terminating condition.
- Training process:
  - Feed the training instances.
  - Determine the output error.
  - Update the weights.
  - Repeat until the terminating condition is met.
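The formulas named on the four slides above (output-layer error, hidden-layer error, the delta rule, and RMSE) were embedded as images and did not survive extraction. For reference, their standard textbook forms are below; the symbols (O for outputs, T for targets, w for weights, r for the learning rate) are my assumptions, not taken from the source.

```latex
% Error at an output node j (sigmoid activation), target T_j, output O_j:
\[ \mathit{Error}_j = O_j\,(1 - O_j)\,(T_j - O_j) \]

% Error at a hidden node j, summed over the downstream nodes k it feeds:
\[ \mathit{Error}_j = O_j\,(1 - O_j) \sum_k \mathit{Error}_k\, w_{jk} \]

% The delta rule: weight update with learning rate r and upstream output O_i:
\[ \Delta w_{ij} = r\,\mathit{Error}_j\, O_i \]

% Root mean squared error over n output comparisons:
\[ \mathrm{RMSE} = \sqrt{\tfrac{1}{n} \sum_{i=1}^{n} (T_i - O_i)^2} \]
```

The O(1 − O) factor in the first two equations is the derivative of the sigmoid function named in step 2 of the algorithm, which is why the error shrinks when a node's output saturates near 0 or 1.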

Supervised Learning with Feed-Forward Networks: Backpropagation Learning
(Diagram on this slide not extracted.)

Summary: Decisions the Builder Must Make
- Network topology: number of hidden layers, number of nodes in each layer, and feedback
- Learning algorithm
- Parameters: initial weights, learning rate
- Size of the training and test data
The structure and parameters determine the length of training time and the accuracy of the network.
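The training loop summarized above (feed instances, compute outputs through the sigmoid, back-propagate the error, update the weights with the delta rule) can be made concrete with a tiny 2-2-1 network. This is a minimal sketch under my own assumptions: no bias terms, a single made-up training instance, and a learning rate of 0.5; it is not the textbook's code.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(x, w_hidden, w_out):
    """Step 1-2: net input (weighted sum), then sigmoid activation, per layer."""
    h = [sigmoid(sum(wi * xi for wi, xi in zip(w, x))) for w in w_hidden]
    o = sigmoid(sum(wi * hi for wi, hi in zip(w_out, h)))
    return h, o

def train_step(x, target, w_hidden, w_out, r):
    """Steps 3-4: compute output and hidden errors, then apply the delta rule."""
    h, o = forward(x, w_hidden, w_out)
    err_o = o * (1 - o) * (target - o)              # output-layer error
    err_h = [hi * (1 - hi) * err_o * w_out[j]       # hidden-layer errors
             for j, hi in enumerate(h)]             # (uses the pre-update w_out)
    for j in range(len(w_out)):                     # delta rule, output weights
        w_out[j] += r * err_o * h[j]
    for j, w in enumerate(w_hidden):                # delta rule, hidden weights
        for i in range(len(w)):
            w[i] += r * err_h[j] * x[i]

random.seed(0)
w_hidden = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
w_out = [random.uniform(-1, 1) for _ in range(2)]

x, target = [1.0, 0.0], 1.0
_, before = forward(x, w_hidden, w_out)
for _ in range(50):
    train_step(x, target, w_hidden, w_out, r=0.5)
_, after = forward(x, w_hidden, w_out)
assert abs(target - after) < abs(target - before)  # the error shrinks with training
```

Repeating the step until a terminating condition is met (small weight changes, low misclassification rate, or an epoch limit) is exactly the loop described in the algorithm slide.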

Neural Network Input Format (Normalization: Categorical to Numerical)
- All inputs and outputs must be numerical and lie in [0, 1].
- Categorical attributes (e.g., an attribute with 4 possible values):
  - ordinal: set to 0, 0.33, 0.66, 1
  - nominal: set to [0,0], [0,1], [1,0], [1,1]
- Numerical attributes: scale into [0, 1].
  (The scaling formula on this slide was an image and is not in the extracted text.)
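The input conversions above can be sketched as follows. The helper names are mine, and since the slide's formula for numerical attributes was not extracted, the usual min-max scaling form is assumed here.

```python
def normalize_numeric(x, lo, hi):
    """Min-max scaling into [0, 1] (assumed standard form: (x - min) / (max - min))."""
    return (x - lo) / (hi - lo)

def encode_ordinal(value, ordered_values):
    """Ordinal attribute: evenly spaced codes from 0 to 1, e.g. 0, 0.33, 0.66, 1."""
    i = ordered_values.index(value)
    return round(i / (len(ordered_values) - 1), 2)

def encode_nominal(value, values):
    """Nominal attribute with 4 values: two-bit codes [0,0], [0,1], [1,0], [1,1]."""
    i = values.index(value)
    return [i // 2, i % 2]

sizes = ["small", "medium", "large", "huge"]       # hypothetical 4-value attribute
print(encode_ordinal("medium", sizes))             # 0.33
print(encode_nominal("large", sizes))              # [1, 0]
print(normalize_numeric(75.0, 50.0, 150.0))        # 0.25
```

The two-bit nominal code keeps the input layer small (2 nodes instead of 4), at the cost of implying spurious similarity between codes that share a bit; a one-node-per-value scheme is the common alternative.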

Neural Network Output Format
- Categorical attributes (numerical to categorical): e.g., with classes coded as type 0 and type 1, a network output such as 0.45 must be mapped back to one of the two types.
- Numerical attributes ([0, 1] back to the ordinary value): value = Min + X × (Max − Min)

Homework
P264, Computational Questions, No. 2: with r = 0.5 and Tk = 0.65, adjust all weights for one epoch.

Case Study Example: Bankruptcy Prediction with Neural Networks
- Structure: three-layer network, back-propagation
- Training data: a small set of well-known financial ratios, with data available on bankruptcy outcomes
- Supervised network

Architecture of the Bankruptcy Prediction Neural Network
(Diagram: five input nodes X1-X5 feed a hidden layer, which feeds one output node: bankrupt = 0, not bankrupt = 1.)

Bankruptcy Prediction: Network Architecture
Five input nodes: X1: Working
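The two conversions on the Neural Network Output Format slide above can be sketched as follows. The denormalization formula Min + X × (Max − Min) is the slide's; the 0.5 cutoff for mapping an output such as 0.45 to class 0 is my assumption, since the slide does not state how the mapping is done.

```python
def decode_numeric(y, lo, hi):
    """Map a network output y in [0, 1] back to the ordinary value: Min + X * (Max - Min)."""
    return lo + y * (hi - lo)

def decode_class(y, cutoff=0.5):
    """Map a numerical output to a class coded 0 or 1 (the 0.5 cutoff is an assumption)."""
    return 1 if y >= cutoff else 0

print(decode_numeric(0.25, 50.0, 150.0))  # 75.0
print(decode_class(0.45))                 # 0
```

Note that decode_numeric is the exact inverse of min-max input scaling, so a network trained on [0, 1] targets can report predictions in the attribute's original units.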
