版權(quán)說明:本文檔由用戶提供并上傳,收益歸屬內(nèi)容提供方,若內(nèi)容存在侵權(quán),請進行舉報或認領(lǐng)
文檔簡介
1 Give the definitions or your comprehensions of the following terms. (12')
1.1 The inductive learning hypothesis (P17)
1.2 Overfitting (P49)
1.4 Consistent learner (P148)

2 Give brief answers to the following questions. (15')
2.2 If the size of a version space is |VS|, in general what is the smallest number of queries that may be required by a concept learner using the optimal query strategy to perfectly learn the target concept? (P27)
2.3 In general, decision trees represent a disjunction of conjunctions of constraints on the attribute values of instances. What expression does the following decision tree correspond to?
[Figure: a decision tree with root OutLook (branches Sunny / Overcast / Rain); the Sunny branch tests Humidity (High / Normal), the Rain branch tests Wind (Strong / Weak), with Yes/No leaves.]

3 Give the explanation of inductive bias, and list the inductive bias of the CANDIDATE-ELIMINATION algorithm, of decision tree learning (ID3), and of the BACKPROPAGATION algorithm. (10')

4 How to solve overfitting in decision trees and neural networks? (10')
Solution:
Decision tree: stop growing the tree earlier; post-pruning.
Neural network: weight decay; validation set.

5 Prove that the LMS weight update rule performs a gradient descent to minimize the squared error. In particular, define the squared error E as in the text. Now calculate the derivative of E with respect to the weight wi, assuming that V-hat is a linear function as defined in the text. Gradient descent is achieved by updating each weight in proportion to -dE/dwi. Therefore, you must show that the LMS training rule alters weights in this proportion for each training example it encounters. (8')
Solution: With the training values Vtrain(b) <- V-hat(Successor(b)), we can get
E = Sum (Vtrain(b) - V-hat(b))^2, summed over the training examples <b, Vtrain(b)>.
dE/dwi = Sum 2(Vtrain(b) - V-hat(b)) * d(Vtrain(b) - V-hat(b))/dwi = -2 Sum (Vtrain(b) - V-hat(b)) xi
As mentioned in LMS: wi <- wi + eta (Vtrain(b) - V-hat(b)) xi, i.e., delta-wi = eta (Vtrain(b) - V-hat(b)) xi, which is proportional to -dE/dwi.
Therefore, gradient descent is achieved by updating each weight in proportion to -dE/dwi; the LMS rule alters weights in this proportion for each training example it encounters.

6 True or false: if decision tree D2 is an elaboration of tree D1, then D1 is more-general-than D2. Assume D1 and D2 are decision trees representing arbitrary boolean functions, and that D2 is an elaboration of D1 if ID3 could extend D1 to D2. If true give a proof; if false, a counterexample. (Definition: Let hj and hk be boolean-valued functions defined over X. Then hj is more_general_than_or_equal_to hk (written hj >=g hk) if and only if (for all x in X) [(hk(x) = 1) -> (hj(x) = 1)].) (10')
The hypothesis is false. One counterexample is A XOR B: if A != B the training examples are all positive, while if A == B the training examples are all negative. Then, using ID3 to extend D1, the new tree D2 will be equivalent to D1, i.e., D2 is equal to D1.

7 Design a two-input perceptron that implements the boolean function ____. Design a two-layer network of perceptrons that implements ____. (10')

8 Suppose a hypothesis space contains three hypotheses h1, h2 and h3, and the posterior probabilities of these hypotheses given the training data are 0.4, 0.3 and 0.3 respectively. A new instance x is encountered, which is classified positive by h1 but negative by h2 and h3. Give the result and the detailed classification course of the Bayes optimal classifier. (10') (P125)

9 Suppose S is a collection of training-example days described by attributes including Humidity, which can have the values High or Normal. Assume S is a collection containing 10 examples, [7+, 3-]. Of these 10 examples, suppose 3 of the positive and 2 of the negative examples have Humidity = High, and the remainder have Humidity = Normal. Please calculate the information gain due to sorting the original 10 examples by the attribute Humidity. (log2 1 = 0, log2 2 = 1, log2 3 = 1.58, log2 4 = 2, log2 5 = 2.32, log2 6 = 2.58, log2 7 = 2.8, log2 8 = 3, log2 9 = 3.16, log2 10 = 3.32) (5')
Solution:
(a) Here we denote S = [7+, 3-], then Entropy([7+, 3-]) = -(7/10)log2(7/10) - (3/10)log2(3/10) = 0.886;
(b) Gain(S, Humidity): Values(Humidity) = {High, Normal}; S_High = [3+, 2-] (5 examples), S_Normal = [4+, 1-] (5 examples).
Thus Gain = 0.886 - (5/10)*Entropy([3+, 2-]) - (5/10)*Entropy([4+, 1-]) = 0.886 - 0.5*0.971 - 0.5*0.722 = 0.04.

10 Finish the following algorithm. (10')
GRADIENT-DESCENT(training_examples, eta)
Each training example is a pair of the form <x, t>, where x is the vector of input values and t is the target output value. eta is the learning rate (e.g., 0.05).
- Initialize each wi to some small random value.
- Until the termination condition is met, Do
  - Initialize each delta-wi to zero.
  - For each <x, t> in training_examples, Do
    - Input the instance x to the unit and compute the output o.
    - For each linear unit weight wi, Do
      delta-wi <- delta-wi + eta (t - o) xi
  - For each linear unit weight wi, Do
    wi <- wi + delta-wi

FIND-S Algorithm
- Initialize h to the most specific hypothesis in H.
- For each positive training instance x
  - For each attribute constraint ai in h
    - If the constraint ai is satisfied by x, then do nothing;
    - Else replace ai in h by the next more general constraint that is satisfied by x.
- Output hypothesis h.

What is the definition of a learning problem? (5) Use "a checkers learning problem" as an example to state how to design a learning system. (15)
Answer: A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E. (5)
Example: a checkers learning problem:
T: play checkers (1)
P: percentage of games won in a tournament (1)
E: opportunity to play against itself (1)
To design a learning system:
Step 1: Choosing the Training Experience (4)
A checkers learning problem:
Task T: playing checkers
Performance measure P: percent of games won in the world tournament
Training experience E: games played against itself
In order to complete the design of the learning system, we must now choose
1. the exact type of knowledge to be learned
2. a representation for this target knowledge
3. a learning mechanism
Step 2: Choosing the Target Function (4)
1. if b is a final board state that is won, then V(b) = 100
2. if b is a final board state that is lost, then V(b) = -100
3. if b is a final board state that is drawn, then V(b) = 0
4. if b is not a final state in the game, then V(b) = V(b'), where b' is the best final board state that can be achieved starting from b and playing optimally until the end of the game (assuming the opponent plays optimally, as well).
Step 3: Choosing a Representation for the Target Function (4)
x1: the number of black pieces on the board
x2: the number of red pieces on the board
x3: the number of black kings on the board
x4: the number of red kings on the board
x5: the number of black pieces threatened by red (i.e., which can be captured on red's next turn)
x6: the number of red pieces threatened by black.
Thus, our learning program will represent V(b) as a linear function of the form
V(b) = w0 + w1x1 + w2x2 + w3x3 + w4x4 + w5x5 + w6x6
where w0 through w6 are numerical coefficients, or weights, to be chosen by the learning algorithm. Learned values for the weights w1 through w6 will determine the relative importance of the various board features in determining the value of the board, whereas the weight w0 will provide an additive constant to the board value.

Answer: Find-S & Find-G:
Step 1: Initialize S to the most specific hypothesis in H. (1)
S0: {<0, 0, 0, 0, 0, 0>}  (the all-empty hypothesis)
Initialize G to the most general hypothesis in H.
G0: {<?, ?, ?, ?, ?, ?>}
Step 2: The first example is <Sunny, Warm, Normal, Strong, Warm, Same, +>. (3)
S1: {<Sunny, Warm, Normal, Strong, Warm, Same>}
G1: {<?, ?, ?, ?, ?, ?>}
Step 3: The second example is <Sunny, Warm, High, Strong, Warm, Same, +>. (3)
S2: {<Sunny, Warm, ?, Strong, Warm, Same>}
G2: {<?, ?, ?, ?, ?, ?>}
Step 4: The third example is <Rainy, Cold, High, Strong, Warm,
Change, ->. (3)
S3: {<Sunny, Warm, ?, Strong, Warm, Same>}
G3: {<Sunny, ?, ?, ?, ?, ?>, <?, Warm, ?, ?, ?, ?>, <?, ?, ?, ?, ?, Same>}
Step 5: The fourth example is <Sunny, Warm, High, Strong, Cool, Change, +>. (3)
S4: {<Sunny, Warm, ?, Strong, ?, ?>}
G4: {<Sunny, ?, ?, ?, ?, ?>, <?, Warm, ?, ?, ?, ?>}
Finally, all the hypotheses in the version space are: (2)
{<Sunny, Warm, ?, Strong, ?, ?>, <Sunny, ?, ?, Strong, ?, ?>, <Sunny, Warm, ?, ?, ?, ?>, <?, Warm, ?, Strong, ?, ?>, <Sunny, ?, ?, ?, ?, ?>, <?, Warm, ?, ?, ?, ?>}

Answer: Let flog(x) = -x*log2(x) - (1-x)*log2(1-x).
Step 1: choose the root node:
entropy_all = flog(4/10) = 0.971; (2)
gain_outlook = entropy_all - 0.3*flog(1/3) - 0.3*flog(1) - 0.4*flog(1/2) = 0.296; (1)
gain_temperature = entropy_all - 0.3*flog(1/3) - 0.3*flog(1/3) - 0.4*flog(1/2) = 0.02; (1)
gain_humidity = entropy_all - 0.5*flog(2/5) - 0.5*flog(1/5) = 0.125; (1)
gain_wind = entropy_all - 0.6*flog(5/6) - 0.4*flog(1/4) = 0.256; (1)
The root node is "outlook". (2)
[Figure: partial tree with root outlook and branches Sunny, Overcast (all positive), Rainy.]
Step 2: choose the second node:
For the Sunny branch (humidity or temperature):
entropy_sunny = flog(1/3) = 0.918; (1)
sunny_gain_wind = entropy_sunny - (2/3)*flog(1/2) - (1/3)*flog(1) = 0.252; (1)
sunny_gain_humidity = entropy_sunny - (2/3)*flog(1) - (1/3)*flog(1) = 0.918; (1)
sunny_gain_temperature = entropy_sunny - (2/3)*flog(1) - (1/3)*flog(1) = 0.918; (1)
Choose humidity or temperature. (1)
For the Rainy branch (wind):
entropy_rain = flog(1/2) = 1; (1)
rain_gain_wind = entropy_rain - (1/2)*flog(1) - (1/2)*flog(1) = 1; (1)
rain_gain_humidity = entropy_rain - (1/2)*flog(1/2) - (1/2)*flog(1/2) = 0; (1)
rain_gain_temperature = entropy_rain - (1/4)*flog(1) - (3/4)*flog(1/3) = 0.311; (1)
Choose wind. (1)
[Figure: final tree, root outlook; the Sunny branch tests humidity (High -> no, Normal -> yes), Overcast -> yes, and the Rainy branch tests wind (Strong -> no, Weak -> yes); an equivalent tree tests temperature (Hot -> no, Cool -> yes) in place of humidity.] (2)

Answer:
A: The primitive neural units are: the perceptron, the linear unit and the sigmoid unit. (3)
Perceptron: (2) A perceptron takes a vector of real-valued inputs, calculates a linear combination of these inputs, then outputs 1 if the result is greater than some threshold and -1 otherwise. More precisely, given inputs x1 through xn, the output o(x1, ..., xn) computed by the perceptron is
o(x1, ..., xn) = 1 if w0 + w1x1 + ... + wnxn > 0, and -1 otherwise.
Sometimes we write the perceptron function as o(x) = sgn(w . x).
Linear unit: (2) a linear unit is a unit for which the output o is given by o = w . x. Thus, a linear unit corresponds to the first stage of a perceptron, without the threshold.
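The perceptron definition above translates directly into code. A minimal sketch: the weight values w0 = -0.8, w1 = w2 = 0.5 are my own illustrative choice (not from the exam), picked so the unit realizes boolean AND over inputs encoded as 1 / -1.

```python
# A minimal sketch of the perceptron unit defined above:
# o(x) = 1 if w0 + w1*x1 + ... + wn*xn > 0, else -1.
# The weights below are hand-picked (an assumption for illustration)
# so that the unit computes boolean AND over {1, -1} inputs.

def perceptron(weights, x):
    """weights = [w0, w1, ..., wn]; x = [x1, ..., xn]."""
    s = weights[0] + sum(w * xi for w, xi in zip(weights[1:], x))
    return 1 if s > 0 else -1

w = [-0.8, 0.5, 0.5]
outputs = [perceptron(w, x) for x in ([1, 1], [1, -1], [-1, 1], [-1, -1])]
# outputs == [1, -1, -1, -1]: only the input (1, 1) fires, i.e. AND.
```

Returning the sum s itself instead of thresholding it would give the linear unit described above.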
Sigmoid unit: (2) The sigmoid unit is illustrated in a picture like the perceptron: the sigmoid unit first computes a linear combination of its inputs, then applies a threshold to the result. In the case of the sigmoid unit, however, the threshold output is a continuous function of its input. More precisely, the sigmoid unit computes its output o as o = sigma(w . x), where sigma(y) = 1 / (1 + e^(-y)).
B: (Because the question contains a printing error, both the perceptron rule and the delta rule are acceptable; the delta rule is given here.) The derivation process is: (6)
Define E(w) = (1/2) Sum_d (td - od)^2 over the training examples d. Then
dE/dwi = (1/2) Sum_d 2(td - od) * d(td - od)/dwi = Sum_d (td - od)(-xid)
so the gradient descent update is delta-wi = -eta * dE/dwi = eta Sum_d (td - od) xid.

Answer:
P(no) = 5/14, P(yes) = 9/14 (1)
P(sunny|no) = 3/5 (1)
P(cool|no) = 1/5 (1)
P(high|no) = 4/5 (1)
P(strong|no) = 3/5 (1)
P(no|new instance) is proportional to P(no)*P(sunny|no)*P(cool|no)*P(high|no)*P(strong|no) = 5/14 * 3/5 * 1/5 * 4/5 * 3/5 = 0.02057 = 2.057e-2 (2)
P(sunny|yes) = 2/9 (1)
P(cool|yes) = 3/9 (1)
P(high|yes) = 3/9 (1)
P(strong|yes) = 3/9 (1)
P(yes|new instance) is proportional to P(yes)*P(sunny|yes)*P(cool|yes)*P(high|yes)*P(strong|yes) = 9/14 * 2/9 * 3/9 * 3/9 * 3/9 = 0.005291 = 5.291e-3 (2)
ANSWER: NO (2)

Answer:
INDUCTIVE BIAS: (8)
Consider a concept learning algorithm L for the set of instances X. Let c be an arbitrary concept defined over X, and let Dc = {<x, c(x)>} be an arbitrary set of training examples of c. Let L(xi, Dc) denote the classification assigned to the instance xi by L after training on the data Dc. The inductive bias of L is any minimal set of assertions B such that for any target concept c and corresponding training examples Dc:
(for all xi in X) [(B and Dc and xi) |- L(xi, Dc)]
The futility of bias-free learning: (7)
A learner that makes no a priori assumptions regarding the identity of the target concept has no rational basis for classifying any unseen instances. In fact, the only reason that the learner was able to generalize beyond the observed training examples is that it was biased by the inductive bias. Unfortunately, the only instances that will produce a unanimous vote are the previously observed training examples. For all the other instances, taking a vote will be futile: each unobserved instance will be classified positive by precisely half the hypotheses in the version space and will be classified negative by the other half.

In the EnjoySport learning task, every example day is represented by 6 attributes. Given that attribute Sky has three possible values, and that AirTemp, Humidity, Wind, Water and Forecast each have two possible values, explain why the size of the hypothesis space is 973. How would the number of possible instances and possible hypotheses increase with the addition of one attribute A that takes on K possible values?
Write the algorithm of CANDIDATE-ELIMINATION using version spaces. Assume G is the set of maximally general hypotheses in hypothesis space H, and S is the set of maximally specific hypotheses. Consider the following set of training examples for EnjoySport:

Example  Sky    AirTemp  Humidity  Wind    Water  Forecast  EnjoySport
1        Sunny  Warm     Normal    Strong  Warm   Same      Yes
2        Sunny  Warm     High      Strong  Warm   Same      Yes
3        Rainy  Cold     High      Strong  Warm   Change    No
4        Sunny  Warm     High      Strong  Cool   Change    Yes
5        Sunny  Warm     Normal    Weak    Warm   Same      No

What is the entropy of the collection of training examples with respect to the target function classification?
According to the 5 training examples, compute the decision tree that would be learned by ID3, and show the decision tree. (log2 3 = 1.585, log2 5 = 2.322)
Give several approaches to avoid overfitting in decision tree learning. How to determine the correct final tree size?
Write the BACKPROPAGATION algorithm for a feedforward network containing two layers of sigmoid units.
Explain the maximum a posteriori (MAP) hypothesis.
Using a Naive Bayes classifier, classify the new instance:
<Outlook = sunny, Temperature = cool, Humidity = high, Wind = strong>
Our task is to predict the target value (yes or no) of the target concept PlayTennis for this new instance. The table below provides a set of 14 training examples of the target concept.

Day  Outlook   Temperature  Humidity  Wind    PlayTennis
D1   Sunny     Hot          High      Weak    No
D2   Sunny     Hot          High      Strong  No
D3   Overcast  Hot          High      Weak    Yes
D4   Rain      Mild         High      Weak    Yes
D5   Rain      Cool         Normal    Weak    Yes
D6   Rain      Cool         Normal    Strong  No
D7   Overcast  Cool         Normal    Strong  Yes
D8   Sunny     Mild         High      Weak    No
D9   Sunny     Cool         Normal    Weak    Yes
D10  Rain      Mild         Normal    Weak    Yes
D11  Sunny     Mild         Normal    Strong  Yes
D12  Overcast  Mild         High      Strong  Yes
D13  Overcast  Hot          Normal    Weak    Yes
D14  Rain      Mild         High      Strong  No

Question Eight: the definition of three types of fitness functions in genetic algorithms.

Question one: (give an example, e.g., a navigation system or checkers)
Question two:
Initialize: G0 = {<?, ?, ?, ?, ?, ?>}, S0 = {<0, 0, 0, 0, 0, 0>}
Step 1: G1 = {<?, ?, ?, ?, ?, ?>}, S1 = {<Sunny, Warm, Normal, Strong, Warm, Same>}
Step 2: incoming positive instance 2:
G2 = {<?, ?, ?, ?, ?, ?>}, S2 = {<Sunny, Warm, ?, Strong, Warm, Same>}
Step 3: incoming negative instance 3:
G3 = {<Sunny, ?, ?, ?, ?, ?>, <?, Warm, ?, ?, ?, ?>, <?, ?, ?, ?, ?, Same>}
S3 = {<Sunny, Warm, ?, Strong, Warm, Same>}
Step 4: incoming positive instance 4:
S4 = {<Sunny, Warm, ?, Strong, ?, ?>}
G4 = {<Sunny, ?, ?, ?, ?, ?>, <?, Warm, ?, ?, ?, ?>}
Question three:
Entropy(S) = -(3/5)*log2(3/5) - (2/5)*log2(2/5) = 0.971
Gain(S, Sky) = Entropy(S) - [(4/5)*Entropy(S_sunny) + (1/5)*Entropy(S_rainy)] = 0.322
Gain(S, AirTemp) = Gain(S, Wind) = Gain(S, Sky) = 0.322
Gain(S, Humidity) = Gain(S, Forecast) = 0.02
Gain(S, Water) = 0.171
Choose any one of AirTemp, Wind and Sky as the top node. The decision tree is as follows (if Sky is chosen as the top node): [figure not preserved in the source]
Question Four:
Answer: Inductive bias: some prior assumptions about the target concept made by the learner so that it has a basis for classifying unseen instances. Suppose L is a machine learning algorithm and X is the set of instances. L(xi, Dc) denotes the classification assigned to xi by L after training on the examples Dc. Then the inductive bias is a minimal set of assertions B such that, for an arbitrary target concept c and set of training examples Dc:
(for all xi in X) [(B and Dc and xi) |- L(xi, Dc)]
C-E: the target concept is contained in the given hypothesis space H, and the training examples are all correct.
ID3: (a) small trees are preferred over larger trees; (b) trees that place high-information-gain attributes close to the root are preferred over those that do not.
BP: smooth interpolation between data points.
Question Five:
Answer: In naive Bayes classification, we assume that all attributes are independent given the target value, while a Bayesian belief network specifies a set of conditional independence assumptions along with a set of conditional probability distributions.
Question Six: the stochastic gradient descent algorithm.
Question Seven: a naive Bayes example.
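The naive Bayes hand calculation above (the PlayTennis instance classified NO) can be checked with a short sketch. The table literals transcribe the 14 training examples given earlier; the helper names (`score`, `data`) are my own.

```python
# A sketch verifying the naive Bayes hand calculation on the
# 14-example PlayTennis table. Each row is
# (Outlook, Temperature, Humidity, Wind, PlayTennis).

data = [
    ("Sunny", "Hot", "High", "Weak", "No"),
    ("Sunny", "Hot", "High", "Strong", "No"),
    ("Overcast", "Hot", "High", "Weak", "Yes"),
    ("Rain", "Mild", "High", "Weak", "Yes"),
    ("Rain", "Cool", "Normal", "Weak", "Yes"),
    ("Rain", "Cool", "Normal", "Strong", "No"),
    ("Overcast", "Cool", "Normal", "Strong", "Yes"),
    ("Sunny", "Mild", "High", "Weak", "No"),
    ("Sunny", "Cool", "Normal", "Weak", "Yes"),
    ("Rain", "Mild", "Normal", "Weak", "Yes"),
    ("Sunny", "Mild", "Normal", "Strong", "Yes"),
    ("Overcast", "Mild", "High", "Strong", "Yes"),
    ("Overcast", "Hot", "Normal", "Weak", "Yes"),
    ("Rain", "Mild", "High", "Strong", "No"),
]

def score(label, instance):
    """Unnormalized posterior: P(label) * product of P(attr_i = v_i | label)."""
    rows = [r for r in data if r[4] == label]
    p = len(rows) / len(data)                 # prior P(label)
    for i, v in enumerate(instance):          # times each conditional estimate
        p *= sum(1 for r in rows if r[i] == v) / len(rows)
    return p

new = ("Sunny", "Cool", "High", "Strong")
p_no, p_yes = score("No", new), score("Yes", new)
# p_no ~ 0.02057 and p_yes ~ 0.005291, matching the hand calculation,
# so the classifier predicts PlayTennis = No for this instance.
prediction = "No" if p_no > p_yes else "Yes"
```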
Question Eight: the definition of three types of fitness functions in genetic algorithms.
Answer: In order to select one hypothesis according to the fitness function, there are usually three methods: roulette wheel selection, tournament selection and rank selection.
Question nine:
Single-point crossover: crossover mask, e.g., 11110000000.
Two-point crossover: the mask contains a contiguous block of 1s in its middle, so each offspring takes its middle segment from one parent and its two ends from the other.
Uniform crossover: the crossover mask is generated as a random bit string, each bit sampled independently.
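The selection and crossover operators named above can be sketched as follows. This is a minimal illustration, not the exam's expected answer; the population and fitness values are made up.

```python
# Sketches of roulette-wheel (fitness-proportionate) selection and
# single-point crossover. Population and fitness values are invented
# for illustration.
import random

def roulette_select(population, fitness):
    """Pick one hypothesis with probability fitness_i / total fitness."""
    total = sum(fitness)
    r = random.uniform(0, total)
    acc = 0.0
    for hyp, f in zip(population, fitness):
        acc += f
        if r <= acc:
            return hyp
    return population[-1]  # guard against floating-point edge cases

def single_point_crossover(p1, p2, point):
    """Swap the tails of two parent bit strings after `point`
    (a mask like 11110000000 corresponds to point = 4)."""
    return p1[:point] + p2[point:], p2[:point] + p1[point:]

pop, fit = ["h1", "h2", "h3"], [6.0, 3.0, 1.0]
picked = roulette_select(pop, fit)        # "h1" is chosen ~60% of the time
o1, o2 = single_point_crossover("11101", "00011", 2)  # -> "11011", "00101"
```

Tournament and rank selection differ only in how the sampling probability is derived from fitness (pairwise contests with probability p, or a ranking of the sorted population).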