【Answers】Data Mining for Transportation (交通數(shù)據(jù)挖掘技術(shù), Southeast University): MOOC Chapter Assignment Answers

Note: some questions may appear in a different order. After downloading, press Ctrl+F to search for the question you need.

Week 1. Introduction to Data Mining: Test 1

1. Single choice: About the data process, which one is wrong?

Options:
A. When making data discrimination, we compare the target class with one or a set of comparative classes (the contrasting classes).
B. When making data classification, we predict categorical labels excluding unordered ones.
C. When making data characterization, we summarize the data of the class under study (the target class) in general terms.
D. When making data clustering, we group data to form new categories.

Answer: 【When making data classification, we predict categorical labels excluding unordered ones.】

2. Single choice: Which one is wrong about clustering and outliers?

Options:
A. Clustering belongs to supervised learning.
B. Principles of clustering include maximizing intra-class similarity and minimizing inter-class similarity.
C. Outlier analysis can be useful in fraud detection and rare-events analysis.
D. An outlier is a data object that does not comply with the general behavior of the data.

Answer: 【Clustering belongs to supervised learning.】

3. Single choice: Which one is wrong about classification and regression?

Options:
A. Regression analysis is a statistical methodology that is most often used for numeric prediction.
B. We can construct classification models (functions) without any training examples.
C. Classification predicts categorical (discrete, unordered) labels.
D. Regression models predict continuous-valued functions.

Answer: 【We can construct classification models (functions) without any training examples.】

4. Single choice: Which one is not a nominal variable?

Options:
A. Occupation
B. Education
C. Age
D. Color

Answer: 【Age】

5. Single choice: Which one is not a valid alternative name for data mining?

Options:
A. Knowledge extraction
B. Data archeology
C. Data dredging
D. Data harvesting

Answer: 【Data harvesting】

6. Single choice: Which one does not belong to the KDD process?

Options:
A. Data mining
B. Data description
C. Data cleaning
D. Data selection

Answer: 【Data description】

7. Single choice: Which one describes the right order of the knowledge discovery process?

Options:
A. Selection → Preprocessing → Transformation → Data mining → Interpretation/Evaluation
B. Preprocessing → Transformation → Data mining → Selection → Interpretation/Evaluation
C. Data mining → Selection → Interpretation/Evaluation → Preprocessing → Transformation
D. Transformation → Data mining → Selection → Preprocessing → Interpretation/Evaluation

Answer: 【Selection → Preprocessing → Transformation → Data mining → Interpretation/Evaluation】

8. Single choice: Which one is not a description of data mining?

Options:
A. Extraction of interesting patterns or knowledge
B. Exploration and analysis by automatic or semi-automatic means
C. Discovery of meaningful patterns from large quantities of data
D. Applying appropriate statistical analysis methods to analyze the data collected

Answer: 【Applying appropriate statistical analysis methods to analyze the data collected】

9. Single choice: Support vector machines can be used for classification and regression.

Options:
A. True
B. False

Answer: 【True】

10. Single choice: Outlier mining, such as the density-based method, belongs to supervised learning.

Options:
A. True
B. False

Answer: 【False】

Coursework: Analysis of Driving Behavior

1. In this coursework, you are required to use data mining techniques to study abnormal driving behavior. Please download the attachment and read the detailed information in the coursework.docx file. You need to choose one task to do from task 1 and task 2, and then choose one to do from task 3 and task 4. Hope you get a good understanding after learning this course.

Grading criteria: complete structure and a clear citation standard; clean the valueless data in VBOX.csv, and oversample or undersample the data if a supervised learning task is chosen (a minimal cleaning and resampling sketch follows below); for task 1 or task 2, suitable methods, clear steps, and good prediction results; for task 3 or task 4, the proposed solution should be reasonable and feasible.
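The grading criteria above call for cleaning VBOX.csv and resampling before any supervised task. A minimal pandas sketch of that step follows, assuming a binary label column named "abnormal"; the column name and the naive random oversampling are illustrative assumptions, not part of the coursework specification.

import pandas as pd

# Load the coursework data and drop rows that carry no usable signal.
df = pd.read_csv("VBOX.csv")
df = df.dropna(how="all").drop_duplicates()
df = df.dropna()  # or impute, depending on the task chosen

# Naive random oversampling of the minority class (label column is hypothetical).
counts = df["abnormal"].value_counts()
minority = df[df["abnormal"] == counts.idxmin()]
extra = minority.sample(counts.max() - counts.min(), replace=True, random_state=0)
df_balanced = pd.concat([df, extra], ignore_index=True)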

Week 2. Data Pre-processing: Test 2

1. Single choice: Which one is a wrong way to normalize data?

Options:
A. Min-max normalization
B. Simple scaling
C. Z-score normalization
D. Normalization by decimal scaling

Answer: 【Simple scaling】
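For reference, here is a minimal NumPy sketch of the three accepted schemes from this question, using the usual decimal-scaling rule (the smallest integer j such that max|x| / 10^j < 1):

import numpy as np

x = np.array([200.0, 300.0, 400.0, 600.0, 1000.0])

x_minmax = (x - x.min()) / (x.max() - x.min())   # min-max -> values in [0, 1]
x_zscore = (x - x.mean()) / x.std()              # z-score -> mean 0, std 1
j = np.floor(np.log10(np.abs(x).max())) + 1      # smallest j with max|x|/10^j < 1
x_decimal = x / 10 ** j                          # decimal scaling -> |values| < 1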

2. Single choice: Which one is wrong about equal-width (distance) partitioning and equal-depth (frequency) partitioning?

Options:
A. Equal-width partitioning is the most straightforward, but outliers may dominate the presentation.
B. Equal-depth partitioning divides the range into N intervals, each containing approximately the same number of samples.
C. The intervals of the former are not of equal width.
D. The number of tuples per bin is the same when using the latter.

Answer: 【The intervals of the former are not of equal width.】
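The contrast between the two partitioning schemes is easy to see with pandas, where cut() performs equal-width binning and qcut() performs equal-depth (frequency) binning; a toy sketch:

import pandas as pd

prices = pd.Series([4, 8, 15, 21, 21, 24, 25, 28, 34])

equal_width = pd.cut(prices, bins=3)    # equal interval width, varying counts
equal_depth = pd.qcut(prices, q=3)      # ~equal count per bin, varying width
print(equal_width.value_counts())
print(equal_depth.value_counts())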

3. Single choice: Which one is wrong about methods for discretization?

Options:
A. Histogram analysis and binning are both unsupervised methods.
B. Clustering analysis only belongs to top-down splitting.
C. Interval merging by χ² analysis can be applied recursively.
D. Decision-tree analysis is entropy-based discretization.

Answer: 【Clustering analysis only belongs to top-down splitting.】

4. Single choice: How is the new feature space constructed by PCA?

Options:
A. The new feature space is constructed by choosing the features you consider most important.
B. The new feature space is constructed by normalizing the input data.
C. The new feature space is constructed by selecting features randomly.
D. The new feature space is constructed by eliminating the weak components to reduce the size of the data.

Answer: 【The new feature space is constructed by eliminating the weak components to reduce the size of the data.】
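A minimal scikit-learn sketch of the accepted answer: standardize the input, then keep only the strongest components; the 95% explained-variance threshold here is an illustrative choice.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

X = np.random.rand(100, 10)                   # toy data: 100 samples, 10 features
X_std = StandardScaler().fit_transform(X)     # PCA expects centered/scaled input

pca = PCA(n_components=0.95)                  # keep components covering 95% variance
X_new = pca.fit_transform(X_std)              # the new, smaller feature space
print(X_new.shape, pca.explained_variance_ratio_)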

5. Single choice: Which is not one of the major tasks in data preprocessing?

Options:
A. Cleaning
B. Integration
C. Transition
D. Reduction

Answer: 【Transition】

6. Single choice: Which is not a reason we need to preprocess the data?

Options:
A. To save time
B. To make the results meet our hypothesis
C. To avoid unreliable output
D. To eliminate noise

Answer: 【To make the results meet our hypothesis】

7. Multiple choice: Which are the commonly used ways of sampling?

Options:
A. Simple random sample without replacement
B. Simple random sample with replacement
C. Stratified sample
D. Cluster sample

Answer: 【Simple random sample without replacement; Simple random sample with replacement; Stratified sample; Cluster sample】

8. Multiple choice: Which ones are right about wavelet transforms?

Options:
A. Wavelet transforms store large fractions of the strongest of the wavelet coefficients.
B. The DWT decomposes each segment of a time series via the successive use of low-pass and high-pass filtering at appropriate levels.
C. Wavelet transforms can be used for reducing data and smoothing data.
D. Wavelet transforms are applied to pairs of data, resulting in two sets of data of the same length.

Answer: 【The DWT decomposes each segment of a time series via the successive use of low-pass and high-pass filtering at appropriate levels.; Wavelet transforms can be used for reducing data and smoothing data.】
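One level of the DWT described in option B can be sketched with the third-party PyWavelets package (the library choice is an assumption; the course does not prescribe one). The low-pass filter yields approximation coefficients cA and the high-pass filter yields detail coefficients cD, each half the length of the input.

import pywt

signal = [2, 2, 0, 2, 3, 5, 4, 4]
cA, cD = pywt.dwt(signal, "haar")        # low-pass -> cA, high-pass -> cD
print(cA, cD)                            # two sets of 4 coefficients each

smoothed = pywt.idwt(cA, None, "haar")   # reconstruct from cA only = smoothing
print(smoothed)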

9. Multiple choice: Which are the right ways to handle noisy data?

Options:
A. Regression
B. Clustering
C. Wavelet transform (WT)
D. Manual inspection

Answer: 【Regression; Clustering; WT; Manual inspection】

10. Multiple choice: Which are the right ways to fill in missing values?

Options:
A. Smart mean
B. Probable value
C. Ignore
D. Falsify

Answer: 【Smart mean; Probable value; Ignore】

11. Single choice: Discretization means dividing the range of a continuous attribute into intervals.

Options:
A. True
B. False

Answer: 【True】

Week 3. Instance-Based Learning: Test 3

1. Single choice: What's the difference between an eager learner and a lazy learner?

Options:
A. Eager learners generate a model for classification, while lazy learners do not.
B. Eager learners classify a tuple based on its similarity to the stored training tuples, while lazy learners do not.
C. Eager learners simply store the data (or do only a little minor processing), while lazy learners do not.
D. Lazy learners generate a model for classification, while eager learners do not.

Answer: 【Eager learners generate a model for classification, while lazy learners do not.】

2. Multiple choice: What are the major components of KNN?

Options:
A. How to measure similarity?
B. How to choose "k"?
C. How are class labels assigned?
D. How to decide the distance?

Answer: 【How to measure similarity?; How to choose "k"?; How are class labels assigned?】

3. Multiple choice: How to choose the optimal value for k?

Options:
A. Cross-validation can be used to determine a good value, by using an independent data set to validate the k values.
B. Low values for k (like k=1 or k=2) can be noisy and subject to the effects of outliers.
C. A large k value can reduce the overall noise, so the value of k can be as big as possible.
D. Historically, the optimal k for most data sets has been between 3 and 10.

Answer: 【Cross-validation can be used to determine a good value, by using an independent data set to validate the k values.; Low values for k (like k=1 or k=2) can be noisy and subject to the effects of outliers.; Historically, the optimal k for most data sets has been between 3 and 10.】
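Option A's cross-validation recipe, sketched with scikit-learn over the heuristic range 3 to 10 mentioned in option D:

from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
scores = {k: cross_val_score(KNeighborsClassifier(n_neighbors=k), X, y, cv=5).mean()
          for k in range(3, 11)}    # candidate k values from the 3-10 heuristic
best_k = max(scores, key=scores.get)
print(best_k, round(scores[best_k], 3))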

4. Multiple choice: Which of the following ways can be used to obtain attribute weights for attribute-weighted KNN?

Options:
A. Prior knowledge/experience.
B. PCA, FA (factor analysis).
C. Information gain.
D. Gradient descent, simplex methods, and genetic algorithms.

Answer: 【Prior knowledge/experience.; PCA, FA (factor analysis).; Information gain.; Gradient descent, simplex methods, and genetic algorithms.】

5. Single choice: The way to obtain the classification for a new instance from the k nearest neighbors is to take the majority class of the k neighbors.

Options:
A. True
B. False

Answer: 【True】

6. Single choice: The way to obtain the regression value for a new instance from the k nearest neighbors is to take the average value of the k neighbors.

Options:
A. True
B. False

Answer: 【True】

7. Single choice: Normalizing the data before measuring distance can avoid errors caused by different dimensions, self-variations, or large numerical differences.

Options:
A. True
B. False

Answer: 【True】

8. Single choice: With Euclidean distance or Manhattan distance, we can calculate the distance between two instances.

Options:
A. True
B. False

Answer: 【True】
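The two distances written out directly in NumPy:

import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 0.0, 3.0])

euclidean = np.sqrt(np.sum((a - b) ** 2))   # sqrt(9 + 4 + 0) = 3.61
manhattan = np.sum(np.abs(a - b))           # 3 + 2 + 0 = 5.0
print(euclidean, manhattan)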

9. Single choice: Normalizing the data can solve the problem that different attributes have different value ranges.

Options:
A. True
B. False

Answer: 【True】

10. Single choice: At the classification stage, KNN stores all instances or some typical ones.

Options:
A. True
B. False

Answer: 【False】

11. Single choice: At the learning stage, KNN finds the k closest neighbors and then decides the class from the k identified nearest labels.

Options:
A. True
B. False

Answer: 【False】

12. Single choice: The way to obtain instance weights for distance-weighted KNN is to calculate the reciprocal of the squared distance between the object and each neighbor.

Options:
A. True
B. False

Answer: 【True】
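A minimal sketch of distance-weighted KNN using the reciprocal-of-squared-distance rule from question 12 (the small epsilon guarding against division by zero is an implementation detail, not part of the rule):

import numpy as np
from collections import Counter

def weighted_knn_predict(X_train, y_train, x, k=5):
    d = np.sqrt(((X_train - x) ** 2).sum(axis=1))       # Euclidean distances
    idx = np.argsort(d)[:k]                             # the k nearest neighbors
    votes = Counter()
    for i in idx:
        votes[y_train[i]] += 1.0 / (d[i] ** 2 + 1e-12)  # instance weight = 1/d^2
    return votes.most_common(1)[0][0]                   # weighted majority class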

Week 4. Decision Trees: Test 4

1. Multiple choice: Which descriptions are right about the nodes in a decision tree?

Options:
A. Internal nodes test the value of particular features.
B. Leaf nodes specify the class.
C. Branch nodes decide the result.
D. Root nodes decide the start point.

Answer: 【Internal nodes test the value of particular features.; Leaf nodes specify the class.】

2. Multiple choice: Post-pruning in CART consists of the following procedure:

Options:
A. First, consider the cost complexity of a tree.
B. Then, for each internal node N, compute the cost complexity of the subtree at N.
C. Also compute the cost complexity of the subtree at N if it were to be pruned.
D. At last, compare the two values. If pruning the subtree at node N would result in a smaller cost complexity, the subtree is pruned; otherwise, the subtree is kept.

Answer: 【First, consider the cost complexity of a tree.; Then, for each internal node N, compute the cost complexity of the subtree at N.; Also compute the cost complexity of the subtree at N if it were to be pruned.; At last, compare the two values. If pruning the subtree at node N would result in a smaller cost complexity, the subtree is pruned; otherwise, the subtree is kept.】

3. Multiple choice: Which ones are right about pre-pruning and post-pruning?

Options:
A. Both of them are methods to deal with the overfitting problem.
B. Pre-pruning does not split a node if this would result in the goodness measure falling below a threshold.
C. Post-pruning removes branches from a "fully grown" tree.
D. There is no need to choose an appropriate threshold when doing pre-pruning.

Answer: 【Both of them are methods to deal with the overfitting problem.; Pre-pruning does not split a node if this would result in the goodness measure falling below a threshold.; Post-pruning removes branches from a "fully grown" tree.】

4. Multiple choice: Which ones are right about underfitting and overfitting?

Options:
A. Underfitting means poor accuracy both for training data and for unseen samples.
B. Overfitting means high accuracy for training data but poor accuracy for unseen samples.
C. Underfitting implies the model is too simple, so we need to increase the model complexity.
D. Overfitting occurs when there are too many branches, so we need to decrease the model complexity.

Answer: 【Underfitting means poor accuracy both for training data and for unseen samples.; Overfitting means high accuracy for training data but poor accuracy for unseen samples.; Underfitting implies the model is too simple, so we need to increase the model complexity.; Overfitting occurs when there are too many branches, so we need to decrease the model complexity.】

5. Multiple choice: Which are the typical algorithms for generating trees?

Options:
A. ID3
B. C4.5
C. CART
D. PCA

Answer: 【ID3; C4.5; CART】

6. Multiple choice: Computing the information gain for a continuous-valued attribute A when using ID3 consists of the following procedure:

Options:
A. Sort the values of A in increasing order.
B. Consider the midpoint between each pair of adjacent values as a possible split point.
C. Select the split point with the minimum expected information requirement.
D. Split.

Answer: 【Sort the values of A in increasing order.; Consider the midpoint between each pair of adjacent values as a possible split point.; Select the split point with the minimum expected information requirement.; Split.】
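The four steps above can be written out directly; the toy attribute values and labels here are illustrative.

import numpy as np

def entropy(y):
    _, c = np.unique(y, return_counts=True)
    p = c / c.sum()
    return -(p * np.log2(p)).sum()

a = np.array([1.0, 2.0, 4.0, 6.0, 7.0])   # continuous attribute values
y = np.array(["n", "n", "y", "y", "y"])   # class labels

order = np.argsort(a)                     # step A: sort A in increasing order
a, y = a[order], y[order]
mids = (a[:-1] + a[1:]) / 2               # step B: midpoints as candidate splits
info = [((y[a <= m].size * entropy(y[a <= m]) +
          y[a > m].size * entropy(y[a > m])) / y.size, m) for m in mids]
best_info, best_split = min(info)         # step C: minimum expected information
print(best_split)                         # 3.0 cleanly separates "n" from "y"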

7. Single choice: ID3 uses information gain as its attribute selection measure, and the attribute with the lowest information gain is chosen as the splitting attribute for node N.

Options:
A. True
B. False

Answer: 【False】

8. Single choice: A rule is created for each path from the root to a leaf node.

Options:
A. True
B. False

Answer: 【True】

9. Single choice: Gain ratio is used as the attribute selection measure in C4.5, and the formula is GainRatio(A) = Gain(A) / SplitInfo(A).

Options:
A. True
B. False

Answer: 【True】
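A worked sketch of the C4.5 formula on an illustrative toy attribute: information gain is computed as in ID3, then divided by the split information.

import numpy as np

def entropy(y):
    _, c = np.unique(y, return_counts=True)
    p = c / c.sum()
    return -(p * np.log2(p)).sum()

def gain_ratio(attr, labels):
    values, counts = np.unique(attr, return_counts=True)
    w = counts / counts.sum()
    cond = sum(wi * entropy(labels[attr == v]) for wi, v in zip(w, values))
    gain = entropy(labels) - cond            # Gain(A), ID3's measure
    split_info = -(w * np.log2(w)).sum()     # SplitInfo(A), penalizes many values
    return gain / split_info                 # GainRatio(A) = Gain(A)/SplitInfo(A)

attr = np.array(["sunny", "sunny", "rain", "rain", "overcast"])
labels = np.array(["no", "no", "yes", "yes", "yes"])
print(round(gain_ratio(attr, labels), 3))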

10. Single choice: The cost-complexity pruning algorithm used in CART evaluates cost complexity by the number of leaves in the tree and the error rate.

Options:
A. True
B. False

Answer: 【True】
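scikit-learn's decision trees implement CART-style cost-complexity pruning; a minimal sketch (its complexity measure trades the number of leaves against total leaf impurity rather than a raw error rate):

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
path = DecisionTreeClassifier(random_state=0).cost_complexity_pruning_path(X, y)
print(path.ccp_alphas)      # candidate pruning strengths, weakest link first

pruned = DecisionTreeClassifier(random_state=0, ccp_alpha=path.ccp_alphas[-2])
pruned.fit(X, y)            # larger alpha -> more pruning -> fewer leaves
print(pruned.get_n_leaves())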

Week 5. Support Vector Machine: Test 5

1. Multiple choice: Which ones are right about the advantages of SVM?

Options:
A. They are accurate in high-dimensional spaces.
B. They are memory efficient.
C. The algorithm is not prone to overfitting compared with other classification methods.
D. The support vectors are the essential or critical training tuples.

Answer: 【They are accurate in high-dimensional spaces.; They are memory efficient.; The algorithm is not prone to overfitting compared with other classification methods.; The support vectors are the essential or critical training tuples.】

2. Multiple choice: What's the problem with OVR?

Options:
A. Sensitive to the accuracy of the confidence figures produced by the classifiers.
B. The scale of the confidence values may differ between the binary classifiers.
C. The binary classification learners see unbalanced distributions.
D. Only when the class distribution is balanced can balanced distributions be attained.

Answer: 【Sensitive to the accuracy of the confidence figures produced by the classifiers.; The scale of the confidence values may differ between the binary classifiers.; The binary classification learners see unbalanced distributions.】

3. Multiple choice: What adaptations can be made to allow SVM to deal with multiclass classification problems?

Options:
A. One versus rest (OVR).
B. One versus one (OVO).
C. Error-correcting input codes (ECIC).
D. Error-correcting output codes (ECOC).

Answer: 【One versus rest (OVR).; One versus one (OVO).; Error-correcting output codes (ECOC).】
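OVR and OVO can be sketched with scikit-learn's generic wrappers around a binary SVM; with the 3-class iris data, OVR trains one classifier per class and OVO one per pair of classes.

from sklearn.datasets import load_iris
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
ovr = OneVsRestClassifier(SVC(kernel="linear")).fit(X, y)  # 3 classifiers
ovo = OneVsOneClassifier(SVC(kernel="linear")).fit(X, y)   # 3*2/2 = 3 classifiers
print(len(ovr.estimators_), len(ovo.estimators_))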

4. Multiple choice: Which are the typical common kernels?

Options:
A. Linear
B. Polynomial
C. Radial basis function (Gaussian kernel)
D. Sigmoid kernel

Answer: 【Linear; Polynomial; Radial basis function (Gaussian kernel); Sigmoid kernel】
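The four kernels as accepted by scikit-learn's SVC, compared by cross-validation on a toy data set:

from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
for kernel in ["linear", "poly", "rbf", "sigmoid"]:   # the four common kernels
    score = cross_val_score(SVC(kernel=kernel), X, y, cv=5).mean()
    print(kernel, round(score, 3))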

5. Multiple choice: What are the features of SVM?

Options:
A. Extremely slow, but highly accurate.
B. Much less prone to overfitting than other methods.
C. A black-box model.
D. They provide a compact description of the learned model.

Answer: 【Extremely slow, but highly accurate.; Much less prone to overfitting than other methods.; They provide a compact description of the learned model.】

6. Single choice: If you have a big data set, SVM is suitable for efficient computation.

Options:
A. True
B. False

Answer: 【False】

7. Single choice: Regression formulas include three types: linear, nonlinear, and general form.

Options:
A. True
B. False

Answer: 【True】

8. Single choice: Error-correcting output codes (ECOC) is a kind of problem transformation technique.

Options:
A. True
B. False

Answer: 【False】

9. Single choice: There is no structured way and no golden rule for setting the parameters in SVM.

Options:
A. True
B. False

Answer: 【True】

10. Single choice: The kernel trick is used to avoid costly computation and to deal with mapping problems.

Options:
A. True
B. False

Answer: 【True】

Week 6. Outlier Mining: Test 6

1. Multiple choice: Which ones are right about the three methods of outlier mining?

Options:
A. The statistics-based approach is simple and fast, but it has difficulty dealing with periodic data and categorical data.
B. The efficiency of the distance-based approach is low for large data sets in high-dimensional space.
C. The distance-based approach cannot be used on multidimensional data sets.
D. The density-based approach spends little cost on searching the neighborhood.

Answer: 【The statistics-based approach is simple and fast, but it has difficulty dealing with periodic data and categorical data.; The efficiency of the distance-based approach is low for large data sets in high-dimensional space.】

2. Multiple choice: How to pick the right k by a heuristic method for the density-based outlier mining method?

Options:
A. k should be at least 10 to remove unwanted statistical fluctuations.
B. Picking 10 to 20 appears to work well in general.
C. Pick the upper bound value for k as the maximum number of "close-by" objects that can potentially be global outliers.
D. Pick the upper bound value for k as the maximum number of "close-by" objects that can potentially be local outliers.

Answer: 【k should be at least 10 to remove unwanted statistical fluctuations.; Picking 10 to 20 appears to work well in general.; Pick the upper bound value for k as the maximum number of "close-by" objects that can potentially be local outliers.】
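A density-based sketch using the local outlier factor, with n_neighbors playing the role of k and set inside the 10-to-20 heuristic range discussed above:

import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 2)), [[8.0, 8.0]]])  # one planted outlier

lof = LocalOutlierFactor(n_neighbors=20)   # k chosen from the 10-20 heuristic
labels = lof.fit_predict(X)                # -1 marks outliers, 1 marks inliers
print(np.where(labels == -1)[0])           # index 100 should be flagged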

3. Multiple choice: Which ones are methods to detect outliers?

Options:
A. Statistics-based approach
B. Distance-based approach
C. Bulk-based approach
D. Density-based approach

Answer: 【Statistics-based approach; Distance-based approach; Density-based approach】

4. Multiple choice: What are application cases of outlier mining?

Options:
A. Traffic incident detection
B. Credit card fraud detection
C. Network intrusion detection
D. Medical analysis

Answer: 【Traffic incident detection; Credit card fraud detection; Network intrusion detection; Medical analysis】

5. Multiple choice: Which descriptions are right about outliers?

Options:
A. Outliers caused by measurement error
B. Outliers reflecting ground truth
C. Outliers caused by equipment failure
D. Outliers always need to be dropped

Answer: 【Outliers caused by measurement error; Outliers reflecting ground truth; Outliers caused by equipment failure】

6. Single choice: Distance-based outlier mining is not suitable for data sets that do not fit any standard distribution model.

Options:
A. True
B. False

Answer: 【False】

7. Single choice: An outlier is a data object that deviates significantly from the rest of the objects, as if it were generated by a different mechanism.

Options:
A. True
B. False

Answer: 【True】

8. Single choice: Mahalanobis distance accounts for the relative dispersions and inherent correlations among vector elements, which makes it different from Euclidean distance.

Options:
A. True
B. False

Answer: 【True】
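A minimal NumPy sketch of Mahalanobis distance from the sample mean; the inverse covariance matrix is what folds the data's dispersion and correlation into the metric, which plain Euclidean distance ignores.

import numpy as np

rng = np.random.default_rng(0)
X = rng.multivariate_normal([0, 0], [[4.0, 1.5], [1.5, 1.0]], size=500)

mu = X.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
diff = X - mu
d = np.sqrt(np.einsum("ij,jk,ik->i", diff, cov_inv, diff))  # per-point distance
print(d[:5])   # unusually large values suggest discordant observations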

9. Single choice: When identifying outliers with a discordancy test, a data point is considered an outlier if it falls within the confidence interval.

Options:
A. True
B. False

Answer: 【False】

10. Single choice: The statistics-based method requires knowing the distribution of the data and the distribution parameters in advance.

Options:
A. True
B. False

Answer: 【True】

Week 7. Ensemble Learning: Test 7

1. Multiple choice: Which steps are necessary when constructing an ensemble model?

Options:
A. Creating multiple data sets
B. Constructing a set of classifiers from the training data
C. Combining the predictions made by multiple classifiers to obtain the final class label
D. Finding the best-performing prediction to obtain the final class label

Answer: 【Creating multiple data sets; Constructing a set of classifiers from the training data; Combining the predictions made by multiple classifiers to obtain the final class label】

2. Multiple choice: Which ones are right when dealing with the class-imbalance problem?

Options:
A. Oversampling works by decreasing the number of minority (positive) tuples.
B. Undersampling works by increasing the number of majority (negative) tuples.
C. The SMOTE algorithm adds synthetic tuples that are close to the minority tuples in tuple space.
D. Threshold-moving and ensemble methods were empirically observed to outperform oversampling and undersampling.

Answer: 【The SMOTE algorithm adds synthetic tuples that are close to the minority tuples in tuple space.; Threshold-moving and ensemble methods were empirically observed to outperform oversampling and undersampling.】
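A SMOTE sketch using the third-party imbalanced-learn package (the library choice is an assumption; any implementation of the algorithm would do): synthetic minority tuples are interpolated between real minority neighbors until the classes balance.

from collections import Counter
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=1000, weights=[0.95, 0.05], random_state=0)
print(Counter(y))                      # heavily imbalanced: ~950 vs ~50

X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
print(Counter(y_res))                  # balanced with synthetic minority tuples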

3. Multiple choice: How to deal with imbalanced data in 2-class classification?

Options:
A. Oversampling
B. Undersampling
C. Threshold-moving
D. Ensemble techniques
