Data Mining: Concepts and Techniques
Jiawei Han and Micheline Kamber. Morgan Kaufmann Publishers Inc.
Chinese translation by Fan Ming, Meng Xiaofeng, et al. China Machine Press (機(jī)械工業(yè)出版社).

Chapter 3: Data Preprocessing
English slides by Jiawei Han; Chinese slides translated by Fan Ming.

Chapter 3 (outline): Why preprocess the data? · Data cleaning · Data integration · Data reduction · Discretization and concept hierarchy generation · Summary

Why Preprocess the Data?
Data in the real world is dirty:
- Incomplete: missing attribute values, missing attributes of interest, or containing only aggregate data. E.g., occupation = ""
- Noisy: containing errors or outliers. E.g., Salary = "-10"
- Inconsistent: containing discrepancies in codes or names. E.g., Age = "42" while Birthday = "03/07/1997"; ratings formerly coded "1, 2, 3" now coded "A, B, C"; discrepancies between duplicate records

Why Is Data Dirty?
- Incomplete data comes from data not collected, from different considerations at collection time and at analysis time, and from human/hardware/software problems
- Noisy data comes from errors in collection, entry, and transformation
- Inconsistent data comes from different data sources and from functional-dependency violations

Why Is Data Preprocessing Important?
- No quality data, no quality mining results! Quality decisions must be based on quality data; duplicate or missing data, for example, may cause incorrect or even misleading statistics
- A data warehouse needs consistent integration of quality data
- "Data extraction, cleaning, and transformation is the major work of building a data warehouse" (Bill Inmon)

Multi-Dimensional Measures of Data Quality
A widely accepted multidimensional view: accuracy, completeness, consistency, timeliness, believability, value added, interpretability, accessibility. Broad categories: intrinsic, contextual, representational, and accessibility.

Major Tasks in Data Preprocessing
- Data cleaning: fill in missing values, identify outliers, smooth out noise, and correct inconsistencies in the data
- Data integration: integration of multiple databases, data cubes, or files
- Data transformation: normalization and aggregation
- Data reduction: obtain a reduced representation of the data that is much smaller in volume yet produces the same or similar analytical results
- Data discretization: part of data reduction, but of particular importance, especially for numerical data

Forms of Data Preprocessing
[Figure omitted in this transcription: cleaning, integration, transformation, and reduction shown as successive stages.]

Chapter 3 (outline repeated): Why preprocess the data? · Data cleaning · Data integration · Data reduction · Discretization and concept hierarchy generation · Summary

Data Cleaning
Importance:
- "Data cleaning is one of the three biggest problems in data warehousing" (Ralph Kimball)
- "Data cleaning is the number one problem in data warehousing" (DCI survey)
Data cleaning tasks: fill in missing values; identify outliers and smooth out noisy data; correct inconsistent data; resolve redundancy caused by data integration.

Missing Data
Data is not always available; e.g., many tuples have no recorded value for several attributes, such as customer income in sales data. Missing data may be due to:
- equipment malfunction
- inconsistency with other recorded data, leading to deletion
- data not entered due to misunderstanding
- certain data not being considered important at the time of entry
- failure to register the history or changes of the data
Missing data may need to be inferred.

How to Handle Missing Data?
- Ignore the tuple: usually done when the class label is missing (assuming the task is classification); not effective when the percentage of missing values per attribute varies considerably
- Fill in the missing value manually: tedious, and often infeasible
- Fill it in automatically with: a global constant such as "unknown" (which may create a new class!); the attribute mean; the attribute mean for all samples belonging to the same class (smarter); or the most probable value, inferred with, e.g., a Bayesian formula or a decision tree
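The automatic fill-in strategies above are easy to sketch with pandas. A minimal illustration; the column names (cls, income) and values are hypothetical.

```python
import pandas as pd

# Hypothetical toy data: two tuples are missing the income attribute.
df = pd.DataFrame({
    "cls":    ["A", "A", "B", "B", "B"],
    "income": [30.0, None, 50.0, 70.0, None],
})

# Global constant: effectively introduces a new "unknown" class.
const_fill = df["income"].fillna(-1)

# Attribute mean over all samples.
mean_fill = df["income"].fillna(df["income"].mean())

# Attribute mean per class: smarter, since it uses the class label.
class_mean_fill = df["income"].fillna(
    df.groupby("cls")["income"].transform("mean"))

print(class_mean_fill.tolist())  # [30.0, 30.0, 50.0, 70.0, 60.0]
```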
Noisy Data
Noise is random error or variance in a measured variable. Incorrect attribute values may be due to faulty data collection instruments, data entry problems, data transmission problems, technology limitations, or inconsistent naming conventions. Other data problems requiring data cleaning: duplicate records, incomplete data, inconsistent data.

How to Handle Noisy Data?
- Binning: first sort the data and partition it into (equi-depth) bins; then smooth by bin means, bin medians, bin boundaries, etc.
- Clustering: detect and remove outliers
- Combined computer and human inspection: detect suspicious values and have a human check them (e.g., to deal with possible outliers)
- Regression: smooth by fitting the data to regression functions

Simple Discretization Methods: Binning
- Equal-width (distance) partitioning divides the range into N intervals of equal size, a uniform grid. If A and B are the lowest and highest values of the attribute, the interval width is W = (B - A)/N. It is the most straightforward method, but outliers may dominate the presentation, and skewed data is not handled well.
- Equal-depth (frequency) partitioning divides the range into N intervals, each containing approximately the same number of samples. It gives good data scaling, but managing categorical attributes can be tricky.

Binning Methods for Data Smoothing
- Sorted data for price (in dollars): 4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34
- Partition into (equi-depth) bins: Bin 1: 4, 8, 9, 15; Bin 2: 21, 21, 24, 25; Bin 3: 26, 28, 29, 34
- Smoothing by bin means: Bin 1: 9, 9, 9, 9; Bin 2: 23, 23, 23, 23; Bin 3: 29, 29, 29, 29
- Smoothing by bin boundaries: Bin 1: 4, 4, 4, 15; Bin 2: 21, 21, 25, 25; Bin 3: 26, 26, 26, 34
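The worked example above can be reproduced in a few lines of plain Python. A minimal sketch, not library code:

```python
# Equi-depth binning with smoothing by bin means and bin boundaries.
prices = sorted([4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34])
n_bins = 3
depth = len(prices) // n_bins
bins = [prices[i * depth:(i + 1) * depth] for i in range(n_bins)]

# Smooth by bin means (rounded to integers, as on the slide).
by_means = [[round(sum(b) / len(b))] * len(b) for b in bins]

# Smooth by bin boundaries: each value moves to the closer boundary.
by_bounds = [[b[0] if v - b[0] <= b[-1] - v else b[-1] for v in b]
             for b in bins]

print(by_means)   # [[9, 9, 9, 9], [23, 23, 23, 23], [29, 29, 29, 29]]
print(by_bounds)  # [[4, 4, 4, 15], [21, 21, 25, 25], [26, 26, 26, 34]]
```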

Cluster Analysis
[Figure omitted: points grouped into clusters; values falling outside every cluster are treated as outliers.]

Regression
[Figure omitted: data smoothed by fitting the line y = x + 1; the observed value Y1 at X1 is replaced by the fitted value Y1'.]

Chapter 3 (outline repeated): Why preprocess the data? · Data cleaning · Data integration and transformation · Data reduction · Discretization and concept hierarchy generation · Summary

Data Integration
Data integration combines data from multiple sources into a coherent store.
- Schema integration: integrate metadata from different sources
- Entity identification problem: identify real-world entities from multiple data sources, e.g., A.cust-id ≡ B.cust-#
- Detecting and resolving data value conflicts: for the same real-world entity, attribute values from different sources may differ; possible reasons include different representations and different scales, e.g., metric vs. British units

Handling Redundant Data in Data Integration
- Redundant data occur often when multiple databases are integrated: the same attribute may have different names in different databases, and one attribute may be a "derived" attribute in another table, e.g., annual revenue
- Redundant data may be detectable by correlational analysis
- Careful integration of data from multiple sources may help reduce or avoid redundancies and inconsistencies and improve mining speed and quality

Data Transformation
- Smoothing: remove noise from the data
- Aggregation: summarization, data cube construction
- Generalization: concept hierarchy climbing
- Normalization: scale values to fall within a small, specified range (min-max normalization, z-score normalization, normalization by decimal scaling)
- Attribute/feature construction: new attributes constructed from the given ones

Data Transformation: Normalization
- Min-max normalization: v' = (v - min_A) / (max_A - min_A) * (new_max_A - new_min_A) + new_min_A
- Z-score normalization: v' = (v - mean_A) / stddev_A
- Normalization by decimal scaling: v' = v / 10^j, where j is the smallest integer such that max(|v'|) < 1
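A minimal numpy sketch of the three normalizations; the income values are made up, and a [0, 1] target range is assumed for min-max.

```python
import numpy as np

def min_max(v, new_min=0.0, new_max=1.0):
    # v' = (v - min)/(max - min) * (new_max - new_min) + new_min
    return (v - v.min()) / (v.max() - v.min()) * (new_max - new_min) + new_min

def z_score(v):
    # v' = (v - mean) / stddev
    return (v - v.mean()) / v.std()

def decimal_scaling(v):
    # v' = v / 10^j for the smallest j with max(|v'|) < 1
    j = int(np.ceil(np.log10(np.abs(v).max() + 1)))
    return v / 10 ** j

income = np.array([12000.0, 73600.0, 98000.0])
print(min_max(income))          # [0.0, 0.716..., 1.0]
print(z_score(income))
print(decimal_scaling(income))  # [0.12, 0.736, 0.98]
```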

Chapter 3 (outline repeated): Why preprocess the data? · Data cleaning · Data integration and transformation · Data reduction · Discretization and concept hierarchy generation · Summary

Data Reduction Strategies
- A data warehouse may store terabytes of data, so complex data analysis and mining may take a very long time to run on the complete data set
- Data reduction obtains a reduced representation of the data set that is much smaller in volume yet produces the same (or almost the same) analytical results
- Strategies: data cube aggregation; dimensionality reduction (remove unimportant attributes); data compression; numerosity reduction (fit the data into models); discretization and concept hierarchy generation

Data Cube Aggregation
- The lowest level of a data cube holds the aggregated data for an individual entity of interest, e.g., a customer in a phone-calling data warehouse
- Multiple levels of aggregation in data cubes further reduce the size of the data to deal with
- Reference the appropriate levels: use the smallest representation that is sufficient to solve the task
- Queries about aggregated information should be answered using the data cube when possible

Dimensionality Reduction
Feature selection (i.e., attribute subset selection):
- Select a minimum set of features such that the probability distribution of the different classes given the values of those features is as close as possible to the original distribution given the values of all features
- Reduces the number of attributes appearing in the discovered patterns, which makes the patterns easier to understand
- Heuristic methods (due to the exponential number of choices): stepwise forward selection; stepwise backward elimination; combined forward selection and backward elimination; decision-tree induction

Example of Decision Tree Induction
Initial attribute set: {A1, A2, A3, A4, A5, A6}. A decision tree induced on the data tests A4 at the root, then A1 and A6 below it, with Class 1 and Class 2 leaves. The reduced attribute set is the set of attributes appearing in the tree: {A1, A4, A6}.

Heuristic Feature Selection Methods
There are 2^d possible sub-features of d features, so heuristics are used:
- Best single features under the feature-independence assumption: choose by significance tests
- Best stepwise feature selection: pick the best single feature first, then the next best feature conditioned on the first, and so on (see the sketch below)
- Stepwise feature elimination: repeatedly eliminate the worst feature
- Best combined feature selection and elimination
- Optimal branch and bound: use feature elimination and backtracking
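A sketch of stepwise forward selection as described above. The scoring function is deliberately abstract (any model-quality measure could be plugged in), and the `useful` weights are a hypothetical stand-in for real evaluation.

```python
from typing import Callable, FrozenSet, List

def forward_selection(features: List[str],
                      score: Callable[[FrozenSet[str]], float],
                      k: int) -> FrozenSet[str]:
    """Greedily grow the subset one feature at a time, always adding
    the feature whose addition scores best, rather than scanning all
    2^d subsets."""
    selected: FrozenSet[str] = frozenset()
    for _ in range(k):
        best = max((f for f in features if f not in selected),
                   key=lambda f: score(selected | {f}))
        selected = selected | {best}
    return selected

# Hypothetical scorer: pretend A4, A1, A6 carry all the signal,
# mirroring the decision-tree example above.
useful = {"A4": 3.0, "A1": 2.0, "A6": 1.0}
score = lambda s: sum(useful.get(f, 0.0) for f in s)
picked = forward_selection([f"A{i}" for i in range(1, 7)], score, 3)
print(sorted(picked))  # ['A1', 'A4', 'A6']
```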

Data Compression
- String compression: there are extensive theories and well-tuned algorithms; typically lossless, but only limited manipulation is possible without expansion
- Audio/video compression: typically lossy, with progressive refinement; sometimes small fragments of the signal can be reconstructed without reconstructing the whole
- Time sequences are not audio: they are typically short and vary slowly with time

[Figure omitted: original data reduced to compressed data (lossless) versus an approximation of the original data (lossy).]

Wavelet Transforms
- The discrete wavelet transform (DWT) is a linear signal-processing technique with multiresolution analysis
- Compressed approximation: store only a small fraction of the strongest wavelet coefficients
- Similar to the discrete Fourier transform (DFT), but better lossy compression, localized in space
- Method: the length L must be an integer power of 2 (pad with 0s when necessary); each transform applies two functions, smoothing and differencing, to pairs of data, producing two data sets of length L/2; the two functions are applied recursively until the desired length is reached
- Example wavelet families: Haar-2, Daubechies-4

DWT for Image Compression
[Figure omitted: the image is passed repeatedly through low-pass/high-pass filter pairs, one pair per resolution level.]
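The recursive smoothing/differencing step is easy to see in code. A minimal sketch using unnormalized Haar averaging; a production DWT would use properly scaled filter coefficients.

```python
def haar_dwt(x):
    """Pairwise averages (smoothing) and pairwise differences
    (detail) halve the length at each level; recurse on the smooth
    part until one coefficient remains. len(x) must be a power of 2
    (pad with zeros otherwise)."""
    detail_levels = []
    while len(x) > 1:
        smooth = [(a + b) / 2 for a, b in zip(x[0::2], x[1::2])]
        detail = [(a - b) / 2 for a, b in zip(x[0::2], x[1::2])]
        detail_levels.insert(0, detail)  # coarsest details first
        x = smooth
    return x + [c for level in detail_levels for c in level]

data = [2.0, 2.0, 0.0, 2.0, 3.0, 5.0, 4.0, 4.0]
print(haar_dwt(data))
# [2.75, -1.25, 0.5, 0.0, 0.0, -1.0, -1.0, 0.0]
# Compression then keeps only the largest-magnitude coefficients.
```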

Principal Component Analysis (PCA)
- Given N data vectors from k dimensions, find c <= k orthogonal vectors that can best be used to represent the data
- The original data set is reduced to one consisting of N data vectors on c principal components (reduced dimensions)
- Each data vector is a linear combination of the c principal-component vectors
- Works for numeric data only; used when the number of dimensions is large

[Figure omitted: a point cloud in the X1-X2 plane with its principal axes Y1 and Y2.]
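A compact numpy sketch of PCA via the eigendecomposition of the covariance matrix; the data here is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic data: N = 200 points in k = 3 dimensions, with one
# direction of very low variance.
X = rng.normal(size=(200, 3)) @ np.array([[3.0, 0.0, 0.0],
                                          [1.0, 1.0, 0.0],
                                          [0.0, 0.0, 0.1]])

Xc = X - X.mean(axis=0)                 # center the data
cov = np.cov(Xc, rowvar=False)          # k x k covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)  # ascending eigenvalues
order = np.argsort(eigvals)[::-1]       # strongest components first

c = 2                                   # keep c principal components
components = eigvecs[:, order[:c]]      # k x c orthogonal vectors

# Each reduced vector is the projection onto the c component axes.
X_reduced = Xc @ components
print(X_reduced.shape)                  # (200, 2)
```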

Numerosity Reduction
- Parametric methods: assume the data fits some model, estimate the model parameters, store only the parameters, and discard the data (except possible outliers). Log-linear models, for example, obtain the value at a point in m-dimensional space as a product over appropriate marginal subspaces
- Non-parametric methods: do not assume models. Major families: histograms, clustering, sampling

Regression and Log-Linear Models
- Linear regression: the data are modeled to fit a straight line, often using the least-squares method to fit the line
- Multiple regression: allows a response variable Y to be modeled as a linear function of a multidimensional feature vector
- Log-linear model: approximates discrete multidimensional probability distributions

Regression Analysis and Log-Linear Models
- Linear regression: Y = α + βX. The two parameters α and β specify the line and are estimated from the data at hand by applying the least-squares criterion to the known values Y1, Y2, ... and X1, X2, ...
- Multiple regression: Y = b0 + b1·X1 + b2·X2. Many nonlinear functions can be transformed into this form
- Log-linear models: the multi-way table of joint probabilities is approximated by a product of lower-order tables, e.g., p(a, b, c, d) = αab · βac · γad · δbcd, with one factor per marginal subspace
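A short numpy sketch of fitting Y = α + βX by least squares; the points are made up to lie near the line y = x + 1 from the earlier Regression figure.

```python
import numpy as np

# Made-up points scattered around y = x + 1.
X = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
Y = np.array([1.1, 1.9, 3.2, 3.9, 5.1])

# Least squares: solve for (alpha, beta) in Y = alpha + beta * X.
A = np.column_stack([np.ones_like(X), X])
(alpha, beta), *_ = np.linalg.lstsq(A, Y, rcond=None)
print(round(alpha, 2), round(beta, 2))  # 1.04 1.0

# Numerosity reduction: store only (alpha, beta) and discard the
# data; any Y value is then re-derived as alpha + beta * X.
```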

Histograms
- A popular data reduction technique: divide the data into buckets and store the average (or sum) for each bucket
- Can be constructed optimally in one dimension using dynamic programming
- Related to quantization problems

Clustering
- Partition the data set into clusters, and store only the cluster representations
- Can be very effective if the data is clustered, but not if the data is "smeared"
- Clusterings can be hierarchical and can be stored in multi-dimensional index tree structures
- There are many choices of clustering definitions and clustering algorithms, further detailed in Chapter 8

Sampling
- Allows a mining algorithm to run in complexity that is potentially sub-linear in the size of the data
- Choose a representative subset of the data: simple random sampling may perform very poorly in the presence of skew
- Develop adaptive sampling methods, e.g., stratified sampling: approximate the percentage of each class (or subpopulation of interest) in the overall database; used in conjunction with skewed data
- Note that sampling may not reduce database I/Os (data is read a page at a time)

[Figures omitted: SRSWOR (simple random sample without replacement) and SRSWR drawn from the raw data; a cluster/stratified sample of the raw data.]
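A sketch of the three sampling schemes using only the standard library; the 90/10 class skew in the toy data is invented.

```python
import random

random.seed(42)
data = [("A", i) for i in range(90)] + [("B", i) for i in range(10)]

# SRSWOR: simple random sample without replacement.
srswor = random.sample(data, 10)

# SRSWR: simple random sample with replacement.
srswr = [random.choice(data) for _ in range(10)]

# Stratified sampling: preserve each class's share of the
# population, so the rare class "B" is not lost to skew.
def stratified(rows, frac):
    out = []
    for label in {r[0] for r in rows}:
        stratum = [r for r in rows if r[0] == label]
        out += random.sample(stratum, max(1, round(len(stratum) * frac)))
    return out

sample = stratified(data, 0.1)
print(len(sample), sum(1 for r in sample if r[0] == "B"))  # 10 1
```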

Hierarchical Reduction
- Use multi-resolution structures with different degrees of reduction
- Hierarchical clustering is often performed but tends to define partitions of data sets rather than "clusters"
- Parametric methods are usually not amenable to hierarchical representation
- Hierarchical aggregation: an index tree hierarchically divides a data set into partitions by the value ranges of some attributes; each partition can be considered a bucket, so an index tree with aggregates stored at each node is a hierarchical histogram

Chapter 3 (outline repeated): Why preprocess the data? · Data cleaning · Data integration and transformation · Data reduction · Discretization and concept hierarchy generation · Summary

Discretization
- Three types of attributes: nominal (values from an unordered set), ordinal (values from an ordered set), continuous (real numbers)
- Discretization divides the range of a continuous attribute into intervals
- Some classification algorithms only accept categorical attributes; discretization reduces the data size and prepares the data for further analysis

Discretization and Concept Hierarchy
- Discretization reduces the number of values of a given continuous attribute by dividing the range of the attribute into intervals; interval labels can then be used to replace actual data values
- Concept hierarchies reduce the data by collecting and replacing low-level concepts (such as numeric values of the attribute age) with higher-level concepts (such as young, middle-aged, or senior)

Discretization and Concept Hierarchy Generation for Numeric Data
- Binning (see the sections above)
- Histogram analysis (see the sections above)
- Clustering analysis (see the sections above)
- Entropy-based discretization
- Segmentation by natural partitioning

Entropy-Based Discretization
Given a set of samples S, if S is partitioned into two intervals S1 and S2 using boundary T, the entropy after partitioning is

E(S, T) = (|S1| / |S|) · Ent(S1) + (|S2| / |S|) · Ent(S2)

The boundary that minimizes the entropy function over all possible boundaries is selected as a binary discretization. The process is applied recursively to the partitions obtained until some stopping criterion is met, e.g., until the information gain Ent(S) - E(S, T) falls below a threshold δ. Experiments show that entropy-based discretization may reduce data size and improve classification accuracy.
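A sketch of a single binary entropy split; candidate boundaries are midpoints between consecutive sorted values, and the class labels are invented.

```python
from collections import Counter
from math import log2

def entropy(labels):
    n = len(labels)
    return -sum(c / n * log2(c / n) for c in Counter(labels).values())

def best_split(values, labels):
    """Boundary T minimizing |S1|/|S|*Ent(S1) + |S2|/|S|*Ent(S2)."""
    pairs = sorted(zip(values, labels))
    n = len(pairs)
    best = None
    for i in range(1, n):
        t = (pairs[i - 1][0] + pairs[i][0]) / 2
        left = [l for _, l in pairs[:i]]
        right = [l for _, l in pairs[i:]]
        e = len(left) / n * entropy(left) + len(right) / n * entropy(right)
        if best is None or e < best[0]:
            best = (e, t)
    return best  # (expected entropy, boundary T)

ages = [23, 25, 30, 41, 45, 52]
cls  = ["no", "no", "no", "yes", "yes", "yes"]
print(best_split(ages, cls))  # (0.0, 35.5): a perfect class split
```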

Segmentation by Natural Partitioning
A simple 3-4-5 rule can be used to segment numeric data into relatively uniform, "natural" intervals:
- If an interval covers 3, 6, 7 or 9 distinct values at the most significant digit, partition the range into 3 equi-width intervals
- If it covers 2, 4, or 8 distinct values at the most significant digit, partition the range into 4 intervals
- If it covers 1, 5, or 10 distinct values at the most significant digit, partition the range into 5 intervals

Example of the 3-4-5 Rule (profit data)
- Step 1: Min = -$351, Max = $4,700; Low (5th percentile) = -$159, High (95th percentile) = $1,838
- Step 2: msd = 1,000, so Low is rounded down to -$1,000 and High is rounded up to $2,000
- Step 3: the range (-$1,000 - $2,000) covers 3 distinct values at the msd, giving 3 equi-width intervals: (-$1,000 - 0], (0 - $1,000], ($1,000 - $2,000]
- Step 4: adjust to the actual Min and Max: since Min = -$351 > -$1,000, the first interval shrinks to (-$400 - 0]; since Max = $4,700 > $2,000, a new interval ($2,000 - $5,000] is added, so the top level becomes (-$400 - $5,000)
- Step 5: each top-level interval is partitioned recursively by the same rule: (-$400 - 0] into 4 intervals of $100; (0 - $1,000] into 5 intervals of $200; ($1,000 - $2,000] into 5 intervals of $200; ($2,000 - $5,000] into 3 intervals of $1,000
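One level of the rule is straightforward to code. A sketch under the assumption that the input range is already rounded at its most significant digit, as in step 3 of the example:

```python
import math

def three_4_5(low, high):
    """Split [low, high] into 3, 4, or 5 equi-width intervals based
    on how many distinct msd values the range covers."""
    msd_width = 10 ** math.floor(math.log10(high - low))
    n = round((high - low) / msd_width)   # distinct values at the msd
    if n in (3, 6, 7, 9):
        k = 3
    elif n in (2, 4, 8):
        k = 4
    elif n in (1, 5, 10):
        k = 5
    else:
        k = n  # fallback: one interval per msd step
    w = (high - low) / k
    return [(low + i * w, low + (i + 1) * w) for i in range(k)]

print(three_4_5(-1000, 2000))  # 3 intervals of 1000, as in step 3
print(three_4_5(0, 1000))      # 5 intervals of 200, as in step 5
print(three_4_5(-400, 0))      # 4 intervals of 100, as in step 5
```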

Concept Hierarchy Generation for Categorical Data
- Specification of a partial ordering of attributes explicitly at the schema level by users or experts: e.g., street < city < state < country
- Specification of a portion of a hierarchy by explicit data grouping: e.g., {Urbana, Champaign, Chicago} < Illinois
- Specification of a set of attributes, where the system automatically generates the partial ordering by analyzing the number of distinct values: e.g., street < city < state < country
- Specification of only a partial set of attributes: e.g., only street < city, not the others

Automatic Concept Hierarchy Generation
Some concept hierarchies can be generated automatically based on the analysis of the number of distinct values per attribute in the given data set: the attribute with the most distinct values is placed at the lowest level of the hierarchy. Note the exceptions: weekday, month, quarter, year.
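The distinct-value heuristic fits in a few lines of pandas; the location table here is invented.

```python
import pandas as pd

# Invented location data; a real table would be much larger.
df = pd.DataFrame({
    "country": ["USA", "USA", "USA", "USA", "Canada"],
    "state":   ["IL", "IL", "IL", "CA", "ON"],
    "city":    ["Chicago", "Chicago", "Urbana", "LA", "Toronto"],
    "street":  ["State St", "Wacker Dr", "Green St", "Main St", "King St"],
})

# Fewer distinct values => higher in the concept hierarchy.
order = df.nunique().sort_values().index.tolist()
print(" < ".join(reversed(order)))  # street < city < state < country
```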
