Design of a Strip Defect Detection Algorithm Based on YOLOv4-CSP

Contents
Chapter 1 Design of the Strip Defect Detection Algorithm
  1.1 Overall Network Architecture
    1.1.1 YOLOv4-CSP
    1.1.2 Improved YOLOv4-CSP
  1.2 Feature Extraction Network Optimization
    1.2.1 Deformable Convolution
    1.2.2 Asymmetric Convolution
  1.3 Feature Enhancement Network Optimization
  1.4 Pseudo-Label Training Optimization
  1.5 Chapter Summary
Chapter 2 Experimental Results and Analysis of the Algorithm
  2.1 Datasets
    2.1.1 Bamboo Strip Dataset
    2.1.2 Aluminum Dataset
  2.2 Experimental Setup and Evaluation Metrics
    2.2.1 Experimental Setup
    2.2.2 Evaluation Metrics
  2.3 Data Augmentation Experiments
  2.5 Feature Enhancement Network Optimization Experiments
  2.6 Pseudo-Label Training Optimization Experiments
  2.7 Comparison Experiments with Baseline Methods
  2.8 Chapter Summary

Chapter 1 Design of the Strip Defect Detection Algorithm

To meet the high-throughput requirements of industrial applications and address the characteristics of strip-shaped surface defects, this chapter proposes an efficient and lightweight industrial defect detection algorithm based on YOLOv4-CSP. The algorithm automatically extracts features for industrial surface defect detection, detects common defects quickly and accurately, and remains robust for strip defects with extreme aspect ratios. The chapter first describes the overall architectures of the baseline network YOLOv4-CSP and the improved YOLOv4-CSP, then presents the optimized design of the feature extraction stage, further discusses the improvements to the feature enhancement stage, and finally introduces the pseudo-label technique used to optimize training.

1.1 Overall Network Architecture

This section introduces the network structures of the baseline YOLOv4-CSP and the improved YOLOv4-CSP, focusing on their structural differences in the feature extraction network (Backbone) and the feature enhancement network (Neck), and analyzes how the gradient-splitting technique is applied to the residual structures of the Backbone and the Neck.

1.1.1 YOLOv4-CSP

Drawing on the gradient-splitting idea of CSPNet (Cross Stage Partial Network) [65], Wang et al. optimized the YOLOv4 network and constructed the CSPSPP and CSPPAN structures, so that the gradient flow of the feature enhancement network is split into two branches during forward propagation. This reduces duplicated gradient flow and lowers the computational cost by roughly 40% [64], giving rise to a new branch of the YOLO family, YOLOv4-CSP. The network consists of a feature extraction stage (Backbone), a feature enhancement stage (Neck), and a detection stage (Head): the Backbone extracts primary features, the Neck enhances the features output by the Backbone to strengthen the network's representational power, and the Head computes and outputs the final results.
The feature extraction network of YOLOv4-CSP is CSPDarknet53, a fusion of Darknet-53 and CSPNet. Darknet-53 first appeared as the backbone of YOLOv3; it borrows the design ideas of ResNet, makes extensive use of residual connections, consists of 53 convolutional layers, and replaces pooling layers with stride-2 convolutions. Compared with ResNet-101/152, it runs 1.5x to 2.1x faster while maintaining comparable accuracy. CSPNet introduces the idea of gradient splitting, which routes the forward-propagated gradient flow along different paths: the input feature map is first divided into two parts, the two parts are processed by a cross-stage hierarchy (Cross Stage Hierarchy), and the two branches are then merged. This design balances the computation across paths, improves the overall utilization of the network's computing units, substantially reduces resource consumption, and speeds up inference.

As shown in Figure 1.1(a), in Darknet-53 the output of a residual structure (Bottleneck) is obtained by adding the initial input to the result of the residual block (the red box in the figure). In CSPDarknet53, the input feature map of a residual stage (BottleneckCSP) is split into two parts, as shown in Figure 1.1(b): one part passes through a convolution block, several residual blocks, and a convolution in sequence, while the other part undergoes a single convolution and is then merged with the first part; the merged result passes through a transition layer (a convolution block) to produce the final output. YOLOv4 keeps the five residual stages of CSPDarknet53 and their block counts (1, 2, 8, 8, 4), whereas YOLOv4-CSP replaces the first CSP stage (BottleneckCSP) with the original residual structure (Bottleneck) to achieve the best trade-off between accuracy and speed.

Figure 1.1 Comparison of residual structures in different networks
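To make the split-and-merge structure above concrete, the following is a minimal PyTorch sketch of a Bottleneck and a BottleneckCSP-style stage, assuming SiLU activations and an even channel split; the class and argument names (ConvBlock, n_blocks) are illustrative and not taken from the thesis implementation.

```python
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    """Conv -> BatchNorm -> activation, the basic building block used below."""
    def __init__(self, c_in, c_out, k=1, s=1):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, k, s, padding=k // 2, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.SiLU()

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

class Bottleneck(nn.Module):
    """Darknet-style residual block: output = input + residual branch (Figure 1.1(a))."""
    def __init__(self, c):
        super().__init__()
        self.block = nn.Sequential(ConvBlock(c, c, k=1), ConvBlock(c, c, k=3))

    def forward(self, x):
        return x + self.block(x)

class BottleneckCSP(nn.Module):
    """CSP residual stage (Figure 1.1(b)): split, process, concatenate, transition."""
    def __init__(self, c_in, c_out, n_blocks=1):
        super().__init__()
        c_half = c_out // 2
        # Branch 1: conv block -> n residual blocks -> conv
        self.branch1_in = ConvBlock(c_in, c_half, k=1)
        self.branch1_blocks = nn.Sequential(*[Bottleneck(c_half) for _ in range(n_blocks)])
        self.branch1_out = nn.Conv2d(c_half, c_half, 1, bias=False)
        # Branch 2: a single convolution applied to the original input
        self.branch2 = nn.Conv2d(c_in, c_half, 1, bias=False)
        # Transition layer applied after concatenating the two branches
        self.transition = ConvBlock(2 * c_half, c_out, k=1)

    def forward(self, x):
        y1 = self.branch1_out(self.branch1_blocks(self.branch1_in(x)))
        y2 = self.branch2(x)
        return self.transition(torch.cat((y1, y2), dim=1))
```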
The feature enhancement network of YOLOv4-CSP builds on the Neck design of YOLOv4 and applies the gradient-splitting transformation to its core modules, yielding CSP versions of the spatial pyramid pooling module (CSPSPP) and the path aggregation module (CSPPAN). The Neck of YOLOv4 contains an SPP module and a PAN module. SPP applies three max-pooling kernels of different sizes (5×5, 9×9, and 13×13) to extract semantic information at different spatial resolutions and thus obtain a larger receptive field. PAN is an improvement on the feature pyramid network (FPN). As shown in Figure 1.2(a), FPN has two paths: a bottom-up feed-forward path, in which repeated convolutions progressively reduce the resolution of the feature maps, and a top-down aggregation path, in which high-level feature maps are upsampled to a larger resolution and fused with lower-level feature maps through lateral connections. Compared with FPN, PAN adds an extra bottom-up aggregation path to obtain more precise localization information, as shown in Figure 1.2(b). Figure 1.2 shows the overall PAN framework, in which (c), (d), and (e) correspond to adaptive feature pooling, the box branch, and fully-connected fusion, respectively; YOLOv4 mainly adopts the design ideas of parts (a) and (b). In addition, YOLOv4 changes the feature-map merging operation of PAN from addition to concatenation to preserve richer semantic information and strengthen the network's predictive power. Similar to CSPDarknet53 in the feature extraction network, YOLOv4-CSP adds an extra branch to SPP and PAN to split the forward-propagated gradient flow: the branch passes through a 1×1 convolutional layer, is merged with the original branch by concatenation, and one further convolution produces the output of the new modules (CSPSPP and CSPPAN).

Figure 1.2 PAN network framework [63]
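The CSPSPP structure described above can be sketched as follows. This is an illustrative PyTorch approximation that uses 5×5, 9×9, and 13×13 max pooling and a parallel 1×1-convolution branch merged by concatenation; channel choices and layer names are assumptions rather than the authors' exact implementation.

```python
import torch
import torch.nn as nn

class CSPSPP(nn.Module):
    """Sketch of a CSP-style SPP block: an SPP branch with 5/9/13 max pooling plus a
    parallel 1x1-conv shortcut branch, merged by concatenation and a final convolution."""
    def __init__(self, c_in, c_out):
        super().__init__()
        c_mid = c_out // 2
        # Main branch: reduce channels, pool at three scales, fuse
        self.reduce = nn.Sequential(
            nn.Conv2d(c_in, c_mid, 1, bias=False), nn.BatchNorm2d(c_mid), nn.SiLU())
        self.pools = nn.ModuleList(
            [nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2) for k in (5, 9, 13)])
        self.fuse = nn.Sequential(
            nn.Conv2d(4 * c_mid, c_mid, 1, bias=False), nn.BatchNorm2d(c_mid), nn.SiLU())
        # CSP shortcut branch: a single 1x1 convolution on the input
        self.shortcut = nn.Conv2d(c_in, c_mid, 1, bias=False)
        # Final convolution after concatenating the two branches
        self.out = nn.Sequential(
            nn.Conv2d(2 * c_mid, c_out, 1, bias=False), nn.BatchNorm2d(c_out), nn.SiLU())

    def forward(self, x):
        y = self.reduce(x)
        y = self.fuse(torch.cat([y] + [p(y) for p in self.pools], dim=1))
        return self.out(torch.cat((y, self.shortcut(x)), dim=1))
```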
The detection stage of YOLOv4-CSP reuses the YOLOv3 detection head (Head). The feature extraction and feature enhancement networks produce feature maps at three different scales, and each scale is associated with three preset anchor boxes of different aspect ratios. A convolution then transforms each input feature map into the required output dimensions, yielding the class and localization results for every anchor on each feature map and thereby realizing multi-scale prediction.
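As a rough illustration of this multi-scale head, the sketch below maps each of the three feature maps to num_anchors × (5 + num_classes) channels, where the 5 covers the box offsets and the objectness score in the usual YOLOv3 layout; the channel widths, class count, and grid sizes here are placeholder assumptions, not values from the thesis.

```python
import torch
import torch.nn as nn

num_classes = 4          # e.g. number of defect categories (assumption)
num_anchors = 3          # three anchors per scale, as described above
out_ch = num_anchors * (5 + num_classes)

# One 1x1 prediction convolution per scale (input channel widths are illustrative)
heads = nn.ModuleList([nn.Conv2d(c, out_ch, kernel_size=1) for c in (128, 256, 512)])

# Example forward pass with dummy multi-scale features (strides 8, 16, 32)
feats = [torch.randn(1, 128, 80, 80), torch.randn(1, 256, 40, 40), torch.randn(1, 512, 20, 20)]
preds = [head(f) for head, f in zip(heads, feats)]
for p in preds:
    # Reshape to (batch, anchors, grid_h, grid_w, 5 + num_classes) before decoding boxes
    b, _, h, w = p.shape
    print(p.view(b, num_anchors, 5 + num_classes, h, w).permute(0, 1, 3, 4, 2).shape)
```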
1.1.2 Improved YOLOv4-CSP

The strip defect detection algorithm designed in this work, the improved YOLOv4-CSP (Improved YOLOv4-CSP), reduces the network scale and adds modules targeted at strip defect detection; its structure is shown in Figure 1.3. Because the background in industrial defect detection is relatively simple, keeping the depth and width settings of the large YOLOv4-CSP network easily leads to overfitting. This work therefore scales down the network width (number of channels) and depth (number of residual blocks), with scaling factors of 0.50 and 0.33 respectively: the channel counts are halved, and the residual block counts of the Backbone stages change from "1, 2, 8, 8, 4" to "1, 1, 3, 3, 1", where the "3" of the middle stages corresponds to the "×3" in Figure 1.3. With the network's representational power essentially unaffected, this slimming lays the groundwork for deploying the model on industrial production lines.

Figure 1.3 Network framework of the improved YOLOv4-CSP
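The width and depth scaling can be illustrated with the small helper below. The rounding conventions (channels rounded to a multiple of 8, at least one block per stage) and the base channel widths are assumptions, but they reproduce the halved channel counts and the stated "1, 1, 3, 3, 1" block configuration.

```python
import math

def scale_channels(channels, width_mult=0.50, divisor=8):
    """Scale a channel count by the width multiplier, rounding up to a multiple of
    `divisor` (a common convention; the exact rounding rule is an assumption)."""
    return max(divisor, int(math.ceil(channels * width_mult / divisor) * divisor))

def scale_depth(n_blocks, depth_mult=0.33):
    """Scale the number of residual blocks in a stage, keeping at least one block."""
    return max(1, round(n_blocks * depth_mult))

# Typical CSPDarknet53 stage widths and the original block counts
base_channels = [64, 128, 256, 512, 1024]
base_blocks = [1, 2, 8, 8, 4]

print([scale_channels(c) for c in base_channels])  # channels roughly halved
print([scale_depth(n) for n in base_blocks])       # -> [1, 1, 3, 3, 1]
```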
To improve the network's ability to distill strip defect features, asymmetric convolution is introduced into the feature extraction stage. Specifically, a horizontal one-dimensional asymmetric convolution is added alongside the standard convolution to form an asymmetric convolution module, the ACBlock in Figure 1.1(c). Combined with the gradient-splitting technique, this yields new residual modules, ACBottleneck and ACBottleneckCSP, which strengthen the model's feature representation along the horizontal direction, the dimension that matters most for strip defects, and thereby improve detection accuracy on such defects. The structure of ACBottleneckCSP is shown in Figure 1.1(c), where the red box marks the ACBottleneck.

The improved YOLOv4-CSP inherits the multi-scale fusion idea of the original Neck and uses CSPPAN to fuse spatial and semantic information effectively. In addition, the hybrid attention mechanism CBAM is introduced and a new residual module, CBAMBottleneckCSP2, is constructed to better calibrate the network's channel and spatial weights and thus improve its ability to express salient features. Furthermore, since the attention mechanism and CSPSPP overlap functionally in enlarging the receptive field, the attention module is adopted and CSPSPP is removed, which simplifies the network structure and reduces the computational overhead.
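For reference, the following is a minimal sketch of the standard CBAM formulation (channel attention followed by spatial attention). How it is wired into CBAMBottleneckCSP2 follows the thesis's figure and is not reproduced here; the reduction ratio and spatial kernel size are the commonly used defaults rather than values confirmed by the text.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Channel attention of CBAM: shared MLP over average- and max-pooled descriptors."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False))

    def forward(self, x):
        avg = self.mlp(torch.mean(x, dim=(2, 3), keepdim=True))
        mx = self.mlp(torch.amax(x, dim=(2, 3), keepdim=True))
        return torch.sigmoid(avg + mx)

class SpatialAttention(nn.Module):
    """Spatial attention of CBAM: 7x7 convolution over channel-wise average and max maps."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x):
        avg = torch.mean(x, dim=1, keepdim=True)
        mx, _ = torch.max(x, dim=1, keepdim=True)
        return torch.sigmoid(self.conv(torch.cat((avg, mx), dim=1)))

class CBAM(nn.Module):
    """Sequential channel-then-spatial attention, applied as a reweighting of the input."""
    def __init__(self, channels):
        super().__init__()
        self.ca = ChannelAttention(channels)
        self.sa = SpatialAttention()

    def forward(self, x):
        x = x * self.ca(x)
        return x * self.sa(x)
```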
1.2 Feature Extraction Network Optimization

As the first stage of the detection network, the quality of feature extraction strongly affects the subsequent modules. The proposed algorithm uses CSPDarknet-53 as the backbone of the feature extraction stage; according to a network architecture search analysis [64], CSPDarknet-53 satisfies many optimal-architecture criteria, including receptive field, parameter count, and inference speed, and offers excellent feature extraction capability. However, its feature representation of strip defects still leaves room for improvement. This section first explores the feasibility of combining deformable convolution with YOLOv4-CSP to improve strip defect detection; given the limited benefit of that convolution, it then studies the characteristics of defects with extreme aspect ratios and proposes a new asymmetric convolution module that better meets the detection requirements.

1.2.1 Deformable Convolution

Most strip defects in industrial inspection are horizontal, and a considerable fraction exhibit extreme aspect ratios. YOLOv4-CSP is an anchor-based detector, and for strip defects the predefined anchors may differ substantially from the ground-truth boxes, which requires the network to predict anchor offsets flexibly and to be robust to variations in scale and shape. For this reason, deformable convolution is first considered to make the feature extraction network more stable under defect deformation. Convolutions in standard convolutional neural networks mostly use fixed square kernels such as 3×3 and 5×5, and this fixed geometric structure limits their ability to model geometric transformations. Handling complex recognition tasks involving changes in object scale, pose, and viewpoint as well as part deformation, while keeping feature extraction stable and adaptive, remains a major challenge in visual recognition. To address the limitations of the fixed structure of conventional convolution, Dai et al. constructed deformable convolution v1 (Deformable Convolution, DCv1) [36]. As shown in Figure 1.4, it attaches a parallel convolutional layer to learn an offset for each sampling point of the original convolution and then enlarges the sampling range under the guidance of these offsets, so that the convolution covers more of the object and extracts more useful contextual features. As a plug-and-play module, deformable convolution can be quickly integrated into existing detection networks, brings clear gains over the original networks on several image classification and object detection datasets, and has been successfully applied in fields such as remote sensing image analysis [66] and steel defect detection [30]. However, it assigns the same weight to every sampling location, which may introduce useless context and weaken the expression of salient features. To address this, Zhu et al. proposed deformable convolution v2 (DCv2) [37], which assigns a different weight to each sampling location to obtain more accurate contextual information.

Figure 1.4 Illustration of deformable convolution v1 [36]

This work tried fusing DCv1 and then DCv2 with the YOLOv4-CSP backbone: the 3×3 standard convolutions in the higher-level residual blocks were replaced with deformable convolutions to construct a new residual block, DCNBottleneckCSP. However, the detection results after introducing deformable convolution were unsatisfactory; the corresponding experimental results and analysis are given in Section 2.2 of the experiments chapter. Further investigation showed that the offsets learned by this family of convolutions spread out in all directions and very likely introduce irrelevant contextual features; such convolutions are better suited to detecting irregularly deformed defects than to strip defects that deform regularly along the horizontal direction. This motivated a further analysis of strip defect characteristics in search of a more suitable module.
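As an illustration of this kind of replacement, the sketch below swaps a 3×3 standard convolution for a DCv1-style deformable convolution using torchvision.ops.DeformConv2d, with the offsets predicted by a parallel convolution. It is a generic, hedged example rather than the DCNBottleneckCSP block used in the experiments.

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformableConvBlock(nn.Module):
    """A 3x3 deformable convolution (DCv1-style) that could stand in for the 3x3
    standard convolution of a residual block. Offsets come from a parallel 3x3
    convolution initialized to zero, so training starts from a regular grid."""
    def __init__(self, c_in, c_out, k=3):
        super().__init__()
        # 2 offset values (dx, dy) per kernel sampling point
        self.offset_conv = nn.Conv2d(c_in, 2 * k * k, k, padding=k // 2)
        nn.init.zeros_(self.offset_conv.weight)
        nn.init.zeros_(self.offset_conv.bias)
        self.deform_conv = DeformConv2d(c_in, c_out, k, padding=k // 2, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.SiLU()

    def forward(self, x):
        offset = self.offset_conv(x)
        return self.act(self.bn(self.deform_conv(x, offset)))

# Example: same spatial size in and out, so it can replace a 3x3 convolution block
x = torch.randn(1, 64, 40, 40)
print(DeformableConvBlock(64, 64)(x).shape)  # torch.Size([1, 64, 40, 40])
```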
1.2.2 Asymmetric Convolution

Further analysis of the strip defect data shows that the inspected objects are mostly strip-shaped workpieces whose length is clearly greater than their width: the aspect ratio of the aluminum profiles lies in [1.5, 5] and that of the bamboo strips in [9, 10], so both the workpieces and the defects have high aspect ratios. In view of this, this work strengthens feature extraction along the horizontal direction of the image by designing an adapted asymmetric convolution module modeled on the ACBlock of ACNet, in order to make the model more robust for strip defect detection.

Asymmetric convolution was originally used to reduce the total number of network parameters by decomposing a standard square convolution (d×d) into two one-dimensional convolutions (1×d and d×1) in series, which lowers the computational cost and speeds up training [67, 68]. In contrast, Ding et al. [69] approached it from the perspective of convolution design: they integrated horizontal and vertical one-dimensional convolutions into the standard square convolution to build the Asymmetric Convolution Block (ACBlock), and then replaced part of the standard convolutions with the new module to construct the Asymmetric Convolutional Network (ACNet). The asymmetric convolution module in ACNet first adds one-dimensional convolutions in the horizontal and vertical directions to increase the weight magnitude of the kernel skeleton (the cross-shaped center rows and columns of the kernel), and then merges the additional feature extraction branches with the original branch to enrich the feature space. This strengthens the model's learning capacity, improving its robustness to rotational distortion and its generalization to new data. The module consists of three parallel layers: a convolution with a d×d square kernel, one with a 1×d horizontal one-dimensional kernel, and one with a d×1 vertical one-dimensional kernel. Its computation is shown in Figure 1.5: the input feature map passes through the parallel kernels to produce three feature maps of the same size, each feature map is normalized, and the results of the three branches are summed element-wise to give the final output.

Figure 1.5 Computation of the ACNet asymmetric convolution [69]

Inspired by the asymmetric convolution module in ACNet, this section proposes an asymmetric convolution module better suited to strip defect detection. This work first analyzes how adding a horizontal or a vertical asymmetric convolution to the square convolution affects the network's learning capacity (see Section 2.2 of the experiments chapter). Then, given the extreme aspect ratios of strip defects, with more than 50% of defects having an aspect ratio greater than 8, the vertical asymmetric convolution is removed to reduce interference from redundant information while reinforcing the influence of local feature points along the horizontal dimension. The proposed asymmetric convolution module therefore introduces a one-dimensional convolution branch only in the horizontal direction. Its computation is shown in Figure 1.6: the feature map is forwarded in parallel to a standard convolution branch and an asymmetric convolution branch, which scan the input with a square kernel and a one-dimensional horizontal kernel respectively; the normalized results of the two branches are then summed to give the final output. The asymmetric convolution block replaces the 3×3 convolution block in the residual stages, and the ACBottleneck and ACBottleneckCSP modules are then built to optimize the feature extraction network.

Figure 1.6 Computation of the asymmetric convolution in Improved YOLOv4-CSP
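A minimal sketch of this horizontal-only asymmetric convolution block is given below, assuming batch normalization on each branch and an activation after the element-wise sum (the activation placement is an assumption); a block like this could stand in for the 3×3 convolution block inside ACBottleneck.

```python
import torch
import torch.nn as nn

class HorizontalACBlock(nn.Module):
    """Asymmetric convolution block with only a horizontal 1xk branch: a square kxk
    branch and a 1xk branch, each batch-normalized, summed element-wise."""
    def __init__(self, c_in, c_out, k=3, stride=1):
        super().__init__()
        # Square branch: k x k convolution
        self.square = nn.Conv2d(c_in, c_out, (k, k), stride, padding=(k // 2, k // 2), bias=False)
        self.square_bn = nn.BatchNorm2d(c_out)
        # Horizontal branch: 1 x k convolution (height 1, width k)
        self.horizontal = nn.Conv2d(c_in, c_out, (1, k), stride, padding=(0, k // 2), bias=False)
        self.horizontal_bn = nn.BatchNorm2d(c_out)
        self.act = nn.SiLU()

    def forward(self, x):
        y = self.square_bn(self.square(x)) + self.horizontal_bn(self.horizontal(x))
        return self.act(y)

# Example: the block keeps the spatial size, so it can replace a 3x3 convolution block
x = torch.randn(1, 64, 32, 256)  # wide feature map, as for strip-shaped inputs
print(HorizontalACBlock(64, 128)(x).shape)  # torch.Size([1, 128, 32, 256])
```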
1.3 Feature Enhancement Network Optimization

The feature enhancement network further refines the features produced by the backbone to improve representational power. This work adopts the CSP-transformed multi-scale network CSPPAN as the base architecture of the feature enhancement network (also called the neck of the detection network). In addition, an attention mechanism is fused into the neck so that the network, guided by high-level semantic information, allocates its visual processing resources more effectively.

Attention plays a crucial role in the human visual system [70]: humans build their understanding through a series of local glimpses and naturally shift attention to the salient regions of complex scenes. This visual process has inspired extensive research on attention mechanisms, which in recent years have developed rapidly and been applied successfully to many computer vision tasks, including object detection. Attention mechanisms in vision dynamically reweight the input feature maps to select key features and thereby strengthen the model's representational power. According to the data domain over which the attention weights are applied, existing work can be divided into spatial attention, channel attention, temporal attention, branch attention, and hybrid attention (spatial & channel, spatial & temporal) [71]. Spatial attention tells the network which regions to focus on through a region-selection mask; representative modules include the RNN-based attention model RAM [72], the spatial-transform-generating STN [73], and self-attention related methods [74, 75]. Channel attention uses a channel-domain mask to indicate which channels deserve particular attention; representative modules include SE [76] and its improved variants ECANet [77] and GCT [78]. Temporal attention generates a corresponding mask to guide the network toward the moments it should attend to [79, 80].
