Implementing a Web Crawler with htmlparser

package parser;

import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.FileWriter;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.net.MalformedURLException;
import java.net.URL;

/**
 * Basic page fetching; the URL has to be typed in by hand.
 * Saves the entire HTML content to a given file.
 *
 * @author chenguoyong
 */
public class ScrubSelectedWeb {
    private final static String CRLF = System.getProperty("line.separator");

    /**
     * @param args
     */
    public static void main(String[] args) {
        try {
            // the host portion of this URL is truncated in the source
            URL ur = new URL("99:8083/injs100/");
            InputStream instr = ur.openStream();
            String s, str;
            BufferedReader in = new BufferedReader(new InputStreamReader(instr));
            StringBuffer sb = new StringBuffer();
            BufferedWriter out = new BufferedWriter(new FileWriter("D:/outPut.txt"));
            while ((s = in.readLine()) != null) {
                sb.append(s + CRLF);
            }
            System.out.println(sb);
            str = new String(sb);
            out.write(str);
            out.close();
            in.close();
        } catch (MalformedURLException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

This does fetch a page, but the URL must be entered by hand and nothing has been refactored; it is only a simple starting point.

1. Using htmlparser

htmlparser is an HTML-parsing library written in pure Java, with no dependencies on other Java libraries. It is mainly used to transform or extract HTML, and it parses HTML very fast and robustly. It is fair to call it one of the best HTML parsing and analysis tools around: whether you want to scrape page data or rewrite HTML content, it serves well, and because the library is cleanly designed it is also easy to extend. htmlparser Chinese forum: ./thread.php?fid=6

Selected API (from the htmlparser javadoc):

Parser
- Parser()
- Parser(URLConnection connection) — construct a parser using the provided URLConnection.
- static Parser createParser(String html, String charset) — creates the parser on an input string.
- void visitAllNodesWith(NodeVisitor visitor) — apply the given visitor to the current page.

HtmlPage
- HtmlPage(Parser parser)
- NodeList getBody()
- TableTag[] getTables()
- String getTitle()
- void setTitle(String title)
- void visitTag(Tag tag) — called for each Tag visited.

NodeList
- NodeList()
- NodeList(Node node) — create a one-element node list.
- NodeList extractAllNodesThatMatch(NodeFilter filter) — filter the list with the given filter non-recursively.
- NodeList extractAllNodesThatMatch(NodeFilter filter, boolean recursive) — filter the list with the given filter, optionally recursing into children.
- Node elementAt(int i)
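The ScrubSelectedWeb example above is built around one idiom that recurs throughout this article: read a character stream line by line and re-join the lines with the platform separator. A minimal sketch with plain JDK classes (the class name `ReadLoopSketch` is ours, and a `StringReader` stands in for the article's URL stream, whose host is truncated in the source):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;

public class ReadLoopSketch {
    private static final String CRLF = System.getProperty("line.separator");

    // Accumulate a character stream line by line, the same way
    // ScrubSelectedWeb does with a URL's InputStream. Note the extra
    // parentheses around the assignment: without them,
    // `s = in.readLine() != null` would try to assign a boolean to a String.
    public static String slurp(Reader source) throws IOException {
        BufferedReader in = new BufferedReader(source);
        StringBuilder sb = new StringBuilder();
        String s;
        while ((s = in.readLine()) != null) {
            sb.append(s).append(CRLF);
        }
        in.close();
        return sb.toString();
    }

    public static void main(String[] args) throws IOException {
        String html = "<html>\n<body>hello</body>\n</html>";
        System.out.print(slurp(new StringReader(html)));
    }
}
```

Swapping the `StringReader` for `new InputStreamReader(url.openStream())` gives exactly the fetch loop used in the listings below.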

1. Extracting every link address and link name from a piece of HTML

package parser;

import org.htmlparser.Node;
import org.htmlparser.NodeFilter;
import org.htmlparser.Parser;
import org.htmlparser.filters.TagNameFilter;
import org.htmlparser.tags.LinkTag;
import org.htmlparser.util.NodeList;
import org.htmlparser.util.ParserException;
import org.htmlparser.visitors.HtmlPage;

/**
 * Use htmlparser to get every link address and link name in an HTML fragment.
 *
 * @author chenguoyong
 */
public class Testhtmlparser {
    /**
     * @param args
     */
    public static void main(String[] args) {
        // The markup of this test string was stripped when the article was
        // extracted; it is reconstructed here from the printed output below:
        // a title "AAA" and two <a> links named 连接1 and 连接2 ("link 1"/"link 2").
        String htmlcode = "<title>AAA</title>"
                + "<a href='/u/20080522/14/0ff402ef-c382-499a-8213-ba6b2f550425.html'>连接1</a>"
                + "<a href='/u/20080522/14/0ff402ef-c382-499a-8213-ba6b2f550425.html'>连接2</a>";
        // Create a Parser from the string and the given encoding
        Parser parser = Parser.createParser(htmlcode, "GBK");
        // Create an HtmlPage: HtmlPage(Parser parser)
        HtmlPage page = new HtmlPage(parser);
        try {
            // HtmlPage extends NodeVisitor:
            // apply the given visitor to the current page
            parser.visitAllNodesWith(page);
        } catch (ParserException e1) {
            e1 = null;
        }
        // All body nodes
        NodeList nodelist = page.getBody();
        // Build a node filter
        NodeFilter filter = new TagNameFilter("A");
        // Keep only the nodes we want
        nodelist = nodelist.extractAllNodesThatMatch(filter, true);
        for (int i = 0; i < nodelist.size(); i++) {
            LinkTag link = (LinkTag) nodelist.elementAt(i);
            // Link address
            System.out.println(link.getAttribute("href") + "\n");
            // Link name
            System.out.println(link.getStringText());
        }
    }
}

The output:

/u/20080522/14/0ff402ef-c382-499a-8213-ba6b2f550425.html
连接1
/u/20080522/14/0ff402ef-c382-499a-8213-ba6b2f550425.html
连接2
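For contrast with the htmlparser version above, the same address/name extraction can be roughly approximated with `java.util.regex` alone. This is a deliberately naive sketch (the pattern and the `RegexLinkSketch` class are ours): regexes break down quickly on real-world HTML with nested tags or unquoted attributes, which is exactly why the `LinkTag` approach above is preferable.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RegexLinkSketch {
    // Naive pattern: an <a> tag with an href value in single or double
    // quotes, followed by the link text up to the closing tag.
    private static final Pattern A_TAG = Pattern.compile(
            "<a\\s+[^>]*href=[\"']([^\"']*)[\"'][^>]*>([^<]*)</a>",
            Pattern.CASE_INSENSITIVE);

    public static List<String> extract(String html) {
        List<String> out = new ArrayList<String>();
        Matcher m = A_TAG.matcher(html);
        while (m.find()) {
            out.add(m.group(1) + " -> " + m.group(2)); // address -> name
        }
        return out;
    }

    public static void main(String[] args) {
        String html = "<a href='/a.html'>link1</a><a href=\"/b.html\">link2</a>";
        for (String s : extract(html)) {
            System.out.println(s);
        }
    }
}
```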

2. Fetching page content with HtmlParser

package parser;

import org.htmlparser.Parser;
import org.htmlparser.beans.StringBean;
import org.htmlparser.filters.NodeClassFilter;
import org.htmlparser.parserapplications.StringExtractor;
import org.htmlparser.tags.BodyTag;
import org.htmlparser.util.NodeList;
import org.htmlparser.util.ParserException;

/**
 * Fetching page content with HtmlParser: the most convenient way to grab a
 * page's text is StringBean, which has a few parameters controlling the
 * output (explained in the code below). The htmlparser package also ships a
 * sample, StringExtractor, with a method that returns the content directly;
 * internally it uses StringBean as well. Parsing each tag with Parser
 * directly also works.
 *
 * @author chenguoyong
 */
public class GetContent {
    public void getContentUsingStringBean(String url) {
        StringBean sb = new StringBean();
        sb.setLinks(true); // whether to include the page's links
        // Setting the two options below to true generally gives tidier text;
        // to keep the page's original layout (e.g. the indentation of code
        // listings), set them to false.
        sb.setCollapse(true); // if true, replace a run of whitespace with a single character
        sb.setReplaceNonBreakingSpaces(true); // if true, replace &nbsp; with a regular space
        // the host portion of this URL is truncated in the source
        sb.setURL("/51AOP/archive/2006/07/19/59064.html");
        System.out.println("The Content is :\n" + sb.getStrings());
    }

    public void getContentUsingStringExtractor(String url, boolean link) {
        // StringExtractor works the same way internally; it is just a wrapper
        StringExtractor se = new StringExtractor(url);
        String text = null;
        try {
            text = se.extractStrings(link);
            System.out.println("The content is :\n" + text);
        } catch (ParserException e) {
            e.printStackTrace();
        }
    }

    public void getContentUsingParser(String url) {
        NodeList nl;
        try {
            Parser p = new Parser(url);
            nl = p.parse(new NodeClassFilter(BodyTag.class));
            BodyTag bt = (BodyTag) nl.elementAt(0);
            // Keeps the original formatting, including any js code
            System.out.println(bt.toPlainTextString());
        } catch (ParserException e) {
            e.printStackTrace();
        }
    }

    /**
     * @param args
     */
    public static void main(String[] args) {
        // the host portion of this URL is truncated in the source
        String url = "/51AOP/archive/2006/07/19/59064.html";
        // new GetContent().getContentUsingParser(url);
        // ------------------------------------------------
        new GetContent().getContentUsingStringBean(url);
    }
}
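The whitespace behaviour that `setCollapse(true)` is described as giving (a run of whitespace characters replaced by a single one) can be illustrated with a tiny stdlib sketch. The `CollapseSketch` class is ours, and StringBean's real implementation differs in its details; this only makes the described effect concrete:

```java
public class CollapseSketch {
    // Roughly what StringBean.setCollapse(true) is described to do:
    // replace each run of whitespace (spaces, tabs, newlines) with a
    // single space, losing the page's original indentation.
    public static String collapse(String text) {
        return text.replaceAll("\\s+", " ").trim();
    }

    public static void main(String[] args) {
        System.out.println(collapse("hello   \n\t world"));
    }
}
```

This is also why the comment in `getContentUsingStringBean` suggests turning the option off when you need to preserve the layout of code listings.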

3. Saving the entire HTML content to a file

The listing for this example is the same ScrubSelectedWeb class shown at the top of this article, differing only in the hard-coded target URL, so it is not repeated here.

4. Extracting a page's plain text with htmlparser

package parser;

import org.htmlparser.Node;
import org.htmlparser.NodeFilter;
import org.htmlparser.Parser;
import org.htmlparser.filters.TagNameFilter;
import org.htmlparser.tags.TableTag;
import org.htmlparser.util.NodeList;

/**
 * Example: extracting a page's plain text with htmlparser.
 */
public class TestHTMLParser2 {
    /**
     * Read the target HTML content.
     */
    public static void testHtml() {
        try {
            String sCurrentLine;
            String sTotalString;
            sCurrentLine = "";
            sTotalString = "";
            java.io.InputStream l_urlStream;
            // the host portion of this URL is truncated in the source
            java.net.URL l_url = new java.net.URL("99:8083/injs100/");
            java.net.HttpURLConnection l_connection =
                    (java.net.HttpURLConnection) l_url.openConnection();
            l_connection.connect();
            l_urlStream = l_connection.getInputStream();
            java.io.BufferedReader l_reader = new java.io.BufferedReader(
                    new java.io.InputStreamReader(l_urlStream));
            while ((sCurrentLine = l_reader.readLine()) != null) {
                sTotalString += sCurrentLine + "\r\n";
            }
            String testText = extractText(sTotalString);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    /**
     * Extract the plain text.
     *
     * @param inputHtml the html text
     * @return the extracted text
     * @throws Exception
     */
    public static String extractText(String inputHtml) throws Exception {
        StringBuffer text = new StringBuffer();
        // note: getBytes() without a charset relies on the platform default
        Parser parser = Parser.createParser(
                new String(inputHtml.getBytes(), "GBK"), "GBK");
        // Walk every node
        NodeList nodes = parser.extractAllNodesThatMatch(new NodeFilter() {
            public boolean accept(Node node) {
                return true;
            }
        });
        System.out.println(nodes.size());
        for (int i = 0; i < nodes.size(); i++) {
            Node nodet = nodes.elementAt(i);
            // toPlainTextString(): the node's text representation
            text.append(new String(nodet.toPlainTextString().getBytes("GBK")) + "\r\n");
        }
        return text.toString();
    }

    /**
     * Analyse content read from a file or URL; filePath can also be a URL.
     *
     * @param resource a file path or URL
     * @throws Exception
     */
    public static void test5(String resource) throws Exception {
        Parser myParser = new Parser(resource);
        myParser.setEncoding("GBK");
        String filterStr = "table";
        NodeFilter filter = new TagNameFilter(filterStr);
        NodeList nodeList = myParser.extractAllNodesThatMatch(filter);
        /*
        for (int i = 0; i < nodeList.size(); i++) {
            TableTag tabletag = (TableTag) nodeList.elementAt(i);
            // tag name
            System.out.println(tabletag.getTagName());
            System.out.println(tabletag.getText());
        }
        */
        TableTag tabletag = (TableTag) nodeList.elementAt(1);
    }

    public static void main(String[] args) throws Exception {
        // the host portion of this URL is truncated in the source
        test5("99:8083/injs100/");
        // testHtml();
    }
}
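extractText above re-decodes its input with `new String(inputHtml.getBytes(), "GBK")`, which only round-trips cleanly when the platform default charset happens to be GBK. A small sketch of why the decoding charset must match the one that produced the bytes (the `CharsetSketch` class is ours; `\u4e2d\u6587` is the string "中文"):

```java
public class CharsetSketch {
    // Encode a string to bytes with one charset, then decode the bytes
    // with another. Only a matching pair reproduces the original string.
    public static String roundTrip(String s, String enc, String dec)
            throws Exception {
        return new String(s.getBytes(enc), dec);
    }

    public static void main(String[] args) throws Exception {
        String cn = "\u4e2d\u6587"; // "中文"
        // matching charsets: the text survives
        System.out.println(roundTrip(cn, "GBK", "GBK").equals(cn));   // true
        // mismatched charsets: the GBK bytes are not valid UTF-8
        System.out.println(roundTrip(cn, "GBK", "UTF-8").equals(cn)); // false
    }
}
```

The safer form of the call in extractText would pass the charset explicitly on both sides rather than relying on the platform default.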

5. Parsing HTML tables

package parser;

import junit.framework.TestCase;

import org.apache.log4j.Logger;
import org.htmlparser.NodeFilter;
import org.htmlparser.Parser;
import org.htmlparser.filters.NodeClassFilter;
import org.htmlparser.filters.OrFilter;
import org.htmlparser.tags.TableColumn;
import org.htmlparser.tags.TableRow;
import org.htmlparser.tags.TableTag;
import org.htmlparser.util.NodeList;
import org.htmlparser.util.ParserException;

public class ParserTestCase extends TestCase {
    private static final Logger logger = Logger.getLogger(ParserTestCase.class);

    public ParserTestCase(String name) {
        super(name);
    }

    /**
     * Test parsing of table markup. The tag markup of the test string was
     * stripped when this article was extracted; the string below is a
     * reconstruction from the cell values that survived: two 3x3 tables,
     * with an id of "tro1" assumed on the row the code looks up.
     */
    public void testTable() {
        Parser myParser;
        NodeList nodeList = null;
        myParser = Parser.createParser(
                "<table>"
                        + "<tr id='tro1'><td>1-11</td><td>1-12</td><td>1-13</td></tr>"
                        + "<tr><td>1-21</td><td>1-22</td><td>1-23</td></tr>"
                        + "<tr><td>1-31</td><td>1-32</td><td>1-33</td></tr>"
                        + "</table>"
                        + "<table>"
                        + "<tr><td>2-11</td><td>2-12</td><td>2-13</td></tr>"
                        + "<tr><td>2-21</td><td>2-22</td><td>2-23</td></tr>"
                        + "<tr><td>2-31</td><td>2-32</td><td>2-33</td></tr>"
                        + "</table>",
                "GBK");
        NodeFilter tableFilter = new NodeClassFilter(TableTag.class);
        OrFilter lastFilter = new OrFilter();
        lastFilter.setPredicates(new NodeFilter[] { tableFilter });
        try {
            nodeList = myParser.parse(lastFilter);
            for (int i = 0; i < nodeList.size(); i++) {
                if (nodeList.elementAt(i) instanceof TableTag) {
                    TableTag tag = (TableTag) nodeList.elementAt(i);
                    TableRow[] rows = tag.getRows();
                    for (int j = 0; j < rows.length; j++) {
                        TableRow tr = (TableRow) rows[j];
                        System.out.println(tr.getAttribute("id"));
                        // null-safe comparison: rows without an id are skipped
                        if ("tro1".equalsIgnoreCase(tr.getAttribute("id"))) {
                            TableColumn[] td = tr.getColumns();
                            for (int k = 0; k < td.length; k++) {
                                // logger.fatal(td[k].toPlainTextString());
                                System.out.println(td[k].toPlainTextString());
                            }
                        }
                    }
                }
            }
        } catch (ParserException e) {
            e.printStackTrace();
        }
    }

    /**
     * Fetch the target data.
     *
     * @param url the target url
     * @throws Exception
     */
    public static void getDatabyUrl(String url) throws Exception {
        Parser myParser = new Parser(url);
        NodeList nodeList = null;
        myParser.setEncoding("gb2312");
        NodeFilter tableFilter = new NodeClassFilter(TableTag.class);
        OrFilter lastFilter = new OrFilter();
        lastFilter.setPredicates(new NodeFilter[] { tableFilter });
        try {
            nodeList = myParser.parse(lastFilter);
            // the data table starts around index 15 (table count is 19-21),
            // so scan from there to the end
            for (int i = 15; i < nodeList.size(); i++) {
                if (nodeList.elementAt(i) instanceof TableTag) {
                    TableTag tag = (TableTag) nodeList.elementAt(i);
                    TableRow[] rows = tag.getRows();
                    for (int j = 0; j < rows.length; j++) {
                        TableRow tr = (TableRow) rows[j];
                        if (tr.getAttribute("id") != null
                                && tr.getAttribute("id").equalsIgnoreCase("tr02")) {
                            TableColumn[] td = tr.getColumns();
                            // a single cell means the "no matching records" page
                            if (td.length == 1) {
                                System.out.println("对不起,没有你要查询的记录"); // "Sorry, no matching records"
                            } else {
                                for (int k = 0; k < td.length; k++) {
                                    System.out.println("内容:" // "content:"
                                            + td[k].toPlainTextString().trim());
                                }
                            }
                        }
                    }
                }
            }
        } catch (ParserException e) {
            e.printStackTrace();
        }
    }

    /**
     * Observed: 22 tables when there is data, 19 when there is none.
     *
     * @param args
     */
    public static void main(String[] args) {
        try {
            // getDatabyUrl("http://gd.12530.com/user/querytonebytype.do?field=tonecode&condition=619505000000008942&type=1006&pkValue=619505000000008942");
            // the host portion of this URL is truncated in the source
            getDatabyUrl("/user/querytonebytype.do?field=tonecode&condition=619272000000001712&type=1006&pkValue=619272000000001712");
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

6. Common HTML parsing operations

package com.jscud.test;

import java.io.BufferedReader;
import java.io.File;
import java.io.FileInputStream;
import java.io.InputStreamReader;

import org.htmlparser.Node;
import org.htmlparser.NodeFilter;
import org.htmlparser.Parser;
import org.htmlparser.filters.NodeClassFilter;
import org.htmlparser.filters.OrFilter;
import org.htmlparser.nodes.TextNode;
import org.htmlparser.tags.LinkTag;
import org.htmlparser.util.NodeList;
import org.htmlparser.util.ParserException;
import org.htmlparser.visitors.HtmlPage;
import org.htmlparser.visitors.TextExtractingVisitor;

import com.jscud.util.LogMan; // the author's own logging class

/**
 * Demonstrates uses of HtmlParser.
 *
 * @author scud
 */
public class ParseHtmlTest {

    public static void main(String[] args) throws Exception {
        String aFile = "e:/jscud/temp/test.htm";
        // readTextFile is defined on the pages of the source that are cut off
        String content = readTextFile(aFile, "GBK");
        test1(content);
        System.out.println("====================================");
        test2(content);
        System.out.println("====================================");
        test3(content);
        System.out.println("====================================");
        test4(content);
        System.out.println("====================================");
        test5(aFile);
        System.out.println("====================================");
        // accessing an external resource; comparatively slow
        test5("/"); // the URL is truncated in the source
        System.out.println("====================================");
    }

    /**
     * Analyse content read from a file; filePath can also be a URL.
     *
     * @param resource a file path or URL
     */
    public static void test5(String resource) throws Exception {
        Parser myParser = new Parser(resource);
        // set the encoding
        myParser.setEncoding("GBK");
        HtmlPage visitor = new HtmlPage(myParser);
        myParser.visitAllNodesWith(visitor);
        String textInPage = visitor.getTitle();
        System.out.println(textInPage);
    }

    /**
     * Process as a page; recommended for a standard HTML page.
     */
    public static void test4(String content) throws Exception {
        Parser myParser;
        myParser = Parser.createParser(content, "GBK");
        HtmlPage visitor = new HtmlPage(myParser);
        myParser.visitAllNodesWith(visitor);
        String textInPage = visitor.getTitle();
        System.out.println(textInPage);
    }

    /**
     * Parse the page with the Visitor pattern.
     * Small plus: entities such as &lt; are translated.
     * Minuses: lots of whitespace, and links cannot be extracted.
     */
    public static void test3(String content) throws Exception {
        Parser myParser;
        myParser = Parser.createParser(content, "GBK");
        TextExtractingVisitor visitor = new TextExtractingVisitor();
        myParser.visitAllNodesWith(visitor);
        String textInPage = visitor.getExtractedText();
        System.out.println(textInPage);
    }

    /**
     * Get the plain-text and link content, using filters.
     */
    public static void test2(String content) throws ParserException {
        Parser myParser;
        NodeList nodeList = null;
        myParser = Parser.createParser(content, "GBK");
        NodeFilter textFilter = new NodeClassFilter(TextNode.class);
        NodeFilter linkFilter = new NodeClassFilter(LinkTag.class);
        // meta is not handled for now
        // NodeFilter metaFilter = new NodeClassFilter(MetaTag.class);
        OrFilter lastFilter = new OrFilter();
        lastFilter.setPredicates(new NodeFilter[] { textFilter, linkFilter });
        nodeList = myParser.parse(lastFilter);
        Node[] nodes = nodeList.toNodeArray();
        for (int i = 0; i < nodes.length; i++) {
            Node anode = (Node) nodes[i];
            String line = "";
            if (anode instanceof TextNode) {
                TextNode textnode = (TextNode) anode;
                // line = textnode.toPlainTextString().trim();
                line = textnode.getText();
            } else if (anode instanceof LinkTag) {
                LinkTag linknode = (LinkTag) anode;
                // The source is cut off at this point (only the first 5 of
                // 41 pages are readable); presumably the link's URL and text
                // are read from linknode here.
            }
        }
    }

    // readTextFile and test1 appear on the unreadable pages of the source.
}
