How to extract HTML data using HtmlAgilityPack

c# html-agility-pack web-crawler

I'm learning how to write web crawlers and found some good examples to get me started, but since I'm new to this, I have a few questions about the coding approach.

For example, the search results in question can be found here.

When I view the HTML source of the results, I can see the following:

<HR><CENTER><H3>License Information *</H3></CENTER><HR>                                                                       
<P>                                                                                                                           
       <CENTER> 06/03/2014 </CENTER> <BR>                                                                                     
<B>Name : </B> WILLIAMS AJAYA L                     <BR>                                                                      
<B>Address : </B> NEW YORK            NY                                          <BR>                                        
<B>Profession : </B> ATHLETIC TRAINER                          <BR>                                                           
<B>License No: </B> 001475 <BR>                                                                                               
<B>Date of Licensure : </B> 01/12/07      <BR>                                                                                
                                                                                                                                <B>Additional Qualification : </B>     &nbsp; Not applicable in this profession                       <BR>                    
<B> <A href="http://www.op.nysed.gov/help.htm#status"> Status :</A></B> REGISTERED                                        <BR>
<B>Registered through last day of : </B> 08/15      <BR>

How would I use HtmlAgilityPack to scrape that data from the website?

I tried to implement the example shown below, but I don't know where to make edits so that it can crawl the page:

private void btnCrawl_Click(object sender, EventArgs e)
{
    // Enumerate the open Internet Explorer windows through the SHDocVw COM interop
    SHDocVw.ShellWindows shellWindows = new SHDocVw.ShellWindows();
    string url = string.Empty;

    foreach (SHDocVw.InternetExplorer ie in shellWindows)
    {
        string filename = Path.GetFileNameWithoutExtension(ie.FullName).ToLower();

        if (filename.Equals("iexplore"))
        {
            txtURL.Text = "Now Crawling: " + ie.LocationURL.ToString();
            url = ie.LocationURL.ToString(); // capture the URL while "ie" is still in scope
        }
    }

    string xmlns = "{http://www.w3.org/1999/xhtml}";
    Crawler cl = new Crawler(url);
    XDocument xdoc = cl.GetXDocument();
    var res = from item in xdoc.Descendants(xmlns + "div")
              where item.Attribute("class") != null && item.Attribute("class").Value == "folder-news"
                    && item.Element(xmlns + "a") != null
              //select item;
              select new
              {
                  Link = item.Element(xmlns + "a").Attribute("href").Value,
                  Image = item.Element(xmlns + "a").Element(xmlns + "img").Attribute("src").Value,
                  Title = item.Elements(xmlns + "p").ElementAt(0).Element(xmlns + "a").Value,
                  Desc = item.Elements(xmlns + "p").ElementAt(1).Value
              };
    foreach (var node in res)
    {
        MessageBox.Show(node.ToString());
        tb.Text = node + "\n";
    }
    //Console.ReadKey();
}

The Crawler helper class:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Xml.Linq;

namespace CrawlerWeb
{
    public class Crawler
    {

        public string Url
        {
            get;
            set;
        }
        public Crawler() { }
        public Crawler(string Url)
        {
            this.Url = Url;
        }
        public XDocument GetXDocument()
        {
            // Download the page and emit it as XML so it can be queried with LINQ to XML
            HtmlAgilityPack.HtmlWeb doc1 = new HtmlAgilityPack.HtmlWeb();
            doc1.UserAgent = "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)";
            HtmlAgilityPack.HtmlDocument doc2 = doc1.Load(Url);
            doc2.OptionOutputAsXml = true;
            doc2.OptionAutoCloseOnEnd = true;
            doc2.OptionDefaultStreamEncoding = System.Text.Encoding.UTF8;
            XDocument xdoc = XDocument.Parse(doc2.DocumentNode.SelectSingleNode("html").OuterHtml);
            return xdoc;
        }
    }
}

tb is a multiline textbox... so I would like it to display the following:

Name WILLIAMS AJAYA L

Address NEW YORK NY

Profession ATHLETIC TRAINER

License No 001475

Date of Licensure 1/12/07

Additional Qualification Not applicable in this profession

Status REGISTERED

Registered through last day of 08/15

I would like to add the second argument to an array, because the next step would be to write it to a SQL database...
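
Something along these lines is what I have in mind for the database step (just a rough sketch; the table and column names here are made up):

// Sketch only -- the Licenses table and its column names are invented for illustration.
using System.Data.SqlClient;

public class LicenseWriter
{
    public void SaveLicense(string connectionString, string name, string address,
                            string profession, string licenseNo)
    {
        using (SqlConnection conn = new SqlConnection(connectionString))
        using (SqlCommand cmd = new SqlCommand(
            "INSERT INTO Licenses (Name, Address, Profession, LicenseNo) " +
            "VALUES (@name, @address, @profession, @licenseNo)", conn))
        {
            // Parameters keep the scraped text from breaking the SQL statement
            cmd.Parameters.AddWithValue("@name", name);
            cmd.Parameters.AddWithValue("@address", address);
            cmd.Parameters.AddWithValue("@profession", profession);
            cmd.Parameters.AddWithValue("@licenseNo", licenseNo);

            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}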

I am able to get the URL from the IE window that has the search results, but how would I code that into my script?

Accepted answer

This little snippet should get you started:

HtmlDocument doc = new HtmlDocument();
WebClient client = new WebClient();
string html = client.DownloadString("http://www.nysed.gov/coms/op001/opsc2a?profcd=67&plicno=001475&namechk=WIL");
doc.LoadHtml(html);

HtmlNodeCollection nodes = doc.DocumentNode.SelectNodes("//div");

You basically use the WebClient class to download the HTML, and then load that HTML into an HtmlDocument object. From there you use XPath to query the DOM tree and search for nodes. In the example above, nodes will contain all the div elements in the document.

Here's a quick reference on XPath syntax: http://msdn.microsoft.com/en-us/library/ms256086(v=vs.110).aspx
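
To pull out the specific Name / Address / Profession lines from the license page, one possible follow-up (a sketch only, not tested against the live page; it assumes the <B>Label :</B> value layout shown in the question's HTML) is to select the <B> nodes and read the text node that follows each one:

using System;
using System.Net;
using HtmlAgilityPack;

class LicenseScraper
{
    static void Main()
    {
        // Download the results page (same URL as the snippet above) and load it
        WebClient client = new WebClient();
        string html = client.DownloadString("http://www.nysed.gov/coms/op001/opsc2a?profcd=67&plicno=001475&namechk=WIL");

        HtmlDocument doc = new HtmlDocument();
        doc.LoadHtml(html);

        // Each field label is wrapped in a <B> element, e.g. <B>Name : </B> WILLIAMS AJAYA L <BR>
        // HtmlAgilityPack stores element names in lower case, so "//b" matches <B>.
        HtmlNodeCollection labels = doc.DocumentNode.SelectNodes("//b");
        if (labels == null)
            return; // page layout not as expected

        foreach (HtmlNode label in labels)
        {
            // The value is the text that sits between this </B> and the following <BR>
            HtmlNode next = label.NextSibling;
            string value = next != null ? HtmlEntity.DeEntitize(next.InnerText).Trim() : "";
            string name = label.InnerText.Replace(":", "").Trim();

            Console.WriteLine(name + " " + value);
        }
    }
}

Each name/value pair printed this way can just as easily be appended to the multiline textbox or collected into an array for the database step mentioned in the question.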



Licensed under: CC-BY-SA with attribution
Not affiliated with Stack Overflow