How to use Html Agility Pack to scrape an XML file

asp.net c# html-agility-pack screen-scraping

Question

I need to extract the links and descriptions from an XML feed at http://feeds.feedburner.com/Torrentfreak.

This is the code I used:

    var webGet = new HtmlWeb();
    var document = webGet.Load("http://feeds.feedburner.com/TechCrunch");
    var TechCrunch = from info in document.DocumentNode.SelectNodes("//channel")
                     from link in info.SelectNodes("//guid[@isPermaLink='false']")
                     from content in info.SelectNodes("//description")
                     select new
                     {
                         LinkURL = info.InnerText,
                         Content = content.InnerText,
                     };
    lvLinks.DataSource = TechCrunch;
    lvLinks.DataBind();

This is the binding I used in the ListView control on the ASP.NET page:

    <%# Eval("LinkURL") %>  -  <%# Eval("Text") %>

But it throws an error:

    Value cannot be null. Parameter name: source

What is the issue? And can data from XML nodes be retrieved using Html Agility Pack? Please advise. Thanks.

7/25/2016 8:42:03 PM

Accepted Answer

Instead of using Html Agility Pack, try using an RSS library.

These links may be of use to you:
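The answer's links did not survive extraction, but a minimal sketch of the RSS-library approach, assuming .NET's built-in System.ServiceModel.Syndication (one common choice; the specific library is an assumption, not named in the original answer), might look like this:

```csharp
// Sketch: read an RSS feed with SyndicationFeed instead of Html Agility Pack.
// Requires a reference to System.ServiceModel (System.ServiceModel.Syndication).
using System;
using System.ServiceModel.Syndication;
using System.Xml;

class FeedDemo
{
    static void Main()
    {
        using (XmlReader reader = XmlReader.Create("http://feeds.feedburner.com/TechCrunch"))
        {
            SyndicationFeed feed = SyndicationFeed.Load(reader);
            foreach (SyndicationItem item in feed.Items)
            {
                // Id maps to the <guid> element, Summary to <description>.
                Console.WriteLine("{0} - {1}", item.Id, item.Summary.Text);
            }
        }
    }
}
```

Because the input is well-formed XML rather than tag-soup HTML, a dedicated feed parser avoids the XPath and null-handling pitfalls in the question.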

2/2/2012 9:10:24 AM

Popular Answer

The error says a value is null, so there are several possible causes. One is in the query:

    select new
    {
        LinkURL = info.InnerText ?? string.Empty,
        Content = content.InnerText ?? string.Empty,
    };

It could also be in the .aspx, or in both. I believe the "-" is meant to be a literal string, like this:

    <%# Eval("LinkURL") ?? string.Empty %>+"-"+<%# Eval("Text") ?? string.Empty %>
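As a further note (not part of the original answer): "Value cannot be null. Parameter name: source" is the message LINQ throws when it enumerates a null sequence, and `SelectNodes` returns null when nothing matches. A hedged sketch that guards against this, reusing the `document` and `lvLinks` names from the question and selecting per `<item>` so links and descriptions stay paired, could look like:

```csharp
using System.Linq;
using HtmlAgilityPack;

// SelectNodes returns null (not an empty collection) when no node matches,
// so enumerating its result in a LINQ "from" clause throws
// "Value cannot be null. Parameter name: source". Guard before querying.
var items = document.DocumentNode.SelectNodes("//item");
if (items != null)
{
    var feed = (from item in items
                // SelectSingleNode also returns null on no match, so each
                // field falls back to an empty string.
                let guid = item.SelectSingleNode(".//guid[@isPermaLink='false']")
                let desc = item.SelectSingleNode(".//description")
                select new
                {
                    LinkURL = guid == null ? string.Empty : guid.InnerText,
                    Content = desc == null ? string.Empty : desc.InnerText,
                }).ToList();

    lvLinks.DataSource = feed;
    lvLinks.DataBind();
}
```

Binding in the .aspx would then use `Eval("Content")` rather than `Eval("Text")`, since `Text` is not a property of the anonymous type in the question's query.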



Licensed under: CC-BY-SA with attribution
Not affiliated with Stack Overflow