
A Web Developer’s Guide to Parsing HTML with C#

What methods should devs use to parse HTML? Don't say regex! Instead, we present two options for parsing HTML using C#.
Dec 2nd, 2023 5:00am by

A question that appeared within a set of interview topics in my last post was: “Why is regex bad at parsing HTML?” So what methods should one use to parse HTML?

Developers use regular expressions (regex) to extract information from text strings; for example, a well-formed regex can grab a valid email address from within a form response. HTML is a presentation format that helps a web browser create a web page from text strings and images. The problem with parsing (that is, reading) HTML is that it uses opening and closing tags for context. This is hardly a problematic concept for humans, but computers work less well when the meaning of something depends on cues elsewhere.
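As a quick illustration of that happy case, here is a minimal sketch (the form response and the simple pattern are invented examples) of grabbing an email address with C#’s Regex class:

```csharp
using System;
using System.Text.RegularExpressions;

class RegexDemo
{
    static void Main()
    {
        // A hypothetical form response, flattened into a single string.
        var formResponse = "Name: Ada, Contact: ada@example.com, Flat: 3";

        // A deliberately simple email pattern; real-world validation needs more care.
        var match = Regex.Match(formResponse, @"[\w.+-]+@[\w-]+\.[\w.-]+");

        if (match.Success)
            Console.WriteLine(match.Value); // prints: ada@example.com
    }
}
```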

In the HTML below, each reference to “Hello” has to be treated entirely differently by the parser:
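(A stand-in fragment of mine: one “Hello” is a whole paragraph, one is bold text nested inside a paragraph, and one exists only as an attribute value.)

```html
<p>Hello</p>
<p>Say <b>Hello</b> to the residents</p>
<img src="wave.png" alt="Hello">
```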


Regex could find all the hellos, but would struggle a bit with finding just the bold hello. So we need other methods.

This post, then, is a gentle look at parsing snippets of HTML using elementary C# code with available package options.

C# Packages for Parsing HTML

I’ll look at two C# packages whose API styles will be similar to those of parallel packages for other languages. Keeping the problem in code means you can be flexible in how you interpret constraints and error conditions.

Imagine that you have asked your housing residents to contribute to a yearly maintenance report that you will eventually represent as a web page. This year, you have asked for each contributing resident to present their report as a very simple HTML snippet. The styling will all be done later. The residents all happen to be geeks, and they agree.

To restrain the exuberant residents, you devise a number of simple rules:

  1. All the paragraphs must appear within a <div> with a class called “YearReport”.
  2. The first paragraph must have a title attribute, from which we will extract the title’s text.
  3. There should be no more than three paragraphs in total.

Remember, this is intended to be content within a main report, which will control the styling. These snippets are just content.

Here is a good example of a resident’s snippet:
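(Not the author’s original, but a stand-in of mine that obeys all three rules.)

```html
<div class="YearReport">
  <p title="Flat 3: A Quiet Year">The boiler behaved itself this year, mostly.</p>
  <p>The shared garden was replanted in the spring.</p>
  <p>We would still like the side gate fixed before winter.</p>
</div>
```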


Let’s now check out some C# HTML parsing packages. Note that we aren’t interested in modifying the HTML, just parsing it.

HTML Agility Pack (HAP)

I’ll start with HTML Agility Pack (or HAP), as that appears to be the most popular and is available as a NuGet package within Visual Studio.

Our first task is to find the YearReport <div> that all content must sit within. This uses XPath, which is unsurprising but not something I want to examine in depth. We just want to check, first, that there exists a <div> with the attribute class="YearReport":
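(A minimal sketch standing in for the original snippet; the file path is hypothetical, while HtmlDocument, Load and SelectSingleNode are real HAP calls.)

```csharp
using System;
using HtmlAgilityPack;

var doc = new HtmlDocument();
doc.Load(@"C:\Users\me\report.html"); // hypothetical path -- yours will differ

// XPath: a <div> anywhere in the document whose class attribute is "YearReport"
var reportDiv = doc.DocumentNode.SelectSingleNode("//div[@class='YearReport']");

Console.WriteLine(reportDiv != null
    ? "Found the YearReport div"
    : "No YearReport div found");
```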


The mysterious string is XPath, whose job is to describe positions within an HTML/XML document. The one above says simply “find a <div> anywhere in this document that has a class attribute with the value ‘YearReport’”.

Any well-documented technology will already be within the purview of generative AI, which makes the extra step of XPath much less of a barrier. Asking ChatGPT to create the required XPath statement, if not the whole HTML parsing example, will certainly work.

Here is the code needed to test the constraints to a reasonable degree:
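(A sketch standing in for the original listing; the path and messages are mine, while SelectSingleNode, SelectNodes and GetAttributeValue are standard HAP methods.)

```csharp
using System;
using HtmlAgilityPack;

var doc = new HtmlDocument();
doc.Load(@"C:\Users\me\report.html"); // hypothetical path -- yours will differ

// Rule 1: everything must live inside a <div class="YearReport">
var reportDiv = doc.DocumentNode.SelectSingleNode("//div[@class='YearReport']");
if (reportDiv == null)
{
    Console.WriteLine("No YearReport div found");
    return;
}

// Rule 3: there should be between one and three paragraphs
var paragraphs = reportDiv.SelectNodes("p");
if (paragraphs == null || paragraphs.Count > 3)
{
    Console.WriteLine("Expected one to three paragraphs");
    return;
}

// Rule 2: the first paragraph must carry a title attribute
var title = paragraphs[0].GetAttributeValue("title", "");
if (string.IsNullOrEmpty(title))
{
    Console.WriteLine("The first paragraph has no title attribute");
    return;
}

Console.WriteLine($"Report title: {title}");
Console.WriteLine($"Paragraphs: {paragraphs.Count}");
```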


Note that the path to the file works for me, and will be different for you.

If we run this against our example HTML, we get the expected result.
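With the sketch above pointed at the stand-in snippet, that would be along these lines:

```
Report title: Flat 3: A Quiet Year
Paragraphs: 3
```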


If I use any adulterated HTML that trivially breaks the restrictions, this code will find it. But we can clearly improve on it, using proper exceptions and so on.

AngleSharp: A Non-XPath Option

However, I am now keen to see if I can use a library that doesn’t need XPath.

AngleSharp originally didn’t support XPath, though it gets plenty of NuGet package love now. By default it uses CSS selectors, which for many will be a simpler choice. Using the same examples and restrictions, let’s rewrite the above code for that library:
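(A sketch rather than the original listing; the path and messages are mine, while BrowsingContext.New, OpenAsync, QuerySelector, QuerySelectorAll and GetAttribute are AngleSharp’s own API.)

```csharp
using System;
using System.IO;
using AngleSharp;

// A little more coaxing to get the file in: read it ourselves, then hand the
// string to a browsing context.
var source = File.ReadAllText(@"C:\Users\me\report.html"); // hypothetical path
var context = BrowsingContext.New(Configuration.Default);
var document = await context.OpenAsync(req => req.Content(source));

// Rule 1: the enclosing div, found with a CSS selector this time
var reportDiv = document.QuerySelector("div.YearReport");
if (reportDiv == null)
{
    Console.WriteLine("No YearReport div found");
    return;
}

// Rule 3: there should be between one and three paragraphs
var paragraphs = reportDiv.QuerySelectorAll("p");
if (paragraphs.Length == 0 || paragraphs.Length > 3)
{
    Console.WriteLine("Expected one to three paragraphs");
    return;
}

// Rule 2: the first paragraph must carry a title attribute
var title = paragraphs[0].GetAttribute("title");
if (string.IsNullOrEmpty(title))
{
    Console.WriteLine("The first paragraph has no title attribute");
    return;
}

Console.WriteLine($"Report title: {title}");
Console.WriteLine($"Paragraphs: {paragraphs.Length}");
```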


I use the var keyword here to infer the type (where the types would be package-defined) in order to show that the structures in the two examples are similar.

Other than a bit more coaxing to get the HTML file, this code looks very similar to the HAP version. The CSS selector to find the correct <div> is easier to guess at, but must still be considered extra technology to learn.

Conclusion

I hope this goes to show that HTML can be used in a pipeline without too much fear that it will need tricky code to handle. The kicker is that I still needed either XPath or CSS selectors to quickly home in on the right <div>, which only adds to the allure of AI tool help.
