Convert any URL to an MHTML archive using native .NET code

Posted 12 Sep 2004; updated 3 Apr 2005

A native .NET class for saving URLs: text-only, HTML page, HTML archive, or HTML complete.

Sample Image - MhtBuilder.gif


If you've ever used the File | Save As... menu in Internet Explorer, you might have noticed a few interesting options IE provides under the Save As Type drop-down box:

Screenshot - Internet Explorer Save As menu

The options provided are:

  • Web Page, complete
  • Web Archive, single file
  • Web Page, HTML only
  • Text File

Most of these are self-explanatory, with the exception of the Web Archive (MHTML) format. What's neat about this format is that it bundles the web page and all of its references into a single compact .MHT file. It's a lot easier to distribute a single self-contained file than it is to distribute an HTML file with a subfolder full of image/CSS/Flash/XML files referenced by that HTML file. In our case, we were generating HTML reports and we needed to check these reports into a document management system which expects a single file. The MHTML (*.mht) format solves this problem beautifully!

This project contains the MhtBuilder class, a 100% .NET managed code solution which can auto-generate an MHT file from a target URL in one line of code. As a bonus, it will generate all the other formats listed above, too. And it's completely free, unlike some commercial solutions you might find out there.


I know people assume the worst of Microsoft, but the MHTML format is actually based on an Internet standard: RFC 2557, MIME Encapsulation of Aggregate Documents, such as HTML (MHTML). Web Archive, a.k.a. MHTML, is a remarkably simple plain text format which looks a lot like (and is in fact nearly identical to) an email. Here's the header of the MHT file shown at the top of the page:

Screenshot - Mht file header
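In case the screenshot doesn't render, a typical MHT file header looks roughly like this (an illustrative example with made-up boundary and URL values, not the exact file from the screenshot):

```
From: <Saved by MhtBuilder>
Subject: Example Page
MIME-Version: 1.0
Content-Type: multipart/related; type="text/html";
	boundary="----=_NextPart_000_0000_01234567.89ABCDEF"

This is a multi-part message in MIME format.

------=_NextPart_000_0000_01234567.89ABCDEF
Content-Type: text/html; charset="utf-8"
Content-Transfer-Encoding: quoted-printable
Content-Location: http://www.example.com/
```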

To generate an MHTML file, we simply merge together all of the files referenced in the HTML. The red line in the screenshot marks the first content block; there will be one content block for each file. We need to follow a few rules, though:

  • Use Quoted-Printable encoding for the text formats.
  • Use Base64 encoding for the binary formats.
  • Make sure the Content-Location has the correct absolute URL for each reference.
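The three rules above are language-agnostic; here's a minimal sketch of them in Python (not the VB.NET MhtBuilder itself -- the function and parameter names are mine), using the standard library's email machinery to produce a structurally similar multipart/related archive:

```python
# Sketch of the three MHTML rules: quoted-printable for text parts,
# base64 for binary parts, absolute Content-Location on every part.
from email.message import EmailMessage

def build_mht(html: str, page_url: str, resources: dict[str, bytes]) -> str:
    msg = EmailMessage()
    msg["Subject"] = "Saved Web Archive"
    # Rule 1: the text (HTML) part uses quoted-printable encoding
    msg.set_content(html, subtype="html", cte="quoted-printable")
    # Rule 3: tag the part with its absolute URL (moved into the first
    # sub-part automatically when the message becomes multipart/related)
    msg["Content-Location"] = page_url
    for abs_url, data in resources.items():
        # Rule 2: binary parts (images etc.) use base64 encoding
        msg.add_related(data, maintype="image", subtype="gif",
                        cte="base64",
                        headers=[f"Content-Location: {abs_url}"])
    return msg.as_string()
```

Writing the result of `build_mht` to a `.mht` file yields something IE-era browsers can open as a single-file archive.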

Not all websites will tolerate being packaged into an MHTML file. This version of MhtBuilder supports frames and IFRAMEs, but watch out for pages that include lots of complicated JavaScript. You'll want to use the .StripScripts option on sites like that.

Using MhtBuilder

MhtBuilder comes with a complete demo app:

Screenshot - Mht demo application

Try it out on your favorite website. The files are generated by default in the \bin folder of the solution; just click the View button to launch them. Bear in mind that for the Web Archive and Web Page, complete tabs, all the content from the target web page must be downloaded to the \bin folder, so it might take a little while! Although I don't provide any feedback events yet, I do emit a lot of progress feedback via Debug.Write, so switch to the debug output tab to see what's happening in real time.

There are four tabs here, just like the four choices IE provides in its Save As Type drop-down. In MhtBuilder, these are the four methods being called, in the order they appear on the tabs:

Public Sub SavePageComplete(ByVal outputFilePath As String, Optional ByVal url As String = "")
Public Sub SavePageArchive(ByVal outputFilePath As String, Optional ByVal url As String = "")
Public Sub SavePage(ByVal outputFilePath As String, Optional ByVal url As String = "")
Public Sub SavePageText(ByVal outputFilePath As String, Optional ByVal url As String = "")

As of Windows XP Service Pack 2, HTML files opened from disk result in security blocks. In order to avoid this, we need to add the "Mark of the Web" to the file so IE knows what URL it came from, and can thus assign an appropriate security zone to the HTML. That's what the blnAddMark parameter is for; it causes the HTML file to be tagged with this single line at the top:

<!-- saved from url=(0027) -->
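The four-digit number in parentheses is the character count of the URL that follows it. Generating the mark is trivial; a sketch (in Python rather than the article's VB.NET; the function name is mine):

```python
# "Mark of the Web" comment: (NNNN) is the zero-padded length of the
# URL that follows, so IE can assign the right security zone.
def mark_of_the_web(url: str) -> str:
    return f"<!-- saved from url=({len(url):04d}){url} -->"
```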

The other thing we need to do when saving these files is fix up the URLs. Any relative URLs such as:

<img src="/images/standard/logo225x72.gif">

must be converted to absolute URLs like so:

<img src="">

We do this using regular expressions, which gets us a NameValueCollection of all the references we need to fix. We loop through each reference and perform the fixup on the HTML string.

Private Function ExternalHtmlFiles() As Specialized.NameValueCollection
  If Not _ExternalFileCollection Is Nothing Then
    Return _ExternalFileCollection
  End If
  _ExternalFileCollection = New Specialized.NameValueCollection
  Dim r As Regex
  Dim html As String = Me.ToString
  Debug.WriteLine("Resolving all external HTML references from URL:")
  Debug.WriteLine("    " & Me.Url)
  '-- src='filename.ext' ; background="filename.ext"
  '-- note that we have to test 3 times to catch all quote styles: '', "", and none
  r = New Regex( _
    "(\ssrc|\sbackground)\s*=\s*((?<Key>'(?<Value>[^']+)')|" & _
    "(?<Key>""(?<Value>[^""]+)"")|(?<Key>(?<Value>[^ \n\r\f]+)))", _
    RegexOptions.IgnoreCase Or RegexOptions.Multiline)
  AddMatchesToCollection(html, r, _ExternalFileCollection)
  '-- @import "style.css" or @import url(style.css)
  r = New Regex( _
    "(@import\s|\S+-image:|background:)\s*?(url)*\s*?(?<Key>" & _
    "[""'(]{1,2}(?<Value>[^""')]+)[""')]{1,2})", _
    RegexOptions.IgnoreCase Or RegexOptions.Multiline)
  AddMatchesToCollection(html, r, _ExternalFileCollection)
  '-- <link rel=stylesheet href="style.css">
  r = New Regex( _
    "<link[^>]+?href\s*=\s*(?<Key>" & _
    "('|"")*(?<Value>[^'"">]+)('|"")*)", _
    RegexOptions.IgnoreCase Or RegexOptions.Multiline)
  AddMatchesToCollection(html, r, _ExternalFileCollection)
  '-- <iframe src="mypage.htm"> or <frame src="mypage.aspx">
  r = New Regex( _
    "<i*frame[^>]+?src\s*=\s*(?<Key>" & _
    "['""]{0,1}(?<Value>[^'""\\>]+)['""]{0,1})", _
    RegexOptions.IgnoreCase Or RegexOptions.Multiline)
  AddMatchesToCollection(html, r, _ExternalFileCollection)
  Return _ExternalFileCollection
End Function
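The fixup itself -- turning each extracted relative reference into an absolute URL -- boils down to resolving the reference against the page's base URL. A simplified sketch (Python stand-in for the VB.NET above; the regex covers only one quote style per branch and the names are mine):

```python
# Simplified relative-to-absolute reference fixup: find src/background
# attributes and resolve each URL against the page's base URL.
import re
from urllib.parse import urljoin

SRC_RE = re.compile(
    r"""(\ssrc|\sbackground)\s*=\s*(['"]?)(?P<url>[^'">\s]+)\2""",
    re.IGNORECASE)

def absolutize(html: str, base_url: str) -> str:
    def fix(match: re.Match) -> str:
        attrib, quote = match.group(1), match.group(2)
        # urljoin handles rooted ("/x"), relative ("x"), and absolute URLs
        return f"{attrib}={quote}{urljoin(base_url, match.group('url'))}{quote}"
    return SRC_RE.sub(fix, html)
```

Note that `urljoin` leaves already-absolute URLs untouched, which mirrors what the real fixup must do.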

We use a similar technique to get a list of all the files we need to download, which are then downloaded via my WebClientEx class. Why use that instead of the built-in Net.WebClient? Good question! Because Net.WebClient doesn't support HTTP compression. My class, on the other hand, does:

Private Function Decompress(ByVal b() As Byte, _
      ByVal CompressionType As HttpContentEncoding) As Byte()

  Dim s As Stream
  Select Case CompressionType
    Case HttpContentEncoding.Deflate
      s = New Zip.Compression.Streams.InflaterInputStream(New MemoryStream(b), _
          New Zip.Compression.Inflater(True))
    Case HttpContentEncoding.Gzip
      s = New GZip.GZipInputStream(New MemoryStream(b))
    Case Else
      Return b
  End Select
  Dim ms As New MemoryStream
  Const chunkSize As Integer = 2048
  Dim sizeRead As Integer
  Dim unzipBytes(chunkSize) As Byte
  While True
    sizeRead = s.Read(unzipBytes, 0, chunkSize)
    If sizeRead > 0 Then
      ms.Write(unzipBytes, 0, sizeRead)
    Else
      Exit While
    End If
  End While
  Return ms.ToArray
End Function

HTTP compression is a no-brainer: standard GZIP compression typically shrinks text content to a fraction of its original size, substantially increasing your effective bandwidth -- with decompression courtesy of the SharpZipLib library.


Creating MHTML files isn't hard, but there are lots of little gotchas when dealing with HTML, regular expressions, and HTTP downloads. I tried to document all the difficult bits in the source code. I've also tested MhtBuilder on dozens of different websites so far with excellent results.

There are many more details and comments in the source code provided at the top of the article, so check it out. Please don't hesitate to provide feedback, good or bad! I hope you enjoyed this article; if you did, you may like my other articles as well.


  • Sunday, September 12, 2004 - Published.
  • Monday, March 28, 2005 - Version 2.0
    • Completely rewritten!
    • Autodetection of content encoding (e.g., international web pages), tested against multi-language websites.
    • Now correctly decompresses both types of HTTP compression.
    • Supports completely in-memory operation for server-side use, or on-disk storage for client use.
    • Now works on web pages with frames and IFrames, using recursive retrieval.
    • HTTP authentication and HTTP Proxy support.
    • Allows configuration of browser ID string to retrieve browser-specific content.
    • Basic cookie support (needs enhancement and testing).
  • Much improved regular expressions used for parsing HTML.
    • Extensive use of VB.NET 2005 style XML comments throughout.


This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below.

A list of licenses authors might use can be found here


About the Author

Web Developer
United States
My name is Jeff Atwood. I live in Berkeley, CA with my wife, two cats, and far more computers than I care to mention. My first computer was the Texas Instruments TI-99/4a. I've been a Microsoft Windows developer since 1992, primarily in VB. I am particularly interested in best practices and human factors in software development, as represented in my recommended developer reading list. I also have a coding and human factors related blog.

Comments and Discussions

Proposed way to support file:///... based requests
marchesb, 12-May-10 9:38
This solution is awesome, just what I was looking for! I noticed in earlier messages that file:///... based requests were originally supported, but that support was dropped. For anyone still interested, I was able to add that support (at least for my purposes) as follows:

1. In ExternalFile.vb, updated the urlPattern var in ConvertRelativeToAbsoluteRefs method to include "file:":
Dim urlPattern As String = _
  "(?<attrib>\shref|\ssrc|\sbackground)\s*?=\s*?" & _
  "(?<delim1>[""'\\]{0,2})(?!\s*\+|#|http:|ftp:|mailto:|javascript:|file:)" & _
2. In ExternalFile.vb, added a new urlFileRegex var in AddMatchesToCollection method:
Dim urlFileRegex As New Regex("^file*:///\w+", RegexOptions.IgnoreCase)
If (Not urlRegex.IsMatch(value)) AndAlso (Not urlFileRegex.IsMatch(value)) Then

3. In WebClientEx.vb added these vars at top of the class:
Private _IsFileWebRequest As Boolean = False
Private _LoadedInitHTML As Boolean = False

4. In WebClientEx.vb added this prop:
Public Property IsFileWebRequest() As Boolean
  Get
    Return _IsFileWebRequest
  End Get
  Set(ByVal Value As Boolean)
    _IsFileWebRequest = Value
  End Set
End Property

5. In WebClientEx.vb, modified GetUrlData method to be as follows:
Public Sub GetUrlData(ByVal Url As String, ByVal ifModifiedSince As DateTime)
  '-- a.) Added If/Then check and handling for _IsFileWebRequest = True:
  Dim wreq As System.Net.WebRequest
  If _IsFileWebRequest = False Then
    wreq = DirectCast(WebRequest.Create(Url), HttpWebRequest)
  Else
    wreq = DirectCast(WebRequest.Create(Url), FileWebRequest)
  End If

  '-- do we need to use a proxy to get to the web?
  If _ProxyUrl <> "" Then
    Dim wp As New WebProxy(_ProxyUrl)
    If _ProxyAuthenticationRequired Then
      If _ProxyUser <> "" And _ProxyPassword <> "" Then
        wp.Credentials = New NetworkCredential(_ProxyUser, _ProxyPassword)
      Else
        wp.Credentials = CredentialCache.DefaultCredentials
      End If
      wreq.Proxy = wp
    End If
  End If

  '-- does the target website require credentials?
  If _AuthenticationRequired Then
    If _AuthenticationUser <> "" And _AuthenticationPassword <> "" Then
      wreq.Credentials = New NetworkCredential(_AuthenticationUser, _AuthenticationPassword)
    Else
      wreq.Credentials = CredentialCache.DefaultCredentials
    End If
  End If

  wreq.Method = "GET"
  wreq.Timeout = _RequestTimeoutMilliseconds

  '-- b.) Added If/Then check and type cast handling for _IsFileWebRequest = False:
  If _IsFileWebRequest = False Then
    CType(wreq, HttpWebRequest).UserAgent = _HttpUserAgent
  End If

  wreq.Headers.Add("Accept-Encoding", _AcceptedEncodings)

  '-- c.) Added If/Then check and type cast handling for _IsFileWebRequest = False:
  If _IsFileWebRequest = False Then
    '-- note that, if present, this will trigger a 304 exception
    '-- if the URL being retrieved is not newer than the specified
    '-- date/time
    If ifModifiedSince <> DateTime.MinValue Then
      CType(wreq, HttpWebRequest).IfModifiedSince = ifModifiedSince
    End If
  End If

  '-- d.) Added If/Then check and type cast handling for _IsFileWebRequest = False:
  If _IsFileWebRequest = False Then
    '-- sometimes we need to transfer cookies to another URL;
    '-- this keeps them around in the object
    If KeepCookies Then
      If _PersistedCookies Is Nothing Then
        _PersistedCookies = New CookieContainer
      End If
      CType(wreq, HttpWebRequest).CookieContainer = _PersistedCookies
    End If
  End If

  '-- download the target URL into a byte array
  '-- e.) Added If/Then check and handling for _IsFileWebRequest = True:
  Dim wresp As System.Net.WebResponse
  If _IsFileWebRequest = False Then
    wresp = DirectCast(wreq.GetResponse, HttpWebResponse)
  Else
    wresp = DirectCast(wreq.GetResponse, FileWebResponse)
  End If

  '-- convert response stream to byte array
  Dim ebr As New ExtendedBinaryReader(wresp.GetResponseStream)
  _ResponseBytes = ebr.ReadToEnd()

  '-- determine if body bytes are compressed, and if so,
  '-- decompress the bytes
  Dim ContentEncoding As HttpContentEncoding
  If wresp.Headers.Item("Content-Encoding") Is Nothing Then
    ContentEncoding = HttpContentEncoding.None
  Else
    Select Case wresp.Headers.Item("Content-Encoding").ToLower
      Case "gzip"
        ContentEncoding = HttpContentEncoding.Gzip
      Case "deflate"
        ContentEncoding = HttpContentEncoding.Deflate
      Case Else
        ContentEncoding = HttpContentEncoding.Unknown
    End Select
    _ResponseBytes = Decompress(_ResponseBytes, ContentEncoding)
  End If

  '-- sometimes URL is indeterminate, eg, ""
  '-- in that case the folder and file resolution MUST be done on
  '-- the server, and returned to the client as ContentLocation
  _ContentLocation = wresp.Headers("Content-Location")
  If _ContentLocation Is Nothing Then
    _ContentLocation = ""
  End If

  '-- if we have string content, determine encoding type
  '-- (must cast to prevent Nothing)
  _DetectedContentType = wresp.Headers("Content-Type")
  If _DetectedContentType Is Nothing Then
    _DetectedContentType = ""
  End If

  '-- f.) Added If/Then check and handling for _IsFileWebRequest = True
  '-- (wanted to force to text/html for my purposes, but only for the
  '-- initial HTML text content):
  If _IsFileWebRequest = True AndAlso _LoadedInitHTML = False Then
    _DetectedContentType = "text/html;charset=UTF-8"
  End If

  If Me.ResponseIsBinary Then
    _DetectedEncoding = Nothing
  Else
    If _ForcedEncoding Is Nothing Then
      _DetectedEncoding = DetectEncoding(_DetectedContentType, _ResponseBytes)
    End If
  End If

  '-- g.) Added If/Then check and handling for _IsFileWebRequest = True
  '-- (wanted to force to utf-8 for my purposes, but only for the
  '-- initial HTML text content):
  If _IsFileWebRequest = True AndAlso _LoadedInitHTML = False Then
    _DetectedEncoding = System.Text.Encoding.GetEncoding("utf-8")
  End If

  '-- h.) Added use of _LoadedInitHTML:
  _LoadedInitHTML = True

End Sub

6. In Builder.vb, added IsFileWebRequest arg to SavePageArchive method's signature:
Public Function SavePageArchive(ByVal outputFilePath As String, ByVal st As FileStorage, _
    Optional ByVal url As String = "", _
    Optional ByVal IsFileWebRequest As Boolean = False) As String

7. Added the following line at top of SavePageArchive method:
WebClient.IsFileWebRequest = IsFileWebRequest

... and that did it for me.

Article Copyright 2004 by wumpus1
Everything else Copyright © CodeProject, 1999-2018