I don't know of a tool that does exactly this, but there are similar tools, and you could build one yourself.
I once did something similar to download all the links from a webpage. You might want to look at one of the web crawlers on Code Project and borrow some of their ideas.
Basically, you use a WebRequest to get the HTML for a page. Then you can use various means to extract the data you want. I chose regular expressions because they were quick and dirty and I was never going to use the application again; you could also parse the HTML properly.
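To illustrate the quick-and-dirty regex approach, here's a rough Python sketch (the same idea works in C# with WebRequest and Regex; the HTML string and URLs below are made-up stand-ins for a fetched page):

```python
import re

# Sample HTML standing in for a page you'd have fetched over HTTP
# (in .NET, this string would come from a WebRequest response stream).
html = '<a href="/products">Products</a> <a href="http://example.com/support">Support</a>'

# Quick-and-dirty regex for href values -- fine for a throwaway script,
# fragile for HTML in general.
links = re.findall(r'href="([^"]+)"', html)
print(links)  # ['/products', 'http://example.com/support']
```

A proper HTML parser is more robust, but for a one-off extraction a pattern like this gets the job done.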
You'd have to look for patterns: for example, any TD element that contains the string "Item Description", then read the next TD defined after it, which is where your target data will be held. You could then apply that algorithm to every page that uses the same pattern. You could also have the web crawler visit every page on www.dell.com for you, so that it searches both the pages and their contents automatically.
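The "label TD followed by value TD" idea might look like this in a Python sketch (the table markup and product text are invented sample data, not anything from a real Dell page):

```python
import re

# Invented sample markup mimicking a label/value table row.
html = """
<table>
  <tr><td>Item Description</td><td>Dell Latitude D620 Laptop</td></tr>
  <tr><td>Price</td><td>$799</td></tr>
</table>
"""

# Match a TD whose text is the label, then capture the contents
# of the TD immediately after it.
pattern = r'<td>\s*Item Description\s*</td>\s*<td>(.*?)</td>'
m = re.search(pattern, html, re.IGNORECASE | re.DOTALL)
if m:
    print(m.group(1))  # Dell Latitude D620 Laptop
```

Real pages vary (attributes on the TD, nested tags inside it), so you'd loosen the pattern accordingly, which is exactly where a real HTML parser starts to pay off.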