To do that, you first have to read each file as CSV, find some way to relate "row X" of file 1 to "row Y" of file 2, then identify the duplicate columns so you can build a new composite dataset and write it to a new file.
Just reading lines won't do it: that treats the CSV data as if it were a "straight text file". It is text, but with a layer of formatting on top that you need to parse in order to get this to work.
And a CSV column value can span multiple lines if it is enclosed in double quotes - so your current approach will not only fail to identify any columns, it could seriously damage your data integrity if you are unlucky.
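For example, this is a single two-field CSV record (the sample data is invented for illustration), but line-by-line reading would split it into three broken "records":

```
Name,Address
Smith,"12 High Street
Some Town"
```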
I'd start by reading each CSV file into a DataTable (A Fast CSV Reader could help you there), then combining them into a single new DataTable to write out to a new file.
It's a lot less effort (and more maintainable) than trying to "roll your own" CSV data reader!
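A minimal sketch of that approach, assuming the LumenWorks CsvReader from that article, and assuming both files share an "Id" key column (the file names and the key column name are illustrative, not from your question):

```csharp
using System.Data;
using System.IO;
using LumenWorks.Framework.IO.Csv; // the "Fast CSV Reader" library

class CsvMerge
{
    // Load a CSV file (with a header row) into a DataTable.
    static DataTable LoadCsv(string path)
    {
        var table = new DataTable();
        using (var reader = new CsvReader(new StreamReader(path), true))
        {
            table.Load(reader); // CsvReader implements IDataReader
        }
        return table;
    }

    static void Main()
    {
        DataTable first = LoadCsv("file1.csv");   // hypothetical input files
        DataTable second = LoadCsv("file2.csv");

        // Start from a copy of file1, then add any columns
        // that only exist in file2.
        DataTable merged = first.Copy();
        foreach (DataColumn col in second.Columns)
        {
            if (!merged.Columns.Contains(col.ColumnName))
                merged.Columns.Add(col.ColumnName, col.DataType);
        }

        // Relate "row X" of file1 to "row Y" of file2 via the key column,
        // and fill in the extra columns from the matching row.
        foreach (DataRow row in merged.Rows)
        {
            DataRow[] matches = second.Select("Id = '" + row["Id"] + "'");
            if (matches.Length == 1)
            {
                foreach (DataColumn col in second.Columns)
                {
                    if (!first.Columns.Contains(col.ColumnName))
                        row[col.ColumnName] = matches[0][col.ColumnName];
                }
            }
        }

        // Then write the merged DataTable back out as a new CSV file.
    }
}
```

The key point is that the CsvReader handles the quoting and multi-line fields for you, so by the time the data is in the DataTable you are working with real rows and columns rather than raw text lines.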