
LINQ to CSV library

10 Jan 2015, Apache license
Easy to use library for reading and writing CSV and tab delimited files with LINQ queries.

Introduction

This library makes it easy to use CSV files with LINQ queries. Its features include:

  • Follows the most common rules for CSV files. Correctly handles data fields that contain commas and line breaks.
  • In addition to comma, most delimiting characters can be used, including tab for tab delimited fields.
  • Can be used with an IEnumerable of an anonymous class - which is often returned by a LINQ query.
  • Supports deferred reading.
  • Supports processing files with international date and number formats.
  • Supports different character encodings if you need them.
  • Recognizes a wide variety of date and number formats when reading files.
  • Provides fine control of date and number formats when writing files.
  • Robust error handling, allowing you to quickly find and fix problems in large input files.

Requirements

  • To compile the library, you need a C# 2010 compiler or better, such as Visual Studio 2010 or Visual C# 2010 Express Edition.
  • To run the library code, you need to have the .NET 4.0 framework installed.

Installation

Simply install the NuGet package.
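If you prefer the command line: from the Visual Studio Package Manager Console, installation looks like this (assuming the package ID is LinqToCsv, the name this library has used on NuGet):

Install-Package LinqToCsv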

Quick Start

Reading from a file

  1. In your project, add a reference to LINQtoCSV.dll (the NuGet package from the Installation step adds this reference for you).
  2. The file will be read into an IEnumerable<T>, where T is a data class that you define. The data records read from the file will be stored in objects of this data class. You could define a data class along these lines:
    C#
    using LINQtoCSV;
    using System;
    class Product
    {
        [CsvColumn(Name = "ProductName", FieldIndex = 1)]
        public string Name { get; set; }
        [CsvColumn(FieldIndex = 2, OutputFormat = "dd MMM HH:mm:ss")]
        public DateTime LaunchDate { get; set; }
        [CsvColumn(FieldIndex = 3, CanBeNull = false, OutputFormat = "C")]
        public decimal Price { get; set; }
        [CsvColumn(FieldIndex = 4)]
        public string Country { get; set; }
        [CsvColumn(FieldIndex = 5)]
        public string Description { get; set; }
    }

    With this definition, you could read into an IEnumerable<Product>.

    Although this example only uses properties, the library methods will recognize simple fields as well. Just make sure your fields/properties are public.

    The optional CsvColumn attribute allows you to specify whether a field/property is required, how it should be written to an output file, etc. Full details are available here.

  3. Import the LINQtoCSV namespace at the top of the source file where you'll be reading the file:
    C#
    using LINQtoCSV;
  4. Create a CsvFileDescription object, and initialize it with details about the file that you're going to read. It will look like this:
    C#
    CsvFileDescription inputFileDescription = new CsvFileDescription
    {
        SeparatorChar = ',', 
        FirstLineHasColumnNames = true
    };

    This allows you to specify what character is used to separate data fields (comma, tab, etc.), whether the first record in the file holds column names, and a lot more (full details).

  5. Create a CsvContext object:
    C#
    CsvContext cc = new CsvContext();

    It is this object that exposes the Read and Write methods you'll use to read and write files.

  6. Read the file into an IEnumerable<T> using the CsvContext object's Read method, like this:
    C#
    IEnumerable<Product> products =
        cc.Read<Product>("products.csv", inputFileDescription);

    This reads the file products.csv into the variable products, which is of type IEnumerable<Product>.

  7. You can now access products via a LINQ query, a foreach loop, etc.:
    C#
    var productsByName =
        from p in products
        orderby p.Name
        select new { p.Name, p.LaunchDate, p.Price, p.Description };
    // or ...
    foreach (Product item in products) { .... }

To make it easier to get an overview, here is the code again that reads from a file, but now in one go:

C#
CsvFileDescription inputFileDescription = new CsvFileDescription
{
    SeparatorChar = ',', 
    FirstLineHasColumnNames = true
};
CsvContext cc = new CsvContext();
IEnumerable<Product> products =
    cc.Read<Product>("products.csv", inputFileDescription);
// Data is now available via variable products.
var productsByName =
    from p in products
    orderby p.Name
    select new { p.Name, p.LaunchDate, p.Price, p.Description };
// or ...
foreach (Product item in products) { .... }

You'll find this same code in the SampleCode project in the sources.

Writing to a file

This is very similar to reading a file.

  1. In your project, add a reference to LINQtoCSV.dll.
  2. The Write method takes a IEnumerable<T> and writes each object of type T in the IEnumerable<T> as a data record to the file. The definition of your data class could look like this:
    C#
    using LINQtoCSV;
    using System;
    class Product
    {
        [CsvColumn(Name = "ProductName", FieldIndex = 1)]
        public string Name { get; set; }
        [CsvColumn(FieldIndex = 2, OutputFormat = "dd MMM HH:mm:ss")]
        public DateTime LaunchDate { get; set; }
        [CsvColumn(FieldIndex = 3, CanBeNull = false, OutputFormat = "C")]
        public decimal Price { get; set; }
        [CsvColumn(FieldIndex = 4)]
        public string Country { get; set; }
        [CsvColumn(FieldIndex = 5)]
        public string Description { get; set; }
    }

    The optional CsvColumn attribute allows you to specify such things as what date and number formats to use when writing each data field. Details for all CsvColumn properties (CanBeNull, OutputFormat, etc.) are available here.

    Although this example only uses properties, you can also use simple fields.

    The Write method will happily use an anonymous type for T, so you can write the output of a LINQ query right to a file. In that case, you obviously won't define T yourself. Later on, you'll see an example of this.

  3. Import the LINQtoCSV namespace at the top of the source file where you'll be writing the file:
    C#
    using LINQtoCSV;
  4. Make sure the data is stored in an object that implements IEnumerable<T>, such as a List<T>, or the IEnumerable<T> returned by the Read method.
    C#
    List<Product> products2 = new List<Product>();
    // Fill the list with products
    // ...
  5. Create a CsvFileDescription object, and initialize it with details about the file you will be writing, along these lines:
    C#
    CsvFileDescription outputFileDescription = new CsvFileDescription
    {
        SeparatorChar = '\t', // tab delimited
        FirstLineHasColumnNames = false, // no column names in first record
        FileCultureName = "nl-NL" // use formats used in The Netherlands
    };
  6. Create a CsvContext object:
    C#
    CsvContext cc = new CsvContext();
  7. Invoke the Write method exposed by the CsvContext object to write the contents of your IEnumerable<T> to a file:
    C#
    cc.Write(
        products2,
        "products2.csv",
        outputFileDescription);

    This writes the Product objects in the variable products2 to the file "products2.csv".

Here is the code again that writes a file, but now in one go:

C#
List<Product> products2 = new List<Product>();
// Fill the list with products
// ...
CsvFileDescription outputFileDescription = new CsvFileDescription
{
    SeparatorChar = '\t', // tab delimited
    FirstLineHasColumnNames = false, // no column names in first record
    FileCultureName = "nl-NL" // use formats used in The Netherlands
};
CsvContext cc = new CsvContext();
cc.Write(
    products2,
    "products2.csv",
    outputFileDescription);

Writing an IEnumerable of anonymous type

If you have a LINQ query producing an IEnumerable of anonymous type, writing that IEnumerable to a file is no problem:

C#
CsvFileDescription outputFileDescription = new CsvFileDescription
{
.....
};
CsvContext cc = new CsvContext();
// LINQ query returning IEnumerable of anonymous type
// into productsNetherlands
var productsNetherlands =
    from p in products
    where p.Country == "Netherlands"
    select new { p.Name, p.LaunchDate, p.Price, p.Description };
// Write contents of productsNetherlands to file
cc.Write(
    productsNetherlands,
    "products-Netherlands.csv", 
    outputFileDescription);

Here, a LINQ query selects all products for "Netherlands" from the variable products, and returns an IEnumerable holding objects of some anonymous type that has the fields Name, LaunchDate, Price, and Description. The Write method then writes those objects to the file products-Netherlands.csv.

CsvContext.Write Overloads

  • Write<T>(IEnumerable<T> values, string fileName)
  • Write<T>(IEnumerable<T> values, string fileName, CsvFileDescription fileDescription)
  • Write<T>(IEnumerable<T> values, TextWriter stream)
  • Write<T>(IEnumerable<T> values, TextWriter stream, CsvFileDescription fileDescription)

Some interesting facts about these overloads:

  • None of the overloads return a value.
  • Unlike the Read method, Write does not require that T has a parameterless constructor.
  • Overloads that take a stream write the data to the stream. Those that take a file name write the data to the file.
  • Overloads that do not take a CsvFileDescription object simply create one themselves, using the default values for the CsvFileDescription properties.
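For example, the TextWriter overloads let you capture the CSV output in memory rather than in a file. A minimal sketch, reusing the Product list products2 from the Writing to a file example:

C#
using System.IO;
// ...
StringWriter sw = new StringWriter();
CsvContext cc = new CsvContext();
// No CsvFileDescription passed, so the default property values are used
cc.Write(products2, sw);
string csvText = sw.ToString(); // the CSV data as a string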

CsvContext.Read Overloads

  • Read<T>(string fileName)
  • Read<T>(string fileName, CsvFileDescription fileDescription)
  • Read<T>(StreamReader stream)
  • Read<T>(StreamReader stream, CsvFileDescription fileDescription)

Some interesting facts about these overloads:

  • Each overload returns an IEnumerable<T>.
  • T must have a parameterless constructor. If you do not define a constructor for T, the compiler will generate a parameterless constructor for you.
  • Overloads that take a stream read the data from the stream. Those that take a file name read the data from the file. However, see the section on deferred reading.
  • Overloads that do not take a CsvFileDescription object simply create one themselves, using the default values for the CsvFileDescription properties.
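Conversely, the StreamReader overloads let you read CSV data that is already in memory. A minimal sketch, assuming csvText is a string holding the CSV data (note that the underlying stream must support Seek, which MemoryStream does):

C#
using System.IO;
using System.Text;
// ...
CsvContext cc = new CsvContext();
StreamReader sr = new StreamReader(
    new MemoryStream(Encoding.UTF8.GetBytes(csvText)));
IEnumerable<Product> products = cc.Read<Product>(sr);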

Reading Raw Data Rows

Sometimes it's easier to read the raw data fields from the CSV file, instead of having the library process them into objects. For example, different rows may have different formats, or you may not know at compile time which field holds what data.

You can make this happen by having your type T implement the interface IDataRow. This interface is included in the library, so you don't have to write it yourself. It essentially just describes a collection of DataRowItem objects:

C#
public interface IDataRow
{
    // Number of data row items in the row.
    int Count { get; }
    // Clear the collection of data row items.
    void Clear();
    // Add a data row item to the collection.
    void Add(DataRowItem item);
    // Allows you to access each data row item with an array index, such as
    // row[i]
    DataRowItem this[int index] { get; set; }
}

The DataRowItem class is also defined in the library. It describes each individual field within a data row:

C#
public class DataRowItem
{
    ...
    // Line number of the field
    public int LineNbr  { get { ... } }
    // Value of the field
    public string Value { get { ... } }
}

The line number is included in the DataRowItem class, because data rows can span multiple lines.

The easiest way to create a class that implements IDataRow is to derive it from List<DataRowItem>:

C#
using LINQtoCSV;
internal class MyDataRow : List<DataRowItem>, IDataRow
{
}

Now you can read the CSV file into a collection of MyDataRow objects:

C#
IEnumerable<MyDataRow> products =
    cc.Read<MyDataRow>("products.csv", inputFileDescription);

You can then access each individual field within each data row:

C#
foreach (MyDataRow dataRow in products)
{
    string firstFieldValue = dataRow[0].Value;
    int firstFieldLineNbr = dataRow[0].LineNbr;
    string secondFieldValue = dataRow[1].Value;
    int secondFieldLineNbr = dataRow[1].LineNbr;
    ...
}

Deferred Reading

Here is how the Read overloads implement deferred reading:

  • When you invoke the Read method (which returns an IEnumerable<T>), no data is read yet. If using a file, the file is not yet opened.
  • When the Enumerator is retrieved from the IEnumerable<T> (for example, when starting a foreach loop), the file is opened for reading. If using a stream, the stream is rewound (seek to start of the stream).
  • Each time you retrieve a new object from the Enumerator (for example, while looping through a foreach), a new record is read from the file or stream.
  • When you close the Enumerator (for example, when a foreach ends or when you break out of it), the file is closed. If using a stream, the stream is left unchanged.

This means that:

  • If reading from a file, the file will be open for reading while you're accessing the IEnumerable<T> in a foreach loop.
  • The file can be updated in between accesses. You could access the IEnumerable<T> in a foreach loop, then update the file, then access the IEnumerable<T> again in a foreach loop to pick up the new data, etc. You only need to call Read once at the beginning, to get the IEnumerable<T>.
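That second point can be sketched like this (AppendNewProducts is a hypothetical placeholder for whatever code updates the file):

C#
IEnumerable<Product> products =
    cc.Read<Product>("products.csv", inputFileDescription);
// First pass - opens and reads the file as it is now
foreach (Product p in products) { /* ... */ }
// Some other code adds records to the file
AppendNewProducts("products.csv"); // hypothetical
// Second pass - re-opens the file and picks up the new records
foreach (Product p in products) { /* ... */ }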

CsvFileDescription

The Read and Write methods need some details about the file they are reading or writing, such as whether the first record contains column names.

As shown in the Reading from a file and Writing to a file examples, you put those details in an object of type CsvFileDescription, which you then pass to the Read or Write method. This prevents lengthy parameter lists, and allows you to use the same details for multiple files.

A CsvFileDescription object has these properties:

SeparatorChar

Type: char
Default: ','
Applies to: Reading and Writing

Example:

C#
CsvFileDescription fd = new CsvFileDescription();
fd.SeparatorChar = '\t'; // use tab delimited file
CsvContext cc = new CsvContext();
cc.Write(data, "file.csv", fd);

The character used to separate fields in the file. This would be a comma for CSV files, or a '\t' for a tab delimited file.

You can use any character you like, except for white space characters or the double quote (").

QuoteAllFields

Type: bool
Default: false
Applies to: Writing only

Example:

C#
fd.QuoteAllFields = true; // forces quotes around all fields

When false, Write only puts quotes around data fields when needed, to avoid confusion - for example, when the field contains the SeparatorChar or a line break.

When true, Write surrounds all data fields with quotes.

FirstLineHasColumnNames

Type: bool
Default: true
Applies to: Reading and Writing

Example:

C#
fd.FirstLineHasColumnNames = false; // first record does not have column headers

When reading a file, tells Read whether to interpret the data fields in the first record in the file as column headers.

When writing a file, tells Write whether to write column headers as the first record of the file.

EnforceCsvColumnAttribute

Type: bool
Default: false
Applies to: Reading and Writing

Example:

C#
fd.EnforceCsvColumnAttribute = true; // only use fields with [CsvColumn] attribute

When true, Read only reads data fields into public fields and properties with the [CsvColumn] attribute, ignoring all other fields and properties. And, Write only writes the contents of public fields and properties with the [CsvColumn] attribute.

When false, all public fields and properties are used.
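For example, with EnforceCsvColumnAttribute set to true, only Name and Price below take part in reading and writing; InternalNotes (a made-up property for illustration) is skipped because it carries no [CsvColumn] attribute:

C#
class Product
{
    [CsvColumn(FieldIndex = 1)]
    public string Name { get; set; }
    [CsvColumn(FieldIndex = 2)]
    public decimal Price { get; set; }
    // No [CsvColumn] attribute - ignored when EnforceCsvColumnAttribute is true
    public string InternalNotes { get; set; }
}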

FileCultureName

Type: string
Default: current system setting
Applies to: Reading and Writing

Example:

C#
fd.FileCultureName = "en-US"; // use US style dates and numbers

Different cultures use different ways to write dates and numbers. 23 May 2008 is 5/23/2008 in the United States (en-US) and 23.5.2008 in Germany (de-DE). Use the FileCultureName field to tell Read how to interpret the dates and numbers it reads from the file, and to tell Write how to write dates and numbers to the file.

By default, the library uses the current language/country setting on your system. So, if your system uses French-Canadian (fr-CA), the library uses that culture unless you override it with FileCultureName.

The library uses the same culture names as the .NET "CultureInfo" class (full list of names).

TextEncoding

Type: Encoding
Default: Encoding.UTF8
Applies to: Reading and Writing

Example:

C#
fd.TextEncoding = Encoding.Unicode; // use Unicode character encoding

If the files that you read or write are in English, there is no need to set TextEncoding.

However, if you use languages other than English, the way the characters in your files are encoded may be an issue. You will want to make sure that the encoding used by the library matches the encoding used by any other programs (editors, spreadsheets) that access your files.

Specifically, if you write files with the Euro symbol, you may need to use Unicode encoding, as shown in the example.

DetectEncodingFromByteOrderMarks

Type: bool
Default: true
Applies to: Reading only

Example:

C#
fd.DetectEncodingFromByteOrderMarks = false; // suppress encoding detection

Related to TextEncoding. The default normally works fine.

Tells Read whether to detect the encoding of the input file by looking at the first three bytes of the file. Otherwise, it uses the encoding given in the TextEncoding property.

MaximumNbrExceptions

Type: int
Default: 100
Applies to: Reading only

Example:

C#
fd.MaximumNbrExceptions = -1; // always read entire file before throwing AggregatedException

Sets the maximum number of exceptions that will be aggregated into an AggregatedException.

To remove the limit and read the entire file no matter how many exceptions occur, set MaximumNbrExceptions to -1.

For details about aggregated exceptions, see the error handling section.

NoSeparatorChar

Type: bool
Default: false
Applies to: Reading only

Example:

C#
fd.NoSeparatorChar = true; // Fields are fixed width

Set this to true when the CSV file uses fixed width fields rather than separator characters.

The number of characters is specified using the CharLength property in the CsvColumnAttribute class.

UseFieldIndexForReadingData

Type: bool
Default: false
Applies to: Reading only

Example:

C#
fd.UseFieldIndexForReadingData = true;

Modifies the behaviour of the FieldIndex property of the CsvColumnAttribute class, to make it suitable for fixed width fields. See the description of the FieldIndex property for details.

IgnoreTrailingSeparatorChar

Type: bool
Default: false
Applies to: Reading only

Example:

C#
fd.IgnoreTrailingSeparatorChar = true; // ignore separator character at the end of the line

Consider the following file:

column1;column2;column3;

Although a trailing separator is not a canonical representation of a CSV file, the IgnoreTrailingSeparatorChar property tells Read to ignore a separator character at the end of each line.

IgnoreUnknownColumns

Type: bool
Default: false
Applies to: Reading only

Example:

There are cases where you don't need to read all the columns, but only a subset of them. Consider the following example of a CSV file containing a list of people:

Id | Name | Last Name | Age | City
1  | John | Doe       | 15  | Washington
2  | Jane | Doe       | 20  | New York


Suppose you have the following class:

C#
class Person 
{
    [CsvColumn(Name = "Name")]
    public string Name { get; set; }
    [CsvColumn(Name = "Last Name")]
    public string LastName { get; set; }
    [CsvColumn(Name = "Age")]
    public int Age { get; set; }
}

Note that the input file has columns "Id" and "City" which are not listed in the class. This discrepancy would normally cause an exception.

However, if you set

C#
fd.IgnoreUnknownColumns = true;

then the columns "Id" and "City" will be ignored without an exception.
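Putting it together, reading that file with the Person class might look like this sketch (the file name people.csv is assumed):

C#
CsvFileDescription fd = new CsvFileDescription
{
    FirstLineHasColumnNames = true,
    IgnoreUnknownColumns = true // skip "Id" and "City" without an exception
};
CsvContext cc = new CsvContext();
IEnumerable<Person> people = cc.Read<Person>("people.csv", fd);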

 

CsvColumn Attribute

As shown in the Reading from a file and Writing to a file examples, you can decorate the public fields and properties of your data class with the CsvColumn attribute to specify such things as the output format for date and number fields.

Use of the CsvColumn attribute is optional. As long as the EnforceCsvColumnAttribute property of the CsvFileDescription object you pass into Read or Write is false, those methods will look at all public fields and properties in the data class. They will then simply use the defaults shown with each CsvColumn property below.

The CsvColumn attribute has these properties:

Name

Type: string
Default: Name of the field or property
Applies to: Reading and Writing

Example:

C#
[CsvColumn(Name = "StartDate")]
public DateTime LaunchDate { get; set; }

The Read and Write methods normally assume that the data fields in the file have the same names as the corresponding fields or properties in the class. Use the Name property to specify another name for the data field.

CanBeNull

Type: bool
Default: true
Applies to: Reading only
C#
[CsvColumn(CanBeNull = false)]
public DateTime LaunchDate { get; set; }

If false, and a record in the input file does not have a value for this field or property, then the Read method generates a MissingRequiredFieldException exception.

FieldIndex

Type: int
Default: Int32.MaxValue
Applies to: Reading and Writing

Example:

C#
[CsvColumn(FieldIndex = 1)]
public DateTime LaunchDate { get; set; }

This property is used for both reading and writing, but in slightly different ways.

Reading - The Read method needs to somehow associate data fields in the input file with fields and properties in the data class. If the file has column names in the first record, that's easy - Read simply matches the column names with the names of the fields and properties in the data class.

However, if the file does not have column names in the first record, Read needs to look at the order of the data fields in the data records to match them with the fields and properties in the data class. Unfortunately though, the .NET framework does not provide a way to reliably retrieve that order from the class definition. So, you have to specify which field/property comes before which field/property by giving the fields and properties a CsvColumn attribute with the FieldIndex property.

The FieldIndex values do not have to start at 1, and they don't have to be consecutive. The Read and Write methods simply assume that a field/property comes before another field/property if its FieldIndex is lower.

When the UseFieldIndexForReadingData property of the CsvFileDescription class is true, FieldIndex specifies the specific index within the row of the value. This is 1 based, so if you set FieldIndex to 3, the value begins from the 3rd character.

Writing - The Write method uses the FieldIndex of each field or property to figure out in what order to write the data fields to the output file. Fields and properties without a FieldIndex get written last, in an undefined order.
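As a sketch, here is a data class for a file without column names, where FieldIndex alone fixes the column order (the indexes deliberately skip numbers to show they need not be consecutive):

C#
class Product
{
    [CsvColumn(FieldIndex = 10)]
    public string Name { get; set; }
    [CsvColumn(FieldIndex = 20)] // written after Name, because 20 > 10
    public decimal Price { get; set; }
}
// ...
CsvFileDescription fd = new CsvFileDescription { FirstLineHasColumnNames = false };
new CsvContext().Write(productList, "products.csv", fd); // productList: a List<Product>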

CharLength

Type: int
Default: 0
Applies to: Reading only

Example:

C#
[CsvColumn(CharLength=12)]
public string Name { get; set; }

Specifies that the Name field occupies 12 characters in the data line.

Used in combination with

C#
UseFieldIndexForReadingData = true 

 

NumberStyle

Type: NumberStyles
Default: NumberStyles.Any
Applies to: Reading of numeric fields only

Example:

C#
[CsvColumn(NumberStyle = NumberStyles.HexNumber)]
public int Quantity { get; set; }

Allows you to determine what number styles are allowed in the input file (list of options).

By default, all styles are permitted, except for one special case. In order to accept hexadecimal numbers that do not start with 0x, use NumberStyles.HexNumber, as shown in the example.

OutputFormat

Type: string
Default: "G"
Applies to: Writing only

Example:

C#
[CsvColumn(OutputFormat = "dd MMM yy")]
public DateTime LaunchDate { get; set; }

Lets you set the output format of numbers and dates/times. The default "G" format works well for both dates and numbers most of the time.

When writing a date/time or number field, the Write method first determines the type of the field (DateTime, decimal, double, etc.) and then calls the ToString method for that type, with the given OutputFormat. So, in the example above, if LaunchDate is 23 November 2008, the field written to the file will be "23 Nov 08".

With many formats, the final result depends on the language/country of the file, as set in the FileCultureName property of the CsvFileDescription object. So, if LaunchDate is 23 November 2008 and you specify the short date format:

C#
[CsvColumn(OutputFormat = "d")]
public DateTime LaunchDate { get; set; }

Then, the final value written to the output file will be "11/23/2008" if you use US dates (FileCultureName is set to "en-US"), but "23.11.2008" if you use German dates (FileCultureName is set to "de-DE").

Error Handling

When the Read and Write methods detect an error situation, they throw an exception with all information you need to solve the problem. As you would expect, all exceptions are derived from the .NET class Exception.

Retrieving error information

In addition to such properties as StackTrace and Message, the Exception class exposes the Data property. The Read and Write methods use that property to provide exception information in a way that is easy for your code to read, while they provide error messages targeted at humans via the Message property.

The description for each exception (further below) shows what information is stored in the Data property.

Aggregating exceptions

When the Read method detects an error while reading data from a file, it does not throw an exception right away, but stores it in a list of type List<Exception>. Then, after it has processed the file, it throws a single exception of type AggregatedException, with the list of exceptions in its Data["InnerExceptionsList"] property. This allows you to fix all problems with an input file in one go, instead of one by one.

You can limit the number of exceptions that get aggregated this way by setting the MaximumNbrExceptions property of the CsvFileDescription object that you pass to the Read method. By default, MaximumNbrExceptions is set to 100. When the limit is reached, the AggregatedException is thrown right away, with the list of exceptions aggregated so far.

Not all exceptions get aggregated! Before Read starts reading data from a file, it first processes column names, CsvColumn attributes, etc. If something goes wrong during that preliminary stage, it throws an exception right away.

Deferred reading

Keep in mind that due to deferred reading, you can get exceptions not only when you invoke the Read method, but also when you access the IEnumerable<T> that is returned by the Read method.
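To catch those exceptions as well, the try block needs to cover the loop that consumes the data, not just the call to Read, as in this sketch:

C#
try
{
    IEnumerable<Product> products =
        cc.Read<Product>("products.csv", inputFileDescription);
    foreach (Product p in products) // exceptions surface here, during reading
    {
        // process p
    }
}
catch (AggregatedException ae)
{
    // handle the aggregated input errors
}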

Example

The following code reads a file and processes exceptions. To show how to use the Data property, it includes some special processing for the DuplicateFieldIndexException - thrown when the Read and Write methods detect two fields or properties with the same FieldIndex.

C#
public static void ShowErrorMessage(string errorMessage)
{
    // show errorMessage to user
    // .....
}
public static void ReadFileWithExceptionHandling()
{
    try
    {
        CsvContext cc = new CsvContext();
        CsvFileDescription inputFileDescription = new CsvFileDescription
        {
            MaximumNbrExceptions = 50
            // limit number of aggregated exceptions to 50
        };
        IEnumerable<Product> products =
            cc.Read<Product>("products.csv", inputFileDescription);
        // Do data processing
        // ...........
    }
    catch(AggregatedException ae)
    {
        // Process all exceptions generated while processing the file
        List<Exception> innerExceptionsList =
            (List<Exception>)ae.Data["InnerExceptionsList"];
        foreach (Exception e in innerExceptionsList)
        {
            ShowErrorMessage(e.Message);
        }
    }
    catch(DuplicateFieldIndexException dfie)
    {
        // name of the class used with the Read method - in this case "Product"
        string typeName = Convert.ToString(dfie.Data["TypeName"]);
        // Names of the two fields or properties that have the same FieldIndex
        string fieldName = Convert.ToString(dfie.Data["FieldName"]);
        string fieldName2 = Convert.ToString(dfie.Data["FieldName2"]);
        // Actual FieldIndex that the two fields have in common
        int commonFieldIndex = Convert.ToInt32(dfie.Data["Index"]);
        // Do some processing with this information
        // .........
        // Inform user of error situation
        ShowErrorMessage(dfie.Message);
    }
    catch(Exception e)
    {
        ShowErrorMessage(e.Message);
    }
}

BadStreamException

This exception exposes the same properties as Exception.

Thrown when the stream passed to Read is either null or does not support Seek. The stream must support Seek; otherwise, it cannot be rewound when the IEnumerable returned by Read is accessed.

CsvColumnAttributeRequiredException

This exception exposes the same properties as Exception.

Thrown when the CsvFileDescription object that has been passed to Read has both FirstLineHasColumnNames and EnforceCsvColumnAttribute set to false.

If there are no column names in the file, then Read relies on the FieldIndex of each field or property in the data class to match them with the data fields in the file. However, if EnforceCsvColumnAttribute is false, that implies that fields or properties without the CsvColumn attribute can also be used to accept data, while they do not have a FieldIndex.

DuplicateFieldIndexException

Additional Properties - This exception exposes the same properties as Exception, plus these additional properties:

Property | Type | Description
Data["TypeName"] | string | Name of the class with the offending fields/properties
Data["FieldName"] | string | First field or property with the duplicate FieldIndex
Data["FieldName2"] | string | Second field or property with the duplicate FieldIndex
Data["Index"] | int | The FieldIndex that the two fields have in common

Thrown when two or more fields or properties have the same FieldIndex.

RequiredButMissingFieldIndexException

Additional Properties - This exception exposes the same properties as Exception, plus these additional properties:

Property | Type | Description
Data["TypeName"] | string | Name of the class with the offending field/property
Data["FieldName"] | string | Field or property without FieldIndex

When there are no column names in the first record in the file (FirstLineHasColumnNames is false), each required field (CanBeNull attribute set to false) must have a FieldIndex attribute, otherwise it cannot be read from the file.

ToBeWrittenButMissingFieldIndexException

Additional Properties - This exception exposes the same properties as Exception, plus these additional properties:

Property | Type | Description
Data["TypeName"] | string | Name of the class with the offending field/property
Data["FieldName"] | string | Field or property without FieldIndex

When writing a file without column names in the first record, you will want to make sure that the data fields appear in each line in a well defined order. If that order were random, it would be impossible for some other program to reliably process the file.

So, when the Write method is given a CsvFileDescription with FirstLineHasColumnNames as false, and it finds a field or property that doesn't have a FieldIndex, it throws a ToBeWrittenButMissingFieldIndexException.

NameNotInTypeException

Additional Properties - This exception exposes the same properties as Exception, plus these additional properties:

Property | Type | Description
Data["TypeName"] | string | Name of the class missing the field/property
Data["FieldName"] | string | Field or property that isn't found
Data["FileName"] | string | Name of the input file

If the Read method is given a CsvFileDescription with FirstLineHasColumnNames as true, and one of the column names in the first record in the file does not match a field or property, it throws a NameNotInTypeException.

MissingCsvColumnAttributeException

Additional Properties - This exception exposes the same properties as Exception, plus these additional properties:

Property Type Description
Data["TypeName"] string Name of the class with the offending field/property
Data["FieldName"] string Field or property without CsvColumn attribute
Data["FileName"] string Name of the input file

The Read method may throw this exception when it is given a CsvFileDescription with both FirstLineHasColumnNames and EnforceCsvColumnAttribute set to true. In that mode, only fields and properties with a CsvColumn attribute can be used, so when a column name read from the first record matches a field or property that doesn't have a CsvColumn attribute, Read throws a MissingCsvColumnAttributeException.

TooManyDataFieldsException

Additional Properties - This exception exposes the same properties as Exception, plus these additional properties:

Property Type Description
Data["TypeName"] string Name of the data class
Data["LineNbr"] int Line in the input file with an excess data field
Data["FileName"] string Name of the input file

Thrown when a record in the input file has more data fields than there are public fields and properties in the data class.

TooManyNonCsvColumnDataFieldsException

Additional Properties - This exception exposes the same properties as Exception, plus these additional properties:

Property Type Description
Data["TypeName"] string Name of the data class
Data["LineNbr"] int Line in the input file with an excess data field
Data["FileName"] string Name of the input file

When only fields or properties that have a CsvColumn attribute are used (Read is given a CsvFileDescription with EnforceCsvColumnAttribute as true), and a record in the input file has more data fields than there are fields and properties with the CsvColumn attribute, a TooManyNonCsvColumnDataFieldsException is thrown.

MissingFieldIndexException

Additional Properties - This exception exposes the same properties as Exception, plus these additional properties:

Property Type Description
Data["TypeName"] string Name of the data class
Data["LineNbr"] int Line with offending field
Data["FileName"] string Name of the input file

If there are no column names in the first record of the input file (Read is given a CsvFileDescription with FirstLineHasColumnNames as false), then Read relies on the FieldIndex of the fields and properties in the data class to match them with the data fields in the file.

When a record in the input file has more data fields than there are fields and properties in the data class with a FieldIndex, then a MissingFieldIndexException is thrown.

MissingRequiredFieldException

Additional Properties - This exception exposes the same properties as Exception, plus these additional properties:

Property Type Description
Data["TypeName"] string Name of the class with the required field/property
Data["FieldName"] string Name of the required field/property
Data["LineNbr"] int Line where missing field should have been
Data["FileName"] string Name of the input file

Thrown when a record from the input file does not have a value for a required field or property (CanBeNull property of the CsvColumn attribute set to false).
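All of the exceptions above carry their details in the standard Data dictionary, so they can be inspected in a catch block. A minimal sketch, assuming the Product data class from the Quick Start and a hypothetical "products.csv" input file (note that because reading is deferred, the exception surfaces while enumerating):

```csharp
using LINQtoCSV;
using System;

CsvFileDescription desc = new CsvFileDescription
{
    FirstLineHasColumnNames = true
};
CsvContext cc = new CsvContext();

try
{
    // Reading is deferred, so exceptions are thrown during enumeration.
    foreach (Product product in cc.Read<Product>("products.csv", desc))
    {
        // process product ...
    }
}
catch (MissingRequiredFieldException e)
{
    // The Data entries match the table above.
    Console.WriteLine("Required field {0} missing on line {1} of {2}",
        e.Data["FieldName"], e.Data["LineNbr"], e.Data["FileName"]);
}
```

The same pattern works for the other exception types; only the Data keys differ, as listed in each table.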

Difference between null and empty string

Empty strings and strings consisting of only white space need to be surrounded by quotes, so they are recognized as something other than null.

These input lines both have the data fields "abc", null, and "def":

abc,,def
abc,   ,def

While this line has the data fields "abc", followed by the empty string, followed by "def":

abc,"",def

and this line has the data fields "abc", followed by a string with three spaces, followed by "def":

abc,"   ",def
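The distinction shows up directly in your data class after reading. A sketch with a hypothetical three-field class (Row and its members are assumptions for illustration):

```csharp
using LINQtoCSV;

// Hypothetical data class with three string fields.
class Row
{
    [CsvColumn(FieldIndex = 1)] public string First { get; set; }
    [CsvColumn(FieldIndex = 2)] public string Second { get; set; }
    [CsvColumn(FieldIndex = 3)] public string Third { get; set; }
}

// Given the input line:   abc,,def
//   row.Second == null           (unquoted empty field reads as null)
// Given the input line:   abc,"",def
//   row.Second == string.Empty   (quoted empty field reads as "")
// Given the input line:   abc,"   ",def
//   row.Second == "   "          (quoted whitespace is preserved)
```

Because string is a reference type, both null and the empty string round-trip cleanly; for value-type members, use a nullable type such as int? if the field can be absent.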

WrongDataFormatException

Additional Properties - This exception exposes the same properties as Exception, plus these additional properties:

Property Type Description
Data["TypeName"] string Name of the class with the field/property
Data["FieldName"] string Name of the field/property
Data["FieldValue"] string The offending data value
Data["LineNbr"] int Line with offending data value
Data["FileName"] string Name of the input file

Thrown when a field has the wrong format. For example, a numeric field with the value "abc".

AggregatedException

Additional Properties - This exception exposes the same properties as Exception, plus these additional properties:

Property Type Description
Data["TypeName"] string Name of the data class used by Read
Data["FileName"] string Name of the input file
Data["InnerExceptionsList"] List<Exception> List of Exceptions

Used to aggregate exceptions generated while reading a file.
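A sketch of unpacking an AggregatedException, again assuming the Product data class and a hypothetical "products.csv" input file:

```csharp
using LINQtoCSV;
using System;
using System.Collections.Generic;

CsvContext cc = new CsvContext();

try
{
    foreach (Product product in cc.Read<Product>("products.csv"))
    {
        // process product ...
    }
}
catch (AggregatedException ae)
{
    // Data["InnerExceptionsList"] holds the individual exceptions
    // collected while the file was read.
    var innerExceptions = (List<Exception>)ae.Data["InnerExceptionsList"];
    foreach (Exception e in innerExceptions)
    {
        Console.WriteLine(e.Message);
    }
}
```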

History

Version Released Description
1.0 11 Apr 2008 Initial release.
1.1 19 Sep 2011 Added ability to read raw data rows.
1.2 18 Feb 2012 Bug fix - CsvFileDescription now properly processed when reading from streams.
1.3 14 Feb 2014 Introduced fixed width columns. Courtesy of lvaleriu.
1.4 18 Feb 2014 Introduced option to ignore trailing separator character. Courtesy of Roman.
1.5 4 Mar 2014 Introduced option to ignore unused columns in the input file. Courtesy of Oscar Mederos.

New Features and Bug Fixes

If you found a bug or have an idea for a new feature, please feel free to contribute to this project. Details: https://github.com/mperdeck/LINQtoCSV

License

This article, along with any associated source code and files, is licensed under The Apache License, Version 2.0


Written By
Architect
Australia
Twitter: @MattPerdeck
LinkedIn: au.linkedin.com/in/mattperdeck
Current project: JSNLog JavaScript Logging Package

Matt has over 9 years .NET and SQL Server development experience. Before getting into .NET, he worked on a number of systems, ranging from the largest ATM network in The Netherlands to embedded software in advanced Wide Area Networks and the largest ticketing web site in Australia. He has lived and worked in Australia, The Netherlands, Slovakia and Thailand.

He is the author of the book ASP.NET Performance Secrets (www.amazon.com/ASP-NET-Site-Performance-Secrets-Perdeck/dp/1849690685) in which he shows in clear and practical terms how to quickly find the biggest bottlenecks holding back the performance of your web site, and how to then remove those bottlenecks. The book deals with all environments affecting a web site - the web server, the database server and the browser.

Matt currently lives in Sydney, Australia. He recently worked at Readify and the global professional services company PwC. He now works at SP Health, a global provider of weight loss web sites such as CSIRO's TotalWellBeingDiet.com and BiggestLoserClub.com.
