In my project, I need to read and process a huge CSV file with millions of records, and the fastest library I could find is uniVocity-parsers. Here is my logic:
1) Download the CSV file over FTP
2) Parse the CSV file with my own logic, such as merging records, removing duplicates, and so on.
3) Store the parsed data in a MySQL database.
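Step 2 above can be sketched in plain Java to show the streaming idea: read one row at a time and track already-seen keys, so memory stays bounded even for millions of records. The column layout and the key column are hypothetical, and the naive `split(",")` stands in for a real CSV parser (which must handle quoting):

```java
import java.util.*;

// Sketch of step 2: stream the CSV rows and drop duplicates without
// loading the whole file into memory. The key column (first field) and
// the header layout are assumptions -- adapt to the real schema.
public class CsvDedup {

    // Parse pre-split lines (header first), keep the first row seen for
    // each key, and preserve input order.
    public static List<String[]> parseAndDedup(Iterable<String> lines) {
        List<String[]> result = new ArrayList<>();
        Set<String> seenKeys = new HashSet<>();
        boolean header = true;
        for (String line : lines) {
            if (header) { header = false; continue; } // skip header row
            String[] fields = line.split(",", -1);    // naive split; a real
                                                      // parser handles quoting
            String key = fields[0];                   // dedup on first column
            if (seenKeys.add(key)) {
                result.add(fields);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        List<String> sample = Arrays.asList("id,name", "1,a", "1,a", "2,b");
        System.out.println(parseAndDedup(sample).size()); // prints 2
    }
}
```

In a real run, the `Iterable<String>` would wrap the FTP input stream (for example a `BufferedReader` over the socket), so the file is never fully materialized in memory.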

Currently I'm using JavaCSV, but it's too slow for my project.

Do you have any other suggestion?
Updated 12-May-15 12:13pm
Sergey Alexandrovich Kryukov 9-May-15 20:23pm    
Wrong question. Who told you that you need any library? The fastest code could be ad-hoc code you write yourself. What you really need depends on your goals. "Millions of records" in a CSV is already an abuse of the technology, and handling it in a very generic way will hardly be a great fix anyway...
Mohibur Rashid 10-May-15 5:25am    
Suggestion 1: Drop CSV and pick something else
[no name] 12-May-15 20:17pm    
Even lightweight, purpose-written code is going to choke on millions of records. You are barking up the wrong tree. If you are not in control of creating the data, CSV is a very bad choice. Why is speed an issue?

1 solution

Try using Google first; a simple search gives a lot of potential projects to choose from.
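Whichever parser is chosen, the database insert (step 3 in the question) is often the real bottleneck: one `INSERT` per row means one network round trip per record. A common fix is batching. Below is a minimal sketch of the batching logic; the JDBC calls are shown only as comments so the sketch stays self-contained, and the table and column names are hypothetical:

```java
import java.util.*;

// Sketch of step 3: insert parsed rows in fixed-size batches instead of
// one INSERT per row. Batching amortizes the round-trip cost, which
// usually dominates when loading millions of records into MySQL.
public class BatchInsert {

    // Split rows into consecutive batches of batchSize
    // (the last batch may be smaller).
    public static List<List<String[]>> toBatches(List<String[]> rows, int batchSize) {
        List<List<String[]>> batches = new ArrayList<>();
        for (int i = 0; i < rows.size(); i += batchSize) {
            batches.add(rows.subList(i, Math.min(i + batchSize, rows.size())));
        }
        return batches;
    }

    public static void main(String[] args) {
        List<String[]> rows = new ArrayList<>();
        for (int i = 0; i < 2500; i++) rows.add(new String[] { String.valueOf(i) });

        for (List<String[]> batch : toBatches(rows, 1000)) {
            // With a live JDBC connection the batch would be flushed here:
            // PreparedStatement ps = conn.prepareStatement(
            //         "INSERT INTO records (id) VALUES (?)");
            // for (String[] row : batch) { ps.setString(1, row[0]); ps.addBatch(); }
            // ps.executeBatch(); // one round trip per 1000 rows
        }
        System.out.println(toBatches(rows, 1000).size()); // prints 3
    }
}
```

If the parsing logic allows it, MySQL's `LOAD DATA INFILE` can be faster still, since it bypasses per-row statement handling entirely.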

This content, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
