Copying large amounts of data into a table is a balancing act between the data you are inserting and the indexes on that table. Have a look at this question on Stack Overflow...
http://stackoverflow.com/questions/6955456/drop-rebuild-indexes-during-bulk-insert[^]
"There is overhead in maintaining indexes during the insert and there is overhead in rebuilding the indexes after the insert. The only way to definitively determine which method incurs less overhead is to try them both and benchmark them."
You'll definitely want to batch the inserts up into smaller chunks. If you just perform one giant INSERT statement, it will all be handled by a single transaction, which will cause large transaction log growth and won't be the fastest approach.
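A minimal sketch of the batching idea, using SQLite as a stand-in for SQL Server (the table, column names, and batch size are all assumptions for illustration). The point is that each batch gets its own commit, so no single transaction has to cover the whole load:

```python
import sqlite3

BATCH_SIZE = 1000  # assumed batch size; tune for your workload

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (id INTEGER, amount REAL)")

# Stand-in for the source data being loaded.
rows = [(i, i * 1.5) for i in range(10_000)]

for start in range(0, len(rows), BATCH_SIZE):
    batch = rows[start:start + BATCH_SIZE]
    conn.executemany("INSERT INTO sales (id, amount) VALUES (?, ?)", batch)
    conn.commit()  # one transaction per batch, not one per load
```

With 10,000 rows and a batch size of 1,000, that is ten small transactions instead of one giant one.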
I have found this pattern works with large amounts of data (this would be an SSIS package to perform the ETL):
* Drop all indexes on the destination table *apart from the clustered index*
* Begin the bulk insert operation on the destination table (e.g. do this in batches of 1,000 records)
* Rebuild indexes on destination table
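The steps above can be sketched end to end. This is an illustration only, again using SQLite (the `dest` table, the `ix_dest_region` index, and the batch size are hypothetical; in practice an SSIS package would drive the equivalent T-SQL steps):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# The PRIMARY KEY plays the role of the clustered index here and is kept.
conn.execute("CREATE TABLE dest (id INTEGER PRIMARY KEY, region TEXT)")
conn.execute("CREATE INDEX ix_dest_region ON dest (region)")

# Step 1: drop the non-clustered index before the load.
conn.execute("DROP INDEX ix_dest_region")

# Step 2: bulk insert in batches.
rows = [(i, "EU" if i % 2 else "US") for i in range(5_000)]
for start in range(0, len(rows), 1000):
    conn.executemany("INSERT INTO dest VALUES (?, ?)", rows[start:start + 1000])
    conn.commit()

# Step 3: rebuild the index once the data is loaded.
conn.execute("CREATE INDEX ix_dest_region ON dest (region)")
```

Whether this beats leaving the indexes in place depends on the data volumes, so benchmark both, as the quote above says.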
It totally depends on the clustered index as well. I remember doing some work on sales data where the table was clustered by financial date rather than just an auto-incrementing field. In that case, it was important to insert the data into the destination table in the order of the clustered index (i.e. have your source data sorted before trying to insert it).
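As a small illustration of that last point, here is a sketch in which the source rows are sorted by the clustered key before the insert (the `sale_date` column and the sample dates are assumptions; again SQLite stands in for SQL Server):

```python
import sqlite3

# Hypothetical sales rows keyed by financial date, arriving out of order.
rows = [("2015-03-01", 10.0), ("2015-01-15", 5.0), ("2015-02-20", 7.5)]
rows.sort(key=lambda r: r[0])  # ISO dates sort correctly as strings

conn = sqlite3.connect(":memory:")
# The date primary key plays the role of the clustered index.
conn.execute("CREATE TABLE sales (sale_date TEXT PRIMARY KEY, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)", rows)
conn.commit()
```

Inserting in key order appends to the end of the clustered structure instead of splitting pages in the middle of it.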