Dear all,
I am implementing a downloader that retrieves a file from the Internet. The file is divided into several blocks; each block contains its data, the length of that data, and the start offset at which it must be written into the file. As you know, the blocks do not arrive in order, so they cannot be written to the file sequentially. If the file is very large, say more than 2 GB, and the first block I receive belongs at the 2 GB position, writing it directly to the file takes a long time.
How can I write such data efficiently?
Best regards,
Mike

If you're programming on Windows you might find that sparse files do what you want.

For most purposes the file looks as though it is full of zeroes, but the unoccupied regions don't take up any disk space.
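
Here is a minimal sketch of that idea, assuming Windows and NTFS (the file name big_download.tmp and the 2 GB offset are just placeholders): the file is marked sparse with FSCTL_SET_SPARSE before seeking past the end, so the skipped region costs neither disk space nor zero-filling time.
C++
#include <windows.h>
#include <stdio.h>

int main()
{
    // Create the destination file (placeholder name).
    HANDLE h = CreateFileA("big_download.tmp",
                           GENERIC_READ | GENERIC_WRITE,
                           0, NULL, CREATE_ALWAYS,
                           FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE) return -1;

    // Ask the file system (NTFS) to treat the file as sparse.
    DWORD bytes = 0;
    if (!DeviceIoControl(h, FSCTL_SET_SPARSE, NULL, 0, NULL, 0, &bytes, NULL))
    {
        CloseHandle(h);
        return -2;
    }

    // Jump straight to the 2 GB mark and write a block there;
    // the gap before it occupies no space on disk.
    LARGE_INTEGER pos;
    pos.QuadPart = 2LL * 1024 * 1024 * 1024;
    if (!SetFilePointerEx(h, pos, NULL, FILE_BEGIN)) { CloseHandle(h); return -3; }

    unsigned char block[1024] = { 0 };
    DWORD written = 0;
    if (!WriteFile(h, block, sizeof(block), &written, NULL) || written != sizeof(block))
    {
        CloseHandle(h);
        return -4;
    }

    CloseHandle(h);
    return 0;
}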

Cheers,

Ash
 
 
This naive code (warning: very bad error-handling)
C++
#include <stdio.h>
int main()
{
  unsigned char a[1024] = { 0 };
  const char *filepath = "bigfile.bin";  /* destination file */
  FILE *fp = fopen(filepath, "wb");
  if (!fp) return -1;
  /* writing ~1 GB past the start of an empty file forces the gap to be zero-filled */
  if (fseek(fp, 1000000000L, SEEK_SET)) return -2;
  if (fwrite(a, 1, sizeof(a), fp) != sizeof(a)) return -3;
  fclose(fp);
  return 0;
}


takes about 25 seconds on my system. If such a long delay is not acceptable, save the received blocks to temporary files and then recompose the complete big file in a background thread.
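
A rough sketch of that temporary-file approach, just to illustrate the idea (the "<offset>.part" naming and the helpers save_block/merge_block are assumptions, not part of any library): each block is dumped to its own small file as soon as it arrives, and a background step later seeks to the recorded offset in the final file and copies the block in.
C++
#include <stdio.h>
#include <stdint.h>

// Store one received block as "<offset>.part" so it can be located later.
int save_block(const unsigned char *data, size_t len, uint64_t offset)
{
    char name[64];
    snprintf(name, sizeof(name), "%llu.part", (unsigned long long)offset);

    FILE *fp = fopen(name, "wb");
    if (!fp) return -1;

    size_t written = fwrite(data, 1, len, fp);
    fclose(fp);
    return written == len ? 0 : -2;
}

// Later, e.g. in a background thread: copy one saved block into the big
// file at its offset. _fseeki64 is the MSVC 64-bit seek; plain fseek
// takes a long and cannot address offsets beyond 2 GB.
int merge_block(FILE *out, const char *part_name, uint64_t offset)
{
    FILE *in = fopen(part_name, "rb");
    if (!in) return -1;

    if (_fseeki64(out, (long long)offset, SEEK_SET) != 0) { fclose(in); return -2; }

    unsigned char buf[64 * 1024];
    size_t n;
    while ((n = fread(buf, 1, sizeof(buf), in)) > 0)
    {
        if (fwrite(buf, 1, n, out) != n) { fclose(in); return -3; }
    }

    fclose(in);
    return 0;
}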
:)
 
 