
Secure File Shredder

28 Oct 2008, CPOL
A secure file shredder in C#

Introduction

This article describes a C-style secure file shredder written in C# .NET.
Pizza van with tinted windows parked across the road for days? Strange clicking sound on the landline? Crooked CEO, maybe? Then this is exactly the tool you have been searching for.

Background

I originally published v1 in VB6 as 'SDS' on Planet Source Code.

NShred 2.0

I decided to rewrite the SDS file shredder as one of my first forays into C# .NET, since it is a fairly compact, class-driven application. This time around, no longer saddled with the limitations of VB6, I was able to build a faster and more thorough engine. The cShredder class is almost entirely Win32 driven, using virtual memory buffers and the WriteFile API to overwrite the file with several passes of zeros, ones, and random data. After some initial preamble of file path checks, attribute stripping, and enabling key access tokens within the process, we create the buffer:

...
hFile = CreateFileW(pName, GENERIC_ALL, FILE_SHARE_NONE, 
        IntPtr.Zero, OPEN_EXISTING, WRITE_THROUGH, IntPtr.Zero);
// test the handle before using it
if (hFile.ToInt32() == -1) // INVALID_HANDLE_VALUE
    return false;
// get the file size, capped at the buffer size
nFileLen = fileSize(hFile);
if (nFileLen > BUFFER_SIZE)
    nFileLen = BUFFER_SIZE;
// rewind to the start of the file
SetFilePointerEx(hFile, 0, IntPtr.Zero, FILE_BEGIN);
pBuffer = VirtualAlloc(IntPtr.Zero, nFileLen, MEM_COMMIT, PAGE_READWRITE);
if (pBuffer == IntPtr.Zero)
    return false;
// fill the buffer with zeros
RtlZeroMemory(pBuffer, nFileLen);
...

Once the buffer is allocated and filled, the overwriteFile method uses WriteFile to overwrite the file's contents in buffered 'chunks'. Note that the file was opened with the WRITE_THROUGH flag, which causes writes to bypass the system cache and go straight to disk. Also, all APIs used are of the 'W' flavor, so the shredder should be fully Unicode compliant.

private Boolean overwriteFile(IntPtr hFile, IntPtr pBuffer)
{
    UInt32 nFileLen = fileSize(hFile);
    UInt32 dwSeek = 0;
    UInt32 btWritten = 0;

    try
    {
        if (nFileLen < BUFFER_SIZE)
        {
            SetFilePointerEx(hFile, dwSeek, IntPtr.Zero, FILE_BEGIN);
            WriteFile(hFile, pBuffer, nFileLen, ref btWritten, IntPtr.Zero);
        }
        else
        {
            do
            {
                SetFilePointerEx(hFile, dwSeek, IntPtr.Zero, FILE_BEGIN);
                WriteFile(hFile, pBuffer, BUFFER_SIZE, ref btWritten, IntPtr.Zero);
                dwSeek += btWritten;
            } while ((nFileLen - dwSeek) > BUFFER_SIZE);
            WriteFile(hFile, pBuffer, (nFileLen - dwSeek), ref btWritten, IntPtr.Zero);
        }
        // reset file pointer
        SetFilePointerEx(hFile, 0, IntPtr.Zero, FILE_BEGIN);
        // add it up
        if ((btWritten + dwSeek) == nFileLen)
            return true;
        return false;
    }
    catch
    {
        return false;
    }
}
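Stripped of the Win32 plumbing, the chunked-overwrite loop amounts to the following. This is a minimal Python sketch of the same idea, not the article's actual code; the function name is illustrative, and `os.fsync` stands in for the WRITE_THROUGH flag.

```python
import os

BUFFER_SIZE = 65536  # same chunk size as the article's engine

def overwrite_file(path, pattern_byte):
    """Overwrite every byte of the file with one pattern, in BUFFER_SIZE chunks."""
    size = os.path.getsize(path)
    chunk = bytes([pattern_byte]) * BUFFER_SIZE
    written = 0
    with open(path, "r+b") as f:
        while written < size:
            # write a full chunk, or the remainder on the last pass
            n = min(BUFFER_SIZE, size - written)
            f.write(chunk[:n])
            written += n
        f.flush()
        os.fsync(f.fileno())  # push the data past OS buffers, like WRITE_THROUGH
    # the same "add it up" check as overwriteFile
    return written == size
```

As in overwriteFile, the success test is simply that the total number of bytes written equals the file length.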

The buffers are filled first with zeros, then ones, then random data, then zeros again; on a modern hard drive, this should render the overwritten data unrecoverable by software-based forensic techniques. The random data pass uses the Crypto API to fill the buffer; although it is intended for secure key creation, it works well in this role too.

private Boolean randomData(IntPtr pBuffer, UInt32 nSize)
{
    IntPtr iProv = IntPtr.Zero;

    try
    {
        // acquire context
        // CRYPT_VERIFYCONTEXT requires a null container name
        if (CryptAcquireContextW(ref iProv, null, MS_ENHANCED_PROV, 
            PROV_RSA_FULL, CRYPT_VERIFYCONTEXT) != true)
            return false;
        // generate random block
        if (CryptGenRandom(iProv, nSize, pBuffer) != true)
            return false;
        return true;
    }
    finally
    {
        // release crypto engine
        if (iProv != IntPtr.Zero)
            CryptReleaseContext(iProv, 0);
    }
}
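The four-pass sequence itself is simple to express. Here is a hedged Python sketch of the driver logic; `os.urandom` stands in for CryptGenRandom, and the function name is mine, not NShred's:

```python
import os

def shred_passes(path):
    """Overwrite the file with zeros, then ones, then random data, then zeros again."""
    size = os.path.getsize(path)
    patterns = [b"\x00" * size,   # pass 1: zeros
                b"\xff" * size,   # pass 2: ones
                os.urandom(size), # pass 3: cryptographic random data
                b"\x00" * size]   # pass 4: zeros again
    for data in patterns:
        with open(path, "r+b") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # flush each pass to disk before the next
```

Flushing between passes matters: without it, an optimizing cache could coalesce the passes and only the last pattern would ever reach the platter.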

One thing that most open source shredders I have seen fail to do is verify the write. This is accomplished by reading the file back and comparing it against the buffer using RtlCompareMemory:

private Boolean writeVerify(IntPtr hFile, IntPtr pCompare, UInt32 pSize)
{
    IntPtr pBuffer = IntPtr.Zero;
    UInt32 iRead = 0;

    try
    {
        pBuffer = VirtualAlloc(IntPtr.Zero, pSize, MEM_COMMIT, PAGE_READWRITE);
        if (pBuffer == IntPtr.Zero)
            return false; // allocation failed
        SetFilePointerEx(hFile, 0, IntPtr.Zero, FILE_BEGIN);
        if (ReadFile(hFile, pBuffer, pSize, ref iRead, IntPtr.Zero) == 0)
        {
            if (InError != null)
                InError(004, "The file write failed verification test.");
            return false; // bad read
        }
        if (RtlCompareMemory(pCompare, pBuffer, pSize) == pSize)
            return true; // equal
        return false;
    }
    finally
    {
        if (pBuffer != IntPtr.Zero)
            VirtualFree(pBuffer, pSize, MEM_RELEASE);
    }
}
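In portable terms, the verification step is just a read-back and byte-for-byte compare. A minimal Python sketch of write-then-verify (the helper name is illustrative; the equality compare plays the role of RtlCompareMemory):

```python
import os

def overwrite_and_verify(path, pattern_byte):
    """Write a one-byte pattern over the whole file, then read it back and compare."""
    size = os.path.getsize(path)
    pattern = bytes([pattern_byte]) * size
    with open(path, "r+b") as f:
        f.write(pattern)
        f.flush()
        os.fsync(f.fileno())  # make sure the compare hits the stored data
    # read the file back, as writeVerify does with ReadFile
    with open(path, "rb") as f:
        return f.read() == pattern
```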

After the overwrite cycles, the file is truncated to zero length ten times, and renamed repeatedly (thirty rounds of renames) before finally being deleted:

private Boolean zeroFile(IntPtr pName)
{
    for (Int32 i = 0; i < 10; i++)
    {
        IntPtr hFile = CreateFileW(pName, GENERIC_ALL, FILE_SHARE_NONE,
            IntPtr.Zero, OPEN_EXISTING, WRITE_THROUGH, IntPtr.Zero);
        if (hFile.ToInt32() == -1) // INVALID_HANDLE_VALUE, not IntPtr.Zero
            return false;
        SetFilePointerEx(hFile, 0, IntPtr.Zero, FILE_BEGIN);
        // truncate the file at the current (zero) position
        SetEndOfFile(hFile);
        // unnecessary with WRITE_THROUGH, but cheap insurance
        FlushFileBuffers(hFile);
        CloseHandle(hFile);
    }
    return true;
}
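The zero-sizing step is a one-liner in portable code. A Python sketch of the repeated truncation, assuming the ten-pass count used by the article (the function name is mine):

```python
import os

def zero_file(path, passes=10):
    """Repeatedly truncate the file to zero length, flushing each pass to disk."""
    for _ in range(passes):
        with open(path, "r+b") as f:
            f.truncate(0)   # the portable equivalent of SetEndOfFile at offset 0
            f.flush()
            os.fsync(f.fileno())
    return os.path.getsize(path) == 0
```

Zeroing the length scrubs the file-size field in the directory entry, so even the metadata no longer reveals how large the file was.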

private Boolean renameFile(string sPath)
{
    string sNewName = String.Empty;
    string sPartial = sPath.Substring(0, sPath.LastIndexOf(@"\") + 1);
    Int32 nLen = 10;
    char[] cName = new char[nLen];
    for (Int32 i = 0; i < 30; i++)
    {
        for (Int32 j = 97; j < 123; j++)
        {
            for (Int32 k = 0; k < nLen; k++)
            {
                if (k == (nLen - 4))
                    sNewName += ".";
                else
                    sNewName += (char)j;
            }
            if (MoveFileExW(sPath, sPartial + sNewName,
                MOVEFILE_REPLACE_EXISTING | MOVEFILE_WRITE_THROUGH) != 0)
                sPath = sPartial + sNewName;
            sNewName = String.Empty;
        }
    }
    // last step: delete the file
    if (deleteFile(sPath) != true)
        return false;
    return true;
}
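The rename cycle exists to scrub the original file name out of file-system metadata before the final delete. A hedged Python sketch of the same idea, using random names rather than the article's alphabet sweep (all names here are illustrative):

```python
import os
import random
import string

def rename_and_delete(path, rounds=30):
    """Rename the file to meaningless names several times, then delete it,
    so the original name is harder to recover from directory metadata."""
    directory = os.path.dirname(path) or "."
    for _ in range(rounds):
        new_name = "".join(random.choices(string.ascii_lowercase, k=6)) + ".tmp"
        new_path = os.path.join(directory, new_name)
        os.replace(path, new_path)  # like MoveFileExW with MOVEFILE_REPLACE_EXISTING
        path = new_path
    os.remove(path)
    return not os.path.exists(path)
```

Note that on a journaling file system old directory entries may still linger in the journal; the repeated renames raise the cost of recovery but cannot guarantee the name is gone.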

For the truly paranoid user, there is a hidden startup switch, '/p', that enables paranoid mode. With this setting, after the overwrite cycles, the file's object identifier is deleted, effectively orphaning the file from the file and security subsystems:

private Boolean orphanFile(IntPtr pName)
{
    UInt32 lpBytesReturned = 0;
    IntPtr hFile = CreateFileW(pName, GENERIC_WRITE, FILE_SHARE_NONE,
        IntPtr.Zero, OPEN_EXISTING, WRITE_THROUGH, IntPtr.Zero);
    if (hFile.ToInt32() == -1) // INVALID_HANDLE_VALUE
        return false;
    // DeviceIoControl returns true on success
    Boolean bResult = DeviceIoControl(hFile, FsctlDeleteObjectId, IntPtr.Zero,
        0, IntPtr.Zero, 0, out lpBytesReturned, IntPtr.Zero);
    CloseHandle(hFile);
    return bResult;
}

History

  • 26th October, 2008: Initial post
  • 27th October, 2008: Article updated

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)

About the Author

John Underhill
Network Administrator vtdev.com
Canada Canada
Network and programming specialist. Started in C, and have learned about 14 languages since then. Cisco programmer, lately writing a lot of C# and WPF code (and learning Java too). If I can dream it up, I can probably put it to code. My software company, VTDev, is on the verge of releasing a couple of very cool things.. keep you posted.

