Sure. This describes the algorithm in sufficient detail:
http://en.wikipedia.org/wiki/Crc16,
http://en.wikipedia.org/wiki/Computation_of_CRC.
This is all you need to write this code.
The only remaining problem is representing the string as numeric data. The simplest way is to serialize the string into an array of bytes using one of the Unicode UTF encodings, and then apply the CRC algorithm to that array. The result will depend on the encoding, but that does not matter, because the information is equivalent; it is only important that you use the same encoding consistently, and that you avoid non-Unicode encodings, which lose information for general-case Unicode input.
Here is how:
string input = "some text"; // any sample string to be checksummed
byte[] data = System.Text.Encoding.UTF8.GetBytes(input);
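Putting the two pieces together, here is a minimal sketch of the whole approach: encode the string to bytes, then run a bit-by-bit CRC-16 over them. This uses one common parametrization (the "ARC" variant: reflected polynomial 0xA001, initial value 0x0000); the Wikipedia articles above describe the other variants, and a table-driven implementation would be faster, but this shows the algorithm itself:

```csharp
using System;
using System.Text;

static class Crc16Demo
{
    // Bit-by-bit CRC-16/ARC: reflected polynomial 0xA001, initial value 0x0000
    static ushort Crc16(byte[] data)
    {
        ushort crc = 0x0000;
        foreach (byte b in data)
        {
            crc ^= b;                    // mix the next byte into the low bits
            for (int i = 0; i < 8; i++)  // process one bit at a time
                crc = (ushort)((crc & 1) != 0 ? (crc >> 1) ^ 0xA001 : crc >> 1);
        }
        return crc;
    }

    static void Main()
    {
        // "123456789" is the standard check string for CRC algorithms;
        // for CRC-16/ARC its documented check value is 0xBB3D
        byte[] data = Encoding.UTF8.GetBytes("123456789");
        Console.WriteLine("{0:X4}", Crc16(data)); // prints BB3D
    }
}
```

If you pick a different CRC-16 variant (CCITT, Modbus, and so on), only the polynomial, initial value, and bit order change; the structure of the loop stays the same.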
Please see:
http://msdn.microsoft.com/en-us/library/system.text.encoding.aspx.
Note that the encoding .NET uses internally in memory is UTF-16LE. All UTF encodings are equivalent in the sense that they encode any Unicode text without loss, even if the text contains characters whose code points do not fit in 16 bits (code points go up to 0x10FFFF, which needs 21 bits). Byte-oriented UTF-8 uses variable-length byte sequences to represent a character, and UTF-16 uses either one 16-bit word or two such words (called a
surrogate pair) to represent a single character. In this way, any character in the Unicode range 0 to 0x10FFFF can be represented.
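To make the difference concrete, here is a small sketch comparing the byte counts the two encodings produce for a string containing a character above 0xFFFF (the values in the comments follow from the encoding rules described above):

```csharp
using System.Text;

// 'A' (1 UTF-16 unit), 'é' = U+00E9 (1 unit), and U+1D11E (musical G clef),
// which needs a surrogate pair \uD834\uDD1E in UTF-16
string s = "A\u00E9\uD834\uDD1E";

int units = s.Length;                                 // 4: .NET counts UTF-16 code units
int utf8Bytes = Encoding.UTF8.GetByteCount(s);        // 1 + 2 + 4 = 7 bytes
int utf16Bytes = Encoding.Unicode.GetByteCount(s);    // 2 + 2 + 4 = 8 bytes
```

Either byte array is a faithful representation of the string, which is why the CRC approach above works with any UTF encoding, as long as you pick one and stick with it.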
Please see:
http://en.wikipedia.org/wiki/Unicode,
http://en.wikipedia.org/wiki/Code_point,
http://unicode.org/,
http://unicode.org/faq/utf_bom.html.
—SA