This is a very unpleasant problem, but most likely not a hard one to solve. I would start by detecting what that encoding actually is. I don't know which one is the most likely, but I know quite a few used for Chinese, both Simplified and Traditional. Based on that information, you can start looking for a font, which is a separate problem; but it would be much safer to perform proper transcoding, even if you have to do it on the fly.
If I had a text file with the text, I would find out its encoding pretty quickly; it could only be slow if some obsolete standard was used, and there aren't too many of those. May I ask: did the legacy code run on some old Microsoft system, or on something else? Old Microsoft encodings were based on "code pages", which would narrow the search.
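To show what such a detection pass could look like, here is a minimal sketch, assuming Python is available and that the raw bytes sit in a file; the file name "sample.bin" and the candidate list are my assumptions, not anything from your system:

```python
# Rough sketch: try to decode a sample of the raw bytes with common
# Chinese encodings and report which ones succeed without errors.
# "sample.bin" and the candidate list are assumptions for illustration.

CANDIDATES = ["gb2312", "gbk", "gb18030", "big5", "hz", "utf-8", "utf-16"]

with open("sample.bin", "rb") as f:
    data = f.read()

for name in CANDIDATES:
    try:
        text = data.decode(name)
        print(f"{name}: OK -> {text[:40]!r}")
    except UnicodeDecodeError:
        print(f"{name}: failed")
```

Note that more than one candidate may decode without errors (GB18030 in particular accepts almost any byte sequence), so you still have to look at the output and see which one produces meaningful Chinese text.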
So, the detection method that lies on the surface is to use some modern Web browser. Put the fragment of text in a file ("ANSI", that is, the raw bytes as-is) and open it in the browser. Usually the browser's menu, something like View => Character Encoding => Auto-Detect => (… specify the language), gives you the result; if not, try the non-auto options.
If the encoding was covered by one or another "code page" number, you can do a comprehensive search (I sketch one below). For this purpose, you would need an editor which supports "ANSI" (actually, what used to be the not-quite-standard "extended ASCII", which can still be used) and code pages. Frankly, I don't know where you would get such an editor at the moment; I just developed one myself. In the worst case, keep in mind that I can share my source code, but it is only fully functional on Windows and doesn't work cross-platform yet; this is the only reason it isn't published.

I used Free Pascal and the Lazarus LCL library, which takes its own, very unusual cross-platform approach to supporting various encodings. The idea is: they don't support Unicode the way most operating systems do, via UTF-16. Instead, the internal representation is UTF-8, which makes the code largely "Unicode-agnostic". Strings are treated as "ANSI strings", and the individual bytes of UTF-8 multi-byte characters are treated as separate characters. Most string functions, where you don't need to select a substring with a known number of characters, can work without knowing whether the string is Unicode or not. Now, the most interesting thing: the general-case string is "ANSI with a code page", where the code page is part of each string's metadata, stored in the string structure in the same way as its length. LCL text components render non-Unicode ANSI strings accordingly.
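Coming back to the comprehensive search over code pages: as a minimal illustration of the idea, assuming Python (which exposes Microsoft code pages as "cpNNN" codecs), a brute-force survey could look like this; the code page list and the file name are my assumptions:

```python
# Brute-force sketch: decode the same byte sample under a range of
# Windows code pages and print whatever decodes cleanly, so a human
# can spot the one that produces readable Chinese.
# cp936 is GBK (Simplified), cp950 is Big5 (Traditional).

CODE_PAGES = [932, 936, 949, 950, 1250, 1251, 1252, 1256]

def survey(data: bytes) -> None:
    for cp in CODE_PAGES:
        codec = f"cp{cp}"
        try:
            print(f"{codec}: {data.decode(codec)[:40]!r}")
        except (UnicodeDecodeError, LookupError):
            print(f"{codec}: not decodable")

with open("sample.bin", "rb") as f:
    survey(f.read())
```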
One product I know is Double Commander (its embedded "F4" editor), but it has a fixed set of code pages. Going to the source code can quickly solve this problem; the project is, of course, open source (and highly recommended, by the way):
Double Commander — Wikipedia, the free encyclopedia
Double Commander home page
Another approach: you can take a sample of the Chinese text as raw data (bytes), pack it with Base64, and publish that Base64 text in your question. I'll quickly determine the encoding for you.
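The packing step is trivial; for example, in Python (again assuming the sample is in a file whose name I made up):

```python
import base64

# Read the raw bytes of the sample and print them as Base64 text,
# which survives copy-and-paste into a forum post without damage.
with open("sample.bin", "rb") as f:
    print(base64.b64encode(f.read()).decode("ascii"))
```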
Once you know the encoding, writing a function that converts the array of bytes into a Unicode string won't be a big problem.
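As a sketch of such a function, again in Python and assuming the detection step identified GBK (Windows code page 936); substitute whatever encoding you actually find:

```python
def bytes_to_unicode(data: bytes, encoding: str = "gbk") -> str:
    """Convert an array of raw bytes to a Unicode string.

    "gbk" is only a placeholder assumption; pass whatever encoding
    the detection step actually identified.
    """
    return data.decode(encoding)

# b"\xc4\xe3\xba\xc3" is the GBK encoding of a common two-character
# Chinese greeting; decoding it yields the proper Unicode string.
print(bytes_to_unicode(b"\xc4\xe3\xba\xc3"))
```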