As Christian said, you are totally confused about the "magic" of 0's and 1's. A computer doesn't "understand" any language, it merely executes operations that are given to it. These operations are in fact bytes (or groups of bytes) that correspond to an operation code plus arguments (also bytes), depending on the operation. You could write these codes in binary if you wanted, but then you would need a binary editor (an editor that accepts only 0's and 1's and stores each group of 8 digits as one byte). I have never heard of such a binary editor.
So, if you try to write your binary code in Notepad, each digit will be converted to a full byte according to its character encoding: a zero has an ASCII code of 48 and a one has an ASCII code of 49 (see any ASCII table). So, if you write a sequence like this in Notepad:
0011
, it will actually be stored on disk as the bytes:
00110000 00110000 00110001 00110001
(48 and 49 written out in binary). So, you see that it is totally different from what you expected.
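You can verify this yourself. A short Python sketch (Python chosen just for illustration) shows exactly which bytes a text editor writes to disk when you type the characters "0011":

```python
# What a text editor actually stores when you type the characters "0011":
text = "0011"
stored = text.encode("ascii")          # the bytes written to disk

print(list(stored))                    # [48, 48, 49, 49]
print([format(b, "08b") for b in stored])
# ['00110000', '00110000', '00110001', '00110001']
```

So the four characters become four full bytes, not four bits.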
What you could do is group your digits 8 bits at a time, look up the corresponding symbol in the ASCII table, and type that symbol instead. Or you could use a hexadecimal editor.
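The "group 8 bits into one byte" step can be sketched in a few lines of Python; the bit string here is an arbitrary example I picked, not anything from a real program:

```python
# Sketch: turn a string of binary digits into real bytes,
# the way a hex editor would let you enter them directly.
bits = "0100100001101001"              # 16 bits, chosen arbitrarily

# Take the string 8 digits at a time and convert each group to one byte.
raw = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

print(raw)        # b'Hi'  (0x48 is 'H' and 0x69 is 'i' in the ASCII table)
```

This is the reverse of what Notepad does: here two bytes come out of sixteen digits, instead of sixteen bytes.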
However, even if you manage to write your binary file like that, you will never be able to execute it, because Windows will not recognize it as a valid executable (it doesn't have the correct header, for instance).
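To give an idea of what "the correct header" means: every Windows executable starts with the two-byte "MZ" magic of the DOS/PE header, and the loader rejects files that lack it. A minimal sketch (the real PE format requires far more than this magic, so this check is only illustrative):

```python
# Minimal sketch: one reason Windows rejects a hand-made byte file.
# Real executables begin with the "MZ" magic (bytes 0x4D 0x5A);
# a file of arbitrary hand-written bytes almost certainly will not.
def looks_like_windows_exe(data: bytes) -> bool:
    return data[:2] == b"MZ"

print(looks_like_windows_exe(b"\x00\x11\x22"))   # False
print(looks_like_windows_exe(b"MZ\x90\x00"))     # True
```

And this is only the first of many checks the loader performs before running a file.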
So, to summarize, it is practically impossible to write a program directly in binary (it was something you could do on the very first computers, but not anymore). As Christian said, if you really want to go to the lowest possible level, you have to write in assembler. But once you understand what assembler is, you'll soon realize that it is very close to "binary".