Simple.
y = y | 2;    /* set bit 1 */
y = y | 128;  /* set bit 7 */
Consider the binary representation of the first few numbers:
0 = 0000 0000
1 = 0000 0001 *
2 = 0000 0010 *
3 = 0000 0011
4 = 0000 0100 *
5 = 0000 0101
6 = 0000 0110
7 = 0000 0111
8 = 0000 1000 *
Notice the ones marked with a *: they're the ones that have only one bit set. These are the numbers you use to set or clear a particular bit. Notice also that the *ed items are all powers of 2. You've probably guessed by now that you can fiddle with the other bits of an 8-bit variable by using the numbers 16, 32, 64 and 128.
Hexadecimal numbers are often easier to work with than decimal ones when dealing with bit-fields, because each hexadecimal digit (nibble) is equivalent to 4 bits.
That leaves you with the following numbers to set or clear bits in an 8-bit number:
(bit 0)0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, 0x80(bit 7)
It's much easier to see at a glance that each of the above numbers has one bit set, and which one it is, than it would be with their decimal equivalents.