This is not a correctly posed question. More precisely, there is no problem here at all.
Compression has nothing to do with languages. However, the compression ratio does depend on the content of the data. You can only compare blocks of data of the same size in bytes; if you compress each block, the sizes of the compressed output will differ. Isn't that perfectly natural? Why do you think compression is possible at all? Because the compression algorithm finds redundancy in the data and tries to optimize the representation of that redundant data.

Imagine that all your data consists of binary ones. Then the compressed data could essentially just say: "80 billion 1 bits". Now imagine that the data is a random sequence of bits. Then the compression ratio, on average, will be slightly below 1 even with a good algorithm and large inputs, because what little redundancy there is appears only by chance. Isn't that logical?
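You can see this for yourself in a couple of lines. The sketch below (using Python's standard `zlib` module; the block sizes are arbitrary choices for the demo) compresses two blocks of identical length, one maximally redundant and one random, and prints the resulting sizes:

```python
import os
import zlib

# Highly redundant data: a megabyte of the same byte compresses to
# almost nothing, since the algorithm only has to record, in effect,
# "this byte, repeated N times".
redundant = b"\x01" * 1_000_000

# Random data: there is (almost) no redundancy to exploit, so the
# "compressed" output stays about the same size -- often a bit larger,
# because of the format's own overhead.
random_block = os.urandom(1_000_000)

for name, block in [("redundant", redundant), ("random", random_block)]:
    compressed = zlib.compress(block, 9)
    print(f"{name}: {len(block)} -> {len(compressed)} bytes "
          f"(ratio {len(block) / len(compressed):.2f})")
```

On a typical run the redundant block shrinks by several orders of magnitude, while the random block does not shrink at all.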
Perhaps you would understand it better if you read about data compression: http://en.wikipedia.org/wiki/Data_compression