As a general question it's not very interesting to a programmer - maybe to a mathematician or information theorist. Specific examples are fun to play with, but they're not important in today's world, where memory is plentiful and everything else you and the computer do is the scarcer resource.

Let's say you had to store a long sequence of numbers in the range 1-6. You can fit 12 of these in a 32-bit integer, since 6^12 < 2^32. But now you pay extra arithmetic on every access: to pull out the n-th value you divide by 6^n and take the result modulo 6. That's the tradeoff - whichever representation you pick trades speed for compactness. "Best" in practice usually means straightforward.
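To make the arithmetic concrete, here's a minimal sketch of that base-6 packing scheme (function names `pack`/`unpack` are just illustrative):

```python
def pack(values):
    """Pack values in 1..6 into one integer as base-6 digits (up to 12 fit in 32 bits)."""
    n = 0
    for v in reversed(values):
        n = n * 6 + (v - 1)  # shift 1..6 down to digits 0..5
    return n

def unpack(n, count):
    """Recover `count` values: divide out one base-6 digit at a time."""
    out = []
    for _ in range(count):
        out.append(n % 6 + 1)  # the extra arithmetic paid on every access
        n //= 6
    return out

packed = pack([3, 1, 6, 6, 2])
print(unpack(packed, 5))  # [3, 1, 6, 6, 2]
```

Note the divide-and-modulo on every read - that's the cost you're trading space for.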

That's not to say there aren't times when saving space is free, good, and elegant. But it's a question that should only be asked by someone who already has the skills to solve it, in a situation that actually requires it.