The text of an average book is roughly 100,000 letters; with a very smart, well-optimized compression/prediction algorithm (which hopefully is far smaller than 1 GB), it's reasonable to expect a single character to compress to less than half a byte, so about 50 kB per book (saved without covers, of course). That works out to roughly 20,000 books per GB (not quite, since the compression algorithm itself probably takes a fair number of MBs), which should be enough for quite some time.
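Just to make the arithmetic concrete, here's a tiny sketch using the figures assumed above (100,000 characters per book, 4 bits per character); the decompressor overhead number is a made-up placeholder, not a real measurement:

    # Back-of-envelope: how many compressed books fit in a GB,
    # using the rough figures from the comment above.
    chars_per_book = 100_000          # assumed average book length in characters
    bits_per_char = 4                 # assumed compression: under half a byte per char
    decompressor_overhead_mb = 100    # hypothetical size of the compressor itself

    bytes_per_book = chars_per_book * bits_per_char / 8           # 50,000 B = 50 kB
    usable_bytes = 1_000_000_000 - decompressor_overhead_mb * 1_000_000
    books_per_gb = usable_bytes // bytes_per_book

    print(f"{bytes_per_book / 1000:.0f} kB per book, ~{books_per_gb:,.0f} books per GB")
    # -> 50 kB per book, ~18,000 books per GB once the overhead is subtracted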
Pretty unusual, especially state-owned. There was a similar program at the EU level that was just cancelled; apart from that, I don't know of any other countries investing in open source.