Continued from the previous post. This second part reviews commitment schemes, zero-knowledge proofs, digital signatures, and lattices.
Lecture 15-16: Commitment Schemes
A commitment scheme between the sender and the receiver satisfies two security properties: hiding and binding. The former says that the commitment looks random to the receiver before the de-commitment, and the latter says that the sender cannot generate two different but valid de-commitments. A trusted party in charge of generating e.g. public random bits might be necessary; however, it would be more desirable to design commitment schemes without such a party.
A first example is based on OWP (without a trusted party). Any OWP $f$ can be associated with a hard-core predicate $h$ (e.g. using Goldreich-Levin). For some randomly picked $x$, the commitment would be $(f(x), h(x) \oplus b)$ where $b$ is the message bit. To de-commit, the sender only needs to send $x$. The perfect binding follows from $f$ being a one-way permutation, and the computational hiding follows from the hard-core predicate. Though a trusted party is not necessary here, one needs to make sure that $f$ is verifiable to be a one-way permutation.
The second example is based on DLP (with a trusted party). For some prime $p$, let a random generator $g$ and a random group element $h$ (whose discrete log base $g$ is unknown) be the common knowledge. Then the commitment to a message $m$ would be $g^r h^m$ for some random $r$, and the de-commitment be $(r, m)$ itself. The perfect hiding follows from the randomly chosen $r$, and the computational binding follows from the DLA.
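This DLP-based scheme is essentially the Pedersen commitment. A minimal sketch with toy parameters (the tiny prime $p$ and the pair $g, h$ are purely illustrative; in practice $p$ is large and $h$ must be chosen so that $\log_g h$ is unknown to the sender):

```python
import secrets

# Toy parameters: p = 2q + 1 with q prime; g and h generate the order-q
# subgroup of squares mod p. Illustrative sizes only, not secure.
p, q = 1019, 509
g, h = 4, 9

def commit(m, r=None):
    """Commit to m in Z_q with randomness r: c = g^r * h^m mod p."""
    if r is None:
        r = secrets.randbelow(q)
    return pow(g, r, p) * pow(h, m, p) % p, r

def verify(c, m, r):
    """Check the de-commitment (r, m) against the commitment c."""
    return c == pow(g, r, p) * pow(h, m, p) % p

c, r = commit(42)
assert verify(c, 42, r)
```

Perfect hiding is visible here: since $g^r$ is uniform in the subgroup, for every message $m$ there exists randomness producing the same $c$, so $c$ carries no information about $m$.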
The third example is based on OWF (an interactive scheme, without a trusted party). Because OWFs exist, we have a PRG $G: \{0,1\}^n \to \{0,1\}^{3n}$. First let the receiver send a random $3n$-bit string $r$ to the sender; then the sender draws an $n$-bit seed $s$, and sends $G(s)$ as a commitment for $0$, or $G(s) \oplus r$ as a commitment for $1$. This is computationally hiding because of the pseudorandomness of $G$, and statistically binding because a conflicting opening requires $G(s_1) \oplus G(s_2) = r$ for some pair of seeds, which happens for a random $r$ w.p. at most $2^{2n}/2^{3n} = 2^{-n}$.
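This interactive scheme is Naor's commitment. A toy sketch, with SHAKE-256 standing in for the PRG $G$ (an illustrative assumption; the actual construction uses a PRG built from the OWF):

```python
import hashlib, secrets

N = 16  # seed length in bytes (n = 128 bits); the PRG stretches to 3n bits

def G(seed: bytes) -> bytes:
    # SHAKE-256 as a stand-in PRG: {0,1}^n -> {0,1}^{3n}.
    return hashlib.shake_256(seed).digest(3 * N)

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Receiver's first message: a random 3n-bit string r.
r = secrets.token_bytes(3 * N)

def commit(bit: int, r: bytes):
    """Sender commits: G(s) for 0, G(s) xor r for 1; the seed s is kept."""
    s = secrets.token_bytes(N)
    c = G(s) if bit == 0 else xor(G(s), r)
    return c, s

def open_commitment(c: bytes, bit: int, s: bytes, r: bytes) -> bool:
    """Receiver checks the revealed (bit, s) against the commitment."""
    return c == (G(s) if bit == 0 else xor(G(s), r))
```

Binding is statistical over the receiver's choice of $r$: a cheating sender would need two seeds with $G(s_1) \oplus G(s_2) = r$, which a random $r$ avoids with overwhelming probability.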
The fourth example is based on the hardness of factoring. Let $QR_N$ be the set of quadratic residues modulo $N$, where $N = pq$ for primes $p \equiv q \equiv 3 \pmod 4$. Now both $f_0(x) = x^2 \bmod N$ and $f_1(x) = 4x^2 \bmod N$ are permutations over $QR_N$, but they are claw-free: i.e. it is hard to find a pair $(x_0, x_1)$ with $f_0(x_0) = f_1(x_1)$.
Now, the commitment to a message $m$ would be (after some prefix-free encoding of $m$) to take some random $x \in QR_N$ and apply $f_{m_i}$ for each bit $m_i$ of the encoded message in turn. It is perfectly hiding because a random input after a permutation is still random; it is computationally binding, following from the claw-free property.
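A toy sketch of this claw-free commitment, using a tiny Blum integer $N = 7 \cdot 11$ (illustrative only; a real $N$ must be a large modulus of unknown factorization, and the prefix-free encoding of the message is assumed to have been applied already):

```python
import math, secrets

N = 7 * 11  # a tiny Blum integer: both primes are 3 mod 4

def f(bit: int, x: int) -> int:
    """f_0(x) = x^2 mod N, f_1(x) = 4x^2 mod N; both permute QR_N."""
    y = x * x % N
    return y if bit == 0 else 4 * y % N

def sample_qr() -> int:
    """A random quadratic residue modulo N (square of a unit)."""
    while True:
        a = secrets.randbelow(N - 2) + 2
        if math.gcd(a, N) == 1:
            return a * a % N

def commit(bits, x=None):
    """Apply f_{m_i} for each message bit in turn, starting from random x."""
    if x is None:
        x = sample_qr()
    c = x
    for b in bits:
        c = f(b, c)
    return c, x

def open_commitment(c, bits, x) -> bool:
    """Re-run the chain from the revealed x and compare."""
    return commit(bits, x)[0] == c
```

An opening to a different bit string would exhibit a claw for $f_0, f_1$, which is infeasible if factoring $N$ is hard.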
Lecture 17-18: Zero-Knowledge
A zero-knowledge IP system is defined as an IP such that for every PPT verifier $V^*$, there exists some PPT simulator that can generate the "view" of the verifier as if it had interacted with the prover. Two things are important here: first, it needs to hold for all verifiers, because even a dishonest verifier should not be able to extract knowledge; second, the "view" is defined as not only the communication history but also the randomness known to the verifier himself.
Besides the definition, the examples of ISO (graph isomorphism) and 3COLOR are given. The zero-knowledge proof for ISO is to let the prover send a random graph isomorphic to the two input graphs, and then ask the verifier to randomly pick one of the two graphs, upon which the prover reveals the corresponding isomorphism. The zero-knowledge proof for 3COLOR is to randomly permute the coloring and send commitments of the colors to the verifier. The verifier can only ask for one edge and check that its two endpoints received different colors.
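One round of the ISO protocol can be sketched as follows (toy graphs on four vertices; in the full protocol the round is repeated to shrink the soundness error, and the simulator's job is easy because a cheating "prover" can answer whichever single challenge it prepared for):

```python
import random

def apply_perm(perm, graph):
    """Relabel vertices of a graph (a set of frozenset edges) by perm."""
    return {frozenset(perm[u] for u in e) for e in graph}

# Public input: two isomorphic graphs; only the prover knows pi.
G0 = {frozenset({0, 1}), frozenset({1, 2}), frozenset({2, 3})}
pi = {0: 2, 1: 0, 2: 3, 3: 1}
G1 = apply_perm(pi, G0)

# Prover: send H, a freshly randomized copy of G1.
sigma = dict(enumerate(random.sample(range(4), 4)))
H = apply_perm(sigma, G1)

# Verifier: random challenge b; prover reveals an isomorphism G_b -> H.
b = random.randrange(2)
answer = sigma if b == 1 else {u: sigma[pi[u]] for u in pi}

# Verifier's check: the revealed mapping indeed sends G_b to H.
assert apply_perm(answer, G1 if b == 1 else G0) == H
```

Note that a single answered challenge reveals either $\sigma$ or $\sigma \circ \pi$, each of which is a uniformly random permutation on its own, so nothing about $\pi$ leaks.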
Lecture 19-20: Digital Signature
Lecture 19 is missing from the folder, and therefore I referred to lectures 15-18 of the 2009 offering of this same course.
A signature scheme is one in which the verifier sends some message and the oracle signs it. The signature must be unforgeable according to:
- (single-message) knowing the public key and with one adaptive message to the oracle, the adversary cannot forge the signature for any (unqueried) message;
- (multi-message) knowing the public key and with polynomially many adaptive messages to the oracle, the adversary cannot forge the signature for any (unqueried) message.
As expected, the plain RSA signature and the Rabin signature are not secure under this definition. One successful single-message secure signature adopts the claw-free permutations mentioned in Lecture 15-16.
Assume we have two pairs of claw-free permutations $(f_0, f_1)$ and $(g_0, g_1)$: the permutations, together with a public value $y$, form the public key, and the trapdoors that allow inverting them form the secret key. Given message $m$, the signature is a pair $(\sigma_1, \sigma_2)$, where $\sigma_1$ follows from $y$ by applying the inverse $f$'s indexed by some random $r$, and $\sigma_2$ follows from $r$ by applying the inverse $g$'s indexed by (the prefix-encoding of) the message. To verify a signature, one applies the $g$'s indexed by the message to $\sigma_2$ and obtains $r$, and then applies the $f$'s indexed by $r$ to $\sigma_1$ and checks if the result is $y$. It can be proved that, with the introduction of the random $r$, this signature scheme is robust against one single adaptive query from the adversary.
To obtain a multi-message signature, one can chain the signatures: each signature covers the current message concatenated with a fresh key for the next message, so the signature on the $i$-th message carries the whole chain of previous messages and keys. To verify, the verifier starts from the last component, applies the permutations according to the corresponding message-and-key string to recover the previous link, and continues link by link until reaching the original public key. This is unforgeable because of the claw-free property again.
Another easy single-message secure (one-time) signature is mentioned in the earlier Lecture 17 (2009), based on an OWF $f$. For $n$-bit messages, the secret key is $2n$ random strings $x_{i,b}$, the public key is $y_{i,b} = f(x_{i,b})$, and the signature of $m$ is $(x_{1,m_1}, \ldots, x_{n,m_n})$. To verify the signature one simply applies $f$ and checks that the result coincides with the public key.
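This is the Lamport one-time signature. A minimal sketch, with SHA-256 standing in for the OWF $f$ (an illustrative assumption; any OWF works, and the scheme is secure for one signature only):

```python
import hashlib, secrets

n = 32  # sign 32-bit messages for brevity

def keygen():
    """Secret key: 2n random strings; public key: their images under f."""
    sk = [[secrets.token_bytes(32) for _ in range(2)] for _ in range(n)]
    pk = [[hashlib.sha256(x).digest() for x in pair] for pair in sk]
    return sk, pk

def bits(m):
    return [(m >> i) & 1 for i in range(n)]

def sign(sk, m):
    """Reveal the preimage x_{i, m_i} for each message bit."""
    return [sk[i][b] for i, b in enumerate(bits(m))]

def verify(pk, m, sig):
    return all(hashlib.sha256(x).digest() == pk[i][b]
               for i, (x, b) in enumerate(zip(sig, bits(m))))

sk, pk = keygen()
sig = sign(sk, 0xDEADBEEF)
assert verify(pk, 0xDEADBEEF, sig)
```

Signing two different messages with the same key reveals preimages for both values of some bit position, after which forgeries become possible; hence "one-time".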
In Lecture 20 (2010), another way of constructing a multi-message scheme from one-time signatures is given, with better efficiency than the simple concatenation mentioned two paragraphs earlier. The major difference is that instead of using a sequence of messages, one can use a Merkle tree so that the signature will only be of logarithmic length. To ensure that we do not need to store the whole tree, a PRF can again be adopted to regenerate the needed keys on the fly. Details ignored here.
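A sketch of the Merkle-tree mechanics (SHA-256 as the hash; the leaves stand in for one-time public keys, and a signature would reveal one leaf plus its logarithmic-length authentication path):

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Hash pairs of nodes level by level until one root remains."""
    level = [H(x) for x in leaves]
    while len(level) > 1:
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def auth_path(leaves, idx):
    """Sibling hashes from leaf idx up to the root."""
    level, path = [H(x) for x in leaves], []
    while len(level) > 1:
        path.append(level[idx ^ 1])
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        idx //= 2
    return path

def verify_path(leaf, idx, path, root):
    """Recompute the root from the leaf and its sibling path."""
    h = H(leaf)
    for sib in path:
        h = H(sib + h) if idx & 1 else H(h + sib)
        idx //= 2
    return h == root

leaves = [b"pk%d" % i for i in range(8)]
root = merkle_root(leaves)
assert verify_path(b"pk5", 5, auth_path(leaves, 5), root)
```

Only the root needs to be published: each of the $2^d$ one-time keys is authenticated by a path of $d$ hashes.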
Lecture 21-26: Lattices (taught by Yael Tauman Kalai)
Some basic algebraic knowledge about lattices is defined in lecture 21, and then the hardness of finding the shortest vector, or some vector closest to a given vector, is addressed in lecture 22. The LLL algorithm, which uses Gram-Schmidt orthogonalization to solve the shortest vector problem within an exponential approximation factor, is given in lectures 22 and 23. The main difficulty is to find some LLL-reduced basis with small Gram-Schmidt coefficients $|\mu_{i,j}| \le 1/2$. This is done by performing a rounded version of Gram-Schmidt orthogonalization (size reduction), and swapping $b_i$ and $b_{i+1}$ whenever the Lovász condition $\|\tilde{b}_{i+1}\|^2 \ge (\frac{3}{4} - \mu_{i+1,i}^2)\|\tilde{b}_i\|^2$ is violated. This algorithm terminates in poly-time.
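The two steps above (size reduction and the Lovász-condition swap) can be sketched compactly; this is an unoptimized exact-arithmetic version with $\delta = 3/4$ (the repeated full Gram-Schmidt recomputation is for clarity, not efficiency):

```python
from fractions import Fraction

def gram_schmidt(B):
    """Gram-Schmidt vectors B* and coefficients mu, in exact rationals."""
    n, d = len(B), len(B[0])
    Bs = [[Fraction(x) for x in b] for b in B]
    mu = [[Fraction(0)] * n for _ in range(n)]
    for i in range(n):
        for j in range(i):
            denom = sum(x * x for x in Bs[j])
            mu[i][j] = sum(Fraction(B[i][k]) * Bs[j][k] for k in range(d)) / denom
            Bs[i] = [Bs[i][k] - mu[i][j] * Bs[j][k] for k in range(d)]
    return Bs, mu

def lll(B, delta=Fraction(3, 4)):
    B = [list(b) for b in B]
    n, d = len(B), len(B[0])
    norm2 = lambda v: sum(x * x for x in v)
    i = 1
    while i < n:
        Bs, mu = gram_schmidt(B)
        for j in range(i - 1, -1, -1):        # size reduction: |mu_ij| <= 1/2
            q = round(mu[i][j])
            if q:
                B[i] = [B[i][k] - q * B[j][k] for k in range(d)]
                Bs, mu = gram_schmidt(B)      # refresh mu after each update
        if norm2(Bs[i]) >= (delta - mu[i][i - 1] ** 2) * norm2(Bs[i - 1]):
            i += 1                            # Lovász condition holds
        else:
            B[i - 1], B[i] = B[i], B[i - 1]   # swap and step back
            i = max(i - 1, 1)
    return B

# A classic small example: the reduced basis consists of short vectors.
print(lll([[1, 1, 1], [-1, 0, 2], [3, 5, 6]]))
```

Swaps and size reductions preserve the lattice (they are unimodular operations), so the output generates the same lattice with shorter, nearly orthogonal vectors.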
Though there are many important applications of lattices, the notes only manage to mention one of them in detail. That is, based on a worst-case lattice assumption, one can construct a collision-resistant hash function, which is as simple as $h_A(x) = Ax \bmod q$ for a random matrix $A \in \mathbb{Z}_q^{n \times m}$ and a binary vector $x$. One can reduce any poly-time collision-finding algorithm to a solver for GapSVP on any lattice (i.e. in the worst case).
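This is Ajtai's SIS-based hash. A toy sketch with illustrative parameters $n, m, q$ (the fixed seed only makes the demo reproducible; in the real scheme $A$ is a uniformly random public matrix and the parameters are much larger):

```python
import random

# Toy parameters; we need m > n log2(q) (about 53 here) for compression.
n, m, q = 8, 64, 97
rng = random.Random(0)  # fixed seed for a reproducible demo only

# The public matrix A in Z_q^{n x m}.
A = [[rng.randrange(q) for _ in range(m)] for _ in range(n)]

def hash_bits(x):
    """h_A(x) = A x mod q for a binary vector x of length m."""
    assert len(x) == m and all(b in (0, 1) for b in x)
    return tuple(sum(A[i][j] * x[j] for j in range(m)) % q for i in range(n))

digest = hash_bits([i % 2 for i in range(m)])
```

A collision $x \ne x'$ yields a short nonzero vector $z = x - x'$ with $Az = 0 \bmod q$, i.e. a solution to the short integer solution (SIS) problem; this is exactly what the worst-case reduction exploits.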
The first step of the reduction is to consider a problem GDD: given some target vector $t$, find some vector $v$ in the lattice such that $\|t - v\|$ is no more than a constant fraction of the "covering radius" – the largest distance of any point to the lattice. A solution to this GDD problem implies a solution to GapSVP. Notice that this is the major tool for us to relate an average-case problem to some worst-case assumption.
The second step is the smoothing argument. Starting from an arbitrary point, as long as the deviation is large enough (still poly), the Gaussian distribution modulo the fundamental parallelepiped is close enough to the uniform distribution. The last step is the reduction algorithm itself, which is stated at the end of lecture 25. Complicated, so ignored here.
In sum, if there exists some collision finder, then GapSVP can be solved in the worst case (for an arbitrary lattice).
The last lecture briefly mentions the learning with errors (LWE) assumption and its relationship with the lattice assumptions. A public-key encryption scheme under this assumption is also mentioned.