However, in Lecture 3 we claimed that for messages of length 2n the function is an OWF. So where am I wrong?

Thanks!!

---

Is it correct? I could not convince myself that it is, but if I understood correctly, the solution depends on it.

Hope the question is clear, will write it more formally:

If G generates the secret and public keys of a public-key encryption scheme, and on randomness r_g it produced (sk, pk) while on randomness r_g_2 it produced (sk', pk), does that mean sk' = sk?

Otherwise, I think there is a counterexample to the solution described.
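To make the worry concrete, here is a hedged toy sketch (only the syntax of key generation, not a real encryption scheme) of a hypothetical G for which the same pk arises from two different sk values:

```python
def toy_keygen(r: bytes):
    # Hypothetical key generation: sk is the full randomness, while pk
    # "forgets" the last byte.  Nothing in the syntax of key generation
    # alone forces pk to determine sk uniquely.
    sk = r
    pk = r[:-1]
    return sk, pk

sk1, pk1 = toy_keygen(b"\x01" * 16)            # randomness r_g
sk2, pk2 = toy_keygen(b"\x01" * 15 + b"\x02")  # randomness r_g_2
assert pk1 == pk2 and sk1 != sk2  # same pk, different sk
```

So unless something in the specific scheme guarantees that pk determines sk, I don't see why sk' = sk must hold.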

---

Why is this necessary? If we removed this bit sampling and had P always send rx to V, wouldn't this still work?

Then the simulator would simply always return y^{-1}r^{2} to V as the first message.

If we know that the prover convinces the verifier w.p. 1/100, why do we need the other expression in some session? The prover is deterministic, so I don't see the reason for the two probabilities.

And additionally, why is there a 2|E| instead of |E| in the denominator?

Thanks!

---

I saw a construction that, given a CPA-secure encryption scheme, constructs a MAC scheme as follows:

$Auth_{sk}(m) = Enc_{sk}(m)$

The question was whether or not it is secure.

The answer was that it is not, and the solution was to create a new encryption scheme E' such that for every key sk, the message $0^n$ is encrypted to $0^n$ with some negligible probability. Then, an adversary can simply output $(0^n, 0^n)$ as a forgery.

I don't see why this is correct, since this happens only with negligible probability (by the definition of E').

What am I missing here?
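To make sure I'm reading the counterexample right, here is a hedged toy sketch of how I understand E' (HMAC stands in for the underlying CPA-secure scheme, and a negligible set of "bad" keys stands in for the negligible-probability event; all names and parameters here are my own):

```python
import hmac, hashlib, os

N = 16  # message/key length in bytes (toy parameter)

def prf(sk: bytes, r: bytes) -> bytes:
    # HMAC-SHA256 as a stand-in PRF
    return hmac.new(sk, r, hashlib.sha256).digest()[:N]

def enc_prime(sk: bytes, m: bytes) -> tuple[bytes, bytes]:
    # Modified scheme E': for "bad" keys (first 8 bytes zero, a
    # negligible fraction, standing in for the negligible event),
    # the message 0^N encrypts to the all-zero ciphertext.
    if sk[:8] == bytes(8) and m == bytes(N):
        return bytes(N), bytes(N)
    r = os.urandom(N)
    return r, bytes(a ^ b for a, b in zip(prf(sk, r), m))

def dec_prime(sk: bytes, c: tuple[bytes, bytes]) -> bytes:
    r, ct = c
    if sk[:8] == bytes(8) and (r, ct) == (bytes(N), bytes(N)):
        return bytes(N)
    return bytes(a ^ b for a, b in zip(prf(sk, r), ct))

# Forgery attempt against Auth = Enc', Verify = decrypt-and-compare:
# the pair (0^N, (0^N, 0^N)) verifies exactly when sk is a bad key.
bad_sk = bytes(8) + os.urandom(8)
assert dec_prime(bad_sk, (bytes(N), bytes(N))) == bytes(N)
good_sk = b"\x01" * 16
m = os.urandom(N)
assert dec_prime(good_sk, enc_prime(good_sk, m)) == m
```

In this reading the forgery only succeeds on the bad keys, which is exactly the negligible-probability issue I am asking about.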

Thanks!

---

1. (U_n, Auth_sk(U_n))

2. (U_n, U_n')

Basically, the question is: does the fact that the adversary cannot forge a signature for a new message mean that it cannot tell whether a given signature is correct?
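A hedged toy example of why the answer might be "no": unforgeability does not force tags to look random. Here `auth_marked` is a hypothetical MAC built from any unforgeable one (HMAC is just a stand-in):

```python
import hmac, hashlib, os

def auth(sk: bytes, m: bytes) -> bytes:
    # stand-in for an unforgeable MAC
    return hmac.new(sk, m, hashlib.sha256).digest()

def auth_marked(sk: bytes, m: bytes) -> bytes:
    # Still unforgeable (a forgery here directly yields a forgery for
    # auth), but its tags are trivially recognizable: they end in 0x00.
    return auth(sk, m) + b"\x00"

sk = os.urandom(16)
tag = auth_marked(sk, os.urandom(16))
assert tag.endswith(b"\x00")  # every real tag is recognized
# A uniformly random string of the same length ends in 0x00 only with
# probability 1/256, so distributions 1 and 2 above are easy to
# distinguish even though the MAC is unforgeable.
```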

Thanks!

Thanks!

---

1. {P(w_1)V*(x), P(w_1)V*(x)} = S_{1,1}

2. {P(w_0)V*(x), P(w_1)V*(x)} = S_{0,1}

I understand why if this claim is correct the claim in the question is correct.

But when I try to formulate why this claim is correct, I do not manage to do so:

1. Assume there is a PPT A that can distinguish between S_{1,1} and S_{0,1} with non-negligible probability.

2. Now I would like to use A to distinguish between {P(w_1)V*(x)} and {P(w_0)V*(x)}, and by witness indistinguishability deduce that such an A does not exist.

3. Our distinguisher B will get y, which is drawn either from P(w_1)V*(x) or from P(w_0)V*(x).

4. Now I would like to feed A the pair {y, P(w_1)V*(x)} and output whatever A outputs.

5. My problem is that I cannot really sample an interaction with P and be sure that it is using w_1. Or can I?
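Trying to write the reduction above in symbols (my own notation; $\langle P(w), V^*(x)\rangle$ denotes a transcript of the interaction):

```latex
\begin{align*}
B(y):\ & \text{sample } z \gets \langle P(w_1), V^*(x)\rangle
  && \text{(the problematic step: $B$ must run $P$ with $w_1$ itself)}\\
       & \text{output } A(y, z).
\end{align*}
```

If $y$ came from $\langle P(w_1), V^*(x)\rangle$ then $(y,z)$ is distributed as $S_{1,1}$, and if it came from $\langle P(w_0), V^*(x)\rangle$ then $(y,z)$ is distributed as $S_{0,1}$, so $B$'s advantage equals $A$'s, contradicting witness indistinguishability - provided the sampling step is actually allowed.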

Thanks,

---

In both of the examples (stream ciphers and public synchronization), the main issue was that we have to maintain a state.

Is this the only way to support multi-message encryption? If the encryption is randomized, isn't that sufficient for multi-message encryption?

My main question here is what PRFs give us that wasn't possible before.

If this is stateless multi-message encryption, then why wasn't this possible before?
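To check my own understanding, here is how I read the PRF construction, sketched in Python with HMAC standing in for the PRF (names and parameters are my own) - the fresh randomness r plays the role the shared state played before:

```python
import hmac, hashlib, os

N = 16  # block length in bytes (toy parameter)

def prf(sk: bytes, r: bytes) -> bytes:
    # HMAC-SHA256 as a stand-in PRF
    return hmac.new(sk, r, hashlib.sha256).digest()[:N]

def enc(sk: bytes, m: bytes) -> tuple[bytes, bytes]:
    # Stateless and randomized: every call samples a fresh r, so no
    # state is carried between messages.
    r = os.urandom(N)
    return r, bytes(a ^ b for a, b in zip(prf(sk, r), m))

def dec(sk: bytes, c: tuple[bytes, bytes]) -> bytes:
    r, ct = c
    return bytes(a ^ b for a, b in zip(prf(sk, r), ct))

sk = os.urandom(N)
m = b"attack at dawn!!"
assert dec(sk, enc(sk, m)) == m
```

So the PRF seems to be what lets the two sides agree on a fresh one-time pad without synchronizing - which is exactly what I want confirmed.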

From lecture 1:

How do we show that the 4th definition of perfect secrecy is equivalent to the others?

(The one stating that for a subset S, the probability of outputting the message after seeing its encryption is at most 1/|S|.)
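For what it's worth, one direction seems to follow directly (writing the 4th definition as: for every subset S, every adversary A, and M uniform over S, $\Pr[A(\mathrm{Enc}_K(M)) = M] \le 1/|S|$). If the scheme is perfectly secret, the ciphertext is independent of M, so conditioned on any ciphertext c the message is still uniform over S:

```latex
\begin{align*}
\Pr[A(\mathrm{Enc}_K(M)) = M]
  &= \sum_{c} \Pr[\mathrm{Enc}_K(M) = c]\cdot \Pr[M = A(c) \mid \mathrm{Enc}_K(M) = c]\\
  &= \sum_{c} \Pr[\mathrm{Enc}_K(M) = c]\cdot \frac{1}{|S|} = \frac{1}{|S|}.
\end{align*}
```

So what I'm really missing is the converse direction.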

Thanks

---

I have some questions regarding Lecture 9:

1. Page 4, Claim 2.4: I didn't understand your comment about fixing the inputs for the g_hat protocols. By fixing, do you mean specific bits of the inputs?

Why can we do that? How does it reduce the burden of proving indistinguishability for the whole view?

2. Page 4, remark about the original GMW construction: Can you please explain why it is sufficient to run the protocol only for MUL gates? (What is "the protocol" that we are supposed to run?)

3. Page 5, Garbling Scheme definition: Why do we need to encode the description of f if the simulator also gets f? (We assume that f is public.)

4. Page 6, Yao's Garbled Circuit:

- In the first stage, two secret keys are sampled for each wire. Are they one bit long? Multiple bits?

Also, we assumed a secret-key encryption scheme. Does the ciphertext's length equal the secret key's length? (The table T_g implies that.)

- In the second stage, we mentioned a mapping for the output wire. b can be either 0 or 1, and there is only one output wire. Can't we turn it into a two-bit table? Am I wrong somewhere?

- In the fourth stage, we saw v(w). I don't understand how we get this function and what it does. I guess it has to use the gate's table T_G somehow.
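Regarding the first two stages, here is a hedged single-gate toy I used to test my own reading (XOR-pad encryption standing in for the secret-key scheme; the key length, row ordering, and all names are my assumptions, not the lecture's):

```python
import hmac, hashlib, os

KLEN = 16  # wire keys as multi-byte strings (my assumption), not single bits

def E(k: bytes, m: bytes) -> bytes:
    # Toy secret-key encryption: XOR with an HMAC-derived pad.  Note
    # the ciphertext has the same length as the plaintext key it hides.
    pad = hmac.new(k, b"pad", hashlib.sha256).digest()
    return bytes(a ^ b for a, b in zip(pad, m))

D = E  # decryption equals encryption for an XOR pad

def garble_and_gate(k_a, k_b, k_out):
    # k_a[b], k_b[b], k_out[b] are the two keys sampled for each wire.
    # Row (x, y) double-encrypts the output-wire key for bit x AND y.
    # (A real construction also permutes the rows and adds redundancy
    # so the evaluator can find the right row; omitted here.)
    return [E(k_a[x], E(k_b[y], k_out[x & y])) for x in (0, 1) for y in (0, 1)]

k_a = [os.urandom(KLEN), os.urandom(KLEN)]
k_b = [os.urandom(KLEN), os.urandom(KLEN)]
k_out = [os.urandom(KLEN), os.urandom(KLEN)]
T_g = garble_and_gate(k_a, k_b, k_out)

# Evaluator holding k_a[1] and k_b[0] recovers exactly k_out[1 AND 0]:
assert D(k_b[0], D(k_a[1], T_g[2 * 1 + 0])) == k_out[0]
```

Is this roughly the intended picture, or am I off on the key lengths or the table format?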

Thanks, and sorry (again) for the long questions.

---

"You can use any written material" - does this include "printed material"?

Thanks

---

Thanks

---

I have some questions regarding Lecture 12:

1. Page 1, Adaptive Zero Knowledge: Can you please give a description of the distributions behind the Real and Ideal worlds? Maybe in ensemble notation?

2. Page 3, within the Decrypt of the CCA scheme: Don't we need to append the CRS to SK'? It has to verify that the NIZK proof is correct, and the NIZK verifier needs the CRS.

3. Page 3, Claim 2.2: What does a "sequence of hybrid games" mean? What does the distribution of a game look like? And why does it imply that we can replace one game with another?

4. Page 3, Game1: It seems that we assumed a NIZK, but we are using the two simulators which are part of the adaptive NIZK's zero knowledge.

Do we assume a NIZK and transform it into an adaptive NIZK (although the proof for this comes later in the lecture), as stated can be done?

(The same question goes for Definition 2.3, which also assumes a NIZK but uses the adaptive ZK.)

5. Page 4, Game4: In line 2, did we intentionally move back to decrypting with sk0? If so, why?

6. A general question: In most of the NIZK ZK definitions we used ensemble notation, but we didn't mention the security parameter n.

Does that have some meaning? Or is n so obvious that it is omitted? (If n shouldn't be there - in what parameter is the bounding negligible function?)

Thanks, and sorry for the long questions.

---

"Note on Randomized Encryption. Note that the constructed encryption scheme is randomized. In fact, any stateless encryption scheme for more than a single message must be randomized (**think why)**."

Is the answer that if it is not randomized, an attacker can distinguish between an encryption of (m_0,m_0,m_0) and an encryption of (m_0,m_0,m_1), since in the first all three ciphertexts will always be identical while in the second they will not?

If not, what is the answer?
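A toy version of the attack I have in mind, with a hypothetical deterministic scheme (HMAC here is just a stand-in for any deterministic, stateless encryption):

```python
import hmac, hashlib, os

def det_enc(sk: bytes, m: bytes) -> bytes:
    # hypothetical deterministic, stateless encryption scheme
    return hmac.new(sk, m, hashlib.sha256).digest()

sk = os.urandom(16)
m0, m1 = b"zero", b"one!"
seq_a = [det_enc(sk, m) for m in (m0, m0, m0)]
seq_b = [det_enc(sk, m) for m in (m0, m0, m1)]
# Distinguisher: "are all three ciphertexts identical?"
assert seq_a[0] == seq_a[1] == seq_a[2]
assert not (seq_b[0] == seq_b[1] == seq_b[2])
```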

Thanks!

---

Is there an option to upload the notes from the extra (enrichment) class from Friday?

Thanks

---

Regarding Lecture 11: I didn't manage to understand the impossibility mentioned at the beginning of page 2, i.e., why in the NIZK model with a CRS the simulator does not let us decide the language as in Claim 2.1?

Thanks

---

2. For the function we defined in Question 2 for the auction: what happens when an adversary controls the seller? If the seller is corrupted, then the adversary can decide who wins, because it can choose the seller's final output to be (B, some random number). Nobody really checks that this random number is B's real bid, so the adversary can make B win even in the ideal world.

3. The logic of Section A: I did not understand how it is possible that A* outputs b. By the definition of the function, A's output should be nothing, so if A* outputs b, it contradicts the definition of the function.

Thanks!!

---

I'm a bit confused by the definition of an ensemble over multiple variables.

By definition (1):

\begin{align} \left \{ Real(n,C,x) \right \}_{n,C,x}\approx _C \left \{ Ideal(n,C,x) \right \}_{n,C,x} \end{align}

- Do we fix C and x, and bound the distance term with a function of n?

- Does the distinguisher know C or x?

- In Lecture 2 we defined computational indistinguishability and defined I_n. Does the set I_n contain every C and x of length n?

I'm asking because I can't find a way to distinguish between the Real world and the Ideal world given only y_S, y_A, y_B, y_C (assuming A*'s ability to output b with non-negligible probability).
