<li>The passive security definition, where we define views and require that a simulator's output is computationally indistinguishable from the views of all parties.</li>
<li>This protocol allows two parties to compute any function over their inputs: \(f : \{0,1\}^n \times \{0,1\}^n \rightarrow \{0,1\}\).</li>
<li>Rather than computing the function directly, it is represented by a truth table (a matrix) \(T \in \{0,1\}^{2^n \times 2^n}\) where \(T[i,j] = f(i,j)\).</li>
<li>The ideal functionality is: Alice inputs \(x \in \{0,1\}^{n}\) and Bob inputs \(y \in \{0,1\}^{n}\). The protocol allows Alice to learn \(z = f(x,y)\), while Bob should learn <i>nothing</i>.</li>
<li>One could generate \(M_B\) or \(M_A\) using a pseudorandom generator. This downgrades the protocol from unconditional to computational security, but the storage complexity of one of the parties can be made small, while the other still needs exponential storage.</li>
<li>Circuit based, still using a trusted dealer</li>
<li>More complicated, as it has to support different operations, XOR and AND.</li>
<li><b>Circuit Notation</b>: A circuit is a function \(C : \{0,1\}^n \times \{0,1\}^n \rightarrow \{0, 1\}\) where the first \(n\) input bits come from Alice and the second \(n\) from Bob. Gates have unbounded fanout. No cycles are allowed.</li>
<li><b>Invariant</b>: The protocol works on secret shared bits.</li>
<li>Input Wires: For each of the <i>n</i> wires belonging to Alice: Alice samples a random bit \(x_B \leftarrow \{0,1\}\), sets \(x_A = x \oplus x_B\) and then sends \(x_B\) to Bob. \(x\) is said to be "shared" or "x-in-a-box", using notation \([x] \leftarrow Share(A,x)\). Bob is symmetric to this.</li>
<li>Output wires: If Alice (resp. Bob) is supposed to learn [x], Bob sends \(x_B\) to Alice who can then output \(x = x_A \oplus x_B\). \((x, \perp) \leftarrow OpenTo(A, [x])\). \(x \leftarrow OpenTo([x])\) is written if both are to learn \(x\).</li>
<li>XOR with Constant: Write \([z] = [x] \oplus c\) for a unary gate computing \(z = x \oplus c\) for some constant bit \(c \in \{0,1\}\). In reality, Alice computes \(z_A = x_A \oplus c\) while Bob simply sets \(z_B = x_B\).</li>
<li>AND with Constant: \([z] = [x] \cdot c\). Same as above, but with multiplication: both Alice and Bob multiply their share by \(c\), i.e. \(z_{A,B} = x_{A,B} \cdot c\).</li>
<li>XOR of Two Wires: \([z] = [x] \oplus [y]\). Alice computes \(z_A = x_A \oplus y_A\), Bob computes \(z_B = x_B \oplus y_B\).</li>
<li>AND of Two Wires: \([z] = [x] \cdot [y]\), using a shared triple \(([u],[v],[w])\) with \(w = u \cdot v\) from the dealer. The parties open \(d = OpenTo([x] \oplus [u])\) and \(e = OpenTo([y] \oplus [v])\) and then run the subproto: \([z] = [w] \oplus e \cdot [x] \oplus d \cdot [y] \oplus e \cdot d\)</li>
</ol></li>
</ol></li>
<li><b>Putting all of this together</b>: The circuit has <i>L</i> wires \(x^1, \dots, x^L\), and there is only one output wire, \(x^L\). First Alice and Bob run the subprotocol Share for each of the \(2n\) input wires in the circuit; then, for each layer \(1,\dots,d\) of the circuit, Alice and Bob securely evaluate all gates at that layer using the XOR and AND subprotocols (gates can only get input from gates at a lower level). Eventually, the value on the output wire is ready and can be opened to Alice, \((x, \perp) \leftarrow OpenTo(A, [x^L])\). A sketch of these subprotocols follows below.</li>
<li>OT can be used to remove the trusted dealer from BeDOZa.</li>
</ul>
</div>
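<p>A minimal sketch of the sharing invariant and the gate subprotocols above (Python; the dealer's multiplication triple \(([u],[v],[w])\) with \(w = u \cdot v\) is passed in explicitly, and all function names are illustrative):</p>
<pre class="src src-python">
import random

# XOR secret sharing of single bits: Alice holds x_A, Bob holds x_B, x = x_A ^ x_B.

def share(x):
    """Share(A, x): Alice samples x_B, keeps x_A = x ^ x_B and sends x_B to Bob."""
    x_B = random.randint(0, 1)
    return x ^ x_B, x_B                      # (Alice's share, Bob's share)

def open_shares(x_A, x_B):
    """OpenTo([x]): the parties exchange shares and reconstruct x."""
    return x_A ^ x_B

def xor_const(x_A, x_B, c):
    """[z] = [x] xor c: only Alice adds the constant."""
    return x_A ^ c, x_B

def and_const(x_A, x_B, c):
    """[z] = [x] * c: both parties multiply their share by c."""
    return x_A * c, x_B * c

def xor_gate(x_A, x_B, y_A, y_B):
    """[z] = [x] xor [y]: local XOR of the shares."""
    return x_A ^ y_A, x_B ^ y_B

def and_gate(x_A, x_B, y_A, y_B, triple_A, triple_B):
    """[z] = [x] * [y] using a dealer triple [u], [v], [w] with w = u * v:
    open d = x xor u and e = y xor v, then [z] = [w] xor e[x] xor d[y] xor ed
    (the constant e*d is added by Alice only)."""
    u_A, v_A, w_A = triple_A
    u_B, v_B, w_B = triple_B
    d = open_shares(x_A ^ u_A, x_B ^ u_B)
    e = open_shares(y_A ^ v_A, y_B ^ v_B)
    z_A = w_A ^ (e * x_A) ^ (d * y_A) ^ (e * d)
    z_B = w_B ^ (e * x_B) ^ (d * y_B)
    return z_A, z_B

def dealer_triple():
    """The trusted dealer's job: a random shared triple with w = u * v."""
    u, v = random.randint(0, 1), random.randint(0, 1)
    u_A, u_B = share(u)
    v_A, v_B = share(v)
    w_A, w_B = share(u * v)
    return (u_A, v_A, w_A), (u_B, v_B, w_B)

# Sanity check: AND of two shared bits.
x_A, x_B = share(1)
y_A, y_B = share(1)
z_A, z_B = and_gate(x_A, x_B, y_A, y_B, *dealer_triple())
assert open_shares(z_A, z_B) == 1
</pre>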
<olclass="org-ol">
<li><aid="orgd691f91"></a>Analysis TODO!<br/>
<divclass="outline-text-5"id="text-1-3-2-1">
<ulclass="org-ul">
<li>We consider only semi-honest (or passive) corruption at this point, and the function is deterministic, so it is enough to prove that the output is correct and that the view of a corrupted party can be simulated.</li>
<li><b>Correctness</b>: All gates are trivially correct, apart from AND: \[ w \oplus e \cdot x \oplus d \cdot y \oplus e \cdot d = uv \oplus (xy \oplus vx) \oplus (xy \oplus uy) \oplus (xy \oplus vx \oplus uy \oplus uv) = xy \]</li>
<li><b>Simulation of the view of a corrupted Alice, having only access to her input/output</b>:
<olclass="org-ol">
<li>For each invocation of \([x^i] = Share(x^i, A)\), the simulator (like an honest Alice would), samples random \(x^i_B\) and sets \(x^i_A = x^i \oplus x^i_B\).</li>
<li>For each invocation of \([x^i] = Share(x^i, B)\), the simulator includes in the view a message from Bob with a random bit \(x^i_A \leftarrow \{0,1\}\).</li>
<li>When \([x^k] = [x^i] \oplus [x^j]\) is invoked, the sim (like an honest Alice) computes \(x^k_A = x^i_A \oplus x^j_A\); (Simulation for XOR with constant and AND with constant is done similarly)</li>
<li>A malicious Bob can deviate from the original OTTT protocol in a few ways:
<olclass="org-ol">
<li>Bob can send the wrong value \(v'\), rather than \(v = y+s\). This means that Bob sends some arbitrary \(v' \in \{0,1\}^n\). However, this can be seen as input substitution, since it's based on \(y\), which is a value only Bob knows regardless.
<li>Bob sends <i>nothing</i> or <i>an invalid message</i>. This will happen if Bob either does not send anything or Bob sends a pair which is not the right format; i.e. \((v', z'_B) \not\in \{0,1\}^n \times \{0,1\}\).
<li>The second condition can be checked by Alice and the first can be solved by adding a timer at which point Alice will time out.</li>
<li>At this point, Bob has learned nothing but the value \(u\), which is just a random value, as such we will not consider this cheating.</li>
<li>So we account for this by modifying the protocol in such a way that if Alice detects an invalid message or receives none, she simply outputs \(z = f(x, y_0)\) where \(y_0\) is just some default input. This can be computed efficiently in the simulated world by having the simulator give \(y_0\) to the ideal world.</li>
<li>Bob <i>sends a wrong value</i> \(z'_B\). Since \(z_B\) is the value we XOR with in the end, if it's flipped, Alice will get the wrong result, but will not know this.
<li>Since a wrong bit means \(z'_B = z_B \oplus 1\), Alice will output \(z' = z \oplus 1 = f(x,y) \oplus 1\).</li>
<li>This is <b>NOT</b> <i>input substitution</i>. If Alice and Bob were to compute a function with \(f(x,y) = 0\) for all values of \(x\) and \(y\), Bob flipping his \(z_B\) would still break correctness, as Alice would always end up XORing \(0\) and \(1\), giving \(1\) instead of \(0\) as the result.</li>
</ul></li>
</ol></li>
<li>Thus, we need to defend against the third case!</li>
<li>A MAC scheme has three algos: (gen, tag, ver), where gen produces a MAC key \(k\) which can then be used to compute a tag on messages: \(t = tag(k,m)\). The verification function \(ver(k,t,m)\) outputs accept if \(t\) is a valid tag for message \(m\) under key \(k\) and rejects otherwise.</li>
<li>Security of a MAC is defined as a game between a challenger C and an adversary A (sketched below). C samples a key \(k\) and A is then allowed to query \(q\) times for tags \(t_1, \dots, t_q\) on messages \(x_1, \dots, x_q\) of its choice. The adversary then outputs a pair \((t', x')\) for a message \(x'\) which it has not already queried. A MAC scheme is \((q, \epsilon)\)-secure if any adversary making at most \(q\) queries outputs a valid tag \(t'\) for \(x'\) with probability at most \(\epsilon\).</li>
<li>This is no longer perfectly correct, since the MAC scheme can be broken with probability \(\epsilon\); thus we have to argue about the joint distribution of the views and the outputs.</li>
<li>The proof is a reduction: if we can break the OTTT protocol, we can break the underlying MAC scheme.</li>
<li>Sample random \(s, M_B\), generate keys \(K\) for the MAC scheme and compute MACs \(T = Tag(K, M_B)\) (for all entries) and send \((s, M_B, T)\) to Bob (so the simulator replaces the trusted dealer).</li>
<li>Sample a random \(u\) and send it to Bob (replacing the honest Alice)</li>
<li>If Bob does not output anything or output an invalid message, or output a triple \((v', z'_B, t'_B)\) s.t. \(z'_B \neq M_B[u,v]\), the simulator inputs \(y_0\) to the ideal func. Else the sim inputs \(y = v' - s\) to the ideal func.</li>
<li><i>The view of Bob</i> is distributed as in the real protocol: \(s, M_B\) are uniformly random in both experiments, and since \(u = x+r\) for a uniformly random \(r\) (unknown to Bob) in the actual protocol, \(u\) is also random there.</li>
<li>The output of Alice is distributed identically, except for the case where corrupt Bob sends a triple s.t. \(ver(K[u,v], t'_B, z'_B) = accept\) meanwhile \(z'_B \neq M_B[u,v]\): in which case the real Alice would output \(f(x,y) \oplus 1\) as previously discussed, but the ideal Alice would output \(f(x,y_0)\). This Bob can however be turned into an adversary for the underlying MAC scheme, which is assumed to be secure, thus completing the proof.</li>
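<li>A small sketch of the (gen, tag, ver) interface and the forgery game above; HMAC-SHA256 is used purely as an illustrative instantiation (it is not the scheme from the notes):
<pre class="src src-python">
import hashlib
import hmac
import os

def gen():
    """Sample a MAC key k."""
    return os.urandom(32)

def tag(k, m):
    """t = tag(k, m); instantiated here with HMAC-SHA256 for illustration."""
    return hmac.new(k, m, hashlib.sha256).digest()

def ver(k, t, m):
    """Accept iff t is a valid tag on m under k."""
    return hmac.compare_digest(t, tag(k, m))

def forgery_game(adversary, q):
    """The adversary may ask for tags on up to q messages of its choice and must
    then output a valid tag on a message it never queried."""
    k = gen()
    queried = []
    def oracle(m):
        if len(queried) >= q:
            raise RuntimeError("query budget exceeded")
        queried.append(m)
        return tag(k, m)
    t_forged, m_forged = adversary(oracle)
    return m_forged not in queried and ver(k, t_forged, m_forged)
</pre>
The scheme is \((q, \epsilon)\)-secure if every efficient adversary wins this game with probability at most \(\epsilon\).</li>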
<li>Protocols with passive security (from public-key encryption with random looking keys).</li>
<li>The GMW compiler (3 steps: zero-knowledge proofs, coin-flip into the well, input commitment) and how to use it to construct active secure oblivious transfers - OR -</li>
<li>Explicit constructions of active secure oblivious transfer in the common reference string (PVW protocol).</li>
<li>Main variant of Oblivious Transfer (OT) is the <i>1-out-of-2</i> OT or (2 1)-OT for short. It is described by the following ideal functionality (sketched as an ideal box after the list):
<ol class="org-ol">
<li>Alice inputs a choice bit \(b \in \{0,1\}\)</li>
<li>Bob inputs two messages \(m_0, m_1\)</li>
<li>Alice learns \(z = m_b\)</li>
</ol></li>
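<li>As a reference point, the ideal functionality is nothing more than the following trusted computation (a sketch):
<pre class="src src-python">
def ot_2_1(b, m0, m1):
    """Ideal (2 1)-OT: Alice inputs choice bit b, Bob inputs (m0, m1);
    Alice learns m_b and Bob learns nothing."""
    assert b in (0, 1)
    return (m0, m1)[b]
</pre>
</li>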
<li>A secure OT should not let Alice learn anything about the unchosen message, and Bob should not learn Alice's choice bit.</li>
<li><i>1-out-of-n</i> is exactly what it sounds like.
<ulclass="org-ul">
<li><i>1-out-of-n</i> OT directly implies single two-party secure computation for some function \(f(x,y)\) for \(x <= n\), as Bob can create his messages \(M_0, ..., M_{n-1}\) as \(M_i = f(i,y)\), using his own input \(y\) and Alice will then use \(x\) as the choice bit, giving her \(M_i\) for \(i=x\), \(f(x,y)\).</li>
</ul></li>
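<li>A sketch of that reduction, with the <i>1-out-of-n</i> OT modeled as an ideal box (names are illustrative):
<pre class="src src-python">
def ot_1_of_n(choice, messages):
    """Ideal 1-out-of-n OT: the receiver learns messages[choice] and nothing else."""
    return messages[choice]

def secure_eval(f, x, y, n):
    """One secure evaluation of f(x, y) for x in {0, ..., n-1}:
    Bob prepares M_i = f(i, y), Alice chooses index x and learns f(x, y)."""
    M = [f(i, y) for i in range(n)]   # Bob's side
    return ot_1_of_n(x, M)            # Alice's side
</pre>
</li>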
<li>OT can be used to remove the trusted dealer from BeDOZa.
<ulclass="org-ul">
<li>Specifically, for the multiplication (AND) gate, the trusted dealer had to samle bits \(u_a,v_a,u_b,v_b,w_b\), had to compute $w<sub>a</sub> = (u<sub>a</sub>⊕ u<sub>b</sub>) ⋅ (v<sub>a</sub>⊕ v<sub>b</sub>) ⊕ w<sub>b</sub> and then send all the subscript \(A\) to Alice and vice versa for Bob.</li>
<li>This dealer can be replaced by a (4 1)-OT protocol:
<olclass="org-ol">
<li>Alice samples random bits \(u_A, v_A\) and inputs \(i=2 \cdot u_a + v_a\) to the OT</li>
<li>There exists something called <b>Random OT</b>, which is a randomized functionality where the parties have no input. The functionality samples random bits \(b,s_0,s_1\) and outputs \((b, z=s_b)\) to Alice and \((s_0,s_1)\) to Bob.</li>
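<li>Only Alice's step is spelled out above; a natural way to complete the dealer replacement (a sketch, with the (4 1)-OT modeled as an ideal box) is to let Bob act as sender with the four candidate values of \(w_A\):
<pre class="src src-python">
import random

def ot_4_1(i, messages):
    """Ideal (4 1)-OT: the receiver learns messages[i] and nothing else."""
    return messages[i]

def and_triple_via_ot():
    """Replace the dealer for one AND gate (sketch)."""
    # Alice's side
    u_A, v_A = random.randint(0, 1), random.randint(0, 1)
    i = 2 * u_A + v_A
    # Bob's side: for every possible (u_A, v_A), prepare the matching w_A
    u_B, v_B, w_B = (random.randint(0, 1) for _ in range(3))
    messages = [(a ^ u_B) * (b ^ v_B) ^ w_B for a in (0, 1) for b in (0, 1)]
    # Alice receives w_A = (u_A xor u_B)(v_A xor v_B) xor w_B, exactly as the dealer would compute it.
    w_A = ot_4_1(i, messages)
    return (u_A, v_A, w_A), (u_B, v_B, w_B)
</pre>
</li>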
<h4id="org0880381"><spanclass="section-number-4">2.3.1</span> With passive security and pseudorandom public keys</h4>
<divclass="outline-text-4"id="text-2-3-1">
<ulclass="org-ul">
<li>Assume the public-key encryption scheme (PKE) has pseudorandom public keys. Then the following OT protocol has passive security:
<ul class="org-ul">
<li><b>Choose</b>: Alice (who has a choice bit b) generates a public key \(pk_b\) with a secret key \(sk_b\) and samples another random string \(pk_{1-b}\) whose secret key she does not know. She sends \((pk_0, pk_1)\) to Bob.</li>
<li><b>Transfer</b>: Bob (with messages \(m_0,m_1\)) creates two ciphertexts \(c_0,c_1\) where \(c_i\) is an encryption of \(m_i\) under \(pk_i\). He sends \((c_0,c_1)\) to Alice.</li>
<li><b>Retrieve</b>: Alice can only decrypt \(c_b\), as she only knows \(sk_b\). She learns \(m_b\).</li>
</ul></li>
<li>Since keys are pseudorandom, Bob cannot distinguish the fake pk from the real one. Alice does not know the sk for the fake pk and thus she cannot decrypt the undesired message (see the sketch below).</li>
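<li>A sketch of the three moves, with the PKE passed in as parameters (keygen, enc, dec and the fake-key sampler are assumptions standing in for a concrete scheme with pseudorandom public keys):
<pre class="src src-python">
def ot_choose(b, keygen, sample_fake_pk):
    """Alice: a real key pair in slot b, a random-looking string in slot 1-b."""
    pk_real, sk = keygen()
    pks = [None, None]
    pks[b] = pk_real
    pks[1 - b] = sample_fake_pk()        # she does not know its secret key
    return pks, sk                       # (pk_0, pk_1) is sent to Bob

def ot_transfer(pks, m0, m1, enc):
    """Bob: encrypt m_i under pk_i and send both ciphertexts."""
    return enc(pks[0], m0), enc(pks[1], m1)

def ot_retrieve(b, sk, ciphertexts, dec):
    """Alice: she can only decrypt the slot for which she holds a secret key."""
    return dec(sk, ciphertexts[b])
</pre>
</li>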
<h4id="orgc2e35c2"><spanclass="section-number-4">2.3.2</span> Passive security with oblivious key generation</h4>
<divclass="outline-text-4"id="text-2-3-2">
<ulclass="org-ul">
<li>It is not required that public keys are pseudorandom, but merely that there should be an alternative way of generating the public-keys such that:
<olclass="org-ol">
<li>Public keys generated like this looks like regular pks</li>
<li>It is not possible to learn the sk corresponding to the pks generated like this.
<ulclass="org-ul">
<li>Note that we can't simply let Alice erase the sk after having computed the pk. It might not be easy to securely erase data and there is no way to verify that Alice has properly erased it. A passive party has to follow the protocol correctly, but is still allowed to look at their view and learn from this. If they have a secret key, they are allowed to use this!</li>
</ul></li>
</ol></li>
<li>A PKE with <i>oblivious key generation</i> is a regular <b>IND-CPA-secure</b> PKE defined by three algos <b>Gen, Enc, Dec</b>, but with an additional algo <i>oblivious generation</i> or <b>OGen</b>. OGen outputs strings which look like regular pks. OGen must satisfy:
<olclass="org-ol">
<li>Let \(b\) be a random bit, \(pk_0 \leftarrow Gen(sk)\) be output by the regular pk generation algo and \(pk_1 \leftarrow OGen(r)\) by the oblivious pk generation algo, where \(sk\) and \(r\) are chosen uniformly at random. Then there is no PPT algo D s.t. \(D(pk_b) = b\) with prob significantly larger than 1/2. (Success probability significantly <i>smaller</i> than 1/2 is ruled out as well: such a D could simply flip its answer, so what matters is the advantage over 1/2.)</li>
<li>It is possible to <b>efficiently invert</b> \(pk \leftarrow OGen(r)\), which is denoted \(r \leftarrow OGen^{-1}(pk)\)</li>
</ol></li>
<li>These two props imply it is infeasible to find the sk corresponding to an obliviously generated pk, even if you know the randomness used to generate it:
<ol class="org-ol">
<li>There exists no PPT algo A, which can output \(sk \leftarrow A(r)\) s.t. \(Gen(sk) = OGen(r)\).</li>
</ol></li>
<li>\(OGen^{-1}\) must be able to "explain" real pks, as if they were generated by OGen, since a distinguisher can check if \[OGen(OGen^{-1}(pk)) = pk\] This will apply to keys generated with OGen and thus it must also apply to keys generated by regular Gen, otherwise these two would not be indistinguishable. Therefore it must also hold that: \[OGen(OGen^{-1}(Gen(sk))) = Gen(sk)\]</li>
<li>However, as (Gen, Enc, Dec) is a secure scheme, it MUST be hard to compute sk from a pk generated with Gen, so \(pk \leftarrow Gen(sk)\) has to be a one-way function. Thus, a contradiction: if there is an A who can break property 3, then we can invert \(pk \leftarrow Gen(sk)\) by computing \[sk \leftarrow A(OGen^{-1}(pk))\]</li>
<li>Thus, for the OT protocol to be secure, we need more: the encryption scheme must remain IND-CPA secure even when encryption is performed under a pk output by OGen, and even if the adversary knows the randomness used by OGen to generate that specific key.
<ol class="org-ol">
<li>For all \(m\): let \(b\) be a random bit, \(m_0 = m\), \(m_1\) a uniformly random message and \(pk \leftarrow OGen(r)\); then there exists no PPT algo D s.t. \(D(r, Enc(pk, m_b)) = b\) with prob significantly larger than 1/2.
<ul class="org-ul">
<li>This can be proven using property 1, 2 and that PKE is IND-CPA</li>
<li>Not all functions admit an oblivious sampler; see <a href="http://cs.au.dk/~orlandi/asiacrypt-draft.pdf">http://cs.au.dk/~orlandi/asiacrypt-draft.pdf</a></li>
<li>Think of a pseudorandom generator PRG, which expands n-bit long seed s into a 2n bit long string y=PRG(s). A trivial sampler for PRG is the identity function which maps 2n bit strings into 2n bit strings. This function is invertible and the security of the PRG implies that the output of the PRG is indistinguishable from uniformly random 2n-bit strings.</li>
<li>Given the description of a group \((G,g,q)\) where the DDH assumption is believed to hold, with \(g\) a generator of \(G\) and \(q\) the order of \(G\), ElGamal is described:
<ul class="org-ul">
<li><b>Gen(sk):</b> Input secret key \(sk = \alpha \in Z_q\), compute \(h = g^{\alpha}\) and output \(pk = (g,h)\)</li>
<li><b>\(Enc_{pk}(m)\):</b> sample random \(r \in Z_q\) and output \(C = (g^r, h^r \cdot m)\) (standard ElGamal encryption, left implicit above).</li>
<li><b>\(Dec_{sk}(C)\):</b> parse C as \((c_1, c_2)\). Output \(m = c_2 \cdot c_1^{-\alpha}\).</li>
</ul></li>
<li>We thus need to construct \(OGen(r)\) which outputs a \(pk = (g,h)\), is invertible and is indistinguishable from \(Gen\). Pick a group where DDH is assumed to be hard, e.g. the multiplicative subgroup of order \(q\) of \(Z^*_p\), where \(p = 2q+1\). To generate a random element, use the random string \(r\) to sample \(s\) between \(1\) and \(p\) and output \(h = s^2 \bmod p\). This process is invertible, since it is easy to compute square roots modulo a prime, and we can check that \(h\) is distributed uniformly among the elements of order \(q\), as required (a toy instantiation is sketched below).</li>
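<li>A toy instantiation of Gen/Enc/Dec together with OGen and its inverse, over the subgroup of squares modulo a small safe prime (the parameters are illustrative only, far from secure sizes):
<pre class="src src-python">
import random

q = 11
p = 2 * q + 1          # safe prime, p = 23; note p = 3 (mod 4)
g = 4                  # 2^2 generates the order-q subgroup of squares mod p

def gen(sk):
    """Regular ElGamal key generation: pk = (g, h) with h = g^sk."""
    return (g, pow(g, sk, p))

def enc(pk, m):
    """ElGamal encryption of a subgroup element m."""
    _, h = pk
    r = random.randrange(1, q)
    return (pow(g, r, p), pow(h, r, p) * m % p)

def dec(sk, c):
    """m = c2 * c1^(-sk); modular inverse via negative exponent (Python 3.8+)."""
    c1, c2 = c
    return c2 * pow(c1, -sk, p) % p

def ogen(r):
    """Oblivious key generation: h = r^2 mod p is a random square whose
    discrete log w.r.t. g stays unknown."""
    return (g, pow(r, 2, p))

def ogen_inv(pk):
    """Invert OGen: since p = 3 (mod 4), h^((p+1)/4) is a square root of h."""
    _, h = pk
    return pow(h, (p + 1) // 4, p)

# Obliviously generated keys can be 'explained', and so can regular ones:
pk = ogen(random.randrange(1, p))
assert ogen(ogen_inv(pk)) == pk
pk = gen(random.randrange(1, q))
assert ogen(ogen_inv(pk)) == pk
</pre>
</li>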
<h4id="org64a9b91"><spanclass="section-number-4">2.4.2</span> Note on ZK</h4>
<divclass="outline-text-4"id="text-2-4-2">
<ulclass="org-ul">
<li>Use ideal ZK-functionalities, i.e. a model where parties have access to ideal boxes which, on common input \(x\) and private input \(w\) from one of the parties, output \(R(x,w)\). This kind of box can in practice be replaced with any of the ZK protocols for NP-complete languages from CPT (see the sketch below).
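<li>In code, such an ideal box is just a trusted evaluation of the relation, which never leaks the witness (sketch; the example relation in the comment uses the hypothetical Com/OGen names from the compiler below):
<pre class="src src-python">
def zk_box(relation, x, w):
    """Ideal ZK functionality: the verifier learns only R(x, w), never w."""
    return bool(relation(x, w))

# Example use (hypothetical ogen):
#   zk_box(lambda x, w: x[0] == ogen(w) or x[1] == ogen(w), (pk0, pk1), r)
</pre>
</li>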
<h4id="orgcd3dd29"><spanclass="section-number-4">2.4.3</span> The compiler</h4>
<divclass="outline-text-4"id="text-2-4-3">
<ulclass="org-ul">
<li>We can build passive OT using any PKE scheme with OGen
<ulclass="org-ul">
<li>Is only passive though, as Alice could just sample both PKs using Gen. Bob would not know this, as OGen and Gen should be indistinguishable from each other.</li>
</ul></li>
<li>A first attempt: simply add a ZK proof that some \(r\) was used to generate one of the keys.
<ul class="org-ul">
<li>\(x = (pk_0,pk_1)\), \(w=r\), the relation would accept if \((pk_0 = OGen(r) \text{ or } pk_1 = OGen(r))\). This can be attacked, by computing \(pk_0 = Gen(sk_0), pk_1 = Gen(sk_1), r=OGen^{-1}(pk_0)\) s.t. \(pk_0 = OGen(r)\), and then use \(r\) for the ZK proof.</li>
</ul></li>
<li>The issue arises because Alice can choose her own randomness; on the other hand, we cannot let Bob choose it for her either.</li>
<li><b>Coin flip into the well</b> is a fix for this. Essentially, it allows Bob to participate in choosing the randomness, without having access to the end result. Bob will choose some \(r_B\) and Alice will choose some \(r_A\), and these are XORed once Bob sends \(r_B\) to Alice (a sketch follows after the list):
<ol class="org-ol">
<li>Alice chooses a random string \(r_A\) and randomness \(s\) for the commitment scheme. She computes \(c = Com(r_A, s)\) and sends \(c\) to Bob.</li>
<li>Bob chooses random \(r_B\) and sends this to Alice.</li>
<li>Alice computes \(r = r_A \oplus r_B\) and keeps \(s\); Bob ends up with \((c,r_B)\).</li>
</ol></li>
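<li>A sketch of the coin-flip, with a hash-based commitment standing in for \(Com\) (an illustrative assumption; any hiding and binding commitment works):
<pre class="src src-python">
import hashlib
import os

def com(value, s):
    """Com(value, s), modeled here as a hash commitment (illustration only)."""
    return hashlib.sha256(s + value).digest()

def coin_flip_into_the_well(n=16):
    # 1. Alice commits to r_A and sends c to Bob.
    r_A, s = os.urandom(n), os.urandom(32)
    c = com(r_A, s)
    # 2. Bob picks r_B and sends it to Alice.
    r_B = os.urandom(n)
    # 3. Alice computes r; Bob only ever holds (c, r_B) and never sees r.
    r = bytes(a ^ b for a, b in zip(r_A, r_B))
    return r, (r_A, s), (c, r_B)   # Alice's output, her opening, Bob's transcript
</pre>
</li>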
<li>Finally, \(r\) is random; \(c\) is hiding, so Bob cannot base \(r_B\) on \(r_A\); and \(c\) is also binding, so Alice cannot choose a new \(r_A\) afterwards.</li>
<li>Now Alice can compute \(pk_b = Gen(sk)\), \(pk_{1-b} = OGen(r)\) using the \(r\) from the coinflipping protocol. Alice will then send \((pk_0, pk_1)\) to Bob and Alice and Bob will then use ZK box for the following relation: \(x = (pk_0, pk_1, c, r_B)\), witness \(w = (r,s)\) and the relation outputs \(1\) if: \(c = Com(r \oplus r_B, s) \text{ and } (pk_0 = OGen(r) \text{ or } pk_1 = OGen(r))\).</li>
<li>This addition suffices to make the aforementioned protocol actively secure. It is not always enough though, as such, a third step might be required.</li>
<li>So the protocol simply runs two copies of the original protocol <i>using the same inputs</i>. This protocol is still secure against passive corruption. If we simply compile this the same way as before (so only two steps), we get something which is not actively secure:
<olclass="org-ol">
<li>Run coin-flip proto; Alice gets \((r,s)\), parse \(r = (r^1, r^2)\), bob gets \((c, r_B)\)</li>
<li>Alice gens \(pk^1_{1-b} = OGen(r^1)\) (and \(pk^1_b\) via Gen), sends \((pk^1_0, pk^1_1)\) to Bob.</li>
<li>Alice and Bob run a ZK proof where Alice proves that \((pk^1_0 = OGen(r^1) \text{ or } pk^1_1 = OGen(r^1))\)</li>
<li>Bob computes and sends \((e^1_0,e^1_1)\)</li>
<li>Alice gens \(pk^2_{1-b} = OGen(r^2)\) (and \(pk^2_b\) via Gen), sends \((pk^2_0, pk^2_1)\) to Bob.</li>
<li>Alice and Bob run a ZK proof where Alice proves that \((pk^2_0 = OGen(r^2) \text{ or } pk^2_1 = OGen(r^2))\)</li>
<li>Bob computes and sends \((e^2_0,e^2_1)\)</li>
</ol></li>
<li>Clearly not secure, as Alice can change which bit she is interested in between steps 2 and 5. This results in her having Bob encrypt both \(m_0\) and \(m_1\) under keys whose secret keys she knows, so she can learn both.</li>
<li>This is fixed by having Alice also commit to her input bit.
<ol class="org-ol">
<li>Alice with input \(b\) chooses random \(r_A,s,t\), computes \(c = Com(r_A,s)\) and \(d = Com(b,t)\). Sends \((c,d)\) to Bob.</li>
<li>Alice and Bob perform a ZK proof where \(x = (c,d)\), \(w = (b,r_A,s,t)\) and \(R(x,w)=1\) if \(c = Com(r_A,s) \text{ and } d = Com(b,t)\) (both commitments must verify, so that both \(b\) and \(r_A\) are fixed).</li>
<li>Bob chooses \(r_B\) and sends it to Alice</li>
<li>Alice computes \(r = r_A \oplus r_B\), chooses random \(sk\) and computes \(pk_b = Gen(sk)\) and \(pk_{1-b} = OGen(r)\). Sends \((pk_0,pk_1)\) to Bob</li>
<li>Alice and Bob perform another ZK proof; \(x = (c,d,r_B,pk_0,pk_1)\), \(w = (b,r_A,s,t)\), \(R(x,w) = 1\) if: \(c = Com(r \oplus r_B, s) \text{ and } d = Com(b,t) \text{ and } pk_{1-b} = OGen(r_A \oplus r_B)\). Note that the final condition can be checked since \(b\) and \(r_A\) are part of the witness (and in the hybrid model the simulator learns this witness).</li>
<li>Bob sends \((e_0,e_1)\) to Alice, \(e_i = Enc(pk_i,m_i)\).</li>
<li>Alice outputs \(m_b = Dec(sk, e_b)\).</li>
</ol></li>
<li>The protocol is proven secure against an actively corrupted Alice by constructing a simulator. Remember that in the hybrid model the simulator simulates all calls to the ZK box. In other words, every time Alice inputs something to the ZK box the simulator learns w. The simulator uses the corrupted A as a subroutine and works as follows:
<olclass="org-ol">
<li>S receives \((c,d)\) from A</li>
<li>S receives \(w = (b,r_A,s,t)\) from A</li>
<li>S sends random \(r_B\) to A</li>
<li>S receives \((pk_0, pk_1)\) from A</li>
<li>S receives \(w' = (b', r'_A, s', t')\) from A</li>
<li>If \(w' \neq w\) or \(pk_{1-b} \neq OGen(r_A \oplus r_B)\) (which we have access to through the witness), then abort. O.w. sim inputs \(b\) to the ideal func and learns \(m_b\).</li>
<li>The sim computes \(e_b = Enc(pk_b, m_b)\) and \(e_{1-b} = Enc(pk_{1-b}, 0)\) (the sim does not know the other message, but since the encryption scheme is IND-CPA secure this goes unnoticed) and outputs the view for Alice \((b,t,r,s,r_B,e_0,e_1)\).</li>
</ol></li>
<li>Informally, there are only two differences between the simulated view and Alice's view in the real protocol:
<ol class="org-ol">
<li>The simulation aborts if \(w' \neq w\), while the real proto continues as long as \(Com(b',t') = Com(b,t)\) and \(Com(r'_A,s') = Com(r_A,s)\). However, if this happens in the protocol, then A can be used to break the binding property of \(Com\), thus reaching a contradiction with \(Com\) being binding.</li>
<li>In the sim, \(e_{1-b} = Enc(pk_{1-b},0)\), while in the proto \(e_{1-b} = Enc(pk_{1-b}, m_{1-b})\). If the distinguisher succeeds, then we can break the IND-CPA (chosen plaintext attack) security of the underlying scheme.
<olclass="org-ol">
<li>Receive pk from the ind-cpa challenger</li>
<li>Query the ind-cpa challenger with messages (case 0) \(m_{1-b}\) and (case 1) \(0\) and receive a challenge text \(e^*\)</li>
<li>Construct a simulated view where \(r_B = r_A \oplus OGen^{-1}(pk)\) and \(e_{1-b} = e^*\) and give these to the distinguisher. If the distinguisher thinks the view is from a real proto, guess case 0, o.w. guess case 1.</li>
<h4id="org7dafa5b"><spanclass="section-number-4">2.5.2</span> The active secure OT proto</h4>
<divclass="outline-text-4"id="text-2-5-2">
<ulclass="org-ul">
<li>We assume the existence of a trusted dealer who can output a common random string to both parties. This dealer can be replaced with a secure coin-flipping protocol</li>
<li><b>Setup</b>: Use a group \((G,g,q)\) of order \(q\), generated by \(g\). The Trusted Dealer outputs four random group elements \(crs = (g_0,h_0,g_1,h_1)\) to Alice and Bob</li>
<li><b>Choose</b>: Alice with input \(b \in \{0,1\}\), samples \(x \in_R Z_q\), computes \((u,v) = (g^x_b, h^x_b)\), sends \((u,v)\) to Bob.</li>
<li><b>Transfer</b>: Bob with \(m_0, m_1 \in G\) defines \(pk_0 = (g_0, h_0, u,v)\) and \(pk_1 = (g_1,h_1,u,v)\), samples \(r_0,s_0,r_1,s_1 \in_R Z_q\) and computes: \(e_0 = Enc(pk_0,m_0;r_0,s_0)\) and \(e_1 = Enc(pk_1,m_1;r_1,s_1)\) and sends these to Alice</li>
<li><b>Retrieve</b>: Alice computes \(m_b = Dec(x,e_b)\) (the full flow is sketched at the end of this section).</li>
<li>Neither Alice nor Bob can determine if \((g_0,h_0,g_1,h_1)\) is a DDH tuple or not</li>
<li>if \((g_0,h_0,g_1,h_1)\) is not a DDH tuple, then at most one of \(pk_0,pk_1\) is a DDH tuple, which implies that the encryption using the wrong key is information-theoretically secure.</li>
<li>If \((g_0,h_0,g_1,h_1)\) is a DDH tuple, then \((u,v)\) perfectly hides choice bit \(b\); if \((g_0,h_0,g_1,h_1)\) is a DDH tuple, then \(h_0 = g^\alpha_0\) and \(h_1 = g^\alpha_1\), with the same \(\alpha\), meaning that given \((u,v)\) there exists \(x_0,x_1\) s.t. \((u,v) = (g^{x_0}_0,h^{x_0}_0) = (g^{x_1}_1,h^{x_1}_1)\)</li>
<li>When Alice is corrupted: the sim chooses \(crs\) as a non-DDH tuple; it gens \((g_0,h_0,g_1,h_1)\) s.t. \(h_0 = g^{\alpha_0}_0\), \(h_1 = g^{\alpha_1}_1\) with \(\alpha_0 \neq \alpha_1\).</li>
<li>The sim extracts input bit of Alice from \((u,v)\) by finding \(b\) s.t. \(v = u^{\alpha_b}\) and if no such \(b\) exists s.t. the relation holds, set \(b=0\).</li>
<li>The sim inputs \(b\) to the ideal func and learns \(m_b\), defines \(m_{1-b} = 0\). Sim sends \((Enc(pk_0,m_0), Enc(pk_1,m_1))\) to corrupt Alice</li>
</ol></li>
<li>This sim is statistically indistinguishable from the protocol, as a uniformly random \(crs\) is not a DDH tuple except with negligible probability, in which case encryptions under \(pk_{1-b}\) are perfectly secure</li>
<li>When Bob is corrupted:
<olclass="org-ol">
<li>The sim choose \(crs\) as a DDH tuple, it gens \((g_0,h_0,g_1,h_1)\) s.t. \(h_0 = g^{\alpha}_0\), \(h_1 = g^{\alpha}_1\), and $g<sub>1</sub> = g^β<sub>0</sub></li>
<li>The sim computes random \(x_1\) and \((u,v) = (g^{x_1}_1, h^{x_1}_1)\). The sim also defines \(x_0 = x_1 \cdot \beta\), now \((u,v) = (g^{x_0}_0, h^{x_0}_0)\) due to the definition of \(x_0\).</li>
<li>The sim receives \((e_0,e_1)\) from corrupt Bob, computes \(m_0 = Dec(x_0,e_0)\) and \(m_1 = Dec(x_1, e_1)\) and inputs them to the ideal func.</li>
</ol></li>
<li>View of Bob in sim is computationally indistinguishable from proto, only diff is the distribution of the crs.</li>
<li>Showing that the output of the honest Alice in the sim is identical to the output of the honest Alice in the proto is left out of the proof.</li>
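<li>The underlying encryption is left implicit above; the following sketch assumes the standard "double ElGamal" \(Enc((g,h,u,v), m; r,s) = (g^r h^s,\ u^r v^s \cdot m)\) with \(Dec(x, (c_1,c_2)) = c_2 \cdot c_1^{-x}\), over a toy group of illustrative size:
<pre class="src src-python">
import random

q = 11
p = 2 * q + 1          # toy safe prime; the group is the order-q subgroup of squares mod p

def rand_elem():
    return pow(random.randrange(1, p), 2, p)

def enc(pk, m, r, s):
    """Assumed 'double ElGamal': pk = (g, h, u, v), ciphertext (g^r h^s, u^r v^s m)."""
    g, h, u, v = pk
    return (pow(g, r, p) * pow(h, s, p) % p,
            pow(u, r, p) * pow(v, s, p) * m % p)

def dec(x, c):
    c1, c2 = c
    return c2 * pow(c1, -x, p) % p

# Setup: the trusted dealer outputs a common random string.
g0, h0, g1, h1 = rand_elem(), rand_elem(), rand_elem(), rand_elem()

def choose(b):
    """Alice: (u, v) = (g_b^x, h_b^x)."""
    x = random.randrange(1, q)
    gb, hb = (g0, h0) if b == 0 else (g1, h1)
    return x, (pow(gb, x, p), pow(hb, x, p))

def transfer(uv, m0, m1):
    """Bob: encrypt m_i under pk_i = (g_i, h_i, u, v) with fresh (r_i, s_i)."""
    u, v = uv
    e0 = enc((g0, h0, u, v), m0, random.randrange(1, q), random.randrange(1, q))
    e1 = enc((g1, h1, u, v), m1, random.randrange(1, q), random.randrange(1, q))
    return e0, e1

# Retrieve: Alice decrypts only e_b.
b, m0, m1 = 1, rand_elem(), rand_elem()
x, uv = choose(b)
e0, e1 = transfer(uv, m0, m1)
assert dec(x, (e0, e1)[b]) == (m0, m1)[b]
</pre>
</li>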