The Pretty-Good-Measurement is not Optimal
The pretty-good-measurement is useful when we have an ensemble that we don’t understand very well and we need to distinguish the states in the ensemble with some success probability. (Due to De Huang)
[ { "context": "We have\n\n\\[\n\\rho = \\frac{1}{3} (\\rho_0 + \\rho_1 + \\rho_2) = \\frac{1}{2} I, \\quad \\rho^{-\\frac{1}{2}} = \\sqrt{2} I,\n\\]\n\nthus the pretty-good-measurement \\(\\{M_0, M_1, M_2\\}\\) is given by\n\n\\[\nM_0 = \\frac{1}{3} \\rho^{-\\frac{1}{2}} \\rho_0 \\rho^{-\\frac{1}{2}} = \\frac{2}{3} |0\\rangle \\langle 0|, \\quad M_1 = \\frac{1}{3} \\rho^{-\\frac{1}{2}} \\rho_1 \\rho^{-\\frac{1}{2}} = \\frac{1}{3} I, \\quad M_2 = \\frac{1}{3} \\rho^{-\\frac{1}{2}} \\rho_2 \\rho^{-\\frac{1}{2}} = \\frac{2}{3} |1\\rangle \\langle 1|,\n\\]\n\nand the success probability using this measurement is\n\n\\[\np_{\\text{good}} = \\frac{1}{3} (\\text{tr}(M_0 \\rho_0) + \\text{tr}(M_1 \\rho_1) + \\text{tr}(M_2 \\rho_2)) = \\frac{5}{9}.\n\\]", "question": "### (a) Suppose Alice sends Bob one of the three states \\( \\rho_0 = |0\\rangle \\langle 0|, \\rho_1 = \\frac{1}{2} I, \\rho_2 = |1\\rangle \\langle 1| \\) with equal probability. Bob wants to figure out which state Alice sent. Compute the success probability achieved by Bob if he uses the pretty-good-measurement." }, { "context": "Let \\(\\sigma^* = \\frac{1}{3} I\\), then it's easy to check that\n\n\\[\n\\sigma^* \\geq \\frac{1}{3} |0\\rangle \\langle 0| = p_0 \\rho_0, \\quad \\sigma^* \\geq \\frac{1}{3} \\times \\frac{1}{2} I = p_1 \\rho_1, \\quad \\sigma^* \\geq \\frac{1}{3} |1\\rangle \\langle 1| = p_2 \\rho_2,\n\\]\n\ntherefore\n\n\\[\nP_{\\text{guess}} = \\inf_{\\sigma \\geq p_i \\rho_i, i=0,1,2} \\text{tr}(\\sigma) \\leq \\text{tr}(\\sigma^*) = \\frac{2}{3}.\n\\]\n\n\\(\\frac{2}{3}\\) is an upper bound of the guessing probability. We will show that this is actually the maximum of the guessing probability. 
Indeed, consider the POVM \\(\\{M_0^*, M_1^*, M_2^*\\}\\) given by\n\n\\[\nM_0^* = |0\\rangle \\langle 0|, \\quad M_1^* = 0, \\quad M_2^* = |1\\rangle \\langle 1|.\n\\]\n\nWe can check that this is a legal POVM, and the success probability using this POVM is\n\n\\[\np_{\\text{succ}}^* = \\frac{1}{3} (\\text{tr}(M_0^* \\rho_0) + \\text{tr}(M_1^* \\rho_1) + \\text{tr}(M_2^* \\rho_2)) = \\frac{2}{3}.\n\\]\n\nThus we have\n\n\\[\n\\frac{2}{3} \\geq P_{\\text{guess}} \\geq p_{\\text{succ}}^* = \\frac{2}{3},\n\\]\n\nwhich implies \\(P_{\\text{guess}} = \\frac{2}{3}\\).", "question": "### (b) In Homework 5, problem 1(h), you showed the following formulation of the guessing probability:\n\n\\[\nP_{\\text{guess}}(X | E) = \\inf_{\\sigma:\\, \\sigma \\geq p_i \\rho_i \\;\\forall i} \\text{Tr}\\, \\sigma,\n\\]\n\nwhere the infimum is over positive semidefinite operators \\(\\sigma\\). Use this formulation to upper bound the guessing probability for the ensemble of part (a)." }, { "context": "Already done in (b).", "question": "### (c) Notice that there is a gap between the success probability calculated in parts (a) and (b). Find a measurement whose success probability matches the bound from part (b)." } ]
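The computations above are easy to verify numerically. The following sketch (not part of the original solution; variable names are ours) builds the pretty-good measurement and the optimal POVM with numpy and reproduces \( p_{\text{good}} = 5/9 \) and \( P_{\text{guess}} = 2/3 \):

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
states = [np.outer(ket0, ket0), np.eye(2) / 2, np.outer(ket1, ket1)]
priors = [1 / 3, 1 / 3, 1 / 3]

# Ensemble average is rho = I/2; rho^(-1/2) is computed generically via the
# eigendecomposition so other full-rank ensembles can be plugged in.
rho = sum(p * s for p, s in zip(priors, states))
w, V = np.linalg.eigh(rho)
rho_inv_sqrt = V @ np.diag(w ** -0.5) @ V.T

# Pretty-good measurement M_i = rho^(-1/2) p_i rho_i rho^(-1/2)
pgm = [p * rho_inv_sqrt @ s @ rho_inv_sqrt for p, s in zip(priors, states)]
assert np.allclose(sum(pgm), np.eye(2))  # the M_i form a valid POVM

p_pgm = sum(p * np.trace(M @ s) for p, M, s in zip(priors, pgm, states))

# POVM from part (c): always guess rho_0 or rho_2, never rho_1
opt = [np.outer(ket0, ket0), np.zeros((2, 2)), np.outer(ket1, ket1)]
p_opt = sum(p * np.trace(M @ s) for p, M, s in zip(priors, opt, states))

print(p_pgm, p_opt)  # 0.5555... (= 5/9) and 0.6666... (= 2/3)
```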
2016-11-18T00:00:00
Thomas Vidick, Andrea Coladangelo, Jalex Stark, Charles Xu
Caltech
Properties of the Pretty-Good-Measurement
This problem is adapted from a StackExchange answer by Norbert Schuch ([physics.stackexchange.com/questions/245274](https://physics.stackexchange.com/questions/245274/probability-distribution-of-a-pretty-good-measurement)). That post is not an allowed resource for this problem. When we make a pretty-good measurement to distinguish the ensemble \( \rho = \sum_i p_i \rho_i \), we associate to each \( \rho_i \) a measurement operator \( M_i = \rho^{-\frac{1}{2}} p_i \rho_i \rho^{-\frac{1}{2}} \). We think of \( M_i \) as being “well-fitted” to the state \( \rho_i \), in the sense that when the measurement yields outcome \( i \), we conclude that \( \rho_i \) is the most likely state. This may lead us to believe that \( \rho_i \) is “well-fitted” to \( M_i \) in the sense that it is the state for which the measurement is most likely to yield outcome \( i \). In other words, we may like to believe the following inequality: \[ \text{Tr}(M_i \rho_i) \geq \text{Tr}(M_i \rho_k) \] (2) for any \( i \) and \( k \). (Due to De Huang)
[ { "context": "Suppose \\( \\rho = p_0 \\rho_0 + p_1 \\rho_1 \\), \\( p_0, p_1 \\geq 0 \\), \\( p_0 + p_1 = 1 \\), then the pretty-good-measurement is given by\n\\[ M_0 = \\rho^{-\\frac{1}{2}} p_0 \\rho_0 \\rho^{-\\frac{1}{2}}, \\quad M_1 = \\rho^{-\\frac{1}{2}} p_1 \\rho_1 \\rho^{-\\frac{1}{2}}. \\]\n\nWe only need to show that\n\\[ \\text{tr}(M_0 \\rho_0) \\geq \\text{tr}(M_0 \\rho_1), \\quad \\text{tr}(M_1 \\rho_1) \\geq \\text{tr}(M_1 \\rho_0). \\]\n\nDefine\n\\[ a = \\text{tr}(\\rho^{-\\frac{1}{2}} \\rho_0 \\rho^{-\\frac{1}{2}} \\rho_0), \\quad b = \\text{tr}(\\rho^{-\\frac{1}{2}} \\rho_1 \\rho^{-\\frac{1}{2}} \\rho_1), \\quad c = \\text{tr}(\\rho^{-\\frac{1}{2}} \\rho_0 \\rho^{-\\frac{1}{2}} \\rho_1). \\]\n\nIt's easy to check that \\( a, b, c \\geq 0 \\), and we have\n\\[\n\\begin{cases}\np_0 a + p_1 c = \\text{tr}(\\rho^{-\\frac{1}{2}} \\rho_0 \\rho^{-\\frac{1}{2}} (p_0 \\rho_0 + p_1 \\rho_1)) = \\text{tr}(\\rho^{-\\frac{1}{2}} \\rho_0 \\rho^{-\\frac{1}{2}} \\rho) = \\text{tr}(\\rho_0) = 1, \\\\\np_0 c + p_1 b = \\text{tr}(\\rho^{-\\frac{1}{2}} (p_0 \\rho_0 + p_1 \\rho_1) \\rho^{-\\frac{1}{2}} \\rho_1) = \\text{tr}(\\rho^{-\\frac{1}{2}} \\rho \\rho^{-\\frac{1}{2}} \\rho_1) = \\text{tr}(\\rho_1) = 1,\n\\end{cases}\n\\]\n\\[ \\implies p_0 (a - c) = p_1 (b - c). \\]\n\nAlso, using the Cauchy-Schwarz inequality \\( (\\text{tr}(A^\\dagger B))^2 \\leq \\text{tr}(A^\\dagger A)\\, \\text{tr}(B^\\dagger B) \\) with the Hermitian matrices \\( A = \\rho^{-\\frac{1}{4}} \\rho_0 \\rho^{-\\frac{1}{4}} \\) and \\( B = \\rho^{-\\frac{1}{4}} \\rho_1 \\rho^{-\\frac{1}{4}} \\), we have\n\\[ c^2 = \\left( \\text{tr}\\big( (\\rho^{-\\frac{1}{4}} \\rho_0 \\rho^{-\\frac{1}{4}})(\\rho^{-\\frac{1}{4}} \\rho_1 \\rho^{-\\frac{1}{4}}) \\big) \\right)^2 \\leq \\text{tr}\\big( (\\rho^{-\\frac{1}{4}} \\rho_0 \\rho^{-\\frac{1}{4}})^2 \\big) \\, \\text{tr}\\big( (\\rho^{-\\frac{1}{4}} \\rho_1 \\rho^{-\\frac{1}{4}})^2 \\big) = \\text{tr}(\\rho^{-\\frac{1}{2}} \\rho_0 \\rho^{-\\frac{1}{2}} \\rho_0) \\, \\text{tr}(\\rho^{-\\frac{1}{2}} \\rho_1 \\rho^{-\\frac{1}{2}} \\rho_1) = ab. \\]\n\nThat is to say, at least one of \\( a \\geq c \\) and \\( b \\geq c \\) holds (otherwise we would have \\( c^2 > ab \\)). Since the two sides of \\( p_0 (a - c) = p_1 (b - c) \\) have the same sign, we must have\n\\[ p_0 (a - c) = p_1 (b - c) \\geq 0. \\]\n\nTherefore\n\\[ \\text{tr}(M_0 \\rho_0) = p_0 \\, \\text{tr}(\\rho^{-\\frac{1}{2}} \\rho_0 \\rho^{-\\frac{1}{2}} \\rho_0) = p_0 a \\geq p_0 c = p_0 \\, \\text{tr}(\\rho^{-\\frac{1}{2}} \\rho_0 \\rho^{-\\frac{1}{2}} \\rho_1) = \\text{tr}(M_0 \\rho_1), \\]\n\\[ \\text{tr}(M_1 \\rho_1) = p_1 \\, \\text{tr}(\\rho^{-\\frac{1}{2}} \\rho_1 \\rho^{-\\frac{1}{2}} \\rho_1) = p_1 b \\geq p_1 c = p_1 \\, \\text{tr}(\\rho^{-\\frac{1}{2}} \\rho_1 \\rho^{-\\frac{1}{2}} \\rho_0) = \\text{tr}(M_1 \\rho_0). \\]", "question": "### (a) Prove inequality (2) for the case where the ensemble has only two states."
}, { "context": "In this case, we have\n\\[ \\rho = \\frac{2}{5} \\rho_0 + \\frac{2}{5} \\rho_1 + \\frac{1}{5} \\rho_2 = \\frac{1}{5} \\begin{pmatrix} 2 & 0 \\\\ 0 & 3 \\end{pmatrix}, \\quad \\rho^{-\\frac{1}{2}} = \\sqrt{5} \\begin{pmatrix} \\frac{1}{\\sqrt{2}} & 0 \\\\ 0 & \\frac{1}{\\sqrt{3}} \\end{pmatrix}, \\]\nand the pretty-good-measurement is given by\n\\[ M_0 = \\begin{pmatrix} \\frac{2}{3} & 0 \\\\ 0 & \\frac{2}{9} \\end{pmatrix}, \\quad M_1 = \\begin{pmatrix} \\frac{1}{3} & 0 \\\\ 0 & \\frac{4}{9} \\end{pmatrix}, \\quad M_2 = \\begin{pmatrix} 0 & 0 \\\\ 0 & \\frac{1}{3} \\end{pmatrix}. \\]\n\nWe can check that\n\\[ \\text{tr}(M_1 \\rho_1) = \\frac{1}{3} \\times \\left( \\frac{1}{3} + \\frac{8}{9} \\right) = \\frac{11}{27}, \\quad \\text{tr}(M_1 \\rho_2) = \\frac{4}{9} = \\frac{12}{27}, \\]\n\n\\[ \\text{tr}(M_1 \\rho_1) < \\text{tr}(M_1 \\rho_2), \\]\n\nwhich violates inequality (2).", "question": "### (b) Let \\( \\rho_0 = \\frac{1}{3} \\begin{pmatrix} 2 & 0 \\\\ 0 & 1 \\end{pmatrix} \\), \\( \\rho_1 = \\frac{1}{3} \\begin{pmatrix} 1 & 0 \\\\ 0 & 2 \\end{pmatrix} \\), and \\( \\rho_2 = \\begin{pmatrix} 0 & 0 \\\\ 0 & 1 \\end{pmatrix} \\). Consider the ensemble \\( \\rho = \\frac{2}{5} \\rho_0 + \\frac{2}{5} \\rho_1 + \\frac{1}{5} \\rho_2 \\). Show that inequality (2) is not satisfied." } ]
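The counterexample in (b) can likewise be checked numerically. This is a small sketch (not part of the original solution) using the problem's stated ensemble; all matrices are diagonal, so the inverse square root is elementwise:

```python
import numpy as np

rho0 = np.diag([2.0, 1.0]) / 3
rho1 = np.diag([1.0, 2.0]) / 3
rho2 = np.diag([0.0, 1.0])
states = [rho0, rho1, rho2]
priors = [2 / 5, 2 / 5, 1 / 5]

rho = sum(p * s for p, s in zip(priors, states))   # diag(2/5, 3/5)
rho_inv_sqrt = np.diag(np.diag(rho) ** -0.5)       # rho is diagonal here

M = [p * rho_inv_sqrt @ s @ rho_inv_sqrt for p, s in zip(priors, states)]
assert np.allclose(sum(M), np.eye(2))              # valid POVM

t11 = np.trace(M[1] @ rho1)    # = 11/27
t12 = np.trace(M[1] @ rho2)    # = 12/27
print(t11 < t12)               # True: inequality (2) fails for i=1, k=2
```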
Deterministic Extractors on Bit-Fixing Sources
We saw in the edX lecture notes that no deterministic function can serve as an extractor for all random sources of a given length. However, this doesn’t rule out the possibility that a deterministic extractor can work for some restricted class of sources. (Due to Bolton Bailey)
[ { "context": "The min-entropy of \\( X \\) is defined as\n\n\\[ H_{\\min}(X) = -\\log \\max_x p_x \\]\n\nSince each of the last \\( n - t \\) bits of \\( X_0 \\) is uniformly random and independent of the other bits, there are \\( 2^{n-t} \\) equally likely outcomes for the distribution \\( X_0 \\), so\n\n\\[ H_{\\min}(X_0) = -\\log \\max \\frac{1}{2^{n-t}} = -\\log \\frac{1}{2^{n-t}} = n - t \\]\n\nFor \\( X_1 \\), there are \\( 2^{n-1} \\) strings of length \\( n - 1 \\), and for each of these, there is exactly one bit we can append to get a string with an even number of 0s. Thus \\( X_1 \\) has \\( 2^{n-1} \\) equally likely outcomes.\n\n\\[ H_{\\min}(X_1) = -\\log \\max \\frac{1}{2^{n-1}} = -\\log \\frac{1}{2^{n-1}} = n - 1 \\]\n\nFor \\( X_2 \\), there are \\( 2^{n/2} \\) possibilities for the first half of the string, and since the first half of the string determines the second half, there are \\( 2^{n/2} \\) equally likely outcomes.\n\n\\[ H_{\\min}(X_2) = -\\log \\max \\frac{1}{2^{n/2}} = -\\log \\frac{1}{2^{n/2}} = n/2 \\]", "question": "### (a) Fix an even integer \\( n \\) and integer \\( t < n \\). Consider the following sources.\n- \\( X_0 \\) is all 1s on the first \\( t \\) bits and uniformly random on the last \\( n - t \\) bits.\n- \\( X_1 \\) is uniformly random over the set of strings with an even number of 0s.\n- \\( X_2 \\) is uniformly random over the set of strings where the first \\( \\frac{n}{2} \\) bits are the same as the last \\( \\frac{n}{2} \\) bits.\n\nCompute the min-entropy \\( H_{\\min}(X_i) \\) for each \\( i \\in \\{0, 1, 2\\} \\)." }, { "context": "\\( f_0(X_0) \\) is not uniformly random, since this is the XOR of \\( t \\) 1s, so this always produces \\( t \\bmod 2 \\).\n\n\\( f_0(X_1) \\) is uniformly random, since \\( t < n \\), the first \\( t \\) bits of \\( X_1 \\) are uniformly random, so their XOR is uniformly random. 
(Unless \\( t = 0 \\) in which case the output is not uniform random, it is 0)\n\n\\( f_0(X_2) \\) is uniformly random, since if \\( 1 \\leq t \\leq n/2 \\), it is the XOR of the first \\( t \\) bits of the string and if \\( n/2 \\leq t < n \\), it is the XOR of \\( n - t \\) bits of a uniform string. (Unless \\( t = 0 \\) in which case the output is not uniform random, it is 0)\n\n\\( f_1(X_0) \\) is uniformly random, since if \\( 1 \\leq t \\leq n/2 \\), then \\( x_1 x_{1+n/2} \\) is uniform random, and if \\( n/2 \\leq t < n \\), then \\( x_{n/2} x_n = x_n \\) is uniform random.\n\n\\( f_1(X_1) \\) is uniformly random only if \\( n/2 \\) is odd. If the first half of \\( x \\) has at least a 0 and at least a 1, then there is a 1/2 chance of each outcome, since if we choose the corresponding elements of the right half last, we can get either a result of 0 or 1. (TA's comment: In other words, for any possible string on the other \\( n - 4 \\) bits, we can always make the value of \\( f_1(X_1) = 1 \\) by choosing appropriately the bit in the second half that is multiplied with the 1, and fix the number of zeros to be even by choosing appropriately the bit in the second half that is multiplied with 0). If the first half is all 0s, the result will be 0. 
If the first half is all 1s, then the result is the parity of the second half, which is 1 only if \\( n/2 \\) is odd.\n\n\\( f_1(X_2) \\) is uniformly random: since \\( x_i x_{i+n/2} = x_i^2 = x_i \\), it is the XOR of a uniform random string of length \\( n/2 \\).\n\n\\( f_2(X_0) \\) is uniformly random, since the last bit is always uniformly random and independent of the previous bits.\n\n\\( f_2(X_1) \\) is not uniformly random, since if there are an even number of 0s, since \\( n \\) is even, there is an even number of 1s, so the XOR is 0.\n\n\\( f_2(X_2) \\) is not uniformly random, since the XOR of the whole string is the XOR of the first and second halves, which have the same parity, so this always results in 0.", "question": "### (b) Consider the following deterministic functions:\n- \\( f_0(x) := \\bigoplus_{i=1}^t x_i \\), the XOR of the first \\( t \\) bits of \\( x \\).\n- \\( f_1(x) := x_L \\cdot x_R = \\bigoplus_{i=1}^{\\frac{n}{2}} x_i x_{i+\\frac{n}{2}} \\), where \\( x = (x_L, x_R) \\) are the left and right halves of \\( x \\).\n- \\( f_2(x) := \\bigoplus_{i=1}^n x_i \\), the XOR of all of the bits of \\( x \\).\n\nFor which pairs \\( (i, j) \\) is \\( f_i(X_j) \\) distributed as a uniformly random bit?" }, { "context": "Consider the function \\( f \\) defined as follows: We divide the input \\( x \\) into \\( \\left\\lfloor \\frac{n}{t+1} \\right\\rfloor \\) disjoint segments each containing \\( t+1 \\) bits. There will always be enough bits to do this since\n\n\\[ \\left\\lfloor \\frac{n}{t+1} \\right\\rfloor \\leq \\frac{n}{t+1} \\]\n\n\\[ (t+1) \\cdot \\left\\lfloor \\frac{n}{t+1} \\right\\rfloor \\leq (t+1) \\cdot \\frac{n}{t+1} = n \\]\n\nIf there are leftover bits we ignore them. 
We then define \\( f(x) \\) such that the \\( i \\)-th bit of \\( f(x) \\) is equal to the XOR of the \\( i \\)-th segment (this means the outputs will have the correct number of bits, the same as the number of segments). Moreover, since Eve knows only \\( t \\) bits, and the segments are \\( t+1 \\) bits, Eve never has all the bits in a segment, and the \\( i \\)-th bit of the output is therefore independent of Eve. Since the segments are disjoint, conditioned on Eve's bits the segment XORs are jointly uniform, so the whole output is independent of Eve.", "question": "### (c) Alice and Bob share a classical secret \\( X \\in \\{0, 1\\}^n \\) generated uniformly at random. Alice and Bob make an error in their secure communication protocol and as a result, Eve learns \\( t \\) bits of \\( X \\). Give, with proof, a deterministic function \\( f \\) such that \\( f(X) \\) is uniformly random over strings of length \\( \\left\\lfloor \\frac{n}{t+1} \\right\\rfloor \\) and \\( f(X) \\) is independent of Eve." } ]
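Parts (a) and (b) can be brute-forced for small parameters. The sketch below (our choice of n = 6, t = 3, so that n is even, 0 < t < n, and n/2 is odd) enumerates each source exactly and checks which f_i(X_j) is a perfectly uniform bit:

```python
from itertools import product

n, t = 6, 3   # example parameters: n even, 0 < t < n, and n/2 odd

def f0(x):    # XOR of the first t bits
    return sum(x[:t]) % 2

def f1(x):    # inner product of the two halves, mod 2
    return sum(a * b for a, b in zip(x[:n // 2], x[n // 2:])) % 2

def f2(x):    # XOR of all bits
    return sum(x) % 2

X0 = [x for x in product((0, 1), repeat=n) if all(x[:t])]
X1 = [x for x in product((0, 1), repeat=n) if x.count(0) % 2 == 0]
X2 = [x for x in product((0, 1), repeat=n) if x[:n // 2] == x[n // 2:]]

def is_uniform(f, source):   # exactly half of the strings map to 1
    return 2 * sum(f(x) for x in source) == len(source)

table = {(i, j): is_uniform(f, S)
         for i, f in enumerate((f0, f1, f2))
         for j, S in enumerate((X0, X1, X2))}
print(table)  # only (0,0), (2,1), (2,2) are False for these parameters
```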
No Chain Rule for Conditional Min-Entropy
Recall the definition of conditional Shannon entropy. \[ H(Y \mid X) = \sum_x \Pr[X = x] H(Y \mid X = x) \] (3). (Due to De Huang)
[ { "context": "\\[ H(Y|X) = \\sum_x \\Pr[X = x] H(Y|X = x) \\] \\[ = \\sum_x \\Pr[X = x] \\sum_y \\Pr[Y = y|X = x] \\log \\left( \\frac{1}{\\Pr[Y = y|X = x]} \\right) \\] \\[ = \\sum_x \\sum_y \\Pr[Y = y, X = x] \\log \\left( \\frac{\\Pr[X = x]}{\\Pr[Y = y, X = x]} \\right) \\] \\[ = \\sum_x \\sum_y \\Pr[Y = y, X = x] \\left( \\log \\left( \\frac{1}{\\Pr[Y = y, X = x]} \\right) - \\log \\left( \\frac{1}{\\Pr[X = x]} \\right) \\right) \\] \\[ = \\sum_x \\sum_y \\Pr[Y = y, X = x] \\log \\left( \\frac{1}{\\Pr[Y = y, X = x]} \\right) \\] \\[ - \\sum_x \\sum_y \\Pr[Y = y, X = x] \\log \\left( \\frac{1}{\\Pr[X = x]} \\right) \\] \\[ = H(XY) - \\sum_x \\Pr[X = x] \\log \\left( \\frac{1}{\\Pr[X = x]} \\right) \\] \\[ = H(XY) - H(X). \\]", "question": "### (a) Prove that conditional Shannon entropy satisfies the chain rule: \\[ H(Y \\mid X) = H(XY) - H(X). \\] (4)" }, { "context": "Given that \\(X\\) and \\(Y\\) are independent, we have \\[ H_{\\min}(XY) = -\\log P_{\\text{guess}}(XY) \\] \\[ = -\\log (P_{\\text{guess}}(X) P_{\\text{guess}}(Y)) \\] \\[ = -\\log P_{\\text{guess}}(X) - \\log P_{\\text{guess}}(Y) \\] \\[ = H_{\\min}(X) + H_{\\min}(Y). \\] Independence also gives \\( P_{\\text{guess}}(Y|X) = P_{\\text{guess}}(Y) \\), i.e. \\( H_{\\min}(Y|X) = H_{\\min}(Y) \\), and thus \\[ H_{\\min}(Y|X) = H_{\\min}(Y) = H_{\\min}(XY) - H_{\\min}(X). \\]", "question": "### (b) Prove that the conditional min-entropy satisfies the chain rule on \\( X \\) and \\( Y \\) if \\( X \\) and \\( Y \\) are independent." }, { "context": "For \\(i = 1\\), \\[ P_{\\text{guess}}(X_1) = \\Pr[X_1 = 0] = \\frac{5}{8}, \\quad P_{\\text{guess}}(X_1 Y_1) = \\Pr[X_1 Y_1 = 00] = \\frac{1}{2}; \\] \\[ P_{\\text{guess}}(Y_1|X_1) = \\Pr[X_1 = 0] \\Pr[Y_1 = 0|X_1 = 0] + \\Pr[X_1 = 1] \\Pr[Y_1 = 0|X_1 = 1] = \\frac{3}{4}; \\] \\[ H_{\\min}(X_1) = -\\log \\frac{5}{8} = 3 - \\log 5, \\quad H_{\\min}(X_1 Y_1) = -\\log \\frac{1}{2} = 1, \\] \\[ H_{\\min}(Y_1|X_1) = -\\log \\frac{3}{4} = 2 - \\log 3, \\] \\[ H_{\\min}(Y_1|X_1) > H_{\\min}(X_1 Y_1) - H_{\\min}(X_1). 
\\] For \\(i = 2\\), \\[ P_{\\text{guess}}(X_2) = \\Pr[X_2 = 0] = \\frac{5}{8}, \\quad P_{\\text{guess}}(X_2 Y_2) = \\Pr[X_2 Y_2 = 00] = \\frac{3}{8}; \\] \\[ P_{\\text{guess}}(Y_2|X_2) = \\Pr[X_2 = 0]\\Pr[Y_2 = 0|X_2 = 0] + \\Pr[X_2 = 1]\\Pr[Y_2 = 0|X_2 = 1] = \\frac{11}{16} \\] \\[ H_{\\min}(X_2) = -\\log \\frac{5}{8} = 3 - \\log 5, \\quad H_{\\min}(X_2Y_2) = -\\log \\frac{3}{8} = 3 - \\log 3, \\] \\[ H_{\\min}(Y_2|X_2) = -\\log \\frac{11}{16} = 4 - \\log 11, \\] \\[ H_{\\min}(Y_2|X_2) < H_{\\min}(X_2Y_2) - H_{\\min}(X_2). \\] We have now seen cases with \\( H_{\\min}(Y|X) \\) equal to, greater than, and less than \\( H_{\\min}(XY) - H_{\\min}(X) \\), thus we may conclude that there is no general chain rule (in either direction) for conditional min-entropy.", "question": "### (c) For each of the following two distributions, compute \\( H_{\\min}(X_iY_i) \\), \\( H_{\\min}(X_i) \\), and \\( H_{\\min}(Y_i \\mid X_i) \\). Make a conclusion about the form of the general chain rule for conditional min-entropy. \\[ \\begin{aligned} &p(X_1Y_1 = 00) = \\frac{1}{2}, \\quad p(X_1Y_1 = 01) = \\frac{1}{8}, \\quad p(X_1Y_1 = 10) = \\frac{1}{4}, \\quad p(X_1Y_1 = 11) = \\frac{1}{8}, \\\\ &p(X_2Y_2 = 00) = \\frac{3}{8}, \\quad p(X_2Y_2 = 01) = \\frac{1}{4}, \\quad p(X_2Y_2 = 10) = \\frac{5}{16}, \\quad p(X_2Y_2 = 11) = \\frac{1}{16}. \\end{aligned} \\]" } ]
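A quick numeric check of part (c) (a sketch; variable names are ours) confirms that the "chain-rule gap" has opposite signs for the two distributions:

```python
from math import log2

dists = [
    {(0, 0): 1 / 2, (0, 1): 1 / 8, (1, 0): 1 / 4, (1, 1): 1 / 8},
    {(0, 0): 3 / 8, (0, 1): 1 / 4, (1, 0): 5 / 16, (1, 1): 1 / 16},
]

gaps = []
for p in dists:
    h_xy = -log2(max(p.values()))                             # H_min(XY)
    h_x = -log2(max(p[(x, 0)] + p[(x, 1)] for x in (0, 1)))   # H_min(X)
    # P_guess(Y|X) = sum_x max_y p(x, y): guess the most likely y per x
    h_y_x = -log2(sum(max(p[(x, 0)], p[(x, 1)]) for x in (0, 1)))
    gaps.append(h_y_x - (h_xy - h_x))

print(gaps)  # first entry positive (>), second negative (<)
```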
Optimal qubit strategies in the CHSH game
Questions (a), (b) and (d) of this problem are worth one point each. The others are worth zero points and are optional. You should still read the problem to its end, as the conclusion is used in the following problem. The goal of this problem is to evaluate the maximum success probability that can be achieved in the CHSH game by players sharing a two-qubit entangled state of the form \[ \lvert \psi_{\theta} \rangle_{AB} = \cos(\theta) \lvert 0 \rangle_A \lvert 0 \rangle_B + \sin(\theta) \lvert 1 \rangle_A \lvert 1 \rangle_B, \] (5) where \( \theta \in [0, \pi/4] \) (other values of \( \theta \) can be reduced to this case by simple change of basis or phase flip). Having fixed the state, what are the optimal measurements for the players, and what is their success probability? We will assume each player makes a basis measurement on their qubit. Recall that an observable \( O \) is a \( 2 \times 2 \) matrix with complex entries such that \( O \) is Hermitian (\( O^{\dagger} = O \)) and squares to identity (\( O^2 = I \)). For any single-qubit basis measurement \( \{ \lvert u_0 \rangle, \lvert u_1 \rangle \} \), there is an associated observable \( O = \lvert u_0 \rangle \langle u_0 \rvert - \lvert u_1 \rangle \langle u_1 \rvert \). Conversely, any observable that is not \( \pm I \) has two non-degenerate eigenvalues \( +1 \) and \( -1 \), so we can uniquely identify it with a basis. To reduce the number of cases to consider we first make a few symmetry observations. (Due to Bolton Bailey)
[ { "context": "We wish to show that any observable \\( O \\) is of the form \\[ O = \\alpha X + \\beta Y + \\gamma Z \\] Where the coefficients are real and \\( \\alpha^2 + \\beta^2 + \\gamma^2 = 1 \\). We first reason that since \\( O = O^\\dagger \\), \\( O \\) is of the form \\[ O = \\begin{pmatrix} x & y + zi \\\\ y - zi & w \\end{pmatrix} \\] Where \\( x, y, z, w \\) are real. If we take \\[ y = \\alpha \\] \\[ z = -\\beta \\] \\[ \\frac{x - w}{2} = \\gamma \\] \\[ \\frac{x + w}{2} = \\delta \\] We get \\[ O = \\begin{pmatrix} \\delta + \\gamma & \\alpha - \\beta i \\\\ \\alpha + \\beta i & \\delta - \\gamma \\end{pmatrix} = \\alpha X + \\beta Y + \\gamma Z + \\delta I \\] So any Hermitian \\( O \\) must be of this form. Since we also know that \\( O^2 = I \\), we have \\[ O^2 = (\\alpha X + \\beta Y + \\gamma Z + \\delta I)(\\alpha X + \\beta Y + \\gamma Z + \\delta I) \\] And since \\( XY = -YX, XZ = -ZX \\) and \\( YZ = -ZY \\), if we expand and cancel, we get \\[ O^2 = \\alpha^2 X^2 + \\beta^2 Y^2 + \\gamma^2 Z^2 + \\delta^2 I^2 + 2\\alpha \\delta X + 2\\beta \\delta Y + 2\\gamma \\delta Z \\] \\[ O^2 = \\alpha^2 I + \\beta^2 I + \\gamma^2 I + \\delta^2 I + 2\\alpha \\delta X + 2\\beta \\delta Y + 2\\gamma \\delta Z \\] \\[ O^2 = (\\alpha^2 + \\beta^2 + \\gamma^2 + \\delta^2)I + 2\\alpha \\delta X + 2\\beta \\delta Y + 2\\gamma \\delta Z \\] And so if this equals \\( I \\), either \\( \\delta = 0 \\) or \\( \\alpha = \\beta = \\gamma = 0 \\). Since we are assuming nondegeneracy, the former is the case, and \\[ O^2 = (\\alpha^2 + \\beta^2 + \\gamma^2)I \\] And so \\( \\alpha^2 + \\beta^2 + \\gamma^2 = 1 \\). Thus, any single-qubit observable can be represented in this form.", "question": "### (a) Let \\( O \\) be a single-qubit observable such that \\( O \\) is non-degenerate (\\( O \\neq \\pm I \\)). 
Show that there exists real numbers \\( \\alpha, \\beta, \\gamma \\) such that \\( \\alpha^2 + \\beta^2 + \\gamma^2 = 1 \\) and \\( O = \\alpha X + \\beta Y + \\gamma Z \\), with \\( X, Y, Z \\) the standard Pauli matrices." }, { "context": "Referring to the result of (Due to Mandy Huo) We have \\( |\\Psi\\rangle |E\\rangle \\rightarrow \\alpha |0\\rangle |E_0\\rangle + \\beta |1\\rangle |E_1\\rangle \\) so \\[ |\\Psi\\rangle \\langle \\Psi| \\otimes |E\\rangle \\langle E| = |\\alpha|^2 |0\\rangle \\langle 0| \\otimes |E_0\\rangle \\langle E_0| + \\alpha \\beta^* |0\\rangle \\langle 1| \\otimes |E_0\\rangle \\langle E_1| + \\alpha^* \\beta |1\\rangle \\langle 0| \\otimes |E_1\\rangle \\langle E_0| + |\\beta|^2 |1\\rangle \\langle 1| \\otimes |E_1\\rangle \\langle E_1| \\] Assuming \\( \\langle E_0|E_1 \\rangle \\) is real, we have \\( \\langle E_0|E_1 \\rangle = \\langle E_1|E_0 \\rangle \\). Since \\( |E\\rangle \\), \\( |E_0\\rangle \\), and \\( |E_1\\rangle \\) are normalized, tracing out the environment gives \\[ |\\Psi\\rangle \\langle \\Psi| \\otimes \\text{Tr} (|E\\rangle \\langle E|) = |\\alpha|^2 |0\\rangle \\langle 0| + \\langle E_0|E_1 \\rangle (\\alpha \\beta^* |0\\rangle \\langle 1| + \\alpha^* \\beta |1\\rangle \\langle 0|) + |\\beta|^2 |1\\rangle \\langle 1| \\] Define \\( p = \\frac{1 - \\langle E_0|E_1 \\rangle}{2} \\). We will show later that \\( p \\) is in fact a valid probability. Note that \\( Z|\\Psi\\rangle = \\alpha |0\\rangle - \\beta |1\\rangle \\). 
Then we have \\[ |\\Psi\\rangle \\langle \\Psi| \\otimes \\text{Tr} (|E\\rangle \\langle E|) = |\\alpha|^2 |0\\rangle \\langle 0| + (1 - 2p)(\\alpha \\beta^* |0\\rangle \\langle 1| + \\alpha^* \\beta |1\\rangle \\langle 0|) + |\\beta|^2 |1\\rangle \\langle 1| \\] \\[ = (1 - p) (|\\alpha|^2 |0\\rangle \\langle 0| + \\alpha \\beta^* |0\\rangle \\langle 1| + \\alpha^* \\beta |1\\rangle \\langle 0| + |\\beta|^2 |1\\rangle \\langle 1|) \\] \\[ + p (|\\alpha|^2 |0\\rangle \\langle 0| - \\alpha \\beta^* |0\\rangle \\langle 1| - \\alpha^* \\beta |1\\rangle \\langle 0| + |\\beta|^2 |1\\rangle \\langle 1|) \\] \\[ = (1 - p)|\\Psi\\rangle \\langle \\Psi| + pZ|\\Psi\\rangle \\langle \\Psi|Z \\] So \\( |\\Psi\\rangle \\langle \\Psi| \\rightarrow (1 - p)|\\Psi\\rangle \\langle \\Psi| + pZ|\\Psi\\rangle \\langle \\Psi|Z \\). Note that \\( |E_i\\rangle \\langle E_i| \\geq 0 \\) since \\( \\langle u|E_i\\rangle \\langle E_i|u\\rangle = |\\langle u|E_i\\rangle|^2 \\geq 0 \\) for any \\( |u\\rangle \\) and \\( \\lambda_{\\max} (|E_i\\rangle \\langle E_i|) \\leq 1 \\) since \\( \\sum_i \\lambda_i (|E_i\\rangle \\langle E_i|) = \\text{Tr} (|E_i\\rangle \\langle E_i|) = 1 \\) and \\( \\lambda_i (|E_i\\rangle \\langle E_i|) \\geq 0 \\). Then by problem 2(b) we have \\[ |\\langle E_0|E_1 \\rangle|^2 = \\text{Tr} (|E_0\\rangle \\langle E_0| |E_1\\rangle \\langle E_1|) \\leq \\lambda_{\\max} (|E_1\\rangle \\langle E_1|) \\text{Tr} (|E_0\\rangle \\langle E_0|) \\leq 1 \\] Then we have \\( |\\langle E_0|E_1 \\rangle| \\leq 1 \\) which implies \\( 0 \\leq p \\leq 1 \\) so \\( p \\) is a valid probability.", "question": "### (b) Let \\( B = A_0 \\otimes B_0 + A_1 \\otimes B_0 + A_0 \\otimes B_1 - A_1 \\otimes B_1 \\). Show that the success probability of the strategy in the CHSH game is \\( p_s = \\frac{1}{2} + \\frac{1}{8} \\langle \\psi | B | \\psi \\rangle \\)." 
}, { "context": "we found that the probability of success in the CHSH game was \\[ p_s = \\frac{1}{2} + \\frac{1}{8} (\\langle u_0|v_0 \\rangle + \\langle u_0|v_1 \\rangle + \\langle u_1|v_0 \\rangle - \\langle u_1|v_1 \\rangle) \\] Where \\[ |u_x \\rangle = A_x \\otimes I |\\psi \\rangle \\] \\[ |v_y \\rangle = I \\otimes B_y |\\psi \\rangle \\] From these definitions, we see \\[ \\langle u_x|v_y \\rangle = \\langle \\psi | (A_x \\otimes I)(I \\otimes B_y) |\\psi \\rangle = \\langle \\psi | (A_x \\otimes B_y) |\\psi \\rangle \\] And so we can rewrite the result of that problem as \\[ p_s = \\frac{1}{2} + \\frac{1}{8} (\\langle \\psi | (A_0 \\otimes B_0) |\\psi \\rangle + \\langle \\psi | (A_0 \\otimes B_1) |\\psi \\rangle + \\langle \\psi | (A_1 \\otimes B_0) |\\psi \\rangle - \\langle \\psi | (A_1 \\otimes B_1) |\\psi \\rangle) \\] And by linearity \\[ p_s = \\frac{1}{2} + \\frac{1}{8} (\\langle \\psi | B |\\psi \\rangle) \\] Which is the correct identity.", "question": "### (c) Argue that for the purposes of computing the maximum success probability in the CHSH game of players using state \\( |\\psi \\rangle_{AB} \\) as in (5) we may without loss of generality restrict our attention to observables of the form \\( A_x = \\cos(\\alpha_x)X + \\sin(\\alpha_x)Y \\) and \\( B_y = \\cos(\\beta_y)X + \\sin(\\beta_y)Y \\) for some angles \\( \\alpha_x, \\beta_y \\in [0, 2\\pi) \\). [Hint: do a rotation on the Bloch sphere.] Based on the symmetry argument from the previous questions we have reduced our problem to understanding the maximum value that \\( \\langle \\psi | B | \\psi \\rangle \\) can take, when \\( |\\psi \\rangle \\) is as in (5) and \\( B \\) is defined from observables \\( A_x, B_y \\) as in (b). To understand this maximum value we compute the spectral decomposition of \\( B \\)." 
}, { "context": "We have \\[ B = (A_0 \\otimes B_0) + (A_0 \\otimes B_1) + (A_1 \\otimes B_0) - (A_1 \\otimes B_1) \\] And from the special form of \\( A_x, B_y \\), we have \\[ A_x \\otimes B_y = (\\cos(α_x)X + \\sin(α_x)Y) \\otimes (\\cos(β_y)X + \\sin(β_y)Y) \\] And so we note that \\( ZXZ = -X \\) and \\( ZYZ = -Y \\) and we see \\[ (Z \\otimes I)A_x \\otimes B_y(Z \\otimes I) = (-\\cos(α_x)X - \\sin(α_x)Y) \\otimes (\\cos(β_y)X + \\sin(β_y)Y) = -A_x \\otimes B_y \\] And \\[ (I \\otimes Z)A_x \\otimes B_y(I \\otimes Z) = (\\cos(α_x)X + \\sin(α_x)Y) \\otimes (-\\cos(β_y)X - \\sin(β_y)Y) = -A_x \\otimes B_y \\] And so, since \\( B \\) is a linear combination of \\( A_x \\otimes B_y \\), we have by linearity \\[ (Z \\otimes I)B(Z \\otimes I) = (I \\otimes Z)B(I \\otimes Z) = -B \\]", "question": "### (d) Show that \\( (Z \\otimes I)B(Z \\otimes I) = (I \\otimes Z)B(I \\otimes Z) = -B \\). [Hint: use the special form of \\( A_x \\) and \\( B_y \\) you obtained from question (c).]" }, { "context": "", "question": "### (e) Show that \\( B \\) has a basis of eigenvectors of the form \\( |\\phi_{ab} \\rangle = e^{i\\phi_{ab}} |ab \\rangle + |\\overline{a} \\overline{b} \\rangle \\), where \\( a, b \\in \\{0, 1\\} \\) and \\( \\overline{a} = 1 - a, \\overline{b} = 1 - b \\). Note that up to local rotations this is the Bell basis." }, { "context": "", "question": "### (f) Write \\( B^2 \\) as a \\( 4 \\times 4 \\) matrix depending on the angles \\( \\alpha_x, \\beta_y \\), and show that \\( \\text{Tr}(B^2) \\leq 16 \\)." }, { "context": "", "question": "### (g) Show that the largest success probability achievable in the CHSH game using \\( |\\psi \\rangle_{AB} \\) is at most \\( \\frac{1}{2} + \\frac{1}{4} \\sqrt{1 + \\sin^2(2\\theta)} \\). [Hint: Decompose \\( |\\psi \\rangle \\) in the eigenbasis of \\( B \\). 
Use (f) and the symmetries from (d) to bound the success probability via the expression found in (b).]" }, { "context": "", "question": "### (h) Give a strategy for the players which achieves this value, i.e. specify the players' observables." } ]
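A numeric sketch tying (b), (f), and (g) together (not part of the original solution): for a tilted Z–X strategy on |ψ_θ⟩ whose Schmidt basis is the computational basis (the same observables appear in the solution to the next problem), the directly simulated winning probability, the expression from (b), and the bound from (g) all coincide, and Tr(B²) = 16, consistent with (f):

```python
import numpy as np

X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

theta = 0.3   # any theta in [0, pi/4]
psi = (np.cos(theta) * np.kron([1.0, 0.0], [1.0, 0.0])
       + np.sin(theta) * np.kron([0.0, 1.0], [0.0, 1.0]))

s = np.sin(2 * theta)
ct, st = 1 / np.sqrt(1 + s ** 2), s / np.sqrt(1 + s ** 2)
A = [Z, X]                                    # Alice: A_0 = Z, A_1 = X
B = [ct * Z + st * X, ct * Z - st * X]        # Bob: tilted in the Z-X plane

Bop = (np.kron(A[0], B[0]) + np.kron(A[1], B[0])
       + np.kron(A[0], B[1]) - np.kron(A[1], B[1]))

p_from_B = 0.5 + (psi @ Bop @ psi) / 8        # identity from (b)

total = 0.0                                   # direct CHSH simulation
for x in (0, 1):
    for y in (0, 1):
        corr = psi @ np.kron(A[x], B[y]) @ psi   # <A_x tensor B_y>
        p_equal = (1 + corr) / 2                 # Pr[a = b]
        total += 0.25 * (p_equal if x * y == 0 else 1 - p_equal)

bound = 0.5 + 0.25 * np.sqrt(1 + s ** 2)      # bound from (g)
print(total, p_from_B, bound)                 # all three agree
assert abs(np.trace(Bop @ Bop) - 16) < 1e-9   # consistent with (f)
```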
Trading success probability for randomness in the CHSH game
The goal of this problem is to show that, if players succeed with higher and higher probability in the CHSH game then Alice's outputs in the game must contain more and more randomness. (Due to De Huang)
[ { "context": "Let \\( \\rho_{AB} = |\\psi\\rangle \\langle \\psi|_{AB} \\), then\n\n\\[\n\\rho_A = \\text{Tr}_B(\\rho_{AB}) = \\cos^2(\\theta)|0\\rangle \\langle 0| + \\sin^2(\\theta)|1\\rangle \\langle 1|.\n\\]\n\nAssume that\n\n\\[\nA_x = |u_x^0\\rangle \\langle u_x^0| - |u_x^1\\rangle \\langle u_x^1|, \\quad x \\in \\{0, 1\\},\n\\]\n\nwhere \\( \\{|u_x^0\\rangle, |u_x^1\\rangle\\} \\) is an orthogonal basis. Then for any \\( a \\in \\{0, 1\\}, x \\in \\{0, 1\\} \\), we have\n\n\\[\np_{\\theta}(a|x) = \\text{Tr}(|u_x^a\\rangle \\langle u_x^a| \\rho_A)\n\\]\n\\[\n= \\cos^2(\\theta)|\\langle u_x^a|0\\rangle|^2 + \\sin^2(\\theta)|\\langle u_x^a|1\\rangle|^2\n\\]\n\\[\n\\leq \\cos^2(\\theta)|\\langle u_x^a|0\\rangle|^2 + \\cos^2(\\theta)|\\langle u_x^a|1\\rangle|^2\n\\]\n\\[\n= \\cos^2(\\theta)(|\\langle u_x^a|0\\rangle|^2 + |\\langle u_x^a|1\\rangle|^2)\n\\]\n\\[\n= \\cos^2(\\theta).\n\\]\n\nWe have used the fact that \\( \\sin^2(\\theta) \\leq \\cos^2(\\theta) \\), \\( \\forall \\theta \\in [0, \\frac{\\pi}{4}] \\). Therefore \\( \\max_{a,x} p_{\\theta}(a|x) \\leq \\cos^2(\\theta) \\).", "question": "### (a) Suppose that Alice and Bob play the CHSH game using a two-qubit entangled state \\( |\\psi_{\\theta} \\rangle_{AB} \\) as in (5). Let \\( p_{\\theta}(a|x) \\) be the probability that, in this strategy, Alice returns answer \\( a \\in \\{0, 1\\} \\) to question \\( x \\in \\{0, 1\\} \\). Show that \\( \\max_{a,x} p_{\\theta}(a|x) \\leq \\cos^2(\\theta) \\)." }, { "context": "(b) Using the result in problem 5(g), we have\n\n\\[\nI = 8p_s - 4 \\leq 8 \\left( \\frac{1}{2} + \\frac{1}{4} \\sqrt{1 + \\sin^2(2\\theta)} \\right) - 4 = 2 \\sqrt{1 + \\sin^2(2\\theta)}.\n\\]\n\nSince we may also assume that \\( p_s \\geq \\frac{1}{2} \\), i.e. 
\\( I \\geq 0 \\), then we have\n\n\\[\n\\sin^2(2\\theta) + 1 \\geq \\frac{I^2}{4},\n\\]\n\\[\n\\implies 2 - \\frac{I^2}{4} \\geq 2 - (1 + \\sin^2(2\\theta)) = 1 - \\sin^2(2\\theta) = \\cos^2(2\\theta),\n\\]\n\\[\n\\implies \\sqrt{2 - \\frac{I^2}{4}} \\geq \\cos(2\\theta) = 2 \\cos^2(\\theta) - 1.\n\\]\n\nThen using the result of (a), we have\n\n\\[\n\\max_{a,x} p_{\\theta}(a|x) = \\cos^2(\\theta) \\leq \\frac{1}{2} \\left( 1 + \\sqrt{2 - \\frac{I^2}{4}} \\right),\n\\]\n\ni.e.\n\n\\[\np_{\\theta}(a|x) \\leq \\frac{1}{2} \\left( 1 + \\sqrt{2 - \\frac{I^2}{4}} \\right), \\quad \\forall a, x \\in \\{0, 1\\}.\n\\]", "question": "### (b) Let \\( p_s = \\frac{1}{2} + \\frac{1}{8} I \\) be the players' success probability in CHSH, where \\( I \\in [-4, 4] \\) (\\( I = 2\\sqrt{2} \\) for the optimal quantum strategy). Using (g) from the previous problem, deduce from (a) that\n\\[\n\\forall a, x \\in \\{0,1\\}, \\quad p_{\\theta}(a|x) \\leq \\frac{1}{2} \\left( 1 + \\sqrt{2 - \\frac{I^2}{4}} \\right).\n\\]" }, { "context": "(c) For any two-qubit \\(|\\phi\\rangle\\), consider its Schmidt decomposition\n\n\\[\n|\\phi\\rangle = \\cos(\\theta)|u_0\\rangle|v_0\\rangle + \\sin(\\theta)|u_1\\rangle|v_1\\rangle,\n\\]\n\nwhere \\( \\theta \\in \\left[0, \\frac{\\pi}{4}\\right] \\). We may also assume that\n\n\\[\n|u_0\\rangle = U|0\\rangle, \\quad |u_1\\rangle = U|1\\rangle, \\quad |v_0\\rangle = V|0\\rangle, \\quad |v_1\\rangle = V|1\\rangle,\n\\]\n\nwhere \\( U, V \\) are two unitaries, that is\n\n\\[\n|\\phi\\rangle = \\cos(\\theta)(U|0\\rangle \\otimes V|0\\rangle) + \\sin(\\theta)(U|1\\rangle \\otimes V|1\\rangle) = (U \\otimes V)|\\psi_0\\rangle.\n\\]\n\nNow assume that we use a strategy \\( A_0, A_1, B_0, B_1 \\) to play CHSH game with state \\( |\\phi\\rangle \\), and have a probability distribution \\( \\{p(a, b|x, y), a, b, x, y \\in \\{0, 1\\}\\} \\). 
Let\n\n\\[\n\\tilde{A}_x = U^T A_x U, \\quad x \\in \\{0, 1\\},\n\\]\n\\[\n\\tilde{B}_y = U^T B_y U, \\quad y \\in \\{0, 1\\}.\n\\]\n\nIt's easy to check that \\( \\tilde{A}_0, \\tilde{A}_1, \\tilde{B}_0, \\tilde{B}_1 \\) are still non-degenerate observables. Then we can check that\n\n\\[\np(a, b|x, y) = \\langle \\phi | (A_x \\otimes B_y) | \\phi \\rangle\n\\]\n\\[\n= \\langle \\psi_0 | (U^T \\otimes V^T) (A_x \\otimes B_y) (U \\otimes V) | \\psi_0 \\rangle\n\\]\n\\[\n= \\langle \\psi_0 | (\\tilde{A}_x \\otimes \\tilde{B}_y) | \\psi_0 \\rangle\n\\]\n\\[\n= \\tilde{p}(a, b|x, y),\n\\]\n\nwhere \\( \\tilde{p}(a, b|x, y), a, b, x, y \\in \\{0, 1\\} \\) is the probability distribution when we use the observables \\( \\tilde{A}_0, \\tilde{A}_1, \\tilde{B}_0, \\tilde{B}_1 \\) to play CHSH game with the state \\( |\\psi_0 \\rangle \\). In particular, we have\n\n\\[\np_s = \\tilde{p}_s = \\frac{1}{2} + \\frac{1}{8} I,\n\\]\n\\[\np(a|x) = \\tilde{p}(a|x) \\leq \\frac{1}{2} \\left( 1 + \\sqrt{2 - \\frac{I^2}{4}} \\right), \\quad \\forall a, x \\in \\{0, 1\\},\n\\]\n\nwhere we have used the result of (b). That is\n\n\\[\np(a|x) \\leq \\frac{1}{2} \\left( 1 + \\sqrt{2 - \\frac{I^2}{4}} \\right) = \\frac{1}{2} \\left( 1 + \\sqrt{2 - 4(2p_s - 1)^2} \\right), \\quad \\forall a, x \\in \\{0, 1\\}.\n\\]\n\nThen for any \\( x \\in \\{0, 1\\} \\),\n\n\\[\nH_{\\min}(A|X = x) = -\\log \\left( \\max_{a \\in \\{0, 1\\}} p(a|x) \\right)\n\\]\n\\[\n\\geq -\\log \\left( \\frac{1}{2} \\left( 1 + \\sqrt{2 - 4(2p_s - 1)^2} \\right) \\right)\n\\]\n\\[\n= 1 - \\log \\left( 1 + \\sqrt{2 - 4(2p_s - 1)^2} \\right).\n\\]", "question": "### (c) Suppose now the players use any single-qubit strategy (not necessarily using \\(|\\psi_{\theta}\\)). Prove a lower bound on the conditional min-entropy \\(H_{\\min}(A|X = x)\\), for any \\(x \\in \\{0,1\\}\\), that is generated in Alice's outputs, as a function of the players' success probability in the CHSH game." 
}, { "context": "Let\n\n\\[\nA_0 = Z, \\quad A_1 = X, \\quad B_0 = \\cos(t)Z + \\sin(t)X, \\quad B_1 = \\cos(t)Z - \\sin(t)X,\n\\]\n\nwhere \\(\\cos(t) = \\frac{1}{\\sqrt{1 + \\sin^2(2\\theta)}}, \\sin(t) = \\frac{-\\sin(2\\theta)}{\\sqrt{1 + \\sin^2(2\\theta)}}\\). It's easy to check that \\(A_0, A_1, B_0, B_1\\) are non-degenerate observables. Then we have\n\n\\[\nB = A_0 \\otimes B_0 + A_1 \\otimes B_0 + A_0 \\otimes B_1 - A_1 \\otimes B_1\n\\]\n\\[\n= 2 \\cos(t) Z \\otimes Z + 2 \\sin(t) X \\otimes X,\n\\]\n\nand\n\n\\[\n\\langle \\psi_0 | B | \\psi_0 \\rangle = 2 \\cos(t) \\langle \\psi_0 | Z \\otimes Z | \\psi_0 \\rangle + 2 \\sin(t) \\langle \\psi_0 | X \\otimes X | \\psi_0 \\rangle\n\\]\n\\[\n= 2 \\cos(t) + 2 \\sin(t) \\sin(2\\theta)\n\\]\n\\[\n= 2 \\left( \\frac{1}{\\sqrt{1 + \\sin^2(2\\theta)}} \\right) + 2 \\left( \\frac{\\sin^2(2\\theta)}{\\sqrt{1 + \\sin^2(2\\theta)}} \\right)\n\\]\n\\[\n= 2 \\sqrt{1 + \\sin^2(2\\theta)}.\n\\]\n\nRecall that \\(p_s = \\frac{1}{2} + \\frac{1}{8} \\langle \\psi_0 | B | \\psi_0 \\rangle = \\frac{1}{2} + \\frac{1}{8} I\\), thus\n\n\\[\nI = \\langle \\psi_0 | B | \\psi_0 \\rangle = 2 \\sqrt{1 + \\sin^2(2\\theta)},\n\\]\n\\[\n\\frac{1}{2} \\left( 1 + \\sqrt{2 - \\frac{I^2}{4}} \\right) = \\frac{1}{2} \\left( 1 + \\sqrt{1 - \\sin^2(2\\theta)} \\right) = \\frac{1}{2} \\left( 1 + \\cos(2\\theta) \\right) = \\cos^2(\\theta).\n\\]\n\nOn the other hand, since \\(\\forall x \\in \\{0, 1\\}\\),\n\n\\[\np(0|x) - p(1|x) = \\text{Tr}(A_x \\rho_A), \\quad p(0|x) + p(1|x) = 1,\n\\]\n\nwe have\n\n\\[\np(0|x) = \\frac{1}{2} \\left( 1 + \\text{Tr}(A_x \\rho_A) \\right), \\quad p(1|x) = \\frac{1}{2} \\left( 1 - \\text{Tr}(A_x \\rho_A) \\right).\n\\]\n\nThen now we have\n\n\\[\n\\rho_A = \\cos^2(\\theta) |0\\rangle \\langle 0| + \\sin^2(\\theta) |1\\rangle \\langle 1|,\n\\]\n\n\\[\np(0|0) = \\frac{1}{2} \\left( 1 + \\text{Tr}(A_0 \\rho_A) \\right) = \\frac{1}{2} \\left( 1 + \\cos^2(\\theta) - \\sin^2(\\theta) \\right) = 
\\cos^2(\\theta),\n\\]\n\\[\np(1|0) = \\frac{1}{2} \\left( 1 - \\text{Tr}(A_0 \\rho_A) \\right) = \\frac{1}{2} \\left( 1 - \\cos^2(\\theta) + \\sin^2(\\theta) \\right) = \\sin^2(\\theta),\n\\]\n\\[\np(0|1) = \\frac{1}{2},\n\\]\n\\[\np(1|1) = \\frac{1}{2}.\n\\]\n\nSince \\(\\theta \\in \\left[ 0, \\frac{\\pi}{4} \\right]\\), we have \\(\\cos^2(\\theta) \\geq \\frac{1}{2} \\geq \\sin^2(\\theta)\\), thus\n\n\\[\n\\max_{a,x} p(a|x) = \\cos^2(\\theta) = \\frac{1}{2} \\left( 1 + \\sqrt{2 - \\frac{I^2}{4}} \\right).\n\\]\n\nThe bound is tight.", "question": "### (d) Show that the bound from (b) is tight: for any \\(\\theta \\in [0, \\pi/4]\\) find a strategy for the players using \\(|\\psi_{\theta}\\) such that \\(\\max_{a,x} p_{\theta}(a|x) = \\frac{1}{2} (1 + \\sqrt{2 - I^2/4})\\)." } ]
2016-11-18T00:00:00
Thomas Vidick, Andrea Coladangelo, Jalex Stark, Charles Xu
Caltech
Thinking adversarially
(Due to De Huang)
[ { "context": "- **Attack to protocol 1:**\n\n- Assume that Eve has a quantum machine that can store arbitrary amount of quantum states.\n- After Alice sends out the qubits she prepares via the quantum channel, Eve captures and stores them for the moment.\n- When Alice announces the basis string \\( \\theta \\) via the authenticated channel, Eve learns the basis. So Eve can measure the qubits in the right basis to learn the key \\( x \\) exactly without changing the qubits.\n- Afterwards Eve sends the unchanged qubits to Bob.\n- Since Bob receives the exact qubits that Alice sends at the beginning and measures them in the right basis, the \\( x \\) they share will pass the correctness checking.\n\n- **Protocol improvement:** Bob announces reception when he receives Alice’s qubits. Alice announces the basis string \\( \\theta \\) only after Bob announces reception.\n\n- **Attack to protocol 2:**\n\n- Assume that Eve has a quantum machine that can store and generate arbitrary amount of quantum states.\n- After Alice sends out the n halves of her EPR pairs via the quantum channel, Eve captures and stores them. (They become AE pairs.)\n- Eve generates another n EPR pairs and sends one half of each to Bob to cheat Bob, so that Bob will announce reception. (They become EB pairs.)\n- After Alice and Bob announce their basis strings \\( \\theta \\) and \\( \\hat{\\theta} \\), Eve uses these bases to measure the qubits on her side (of AE pairs and of EB pairs respectively), and learns exactly the raw key \\( x \\) shared with Alice and the raw key \\( \\hat{x} \\) shared with Bob.\n- Eve only keeps the bits \\( x_i \\) and \\( \\hat{x}_i \\) for \\( i \\) such that \\( \\theta_i = \\hat{\\theta}_i \\). Now Eve has the exact key \\( x \\) that Alice will use and the exact key \\( \\hat{x} \\) that Bob will use.\n- Protocol improvement: Alice and Bob carry out an additional correctness checking step before they use the keys \\( x \\) and \\( \\hat{x} \\) as a common key. 
(Under such attack, the final keys \\( x \\) and \\( \\hat{x} \\), whose expected lengths are \\( \\frac{n}{2} \\), are expected to share only \\( \\frac{n}{4} \\) matching bits.)", "question": "### Let’s imagine that we are playing the role of the eavesdropper Eve. We observe two parties, Alice and Bob, trying to implement certain QKD protocols. Because QKD is hard, Alice and Bob might try to cut corners in the implementation of their protocols. Here are two suggested protocols that Alice and Bob might want to implement. For each of them, either prove security or provide an explicit attack for Eve.\n\n**Protocol 1:** \nAlice and Bob can communicate through a classical authenticated channel, and a quantum (non-authenticated) channel.\n\n- Alice generates bit strings \\( x, \\theta \\in \\{0, 1\\}^n \\) uniformly at random.\n- Alice prepares qubits \\( |x_i\\rangle_{\\theta_i} \\) for \\( i = 1, .., n \\) where \\( |0\\rangle_0 = |0\\rangle \\), \\( |1\\rangle_0 = |1\\rangle \\), \\( |0\\rangle_1 = |+\\rangle \\), \\( |1\\rangle_1 = |-\\rangle \\), and sends them to Bob.\n- Alice announces the basis string \\( \\theta \\).\n- Bob measures the qubits he received according to the bases specified by the string \\( \\theta \\) and recovers \\( x \\)." } ]
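The attack on protocol 1 can be illustrated with a toy simulation (not part of the original solution; the encoding table and the helper `measure` are illustrative). Once Alice announces θ, every stored qubit is an eigenstate of the announced basis, so Eve's measurement is deterministic, leaves the state undisturbed, and reveals the key:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# BB'84-style encoding |x>_theta: theta = 0 is the standard basis,
# theta = 1 the Hadamard basis.
ket = {(0, 0): np.array([1.0, 0.0]),                # |0>
       (1, 0): np.array([0.0, 1.0]),                # |1>
       (0, 1): np.array([1.0, 1.0]) / np.sqrt(2),   # |+>
       (1, 1): np.array([1.0, -1.0]) / np.sqrt(2)}  # |->

x = rng.integers(0, 2, n)
theta = rng.integers(0, 2, n)
stored = [ket[(int(xi), int(ti))] for xi, ti in zip(x, theta)]  # Eve's copies

def measure(state, basis):
    # Projective measurement in the chosen basis; returns the outcome bit.
    p0 = abs(ket[(0, basis)] @ state) ** 2
    return 0 if p0 > 0.5 else 1

# After the basis announcement Eve measures, learns x, and forwards the
# (unchanged) qubits; Bob then measures them in the same announced basis.
eve_key = np.array([measure(q, int(t)) for q, t in zip(stored, theta)])
bob_key = np.array([measure(q, int(t)) for q, t in zip(stored, theta)])

assert np.array_equal(eve_key, x)  # Eve learns x perfectly
assert np.array_equal(bob_key, x)  # Bob's correctness check still passes
```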
2016-02-12T00:00:00
Thomas Vidick, Andrea Coladangelo, Jalex Stark, Charles Xu
Caltech
BB’84 fails in the device-independent setting
Consider the purified variant of the BB’84 protocol. Suppose that Eve prepares the state \(\rho_{ABE}\) in the following form: \[ \rho_{ABE} = \frac{1}{4} \sum_{x,z=0}^{1} |xz\rangle \langle xz|_A \otimes |xz\rangle \langle xz|_B \otimes |xz\rangle \langle xz|_E, \] where \(|xz\rangle\) is short-hand notation for \(|x\rangle \otimes |z\rangle\). Note that here each of the systems A and B handed over to Alice and Bob respectively is made of two qubits. But suppose that they don’t notice this - the qubits go directly into their respective measurement device. Now suppose each of Alice and Bob’s measurement devices, instead of measuring a single qubit in the standard or Hadamard bases, as it is supposed to do, in fact performs the following: - When the device is told to measure in the standard basis, it measures the first qubit of the two-qubit system associated with the device in (1) in the standard basis; - When the device is told to measure in the Hadamard basis, it measures the second qubit of the two-qubit system associated with the device in (1) in the standard basis. (Due to Mandy Huo)
[ { "context": "In this case the box measures the first qubit in the standard basis so the first qubit of Alice’s state is \\(|0\\rangle\\). The post-measurement state is \n\\[\\rho = \\frac{1}{2} \\sum_{z=0}^{1} |0z\\rangle \\langle 0z| \\otimes |0z\\rangle \\langle 0z| \\otimes |0z\\rangle \\langle 0z|\\].", "question": "### (a) Alice and Bob put blind faith in their hardware and attempt to implement BB’84. They want to check that their state is an EPR pair, so Alice asks her box to measure in the standard basis. The box returns a measurement outcome of 0. Determine the post-measurement state." }, { "context": "If Bob asks his box to measure in the Hadamard basis, it will measure the second qubit in the standard basis. Thus, Bob will get outcome 0 half the time and outcome 1 half the time. If Bob asks his box to measure in the standard basis, then it will measure the first qubit in the standard basis, so he will get outcome 0.", "question": "### (b) After Alice’s measurement, Bob asks his box to measure in the Hadamard basis. What measurement outcome will Bob receive? Suppose instead Bob had asked his box to measure in the standard basis. What measurement outcome would Bob have received?" }, { "context": "Eve will get 0 since her first qubit is \\(|0\\rangle\\).", "question": "### (c) Suppose Bob did in fact the latter. Suppose that now Eve measures her first qubit in the standard basis. What measurement outcome does she receive?" }, { "context": "Since Alice and Bob have the same state and their boxes work the same way, they must get the same outcome whenever they make the same measurement. Therefore, they will pass with probability 1.", "question": "### (d) Suppose Alice and Bob run the BB’84 protocol, and, as per the usual, they look at all the rounds in which they made the same measurement as each other. They pick a random subset of half of those rounds and test whether they received the same output on all the rounds. 
What is the probability that they pass the test?" }, { "context": "Since Eve has the same state as Alice and Bob, she can learn \\( x_j \\) by making the same measurement that Alice and Bob’s boxes made: measure the first qubit in the standard basis if \\( \\theta_j = 0 \\) and the second qubit in the standard basis if \\( \\theta_j = 1 \\).", "question": "### (e) Let \\( T' \\) be the set of rounds on which Alice and Bob made the same measurement but didn’t perform a test. Let \\(\\{ \\theta_j \\}_{j \\in T'} \\) be the measurements they made and \\(\\{ x_j \\}_{j \\in T'} \\) and \\(\\{ i_j \\}_{j \\in T'} \\) be the results they received. The \\(\\theta_j\\) have been communicated over the public channel. Eve wishes to learn the \\( x_j \\). Which measurements should she make?" }, { "context": "Since Eve can recover all the \\( x_j, j \\in T' \\), Eve knows the entire key so \\( H_{\\text{min}}(X|E) = 0 \\).", "question": "### (f) Let \\( X \\) be the classical key generated by Alice and Bob. What is \\( H_{\\text{min}}(X \\mid E) \\), where \\( E \\) is Eve’s system?" } ]
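Since the devices' behavior here is entirely classical (each box simply reads one bit of a shared classical register), the whole attack can be simulated with plain random bits. A minimal sketch, illustrative only and not part of the original solution:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000

# Eve's preparation is effectively classical: each round she samples (x, z)
# and hands identical two-bit registers to Alice, Bob, and herself.
registers = rng.integers(0, 2, size=(n, 2))

def device(register, basis_choice):
    # basis_choice 0 ("standard") reads the first bit;
    # basis_choice 1 ("Hadamard") reads the second bit in the standard basis.
    return register[basis_choice]

bases = rng.integers(0, 2, n)  # rounds where Alice and Bob choose the same basis
alice = np.array([device(r, b) for r, b in zip(registers, bases)])
bob = np.array([device(r, b) for r, b in zip(registers, bases)])
eve = np.array([device(r, b) for r, b in zip(registers, bases)])

assert np.array_equal(alice, bob)  # the test in (d) passes with probability 1
assert np.array_equal(eve, alice)  # Eve knows every key bit: H_min(X|E) = 0
```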
2016-02-12T00:00:00
Thomas Vidick, Andrea Coladangelo, Jalex Stark, Charles Xu
Caltech
Commuting observables are compatible
Consider \( X \otimes X \) and \( Z \otimes Z \). Each of these is a \( 4 \times 4 \) Hermitian matrix which squares to identity, so it has \( \pm 1 \) eigenvalues. Moreover, since \( X \otimes X \) and \( Z \otimes Z \) mutually commute, they have a simultaneous eigenbasis. It turns out it consists of the Bell states \[ \begin{aligned} \lvert \Psi_{00} \rangle &= \frac{1}{\sqrt{2}} (\lvert 00 \rangle + \lvert 11 \rangle) ; \\ \lvert \Psi_{01} \rangle &= \frac{1}{\sqrt{2}} (\lvert 00 \rangle - \lvert 11 \rangle) ; \\ \lvert \Psi_{10} \rangle &= \frac{1}{\sqrt{2}} (\lvert 01 \rangle + \lvert 10 \rangle) ; \\ \lvert \Psi_{11} \rangle &= \frac{1}{\sqrt{2}} (\lvert 01 \rangle - \lvert 10 \rangle) . \end{aligned} \](Due to Bolton Bailey)
[ { "context": "The two-dimensional eigenspace to which the post-measurement state will belong will be spanned by the two Bell states which correspond to the eigenvalue \\(-1\\). If we evaluate\n\n\\[\n(X \\otimes X)|\\Psi_{00}\\rangle = (X \\otimes X) \\frac{1}{\\sqrt{2}} (|00\\rangle + |11\\rangle) = \\frac{1}{\\sqrt{2}} (|11\\rangle + |00\\rangle) = |\\Psi_{00}\\rangle\n\\]\n\n\\[\n(X \\otimes X)|\\Psi_{01}\\rangle = (X \\otimes X) \\frac{1}{\\sqrt{2}} (|00\\rangle - |11\\rangle) = \\frac{1}{\\sqrt{2}} (|11\\rangle - |00\\rangle) = -|\\Psi_{01}\\rangle\n\\]\n\n\\[\n(X \\otimes X)|\\Psi_{10}\\rangle = (X \\otimes X) \\frac{1}{\\sqrt{2}} (|01\\rangle + |10\\rangle) = \\frac{1}{\\sqrt{2}} (|10\\rangle + |01\\rangle) = |\\Psi_{10}\\rangle\n\\]\n\n\\[\n(X \\otimes X)|\\Psi_{11}\\rangle = (X \\otimes X) \\frac{1}{\\sqrt{2}} (|01\\rangle - |10\\rangle) = \\frac{1}{\\sqrt{2}} (|10\\rangle - |01\\rangle) = -|\\Psi_{11}\\rangle\n\\]\n\nWe see that if the measurement is \\(-1\\), then the post-measurement state belongs to the eigenspace spanned by \\(|\\Psi_{01}\\rangle, |\\Psi_{11}\\rangle\\).\n\nIf we evaluate the eigenvalues for the Bell states under the \\(Z \\otimes Z\\) operator\n\n\\[\n(Z \\otimes Z)|\\Psi_{00}\\rangle = (Z \\otimes Z) \\frac{1}{\\sqrt{2}} (|00\\rangle + |11\\rangle) = \\frac{1}{\\sqrt{2}} (|00\\rangle + |11\\rangle) = |\\Psi_{00}\\rangle\n\\]\n\n\\[\n(Z \\otimes Z)|\\Psi_{01}\\rangle = (Z \\otimes Z) \\frac{1}{\\sqrt{2}} (|00\\rangle - |11\\rangle) = \\frac{1}{\\sqrt{2}} (|00\\rangle - |11\\rangle) = |\\Psi_{01}\\rangle\n\\]\n\n\\[\n(Z \\otimes Z)|\\Psi_{10}\\rangle = (Z \\otimes Z) \\frac{1}{\\sqrt{2}} (|01\\rangle + |10\\rangle) = \\frac{1}{\\sqrt{2}} (|01\\rangle + |10\\rangle) = -|\\Psi_{10}\\rangle\n\\]\n\n\\[\n(Z \\otimes Z)|\\Psi_{11}\\rangle = (Z \\otimes Z) \\frac{1}{\\sqrt{2}} (|01\\rangle - |10\\rangle) = \\frac{1}{\\sqrt{2}} (-|01\\rangle + |10\\rangle) = -|\\Psi_{11}\\rangle\n\\]\n\nAnd so after the second measurement of 1, the post 
measurement state is \\(|\\Psi_{01}\\rangle\\), the unique Bell state with \\(X \\otimes X\\) eigenvalue \\(-1\\) and \\(Z \\otimes Z\\) eigenvalue \\(1\\).", "question": "### (a) Suppose we measure an arbitrary two-qubit state \\(\\lvert \\phi \\rangle \\) using the observable \\( X \\otimes X \\) and obtain the outcome \\(-1\\). To which two-dimensional eigenspace does the post-measurement state belong? (Specify the subspace using two of the Bell states above.) Next, we measure the observable \\( Z \\otimes Z \\) and obtain outcome 1. What is the post-measurement state \\(\\lvert \\phi' \\rangle \\)?" }, { "context": "If we apply \\(-Y \\otimes Y\\) to \\(|\\Psi_{01}\\rangle\\), we get \\(-|\\Psi_{01}\\rangle\\), so if the post-measurement state has nonzero overlap with \\(|\\Psi_{01}\\rangle\\), the measured eigenvalue must be \\(-1\\), which is the product of the outcomes \\(-1\\) and \\(1\\) obtained in (a).", "question": "### (b) Suppose that instead we performed the measurement \\(-Y \\otimes Y = (X \\otimes X)(Z \\otimes Z) \\) directly, and the post-measurement state had nonzero overlap with \\(\\lvert \\phi' \\rangle \\). What measurement outcome would we have obtained?" }, { "context": "In general, when one measures a product of commuting observables, one obtains the product of the outcomes one would have gotten by measuring each observable individually. In particular, for a joint eigenstate \\( |\\psi\\rangle \\) of \\( A \\) and \\( B \\),\n\n\\[\n\\langle \\psi|AB| \\psi \\rangle = \\langle \\psi|A| \\psi \\rangle \\langle \\psi|B| \\psi \\rangle\n\\]", "question": "### (c) What do you deduce about the relationship between the outcomes of measuring commuting observables \\( A \\) and \\( B \\) with the outcome of measuring the observable \\( AB \\) directly?" } ]
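The eigenvalue bookkeeping above is easy to verify numerically. A short sketch (not part of the original solution), indexing the Bell states as |Ψ_ab⟩ so that X⊗X has eigenvalue (−1)^b and Z⊗Z has eigenvalue (−1)^a:

```python
import numpy as np

X = np.array([[0.0, 1.0], [1.0, 0.0]])
Y = np.array([[0.0, -1.0j], [1.0j, 0.0]])
Z = np.diag([1.0, -1.0])

XX, ZZ = np.kron(X, X), np.kron(Z, Z)
assert np.allclose(XX @ ZZ, ZZ @ XX)         # the two observables commute
assert np.allclose(XX @ ZZ, -np.kron(Y, Y))  # (X(x)X)(Z(x)Z) = -Y(x)Y

def bell(a, b):
    # |Psi_ab>: a = 0 uses |00>, |11>; a = 1 uses |01>, |10>; b sets the sign.
    v = np.zeros(4)
    if a == 0:
        v[0], v[3] = 1.0, (-1.0) ** b
    else:
        v[1], v[2] = 1.0, (-1.0) ** b
    return v / np.sqrt(2)

for a in (0, 1):
    for b in (0, 1):
        v = bell(a, b)
        assert np.allclose(XX @ v, (-1) ** b * v)             # X(x)X eigenvalue
        assert np.allclose(ZZ @ v, (-1) ** a * v)             # Z(x)Z eigenvalue
        assert np.allclose(XX @ ZZ @ v, (-1) ** (a + b) * v)  # product rule, part (c)
```

In particular |Ψ_01⟩ is the only Bell state with X⊗X eigenvalue −1 and Z⊗Z eigenvalue +1, matching the answer to (a).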
2016-02-12T00:00:00
Thomas Vidick, Andrea Coladangelo, Jalex Stark, Charles Xu
Caltech
A coherent attack on a nonlocal game
In video 7.5-2 on EdX, you saw a nonlocal game where a coherent attack allowed the players to do just as well when playing two parallel copies of the game as they did when playing just one copy. Now we’ll see another example of a game with such an attack, and we’ll prove that this attack is the best strategy for the game even in the quantum setting. (Due to Anish Thilagar)
[ { "context": "(a) They win with probability \\(\\frac{3}{4}\\). They only lose when \\(s = t = 0\\), because then both sides of the equation evaluate to 0.", "question": "### (a) We begin by describing the single-shot game. Eve starts by generating a pair \\((s, t) \\in \\{(0,0), (0,1), (1,0), (1,1)\\}\\) uniformly at random. She gives \\(s\\) to Alice and \\(t\\) to Bob. Alice and Bob generate output bits \\(a, b \\in \\{0,1\\}\\), respectively. They win if \\(a \\vee b \\neq s \\vee t\\). As a warm-up, consider the strategy in which \\(a = s\\) and \\(b = t\\). What is the winning probability? Which inputs cause Alice and Bob to lose?" }, { "context": "There are 9 possible equally likely situations that can occur, because each game has 3 possible inputs. Both Alice and Bob follow the same strategy. If their input is \\((0,0)\\), they output \\((0,0)\\), and otherwise output \\((1,1)\\).\n\nClearly, if they both output \\((0,0)\\), they will fail because all 4 variables in both equations are 0. Additionally, if both of them output \\((1,1)\\), they will fail because both sides of both equations will be 1.\n\nThe only remaining case is, if only one of them outputs \\((0,0)\\) and the other outputs \\((1,1)\\), then we have that either \\(a_0 \\vee s_0 = a_1 \\vee s_1 = 0\\) and \\(b_0 \\vee t_0 = b_1 \\vee t_1 = 1\\) or \\(a_0 \\vee s_0 = a_1 \\vee s_1 = 1\\) and \\(b_0 \\vee t_0 = b_1 \\vee t_1 = 0\\), and either way they succeed in both games, so they win. This can happen 6 different ways, because \\((s_0, s_1) = (0,0)\\), then we can have \\((t_0, t_1) \\in \\{(0,1), (1,0), (1,1)\\}\\), and vice versa if \\((t_0, t_1) = (0,0)\\). Therefore, the probability of success is \\(6 \\times \\frac{1}{9} = \\frac{2}{3}\\).", "question": "### (b) In the two-parallel version \\(G^{(2)}\\) of the game we just described, Eve picks two strings \\((s_0, t_0), (s_1, t_1)\\) from \\(\\{(0,0), (0,1), (1,0), (1,1)\\}\\) independently and uniformly at random. 
She gives \\((s_0, s_1)\\) to Alice, \\((t_0, t_1)\\) to Bob, and demands outputs \\((a_0, a_1), (b_0, b_1)\\) from Alice and Bob. They win if \\(a_0 \\vee s_0 \\neq b_0 \\vee t_0\\) and \\(a_1 \\vee s_1 \\neq b_1 \\vee t_1\\). Describe a deterministic strategy for Alice and Bob that achieves a winning probability of 2/3." }, { "context": "Using shared randomness fixed before the game starts, Alice and Bob jointly sample a pair \\((s_1, t_1)\\), distributed as in the game, and treat it as the input to a second, simulated round; no communication during the game is needed. They then play their two-parallel strategy on the real input together with the simulated one, winning both rounds with probability \\(\\omega_c\\). Since winning the two-parallel game requires winning both rounds, they win the one-shot game with probability at least \\(\\omega_c\\).", "question": "### (c) Suppose Alice and Bob have a valid strategy for the two-parallel game which wins with probability \\(\\omega_c\\). Describe a strategy for them to win the one-shot game with probability at least \\(\\omega_c\\). This proves that the optimal success probability in the one-shot game is an upper bound for the optimal success probability in the two-parallel game." }, { "context": "First, note that \\(A_0\\) and \\(B_0\\) commute because they act on different subsystems of the shared state.\n\nIf \\((s, t) = (0, 0)\\), which happens with probability \\(\\frac{1}{3}\\), Alice and Bob win the game with probability \\(P(a \\neq b)\\). We know that \\(\\langle \\Psi | A_0 B_0 | \\Psi \\rangle\\) is the probability of the measured eigenvalues of \\(A_0\\) and \\(B_0\\) being the same minus the probability that they are different, because as we know from the last question the measured eigenvalue of a product of two commuting observables is the same as the product of the outcomes of measuring each one. Therefore, this tells us that \\(\\langle \\Psi | A_0 B_0 | \\Psi \\rangle = P(a = b) - P(a \\neq b)\\).
However, \\(P(a = b) = 1 - P(a \\neq b)\\), so we have \\(\\langle \\Psi | A_0 B_0 | \\Psi \\rangle = 1 - 2P(a \\neq b)\\), so \\(P(a \\neq b) = \\frac{1}{2} - \\frac{1}{2} \\langle \\Psi | A_0 B_0 | \\Psi \\rangle\\) is the success probability in this case.\n\nIf \\((s, t) = (0, 1)\\), which happens with probability \\(\\frac{1}{3}\\), they succeed exactly when \\(a = 0\\). We know that \\(\\langle \\Psi | A_0 | \\Psi \\rangle = P(a = 0) - P(a = 1)\\), and using the substitution \\(P(a = 1) = 1 - P(a = 0)\\), we get \\(P(a = 0) = \\frac{1}{2} + \\frac{1}{2} \\langle \\Psi | A_0 | \\Psi \\rangle\\).\n\nSimilarly, if \\((s, t) = (1, 0)\\), which happens with probability \\(\\frac{1}{3}\\), they succeed exactly when \\(b = 0\\). We know that \\(\\langle \\Psi | B_0 | \\Psi \\rangle = P(b = 0) - P(b = 1)\\), and using the substitution \\(P(b = 1) = 1 - P(b = 0)\\), we get \\(P(b = 0) = \\frac{1}{2} + \\frac{1}{2} \\langle \\Psi | B_0 | \\Psi \\rangle\\).\n\nTherefore, their success probability will be\n\n\\[\n\\frac{1}{3} \\left( \\frac{1}{2} - \\frac{1}{2} \\langle \\Psi | A_0 B_0 | \\Psi \\rangle \\right) + \\frac{1}{3} \\left( \\frac{1}{2} + \\frac{1}{2} \\langle \\Psi | A_0 | \\Psi \\rangle \\right) + \\frac{1}{3} \\left( \\frac{1}{2} + \\frac{1}{2} \\langle \\Psi | B_0 | \\Psi \\rangle \\right)\n\\]\n\nLetting \\(M = \\frac{1}{3} (A_0 + B_0 - A_0 B_0)\\), this reduces to \\(\\frac{1}{2} + \\frac{1}{2} \\langle \\Psi | M | \\Psi \\rangle\\).", "question": "### (d) Now we will find an upper bound on the success probability of the one-shot game, assuming that Alice and Bob may use shared entanglement in addition to classical resources.\n\nThe most general strategy that Alice and Bob can take is as follows. They each have two \\(\\pm 1\\)-eigenvalue-observables \\(A_0, A_1, B_0, B_1\\). They share a joint state \\(|\\psi\\rangle\\). 
Alice measures \\(|\\psi\\rangle\\) on \\(A_s\\), Bob measures on \\(B_t\\), and they each output 0 if they measured a 1 and 1 if they measured a -1.\n\nIn general, if \\(X\\) is an observable, then \\(\\langle\\psi| X |\\psi\\rangle\\) is equal to the probability of measuring a 1 minus the probability of measuring -1.\n\nLet \\(M = -\\frac{1}{3}A_0 B_0 + \\frac{1}{3}A_0 + \\frac{1}{3}B_0\\). Prove that the probability that Alice and Bob win the game is \\(\\frac{1}{2} + \\frac{1}{2}\\langle\\psi| M |\\psi\\rangle\\)." }, { "context": "\\[M^2 = \\frac{1}{9} (A_0 + B_0 - A_0 B_0)^2\\]\n\n\\[= \\frac{1}{9} (A_0^2 + B_0^2 + A_0 B_0 A_0 B_0 + A_0 B_0 + B_0 A_0 - A_0^2 B_0 - A_0 B_0 A_0 - B_0 A_0 B_0 - A_0 B_0^2)\\]\n\nBecause \\(A_0\\) and \\(B_0\\) commute and are \\(\\pm 1\\)-eigenvalue observables, they square to the identity, so \\(A_0 B_0 A_0 B_0 = I\\), \\(A_0 B_0 + B_0 A_0 = 2 A_0 B_0\\), and each of the four negative terms reduces to \\(-A_0\\) or \\(-B_0\\). Therefore\n\n\\[= \\frac{1}{9} (3I + 2 A_0 B_0 - 2 A_0 - 2 B_0)\\]\n\n\\[= \\frac{1}{3} (I - 2M) = \\frac{1}{3} I - \\frac{2}{3} M,\\]\n\nwhere the last step uses \\(A_0 + B_0 - A_0 B_0 = 3M\\).", "question": "### (e) Prove that \\(M^2 = \\frac{1}{3}I - \\frac{2}{3}M\\)." }, { "context": "By the previous part, \\(M\\) satisfies the monic quadratic \\(M^2 + \\frac{2}{3} M - \\frac{1}{3} I = 0\\), so every eigenvalue \\(\\lambda\\) of \\(M\\) satisfies \\(3\\lambda^2 + 2\\lambda - 1 = 0\\). This factors as \\((3\\lambda - 1)(\\lambda + 1) = 0\\), so \\(\\lambda_{\\text{max}} = \\frac{1}{3}\\).", "question": "### (f) The answer to the last is the characteristic polynomial of \\(M\\) (indeed, it is the unique monic quadratic satisfied by \\(M\\)). Use it to solve for the largest eigenvalue \\(\\lambda_{\\text{max}}\\) of \\(M\\)." }, { "context": "\\( p_{\\text{win}} \\leq \\frac{1}{2} + \\frac{1}{2} \\langle \\Psi | M | \\Psi \\rangle \\leq \\frac{1}{2} + \\frac{1}{2} \\cdot \\frac{1}{3} = \\frac{2}{3} \\).
We also know that this win probability is achievable even without any quantum strategy from the naive strategy in part (a), so this must be the tightest possible bound.", "question": "### (g) Now use the facts that \\(p_{\\text{win}} \\leq \\frac{1}{2} + \\frac{1}{2}\\langle\\psi| M |\\psi\\rangle\\) and \\(\\langle\\psi| M |\\psi\\rangle \\leq \\lambda_{\\text{max}}\\) to find the tightest possible upper bound on \\(p_{\\text{win}}\\)." } ]
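Parts (e)-(g) can be spot-checked on a concrete pair of commuting observables, e.g. A_0 = Z⊗I and B_0 = I⊗Z (one arbitrary choice; by part (e) the operator identity holds for any commuting ±1-observables). A minimal sketch, not part of the original solution:

```python
import numpy as np

# Concrete commuting +/-1 observables: A0 acts on Alice's qubit, B0 on Bob's,
# so they commute by construction.
Z = np.diag([1.0, -1.0])
I2 = np.eye(2)
A0, B0 = np.kron(Z, I2), np.kron(I2, Z)

M = (A0 + B0 - A0 @ B0) / 3

assert np.allclose(M @ M, np.eye(4) / 3 - 2 * M / 3)  # part (e)

eigs = np.linalg.eigvalsh(M)
assert np.isclose(eigs.max(), 1 / 3)                  # part (f): lambda_max = 1/3
assert np.isclose(0.5 + eigs.max() / 2, 2 / 3)        # part (g): p_win <= 2/3
```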
2016-02-12T00:00:00
Thomas Vidick, Andrea Coladangelo, Jalex Stark, Charles Xu
Caltech
A dual formulation for the conditional min-entropy
In the notes, the conditional min-entropy of a cq state \( \rho_{XE} = \sum_{x \in \mathcal{X}} |x\rangle \langle x| \otimes \rho_x^E \) (where \( \mathcal{X} \) is any finite set of outcomes) is defined through the guessing probability, \( H_{\min}(X|E) = -\log P_{\text{guess}}(X|E) \) where \[ P_{\text{guess}}(X|E) = \sup_{\{M_x\}} \sum_{x \in \mathcal{X}} \text{Tr} \left( M_x \rho_x \right), \] where the supremum is over all POVMs \( \{M_x\} \). It turns out that the min-entropy can also be written in a different way, and this other expression can be useful in calculations. To derive it, we first rewrite (1) as a semidefinite program (SDP). Recall the primal and dual forms of an SDP from Problem 2 in Homework 3. Consider the map \[ \Phi(Z) = \sum_{x \in \mathcal{X}} \left( \langle x| \otimes I_E \right) Z \left( |x\rangle \otimes I_E \right), \] where \( I_E \) is the identity operator on the system \( E \). (Due to De Huang)
[ { "context": "For any \\( |\\Psi\\rangle_{XE} \\in \\mathcal{H}_X \\otimes \\mathcal{H}_E \\), say\n\n\\[ |\\Psi\\rangle_{XE} = \\sum_{i,j} \\alpha_{ij} |i\\rangle_X |j\\rangle_E, \\]\n\nwe have\n\n\\[ \\langle \\Psi |_{XE} (|x\\rangle \\langle x| \\otimes N_z) |\\Psi\\rangle_{XE} = \\sum_{i,j} \\sum_{k,l} \\langle i|x \\rangle \\langle x|i \\rangle \\langle j| (|x\\rangle \\langle x| \\otimes N_z) |k\\rangle \\langle k|j \\rangle_E \\]\n\n\\[ = \\sum_{i,j} \\sum_{k,l} \\langle i|x \\rangle \\langle x|k \\rangle \\langle j|N_z|l \\rangle_E \\]\n\n\\[ = \\left( \\sum_{i,k} \\langle i|x \\rangle \\langle x|k \\rangle \\right) \\left( \\sum_{j,l} \\langle j|N_z|l \\rangle_E \\]\n\n\\[ = \\left( \\left( \\sum_{i} \\langle i|x \\rangle \\langle x| \\left( \\sum_{i} |i\\rangle \\right) \\right) \\left( \\left( \\sum_{j} \\langle j|E \\right) N_z \\left( \\sum_{j} |j\\rangle_E \\right) \\right) \\right) \\]\n\n\\[ \\geq 0. \\]\n\nThus \\( |x\\rangle \\langle x| \\otimes N_z \\geq 0, \\ \\forall x \\in \\mathcal{X}, \\) and consequently\n\n\\[ Z = \\sum_{x \\in \\mathcal{X}} |x\\rangle \\langle x| \\otimes N_z \\geq 0. \\]\n\nBy definition we have\n\n\\[ \\Phi(Z) = \\sum_{x' \\in \\mathcal{X}} \\left( \\langle x' | \\otimes I_E \\right) Z \\left( | x' \\rangle \\otimes I_E \\right) \\]\n\n\\[ = \\sum_{x' \\in \\mathcal{X}} \\left( \\langle x' | \\otimes I_E \\right) (| x \\rangle \\langle x | \\otimes N_x) \\left( | x' \\rangle \\otimes I_E \\right) \\]\n\n\\[ = \\sum_{x' \\in \\mathcal{X}} \\langle x' | x \\rangle \\langle x | x' \\rangle N_x \\]\n\n\\[ = \\sum_{x \\in \\mathcal{X}} N_x \\]\n\n\\[ = I_E. \\]", "question": "### (a) Suppose \\( \\{N_x\\} \\) is a valid POVM. Show that the matrix \\( Z = \\sum_x |x\\rangle \\langle x| \\otimes N_x \\) satisfies \\( Z \\geq 0 \\), and compute \\( \\Phi(Z) \\) (the result should be a matrix defined on system \\( E \\) only)." 
}, { "context": "\\[ \\text{tr}(Z \\rho_{X E}) = \\text{tr} \\left( Z \\sum_{x' \\in \\mathcal{X}} (| x \\rangle \\langle x | x' \\rangle \\langle x' |) \\otimes (N_x \\rho_E^x) \\right) \\]\n\n\\[ = \\sum_{x' \\in \\mathcal{X}} \\text{tr} \\left( (| x \\rangle \\langle x' | x' \\rangle \\langle x' |) \\text{tr}(N_x \\rho_E^x) \\right) \\]\n\n\\[ = \\sum_{x \\in \\mathcal{X}} \\text{tr}(N_x \\rho_E^x) \\]", "question": "### (b) For the same matrix \\( Z \\) as in the previous question, compute \\( \\text{Tr}(Z \\rho_{XE}) \\)." }, { "context": "For any \\( | \\phi \\rangle_E \\in \\mathcal{H}_E \\), we have\n\n\\[ \\langle \\phi | E N_x | \\phi \\rangle_E = \\langle \\phi | E \\left( (| x \\rangle \\langle x | \\otimes I_E) Z (| x \\rangle \\langle x | \\otimes I_E) \\right) \\phi \\rangle_E = \\left( \\langle x | \\langle \\phi | E (| x \\rangle \\langle x | \\otimes I_E) | \\phi \\rangle_E \\right) \\geq 0, \\]\n\nthus \\( N_x \\geq 0, \\forall x \\in \\mathcal{X} \\). Also we have\n\n\\[ \\sum_{x \\in \\mathcal{X}} N_x = \\sum_{x \\in \\mathcal{X}} \\left( (| x \\rangle \\langle x | \\otimes I_E) Z (| x \\rangle \\langle x | \\otimes I_E) \\right) = \\Phi(Z) = I_E. \\]\n\nTherefore \\( \\{ N_x \\}_x \\) is a valid POVM over \\( \\mathcal{H}_E \\).", "question": "### (c) Conversely, suppose \\( Z \\geq 0 \\) and \\( \\Phi(Z) = I_E \\). Show that the elements \\( N_x = (\\langle x| \\otimes I_E) Z (\\langle x| \\otimes I_E) \\) form a valid POVM \\( \\{N_x\\} \\) over \\( \\mathcal{H}_E \\) (with outcomes \\( x \\in \\mathcal{X} \\))." 
}, { "context": "By the previous results, the constraint that \\( \\{ M_x \\}_x \\) is a POVM can be translated into the conditions\n\n\\[ M_x = \\left( (| x \\rangle \\otimes I_E) Z (| x \\rangle \\langle x | \\otimes I_E) \\right), \\quad \\forall x \\in \\mathcal{X}, \\]\n\nfor some \\( Z \\) satisfying\n\n\\[ Z \\geq 0, \\quad \\Phi(Z) = I_E, \\]\n\nwhere\n\n\\[ \\Phi(Z) = \\sum_{x \\in \\mathcal{X}} \\left( (| x \\rangle \\otimes I_E) Z (| x \\rangle \\langle x | \\otimes I_E) \\right). \\]\n\nAnd the objective function can rewrite as\n\n\\[ \\sum_{x \\in \\mathcal{X}} \\text{tr}(M_x \\rho_E^x) = \\text{tr}(Z \\rho_{X E}). \\]\n\nTherefore the primal problem that gives \\( P_{\\text{guess}} \\) is\n\n\\[ P_{\\text{guess}}(X|E) = \\sup_Z \\, \\text{tr}(Z \\rho_{X E}) \\]\n\ns.t. \\( \\Phi(Z) = I_E \\),\n\n\\( Z \\geq 0 \\).\n\nIn the language of HW3 Problem 2, we are using \\( A = \\rho_{X E}, B = I_E \\).", "question": "### (d) Use the previous questions to give a semidefinite program in primal form whose optimum is \\( P_{\\text{guess}}(X|E) \\). That is, specify the map \\( \\Phi \\) and matrices \\( A \\) and \\( B \\) that define the SDP." }, { "context": "Since\n\n\\[ \\Phi(Z) = \\sum_{x \\in \\mathcal{X}} (|x\\rangle \\otimes I_E) Z ( \\langle x| \\otimes I_E) \\quad \\forall Z, \\]\n\nwe have\n\n\\[ \\Phi^*(Y_E) = \\sum_{x \\in \\mathcal{X}} (|x\\rangle \\otimes I_E) Y_E ( \\langle x| \\otimes I_E) \\]\n\n\\[ = \\sum_{x \\in \\mathcal{X}} (|x\\rangle \\langle x| \\otimes Y_E) \\]\n\n\\[ = \\left( \\sum_{x \\in \\mathcal{X}} |x\\rangle \\langle x| \\right) \\otimes Y_E \\]\n\n\\[ = I_X \\otimes Y_E, \\quad \\forall Y_E. \\]", "question": "### (e) Show that the map \\( \\Phi^* \\) associated to \\( \\Phi \\) is such that \\( \\Phi^*(Y) = I_X \\otimes Y \\), for any matrix \\( Y \\) defined over system \\( E \\) (remember the definition of \\( \\Phi^* \\) from \\( \\Phi \\) given in Homework 3, Problem 1)." 
}, { "context": "Recall that the dual problem is\n\n\\[ \\inf_Y \\, \\text{tr}(B Y) \\]\n\ns.t. \\( \\Phi^*(Y) \\geq A \\),\n\n\\( Y = Y^\\dagger \\).\n\nSince we are using \\( A = \\rho_{X E}, B = I_E \\), thus the dual problem becomes\n\n\\[ \\inf_Y \\, \\text{tr}(Y) \\]\n\ns.t. \\( I_X \\otimes Y \\geq \\rho_{X E} \\),\n\n\\( Y = Y^\\dagger \\).\n\nMoreover, if \\( I_X \\otimes Y \\geq \\rho_{X E} \\), given any \\( |\\phi\\rangle_E \\in \\mathcal{H}_E \\), we have\n\n\\[ \\langle \\phi|_E (Y - \\rho_E^P) |\\phi\\rangle_E = \\langle \\phi| Y |\\phi\\rangle_E - \\langle \\phi| \\rho_E^P |\\phi\\rangle_E \\]\n\n\\[ = (\\langle x| \\otimes \\langle \\phi|_E) (I_X \\otimes Y) (|x\\rangle \\otimes |\\phi\\rangle_E) - (\\langle x| \\otimes \\langle \\phi|_E) \\left( \\sum_{x' \\in \\mathcal{X}} |x'\\rangle \\langle x'| \\otimes \\rho_E^P \\right) (|x\\rangle \\otimes |\\phi\\rangle_E) \\]\n\n\\[ = (\\langle x| \\otimes \\langle \\phi|_E) (I_X \\otimes Y) (|x\\rangle \\otimes |\\phi\\rangle_E) - (\\langle x| \\otimes \\langle \\phi|_E) \\rho_{X E} (|x\\rangle \\otimes |\\phi\\rangle_E) \\]\n\n\\[ = (\\langle x| \\otimes \\langle \\phi|_E) (I_X \\otimes Y - \\rho_{X E}) (|x\\rangle \\otimes |\\phi\\rangle_E) \\]\n\n\\[ \\geq 0, \\]\n\nthus \\( Y \\geq \\rho_E^P \\), \\( \\forall x \\in \\mathcal{X} \\). Conversely, if \\( Y \\geq \\rho_E^P \\), \\( \\forall x \\in \\mathcal{X} \\), then\n\n\\[ I_X \\otimes Y = \\sum_{x \\in \\mathcal{X}} |x\\rangle \\langle x| \\otimes Y \\geq \\sum_{x \\in \\mathcal{X}} |x\\rangle \\langle x| \\otimes \\rho_E^P = \\rho_{X E}. \\]\n\nTherefore we have\n\n\\[ I_X \\otimes Y \\geq \\rho_{XE} \\iff Y \\geq \\rho_x^E, \\ \\forall x \\in \\mathcal{X}, \\]\n\nthe dual problem is more explicitly as\n\n\\[ \\inf_Y \\ \\text{tr}(Y) \\]\n\n\\[ \\text{s.t.} \\ Y \\geq \\rho_x, \\ \\forall x \\in \\mathcal{X}, \\]\n\n\\[ Y = Y^\\dagger. \\]", "question": "### (f) Write the dual program to your SDP explicitly." 
}, { "context": "Since the primal SDP problem is strictly feasible and the objective function is bounded above by 1, the problem has strong duality. That is, the supremum of the primal problem and the infimum of the dual problem are the same, i.e. we have\n\n\\[ P_{\\text{guess}}(X|E) = \\inf_Y \\ \\text{tr}(Y) \\]\n\n\\[ \\text{s.t.} \\ Y \\geq \\rho_x, \\ \\forall x \\in \\mathcal{X}, \\]\n\n\\[ Y = Y^\\dagger, \\]\n\nwhich is what we want to conclude.", "question": "### (g) Conclude that the guessing probability satisfies\n\n\\[ P_{\\text{guess}}(X|E) = \\inf_{\\sigma} \\text{Tr}(\\sigma), \\]\n\nwhere the infimum is taken over all matrices \\( \\sigma \\) defined on system \\( E \\) such that \\( \\sigma \\geq \\rho_x \\) for all \\( x \\in \\mathcal{X} \\).\n\nIn the final two parts of this problem, we use the previous parts (whose results you may use even if you didn’t prove them) to compute the min-entropy of cq states given as the tensor product of many copies of the same state." }, { "context": "Given that \\( \\sigma \\geq \\rho_x, \\ \\forall x \\in \\mathcal{X}, \\) we have\n\n\\[ P_{\\text{guess}}(X|E) = \\sup_{M_x} \\sum_{x \\in \\mathcal{X}} \\text{tr}(M_x \\rho_x) \\leq \\sup_{M_x} \\sum_{x \\in \\mathcal{X}} \\text{tr}(M_x \\sigma) = \\sup_{M_x} \\text{tr}\\left( \\left( \\sum_{x \\in \\mathcal{X}} M_x \\right) \\sigma \\right) = \\text{tr}(I_E \\sigma) = \\text{tr}(\\sigma). \\]", "question": "### (h) Show that for any \\( \\sigma \\) such that \\( \\sigma \\geq \\rho_x \\) we have \\( P_{\\text{guess}}(X|E) \\leq \\text{Tr}(\\sigma) \\). 
[Hint: remember Exercise 2 from Homework 3...]" }, { "context": "Suppose that\n\n\\[ \\tau_{X_1 E_1} = \\sum_{x \\in \\mathcal{X}_1} |x\\rangle \\langle x| \\otimes \\tau_x^{E_1}, \\]\n\nthen\n\n\\[ \\rho_{XE} = \\tau_{X_1 E_1}^{\\otimes n} = \\sum_{x_1, x_2, \\ldots, x_n \\in \\mathcal{X}_1} \\left( |x_1\\rangle \\langle x_1| \\otimes \\tau_{x_1}^{E_1} \\right) \\otimes \\left( |x_2\\rangle \\langle x_2| \\otimes \\tau_{x_2}^{E_2} \\right) \\otimes \\cdots \\otimes \\left( |x_n\\rangle \\langle x_n| \\otimes \\tau_{x_n}^{E_n} \\right) \\]\n\n\\[ = \\sum_{x_1, x_2, \\ldots, x_n \\in \\mathcal{X}_1} \\left( |x_1 x_2 \\ldots x_n \\rangle \\langle x_1 x_2 \\ldots x_n| \\right) \\otimes \\left( \\tau_{x_1}^{E_1} \\otimes \\tau_{x_2}^{E_2} \\otimes \\cdots \\otimes \\tau_{x_n}^{E_n} \\right) \\]\n\n\\[ = \\sum_{x \\in \\mathcal{X}} |x\\rangle \\langle x| \\otimes \\rho_x^E, \\]\n\nwhere\n\n\\[ \\mathcal{X} = \\{ x = (x_1, x_2, \\ldots, x_n) : x_i \\in \\mathcal{X}_1, \\ i = 1, 2, \\ldots, n \\}, \\]\n\n\\[ \\rho_x^E = \\tau_{x_1}^{E_1} \\otimes \\tau_{x_2}^{E_2} \\otimes \\cdots \\otimes \\tau_{x_n}^{E_n}, \\ \\forall x \\in \\mathcal{X}. \\]\n\nUsing the previous results, we have\n\n\\[ P_{\\text{guess}}(X_1|E_1) = \\sup_{\\{M_x\\} \\in P_1} \\sum_{x \\in \\mathcal{X}_1} \\text{tr}(M_x \\tau_x^{E_1}) = \\inf_{\\sigma \\in \\Omega_1} \\text{tr}(\\sigma). \\]\n\n\\[ P_{\\text{guess}}(X|E) = \\sup_{\\{M_x\\} \\in P} \\sum_{x \\in \\mathcal{X}} \\text{tr}(M_x \\rho_x^E) = \\inf_{\\sigma \\in \\Omega} \\text{tr}(\\sigma), \\]\n\nwhere\n\n\\[ \\begin{aligned} P_1 &= \\{\\{M_x^1\\} : \\{M_x^1\\} \\text{ is a POVM over } \\mathcal{H}_{E_1}\\}, \\\\ \\Omega_1 &= \\{\\sigma : \\sigma \\geq \\tau_x^{E_1}, \\forall x \\in \\mathcal{X}_1\\}, \\\\ P &= \\{\\{M_x\\} : \\{M_x\\} \\text{ is a POVM over } \\mathcal{H}_E\\}, \\\\ \\Omega &= \\{\\sigma : \\sigma \\geq \\rho_x^E, \\forall x \\in \\mathcal{X}\\}. 
\\end{aligned} \\]\n\nDefine\n\n\\[ \\begin{aligned} \\tilde{P} &= \\{\\{M_x\\} = \\{M_{x_1} \\otimes M_{x_2} \\otimes \\cdots \\otimes M_{x_n}\\} : \\{M_{x_i}\\} \\in P_1, i = 1, 2, \\ldots, n\\}, \\\\ \\tilde{\\Omega} &= \\{\\sigma = \\sigma_1 \\otimes \\sigma_2 \\otimes \\cdots \\otimes \\sigma_n : \\sigma_i \\in \\Omega_1, i = 1, 2, \\ldots, n\\}, \\end{aligned} \\]\n\nthen it's easy to check that\n\n\\[ \\tilde{P} \\subset P, \\quad \\tilde{\\Omega} \\subset \\Omega, \\]\n\nand thus we have\n\n\\[ \\begin{aligned} P_{\\text{guess}}(X|E) &= \\sup_{\\{M_x\\} \\in P} \\sum_{x \\in \\mathcal{X}} \\text{tr}(M_x \\rho_x^E) \\geq \\sup_{\\{M_x\\} \\in \\tilde{P}} \\sum_{x \\in \\mathcal{X}} \\text{tr}(M_x \\rho_x^E), \\\\ P_{\\text{guess}}(X|E) &= \\inf_{\\sigma \\in \\Omega} \\text{tr}(\\sigma) \\leq \\inf_{\\sigma \\in \\tilde{\\Omega}} \\text{tr}(\\sigma). \\end{aligned} \\]\n\nBut on the other hand, we have\n\n\\[ \\begin{aligned} \\sup_{\\{M_x\\} \\in \\tilde{P}} \\sum_{x \\in \\mathcal{X}} \\text{tr}(M_x \\rho_x^E) &= \\sup_{\\{M_x\\} \\in \\tilde{P}} \\sum_{x \\in \\mathcal{X}} \\text{tr}((M_{x_1} \\otimes M_{x_2} \\otimes \\cdots \\otimes M_{x_n})(\\tau_{x_1}^{E_1} \\otimes \\tau_{x_2}^{E_2} \\otimes \\cdots \\otimes \\tau_{x_n}^{E_n})) \\\\ &= \\sup_{\\{M_x\\} \\in \\tilde{P}} \\sum_{x \\in \\mathcal{X}} \\left( \\prod_{i=1}^n \\text{tr}(M_{x_i}^{E_i} \\tau_{x_i}^{E_i}) \\right) \\\\ &= \\sup_{\\{M_x\\} \\in \\tilde{P}} \\prod_{i=1}^n \\left( \\sum_{x_i \\in \\mathcal{X}_1} \\text{tr}(M_{x_i}^{E_i} \\tau_{x_i}^{E_i}) \\right) \\\\ &= \\prod_{i=1}^n \\left( \\sup_{\\{M_{x_i}\\} \\in P_1} \\sum_{x_i \\in \\mathcal{X}_1} \\text{tr}(M_{x_i}^{E_i} \\tau_{x_i}^{E_i}) \\right) \\\\ &= (P_{\\text{guess}}(X_1|E_1))^n, \\end{aligned} \\]\n\n\\[ \\begin{aligned} \\inf_{\\sigma \\in \\tilde{\\Omega}} \\text{tr}(\\sigma) &= \\inf_{\\sigma \\in \\tilde{\\Omega}} \\text{tr}(\\sigma_1 \\otimes \\sigma_2 \\otimes \\cdots \\otimes \\sigma_n) \\\\ &= \\inf_{\\sigma \\in
\\tilde{\\Omega}} \\prod_{i=1}^n \\text{tr}(\\sigma_i) \\\\ &= \\prod_{i=1}^n \\left( \\inf_{\\sigma_i \\in \\Omega_1} \\text{tr}(\\sigma_i) \\right) \\\\ &= \\left( \\inf_{\\sigma \\in \\Omega_1} \\text{tr}(\\sigma) \\right)^n \\\\ &= (P_{\\text{guess}}(X_1|E_1))^n. \\end{aligned} \\]\n\nThus we come to\n\n\\[ \\begin{aligned} P_{\\text{guess}}(X|E) &= \\sup_{\\{M_x\\} \\in P} \\sum_{x \\in \\mathcal{X}} \\text{tr}(M_x \\rho_x^E) \\geq \\sup_{\\{M_x\\} \\in \\tilde{P}} \\sum_{x \\in \\mathcal{X}} \\text{tr}(M_x \\rho_x^E) = (P_{\\text{guess}}(X_1|E_1))^n, \\\\ P_{\\text{guess}}(X|E) &= \\inf_{\\sigma \\in \\Omega} \\text{tr}(\\sigma) \\leq \\inf_{\\sigma \\in \\tilde{\\Omega}} \\text{tr}(\\sigma) = (P_{\\text{guess}}(X_1|E_1))^n, \\end{aligned} \\]\n\n\\[ \\implies P_{\\text{guess}}(X|E) \\geq (P_{\\text{guess}}(X_1|E_1))^n \\geq P_{\\text{guess}}(X|E), \\]\n\n\\[ \\implies P_{\\text{guess}}(X|E) = (P_{\\text{guess}}(X_1|E_1))^n, \\]\n\nand therefore\n\n\\[ H_{\\text{min}}(X|E)_\\rho = -\\log P_{\\text{guess}}(X|E) = -\\log (P_{\\text{guess}}(X_1|E_1))^n = -n \\log P_{\\text{guess}}(X_1|E_1) = n H_{\\text{min}}(X_1|E_1)_\\tau. \\]", "question": "### (i) Suppose that \\( \\rho_{XE} = \\tau_{X_1E_1}^{\\otimes n} \\), where \\( \\tau_{X_1E_1} \\) is a cq-state. That is, \\( \\rho_{XE} \\) is formed of \\( n \\) identical copies of the same state. Show that \\( H_{\\min}(X|E) = n H_{\\min}(X_1|E_1)_{\\tau} \\), where we have used the subscripts \\( \\rho \\) and \\( \\tau \\) to remind ourselves of which states we compute the min-entropy. [Hint: consider the solutions for the primal and dual SDP from the previous problem for a single instance \\( \\sigma_{X_1E_1} \\). Use these solutions to construct matching solutions for the primal and dual SDP associated with the cq state \\( \\rho \\).]" } ]
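The SDP duality above can be sanity-checked numerically on the ensemble from part (a): the pretty-good-measurement achieves 5/9, while \( \sigma^* = I/3 \) is dual-feasible with trace 2/3. The sketch below is our own (not part of the original solution); it uses numpy and our own helper names.

```python
import numpy as np

# Ensemble from part (a): rho_0 = |0><0|, rho_1 = I/2, rho_2 = |1><1|, p_i = 1/3.
rho = [np.diag([1.0, 0.0]), np.eye(2) / 2, np.diag([0.0, 1.0])]
p = [1 / 3] * 3

# Pretty-good-measurement: M_i = p_i * avg^{-1/2} rho_i avg^{-1/2}.
avg = sum(pi * ri for pi, ri in zip(p, rho))        # average state, here I/2
w, v = np.linalg.eigh(avg)
inv_sqrt = v @ np.diag(w ** -0.5) @ v.T             # avg^{-1/2} = sqrt(2) I
M = [pi * inv_sqrt @ ri @ inv_sqrt for pi, ri in zip(p, rho)]
p_pgm = sum(pi * np.trace(Mi @ ri) for pi, Mi, ri in zip(p, M, rho))

# Dual-feasible point sigma* = I/3: check sigma* >= p_i rho_i for every i.
sigma = np.eye(2) / 3
feasible = all(np.linalg.eigvalsh(sigma - pi * ri).min() >= -1e-12
               for pi, ri in zip(p, rho))
```

Running this confirms `p_pgm` is 5/9 and `sigma` certifies \( P_{\text{guess}} \le \text{tr}(\sigma^*) = 2/3 \), matching the optimum computed earlier.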
2016-11-11T00:00:00
Thomas Vidick, Andrea Coladangelo, Jalex Stark, Charles Xu
Caltech
Computing the min-entropy
How much can a quantum register \( E \) help us guess \( X \)? In the following, you will show that \( H_{\min}(X|E) \geq H_{\min}(X) - \log |E| \), where \( |E| \) denotes the dimension of the associated Hilbert space (so \( \log |E| \) is just the number of qubits of \( E \)). (Due to Mandy Huo)
[ { "context": "By definition \\( H_{\\text{min}}(X | E) = -\\log P_{\\text{guess}}(X | E) \\) and \\( H_{\\text{min}}(X) = -\\log \\max_x p_x = -\\log P_{\\text{guess}}(X) \\) so we want to show \\(-\\log P_{\\text{guess}}(X | E) \\ge -\\log P_{\\text{guess}}(X) - \\log |E| = -\\log (P_{\\text{guess}}(X)/|E|)\\). Since \\(-\\log x\\) is monotonically decreasing, we need to show \\( P_{\\text{guess}}(X | E) \\le P_{\\text{guess}}(X)/|E| \\).", "question": "### (a) Write out what we want to show in terms of the guessing probability \\( P_{\\text{guess}}(X|E) \\) using the definition of the min-entropy." }, { "context": "Let \\( A \\ge 0, B \\ge 0 \\). We write the eigendecomposition \\( B = \\sum_i \\lambda_i(B) |u_i\\rangle \\langle u_i| \\). Then using the linearity of trace we have\n\n\\[ \\text{Tr}(AB) = \\text{Tr} \\left( A \\sum_i \\lambda_i(B) |u_i\\rangle \\langle u_i| \\right) = \\sum_i \\lambda_i(B) \\text{Tr}(A |u_i\\rangle \\langle u_i|) \\]\n\n\\[ \\le \\lambda_{\\max}(B) \\sum_i \\text{Tr}(A |u_i\\rangle \\langle u_i|) \\]\n\n\\[ = \\lambda_{\\max}(B) \\text{Tr} \\left( A \\sum_i |u_i\\rangle \\langle u_i| \\right) \\]\n\n\\[ = \\lambda_{\\max}(B) \\text{Tr}(A \\mathbb{I}) \\]\n\n\\[ = \\lambda_{\\max}(B) \\text{Tr}(A) \\]\n\nwhere the inequality step is because \\(\\text{Tr}(A |u_i\\rangle \\langle u_i|) = \\langle u_i| A |u_i\\rangle \\ge 0\\) since \\(A \\ge 0\\).", "question": "### (b) It will be useful to establish the following fact. Suppose given two Hermitian matrices \\( A \\) and \\( B \\), which are positive semidefinite: \\( A \\geq 0 \\) and \\( B \\geq 0 \\). Show that \n\\[ \\text{Tr}(AB) \\leq \\lambda_{\\text{max}}(B) \\text{Tr}(A), \\]\nwhere \\( \\lambda_{\\text{max}}(B) \\) is the largest eigenvalue of \\( B \\)." }, { "context": "Let \\(\\{M_x\\}\\) be a POVM and \\(\\rho_x^E\\) be a quantum state. 
Then \\(M_x \\ge 0\\) and since \\(\\rho_x^E\\) is a density matrix, we have \\(\\rho_x^E \\ge 0\\) so \\(\\lambda_i(\\rho_x^E) \\ge 0\\) so \\(\\sum_i \\lambda_i(\\rho_x^E) = \\text{Tr}(\\rho_x^E) = 1 \\implies \\lambda_{\\max}(\\rho_x^E) \\le 1\\). Then applying part (b) gives\n\n\\[ \\text{Tr}(M_x \\rho_x^E) \\le \\lambda_{\\max}(\\rho_x^E) \\text{Tr}(M_x) \\le \\text{Tr}(M_x). \\]", "question": "### (c) Use this fact to show that for any POVM \\( \\{M_x\\} \\) and any quantum state \\( \\rho_E^x \\) we have \n\\[ \\text{Tr}(M_x \\rho_E^x) \\leq \\text{Tr}(M_x). \\]" }, { "context": "Since \\( p_x \\geq 0 \\), applying part (c),\n\n\\[ \\sum_x p_x \\text{Tr} \\left( M_x \\rho_E^x \\right) \\leq \\sum_x p_x \\text{Tr} (M_x) \\leq \\left( \\max_x p_x \\right) \\sum_x \\text{Tr} (M_x) \\]\n\n\\[ = \\left( \\max_x p_x \\right) \\text{Tr} \\left( \\sum_x M_x \\right) \\]\n\n\\[ = \\left( \\max_x p_x \\right) \\text{Tr} (\\mathbb{I}_E) \\]\n\n\\[ = \\left( \\max_x p_x \\right) |E|. \\]\n\nThen we have\n\n\\[ P_{\\text{guess}}(X | E) = \\max_{\\{M_x\\}} \\sum_x p_x \\text{Tr} (M_x \\rho_E^x) \\leq \\left( \\max_x p_x \\right) |E|. \\]\n\nSince \\( - \\log x \\) is monotonically decreasing, we have\n\n\\[ H_{\\min}(X | E) = - \\log P_{\\text{guess}}(X | E) \\geq - \\log \\left( \\left( \\max_x p_x \\right) |E| \\right) = - \\log \\left( \\max_x p_x \\right) - \\log |E| \\]\n\n\\[ = H_{\\min}(X) - \\log |E|. \\]", "question": "### (d) Using this trick together with what you know about POVMs, show that \n\\[ H_{\\min}(X|E) \\geq H_{\\min}(X) - \\log |E|. \\]" } ]
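The trace inequality from part (b) is easy to probe numerically. Below is a small sketch of our own: it generates random positive semidefinite matrices and checks \( \text{Tr}(AB) \le \lambda_{\max}(B)\,\text{Tr}(A) \) on each pair.

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_psd(d):
    # G G^dagger is positive semidefinite for any complex matrix G
    g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return g @ g.conj().T

violations = 0
for _ in range(200):
    A, B = rand_psd(4), rand_psd(4)
    lhs = np.trace(A @ B).real
    rhs = np.linalg.eigvalsh(B).max() * np.trace(A).real
    if lhs > rhs + 1e-9:
        violations += 1
```

No violations should occur, consistent with the proof via the eigendecomposition of \( B \).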
2016-11-11T00:00:00
Thomas Vidick, Andrea Coladangelo, Jalex Stark, Charles Xu
Caltech
Bounding the winning probability in the CHSH game
The goal of this problem is to demonstrate that no quantum strategy, however large a quantum state it uses, can succeed with probability larger than \( \cos^2(\pi/8) \approx 0.85 \) in the CHSH game. The first step consists in having an accurate model for what a “quantum strategy” is. The players, Alice and Bob, should be allowed to use an arbitrary bipartite state \( |\psi\rangle_{AB} \in \mathcal{H}_A \otimes \mathcal{H}_B \), where \( \mathcal{H}_A \) and \( \mathcal{H}_B \) are arbitrary vector spaces (of finite dimension). Next, upon reception of a question \( a \in \{0, 1\} \), Alice can make an arbitrary measurement (POVM) \( \{A_a^0, A_a^1\} \) on her system, and similarly for Bob with \( \{B_b^0, B_b^1\} \). It is important to convince yourselves that any kind of strategy can be implemented in this way, including making repeated measurements in sequence, unitaries, etc. Indeed, ultimately a “strategy” receives as input a question, makes some sequence of quantum operations, and returns an answer: it is in any case something that can be modeled via a POVM. So for the remainder of the problem, let us fix an arbitrary entangled state \( |\psi\rangle_{AB} \) and POVMs \( \{A_x^0, A_x^1\} \) for \( x \in \{0,1\} \) and \( \{B_y^0, B_y^1\} \) for \( y \in \{0,1\} \) on that state. For convenience we also define \[ A_x = A_x^0 - A_x^1 \] and \[ B_y = B_y^0 - B_y^1. \] (Due to De Huang)
[ { "context": "Since \\(\\{A_x^0, A_x^1\\}\\) is a valid POVM, we have\n\n\\[ 0 \\leq A_x^0 \\leq I_{A_x}, \\quad 0 \\leq A_x^1 \\leq I_{A_x}, \\]\n\n\\[ \\implies -I_{A_x} \\leq A_x = A_x^0 - A_x^1 \\leq I_{A_x}, \\]\n\n\\[ \\implies r(A_x) \\leq 1, \\]\n\nwhere \\(r(A_x)\\) denotes the spectral radius of \\(A_x\\). Since \\(A_x^0, A_x^1\\) are Hermitian, \\(A_x\\) is also Hermitian, thus the largest singular value of \\(A_x\\) equals \\(r(A_x)\\). Therefore\n\n\\[ \\|A_x\\| = r(A_x) \\leq 1. \\]\n\nThe same argument also works for \\(B_y\\), i.e. \\(\\|B_y\\| \\leq 1\\).", "question": "### (a) Show that if \\( \\{A_a^0, A_a^1\\} \\) is a valid POVM then \\( \\|A_x\\| \\leq 1 \\) (where as usual \\( \\|\\cdot\\| \\) is the operator norm, the largest singular value). Similarly for \\( B_y \\)." }, { "context": "Suppose that \\(A_x\\) is diagonalized in a basis \\(\\{\\phi_i\\}\\), i.e.\n\n\\[ A_x|\\phi_i\\rangle = \\lambda_i|\\phi_i\\rangle, \\quad i = 0, 1, \\ldots, d_a - 1, \\]\n\n\\[ \\langle \\phi_i|\\phi_j \\rangle = \\delta_{ij}, \\quad i, j = 0, 1, \\ldots, d_a - 1, \\]\n\nwhere \\(d_a\\) is the dimension of \\(\\mathcal{H}_A\\), and \\(\\lambda_i, i = 0, 1, \\ldots, d_a - 1\\), are all eigenvalues of \\(A_x\\). Then\n\n\\[ \\lambda_i^2 \\leq \\|A_x\\|^2 \\leq 1, \\quad i = 0, 1, \\ldots, d_a - 1, \\]\n\nsince \\(A_x\\) is Hermitian. We can always write \\(|\\psi\\rangle_{AB}\\) as\n\n\\[ |\\psi\\rangle_{AB} = \\sum_{i=0}^{d_A-1} \\sum_{j=0}^{d_B-1} \\alpha_{ij} |\\phi_i\\rangle_A |j\\rangle_B. 
\\]\n\nThen we have\n\n\\[ \\| |u_x\\rangle \\|^2 = \\langle u_x|u_x \\rangle = \\langle \\psi|_{AB}(A_x \\otimes I_B)(A_x \\otimes I_B)|\\psi\\rangle_{AB} = \\sum_{i,j} \\sum_{k,l} \\overline{\\alpha_{ij}} \\alpha_{kl} \\langle \\phi_i|A_x^2|\\phi_k \\rangle_A \\langle j|l \\rangle_B = \\sum_{i,j} \\sum_{k,l} \\overline{\\alpha_{ij}} \\alpha_{kl} \\lambda_k^2 \\langle \\phi_i|\\phi_k \\rangle_A \\langle j|l \\rangle_B = \\sum_{i,j} |\\alpha_{ij}|^2 \\lambda_i^2 \\leq \\sum_{i,j} |\\alpha_{ij}|^2 = \\| |\\psi\\rangle_{AB} \\|^2 = 1, \\]\n\nthat is \\(\\| |u_x\\rangle \\| \\leq 1\\). Similarly, we can also prove that \\(\\| |v_y\\rangle \\| \\leq 1\\).", "question": "### (b) For \\( x \\in \\{0, 1\\} \\) define \\( |u_x\\rangle = A_x \\otimes I_B |\\psi\\rangle_{AB} \\), and for \\( y \\in \\{0, 1\\} \\) define \\( |v_y\\rangle = I_A \\otimes B_y |\\psi\\rangle_{AB} \\). Give a bound on the Euclidean norms \\( \\|u_x\\|, \\|v_y\\| \\)." }, { "context": "By direct calculation, we have\n\n\\[ \\langle u_x|v_y \\rangle = \\langle \\psi|_{AB}(A_x \\otimes I_B)(I_A \\otimes B_y)|\\psi\\rangle_{AB} = \\langle \\psi|_{AB}(A_x \\otimes B_y)|\\psi\\rangle_{AB} = \\langle \\psi|_{AB}(A_x^0 \\otimes B_y^0)|\\psi\\rangle_{AB} - \\langle \\psi|_{AB}(A_x^1 \\otimes B_y^0)|\\psi\\rangle_{AB} - \\langle \\psi|_{AB}(A_x^0 \\otimes B_y^1)|\\psi\\rangle_{AB} + \\langle \\psi|_{AB}(A_x^1 \\otimes B_y^1)|\\psi\\rangle_{AB}. 
\\]\n\nThen for \\((x, y) \\neq (1, 1)\\),\n\n\\[ \\langle u_x|v_y \\rangle = 2\\langle \\psi|_{AB}(A_x^0 \\otimes B_y^0)|\\psi\\rangle_{AB} + 2\\langle \\psi|_{AB}(A_x^1 \\otimes B_y^1)|\\psi\\rangle_{AB} - \\langle \\psi|_{AB}((A_x^0 + A_x^1) \\otimes (B_y^0 + B_y^1))|\\psi\\rangle_{AB} = 2p(0, 0|x, y) + 2p(1, 1|x, y) - 1, \\]\n\nso that\n\n\\[ p(0, 0|x, y) + p(1, 1|x, y) = \\frac{1}{2}(1 + \\langle u_x|v_y \\rangle). \\]\n\nFor \\((x, y) = (1, 1)\\),\n\n\\[ \\begin{aligned} \\langle u_{1} | v_{1} \\rangle &= \\langle \\psi |_{AB} ((A_{1}^{0} + A_{1}^{1}) \\otimes (B_{1}^{0} + B_{1}^{1})) | \\psi \\rangle_{AB} \\\\ &\\quad - 2 \\langle \\psi |_{AB} (A_{1}^{1} \\otimes B_{1}^{0}) | \\psi \\rangle_{AB} - 2 \\langle \\psi |_{AB} (A_{1}^{0} \\otimes B_{1}^{1}) | \\psi \\rangle_{AB} \\\\ &= \\langle \\psi |_{AB} (\\mathbb{I}_{A} \\otimes \\mathbb{I}_{B}) | \\psi \\rangle_{AB} - 2p(1, 0 | 1, 1) - 2p(0, 1 | 1, 1) \\\\ &= 1 - 2p(1, 0 | 1, 1) - 2p(0, 1 | 1, 1), \\end{aligned} \\]\n\nso that\n\n\\[ p(1, 0 | 1, 1) + p(0, 1 | 1, 1) = \\frac{1}{2} (1 - \\langle u_{1} | v_{1} \\rangle). 
\\]\n\nFinally, since the players win on questions \\((x, y)\\) exactly when \\(a \\oplus b = x \\wedge y\\), we have\n\n\\[ \\begin{aligned} p_{\\text{succ}} &= \\sum_{x, y \\in \\{0, 1\\}} p(x, y) \\sum_{a \\oplus b = x \\wedge y} p(a, b | x, y) \\\\ &= \\frac{1}{4} \\big( p(0, 0 | 0, 0) + p(1, 1 | 0, 0) + p(0, 0 | 0, 1) + p(1, 1 | 0, 1) \\\\ &\\quad + p(0, 0 | 1, 0) + p(1, 1 | 1, 0) + p(0, 1 | 1, 1) + p(1, 0 | 1, 1) \\big) \\\\ &= \\frac{1}{8} \\big( 1 + \\langle u_{0} | v_{0} \\rangle + 1 + \\langle u_{1} | v_{0} \\rangle + 1 + \\langle u_{0} | v_{1} \\rangle + 1 - \\langle u_{1} | v_{1} \\rangle \\big) \\\\ &= \\frac{1}{2} + \\frac{1}{8} \\big( \\langle u_{0} | v_{0} \\rangle + \\langle u_{1} | v_{0} \\rangle + \\langle u_{0} | v_{1} \\rangle - \\langle u_{1} | v_{1} \\rangle \\big). \\end{aligned} \\]", "question": "### (c) Show that the success probability of the quantum strategy in the CHSH game can be expressed as \n\\[ p_{\\text{succ}} = \\frac{1}{2} + \\frac{1}{8} \\left( \\langle u_0 | v_0 \\rangle + \\langle u_1 | v_0 \\rangle + \\langle u_0 | v_1 \\rangle - \\langle u_1 | v_1 \\rangle \\right). \\]" }, { "context": "Let \\(c = \\max \\{ \\| r_{0} \\|, \\| r_{1} \\|, \\| s_{0} \\|, \\| s_{1} \\| \\}\\). Notice that\n\n\\[ \\begin{aligned} \\| |r_{0}\\rangle + |r_{1}\\rangle \\|^{2} + \\| |r_{0}\\rangle - |r_{1}\\rangle \\|^{2} &= (\\langle r_{0}| + \\langle r_{1}|) (| r_{0} \\rangle + | r_{1} \\rangle) + (\\langle r_{0}| - \\langle r_{1}|) (| r_{0} \\rangle - | r_{1} \\rangle) \\\\ &= 2 \\langle r_{0} | r_{0} \\rangle + 2 \\langle r_{1} | r_{1} \\rangle \\\\ &= 2 \\| r_{0} \\|^{2} + 2 \\| r_{1} \\|^{2} \\\\ &\\leq 4c^{2}, \\end{aligned} \\]\n\nthus, by the Cauchy-Schwarz inequality,\n\n\\[ \\| |r_{0}\\rangle + |r_{1}\\rangle \\| + \\| |r_{0}\\rangle - |r_{1}\\rangle \\| \\leq \\sqrt{2} \\left( \\| |r_{0}\\rangle + |r_{1}\\rangle \\|^{2} + \\| |r_{0}\\rangle - |r_{1}\\rangle \\|^{2} \\right)^{\\frac{1}{2}} \\leq 2 \\sqrt{2} c. 
\\]\n\nThen we have\n\n\\[ \\begin{aligned} | \\langle r_{0} | s_{0} \\rangle + \\langle r_{1} | s_{0} \\rangle + \\langle r_{0} | s_{1} \\rangle - \\langle r_{1} | s_{1} \\rangle | &\\leq | \\langle r_{0} | s_{0} \\rangle + \\langle r_{1} | s_{0} \\rangle | + | \\langle r_{0} | s_{1} \\rangle - \\langle r_{1} | s_{1} \\rangle | \\\\ &= | (\\langle r_{0} | + \\langle r_{1} |) | s_{0} \\rangle | + | (\\langle r_{0} | - \\langle r_{1} |) | s_{1} \\rangle | \\\\ &\\leq \\| |r_{0}\\rangle + |r_{1}\\rangle \\| \\| s_{0} \\| + \\| |r_{0}\\rangle - |r_{1}\\rangle \\| \\| s_{1} \\| \\\\ &\\leq c \\| |r_{0}\\rangle + |r_{1}\\rangle \\| + c \\| |r_{0}\\rangle - |r_{1}\\rangle \\| \\\\ &\\leq 2 \\sqrt{2} c. \\end{aligned} \\]", "question": "### (d) Show that for any vectors \\( |r_0\\rangle, |r_1\\rangle, |s_0\\rangle, |s_1\\rangle \\), the inequality \n\\[ |\\langle r_0 | s_0 \\rangle + \\langle r_1 | s_0 \\rangle + \\langle r_0 | s_1 \\rangle - \\langle r_1 | s_1 \\rangle| \\leq 2\\sqrt{2} \\max \\{ \\|r_0\\|, \\|r_1\\|, \\|s_0\\|, \\|s_1\\| \\} \\]\nholds." }, { "context": "Using the result of (b), we have\n\n\\[ c = \\max\\{\\| |u_0\\rangle \\|, \\| |u_1\\rangle \\|, \\| |v_0\\rangle \\|, \\| |v_1\\rangle \\|\\} \\leq 1. \\]\n\nThen using the result of (d), we have\n\n\\[ \\left| \\langle u_0 | v_0 \\rangle + \\langle u_1 | v_0 \\rangle + \\langle u_0 | v_1 \\rangle - \\langle u_1 | v_1 \\rangle \\right| \\leq 2 \\sqrt{2} c \\leq 2 \\sqrt{2}. \\]\n\nFinally using the result of (c), we have\n\n\\[ p_{\\text{succ}} = \\frac{1}{2} + \\frac{1}{8} \\left( \\langle u_0 | v_0 \\rangle + \\langle u_1 | v_0 \\rangle + \\langle u_0 | v_1 \\rangle - \\langle u_1 | v_1 \\rangle \\right) \\]\n\n\\[ \\leq \\frac{1}{2} + \\frac{1}{8} \\left| \\langle u_0 | v_0 \\rangle + \\langle u_1 | v_0 \\rangle + \\langle u_0 | v_1 \\rangle - \\langle u_1 | v_1 \\rangle \\right| \\]\n\n\\[ \\leq \\frac{1}{2} + \\frac{\\sqrt{2}}{4} \\]\n\n\\[ = \\cos^2 \\frac{\\pi}{8}. 
\\]", "question": "### (e) Conclude that \\( p_{\\text{succ}} \\leq \\cos^2(\\pi/8) \\)." } ]
2016-11-11T00:00:00
Thomas Vidick, Andrea Coladangelo, Jalex Stark, Charles Xu
Caltech
A guessing game
Imagine Alice and Eve play a guessing game where they share some state \( \rho_{AE} \) and Alice produces a random bit \( \theta \), measures her qubit in the standard basis if \( \theta = 0 \) and measures in the Hadamard basis if \( \theta = 1 \). In both cases she obtains a bit \( x \) as measurement outcome. She then announces \( \theta \) to Eve. Eve's goal is then to guess the bit \( x \). Now imagine \( \rho_{AE} = |\Phi\rangle \langle \Phi| \), where \( |\Phi\rangle = \frac{1}{\sqrt{2}} (|00\rangle + |11\rangle) \), so Alice and Eve share a maximally entangled pair of qubits. In this scenario you know that, if Eve measures in the same basis as Alice, she will get the same outcome and thus be able to guess \( x \) perfectly in this situation. (Due to Mandy Huo, adapted by the TAs)
[ { "context": "If Eve knows both \\(\\theta\\) and \\(U_A\\) then she can guess perfectly in all cases by applying \\(U_E = U_A\\), the element-wise complex conjugation of \\(U_A\\), and measuring in the same basis as Alice. This is because, from Homework 4, we know that \\(U \\otimes I | \\phi^+ \\rangle = I \\otimes U^T | \\phi^+ \\rangle\\), and to undo \\(U^T\\) Eve just applies \\((U^T)^\\dagger = \\overline{U}\\).", "question": "### (a) However, Alice wants to foil Eve so, before measuring, she first applies some unitary \\( U \\) to her qubit, and then measures. Of course Eve, being really smart, gets wind of this so she will know what unitary Alice has used before measuring. So they share the state\n\\[ |\\Phi_U\\rangle = (U_A \\otimes \\mathbb{I}_E) \\frac{1}{\\sqrt{2}} (|00\\rangle + |11\\rangle) \\]\nand Eve knows \\( \\theta \\) and \\( U \\). What would now be Eve's best guessing probability? (Tip: Eve has a quantum computer!)" }, { "context": "Alice should use the following strategy:\n\n1. If \\(\\theta = 0\\) then Alice applies \\(I\\) with probability \\(1/2\\), \\(X\\) with probability \\(1/2\\), or \\(Z\\) with probability zero before measuring.\n2. If \\(\\theta = 1\\) then Alice applies \\(I\\) with probability \\(1/2\\), \\(Z\\) with probability \\(1/2\\), or \\(X\\) with probability zero before measuring.\n\nWith this strategy, when \\(\\theta = 0\\), the shared state is either \\(\\frac{1}{\\sqrt{2}} (|00\\rangle + |11\\rangle)\\) or \\(\\frac{1}{\\sqrt{2}} (|01\\rangle + |10\\rangle)\\), each with probability \\(1/2\\). So if \\(x = 0\\), since Eve does not know which unitary Alice applied, Eve has the state \\(\\frac{1}{2} |0\\rangle (|0\\rangle + \\frac{1}{2} |1\\rangle)\\), and if \\(x = 1\\), Eve has the state \\(\\frac{1}{2} |1\\rangle (|1\\rangle + \\frac{1}{2} |0\\rangle)\\). 
Then Eve cannot distinguish between the two outcomes so at best she can guess randomly.\n\nSimilarly, when \\(\\theta = 1\\) the shared state is either \\(\\frac{1}{\\sqrt{2}} (|00\\rangle + |11\\rangle) = \\frac{1}{\\sqrt{2}} (|++\\rangle + |--\\rangle)\\) or \\(\\frac{1}{\\sqrt{2}} (|00\\rangle - |11\\rangle) = \\frac{1}{\\sqrt{2}} (|+-\\rangle + |-+\\rangle)\\), each with probability \\(1/2\\). So if \\(x = 0\\), Eve has the state \\(\\frac{1}{2} |+\\rangle \\langle +| + \\frac{1}{2} |-\\rangle \\langle -|\\), and if \\(x = 1\\), Eve has the state \\(\\frac{1}{2} |-\\rangle \\langle -| + \\frac{1}{2} |+\\rangle \\langle +|\\). Hence in both cases Eve's best strategy is to guess randomly so she will guess correctly with probability \\(1/2\\). Since the worst strategy Eve can use is a random guess, this strategy makes Eve's guessing probability the lowest possible.", "question": "### (b) Consider now a scenario in which Eve doesn’t know the local unitary that Alice applies. Suppose that Alice can choose her unitary from a set of three unitaries (and assume that she always applies one unitary from this set before measuring). You should assume that, once Alice decides on a (probabilistic) strategy, this is fixed. Provide a strategy for Alice (including her choice of the set of three unitaries) that makes Eve’s guessing probability the lowest possible." }, { "context": "Alice should, for all \\(\\theta\\), first apply either \\(I\\) or \\(XZ\\) (which is \\(Y\\) up to a global phase) each with probability \\(1/2\\). We will show that this achieves the same probability as in part (b).\n\nWith this strategy, when \\( \\theta = 0 \\), the shared state is either \\( \\frac{1}{\\sqrt{2}} (|00\\rangle + |11\\rangle) \\) or \\( \\frac{1}{\\sqrt{2}} (|10\\rangle - |01\\rangle) \\), each with probability 1/2. 
Then if \\( x = 0 \\), since Eve does not know which unitary Alice used, Eve has the state \\( \\frac{1}{2} |0\\rangle \\langle 0| + \\frac{1}{2} |1\\rangle \\langle 1| \\), and if \\( x = 1 \\), Eve has the state \\( \\frac{1}{2} |1\\rangle \\langle 1| + \\frac{1}{2} |0\\rangle \\langle 0| \\). Then Eve cannot distinguish between the two so at best she can guess randomly.\n\nSimilarly, when \\( \\theta = 1 \\) the shared state is either \\( \\frac{1}{\\sqrt{2}} (|00\\rangle + |11\\rangle) = \\frac{1}{\\sqrt{2}} (|++\\rangle + |--\\rangle) \\) or \\( \\frac{1}{\\sqrt{2}} (|+-\\rangle - |-+\\rangle) \\), each with probability 1/2. So if \\( x = 0 \\), Eve has the state \\( \\frac{1}{2} |+\\rangle \\langle +| + \\frac{1}{2} |-\\rangle \\langle -| \\), and if \\( x = 1 \\), Eve has the state \\( \\frac{1}{2} |-\\rangle \\langle -| + \\frac{1}{2} |+\\rangle \\langle +| \\). Hence in both cases Eve's best strategy is to guess randomly so she will guess correctly with probability 1/2.", "question": "### (c) Suppose we restrict Alice’s set of possible unitaries to contain only two. Can she still make Eve’s guessing probability as low as in part (b)?" } ]
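The claim in part (c) can be verified numerically: mixing \( I \) and \( XZ \) leaves Eve's conditional state maximally mixed for both of Alice's bases. Below is a sketch of our own (the helper names are ours):

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
epr = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)  # (Alice, Eve) ordering

def eve_state(theta, x):
    # Eve's conditional state given Alice's basis theta and outcome x,
    # averaged over Alice's secret unitary (I or XZ, each with prob. 1/2).
    a = (H if theta == 1 else I2)[:, x]          # Alice's measurement vector
    rho = np.zeros((2, 2), dtype=complex)
    for U in (I2, X @ Z):
        v = np.kron(a.conj() @ U, I2) @ epr      # Eve's unnormalized post-state
        rho += 0.5 * np.outer(v, v.conj())
    return rho / np.trace(rho)
```

Every combination of basis and outcome yields \( I/2 \), so Eve learns nothing from her qubit.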
2016-11-11T00:00:00
Thomas Vidick, Andrea Coladangelo, Jalex Stark, Charles Xu
Caltech
Decoherence
A quantum state can naturally be exposed to a phenomenon called decoherence, due to its interaction with the surrounding environment. Suppose the state of a qubit and the surrounding environment is initially \( |\Psi\rangle |E\rangle \), where \( |\Psi\rangle = \alpha |0\rangle + \beta |1\rangle \), and \( |E\rangle \) is the initial state of the environment. (Due to Mandy Huo)
[ { "context": "We have \\( |\\Psi\\rangle |E\\rangle \\rightarrow \\alpha |0\\rangle |E_0\\rangle + \\beta |1\\rangle |E_1\\rangle \\) so\n\n\\[ |\\Psi\\rangle \\langle \\Psi| \\otimes |E\\rangle \\langle E| = |\\alpha|^2 |0\\rangle \\langle 0| \\otimes |E_0\\rangle \\langle E_0| + \\alpha \\beta^* |0\\rangle \\langle 1| \\otimes |E_0\\rangle \\langle E_1| + \\alpha^* \\beta |1\\rangle \\langle 0| \\otimes |E_1\\rangle \\langle E_0| + |\\beta|^2 |1\\rangle \\langle 1| \\otimes |E_1\\rangle \\langle E_1| \\]\n\nAssuming \\( \\langle E_0|E_1 \\rangle \\) is real, we have \\( \\langle E_0|E_1 \\rangle = \\langle E_1|E_0 \\rangle \\). Since \\( |E\\rangle \\), \\( |E_0\\rangle \\), and \\( |E_1\\rangle \\) are normalized, tracing out the environment gives\n\n\\[ |\\Psi\\rangle \\langle \\Psi| \\otimes \\text{Tr} (|E\\rangle \\langle E|) = |\\alpha|^2 |0\\rangle \\langle 0| + \\langle E_0|E_1 \\rangle (\\alpha \\beta^* |0\\rangle \\langle 1| + \\alpha^* \\beta |1\\rangle \\langle 0|) + |\\beta|^2 |1\\rangle \\langle 1| \\]\n\nDefine \\( p = \\frac{1 - \\langle E_0|E_1 \\rangle}{2} \\). We will show later that \\( p \\) is in fact a valid probability. Note that \\( Z|\\Psi\\rangle = \\alpha |0\\rangle - \\beta |1\\rangle \\). 
Then the qubit state can be rewritten as\n\n\\[ |\\alpha|^2 |0\\rangle \\langle 0| + (1 - 2p)(\\alpha \\beta^* |0\\rangle \\langle 1| + \\alpha^* \\beta |1\\rangle \\langle 0|) + |\\beta|^2 |1\\rangle \\langle 1| \\]\n\\[ = (1 - p) (|\\alpha|^2 |0\\rangle \\langle 0| + \\alpha \\beta^* |0\\rangle \\langle 1| + \\alpha^* \\beta |1\\rangle \\langle 0| + |\\beta|^2 |1\\rangle \\langle 1|) \\]\n\\[ + p (|\\alpha|^2 |0\\rangle \\langle 0| - \\alpha \\beta^* |0\\rangle \\langle 1| - \\alpha^* \\beta |1\\rangle \\langle 0| + |\\beta|^2 |1\\rangle \\langle 1|) \\]\n\\[ = (1 - p)|\\Psi\\rangle \\langle \\Psi| + pZ|\\Psi\\rangle \\langle \\Psi|Z \\]\n\nSo \\( |\\Psi\\rangle \\langle \\Psi| \\rightarrow (1 - p)|\\Psi\\rangle \\langle \\Psi| + pZ|\\Psi\\rangle \\langle \\Psi|Z \\). Note that \\( |E_i\\rangle \\langle E_i| \\geq 0 \\) since \\( \\langle u|E_i\\rangle \\langle E_i|u\\rangle = |\\langle u|E_i\\rangle|^2 \\geq 0 \\) for any \\( |u\\rangle \\) and \\( \\lambda_{\\max} (|E_i\\rangle \\langle E_i|) \\leq 1 \\) since \\( \\sum_i \\lambda_i (|E_i\\rangle \\langle E_i|) = \\text{Tr} (|E_i\\rangle \\langle E_i|) = 1 \\) and \\( \\lambda_i (|E_i\\rangle \\langle E_i|) \\geq 0 \\). Then by problem 2(b) we have\n\n\\[ |\\langle E_0|E_1 \\rangle|^2 = \\text{Tr} (|E_0\\rangle \\langle E_0| |E_1\\rangle \\langle E_1|) \\leq \\lambda_{\\max} (|E_1\\rangle \\langle E_1|) \\text{Tr} (|E_0\\rangle \\langle E_0|) \\leq 1 \\]\n\nThen we have \\( |\\langle E_0|E_1 \\rangle| \\leq 1 \\) which implies \\( 0 \\leq p \\leq 1 \\) so \\( p \\) is a valid probability.", "question": "### Suppose that this state undergoes “decoherence”, as described by the CPTP map\n\n\\[ |0\\rangle |E\\rangle \\mapsto |0\\rangle |E_0\\rangle , \\]\n\\[ |1\\rangle |E\\rangle \\mapsto |1\\rangle |E_1\\rangle , \\]\n\nwhere the states \\( |E\\rangle \\), \\( |E_0\\rangle \\) and \\( |E_1\\rangle \\) are normalized but not necessarily orthogonal. 
Show that the density matrix of the qubit evolves as\n\n\\[ |\\Psi\\rangle \\langle \\Psi| \\mapsto (1 - p) |\\Psi\\rangle \\langle \\Psi| + pZ |\\Psi\\rangle \\langle \\Psi| Z. \\]\n\nAssuming \\( \\langle E_0 | E_1 \\rangle \\) is real, express \\( p \\) in terms of \\( \\langle E_0 | E_1 \\rangle \\). This means that with the probability \\( 1 - p \\) the qubit is not affected by the environment and with probability \\( p \\) the qubit undergoes a phase-flip error." } ]
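The claimed evolution can be checked numerically for a concrete pair of non-orthogonal environment states. The sketch below is our own, with an assumed two-dimensional environment and made-up values of \( \alpha, \beta, \theta \); it compares the partial trace of the evolved joint state against \( (1-p)|\Psi\rangle\langle\Psi| + pZ|\Psi\rangle\langle\Psi|Z \) with \( p = (1 - \langle E_0|E_1\rangle)/2 \).

```python
import numpy as np

alpha, beta = 0.6, 0.8
psi = np.array([alpha, beta])
theta = 0.4
E0 = np.array([1.0, 0.0])
E1 = np.array([np.cos(theta), np.sin(theta)])   # <E0|E1> = cos(theta), real

# joint state after decoherence: alpha |0>|E0> + beta |1>|E1>
joint = np.kron(np.array([1.0, 0.0]), alpha * E0) + np.kron(np.array([0.0, 1.0]), beta * E1)
rho_joint = np.outer(joint, joint)

# partial trace over the 2-dimensional environment (index order: qubit, env)
rho_q = rho_joint.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

p = (1 - E0 @ E1) / 2
Z = np.diag([1.0, -1.0])
proj = np.outer(psi, psi)
expected = (1 - p) * proj + p * Z @ proj @ Z
```

`rho_q` and `expected` agree, confirming the phase-flip form of the channel for this choice of environment states.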
2016-11-11T00:00:00
Thomas Vidick, Andrea Coladangelo, Jalex Stark, Charles Xu
Caltech
Classical one-time pad
We meet up again with our favourite protagonists, Alice and Bob. As you’ve seen in class, Alice and Bob have an adversary named Eve who is intent on listening in on all the conversations Alice and Bob have. In order to protect themselves, they exchange a classical key \( k = k_1 k_2 \ldots k_n \) which they can use to encrypt messages and hence be safe from Eve. Alice knows that a safe way to encode messages would be to use a classical one-time pad, as seen in the lecture notes. But she feels like this uses a large amount of key, and being a smart student she comes up with the following encoding scheme which she claims is also secure but uses less key. Alice’s scheme goes as follows. Alice’s message is an \( n \)-bit string \( m = m_1 m_2 \ldots m_n \). For \( i \) from 1 to \( n \), 1. Alice flips a fair coin. 2. If the result is tails, she sets \( c_i = m_i \oplus k_i \). 3. If the result is heads, she sets \( c_i = m_i \oplus r \), where \( r \) is a fresh random bit. The encrypted ciphertext is \( c = c_1 c_2 \ldots c_n \).
[ { "context": "(Due to Daniel Gu)\nLet \\( X \\) be the random variable which is the number of bits that Alice uses in total, and \\( X_i \\) be the number of bits that Alice uses at step \\( i \\) in the protocol. Then \\( X = \\sum_{i=1}^{n} X_i \\) and so by linearity of expectation\n\n\\[\n\\mathbb{E}[X] = \\mathbb{E}\\left[\\sum_{i=1}^{n} X_i\\right] = \\sum_{i=1}^{n} \\mathbb{E}[X_i] = \\frac{n}{2}\n\\]", "question": "### (a) How many bits of key will Alice use on average with the new protocol?" }, { "context": "(Due to Daniel Gu)\nThe scheme is certainly not correct: since Alice doesn’t send her random choices (random bits) to Bob but only the ciphertext \\( c \\), Bob has no idea which bits got XOR’d with the key and which got XOR’d with a random bit. No deterministic algorithm can decrypt the ciphertext accurately, since once we fix the ciphertext, key, and \\( \\text{DEC}(k, c) \\), we can choose our random bits such that our message does not match our decryption algorithm’s answer.\n\nHowever, the scheme is secure. The probability that the \\( i \\)-th bit of the message is 0 (over the random choices made by Alice and a uniformly random key distribution) given that the \\( i \\)-th bit of the ciphertext is \\( b \\) is 1/2, since with probability 1/2 we XOR \\( b \\) with the \\( i \\)-th bit of the key, which without knowledge of the key is equally likely to be 0 or 1, so it has a 1/2 chance of being 0 and producing 0, and with probability 1/2 we XOR it with a uniformly random bit, which also has a 1/2 chance of being \\( b \\). So given the ciphertext, the distribution of possible messages is the uniform distribution over all \\( n \\) bit messages, so the scheme is secure.", "question": "### (b) Is this protocol correct? Is it secure? Provide a proof of security or an attack scheme." } ]
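Alice's scheme is also easy to simulate. The sketch below is our own illustration (function names are made up): it estimates the average key usage, which concentrates around \( n/2 \), and checks that each ciphertext bit is close to uniformly distributed, matching the security argument.

```python
import random

# Alice's encryption: each message bit is XORed with the key bit (tails)
# or with a fresh random bit (heads).
def encrypt(m, k, rng):
    c, key_bits_used = [], 0
    for mi, ki in zip(m, k):
        if rng.random() < 0.5:            # tails: consume the key bit
            c.append(mi ^ ki)
            key_bits_used += 1
        else:                             # heads: use a fresh random bit
            c.append(mi ^ rng.randint(0, 1))
    return c, key_bits_used

rng = random.Random(0)
n, trials = 64, 2000
total_used = 0
ones = [0] * n
for _ in range(trials):
    k = [rng.randint(0, 1) for _ in range(n)]
    c, used = encrypt([1] * n, k, rng)    # fixed all-ones message
    total_used += used
    for i, ci in enumerate(c):
        ones[i] += ci

avg_used = total_used / trials            # concentrates near n/2 = 32
max_bias = max(abs(cnt / trials - 0.5) for cnt in ones)
```

Even with the message fixed, every ciphertext bit looks like a fair coin, which is exactly why Bob cannot decrypt either.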
2016-10-25T00:00:00
Thomas Vidick, Andrea Coladangelo, Jalex Stark, Charles Xu
Caltech
Superpositions and mixtures
Alice wants to send the state \( |0\rangle \) to Bob. But 50% of the time, her (noisy) device outputs the state \( |1\rangle \) instead.
[ { "context": "(Due to Alex Meiburg)\n\[\n\frac{1}{2} |0\rangle \langle 0| + \frac{1}{2} |1\rangle \langle 1| = \begin{bmatrix} \frac{1}{2} & 0 \\ 0 & \frac{1}{2} \end{bmatrix}\n\]", "question": "### (a) Give the density matrix \( \rho_0 \) describing Bob’s state." }, { "context": "(Due to Alex Meiburg)\nNote that \( \rho_0 = \frac{1}{2} I \). The probability of any given outcome is\n\n\[\n\langle \psi | \rho_0 | \psi \rangle = \langle \psi | \frac{1}{2} I | \psi \rangle = \frac{1}{2} \langle \psi | \psi \rangle = \frac{1}{2}\n\]\nSo every measurement outcome occurs with probability \( \frac{1}{2} \), in the standard basis and in the Hadamard basis alike.", "question": "### (b) Suppose Bob measures \( \rho_0 \) in the standard basis. What is the probability that the measurement results in \( |0\rangle \)? \( |1\rangle \)? What if Bob measures in the Hadamard basis?" }, { "context": "(Due to Alex Meiburg)\n\[\n\rho_0 = |+\rangle \langle +| = \begin{bmatrix} \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} \end{bmatrix}\n\]\n\n\[\n\langle 0|\rho_0|0\rangle = \frac{1}{2}, \quad \langle 1|\rho_0|1\rangle = \frac{1}{2}\n\]\n\n\[\n\langle +|\rho_0|+\rangle = \langle +|+\rangle \langle +|+\rangle = 1, \quad \langle -|\rho_0|-\rangle = \langle -|+\rangle \langle +|-\rangle = 0\n\]\nSo in the standard basis the outcome is completely random, while in the Hadamard basis the outcome is guaranteed to be \(|+\rangle\).", "question": "### (c) Now say that the machine on Alice’s side is not noisy but simply misaligned: it consistently prepares qubits in the state \( |+\rangle \). Again, what is the distribution of outcomes if Bob measures in the standard basis? In the Hadamard basis?" } ]
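These Born-rule computations can be mirrored numerically. The sketch below is our own addition: it evaluates \( \langle b|\rho|b\rangle \) for both devices and both measurement bases.

```python
import numpy as np

# Born rule: Pr[outcome |b>] = <b| rho |b> = Tr(rho |b><b|).
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
plus = (ket0 + ket1) / np.sqrt(2)
minus = (ket0 - ket1) / np.sqrt(2)

def prob(rho, b):
    """Probability of outcome |b> when measuring rho in a basis containing |b>."""
    return float(np.real(b.conj() @ rho @ b))

rho_mixed = 0.5 * np.eye(2)          # noisy device: equal mixture of |0> and |1>
rho_plus = np.outer(plus, plus)      # misaligned device: always |+>

p_mixed_0 = prob(rho_mixed, ket0)    # 1/2
p_mixed_plus = prob(rho_mixed, plus) # 1/2: the mixture looks random in any basis
p_plus_0 = prob(rho_plus, ket0)      # 1/2: |+> is random in the standard basis
p_plus_plus = prob(rho_plus, plus)   # 1: |+> is certain in the Hadamard basis
p_plus_minus = prob(rho_plus, minus) # 0
```

The contrast between `rho_mixed` and `rho_plus` is the point of the problem: the two devices agree in the standard basis but are distinguishable in the Hadamard basis.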
2016-10-25T00:00:00
Thomas Vidick, Andrea Coladangelo, Jalex Stark, Charles Xu
Caltech
Quantum one-time pad
In the lecture notes, you saw that two classical bits of key suffice to encrypt one quantum bit. On an intuitive level, our scheme needed to use both the \( X \) and \( Z \) gates because the \( X \) operation has no effect on the \( |+\rangle \) state and the \( Z \) operation has no effect on the \( |0\rangle \) state. Alice decides to avoid this problem by using \( H \), which fixes neither \( |0\rangle \) nor \( |+\rangle \). Explicitly, she uses the following protocol to encode a qubit \( |\psi\rangle \): Let \( k \in \{0, 1\} \) be the key bit. Encrypt \( |\psi\rangle \) as \( H^k |\psi\rangle \).
[ { "context": "(Due to Anish Thilagar)\nThis protocol is correct. Bob will receive the state \\(H^k |\\psi\\rangle\\). He can then apply \\(H^k\\) again, to get the qubit \\(H^{2k} |\\psi\\rangle = |\\psi\\rangle\\) because \\(H^{2k} = (H^2)^k = I^k = I\\). Therefore, he can correctly extract the message from Alice.", "question": "### (a) Is this protocol a correct encryption scheme?" }, { "context": "(Due to Anish Thilagar)\nThis protocol is not secure. Take the state \\(|\\psi\\rangle = \\frac{1}{\\sqrt{2}} (|0\\rangle + |+\\rangle)\\). Under the action of \\(H\\), this is an eigenvector with eigenvalue 1, so it will remain unchanged. Therefore, the ciphertext \\(c\\) will be equal to the message \\(m = |\\psi\\rangle\\), so \\(p(|\\psi\\rangle |c) = 1 \\neq p(|\\psi\\rangle) < 1\\). Therefore, this protocol is not secure.", "question": "### (b) Is this protocol a secure encryption scheme? Provide either a proof of security or an attack." } ]
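The attack state from part (b) can be checked concretely: \( |0\rangle + |+\rangle \) (normalized) is a \(+1\) eigenvector of \(H\). A quick numerical verification, our own addition to the solutions:

```python
import numpy as np

# |psi> proportional to |0> + |+> is a +1 eigenvector of H, so the ciphertext
# H^k |psi> equals |psi> for either key bit k: encryption hides nothing.
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
ket0 = np.array([1.0, 0.0])
plus = H @ ket0                      # |+>

psi = ket0 + plus
psi = psi / np.linalg.norm(psi)

ciphertext_k0 = psi                  # k = 0: identity applied
ciphertext_k1 = H @ psi              # k = 1: Hadamard applied
```

Since the two possible ciphertexts coincide, Eve learns the message with certainty whenever Alice sends this state.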
2016-10-25T00:00:00
Thomas Vidick, Andrea Coladangelo, Jalex Stark, Charles Xu
Caltech
Unambiguous quantum state discrimination
(adapted from Nielsen and Chuang) In this problem we explore an essential practical advantage that comes with general POVMs rather than strictly projective measurements. Consider the following scenario: Bob sends Alice a qubit prepared in one of the two non-orthogonal states \( |0\rangle \) and \( |+\rangle \). Alice wants to perform a measurement on this qubit that distinguishes it as either \( |0\rangle \) or \( |+\rangle \) as soundly as possible, i.e. with minimum probability of mis-identifying \( |0\rangle \) as \( |+\rangle \) or vice versa. Let us first restrict her to projective measurements.
[ { "context": "(Due to Mandy Huo)\n If Alice measures in the standard basis then given that the state is \(|0\rangle\) she will always get \(|0\rangle\) so she will never misidentify it, and given that the state is \(|+\rangle\) she will get \(|0\rangle\) half the time so she will misidentify it with probability 1/2.", "question": "### (a) Suppose Alice measures in the basis \( \{|0\rangle , |1\rangle \} \). She identifies the state as \( |0\rangle \) if she gets the outcome \( |0\rangle \) and as \( |+\rangle \) if she gets the outcome \( |1\rangle \). What is her probability of misidentifying the state given that it is \( |0\rangle \)? What is her probability of misidentifying the state given that it is \( |+\rangle \)?" }, { "context": "(Due to Mandy Huo)\n If Alice measures in the Hadamard basis then given that the state is \(|+\rangle\) she will always get \(|+\rangle\) so she will never misidentify it. Given that the state is \(|0\rangle\) she will get \(|+\rangle\) half the time so she will misidentify it with probability 1/2.", "question": "### (b) Suppose instead Alice measures in the basis \( \{|+\rangle, |-\rangle\} \). She identifies the state as \( |+\rangle \) if she gets the outcome \( |+\rangle \) and as \( |0\rangle \) if she gets the outcome \( |-\rangle \). Again, what are her probabilities of misidentifying the state in each case?" }, { "context": "(Due to Mandy Huo)\nAssuming both states are equally likely a priori, Alice can do better overall if she measures in the basis \(\{|b_1\rangle, |b_2\rangle\}\) where \(|b_1\rangle = \sin \frac{3\pi}{8} |0\rangle - \cos \frac{3\pi}{8} |1\rangle\) and \(|b_2\rangle = \cos \frac{3\pi}{8} |0\rangle + \sin \frac{3\pi}{8} |1\rangle\), and identifies \(|0\rangle\) when she gets the outcome \(|b_1\rangle\) and \(|+\rangle\) when she gets the outcome \(|b_2\rangle\). 
Then the total probability of misidentifying is\n\n\[\n\frac{1}{2} |\langle b_2|0\rangle|^2 + \frac{1}{2} |\langle b_1|+\rangle|^2 = \frac{1}{2} \cos^2 \frac{3\pi}{8} + \frac{1}{4} \left( \sin^2 \frac{3\pi}{8} + \cos^2 \frac{3\pi}{8} - 2 \sin \frac{3\pi}{8} \cos \frac{3\pi}{8} \right)\n\]\n\n\[\n= \frac{1}{2} \cos^2 \frac{3\pi}{8} + \frac{1}{4} \left( 1 - \sin \frac{3\pi}{4} \right)\n\]\n\n\[\n= \frac{1}{2} \cos^2 \frac{3\pi}{8} + \frac{1}{4} \left( 1 + \cos \frac{3\pi}{4} \right)\n\]\n\n\[\n= \cos^2 \frac{3\pi}{8} \approx 0.15,\n\]\nsince \( \cos^2 \frac{3\pi}{8} = \frac{1}{2} \left( 1 + \cos \frac{3\pi}{4} \right) \). This is less than the \(\frac{1}{4}\) achieved in parts (a) and (b).", "question": "### (c) Is it possible for Alice to do better than this with any projective measurement? Assume \( |0\rangle \) and \( |+\rangle \) are equally likely a priori.\nNow suppose we allow Alice to perform a general measurement. In particular consider the following POVM with three elements:\n\n\[\nE_1 = \frac{\sqrt{2}}{1 + \sqrt{2}} |1\rangle \langle 1|\n\]\n\n\[\nE_2 = \frac{\sqrt{2}}{1 + \sqrt{2}} \frac{(|0\rangle - |1\rangle)(\langle 0| - \langle 1|)}{2}\n\]\n\n\[\nE_3 = I - E_1 - E_2\n\]\n\nAlice identifies the state as \(|+\rangle\) if she gets outcome 1, as \(|0\rangle\) if she gets outcome 2, and makes no identification if she gets outcome 3." 
}, { "context": "(Due to Mandy Huo)\nIf the state is \(|+\rangle\) then Alice will get outcomes 2 and 3 with probabilities\n\n\[\n\text{tr}(E_2|+\rangle\langle+|) = \text{tr} \left\{ \left( \frac{\sqrt{2}}{1+\sqrt{2}} |-\rangle\langle-| \right) |+\rangle\langle+| \right\} = 0,\n\]\n\n\[\n\text{tr}(E_3|+\rangle\langle+|) = 1 - \text{tr}(E_1|+\rangle\langle+|) - \text{tr}(E_2|+\rangle\langle+|) = 1 - \frac{1}{\sqrt{2}(1+\sqrt{2})} = \frac{1}{\sqrt{2}}.\n\]\nSo given that the state is \(|+\rangle\), Alice will never misidentify the state and will fail to make an identification with probability \(1/\sqrt{2}\).\n\nIf the state is \(|0\rangle\) then Alice will get outcomes 1 and 3 with probabilities\n\n\[\n\text{tr}(E_1|0\rangle\langle0|) = 0,\n\]\n\n\[\n\text{tr}(E_3|0\rangle\langle0|) = 1 - \text{tr}(E_1|0\rangle\langle0|) - \text{tr}(E_2|0\rangle\langle0|) = 1 - \frac{\sqrt{2}}{2(1+\sqrt{2})} = \frac{1}{\sqrt{2}}.\n\]\nSo given that the state is \(|0\rangle\), Alice will never misidentify the state and will fail to make an identification with probability \(1/\sqrt{2}\).", "question": "### (d) What is her probability of mis-identifying the state? What is her probability of failing to make an identification?" }, { "context": "(Due to Mandy Huo)\nThere is no POVM that increases the chances of making a correct identification without increasing the chance of making an incorrect identification.\n\nFirst we will show that any POVM such that the probability of mis-identification is zero must have the form \(E_1 = \alpha |1\rangle\langle1|\) and \(E_2 = \beta |-\rangle\langle-|\), \(\alpha, \beta > 0\). Since we must have \(\text{tr}(E_1|0\rangle\langle0|) = \langle0|E_1|0\rangle = 0\) for zero chance of mis-identification in the \(|0\rangle\) case, we have that either \(|0\rangle\) is in the nullspace of \(E_1\) or \(E_1\) projects \(|0\rangle\) onto \(|1\rangle\). 
In the second case, we would have \(E_1 = \alpha |1\rangle\langle0|\), which is not Hermitian and thus not positive, so \(E_1\) must map \(|0\rangle\) to the zero vector. Then \(E_1\) has rank 1 so it has the form \(E_1 = \alpha |b\rangle\langle1|\). Then since \(E_1\) must be positive (and thus Hermitian) we have \(E_1 = \alpha |1\rangle\langle1|\), \(\alpha > 0\) (note if \(\alpha = 0\) then Alice will always fail to make an identification in the \(|+\rangle\) case.) Similarly, \(\text{tr}(E_2|+\rangle\langle+|) = \langle+|E_2|+\rangle = 0\) implies that \(|+\rangle\) is either in the nullspace of \(E_2\) or is projected onto \(|-\rangle\), but the second case results in \( E_2 \) not positive semidefinite so we must have \( E_2 = \beta |-\rangle \langle -| \), \( \beta > 0 \).\n\nThen \( E_3 = I - E_1 - E_2 \) as before so that \( \sum_i E_i = I \). What is left to check is whether \( E_3 \) is positive semidefinite. Since \( E_3 = I - E_1 - E_2 = \begin{pmatrix} 1 - \beta/2 & \beta/2 \\ \beta/2 & 1 - \alpha - \beta/2 \end{pmatrix} \), its characteristic equation is\n\n\[\n\left| \begin{array}{cc}\n(1 - \lambda) - \beta/2 & \beta/2 \\\n\beta/2 & (1 - \lambda) - \alpha - \beta/2 \\\n\end{array} \right| = \lambda^2 + (\alpha + \beta - 2)\lambda - \left( \alpha + \beta - \frac{\alpha \beta}{2} - 1 \right) = 0\n\]\nso the eigenvalues are\n\n\[\n\lambda = \frac{-(\alpha + \beta - 2) \pm \sqrt{(\alpha + \beta - 2)^2 + 4 \left( \alpha + \beta - \frac{\alpha \beta}{2} - 1 \right)}}{2} = \frac{-(\alpha + \beta - 2) \pm \sqrt{\alpha^2 + \beta^2}}{2}.\n\]\nSince we want a POVM that fails to make an identification with smaller probability, we need \( \text{tr}[E_3|0\rangle \langle 0|] = 1 - \frac{\beta}{2} < \frac{1}{\sqrt{2}} \) and \( \text{tr}[E_3|+\rangle \langle +|] = 1 - \frac{\alpha}{2} < \frac{1}{\sqrt{2}} \), that is,\n\n\[\n\alpha > 2 \left( \frac{\sqrt{2} - 1}{\sqrt{2}} \right), \quad \beta > 2 \left( \frac{\sqrt{2} - 1}{\sqrt{2}} 
\right).\n\]\nThen we have\n\n\[\n-(\alpha + \beta - 2) < 2 - 4 \frac{\sqrt{2} - 1}{\sqrt{2}} = 2 - (4 - 2\sqrt{2}) = 2 \left( \sqrt{2} - 1 \right)\n\]\n\n\[\n\sqrt{\alpha^2 + \beta^2} > \sqrt{2 \left( 2 \frac{\sqrt{2} - 1}{\sqrt{2}} \right)^2} = 2 \left( \sqrt{2} - 1 \right) > 0\n\]\nso \( \sqrt{\alpha^2 + \beta^2} > -(\alpha + \beta - 2) \) and so \( E_3 \) will have one negative eigenvalue and thus is not positive. Hence there is no POVM that gives Alice a better chance of making a correct identification without increasing the chance of making an incorrect identification.", "question": "### (e) Is there any POVM that gives Alice a better chance of making a correct identification without increasing the chance of making an incorrect identification?" } ]
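The POVM from part (d) and the probabilities computed above can be verified numerically. This sketch is our own addition to the solutions:

```python
import numpy as np

# The unambiguous-discrimination POVM from part (d).
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
plus = (ket0 + ket1) / np.sqrt(2)
minus = (ket0 - ket1) / np.sqrt(2)

c = np.sqrt(2) / (1 + np.sqrt(2))
E1 = c * np.outer(ket1, ket1)        # outcome 1: identify |+>
E2 = c * np.outer(minus, minus)      # outcome 2: identify |0>
E3 = np.eye(2) - E1 - E2             # outcome 3: no identification

rho0 = np.outer(ket0, ket0)
rho_plus = np.outer(plus, plus)

completeness = E1 + E2 + E3                    # should equal the identity
min_eig_E3 = float(np.linalg.eigvalsh(E3)[0])  # should be >= 0 (E3 is PSD)
p_wrong_0 = float(np.trace(E1 @ rho0))         # misidentify |0> as |+>: 0
p_wrong_plus = float(np.trace(E2 @ rho_plus))  # misidentify |+> as |0>: 0
p_fail_0 = float(np.trace(E3 @ rho0))          # inconclusive: 1/sqrt(2)
p_fail_plus = float(np.trace(E3 @ rho_plus))   # inconclusive: 1/sqrt(2)
```

Note that the smallest eigenvalue of `E3` is (numerically) zero, which matches part (e): pushing the failure probability any lower would make `E3` negative.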
2016-10-25T00:00:00
Thomas Vidick, Andrea Coladangelo, Jalex Stark, Charles Xu
Caltech
Robustness of GHZ and W States
In this problem we explore two classes of \(N\)-qubit states that are especially useful for cryptography and communication, but behave very differently under tracing out a single qubit. Let’s first define them for \(N = 3\): \[ \text{GHZ state: } |GHZ_3\rangle = \frac{1}{\sqrt{2}} (|000\rangle + |111\rangle) \] \[ \text{W state: } |W_3\rangle = \frac{1}{\sqrt{3}} (|100\rangle + |010\rangle + |001\rangle) \] Note that both states are invariant under permutation of the three qubits, so without loss of generality we may trace out the last one. We’ll denote this operation by Tr3. Also, we have analogous definitions in the two-qubit case: \(|GHZ_2\rangle = \frac{1}{\sqrt{2}} (|00\rangle + |11\rangle)\) and \(|W_2\rangle = \frac{1}{\sqrt{2}} (|10\rangle + |01\rangle)\). In the following we consider the overlap between \(N\)-qubit GHZ and W states with one qubit discarded (i.e. traced out) and their \((N - 1)\)-qubit counterparts. The overlap of density matrices \(\rho\) and \(\sigma\) is defined as \(\text{Tr} \rho \sigma\), a measure of “closeness” that generalizes the expression \(|\langle \phi | \psi \rangle|^2\) for pure states.
[ { "context": "(Due to Mandy Huo)\n(i) Since \( \text{Tr}(|i\rangle \langle j|) = \langle j|i \rangle \) is 0 for \( i \neq j \) and 1 for \( i = j \), we have \( \text{Tr}_3(|GHZ_3\rangle \langle GHZ_3|) = \frac{1}{2}(|00\rangle \langle 00| + |11\rangle \langle 11|) \), and so\n\n\[\n\text{Tr}(|GHZ_2\rangle \langle GHZ_2| \, \text{Tr}_3(|GHZ_3\rangle \langle GHZ_3|)) = \frac{1}{4} \text{Tr}[(|00\rangle + |11\rangle)(\langle 00| + \langle 11|)(|00\rangle \langle 00| + |11\rangle \langle 11|)]\n\]\n\n\[\n= \frac{1}{4} \text{Tr}[(|00\rangle + |11\rangle)(\langle 00| + \langle 11|)]\n\]\n\n\[\n= \frac{1}{2}\n\]\n\n(ii) Note that \( \text{Tr}_3(|W_3\rangle \langle W_3|) = \frac{1}{3}(|10\rangle \langle 10| + |01\rangle \langle 01| + |00\rangle \langle 00| + |10\rangle \langle 01| + |01\rangle \langle 10|) \) so we have\n\n\[\n\text{Tr}(|W_2\rangle \langle W_2| \, \text{Tr}_3(|W_3\rangle \langle W_3|)) = \frac{1}{6} \text{Tr}[(|10\rangle + |01\rangle)(2\langle 10| + 2\langle 01|)] = \frac{2}{3}\n\]", "question": "### (a) Calculate the overlaps\n(i) \(\text{Tr}(|GHZ_2\rangle \langle GHZ_2| \text{Tr}_3 |GHZ_3\rangle \langle GHZ_3|)\) and\n(ii) \(\text{Tr}(|W_2\rangle \langle W_2| \text{Tr}_3 |W_3\rangle \langle W_3|)\).\n\nNow we generalize to the \(N\)-qubit case. As you might expect, \(|GHZ_N\rangle = \frac{1}{\sqrt{2}}(|0\rangle^{\otimes N} + |1\rangle^{\otimes N})\) and \(|W_N\rangle\) is an equal superposition of all \(N\)-bit strings with exactly one 1 and \(N - 1\) 0's." 
}, { "context": "(Due to Mandy Huo)\n(i) We have \( \text{Tr}_N(|GHZ_N\rangle \langle GHZ_N|) = \frac{1}{2}(|0\rangle \langle 0|^{\otimes N-1} + |1\rangle \langle 1|^{\otimes N-1}) \) so\n\n\[\n\langle GHZ_{N-1}| \, \text{Tr}_N(|GHZ_N\rangle \langle GHZ_N|) = \frac{1}{2} \langle GHZ_{N-1}|\n\]\n\nand thus\n\n\[\n\text{Tr}(|GHZ_{N-1}\rangle \langle GHZ_{N-1}| \, \text{Tr}_N(|GHZ_N\rangle \langle GHZ_N|)) = \frac{1}{2} \text{Tr}(|GHZ_{N-1}\rangle \langle GHZ_{N-1}|) = \frac{1}{2}\n\]\n\n(ii) We have \( \langle W_{N-1}| \, \text{Tr}_N(|W_N\rangle \langle W_N|) = \frac{N-1}{N} \langle W_{N-1}| \) so\n\n\[\n\text{Tr}(|W_{N-1}\rangle \langle W_{N-1}| \, \text{Tr}_N(|W_N\rangle \langle W_N|)) = \frac{N-1}{N} \text{Tr}(|W_{N-1}\rangle \langle W_{N-1}|) = \frac{N-1}{N}\n\]\n\nSince \( \frac{N-1}{N} > \frac{1}{2} \) for \( N > 2 \) the overlap between the \( N \)-qubit \( W \) states is greater than between the \( N \)-qubit \( GHZ \) states so we can conclude that the \( W \) states are more 'robust' to tracing out a qubit.", "question": "### (b) Calculate the following overlaps as functions of \(N\).\n(i) \(\text{Tr}(|GHZ_{N-1}\rangle \langle GHZ_{N-1}| \text{Tr}_N |GHZ_N\rangle \langle GHZ_N|)\) and\n(ii) \(\text{Tr}(|W_{N-1}\rangle \langle W_{N-1}| \text{Tr}_N |W_N\rangle \langle W_N|)\).\n\nConclude that W states are “more robust” against loss of a single qubit than GHZ states." } ]
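Both overlap formulas can be checked numerically for small \(N\). The helper functions below are our own illustration (not part of the solutions); `trace_last` implements the partial trace over the last qubit.

```python
import numpy as np

def ghz(n):
    """|GHZ_n>: equal superposition of |0...0> and |1...1>."""
    v = np.zeros(2**n)
    v[0] = v[-1] = 1 / np.sqrt(2)
    return v

def w(n):
    """|W_n>: equal superposition of the n one-hot basis states."""
    v = np.zeros(2**n)
    for i in range(n):
        v[1 << i] = 1 / np.sqrt(n)
    return v

def trace_last(state):
    """Partial trace of |state><state| over the last qubit."""
    m = state.reshape(-1, 2)   # rows: first n-1 qubits, columns: last qubit
    return m @ m.conj().T

for n in range(3, 7):
    ghz_overlap = float(ghz(n - 1) @ trace_last(ghz(n)) @ ghz(n - 1))
    w_overlap = float(w(n - 1) @ trace_last(w(n)) @ w(n - 1))
    # ghz_overlap stays at 1/2, while w_overlap = (n-1)/n approaches 1
```

The loop reproduces the conclusion: the GHZ overlap is stuck at \(1/2\) for every \(N\), while the W overlap \((N-1)/N\) tends to 1.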
2016-10-25T00:00:00
Thomas Vidick, Andrea Coladangelo, Jalex Stark, Charles Xu
Caltech
Universal Cloning
In this problem we analyze a single-qubit universal cloner.
[ { "context": "(Due to De Huang)\n(a) (i) We can see \(\rho\) and \(T_1(\rho)\) as matrices in \(\mathbb{C}^{2 \times 2}\) and \(\mathbb{C}^{4 \times 4}\). Then we have\n\n\[T_1(\rho) = \rho \otimes \frac{I}{2} = \frac{1}{2} \begin{pmatrix} \rho_{11} & 0 & \rho_{12} & 0 \\ 0 & \rho_{11} & 0 & \rho_{12} \\ \rho_{21} & 0 & \rho_{22} & 0 \\ 0 & \rho_{21} & 0 & \rho_{22} \end{pmatrix} = A_1 \rho A_1^{\dagger} + A_2 \rho A_2^{\dagger},\]\n\nwhere\n\n\[A_1 = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 0 \\ 0 & 0 \\ 0 & 1 \\ 0 & 0 \end{pmatrix}, \quad A_2 = \frac{1}{\sqrt{2}} \begin{pmatrix} 0 & 0 \\ 1 & 0 \\ 0 & 0 \\ 0 & 1 \end{pmatrix}.\]\n\nIt's easy to check that\n\n\[A_1^{\dagger} A_1 + A_2^{\dagger} A_2 = I.\]\n\nTherefore \(T_1\) is CPTP. 
Indeed we can check that for any single qubit \(|\psi\rangle\),\n\n\[A_1 |\psi\rangle = \frac{1}{\sqrt{2}} |\psi\rangle \otimes |0\rangle, \quad A_2 |\psi\rangle = \frac{1}{\sqrt{2}} |\psi\rangle \otimes |1\rangle,\]\n\n\[A_1^{\dagger} (|\psi\rangle \otimes |0\rangle) = \frac{1}{\sqrt{2}} |\psi\rangle, \quad A_2^{\dagger} (|\psi\rangle \otimes |1\rangle) = \frac{1}{\sqrt{2}} |\psi\rangle,\]\n\ntherefore\n\n\[A_1 |\psi\rangle \langle \psi| A_1^{\dagger} + A_2 |\psi\rangle \langle \psi| A_2^{\dagger} = \frac{1}{2} |\psi\rangle \langle \psi| \otimes |0\rangle \langle 0| + \frac{1}{2} |\psi\rangle \langle \psi| \otimes |1\rangle \langle 1|\]\n\n\[= |\psi\rangle \langle \psi| \otimes \frac{1}{2} (|0\rangle \langle 0| + |1\rangle \langle 1|)\]\n\n\[= |\psi\rangle \langle \psi| \otimes \frac{I}{2}\]\n\n\[= T_1 (|\psi\rangle \langle \psi|),\]\n\nand\n\n\[(A_1^{\dagger} A_1 + A_2^{\dagger} A_2) |\psi\rangle = \frac{1}{\sqrt{2}} A_1^{\dagger} (|\psi\rangle \otimes |0\rangle) + \frac{1}{\sqrt{2}} A_2^{\dagger} (|\psi\rangle \otimes |1\rangle) = |\psi\rangle,\]\n\nwhich again verifies our proof of CPTP. The cloned qubit has density matrix \(\frac{1}{2}I\), which actually carries no information. No matter what basis we use to measure the cloned qubit, we always get fair probability \(\frac{1}{2}\) on both results. 
Meanwhile, the first qubit is still in state \(|\psi\rangle\).\n\n(ii) Since \(T_1(|\psi\rangle \langle \psi|) \geq 0\), we have\n\n\[|\langle \psi | \langle \psi | T_1(|\psi\rangle \langle \psi|) |\psi\rangle |\psi\rangle| = \langle \psi | \langle \psi | T_1(|\psi\rangle \langle \psi|) |\psi\rangle |\psi\rangle\]\n\n\[= \langle \psi | \langle \psi | \left( |\psi\rangle \langle \psi| \otimes \frac{1}{2} I \right) |\psi\rangle |\psi\rangle\]\n\n\[= \langle \psi | \psi \rangle \langle \psi | \psi \rangle \times \langle \psi | \frac{1}{2} I | \psi \rangle\]\n\n\[= \frac{1}{2}.\]", "question": "### (a) Consider the map which takes as input a pure single-qubit state \(\rho = |\psi\rangle \langle \psi|\), and returns \(T_1(\rho) = \rho \otimes \frac{1}{2} I\), where \(\frac{1}{2} I\) is the maximally mixed state of a single qubit.\n(i) Show that this map is a valid quantum operation: it is CPTP. Give an interpretation of this map in terms of making a random guess for the cloned qubit.\n(ii) Evaluate the success probability of this map on an arbitrary input pure state \(\rho = |\psi\rangle \langle \psi|\)." }, { "context": "(Due to De Huang)\n(i) Since $\lvert 0 \rangle \lvert 0 \rangle \lvert 0 \rangle$ and $\lvert 1 \rangle \lvert 0 \rangle \lvert 0 \rangle$ are orthogonal, we only need to verify that $U \lvert 0 \rangle \lvert 0 \rangle \lvert 0 \rangle$ and $U \lvert 1 \rangle \lvert 0 \rangle \lvert 0 \rangle$ are orthogonal. 
Indeed, note that $\lvert 0 \rangle \lvert 0 \rangle \lvert 0 \rangle$, $\lvert 0 \rangle \lvert 0 \rangle \lvert 1 \rangle$, $\lvert 0 \rangle \lvert 1 \rangle \lvert 0 \rangle$, $\lvert 0 \rangle \lvert 1 \rangle \lvert 1 \rangle$, $\lvert 1 \rangle \lvert 0 \rangle \lvert 0 \rangle$, $\lvert 1 \rangle \lvert 0 \rangle \lvert 1 \rangle$, $\lvert 1 \rangle \lvert 1 \rangle \lvert 0 \rangle$, $\lvert 1 \rangle \lvert 1 \rangle \lvert 1 \rangle$ are orthogonal to each other. Since\n\n\[\nU \lvert 1 \rangle \lvert 0 \rangle \lvert 0 \rangle = \sqrt{\frac{2}{3}} \lvert 1 \rangle \lvert 1 \rangle \lvert 1 \rangle + \sqrt{\frac{1}{6}} \lvert 1 \rangle \lvert 0 \rangle \lvert 0 \rangle + \sqrt{\frac{1}{6}} \lvert 0 \rangle \lvert 1 \rangle \lvert 0 \rangle,\n\]\n\nwe have\n\n\[\n\langle 0 \rvert \langle 0 \rvert \langle 0 \rvert U \lvert 1 \rangle \lvert 0 \rangle \lvert 0 \rangle = \langle 0 \rvert \langle 1 \rvert \langle 1 \rvert U \lvert 1 \rangle \lvert 0 \rangle \lvert 0 \rangle = \langle 1 \rvert \langle 0 \rvert \langle 1 \rvert U \lvert 1 \rangle \lvert 0 \rangle \lvert 0 \rangle = 0.\n\]\n\nAnd since\n\n\[\nU \lvert 0 \rangle \lvert 0 \rangle \lvert 0 \rangle = \sqrt{\frac{2}{3}} \lvert 0 \rangle \lvert 0 \rangle \lvert 0 \rangle + \sqrt{\frac{1}{6}} \lvert 0 \rangle \lvert 1 \rangle \lvert 1 \rangle + \sqrt{\frac{1}{6}} \lvert 1 \rangle \lvert 0 \rangle \lvert 1 \rangle,\n\]\n\nwe immediately have that $U \lvert 0 \rangle \lvert 0 \rangle \lvert 0 \rangle$ and $U \lvert 1 \rangle \lvert 0 \rangle \lvert 0 \rangle$ are orthogonal. 
Now we may extend $\{ \lvert 0 \rangle \lvert 0 \rangle \lvert 0 \rangle, \lvert 1 \rangle \lvert 0 \rangle \lvert 0 \rangle \}$ to\n\n\[\n\{ \lvert 0 \rangle \lvert 0 \rangle \lvert 0 \rangle, \lvert 1 \rangle \lvert 0 \rangle \lvert 0 \rangle, \phi_3, \phi_4, \cdots, \phi_8 \}\n\]\n\nas an orthogonal basis of all three-qubits, and also extend $\{ U \lvert 0 \rangle \lvert 0 \rangle \lvert 0 \rangle, U \lvert 1 \rangle \lvert 0 \rangle \lvert 0 \rangle \}$ to\n\n\[\n\{ U \lvert 0 \rangle \lvert 0 \rangle \lvert 0 \rangle, U \lvert 1 \rangle \lvert 0 \rangle \lvert 0 \rangle, \psi_3, \psi_4, \cdots, \psi_8 \}\n\]\n\nas another orthogonal basis of all three-qubits. Then one example of extending $U$ to a valid three-qubit unitary $\tilde{U}$ would be\n\n\[\n\tilde{U} : \lvert 0 \rangle \lvert 0 \rangle \lvert 0 \rangle \rightarrow U \lvert 0 \rangle \lvert 0 \rangle \lvert 0 \rangle, \quad \tilde{U} : \lvert 1 \rangle \lvert 0 \rangle \lvert 0 \rangle \rightarrow U \lvert 1 \rangle \lvert 0 \rangle \lvert 0 \rangle,\n\]\n\n\[\n\tilde{U} : \phi_i \rightarrow \psi_i, \quad i = 3, 4, \cdots, 8.\n\]\n\nIt’s easy to check that $\tilde{U}$ is a valid three-qubit unitary because it linearly transforms an orthogonal basis to another orthogonal basis.\n\n(ii) Write \( |\psi\rangle = \alpha|0\rangle + \beta|1\rangle \) and \( |\Psi_+\rangle = \frac{1}{\sqrt{2}}(|0\rangle|1\rangle + |1\rangle|0\rangle) \). Grouping the terms of \( U|\psi\rangle|0\rangle|0\rangle \) by the state of the third qubit gives\n\n\[\nU|\psi\rangle|0\rangle|0\rangle = \left( \alpha \sqrt{\frac{2}{3}} |0\rangle|0\rangle + \beta \sqrt{\frac{1}{3}} |\Psi_+\rangle \right)|0\rangle + \left( \beta \sqrt{\frac{2}{3}} |1\rangle|1\rangle + \alpha \sqrt{\frac{1}{3}} |\Psi_+\rangle \right)|1\rangle,\n\]\n\nso tracing out the third qubit,\n\n\[\nT_2(|\psi\rangle\langle\psi|) = \left( \alpha \sqrt{\frac{2}{3}} |0\rangle|0\rangle + \beta \sqrt{\frac{1}{3}} |\Psi_+\rangle \right) \left( \bar{\alpha} \sqrt{\frac{2}{3}} \langle 0|\langle 0| + \bar{\beta} \sqrt{\frac{1}{3}} \langle \Psi_+| \right) + \left( \beta \sqrt{\frac{2}{3}} |1\rangle|1\rangle + \alpha \sqrt{\frac{1}{3}} |\Psi_+\rangle \right) \left( \bar{\beta} \sqrt{\frac{2}{3}} \langle 1|\langle 1| + \bar{\alpha} \sqrt{\frac{1}{3}} \langle \Psi_+| \right).\n\]\n\nThen, using \( \langle\psi|\langle\psi| \, |0\rangle|0\rangle = \bar{\alpha}^2 \), \( \langle\psi|\langle\psi| \, |1\rangle|1\rangle = \bar{\beta}^2 \) and \( \langle\psi|\langle\psi| \, |\Psi_+\rangle = \sqrt{2}\,\bar{\alpha}\bar{\beta} \), the success probability is\n\n\[\n\langle\psi|\langle\psi| \, T_2(|\psi\rangle\langle\psi|) \, |\psi\rangle|\psi\rangle = \left| \langle\psi|\langle\psi| \left( \alpha \sqrt{\frac{2}{3}} |0\rangle|0\rangle + \beta \sqrt{\frac{1}{3}} |\Psi_+\rangle \right) \right|^2 + \left| \langle\psi|\langle\psi| \left( \beta \sqrt{\frac{2}{3}} |1\rangle|1\rangle + \alpha \sqrt{\frac{1}{3}} |\Psi_+\rangle \right) \right|^2\n\]\n\n\[\n= \left| \sqrt{\frac{2}{3}} \bar{\alpha} \left( |\alpha|^2 + |\beta|^2 \right) \right|^2 + \left| \sqrt{\frac{2}{3}} \bar{\beta} \left( |\alpha|^2 + |\beta|^2 \right) \right|^2 = \frac{2}{3} |\alpha|^2 + \frac{2}{3} |\beta|^2 = \frac{2}{3}.\n\]", "question": "### (b) Let’s consider a second cloning map, which acts on the qubit input state together with two ancilla qubits as follows:\n\n\[ \nU : |0\rangle |0\rangle |0\rangle \mapsto \sqrt{\frac{2}{3}} |0\rangle |0\rangle |0\rangle + \sqrt{\frac{1}{6}} (|0\rangle |1\rangle + |1\rangle |0\rangle ) |1\rangle , \n\]\n\n\[ \n|1\rangle |0\rangle |0\rangle \mapsto \sqrt{\frac{2}{3}} |1\rangle |1\rangle |1\rangle + \sqrt{\frac{1}{6}} (|1\rangle |0\rangle + |0\rangle |1\rangle ) |0\rangle . \n\]\n\n(i) Verify that \(U\) can be extended into a valid three-qubit unitary.\n\nThe cloning map associated to \(U\) is the map \(T_2\) which first initializes two qubits to the \(|0\rangle |0\rangle\) state, then applies \(U\), and then traces out the third qubit.\n\n(ii) Evaluate the success probability of the map \(U\) on an arbitrary input pure state \(\rho = |\psi\rangle \langle \psi|\)." 
}, { "context": "(Due to De Huang)\n(i) Note that\n\n\[\nP_+^\dagger = I^\dagger - (| \Psi_- \rangle \langle \Psi_- |)^\dagger = I - | \Psi_- \rangle \langle \Psi_- | = P_+,\n\]\n\n\[\nP_+ P_+ = (I - | \Psi_- \rangle \langle \Psi_- |) (I - | \Psi_- \rangle \langle \Psi_- |)\n\]\n\n\[\n= I - 2 | \Psi_- \rangle \langle \Psi_- | + | \Psi_- \rangle \langle \Psi_- | = I - | \Psi_- \rangle \langle \Psi_- |\n\]\n\n\[\n= P_+.\n\]\n\nThen using the result of (a)(i), we have\n\n\[\nT_3 (\rho) = \frac{2}{3} P_+ (\rho \otimes I) P_+\n\]\n\n\[\n= \frac{4}{3} P_+ T_1 (\rho) P_+\n\]\n\n\[\n= \frac{4}{3} P_+ (A_1 \rho A_1^\dagger + A_2 \rho A_2^\dagger) P_+\n\]\n\n\[\n= (\frac{2}{\sqrt{3}} P_+ A_1) \rho (\frac{2}{\sqrt{3}} P_+ A_1)^\dagger + (\frac{2}{\sqrt{3}} P_+ A_2) \rho (\frac{2}{\sqrt{3}} P_+ A_2)^\dagger\n\]\n\n\[\n= V_1 \rho V_1^\dagger + V_2 \rho V_2^\dagger,\n\]\n\nwhere \( A_1, A_2 \) are defined in (a)(i), and\n\n\[\nV_1 = \frac{2}{\sqrt{3}} P_+ A_1, \quad V_2 = \frac{2}{\sqrt{3}} P_+ A_2.\n\]\n\nIf we see \( P_+ = I - | \Psi_- \rangle \langle \Psi_- | \) as a matrix in \( \mathbb{C}^{4 \times 4} \), then\n\n\[\nP_+ = \begin{pmatrix}\n1 & 0 & 0 & 0 \\\n0 & \frac{1}{2} & \frac{1}{2} & 0 \\\n0 & \frac{1}{2} & \frac{1}{2} & 0 \\\n0 & 0 & 0 & 1\n\end{pmatrix}.\n\]\n\nBy direct calculation, we can check that\n\n\[ V_1^\dagger V_1 + V_2^\dagger V_2 = \frac{4}{3} A_1^\dagger P_+ P_+ A_1 + \frac{4}{3} A_2^\dagger P_+ P_+ A_2 = \frac{4}{3} A_1^\dagger P_+ A_1 + \frac{4}{3} A_2^\dagger P_+ A_2 = I. 
\]\n\nTherefore \( T_3 \) is CPTP.\n\n(ii) For any single-qubit state \( |\psi\rangle \), we have\n\n\[ \langle \psi | \langle \psi | \Psi_- \rangle = \frac{1}{\sqrt{2}} (\langle \psi | 0 \rangle \langle \psi | 1 \rangle - \langle \psi | 1 \rangle \langle \psi | 0 \rangle) = 0, \]\n\n\[ \langle \psi | \langle \psi | P_+ = \langle \psi | \langle \psi | - \langle \psi | \langle \psi | \Psi_- \rangle \langle \Psi_- | = \langle \psi | \langle \psi |, \]\n\n\[ P_+ |\psi\rangle |\psi\rangle = |\psi\rangle |\psi\rangle - |\Psi_- \rangle \langle \Psi_- | |\psi\rangle |\psi\rangle = |\psi\rangle |\psi\rangle, \]\n\nthus the success probability of \( T_3 \) is\n\n\[ \langle \psi | \langle \psi | T_3 (|\psi\rangle \langle \psi|) |\psi\rangle |\psi\rangle = \frac{2}{3} \langle \psi | \langle \psi | P_+ (|\psi\rangle \langle \psi| \otimes I) P_+ |\psi\rangle |\psi\rangle \]\n\n\[ = \frac{2}{3} \langle \psi | \langle \psi | (|\psi\rangle \langle \psi| \otimes I) |\psi\rangle |\psi\rangle \]\n\n\[ = \frac{2}{3} (\langle \psi | \langle \psi | \, |\psi\rangle |\psi\rangle) (\langle \psi | I | \psi\rangle) \]\n\n\[ = \frac{2}{3}. \]\n\n(iii) We can see that for any single-qubit state \( |\psi\rangle \),\n\n\[ \langle \psi | \langle \psi | T_2 (|\psi\rangle \langle \psi|) |\psi\rangle |\psi\rangle = \langle \psi | \langle \psi | T_3 (|\psi\rangle \langle \psi|) |\psi\rangle |\psi\rangle = \frac{2}{3}, \]\n\nthat is, the maps \( T_2 \) and \( T_3 \) have the same success probability. The essential reason for this result is that we actually have\n\n\[ T_2 (|\psi\rangle \langle \psi |) = T_3 (|\psi\rangle \langle \psi |) \]\n\nfor any single-qubit state \( |\psi\rangle \). 
To see this, we first rewrite \\( U |\\psi\\rangle |0\\rangle |0\\rangle \\), with \\( |\\psi\\rangle = \\alpha |0\\rangle + \\beta |1\\rangle \\), as\n\n\\[ U |\\psi\\rangle |0\\rangle |0\\rangle = \\alpha U |0\\rangle |0\\rangle |0\\rangle + \\beta U |1\\rangle |0\\rangle |0\\rangle \\]\n\n\\[ = \\alpha \\left( \\sqrt{\\frac{2}{3}} |0\\rangle |0\\rangle |0\\rangle + \\sqrt{\\frac{1}{6}} (|0\\rangle |1\\rangle + |1\\rangle |0\\rangle) |1\\rangle \\right) + \\beta \\left( \\sqrt{\\frac{2}{3}} |1\\rangle |1\\rangle |1\\rangle + \\sqrt{\\frac{1}{6}} (|1\\rangle |0\\rangle + |0\\rangle |1\\rangle) |0\\rangle \\right) \\]\n\n\\[ = \\frac{1}{\\sqrt{3}} |\\Phi_+\\rangle (\\alpha |0\\rangle + \\beta |1\\rangle) + \\frac{1}{\\sqrt{3}} |\\Phi_-\\rangle (\\alpha |0\\rangle - \\beta |1\\rangle) + \\frac{1}{\\sqrt{3}} |\\Psi_+\\rangle (\\alpha |1\\rangle + \\beta |0\\rangle) \\]\n\n\\[ = \\frac{1}{\\sqrt{3}} \\left( |\\Phi_+\\rangle \\, |\\psi\\rangle + |\\Phi_-\\rangle \\, Z|\\psi\\rangle + |\\Psi_+\\rangle \\, X|\\psi\\rangle \\right). \\]\n\nHere \\(|\\Phi_+\\rangle, |\\Phi_-\\rangle, |\\Psi_+\\rangle\\) together with \\(|\\Psi_-\\rangle\\) are the Bell basis, i.e.\n\n\\[\n|\\Phi_+\\rangle = \\frac{1}{\\sqrt{2}} (|00\\rangle + |11\\rangle), \\quad |\\Phi_-\\rangle = \\frac{1}{\\sqrt{2}} (|00\\rangle - |11\\rangle),\n\\]\n\n\\[\n|\\Psi_+\\rangle = \\frac{1}{\\sqrt{2}} (|01\\rangle + |10\\rangle), \\quad |\\Psi_-\\rangle = \\frac{1}{\\sqrt{2}} (|01\\rangle - |10\\rangle).\n\\]\n\nThen we have\n\n\\[\nT_2(|\\psi\\rangle \\langle \\psi|) = \\text{tr}_3(U(|\\psi\\rangle \\langle \\psi| \\otimes |0\\rangle \\langle 0| \\otimes |0\\rangle \\langle 0|)U^\\dagger) = \\frac{1}{3} \\sum_{a, b \\in \\{\\Phi_+, \\Phi_-, \\Psi_+\\}} \\langle s_b | s_a \\rangle \\, |a\\rangle \\langle b|,\n\\]\n\nwhere \\( s_{\\Phi_+} = |\\psi\\rangle, s_{\\Phi_-} = Z|\\psi\\rangle, s_{\\Psi_+} = X|\\psi\\rangle \\) are the third-qubit states multiplying each Bell state in the expansion above. Explicitly, since \\( \\langle \\psi | Z | \\psi \\rangle \\) and \\( \\langle \\psi | X | \\psi \\rangle \\) are real,\n\n\\[\nT_2(|\\psi\\rangle \\langle \\psi|) = \\frac{1}{3} \\Big( |\\Phi_+\\rangle \\langle \\Phi_+| + |\\Phi_-\\rangle \\langle \\Phi_-| + |\\Psi_+\\rangle \\langle \\Psi_+| + \\langle \\psi | Z | \\psi \\rangle \\left( |\\Phi_+\\rangle \\langle \\Phi_-| + |\\Phi_-\\rangle \\langle \\Phi_+| \\right) + \\langle \\psi | X | \\psi \\rangle \\left( |\\Phi_+\\rangle \\langle \\Psi_+| + |\\Psi_+\\rangle \\langle \\Phi_+| \\right) + \\langle \\psi | XZ | \\psi \\rangle |\\Phi_-\\rangle \\langle \\Psi_+| + \\langle \\psi | ZX | \\psi \\rangle |\\Psi_+\\rangle \\langle \\Phi_-| \\Big).\n\\]\n\nOn the other hand, since\n\n\\[\n|\\Phi_+\\rangle \\langle \\Phi_+| + |\\Phi_-\\rangle \\langle \\Phi_-| + |\\Psi_+\\rangle \\langle \\Psi_+| + |\\Psi_-\\rangle \\langle \\Psi_-| = I,\n\\]\n\nwe have\n\n\\[\nI - |\\Psi_-\\rangle \\langle \\Psi_-| = |\\Phi_+\\rangle \\langle \\Phi_+| + |\\Phi_-\\rangle \\langle \\Phi_-| + |\\Psi_+\\rangle \\langle \\Psi_+|.\n\\]\n\nThus\n\n\\[\nT_3(|\\psi\\rangle \\langle \\psi|) = \\frac{2}{3} (I - |\\Psi_-\\rangle \\langle \\Psi_-|) (|\\psi\\rangle \\langle \\psi| \\otimes I) (I - |\\Psi_-\\rangle \\langle \\Psi_-|) = \\frac{2}{3} \\sum_{a, b \\in \\{\\Phi_+, \\Phi_-, \\Psi_+\\}} \\langle a | (|\\psi\\rangle \\langle \\psi| \\otimes I) | b \\rangle \\, |a\\rangle \\langle b|.\n\\]\n\nNote that\n\n\\[\n\\begin{aligned}\n\\langle \\Phi_+ | (|\\psi\\rangle \\langle \\psi| \\otimes I) | \\Phi_+ \\rangle &= \\frac{1}{2} \\langle \\psi | (|0\\rangle \\langle 0| + |1\\rangle \\langle 1|) |\\psi \\rangle = \\frac{1}{2} \\langle \\psi | \\psi \\rangle, \\\\\n\\langle \\Phi_- | (|\\psi\\rangle \\langle \\psi| \\otimes I) | \\Phi_- \\rangle &= \\frac{1}{2} \\langle \\psi | (|0\\rangle \\langle 0| + |1\\rangle \\langle 1|) |\\psi \\rangle = \\frac{1}{2} \\langle \\psi | \\psi \\rangle, \\\\\n\\langle \\Psi_+ | (|\\psi\\rangle \\langle \\psi| \\otimes I) | \\Psi_+ \\rangle &= \\frac{1}{2} \\langle \\psi | (|0\\rangle \\langle 0| + |1\\rangle \\langle 1|) |\\psi \\rangle = \\frac{1}{2} \\langle \\psi | \\psi \\rangle, \\\\\n\\langle \\Phi_+ | (|\\psi\\rangle \\langle \\psi| \\otimes I) | \\Phi_- \\rangle &= \\frac{1}{2} \\langle \\psi | (|0\\rangle \\langle 0| - |1\\rangle \\langle 1|) |\\psi \\rangle = \\frac{1}{2} \\langle \\psi | Z | \\psi \\rangle, \\\\\n\\langle \\Phi_+ | (|\\psi\\rangle \\langle \\psi| \\otimes I) | \\Psi_+ \\rangle &= \\frac{1}{2} \\langle \\psi | (|1\\rangle \\langle 0| + |0\\rangle \\langle 1|) |\\psi \\rangle = \\frac{1}{2} \\langle \\psi | X | \\psi \\rangle, \\\\\n\\langle \\Phi_- | (|\\psi\\rangle \\langle \\psi| \\otimes I) | \\Psi_+ \\rangle &= \\frac{1}{2} \\langle \\psi | (|1\\rangle \\langle 0| - |0\\rangle \\langle 1|) |\\psi \\rangle = \\frac{1}{2} \\langle \\psi | XZ | \\psi \\rangle,\n\\end{aligned}\n\\]\n\nand the matrix elements in the transposed positions are the complex conjugates of these. Therefore\n\n\\[\nT_3 (|\\psi \\rangle \\langle \\psi |) = \\frac{1}{3} \\Big( |\\Phi_+\\rangle \\langle \\Phi_+| + |\\Phi_-\\rangle \\langle \\Phi_-| + |\\Psi_+\\rangle \\langle \\Psi_+| + \\langle \\psi | Z | \\psi \\rangle \\left( |\\Phi_+\\rangle \\langle \\Phi_-| + |\\Phi_-\\rangle \\langle \\Phi_+| \\right) + \\langle \\psi | X | \\psi \\rangle \\left( |\\Phi_+\\rangle \\langle \\Psi_+| + |\\Psi_+\\rangle \\langle \\Phi_+| \\right) + \\langle \\psi | XZ | \\psi \\rangle |\\Phi_-\\rangle \\langle \\Psi_+| + \\langle \\psi | ZX | \\psi \\rangle |\\Psi_+\\rangle \\langle \\Phi_-| \\Big) = T_2 (|\\psi \\rangle \\langle \\psi |).\n\\]\n\nIt's done.", "question": "### (c) Consider a third cloning map \\(T_3\\) defined as \\(T_3(\\rho) = \\frac{2}{3} P_+ (\\rho \\otimes I) P_+\\), where \\(P_+ = I - |\\Psi_-\\rangle \\langle \\Psi_-|\\) and \\(|\\Psi_-\\rangle = \\frac{1}{\\sqrt{2}}(|0\\rangle |1\\rangle - |1\\rangle |0\\rangle)\\).\n(i) Verify that \\(T_3\\) is a CPTP map.\n(ii) Evaluate its success probability as a universal cloning map.\n(iii) Is this a coincidence — is there a relationship between the three maps you have considered?" } ]
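The Bell-basis bookkeeping above is easy to get wrong by a factor, so a numerical sanity check may be helpful. The sketch below (illustrative only, not part of the original solution; it assumes NumPy is available) builds \(P_+\), applies \(T_3\) to a random pure qubit state, and checks trace preservation and the cloning fidelity \(2/3\) claimed in (i) and (ii).

```python
import numpy as np

# Single-qubit basis states and the singlet projector P+ = I - |Psi-><Psi-|
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
psi_minus = (np.kron(ket0, ket1) - np.kron(ket1, ket0)) / np.sqrt(2)
P_plus = np.eye(4) - np.outer(psi_minus, psi_minus.conj())

def T3(rho):
    # T3(rho) = (2/3) P+ (rho x I) P+
    return (2 / 3) * P_plus @ np.kron(rho, np.eye(2)) @ P_plus

rng = np.random.default_rng(0)
v = rng.normal(size=2) + 1j * rng.normal(size=2)
psi = v / np.linalg.norm(v)                 # random pure qubit state
rho_out = T3(np.outer(psi, psi.conj()))

psipsi = np.kron(psi, psi)                  # the ideal two-copy state |psi>|psi>
assert np.isclose(np.trace(rho_out).real, 1.0)                     # trace preserving
assert np.isclose((psipsi.conj() @ rho_out @ psipsi).real, 2 / 3)  # fidelity 2/3
```

Running the same check for several random seeds confirms the fidelity is state-independent, which is the "universal" part of the claim.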
2016-10-25T00:00:00
Thomas Vidick, Andrea Coladangelo, Jalex Stark, Charles Xu
Caltech
Superdense Coding
In Homework 1, you were introduced to the idea of “quantum teleportation”. By sending just two bits of classical information, Alice was able to “teleport” her single-qubit quantum state to Bob, provided they shared a pair of maximally entangled qubits to begin with. In this problem, Alice instead wants to share two classical bits with Bob, but she only has a quantum channel at her disposal, and she is only allowed to use it once (i.e. send only one single-qubit state). Can she succeed?
[ { "context": "The most general way in which Bob can try to ascertain Alice’s classical bits is by creating a POVM with four operators, each corresponding to a single guess for Alice’s pair of bits. Thus, we want a POVM\n\n\\[\n\\{M_0, M_1, M_+, M_-\\}\n\\]\n\nwhich maximizes the value\n\n\\[\n\\frac{P(0|0) + P(1|1) + P(+|+) + P(-|-)}{4}\n\\]\n\n\\[\n= \\frac{tr(M_0|0\\rangle \\langle 0|) + tr(M_1|1\\rangle \\langle 1|) + tr(M_+|+\\rangle \\langle +|) + tr(M_-|-\\rangle \\langle -|)}{4}\n\\]\n\n\\[\n= \\frac{\\langle 0|M_0|0\\rangle + \\langle 1|M_1|1\\rangle + \\langle +|M_+|+\\rangle + \\langle -|M_-|-\\rangle}{4}\n\\]\n\nAnd we want to know this maximal value. Observe that\n\n\\[\n\\langle 0|M_0|0\\rangle + \\langle 1|M_1|1\\rangle \\leq \\langle 0|M_0|0\\rangle + \\langle 1|M_0|1\\rangle + \\langle 0|M_1|0\\rangle + \\langle 1|M_1|1\\rangle = tr(M_0 + M_1)\n\\]\n\nfrom the positive semidefiniteness of these matrices (their diagonal matrix elements are non-negative), and similarly\n\n\\[\n\\langle +|M_+|+\\rangle + \\langle -|M_-|-\\rangle \\leq \\langle +|M_+|+\\rangle + \\langle -|M_+|-\\rangle + \\langle +|M_-|+\\rangle + \\langle -|M_-|-\\rangle = tr(M_+ + M_-)\n\\]\n\nAnd we therefore have\n\n\\[\n\\langle 0|M_0|0\\rangle + \\langle 1|M_1|1\\rangle + \\langle +|M_+|+\\rangle + \\langle -|M_-|-\\rangle \\leq tr(M_0 + M_1) + tr(M_+ + M_-)\n\\]\n\n\\[\n= tr(M_0 + M_1 + M_+ + M_-)\n\\]\n\n\\[\n= tr(I)\n\\]\n\n\\[\n= 2\n\\]\n\nAnd so\n\n\\[\n\\frac{P(0|0) + P(1|1) + P(+|+) + P(-|-)}{4}\n\\]\n\\[\n= \\frac{tr(M_0|0\\rangle \\langle 0|) + tr(M_1|1\\rangle \\langle 1|) + tr(M_+|+\\rangle \\langle +|) + tr(M_-|-\\rangle \\langle -|)}{4}\n\\]\n\\[\n= \\frac{\\langle 0|M_0|0\\rangle + \\langle 1|M_1|1\\rangle + \\langle +|M_+|+\\rangle + \\langle -|M_-|-\\rangle}{4}\n\\]\n\\[\n\\leq \\frac{2}{4}\n\\]\n\\[\n= \\frac{1}{2}\n\\]\n\nAnd so \\(\\frac{1}{2}\\) is an upper bound on the probability of success. 
We can attain this upper bound with\n\\[\nM_0 = |0\\rangle \\langle 0|\n\\]\n\\[\nM_1 = |1\\rangle \\langle 1|\n\\]\n\\[\nM_+ = 0\n\\]\n\\[\nM_- = 0\n\\]\nwhich corresponds to measuring in the standard basis. Thus, \\(\\frac{1}{2}\\) is the maximum probability with which Bob can correctly guess both of Alice's two classical bits.", "question": "### (a) The first idea she has is to encode her two classical bits into her preparation of one of four states in \\(\\{ |0\\rangle, |1\\rangle, |+\\rangle, |-\\rangle \\}\\), and then send this qubit to Bob. Suppose that the a priori distribution of Alice’s two classical bits is uniform. What is the maximum probability with which Bob can correctly guess both of Alice’s two classical bits?" }, { "context": "Initially, the Alice-Bob qubit pair is maximally entangled:\n\n\\[\n\\frac{1}{\\sqrt{2}} (|0\\rangle_A|0\\rangle_B + |1\\rangle_A|1\\rangle_B)\n\\]\n\nSuppose Alice applies one of the four unitary transformations\n\n\\[\n\\{I, X, Z, ZX\\}\n\\]\n\nto her qubit and then sends it to Bob. Now, Bob has the qubit pair state\n\n\\[\n\\frac{1}{\\sqrt{2}} ((Z^{k_1}X^{k_2}|0\\rangle_A) \\otimes |0\\rangle_B + (Z^{k_1}X^{k_2}|1\\rangle_A) \\otimes |1\\rangle_B)\n\\]\n\nNow, consider the possible values for this pair\n\n\\[\n|\\psi\\rangle_{00} = \\frac{1}{\\sqrt{2}} (|0\\rangle_A|0\\rangle_B + |1\\rangle_A|1\\rangle_B)\n\\]\n\n\\[\n|\\psi\\rangle_{01} = \\frac{1}{\\sqrt{2}} (|1\\rangle_A|0\\rangle_B + |0\\rangle_A|1\\rangle_B)\n\\]\n\n\\[\n|\\psi\\rangle_{10} = \\frac{1}{\\sqrt{2}} (|0\\rangle_A|0\\rangle_B - |1\\rangle_A|1\\rangle_B)\n\\]\n\n\\[\n|\\psi\\rangle_{11} = \\frac{1}{\\sqrt{2}} (|1\\rangle_A|0\\rangle_B - |0\\rangle_A|1\\rangle_B)\n\\]\n\nThis quadruple constitutes a basis. We can see this since all the states are normalized, states from different pairs share no standard-basis components and so are orthogonal, and within each of the two pairs of states which do share components, the inner product evaluates to 0. 
Thus, all Bob has to do is to measure his state in this basis, thereby ascertaining which of the four states he has.", "question": "### (b) Suppose that Alice and Bob share a maximally entangled pair of qubits. Alice thinks that it is a good idea to start by performing one of four unitary transformations on the qubit in her possession depending on the value of the two classical bits that she wishes to communicate and send her qubit to Bob. What next? Help Alice (and Bob) devise a scheme that achieves the desired task with certainty." }, { "context": "No, Eve cannot recover any information about the classical bits Alice is sharing with Bob. To see this, we trace out Bob’s qubit from the two-qubit state, leaving only the intercepted qubit. Letting \\( U_A \\) be the unitary applied by Alice, the density matrix for Alice and Bob’s state is:\n\n\\[\n\\begin{aligned}\n&\\frac{1}{2} (U_A|0\\rangle_A)(\\langle0|U_A^\\dagger) \\otimes |0\\rangle_B\\langle0|_B + \\\\\n&\\frac{1}{2} (U_A|0\\rangle_A)(\\langle1|U_A^\\dagger) \\otimes |0\\rangle_B\\langle1|_B + \\\\\n&\\frac{1}{2} (U_A|1\\rangle_A)(\\langle0|U_A^\\dagger) \\otimes |1\\rangle_B\\langle0|_B + \\\\\n&\\frac{1}{2} (U_A|1\\rangle_A)(\\langle1|U_A^\\dagger) \\otimes |1\\rangle_B\\langle1|_B\n\\end{aligned}\n\\]\n\nAnd so if we trace out the second qubit, we get\n\n\\[\n\\begin{aligned}\n&\\frac{1}{2} \\left( (U_A|0\\rangle_A)(\\langle0|U_A^\\dagger) + (U_A|1\\rangle_A)(\\langle1|U_A^\\dagger) \\right) \\\\\n&= U_A \\left( \\frac{1}{2} (|0\\rangle\\langle0| + |1\\rangle\\langle1|) \\right) U_A^\\dagger \\\\\n&= U_A \\frac{I}{2} U_A^\\dagger \\\\\n&= \\frac{I}{2}\n\\end{aligned}\n\\]\n\nsince \\( U_A \\) is unitary. In particular, none of the four possible unitaries \\( I, X, Z, ZX \\) changes the maximally mixed state \\( \\frac{I}{2} \\), so whatever Alice’s bits are, Eve’s state is maximally mixed, and so she learns nothing.", "question": "### (c) After all the thought that Alice and Bob put into coming up with a working scheme, they finally decide to employ it. 
Unfortunately, the tireless eavesdropper Eve has heard of their new scheme, and as soon as Alice and Bob use it, she intercepts the qubit as it’s sent from Alice to Bob. Can Eve recover information about the two confidential classical bits that Alice intended to share with Bob?" } ]
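The scheme in part (b) is easy to confirm numerically. The sketch below (an illustration, not part of the original solution; NumPy assumed) encodes the four bit pairs by applying \(Z^{k_1} X^{k_2}\) to Alice's half of the entangled pair and verifies that the resulting four states are orthonormal, so Bob's measurement in this basis succeeds with certainty.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
phi_plus = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)

# Alice encodes bits (k1, k2) by applying Z^k1 X^k2 to her (first) qubit
states = [np.kron(np.linalg.matrix_power(Z, k1) @ np.linalg.matrix_power(X, k2), I2) @ phi_plus
          for k1 in (0, 1) for k2 in (0, 1)]

# The four encoded states are orthonormal (the Bell basis up to global phase),
# so Bob can distinguish them with certainty by measuring in this basis.
G = np.array([[abs(np.vdot(a, b)) for b in states] for a in states])
assert np.allclose(G, np.eye(4))
```

Note the Gram matrix is computed with absolute values, since \(|\psi\rangle_{11}\) may differ from \((ZX \otimes I)|\psi\rangle_{00}\) by an irrelevant global phase.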
2016-03-11T00:00:00
Thomas Vidick, Andrea Coladangelo, Jalex Stark, Charles Xu
Caltech
Semidefinite programming
A **semidefinite program** (SDP) is a triple \((\Phi, A, B)\), where - \(\Phi : M_d(\mathbb{C}) \rightarrow M_{d'}(\mathbb{C})\) is a linear map of the form \(\Phi(X) = \sum_{i=1}^k K_i X K_i^\dagger\), for \(K_i\) arbitrary \(d' \times d\) matrices with complex entries, and - \(A \in M_d(\mathbb{C}), B \in M_{d'}(\mathbb{C})\) are Hermitian matrices. Let \(\Phi^*(Y) = \sum_{i=1}^k K_i^\dagger Y K_i\) be the adjoint map to \(\Phi\). We associate with the triple \((\Phi, A, B)\) two optimization problems, called the primal and dual problems, as follows: | Primal problem | Dual problem | |----------------|--------------| | \(\alpha := \max_X \text{Tr}(AX)\) | \(\beta := \min_Y \text{Tr}(BY)\) | | s.t. \(\Phi(X) = B\) | s.t. \(\Phi^*(Y) \geq A\) | | \(X \geq 0\) | \(Y = Y^\dagger\) | (Due to De Huang)
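The identity \(\text{tr}(\Phi(X)Y) = \text{tr}(X\,\Phi^*(Y))\) is the engine behind weak duality in part (a) below, and it can be seen in action numerically. The following sketch (illustrative only; NumPy assumed, with arbitrary small dimensions and random \(K_i\)) checks it directly.

```python
import numpy as np

rng = np.random.default_rng(1)
d, dp, k = 3, 4, 2        # d, d' and the number of terms (arbitrary small choices)
Ks = [rng.normal(size=(dp, d)) + 1j * rng.normal(size=(dp, d)) for _ in range(k)]

Phi      = lambda X: sum(K @ X @ K.conj().T for K in Ks)   # Phi(X) = sum_i K_i X K_i^dag
Phi_star = lambda Y: sum(K.conj().T @ Y @ K for K in Ks)   # adjoint map

X = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
Y = rng.normal(size=(dp, dp)) + 1j * rng.normal(size=(dp, dp))

# The adjoint relation underlying weak duality: Tr(Phi(X) Y) = Tr(X Phi*(Y))
assert np.isclose(np.trace(Phi(X) @ Y), np.trace(X @ Phi_star(Y)))
```

The equality follows from cyclicity of the trace applied term by term, which is exactly the computation used in part (a).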
[ { "context": "We first prove a Lemma: If \\( X, Y \\in M_d(\\mathbb{C}), X \\geq 0, Y \\geq 0 \\), then \\( \\text{tr}(XY) \\geq 0 \\).\n\n**Proof:** Consider the eigenvalue decomposition of \\( X \\),\n\n\\[ X = Q \\Lambda Q^\\dagger \\]\n\nwhere \\( Q \\) is unitary, and \\( \\Lambda \\) is a diagonal matrix with diagonal elements \\( \\lambda_1 \\geq \\lambda_2 \\geq \\cdots \\geq \\lambda_d \\geq 0 \\). Then\n\n\\[ \\text{tr}(XY) = \\text{tr}(Q \\Lambda Q^\\dagger Y) = \\text{tr}(\\Lambda Q^\\dagger Y Q).\\]\n\nLet \\( s_1, s_2, \\ldots, s_d \\) be the diagonal elements of \\( Q^†YQ \\). Since \\( Y \\ge 0 \\), we have \\( Q^†YQ \\ge 0 \\), and thus \\( s_i \\ge 0, \\, i = 1, 2, \\ldots, d \\). Therefore\n\n\\[ \\text{tr}(XY) = \\text{tr}(\\Lambda Q^†YQ) = \\sum_{i=1}^{d} \\lambda_i s_i \\ge 0.\\]\n\nLet \\( \\Omega_1 = \\{ X \\in M_d(\\mathbb{C}) : X \\ge 0, \\Phi(X) = B \\} \\), \\( \\Omega_2 = \\{ Y \\in M_d(\\mathbb{C}) : \\Phi^*(Y) \\ge A, Y = Y^† \\} \\). Now given any \\( X \\in \\Omega_1, Y \\in \\Omega_2 \\), we have\n\n\\[ \\text{tr}(BY) = \\text{tr}(\\Phi(X)Y) \\]\n\n\\[= \\text{tr}\\left( \\sum_{i=1}^{k} K_i X K_i^† Y \\right)\\]\n\n\\[= \\text{tr}\\left( \\sum_{i=1}^{k} X K_i^† Y K_i \\right)\\]\n\n\\[= \\text{tr}(X \\Phi^*(Y)).\\]\n\nSince \\( \\Phi^*(Y) \\ge A \\), i.e. \\( \\Phi^*(Y) - A \\ge 0 \\), using the Lemma we have\n\n\\[ \\text{tr}(X(\\Phi^*(Y) - A)) \\ge 0,\\]\n\n\\[\\Rightarrow \\text{tr}(BY) = \\text{tr}(X \\Phi^*(Y)) \\ge \\text{tr}(XA) = \\text{tr}(AX).\\]\n\nSince \\( X, Y \\) are arbitrary in \\( \\Omega_1, \\Omega_2 \\), we immediately have\n\n\\[ \\beta = \\min_{Y \\in \\Omega_2} \\text{tr}(BY) \\ge \\max_{X \\in \\Omega_1} \\text{tr}(AX) = \\alpha.\\]", "question": "### (a) Show that it is always the case that \\(\\alpha \\leq \\beta\\). This condition is called **weak duality**." 
}, { "context": "Consider the eigenvalue decomposition of \\( M \\),\n\n\\[ M = U \\Lambda U^†, \\]\n\nwhere \\( U \\) is unitary, and \\( \\Lambda \\) is a diagonal matrix with diagonal elements \\( \\lambda_1 \\ge \\lambda_2 \\ge \\ldots \\ge \\lambda_d \\). We have\n\n\\[ \\lambda I \\ge M \\Rightarrow \\lambda I - M \\ge 0 \\Rightarrow U^†(\\lambda I - M)U \\ge 0 \\Rightarrow \\lambda I - \\Lambda \\ge 0.\\]\n\nNotice that \\( \\lambda I - \\Lambda \\) is a diagonal matrix. Thus we have \\( \\lambda - \\lambda_i \\ge 0, \\, i = 1, 2, \\ldots, d \\), which means all eigenvalues of \\( M \\) are less than or equal to \\( \\lambda \\).", "question": "### (b) Remember that the inequality \\(M \\geq N\\), for \\(M, N\\) Hermitian \\(d \\times d\\) matrices, is always taken to mean \\(M - N \\geq 0\\), or equivalently all eigenvalues of \\((M - N)\\) are non-negative. What does the condition \\(M \\leq \\lambda I\\), for some fixed Hermitian \\(M \\in M_d(\\mathbb{C})\\) and \\(\\lambda \\in \\mathbb{R}\\), mean on the eigenvalues of \\(M\\)?" }, { "context": "We can choose that \\( A = M \\in M_d(\\mathbb{C}), B = 1 \\in \\mathbb{C} \\), and each \\( K_i, i = 1, 2, \\ldots, d \\) is a \\( 1 \\times d \\) vector such that the \\( i \\)-th element is 1 and the other elements are 0. Then for any \\( X \\in M_d(\\mathbb{C}) \\) and any \\( y \\in \\mathbb{C} \\), we have\n\n\\[ \\Phi(X) = \\sum_{i=1}^{d} K_i X K_i^\\dagger = \\sum_{i=1}^{d} X_{ii} = \\text{tr}(X), \\]\n\n\\[ \\Phi^*(y) = \\sum_{i=1}^{d} K_i^\\dagger y K_i = yI. \\]\n\nNow we have \\( tr(By) = y \\), and \\( y = y^{\\dagger} \\Rightarrow y \\in \\mathbb{R} \\). Then the dual problem is just\n\n\\[ \\beta = \\min_{y \\in \\mathbb{R}} y \\\\\n\\text{s.t.} \\quad yI \\geq M. \\]\n\nUsing the result of (b), it's easy to see that \\( \\beta = \\lambda_1(M) \\). The primal problem is\n\n\\[ \\alpha = \\max_{X} tr(MX) \\\\\n\\text{s.t.} \\quad tr(X) = 1, \\\\\nX \\geq 0. 
\\]\n\nGiven \\( X \\geq 0 \\), we can always find the root decomposition of \\( X \\),\n\n\\[ X = PP^{\\dagger}. \\]\n\nLet \\( p_i \\) denote the \\( i \\)-th column of \\( P \\), then we have\n\n\\[ tr(MX) = tr(MPP^{\\dagger}) = tr(P^{\\dagger}MP) = \\sum_{i=1}^{d} p_i^{\\dagger}Mp_i, \\]\n\nand\n\n\\[ 1 = tr(X) = tr(PP^{\\dagger}) = tr(P^{\\dagger}P) \\Rightarrow \\sum_{i=1}^{d} \\|p_i\\|_2^2 = 1. \\]\n\nThus the primal problem is equivalent to\n\n\\[ \\alpha = \\max_{p_i, i=1,2,\\ldots,d} \\sum_{i=1}^{d} p_i^{\\dagger}Mp_i \\\\\n\\text{s.t.} \\quad \\sum_{i=1}^{d} \\|p_i\\|_2^2 = 1. \\]\n\nSince given that \\( M \\) is Hermitian, for any feasible \\( p_i, i = 1, 2, \\ldots, d \\), we have\n\n\\[ \\sum_{i=1}^{d} p_i^{\\dagger}Mp_i \\leq \\sum_{i=1}^{d} \\lambda_1(M)p_i^{\\dagger}p_i = \\lambda_1(M) \\sum_{i=1}^{d} \\|p_i\\|_2^2 = \\lambda_1(M). \\]\n\nThus \\( \\alpha \\leq \\lambda_1(M) \\). In particular, if we choose \\( p_1 \\) to be the normalized eigenvector of \\( M \\) associated with \\( \\lambda_1(M) \\), and \\( p_i = 0, i = 2, 3, \\ldots, d \\), then\n\n\\[ \\sum_{i=1}^{d} \\|p_i\\|_2^2 = 1, \\quad \\sum_{i=1}^{d} p_i^{\\dagger}Mp_i = p_1^{\\dagger}Mp_1 = \\lambda_1(M). \\]\n\nTherefore we have \\( \\alpha = \\lambda_1(M) = \\beta \\).", "question": "### (c) Express the problem of computing the largest eigenvalue \\(\\lambda_1(M)\\) of a given \\(d \\times d\\) Hermitian matrix \\(M\\) in the form of a **dual problem** as above. That is, specify the map \\(\\Phi\\) (via the matrices \\(K_i\\)) and the matrices \\(A\\) and \\(B\\) such that \\(\\beta = \\lambda_1(M)\\). Write the primal problem. Show that, in this case, its optimum \\(\\alpha = \\beta\\)." }, { "context": "Recall that in class we have shown that\n\n\\[\\| \\rho - \\sigma \\|_{tr} = \\max_{E_1, E_2} \\text{tr}(\\rho E_1) + \\text{tr}(\\sigma E_2) - 1\\]\n\ns.t. \\( E_1 \\geq 0, \\, E_2 \\geq 0, \\, E_1 + E_2 = I \\),\n\ngiven that \\(\\rho, \\sigma\\) are density matrices. 
Let\n\n\\[ A = \\begin{pmatrix} \\rho & 0 \\\\ 0 & \\sigma \\end{pmatrix} \\in M_{2d}(C), \\quad B = I \\in M_d(C),\\]\n\n\\[ K_1 = \\begin{pmatrix} I & 0 \\end{pmatrix} \\in M_{d \\times 2d}(C), \\quad K_2 = \\begin{pmatrix} 0 & I \\end{pmatrix} \\in M_{d \\times 2d}(C).\\]\n\nConsider the two sets \\(\\Omega_1 = \\{(E_1, E_2) \\in (M_d(C), M_d(C)) : E_1 \\geq 0, E_2 \\geq 0, E_1 + E_2 = I\\}, \\Omega_2 = \\{X \\in M_{2d}(C) : X \\geq 0, \\Phi(X) = I\\}\\). For any matrix\n\n\\[X = \\begin{pmatrix} X_1 & X_3 \\\\ X_3^\\dagger & X_2 \\end{pmatrix} \\in \\Omega_2,\\]\n\nlet \\(E_1 = X_1, E_2 = X_2\\), then we have\n\n\\[\\text{tr}(AX) = \\text{tr}(\\rho E_1) + \\text{tr}(\\sigma E_2),\\]\n\nand \\((E_1, E_2) \\in \\Omega_1\\), because\n\n\\[X \\geq 0 \\implies E_1 \\geq 0, \\, E_2 \\geq 0,\\]\n\n\\[\\Phi(X) = B = I \\implies K_1 X K_1^\\dagger + K_2 X K_2^\\dagger = E_1 + E_2 = I.\\]\n\nConversely, for any \\((E_1, E_2) \\in \\Omega_1\\), let\n\n\\[X = \\begin{pmatrix} E_1 & 0 \\\\ 0 & E_2 \\end{pmatrix},\\]\n\nthen we have\n\n\\[\\text{tr}(AX) = \\text{tr}(\\rho E_1) + \\text{tr}(\\sigma E_2),\\]\n\nand \\(X \\in \\Omega_2\\), because\n\n\\[E_1 \\geq 0, \\, E_2 \\geq 0 \\implies X \\geq 0,\\]\n\n\\[E_1 + E_2 = I \\implies \\Phi(X) = K_1 X K_1^\\dagger + K_2 X K_2^\\dagger = I = B.\\]\n\nTherefore if we consider the primal problem\n\n\\[\\alpha = \\max_X \\text{tr}(AX) - 1\\]\ns.t. \\(\\Phi(X) = B, \\, X \\geq 0\\),\nit's easy to see that \\(\\alpha = \\| \\rho - \\sigma \\|_{tr} = \\| M \\|_{tr}\\). Notice that this primal problem with a ‘-1’ in the objective function is a little different from the original form above, but we can fix this\n\nby simply adding one more dimension to the problem, which would have more complicated expressions for \\(A, B, K_1, K_2\\). 
For convenience, we just stick to this modified primal problem.\n\nNow the modified dual problem\n\n\\[\\beta = \\min_Y \\, \\text{tr}(BY) - 1\\]\n\\[\\text{s.t.} \\quad \\Phi^*(Y) \\geq A,\\]\n\\[Y = Y^\\dagger,\\]\nis equivalent to\n\n\\[\\beta = \\min_Y \\, \\text{tr}(Y) - 1\\]\n\\[\\text{s.t.} \\quad Y \\geq \\rho, \\quad Y \\geq \\sigma,\\]\n\\[Y = Y^\\dagger,\\]\nwhich can be easily verified by substituting the explicit expressions of \\(A, B, K_1, K_2\\) into the modified dual problem.\n\nNext we need to prove in this case \\(\\beta = \\alpha = \\|\\rho - \\sigma\\|_{\\text{tr}}\\). Since in (a) we already showed that \\(\\alpha \\leq \\beta\\), we only need to show that there exists a feasible \\(Y\\) such that \\(\\text{tr}(Y) - 1 = \\alpha = \\|\\rho - \\sigma\\|_{\\text{tr}}\\). Consider the eigenvalue decomposition of \\(\\rho - \\sigma\\),\n\n\\[\\rho - \\sigma = Q \\Lambda Q^\\dagger,\\]\nwhere \\(Q\\) is unitary, and \\(\\Lambda\\) is a diagonal matrix with diagonal elements\n\n\\[\\lambda_1 \\geq \\lambda_2 \\geq \\ldots \\geq \\lambda_r \\geq 0 \\geq \\lambda_{r+1} \\geq \\ldots \\geq \\lambda_d.\\]\n\nSince \\(\\rho, \\sigma\\) are density matrices, \\(\\sum_{i=1}^d \\lambda_i = \\text{tr}(\\rho - \\sigma) = 0\\), so we have\n\n\\[\\|\\rho - \\sigma\\|_{\\text{tr}} = \\frac{1}{2} \\sum_{i=1}^d |\\lambda_i| = \\frac{1}{2} \\sum_{i=1}^r \\lambda_i - \\frac{1}{2} \\sum_{i=r+1}^d \\lambda_i = \\sum_{i=1}^r \\lambda_i.\\]\n\nLet \\(q_i\\) denote the \\(i\\)-th column of \\(Q\\). 
Now define\n\n\\[s_i = q_i^\\dagger \\rho q_i, \\quad i = 1, 2, \\ldots, d,\\]\n\\[t_i = q_i^\\dagger \\sigma q_i, \\quad i = 1, 2, \\ldots, d.\\]\n\nWe have\n\n\\[s_i - t_i = q_i^\\dagger (\\rho - \\sigma) q_i = \\lambda_i \\geq 0, \\quad i = 1, 2, \\ldots, r,\\]\n\\[t_i - s_i = -q_i^\\dagger (\\rho - \\sigma) q_i = -\\lambda_i \\geq 0, \\quad i = r+1, r+2, \\ldots, d.\\]\n\nLet \\(S, T, \\Sigma\\) be three diagonal matrices such that their diagonal vectors are \\((s_1, s_2, \\ldots, s_d)\\), \\((t_1, t_2, \\ldots, t_d)\\) and \\((s_1, s_2, \\ldots, s_r, t_{r+1}, t_{r+2}, \\ldots, t_d)\\) respectively. Then\n\n\\[\\Sigma - S = \\text{diag}(0, 0, \\ldots, 0, t_{r+1} - s_{r+1}, \\ldots, t_d - s_d) \\geq 0,\\]\n\\[\\Sigma - T = \\text{diag}(s_1 - t_1, s_2 - t_2, \\ldots, s_r - t_r, 0, \\ldots, 0) \\geq 0,\\]\n\nwhere diag(v) denotes the diagonal matrix with diagonal vector v. Notice that \\( S \\) and \\( T \\) are the diagonal parts of \\( Q^\\dagger \\rho Q \\) and \\( Q^\\dagger \\sigma Q \\) respectively. We should also notice that\n\n\\[ q_i^\\dagger (\\rho - \\sigma) q_j = 0, \\quad i \\neq j, \\]\nwhich means the non-diagonal part of \\( Q^\\dagger \\rho Q \\) and \\( Q^\\dagger \\sigma Q \\) are the same, i.e.\n\n\\[ Q^\\dagger \\rho Q - S = Q^\\dagger \\sigma Q - T \\triangleq L.\\]\n\nNow we have\n\n\\[ Q^\\dagger \\rho Q = S + L, \\quad Q^\\dagger \\sigma Q = T + L.\\]\n\nLet\n\n\\[ Y = Q(\\Sigma + L)Q^\\dagger.\\]\n\nObviously \\( Y = Y^\\dagger \\), and we have\n\n\\[ Q^\\dagger (Y - \\rho) Q = \\Sigma + L - S - L = \\Sigma - S \\geq 0 \\quad \\Rightarrow \\quad Y \\geq \\rho,\\]\n\\[ Q^\\dagger (Y - \\sigma) Q = \\Sigma + L - T - L = \\Sigma - T \\geq 0 \\quad \\Rightarrow \\quad Y \\geq \\sigma,\\]\nthus \\( Y \\) is a feasible solution to the dual problem. 
Moreover, notice that we have\n\n\\[ \\sum_{i=1}^d s_i = \\text{tr}(Q^\\dagger \\rho Q) = \\text{tr}(\\rho) = 1, \\quad \\sum_{i=1}^d t_i = \\text{tr}(Q^\\dagger \\sigma Q) = \\text{tr}(\\sigma) = 1,\\]\ntherefore\n\n\\[ \\text{tr}(Y) = \\text{tr}(\\Sigma) = \\sum_{i=1}^r s_i + \\sum_{i=r+1}^d t_i = \\sum_{i=1}^r (s_i - t_i) + \\sum_{i=1}^d t_i = \\sum_{i=1}^r \\lambda_i + 1 = \\|\\rho - \\sigma\\|_{tr} + 1,\\]\nthat is \\( \\beta \\leq \\text{tr}(Y) - 1 = \\|\\rho - \\sigma\\|_{tr} = \\alpha \\). In all we have \\( \\beta = \\alpha = \\|\\rho - \\sigma\\|_{tr} \\).", "question": "### (d) Suppose given a Hermitian matrix \\(M\\) that is the difference of two density matrices, \\(M = \\rho - \\sigma\\). Express the problem of computing \\(\\|M\\|_{\\text{tr}} = \\frac{1}{2} \\|M\\|_1\\) in the form of a primal problem as above. That is, specify the map \\(\\Phi\\) and the matrices \\(A\\) and \\(B\\) such that \\(\\alpha = \\|M\\|_{\\text{tr}}\\). [Hint: recall the operational interpretation of the trace distance as optimal distinguishing probability.] Write the dual problem. Show that, in this case, its optimum \\(\\beta = \\alpha\\)." }, { "context": "The success probability of distinguishing with a POVM \\( \\{M_i\\} \\) is\n\n\\[ \\sum_{i=1}^k p_i \\Pr(\\text{guess } i \\mid \\rho_i) = \\sum_{i=1}^k p_i \\text{tr}(M_i \\rho_i). \\]\n\nDefine\n\n\\[ A = \\begin{pmatrix}\np_1 \\rho_1 & & \\\\\n& \\ddots & \\\\\n& & p_k \\rho_k\n\\end{pmatrix} \\in M_{kd}(C), \\quad B = I \\in M_{d}(C), \\]\n\n\\[ K_i = (0, \\ldots, 0, I, 0, \\ldots, 0) \\in M_{d \\times kd}(C), \\quad i = 1, 2, \\ldots, k, \\]\n\nwith the identity block of \\( K_i \\) in the \\( i \\)-th position. Then, using an argument similar to that in (d), we can see that solving the optimization problem\n\n\\[ \\alpha = \\max_{M_i} \\sum_{i=1}^k p_i \\text{tr}(M_i \\rho_i) \\]\n\\[ \\text{s.t.} \\quad M_i \\geq 0, \\quad i = 1, 2, \\ldots, k, \\quad \\sum_{i=1}^{k} M_i = I, \\]\nis equivalent to solving the primal problem\n\n\\[\\alpha = \\max_X \\, \\text{tr}(AX)\\]\n\\[ \\text{s.t.} \\quad \\Phi(X) = B, \\quad X \\geq 0, \\]\nand given an optimal solution \\(X^*\\) for the primal problem, we can recover an optimal solution \\(\\{M_i^*\\}_{i=1}^k\\) for the first problem by taking \\(\\{M_i^*\\}_{i=1}^k\\) to be the diagonal blocks of \\(X^*\\). Indeed, let \\(\\Omega_1\\) and \\(\\Omega_2\\) be the feasible sets of the two problems above respectively. For any \\(\\{M_i\\}_{i=1}^k \\in \\Omega_1\\), let\n\n\\[X = \\begin{pmatrix}\nM_1 & & & \\\\\n& M_2 & & \\\\\n& & \\ddots & \\\\\n& & & M_k\n\\end{pmatrix},\\]\nthen we have\n\n\\[\\text{tr}(AX) = \\sum_{i=1}^{k} p_i \\text{tr}(M_i \\rho_i), \\quad K_i X K_i^\\dagger = M_i, \\quad i = 1, 2, \\ldots, k,\\]\nand \\(X \\in \\Omega_2\\) because\n\n\\[\\sum_{i=1}^{k} M_i = I \\implies \\Phi(X) = B = I,\\]\n\\[M_i \\geq 0, \\quad i = 1, 2, \\ldots, k \\implies X \\geq 0.\\]\n\nConversely, for any \\(X \\in \\Omega_2\\), let \\(M_1, M_2, \\ldots, M_k\\) be the diagonal blocks of \\(X\\), then we have\n\n\\[\\sum_{i=1}^{k} p_i \\text{tr}(M_i \\rho_i) = \\text{tr}(AX), \\quad K_i X K_i^\\dagger = M_i, \\quad i = 1, 2, \\ldots, k,\\]\nand \\(\\{M_i\\}_{i=1}^k \\in \\Omega_1\\) because\n\n\\[\\Phi(X) = B = I \\implies \\sum_{i=1}^{k} M_i = I,\\]\n\\[X \\geq 0 \\implies M_i \\geq 0, \\quad i = 1, 2, \\ldots, k.\\]", "question": "### (e) Suppose you are given one of \\(k\\) possible density matrices, \\(\\rho_1, ..., \\rho_k\\), each with a priori probability \\(p_1, ..., p_k\\) respectively. Your goal is to find the optimal guessing measurement: this is the \\(k\\)-outcome POVM which maximizes your chances of producing the index \\(j \\in \\{1, ..., k\\}\\), given one copy of \\(\\rho_j\\) (which is assumed to occur with probability \\(p_j\\)). First write a formula that expresses the success probability of distinguishing with a POVM \\(\\{M_x\\}\\). 
Show that the problem of optimizing this quantity can be expressed as a semidefinite program in primal or dual form (whichever you find most convenient).\n\nIt turns out that in many cases (essentially all “well-behaved” cases) the optimum of the primal problem of a semidefinite program equals the optimum of the dual problem. This is useful for several reasons. First of all, note how the primal is a maximization problem, while the dual is a minimization problem. Therefore any feasible solution (a candidate solution that satisfies all the constraints) to the primal provides a lower bound on the optimum, while a feasible solution to the dual provides an upper bound. The fact that they are equal shows that one can get tight bounds in this way. In addition, formulating a problem in, say, primal form, and then looking at the dual formulation, can provide useful insights on the problem. We will see examples of this later on in the course, when we discuss the relation between “guessing probability” and “conditional min-entropy”." } ]
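The explicit dual feasible point \(Y = Q(\Sigma + L)Q^\dagger\) constructed in part (d) can be checked numerically. The sketch below (illustrative only, not part of the original solution; NumPy assumed) draws two random density matrices, builds \(Y\) exactly as in the proof, and verifies \(Y \geq \rho\), \(Y \geq \sigma\), and \(\mathrm{tr}(Y) - 1 = \|\rho - \sigma\|_{tr}\).

```python
import numpy as np

rng = np.random.default_rng(2)

def rand_density(d):
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = A @ A.conj().T          # positive semidefinite
    return rho / np.trace(rho)    # normalized to trace 1

d = 4
rho, sigma = rand_density(d), rand_density(d)

lam, Q = np.linalg.eigh(rho - sigma)          # rho - sigma = Q diag(lam) Q^dag
tr_dist = lam[lam > 0].sum()                  # ||rho - sigma||_tr

# Dual feasible point from part (d): on the positive eigenspace Sigma takes the
# diagonal of Q^dag rho Q, elsewhere that of Q^dag sigma Q; L is the shared
# off-diagonal part of both conjugated matrices.
s_diag = np.diag(Q.conj().T @ rho @ Q).real
t_diag = np.diag(Q.conj().T @ sigma @ Q).real
L = Q.conj().T @ rho @ Q - np.diag(s_diag)
Y = Q @ (np.diag(np.where(lam > 0, s_diag, t_diag)) + L) @ Q.conj().T

eps = 1e-9
assert np.all(np.linalg.eigvalsh(Y - rho) > -eps)     # Y >= rho
assert np.all(np.linalg.eigvalsh(Y - sigma) > -eps)   # Y >= sigma
assert np.isclose(np.trace(Y).real - 1, tr_dist)      # tr(Y) - 1 = ||rho - sigma||_tr
```

By weak duality from part (a), any such feasible \(Y\) certifies an upper bound on \(\alpha\); here the bound is met with equality.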
2016-03-11T00:00:00
Thomas Vidick, Andrea Coladangelo, Jalex Stark, Charles Xu
Caltech
Maximally entangled properties
Let A and B be quantum systems of the same dimension d. Let |φ⁺⟩ = \(\frac{1}{\sqrt{d}}\) ∑₀≤i≤d-1 |i⟩ₐ ⊗ |i⟩B. This is referred to as a maximally entangled pair of qudits. (Due to Bolton Bailey)
[ { "context": "We have a maximally entangled pair of qudits\n\n\\[ |\\Phi^+\\rangle = \\sum_{0 \\leq i \\leq d-1} \\frac{1}{\\sqrt{d}} |i\\rangle_A \\otimes |i\\rangle_B \\]\n\nTo find the reduced state on \\( A \\) we trace out the \\( B \\) system\n\n\\[ Tr_B(|\\Phi^+\\rangle \\langle \\Phi^+|) = Tr_B \\left( \\sum_{0 \\leq i,j \\leq d-1} \\frac{1}{d} (|i\\rangle_A \\otimes |i\\rangle_B)(\\langle j|_A \\otimes \\langle j|_B) \\right) \\]\n\n\\[ = \\sum_{0 \\leq i,j \\leq d-1} \\frac{1}{d} Tr_B \\left( (|i\\rangle_A \\otimes |i\\rangle_B)(\\langle j|_A \\otimes \\langle j|_B) \\right) \\]\n\n\\[ = \\sum_{0 \\leq i,j \\leq d-1} \\frac{1}{d} Tr_B \\left( (|i\\rangle_A \\langle j|_A \\otimes |i\\rangle_B \\langle j|_B) \\right) \\]\n\n\\[ = \\sum_{0 \\leq i,j \\leq d-1} \\frac{1}{d} \\delta_{ij} |i\\rangle_A \\langle j|_A = \\frac{1}{d} \\sum_{0 \\leq i \\leq d-1} |i\\rangle_A \\langle i|_A = \\frac{I}{d}, \\]\n\nsince \\( Tr_B(|i\\rangle_B \\langle j|_B) = \\langle j | i \\rangle = \\delta_{ij} \\). So we get the maximally mixed state \\( \\frac{I}{d} \\) on the \\( A \\) system.", "question": "### (i) What is the reduced state on subsystem A?" }, { "context": "\\( M \\otimes I \\) and \\( I \\otimes M^T \\) are both \\( d^2 \\times d^2 \\) matrices, and \\( |\\Phi^+\\rangle \\) is a vector of length \\( d^2 \\), so \\( M \\otimes I |\\Phi^+\\rangle \\) and \\( I \\otimes M^T |\\Phi^+\\rangle \\) are vectors of length \\( d^2 \\). To see these are equal, we must show their components are equal. 
That is, for each \\( 0 \\leq k, l \\leq d-1 \\), we must show\n\n\\[ (\\langle k| \\otimes \\langle l|) (M \\otimes I) |\\Phi^+\\rangle = (\\langle k| \\otimes \\langle l|) (I \\otimes M^T) |\\Phi^+\\rangle \\]\n\nNow see\n\n\\[ (\\langle k| \\otimes \\langle l|) (M \\otimes I) |\\Phi^+\\rangle = (\\langle k| \\otimes \\langle l|) (M \\otimes I) \\left( \\sum_{0 \\leq i \\leq d-1} \\frac{1}{\\sqrt{d}} |i\\rangle_A \\otimes |i\\rangle_B \\right) \\]\n\n\\[ = \\sum_{0 \\leq i \\leq d-1} \\frac{1}{\\sqrt{d}} (\\langle k| \\otimes \\langle l|) (M \\otimes I) (|i\\rangle_A \\otimes |i\\rangle_B) \\]\n\n\\[ = \\sum_{0 \\leq i \\leq d-1} \\frac{1}{\\sqrt{d}} \\langle k| M |i\\rangle \\, \\langle l| i\\rangle \\]\n\n\\[ = \\frac{1}{\\sqrt{d}} \\langle k| M |l\\rangle \\]\n\n\\[ = \\frac{1}{\\sqrt{d}} M_{kl} \\]\n\nAnd\n\n\\[ (\\langle k| \\otimes \\langle l|) (I \\otimes M^T) |\\Phi^+\\rangle = (\\langle k| \\otimes \\langle l|) (I \\otimes M^T) \\left( \\sum_{0 \\leq i \\leq d-1} \\frac{1}{\\sqrt{d}} |i\\rangle_A \\otimes |i\\rangle_B \\right) \\]\n\n\\[ = \\sum_{0 \\leq i \\leq d-1} \\frac{1}{\\sqrt{d}} (\\langle k| \\otimes \\langle l|) (I \\otimes M^T) (|i\\rangle_A \\otimes |i\\rangle_B) \\]\n\n\\[ = \\sum_{0 \\leq i \\leq d-1} \\frac{1}{\\sqrt{d}} \\langle k| i\\rangle \\, \\langle l| M^T |i\\rangle \\]\n\n\\[ = \\frac{1}{\\sqrt{d}} (M^T)_{lk} \\]\n\n\\[ = \\frac{1}{\\sqrt{d}} M_{kl} \\]\n\nSo these two are indeed equal.", "question": "### (ii) Let M ∈ M_d(C). Show that M ⊗ I|φ⁺⟩ = I ⊗ Mᵀ|φ⁺⟩." } ]
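Both facts about \(|\Phi^+\rangle\) are easy to confirm numerically. The sketch below (illustrative only, not part of the original solution; NumPy assumed) computes the partial trace over \(B\) and checks the transpose identity of part (ii) for a random \(M\).

```python
import numpy as np

d = 3
phi = np.zeros(d * d, dtype=complex)
for i in range(d):
    phi[i * d + i] = 1 / np.sqrt(d)           # |phi+> = (1/sqrt(d)) sum_i |i>|i>

rho_AB = np.outer(phi, phi.conj())
# Partial trace over B: view rho_AB as a (d, d, d, d) tensor and contract the B indices
rho_A = np.trace(rho_AB.reshape(d, d, d, d), axis1=1, axis2=3)
assert np.allclose(rho_A, np.eye(d) / d)      # reduced state is maximally mixed

rng = np.random.default_rng(3)
M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
# Transpose trick: (M x I)|phi+> = (I x M^T)|phi+>
assert np.allclose(np.kron(M, np.eye(d)) @ phi, np.kron(np.eye(d), M.T) @ phi)
```

The reshape-and-trace idiom implements exactly the \(\delta_{ij}\) contraction from the computation in part (i).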
2016-03-11T00:00:00
Thomas Vidick, Andrea Coladangelo, Jalex Stark, Charles Xu
Caltech
Choi’s theorem
[This problem is optional, and you may use its results to solve the following problem.] A linear map T : M_d(C) → M_d'(C) is said to be completely positive if for any d'' ≥ 1 the map T ⊗ id_d'' : M_d(C) ⊗ M_d''(C) → M_d'(C) ⊗ M_d''(C) is positive, where id_d'' : M_d''(C) → M_d''(C) is the identity map. (Recall that a positive map is one which maps positive semidefinite matrices to positive semidefinite matrices. Not every positive map is completely positive: a good example is the transpose map on 2 x 2 matrices.) Let |φ⁺⟩ = ∑₀≤i≤d-1 |i⟩ ⊗ |i⟩ (this is just \(\sqrt{d}\) times the maximally entangled state defined in the previous question), and define Φ⁺ := |φ⁺⟩⟨φ⁺|. Define the Choi-Jamiolkowski representation J(T) ∈ M_d'(C) ⊗ M_d(C) of a linear map T : M_d(C) → M_d'(C) as follows: \[J(T) = T \otimes \text{id}_d (\Phi^+) = \sum_{0 \leq i,j \leq d-1} T(|i\rangle \langle j|) \otimes |i\rangle \langle j|. \tag{1}\] In this problem, you will show that, letting \( J(T) \) be the Choi-Jamiolkowski representation of a linear map \( T : M_d(\mathbb{C}) \rightarrow M_{d'}(\mathbb{C}) \), the following are equivalent: 1. \( J(T) \) is positive semidefinite. 2. There is a set of matrices \( \{K_j \in M_{d' \times d}(\mathbb{C})\} \) such that \( T(X) = \sum_j K_j X K_j^\dagger \) for \( X \in M_d(\mathbb{C}) \). 3. \( T \) is completely positive. (Due to De Huang)
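The transpose map mentioned above as positive-but-not-completely-positive can be checked directly through definition (1): its Choi matrix is the SWAP operator, which has a negative eigenvalue. A small numpy sketch (d = 2 as in the statement):

```python
import numpy as np

d = 2

def unit(i, j):
    """Matrix unit |i><j| in M_d."""
    E = np.zeros((d, d))
    E[i, j] = 1.0
    return E

# Choi matrix of the transpose map t_d via (1):
# J(t_d) = sum_{ij} (|i><j|)^T x |i><j| = sum_{ij} |j><i| x |i><j|, the SWAP operator
J = sum(np.kron(unit(i, j).T, unit(i, j)) for i in range(d) for j in range(d))

eigs = np.linalg.eigvalsh(J)
print(eigs)  # contains -1: J is not PSD, so the transpose map is not completely positive
```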
[ { "context": "Assume that (2) is true, i.e.\n\n\\[T(X) = \\sum_s K_s X K_s^\\dagger, \\quad \\forall X \\in M_d(\\mathbb{C}).\\]\n\nFor any \\( d'' \\geq 0 \\) and any \\( X \\otimes Y \\in M_d(\\mathbb{C}) \\otimes M_{d''}(\\mathbb{C}) \\) such that \\( X \\otimes Y \\geq 0 \\), we have\n\n\\[T \\otimes id_{d''} (X \\otimes Y) = T(X) \\otimes Y = \\sum_s K_s X K_s^\\dagger \\otimes Y.\\]\n\nThen for any joint state\n\n\\[|\\Phi\\rangle = \\sum_{0 \\leq i \\leq d-1} \\sum_{0 \\leq j \\leq d''-1} a_{ij} |i\\rangle \\otimes |j\\rangle \\in \\text{span}\\{\\mathbb{C}^d \\otimes \\mathbb{C}^{d''}\\},\\]\n\nwe have\n\\[\\langle \\Phi | T \\otimes id_{d''} (X \\otimes Y) | \\Phi \\rangle = \\sum_s \\langle \\Phi | (K_s X K_s^\\dagger \\otimes Y) | \\Phi \\rangle\\]\n\n\\[= \\sum_s \\sum_{i,j} \\sum_{k,l} a_{ij} a_{kl} \\langle i | K_s X K_s^\\dagger | k \\rangle \\langle j | Y | l \\rangle\\]\n\n\\[= \\sum_s \\left( \\sum_{i,j} a_{ij} \\langle i | K_s \\rangle \\otimes \\langle j | \\right) (X \\otimes Y) \\left( \\sum_{k,l} a_{kl} \\langle k | K_s^\\dagger \\rangle \\otimes \\langle l | \\right)\\]\n\n\\[= \\sum_s \\left( \\langle \\Phi | K_s \\otimes \\mathbb{I} \\right) (X \\otimes Y) \\left( K_s^\\dagger \\otimes \\mathbb{I} | \\Phi \\rangle \\right)\\]\n\n\\[\\geq 0.\\]\nSince \\( | \\Phi \\rangle \\) is arbitrary, we have \\( T \\otimes id_{d''} (X \\otimes Y) \\geq 0 \\). And since \\( d'' \\) and \\( X \\otimes Y \\) are arbitrary, we can conclude that \\( T \\) is completely positive.", "question": "### (a) Show that (2) \\(\\Rightarrow\\) (3), i.e. that if \\( T \\) is such that \\( T(X) = \\sum_j K_j X K_j^\\dagger \\) for \\( X \\in M_d(\\mathbb{C}) \\), then \\( T \\) is completely positive." }, { "context": "(3) \\(\\Rightarrow\\) (1) is trivial. If \\( T \\) is completely positive, then by definition, in the case \\( d'' = d \\), \\( T \\otimes id_d \\) is positive. 
Since \\( \\Phi^+ = | \\Phi^+ \\rangle \\langle \\Phi^+ | \\) is positive semidefinite, we immediately have that\n\\[J(T) = T \\otimes id_d (\\Phi^+)\\]\nis positive semidefinite.", "question": "### (b) Explain why (3) \\(\\Rightarrow\\) (1)." }, { "context": "We may always assume that\n\\[X | j \\rangle = \\sum_{0 \\leq i \\leq d-1} x_{ij} | i \\rangle,\\]\nthen we can write \\( X \\) as\n\\[X = \\sum_{0 \\leq i,j \\leq d-1} x_{ij} | i \\rangle \\langle j |,\\]\nand we have\n\\[T(X) = \\sum_{0 \\leq i,j \\leq d-1} x_{ij} T(| i \\rangle \\langle j |).\\]\nOn the other hand, we have\n\\[(id_{d'} \\otimes t_d)(J(T)) \\, (\\mathbb{I}_{d'} \\otimes X) = \\left( \\sum_{0 \\leq i,j \\leq d-1} T(| i \\rangle \\langle j |) \\otimes t_d(| i \\rangle \\langle j |) \\right) (\\mathbb{I}_{d'} \\otimes X)\\]\n\\[= \\sum_{0 \\leq i,j \\leq d-1} (T(| i \\rangle \\langle j |) \\otimes | j \\rangle \\langle i |) (\\mathbb{I}_{d'} \\otimes X)\\]\n\\[= \\sum_{0 \\leq i,j \\leq d-1} T(| i \\rangle \\langle j |) \\otimes (| j \\rangle \\langle i | X)\\]\n\\[= \\sum_{0 \\leq i,j \\leq d-1} T(| i \\rangle \\langle j |) \\otimes \\left(| j \\rangle \\langle i | \\left(\\sum_{0 \\leq k,l \\leq d-1} x_{kl} | k \\rangle \\langle l |\\right)\\right)\\]\n\\[= \\sum_{0 \\leq i,j \\leq d-1} T(| i \\rangle \\langle j |) \\otimes \\left(\\sum_{0 \\leq l \\leq d-1} x_{il} | j \\rangle \\langle l |\\right)\\]\n\\[= \\sum_{0 \\leq i,j,l \\leq d-1} x_{il} T(| i \\rangle \\langle j |) \\otimes | j \\rangle \\langle l |,\\]\nand thus\n\\[\\text{tr}_{\\Lambda} \\left( (id_{d'} \\otimes t_d)(J(T)) \\, (\\mathbb{I}_{d'} \\otimes X_{\\Lambda}) \\right) = \\sum_{0 \\leq i,j,l \\leq d-1} x_{il} \\, \\delta_{jl} \\, T(| i \\rangle \\langle j |) = \\sum_{0 \\leq i,j \\leq d-1} x_{ij} T(| i \\rangle \\langle j |) = T(X).\\]\nNow we can define a bidirectional map between linear map \\( T \\) 
from \\( M_d(\\mathbb{C}) \\) to \\( M_{d'}(\\mathbb{C}) \\) and their operator representation \\( J(T) \\) in \\( M_{d'}(\\mathbb{C}) \\otimes M_d(\\mathbb{C}) \\). One direction is\n\\[T \\rightarrow J(T),\\]\nand the other direction is\n\\[J(T) \\rightarrow \\text{tr}_{\\Lambda} \\left( (id_{d'} \\otimes t_d)(J(T)) \\, (\\mathbb{I}_{d'} \\otimes (\\cdot)_{\\Lambda}) \\right) = T(\\cdot).\\]\nThese two directions are inverse to each other, so this is a one-to-one map. Let's define \\( J^{-1}(J(T)) = T \\) for future use. Also we can see that both \\( J \\) and \\( J^{-1} \\) are linear.", "question": "### (c) Let \\( t_d : M_d(\\mathbb{C}) \\rightarrow M_d(\\mathbb{C}) \\) be the linear map defined by \\( X \\mapsto X^T \\). \\( t_d \\) transposes the matrix but we take care to define it in this way as we could be taking the transpose just on a single subsystem. Show that the action of \\( T \\) on \\( X \\in M_d(\\mathbb{C}) \\) can be written in terms of its Choi-Jamiolkowski representation \\( J(T) \\) as\n\\[ T(X) = \\text{Tr}_{\\Lambda}(((\\text{id}_{d'} \\otimes t_d)(J(T)))(\\mathbb{I}_{d'} \\otimes X_{\\Lambda})). \\tag{2} \\]\nand deduce that there is a one-to-one correspondence between linear maps from \\( M_d(\\mathbb{C}) \\) to \\( M_{d'}(\\mathbb{C}) \\) and their operator representations in \\( M_{d'}(\\mathbb{C}) \\otimes M_d(\\mathbb{C}) \\)." 
}, { "context": "For an arbitrary \\( Z \\in M_{d' \\times d} \\),\n\\[Z = \\sum_{0 \\leq i \\leq d' - 1} \\sum_{0 \\leq j \\leq d - 1} a_{ij} \\lvert i \\rangle \\lvert j \\rangle,\\]\nwe have\n\\[vec(Z) = \\sum_{0 \\leq i \\leq d' - 1} \\sum_{0 \\leq j \\leq d - 1} a_{ij} vec(\\lvert i \\rangle \\lvert j \\rangle)\\]\n\\[= \\sum_{0 \\leq i \\leq d' - 1} \\sum_{0 \\leq j \\leq d - 1} a_{ij} \\lvert i \\rangle \\otimes \\lvert j \\rangle\\]\n\\[= \\sum_{0 \\leq j \\leq d - 1} \\left( \\sum_{0 \\leq i \\leq d' - 1} a_{ij} \\lvert i \\rangle \\right) \\otimes \\lvert j \\rangle\\]\n\\[= \\sum_{0 \\leq j \\leq d - 1} Z \\lvert j \\rangle \\otimes \\lvert j \\rangle,\\]\nwhere we have used the fact that\n\\[Z \\lvert j \\rangle = \\sum_{0 \\leq i \\leq d' - 1} a_{ij} \\lvert i \\rangle, \\quad 0 \\leq j \\leq d - 1.\\]\nTherefore we have\n\\[J(T) = T \\otimes id_d(\\Phi^+)\\]\n\\[= \\sum_{0 \\leq i,j \\leq d-1} T(|i\\rangle \\langle j|) \\otimes |i\\rangle \\langle j|\\]\n\\[= \\sum_{0 \\leq i,j \\leq d-1} (Z|i\\rangle \\langle j|Z^\\dagger) \\otimes |i\\rangle \\langle j|\\]\n\\[= \\sum_{0 \\leq i,j \\leq d-1} ((Z|i\\rangle) \\otimes |i\\rangle)((\\langle j|Z^\\dagger) \\otimes \\langle j|)\\]\n\\[= (\\sum_{0 \\leq i,j \\leq d-1} (Z|i\\rangle) \\otimes |i\\rangle) (\\sum_{0 \\leq i,j \\leq d-1} (\\langle j|Z^\\dagger) \\otimes \\langle j|)\\]\n\\[= |\\zeta\\rangle \\langle \\zeta|,\\]\nwith \\(|\\zeta\\rangle = vec(Z)\\).", "question": "### (d) Define the linear map \\( \\text{vec} : M_{d^2}(\\mathbb{C}) \\rightarrow M_d(\\mathbb{C}) \\otimes M_d(\\mathbb{C}) \\) by its action on the standard basis, \\( \\text{vec} : |i\\rangle \\langle j| \\mapsto |i\\rangle \\otimes |j\\rangle \\) for \\( 0 \\leq i < d \\) and \\( 0 \\leq j < d \\). Let \\( Z \\in M_{d^2}(\\mathbb{C}) \\). Show that a map of the form \\( T : X \\mapsto Z X Z^\\dagger \\) has Choi-Jamiolkowski representation \\( |\\zeta\\rangle \\langle \\zeta| \\) where \\( |\\zeta\\rangle = \\text{vec}(Z) \\)." 
}, { "context": "If (1) is true, \\(J(T)\\) is positive semidefinite, we should be able to write \\(J(T)\\) in form of its eigenvalue decomposition\n\\[J(T) = \\sum_s \\lambda_s |\\zeta_s\\rangle \\langle \\zeta_s|,\\]\nwhere \\(\\lambda_s > 0\\) for each \\(s\\), and each\n\\[|\\zeta_s\\rangle = \\sum_{0 \\leq i,j \\leq d-1} c_{ij}^s |i\\rangle \\otimes |j\\rangle \\in span\\{C^d \\otimes C^d\\}\\]\nis an normalized eigenstate of \\(J(T)\\). Let's define\n\\[Z_s = \\sum_{0 \\leq i,j \\leq d-1} c_{ij}^s |i\\rangle \\langle j|, \\quad T_s = J^{-1}(|\\zeta_s\\rangle \\langle \\zeta_s|),\\]\nwhere the notation \\(J^{-1}\\) has been defined in (c). Then it's easy to check that\n\\[vec(Z_s) = |\\zeta_s\\rangle, \\quad J(T_s) = |\\zeta_s\\rangle \\langle \\zeta_s|,\\]\nand using the result of (c) and (d) we have\n\\[T_s(X) = Z_s X Z_s^\\dagger, \\quad \\forall X \\in M_d(C).\\]\nNow we have\n\\[J(T) = \\sum_s \\lambda_s |\\zeta_s\\rangle \\langle \\zeta_s| = \\sum_s \\lambda_s J(T_s),\\]\nthen by linearity we have\n\\[T = J^{-1}(J(T)) = \\sum_s \\lambda_s J^{-1}(J(T_s)) = \\sum_s \\lambda_s T_s.\\]\nFurther, since \\(\\lambda_s > 0\\) for each \\(s\\), we can define\n\\[K_s = \\sqrt{\\lambda_s} Z_s,\\]\nthen we have\n\\[T(X) = \\sum_s \\lambda_s T_s(X) = \\sum_s \\lambda_s Z_s X Z_s^\\dagger = \\sum_s K_s X K_s^\\dagger, \\quad \\forall X \\in M_d(C),\\]\nwhich means (2) is true.", "question": "### (e) Show that (1) \\(\\Rightarrow\\) (2), i.e. suppose that a map \\( T : M_d(\\mathbb{C}) \\rightarrow M_{d^2}(\\mathbb{C}) \\) has a positive semidefinite Choi-Jamiolkowski representation. Construct a set of maps \\( \\{K_j \\in M_{d^2}(\\mathbb{C})\\} \\) such that \\( T(X) = \\sum_j K_j X K_j^\\dagger \\) for any \\( X \\in M_d(\\mathbb{C}) \\). [**Hint:** You might find calculations from part (d) helpful.]" } ]
2016-03-11T00:00:00
Thomas Vidick, Andrea Coladangelo, Jalex Stark, Charles Xu
Caltech
A limit on quantum attacks on Wiesner’s scheme
Consider Wiesner’s quantum money scheme for the case of a single qubit. Recall that an attack on this scheme is a CPTP map \( T \) which maps a single qubit to two qubits, and is such that the probability that the two-qubit density matrix \( T(H^\theta|x\rangle \langle x|H^\theta) \) succeeds in the bank’s verification procedure twice in sequence is maximized, when \( x, \theta \in \{0, 1\} \) are chosen uniformly at random. (Due to De Huang)
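As a point of comparison for what follows, one can numerically evaluate a simple attack that is not discussed in the text (this specific strategy is my illustration, not part of the problem): measure the money qubit in the computational basis and output two copies of the observed outcome. A numpy sketch:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
basis = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]

# The four Wiesner states |x_theta> = H^theta |x>
states = [np.linalg.matrix_power(H, th) @ basis[x] for x in (0, 1) for th in (0, 1)]

def attack(rho):
    """Measure in the computational basis, output two copies of the outcome."""
    out = np.zeros((4, 4))
    for b in (0, 1):
        bb = np.kron(basis[b], basis[b])
        out += rho[b, b] * np.outer(bb, bb)
    return out

# Average probability that both copies pass the bank's projective verification
p = sum(np.kron(s, s) @ attack(np.outer(s, s)) @ np.kron(s, s) for s in states) / 4
print(p)  # ~0.625, already below the 3/4 optimum derived in part (e)
```

The computational-basis states are cloned perfectly while the Hadamard-basis states pass with probability 1/4 each, giving (1 + 1 + 1/4 + 1/4)/4 = 5/8.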
[ { "context": "The success probability is\n\n\\[\n\\Pr(\\text{success}) = \\frac{1}{4} \\sum_{x, \\theta \\in \\{0,1\\}} \\text{tr}(T(|x\\rangle \\langle x| \\otimes |x\\rangle \\langle x| \\otimes |x\\rangle \\langle x|)).\n\\]", "question": "### (a) Write out the formula which expresses the success probability of an attack specified by a CPTP map \\( T \\)." }, { "context": "Notice that for any matrices \\( N \\in M_4(\\mathbb{C}) \\) and \\( M \\in M_2(\\mathbb{C}) \\), we have\n\n\\[\n\\begin{aligned}\n\\text{tr}(J(T)(N \\otimes M)) &= \\sum_{i,j \\in \\{0,1\\}} \\text{tr}((T(|i\\rangle \\langle j|) \\otimes |i\\rangle \\langle j|)(N \\otimes M)) \\\\\n&= \\sum_{i,j \\in \\{0,1\\}} \\text{tr}(T(|i\\rangle \\langle j|)N \\otimes |i\\rangle \\langle j|M) \\\\\n&= \\sum_{i,j \\in \\{0,1\\}} \\text{tr}(T(|i\\rangle \\langle j|)N) \\text{tr}(|i\\rangle \\langle j|M) \\\\\n&= \\sum_{i,j \\in \\{0,1\\}} \\text{tr}(T(|i\\rangle \\langle j|)N) m_{ij} \\\\\n&= \\text{tr}(T(\\sum_{i,j \\in \\{0,1\\}} m_{ij} |i\\rangle \\langle j|)N) \\\\\n&= \\text{tr}(T(M)N),\n\\end{aligned}\n\\]\n\nwhere we have used that\n\n\\[\nM = \\sum_{i,j \\in \\{0,1\\}} m_{ij} |i\\rangle \\langle j|,\n\\]\n\n\\[\nm_{ij} = \\langle i|M|j\\rangle = \\text{tr}(|i\\rangle \\langle j|M), \\quad i, j \\in \\{0, 1\\}.\n\\]\n\nThen using this result by taking \\( N = |x\\rangle \\langle x| \\otimes |x\\rangle \\langle x|, M = |x\\rangle \\langle x| \\) for each pair of \\( (x, \\theta) \\), we have\n\n\\[\n\\begin{aligned}\n\\text{tr}(J(T)Q) &= \\sum_{x, \\theta \\in \\{0,1\\}} \\text{tr}(J(T)(|x\\rangle \\langle x| \\otimes |x\\rangle \\langle x| \\otimes |x\\rangle \\langle x|)) \\\\\n&= \\sum_{x, \\theta \\in \\{0,1\\}} \\text{tr}(T(|x\\rangle \\langle x| \\otimes |x\\rangle \\langle x| \\otimes |x\\rangle \\langle x|)) \\\\\n&= 4 \\Pr(\\text{success}),\n\\end{aligned}\n\\]\n\nthat is\n\n\\[\n\\Pr(\\text{success}) = \\frac{1}{4} \\text{tr}(J(T)Q).\n\\]", "question": "### (b) Let \\( J(T) = \\sum_{i,j 
\\in \\{0,1\\}} T(|i\\rangle \\langle j|) \\otimes |i\\rangle \\langle j| \\) be the Choi-Jamiolkowski representation of the map \\( T \\). Consider the matrix \\( Q = \\sum_{x, \\theta \\in \\{0,1\\}} |x_\\theta\\rangle |x_\\theta\\rangle |x_\\theta\\rangle \\langle x_\\theta| \\langle x_\\theta| \\langle x_\\theta| \\), where \\( |x_\\theta\\rangle = H^\\theta |x\\rangle \\). Write the success probability of the map \\( T \\) as a simple expression involving \\( J(T) \\) and \\( Q \\)." }, { "context": "If \\( T \\) is trace-preserving, then we have\n\n\\[\n\\text{tr}_1(J(T)) = \\sum_{0 \\leq i,j \\leq d-1} \\text{tr}(T(|i\\rangle \\langle j|)) |i\\rangle \\langle j|\n\\]\n\n\\[\n= \\sum_{0 \\leq i,j \\leq d-1} \\delta_{ij} |i\\rangle \\langle j|\n\\]\n\n\\[\n= \\sum_{0 \\leq i \\leq d-1} |i\\rangle \\langle i|\n\\]\n\n\\[\n= I_d.\n\\]\n\nConversely, if we have\n\n\\[\n\\sum_{0 \\leq i,j \\leq d-1} \\text{tr}(T(|i\\rangle \\langle j|)) |i\\rangle \\langle j| = I_d,\n\\]\n\nthen\n\n\\[\n\\delta_{ij} = \\langle i | I_d | j \\rangle = \\langle i | \\left( \\sum_{0 \\leq k,l \\leq d-1} \\text{tr}(T(|k\\rangle \\langle l|)) |k\\rangle \\langle l| \\right) | j \\rangle = \\text{tr}(T(|i\\rangle \\langle j|)), \\quad \\forall 0 \\leq i, j \\leq d-1.\n\\]\n\nTherefore for any\n\n\\[\nX = \\sum_{0 \\leq i,j \\leq d-1} x_{ij} |i\\rangle \\langle j| \\in M_d(\\mathbb{C}),\n\\]\n\nwe have\n\n\\[\n\\text{tr}(T(X)) = \\text{tr} \\left( \\sum_{0 \\leq i,j \\leq d-1} x_{ij} T(|i\\rangle \\langle j|) \\right)\n\\]\n\n\\[\n= \\sum_{0 \\leq i,j \\leq d-1} x_{ij} \\text{tr}(T(|i\\rangle \\langle j|))\n\\]\n\n\\[\n= \\sum_{0 \\leq i \\leq d-1} x_{ii}\n\\]\n\n\\[\n= \\text{tr}(X).\n\\]\n\nSince \\( X \\) is arbitrary, we may conclude that \\( T \\) is trace-preserving.", "question": "### (c) Show that the condition that the CP map \\( T \\) is trace-preserving can be expressed as the condition that its Choi-Jamiolkowski representation \\( J(T) \\) satisfies\n\\[ \\text{Tr}_1 (J(T)) = \\sum_{0 \\leq i,j \\leq d-1} \\text{Tr} \\left( T(|i\\rangle \\langle j|) \\right) |i\\rangle 
\\langle j| = I_d. \\]" }, { "context": "Recall in (b) we have shown that\n\n\\[\n\\text{Pr}(\\text{success}(T)) = \\frac{1}{4} \\text{tr}(Q J(T)),\n\\]\n\nso we may take \\( A = \\frac{1}{4} Q \\), and the variable \\( X = J(T) \\). After obtaining the optimal \\( X^* \\), we may recover the optimal \\( T^* \\) as \\( T^* = J^{-1}(X^*) \\), where \\( J^{-1} \\) is defined in problem 4 (c) as\n\n\\[\nJ^{-1}(X)( \\cdot ) = \\text{tr}_A \\left( (id_A \\otimes t_2(X))(I_B \\otimes ( \\cdot ) \\otimes \\lambda) \\right).\n\\]\n\nDefine\n\n\\[\nK_{ij} = \\langle i | \\langle j | \\otimes I_{t_2}, \\quad i, j \\in \\{0, 1\\},\n\\]\n\n\\[\n\\Phi(X) = \\sum_{i,j \\in \\{0,1\\}} K_{ij} X K_{ij}^{\\dagger},\n\\]\n\nthen for any matrices \\( N \\in M_4(\\mathbb{C}) \\) and \\( M \\in M_2(\\mathbb{C}) \\) we have\n\n\\[\n\\Phi(N \\otimes M) = \\sum_{i,j \\in \\{0,1\\}} K_{ij} (N \\otimes M) K_{ij}^{\\dagger}\n= \\sum_{i,j \\in \\{0,1\\}} (\\langle i| \\langle j| N |i \\rangle |j \\rangle) \\otimes M\n= \\text{tr}(N) M.\n\\]\n\nThen\n\n\\[\n\\Phi(J(T)) = \\sum_{0 \\leq i,j \\leq d-1} \\Phi(T(|i \\rangle \\langle j|)) \\otimes |i \\rangle \\langle j| = \\sum_{0 \\leq i,j \\leq d-1} \\text{tr}(T(|i \\rangle \\langle j|)) |i \\rangle \\langle j| = \\text{tr}(J(T)).\n\\]\n\nNow if we take \\( B = I_2 \\), then the condition\n\n\\[\n\\Phi(J(T)) = \\Phi(X) = B = I_2\n\\]\n\nensures that \\( T \\) is trace-preserving, by the result of (c). Moreover, the condition\n\n\\[\nJ(T) = X \\geq 0\n\\]\n\nensures that \\( T \\) is completely positive, by the result of problem 4. 
Then finally, we can obtain the optimal success probability of attack based on CPTP map by solving the primal problem\n\n\\[\n\\alpha = \\max_X \\text{tr} \\left( \\frac{1}{4} QX \\right)\n\\]\nsubject to\n\n\\[\n\\Phi(X) = I_2,\nX \\geq 0,\n\\]\nwith the optimal success probability equal to \\( \\alpha \\) and the optimal CPTP map \\( T^* = J^{-1}(X^*) \\).\n\nThe dual problem is\n\n\\[\n\\beta = \\min_Y \\text{tr}(Y)\n\\]\nsubject to\n\n\\[\n\\Phi^*(Y) \\geq \\frac{1}{4} Q,\nY = Y^{\\dagger}.\n\\]\n\nNotice that\n\n\\[\n\\Phi^*(Y) = \\sum_{i,j \\in \\{0,1\\}} K_{ij}^{\\dagger} Y K_{ij} = \\sum_{i,j \\in \\{0,1\\}} |i\\rangle |j\\rangle \\langle i| \\langle j| \\otimes Y = I_4 \\otimes Y,\n\\]\nthe dual problem can have a more explicit form\n\n\\[\n\\beta = \\min_Y \\text{tr}(Y)\n\\]\nsubject to\n\n\\[\nI_4 \\otimes Y \\geq \\frac{1}{4} Q,\nY = Y^{\\dagger}.\n\\]", "question": "### (d) Find a semidefinite program in primal form (see problem 2.) whose optimum is the success probability of an arbitrary attack on the single-qubit Wiesner quantum money scheme. *[Hint: recall the characterization of CP maps from their Choi-Jamiolkowski representation given in the previous problem, and use the previous question as well]* Write down the dual semidefinite program." 
}, { "context": "Let's solve the primal problem in (d):\n\n\\[\n\\alpha = \\max_X \\, \\text{tr}(AX) \\\\\n\\text{s.t.} \\quad \\Phi(X) = I_2, \\\\\nX \\geq 0.\n\\]\n\nFor Wiesner's scheme, we have\n\n\\[\nA = \\frac{1}{4} Q = \\frac{1}{4} (|\\psi_1\\rangle \\langle \\psi_1| + |\\psi_2\\rangle \\langle \\psi_2| + |\\psi_3\\rangle \\langle \\psi_3| + |\\psi_4\\rangle \\langle \\psi_4|),\n\\]\n\nwhere\n\n\\[\n|\\psi_1\\rangle = |0\\rangle |0\\rangle |0\\rangle, \\quad |\\psi_2\\rangle = |1\\rangle |1\\rangle |1\\rangle, \\quad |\\psi_3\\rangle = |+\\rangle |+\\rangle |+\\rangle, \\quad |\\psi_4\\rangle = |\\-\\rangle |\\-\\rangle |\\-\\rangle.\n\\]\n\nWith help of Matlab, we can easily find the eigenvalue decomposition of \\(Q\\),\n\n\\[\nQ = U \\Lambda U^\\dagger,\n\\]\n\nwhere \\(U\\) is unitary, and\n\n\\[\n\\Lambda = \\text{diag}\\left(\\frac{3}{8}, \\frac{3}{8}, \\frac{1}{8}, \\frac{1}{8}, 0, 0, 0, 0\\right).\n\\]\n\nThat is, all eigenvalues of \\(A\\) are\n\n\\[\n\\lambda_1 = \\lambda_2 = \\frac{3}{8}, \\quad \\lambda_3 = \\lambda_4 = \\frac{1}{8}, \\quad \\lambda_5 = \\lambda_6 = \\lambda_7 = \\lambda_8 = 0.\n\\]\n\nThen we can immediately obtain an upper bound for our objective function given that \\(X\\) is a feasible solution,\n\n\\[\n\\text{tr}(AX) = \\text{tr}(U \\Lambda U^\\dagger X) \\leq \\lambda_1(A) \\text{tr}(U^\\dagger X U) = \\lambda_1(A) \\text{tr}(X) = 2 \\lambda_1(A) = \\frac{3}{4}.\n\\]\n\nTherefore if we can achieve this upper bound with some feasible \\(X\\), then the problem is solved. Indeed, to make the inequality to become equality in the formula above, i.e.\n\n\\[\n\\text{tr}(U^\\dagger X U) = \\lambda_1(A) \\text{tr}(U^\\dagger X U),\n\\]\n\nwe need the diagonal entries of \\(U^\\dagger X U\\) to focus on the first two entries which are associated with \\(\\lambda_1(A), \\lambda_2(A)\\). 
Recall that \\(\\text{tr}(U^\\dagger X U) = \\text{tr}(X) = 2\\), a natural guess would be\n\n\\[\nU^\\dagger X^* U = \\text{diag}(1, 1, 0, 0, 0, 0, 0, 0),\n\\]\n\nand we have\n\n\\[\nX^* = U \\text{diag}(1, 1, 0, 0, 0, 0, 0, 0) U^\\dagger = \n\\begin{pmatrix}\n3/4 & 0 & 0 & 1/4 & 0 & 1/4 & 1/4 & 0 \\\\\n0 & 1/12 & 1/12 & 0 & 1/12 & 0 & 0 & 1/4 \\\\\n0 & 1/12 & 1/12 & 0 & 1/12 & 0 & 0 & 1/4 \\\\\n1/4 & 0 & 0 & 1/12 & 0 & 1/12 & 1/12 & 0 \\\\\n0 & 1/12 & 1/12 & 0 & 1/12 & 0 & 0 & 1/4 \\\\\n1/4 & 0 & 0 & 1/12 & 0 & 1/12 & 1/12 & 0 \\\\\n1/4 & 0 & 0 & 1/12 & 0 & 1/12 & 1/12 & 0 \\\\\n0 & 1/4 & 1/4 & 0 & 1/4 & 0 & 0 & 3/4\n\\end{pmatrix}\n\\]\n\n\\[\n= |\\zeta_1\\rangle \\langle \\zeta_1| + |\\zeta_2\\rangle \\langle \\zeta_2|,\n\\]\n\nwhere \\( |\\zeta_1\\rangle, |\\zeta_2\\rangle \\) are the eigenstates of \\( A \\) corresponding to eigenvalues \\( \\lambda_1, \\lambda_2 \\),\n\n\\[ |\\zeta_1\\rangle = \\frac{1}{\\sqrt{12}} (3|0\\rangle|0\\rangle|0\\rangle + |0\\rangle|1\\rangle|1\\rangle + |1\\rangle|0\\rangle|1\\rangle + |1\\rangle|1\\rangle|0\\rangle), \\]\n\n\\[ |\\zeta_2\\rangle = \\frac{1}{\\sqrt{12}} (|0\\rangle|0\\rangle|1\\rangle + |0\\rangle|1\\rangle|0\\rangle + |1\\rangle|0\\rangle|0\\rangle + 3|1\\rangle|1\\rangle|1\\rangle). \\]\n\nIt's easy to check that \\( X^* \\) is a feasible solution, i.e.\n\n\\[ \\Phi(X^*) = I_2, \\quad X^* \\geq 0, \\]\n\nthus we have \\( \\alpha = \\frac{3}{4} \\), the optimal success probability is \\( \\frac{3}{4} \\).\n\nOur next mission is to recover \\( T^* \\) from \\( X^* \\). Now we can make use of the useful results in problem 4. 
Let\n\n\\[ Z_1 = \\frac{1}{\\sqrt{12}} (3|0\\rangle|0\\rangle\\langle 0| + |0\\rangle|1\\rangle\\langle 1| + |1\\rangle|0\\rangle\\langle 1| + |1\\rangle|1\\rangle\\langle 0|), \\]\n\n\\[ Z_2 = \\frac{1}{\\sqrt{12}} (|0\\rangle|0\\rangle\\langle 1| + |0\\rangle|1\\rangle\\langle 0| + |1\\rangle|0\\rangle\\langle 0| + 3|1\\rangle|1\\rangle\\langle 1|), \\]\n\nthen we have\n\n\\[ \\text{vec}(Z_1) = |\\zeta_1\\rangle, \\quad \\text{vec}(Z_2) = |\\zeta_2\\rangle. \\]\n\nDefine\n\n\\[ T_1(\\rho) = Z_1 \\rho Z_1^\\dagger, \\quad \\forall \\rho \\in M_2(\\mathbb{C}), \\]\n\n\\[ T_2(\\rho) = Z_2 \\rho Z_2^\\dagger, \\quad \\forall \\rho \\in M_2(\\mathbb{C}). \\]\n\nBy the result of problem 4(d), we have\n\n\\[ J^{-1}(|\\zeta_1\\rangle \\langle \\zeta_1|) = T_1, \\quad J^{-1}(|\\zeta_2\\rangle \\langle \\zeta_2|) = T_2. \\]\n\nFinally we have\n\n\\[ T^* = J^{-1}(X^*) = J^{-1}(|\\zeta_1\\rangle \\langle \\zeta_1|) + J^{-1}(|\\zeta_2\\rangle \\langle \\zeta_2|) = T_1 + T_2, \\]\n\nthat is we have\n\n\\[ T^*(\\rho) = Z_1 \\rho Z_1^\\dagger + Z_2 \\rho Z_2^\\dagger \\quad \\forall \\rho \\in M_2(\\mathbb{C}). \\]", "question": "### (e) Solve the semidefinite program! That is, give an explicit matrix which achieves the optimum, together with the value of the optimum. *[Hint: I will allow you to google — but if you do so, state your source. Serious bonus points for solving the problem yourself, either by hand (explain your reasoning) or using Matlab or any other program (print out your code).*" }, { "context": "Consider a linear map \\( U \\) such that\n\n\\[ U : |0\\rangle|0\\rangle|0\\rangle \\longrightarrow \\frac{1}{\\sqrt{12}} \\left( (3|0\\rangle|0\\rangle + |1\\rangle|1\\rangle) \\otimes |0\\rangle + (|0\\rangle|1\\rangle + |1\\rangle|0\\rangle) \\otimes |1\\rangle \\right), \\]\n\n\\[ U : |1\\rangle|0\\rangle|0\\rangle \\longrightarrow \\frac{1}{\\sqrt{12}} \\left( (|0\\rangle|1\\rangle + |1\\rangle|0\\rangle) \\otimes |0\\rangle + (|0\\rangle|0\\rangle + 3|1\\rangle|1\\rangle) \\otimes |1\\rangle \\right). 
\\]\n\nBy direct calculation, we can check that\n\n\\[ \\| U|0\\rangle|0\\rangle|0\\rangle \\| = \\| U|1\\rangle|0\\rangle|0\\rangle \\| = 1, \\quad (U|0\\rangle|0\\rangle|0\\rangle)^{\\dagger} (U|1\\rangle|0\\rangle|0\\rangle) = 0, \\]\n\ntherefore we can extend \\( U \\) to be a unitary operator for all three-qubits (use a similar argument for HW2 problem 6(b)). We still denote this extended unitary operator as \\( U \\). Notice that\n\n\\[\nZ_1|0\\rangle = \\frac{1}{\\sqrt{12}} (3|0\\rangle|0\\rangle + |1\\rangle|1\\rangle), \\quad Z_1|1\\rangle = \\frac{1}{\\sqrt{12}} (|0\\rangle|1\\rangle + |1\\rangle|0\\rangle),\n\\]\n\n\\[\nZ_2|0\\rangle = \\frac{1}{\\sqrt{12}} (|0\\rangle|1\\rangle + |1\\rangle|0\\rangle), \\quad Z_2|1\\rangle = \\frac{1}{\\sqrt{12}} (|0\\rangle|0\\rangle + 3|1\\rangle|1\\rangle),\n\\]\n\nthus we have\n\n\\[\nU(|0\\rangle|0\\rangle|0\\rangle) = (Z_1|0\\rangle) \\otimes |0\\rangle + (Z_2|0\\rangle) \\otimes |1\\rangle,\n\\]\n\n\\[\nU(|1\\rangle|0\\rangle|0\\rangle) = (Z_1|1\\rangle) \\otimes |0\\rangle + (Z_2|1\\rangle) \\otimes |1\\rangle.\n\\]\n\nThen for any single qubit \\( |\\phi\\rangle \\), by linearity, we always have\n\n\\[\nU(|\\phi\\rangle|0\\rangle|0\\rangle) = (Z_1|\\phi\\rangle) \\otimes |0\\rangle + (Z_2|\\phi\\rangle) \\otimes |1\\rangle.\n\\]\n\nNotice that we have\n\n\\[\ntr_3\\left(U (|\\phi\\rangle \\langle \\phi| \\otimes |0\\rangle \\langle 0| \\otimes |0\\rangle \\langle 0|) U^{\\dagger}\\right) = tr_3 \\left( Z_1|\\phi\\rangle \\langle \\phi|Z_1^{\\dagger} \\otimes |0\\rangle \\langle 0| + Z_2|\\phi\\rangle \\langle \\phi|Z_2^{\\dagger} \\otimes |1\\rangle \\langle 1| + Z_1|\\phi\\rangle \\langle \\phi|Z_2^{\\dagger} \\otimes |0\\rangle \\langle 1| + Z_2|\\phi\\rangle \\langle \\phi|Z_1^{\\dagger} \\otimes |1\\rangle \\langle 0| \\right)\n\\]\n\n\\[\n= Z_1|\\phi\\rangle \\langle \\phi|Z_1^{\\dagger} + Z_2|\\phi\\rangle \\langle \\phi|Z_2^{\\dagger} = T^*(|\\phi\\rangle \\langle \\phi|).\n\\]\n\nNow we can clarify our optimal attack found in (e). 
Given a money qubit \\( |\\phi\\rangle \\), the attack steps are\n\n(i) **Operation**: Append \\( |0\\rangle|0\\rangle \\) to \\( |\\phi\\rangle \\). **Outcome**: \\( |\\phi\\rangle|0\\rangle|0\\rangle \\).\n\n(ii) **Operation**: Apply \\( U \\) to \\( |\\phi\\rangle|0\\rangle|0\\rangle \\). **Outcome**: \\( (Z_1|\\phi\\rangle) \\otimes |0\\rangle + (Z_2|\\phi\\rangle) \\otimes |1\\rangle \\).\n\n(iii) **Operation**: Trace out the third qubit. **Outcome**: \\( T^*(|\\phi\\rangle \\langle \\phi|) \\).", "question": "### (f) Give an explicit representation of the attack you found in (e) as a sequence of three operations: (i) appending some auxiliary qubits in state |0⟩; (ii) applying a unitary transformation on all qubits; (iii) performing a partial trace or measurement map on some of the qubits. *[If you weren’t able to solve (e), you can ignore this question. It will only count for 1 point.]*" } ]
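The claimed optimum is easy to verify numerically: the explicit X* = |ζ₁⟩⟨ζ₁| + |ζ₂⟩⟨ζ₂| from part (e) is feasible and achieves tr(AX*) = 3/4. A numpy sketch:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
e = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]

def ket(bits):
    """Computational-basis three-qubit state |b1 b2 b3>."""
    v = np.array([1.0])
    for b in bits:
        v = np.kron(v, e[b])
    return v

# Q = sum over the four Wiesner states |x_theta> of (|x_theta><x_theta|)^{tensor 3}
Q = np.zeros((8, 8))
for x in (0, 1):
    for th in (0, 1):
        t = np.linalg.matrix_power(H, th) @ e[x]
        t3 = np.kron(np.kron(t, t), t)
        Q += np.outer(t3, t3)

# The optimizer from part (e): X* = |zeta1><zeta1| + |zeta2><zeta2|
z1 = (3 * ket([0, 0, 0]) + ket([0, 1, 1]) + ket([1, 0, 1]) + ket([1, 1, 0])) / np.sqrt(12)
z2 = (ket([0, 0, 1]) + ket([0, 1, 0]) + ket([1, 0, 0]) + 3 * ket([1, 1, 1])) / np.sqrt(12)
X = np.outer(z1, z1) + np.outer(z2, z2)

val = np.trace(Q @ X) / 4
assert np.isclose(val, 0.75)                  # objective value tr(A X*) = 3/4

assert np.linalg.eigvalsh(X).min() > -1e-12   # X* >= 0
PhiX = np.trace(X.reshape(4, 2, 4, 2), axis1=0, axis2=2)
assert np.allclose(PhiX, np.eye(2))           # Phi(X*) = I_2, i.e. trace-preserving

assert np.isclose(np.linalg.eigvalsh(Q / 4).max(), 3 / 8)  # lambda_1(A) = 3/8
print("X* is feasible and achieves the optimum 3/4")
```

Here Φ(X) is computed as the partial trace over the first two qubits, matching the Kraus form K_ij = (⟨i| ⊗ ⟨j|) ⊗ I₂ used in part (d).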
2016-03-11T00:00:00
Thomas Vidick, Andrea Coladangelo, Jalex Stark, Charles Xu
Caltech
State discrimination
Suppose you are given two distinct states of a single qubit, \( |\psi_1\rangle \) and \( |\psi_2\rangle \).
[ { "context": "(Due to De Huang)\nFor any basis \\( |b_1\\rangle, |b_2\\rangle \\), we have\n\n\\[\n\\text{Pr}(|b_1\\rangle \\mid |\\psi_2\\rangle) = |\\langle b_1|\\psi_2\\rangle|^2 = |\\langle b_1|e^{i\\varphi}|b_1\\rangle|^2 = |e^{i\\varphi}\\langle b_1|b_1\\rangle|^2 = |\\langle b_1|b_1\\rangle|^2 = \\text{Pr}(|b_1\\rangle \\mid |\\psi_1\\rangle),\n\\]\n\n\\[\n\\text{Pr}(|b_2\\rangle \\mid |\\psi_2\\rangle) = |\\langle b_2|\\psi_2\\rangle|^2 = |\\langle b_2|e^{i\\varphi}|b_1\\rangle|^2 = |e^{i\\varphi}\\langle b_2|b_1\\rangle|^2 = |\\langle b_2|b_1\\rangle|^2 = \\text{Pr}(|b_2\\rangle \\mid |\\psi_1\\rangle),\n\\]\n\nthus no measurement will distinguish between these two states.", "question": "### (a) Argue that if there is a \\( \\varphi \\) such that \\( |\\psi_2\\rangle = e^{i\\varphi} |\\psi_1\\rangle \\) then no measurement will distinguish between the two states: for any choice of a basis, the probabilities of obtaining either outcome will be the same when performing the measurement on \\( |\\psi_1\\rangle \\) or on \\( |\\psi_2\\rangle \\).\n\nAssuming \\( |\\psi_1\\rangle \\) and \\( |\\psi_2\\rangle \\) can be distinguished, we are interested in finding the optimal measurement to tell them apart. Here we need to make precise our notion of “optimal”. We would like to find a basis \\( \\{ |b_1\\rangle , |b_2\\rangle \\} \\) of \\( \\mathbb{C}^2 \\) such that the expression\n\n\\[\n\\Pr \\left( |b_1\\rangle \\mid |\\psi_1\\rangle \\right) + \\Pr \\left( |b_2\\rangle \\mid |\\psi_2\\rangle \\right) = \\left| \\langle b_1 | \\psi_1 \\rangle \\right|^2 + \\left| \\langle b_2 | \\psi_2 \\rangle \\right|^2\n\\]\n\nis maximized." }, { "context": "(Due to De Huang)\n Given a distinguishable pair \\((|\\psi_1\\rangle, |\\psi_2\\rangle)\\), assume that \\(\\langle \\psi_1|\\psi_2\\rangle = e^{i\\varphi} \\cos \\theta\\) for some \\(\\theta \\in (0, \\pi)\\). 
We may also assume that \\(\\varphi = 0\\) since adding a phase \\(e^{-i\\varphi}\\) to \\(|\\psi_1\\rangle\\) won’t change the optimal solution. Let \\(|\\psi_1^\\perp\\rangle\\) be a state such that \\(\\langle \\psi_1|\\psi_1^\\perp\\rangle = 0\\), then it’s obvious that \\(|\\langle \\psi_1^\\perp|\\psi_2\\rangle| = \\sin \\theta\\), since \\(|\\psi_1\\rangle, |\\psi_1^\\perp\\rangle\\) form an orthonormal basis and \\(\\sin \\theta \\geq 0\\) for \\(\\theta \\in (0, \\pi)\\). By adding a proper phase to \\(|\\psi_1^\\perp\\rangle\\) we may further assume that \\(\\langle \\psi_1^\\perp|\\psi_2\\rangle = \\sin \\theta\\). Now consider the operator\n\n\\[\nU = |0\\rangle \\langle \\psi_1| + |1\\rangle \\langle \\psi_1^\\perp|,\n\\]\n\nthen it’s easy to check that \\(U\\) is unitary and\n\n\\[\nU|\\psi_1\\rangle = |0\\rangle \\equiv |\\psi_1'\\rangle, \\quad U|\\psi_2\\rangle = \\cos \\theta|0\\rangle + \\sin \\theta|1\\rangle \\equiv |\\psi_2'\\rangle.\n\\]\n\nProvided that \\(\\{|b_1'\\rangle, |b_2'\\rangle\\}\\) is a basis that maximizes (1) for the pair \\((|\\psi_1'\\rangle, |\\psi_2'\\rangle)\\), we may recover \\(\\{|b_1\\rangle, |b_2\\rangle\\}\\) as\n\n\\[\n|b_1\\rangle = U^\\dagger |b_1'\\rangle, \\quad |b_2\\rangle = U^\\dagger |b_2'\\rangle.\n\\]\n\nIt’s easy to check that \\(\\{|b_1\\rangle, |b_2\\rangle\\}\\) is a basis, and we have\n\n\\[\n\\text{Pr}(|b_1\\rangle \\mid |\\psi_1\\rangle) + \\text{Pr}(|b_2\\rangle \\mid |\\psi_2\\rangle) = |\\langle b_1|\\psi_1\\rangle|^2 + |\\langle b_2|\\psi_2\\rangle|^2 = |\\langle b_1'|U^\\dagger U|\\psi_1'\\rangle|^2 + |\\langle b_2'|U^\\dagger U|\\psi_2'\\rangle|^2 = |\\langle b_1'|\\psi_1'\\rangle|^2 + |\\langle b_2'|\\psi_2'\\rangle|^2 = \\text{Pr}(|b_1'\\rangle \\mid |\\psi_1'\\rangle) + \\text{Pr}(|b_2'\\rangle \\mid |\\psi_2'\\rangle).\n\\]\n\nAnd for any basis \\(\\{|b_1'\\rangle, |b_2'\\rangle\\}\\), we have\n\n\\[\n\\text{Pr}(|b_1'\\rangle \\mid |\\psi_1'\\rangle) + \\text{Pr}(|b_2'\\rangle \\mid |\\psi_2'\\rangle) = |\\langle b_1'|\\psi_1'\\rangle|^2 + |\\langle 
b_2'|\\psi_2'\\rangle|^2 = |\\langle b_1'|U|\\psi_1\\rangle|^2 + |\\langle b_2'|U|\\psi_2\\rangle|^2 = \\text{Pr}(U^\\dagger|b_1'\\rangle \\mid |\\psi_1\\rangle) + \\text{Pr}(U^\\dagger|b_2'\\rangle \\mid |\\psi_2\\rangle).\n\\]\n\nSince \\(\\{|b_1'\\rangle, |b_2'\\rangle\\} \\mapsto \\{U^\\dagger|b_1'\\rangle, U^\\dagger|b_2'\\rangle\\}\\) is a bijection between bases of \\(\\mathbb{C}^2\\), every basis of \\(\\mathbb{C}^2\\) achieves the same value in (1) for \\((|\\psi_1\\rangle, |\\psi_2\\rangle)\\) as the corresponding primed basis achieves for \\((|\\psi_1'\\rangle, |\\psi_2'\\rangle)\\), and the latter is at most \\(\\text{Pr}(|b_1'\\rangle \\mid |\\psi_1'\\rangle) + \\text{Pr}(|b_2'\\rangle \\mid |\\psi_2'\\rangle) = \\text{Pr}(|b_1\\rangle \\mid |\\psi_1\\rangle) + \\text{Pr}(|b_2\\rangle \\mid |\\psi_2\\rangle)\\) when \\(\\{|b_1'\\rangle, |b_2'\\rangle\\}\\) is the maximizing basis.\n\nTherefore the basis \\(\\{|b_1\\rangle, |b_2\\rangle\\}\\) maximizes value (1) for the pair \\((|\\psi_1\\rangle, |\\psi_2\\rangle)\\).", "question": "### (b) Show that for the purposes of this problem we can assume without loss of generality that \\( |\\psi_1'\\rangle = |0\\rangle \\) and \\( |\\psi_2'\\rangle = \\cos \\theta |0\\rangle + \\sin \\theta |1\\rangle \\), for some \\( \\theta \\in [0, \\pi) \\). That is, given any \\( |\\psi_1\\rangle , |\\psi_2\\rangle \\), determine an angle \\( \\theta \\) such that, given a basis \\( \\{ |b_1'\\rangle , |b_2'\\rangle \\} \\) which maximizes (1) for the pair \\((|\\psi_1'\\rangle, |\\psi_2'\\rangle)\\), lets you recover a basis \\(\\{ |\\beta_1\\rangle, |\\beta_2\\rangle \\}\\) which achieves the same value in (1) when \\((|\\psi_1\\rangle, |\\psi_2\\rangle)\\) is being measured. Say explicitly how to determine \\( \\theta \\) from \\((|\\psi_1\\rangle, |\\psi_2\\rangle)\\) and how to recover \\(\\{ |\\beta_1\\rangle, |\\beta_2\\rangle \\}\\) from \\(\\{ |\\beta_1'\\rangle, |\\beta_2'\\rangle \\}\\)." }, { "context": "(Due to De Huang)\n Let \\(|b_1'\\rangle, |b_2'\\rangle\\) be a basis that maximizes (1) for the pair \\((|\\psi_1'\\rangle, |\\psi_2'\\rangle)\\). It’s easy to check that applying any phases to \\(|b_1'\\rangle\\) or \\(|b_2'\\rangle\\) will not change value (1). 
Thus without loss of generality, we may assume that\n\n\\[\n|b_1'\\rangle = \\cos \\varphi|0\\rangle + e^{i\\alpha} \\sin \\varphi|1\\rangle,\n\\]\n\n\\[\n|b_2'\\rangle = \\sin \\varphi|0\\rangle - e^{i\\alpha} \\cos \\varphi|1\\rangle,\n\\]\n\nfor some \\(\\varphi \\in [0, 2\\pi)\\) and \\(\\alpha \\in [0, 2\\pi)\\) such that \\((\\cos \\theta \\sin \\theta \\cos \\varphi \\sin \\varphi) \\leq 0\\) (otherwise we may take \\(\\varphi \\rightarrow \\pi - \\varphi, \\alpha \\rightarrow \\alpha + \\pi\\)). Then we have\n\n\\[\n\\text{Pr}(|b'_1 \\rangle | \\psi'_1 \\rangle) + \\text{Pr}(|b'_2 \\rangle | \\psi'_2 \\rangle) = | \\langle b'_1 | \\psi'_1 \\rangle |^2 + | \\langle b'_2 | \\psi'_2 \\rangle |^2\n\\]\n\n\\[\n= \\cos^2 \\varphi + | \\cos \\theta \\sin \\varphi - e^{-i \\alpha} \\sin \\theta \\cos \\varphi |^2\n\\]\n\n\\[\n= \\cos^2 \\varphi + \\cos^2 \\theta \\sin^2 \\varphi + \\sin^2 \\theta \\cos^2 \\varphi - \\cos \\theta \\sin \\theta \\cos \\varphi \\sin \\varphi (e^{i \\alpha} + e^{-i \\alpha})\n\\]\n\n\\[\n\\leq \\cos^2 \\varphi + \\cos^2 \\theta \\sin^2 \\varphi + \\sin^2 \\theta \\cos^2 \\varphi - 2 \\cos \\theta \\sin \\theta \\cos \\varphi \\sin \\varphi.\n\\]\n\nSince we assume that \\((\\cos \\theta \\sin \\theta \\cos \\varphi \\sin \\varphi) \\leq 0\\), we should always take \\(\\alpha = 0\\) so that the value is maximized. 
Then we have the wanted form\n\n\\[\n|b'_1 \\rangle = \\cos \\varphi |0 \\rangle + \\sin \\varphi |1 \\rangle,\n\\]\n\n\\[\n|b'_2 \\rangle = \\sin \\varphi |0 \\rangle - \\cos \\varphi |1 \\rangle.\n\\]", "question": "### (c) Show that the optimal basis \\(\\{ |\\beta_1'\\rangle, |\\beta_2'\\rangle \\}\\) will always be of the form\n\n\\[\n|\\beta_1'\\rangle = \\cos \\varphi |0\\rangle + \\sin \\varphi |1\\rangle, \\quad |\\beta_2'\\rangle = \\sin \\varphi |0\\rangle - \\cos \\varphi |1\\rangle\n\\]" }, { "context": "(Due to De Huang)\nUsing the result of (c), we have\n\n\\[\n\\text{Pr}(|b'_1 \\rangle | \\psi'_1 \\rangle) + \\text{Pr}(|b'_2 \\rangle | \\psi'_2 \\rangle) = | \\langle b'_1 | \\psi'_1 \\rangle |^2 + | \\langle b'_2 | \\psi'_2 \\rangle |^2\n\\]\n\n\\[\n= \\cos^2 \\varphi + (\\cos \\theta \\sin \\varphi - \\sin \\theta \\cos \\varphi)^2\n\\]\n\n\\[\n= \\cos^2 \\varphi + \\sin^2 (\\varphi - \\theta)\n\\]\n\n\\[\n= 1 + \\frac{1}{2} [\\cos (2 \\varphi) - \\cos (2 \\varphi - 2 \\theta)]\n\\]\n\n\\[\n= 1 + \\sin (\\theta - 2 \\varphi) \\sin \\theta\n\\]\n\n\\[\n\\leq 1 + \\sin \\theta.\n\\]\n\nRecall that we assume \\((\\cos \\theta \\sin \\theta \\cos \\varphi \\sin \\varphi) \\leq 0\\), to make the inequality to become equality, we can take\n\n\\[\n\\varphi = \\frac{\\theta}{2} + \\frac{3\\pi}{4}, \\quad \\theta \\in (0, \\pi).\n\\]\n\nThen the basis\n\n\\[\n\\{|b'_1 \\rangle, |b'_2 \\rangle\\} = \\{\\cos \\varphi |0 \\rangle + \\sin \\varphi |1 \\rangle, \\sin \\varphi |0 \\rangle - \\cos \\varphi |1 \\rangle\\}\n\\]\n\nmaximizes value (1) for the pair \\(| \\psi'_1 \\rangle, | \\psi'_2 \\rangle\\), and the maximum is \\(1 + \\sin \\theta\\).", "question": "### (d) Determine the optimal \\( \\varphi \\) as a function of \\( \\theta \\)." 
}, { "context": "(Due to De Huang)\n Since \\(\\cos \\theta = | \\langle \\psi_1 | \\psi_2 \\rangle |\\), the previous results conclude that the maximum of value (1) is\n\n\\[\n1 + \\sin \\theta = 1 + \\sqrt{1 - | \\langle \\psi_1 | \\psi_2 \\rangle |^2},\n\\]\n\nand the basis that achieves the optimum is\n\n\\[\n\\{|b_1 \\rangle, |b_2 \\rangle\\} = \\{U^\\dagger |b'_1 \\rangle, U^\\dagger |b'_2 \\rangle\\},\n\\]\n\nwhere \\(U\\) is defined in (b) and \\(\\{|b'_1 \\rangle, |b'_2 \\rangle\\}\\) is defined in (d).", "question": "### (e) Conclude: what is the maximum value of (1), as a function of the original states \\( |\\psi_1\\rangle \\) and \\( |\\psi_2\\rangle \\)? What is the basis which achieves the optimum?" } ]
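The closed-form optimum derived in this record is easy to sanity-check numerically. The sketch below is an added illustration (not part of the original solution set; numpy is assumed): it confirms that the basis angle \(\varphi = \theta/2 + 3\pi/4\) attains \(1 + \sin\theta\), and that a brute-force scan over all basis angles finds nothing better.

```python
import numpy as np

theta = 0.7  # arbitrary test angle in (0, pi)
psi1 = np.array([1.0, 0.0])                      # |psi_1'> = |0>
psi2 = np.array([np.cos(theta), np.sin(theta)])  # |psi_2'>

phi = theta / 2 + 3 * np.pi / 4                  # claimed optimal basis angle
b1 = np.array([np.cos(phi), np.sin(phi)])
b2 = np.array([np.sin(phi), -np.cos(phi)])
value = (b1 @ psi1) ** 2 + (b2 @ psi2) ** 2
assert np.isclose(value, 1 + np.sin(theta))

# Brute-force over basis angles: cos^2(phi) + sin^2(phi - theta)
# never exceeds 1 + sin(theta).
grid = np.linspace(0, 2 * np.pi, 4001)
vals = np.cos(grid) ** 2 + np.sin(grid - theta) ** 2
assert vals.max() <= 1 + np.sin(theta) + 1e-6
```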
2016-10-22T00:00:00
Thomas Vidick, Andrea Coladangelo, Jalex Stark, Charles Xu
Caltech
Improving Wiesner’s quantum money
Consider the following six single-qubit states: \[ \left\{ |\psi_1\rangle = |0\rangle, \, |\psi_2\rangle = |1\rangle, \, |\psi_3\rangle = |+\rangle, \, |\psi_4\rangle = |-\rangle, \, |\psi_5\rangle = \frac{|0\rangle + i |1\rangle}{\sqrt{2}}, \, |\psi_6\rangle = \frac{|0\rangle - i |1\rangle}{\sqrt{2}} \right\}. \] Suppose we create a money scheme in which each bit of a bill’s serial number is encoded into one of these six states, chosen uniformly at random (so with probability \(1/6\) each) by the bank. # References [1] Abel Molina, Thomas Vidick, and John Watrous. “Optimal counterfeiting attacks and generalizations for Wiesner’s quantum money”. In: *Conference on Quantum Computation, Communication, and Cryptography*. Springer. 2012, pp. 45–64.
[ { "context": "(Due to the TAs)\n By linearity, we can compute the action of \\( U \\) on all 6 quantum money states.\n\n\\[\n\\begin{aligned}\nU : |\\psi_1\\rangle |0\\rangle &\\mapsto |0\\rangle |0\\rangle; \\\\\nU : |\\psi_2\\rangle |0\\rangle &\\mapsto |1\\rangle |1\\rangle; \\\\\nU : |+\\rangle |0\\rangle &\\mapsto \\frac{1}{\\sqrt{2}} (|0\\rangle |0\\rangle + |1\\rangle |1\\rangle); \\\\\nU : |-\\rangle |0\\rangle &\\mapsto \\frac{1}{\\sqrt{2}} (|0\\rangle |0\\rangle - |1\\rangle |1\\rangle); \\\\\nU : |\\psi_5\\rangle |0\\rangle &\\mapsto \\frac{1}{\\sqrt{2}} (|0\\rangle |0\\rangle + i |1\\rangle |1\\rangle); \\\\\nU : |\\psi_6\\rangle |0\\rangle &\\mapsto \\frac{1}{\\sqrt{2}} (|0\\rangle |0\\rangle - i |1\\rangle |1\\rangle).\n\\end{aligned}\n\\]\n\nIt is instructive to carry out each of the inner products in the sum by hand in full detail.\n\n\\[\n\\langle \\psi_1| \\langle \\psi_1| U |\\psi_1\\rangle |0\\rangle = (\\langle 0| \\otimes \\langle 0|) (|0\\rangle \\otimes |0\\rangle) = \\langle 0|0\\rangle \\langle 0|0\\rangle = 1, \\quad |\\langle \\psi_1| \\langle \\psi_1| U |\\psi_1\\rangle |0\\rangle|^2 = 1,\n\\]\n\n\\[\n\\langle \\psi_2| \\langle \\psi_2| U |\\psi_2\\rangle |0\\rangle = (\\langle 1| \\otimes \\langle 1|) (|1\\rangle \\otimes |1\\rangle) = \\langle 1|1\\rangle \\langle 1|1\\rangle = 1, \\quad |\\langle \\psi_2| \\langle \\psi_2| U |\\psi_2\\rangle |0\\rangle|^2 = 1.\n\\]\n\n\\[\n\\begin{aligned}\n\\langle +| \\langle +| U |+\\rangle |0\\rangle &= \\langle +| \\langle +| \\frac{1}{\\sqrt{2}} (|0\\rangle |0\\rangle + |1\\rangle |1\\rangle) \\\\\n&= \\frac{1}{\\sqrt{2}} (\\langle +|0\\rangle \\langle +|0\\rangle + \\langle +|1\\rangle \\langle +|1\\rangle) \\\\\n&= \\frac{1}{\\sqrt{2}} \\left( \\left( \\frac{1}{\\sqrt{2}} \\right)^2 + \\left( \\frac{1}{\\sqrt{2}} \\right)^2 \\right) = \\frac{1}{\\sqrt{2}}, \\\\\n|\\langle \\psi_3| \\langle \\psi_3| U |\\psi_3\\rangle |0\\rangle|^2 &= \\frac{1}{2}.\n\\end{aligned}\n\\]\n\n\\[\n\\begin{aligned}\n\\langle -| \\langle -| U |-\\rangle |0\\rangle &= \\langle -| \\langle -| \\frac{1}{\\sqrt{2}} (|0\\rangle |0\\rangle - |1\\rangle |1\\rangle) \\\\\n&= \\frac{1}{\\sqrt{2}} (\\langle -|0\\rangle \\langle -|0\\rangle - \\langle -|1\\rangle \\langle -|1\\rangle) \\\\\n&= \\frac{1}{\\sqrt{2}} \\left( \\left( \\frac{1}{\\sqrt{2}} \\right)^2 - \\left( -\\frac{1}{\\sqrt{2}} \\right)^2 \\right) = 0, \\\\\n|\\langle \\psi_4| \\langle \\psi_4| U |\\psi_4\\rangle |0\\rangle|^2 &= 0.\n\\end{aligned}\n\\]\n\n\\[\n\\begin{aligned}\n\\langle \\psi_5| \\langle \\psi_5| U |\\psi_5\\rangle |0\\rangle &= \\langle \\psi_5| \\langle \\psi_5| \\frac{1}{\\sqrt{2}} (|0\\rangle |0\\rangle + i |1\\rangle |1\\rangle) \\\\\n&= \\frac{1}{\\sqrt{2}} (\\langle \\psi_5|0\\rangle \\langle \\psi_5|0\\rangle + i \\langle \\psi_5|1\\rangle \\langle \\psi_5|1\\rangle) \\\\\n&= \\frac{1}{\\sqrt{2}} \\left( \\left( \\frac{1}{\\sqrt{2}} \\right)^2 + i \\left( \\frac{-i}{\\sqrt{2}} \\right)^2 \\right) = \\frac{1 - i}{2\\sqrt{2}}, \\\\\n|\\langle \\psi_5| \\langle \\psi_5| U |\\psi_5\\rangle |0\\rangle|^2 &= \\frac{1}{4}.\n\\end{aligned}\n\\]\n\n\\[\n\\begin{aligned}\n\\langle \\psi_6| \\langle \\psi_6| U |\\psi_6\\rangle |0\\rangle &= \\langle \\psi_6| \\langle \\psi_6| \\frac{1}{\\sqrt{2}} (|0\\rangle |0\\rangle - i |1\\rangle |1\\rangle) \\\\\n&= \\frac{1}{\\sqrt{2}} (\\langle \\psi_6|0\\rangle \\langle \\psi_6|0\\rangle - i \\langle \\psi_6|1\\rangle \\langle \\psi_6|1\\rangle) \\\\\n&= \\frac{1}{\\sqrt{2}} \\left( \\left( \\frac{1}{\\sqrt{2}} \\right)^2 - i \\left( \\frac{i}{\\sqrt{2}} \\right)^2 \\right) = \\frac{1 + i}{2\\sqrt{2}}, \\\\\n|\\langle \\psi_6| \\langle \\psi_6| U |\\psi_6\\rangle |0\\rangle|^2 &= \\frac{1}{4}.\n\\end{aligned}\n\\]\n\nThe overall success probability is\n\n\\[\n\\frac{1}{6} \\left( 1 + 1 + \\frac{1}{2} + 0 + \\frac{1}{4} + \\frac{1}{4} \\right) = \\frac{1}{2}. \\tag{2}\n\\]\n\nWhat if we copy in the Hadamard basis instead? 
First, we define the copying map (here the ancilla starts in \\( |+\\rangle \\) rather than \\( |0\\rangle \\)):\n\n\\[\nU' : |+\\rangle |+\\rangle \\mapsto |+\\rangle |+\\rangle \\\\\nU' : |-\\rangle |+\\rangle \\mapsto |-\\rangle |-\\rangle\n\\]\n\nWe see that this definition is symmetric with respect to switching |0\\rangle with |+\\rangle and |1\\rangle with |-\\rangle. This symmetry is given concretely by the Hadamard change of basis. Let's examine the action of H on our money states\n\n\\[\nH |\\psi_1\\rangle = |\\psi_3\\rangle \\quad H |\\psi_2\\rangle = |\\psi_4\\rangle \\quad H |\\psi_3\\rangle = |\\psi_1\\rangle \\quad H |\\psi_4\\rangle = |\\psi_2\\rangle \\quad H |\\psi_5\\rangle = e^{i\\pi/4} |\\psi_6\\rangle \\quad H |\\psi_6\\rangle = e^{-i\\pi/4} |\\psi_5\\rangle \\tag{3}\n\\]\n\nTo see these identities visually, recall that H is the rotation by \\(\\pi\\) about the X + Z axis on the Bloch sphere. The phases are irrelevant below, since they disappear inside squared magnitudes. Note that since \\(H = H^\\dagger\\), the corresponding identities for bras hold (with conjugated phases).\n\n\\[\n\\langle \\psi_1 | H = \\langle \\psi_3 | \\quad \\langle \\psi_2 | H = \\langle \\psi_4 | \\quad \\langle \\psi_3 | H = \\langle \\psi_1 | \\quad \\langle \\psi_4 | H = \\langle \\psi_2 | \\quad \\langle \\psi_5 | H = e^{-i\\pi/4} \\langle \\psi_6 | \\quad \\langle \\psi_6 | H = e^{i\\pi/4} \\langle \\psi_5 |\n\\]\n\nAll of this suggests that we should be able to express \\( U' \\) in terms of \\( U \\) and \\( H \\). Let's rewrite the equations defining \\( U' \\) in terms of \\( H \\) and the standard basis.\n\n\\[\nU \\lvert 0 \\rangle \\otimes \\lvert 0 \\rangle = \\lvert 0 \\rangle \\otimes \\lvert 0 \\rangle \\implies U (H \\otimes H) \\lvert + \\rangle \\lvert + \\rangle = (H \\otimes H) \\lvert + \\rangle \\lvert + \\rangle \\tag{4}\n\\]\n\nMultiplying both sides of this equation by \\( (H \\otimes H) \\) gives us that \\( (H \\otimes H)U(H \\otimes H) \\) acts on \\( \\lvert + \\rangle \\lvert + \\rangle \\) in the same way as \\( U' \\). In fact, these two maps also have the same action on \\( \\lvert - \\rangle \\lvert + \\rangle \\), since \\( (H \\otimes H)U(H \\otimes H) \\lvert - \\rangle \\lvert + \\rangle = (H \\otimes H)U \\lvert 1 \\rangle \\lvert 0 \\rangle = (H \\otimes H) \\lvert 1 \\rangle \\lvert 1 \\rangle = \\lvert - \\rangle \\lvert - \\rangle \\). 
Now we can apply this symmetry term-by-term. Let \\( k \\mapsto k' \\) denote the permutation of \\( \\{1, \\ldots, 6\\} \\) induced by \\( H \\) in (3). Then\n\n\\[\n\\lvert \\langle \\psi_k \\rvert \\langle \\psi_k \\rvert U' \\lvert \\psi_k \\rangle \\lvert + \\rangle \\rvert^2 = \\lvert \\langle \\psi_k \\rvert \\langle \\psi_k \\rvert (H \\otimes H)U(H \\otimes H) \\lvert \\psi_k \\rangle \\lvert + \\rangle \\rvert^2 = \\lvert \\langle \\psi_{k'} \\rvert \\langle \\psi_{k'} \\rvert U \\lvert \\psi_{k'} \\rangle \\lvert 0 \\rangle \\rvert^2.\n\\]\n\nSince \\( k \\mapsto k' \\) is a permutation, we can derive the equality below by rearranging terms.\n\n\\[\n\\frac{1}{6} \\sum_{k=1}^{6} \\lvert \\langle \\psi_k \\rvert \\langle \\psi_k \\rvert U' \\lvert \\psi_k \\rangle \\lvert + \\rangle \\rvert^2 = \\frac{1}{6} \\sum_{k=1}^{6} \\lvert \\langle \\psi_k \\rvert \\langle \\psi_k \\rvert U \\lvert \\psi_k \\rangle \\lvert 0 \\rangle \\rvert^2 = \\frac{1}{2}. \\tag{5}\n\\]", "question": "### (a.1) Consider the attack on this scheme which attempts to copy the bill in the standard basis, using the unitary \\(U : |0\\rangle |0\\rangle \\mapsto |0\\rangle |0\\rangle, \\, U : |1\\rangle |0\\rangle \\mapsto |1\\rangle |1\\rangle\\). What is its success probability? Recall that the success probability is defined as\n\n\\[\n\\frac{1}{6} \\sum_{k=1}^{6} \\left| \\langle \\psi_k | \\otimes \\langle \\psi_k | U (|\\psi_k\\rangle \\otimes |0\\rangle) \\right|^2.\n\\]\n\nWhat if we choose \\(U\\) to copy in the Hadamard basis instead?" }, { "context": "(Due to the TAs)\n For an attack achieving success probability \\( \\frac{2}{3} \\), see [1].", "question": "### (a.2) Can you improve on the attack described in the previous question? Give any attack that does better. 
[Bonus points: describe an attack with success probability \\(2/3\\).]" }, { "context": "(Due to the TAs)\n Suppose that \\( \\lvert \\psi \\rangle \\) and \\( \\lvert \\phi \\rangle \\) have an angle of \\( \\theta \\) between them when considered as points upon the Bloch sphere. Then \\( \\lvert \\langle \\phi \\lvert \\psi \\rangle \\lvert^2 = \\cos^2 \\frac{\\theta}{2} \\). The spatial distance between the points is \\( 2 \\sin \\frac{\\theta}{2} \\). Therefore, any configuration which maximizes the sum of squares of pairwise distance between points on the Bloch sphere also minimizes the sum of pairwise overlaps of the corresponding qubits. In other words, this choice maximizes the distinguishability of the states. For four points, this arrangement is achieved by the tetrahedron.\n\nFor any attack scheme, say given by a cptp map \\( T \\), the distinguishability of the \\( \\lvert \\psi_k \\rangle \\) gives an upper bound on the distinguishability of the \"copied\" states \\( T(\\lvert \\psi_k \\rangle \\langle \\psi_k \\lvert) \\).", "question": "### (b) Find a quantum money scheme which uses only four possible single-qubit states but is better than Wiesner’s scheme (i.e. the scheme which uses the four states \\(\\{ |\\psi_1\\rangle = |0\\rangle, \\, |\\psi_2\\rangle = |1\\rangle, \\, |\\psi_3\\rangle = |+\\rangle, \\, |\\psi_4\\rangle = |-\\rangle \\}\\)), in the sense that the optimal attack has success probability \\(< 3/4\\). (You do not need to prove completely formally that your scheme is better than Wiesner’s, but describe the four states you would use, and argue why you think it would be better than Wiesner’s.) [Hint: Think about the Bloch sphere — use all the available space!]" } ]
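The two success probabilities from (a.1) can be verified numerically. The sketch below is an added illustration (not part of the original solutions; numpy is assumed, and CNOT is taken as one convenient unitary completion of the partial map \(U\)):

```python
import numpy as np

k0 = np.array([1, 0], dtype=complex)
k1 = np.array([0, 1], dtype=complex)
s = 1 / np.sqrt(2)
money = [k0, k1, s * (k0 + k1), s * (k0 - k1),
         s * (k0 + 1j * k1), s * (k0 - 1j * k1)]

# Standard-basis copier: CNOT extends |00> -> |00>, |10> -> |11> to a unitary.
U = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]], dtype=complex)

def success(V):
    # (1/6) sum_k |<psi_k|<psi_k| V |psi_k>|0>|^2
    return np.mean([abs(np.conj(np.kron(v, v)) @ V @ np.kron(v, k0)) ** 2
                    for v in money])

H = s * np.array([[1, 1], [1, -1]], dtype=complex)
# Hadamard-basis copier: maps |+>|0> -> |+>|+> and |->|0> -> |->|->.
Uh = np.kron(H, H) @ U @ np.kron(H, np.eye(2))

assert np.isclose(success(U), 0.5)
assert np.isclose(success(Uh), 0.5)
```

Both attacks give exactly 1/2, matching equations (2) and (5).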
2016-10-22T00:00:00
Thomas Vidick, Andrea Coladangelo, Jalex Stark, Charles Xu
Caltech
Quantum teleportation
In class we saw that the no-cloning theorem forbids us from copying arbitrary quantum states, i.e. implementing a unitary \( U \) that takes \( |\psi\rangle |0\rangle \rightarrow |\psi\rangle |\psi\rangle \) for any state \( |\psi\rangle \).
[ { "context": "(Due to Mandy Huang)\n Suppose there exists a unitary \\( U \\) mapping \\( | \\Psi \\rangle \\rightarrow | \\Phi \\rangle \\). Since \\( U \\) is unitary, \\( U^{-1} = U^\\dagger \\) is also unitary, so the operator \\( V := U^\\dagger \\) is a unitary operator which maps \\( | \\Phi \\rangle \\rightarrow | \\Psi \\rangle \\).", "question": "### (a) In general show that if there exists a unitary \\( U \\) taking \\( |\\Psi\\rangle \\rightarrow |\\Phi\\rangle \\), there must exist another unitary \\( V \\) independent of \\( |\\Psi\\rangle \\) and \\( |\\Phi\\rangle \\) that takes \\( |\\Phi\\rangle \\rightarrow |\\Psi\\rangle \\). In other words, no information is lost when applying a unitary to a quantum state." }, { "context": "(Due to Mandy Huang)\n If Alice measures \\( | \\psi \\rangle \\) in a basis of her choice, then \\( | \\psi \\rangle \\) will collapse to some element of the basis, so if she reproduces this state and sends it to Bob, Bob will have a state that is different from the original state unless \\( | \\psi \\rangle \\) happens to be an element of the basis in which Alice is measuring.", "question": "### (b) Suppose Alice holds a qubit in the state \\( |\\psi\\rangle = a|0\\rangle + b|1\\rangle \\) and wants Bob to have that state as well. Why doesn’t the following work? Alice measures her qubit in some basis of her choice, then prepares another qubit in the state she obtains and sends that to Bob. (Please answer without making explicit use of the no-cloning theorem.)\n\nWe now introduce a scheme by which Alice can prepare \\( |\\psi\\rangle \\) on Bob’s side without sending him her qubit—in fact, without sending any quantum information at all!—provided they share a Bell state. 
To be precise, the initial setup is as follows: Alice holds \\( |\\psi\\rangle_S \\), and Alice and Bob each have a qubit of\n\n\\[ |\\phi^+\\rangle_{AB} = \\frac{1}{\\sqrt{2}} (|0\\rangle_A |0\\rangle_B + |1\\rangle_A |1\\rangle_B), \\]\n\nso that the joint state of all three qubits is\n\n\\[ |\\Psi\\rangle_{SAB} = |\\psi\\rangle_S \\otimes |\\phi^+\\rangle_{AB} = (a|0\\rangle_S + b|1\\rangle_S) \\otimes \\frac{1}{\\sqrt{2}} (|00\\rangle_{AB} + |11\\rangle_{AB}). \\]" }, { "context": "(Due to Mandy Huang)\n Let \\( | \\psi^- \\rangle = \\frac{1}{\\sqrt{2}} (|01\\rangle - |10\\rangle) \\). Then\n\n\\[ \n\\langle \\psi^+ | \\psi^+ \\rangle = \\frac{1}{2} (1 + 1) = 1 \\quad \\langle \\phi^+ | \\phi^+ \\rangle = \\frac{1}{2} (1 + 1) = 1 \n\\]\n\n\\[ \n\\langle \\psi^- | \\psi^+ \\rangle = \\frac{1}{2} (1 - 1) = 0 \\quad \\langle \\phi^- | \\phi^+ \\rangle = \\frac{1}{2} (1 - 1) = 0 \n\\]\n\n\\[ \n\\langle \\psi^+ | \\phi^+ \\rangle = 0 \\quad \\langle \\phi^+ | \\psi^+ \\rangle = 0 \n\\]\n\nThe remaining checks are identical: \\( \\langle \\phi^- | \\phi^- \\rangle = \\langle \\psi^- | \\psi^- \\rangle = 1 \\), and every inner product between a \\( \\phi \\)-type and a \\( \\psi \\)-type state vanishes because the two are supported on disjoint computational basis states.", "question": "### (c) As with a single qubit, any state of a two-qubit system can be written in terms of an orthonormal basis, and also measured in such a basis. One example is the computational basis \\(\\{|00\\rangle, |01\\rangle, |10\\rangle, |11\\rangle\\}\\). Find a state \\( |\\psi'\\rangle \\) that, together with the following three states,\n\n\\[ |\\phi^+\\rangle = \\frac{1}{\\sqrt{2}} (|00\\rangle_{AB} + |11\\rangle_{AB}), \\quad |\\phi^-\\rangle = \\frac{1}{\\sqrt{2}} (|00\\rangle_{AB} - |11\\rangle_{AB}), \\]\n\n\\[ |\\psi^+\\rangle = \\frac{1}{\\sqrt{2}} (|01\\rangle_{AB} + |10\\rangle_{AB}), \\]\n\nforms an orthonormal basis of the two-qubit space \\( \\mathbb{C}^4 \\). This basis is called the Bell basis."
}, { "context": "(Due to Mandy Huang)\n Note\n\n\\[ |\\Psi\\rangle_{SAB} = (a|0\\rangle_S + b|1\\rangle_S) \\otimes \\frac{1}{\\sqrt{2}} (|00\\rangle_{AB} + |11\\rangle_{AB}) \\]\n\n\\[ = \\frac{1}{\\sqrt{2}} (a|000\\rangle_{SAB} + b|100\\rangle_{SAB} + a|011\\rangle_{SAB} + b|111\\rangle_{SAB}) \\]\n\nand\n\n\\[ \\frac{1}{2} (| \\phi^+ \\rangle + | \\phi^- \\rangle) = \\frac{1}{\\sqrt{2}} |00\\rangle \\quad \\frac{1}{2} (| \\phi^+ \\rangle - | \\phi^- \\rangle) = \\frac{1}{\\sqrt{2}} |11\\rangle \\]\n\n\\[ \\frac{1}{2} (| \\psi^+ \\rangle + | \\psi^- \\rangle) = \\frac{1}{\\sqrt{2}} |01\\rangle \\quad \\frac{1}{2} (| \\psi^+ \\rangle - | \\psi^- \\rangle) = \\frac{1}{\\sqrt{2}} |10\\rangle \\]\n\nThen we have\n\n\\[ |\\Psi\\rangle_{SAB} = \\frac{1}{2} (a (| \\phi^+ \\rangle + | \\phi^- \\rangle) |0\\rangle + b (| \\psi^+ \\rangle - | \\psi^- \\rangle) |0\\rangle + a (| \\psi^+ \\rangle + | \\psi^- \\rangle) |1\\rangle + b (| \\phi^+ \\rangle - | \\phi^- \\rangle) |1\\rangle) \\]\n\n\\[ = \\frac{1}{2} (| \\phi^+ \\rangle (a|0\\rangle + b|1\\rangle) + | \\phi^- \\rangle (a|0\\rangle - b|1\\rangle) + | \\psi^+ \\rangle (b|0\\rangle + a|1\\rangle) + | \\psi^- \\rangle (-b|0\\rangle + a|1\\rangle)) \\]", "question": "### (d) Rewrite the joint state (2) as a linear combination of the form \\( \\sum_{i=1}^{4} |\\alpha_i\\rangle_{SA} |\\beta_i\\rangle_B \\), where \\( |\\alpha\\rangle \\) ranges over the four possible Bell states on Alice’s two qubits \\( S \\) and \\( A \\), and \\( |\\beta\\rangle \\) is a single-qubit state on Bob’s qubit." 
}, { "context": "(Due to Mandy Huang)\n Note\n\n\\[ |\\Psi\\rangle_{SAB} = (a|0\\rangle_S + b|1\\rangle_S) \\otimes \\frac{1}{\\sqrt{2}} (|00\\rangle_{AB} + |11\\rangle_{AB}) \\]\n\n\\[ = \\frac{1}{\\sqrt{2}} (a|000\\rangle_{SAB} + b|100\\rangle_{SAB} + a|011\\rangle_{SAB} + b|111\\rangle_{SAB}) \\]\n\nand\n\n\\[ \\frac{1}{2} (| \\phi^+ \\rangle + | \\phi^- \\rangle) = \\frac{1}{\\sqrt{2}} |00\\rangle \\quad \\frac{1}{2} (| \\phi^+ \\rangle - | \\phi^- \\rangle) = \\frac{1}{\\sqrt{2}} |11\\rangle \\]\n\n\\[ \\frac{1}{2} (| \\psi^+ \\rangle + | \\psi^- \\rangle) = \\frac{1}{\\sqrt{2}} |01\\rangle \\quad \\frac{1}{2} (| \\psi^+ \\rangle - | \\psi^- \\rangle) = \\frac{1}{\\sqrt{2}} |10\\rangle \\]\n\nThen we have\n\n\\[ |\\Psi\\rangle_{SAB} = \\frac{1}{2} (a (| \\phi^+ \\rangle + | \\phi^- \\rangle) |0\\rangle + b (| \\psi^+ \\rangle - | \\psi^- \\rangle) |0\\rangle + a (| \\psi^+ \\rangle + | \\psi^- \\rangle) |1\\rangle + b (| \\phi^+ \\rangle - | \\phi^- \\rangle) |1\\rangle) \\]\n\n\\[ = \\frac{1}{2} (| \\phi^+ \\rangle (a|0\\rangle + b|1\\rangle) + | \\phi^- \\rangle (a|0\\rangle - b|1\\rangle) + | \\psi^+ \\rangle (b|0\\rangle + a|1\\rangle) + | \\psi^- \\rangle (-b|0\\rangle + a|1\\rangle)) \\]\n\nThe possible outcomes are \\( | \\phi^+ \\rangle, | \\phi^- \\rangle, | \\psi^+ \\rangle, \\) and \\( | \\psi^- \\rangle \\) in which case Bob's qubit is \\( a|0\\rangle + b|1\\rangle, a|0\\rangle - b|1\\rangle, b|0\\rangle + a|1\\rangle, \\) and \\( -b|0\\rangle + a|1\\rangle \\), respectively.\n\nIf Alice measures \\( | \\phi^+ \\rangle \\) then Bob should apply the identity.\n\nIf Alice measures \\( | \\phi^- \\rangle \\) then Bob should apply \\( Z = \\begin{pmatrix} 1 & 0 \\\\ 0 & -1 \\end{pmatrix} \\).\n\nIf Alice measures \\( | \\psi^+ \\rangle \\) then Bob should apply \\( X = \\begin{pmatrix} 0 & 1 \\\\ 1 & 0 \\end{pmatrix} \\).\n\nIf Alice measures \\( | \\psi^- \\rangle \\) then Bob should apply \\( ZX = \\begin{pmatrix} 0 & 1 \\\\ -1 & 0 \\end{pmatrix} 
\\).", "question": "### (e) Suppose Alice measures her two qubits \\( SA \\) in the Bell basis and sends the result to Bob. Show that for each of the four possible outcomes, Bob can use this (classical!) information to determine a unitary, independent of \\( |\\psi\\rangle_S \\), on his qubit that will map it, in all cases, to the original state \\( |\\psi\\rangle_B \\) that Alice had." } ]
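The teleportation protocol in (d)-(e) can be simulated end to end. The sketch below is an added illustration (numpy is assumed): it projects Alice's two qubits onto each Bell state and checks that Bob's corrected qubit always equals \(|\psi\rangle\).

```python
import numpy as np

k0 = np.array([1, 0], dtype=complex)
k1 = np.array([0, 1], dtype=complex)
s = 1 / np.sqrt(2)

rng = np.random.default_rng(7)
v = rng.normal(size=2) + 1j * rng.normal(size=2)
psi = v / np.linalg.norm(v)          # random single-qubit state a|0> + b|1>

bell = {
    "phi+": s * (np.kron(k0, k0) + np.kron(k1, k1)),
    "phi-": s * (np.kron(k0, k0) - np.kron(k1, k1)),
    "psi+": s * (np.kron(k0, k1) + np.kron(k1, k0)),
    "psi-": s * (np.kron(k0, k1) - np.kron(k1, k0)),
}
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
correction = {"phi+": np.eye(2), "phi-": Z, "psi+": X, "psi-": Z @ X}

state = np.kron(psi, bell["phi+"])   # |psi>_S tensor |phi+>_AB
for outcome, b in bell.items():
    # Project qubits S, A onto the Bell state |b>; qubit B is untouched.
    post = np.kron(np.outer(b, b.conj()), np.eye(2)) @ state
    prob = np.vdot(post, post).real
    assert np.isclose(prob, 0.25)    # each outcome is equally likely
    bob = b.conj() @ post.reshape(4, 2)
    bob /= np.linalg.norm(bob)
    assert np.allclose(correction[outcome] @ bob, psi)
```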
2016-10-22T00:00:00
Thomas Vidick, Andrea Coladangelo, Jalex Stark, Charles Xu
Caltech
Robustness of GHZ and W states, part 2
We return to the multi-qubit GHZ and W states originally introduced in HW 2 Problem 4. As a reminder: \[|GHZ_N\rangle = \frac{1}{\sqrt{2}} (|0\rangle^{\otimes N} + |1\rangle^{\otimes N}),\] \[|W_N\rangle = \frac{1}{\sqrt{N}} (|100 \cdots 00\rangle + |010 \cdots 00\rangle + \cdots + |000 \cdots 01\rangle).\] This week we learned to distinguish product states from (pure) entangled states by calculating the Schmidt rank of a bipartite state \(|\Psi\rangle_{AB}\), i.e. the rank of the reduced state \(\rho_A = \text{Tr}_B |\Psi\rangle \langle \Psi|_{AB}\). In particular \(|\Psi\rangle_{AB}\) is a product state if and only if its Schmidt rank is 1. In the following, we denote by \(\text{Tr}_N\) the operation of tracing out only the last of \(N\) qubits. (Due to Bolton Bailey)
[ { "context": "For the GHZ state, we have\n\\[ \\text{Tr}_{N}|GHZ_{N}\\rangle\\langle GHZ_{N}| = \\frac{1}{2} \\left( |0\\rangle^{\\otimes N-1}\\langle 0|^{\\otimes N-1} + |1\\rangle^{\\otimes N-1}\\langle 1|^{\\otimes N-1} \\right) \\]\nSo therefore, the rank is \\( r_{GHZ} = 2 \\).\n\nFor the W state, we have\n\\[ |W_{N}\\rangle = \\sqrt{\\frac{N-1}{N}} |W_{N-1}\\rangle \\otimes |0\\rangle + \\frac{1}{\\sqrt{N}} |0\\rangle^{\\otimes N-1} \\otimes |1\\rangle \\]\nSo we get\n\\[ \\text{Tr}_{N}|W_{N}\\rangle\\langle W_{N}| = \\frac{N-1}{N} |W_{N-1}\\rangle\\langle W_{N-1}| + \\frac{1}{N} |0\\rangle^{\\otimes N-1}\\langle 0|^{\\otimes N-1} \\]\nAnd so the rank is \\( r_{W} = 2 \\).", "question": "### (a) What are the ranks \\( r_{GHZ} \\) of \\( \\text{Tr}_N |GHZ_N\\rangle \\langle GHZ_N| \\) and \\( r_W \\) of \\( \\text{Tr}_N |W_N\\rangle \\langle W_N| \\)?\n(Note that these are the Schmidt ranks of \\( |GHZ_N\\rangle \\) and \\( |W_N\\rangle \\) if we partition each of them between the first \\( N - 1 \\) qubits and the last qubit.)\n\nLet's now introduce a more discriminating (in fact, continuous) measure of the entanglement of a state \\( |\\Psi\\rangle_{AB} \\): namely, the **purity** of the reduced state \\( \\rho_A \\), defined as \\( \\text{Tr} \\rho_A^2 \\)." }, { "context": "The minimum purity for a density matrix \\( \\rho \\) on a \\( d \\)-dimensional vector space is \\( \\frac{1}{d} \\). To see that this can be attained, consider\n\\[ \\rho = \\frac{1}{d} I_{d} \\]\nThis matrix is positive semidefinite as a positive multiple of the identity, and it has trace 1, since the \\( I_{d} \\) has trace \\( d \\). 
Note that\n\\[ \\rho^{2} = \\frac{1}{d^{2}} I_{d}^{2} = \\frac{1}{d^{2}} I_{d} \\]\nAnd so\n\\[ \\text{Tr} \\rho^{2} = \\frac{1}{d^{2}} d = \\frac{1}{d} \\]\nTo see it is impossible to have a density matrix of this dimension with a smaller purity, let\n\\[ \\rho = \\sum_i p_i |\\psi_i \\rangle \\langle \\psi_i | \\]\nAnd so therefore\n\\[ \\rho^2 = \\sum_{i,j} p_i p_j |\\psi_i \\rangle \\langle \\psi_i | \\psi_j \\rangle \\langle \\psi_j | \\]\n\\[ \\text{Tr} \\rho^2 = \\text{Tr} \\left( \\sum_{i,j} p_i p_j |\\psi_i \\rangle \\langle \\psi_i | \\psi_j \\rangle \\langle \\psi_j | \\right) \\]\nBy linearity of the trace\n\\[ \\text{Tr} \\rho^2 = \\sum_{i,j} p_i p_j \\text{Tr} (|\\psi_i \\rangle \\langle \\psi_i | \\psi_j \\rangle \\langle \\psi_j |) \\]\nBy invariance of trace under cyclic permutations\n\\[ \\text{Tr} \\rho^2 = \\sum_{i,j} p_i p_j \\text{Tr} (|\\psi_j \\rangle \\langle \\psi_j | \\psi_i \\rangle \\langle \\psi_i |) \\]\n\\[ \\text{Tr} \\rho^2 = \\sum_{i,j} p_i p_j \\langle \\psi_j | \\psi_i \\rangle \\langle \\psi_i | \\psi_j \\rangle \\]\n\\[ \\text{Tr} \\rho^2 = \\sum_{i,j} p_i p_j |\\langle \\psi_i | \\psi_j \\rangle|^2 \\]\nNow, we separate the sum into the cases where \\( i = j \\) and \\( i \\neq j \\)\n\\[ \\text{Tr} \\rho^2 = \\sum_i p_i^2 |\\langle \\psi_i | \\psi_i \\rangle|^2 + \\sum_{i \\neq j} p_i p_j |\\langle \\psi_i | \\psi_j \\rangle|^2 \\]\nSince \\( \\langle \\psi_i | \\psi_i \\rangle = 1 \\), we get\n\\[ \\text{Tr} \\rho^2 = \\sum_i p_i^2 + \\sum_{i \\neq j} p_i p_j |\\langle \\psi_i | \\psi_j \\rangle|^2 \\]\nThe two sums cannot be minimized independently, so we specialize to the spectral decomposition of \\( \\rho \\): taking the \\( |\\psi_i \\rangle \\) to be orthonormal eigenvectors of \\( \\rho \\), the sum has at most \\( d \\) terms and the cross terms vanish, leaving \\( \\text{Tr} \\rho^2 = \\sum_i p_i^2 \\). Since the \\( p_i \\) are at most \\( d \\) nonnegative numbers summing to 1, the Cauchy-Schwarz inequality gives \\( 1 = (\\sum_i p_i)^2 \\leq d \\sum_i p_i^2 \\), and so\n\\[ \\text{Tr} \\rho^2 \\geq \\frac{1}{d} \\]\nAs we claimed.\n\nThe maximum value of the purity of a density matrix on \\( d \\) dimensions is 1. 
To see that this can be attained, consider\n\\[ \\rho = |0 \\rangle \\langle 0 | \\]\nWhich is a pure density matrix, and satisfies\n\\[ \\text{Tr} \\rho^2 = \\text{Tr}(|0\\rangle \\langle 0|0\\rangle \\langle 0|) = \\text{Tr}((|0\\rangle \\langle 0|)(|0\\rangle \\langle 0|)) = 1 \\]\nTo see that this is maximal, recall that we have shown for arbitrary\n\\[ \\rho = \\sum_i p_i |\\psi_i\\rangle \\langle \\psi_i| \\]\nThat\n\\[ \\text{Tr} \\rho^2 = \\sum_{i,j} p_i p_j |\\langle \\psi_i | \\psi_j \\rangle|^2 \\]\nAnd since \\( 0 \\leq |\\langle \\psi_i | \\psi_j \\rangle|^2 \\leq 1 \\), and \\( p_i, p_j \\) are positive\n\\[ \\text{Tr} \\rho^2 = \\sum_{i,j} p_i p_j |\\langle \\psi_i | \\psi_j \\rangle|^2 \\leq \\sum_{i,j} p_i p_j \\leq 1 \\cdot 1 = 1 \\]\nSo 1 is the maximum purity.", "question": "### (b) What are the minimum and maximum values of \\( \\text{Tr} \\rho^2 \\) that can be attained by an arbitrary density matrix \\( \\rho \\) on a \\( d \\)-dimensional Hilbert space? For each extreme, prove that it is indeed optimal and give an example of a state \\( \\rho \\) that achieves it." }, { "context": "As a state gets more entangled, we expect the purity to decrease. We think of a state being more entangled as the partial state being more heavily correlated with the other half of the bipartite state. Thus, if we trace out the other state, the partial state will be more mixed.", "question": "### (c) Would you expect the purity of \\( \\rho_A \\) to increase or decrease as the full state \\( |\\Psi\\rangle_{AB} \\) gets \"more entangled\"? Give a qualitative justification for your answer. [NB: The reduced-state purity can in fact be proven to be an entanglement monotone, i.e. it only changes in one direction under local operations and classical communication. 
The proof is beyond the scope of this problem but can be found in [https://arxiv.org/abs/quant-ph/0506181](https://arxiv.org/abs/quant-ph/0506181).]" }, { "context": "Again, we have\n\\[ \\text{Tr}_{N} (|GHZ_N\\rangle \\langle GHZ_N|) = \\frac{1}{2} (|0\\rangle^{\\otimes (N-1)} \\langle 0|^{\\otimes (N-1)} + |1\\rangle^{\\otimes (N-1)} \\langle 1|^{\\otimes (N-1)}) \\]\nAnd the purity of this state is\n\\[ \\text{Tr} \\left( (\\text{Tr}_{N} |GHZ_N\\rangle \\langle GHZ_N|)^2 \\right) = \\text{Tr} \\left( \\frac{1}{4} (|0\\rangle^{\\otimes (N-1)} \\langle 0|^{\\otimes (N-1)} + |1\\rangle^{\\otimes (N-1)} \\langle 1|^{\\otimes (N-1)})^2 \\right) \\]\n\\[ = \\text{Tr} \\left( \\frac{1}{4} (|0\\rangle^{\\otimes (N-1)} \\langle 0|^{\\otimes (N-1)} + |1\\rangle^{\\otimes (N-1)} \\langle 1|^{\\otimes (N-1)}) \\right) \\]\n\\[ = \\frac{1}{2} \\]\nSo in the limit as \\( N \\to \\infty \\), the purity of this state is \\( \\frac{1}{2} \\).", "question": "### (d) What is the purity of \\( \\text{Tr}_N |GHZ_N\\rangle \\langle GHZ_N| \\) in the limit \\( N \\rightarrow \\infty \\)?" }, { "context": "Evaluating the purity\n\\[ \\text{Tr} \\left( (\\text{Tr}_{N} |W_N\\rangle \\langle W_N|)^2 \\right) = \\text{Tr} \\left( \\left( \\frac{N-1}{N} |W_{N-1}\\rangle \\langle W_{N-1}| + \\frac{1}{N} |0\\rangle^{\\otimes (N-1)} \\langle 0|^{\\otimes (N-1)} \\right)^2 \\right) \\]\n\\[ = \\text{Tr} \\left( \\left( \\left( \\frac{N-1}{N} \\right)^2 |W_{N-1}\\rangle \\langle W_{N-1}| + \\left( \\frac{1}{N} \\right)^2 |0\\rangle^{\\otimes (N-1)} \\langle 0|^{\\otimes (N-1)} \\right) \\right) \\]\n\\[ = \\left( \\frac{N-1}{N} \\right)^2 + \\left( \\frac{1}{N} \\right)^2 \\]\n\\[ = \\frac{N^2 - 2N + 2}{N^2} \\]\nSo in the limit as \\( N \\rightarrow \\infty \\), the purity of this state is 1. 
Since this value is higher than that for the GHZ states, we can conclude the W states are more robust, as they remain mostly pure even when a bit is traced out.", "question": "### (e) What is the purity of \\( \\text{Tr}_N |W_N\\rangle \\langle W_N| \\) in the limit \\( N \\rightarrow \\infty \\)? Comparing with part (d), explain why we can conclude that the entanglement in the W states is more \"robust\" to the loss of a single qubit." }, { "context": "We now repeat the analysis tracing out \\( m \\) qubits instead of 1, which we will indicate by \\( \\text{Tr}_B \\). Again, we have\n\\[ \\text{Tr}_B|GHZ_N\\rangle\\langle GHZ_N| = \\frac{1}{2} (|0\\rangle^{\\otimes N-m}\\langle 0|^{\\otimes N-m} + |1\\rangle^{\\otimes N-m}\\langle 1|^{\\otimes N-m}) \\]\nAnd the purity of this state is\n\\[ \\text{Tr} \\left( (\\text{Tr}_B|GHZ_N\\rangle\\langle GHZ_N|)^2 \\right) = \\text{Tr} \\left( \\frac{1}{4} (|0\\rangle^{\\otimes N-m}\\langle 0|^{\\otimes N-m} + |1\\rangle^{\\otimes N-m}\\langle 1|^{\\otimes N-m})^2 \\right) \\]\n\\[ = \\text{Tr} \\left( \\frac{1}{4} (|0\\rangle^{\\otimes N-m}\\langle 0|^{\\otimes N-m} + |1\\rangle^{\\otimes N-m}\\langle 1|^{\\otimes N-m}) \\right) \\]\n\\[ = \\frac{1}{2} \\]\nSo no matter how many bits are traced out of the GHZ density matrix, the purity is \\( \\frac{1}{2} \\) and so in the limit as \\( N \\rightarrow \\infty \\), the purity is \\( \\frac{1}{2} \\).\n\nEvaluating the purity for the W density matrix\n\\[ \\text{Tr} \\left( (\\text{Tr}_B|W_N\\rangle\\langle W_N|)^2 \\right) = \\text{Tr} \\left( \\left( \\frac{N-m}{N} |W_{N-m}\\rangle\\langle W_{N-m}| + \\frac{m}{N} |0\\rangle^{\\otimes N-m}\\langle 0|^{\\otimes N-m} \\right)^2 \\right) \\]\n\\[ = \\text{Tr} \\left( \\left( \\frac{N-m}{N} \\right)^2 |W_{N-m}\\rangle\\langle W_{N-m}| + \\left( \\frac{m}{N} \\right)^2 |0\\rangle^{\\otimes N-m}\\langle 0|^{\\otimes N-m} \\right) \\]\n\\[ = \\left( \\frac{N-m}{N} \\right)^2 + \\left( \\frac{m}{N} \\right)^2 \\]\n\\[ = \\frac{N^2 - 
2Nm + 2m^2}{N^2} \\]\nSo in the limit as \\( N \\rightarrow \\infty \\), the purity of this state still goes to 1 (although slower for larger \\( m \\)). Since this value is higher than that for the GHZ states, we can again conclude the W states are more robust, as they remain mostly pure even when many bits are traced out.", "question": "### (f) Repeat the analysis of parts (d) and (e) for the general case of tracing out some constant number of qubits. That is, fix constant \\( m \\) and compute the limits as \\( N \\rightarrow \\infty \\) of the purities of the \\( N \\)-qubit GHZ and W states with \\( m \\) qubits traced out, as functions of \\( m \\)." } ]
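The purity formulas derived in (d)-(f) can be checked directly for small \(N\). The sketch below is an added illustration (numpy is assumed): it builds the GHZ and W states, traces out the last \(m\) qubits via a reshape, and compares against the closed forms \(1/2\) and \(((N-m)^2 + m^2)/N^2\).

```python
import numpy as np

def ghz(n):
    v = np.zeros(2 ** n)
    v[0] = v[-1] = 1 / np.sqrt(2)    # (|0...0> + |1...1>)/sqrt(2)
    return v

def w(n):
    v = np.zeros(2 ** n)
    for i in range(n):               # equal weight on all weight-1 strings
        v[2 ** i] = 1 / np.sqrt(n)
    return v

def reduced_purity(v, n, m):
    # Keep the first n-m qubits, trace out the last m: rho_A = M M^dagger
    M = v.reshape(2 ** (n - m), 2 ** m)
    rho = M @ M.conj().T
    return np.trace(rho @ rho).real

n = 8
for m in range(1, 5):
    assert np.isclose(reduced_purity(ghz(n), n, m), 0.5)
    assert np.isclose(reduced_purity(w(n), n, m),
                      ((n - m) ** 2 + m ** 2) / n ** 2)
```

The reshape trick works because the reduced state of a pure state \(|\Psi\rangle\) with coefficient matrix \(M\) is exactly \(MM^\dagger\).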
2016-03-11T00:00:00
Thomas Vidick, Andrea Coladangelo, Jalex Stark, Charles Xu
Caltech
Dimension of a purifying system
Consider the following protocol for preparing an arbitrary (possibly mixed) state \( \rho_A \) on a qudit \( A \) with dimension \( d \), i.e. a system whose Hilbert space has basis \( \{ |0\rangle, |1\rangle, \ldots, |d-1\rangle \} \): - Prepare a pure state \( |\Psi\rangle_{AB} \) on \( A \) and a \( D \)-dimensional ancilla qudit \( B \) (for some integer \( D \)), satisfying \( \text{Tr}_B |\Psi\rangle \langle \Psi|_{AB} = \rho_A \). - Discard the ancilla qudit \( B \). (Due to De Huang)
[ { "context": "Let \\( r \\) be the Schmidt rank of \\( |\\Psi\\rangle_{AB} \\), then \\( D \\geq r \\). On the other hand, given\n\n\\[ \\rho_A = \\frac{1}{2} (|0\\rangle \\langle 0| + |3\\rangle \\langle 3|), \\]\n\nwe have\n\n\\[ r = \\text{rank}(\\text{tr}_B|\\Psi\\rangle\\langle\\Psi|_{AB}) = \\text{rank}(\\rho_A) = 2. \\]\n\nTherefore \\( D \\geq 2 \\), the minimum value of \\( D \\) is no less than 2. In particular, if we let \\( B \\) be the space of single qubit (\\( D = 2 \\)), and take\n\n\\[ |\\Psi\\rangle_{AB} = \\frac{1}{\\sqrt{2}} (|0\\rangle_A|0\\rangle_B + |3\\rangle_A|1\\rangle_B), \\]\n\nthen we have \\( \\text{tr}_B|\\Psi\\rangle\\langle\\Psi|_{AB} = \\rho_A \\). Thus the minimum of \\( D \\) is 2.", "question": "### (a) What is the minimum value of the ancilla dimension \\( D \\) for which we can have \\( \\rho_A = \\frac{1}{2} (|0\\rangle \\langle 0| + |3\\rangle \\langle 3|) \\)? Argue explicitly that no smaller dimension will work." }, { "context": "We can rewrite \\( \\rho_A \\) as\n\n\\[ \\rho_A = \\frac{1}{5}|1\\rangle\\langle1| + \\frac{2}{5} \\left( \\frac{1}{\\sqrt{2}}|2\\rangle + \\frac{1}{\\sqrt{2}}|3\\rangle \\right) \\left( \\frac{1}{\\sqrt{2}}\\langle2| + \\frac{1}{\\sqrt{2}}\\langle3| \\right) + \\frac{2}{5} \\left( \\frac{1}{\\sqrt{2}}|4\\rangle + \\frac{1}{\\sqrt{2}}|5\\rangle \\right) \\left( \\frac{1}{\\sqrt{2}}\\langle4| + \\frac{1}{\\sqrt{2}}\\langle5| \\right). \\]\n\nNotice that \\( \\frac{1}{\\sqrt{2}}(|2\\rangle + |3\\rangle) \\), \\( \\frac{1}{\\sqrt{2}}(|4\\rangle + |5\\rangle) \\) are orthogonal to each other, we have\n\n\\[ \\text{rank}(\\rho_A) = 3. \\]\n\nAgain we have\n\n\\[ D \\geq \\text{rank}(\\text{tr}_B|\\Psi\\rangle\\langle\\Psi|_{AB}) = \\text{rank}(\\rho_A) = 3. 
\]\n\nIn particular, if we let \( B \) be a 3-dimensional qudit space, and take\n\n\[ |\Psi\rangle_{AB} = \frac{1}{\sqrt{5}}|1\rangle_A|0\rangle_B + \frac{1}{\sqrt{5}}(|2\rangle + |3\rangle)_A|1\rangle_B + \frac{1}{\sqrt{5}}(|4\rangle + |5\rangle)_A|2\rangle_B, \]\n\nthen we have \( \text{tr}_B|\Psi\rangle\langle\Psi|_{AB} = \rho_A \). Thus the minimum of \( D \) is 3.", "question": "### (b) What is the minimum value of \( D \) for which we can have\n\[ \rho_A = \frac{1}{5} (|1\rangle \langle 1| + |2\rangle \langle 2| + |2\rangle \langle 3| + |3\rangle \langle 2| + |3\rangle \langle 3| + |4\rangle \langle 4| + |4\rangle \langle 5| + |5\rangle \langle 4| + |5\rangle \langle 5|) \]\nAgain argue explicitly that no smaller dimension will work." }, { "context": "For a general \( \rho_A \), let \( r \) be the rank of \( \rho_A \). Consider the eigenvalue decomposition of \( \rho_A \),\n\n\[ \rho_A = \sum_{i=0}^{r-1} \lambda_i |\phi_i\rangle\langle\phi_i|, \]\n\nwhere \( |\phi_i\rangle \), \( i = 0, 1, \ldots, r-1 \), are normalized orthogonal eigenstates, \( \lambda_i > 0 \), \( i = 0, 1, \ldots, r-1 \), and \( \sum_{i=0}^{r-1} \lambda_i = 1 \). If \( \text{tr}_B|\Psi\rangle\langle\Psi|_{AB} = \rho_A \), then\n\n\[ D \geq \text{Schmidt rank}(|\Psi\rangle_{AB}) = \text{rank}(\text{tr}_B|\Psi\rangle\langle\Psi|_{AB}) = \text{rank}(\rho_A) = r. 
\]\n\nIn particular, let \( B \) be an \( r \)-dimensional qudit space with standard basis \(|i\rangle\), \( i = 0, 1, \ldots, r-1 \), and take\n\n\[ |\Psi\rangle_{AB} = \sum_{i=0}^{r-1} \sqrt{\lambda_i} |\phi_i\rangle_A |i\rangle_B, \]\n\nthen we have \( \text{tr}_B|\Psi\rangle\langle\Psi|_{AB} = \rho_A \), and thus the minimum of \( D \) is \( r \).", "question": "### (c) For a general \(\rho_A\), explain how to find the minimum ancilla dimension \(D\) for which there exists a pure purifying state \(|\Psi\rangle_{AB}\)." }, { "context": "Still, consider the eigenvalue decomposition of \( \rho_A \) with rank \( r \),\n\n\[ \rho_A = \sum_{i=0}^{r-1} \lambda_i |\phi_i\rangle\langle\phi_i|, \]\n\nwhere \( \{ |\phi_i\rangle, \, i = 0, 1, \ldots, d - 1 \} \) is an orthogonal basis of the qudit space \( A \), \( \lambda_i > 0, \, i = 0, 1, \ldots, r - 1 \), and \( \sum_{i=0}^{r-1} \lambda_i = 1 \).\n\nAlso consider the eigenvalue decomposition of \( \sigma_{AB} \),\n\n\[ \sigma_{AB} = \sum_{k=0}^{r'-1} s_k |\Psi_k\rangle \langle \Psi_k|_{AB}, \]\n\nwhere \( r' = \text{rank}(\sigma_{AB}) \), and \( s_k > 0, \, k = 0, 1, \ldots, r' - 1 \).\n\n(i) First we find the lowest attainable rank of \( \sigma_{AB} \). For each \(|\Psi_k\rangle_{AB}\), we have\n\n\[ \text{rank}(\text{tr}_B(|\Psi_k\rangle \langle \Psi_k|_{AB})) = \text{Schmidt rank}(|\Psi_k\rangle_{AB}) \leq m, \]\n\nsince the dimension of the \( B \) system is \( m \). Then we have\n\n\[ \text{rank}(\text{tr}_B \sigma_{AB}) = \text{rank}\left(\sum_{k=0}^{r'-1} s_k \text{tr}_B(|\Psi_k\rangle \langle \Psi_k|_{AB})\right) \leq \sum_{k=0}^{r'-1} \text{rank}(\text{tr}_B(|\Psi_k\rangle \langle \Psi_k|_{AB})) \leq r' \times m. 
\]\n\nBut since \( \text{tr}_B(\sigma_{AB}) = \rho_A \), we have\n\n\[ \text{rank}(\text{tr}_B \sigma_{AB}) = \text{rank}(\rho_A) = r, \]\n\nand thus we get\n\n\[ r' \times m \geq r, \, \text{i.e.} \, r' \geq \left\lceil \frac{r}{m} \right\rceil, \]\n\nsince \( r' \) is an integer. Let \( p = \left\lfloor \frac{r}{m} \right\rfloor, \, q = r - mp \), then we can take\n\n\[ s_k = \sum_{i=0}^{m-1} \lambda_{mk+i}, \, |\Psi_k\rangle_{AB} = \frac{1}{\sqrt{s_k}} \sum_{i=0}^{m-1} \sqrt{\lambda_{mk+i}} |\phi_{mk+i}\rangle_A |i\rangle_B, \, k = 0, 1, \ldots, p - 1, \]\n\n\[ s_p = \sum_{i=0}^{q-1} \lambda_{mp+i}, \, |\Psi_p\rangle_{AB} = \frac{1}{\sqrt{s_p}} \sum_{i=0}^{q-1} \sqrt{\lambda_{mp+i}} |\phi_{mp+i}\rangle_A |i\rangle_B, \, \text{if} \, q > 0. \]\n\nIt's easy to check that \( |\Psi_k\rangle, \, k = 0, 1, \ldots, p \), are orthogonal to each other, and that\n\n\[ \text{tr}_B(\sigma_{AB}) = \rho_A, \]\n\n\[ r' = \text{rank}(\sigma_{AB}) = p + \text{sign}(q) \cdot 1 = p + \left\lceil \frac{r}{m} \right\rceil - \left\lfloor \frac{r}{m} \right\rfloor = \left\lceil \frac{r}{m} \right\rceil. \]\n\nIn this case the lower bound is achieved. Thus the minimum attainable rank of \( \sigma_{AB} \) is \( \left\lceil \frac{r}{m} \right\rceil \).\n\n(ii) Next we find the highest attainable purity of \( \sigma_{AB} \), i.e. the maximum of\n\n\[ \text{tr}(\sigma_{AB}^2) = \sum_{k=0}^{r'-1} s_k^2. \]\n\nWe now should find out the constraints on \( s = (s_0, s_1, \ldots, s_{r'-1}) \) (notice that we now have no upper bound on \( r' \), even though there is indeed one, as \( r' \leq m \times r \)). 
Let \( \Omega \) be the feasible set for $s$, and let $s^*$ be an optimal solution that achieves the maximum of attainable purity.\n\nThe first constraints on $s$ are\n\n\[ s_k \ge 0, \quad k = 0, 1, \ldots, r' - 1, \quad \sum_{k=0}^{r'-1} s_k = 1, \]\n\nsince $\sigma_{AB}$ is a density matrix. Before we go further, we give a Lemma: if $a \ge b \ge 0, c \ge d \ge 0, a + b = c + d = e, a \ge c$, then $a^2 + b^2 \ge c^2 + d^2$.\n\nProof:\n\n\[ \begin{aligned}\na^2 + b^2 - (c^2 + d^2) &= (a - c)(a + c) + (b - d)(b + d) \\\n&= (a - c)(a + c) + (c - a)(2e - a - c) \\\n&= (a - c)(2a + 2c - 2e) \\\n&\ge 0.\n\end{aligned} \]\n\nNow we inductively define\n\n\[ g_0^* = \max_{s \in \Omega} s_0, \quad g_k^* = \max_{s \in \Omega,\, s_i = g_i^*,\, i = 0, \ldots, k-1} s_k, \quad k = 1, 2, \ldots, r' - 1, \]\n\nthen using the lemma, we can always make $s_k^* = g_k^*$, $k = 0, 1, \ldots, r' - 1$ (it's not a hard proof, and we skip it here). We should find out what the $g_k^*$ are.\n\nLet's assume that in the expression of $\sigma_{AB}$ we give above,\n\n\[ |\Psi_k\rangle_{AB} = \sum_{i=0}^{r-1} \sum_{j=0}^{m-1} \alpha_{ij}^k |\phi_i\rangle_A |j\rangle_B, \]\n\nwhere\n\n\[ \sum_{i=0}^{r-1} \sum_{j=0}^{m-1} |\alpha_{ij}^k|^2 = 1, \quad k = 0, 1, \ldots, r' - 1, \]\n\n\[ \sum_{i=0}^{r-1} \sum_{j=0}^{m-1} \overline{\alpha_{ij}^l} \alpha_{ij}^k = \langle \Psi_l | \Psi_k \rangle = 0, \quad k \neq l, \]\n\nsince the $|\Psi_k\rangle_{AB}$ are normalized and orthogonal to each other. The condition $\text{tr}_B(\sigma_{AB}) = \rho_A$ gives that\n\n\[ \lambda_i = \sum_{k=0}^{r'-1} s_k \sum_{j=0}^{m-1} |\alpha_{ij}^k|^2, \quad i = 0, 1, \ldots, r - 1. 
\]\n\nNow define matrices $A_k \in \mathbb{C}^{r \times m}$ as\n\n\[ (A_k)_{ij} = \sqrt{s_k} \alpha_{ij}^k, \quad k = 0, 1, \ldots, r' - 1, \]\n\nthen all the constraints above can be summarized as\n\n\[ \sum_{k=0}^{r'-1} A_k A_k^\dagger = \Lambda = \begin{pmatrix}\n\lambda_0 \\\n& \lambda_1 \\\n& & \ddots \\\n& & & \lambda_{r-1}\n\end{pmatrix}, \]\n\n\[ \text{tr}(A_k A_k^\dagger) = s_k, \quad k = 0, 1, \ldots, r' - 1; \quad \text{tr}(A_k A_l^\dagger) = 0, \quad k \neq l. \]\n\nWe first give a lemma without proof: if \( m \leq r \), then\n\n\[ \max_{P_1, P_2 \in \Omega'} \text{tr}(P_1^\dagger M P_2) = \sum_{i=1}^m \sigma_i(M), \quad \forall M \in \mathbb{C}^{r \times r}, \]\n\nwhere \( \Omega' = \{ P \in \mathbb{C}^{r \times m} : P^\dagger P = I_m \} \), and \( \sigma_i(M) \) denotes the \( i \)th largest singular value of \( M \).\n\nAssume that \( \lambda_0 \geq \lambda_1 \geq \ldots \geq \lambda_{r-1} > 0 \). Then for our case, the lemma above reduces to\n\n\[ \max_{P_1, P_2 \in \Omega'} \text{tr}(P_1^\dagger \Lambda P_2) = \sum_{i=0}^{m-1} \lambda_i. \]\n\nConsider the reduced eigenvalue decomposition of \( A_0 A_0^\dagger \):\n\n\[ A_0 A_0^\dagger = Q \Sigma Q^\dagger, \]\n\nwhere \( Q \in \Omega' \), and \( \Sigma \in \mathbb{C}^{m \times m} \) is a nonnegative diagonal matrix. Then since\n\n\[ A_0 A_0^\dagger \leq \sum_{k=0}^{r'-1} A_k A_k^\dagger = \Lambda, \\\n\Sigma = Q^\dagger A_0 A_0^\dagger Q \leq Q^\dagger \Lambda Q, \]\n\nwe have\n\n\[ s_0 = \text{tr}(A_0 A_0^\dagger) = \text{tr}(\Sigma) \leq \text{tr}(Q^\dagger \Lambda Q) \leq \max_{P_1, P_2 \in \Omega'} \text{tr}(P_1^\dagger \Lambda P_2) = \sum_{i=0}^{m-1} \lambda_i, \]\n\nthus \( g_0^* \leq \sum_{i=0}^{m-1} \lambda_i \). 
Recall that in (d)(i) we provided an example in which \( s_0 = \sum_{i=0}^{m-1} \lambda_i \), therefore we indeed have \( g_0^* = \sum_{i=0}^{m-1} \lambda_i \).\n\nLet \( p = \left\lfloor \frac{r}{m} \right\rfloor, q = r - mp \). Then similarly we can prove that\n\n\[ g_k^* = \sum_{i=0}^{m-1} \lambda_{mk+i}, \quad k = 0, 1, \ldots, p - 1, \]\n\nand if \( q > 0 \),\n\n\[ g_p^* = \sum_{i=0}^{q-1} \lambda_{mp+i}. \]\n\nNow using the previous result, we can always make\n\n\[ s_k^* = g_k^* = \sum_{i=0}^{m-1} \lambda_{mk+i}, \quad k = 0, 1, \ldots, p - 1; \quad s_p^* = g_p^* = \sum_{i=0}^{q-1} \lambda_{mp+i}, \quad \text{if } q > 0, \]\n\nand therefore the highest attainable purity of \( \sigma_{AB} \) is\n\n\[ \sum_{k=0}^{p-1} (s_k^*)^2 + \text{sign}(q)(s_p^*)^2 = \sum_{k=0}^{p-1} \left( \sum_{i=0}^{m-1} \lambda_{mk+i} \right)^2 + \text{sign}(q) \left( \sum_{i=0}^{q-1} \lambda_{mp+i} \right)^2, \]\n\nwhere \( \text{sign}(q) = 1 \), if \( q > 0 \); \( \text{sign}(q) = 0 \), if \( q = 0 \). Indeed the example in (d)(i),\n\n\[ s_k = \sum_{i=0}^{m-1} \lambda_{mk+i}, \quad |\Psi_k\rangle_{AB} = \frac{1}{\sqrt{s_k}} \sum_{i=0}^{m-1} \sqrt{\lambda_{mk+i}} |\phi_{mk+i}\rangle_A |i\rangle_B, \quad k = 0, 1, \ldots, p - 1, \]\n\n\[ s_p = \sum_{i=0}^{q-1} \lambda_{mp+i}, \quad |\Psi_p\rangle_{AB} = \frac{1}{\sqrt{s_p}} \sum_{i=0}^{q-1} \sqrt{\lambda_{mp+i}} |\phi_{mp+i}\rangle_A |i\rangle_B, \quad \text{if } q > 0, \]\n\nachieves this maximum of attainable purity.", "question": "### (d) **Bonus question:** Suppose in part (c) that we are somehow limited to ancilla systems of some fixed dimension \(m < \text{rank}(\rho_A)\). In this case it may not be possible to find a pure \(|\Psi\rangle_{AB}\) such that \(\text{Tr}_B |\Psi\rangle \langle \Psi|_{AB} = \rho_A\). Suppose then we look for a mixed state \(\sigma_{AB}\) such that \(\text{Tr}_B(\sigma_{AB}) = \rho_A\). 
How would you go about finding the minimum attainable rank of the joint state \\(\\sigma_{AB}\\)? What about the state with the highest attainable purity \\(\\text{Tr} \\sigma_{AB}^2\\)? The more thorough, generic and analytical your explanation, the better!" } ]
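Part (c)'s recipe (minimal ancilla dimension \( D = \text{rank}(\rho_A) \), purification \( \sum_i \sqrt{\lambda_i}|\phi_i\rangle_A|i\rangle_B \)) can be sketched in a few lines of numerical code. This is an illustration, not part of the original solution; it uses the rank-3 state from part (b) as a check, and exploits the fact that, writing \( |\Psi\rangle_{AB} \) as a \( d \times D \) matrix \( \Psi \), the partial trace \( \text{Tr}_B|\Psi\rangle\langle\Psi| \) equals \( \Psi\Psi^\dagger \).

```python
import numpy as np

def minimal_purification(rho, tol=1e-12):
    """Return Psi (d x D matrix, columns indexed by the ancilla basis)
    with Psi @ Psi^dagger = rho and D = rank(rho)."""
    lam, phi = np.linalg.eigh(rho)
    keep = lam > tol                 # keep strictly positive eigenvalues
    lam, phi = lam[keep], phi[:, keep]
    Psi = phi * np.sqrt(lam)         # column i is sqrt(lam_i)|phi_i>
    return Psi, Psi.shape[1]         # (purification, ancilla dimension D)

# rho_A from part (b): rank 3
rho = np.zeros((6, 6))
rho[1, 1] = 1 / 5
for i, j in [(2, 2), (2, 3), (3, 2), (3, 3), (4, 4), (4, 5), (5, 4), (5, 5)]:
    rho[i, j] = 1 / 5

Psi, D = minimal_purification(rho)
rho_back = Psi @ Psi.conj().T        # = Tr_B |Psi><Psi|
```

Here `rho_back` reproduces \( \rho_A \) and `D` equals 3, matching the answer of part (b).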
2016-03-11T00:00:00
Thomas Vidick, Andrea Coladangelo, Jalex Stark, Charles Xu
Caltech
Secret sharing among three people
Alice holds a secret bit \( b \in \{0, 1\} \), which she encodes in the three-qubit state \( |\Psi\rangle = \frac{1}{\sqrt{2}} \left( |0\rangle_A|0\rangle_B|0\rangle_C + (-1)^b|1\rangle_A|1\rangle_B|1\rangle_C \right) \), one qubit being held by each of the three parties \( A \), \( B \) and \( C \). (Due to Bolton Bailey)
[ { "context": "We compute the reduced density matrix \( \rho_A \):\n\n\[ |\Psi\rangle = \frac{1}{\sqrt{2}} \left( |0\rangle_A|0\rangle_B|0\rangle_C + (-1)^b|1\rangle_A|1\rangle_B|1\rangle_C \right) \]\n\n\[ |\Psi\rangle \langle \Psi| = \frac{1}{2} \left( |000\rangle \langle 000|_{ABC} + (-1)^b|000\rangle \langle 111|_{ABC} + (-1)^b|111\rangle \langle 000|_{ABC} + |111\rangle \langle 111|_{ABC} \right) \]\n\n\[ |\Psi\rangle \langle \Psi| = \frac{1}{2} \left( |0\rangle \langle 0|_A \otimes |00\rangle \langle 00|_{BC} + (-1)^b|0\rangle \langle 1|_A \otimes |00\rangle \langle 11|_{BC} + (-1)^b|1\rangle \langle 0|_A \otimes |11\rangle \langle 00|_{BC} + |1\rangle \langle 1|_A \otimes |11\rangle \langle 11|_{BC} \right) \]\n\n\[ \text{Tr}_{BC} |\Psi\rangle \langle \Psi| = \frac{1}{2} \left( |0\rangle \langle 0|_A + |1\rangle \langle 1|_A \right) = \frac{1}{2} I \]\n\nSo \( \rho_A \) is the maximally mixed state, no matter the value of \( b \). Thus, \( A \) alone gains no information about the secret. By the symmetry of the state \( |\Psi\rangle \), we see that the other density matrices are the same:\n\n\[ \text{Tr}_{AC} |\Psi\rangle \langle \Psi| = \text{Tr}_{AB} |\Psi\rangle \langle \Psi| = \frac{1}{2} I \]\n\n\[ \rho_A = \rho_B = \rho_C = \frac{1}{2} I \]\n\nSo none of the three can recover the secret on their own.", "question": "### (a) Compute the single-party reduced density matrices \( \rho_A, \rho_B, \rho_C \). Can any one of the three parties alone learn anything about the secret bit \( b \)?" 
}, { "context": "We compute the two-party reduced density matrix \( \rho_{BC} \), using the expression from part (a):\n\n\[ |\Psi\rangle \langle \Psi| = \frac{1}{2} \left( |0\rangle \langle 0|_A \otimes |00\rangle \langle 00|_{BC} + (-1)^b|0\rangle \langle 1|_A \otimes |00\rangle \langle 11|_{BC} + (-1)^b|1\rangle \langle 0|_A \otimes |11\rangle \langle 00|_{BC} + |1\rangle \langle 1|_A \otimes |11\rangle \langle 11|_{BC} \right) \]\n\nSo:\n\n\[ \text{Tr}_A |\Psi\rangle \langle \Psi| = \frac{1}{2} \left( |00\rangle \langle 00| + |11\rangle \langle 11| \right) \]\n\nSo \( \rho_{BC} \) is the same mixed state, no matter the value of \( b \). Thus, \( B \) and \( C \) together gain no information about the secret. By the symmetry of the state \( |\Psi\rangle \), we see that the other two-party density matrices are again the same:\n\n\[ \text{Tr}_B |\Psi\rangle \langle \Psi| = \text{Tr}_C |\Psi\rangle \langle \Psi| = \frac{1}{2} \left( |00\rangle \langle 00| + |11\rangle \langle 11| \right) \]\n\nSo these pairs also do not have any information about the secret.", "question": "### (b) Compute the two-party reduced density matrices. Can any two of the three parties together learn anything about the secret bit \( b \)?" }, { "context": "(c) Consider the following LOCC protocol. Alice, Bob and Charlie all apply local Hadamard transformations to their qubit. 
This yields the state:\n\n\[ (H \otimes H \otimes H) \frac{1}{\sqrt{2}} \left( |000\rangle + (-1)^b|111\rangle \right) = \frac{1}{4} \left( (|0\rangle + |1\rangle) \otimes (|0\rangle + |1\rangle) \otimes (|0\rangle + |1\rangle) \right) \]\n\n\[ + (-1)^b(|0\rangle - |1\rangle) \otimes (|0\rangle - |1\rangle) \otimes (|0\rangle - |1\rangle) \]\n\n\[ = \frac{1}{4} \left( (|000\rangle + |001\rangle + |010\rangle + |011\rangle + |100\rangle + |101\rangle + |110\rangle + |111\rangle) \right) \]\n\n\[ + (-1)^b(|000\rangle - |001\rangle - |010\rangle + |011\rangle - |100\rangle + |101\rangle + |110\rangle - |111\rangle) \]\n\nSo if \( b = 0 \), the state is now:\n\n\[ \frac{1}{2} (|000\rangle + |011\rangle + |101\rangle + |110\rangle) \]\n\nAnd if \( b = 1 \), the state is now:\n\n\[ \frac{1}{2} (|001\rangle + |010\rangle + |100\rangle + |111\rangle) \]\n\nNow, Alice, Bob, and Charlie measure in the standard basis. We note that these three measurements are equivalent to a measurement in the computational basis of the product space. Thus, if \( b = 0 \), the only possible measurement outcomes for Alice, Bob, and Charlie are (0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0). If \( b = 1 \), the only possible measurement outcomes for Alice, Bob, and Charlie are (0, 0, 1), (0, 1, 0), (1, 0, 0), (1, 1, 1). Thus, to determine \( b \), Bob and Charlie send the bits from their measurements to Alice and Alice computes the parity of the three measurements, which is then equal to \( b \).", "question": "### (c) Give a protocol using only local operations and classical communication (LOCC) by which the three parties can jointly recover the secret bit \( b \)." } ]
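The LOCC protocol above can be checked directly: apply \( H^{\otimes 3} \) to \( \frac{1}{\sqrt{2}}(|000\rangle + (-1)^b|111\rangle) \) and verify that every computational-basis outcome with nonzero probability has parity equal to \( b \). A small sketch (an illustration, not part of the original solution):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
H3 = np.kron(np.kron(H, H), H)   # local Hadamards on all three qubits

def outcome_parities(b):
    """Parities of all measurement outcomes with nonzero probability."""
    psi = np.zeros(8)
    psi[0], psi[7] = 1 / np.sqrt(2), (-1) ** b / np.sqrt(2)
    probs = (H3 @ psi) ** 2
    return {bin(k).count('1') % 2 for k in range(8) if probs[k] > 1e-12}

parities_b0 = outcome_parities(0)   # expect {0}: only even-parity outcomes
parities_b1 = outcome_parities(1)   # expect {1}: only odd-parity outcomes
```

So the parity of the three announced bits always equals the secret bit \( b \), exactly as in the protocol of part (c).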
2016-03-11T00:00:00
Thomas Vidick, Andrea Coladangelo, Jalex Stark, Charles Xu
Caltech
Nonlocal boxes
Given an integer \(n\) and finite sets \(X_1, \ldots, X_n\) ("inputs") and \(A_1, \ldots, A_n\) ("outputs"), an \(n\)-partite non-local box is a family of distributions \(\{p(\cdot | x_1, \ldots, x_n)\}\), \(x_1, \ldots, x_n \in X_1 \times \cdots \times X_n\), each defined on \(A_1 \times \cdots \times A_n\), i.e. \[\sum_{a_i \in A_i} p(a_1, \ldots, a_n | x_1, \ldots, x_n) = 1 \quad \forall x_i, \quad \text{and} \quad p(a_1, \ldots, a_n | x_1, \ldots, x_n) \geq 0 \quad \forall x_i, a_i.\] Intuitively, a non-local box is called non-signaling if the \(i\)-th output does not provide information about the \(j\)-th input, for \(i \neq j\). More formally, it is required that for each \(i \in \{1, \ldots, n\}\) and all input tuples \((x_1, \ldots, x_n)\) and \((x'_1, \ldots, x'_n)\) such that \( x_i = x'_i \), \[\forall a_i \in A_i, \quad \sum_{a_j:j \neq i} p(a_1, \ldots, a_n|x_1, \ldots, x_n) = \sum_{a_j:j \neq i} p(a_1, \ldots, a_n|x'_1, \ldots, x'_n).\] Similarly, the condition is required when taking marginals on more than one location, e.g. the marginal on any pair \((a_i, a_j)\) should be independent of questions \(x_k\) for \(k \not\in \{i, j\}\). [This additional condition was missing in previous versions of the problem. It is needed for questions (e), (f) and (g) only. If you did the questions without the condition, explain your answer, and we will not take out any points.] This condition implies that the marginal distribution on any single coordinate \(i\) is a well-defined distribution which only depends on the input \(x_i\) associated with that coordinate. Let's first investigate some examples of bipartite (\(n = 2\)) nonlocal boxes. Here are four of them. 
In each case \(X_i = A_i = \{0, 1\}\), and any un-specified probability is set to 0 by default: | (U) | \( p(a, b|x, y) = 1/4 \) | \(\forall (x, y, a, b)\) | |------|--------------------------|-------------------------| | (PR) | \( p(0, 0|x, y) = p(1, 1|x, y) = 1/2 \) if \((x, y) \neq (1, 1)\) | | | \( p(1, 0|x, y) = p(0, 1|x, y) = 1/2 \) if \((x, y) = (1, 1)\) | | (CH) | \( p(0, 0|x, y) = p(1, 1|x, y) = \frac{1}{2} \cos^2 \frac{\pi}{8} \) if \((x, y) = (0, 0)\) | | | \( p(1, 0|x, y) = p(0, 1|x, y) = \frac{1}{2} \sin^2 \frac{\pi}{8} \) if \((x, y) = (0, 0)\) | | | \( p(0, 0|x, y) = p(1, 1|x, y) = \frac{1}{2} \sin^2 \frac{\pi}{8} \) if \((x, y) = (1, 1)\) | | | \( p(1, 0|x, y) = p(0, 1|x, y) = \frac{1}{2} \cos^2 \frac{\pi}{8} \) if \((x, y) = (1, 1)\) | | (SIG)| \( p(y, x|x, y) = 1 \) | \(\forall (x, y)\) | (Due to De Huang)
[ { "context": "(U) For any input \\((x, y) \\in A_1 \\times A_2\\), we have\n\n\\[ \\sum_{a, b \\in \\{0, 1\\}} p(a, b | x, y) = 4 \\times \\frac{1}{4} = 1. \\]\n\n(PR) If \\((x, y) \\neq (1, 1)\\),\n\n\\[ \\sum_{a, b \\in \\{0, 1\\}} p(a, b | x, y) = p(0, 0 | x, y) + p(1, 1 | x, y) = \\frac{1}{2} + \\frac{1}{2} = 1, \\]\n\nif \\((x, y) = (1, 1)\\),\n\n\\[ \\sum_{a, b \\in \\{0, 1\\}} p(a, b | x, y) = p(1, 0 | x, y) + p(0, 1 | x, y) = \\frac{1}{2} + \\frac{1}{2} = 1. \\]\n\n(CH) If \\((x, y) \\neq (1, 1)\\),\n\n\\[ \\sum_{a, b \\in \\{0, 1\\}} p(a, b | x, y) = \\frac{1}{2} \\cos^2 \\frac{\\pi}{8} + \\frac{1}{2} \\cos^2 \\frac{\\pi}{8} + \\frac{1}{2} \\sin^2 \\frac{\\pi}{8} + \\frac{1}{2} \\sin^2 \\frac{\\pi}{8} = 1, \\]\n\nif \\((x, y) = (1, 1)\\),\n\n\\[ \\sum_{a, b \\in \\{0, 1\\}} p(a, b | x, y) = \\frac{1}{2} \\sin^2 \\frac{\\pi}{8} + \\frac{1}{2} \\sin^2 \\frac{\\pi}{8} + \\frac{1}{2} \\cos^2 \\frac{\\pi}{8} + \\frac{1}{2} \\cos^2 \\frac{\\pi}{8} = 1. \\]\n\n(SIG) For any input \\((x, y) \\in A_1 \\times A_2\\), we have\n\n\\[ \\sum_{a,b \\in \\{0,1\\}} p(a, b|x, y) = p(y, x|x, y) = 1. \\]", "question": "### (a) Verify that each of these indeed specifies a nonlocal box, i.e. that the probabilities add up to 1 when they should." }, { "context": "Let\n\n\\[ p(a, *|x, y) = \\sum_{b \\in \\{0,1\\}} p(a, b|x, y) \\]\n\ndenote the marginal probability of the first output being \\(a\\) given the input \\((x, y)\\), and\n\n\\[ p(*, b|x, y) = \\sum_{a \\in \\{0,1\\}} p(a, b|x, y) \\]\n\nthe marginal probability of the second output being \\(b\\) given the input \\((x, y)\\).\n\n(U) It's non-signaling, because\n\n\\[ p(a, *|x, 0) = p(a, *|x, 1) = \\frac{1}{2} \\quad \\forall x \\in \\{0, 1\\}, \\quad \\forall a \\in \\{0, 1\\}, \\]\n\n\\[ p(*, b|0, y) = p(*, b|1, y) = \\frac{1}{2} \\quad \\forall y \\in \\{0, 1\\}, \\forall b \\in \\{0, 1\\}. 
\]\n\n(PR) It's non-signaling, because\n\n\[ p(a, *|x, y) = p(a, a \oplus (x \land y)|x, y) = \frac{1}{2} \quad \forall x, y \in \{0, 1\}, \forall a \in \{0, 1\}, \]\n\n\[ p(*, b|x, y) = p(b \oplus (x \land y), b|x, y) = \frac{1}{2} \quad \forall x, y \in \{0, 1\}, \forall b \in \{0, 1\}. \]\n\n(CH) It's non-signaling, because given any input \((x, y)\), we always have\n\n\[ p(0, 0|x, y) = p(1, 1|x, y) = \frac{1 - (x \land y)}{2} \cos^2 \frac{\pi}{8} + \frac{x \land y}{2} \sin^2 \frac{\pi}{8}, \]\n\n\[ p(1, 0|x, y) = p(0, 1|x, y) = \frac{x \land y}{2} \cos^2 \frac{\pi}{8} + \frac{1 - (x \land y)}{2} \sin^2 \frac{\pi}{8}, \]\n\nand thus\n\n\[ p(a, *|x, y) = p(a, 0|x, y) + p(a, 1|x, y) = \frac{1}{2} \quad \forall x, y \in \{0, 1\}, \forall a \in \{0, 1\}, \]\n\n\[ p(*, b|x, y) = p(0, b|x, y) + p(1, b|x, y) = \frac{1}{2} \quad \forall x, y \in \{0, 1\}, \forall b \in \{0, 1\}. \]\n\n(SIG) It's not non-signaling, because\n\n\[ p(1, *|0, 0) = 0, \quad p(1, *|0, 1) = p(1, 0|0, 1) = 1, \]\n\n\[ p(1, *|0, 0) \neq p(1, *|0, 1). \]", "question": "### (b) Among the four boxes, which are non-signaling?" }, { "context": "We always have\n\n\[ p_{win} = \sum_{x,y \in \{0,1\}} p(x, y) \left( \sum_{a \oplus b = xy} p(a, b | x, y) \right) \]\n\n\[ = \frac{1}{4} \left( p(0,0|0,0) + p(1,1|0,0) \right) + \frac{1}{4} \left( p(0,0|0,1) + p(1,1|0,1) \right) \]\n\n\[ + \frac{1}{4} \left( p(0,0|1,0) + p(1,1|1,0) \right) + \frac{1}{4} \left( p(0,1|1,1) + p(1,0|1,1) \right). \]\n\nWe just need to specify each probability for each case.\n\n(U) Everything is \(\frac{1}{4}\), thus\n\n\[ p_{win} = \frac{1}{4} \times \left( \frac{1}{4} + \frac{1}{4} \right) \times 4 = \frac{1}{2}. \]\n\n(PR)\n\n\[ p_{win} = \frac{1}{4} \times \left( \frac{1}{2} + \frac{1}{2} \right) \times 4 = 1. 
\]\n\n(CH)\n\n\[ p_{win} = \frac{1}{4} \times \left( \frac{1}{2} \cos^2 \frac{\pi}{8} + \frac{1}{2} \cos^2 \frac{\pi}{8} \right) \times 4 = \cos^2 \frac{\pi}{8} = \frac{1}{2} + \frac{\sqrt{2}}{4}. \]\n\n(SIG) The only winning input is \((x, y) = (0, 0)\), so\n\n\[ p_{win} = \frac{1}{4} \, p(0,0|0,0) = \frac{1}{4}. \]", "question": "### (c) For each of the boxes, evaluate its success probability in the CHSH game. That is, assuming Alice and Bob are able to generate answers distributed according to \( p(a, b|x, y) \) whenever their respective inputs are \( x \) and \( y \), what is the probability that they produce valid answers in the game (when the questions are chosen uniformly at random, as usual)?" }, { "context": "(U) Can. Let\n\n\[ \rho_{AB} = \frac{I_2}{2} \otimes \frac{I_2}{2}, \]\n\n\[ A^0_0 = B^0_0 = |0\rangle \langle 0|, \]\n\n\[ A^0_1 = B^0_1 = |1\rangle \langle 1|, \]\n\n\[ A^1_0 = B^1_0 = |+\rangle \langle +|, \]\n\n\[ A^1_1 = B^1_1 = |-\rangle \langle -|. \]\n\nIt's easy to check that\n\n\[ tr \left( (A^a_x \otimes B^b_y) \rho_{AB} \right) = \frac{1}{4} \quad \forall x, y, a, b \in \{0, 1\}. \]\n\n(PR) Cannot. We will prove this by contradiction. For any POVMs \(\{A^a_x\}_{a}, x \in \{0, 1\}, \{B^b_y\}_{b}, y \in \{0, 1\}\), and any density matrix \(\rho_{AB}\), consider the following formula\n\n\[ (A^0_0 - A^1_0) \otimes (B^0_0 - B^1_0) + (A^0_0 - A^1_0) \otimes (B^0_1 - B^1_1) \]\n\n\[ + (A^0_1 - A^1_1) \otimes (B^0_0 - B^1_0) - (A^0_1 - A^1_1) \otimes (B^0_1 - B^1_1) \]\n\n\[ = M_0 N_0 + M_0 N_1 + M_1 N_0 - M_1 N_1, \]\n\nwhere\n\n\[ M_i = (A^0_i - A^1_i) \otimes I_2, \quad N_i = I_2 \otimes (B^0_i - B^1_i), \quad i \in \{0, 1\}. \]\n\nIt's easy to check that\n\n\[ -I_4 \leq M_i \leq I_4, \quad -I_4 \leq N_i \leq I_4, \quad i \in \{0, 1\}, \]\n\n\[ M_i N_j = N_j M_i, \quad i, j \in \{0, 1\}. 
\\]\n\nNow define\n\n\\[ \\langle XY \\rangle = tr(XY \\rho_{AB}), \\quad \\langle X^2 \\rangle = \\langle XX \\rangle, \\]\n\nthen we have\n\n\\[ \\langle M_i N_j \\rangle = \\langle N_j M_i \\rangle, \\quad i, j \\in \\{0, 1\\}, \\]\n\n\\[ \\left( \\left( \\sum_{i \\in \\{0,1\\}} \\alpha_i M_i + \\sum_{j \\in \\{0,1\\}} \\beta_j N_j \\right)^2 \\right) \\geq 0, \\quad \\forall \\alpha_0, \\alpha_1, \\beta_0, \\beta_1. \\]\n\nThen by direct calculation, we have\n\n\\[ tr((M_0 N_0 + M_0 N_1 + M_1 N_0 - M_1 N_1) \\rho_{AB}) = \\langle M_0 N_0 \\rangle + \\langle M_0 N_1 \\rangle + \\langle M_1 N_0 \\rangle - \\langle M_1 N_1 \\rangle \\]\n\n\\[ = \\frac{1}{\\sqrt{2}} \\left( \\langle M_0^2 \\rangle + \\langle M_1^2 \\rangle + \\langle N_0^2 \\rangle + \\langle N_1^2 \\rangle \\right) \\]\n\n\\[ - \\frac{\\sqrt{2} - 1}{8} \\left( ((\\sqrt{2} + 1)(M_0 - N_0) + M_1 - N_1)^2 \\right) - \\frac{\\sqrt{2} - 1}{8} \\left( ((\\sqrt{2} + 1)(M_0 - N_1) - M_1 - N_0)^2 \\right) \\]\n\n\\[ - \\frac{\\sqrt{2} - 1}{8} \\left( ((\\sqrt{2} + 1)(M_1 - N_0) + M_0 + N_1)^2 \\right) - \\frac{\\sqrt{2} - 1}{8} \\left( ((\\sqrt{2} + 1)(M_1 + N_1) - M_0 - N_0)^2 \\right) \\]\n\n\\[ \\leq \\frac{1}{\\sqrt{2}} \\left( \\langle M_0^2 \\rangle + \\langle M_1^2 \\rangle + \\langle N_0^2 \\rangle + \\langle N_1^2 \\rangle \\right) \\]\n\n\\[ \\leq \\frac{1}{\\sqrt{2}} \\left( \\langle I_4 \\rangle + \\langle I_4 \\rangle + \\langle I_4 \\rangle + \\langle I_4 \\rangle \\right) \\]\n\n\\[ = 2 \\sqrt{2}. 
\\]\n\nHowever, if there exist such a quantum strategy that can implement (PR) box, that it's easy to check that\n\n\\[ tr((M_0 N_0 + M_0 N_1 + M_1 N_0 - M_1 N_1) \\rho_{AB}) = tr((A_0^0 - A_1^0) \\otimes (B_0^0 - B_1^0) \\rho_{AB}) + tr((A_0^0 - A_1^0) \\otimes (B_0^1 - B^1_1) \\rho_{AB}) \\]\n\n\\[ + tr((A_0^1 - A_1^1) \\otimes (B_0^0 - B_1^0) \\rho_{AB}) - tr((A_0^1 - A_1^1) \\otimes (B_0^1 - B_1^1) \\rho_{AB}) \\]\n\n\\[ = 4, \\]\n\nwhich violates the upper bound \\(2 \\sqrt{2}\\) we obtain above. This contradiction implies that we can not use quantum strategy to implement (PR) box.\n\n(CH) Can. Let\n\n\\[ \\rho_{AB} = |EPR \\rangle \\langle EPR|_{AB} \\]\n\n\\[ = \\frac{1}{2} (|0 \\rangle |0 \\rangle + |1 \\rangle |1 \\rangle) (\\langle 0| \\langle 0| + \\langle 1| \\langle 1|)_{AB} \\]\n\n\\[ = \\frac{1}{2} (|+ \\rangle |+ \\rangle + |− \\rangle |− \\rangle) ((\\langle +| \\langle +| + \\langle −| \\langle −|)_{AB}, \\]\n\n\\[ A_0^0 = |0 \\rangle \\langle 0|, \\quad A_0^1 = |1 \\rangle \\langle 1|, \\quad A_1^0 = |+ \\rangle \\langle +|, \\quad A_1^1 = |− \\rangle \\langle −|, \\]\n\n\\[ B_0^0 = |\\phi_0\\rangle \\langle \\phi_0|, \\quad B_0^1 = |\\phi_1\\rangle \\langle \\phi_1|, \\quad B_1^0 = |\\psi_0\\rangle \\langle \\psi_0|, \\quad B_1^1 = |\\psi_1\\rangle \\langle \\psi_1|, \\]\n\nwhere\n\n\\[ |\\phi_0\\rangle = \\cos \\frac{\\pi}{8} |0\\rangle + \\sin \\frac{\\pi}{8} |1\\rangle, \\quad |\\phi_1\\rangle = -\\sin \\frac{\\pi}{8} |0\\rangle + \\cos \\frac{\\pi}{8} |1\\rangle, \\]\n\n\\[ |\\psi_0\\rangle = \\cos \\frac{\\pi}{8} |0\\rangle - \\sin \\frac{\\pi}{8} |1\\rangle, \\quad |\\psi_1\\rangle = \\sin \\frac{\\pi}{8} |0\\rangle + \\cos \\frac{\\pi}{8} |1\\rangle. 
\]\n\nIt's easy to check that\n\n\[ \text{tr}((A_x^a \otimes B_y^b) \rho_{AB}) = \frac{1}{2} \cos^2 \frac{\pi}{8}, \quad \text{if} \ a \oplus b = x \land y, \]\n\n\[ \text{tr}((A_x^a \otimes B_y^b) \rho_{AB}) = \frac{1}{2} \sin^2 \frac{\pi}{8}, \quad \text{if} \ a \oplus b \neq x \land y. \]\n\n(SIG) Cannot. We will show this by contradiction. Assume that there is such a strategy; since the box outputs \( (a, b) = (y, x) \) with certainty, then\n\n\[ \text{tr}((A_0^0 \otimes B_0^0) \rho_{AB}) = \text{tr}((A_1^1 \otimes B_1^1) \rho_{AB}) = 1, \]\n\n\[ \text{tr}((A_0^1 \otimes B_1^0) \rho_{AB}) = \text{tr}((A_1^0 \otimes B_0^1) \rho_{AB}) = 1. \]\n\nIn my HW3 problem 2, I have shown a lemma that if \( X \geq Y, \ Z \geq 0 \), then \( \text{tr}(XZ) \geq \text{tr}(YZ) \). We will use this lemma here again. Since \( \{A_x^a\}_a \) and \( \{B_y^b\}_b \) are POVMs for all \( x, y \), we have\n\n\[ I_2 \geq B_0^0 \geq 0, \quad I_2 \geq B_1^0 \geq 0, \]\n\n\[ A_0^0 \geq 0, \quad A_0^1 \geq 0, \quad A_0^0 + A_0^1 = I_2, \]\n\n\[ \implies A_0^0 \otimes I_2 \geq A_0^0 \otimes B_0^0 \geq 0, \quad A_0^1 \otimes I_2 \geq A_0^1 \otimes B_1^0 \geq 0, \]\n\n\[ \implies I_4 = I_2 \otimes I_2 = A_0^0 \otimes I_2 + A_0^1 \otimes I_2 \geq A_0^0 \otimes B_0^0 + A_0^1 \otimes B_1^0 \geq 0, \]\n\nthen using the lemma, we have\n\n\[ 1 = \text{tr}(I_4 \rho_{AB}) \geq \text{tr}((A_0^0 \otimes B_0^0 + A_0^1 \otimes B_1^0) \rho_{AB}) = \text{tr}((A_0^0 \otimes B_0^0) \rho_{AB}) + \text{tr}((A_0^1 \otimes B_1^0) \rho_{AB}) = 2. \]\n\nThis contradiction implies that we cannot find such a quantum strategy.", "question": "### (d) For each of the four boxes, state which can be implemented using quantum mechanics. 
If it can, provide a strategy: a bipartite state \\(\\rho_{AB}\\) and POVM \\(\\{A^a_x\\}_a\\) and \\(\\{B^b_y\\}_b\\), for all \\( x \\) and \\( y \\), such that \\(\\text{Tr}((A^a_x \\otimes B^b_y) \\rho_{AB}) = p(a, b|x, y)\\) for all \\((a, b, x, y)\\). If it cannot, provide an argument justifying your answer." }, { "context": "Consider an non-signaling extension \\(\\{q(\\cdot, \\cdot, \\cdot \\mid x, y, z)\\}\\) of the (PR) box. Using non-signaling condition, we have\n\n\\[ \\sum_{a, b \\in \\{0, 1\\}} q(a, b, c \\mid x, y, z) = \\sum_{a, b \\in \\{0, 1\\}} q(a, b, c \\mid x', y', z), \\quad \\forall c, z, \\quad \\forall x, y, x', y' \\in \\{0, 1\\}, \\]\n\ntherefore we can define\n\n\\[ p^c(z) = \\sum_{a, b \\in \\{0, 1\\}} q(a, b, c \\mid 0, 0, z) \\geq 0, \\quad \\forall c, z, \\]\n\nthen we have\n\n\\[ p^c(z) = \\sum_{a, b \\in \\{0, 1\\}} q(a, b, c \\mid x, y, z), \\quad \\forall x, y \\in \\{0, 1\\}. \\]\n\nAlso we can check that for all \\( z \\),\n\n\\[ \\sum_{c} p^{b}(c, z) = \\sum_{a, b \\in \\{0, 1\\}} \\sum_{c} q(a, b, c\\mid 0, 0, z) = \\sum_{a, b \\in \\{0, 1\\}} p(a, b\\mid 0, 0) = 1, \\]\n\nthus \\(\\{p^{b}(c\\mid z)\\}_{z}\\) is a family of well defined distributions. Now using the properties of (PR) box, we have\n\n\\[ \\sum_{c} q(0, 1, c\\mid x, y, z) = p(0, 1\\mid x, y) = 0, \\ \\forall z, \\quad \\text{if} \\ (x, y) \\neq (1, 1), \\]\n\n\\[ \\sum_{c} q(1, 0, c\\mid x, y, z) = p(1, 0\\mid x, y) = 0, \\ \\forall z, \\quad \\text{if} \\ (x, y) \\neq (1, 1), \\]\n\n\\[ \\sum_{c} q(0, 0, c\\mid x, y, z) = p(0, 0\\mid x, y) = 0, \\ \\forall z, \\quad \\text{if} \\ (x, y) = (1, 1), \\]\n\n\\[ \\sum_{c} q(1, 1, c\\mid x, y, z) = p(1, 1\\mid x, y) = 0, \\ \\forall z, \\quad \\text{if} \\ (x, y) = (1, 1). 
\\]\n\nSince all probabilities are non-negative, we have\n\n\\[ q(0, 1, c\\mid x, y, z) = 0 = p(0, 1\\mid x, y)p^{b}(c\\mid z), \\ \\forall c, z, \\quad \\text{if} \\ (x, y) \\neq (1, 1), \\]\n\n\\[ q(1, 0, c\\mid x, y, z) = 0 = p(1, 0\\mid x, y)p^{b}(c\\mid z), \\ \\forall c, z, \\quad \\text{if} \\ (x, y) \\neq (1, 1), \\]\n\n\\[ q(0, 0, c\\mid x, y, z) = 0 = p(0, 0\\mid x, y)p^{b}(c\\mid z), \\ \\forall c, z, \\quad \\text{if} \\ (x, y) = (1, 1), \\]\n\n\\[ q(1, 1, c\\mid x, y, z) = 0 = p(1, 1\\mid x, y)p^{b}(c\\mid z), \\ \\forall c, z, \\quad \\text{if} \\ (x, y) = (1, 1). \\]\n\nThen, using the non-signaling conditions with two of the input-output pairs fixed, we have\n\n\\[ q(0, 0, c\\mid 0, 0, z) = q(0, 0, c\\mid 0, 1, z) = q(1, 0, c\\mid 1, 1, z) = q(1, 1, c\\mid 1, 0, z) = q(1, 1, c\\mid 0, 0, z), \\ \\forall c, z, \\]\n\nand since we also have\n\n\\[ \\sum_{a, b \\in \\{0, 1\\}} q(a, b, c\\mid 0, 0, z) = q(0, 0, c\\mid 0, 0, z) + q(1, 1, c\\mid 0, 0, z) = p^{b}(c\\mid z), \\ \\forall c, z, \\]\n\nthus\n\n\\[ q(0, 0, c\\mid 0, 0, z) = \\frac{1}{2} p^{b}(c\\mid z) = p(0, 0\\mid 0, 0)p^{b}(c\\mid z), \\ \\forall c, z, \\]\n\n\\[ q(1, 1, c\\mid 0, 0, z) = \\frac{1}{2} p^{b}(c\\mid z) = p(1, 1\\mid 0, 0)p^{b}(c\\mid z), \\ \\forall c, z. \\]\n\nSimilarly we can also prove that\n\n\\[ q(0, 0, c\\mid x, y, z) = \\frac{1}{2} p^{b}(c\\mid z) = p(0, 0\\mid x, y)p^{b}(c\\mid z), \\ \\forall c, z, \\quad \\text{if} \\ (x, y) \\neq (1, 1), \\]\n\n\\[ q(1, 1, c\\mid x, y, z) = \\frac{1}{2} p^{b}(c\\mid z) = p(1, 1\\mid x, y)p^{b}(c\\mid z), \\ \\forall c, z, \\quad \\text{if} \\ (x, y) \\neq (1, 1), \\]\n\n\\[ q(0, 1, c\\mid x, y, z) = \\frac{1}{2} p^{b}(c\\mid z) = p(0, 1\\mid x, y)p^{b}(c\\mid z), \\ \\forall c, z, \\quad \\text{if} \\ (x, y) = (1, 1), \\]\n\n\\[ q(1, 0, c\\mid x, y, z) = \\frac{1}{2} p^{b}(c\\mid z) = p(1, 0\\mid x, y)p^{b}(c\\mid z), \\ \\forall c, z, \\quad \\text{if} \\ (x, y) = (1, 1). 
\\]\n\nTherefore the extension \\(\\{q(\\cdot, \\cdot, \\cdot\\mid x, y, z)\\}\\) is in product form.", "question": "### (e) Show that any non-signaling tripartite extension of the (PR) box must have a product form, i.e. \\( q(a, b, c|x, y, z) = p(a, b|x, y)p'(c|z) \\) for some family of distributions \\(\\{p'(\\cdot|z)\\}\\)." }, { "context": "Let \\( c \\in X_3 = \\{0, 1\\} \\), \\( z \\in A_3 = \\{0\\} \\). Let\n\n\\[ \\begin{cases} q(0, 0, 0\\mid x, y, z) = q(1, 1, 0\\mid x, y, z) = \\frac{1}{4}, \\\\ q(0, 0, 1\\mid x, y, z) = q(1, 1, 1\\mid x, y, z) = \\frac{1}{2} \\cos \\frac{3 \\pi}{8} \\sin \\frac{3 \\pi}{8}, & \\text{if } (x, y) \\neq (1, 1), \\\\ q(1, 0, 1\\mid x, y, z) = q(0, 1, 1\\mid x, y, z) = \\frac{1}{4} \\sin^2 \\frac{\\pi}{8}, \\\\ q(1, 0, 0\\mid x, y, z) = q(0, 1, 0\\mid x, y, z) = \\frac{1}{4} \\sin^2 \\frac{\\pi}{8}, \\\\ q(0, 0, 0\\mid x, y, z) = q(1, 1, 0\\mid x, y, z) = \\frac{1}{4} \\sin^2 \\frac{\\pi}{8}, \\\\ q(0, 0, 1\\mid x, y, z) = q(1, 1, 1\\mid x, y, z) = \\frac{1}{4} \\sin^2 \\frac{\\pi}{8}, & \\text{if } (x, y) = (1, 1), \\\\ q(1, 0, 1\\mid x, y, z) = q(0, 1, 1\\mid x, y, z) = \\frac{1}{2} \\cos \\frac{3 \\pi}{8} \\sin \\frac{3 \\pi}{8}, \\\\ q(1, 0, 0\\mid x, y, z) = q(0, 1, 0\\mid x, y, z) = \\frac{1}{4}. \\\\ \\end{cases} \\]\n\nIt's easy to check that this \\(\\{q(\\cdot, \\cdot, \\cdot \\mid x, y, z)\\}\\) defines a valid tripartite box. 
Also, using the fact that\n\n\\[ \\frac{1}{2} + \\cos \\frac{3 \\pi}{8} \\sin \\frac{3 \\pi}{8} = \\cos^2 \\frac{\\pi}{8}, \\]\n\nand noticing that \\( z \\) is always 0, we can check that\n\n\\[ \\sum_{b, c} q(a, b, c\\mid x, y, 0) = \\sum_{b, c} q(a, b, c\\mid x, y', 0), \\quad \\forall a, x, y, y', \\]\n\n\\[ \\sum_{a, c} q(a, b, c\\mid x, y, 0) = \\sum_{a, c} q(a, b, c\\mid x', y, 0), \\quad \\forall b, y, x, x', \\]\n\n\\[ \\sum_{a, b} q(a, b, c\\mid x, y, 0) = \\sum_{a, b} q(a, b, c\\mid x', y', 0), \\quad \\forall c, x, y, x', y', \\]\n\n\\[ \\sum_{b} q(a, b, c\\mid x, y, 0) = \\sum_{b} q(a, b, c\\mid x, y', 0), \\quad \\forall a, c, x, y, y', \\]\n\n\\[ \\sum_{a} q(a, b, c\\mid x, y, 0) = \\sum_{a} q(a, b, c\\mid x', y, 0), \\quad \\forall b, c, y, x, x', \\]\n\ntherefore it's non-signaling. Moreover, if \\( (x, y) \\neq (1, 1) \\) we have\n\n\\[ \\sum_{c \\in \\{0, 1\\}} q(0, 0, c\\mid x, y, z) = \\frac{1}{2} \\cos^2 \\frac{\\pi}{8} = p(0, 0\\mid x, y), \\quad \\sum_{c \\in \\{0, 1\\}} q(1, 1, c\\mid x, y, z) = \\frac{1}{2} \\cos^2 \\frac{\\pi}{8} = p(1, 1\\mid x, y), \\]\n\n\\[ \\sum_{c \\in \\{0, 1\\}} q(1, 0, c\\mid x, y, z) = \\frac{1}{2} \\sin^2 \\frac{\\pi}{8} = p(1, 0\\mid x, y), \\quad \\sum_{c \\in \\{0, 1\\}} q(0, 1, c\\mid x, y, z) = \\frac{1}{2} \\sin^2 \\frac{\\pi}{8} = p(0, 1\\mid x, y), \\]\n\nif \\( (x, y) = (1, 1) \\) we have\n\n\\[ \\sum_{c \\in \\{0, 1\\}} q(0, 0, c\\mid x, y, z) = \\frac{1}{2} \\sin^2 \\frac{\\pi}{8} = p(0, 0\\mid x, y), \\quad \\sum_{c \\in \\{0, 1\\}} q(1, 1, c\\mid x, y, z) = \\frac{1}{2} \\sin^2 \\frac{\\pi}{8} = p(1, 1\\mid x, y), \\]\n\n\\[ \\sum_{c \\in \\{0,1\\}} q(1,0,c\\mid x,y,z) = \\frac{1}{2} \\cos^2 \\frac{\\pi}{8} = p(1,0\\mid x,y), \\quad \\sum_{c \\in \\{0,1\\}} q(0,1,c\\mid x,y,z) = \\frac{1}{2} \\cos^2 \\frac{\\pi}{8} = p(0,1\\mid x,y). 
\\]\n\nTherefore this \\(\\{q(\\cdot, \\cdot, \\cdot \\mid x, y, z)\\}\\) is a non-signaling extension of the (CH) box.\n\nHowever, if this \\(\\{q(\\cdot, \\cdot, \\cdot \\mid x, y, z)\\}\\) had a product form, then we would have\n\n\\[ q(0,0,0\\mid 0,0,z) = p(0,0\\mid 0,0)p'(0\\mid z) \\implies p'(0\\mid z) = \\frac{q(0,0,0\\mid 0,0,z)}{p(0,0\\mid 0,0)} = \\frac{1}{2 \\cos^2 \\frac{\\pi}{8}}, \\]\n\n\\[ q(0,0,0\\mid 1,1,z) = p(0,0\\mid 1,1)p'(0\\mid z) \\implies p'(0\\mid z) = \\frac{q(0,0,0\\mid 1,1,z)}{p(0,0\\mid 1,1)} = \\frac{1}{2}. \\]\n\nThis contradiction implies that this \\(\\{q(\\cdot, \\cdot, \\cdot \\mid x, y, z)\\}\\) is a non-product, non-signaling extension of the (CH) box.", "question": "### (f) Show that this is not true of the (CH) box: find a non-product extension of that box which nevertheless satisfies all non-signaling conditions. Part (e) has a very important consequence for cryptography: it means that certain types of bipartite correlations imply perfect privacy: any extension of the distribution which takes into account a third system must be completely uncorrelated from the first two (as long as it respects the basic non-signaling conditions). This phenomenon is often referred to as a monogamy property of the bipartite (PR) box. While this is not true of the (CH) box, the latter still provides some limited amount of secrecy, which will be key to its use in quantum key distribution, a topic we will soon explore in class." 
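The sixteen correlations claimed for the (CH) box strategy in part (d) above are tedious to verify by hand. Here is a short numerical check (not part of the original solution): for real single-qubit states the overlap with \(|EPR\rangle\) reduces to a dot product, \(\mathrm{tr}((|u\rangle\langle u| \otimes |v\rangle\langle v|)\,|EPR\rangle\langle EPR|) = (u\cdot v)^2/2\). The dictionaries `A` and `B` restate the solution's measurement bases as rotation angles.

```python
# Sanity check of the (CH) box quantum strategy from part (d).
from math import cos, sin, pi, isclose

def ket(theta):
    # real qubit state cos(theta)|0> + sin(theta)|1>
    return (cos(theta), sin(theta))

# Alice: x = 0 -> computational basis; x = 1 -> Hadamard basis
A = {0: [ket(0), ket(pi / 2)], 1: [ket(pi / 4), ket(3 * pi / 4)]}
# Bob: y = 0 -> {phi_0, phi_1} (rotated by +pi/8); y = 1 -> {psi_0, psi_1} (-pi/8)
B = {0: [ket(pi / 8), ket(pi / 8 + pi / 2)],
     1: [ket(-pi / 8), ket(-pi / 8 + pi / 2)]}

def prob(u, v):
    # tr((|u><u| (x) |v><v|) |EPR><EPR|) = |<EPR|u,v>|^2 for real states
    return (u[0] * v[0] + u[1] * v[1]) ** 2 / 2

for x in (0, 1):
    for y in (0, 1):
        for a in (0, 1):
            for b in (0, 1):
                target = cos(pi / 8) ** 2 / 2 if (a ^ b) == (x & y) \
                    else sin(pi / 8) ** 2 / 2
                assert isclose(prob(A[x][a], B[y][b]), target)
```

All sixteen cases pass, so the strategy reproduces the (CH) box exactly.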
}, { "context": "Given that each pair is chosen uniformly and \\((x, y, z)\\) is generated uniformly, the success probability is\n\n\\[ P_{\\text{win}} = \\frac{1}{3} \\sum_{x,y,z \\in \\{0,1\\}} \\frac{1}{8} \\left( \\sum_{\\substack{a,b,c \\in \\{0,1\\} \\\\ a \\oplus b = x \\land y}} p(a,b,c\\mid x,y,z) + \\sum_{\\substack{a,b,c \\in \\{0,1\\} \\\\ a \\oplus c = x \\land z}} p(a,b,c\\mid x,y,z) + \\sum_{\\substack{a,b,c \\in \\{0,1\\} \\\\ b \\oplus c = y \\land z}} p(a,b,c\\mid x,y,z) \\right) \\]\n\n\\[ = \\frac{1}{24} \\sum_{x,y,z \\in \\{0,1\\}} \\left( I_1(x,y,z) + I_2(x,y,z) + I_3(x,y,z) \\right), \\]\n\nwhere \\(I_i(x,y,z)\\), \\(i = 1, 2, 3\\), denotes the success probability under the condition that the input is \\((x, y, z)\\) and the \\(i\\)th pair is chosen. Here the first, second and third pair means \\((A, B)\\), \\((A, C)\\) and \\((B, C)\\) respectively.\n\nThe following table gives the number of occurrences of each term \\(p(a, b, c\\mid x, y, z)\\) in the summation \\(I_1(x, y, z) + I_2(x, y, z) + I_3(x, y, z)\\).\n\n| \\(abc \\backslash xyz\\) | 000 | 100 | 010 | 001 | 011 | 101 | 110 | 111 |\n|-----|-----|-----|-----|-----|-----|-----|-----|-----|\n| 000 | 3 | 3 | 3 | 3 | 2 | 2 | 2 | 0 |\n| 001 | 1 | 1 | 1 | 1 | 2 | 2 | 0 | 2 |\n| 010 | 1 | 1 | 1 | 1 | 2 | 0 | 2 | 2 |\n| 100 | 1 | 1 | 1 | 1 | 0 | 2 | 2 | 2 |\n| 011 | 1 | 1 | 1 | 1 | 0 | 2 | 2 | 2 |\n| 101 | 1 | 1 | 1 | 1 | 2 | 0 | 2 | 2 |\n| 110 | 1 | 1 | 1 | 1 | 2 | 2 | 0 | 2 |\n| 111 | 3 | 3 | 3 | 3 | 2 | 2 | 2 | 0 |\n\nUsing this table and the condition that\n\n\\[ \\sum_{a,b,c \\in \\{0,1\\}} p(a, b, c\\mid x, y, z) = 1, \\quad \\forall x, y, z, \\]\n\nwe can easily see that\n\n\\[ P_{\\text{win}} = \\frac{1}{24} \\left( 12 + 2p(0,0,0\\mid 0,0,0) + 2p(0,0,0\\mid 1,0,0) + 2p(0,0,0\\mid 0,1,0) + 2p(0,0,0\\mid 0,0,1) \\right. 
\\]\n\n\\[ + 2p(1,1,1\\mid 0,0,0) + 2p(1,1,1\\mid 1,0,0) + 2p(1,1,1\\mid 0,1,0) + 2p(1,1,1\\mid 0,0,1) \\]\n\n\\[ - 2p(1,0,0\\mid 0,1,1) - 2p(0,1,0\\mid 1,0,1) - 2p(0,0,1\\mid 1,1,0) - 2p(0,0,0\\mid 1,1,1) \\]\n\n\\[ - 2p(0,1,1\\mid 0,1,1) - 2p(1,0,1\\mid 1,0,1) - 2p(1,1,0\\mid 1,1,0) - 2p(1,1,1\\mid 1,1,1) \\big) \\]\n\n\\[ \\leq \\frac{1}{24} \\big( 12 + 2p(0,0,0\\mid 0,0,0) + 2p(0,0,0\\mid 1,0,0) + 2p(0,0,0\\mid 0,1,0) + 2p(0,0,0\\mid 0,0,1) \\]\n\n\\[ + 2p(1,1,1\\mid 0,0,0) + 2p(1,1,1\\mid 1,0,0) + 2p(1,1,1\\mid 0,1,0) + 2p(1,1,1\\mid 0,0,1) \\]\n\n\\[ - 2p(0,0,0\\mid 1,1,1) - 2p(1,1,1\\mid 1,1,1) \\big) \\]\n\nNotice that whichever pair of players is chosen, \\((a, b, c)\\) is a valid answer to the input \\((x, y, z)\\) if and only if \\((a \\oplus 1, b \\oplus 1, c \\oplus 1)\\) is a valid answer to \\((x, y, z)\\). Therefore, given a tripartite box \\(\\{p(\\cdot, \\cdot, \\cdot \\mid x, y, z)\\}\\), if we define a new tripartite box \\(\\{q(\\cdot, \\cdot, \\cdot \\mid x, y, z)\\}\\) as\n\n\\[ q(a, b, c\\mid x, y, z) = \\frac{1}{2} \\left( p(a, b, c\\mid x, y, z) + p(a \\oplus 1, b \\oplus 1, c \\oplus 1\\mid x, y, z) \\right), \\quad \\forall a, b, c, x, y, z, \\]\n\nthen it's easy to check that we will have\n\n\\[ P_{\\text{win}}(p) = P_{\\text{win}}(q), \\]\n\nthat is, this transformation preserves the success probability. Also if \\(\\{p(\\cdot, \\cdot, \\cdot \\mid x, y, z)\\}\\) is non-signaling, then for any \\((a, x)\\), we have\n\n\\[ \\sum_{b, c \\in \\{0,1\\}} q(a, b, c\\mid x, y, z) = \\frac{1}{2} \\sum_{b, c \\in \\{0,1\\}} \\left( p(a, b, c\\mid x, y, z) + p(a \\oplus 1, b \\oplus 1, c \\oplus 1\\mid x, y, z) \\right) \\]\n\n\\[ = \\frac{1}{2} \\sum_{b, c \\in \\{0,1\\}} \\left( p(a, b, c\\mid x, y', z') + p(a \\oplus 1, b \\oplus 1, c \\oplus 1\\mid x, y', z') \\right) \\]\n\n\\[ = \\sum_{b, c \\in \\{0,1\\}} q(a, b, c\\mid x, y', z'), \\quad \\forall y, z, y', z', \\]\n\nand this is also true for any \\((b, y)\\) or any \\((c, z)\\). 
Also for any \\((a, b, x, y)\\), we have\n\n\\[ \\sum_{c \\in \\{0,1\\}} q(a, b, c\\mid x, y, z) = \\frac{1}{2} \\sum_{c \\in \\{0,1\\}} \\left( p(a, b, c\\mid x, y, z) + p(a \\oplus 1, b \\oplus 1, c \\oplus 1\\mid x, y, z) \\right) \\]\n\n\\[ = \\frac{1}{2} \\sum_{c \\in \\{0,1\\}} \\left( p(a, b, c\\mid x, y, z') + p(a \\oplus 1, b \\oplus 1, c \\oplus 1\\mid x, y, z') \\right) \\]\n\n\\[ = \\sum_{c \\in \\{0,1\\}} q(a, b, c\\mid x, y, z'), \\quad \\forall z, z', \\]\n\nand this is true for any \\((a, c, x, z)\\) or any \\((b, c, y, z)\\). Therefore \\(\\{q(\\cdot, \\cdot, \\cdot\\mid x, y, z)\\}\\) is also non-signaling. Thus this transformation also preserves the non-signaling property. For this reason, from now on we can always assume that\n\n\\[ p(a, b, c \\mid x, y, z) = p(a \\oplus 1, b \\oplus 1, c \\oplus 1 \\mid x, y, z), \\quad \\forall a, b, c, x, y, z, \\tag{*} \\]\n\notherwise we can apply this transformation to \\(\\{p(\\cdot, \\cdot, \\cdot\\mid x, y, z)\\}\\) to make it so.\n\nNow we continue our task of finding the upper bound of the success probability:\n\n\\[ \\begin{aligned} P_{\\text{win}} &\\leq \\frac{1}{24} \\big( 12 + 2p(0, 0, 0 \\mid 0, 0, 0) + 2p(0, 0, 0 \\mid 1, 0, 0) + 2p(0, 0, 0 \\mid 0, 1, 0) + 2p(0, 0, 0 \\mid 0, 0, 1) \\\\ &\\quad + 2p(1, 1, 1 \\mid 0, 0, 0) + 2p(1, 1, 1 \\mid 1, 0, 0) + 2p(1, 1, 1 \\mid 0, 1, 0) + 2p(1, 1, 1 \\mid 0, 0, 1) \\\\ &\\quad - 2p(0, 0, 0 \\mid 1, 1, 1) - 2p(1, 1, 1 \\mid 1, 1, 1) \\big) \\\\ &= \\frac{1}{24} \\big( 12 + 4p(0, 0, 0 \\mid 0, 0, 0) + 4p(0, 0, 0 \\mid 1, 0, 0) + 4p(0, 0, 0 \\mid 0, 1, 0) + 4p(0, 0, 0 \\mid 0, 0, 1) - 4p(0, 0, 0 \\mid 1, 1, 1) \\big) \\\\ &= \\frac{1}{2} + \\frac{1}{6} \\big( p(0, 0, 0 \\mid 0, 0, 0) + p(0, 0, 0 \\mid 1, 0, 0) + p(0, 0, 0 \\mid 0, 1, 0) + p(0, 0, 0 \\mid 0, 0, 1) - p(0, 0, 0 \\mid 1, 1, 1) \\big), \\end{aligned} \\]\n\nwhere we have used the condition \\((*)\\). Next we will have to do some painful calculation. 
Using condition \\((*)\\) and the non-signaling conditions with two inputs fixed, we can show that\n\n\\[ \\begin{aligned} p(0, 0, 1 \\mid 0, 1, 1) + p(0, 1, 0 \\mid 1, 0, 1) &= p(0, 0, 1 \\mid 1, 1, 1) + p(0, 1, 0 \\mid 1, 1, 1), \\\\ p(0, 0, 1 \\mid 1, 0, 1) + p(1, 0, 0 \\mid 1, 0, 1) &= p(0, 0, 1 \\mid 1, 1, 1) + p(1, 0, 0 \\mid 1, 1, 1), \\\\ p(0, 0, 1 \\mid 1, 0, 1) + p(1, 1, 0 \\mid 1, 0, 1) &= p(0, 0, 1 \\mid 1, 1, 1) + p(1, 1, 0 \\mid 1, 1, 1), \\end{aligned} \\]\n\n\\[ \\begin{aligned} \\implies \\; & p(0, 0, 1 \\mid 0, 1, 1) + p(0, 1, 0 \\mid 1, 0, 1) + p(0, 0, 1 \\mid 1, 0, 1) + p(1, 0, 0 \\mid 1, 0, 1) \\\\ & \\quad + p(1, 0, 0 \\mid 1, 0, 1) + p(0, 1, 0 \\mid 1, 0, 1) = 2p(0, 0, 1 \\mid 1, 1, 1) + 2p(0, 1, 0 \\mid 1, 1, 1) \\\\ & \\quad = 1 - 2p(0, 0, 0 \\mid 1, 1, 1). \\end{aligned} \\]\n\nOn the other hand, still using condition \\((*)\\) and the non-signaling conditions with two inputs fixed, we can check that\n\n\\[ \\begin{aligned} & p(0, 0, 1 \\mid 1, 0, 0) + p(0, 1, 0 \\mid 1, 0, 0) + 2p(1, 0, 0 \\mid 0, 0, 0) \\\\ & \\quad + p(0, 0, 1 \\mid 0, 1, 0) + 2p(0, 1, 0 \\mid 1, 0, 0) + p(1, 0, 0 \\mid 0, 1, 0) \\\\ & \\quad + 2p(0, 0, 1 \\mid 0, 0, 1) + p(0, 1, 0 \\mid 0, 0, 1) + p(1, 0, 0 \\mid 0, 0, 1) \\\\ & = p(0, 0, 1 \\mid 0, 1, 1) + p(0, 1, 0 \\mid 1, 0, 1) + p(0, 0, 1 \\mid 1, 0, 1) + p(1, 0, 0 \\mid 1, 0, 1) \\\\ & \\quad + p(1, 0, 0 \\mid 1, 0, 1) + p(0, 1, 0 \\mid 1, 0, 1) + 2p(1, 0, 0 \\mid 1, 0, 1) + 2p(0, 0, 1 \\mid 1, 0, 1) \\\\ & \\quad = 1 - 2p(0, 0, 0 \\mid 1, 1, 1) + 2p(1, 0, 0 \\mid 1, 0, 1) + 2p(0, 0, 1 \\mid 1, 0, 1). 
\\end{aligned} \\]\n\nNotice that condition \\((\\ast)\\) gives\n\n\\[ \\begin{aligned} &2p(0, 0, 0|1, 0, 0) + 2p(0, 0, 0|0, 1, 0) + 2p(0, 0, 0|0, 0, 1) \\\\ &+ p(0, 0, 1|1, 0, 0) + p(0, 1, 0|1, 0, 0) + 2p(1, 0, 0|1, 0, 0) \\\\ &+ p(0, 0, 1|0, 1, 0) + 2p(0, 1, 0|0, 1, 0) + p(1, 0, 0|0, 1, 0) \\\\ &+ 2p(0, 0, 1|0, 0, 1) + p(0, 1, 0|0, 0, 1) + p(1, 0, 0|0, 0, 1) \\\\ &\\leq 3, \\end{aligned} \\]\n\nthus\n\n\\[ \\begin{aligned} &2p(0, 0, 0|1, 0, 0) + 2p(0, 0, 0|0, 1, 0) + 2p(0, 0, 0|0, 0, 1) + 1 - 2p(0, 0, 0|1, 1, 1) \\\\ &+ 2p(1, 0, 0|0, 1, 1) + 2p(0, 1, 0|1, 0, 1) + 2p(0, 0, 1|1, 1, 0) \\leq 3 \\\\ \\implies \\; &p(0, 0, 0|1, 0, 0) + p(0, 0, 0|0, 1, 0) + p(0, 0, 0|0, 0, 1) - p(0, 0, 0|1, 1, 1) \\\\ &\\leq 1 - p(1, 0, 0|0, 1, 1) - p(0, 1, 0|1, 0, 1) - p(0, 0, 1|1, 1, 0) \\leq 1. \\end{aligned} \\]\n\nFinally we have\n\n\\[ \\begin{aligned} P_{\\text{win}} &\\leq \\frac{1}{2} + \\frac{1}{6} \\big( p(0, 0, 0|0, 0, 0) + p(0, 0, 0|1, 0, 0) + p(0, 0, 0|0, 1, 0) + p(0, 0, 0|0, 0, 1) - p(0, 0, 0|1, 1, 1) \\big) \\\\ &\\leq \\frac{1}{2} + \\frac{1}{6} \\left( p(0, 0, 0|0, 0, 0) + 1 \\right) \\\\ &\\leq \\frac{1}{2} + \\frac{1}{6} \\left( \\frac{1}{2} + 1 \\right) = \\frac{3}{4}. \\end{aligned} \\]\n\nIn the last two steps we used the bound just derived and the condition \\((\\ast)\\), which gives \\( p(0, 0, 0|0, 0, 0) = p(1, 1, 1|0, 0, 0) \\) and hence \\( p(0, 0, 0|0, 0, 0) \\leq \\frac{1}{2} \\).\n\nNow we have proved that \\(\\frac{3}{4}\\) is an upper bound of the success probability. 
Indeed if we take\n\n\\[ p(0, 0, 0|x, y, z) = p(1, 1, 1|x, y, z) = \\frac{1}{2}, \\quad \\forall x, y, z, \\]\n\nwith all other probabilities equal to 0, then we can check that this \\(\\{p(\\cdot, \\cdot, \\cdot \\mid x, y, z)\\}\\) is a non-signaling tripartite nonlocal box, and (since the omitted negative terms all vanish for this box) we have\n\n\\[ \\begin{aligned} P_{\\text{win}} &= \\frac{1}{24} \\big(12 + 2p(0, 0, 0\\mid 0, 0, 0) + 2p(0, 0, 0\\mid 1, 0, 0) + 2p(0, 0, 0\\mid 0, 1, 0) + 2p(0, 0, 0\\mid 0, 0, 1) \\\\ &\\quad + 2p(1, 1, 1\\mid 0, 0, 0) + 2p(1, 1, 1\\mid 1, 0, 0) + 2p(1, 1, 1\\mid 0, 1, 0) + 2p(1, 1, 1\\mid 0, 0, 1) \\\\ &\\quad - 2p(0, 0, 0\\mid 1, 1, 1) - 2p(1, 1, 1\\mid 1, 1, 1)\\big) \\\\ &= \\frac{3}{4}. \\end{aligned} \\]\n\nTherefore the optimum success probability achieved by any non-signaling tripartite box in this game is \\(\\frac{3}{4}\\).", "question": "### (g) Consider a three-player variant of the CHSH game in which each of the three possible pairs of players is chosen uniformly at random by the referee to execute the CHSH game (with the third player being ignored; see the notes on EdX for a complete description). Prove either analytically or numerically (in the latter case, include and briefly explain your code) that the optimum success probability achieved by any non-signaling tripartite box in this game is at most \\(3/4\\). Another manifestation of monogamy!" } ]
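As an independent sanity check on part (g) (not part of the original solution), the box \(p(0,0,0|x,y,z) = p(1,1,1|x,y,z) = \frac{1}{2}\) can be plugged directly into the definition of the game, using exact rational arithmetic: the chosen pair always outputs equal bits, so it wins exactly when the AND of its two inputs is 0.

```python
# Exact computation of P_win for the box p(000|xyz) = p(111|xyz) = 1/2.
from itertools import product
from fractions import Fraction

pairs = [(0, 1), (0, 2), (1, 2)]            # (A,B), (A,C), (B,C)
win = Fraction(0)
for inputs in product((0, 1), repeat=3):     # uniform over the 8 input triples
    for i, j in pairs:                       # referee picks a pair uniformly
        for outputs in ((0, 0, 0), (1, 1, 1)):   # each with probability 1/2
            if outputs[i] ^ outputs[j] == inputs[i] & inputs[j]:
                win += Fraction(1, 8) * Fraction(1, 3) * Fraction(1, 2)
assert win == Fraction(3, 4)
```

This confirms that the \(3/4\) bound is tight.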
2016-03-11T00:00:00
Thomas Vidick, Andrea Coladangelo, Jalex Stark, Charles Xu
Caltech
Establishing keys in the presence of a limited eavesdropper
Assume that Alice and Bob are connected by a classical authenticated channel. Your goal is to devise ways in which Alice and Bob can obtain a key in any of the situations below. (Due to Eric Fries)
[ { "context": "Define Protocol \\( J_1 \\) as follows: Alice chooses a string \\( X = x_1, \\ldots, x_n \\in \\{0, 1\\}^n \\) uniformly at random, Alice sends each bit \\( x_i \\) to Bob over the channel where Eve has a probability \\( q \\) of learning each bit, Alice picks a random seed \\( r \\in \\{0, 1\\}^m \\), Alice uses a 2-universal extractor \\( \\text{Ext} : \\{0, 1\\}^n \\times \\{0, 1\\}^m \\rightarrow \\{0, 1\\}^\\ell \\) to compute \\( k = \\text{Ext}(x, r) \\), Alice sends \\( r \\) to Bob over the classically authenticated channel, and Bob computes \\( k = \\text{Ext}(x, r) \\).\n\nEve either learns \\( x_i \\) with probability \\( q \\) and can guess it exactly, or does not learn \\( x_i \\) with probability \\( 1 - q \\) and can guess it with probability \\( \\frac{1}{2} \\). Therefore,\n\n\\[ P_{\\text{guess}}(X|E) = \\left[ q + (1 - q) \\frac{1}{2} \\right]^n = \\left( \\frac{1 + q}{2} \\right)^n \\Rightarrow H_{\\min}(X|E) = -n \\log \\left( \\frac{1 + q}{2} \\right) = n [1 - \\log (1 + q)]. \\]\n\nPage 5 of \"Security of Quantum Key Distribution\" (Renner 2005) gives\n\n\\[ \\ell \\leq H_{\\min}(X|E) + 2 \\log \\epsilon - 1 \\Rightarrow D \\left( \\rho_{KRE}, \\frac{I}{2^\\ell} \\otimes \\rho_{RE} \\right) \\leq \\epsilon. \\]\n\nFor \\( \\epsilon = 10^{-5} \\), Protocol \\( J_1 \\) is secure when\n\n\\[ \\ell \\leq n [1 - \\log (1 + q)] - 10 \\log 10 - 1. \\]\n\nThe upper bound on \\( \\ell \\) is tightest when \\( q \\) is maximized, so \\( \\frac{1}{3} \\leq q \\leq \\frac{1}{2} \\Rightarrow \\ell \\leq n (2 - \\log 3) - 10 \\log 10 - 1 \\).\n\nTherefore, if the output length \\( \\ell \\) of \\( \\text{Ext}(x, r) \\) satisfies the previous inequality, then Protocol \\( J_1 \\) is \\( 10^{-5} \\)-secure.\n\nWhen the amount of \\( 10^{-5} \\)-secure key is maximized,\n\n\\[ \\ell = n (2 - \\log 3) - 10 \\log 10 - 1 \\Rightarrow \\frac{n}{\\ell} = \\frac{1}{2 - \\log 3} \\left( 1 + \\frac{1 + 10 \\log 10}{\\ell} \\right). 
\\]\n\nIn the limit of a long key, the channel where Eve has a probability \\( q \\) of learning each bit must be used \\( \\frac{1}{2 - \\log 3} \\approx 2.409 \\) times per bit of \\( 10^{-5} \\)-secure key.\n\n(TA's comment: One also needs to add \\( m \\), the length of the seed, in the number of uses of the channel. So the correct ratio is actually \\( \\frac{n + m}{\\ell} \\), but anyways the seed length is small and doesn't affect the final number by much)", "question": "### (a) Suppose that Alice and Bob are connected by a classical channel such that Eve learns each bit with probability \\( q \\), where we only know that \\( \\frac{1}{3} \\leq q \\leq \\frac{1}{2} \\). Give a protocol that allows Alice and Bob to create an \\( \\epsilon \\)-secure key, where \\( \\epsilon = 10^{-5} \\). Explain why your protocol is secure. How many uses of the channel are required per bit of key produced?" }, { "context": "Define Protocol J₂ as follows: Alice chooses a string \\( X = x_1, \\ldots, x_n \\in \\{0, 1\\}^n \\) uniformly at random, Alice sends each bit \\( x_i \\) to Bob over the channel where Eve learns every bit but can only remember a maximum of \\( p = 1024 \\) of them, Alice picks a random seed \\( r \\in \\{0, 1\\}^m \\), Alice uses a 2-universal extractor \\( \\text{Ext} : \\{0, 1\\}^n \\times \\{0, 1\\}^m \\rightarrow \\{0, 1\\}^\\ell \\) to compute \\( k = \\text{Ext}(x, r) \\), Alice sends \\( r \\) to Bob over the classically authenticated channel, and Bob computes \\( k = \\text{Ext}(x, r) \\).\n\n\\( n \\leq p \\Rightarrow \\) Eve can remember all of \\( X \\Rightarrow P_{\\text{guess}}(X|E) = 1 \\Rightarrow H_{\\min}(X|E) = 0 \\). \\( n \\geq p \\Rightarrow \\) there are \\( n - p \\) bits of \\( X \\) that Eve does not remember \\( \\Rightarrow P_{\\text{guess}}(X|E) = \\left(\\frac{1}{2}\\right)^{n-p} = 2^{-(n-p)} \\Rightarrow H_{\\min}(X|E) = n - p \\). Therefore, \\( H_{\\min}(X|E) = \\max \\{0, n - p\\} \\). 
From here on, assume \\( n > p \\Rightarrow H_{\\min}(X|E) = n - p \\).\n\nPage 5 of \"Security of Quantum Key Distribution\" (Renner 2005) gives\n\n\\[ \\ell \\leq H_{\\min}(X|E) + 2 \\log \\epsilon - 1 \\Rightarrow D \\left( P_{KRE}, \\frac{1}{2^\\ell} \\otimes P_{RE} \\right) \\leq \\epsilon. \\]\n\nFor \\( \\epsilon = 10^{-10} \\), Protocol J₂ is secure when \\( \\ell \\leq n - p - 20 \\log 10 - 1 \\). For \\( p = 1024 \\), Protocol J₂ is secure when \\( \\ell \\leq n - 1025 - 20 \\log 10 \\).", "question": "### (b) Suppose now that Alice and Bob are connected by a classical channel on which Eve can intercept bits arbitrarily. However, Eve’s memory is limited to \\( k = 1024 \\) bits. Give a protocol that allows Alice and Bob to create an \\( \\epsilon \\)-secure key where \\( \\epsilon = 10^{-10} \\). Explain why your protocol is secure." } ]
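A back-of-the-envelope check (not part of the original solutions) of the key lengths in parts (a) and (b), using the quoted bound \(\ell \leq H_{\min}(X|E) + 2\log\epsilon - 1\). The value \(n = 10{,}000\) is an arbitrary illustration, not taken from the text.

```python
# Extractable key length under the leftover-hash-style bound quoted above.
from math import log2

def key_length_a(n, q=0.5, eps=1e-5):
    # Part (a): Eve guesses each bit with probability (1 + q)/2.
    return n * (1 - log2(1 + q)) + 2 * log2(eps) - 1

def key_length_b(n, p=1024, eps=1e-10):
    # Part (b): Eve remembers at most p of the n bits.
    return max(0, n - p) + 2 * log2(eps) - 1

n = 10_000
# Part (a): channel uses per key bit approaches 1/(2 - log2 3) ~ 2.409.
assert abs(n / key_length_a(n) - 1 / (2 - log2(3))) < 0.05
# Part (b): the bound matches the closed form n - 1025 - 20*log2(10).
assert abs(key_length_b(n) - (n - 1025 - 20 * log2(10))) < 1e-6
```

Here all logs are base 2, matching the convention of the solution (so \(2\log_2 10^{-5} = -10\log_2 10\)).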
2016-11-29T00:00:00
Thomas Vidick, Andrea Coladangelo, Jalex Stark, Charles Xu
Caltech
Recursive information reconciliation
Suppose that Alice and Bob have n-bit strings \(X, Y \in \{0, 1\}^n\) respectively such that for each \(i \in \{1, \ldots, n\}\), \(\Pr(X_i = Y_i) = p = 1 - \delta \in [1/3, 2/3]\).
[ { "context": "", "question": "### (a) Let \\(S \\subseteq \\{1, \\ldots, n\\}\\) be a set of coordinates of size \\(|S| = k\\). Evaluate \\(\\Pr(x_S = y_S)\\) and \\(\\Pr(\\bigoplus_{i \\in S} x_i = \\bigoplus_{i \\in S} y_i)\\), as a function of \\(\\delta\\)." }, { "context": "", "question": "### (b) Using the previous question, find a lower bound on \\(k\\) which guarantees that \\(\\Pr(x_S = y_S \\mid \\bigoplus_{i \\in S} x_i = \\bigoplus_{i \\in S} y_i) \\geq 1 - \\delta / 2\\)." }, { "context": "", "question": "### (c) Explain how this idea can be used to implement an iterative scheme for information reconciliation [Hint: use larger and larger alphabets]." }, { "context": "", "question": "### (d) How efficient is your scheme? For some small \\(\\epsilon > 0\\) (much smaller than \\(\\delta\\)), estimate the number of bits that Alice and Bob have to exchange before they find a subset \\(T\\) such that \\(\\Pr(x_T = y_T) \\geq 1 - \\epsilon\\). How large is \\(T\\)?" } ]
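The contexts above are empty, so as a hint (not an official solution) for parts (a)–(b): with independent per-coordinate error \(\delta\), the relevant closed forms are \(\Pr(x_S = y_S) = (1-\delta)^k\) and \(\Pr(\bigoplus_{i \in S} x_i = \bigoplus_{i \in S} y_i) = \frac{1}{2}\big(1 + (1-2\delta)^k\big)\), since the parities agree exactly when an even number of the \(k\) coordinates disagree. Both formulas can be verified by brute force:

```python
# Brute-force verification of the two probabilities in part (a).
from itertools import product
from math import isclose

def exact(k, delta):
    p_eq = p_par = 0.0
    for flips in product((0, 1), repeat=k):      # flips[i] = 1 iff x_i != y_i
        pr = 1.0
        for f in flips:
            pr *= delta if f else (1 - delta)
        if not any(flips):
            p_eq += pr                           # x_S = y_S
        if sum(flips) % 2 == 0:
            p_par += pr                          # parities of x_S, y_S agree
    return p_eq, p_par

k, delta = 5, 1 / 3
p_eq, p_par = exact(k, delta)
assert isclose(p_eq, (1 - delta) ** k)
assert isclose(p_par, (1 + (1 - 2 * delta) ** k) / 2)
```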
2016-11-29T00:00:00
Thomas Vidick, Andrea Coladangelo, Jalex Stark, Charles Xu
Caltech
Generating a key using an anonymous message board
(Due to Bolton Bailey)
[ { "context": "(a) Consider the following protocol for Alice and Bob. Alice and Bob each choose a bit uniformly at random. Then, Alice and Bob each broadcast their bits on the public channel. If their bits are the same, which happens with probability \\( \\frac{1}{2} \\), they consider the protocol failed. If the two bits are different, then both take the bit broadcast by Alice to be the secret key. From Eve's perspective it is impossible to tell which broadcast came from Alice and which from Bob, so she cannot tell which of the two different bits broadcast is the key, so the key is uniformly random from her perspective.", "question": "### (a) Alice and Bob’s conversation takes place on an anonymous message board. That is, Eve can see the whole transcript but doesn’t know which message came from which person. Find a protocol in which Alice and Bob exchange a total of two messages, which succeeds with probability at least one half, and when it succeeds, Alice and Bob share one bit of key which is uniformly random from the perspective of Eve." }, { "context": "Consider the following protocol, with 3n rounds. In each round, Alice and Bob generate a uniformly random bit and broadcast it. Then, (as in part a) each round where the bits matched is considered a failure and in each round where the bits were different, Alice and Bob both take Alice's broadcast bit from that round and append it to their key. By the same argument as in (a), all the bits of this key should be independent of Eve and uniformly random. At the end, if Alice's and Bob's key length is greater than or equal to \\( n \\), they take the first \\( n \\) bits of their key as their final key. If they have fewer than \\( n \\) bits, the protocol fails.\n\nThe number of bits generated by the 3n rounds is distributed according to a binomial random variable with 3n trials and success probability \\( \\frac{1}{2} \\). 
To bound the chance of failure, we apply the Hoeffding inequality, which is mentioned in the lecture notes as a variant of the Chernoff bound. The Hoeffding inequality states that the probability that a binomial random variable with \\( n \\) trials and success probability \\( p \\) is less than or equal to \\( (p - \\epsilon)n \\) is bounded by\n\n\\[\n\\Pr(B(n, p) \\leq (p - \\epsilon)n) \\leq e^{-2\\epsilon^2 n}\n\\]\n\nIn this case, we have, with \\( \\epsilon = \\frac{1}{6} \\),\n\n\\[\n\\text{Pr(failure)} = \\text{Pr}(B(3n, \\frac{1}{2}) < n) \\leq \\text{Pr}(B(3n, \\frac{1}{2}) \\leq (\\frac{1}{2} - \\frac{1}{6})3n) \\leq e^{-2(\\frac{1}{6})^2 3n} = e^{-n/6}\n\\]\n\nAnd so the probability of failure is indeed exponentially small.", "question": "### (b) Give an anonymous-message-board protocol to generate an \\(n\\)-bit private key which takes a linear number of rounds and has exponentially small failure rate. (Hint: You’ll need a Chernoff bound to control the error rate.)" }, { "context": "The argument fails to apply here because in this case Alice and Bob have some information that Eve does not have. Namely, the identities of the people sending the messages. Because Alice and Bob can infer whether they or the other was the one who sent a message, they can use this information to create a key. Eve can no longer simply carry out all the operations that Alice and Bob carry out because she doesn’t have all the information.", "question": "### (c) Eve sees the entire transcript of Alice and Bob’s conversation, but in the lecture notes it is argued that key generation is impossible against an adversary who can overhear all communication. Why does that argument fail to apply here?" } ]
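The 3n-round protocol of part (b) can also be simulated directly. This sketch is not part of the original solution, and the function name and trial count are arbitrary choices of ours:

```python
# Monte Carlo estimate of the failure rate of the 3n-round protocol:
# a round yields a key bit when the two broadcast bits differ (prob. 1/2),
# and the protocol fails when fewer than n rounds do.
import random

def failure_rate(n, trials=10_000, rng=random.Random(0)):
    fails = 0
    for _ in range(trials):
        good = sum(rng.getrandbits(1) != rng.getrandbits(1)
                   for _ in range(3 * n))
        fails += good < n
    return fails / trials

# For n = 8 the observed rate (~0.03) is already below the Hoeffding
# bound exp(-8/6) ~ 0.26; the bound is loose but exponentially decaying.
assert failure_rate(8) < 0.264
```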
2016-11-29T00:00:00
Thomas Vidick, Andrea Coladangelo, Jalex Stark, Charles Xu
Caltech
Information reconciliation via linear codes
(Due to Alex Meiburg)
[ { "context": "With 3 parity-check bits there are 8 possible syndromes. One of them (the zero syndrome) corresponds to the received word being a codeword; the other 7, we hope, correspond to the 7 possible single-bit flips. Indeed, each single-bit flip leads to a distinct nonzero syndrome, so we can always correct single-bit errors in the transmitted string. Since this exhausts the set of syndromes, we can only correct single-bit flip errors, so we have a probability of success of\n\n\\[\n(1 - p)^7 + 7p(1 - p)^6 = 1 - 21p^2 + 70p^3 - 105p^4 + 84p^5 - 35p^6 + 6p^7.\n\\]", "question": "### (a) Suppose Alice and Bob have access to the binary symmetric channel with error \\(p\\): Bob receives each bit that Alice sends correctly with probability \\((1 - p)\\). Consider the linear code generated by the parity check matrix\n\n\\[\nH = \\begin{pmatrix}\n0 & 0 & 0 & 1 & 1 & 1 \\\\\n0 & 1 & 1 & 0 & 0 & 1 \\\\\n1 & 0 & 1 & 0 & 1 & 0 \\\\\n\\end{pmatrix}\n\\]\n\nUsing the information reconciliation scheme defined in the edX videos, with what probability does Alice and Bob succeed at distributing their key?" }, { "context": "\n\\[\n(1 - p)^7 = 1 - 7p + 21p^2 - 35p^3 + 35p^4 - 21p^5 + 7p^6 - p^7\n\\]\n\nTransmitting the 3-bit scheme with the 2 parity-check bits has a success rate of\n\n\\[ (1 - p)^3 + 3p(1 - p)^2 = 1 - 3p^2 + 2p^3 \\]\n\nTo leading order in \\(p\\), the 3-bit scheme is the most reliable (but also transmits less data than the 7-bit scheme), followed by the 7-bit with reconciliation, followed by the 7-bit without reconciliation. Plotting these functions, we can check that this ordering holds for all \\(p \\in (0, 1/2)\\). So for these \\(p\\), the highest success rate scheme is the 3-bit scheme. 
If we weight by information transmission rate (for instance, with \\(p = 0.05\\), the 7-bit code transmits 7 bits using 10 channel uses with success probability 0.956, for an information density of 0.669, while the 3-bit code has an information density of only 0.596), then the 7-bit code is optimal for \\(p < 0.1083\\), and the 3-bit code is optimal otherwise.", "question": "### (b) What is the probability that a 7-bit message is transmitted correctly with no reconciliation? Compare this to the success probability of the previous part and to the success probability of the 3-bit scheme generated by the parity check matrix\n\n\\[ H = \\begin{pmatrix} 1 & 1 & 0 \\\\ 0 & 1 & 1 \\end{pmatrix} \\]\n\n(The three-bit scheme is analyzed in the edX lecture videos; you may quote those results.) Which schemes have success probabilities with the best leading order behavior? For \\( p \\in \\left(0, \\frac{1}{2}\\right) \\), which scheme is best?" } ]
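The claimed ordering of the three success probabilities can be confirmed numerically (not part of the original solution; the function names are ours):

```python
# Per-transmission success probabilities over the binary symmetric channel.
def succ7_reconciled(p):   # 7-bit code with 3 syndrome bits: corrects one flip
    return (1 - p) ** 7 + 7 * p * (1 - p) ** 6

def succ7_raw(p):          # 7 bits sent with no reconciliation
    return (1 - p) ** 7

def succ3(p):              # 3-bit scheme with 2 parity checks
    return (1 - p) ** 3 + 3 * p * (1 - p) ** 2

# Ordering claimed in the text, checked at several points of (0, 1/2):
for p in (0.01, 0.05, 0.1083, 0.2, 0.49):
    assert succ3(p) > succ7_reconciled(p) > succ7_raw(p)
```

The same functions, weighted by bits per channel use (7/10 versus 3/5), reproduce the crossover near \(p \approx 0.1083\) mentioned above.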
2016-11-29T00:00:00
Thomas Vidick, Andrea Coladangelo, Jalex Stark, Charles Xu
Caltech
Cloning attacks
In previous problems, we studied the ability of Alice and Bob to generate a key over a classical channel given some strict limitations on Eve’s ability. Now we aim to analyze BB84 in the context of a limited Eve. In particular, Eve will be limited to intercepting Alice’s message and attempting to copy it with one of the maps from HW2, problem 6. Recall that in the BB84 protocol Alice first generates random \( x_j, \theta_j \in \{0, 1\} \), and then sends \( N \) single-qubit states \( |x_j\rangle_{\theta_j} \), for \( j \in \{1, \ldots, N\} \), to Bob. Now suppose the eavesdropper Eve intercepts each of the states sent by Alice, and does the following: (i) With probability \( 1-p \), she applies the cloning map \( T_1 \) from Problem 6(a) in HW2. She keeps the second qubit and forwards the first qubit to Bob. (ii) With probability \( p \), she applies the cloning map \( T_2 \) from Problem 6(b) in HW2. She keeps the second qubit, traces out (i.e. ignores) the third qubit, and forwards the first qubit to Bob. For simplicity, assume \( N = 1 \). Based on the results of HW2 Problem 6 (you may consult the solution available online), evaluate the following. (In the solutions to HW2 it is proven that the map \( T_2 \) is equivalent to the map \( T_3 \); you should use whichever form you find most convenient.) (Due to Bolton Bailey)
[ { "context": "First, consider the case where \\( p = 0 \\), that is, Eve uses cloning map \\( T_1 \\) with certainty. Recall that the map \\( T_1 \\) takes\n\n\\[ T_1 : \\rho \\mapsto \\rho \\otimes \\frac{1}{2} I \\]\n\nSince Eve forwards the first qubit to Bob, Bob receives \\( \\rho \\) in this case. Then, if Bob correctly guesses \\( \\theta_1 \\), Bob is certain to get the correct measurement outcome.\n\nSecond, consider the case where \\( p = 1 \\), that is, Eve uses cloning map \\( T_2 \\) with certainty. Since Eve forwards the first qubit to Bob, to get the probability that Bob correctly measures his bit, we trace out the second and third bits produced by the map \\( T_2 \\). Whether a 0 or 1 is sent, this leaves us with a probability of \\( \\frac{5}{6} \\) that Bob gets the right state.\n\n\\[ q_B = \\frac{5}{6} \\]\n\nIn general the probability of Bob being correct is the sum of the probabilities that he is right in either case:\n\n\\[ 1(1 - p) + \\frac{5}{6} p = 1 - \\frac{1}{6} p \\]", "question": "### (a) Suppose Bob correctly guesses \\( \\theta = \\theta_1 \\) and measures his qubit in the corresponding basis. What is the probability that his measurement outcome is equal to \\( x = x_1 \\)? First compute this for the case \\( p = 0 \\). Next compute the probability for \\( p = 1 \\); call this value \\( q_B \\) for future reference. Finally, extend this to give the success probability as a function of the probability \\( p \\)." }, { "context": "First, consider the case where \\( p = 0 \\), that is, Eve uses cloning map \\( T_1 \\) with certainty. Recall that the map \\( T_1 \\) takes\n\n\\[ T_1 : \\rho \\mapsto \\rho \\otimes \\frac{1}{2} I \\]\n\nSince Eve keeps the second qubit, Eve keeps the maximally mixed state. Then, if Eve correctly guesses \\( \\theta_1 \\), she has a \\( \\frac{1}{2} \\) chance of getting the correct outcome.\n\nSecond, consider the case where \\( p = 1 \\), that is, Eve uses cloning map \\( T_2 \\) with certainty. 
Since Eve keeps the second qubit, to get the probability that Eve correctly measures her bit, we trace out the first and third qubits produced by the map \\( T_2 \\). Whether a 0 or 1 is sent, this leaves us with a probability of \\( \\frac{5}{6} \\) that Eve gets the right state.\n\n\\[ q_E = \\frac{5}{6} \\]\n\nIn general the probability of Eve being correct is the sum of the probabilities that she is right in either case:\n\n\\[ \\frac{1}{2} (1 - p) + \\frac{5}{6} p = \\frac{1}{2} + \\frac{1}{3} p \\]", "question": "### (b) Suppose Eve does the same, guessing \\( \\theta \\) correctly and measuring in the corresponding basis. As in part (a), compute her probability of success when \\( p = 0 \\), \\( p = 1 \\), and for general \\( p \\). For future reference, let \\( q_E \\) be the value when \\( p = 1 \\)." }, { "context": "Bob and Eve’s outcomes agree with each other and are correct only if both of the qubits produced by Eve’s cloning are correct.\n\nWhen \\( p = 0 \\), Bob is certain to be correct, and Eve has a \\( \\frac{1}{2} \\) chance of being correct, so the probability of correctness is \\( \\frac{1}{2} \\).\n\nWhen \\( p = 1 \\), both are correct only if, when we trace out the last qubit of the \\( T_2 \\) output, the remaining two qubits are both correct. This happens with probability \\( \\frac{2}{3} \\).\n\nIn fact, the probabilities of both being correct in each of these cases are just the cloning success probabilities for these two maps.\n\nThus, in general, the probability that both are correct is\n\n\\[ \\frac{1}{2} (1 - p) + \\frac{2}{3} p = \\frac{1}{2} + \\frac{1}{6} p \\]", "question": "### (c) What is the probability that Bob and Eve’s outcomes agree with each other and are correct? Give your answer as a function of \\( p \\). (Hint: This is related to the success probabilities of \\( T_1 \\) and \\( T_2 \\) as cloning maps.)" } ]
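The closed forms in parts (a)–(c) are easy to sanity-check numerically. Below is a minimal sketch in Python using exact rational arithmetic; the function names are ours, but the formulas are exactly the ones derived above.

```python
# Success probabilities under Eve's cloning attack, as functions of the
# mixing parameter p (part (a): Bob, part (b): Eve, part (c): both).
from fractions import Fraction

def bob_correct(p):          # part (a): 1*(1-p) + (5/6)*p
    return 1 * (1 - p) + Fraction(5, 6) * p

def eve_correct(p):          # part (b): (1/2)*(1-p) + (5/6)*p
    return Fraction(1, 2) * (1 - p) + Fraction(5, 6) * p

def both_correct(p):         # part (c): (1/2)*(1-p) + (2/3)*p
    return Fraction(1, 2) * (1 - p) + Fraction(2, 3) * p

p = Fraction(1, 2)
assert bob_correct(p) == 1 - p / 6                    # 1 - p/6
assert eve_correct(p) == Fraction(1, 2) + p / 3       # 1/2 + p/3
assert both_correct(p) == Fraction(1, 2) + p / 6      # 1/2 + p/6
```

At `p = 0` Bob is always correct while Eve is at chance; at `p = 1` the values reduce to the cloning success probabilities of `T_2`.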
2016-11-29T00:00:00
Thomas Vidick, Andrea Coladangelo, Jalex Stark, Charles Xu
Caltech
BB84 against a cloning attack
Let's continue the previous problem with the BB84 protocol. Alice and Bob know that Eve will implement a cloning attack, but they do not know \( p \) ahead of time. They will try to generate a key independent from Eve which is as long as possible. We will informally estimate the length of the key produced. (It is possible but more difficult to show that with high probability, Alice and Bob produce a key which is mostly independent from Eve and has almost our estimated length.) We now consider a number of rounds \( N = 4n \). Suppose that in \( 2n \) of the rounds (exactly), Bob happens to make the right basis choice; call these the agreement rounds, \( R \subset \{1, \ldots, N\} \). They select exactly \( n \) of these rounds for testing; call these rounds the testing rounds, \( T \subset R \). You may assume all rounds behave the same. (Due to Alex Meiburg)
[ { "context": "From 5a, Bob has a success rate of \\(1 - (1 - q_B)p\\) in each round, or a probability \\((1 - q_B)p\\) of getting an error in a round, so we expect\n\\[ (1 - q_B)pn \\]", "question": "### (a) We say that Bob succeeds in round \\( j \\) if his measurement outcome against \\([x_j]_{\\theta_j}\\) is equal to \\( x_j \\). If Bob does not succeed, we say there is an error. What is the expected number of errors that Alice and Bob will notice in the testing rounds \\( T \\), as a function of \\( q_B \\) and \\( p \\)?" }, { "context": "There are \\(n\\)-choose-\\(\\delta n\\) ways to have \\(\\delta n\\) bits corrupted of \\(n\\), which by Stirling's formula is approximately\n\\[ \\binom{n}{\\delta n} \\approx \\frac{1}{\\sqrt{2\\pi n \\delta (1 - \\delta)}} \\cdot \\frac{n^n}{(\\delta n)^{\\delta n} (n - \\delta n)^{(1 - \\delta)n}} \\approx \\frac{1}{\\sqrt{n \\delta}}\\, n^{\\delta n} \\]\nso that approximately \\(\\delta n \\log_2(n)\\) bits are required for information reconciliation.", "question": "### (b) Now suppose that Alice and Bob detect \\( \\delta n \\) errors in the testing rounds. They should expect to also see approximately \\( \\delta n \\) errors in the untested agreement rounds \\( R \\setminus T \\). They perform information reconciliation on Alice’s bits \\(\\{x_j\\}\\) and Bob’s measurement outcomes to generate a common key \\( k_A = k_B \\). How many bits do they need to exchange in order to perform the reconciliation, as a function of \\( \\delta \\) and \\( n \\)? (You may assume there are indeed at most \\( \\delta n \\) errors.)" }, { "context": "\\[ (1 - q_B)pn = \\delta n \\implies \\hat{p} = \\frac{\\delta}{1 - q_B} \\]", "question": "### (c) Now we’ll invert the bound from (a). What is Alice and Bob’s best guess \\( \\hat{p} \\) for \\( p \\), as a function of \\( q_B \\) and \\( \\delta \\)?"
}, { "context": "In each of the rounds, Eve has a \\(\\frac{1}{2}(1 - p) + q_E p\\) chance of guessing correctly, and the min entropy is the negative log of this, so\n\\[ H_{\\min}(A|E) = -\\log \\left( \\frac{1}{2}(1 - \\hat{p}) + q_E \\hat{p} \\right) = -\\log \\left( \\frac{1}{2} + \\left( q_E - \\frac{1}{2} \\right) \\frac{\\delta}{1 - q_B} \\right) \\]", "question": "### (d) Suppose Alice and Bob make a guess \\( \\hat{p} \\) for \\( p \\) based on the method from the previous question. Deduce a bound on the min-entropy \\( H_{\\min}(A|E) \\) per round that they could estimate for the rounds in \\( K = R \\setminus T \\). Give their estimate as a function of \\( q_E, q_B, \\delta \\)." }, { "context": "From class our affine extractor is a \\((K, \\epsilon)\\)-strong extractor for \\(K \\geq 2m - 2 \\log \\epsilon\\). Here we have \\(K\\) given by the \\(n - H_{\\min}\\) from (d), combined with the bits leaked out as (b) of \\(\\delta n \\log_2(n)\\), and \\(m\\) is the bits we can extract. So an \\(\\epsilon\\)-secure system can be accomplished with\n\\[ n - \\left( -\\log_2 \\left( \\frac{1}{2} + \\left( q_E - \\frac{1}{2} \\right) \\frac{\\delta}{1 - q_B} \\right) + \\delta n \\log_2(n) + 2 \\log_2(1/\\epsilon) \\right) \\]\nbits available of private key.", "question": "### (e) Finally Alice and Bob apply privacy amplification to their reconciled string. They start with the min-entropy guarantee computed in (d) and leak as many bits as computed in (b) to Eve. Using the best privacy amplification method you know (e.g. as seen in class), how much private key can they extract? Express your answer as a function of \\( \\delta, q_E, q_B, n \\)." 
}, { "context": "We have \\(\\delta \\approx p(1 - q_B)\\), so\n\\[ n - \\left( -\\log_2 \\left( \\frac{1}{2} + \\left( q_E - \\frac{1}{2} \\right) p \\right) + \\delta n \\log_2(n) + 2 \\log_2(1/\\epsilon) \\right) \\]\nbits of private key available.", "question": "### (f) Estimate \\( \\delta \\) in terms of \\( p \\) as in part (a). Using your values of \\( q_E \\) and \\( q_B \\) from problem 5, how much key can they expect to extract as a function of \\( p \\) and \\( n \\)?" } ]
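The chain of estimates in parts (a)–(d) can be tabulated numerically. The sketch below plugs in \( q_B = q_E = 5/6 \) from the previous problem; the function names are ours, and the formulas are the ones derived above.

```python
# Numerical sketch of the estimates in parts (a)-(d), with
# q_B = q_E = 5/6 taken from the previous problem.
from math import log2

q_B = q_E = 5 / 6

def error_rate(p):
    """Expected fraction of errors in the testing rounds (part (a))."""
    return (1 - q_B) * p

def p_hat(delta):
    """Alice and Bob's estimate of p from the observed error rate (part (c))."""
    return delta / (1 - q_B)

def h_min_per_round(delta):
    """Estimated min-entropy per round of Alice's bit given Eve's view (part (d))."""
    return -log2(0.5 + (q_E - 0.5) * p_hat(delta))

# No errors: Eve holds a maximally mixed state, so exactly 1 bit of entropy.
assert h_min_per_round(0.0) == 1.0
# Eve always clones (p = 1, so delta = 1/6): her guessing probability is q_E = 5/6.
assert abs(h_min_per_round(error_rate(1.0)) - (-log2(5 / 6))) < 1e-12
```

These components feed directly into the privacy-amplification expression written in part (e).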
2016-11-29T00:00:00
Thomas Vidick, Andrea Coladangelo, Jalex Stark, Charles Xu
Caltech
Attack on DL-Based Signature Schemes
In what follows, we consider a cyclic group of order \( q \) generated by some element \( g \). We let \( \langle g \rangle \) denote this group and we take multiplicative notations. We let 1 denote the neutral element. We assume that comparing and multiplying two group elements is easy and that inverting an element is easy. We assume that the discrete logarithm problem is hard in this group. In particular, we assume that \( q > 2^{160} \). We further assume that we have a hash function \( G \) mapping an arbitrary group element to a \( \mathbb{Z}_q \) element and a hash function \( H \) mapping an arbitrary bitstring to a \( \mathbb{Z}_q \) element.
[ { "context": "*The difference with DSA is that r is hashed using G instead of being reduced modulo q and that k is small.*\n\n*We have*\n\\[ g^{\\frac{H(m)}{s} \\mod q} \\cdot y^{\\frac{r}{s} \\mod q} = g^{\\frac{H(m) + xr}{s} \\mod q} = g^k \\]\n\n*As \\( G(g^k) = r \\), the verification succeeds.*", "question": "### Q.1\nWe consider a digital signature scheme (inspired by DSA) in which the key generation and the signature algorithm work as follows:\n\n**Key generation:**\n1. pick \\( x \\in \\mathbb{Z}_q \\) with uniform distribution\n2. compute \\( y = g^x \\)\n3. set the secret key to x and the public key to y\n\n**Sign m using key x:**\n1. pick \\( k \\in \\{1, 2, \\ldots, 2^{128}\\} \\) with uniform distribution\n2. compute \\( r = G(g^k) \\)\n3. compute \\( s = \\frac{H(m) + xr}{k} \\mod q \\)\n4. set the signature to (r, s)\n\n**Verify signature (r, s) for m using key y:**\n1. check that \\( G \\left( g^{\\frac{H(m)}{s} \\mod q} \\cdot y^{\\frac{r}{s} \\mod q} \\right) = r \\)\n\nProve that under an honest execution, a signature is always correct." }, { "context": "*As G hashes on a domain of size q and k is selected on a domain of size 2^{128}, and q \\gg 2^{128}, collisions on k are more probable than collisions on G. So, we assume that r_i = r_j is due to k_i = k_j. Note that we could be a bit more precise using the Bayes formula:*\n\n\\[\n\\Pr[k_i = k_j \\mid r_i = r_j] = \\frac{\\Pr[k_i = k_j]}{\\Pr[r_i = r_j]} = \\frac{\\Pr[k_i = k_j]}{\\Pr[r_i = r_j \\mid k_i = k_j] \\Pr[k_i = k_j] + \\Pr[r_i = r_j \\mid k_i \\neq k_j] \\Pr[k_i \\neq k_j]}\n\\]\n\n*As \\( \\Pr[k_i = k_j] \\approx 2^{-128} \\) and \\( \\Pr[r_i = r_j \\mid k_i \\neq k_j] \\approx \\frac{1}{q} \\), we obtain*\n\n\\[\n\\Pr[k_i = k_j \\mid r_i = r_j] \\approx \\frac{1}{1 + \\frac{1}{q}(2^{128} - 1)} \\approx 1\n\\]\n\n*If \\( k_i = k_j \\) happens, then \\( s_i / s_j \\equiv \\frac{H(m_i) + x r_i}{H(m_j) + x r_j} \\mod q \\) so*\n\n\\[\ns_i (H(m_j) + x r_j) \\equiv s_j (H(m_i) + x r_i) \\mod q\n\\]\n\n*in which \\( m_i, m_j, r_i, r_j, s_i, s_j \\) are known. 
So, we can easily solve this equation in \\( x \\):*\n\n\\[\nx = \\frac{s_j H(m_i) - s_i H(m_j)}{s_i r_j - s_j r_i} \\mod q\n\\]\n\n*Collisions on k happen after \\( n \\approx \\sqrt{2^{128}} = 2^{64} \\) due to the birthday paradox.*", "question": "### Q.2\nAssume that an adversary collects many signed messages \\((m_i, r_i, s_i)\\) for \\(i = 1, 2, \\ldots, n\\). If \\(r_i = r_j\\) for \\(i < j\\), show that the adversary can easily make a key recovery attack. How large must \\(n\\) be for this to happen?\n\n**HINT**: first prove by an informal probability estimate that \\(r_i = r_j\\) is most likely due to \\(k_i = k_j\\)." }, { "context": "*We have*\n\n\\[\ng^{\\frac{H(m)}{s} \\mod q} \\cdot y_1^{\\frac{r_1}{s} \\mod q} \\cdot y_2^{\\frac{r_2}{s} \\mod q} = g^k\n\\]\n\n*so we can propose to verify*\n\n\\[\nG_i \\left( g^{\\frac{H(m)}{s} \\mod q} \\cdot y_1^{\\frac{r_1}{s} \\mod q} \\cdot y_2^{\\frac{r_2}{s} \\mod q} \\right) = r_i\n\\]\n\n*for \\(i = 1\\) and \\(i = 2\\).*", "question": "### Q.3\nTo defeat the previous attack, our usual crypto apprentice designs the following signature scheme:\n\n**Key generation:**\n1. pick \\(x_1 \\in \\mathbb{Z}_q\\) with uniform distribution\n2. pick \\(x_2 \\in \\mathbb{Z}_q\\) with uniform distribution\n3. compute \\(y_1 = g^{x_1}\\) and \\(y_2 = g^{x_2}\\)\n4. set the secret key to \\((x_1, x_2)\\) and the public key to \\((y_1, y_2)\\)\n\n**Sign m using key \\((x_1, x_2)\\):**\n1. pick \\(k \\in \\{1, 2, \\ldots, 2^{128}\\}\\) with uniform distribution\n2. compute \\(r_1 = G_1(g^k)\\) and \\(r_2 = G_2(g^k)\\)\n3. compute \\(s = \\frac{H(m) + x_1 r_1 + x_2 r_2}{k} \\mod q\\)\n4. set the signature to \\((r_1, r_2, s)\\)\n\nwhere we now use two independent hash functions \\(G_1\\) and \\(G_2\\) to hash group elements onto \\(\\mathbb{Z}_q\\).\n\nPropose a verification algorithm and prove that it works." }, { "context": "*Given \\(i, j, \\ell\\) fixed, the probability that \\(k_i = k_j = k_\\ell\\) is \\(1/N^2\\) with \\(N = 2^{128}\\). We have*\n\n\\[\n\\binom{n}{3} = \\frac{n(n-1)(n-2)}{6}\n\\]\n\n*such triplets. 
So, by taking \\(n = (4N^2)^{\\frac{1}{3}} = 2^{86}\\), we should obtain a 3-collision with good probability. More precisely, the probability should be*\n\n\\[\np \\approx 1 - \\left( 1 - \\frac{1}{N^2} \\right)^{\\frac{n(n-1)(n-2)}{6}} \\approx 1 - e^{-\\frac{n^3}{6N^2}} = 1 - e^{-\\frac{2}{3}} \\approx 49\\%\n\\]\n\n*Such \\(n\\) is indeed too large to be realistic.*", "question": "### Q.4\nThe idea of the crypto apprentice is that to adapt the attack of Q.2 to this new scheme, one needs to find \\(i, j, \\ell\\) such that \\(i < j < \\ell\\) and \\(k_i = k_j = k_\\ell\\). With appropriate approximations, prove that we need \\(n \\approx 2^{86}\\) to have good chances that such \\(i, j, \\ell\\) exist and conclude that this attack has too high a complexity.\n\n**HINT**: approximate \\(\\log \\Pr[\\text{no 3-collision}]\\)." }, { "context": "*Given one collision \\(k_i = k_j\\), the values of \\(r_1\\) and \\(r_2\\) are the same for the two signatures. We deduce the common value \\(x_1 r_1 + x_2 r_2 \\mod q\\) with known \\(r_1\\) and \\(r_2\\) coming from this collision. Given a second collision, we obtain another value \\(x_1 r_1' + x_2 r_2' \\mod q\\) with known \\(r_1'\\) and \\(r_2'\\). Hence, we can solve these two linear equations in \\(x_1\\) and \\(x_2\\).*\n\n*We need two collisions. For that, we only need to take \\(n\\) a bit larger than \\(\\sqrt{N}\\). Indeed, the probability to have 2 collisions or more is*\n\n\\[\np \\approx 1 - \\left( 1 - \\frac{1}{N} \\right)^{\\frac{n(n-1)}{2}} - \\frac{n(n-1)}{2N} \\left( 1 - \\frac{1}{N} \\right)^{\\frac{n(n-1)}{2} - 1} \\approx 1 - \\left( 1 + \\frac{n^2}{2N} \\right) e^{-\\frac{n^2}{2N}}\n\\]\n\n*So, with \\(n = 2\\sqrt{N}\\), we obtain \\(p \\approx 59\\%\\). Hence \\(n = 2^{65}\\) suffices to break the scheme.*", "question": "### Q.5\nIgnore the idea with 3-collisions and prove that two regular 2-collisions would suffice to break the new scheme. 
Say how large \\(n\\) should be for this better attack to work.\n\n**NOTE**: we do not require a formula to give \\(x_1\\) and \\(x_2\\)." }, { "context": "*In the new scheme, the signer has no exponential to compute. The value of the register \\(e\\) is always \\( e = g^k \\). Two consecutive signatures (r, s) and (r', s') on messages m and m' are computed by \\( r = G(g^k), r' = G(g^{k+1}), s = \\frac{H(m) + xr}{k} \\mod q, \\text{ and } s' = \\frac{H(m') + xr'}{k+1} \\mod q. \\) Hence, we have*\n\n\\[\n\\left( \\begin{array}{cc} r & -s \\\\ r' & -s' \\end{array} \\right) \\times \\left( \\begin{array}{c} x \\\\ k \\end{array} \\right) = \\left( \\begin{array}{c} -H(m) \\\\ -H(m') + s' \\end{array} \\right) \\mod q\n\\]\n\n*and we deduce*\n\n\\[\nx = \\frac{ss' + s'H(m) - sH(m')}{sr' - s'r} \\mod q\n\\]", "question": "### Q.6\nUpset, the crypto apprentice decides to avoid collisions by using a counter in the following scheme:\n\n**Key generation:**\n1. pick \\(x \\in \\mathbb{Z}_q\\) with uniform distribution\n2. compute \\(y = g^x\\)\n3. set the secret key to \\(x\\) and the public key to \\(y\\)\n4. set the counter \\(k\\) to a random number\n5. set the \\(e\\) register to \\( g^k \\)\n\n**Sign m using key x:**\n1. increment the counter \\(k\\)\n2. set \\(e\\) to \\(eg\\)\n3. compute \\(r = G(e)\\)\n4. compute \\( s = \\frac{H(m) + xr}{k} \\mod q \\)\n5. set the signature to (r, s)\n\nDesign a key-recovery attack for this scheme using two signatures." }, { "context": "*Again, we always have \\( e = g^k \\), and the values of \\( k \\) and \\( e \\) are updated consistently while minimizing the cost for the signer. 
We have \\( k = k_0 + i \\times \\text{inc} \\), where \\( k_0 \\) is the initial value of \\( k \\).*\n\n*With 3 consecutive signatures \\((m_i, r_i, s_i)\\) for \\( i = 1, 2, 3 \\), we have equations of form*\n\n\\[\n\\begin{aligned}\ns_1 k_1 &= H(m_1) + x r_1 \\\\\ns_2 (k_1 + \\text{inc}) &= H(m_2) + x r_2 \\\\\ns_3 (k_1 + 2 \\text{inc}) &= H(m_3) + x r_3\n\\end{aligned}\n\\]\n\n*modulo \\( q \\), where the unknowns are \\( k_1 \\), \\( \\text{inc} \\), and \\( x \\). So, this is a linear system which can be easily solved.*", "question": "### Q.7\nWhat if we now use the following scheme?\n\n**Key generation:**\n1. pick \\( x \\in \\mathbb{Z}_q \\) with uniform distribution\n2. compute \\( y = g^x \\)\n3. set the secret key to x and the public key to y\n4. set the counter k to a random number\n5. pick \\( \\text{inc} \\in \\mathbb{Z}_q^* \\) with uniform distribution\n6. set the e register to \\( g^k \\)\n7. set the e' register to \\( g^{\\text{inc}} \\)\n\n**Sign m using key x:**\n1. set k to \\( k + \\text{inc} \\)\n2. set e to \\(ee'\\)\n3. compute r = G(e)\n4. compute \\( s = \\frac{H(m) + xr}{k} \\mod q \\)\n5. set the signature to (r, s)" } ]
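The key-recovery formula from Q.2 can be exercised on a toy instance. Everything below is an illustrative stand-in: a subgroup of order \(q = 11\) inside \(\mathbb{Z}_{23}^*\) and ad-hoc hash functions G and H, far below the \(q > 2^{160}\) required in practice.

```python
# Toy demonstration of the Q.2 key-recovery attack on the DSA-like scheme.
P, q, g = 23, 11, 4          # g = 4 generates a subgroup of order 11 in Z_23*

def G(elem):                 # toy hash: group element -> Z_q
    return elem % q

def H(msg):                  # toy hash: bitstring -> Z_q
    return sum(msg.encode()) % q

def sign(x, m, k):           # r = G(g^k),  s = (H(m) + x*r)/k  mod q
    r = G(pow(g, k, P))
    s = (H(m) + x * r) * pow(k, -1, q) % q
    return r, s

x = 7                        # the secret key
r1, s1 = sign(x, "hello", k=3)
r2, s2 = sign(x, "world", k=3)       # repeated nonce k  =>  r1 == r2
assert r1 == r2

# Key recovery:  x = (s2*H(m1) - s1*H(m2)) / (s1*r2 - s2*r1)  mod q
num = (s2 * H("hello") - s1 * H("world")) % q
den = (s1 * r2 - s2 * r1) % q
recovered = num * pow(den, -1, q) % q
assert recovered == x
```

The recovery works because the repeated nonce forces \(r_1 = r_2\), turning the two signing equations into a single linear equation in \(x\).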
2016-01-14T00:00:00
Serge Vaudenay
EPFL
RSA in an Extension Ring
[ { "context": "Since \\( p \\mod 4 = 3 \\), we have \\( (-1)^{\\frac{p-1}{2}} = (-1) \\) so \\(-1\\) is not a quadratic residue in \\( \\mathbb{Z}_p \\). Hence, \\( x^2 + 1 \\) has no root in \\( \\mathbb{Z}_p \\). As it is of degree 2, this implies that it is irreducible in \\( \\mathbb{Z}_p[x] \\). (Otherwise, we could factor it as some \\( x^2 + 1 = (ax + b)(cx + d) \\) with \\( a \\) and \\( c \\) nonzero and we would obtain the roots \\(-b/a\\) and \\(-d/c\\).)", "question": "### Q.1\nLet \\( p \\) be a prime number such that \\( p \\mod 4 = 3 \\). We consider the polynomial \\( x^2 + 1 \\) in the ring \\( \\mathbb{Z}_p[x] \\) of polynomials in the indeterminate \\( x \\), with coefficients in \\( \\mathbb{Z}_p \\). Prove that \\( x^2 + 1 \\) is irreducible." }, { "context": "That is actually the standard construction of the finite field \\( \\text{GF}(p^2) \\) since \\( x^2 + 1 \\) is monic, irreducible, and of degree 2.\n\nBy reducing an arbitrary polynomial modulo \\( x^2 + 1 \\), we always obtain a polynomial of degree bounded by 1. It can be written \\( a + bx \\) for two coefficients \\( a \\) and \\( b \\). Now, no two distinct such elements can be equal modulo \\( x^2 + 1 \\): if \\( a + bx \\equiv a' + b'x \\mod x^2 + 1 \\), it means that \\( (b - b')x + (a - a') \\) is a multiple of \\( x^2 + 1 \\), which implies that \\( b - b' = 0 \\) and \\( a - a' = 0 \\), hence \\( a = a' \\) and \\( b = b' \\). So, we have exactly \\( p^2 \\) elements in \\( K \\).\n\nBy construction, we obtain a ring. We further check that every nonzero element \\( a + bx \\) is invertible. Indeed, the function \\( (c + dx) \\rightarrow (a + bx)(c + dx) \\mod (x^2 + 1) \\mod p \\) is linear and has no nonzero preimage of 0. (Indeed, if \\( (a + bx)(c + dx) = 0 \\mod x^2 + 1 \\), then \\( (a + bx)(c + dx) \\) is a multiple of \\( x^2 + 1 \\). 
If both \\( a + bx \\) and \\( c + dx \\) are nonzero, then since their degrees are bounded by 1 while their product has degree exactly 2, both must have degree exactly 1; so each would be a degree-1 divisor of \\( x^2 + 1 \\), which is impossible since \\( x^2 + 1 \\) is irreducible. So, either \\( a + bx \\) or \\( c + dx \\) must be zero.) So, this linear function is a bijection of \\( K \\) and it has a preimage of 1 which is the inverse of \\( a + bx \\). Therefore, \\( K \\) is a field of \\( p^2 \\) elements.\n\n(We could have a shorter proof with more background in algebra.)", "question": "### Q.2\nLet \\( p \\) be a prime number such that \\( p \\mod 4 = 3 \\). We consider the set \\( K = \\mathbb{Z}_p[x]/(x^2 + 1) \\) of all polynomials over \\( \\mathbb{Z}_p \\), taken modulo \\( x^2 + 1 \\). This defines the addition and the multiplication over \\( K \\). (This is just the regular addition and multiplication of polynomials reduced modulo \\( x^2 + 1 \\) and modulo \\( p \\).) Give the cardinality of \\( K \\) and say what type of algebraic structure it has. Justify your answer." }, { "context": "One way is to count the number of non-invertible elements. First, we can use the fact that an element \\( a + bx \\) being non-invertible is equivalent to \\( a \\) and \\( b \\) being either both divisible by \\( p \\) or both divisible by \\( q \\). The \\( \\Leftarrow \\) implication is trivial as a product with any candidate for the inverse would stay divisible by \\( p \\) or \\( q \\) and 1 is not. For the \\( \\Rightarrow \\) implication, we show that if among \\( a \\) and \\( b \\) there is at least one which is not divisible by \\( p \\) and one which is not divisible by \\( q \\) then \\( a + bx \\) is invertible. For that, we first observe that \\( a^2 + b^2 \\) is nonzero modulo \\( p \\) (otherwise, \\( a/b \\) or \\( b/a \\) would be a square root of \\(-1\\) modulo \\( p \\), which is impossible), and similarly nonzero modulo \\( q \\), so it is invertible modulo \\( n \\). 
Then, we realize that \\( (a - bx)/(a^2 + b^2) \\mod n \\) is the inverse of \\( a + bx \\).\n\nIf \\( N_p \\) (resp. \\( N_q, N_n \\)) is the number of elements which are divisible by \\( p \\) (resp. \\( q, n \\)), by the principle of inclusion/exclusion, we have a number of invertible elements equal to \\( n^2 - N_p - N_q + N_n \\). As \\( N_p = q^2, N_q = p^2, \\) and \\( N_n = 1 \\), we have \\( n^2 - p^2 - q^2 + 1 \\) invertible elements, which is \\( (p^2 - 1)(q^2 - 1) \\).\n\nWe now show the same using the Chinese remainder theorem. As \\( (a + bx) + (c + dx) = (a + c) + (b + d)x \\) and\n\\[ (a + bx) \\times (c + dx) \\equiv (ac - bd) + (ad + bc)x \\quad (\\text{mod } x^2 + 1) \\]\n\n\\( R \\) is isomorphic to \\( \\mathbb{Z}_n^2 \\) where we define\n\\[ (a, b) + (c, d) = (a + c, b + d) \\]\nand\n\\[ (a, b) \\times (c, d) = (ac - bd, ad + bc) \\]\nwhere all numbers are taken modulo \\( n \\). These operations are polynomial modulo \\( n \\). So, due to the Chinese Remainder Theorem, all operations over \\( \\mathbb{Z}_n \\) are equivalent to operations over \\( \\mathbb{Z}_p \\times \\mathbb{Z}_q \\) (note that \\( p \\) and \\( q \\) are different primes, so they are coprime). So, \\( R \\) is isomorphic to \\( \\mathbb{Z}_p^2 \\times \\mathbb{Z}_q^2 \\) where the operations over \\( \\mathbb{Z}_p^2 \\) and \\( \\mathbb{Z}_q^2 \\) are defined as above, like in \\( \\mathbb{Z}_n^2 \\). These structures are isomorphic to \\( \\mathbb{Z}_{p^2}[x]/(x^2 + 1) \\) and \\( \\mathbb{Z}_{q^2}[x]/(x^2 + 1) \\). So, we obtain that \\( R \\) is isomorphic to \\( \\mathbb{Z}_{p^2}[x]/(x^2 + 1) \\times \\mathbb{Z}_{q^2}[x]/(x^2 + 1) \\) which is a ring obtained by the product of two finite fields GF(\\( p^2 \\)) and GF(\\( q^2 \\)). In a product of two fields, an element is invertible if and only if both components are nonzero. 
So, we have \\( \\phi = (p^2 - 1)(q^2 - 1) \\) invertible elements.", "question": "### Q.3\nLet \\( p \\) and \\( q \\) be two different prime numbers such that \\( p \\mod 4 = q \\mod 4 = 3 \\). Let \\( n = pq \\). Let \\( R = \\mathbb{Z}_n[x]/(x^2 + 1) \\) be the set of all polynomials over \\( \\mathbb{Z}_n \\) taken modulo \\( x^2 + 1 \\). We want to construct an RSA-like cryptosystem over \\( R \\).\n\nProve that there are exactly \\( \\phi = (p^2 - 1)(q^2 - 1) \\) invertible elements in \\( R \\).\n\n**HINT**: either count or think Chinese." }, { "context": "For \\( m \\in R \\), we want to have \\( m^{ed} = m \\). So, \\( m^{ed-1} = 1 \\) for all \\( m \\in R^* \\). Hence, \\( ed - 1 \\) must be a multiple of all element orders. One way to achieve this is to take \\( ed \\mod \\phi = 1 \\). So, we can take \\( e \\) such that \\( \\gcd(e, \\phi) = 1 \\) and take \\( d \\) as the inverse of \\( e \\) modulo \\( \\phi \\).\nTo be more precise, we can see from the previous question that \\( R^* \\) includes a subgroup isomorphic to GF(\\( p^2 \\))* which is cyclic. (If we write \\( R \\sim GF(p^2) \\times GF(q^2) \\), this is the subgroup of all (t,1) for t \\in GF(p^2)*.) So, \\( ed - 1 \\) must be a multiple of \\( p^2 - 1 \\). This is the same for \\( q^2 - 1 \\). So, \\( ed - 1 \\) must be a multiple of \\( \\lambda = \\text{lcm}(p^2 - 1, q^2 - 1) \\), which is the exponent of \\( R^* \\). So, we can take \\( e \\) such that \\( \\gcd(e, \\lambda) = 1 \\) and take \\( d \\) as the inverse of \\( e \\) modulo \\( \\lambda \\).\nOnce we have seen that \\( m^{ed} = m \\) for all \\( m \\in R^* \\), we can verify it for all \\( m \\in R \\) by using the Chinese Remainder Theorem: for any polynomial \\( m \\), if \\( m \\equiv 0 \\) modulo \\( x^2 + 1 \\) and \\( p \\), then \\( m^{ed} \\equiv 0 \\equiv m \\) as well. Otherwise, \\( m \\) is invertible modulo \\( x^2 + 1 \\) and \\( p \\), and \\( ed - 1 \\) is a multiple of its order, so \\( m^{ed} = m \\) anyway. 
Hence, \\( m^{ed} \\equiv m \\mod x^2 + 1 \\) and \\mod p. It is the same \\mod q. So, \\( m^{ed} \\equiv m \\mod x^2 + 1 \\) and \\mod n.", "question": "### Q.4\nUnder the same hypothesis as in Q.3, we want to encrypt an element \\( m \\in R \\) by computing \\( m^e \\) and to decrypt by raising to the power \\( d \\). How to set \\( e \\) and \\( d \\) for the decryption to work correctly? Justify your answer." }, { "context": "We must take \\( ed \\mod \\phi = 1 \\) so \\( \\gcd(e, \\phi) = 1 \\).\nWe cannot take \\( p = 3 \\), otherwise it is trivial to factor \\( n \\). So, \\( p \\) is coprime with 3. Hence, either \\( p + 1 \\) or \\( p - 1 \\) is a multiple of 3. We deduce that \\( p^2 - 1 \\) is always a multiple of 3. So, \\( e = 3 \\) is not invertible modulo \\( \\phi \\) (and not modulo \\( \\lambda \\) either). Therefore, \\( e = 3 \\) is not possible.", "question": "### Q.5\nIn the context of Q.4, can we take \\( e = 3 \\)? Justify your answer." }, { "context": "If \\( p \\mod 10 = 7 \\) then \\( p \\mod 5 = 2 \\) so \\( p^2 - 1 \\mod 5 = 3 \\). If \\( q \\mod 10 = 3 \\) then \\( q \\mod 5 = 3 \\) so \\( q^2 - 1 \\mod 5 = 3 \\). So, \\( \\phi \\mod 5 = 4 \\) so 5 does not divide \\( \\phi \\). Since 5 is prime, we deduce that 5 is invertible modulo \\( \\phi \\). So, we can take \\( e = 5 \\).", "question": "### Q.6\nBy selecting the last decimal digit of \\( p \\) to be equal to 7 and the last decimal digit of \\( q \\) to be equal to 3, prove that we can always use \\( e = 5 \\) in the previous construction." }, { "context": "So far, the best way to break RSA is to factor \\( n \\). So, we have to take \\( n \\) long enough to make it hard to factor. If we can factor \\( n \\), we can also break the new cryptosystem. So, in practice, the length requirements on \\( n \\) are the same for RSA and the new cryptosystem.\nClearly, the \\( e \\) exponent may have the same size but the \\( d \\) exponent is twice larger. 
The messages are twice as large as well (so we can encrypt more).\nWith the same modulus length, the complexity of the new scheme is larger than for RSA (say four times larger if we implement multiplication in a straightforward way). So, there seems to be no advantage in using this new cryptosystem.", "question": "### Q.7\nIs there any advantage of this cryptosystem compared to RSA? Explain why.\n\n**HINT**: compare the security with respect to modulus size, key and message lengths, and complexities." } ]
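The ring arithmetic of Q.3–Q.6 can be tried on a toy instance. The parameters below are assumptions for illustration only: \(p = 7\) and \(q = 23\) are both congruent to 3 modulo 4, with last decimal digits 7 and 3, so \(e = 5\) is valid by Q.6.

```python
# Toy instance of the RSA-like scheme over R = Z_n[x]/(x^2 + 1).
p, q = 7, 23                    # both = 3 (mod 4); last digits 7 and 3 (Q.6)
n = p * q
phi = (p**2 - 1) * (q**2 - 1)   # number of invertible elements (Q.3)
e = 5                           # coprime with phi by Q.6
d = pow(e, -1, phi)             # decryption exponent as in Q.4

def mul(u, v):
    """(a + bx)(c + dx) = (ac - bd) + (ad + bc)x  modulo (x^2 + 1, n)."""
    (a, b), (c, dd) = u, v
    return ((a * c - b * dd) % n, (a * dd + b * c) % n)

def ring_pow(m, exp):
    """Square-and-multiply in R; (1, 0) is the ring's unit element."""
    result, base = (1, 0), m
    while exp:
        if exp & 1:
            result = mul(result, base)
        base = mul(base, base)
        exp >>= 1
    return result

m = (12, 34)                    # the message a + bx
c = ring_pow(m, e)              # encryption: m^e
assert ring_pow(c, d) == m      # decryption: c^d recovers the message
assert mul((0, 1), (0, 1)) == (n - 1, 0)   # x * x = -1 in R
```

Note that \(d\) is taken modulo \(\phi\) here; as observed in Q.4, the inverse modulo \(\lambda = \mathrm{lcm}(p^2-1, q^2-1)\) would also work.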
2016-01-14T00:00:00
Serge Vaudenay
EPFL
On Securing Biometric Passports
A biometric passport is an identity document with a contactless chip. Reading the digital identity works like this: 1. The reader first reads the low-entropy password \( w \) which is printed inside the passport. 2. The reader sends a standard RFID broadcast signal and the chip responds. 3. The chip requests to go through a password-based key agreement. The password \( w \) is the input of the protocol on the reader side. On the chip side, there is a long-term public/secret key pair \( pk/sk \) and \( w \). (sk is stored in the chip but is not accessible to the reader.) At the end of the protocol, the output on both sides is a symmetric key \( K \). 4. The reader and the chip communicate securely by using this key \( K \). 5. Through this secure communication, the reader can retrieve some files containing the identity information \( ID \), a biometric reference template \( bio \), the public key \( pk \) again, and a signature \( \sigma \) from the issuing country that \( (ID, bio, pk) \) is correct. 6. The reader extracts from \( ID \) the field country indicating the issuing country. It is assumed that the reader has previously got in a secure way the root certificate \( C_{country} \) from the issuing country so that he can verify \( \sigma \). Then, the reader has obtained \( (ID, bio) \) which can then be used to identify the person. We further describe BAC, the original password-based key agreement protocol which is in the standard. In this exercise, some questions are specific to BAC. BAC makes no use of any pk/sk pair. 
It works as follows: the reader and the chip derive \( K_{init} = KDF(w) \) using a key derivation function, select some random nonces \( N_r \) (for the reader) and \( N_c \) (for the chip) and some keys \( K_r \) (for the reader) and \( K_c \) (for the chip); the chip sends \( N_c \) in clear to the reader; the reader sends \( (N_r, N_c, K_r) \) securely (using \( K_{init} \)) to the chip; the chip checks that \( N_c \) is correct and sends \( (N_c, N_r, K_c) \) securely (using \( K_{init} \)) to the reader; the reader checks that \( N_r \) is correct; the reader and the chip derive \( K = KDF(K_r, K_c) \). ``` Reader (input: w)                              Chip (input: w) K_init <- KDF(w)                               K_init <- KDF(w) pick N_r, K_r                                  pick N_c, K_c          <---------------- N_c -----------------          ---- Enc_{K_init}(N_r, N_c, K_r) ----->                                                decrypt, check N_c          <--- Enc_{K_init}(N_c, N_r, K_c) ------ decrypt, check N_r K <- KDF(K_r, K_c)                             K <- KDF(K_r, K_c) output: K                                      output: K ```
[ { "context": "What the chip needs is just w, ID, bio, and σ. Someone who has seen the passport knows w. So, he can access the chip and read all other information. This information could then be stored in a blank biometric passport. Then, the reader sees no difference between the original passport and the copied one as they contain the same information. This was fixed in the original standard by having an additional “Active Authentication” protocol that uses the pk/sk pair with public-key cryptography. As the value of sk cannot be copied (it is not accessible), we cannot make a copy which simulates the protocol knowing sk. But we could also replace BAC with a better password-based key agreement protocol instead of adding another protocol.", "question": "### Q.1 If the password-based key agreement protocol makes no use of any pk/sk pair like in BAC, prove that when the holder shows his biometric passport to someone (for instance, at the hotel check-in counter), this person can easily copy the passport. How could this be fixed?" }, { "context": "With w compromised, it is then easy to trace the movements of the passport as we can easily recognize it by running the protocol again. We can also start communication with the chip and read the private data (ID, bio, σ). In both cases, this is a privacy concern. If, like in the case of Q.1, the protocol makes no use of sk, the additional threat related to having read (ID, bio, σ) is that one could make a digital copy of the passport. But this is not really the question here. The main threat remains the loss of privacy, regardless of the use of sk. A possible scenario is that we obtain w from a legitimate physical access to the document (for instance, at a hotel check-in desk), then implement sensors to trace the holder carrying his passport. 
We can recognize it when he enters again in the hotel, or in a shop, etc.", "question": "### Q.2 If an adversary has obtained w (by whatever means), what is the threat for the holder of the passport? Describe a possible scenario." }, { "context": "A passive adversary can get Nc (which is sent in clear) and c = Enc_{K_init}(Nr, Nc, Kr). Then, he can do an exhaustive search on w (which has a low entropy) until Dec_{KDF(guess)}(c) is of form (., Nc, .). This way, the adversary recovers w. Even worse: the adversary can then decrypt the messages and deduce K, then decrypt further communication which transmits the private data. The only way to fix it is to use public-key cryptography and a correct PAKE protocol which is secure against offline exhaustive search instead of BAC.", "question": "### Q.3 If we use BAC as a password-based key agreement protocol, prove that the password w and all transmitted data can be recovered in clear with a passive offline exhaustive search. How could we replace BAC to avoid this attack?" }, { "context": "The digital signature σ is transferable because it can be perfectly copied: copies of a digital signature are undeniable. The important cryptographic property is that (copies of) the signature are transferable and undeniable because they are unforgeable. The original stamped document is assumed to be unforgeable. Although it can be copied, it cannot be perfectly copied as copies could be forged. So, photocopies are deniable. So, the original document is a proof while its copies are not. The threat is for people who want to hide some sensitive part of their identity (such as the exact age, official name, citizenship, etc) as any copy with σ which is disclosed would leak evidence of the private data. To fix this, the passport should not give the transferable σ but rather go through an interactive proof of knowledge for a valid signature. This interactive proof should be deniable (e.g., zero-knowledge). 
Note: again, this question is not really about copying passports. If we use sk, we can tell genuine passports and their copies apart. The question is about copying the signature for publication.", "question": "### Q.4 One difference between a regular identity document (with ID and a picture bio printed) with an official stamp σ and a digital document (ID, bio, pk) with a digital signature σ is that we cannot use a photocopy of the stamped document as a proof, whereas we can use an electronic copy of the digital signed document like the original one." } ]
2016-01-14T00:00:00
Serge Vaudenay
EPFL
3-Collisions
*This exercise is inspired by “Improved Generic Algorithms for 3-Collisions” by Joux and Lucks, ASIACRYPT’09, LNCS vol. 5912, pp. 347–363, 2009.* Let \( f \) be a random-looking function from a set \( X \) to a set \( \mathcal{Y} \). Let \( N \) denote the cardinality of \( \mathcal{Y} \). We call an \( r \)-collision a set \( \{x_1, \ldots, x_r\} \) of \( r \) elements of \( X \) such that \( f(x_i) = f(x_j) \) for every \( i \) and \( j \).
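For intuition, an \( r \)-collision can be found by brute force on a toy instance. In the sketch below, the function `f` (SHA-256 truncated onto \( N = 4096 \) values) is an assumed stand-in for a random-looking function, and the method is naive bucketing, not the memory-efficient algorithms studied in this exercise.

```python
import hashlib
from collections import defaultdict

def f(x: int) -> int:
    # toy random-looking function onto a set Y of size N = 4096 (an assumption)
    digest = hashlib.sha256(x.to_bytes(8, "big")).digest()
    return int.from_bytes(digest[:4], "big") % 4096

def find_r_collision(r: int):
    """Brute force: evaluate f on fresh inputs, bucket them by image,
    and stop as soon as some bucket holds r pairwise different inputs."""
    buckets = defaultdict(list)
    x = 0
    while True:
        y = f(x)
        buckets[y].append(x)
        if len(buckets[y]) == r:
            return buckets[y]
        x += 1

xs = find_r_collision(3)
assert len(set(xs)) == 3 and len({f(x) for x in xs}) == 1
```

This naive search keeps every evaluation in memory; the point of the algorithms \( \mathcal{A}_2 \), \( \mathcal{A}_3 \), and \( A_4 \) below is to trade that memory against time.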
[ { "context": "*Preimage resistance: given an arbitrary \\( y \\), it is hard to find \\( x \\) such that \\( f(x) = y \\).\nCollision resistance: it is hard to find \\( x \\) and \\( x' \\) such that \\( x \\neq x' \\) and \\( f(x) = f(x') \\).*", "question": "### Q.1 Recall what preimage resistance and collision resistance mean." }, { "context": "*We have several algorithms based on the birthday paradox. For instance we can pick a new \\( x \\) and store \\( (f(x), x) \\) in memory if there is no \\( (y, x) \\) entry with \\( y = f(x) \\), or stop if there is such entry. This works with time and memory complexity \\( O(\\sqrt{N}) \\).\nWe have other algorithms based on cycle finding with much lower memory complexity. For instance, the Floyd cycling algorithm works with memory complexity \\( O(1) \\) and time complexity \\( O(\\sqrt{N}) \\).*", "question": "### Q.2 Name the ideas behind two collision finding algorithms from the course and give their time and memory complexity." }, { "context": "Algorithm \\( \\mathcal{A}_1 \\) picks a new \\( x \\in X \\) at random until \\( f(x) = y \\):\n\n**input**: \\( y \\)\n1. **repeat**\n2. &nbsp;&nbsp;&nbsp;&nbsp;pick \\( x \\) at random\n3. **until** \\( f(x) = y \\)\n**output**: \\( x \\)\n\nLet \\( p_y \\) be the probability that a random \\( x \\) is mapped onto \\( y \\) by \\( f \\). So, the probability to stop after \\( i \\) iterations is \\( (1 - p_y)^{i-1} p_y \\). The average complexity for input \\( y \\) is thus\n\n$$\nE(C_y) = \\sum_{i=0}^{+\\infty} i (1 - p_y)^{i-1} p_y = \\frac{1}{p_y}\n$$\n\nAssuming that \\( p_y \\approx N^{-1} \\), we have \\( E(C) \\approx N \\).", "question": "### Q.3 Let \\( y \\in \\mathcal{Y} \\) be a target value. Provide an algorithm \\( \\mathcal{A}_1 \\) such that upon input \\( y \\) it returns \\( x \\in X \\) such that \\( f(x) = y \\) with average complexity \\( N \\) in terms of \\( f \\) evaluations.\nMake the complexity analysis." 
}, { "context": "Algorithm \\( \\mathcal{A}_2 \\) picks a random \\( y \\in \\mathcal{Y} \\) and repeatedly calls \\( \\mathcal{A}_1 \\) until \\( r \\) pairwise different preimages of \\( y \\) are found. If we neglect the case where a preimage \\( x \\) is found several times, the complexity is \\( rN \\).\nTo make it more rigorous, we shall make sure that \\( \\mathcal{A}_1 \\) never picks an \\( x \\) twice through different calls. That is, we rewrite \\( \\mathcal{A}_1 \\) for \\( \\mathcal{A}_2 \\) so that a new \\( x \\) is picked at each iteration, it is stored if \\( f(x) = y \\), and the algorithm stops if the \\( r \\)th preimage was found.\nTo improve it further, we first pick \\( x \\) and take \\( y = f(x) \\) so that we have one less preimage to find:\n\n1. pick \\( x \\) at random\n2. let \\( y = f(x) \\) and \\( L = (x) \\)\n3. **repeat**\n4. &nbsp;&nbsp;&nbsp;&nbsp;pick a new \\( x \\) at random\n5. &nbsp;&nbsp;&nbsp;&nbsp;**if** \\( f(x) = y \\) and \\( x \\) does not appear in \\( L \\) **then**\n6. &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;insert \\( x \\) in \\( L \\)\n7. &nbsp;&nbsp;&nbsp;&nbsp;**end if**\n8. **until** \\( L \\) has size \\( r \\)\n**output**: \\( L \\)", "question": "### Q.4 By using \\( \\mathcal{A}_1 \\) as a subroutine, provide an algorithm \\( \\mathcal{A}_2 \\) producing \\( r \\)-collisions with complexity \\( rN \\) in terms of \\( f \\) evaluations.\nMake the complexity analysis." }, { "context": "**Q.5a**:\n> We show by induction that for every existing \\( y \\) in the hash table, the list \\( L_y \\) contain pairwise different entries \\( x \\) such that \\( f(x) = y \\). It is true when the algorithm starts (the hash table is empty so there is no \\( y \\)). It is true when a new \\( y \\) entry is inserted (the list has a single element). It is true when a list is expanded. 
So, it is true throughout the execution of the algorithm.\n> \n> The algorithm can only stop when it has found a list of cardinality \\( r \\). So, this list is an \\( r \\)-collision.\n\n**Q.5b**:\n> Due to the structure of the algorithm, we have \\( N^{\\alpha} \\) entries in the hash table. Each entry has size up to \\( r \\log_2 N \\) in bits, so the memory complexity is \\( O(N^{\\alpha} r \\log_2 N) \\). By neglecting \\( r \\log_2 N \\) we obtain \\( M \\approx N^{\\alpha} \\).\n> \n> We need \\( N^{\\alpha} \\) evaluations to prepare the hash table and \\( N^\\beta \\) evaluations to run Phase 2, so the time complexity is \\( N^{\\alpha} + N^\\beta \\). By using the approximation \\( \\log(a + b) \\approx \\max(\\log a, \\log b) \\) we obtain \\( T \\approx \\max(N^{\\alpha}, N^\\beta) \\).\n\n**Q.5c**:\n> Each \\( x \\) in the second phase will hit one entry with probability \\( N^{\\alpha - 1} \\). So, we have \\( N^{\\alpha + \\beta - 1} \\) hits on average for \\( r = 2 \\). We thus need \\( \\alpha + \\beta \\geq 1 \\).\n> \n> For \\( r = 3 \\), we have \\( N^{\\alpha + \\beta - 1} \\) hits in a set of \\( N^{\\alpha} \\). When the number of hits exceed \\( N^{\\frac{1}{2} \\alpha} \\), we obtain colliding hits with constant probability, thanks to the birthday paradox. So, \\( \\alpha + \\beta - 1 \\geq \\frac{1}{2} \\alpha \\) guarantees a constant probability of success. This is equivalent to \\( \\alpha + 2\\beta \\geq 2 \\).\n\n**Q.5d**:\nWe have \\(\\log M = \\alpha \\log N\\), \\(\\log T = \\max(\\alpha, \\beta) \\log N\\), and \\(\\alpha + 2\\beta \\geq 2\\). The minimal value for \\(\\log T\\) is reached for \\(\\beta = 1 - \\frac{1}{2}\\alpha\\), so \\(\\log T = \\max(\\alpha, 1 - \\frac{1}{2}\\alpha) \\log N\\) which can be computed in terms of \\(\\log M\\). 
The curve is V-shaped: for \\(\\log M \\leq \\frac{2}{3} \\log N\\) we have \\(\\log T = \\log N - \\frac{1}{2} \\log M\\), decreasing from \\(\\log N\\) down to the minimum \\(\\frac{2}{3} \\log N\\) reached at \\(\\log M = \\frac{2}{3} \\log N\\); beyond that point, \\(\\log T = \\log M\\) increases again. (The original plot showed \\(\\log T\\) against \\(\\log M\\), both axes graduated from 0 to \\(\\log N\\), with the minimum at \\(\\frac{2}{3} \\log N\\).)", "question": "### Q.5\nWe consider an algorithm \\( \\mathcal{A}_3 \\) for making \\( r \\)-collisions, defined by two parameters \\( \\alpha \\) and \\( \\beta \\). The algorithm works in two phases. In the first phase, it picks \\( N^\\alpha \\) random \\( x \\in X \\) and stores \\( (f(x), L_{f(x)}) \\) in a hash table, where \\( L_{f(x)} \\) is a list initialized to the single element \\( x \\). In the second phase, it iteratively picks \\( N^\\beta \\) random \\( x \\in X \\). For each of these \\( x \\)'s, it looks whether \\( y = f(x) \\) has an entry in the hash table. If it does, and if \\( x \\) is not already in the list \\( L_y \\), \\( x \\) is inserted into the list \\( L_y \\). If \\( L_y \\) has \\( r \\) elements, the algorithm outputs \\( L_y \\). We assume that \\( \\mathcal{A}_3 \\) never picks the same \\( x \\) twice.\n```\n1. **for** \\( i = 1 \\) to \\( N^\\alpha \\) **do**\n2. &nbsp;&nbsp;&nbsp;&nbsp;pick a new \\( x \\) at random\n3. &nbsp;&nbsp;&nbsp;&nbsp;set \\( y = f(x) \\) and store \\( (y, L_y) \\) with \\( L_y = (x) \\) at place \\( h(y) \\)\n4. **end for**\n5. **for** \\( i = 1 \\) to \\( N^\\beta \\) **do**\n6. &nbsp;&nbsp;&nbsp;&nbsp;pick a new \\( x \\) at random\n7. &nbsp;&nbsp;&nbsp;&nbsp;**if** there is an entry \\( (y, L_y) \\) at place \\( h(f(x)) \\) such that \\( y = f(x) \\) **then**\n8. &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;insert \\( x \\) in list \\( L_y \\)\n9. &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**if** \\( L_y \\) has size \\( r \\) **then**\n10. &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;yield \\( L_y \\) and stop\n11. &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**end if**\n12. &nbsp;&nbsp;&nbsp;&nbsp;**end if**\n13. **end for**\n14. 
algorithm failed\n```\n#### Q.5a Show that \\( \\mathcal{A}_3 \\) either generates \\( r \\)-collisions or fails.\n#### Q.5b Show that the memory complexity is \\( M = O(N^{\\alpha} \\log N) \\) and that the time complexity in terms of \\( f \\) evaluations is \\( T = N^{\\alpha} + N^\\beta \\).\n#### Q.5c For \\( r = 2 \\), which inequality shall \\( \\alpha \\) and \\( \\beta \\) satisfy to reach a constant probability of success?\nFor \\( r = 3 \\), show that this inequality becomes \\( \\alpha + 2\\beta \\geq 2 \\).\n**Hint**: apply the birthday paradox in Phase 2.\n#### Q.5d Show that for parameters for \\( r = 3 \\) reaching a constant probability of success, \\( \\log T \\) is a function in terms of \\( \\log M \\). Plot its curve." }, { "context": "\n#### Q.6a: When using a collision finding algorithm with constant memory complexity in Phase 1, the memory complexity is again \\(O(N^{\\alpha} \\log N)\\) which is approximated by \\(N^{\\alpha}\\).\nWhen using a collision finding algorithm based on the birthday paradox, Phase 1 uses \\(O(N^{\\alpha} + \\sqrt{N})\\) memory so the final memory complexity is approximated by \\(N^{\\max(\\alpha, \\frac{1}{2})}\\). This is not a very good choice since we get results which are not better than with the previous method.\nThe time complexity of a collision search is \\(O(N^{\\frac{1}{2}})\\). So, the time complexity of \\(A_4\\) is \\(O(N^{\\alpha + \\frac{1}{2}} + N^{\\beta})\\) which is approximately \\(T \\approx \\max(N^{\\alpha + \\frac{1}{2}}, N^{\\beta})\\).\n#### Q.6b: \nAgain, \\(\\log M = \\alpha \\log N\\). The logarithmic time complexity is now roughly \\(\\log T = \\max(\\alpha + \\frac{1}{2}, 1 - \\alpha) \\log N\\). 
Writing \\(\\ell = \\log N\\), the new curve \\(\\log T = \\max(\\alpha + \\frac{1}{2}, 1 - \\alpha)\\,\\ell\\) decreases from \\(\\ell\\) at \\(\\alpha = 0\\) to its minimum \\(\\frac{3\\ell}{4}\\) at \\(\\alpha = \\frac{1}{4}\\), then increases; the previous curve (from \\(A_3\\)) reaches its minimum \\(\\frac{2\\ell}{3}\\) at \\(\\alpha = \\frac{2}{3}\\). The two curves cross at \\(\\alpha = \\frac{1}{3}\\), where both give \\(\\log T = \\frac{5\\ell}{6}\\).\n\nAs we can see, for \\(\\alpha < \\frac{1}{3}\\), \\(A_4\\) has a better complexity.\n", "question": "### Q.6 We consider another algorithm \\(A_4\\) for making 3-collisions, defined by parameters \\(\\alpha\\) and \\(\\beta\\). Now, \\(A_4\\) runs \\(N^\\alpha\\) times a collision-finding algorithm and stores the \\(N^\\alpha\\) obtained collisions in the same form \\((y, L_y)\\) with \\(L_y = (x_1, x_2)\\) as before. In a second phase, \\(A_4\\) picks \\(N^\\beta\\) random \\(x\\) and checks if \\(f(x)\\) hits one of the \\(y\\) in the hash table. If it is the case, a 3-collision is found. (We assume that no \\(x\\) is picked several times.)\n\n1. for \\(i = 1\\) to \\(N^\\alpha\\) do\n2. &nbsp;&nbsp;&nbsp;&nbsp;run a collision-finding algorithm and get \\(x_1\\) and \\(x_2\\)\n3. &nbsp;&nbsp;&nbsp;&nbsp;set \\(y = f(x_1)\\) and store \\((y, (x_1, x_2))\\) at place \\(h(y)\\)\n4. end for\n5. for \\(i = 1\\) to \\(N^\\beta\\) do\n6. &nbsp;&nbsp;&nbsp;&nbsp;pick a new \\(x\\) at random\n7. &nbsp;&nbsp;&nbsp;&nbsp;if there is an entry \\((y, L_y)\\) at place \\(h(f(x))\\) such that \\(y = f(x)\\) then\n8. &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;insert \\(x\\) in list \\(L_y\\)\n9. &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;yield \\(L_y\\) and stop\n10. &nbsp;&nbsp;&nbsp;&nbsp;end if\n11. end for\n12. 
algorithm failed \n\n#### Q.6a Show that the memory complexity is \\(M \\approx N^\\alpha\\) and that the time complexity in terms of \\(f\\) evaluations is \\(T \\approx \\max(N^{\\alpha + \\frac{1}{2}}, N^\\beta)\\).\n#### Q.6b Show that for \\(\\alpha + \\beta \\geq 1\\) we obtain a constant probability of success.\nPlot the curve of minimal \\(\\log T\\) in terms of \\(\\log M\\) to reach a constant probability of success. Compare with \\(A_3\\).\nWhen is it better?\n\n" } ]
2011-12-01T00:00:00
Serge Vaudenay
EPFL
Attack on some Implementations of PKCS#1v1.5 Signature with \( e = 3 \)
*This exercise is inspired by an attack originally presented by Bleichenbacher at CRYPTO’06 (unpublished), then improved by Kühn, Pyshkin, Tews, and Weinmann (in a technical report available online). These attacks were later extended by Oiwa, Kobara, and Watanabe: “A New Variant for an Attack against RSA Signature Verification using Parameter Field”, EuroPKI’07, LNCS vol. 4582, pp. 143–153, 2007.* In this exercise we represent bitstrings in hexadecimal by grouping bits into packets of 4, each packet (nibble) being denoted in hexadecimal with a figure between 0 and F. For instance, 2B represents the bitstring 00101011. Given a bitstring \( x \), we denote by \( \overline{x} \) the integer such that \( x \) is a binary expansion of \( \overline{x} \). For instance, \( \overline{00FF} = 255 \). We call a cube an integer whose cubic root is an integer. Given a message \( m \) and an integer \( \ell_N \), we define the bitstring of length \( \ell_N \) \[ \text{format}_{\ell_N}(m) = 00 \, 01 \, FF \cdots FF \, 00 \| D(m) \] where \( D(m) \) represents the identifier of the hash function \( H \) together with \( H(m) \) following the ASN.1 syntax. As an example, in the SHA-1 case, we have \[ D(m) = 30 \, 21 \, 30 \, 09 \, 06 \, 05 \, 2B \, 0E \, 03 \, 02 \, 1A \, 05 \, 00 \, 04 \, 14 \| \text{SHA-1}(m) \] We denote by \( \ell_D \) the bitlength of \( D(m) \). We recall that the PKCS#1v1.5 signature for a message \( m \) and a public key \( (e, N) \) is an integer \( s \) such that \( 0 \leq s < N \) and \( s^e \mod N \) can be parsed following the format \( \text{format}_{\ell_N}(m) \), where \( \ell_N \) is the minimal bitlength of \( N \). It is required that the padding field consisting of FF bytes is at least 8 bytes long. Throughout this exercise we assume that \( e = 3 \).
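The forgeries studied below repeatedly need to test whether an integer is a cube and to extract exact cubic roots over the integers. A minimal sketch (the binary-search implementation is just one reasonable choice, not mandated by the exercise):

```python
def icbrt(n: int) -> int:
    """Floor of the cubic root of a non-negative integer, by binary search."""
    lo, hi = 0, 1
    while hi ** 3 <= n:        # find an upper bound by doubling
        hi *= 2
    while lo < hi:             # largest lo with lo**3 <= n
        mid = (lo + hi + 1) // 2
        if mid ** 3 <= n:
            lo = mid
        else:
            hi = mid - 1
    return lo

def is_cube(n: int) -> bool:
    """True iff n is a cube, i.e. its cubic root is an integer."""
    return icbrt(n) ** 3 == n

assert icbrt(27) == 3 and icbrt(26) == 2
assert is_cube(12345 ** 3) and not is_cube(12345 ** 3 + 1)
```

On a favorable instance, a forger would compute \( u = \overline{\text{format}_{\ell_N}(m)} \), test `is_cube(u)`, and output `icbrt(u)` as the signature when the test succeeds.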
[ { "context": "A signature scheme is defined by three algorithms: a key generator, a signature algorithm, and a verification algorithm. The key generator is a pseudorandom generator making a key pair of required length, one of the two keys is called the public key and the other is the secret key. The signature algorithm is a probabilistic algorithm taking a secret key and a message and producing a signature. The verification algorithm is a deterministic algorithm taking a public key, a message, and a putative signature, and telling whether the signature is valid. The signature scheme has the functionality such that when generating a key pair and making a signature on an arbitrary message \\( m \\) with the secret key, then the obtained signature passes the verification algorithm with \\( m \\) and the public key. Security requires that it shall be impossible to create a valid signature without using the secret key.", "question": "### Q.1 What is a signature scheme? Describe its components, its functionality, and give an intuition on its security." }, { "context": "A valid signature for m is a string which passes the verification algorithm. The algorithm first converts the string into an integer s. Then, it checks that s < N. Then, it computes the binary expansion of s³ mod N. It parses this string into format_ℓN(m): it checks the 00 01 header, a padding of at least 8 FF bytes followed by 00, and that the rest is D(m) with a digest matching H(m). The signature is valid if and only if this parsing succeeds.", "question": "### Q.2 What is a valid signature for a message \\( m \\) in PKCS#1v1.5? Detail the verification algorithm." }, { "context": "\n#### Q.3a: If it is a cube, we can compute the cubic root \\( s \\) and \\( s^3 \\) has a valid format. Since \\( s^3 \\mod N = u \\), it is a valid signature.\n#### Q.3b: We have\n\\[ u = 0001\\ FF\\cdots FF003021300906052B0E03021A05000414||H(m) \\]\n\nwhich is less than \\( 2^{\\ell_N - 15} \\). The number of cubes less than \\( a \\) is exactly \\( \\lfloor \\sqrt[3]{a} \\rfloor \\). So, we roughly have \\( 2^{\\frac{\\ell_N - 15}{3}} \\) cubes. 
Therefore, \\( u \\) is a cube with probability roughly \\( 2^{-2\\frac{\\ell_N - 15}{3}} \\).\n\nOf course, \\( u \\) does not really look like random. We can do some finer analysis and get that the probability for \\( u \\) to be a cube is \\( \\frac{1}{3} 2^{-2\\frac{\\ell_N - 15}{3}} \\). Indeed, \\( u \\) is between \\( a \\) and \\( a + b \\) for \\( b < 2^{160} \\) and \\( a \\approx 2^{\\ell_N - 15} \\). The number of cubes between \\( a \\) and \\( a + b \\) is roughly\n\\[ (a + b)^{\\frac{1}{3}} - a^{\\frac{1}{3}} = a^{\\frac{1}{3}} \\left( \\left( 1 + \\frac{b}{a} \\right)^{\\frac{1}{3}} - 1 \\right) \\approx a^{\\frac{1}{3}} \\frac{b}{3a} \\]\nso the probability to be a cube is about \\( \\frac{1}{3} a^{- \\frac{2}{3}} \\).\n#### Q.3c: The algorithm first computes \\( u \\). If \\( u \\) is not a cube, the algorithm fails. Otherwise, it extracts the cubic root and produces a valid signature. Since it is easy to compute cubic roots over the integers, the algorithm is very fast. Its probability of success is \\( 2^{-2\\frac{\\ell_N - 15}{3}} \\) which is much too low to be practical. E.g. for \\( \\ell_N = 1024 \\), the probability is \\( 2^{-673} \\).\n", "question": "### Q.3 Let \\( u = \\text{format}_{\\ell_N}(m) \\).\n#### Q.3a If \\( u \\) is a cube, show that we can easily forge a signature for \\( m \\) without any secret information.\n#### Q.3b We assume that \\( u \\) looks like a random number less than \\( a = 2^{\\ell_N - 15} \\). How many cubes are less than \\( a \\)? What is the probability for \\( u \\) to be a cube?\n#### Q.3c Deduce an algorithm to forge a signature for \\( m \\) which works with a success probability \\( 2^{-2\\frac{\\ell_N - 15}{3}} \\). Is it practical?"
}, { "context": "\n### Q.4a: Let \\( x = 2^{8 + \\ell_D} - \\overline{D(m)} \\).\nWe easily see that \\( 2^{3\\alpha} - x2^{\\gamma} \\) writes\n\\[ 0001||P||00||D(m)||00 \\cdots 00 \\]\nin hexadecimal, so \\( \\overline{u} = 2^{3\\alpha} - x2^{\\gamma} \\).\n### Q.4b: Since \\( \\gamma - 2\\alpha = \\ell - 14 - \\ell_D - \\ell_P \\), by selecting \\(\\ell_P \\leq \\ell - 14 - \\ell_D\\), we have \\(\\gamma \\geq 2\\alpha\\).\nSince \\( x \\leq 2^{8 + \\ell_D} \\) and \\( 3\\alpha - \\gamma = 9 + \\ell_D + \\ell_P \\), by selecting \\(\\ell_P \\geq \\ell_D + 7\\), we have \\( x \\leq 2^{\\frac{1}{2}(3\\alpha - \\gamma)} \\).\nWhen \\(\\ell_N \\geq 84 + 6\\ell_D\\), we have \\(\\ell - 28 \\geq 2\\ell_D\\), so \\(\\ell_D + 7 \\leq (\\ell - 14 - \\ell_D) - 7\\). Therefore, we can select \\(\\ell_P\\) such that \\(\\ell_P \\mod 8 = 0\\) and \\(\\ell_D + 7 \\leq \\ell_P \\leq \\ell - 14 - \\ell_D\\).\n### Q.4c: Note that since \\( x \\mod 3 = 0 \\) and \\(\\gamma \\geq 2\\alpha\\), \\( y \\) is an integer.\nWe use \\( (A - B)^3 = A^3 - 3A^2B + 3AB^2 - B^3 \\) with \\( A = 2^{\\alpha} \\) and \\( B = y \\). Clearly, \\( A^3 - 3A^2B = \\overline{u} \\).\nSince \\( s = A - B \\), we have \\( s^3 = \\overline{u} + 3AB^2 - B^3 \\). Thus, we only have to show that \\( 0 \\leq 3AB^2 - B^3 \\leq 2^{\\gamma} \\).\nSince \\( 2^{3\\alpha} - x2^{\\gamma} = \\overline{u} \\geq 0 \\), we have \\( A \\geq 3B \\), so \\( 3AB^2 - B^3 \\geq 0 \\).\nSince \\( x \\leq 2^{\\frac{1}{2}(3\\alpha - \\gamma)} \\), we have \\( 3AB^2 \\leq 2^{\\gamma} \\).\n### Q.4d: We take \\( m \\), hash it to compute \\( D(m) \\), then \\( x \\). If \\( x \\mod 3 \\neq 0 \\), the attack fails (that is, with probability \\(\\frac{2}{3}\\)). Then, we construct \\( s \\) as above. Due to the above inequalities, \\( s^3 \\) parses\n\\[ 0001 \\text{FF} \\cdots \\text{FF} 00||D(m)||g \\]\nwhere \\( g \\) is some \"garbage\" string. 
So, it is accepted as a valid signature.\n### Q.4e: \\[ s = 2^{1019} - \\frac{1}{3} \\left( 2^{288} - \\overline{D(m)} \\right) 2^{34} \\]\nis a valid signature with probability \\(\\frac{1}{3}\\) over the random selection of the message.\nFor \\(\\ell_N = 3072\\), we have \\(\\ell_N \\mod 3 = 0\\). The \\(\\ell_N \\geq 84 + 6\\ell_D\\) constraint is equivalent to \\(\\ell_D \\leq 498\\). For SHA-1, the digest length is 160. We can see that there is an overhead of 120 bits in the ASN.1 syntax of \\( D(m) \\). So, we have \\(\\ell_D = 280\\) and the constraint is satisfied. We have\n\\[ s = 2^{1019} - \\frac{1}{3} x 2^{730 - \\ell_P} \\]\nBy replacing \\( x \\) with its value, we obtain\n\\[ s = 2^{1019} - \\frac{1}{3} \\left( 2^{288} - \\overline{D(m)} \\right) 2^{730 - \\ell_P} \\]\nWe must select \\(\\ell_P\\) such that \\(\\ell_P \\leq 730\\), \\(\\ell_P \\geq 287\\), and \\(\\ell_P \\mod 8 = 0\\). By selecting \\(\\ell_P = 696 = 8 \\times 87\\) we obtain the expression in the question. Whenever the content of the parentheses is divisible by 3, \\( s \\) is a valid signature of the message. That is, with probability \\(\\frac{1}{3}\\).\n", "question": "### Q.4 Bleichenbacher observed that some parsers just scan the bytes from the formatting rule but do not check that the string terminates after the digest. That is, these implementations accept the following format\n\\[ 0001\\ FF\\cdots FF00||D(m)||g \\]\nwhere \\( g \\) is any garbage string, provided that the padding field has at least 8 bytes and that the total length (including the garbage) is \\( \\ell_N \\).\nIn this question we assume \\( \\ell_N = 3\\ell \\). We further assume \\( \\ell_N \\geq 84 + 6\\ell_D \\).\n#### Q.4a Let \\( P = \\text{FF} \\cdots \\text{FF} \\) be a string of FF bytes with bitlength \\( \\ell_P \\). 
Show that the \\( \\ell_N \\)-bit string \\( u = 0001||P||00||D(m)||00 \\cdots 00 \\) is such that \\( \\overline{u} = 2^{3\\alpha} - x2^{\\gamma} \\) for some integer \\( x \\), where \\( \\alpha = \\ell - 5 \\) and \\( \\gamma = \\ell_N - 24 - \\ell_D - \\ell_P \\).\n#### Q.4b By using the assumption \\( \\ell_N \\geq 84 + 6\\ell_D \\), show that we can select \\( \\ell_P \\) such that \\( \\gamma \\geq 2\\alpha \\) and \\( x \\leq 2^{\\frac{1}{2}(3\\alpha - \\gamma)} \\).\n#### Q.4c We assume that \\( x \\mod 3 = 0 \\). Let \\( y = \\frac{1}{3}x2^{\\gamma - 2\\alpha} \\) and \\( s = 2^{\\alpha} - y \\). Show that \\( \\overline{u} \\leq s^3 \\leq \\overline{u} + 2^{\\gamma} \\).\n#### Q.4d Deduce an algorithm to forge signatures on a random message \\( m \\) with success probability \\( \\frac{1}{3} \\) based on Bleichenbacher's observation when 3 divides \\( \\ell_N \\) and \\( \\ell_N \\geq 84 + 6\\ell_D \\).\n#### Q.4e Finally, apply the attack to \\( \\ell_N = 3072 \\) with SHA-1. Show that the attack applies and that\n\\[ s = 2^{1019} - \\frac{1}{3} \\left( 2^{288} - \\overline{D(m)} \\right) 2^{34} \\]\nis a valid signature with probability \\( \\frac{1}{3} \\) over the random selection of the message." } ]
2011-12-01T00:00:00
Serge Vaudenay
EPFL
Hidden Collisions in DSA
*The following exercise is inspired from Hidden Collisions on DSS by Vaudenay, published in the proceedings of CRYPTO’96 pp. 83–88, LNCS vol. 1109, Springer 1996.* We recall the DSA signature scheme: **Public parameters** \((p, q, g)\): pick a 160-bit prime number \(q\), pick a large \(a\) at random until \(p = aq + 1\) is prime, pick \(h\) in \(Z_p^*\) and take \(g = h^a \mod p\) until \(g \neq 1\). **Set up**: pick \(x \in Z_q\) (the secret key) and compute \(y = g^x \mod p\) (the public key). **Signature generation for a message \(M\)**: pick a random \(k \in Z_q^*\), compute \[ r = (g^k \mod p) \mod q \] \[ s = \frac{H(M) + xr}{k} \mod q \] the signature is \(\sigma = (r, s)\). **Verification**: check that \[ r = \left( g^{\frac{H(M)}{s} \mod q} \cdot y^{\frac{r}{s} \mod q} \mod p \right) \mod q \] The hash function \(H\) is the SHA-1 standard. The output of \(H\) is a binary string which is implicitly converted into an integer. DSA was standardized by NIST with a usual suspicion that the NSA was behind it. It could be the case that some specific choices for \((p, q, g)\) could indeed hide some special property making an attack possible. This is what we investigate in this exercise.
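To fix ideas, the scheme can be exercised with deliberately tiny toy parameters (the values \(q = 101\), \(p = 6q + 1 = 607\), \(h = 2\), and an integer standing in for the SHA-1 digest are assumptions for illustration only; real DSA parameters are far larger):

```python
import random

q, p = 101, 607                  # toy primes with p = 6*q + 1
g = pow(2, (p - 1) // q, p)      # g = h^a mod p with h = 2; here g != 1, so g has order q

def sign(x, h_m):
    """Sign a digest h_m with secret key x; returns (r, s)."""
    k = random.randrange(1, q)               # fresh randomness for every signature
    r = pow(g, k, p) % q
    s = (h_m + x * r) * pow(k, -1, q) % q
    if r == 0 or s == 0:                     # degenerate case: pick a new k
        return sign(x, h_m)
    return r, s

def verify(y, h_m, r, s):
    """Check r == (g^{H(M)/s mod q} * y^{r/s mod q} mod p) mod q."""
    w = pow(s, -1, q)
    return r == (pow(g, h_m * w % q, p) * pow(y, r * w % q, p) % p) % q

x = 57                       # secret key
y = pow(g, x, p)             # public key
r, s = sign(x, 42)
assert verify(y, 42, r, s)
```

Note that verification only ever uses the digest modulo \(q\), which is the lever behind the attacks below.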
[ { "context": "*Using the birthday paradox, the complexity has an order of magnitude of \\(\\sqrt{\\text{digest domain}}\\). Since SHA-1 hashes onto 160 bits, it is \\(2^{80}\\).*", "question": "### Q.1 What is the complexity of finding \\(m\\) and \\(m'\\) such that \\(m \\neq m'\\) and \\(H(m) = H(m')\\)?" }, { "context": "The adversary asks the signer to sign \\( m \\) and gets the signature \\( (r, s) \\). Then, the forgery is \\( (m', (r, s)) \\). We can check that the signature is valid. Indeed,\n\n\\[\n\\left( g^{\\frac{H(m')}{s} \\mod q} \\cdot y^{\\frac{r}{s} \\mod q} \\mod p \\right) \\mod q = \\left( g^{\\frac{H(m)}{s} \\mod q} \\cdot y^{\\frac{r}{s} \\mod q} \\mod p \\right) \\mod q = r\n\\]\n\nFurthermore, \\( m \\neq m' \\). So, this is a valid forgery.", "question": "### Q.2 Describe a chosen-message signature-forgery attack based on the fact that an adversary knows two messages \\( m \\) and \\( m' \\) such that \\( m \\neq m' \\) and \\( H(m) = H(m') \\)." }, { "context": "If \\( q \\) divides \\( H(m) - H(m') \\), then \\( H(m) \\mod q = H(m') \\mod q \\). We can see that the attack from the above question still works, since the digest of messages is always taken modulo \\( q \\).\n\nSo, the NSA can select two random messages \\( m \\) and \\( m' \\) until \\( q = H(m) - H(m') \\) is a 160-bit prime number. The message \\( m \\) could be a test message (such as \"this is a test message to check that the signature works\") while \\( m' \\) could be a payment order.", "question": "### Q.3 Describe a chosen-message signature-forgery attack based on the fact that an adversary knows two messages \\( m \\) and \\( m' \\) such that \\( m \\neq m' \\) and \\( q = H(m) - H(m') \\) (with the integer subtraction).\n\nPropose a way for the NSA to generate public parameters \\( (p, q, g) \\) in such a way that it can later perform a forgery attack for a suitable message." 
}, { "context": "If we take a random seed, we let m = seed, m' = seed + 1, m1 and m2 such that {m1, m2} = {m, m'} and H(m1) > H(m2). We have |H(m) - H(m')| = H(m1) - H(m2). We define the event\n\n\\[ E : |H(m) - H(m')| = (H(seed) \\oplus H(seed + 1)) \\vee 2^{159} \\vee 1 \\]\n\nWe want to estimate Pr[E]. So, we repeat the above process 1/Pr[E] times until E occurs and we can apply the same attack with m and m'. We let msb2(s) be the two most significant bits of a string s. We first assume that msb(H(m1)) \\neq msb(H(m2)).\n\nWhen making the boolean subtraction H(m1) - H(m2), the difference of the least significant bits gives 1 with no carry only when making 1 - 0. This occurs with probability \\(\\frac{1}{4}\\). Then, for each of the 157 next bit positions, assuming there is no carry, the subtraction of the two bits matched the XOR and makes no carry with probability \\(\\frac{3}{4}\\) (that is, in all cases except 0 - 1). Finally, the two most significant bits with no carry have a difference matching the XOR and starting with 1 in the following cases: 10 - 00, 11 - 00, 11 - 01. The remaining cases are 10 - 01, 11 - 10, 01 - 00. So, this happens with probability \\(\\frac{1}{2}\\). Hence, we have\n\n\\[ \\text{Pr} \\left[ E \\mid \\text{msb}(H(m1)) \\neq \\text{msb}(H(m2)) \\right] = \\frac{1}{2} \\times \\frac{1}{4} \\times \\left( \\frac{3}{4} \\right)^{157} \\]\n\nWhen msb(H(m1)) = msb(H(m2)), we can see that E cannot occur. So,\n\n\\[ \\text{Pr}[E] = \\frac{1}{2^5} \\times \\left( \\frac{3}{4} \\right)^{157} \\approx 2^{-70} \\]\n\nThe attack has a complexity of \\(2^{70}\\).", "question": "**Q.4** To put more confidence, NIST added a way to certify that \\( (p, q, g) \\) were honestly selected. 
For this, we shall provide together with the public parameters a value seed such that\n\n\\[\nq = (H(\\text{seed}) \\oplus H(\\text{seed} + 1)) \\lor 2^{159} \\lor 1\n\\]\n\nwhere \\( \\oplus \\) denotes the bitwise XOR, \\( \\lor \\) denotes the bitwise OR, and \\( + \\) is the regular addition of integers. I.e., \\( q \\) is the XOR between \\( H(\\text{seed}) \\) and \\( H(\\text{seed} + 1) \\) after which the least and the most significant bits are forced to 1 so that \\( 2^{159} \\leq q < 2^{160} \\) and \\( q \\) is odd.\n\nPropose a way to construct \\( (\\text{seed}, p, q, g) \\) such that an attack is still possible.\n\nHINT: take \\( m = \\text{seed} \\) and \\( m' = \\text{seed} + 1 \\) and estimate the probability that \\( |H(m) - H(m')| = q \\) for seed random, by looking at the propagation of carry bits in the subtraction.\n\nHINT2: you may skip this question." } ]
2015-01-20T00:00:00
Serge Vaudenay
EPFL
DSA With Related Randomness
We recall the DSA signature scheme: **Public parameters** \((p, q, g)\): pick a 160-bit prime number \(q\), pick a large \(a\) at random until \(p = aq + 1\) is prime, pick \(h\) in \(Z_p^*\) and take \(g = h^a \mod p\) until \(g \neq 1\). **Set up**: pick \(x \in Z_q\) (the secret key) and compute \(y = g^x \mod p\) (the public key). **Signature generation for a message \(M\)**: pick a random \(k \in Z_q^*\), compute \[ r = (g^k \mod p) \mod q \] \[ s = \frac{H(M) + xr}{k} \mod q \] the signature is \(\sigma = (r, s)\). **Verification**: check that \(r = \left( g^{\frac{H(M)}{s} \mod q} \cdot y^{\frac{r}{s} \mod q} \mod p \right) \mod q\). Sampling the randomness \(k\) to sign is critical. This exercise is about bad sampling methods. In what follows, we consider two messages \(m_1\) and \(m_2\), a signature \((r_i, s_i)\) for message \(m_i\) using the randomness \(k_i\), \(i = 1, 2\).
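The \(k_1 = k_2\) case of Q.1 below can be checked numerically. The sketch uses an assumed toy group (\(q = 101\), \(p = 607\)) and toy integer digests in place of \(H\):

```python
q, p = 101, 607              # toy DSA-style group: q prime, p = 6*q + 1 prime
g = pow(2, (p - 1) // q, p)  # element of order q

x = 33                       # the secret key we will recover
y = pow(g, x, p)

def sign_with_k(h_m, k):
    """DSA signing where the randomness k is supplied by the caller."""
    r = pow(g, k, p) % q
    s = (h_m + x * r) * pow(k, -1, q) % q
    return r, s

# two signatures issued with the SAME randomness k (the Q.1 scenario)
h1, h2, k = 17, 59, 46
r1, s1 = sign_with_k(h1, k)
r2, s2 = sign_with_k(h2, k)

# Q.1 formula: x = (s2*H(m1) - s1*H(m2)) / (s1*r2 - s2*r1) mod q
den = (s1 * r2 - s2 * r1) % q
recovered = (s2 * h1 - s1 * h2) * pow(den, -1, q) % q
assert recovered == x
```

Here \(r_1 = r_2\) (same \(k\)), so the denominator is \(r(s_1 - s_2) \bmod q\), which is nonzero whenever the two digests differ.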
[ { "context": "Modulo \\(q\\), we have \\(k_i = \\frac{H(m_i) + xr_i}{s_i}\\), \\(i = 1, 2\\). If \\(k_1 = k_2\\), we deduce\n\\[ \\frac{H(m_1) + xr_1}{s_1} = \\frac{H(m_2) + xr_2}{s_2} \\]\nSo, \\(x = \\frac{s_2 H(m_1) - s_1 H(m_2)}{s_1 r_2 - s_2 r_1} \\mod q\\).", "question": "### Q.1\nSometimes, random sources are not reliable and produce twice the same value. If \\(k_1 = k_2\\), show that from the values of \\(p, q, g, y, r_1, s_1, r_2, s_2, m_1, m_2\\) we can recover \\(x\\)." }, { "context": "The equation is now\n\\[ \\frac{H(m_1) + xr_1}{s_1} + 1 = \\frac{H(m_2) + xr_2}{s_2} \\]\nwhich leads us to\n\\[ x = \\frac{s_2 H(m_1) + s_1 s_2 - s_1 H(m_2)}{s_1 r_2 - s_2 r_1} \\mod q \\]", "question": "### Q.2\nTo avoid the previous problem, a crypto apprentice decides to sample \\(k\\) based on a counter. Redo the previous question with \\(k_2 = k_1 + 1\\)." }, { "context": "The equation is now\n\\[ \\alpha \\frac{H(m_1) + xr_1}{s_1} + \\beta = \\frac{H(m_2) + xr_2}{s_2} \\]\nwhich leads us to\n\\[ x = \\frac{s_2 \\alpha H(m_1) + s_1 s_2 \\beta - s_1 H(m_2)}{s_1 r_2 - s_2 \\alpha r_1} \\mod q \\]", "question": "### Q.3\nTo avoid the previous problem, a crypto apprentice decides to sample \\(k\\) by iterating an affine function. Redo the previous question for \\(k_2 = \\alpha k_1 + \\beta\\) with \\(\\alpha\\) and \\(\\beta\\) known." 
}, { "context": "The equation is now\n\\[ \\alpha \\left( \\frac{H(m_1) + x r_1}{s_1} \\right)^2 + \\beta \\frac{H(m_1) + x r_1}{s_1} + \\gamma \\equiv \\frac{H(m_2) + x r_2}{s_2} \\]\n\nWe can rewrite it as\n\\[ x^2 \\frac{\\alpha r_1^2}{s_1^2} + x \\left( \\frac{2 \\alpha H(m_1) r_1}{s_1^2} + \\frac{\\beta r_1}{s_1} - \\frac{r_2}{s_2} \\right) + \\left( \\frac{\\alpha H(m_1)^2}{s_1^2} + \\frac{\\beta H(m_1)}{s_1} + \\gamma - \\frac{H(m_2)}{s_2} \\right) \\equiv 0 \\]\n\nBy writing this as \\(u x^2 + v x + w \\equiv 0\\), we find that \\(x\\) is one of the two solutions\n\\[ x = \\frac{-v \\pm \\sqrt{v^2 - 4 u w}}{2 u} \\mod q \\]\n\nWe check the correct solution with \\(y = g^x \\mod p\\).", "question": "### Q.4\nTo avoid the previous problem, a crypto apprentice decides to sample \\(k\\) by iterating a quadratic function. Redo the previous question for \\(k_2 = \\alpha k_1^2 + \\beta k_1 + \\gamma\\) with \\(\\alpha\\), \\(\\beta\\), and \\(\\gamma\\) known." } ]
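As a sanity check (not part of the original exercise), the key recovery of Q.1 can be run numerically. The sketch below uses hypothetical toy parameters of our choosing (q = 101, p = 6q + 1 = 607, g = 2^6 mod p = 64, so g has order q) and made-up hash values h1, h2; these sizes are illustrative only and far too small for real use.

```python
# Toy DSA with hypothetical tiny parameters; g = 2^6 mod p has order q.
p, q, g = 607, 101, 64

def sign(h, x, k):
    """Sign a message with hash value h under secret key x and nonce k.
    Returns None in the degenerate cases r = 0 or s = 0."""
    r = pow(g, k, p) % q
    s = (h + x * r) * pow(k, -1, q) % q
    return (r, s) if r and s else None

def recover_x(h1, sig1, h2, sig2):
    """Q.1: recover the secret key from two signatures made with k1 = k2."""
    (r1, s1), (r2, s2) = sig1, sig2
    num = (s2 * h1 - s1 * h2) % q
    den = (s1 * r2 - s2 * r1) % q
    return num * pow(den, -1, q) % q

x = 23                     # the secret key we hope to recover
h1, h2 = 5, 88             # made-up hash values H(m1), H(m2)
k = next(k for k in range(2, q) if sign(h1, x, k) and sign(h2, x, k))
sig1, sig2 = sign(h1, x, k), sign(h2, x, k)   # the same nonce k, twice
print(recover_x(h1, sig1, h2, sig2))  # → 23
```

Since the two signatures share r and s_1 - s_2 = (H(m_1) - H(m_2))/k is nonzero modulo the prime q, the denominator is invertible and the recovery always succeeds.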
2015-01-20T00:00:00
Serge Vaudenay
EPFL
Reset Password Recovery
We consider a non-uniform distribution \( D \) of passwords. Passwords are taken from a set \(\{k_1, \ldots, k_n\}\) and each password \( k_i \) is selected with probability \(\Pr_D[k_i]\). (We omit the subscript \( D \) when there is no ambiguity in the distribution.) For simplicity, we assume that \(\Pr[k_1] \geq \Pr[k_2] \geq \cdots \geq \Pr[k_n]\). We consider a game in which a cryptographer apprentice plays with a black-box device which has two buttons — a reset button and a test button — and a keyboard. - When the player pushes the reset button, the device picks a new password \( K \), following the above distribution, and stores it into its memory. The game cannot start before the player pushes this button. - The player can enter an input \( w \) on the keyboard and push the test button. This makes the device compare \( K \) with \( w \). If \( K = w \), the device opens, the player wins, and the game stops. Otherwise, the device remains closed and the player continues. A strategy is an algorithm that the player follows to play the game. Given a strategy, we let \( C \) denote the expected number of times the player pushes the test button until he wins. The goal of the player is to design a strategy which uses a minimal \( C \). In this exercise, we consider several strategies. To compare them, we use a toy distribution \( T \) defined by the parameters \( a, p \) and \[ \Pr_T[k_1] = \cdots = \Pr_T[k_a] = \frac{p}{a}, \quad \Pr_T[k_{a+1}] = \cdots = \Pr_T[k_n] = \frac{1-p}{n-a} \] and assuming that \( \frac{p}{a} \geq \frac{1-p}{n-a} \).
[ { "context": "\\[\n\\begin{array}{|l|}\n\\hline\n\\text{The best strategy is to test the most likely possible password, i.e. } k_1, \\text{ following the algorithm} \\\\\n1: \\text{loop} \\\\\n2: \\quad \\text{reset} \\\\\n3: \\quad \\text{test } k_1 \\\\\n4: \\text{end loop} \\\\\n\\hline\n\\text{The expected number of test queries is} \\\\\n\\sum_{i=1}^{+\\infty} i \\Pr[k_1](1 - \\Pr[k_1])^{i-1} = \\frac{1}{\\Pr[k_1]} = 2^{H_\\infty} \\\\\n\\text{where } H_\\infty \\text{ is called the min-entropy.} \\\\\n\\text{For the toy distribution } T, \\text{ we have} \\\\\nC = \\frac{a}{p} \\\\\n\\hline\n\\end{array}\n\\]", "question": "### Q.1\nWe consider a strategy in which the player always pushes the reset button before pushing the test button. For a general distribution \\( D \\), give an optimal strategy and the corresponding value of \\( C \\). Apply the general result to the toy distribution \\( T \\)." }, { "context": "The best strategy is to test the possible passwords by decreasing order of likelihood, i.e.\n1. reset\n2. for \\( i = 1 \\) to \\( n \\) do\n3. test \\( k_i \\)\n4. end for\n\nThe expected number of test queries is\n\\[\n\\sum_{i=1}^{n} i \\Pr[k_i] = G\n\\]\nwhich is called the guesswork entropy. For the toy distribution \\( T \\), we have\n\\[\nG = \\frac{p}{a} \\sum_{i=1}^{a} i + \\frac{1-p}{n-a} \\sum_{i=a+1}^{n} i\n\\]\n\\[\n= p \\frac{a+1}{2} + (1-p) \\frac{n+a+1}{2}\n\\]\n\\[\n= (1-p) \\frac{n}{2} + \\frac{a+1}{2}\n\\]", "question": "### Q.2\nWe consider a strategy in which the reset button is never used again after the initial reset. For a general distribution \\( D \\), give an optimal strategy and the corresponding value of \\( C \\). Apply the general result to the toy distribution \\( T \\)."
}, { "context": "For \\( n = 3 \\) and \\( a = 1 \\), the condition \\( \\frac{p}{a} \\geq \\frac{1-p}{n-a} \\) simplifies to \\( p \\geq \\frac{1}{3} \\).\n\nWe could already look at the extreme cases with \\( p = \\frac{1}{3} \\), making \\( T \\) the uniform distribution with \\( n = 3 \\), and \\( p = 1 \\), making \\( T \\) having zero probability on \\( k_2 \\) and \\( k_3 \\).\n\nFor the uniform distribution, we have \\( G = 2 \\) and \\( 2^{H_\\infty} = 3 \\). So, the strategy of Q.2 is better. For \\( p = 1 \\), we have \\( G = 1 \\) and \\( 2^{H_\\infty} = 1 \\). So, both strategies are equally good.\n\nIn the general case, we have\n\\[\nG = \\frac{5 - 3p}{2}\n\\]\nand\n\\[\n2^{H_\\infty} = \\frac{1}{p}\n\\]\n\nWe have equality between \\( G \\) and \\( 2^{H_\\infty} \\) if and only if \\( 3p^2 - 5p + 2 = 0 \\) which has roots \\( p = 1 \\) and \\( p = \\frac{2}{3} \\). So, for \\( \\frac{1}{3} \\leq p \\leq \\frac{2}{3} \\), \\( G \\) is lower, and for \\( \\frac{2}{3} \\leq p \\leq 1 \\), \\( 2^{H_\\infty} \\) is lower. For instance, for \\( p = \\frac{5}{8} \\) the strategy of Q.2 is better (\\( \\frac{25}{16} < \\frac{8}{5} \\)), and for \\( p = \\frac{5}{6} \\) the strategy of Q.1 is better (\\( \\frac{6}{5} < \\frac{5}{4} \\)).", "question": "### Q.3\nFor \\( n = 3 \\) and \\( a = 1 \\), propose one value for \\( p \\) in the toy distribution \\( T \\) so that the strategy in Q.2 is better and one value for \\( p \\) so that the strategy in Q.1 is better. We recall that we must have \\( \\frac{p}{a} \\geq \\frac{1-p}{n-a} \\)." }, { "context": "The best strategy is to test the m most likely possible passwords by decreasing order of likelihood, i.e.\n1. loop\n2. reset\n3. for i = 1 to m do\n4. test \\( k_i \\)\n5. end for\n6. end loop\n\nLet \\( p_m = \\Pr[k_1] + \\cdots + \\Pr[k_m] \\)\n\nWe define the distribution \\( D' = D|K \\in \\{k_1, \\ldots, k_m\\}\\) conditioned to \\( K \\in \\{k_1, \\ldots, k_m\\}\\). We have \\( \\Pr_{D'}[k_i] = \\frac{1}{p_m} \\Pr_D[k_i] \\). 
The distribution \\( D' \\) has a guesswork entropy \\( G_m \\) defined by\n\\[\nG_m = \\frac{1}{p_m} \\sum_{i=1}^{m} i \\Pr_D[k_i]\n\\]\n\nThe expected number of iterations of the outer loop is \\( \\frac{1}{p_m} \\). The number of tests during the last iteration is \\( G_m \\). The number of tests during each of the previous iterations is exactly m. So, the expected number of tests is\n\\[\nC_m = m \\left( \\frac{1}{p_m} - 1 \\right) + G_m\n\\]\n\nNote that for \\( m = 1 \\), we have \\( p_1 = \\Pr[k_1] \\), \\( G_1 = 1 \\), and \\( C_1 = \\frac{1}{\\Pr[k_1]} \\). For \\( m = n \\), we have \\( p_n = 1 \\), \\( G_n = G \\), and \\( C_n = G \\).", "question": "### Q.4\nWe consider a strategy in which the player always pushes the reset button after m tests have been made since the last reset. For a general distribution D, give an optimal strategy and the corresponding value of C. Check that your result is consistent with those from Q.1 and Q.2 with m = 1 and m = n." } ]
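The cost formulas from Q.1, Q.2, and Q.4 can be cross-checked with exact rational arithmetic. The sketch below (function names and the sample parameters n = 3, a = 1, p = 5/8 are ours) evaluates the expected cost of the reset-every-m-tests strategy on the toy distribution T:

```python
# Exact evaluation of C_m = m(1/p_m - 1) + G_m with rational arithmetic.
from fractions import Fraction as F

def expected_cost(probs, m):
    """Expected number of tests when resetting after every m tests;
    probs must be sorted in decreasing order of likelihood."""
    p_m = sum(probs[:m])                                       # Pr[K in {k_1..k_m}]
    g_m = sum((i + 1) * pr for i, pr in enumerate(probs[:m])) / p_m  # G_m
    return m * (1 / p_m - 1) + g_m

probs = [F(5, 8), F(3, 16), F(3, 16)]            # toy distribution T
guesswork = sum((i + 1) * pr for i, pr in enumerate(probs))
assert expected_cost(probs, 1) == 1 / probs[0]   # Q.1: C_1 = 1/Pr[k_1] = 8/5
assert expected_cost(probs, 3) == guesswork      # Q.2: C_n = G = 25/16
print(min(range(1, 4), key=lambda m: expected_cost(probs, m)))  # → 3
```

On this particular distribution the never-reset strategy (m = n = 3) wins, matching the Q.3 discussion for p = 5/8 < 2/3.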
2015-01-20T00:00:00
Serge Vaudenay
EPFL
A Bad EKE with RSA
In this exercise we want to apply the EKE construction with the RSA cryptosystem and the AES cipher to derive a password-based authenticated key exchange protocol (PAKE). For that, Alice and Bob are assumed to share a (low-entropy) password \( w \). The protocol runs as follows:
```
Alice (password: w)                              Bob (password: w)
RSAGen → (sk, N)
B_1|...|B_8 ← N
C_1|...|C_8 ← AESCBC_w(B_1|...|B_8)
                  C_1|...|C_8 →
                                                 B_1|...|B_8 ← AESCBC_w^{-1}(C_1|...|C_8)
                                                 N ← B_1|...|B_8, pick K ∈ {0,1}^{128}
                                                 Γ ← RSAEnc_{N,3}(K)
                                                 B'_1|...|B'_8 ← Γ
                                                 C'_1|...|C'_8 ← AESCBC_w(B'_1|...|B'_8)
                  ← C'_1|...|C'_8
B'_1|...|B'_8 ← AESCBC_w^{-1}(C'_1|...|C'_8)
Γ ← B'_1|...|B'_8
K ← RSADec_{sk}(Γ)
output: K                                        output: K
```
Here are some explanations:
- Alice generates an RSA modulus \( N \) such that \( \gcd(3, \varphi(N)) = 1 \). This modulus is supposed to have exactly 1024 bits. The modulus \( N \) is written in binary and split into 8 blocks \( N = B_1|...|B_8 \). The blocks \( B_1, ..., B_8 \) are then encrypted with AES in CBC mode with IV set to the zero block and the key set to \( w \). The obtained ciphertext blocks \( C_1, ..., C_8 \) are sent to Bob.
- Bob decrypts \( C_1, ..., C_8 \) following the AES-CBC decryption algorithm with IV set to the zero block and the key set to \( w \). He recovers \( B_1, ..., B_8 \) and can reconstruct \( N \). He picks a random 128-bit key \( K \) and computes the RSA-OAEP encryption of \( K \) with key \( N \) and \( e = 3 \). He then obtains a ciphertext \( Γ \). This is split into 8 blocks \( Γ = B'_1|...|B'_8 \) and the blocks \( B'_1, ..., B'_8 \) are then encrypted with AES in CBC mode with IV set to the zero block and the key set to \( w \). The obtained ciphertext blocks \( C'_1, ..., C'_8 \) are sent to Alice.
- Alice decrypts \( C'_1, ..., C'_8 \) following the AES-CBC decryption algorithm with IV set to the zero block and the key set to \( w \). She recovers \( B'_1, ..., B'_8 \) and can reconstruct \( Γ \). 
She applies the RSA-OAEP decryption on \( Γ \) with her secret key and obtains \( K \). So, Alice and Bob end the protocol with the secret \( K \).
[ { "context": "Since \\( K < 2^{128} \\), we have \\( K^3 < 2^{384} \\) which is small. So, \\( K^3 \\mod N = K^3 \\). Eve could do an exhaustive search on \\( w \\) to decrypt \\( C'_1|...|C'_8 \\) to obtain some candidate values for \\( Γ \\). She could then compute \\( \\sqrt[3]{Γ} \\). For wrong guesses for the password, this is unlikely to be an integer. For the correct \\( w \\), this gives \\( K \\). So, Eve recovers \\( w \\) and \\( K \\).", "question": "### Q.1\nAssume (only in this question) that we use plain RSA instead of RSA-OAEP. Show that Eve can easily recover \\( w \\) and \\( K \\) in a passive attack with a single execution of the protocol. **HINT**: show that the plain RSA decryption of \\( Γ \\) is easy in this case." }, { "context": "We apply the principle of the partition attack: the set of valid \\( N \\) does not correspond to the set of messages. Namely, the most significant and the least significant bits of \\( N \\) must always be 1 (since \\( N \\) is odd and between \\( 2^{1023} \\) and \\( 2^{1024} \\)). We could also note that \\( N \\mod 3 = 1 \\), so the set of valid \\( N \\) is at most \\( \\frac{1}{3} \\) of the full space. Eve could do an exhaustive search on all \\( w \\), decrypt \\( C_1 || \\cdots || C_8 \\) with the trial passwords and discard all trials not satisfying the above conditions. The set of possible passwords would thus be reduced by at least a factor 8. By using \\( k \\) executions of the protocol, the set of possible passwords is reduced by \\( 8^k \\) until it contains only the correct password \\( w \\). For instance, if \\( w \\) has an entropy lower than 48 bits, \\( k = 16 \\) iterations are enough to isolate the correct password.", "question": "### Q.2\nPropose a passive attack allowing Eve to deduce the password \\( w \\) after a few executions of the protocol. Estimate the number of executions needed to recover a password with less than 48 bits of entropy with a high probability. 
**HINT**: \\( N \\) is not an arbitrary bitstring. You could think of eliminating some password guesses." } ]
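The attack of Q.1 hinges on the fact that with plain RSA, e = 3, and a 128-bit K, no modular reduction happens, so Γ has an exact integer cube root. A minimal sketch (the helper `icbrt` is our own, implemented by binary search since the standard library has no integer cube root):

```python
import secrets

def icbrt(n):
    """Floor of the integer cube root of n >= 0, by binary search."""
    lo, hi = 0, 1 << (n.bit_length() // 3 + 2)
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if mid ** 3 <= n:
            lo = mid
        else:
            hi = mid - 1
    return lo

# With plain RSA and e = 3, a 128-bit K gives K^3 < 2^384 << N ~ 2^1024,
# so the "ciphertext" is just K^3 over the integers.
K = secrets.randbits(128)
gamma = K ** 3                 # = K^3 mod N, since no reduction happens
guess = icbrt(gamma)
assert guess ** 3 == gamma     # an exact cube flags the right password guess
assert guess == K              # ...and directly reveals the session key
```

For a wrong password guess, the candidate Γ decrypts to an essentially random 1024-bit value, which is an exact cube only with negligible probability; this is what lets Eve filter passwords.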
2015-01-20T00:00:00
Serge Vaudenay
EPFL
Attack on 2K-3DES
This exercise is based on “On the security of multiple encryption” by Merkle and Hellman, Communications of the ACM, Vol. 24(7), July 1981.
[ { "context": "Blocks have 64 bits. The key has 56 effective bits. A single known plaintext-ciphertext pair \\((x, y)\\) is enough to characterize the correct key, as the probability that no wrong key is consistent with it is \\( (1 - 2^{-64})^{2^{56}} \\approx e^{-2^{-8}} \\), which is very close to 1. The average complexity is of \\( 2^{55} \\) trials with a small memory (just enough to store the data and a counter).", "question": "### Q.1 What are the block length and the key length in DES? What is the complexity of key recovery exhaustive search in terms of data, known plaintexts versus chosen ciphertexts, memory, and time?" }, { "context": "We now need two pairs \\((x_i, y_i)\\), \\( i = 1, 2 \\) to characterize the correct key uniquely. With 2 known plaintexts, we prepare a dictionary of \\( 2^{56} \\) records \\((\\text{DES}_b^{-1}(y_1), b)\\) for all \\( b \\). Records are sorted by their first component. The dictionary takes \\( 8 \\times 2^{56} \\) bytes. There would be tricks to shrink it a bit but the order of magnitude should stay \\( 2^{56} \\). Then, for all \\( a \\) we compute \\( \\text{DES}_a(x_1) \\) and check if this is in the dictionary. When it is, \\( b \\) is given by the dictionary and we check if \\( y_2 = \\text{DES}_b(\\text{DES}_a(x_2)) \\). If it matches, then \\( K_1 = b \\) and \\( K_2 = a \\) is the correct key. The time complexity consists of \\( 4 \\times 2^{56} \\) DES encryptions. Again, there would be tricks to reduce it a bit but the order of magnitude should stay \\( 2^{56} \\).", "question": "### Q.2 Double DES is defined by\n\n\\[ y = \\text{DES}_{K_1}(\\text{DES}_{K_2}(x)) \\]\n\nExplain how the meet-in-the-middle attack works. What is its complexity in terms of data, known plaintexts versus chosen ciphertexts, memory, and time?" }, { "context": "For each \\(a\\) we compute \\(x = \\text{DES}^{-1}_a(0)\\) and use \\(x\\) as a chosen plaintext. We obtain \\(y\\). 
Then, we check if \\(\\text{DES}^{-1}_a(y)\\) is in the dictionary. If it is, it means that \\(\\text{DES}^{-1}_a(y) = \\text{DES}^{-1}_b(0)\\) for some \\(b\\) and it gives \\(b\\). Clearly, \\(x\\) encrypts to \\(y\\) with key \\((a, b)\\). With a previous plaintext-ciphertext pair we can check if this key is correct. Clearly, when \\(a\\) becomes equal to \\(K_1\\) (which would happen after an average number of trials equal to \\(2^{55}\\)), this attack recovers \\(K_2\\). So, it works with a number of DES operations equal to \\(3 \\times 2^{55}\\), \\(2^{55}\\) chosen plaintexts, and a dictionary of \\(2^{56}\\) entries.", "question": "### Q.3 Two-key triple DES is defined by\n\n\\[ y = \\text{DES}_{K_1} \\left( \\text{DES}^{-1}_{K_2} \\left( \\text{DES}_{K_1}(x) \\right) \\right) \\]\n\nBy preparing a dictionary of all \\((\\text{DES}^{-1}_k(0), k)\\) pairs, show that we can break this using many chosen plaintexts and within a time/memory complexity similar to that of the previous question." } ]
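The meet-in-the-middle attack of Q.2 can be demonstrated on a hypothetical 16-bit toy cipher with 8-bit keys (our own stand-in for DES, so the dictionary has 2^8 entries instead of 2^56 and the attack runs instantly):

```python
# Meet-in-the-middle on double encryption, scaled down to a toy cipher.
M = 1 << 16

def enc(k, x):
    # 8-bit key k, 16-bit block x; 40503 is odd, hence invertible mod 2^16.
    return ((x ^ (k * 257)) * 40503 + k) % M

def dec(k, y):
    return (((y - k) * pow(40503, -1, M)) % M) ^ (k * 257)

k1, k2 = 0x5A, 0xC3                              # secret keys: y = enc(k1, enc(k2, x))
(x1, y1), (x2, y2) = [(x, enc(k1, enc(k2, x))) for x in (0x1234, 0xBEEF)]

table = {}                                        # middle value -> candidate outer keys b
for b in range(256):
    table.setdefault(dec(b, y1), []).append(b)

candidates = [(b, a)
              for a in range(256)                 # candidate inner keys a
              for b in table.get(enc(a, x1), [])
              if enc(b, enc(a, x2)) == y2]        # confirm with the second pair
assert (k1, k2) in candidates                     # ~2^9 operations instead of ~2^16
```

As in the text, the first pair drives the dictionary lookup and the second pair discards chance matches of the middle value.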
2012-01-25T00:00:00
Serge Vaudenay
EPFL
Collisions with a Subset
In a classroom, we have \(x\) female students and \(y\) male students. We assume that their birthday is uniformly distributed in a calendar of \(N\) possible dates, e.g., \(N = 365\).
[ { "context": "> The probability that we have no collision is\n> \n> \\[\n> 1 \\cdot \\left(1 - \\frac{1}{N}\\right) \\cdot \\left(1 - \\frac{2}{N}\\right) \\cdots \\left(1 - \\frac{x-1}{N}\\right) = \\frac{N!}{N^x (N - x)!}\n> \\]\n> \n> So, the probability to have a female-female collision is\n> \n> \\[\n> p_{xx} = 1 - \\frac{N!}{N^x (N - x)!}\n> \\]", "question": "### Q.1 Let \\(p_{xx}\\) denote the exact probability that there are two different female students with the same birthday. Express \\(p_{xx}\\) in terms of \\(N\\) and \\(x\\)." }, { "context": "> The probability of no female-male collision under the condition that there are exactly \\(x\\) pairwise distinct female birthdays is \\(\\left(1 - \\frac{x}{N}\\right)^y\\). So, the probability to have a male-female collision is\n> \n> \\[\n> p_{xy| \\neg xx} = 1 - \\left(1 - \\frac{x}{N}\\right)^y\n> \\]", "question": "### Q.2 Let \\(p_{xy| \\neg xx}\\) denote the exact probability that there is at least one female-male pair of students who share the same birthday conditioned to that female students have pairwise different birthdays. Express \\(p_{xy| \\neg xx}\\) in terms of \\(N\\), \\(x\\), and \\(y\\)." }, { "context": "> \\[\n> \\begin{aligned}\n> \\text{We have} \\quad p_{xy| \\neg xx} &= 1 - e^{y \\log \\left(1 - \\frac{x}{N}\\right)} \\\\\n> \\text{Since} \\log(1 - \\epsilon) &\\approx -\\epsilon, \\quad \\text{we obtain the result.}\n> \\end{aligned}\n> \\]", "question": "### Q.3 Show that \\( p_{xy| \\neg xx} \\approx 1 - e^{-\\frac{xy}{N}} \\)." }, { "context": "> \\[\n> \\begin{aligned}\n> \\text{It is} \\quad p_{xk} &= p_{xx} + (1 - p_{xx}) p_{xy| \\neg xx} \\\\\n> &= 1 - \\frac{N!}{N^x (N - x)!} + \\frac{N!}{N^x (N - x)!} \\left( 1 - \\left( 1 - \\frac{x}{N} \\right)^y \\right)\n> \\end{aligned}\n> \\]", "question": "### Q.4 Based on the previous computations, what is the exact probability \\( p_{xk} \\) that at least one female student shares the same birthday with another student (either female or male)?" 
}, { "context": "> \\[\n> \\begin{aligned}\n> \\text{We have} \\quad p_{xk} &= p_{xx} + (1 - p_{xx}) p_{xy| \\neg xx} \\\\\n> &\\approx 1 - e^{-\\frac{x^2}{2N}} + e^{-\\frac{x^2}{2N}} \\left( 1 - e^{-\\frac{xy}{N}} \\right) \\\\\n> &= 1 - e^{-\\frac{x(x+2y)}{2N}}\n> \\end{aligned}\n> \\]", "question": "### Q.5 Show that \\( p_{xk} \\approx 1 - e^{-\\frac{x(x+2y)}{2N}} \\). Hint: \\( p_{xx} \\approx 1 - e^{-\\frac{x^2}{2N}} \\)." }, { "context": "> \\[\n> \\begin{aligned}\n> \\text{We take} \\quad x &= n_u \\quad \\text{and} \\quad y = n_t. \\\\\n> \\text{The attacker succeeds if either there is a collision between the list} \\quad x \\quad \\text{and the list} \\quad y, \\quad \\text{or there is a collision inside the list} \\quad x. \\\\\n> \\text{If the range of the hash function is} \\quad N, \\quad \\text{this probability is thus} \\quad p_{xk}.\n> \\end{aligned}\n> \\]", "question": "### Q.6 In a community of \\( n_u \\) users each having a password, we assume that there is a public directory for the hash of the passwords. We consider an attacker who tries to find password matches with the existing database of \\( n_u \\) password hashes. He is allowed to try \\( n_t \\) many random passwords and hash them. We say that he succeeds if he gets any match. That is to say, he succeeds if either he finds at least one password with a hash in the directory, or if he finds two users having the same password hash in the directory. What is his success probability?" } ]
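The approximation in Q.5 comes from multiplying the two no-collision probabilities, e^{-x^2/(2N)} · e^{-xy/N} = e^{-x(x+2y)/(2N)}. A quick numerical check against the exact expression (function names are ours):

```python
import math

def p_xk_exact(N, x, y):
    """Exact probability that some female student shares a birthday with
    another student (female or male)."""
    no_ff = math.prod(1 - i / N for i in range(x))  # all x female birthdays distinct
    no_fm = (1 - x / N) ** y                         # no male hits a female birthday
    return 1 - no_ff * no_fm

def p_xk_approx(N, x, y):
    # e^{-x^2/(2N)} * e^{-xy/N} = e^{-x(x+2y)/(2N)}
    return 1 - math.exp(-x * (x + 2 * y) / (2 * N))

exact, approx = p_xk_exact(365, 10, 20), p_xk_approx(365, 10, 20)
assert abs(exact - approx) < 0.02   # the two agree to within a couple of percent
```

With N = 365, x = 10, y = 20 both values are close to 1/2, and the gap shrinks further as N grows relative to x and y.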
2012-01-25T00:00:00
Serge Vaudenay
EPFL
Secure Communication Across the Röstigraben
*Warning: this exercise asks you to propose a real solution for a real problem. You are requested to precisely describe your proposed solution so that we could assess on correctness, feasibility, efficiency, and security. Take this exercise as if it was for a hiring interview for an engineer position.* You want to communicate securely with your friend in Zurich, but you forgot to prepare for it the last time you met. Fortunately, you are making MSc studies with courses in cryptography, so you are familiar with communication systems and computers, and so is your friend.
[ { "context": "You could use a software such as GPG to set up your own key. You could also try to develop your own RSA at your own risks. For instance, you could generate two prime numbers \\( p \\) and \\( q \\) large enough, compute \\( n = p \\times q \\) and pick a random \\( e \\) until it is coprime with \\((p−1)\\times(q−1)\\). The public key would be \\( K^L_p = (n, e) \\) while the private one would be \\( K^L_s = (n, d) \\) with \\( d = e^{−1} \\mod ((p−1)(q−1)) \\).", "question": "### Q.1 How would you generate a private/public key pair on your computer?" }, { "context": "You could exchange your public keys over the Internet (e.g., by email), then authenticate them using a second channel. To authenticate your public key \\( K^L_p \\) to your counterpart, you can pick a random string \\( r_L \\), compute \\( \\text{SHA1}(K^L_p||r_L) \\), take the 80 leftmost bits \\( \\sigma_L \\), send \\( r_L \\) by email. Your friend would do the same computation to obtain \\( \\sigma'_L \\). Then, you call each other over the telephone and recognize each other by your voice. Then, you would spell \\( \\sigma_L \\) to your friend who will compare it with \\( \\sigma'_L \\). If they match, then your key is authenticated. The authentication of his public key \\( K^Z_p \\) would be similar: you receive \\( r_Z \\) by email, compute \\( \\text{SHA1}(K^Z_p||r_Z) \\), take the 80 leftmost bits \\( \\sigma'_Z \\). When you call each other over the phone, he would spell \\( \\sigma'_Z \\) and you would have to check \\( \\sigma_Z = \\sigma'_Z \\). There are better interactive protocols using shorter strings than 80 bits and which are called SAS-based authentication protocols. One advantage of the above one is that it is essentially non-interactive. It can work even with a very slow channel instead of a telephone. For instance, you could exchange \\( \\sigma \\)'s by regular mail with enough handwriting so that you could authenticate the message. 
Concrete implementations heavily depend on the properties of this second channel. If voice recognition is not enough, you would also transmit the 80-bit \\( \\sigma_Z \\) by several channels (telephone, email, sms, regular mail) and assume that no adversary will be able to attack all channels.", "question": "### Q.2\nHow would you and your friend securely exchange your public keys?" }, { "context": "You could use the public keys to exchange symmetric keys. To exchange symmetric keys, you would exchange some secret random numbers by encrypting them with each other’s public key, then XOR them, and use the result as a seed to generate symmetric keys. For instance, you pick \\( x_L \\) using a pseudorandom generator with a seed set to some secret stuff with large enough entropy and send \\( y_L = \\text{Enc}_{K^Z_p}(x_L) \\) to your friend. You receive \\( y_Z \\) and compute \\( x_Z = \\text{Dec}_{K^L_s}(y_Z) \\). Then, \\( s = x_L \\oplus x_Z \\) can be used as a seed for a pseudorandom generator to generate a symmetric key \\( K \\). As a pseudorandom generator, you can use\n\n \\( \\text{SHA1}(1||\\text{stuff})||\\text{SHA1}(2||\\text{stuff})||\\ldots \\)\n\nWith GPG you would just send encrypted emails with public key \\( K^Z_p \\). You can decrypt the email from your counterpart using \\( K^L_s \\). With your hand-made RSA you would take a random number \\( w \\) which is twice as long as your friend’s modulus (to avoid biases in distributions), and reduce it modulo that number to get \\( x_L \\). You would compute \\( y_L = x_L^{e^Z} \\mod N^Z \\) and send it by email. As your hand-made implementation of RSA would essentially be the plain RSA cryptosystem, it would be wise to throw away your RSA keys after the protocol completes in order to avoid all the problems of plain RSA.", "question": "### Q.3\nHow would you use public keys to set up a symmetric key with your friend?" }, { "context": "A key \\( K \\) would have two parts \\( K = (K_1, K_2) \\). 
Let \\( c−1 \\) be the number of messages exchanged so far. To send the \\( c \\)-th message \\( x_c \\), you can send \\( \\text{Enc}_{K_1}(\\text{Mac}_{K_2}(c||x_c)||x_c) \\). It is assumed that you would send even numbered messages and that you would receive odd numbered messages. The synchronized counter protects message sequentiality. For instance it defeats replay attacks. Encryption and MAC protect message confidentiality, authentication, and integrity.", "question": "### Q.4\nHow would you implement a secure communication channel based on this key?" }, { "context": "The described solution requires:\n– that \\( \\sigma \\)'s are well authenticated (e.g., by voice recognition);\n– that the public-key solution is secure (e.g., GPG is secure or your home-made RSA is secure);\n– that your random numbers are really random, based on a large enough entropy;\n– that the block cipher and the MAC are secure.", "question": "### Q.5 Under which assumptions would your system be secure?" } ]
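The packet format of Q.4 can be sketched with the Python standard library only. In this toy illustration of the counter/MAC/encrypt layering, HMAC-SHA256 stands in for the MAC and a SHA-256-based keystream stands in for the block cipher; it shows the layering, not a vetted construction (a real design would use an authenticated cipher such as AES-GCM):

```python
import hashlib
import hmac
import itertools

def keystream(key, nonce, n):
    """Toy SHA-256 counter-mode keystream -- a placeholder for a real cipher."""
    out = b""
    for ctr in itertools.count():
        out += hashlib.sha256(key + nonce + ctr.to_bytes(8, "big")).digest()
        if len(out) >= n:
            return out[:n]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def send(k1, k2, counter, msg):
    """Enc_{K1}(MAC_{K2}(counter || msg) || msg), as in the answer to Q.4."""
    c = counter.to_bytes(8, "big")
    tag = hmac.new(k2, c + msg, hashlib.sha256).digest()
    body = tag + msg
    return xor(body, keystream(k1, c, len(body)))

def recv(k1, k2, counter, packet):
    c = counter.to_bytes(8, "big")
    body = xor(packet, keystream(k1, c, len(packet)))
    tag, msg = body[:32], body[32:]
    if not hmac.compare_digest(tag, hmac.new(k2, c + msg, hashlib.sha256).digest()):
        raise ValueError("authentication failed")
    return msg

k1, k2 = b"enc-key-16bytes!", b"mac-key-16bytes!"
pkt = send(k1, k2, 2, b"hello Zurich")
assert recv(k1, k2, 2, pkt) == b"hello Zurich"
```

Because the counter enters both the MAC input and the keystream nonce, replaying an old packet under the current (larger) counter fails the tag check, which is exactly the sequentiality property discussed in the answer.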
2012-01-25T00:00:00
Serge Vaudenay
EPFL
Modular Arithmetic
Let \( p \) and \( q \) be two different odd prime numbers and \( n = pq \).
[ { "context": "Since \\( p \\) and \\( q \\) are distinct primes, we have \\( \\gcd(p, q) = 1 \\). By Bézout's theorem, there exist \\( u, v \\in \\mathbb{Z} \\) such that \\( up + vq = 1 \\). Reducing this identity modulo \\( q \\) gives \\( up \\equiv 1 \\pmod{q} \\), and reducing it modulo \\( p \\) gives \\( vq \\equiv 1 \\pmod{p} \\). Hence \\( p \\) is invertible modulo \\( q \\) and \\( q \\) is invertible modulo \\( p \\).", "question": "### Q.1 Show that \\( p \\) is invertible modulo \\( q \\) and that \\( q \\) is invertible modulo \\( p \\)." }, { "context": "\\[ \\text{In what follows, } \\alpha = q \\times q' \\text{ where } q' \\in \\mathbb{Z} \\text{ is the inverse of } q \\text{ modulo } p, \\text{ and } \\beta = p \\times p' \\text{ where } p' \\in \\mathbb{Z} \\text{ is the inverse of } p \\text{ modulo } q. \\text{ We define } f(x, y) = \\alpha x + \\beta y, \\text{ where } x, y \\in \\mathbb{Z}. \\]\n\\[ \\text{Since } \\beta \\text{ is a multiple of } p, \\text{ we have } f(x, y) \\equiv \\alpha x \\pmod{p}. \\text{ Now, } \\alpha \\mod p = 1. \\]\n\\[ \\text{Since } 0 \\leq x < p, \\text{ we obtain that } f(x, y) \\mod p = x. \\]", "question": "### Q.2 For \\( x \\in \\{0, \\ldots, p-1\\} \\) and \\( y \\in \\mathbb{Z} \\), what is \\( f(x, y) \\mod p? \\)" }, { "context": "\\[ \\text{The function } f \\text{ is used in the Chinese Remainder Theorem. Indeed, } f(x, y) \\equiv x \\pmod{p} \\text{ and } f(x, y) \\equiv y \\pmod{q}. \\]", "question": "### Q.3 Which concept of the course corresponds to the function \\( f \\)?" }, { "context": "\\[ \\text{We have } f(1, 1) \\mod p = 1 \\text{ and } f(1, 1) \\mod q = 1. \\text{ Since } p \\text{ and } q \\text{ are coprime, we deduce } f(1, 1) = 1 + kn \\text{ for some integer } k. \\]\n\\[ \\text{We have } f(1, 1) = \\alpha + \\beta \\text{ so } 0 < f(1, 1) \\leq q(p - 1) + p(q - 1) = 2n - p - q. 
\\text{ Clearly, } f(1, 1) = \\alpha + \\beta > 1. \\text{ So, } 1 \\leq k < 2. \\text{ We deduce } k = 1 \\text{ so } f(1, 1) = 1 + n. \\]", "question": "### Q.4 Show that \\( f(1, 1) = 1 + n \\)." }, { "context": "Since \\( f(x, x) \\mod p = x \\) due to the previous question, we obtain that \\( f(x, x) - x \\) is divisible by \\( p \\). Similarly, it must be divisible by \\( q \\). Since \\( p \\) and \\( q \\) are coprime, \\( f(x, x) - x \\) is divisible by \\( n \\), for all \\( x \\).\nLet \\( m \\) be the largest common factor of all \\( f(x, x) - x \\). Since \\( n \\) divides all \\( f(x, x) - x \\), \\( n \\) divides \\( m \\) as well, so \\( m \\geq n \\).\nSince \\( f(1, 1) - 1 = n \\), we have \\( m \\leq n \\). So, \\( m = n \\).", "question": "### Q.5 Give the largest common factor of all numbers of the form \\( f(x, x) - x \\) for \\( x \\in \\mathbb{Z} \\)." }, { "context": "We only have two square roots modulo \\( p \\) which are \\( x \\) and \\( -x \\). The same holds modulo \\( q \\). Then, the square roots modulo \\( n \\) must be of form \\( f(\\pm x, \\pm x) \\mod n \\). We can see that we obtain the four square roots which are \\( x \\), \\( -x \\), \\( f(x, -x) \\), and \\( -f(x, -x) \\) modulo \\( n \\).", "question": "### Q.6 Let \\( x \\in \\mathbb{Z}_n \\). Using \\( f \\), list all the square roots of \\( x^2 \\mod n \\) in \\( \\mathbb{Z}_n \\)." }, { "context": "We know that \\( z - x \\) is divisible by \\( p \\). We further have \\( z - x \\equiv y - x \\pmod{q} \\). Due to these assumptions, this is not divisible by \\( q \\). So, \\( \\gcd(z - x, n) = p \\). We can compute \\( p \\) by this formula then compute \\( q = n / p \\).\nThis algorithm was seen in the course when we have shown that we can factor \\( n \\) based on a square root algorithm. In this case, the square roots of \\( x^2 \\mod n \\) can be written \\( f(\\pm x, \\pm x) \\mod n \\), and we hoped to get \\( f(x, -x) \\mod n \\). 
Given \\( f(x, -x) \\) and \\( x \\), we could factor \\( n \\).", "question": "### Q.7 Assuming that \\( p < q \\), that is \\( x \\in \\{0, \\ldots, p-1\\} \\), \\( y \\in \\{0, \\ldots, q-1\\} \\), that \\( x \\neq y \\), let \\( z = f(x, y) \\). Give an algorithm to compute \\( p \\) and \\( q \\) when given \\( z \\), \\( x \\), and \\( n \\)." } ]
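The constructions in this exercise check out numerically with small toy primes of our choosing (p = 11, q = 19); Python's built-in `pow(q, -1, p)` computes the modular inverse:

```python
from math import gcd

p, q = 11, 19
n = p * q
alpha = q * pow(q, -1, p)     # alpha ≡ 1 (mod p) and ≡ 0 (mod q)
beta = p * pow(p, -1, q)      # beta  ≡ 0 (mod p) and ≡ 1 (mod q)

def f(x, y):
    """CRT recombination: f(x, y) ≡ x (mod p) and ≡ y (mod q)."""
    return (alpha * x + beta * y) % n

assert f(7, 13) % p == 7 and f(7, 13) % q == 13   # Q.2/Q.3
assert alpha + beta == 1 + n                       # Q.4: f(1, 1) = 1 + n
# Q.7: a CRT combination z = f(x, y) with x != y (mod q) factors n.
x, y = 3, 5
z = f(x, y)
assert gcd(z - x, n) == p
```

The Q.7 assertion is the key fact behind factoring via square roots: z - x is divisible by p but not by q, so the gcd isolates one prime factor.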
2013-01-15T00:00:00
Ioana Boureanu, Serge Vaudenay
EPFL
A MAC Based on DES
We construct a (bad) MAC as follows: given a message \( m \) and a key \( K \), we first compute \( h = \text{trunc}(\text{SHA1}(m)) \) where trunc maps onto the keyspace of DES (assume that the preimages by trunc have the same size). Then, we compute \( c = \text{DES}_h(K) \) which is the authentication code.
[ { "context": "Since a DES key has 56 bits, only 56 useful bits from \\( m \\) are used. (Although \\( h \\) has 64 bits, 8 of these bits are not used by DES.)", "question": "### Q.1 How many bits of entropy are used from \\( m \\) to compute \\( c \\)?" }, { "context": "Due to the birthday paradox, we roughly need \\( 2^{28} \\) random messages to have a collision on the DES key which is computed from \\( m \\).", "question": "### Q.2 How many random messages do we need in order to see the same authentication code twice with a good probability? (Explain.)" }, { "context": "We first look for the collision in an offline way, using about \\( 2^{28} \\) random messages. When this is done, we obtain \\( m_1 \\) and \\( m_2 \\) producing the same DES key. Therefore, we must have \\( c = \\text{MAC}_K(m_1) = \\text{MAC}_K(m_2) \\). So, we just use \\( m_1 \\) as a chosen message to get \\( c \\), and we produce the forgery \\((m_2, c)\\).\n\nThe above was the expected answer to the exercise. However, there was a typo in the definition of the MAC which made possible to propose a better solution: With the given definition, there is a much better attack as follows: given \\( m \\) and \\( c = \\text{MAC}_K(m) \\), compute \\( h \\) from \\( m \\) then \\( K = \\text{DES}_h^{-1}(c) \\). This recovers the key, based on which we can make forgeries.\nThe correct definition of the MAC would have been \\( c = K \\oplus \\text{DES}_h(K) \\).", "question": "### Q.3 Describe a chosen-message forgery attack against the MAC which uses only one chosen message." } ]
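The collision search behind Q.2 and Q.3 scales down nicely for demonstration. The sketch below (our own scaling) truncates SHA-1 to 16 bits instead of 56, so the birthday collision shows up after a few hundred messages rather than about 2^28:

```python
import hashlib

def trunc_key(msg):
    """Stand-in for trunc(SHA1(m)): keep only 16 bits here (instead of 56)."""
    return int.from_bytes(hashlib.sha1(msg).digest()[:2], "big")

seen, collision = {}, None
for i in range(4096):
    m = b"message-%d" % i
    h = trunc_key(m)
    if h in seen:
        collision = (seen[h], m)   # same derived key, so MAC_K(m1) = MAC_K(m2)
        break
    seen[h] = m

assert collision is not None
m1, m2 = collision
assert m1 != m2 and trunc_key(m1) == trunc_key(m2)
```

Once such a pair is found offline, querying the MAC on m1 immediately yields a valid forgery for m2, exactly as in the Q.3 answer.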
2013-01-15T00:00:00
Ioana Boureanu, Serge Vaudenay
EPFL
Secure Communication
We want to construct a secure communication channel using cryptography.
[ { "context": "Confidentiality. This ensures that only the legitimate receiver can receive the packet. This is protected by symmetric encryption. \nIntegrity. This ensures that the received packet is equal to the sent one. This is protected by a MAC, together with the next property. \nAuthentication. This ensures that only the legitimate sender can send valid messages. This is protected by a MAC.", "question": "### Q.1 List the three main security properties that we need at the packet level to achieve secure communication. For each property, explain what it means and say which cryptographic technique can be used to obtain it." }, { "context": "Sequentiality. This ensures that the sequences of packets seen at both ends of the channel are prefixes of each other. This is protected by numbering the packets and authenticating the packet number together with the packet itself. \nFairness of termination. This ensures that both ends see the same final message. This is hard to protect. It can be achieved by using the KiT protocol, but it is expensive.", "question": "### Q.2 Assuming that packet communication is secure, list two extra properties (other than key establishment) that we need in order to secure an entire session, and how to ensure these properties." }, { "context": "We must securely set up a symmetric key. This can be done using an extra secure communication channel. \nFor instance, with a channel protecting authentication and integrity, we can use the Diffie-Hellman protocol. \nOtherwise, we can use a third party. E.g., a secure channel to a human user to help authenticating key exchange with SAS-based cryptography. Or, secure channels to a certificate authority to set up a PKI.", "question": "### Q.3 How to secure a key establishment to initialize the secure channel? Give two solutions." } ]
2013-01-15T00:00:00
Ioana Boureanu, Serge Vaudenay
EPFL
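The packet-level properties of Q.1 and the sequentiality property of Q.2 combine into an encrypt-then-MAC packet format with an authenticated sequence number. The sketch below is a toy illustration under our own naming (`protect`/`unprotect`); the hash-based keystream is a stand-in for a real cipher, not a vetted construction:

```python
import hashlib
import hmac

def keystream(key: bytes, seq: int, length: int) -> bytes:
    # Toy keystream from SHA-256(key || seq || counter); illustration only.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + seq.to_bytes(8, "big")
                              + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def protect(enc_key: bytes, mac_key: bytes, seq: int, payload: bytes):
    # Confidentiality: encrypt; integrity/authentication/sequentiality:
    # MAC over the sequence number together with the ciphertext.
    ct = bytes(a ^ b for a, b in zip(payload, keystream(enc_key, seq, len(payload))))
    tag = hmac.new(mac_key, seq.to_bytes(8, "big") + ct, hashlib.sha256).digest()
    return seq, ct, tag

def unprotect(enc_key: bytes, mac_key: bytes, expected_seq: int, packet):
    seq, ct, tag = packet
    good = hmac.new(mac_key, seq.to_bytes(8, "big") + ct, hashlib.sha256).digest()
    if seq != expected_seq or not hmac.compare_digest(tag, good):
        return None        # reject: tampering, replay, or reordering
    return bytes(a ^ b for a, b in zip(ct, keystream(enc_key, seq, len(ct))))

ek, mk = b"e" * 16, b"m" * 16
pkt = protect(ek, mk, 0, b"hello")
assert unprotect(ek, mk, 0, pkt) == b"hello"
assert unprotect(ek, mk, 1, pkt) is None   # replayed out of sequence: rejected
```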
On Entropies
We define `nextprime(x)` as the smallest prime number `p` such that `p ≥ x`. We want to sample a prime number greater than 40 as follows: given a random number `R` with uniform distribution between 1 and 16, we compute `X = nextprime(40 + R)`. For `X` secret, we consider the problem of finding `X`.
[ { "context": "We compute the table of X in terms of R:\n\n| R | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 |\n|---|---|---|---|---|---|---|---|---|---|----|----|----|----|----|----|----|\n| X |41 |43 |43 |47 |47 |47 |47 |53 |53 |53 |53 |53 |53 |59 |59 |59 |\n\nSo, we obtain the following distribution:\n\n| x | 41 | 43 | 47 | 53 | 59 |\n|----|----|----|----|----|----|\n| Pr[X] | 1/16 | 2/16 | 4/16 | 6/16 | 3/16 |", "question": "### Q.1 Give the distribution of all possible values for `X`." }, { "context": "By applying the formula, we obtain H(X) = 2.108459 and c = 2.656152. So, X has about 2.1 bits of entropy. An exhaustive search on a uniformly distributed string of 2.1 bits would require an average complexity of 2.7.", "question": "### Q.2 Compute `H(X)`, the Shannon entropy of `X` and the value `c = \frac{1}{2} (2^{H(X)} + 1)`. **Reminder:** \( H(X) = -\sum_x \text{Pr}[X = x] \log_2 \text{Pr}[X = x] \)" }, { "context": "Clearly, the best strategy is to ask the questions in decreasing order of probability until one answer is “yes”:\n- is the secret X equal to 53?\n- is the secret X equal to 47?\n- is the secret X equal to 59?\n- is the secret X equal to 43?\n- is the secret X equal to 41?\n\nThe average complexity is\n\nG(X) = \frac{6}{16} \times 1 + \frac{4}{16} \times 2 + \frac{3}{16} \times 3 + \frac{2}{16} \times 4 + \frac{1}{16} \times 5 = 2.250000\n\nWe notice that G(X) < c. Actually, the best way to do an exhaustive search on X is more efficient than doing an exhaustive search on H(X) bits, although we might think it is the same.", "question": "### Q.3 Compute `G(X)`, the guesswork entropy of `X`, and compare it with `c`. What do we deduce? **Reminder:** `G(X)` is the lowest expected complexity in the following game. A challenger samples `X`, keeps it secret, and answers questions as follows.
The adversary, trying to guess `X`, can ask as many questions as he wants of the form “is the secret `X` equal to `x`?” for any value `x`. The complexity is the number of questions until one answer is “yes”." }, { "context": "This is\n\\[\n\\Pr[X = Y] = \\sum_x \\Pr[X = Y = x] = \\sum_x \\Pr[X = x]^2 = 0.257813\n\\]", "question": "### Q.4 By sampling two independent prime numbers \\( X \\) and \\( Y \\) following the same distribution, what is the probability that \\( X = Y \\)?" } ]
2013-01-15T00:00:00
Ioana Boureanu, Serge Vaudenay
EPFL
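All of the numbers in this exercise are easy to reproduce. A short script computing the distribution of X = nextprime(40 + R), the entropy H(X), the value c, the guesswork G(X), and the collision probability Pr[X = Y]:

```python
import math
from collections import Counter

def is_prime(n: int) -> bool:
    return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def nextprime(x: int) -> int:
    while not is_prime(x):
        x += 1
    return x

# Distribution of X = nextprime(40 + R), R uniform in {1, ..., 16}
counts = Counter(nextprime(40 + r) for r in range(1, 17))
probs = {x: c / 16 for x, c in counts.items()}

H = -sum(p * math.log2(p) for p in probs.values())       # Shannon entropy
c = (2 ** H + 1) / 2
# Guesswork: query candidate values in decreasing order of probability
G = sum((rank + 1) * p
        for rank, p in enumerate(sorted(probs.values(), reverse=True)))
collision = sum(p * p for p in probs.values())           # Pr[X = Y]

assert counts == {41: 1, 43: 2, 47: 4, 53: 6, 59: 3}
assert abs(H - 2.108459) < 1e-5 and abs(G - 2.25) < 1e-9
```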
Pedersen Commitment
*The following exercise is inspired from Non-Interactive and Information-Theoretic Secure Verifiable Secret Sharing by Pedersen, published in the proceedings of Crypto'91 pp. 129-140, LNCS vol. 576, Springer 1992.* Let \( p \) and \( q \) be two prime numbers such that \( q \) divides \( p - 1 \). Let \( g \) be an element of \( \mathbb{Z}_p^* \) of order \( q \). Let \( h \) be in the subgroup of \( \mathbb{Z}_p^* \) generated by \( g \) but different from the neutral element. Given two numbers \( x \) and \( r \), we define a commitment scheme by \( \text{commit}(x; r) = g^x h^r \mod p \). The protocol works as follows. We assume that the sender wants to commit to a message \( x \) to a receiver. In the commitment phase, the sender selects \( r \) at random, computes \( y = \text{commit}(x; r) = g^x h^r \mod p \) and sends \( y \) to the receiver. In the opening phase, the sender sends some values and the receiver does some computation. (Formalizing further this phase is subject to a question.)
[ { "context": "*Opening the commitment means that the sender provides the value \( x \) and his coins \( r \). The algorithm consists of checking that \( y = \text{commit}(x; r) \) and producing \( x \) as the protocol outcome.*", "question": "### Q.1 Fully formalize what the sender sends to the receiver in the opening phase and which computation the receiver is doing." }, { "context": "*We write \( h = g^a \mod p \) for some \( a \in \mathbb{Z}_q \). Since \( h \) is not the neutral element and \( q \) is prime, we deduce that \( a \in \mathbb{Z}_q^* \). Since \( X \) and \( R \) are independent and since \( R \) is uniformly distributed, we deduce that \( X + aR \mod q \) is uniformly distributed in \( \mathbb{Z}_q \). Now, \( x \mapsto g^x \mod p \) is a one-to-one mapping from \( \mathbb{Z}_q \) to the subgroup generated by \( g \), so \( Y \) is uniformly distributed.*", "question": "### Q.2 Let \( X \) and \( R \) be two independent random variables with values in \( \mathbb{Z}_q \) such that \( R \) is uniformly distributed in \( \mathbb{Z}_q \). Let \( Y = \text{commit}(X; R) \). Show that \( Y \) is uniformly distributed in the subgroup of \( \mathbb{Z}_p^* \) generated by \( g \). \n *Hint: use \( h \) in the subgroup of \( \mathbb{Z}_p^* \) generated by \( g \).*" }, { "context": "Since \( X \) and \( R \) are independent and \( R \) is uniformly distributed, we obtain that \( X + aR \mod q \) and \( X \) are independent.
Due to the one-to-one mapping, we obtain that \\( X \\) and \\( Y \\) are independent.\nThe complete proof of uniformity and independence (not necessary in this exercise) goes like in the course: given \\( x \\in \\mathbb{Z}_q, y \\in \\langle g \\rangle, y = g^b \\),\n\n\\[\n\\begin{aligned}\n\\Pr[X = x, Y = y] &= \\Pr[X = x, X + aR \\equiv b] \\\\\n&= \\Pr[X = x, R \\equiv (b - x)/a] \\\\\n&= \\Pr[X = x] \\Pr[R = (b - x)/a \\mod q] \\\\\n&= \\Pr[X = x] \\frac{1}{q}\n\\end{aligned}\n\\]\n\nSo,\n\n\\[\n\\Pr[Y = y] = \\sum_x \\Pr[X = x, Y = y] = \\sum_x \\Pr[X = x] \\frac{1}{q} = \\frac{1}{q}\n\\]\n\nand \\( Y \\) is uniform. Finally, \\( \n\\Pr[X = x, Y = y] = \\frac{\\Pr[X = x]}{q} = \\Pr[X = x] \\Pr[Y = y] \n\\) so \\( X \\) and \\( Y \\) are independent.", "question": "### Q.3 With the same settings, show that \\( X \\) and \\( Y \\) are independent." }, { "context": "If we know \\( a \\) such that \\( h = g^a \\mod p \\), for any \\( x, r \\) we have\n\n\\[\n\\text{commit}(x + a; r - 1) \\equiv g^{x+a}h^{r-1} \\equiv g^x h^r \\equiv \\text{commit}(x; r) \\quad (\\mod p)\n\\]\n\nSo, we can compute \\( x, r, x', r' \\in \\mathbb{Z}_q \\) such that \\( \\text{commit}(x; r) = \\text{commit}(x'; r') \\) and \\( x \neq x' \\).\n\nConversely, if we know \\( x, r, x', r' \\in \\mathbb{Z}_q \\) such that \\( \\text{commit}(x; r) = \\text{commit}(x'; r') \\) and \\( x \neq x' \\), we have \\( x + ar \\equiv x' + ar' \\mod q \\) where \\( a \\) is such that \\( h = g^a \\mod p \\). If we had \\( r = r' \\mod q \\), we would obtain \\( x \\equiv x' \\mod q \\) which is not possible due to the assumptions. 
So, \\( r - r' \\) is invertible modulo \\( q \\) and we deduce\n\n\\[\na = \\frac{x' - x}{r - r'} \\mod q\n\\]\n\nSo, we can compute \\( a \\) from \\( x, r, x', r', q \\).", "question": "### Q.4 Given \\( p, q, g, h \\), show that computing \\( x, r, x', r' \\in \\mathbb{Z}_q \\) such that \\( \\text{commit}(x; r) = \\text{commit}(x'; r') \\) and \\( x \neq x' \\) is equivalent to computing \\( a \\in \\mathbb{Z}_q \\) such that \\( h = g^a \\mod p \\)." }, { "context": "Due to Question Q.3, \\( \\text{commit}(X; R) \\) is independent from \\( X \\). So, \\( \\text{commit}(X; R) \\) perfectly hides \\( X \\).\nBeing able to open a commitment on two different values \\( x \\) and \\( x' \\) means providing \\( x, r, x', r' \\) such that \\( \\text{commit}(x; r) = \\text{commit}(x'; r') \\). Due to the last question, this is equivalent to being able to compute \\( a \\) such that \\( h = g^a \\mod p \\). Assuming this is hard, we deduce that the commitment is computationally binding.", "question": "### Q.5 Finding \\( a \\in \\mathbb{Z}_q \\) such that \\( h = g^a \\mod p \\) is called the discrete logarithm problem. Assuming that solving the discrete logarithm problem is hard, show that commit defines a hiding and binding commitment scheme." } ]
2013-01-15T00:00:00
Ioana Boureanu, Serge Vaudenay
EPFL
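A toy run with small illustrative parameters (p = 23, q = 11, g = 2; the discrete logarithm a of h is known here only for the demonstration) checks the opening algorithm of Q.1 and the equivalence of Q.4: knowing a allows equivocation, and any double opening reveals a:

```python
p, q = 23, 11              # q divides p - 1 = 22
g = 2                      # g has order 11 mod 23 (2^11 = 2048, and 2048 % 23 == 1)
a = 3                      # discrete log of h in base g, known only for this demo
h = pow(g, a, p)

def commit(x: int, r: int) -> int:
    return (pow(g, x, p) * pow(h, r, p)) % p

def open_check(y: int, x: int, r: int) -> bool:
    # Receiver's computation in the opening phase (Q.1)
    return y == commit(x, r)

x, r = 5, 7
y = commit(x, r)
assert open_check(y, x, r)

# Knowing a gives a double opening: commit(x + a; r - 1) == commit(x; r)
x2, r2 = (x + a) % q, (r - 1) % q
assert commit(x2, r2) == y and x2 != x

# Conversely, a double opening reveals a = (x' - x) / (r - r') mod q (Q.4)
recovered = ((x2 - x) * pow((r - r2) % q, -1, q)) % q
assert recovered == a
```

The modular inverse via `pow(v, -1, q)` requires Python 3.8 or later.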
Security Issue in ECDSA
In the Sony PS3, the bootup code can be changed only when it comes with a valid signature from the manufacturer. The signature scheme is ECDSA. We briefly recall the scheme here. The public key consists of a prime number \( n \), a finite field \( GF(q) \), an elliptic curve over this field, a generator \( G \) of order \( n \), and another point \( Q \). The secret key is an integer \( d \in \mathbb{Z}_n^* \) such that \( Q = dG \). To sign a message \( M \), the signer picks \( k \in \mathbb{Z}_n^* \), computes the point \( (x_1, y_1) = kG \), then \( r = \bar{x}_1 \mod n \) given a function \( x \rightarrow \bar{x} \) from \( GF(q) \) to \( \mathbb{Z} \), and finally \( s = \frac{H(M) + dr}{k} \mod n \) given a hash function \( H \). If \( r = 0 \) or \( s = 0 \), the signer restarts the computation until \( r \neq 0 \) and \( s \neq 0 \). The signature is the pair \( (r, s) \). To verify a signature \( (r, s) \) for a message \( M \), the verifier checks that \( Q \neq O \), that \( Q \) lies on the curve, that \( nQ = O \), and that \( r \in \mathbb{Z}_n^* \). Then, he computes \( u_1 = \frac{H(M)}{s} \mod n \), \( u_2 = \frac{r}{s} \mod n \), and \( (x_1, y_1) = u_1G + u_2Q \), and finally checks that \( r = \bar{x}_1 \mod n \).
[ { "context": "k, r, s, H(M) are integers (taken modulo n). (Strictly speaking, H(M) is a bitstring which shall be converted into an integer.) y1 is a field element. O is the point at infinity, the neutral element of the elliptic curve.", "question": "### Q.1 ECDSA manipulates values of different types such as points, field elements, integers, etc. What are the types of \\( k \\), \\( r \\), \\( s \\), \\( y_1 \\), \\( H(M) \\)? What is \\( O \\)?" }, { "context": "There are two types of finite fields which are popular for ECDSA: the field \\( \\mathbb{Z}_q \\) when q is a prime number and the field GF(q) when q is a power of 2. In the former case, we manipulate integers and reduce them modulo q. In the latter case, we manipulate polynomials with coefficients modulo 2 and reduce them modulo a reference irreducible polynomial.", "question": "### Q.2 What kind of finite fields can we use in practice? Cite at least two and briefly explain how to perform computations in these structures." }, { "context": "If the key is valid, we have \\( Q = dG \\) and \\( G \\) is on the curve. So, \\( Q \\) lies on the curve. Then, \\( nQ = n(dG) = d(nG) \\). Since \\( G \\) has order \\( n \\), we have \\( nG = O \\). Furthermore, \\( dO = O \\). So, \\( nQ = O \\). Then, since \\( d \\in \\mathbb{Z}_n^* \\), \\( dG \\neq O \\). So, \\( Q \\neq O \\). \\( Q \\) passes all verifications. Since \\( r \\) is the result of a modulo \\( n \\) computation, we have \\( r \\in \\mathbb{Z}_n \\). Since \\( r = 0 \\) is excluded from the signature generation and \\( n \\) is prime, we have \\( r \\in \\mathbb{Z}_n^* \\). 
Finally, we have\n\( u_1G + u_2Q = \frac{H(M)}{s} G + \frac{r}{s} Q = \frac{H(M)}{s} G + \frac{dr}{s} G = \frac{H(M) + dr}{s} G = kG \)\n\nDue to the signature generation, we know that \( (x_1, y_1) = kG \) is such that \( r = \bar{x}_1 \mod n \), so the signature is valid.", "question": "### Q.3 If a key is valid and a signature is produced by the signing algorithm, show that the verification algorithm will accept the signature." }, { "context": "Because ECDSA uses elliptic curves on which it is hard to compute the discrete logarithm. Computing \( d \) given \( Q \) and the group material is exactly the problem of computing the discrete logarithm of \( Q \).", "question": "### Q.4 Why is it hard to recover the secret key given the public key?" }, { "context": "Let \( (x_1, y_1) = kG \). We have \( r = \bar{x}_1 \mod n = r' \) and \( s = \frac{H(M) + dr}{k} \mod n \) and \( s' = \frac{H(M') + dr}{k} \mod n \).\n\nSo,\n\( \frac{H(M)}{s} + d \frac{r}{s} \equiv \frac{H(M')}{s'} + d \frac{r}{s'} \mod n \) thus \( d = \frac{sH(M') - s'H(M)}{(s' - s)r} \mod n \) which can be computed.", "question": "### Q.5 For some reasons, the manufacturer produced signatures for different codes using the same random \( k \). Given two codes \( M \) and \( M' \) and their signatures \( (r, s) \) and \( (r', s') \), respectively, show that an adversary can recover \( d \)." } ]
2012-01-25T00:00:00
Serge Vaudenay
EPFL
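The key-recovery formula of Q.5 is pure arithmetic modulo n, so it can be verified without implementing any curve operations. The numbers below (n, d, k, r and the hash values) are arbitrary illustrative stand-ins, not real ECDSA parameters:

```python
n = 101                       # toy prime "group order"
d = 57                        # secret key
k = 33                        # the nonce reused for both signatures
r = 29                        # stands for the x-coordinate of kG mod n
h1, h2 = 40, 88               # stand-ins for H(M) and H(M')

def inv(v: int) -> int:
    return pow(v, -1, n)      # modular inverse (Python 3.8+)

# Two signing equations with the same k, hence the same r
s1 = ((h1 + d * r) * inv(k)) % n
s2 = ((h2 + d * r) * inv(k)) % n

# d = (s * H(M') - s' * H(M)) / ((s' - s) * r) mod n
recovered = ((s1 * h2 - s2 * h1) * inv(((s2 - s1) * r) % n)) % n
assert recovered == d
```

The assertion holds for any choice of parameters with distinct hash values, since both signing equations share the unknowns d and k.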
Hard Disk Encryption
A hard disk is made of sectors of various length (e.g., 4,096 bytes). We want to encrypt data on the disk using the following constraints: - we want security (no information leakage); - we want to use symmetric encryption with a single secret key \( K \) for the entire hard disk; - we prefer to use a block cipher; - we want to be able to access or update a random piece of information without having to process an entire sector; and - encryption should be “in-place”, i.e., ciphertexts must not be larger than plaintexts.
[ { "context": "ℓ = 128 bits.\nWe can propose a CTR mode of AES where\n\n\\[ y_{i,j} = x_{i,j} \\oplus \\text{Enc}_K(i, j) \\]\n\nThis uses a block cipher, we can access data randomly, and the length is preserved.\nWe could not use the CBC, OFB, or CFB modes which require processing more blocks to access a random one. We could not use the ECB mode for security reasons.\n(We can see when two plaintext blocks are equal by comparing the ciphertext blocks, which leaks information.)", "question": "### Q.1 Let ℓ be the block length in bits for the block cipher. We assume that each sector has a length L which is a multiple of ℓ. If i is the index of a sector and j is the index of a block in the sector, we let \\( x_{i,j} \\) denote the plaintext block we would have had at position (i, j) with an unencrypted hard disk. Further, we let \\( y_{i,j} \\) denote the ciphertext block we have in the encrypted hard disk.\nWhat is the value of ℓ in the case of AES? Which mode of operation could we propose to meet all the requirements?" }, { "context": "Decryption is using a similar formula:\n\n\\[ x_{i,j} = \\text{Dec}_{K_1}(y_{i,j} \\oplus t_{i,j}) \\oplus t_{i,j} \\quad \\text{where} \\quad t_{i,j} = \\alpha^j \\times \\text{Enc}_{K_2}(i) \\]\n\nIndeed,\n\n\\[ \\text{Dec}_{K_1}(y_{i,j} \\oplus t_{i,j}) \\oplus t_{i,j} = \\text{Dec}_{K_1}(\\text{Enc}_{K_1}(x_{i,j} \\oplus t_{i,j}) \\oplus t_{i,j}) \\oplus t_{i,j} = x_{i,j} \\]\n\nThe problem if L is not a multiple of ℓ is that there remains an incomplete block and it is not clear how to encrypt it.\nIt is (hopefully) secure, using symmetric encryption with a single key \\( K = (K_1, K_2) \\), using a block cipher, we can have random access to a block, and it is length-preserving. So it meets all requirements.", "question": "### Q.2 We still assume that each sector has a length L which is a multiple of ℓ. 
We define the XTS mode by having a key K composed of two subkeys \( K = (K_1, K_2) \) and by having\n\[ y_{i,j} = \text{Enc}_{K_1}(x_{i,j} \oplus t_{i,j}) \oplus t_{i,j} \quad \text{with} \quad t_{i,j} = \alpha^j \times \text{Enc}_{K_2}(i), \]\nwhere α is a constant and \( \alpha^j \times u \) is defined by GF(2^ℓ) operations. Explain how to decrypt and show that it meets all requirements. What is the problem if L is not a multiple of ℓ?" }, { "context": "This system meets all requirements since it uses a block cipher, keeps the data size, and can access data randomly. In the worst case, two blocks will be processed to recover one.\nTo decrypt \( y_{i,j} \) for \( j < n_i - 1 \) or \( y_{i,n_i} \) of length \( \ell \), we proceed as before. To decrypt \( y_{i,n_i-1} \) and \( y_{i,n_i} \) when \( y_{i,n_i} \) has length smaller than \( \ell \), we proceed as follows: split \( \text{Dec}_{K_1}(y_{i,n_i-1} \oplus t_{i,n_i}) \oplus t_{i,n_i} \) into \( x_{i,n_i} \| u \) where \( x_{i,n_i} \) has the same length as \( y_{i,n_i} \) and \( u \) is the leftover information in the block, then \( x_{i,n_i-1} = \text{Dec}_{K_1}((y_{i,n_i} \| u) \oplus t_{i,n_i-1}) \oplus t_{i,n_i-1} \). Note that the tweaks are used in the reverse order of encryption: \( y_{i,n_i-1} \) was produced with tweak \( t_{i,n_i} \), and \( y_{i,n_i} \| u \) with tweak \( t_{i,n_i-1} \).", "question": "### Q.3 We assume that there is at most one incomplete block per sector and that the size of the sector is \( L > \ell \). We assume that there are \( n_i \) blocks in sector i, that \( j \in \{1, \ldots, n_i\} \), and that the incomplete block (if any) is the one of index \( n_i \). We use the XTS mode from the previous question with the ciphertext stealing technique for the special blocks of index \( n_i - 1 \) and \( n_i \).
Ciphertext stealing consists of using a special rule to compute \\( y_{i,n_i-1} \\) and \\( y_{i,n_i} \\) from \\( x_{i,n_i-1} \\) and \\( x_{i,n_i} \\).\n- if the size of \\( x_{i,n_i} \\) is \\( \\ell \\), proceed as in the previous question;\n- otherwise, split \\( \\text{Enc}_{K_1}(x_{i,n_i-1} \\oplus t_{i,n_i-1}) \\oplus t_{i,n_i-1} \\) into \\( y_{i,n_i} \\| u \\), where \\( y_{i,n_i} \\) is an incomplete block, having the same length as \\( x_{i,n_i} \\), and \\( u \\) is the leftover information in the block. Then, \\( y_{i,n_i-1} = \\text{Enc}_{K_1}((x_{i,n_i} \\| u) \\oplus t_{i,n_i}) \\oplus t_{i,n_i} \\). (The \\( \\| \\) symbol denotes the concatenation operation.)\n\nExplain how to decrypt and show that it meets all requirements." } ]
2012-01-25T00:00:00
Serge Vaudenay
EPFL
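The XTS structure of Q.2 can be exercised end to end with ℓ = 8 and a deliberately toy 8-bit "block cipher" (nothing like AES, chosen only to be invertible), with the tweak multiplication done in GF(2^8) using the AES reduction polynomial and α = x. All names and constants below are our own illustrative choices:

```python
def enc(k: int, x: int) -> int:
    # Toy invertible 8-bit "block cipher": XOR with the key, then multiply
    # by 5 mod 256 (5 is odd, hence invertible: 5 * 205 == 1025, 1025 % 256 == 1).
    return ((x ^ k) * 5) % 256

def dec(k: int, y: int) -> int:
    return ((y * 205) % 256) ^ k

def gfmul(a: int, b: int) -> int:
    # Multiplication in GF(2^8), reduction polynomial x^8 + x^4 + x^3 + x + 1
    res = 0
    for _ in range(8):
        if b & 1:
            res ^= a
        b >>= 1
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1B
    return res

def tweak(k2: int, i: int, j: int) -> int:
    # t_{i,j} = alpha^j * Enc_{K2}(i), with alpha = x in GF(2^8)
    t = enc(k2, i)
    for _ in range(j):
        t = gfmul(t, 2)
    return t

def xts_enc(k1: int, k2: int, i: int, j: int, x: int) -> int:
    t = tweak(k2, i, j)
    return enc(k1, x ^ t) ^ t

def xts_dec(k1: int, k2: int, i: int, j: int, y: int) -> int:
    t = tweak(k2, i, j)
    return dec(k1, y ^ t) ^ t

k1, k2 = 0x3C, 0x07
for x in range(256):
    assert xts_dec(k1, k2, i=5, j=3, y=xts_enc(k1, k2, i=5, j=3, x=x)) == x
```

The roundtrip check confirms random access: any block (i, j) decrypts on its own, without touching its neighbors.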
Foundations of Computer Security (True or False)
Please answer T or F for the following. *No justification is needed (nor will be considered).*
[ { "context": "True", "question": "(a) Implementation bugs can subvert security even if the designers had the correct goal." }, { "context": "False", "question": "(b) A one-way function must also be collision-resistant." }, { "context": "False", "question": "(c) In practice, we use AES as a collision-resistant hash function." }, { "context": "True", "question": "(d) There are message authentication codes (MACs) that provide λ-bit security and that have a MAC tag length of λ bits, under standard cryptographic assumptions." }, { "context": "True", "question": "(e) We believe that Lamport signatures (instantiated with a suitable one-way function) will resist attacks by large-scale quantum computers." }, { "context": "True", "question": "(f) Let \\( N \\) be an RSA modulus with public exponent \\( e = 3 \\). Define the function \\( F: \\mathbb{Z}_N^* \\rightarrow \\mathbb{Z}_N^* \\) as \\( F(x) := (x + 7)^3 \\mod N \\). Under the RSA assumption, the function \\( F \\) is a one-way permutation." }, { "context": "False", "question": "(g) The hash-then-sign paradigm is useful because it avoids the need to use a collision-resistant signature scheme." } ]
2022-10-25T00:00:00
Henry Corrigan-Gibbs, Yael Kalai, Nickolai Zeldovich
MIT
User authentication
[ { "context": "Yes, this scheme is as secure, because the adversary cannot pre-compute hashes for any passwords before the database is compromised, and every user’s password is hashed differently, just as with per-password salts.", "question": "### Ben Bitdiddle runs a popular web site, in which users create accounts using their email address as their username. Ben Bitdiddle is worried about the overhead of storing a separate salt for every user’s account in his web site’s password database. Ben devises the following alternative plan: his web site will store a single global salt \\( s \\), and for every user, the database will store the user’s username (email address) and \\( H(s||\\text{username}||\\text{password}) \\). Is this scheme as good as using individual salts for every password? Explain why, or describe an attack for which this scheme would give an adversary some advantage." } ]
2022-10-25T00:00:00
Henry Corrigan-Gibbs, Yael Kalai, Nickolai Zeldovich
MIT
Signatures
[ { "context": "Fast verification.", "question": "### (a) What is one benefit of RSA signatures over EC-DSA signatures?" }, { "context": "Short signature size, fast key generation, short public keys.", "question": "### (b) What is one benefit of EC-DSA signatures over RSA signatures?" }, { "context": "Post-quantum security, security from OWF only.", "question": "### (c) What is one benefit of Lamport signatures over EC-DSA signatures?" }, { "context": "A key pair can sign many messages (Lamport keys are one-time); shorter signatures and public keys.", "question": "### (d) What is one benefit of RSA signatures over Lamport signatures?" } ]
2022-10-25T00:00:00
Henry Corrigan-Gibbs, Yael Kalai, Nickolai Zeldovich
MIT
RSA
Let \( N \) be an RSA modulus with public exponent \( e = 3 \) and private exponent \( d \) (i.e., \( ed \equiv 1 \mod \phi(N) \)). The full-domain-hash signature scheme uses a hash function \( H : \{0, 1\}^* \rightarrow \mathbb{Z}_N^* \). The scheme computes the signature on a message \( m \in \{0, 1\}^* \) as: \[ \sigma \leftarrow H(m)^d \mod N. \]
[ { "context": "The message \( m^* \) can be any perfect cube with \( m^* < N \). For example \( (m^*, \sigma^*) \) can be \( (1, 1) \), \( (8, 2) \), \( (27, 3) \), ...", "question": "### (a) Your friend (who has taken 6.042 but not 6.1600) proposes removing the hash function \( H \) from the full-domain-hash signature scheme. With this modified scheme, a signature on a message \( m \in \mathbb{Z}_N^* \) is \( \sigma \leftarrow m^d \mod N \). Show that an attacker, given only the public key \( (N, e) \), can produce a valid forged signature \( \sigma^* \) on some message \( m^* \in \mathbb{Z}_N^* \). Your answer should include a valid message-signature pair: \( (m^*, \sigma^*) \)." }, { "context": "- The attacker asks for a signature on \( m_1 \). This is the value \( \sigma = H(m_1)^d \).\n- The attacker outputs \( \sigma^* \leftarrow \sigma^2 \in \mathbb{Z}_N^* \) as its forged signature on the fresh message \( m_0 \).\n\nThe forged signature \( \sigma^* \) is valid since:\n\[ (\sigma^*)^e = ((H(m_1)^d)^2)^e = (H(m_1)^2)^{ed} = H(m_1)^2 = H(m_0) \in \mathbb{Z}_N^*. \]", "question": "### (b) Consider the full-domain-hash signature scheme instantiated with some hash function \( H \). Say that an attacker can find two messages \( m_0, m_1 \in \{0, 1\}^* \), such that \( H(m_0) = H(m_1)^2 \in \mathbb{Z}_N^* \). Explain how the attacker can use \( (m_0, m_1) \) to win the signature security game." }, { "context": "The analysis here is similar to the Birthday Problem. The average number is around \( \sqrt{N} \).", "question": "### (c) Model the hash function \( H : \{0,1\}^* \rightarrow \mathbb{Z}_N^* \) as a truly random function. How many times would an attacker have to evaluate the hash function \( H \), on average, to find messages \( m_0, m_1 \in \mathbb{Z}_N \) such that \( H(m_0) = H(m_1)^2 \in \mathbb{Z}_N \)." } ]
2022-10-25T00:00:00
Henry Corrigan-Gibbs, Yael Kalai, Nickolai Zeldovich
MIT
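Both parts can be checked with a toy modulus (all numbers here are illustrative): p = 11 and q = 17 give N = 187, φ(N) = 160, e = 3, d = 107, and the "hash" values are arbitrary units of Z_N chosen so that H(m0) = H(m1)^2:

```python
N, e, d = 187, 3, 107        # 187 = 11 * 17, phi = 160, 3 * 107 = 321 = 2*160 + 1

# (a) Without the hash, any perfect cube is forgeable, e.g. (m*, sigma*) = (8, 2):
assert pow(2, e, N) == 8

# (b) If H(m0) = H(m1)^2, squaring a signature on m1 yields a forgery on m0.
h1 = 10                      # stand-in for H(m1)
h0 = (h1 * h1) % N           # H(m0) = H(m1)^2 mod N
sigma1 = pow(h1, d, N)       # signature on m1, obtained from the signing oracle
forged = (sigma1 * sigma1) % N
assert pow(forged, e, N) == h0    # the forgery verifies against H(m0)
```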
Lab 1: Key-Value Store Security
[ { "context": "Construct an adversarial tree of 1000 key-value pairs whose two root children hashes have a slash byte in their UTF-8 encoding (for at least one of the two of them). Concatenate these two root children hashes, split them at the slash character, and return the parts before and after the slash, respectively, as the key and value from `attack_key_value()`.", "question": "Ben Bitdiddle is hired by the 6.1600 course staff to defend the lab 1 key-value store against the attacks covered in the lab. Ben decides to change how key-value leaf nodes are hashed, by inserting a slash separator between the key and the value, as follows:\n\n```python\ndef H_kv(key, val):\n return H(key + \"/\" + val)\n```\n\nHow can you modify the attack in scenario 2 (many fake key-value pairs) so that it still works against Ben’s modified design?" } ]
2022-10-25T00:00:00
Henry Corrigan-Gibbs, Yael Kalai, Nickolai Zeldovich
MIT
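Besides the tree-splitting attack sketched above, Ben's separator is ambiguous whenever keys or values may themselves contain a slash: two different key-value pairs can serialize to the same string, so their leaf hashes collide. A minimal demonstration, using SHA-256 as the hash for concreteness:

```python
import hashlib

def H(s: str) -> str:
    return hashlib.sha256(s.encode()).hexdigest()

def H_kv(key: str, val: str) -> str:
    return H(key + "/" + val)     # Ben's proposed leaf hashing

# ("a", "b/c") and ("a/b", "c") both serialize to "a/b/c"
assert ("a", "b/c") != ("a/b", "c")
assert H_kv("a", "b/c") == H_kv("a/b", "c")
```

An unambiguous encoding (e.g. length-prefixing the key and value before hashing) avoids this class of collision.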
Key exchange
In his final project for his undergraduate security class, Ralph Merkle proposed a key-exchange protocol based on hash functions. The protocol uses a hash function \( H: \mathbb{Z}_n \rightarrow \{0, 1\}^{256} \), where \( n \) is on the order of \( 2^{60} \), and proceeds as follows: - Alice picks \( \sqrt{n} \) random numbers \( a_1, \ldots, a_{\sqrt{n}} \in \mathbb{Z}_n \) and sends \( H(a_1), \ldots, H(a_{\sqrt{n}}) \) to Bob. - Bob picks \( \sqrt{n} \) random numbers \( b_1, \ldots, b_{\sqrt{n}} \in \mathbb{Z}_n \) and sends \( H(b_1), \ldots, H(b_{\sqrt{n}}) \) to Alice. - If there exist \( i, j \in \{1, \ldots, \sqrt{n}\} \) such that \( H(a_i) = H(b_j) \): - Alice uses \( a_i \) as her shared secret with Bob. - Bob uses \( b_j \) as his shared secret with Alice. - (If there are many such \( (i, j) \) pairs, Alice and Bob use the lexicographically first one.) Model the hash function as a truly random function.
[ { "context": "By the birthday bound: Alice and Bob each draw \( \sqrt{n} \) uniform values from \( \mathbb{Z}_n \), so the expected number of common values is about 1, and a common value exists with constant probability.", "question": "### (a) Explain why Alice and Bob will agree on a shared secret with constant probability." }, { "context": "It takes \( \sqrt{n} \) time.", "question": "### (b) How much time does it take Alice to generate her message to Bob? Assume that evaluating \( H(\cdot) \) takes a constant amount of time." }, { "context": "It takes the attacker \( \Omega(n) \) time: to recover the shared secret, the attacker must find the preimage \( a_i = b_j \) of one of the published hashes, which requires trying on the order of \( n \) candidate values.", "question": "### (c) If an attacker eavesdrops on the communication between Alice and Bob, how much time does it take the attacker to recover the shared secret \(a_i = b_j\)?" }, { "context": "Computing an exponentiation by repeated squaring requires \( O(\log p) \) multiplications.\n\n- Merkle’s protocol has goodness \( \frac{n}{\sqrt{n}} = \sqrt{n} \).\n- Diffie-Hellman key exchange has goodness roughly \( \frac{2^{\log^{1/3} p}}{\log p} \) (the best known discrete-log attacks in \( \mathbb{Z}_p^* \) are subexponential in \( \log p \)).\n- ECDH has goodness \( \frac{\sqrt{p}}{\log p} \) (generic discrete-log attacks in a group of prime order \( p \) cost about \( \sqrt{p} \) group operations).", "question": "### (d) Define the goodness of a key-exchange protocol to be the ratio:\n\n\[\n\text{goodness} = \frac{\text{the attacker's running time}}{\text{Alice's running time}}\n\]\n\nwhere the attacker's running time is the time required to recover the shared secret.\n\n- What is the goodness of Merkle’s protocol?\n- What is the goodness of Diffie-Hellman key exchange in \( \mathbb{Z}_p^* \) for a large prime \( p \), assuming that it takes \( O(1) \) time to multiply two integers in \( \mathbb{Z}_p^* \)? (In reality the time for a big-integer multiplication grows with \( p \).)\n- What is the goodness of elliptic-curve Diffie-Hellman key exchange in a group of prime order \( p \), assuming that it takes \( O(1) \) time to perform a single elliptic-curve point operation?" } ]
2022-10-25T00:00:00
Henry Corrigan-Gibbs, Yael Kalai, Nickolai Zeldovich
MIT
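The protocol is easy to simulate. The sketch below models H as injective on Z_n (so matching hashes are matched by comparing the sampled values directly), uses `min` as a fixed tie-breaking rule standing in for "lexicographically first", and oversamples by a factor of 4 relative to √n, which pushes the birthday-bound failure probability down to about e^-16; the parameters and seed are arbitrary:

```python
import random

def merkle_exchange(n: int, samples: int, seed: int = 0):
    rng = random.Random(seed)
    alice = [rng.randrange(n) for _ in range(samples)]   # Alice's a_1, ..., a_m
    bob = [rng.randrange(n) for _ in range(samples)]     # Bob's b_1, ..., b_m
    # Exchanging H(a_i), H(b_j) and comparing hashes is modeled here by
    # comparing the values themselves (H injective with high probability).
    bob_set = set(bob)
    matches = [a for a in alice if a in bob_set]
    secret = min(matches) if matches else None   # fixed rule to pick one match
    return secret, alice, bob

n = 2 ** 16
secret, alice, bob = merkle_exchange(n, samples=4 * int(n ** 0.5))
assert secret is not None and secret in alice and secret in bob
```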
TLS security
Ben Bitdiddle is designing an Android application where users can send money to each other, by username. The application relies on a central server, which authenticates requests from user devices. When the application wants to transfer some amount of money to another user, it opens a TLS connection to the server, and sends the following message: ``` USER: username PASS: password REQUEST: transfer AMOUNT: amount RECIPIENT: recipient ```
[ { "context": "Create an account that corresponds to some prefix of the recipient’s username. Terminate the TLS connection after that prefix of the recipient’s username is sent. The server will receive the request containing the truncated recipient name and will perform a transfer to that account instead of the intended recipient.", "question": "### Where `username` and `password` authenticate the sender, and the request asks the server to transfer `amount` to `recipient`’s account.\nExplain how a network adversary may be able to redirect a transfer to their account. You can assume the adversary knows the user, the fact that the user is transferring money, how much they are transferring, and to whom, but does not know the user’s password. Assume that the TLS certificates are correct (i.e., certificate authorities will not issue an incorrect certificate) and that the adversary cannot guess the user’s password." } ]
2022-10-25T00:00:00
Henry Corrigan-Gibbs, Yael Kalai, Nickolai Zeldovich
MIT
Law and technology
[ { "context": "Performing the lab0 attack to retrieve users' passwords from a hashed list and then accessing said users' accounts and personal information would be a violation of the Computer Fraud and Abuse Act (CFAA) because it involves unauthorized access to a protected computer. This attack could also violate the Digital Millennium Copyright Act (DMCA) if it leads to unauthorized access to copyrighted works.\n\nPerforming lab1 attacks to tamper with users' communication with a protected store could violate CFAA if you tamper with “gates-up” data by performing unauthorized data manipulation on the Merkle tree server, which could be argued as “recklessly causing damage” to the store.\n\nPerforming lab2 attacks to tamper with encrypted Wi-Fi packets could violate CFAA as it involves unauthorized data access, exceeding authorized access to the server that the client is communicating with.\n\nWhat this question is looking for:\n- Knowledge of one of the two main acts Chris Conley discussed: DMCA or CFAA.\n- A solid explanation of what DMCA and/or CFAA covers, demonstrated by how an attack from lab0, lab1, or lab2 could be performed illegally.", "question": "### (a) Chris Conley’s guest lecture outlined several U.S. acts that regulate federal computer law. Imagine that you have mounted one of the attacks from the 6.1600 labs without permission against a victim server elsewhere on the Internet. Name a U.S. law and an attack from either lab0, lab1, or lab2 that could violate that law. Give a one-sentence explanation for why your chosen attack would violate that law." } ]
2022-10-25T00:00:00
Henry Corrigan-Gibbs, Yael Kalai, Nickolai Zeldovich
MIT
Foundations of Computer Security (True or False)
Please write T or F for the following. **No justification is needed (nor will be considered).**
[ { "context": "False.", "question": "### (a) We know how to construct a public-key encryption scheme assuming only one-way functions." }, { "context": "False.", "question": "### (b) \( F(k, x) = k \oplus x \) is a secure PRF." }, { "context": "True.", "question": "### (c) If \( H_1 \) and \( H_2 \) are collision-resistant hash functions then \( H(x) = H_1(H_2(x)) \) is also collision-resistant." }, { "context": "True.", "question": "### (d) A CPA-secure encryption scheme must be randomized." }, { "context": "False.", "question": "### (e) Given a secure signature scheme for \( n \)-bit messages, one can construct a secure signature scheme for messages of length \( 2n \) bits, by partitioning the \( 2n \)-bit message into two equal chunks (each of length \( n \)), and signing each chunk using the underlying secure signature scheme." }, { "context": "True. (The \( e^{\epsilon} \) bound implies the weaker \( e^{2\epsilon} \) bound.)", "question": "### (f) If a mechanism \( A \) provides \( \epsilon \)-differential privacy, then the mechanism \( A \) also provides \( 2\epsilon \)-differential privacy." } ]
2022-12-20T00:00:00
Henry Corrigan-Gibbs, Yael Kalai, Nickolai Zeldovich
MIT
Encryption scheme
[ { "context": "The answer is no, since then each message must have a unique ciphertext (by the pigeon-hole principle), and then one can attack as explained in class.", "question": "### Assume that one-way functions exist. Does there exist a CPA-secure encryption scheme that takes as input a 128-bit message and a random string, and outputs a 128-bit ciphertext? Explain your answer. " } ]
2022-12-20T00:00:00
Henry Corrigan-Gibbs, Yael Kalai, Nickolai Zeldovich
MIT
Authentication schemes
Suppose we are given a PRF that outputs a single bit, with a key space \( K \) and a message space \( M \); namely, \( F : K \times M \rightarrow \{0, 1\} \).
[ { "context": "No, one can guess the tag with probability \\( 1/2 \\).", "question": "### (a) Is the following a secure MAC, for the same key space \\( K \\) and message space \\( M \\): for a key \\( k \\in K \\) and message \\( m \\in M \\), \\( \\text{MAC}(k, m) := F(k, m) \\)?" }, { "context": "\\( \\text{MAC}(k, m) = (F(k, m||i))_{i=1}^{128} \\), where each \\( i \\in [128] \\) is encoded in binary and thus is an element in \\( \\{0, 1\\}^7 \\).", "question": "### (b) Suppose that the message space \\( M \\) above is \\( M = \\{0, 1\\}^{135} \\). Show how to use the PRF \\( F \\) to construct a secure MAC scheme with message space \\( \\{0, 1\\}^{128} \\)." } ]
2022-12-20T00:00:00
Henry Corrigan-Gibbs, Yael Kalai, Nickolai Zeldovich
MIT
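The construction in answer (b) above can be sketched in Python. Two simplifications are assumptions made purely for illustration: HMAC-SHA256 truncated to one bit stands in for the single-bit PRF \( F \), and the index \( i \) is encoded in one byte rather than the 7 bits the answer uses to fit \( M = \{0,1\}^{135} \):

```python
import hmac
import hashlib

def prf_bit(key: bytes, msg: bytes) -> int:
    # Stand-in single-bit PRF: low bit of HMAC-SHA256(key, msg).
    return hmac.new(key, msg, hashlib.sha256).digest()[-1] & 1

def mac(key: bytes, m: bytes) -> list:
    # Tag = (F(k, m || i)) for i = 1..128, so a forged tag must match
    # 128 independent PRF bits, not just one.
    assert len(m) == 16  # 128-bit message
    return [prf_bit(key, m + bytes([i])) for i in range(1, 129)]

def verify(key: bytes, m: bytes, tag: list) -> bool:
    return hmac.compare_digest(bytes(mac(key, m)), bytes(tag))
```

A guessed tag now verifies with probability about \(2^{-128}\) instead of \(1/2\).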
Isolation
Ben Bitdiddle wants to run two applications strongly isolated from one another, so he runs them on two separate computers. However, he has just one storage server. His storage server implements a simple network API, whose pseudo-code is shown below (here `src` indicates which of the two separate computers is making the request; assume the adversary cannot tamper with the `src` argument):

```python
class Storage:
    def __init__(self):
        self.files = {}

    def write(self, src, filename, contents):
        self.files[filename] = (src, contents)

    def read(self, src, filename):
        if filename not in self.files:
            return ErrorNotFound()
        (owner, contents) = self.files[filename]
        if owner == src:
            return contents
        else:
            return ErrorNotAllowed()
```
[ { "context": "No: one application can corrupt another application’s data by writing over files with the same filename.", "question": "### (a) Does Ben’s design provide integrity for the isolated applications, as defined in the isolation lecture? Explain why or why not." }, { "context": "No: one application can determine the names of files used or not used by the other application, by probing different file names using `read()`. Partial credit for yes: if only one application is running, it cannot read the data of another application. This effectively assumes that the applications use a well-known set of file names that is not dependent on the application’s state.", "question": "### (b) Does Ben’s design provide non-leakage for the isolated applications, as defined in the isolation lecture? Explain why or why not." } ]
2022-12-20T00:00:00
Henry Corrigan-Gibbs, Yael Kalai, Nickolai Zeldovich
MIT
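The integrity failure described in answer (a) can be reproduced directly against the pseudo-code above (a self-contained sketch; the error results are modeled as plain Python classes):

```python
class ErrorNotFound: pass
class ErrorNotAllowed: pass

class Storage:
    def __init__(self):
        self.files = {}

    def write(self, src, filename, contents):
        # No ownership check on write: any src may clobber any filename.
        self.files[filename] = (src, contents)

    def read(self, src, filename):
        if filename not in self.files:
            return ErrorNotFound()
        (owner, contents) = self.files[filename]
        if owner == src:
            return contents
        return ErrorNotAllowed()

s = Storage()
s.write("appA", "config", "A's data")
# App B overwrites A's file, which also transfers "ownership" to B.
s.write("appB", "config", "B's data")
# A can no longer read back the data it wrote: integrity is violated.
result = s.read("appA", "config")
```

Running this shows that `result` is an `ErrorNotAllowed`, matching the answer to part (a).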
Symbolic execution
Ben Bitdiddle is running a symbolic execution tool on some function \( f(x, y) \), where \( x \) and \( y \) are arbitrary symbolic 32-bit values passed as arguments to \( f \). The symbolic execution tool issues the following queries to the SAT solver:

- \( (x > 0) \)
- \( (x > 0) \land (x + y > 0) \)
- \( (x > 0) \land (x + y > 0) \land (x == 0) \)
- \( (x > 0) \land (x + y > 0) \land \neg(x == 0) \)
- \( (x > 0) \land \neg(x + y > 0) \)
- \( \neg(x > 0) \)
- \( \neg(x > 0) \land (x + y < 0) \)
- \( \neg(x > 0) \land \neg(x + y < 0) \)
[ { "context": "```c\nvoid f(unsigned int x, unsigned int y) {\n if (x > 0) {\n // If x is greater than 0\n if (x + y > 0) {\n // If the sum of x and y is greater than 0\n if (x == 0) {\n // This condition is unreachable since x > 0,\n // so maybe there's an error or something needs adjustment here.\n // You could place some error handling or a specific case.\n } else {\n // Logic for when x > 0 and x + y > 0\n // (x == 0 is not possible here, so this branch will be executed)\n }\n }\n } else {\n // If x is not greater than 0 (x is 0)\n if (x + y < 0) {\n // This condition might never be true since x and y are unsigned.\n // The sum of two unsigned integers cannot be less than 0.\n // You could adjust this check if handling underflow, or maybe\n // handle a specific case when x == 0 and y is something specific.\n } else {\n // Logic for when x <= 0 and x + y >= 0\n }\n }\n}\n```", "question": "### Write down a sketch of Ben’s function \\( f \\) that would generate these queries under symbolic execution:" } ]
2022-12-20T00:00:00
Henry Corrigan-Gibbs, Yael Kalai, Nickolai Zeldovich
MIT
Google Chrome security
[ { "context": "$1M.", "question": "### (a) What is the approximate black-market cost for a zero-day vulnerability in Google Chrome that allows a remote adversary to execute arbitrary code and escape from Chrome’s sandbox? Circle the best answer.\n\n- $10K\n- $100K\n- $1M\n- $10M" }, { "context": "No: if bugs become cheaper, it means there’s either fewer people that want them (so Chrome is less important) or there’s more bugs (so Chrome has more vulnerabilities).", "question": "### (b) If the black-market price of a zero-day vulnerability in Google Chrome went down, would the Chrome security team view this as a success? Explain why or why not." } ]
2022-12-20T00:00:00
Henry Corrigan-Gibbs, Yael Kalai, Nickolai Zeldovich
MIT
Law and policy
[ { "context": "“Going dark” is the term law-enforcement agencies have often used to describe the fact that they cannot decrypt end-to-end encrypted traffic.", "question": "### (a) Explain in 1-2 sentences what “going dark” is, according to Jennifer Granick, in the context of end-to-end encryption." }, { "context": "One concern was that the system could be abused by domestic or foreign governments for repressive purposes.", "question": "### (b) Earlier this year, Apple proposed a system that would scan photos on users’ iPhones for illegal/exploitative material. Explain in 1-2 sentences why, according to Jennifer Granick, privacy experts objected to this technology." } ]
2022-12-20T00:00:00
Henry Corrigan-Gibbs, Yael Kalai, Nickolai Zeldovich
MIT
WASI escape
[ { "context": "Not possible. The bug in the WASI sandboxing plan is that the depth for an open file or directory might be wrong if that file or directory was moved in the file system tree after it was opened. But without being able to open a pathname relative to an existing directory file descriptor (which is only done by openat), there is no way to take advantage of this bug; no other code looks at the (possibly incorrect) depth in a file descriptor.", "question": "### Is it possible to exploit the WASI file system sandbox bug that you used in lab 4 part 2 (i.e., access a file outside of the sandbox directory) without using either the `symlink()` or `openat()` system calls? Describe how, or explain why not." } ]
2022-12-20T00:00:00
Henry Corrigan-Gibbs, Yael Kalai, Nickolai Zeldovich
MIT
Splitting trust
Apple has a secret key that it uses to sign its iOS operating system updates. Apple engineers want to split this key into \( n \) pieces and to give one piece to each of \( n \) engineers in a way that ensures that:

- no strict subset of engineers can produce a valid signature but
- all \( n \) engineers together can sign a release.
[ { "context": "No. If three engineers collude, they can learn all but 64 bits of the key and can brute-force search to find the rest.", "question": "### (a) Say that there are \\( n = 4 \\) engineers and that the signing key \\( k \\) is 256 bits long. One engineer proposes splitting the key into four 64-bit chunks and giving one chunk to each of the engineers. Is this scheme secure? Why or why not?" }, { "context": "Pick random \\( k_1, k_2, k_3, k_4 \\in \\{0, 1\\}^{256} \\) such that \\( k = k_1 \\oplus k_2 \\oplus k_3 \\oplus k_4 \\). Security follows from the security of the one-time pad.", "question": "### (b) Explain how the engineers can split the key into \\( n \\) pieces (for any \\( n \\)) such that:\n1. it is possible to recover the key given all \\( n \\) pieces but\n2. no one piece leaks any information about the key." }, { "context": "For each size-three subset of users, run the scheme of part (b). Give one share to each user.", "question": "### (c) Explain how the engineers can split the key into \\( n \\) pieces (for any \\( n > 3 \\)) such that:\n1. it is possible to recover the key given ANY THREE pieces but\n2. no one piece leaks any information about the key." } ]
2022-12-20T00:00:00
Henry Corrigan-Gibbs, Yael Kalai, Nickolai Zeldovich
MIT
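The XOR-based \( n \)-out-of-\( n \) sharing from answer (b) is a few lines of Python (a sketch; `os.urandom` stands in for a secure randomness source):

```python
import os
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_key(key: bytes, n: int) -> list:
    # n-1 shares are uniformly random; the last share is chosen so the
    # XOR of all n shares equals the key (one-time-pad security: any
    # n-1 shares are jointly uniform and reveal nothing about key).
    shares = [os.urandom(len(key)) for _ in range(n - 1)]
    shares.append(reduce(xor, shares, key))
    return shares

def recover_key(shares: list) -> bytes:
    return reduce(xor, shares)
```

Answer (c)'s any-three scheme just runs `split_key(k, 3)` once per size-three subset of engineers.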
timing attack
[ { "context": "He should be guessing one of the \\(2 \\times \\ell\\) nibbles (from 0 to 15) at a time, since the token is hex-encoded and checked one hex digit at a time.", "question": "### (a) Ben Bitdiddle is developing his attack for lab 5, implementing the code for the function `steal_secret_token(ℓ)`. Recall that the argument \\(\\ell\\) is the number of random bytes that were used to generate the secret token that the attack code must guess. Ben’s plan is to guess one of 256 possible values for each of the \\(\\ell\\) bytes at a time, but he discovers that his attack is taking way too long. What should Ben be doing instead to speed up his attack? (We are looking for a big speedup—at least \\(2 \\times\\) faster.)" }, { "context": "For the last character of the token, the checking code executes in about the same time regardless of whether the last character is correct or not. Alyssa’s attack should instead try each of the possible values and see which one causes the server to reply with success vs failure.", "question": "### (b) Alyssa P. Hacker is also working on her lab 5 attack. She gets her solution mostly working, and is able to guess almost all of the token, but for some reason, her attack does not work for the very last byte of the token. What is Alyssa missing, and how should she fix her attack?" } ]
2022-12-20T00:00:00
Henry Corrigan-Gibbs, Yael Kalai, Nickolai Zeldovich
MIT
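The nibble-at-a-time strategy from answer (a) can be sketched with a toy oracle. Modeling the timing side channel as the number of hex digits compared before a mismatch is an assumption for illustration; as answer (b) notes, the real attack must recover the final digit via the server's accept/reject response instead:

```python
HEX = "0123456789abcdef"

def check(secret: str, guess: str) -> int:
    # Toy side-channel oracle: number of leading hex digits that match
    # before the comparison stops (observable as elapsed time).
    n = 0
    for s, g in zip(secret, guess):
        if s != g:
            break
        n += 1
    return n

def steal(secret_len_bytes: int, secret: str) -> str:
    # Guess one hex digit (nibble) at a time: at most 16 tries per
    # digit instead of 256 per byte -- the speedup from part (a).
    known = ""
    for pos in range(2 * secret_len_bytes):
        for c in HEX:
            if check(secret, known + c) > pos:
                known += c
                break
    return known
```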
iPhone security
An iPhone app developer finds an exploitable buffer overflow in the iPhone’s kernel on the application processor. For each of the following questions, answer either **True** or **False** and give a one-sentence explanation.
[ { "context": "True.", "question": "### (a) The app developer may be able to exploit this overflow to get root on the phone’s application processor." }, { "context": "True.", "question": "### (b) A website that the phone’s owner visits may be able to exploit this vulnerability to get root on the phone’s application processor." }, { "context": "False.", "question": "### (c) A malicious app developer can exploit this vulnerability to persistently corrupt the application-processor’s kernel, such that the attacker’s kernel code continues to run even after restarting the phone." }, { "context": "True.", "question": "### (d) If an attacker can exploit this overflow, they can corrupt the OS in a way that prevents the phone from booting." }, { "context": "False.", "question": "### (e) If an attacker can exploit this overflow, it can corrupt the phone’s BootROM." } ]
2022-12-20T00:00:00
Henry Corrigan-Gibbs, Yael Kalai, Nickolai Zeldovich
MIT
Collision resistance
Let \( H : \{0, 1\}^* \rightarrow \{0, 1\}^{256} \) be a collision-resistant hash function. Alice wants to convince Bob that she knows whether the stock market will go up or down tomorrow. But, she doesn’t want to give this information to Bob. Alice expresses her prediction as a bitstring \( m \in \{0, 1\}^* \) and gives \( c \leftarrow H(m) \) to Bob. After the stock market closes tomorrow, Alice will send \( m' \) to Bob, who checks that \( c = H(m') \). If this check passes, Bob will accept \( m' \) as Alice’s prediction. Otherwise, Bob will reject.
[ { "context": "If Alice could find \\( (m, m') \\) such that \\( H(m) = H(m') \\) and \\( m \\neq m' \\), Alice would have broken the collision resistance of \\( H \\).", "question": "### (a) Show that \\( c \\) commits Alice to her prediction. That is, explain why Alice can never trick Bob into accepting a prediction \\( m' \\neq m \\)." }, { "context": "A collision-resistant hash function could leak the first few bits of its input to the output.", "question": "### (b) Explain why this scheme might leak some information about Alice’s prediction to Bob." }, { "context": "Compute \\( c = H(m \\| r) \\) for random \\( r \\in \\{0, 1\\}^{256} \\), where \\( \\| \\) is string concatenation.", "question": "### (c) Say that instead we model \\( H : \\{0, 1\\}^* \\rightarrow \\{0, 1\\}^{256} \\) as a random oracle. Explain how Alice and Bob could patch their scheme to (a) hide all information about Alice’s prediction while (b) still preventing Alice from changing her prediction after the fact. (Hint: Remember that all CPA-secure encryption schemes use randomness.)" } ]
2022-12-13T00:00:00
Henry Corrigan-Gibbs, Yael Kalai, Nickolai Zeldovich
MIT
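The randomized commitment from answer (c) is easy to sketch with SHA-256 standing in for the random oracle \( H \); because \( r \) has a fixed length and sits at the end, the encoding \( m \| r \) is unambiguous:

```python
import hashlib
import os

def commit(m: bytes):
    # c = H(m || r) for random 256-bit r: r hides m (in the random
    # oracle model), while collision resistance still binds Alice.
    r = os.urandom(32)
    c = hashlib.sha256(m + r).digest()
    return c, r

def open_commit(c: bytes, m: bytes, r: bytes) -> bool:
    # Bob's check after the market closes.
    return hashlib.sha256(m + r).digest() == c
```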
Hash-and-Sign
[ { "context": "To sign a message \\(M \\in \\{0,1\\}^{512}\\), take a collision-resistant hash function \\(H : \\{0,1\\}^{512} \\rightarrow \\{0,1\\}^{256}\\), and output a signature of \\(H(M)\\) (with respect to the underlying signature scheme for messages in \\(\\{0,1\\}^{256}\\)).", "question": "### Construct a signature scheme for messages in \\(\\{0,1\\}^{512}\\) given a signature scheme for messages in \\(\\{0,1\\}^{256}\\). You can use a hash function \\(H\\) but must state the precise properties you assume \\(H\\) satisfies (as well as its domain and range)." } ]
2022-12-13T00:00:00
Henry Corrigan-Gibbs, Yael Kalai, Nickolai Zeldovich
MIT
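The hash-and-sign construction can be sketched as follows. The underlying signature scheme for 256-bit messages is modeled here by a keyed MAC purely as a placeholder (a real instantiation would use an actual signature scheme such as Ed25519); \( H \) is SHA-256, assumed collision-resistant, with domain \(\{0,1\}^{512}\) and range \(\{0,1\}^{256}\):

```python
import hashlib
import hmac

# Placeholder "signature scheme" for 32-byte messages, for the sketch only.
KEY = b"demo-signing-key"

def sign_256(msg32: bytes) -> bytes:
    assert len(msg32) == 32
    return hmac.new(KEY, msg32, hashlib.sha256).digest()

def verify_256(msg32: bytes, sig: bytes) -> bool:
    return hmac.compare_digest(sign_256(msg32), sig)

def sign_512(msg64: bytes) -> bytes:
    # Hash-and-sign: compress the 512-bit message with the
    # collision-resistant H, then sign the 256-bit digest.
    assert len(msg64) == 64
    return sign_256(hashlib.sha256(msg64).digest())

def verify_512(msg64: bytes, sig: bytes) -> bool:
    return verify_256(hashlib.sha256(msg64).digest(), sig)
```

A forgery on the 512-bit scheme yields either a collision in \( H \) or a forgery on the underlying 256-bit scheme.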
MAC
[ { "context": "The MAC will have a key \\( K \\) associated with it, and the output will be \\(\\{F(K, \\langle M, b_1, \\ldots, b_k \\rangle)\\}_{b_1, \\ldots, b_k \\in \\{0,1\\}}\\), for a constant \\( k \\), depending on the desired security.", "question": "### Show how to use a PRF \\( F \\) that outputs a *single* bit, to design a secure MAC scheme for messages in \\(\\{0, 1\\}^{128}\\). You can assume that \\( F \\) takes as input a key \\( K \\) and a message in \\(\\{0, 1\\}^*\\) and outputs a bit." } ]
2022-12-13T00:00:00
Henry Corrigan-Gibbs, Yael Kalai, Nickolai Zeldovich
MIT
Public-Key Encryption
[ { "context": "The KeyGen algorithm samples randomness \\( r_1 \\), computes \\( m_1 = m_1(r_1) \\), and outputs \\( (sk, pk) = (r_1, m_1) \\). The encryption algorithm is given \\( pk = m_1 \\) and a message \\( m \\) to encrypt, chooses randomness \\( r_2 \\), computes \\( s \\) from \\( (m_1, m_2, r_2) \\) and outputs \\( (m_2, m \\oplus s) \\) as the encryption of \\( m \\). The decryption algorithm, given \\( sk = r_1 \\) and a ciphertext \\( (m_2, m \\oplus s) \\), uses \\( m_2, r_1 \\) to compute \\( s \\), and uses this to unmask \\( m \\oplus s \\), to obtain the decrypted message \\( m \\).", "question": "### Construct a CPA secure (also known as semantically secure) public-key encryption scheme from an arbitrary 2-message key exchange protocol, in which party 1 chooses randomness \\( r_1 \\) and sends a message \\( m_1 \\), which is a function of \\( r_1 \\) (i.e., \\( m_1 = m_1(r_1) \\)), party 2 chooses randomness \\( r_2 \\) and replies with a message \\( m_2 \\), which is a function of \\( m_1 \\) and \\( r_2 \\) (i.e., \\( m_2 = m_2(r_2, m_1) \\)), and the shared secret \\( s \\) can be efficiently computed from \\( (m_i, r_{3-i}) \\), for any \\( i \\in \\{1, 2\\} \\), and is indistinguishable from uniform given only \\( (m_1, m_2) \\).\nYou can assume (for simplicity) that the secret key \\( s \\) is a binary string of length \\( k \\) (for some \\( k \\)), and construct an encryption scheme for messages in \\( \\{0, 1\\}^k \\)." } ]
2022-12-13T00:00:00
Henry Corrigan-Gibbs, Yael Kalai, Nickolai Zeldovich
MIT
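The generic construction in the answer can be instantiated with a toy Diffie-Hellman key exchange. The group parameters below and the use of SHA-256 to derive a fixed-length pad from the shared secret are choices made for illustration, not fixed by the problem, and the parameters are not sized for real security:

```python
import hashlib
import secrets

# Toy group: Z_p* with p the Mersenne prime 2^127 - 1 and base 3.
P, G = 2**127 - 1, 3

def keygen():
    # Party 1's move: (sk, pk) = (r1, m1) with m1 = g^r1 mod p.
    r1 = secrets.randbelow(P - 2) + 1
    return r1, pow(G, r1, P)

def encrypt(pk: int, msg: bytes):
    # Party 2's move: send m2 = g^r2, derive s from (pk, r2),
    # and output (m2, msg XOR s).
    r2 = secrets.randbelow(P - 2) + 1
    m2 = pow(G, r2, P)
    s = hashlib.sha256(str(pow(pk, r2, P)).encode()).digest()
    assert len(msg) <= 32  # pad length limits the message length here
    return m2, bytes(a ^ b for a, b in zip(msg, s))

def decrypt(sk: int, m2: int, ct: bytes) -> bytes:
    s = hashlib.sha256(str(pow(m2, sk, P)).encode()).digest()
    return bytes(a ^ b for a, b in zip(ct, s))
```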
Encryption in practice
To install the Rustup utility, the documentation instructs Linux users to run the shell command:

```sh
curl https://sh.rustup.rs | sh
```

This shell command downloads the document at the specified URL and pipes it to the shell.
[ { "context": "The adversary could drop packets after the initial data transfer begins. Curl will output EOF and your shell will execute a prefix of the full shell script.", "question": "### (a) Say that you run the command above and that your ISP is malicious. Explain how the ISP could trick your computer into executing a shell script that is different from the one at `https://sh.rustup.rs`." }, { "context": "First download the entire file, making sure that the TLS connection closed cleanly. Then execute the file.", "question": "### (b) Explain what you (as the user) could do to prevent the attack of part (a), without changing anything on the server side." }, { "context": "Code the file in such a way that a prefix of the file will not execute any code. One way to do this is to wrap the entire script in a function definition and then execute the function at the very end.", "question": "### (c) Explain what the Rustup developers could do to prevent the attack of part (a), without changing the way that users invoke the script.\n**Hint:** It is possible to define a function in a shell script. The shell will not execute any code in the body of a function definition until the function definition ends and the script explicitly calls the function." } ]
2022-12-13T00:00:00
Henry Corrigan-Gibbs, Yael Kalai, Nickolai Zeldovich
MIT
Discrete-logarithm problem
The discrete-logarithm problem modulo \( p \) is defined as follows:

- **Input**:
  - A large prime \( p \),
  - A generator \( g \in \{0, \ldots, p - 1\} \), and
  - A value \( h \leftarrow g^x \mod p \) for \( x \leftarrow \{0, \ldots, p - 1\} \).
- **Output**: The unique value \( x \) such that \( h = g^x \mod p \).

For this problem, assume that multiplying two numbers modulo \( p \) takes time \( O(\log^2 p) \).
[ { "context": "\\( O(\\log^3 p) \\) time", "question": "### (a) How much time does it take to compute \\( g^x \\mod p \\) for a random \\( x \\in \\{0, \\ldots, p - 1\\}?\\)" }, { "context": "\\( O(p \\cdot \\log^3 p) \\) time", "question": "### (b) The following simple algorithm computes discrete logs modulo \\( p \\):\n\n**Algorithm**.\n- For \\( i = 1, 2, 3, \\ldots \\):\n - Compute \\( h' \\leftarrow g^i \\mod p \\).\n - If \\( h = h' \\), output \\( i \\).\n\nWhat is the running time of the algorithm?" }, { "context": "With good probability, at least one of the discrete logs will have size \\( O(\\sqrt{p}) \\). The algorithm above will find it in time \\( O(\\sqrt{p}) \\).", "question": "### (c) Say that you are given \\( B \\) discrete-log problem instances \\( (h_1, \\ldots, h_B) \\). If \\( B = \\sqrt{p} \\), explain how you can solve at least one of the problems in time \\( O(\\sqrt{p} \\cdot \\text{polylog}(p)) \\)." } ]
2022-12-13T00:00:00
Henry Corrigan-Gibbs, Yael Kalai, Nickolai Zeldovich
MIT
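The brute-force loop from (b) and the batch idea from (c) can be sketched as follows, computing \( g^i \) incrementally with one modular multiplication per step:

```python
def dlog_bruteforce(g: int, h: int, p: int) -> int:
    # The simple algorithm from part (b): walk through g^1, g^2, ...
    acc = 1
    for i in range(1, p):
        acc = acc * g % p
        if acc == h:
            return i
    raise ValueError("no discrete log found")

def dlog_batch(g: int, targets: set, p: int, bound: int):
    # Part (c): with B instances, stop at the first target hit.  Set
    # membership is O(1) per step, and with B = sqrt(p) instances some
    # target has discrete log O(sqrt(p)) with good probability.
    acc = 1
    for i in range(1, bound + 1):
        acc = acc * g % p
        if acc in targets:
            return acc, i
    return None
```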
Certificate revocation
This problem deals with certificate revocation in the public-key infrastructure.
[ { "context": "The OCSP server learns which websites the client is visiting.", "question": "### (a) The online certificate status protocol (OCSP) works as follows: when your browser gets a public-key certificate from a server, the certificate contains the URL of an OCSP server, typically run by the certificate authority who issued the certificate. The client then asks the OCSP server 'Is this certificate `<hash of cert>` still valid?' The OCSP server responds with a signed message indicating YES or NO.\nMany people worry that OCSP violates client privacy. Why would this be?" }, { "context": "The whole point of OCSP is to check whether a certificate is still valid. So the OCSP response includes a timestamp and clients will reject the response if it is too old.", "question": "### (b) A more recent technology, called OCSP stapling, has the web server (e.g., `nytimes.com`) fetch a signed OCSP response from its certificate authority indicating that its certificate is still valid.\nThis eliminates the privacy concern from part (a), but requires the web server to periodically fetch a new signed attestation from the certificate authority. Why would the web server need to refresh its OCSP response?" } ]
2022-12-13T00:00:00
Henry Corrigan-Gibbs, Yael Kalai, Nickolai Zeldovich
MIT
Metadata-hiding messaging
There are \( N \) students, each of whom wants to submit their course evaluations to the registrar anonymously. To do so, the students propose using a two-server mix net. That is, Server A has a keypair \((pk_A, sk_A)\) for a CCA-secure public-key encryption system (Enc, Dec) and Server B has a similar keypair \((pk_B, sk_B)\). Each student \( i \in \{1, \ldots, N\} \) will take her course evaluation \( m_i \) and encrypt it as

\[
ct_i = \text{Enc}(pk_A, \text{Enc}(pk_B, m_i)).
\]

Each student \( i \) sends her ciphertext \( ct_i \) to server A, who shuffles and decrypts them. Server A sends the resulting ciphertexts to server B who shuffles and decrypts them and sends the resulting plaintext messages to the registrar.
[ { "context": "CCA-secure encryption schemes do not hide message lengths.", "question": "### (a) If students’ evaluations can be arbitrary bitstrings in \\(\\{0, 1\\}^*\\), explain why CCA security may not be enough to hide which student is sending which evaluation." }, { "context": "Two – all of them.", "question": "### (b) Consider an attacker who can see everything that the registrar sees and also can see everything that some number of mix servers sees. How many mix servers does the attacker need to compromise to learn which honest student sent a particular course evaluation?" }, { "context": "There are many options. One is: the first server drops all messages except the one from student \\(i\\).", "question": "### (c) Say that the first mix server is actively malicious (i.e., can deviate from the protocol) and colludes with the registrar. What can the first mix server do to learn which honest student is sending which evaluation?" }, { "context": "If the second server and registrar collude, they can learn whether a message came from students \\(\\{1, \\ldots, n/2\\}\\) or \\(\\{n/2 + 1, \\ldots, n\\}\\).", "question": "### (d) To save computation, the sysadmin running the first server proposes an optimization: rather than shuffle all messages according to a random permutation, just shuffle the first half of the ciphertexts and separately shuffle the second half. Explain why this optimized protocol provides a weaker notion of security against an adversarial second server and registrar." } ]
2022-12-13T00:00:00
Henry Corrigan-Gibbs, Yael Kalai, Nickolai Zeldovich
MIT
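The shuffle-then-peel structure of the mix net can be sketched with a toy stand-in for the CCA-secure encryption; the `enc`/`dec` pair below is just an invertible tagging function, not real cryptography, used only to show the layering and the role of each server:

```python
import random

def enc(pk, m):
    # Toy stand-in for Enc: wrap the message with a key label.
    return ("ct", pk, m)

def dec(sk, ct):
    tag, pk, m = ct
    assert tag == "ct" and pk == sk  # toy model: pk and sk are one label
    return m

def mix_server(sk, cts):
    # Each server shuffles its batch, then peels off one layer.
    cts = list(cts)
    random.shuffle(cts)
    return [dec(sk, ct) for ct in cts]

# Each student wraps her evaluation for B, then for A, as in the setup.
msgs = ["eval-%d" % i for i in range(5)]
cts = [enc("A", enc("B", m)) for m in msgs]
out = mix_server("B", mix_server("A", cts))
```

The registrar receives all the evaluations, but linking them back to senders requires unwinding both servers' permutations, which is why (per answer (b)) both servers must be compromised.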
LPN Error Correction
Recall the LPN-based error correction scheme (Lecture 17). \( n \) refers to the number of bits in the secret key \( s \), \( m \) is the number of ring oscillator pairs and the number of bits in the helper data \( b \) and noise vector \( e \). The Gen step determines the \( b \) vector and exposes it. The Rep step chooses the \( n \) most stable bits (choosing the \( n \) out of \( m \) counter values that have the largest absolute values, regardless of sign) and uses the corresponding \( n \) equations to solve for \( s \). Consider a modified (Gen', Rep'), where Gen', in addition to outputting \( b \in \{0, 1\}^m \) (as is outputted by Gen), also outputs \( z \in \{0, 1\}^m \) that specifies the \( n \) stable locations. Rep' takes as input \( (b, z) \) and a vector \( e' \), with the guarantee that \( e \) and \( e' \) agree on the stable locations (i.e., \( e_i = e'_i \) for every \( i \in \{1, \ldots, m\} \) such that \( z_i = 1 \)), and outputs \( s \).
[ { "context": "Yes. In both schemes, we are assuming that counter values that are large will not change across Gen and Rep invocations.", "question": "### (a) Briefly explain whether or not this scheme has, roughly speaking, equivalent error correction capability to the original scheme." }, { "context": "No. The adversary gets more information about the biometric bits (which of them are the most stable), so the problem faced by the adversary is not exactly LPN.", "question": "### (b) Briefly explain whether or not this scheme has equivalent security in hiding \\( s \\) and \\( e \\) to the original scheme." } ]
2022-12-13T00:00:00
Henry Corrigan-Gibbs, Yael Kalai, Nickolai Zeldovich
MIT
iOS Security
The iOS secure boot mechanism operates in three steps:

1. The Boot ROM reads the low-level bootloader code and checks that it is correctly signed using Apple’s signing key. The Boot ROM then executes the bootloader.
2. The bootloader code reads the relevant parts of the kernel and checks that they are correctly signed using Apple’s signing key. The bootloader then executes the kernel.
3. The kernel then boots up the phone.
[ { "context": "There would be no way to update the kernel in case of bugs. Also, the ROM is small and a kernel is big.", "question": "### (a) A much simpler design would just have the entire kernel built into the Boot ROM. Then there would be no need for signature verification or multi-step boot. Explain why this would be a bad idea." }, { "context": "The attacker replaces Apple’s public key with her own. Then she replaces Apple’s bootloader with her own bootloader that runs arbitrary code.", "question": "### (b) When the device boots, it reads Apple’s signature-verification key from ROM into a buffer in memory and uses this key for signature verification. An attacker finds a bug in the boot ROM: the bug enables the attacker to overwrite the buffer in memory that stores Apple’s public key. The overwrite happens early in the boot process—before the device has verified any signatures. Explain how the attacker can now execute any code on the phone she wants." }, { "context": "The boot ROM is read-only, so on boot your phone will load Apple’s true signature-verification key into memory. The secure boot process will then refuse to load the evil OS.", "question": "### (c) Say that an attacker uses the attack of part (b) to cause your phone to boot a malicious version of iOS. Later on, you reboot your phone. Explain why, on reboot, you will either have a clean OS or your phone will not boot." } ]
2022-12-13T00:00:00
Henry Corrigan-Gibbs, Yael Kalai, Nickolai Zeldovich
MIT
Software security
Alyssa P. Hacker is designing a large software system and wants your advice on security.
[ { "context": "Use OpenSSL. It is much better tested. A network and crypto library in particular involves many possible bugs and subtle considerations, which may be difficult to discover in a one-off implementation, despite being simpler in principle. Even though OpenSSL is large, it is likely better-tested than a smaller replacement would be.", "question": "### (a) Alyssa needs to communicate securely over the Internet between computers located in different data centers. Her requirements are relatively simple: just encrypting the TCP connections between a small number of computers. She is considering either using a large existing software library that supports many features (e.g., the OpenSSL library that implements TLS), or implementing a much smaller library that supports just the small number of features she requires. What advice would you give her, and why?" }, { "context": "She should find an existing message format and associated encoding/decoding library that is designed to handle arbitrary inputs on decoding. It’s error-prone to write code that parses arbitrary inputs, so it’s good to leverage existing effort by other library developers. However, it’s important that the library was designed for this (e.g., not Python’s pickle library that was discussed in lecture).", "question": "### (b) Alyssa’s application will need to send customer records between her servers and applications running on users’ smartphones. Alyssa is wondering how she should represent her customer records in network messages, in terms of encoding and decoding them. One of her concerns is that existing data serialization libraries are complex, large pieces of code, and that more code means more bugs. She thinks her customer records can be implemented with a simpler encoder/decoder. Would you suggest that Alyssa use an existing data encoding library, or write a simpler version on her own? What requirements should she consider for the encoding library, whether she uses an existing one or writes her own? Why?" } ]
2022-12-13T00:00:00
Henry Corrigan-Gibbs, Yael Kalai, Nickolai Zeldovich
MIT
Privilege separation
Ben Bitdiddle is developing a stock trading application for smartphones. Unfortunately, he has gotten a bit carried away with features, and his application has grown to hundreds of thousands of lines of code! He is worried about security. The main security goal for his application is to ensure that stocks are traded (bought or sold) only when the user authorizes that operation. It would be a problem if an adversary could take advantage of the many potential bugs in Ben’s enormous application running on the user’s phone to send fake trades on behalf of the user.
[ { "context": "- Split Ben’s smartphone app into two: one handles login and trade approval (\"A\"), and another does everything else (\"B\").\n- Upon login, when the \"A\" app launches the \"B\" app, it should give it a reduced form of credentials that allows it to do everything except issue trades on the user’s behalf.\n- Adjust the server to support these two kinds of credentials (cookies, presumably).\n- When the giant app \"B\" wants to make a trade, it issues a call to app \"A\" to perform a trade.\n- \"A\" should prompt the user, showing what trade is about to take place, and the user must explicitly approve it before \"A\" sends the request to the server authenticated with \"A\"’s cookie.", "question": "### (a) Succinctly describe how Ben could use privilege separation to revise his design to improve confidence in achieving his security goal. You may need to make some changes to how the backend servers expect to interact with the app running on the user’s phone, and you may need to refactor Ben’s application into multiple apps that are largely isolated from one another. Describe any interfaces you introduce between your components. Use bullet points to make your design description more compact; if you’re writing 10 bullet points, that’s probably too verbose." } ]
2022-12-13T00:00:00
Henry Corrigan-Gibbs, Yael Kalai, Nickolai Zeldovich
MIT
Runtime defenses
[ { "context": "- Overwrite the contents of another struct field in a struct that contains an overflowed buffer.\n- Take advantage of a use-after-free vulnerability to overwrite some new data structure using an old fat pointer that refers to the same memory region.", "question": "### (a) Ben Bitdiddle wants to secure his gaming server written in C by using a 'fat pointer' runtime defense. Describe how an adversary could still exploit some memory errors in Ben’s C code despite this defense." }, { "context": "- Possible if the adversary can overwrite some important non-control-flow data.\n- Possible if the application uses computed jumps to invoke some sensitive code like `system()` and the adversary finds a buffer overflow that corrupts a function pointer that then gets invoked.", "question": "### (b) Ben Bitdiddle decides fat pointers aren’t a good idea for him, and instead enables control flow integrity when compiling his C code. Is it possible for an adversary to exploit a buffer overflow in Ben’s C code despite this defense? What would the adversary require to pull off such an attack, or why is it impossible?" } ]
2022-12-13T00:00:00
Henry Corrigan-Gibbs, Yael Kalai, Nickolai Zeldovich
MIT
Differential Privacy
Let \( f \) be a function over sensitive data sets. The data sets \( D \) and \( D' \) both have \( n \) records. We will define the global sensitivity of \( f \) as:

\[
S(f) = \max_{\text{dist}(D, D')=1} |f(D) - f(D')|
\]

where \( \text{dist}(D, D') = 1 \) if and only if \( D' \) can be obtained from \( D \) by changing one record in \( D \). Any possible record has a single value that is between the minimum value \( a \) and the maximum value \( b \).
[ { "context": "\\( (b - a)/n \\). The noise to add is \\( \\frac{b-a}{\\epsilon n} \\text{Lap}(0, 1) \\), i.e., zero-mean Laplace noise with scale \\( \\frac{b-a}{\\epsilon n} \\).", "question": "### (a) What is the global sensitivity when \\( f \\) is the (arithmetic) mean? In order to get \\( \\epsilon \\)-differential privacy, what noise distribution should we add to the outcome \\( f(D) \\)?" }, { "context": "\\( (b - a)^2/n \\). The noise to add is \\( \\frac{(b-a)^2}{\\epsilon n} \\text{Lap}(0, 1) \\), i.e., zero-mean Laplace noise with scale \\( \\frac{(b-a)^2}{\\epsilon n} \\).", "question": "### (b) What is the global sensitivity when \\( f \\) is the variance? In order to get \\( \\epsilon \\)-differential privacy, what noise distribution should we add to the outcome \\( f(D) \\)?" } ]
2022-12-13T00:00:00
Henry Corrigan-Gibbs, Yael Kalai, Nickolai Zeldovich
MIT
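The Laplace mechanism from answer (a) can be sketched in Python. The inverse-CDF Laplace sampler is a standard technique, and records are assumed to already lie in \([a, b]\):

```python
import math
import random

def laplace_noise(scale: float) -> float:
    # Inverse-CDF sampling of Lap(0, scale), with u uniform in (-0.5, 0.5).
    u = random.random() - 0.5
    while abs(u) >= 0.5:  # guard against u == -0.5 (log(0) below)
        u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_mean(data, a, b, eps):
    # Global sensitivity of the mean is (b - a)/n, so adding Laplace
    # noise with scale (b - a)/(eps * n) gives eps-differential privacy.
    n = len(data)
    sensitivity = (b - a) / n
    return sum(data) / n + laplace_noise(sensitivity / eps)
```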