## Stream: quantum computing

### Topic: quantum prisoner's dilemma

#### Yijun He (Aug 13 2019 at 09:41):

I just finished the proof for both the classical case and the entangled case of the quantum prisoner's dilemma.

#### Anthony Bordg (Aug 15 2019 at 10:50):

@Yijun He Can you open a pull request ?
I will review it as soon as possible once I'm done with the review of Deutsch-Jozsa.

#### Yijun He (Aug 15 2019 at 10:55):

I opened a pull request just now.

#### Anthony Bordg (Aug 20 2019 at 17:50):

> I opened a pull request just now.

I'm going to start the review of your PR.

#### Yijun He (Aug 21 2019 at 03:34):

I closed the old pull request and opened a new one with only Quantum_Prisoners_Dilemma.thy.

#### Anthony Bordg (Aug 21 2019 at 14:29):

@Yijun He what is the main reference you used for your formalization ?

#### Yijun He (Aug 21 2019 at 14:33):

> Sorry, I forgot to give you the link. It's exercise 4 there. Unfortunately, it seems there is no solution available on their webpage. At least, they give a link.

I referred to this paper in the link you gave me: https://arxiv.org/abs/quant-ph/9806088

#### Anthony Bordg (Aug 21 2019 at 14:39):

By the way, do not forget to update your branch; you're many commits behind, otherwise there will be incompatibilities.

#### Hanna Lachnitt (Aug 21 2019 at 14:51):

I used the paper above and this:
https://arxiv.org/pdf/quant-ph/0301042.pdf
although I am not sure if it's relevant anymore.

#### Anthony Bordg (Aug 22 2019 at 08:52):

@Hanna Lachnitt Regarding your second comment in the theory, the computation of $\hat{\mathcal{J}}=\exp \{i \gamma \hat{D} \otimes \hat{D} / 2\}$ does indeed give what you computed.

#### Hanna Lachnitt (Aug 22 2019 at 08:58):

@Anthony Bordg
I left this comment. Thank you, I was not sure, since I read several papers which were a bit unclear about it (mostly they didn't say what $\hat{D}$ was) and none gave an explicit representation, so I calculated it myself.

#### Anthony Bordg (Aug 22 2019 at 09:43):

To illustrate my point about nested proofs à la Lamport, I prefer the second proof to the first one in the following:

```
lemma (in prisoner)
  shows "J * |unit_vec 4 0⟩ = ψ⇩1"
proof
  fix i j
  assume "i < dim_row ψ⇩1" and "j < dim_col ψ⇩1"
  then have f0: "i∈{0,1,2,3} ∧ j=0"
    using mat_of_cols_list_def by auto
  then have "(J * |unit_vec 4 0⟩) $$(i,j) = (∑k<4. (J$$ (i,k)) * ( |unit_vec 4 0⟩ $$(k,j)))"
    using mat_of_cols_list_def ket_vec_def by auto
  also have "... = (∑k∈{0,1,2,3}. (J$$ (i,k)) * ( |unit_vec 4 0⟩ $$(k,j)))"
    using set_4 atLeast0LessThan by auto
  also have "... = ψ⇩1$$(i,j)"
    using mat_of_cols_list_def f0 by auto
  finally show "(J * |unit_vec 4 0⟩) $$(i,j) = ψ⇩1$$(i,j)" by blast
next
  show "dim_row (J * |unit_vec 4 0⟩) = dim_row ψ⇩1"
    using mat_of_cols_list_def by simp
next
  show "dim_col (J * |unit_vec 4 0⟩) = dim_col ψ⇩1"
    using mat_of_cols_list_def by (simp add: ket_vec_def)
qed
```


and

```
lemma (in prisoner)
  shows "J * |unit_vec 4 0⟩ = ψ⇩1"
proof
  fix i j
  assume a0: "i < dim_row ψ⇩1" and a1: "j < dim_col ψ⇩1"
  then have "(J * |unit_vec 4 0⟩) $$(i,j) = (∑k<4. (J$$ (i,k)) * ( |unit_vec 4 0⟩ $$(k,j)))"
    using mat_of_cols_list_def ket_vec_def by auto
  also have "... = (∑k∈{0,1,2,3}. (J$$ (i,k)) * ( |unit_vec 4 0⟩ $$(k,j)))"
    using set_4 atLeast0LessThan by simp
  also have "... = ψ⇩1$$(i,j)"
  proof-
    have "i∈{0,1,2,3} ∧ j=0"
      using a0 a1 mat_of_cols_list_def by auto
    thus ?thesis
      using mat_of_cols_list_def by auto
  qed
  finally show "(J * |unit_vec 4 0⟩) $$(i,j) = ψ⇩1$$ (i,j)" by simp
next
  show "dim_row (J * |unit_vec 4 0⟩) = dim_row ψ⇩1"
    using mat_of_cols_list_def by simp
next
  show "dim_col (J * |unit_vec 4 0⟩) = dim_col ψ⇩1"
    using mat_of_cols_list_def by (simp add: ket_vec_def)
qed
```


Here nested proofs don't make a big difference, since the proof is short and easy, but it allows me to illustrate my point. In longer and more intricate proofs it's nicer to introduce statements (here f0) where they are actually used. Also, if properly used, the depth is an indication for the reader of the difficulty/importance of a statement: the higher the depth, the more obvious/low-level the statement.
@Yijun He @Hanna Lachnitt

#### Anthony Bordg (Aug 22 2019 at 10:05):

There is a sign error in $U_A$ and $U_B$. Compare for instance

```
abbreviation (in restricted_strategic_space) U⇩A :: "complex Matrix.mat" where
"U⇩A ≡ mat_of_cols_list 2 [[(exp(𝗂*ψ⇩A))*cos(θ⇩A/2), sin (θ⇩A/2)],
                           [-sin(θ⇩A/2), (exp (-𝗂*ψ⇩A))*cos(θ⇩A/2)]]"
```


with (3) in the reference used.
@Yijun He

#### Anthony Bordg (Aug 22 2019 at 10:17):

A few lemmas should be moved into Basics.thy

#### Anthony Bordg (Aug 22 2019 at 10:42):

Instead of the ad hoc J_cnj, one should define the conjugate of a matrix in Quantum.thy.

#### Anthony Bordg (Aug 22 2019 at 16:17):

@Yijun He What is "alice_vec" in alice_vec_is_state ? Where does this vector come from ?

#### Anthony Bordg (Aug 22 2019 at 16:30):

> Yijun He What is "alice_vec" in alice_vec_is_state ? Where does this vector come from ?

I think I understand, this is the vector you get when you plug $\gamma = 0$, $\psi_B = 0$ and $\theta_B = \pi$ in $\psi_f$.

#### Anthony Bordg (Aug 23 2019 at 14:10):

> There is a sign error in $U_A$ and $U_B$.

@Yijun He Note however this "error" is not a problem, since you just replaced $D$ by $-D$ in (5) while $C$ is kept unchanged, but in that case $J$ as defined in (7) is still a solution of equation (6). So that's why you were able to complete the formalization despite this discrepancy in signs.
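The invariance can also be seen directly: $(-D)\otimes(-D) = D\otimes D$, so $J$ itself is unchanged, and the commutation conditions hold for $-D$ exactly when they hold for $D$. A numerical sketch, assuming $\hat{C} = I$ and $\hat{D} = \begin{pmatrix}0&1\\-1&0\end{pmatrix}$ as in the reference:

```python
import numpy as np

C = np.eye(2, dtype=complex)                     # cooperate
D = np.array([[0, 1], [-1, 0]], dtype=complex)   # defect
gamma = np.pi / 2

def commutator(A, B):
    return A @ B - B @ A

for Dop in (D, -D):                              # flipping the sign of D ...
    DD = np.kron(Dop, Dop)                       # ... leaves D ⊗ D unchanged
    # closed form of exp(i γ D⊗D / 2), valid since (D⊗D)^2 = I_4
    J = np.cos(gamma / 2) * np.eye(4) + 1j * np.sin(gamma / 2) * DD
    # equation (6): J commutes with D⊗D, D⊗C and C⊗D
    assert np.allclose(commutator(J, np.kron(Dop, Dop)), 0)
    assert np.allclose(commutator(J, np.kron(Dop, C)), 0)
    assert np.allclose(commutator(J, np.kron(C, Dop)), 0)
```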

#### Anthony Bordg (Aug 23 2019 at 15:15):

I have expanded the introduction:

> Prisoner's dilemma ceases to pose a dilemma if quantum strategies are allowed. Indeed,
> Alice and Bob both choosing to defect is no longer a Nash equilibrium. However, a new Nash
> equilibrium appears which is at the same time Pareto optimal. Moreover, there exists a
> quantum strategy which always gives reward if played against any classical strategy.
> Below, the parameter γ can be seen as a measure of the game's entanglement. The game behaves
> classically if γ = 0, and for the maximally entangled case (γ = π/2) the dilemma disappears.

#### Anthony Bordg (Aug 23 2019 at 15:49):

@Yijun He It's a bit confusing that $\psi$ is used instead of $\phi$ as in the reference.

#### Anthony Bordg (Aug 23 2019 at 16:15):

One could bring a little more structure into the theory by creating three sections: <The Set-Up>, <The Separable Case> and <The Maximally Entangled Case>.

#### Anthony Bordg (Aug 23 2019 at 16:20):

One could add a fourth section on the biased strategic space and the miracle move.

#### Anthony Bordg (Aug 23 2019 at 16:28):

One could add formal specifications for Nash equilibrium and Pareto optimum and prove that in the maximally entangled case $D\otimes D$ is no longer a Nash equilibrium and that $Q \otimes Q$ is a Nash equilibrium and Pareto optimal.
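Both facts can be spot-checked numerically before attempting the formal proofs. A minimal sketch, assuming $\hat{D} = \begin{pmatrix}0&1\\-1&0\end{pmatrix}$, the payoff values $(r,t,s,p) = (3,5,0,1)$ from the reference, and the sign conventions of the formalization quoted above, so treat it as a plausibility check, not the source of truth:

```python
import numpy as np

D = np.array([[0, 1], [-1, 0]], dtype=complex)

def U(theta, phi):
    # two-parameter strategy, with the sign convention of the formalization
    return np.array([[np.exp(1j * phi) * np.cos(theta / 2), np.sin(theta / 2)],
                     [-np.sin(theta / 2), np.exp(-1j * phi) * np.cos(theta / 2)]])

def payoffs(gamma, thetaA, phiA, thetaB, phiB):
    # closed form of J = exp(i γ D⊗D / 2), valid since (D⊗D)^2 = I_4
    J = np.cos(gamma / 2) * np.eye(4) + 1j * np.sin(gamma / 2) * np.kron(D, D)
    psi0 = np.zeros(4, dtype=complex)
    psi0[0] = 1.0                                # |CC> = |00>
    psi_f = J.conj().T @ np.kron(U(thetaA, phiA), U(thetaB, phiB)) @ J @ psi0
    p = np.abs(psi_f) ** 2                       # (p_CC, p_CD, p_DC, p_DD)
    alice = 3 * p[0] + 0 * p[1] + 5 * p[2] + 1 * p[3]
    bob   = 3 * p[0] + 5 * p[1] + 0 * p[2] + 1 * p[3]
    return alice, bob

gamma = np.pi / 2                                # maximally entangled case
# Q ⊗ Q with Q = U(0, π/2) reaches the Pareto-optimal reward (3, 3) ...
assert np.allclose(payoffs(gamma, 0, np.pi / 2, 0, np.pi / 2), (3, 3))
# ... while D ⊗ D with D = U(π, 0) still yields the punishment (1, 1)
assert np.allclose(payoffs(gamma, np.pi, 0, np.pi, 0), (1, 1))
```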

#### Anthony Bordg (Aug 23 2019 at 16:49):

A lot of names could be improved, since they are not currently illuminating.
What about classical_case_DD (* both players defect *) instead of classical_case, classical_alice_payoff_D⇩B (* Alice's payoff in the classical case if Bob defects *) instead of classical_alice_optimal, quantum_case_QQ (* both players play the quantum move *) instead of quantum_case, quantum_alice_payoff_D⇩B (* Alice's payoff in the quantum case if Bob defects *) instead of quantum_alice_optimal ?

#### Anthony Bordg (Aug 23 2019 at 20:26):

I also prefer the prefixes separable_ and max_entangled_ instead of classical_ and quantum_.
Indeed, there are two ways to get the classical limit, either $\gamma = 0$ or ($\gamma = \pi/2$ and both players play $\phi = 0$). So, even in the maximally entangled case, i.e. the "quantum" case, one can retrieve the classical case as a limit.

#### Anthony Bordg (Aug 23 2019 at 21:09):

> Yijun He It's a bit confusing that $\psi$ is used instead of $\phi$ as in the reference.

changed

#### Anthony Bordg (Aug 23 2019 at 21:10):

> One could bring a little more structure into the theory by creating three sections: <The Set-Up>, <The Separable Case> and <The Maximally Entangled Case>.

#### Anthony Bordg (Aug 23 2019 at 21:10):

> A lot of names could be improved, since they are not currently illuminating.
> What about classical_case_DD (* both players defect *) instead of classical_case, classical_alice_payoff_D⇩B (* Alice's payoff in the classical case if Bob defects *) instead of classical_alice_optimal, quantum_case_QQ (* both players play the quantum move *) instead of quantum_case, quantum_alice_payoff_D⇩B (* Alice's payoff in the quantum case if Bob defects *) instead of quantum_alice_optimal ?

names changed

#### Anthony Bordg (Aug 23 2019 at 21:11):

> I also prefer the prefixes separable_ and max_entangled_ instead of classical_ and quantum_.
> Indeed, there are two ways to get the classical limit, either $\gamma = 0$ or ($\gamma = \pi/2$ and both players play $\phi = 0$). So, even in the maximally entangled case, i.e. the "quantum" case, one can retrieve the classical case as a limit.

prefixes changed

PR #18 merged.

#### Yijun He (Sep 02 2019 at 01:38):

When I was working on the unfair strategy case, I found a problem with the proof: the original paper says that, when Alice plays the miracle move ($\phi = \pi/2$ and $\theta = \pi/2$) and Bob plays only classical moves ($\phi = 0$) in the maximally entangled case, the payoff of Alice should be greater than 3. However, my proof shows that this is not true: in particular, I proved a lemma saying

```
assumes "γ = pi/2"
shows "φ⇩A = pi/2 ∧ θ⇩A = pi/2 ∧ φ⇩B = 0 ∧ θ⇩B = pi/2 ⟶ alice_payoff = 1 ∧ bob_payoff = 1"
```
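For what it's worth, a direct numerical replay of the protocol reproduces exactly this result (a sketch assuming $\hat{D} = \begin{pmatrix}0&1\\-1&0\end{pmatrix}$, the payoff values $(3,5,0,1)$, and the sign conventions of the formalization):

```python
import numpy as np

D = np.array([[0, 1], [-1, 0]], dtype=complex)
gamma = np.pi / 2                        # maximally entangled case
# closed form of J = exp(i γ D⊗D / 2), valid since (D⊗D)^2 = I_4
J = np.cos(gamma / 2) * np.eye(4) + 1j * np.sin(gamma / 2) * np.kron(D, D)

def U(theta, phi):
    return np.array([[np.exp(1j * phi) * np.cos(theta / 2), np.sin(theta / 2)],
                     [-np.sin(theta / 2), np.exp(-1j * phi) * np.cos(theta / 2)]])

psi0 = np.zeros(4, dtype=complex)
psi0[0] = 1.0                            # |CC> = |00>
# Alice plays the miracle move U(π/2, π/2), Bob the classical U(π/2, 0)
psi_f = J.conj().T @ np.kron(U(np.pi/2, np.pi/2), U(np.pi/2, 0)) @ J @ psi0
p = np.abs(psi_f) ** 2                   # (p_CC, p_CD, p_DC, p_DD)
assert np.allclose(p, [0, 0, 0, 1])      # the final state is |DD>

alice = 3*p[0] + 5*p[2] + p[3]
bob   = 3*p[0] + 5*p[1] + p[3]
assert np.isclose(alice, 1) and np.isclose(bob, 1)
```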

#### Anthony Bordg (Sep 02 2019 at 12:58):

> When I was working on the unfair strategy case, I found a problem with the proof: the original paper says that, when Alice plays the miracle move ($\phi = \pi/2$ and $\theta = \pi/2$) and Bob plays only classical moves ($\phi = 0$) in the maximally entangled case, the payoff of Alice should be greater than 3. However, my proof shows that this is not true: in particular, I proved a lemma saying assumes "γ = pi/2" shows "φ⇩A = pi/2 ∧ θ⇩A = pi/2 ∧ φ⇩B = 0 ∧ θ⇩B = pi/2 ⟶ alice_payoff = 1 ∧ bob_payoff = 1"

I will have a look as soon as possible.

#### Anthony Bordg (Sep 02 2019 at 13:22):

Did you use equation (10) to compute Alice's minimal expected payoff ?

#### Yijun He (Sep 02 2019 at 13:26):

> Did you use equation (10) to compute Alice's minimal expected payoff ?

No, I just substituted in specific values of Alice's and Bob's strategies, and the result contradicts the statement that $\$_A(\hat{M}, \hat{U}(\theta,0)) \geq 3$ for any $\theta \in [0,\pi]$.

#### Yijun He (Sep 02 2019 at 13:28):

Please refer to the lemma max_entangled_MH in https://github.com/MH2520/Isabelle_marries_Dirac/blob/prisoners_dilemma_only/Quantum_Prisoners_Dilemma.thy

#### Anthony Bordg (Sep 02 2019 at 14:10):

> > Did you use equation (10) to compute Alice's minimal expected payoff ?
>
> No, I just substituted in specific values of Alice's and Bob's strategies, and the result contradicts the statement that $\$_A(\hat{M}, \hat{U}(\theta,0)) \geq 3$ for any $\theta \in [0,\pi]$.

Indeed, it contradicts this statement. Are we sure that ψ⇩f in the formalization is correct ?

#### Yijun He (Sep 02 2019 at 14:18):

> > > Did you use equation (10) to compute Alice's minimal expected payoff ?
> >
> > No, I just substituted in specific values of Alice's and Bob's strategies, and the result contradicts the statement that $\$_A(\hat{M}, \hat{U}(\theta,0)) \geq 3$ for any $\theta \in [0,\pi]$.
>
> Indeed, it contradicts this statement. Are we sure that ψ⇩f in the formalization is correct ?

I am not entirely sure, but I haven't found any mistakes in the intermediate steps yet (such as the definitions of $J$, $U_A$ or $U_B$).

#### Anthony Bordg (Sep 02 2019 at 14:30):

I'm going to check the computation of (1).

#### Anthony Bordg (Sep 02 2019 at 17:17):

I think ψ⇩f is incorrect.
One should find:
https://cdn.mathpix.com/snip/images/XEhGrZmKXZ689sZDabFDKLqYmEDFrSC_PGSs2-p8834.original.fullsize.png
@Yijun He

#### Yijun He (Sep 03 2019 at 00:01):

I think the expression in the picture is mostly the same as the ψ⇩f in the code, except the signs or orders of some terms are different.

#### Yijun He (Sep 03 2019 at 00:03):

The first and fourth terms in the picture are exactly the same as ψ⇩f $$ (0,0) and ψ⇩f $$ (3,0), and the second and third terms are equal to -ψ⇩f $$ (1,0) and -ψ⇩f $$ (2,0), which won't affect the resulting probability.

#### Anthony Bordg (Sep 03 2019 at 10:53):

Can we prove equation (1) ?

#### Yijun He (Sep 03 2019 at 11:00):

> Can we prove equation (1) ?

I added the lemma equation_one to the theory just now. https://github.com/MH2520/Isabelle_marries_Dirac/blob/prisoners_dilemma_only/Quantum_Prisoners_Dilemma.thy

#### Anthony Bordg (Sep 03 2019 at 11:09):

> > Can we prove equation (1) ?
>
> I added the lemma equation_one to the theory just now. https://github.com/MH2520/Isabelle_marries_Dirac/blob/prisoners_dilemma_only/Quantum_Prisoners_Dilemma.thy

There is no room for doubt anymore.

#### Anthony Bordg (Sep 22 2019 at 15:44):

@Yijun He The definition of a Pareto optimum should be something like

```
definition (in restricted_strategic_space) is_pareto_opt :: "bool" where
"is_pareto_opt ≡ ∀tA pA tB pB.
  (restricted_strategic_space.alice_payoff γ tA pA tB pB > alice_payoff ⟶
     restricted_strategic_space.bob_payoff γ tA pA tB pB < bob_payoff) ∧
  (restricted_strategic_space.bob_payoff γ tA pA tB pB > bob_payoff ⟶
     restricted_strategic_space.alice_payoff γ tA pA tB pB < alice_payoff)"
```


To avoid something so long, you may want to rename restricted_strategic_space to strategic_space or something even shorter.
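A discretized version of this property can be checked numerically before attempting the Isabelle proof. The sketch below scans a grid of two-parameter strategies and confirms that no profile improves one player's payoff over the Q ⊗ Q outcome (3, 3) without strictly hurting the other; the conventions ($\hat{D}$, payoff values, signs in $U$) are the same illustrative assumptions as above, not the actual theory files:

```python
import numpy as np

D = np.array([[0, 1], [-1, 0]], dtype=complex)
gamma = np.pi / 2
# closed form of J = exp(i γ D⊗D / 2), valid since (D⊗D)^2 = I_4
J = np.cos(gamma / 2) * np.eye(4) + 1j * np.sin(gamma / 2) * np.kron(D, D)

def U(theta, phi):
    return np.array([[np.exp(1j * phi) * np.cos(theta / 2), np.sin(theta / 2)],
                     [-np.sin(theta / 2), np.exp(-1j * phi) * np.cos(theta / 2)]])

def payoffs(tA, pA, tB, pB):
    psi0 = np.zeros(4, dtype=complex)
    psi0[0] = 1.0
    p = np.abs(J.conj().T @ np.kron(U(tA, pA), U(tB, pB)) @ J @ psi0) ** 2
    return 3*p[0] + 5*p[2] + p[3], 3*p[0] + 5*p[1] + p[3]

ref_a, ref_b = payoffs(0, np.pi / 2, 0, np.pi / 2)   # Q ⊗ Q gives (3, 3)
grid_t = np.linspace(0, np.pi, 9)                    # θ ∈ [0, π]
grid_p = np.linspace(0, np.pi / 2, 9)                # φ ∈ [0, π/2]
eps = 1e-9
for tA in grid_t:
    for pA in grid_p:
        for tB in grid_t:
            for pB in grid_p:
                a, b = payoffs(tA, pA, tB, pB)
                # Pareto optimality on the grid: a strict gain for one player
                # always comes with a strict loss for the other
                assert not (a > ref_a + eps and b >= ref_b - eps)
                assert not (b > ref_b + eps and a >= ref_a - eps)
```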

#### Anthony Bordg (Sep 22 2019 at 16:35):

@Yijun He The definition of a Nash equilibrium is simply:

```
definition (in restricted_strategic_space) is_nash_eq :: "bool" where
"is_nash_eq ≡
  (∀tA pA. alice_payoff ≥ restricted_strategic_space.alice_payoff γ tA pA θ⇩B φ⇩B) ∧
  (∀tB pB. bob_payoff ≥ restricted_strategic_space.bob_payoff γ θ⇩A φ⇩A tB pB)"
```
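With the same ingredients one can sanity-check this definition numerically, on a grid, for Q ⊗ Q in the maximally entangled case: unilateral deviations never beat the equilibrium payoff 3 (same illustrative conventions as before, not the actual theory files):

```python
import numpy as np

D = np.array([[0, 1], [-1, 0]], dtype=complex)
gamma = np.pi / 2
# closed form of J = exp(i γ D⊗D / 2), valid since (D⊗D)^2 = I_4
J = np.cos(gamma / 2) * np.eye(4) + 1j * np.sin(gamma / 2) * np.kron(D, D)

def U(theta, phi):
    return np.array([[np.exp(1j * phi) * np.cos(theta / 2), np.sin(theta / 2)],
                     [-np.sin(theta / 2), np.exp(-1j * phi) * np.cos(theta / 2)]])

def payoffs(tA, pA, tB, pB):
    psi0 = np.zeros(4, dtype=complex)
    psi0[0] = 1.0
    p = np.abs(J.conj().T @ np.kron(U(tA, pA), U(tB, pB)) @ J @ psi0) ** 2
    return 3*p[0] + 5*p[2] + p[3], 3*p[0] + 5*p[1] + p[3]

tQ, pQ = 0.0, np.pi / 2                 # Q = U(0, π/2)
eq_a, eq_b = payoffs(tQ, pQ, tQ, pQ)    # the candidate equilibrium payoff (3, 3)
for t in np.linspace(0, np.pi, 25):
    for p in np.linspace(0, np.pi / 2, 25):
        # no unilateral deviation improves the deviating player's payoff
        assert payoffs(t, p, tQ, pQ)[0] <= eq_a + 1e-9   # Alice deviates
        assert payoffs(tQ, pQ, t, p)[1] <= eq_b + 1e-9   # Bob deviates
```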


#### Yijun He (Sep 23 2019 at 10:15):

@Anthony Bordg Many thanks! I didn't realize that restricted_strategic_space.alice_payoff allows me to use alice_payoff directly as a function. I have fixed up all the definitions and lemmas, and I will make a pull request after double checking.

#### Anthony Bordg (Oct 06 2019 at 10:15):

> To avoid something so long, you may want to rename *restricted_strategic_space* with *strategic_space* or something even shorter.


@Yijun He Sorry to come back to that, but to avoid any possible confusion a more precise name is needed. 2p_strategic_space, or even 2p_space, where 2p stands for 2-parameter, is shorter and unambiguous.

#### Yijun He (Oct 06 2019 at 20:52):

@Anthony Bordg I replaced strategic_space by strategic_space_2p and updated the pull request. It seems that identifiers in Isabelle cannot begin with a numeral.

Last updated: Jul 15 2022 at 23:21 UTC