Introducing entanglement distillation
What if the state Alice and Bob want to share is different? What changes in the protocol?
There exist many examples of distillation protocols, each tailored to a specific class of input states. All of them follow the same basic procedure: copying information using CNOTs, as discussed in the video; learning information via measurement; and deciding whether that information was useful via a post-selection condition. If Alice and Bob want to share a different state, the biggest change to the protocol is usually to the post-selection condition, that is, the rule by which the measurement outcomes determine whether the protocol was successful.
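To make the structure of a single round concrete, here is a minimal sketch in Qiskit (an assumption; the section does not name a framework) of the CNOT-and-measure step for distilling a Bell pair. The preparation of the two noisy pairs and any noise model are omitted, and the agreement rule described in the comments is the post-selection condition typically used when the goal is a Bell pair; other target states would use a different rule.

```python
from qiskit import QuantumCircuit

# One round of a CNOT-based distillation protocol (a sketch only;
# the exact gates and post-selection rule depend on the protocol
# and on the state Alice and Bob want to share).
qc = QuantumCircuit(4, 2)

# Qubits 0, 1: Alice's and Bob's halves of the "source" pair (kept).
# Qubits 2, 3: Alice's and Bob's halves of the "target" pair (measured).

# Each party copies information from their source qubit onto their
# target qubit with a local CNOT.
qc.cx(0, 2)   # Alice's CNOT
qc.cx(1, 3)   # Bob's CNOT

# The target pair is measured and the outcomes are compared classically.
qc.measure(2, 0)
qc.measure(3, 1)

# Post-selection: for Bell-pair distillation the round is typically
# declared successful only if the two outcomes agree; otherwise the
# kept pair is discarded and the round is repeated with fresh pairs.
```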
Do you think that increasing the number of rounds, keeping the source pair the same, will get you to higher and higher fidelity states?
By starting from a large pool of equivalent input states and repeatedly feeding the outputs of successful distillations into new rounds of distillation, it is possible to reach final states of higher and higher fidelity. As mentioned in the video, the limiting factor here is the success probability: since distillation is a probabilistic procedure, more rounds mean more failures, and therefore more occasions on which the process has to start over.
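As a rough illustration of both effects, the sketch below iterates the standard fidelity-update and success-probability formulas for the BBPSSW protocol acting on Werner states (an assumption; the section does not commit to a particular protocol or noise model) and tracks the expected number of raw pairs consumed by nested rounds.

```python
def bbpssw_round(f1, f2):
    """Output fidelity and success probability for one BBPSSW round
    applied to two Werner states of fidelities f1 and f2 (standard
    textbook formulas, assumed here as an illustrative model)."""
    p_success = (f1 * f2
                 + f1 * (1 - f2) / 3
                 + (1 - f1) * f2 / 3
                 + 5 * (1 - f1) * (1 - f2) / 9)
    f_out = (f1 * f2 + (1 - f1) * (1 - f2) / 9) / p_success
    return f_out, p_success

# Nested distillation: each round consumes two copies of the previous
# round's output, so the expected number of raw pairs grows quickly
# because failed rounds force a restart.
f, expected_pairs = 0.75, 1.0
for round_number in range(1, 6):
    f, p = bbpssw_round(f, f)
    expected_pairs = 2 * expected_pairs / p
    print(f"round {round_number}: F = {f:.4f}, "
          f"expected raw pairs ~ {expected_pairs:.1f}")
```

Under these assumptions the fidelity climbs round after round, while the expected number of raw pairs needed grows much faster than a factor of two per round, reflecting the cost of the failed attempts.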
On the other hand, if we keep the same pair as the source pair but always pump it with low-quality target states that have not themselves been distilled, the updated fidelity of the source pair will not necessarily increase much. In distillation, the efficacy of the procedure depends on the quality of both input states.
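Continuing with the hypothetical bbpssw_round helper from the previous sketch (same assumptions), the same formulas show how strongly the outcome depends on the quality of both inputs:

```python
# Pumping a good source pair with a poor, undistilled target pair
# versus pumping it with an equally good pair.
f_mixed, _ = bbpssw_round(0.9, 0.6)   # output fidelity falls below 0.9
f_equal, _ = bbpssw_round(0.9, 0.9)   # output fidelity rises above 0.9
print(f"source F=0.9, target F=0.6 -> F_out = {f_mixed:.4f}")
print(f"source F=0.9, target F=0.9 -> F_out = {f_equal:.4f}")
```

In this particular model a sufficiently poor target pair not only fails to improve the source pair but can actually pull its fidelity down, which is why high-quality inputs matter on both sides.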