This paper received overall positive reviews from four reviewers. The reviewers liked the quality of the visual results, the extensiveness of the evaluation, and the novel idea of appearance transfer. The main concerns were the clarity of the presentation and the lack of a discussion of limitations. On technical merit, the consensus is that this paper should be accepted. However, since the paper concerns new approaches to generating synthetic faces, it raises potential ethical concerns, as indicated by some reviewers. As a result, we further sent this paper to two NeurIPS ethical reviewers for additional comments.

Ethical reviewer #1 stated that "the broader impact statement needs to be much stronger" for a number of reasons: "1) The paper positions itself as theoretical and therefore not posing a practical risk, but at the same time the paper does propose a concrete algorithm and displays examples of significantly improved deepfakes. This is inconsistent. The authors should clarify. I agree no tool (or code for reproducibility) is offered, which is a different kind of problem, but the algorithm seems clear enough that someone could reproduce it with enough work. 2) The paper glosses over the very serious risks posed by deepfake techniques. The paper mentions security and privacy, but it doesn't mention manipulated media, misinformation, hoaxes and false news, fraud, defamation, etc."

Ethical reviewer #2 stated that "the broader impact statement is brief, and limited in scope. It claims that a potential positive impact is bringing deceased actors back to life by swapping their faces onto substitutes. However, this is a controversial idea as it could raise a legal action under the actor's right to publicity. But it is the discussion of ethical considerations that is a more serious shortcoming: it offers only a passing mention of security concerns and privacy harms.
There is no mention of the widespread research and public discussion of the risk of these tools being used in harassment, impersonating public figures, and revenge porn (see the scholarship of Danielle Citron, among others). These are direct harms that far exceed privacy, and should be raised under the categories of potential harms in Section 4A.4: could it be used to impersonate intimate relations for the purpose of theft or fraud? Could it be used to impersonate public figures to influence political processes? These are not theoretical concerns, but harms that already occur with these kinds of systems in the world today."

In light of these ethical concerns, the AC recommends a conditional acceptance of this paper. That is, the paper can be accepted after the authors substantially improve the broader impact statement, in particular by thoroughly addressing the concerns raised by the two ethical reviewers. For example, ethical reviewer #1 mentioned that "the potential mitigations include the creation of methods for detecting deepfakes and of datasets to help researchers train deepfake detectors." The authors should also address the issue that "the paper mentions the method proposed can be used to generate more challenging deepfake datasets, but the authors don't share any data as far as I can tell." Ethical reviewer #2 mentioned: "This is an emerging area of research, and as the authors acknowledge, there is potential in this work to strengthen forgery detection algorithms. The paper should emphasize the serious risk of harm, and how this tool can be used to address those harms first and foremost.
If it cannot propose any way to prevent these harms, but only to 'improve' the ability for face swapping using the Appearance Optimal Transport model, then this paper brings undue risk of harm." Furthermore, ethical reviewer #2 suggested that "given the many news articles and research discussions about the harms of face swapping and deepfakes over the last two years, it would strengthen this paper to engage with those risks in detail. At present, it really only assesses this space as an optimal transport problem as a way to improve performance, rather than seriously contending with the serious harms of these tools being widely available (and considering who is most at risk)."

*******************************

Note from the Program Chairs: The camera-ready version of this paper has been reviewed with regard to the conditions listed above, and the paper is now fully accepted for publication.