NeurIPS 2020

CircleGAN: Generative Adversarial Learning across Spherical Circles


Review 1

Summary and Contributions: This paper presents a new score function on a hypersphere to evaluate both the realness and the diversity of samples. The method uses the hypersphere as an embedding space, treats spherical circles sharing the same normal vector as isolines, and assumes that real samples achieve maximum realness and diversity and reside on the great circle of the hypersphere. The final experimental results are impressive.

Strengths: 1. The method improves sample diversity along with quality by adopting a hypersphere as an embedding space and playing the adversarial game on the most diversifiable region of the hypersphere. 2. Using the great circle on a hypersphere places the embeddings of real samples on the most diversifiable region, rather than on a single point as in conventional GANs.

Weaknesses: 1. The authors do not explain why real samples residing on the great circle indicate maximum diversity. Does it mean that the diversity of the generated samples is lower than that of the original image set? 2. Key information is missing from the description of the proposed method in Section 3, the statements are incoherent, and some symbols in the formulas (such as 'V_i', 'V^proj', and 'p^~' in Eq. (1), and 'S_r' in Eq. (5)) are not given specific meanings, which makes the paper hard to read. 3. It is suggested to add a hypersphere diagram to convey the authors' true intention, e.g., 'make spherical circles sharing the same normal vector as isolines'.

Correctness: Intuitively, the hypersphere approach might be more effective, but the paper is hard to read and I am confused by the description of the method. Although the results of the comparative experiments are impressive, I am not fully convinced.

Clarity: The article is poorly written and the ideas are not fully expressed, especially in the key method description section.

Relation to Prior Work: The discussion of existing methods is lacking, especially of SphereGAN, which also uses the idea of a hypersphere.

Reproducibility: No

Additional Feedback: 1. The text of the method section may need to be rearranged, with the meanings of all custom symbols made clear. 2. A further discussion of existing GAN methods that also use a hypersphere is needed, to better present the characteristics of the proposed method.


Review 2

Summary and Contributions: The authors incrementally develop a novel method from a baseline that implements a point-based evaluation of realness on the hypersphere. In contrast to the baseline, they aggressively search for the optimal distance by proposing the following techniques: updating the center point and incorporating various techniques including center estimation and radius equalization. Thanks to these well-posed techniques, the proposed method produces better results. The authors also suggest a new type of GAN objective function for conditional GANs and show promising results.

Strengths: The authors have developed novel approaches to measure the discrepancy on the hypersphere with various techniques. The experiments section is carefully composed. The proposed method outperforms state-of-the-art methods including the baseline.

Weaknesses: Recent theoretically oriented GAN models focus on suggesting new types of metrics between probability measures and claim theoretical superiority, including stability and diversity, from a probabilistic point of view. Contrary to this trend, the proposed method starts from a combination of heuristic motivations and incrementally builds a new objective function for GANs. The proposed method is somewhat interesting, but it is unreliable due to the lack of theoretical analysis.

Correctness: I like the ideas and concepts of 'diversity' and 'realness' on the sphere (which is reached by simple L2-normalization), but it is non-trivial to say that the proposed objective function actually minimizes some 'distance' between the real and fake probability distributions. SphereGAN implements IPMs as its objective function and shows the equivalence between minimizing the Wasserstein distance on the hypersphere and minimizing its objective, but this kind of analysis is not addressed for the proposed method even though SphereGAN is the main baseline. Thus the authors need to clarify what is being minimized. The proposed method uses L2-normalization as the projection onto the hypersphere, which induces information loss since it is not one-to-one (all features lying on the same ray from the origin are projected to the same point on the hypersphere). The stereographic projection, in contrast, admits only a single fixed point, the north pole ('center' in the paper), which can be rotated transitively on the hypersphere. From this point of view, I think the stereographic projection could be applied to the proposed method in place of L2-normalization. The learning dynamics have not been analyzed in the paper, while this is a main issue for GAN objective functions. The stability of GAN learning has been investigated from WGAN to SobolevGAN in terms of designing gradient penalties. At least, the authors should show the FID (or loss) landscape of the proposed method. Overall, I am not sure what brings the performance improvement of the proposed method over the baselines.
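The contrast drawn above between the two projections can be illustrated with a minimal NumPy sketch (generic code, not the paper's implementation): L2-normalization collapses every positive multiple of a feature vector to one point on the sphere, whereas the inverse stereographic projection maps distinct vectors to distinct points.

```python
import numpy as np

def l2_normalize(v):
    # Project a feature vector onto the unit hypersphere.
    # Many-to-one: all positive multiples of v land on the same point.
    return v / np.linalg.norm(v)

def inverse_stereographic(v):
    # Map R^n onto the unit sphere in R^(n+1), minus the north pole.
    # One-to-one: every distinct vector gets a distinct point.
    s = np.sum(v * v)
    return np.concatenate([2.0 * v, [s - 1.0]]) / (s + 1.0)

v = np.array([1.0, 2.0, 2.0])

# L2-normalization loses the scale information of the feature.
p1, p2 = l2_normalize(v), l2_normalize(3.0 * v)
assert np.allclose(p1, p2)

# Inverse stereographic projection keeps scaled vectors distinct
# while still placing them on the unit sphere.
q1, q2 = inverse_stereographic(v), inverse_stereographic(3.0 * v)
assert not np.allclose(q1, q2)
assert np.allclose(np.linalg.norm(q1), 1.0)
assert np.allclose(np.linalg.norm(q2), 1.0)
```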

Clarity: Overall, I found this paper to be a nice read. The paper is well-written and structured clearly.

Relation to Prior Work: The authors properly discussed the difference between their method and baseline method.

Reproducibility: Yes

Additional Feedback:


Review 3

Summary and Contributions: The paper proposes a new GAN model that replaces the binary latent discrimination of the vanilla GAN with a hypersphere embedding space. This allows "discrimination" based on both the quality and the diversity of the input. The model can be extended to a conditional-GAN framework. Experiments are conducted on various image generation tasks, demonstrating the advantages of the proposed model.

Strengths: Overall, I think this is a very good paper. While the projection (onto the hypersphere) is similar to SphereGAN, the paper presents a new criterion for the discriminator that considers both realness and diversity. This seems to work well with the hypersphere setup. The experiments compare the proposed CircleGAN with other GANs, where CircleGAN provides better overall performance. Ablation studies over SphereGAN are also included, which makes the proposal more convincing.

Weaknesses: 1) One question I have is about the computational cost of the proposed model. Since CircleGAN has additional training steps and needs to update the center, disc, and pivot, what is its training speed compared to, say, SphereGAN? 2) In Table 1(b), it seems that even without great-circle learning (model 4), the model can still achieve good performance, even better than model 1 in IS. I wonder why that is.

Correctness: I think the paper is, overall, theoretically sound.

Clarity: The paper is clearly written and easy to follow.

Relation to Prior Work: The proposed CircleGAN is motivated by the earlier SphereGAN, which also uses a hypersphere embedding space. The proposed method adds criteria for evaluating the realness and diversity of the generated samples, which are demonstrated to be beneficial for improving GAN performance.

Reproducibility: Yes

Additional Feedback:


Review 4

Summary and Contributions: This paper improves sample diversity along with quality by adopting a hypersphere as an embedding space and playing the adversarial game on the most diversifiable region of the hypersphere.

Strengths: The idea is new; as far as I know, no similar idea has been proposed in the GAN framework.

Weaknesses: 1. It is recommended to use a figure to give a brief illustration of the core idea for easy understanding. 2. Experiments on large datasets and high-resolution images, e.g., ImageNet, are lacking. 3. The generated images do not look very good and are far from recent SOTA GAN models such as StarGAN and StyleGAN. It would be better if the proposed idea were evaluated on a stronger GAN baseline.

Correctness: Yes

Clarity: Yes

Relation to Prior Work: Yes

Reproducibility: Yes

Additional Feedback: Please see the weaknesses. Post-rebuttal: after reading the rebuttal, I keep my original rating.