{"title": "PerspectiveNet: A Scene-consistent Image Generator for New View Synthesis in Real Indoor Environments", "book": "Advances in Neural Information Processing Systems", "page_first": 7601, "page_last": 7612, "abstract": "Given a set of a reference RGBD views of an indoor environment, and a new viewpoint, our goal is to predict the view from that location. Prior work on new-view generation has predominantly focused on significantly constrained scenarios, typically involving artificially rendered views of isolated CAD models. Here we tackle a much more challenging version of the problem. We devise an approach that exploits known geometric properties of the scene (per-frame camera extrinsics and depth) in order to warp reference views into the new ones. The defects in the generated views are handled by a novel RGBD inpainting network, PerspectiveNet, that is fine-tuned for a given scene in order to obtain images that are geometrically consistent with all the views in the scene camera system. Experiments conducted on the ScanNet and SceneNet datasets reveal performance superior to strong baselines.", "full_text": "PerspectiveNet: A Scene-consistent Image Generator\nfor New View Synthesis in Real Indoor Environments\n\nDavid Novotny\n\nJeremy Reizenstein\n\nBenjamin Graham\nFacebook AI Research\n\nLondon\n\n{dnovotny,benjamingraham,reizenstein}@fb.com\n\nAbstract\n\nGiven a set of a reference RGBD views of an indoor environment, and a new\nviewpoint, our goal is to predict the view from that location. Prior work on new-\nview generation has predominantly focused on signi\ufb01cantly constrained scenarios,\ntypically involving arti\ufb01cially rendered views of isolated CAD models. Here we\ntackle a much more challenging version of the problem. We devise an approach\nthat exploits known geometric properties of the scene (per-frame camera extrinsics\nand depth) in order to warp reference views into the new ones. The defects in the\ngenerated views are handled by a novel RGBD inpainting network, PerspectiveNet,\nthat is \ufb01ne-tuned for a given scene in order to obtain images that are geometrically\nconsistent with all the views in the scene camera system. Experiments conducted\non the ScanNet and SceneNet datasets reveal performance superior to strong\nbaselines.\n\n1\n\nIntroduction\n\nDecisions often have to be made on the basis of incomplete information about our visual environment.\nHumans instinctively \ufb01ll the gaps in from prior experience. This is an enabler of many tasks such\nas navigation, and machine learning should strive to match this ability. One way of quantitatively\nmeasuring it is via generating new views within a partially explored environment. Many variants of\nthis problem, known as new view synthesis, exist, ranging from a category-speci\ufb01c setup, where the\nhallucinated views are conditioned on image(s) of an isolated instance of a well de\ufb01ned visual object\ncategory (car, chair) [9], to inferring new photo-realistic pictures of outdoor or indoor scenes given\na set of reference images [13]. The former can be seen as a subtask of the latter, since real scenes\ncontain many instances of various object categories in an arbitrary geometric con\ufb01guration. 
Perhaps due to the challenging nature of the more unconstrained setup, the community in recent years has mostly focused on the category-specific scenario, restricting a large portion of the experimental evaluation to clean synthetic datasets such as ShapeNet [4].

In this work, we take a step toward the more complex task of generating new views of real indoor environments. Historically, the new view synthesis task has been addressed either with learning-based methods [43, 20, 19], which leverage deep nets to map an encoding of a viewpoint and style to a new view, or with methods that exploit geometric properties of a given scene to warp reference images into a target viewpoint, usually with some human intervention and possibly followed by an inpainting step that fills the newly appearing holes [17, 7]. While learning-based methods are suitable for the category-specific setup, where the viewpoint-to-image mapping is less complex due to the regular geometric structure of object categories, indoor scene synthesis has predominantly been addressed with different variants of the render-inpaint technique. Similar to previous approaches, we tackle the task by devising a novel variant of the render-inpaint approach.

33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada.

[Figure 1 diagram: input views {v^i} ⊂ R^{4×H×W} → partial point cloud X ⊂ R^3 × R^3 → partial new views {v̄^i} ⊂ R^{4×H×W} → PerspectiveNet → completed new views {v̂^i} ⊂ R^{4×H×W}.]
Figure 1: New view synthesis in real indoor environments. Given a sparse set of reference RGBD room views, our goal is to generate new views of the same room.

Our main contribution is a novel scene-level multi-camera optimization scheme termed PerspectiveNet. The crux of the method lies in regarding the joint set of reference and generated test views as a calibrated optical system. Its known physical properties allow for exploiting powerful constraints that enforce consistency of the newly generated views across all present cameras. Combined with a latent representation of the pixels in each view, we optimize over the set of latent image codes to generate a globally consistent set of views of a considered room.

We conduct one of the first systematic evaluations of the new view synthesis task in the context of real indoor environments. Our method is compared with a variety of strong baselines that either follow the render-inpaint paradigm or reason about scene contents in 3D space with 3D convolutions. Evaluation on the ScanNet and SceneNet datasets reveals that our method outperforms all baselines both qualitatively and quantitatively.

2 Related Work

Category-specific new view synthesis  Neural networks can successfully learn complex mappings, including changes of appearance induced by camera movement. They have therefore been the method of choice for tackling new view synthesis. The task has mostly been explored for isolated object categories such as chairs or faces, since their regular structure significantly constrains the problem. Given an encoding of a relative transformation and a reference image, the Transforming Autoencoder [16] produced its transformed version.
While [16] applied the architecture to images of digits, the Multiview Perceptron [46] synthesized views of human faces.

With the advent of deep learning, convolutional neural networks (CNNs) enabled new view synthesis for more complex object categories. Dosovitskiy et al. [9] generated new views of chairs from a synthetic dataset (ShapeNet). Building on [34], the methods of [34, 39, 22] propose a similar encoder-decoder architecture that, unlike [9], generates views of object instances previously unseen in the training set. Similar to our approach, several other methods transfer pixels from the reference views and follow up with an image refinement step [31, 43, 19]. Other approaches [20] involve an intermediate 3D CNN that aggregates information from the reference views, followed by a learned 3D-to-image decoder. While the aforementioned methods show impressive results, they are restricted to isolated views of object categories from a synthetic dataset. We differ by considering a much more challenging setup with the reference views coming from real indoor environments captured with a hand-held camera.

New view synthesis in the wild  Only very few works have explored unconstrained generation of new views in real environments. Flynn et al. [13] consider a simpler version of the task where the reference views cover most of the frustum of the test views, allowing the majority of the synthesized image to be formed by copying pixels from the reference views. Eslami et al. [12] proposed an end-to-end trained Generative Query Network (GQN) that renders new viewpoints given a latent encoding of the scene and a novel viewpoint. GQN can effortlessly browse simple synthetic environments; however, it has

Figure 2: Scene-consistent optimization. For a given test scene, PerspectiveNet optimizes the latent representations of new views in order to obtain a scene-consistent set of images that satisfy geometric re-projection constraints of the scene camera system and have similar visual style.

not been tested in a real-world setup. The recent method of Meshry et al. [29] captures a complete distribution of possible appearance variations of a mostly hole-free image. This differs from our aim of inpainting large undefined regions. Recently, [33] trained a 3D ConvNet for generation of new views of a single non-synthetic object instance.

Image inpainting  Our method is also related to image inpainting. Early approaches [2, 1, 35, 10, 23] were recently outperformed by deep methods. Isola et al. [18] used a conditional generative adversarial network (cGAN) to translate between different types of pixel-wise labels. Many improvements of the original cGAN architecture, including [38, 45, 44], were later proposed. Avoiding the use of GANs, [26] leverage partial convolutions in combination with the perceptual and style losses [14] and achieve state-of-the-art results in semantic image inpainting. Recently, Ulyanov et al. [37] demonstrated that convolutional layers constitute a strong prior for natural images and, as such, can be used for image denoising without prior training on a dataset of images.

3 Method

Task and naming conventions  Our goal in this paper is to generate new views of an indoor scene given a set of reference views captured by a handheld RGBD camera.
More formally, we take as input a set of $N_{ref}$ reference RGBD views $\{v^i\}_{i=1}^{N_{ref}}$, $v^i \in \mathbb{R}^{4 \times H \times W}$, annotated with their corresponding camera extrinsic and intrinsic matrices $g^i \in SE(3)$ and $K^i \in \mathbb{R}^{4 \times 4}$ respectively¹. At some pixels, the depth value is incorrectly recorded as zero to denote missing data. Given the camera parameters $\{(\hat{K}^i, \hat{g}^i)\}_{i=1}^{N_{test}}$ of $N_{test}$ test views, our method attempts to generate their RGBD content with a prediction $\{\hat{v}^i\}_{i=1}^{N_{test}}$. We denote by $V = \{v^i\}_{i=1}^{N_{ref}} \cup \{\hat{v}^i\}_{i=1}^{N_{test}}$ the set of all views in a given scene.

Throughout this paper we denote image spatial locations $u = (u_1, u_2) \in \{1, ..., W\} \times \{1, ..., H\}$. At each pixel $u$, we can identify the corresponding per-pixel depth $d_u \in \mathbb{R}$ and color $c_u \in [0, 1]^3$. The knowledge of camera parameters and depth allows us to back-project each pixel $u = (u_1, u_2)$ from image $i$ to its corresponding 3D point $x^i_u \sim (K^i g^i)^{-1} [u_1, u_2, d_u, 1]^T$ in the common coordinate frame of the corresponding scene. Since we work with rendering algorithms that occasionally produce holes in images (i.e. pixels with undefined color), we denote by $\Omega(v)$ the set of all locations $u$ in a view $v$ that are non-holes (pixels with defined color).

In what follows, we describe a render-inpaint baseline followed by our main contribution: an extension of that baseline into a novel scene-consistent inpainting method, PerspectiveNet.

¹We use upper indices to index frames, while lower indices stand for spatial locations of pixels within a frame.
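To make the notation concrete, the following is a minimal NumPy sketch of the per-pixel back-projection $x^i_u \sim (K^i g^i)^{-1}[u_1, u_2, d_u, 1]^T$ defined above. It is our own illustration, not the authors' code; the homogeneous-coordinate layout and 1-based pixel indexing are assumptions.

```python
# Hypothetical sketch of the back-projection used throughout the paper:
# x_u ~ (K g)^{-1} [u1, u2, d_u, 1]^T, applied to every non-hole pixel.
import numpy as np

def backproject(depth, K, g):
    """Lift an H x W depth map to 3D points in the common scene frame.

    depth: (H, W) array; zeros mark missing measurements (holes).
    K:     (4, 4) intrinsic matrix; g: (4, 4) extrinsic matrix in SE(3).
    Returns an (N, 3) array of scene-space points for the N valid pixels.
    """
    H, W = depth.shape
    u1, u2 = np.meshgrid(np.arange(1, W + 1), np.arange(1, H + 1))
    valid = depth > 0  # the paper records missing depth as zero
    d = depth[valid]
    # Homogeneous pixel coordinates [u1, u2, d, 1] as in the formula above.
    pix = np.stack([u1[valid], u2[valid], d, np.ones_like(d)], axis=0)
    x = np.linalg.inv(K @ g) @ pix  # (K g)^{-1} [u1, u2, d, 1]^T
    return (x[:3] / x[3]).T  # dehomogenize to (N, 3)
```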
3.1 Inpainting with a denoising RGBD autoencoder

Figure 3: We first train a denoising RGBD autoencoder Φ for an inpainting task on a large dataset of partial renders of indoor scenes (a). During the scene-level optimization stage (b), Φ is altered by adding learnable feature residuals Δφ that are summed with the outputs of intermediate layers φ in order to define a latent parametrization of test scene views.

As outlined above, we take a pragmatic approach and start by "copying" all possible pixels from the reference views into the test ones. While this could be achieved with depth-based image rendering (DBIR) [42, 3], that approach is not applicable in our case: the large distances between camera centers cause occlusions between pixels that DBIR cannot resolve. Instead, we make use of a differentiable point tracer [36] that projects the whole scene point cloud X into each of the test views, accounting for occlusions in the process (a description of the renderer is deferred to the supplementary material). Since the reference views are selected in a sparse manner, the resulting scene point cloud, upon rendering, produces only a partial render $\bar{v}^i$ in each of the test cameras $(\hat{K}^i, \hat{g}^i)$.

The next step aims to infill the missing parts of the partial renders $\bar{v}^i$. To this end, we train a deep denoising RGBD autoencoder $\Phi(\bar{v}^i) = \hat{v}^i$ which accepts $\bar{v}^i$ and returns a prediction of the full image $\hat{v}^i$ (fig. 3a). Φ comprises a feature pyramid network (FPN) [25] trunk terminated by a 3x3 convolutional filter with 5 output channels (three RGB channels and an additional two for depth and its confidence) and a bilinear upsampler that resizes the output, producing a clean RGBD frame $\hat{v}^i$ of the same spatial resolution as $\bar{v}^i$. Training minimizes the inpainting loss from [26] for the RGB channels and an uncertainty-based error defined as the likelihood of the predicted parameters of a Laplace distribution over the set of output depth values [30]. Additionally, the network contains two more RGBD prediction branches attached to the outputs of the 2nd and 3rd upsample-and-add layers. These predict two additional inpainted RGBD frames at lower resolutions, which are passed together with the high-resolution output to the RGBD inpainting losses.
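To make the architecture concrete, here is a minimal PyTorch sketch of such an FPN-style denoising autoencoder with auxiliary lower-resolution RGBD heads. It is our own illustration under assumed channel widths and a toy three-level encoder; the paper's Φ uses a ResNet-50-initialized FPN trunk [25].

```python
# A minimal sketch of an FPN-style RGBD inpainting autoencoder with
# auxiliary low-resolution heads, mirroring the description above.
# Channel widths and the encoder are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RGBDHead(nn.Module):
    """3x3 conv predicting 5 channels: RGB + depth + depth confidence."""
    def __init__(self, in_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, 5, kernel_size=3, padding=1)

    def forward(self, x):
        return self.conv(x)

class DenoisingRGBDAutoencoder(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        # Toy 3-level encoder over the 4-channel RGBD partial render.
        self.enc = nn.ModuleList([
            nn.Conv2d(4, ch, 3, stride=2, padding=1),
            nn.Conv2d(ch, ch, 3, stride=2, padding=1),
            nn.Conv2d(ch, ch, 3, stride=2, padding=1),
        ])
        # Lateral 1x1 convs + top-down upsample-and-add, as in an FPN.
        self.lat = nn.ModuleList([nn.Conv2d(ch, ch, 1) for _ in range(3)])
        # One RGBD head per pyramid level: two auxiliary low-resolution
        # heads and the final full-resolution head.
        self.heads = nn.ModuleList([RGBDHead(ch) for _ in range(3)])

    def forward(self, v_bar):
        """v_bar: (N, 4, H, W) batch of partial RGBD renders."""
        feats, x = [], v_bar
        for enc in self.enc:
            x = F.relu(enc(x))
            feats.append(x)
        outs, top = [], None
        for lat, head, f in zip(self.lat, self.heads, reversed(feats)):
            f = lat(f)
            if top is not None:  # upsample-and-add from the coarser level
                f = f + F.interpolate(top, size=f.shape[-2:], mode="nearest")
            top = f
            outs.append(head(f))
        # Bilinearly upsample the finest prediction to the input resolution.
        full = F.interpolate(outs[-1], size=v_bar.shape[-2:],
                             mode="bilinear", align_corners=False)
        return full, outs[:-1]  # final RGBD frame + auxiliary predictions
```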
In practice, Φ is capable of correcting small defects caused, e.g., by rendering irregularly sampled surfaces. However, it struggles with larger missing areas where semantically consistent structures, such as pieces of furniture, have to be hallucinated. Indeed, this problem is known to be very challenging due to its ambiguous nature, where different inpaintings provide a reasonable explanation of the partial render. Instead of sharp predictions, Φ produces an average over possible solutions, manifesting as a blurry RGBD output. Surprisingly, we observed the same behavior for a GAN-based architecture [45] which was designed for scenarios with a strongly multimodal output space such as ours.

The second main failure mode is an inpainting that is inconsistent across different views of the same underlying 3D surface. This is expected, since the denoiser Φ is applied independently to each partial render $\bar{v}$. A possible solution is to reason in a common 3D space by applying a 3D ConvNet as in [20]. Unfortunately, our experiments revealed that the low resolution of the underlying voxel grid again leads to blurry RGB predictions. The next section describes how we deal with both problems.

3.2 Scene-consistent inpainting

In this paper, we propose to tackle the problems of ambiguity and inconsistency with PerspectiveNet, a novel approach that jointly refines the set of test and reference views in order to obtain a globally consistent solution that respects the geometric constraints of the scene camera system. Performing local scene-specific optimization allows the method to select one of the possible solutions, resolving the ambiguity issue. To deal with the scene-consistency conundrum, we leverage the depth predicted by the denoiser Φ and back-project every pixel into the scene point cloud to derive multi-view consistency constraints that guide the image inpainting on a global scene level.

In abstract terms, we pose the scene-centric image inpainting task as a minimization problem of an objective L of the following form:

$$L = \min_{\{\hat{\phi}^1, \dots, \hat{\phi}^{N_{test}}\}} \sum_{i=1}^{N_{test}} \ell_{cons}\big(\hat{v}^i \,\big|\, V \setminus \{\hat{v}^i\}\big), \qquad \hat{v}^i = \Psi(\hat{\phi}^i) \qquad (1)$$

Here, $\ell_{cons}(\hat{v}^i \,|\, V \setminus \{\hat{v}^i\})$ measures how geometrically consistent an inpainted image $\hat{v}^i$ is with the set $V \setminus \{\hat{v}^i\}$ of the other inpainted / reference views in the camera system of the scene.
The optimization is over the set of latent representations $\hat{\phi}^i$ of each test view, $\hat{v}^i = \Psi(\hat{\phi}^i)$, where Ψ is a mapping between a latent space and the space of RGBD images. Intuitively, the minimizer of L constitutes a globally scene-consistent set of RGBD views $\hat{v}^i$. We now describe the two main ingredients of our method: (a) the parametrization function Ψ of the input images, and (b) our choice of consistency losses $\ell_{cons}$.

Latent image parametrization  Since optimizing over raw RGBD values is known to be difficult and often requires multiple regularizers, we need a more sophisticated parametrization function Ψ. We follow [3], who demonstrated that a deep latent coding of depth images can overcome the need for complex regularizers. In this work, we opt for a simple solution that leverages the RGBD denoising autoencoder Φ from section 3.1 above.

We make use of the intermediate feature planes of Φ to create a latent representation $\phi(\hat{v})$ of each test view $\hat{v}$. Since denoising autoencoders are known to learn generic image representations, optimizing over $\phi(\hat{v})$ is likely to produce a tensor lying on the manifold of plausible RGBD images. This effectively avoids the need for additional complex regularizers.

More specifically, after training Φ, we convert it into its modifiable version $\Phi'(\bar{v}, \Delta\phi(\bar{v}))$ (illustrated in fig. 3 (b)). The input to $\Phi'$ is a partial render $\bar{v}$ as well as a tuple $\Delta\phi(\bar{v}) = (\Delta\phi_1(\bar{v}), ..., \Delta\phi_L(\bar{v}))$ of feature residuals $\Delta\phi_l(\bar{v})$ that are element-wise added to each of the L intermediate feature tensors produced by the feed-forward pass of $\Phi(\bar{v})$. More formally, $\Phi'$ is defined as:

$$\Phi'(\bar{v}, \Delta\phi(\bar{v})) = \Phi_L( \dots \Phi_1(\Phi_0(\bar{v}) + \Delta\phi_1) \dots + \Delta\phi_L), \qquad (2)$$

where $\Phi_l$ stands for the l-th layer of the network Φ. To avoid unnecessary image overparametrization, we add the feature residuals $\Delta\phi_l$ only to a preselected subset of feature layers l from the decoding part of Φ. In this manner, $\Phi'$ replaces the latent mapping Ψ in eq. (1), while the tuple $\Delta\phi(\bar{v}^i)$ corresponds to the latent image representation $\hat{\phi}^i$. Having defined a convenient way of parametrizing images in our camera system, we next devise the constraints that drive our scene-level inpainting optimization.
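A short sketch of the "modifiable" autoencoder of eq. (2) follows. It is an assumed illustration: the layer-by-layer decomposition is simplified, and in the paper the residuals attach only to selected decoder layers of the FPN.

```python
# Sketch of Phi' from eq. (2): zero-initialized learnable residuals are
# added to a chosen subset of intermediate features. Note the indexing:
# Delta_phi_l is added to the output of layer l-1, i.e. the input of layer l.
import torch
import torch.nn as nn

class ModifiablePhi(nn.Module):
    def __init__(self, layers, residual_shapes):
        """layers: list of nn.Module, [Phi_0, Phi_1, ..., Phi_L].
        residual_shapes: {layer index: feature shape} for the layers whose
        outputs receive a residual Delta_phi (the latent image code)."""
        super().__init__()
        self.layers = nn.ModuleList(layers)
        # These residuals are the ONLY parameters optimized during the
        # scene-level stage; Phi's own weights stay frozen.
        self.delta_phi = nn.ParameterDict({
            str(l): nn.Parameter(torch.zeros(shape))
            for l, shape in residual_shapes.items()
        })

    def forward(self, v_bar):
        x = v_bar
        for l, layer in enumerate(self.layers):
            x = layer(x)
            if str(l) in self.delta_phi:  # eq. (2): Phi_l(...) + Delta_phi
                x = x + self.delta_phi[str(l)]
        return x
```

In this design, freezing the network weights and exposing only the residuals keeps the output on the manifold of plausible RGBD images that Φ learned during pre-training, which is exactly what eq. (2) exploits.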
Reprojection consistency loss   Our main constraint ensures that newly generated points in a given inpainted test view $\hat v_i$ are consistent with the projection of the scene point cloud $\hat X_i$ formed by rendering all other views into $v_i$. More formally, for each test view $\hat v_i$, we form a view-specific point cloud $\hat X_i$ by back-projecting into the common scene coordinate frame all pixels from the set $V \setminus \{\hat v_i\}$ of all reference and test views excluding $\hat v_i$ itself. The point cloud $\hat X_i$ is then rendered into the camera $(\hat g_i, \hat K_i)$, forming a contextual render $\check v_i$. For each test view $\hat v_i$ we then define the multiview inpainting consistency loss $\ell_\text{cons}(\hat v_i)$ as follows:

$$\ell_\text{cons}\big(\hat v_i \,\big|\, V \setminus \{\hat v_i\}\big) = \sum_{u \in \Omega(\hat v_i) \cap \Omega(\check v_i)} h(\hat v^i_u, \check v^i_u), \qquad h(a, b) = \delta \sum_{c=1}^{6} \Big( \sqrt{1 + \delta^{-1}(a_c - b_c)^2} - 1 \Big), \quad (3)$$

which is defined over all pixel locations $u$ that have a non-hole status in both $\hat v_i$ and $\check v_i$. Here, $h(a, b)$ accumulates Pseudo-Huber losses [5] across the dimensions $c$ of the per-pixel RGBXYZ vectors $a, b \in \mathbb{R}^6$, where the XYZ component is the back-projection $x^i_u \in \mathbb{R}^3$ of the depth value $d_u$ into the 3D coordinate frame of camera $i$. $h(a, b)$ is further accumulated over 6 scales of a Gaussian image pyramid. We set $\delta = 1$.
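For concreteness, a sketch of the per-pixel term of eq. (3), assuming 6-channel RGBXYZ tensors and a precomputed validity (non-hole) mask; the accumulation over the 6 Gaussian-pyramid scales is omitted:

```python
import torch

def pseudo_huber_consistency(v_hat, v_check, valid, delta=1.0):
    """Sketch of eq. (3): Pseudo-Huber penalty between an inpainted view and
    its contextual render over 6-channel RGBXYZ pixels. `v_hat`, `v_check`
    are (6, H, W) tensors; `valid` is a (H, W) float mask of pixels that are
    non-holes in both images."""
    diff2 = (v_hat - v_check) ** 2                        # squared residuals
    h = delta * (torch.sqrt(1.0 + diff2 / delta) - 1.0)   # per-channel Pseudo-Huber
    return (h.sum(dim=0) * valid).sum()                   # sum channels, mask holes
```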
Style consistency loss   The style loss [14] has been shown to facilitate more realistic results in image generation [6] as well as image inpainting [26]. We adopt the loss in the following form:

$$\ell_\text{style}\big(\{\hat v_i\}_{i=1}^{N_\text{test}} \,\big|\, \{v_i\}_{i=1}^{N_\text{ref}}\big) = \sum_{l \in \{1,2,3\}} \left\| \sum_{i=1}^{N_\text{ref}} \frac{\Psi_l(v_i)\Psi_l(v_i)^T}{N_\text{ref} H_l W_l} - \sum_{i=1}^{N_\text{test}} \frac{\Psi_l(\hat v_i)\Psi_l(\hat v_i)^T}{N_\text{test} H_l W_l} \right\|_1, \quad (4)$$

where $\Psi_l(v) \in \mathbb{R}^{D \times H_l W_l}$ denotes the set of features from the $l$-th intermediate layer of an ImageNet pre-trained VGG16 network [32], reshaped into a $D \times H_l W_l$ matrix by flattening the last two dimensions of the original $D \times H_l \times W_l$ feature tensor. As in [27], we use the features extracted after each of the first 3 convolutional layers of VGG16.

Intuitively, the loss pools a style descriptor from the reference images and ensures that the newly inpainted pixels match this distribution. Since the style transfer loss is known to produce fish-scale artifacts, following [26], we use it in conjunction with a total variation regularizer

$$\ell_\text{TV} = \frac{1}{N_\text{test} W H} \sum_{i=1}^{N_\text{test}} \sum_{u_1, u_2} \big| c^i_{(u_1+1, u_2)} - c^i_{(u_1, u_2)} \big| + \big| c^i_{(u_1, u_2+1)} - c^i_{(u_1, u_2)} \big|,$$

where $c^i_{(u_1, u_2)}$ is the RGB value of the pixel at position $(u_1, u_2)$ in a test view $\hat v_i$.
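A sketch of eq. (4) and the TV regularizer, assuming a hypothetical `feats(v)` that returns the three VGG16 feature maps of a view as $D \times H_l \times W_l$ tensors (the feature extractor itself is omitted):

```python
import torch

def gram(f):
    """Normalized Gram matrix of a (D, H, W) feature tensor."""
    d = f.shape[0]
    f = f.reshape(d, -1)                            # flatten to (D, H*W)
    return f @ f.t() / f.shape[1]                   # divide by H_l * W_l

def style_loss(test_views, ref_views, feats):
    """Eq. (4): match the average Gram matrices of inpainted and reference
    views over the first three VGG16 feature layers."""
    loss = 0.0
    for l in range(3):
        g_ref = sum(gram(feats(v)[l]) for v in ref_views) / len(ref_views)
        g_test = sum(gram(feats(v)[l]) for v in test_views) / len(test_views)
        loss = loss + (g_ref - g_test).abs().sum()  # entrywise l1 norm
    return loss

def tv_loss(rgb):
    """Total variation regularizer of one test view's (3, H, W) RGB channels,
    normalized by the number of pixels; averaging over the N_test views
    completes the l_TV term."""
    dh = (rgb[:, 1:, :] - rgb[:, :-1, :]).abs().sum()
    dw = (rgb[:, :, 1:] - rgb[:, :, :-1]).abs().sum()
    return (dh + dw) / rgb[0].numel()
```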
Scene-level optimization   Having defined the main constraints and the image parametrization, we can now write the objective $L$ of the PerspectiveNet scene-level optimization:

$$L = \min_{\{\Delta\varphi(\bar v_i)\}_{i=1}^{N_\text{test}}} \ \ell_\text{style}\big(\{\hat v_i\}_{i=1}^{N_\text{test}} \,\big|\, \{v_i\}_{i=1}^{N_\text{ref}}\big) + \sum_{i=1}^{N_\text{test}} \ell_\text{cons}\big(\hat v_i \,\big|\, V \setminus \{\hat v_i\}\big), \qquad \hat v_i = \Phi'(\bar v_i, \Delta\varphi(\bar v_i)). \quad (5)$$

For a given scene, $L$ is minimized with the Adam [21] optimizer for 50 iterations, with an initial learning rate of 0.01 that decays 10-fold after 35 iterations.

PerspectiveNet in a nutshell   Our algorithm thus works as follows. For each testing scene, the set of reference views $\{v_i\}$ is rendered into the target cameras $\{\hat g_j\}$, producing partial renders $\{\bar v_j\}$. The ensuing scene-consistent optimization, which minimizes $L$ (eq. (5)), then finds the optimal set of latent image representations $\{\Delta\varphi(\bar v_j)\}$ that, after being passed together with $\{\bar v_j\}$ to the modifiable denoising autoencoder $\Phi'$, leads to the final set of scene-consistent new views $\{\hat v_j = \Phi'(\bar v_j, \Delta\varphi(\bar v_j))\}$.

3.3 Technical details

Additional regularizers   While $\ell_\text{cons}$ ensures that newly generated pixels stay consistent with all views in the scene, in principle nothing stops the global optimization from producing a set of mutually consistent inpaintings that become "detached" from the reference views. We therefore add two regularization terms that prevent the solution from diverging too far from the ground truth provided by the reference frames:

$$\ell_R = \sum_{i=1}^{N_\text{test}} \bigg[ \sum_{u \in \Omega(\bar v^i)} h(\bar v^i_u, \hat v^i_u) + \sum_{u \in \Omega(\hat v^i)} h(\hat v^i_{u, t=0}, \hat v^i_u) \bigg], \quad (6)$$

where the first term of the main sum brings the non-hole pixels of the partial ground-truth render $\bar v^i$ close to the corresponding pixels of the inpainted image $\hat v^i$, and the second term prevents the result of the optimization $\hat v^i$ from grossly differing from the initial inpainting $\hat v^i_{u, t=0}$ obtained by $\Phi$ at the beginning of the global optimization.

Training the RGBD denoiser $\Phi$   In order to train $\Phi$, we collect a dataset of image pairs $\{(\bar v_i, v_i)\}$ generated by randomly sampling 8 reference views from the training scenes of a considered dataset of indoor scenes and rendering them with our point tracer into a random test view for which the ground-truth RGBD frame $v_i$ is known. We further filter out pairs in which less than 50% of the input pixels are defined. The RGBD autoencoder $\Phi$ is trained with an initial learning rate of $10^{-5}$, decayed 10-fold once the loss plateaus. Where possible, the convolutional layers were initialized with the weights of an ImageNet pre-trained ResNet-50 network. The batch size was set to 4, and training on a single GPU took approximately 7 days. For each of the 2 datasets considered in this paper (ScanNet [8], SceneNet [28]), we train a separate autoencoder.

(a) ScanNet

Method                    ℓ1^RGB ↓   PSNR ↑   LPIPS ↓   ℓ1^D [m] ↓   δ1 ↑     δ2 ↑     δ3 ↑
PerspectiveNet            68.022     13.762   0.422     0.115        0.352    0.411    0.471
PerspectiveNet w/o opt    66.511     13.986   0.426     0.120        0.188    0.230    0.279
PartialConv [26]          93.604     11.374   0.461     0.750        0.194    0.236    0.283
3DConvNet                 78.590     12.190   0.531     0.138        0.301    0.359    0.426
BiGAN [45]                77.313     12.742   0.523     0.215        0.169    0.212    0.265

(b) SceneNet

Method                    ℓ1^RGB ↓   PSNR ↑   LPIPS ↓   ℓ1^D [m] ↓   δ1 ↑     δ2 ↑     δ3 ↑
PerspectiveNet            49.698     15.687   0.424     0.219        0.366    0.431    0.494
PerspectiveNet w/o opt    48.521     16.324   0.442     0.227        0.101    0.125    0.155
PartialConv [26]          76.470     12.377   0.481     1.846        0.008    0.010    0.013
3DConvNet                 75.942     12.614   0.570     0.653        0.040    0.050    0.062
BiGAN [45]                55.815     15.112   0.485     0.249        0.319    0.375    0.431

Table 1: Quantitative evaluation of depth and image generation on the test sets of ScanNet (a) and SceneNet (b), comparing our method with 2D and 3D inpainting baselines.

4 Experiments

Datasets and evaluation protocol   We chose 2 datasets for evaluation: ScanNet [8] and SceneNet [28]. ScanNet currently comprises one of the largest 3D datasets of real indoor scenes, with 1500 training and 100 test scenes. In contrast to the realistic ScanNet, SceneNet is a dataset of 33k/1k synthetic train/test scenes; it was chosen in order to benchmark performance in a clean setting, free of challenging factors such as lighting changes or inaccurate camera extrinsics.

Each dataset contains RGBD views of indoor scenes annotated with camera extrinsic and intrinsic parameters, allowing evaluation of new view synthesis. In order to benchmark a method on a given test scene, we sample 4 reference views, for which we assume knowledge of both the RGBD frames and the camera parameters, and at most 8 test views, for which only the camera parameters are given. For the test views, we then generate the color and depth channels and compare them to the corresponding ground-truth frames. In order to obtain good coverage of the scene contents, the reference views were selected by clustering the camera pose descriptors (a concatenation of the vectorized camera rotation matrix and the camera translation vector) into four clusters and picking the typical point of each cluster as a reference camera. The test views were chosen in a similar fashion by clustering the parameters of the remaining cameras and picking views that contain at least 50%/40% defined pixels (for ScanNet and SceneNet, respectively) after rendering the contents of the reference views. For each dataset, we first train all methods on the frames coming from its training scenes. For ScanNet, the evaluation is conducted on all 100 test scenes; for SceneNet, we randomly sampled 100 scenes from the test set. We produce images of width/height 320/240 pixels and compare with the ground-truth images at a resolution of 640/480.

Following standard practice [45, 38, 26, 40, 18], we quantitatively evaluate the generated images by reporting the per-pixel $\ell_1$ error ($\ell_1^{RGB}$) and the peak signal-to-noise ratio (PSNR). Since the $\ell_1$ loss and PSNR are known to be overly sensitive to errors in low-level image details while being insensitive to more abstract semantic visual structures, we also evaluate the perceptual error LPIPS [41], a calibrated distance between images in the feature space of a pre-trained image classification network (VGG16 in our case). In order to evaluate the generated depth maps, following [24, 11], we report the per-pixel absolute depth error ($\ell_1^D$, measured in meters) and the metrics $\delta_i$ for $i \in \{1, 2, 3\}$, which measure the portion of test pixels whose absolute depth error is lower than a threshold $t_i = 1.25^i$ cm.
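As a concrete reference for the depth evaluation, a small sketch assuming NumPy arrays of predicted and ground-truth depths in meters and a boolean validity mask (names are illustrative):

```python
import numpy as np

def depth_metrics(pred, gt, valid):
    """Sketch of the depth evaluation: mean absolute error in meters and the
    delta_i metrics, i.e. the fraction of valid pixels whose absolute depth
    error is below t_i = 1.25**i centimeters."""
    err = np.abs(pred[valid] - gt[valid])               # per-pixel error [m]
    l1 = err.mean()
    deltas = {f"delta_{i}": (err < 1.25 ** i / 100.0).mean() for i in (1, 2, 3)}
    return l1, deltas
```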
Inpainting baselines   The evaluation focuses mainly on inpainting baselines that render the reference views into the target ones and then fill the holes with an inpainting algorithm. The baseline abbreviated as PartialConv uses a state-of-the-art inpainting architecture from [26], trained on the same dataset as our RGBD denoiser. We also compare with BiGAN [45], trained on the same dataset. Finally, PerspectiveNet w/o opt is an ablation of our method comprising the initial inpainting produced by the RGBD denoiser from section 3.1, without any iterations of the scene-consistent optimizer.

Figure 4: Qualitative evaluation of new RGBD view synthesis on the ScanNet (a) and SceneNet (b) datasets, comparing our method (PerspectiveNet) to inpainting with partial convolutions (PartialConv [26]), Bicycle GAN (BiGAN [45]), and a sparse 3DConvNet that inpaints voxels directly in 3D. The first column of each row shows the ground truth for a given test view, while the second shows a partial render $\bar v$ of the reference views into the camera of the ground-truth view. For each of the 6 displayed test cases, we show the RGB (upper row) and depth (lower row) predictions.

3D inpainting   Apart from the inpainting methods, we further compare with an approach that operates on 3D voxel grids (3DConvNet). Since our application requires a voxel grid of very high resolution and spatial extent, and classic dense 3D ConvNet architectures have prohibitive memory requirements, we implemented a sparse U-Net convolutional network [15]. A detailed description of the architecture is included in the supplementary material.

Table 1 contains the quantitative results on the ScanNet and SceneNet datasets; a qualitative comparison is presented in fig. 4. Additional qualitative results are included in the supplementary material.

Discussion of results   Table 1 reveals that our method outperforms the considered baselines on all depth metrics. For the color metrics $\ell_1^{RGB}$ and PSNR, we are on par with the ablation "PerspectiveNet w/o opt"; however, we outperform it on the more semantically meaningful LPIPS metric.
Intuitively, since PSNR and $\ell_1^{RGB}$ are sensitive to low-level image details while LPIPS better assesses image realism, the relative differences in the color metrics between PerspectiveNet and "PerspectiveNet w/o opt" indicate that, while the local color distributions are roughly correct in both cases, adding the scene-consistent optimizer brings better image realism.

Qualitatively, compared to our approach, the inpainting baseline PartialConv generates blurrier results, due to a suboptimal loss function that does not take the ambiguity of the output into account; it also records low depth-inpainting performance. Our method further outperforms BiGAN: we observed that changes in its latent code $z$ mostly lead to global changes in the color statistics of the output image, rather than altering the geometry of the inpainted scene. 3DConvNet records competitive depth-prediction accuracy but lags behind in color prediction, most likely because its reconstructions are optimized to match the partial point clouds in 3D, without considering the perceptual realism of the voxels when rendered into the 2D test views.

5 Conclusion

In this work, we tackled the previously seldom-explored problem of new-view synthesis in real indoor environments. We proposed a novel approach, termed PerspectiveNet, based on the render-inpaint paradigm. Its main technical contribution is a bundle-adjustment-like technique that jointly optimizes all views in a given room in order to obtain a set of new views that is globally scene-consistent in terms of geometry and style. Evaluation on two large datasets of indoor scenes [8, 28] reveals performance superior to several strong baselines.

References

[1] Coloma Ballester, Marcelo Bertalmio, Vicent Caselles, Guillermo Sapiro, and Joan Verdera. Filling-in by joint interpolation of vector fields and gray levels. IEEE Transactions on Image Processing, 10(8):1200–1211, 2001.

[2] Marcelo Bertalmio, Guillermo Sapiro, Vincent Caselles, and Coloma Ballester. Image inpainting. In Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques, pages 417–424. ACM Press/Addison-Wesley Publishing Co., 2000.

[3] Michael Bloesch, Jan Czarnowski, Ronald Clark, Stefan Leutenegger, and Andrew J. Davison. CodeSLAM—learning a compact, optimisable representation for dense visual SLAM. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2560–2568, 2018.

[4] Angel X. Chang, Thomas Funkhouser, Leonidas Guibas, Pat Hanrahan, Qixing Huang, Zimo Li, Silvio Savarese, Manolis Savva, Shuran Song, Hao Su, Jianxiong Xiao, Li Yi, and Fisher Yu. ShapeNet: An information-rich 3D model repository. Technical report, Stanford University, Princeton University, and Toyota Technological Institute at Chicago, 2015.

[5] Pierre Charbonnier, Laure Blanc-Féraud, Gilles Aubert, and Michel Barlaud. Deterministic edge-preserving regularization in computed imaging. IEEE Transactions on Image Processing, 6(2):298–311, 1997.

[6] Qifeng Chen and Vladlen Koltun. Photographic image synthesis with cascaded refinement networks. In Proceedings of the IEEE International Conference on Computer Vision, pages 1511–1520, 2017.

[7] Tao Chen, Zhe Zhu, Ariel Shamir, Shi-Min Hu, and Daniel Cohen-Or. 3-Sweep: Extracting editable objects from a single photo. ACM Transactions on Graphics, 32(6), 2013.
[8] Angela Dai, Angel X. Chang, Manolis Savva, Maciej Halber, Thomas Funkhouser, and Matthias Nießner. ScanNet: Richly-annotated 3D reconstructions of indoor scenes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.

[9] Alexey Dosovitskiy, Jost Tobias Springenberg, and Thomas Brox. Learning to generate chairs with convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1538–1546, 2015.

[10] Alexei A. Efros and William T. Freeman. Image quilting for texture synthesis and transfer. In Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques, pages 341–346. ACM, 2001.

[11] David Eigen and Rob Fergus. Predicting depth, surface normals and semantic labels with a common multi-scale convolutional architecture. In Proceedings of the IEEE International Conference on Computer Vision, pages 2650–2658, 2015.

[12] S. M. Ali Eslami, Danilo Jimenez Rezende, Frederic Besse, Fabio Viola, Ari S. Morcos, Marta Garnelo, Avraham Ruderman, Andrei A. Rusu, Ivo Danihelka, Karol Gregor, et al. Neural scene representation and rendering. Science, 360(6394):1204–1210, 2018.

[13] John Flynn, Ivan Neulander, James Philbin, and Noah Snavely. DeepStereo: Learning to predict new views from the world's imagery. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5515–5524, 2016.

[14] Leon A. Gatys, Alexander S. Ecker, and Matthias Bethge. A neural algorithm of artistic style. arXiv preprint arXiv:1508.06576, 2015.

[15] Benjamin Graham, Martin Engelcke, and Laurens van der Maaten. 3D semantic segmentation with submanifold sparse convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 9224–9232, 2018.

[16] Geoffrey E. Hinton, Alex Krizhevsky, and Sida D. Wang. Transforming auto-encoders. In International Conference on Artificial Neural Networks, pages 44–51. Springer, 2011.

[17] Youichi Horry, Ken-Ichi Anjyo, and Kiyoshi Arai. Tour into the picture: Using a spidery mesh interface to make animation from a single image. In Proceedings of the 24th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH '97, pages 225–232. ACM Press/Addison-Wesley Publishing Co., 1997.

[18] Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A. Efros. Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.

[19] Dinghuang Ji, Junghyun Kwon, Max McFarland, and Silvio Savarese. Deep view morphing. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2155–2163, 2017.

[20] Abhishek Kar, Christian Häne, and Jitendra Malik. Learning a multi-view stereo machine. In Advances in Neural Information Processing Systems, pages 365–376, 2017.

[21] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

[22] Tejas D. Kulkarni, William F. Whitney, Pushmeet Kohli, and Josh Tenenbaum. Deep convolutional inverse graphics network. In Advances in Neural Information Processing Systems, pages 2539–2547, 2015.

[23] Vivek Kwatra, Irfan Essa, Aaron Bobick, and Nipun Kwatra. Texture optimization for example-based synthesis. ACM Transactions on Graphics, 24(3):795–802, 2005.
[24] Iro Laina, Christian Rupprecht, Vasileios Belagiannis, Federico Tombari, and Nassir Navab. Deeper depth prediction with fully convolutional residual networks. In 2016 Fourth International Conference on 3D Vision (3DV), pages 239–248. IEEE, 2016.

[25] Tsung-Yi Lin, Piotr Dollár, Ross Girshick, Kaiming He, Bharath Hariharan, and Serge Belongie. Feature pyramid networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2117–2125, 2017.

[26] Guilin Liu, Fitsum A. Reda, Kevin J. Shih, Ting-Chun Wang, Andrew Tao, and Bryan Catanzaro. Image inpainting for irregular holes using partial convolutions. In Proceedings of the European Conference on Computer Vision (ECCV), pages 85–100, 2018.

[27] Guilin Liu, Kevin J. Shih, Ting-Chun Wang, Fitsum A. Reda, Karan Sapra, Zhiding Yu, Andrew Tao, and Bryan Catanzaro. Partial convolution based padding. arXiv preprint, 2018.

[28] John McCormac, Ankur Handa, Stefan Leutenegger, and Andrew J. Davison. SceneNet RGB-D: Can 5M synthetic images beat generic ImageNet pre-training on indoor segmentation? In Proceedings of the IEEE International Conference on Computer Vision, 2017.

[29] Moustafa Meshry, Dan B. Goldman, Sameh Khamis, Hugues Hoppe, Rohit Pandey, Noah Snavely, and Ricardo Martin-Brualla. Neural rerendering in the wild. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6878–6887, 2019.

[30] David Novotny, Diane Larlus, and Andrea Vedaldi. Capturing the geometry of object categories from video supervision. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018.

[31] Eunbyung Park, Jimei Yang, Ersin Yumer, Duygu Ceylan, and Alexander C. Berg. Transformation-grounded image generation network for novel 3D view synthesis. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3500–3509, 2017.

[32] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In International Conference on Learning Representations, 2015.

[33] Vincent Sitzmann, Justus Thies, Felix Heide, Matthias Nießner, Gordon Wetzstein, and Michael Zollhöfer. DeepVoxels: Learning persistent 3D feature embeddings. CoRR, abs/1812.01024, 2018.

[34] Maxim Tatarchenko, Alexey Dosovitskiy, and Thomas Brox. Multi-view 3D models from single images with a convolutional network. In European Conference on Computer Vision, pages 322–337. Springer, 2016.

[35] Alexandru Telea. An image inpainting technique based on the fast marching method. Journal of Graphics Tools, 9(1):23–34, 2004.

[36] Shubham Tulsiani, Richard Tucker, and Noah Snavely. Layer-structured 3D scene inference via view synthesis. In Proceedings of the European Conference on Computer Vision (ECCV), 2018.

[37] Dmitry Ulyanov, Andrea Vedaldi, and Victor Lempitsky. Deep image prior. arXiv preprint, 2017.

[38] Ting-Chun Wang, Ming-Yu Liu, Jun-Yan Zhu, Andrew Tao, Jan Kautz, and Bryan Catanzaro. High-resolution image synthesis and semantic manipulation with conditional GANs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8798–8807, 2018.

[39] Jimei Yang, Scott E. Reed, Ming-Hsuan Yang, and Honglak Lee. Weakly-supervised disentangling with recurrent transformations for 3D view synthesis. In Advances in Neural Information Processing Systems, pages 1099–1107, 2015.

[40] Jiahui Yu, Zhe Lin, Jimei Yang, Xiaohui Shen, Xin Lu, and Thomas S. Huang. Generative image inpainting with contextual attention. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5505–5514, 2018.
[41] Richard Zhang, Phillip Isola, Alexei A. Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 586–595, 2018.

[42] Tinghui Zhou, Matthew Brown, Noah Snavely, and David G. Lowe. Unsupervised learning of depth and ego-motion from video. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1851–1858, 2017.

[43] Tinghui Zhou, Shubham Tulsiani, Weilun Sun, Jitendra Malik, and Alexei A. Efros. View synthesis by appearance flow. In European Conference on Computer Vision, pages 286–301. Springer, 2016.

[44] Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A. Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision, 2017.

[45] Jun-Yan Zhu, Richard Zhang, Deepak Pathak, Trevor Darrell, Alexei A. Efros, Oliver Wang, and Eli Shechtman. Toward multimodal image-to-image translation. In Advances in Neural Information Processing Systems, pages 465–476, 2017.

[46] Zhenyao Zhu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Multi-view perceptron: A deep model for learning face identity and view representations. In Advances in Neural Information Processing Systems, pages 217–225, 2014.