{"title": "Bayesian Color Constancy with Non-Gaussian Models", "book": "Advances in Neural Information Processing Systems", "page_first": 1595, "page_last": 1602, "abstract": "", "full_text": "Bayesian Color Constancy\nwith Non-Gaussian Models\n\nCharles Rosenberg\n\nComputer Science Department\n\nThomas Minka\n\nStatistics Department\n\nAlok Ladsariya\n\nComputer Science Department\n\nCarnegie Mellon University\n\nCarnegie Mellon University\n\nCarnegie Mellon University\n\nPittsburgh, PA 15213\n\nPittsburgh, PA 15213\n\nPittsburgh, PA 15213\n\nchuck@cs.cmu.edu\n\nminka@stat.cmu.edu\n\nalokl@cs.cmu.edu\n\nAbstract\n\nWe present a Bayesian approach to color constancy which utilizes a non-\nGaussian probabilistic model of the image formation process. The pa-\nrameters of this model are estimated directly from an uncalibrated image\nset and a small number of additional algorithmic parameters are chosen\nusing cross validation. The algorithm is empirically shown to exhibit\nRMS error lower than other color constancy algorithms based on the\nLambertian surface re\ufb02ectance model when estimating the illuminants\nof a set of test images. This is demonstrated via a direct performance\ncomparison utilizing a publicly available set of real world test images\nand code base.\n\n1 Introduction\n\nColor correction is an important preprocessing step for robust color-based computer vision\nalgorithms. Because the illuminants in the world have varying colors, the measured color\nof an object will change under different light sources. We propose an algorithm for color\nconstancy which, given an image, will automatically estimate the color of the illuminant\n(assumed constant over the image), allowing the image to be color corrected.\n\nThis color constancy problem is ill-posed, because object color and illuminant color are\nnot uniquely separable. Historically, algorithms for color constancy have fallen into two\ngroups. 
The first group imposes constraints on the scene and/or the illuminant in order to remove the ambiguities. The second group uses a statistical model to quantify the probability of each illuminant and then makes an estimate from these probabilities. The statistical approach is attractive, since it is more general and more automatic: hard constraints are a special case of statistical models, and they can be learned from data instead of being specified in advance. But as shown by [3, 1], currently the best performance on real images is achieved by gamut mapping, a constraint-based algorithm. And, in the words of some leading researchers, even gamut mapping is not "good enough" for object recognition [8].

In this paper, we show that it is possible to outperform gamut mapping with a statistical approach, by using appropriate probability models with the appropriate statistical framework. We use the principled Bayesian color constancy framework of [4], but combine it with rich, nonparametric image models, such as those used by Color by Correlation [1]. The result is a Bayesian algorithm that works well in practice and addresses many of the issues with Color by Correlation, the leading statistical algorithm [1].

At the same time, we suggest that statistical methods still have much to learn from constraint-based methods. Even though our algorithm outperforms gamut mapping on average, there are cases in which gamut mapping provides better estimates, and, in fact, the errors of the two methods are surprisingly uncorrelated. This is an interesting result, because it suggests that gamut mapping exploits image properties which are different from what is learned by our algorithm, and probably other statistical algorithms. 
If this is true, and if our statistical model could be extended in a way that captures these additional properties, better algorithms should be possible in the future.

2 The imaging model

Our approach is to model the observed image pixels with a probabilistic generative model, decomposing them as the product of unknown surface reflectances with an unknown illuminant. Using Bayes' rule, we obtain a posterior for the illuminant, and from this we extract the estimate with minimum risk, e.g., the minimum expected chromaticity error.

Let y be an image pixel with three color channels: (y_r, y_g, y_b). The pixel is assumed to be the result of light reflecting off of a surface under the Lambertian reflectance model. Denote the power of the light in each channel by ℓ = (ℓ_r, ℓ_g, ℓ_b), with each channel ranging from zero to infinity. For each channel, a surface can reflect none of the light, all of the light, or somewhere in between. Denote this reflectance by x = (x_r, x_g, x_b), with each channel ranging from zero to one. The model for the pixel is the well-known diagonal lighting model:

    y_r = ℓ_r x_r    y_g = ℓ_g x_g    y_b = ℓ_b x_b    (1)

To simplify the equations below, we write this in matrix form as

    L = diag(ℓ)    (2)
    y = L x    (3)

This specifies the conditional distribution p(y | ℓ, x). In reality, there are sensor noise and other factors which affect the observed color, but we will consider these to be negligible.

Next we make the common assumption that the light and the surface have been chosen independently, so that p(ℓ, x) = p(ℓ) p(x). 
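As a concrete illustration, the diagonal model (1)-(3) and its inversion can be sketched in a few lines of Python. The function names and the channel values are ours, chosen only for the example:

```python
def render(ell, x):
    # Diagonal lighting model of Eq. (1): y_c = ell_c * x_c per channel,
    # i.e. y = L x with L = diag(ell).
    return [lc * xc for lc, xc in zip(ell, x)]

def color_correct(ell, y):
    # Inverting the model recovers the reflectance: x = L^{-1} y.
    return [yc / lc for yc, lc in zip(y, ell)]

ell = [0.9, 0.6, 0.3]   # hypothetical illuminant power per channel
x = [0.5, 0.8, 0.4]     # surface reflectance, each channel in [0, 1]
y = render(ell, x)      # observed pixel
assert all(abs(a - b) < 1e-12 for a, b in zip(color_correct(ell, y), x))
```

Rendering and correction are exact inverses here because the model ignores sensor noise, as stated above.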
The prior distribution for the illuminant, p(ℓ), will be uniform over a constraint set, described later in section 5.3.

The most difficult step is to construct a model for the surface reflectances in an image containing many pixels:

    Y = (y(1), ..., y(n))    (4)
    X = (x(1), ..., x(n))    (5)

We need a distribution p(X) for all n reflectances. One approach is to assume that the reflectances are independent and Gaussian, as in [4], which gives reasonable results but can be improved upon. Our approach is to quantize the reflectance vectors into K bins, and consider the reflectances to be exchangeable, a weaker assumption than independence. Exchangeability implies that the probability only depends on the number of reflectances in each bin. Thus if we denote the reflectance histogram by (n_1, ..., n_K), where ∑_k n_k = n, then

    p(x(1), ..., x(n)) ∝ f(n_1, ..., n_K)    (6)

where f is a function to be specified. Independence is a special case of exchangeability. If m_k is the probability of a surface having a reflectance value in bin k, so that ∑_k m_k = 1, then independence says

    f(n_1, ..., n_K) = ∏_k m_k^{n_k}    (7)

As an alternative to this, we have experimented with the Dirichlet-multinomial model, which employs a parameter s > 0 to control the amount of correlation. Under this model,

    f(n_1, ..., n_K) = [Γ(s) / Γ(n + s)] ∏_k [Γ(n_k + s m_k) / Γ(s m_k)]    (8)

For large s, correlation is weak and the model reduces to (7). For small s, correlation is strong and the model expects a few reflectances to be repeated many times, which is what we see in real images. 
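The two histogram models above can be compared numerically. The sketch below (function names are ours, not from the paper) evaluates log f under the independent model (7) and the Dirichlet-multinomial model (8) using the standard library's log-gamma function:

```python
import math

def log_f_indep(counts, m):
    # Eq. (7): independent reflectances, f = prod_k m_k^{n_k}.
    return sum(nk * math.log(mk) for nk, mk in zip(counts, m))

def log_f_dirmult(counts, m, s):
    # Eq. (8): Dirichlet-multinomial with correlation parameter s > 0.
    n = sum(counts)
    out = math.lgamma(s) - math.lgamma(n + s)
    for nk, mk in zip(counts, m):
        out += math.lgamma(nk + s * mk) - math.lgamma(s * mk)
    return out

m = [0.25, 0.25, 0.5]   # hypothetical bin probabilities, K = 3
counts = [1, 2, 3]      # a reflectance histogram with n = 6
# For large s the Dirichlet-multinomial reduces to the independent model (7):
assert abs(log_f_dirmult(counts, m, 1e6) - log_f_indep(counts, m)) < 1e-3
# For small s, a repeated reflectance is far more probable than under independence:
assert log_f_dirmult([6, 0, 0], m, 0.1) > log_f_indep([6, 0, 0], m)
```

The second assertion illustrates the correlation effect described in the text: under small s the model rewards images in which a few reflectances recur many times.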
When s is very small, the expression (8) can be reduced to a simple form:

    f(n_1, ..., n_K) ≈ [1 / (s Γ(n))] ∏_k (s m_k Γ(n_k))^{clip(n_k)}    (9)

    clip(n_k) = { 0 if n_k = 0;  1 if n_k > 0 }    (10)

This resembles a multinomial distribution on clipped counts. Unfortunately, this distribution strongly prefers that the image contains a small number of different reflectances, which biases the light source estimate. Empirically we have achieved our best results using a "normalized count" modification of the model which removes this bias:

    f(n_1, ..., n_K) = ∏_k m_k^{ν_k}    (11)

    ν_k = n clip(n_k) / ∑_k clip(n_k)    (12)

The modified counts ν_k sum to n just like the original counts n_k, but are distributed equally over all reflectances present in the image.

3 The color constancy algorithm

The algorithm for estimating the illuminant has two parts: (1) discretize the set of all illuminants on a fine grid and compute their likelihood and (2) pick the illuminant which minimizes the risk.

The likelihood of the observed image data Y for a given illuminant ℓ is

    p(Y | ℓ) = ∫_X [ ∏_i p(y(i) | ℓ, x(i)) ] p(X) dX    (13)
             = |L^{-1}|^n p(X = L^{-1} Y)    (14)

The quantity L^{-1} Y can be understood as the color-corrected image. The determinant term, 1/(ℓ_r ℓ_g ℓ_b)^n, makes this a valid distribution over Y and has the effect of introducing a preference for dimmer illuminants independently of the prior on reflectances. 
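Putting (11), (12), and (14) together, a toy version of the likelihood computation might look as follows. The 4-bin-per-channel grid, the fallback probability for unseen bins, and the helper names are illustrative choices of ours, not the paper's implementation (which uses a 32 x 32 x 32 reflectance grid):

```python
import math
from collections import Counter

def log_likelihood(ell, pixels, m, K=4):
    # Sketch of Eq. (14) with the normalized-count model of Eq. (11)-(12):
    # log p(Y|ell) = -n * log(ell_r * ell_g * ell_b) + sum_k nu_k * log(m_k).
    # `m` maps a quantized reflectance bin to its prior probability.
    n = len(pixels)
    bins = Counter()
    for y in pixels:
        # color-correct: x = L^{-1} y, then quantize each channel into K bins
        x = tuple(min(int(yc / lc * K), K - 1) for yc, lc in zip(y, ell))
        bins[x] += 1
    present = len(bins)
    out = -n * sum(math.log(lc) for lc in ell)   # the |L^{-1}|^n term of (14)
    for b in bins:
        # nu_k = n / (number of occupied bins) for every occupied bin, Eq. (12)
        out += (n / present) * math.log(m.get(b, 1e-9))
    return out

# With a flat reflectance prior, only the determinant term matters, so the
# dimmest valid illuminant wins, consistent with the preference noted above:
pixels = [(0.4, 0.3, 0.2), (0.2, 0.1, 0.05)]
m = {(i, j, k): 1 / 64 for i in range(4) for j in range(4) for k in range(4)}
assert log_likelihood((0.4, 0.3, 0.2), pixels, m) > log_likelihood((0.8, 0.6, 0.4), pixels, m)
```

Both candidate illuminants satisfy the validity bounds (each channel at least the per-channel maximum of the pixels), so the comparison isolates the dimness preference.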
Also implicit in this likelihood are the bounds on x, which require reflectances to be in the range of zero to one, and thus we restrict our search to illuminants that satisfy:

    ℓ_r ≥ max_i y_r(i)    ℓ_g ≥ max_i y_g(i)    ℓ_b ≥ max_i y_b(i)    (15)

The posterior probability for ℓ then follows:

    p(ℓ | Y) ∝ p(Y | ℓ) p(ℓ)    (16)
             ∝ |L^{-1}|^n p(X = L^{-1} Y) p(ℓ)    (17)

The next step is to find the estimate of ℓ with minimum risk. An answer that the illuminant is ℓ*, when it is really ℓ, incurs some cost, denoted R(ℓ* | ℓ). Let this function be quadratic in some transformation g of the illuminant vector ℓ:

    R(ℓ* | ℓ) = ||g(ℓ*) - g(ℓ)||^2    (18)

This occurs, for example, when the cost function is squared error in chromaticity. Then the minimum-risk estimate satisfies

    g(ℓ*) = ∫ g(ℓ) p(ℓ | Y) dℓ    (19)

The right-hand side, the posterior mean of g, and the normalizing constant of the posterior can be computed in a single loop over the grid of illuminants.

4 Relation to other algorithms

In this section we describe related color constancy algorithms using the framework of the imaging model introduced in section 2. This is helpful because it allows us to compare all of these algorithms in a single framework and understand the assumptions made by each.

Independent, Gaussian reflectances  The previous work most similar to our own is by [10] and [4]; however, these methods are not tested on real images. They use a similar imaging model and maximum-likelihood and minimum-risk estimation, respectively. The difference is that they use a Gaussian prior for the reflectance vectors, and assume the reflectances for different pixels are independent. 
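The single-loop posterior-mean computation described at the end of section 3 can be sketched as follows. The tiny grid, the unnormalized log-posterior, and the chromaticity transform g are toy stand-ins of ours:

```python
import math

def posterior_mean_g(grid, log_post, g):
    # One pass over the illuminant grid accumulates both the normalizing
    # constant and the posterior mean of g, implementing Eq. (19).
    mx = max(log_post(ell) for ell in grid)   # subtract max for stability
    z = 0.0
    mean = None
    for ell in grid:
        w = math.exp(log_post(ell) - mx)
        gv = g(ell)
        if mean is None:
            mean = [0.0] * len(gv)
        z += w
        for i, v in enumerate(gv):
            mean[i] += w * v
    return [v / z for v in mean]

# Toy posterior concentrated on one grid point: the estimate matches g there.
grid = [(1.0, 1.0, 1.0), (2.0, 1.0, 0.5)]
log_post = lambda ell: 0.0 if ell == (2.0, 1.0, 0.5) else -50.0
chrom = lambda ell: [ell[0] / sum(ell), ell[1] / sum(ell)]  # g = chromaticity
est = posterior_mean_g(grid, log_post, chrom)
assert abs(est[0] - 2.0 / 3.5) < 1e-6
```

Note that the estimate lives in g-space; when g is chromaticity, overall brightness is deliberately left undetermined, which suits the rg error measure used later.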
The Gaussian assumption leads to a simple likelihood formula whose maximum can be found by gradient methods. However, as mentioned by [4], this is a constraining assumption, and more appropriate priors would be preferable.

Scale by max  The scale by max algorithm (as tested e.g. in [3]) estimates the illuminant by the simple formula

    ℓ_r = max_i y_r(i)    ℓ_g = max_i y_g(i)    ℓ_b = max_i y_b(i)    (20)

which is the dimmest illuminant in the valid set (15). In the Bayesian algorithm, this solution can be achieved by letting the reflectances be independent and uniform over the range 0 to 1. Then p(X) is constant and the maximum-likelihood illuminant is (20). This connection was also noticed by [4].

Gray-world  The gray-world algorithm [5] chooses the illuminant such that the average value in each channel of the corrected image is a constant, e.g. 0.5. This is equivalent to the Bayesian algorithm with a particular reflectance prior. Let the reflectances be independent for each pixel and each channel, with distribution p(x_c) ∝ exp(-2 x_c) in each channel c. The log-likelihood for ℓ_c is then

    log p(Y_c | ℓ_c) = -n log ℓ_c - 2 ∑_i y_c(i)/ℓ_c + const.    (21)

whose maximum is (as desired)

    ℓ_c = (2/n) ∑_i y_c(i)    (22)

Figure 1: Plots of slices of the three-dimensional color surface reflectance distribution along a single dimension. Row one plots green versus blue with (0,0) at the upper left of each subplot and slices in red whose magnitude increases from left to right. Row two plots red versus blue with slices in green. Row three plots red versus green with slices in blue.

Color by Correlation  Color by Correlation [6, 1] also uses a likelihood approach, but with a different imaging model that is not based on reflectance. 
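The two closed-form estimators above, (20) and (22), are easy to state in code; the pixel values below are toy inputs of ours:

```python
def scale_by_max(pixels):
    # Eq. (20): the per-channel maximum is the dimmest valid illuminant.
    return tuple(max(y[c] for y in pixels) for c in range(3))

def gray_world(pixels):
    # Eq. (22): ell_c = (2/n) * sum_i y_c(i), so the corrected image
    # averages 0.5 in each channel.
    n = len(pixels)
    return tuple(2.0 / n * sum(y[c] for y in pixels) for c in range(3))

pixels = [(0.4, 0.2, 0.1), (0.2, 0.6, 0.3)]
assert scale_by_max(pixels) == (0.4, 0.6, 0.3)
ell = gray_world(pixels)
# corrected channel means come out to exactly 0.5, as the prior predicts
for c in range(3):
    mean_c = sum(y[c] / ell[c] for y in pixels) / len(pixels)
    assert abs(mean_c - 0.5) < 1e-12
```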
Instead, observed pixels are quantized into color bins, and the frequency of each bin is counted for each illuminant in a finite set of illuminants. (Note that this is different from quantizing reflectances, as done in our approach.) Let m_k(ℓ) be the frequency of color bin k for illuminant ℓ, and let n_1, ..., n_K be the color histogram of the image; then the likelihood of ℓ is computed as

    p(Y | ℓ) = ∏_k m_k(ℓ)^{clip(n_k)}    (23)

While theoretically this is very general, there are practical limitations. First there are training issues. One must learn the color frequencies for every possible illuminant. Since collecting real-world data whose illuminant is known is difficult, m_k(ℓ) is typically trained synthetically with random surfaces, which may not represent the statistics of natural scenes. The second issue is that colors and illuminants live in an unbounded 3D space [1], unlike reflectances, which are bounded. In order to store a color distribution for each illuminant, brightness variation needs to be artificially bounded. The third issue is storage. To reduce the storage of the m_k(ℓ)'s, Barnard et al. [1] store the color distribution only for illuminants of a fixed brightness. However, as they describe, this introduces a bias in the estimation, which they refer to as the "discretization problem" and try to solve by penalizing bright illuminants. The other part of the bias is due to using clipped counts in the likelihood. As explained in section 2, a multinomial likelihood with clipped counts is a special case of the Dirichlet-multinomial, and prefers images with a small number of different colors. 
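Equation (23) can be sketched directly; the small histograms below are toy inputs of ours, chosen to exhibit the clipped-count bias just described:

```python
import math

def log_cbc_likelihood(color_hist, m_ell):
    # Eq. (23): p(Y|ell) = prod_k m_k(ell)^{clip(n_k)}. Each observed color
    # bin contributes once, no matter how many pixels fall in it.
    out = 0.0
    for k, nk in enumerate(color_hist):
        if nk > 0:
            out += math.log(m_ell[k])
    return out

# Under a flat color distribution, an image with fewer distinct colors scores
# strictly higher, which is the bias discussed in the text:
m_ell = [0.25, 0.25, 0.25, 0.25]
few_colors = [4, 0, 0, 0]    # 4 pixels, 1 distinct color
many_colors = [1, 1, 1, 1]   # 4 pixels, 4 distinct colors
assert log_cbc_likelihood(few_colors, m_ell) > log_cbc_likelihood(many_colors, m_ell)
```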
This bias can be removed using a different likelihood function, such as (11).

5 Parameter estimation

5.1 Reflectance Distribution

To implement the Bayesian algorithm, we need to learn the real-world frequencies m_k of quantized reflectance vectors. The direct approach to this would require a set of images with ground truth information regarding the associated illumination parameters or, alternately, a set of images captured under a canonical illuminant and camera.

Unfortunately, it is quite difficult to collect a large number of images under controlled conditions. To avoid this issue, we use bootstrapping, as described in [9], to approximate the ground truth. The estimates from some "base" color constancy algorithm are used as a proxy for the ground truth. This might seem problematic in that it would limit any algorithm based on these estimates to perform only as well as the base algorithm. However, this need not be the case if the errors made by the base algorithm are relatively unbiased.

We used approximately 2300 randomly selected JPEG images from news sites on the web for bootstrapping, consisting mostly of outdoor scenes, indoor news conferences, and sporting event scenes. The scale by max algorithm was used as our "base" algorithm. Figure 1 is a plot of the probability distribution collected, where lighter regions represent higher probability values. The distribution is highly structured and varies with the magnitude of the channel response. This structure is important because it allows our algorithm to disambiguate between potential solutions to the ill-posed illumination estimation problem.

5.2 Pre-processing and quantization

To increase robustness, pre-processing is performed on the image, similar to that performed in [3]. The first pre-processing step scales down the image to reduce noise and speed up the algorithm. 
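The bootstrapping procedure of section 5.1 can be sketched as follows, assuming scale by max as the base algorithm; the helper name and the tiny example image are ours:

```python
from collections import Counter

def bootstrap_reflectance_prior(images, n_bins=32):
    # Sketch of the bootstrap: correct each image with the scale-by-max
    # estimate (the "base" algorithm), quantize the resulting reflectances
    # into n_bins per channel, and accumulate the bin frequencies m_k.
    counts = Counter()
    total = 0
    for pixels in images:
        ell = tuple(max(y[c] for y in pixels) for c in range(3))  # base estimate
        for y in pixels:
            x = tuple(min(int(y[c] / ell[c] * n_bins), n_bins - 1)
                      for c in range(3))
            counts[x] += 1
            total += 1
    return {b: c / total for b, c in counts.items()}  # frequencies m_k

prior = bootstrap_reflectance_prior([[(0.5, 0.5, 0.5), (0.25, 0.5, 0.5)]])
assert abs(sum(prior.values()) - 1.0) < 1e-12  # a valid distribution over bins
```

Because scale by max sets each channel of ℓ to the per-channel maximum, every corrected reflectance lands in [0, 1], so the quantization is always in range.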
A new image is formed in which each pixel is the mean of an m by m block of the original image. The second pre-processing step removes dark pixels from the computation, which, because of noise and quantization effects, do not contain reliable color information. Pixels whose channel sum y_r + y_g + y_b is less than a given threshold are excluded from the computation.

In addition to the reflectance prior, the parameters of our algorithm are: the number of reflectance histogram bins, the scale down factor, and the dark pixel threshold value. To set these parameter values, the algorithm was run over a large grid of parameter variations and performance on the tuning set was computed. The tuning set was a subset of the "model" data set described in [7] and disjoint from the test set. A total of 20 images were used: 10 objects imaged under 2 illuminants. (The "ball2" object was removed so that there was no overlap between the tuning and test sets.) For the purpose of speed, only images captured with the Philips Ultralume and the Macbeth Judge II fluorescent illuminants were included. The best set of parameters was found to be: 32 x 32 x 32 reflectance bins, scale down by m = 3, and omit pixels with a channel sum less than 8/(3 x 255).

5.3 Illuminant prior

To facilitate a direct comparison, we adopt the two illuminant priors from [3]. Each is uniform over a subset of illuminants. The first prior, full set, discretizes the illuminants uniformly in polar coordinates. 
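The two pre-processing steps just described can be sketched as below; the block size and threshold default to the tuned values above, while the function name and example image are ours:

```python
def preprocess(image, m=3, dark_threshold=8 / (3 * 255)):
    # Average each m-by-m block of the image, then drop any resulting pixel
    # whose channel sum falls below the dark-pixel threshold.
    h, w = len(image), len(image[0])
    out = []
    for i in range(0, h - h % m, m):
        for j in range(0, w - w % m, m):
            block = [image[i + a][j + b] for a in range(m) for b in range(m)]
            mean = tuple(sum(p[c] for p in block) / (m * m) for c in range(3))
            if sum(mean) >= dark_threshold:
                out.append(mean)
    return out

bright = [[(0.3, 0.2, 0.1)] * 3 for _ in range(3)]   # one 3x3 block
assert len(preprocess(bright)) == 1
dark = [[(0.001, 0.001, 0.001)] * 3 for _ in range(3)]
assert preprocess(dark) == []                        # below the threshold
```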
The second prior, hull set, is a subset of full set restricted to be within the convex hull of the test set illuminants and other real-world illuminants. Overall brightness, ℓ_r + ℓ_g + ℓ_b, is discretized in the range of 0 to 6 in 0.01 steps.

6 Experiments

6.1 Evaluation Specifics

To test the algorithms we use the publicly available real-world image data set [2] used by Barnard, Martin, Coath and Funt in a comprehensive evaluation of color constancy algorithms in [3]. The data set consists of images of 30 scenes captured under 11 light sources, for a total of 321 images (after the authors removed images which had collection problems), with ground truth illuminant information provided in the form of an RGB value.

As in the "rg error" measure of [3], illuminant error is measured in chromaticity space:

    ℓ_1 = ℓ_r/(ℓ_r + ℓ_g + ℓ_b)    ℓ_2 = ℓ_g/(ℓ_r + ℓ_g + ℓ_b)    (24)
    R(ℓ* | ℓ) = (ℓ*_1 - ℓ_1)^2 + (ℓ*_2 - ℓ_2)^2    (25)

The Bayesian algorithm is adapted to minimize this risk by computing the posterior mean in chromaticity space. The performance of an algorithm on the test set is reported as the square root of the average R(ℓ* | ℓ) across all images, referred to as the RMS error.

Table 1: The average error of several color constancy algorithms on the test set. 
The value in parentheses is 1.64 times the standard error of the average, so that if two error intervals do not overlap the difference is significant at the 95% level.

    Algorithm                               RMS Error for Full Set   RMS Error for Hull Set
    Scale by Max                            0.0584 (+/- 0.0034)      0.0584 (+/- 0.0034)
    Gamut Mapping without Segmentation      0.0524 (+/- 0.0029)      0.0461 (+/- 0.0025)
    Gamut Mapping with Segmentation         0.0426 (+/- 0.0023)      0.0393 (+/- 0.0021)
    Bayes with Bootstrap Set Model          0.0442 (+/- 0.0025)      0.0351 (+/- 0.0020)
    Bayes with Tuning Set Model             0.0344 (+/- 0.0017)      0.0317 (+/- 0.0017)

Figure 2: A graphical rendition of table 1. The standard errors are scaled by 1.64, so that if two error bars do not overlap the difference is significant at the 95% level.

6.2 Results

The results1 are summarized in Table 1 and Figure 2. We compare two versions of our Bayesian method to the gamut mapping and scale by max algorithms. The appropriate preprocessing for each algorithm was applied to the images to achieve the best possible performance. (Note that we do not include results for color by correlation since the gamut mapping results were found to be significantly better in [3].) In all configurations, our algorithm exhibits the lowest RMS error except in a single case where it is not statistically different from that of gamut mapping. The differences for the hull set are especially large. The hull set is clearly a useful constraint that improves the performance of all of the algorithms evaluated.

The two versions of our Bayesian algorithm differ only in the data set used to build the reflectance prior. 
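The error measure of section 6.1 and the reported interval can be computed as below. The function names are ours, and the interpretation of the interval as 1.64 standard errors of the mean squared error is our reading of the table caption:

```python
import math

def rg_error(est, true):
    # Eq. (24)-(25): squared distance in (r, g) chromaticity space.
    def chrom(ell):
        s = sum(ell)
        return (ell[0] / s, ell[1] / s)
    (r1, g1), (r2, g2) = chrom(est), chrom(true)
    return (r1 - r2) ** 2 + (g1 - g2) ** 2

def rms_and_interval(errors):
    # RMS error over a set of per-image rg errors, with 1.64 standard
    # errors of the mean squared error as the reported interval.
    n = len(errors)
    mean = sum(errors) / n
    var = sum((e - mean) ** 2 for e in errors) / (n - 1)
    return math.sqrt(mean), 1.64 * math.sqrt(var / n)

# Chromaticity discards brightness, so a uniformly scaled illuminant has zero error:
assert rg_error((2.0, 2.0, 2.0), (1.0, 1.0, 1.0)) == 0.0
```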
The tuning set, while composed of images separate from the test set, is very similar and has known illuminants, and accordingly gives the best results. Yet the performance when trained on a very different set of images, the uncalibrated bootstrap set of section 5.1, is not that different, particularly when the illuminant search is constrained.

The gamut mapping algorithm (called CRULE and ECRULE in [3]) is also presented in two versions: with and without segmenting the images as a preprocessing step, as described in [3]. These results were computed using software provided by Barnard and used to generate the results in [3]. In the evaluation of color constancy algorithms in [3], gamut mapping was found on average to outperform all other algorithms when evaluated on real-world images.

It is interesting to note that the gamut mapping algorithm is sensitive to segmentation. Since fundamentally it should not be sensitive to the number of pixels of a particular color in the image, we must assume that this is because the segmentation is implementing some form of noise filtering. The Bayesian algorithm currently does not use segmentation.

Scale by max is also included as a reference point and still performs quite well given its simplicity, often beating much more complex constancy algorithms [8, 3]. Its performance is the same for both illuminant sets since it does not involve a search over illuminants.

1Result images can be found at http://www.cs.cmu.edu/~chuck/nips-2003/

Surprisingly, when the error of the Bayesian method is compared with that of the gamut mapping method on individual test images, the correlation coefficient is -0.04. Thus the images which confuse the Bayesian method are quite different from the images which confuse gamut mapping. This suggests that an algorithm which could jointly model the image properties exploited by both algorithms might give dramatic improvements. 
As an example of the potential improvement, the RMS error of an ideal algorithm whose error is the minimum of Bayes and gamut on each image in the test set is only 0.019.

7 Conclusions and Future Work

We have demonstrated empirically that Bayesian color constancy with the appropriate non-Gaussian models can outperform gamut mapping on a standard test set. This is true regardless of whether a calibrated or uncalibrated training set is used, or whether the full set or a restricted set of illuminants is searched. This should give new hope to the pursuit of statistical methods as a unifying framework for color constancy.

The results also suggest ways to improve the Bayesian algorithm. The particular image model we have used, the normalized count model, is only one of many that could be tried. This is simply an image modeling problem which can be attacked using standard statistical methods. A particularly promising direction is to pursue models which can enforce constraints like that in the gamut mapping algorithm, since the images where Bayes has the largest errors appear to be relatively easy for gamut mapping.

Acknowledgments

We would like to thank Kobus Barnard for making his test images and code publicly available. We would also like to thank Martial Hebert for his valuable insight and advice and Daniel Huber and Kevin Watkins for their help in revising this document. This work was sponsored in part by a fellowship from the Eastman Kodak company.

References

[1] K. Barnard, L. Martin, and B. Funt, "Colour by correlation in a three dimensional colour space," Proceedings of the 6th European Conference on Computer Vision, pp. 275-289, 2000.

[2] K. Barnard, L. Martin, B. Funt, and A. Coath, "A data set for colour research," Color Research and Application, vol. 27, no. 3, pp. 147-151, 2002, http://www.cs.sfu.ca/~colour/data/colour_constancy_test_images/

[3] K.
Barnard, L. Martin, A. Coath, and B. Funt, "A comparison of color constancy algorithms; Part Two: Experiments with Image Data," IEEE Transactions on Image Processing, vol. 11, no. 9, pp. 985-996, 2002.

[4] D. H. Brainard and W. T. Freeman, "Bayesian color constancy," Journal of the Optical Society of America A, vol. 14, no. 7, pp. 1393-1411, 1997.

[5] G. Buchsbaum, "A spatial processor model for object colour perception," Journal of the Franklin Institute, vol. 10, pp. 1-26, 1980.

[6] G. D. Finlayson, S. D. Hordley, and P. M. Hubel, "Colour by correlation: a simple, unifying approach to colour constancy," Proceedings of the Seventh IEEE International Conference on Computer Vision, vol. 2, pp. 835-842, 1999.

[7] B. Funt, V. Cardei, and K. Barnard, "Learning color constancy," Proceedings of the Imaging Science and Technology / Society for Information Display Fourth Color Imaging Conference, pp. 58-60, 1996.

[8] B. Funt, K. Barnard, and L. Martin, "Is colour constancy good enough?," Proceedings of the Fifth European Conference on Computer Vision, pp. 445-459, 1998.

[9] B. Funt and V. Cardei, "Bootstrapping color constancy," Proceedings of SPIE: Electronic Imaging IV, 3644, 1999.

[10] H. J. Trussell and M. J. Vrhel, "Estimation of illumination for color correction," Proc. ICASSP, pp. 2513-2516, 1991.
", "award": [], "sourceid": 2426, "authors": [{"given_name": "Charles", "family_name": "Rosenberg", "institution": null}, {"given_name": "Alok", "family_name": "Ladsariya", "institution": null}, {"given_name": "Tom", "family_name": "Minka", "institution": null}]}