This tutorial on color management, camera profiles, and working spaces has been put together in the hope that it will be commented on, corrected, and possibly even incorporated into the digikam handbook. The existing information in the handbook is wrong on quite a few counts regarding color management, and therefore confusing to a new user. Rather than complain about the problems, I thought I would try to offer part of a solution.
"Color Management" is a complicated subject. Fortunately, as digital photographers we only need to know enough to make a few good choices along the image editing "pipeline" from camera to final output. Hopefully our color-managed image editor and all the other software we use in our "digital darkroom" will then do all the "heavy lifting" behind the scenes.

In this tutorial I will attempt to describe as simply and accurately as possible two important color management choices:

(1) What "camera profile" should I use during the raw conversion process?

(2) What "working space" should I use while I am editing my image?

My goal in this tutorial is to put these two very important color management choices in the overall context of an understanding of what color management actually does. Two other color management choices mentioned only in passing (but stay tuned for the next tutorial!) are: what monitor/display profile should I use? And what do I do about choosing an "output" profile if I want to print or email my image or perhaps publish it to the web?

Every camera is different:

Digital cameras have an array of millions of little light sensors inside, making up either a CCD or a CMOS chip (the difference between "CCD" and "CMOS" is very interesting but beyond the scope of this tutorial; if you are curious, google is your friend). These light-sensing "pixels" are color-blind. So to allow pixels to record color information, each pixel is capped by a transparent red, green, or blue lens, usually alternating in what is called a "Bayer" array. The whole point of "interpolation" using "demosaicing algorithms" such as dcraw's default "AHD" is to "guess" what color light actually fell on any given pixel by "interpolating" information gathered from that single pixel plus its "neighboring" pixels (see http://en.wikipedia.org/wiki/Demosaicing).
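The neighbor-averaging idea can be sketched in a few lines of Python - a toy bilinear interpolation of a single red-filtered site in an invented RGGB mosaic, NOT dcraw's real AHD algorithm, which is far more sophisticated about edges:

```python
# Toy bilinear demosaicing of one pixel in an RGGB Bayer mosaic.
# Each cell holds the single value recorded by that color-blind sensor;
# which color filter sits on top is given by the pattern, not the data.
mosaic = [
    [120,  60, 118,  62],   # R G R G
    [ 40, 200,  42, 198],   # G B G B
    [122,  58, 116,  64],   # R G R G
    [ 38, 202,  44, 196],   # G B G B
]

def demosaic_pixel(m, row, col):
    """Estimate (R, G, B) at a red-filtered site (even row, even col)
    by averaging the nearest green and blue neighbors."""
    r = m[row][col]                                        # measured directly
    g = (m[row][col - 1] + m[row][col + 1]
         + m[row - 1][col] + m[row + 1][col]) / 4          # 4 green neighbors
    b = (m[row - 1][col - 1] + m[row - 1][col + 1]
         + m[row + 1][col - 1] + m[row + 1][col + 1]) / 4  # 4 blue diagonals
    return r, g, b

print(demosaic_pixel(mosaic, 2, 2))   # -> (116, 52.0, 199.0)
```

The real algorithms differ in how cleverly they weight those neighbors (to avoid color fringes along edges), but the "guess the missing two channels from the neighborhood" idea is the same.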
After interpolation, the raw converter software outputs a file (usually a 16-bit tiff) containing a trio of interpolated R,G,B values for each pixel in the image.

The good news regarding today's digital cameras is that the sensors, all those little "pixels" on the ccd or cmos chip inside the camera, are capable of capturing virtually ALL of the visible spectrum. The bad news is that this trio of R,G,B numbers for each pixel in an image, as produced by the raw converter from the raw image that the camera stores on the camera card, is essentially meaningless until "interpreted" by a camera profile that is specific to the particular (make and model of) camera. Why? Because "pixel response to light" is the result of lots of camera-specific factors, including: the nature of the sensor array itself, the precise coloring/transmissive qualities of the lens caps, and the particular "analog-to-digital conversion" and post-conversion processing that happens inside the camera to produce the raw image that gets stored on the card. (As an aside, the "analog-to-digital conversion" inside the camera is necessary because the light-sensing "pixels" are analog in nature - they collect a charge proportionate to the amount of light that reaches them; the accumulated charge on each pixel needs to be turned into a discrete, digital quantity by the camera's "analog-to-digital converter".)

The "Universal Translator": your camera profile, the Profile Connection Space, and lcms

So the question for each RGB trio of values in the (let us assume) 16-bit tiff produced by (let us assume) dcraw becomes, "What does a particular trio of RGB values for the pixels making up images produced by this particular (make and model) camera really mean in terms of some 'absolute standard' referencing some 'ideal observer'?" This "absolute standard" referencing an "ideal observer" is more commonly called a "Profile Connection Space".
A "camera profile" is needed to accurately "characterize" or "describe" the response of a given camera's pixels to light entering that camera, so that the RGB values in the output file produced by the raw converter can be "translated" first into an absolute Profile Connection Space (PCS) and then from the PCS to your chosen working space.

As a very important aside, for most of the open source world (including digikam), the software used to "translate" from the camera profile to the PCS and from the PCS to your chosen "working space" and eventually to your chosen "output space" (for printing or perhaps monitor display) is based on lcms (the "little color management engine" - see http://littlecms.com). For what it's worth, my own testing has shown that lcms does more accurate conversions than Adobe's proprietary color conversion engine. Further, for almost all raw conversion programs, including commercial closed source software such as Adobe Photoshop, the raw conversion is typically based on the "decoding" of the proprietary raw file done by dcraw. David Coffin, author of dcraw, is the hero of raw conversion - without him we'd all be stuck using the usually "windows/mac only" proprietary software that comes with our digital cameras. For what it's worth, my own testing has shown that dcraw's interpolation algorithms (not to be confused with the aforementioned "decoding" of the proprietary raw file), if properly used, produce results equal or superior to those of commercial, closed source software. We in the world of linux and open source software are NOT second-class citizens when it comes to digital imaging. Far from it.

There are two commonly used Profile Connection Spaces - CIELAB and CIEXYZ (see http://en.wikipedia.org/wiki/Color_management, section on "Color translation", then look up CIELAB and CIEXYZ on wikipedia).
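For the simplest, matrix-based kind of ICC profile, the camera-to-PCS and PCS-to-working-space translations really are just 3x3 matrix multiplications. Here is a sketch: the camera matrix below is invented purely for illustration (a real one comes from profiling YOUR camera), while the XYZ-to-linear-sRGB matrix is the standard published one; real ICC profiles may instead use lookup tables and tone curves:

```python
# Sketch of the "double translation": camera RGB -> PCS (CIEXYZ) ->
# working space RGB, done with plain 3x3 matrices as a simple
# matrix-based ICC profile would.

def mat_vec(m, v):
    """Multiply a 3x3 matrix by a 3-vector."""
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

# HYPOTHETICAL camera profile matrix: camera RGB -> CIEXYZ.
CAM_TO_XYZ = [
    [0.60, 0.25, 0.10],
    [0.30, 0.65, 0.05],
    [0.05, 0.10, 0.80],
]

# Standard CIEXYZ (D65) -> linear sRGB matrix.
XYZ_TO_SRGB = [
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
]

camera_rgb = [0.40, 0.50, 0.20]          # meaningless without the profile
xyz = mat_vec(CAM_TO_XYZ, camera_rgb)    # step 1: into the PCS
srgb = mat_vec(XYZ_TO_SRGB, xyz)         # step 2: PCS -> working space
print([round(c, 3) for c in srgb])       # a trio that now has sRGB meaning
```

The same camera_rgb trio run through a different camera matrix would land on a different XYZ triple - which is precisely why the numbers mean nothing until the right profile interprets them.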
Lcms uses the camera profile to "translate" the RGB values from the interpolated raw file, that is, the tiff produced by dcraw, into the appropriate Profile Connection Space (usually CIEXYZ - why "CIEXYZ"? I haven't taken the time to learn). A "profile connection space" is not itself a "working space". Rather, a PCS is an absolute reference space used only for translating from one color space to another - think of a PCS as a "Universal Translator" for all the color profiles that an image might encounter in the course of its "journey" from camera raw file to final output:

(1) Lcms uses the "camera profile", also called an "input" profile, to "translate" the interpolated dcraw-produced R,G,B numbers, which only have "meaning" relative to your (make and model of) camera, to a second set of R,G,B numbers that only have meaning in the Profile Connection Space.

(2) Lcms "translates" the Profile Connection Space R,G,B numbers to the corresponding numbers in your chosen working space so you can edit your image. And again, these "working space" numbers ONLY have meaning relative to a given working space. The same "red", visually speaking, is represented by different trios of RGB numbers in different working spaces; and if you "assign" the wrong profile the image will look wrong, slightly wrong or very wrong depending on the differences between the two profiles.

(3) While you are editing your image in your chosen working space using your favorite image editing program (which hopefully is digikam!), lcms should translate all the working space RGB numbers back to the PCS, and then over to the correct RGB numbers that enable your monitor (your "display device") to give you the most accurate possible "display" representation of your image as it is being edited.
This "translation for display" is done "on the fly" and you should never even notice it happening, unless it doesn't happen correctly - then the displayed image will look wrong, perhaps a little wrong, perhaps really, really, really wrong. Stay tuned for the next tutorial (assuming anyone finds this tutorial useful) for details on color-managing your monitor display.

(4) When you are satisfied that your edited image is ready to share with the world, lcms "translates" the "working space" RGB numbers back into the PCS space and out again to a printer color space using a printer profile characterizing YOUR printer/paper combination (if you plan on printing the image) or to sRGB (if you plan on displaying the image on the web or emailing it to friends or perhaps creating a slide-show to play on monitors other than your own).

To back up a little bit and look at the first color profile an image encounters, that is, the camera profile (see (1) immediately above) - dcraw can in fact apply your camera profile for you (dcraw uses lcms internally). But (i) generating the tiff composed of the interpolated RGB values derived from the camera raw file, and (ii) applying the camera profile to the interpolated file, are two very distinct and totally separable steps (separable in theory and in practice for dcraw; in theory only for most raw converters). The dcraw command line output options "-o 0 [Raw color (unique to each camera)] -4 [16-bit linear] -T [tiff]" tell dcraw to output the RGB numbers from the raw interpolation into a tiff WITHOUT applying a camera ("input") profile (the words in brackets explain the options but should not be entered at the command line). Then, if you truly enjoy working from the command line, you can use the lcms utility "tifficc" to apply your camera profile yourself. The advantage of doing so is that you can tell lcms to use "high" quality conversion (dcraw seems to use the lcms default "medium").
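Spelled out as actual commands, the two separable steps look something like the following sketch. The raw filename and the profile filenames are made-up placeholders; check the dcraw and tifficc documentation for the exact flags your versions accept:

```shell
# Step 1: interpolate the raw file WITHOUT applying any camera profile.
#   -o 0 : output in raw (camera) color, unique to each camera
#   -4   : 16-bit linear output
#   -T   : write a tiff instead of a ppm
dcraw -o 0 -4 -T photo.nef          # produces photo.tiff

# Step 2: apply the camera ("input") profile yourself with lcms's tifficc,
# converting through the PCS into your chosen working space.
#   -i : input profile (your camera profile)
#   -o : output profile (your chosen working space)
tifficc -i mycamera.icc -o myworkingspace.icc photo.tiff photo-workingspace.tiff
```

After step 1 the tiff's numbers still only "mean" something relative to your camera; only after step 2 do they live in a working space that an editor can sensibly use.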
The disadvantage, of course, is that applying your camera profile from the command line adds one extra step to your raw work flow.

Where to find camera profiles:

So where do we get these elusive and oh-so-necessary camera-specific profiles that we need to "translate" our interpolated raw files to a working color space? The UFRAW website section on color management (http://ufraw.sourceforge.net/Colors.html) has a bit of information on where to find ready-made camera profiles. It's an unfortunate fact of digital imaging that the camera profiles supplied by Canon, Nikon, and the like don't work as well with raw converters other than each camera manufacturer's own proprietary raw converter. Which is why Bibble and Phase One (and Adobe, but ACR "hides" the Adobe-made profiles inside the program code), for example, have to make their own profiles for all the cameras that they support - keep this "proprietary propensity" of your camera manufacturer in mind next time you buy a digital camera. But back to finding a camera profile for YOUR camera - the "real" answer (assuming you don't find a ready-made profile that makes you happy) is to make your own camera profile or have one made for you. There are quite a few commercial services that provide profiling services (for a fee, of course). Or you can use "LProf" or "Argyll" to profile your camera yourself. I haven't yet walked down that road, so I can't speak about how easy or difficult the process of profiling a camera might be. But I would imagine, knowing how very meticulous the people behind Argyll, LProf, and lcms are about color management, that making your own camera profile is very "do-able" and very likely the results will be better than any proprietary profile. After all, Canon (and also Bibble and Phase One for that matter) didn't profile MY camera - they just profiled a camera LIKE mine.
Working Spaces:

So now your raw file has been interpolated by dcraw and you've obtained a camera profile and used the lcms utility "tifficc" to "apply" your camera profile to the tiff produced by dcraw (or you've asked dcraw to apply it for you). What does all this mean? The "real" answer involves a lot of math and color science that goes way over my head and likely yours. The short, practical answer is that neither the camera profile space nor the Profile Connection Space is an appropriate space for image editing. Your next step is to choose a "working space" for image editing. And then you (or rather the lcms "color management engine" that your open source digital imaging software uses) actually perform a "double translation". First lcms uses the camera profile to translate the RGB values of each pixel in the "dcraw-output-image-without-camera-profile-applied" into the aforementioned "Profile Connection Space". Then it translates the RGB values of each pixel from the PCS to your chosen working space.

Confusions and confusing terminology:

Before talking more about "working spaces", some confusions and confusing terminology need to be cleared up:

First, sRGB is both a "working" color space and an "output" color space for images intended for the web and for monitor display (if you have a spiffy new monitor with a gamut larger than the gamut covered by sRGB, obviously you might want to reconsider what output profile to use to best take advantage of your wonderful and hopefully calibrated and profiled monitor, but please convert your image to sRGB before sending it on to your friends!). sRGB is also the color space that a lot of home and mass-production commercial printers "expect" image files to be in when sent to the printer. It is also the color space that most programs "assume" if an image does not have an embedded color profile telling the program what color space should be used to interpret ("translate") the RGB numbers.
So if you choose not to use color management, your color management "choices" are simple - set everything to sRGB.

Second, all jpegs (or tiffs, if you have an older Minolta Dimage camera) coming straight out of a camera (even if produced by point-and-shoot cameras that don't allow you to save a raw file) start life inside the camera as a raw file produced by the camera's A-to-D converter. The processor inside the camera interpolates the raw file, assigns a camera profile, translates the resulting RGB numbers to a working space (usually sRGB but sometimes you can choose AdobeRGB, depending on the camera), does the jpeg compression, and stores the jpeg file on your camera card. So jpegs (or tiffs) from your camera NEVER need to be assigned a camera or "input" profile which is then "translated" to a working space via a PCS. Jpegs from a camera are already in a working space.

Third, in case anyone is unsure on this point, note that an "interpolated" raw file is no longer a raw file - it has been interpolated and then "output" as a tiff whose RGB values need to be "translated" to a working space, using the camera profile, the PCS, and lcms.

Fourth (strictly for future reference), to introduce a bit of commonly heard color-management terminology here - the camera profile and your printer's color profile are both "device-dependent", whereas the working space is "device-independent" - it can be used with any image, with any properly color-managed software, without regard for where the image originated.

Fifth, above I have used the words "translate" and "translation" as a descriptive metaphor for what lcms does when it "translates" RGB values from one color space to another via the PCS. The usual and correct terminology is "convert" and "conversion", which I will use below. The four "methods of conversion" from one color space to another are: "perceptual", "relative colorimetric", "absolute colorimetric", and "saturation".
Which method of conversion you should use for any given image processing step from raw file to final output image is beyond the scope of this tutorial. The standard advice is: when in doubt, use "perceptual".

Sixth (and again, strictly for future reference), "assign a profile" means changing the meaning of the RGB numbers in an image by embedding a new profile without changing the actual RGB numbers associated with each pixel in the image; "convert" means embedding a new profile, but also changing the RGB numbers at the same time so that the "meaning" of the RGB values - that is, the "real-world visible color" represented by the trio of RGB numbers associated with each pixel in an image - remains the same before and after the conversion from one space to another. You should be able to do multiple conversions of an image from one working space to another, and with a properly color-managed image editor, even though all the RGB numbers in the image will change with each conversion, the image on your screen should look the same (leaving aside the usually unnoticeable, small but inevitable changes from accumulated gamut mismatches and mathematical rounding errors). However, every time you "assign" a new working space profile rather than "convert to" a new working space, the appearance of the image will more or less drastically change (usually for the worse).

Finally (and this is a crucially important point), color management is NOT "only relevant if you shoot raw". Color management affects every stage of the image processing pipeline, whether you start with a raw file that you, yourself, "interpolate and translate" into a tiff, or whether you start with a jpeg or tiff produced by your camera.
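The "assign" versus "convert" distinction from the Sixth point above can be made concrete with a single channel value and two hypothetical gamma-only spaces (a deliberate simplification: real conversions go through the PCS and involve the primaries too, not just gamma):

```python
# "Convert" vs "assign" between two hypothetical gamma-only spaces.
# Encoded value 0.5 in a gamma 2.2 space represents this much linear light:
linear = 0.5 ** 2.2                      # about 0.2176

# CONVERT to a gamma 1.8 space: the number changes so that the meaning
# (the linear light, hence the on-screen color) stays the same.
converted = linear ** (1 / 1.8)          # about 0.4286 -- a different number
assert abs(converted ** 1.8 - linear) < 1e-12   # ...but the same light

# ASSIGN a gamma 1.8 profile instead: the number 0.5 is untouched, but it
# is now read as 0.5 ** 1.8 of linear light -- the image looks wrong.
assigned_light = 0.5 ** 1.8              # about 0.2872, not 0.2176
```

Same pixel, same starting number: converting changes the number to preserve the color, assigning keeps the number and changes the color.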
Copyrighted and "copyleft" working spaces:

I will take it as given that ALL the ordinarily encountered working spaces, such as:

(1) the several variants of sRGB (see http://www.color.org/v4spec.xalter),

(2) "BruceRGB" (http://www.brucelindbloom.com),

(3) the various "ECI" (European Color Initiative) working space profiles (see http://www.eci.org/doku.php?id=en:colourstandards:workingcolorspaces),

(4) AdobeRGB, Adobe WideGamutRGB, and Kodak/Adobe ProPhotoRGB (Kodak and Adobe ProPhoto are the same, just "branded" differently) and their non-branded, non-copyrighted counterparts (Oyranos includes a non-branded version of AdobeRGB; see http://www.behrmann.name/index.php?option=com_content&task=view&id=34&Itemid=68),

(5) and quite a few others that could be added to this list,

are all more or less "suitable" as working spaces. Which working space you "should" use depends only and solely on YOU, on YOUR requirements as the editor of YOUR digital images, with YOUR eventual output intentions (web, fine art print, etc). However, as a critical aside, if you are using Adobe (or other copyrighted) working space profiles, these profiles contain copyright information that shows up in your image's EXIF information. Lately I've been perusing the openicc mailing lists. Apparently lcms can be used to produce non-branded, "copyleft" working space profiles that are just the same as - actually indistinguishable from - the branded, copyrighted working space profiles. It would be a wonderful addition to digikam if a set of "copyleft" working space profiles, including non-branded, relabelled versions of ProPhotoRGB, AdobeRGB, and Adobe WideGamutRGB (perhaps in two "flavors" each: linear gamma and the usual gamma), could be bundled as part of the digikam package.

Which working space: gamma

Now, the next question is "which working space should I use?" Wikipedia says: "Working spaces, such as sRGB or Adobe RGB, are color spaces that facilitate good results while editing.
For instance, pixels with equal values of R,G,B should appear neutral. Using a large (gamut) working space will lead to posterization, while using a small working space will lead to clipping.[2] This trade-off is a consideration for the critical image editor" (http://en.wikipedia.org/wiki/Color_management#Working_spaces).

Well, that quote from wikipedia is about as clear as mud and I don't know if I will be able to explain it more clearly, but I will try. "[P]ixels with equal values of R,G,B should appear neutral" just means that for any given pixel in an image that has been converted to a suitable working space, if R=G=B you should see grey or black or white on your screen. I am not aware of a list of other technical requirements for a "suitable" working space, though undoubtedly someone has produced such a list. But most working space profiles are characterized by:

(1) the "RGB primaries", which dictate the range of colors - that is, the "gamut" - covered by a given profile;

(2) the "white point", usually D50 or D65, which dictates the total dynamic range of the working space, from 0,0,0 (total black) to the brightest possible white; and

(3) the "gamma".

The practical consequences that result from using different "RGB primaries", leading to larger or smaller working spaces, are discussed below. The practical consequences of different choices for the working space "white point" are beyond the scope of this tutorial. Here I will talk a little bit about the practical consequences of the working space "gamma" (for an excellent article and references, look up "gamma" on wikipedia).

The "gamma" of a color profile dictates what "power transform" needs to take place to properly convert from an image's embedded color profile (perhaps your working color space) to another color profile with a different gamma, such as (i) the "display" profile used to display the image on the screen, (ii) perhaps a new working space, or (iii) perhaps your printer's color space.
(As an aside, mathematically speaking, for a "power transform" you "normalize" the RGB numbers and raise the resulting numbers to an appropriate power depending on the respective gammas of the starting and ending color space, then renormalize the results to a new set of RGB numbers. Lcms does this for you when you ask lcms to convert from one color space to another; however, if ALL you are doing is a power transform, use imagemagick instead of lcms and just manipulate the RGB numbers directly - the results will be more accurate. Now aren't you glad that I've kept the mathematics of color management out of this tutorial?)

One practical consequence of the "gamma" of a working space is that the higher the gamma, the more "tones" are available for editing in the shadows, with consequently fewer tones available in the highlights. So theoretically, if you are working on a very dark-toned ("low key") image you might want a working space with a higher gamma. And if you are working on a "high key" image, say a picture taken in full noon sunlight of a wedding dress with snow as a backdrop, you might want to choose a working space with a lower gamma, so you have more available tonal gradations in the highlights. But in the real world of real image editing, almost everyone uses working spaces with either gamma 1.8 or 2.2. As an aside, recently I've heard that some people are trying to "standardize" on gamma 2.0.

As a very important aside, sRGB and "LStar-RGB" are not "gamma-based" working spaces. Rather, sRGB uses a "hybrid" gamma - see http://en.wikipedia.org/wiki/SRGB for details. And "LStar-RGB" uses a luminosity-based "tonal response curve" instead of a gamma value - see http://www.colormanagement.org/en/workingspaces.html for more information, and then google around for more in-depth information.

In addition to "gamma 1.8" and "gamma 2.2", the only other "gamma" for a working space that gets much mention or use is "gamma 1", also called "linear gamma".
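Both the normalize / raise-to-a-power / renormalize recipe from the aside above and the "higher gamma means more shadow tones" point can be sketched in a few lines, assuming an idealized pure power curve (real profile conversions, done through lcms and the PCS, involve more than this):

```python
def gamma_transform(value, gamma_in, gamma_out, maxval=255):
    """Re-encode an integer channel value from one gamma to another:
    normalize to 0..1, undo the old gamma (giving linear light),
    apply the new gamma, then renormalize back to 0..maxval."""
    linear = (value / maxval) ** gamma_in
    return round(linear ** (1 / gamma_out) * maxval)

# A mid-grey encoded for gamma 2.2, re-encoded for gamma 1.8:
print(gamma_transform(128, 2.2, 1.8))   # -> 110

# "Higher gamma = more shadow tones": count how many of the 256 levels in
# an 8-bit file land in the darkest 1% of linear light under each encoding.
def levels_in_deep_shadow(gamma, threshold=0.01):
    return sum(1 for v in range(256) if (v / 255) ** gamma <= threshold)

print(levels_in_deep_shadow(1.0))   # -> 3 levels (linear gamma)
print(levels_in_deep_shadow(2.2))   # -> 32 levels (gamma 2.2)
```

Three levels versus thirty-two for the same deep shadows: that, in numbers, is why editing 8-bit linear gamma files in the shadows is so painful.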
"Linear gamma" is used in HDR (high dynamic range) imaging and also if one wants to avoid introducing "gamma-induced errors" into one's "regular" low dynamic range editing. "Gamma-induced errors" is a topic outside the scope of this tutorial, but see "Gamma errors in picture scaling" (http://www.4p8.com/eric.brasseur/gamma.html); http://www.21stcenturyshoebox.com/essays/color_reproduction.html for gamma-induced color shifts; and of course Timo Autiokari's somewhat infamous website, http://www.aim-dtp.net/. Unfortunately, and despite their undeniable mathematical advantages, linear gamma working spaces have so few "tones" in the shadows that (in my opinion) they are impossible to use for editing if one is working in 8-bits, and still problematic at 16-bits (though I do use linear gamma working spaces myself for some parts of my image editing workflow). When the day comes when we are all doing our editing on 32-bit files produced by our HDR cameras on our personal supercomputers, I predict that we will all be using working spaces with gamma 1; Adobe Lightroom is already using a linear gamma working space "under the hood", and Lightzone has always used a linear gamma working space.

Which working space: "large gamut" or "small gamut"

One MAJOR consideration in choosing a working space is that some working spaces are "bigger" than others, meaning they cover more of the visible spectrum (and perhaps even include some "imaginary" colors - mathematical constructs that don't really exist). These bigger spaces offer the advantage of allowing you to keep all the colors captured by your camera and preserved by the lcms conversion from your camera profile to the really big "profile connection space". But "keeping all the possible colors" comes at a price. It seems that any given digital image (pictures of daffodils with saturated yellows being one common exception) likely only contains a small subset of all the possible visible colors that your camera is capable of capturing.
This small subset is easily contained in one of the smaller working spaces. Using a very large working space means that editing your image (applying curves, saturation, etc) can easily produce colors that your eventual output device (printer, monitor) simply can't display. So the "conversion" from your "working space" to your "output device space" (say your printer) will have to "remap" the "out of gamut" colors in your edited image, some of which might even be totally imaginary, to your printer color space with its much smaller gamut, leading to inaccurate colors at best and at worst to "banding" ("posterization" - gaps in what should be a smooth color transition, say, across an expanse of blue sky) and "clipping" (your carefully crafted muted transitions across delicate shades of red, for example, might get "remapped" to a solid block of dull red after conversion to your printer's color space). In other words, large gamut working spaces, improperly handled, can lead to lost information on output. Small gamut working spaces can clip information on input. Like Wikipedia says, it's a trade-off. I can offer some oft-repeated advice:

(1) For images intended for the web, use (one of the several variants of) sRGB.

(2) For the most accuracy in your image editing (that is, making the most of your "bits" with the least risk of banding or clipping when you convert your image from your working space to an output space), use the smallest working space that includes all the colors in the scene that you photographed, plus a little extra room for those new colors you intentionally produce as you edit.

(3) If you are working in 8-bits rather than 16-bits, choose a smaller space rather than a larger space.

(4) For archival purposes, convert your raw file to a 16-bit tiff with a large gamut working space to avoid losing color information. Then convert this "archival" tiff to your working space of choice (saving the converted "working" tiff under a new name, of course).
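That "solid block of dull red" failure mode can be sketched numerically: take a smooth ramp of saturated reds in a big working space and naively hard-clip it into a smaller, hypothetical output gamut (real color management offers smarter rendering intents than this, which is exactly why intents exist):

```python
# A smooth ramp of red intensities, some beyond what the (hypothetical)
# output device can reproduce: its gamut tops out at 1.0.
ramp = [0.90, 0.95, 1.00, 1.05, 1.10, 1.15]

# Naive gamut mapping: hard-clip every value into the device's range.
clipped = [min(max(r, 0.0), 1.0) for r in ramp]

print(clipped)   # -> [0.9, 0.95, 1.0, 1.0, 1.0, 1.0]
# The last four steps, once distinct shades, are now one flat block of
# red; a "perceptual" intent would compress the ramp smoothly instead.
```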
See http://www.21stcenturyshoebox.com/essays/scenereferredworkflow.html for more details on this archival, scene-referred workflow. The "whys" of these bits of advice regarding "which working space" are beyond the scope of this tutorial. See Bruce Lindbloom's excellent website (http://www.brucelindbloom.com/, Info, Information about RGB Working Spaces) for a visual comparison of the "gamut" (array of included colors) of the various working color spaces. See http://www.luminous-landscape.com/tutorials/prophoto-rgb.shtml and http://www.cambridgeincolour.com/tutorials/sRGB-AdobeRGB1998.htm for a "pro" and "con" presentation, respectively, of the merits of using "large gamut" working spaces. And while you are on the cambridgeincolour.com website, check out the tutorial on color management.

And this concludes my tutorial on color management, camera profiles, and working spaces. Once again, please feel free to comment, correct, incorporate into the digikam handbook, or ignore altogether. As I already said, I couldn't help but notice that the existing information in the digikam handbook is wrong on quite a few counts regarding color management (regarding which I will post separately). Rather than just complain about the problems, I thought I would try my hand at spelling out some theoretical background and practical consequences of color management choices regarding "camera profiles" and "working spaces".

Elle
Elle,
Damned, you write the book (:=))) I recommend you to put this long text on the kde digiKam wiki page: http://wiki.kde.org/tiki-index.php?page=digikam ... like this, other users can change/fix the content if necessary.

Best

Gilles Caulier

2008/6/1 elle stone <[hidden email]>:

> [quoted tutorial text snipped]
> > (2)Lcms "translates" the Profile Connection Space R,G,B numbers to the > corresponding numbers in your chosen working space so you can edit your > image. And again, these "working space" numbers ONLY have meaning relative > to a given working space. The same "red", visually speaking, is represented > by different "trios" of RGB numbers in different working spaces; and if you > "assign" the wrong profile the image will look wrong, slightly wrong or very > wrong depending on the differences between the two profiles. > > (3)While you are editing your image in your chosen working space using your > favorite image editing program (which hopefully is digikam!), lcms > should translate all the working space RGB numbers back to the PCS, and then > over to the correct RGB numbers that enable your monitor (your "display > device") to give you the most accurate possible "display" representation of > your image as it is being edited. This "translation for display" is done > "on the fly" and you should never even notice it happening, unless it > doesn't happen correctly - then the displayed image will look wrong, perhaps > a little wrong, perhaps really, really, really wrong. Stay tuned for the > next tutorial (assuming anyone finds this tutorial useful) for details on > color-managing your monitor display. > > (4)When you are satisfied that your edited image is ready to share with the > world, lcms "translates" the "working space" RGB numbers back into the PCS > space and out again to a printer color space using a printer profile > characterizing YOUR printer/paper combination (if you plan on printing the > image) or to sRGB (if you plan on displaying the image on the web or > emailing it to friends or perhaps creating a slide-show to play on monitors > other than your own). 
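The hub-and-spoke idea above - every conversion goes device space, into the PCS, out to the destination space - can be mimicked with a toy round-trip through CIEXYZ. The matrices below are the published linear-sRGB/D65 ones, standing in here for whatever profile matrices lcms would actually read from your ICC files (a real conversion also involves tone curves and possibly white-point adaptation):

```python
# Toy stand-in for the PCS "Universal Translator": device RGB -> CIEXYZ (the
# PCS) -> destination RGB. These are the standard linear-sRGB/D65 matrices;
# a real camera or working-space profile would supply its own.

SRGB_TO_XYZ = [
    (0.4124, 0.3576, 0.1805),
    (0.2126, 0.7152, 0.0722),
    (0.0193, 0.1192, 0.9505),
]
XYZ_TO_SRGB = [
    (3.2406, -1.5372, -0.4986),
    (-0.9689, 1.8758, 0.0415),
    (0.0557, -0.2040, 1.0570),
]

def apply(matrix, triple):
    """3x3 matrix times a color triple."""
    return tuple(sum(m * v for m, v in zip(row, triple)) for row in matrix)

rgb = (0.2, 0.5, 0.8)              # linear RGB in the "source" space
xyz = apply(SRGB_TO_XYZ, rgb)      # hop into the Profile Connection Space
back = apply(XYZ_TO_SRGB, xyz)     # hop out to the "destination" space
print([round(v, 3) for v in back]) # round-trips back to ~[0.2, 0.5, 0.8]
```

The point is that XYZ itself never appears in your file or on your screen; it is only the absolute reference that lets any two profiles talk to each other.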
> > To back up a little bit and look at the first color profile an image > encounters, that is, the camera profile (see (1) immediately above) - dcraw > can in fact apply your camera profile for you (dcraw uses lcms internally). > But (i)the generating of the tiff composed of the interpolated RGB values > derived from the camera raw file, and (ii)the application of the camera > profile to the interpolated file, are two very distinct and totally > separable (separable in theory and practice for dcraw; in theory only for > most raw converters) steps. The dcraw command line output options "-o 0 > [Raw color (unique to each camera)] -4 [16-bit linear] -T [tiff]" tell dcraw > to output the RGB numbers from the raw interpolation into a tiff WITHOUT > applying a camera ("input") profile (the words in brackets explain the > options but should not be entered at the command line). Then, if you truly > enjoy working from the command line, you can use the lcms utility "tifficc" > to apply your camera profile yourself. The advantage of doing so is that > you can tell lcms to use "high" quality conversion (dcraw seems to use the > lcms default "medium"). The disadvantage, of course, is that applying your > camera profile from the command line adds one extra step to your raw work > flow. > > > Where to find camera profiles: > > So where do we get these elusive and oh-so-necessary camera-specific > profiles that we need to "translate" our interpolated raw files to a working > color space? The UFRAW website section on color management > (http://ufraw.sourceforge.net/Colors.html) has a bit of information on where > to find ready-made camera profiles. It's an unfortunate fact of digital > imaging that the camera profiles supplied by Canon, Nikon, and the like > don't work as well with raw converters other than each camera manufacturer's > own proprietary raw converter. 
Which is why Bibble and Phase One (and Adobe, > but ACR "hides" the Adobe-made profiles inside the program code), for > example, have to make their own profiles for all the cameras that they > support - keep this "proprietary propensity" of your camera manufacturer in > mind next time you buy a digital camera. > But back to finding a camera profile for YOUR camera - the "real" answer > (assuming you don't find a ready-made profile that makes you happy) is to > make your own camera profile or have one made for you. There are quite a > few commercial services that provide profiling services (for a fee, of > course). Or you can use "LProf" or "Argyll" to profile your camera > yourself. I haven't yet walked down that road so I can't speak about how > easy or difficult the process of profiling a camera might be. But I would > imagine, knowing how very meticulous the people behind Argyll, LProf, and > lcms are about color management, that making your own camera profile is very > "do-able" and very likely the results will be better than any proprietary > profile. After all, Canon (and also Bibble and Phase One for that matter) > didn't profile MY camera - they just profiled a camera LIKE mine. > > > Working Spaces: > > So now your raw file has been interpolated by dcraw and you've obtained a > camera profile and used lcms "tifficc" to "apply" your camera profile to the > tiff produced by dcraw (or you've asked dcraw to apply it for you). What > does all this mean? The "real" answer involves a lot of math and color > science that goes way over my head and likely yours. The short, practical > answer is that neither the camera profile space nor the Profile Connection > Space is an appropriate space for image editing. Your next step is to > choose a "working space" for image editing. And then you (or rather the > lcms "color management engine" that your open source digital imaging > software uses) actually perform a "double translation". 
First lcms uses the > camera profile to translate the RGB values of each pixel in the > "dcraw-output-image-without-camera-profile-applied" into the aforementioned > "Profile Connection Space". Then it translates the RGB values of each pixel > from the PCS to your chosen working space. > > > Confusions and confusing terminology: > > Before talking more about "working spaces", some confusions and confusing > terminology need to be cleared up: > First, sRGB is both a "working" color space and an "output" color space for > images intended for the web and for monitor display (if you have a spiffy > new monitor with a gamut larger than the gamut covered by sRGB, obviously > you might want to reconsider what output profile to use to best take > advantage of your wonderful and hopefully calibrated and profiled monitor, > but please convert your image to sRGB before sending it on to your > friends!). sRGB is also the color space that a lot of home and > mass-production commercial printers "expect" image files to be in when sent > to the printer. It is also the color space that most programs "assume" if > an image does not have an embedded color profile telling the program what > color space should be used to interpret ("translate") the RGB numbers. So > if you choose to not use color-management, your color-management "choices" > are simple - set everything to sRGB. > Second, all jpegs (or tiffs, if you have an older Minolta Dimage camera) > coming straight out of a camera (even if produced by point-and-shoot > cameras that don't allow you to save a raw file) start life inside the > camera as a raw file produced by the camera's A to D converter. The > processor inside the camera interpolates the raw file, assigns a camera > profile, translates the resulting RGB numbers to a working space (usually > sRGB but sometimes you can choose AdobeRGB, depending on the camera), does > the jpeg compression, and stores the jpeg file on your camera card. 
So > jpegs (or tiffs) from your camera NEVER need to be assigned a camera or > "input" profile which is then "translated" to a working space via a PCS. > Jpegs from a camera are already in a working space. > Third, in case anyone is unsure on this point, note that an "interpolated" > raw file is no longer a raw file - it has been interpolated and then > "output" as a tiff whose RGB values need to be "translated" to a working > space, using the camera profile, the PCS, and lcms. > Fourth (strictly for future reference), to introduce a bit of commonly > heard color-management terminology here - the camera profile and your > printer's color profile are both "device dependent," whereas the working > space will be "device-independent" - it can be used with any image, with any > properly color-managed software, without regard for where the image > originated. > Fifth, above I have used the words "translate" and "translation" as a > descriptive metaphor for what lcms does when it "translates" RGB values from > one color space to another via the PCS. The usual and correct terminology > is "convert" and "conversion", which I will use below. The four "methods of > conversion" from one color space to another are: "perceptual", "relative > colorimetric", "absolute colorimetric", and "saturation". Which method of > conversion you should use for any given image processing step from raw file > to final output image is beyond the scope of this tutorial. The standard > advice is: when in doubt, use "perceptual." 
> Sixth (and again, strictly for future reference),"assign a profile" means > "change the meaning of the RGB numbers in an image by embedding a new > profile without changing the actual RGB numbers associated with each pixel > in the image"; "convert" means "embed a new profile, but also change the RGB > numbers at the same time so that the "meaning" of the RGB values - that is, > the "real-world visible color" represented by the trio of RGB numbers > associated with each pixel in an image - remains the same before and after > the conversion from one space to another". You should be able to do > multiple conversions of an image from one working space to another, and with > a properly color-managed image editor, even though all the RGB numbers in > the image will change with each conversion, the image on your screen should > look the same (leaving aside the usually unnoticeable small but inevitable > changes from accumulated gamut mismatches and mathematical rounding errors). > However, every time you "assign" a new working space profile rather than > "convert to" a new working space, the appearance of the image should more or > less drastically change (usually for the worse). > Finally, (and this is a crucially important point), color management is NOT > "only relevant if you shoot raw". Color management affects every stage of > the image processing pipeline, whether you start with a raw file that you, > yourself "interpolate and translate" into a tiff, or if you start with a > jpeg or tiff produced by your camera. 
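The "assign" versus "convert" distinction above can be sketched with gamma-only toy spaces. Real working spaces also differ in primaries and white point; this deliberately isolates just the gamma axis to show why converting changes the numbers but preserves the color, while assigning keeps the numbers but changes the color:

```python
# "Assign" vs "convert", with toy spaces that differ only in gamma.

def to_linear(v, gamma):
    """Decode a working-space value (0..1) to linear light."""
    return v ** gamma

def convert(v, gamma_src, gamma_dst):
    """Change the numbers so the decoded light stays the same."""
    return to_linear(v, gamma_src) ** (1.0 / gamma_dst)

v = 0.5                                    # a pixel value in a gamma-2.2 space
converted = convert(v, 2.2, 1.8)           # new number, same visible color
print(round(converted, 4))                 # ~0.4286: the RGB number changed
print(round(to_linear(v, 2.2), 4),         # light before the conversion...
      round(to_linear(converted, 1.8), 4)) # ...and after: identical

# "Assigning" gamma 1.8 to the same pixel leaves the number at 0.5 but
# decodes to different light, so the image looks wrong on screen:
print(round(to_linear(v, 1.8), 4))         # ~0.2872, not ~0.2176
```

This is why repeated conversions leave an image looking the same (modulo rounding), while a single wrong assignment visibly shifts every pixel.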
> > > Copyrighted and "copyleft" working spaces: > > I will take it as given that ALL the ordinarily encountered working spaces, > such as: > > (1)the several variants of sRGB (see http://www.color.org/v4spec.xalter) > (2)"BruceRGB" (http://www.brucelindbloom.com) > (3)the various "ECI" (European color initiative) working space profiles (see > "http://www.eci.org/doku.php?id=en:colourstandards:workingcolorspaces") > (4)AdobeRGB, Adobe WideGamutRGB, and Kodak/Adobe ProPhotoRGB (Kodak and > Adobe ProPhoto are the same, just "branded" differently) and their > non-branded, non-copyrighted counterparts (Oyranos includes a non-branded > version of AdobeRGB; see > "http://www.behrmann.name/index.php?option=com_content&task=view&id=34&Itemid=68") > (5) and quite a few others that could be added to this list > > are all more or less "suitable" as working spaces. Which working space you > "should" use depends only and solely on YOU, on YOUR requirements as the > editor of YOUR digital images with YOUR eventual output intentions (web, > fine art print, etc). > However, as a critical aside, if you are using Adobe (or other copyrighted) > working space profiles, these profiles contain copyright information that > shows up in your "image exif" information. Lately I've been perusing the > openicc mailing lists. Apparently lcms can be used to produce nonbranded, > "copyleft" working space profiles that are just the same as - actually > indistinguishable from - the branded, copyrighted working space profiles. > It would be a wonderful addition to digikam if a set of "copyleft" working > space profiles, including nonbranded, relabelled versions of ProPhotoRGB, > AdobeRGB, and Adobe WidegamutRGB (perhaps in two "flavors" each: linear > gamma and the usual gamma), could be bundled as part of the digikam package. > > > Which working space: gamma > > Now, the next question is "which working space should I use?" 
Wikipedia > says "Working spaces, such as sRGB or Adobe RGB, are color spaces that > facilitate good results while editing. For instance, pixels with equal > values of R,G,B should appear neutral. Using a large (gamut) working space > will lead to posterization, while using a small working space will lead to > clipping.[2] This trade-off is a consideration for the critical image > editor" (http://en.wikipedia.org/wiki/Color_management#Working_spaces). > Well, that quote from wikipedia is about as clear as mud and I don't know > if I will be able to explain it more clearly, but I will try. "[P]ixels > with equal values of R,G,B should appear neutral" just means that for any > given pixel in an image that has been converted to a suitable working space, > if R=G=B you should see grey or black or white on your screen. > I am not aware of a list of other technical requirements for a "suitable" > working space, though undoubtedly someone has produced such a list. But > most working space profiles are characterized by (1)"RGB primaries" which > dictate the range of colors, that is, the "gamut" covered by a given > profile; (2)"white point", usually D50 or D65, which dictates the total > dynamic range of the working space, from 0,0,0 (total black) to the > brightest possible white; and (3)"gamma." > The practical consequences that result from using different "RGB > primaries", leading to larger or smaller working spaces, are discussed > below. The practical consequences for different choices for the working > space "white point" are beyond the scope of this tutorial. Here I will talk > a little bit about the practical consequences of the working space "gamma" > (for an excellent article and references, look up "gamma" on wikipedia). 
> The "gamma" of a color profile dictates what "power transform" needs to > take place to properly convert from an image's embedded color profile > (perhaps your working color space) to another color profile with a different > gamma, such as (i)the "display" profile used to display the image on the > screen or (ii)perhaps to a new working space, or (iii)perhaps from your > working space to your printer's color space. (As an aside, mathematically > speaking, for a "power transform" you "normalize" the RGB numbers and raise > the resulting numbers to an appropriate power depending on the respective > gammas of the starting and ending color space, then renormalize the results > to a new set of RGB numbers. Lcms does this for you when you ask lcms to > convert from one color space to another; however, if ALL you are doing is a > power transform, use imagemagick instead of lcms and just manipulate the RGB > numbers directly - the results will be more accurate. Now aren't you glad > that I've kept the mathematics of color management out of this tutorial?) > One practical consequence of the "gamma" of a working space is that the > higher the gamma, the more "tones" are available for editing in the shadows, > with consequently fewer tones available in the highlights. So > theoretically, if you are working on a very dark-toned ("low key") image you > might want a working space with a higher gamma. And if you are working on a > "high key" image, say a picture taken in full noon sunlight of a wedding > dress with snow as a backdrop, you might want to choose a working space with > a lower gamma, so you have more available tonal gradations in the > highlights. But in the real world of real image editing, almost everyone > uses working spaces with either gamma 1.8 or 2.2. > As an aside, recently I've heard that some people are trying to > "standardize" on gamma 2.0. As a very important aside, sRGB and "LStar-RGB" > are not "gamma-based" working spaces. 
Rather, sRGB uses a "hybrid" gamma - > see "http://en.wikipedia.org/wiki/SRGB" for details. And "LStar-RGB" uses a > luminosity-based "tonal response curve" instead of a gamma value - see > "http://www.colormanagement.org/en/workingspaces.html" for more information, > and then google around for more in-depth information. > In addition to "gamma 1.8" and "gamma 2.2" the only other "gamma" for a > working space that gets much mention or use is "gamma 1", also called > "linear gamma". "Linear gamma" is used in HDR (high dynamic range) imaging > and also if one wants to avoid introducing "gamma-induced errors" into one's > "regular" low dynamic range editing. "Gamma-induced errors" is a topic > outside the scope of this tutorial, but see "Gamma errors in picture > scaling, http://www.4p8.com/eric.brasseur/gamma.html"; > "http://www.21stcenturyshoebox.com/essays/color_reproduction.html" for > gamma-induced color shifts; and of course Timo Autiokari's somewhat infamous > website, http://www.aim-dtp.net/. > Unfortunately and despite their undeniable mathematical advantages, linear > gamma working spaces have so few "tones" in the shadows that (in my opinion) > they are impossible to use for editing if one is working in 8-bits, and > still problematic at 16-bits (though I do use linear gamma working spaces > myself for some parts of my image editing workflow). When the day comes > when we are all doing our editing on 32-bit files produced by our HDR > cameras on our personal supercomputers, I predict that we will all be using > working spaces with gamma 1; Adobe Lightroom is already using a linear gamma > working space "under the hood" and Lightzone has always used a linear gamma > working space. 
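The "few tones in the shadows" claim can be checked directly: count how many of the 256 codes in an 8-bit channel land in the darkest tenth of linear light under each curve. The sRGB formula below is the standard hybrid one (a short linear toe spliced onto a 2.4 power law); the 10% cutoff is an arbitrary choice for illustration:

```python
# How an encoding allocates 8-bit "tones": count the codes (0..255) whose
# decoded value falls in the darkest tenth of linear light.

def decode_gamma(v, gamma):
    """Plain power-law decoding of an 8-bit code to linear light."""
    return (v / 255.0) ** gamma

def decode_srgb(v):
    """sRGB's hybrid curve: linear toe below 0.04045, else a 2.4 power law."""
    c = v / 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def shadow_codes(decode):
    """How many 8-bit codes decode to less than 10% linear light."""
    return sum(1 for v in range(256) if decode(v) < 0.10)

print(shadow_codes(lambda v: decode_gamma(v, 1.0)))  # 26 (linear gamma)
print(shadow_codes(lambda v: decode_gamma(v, 2.2)))  # 90
print(shadow_codes(decode_srgb))                     # 90 (close to gamma 2.2)
```

So at 8 bits a linear-gamma space devotes roughly a tenth of its codes to the shadows where gamma 2.2 devotes more than a third - which is exactly why linear spaces posterize dark tones unless you work at higher bit depths.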
> > > Which working space: "large gamut" or "small gamut" > > One MAJOR consideration in choosing a working space is that some working > spaces are "bigger" than others, meaning they cover more of the visible > spectrum (and perhaps even include some "imaginary" colors - mathematical > constructs that don't really exist). These bigger spaces offer the > advantage of allowing you to keep all the colors captured by your camera and > preserved by the lcms conversion from your camera profile to the really big > "profile connection space". > But "keeping all the possible colors" comes at a price. It seems that any > given digital image (pictures of daffodils with saturated yellows being one > common exception) likely only contains a small subset of all the possible > visible colors that your camera is capable of capturing. This small subset > is easily contained in one of the smaller working spaces. Using a very > large working space means that editing your image (applying curves, > saturation, etc) can easily produce colors that your eventual output device > (printer, monitor) simply can't display. So the "conversion" from your > "working space" to your "output device space" (say your printer) will have > to "remap" the "out of gamut" colors in your edited image, some of which > might even be totally imaginary, to your printer color space with its much > smaller gamut, leading to inaccurate colors at best and at worst to > "banding" ('posterization' - gaps in what should be a smooth color > transition, say, across an expanse of blue sky) and "clipping" (e.g. your > carefully crafted muted transitions across delicate shades of red might get > "remapped" to a solid block of dull red after conversion > to your printer's color space). > In other words, large gamut working spaces, improperly handled, can lead to > lost information on output. Small gamut working spaces can clip information > on input. Like Wikipedia says, it's a trade-off. 
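The clipping half of that trade-off can be sketched with a deliberately fake "wide versus small" space pair - the 2x scale factor below is invented purely for illustration, not a real profile pair - showing how two distinct saturated colors collapse to one after clipping:

```python
# Toy gamut clipping: two distinct colors in a "wide" space both land outside
# the "small" destination space and get clipped to the same value, so a
# smooth gradation flattens into one block of color (posterization).
# The 2x scale factor is an invented stand-in, not a real profile pair.

def wide_to_small(rgb):
    """Pretend the wide space spans twice the small space's range per channel."""
    return tuple(c * 2.0 for c in rgb)

def clip(rgb):
    """Force each channel into the destination space's 0..1 range."""
    return tuple(min(1.0, max(0.0, c)) for c in rgb)

a = (0.55, 0.10, 0.10)    # two distinguishable saturated reds, wide space
b = (0.65, 0.10, 0.10)
out_a = clip(wide_to_small(a))
out_b = clip(wide_to_small(b))
print(out_a, out_b)       # (1.0, 0.2, 0.2) twice: the gradation is gone
print(out_a == out_b)     # True
```

A real conversion with a "perceptual" rendering intent would try to compress the gradation rather than chop it off, but the information squeeze is the same.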
I can offer some > oft-repeated advice: > > (1)For images intended for the web, use (one of the) sRGB (variants - there > are several). > (2)For the most accuracy in your image editing (that is, making the most of > your "bits" with the least risk of banding or clipping when you convert your > image from your working space to an output space), use the smallest working > space that includes all the colors in the scene that you photographed, plus > a little extra room for those new colors you intentionally produce as you > edit. > (3)If you are working in 8-bits rather than 16-bits, choose a smaller space > rather than a larger space. > (4)For archival purposes, convert your raw file to a 16-bit tiff with a > large gamut working space to avoid losing color information. Then convert > this "archival" tiff to your working space of choice (saving the converted > "working" tiff under a new name, of course). See > "http://www.21stcenturyshoebox.com/essays/scenereferredworkflow.html" for > more details. > > The "whys" of these bits of advice regarding "which working space" are > beyond the scope of this tutorial. See Bruce Lindbloom's excellent website > (http://www.brucelindbloom.com/, Info, Information about RGB Working Spaces) > for a visual comparison of the "gamut" (array of included colors) of the > various working color spaces. See > "http://www.luminous-landscape.com/tutorials/prophoto-rgb.shtml" and > "http://www.cambridgeincolour.com/tutorials/sRGB-AdobeRGB1998.htm" for a > "pro" and "con" presentation, respectively, of the merits of using "large > gamut" working spaces. And while you are on the "cambridgeincolour.com" > website, check out the tutorial on color management. > > > And this concludes my tutorial on color management, camera profiles, and > working spaces. Once again, please feel free to comment, correct, > incorporate into the digikam handbook, or ignore altogether. 
As I already > said, I couldn't help but notice that the existing information in the > digikam handbook is wrong on quite a few counts regarding color management > (regarding which I will post separately). Rather than just complain about > the problems, I thought I would try my hand at spelling out some theoretical > background and practical consequences of color management choices regarding > "camera profiles" and "working spaces". > > Elle > > > -- > View this message in context: http://www.nabble.com/Tutorial%3A-Color-Management%2C-Camera-Profiles%2C---Working-Spaces-tp17587858p17587858.html > Sent from the digikam-users mailing list archive at Nabble.com. > > _______________________________________________ > Digikam-users mailing list > [hidden email] > https://mail.kde.org/mailman/listinfo/digikam-users
Great work Elle, useful to many people. Thanks a lot.
Sveinn í Felli elle stone wrote: > This tutorial on color management, camera profiles, and working spaces has > been put together in the hope that it will be commented on, corrected, and > possibly even incorporated into the digikam handbook. The existing > information in the handbook is wrong on quite a few counts regarding color > management, and therefore confusing to a new user. Rather than complain > about the problems, I thought I would try to offer part of a solution. > > "Color Management" is a complicated subject. Fortunately, as digital > photographers we only need to know enough to make a few good choices along > the image editing "pipeline" from camera to final output. Hopefully our > color-managed image editor and all the other software we use in our "digital > darkroom" will then do all the "heavy lifting" behind the scenes. > > In this tutorial I will attempt to describe as simply and accurately as > possible two important color management choices: > > (1)What "camera profile" should I use during the raw conversion process? > > (2)What "working space" should I use while I am editing my image? > > My goal in this tutorial is to put these two very important color management > choices in the overall context of an understanding of what color management > actually does. Two other color management choices mentioned only in passing > (but stay tuned for the next tutorial!) are what monitor/display profile > should I use? and what do I do about choosing an "output" profile if I want > to print or email my image or perhaps publish it to the web? > > > Every camera is different: > > Digital cameras have an array of millions of little light sensors inside, > making up either a CCD or a CMOS chip (the difference between "CCD" and > "CMOS" is very interesting but beyond the scope of this tutorial; if you are > curious, google is your friend). These light-sensing "pixels" are > color-blind. 
So to allow pixels to record color information, each pixel is > capped by a transparent red, green, or blue lens, usually alternating in > what is called a "Bayer" array. The whole point of "interpolation" using > "demosaicing algorithms" such as dcraw's default "AHD" is to "guess" what > color light actually fell on any given pixel by "interpolating" information > gathered from that single pixel plus its "neighboring" pixels (see > http://en.wikipedia.org/wiki/Demosaicing). After interpolation, the raw > converter software outputs a file (usually a 16-bit tiff) containing a trio > of interpolated R,G,B values for each pixel in the image. > The good news regarding today's digital cameras is that the sensors, all > those little "pixels" on the ccd or cmos chip inside the camera, are capable > of capturing virtually ALL the visible spectrum. The bad news is that the > this trio of R,G,B numbers for each pixel in an image, as produced by the > raw converter from the raw image that the camera stores on the camera card, > is essentially meaningless until "interpreted" by a camera profile that is > specific to the particular (make and model of) camera. Why? Because "pixel > response to light" is the result of lots of camera-specific factors > including: the nature of the sensor array itself, the precise > coloring/transmissive qualities of the lens caps, and the particular > "analog-to-digital conversion" and post-conversion processing that happens > inside the camera to produce the raw image that gets stored on the card. > (As an aside, the "analog-to-digital conversion" inside the camera is > necessary because the light-sensing "pixels" are analog in nature - they > collect a charge proportionate to the amount of light that reaches them; the > accumulated charge on each pixel needs to be turned into a discrete, digital > quantity by the camera's "analog to digital converter"). 
> > > The "Universal Translator": your camera profile, the Profile Connection > Space, and lcms > > So the question for each RGB trio of values in the (let us assume) 16-bit > tiff produced by (let us assume) dcraw becomes, "What does a particular trio > of RGB values for the pixels making up images produced by this particular > (make and model) camera really mean in terms of some "absolute standard" > referencing some "ideal observer"?" This "absolute standard" referencing an > "ideal observer" is more commonly called a "Profile Connection Space". A > "camera profile" is needed to accurately "characterize" or "describe" the > response of a given camera's pixels to light entering that camera, so that > the RGB values in the output file produced by the raw converter can be > "translated" first into an absolute Profile Connection Space (PCS) and then > from the PCS to your chosen working space. > As a very important aside, for most of the open source world (including > digikam), the software used to "translate" from the camera profile to the > PCS and from the PCS to your chosen "working space" and eventually to your > chosen "output space" (for printing or perhaps monitor display) is based on > lcms (the "little color management engine" - see http://littlecms.com). For > what it's worth, my own testing has shown that lcms does more accurate > conversions than Adobe's proprietary color conversion engine. Further, for > almost all raw conversion programs, including commercial closed source > software such as Adobe Photoshop, the raw conversion is typically based on > "decoding" of the proprietary raw file done by dcraw. David Coffin, author > of dcraw, is the hero of raw conversion - without him we'd all be stuck > using the usually "windows/mac only" proprietary software that comes with > our digital cameras. 
For what it's worth, my own testing has shown that dcraw's interpolation algorithms (not to be confused with the aforementioned "decoding" of the proprietary raw file), if properly used, produce results equal or superior to those of commercial, closed source software. We in the world of linux and open source software are NOT second-class citizens when it comes to digital imaging. Far from it.

There are two commonly used Profile Connection Spaces - CIELAB and CIEXYZ (see http://en.wikipedia.org/wiki/Color_management, section on "Color translation", then look up CIELAB and CIEXYZ on wikipedia). Lcms uses the camera profile to "translate" the RGB values from the interpolated raw file, that is, the tiff produced by dcraw, into the appropriate Profile Connection Space (usually CIEXYZ - why CIEXYZ? I haven't taken the time to learn).

A "profile connection space" is not itself a "working space". Rather, a PCS is an absolute reference space used only for translating from one color space to another - think of a PCS as a "Universal Translator" for all the color profiles that an image might encounter in the course of its "journey" from camera raw file to final output:

(1) Lcms uses the "camera profile", also called an "input" profile, to "translate" the interpolated dcraw-produced R,G,B numbers, which only have "meaning" relative to your (make and model of) camera, to a second set of R,G,B numbers that only have meaning in the Profile Connection Space.

(2) Lcms "translates" the Profile Connection Space R,G,B numbers to the corresponding numbers in your chosen working space so you can edit your image. And again, these "working space" numbers ONLY have meaning relative to a given working space.
The same "red", visually speaking, is represented > by different "trios" of RGB numbers in different working spaces; and if you > "assign" the wrong profile the image will look wrong, slightly wrong or very > wrong depending on the differences between the two profiles. > > (3)While you are editing your image in your chosen working space using your > favorite image editing program (which hopefully is digikam!), then lcms > should translate all the working space RGB numbers back to the PCS, and then > over to the correct RGB numbers that enable your monitor (your "display > device") to give you the most accurate possible "display" representation of > your image as it is being edited. This "translation for display" is done > "on the fly" and you should never even notice it happening, unless it > doesn't happen correctly - then the displayed image will look wrong, perhaps > a little wrong, perhaps really, really, really wrong. Stay tuned for the > next tutorial (assuming anyone finds this tutorial useful) for details on > color-managing your monitor display. > > (4)When you are satisfied that your edited image is ready to share with the > world, lcms "translates" the "working space" RGB numbers back into the PCS > space and out again to a printer color space using a printer profile > characterizing YOUR printer/paper combination (if you plan on printing the > image) or to sRGB (if you plan on displaying the image on the web or > emailing it to friends or perhaps creating a slide-show to play on monitors > other than your own. > > To back up a little bit and look at the first color profile an image > encounters, that is, the camera profile (see (1) immediately above) - dcraw > can in fact apply your camera profile for you (dcraw uses lcms internally). 
But (i) the generating of the tiff composed of the interpolated RGB values derived from the camera raw file, and (ii) the application of the camera profile to the interpolated file, are two very distinct and totally separable steps (separable in theory and practice for dcraw; in theory only for most raw converters). The dcraw command line output options "-o 0 [Raw color (unique to each camera)] -4 [16-bit linear] -T [tiff]" tell dcraw to output the RGB numbers from the raw interpolation into a tiff WITHOUT applying a camera ("input") profile (the words in brackets explain the options but should not be entered at the command line). Then, if you truly enjoy working from the command line, you can use the lcms utility "tifficc" to apply your camera profile yourself. The advantage of doing so is that you can tell lcms to use "high" quality conversion (dcraw seems to use the lcms default, "medium"). The disadvantage, of course, is that applying your camera profile from the command line adds one extra step to your raw workflow.

Where to find camera profiles:

So where do we get these elusive and oh-so-necessary camera-specific profiles that we need to "translate" our interpolated raw files to a working color space? The UFRAW website section on color management (http://ufraw.sourceforge.net/Colors.html) has a bit of information on where to find ready-made camera profiles. It's an unfortunate fact of digital imaging that the camera profiles supplied by Canon, Nikon, and the like don't work as well with raw converters other than each camera manufacturer's own proprietary raw converter. This is why Bibble and Phase One (and Adobe, though ACR "hides" the Adobe-made profiles inside the program code), for example, have to make their own profiles for all the cameras that they support - keep this "proprietary propensity" of your camera manufacturer in mind next time you buy a digital camera.
But back to finding a camera profile for YOUR camera - the "real" answer (assuming you don't find a ready-made profile that makes you happy) is to make your own camera profile or have one made for you. There are quite a few commercial services that provide profiling (for a fee, of course). Or you can use "LProf" or "Argyll" to profile your camera yourself. I haven't yet walked down that road, so I can't speak about how easy or difficult the process of profiling a camera might be. But I would imagine, knowing how very meticulous the people behind Argyll, LProf, and lcms are about color management, that making your own camera profile is very "do-able", and very likely the results will be better than any proprietary profile. After all, Canon (and also Bibble and Phase One, for that matter) didn't profile MY camera - they just profiled a camera LIKE mine.

Working Spaces:

So now your raw file has been interpolated by dcraw, and you've obtained a camera profile and used the lcms utility "tifficc" to "apply" your camera profile to the tiff produced by dcraw (or you've asked dcraw to apply it for you). What does all this mean? The "real" answer involves a lot of math and color science that goes way over my head and likely yours. The short, practical answer is that neither the camera profile space nor the Profile Connection Space is an appropriate space for image editing. Your next step is to choose a "working space" for image editing. And then you (or rather the lcms "color management engine" that your open source digital imaging software uses) actually perform a "double translation". First lcms uses the camera profile to translate the RGB values of each pixel in the "dcraw-output-image-without-camera-profile-applied" into the aforementioned Profile Connection Space. Then it translates the RGB values of each pixel from the PCS to your chosen working space.
Confusions and confusing terminology:

Before talking more about "working spaces", some confusions and confusing terminology need to be cleared up.

First, sRGB is both a "working" color space and an "output" color space for images intended for the web and for monitor display (if you have a spiffy new monitor with a gamut larger than the gamut covered by sRGB, obviously you might want to reconsider what output profile to use to best take advantage of your wonderful and hopefully calibrated and profiled monitor, but please convert your image to sRGB before sending it on to your friends!). sRGB is also the color space that a lot of home and mass-production commercial printers "expect" image files to be in when sent to the printer. It is also the color space that most programs "assume" if an image does not have an embedded color profile telling the program what color space should be used to interpret ("translate") the RGB numbers. So if you choose not to use color management, your color management "choices" are simple - set everything to sRGB.

Second, all jpegs (or tiffs, if you have an older Minolta Dimage camera) coming straight out of a camera (even if produced by point-and-shoot cameras that don't allow you to save a raw file) start life inside the camera as a raw file produced by the camera's analog-to-digital converter. The processor inside the camera interpolates the raw file, assigns a camera profile, translates the resulting RGB numbers to a working space (usually sRGB, but sometimes you can choose AdobeRGB, depending on the camera), does the jpeg compression, and stores the jpeg file on your camera card. So jpegs (or tiffs) from your camera NEVER need to be assigned a camera or "input" profile which is then "translated" to a working space via a PCS. Jpegs from a camera are already in a working space.
Third, in case anyone is unsure on this point, note that an "interpolated" raw file is no longer a raw file - it has been interpolated and then "output" as a tiff whose RGB values need to be "translated" to a working space, using the camera profile, the PCS, and lcms.

Fourth (strictly for future reference), to introduce a bit of commonly heard color-management terminology here - the camera profile and your printer's color profile are both "device-dependent", whereas the working space is "device-independent" - it can be used with any image, with any properly color-managed software, without regard for where the image originated.

Fifth, above I have used the words "translate" and "translation" as a descriptive metaphor for what lcms does when it "translates" RGB values from one color space to another via the PCS. The usual and correct terminology is "convert" and "conversion", which I will use below. The four "methods of conversion" from one color space to another are "perceptual", "relative colorimetric", "absolute colorimetric", and "saturation". Which method of conversion you should use for any given image processing step from raw file to final output image is beyond the scope of this tutorial. The standard advice is: when in doubt, use "perceptual".

Sixth (and again, strictly for future reference), "assign a profile" means "change the meaning of the RGB numbers in an image by embedding a new profile without changing the actual RGB numbers associated with each pixel in the image"; "convert" means "embed a new profile, but also change the RGB numbers at the same time so that the meaning of the RGB values - that is, the real-world visible color represented by the trio of RGB numbers associated with each pixel in an image - remains the same before and after the conversion from one space to another".
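The difference between "assign" and "convert" can be sketched numerically. In the toy example below (plain Python, written for this tutorial), two hypothetical working spaces differ only in gamma, 1.8 versus 2.2; real working spaces also differ in primaries and white point, but the principle is the same:

```python
# "Convert" changes the RGB numbers so the real-world color stays the same;
# "assign" keeps the numbers but changes what they mean.
# Toy model: two working spaces that differ only in a pure-power gamma.

def encode(linear, gamma):   # linear light -> working-space value
    return linear ** (1.0 / gamma)

def decode(value, gamma):    # working-space value -> linear light
    return value ** gamma

linear_light = 0.2                  # the "real-world" brightness to represent
v_18 = encode(linear_light, 1.8)    # its value in the gamma-1.8 space

# CONVERT to the gamma-2.2 space: the number changes...
v_22 = encode(decode(v_18, 1.8), 2.2)
# ...but both values decode to the same linear light (the "meaning" survives).

# ASSIGN the gamma-2.2 profile to the gamma-1.8 value: the number stays,
# but it is now interpreted with the wrong gamma - the image "looks wrong".
misread = decode(v_18, 2.2)
```

Here decode(v_18, 1.8) and decode(v_22, 2.2) both recover 0.2, while misread comes out noticeably darker than 0.2 - exactly the "slightly wrong or very wrong" appearance that assigning the wrong profile produces.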
You should be able to do multiple conversions of an image from one working space to another, and with a properly color-managed image editor, even though all the RGB numbers in the image will change with each conversion, the image on your screen should look the same (leaving aside the usually unnoticeable, small but inevitable changes from accumulated gamut mismatches and mathematical rounding errors). However, every time you "assign" a new working space profile rather than "convert to" a new working space, the appearance of the image should more or less drastically change (usually for the worse).

Finally (and this is a crucially important point), color management is NOT "only relevant if you shoot raw". Color management affects every stage of the image processing pipeline, whether you start with a raw file that you yourself "interpolate and translate" into a tiff, or with a jpeg or tiff produced by your camera.

Copyrighted and "copyleft" working spaces:

I will take it as given that ALL the ordinarily encountered working spaces, such as:

(1) the several variants of sRGB (see http://www.color.org/v4spec.xalter)

(2) "BruceRGB" (http://www.brucelindbloom.com)

(3) the various "ECI" (European Color Initiative) working space profiles (see http://www.eci.org/doku.php?id=en:colourstandards:workingcolorspaces)

(4) AdobeRGB, Adobe WideGamutRGB, and Kodak/Adobe ProPhotoRGB (Kodak and Adobe ProPhoto are the same, just "branded" differently) and their non-branded, non-copyrighted counterparts (Oyranos includes a non-branded version of AdobeRGB; see http://www.behrmann.name/index.php?option=com_content&task=view&id=34&Itemid=68)

(5) and quite a few others that could be added to this list

are all more or less "suitable" as working spaces.
Which working space you "should" use depends only and solely on YOU, on YOUR requirements as the editor of YOUR digital images, with YOUR eventual output intentions (web, fine art print, etc).

However, as a critical aside, if you are using Adobe (or other copyrighted) working space profiles, these profiles contain copyright information that shows up in your image exif information. Lately I've been perusing the openicc mailing lists. Apparently lcms can be used to produce non-branded, "copyleft" working space profiles that are just the same as - actually indistinguishable from - the branded, copyrighted working space profiles. It would be a wonderful addition to digikam if a set of "copyleft" working space profiles, including non-branded, relabelled versions of ProPhotoRGB, AdobeRGB, and Adobe WideGamutRGB (perhaps in two "flavors" each: linear gamma and the usual gamma), could be bundled as part of the digikam package.

Which working space: gamma

Now, the next question is, "Which working space should I use?" Wikipedia says: "Working spaces, such as sRGB or Adobe RGB, are color spaces that facilitate good results while editing. For instance, pixels with equal values of R,G,B should appear neutral. Using a large (gamut) working space will lead to posterization, while using a small working space will lead to clipping. This trade-off is a consideration for the critical image editor" (http://en.wikipedia.org/wiki/Color_management#Working_spaces).

Well, that quote from wikipedia is about as clear as mud, and I don't know if I will be able to explain it more clearly, but I will try. "[P]ixels with equal values of R,G,B should appear neutral" just means that for any given pixel in an image that has been converted to a suitable working space, if R=G=B you should see grey or black or white on your screen.
I am not aware of a list of other technical requirements for a "suitable" working space, though undoubtedly someone has produced such a list. But most working space profiles are characterized by (1) "RGB primaries", which dictate the range of colors, that is, the "gamut" covered by a given profile; (2) the "white point", usually D50 or D65, which dictates the total dynamic range of the working space, from 0,0,0 (total black) to the brightest possible white; and (3) "gamma".

The practical consequences that result from using different "RGB primaries", leading to larger or smaller working spaces, are discussed below. The practical consequences of different choices for the working space "white point" are beyond the scope of this tutorial. Here I will talk a little bit about the practical consequences of the working space "gamma" (for an excellent article and references, look up "gamma" on wikipedia).

The "gamma" of a color profile dictates what "power transform" needs to take place to properly convert from an image's embedded color profile (perhaps your working color space) to another color profile with a different gamma, such as (i) the "display" profile used to display the image on the screen, (ii) perhaps a new working space, or (iii) perhaps from your working space to your printer's color space. (As an aside, mathematically speaking, for a "power transform" you "normalize" the RGB numbers and raise the resulting numbers to an appropriate power, depending on the respective gammas of the starting and ending color spaces, then renormalize the results to a new set of RGB numbers. Lcms does this for you when you ask lcms to convert from one color space to another; however, if ALL you are doing is a power transform, use imagemagick instead of lcms and just manipulate the RGB numbers directly - the results will be more accurate. Now aren't you glad that I've kept the mathematics of color management out of this tutorial?)
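The power transform in the aside above can be written out concretely. Here is a sketch (plain Python, written for this tutorial; it assumes simple pure-power gammas, which sRGB, for one, does not actually use):

```python
# Re-encode one 8-bit channel value from one pure-power-gamma space to
# another: normalize, undo the source gamma to get linear light, apply
# the destination gamma, renormalize.

def gamma_convert(value, gamma_src, gamma_dst, max_value=255):
    normalized = value / max_value
    linear = normalized ** gamma_src             # undo source gamma
    re_encoded = linear ** (1.0 / gamma_dst)     # apply destination gamma
    return round(re_encoded * max_value)
```

For example, a middle value of 128 in a gamma-1.8 space becomes 145 in a gamma-2.2 space, while 0 and 255 map to themselves. (On real images this should be done at 16 bits; rounding 8-bit values costs precision.)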
One practical consequence of the "gamma" of a working space is that the higher the gamma, the more "tones" are available for editing in the shadows, with consequently fewer tones available in the highlights. So theoretically, if you are working on a very dark-toned ("low key") image, you might want a working space with a higher gamma. And if you are working on a "high key" image, say a picture taken in full noon sunlight of a wedding dress with snow as a backdrop, you might want to choose a working space with a lower gamma, so you have more available tonal gradations in the highlights. But in the real world of real image editing, almost everyone uses working spaces with either gamma 1.8 or 2.2.

As an aside, recently I've heard that some people are trying to "standardize" on gamma 2.0. As a very important aside, sRGB and "LStar-RGB" are not "gamma-based" working spaces. Rather, sRGB uses a "hybrid" gamma - see http://en.wikipedia.org/wiki/SRGB for details. And "LStar-RGB" uses a luminosity-based "tonal response curve" instead of a gamma value - see http://www.colormanagement.org/en/workingspaces.html for more information, and then google around for more in-depth information.

In addition to gamma 1.8 and gamma 2.2, the only other "gamma" for a working space that gets much mention or use is "gamma 1", also called "linear gamma". Linear gamma is used in HDR (high dynamic range) imaging, and also if one wants to avoid introducing "gamma-induced errors" into one's "regular" low dynamic range editing. "Gamma-induced errors" is a topic outside the scope of this tutorial, but see "Gamma errors in picture scaling" (http://www.4p8.com/eric.brasseur/gamma.html); http://www.21stcenturyshoebox.com/essays/color_reproduction.html for gamma-induced color shifts; and of course Timo Autiokari's somewhat infamous website, http://www.aim-dtp.net/.
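The shadow-tones claim at the start of this section can be checked by brute force. The sketch below (plain Python, written for this tutorial, and again assuming simple pure-power gammas rather than sRGB's hybrid curve) counts how many of the 256 8-bit code values decode to the darkest tenth of linear light:

```python
# Count how many 8-bit code values represent "shadow" tones, i.e. decode
# to less than 10% of maximum linear light, under a pure-power gamma.

def shadow_tones(gamma, threshold=0.10, levels=256):
    return sum(1 for v in range(levels)
               if (v / (levels - 1)) ** gamma < threshold)
```

With these assumptions, gamma 1.0 (linear) devotes only 26 of the 256 code values to the darkest tenth of linear light, gamma 1.8 devotes 71, and gamma 2.2 devotes 90 - which is why linear gamma working spaces are so shadow-starved at 8 bits.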
Unfortunately, and despite their undeniable mathematical advantages, linear gamma working spaces have so few "tones" in the shadows that (in my opinion) they are impossible to use for editing if one is working in 8 bits, and still problematic at 16 bits (though I do use linear gamma working spaces myself for some parts of my image editing workflow). When the day comes that we are all doing our editing on 32-bit files produced by our HDR cameras on our personal supercomputers, I predict that we will all be using working spaces with gamma 1; Adobe Lightroom is already using a linear gamma working space "under the hood", and Lightzone has always used a linear gamma working space.

Which working space: "large gamut" or "small gamut"

One MAJOR consideration in choosing a working space is that some working spaces are "bigger" than others, meaning they cover more of the visible spectrum (and perhaps even include some "imaginary" colors - mathematical constructs that don't really exist). These bigger spaces offer the advantage of allowing you to keep all the colors captured by your camera and preserved by the lcms conversion from your camera profile to the really big Profile Connection Space.

But keeping all the possible colors comes at a price. It seems that any given digital image (pictures of daffodils with saturated yellows being one common exception) likely only contains a small subset of all the possible visible colors that your camera is capable of capturing. This small subset is easily contained in one of the smaller working spaces. Using a very large working space means that editing your image (applying curves, saturation, etc) can easily produce colors that your eventual output device (printer, monitor) simply can't display.
So the "conversion" from your > "working space" to your "output device space" (say your printer) will have > to "remap" the "out of gamut" colors in your edited image, some of which > might even be totally imaginary, to your printer color space with its much > smaller gamut, leading to inaccurate colors at best and at worst to > "banding" ('posterization' - gaps in what should be a smooth color > transition, say, across an expanse of blue sky) and "clipping" (e.g your > carefully crafted muted transitions across delicate shades of red, for > example, might get "remapped" to a solid block of dull red after conversion > to your printer's color space). > In other words, large gamut working spaces, improperly handled, can lead to > lost information on output. Small gamut working spaces can clip information > on input. Like Wikipedia says, it's a trade-off. I can offer some > oft-repeated advice: > > (1)For images intended for the web, use (one of the) sRGB (variants - there > are several). > (2)For the most accuracy in your image editing (that is, making the most of > your "bits" with the least risk of banding or clipping when you convert your > image from your working space to an output space), use the smallest working > space that includes all the colors in the scene that you photographed, plus > a little extra room for those new colors you intentionally produce as you > edit. > (3)If you are working in 8-bits rather than 16-bits, choose a smaller space > rather than a larger space. > (4)For archival purposes, convert your raw file to a 16-bit tiff with a > large gamut working space to avoid loosing color information. Then convert > this "archival" tiff to your working space of choice (saving the converted > "working" tiff under a new name, of course). See > "http://www.21stcenturyshoebox.com/essays/scenereferredworkflow.html" for > more details. > > The "whys" of these bits of advice regarding "which working space" are > beyond the scope of this tutorial. 
See Bruce Lindbloom's excellent website (http://www.brucelindbloom.com/, Info, Information about RGB Working Spaces) for a visual comparison of the "gamut" (array of included colors) of the various working color spaces. See http://www.luminous-landscape.com/tutorials/prophoto-rgb.shtml and http://www.cambridgeincolour.com/tutorials/sRGB-AdobeRGB1998.htm for "pro" and "con" presentations, respectively, of the merits of using large gamut working spaces. And while you are on the cambridgeincolour.com website, check out the tutorial on color management.

And this concludes my tutorial on color management, camera profiles, and working spaces. Once again, please feel free to comment, correct, incorporate into the digikam handbook, or ignore altogether. As I already said, I couldn't help but notice that the existing information in the digikam handbook is wrong on quite a few counts regarding color management (regarding which I will post separately). Rather than just complain about the problems, I thought I would try my hand at spelling out some theoretical background and practical consequences of color management choices regarding "camera profiles" and "working spaces".

Elle

_______________________________________________
Digikam-users mailing list
[hidden email]
https://mail.kde.org/mailman/listinfo/digikam-users
In reply to this post by Gilles Caulier-4
Hi Gilles,
My apologies if the tutorial was too long for the mailing list - I was a bit worried that it was too long. I want digikam to be the best that it possibly can be. It doesn't handle color management correctly at present (bug). And the handbook has a lot of errors regarding color management (documentation). I can't program to correct software bugs. But I can help write documentation regarding color management.

I created a wiki account per your suggestion, but after five minutes of staring at the wiki, I still don't know how to post. Please feel free to post my tutorial for me, or else give explicit directions for how I can post.

Elle
In reply to this post by Sveinn í Felli
Thanks!
Elle
In reply to this post by Elle Stone-3
2008/6/2 elle stone <[hidden email]>:
> [...]

Elle,

I don't know why the wiki does not work properly for you. Another suggestion is to patch the digiKam handbook. It's not very complicated. The handbook uses the DocBook format:

http://www.docbook.org/
http://en.wikipedia.org/wiki/DocBook

It's a text file based on xml, with sections and subsections. The file is here:

http://websvn.kde.org/branches/extragear/kde3/graphics/doc/digikam/index.docbook?view=log

Why use the docbook format and not another format like ODF? Because the file content can be processed by scripts for the internationalization process, and docbook can be converted to other publishing formats such as PDF, PS, RTF, ODF, etc.

http://www.digikam.org/drupal/docs

All the huge technical documentation projects in open source use this format. Another typical case is professional publishing, as at O'Reilly... You can make a first try, post me a patch, and I can fix and commit your work on svn for inclusion.

Best

Gilles Caulier
In reply to this post by Elle Stone-3
Hi Elle,

I finally found the time to include your great tutorial in the digikam docs. Have a look at it: http://digikam3rdparty.free.fr/0.9.x-releases/digikam_0.9.4.pdf and tell me if you agree. Of course I added you to the credits and authors.

Thanks again.

Gerhard
Hi, Gerhard,
I apologize for taking so long to respond. I just finished an extensive rewrite of the tutorial. I think my original version lacked organization and flow. The rewrite, which I will either post here or send to you privately (let me know your preference), is much more readable, and some errors have been corrected.

Elle
On Tuesday 09 September 2008 12:45:45 elle stone wrote:
> [...]

Hi Elle,

I much appreciate the effort you put into the documentation, and I hope, thanks to you, to finally get a grip on CM :-)

Send it to me either way, but it might be advantageous to post it here on the digikam ML, so anybody interested can read it before it goes into the documentation. Anyway, I'm on the road until Sunday and can't work on it right away.

Gerhard
Hi Gerhard,
I'm going to send the rewritten tutorial directly to you. If you want to post it to the mailing list, perhaps under a new topic so that people can find it, that would be fine with me. I think it is a good tutorial, but it is a bit long. At this point, two small sections have been removed pending "permission to quote".

Elle