Aspect Ratio...

Discussion in 'Photography' started by J, Nov 19, 2011.

  1. Pete A Guest

    From what you've said up to this point, your monitor has almost nothing
    to do with your being unable to get the colour punch in the prints; the
    limitation is the printing process itself. To illustrate, I'll use
    the sRGB colour space to make things easier for me.

    Suppose I create a simple abstract having only a green object. I'll use
    the strongest green I can get, which has the tristimulus values 0,255,0.
    To add texture to the object I need to shade it. To darken an area I
    reduce the value of the green channel; to lighten an area, I leave the
    green channel at 255 and increase the R and B channels. E.g. 0,25,0 for
    dark green and 240,255,240 for a bright pale green.

    Clearly, the saturation has to drop in the lightened area. What about
    the darkened area? Technically it is still a 100% saturated green
    (using the HSV model), but human vision will make us believe the
    saturation has dropped significantly. The way I think of it is this:
    darkening the green dilutes it with black; lightening dilutes it with
    white.
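
    To make that concrete, here is a minimal Python sketch using only the
    standard-library colorsys module; the RGB triplets are the ones from the
    example above. Run it and the darkened green reports 100% saturation
    while the pale green drops to roughly 6%.

        import colorsys

        def hsv_of(r, g, b):
            """Return (hue, saturation, value) for 8-bit RGB inputs."""
            return colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)

        # Pure green, darkened green, lightened (pale) green from the example.
        for rgb in [(0, 255, 0), (0, 25, 0), (240, 255, 240)]:
            h, s, v = hsv_of(*rgb)
            print(rgb, "saturation = {:.0%}, value = {:.0%}".format(s, v))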

    I could create that abstract using any old monitor. If I print it using
    pigmented inks, I will likely be very disappointed with the result.
    Ignore the text at the following URL; just look at the first two
    chromaticity diagrams:

    <http://en.wikipedia.org/wiki/CIE_1931_color_space>

    I would get a similar result if I sent my image for printing on some
    photographic papers, even the papers which are claimed to have a gamut
    wider than sRGB. Their gamut _is_ wider in the dark areas, but is much
    narrower in the light areas.

    Dye inks are very good for punchy abstracts; it's just a case of
    finding the most suitable combination of inks and paper for the style
    of work. Downloadable colour profiles and soft proofing allow us to
    "try before we buy".

    Hope that helps.
     
    Pete A, Nov 24, 2011
    #81

  2. PeterN Guest

    It is interesting, but it doesn't solve my issue.
    For an experiment on how monitor accuracy relates to your final image,
    try assigning different profiles to your image. You will see vastly
    different interpretations.
    While I work with punchy colors, I also use subtle changes that simply
    are not available in the sRGB color space.
    The monitor must be accurate if I am to have a chance of printing what I
    see on screen.
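
    A rough way to reproduce that "assign different profiles" experiment
    outside Photoshop is sketched below in Python with Pillow; the image and
    Adobe RGB profile paths are hypothetical. Assigning (rather than
    converting) just reinterprets the same pixel numbers, so each
    interpretation is converted to a common display space to compare.

        from PIL import Image, ImageCms

        img = Image.open("test_image.tif").convert("RGB")      # hypothetical file
        srgb = ImageCms.createProfile("sRGB")
        adobe = ImageCms.getOpenProfile("AdobeRGB1998.icc")    # hypothetical path

        # Interpretation 1: the numbers are sRGB (and the display is ~sRGB),
        # so show them as-is.
        img.show()

        # Interpretation 2: the same numbers "assigned" Adobe RGB, then
        # converted to sRGB for display. The colours shift noticeably.
        ImageCms.profileToProfile(
            img, adobe, srgb,
            renderingIntent=ImageCms.INTENT_RELATIVE_COLORIMETRIC,
        ).show()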

    BTW
    One of the reasons I work in LAB is that it is the largest color space
    that can be seen by the human eye. I can do a decent noise reduction
    simply by applying the remove scratches & speckles filter to the
    luminosity layer; sharpening using unsharp mask on the luminosity layer
    lets me do a lot of sharpening without producing the halo effect; and I
    have minimal ghosting.
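
    A rough Python approximation of this luminosity-only workflow, for
    anyone without Photoshop: the filters and settings below are only
    illustrative stand-ins, and the file names are hypothetical.

        import numpy as np
        from skimage import io, color, filters, restoration

        rgb = io.imread("photo.tif") / 255.0          # assume an 8-bit RGB file
        lab = color.rgb2lab(rgb)
        L = lab[..., 0] / 100.0                       # luminance, scaled to 0..1

        # Denoise and sharpen the luminance only; the a and b (colour)
        # channels are untouched, so halos and colour ghosting stay minimal.
        L = restoration.denoise_tv_chambolle(L, weight=0.05)
        L = filters.unsharp_mask(L, radius=2, amount=1.0)

        lab[..., 0] = np.clip(L, 0, 1) * 100.0
        out = np.clip(color.lab2rgb(lab), 0, 1)
        io.imsave("photo_out.tif", (out * 255).astype(np.uint8))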
     
    PeterN, Nov 24, 2011
    #82

  3. Pete A Guest

    None of this reply is meant to be argumentative; it's just my
    observations and thoughts.

    I see small differences when I switch between monitor profiles and vast
    differences between printer profiles. That's using the ProPhoto printer
    evaluation image I posted earlier.

    I've never used an editor with LAB mode, so what I'm writing here is
    probably rubbish. IIRC, sRGB specifies a monitor brightness of 80 cd/sq
    metre. At this brightness, average human vision can _only just_
    perceive the difference between adjacent values in its 24-bit
    representation.

    Using Adobe RGB instead of sRGB yields a slightly wider colour space
    and much more resolution in the deep shadow areas. The enhanced shadow
    detail is below the resolving power of any LCD or print production that
    I know of. CRT and plasma screens can easily resolve these shadows in a
    totally dark room. Packing this wider gamut into the same 24-bit
    representation means that, at some brightness levels, human vision will
    easily detect the difference between adjacent values as banding.
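
    One way to check the "more resolution in the deep shadows" claim is to
    compare the linear-light value of the first few 8-bit codes under sRGB's
    piecewise curve and under Adobe RGB's pure gamma of roughly 2.2. A quick
    Python sketch (transfer-function constants as published in the two
    specifications, quoted from memory):

        def srgb_to_linear(v):
            """sRGB electro-optical transfer function (has a linear toe)."""
            return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

        def adobergb_to_linear(v):
            """Adobe RGB (1998) uses a pure power law of 563/256 (~2.2)."""
            return v ** (563 / 256)

        for code in range(1, 5):
            v = code / 255
            print(f"code {code}: sRGB {srgb_to_linear(v):.2e}  "
                  f"Adobe RGB {adobergb_to_linear(v):.2e}")

    The Adobe RGB steps just above black are far smaller in linear light,
    which is the extra shadow resolution; the price is coarser steps (and
    potential banding) elsewhere once the wider gamut is squeezed into the
    same 8 bits per channel.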

    If one uses computer hardware plus an operating system that transfers
    video to the monitor at more than 8-bits/channel then banding will be
    invisible, even at very high monitor brightness levels. Such a system
    is mandatory for editing very subtle colour adjustments. What I can't
    understand is how these subtleties could be resolved when viewing a
    print which, relatively, has very low contrast.

    Obviously, there is something fundamental missing from my understanding.
    LAB encompasses a colour space that is far beyond the human eye; it
    extends to colours that cannot possibly exist in reality.
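
    As a quick sanity check on that last point: perfectly legal Lab numbers
    can describe "colours" that no real light can produce. The sketch below
    (D50 white point and constants from the CIELAB definition, to the best
    of my recollection) converts an extreme Lab value to XYZ and gets a
    negative tristimulus value, which is physically impossible.

        def lab_to_xyz(L, a, b, white=(0.9642, 1.0, 0.8249)):
            """Standard CIELAB-to-XYZ conversion relative to a D50 white."""
            def f_inv(t):
                return t ** 3 if t > 6 / 29 else 3 * (6 / 29) ** 2 * (t - 4 / 29)
            fy = (L + 16) / 116
            Xn, Yn, Zn = white
            return (Xn * f_inv(fy + a / 500),
                    Yn * f_inv(fy),
                    Zn * f_inv(fy - b / 200))

        # L=50 with b pushed to the 8-bit limit of 127: Z comes out negative.
        print(lab_to_xyz(50, 0, 127))
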
    What I like about using Capture NX2 is that, by default, it applies USM
    to the luminance only and noise reduction is well balanced between
    luminance and chrominance. I've read endlessly that it is better to
    perform sharpening via the high-pass filter with the layer blend mode
    set to "overlay", but this method has no threshold control so it increases
    noise. Well, just like me, those who advocated the method are random
    internet posters with a lot to learn :)
     
    Pete A, Nov 24, 2011
    #83
  4. PeterN Guest

    I don't take it as argumentative. It is a discussion, hopefully from
    which we can both open our minds to different concepts.

    That is a nice image for the purpose.
    I have an irrational desire to work in the widest color space possible.
    I well understand that LAB's gamut extends beyond humanly perceivable
    colors, and most printers and monitors cannot reproduce them either.
    However, I understand the LAB space to encompass the full range of what
    we humans can perceive.
    I have a fairly comprehensive book by Dan Margulis.
    <http://www.amazon.com/Photoshop-LAB-Color-Adventures-Colorspace/dp/0321356780>

    It is not light reading, and he goes into great detail on the technical
    reasons for his statements.

    I refer to it regularly.
    True, but few of us can hear extremely high or very low musical tones.
    Yet their existence in a recording adds to its richness. I suspect LAB
    applies in a similar manner.
    I hear CaptureNX does a fine job, but after playing with it, I would
    consider using it in place of ACR.
    As for high-pass sharpening, it certainly can increase noise. But it has
    a purpose. I have experimented with multiple layers of high-pass
    sharpening with the blend mode set to Soft Light and/or Vivid Light.
    That is not for
    everyday sharpening. You can control it by using masks and changing the
    opacity of the layer. For special effects, I have also used the emboss
    layer in a similar manner. Yes, that is using PS to manipulate images,
    but I like to do that.
     
    PeterN, Nov 25, 2011
    #84
  5. Pete A Guest

    Thanks very much for your reply, Peter. I spent most of yesterday
    reading about CIELAB and its various applications; especially colour
    spaces, profiles and scene rendering. The latter is very hard to
    understand without the LAB colour model.

    It makes sense to edit in LAB mode for producing prints and all other
    colour work having exact colours specified in CIELAB D50 co-ordinates.
    As often happens during learning, many things have become clear, but it
    has raised a whole load of new questions for me to think about and
    learn from :)

    A couple of things have me puzzled:

    1. The connection between using the ProPhoto colour space and editing
    in LAB mode. I've never come across an image file with a LAB profile,
    so does the editor set its working space to LAB and keep it at that, or
    does it switch back to, say, ProPhoto when the LAB mode edits are
    completed?

    2. Many colour spaces, including CIELAB and ProPhoto, are relative to
    the CIE standard illuminant D50 used in publishing and light
    booths/tables for colour assessment. Does this mean that D50 colour
    spaces should be used with a monitor set to D50? I know the colour
    management system converts a D50 working space into the device
    (monitor) colour space, but isn't this using the D50 space outside of
    its intended purpose?
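
    For what it's worth, the "conversion" a CMS performs between a D50
    working space and a D65 monitor involves a chromatic adaptation
    transform. Here's a small numpy sketch of the Bradford method, just to
    show the mechanics (matrices and white points as published in the
    colour-science literature, quoted from memory):

        import numpy as np

        # Bradford cone-response matrix.
        M = np.array([[ 0.8951,  0.2664, -0.1614],
                      [-0.7502,  1.7135,  0.0367],
                      [ 0.0389, -0.0685,  1.0296]])

        d50 = np.array([0.96422, 1.0, 0.82521])   # XYZ of the D50 white
        d65 = np.array([0.95047, 1.0, 1.08883])   # XYZ of the D65 white

        # Per-"cone" gains, then the full D50 -> D65 adaptation matrix.
        scale = (M @ d65) / (M @ d50)
        adapt = np.linalg.inv(M) @ np.diag(scale) @ M

        print(adapt @ d50)   # lands exactly on the D65 white, as it should

    That adaptation is, as far as I understand it, how a D50 working space
    ends up displayable on a D65 monitor at all.
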
    You can't fool me any longer: your art is just waiting for ultra-wide
    gamut devices so that the viewer can be warmed by the infrared and get
    a tan from the UV. Add a compatible 3-D printer and there will be no
    limits to your work ;-)

    Using your audio analogy, there is absolutely no point in recording or
    reproducing sine-wave tones that are beyond the range of perception.
    This implies that band-limiting filters may be used in the recording
    and reproduction chain. Working in the amplitude-frequency domain, this
    implication is easily verified. However, working in the amplitude-time
    domain using only in-band, but complex signals, the effect of filters
    is clearly evident and the effect(s) can often be heard by experienced
    listeners. Human perception and interpretation of audible and
    sub-audible information is extremely complex and we have no single
    model to work with. I'm sure the same is true for human vision.

    As to wide image gamut, I have a few thoughts. If the viewer has never
    seen the original image (or original object) they are unlikely to
    detect a restricted gamut; provided that it has been soft-clipped
    (mapped) rather than hard-clipped (limited). In terms of handling
    extreme colours, it seems a bit like the audio band-limiting filter
    problem - what is the best type of filter to use and do we have an
    editing domain in which to create the filters?
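
    The soft-clip versus hard-clip idea can be sketched on a single chroma
    axis: hard clipping collapses everything above the device limit onto the
    limit, while a smooth roll-off compresses the out-of-gamut range into
    the top of the reproducible range and keeps the detail ordered. This is
    just a generic Python illustration, not any particular CMS's rendering
    intent.

        import numpy as np

        def hard_clip(chroma, limit):
            """Everything above the device limit collapses onto it (detail lost)."""
            return np.minimum(chroma, limit)

        def soft_clip(chroma, limit, knee=0.8):
            """Leave values below the knee alone; compress the rest smoothly."""
            knee_point = knee * limit
            headroom = limit - knee_point
            over = np.maximum(chroma - knee_point, 0.0)
            return np.where(chroma <= knee_point,
                            chroma,
                            knee_point + headroom * np.tanh(over / headroom))

        chroma = np.linspace(0.0, 2.0, 9)   # source chroma; device limit is 1.0
        print(np.round(hard_clip(chroma, 1.0), 3))
        print(np.round(soft_clip(chroma, 1.0), 3))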

    Obviously, we have the luminance filter which matches the eye response,
    but this is useless for mapping our images onto media having very
    restricted gamut compared to the eye, such as an sRGB device or much
    worse, newsprint. Traditionally, we've used IR and UV filters to
    restrict the range of red and blue gamut, but these have ripples in the
    passband which affect colour fidelity.

    It's hard for me to imagine an optical filter that would restrict the
    gamut of in-band colours such as orange and green - I'll go even more
    insane if I think about it for too long!
     
    Pete A, Nov 26, 2011
    #85
  6. PeterN Guest

    LAB was originally intended to provide a transition between RGB and
    CMYK. If you look at the gamuts of those two, you will see that much of
    each one's gamut is out of gamut for the other. Certain adjustments can
    be made in LAB that simply cannot be done in RGB, e.g. an object may not
    show details due to color smudging. Often that can be cured by
    straightening the curve on the a and/or b channel in LAB.
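
    A rough approximation of that a/b curve move outside Photoshop, in
    Python with scikit-image, is below; the 1.4 factor and file names are
    arbitrary examples, not a recommendation.

        import numpy as np
        from skimage import io, color

        rgb = io.imread("smudged.tif") / 255.0     # assume an 8-bit RGB file
        lab = color.rgb2lab(rgb)

        # Steepening a and b around their midpoint (0) increases colour
        # separation, which helps pull apart detail lost to colour smudging.
        factor = 1.4
        lab[..., 1] = np.clip(lab[..., 1] * factor, -128, 127)   # a channel
        lab[..., 2] = np.clip(lab[..., 2] * factor, -128, 127)   # b channel

        out = np.clip(color.lab2rgb(lab), 0, 1)
        io.imsave("separated.tif", (out * 255).astype(np.uint8))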

    BTW, there are many edits that can be better performed in RGB. The image
    will decide which mode of edit is better.
    Only if you do the change manually.

    I'm not sure I understand your question.
    Insanity is hereditary. I got it from my children.
     
    PeterN, Nov 27, 2011
    #86
  7. Isn't D50 just a standard target for paper-based print? The recommendation
    with a digital workflow is now D65.

    My last LCD screen had a D50 white point and was surprisingly good for a
    bottom-end display. In comparison, the new one is sRGB, and the variance
    in colour, brightness, and viewing angle across the screen is a bitch.
     
    Charles E. Hardwidge, Nov 27, 2011
    #87
  8. Pete A Guest

    Taking your points in reverse order, the variation with angle seems to
    be much worse in the vertical direction on all the LCDs I've seen. As
    one might expect, primary colours fare better than the intermediates.

    Some publishers still use D50 monitors (more precisely, a Tcp of 5000
    K) and D50 workflow to achieve extraordinarily accurate print.

    D50 is by far the most common standard illuminant for colour matching
    reflective materials including prints, fabrics and paints. D65 seems to
    be the standard for light emissive devices such as TVs and monitors.

    The more I try to understand the reasoning behind D50 and D65 the more
    confused I become. The only thing that makes sense so far is: if I use
    an editing workspace of D50, have my computer and monitor set to D50,
    print an image on a printer designed for D50 (and use its ICC profile),
    then view the print under D50 next to my monitor, the print should look
    the same as my monitor when set to soft-proof the digital image.

    What is abundantly clear to me is that same print viewed under D65 will
    not colour match anything at all - if it could colour match then there
    would be no need to specify an illuminant/white point in the first
    place.

    Also abundantly clear is the purpose of editing on an all D65 system to
    produce D65 white point targets such as TV, sRGB and Adobe RGB images.

    I simply cannot understand how a photo print (D50) can have accurate
    colours if the image is prepared in a D65 environment. Perhaps this is
    why some publishers still use a D50 environment.

    Having produced many prints that look better than their digital
    originals on my monitors, I have no real cause for concern. However, it
    really bugs me that I don't understand the above.
     
    Pete A, Nov 27, 2011
    #88
  9. I think D50 & D65 are just targets. If you display a print in an environment
    of D50 or D65 they will look different. That's why for the final print stage
    people would use D50 lamps or hold the print to a window?

    The other thing is, unless you're viewing under controlled conditions
    like a gallery, the viewing light could be anywhere from tungsten to
    overcast cloud.

    Uh. Does my head in thinking about this stuff. Maybe it's better just to
    accept the target for what it is otherwise you lose yourself in
    overthinking. Sometimes a spoon is just a spoon?
     
    Charles E. Hardwidge, Nov 28, 2011
    #89
  10. Pete A Guest

    I've had a fair bit of success making prints specifically for their
    display environment, but it's been via guesswork that gets refined
    through trial-and-error. I'm hoping to find a more controlled method to
    make it easier.

    As I've mentioned before, I no longer view or edit my digital images
    when North light illuminates my room during the afternoon.
    Over-thinking (analysis paralysis) is possibly my most severe and
    frequent problem.
     
    Pete A, Nov 29, 2011
    #90
  11. PeterN Guest

    On 11/29/2011 9:52 AM, Pete A wrote:
    <snip>

    That comes with your eye being trained. With my F3 I used to be able to
    make reasonably accurate exposures, by eye. I also could make a fair
    guess on bracketing.
    Now with digital I seem to have lost that ability and rely totally on
    the metering system and histogram.
     
    PeterN, Nov 30, 2011
    #91
  12. I was doing my brain in with profiling and calibration, so I'm just
    looking at settling on something practical. If everything profiles and
    calibrates okay, that's what's important. Plus, other people can have
    clarity because they're
    not so attached to it and if we're all tugging in the same direction that
    has something going for it. Does an end run of the funk...
     
    Charles E. Hardwidge, Nov 30, 2011
    #92