Why don't Sony and Pentax have this problem? Dead pixels, defective pixels

Discussion in 'Digital Cameras' started by RichA, Apr 8, 2011.

  1. RichA

    RichA Guest

    A problem that has plagued Nikon's entry-level to upper entry-level
    bodies since the D80. All the cameras sport Sony sensors. Is it
    possible that Nikon knowingly buys (at a cut rate) defective sensors
    that most users won't notice have a problem, reserving the good ones
    for the D200 and up? Is it possible that neither Sony nor Pentax will
    use defective ones?

    http://nikonrumors.com/forum/topic.php?id=2662&page=12
     
    RichA, Apr 8, 2011
    #1

  2. Me

    Me Guest

    Probably more to do with the (battery-saving?) method used for
    pixel-binning when downsampling to HD video - that's where most of the
    problems seem to be reported (rather than for stills) in the few posts
    I read at that link. At least as annoying (IMO) are the jiggly
    aliasing artifacts from poor downsampling in other DSLR HD output. An
    affordable video cam with a native-1080p APS-C sensor and an
    appropriate AA filter would be nice. Does such a thing exist?
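    A rough illustration of why the downsampling method matters (a minimal
    NumPy sketch; the stripe pattern and the factor of 3 are made up for the
    example, not anything a real camera does):

    import numpy as np

    # Toy full-resolution frame with 1-pixel vertical stripes (high-frequency detail).
    full = np.tile([0.0, 1.0], (3000, 2000))   # shape (3000, 4000)

    # Line/column skipping: keep every 3rd sample. Cheap, but detail beyond
    # the new Nyquist limit folds back as false (aliased) patterns at full contrast.
    skipped = full[::3, ::3]

    # 3x3 binning: average each 3x3 block. Acts as a crude low-pass filter,
    # so the stripes are smoothed toward grey instead of aliasing.
    h, w = full.shape
    binned = (full[:h - h % 3, :w - w % 3]
              .reshape(h // 3, 3, (w - w % 3) // 3, 3)
              .mean(axis=(1, 3)))

    print(skipped.std())   # ~0.5  : full-contrast false stripes
    print(binned.std())    # ~0.17 : much weaker residual pattern

    Cameras that skip rows/columns for video get exactly this kind of moire
    and jagged edges; true binning (or a properly filtered downscale) trades
    it for a bit of softness.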
    I hope the next d*00 doesn't use a version of the D7000 sensor. It
    would be a retrograde step for Nikon to offer a camera in that range
    with a tighter crop ratio. Cutting the sensor size to save a few bucks
    might be okay at the lower end, but I'd be a bit pissed off if 10mm
    became 11mm on my next "DX" Nikon body.
     
    Me, Apr 9, 2011
    #2

  3. Apteryx

    Apteryx Guest

    I don't think you have to worry too much about that, changing from a
    23.6mm x 15.8mm sensor to a 23.6mm x 15.6mm one.

    And I tend to assume that the 23.6mm x 15.6mm is the new Nikon DX
    standard. They don't usually chop and change their sensor sizes too
    often. With the D100 in 2002, Nikon introduced a sensor sized at 23.7mm
    x 15.6mm. The same size (though not the same sensor) was used in the
    D70, D70s, D2, D2X, D50, D40, D40x. Then they introduced a new size of
    23.6mm x 15.8mm in the D200, and used the same size for the D80, D60,
    D300, D90, D300s, D5000, and D3000 (the new and old sizes overlapped
    for a while, between the D200 and D80 in 2006 and the D40x in 2007).

    Since introducing the 23.6mm x 15.6mm sized sensor in the D7000, they
    are now using it in the D5100.

    Apart from the Return Of The D100 Sensor Size in the 2007 D40x (they
    probably had a few lying around) the only oddity is the D3100.
    Introduced between the last 23.6 x 15.8 sensor camera and the first 23.6
    x 15.6 one, it has a 23.1mm x 15.4mm sensor. Time will tell whether that
    remains an oddity, or is the new standard size for D3xxx series cameras.

    Apteryx
     
    Apteryx, Apr 10, 2011
    #3
  4. Apteryx

    Apteryx Guest

    On 10/04/2011 12:12 p.m., Apteryx wrote:
    "The same size (though not the same sensor) was used in the ..."
    OK, there was no D2. And the D2X (and D2Xs, D2H and D2Hs) had
    different-sized sensors. The D2X (and I think the D2Xs) fell within
    the "normal" DX range at 23.7mm x 15.7mm, while the D2H and D2Hs were
    smaller, 23.3 x 15.5 and 23.1 x 15.5, close to the present D3100.

    Apteryx
     
    Apteryx, Apr 10, 2011
    #4
  5. Right. Those and the other differences mentioned are completely
    inconsequential.

    There were bigger differences in effective image size with 35mm, and
    who complained about that? The image wasn't always exactly 24 x 36 mm.
    Wide-angle lenses would
    generally produce slightly larger dimensions, and anyway slide mounts and
    negative carriers rarely if ever made the full image available -- except in
    the case of negative carriers that had been filed out to show the full
    frame.
     
    Neil Harrington, Apr 10, 2011
    #5
  6. RichA

    RichA Guest

    I doubt it, since the problem long predates video in DSLRs.
     
    RichA, Apr 10, 2011
    #6
  7. Me

    Me Guest

    My bad. Don't know where I got the idea that Nikon had released other
    cameras (than the D3100) with smaller (than 1:1.5 crop) sensors.
    The difference between 23.6 and 23.1 mm isn't inconsequential. YMMV.
     
    Me, Apr 10, 2011
    #7
  8. Actually, I didn't know the D3100's sensor was reduced half a millimeter in
    that dimension. That is a greater difference than the usual change in
    dimensions, but most of the different sensors have varied somewhat in
    overall size. They have all been something-less-than-24mm x
    something-less-than-16mm, and the difference generally has been
    inconsequential.

    As for "smaller (than 1:1.5 crop)", DX sensors have always been a bit
    smaller than that. The 1.5 was a rounding off of something closer to 1.52,
    based on given sensor size compared to 24 x 36mm. This smaller sensor size
    indicates a true lens factor closer to 1.56x.
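    (For what it's worth, the arithmetic, using the widths quoted in this
    thread:)

    FULL_FRAME_WIDTH_MM = 36.0

    for name, width_mm in [("23.6 mm DX", 23.6), ("23.1 mm D3100", 23.1)]:
        print(f"{name}: crop factor {FULL_FRAME_WIDTH_MM / width_mm:.3f}x")
    # 23.6 mm DX: crop factor 1.525x
    # 23.1 mm D3100: crop factor 1.558x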

    The important question is, What size is the *effective* area of the sensor?
    In the case of the D3100, according to the manual, which I've just now
    downloaded, total pixels are 14.8 million but effective pixels are 14.2
    million. So presumably the overall size given includes all 14.8 Mpixels,
    and the effective pixels occupy a still smaller rectangle. Or do they? It's
    possible that they used to do it that way but now call "sensor size" only
    the area with effective pixels, which might mean there's no difference at
    all compared to the older models.
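    (As a rough sanity check -- and assuming the D3100's nominal still-image
    output is 4608 x 3072 pixels, which is my figure, not something stated in
    the thread -- the "effective" count matches the image you actually get,
    and the remaining ~0.6 million photosites would be the unseen border:)

    width_px, height_px = 4608, 3072            # assumed nominal D3100 image size
    effective = width_px * height_px            # 14,155,776 -> the "14.2 million effective"
    total = 14_800_000                          # total photosites per the manual quoted above
    print(f"effective {effective / 1e6:.1f} MP, "
          f"non-imaging/border photosites {(total - effective) / 1e6:.1f} MP")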

    I really don't know. Someone must. In any case I still think it's too small
    a difference to be worth worrying about.
     
    Neil Harrington, Apr 11, 2011
    #8
  9. Me

    Me Guest

    I'm pretty sure that the size stated is for the effective pixels /
    imaging area.
     
    Me, Apr 11, 2011
    #9
  10. You may be right. On the other hand, if that's so then why do they give the
    total Mpixels too? I have never really understood the reason for that. Do
    the other 0.6 Mpixels not do anything *at all*?
     
    Neil Harrington, Apr 12, 2011
    #10
  11. Guest

    Guest Guest

    It's a bigger number, so why not use it?
    The sensor needs pixels around the periphery for black level, among
    other things.
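    A minimal sketch of what those periphery photosites get used for,
    assuming a full-sensor readout where some outer ring of rows/columns is
    optically masked (the border width and array here are illustrative, not
    any particular camera's layout):

    import numpy as np

    def subtract_black_level(raw, mask_border=16):
        """Estimate the dark offset from the masked periphery and remove it.

        raw         : full sensor readout, masked border included
        mask_border : assumed width of the masked strip, in photosites
        """
        masked = np.concatenate([
            raw[:mask_border].ravel(),          # top strip
            raw[-mask_border:].ravel(),         # bottom strip
            raw[:, :mask_border].ravel(),       # left strip
            raw[:, -mask_border:].ravel(),      # right strip
        ])
        black = np.median(masked)               # robust estimate of the black level
        active = raw[mask_border:-mask_border, mask_border:-mask_border].astype(np.float64)
        return np.clip(active - black, 0, None)

    Real firmware does this per colour channel (and often per row), but
    that's the gist of why the extra photosites exist.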
     
    Guest, Apr 12, 2011
    #11
  12. me

    me Guest

    Through the years (D70/D200/D300) I've seen different raw converters
    also come up with different image sizes for a given camera.
     
    me, Apr 12, 2011
    #12
  13. Me

    Me Guest

    Likewise, but I suspect that's just where they cut off the edges of the
    RGBG matrix in demosaicing.
     
    Me, Apr 12, 2011
    #13
  14. Better Info

    Better Info Guest

    You have 3 different issues involved here.

    The actual number of photosites includes every one on the sensor,
    including those outside of the imaging area (i.e. more than just the
    "effective pixels"). These non-imaging areas are used for setting
    black levels, reading thermal noise, etc., and as control groups of
    photosites that all the others are tested against. These large borders
    of black and white rectangular blocks (and in one case I recall seeing
    a purple region), not usually seen in any of your images, can be
    viewed by converting RAW files with DCRAW's command-line options.

    If I recall, this whole-sensor image is only available in the PGM format
    output. I did it years ago using the 100% hardware RAW data from CHDK
    cameras just to see what it looked like, so don't ask me today which
    command-line switches allowed this. It might have just been the -D switch,
    for "document mode". I don't really recall now. Other cameras may
    automatically truncate these out-of-bounds regions in the RAW files they
    spit out for you. This is not the case with CHDK RAW, where every
    photosite on the sensor is recorded in the hardware RAW file (though
    not if using CHDK's DNG file-format option).
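    If you do manage to get a whole-sensor PGM out of dcraw (as described
    above -- I won't vouch for which switch does it on which version), a
    quick way to see the masked border stand out is something like this
    (the filename and border width are made up):

    import re
    import numpy as np

    def read_pgm_p5(path):
        """Minimal binary (P5) PGM reader, enough for dcraw's grayscale output."""
        data = open(path, "rb").read()
        header = re.match(rb"P5\s+(\d+)\s+(\d+)\s+(\d+)\s", data)
        width, height, maxval = map(int, header.groups())
        dtype = ">u2" if maxval > 255 else np.uint8     # 16-bit PGM samples are big-endian
        return np.frombuffer(data[header.end():], dtype=dtype,
                             count=width * height).reshape(height, width)

    img = read_pgm_p5("whole_sensor.pgm")               # hypothetical output file
    b = 32                                              # assumed border width to inspect
    border_mean = np.mean([img[:b].mean(), img[-b:].mean(),
                           img[:, :b].mean(), img[:, -b:].mean()])
    print("border mean:", border_mean, " centre mean:", img[b:-b, b:-b].mean())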

    The size of any final JPG, TIF, or other image produced from the lesser
    total of "effective pixels" (photosites) depends on the interpolation
    algorithm being used.

    The one in the camera is generally fast and discards large areas of the
    imaging photosites along the borders. The reason for this is that each
    RGGB set of photosites is interpolated into each adjoining RGB image
    pixel, which requires that every photosite be surrounded by a given
    number of others for the interpolation process. Edge and corner
    photosites do not have these supporting photosites on all sides from
    which to judge their intended colors in the resulting RGB file, so to
    simplify and hasten the conversion process they are often discarded from
    the final image. But not completely: their values are still used to
    create the colors for pixels farther from the edges and corners, being
    interpolated into pixels up to 4 or more photosites away--again, this is
    interpolation-method dependent.
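    To make that concrete, here is a toy demosaic sketch (assuming an RGGB
    layout; real converters are far more sophisticated than this
    bilinear-style version). Because every output pixel needs a full 3x3
    neighbourhood of photosites, the outermost ring cannot be reconstructed
    and simply gets cropped:

    import numpy as np

    def naive_demosaic_rggb(raw):
        """Toy bilinear demosaic for an RGGB mosaic; drops the 1-photosite border."""
        h, w = raw.shape
        rgb = np.zeros((h - 2, w - 2, 3), dtype=np.float64)
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                win = raw[y - 1:y + 2, x - 1:x + 2].astype(np.float64)
                if y % 2 == 0 and x % 2 == 0:       # red photosite
                    r = win[1, 1]
                    g = (win[0, 1] + win[2, 1] + win[1, 0] + win[1, 2]) / 4
                    b = (win[0, 0] + win[0, 2] + win[2, 0] + win[2, 2]) / 4
                elif y % 2 == 1 and x % 2 == 1:     # blue photosite
                    b = win[1, 1]
                    g = (win[0, 1] + win[2, 1] + win[1, 0] + win[1, 2]) / 4
                    r = (win[0, 0] + win[0, 2] + win[2, 0] + win[2, 2]) / 4
                else:                               # green photosite
                    g = win[1, 1]
                    if y % 2 == 0:                  # green on a red row
                        r = (win[1, 0] + win[1, 2]) / 2
                        b = (win[0, 1] + win[2, 1]) / 2
                    else:                           # green on a blue row
                        b = (win[1, 0] + win[1, 2]) / 2
                        r = (win[0, 1] + win[2, 1]) / 2
                rgb[y - 1, x - 1] = (r, g, b)
        return rgb  # (h-2) x (w-2): the outermost photosites were dropped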

    Now we get to interpolation methods. Better interpolation algorithms can
    deal with these edge and corner pixels and will then include them in the
    resulting JPG or TIF file. Their colors may not be as precise, because
    they lack surrounding photosites on all sides to determine their intended
    colors, but some find the additional resolution and FOV of the actual
    photographic detail contained in these edges more important than the
    color problems--especially for B&W images, where luminance detail is what
    matters most.
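    Continuing the toy sketch above, one common way to keep those edge
    pixels is to mirror a couple of photosite rows/columns outward before
    interpolating, so the border gets (guessed) neighbours instead of being
    thrown away:

    import numpy as np

    def demosaic_keep_edges(raw):
        """Demosaic the full frame by mirroring the mosaic outward first.

        Padding by 2 with mode="reflect" keeps the even/odd RGGB phase
        aligned with the array indices; edge colours are then interpolated
        from mirrored (i.e. guessed) neighbours rather than discarded.
        """
        padded = np.pad(raw, 2, mode="reflect")
        full = naive_demosaic_rggb(padded)      # (h+2) x (w+2) x 3
        return full[1:-1, 1:-1]                 # crop back to the original h x w frame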
     
    Better Info, Apr 12, 2011
    #14