# Math question - sort of

Discussion in '35mm Cameras' started by Eric Miller, Sep 23, 2009.

1. ### Kennedy McEwen (Guest)

No it is an exact quote of your question, excluding the waffle that put
it in context as that was already clear from the preceding thread.
Context. The answer is NOT simply scaling the pixel size of the 10D to the
7D, which is what you were given. Without taking optical resolution
into account, the equivalent focal length you would need on the 10D
compared to the 7D could be close to 50% higher than is in fact the
case! Optics, even perfect optics, don't have infinite resolution! When
the optical resolution is close to the pixel resolution, both MUST
be taken into account to answer your question, or you end up with
meaningless unresolved pixels. You seem to have a major problem
understanding that.
When you were 5 and asked your Mom where you came from, you were
probably happy with her reply that a stork brought you. By the time you
were 10 you would expect a better, more complete answer, to exactly the
same question. By the time you were 15 you ought to know the full
answer yourself. Stop behaving like a 5 year old - there is no Santa
Claus, even if some of your friends still believe there is!
Smart enough to know that any question I ask may well have an answer
which is more complex than I expected and with enough common decency not
to criticise those who make the effort to explain that.

Carry on living in ignorance: the 1.7x scale factor given by the partial
answer is at least 50% too high. A 400mm lens on the 7D would NOT give
resolution equivalent to a 680mm lens on a 10D: you will be lucky to
achieve half of that 280mm effective focal-length extension, depending on
the optical resolution of the 400mm lens in question. In other words,
540mm or less equivalence in terms of what is actually RESOLVED.
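The gap between the naive pixel-ratio answer and the optically limited one can be sketched numerically. This is a rough illustration, not McEwen's actual calculation: it assumes lens blur and pixel pitch combine roughly in quadrature, and the 8 um lens-blur figure is a made-up example value (the pixel pitches are the real ones for the two bodies):

```python
import math

def effective_blur(lens_blur_um, pixel_pitch_um):
    """Combine lens blur and pixel sampling blur in quadrature.
    A common rule of thumb: real MTFs multiply, but quadrature
    addition of blur diameters is a reasonable approximation."""
    return math.sqrt(lens_blur_um**2 + pixel_pitch_um**2)

# Hypothetical lens delivering ~8 um blur spots on both bodies.
pitch_10d = 7.4   # Canon 10D pixel pitch, ~7.4 um
pitch_7d = 4.3    # Canon 7D pixel pitch, ~4.3 um

blur_10d = effective_blur(8.0, pitch_10d)
blur_7d = effective_blur(8.0, pitch_7d)

# The resolved-detail ratio lands well below the 7.4/4.3 = 1.72 pixel ratio:
print(blur_10d / blur_7d)  # ~1.2
```

With a sharper lens the ratio creeps toward 1.72; with a softer one it collapses toward 1.0, which is the whole point of the argument above.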

Kennedy McEwen, Sep 26, 2009

2. ### John Sheehy (Guest)

The lenses are to blame for any optical issues with high densities. The
higher density *NEVER* exacerbates any lens problems. Lower densities
lower the resolution, so you see less of everything, including subject
detail.

Your position is all "talk" and "logic". You cannot demonstrate what you claim.

Here's what happens when you try to demonstrate, and go about it the
right way:

You shoot the same scene with the same lens, same ISO, same Av and Tv,
and then you use a converter with no noise reduction, and upsample
critical crops from both images to the same subject size. No matter how
much lens fault is brought into the light with the higher density, the
higher density still has a more accurate rendition of the subject,
because those faults ARE ALWAYS THERE, REGARDLESS OF PIXEL DENSITY. Less
aggressive sampling does not avoid lens issues; it just makes it harder to
tell why the image has so much less real subject detail.
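The comparison procedure above can be mimicked with arrays standing in for the two crops. A minimal sketch, assuming integer upsampling factors and nearest-neighbour interpolation (a real raw converter would use something better):

```python
import numpy as np

def upsample_nearest(img, factor):
    """Nearest-neighbour upsample of a 2-D array by an integer factor,
    a crude stand-in for the raw-converter upsampling described above."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

# Hypothetical crops: the high-density sensor sees the same subject
# with twice the linear pixel count of the low-density one.
low = np.arange(16.0).reshape(4, 4)    # coarse sampling
high = np.arange(64.0).reshape(8, 8)   # fine sampling

# Bring both to a common subject size for a fair comparison.
low_up = upsample_nearest(low, 4)      # 16x16
high_up = upsample_nearest(high, 2)    # 16x16
print(low_up.shape, high_up.shape)     # both (16, 16)
```

Once both crops are at the same subject size, whatever detail the denser sampling captured is still there to see; the sparser crop can only repeat what it recorded.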

John Sheehy, Sep 27, 2009

3. ### Paul Furman (Guest)

Extreme microlenses can emphasize CA and even vignetting. I don't know
if that's necessarily proportional to pixel density, but it appears to be
more of an issue at higher densities.
--
Paul Furman
www.edgehill.net
www.baynatives.com

all google groups messages filtered due to spam

Paul Furman, Sep 27, 2009
4. ### Paul Furman (Guest)

On second thought, the original question is about magnification, just
magnification at or near infinity. FOV only matters here if the 5-inch-tall
bird goes outside the frame. Print size doesn't exactly matter either,
unless you want the result in inches instead of pixels; the only
question is how much the bird can be enlarged.
This is still a handy basepoint: how many pixels tall would the 5-inch
bird be for a 7.2MP full-frame 35mm camera, focused to infinity? I guess
we need to know how far away the bird is.
And if you double the linear pixel count, that doubles the bird's height
in pixels as well. So the pixel spacing is really all you need, though it's
nice to tie it all back to that normal lens at infinity and an 8x10 print
as the basepoint.
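The pixels-tall question reduces to thin-lens geometry: image height is roughly subject height times focal length over distance, divided by the pixel pitch. A sketch with made-up numbers (5-inch bird, 50 ft, 400 mm lens; the ~11 um pitch follows from 7.2 MP spread over a 36x24 mm frame):

```python
def bird_height_px(subject_in, distance_ft, focal_mm, pitch_um):
    """Image-scale geometry: image height ~= subject * focal / distance
    (thin-lens approximation, subject far from the camera)."""
    subject_mm = subject_in * 25.4     # inches -> mm
    distance_mm = distance_ft * 304.8  # feet -> mm
    image_mm = subject_mm * focal_mm / distance_mm
    return image_mm / (pitch_um / 1000.0)

# Hypothetical: a 5-inch bird at 50 ft through a 400 mm lens,
# on a 7.2 MP full-frame sensor (pixel pitch ~11 um).
print(round(bird_height_px(5, 50, 400, 11.0)))  # ~303 pixels tall
```

Halving the pitch (doubling the linear pixel count) doubles the answer, which is the scaling noted above.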

Yeah, this is magnification, like binoculars, microscopes & telescopes
are described as 5x, 10x, etc.

scratch this comment:
Lastly, the lens has resolution limits, so you can say that the lens is
only good up to a particular magnification and decide not to waste money
on pixels beyond that point. However, the point where a lens gives up
varies with many factors: how close to the edge or center, what aperture,
subject distance/magnification, etc. The MTF charts have to pick a narrow
definition, and the Nyquist lines on those have to pick a simple
theoretical diffraction point, but there is usually some discernible
detail beyond that even if it doesn't meet the strict criteria.


Paul Furman, Sep 27, 2009
5. ### You Are The Weakest Link (Guest)

Film has silver-grain (analog photosite) sizes of 2um or less; the
photosites on most small-sensor (1/2.5") cameras are approx. 2um as well.
You will always be limited by your weakest link. If you increase the lens
resolution, you are still limited by the resolution of your sensor, with
its 4-8um photosites. (Luckily, in P&S cameras the optics quality and
resolution are matched to the photosite sizes.) If you increase the pixel
density without increasing the lens quality, then all you are capturing
with those smaller photosites are the blurry edges afforded by the lens.
No gain in useful information. A bit like those toy telescopes that
advertise 600x magnification on a 2" diameter objective lens: all you are
doing is magnifying blur beyond 50x magnification with a 2" lens. Or those
who put high-gain amplifiers on their fringe-area TV antennas only to
amplify noise.
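The weakest-link point can be sketched in a few lines: once the photosites shrink below the lens's blur spot, further density stops helping. The 4 um blur figure is an arbitrary example, and taking the max() of the two limits is a simplification of how they actually combine:

```python
def system_resolution_um(lens_blur_um, photosite_um):
    """Weakest-link sketch: the larger of lens blur and photosite size
    dominates what the system can actually resolve. (A quadrature sum
    would be slightly more accurate; max() shows the principle.)"""
    return max(lens_blur_um, photosite_um)

# Hypothetical P&S lens delivering ~4 um blur spots:
for pitch in (8.0, 4.0, 2.0, 1.0):
    print(pitch, "um photosites ->", system_resolution_um(4.0, pitch), "um resolved")
```

Shrinking the photosites from 8 to 4 um helps; shrinking them from 4 to 1 um changes nothing, because the lens is now the weakest link.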

It's not an "either/or" venture. It's an "and" issue.

You Are The Weakest Link, Sep 27, 2009
6. ### John Sheehy (Guest)

I meant the density itself. Of course, microlenses could be poorly
designed. Even then, however, oversampling allows extremely easy and smooth
correction of CA, both from the lens and that generated by poor
microlenses.

John Sheehy, Sep 28, 2009
7. ### You Are The Weakest Link (Guest)

You fail to comprehend that increasing either one does NOT mean
increasing both.

Pray tell, if you have a sensor that can only record the absolute minimum
of (for sake of argument) 3" of arc, how then will a lens that can resolve
1" of arc be recorded on that sensor?

If you have a sensor that can record 1" of arc, how then can a lens that
can only resolve 3" of arc record 1" of arc on that sensor?

Factor in antialiasing masks, printer limitations, and the limits of the
human eye depending on viewing distance, and the resolution limits climb
exponentially.

You're an idiot pretend-photographer troll. Plain and simple. Proved 100%.

<throwing a dead goat under its troll's bridge to see if it'll go feed on
that>

You Are The Weakest Link, Sep 28, 2009
8. ### Wolfgang Weisselberg (Guest)

OR != XOR. Look it up. Write a logic table.
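The logic table in question, for the record: inclusive OR is true when either *or both* inputs are true, so "increase lens quality or pixel density" does not exclude doing both.

```python
# Truth table contrasting inclusive OR with exclusive OR (XOR).
print("a b | a OR b | a XOR b")
for a in (False, True):
    for b in (False, True):
        print(int(a), int(b), "|", int(a or b), "     |", int(a != b))
# The two differ only on the (1, 1) row: OR gives 1, XOR gives 0.
```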

-Wolfgang

Wolfgang Weisselberg, Sep 29, 2009
9. ### Martin Brown (Guest)

Although it does make it easier.
There are resampling methods derived from radio astronomy that can
handle this situation accurately, but they are computationally expensive.
Bilinear spline is about the cheapest half-decent option found in
standard packages, but there are better ones if you have resources to
burn. It runs into the law of diminishing returns, so how far you push
it is really determined by how unique or irreplaceable the image is.

If you have the option then oversampling the measured data by about 1.5x
the Nyquist theoretical minimum for a monochrome imaging system is
worthwhile. Otherwise you may see obvious jaggies in the raw image.
Beyond that you are not gaining much, although for a Bayer sensor you
still get a bit of extra chroma information out to 2x oversampling.
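The value of margin above the Nyquist minimum can be illustrated with a one-dimensional analogue: a sine sampled at exactly the Nyquist rate can vanish entirely at an unlucky phase, while 1.5x oversampling always retains it. A sketch with a made-up frequency:

```python
import math

def sampled_peak(freq, sample_rate, phase, n=1000):
    """Peak absolute value seen in n samples of sin(2*pi*freq*t + phase)."""
    return max(abs(math.sin(2 * math.pi * freq * k / sample_rate + phase))
               for k in range(n))

f = 1.0
# Sampling at exactly the Nyquist rate (2f): at this phase every sample
# lands on a zero crossing, so the signal is effectively lost (~0).
print(sampled_peak(f, 2 * f, 0.0))
# Oversampling at 1.5x Nyquist (3f): the signal survives (~0.87 peak).
print(sampled_peak(f, 3 * f, 0.0))
```

Two dimensions and a Bayer mosaic complicate the picture, but the underlying reason a little oversampling helps is the same.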

Regards,
Martin Brown

Martin Brown, Oct 7, 2009
10. ### Martin Brown (Guest)

Actually there isn't all that much of a difference apart from the
obvious one that a time series is one dimensional and so a lot more
amenable to analytical techniques when sampled at equal intervals.

Time-sampled data is usually integrated over a time delta-t rather than
being a true snapshot of the signal by a flash converter at exact time t.
Indeed, but having some of that extra data can make post-processing
deconvolution more reliable, provided that you have not traded away
signal-to-noise.

The point here is that an undersampled digital image does present some
difficulties for post-processing to remove chromatic and other
aberrations. They are not insurmountable, but it is easier with an
oversampled image.
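As one concrete, simplified instance of the kind of post-processing deconvolution meant here (not the specific method referred to above), a one-dimensional Wiener deconvolution: divide out the blur's transfer function in the frequency domain, with a damping constant k so that frequencies the blur destroyed don't amplify noise. All numbers are illustrative:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, k=0.01):
    """Frequency-domain Wiener deconvolution: divide by the blur's
    transfer function H, damped by k to avoid blowing up noise where
    H is near zero."""
    H = np.fft.fft(psf, n=len(blurred))
    B = np.fft.fft(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft(B * W))

# Hypothetical 1-D example: a sharp spike blurred by a 3-tap kernel.
signal = np.zeros(32)
signal[10] = 1.0
psf = np.array([0.25, 0.5, 0.25])
blurred = np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(psf, 32)))

restored = wiener_deconvolve(blurred, psf, k=0.001)
print(np.argmax(restored))  # peak recovered at index 10, the original position
```

The better sampled (and less noisy) the input, the smaller k can be and the more of the blurred-away detail comes back, which is why oversampling makes this step easier.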

Regards,
Martin Brown

Martin Brown, Oct 8, 2009
11. ### Mr. Strat (Guest)

Are you still using that Panasonic piece of shit? I guess it doesn't
matter since you don't have the ability to create a decent image.

Mr. Strat, Oct 9, 2009