wilsonlee

Kayvon mentioned at the end of the "why can't we just use a bigger hole?" discussion that mobile cameras get away with a bigger hole by using a lens. That made me wonder when the first camera with a lens was invented. Some reading on the history of cameras shows that the first use of a lens can be traced back to around 1550, and the oldest known record of the pinhole imaging principle dates back to around 400 BC with Mozi of China.

More details: https://en.wikipedia.org/wiki/History_of_the_camera

sushain

@wilsonlee, that's a pretty interesting read! Thanks for sharing.

It's a shame that there's not much analysis of how smartphones have changed the camera ecosystem aside from the perfunctory statement at the end. I'd be interested in hearing about what the newest smartphones are really bringing to the table with their triple-lens cameras (see the Galaxy S10+ rumors), i.e., is it all just marketing gimmicks, and what's the future of smartphone cameras? Will they ever eclipse DSLRs and push Nikon out the way Kodak went?

cat

I found the picture helpful for understanding how lenses work. A lens makes sure that the light rays reflected from each point of the subject map to a single point on the film. Without a lens, those same rays pass through a bigger hole and land at different points on the film, so we would get a blurry image instead.
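To make that concrete, here is a minimal sketch of the thin-lens picture (my own toy numbers and function names, not anything from the slide): a lens of focal length f focuses rays from a point at distance o onto a single point at distance i behind the lens, where 1/f = 1/o + 1/i. A point at any other depth converges in front of or behind the film, so its rays spread over a "circle of confusion" whose diameter grows with the aperture, which is exactly the blur you would get from a bare big hole.

```python
# Toy thin-lens model (assumed, not taken from the lecture): 1/f = 1/o + 1/i.
def image_distance(f, o):
    """Distance behind the lens where a point at distance o comes into focus."""
    return 1.0 / (1.0 / f - 1.0 / o)

def circle_of_confusion(f, aperture, focus_dist, obj_dist):
    """Diameter of the blur spot on the film for a point at obj_dist,
    when the film is placed so that focus_dist is perfectly in focus."""
    film = image_distance(f, focus_dist)   # film position chosen for focus_dist
    i = image_distance(f, obj_dist)        # where a point at obj_dist converges
    return aperture * abs(film - i) / i    # similar triangles through the lens

# Example: 50 mm lens, 10 mm aperture, focused at 2 m (all lengths in meters)
f, A = 0.050, 0.010
print(circle_of_confusion(f, A, focus_dist=2.0, obj_dist=2.0))  # 0.0: in focus
print(circle_of_confusion(f, A, focus_dist=2.0, obj_dist=0.5))  # ~0.0008 m: blurry
```

Shrinking the aperture shrinks the blur spot back toward the pinhole case, which is why the lens is framed as the way to get a big hole's light without the big hole's blur.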

Gyro

There are many other projections if you are interested: https://en.wikipedia.org/wiki/Graphical_projection

maq

@sushain really interesting question! Reminded me of this article: https://fstoppers.com/originals/future-smartphone-photography-bright-297527

In part of the article, the author talks about how smartphone manufacturers need to leverage computational photography to achieve better results, since smartphones currently have much smaller sensors than, say, DSLRs.
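Just to illustrate the kind of thing computational photography buys you here (a generic toy example, not something from the article): one common trick is burst averaging, where the phone merges many short exposures so that the noise from a tiny sensor averages out by roughly a factor of sqrt(N).

```python
import numpy as np

# Toy burst-averaging example with made-up numbers: 16 noisy exposures of a
# flat gray scene, merged by a simple mean. Real pipelines also align the
# frames and merge more carefully, but the noise win is the same idea.
rng = np.random.default_rng(0)
scene = np.full((100, 100), 0.5)                       # "true" scene brightness
frames = scene + rng.normal(0.0, 0.1, (16, 100, 100))  # per-frame sensor noise

single_shot = frames[0]
burst_merge = frames.mean(axis=0)

print(single_shot.std())  # ~0.10  (noise of one frame)
print(burst_merge.std())  # ~0.025 (~0.10 / sqrt(16))
```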

anon

Google's Pixel 3 has many camera improvements that rely primarily on software (1, 2) and includes only a single rear lens, whereas competitors are including two or three rear lenses. These software improvements are powered by Google's Visual Core processor, which allows for efficient, high-performance execution of AI/ML algorithms.

dgupta2

On that note, Apple has also done some impressive work in computational photography that lets users adjust image characteristics like aperture after the image has been taken. A demo of the depth-of-field adjustment is at around 1:13:38 here: https://www.apple.com/apple-events/september-2018/

I'm really curious how something like this works.
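My guess at the general recipe (a hypothetical sketch, not Apple's actual pipeline): the phone saves a per-pixel depth map alongside the photo, and the "aperture" slider just rescales a synthetic blur whose radius grows as a pixel's depth moves away from the chosen focus plane. Something along these lines:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def synthetic_dof(image, depth, focus_depth, aperture, max_radius=8):
    """Hypothetical post-capture depth-of-field sketch (not a real pipeline).
    image: HxWx3 float array, depth: HxW depth map in the same units as
    focus_depth, aperture: user-controlled strength (larger => shallower DOF)."""
    # blur radius per pixel, proportional to distance from the focal plane
    radius = np.clip(aperture * np.abs(depth - focus_depth), 0, max_radius)
    # pre-blur the image at a few radii, then pick the right version per pixel
    levels = np.arange(max_radius + 1)
    blurred = [image if r == 0 else
               uniform_filter(image, size=(2 * r + 1, 2 * r + 1, 1))
               for r in levels]
    idx = np.rint(radius).astype(int)
    out = np.zeros_like(image)
    for r in levels:
        mask = (idx == r)[..., None]          # pixels whose blur radius is r
        out = np.where(mask, blurred[r], out)
    return out
```

Real systems presumably get the depth from dual-pixel sensors or multiple lenses and use a much fancier, lens-shaped blur, but the key point is that once depth is stored per pixel, the aperture becomes an editable parameter instead of something baked in at capture time.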