L-MOUNT Forum


Do you want computational techniques in camera?

pdk42

Moderator
It seems that Panasonic are down-playing "computational photography":

https://petapixel.com/2024/03/13/pa...sers-want-computational-photography-features/

From my perspective, I think its desirability depends on two considerations:
  • What is meant by "computational photography"?
  • What should be done in camera?

I have no philosophical objection to computational techniques in general (it's really only about the final result), but let's look at what's being done today. These techniques currently target four different types of use-case:
  • To work around limitations of the sensor
  • To increase resolution
  • To enhance/simplify the camera's operation
  • To improve the aesthetic appearance of images

Let's look at each:

1) Techniques to work around limitations in the sensor

This really boils down to taking a burst of images to either:
  • Build a HDR composite
  • Suppress noise at high ISO
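Both of these boil down to stacking a burst. As a rough illustration of the noise-suppression case (a toy sketch, not any manufacturer's actual pipeline), averaging N aligned frames reduces random noise by roughly √N:

```python
import numpy as np

def average_burst(frames):
    """Mean-stack a burst of aligned frames; random noise drops ~1/sqrt(N)."""
    stack = np.stack([f.astype(np.float64) for f in frames])
    return stack.mean(axis=0)

# Simulate a flat grey patch shot 16 times at high ISO (noise sigma = 10).
rng = np.random.default_rng(0)
clean = np.full((8, 8), 100.0)
burst = [clean + rng.normal(0.0, 10.0, clean.shape) for _ in range(16)]
stacked = average_burst(burst)
```

An in-camera HDR composite works along the same lines, except the burst is exposure-bracketed and the frames are merged with per-pixel weights instead of a plain mean.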

2) Techniques to increase resolution

Again, this is based on a burst, but this time to increase resolution. We can see:
  • Pixel shift Hi Res
  • Hand held Hi Res
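For anyone curious how pixel-shift works under the hood, here is a hedged toy model (heavily simplified — it ignores the sub-pixel scene shift and is not Panasonic's actual algorithm): with an RGGB Bayer sensor moved by one photosite in each direction, every pixel position gets sampled through a red, a green and a blue filter across four exposures, so full colour is recovered without demosaicing.

```python
import numpy as np

# RGGB colour-filter array: channel index seen at (row % 2, col % 2)
CFA = np.array([[0, 1],
                [1, 2]])  # 0 = R, 1 = G, 2 = B

def capture(scene, dy, dx):
    """One Bayer exposure with the sensor shifted by (dy, dx) photosites."""
    h, w, _ = scene.shape
    ys, xs = np.mgrid[0:h, 0:w]
    chan = CFA[(ys + dy) % 2, (xs + dx) % 2]
    raw = np.take_along_axis(scene, chan[..., None], axis=2)[..., 0]
    return raw, chan

def pixel_shift_combine(scene):
    """Four one-photosite shifts give every pixel a real R, G and B sample."""
    out = np.zeros_like(scene)
    for dy, dx in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        raw, chan = capture(scene, dy, dx)
        for c in range(3):
            mask = chan == c
            out[..., c][mask] = raw[mask]
    return out
```

Hand-held Hi Res is harder: the "shifts" come from natural hand shake, so the frames must first be aligned to sub-pixel accuracy before a similar merge.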

3) Techniques to simplify camera operation

This includes:
  • Taking a burst of short exposures and compositing them to emulate a long exposure (Olympus "Live ND")
  • Doing the above selectively (Olympus "Live Grad ND")
  • Special use-cases for focusing (focus stacking, starry sky AF)
  • Building a timelapse into a video
  • Building a set of images into a live composite (additive composition)
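The first and last of these are easy to sketch. Assuming already-aligned frames (a toy illustration, not Olympus's or Panasonic's actual processing): Live ND is essentially a mean of many short exposures, while a live composite keeps only the brightest value each pixel has seen, so light trails accumulate without the base exposure blowing out.

```python
import numpy as np

def live_nd(frames):
    """Emulate a long exposure: average many short, aligned exposures."""
    return np.mean(np.stack(frames), axis=0)

def live_composite(frames):
    """Lighten-blend composite: per-pixel maximum across the sequence."""
    return np.maximum.reduce([np.asarray(f) for f in frames])

base = np.array([[10.0, 10.0],
                 [10.0, 10.0]])
flash = np.array([[10.0, 200.0],
                  [10.0, 10.0]])   # a light streak appears in one frame
nd = live_nd([base, flash, base, base])        # streak is averaged down
comp = live_composite([base, flash, base, base])  # streak is kept
```

The difference in the blend rule is the whole point: averaging smooths moving light away (waterfalls, crowds), while the maximum preserves it (star trails, light painting).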

4) Techniques to enhance the image aesthetically

The most obvious are:

- Blurring backgrounds
- Removing red-eye
- Fixing skin complexion
- Fake "live" effects (moving the subject with the mouse so that it seems to move relative to the background)
- Replacing skies
- Adding fake flare


Personally, I'm very happy with doing any of types (1), (2), or (3) in-camera. And it seems that Panasonic would agree since they do most of them!!

But, like most "real" photographers, I wouldn't want any of type (4) done in-camera (though I may resort to some of it later in PP).
 
Yeah, I've read that too. It's an interesting statement from a company that is one of the leaders among the major camera manufacturers in terms of things many people would call part of "computational photography".

Panasonic has the best hand-held high-resolution mode on the market. They have the only full-frame cameras with live composite. And a few more things.

I think it comes down to what exactly you mean by computational photography. For me, anything that enhances detail, like HHHR or focus stacking, is a welcome feature. But I don't think we need things that manipulate the actual image, like sky replacement, in the camera itself.

I'm not completely against those manipulation features, and it's difficult to draw the line between manipulation and detail enhancement. But I think manipulation belongs in post-processing, and I don't want it happening automatically on the small camera screen.

More important would be a reliable and fast connection to smartphones and tablets. On such devices I can get better control over editing, including the use of AI features, than I ever could on a small camera screen.
 
I agree with all of that, esp the last point.

I'm yet to see a good implementation of a camera smartphone app. The Panasonic one isn't bad (try Nikon for a truly awful experience), but it's still very clunky in terms of setting up and maintaining the connection during camera power cycles. I don't understand why manufacturers aren't putting GPS and mobile data directly into the camera to avoid all this bluetooth pairing. A really good implementation would be syncing to the cloud (either auto or on-demand for tagged images) and then a separate phone/tablet app to do any editing and social media publishing needed. The camera should stick to taking the photo, geotagging it, and uploading it to the cloud.

Then a really good camera manufacturer's cloud system could do all sorts of interesting stuff, including AI, auto application of profiles, auto publish to Flickr etc etc. Lots of scope to take the camera past just the capture.
 
The camera should stick to taking the photo, geotagging it, and uploading it to the cloud.

...auto publish to Flickr etc etc.

Stop for a moment and think what some people will do with that.

I recently did a search on flickr for photos taken with the Sigma fp L:
https://www.flickr.com/search/?camera=sigma/fp_l&safe_search=1&sort=date-posted-desc

Scroll through a couple of pages... what do you see? The person behind all that repetition has uploaded over 315,000 photos. And that's without a camera linked to the cloud and auto publishing to social media.
 
Answering your original question:

I have absolutely no objection to any of 1) or 3), provided they're optional so I can always have an unmodified RAW image. As for 2), I'd always prefer it to be done by the camera, and it's something I think Panasonic do particularly well.

With the exception of removing red eye, there's nothing in 4) that I'd ever do, and I really don't want a camera that'd do anything of the type as part of its standard image processing. It's why I almost never use my iPhone's camera.

And I may be a luddite here, but I just don't care about linking a camera to a phone, tablet or cloud services, and have never used those features in any camera I've owned that has provided them.
 
And I may be a luddite here, but I just don't care about linking a camera to a phone, tablet or cloud services, and have never used those features in any camera I've owned that has provided them.
I admit that I hardly ever do either. If I'm travelling and have just the iPad with me then I'll link to that to offload images and post them to Flickr or whatever. My normal workflow is to pull the card out of the camera, transfer to the PC, and then PP and publish from there. But I know that lots of "normal" people want a quicker and more direct way to put stuff on-line. In that way, serious cameras fall a long way short.
 
And I may be a luddite here, but I just don't care about linking a camera to a phone, tablet or cloud services, and have never used those features in any camera I've owned that has provided them.
Yeah, you're a Luddite :)
I'm personally not a huge user of linking & uploading photos through an app, but it is quite handy at times. I've used it with my G100s a little bit when on holidays. On my last trip I didn't take any sort of computing device other than my phone. It was pretty damn liberating, and with the G100 it's a single button press to upload to the phone, and then wherever you like from there. Mind you, this was done in the evenings, when a bit of spare time presented itself, rather than madly posting away all day long. So one connection worked for me; I didn't run into any issues of losing it when the camera or phone powered down.
I've not yet had a chance to set up my S5, but I'm guessing it won't be any more difficult than my G100s. Plus, the same app works really well as a remote for the camera too.
[Attachment: 200920-P1074747.jpg — Panasonic DC-G9, 8.0 mm, ƒ/6.3, 1/320 s, ISO 200]
[Attachment: meditation.jpg — Panasonic DC-G9, 8.0 mm, ƒ/5.6, 1/800 s, ISO 200]
 
It seems that Panasonic are down-playing "computational photography":
I saw that PetaPixel article and was a little surprised. It didn't seem like it was coming from a technology company like Panasonic. And it was awfully general; it didn't discuss what particular computational photography Panasonic would not do. I'll put it this way: that interview didn't smell right. It seemed more like an excuse for not innovating. Claiming that their target users, hybrid creators, don't want computational photography because it takes away from the work they do in editing is a stupid thing to say. These same hybrid creators use iPhones as backup cameras, and don't think twice about using the computational photography built into their phones.

The applications I have in mind are specific to my needs. I'm using a G9 II as backup to my S5IIx. And a micro four thirds camera is not equivalent to full frame in low light. But if computational photography can make an iPhone as good as it is with its diminutive sensor, then Panasonic should be able to make the much larger micro four thirds sensor look more like a full frame sensor. We can do this already with Topaz DeNoise AI or Lightroom denoise. It's not a great leap to imagine this built into a micro four thirds camera, where ISO limits are greatly extended. And this approach on a full frame camera would push ISO out even further. This is but one application; we can all think of more. Get with it, Panasonic.
 
One time I spent weeks removing dust spots from a 2-day landscape trip, caused by sticky pollen which showed at narrow apertures. Changing over 5-6 primes all day long... How stupid was all that; I'll never do it again, but if a robot can successfully handle such monotonous, laborious, time-wasting jobs, then good.

So far I haven't even used a computer, though saying that, I haven't done any real photography yet, and I hate PP and LR anyway. I transfer JPEGs to the phone, then cast to the TV or upload to the internet, all without the Apple Mac M1 mini and Lightroom. The Panasonic JPEGs are far nicer than Pentax (my old system) anyway, so PP can be much lighter than before, and I'm going to try to do as much in-camera now; before, it had to be RAW only, but not with the S5ii.

I don't even have a desk any longer, and my photo monitor is stored in a box upstairs; I don't really want to use it any longer, but it's there if I need it. Most times, viewing on the 48" lounge OLED does rightly, and you can just cast to it.

But computational photography like fake bokeh I don't like. I only ever used basic PP, much like in the old darkroom: WB changes, contrast enhancement, dust removal, and a lot of brightening/darkening/graduating on landscapes or astro. Also Ilford monochrome and other profiles, but I like the L.Monochrome on the S5ii and hope to learn how to use the real-time LUTs for film emulations.

It was worth dumping old Pentax for the diminished PP requirements alone, and the S5ii and lenses help massively already.

And then you have die-hard Pentaxioners telling me my S5ii isn't a true photographic tool :p :p :p :p :p :p :p
 
One time I spent weeks removing dust spots from a 2 day landscape trip, caused by sticky pollen which showed at narrow apertures. Changing over 5-6 primes all day long..
My worst, I was photographing a local forest fire and wanted to change lenses. The thick smoke is lots of tiny particles. I got in my car with the windows up to make the lens change, but that didn't help. My sensor was so messed up it took forever to get it clean again. The lesson is to grab just one good zoom lens if you are off to shoot a forest fire.
 
My worst, I was photographing a local forest fire and wanted to change lenses. The thick smoke is lots of tiny particles. I got in my car with the windows up to make the lens change, but that didn't help. My sensor was so messed up it took forever to get it clean again. The lesson is to grab just one good zoom lens if you are off to shoot a forest fire.
I'll keep that in mind! After my mistakes I bought the 24-105 and 70-300; one or the other will usually do, with no lens changes.

What about your lungs? :oops:
 
Stop for a moment and think what some people will do with that.

I recently did a search on flickr for photos taken with the Sigma fp L:
https://www.flickr.com/search/?camera=sigma/fp_l&safe_search=1&sort=date-posted-desc

Scroll through a couple of pages... what do you see? The person behind all that repetition has uploaded over 315,000 photos. And that's without a camera linked to the cloud and auto publishing to social media
I looked at that link as well. That person who uploads all his photos to Flickr probably syncs his iPhone directly and probably ingests his photos into a NAS which also syncs automatically. Without any form of culling, it looks like... he just uploads everything. 3500 pages hahahaha, the first one being from 2005. That is on average only 46 a day. Every day, for 19 years in a row.
 
Computational photography? No want for fake bokeh/thin DOF, sky replacement, or anything like that. I'd like to see more stacking for reduced noise & better colour, hand-held high-res, that sort of thing though. If I could get m4/3 bodies with the same dynamic range & high-ISO performance as 35mm format, in a smaller package, I'd jump back in a heartbeat.
 
Computational photography? No want for fake bokeh/thin DOF
Lightroom now has a beta for fake background blur, kind of like the iPhone's portrait mode. I played with it just a little. I didn't much like what I saw, but it might work for some things, maybe a portrait. It will be interesting to follow along and see how good it gets. But I think this belongs just in post-processing, not in the camera.
 
Lightroom now has a beta for fake background blur, kind of like with iPhone portrait. I played with it just a little. I didn't like much what I saw, but it might work for some things, maybe a portrait. It will be interesting to follow along and see how good it gets. But I think this belongs in just post processing, not in the camera.
Getting nice bokeh is part of the skill and satisfaction of making a photograph. Selecting backgrounds, moving around and selecting lenses for distinctive bokeh are all part of the art.

With the move from Pentax to the Lumix S5ii, I'm going to try to do as much as possible SOOC, with as little PP as possible, whereas before I had to use RAW; so for that reason alone I've no interest in fake bokeh. I never used anything like sharpening, blurring or most Lightroom tools. A lot of what I did before I can now do in the S5ii, like monochrome filters and aspect ratio changes.

The only reason I might keep my favourite legacy flower lens, the Vivitar 28mm f2.0 Close Focus, is its unique paintbrushed bokeh; the fun and thought you need to put into composition, whilst maintaining focus and working the bokeh, is very satisfying when you nail it.

Sorry if I said some of this before :oops:
 