maximum a posteriori

the brain’s main function is to regulate our body, breathing comes first i guess, but a large portion of it goes to vision, so i'd like to talk about how it also shows me what i see. i live in a 3D color world and here i am, making 2D grayscale representations of it, and that illusion of depth in a 2D space feels very much like the real thing, or not, it depends. besides, black and white photography isn't about capturing reality, but rather our perception of it.

as i stand here, looking at a wet silver gelatin print, shining with all its smoothness, i tell myself "i need more depth". and then it hits me: it's not a given, is it? depth i mean. it's all made up, my judgement isn't limited as much by knowledge as it is by my attitude as i blink and re-interpret my thinking. do i like looking at a flat image better than the 3D space around me? does my brain analyze them differently?

well, let me think.

i can feel the depth in a printed image because of a few cues, sometimes it even feels like looking through a window, and those few cues produce a perception of depth without trying to fool me that it isn't 2D. i lean on my sink "which knob do i turn for this?" i guess i print more than i shoot because of some connections my neurons make somewhere between my retina and my visual cortex, from the front of my face to the back of my skull. besides, the thalamus receives maybe a quarter at most of its info from the retina, the rest comes from the brainstem and the cortex anyway. perhaps the neurons in my striate cortex, unlike the rest of the visual system, don't respond differently to visual stimulation from one eye or the other. the spatial and temporal influences on visual signals are there to adjust and transform the structure of my retinal activity patterns, yes, of course, but also to increase the signal-to-noise ratio of the retinal signal, meaning how much information my neurons can transmit while preserving its basic content. the decomposition of the retinal image begins in the retina, from light that went through my cornea and lens. it's always about the lens with cameras, but in my case -and others i'm sure- visual recognition happens in the brain, where it processes what the optic nerve feeds it, according to the changing rate of the signal over time, and the spectral domain residuals, the noise if you will.

alright then, i know that the amount of depth perceived depends on a few things, the image data (from the rods and cones, the trusted light receptors) but also on prior visual knowledge, to fill in the blanks. that last part is what interests me. in the visual language, the word prior has a strong meaning, because interpretations of retinal images that rely on previous experience are, on average, better than those that don't. in other words, image data + prior expectations = perceived image. simple, and it may very well be the most important equation in probability theory as far as i'm concerned. unconscious inference, two of my favorite words. just to think that my knowledge changes how i see things, and the things i see become part of my knowledge. that same knowledge changes how i see things. can i then even see the things themselves? it probably has to do with my episodic, perhaps even my semantic memory, both simultaneously really. a never-ending cycle in order to achieve a so-called maximum a posteriori, a fantastic system compared to a function based on the image data alone (i.e. a maximum likelihood estimate). i think about it every time i need to recognize objects in an image to prove i am not a robot, and teach a piece of software to at least see in 2D. is the depth i see in an image based on similar situations in my real life?
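that little equation can be made concrete in a few lines. a toy sketch of my own (not from any textbook, all numbers made up), assuming a gaussian likelihood around one noisy measurement and a gaussian prior, so the MAP estimate reduces to a precision-weighted average:

```python
# toy sketch: maximum a posteriori (MAP) vs maximum likelihood (MLE),
# assuming a gaussian likelihood around a single noisy measurement and
# a gaussian prior. all numbers are illustrative.

def mle(observation):
    # maximum likelihood: trust the image data alone
    return observation

def map_estimate(observation, obs_var, prior_mean, prior_var):
    # with gaussian likelihood and gaussian prior, MAP is a
    # precision-weighted average of the data and the prior expectation:
    # image data + prior expectations = perceived value
    w = prior_var / (prior_var + obs_var)
    return w * observation + (1 - w) * prior_mean

obs = 4.0  # one noisy measurement
print(mle(obs))                                                # 4.0
print(map_estimate(obs, obs_var=1.0, prior_mean=3.0, prior_var=0.5))
# pulled toward the prior: ~3.33
```

the more i trust the prior (small prior_var), the further the estimate drifts from the raw data, which is exactly the trade the visual system seems to make when it fills in the blanks.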

what?

i can't add more depth by adjusting focus, it's been set already. i could play with contrast, objects on the same plane tend to be close in value. contrast separates, lighter feels closer and darker farther. i don't know who said that the dual reality of a photograph -perceiving a scene as 3D at the same time we see it flat- is inherent, but it certainly doesn't feel like it right now. here i am with a dark subject in the foreground and a light fog background. in this case i know it's closer because i can see the texture. another cue. and by the way, why do i need binocular vision to understand a flat surface? i also know the object is in the foreground. my perception of depth and distance depends on my position in relation to the scene. i know i'm standing in front of a piece of paper, my question is "where was the photographer?" there is no parallax error looking at a photograph, only the binocular disparity of the flat paper itself. i'm getting very ambiguous depth cues here.

of course i could just print the negative and go home. but i know it's 2D taken from 3D, and i translate it back to 3D so i can check if it looks 3D enough. knowing that, i can't just print, it's as if there was a voice in my head "hello, this is the central visual system." (with the voice of frank zappa's central scrutinizer) and i trust my visual system to a certain extent only, from about 400nm all the way to 700nm maybe. beyond that, in either direction, i rely on machines made by other brains. right now though, i just want to look at the print and through the wall it sticks to, i want the fog to make me lose perspective. it's not black and white, it's continuous gray. black and white works well for duotone, i'll just sprinkle a few dots here, a few less there and the irregular gradient should pull me in.

or not.

b+w photography exists because science works in stages, and that particular step in attempting to draw with light, well, it became its own art form, monochrome photography. it's an aesthetic that pleases the eye, it certainly satisfies my visual system, perhaps as a break from all the colors around us, always. i do like a beautiful blue cyanotype or one of the many sepias around, but in the end i crave the absence of color most, more than the very nature of gray itself. if that makes any sense. or perhaps some of us are able to feel depth better without the color noise. our vision is already limited to the natural spectrum of light once it has passed through the earth's atmosphere, still i get a bit of solace from placing a grayscale in front of my eyes, it sometimes looks like stairs to nowhere, an illusion of depth from fake shadows that are simply separate shades. i have also always favored wide angle lenses for that purpose, the quest for depth on a 2D paper stage. it's a bit like a magic trick, it depends on the viewer as well. i know if i'm not in the right mood i won't get into an image as much, it won't reach out and touch me, the whole experience will fall flat so to speak.

ok.

so, when i put a wet silver gelatin print on the plexi above my fix and look at it -for the very first time- i see the paper itself and hopefully a 3D image through some imaginary window floating, grazing even, right above the intertwined fibers. if not, then my next choice of exposure is already in the making, although i'm not sure if it comes from my free will or simply my brain recalculating a new visual rendering based on previous data filling in the blanks. always back to prior visual knowledge. i'm afraid i only see depth on a print because my visual system recognizes the content as a representation of a 3D scene and my brain corrects automatically without my conscious help. when i feel good about myself i tell stories of experience and visual prowess in the art of printmaking, but deep down i know my brain looks for depth as a habit, just so i can walk through a 3D environment and not trip every other step.
