I’ve taken a break from studying this week due to a nasty cold which turned me into more of a Mucus Machine than a TMA Machine, but today I’ve been getting back to M366, and in particular getting stuck into Block 4: Neural Networks.

Unit 1 Section 3 is about recognising and classifying patterns, and touches on the human ability to recognise noisy or incomplete patterns. The ability to spot patterns has always impressed me (although it does lead to the bizarre excesses of religious-themed pareidolia), and I was especially interested to read that humans are apparently pretty good at recognising noisy patterns aurally as well as visually.

Humans are great at seeing patterns everywhere. The downside is seeing the Face on Mars or the Virgin Mary in your cheese on toast, but the upside is spotting building-faces like this one.

Section 3 briefly mentions some experiments done by Richard Warren in 1970, in which test subjects were played a recording of a polysyllabic word, with one syllable obscured by a loud click. Although the test subjects reported that they had heard a click, they could not correctly identify which syllable had been obscured; even odder, they reported hearing the whole word, including the missing syllable. Fascinating!

I did a bit of digging in the OU’s journal search engine to see if I could find any articles that discussed these experiments in detail, and while I couldn’t find anything on the “click” experiment, I did find an article on a similar experiment by the same researcher, in which the syllables were obscured by coughs rather than clicks [1]. Interestingly, this article states that the “filling-in” effect disappears if the missing syllable is replaced by silence rather than a sound – which made me wonder if the same is true of our visual pattern recognition skills…

A bit more googling and OU library-trawling turned up an interesting article in Biological Cybernetics, ‘Recognition of partly occluded patterns: a neural network model’ by Kunihiko Fukushima [2]. Now, the details of the article are quite a bit beyond my understanding at the moment (perhaps I’ll have more success if I revisit it after finishing Block 4!), but I was able to glean a bit of confirmation that indeed, we generally find it harder to identify visual patterns when they are obscured by occluders that blend invisibly into the background – which I think is a good visual equivalent of silence.

For example, the images below show two sequences of letters, occluded by some white circles. I’ve included a real word and one made up of a random-ish series of letters (as random as me bashing my hand on the keyboard is, anyway!) because, as mentioned in Block 4, humans use the context of an incomplete pattern to help “fill in” the gaps – so this should be a bit harder in the second image:

An English word partially obscured by some white circles
A series of random-ish letters partially obscured by some white circles

These are pretty hard to read, I think (although the topic of M366 might give you a clue as to the first one!). Here are the same sequences of letters, but this time the occluding circles are blue:

An English word partially obscured by some blue circles
A series of random-ish letters partially obscured by some blue circles

I think these are easier to identify, even in the nonsense-word version. Perhaps it’s something to do with the way the blue circles have distinct edges, whereas the white circles are not clearly distinguishable from the background.
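The same distinction – a gap you can see versus a gap that blends into the background – can be sketched in a crude text-only analogue. This isn’t the image demo above, just a little Python snippet of my own to illustrate the idea: replacing letters with a visible marker (like the blue circles, or Warren’s cough) preserves the position and extent of what’s missing, while replacing them with spaces (like the white circles, or silence) leaves you unsure where the gaps even are.

```python
def occlude(text, positions, marker):
    """Replace the characters at the given positions with a marker,
    mimicking either a visible occluder or a background-coloured gap."""
    chars = list(text)
    for i in positions:
        chars[i] = marker
    return "".join(chars)

word = "recognition"
hidden = [3, 4, 7]  # arbitrarily chosen positions to obscure

# Visible occluder: the gaps are clearly marked, like the blue circles.
print(occlude(word, hidden, "#"))   # rec##ni#ion

# "Silence": the gaps blend into the (whitespace) background,
# like the white circles on a white page.
print(occlude(word, hidden, " "))   # rec  ni ion
```

To my eye, `rec##ni#ion` is noticeably easier to fill in than `rec  ni ion`, which at least matches the intuition from the circle images.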

I wonder, though, whether we are also less able to recognise partially occluded visual patterns when the source of the occlusion is a physical blind spot, rather than an object positioned in front of the pattern. What if the “noise” in the image were provided by an absence of sensory information rather than by the extra, irrelevant information provided by the obscuring object? I’ve found a few articles via the OU library that discuss the effect of artificially-induced scotomata on things like saccades and visual search tasks, but nothing so far that seems to directly address the recognition of incomplete visual patterns. I’ll keep looking, but if anyone can point me in the direction of some relevant research, I’d be very grateful. Otherwise I’ll need to start convincing Alex that he’d enjoy undergoing some temporary TMS-induced scotomata in the name of science!

[1] Richard M. Warren, ‘Perceptual Restoration of Missing Speech Sounds’, Science, New Series, Vol. 167, No. 3917 (23 January 1970), pp. 392–393.

[2] Kunihiko Fukushima, ‘Recognition of partly occluded patterns: a neural network model’, Biological Cybernetics, Vol. 84, Issue 4 (2001), pp. 251–259.