Dr Perceptron from Futurama (picture from theinfosphere.org)

Not Dr Perceptron, that is – the perceptrons I’ve been grappling with this week are the ones featured in Block 4 Unit 2 of M366, which I think has been pretty much the toughest part of the course so far. Not boring, just very dense and packed with detail. This was the first unit that still had me scratching my head at the end of my first read-through; it wasn’t until I’d gone through the unit again to make sure I’d covered all the Learning Outcomes that I actually started to feel comfortable with the material.

And one thing that particularly had me scratching my head was a curious glitch in the PDF copy of Block 4, which somehow omitted quite an important element of Figure 2.44, a section of which is pictured below. Can you spot what’s missing?


Yes, the actual functions are missing. I sat there for a couple of minutes staring at those graphs, zooming in to see if the lines were just extremely close to the axes, changing my PDF reader colour scheme settings to see if the lines had been rendered invisible by the theme, and eventually calling Alex across to look at it too. At which point common sense prevailed, and Alex suggested just looking in the printed Block 4 book to see if the graphs were any different in there; and of course, they were perfect in the printed copy. Which is great – I know now what effect varying the value of λ has on the unipolar sigmoid function – but it does make me feel a little paranoid about the reliability of the other diagrams in the PDF copies of the course units. Hopefully there haven’t been any other less obvious omissions!
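
For the record (and in case anyone else’s PDF is similarly afflicted), the unipolar sigmoid is f(x) = 1 / (1 + e^(−λx)), and λ controls how steep the curve is. Here’s a quick Python sketch of my own – not anything from the course materials – showing the effect: larger λ pushes the function towards a hard 0/1 step, smaller λ flattens it out towards 0.5.

```python
import numpy as np

def unipolar_sigmoid(x, lam=1.0):
    """Unipolar sigmoid: squashes x into the range (0, 1); lam sets the steepness."""
    return 1.0 / (1.0 + np.exp(-lam * x))

x = np.linspace(-5, 5, 11)
for lam in (0.5, 1.0, 5.0):
    # Larger lam makes the outputs jump more sharply from 0 to 1 around x = 0.
    print(f"lambda = {lam}:", np.round(unipolar_sigmoid(x, lam), 3))
```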

Anyway, despite a few hiccups and a lot of head-scratching, Unit 2 does cover some fascinating stuff. I particularly liked the bits about input spaces and weight spaces. I’m generally not great at drawing or interpreting diagrams, but I found the illustrations of 2D and 3D input spaces and error surfaces really helpful.

2D input space with one decision boundary - not as cool to look at as the 3D version, but this is about as far as my graphics skills stretch, I'm afraid!

And it’s pretty mind-blowing to think about these spaces extending into 4+ dimensions. The input space pictured here is 2D, and it’s being divided into two regions by a decision boundary which is a 1D line; similarly, a 3D input space (i.e. a cube) would have a decision boundary which would be a 2D plane, and that’s fairly easy to visualise. But imagine a 4D hypercube input space being intersected by a 3D hyperplane decision boundary! That’s pretty awesome.
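
To make the 2D case concrete, here’s a tiny Python sketch (the weights and bias are invented for illustration, not taken from the unit): a single perceptron with two inputs assigns a point to a class according to which side of the line w1·x1 + w2·x2 + b = 0 it falls on, and that line is the 1D decision boundary cutting the 2D input space into two regions.

```python
import numpy as np

# Hypothetical weights and bias for a two-input perceptron (illustrative values only).
w = np.array([1.0, -2.0])
b = 0.5

def classify(x):
    """Class 1 on one side of the boundary w.x + b = 0, class 0 on the other."""
    return 1 if np.dot(w, x) + b >= 0 else 0

# The decision boundary itself is the 1D line x2 = -(w[0]*x1 + b) / w[1].
for x1 in (-1.0, 0.0, 1.0):
    x2 = -(w[0] * x1 + b) / w[1]
    print(f"boundary passes through ({x1:+.1f}, {x2:+.2f})")

print(classify(np.array([0.0, 0.0])))   # 0.5 >= 0, so class 1
print(classify(np.array([-2.0, 1.0])))  # -2 - 2 + 0.5 = -3.5 < 0, so class 0
```

The same idea carries straight up the dimensions: with three inputs the boundary is a 2D plane in a 3D input space, and with four it’s the 3D hyperplane mentioned above.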

I wonder sometimes, though, if the tutors on courses like M366 and the various OU maths courses ever get jaded about the thrill of trying to visualise higher-dimensional polyhedra? I think perhaps after several years of students like me going “wooo, hypercubes!”, they might get a bit tired of it. Then again, I suppose “wooo, hypercubes!” is better than “meh”, which is pretty much how I was feeling about M366 during the previous block. So, woooooo hypercubes!
