
Functional Fixedness in Machine Learning

I've noticed this post going around today, which is basically a reinforcement* of the potential for unsupervised learning.



As machines make light work of more and more human tasks, people will finally realise the 'human bar' wasn't actually set that high.


'Human-level intelligence' is held up as the benchmark, but this is comparing apples with... sponges. Intelligence doesn't have to mean human, and there's nothing wrong with that; heck, only a few swings ago we were essentially monkeys.


In psychology, 'functional fixedness' is a cognitive bias that stops us from using an object in any way beyond its intended purpose. This confined thinking prevents people from solving problems with the tools to hand.


I suspect machine learning as a discipline could be suffering from this bias: we think of 'intelligence' in terms of its traditional form - human - so we limit what we're striving to achieve. Perhaps human-machine symbiosis should be the focus instead.


Then, I wonder: if computers do more of the thinking, will we be doing more of the feeling?


It's also a nice example of a point I made previously, in the post titled 'Nature's Fingerprint' - that neuro-psychology holds secrets to cracking ML.


*see what I did there?


