Thursday, August 29, 2019

Black box

An unpopular quote from my image and video processing professor: "The only problem with machine learning is that the machine does the learning and you don't."

While I understand that the quote misses a lot of nuance, it has stuck with me over the past few years, as I feel like I am missing out on the cool machine learning work going on out there.
There is a ton to learn about calculus, probability, and statistics when doing machine learning, but I can't shake the feeling that at the end of the day, the output is basically a black box. As you start toying with AI, you realize that the only way to learn from your architecture and results is by tuning parameters through trial and error.
Of course there are many applications that only AI can solve, which is all well and good, but I'm curious to hear from some heavy machine learning practitioners: what is exciting to you about your work?
This is a serious inquiry, because I want to know whether it's worth exploring again. The university AI classes I took in the past just bored me: writing tiny programs that leveraged AI libraries to classify images, make simple predictions, and so on.

The complaint that a neural network, or deep learning model, is a "black box", meaning you can't see why the model produces a particular output, seems quite weak to me. Losing interest in the field because of that is worse; it is like saying we shouldn't study neuroscience because we don't understand how memories are stored in the nervous system.

The box is not black. It is just a very complicated box, with lots of weights and layers. You can look at the weights, inputs, and layers and see exactly what simple math applied to the inputs led to the output. It might take you a while, but it is all there. We are building tools all the time to make that process more efficient, but, unlike with a brain, no one is stopping you from reaching into the box and taking a look.
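To make "reaching into the box" concrete, here is a minimal sketch of tracing a tiny two-layer network by hand. The weights are made up for illustration; in a real framework you would pull them out with something like Keras's `model.get_weights()` or PyTorch's `model.parameters()`. Every intermediate value is plain arithmetic you can print and check.

```python
import numpy as np

# Hypothetical weights for a tiny network: 2 inputs -> 2 hidden units -> 1 output.
W1 = np.array([[0.50, -1.00],
               [0.25,  0.75]])   # first-layer weights
b1 = np.array([0.1, -0.2])       # first-layer biases
W2 = np.array([1.0, -0.5])       # second-layer weights
b2 = 0.05                        # second-layer bias

def relu(z):
    # Standard ReLU activation: clamp negatives to zero.
    return np.maximum(0.0, z)

x = np.array([2.0, 3.0])         # the input we want to explain

# The "black box", unrolled into three lines of simple math:
z1 = W1 @ x + b1                 # hidden pre-activations: [-1.9, 2.55]
h = relu(z1)                     # hidden activations:     [0.0, 2.55]
y = W2 @ h + b2                  # final output:           -1.225

print("pre-activations:", z1)
print("hidden:", h)
print("output:", y)
```

Nothing here is hidden: you can see that the first hidden unit was zeroed out by the ReLU, so only the second unit (with its negative output weight) drove the result. Real models just have many more of these lines, which is why interpretability tooling exists, but the arithmetic is all inspectable.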
