
TensorFlow-Keras or: How I Learned to Stop Worrying and Love Linear Regression (Part 2)

Continuing from last week’s post, I’ll discuss how working with TensorFlow and Keras taught me the real reason Machine Learning is so often referred to as Artificial Intelligence (beyond the fact that artificial intelligence sounds cooler). It’s because, oftentimes, any sense of Intelligence your models have is entirely Artificial: the machine has merely tricked you into believing that it actually works, when in fact it is a terrible model and you should feel bad for bringing it into existence.

As I mentioned last week, Machine Learning exposes just how literal-minded computers are when left to their own interpretations. Much like someone answering ‘11’ when asked ‘What do you get when you put 1 and 1 together?’, models can oftentimes draw conclusions that are correct on some deeper, literal level, but completely wrong in every other regard.

Case in point: when trying to create a model to predict the ticket risk for any particular geographic point within the geoboundaries of the data, my random sample point generator (by virtue of the spread-out density of Los Angeles and its parking habits) produced risk labels that were 0 about 90% of the time. This is a fact whose gravity I only truly came to understand later. At first, I was amazed that when I began training the model on the randomly sampled data, its accuracy quickly rose to around 85-90% and roughly hovered there no matter how long I trained. But when it came time to actually use the model to make predictions, the results seemed badly skewed.

Eventually, I came to realize what was actually happening. If you give the model data where the correct output is 0 90% of the time, it very quickly learns to be correct 90% of the time by simply always guessing 0. While obvious in retrospect, my error came down to attributing human logic and reasoning to the model, instead of recognizing how primitive machine learning still is.
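To make that failure mode concrete, here is a minimal sketch (not my original project code, and using made-up stand-in data rather than the actual LA parking dataset) of a tiny Keras model trained on labels that are roughly 90% zeros. Its reported accuracy hovers near 90% even though it has effectively learned to predict 0 for everything:

```python
# Minimal sketch: accuracy looks great on imbalanced labels even when the
# model has learned nothing useful. Data here is synthetic stand-in data.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)

# Hypothetical stand-in "coordinates" with labels that are 0 about 90% of the time.
X = rng.uniform(size=(10_000, 2)).astype("float32")
y = (rng.uniform(size=10_000) < 0.1).astype("float32")  # ~10% ones, ~90% zeros

model = tf.keras.Sequential([
    tf.keras.Input(shape=(2,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, verbose=0)

# Accuracy sits near 0.90 because predicting "no risk" everywhere is right 90% of the time.
preds = (model.predict(X, verbose=0) > 0.5).astype("float32").flatten()
print("accuracy:", (preds == y).mean())
print("fraction of predictions that are 0:", (preds == 0).mean())
```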

Ultimately, I found that what makes or breaks a machine learning model is not so much the model itself as the data you’re feeding it. If your data is too noisy or too skewed, you cannot build any reasonable model from it, at least at the current level of technological advancement in machine learning.
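One sanity check I’d suggest in hindsight (my suggestion, not something I did in the original project): compare your model against a majority-class baseline, for example scikit-learn’s DummyClassifier. If your trained model’s accuracy matches the dummy’s, it has probably learned nothing beyond “always guess the most common label”:

```python
# Sanity check: what accuracy do you get by always guessing the majority class?
import numpy as np
from sklearn.dummy import DummyClassifier

rng = np.random.default_rng(0)
X = rng.uniform(size=(10_000, 2))
y = (rng.uniform(size=10_000) < 0.1).astype(int)  # ~90% zeros, as in the ticket-risk labels

baseline = DummyClassifier(strategy="most_frequent")
baseline.fit(X, y)
print("majority-class baseline accuracy:", baseline.score(X, y))  # roughly 0.90
```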

The most positive takeaway I can draw from these trials and tribulations is that, for all the fear of a ‘rogue AI’ in the current media and collective consciousness, humanity has little to fear for now, unless the key to humanity’s downfall is to just always guess 0.
