Finding surveillance planes using random forests#
The story:
- https://www.buzzfeednews.com/article/peteraldhous/spies-in-the-skies
- https://www.buzzfeednews.com/article/peteraldhous/hidden-spy-planes
This story, by Peter Aldhous at BuzzFeed News, involved training a machine learning algorithm to recognize government surveillance planes based on what their flight patterns look like.
Topics: Random Forests
Datasets
- feds.csv: Transponder codes of planes operated by the federal government
- planes_features.csv: various features describing each plane's flight patterns
- train.csv: a labeled dataset of transponder codes and whether each plane is a surveillance plane or not
  - The label column was originally class, but I renamed it because pandas freaks out a bit with a column named class
  - This was created by BuzzFeed
- data dictionary: You can find the data dictionary published with their analysis here
- a few other files
What's the goal?#
The FBI and Department of Homeland Security operate many planes that are not directly labeled as belonging to the government. If we can uncover these planes, we have a better idea of the surveillance activities they are undertaking.
Imports#
Also set a large maximum number of columns so pandas shows us everything.
import pandas as pd
pd.set_option("display.max_columns", 100)
Read in our data#
Almost all classification problems start with a set of labeled features. In this case, the features are in one CSV file and the labels are in another. Read both files in and merge them on adshex, the transponder code.
# Read in your features
features = pd.read_csv("data/planes_features.csv")
features.head()
# Read in your labels
labeled = pd.read_csv("data/train.csv").rename(columns={'class': 'label'})
labeled.head()
df = labeled.merge(features, on='adshex')
df.head()
No wait, merge them again!#
We have features for about 20,000 planes but labels for only about 600 planes. When you merge, the planes you have features for but not labels for will disappear.
We want to keep those in the dataframe so we can play detective with them later and try to find surveillance planes using the features. When you merge, use how='left' or how='right' to keep unmatched rows from the left (or right) dataframe.
df = labeled.merge(features, on='adshex', how='right')
Confirm you have 19,799 rows and 34 columns.
df.shape
df.label.value_counts()
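If the raw counts are hard to judge, value_counts can also show the split as fractions of the whole - an optional extra view, not part of the original analysis:
# Show the label split as fractions instead of raw counts
df.label.value_counts(normalize=True)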
How do you feel about that split?
Prepare this column for machine learning. What's wrong with it as "surveil" and "other"? Add a new column that we can use for classification.
# Replace label with numbers
df['label'] = df.label.replace({
    'surveil': 1,
    'other': 0
})
df.head()
Categorical variables#
Do we have any variables that count as categories? Yes, we do! ...but how many different categories does it have?
- Tip: You can use .unique() or .value_counts() to count unique items, depending on what you're looking for
df.type.value_counts()
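If all you want is the number of distinct types rather than the full tally, pandas also has .nunique() - a quick optional check:
# Count how many distinct plane types exist
df.type.nunique()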
Most of those plane types appear only once, which means they wouldn't be very helpful identifiers in the final analysis. For example, if I only see one GLF5 and it's a surveillance plane, does that mean the next one I see is probably a surveillance plane? With such a small sample size, I have no idea!
We have a few options:
- Create a very large set of dummy variables out of all 133 types of planes
- Create 0/1 columns for common plane types - C182, T206, SR22 - and ignore the less common ones
- Interview someone who knows something about planes and put these into a few broader categories
- Keep them as one column, just turn them into numbers - it doesn't make sense in terms of order, but if one or two plane types are very indicative of a surveillance plane the forest might pick it up
Oddly enough, the last one is a common approach. Let's use it!
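For comparison, the first option - dummy variables - would be a one-liner with pd.get_dummies, at the cost of adding a hundred-plus columns. A minimal sketch of what we're not doing:
# One 0/1 column per plane type (we skip this approach - too many columns)
dummies = pd.get_dummies(df.type, prefix='type')
df_with_dummies = df.join(dummies)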
If you want to convert a list of categories into numbers, an easy way is to use the Categorical data type.
df.type = df.type.astype('category')
df.type.head()
It looks like a normal bunch of strings, but pandas is secretly using a number for each one! You can find the number with .cat.codes.
Use df.type.cat.codes to make a new column called type_code.
df['type_code'] = df.type.cat.codes
df[['type', 'type_code']].head(10)
We'll use type_code for machine learning since sklearn needs a number, and type for reading since we like text.
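If you ever need to go the other direction - from a code back to a readable plane type - the category list preserves the mapping. A small optional sketch:
# Position in .cat.categories matches the .cat.codes number
code_to_type = dict(enumerate(df.type.cat.categories))
code_to_type[0]  # the plane type stored as code 0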
Building our classifier#
When we're about to classify, we usually just drop our target column to build our inputs and outputs:
X = train_df.drop(columns='column_you_are_predicting')
y = train_df.column_you_are_predicting
This time is a little different. First, we have unlabeled data in there! Use .dropna() to filter your training data so we only have labeled data.
Confirm train_df has 597 rows and 35 columns.
train_df = df.dropna()
train_df.shape
We also have a few extra columns that we aren't using for classification (like the text version of the type column and the transponder code). It's fine to drop multiple columns here, it's just a little bit messier: you have to make sure you're dropping all the right ones.
Do a .head() to double-check all of the columns you need to drop when creating your X.
df.head(2)
Create your X and y#
When you do train_df.drop, you'll want to remove more than just your 0/1 surveillance label. What other columns do you not want to use as input? Maybe some categories you converted into codes?
X = train_df.drop(columns=['adshex', 'type', 'label'])
y = train_df.label
Triple-check that X is a list of numeric features and y is a numeric label.
X.head(2)
y.head(2)
Split into test and train datasets#
We could be nice and lazy and use all our data for training, but it just isn't right! Taking a test using the exact same questions you studied is just cheating. Split your data into test and train.
- Tip: Don't do this manually! There's a method for it in sklearn
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y)
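By default you'll get a different split every time you run this cell. If you'd like reproducible results while debugging, you can pass a random_state - the 42 below is an arbitrary choice, not something the original analysis requires:
# Fix the shuffle so the split is the same on every run
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)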
Classify using a logistic classifier#
Train your classifier#
Build a LogisticRegression and fit it to your data, making sure you're training using only X_train and y_train.
- Tip: You'll want to give LogisticRegression an extra argument of max_iter=4000 - it means "work a little harder than you expect," because otherwise it won't find an answer (by default it only has a max_iter of 100)
from sklearn.linear_model import LogisticRegression
clf = LogisticRegression(C=1e9, solver='lbfgs', max_iter=4000)
clf.fit(X_train, y_train)
Examine the coefficients#
What does it mean? What features is the classifier using? Do you care about the odds ratio? What is even the point of this LogisticRegression thing?
import numpy as np
feature_names = X.columns
coefficients = clf.coef_[0]
pd.DataFrame({
    'feature': feature_names,
    'coefficient (log odds ratio)': coefficients,
    'odds ratio': np.exp(coefficients)
}).sort_values(by='odds ratio', ascending=False)
If we don't care about the odds ratio, using the eli5 package can shrink our code by a lot (and give us color!)
import eli5
feature_names = list(X.columns)
# Use this line instead for wonderful warnings about the results
# eli5.show_weights(clf, feature_names=feature_names, show=eli5.formatters.fields.ALL)
eli5.show_weights(clf, feature_names=feature_names)
How well does our classifier perform?#
Let's take a look at the confusion matrix to see how well this classifier finds surveillance planes. Make sure you're using y_test and X_test, not the full dataset.
from sklearn.metrics import confusion_matrix
y_true = y_test
y_pred = clf.predict(X_test)
matrix = confusion_matrix(y_true, y_pred)
label_names = pd.Series(['not surveil', 'surveil'])
pd.DataFrame(matrix,
             columns='Predicted ' + label_names,
             index='Is ' + label_names)
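Raw counts can be hard to compare across classes, especially with so few surveillance planes. As an optional extra, sklearn's classification_report turns the same predictions into per-class precision and recall:
from sklearn.metrics import classification_report

# Precision, recall and F1 for each class, using the same test predictions
print(classification_report(y_true, y_pred, target_names=['not surveil', 'surveil']))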
Classify using a decision tree#
Now we'll use a decision tree. This is how you make one:
from sklearn.tree import DecisionTreeClassifier
clf = DecisionTreeClassifier()
But it's up to you to teach it what spy planes look like using your training data.
If we use max_depth= to limit the depth of the tree, it will help us visualize it. For example, max_depth=5 will only allow the tree to make five levels of decisions.
Make a decision tree and fit it to your data. Use a max_depth= of something between 2 and 5.
from sklearn.tree import DecisionTreeClassifier
clf = DecisionTreeClassifier(max_depth=5)
clf.fit(X_train, y_train)
What are the important features?#
We'll use slightly different code for a decision tree, as it likes to draw big pictures if we don't stop it. The code looks like this:
import eli5
feature_names = list(X.columns)
eli5.show_weights(clf, feature_names=feature_names, show=['description', 'feature_importances'])
Understanding the output#
Why is the feature importance different than for logistic regression?
Also, if you don't specify a max_depth, that's a LOT of zeroes! It doesn't even use most of the features! Why not?
# Because it's a different algorithm
# Because a tree only splits on a handful of useful features, so the rest score zero
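If you'd like to see this directly, you can count how many features received a nonzero importance in the fitted tree - an optional sketch:
# How many of our features does the fitted tree actually split on?
used = (clf.feature_importances_ > 0).sum()
print(used, "of", len(X.columns), "features used")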
How well does the tree perform?#
Display another confusion matrix with your new classifier.
from sklearn.metrics import confusion_matrix
y_true = y_test
y_pred = clf.predict(X_test)
matrix = confusion_matrix(y_true, y_pred)
label_names = pd.Series(['not surveil', 'surveil'])
pd.DataFrame(matrix,
             columns='Predicted ' + label_names,
             index='Is ' + label_names)
Visualize the tree#
You can use eli5 to visualize the decision tree itself! It usually takes up too much space, but since it's a special occasion we'll let it go.
feature_names = list(X.columns)
label_names = ['not surveillance', 'surveillance']
eli5.show_weights(clf, feature_names=feature_names, target_names=label_names, show=['decision_tree'])
If you'd like your graph to have colors, or to not use eli5, you can do it the old-fashioned way. You might need to brew install graphviz and pip install graphviz.
from sklearn import tree
import graphviz

label_names = ['not surveillance', 'surveillance']
feature_names = X.columns
dot_data = tree.export_graphviz(clf,
                                feature_names=feature_names,
                                filled=True,
                                class_names=label_names)
graph = graphviz.Source(dot_data)
graph
- Tip: You'll probably need to scroll sideways a bit
One more classifier: Random forest#
Build and train your classifier#
We can build a random forest classifier like this:
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier()
But you're in charge of fitting it to your training data!
- Tip: You can also set max_depth here, but you won't be able to visualize the result.
- Tip: Increase n_estimators to 100 to make a better classifier.
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier(n_estimators=100, max_depth=5)
clf.fit(X_train, y_train)
What are the important features?#
feature_names = list(X.columns)
eli5.show_weights(clf, feature_names=feature_names, show=['description', 'feature_importances'])
Understanding the output#
What is a random forest, and why is the feature importance different than for the decision tree? Isn't a random forest just like a decision tree or something?
# It's a lot of decision trees that all work together, so it'll even try to use less useful features
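If you're curious, a fitted random forest really is just a list of decision trees that you can inspect through .estimators_ - an optional peek, not part of the main analysis:
# The forest is literally a list of fitted decision trees
print(len(clf.estimators_))  # one tree per n_estimators

# Each tree votes on its own; the forest combines those votes
first_tree = clf.estimators_[0]
first_tree.predict(X_test.head().to_numpy())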
How well does it perform?#
from sklearn.metrics import confusion_matrix
y_true = y_test
y_pred = clf.predict(X_test)
matrix = confusion_matrix(y_true, y_pred)
label_names = pd.Series(['not surveil', 'surveil'])
pd.DataFrame(matrix,
             columns='Predicted ' + label_names,
             index='Is ' + label_names)
How confident do you feel in the model?#
# Very confident
Actually finding spy planes#
Now let's actually try to find our spy planes.
Retrain our model#
When we did test/train split, we trained our model with only a subset of our data so we could test with the rest. Now that we're working in the "real world," we want to re-train it using not just the _train and _test data, but everything we have labels for.
clf.fit(X, y)
Filter for planes we want to predict#
We have a dataframe of features that includes three types of planes:
- Those that are labeled as surveillance planes
- Those that are labeled as not surveillance
- Those that aren't labeled
Which do we want to make predictions for? Filter a new dataframe that's just those.
- Tip: Scroll up to see where you created your train_df, it's the opposite!
real_df = df[df.label.isna()]
How many planes do you have in that list? Confirm it's about 19,200.
real_df.shape
Predicting#
Build your X - remember you need to drop a few columns - and use that to make a prediction for each plane.
Assign the prediction into the predicted column.
- Tip: Scroll up to see where you created your features for training, it's similar
- Tip: pandas will yell at us about setting values on copies of a slice but it's fine
X = real_df.drop(columns=['label', 'adshex', 'type'])
real_df['predicted'] = clf.predict(X)
How many planes did it predict to be surveillance planes?#
It should be roughly around 70-80 planes.
real_df[real_df.predicted == 1].shape
But... what about those other ones? The ones that are just below the threshold?#
The cutoff for a prediction of 1 is 50%, but since we have a lot of time we're interested in investigating the top 200. To get the probability for each row, use clf.predict_proba instead of clf.predict. Also, to get the predicted probability for the 1 category, you'll need to add [:,1] to the end, like this:
clf.predict_proba(***your features***)[:,1]
Create a new column called predicted_prob that is the chance that the plane is a surveillance plane.
- Tip: You dropped three columns when using clf.predict, but if you drop the same three you'll get an error now. There's now an extra column that you'll need to drop! What is it?
# Predict the probability it's in the class represented by '1'
real_df['predicted_prob'] = clf.predict_proba(real_df.drop(columns=['label', 'adshex', 'type', 'predicted']))[:,1]
real_df.head()
Get the top 200 predictions#
Take a look at what the probabilities look like, showing the top 200 planes that are most likely to be surveillance planes.
Then save them to a file for later research.
top_predictions = real_df.sort_values(by='predicted_prob', ascending=False).head(200)
top_predictions
top_predictions.to_csv("planes-to-research.csv")
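One optional tweak: by default to_csv also writes the dataframe index as an unnamed first column; passing index=False keeps the file to just the real columns:
# Skip the pandas index so the CSV only contains actual columns
top_predictions.to_csv("planes-to-research.csv", index=False)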
Questions#
Question 1#
What kind of machine learning are we doing here, and why are we doing it?
# Classification (or supervised learning) because we have labels
Question 2#
What are a few different ways you can deal with categorical data? Think about how we dealt with race in the Reveal regression compared to how we dealt with type in this dataset.
# You can one-hot encode them if you have few
# You can just make them numbers if you have a lot
Question 3#
Every time we ran a machine learning algorithm on our dataset, we looked at feature importance.
- When might it be important to explain what our model found important?
- When might it not be important?
# If we're trying to understand what's going wrong or why it is/isn't working well
# It's more important if we're presenting this to the public
Question 4#
Using words and not column names, describe what the machine learning algorithm found to be important when identifying surveillance planes.
# Slow speed, constant turning vs going straight
Question 5#
Why did we use test/train split when it would have been more effective to give our model all of the data from the start?
# Shouldn't test on things that it's already seen
Question 6#
Why did we use a random forest instead of a decision tree or logistic regression? Was there something about the data?
# Because it did a better job!!!
Question 7#
Why did we use probability instead of just looking for planes with a predicted value of 1? It seems like we should have just trusted the algorithm, right?
# The 0/1 is an arbitrary cutoff of 50%, we're fine going lower because it gives us more to research
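For example, once predicted_prob exists you can pick whatever cutoff fits your reporting time - the 0.3 below is an arbitrary illustration, not a number from the story:
# A looser cutoff than the default 50% flags more planes worth a look
suspicious = real_df[real_df.predicted_prob > 0.3]
len(suspicious)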
Question 8#
What if our random forest or input dataset were flawed? What would be the repercussions?
# We'd be investigating a bunch of planes that didn't need to be investigated
Question 9#
The government could claim that we're threatening national security by publishing this paper as well as publishing this code - now anyone could look for planes that are surveilling them. What do you think?
# Up to you!
Question 10#
We're using data from the past, but you can get real-time flight data from many services. Can you think of any uses for this algorithm using real-time instead of historical data?
# Finding out when something crazy is going on police-wise, maybe
Question 11#
This isn't a question, but if you look at candidates.csv and candidates-annotates.csv you can see how BuzzFeed did their research after finding a list of suspicious planes.
# k