9 Review

We took a look at the FOIA Predictor, a website that lets you enter your FOIA request and see what the chances are that it will be granted.

We learned that this kind of problem is an example of classification. Classification problems involve a machine learning algorithm predicting a label or category (successful or denied) based on a set of details about the thing, called features (average sentence length, destination agency, etc.).
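To make the vocabulary concrete, here is one made-up FOIA request represented as features plus a label (the values and feature names are invented for illustration, not taken from the real dataset):

```python
# One FOIA request: the features describe the request,
# the label is what the classifier tries to predict
request = {
    "avg_sentence_length": 18.2,   # feature
    "num_words": 240,              # feature
    "agency": "EPA",               # feature
}
label = "granted"                  # the category we want to predict
```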

Classification algorithms are tested in a way similar to a teacher giving out practice tests to study with. She knows the answers to all of the questions, just like we know whether each FOIA request was granted or not. We teach (or train) the algorithm on some of the FOIA requests we have, allowing it to figure out the difference between a successful and unsuccessful request. Once that’s done, we test it on the ones it hasn’t seen yet!
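The train-then-test idea can be sketched with scikit-learn's train_test_split. The features and labels below are synthetic stand-ins for the FOIA data, not the chapter's real columns:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-ins for FOIA features (sentence length, word count, ...)
rng = np.random.default_rng(0)
X = rng.random((200, 3))
y = (X[:, 0] > 0.5).astype(int)  # 1 = granted, 0 = denied (made up)

# Hold back 25% of the requests as the unseen "practice test"
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)

# Train on the requests the algorithm is allowed to study...
clf = LogisticRegression().fit(X_train, y_train)

# ...then score it on the ones it has never seen
print(clf.score(X_test, y_test))
```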

The percentage of predictions the algorithm gets right is called accuracy. In this case it was easy to get a 70% score as long as you always guessed “rejected,” which means that accuracy is not the best measurement you can use when testing an algorithm. Instead, we turned to confusion matrices.
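You can see the always-guess-rejected trick in a few lines (the 70/30 split here mirrors the chapter's numbers, but the labels are synthetic):

```python
import numpy as np
from sklearn.metrics import accuracy_score

# 100 synthetic requests: 70 denied (0), 30 granted (1)
y_true = np.array([0] * 70 + [1] * 30)

# A "model" that always guesses denied still scores 70% accuracy
always_denied = np.zeros_like(y_true)
print(accuracy_score(y_true, always_denied))  # 0.7
```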

A confusion matrix is a grid of boxes that breaks down the algorithm's performance in more detail. It shows how many successful requests were predicted as each category, and the same for unsuccessful requests.
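scikit-learn builds the grid for you with confusion_matrix; the labels below are a toy example:

```python
from sklearn.metrics import confusion_matrix

# Synthetic example: 0 = denied, 1 = granted
y_true = [1, 1, 1, 0, 0, 0, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 0, 0]

# Rows are the actual category, columns the predicted one:
# row 0 = actually denied, row 1 = actually granted
cm = confusion_matrix(y_true, y_pred)
print(cm)
# [[4 1]     4 denied predicted denied, 1 denied predicted granted
#  [1 2]]    1 granted predicted denied, 2 granted predicted granted
```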

We compared different kinds of classifiers with our dataset, and used the eli5 package to examine the reasons each classifier came to its decisions. This concept is called explainability. The classifiers included k-nearest neighbors, logistic regression, decision tree, and random forest.
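Comparing the four classifiers is mostly a matter of fitting each one and checking its score. This sketch uses a synthetic dataset from make_classification rather than the FOIA data, and only mentions eli5 in a comment since it may not be installed:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the FOIA dataset
X, y = make_classification(n_samples=300, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

classifiers = {
    "k-nearest neighbors": KNeighborsClassifier(),
    "logistic regression": LogisticRegression(),
    "decision tree": DecisionTreeClassifier(random_state=0),
    "random forest": RandomForestClassifier(random_state=0),
}
for name, clf in classifiers.items():
    clf.fit(X_train, y_train)
    print(name, clf.score(X_test, y_test))

# With eli5 installed, eli5.show_weights(clf, feature_names=[...])
# displays which features a fitted classifier leans on
```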

K-nearest neighbors unfortunately couldn’t explain itself, but the others did a good job of listing which features they found to be important. It turned out that all of the classifiers were leaning heavily on which agency the request was sent to, so we tried removing that feature and running the algorithm again. Logistic regression failed completely without this information, but the others still did a fine job predicting.
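The drop-a-feature experiment can be sketched with a random forest's built-in feature_importances_ (which is what eli5 reports for tree models). Everything here is synthetic: column 0 is a made-up stand-in for the dominant "agency" feature:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic data where column 0 plays the role of the dominant feature
rng = np.random.default_rng(1)
X = rng.random((300, 4))
y = (2 * X[:, 0] + X[:, 1] > 1.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(clf.feature_importances_)  # column 0 should dominate

# Drop the dominant column and refit to see how the model copes without it
X_train_cut = np.delete(X_train, 0, axis=1)
X_test_cut = np.delete(X_test, 0, axis=1)
clf_cut = RandomForestClassifier(random_state=0).fit(X_train_cut, y_train)
print(clf_cut.score(X_test_cut, y_test))
```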

Finally, we looked at how to use eli5 to explain individual predictions. Because random forests can reflect complicated interactions, we weren’t able to get advice on how to improve our FOIA requests, but we did get to see which features were the most important.
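For a single request, the simplest view is the model's own class probabilities via predict_proba; eli5's per-prediction breakdown is mentioned only in a comment since it may not be installed. The data here is synthetic:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for the FOIA dataset
X, y = make_classification(n_samples=200, n_features=4, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X, y)

# The forest's confidence in each class for one specific request
print(clf.predict_proba(X[:1]))

# With eli5 installed, eli5.explain_prediction(clf, X[0]) goes further,
# breaking this single prediction into per-feature contributions
```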