# Logistic Regression Quickstart

Already know what's what with logistic regression, just need to know how to tackle it in Python? We're here for you! If not, continue on to the next section.

We're going to **ignore the nuance of what we're doing** in this notebook, it's really just for people who need to see the process.

## Pandas for our data

As is typical, we'll be using pandas dataframes for the data.

```python
import pandas as pd
import numpy as np

df = pd.DataFrame([
    { 'length_in': 55, 'completed': 1 },
    { 'length_in': 55, 'completed': 1 },
    { 'length_in': 55, 'completed': 1 },
    { 'length_in': 60, 'completed': 1 },
    { 'length_in': 60, 'completed': 0 },
    { 'length_in': 70, 'completed': 1 },
    { 'length_in': 70, 'completed': 0 },
    { 'length_in': 82, 'completed': 1 },
    { 'length_in': 82, 'completed': 0 },
    { 'length_in': 82, 'completed': 0 },
    { 'length_in': 82, 'completed': 0 },
])
df
```

## Performing a regression

The statsmodels package is your best friend when it comes to regression. In theory you can do it using other techniques or libraries, but statsmodels is just *so simple*.

For the regression below, I'm using the formula method of describing the regression. If that makes you grumpy, check the regression reference page for more details.

```python
import statsmodels.formula.api as smf

model = smf.logit("completed ~ length_in", data=df)
results = model.fit()
results.summary()
```
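If you only need a couple of specific numbers rather than the whole summary table, the fitted results object exposes them as attributes (these are standard statsmodels attributes; the data below just repeats the frame from above so the cell stands on its own):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Same scarf data as above
df = pd.DataFrame({
    'length_in': [55, 55, 55, 60, 60, 70, 70, 82, 82, 82, 82],
    'completed': [1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0],
})

results = smf.logit("completed ~ length_in", data=df).fit()

print(results.params)     # log-odds coefficients
print(results.pvalues)    # p-value for each term
print(results.prsquared)  # McFadden's pseudo R-squared
```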

### Converting coefficients to odds ratios

```python
coefs = pd.DataFrame({
    'coef': results.params.values,
    'odds ratio': np.exp(results.params.values),
    'pvalue': results.pvalues,
    'name': results.params.index
})
coefs
```

For each additional inch I add to a scarf, my odds of finishing are about 94% of what they were before (i.e., lowered by about 6%).
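To get a feel for what that per-inch odds ratio means over a whole scarf, you can compound it (the 0.94 here is the rounded odds ratio from the table above):

```python
# Per-inch odds ratio, rounded from the regression output above
odds_ratio = 0.94

# Making a scarf 10 inches longer multiplies the odds of finishing by
print(odds_ratio ** 10)  # about 0.54, so the odds are nearly halved
```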

## Making predictions

```python
X_unknown = pd.DataFrame([
    { 'length_in': 20 },
    { 'length_in': 55 },
    { 'length_in': 80 },
    { 'length_in': 100 }
])
X_unknown['prediction'] = results.predict(X_unknown)
X_unknown
```
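Note that `.predict` gives you probabilities, not yes/no answers. If you want hard 0/1 labels, a common sketch is to threshold at 0.5 yourself (the probabilities below are made up, standing in for real `results.predict` output):

```python
import pandas as pd

X_unknown = pd.DataFrame({
    'length_in': [20, 55, 80, 100],
    # Made-up probabilities standing in for results.predict() output
    'prediction': [0.97, 0.77, 0.45, 0.18],
})

# Call it completed if the predicted probability is at least 0.5
X_unknown['predicted_label'] = (X_unknown['prediction'] >= 0.5).astype(int)
print(X_unknown)
```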

## Multivariable regression

Multivariable regression is easy-peasy. We're going to add the size of our needles to our dataset. Larger needles make work go faster, so lazy people like me are more likely to finish.

```python
df = pd.DataFrame([
    { 'length_in': 55, 'large_gauge': 1, 'completed': 1 },
    { 'length_in': 55, 'large_gauge': 0, 'completed': 1 },
    { 'length_in': 55, 'large_gauge': 0, 'completed': 1 },
    { 'length_in': 60, 'large_gauge': 0, 'completed': 1 },
    { 'length_in': 60, 'large_gauge': 0, 'completed': 0 },
    { 'length_in': 70, 'large_gauge': 0, 'completed': 1 },
    { 'length_in': 70, 'large_gauge': 0, 'completed': 0 },
    { 'length_in': 82, 'large_gauge': 1, 'completed': 1 },
    { 'length_in': 82, 'large_gauge': 0, 'completed': 0 },
    { 'length_in': 82, 'large_gauge': 0, 'completed': 0 },
    { 'length_in': 82, 'large_gauge': 1, 'completed': 0 },
])
df
```

```python
model = smf.logit("completed ~ length_in + large_gauge", data=df)
results = model.fit()
results.summary()
```

### Converting coefficients to odds ratios

```python
coefs = pd.DataFrame({
    'coef': results.params.values,
    'odds ratio': np.exp(results.params.values),
    'pvalue': results.pvalues,
    'name': results.params.index
})
coefs
```

Using large gauge needles roughly doubles your odds of finishing a project (an odds ratio of about 2.15)!

```python
import math

# Switching from small to large gauge needles
# is equivalent to how many inches?
math.log(2.15, 1.08)
```
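That two-argument `math.log(value, base)` is just `log(value) / log(base)`. Written out with named variables (the odds ratios here are hypothetical, matching the numbers in the cell above), the logic is:

```python
import math

# Odds ratios matching the cell above (hypothetical values)
gauge_odds_ratio = 2.15  # effect of switching to large needles
inch_odds_ratio = 1.08   # effect of making the scarf one inch shorter

# Inches of scarf one needle upgrade is "worth"
inches_equivalent = math.log(gauge_odds_ratio) / math.log(inch_odds_ratio)
print(round(inches_equivalent, 1))  # about 9.9 inches
```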

```python
X_unknown = pd.DataFrame([
    { 'length_in': 60, 'large_gauge': 1 },
    { 'length_in': 60, 'large_gauge': 0 },
    { 'length_in': 70, 'large_gauge': 1 },
    { 'length_in': 70, 'large_gauge': 0 },
])
X_unknown['prediction'] = results.predict(X_unknown)
X_unknown
```

There you go!

If you'd like more details, you can continue on in this section. If you'd just like the how-to-do-an-exact-thing explanations, check out the regression reference page.
