Reducing Traffic Mortality in the USA

    1. The raw data files and their format

    While the rate of fatal road accidents has been decreasing steadily since the 1980s, the past ten years have seen a stagnation in this reduction. Coupled with the increase in the number of miles driven in the nation, the total number of traffic-related fatalities has now reached a ten-year high and is rapidly increasing.

    At the request of the US Department of Transportation, we are currently investigating how to derive a strategy to reduce the incidence of road accidents across the nation. By looking at the demographics of traffic accident victims for each US state, we find that there is substantial variation between states. Now we want to understand whether there are patterns in this variation in order to derive suggestions for a policy action plan. In particular, instead of implementing a costly nationwide plan, we want to focus on groups of states with similar profiles. How can we find such groups in a statistically sound way and communicate the result effectively?

    To accomplish these tasks, we will make use of data wrangling, plotting, dimensionality reduction, and unsupervised clustering.

    The data given to us was originally collected by the National Highway Traffic Safety Administration and the National Association of Insurance Commissioners. This particular dataset was compiled and released as a CSV file by FiveThirtyEight under the CC BY 4.0 license.

    # Check the name of the current folder
    current_dir = !pwd
    print(current_dir)
    
    # List all files in this folder
    file_list = !ls
    print(file_list)
    
    # List all files in the datasets directory
    dataset_list = !ls datasets
    print(dataset_list)
    
    # View the first 20 lines of datasets/road-accidents.csv
    accidents_head = !head -n 20 datasets/road-accidents.csv
    
    accidents_head

    2. Read in and get an overview of the data

    Next, we will orient ourselves to get to know the data with which we are dealing.

    # Import the `pandas` module as "pd"
    import pandas as pd
    
    # Read in `road-accidents.csv`
    car_acc = pd.read_csv("datasets/road-accidents.csv", sep='|', comment="#")
    
    # Save the number of rows and columns as a tuple
    rows_and_cols = car_acc.shape
    print('There are {} rows and {} columns.\n'.format(
        rows_and_cols[0], rows_and_cols[1]))
    
    # Generate an overview of the DataFrame (info() prints its summary directly and returns None)
    car_acc.info()
    
    # Display the last five rows of the DataFrame
    car_acc.tail()
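
    Before computing summary statistics, it can also be worth confirming that no values are missing. The short check below is optional and uses only the `car_acc` DataFrame defined above.

    # Count missing values per column (all zeros for a complete dataset)
    print(car_acc.isnull().sum())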

    3. Create a textual and a graphical summary of the data

    We now have an idea of what the dataset looks like. To further familiarize ourselves with this data, we will calculate summary statistics and produce a graphical overview. The graphical overview is good for getting a sense of the distribution of variables within the data and could consist of one histogram per column. It is often a good idea to also explore the pairwise relationships between all columns in the dataset by using pairwise scatter plots (sometimes referred to as a "scatterplot matrix").

    # import seaborn and make plots appear inline
    import seaborn as sns
    %matplotlib inline
    
    # Compute the summary statistics of all columns in the `car_acc` DataFrame
    sum_stat_car = car_acc.describe()
    print(sum_stat_car)
    
    # Create a pairwise scatter plot of the raw data to explore the pairwise relationships
    sns.pairplot(car_acc)

    4. Quantify the association of features and accidents

    We can already see some potentially interesting relationships between the target variable (the number of fatal accidents) and the feature variables (the remaining three columns).

    To quantify the pairwise relationships that we observed in the scatter plots, we can compute the Pearson correlation coefficient matrix. The Pearson correlation coefficient is one of the most common methods to quantify correlation between variables, and by convention, the following thresholds are usually used:

    • 0.2 = weak
    • 0.5 = medium
    • 0.8 = strong
    • 0.9 = very strong

    # Compute the correlation coefficient for all numeric column pairs (the non-numeric 'state' column is excluded)
    corr_columns = car_acc.drop(columns='state').corr()
    corr_columns
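
    To make the thresholds above concrete, the sketch below recomputes Pearson's r by hand for a single column pair (the choice of pair is arbitrary) and checks it against the matrix computed by pandas.

    # Recompute Pearson's r by hand for one column pair: covariance divided by the product of standard deviations
    x = car_acc['drvr_fatl_col_bmiles']
    y = car_acc.drop(columns=['state', 'drvr_fatl_col_bmiles']).iloc[:, 0]
    r_manual = ((x - x.mean()) * (y - y.mean())).mean() / (x.std(ddof=0) * y.std(ddof=0))
    print(r_manual)
    print(corr_columns.loc[x.name, y.name])  # should agree with the manual calculation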

    5. Fit a multivariate linear regression

    From the correlation table, we see that the number of fatal accidents is most strongly correlated with alcohol consumption (first row). In addition, we see that some of the features are correlated with each other; for instance, speeding and alcohol consumption are positively correlated. We therefore want to compute the association of the target with each feature while adjusting for the effect of the remaining features. This can be done using multivariate linear regression.

    Both the multivariate regression and the correlation measure how strongly the features are associated with the outcome (fatal accidents). When comparing the regression coefficients with the correlation coefficients, we will see that they are slightly different. The reason for this is that the multiple regression computes the association of a feature with an outcome, given the association with all other features, which is not accounted for when calculating the correlation coefficients.

    A particularly interesting case is when the correlation coefficient and the regression coefficient of the same feature have opposite signs. How can this be? For example, when a feature A has a positive direct effect on the outcome Y but is also positively correlated with a different feature B that has a negative effect on Y, the indirect path (A->B->Y) can overwhelm the direct path (A->Y). In such a case, the regression coefficient of feature A, which adjusts for B, could be positive, while the plain correlation coefficient between A and Y is negative. This is sometimes called a masking relationship. Let’s see if the multivariate regression can reveal such a phenomenon.
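
    The masking effect described above is easy to reproduce on synthetic data. The sketch below uses made-up numbers (unrelated to the road-accidents data) to construct a feature whose correlation with the outcome and whose regression coefficient have opposite signs.

    # Synthetic illustration of a masking relationship (hypothetical data)
    import numpy as np
    from sklearn import linear_model

    rng = np.random.default_rng(0)
    n = 500
    A = rng.normal(size=n)
    B = A + 0.3 * rng.normal(size=n)              # B is strongly positively correlated with A
    Y = 1.0 * A - 3.0 * B + rng.normal(size=n)    # A helps Y directly, but B hurts Y even more

    print(np.corrcoef(A, Y)[0, 1])                # negative: the indirect path through B dominates
    reg_demo = linear_model.LinearRegression().fit(np.column_stack([A, B]), Y)
    print(reg_demo.coef_)                         # the coefficient for A is positive once B is adjusted for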

    # Import the linear_model module from sklearn
    from sklearn import linear_model
    
    # Create the features DataFrame and the target Series
    features = car_acc.drop(['drvr_fatl_col_bmiles', 'state'], axis=1)
    target = car_acc["drvr_fatl_col_bmiles"]
    
    # Create a linear regression object
    reg = linear_model.LinearRegression()
    
    # Fit a multivariate linear regression model
    reg.fit(features, target)
    
    # Retrieve the regression coefficients
    fit_coef = reg.coef_
    fit_coef
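
    Since `fit_coef` is a bare array, it can help to label each value with its feature name before comparing it against the correlation table. This small optional step uses only objects defined above.

    # Label each regression coefficient with its feature name
    coef_by_feature = pd.Series(fit_coef, index=features.columns)
    print(coef_by_feature)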

    6. Perform PCA on standardized data

    We have learned that alcohol consumption is weakly associated with the number of fatal accidents across states. This could lead us to conclude that alcohol consumption should be a focus for further investigation, and that strategies should perhaps divide states into groups of high versus low alcohol involvement in accidents. But there are also associations between alcohol consumption and the other two features, so it might be worth trying to split the states in a way that accounts for all three features.

    One way to look for such a split is to use PCA to visualize the data in a reduced-dimensional space, where we can try to pick out patterns by eye. PCA uses the absolute variance to calculate the overall variance explained for each principal component, so it is important that the features are on a similar scale (unless we have a particular reason to weight one feature more heavily).

    We'll use the appropriate scaling function to standardize the features to be centered with mean 0 and scaled with standard deviation 1.

    # Standardize and center the feature columns
    from sklearn.preprocessing import StandardScaler
    scaler = StandardScaler()
    features_scaled = scaler.fit_transform(features)
    
    # Import the PCA class from sklearn
    from sklearn.decomposition import PCA
    pca = PCA()
    
    # Fit the standardized data to the pca
    pca.fit(features_scaled)
    
    # Plot the proportion of variance explained on the y-axis of the bar plot
    import matplotlib.pyplot as plt
    plt.bar(range(1, pca.n_components_ + 1),  pca.explained_variance_ratio_)
    plt.xlabel('Principal component #')
    plt.ylabel('Proportion of variance explained')
    plt.xticks([1, 2, 3])
    
    # Compute the cumulative proportion of variance explained by the first two principal components
    two_first_comp_var_exp = pca.explained_variance_ratio_[:2].sum()
    print("The cumulative variance of the first two principal components is {}".format(
        round(two_first_comp_var_exp, 5)))
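
    To see why the standardization step matters, one optional check is to repeat the fit on the unscaled features and compare the variance ratios; with unscaled data, whichever feature happens to have the largest absolute variance tends to dominate the first component.

    # Optional check: PCA on the unscaled features versus the standardized features
    pca_unscaled = PCA()
    pca_unscaled.fit(features)
    print(pca_unscaled.explained_variance_ratio_)   # typically dominated by the highest-variance feature
    print(pca.explained_variance_ratio_)            # standardized version fitted above, for comparison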

    7. Visualize the first two principal components

    The first two principal components enable visualization of the data in two dimensions while capturing a high proportion of the variation (79%) from all three features: speeding, alcohol influence, and first-time accidents. This lets us use our eyes to try to discern patterns in the data, with the goal of finding groups of similar states. Although clustering algorithms are becoming increasingly efficient, human pattern recognition remains an easily accessible and very efficient way of assessing patterns in data.

    We will create a scatter plot of the first two principal components and explore how the states cluster together in this visualization.

    # Transform the scaled features using two principal components
    pca = PCA(n_components=2)
    p_comps = pca.fit_transform(features_scaled)
    
    # Extract the first and second component to use for the scatter plot
    p_comp1 = p_comps[:, 0]
    p_comp2 = p_comps[:, 1]
    
    # Plot the first two principal components in a scatter plot
    plt.scatter(p_comp1, p_comp2)
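
    If it is hard to tell which state each point corresponds to, the plot can be redrawn with each marker labeled by its state name. This is a quick optional sketch using matplotlib's annotate and the state column read in earlier.

    # Optional: redraw the scatter plot with each point labeled by its state name
    plt.scatter(p_comp1, p_comp2)
    for state, x, y in zip(car_acc['state'], p_comp1, p_comp2):
        plt.annotate(state, (x, y), fontsize=6)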

    8. Find clusters of similar states in the data

    It was not entirely clear from the PCA scatter plot how many clusters the states form. To assist with identifying a reasonable number of clusters, we can use KMeans clustering and create a scree plot, looking for the "elbow", which indicates the point at which adding more clusters no longer adds much explanatory power.
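
    One way to produce such a scree plot is sketched below, using sklearn's KMeans on the standardized features from step 6; the range of cluster counts and the random seed are arbitrary choices.

    # Sketch of a scree plot: fit KMeans for a range of cluster counts and record the inertia
    from sklearn.cluster import KMeans

    ks = range(1, 10)
    inertias = []
    for k in ks:
        km = KMeans(n_clusters=k, random_state=8)
        km.fit(features_scaled)
        inertias.append(km.inertia_)

    # The "elbow" is where this curve starts to flatten out
    plt.plot(ks, inertias, marker='o')
    plt.xlabel('Number of clusters (k)')
    plt.ylabel('Inertia')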