pandas Foundations


NumPy and pandas working together

Pandas is built on top of NumPy, the Python library for fast numeric array computations, and the two interoperate closely. For example, we can use the DataFrame attribute .values to extract the underlying data of a DataFrame df as a NumPy array, and we can pass pandas data structures directly to NumPy functions. In this exercise, we have imported pandas as pd and loaded world population data, recorded every 10 years since 1960, into the DataFrame df. Our job is to extract the values and store them in an array using the attribute .values. We’ll then use those values as input to the NumPy function np.log10() to compute the base 10 logarithm of the population values. Finally, we will pass the entire pandas DataFrame into the same np.log10() function and compare the results.

# Import numpy
import numpy as np
# Create array of DataFrame values: np_vals
np_vals = df.values
# Create new array of base 10 logarithm values: np_vals_log10
np_vals_log10 = np.log10(np_vals)
# Create array of new DataFrame by passing df to np.log10(): df_log10
df_log10 = np.log10(df)
# Print original and new data containers
print(type(np_vals), type(np_vals_log10))
print(type(df), type(df_log10))

Zip lists to build a DataFrame

We’re going to make a pandas DataFrame of the top three countries to win gold medals since 1896 by first building a dictionary. list_keys contains the column names ‘Country’ and ‘Total’. list_values contains two lists: the full names of each country and the corresponding number of gold medals awarded. The values have been taken from Wikipedia. Our job is to use these lists to construct a list of tuples, use the list of tuples to construct a dictionary, and then use that dictionary to construct a DataFrame. In doing so, we’ll make use of the list(), zip(), dict() and pd.DataFrame() functions. Pandas has already been imported as pd. Note: in Python 3, the zip() function returns a special zip object, which is a lazy iterator. To convert this zip object into a list, you’ll need to use list().

# Zip the 2 lists together into one list of (key,value) tuples: zipped
zipped = list(zip(list_keys,list_values))
# Inspect the list using print()
print(zipped)
# Build a dictionary with the zipped list: data
data = dict(zipped)
# Build and inspect a DataFrame from the dictionary: df
df = pd.DataFrame(data)
print(df)
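
To see the whole pipeline end to end, here is a minimal, self-contained sketch. The medal counts below are illustrative placeholders, not the actual Wikipedia figures.

import pandas as pd
# Hypothetical stand-ins for the pre-loaded lists
list_keys = ['Country', 'Total']
list_values = [['United States', 'Soviet Union', 'United Kingdom'], [1000, 400, 250]]
# zip() pairs each key with its column data; list() realizes the lazy zip object
zipped = list(zip(list_keys, list_values))
# dict() turns the (key, value) pairs into a column-name -> column-data mapping
df = pd.DataFrame(dict(zipped))
print(df)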

Labeling your data

We can use the DataFrame attribute df.columns to view and assign new string labels to columns in a pandas DataFrame. In this exercise, we have imported pandas as pd and defined a DataFrame df containing top Billboard hits from the 1980s (from Wikipedia). Each row has the year, artist, song name and the number of weeks at the top. However, this DataFrame has the column labels a, b, c, d. Our job is to use the df.columns attribute to re-assign descriptive column labels.

# Build a list of labels: list_labels
list_labels = ['year','artist','song','chart weeks']
# Assign the list of labels to the columns attribute: df.columns
df.columns = list_labels

Building DataFrames with broadcasting

We can implicitly use ‘broadcasting’, a feature of NumPy, when creating pandas DataFrames. We’re going to create a DataFrame of cities in Pennsylvania that contains the city name in one column and the state name in the second. We have imported the names of 15 cities as the list cities. Our job is to construct a DataFrame from the list of cities and the string ‘PA’.

# Make a string with the value 'PA': state
state = 'PA'
# Construct a dictionary: data
data = {'state':state, 'city':cities}
# Construct a DataFrame from dictionary data: df
df = pd.DataFrame(data)
# Print the DataFrame
print(df)
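
A self-contained sketch of the same idea, using a shortened, hypothetical cities list: pandas broadcasts the scalar 'PA' down the entire 'state' column, so we do not have to repeat it once per city.

import pandas as pd
# A hypothetical stand-in for the pre-loaded list of 15 cities
cities = ['Philadelphia', 'Pittsburgh', 'Allentown']
state = 'PA'
# The scalar string is broadcast to match the length of the cities list
df = pd.DataFrame({'state': state, 'city': cities})
print(df)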

Reading a flat file

We have used the pandas function read_csv() before; now, it’s our turn! Our job is to read the World Bank population data we saw earlier into a DataFrame using read_csv(). The file is ‘world_population.csv’. The next step is to reread the same file, but simultaneously rename the columns using the names keyword argument, set to a list of new column labels. We will also need to set header=0 so that pandas replaces the existing header row rather than reading it in as data.

# Read in the file: df1
df1 = pd.read_csv('world_population.csv')
# Create a list of the new column labels: new_labels
new_labels = ['year', 'population']
# Read in the file, specifying the header and names parameters: df2
df2 = pd.read_csv('world_population.csv', header=0, names=new_labels)
# Print both the DataFrames
print(df1)
print(df2)

Delimiters, headers, and extensions

Not all data files are clean and tidy. Pandas provides methods for reading those not-so-perfect data files that we encounter far too often. We have monthly stock data for four companies downloaded from Yahoo Finance. The data is stored as one row for each company and each column is the end-of-month closing price. The file name is given to us in the variable file_messy. In addition, this file has three aspects that may cause trouble for lesser tools: multiple header lines, comment records (rows) interleaved throughout the data rows, and space delimiters instead of commas. Our job is to use pandas to read the data from this problematic file_messy using non-default input options with read_csv() so as to tidy up the mess at read time. Then, write the cleaned up data to a CSV file with the variable file_clean that has been prepared for us, as we might do in a real data workflow.

# Read the raw file as-is: df1
df1 = pd.read_csv(file_messy)
# Print the output of df1.head()
print(df1.head())
# Read in the file with the correct parameters: df2
df2 = pd.read_csv(file_messy, delimiter=' ', header=3, comment='#')
# Print the output of df2.head()
print(df2.head())
# Save the cleaned up DataFrame to a CSV file without the index
df2.to_csv(file_clean, index=False)
# Save the cleaned up DataFrame to an excel file without the index
df2.to_excel('file_clean.xlsx', index=False)
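
One practical note: pandas delegates Excel output to an external engine, so to_excel() requires a package such as openpyxl or xlsxwriter to be installed. The engine can also be named explicitly:

# Requires an Excel engine to be installed, e.g. pip install openpyxl
df2.to_excel('file_clean.xlsx', index=False, engine='openpyxl')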

Plotting series using pandas

Data visualization is often a very effective first step in gaining a rough understanding of a data set to be analyzed. Pandas provides data visualization by both depending upon and interoperating with the matplotlib library. We will now explore some of the basic plotting mechanics with pandas as well as related matplotlib options. We have pre-loaded a pandas DataFrame df which contains the data we need. Our job is to use the DataFrame method df.plot() to visualize the data, and then explore the optional matplotlib input parameters that this .plot() method accepts. The pandas .plot() method makes calls to matplotlib to construct the plots. This means that we can use the skills we’ve learned in previous visualization courses to customize the plot. In this exercise, we’ll add a custom title and axis labels to the figure.

# Create a plot with color='red'
df.plot(color='red')
# Add a title
plt.title('Temperature in Austin')
# Specify the x-axis label
plt.xlabel('Hours since midnight August 1, 2010')
# Specify the y-axis label
plt.ylabel('Temperature (degrees F)')
# Display the plot
plt.show()

Plotting DataFrames

Comparing data from several columns can be very illuminating. Pandas makes doing so easy with multi-column DataFrames. By default, calling df.plot() will cause pandas to over-plot all column data, with each column as a single line. In this exercise, we have pre-loaded three columns of data from a weather data set - temperature, dew point, and pressure - but the problem is that pressure has different units of measure. The pressure data, measured in Atmospheres, has a different vertical scaling than that of the other two data columns, which are both measured in degrees Fahrenheit. Our job is to plot all columns as a multi-line plot, to see the nature of the vertical scaling problem. Then, we pass a list of column names into the DataFrame, as df[column_list], to limit plotting to just one column, and then just two columns of data. When we are finished, we will have created four plots.

# Plot all columns (default)
df.plot()
plt.show()
# Plot all columns as subplots
df.plot(subplots=True)
plt.show()
# Plot just the Dew Point data
column_list1 = ['Dew Point (deg F)']
df[column_list1].plot()
plt.show()
# Plot the Dew Point and Temperature data, but not the Pressure data
column_list2 = ['Temperature (deg F)','Dew Point (deg F)']
df[column_list2].plot()
plt.show()

pandas line plots

We saw that the .plot() method will place the Index values on the x-axis by default. We’ll practice making line plots with specific columns on the x and y axes. We will work with a dataset consisting of monthly stock prices in 2015 for AAPL, GOOG, and IBM. The stock prices were obtained from Yahoo Finance. First, explore the DataFrame using methods such as .head(), .info(), and .describe() to see the column names. Our job is then to plot the ‘Month’ column on the x-axis and the AAPL and IBM prices on the y-axis using a list of column names.

# Create a list of y-axis column names: y_columns
y_columns = ['AAPL','IBM']
# Generate a line plot
df.plot(x='Month', y=y_columns)
# Add the title
plt.title('Monthly stock prices')
# Add the y-axis label
plt.ylabel('Price ($US)')
# Display the plot
plt.show()

pandas scatter plots

Pandas scatter plots are generated using the kind=’scatter’ keyword argument. Scatter plots require that the x and y columns be chosen by specifying the x and y parameters inside .plot(). Scatter plots also take an s keyword argument to set the size of each plotted marker (in matplotlib, s is the marker area in points squared). We’re going to plot fuel efficiency (miles-per-gallon) versus horsepower for 392 automobiles manufactured from 1970 to 1982, from the UCI Machine Learning Repository. The size of each circle is provided as a NumPy array called sizes. This array contains the normalized ‘weight’ of each automobile in the dataset.

# Generate a scatter plot
df.plot(kind='scatter', x='hp', y='mpg', s=sizes)
# Add the title
plt.title('Fuel efficiency vs Horse-power')
# Add the x-axis label
plt.xlabel('Horse-power')
# Add the y-axis label
plt.ylabel('Fuel efficiency (mpg)')
# Display the plot
plt.show()

pandas box plots

While pandas can plot multiple columns of data in a single figure, making plots that share the same x and y axes, there are cases where two columns cannot be plotted together because their units do not match. The .plot() method can generate subplots for each column being plotted. Here, each plot will be scaled independently. Our job is to generate box plots for fuel efficiency (mpg) and weight from the automobiles data set. To do this in a single figure, we’ll specify subplots=True inside .plot() to generate two separate plots.

# Make a list of the column names to be plotted: cols
cols = ['weight','mpg']
# Generate the box plots
df[cols].plot(kind='box', subplots=True)
# Display the plot
plt.show()

pandas hist, pdf and cdf

Pandas uses the .hist() method not only to generate histograms, but also to plot probability density functions (PDFs) and cumulative distribution functions (CDFs). We will work with a dataset consisting of restaurant bills that includes the amount customers tipped. The original dataset is provided by the Seaborn package. Our job is to plot a PDF and CDF for the fraction column of the tips dataset. This column contains information about what fraction of the total bill is comprised of the tip. Remember, when plotting the PDF, we need to specify normed=True in the call to .hist(), and when plotting the CDF, we need to specify cumulative=True in addition to normed=True.

# This formats the plots such that they appear on separate rows
fig, axes = plt.subplots(nrows=2, ncols=1)
# Plot the PDF on the first row
df.fraction.plot(ax=axes[0], kind='hist', bins=30, normed=True, range=(0,.3))
# Plot the CDF on the second row
df.fraction.plot(ax=axes[1], kind='hist', bins=30, normed=True, cumulative=True, range=(0,.3))
# Display both plots
plt.show()

Bachelor’s degrees awarded to women

We will investigate statistics of the percentage of Bachelor’s degrees awarded to women from 1970 to 2011. Data is recorded every year for 17 different fields. This data set was obtained from the Digest of Education Statistics. Our job is to compute the minimum and maximum values of the ‘Engineering’ column and generate a line plot of the mean value of all 17 academic fields per year. To perform this step, we’ll use the .mean() method with the keyword argument axis=’columns’. This computes the mean across all columns per row.

# Print the minimum value of the Engineering column
print(df['Engineering'].min())
# Print the maximum value of the Engineering column
print(df['Engineering'].max())
# Construct the mean percentage per year: mean
mean = df.mean(axis='columns')
# Plot the average percentage per year (the Year index supplies the x-axis)
mean.plot()
# Display the plot
plt.show()

Median vs mean

In many data sets, there can be large differences in the mean and median value due to the presence of outliers. We’ll investigate the mean, median, and max fare prices paid by passengers on the Titanic and generate a box plot of the fare prices. This data set was obtained from Vanderbilt University.

# Print summary statistics of the fare column with .describe()
print(df['fare'].describe())
# Generate a box plot of the fare column
df['fare'].plot(kind='box')
# Show the plot
plt.show()

Quantiles

We’ll investigate the distribution of life expectancy in countries around the world. This dataset contains life expectancy for persons born each year from 1800 to 2015. Because country names change over time and some results are not reported, not every country has values for every year. This dataset was obtained from Gapminder. First, we will determine the number of countries reported in 2015. There are a total of 260 unique countries in the entire dataset. Then, we will compute the 5th and 95th percentiles of life expectancy over the entire dataset. Finally, we will make a box plot of life expectancy every 50 years from 1800 to 2000. Notice the large change in the distributions over this period.

# Print the number of countries reported in 2015
print(df['2015'].count())
# Print the 5th and 95th percentiles
print(df.quantile([0.05,0.95]))
# Generate a box plot
years = ['1800','1850','1900','1950','2000']
df[years].plot(kind='box')
plt.show()

Standard deviation of temperature

Let’s use the mean and standard deviation to explore differences in temperature distributions in Pittsburgh in 2013. The data has been obtained from Weather Underground. We’re going to compare the distribution of daily temperatures in January and March. We’ll compute the mean and standard deviation for these two months. We will notice that while the mean values are similar, the standard deviations are quite different, meaning that one month had a larger fluctuation in temperature than the other.

# Print the mean of the January and March data
print(january.mean(),march.mean())
# Print the standard deviation of the January and March data
print(january.std(),march.std())

Separate and summarize

Let’s use population filtering to determine how automobiles manufactured in the US differ from the global population: how does the distribution of fuel efficiency (MPG) for US cars differ from the global mean and standard deviation? We’ll compute the means and standard deviations of all columns in the full automobile dataset. Next, we’ll compute the same quantities for just the US population and subtract the global values from the US values.

# Compute the global mean and global standard deviation: global_mean, global_std
global_mean = df.mean()
global_std = df.std()
# Filter the US population from the origin column: us
us = df[df['origin']=='US']
# Compute the US mean and US standard deviation: us_mean, us_std
us_mean = us.mean()
us_std = us.std()
# Print the differences
print(us_mean - global_mean)
print(us_std - global_std)

Separate and plot

Population filtering can be used alongside plotting to quickly determine differences in distributions between the sub-populations. We’ll work with the Titanic dataset. There were three passenger classes on the Titanic, and passengers in each class paid a different fare price. We’ll investigate the differences in these fare prices. Our job is to use Boolean filtering and generate box plots of the fare prices for each of the three passenger classes. The fare prices are contained in the ‘fare’ column and passenger class information is contained in the ‘pclass’ column. When we’re done, notice the portions of the box plots that differ and those that are similar.

# Display the box plots on 3 separate rows and 1 column
fig, axes = plt.subplots(nrows=3, ncols=1)
# Generate a box plot of the fare prices for the First passenger class
titanic.loc[titanic['pclass'] == 1].plot(ax=axes[0],y='fare', kind='box')
# Generate a box plot of the fare prices for the Second passenger class
titanic.loc[titanic['pclass'] == 2].plot(ax=axes[1],y='fare', kind='box')
# Generate a box plot of the fare prices for the Third passenger class
titanic.loc[titanic['pclass'] == 3].plot(ax=axes[2],y='fare', kind='box')
# Display the plot
plt.show()

Creating and using a DatetimeIndex

The pandas Index is a powerful way to handle time series data, so it is valuable to know how to build one yourself. Pandas provides the pd.to_datetime() function for just this task. For example, if passed a list of strings such as [‘2015-01-01 091234’, ‘2015-01-02 091234’] and a format specification such as format=’%Y-%m-%d %H%M%S’, pandas will parse each string into the proper datetime elements and build datetime objects. Here a list of temperature data and a list of date strings have been pre-loaded for us as temperature_list and date_list respectively. Our job is to use the pd.to_datetime() function to build a DatetimeIndex out of the list of date strings, and to then use it along with the list of temperature data to build a pandas Series.

# Prepare a format string: time_format
time_format = '%Y-%m-%d %H:%M'
# Convert date_list into a datetime object: my_datetimes
my_datetimes = pd.to_datetime(date_list,format=time_format)  
# Construct a pandas Series using temperature_list and my_datetimes: time_series
time_series = pd.Series(temperature_list, index=my_datetimes)
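
Here is a self-contained sketch with hypothetical date strings, showing that the resulting Series carries a DatetimeIndex:

import pandas as pd
# Hypothetical stand-ins for the pre-loaded lists
date_list = ['2010-01-01 09:00', '2010-01-01 10:00', '2010-01-01 11:00']
temperature_list = [46.2, 44.6, 45.1]
# Parse the strings into datetimes, then use them as the Series index
my_datetimes = pd.to_datetime(date_list, format='%Y-%m-%d %H:%M')
time_series = pd.Series(temperature_list, index=my_datetimes)
print(time_series.index)  # a DatetimeIndex with three hourly timestamps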

Partial string indexing and slicing

Pandas time series support “partial string” indexing. What this means is that even when passed only a portion of the datetime, such as the date but not the time, pandas is remarkably good at doing what one would expect. Pandas datetime indexing also supports a wide variety of commonly used datetime string formats, even when mixed. In this exercise, a time series that contains hourly weather data has been pre-loaded for us. This data was read using the parse_dates=True option in read_csv() with index_col=”Dates” so that the Index is indeed a DatetimeIndex. All data from the ‘Temperature’ column has been extracted into the variable ts0. Our job is to use a variety of natural date strings to extract one or more values from ts0. After we are done, we will have three new variables - ts1, ts2, and ts3. We can slice these further to extract only the first and last entries of each.

# Extract the hour from 9pm to 10pm on '2010-10-11': ts1
ts1 = ts0.loc['2010-10-11 21:00:00':'2010-10-11 22:00:00']
# Extract '2010-07-04' from ts0: ts2
ts2 = ts0.loc['2010-07-04']
# Extract data from '2010-12-15' to '2010-12-31': ts3
ts3 = ts0.loc['2010-12-15':'2010-12-31']

Reindexing the Index

Reindexing is useful in preparation for adding or otherwise combining two time series data sets. To reindex the data, we provide a new index and ask pandas to try and match the old data to the new index. If data is unavailable for one of the new index dates or times, pandas fills with NaN by default; we can instead tell pandas how to fill in the missing values. Two time series data sets containing daily data have been pre-loaded for us, each indexed by dates. The first, ts1, includes weekends, but the second, ts2, does not. The goal is to combine the two data sets in a sensible way. Our job is to reindex the second data set so that it has weekends as well, and then add it to the first. When we are done, it is informative to inspect the results.

# Reindex without fill method: ts3
ts3 = ts2.reindex(ts1.index)
# Reindex with fill method, using forward fill: ts4
ts4 = ts2.reindex(ts1.index, method='ffill')
# Combine ts1 + ts2: sum12
sum12 = ts1+ts2
# Combine ts1 + ts3: sum13
sum13 = ts1 + ts3
# Combine ts1 + ts4: sum14
sum14 = ts1 + ts4
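
A tiny sketch of the fill behavior, using hypothetical daily data: reindexing a weekday-only series onto a full calendar index leaves NaN on the weekend unless a fill method is given.

import pandas as pd
# Five consecutive days starting on a Friday (2016-01-01)
full_index = pd.date_range('2016-01-01', periods=5, freq='D')
ts1 = pd.Series(range(5), index=full_index)
# Keep only the weekdays (Mon=0 ... Fri=4)
ts2 = ts1[ts1.index.dayofweek < 5]
print(ts2.reindex(ts1.index))                   # Saturday and Sunday become NaN
print(ts2.reindex(ts1.index, method='ffill'))   # Friday's value is carried forward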

Resampling and frequency

Pandas provides methods for resampling time series data. When downsampling or upsampling, the syntax is similar, but the methods called are different. Both use the concept of ‘method chaining’ - df.method1().method2().method3() - to direct the output from one method call to the input of the next, and so on, as a sequence of operations, one feeding into the next. For example, if we have hourly data and just need daily data, pandas will not guess how to throw out 23 of every 24 points; we must specify this in the method chain. One approach, for instance, could be to take the mean, as in df.resample(‘D’).mean(). Our job is to resample the data using a variety of aggregation methods to answer a few questions.

# Downsample to 6 hour data and aggregate by mean: df1
df1 = df['Temperature'].resample('6H').mean()
# Downsample to daily data and count the number of data points: df2
df2 = df['Temperature'].resample('D').count()
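
Upsampling uses the same chained syntax, but the method chained after .resample() fills in the new, finer-grained rows rather than aggregating. A sketch, assuming the same hourly df:

# Upsample to 30 minute intervals, forward filling from the last hourly value: df3
df3 = df['Temperature'].resample('30min').ffill()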

Separating and resampling

With pandas, we can resample different subsets of our data in different ways - for example, resampling different months of data with different aggregations. In this exercise, the data set containing hourly temperature data from the last exercise has been pre-loaded. Our job is to resample the data using a variety of aggregation methods. The DataFrame is available in the workspace as df.

# Extract temperature data for August: august
august = df.loc['2010-Aug','Temperature']
# Downsample to obtain only the daily highest temperatures in August: august_highs
august_highs = august.resample('D').max()
# Extract temperature data for February: february
february = df.loc['2010-Feb','Temperature']
# Downsample to obtain the daily lowest temperatures in February: february_lows
february_lows = february.resample('D').min()

Rolling mean and frequency

In this exercise, some hourly weather data is pre-loaded. We will continue to practice smoothing, this time using rolling means. Rolling means (or moving averages) are generally used to smooth out short-term fluctuations in time series data and highlight long-term trends. To use the .rolling() method, we must always use method chaining, first calling .rolling() and then chaining an aggregation method after it. For example, with a Series hourly_data, hourly_data.rolling(window=24).mean() would compute new values for each hourly point, based on a 24-hour window stretching out behind each point. The frequency of the output data is the same: it is still hourly. Such an operation is useful for smoothing time series data. Our job is to smooth the data using a combination of .rolling() and .mean().

# Extract data from 2010-Aug-01 to 2010-Aug-15: unsmoothed
unsmoothed = df['Temperature']['2010-Aug-01':'2010-Aug-15']
# Apply a rolling mean with a 24 hour window: smoothed
smoothed = unsmoothed.rolling(window=24).mean()
# Create a new DataFrame with columns smoothed and unsmoothed: august
august = pd.DataFrame({'smoothed':smoothed, 'unsmoothed':unsmoothed})
# Plot both smoothed and unsmoothed data using august.plot().
august.plot()
plt.show()
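
One wrinkle worth knowing: the first 23 hourly points have no complete 24-hour window behind them, so .rolling(window=24) leaves them as NaN. If partial windows are acceptable, the min_periods argument relaxes this:

# Allow partial windows at the start (as few as 1 observation) instead of NaN
smoothed_partial = unsmoothed.rolling(window=24, min_periods=1).mean()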

Resample and roll with it

As of pandas version 0.18.0, the interface for applying rolling transformations to time series has become more consistent and flexible, and feels somewhat like a groupby. We can now flexibly chain together resampling and rolling operations. In this exercise, the same weather data from the previous exercises has been pre-loaded for us. Our job is to extract one month of data, resample to find the daily high temperatures, and then use a rolling and aggregation operation to smooth the data.

# Extract the August 2010 data: august
august = df['Temperature']['2010-Aug']
# Resample to daily data, aggregating by max: daily_highs
daily_highs = august.resample('D').max()
# Use a rolling 7-day window with method chaining to smooth the daily high temperatures in August
daily_highs_smoothed = daily_highs.rolling(window=7).mean()
print(daily_highs_smoothed)

Method chaining and filtering

We’ve seen that pandas supports method chaining. This technique can be very powerful when cleaning and filtering data. In this exercise, a DataFrame containing flight departure data for a single airline and a single airport for the month of July 2015 has been pre-loaded. Our job is to use the .str string methods and method chaining to generate summary statistics on daily flights to Dallas.

# Strip extra whitespace from the column names: df.columns
df.columns = df.columns.str.strip()
# Create a Boolean Series that is True where the destination airport is Dallas: dallas
dallas = df['Destination Airport'].str.contains('DAL')
# Compute the total number of Dallas departures each day: daily_departures
daily_departures = dallas.resample('D').sum()
# Generate the summary statistics for daily Dallas departures: stats
stats = daily_departures.describe()
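
Since the point of this exercise is method chaining, the same computation can also be written as a single chained expression; a sketch assuming the same pre-loaded df:

# One chain: strip the column names, build the Boolean Series, count per day, summarize
stats = (df.rename(columns=str.strip)['Destination Airport']
           .str.contains('DAL')
           .resample('D')
           .sum()
           .describe())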

Missing values and interpolation

One common application of interpolation in data analysis is to fill in missing data. In this exercise, noisy measured data that has some dropped or otherwise missing values has been loaded. The goal is to compare two time series, and then look at summary statistics of the differences. The problem is that one of the data sets is missing data at some of the times. The pre-loaded data ts1 has values for all times, yet the data set ts2 does not: it is missing data for the weekends. Our job is to first interpolate to fill in the data for all days. Then, compute the differences between the two data sets, now that they both have full support for all times. Finally, generate the summary statistics that describe the distribution of differences.

# Reindex ts2 to the index of ts1, then use linear interpolation to fill in the NaNs: ts2_interp
ts2_interp = ts2.reindex(ts1.index).interpolate(method='linear')
# Compute the absolute difference of ts1 and ts2_interp: differences
differences = np.abs(ts1-ts2_interp)
# Generate and print summary statistics of the differences
print(differences.describe())

Time zones and conversion

Time zone handling with pandas typically assumes that we are working with the Index of the Series. In this exercise, we will learn how to handle time zones that are associated with datetimes in the column data, and not just the Index. We will work with the flight departure dataset again, and this time we will select Los Angeles (‘LAX’) as the destination airport. Here we will use a Boolean mask to ensure that we only compute on the data we actually want.

# Build a Boolean mask to select the 'LAX' departure flights: mask
mask = df['Destination Airport'] == 'LAX'
# Use the mask to subset the data: la
la = df[mask]
# Combine two columns of data to create a datetime series: times_tz_none
times_tz_none = pd.to_datetime(la['Date (MM/DD/YYYY)'] + ' ' + la['Wheels-off Time'])
# Localize the time to US/Central: times_tz_central
times_tz_central = times_tz_none.dt.tz_localize('US/Central')
# Convert the datetimes from US/Central to US/Pacific
times_tz_pacific = times_tz_central.dt.tz_convert('US/Pacific')
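
A self-contained sketch with hypothetical timestamps, showing the localize-then-convert pattern on column data via the .dt accessor:

import pandas as pd
# Hypothetical naive departure times (no time zone attached yet)
times = pd.Series(pd.to_datetime(['2015-07-01 06:30', '2015-07-01 18:45']))
central = times.dt.tz_localize('US/Central')    # attach a zone; the clock is unchanged
pacific = central.dt.tz_convert('US/Pacific')   # shift the clock to the new zone
print(pacific)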

Plotting time series, datetime indexing

Our job is to convert the ‘Date’ column from a collection of strings into a collection of datetime objects. Then, we will use this converted ‘Date’ column as our new index, and re-plot the data, noting the improved datetime awareness. Before proceeding, look at the first plot generated below and observe how pandas handles data with the default integer index. Then, inspect the DataFrame df using the .head() method to get a feel for its structure.

# Plot the raw data before setting the datetime index
df.plot()
plt.show()
# Convert the 'Date' column into a collection of datetime objects: df.Date
df.Date = pd.to_datetime(df['Date'])
# Set the index to be the converted 'Date' column
df.set_index('Date',inplace=True)
# Re-plot the DataFrame to see that the axis is now datetime aware!
df.plot()
plt.show()

Plotting date ranges, partial indexing

Now that we have set the DatetimeIndex in our DataFrame, we have a much more powerful and flexible set of tools to use when plotting our time series data. Of these, one of the most convenient is partial string indexing and slicing. In this exercise, we’ve pre-loaded a full year of Austin 2010 weather data, with the index set to be the datetime-parsed ‘Date’ column as shown in the previous exercise. Our job is to use partial string indexing of the dates, in a variety of datetime string formats, to plot all the summer data and just one week of data together. First, we remind ourselves how to extract one month of temperature data using ‘May 2010’ as a key into df.Temperature[], and call head() to inspect the result: df.Temperature[‘May 2010’].head().

# Plot the summer data
df.Temperature['2010-Jun':'2010-Aug'].plot()
plt.show()
plt.clf()
# Plot the one week data
df.Temperature['2010-06-10':'2010-06-17'].plot()
plt.show()
plt.clf()

Case Study

Reading in a data file

The problem with real data such as this is that the files are almost never formatted in a convenient way. In this exercise, there are several problems to overcome in reading the file. First, there is no header, and thus the columns don’t have labels. There is also no obvious index column, since none of the data columns contain a full date or time. Our job is to read the file into a DataFrame using the default arguments. After inspecting it, we will re-read the file specifying that there are no headers supplied.

# Import pandas
import pandas as pd
# Read in the data file: df
df = pd.read_csv('data.csv')
# Print the output of df.head()
print(df.head())
# Read in the data file with header=None: df_headers
df_headers = pd.read_csv('data.csv', header=None)
# Print the output of df_headers.head()
print(df_headers.head())

Re-assigning column names

After the initial step of reading in the data, the next step is to clean and tidy it so that it is easier to work with. In this exercise, we will begin this cleaning process by re-assigning column names and dropping unnecessary columns. pandas has been imported in the workspace as pd, and the file NOAA_QCLCD_2011_hourly_13904.txt has been parsed and loaded into a DataFrame df. The comma separated string of column names, column_labels, and list of columns to drop, list_to_drop, have also been loaded for us.

# Split on the comma to create a list: column_labels_list
column_labels_list = column_labels.split(',')
# Assign the new column labels to the DataFrame: df.columns
df.columns = column_labels_list
# Remove the appropriate columns: df_dropped
df_dropped = df.drop(list_to_drop,axis='columns')
# Print the output of df_dropped.head()
print(df_dropped.head())

Cleaning and tidying datetime data

In order to use the full power of pandas time series, we must construct a DatetimeIndex. To do so, it is necessary to clean and transform the date and time columns. Our job is to clean up the date and Time columns and combine them into a datetime collection to be used as the Index.

# Convert the date column to string: df_dropped['date']
df_dropped['date'] = df_dropped['date'].astype(str)
# Pad leading zeros to the Time column: df_dropped['Time']
df_dropped['Time'] = df_dropped['Time'].apply(lambda x:'{:0>4}'.format(x))
# Concatenate the new date and Time columns: date_string
date_string = df_dropped['date']+ df_dropped['Time']
# Convert the date_string Series to datetime: date_times
date_times = pd.to_datetime(date_string, format='%Y%m%d%H%M')
# Set the index to be the new date_times container: df_clean
df_clean = df_dropped.set_index(date_times)
# Print the output of df_clean.head()
print(df_clean.head())
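
The zero-padding step is the subtle one: assuming the raw Time values arrive as bare numbers such as 453 (meaning 04:53), the format string '{:0>4}' left-pads them with zeros to a fixed width of four, so that the concatenated string matches the '%Y%m%d%H%M' format. For example:

# '{:0>4}' pads on the left with zeros to a width of 4 characters
print('{:0>4}'.format(453))    # '0453'
print('{:0>4}'.format(1253))   # '1253'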

Cleaning the numeric columns

The numeric columns contain missing values labeled as ‘M’. In this exercise, our job is to transform these columns such that they contain only numeric values and interpret missing data as NaN. The pandas function pd.to_numeric() is ideal for this purpose: it converts a Series of values to floating-point values. Furthermore, by specifying the keyword argument errors=’coerce’, we can force strings like ‘M’ to be interpreted as NaN.

# Print the dry_bulb_faren temperature between 8 AM and 9 AM on June 20, 2011
print(df_clean.loc['2011-June-20 08:00:00':'2011-June-20 09:00:00', 'dry_bulb_faren'])
# Convert the dry_bulb_faren column to numeric values: df_clean['dry_bulb_faren']
df_clean['dry_bulb_faren'] = pd.to_numeric(df_clean['dry_bulb_faren'], errors='coerce')
# Print the transformed dry_bulb_faren temperature between 8 AM and 9 AM on June 20, 2011
print(df_clean.loc['2011-June-20 08:00:00':'2011-June-20 09:00:00', 'dry_bulb_faren'])
# Convert the wind_speed and dew_point_faren columns to numeric values
df_clean['wind_speed'] = pd.to_numeric(df_clean['wind_speed'],errors='coerce')
df_clean['dew_point_faren'] = pd.to_numeric(df_clean['dew_point_faren'], errors='coerce')
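
A quick illustration of the coercion behavior on a hypothetical Series:

import pandas as pd
# 'M' cannot be parsed as a number, so errors='coerce' turns it into NaN
print(pd.to_numeric(pd.Series(['72', 'M', '68']), errors='coerce'))
# 0    72.0
# 1     NaN
# 2    68.0
# dtype: float64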

Signal min, max, median

Now that we have the data read and cleaned, we can begin with statistical EDA. First, we will analyze the 2011 Austin weather data. Our job in this exercise is to analyze the ‘dry_bulb_faren’ column and print the median temperatures for specific time ranges. We can do this using partial datetime string selection.

# Print the median of the dry_bulb_faren column
print(df_clean['dry_bulb_faren'].median())
# Print the median of the dry_bulb_faren column for the time range '2011-Apr':'2011-Jun'
print(df_clean.loc['2011-Apr':'2011-Jun', 'dry_bulb_faren'].median())
# Print the median of the dry_bulb_faren column for the month of January
print(df_clean.loc['2011-Jan','dry_bulb_faren'].median())

Signal variance

We’re now ready to compare the 2011 weather data with the 30-year normals reported in 2010. We can ask questions such as, on average, how much hotter was every day in 2011 than expected from the 30-year average? Our job is to first resample df_clean and df_climate by day and aggregate the mean temperatures. We will then extract the temperature related columns from each - ‘dry_bulb_faren’ in df_clean, and ‘Temperature’ in df_climate - as NumPy arrays and compute the difference. Notice that the indexes of df_clean and df_climate are not aligned - df_clean has dates in 2011, while df_climate has dates in 2010. This is why we extract the temperature columns as NumPy arrays. An alternative approach is to use the pandas .reset_index() method to make sure the Series align properly.

# Downsample df_clean by day and aggregate by mean: daily_mean_2011
# (df_clean also has non-numeric columns; newer pandas versions may require .mean(numeric_only=True))
daily_mean_2011 = df_clean.resample('D').mean()
# Extract the dry_bulb_faren column from daily_mean_2011 using .values: daily_temp_2011
daily_temp_2011 = daily_mean_2011['dry_bulb_faren'].values
# Downsample df_climate by day and aggregate by mean: daily_climate
daily_climate = df_climate.resample('D').mean()
# Extract the Temperature column from daily_climate using .reset_index(): daily_temp_climate
daily_temp_climate = daily_climate.reset_index()['Temperature']
# Compute the difference between the two arrays and print the mean difference
difference = daily_temp_2011 - daily_temp_climate
print(difference.mean())

Sunny or cloudy

On average, how much hotter is it when the sun is shining? In this exercise, we will compare temperatures on sunny days against temperatures on overcast days. Our job is to use Boolean selection to filter out sunny and overcast days, and then compute the difference of the mean daily maximum temperatures between each type of day. The column ‘sky_condition’ provides information about whether the day was sunny (‘CLR’) or overcast (‘OVC’).

# Select days that are sunny: sunny
sunny = df_clean.loc[df_clean['sky_condition']=='CLR']
# Select days that are overcast: overcast
overcast = df_clean.loc[df_clean['sky_condition'].str.contains('OVC')]
# Resample sunny and overcast, aggregating by maximum daily temperature
sunny_daily_max = sunny.resample('D').max()
overcast_daily_max = overcast.resample('D').max()
# Print the difference between the mean of sunny_daily_max and overcast_daily_max
print(sunny_daily_max.mean() - overcast_daily_max.mean())

Weekly average temperature and visibility

Is there a correlation between temperature and visibility? Let’s find out. In this exercise, our job is to plot the weekly average temperature and visibility as subplots. To do this, we need to first select the appropriate columns and then resample by week, aggregating the mean. In addition to creating the subplots, we will compute the Pearson correlation coefficient using .corr(). The Pearson correlation coefficient, also known as Pearson’s r, ranges from -1 (total negative linear correlation) to 1 (total positive linear correlation). A value close to 1 here would indicate a strong positive correlation between temperature and visibility.

# Import matplotlib.pyplot as plt
import matplotlib.pyplot as plt
# Select the visibility and dry_bulb_faren columns and resample them: weekly_mean
weekly_mean = df_clean.loc[:,['visibility','dry_bulb_faren']].resample('W').mean()
# Print the output of weekly_mean.corr()
print(weekly_mean.corr())
# Plot weekly_mean with subplots=True
weekly_mean.plot(subplots=True)
plt.show()

Daily hours of clear sky

In a previous exercise, we analyzed the ‘sky_condition’ column to explore the difference in temperature on sunny days compared to overcast days. Recall that a ‘sky_condition’ of ‘CLR’ represents a sunny day. In this exercise, we will explore sunny days in greater detail. Specifically, we will use a box plot to visualize the fraction of days that are sunny. The ‘sky_condition’ column is recorded hourly. Our job is to resample this column appropriately such that we can extract the number of sunny hours in a day and the number of total hours. Then, we can divide the number of sunny hours by the number of total hours, and generate a box plot of the resulting fraction.

# Create a Boolean Series for sunny days: sunny
sunny = df_clean['sky_condition']=='CLR'
# Resample the Boolean Series by day and compute the sum: sunny_hours
sunny_hours = sunny.resample('D').sum()
# Resample the Boolean Series by day and compute the count: total_hours
total_hours = sunny.resample('D').count()
# Divide sunny_hours by total_hours: sunny_fraction
sunny_fraction = sunny_hours / total_hours
# Make a box plot of sunny_fraction
sunny_fraction.plot(kind='box')
plt.show()

Heat or humidity

Dew point is a measure of atmospheric moisture: the temperature to which air must cool before it becomes saturated. A dew point above 65 degrees F is considered uncomfortable, as is a temperature above 90 degrees F. In this exercise, we will explore the maximum temperature and dew point of each month. The columns of interest are ‘dew_point_faren’ and ‘dry_bulb_faren’. After resampling them appropriately to get the maximum temperature and dew point in each month, we generate a histogram of these values as subplots. Uncomfortably, we will notice that the maximum dew point is above 65 degrees F every month!

# Resample dew_point_faren and dry_bulb_faren by Month, aggregating the maximum values: monthly_max
monthly_max = df_clean[['dew_point_faren','dry_bulb_faren']].resample('M').max()
# Generate a histogram with bins=8, alpha=0.5, subplots=True
monthly_max.plot(kind='hist',bins=8,alpha=0.5,subplots=True)
# Show the plot
plt.show()

Probability of high temperatures

We already know that 2011 was hotter than the climate normals for the previous thirty years. In this final exercise, we will compare the maximum temperature in August 2011 against that of the August 2010 climate normals. More specifically, we will use a CDF plot to determine the probability of the 2011 daily maximum temperature in August being above the 2010 climate normal value. To do this, we will leverage the data manipulation, filtering, resampling, and visualization skills we have acquired throughout this course. Our job is to select the maximum temperature in August in df_climate, and then the maximum daily temperatures in August 2011. We will then select the days in August 2011 whose maximum exceeded the August 2010 value, and use them to construct a CDF plot. Once we’ve generated the CDF, notice how it shows that there was a 50% probability of the 2011 daily maximum temperature in August being 5 degrees above the 2010 climate normal value!

# Extract the maximum temperature in August 2010 from df_climate: august_max
august_max = df_climate.loc['2010-Aug','Temperature'].max()
print(august_max)
# Resample the August 2011 temperatures in df_clean by day and aggregate the maximum value: august_2011
august_2011 = df_clean.loc['2011-Aug','dry_bulb_faren'].resample('D').max()
# Filter out days in august_2011 where the value exceeded august_max: august_2011_high
august_2011_high = august_2011.loc[august_2011>august_max]
# Construct a CDF of august_2011_high (newer matplotlib releases use density=True in place of normed=True)
august_2011_high.plot(kind='hist', normed=True, cumulative=True, bins=25)
# Display the plot
plt.show()
