
Introduction

In this lab session you will explore

  • Data wrangling: restructuring data frames
  • Graphical data exploration
  • Cross validation
  1. During this lab, you can work in a .R file, but working with code chunks in a .Rmd is recommended.
  2. Make sure you update the StatCompLab package to version 23.7.0 or higher.

Solution:

The accompanying Tutorial07Solutions tutorial/vignette documents contain the solutions explicitly, to make it easier to review the material after the workshops.


Throughout this tutorial, you’ll make use of the data wrangling and plotting tools in the dplyr, magrittr, ggplot2 and other tidyverse packages, so start your code with library(tidyverse) and library(StatCompLab).

Note that the exercises in this tutorial build on each other, often with one solution being similar to previous solutions, with just a few additions and/or modifications.

Some of the data wrangling functions used in the tutorial are

  • mutate to add or alter variables
  • filter to keep only rows fulfilling given criteria
  • group_by to perform analyses within subgroups defined by one or more variables
  • summarise to compute data summaries, usually with subgroups
  • select to keep only some variables
  • left_join to join two data frames together, based on common identifier variables
  • pivot_wider to split a value variable into new named variables based on a category variable
  • pivot_longer to gather several variables into a single new value variable, with their names stored in a new category variable
  • %>%, the “pipe” operator used to glue a sequence of operations together by feeding the result of an operation into the first argument of the following function call
  • head to view only the first few rows of a data frame. This can be useful when debugging a sequence of “piped” expressions, by placing it last in the pipe sequence
  • pull to extract the contents of a single variable

Functions for plotting:

  • ggplot for initialising plots
  • geom_point for drawing points
  • facet_wrap for splitting a plot into “facet” plots, based on one or more category variables

Scottish weather data

The Global Historical Climatology Network at https://www.ncei.noaa.gov/products/land-based-station/global-historical-climatology-network-daily provides historical weather data collected from all over the globe. A subset of the daily resolution data set, containing data from eight weather stations in Scotland and covering the time period from 1 January 1960 to 31 December 2018, is available in the StatCompLab package. Some of the measurements are missing, either due to instrument problems or data collection issues. Load the data with

data(ghcnd_stations, package = "StatCompLab")
data(ghcnd_values, package = "StatCompLab")

The ghcnd_stations data frame has 5 variables:

  • ID: The identifier code for each station
  • Name: The human-readable station name
  • Latitude: The latitude of the station location, in degrees
  • Longitude: The longitude of the station location, in degrees
  • Elevation: The station elevation, in metres above sea level

The station data set is small enough that you can view the whole thing, e.g. with knitr::kable(ghcnd_stations). You can try to find some of the locations on a map (Google Maps and other online map systems can usually interpret latitude and longitude searches).

The ghcnd_values data frame has 7 variables:

  • ID: The station identifier code for each observation
  • Year: The year the value was measured
  • Month: The month the value was measured
  • Day: The day of the month the value was measured
  • DecYear: “Decimal year”, the measurement date converted to a fractional value, where whole numbers correspond to 1 January, and fractional values correspond to later dates within the year. This is useful for both plotting and modelling.
  • Element: One of “TMIN” (minimum temperature), “TMAX” (maximum temperature), or “PRCP” (precipitation), indicating what each value in the Value variable represents
  • Value: Daily measured temperature (in degrees Celsius) or precipitation (in mm)

The ghcnd_values data frame has 502,901 rows, so we don’t want to try to view the whole object directly. Instead, we can start by summarising it.

Start by counting how many observations each station has for each type of measurement. The shortest approach is to use the count() function; a more generalisable approach is to use the group_by(), summarise(), and n() functions. See ?count for how count() is connected to the others. To avoid having to create temporary named variables, end the pipe operations with a call to knitr::kable(), especially if you’re working in an RMarkdown document.
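For example, a minimal sketch using count() (the group_by(), summarise(), and n() version is analogous):

library(tidyverse)
library(StatCompLab)
data(ghcnd_values, package = "StatCompLab")

# One row per station and element, with the number of observations in n
ghcnd_values %>%
  count(ID, Element) %>%
  knitr::kable()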

Exploratory plotting

So far, we have only looked at the station data and the weather measurements separately. When plotting, we would at least like to have access to the station names instead of the identifier codes, to give a more human-readable presentation.

This can be accomplished with the left_join() function, which adds copies of the rows from one data frame to another wherever one or more identifier columns match. Create a new variable, ghcnd, that for each observation contains both the measurements and the station data:

ghcnd <- left_join(ghcnd_values, ghcnd_stations, by = "ID")
head(ghcnd)

Now plot the daily minimum and maximum temperature measurements connected by lines as a function of time (DecYear), with a different colour for each element and a separate subplot for each station (facet_wrap(~variablename)), labelled by the station names.

Again, avoid creating a temporary named variable. Instead, feed the initial data wrangling result into ggplot() directly.
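One possible sketch, plotting the raw daily values (Value, Element, and Name come from the ghcnd data frame created above):

# Daily TMIN and TMAX over time, one panel per station
ghcnd %>%
  filter(Element %in% c("TMIN", "TMAX")) %>%
  ggplot(aes(DecYear, Value, colour = Element)) +
  geom_line() +
  facet_wrap(~Name)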

Due to the amount of data, it’s difficult to see clear patterns here. Produce two figures, one showing the yearly averages of TMIN and TMAX as points, and one showing the monthly seasonal averages (for months 1 through 12) of TMIN and TMAX, separately for each station.

Again, avoid creating a temporary named variable. In the previous code, insert calls to group_by() and summarise(), and modify the x-values in the aesthetics.
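A sketch of the yearly-averages version; the monthly seasonal figure is analogous, with Month replacing Year in both group_by() and the aesthetics:

# Yearly averages of TMIN and TMAX, one panel per station
ghcnd %>%
  filter(Element %in% c("TMIN", "TMAX")) %>%
  group_by(ID, Name, Element, Year) %>%
  summarise(Value = mean(Value), .groups = "drop") %>%
  ggplot(aes(Year, Value, colour = Element)) +
  geom_point() +
  facet_wrap(~Name)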

What are the common patterns in the yearly values, and in the monthly seasonal values?

Scatter plots

If we want to draw a scatter plot of TMIN against TMAX, we need to rearrange the data a bit. For this we can use the pivot_wider() function, which turns a names variable and a values variable into several named variables. Note that if only some measurement elements are present on a given day, NAs will be produced by default. Optionally, filter these rows out before calling ggplot().

Draw a scatter plot of daily TMIN vs TMAX for each station, with the colour determined by the month.
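A sketch, where Month is converted to a factor so that ggplot uses a discrete colour scale:

# One column per element, then TMIN vs TMAX coloured by month
ghcnd %>%
  pivot_wider(names_from = Element, values_from = Value) %>%
  filter(!is.na(TMIN), !is.na(TMAX)) %>%
  ggplot(aes(TMAX, TMIN, colour = factor(Month))) +
  geom_point() +
  facet_wrap(~Name)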

Cross validation

Choose one of the stations, and create a new data frame, data, from ghcnd, with the yearly averages of TMIN as a column (as in the previous pivot_wider output) and with missing values removed with filter().
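A sketch along these lines, where the station ID is a placeholder to replace with an ID looked up in ghcnd_stations:

data <- ghcnd %>%
  filter(ID == "XXXXXXXXXXX") %>%  # placeholder: pick an ID from ghcnd_stations
  group_by(Year, Element) %>%
  summarise(Value = mean(Value), .groups = "drop") %>%
  pivot_wider(names_from = Element, values_from = Value) %>%
  filter(!is.na(TMIN))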

Within-sample assessment

Now, using the whole data set, estimate a linear model for TMIN with the lm() formula TMIN ~ 1 + Year, and compute the average 80% interval score (use the proper_score() function that you used in Lab 6) for prediction intervals for each of the TMIN observations in data. See ?predict.lm for documentation of the predict() method for models estimated with lm().
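A sketch, assuming the proper_score() interface from Lab 6 with type "interval", the observations, the interval bounds, and alpha = 1 - 0.8 = 0.2 (check ?proper_score):

# Fit on the whole data set and score the 80% prediction intervals within-sample
fit <- lm(TMIN ~ 1 + Year, data = data)
pred <- predict(fit, newdata = data, interval = "prediction", level = 0.8)
score_within <- mean(
  proper_score("interval", data$TMIN,
               lwr = pred[, "lwr"], upr = pred[, "upr"], alpha = 0.2))
score_within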

Cross validation

We now want to compute the five average 80% interval scores from 5-fold cross validation, based on a random partition of the data into 5 approximately equal parts.

First, add a new column Group to data defining the partitioning, using mutate(). One approach is to compute a random permutation index vector and then use the modulo operator %% to reduce it to 5 values; another is to apply ceiling() to scaled indices.
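For example, the permutation-and-modulo approach might look like this (groups labelled 1 to 5):

# Random partition into 5 approximately equal groups
data <- data %>%
  mutate(Group = (sample(seq_len(n())) %% 5) + 1)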

Then loop over the partition groups, estimating the model with the group left out, and then predicting and scoring the predictions for that group.
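A sketch of the loop, reusing the assumed proper_score() call from above:

# 5-fold cross validation: leave each group out in turn
scores <- numeric(5)
for (grp in 1:5) {
  fit_cv <- lm(TMIN ~ 1 + Year, data = filter(data, Group != grp))
  held_out <- filter(data, Group == grp)
  pred_cv <- predict(fit_cv, newdata = held_out,
                     interval = "prediction", level = 0.8)
  scores[grp] <- mean(
    proper_score("interval", held_out$TMIN,
                 lwr = pred_cv[, "lwr"], upr = pred_cv[, "upr"], alpha = 0.2))
}
scores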

Compare the resulting scores with the one based on the whole data set. Is the average cross validation score larger or smaller?