Apr 6, 2019 - Change over time in geographies


TL;DR

  • This looks at one persnickety issue that seems trivial but turns out to be a bit more nuanced: (1) finding change over time (2) when using geographical zone data (3) when you want to know how some sub-population in those zones has changed.
  • Out of four ways of measuring that change, the one I find most useful and intuitive is: percentage change of the sub-population as a proportion of the zone total population in the first time period.
  • E.g. if the subpop and total pop are 50/200 and 60/350 for two consecutive time points, find ((60-50)/200) * 100 to get a percentage (+5% change in this case).
  • This preserves zone proportionality while also keeping the correct change polarity if total zone population has a large change too. I also find it more intuitive: it’s asking e.g. “how has the student population changed compared to the total population of the zone last year?”

Why?

Measuring how things change over time is straightforward for single items or aggregate values: either use the raw change or find the proportional change. For the latter, you can also just use natural log values - if the changes aren't too large, the difference between the natural logs of the two time points is a good approximation of percent change. (That link has a nice little table showing at what values the approximation diverges.)
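A quick sketch of that approximation, with made-up numbers:

#Difference of natural logs approximates proportional change for small changes
old <- 100; new <- 105
(new - old) / old       # 0.05, i.e. 5%
log(new) - log(old)     # 0.0488 - close to 0.05

#For larger changes the approximation drifts
old <- 100; new <- 150
(new - old) / old       # 0.5
log(new) - log(old)     # 0.405 - noticeably off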

It becomes more complicated, though, once you start using geographical zones - and particularly if you’re wanting change for some sub-population in those zones.

None of this is complicated, but if you're using geographical data to look at change over time, the choice of measure can have a large impact on the outcome, potentially reversing the polarity of values. That may matter if you're using it for modelling, e.g. how change in a sub-population impacts some other variable.

Toy example

First-up, the libraries we’ll be using:

library(tidyverse)
library(sf)
library(tmap)

Here’s some toy population data for a bunch of zones. We want to know how much the student population has changed between two years. First, though, let’s get some geographical zones - doesn’t matter where, they’re just for illustration. (These happen to be four LSOAs in Sheffield.)

zones <- readRDS(gzcon(url('https://www.dropbox.com/s/h4qgf3bt7ooiqjd/fourzones.rds?dl=1')))

plot(st_geometry(zones))

[Figure: map of the four zones]

Let’s give those some pretend data for population numbers, for t minus 1 (last year) and t (this year). We’ll make a copy of the neighbourhood for both years then row-bind them into a single sf dataframe:

#Simpler zone codes
zones$code <- 1:4

neighbourhood_t <- zones
neighbourhood_tminus1 <- zones

#Last year
neighbourhood_tminus1$totalpop <- c(200,350,50,100)
neighbourhood_tminus1$studentpop <- c(50,30,10,1)
neighbourhood_tminus1$year = 't minus 1'

#This year
neighbourhood_t$totalpop <- c(350,400,80,150)
neighbourhood_t$studentpop <- c(60,30,30,4)
neighbourhood_t$year = 't'

#Join those both together
neighbourhood <- rbind(neighbourhood_tminus1,neighbourhood_t)

#Make sure R knows what order time is in
neighbourhood$year <- factor(neighbourhood$year, levels = c('t minus 1','t'))

We can see how total population changed between the two time points using tmap:

tm_shape(neighbourhood) +
  tm_polygons('totalpop') +
  tm_facets('year')

[Figure: total population per zone, t minus 1 vs t]

The issue with that: our zones are different sizes. Other things being equal, we’d expect larger zones to have a bigger population. The most obvious way to deal with that: if we’re looking at total population, we can use population per unit area. Or if we’re looking at a sub-group - students, say - we can just find what proportion of the population they are:

neighbourhood <- neighbourhood %>%
  mutate(percent_students = (studentpop/totalpop)*100)

tm_shape(neighbourhood) +
  tm_polygons('percent_students') +
  tm_facets('year')

[Figure: students as a percentage of zone population, t minus 1 vs t]
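As an aside, for the total-population option mentioned above (population per unit area), sf can supply the zone areas directly. A minimal sketch, assuming the zones are in a metre-based projection like British National Grid:

#Population per square km - st_area returns square metres for a metre-based CRS
neighbourhood$area_km2 <- as.numeric(st_area(neighbourhood)) / 1e6
neighbourhood$pop_per_km2 <- neighbourhood$totalpop / neighbourhood$area_km2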

Change over time

So how did the number of students change between those two years? We can start with the most obvious measure, just finding change in the raw number of students between time points. Dplyr’s lag function does the job.

Here, arrange makes sure the years are in the right order. They already are in this case, but it's worth remembering this is a necessary step for lag to work: it needs to know the order to find the correct lag values in sequence. We could also use 'order_by = year' inside lag to do the same thing, but I think arranging first is tidier (especially if you're doing calculations with several instances of lag, as we do below). We also group by geographical zone (code), as we want the lag per zone.

change <- neighbourhood %>% 
  arrange(year) %>% 
  group_by(code) %>% 
  mutate(totalchange = studentpop - lag(studentpop))
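For comparison, here's a minimal sketch of the order_by alternative mentioned above - same result, just passing the ordering into lag itself:

#Equivalent, without arranging first
change_alt <- neighbourhood %>% 
  group_by(code) %>% 
  mutate(totalchange = studentpop - lag(studentpop, order_by = year))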

It might be clearer to see these as bars, so we can read off the exact numbers. Note that for plotting we also drop the first year, which has no lag values.

change %>% filter(year == 't') %>% 
  ggplot(aes(x = code, y = totalchange)) +
  geom_col()

[Figure: raw change in student numbers per zone]

We can see one zone saw no increase in student numbers, and another had a massive increase. But - again - without accounting for the zone’s size, we don’t know if this increase is actually large, relative to the zone’s population.

And the solution might be the same as before: find the proportional change in student numbers between the two time points. We'll add this to the same dataframe alongside the original totalchange variable, so we can compare. (The unwieldy long names will make sense in a moment…)

change <- change %>% 
  mutate(
    percentchange_intotalstudents = (studentpop-lag(studentpop))/lag(studentpop)*100
    )

#plot again
change %>% filter(year == 't') %>% 
  ggplot(aes(x = code, y = percentchange_intotalstudents)) +
  geom_col()

[Figure: percent change in student numbers per zone]

Now zone 4 shows an alarmingly mahoosive rise in student numbers: up by 300%! But we can see from the previous plot that student numbers only went up by 3. Looking at the actual data, zone 4 had only 1 student last year, so an additional 3 is indeed a 300% increase. But given the total zone population is now 150, students still only make up 2.7% of it.

year       code  totalpop  studentpop  percent_students
t minus 1     1       200          50         25.000000
t minus 1     2       350          30          8.571429
t minus 1     3        50          10         20.000000
t minus 1     4       100           1          1.000000
t             1       350          60         17.142857
t             2       400          30          7.500000
t             3        80          30         37.500000
t             4       150           4          2.666667
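A quick check of the zone 4 arithmetic:

#Zone 4: 1 student last year, 4 this year
(4 - 1) / 1 * 100    # a 300% rise in student numbers...
4 / 150 * 100        # ...but students are still only ~2.7% of the zone's 150 people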

So maybe we want the change in proportion of students between years? We’ve already found students as a percentage of the total population, so we can just lag that (adding to our previous measures):

change <- change %>% 
  mutate(percentchange_in_propofstudents = percent_students-lag(percent_students))

change %>% filter(year == 't') %>% 
  ggplot(aes(x = code, y = percentchange_in_propofstudents)) +
  geom_col()

[Figure: change in students as a proportion of zone population]

Huh. Now we have a couple of negative values. We already know raw student numbers have increased - but this shows that, as a proportion of the population in zones one and two, they've actually decreased.

And that may be exactly the change metric you're after, but I find it dissatisfying. It may matter greatly (e.g. for modelling a reaction to changing sub-populations) whether numbers are increasing or not, so it might be essential to preserve that information. Equally, I don't want a metric that treats an increase from 1 to 4 students as a dramatic change when, proportionally, it's trivial.

A simple solution: use the first time period's total population as the proportion denominator for both student counts. For our zone 1 example, that's 50/200 for t minus 1 and 60/200 for t. This preserves the polarity of the change while keeping it proportional to the zone's population. And as it's the same denominator for both, it reduces to (60-50)/200:

change <- change %>% 
  mutate(propdiff_over_tminus1 = ((studentpop-lag(studentpop))/lag(totalpop))*100)

change %>% filter(year == 't') %>% 
  ggplot(aes(x = code, y = propdiff_over_tminus1)) +
  geom_col()

[Figure: student change as a percentage of the t minus 1 zone population]

That makes more intuitive sense to me: we're asking, how has the student population changed compared to the total population of the zone last year? Comparing against a fixed baseline makes the zones more comparable, I reckon, and using the first time period makes more sense logically if we're thinking about causation (which we usually are).

Putting all of them together to compare:

change %>%
  filter(year == 't') %>% 
  select(code,totalchange:propdiff_over_tminus1) %>% 
  gather(key = metric, value = value, totalchange:propdiff_over_tminus1) %>% 
  mutate(metric = factor(metric, levels = c(
    'totalchange',
    'percentchange_intotalstudents',
    'percentchange_in_propofstudents',
    'propdiff_over_tminus1'
  ))) %>% 
  ggplot(aes(x = code, y = value)) +
  facet_wrap(~metric, scales = 'free_y') +
  geom_col()

[Figure: all four change metrics compared, faceted]

To summarise: for measuring change over time of sub-groups (students in this case) in geographical zones -

  • Total change doesn’t account for zone population so doesn’t mean much.
  • Percent change in total students gives proportional change for students but also doesn’t account for zone population. So large changes in small numbers seem unreasonably prominent.
  • Percent change in proportions may be what you need, but it introduces a new problem: we can lose the polarity of the change. Whether it flips depends on how much the total population changes, and we may not want that.
  • Percent change relative just to t minus 1 solves that polarity issue and - to my eye at least - makes more intuitive sense. It has its own downside: because the denominator is last year's population, the change can look quite large compared to the current population. So you have to remember, and communicate, exactly what it is measuring.
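For reference, here are all four measures pulled together into one pipeline (just the code from above, combined):

change <- neighbourhood %>% 
  arrange(year) %>% 
  group_by(code) %>% 
  mutate(
    totalchange = studentpop - lag(studentpop),
    percentchange_intotalstudents = (studentpop - lag(studentpop)) / lag(studentpop) * 100,
    percentchange_in_propofstudents = percent_students - lag(percent_students),
    propdiff_over_tminus1 = (studentpop - lag(studentpop)) / lag(totalpop) * 100
  )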

Sep 17, 2017 - Learn ggplot and R using English house price and wage data


I recently ran a day course on visualising data with R for the Urban Big Data Centre in Glasgow. I used the rather high-falutin’ phrase ‘principles of visualising data with ggplot’ - why principles? Because (a) once you get the basic principles of ggplot, you can run with the rest and (b) they’re not all that obvious when you’re starting out. So the aim of the day was: get to the point where the ggplot cheatsheet makes some sense. (We also worked with the data wrangling cheatsheet; you can get to both of these via RStudio’s own help menu too.)

As with the last workshop, I’ve written it up so it can be worked through by anyone.

  • There’s a preamble course chunk for anyone totally new to R. It goes through the basics of R, RStudio and dataframes - it’s in the form of an R project. That zip can be downloaded from here. The PDF is in the course booklet folder.

  • The main course folder can be downloaded here. It’s the same folder structure but doesn’t have an already-made RStudio project, so you’ll just have to make a new one - that’s explained in the course booklet (again, in the folder of the same name).

The day used two datasets, pre-wrangled into a more friendly form. First, Land Registry house price data for England - the original data can be downloaded in one ~4GB chunk from here. Second, median wage data at local authority level from NOMIS. There was also a bit of Census employment data in there.

One of the main things to get across about principles: data wrangling and visualisation are two sides of the same coin. The tidyverse has done a sublime job of making those two things work smoothly together in R. As Hadley sez: ‘by constraining your options, it simplifies how you can think about common data manipulation tasks’. I went for a tidyverse first approach and I think it paid off. As long as you’re very careful to be clear what’s actually happening with things like the pipe operator (clue’s in the name so it’s intuitive enough once explained), it’s all good. It’s also much, much less boring than wading through the entirety of base-R first. Bits can be introduced where needed but you don’t need everything.
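If the pipe is new to you: it just passes the left-hand side in as the first argument of the function on the right. A two-line illustration (using the sales data loaded below, which has a price column):

#These two lines are equivalent - the pipe feeds 'sales' in as filter's first argument
filter(sales, price > 250000)
sales %>% filter(price > 250000)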

At some point I’ll post up the pre-wrangling code: it’s useful for showing how to geocode the Land Registry housing data using the - also freely available - Code-Point Open postcode location data, how to bin house location points into geographical zones, etc.
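The gist of the binning step, as a rough sketch rather than the actual pre-wrangling code (the object and column names here are hypothetical):

library(sf)

#Hypothetical sketch: turn geocoded sales (easting/northing from Code-Point Open) into sf points
#on British National Grid, then attach the zone each point falls in with a spatial join
sales_sf <- st_as_sf(sales_geocoded, coords = c('easting', 'northing'), crs = 27700)
sales_in_zones <- st_join(sales_sf, zones)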

I wasn’t sure how much of the original housing data would fit onto the machines we were using, so I only included a subset of the data (that’s in the zip above). The whole thing for England can be downloaded here as an RDS file. Load it into R (assuming you’ve downloaded it and put it into a data folder in your R project) with the following. It decompresses to about 1.7GB in memory.

sales <- readRDS('data/landRegistryPricePaidEngland_w_LADs_allchar.rds')

The most startling part for me was linking house price data to the NOMIS median wage data - I talk about that a lot more in its own post here. tl;dr: London! Oh my God!

The optional extras at the end include something on prettifying graphs, how to cram as much into one graph as humanly possible. This quote’s been doing the rounds: ‘a designer knows they have achieved perfection not when there is nothing left to add, but when there is nothing left to take away’… RUBBISH! THROW IT ALL IN THERE! EVERY FIGURE IS 250 WORD EQUIVALENT IN PAPERS! Well, no, it’s not rubbish, obviously, but there are times…

There’s also a bit on outputting multiple plots to a folder and lastly, inspired by the amazing R for Data Science book, a section on visualising coefficients and error rates from many regression outputs. Statistically dodgy (spatial/temporal data with no account of autocorrelation) but gets the point across. Because regression tables schmregschmession schmables, I say. See below: it’s just Census employment data versus house prices in some TTWAs. Oh and then a little easy mapping thing to end to illustrate another idea for using dplyr.
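For the coefficient visualisation, the general pattern is roughly this (not the course’s actual model, which used Census employment data against house prices in TTWAs - just a minimal sketch of the broom-style approach, assuming the tidyverse and lubridate are loaded):

library(purrr)
library(broom)

#One regression per local authority, then tidy each model and pull out a coefficient with its standard error
sales$year <- lubridate::year(sales$date)

coefs <- sales %>% 
  split(.$localauthority) %>% 
  map(~ lm(price ~ year, data = .x)) %>% 
  map_df(tidy, .id = 'localauthority') %>% 
  filter(term == 'year')

#(In practice you'd probably filter to a subset of areas before plotting)
ggplot(coefs, aes(x = reorder(localauthority, estimate), y = estimate)) +
  geom_pointrange(aes(ymin = estimate - 2 * std.error, ymax = estimate + 2 * std.error)) +
  coord_flip()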

Any questions, feel free to ask: d dot olner at sheffield dot ac uk or on that there Twitter.

House prices as a multiple of wages

[Figure: house prices as a multiple of yearly wages, top and bottom English local authorities]

Pulling out model values into a viz

X axis: percent increase in house prices for a 1 percent increase in employment in wards for those TTWAs. (Note point above: illustration of pulling out model values, not a good model!)

[Figure: model coefficients with error bars, by TTWA]

Sep 17, 2017 - English House prices - how have they changed compared to median wages


This post looks at how to plot house prices as a multiple of yearly wages for English local authorities. There should be everything here you need to recreate the plots - as with the one day training course this comes from, you’ll need to stick the data in a data folder in an R project first. (The post on the day course links to the materials if you need guidance on doing that.)

Without adjusting raw house prices, their value over time is not really comparable. Adjusting them just for straight inflation can be misleading, as house prices aren’t actually included in the standard inflation measure. A better measure is ‘cost of housing as a multiple of wages’. This also allows for a more geographically interesting picture. NOMIS median wage data is available yearly for each local authority and goes back to 1997, so you can build a house price index for each local authority up to 2016. (E.g. ‘this house is worth four times the yearly median wage here.’)
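The index itself is simple. With made-up numbers (NOMIS wages are weekly, so convert to yearly first):

median_weekly_wage <- 500                      # hypothetical
median_yearly_wage <- median_weekly_wage * 52  # 26,000
mean_house_price <- 208000                     # hypothetical
mean_house_price / median_yearly_wage          # 8: the house costs eight times the yearly median wage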

The graph in the previous post shows this index for the top and bottom five local authorities (from a rank of 2016 wage multiples). But I’d left out the top value - Kensington and Chelsea. It’s so far above the other values, it makes them rather unreadable. (During the course, we did look at Kensington/Chelsea in another section, so it was covered.)

Here’s what it looks like with Kensington and Chelsea included. Note: this uses a pre-selected sub-set of the more well-known local authorities we used on the course. For reproducing it yourself, I’ve included code and data below that use all the local authorities - London, of course, has more of the top values once all are used. Note note: this is using mean house prices per local authority - there’s also code below for using the median, which brings the values closer to those in e.g. this Guardian article. I think using mean house prices might give a better picture of how some very high value properties (of which London has plenty, of course) affect affordability - views welcome.

Anyway - data! –>

[Figure: wage multiples over time, with Kensington and Chelsea included]

Yikes. The average house in Kensington and Chelsea has gone from just under twenty times the median wage to not far off seventy times now. Good God. Part of the reason is shown if you just plot the wage multiple against the median wage itself:

[Figure: wage multiple plotted against median wage, by local authority]

Kensington and Chelsea is out there on the right by itself: actually not that high a median wage, but by far the largest wage multiple for housing. So what this is likely showing: as well as having some of the priciest housing, it also has many lower-waged people. It’s a highly mixed area. In some imaginary future (I’m sure this would never happen…) where many of the area’s social housing tenants had moved elsewhere, the wage multiple would drop.

The NOMIS data actually contains every wage percentile - it’s very gappy for the 90th percentile, as NOMIS excludes numbers that aren’t statistically reliable and there aren’t enough people in that category. At any rate - looking at other percentiles might provide a deeper look at what’s going on.

The lowest-value areas (easier to see in the minus-Kensington graph in the other post) have certainly seen the gap increase - something like five times to ten times the median wage over the data period. But there’s also a dynamic I’ve seen in data at local scales too: after the 07/08 crash, the poorest areas have seen house values decline (even relative to wages, it appears) whereas the richer areas have continued on up. For some of the richest, the crash looks like little more than a dink in that climb.

There are some other slightly puzzling areas with very high median wages and low house values. I’m a little suspicious of the data for those (e.g. Copeland in the point plot above), but more digging is also required…

Code and data to play with yourself

To end: here’s some R code with links to tidied data for both median wages and English house prices if you want to reproduce this or look at different areas. It’s about the same as the course code but (1) it uses all local authorities so London now has six of the top seven values; (2) I’ve added median house price in too (plotted at the end), which is more in line with the values in the Guardian article linked above. At some point I’ll come back to grabbing and tidying the data / getting the Land Registry data geocoded etc.

library(dplyr)
library(ggplot2)
library(forcats)
library(readr)
library(tidyr)
library(lubridate)

#Load median wages at local authority level 97 to 2016
#taken from NOMIS and tidied
#This is in the course folder or download from here then stick in data folder for project
#https://www.dropbox.com/s/66cd8jqoltv1l30/medianWages_localAuthority.csv?dl=1
wages <- read_csv('data/medianWages_localAuthority.csv')

#Gather year into a single column
wagesLong <- wages %>% 
  gather(key = year, value = medianwage, `1997`:`2016`)

#If you want a quick look at the wage data...
# ggplot(wagesLong, aes(x = year, y = medianwage)) +
#   geom_boxplot()

#Load all tidied English housing data, not just sub-selection
#Download from here then stick in data folder for project
#https://www.dropbox.com/s/pzas4l1uc4piu5f/landRegistryPricePaidEngland_w_LADs_allchar.rds?dl=1
sales <- readRDS('data/landRegistryPricePaidEngland_w_LADs_allchar.rds')

sales$year <- year(sales$date)

#Summarise house prices at local authority level, per year
saleSummary <- sales %>% 
  group_by(localauthority,year) %>% 
  summarise(meanPrice = mean(price),
            medianPrice = median(price))

#Make year numeric
wagesLong$year <- as.numeric(wagesLong$year)

#Join house price and wage data
price_n_wage <- inner_join(
  wagesLong,
  saleSummary,
  by = c('year', 'Area' = 'localauthority')
)

#Make new variables using mutate - yearly wage and house price as multiple of that
price_n_wage <- price_n_wage %>% 
  mutate(
    medianwageyearly = medianwage *52, # weekly wage to yearly wage
    wagemultipleFromMean = meanPrice / medianwageyearly, # house price as multiple of that yearly wage
    wagemultipleFromMedian = medianPrice / medianwageyearly # house price as multiple of that yearly wage
    )

#Use 2016 wage multiple as an index for picking out the top and bottom England values
#Handy to look at directly
price_n_wage2016 <- price_n_wage %>% 
  filter(year == 2016) %>% 
  arrange(-wagemultipleFromMean)

#Zones to show in plot (one of at least three ways of doing this)
#Include Kensington and Chelsea, exclude last entry as no wage data 
zoneselection <- price_n_wage2016 %>% slice(c(1:7,309:315))

#Not South Bucks - rest of top 7 are in London
londonzones <- zoneselection$Area[1:6]

#Add flag for London zones
price_n_wage <- price_n_wage %>% 
  mutate(inLondon = ifelse(Area %in% londonzones, 'london','other'))

ggplot(price_n_wage %>% filter(Area %in% zoneselection$Area), 
       aes(x = year, y = wagemultipleFromMean, colour = fct_reorder(Area,-wagemultipleFromMean))) +
  geom_line(aes(linetype = inLondon), size = 1) +
  ylab('wage multiple') +
  labs(colour = 'local authority', linetype = ' ') +
  scale_y_continuous(breaks = seq(from = 0, to = 80, by = 5),
  labels = c('0','5x','10x','15x','20x','25x','30x','35x','40x','45x','50x','55x','60x','65x','70x','75x','80x')) +
  ggtitle("House prices as multiple of yearly wage\ntop and bottom seven English local authorities\nFrom mean house price") +
  theme(plot.title = element_text(face="bold",hjust = 0.5),
        axis.title.x=element_blank())

[Figure: house prices as a multiple of yearly wage, top and bottom seven English local authorities, from mean house price]

And the same plot but with median house price

[Figure: the same plot, from median house price]