The Köppen climate classification is a widely used system for classifying climates. It assigns locations around the world to classes like “Tropical rainforest” or “Warm-summer continental”.
(Image from Wikipedia [1])
One funny thing about the Köppen classification is that it puts Portland, Oregon in the same group as San Francisco, and in a very similar group to Los Angeles. I might be wrong, but Southern California (comfortably walking outside in a t-shirt at 2am) seemed like a pretty different climate from Seattle (not seeing the sun for n months)!
I wanted to try classifying climates of locations myself. In this post, I’ll classify the climates of continental US weather stations based on the weather they recorded, with the help of a neural net. I’ll train the net on a task that also encourages it to learn a vector representation of each station. Then I’ll cluster those vector representations to group similar stations into climate classes.
I used data from the Global Historical Climatology Network [2, 3], specifically the subset of weather stations in the U.S. Historical Climatology Network.
This gives 2.4 GB of data containing decades (sometimes over a century!) of daily weather reports from 1,218 stations around the US. Reports usually include high temperatures, low temperatures, and precipitation, but the dataset also has information about snow, wind, fog, evaporation rates, volcanic dust, tornadoes, etc.
Getting the data
Wheee, let’s download a few GBs of historical weather data.
The readme is super useful.
The tar file for the US stations is located at ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/daily/ghcnd_hcn.tar.gz (heads up, it uses an underscore, not a hyphen like the readme says!).
The other useful file is ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/daily/ghcnd-stations.txt, which contains coordinates and station names.
If I go through the .dly files in the ghcnd_hcn/ folder and look up the station names in ghcnd-stations.txt, I can plot the stations I have daily weather records for using Basemap. (See “Basemap” below for code and details.)
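As a sketch of the lookup step: ghcnd-stations.txt is a fixed-width file, and the column positions below come from the readme (worth double-checking against your copy). A minimal parser might look like this:

```python
def parse_station_line(line):
    """Parse one fixed-width line of ghcnd-stations.txt.

    Column positions are from the GHCN-Daily readme:
    ID 1-11, LATITUDE 13-20, LONGITUDE 22-30, NAME 42-71.
    """
    return {
        "id": line[0:11].strip(),
        "lat": float(line[12:20]),
        "lon": float(line[21:30]),
        "name": line[41:71].strip(),
    }

def load_stations(path):
    """Map station id -> station record for every line in the file."""
    with open(path) as f:
        return {s["id"]: s for s in map(parse_station_line, f)}
```

With the id-to-coordinates map in hand, plotting is just a matter of looking up each .dly file’s station id.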
Reducing the data
There are two issues with the dataset: it’s a little large to easily load and manipulate in memory, and some records are missing or labeled as low-quality. I’d love to figure out how to use larger datasets and handle the low-quality data, but I’ll save that for another post. Instead, I’ll create a small subset I can comfortably load into memory. For each station, I’ll sample up to 2000 daily weather reports, each including the maximum temperature, minimum temperature, and precipitation.
I limit the data in a few ways. I drop data before 1990 to make the dataset more manageable, and I require that all three weather values be present and free of quality flags. These filters may introduce bias. If I were doing real science instead of trying to make a pretty graph, I’d justify these decisions better!
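The filtering and sampling might look like the following sketch. The .dly record layout comes from the readme; this is an illustration of the rules above, not the exact code in process_weather_data.py:

```python
import random
from collections import defaultdict

def parse_dly_line(line):
    """Parse one line of a .dly file into daily records.

    Layout (from the readme): station id in columns 1-11, year 12-15,
    month 16-17, element 18-21, then 31 repeats of
    (value: 5 chars, mflag/qflag/sflag: 1 char each).
    """
    station, year = line[0:11], int(line[11:15])
    month, element = int(line[15:17]), line[17:21]
    for day in range(31):
        offset = 21 + day * 8
        value = int(line[offset:offset + 5])
        qflag = line[offset + 6]
        # -9999 marks a missing value; a non-blank qflag marks a quality issue.
        if value != -9999 and qflag == " ":
            yield (station, year, month, day + 1, element, value)

def sample_per_station(records, limit=2000, seed=0):
    """Keep at most `limit` records per station (records start with station id)."""
    by_station = defaultdict(list)
    for rec in records:
        by_station[rec[0]].append(rec)
    rng = random.Random(seed)
    out = []
    for recs in by_station.values():
        out.extend(recs if len(recs) <= limit else rng.sample(recs, limit))
    return out
```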
Loading the dataset
See scripts/process_weather_data.py for details. If you run the script in this folder with data/weather/data as the output file, such as:
python scripts/process_weather_data.py [filtered .dly files, see process_weather_data] data/weather/data
then I think this code should work.
To give an idea of how this data looks, here are the number of examples per station and the minimum and maximum temperatures.
There are a few things I’m overlooking for now: the weather report values are always integers, and precipitation is never negative. Precipitation also has a larger range than temperature and is often 0. I’ll end up converting the month and day into the approximate day of the year (e.g., 21 = January 21), which means January 1 and December 31 are considered far apart even though they’re adjacent.
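That conversion is small enough to sketch. Here I fold Feb 29 into Feb 28 so leap-day reports still get a value (my choice, not necessarily what the original script did):

```python
from datetime import date

def day_of_year(month, day):
    """Approximate day of the year, ignoring leap years.

    Uses a fixed non-leap reference year, so Feb 29 maps to Feb 28.
    """
    if month == 2 and day == 29:
        day = 28
    return date(2001, month, day).timetuple().tm_yday
```

Note the wraparound problem mentioned above: day_of_year(1, 1) is 1 and day_of_year(12, 31) is 365, even though those dates are one day apart.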
The purpose of the neural network is to learn a good vector representation of the weather station. To do this, I’ll set up a task that hopefully encourages the network to learn a good vector representation.
The inputs are the station_id and the day of the year (so the month and day, but not the year).
The network needs to predict the precipitation, high temperature, and low temperature for that day. I compare its prediction to an arbitrary year’s actual weather and use how poorly it does to tell TensorFlow how to update the network parameters.
To get the vector representation, I pass the station_id through an embedding layer. Then I concatenate the day to make a bigger vector. I’ll pass the station+day vector through a little neural network that needs to predict the three weather values. The full network looks something like this:
In the image above, the weather data from the U.S. Historical Climatology Network gives me the blue and green boxes. I’m using the left-most boxes (station_id and day) as input and the right-most boxes (precipitation, high temperature, low temperature) as output. As I train the model, back-propagation will find better parameters for the blue boxes (the station vector representations and the neural network).
When I’m satisfied, I’ll take the vector representations and run an unsupervised classifier on them. I don’t particularly care what the neural network learns; I’m just using it to learn the vector representations. Once I have them, I’ll use K-Means clustering to group stations with similar vectors. These clusters will become my climate classification!
Defining the network
Below I implement the network in TensorFlow.
As the model is training, I’ll occasionally cluster the stations and save the results.
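That clustering step could look like the sketch below (scikit-learn is my choice; the post doesn’t name a K-Means implementation). It assumes the embedding table has already been pulled out of the model as a (num_stations, embedding_dim) array, e.g. from the embedding layer’s weights:

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_stations(embeddings, n_classes=6, seed=0):
    """Assign each station's vector to one of `n_classes` clusters.

    `embeddings` is a (num_stations, embedding_dim) array; returns one
    cluster label per station.
    """
    kmeans = KMeans(n_clusters=n_classes, n_init=10, random_state=seed)
    return kmeans.fit_predict(np.asarray(embeddings))
```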
Now I can start training! You can monitor the job through TensorBoard.
Deciding when to stop
If I had a metric for how good the climate classifications were, I could use the metric to decide when the model is done training. For the purpose of this post (again, pretty pictures), I’ll just train the model long enough to run through the dataset a few times.
(I have 2.4M weather reports. Since I’m using a batch size of 50, it will take around 50K steps to go through the dataset once.)
Here’s the last saved figure from a run that used 6 classes.
Starting with a disclaimer: it’s really easy to start seeing patterns in noise!
That said, I think it’s neat that places near each other were assigned to the same group! The neural network didn’t know about the latitude or longitude of stations, only random weather reports from those stations.
It’s also neat that parts of the map look to me like they follow Köppen! The East is split up by latitude into brown, pink, and green. The West Coast gets its own class, and the interior West gets another.
Though Portland and Seattle still share California’s climate. It’s also probably weird that the humid South and the arid Southwest end up with the same climate. The yellow climate looks a little arbitrary, too, and the classification doesn’t pick up mountains.
And that’s it! I did a proof of concept of classifying climate with neural nets! Here are a few other things I found during this project.
Year-long predictions by climate
The point of this model was to train the embeddings. But the model also learned to predict the weather for each station. For fun, let’s check them out.
Predictions by climate
It would be cool to see why the model placed each station in each climate. These graphs are pretty cool, but I wouldn’t read too much into them!
I’ll group weather stations by assigned climate and ask the model to predict the weather for every day of the year. Then, for each day, I’ll plot the median prediction across all stations in that climate. To give an idea of the range of predictions in that climate, I’ll shade in the area between the 5th and 95th percentiles of stations.
In other words, on June 11 the precipitation chart shows a solid line at the median of the precipitation predictions for all stations in that climate, and a shaded region between the 5th and 95th percentiles of those predictions. (Note it’s not following a particular station! It’s just looking at all of the predictions for each day.)
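The band computation is simple with NumPy. Here’s a sketch; the matplotlib usage is my reconstruction of the plots, not the original code:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt

def climate_band(predictions):
    """Per-day median and 5th/95th percentiles across stations.

    `predictions` has shape (num_stations, num_days).
    """
    median = np.median(predictions, axis=0)
    lo, hi = np.percentile(predictions, [5, 95], axis=0)
    return median, lo, hi

def plot_climate_band(predictions, ax, label):
    """Solid line at the median, shaded band between the percentiles."""
    days = np.arange(1, predictions.shape[1] + 1)
    median, lo, hi = climate_band(predictions)
    ax.plot(days, median, label=label)
    ax.fill_between(days, lo, hi, alpha=0.3)
```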
It looks like blue (the West) and maroon (the northern part of the country east of the Rockies) have colder winters.
I think the precipitation predictions look a little weird. That said, it’s reassuring that the red dots were mostly on the West Coast and that the precipitation predictions show that climate includes a lot of places with very rainy winters, while blue, the West, includes more dry places.
TensorBoard has a neat embedding visualization currently called Projector. If you add column headers to the tsv files the classifier outputs above, you can load them in as labels and kind of get an idea of what KMeans is doing!
Day of the year
This was funny: before I used the day of the year, I used separate values for the month and the day. Here is what the predictions looked like:
I used Basemap to generate images. Here’s some of the code I used to generate the maps.
My sister told me about the problem with Köppen!
The neural network I use to learn embeddings is a little similar to how CBOW and Skip-gram learn word embeddings.
This was a neat proof-of-concept! There are lots of directions one can take this project, but I’m out of time for today. A few off the top of my head are:
- Use more data!
- Snow data is often present in the dataset, so I don’t even need to deal with missing data.
- Try different representations of days, like using an embedding!
- Find a better way to choose the model.
[1] Peel, M. C., Finlayson, B. L., and McMahon, T. A. (University of Melbourne): “Updated world map of the Köppen-Geiger climate classification,” Hydrology and Earth System Sciences (Supplement). Map enhanced, modified, and vectorized by Ali Zifan. CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=47086879
[2] Menne, M.J., I. Durre, R.S. Vose, B.E. Gleason, and T.G. Houston, 2012: An overview of the Global Historical Climatology Network-Daily Database. Journal of Atmospheric and Oceanic Technology, 29, 897-910, doi:10.1175/JTECH-D-11-00103.1.
[3] Menne, M.J., I. Durre, B. Korzeniewski, S. McNeal, K. Thomas, X. Yin, S. Anthony, R. Ray, R.S. Vose, B.E. Gleason, and T.G. Houston, 2012: Global Historical Climatology Network - Daily (GHCN-Daily), Version 3. NOAA National Climatic Data Center. http://doi.org/10.7289/V5D21VHZ. Accessed 2018/06/09.