Recently, I decided to crunch some data from the Australian Bureau of Meteorology (which I’ll just call BoM) to assess some of my own perceptions about how the climate in my home city of Brisbane had changed throughout my lifetime. As always, I performed the analysis in Knime, a free and open-source software platform that allows you to do highly sophisticated and repeatable data analyses without having to learn how to code. Along the way, I also took the opportunity to sharpen my skills at using R as a platform for making data visualisations, which is something that Knime doesn’t do quite as well.
The result of this process is HeatTraKR, a Knime workflow for analysing and visualising climate data from the Australian Bureau of Meteorology, principally the Australian Climate Observations Reference Network – Surface Air Temperature (ACORN-SAT) dataset, which has been developed specifically to monitor climate variability and change in Australia. The workflow uses Knime’s native functionality to download, prepare and manipulate the data, but calls upon R to create the visual outputs. (The workflow does allow you to create the plots with Knime’s native nodes, but they are not as nice as the R versions.)
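The workflow itself lives in Knime and R, but the kind of preparation step it performs — aggregating daily maximum temperatures into annual means before plotting — can be sketched in a few lines. This is an illustrative Python sketch, not code from the workflow: the two-column CSV layout is an assumption, so check the header of a real ACORN-SAT station file before reusing it.

```python
import csv
import io
from collections import defaultdict

def annual_mean_maxima(csv_text):
    """Aggregate daily maximum temperatures into annual means.

    Assumes a two-column CSV (date, maximum temperature in degrees C)
    with ISO-style YYYY-MM-DD dates -- a simplification of the daily
    layout used by ACORN-SAT station files.
    """
    totals = defaultdict(float)
    counts = defaultdict(int)
    for date, tmax in csv.reader(io.StringIO(csv_text)):
        if not tmax:          # skip missing observations
            continue
        year = date[:4]
        totals[year] += float(tmax)
        counts[year] += 1
    return {year: totals[year] / counts[year] for year in totals}

# A toy sample standing in for a real station file.
sample = "2018-01-01,31.2\n2018-01-02,29.8\n2019-01-01,33.4\n2019-01-02,\n"
print(annual_mean_maxima(sample))
```

In the actual workflow, a table like this is what gets handed to R for plotting.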
I’ve already used the HeatTraKR to produce this post about how the climate in Brisbane and Melbourne (my new home city) is changing. But the workflow has some capabilities that are not showcased in that post, and I will take the opportunity to demonstrate these a little later in the present post.
Below I explain how to install and use the HeatTraKR, and take a closer look at some of its outputs that I have not already discussed in my other post.
For so long, climate change has been discussed in Australia (and indeed elsewhere) as if it were an abstract concept, a threat that looms somewhere in the future. Not anymore. In 2019, climate change became a living nightmare from which Australia may never awake.
While I prepared this post in the dying weeks of 2019 and the beginning of 2020, there was not a day when some part of the country was not on fire. As of 24 January, more than 7.7 million hectares — that’s an area about the size of the Czech Republic — have burned. Thirty-three people have died. Towns have been destroyed. Old-growth forests have burned. Around a billion animals have been killed. Whole species have probably been lost.
The effects were not only felt in the bush. Capital cities such as Sydney, Melbourne and Canberra endured scorching temperatures while choking in smoke. Newspaper front pages (except those of the Murdoch press) became a constant variation on the theme of red. The country entered a state of collective trauma, as if at war with an unseen and invincible enemy.
The connection between the bushfires and climate change has been accepted by nearly everyone, with the notable exception of certain denialists who happen to be running the country — and even they are starting to change their tune (albeit to one of ‘adaptation and resilience’). One thing that is undeniable is that 2019 was both the hottest and driest year Australia has experienced since records began, and by no small margin. In December, the record for the country’s hottest day was smashed twice in a single week. And 2019 was not an aberration: eight of the ten hottest years on record occurred in the last ten years. Environmentally, politically, and culturally, the country is in uncharted territory.
I watched this nightmare unfold from my newly adopted city of Melbourne, to which I moved from Brisbane with my then-fiancée-now-wife in January 2019. As far as I can tell, Melbourne has been one of the better places in the country to have been in the past few months. The summer here has been pleasantly mild so far, save for a few horrific days when northerly winds baked the city and flames lapped at the northern suburbs. It seems that relief from the heat is never far away in Melbourne: the cool change always comes, tonight or tomorrow if not this afternoon. During the final week of 2019, as other parts of Victoria remained an inferno, Melbourne reverted to temperatures in the low 20s. We even got some rain. It was almost embarrassing.
Finding relief from the heat is one of the reasons my wife and I moved to Melbourne. Having lived in Brisbane all of our lives, we were used to its subtropical summers, but the last few pushed us over the edge. To be sure, Brisbane rarely sees extreme heat. In summer, the maximums hover around 30 degrees, and rarely get beyond the mid-30s. But as Brisbanites are fond of saying (especially to southerners), it’s not the heat, it’s the humidity that gets you. The temperature doesn’t have to be much above 30 degrees in Brisbane before comfort levels become thoroughly unreasonable.
I created a Knime workflow — the TroveKleaner — that uses a combination of topic modelling, string matching and other methods to correct OCR errors in large collections of texts. You can download it from GitHub.
It works, but does not correct all errors. It doesn’t even attempt to do so. Instead of examining every word in the text, it builds a dictionary of high-confidence errors and corrections, and uses the dictionary to make substitutions in the text.
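The substitution step at the end of that pipeline is simple to sketch. Here is a minimal Python illustration with a hypothetical correction dictionary — the real TroveKleaner builds its dictionary from the corpus itself, via topic modelling and string matching, rather than hard-coding pairs like these:

```python
import re

# A hypothetical high-confidence correction dictionary of the kind the
# workflow builds; the real entries are derived from the corpus itself.
corrections = {"tbe": "the", "aud": "and", "Brisbaue": "Brisbane"}

# Compile one pattern matching any known error as a whole word, so only
# dictionary hits are touched rather than every word in the text.
pattern = re.compile(r"\b(" + "|".join(map(re.escape, corrections)) + r")\b")

def clean(text):
    return pattern.sub(lambda m: corrections[m.group(1)], text)

print(clean("tbe mayor of Brisbaue aud his aides"))
# the mayor of Brisbane and his aides
```

The word-boundary anchors matter: without them, an entry like `aud` would mangle innocent words such as `audit`.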
It’s worth a look if you plan to perform computational analyses on a large collection of error-ridden digitised texts. It may also be of interest if you want to learn about topic modelling, string matching, ngrams, semantic similarity measures, and how all these things can be used in combination.
This post discusses the second in a series of Knime workflows that I plan to release for the purpose of mining newspaper texts from Trove, that most marvellous collection of historical newspapers and much more maintained by the National Library of Australia. The end-game is to release the whole process for geo-parsing and geovisualisation that I presented in this post on my other blog. But revising those workflows and making them fit for public consumption will be a big job (and not one I get paid for), so I’ll work towards it one step at a time.
Already, I have released the Trove KnewsGetter, which interfaces with the Trove API to allow you to download newspaper texts in bulk. But what do you do with 20,000 newspaper articles from Trove?
Before you even think about how to analyse this data, the first thing you will probably do is cast your eyes over it, just to see what it looks like.
A typical reaction upon seeing Trove’s OCR-derived text for the first time.
NOTE: This post discusses the most recent version (v2.0) of the Trove KnewsGetter. You can obtain the latest version from the GitHub page.
Around about this time last year, I hatched a side-project to keep me amused while finishing my PhD thesis (which is still being examined, thanks for asking). Keen to apply my new skills in text analytics to something other than my PhD case study (a corpus of news texts about coal seam gas), I decided to try my hand at analysing historical newspapers. In the process, I finally brought my PhD back into contact with the project that led me to commence a PhD in the first place.
I’m talking here about my other blog, which explores (albeit very rarely, these days) the natural history of the part of Brisbane in which I grew up. Pivotal to the inception of that blog was the publicly available collection of historical newspapers on Trove, a wondrous online resource maintained by the National Library of Australia. Having never studied history before, I became an instant deskchair historian when I discovered how easily I could search 100 years of newspapers for the names of streets, waterways, parks — and yes, even people. I trawled Trove for everything I could find about Western Creek and its surrounds, so that I could tell the story of how this waterway and its catchment had been transformed by urbanisation.
How anyone found the time and patience to study history before there were digitised resources like Trove is beyond me. I cannot even imagine how many person-hours would be needed to replicate the work performed by a single keyword search of Trove’s collection. The act of digitising and indexing textual archives has revolutionised the way in which historical study can be done.
But keyword searches, as powerful as they are, barely scratch the surface of what can be done nowadays with digitised texts. In the age of algorithms, it is possible not merely to index keywords, but to mine textual collections in increasingly sophisticated ways. For example, there are algorithms that can tell the difference between ordinary words and different kinds of named entities, like places or people. Another class of algorithms goes beyond counting individual keywords and instead detects topics — collections of related words that correspond with recurring themes in a collection of texts.
My PhD thesis was largely a meditation on these latter types of algorithms, known as topic models. Along the way, I also used named entity recognition techniques to identify place names and relate them to topics, ultimately enabling me to map the geographic reach of topics in the text.
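To make the idea concrete, here is a minimal topic-model sketch using scikit-learn’s LDA implementation. The toy corpus and topic count are purely illustrative — the thesis work used far larger corpora and different tooling — but the mechanics are the same: count words per document, then ask the model for clusters of co-occurring terms.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# A toy corpus: two documents about flooding, two about local politics.
docs = [
    "flood creek rain water bridge",
    "council election mayor vote ward",
    "rain storm creek flood damage",
    "mayor council budget vote rates",
]

vec = CountVectorizer()
X = vec.fit_transform(docs)                    # document-term count matrix

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# Each row of components_ scores every vocabulary term for one topic;
# the highest-scoring terms characterise that topic.
terms = vec.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:3]]
    print(f"topic {k}: {', '.join(top)}")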
These were the sorts of techniques that I wanted to apply to Trove’s historical newspapers through my side-project last year. The outcome of this project was a paper that I presented at the Australian Digital Humanities conference in Adelaide in September 2018. To this day, it remains a ‘paper’ in name only, existing only as a slideshow and a lengthy post on my other blog. Releasing some more tangible outputs from this project is on my to-do list for 2019.
In this post, I am going to share the first in what will hopefully be a series of such outputs. This output is a workflow that performs the foundational step in any data analysis — namely, acquiring the data. I hereby introduce the KnewsGrabber — a Knime workflow for harvesting newspaper articles from Trove.
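The workflow drives Trove’s public API from inside Knime, but the request it issues can be sketched in Python. The endpoint and parameters below reflect the v2 API these workflows targeted — check the current Trove API documentation before relying on them, and note that the key is a placeholder:

```python
from urllib.parse import urlencode

API_BASE = "https://api.trove.nla.gov.au/v2/result"

def build_query(api_key, query, n=100, start="*"):
    """Build a Trove newspaper-zone search URL.

    The zone/q/n/s parameters follow the v2 API; `s` is the paging
    cursor, with '*' requesting the first page of results.
    """
    params = {
        "key": api_key,        # your personal Trove API key
        "zone": "newspaper",
        "q": query,
        "n": n,                # records per page (the API caps this)
        "s": start,
        "encoding": "json",
    }
    return API_BASE + "?" + urlencode(params)

url = build_query("YOUR_KEY", '"Western Creek"')
print(url)
```

A harvester like the KnewsGrabber simply loops this request, feeding each response’s next-page cursor back into `s` until the result set is exhausted.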