Open access books – measured in a context

Ronald Snijder

Wed 25 Oct 2023


For over a decade, open access book platforms have been available. Each of these platforms shares usage data, so if you are the author of an open access book, you will find that it has been downloaded a certain number of times. But how should you interpret that number? Unfortunately, the answer is not straightforward. Usage is influenced by the language of the title and by its subject, but also by the platform: not all platforms reach the same audiences. Furthermore, there may be seasonal differences. For instance, usage of the OAPEN Library is lower in the months of June to August, compared to September to November.

So, it would be helpful to have some clarity. A possible solution is a new metric: the Transparent Open Access Normalized Index (TOANI) score. It is designed to provide a simple answer to the question of how well an individual open access book or chapter is performing. The transparency comes from clear rules and from making all of the underlying data visible. The data is normalized using a common scale for the complete collection of an open access book platform, and, to keep the level of complexity as low as possible, the score is based on a simple classification: usage is either average, below average, or above average.

How does it work? As a proof of concept, we analysed the usage data of over 18,000 books in the OAPEN Library. Each book was assigned one high-level subject, and its language was categorized as English, German, or other. Each book was then placed in a group combining one subject and one language. Within those groups, we examined the usage data and determined whether a book's downloads were average, below average, or above average.
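To make the grouping and classification concrete, here is a minimal Python sketch. The record format (the "subject", "language" and "downloads" fields) and the use of the interquartile range as the "average" band are assumptions made purely for illustration; the published article defines the actual TOANI rules.

```python
from collections import defaultdict
from statistics import quantiles

def classify_books(books):
    """Group books by (subject, language) and label each title as
    'below average', 'average' or 'above average' within its group."""
    groups = defaultdict(list)
    for book in books:
        # Collapse every language other than English or German into "Other".
        language = book["language"] if book["language"] in ("English", "German") else "Other"
        groups[(book["subject"], language)].append(book)

    scores = {}
    for (subject, language), members in groups.items():
        downloads = [b["downloads"] for b in members]
        if len(downloads) < 4:
            # Too few titles to compute quartiles; treat the whole group as average.
            lower, upper = min(downloads), max(downloads)
        else:
            # Illustrative choice: the middle 50% of the group counts as "average".
            q1, _, q3 = quantiles(downloads, n=4)
            lower, upper = q1, q3
        for b in members:
            if b["downloads"] > upper:
                scores[b["title"]] = "above average"
            elif b["downloads"] < lower:
                scores[b["title"]] = "below average"
            else:
                scores[b["title"]] = "average"
    return scores


# Small made-up example collection.
books = [
    {"title": "Book A", "subject": "Humanities", "language": "German", "downloads": 300},
    {"title": "Book B", "subject": "Humanities", "language": "German", "downloads": 120},
    {"title": "Book C", "subject": "Humanities", "language": "German", "downloads": 90},
    {"title": "Book D", "subject": "Humanities", "language": "German", "downloads": 45},
    {"title": "Book E", "subject": "Humanities", "language": "German", "downloads": 60},
]
print(classify_books(books))
```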

Between groups, there are large differences. For instance, a German-language book on Humanities with 300 downloads is doing better than average, while an English-language book on Humanities would need at least 652 downloads to reach the same level. Another example is the difference between titles on Language in German and in other languages: German-language books downloaded more than 250 times score better than average, while for books in other languages the bar is much higher, at 385 downloads.
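As a rough illustration of how such group-specific cut-offs could be applied, the snippet below hard-codes the "above average" thresholds quoted in this paragraph. In practice these cut-offs are derived from the platform's full collection rather than fixed by hand, and treating every boundary as "at least" is a simplification.

```python
# "Above average" thresholds as quoted in the examples above, keyed by
# (subject, language). The numbers mirror the blog post's examples; the
# TOANI method itself derives them from the collection's usage data.
thresholds = {
    ("Humanities", "German"): 300,
    ("Humanities", "English"): 652,
    ("Language", "German"): 250,
    ("Language", "Other"): 385,
}

def is_above_average(subject: str, language: str, downloads: int) -> bool:
    return downloads >= thresholds[(subject, language)]

print(is_above_average("Humanities", "German", 300))   # True
print(is_above_average("Humanities", "English", 300))  # False: needs at least 652
```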

In this way, we can see how well a book is performing compared to similar titles. In other words: when we consider the context of a book, we can actually say whether its usage is better than expected.

Read more in the newly published article by Ronald Snijder, “Measured in a context: making sense of open access book data,” Insights, 2023, 36: 20, 1–10; DOI: https://doi.org/10.1629/uksg.627