The Public Library of Science (PLOS) collects a range of alternative metrics about the articles it publishes to provide different, more meaningful and granular insights into reader response. PLOS captures usage statistics, social shares, academic bookmarks and both scholarly and non-scholarly citations, all offering distinct types of information. Early interest in an article is more apparent through HTML views and mentions on social sharing sites than through PDF downloads, and Mendeley bookmarking reflects interest but does not correlate with citation count. An article’s appearance in citation databases commonly takes at least two years. Mentions in blogs often stimulate commentary and critique. Instead of presenting only a simplistic citation number, PLOS offers article-level metrics (ALM) signposts that capture the variety of response, audience, timing, purpose and impact of a scientific article.

altmetrics
impact of scholarly output
citation impact
scholarly publishing
social web
collaborative filtering

Bulletin, April/May 2013


The Many Faces of Article-Level Metrics

by Jennifer Lin and Martin Fenner

The Public Library of Science (PLOS) is collecting and displaying a large variety of metrics about the articles it publishes (Table 1). These include traditional citations, usage stats and altmetrics.

  1. Usage Stats
    1. PLOS website
    2. PubMed Central
  2. Social Shares
    1. Facebook
    2. Twitter
  3. Academic Bookmarks
    1. Mendeley
    2. CiteULike
  4. Scholarly Citations
    1. CrossRef
    2. Scopus
    3. Web of Science
    4. PubMed Central
  5. Non-scholarly Citations
    1. ResearchBlogging
    2. ScienceSeeker
    3. Nature Blogs
    4. Wikipedia

Table 1. Metrics collected and displayed as part of the PLOS Article-Level Metrics
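
To make the collection in Table 1 concrete, here is a minimal sketch of retrieving these counts for a single article over the public ALM API. The v3 endpoint, query parameters and response fields follow the API documentation of the period and should be treated as assumptions; the API key and DOI are placeholders.

```python
# Minimal sketch: fetch Article-Level Metrics for one PLOS article.
# The v3 endpoint and response fields reflect the ALM API as documented
# at the time of writing and are assumptions; key and DOI are placeholders.
import json
import urllib.parse
import urllib.request

ALM_URL = "http://alm.plos.org/api/v3/articles"   # assumed endpoint
params = urllib.parse.urlencode({
    "api_key": "YOUR_API_KEY",                    # placeholder
    "ids": "10.1371/journal.pmed.1001235",        # illustrative DOI
    "info": "detail",                             # per-source metrics
})

with urllib.request.urlopen(f"{ALM_URL}?{params}") as response:
    articles = json.load(response)

# One entry per source (CrossRef, Scopus, Mendeley, Twitter, ...),
# each carrying a running total for that metric (assumed field names).
for source in articles[0].get("sources", []):
    print(source["name"], source["metrics"]["total"])
```
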

The sheer variety of metrics can be overwhelming, and it would be much easier if we could simply substitute a single number for them all. Such economy – though perhaps convenient – is not feasible if we are to preserve the breadth of information that these metrics offer as well as maintain fidelity to their different natures. We will discuss how individual article-level metrics (ALMs) in the PLOS suite measure different things:

  • Different audiences: public vs. scholarly interest
  • Different dimensions: attention, self-promotion, or impact
  • Different timepoints after publication: days, months, or years.

The nature of these measurements is quite dissimilar. As such, they offer different information that is of value in disparate ways. We conclude that it is necessary to look at these metrics together as a group, shifting focus to the most relevant ones based on the questions that need to be addressed.

In our early analysis of PLOS ALM data, we have observed some typical patterns that have emerged around certain areas – early attention, scholarly attention, citation impact and non-scholarly citations – which go a long way toward illustrating the diversity of researcher engagement and the consequent need for measurements adequate to the task.

Early Attention 
To illustrate the first distinction – different audiences – let us take a quick look at early ALM activity across the PLOS corpus. Early attention for a newly published paper is best described using HTML views and social shares via Twitter and Facebook. There is a strong interaction between social shares and HTML views, and attention via social shares seems to amplify usage (Figure 1). 

Figure 1. Tweets vs. HTML views at the PLOS website for articles in the PLOS Medicine Big Food Collection. Circle size correlates with Facebook activity (likes, shares, comments). Data collected August 19, 2012. 

Even though a strong correlation emerges, social shares add an important dimension missing from usage stats: they include information about who is sharing a link to the article and what they are saying.

Since PLOS is an open-access publisher that makes all content openly available in full text, a substantial amount of early activity can be attributed to public attention. For scholarly activity in the first days after publication, it is better to look at PDF downloads or at the usage stats from PubMed Central, as these numbers do not seem to be much influenced by social shares. In our data, about 90% of PLOS articles show a ratio of HTML views to PDF downloads very close to 4:1. Articles with a much higher ratio (e.g., 10:1) regularly show strong activity on Twitter and Facebook. This observation suggests that the HTML/PDF ratio, together with the number of social shares, is a better indicator of public interest in an article than absolute usage numbers.
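
As a toy illustration of that heuristic: the counts below are invented, while the 4:1 baseline and the 10:1 flag come from the observations above.

```python
# Sketch: flag articles whose HTML-to-PDF ratio suggests strong public
# (rather than scholarly) interest, per the ~4:1 baseline described above.
# `articles` holds hypothetical per-article usage counts, not live data.
articles = [
    {"doi": "10.1371/journal.pmed.0000001", "html_views": 4200, "pdf_downloads": 1050},
    {"doi": "10.1371/journal.pmed.0000002", "html_views": 9800, "pdf_downloads": 700},
]

BASELINE = 4.0    # typical HTML:PDF ratio across ~90% of PLOS articles
THRESHOLD = 10.0  # ratios this high usually coincide with heavy social sharing

for article in articles:
    ratio = article["html_views"] / max(article["pdf_downloads"], 1)
    if ratio >= THRESHOLD:
        print(f"{article['doi']}: ratio {ratio:.1f} -> likely public attention")
    else:
        print(f"{article['doi']}: ratio {ratio:.1f} (baseline ~{BASELINE:.0f}:1)")
```
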

Scholarly Interest
PLOS collects metrics from Mendeley and CiteULike, both scholarly bookmarking services. Although we often see some activity in the days after publication of a paper, bookmarks typically take a few weeks or months to accumulate and, in contrast to usage stats, don’t taper off as quickly. Because both bookmarking services are used chiefly by scholars, their numbers don’t necessarily correlate strongly with social shares from Twitter and Facebook, which also capture a great deal of public activity. 

Mendeley bookmark numbers show no real correlation with citation counts, suggesting that storing a paper in a reference manager and citing it are largely independent behaviors (Figure 2). 

Figure 2. CrossRef Citations vs. Mendeley Readers for all PLOS Medicine articles published 2010-2011. Data collected January 24, 2013. 

Content popular with a relatively large number of scientists – for example, the PLOS Computational Biology 10 Simple Rules Collection – often has both high usage stats and high numbers of scholarly bookmarks (Figure 3). 

Figure 3. PDF downloads (open circles), HTML pageviews (closed circles) and Mendeley readers (numbers on the right) for 23 editorials in PLOS Computational Biology. Articles are sorted by descending age. 

For the most part, however, the correlation between usage stats (including PDF downloads and usage stats from PubMed Central) and Mendeley bookmarks is not especially strong. The number of scholarly bookmarks likely captures, at least in part, something that neither citations nor usage stats measure. Such low correlation between these ALM groupings is understandable if we view them as measuring fundamentally different modes or dimensions of researcher engagement. 
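
Comparisons like the one in Figure 2 can be quantified with a rank correlation, a reasonable choice for skewed count data. A hedged sketch follows; the reader and citation counts are invented for illustration.

```python
# Sketch: quantify how weakly two ALM sources track each other using
# Spearman rank correlation (robust to the skewed distributions typical
# of these counts). The paired counts below are invented, not PLOS data.
from scipy.stats import spearmanr

mendeley_readers = [12, 45, 3, 80, 22, 7, 150, 31, 5, 64]
crossref_citations = [2, 10, 0, 4, 9, 1, 6, 12, 0, 3]

rho, p_value = spearmanr(mendeley_readers, crossref_citations)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
# A rho near zero supports the view that bookmarking a paper and
# citing it reflect different modes of engagement.
```
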

Non-scholarly Citations
We have seen further evidence of fundamental differences in the dimensions of research activity captured by the non-scholarly citation ALMs. PLOS tracks three science blog aggregators (ResearchBlogging, Nature Blogs and ScienceSeeker). Blog posts represent a fundamentally different medium than social sharing platforms. The absolute number of times an article is cited in blog posts says little about its influence and reach, since fewer than 5% of articles are discussed in science blogs at all. Instead, the enduring value of this measurement lies in the richness of the content it provides: authors commonly use the open format of a blog post to engage the cited articles in great detail, offering deep commentary and critique that neither a bookmark nor a social share can, by its nature, provide. PLOS also began collecting non-scholarly citations from Wikipedia in September 2012; Wikipedia links to 6% of our articles, and an analysis of the kind of PLOS content used in Wikipedia is forthcoming.

Citation Impact 
Scholarly citations have traditionally been used to assess the impact of a paper. PLOS collects metrics from four different citation databases (CrossRef, Web of Science, Scopus, PubMed Central). Although the numbers differ somewhat, they generally show a very strong correlation with each other: within this group, all the indices capture essentially the same activity and reflect the same event time horizon. That horizon, however, stands in stark contrast to the other ALMs. Citations are much slower to accumulate; we generally see a two- to five-year lag before meaningful numbers emerge. These measurements capture not only a different dimension of engagement but also a dramatically different timescale after publication than the other elements in the suite. 

Conclusions
Article-level metrics describe many different aspects of the broad spectrum of research engagement and can never be expressed in a single number. From the considerations enumerated above, one recommended approach is to focus on the metrics that correspond and are most relevant to a particular use case (for example, immediate attention after publication). Alternatively, we can deploy a set of metrics to describe an article, be it the entire suite or a select view. PLOS has introduced ALM signposts, an aggregated view of usage stats, social shares, academic bookmarks and scholarly citations. They appear at the top of each article as well as in key navigational places on the site. Although the signposts represent only a subset of ALMs, they are meant to provide an easy, at-a-glance view of the article’s activity across the different groups of measurements (audiences, dimensions and timescales). 
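
A minimal sketch of the kind of roll-up behind such signposts, grouping per-source counts into the four groups named above; the source names and counts here are illustrative, not the actual PLOS implementation or live data.

```python
# Sketch: roll per-source counts up into the four signpost groups the
# article describes (usage, shares, bookmarks, scholarly citations).
# Source names and counts are invented placeholders.
SIGNPOST_GROUPS = {
    "usage":     {"plos_html", "plos_pdf", "pmc_usage"},
    "shares":    {"facebook", "twitter"},
    "bookmarks": {"mendeley", "citeulike"},
    "citations": {"crossref", "scopus", "wos", "pmc_citations"},
}

source_counts = {
    "plos_html": 5200, "plos_pdf": 1300, "pmc_usage": 800,
    "facebook": 44, "twitter": 130,
    "mendeley": 57, "citeulike": 9,
    "crossref": 12, "scopus": 15, "wos": 11, "pmc_citations": 8,
}

signposts = {
    group: sum(source_counts.get(name, 0) for name in names)
    for group, names in SIGNPOST_GROUPS.items()
}
print(signposts)  # e.g. {'usage': 7300, 'shares': 174, 'bookmarks': 66, 'citations': 46}
```
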

By providing a broader spectrum of metrics rolled up into a summary view, we offer researchers a more manageable set of numbers that, we hope, simplifies without replacing deeper explanation. We have evidence that these components each offer a different view, depending on the user and the need. In short, not all “faces of ALM” may be useful or even relevant to any one circumstance. But each, whether a single “face” or a set of them, offers a more adequate outlook than a single number applied without distinction to every circumstance. 


Martin Fenner is the technical lead for the PLOS Article-Level Metrics project. Previously he worked as a medical doctor and cancer researcher. He can be reached at mfenner@plos.org or @mfenner.

Jennifer Lin is passionate about open access and its political and social impacts. As a former business consultant with Accenture, she worked with Fortune 500 companies as well as governments to develop and deploy new products and services. Jennifer received her Ph.D. in political philosophy and has served as an instructor at Johns Hopkins University. She can be reached at jlin@plos.org.