
Are we currently underestimating the risk of scrum-related neck injuries in rugby union front-row players?
  1. James C Brown1,2,
  2. Mike I Lambert1,
  3. Sharief Hendricks1,
  4. Clint Readhead3,
  5. Evert Verhagen2,
  6. Nicholas Burger1,
  7. Wayne Viljoen3
  1. 1UCT/MRC Research Unit for Exercise Science and Sports Medicine, Department of Human Biology, Faculty of Health Sciences, University of Cape Town, Cape Town, South Africa
  2. 2Department of Public and Occupational Health, EMGO Institute for Health and Care Research, VU University Medical Center, Amsterdam, The Netherlands
  3. 3South African Rugby Union (SARU), Cape Town, South Africa
  1. Correspondence to James C Brown, UCT/MRC Research Unit for Exercise Science and Sports Medicine, Department of Human Biology, Faculty of Health Sciences, University of Cape Town, Cape Town 7700, South Africa; jamesbrown06{at}gmail.com

Injury epidemiology—the quantification of ‘EXPOSURE’

To understand the risk of incurring a particular rugby injury, and to identify risk factors related to this injury, it is necessary to know the injury counts and the time that players are exposed to the risk of sustaining that injury.1 The latter poses an interesting debate with respect to the scrum, as there are various ways in which exposure can be expressed. Although they are not the only two methods, 'exposure' in sports injury epidemiology has often been calculated either by the 'athlete at risk' method or the 'athlete participation' method2:

Incidence rate ('athlete at risk') = injuries ÷ player-hours at risk, typically expressed per 1000 player-hours

Incidence rate ('athlete participation') = injuries ÷ number of participating players, typically expressed per 100 000 players

The 'athlete participation' method is sometimes the only way to calculate injury incidence rates for certain investigations, such as for catastrophic injury risk,3 in which data are mainly collected retrospectively. However, this method typically underestimates injury rates as the exact time at risk is not quantified.2 Where match time is recorded, the current consensus statement for the surveillance of injuries in rugby union4 provides a formula for the calculation of match exposure. This consensus statement, which defines terms and preferred methodology, has significantly advanced the quality of research on injuries associated with rugby union by offering guidelines for a standardised approach, enabling universal comparison of injury risk and risk factors.5
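
To make the distinction concrete, the short Python sketch below implements both denominators, using the figures that appear in the two worked examples later in this article; the function names are ours and purely illustrative.

```python
def rate_per_1000_player_hours(injuries, player_hours):
    """'Athlete at risk' method: injuries per 1000 player-hours of exposure."""
    return injuries / player_hours * 1000

def rate_per_100000_players(injuries_per_year, participating_players):
    """'Athlete participation' method: injuries per 100 000 participating players."""
    return injuries_per_year / participating_players * 100_000

# Figures taken from worked examples A and B below:
print(rate_per_1000_player_hours(91, 8400))      # ~10.8 per 1000 player-hours
print(rate_per_100000_players(4.8, 651_146))     # ~0.7 per 100 000 players
```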

‘Athlete at risk’—nuances of the calculation

For the majority of rugby union epidemiological studies, it is assumed that, within one team (Team A), 15 players (the number of players per team on the field at any one time) are at risk for Y minutes (80 min at senior level) over Z matches during a season or tournament.1,4

This exposure calculation assumes that all 15 players of the team are at equal risk of injury during this time: Y (minutes of match) × Z (number of matches). However, the 15 players comprise two broad positional groupings: 8 forwards and 7 backs. Rugby union involves activities, such as scrummaging, that are position specific and therefore do not involve all 15 players.6 Furthermore, players are not at risk for the full match time, as the ball is typically in play for far less time than the full 80 min.6 Outside of this 'ball-in-play' time, there is no injury risk to players. In addition, certain activities, such as scrummaging, occupy only a small percentage of the total match time: at international level, players physically scrum for only 1 min and 16 s per match, on average.6 Thus, an incidence rate based on the assumed exposure of 15 players at risk for the full match time (80 min, in most cases) would grossly underestimate the actual incidence rate of activity-related injuries compared with a rate calculated using 'ball-in-play' time. Reporting incidence rates separately for forwards and backs1 may reduce some, but not all, of the inaccuracies of this calculation. For example, if only the three front-row positions (loose-head prop, hooker and tight-head prop) are mainly at risk of injury during the scrum, the current exposure calculation is overestimated by the five forwards who are not at risk, and the incidence rate is correspondingly underestimated. This point is even more pertinent for catastrophic scrum injuries, which occur almost exclusively in the front row.3,7,8

Scrummaging is not the only event whose injury risk may be underestimated because of its specificity during a match. Other events that are more related to certain playing positions than others, owing to the position-specific nature of the game, include lineouts, kicking, ball carries, tackles, mauls and rucks. However, the scrum is a well-structured and controlled event, governed by numerous safety laws that referees have to enforce. The authors therefore felt that this aspect of play was best suited to demonstrating, in practical terms, the underestimated injury risk per event, in this case for scrum-related injuries in the front-row playing positions.

Two worked examples, based on data from previously published manuscripts,7,9 are presented to illustrate this potential underestimation in the calculation of non-catastrophic (worked example A) and catastrophic (worked example B) injury rates. Incidences with 95% CIs were calculated using formulae10 frequently used in rugby union injury studies1,5; incidence rates were considered significantly different from each other if their respective 95% CIs did not overlap.

Worked example A: evaluation of non-catastrophic injury rates using the ‘athlete at risk’ method

Teams followed across two rugby seasons (with 15 players per team on the field), participating in a total of 420 matches of 80 min each, accumulate an overall exposure of 8400 player-hours. There were 91 scrum-related neck injuries to front-row players (of various severities, including medical attention injuries) during these 420 matches. Depending on the number of players whom one considers to be 'at risk' during the scrum, there are three typical ways to calculate the risk exposure, each producing a notably different incidence rate; the three calculations are reproduced in the sketch that follows the list.

  1. Consider all 15 players (figure 1, all players): (91/8400) × 1000 = an incidence rate of 10.8 scrum-related injuries per 1000 player-hours (95% CI 8.6 to 13.1). As mentioned previously, this is highly inaccurate since not all 15 players participate in a scrum.

  2. Consider only the 8 forwards (figure 1, forwards only): [91/(8400 × 8/15)] × 1000 = an incidence rate of 20.3 scrum-related injuries per 1000 forward-hours (95% CI 16.1 to 24.5), which is significantly greater than the incidence rate for all 15 players. This method has been suggested previously by Brooks and Fuller1 and Gianotti et al,11 but does not feature in the current consensus statement4 and is not applied ubiquitously.

  3. Consider only the 3 front-row players who are at risk of suffering a scrum-related injury (figure 1, front row only): [91/(8400 × 3/15)] × 1000 = an incidence rate of 54.2 scrum-related injuries per 1000 front-row-hours (95% CI 43.0 to 65.3), which is significantly greater than both the incidence rate for all 15 players (1) and that for the 8 forwards only (2). Given the previously recorded scrum injuries, this calculation may be closer to a true reflection of the injury incidence rate during the scrum.
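
The three calculations above, and their 95% CIs, can be reproduced with a few lines of Python. This is a minimal sketch assuming the commonly used normal approximation to the Poisson 95% CI (rate ± 1.96 × √injuries ÷ exposure), which yields the intervals quoted above; the function and variable names are ours.

```python
import math

def rate_and_ci(injuries, player_hours, per=1000):
    """Incidence rate per `per` player-hours with an approximate 95% CI
    (normal approximation to the Poisson count)."""
    rate = injuries / player_hours * per
    half_width = 1.96 * math.sqrt(injuries) / player_hours * per
    return rate, rate - half_width, rate + half_width

injuries = 91
total_player_hours = 8400            # 420 matches x 80 min x 15 players

for label, fraction in [("all 15 players", 15 / 15),
                        ("8 forwards only", 8 / 15),
                        ("3 front-row players only", 3 / 15)]:
    exposure = total_player_hours * fraction
    rate, lo, hi = rate_and_ci(injuries, exposure)
    print(f"{label}: {rate:.1f} per 1000 h (95% CI {lo:.1f} to {hi:.1f})")

# Expected output (matching the worked example):
#   all 15 players: 10.8 per 1000 h (95% CI 8.6 to 13.1)
#   8 forwards only: 20.3 per 1000 h (95% CI 16.1 to 24.5)
#   3 front-row players only: 54.2 per 1000 h (95% CI 43.0 to 65.3)
```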

Figure 1

The same number of scrum-related injuries produces three different injury incidence rates, depending on the number of players who are considered to be ‘at risk’ in the exposure time (player hours). The injury incidence rate for all players is significantly less than for the three front-row forwards only.

Moreover, scrummaging accounts for only a small proportion of total match activity: 2% of total match time (1 min 16 s of 80 min)6 and 6% of all events/activities (1447 of 22 842 events).12 Thus, points (1) and (3), which assume that exposure to scrum injury risk spans the entire 80 min of a senior match, also overestimate the exposure and underestimate the risk. However, quantifying the time spent scrummaging and the ball-in-play time may be more logistically challenging, particularly at non-elite or community levels of the game.
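
Purely for illustration (the article does not perform this calculation, and the scrummaging time is taken from international-level data6 while the injury counts come from the worked example A cohort), restricting the denominator to time actually spent scrummaging shows how strongly a full-match-time denominator dilutes the apparent risk:

```python
# Illustrative only: combine the international-level scrummaging time
# (~1 min 16 s per match6) with the worked example A cohort (420 matches,
# 91 front-row scrum injuries) to see how small the 'true' exposure is.
matches = 420
scrum_seconds_per_match = 76               # 1 min 16 s
front_row_players = 3

scrum_exposure_hours = matches * scrum_seconds_per_match * front_row_players / 3600
rate = 91 / scrum_exposure_hours * 1000
print(f"{scrum_exposure_hours:.1f} front-row scrummaging hours "
      f"-> {rate:.0f} injuries per 1000 scrummaging hours")   # ~27 h -> ~3400
```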

An alternative approach to evaluating risk would be to disregard time completely and to calculate injury counts per event, that is, 10 scrum injuries per 92 scrums. This method has been applied previously in a prospective cohort study12 and showed the scrum to have the highest propensity to cause injury. If the primary measure is scrum injuries only, this method could still be practically feasible, owing to the relatively small number of scrums in a match.
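
Expressed per event rather than per unit of time, the calculation is straightforward; a brief sketch using the figures quoted above:

```python
def injuries_per_1000_events(injuries, events):
    """Propensity of an event (here, the scrum) to cause injury, per 1000 events."""
    return injuries / events * 1000

# Figures quoted in the text: 10 scrum injuries in 92 scrums
print(round(injuries_per_1000_events(10, 92)))   # ~109 injuries per 1000 scrums
```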

Worked example B: evaluation of catastrophic injury rates using the 'athlete participation' method

The following raw data were taken from a previous publication7 that investigated the incidence rate of catastrophic injuries in South Africa between 2008 and 2011; they are used here purely for illustrative purposes. Calculating a national incidence rate with the 'athlete at risk' method would be logistically difficult, and the 'athlete participation' method was therefore used for that investigation.7

Table 1 provides the estimated player numbers used as the exposure for calculating the average match-related catastrophic acute spinal cord injury (ASCI) incidence rates associated with the scrum (4.8 injuries/year) between 2008 and 2011 in South Africa. Hookers (n=13), props (n=5) and locks (n=1) incurred scrum-related catastrophic ASCIs. There were an estimated 651 146 active players at the time of the study.

On the assumption that all 15 playing positions are represented in equal proportions, multiplying this total number of players by the fraction of forwards in a starting team (8/15) gives a gross estimate of the forward population at risk (347 278 players; table 1). This method has been described and used previously for assessing scrum law changes.11 With the same assumption, multiplying the total number of players by the fraction of front-row forwards (loose-head prop, hooker and tight-head prop) at risk of a catastrophic injury during a scrum (3/15) gives a gross estimate of the front-row population at risk (130 230 players). Similarly, multiplying the total number of players by 2/15 and 1/15 gives gross estimates of the numbers of props (86 820) and hookers (43 410) at risk. Clearly, accurately recording the exposure time for such a large number of players would be impractical, and the 'athlete participation' method therefore remains the most feasible for catastrophic injury studies.
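
The positional population estimates above follow mechanically from the proportional-representation assumption; in this minimal sketch, rounding up reproduces the figures quoted in the text.

```python
import math

total_players = 651_146            # estimated active South African players7
positions_on_field = 15

fractions = {
    "forwards (8/15)": 8,
    "front row (3/15)": 3,
    "props (2/15)": 2,
    "hooker (1/15)": 1,
}

# Assumes all 15 positions are represented in equal proportions in the
# playing population, as stated in the text.
for group, n_positions in fractions.items():
    at_risk = math.ceil(total_players * n_positions / positions_on_field)
    print(f"{group}: {at_risk:,} players at risk")

# forwards: 347,278; front row: 130,230; props: 86,820; hooker: 43,410
```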

Table 1

The estimated total number of South African rugby players7 at risk, together with the correspondingly calculated numbers of forwards, front-row players (loose-head prop, hooker and tight-head prop), props (loose-head and tight-head prop) and hookers only

Using the 'athlete participation' method, and therefore player participation numbers, the scrum-related catastrophic injury incidence was calculated in a variety of ways using the respective numbers of players at risk (ie, all players, forwards, front-row players, props and hookers only) and expressed as injuries per 100 000 players. Only the scrum injuries sustained by front-row players (n=18; 4.5 injuries/year) were included in the 'front-row' calculation, only those sustained by props (n=5; 1.3 injuries/year) in the 'props' calculation, and only those sustained by hookers (n=13; 3.3 injuries/year) in the 'hooker' calculation.

Using methods common to sports epidemiology research,1,10 the injury incidence rate for the front row (3.7 injuries/100 000 players, 95% CI 0.4 to 7.0) would not be statistically different from the incidence rate for all players (0.7 injuries/100 000 players, 95% CI 0.1 to 1.4), despite this representing a fivefold difference (figure 2). Similarly, hookers (7.5/100 000 players, 95% CI 0.0 to 15.6) had a fivefold greater injury rate than props (1.4/100 000 players, 95% CI 0.1 to 4.0), although this difference would also not be statistically significant because of the overlap of the two CIs. As catastrophic events are rare, the calculation of a relative rate,13 using a clinically relevant threshold for a 'meaningful' difference, could be a more practical way to assess whether two rates differ. Even though the difference is not statistically significant, from a clinical relevance point of view the hooker position would be at far greater risk of catastrophic injury than other positions in the scrum. The definition of 'clinical relevance' would depend on the particular research question.
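
One way to make the relative-rate idea concrete is to compare the observed rate ratio against a prespecified, clinically relevant threshold rather than relying on overlapping CIs. In the sketch below, the threshold of 2.0 is hypothetical and chosen only for illustration; the rates are those quoted above.

```python
# Rates quoted above (injuries per 100 000 players), used for illustration only.
hooker_rate = 7.5
props_rate = 1.4

rate_ratio = hooker_rate / props_rate            # ~5.4-fold
CLINICALLY_RELEVANT_RATIO = 2.0                  # hypothetical threshold

print(f"Rate ratio, hooker vs props: {rate_ratio:.1f}")
if rate_ratio >= CLINICALLY_RELEVANT_RATIO:
    print("Exceeds the prespecified clinically relevant threshold")
```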

Figure 2

Catastrophic acute spinal cord injury incidence rates for the scrum. The incidence rate is shown for different player exposure numbers (players at risk) for the scrum-related injuries only: total players, eight forwards only, three front-row forwards only (loose-head prop, hooker and tight-head prop), props only (loose-head and tight-head prop) and the hooker position only.

Discussion and conclusions

The current consensus statement for injury surveillance in rugby union4 has improved epidemiological rigour within the rugby injury field. However, given the relatively small number of players at risk of scrum-related injuries,7,14 failing to acknowledge this reduction in relative exposure risks masking the real risk of injury to the front row and, more specifically, to the hooker playing position in the scrum. This 'masking' effect is evident when data from previous publications7,9 are recalculated with only those players who are at risk of injury in the scrum (the front row) included in the exposure calculation (figures 1 and 2). While the incidence of spinal cord injury in rugby is generally low, and while the calculated overall risk of a scrum-related catastrophic injury would probably still remain relatively low, it is important not to underestimate this risk.

Furthermore, with this 'masking' effect, changes in scrum injury incidence rates resulting from the modified scrum laws mandated by the South African Rugby Union (SARU)15 and, more recently, the International Rugby Board (IRB)16 could be overlooked. Once an injury problem has been identified, any prevention intervention needs to be evaluated by accurately quantifying the problem both before and after the intervention is introduced.17 With the large CIs that result from reducing the scrum-related exposure to the front-row and positional-grouping fractions, any statistical effect of the law changes might indeed be missed. A relative rate change with a clinically relevant threshold, or a Poisson regression as employed by Quarrie et al18 to assess the effect of RugbySmart on neck and spinal injuries, may offer alternative approaches.
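
As one concrete alternative, a Poisson regression with log-exposure as an offset, the general approach Quarrie et al18 used to evaluate RugbySmart, could compare injury counts before and after a law change. The data below are hypothetical and statsmodels is only one possible implementation; this is a sketch, not the analysis performed in the cited studies.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical season-level injury counts and front-row player-hours of exposure,
# two seasons before and two after a scrum law change (illustration only).
injuries = np.array([28, 31, 18, 16])
exposure_hours = np.array([1650, 1700, 1680, 1720])
after_law_change = np.array([0, 0, 1, 1])

X = sm.add_constant(after_law_change)
model = sm.GLM(injuries, X,
               family=sm.families.Poisson(),
               offset=np.log(exposure_hours))
result = model.fit()

# exp(coefficient) on the law-change indicator is the injury rate ratio
# after versus before the change, adjusted for exposure.
print(result.summary())
print(f"Rate ratio after vs before: {np.exp(result.params[1]):.2f}")
```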

The authors propose that future epidemiological studies should, where possible, attempt to quantify the actual player exposure more accurately when assessing scrum-related injury risk. At the very least, the exposure should be adjusted on the assumption that only three players, the loose-head prop, hooker and tight-head prop, are effectively at risk of suffering a scrum-related injury (general or catastrophic). Although the lock positions could also be injured during the scrum, the authors contend that this chance is very low, based on only one isolated injury to date of which the authors are aware.7 Furthermore, although using fractions to recalculate the 'forward' and 'front-row' populations for catastrophic injuries in the 'athlete participation' method rests on several assumptions, the authors contend that this recalculated exposure is more accurate than calculating an injury rate based on the total playing population, as is currently done. This would align risk estimates in catastrophic injury epidemiological reports with the majority of non-catastrophic epidemiological reports, which consider only the forwards in the exposure calculation. Similarly, for non-catastrophic scrum injuries in the 'athlete at risk' method, calculating the adjusted player exposure hours using only the three front-row players, or using the injuries-per-event approach, would provide a better representation of the true injury incidence per scrum event than is currently provided.

References


Footnotes

  • Contributors WV and CR collected and entered the data. CR, MIL, WV and JCB analysed the data. JCB drafted the first version of the manuscript. All other authors (MIL, CR, SH, NB, EV and WV) provided feedback. JCB revised and submitted the manuscript.

  • Competing interests None.

  • Ethics approval University of Cape Town Human Research Ethics Committee.

  • Provenance and peer review Not commissioned; externally peer reviewed.
