
Comparing apples and pears: misleading conclusions about the population mental health impact of a parenting programme, a commentary on Marryat, Thompson and Wilson (2017)

Abstract

Background

The article by Marryat, Thompson and Wilson (2017) in BMC Pediatrics presents an evaluation of the implementation of the Triple P system as a public health intervention conducted by the Glasgow City Council and NHS Greater Glasgow and Clyde.

Discussion

Unfortunately, the conclusions drawn are questionable for multiple reasons. The lack of a controlled design precludes defensible conclusions about intervention effects free from routine threats to internal validity. There was a substantial mismatch between the intervention sample and the population sample assessed. The article's title and abstract leave readers with the mistaken impression that the children assessed for outcome were suitably representative of intervention families, when in fact many of the children in the intervention families were missing from the teacher-report outcome assessment (a single questionnaire), and many or most of the children in that assessment belonged to families who had never received the intervention. Although Triple P targets parent-child relations and child behavioural and emotional problems at home, Marryat et al. narrowly defined mental health impact as child difficulties in nursery or preschool, and did not report data from practitioners and parents in the same evaluation that ran counter to their conclusion. The paper was further diminished by a number of misleading statements and factual errors relating, for example, to other research on Triple P.

Summary

Studying the extent to which child mental health functioning at home can generalise to school settings is an important topic of inquiry in relation to parenting support interventions, but unfortunately the Marryat et al. article did not move this area forward.


Background

The Triple P — Positive Parenting Program (Triple P) [1] is a multilevel system of parenting support designed to prevent and treat child social, emotional and behavioural problems. The Triple P system involves a population health approach to parenting support, and in recent years has been evaluated at a population level in various locations including the United States [2], Australia [3], and Ireland [4]. The population approach involves universal access to evidence-based parenting support through a mix of prevention, early intervention and targeted intervention options, with the aim of providing parents with the minimal amount of support they need.

A recent paper published in BMC Pediatrics [5] reports on a city-wide implementation of Triple P in Glasgow, Scotland. We applaud the authors for conducting an independent evaluation of the Triple P system, and endorse the value of independent evaluations of parenting support interventions. A healthy mix of independent and developer-led evaluations is vital for the ongoing refinement and dissemination of rigorous, evidence-based practice in the field of parenting support interventions. We also welcome the focus on generalisation of the effects of parenting programs to other contexts, as this has important implications for the range of services that are offered to communities. However, following a careful review of the methodology and findings reflected in this paper, we have concerns relating to the validity of inferences drawn by the authors, namely that “no convincing evidence of benefit for preschool aged children’s mental health problems” was found from the initiative. We believe this conclusion is untenable due to the methodological, conceptual, and measurement limitations of the uncontrolled study, which we detail below. There are also numerous misleading claims and factual errors throughout, which further undermine the paper’s validity.

Main text

Uncontrolled design

The Marryat et al. [5] design neglected to include a viable control condition. The lack of a control or comparison group precludes any conclusions about program effects that are free from routine threats to internal validity. Although briefly mentioning this as a weakness, the authors maintain that prior survey data make a control condition unnecessary, and support this notion by pointing out sampling and other design challenges encountered in prior experimental studies. Challenges encountered in prior controlled studies do not mitigate the absence of a control group in the Marryat et al. (2017) study. This methodological limitation is further compounded by the fact that delivery of Triple P had already started in the target city prior to the launch of this study, further suggesting the importance of including a no-intervention control condition. The authors claimed that it is highly unlikely the prior delivery of Triple P affected baseline data, but they provided no evidence for this assertion.

Mismatch between intervention and outcome measurement age

One of the most serious methodological problems with the article has to do with the age range of the children assessed. The article fails to make it clear to the reader that there is a substantial mismatch between the intervention and the outcome measurement with respect to child age range and sampling. Parental participation in the intervention (i.e., the independent variable) targeted children 2-16 years of age, while the teacher-reported outcome variable (i.e., the dependent variable) assessed 4 and 5 year-olds. The article provided detail about the number of families participating in the intervention, and briefly mentioned that a substantial proportion (i.e., 40% or more) of the families that received parenting services had children who were too old to be picked up by the outcome measure. The 40% figure refers to the percentage of children who were older than age five at the time their parents participated in the intervention. However, the implications of attempting to detect population-level impact by assessing a diluted, marginal sample of those who actually received the intervention were not discussed.

Related to the age and sample mismatch issue, many of the families receiving the parenting intervention did not have a child within an eligible age-range for inclusion in the teacher-report outcome assessment, and thus were not represented in the evaluation of the intervention. Likewise, many of the children assessed via teacher report were from families who had never received the intervention. No data were provided regarding the proportion of children assessed that had a parent who had participated in Triple P. This omission, along with the aforementioned lack of control condition, makes it impossible to calculate common indices representative of population-level impact, such as risk ratios or number-needed-to-treat.
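To make concrete what such indices would require, the short sketch below (in Python, using purely hypothetical numbers rather than any data from the Glasgow evaluation) shows how a risk ratio and number-needed-to-treat could be derived if the proportion of children above an SDQ cut-off were known for both an intervention group and a comparison group:

```python
# Hypothetical sketch only: neither the group sizes nor the case counts come
# from Marryat et al.; they simply illustrate the calculation that a
# controlled design would make possible.

def risk_ratio_and_nnt(cases_intervention, n_intervention, cases_control, n_control):
    """Return (risk ratio, number needed to treat) for a binary outcome,
    e.g. scoring above an SDQ cut-off."""
    risk_i = cases_intervention / n_intervention
    risk_c = cases_control / n_control
    rr = risk_i / risk_c                      # relative risk in the intervention group
    arr = risk_c - risk_i                     # absolute risk reduction
    nnt = float("inf") if arr == 0 else 1 / arr
    return rr, nnt

# Example: 120 of 1,000 children above the cut-off with the programme,
# 150 of 1,000 without it (both figures invented for illustration).
rr, nnt = risk_ratio_and_nnt(120, 1000, 150, 1000)
print(f"RR = {rr:.2f}, NNT = {nnt:.0f}")      # RR = 0.80, NNT = 33
```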

Narrow focus for assessing mental health impact

Triple P aims to reduce child behavioural and emotional difficulties through the mechanism of promoting change in parenting practices, and thus the primary focus is on producing change in the family context. Although Marryat et al. [5] claimed to evaluate the mental health impact of Triple P, they presented data related only to child difficulties at school (in this case, the nursery or pre-school context) via a routinely collected teacher-report questionnaire, the Strengths and Difficulties Questionnaire (SDQ) [6]. This narrow focus is important because (a) changes within the school setting are not the primary target of the Triple P intervention, and (b) the aims and conclusions outlined by the authors do not align with the actual data reported.

The impact of family-based interventions like Triple P on school adjustment is an important research question, and one that would be reasonable to explore. We might anticipate that significant improvements in child mental health or behavioural difficulties seen within the home context might also be seen at school, particularly if the child has significant difficulties at school in the first place. However, reliance on teacher-report data as the sole indicator of population-level impact on child mental health is seriously flawed. First, there are generally low levels of concordance between teacher and parent reports regarding child difficulties, with often only modest correlation (e.g., < .30) between parents and teachers as informants, and teachers typically reporting fewer problems overall (e.g. [7, 8]).

Teacher report cannot be used as a proxy for parents' experiences with their children at home or for parental reports on children's mental health status. To support the decision to present only teacher data, the authors claimed that reliance on parental report can be problematic because it may introduce a measurement confound with the parent's mental state; however, no evidence was presented that teacher-reported data provide a more realistic or reliable indication of children's mental health or difficult behaviours than parent-reported data. Teacher-report data can make a valuable contribution within a multi-informant approach to understanding the broader impact of a parenting intervention such as Triple P, yet, as with any single-informant approach to data collection, findings should be framed within the limits of their generalisability: in this case, teachers' views of child behaviour within the preschool setting were posited to generalise to the home and to children's general mental health. We acknowledge that pragmatic constraints can preclude the collection of data from multiple sources, but the authors failed to acknowledge this limitation. The result, unfortunately, was a set of over-reaching and inappropriately generalised conclusions regarding the population-level impact on mental health.

Selective reporting

Original data from the final report of the Glasgow Parenting Support Framework Evaluation [9] included parent-reported outcomes for pre-school children using the SDQ. Although not aggregate-level data, these data showed positive outcomes for Triple P when parents completed the program. However, these findings, and other qualitative data from practitioners and parents, were ignored; reporting the full pattern of findings would have given the reader a more complete picture, one that may have contradicted the authors' stated conclusions. Furthermore, the authors claimed there were no changes in social outcomes for children, yet they examined only the Conduct Problems subscale of the SDQ and not the other subscales. The authors directed the reader to Additional File 4: Table S1 for further information regarding the pattern of differences on subscales other than the Total Difficulties score and the Conduct Problems subscale, yet this table includes mean differences only for the SDQ Total score and no individual subscale information. Subscales are plotted individually in Additional File 3: Fig. S1; however, the lack of accompanying statistical information hinders any substantive interpretation of the data.

Factual errors and misleading statements

The authors claimed that Triple P has little effect in deprived communities. This claim ignores studies showing that socioeconomic status does not moderate effect sizes for child outcomes in Triple P studies [10] and the mounting evidence that Triple P works well in low resource communities (e.g. [2, 4, 11]). There have since been a number of high quality studies showing the value of Triple P in a range of disadvantaged communities. Examples include: a place-based randomised trial of the Triple P system in the US showing population level effects on child maltreatment in communities with substantial representation of disadvantaged families [2]; an RCT of low intensity Triple P Discussion Groups in Panama showing positive effects on child and parent outcomes with parents in deprived communities [11]; an RCT of Triple P Discussion Groups with a Maori indigenous population in New Zealand [12]; evaluations of Group Triple P with Aboriginal and Torres Strait Islander samples in Australia [13]; and a trial of Triple P Online with vulnerable disadvantaged urban mainly African American and Latino families in Los Angeles [14]. Qualitative studies showing high levels of consumer acceptance of Triple P principles and techniques have been conducted with homeless parents [15], vulnerable low income families involved with child protective services [14], and women in shelters who have histories of domestic violence [16]. Fives et al. [4] reported that many participants in the Ireland population roll out of Triple P were low SES (39% of Group Triple P participants, 33% of workshop participants, and 26% of seminar participants had a medical card, a key indicator of low SES). Contrary to Marryat et al.’s conclusion [5], Triple P has been found to be a promising intervention with many vulnerable, socially disadvantaged parents.

The paper also raised concerns about the costs of Triple P without defining those costs or placing them in perspective relative to not intervening or to other intervention strategies. Serving 10,000 families ostensibly costs more than serving 100 families, but the key metric would be the per-family cost, which the article ignored in making a general pronouncement (i.e., "consumes substantial resources"). Nor did it discuss the potential cost savings of brief, early, minimal intervention or of a mix of delivery formats, for example the saving from offering group programs that serve several families in the same amount of staff time as individual sessions.

The paper failed to take into account that during the intervention period in the same catchment area other parenting interventions were also being supported and implemented concurrently, albeit on a smaller scale. This again highlights the need for control data to allow suitable comparisons to support conclusions around population-level impact of any universal prevention or public health initiative.

Finally, there are some major errors in the article. Firstly, it reports null results of “a recent cluster randomized control trial exploring the impact of Triple P levels 2 and 3 on pre-schoolers’ externalizing behaviours and parental mental health”. The references cited relate to Hiscock et al. [17], a study that was not a Triple P intervention, and Malti et al. [18], a study that tested one level of Triple P (Level 4 Group). Similarly, the article refers to Prinz and Sanders [19] in relation to “previous work in which no significant improvement in child-based outcomes resulted from a public health parenting programme” which is an incorrect citation—the article cited is a theoretical piece about population-level interventions and does not include an evaluation nor any discussion of child outcome results. The authors also cite a study reporting a subgroup analysis focusing on lone parent families that showed no benefit from the Triple P intervention [20]. It is true the study reported no group difference between intervention and control parents around parenting and child behaviour based on self-report data. However, independent clinical observations reported within the same paper showed significant improvements in positive parenting behaviour and decreases in negative child behaviour for the intervention group. We find it curious that this finding was omitted from the authors’ discussion, particularly considering it reports data from an independent source which would seem of relevance given the prior arguments made by the authors.

The paper claimed to have registered the study protocol, yet the reference list cites only a University of Glasgow webpage describing the protocol, with no trial registration number. Furthermore, the protocol as described differs substantially from the primary findings reported in the paper and in the final evaluation report.

Measurement problems

The study had a number of measurement problems. First, one of the primary outcome measures was a modified version of the Conduct Problems subscale of the SDQ. Using only three of the original five items resulted in a modified version with low internal consistency (α = 0.66), below the commonly accepted threshold of 0.7 for scientific acceptability, and reliant on a questionably small number of items. Additionally, the authors used a weighted procedure to compute an average score for this modified subscale and then applied the standard cut-off levels intended for the full subscale. Given these measurement issues, the validity of this scale as a primary outcome variable is highly questionable.
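For reference, the sketch below shows how Cronbach's alpha is typically computed from an item-level score matrix. The item scores are hypothetical (the study's item-level data are not available); the sketch simply makes the quantity being discussed explicit:

```python
import numpy as np

def cronbach_alpha(item_matrix: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of item scores."""
    k = item_matrix.shape[1]
    item_variances = item_matrix.var(axis=0, ddof=1)
    total_variance = item_matrix.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical 0-2 scored responses to a three-item subscale (not study data).
scores = np.array([
    [0, 1, 0],
    [2, 2, 1],
    [1, 1, 1],
    [0, 0, 1],
    [2, 1, 2],
    [1, 2, 1],
])
print(round(cronbach_alpha(scores), 2))
```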

Conclusions

Overall, while an independent evaluation of a complex community-wide intervention such as that undertaken in Glasgow is welcome, the capacity to learn from the present evaluation is diminished by methodological, interpretational and factual issues and errors. Given the absence of a proper control or comparison group, and in light of the substantial mismatch between the intervention sample and the outcome measurement age group, the scientifically justifiable conclusion is one of uncertainty rather than the sweeping claim of "No evidence of whole population mental health impact". It is not possible to determine with any confidence whether the observed data reflect a true test of intervention impact (i.e. behaviour assessed in the preschool setting, with no assumptions possible about the home setting) or an inadequate and limited test of intervention impact (i.e. questionable measurement validity and the lack of a suitable control condition). This article provides further support for the pressing need for the field to develop accurate and scalable measurement procedures to test population effects for public health interventions.

The ongoing delivery of Triple P in Glasgow is viewed by the NHS as part of a long-term strategy, and it was expected to take several years for any new programme to become properly established in practice. In a city with Glasgow's levels of poverty and deprivation, health visitors implementing the programme have spent time engaging parents and helping them understand the need for, and benefit of, parenting support. As expected, there have been many learnings over the years since Triple P was first introduced, including the need for dedicated practitioners within health visiting teams to run parenting groups and to establish strong links and partnerships with the voluntary sector to further improve engagement. Although positive outcomes have been achieved with many individual families who completed the programme, sustained implementation of Triple P requires a quality improvement framework, and such a framework has been adopted by the implementation team in Glasgow. This involves applying learnings from implementation science, from large-scale rollouts of the Triple P system (e.g. [4, 21]), from consumer and end-user feedback from parents and practitioners, and from outcome data collected as part of routine implementation. The ultimate aim is to continuously improve fidelity of delivery, minimise drop-out and increase the reach and impact of the intervention.

Abbreviations (Sanders et al.)

NHS: National Health Service; SDQ: Strengths and Difficulties Questionnaire; Triple P: Triple P—Positive Parenting Program

Acknowledgements (Sanders et al.)

Not applicable.

Authors' contributions (Sanders et al.)

All authors made substantial intellectual contributions to the conception, drafting, and critical revision of this manuscript, and have given final approval for its publication.

Funding (Sanders et al.)

Not applicable.

Availability of data and materials (Sanders et al.)

Data sharing is not applicable to this article as no datasets were generated or analysed during the current study.

Ethics approval and consent to participate (Sanders et al.)

Not applicable.

Consent for publication (Sanders et al.)

Not applicable.

Competing interests (Sanders et al.)

The Triple P – Positive Parenting Program is owned by The University of Queensland (UQ). The University through its main technology transfer company, UniQuest Pty Ltd., has licensed Triple P International Pty Ltd. to publish and disseminate the program worldwide. Royalties stemming from published Triple P resources are distributed to the Faculty of Health and Behavioural Sciences at UQ, Parenting and Family Support Centre, School of Psychology at UQ, and contributory authors. No author has any share or ownership in Triple P International Pty Ltd. MS, KT and AM are contributory authors of disseminated Triple P resources and receive royalties as specified above. MS, KT, JD and AM receive salary from the Parenting and Family Support Centre; MS is a consultant to Triple P International. LdC, SM, and JK have no financial competing interests to declare. MS, JD, KT, AM and JK are members of the Triple P Research Network (TPRN), a non-financial body formed to foster international collaborations and knowledge sharing between researchers interested in parent and family research including the Triple P—Positive Parenting Program.

Authors’ response

In the above correspondence, Sanders et al. have commented on our paper [5], which reported a lack of impact of the whole-population implementation of Triple P in Glasgow City. Sanders et al. consider the findings not proven due to 'methodological, interpretational and factual issues and errors'.

The main criticisms are:

  1. The design neglected to include a control condition

  2. The mismatch between the age of the target child for the intervention and the age of population assessment

  3. The focus on the outcome as childhood mental health difficulties being too narrow

  4. Selective reporting of outcomes

  5. Factual errors and misleading statements

  6. A number of measurement problems

We strongly reject the contention that our conclusions were not justified by the evidence. Whilst our research has weaknesses, as with any evaluation carried out in the real world, these were clearly set out in the original paper, and do not alter our overall conclusions. We now discuss each of the arguments in turn.

Assertion 1: the design neglected to include a control condition

It is true that the study design did not have a control group. The authors acknowledged this as a potential weakness in the original manuscript. The resources available to us, administered through NHS Greater Glasgow and Clyde (NHS GGC), were insufficient to meet the cost of a control group, and data collection had not begun at the start of the intervention, so no pre-intervention comparison group was available.

The study design was subject to a review process. The evaluation steering group, which included NHS members, sent the protocol out to external peer review by Warwick Medical School, which provided very strong support for our design.

The conclusions reached in the Glasgow evaluation are supported by the only other published independent UK evaluation of Triple P, a randomised controlled trial showing no effect from Triple P interventions [22].

Assertion 2: the mismatch between the age of the target child for the intervention and the age of population assessment

This criticism refers to the fact that the six years of population outcome measures of child mental health difficulties were assessed at age 4-5 years, whereas the group Triple P interventions were delivered to parents of children of a range of ages, many of whom were over that age. This argument has little relevance to our conclusions. First, group Triple P was only one part of a population-level programme. Triple P International, in its tender documents shared with NHS GGC in 2010, considered that the level of reach of the programme was sufficient to produce a whole-population effect, and that effect should have been seen at all ages of children. Second, the Glasgow Triple P programme included a city-wide media campaign involving television, newspaper and billboard posters aimed at all parents, as well as a universal seminar programme. It is difficult to believe that any Glasgow family was not exposed to at least some of these Triple P materials. Third, parents nominated only one of their children as the index child when attending groups, and it was this child for whom the age was recorded in our process evaluation. Many attending parents would have had younger children in the family who would have been affected by the Triple P programme had it been effective. Finally, we reiterate that the overall intensity of intervention was at least as high as that reported in previous non-independent studies claiming positive results [2, 3].

Assertion 3: the focus on the outcome as childhood mental health difficulties being too narrow

The Triple P programme claims to ‘prevent – as well as treat – behavioral and emotional problems in children’ [Triple P website, accessed 5th March 2018]. This is considered by the developers to occur through changes in parenting practices. Sanders et al. point out that our paper only reports behavioural and emotional problems within the school context, as reported by teachers, rather than the home context, as reported by parents. As the original paper explains, a multi-informant approach would have been desirable but no resources were available for this.

Parental reports of children’s mental health are strongly influenced by the parent’s own state of mind [23, 24], and parental reports of depression, anxiety and stress over the course of a group Triple P intervention generally show improvements in mental state [10].

Given the overwhelming balance of evidence that independent observers fail to report any impact of Triple P on child behaviour [25], and that parental (usually maternal) mood is improved by group attendance, the most parsimonious explanation is that Group Triple P does not have an impact on child behaviour; it simply enables attending parents to think that their child is behaving better. This is clearly a desirable outcome but it is difficult to see why any service commissioner would consider investing in an expensive programme with such limited impact. Further independent research would be valuable in this area.

Assertion 4: selective reporting of outcomes

Sanders et al. criticise our paper for not presenting the parent-reported child mental health outcomes collected during the course of the intervention. These data were only available for a relatively small number of families who completed interventions. Aside from the problems set out in the previous section concerning parent-reported child mental health outcomes, the purpose of our paper was to assess whether there had been any effect on population-level child mental health difficulties, which the population-level Triple P programme purports to produce. No such effect was found over six years of collected data.

Sanders et al. are correct in that, among the 44.1% of parents who completed a group Triple P intervention once they had enrolled in the programme, mean overall child mental health difficulties were reported to fall to a modest extent, from 15.8 to 12 (on a scale of 0-40, n = 366). Parents who completed a group Triple P intervention, however, had children with lower levels of difficulties at the start of the intervention, and were more affluent and better educated than those who failed to complete it, suggesting that the parents with children most in need of the intervention would not be receiving it in full [9]. Given the low numbers and completion levels of the more intensive strands of Triple P which do show parent-reported positive impacts in Glasgow city, we would not expect to see an overall impact on population-level child mental health difficulties even if we had surveyed the population of parents in Glasgow, as opposed to just teachers. We would welcome the publication of the qualitative element of the evaluation, which gives some insight into parental perceptions of the programme.

Assertion 5: factual errors and misleading statements

Sanders et al. disagree with our statement that 'Some doubt has been expressed about the effectiveness of Triple P in deprived communities' [5]. This statement referred to the meta-analysis of Triple P by Thomas and Zimmer-Gembeck (2007), which concluded that 'Due to the high number of Triple P studies in the meta-analysis with middle or higher SES, it is not certain that findings can be generalized to low income or high risk groups at this time.' [26]. This complements our own findings from the population-level study in Glasgow City that families from lower SES groups were less likely to complete Triple P interventions [9]. The Sanders et al. response suggests that our paper 'ignores studies showing that socioeconomic status does not moderate effect sizes for child outcomes', quoting their own meta-analysis, but that meta-analysis explicitly states in its limitations section that 'some potential moderator variables could not be examined due to incomplete reporting in primary studies (i.e., parental age, socio-economic status, child gender, parental psychopathology and level of substance use, and family structure). Moderators such as these may account for some unexplained variance and could be investigated in future research.' Furthermore, with regard to the quantitative studies that Sanders et al. go on to list [2, 4, 11, 12, 13, 14], although these studies admirably target support for families from particularly low-resource settings, overall completion and engagement rates are often low, as is completion of evaluation data, and, where information is provided, evidence suggests that engagement/completion rates are disproportionately low for less educated families [4, 14], as well as for single [12] or unmarried [14] mothers. In many cases, information on completion by SES is not provided, and we found no reporting of results by SES to either confirm or deny our findings within these papers. Additionally, a further recent paper explicitly examining attrition in a Triple P intervention reported that parents who dropped out were substantially more likely to have less education than parents who completed the intervention [27].

Sanders et al. note that our paper raised concerns about the cost of the Triple P programme without providing cost information. We requested these cost data formally from NHS GGC in 2015 and were told that this information was not available. An estimate of £4 million was reported in The Times [28], which we consider very conservative, as it excludes the staff costs of programme delivery. We would welcome the publication of comprehensive cost data for Triple P in Glasgow city.

Sanders et al. assert that our paper 'failed to take into account that during the intervention period in the same catchment area other parenting interventions were also being supported and implemented concurrently'. To our knowledge, the only other major parenting programme with significant levels of delivery, the Family Nurse Partnership, began a pilot implementation in Glasgow City in April 2012 [29]; its first cohort of mothers and children completed the programme in late 2014, after the evaluation of Triple P concluded, and it involved first-time parents of children much younger than preschool age, so none of these families would have been included in our analysis. There were a few other parenting programmes operating in Glasgow (e.g. Mellow Parenting, the NCH programme and Incredible Years), however these were operating on a very small scale indeed, not at whole-population level, and were highly unlikely to have affected overall population-level mental health.

The response correctly points out that three cross-references have become misaligned in the final version of the bibliography. We apologise for the typographical errors, but our conclusions are not in any way altered by them and we are happy to offer a corrigendum.

Sanders et al. are mistaken in their claim that our paper says that the protocol was registered: we stated that ‘The protocol for this study was published in 2010’ and a link is provided to the published version. As the evaluation was not a randomised trial, there was no mechanism to formally register it at the time.

Assertion 6: a number of measurement problems

Despite Sanders et al.'s claim that there are 'a number of measurement problems', they list only two linked difficulties, both of which were discussed in the original paper. The modification of the Conduct Problems subscale was less than ideal but was necessary: it was carried out in response to nursery staff who were finding the 4-16 year old version inappropriate for some preschool children [30]. The alpha measuring internal consistency dropped from .71 to .66, falling just short of the level normally seen as acceptable; however, one determinant of alpha is the number of items in the scale, so this fall may simply reflect the change from five items to three items, all of which was discussed in the original paper. We believe that the 'weighting' to which Sanders et al. refer is the averaging of the three items and multiplication by five in order to create a comparable score. In the original documentation and code produced by Youth in Mind, the organisation hosting the SDQ, the averaging and multiplication of three items in a normal five-item measure (where, for example, two items within the scale were not completed or were incomprehensible) is permitted, so this is no different from usual accepted practice [31]. Sanders et al. state inaccurately that this calculation makes the overall results 'highly questionable'; however, the Conduct Problems scale is only one of four scales used to form the Total Difficulties score, and none of the other scales showed improvements over time either.
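For readers unfamiliar with the SDQ scoring rules being referred to, the sketch below illustrates the pro-rating described above. The values are hypothetical, and it assumes the usual SDQ convention of five items per subscale, each scored 0-2, with pro-rating permitted when at least three items are completed:

```python
def prorated_subscale_score(item_scores, full_item_count=5):
    """Average the completed items and scale up to the full subscale length,
    rounding to the nearest integer. Mirrors the pro-rating rule described
    in the SDQ scoring documentation; this is an illustrative sketch only."""
    completed = [s for s in item_scores if s is not None]
    if len(completed) < 3:
        return None  # too few items to pro-rate
    return round(sum(completed) / len(completed) * full_item_count)

# Hypothetical example: three conduct items scored 1, 2 and 1, two missing.
print(prorated_subscale_score([1, 2, 1, None, None]))  # 4/3 * 5 ≈ 6.7 -> 7
```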

Conclusions

We strongly reject the contention that our original conclusions were not justified by the evidence. The original paper set out the weaknesses in the study design; nevertheless, our methods were robust and would have shown a positive impact of Triple P on the mental health of the population of children in Glasgow city, had there been one.

We did not agree to perform retrospective subgroup analyses that would show the Triple P interventions in a more favourable light. We consider that independent evaluations should be carried out without the influence of programme developers in order that replicability of results can be truly established.

There is some continuing small-scale activity, including an independent trial comparing antenatal Triple P with another parenting programme and a highly targeted and structured programme offering Level 4 Triple P to a small number of families, but the whole-population approach has undoubtedly been abandoned.

Change history

  • 15 October 2019

    The original publication of this article was published in error before the text of the article had been finalised. The article has now been corrected. The publisher apologizes for the inconvenience caused to the authors and readers.

References

  1. Sanders MR. Development, evaluation, and multinational dissemination of the triple P-positive parenting program. Annu Rev Clin Psychol. 2012;8:345–79 Available from: http://www.annualreviews.org/doi/abs/10.1146/annurev-clinpsy-032511-43104.


  2. Prinz RJ, Sanders MR, Shapiro CJ, Whitaker DJ, Lutzker JR. Population-based prevention of child maltreatment: the U.S. triple P system population trial. Prev Sci. 2009;10:1–12 Available from: http://link.springer.com/10.1007/s11121-009-0123-3.


  3. Sanders MR, Ralph A, Sofronoff K, Gardiner P, Thompson R, Dwyer S, et al. Every family: a population approach to reducing behavioral and emotional problems in children making the transition to school. J Prim Prev. 2008;29:197–222 Available from: http://link.springer.com/10.1007/s10935-008-0139-7.


  4. Fives A, Pursell L, Heary C, Nic Gabhainn S, Canavan J. Parenting support for every parent: a population-level evaluation of triple P in Longford Westmeath. Final report. Athlone; 2014.

  5. Marryat L, Thompson L, Wilson P. No evidence of whole population mental health impact of the triple P parenting programme: findings from a routine dataset. BMC Pediatr. 2017;17:40 Available from: http://bmcpediatr.biomedcentral.com/articles/10.1186/s12887-017-0800-5.


  6. Goodman R, Scott S. Comparing the strengths and difficulties questionnaire and the child behavior checklist: is small beautiful? J Abnorm Child Psychol. 1999;27:17–24 Available from: http://link.springer.com/article/10.1023/A%253A1022658222914.


  7. Achenbach TM, McConaughy SH, Howell CT. Child/adolescent behavioral and emotional problems: implications of cross-informant correlations for situational specificity. Psychol Bull. 1987;101:213–32 Available from: http://doi.apa.org/getdoi.cfm?doi=10.1037/0033-2909.101.2.213.


  8. Stanger C, Lewis M. Agreement among parents, teachers, and children on internalizing and externalizing behavior problems. J Clin Child Psychol. 2010;22:107–16 Available from: http://www.tandfonline.com/doi/abs/10.1207/s15374424jccp2201_11. Lawrence Erlbaum Associates, Inc.


  9. Marryat L, Thompson L, McGranachan M, Barry S, Sim F, White J, et al. Parenting support framework evaluation final report, 2014, project report: NHS Greater Glasgow and Clyde; 2014.

  10. Sanders MR, Kirby JN, Tellegen CL, Day JJ. The triple P-positive parenting program: a systematic review and meta-analysis of a multi-level system of parenting support. Clin Psychol Rev. 2014;34:337–57 Available from: http://linkinghub.elsevier.com/retrieve/pii/S0272735814000683.


  11. Mejia A, Calam R, Sanders MR. A pilot randomized controlled trial of a brief parenting intervention in low-resource settings in Panama. Prev Sci. 2015;16:707–17 Available from: http://link.springer.com/10.1007/s11121-015-0551-1. Springer US.


  12. Keown LJ, Sanders MR, Franke N, Shepherd M. Te Whānau Pou Toru: a randomized controlled trial (RCT) of a culturally adapted low-intensity variant of the triple P-positive parenting program for indigenous Māori families in New Zealand. Prev Sci. 2018. https://doi.org/10.1007/s11121-018-0886-5.


  13. Turner KMT, Hodge LM, Forster M, McIlduff CD. Working effectively with indigenous families. In: Sanders MR, Mazzucchelli TG, editors. The power of positive parenting: transforming the lives of children, parents and communities using the triple P system. New York: Oxford University Press; 2018. p. 321–31.


  14. Love SM, Sanders MR, Turner KM, Maurange M, Knott T, Prinz R, et al. Social media and gamification: engaging vulnerable parents in an online evidence-based parenting program. Child Abuse Negl. 2016;53:95–107.


  15. Haskett ME, Armstrong J, Neal SC, Aldianto K. Perceptions of triple P-positive parenting program seminars among parents experiencing homelessness. J Child Fam Stud. 2018;27(6):1957–67.


  16. Wessels I, Ward CL. Battered women and parenting: acceptability of an evidence-based parenting programme to women in shelters. Child Adolesc Ment Health. 2016;28(1):21–31.


  17. Hiscock H, Bayer JK, Price A, Ukoumunne OC, Rogers S, Wake M. Universal parenting programme to prevent early childhood behavioural problems: cluster randomised trial. BMJ. 2008;336.


  18. Malti T, Ribeaud D, Eisner MP. The effectiveness of two universal preventive interventions in reducing Children’s externalizing behavior: a cluster randomized controlled trial. J Clin Child Adolesc Psychol. 2011;40:677–92 Available from: http://www.tandfonline.com/doi/abs/10.1080/15374416.2011.597084.


  19. Prinz RJ, Sanders MR. Adopting a population-level approach to parenting and family support interventions. Clin Psychol Rev. 2007;27:739–49 Available from: http://www.ncbi.nlm.nih.gov/pubmed/17336435.


  20. Hahlweg K, Heinrichs N, Kuschel A, Bertram H, Naumann S. Long-term outcome of a randomized controlled universal prevention trial through a positive parenting program: is it worth the effort? Child Adolesc Psychiatry Ment Health. 2010;4:14 Available from: http://capmh.biomedcentral.com/articles/10.1186/1753-2000-4-14.


  21. Doyle O, Hegarty M, Owens C. Population-based system of parenting support to reduce the prevalence of child social, emotional, and Behavioural problems: difference-in-differences study. Prev Sci. 2018;3:1–10.


  22. Little M, Berry V, Morpeth L, Blower S, Axford N, Taylor R, Bywater T, Lehtonen M, Tobin K. The impact of three evidence-based Programmes delivered in public Systems in Birmingham, UK. Int J Confl Violence. 2013;6(2):260–72.


  23. Whittingham K, Sofronoff K, Sheffield JK. Stepping stones triple P: a pilot study to evaluate acceptability of the program by parents of a child diagnosed with an autism Spectrum disorder. Res Dev Disabil. 2006;27(4):364–80.


  24. Najman JM, Williams GM, Nikles J, Spence S, Bor W, O'Callaghan M, Le Brocque R, Andersen MJ, Shuttlewood GJ. Bias influencing maternal reports of child behaviour and emotional state. Soc Psychiatry Psychiatr Epidemiol. 2001;36(4):186–94.


  25. Wilson P, Rush R, Hussey S, Puckering C, Sim F, Allely C, Doku P, McConnachie A, Gillberg C. How evidence-based is an ‘evidence-based parenting program’? A PRISMA systematic review and meta-analysis of triple P. BMC Med. 2012;10(1):130.


  26. Thomas R, Zimmer-Gembeck MJ. Behavioral outcomes of parent-child interaction therapy and triple P-positive parenting program: a review and meta-analysis. J Abnorm Child Psychol. 2007;35(3):475–95.


  27. Ozbek A, Gencer O, Mustan AT. Which parents dropout from an evidence-based parenting programme (Triple-P) at CAMHS? Comparison of programme-completing and dropout parents. Clinical child psychology and psychiatry. 2019;24(1):144–57.


  28. Wade M. NHS ‘wasted’ £4million spent on controversial parenting initiative. Edinburgh: The Times; 2016. p. 5.


  29. CHP GC: Family nurse partnership (FNP). 2012. http://library.nhsggc.org.uk/mediaAssets/CHP%20Glasgow/Item%20No%2010%20-%20Paper%202012-094%20Family%20Nurse%20Partnership.pdf.


  30. White J, Connelly G, Thompson L, Wilson P. Assessing children's social and emotional wellbeing at school entry using the strengths and difficulties questionnaire: professional perspectives. Educ Res. 2013;55:87–98.


  31. YouthinMind: Scoring the SDQ. 2016. http://www.sdqinfo.com/py/sdqinfo/c0.py.



Author information


Corresponding author

Correspondence to Matthew R. Sanders.

Ethics declarations

Ethics Approval and Consent to Participate (Marryat et al.)

Not applicable.

Consent to Publish (Marryat et al.)

Not applicable.

Availability of Data and Materials (Marryat et al.)

Data sharing is not applicable to this article as no datasets were generated or analysed during the current study.

Competing interests (Marryat et al.)

LM has received indirect support through the University of Glasgow and NatCen Social Research for evaluations of the Family Nurse Partnership and Triple P. In addition, her PhD fees were paid for as part of the contract for the evaluation of the parenting support framework in Glasgow City. She received no personal remuneration for any of this work. She has no personal financial conflict of interest, has not been involved in developing any parenting programme and is not represented on any parenting charity or commercial board. She has no reputational conflict of interest to declare.

PW has received indirect support through his employing Universities for evaluation of a number of parenting programmes including Triple P, Circle of Security, the Robusthed.dk parent programme and Mellow Parenting. He has received no personal remuneration for any of this work. He is a friend and former colleague at the University of Glasgow of an employee (Christine Puckering) of the Mellow Parenting charity who developed much of that programme, and he has co-authored a number of papers with her about the Mellow Parenting programme, the Mellow Parenting Observation Scale and the Attachment and Behavioral Catchup (ABC) programme. Research grants to PW’s departments for evaluation of parenting programmes have been awarded by the Scottish Government Health Department, the Scottish Government Chief Scientist Office, the Scottish Collaboration for Public Health Research and Policy, Yorkhill Endowment Funds, TrygFonden, the Grant Foundation and NIHR. PW has no personal financial conflict of interest, has not been involved in developing any parenting programme and is not represented on any parenting charity or commercial board. He has no reputational conflict of interest to declare.

LT has received indirect support through her employing Universities for evaluation of a number of parenting programmes including Triple P, Parents InC, Incredible Years, the Robusthed.dk parent programme and Mellow Parenting. She has received no personal remuneration for any of this work. She is a friend and former colleague at the University of Glasgow of an employee (Christine Puckering) of the Mellow Parenting charity who developed much of that programme, and she has co-authored a number of papers with her about the Mellow Parenting programme and the Mellow Parenting Observation Scale. Research grants to LT’s departments for evaluation of parenting programmes have been awarded by the Scottish Government Health Department, the Scottish Government Chief Scientist Office, the Scottish Collaboration for Public Health Research and Policy, Yorkhill Endowment Funds, TrygFonden, the Grant Foundation and NIHR. LT has no personal financial conflict of interest, has not been involved in developing any parenting programme and is not represented on any parenting charity or commercial board. She has no reputational conflict of interest to declare.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.


About this article


Cite this article

Sanders, M.R., de Caestecker, L., McLeod, S. et al. Comparing apples and pears: misleading conclusions about the population mental health impact of a parenting programme, a commentary on Marryat, Thompson and Wilson (2017). BMC Pediatr 19, 269 (2019). https://doi.org/10.1186/s12887-019-1570-z

