In 2005, the Department for Education told elementary schools in England to include in their curriculum a program known as SEAL (Social and Emotional Aspects of Learning), which teaches children how to develop their social and emotional skills. In 2007, the mandate was extended to high schools; English children now get this curriculum every year of their school life. It is nothing less than an official national strategy for the future.
SEAL's rapid spread through British schools was launched largely by a single evaluation of a pilot program, conducted in 80 elementary schools from 2002 to 2005.
In the pilot, schools used activities such as group discussion, stories, puppet play, games, and role playing to teach topics such as antibullying, the hurtfulness of gossip, and "uncomfortable feelings," such as grief over a loved one's death. When children were kind to each other or acted appropriately, teachers publicly rewarded them with prizes and certificates on display. The stated goal of the pilot program was beyond reproach: to ensure that “every child have the support they [sic] need to be healthy, stay safe, enjoy and achieve, make a positive contribution, and achieve economic well-being.”
So was SEAL's pilot successful, as advertised? (FYI, we’re relying heavily here on the work of Carol Craig, at the Glasgow-based Centre for Confidence and Well-Being.)
Considering the monumental impact of this study, it’s shocking how badly it lacked methodological rigor. There was no real control group. Administration of the pilot itself varied considerably: some schools had daily SEAL lessons, while others had them just a couple of times a week. And some schools started months, or even a year, later than others.
There was also no effort to find impartial observers of the children's behavior in the program; instead, the study chiefly relied on teachers' and children's ratings of their own behavior. The teachers were asked, broadly, if they thought the program was working. Had it raised the standard of learning? Had it engendered positive attitudes toward school across all students? Had children’s social skills improved?
More important, the researchers didn’t even survey all the teachers who had participated in the pilot.
Instead, they surveyed only the teachers in 31 schools deemed to be examples of "good practice." So if a school tried to implement the program but had trouble with it, or it wasn’t working, its teachers didn’t get to fill out questionnaires (even though the school's student data was still harvested). Nine of the best-practice schools were selected for visits and interviews, so many of the qualitative conclusions drawn by the report are based on interviews with only a small portion of the actual pilot sample, and an admittedly biased selection at that.
That biased methodology alone should have disqualified the report and made anyone suspicious of the pilot study’s agenda.
Surprisingly, even with this selectivity, the teacher ratings were still decidedly mixed.
Just as many teachers, or more, said the program had not improved concentration, the standard of learning, or students' listening skills. The biased sample of teachers did feel positive about the pilot, but for other reasons: they felt it had made kids “want to be good” by publicly rewarding good behavior.
Politicians who read the report might not have realized the data was so problematic, because the report buried this data on page 57 and made no mention of it in its summary. Instead, it filled its pages with qualitative impressions: glowing quotes from individual teachers in those few “best practice” schools.
Unlike the teacher ratings, the three years’ worth of student ratings were randomly sampled. And the students were negative about the program across the board. They felt the pilot hadn’t budged their self-esteem, nor their social skills. The students even thought the program had actually lowered the quality of their academic work a bit.
Students’ self-rated awareness of emotions actually went down in the first year of the program, then slowly crept back up over the next two years. Their attitude toward school didn’t improve until the third year of the pilot, and then only modestly.
So much for students’ and teachers’ vague perceptions of what happened. Let’s get to the hard data on students’ grades and attendance.
But first, you need to know that the pilot study didn’t test only Socio-Emotional Learning (SEL) strategies. Three other interventional approaches were mixed in. To keep this from getting too complicated, consider just one of those alternatives: simple “teacher coaching,” in which teachers were given access to a counselor who helped them talk through and strategize about problems in their classrooms. Of the pilot schools that tried the SEL curriculum, some got SEL only, while others got SEL plus teacher coaching, SEL plus a third intervention, and so on. A number of schools got only teacher coaching, et cetera.
Effectively, these combinations of interventions created something akin to control groups to compare the SEL-only schools against. But the report didn't make those comparisons; that analysis fell to Carol Craig, of the Centre for Confidence and Well-Being, who went to the trouble of separating out the data.
When it came to the students' academics, the combined interventions raised scores a tiny bit, from an 81 average to an 82. Overall, then, the pilot was a meager success. However, SEL came out positive only when it was paired with one of the other three interventions, and cross-group analysis reveals that it was the other intervention (such as teacher coaching) that drove the improvement.
Children in SEL-only schools actually did worse academically. The drops were not severe, barely statistically significant, but they were negative: down a point in early-elementary reading and writing, and down three points in early-elementary math.
So much for the claim that socio-emotional learning had boosted the children's scores.
It didn’t affect attendance either.
Nevertheless, the study’s authors ignored their own data to make oversized claims about the pilot program’s success. In their summary, they wrote:
The SEAL programme had a major impact on children’s well-being, confidence, social and communication skills, relationships, including bullying, playtime behaviour, pro-social behaviour and attitudes towards school. It increased children’s awareness of emotions in others and the calmer environment in the classroom also led to some perceived improvement in learning and attainment.
Based on the real data, teachers all across England should have been given a little coaching time: someone to call to hash out problems in the classroom. Instead, the success of teacher coaching was co-opted by the Emotional Intelligence movement to get nothing less than a new National Strategy adopted countrywide.
One can’t help but wonder, why was the hard data ignored? Why were the students’ perceptions ignored? Why didn’t anyone admit that the pilot program’s evaluation methodology was inherently biased? To answer that, you have to understand the pressure of the times.
In 2007, UNICEF released a scathing report. It had surveyed the well-being of children in the world’s 21 “economically advanced” countries. The report began with a powerful statement: “The true measure of a nation’s standing is how well it attends to its children – their health and safety, their material security, their education and socialization, and their sense of being loved, valued, and included in the families and societies into which they are born.” The United Kingdom ranked dead last in “subjective well-being,” “behaviours and risks,” and “family and peer relationships.” Averaged across all six dimensions, it also ranked dead last.
Stung by this callout, the British government made SEAL its political answer. After a World Health Organization forum in Italy, the U.K. leaned heavily on its new SEAL curriculum in defending itself to the WHO. The argument was, basically: yes, our kids are unhappy, but we’ve got SEAL now, so back off.
There’s no question that we all want kids to feel secure and optimistic about their lives. The real question is whether a socio-emotional curriculum actually works, or whether it’s merely being sold as a panacea.