Pam Morgan Meta-Analysis


Evaluation of Evidence-Based Practices in Online Learning

A Meta-Analysis and Review of Online Learning Studies

  1. What were the measures used in this study?
  2. How did the researchers define “better”?
  3. How did the researchers define “performance”?

 

It appears to me that the study focused on “objective measures of student learning” (U.S. Department of Education, 2010, p. xii), improved learning outcomes, and student achievement.  The report consistently mentioned effect sizes; an effect size is the difference between the mean of the treatment group and the mean of the control group, divided by the pooled standard deviation (SD). The report authors also referred often to a 1992 article by J. Cohen stating that effect sizes around .20 were considered small, those around .50 medium, and those above .80 large (U.S. Department of Education, 2010, p. xiv).  For purely online learning versus face-to-face instruction (Category 1), the outcomes only needed to be the same, but for blended versus face-to-face (Category 2), the effect sizes had to be larger.  There was also Category 3, which looked at variations within online learning.
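
To make that calculation concrete, here is a minimal sketch of how the effect size for a single study contrast could be computed. The scores below are invented for illustration and are not data from the report.

```python
import statistics

def cohens_d(treatment, control):
    """Effect size: difference in group means divided by the pooled standard deviation."""
    m_t, m_c = statistics.mean(treatment), statistics.mean(control)
    sd_t, sd_c = statistics.stdev(treatment), statistics.stdev(control)
    n_t, n_c = len(treatment), len(control)
    # The pooled SD weights each group's variance by its degrees of freedom (n - 1).
    pooled_sd = (((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2)) ** 0.5
    return (m_t - m_c) / pooled_sd

# Hypothetical exam scores for an online (treatment) and a face-to-face (control) group.
online = [78, 85, 82, 90, 74, 88]
face_to_face = [75, 80, 79, 84, 72, 83]

d = cohens_d(online, face_to_face)
print(f"effect size d = {d:+.2f}")  # compare against Cohen's .20 / .50 / .80 benchmarks
```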

The measures of the learning outcomes had to be objective and direct, and included:

 

  • scores on standardized tests
  • scores on researcher-created assessments
  • scores or grades on teacher-created assessments
  • grades or grade point averages

 

And for teacher-learners:

 

  • assessment of content knowledge
  • analysis of lesson plans
  • observations or logs of class activities
  • analysis of portfolios
  • supervisor rating of job performance

(U.S. Department of Education, 2010, p. 12)

 

The general finding of the meta-analysis was that classes with an online learning component (either blended or purely online) had stronger learning outcomes, with a mean effect size across all 50 contrasts of +0.20, p < .001 (U.S. Department of Education, 2010, p. 18).
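
To see what that headline number summarizes, here is a rough sketch that averages a set of per-contrast effect sizes and tests the mean against zero with a one-sample t statistic. The effect sizes below are invented, not the report's 50 contrasts, and a real meta-analysis would weight each contrast (typically by inverse variance) rather than average them equally.

```python
import statistics
from math import sqrt

# Hypothetical per-contrast effect sizes, for illustration only.
effect_sizes = [0.35, 0.10, 0.25, -0.05, 0.40, 0.15, 0.30, 0.05, 0.20, 0.25]

mean_es = statistics.mean(effect_sizes)
se = statistics.stdev(effect_sizes) / sqrt(len(effect_sizes))
t = mean_es / se  # one-sample t statistic against a null mean effect size of 0

print(f"mean effect size = {mean_es:+.2f}, t = {t:.2f} on {len(effect_sizes) - 1} df")
```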

 

Performance seemed to mean declarative or procedural knowledge (U.S. Department of Education, 2010, p. 35).  I noticed that the most common subject matter was medicine or healthcare, but the studies also covered teacher education, computer science, mathematics, language, science, social science, and business (U.S. Department of Education, 2010, p. xiii).  I didn't see much from the humanities or much involving problem-solving knowledge.  Declarative and procedural knowledge seem easier to measure: the former is knowledge that can be stated, the latter is knowledge of how to do something, and both are very observable.  I would be interested to see how other types of knowledge could be tested and measured.

 
