Nichols, Adam Meta-Analysis

Page history last edited by Adam Nichols 13 years ago

Evaluation of Evidence-Based Practices in Online Learning

Meta-Analysis and Review of Online Learning Studies

 

Means, B., Toyama, Y., Murphy, R., Bakia, M., & Jones, K. (2009). Evaluation of Evidence-Based Practices in Online Learning: A Meta-Analysis and Review of Online Learning Studies. US Department of Education. Retrieved from EBSCOhost.

 

http://www.eric.ed.gov/PDFS/ED505824.pdf

 

What were the measures used in this study?

 

The researchers of this meta-analysis measured the difference between student outcomes for online and face-to-face classes as an effect size: the difference between treatment and control means, divided by the pooled standard deviation. This effect was larger in studies contrasting conditions that blended elements of online and face-to-face instruction with conditions taught entirely face-to-face. Analysts noted that these blended conditions often included additional learning time and instructional elements not received by students in the control conditions. (ix)
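The effect-size calculation described above (a standardized mean difference using the pooled standard deviation) can be sketched as follows. This is an illustrative example, not code or data from the report; the group means, standard deviations, and sample sizes are hypothetical.

```python
import math

def pooled_sd(sd_t, n_t, sd_c, n_c):
    """Pooled standard deviation of the treatment and control groups."""
    return math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                     / (n_t + n_c - 2))

def effect_size(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Standardized mean difference: (treatment mean - control mean)
    divided by the pooled standard deviation."""
    return (mean_t - mean_c) / pooled_sd(sd_t, n_t, sd_c, n_c)

# Hypothetical contrast: a blended-learning section vs. a
# face-to-face section taking the same final exam.
g = effect_size(mean_t=78.0, sd_t=10.0, n_t=30,
                mean_c=74.0, sd_c=12.0, n_c=30)
print(round(g, 2))  # -> 0.36
```

A positive value means the online or blended (treatment) group scored higher than the face-to-face (control) group on the common outcome measure.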

 

They went on to examine effects only for objective measures of student learning, discarding effects based on student or teacher perceptions of learning or course quality and on student affect. (xii)

 

In two stages of screening of abstracts and full texts, 176 online learning research studies published between 1996 and 2008 were identified that used an experimental or quasi-experimental design and objectively measured student learning outcomes. Of these 176 studies, 99 had at least one contrast between an online or blended learning condition and face-to-face instruction that could potentially be used in the quantitative meta-analysis; just nine of these 99 involved K–12 learners. The remaining 77 studies compared different variations of online learning without a face-to-face control condition and were set aside for narrative synthesis. (xii)

 

 

Two of the meta-analyses referenced in this study included video-based distance learning as well as web-based learning, and they also included studies in which the outcome measure was student satisfaction, attitude, or another non-learning measure. The meta-analysis reported here is restricted to an analysis of effect sizes for objective student learning measures in experimental, controlled quasi-experimental, and crossover studies of applications with web-based components. (p. 6)

 

Like Machtmes and Asher's meta-analysis, this one limited its study corpus to experiments or quasi-experiments with an achievement measure as the learning outcome. In most cases, the meta-analyses reviewed used an objective learning measure as the outcome measure. (p. 7)

One goal of the meta-analysis was to analyze only learning outcomes that were measured for both treatment and control groups, and measured in the same way across study conditions. For example, scores on standardized tests were considered learning outcomes for the purposes of the study.

Other accepted measures included assessments of content knowledge, analyses of lesson plans or other materials related to the intervention, observations of class activities, analyses of portfolios, and supervisors' ratings of job performance. Studies that used only non-learning outcome measures, such as attitude, retention, attendance, and learner or instructor satisfaction, were excluded.

(p. 12)

 

These measures were used in a variety of ways. The goal was to determine which strategies or practices resulted in better student performance and which delivery method was more effective.

 

 

 

How did the researchers define “better”?

 

 

The researchers essentially defined "better" as a positive difference in learning outcomes on the measures used in the meta-analysis. The study found that, on average, students in online conditions performed modestly better than those learning the same material through traditional face-to-face instruction. (xiv)

 

One example was the use of interactive video, which resulted in better student performance. The study compared four conditions: traditional face-to-face instruction and three online environments (interactive video, non-interactive video, and no video). Students were randomly assigned to one of the four groups, and those in the interactive video group performed significantly better than the other three groups. (p. 40)

 

Another example compared the grades of students using an advanced collaborative tool with those of students using a standard one. Ibrahim (2007) compared the success rates of students learning the Java programming language who used a standard collaborative tool with the success rates of those who used an advanced collaborative tool that allowed compiling, saving, and running programs inside the tool. The course grades for students using the advanced collaborative tool were higher than those of students using the more standard tool.

 

Much like the example above, Dinov, Sanchez and Christou (2008) integrated tools from the Statistics Online Computational Resource in three courses in probability and statistics. For each course, two groups were compared: one group of students received a “low intensity” experience that provided them with access to a few online statistical tools; the other students received a “high-intensity” condition with access to many online tools for acting on data. Across the three classes, pooling all sections, students in the more active, high-intensity online tool condition demonstrated better understanding of the material on mid-term and final examinations than did the other students. (p. 41)

 

In this research, "better" was determined by comparing the performance of different test groups on a specific task. Different variables were introduced, and the conditions that produced greater learning gains were deemed better.

 

 

 

How did the researchers define "performance"?

 

The researchers defined performance in terms of the learning outcomes derived from the measures described above.

 

Poirier and Feldman found a significant main effect favoring the purely online course format for examination grades but no effect on student performance on writing assignments. (p. 38)

 

However, in another study, Caldwell (2006) found no significant differences in performance on a multiple-choice test between undergraduate computer science majors enrolled in a blended course and those enrolled in an online course. Both groups used a Web-based platform for instruction, which was supplemented by a face-to-face lab component for the blended group. (p. 39)

 

Evans (2007) explored the effects on performance of more and less expository online instruction for students learning chemistry lab procedures. After asking students to complete an online unit that was either text-based or dynamic and interactive, Evans found that SAT score and gender were stronger predictors of student performance on a posttest with conceptual and procedural items than was the type of online unit to which students were exposed. (pp. 41–42)

 

Shen, Lee and Tsai (2007) found a combination of effects for self-regulation and opportunities to learn through realistic problems. They compared the performance of students who did and did not receive instruction in self-regulated learning strategies such as managing study time, goal setting, and self-evaluation. The group that received instruction in self-regulated learning performed better in their online learning. (p. 45)

 

In some studies, more than one learning outcome was reported, with no single measure standing out as primary. In such cases, analysts extracted multiple contrasts from the study and, when the outcome measures were similar, calculated a weighted average of the multiple outcome scores. (A5)

 

In the specific studies cited here, performance was used as a comparison between two groups of learners. The groups were given the same task using different forms of interactive technology, or none at all, and the group with the better results was judged to have performed better.
