jjones meta analysis

Page history last edited by joy jones 13 years, 1 month ago

 

Evaluation of Evidence-Based Practices in Online Learning

A Meta-Analysis and Review of Online Learning Studies

 

Each of you, please answer the following questions on the class wiki as indicated, but I want you to “talk”/collaborate with one another in the class as you answer them.

 

1.      What were the measures used in this study?

2.      How did the researchers define “better”?

3.      How did the researchers define “performance”?

______________________________________________________________________________________________________

 

What were the measures used in this study?

 

The overall measure of this study was student learning outcomes, but several specific measures were used. The measures varied depending on the “condition” in which the study took place: there were blended online/face-to-face conditions and face-to-face-only conditions.

Measures (objective and direct) used were:

Scores on Standardized Tests, p. 12

Scores on Researcher-Created Assessments, p. 12

Grades/Scores on Teacher-Created Assessments (e.g., assignments, midterm/final exams), p. 12

Grades or Grade Point Averages, p. 12

 

 

Teacher Learners: assessments of content knowledge, analysis of lesson plans or other materials related to the intervention, observation (or logs) of class activities, analysis of portfolios, or supervisor’s rating of job performance,  p. 12

Online quizzes, homework, support mechanisms such as guiding questions, p. xvi

…. [Effect sizes] measured as the difference between treatment and control means, divided by the pooled standard deviation, p. ix

Pre- and post-tests of student writing, scored on a researcher-developed rubric, were used as outcome measures, p. 33

Multiple-choice test including subtests on oral and written comprehension, p. 32

Multiple Choice, p. 47

Tests of Students’ Writing Ability, p. 32

Internet Use, p. 13

 

Student Satisfaction, p. 6
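One of the measures above deserves a note: the report expresses outcomes as an effect size, i.e., the difference between treatment and control means divided by the pooled standard deviation (p. ix). A minimal sketch of that calculation in Python, using made-up scores (the group means, standard deviations, and sample sizes below are hypothetical, not taken from the report):

```python
import math

def pooled_sd(sd_t, n_t, sd_c, n_c):
    """Pooled standard deviation of the treatment and control groups."""
    return math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2))

def effect_size(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Standardized mean difference: (treatment mean - control mean) / pooled SD."""
    return (mean_t - mean_c) / pooled_sd(sd_t, n_t, sd_c, n_c)

# Hypothetical example: online (treatment) vs. face-to-face (control) exam scores
d = effect_size(mean_t=78.0, sd_t=10.0, n_t=30, mean_c=74.0, sd_c=10.0, n_c=30)
print(round(d, 2))  # -> 0.4, a modest advantage for the online condition
```

Read this way, the report’s overall finding of “modestly better” corresponds to a small positive effect size, while a negative value (such as the –0.24 for writing ability on p. 32) favors the face-to-face condition.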

 

 

 

How did the researchers define “better”?

 

The researchers defined “better” as whether the addition, improvement, or modification applied to the “conditions” improved student outcomes. I have placed direct text from the analysis below to support my answer.

 

…On average, students in online learning conditions performed modestly better than those receiving face-to-face instruction. p. ix

 

A replacement application that is equivalent to conventional instruction in terms of learning outcomes is considered a success if it provides learning online without sacrificing student achievement. p. 3

 

…. (Spanish) The other measure was a test of students’ writing ability, and the effect size for this skill was –0.24, with students receiving face-to-face instruction doing significantly better than those receiving the online blended version of the course. p. 32

 

…. (K-12 Study) An effect size of +0.37 was obtained, with online students performing better than their peers in conventional classrooms. p. 32

 

(4 groups) Students in the interactive video group performed significantly better than the other three groups. p. 40

 

Across the three classes, pooling all sections, students in the more active, high-intensity online tool condition demonstrated better understanding of the material on mid-term and final examinations than did the other students. p. 41

 

…. (“elaborated questions” and “maximizing reasons”) Elaborated questions stimulated better-developed arguments, but maximizing reasons instructions did not. p. 42

 

The group that received instruction in self-regulated learning performed better in their online learning. p. 45

 

In a quasi-experimental study of Taiwan middle school students taking a Web-based biology course, Wang et al. (2006) found that students in the condition using a formative online self-assessment strategy performed better than those in conditions using traditional tests, whether the traditional tests were online or administered in paper-and-pencil format. p. 45

 

When a developer of alternatives was specified, the student-moderated groups performed significantly better than the instructor-moderated groups. p. 46

 

Online learning conditions produced better outcomes than face-to-face learning alone, regardless of whether these instructional practices were used. p. 51

 

…. (influence of study methods variables) It is reassuring to note that, on average, online learning produced better student learning outcomes than face-to-face instruction in those studies with random-assignment experimental designs (p < .001) and in those studies with the largest sample sizes (p < .01). p. 52

 

How did the researchers define “performance”?

 

Performance is the outcome of the analysis. Students’ performance was based on course examinations, written performance, multiple-choice tests, SAT scores and gender, guiding questions, self-regulated learning, social scripts, and exposure to collaborative tools, to name a few. There was a tendency for higher performance in the face-to-face condition in the test for homogeneity. “Student performance was statistically higher on tests taken immediately after completion of modules that included self-assessment questions than after completion of those without such questions” (p. 45).
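The “test for homogeneity” is not spelled out on this page; in meta-analyses it is typically Cochran’s Q, which checks whether the studies’ effect sizes vary more than chance alone would allow. A minimal sketch under that assumption (the effect sizes and variances below are made up for illustration, not drawn from the report):

```python
def cochran_q(effects, variances):
    """Cochran's Q: weighted sum of squared deviations of each study's
    effect size from the fixed-effect weighted mean (weights = 1/variance)."""
    weights = [1.0 / v for v in variances]
    mean = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
    return sum(w * (d - mean) ** 2 for w, d in zip(weights, effects))

# Hypothetical effect sizes and variances from four studies
q = cochran_q([0.37, -0.24, 0.20, 0.15], [0.04, 0.05, 0.03, 0.06])
# Compare q against a chi-square distribution with k - 1 = 3 degrees of freedom;
# a large q suggests the studies do not share a single true effect size.
```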

 
