Can Feedback Elevate the Quality of Online Learning?
Reference:
Ertmer, P. A., Richardson, J. C., Belland, B., Camin, D., Connolly, P., Coulthard, G., Lei, K., & Mong, C. (2007). Using peer feedback to enhance the quality of student online postings: An exploratory study. Journal of Computer-Mediated Communication, 12(4), 412–433. https://doi.org/10.1111/j.1083-6101.2007.00331.x
Annotation:
Ertmer et al. (2007) explore whether structured peer feedback can sustain or improve the quality of graduate students’ online discussion posts in a fully online course. Using Bloom’s taxonomy as a scoring rubric, the authors examined students’ perceptions of both giving and receiving feedback and measured changes in posting quality over time. Although peer feedback did not significantly increase scores, it maintained quality levels and fostered deeper reflection, metacognition, and engagement. Students valued instructor feedback more but acknowledged peer feedback as a meaningful mechanism for clarifying thinking, validating ideas, and reinforcing learning.
Ertmer et al. (2007) offer a carefully structured and methodologically transparent case study, especially notable for combining multiple instruments: surveys, interviews, and rubric-based scoring. By adopting Bloom’s taxonomy as a consistent evaluation framework, the authors ensured a high degree of face validity, something often missing in online-learning research. A major strength lies in how they operationalized “quality” through observable cognitive indicators rather than relying on self-reports alone. Their mixed-methods approach allowed them to capture both the stability of posting quality (quantitative) and the rich internal reasoning students engaged in while giving feedback (qualitative).
The study’s clarity in describing its procedures, anonymity protections, and reliability checks makes it replicable and trustworthy. Moreover, the discussion is unusually candid about logistical constraints, such as delayed feedback cycles, showing an awareness of the real-world instructional design challenges that L&D professionals regularly navigate. Overall, the study stands out for its practical applicability and its nuanced treatment of peer review as both a cognitive and a social learning tool.
For Allegiant Professional Resources, where our mission is to elevate workforce learning outcomes for clients and consumers, this study reinforces a core truth: learning quality improves when learners actively evaluate and articulate understanding rather than merely consume content. Ertmer et al.’s insights support our belief that learning frameworks must move beyond passive LMS modules or gamified environments that prioritize activity over cognition. Giving feedback deepens learning even more than receiving it, and this study helps us better understand those dynamics so we can design more effective training programs. It also supports Allegiant’s vision for a next-generation corporate learning architecture that uses reflective, socially driven, neurologically aligned pathways to strengthen memory, decision-making, and skill transfer.
As we build frameworks that tailor learning to cognitive profiles, peer-based scaffolding can become a powerful differentiator: it honors neurodiverse strengths such as pattern recognition, deep analysis, and verbal reasoning while fostering equitable, inclusive knowledge construction. This article directly informs the L&D ecosystems we design for clients, where meaningful interaction, self-assessment, and cognitive challenge become cornerstones of higher retention and real-world performance.