Allegiant Professional Resources LLC

AI as the New Workplace Assistant—Promise, Limits, and Practical Realities

As organizations increasingly experiment with AI tools to answer employee questions, interpret policy documents, or guide internal procedures, it is tempting to see AI as a kind of universal workplace assistant: always available, endlessly patient, and capable of reducing administrative burden. But this week’s readings reminded me that using AI as a catch-all solution requires a far more grounded approach. Nemorin et al. (2023) highlight how AI is often surrounded by inflated promises, and this made me more cautious about positioning an AI assistant as a complete replacement for human judgment. The way these AI bots take everything so literally reminds me of the old Amelia Bedelia books!

If an internal AI tool provides incorrect information about procedures or compliance requirements, the consequences can be far more serious than a simple technology glitch. The authors also note that AI hype often conceals deeper issues related to privacy and surveillance, which pushed me to consider how internal search tools might inadvertently track or profile employees based on the questions they ask. This could inherently bias the AI assistant as it collects information on the employee population and the types of questions employees ask. Can you imagine an AI chatbot telling a high-performing employee they should just quit?

Similarly, Sofia et al. (2023) argue that AI is reshaping workforce expectations by creating constant demands for reskilling. This made me rethink the assumption that AI assistants automatically reduce workload; instead, employees need training to use these tools effectively and to understand their limitations, especially when the AI is interpreting policies or guiding procedural decisions. Their discussion on employee trust also resonated with me. Deploying AI internally is not just a technical decision; it is a cultural one. Employees are far more likely to rely on an AI assistant when the organization communicates clearly about how it works, what data it uses, and where human oversight still matters.

Touretzky et al. (2019) reinforce this human-centered approach by emphasizing the importance of AI literacy. Their argument that foundational AI understanding is essential made me realize that workplace AI assistants should not merely give answers but should support the development of employee judgment. When people understand how AI models process information, they become more discerning and less likely to accept outputs uncritically. The authors’ focus on ethical reasoning also shaped my thinking about internal AI tools. If an AI assistant is delivering guidance on workplace policies, the organization has a responsibility to ensure the system does so ethically, accurately, and in ways that support, not undermine, employee autonomy. Sometimes an AI tool may inadvertently expose sensitive organizational initiatives, such as a RIF (reduction in force), since these tools do not grasp the importance of timing in employee matters.

Overall, these readings helped me see AI assistants not as a replacement for employee work, but as a carefully governed support tool that requires human literacy, ethical design, and transparent communication. As I read my classmates’ reflections later this week, I’m curious how others are considering the balance between efficiency and responsibility in AI integration, and what they believe organizations owe employees when deploying such tools.


Can Feedback Elevate the Quality of Online Learning?

Reference:

Ertmer, P. A., Richardson, J. C., Belland, B., Camin, D., Connolly, P., Coulthard, G., Lei, K., & Mong, C. (2007). Using peer feedback to enhance the quality of student online postings: An exploratory study. Journal of Computer-Mediated Communication, 12(4), 412–433. https://doi.org/10.1111/j.1083-6101.2007.00331.x

Annotation:

Ertmer et al. (2007) explore whether structured peer feedback can sustain or improve the quality of graduate students’ online discussion posts in a fully online course. Using Bloom’s taxonomy as a scoring rubric, the authors examined students’ perceptions of both giving and receiving feedback and measured changes in posting quality over time. Although peer feedback did not significantly increase scores, it successfully maintained quality levels and fostered deeper reflection, metacognition, and engagement. Students valued instructor feedback more but acknowledged peer feedback as a meaningful mechanism for clarifying thinking, validating ideas, and reinforcing learning.

Ertmer et al. (2007) offer a carefully structured and methodologically transparent case study, especially notable for using a variety of tools like surveys, interviews, and rubric-based scoring. By adopting Bloom’s taxonomy as a consistent evaluation framework, the authors ensured a high degree of face validity, which is something often missing in online-learning research. A major strength lies in how they operationalized “quality” through observable cognitive indicators, rather than relying on self-reports alone. Their mixed-methods approach allowed them to capture both the stability of posting quality (quantitative) and the rich internal reasoning students engaged in while giving feedback (qualitative).

The study’s clarity in describing its procedures, anonymity protections, and reliability checks makes it replicable and trustworthy. Moreover, the article’s discussion is unusually candid about logistical constraints, like delayed feedback cycles, showing an awareness of the real-world instructional design challenges that L&D professionals regularly navigate. Overall, the study stands out for its practical applicability and its nuanced treatment of peer review as both a cognitive and social learning tool.

For Allegiant Professional Resources, where our mission is to elevate workforce learning outcomes for clients and consumers, this study reinforces a core truth: learning quality improves when learners actively evaluate and articulate understanding, not just consume content. Ertmer et al.’s insights support our belief that learning frameworks must move beyond passive LMS modules or gamified environments that prioritize activity over cognition. Giving feedback deepens learning more than receiving it, and this study helps us further understand the dynamics of learning so we can design more effective training programs. It also supports Allegiant’s vision for a next-generation corporate learning architecture that uses reflective, socially driven, neurologically aligned pathways to strengthen memory, decision-making, and skill transfer.

As we build frameworks that tailor learning to cognitive profiles, peer-based scaffolding can become a powerful differentiator: it honors neurodiverse strengths such as pattern recognition, deep analysis, or verbal reasoning while fostering equitable, inclusive knowledge construction. This article directly informs the L&D ecosystems we design for clients, where meaningful interaction, self-assessment, and cognitive challenge become cornerstones of higher retention and real-world performance.


The Cost of Ineffective Employee Training

References:

Durgungoz, F. C., & Durgungoz, A. (2025). “Interactive lessons are great, but too much is too much”: Hearing out neurodivergent students, Universal Design for Learning and the case for integrating more anonymous technology in higher education. Higher Education. https://doi.org/10.1007/s10734-024-01389-6

Kessler, R. C., Adler, L., Barkley, R., Biederman, J., Conners, C. K., Demler, O., … Walters, E. E. (2006). The prevalence and correlates of adult ADHD in the United States: Results from the National Comorbidity Survey Replication. The American Journal of Psychiatry, 163(4), 716-723. https://doi.org/10.1176/appi.ajp.163.4.716

Annotation:

Durgungoz and Durgungoz’s (2025) study explores how technology-enhanced learning environments grounded in the Universal Design for Learning (UDL) framework can improve engagement for neurodivergent learners, including those with ADHD, while cautioning against overstimulation from excessive interactivity. Interestingly, the findings suggest that digital training programs are most effective when they provide flexibility, anonymity, and multiple ways for neurodivergent learners to engage. The most effective programs allowed participants to control pacing, choose preferred interaction modes, and reduce cognitive overload.

Why is this relevant to employee learning and development? Kessler et al. (2006) put current adult ADHD prevalence at roughly 4–4.4%, and workplace studies show ADHD is associated with measurable reductions in job performance, higher odds of absence and accidents, and a quantifiable human-capital loss per affected worker (for example, a study of a large employer found that workers with ADHD averaged a 4–5% reduction in work performance and an estimated lost-productivity value of roughly US$4,300 per affected worker per year).

The study’s strength lies in combining qualitative and quantitative approaches, collecting feedback from neurodivergent adults in higher education to assess emotional and cognitive engagement across different instructional formats. The researchers clearly outline how UDL-driven technology design enhances inclusion by offering multiple means of engagement and representation, while also noting that excessive interactivity can overwhelm participants. The presentation is balanced, integrating participant voices with data analysis, and uses well-structured arguments supported by empirical findings. This approach strengthens the case for adapting UDL to corporate training by emphasizing flexibility, anonymity, and learner choice.

Allegiant Professional Resources’ mission to design corporate learning programs that genuinely enhance employee skillsets rather than simply deliver information makes the UDL approach a valuable tool in our repertoire. The study’s emphasis on UDL provides a research-based framework that supports our approach of tailoring training experiences to diverse cognitive styles and engagement preferences. Just as the article highlights the importance of balancing interaction with structure for neurodivergent adult learners, our team applies similar principles when developing corporate trainings, integrating technology that allows flexibility, pacing control, and choice in how learners engage with material.

This research reinforces the value of embedding inclusivity and intentional design into skill development programs, ensuring that each training we create is not only accessible but also effective in building lasting competencies that translate directly to workplace performance.



The Impact of Choice in Learning

Reference:

Murphy, J., Farrell, K., & Myers, J. (2024). Student choice in online asynchronous higher education courses. In Proceedings of the [Conference Name if known]. ACM. https://doi.org/10.1145/3760213.3708894

Annotation:

The article explores how offering students choices in online asynchronous higher education courses enhances engagement, autonomy, and relevance. Drawing from theories like constructivism, self-determination, and andragogy, the authors argue that allowing flexibility in content, process, and product supports deeper learning and motivation. A pilot study with undergraduate and graduate students found that choice particularly strengthened connections to career goals, encouraged authentic learning experiences, and increased satisfaction. The findings suggest that structured opportunities for choice can transform courses into learner-centered environments that foster agency, self-regulation, and practical application.

Murphy, Farrell, and Myers (2024) do a good job of clearly connecting theory to practice by showing how student choice can improve engagement in online learning. The use of a pilot study with both undergraduate and graduate students gives the article a practical angle that helps support its claims, even if the sample size is modest. The mix of quantitative survey results and qualitative student feedback adds depth and makes the findings feel more grounded. Overall, the article is well organized and easy to follow, making complex ideas accessible without being overly technical.

The ideas in this article translate well into workplace training and curriculum design because they highlight the importance of giving adults meaningful choices in how they learn. In professional settings, employees bring diverse experiences, learning preferences, and career goals, so offering flexibility in content, process, and product can make training more relevant and motivating. The emphasis on autonomy and authentic application resonates strongly with adult learning in the workplace, where practical connections often matter more than abstract theory. This approach supports consultants and trainers in creating programs that not only build skills but also encourage ownership, engagement, and long-term growth.


Perception Drives Interpretation of Feedback

Reference:

Newman, D. (2025). Examining the emotional tone of student evaluations of teaching. Canadian Journal of Learning and Technology, 51(1), 1–18. https://doi.org/10.21432/CJLT-28695

Annotation:

How does perception affect feedback? Newman (2025) analyzed 600 student-written evaluations from Rate My Professors (2018–2023) to determine the emotional tone of the language used. Student feedback was reviewed for indicators such as pleasantries and words with positive connotations using Whissell’s Dictionary of Affect in Language (DAL). The study found that the feedback students provided in the evaluations was emotionally neutral in tone; however, instructors on average perceived the tone to be overly critical.

The study’s strengths lie in the reliability of the tools used, like the DAL, and the simplicity of its measures. The correlations are easy to understand, and the methods are described clearly enough that the study could be replicated with ease. Newman (2025) also adequately acknowledges the limitations of the data reviewed, such as sampling bias, word-count variability, and the constraints of publicly available online data.

In the context of organizational performance management, this article underscores the value of distinguishing emotional perception from objective data. Similar to how faculty may overinterpret student comments as overly negative, employees and managers often perceive performance evaluations as more emotionally charged than they actually are. For consultants, the findings point to the importance of designing evaluation systems that emphasize neutrality and balance. By integrating structured training on how to give and receive feedback, organizations can foster a shared understanding that feedback is a tool for growth rather than criticism. Embedding “feedback literacy” into workplace practices not only reduces defensiveness and bias but also equips both leaders and staff with the skills to interpret evaluations constructively. This approach supports the development of resilient, evidence-based performance systems that encourage trust, reduce anxiety, and create a culture where feedback is seen as an essential driver of individual and organizational improvement.
