The Things AI Can’t Teach: The Value of Humanity
Reference:
DeSchryver, M., Henriksen, D., Leahy, S., & Lindsay, S. (2024). Beyond automation: Intrinsically human aspects of creativity in the age of generative AI. Central Michigan University & Arizona State University.
Annotation:
In a world where GenAI is getting better at writing, designing, analyzing, and even “creating,” this article asks a surprisingly grounding question:
What parts of creativity are still fundamentally human, and why should we care?
The authors argue that while AI can mimic creative output, it cannot replicate the experience of creativity. They highlight six intrinsically human creative capacities:
Curiosity
Intuition
Mindfulness/Patience
Imagination
Empathy
Embodied Thinking
Each of these capacities is shown to stem from lived experience, emotion, bodily awareness, and cultural or ethical context: qualities AI cannot meaningfully possess.
The article concludes with a bold call for education and training programs to prioritize these human strengths, especially as workplaces adopt more AI tools. What makes this article compelling for L&D practitioners is how clearly it demonstrates that the deepest forms of learning transfer rely on human senses and embodied cognition, not just content delivery.
Even in corporate e-learning or hybrid training, learners use their:
sense of movement
perception of space
emotional resonance
curiosity-driven discomfort
intuitive pattern recognition
empathetic social awareness
reflective stillness
These are not “nice to have” elements. They are the mechanisms through which information becomes memory, memory becomes understanding, and understanding becomes real-world behavior change. AI can support training, but it cannot replace these body-anchored processes.
The article’s strengths lie in its clear framework of six human creative traits, which provides educators with a practical structure for evaluating AI’s role in learning environments. It also connects theory to real educational practice, offering concrete implications for classrooms and instructional design.
The authors thoughtfully distinguish between AI’s ability to mimic creative outputs and the uniquely human experience of creativity, and they incorporate cultural and embodied perspectives that highlight AI’s current limitations. However, the article can be dense at times, relying heavily on academic theory, which may feel abstract for practitioners seeking immediate application. Its cultural analysis leans largely on Western research, leaving room for broader global insight, and while it acknowledges that AI may evolve toward more human-like traits, it stops short of fully exploring emerging areas such as embodied robotics and multimodal agentic systems.
The article does not explicitly frame creativity in terms of the body’s senses—but it could, and doing so makes the implications for learning transfer even more powerful.
Below is a reframing of the six traits through the lens of innate human sensory faculties, capacities AI cannot authentically replicate.
1. Curiosity → The Sense of “Cognitive Hunger”
Linked to dopamine systems, orientation reflexes, and the brain’s drive toward novelty.
In training, curiosity sparks attention — the first gateway to learning transfer.
2. Intuition → Gut Sense (Interoception) + Pattern Experience
Humans feel intuition physically: tightness, ease, resonance.
AI has no interoceptive system and no lived experiences to shape intuitive judgment.
3. Mindfulness/Patience → Temporal Sensory Awareness
Humans perceive time through emotional and physiological regulation.
Incubation, the moment when learning quietly consolidates, depends on embodied calm, not computational speed.
4. Imagination → Mental Imagery + Visuospatial Processing
When we imagine, sensory cortices light up as if we are seeing or hearing.
AI recombines text and image data but does not experience imagery.
5. Empathy → Emotional Resonance (Affective Sensing)
Humans detect microexpressions, tone, posture, and relational energy unconsciously.
AI can label emotions but cannot feel them or use them for moral discernment.
6. Embodied Thinking → The Entire Sensorimotor System
Creativity is deeply body-based: gesture, movement, rhythm, weight, balance.
These physical cues are essential for problem-solving, skill acquisition, and long-term memory encoding.
AI as the New Workplace Assistant—Promise, Limits, and Practical Realities
As organizations increasingly experiment with AI tools to answer employee questions, interpret policy documents, or guide internal procedures, it is tempting to see AI as a kind of universal workplace assistant: always available, endlessly patient, and capable of reducing administrative burden. But this week’s readings reminded me that using AI as a catch-all solution requires a far more grounded approach. Nemorin et al. (2023) highlight how AI is often surrounded by inflated promises, and this made me more cautious about positioning an AI assistant as a complete replacement for human judgment. The way these AI bots take things so literally reminds me of the old Amelia Bedelia books!
If an internal AI tool provides incorrect information about procedures or compliance requirements, the consequences can be far more serious than a simple technology glitch. The authors also note that AI hype often conceals deeper issues related to privacy and surveillance, which pushed me to consider how internal search tools might inadvertently track or profile employees based on the questions they ask. This may inherently bias the AI assistant as it collects information about the employee population and the types of questions they ask. Can you imagine an AI chatbot telling a high-performing employee they should just quit?
Similarly, Sofia et al. (2023) argue that AI is reshaping workforce expectations by creating constant demands for reskilling. This made me rethink the assumption that AI assistants automatically reduce workload; instead, employees need training to use these tools effectively and to understand their limitations, especially when the AI is interpreting policies or guiding procedural decisions. Their discussion of employee trust also resonated with me. Deploying AI internally is not just a technical decision; it is a cultural one. Employees are far more likely to rely on an AI assistant when the organization communicates clearly about how it works, what data it uses, and where human oversight still matters.
Touretzky et al. (2019) reinforce this human-centered approach by emphasizing the importance of AI literacy. Their argument that foundational AI understanding is essential made me realize that workplace AI assistants should not merely give answers but should support the development of employee judgment. When people understand how AI models process information, they become more discerning and less likely to accept outputs uncritically. The authors’ focus on ethical reasoning also shaped my thinking about internal AI tools. If an AI assistant is delivering guidance on workplace policies, the organization has a responsibility to ensure the system does so ethically, accurately, and in ways that support, not undermine, employee autonomy. An AI tool might even inadvertently expose sensitive organizational initiatives, such as a RIF (reduction in force), because it does not understand how to handle timing in employee matters.
Overall, these readings helped me see AI assistants not as a replacement for employee work, but as a carefully governed support tool that requires human literacy, ethical design, and transparent communication. As I read my classmates’ reflections later this week, I’m curious how others are considering the balance between efficiency and responsibility in AI integration, and what they believe organizations owe employees when deploying such tools.
Applying Activity Theory to Transform Learning Impact
Reference:
Marroquín, E. M. (2025). Activity theory as framework for analysis of workplace learning in the context of technological change. Learning and Teaching: The International Journal of Higher Education in the Social Sciences. https://doi.org/10.1016/j.later.2025.1000083
Annotation:
The rise of AI has happened faster than businesses and experts can adapt to the changes it has inevitably caused. Marroquín (2025) explores how Activity Theory can serve as a powerful framework for understanding how workplace learning evolves within technologically mediated environments. The author argues that as artificial intelligence and automation transform job functions, learning must be viewed not as a discrete event but as an integral part of the work activity system (comprising tools, rules, roles, community, and the object of work).
Rather than focusing on isolated training sessions, the study suggests that learning occurs through the contradictions and adaptations that arise as employees interact with new tools and changing structures. By examining these tensions, the article highlights how organizational learning can drive systemic transformation and measurable performance outcomes, making it incredibly relevant to the field of organizational development.
Marroquín’s use of Activity Theory offers a rich, systems-level analysis that transcends traditional learning frameworks focused on individual cognition. The methodology draws on the framework’s core elements (mediation, contradictions, and expansive learning), which provide a structured yet flexible lens for analyzing real-world complexity in workplace settings.
The strength of this article lies in its integration of theory and practice: it effectively links conceptual depth with practical implications for managing learning in AI-enabled environments. At Allegiant Professional Resources, our learning and development initiatives echo Marroquín’s perspective: learning is only valuable if it changes work outcomes. We’ve moved away from counting inputs such as “2 hours of training completed” or “5,000 skills tagged” and instead focus on impact measures, such as reduced error rates, faster cycle times, or improved decision accuracy after interventions.
Activity Theory helps us trace how those results occur by analyzing the full activity system: what tools employees use, which rules or norms guide their work, how their roles interact, and what the shared object of their activity is. When contradictions emerge (for example, when a new AI dashboard changes reporting workflows), we view them as learning opportunities rather than inefficiencies. Marroquín’s work reinforces our philosophy that training is not the outcome; performance improvement is. It provides a theoretical foundation for measuring not activity but transformation within the work system, a principle that continues to shape Allegiant’s evidence-based approach to organizational learning and impact measurement.
Try Using AI-Personalized Podcasts to Drive Retention and Employee Development
Reference:
Do, T. D., Bin Shafqat, U., Ling, E., & Sarda, N. (2024). PAIGE: Examining learning outcomes and experiences with personalized AI-generated educational podcasts (arXiv preprint arXiv:2409.04645). https://doi.org/10.48550/arXiv.2409.04645
Annotation:
The researchers take a deep dive into how generative AI can convert textbook chapters into personalized educational podcasts for a group of 180 college students. They compared traditional textbook reading with both generalized and personalized AI-generated podcasts across multiple subject areas. Their findings showed that students overwhelmingly preferred podcasts to reading, and that personalized podcasts tailored to learners’ backgrounds and interests improved comprehension in several disciplines.
The takeaway is clear: AI-driven, personalized audio content can enhance learning engagement and outcomes when designed with relevance and learner context in mind.
The study’s methodology, integrating AI-driven podcast generation with validated user experience measures, models exactly the kind of data-informed experimentation L&D professionals can use to evaluate their own digital learning tools. It also underscores the importance of delivery design: conversational tone, pacing, and modality can deeply influence learner motivation. Consultants working with clients on upskilling strategies can take from this that AI isn’t just a content generator; it’s an adaptive facilitator that can align learning experiences to individual needs and organizational culture.
At Allegiant, our consulting work centers on helping organizations create inclusive learning environments that make workplace learning more effective for all employees, particularly those whose neurodivergence offers unique cognitive strengths. Studies like this one inform how we think about designing micro-learning and leadership development content that doesn’t just “teach,” but connects meaningfully with how diverse minds engage with information.
We also see a connection between this research and how business leaders who host industry podcasts can influence engagement and retention. A 2023 LinkedIn Workplace Learning Report found that employees who feel connected to their organization’s thought leadership (through podcasts or leadership-led storytelling) are 33% more likely to stay with the company. Integrating AI-generated podcasts or internal learning channels can give employees that same sense of inclusion and relevance.
As our research and consulting practice evolves, we’re exploring how personalization, audio learning, and neurodivergent engagement strategies can converge to make corporate learning both equitable and deeply human.