The 93/7 Problem: Why AI Investments Are Failing People—and What Leaders Must Do About It 

I recently came across a statistic shared by a Deloitte director that reveals a striking imbalance in enterprise AI investment: 93% flows into technology, platforms, and tools, while a mere 7% supports people. At first glance, this allocation appears rational. Artificial intelligence demands sophisticated infrastructure, evolving models, complex orchestration layers, and extensive vendor ecosystems—all requiring substantial capital. However, this ratio represents more than a budgetary imbalance. It actively undermines the very outcomes organizations seek from their AI investments.

The fundamental error lies in framing AI transformation as a purely technical challenge when it is inherently socio-technical. This mischaracterization creates a widening gap between organizational capabilities and employee readiness. The consequences manifest most visibly in customer service and experience domains, where AI success depends equally on algorithmic precision and human judgment, empathy, escalation protocols, and governance frameworks. 

The Proliferation Challenge 

A familiar pattern emerges across industries: organizations continuously discover AI tools operating within their own walls. These discoveries range from chatbot platforms and call summarization engines to agent assist copilots, knowledge retrieval systems, and workflow automation layers—often deployed independently by different teams. While typically launched with positive intentions and promising pilot results, these tools frequently arrive without shared understanding of purpose, ownership, or integration into existing workflows. 

This fragmentation stems directly from overemphasizing technology acquisition while neglecting organizational readiness. When budgets prioritize tools over training, teams default to purchasing software rather than building capability. Innovation becomes scattered, duplicated, and resistant to scale. Employees navigate an increasingly complex maze of overlapping solutions, each promising productivity transformation, yet few accompanied by adequate time, training, or incentives for effective adoption. 

For customer service environments, this fragmentation proves particularly damaging. Agents already manage significant cognitive load, balancing speed, quality, compliance requirements, and emotional labor. Introducing multiple AI tools without a cohesive learning strategy amplifies rather than reduces operational friction.

Structural Risks of Imbalanced Investment 

The 93/7 investment split creates tangible operational risks beyond philosophical concerns. AI systems generate value not through mere existence but through human understanding—knowing when to rely on AI recommendations, when to override them, how to interpret outputs, and how to explain decisions to customers. These competencies require deliberate instruction, practice, and reinforcement rather than intuitive discovery. 

Underinvestment in people creates predictable failure modes. Adoption suffers as employees either underutilize AI due to mistrust or overuse it without understanding limitations. Quality becomes inconsistent when agents using identical tools produce dramatically different outcomes. Governance weakens as personnel lacking AI comprehension fail to identify errors, bias, or compliance violations. 

In customer experience contexts, these risks translate directly into customer dissatisfaction. Poorly trained agents leaning on AI-generated responses can escalate frustration faster than no AI intervention at all. The underlying technology may function correctly, but the surrounding human system fails.

The Accelerating Skills Gap 

AI transformation fundamentally reshapes job roles faster than training programs evolve. Customer service agents transition from responders to supervisors of AI outputs, curators of context, and decision-makers for edge cases. Team leads become performance coaches for hybrid human-AI workflows. Customer experience leaders must interpret AI-driven insights while maintaining accountability for human outcomes. 

Yet training approaches remain largely static. Organizations continue relying on one-time enablement sessions, brief vendor demonstrations, or optional learning modules—approaches insufficient even before generative AI emerged. In today’s environment, they prove wholly inadequate. 

The rapid evolution of AI tools means skills degrade quickly. Prompting strategies shift, capabilities expand, interfaces change, and governance rules evolve. Without continuous learning investment, organizations operate cutting-edge tools with outdated mental models. 

The Capability Fallacy 

The 93/7 imbalance reflects a fundamental misconception: purchasing advanced AI tools automatically upgrades organizational capability. This mindset treats AI like traditional software, where features directly translate to productivity gains. AI operates differently, behaving probabilistically rather than deterministically, requiring interpretation, judgment, and contextual awareness. 

This distinction proves critical in customer service. AI can suggest responses but cannot fully assess customer emotional states, cultural contexts, or historical brand relationships. Humans must provide that assessment. Without training for effective AI collaboration, systems produce either robotic interactions or inconsistent service experiences. 

True capability emerges from human-machine interaction rather than from tools themselves. That interaction requires deliberate design, practice, and refinement, none of which a 7% investment in people can deliver.

Rebalancing the Investment Portfolio 

Advocating for increased people investment does not mean decelerating technological progress. It means recognizing that technology without talent represents unrealized potential. Organizations must deliberately rebalance AI spending to reflect this reality. 

Training must evolve beyond “tool operation” toward “thinking with AI.” Employees need understanding of AI strengths, failure modes, bias manifestation, and accountability maintenance. For customer service, this includes training on when to trust AI recommendations, when to challenge them, and how to recover gracefully from AI errors. 

Equally critical is creating protected time and space for learning. The primary barrier to effective AI adoption is not resistance but workload. Agents and supervisors face expectations to master new tools atop already demanding roles. Without dedicated learning time, even superior training programs fail. 

Establishing AI Literacy as Core Competency 

AI literacy should be treated as a foundational competency rather than a specialized skill. Just as organizations once invested heavily in digital literacy, they must now invest in AI fluency across all roles. This does not require transforming everyone into data scientists but rather providing working understanding of how AI systems reason, what inputs shape outputs, and what ethical responsibilities accompany their use.

In customer experience, AI literacy directly impacts trust. Customers increasingly ask whether decisions were made by humans or AI and why requests were denied. Employees lacking AI literacy struggle to answer confidently. Well-trained personnel can explain decisions transparently, preserving trust even when outcomes prove unfavorable. 

From Tool Acquisition to Capability Design 

The phenomenon of employees “discovering” internal AI tools signals poor capability design. Organizations should shift from asking “What tools should we purchase next?” to “What capabilities are we building?” Capability-first thinking redirects focus from procurement to outcomes. 

For customer service, target capabilities might include faster issue resolution, more personalized interactions, proactive problem detection, or improved agent wellbeing. Once capabilities are defined, leaders can determine which tools support them and what training makes those tools effective. This approach naturally increases people investment because capability cannot exist without skill. 

Leadership Imperatives 

Correcting the 93/7 imbalance demands leadership courage. Technology investments appear tangible, visible, and easier to justify. People investments are messier, with returns harder to measure, slower to realize, and deeply tied to culture. Yet they are what separates AI as an experimental novelty from AI as a sustained competitive advantage.

Leaders must model learning themselves, signaling that AI competence is neither optional nor delegable. They must reward thoughtful AI use rather than mere speed or volume. They must fund training not as a one-time initiative but as an ongoing operating expense.

The Path Forward 

AI will continue advancing at an extraordinary pace. Tools will become more powerful, autonomous, and embedded in customer service workflows. However, regardless of technological sophistication, customer experience remains a human-centered discipline. Empathy, judgment, accountability, and trust cannot be automated. 

A future allocating 93% to technology and 7% to people does not deliver AI-powered excellence. It produces underutilized tools, overwhelmed employees, and inconsistent customer experiences. Better balance is not merely desirable—it is necessary. 

Successful organizations will recognize AI not as human capability replacement but as a force multiplier. They will invest accordingly, ensuring that as machines become smarter, people become more capable alongside them. This is how AI delivers on its promise—not by sidelining humans but by elevating them.