Bridging the Gap: Skepticism, Incentives, and Strategies for AI Adoption
The High Street Journal
published: Jul 03, 2025
By Isaac Dwamena
How businesses and institutions can overcome hesitation, unlock value, and shape an inclusive AI future
From boardrooms to governments, it is clear that AI is the future. And yet, behind closed doors, something different is happening. Many companies hesitate. Leaders nod along at conferences, but back home, pilot projects stall. Strategies drift and tools sit idle.
It isn’t because they don’t see the potential. They do. It’s because the path from potential to practicality is clouded with skepticism, risk, and, most critically, a lack of meaningful incentives.
This is the paradox at the center of AI adoption: we know it can transform industries, but we haven’t made the case compelling enough to move the masses. At least not yet.
AI Skepticism
AI skepticism is more than fear of robots or job loss. It runs deeper. It is cultural, psychological, and in many cases entirely rational. For individuals, the stakes are personal. For businesses and institutions, where livelihoods, accountability, privacy, and brand reputation are on the line, the stakes shift dramatically: skepticism is less about fear of the unknown and more about control, cost, and consequences. Bridging this gap requires not just better tools, but deeper trust in AI’s ability to deliver value without unintended consequences.
Based on several conversations across multiple industries, here are the top fears and resistance points in AI adoption, ranked by their perceived risk level.
The fear that AI could become uncontrollable or dominate humans is valid. The real danger lies in how humans choose to use it, or fail to control it. The solution lies in strong governance, transparency, and keeping humans in control.
The fear of mass job loss is valid. AI is expected to replace many repetitive roles, and faster than past technologies did. Like previous technology shifts, however, it should also create new types of work, often in hybrid, AI-assisted roles. The key is to invest in reskilling and support smooth transitions, not just manage losses. AI may not replace jobs one-to-one, especially not in the same regions or at the same skill levels. Because new roles often demand digital skills or creativity, they risk excluding many workers. The answer may well lie in equitable training, local job ecosystems, and policies that broaden access to the new economy.
If left unchecked, AI can deepen existing inequalities by concentrating power and scaling biased systems. But when designed with intention, it can also expand access to healthcare, education, and opportunity. Building inclusive, community-aware AI systems is essential to ensuring it levels the field.
Many leaders hesitate to adopt AI because they’re unsure how to measure its return on investment (ROI). But AI’s value often shows up in time saved, burnout reduced, and cognitive overload relieved, not just in revenue. For example, in conversations with multiple doctors using conversational AI tools in their practices, they noted not only faster documentation, but also more time to focus on patients, improved mental clarity, and less administrative stress. To unlock this kind of value, organizations need ROI frameworks that go beyond dollars, capturing efficiency, well-being, and strategic advantage alike.
Across businesses and institutions, these concerns arise again and again. They are not resistance to innovation; they are reasonable concerns that deserve tangible answers. Perhaps we can learn from cases where skepticism was overcome, and see what it looks like when it works:
Mayo Clinic, one of the leading medical institutions in the United States, introduced AI tools cautiously but purposefully, particularly in radiology and oncology. Recognizing that physicians were wary of “black-box” systems, Mayo prioritized explainable AI models that would assist rather than replace human judgment. By involving clinicians in model development and offering extensive training, the institution built trust and demonstrated how AI could reduce diagnostic errors and improve time-to-treatment. This patient-centered, augmentation-first approach led to high adoption rates and strong internal support.
Shell is a global energy giant in a traditionally conservative industry, where skepticism toward emerging technologies is common, especially among engineers concerned about safety and system reliability. Rather than starting with sweeping transformation, Shell focused on a single high-impact use case: predictive maintenance for drilling equipment. By partnering with an AI platform and involving frontline engineers early, the company used AI to reduce equipment downtime and extend asset life. The success of this initiative not only delivered measurable ROI but also shifted internal attitudes, creating momentum for broader AI adoption across departments.
Unilever, the global consumer goods company, successfully leveraged AI to transform its hiring process while addressing both efficiency and equity concerns. Facing high volumes of job applicants and internal pressure to reduce bias, Unilever adopted an AI-powered platform that used behavioral science and natural language processing to screen candidates through online games and video interviews. The process drastically reduced hiring time from weeks to days while improving the quality and diversity of hires. Crucially, Unilever maintained human oversight throughout and publicly shared the ethical frameworks guiding its use of AI, helping to build trust internally and externally.
What’s the pattern? Small wins, local context, human-in-the-loop.
Rethinking Incentives
Incentives matter. To accelerate AI adoption, we need to go beyond performance metrics and dashboards. We need a new class of incentives: bold, layered, and creative. For example:
Financial Incentives: offer AI tax credits, transition grants, and results-based pricing to lower adoption risk. For example, AI vendors could bill based on value delivered, like a percentage of cost savings from automation, rather than flat fees. These models help small businesses and public institutions take the leap without heavy upfront investment.
Strategic Incentives: create AI maturity indexes that benchmark organizations and drive competitive pride. Host competitive threat simulations that demonstrate how fast movers can outpace laggards. These pressure-based incentives could appeal to executive strategy and industry standing.
Cultural Incentives: recognize adopters with awards, badges, and certifications that elevate their brand and attract talent or investment. Think of it as the Leadership in Energy and Environmental Design (LEED) certification of AI, where businesses are rewarded for responsible, forward-looking integration. Cultural incentives make AI adoption a symbol of leadership.
Social Impact Incentives: provide ethical AI accreditations and encourage open data contributions to ensure adoption benefits more than just the bottom line. Organizations can gain recognition for deploying AI that improves healthcare access or reduces bias.
Policy-Linked Incentives: use regulatory sandboxes to let businesses test AI in low-risk environments and tie procurement eligibility to AI-readiness. For example, governments might require AI capability in bids for certain digital contracts. This approach blends encouragement with pressure, ensuring AI becomes part of compliance, not just innovation.
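As a toy illustration of the maturity-index idea above, a benchmark could roll a handful of self-assessed dimensions into a single comparable score that organizations can track and compete on. Everything here, the dimension names, the weights, and the 0-to-5 scale, is a hypothetical sketch, not an established index:

```python
# Hypothetical AI maturity index: rates an organization on a few
# illustrative dimensions (0-5 each) and rolls them into a 0-100 score.
# Dimensions and weights are assumptions for illustration only.

WEIGHTS = {
    "data_readiness": 0.25,
    "talent_and_training": 0.25,
    "governance": 0.20,
    "deployment_scale": 0.30,
}

def maturity_score(ratings: dict) -> float:
    """Weighted average of 0-5 dimension ratings, scaled to 0-100."""
    total = sum(WEIGHTS[d] * ratings[d] for d in WEIGHTS)
    return round(total / 5 * 100, 1)

# Example: an organization strong on data but early in deployment.
print(maturity_score({
    "data_readiness": 4,
    "talent_and_training": 3,
    "governance": 3,
    "deployment_scale": 2,
}))  # -> 59.0
```

Published side by side, even a rough score like this can create the competitive pride the strategic incentives rely on.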
The AI industry must also play a part in incentivizing. Its most powerful act won’t be launching the next large language model, but bridging the gaps. It should:
Rethink Pricing & Product Strategy: AI companies should offer localized freemium models and usage-based pricing to lower barriers for small businesses and underserved regions. For example, a health NGO in Kenya could pay per diagnosis, while a U.S. bakery pays per automated scheduling task. These flexible models make AI tools financially viable for high-impact but low-budget users.
Act as Infrastructure Builders: Beyond tools, AI firms can provide foundational support through AI infrastructure grants, like cloud credits, labeled datasets, or compute access for underserved startups and institutions. They can also invest in edge-compatible AI that works offline, such as crop disease detection apps that sync only when connected. This is essential in areas with limited internet access.
Be Learning Partners: The AI industry should invest in train-the-trainer programs and localized education, empowering local educators to teach AI concepts in context. Open curriculum libraries in native languages or formats (video, radio, comics) can make learning accessible. This enables sustainable knowledge growth in both rural Africa and non-technical communities globally.
Champion Open Ecosystems: AI companies can expand access by supporting region-specific open-source projects, like Swahili NLP or agriculture advisory chatbots. They should also provide API access credits and plug-and-play SDKs to help small teams integrate AI into basic operations like payroll or school admin systems. Open ecosystems foster innovation from the ground up.
Signal Ethical Inclusion: Initiatives like “AI for Inclusion” partner programs and inclusion impact reporting can spotlight and reward responsible, equitable deployment. These programs can elevate social ventures and track how many AI resources reach underserved users. Recognition, not just capability, becomes a driver of adoption and accountability.
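To make the pricing ideas above concrete, here is a minimal sketch of usage-based billing capped by a share of the customer’s measured savings, so a low-budget user never pays more than a fraction of the value delivered. The per-task rate, the 30% savings share, and the clinic example are all illustrative assumptions, not any vendor’s actual model:

```python
# Sketch of usage-based pricing with a results-based cap: the vendor
# bills per task completed, but never more than a fixed share of the
# customer's documented savings. All rates here are hypothetical.

def monthly_bill(tasks: int, rate_per_task: float,
                 measured_savings: float, savings_share: float = 0.3) -> float:
    """Charge per task, capped at `savings_share` of measured savings."""
    usage_charge = tasks * rate_per_task
    cap = measured_savings * savings_share
    return round(min(usage_charge, cap), 2)

# A small clinic: 1,200 AI-assisted notes at $0.50 each, with $900 in
# documented time savings. Usage would be $600, but the bill is capped
# at 30% of savings.
print(monthly_bill(1200, 0.50, 900.0))  # -> 270.0
```

The same function prices the Kenyan NGO’s per-diagnosis use and the U.S. bakery’s per-scheduling-task use; only the inputs change, which is the point of flexible, localized pricing.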
The goal isn’t just adoption. It’s inclusion. Will AI be a tool of consolidation or empowerment? We can wait for the market to “naturally” adopt AI, or we can architect that future through intentional incentives, inclusive infrastructure, and shared responsibility.
Isaac Dwamena is a contributing opinion writer and the founder of Dwara Innovation, an artificial intelligence firm. He’s a strategist and advisor focused on AI integration across markets and regions.