

Monday, October 13, 2025

From System Records to Agent Records: Workday’s Enterprise AI Transformation Paradigm—A Future of Human–Digital Agent Coexistence

Based on a McKinsey Inside the Strategy Room interview with Workday CEO Carl Eschenbach (August 21, 2025), combined with Workday official materials and third-party analyses, this study focuses on enterprise transformation driven by agentic AI. Workday’s practical experience in human–machine collaborative intelligence offers valuable insights.

In enterprise AI transformation, two extremes must be avoided: first, treating AI as a “universal cost-cutting tool,” falling into the illusion of replacing everything while neglecting business quality, risk, and experience; second, refusing to experiment due to uncertainty, thereby missing opportunities to elevate efficiency and value.

The proper approach positions AI as a “productivity-enhancing digital colleague” under a governance and measurement framework, aiming for measurable productivity gains and new value creation. By starting with small pilots and iterative scaling, cost reduction, efficiency enhancement, and innovation can be progressively unified.

Overview

Workday’s AI strategy follows a “human–agent coexistence” paradigm. Using consistent data from HR and finance systems of record (SOR) and underpinned by governance, the company introduces an “Agent System of Record (ASR)” to centrally manage agent registration, permissions, costs, and performance—enabling a productivity leap from tool to role-based agent.

Key Principles and Concepts

  1. Coexistence, Not Replacement: AI’s power comes from being “agentic”—technology working for you. Workday clearly positions AI for peaceful human–agent coexistence.

  2. Domain Data and Business Context Define the Ceiling: The CEO emphasizes that data quality and domain context, especially in HR and finance, are foundational. Workday serves over 10,000 enterprises, accumulating structured processes and data assets across clients.

  3. Three-System Perspective: HR, finance, and customer SORs form the enterprise AI foundation. Workday focuses on the first two and collaborates with the broader ecosystem (e.g., Salesforce).

  4. Speed and Culture as Multipliers: Treating “speed” as a strategic asset and cultivating a growth-oriented culture through service-oriented leadership that “enables others.”


Practice and Governance (Workday Approach)

  • ASR Platform Governance: A unified directory and observability layer for centralized control of in-house and third-party agents, covering role and permission management, registration and compliance tracking, cost budgeting and ROI monitoring, real-time activity and strategy execution, and agent orchestration/interconnection via A2A/MCP protocols through the Agent Gateway (a minimal registry sketch follows this list). Digital colleagues in HaxiTAG Bot Factory provide similar functional benefits in enterprise scenarios.

  • Role-Based (Multi-Skill) Agents: Upgrade from task-based to configurable “role” agents, covering high-value processes such as recruiting, talent mobility, payroll, contracts, financial audit, and policy compliance.

  • Responsible AI System: Appoint a Chief Responsible AI Officer and employ ISO/IEC 42001 and NIST AI RMF for independent validation and verification, forming a governance loop for bias, security, explainability, and appeals.

  • Organizational Enablement: Systematic AI training for 20,000+ employees to drive full human–agent collaboration.
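To make the ASR idea above concrete, here is a minimal sketch, not Workday's actual ASR schema: all class names, fields, and rules are assumptions. It models an agent record carrying role, permissions, budget, and status, plus a registry that checks all three before an agent is allowed to act.

```python
from dataclasses import dataclass, field
from enum import Enum


class AgentStatus(Enum):
    REGISTERED = "registered"
    ACTIVE = "active"
    SUSPENDED = "suspended"
    RETIRED = "retired"


@dataclass
class AgentRecord:
    """Hypothetical agent-of-record entry: identity, role, permissions, economics."""
    agent_id: str
    role: str                      # e.g. "payroll-verification"
    owner: str                     # accountable human or team
    permissions: set = field(default_factory=set)
    monthly_budget: float = 0.0    # spend ceiling in currency units
    spend_to_date: float = 0.0
    status: AgentStatus = AgentStatus.REGISTERED


class AgentRegistry:
    """Central directory: registration, permission checks, and cost tracking."""

    def __init__(self):
        self._agents: dict[str, AgentRecord] = {}

    def register(self, record: AgentRecord) -> None:
        self._agents[record.agent_id] = record
        record.status = AgentStatus.ACTIVE

    def authorize(self, agent_id: str, permission: str, cost: float) -> bool:
        agent = self._agents.get(agent_id)
        if agent is None or agent.status is not AgentStatus.ACTIVE:
            return False                      # unregistered or suspended agents act on nothing
        if permission not in agent.permissions:
            return False                      # least privilege: deny by default
        if agent.spend_to_date + cost > agent.monthly_budget:
            return False                      # budget ceiling enforced before execution
        agent.spend_to_date += cost
        return True
```

An orchestration layer such as an Agent Gateway would call authorize before routing each task, so permission, status, and budget checks happen in one place.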

Value Proposition and Business Implications

  • From “Application-Centric” to “Role-Agent-Centric” Experience: Users no longer “click apps” but collaborate with context-aware role agents, requiring rethinking of traditional UI and workflow orchestration.

  • Measurable Digital Workforce TCO/ROI: ASR treats agents as “digital employees,” integrating budget, cost, performance, and compliance into a single ledger, facilitating CFO/CHRO/CAIO governance and investment decisions.

  • Ecosystem and Interoperability: Agent Gateway connects external agents (partners or client-built), mitigating “agent sprawl” and shadow IT risks.

Methodology: A Reusable Enterprise Deployment Framework

  1. Objective Function: Maximize productivity, minimize compliance/risk, and enhance employee experience; define clear boundaries for tasks agents can independently perform.

  2. Priority Scenarios: Select high-frequency, highly regulated, and clean-data HR/finance processes (e.g., payroll verification, policy responses, compliance audits, contract obligation extraction) as MVPs.

  3. ASR Capability Blueprint:

    • Directory: Agent registration, profiles (skills/capabilities), tracking, explainability;

    • Identity & Permissions: Least privilege, cross-system data access control;

    • Policy & Compliance: Policy engine, action audits, appeals, accountability;

    • Economics: Budgeting, A/B and performance dashboards, task/time/result accounting;

    • Connectivity: Agent Gateway, A2A/MCP protocol orchestration.

  4. “Onboard Agents Like Humans”: Implement lifecycle management and RACI assignment across the “hire–trial–performance–promotion–offboarding” cycle to prevent over-authorization or improper execution (see the lifecycle sketch after this list).

  5. Responsible AI Governance: Align with ISO 42001 and NIST AI RMF; establish processes and metrics (risk registry, bias testing, explainability thresholds, red teaming, SLA for appeals), and regularly disclose internally and externally.

  6. Organization and Culture: Embed “speed” in OKRs/performance metrics, emphasize leadership in “serving others/enabling teams,” and establish CAIO/RAI committees with frontline coaching mechanisms.
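To make step 4 concrete, here is a minimal lifecycle sketch; it is an illustrative assumption, not a Workday feature. Allowed transitions are encoded explicitly, so an agent cannot be activated or promoted without a passed review and cannot advance again after offboarding.

```python
from enum import Enum


class LifecycleStage(Enum):
    HIRED = "hired"
    TRIAL = "trial"
    ACTIVE = "active"          # passed trial review
    PROMOTED = "promoted"      # expanded scope after sustained performance
    OFFBOARDED = "offboarded"


# Explicit, auditable transition table: anything not listed is rejected.
ALLOWED_TRANSITIONS = {
    LifecycleStage.HIRED: {LifecycleStage.TRIAL, LifecycleStage.OFFBOARDED},
    LifecycleStage.TRIAL: {LifecycleStage.ACTIVE, LifecycleStage.OFFBOARDED},
    LifecycleStage.ACTIVE: {LifecycleStage.PROMOTED, LifecycleStage.OFFBOARDED},
    LifecycleStage.PROMOTED: {LifecycleStage.OFFBOARDED},
    LifecycleStage.OFFBOARDED: set(),
}


def transition(current: LifecycleStage, target: LifecycleStage,
               review_passed: bool) -> LifecycleStage:
    """Advance an agent's lifecycle stage; activation and promotion need a passed review."""
    if target not in ALLOWED_TRANSITIONS[current]:
        raise ValueError(f"{current.value} -> {target.value} is not an allowed transition")
    needs_review = target in {LifecycleStage.ACTIVE, LifecycleStage.PROMOTED}
    if needs_review and not review_passed:
        raise ValueError("performance review required before activation or promotion")
    return target
```

Offboarding would additionally trigger credential revocation in the registry sketched earlier, closing the loop between lifecycle and permissions.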

Industry Insight: Instead of a full-scale rollout, adopt a four-part “role–permission–metric–governance” loop, gradually delegating authority to create explainable autonomy.

Assessment and Commentary

Workday unifies humans and agents within existing HR/finance SORs and governance, balancing compliance with practical deployment density, shortening the path from pilot to scale. Constraints and risks include:

  1. Ecosystem Lock-In: ASR strongly binds to Workday data and processes; open protocols and Marketplace can mitigate this.

  2. Cross-System Consistency: Agents spanning ERP/CRM/security domains require end-to-end permission and audit linkage to avoid “shadow agents.”

  3. Measurement Complexity: Agent value must be assessed by both process and outcome (time saved ≠ business result).

Sources: McKinsey interview with Workday CEO on “coexistence, data quality, three-system perspective, speed and leadership, RAI and training”; Workday official pages/news on ASR, Agent Gateway, role agents, ROI, and Responsible AI; HFS, Josh Bersin, and other industry analyses on “agent sprawl/governance.”

Related topic:

Maximizing Efficiency and Insight with HaxiTAG LLM Studio, Innovating Enterprise Solutions
Enhancing Enterprise Development: Applications of Large Language Models and Generative AI
Unlocking Enterprise Success: The Trifecta of Knowledge, Public Opinion, and Intelligence
Revolutionizing Information Processing in Enterprise Services: The Innovative Integration of GenAI, LLM, and Omni Model
Mastering Market Entry: A Comprehensive Guide to Understanding and Navigating New Business Landscapes in Global Markets
HaxiTAG's LLMs and GenAI Industry Applications - Trusted AI Solutions
Enterprise AI Solutions: Enhancing Efficiency and Growth with Advanced AI Capabilities

Friday, September 19, 2025

AI-Driven Transformation at P&G: Strategic Integration Across Operations and Innovation

As a global leader in the consumer goods industry, Procter & Gamble (P&G) deeply understands that technological innovation is central to delivering sustained consumer value. In recent years, P&G has strategically integrated Artificial Intelligence (AI) and Generative AI (Gen AI) into its operational and innovation ecosystems, forming a company-wide AI strategy. This strategy is consumer-centric, efficiency-driven, and aims to transform the organization, processes, and culture at scale.

Strategic Vision: Consumer Delight as the Sole Objective

P&G Chairman and CEO Jon Moeller emphasizes that AI should serve the singular goal of generating delight for consumers, customers, employees, society, and shareholders—not technology for its own sake. Only technologies that accelerate and enhance this objective are worth adopting. This orientation ensures that all AI projects are tightly aligned with business outcomes, avoiding fragmented or siloed deployments.

Infrastructure: Building a Scalable Enterprise AI Factory

CIO Vittorio Cretella describes P&G’s internal generative AI tool, ChatPG (built on the OpenAI API), which supports over 35 enterprise-wide use cases. Through its “AI Factory,” deployment efficiency has increased tenfold. The platform enables standardized deployment and iteration of AI models across regions and functions, embedding AI capabilities as strategic infrastructure in daily operations.
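P&G has not published ChatPG's internals, so the sketch below is only a hedged illustration of how an "AI Factory" pattern can standardize use cases on top of the OpenAI API: each approved use case carries its own system prompt and model choice, and every deployment reuses one code path. The use-case names and prompts are invented.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical use-case registry: one entry per approved enterprise scenario.
USE_CASES = {
    "market-research-summary": {
        "model": "gpt-4o-mini",
        "system": "Summarize consumer research notes into three concise insights.",
    },
    "supplier-email-draft": {
        "model": "gpt-4o-mini",
        "system": "Draft a polite, factual supplier follow-up email.",
    },
}


def run_use_case(name: str, user_input: str) -> str:
    """Route a request through the standard path for its registered use case."""
    case = USE_CASES[name]
    response = client.chat.completions.create(
        model=case["model"],
        messages=[
            {"role": "system", "content": case["system"]},
            {"role": "user", "content": user_input},
        ],
    )
    return response.choices[0].message.content
```

Centralizing model choice and prompts in one registry is what lets a platform team roll out a new use case without duplicating integration work.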

Core Use Cases

1. Supply Chain Forecasting and Optimization

In collaboration with phData and KNIME, P&G integrates complex and fragmented supply chain data (spanning 5,000+ products and 22,000 components) into a unified platform. This enables real-time risk prediction, inventory optimization, and demand forecasting. A manual verification process that once involved more than a dozen experts has been eliminated, cutting response times from two hours to near-instantaneous.
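The published material stays at the platform level, so as a hedged illustration of just the forecasting step: a trailing-average baseline over unified component demand data is the kind of continuous check such a platform can run. Column names, the window, and the 20% risk threshold below are assumptions.

```python
import pandas as pd

# Illustrative unified demand history: one row per component per week.
demand = pd.DataFrame({
    "component_id": ["C-001"] * 6 + ["C-002"] * 6,
    "week": list(range(1, 7)) * 2,
    "units": [120, 130, 125, 160, 150, 170, 40, 42, 39, 80, 85, 90],
})

# Baseline forecast: trailing 3-week mean per component, excluding the current week.
demand["forecast"] = (
    demand.groupby("component_id")["units"]
    .transform(lambda s: s.shift(1).rolling(window=3, min_periods=1).mean())
)

# Flag components whose latest demand jumps well above the baseline (possible shortage risk).
latest = demand.groupby("component_id").tail(1)
at_risk = latest[latest["units"] > 1.2 * latest["forecast"]]
print(at_risk[["component_id", "units", "forecast"]])
```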

2. Consumer Behavior Insights and Product Development

Smart products like the Oral-B iO electric toothbrush collect actual usage data, which AI models use to uncover behavioral discrepancies (e.g., real brushing time averaging 47 seconds versus the reported two minutes). These insights inform R&D and formulation innovation, significantly improving product design and user experience.

3. Marketing and Media Content Testing

Generative AI enables rapid creative ideation and execution. Large-scale A/B testing shortens concept validation cycles from months to days, reducing costs. AI also automates media placement and audience segmentation, enhancing both precision and efficiency.
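As a generic illustration of the statistics behind large-scale creative testing (not P&G's tooling; the counts are invented), a two-proportion z-test is enough to decide whether a new concept's click-through rate beats the control:

```python
from math import sqrt
from statistics import NormalDist

# Invented counts: impressions and clicks for control (A) and new creative (B).
n_a, clicks_a = 50_000, 1_150
n_b, clicks_b = 50_000, 1_290

p_a, p_b = clicks_a / n_a, clicks_b / n_b
p_pool = (clicks_a + clicks_b) / (n_a + n_b)            # pooled rate under H0: no difference
se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error of the difference
z = (p_b - p_a) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))            # two-sided test

print(f"CTR A={p_a:.2%}, CTR B={p_b:.2%}, z={z:.2f}, p={p_value:.4f}")
# A small p-value (e.g. below 0.05) suggests the new creative genuinely outperforms the control.
```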

4. Intelligent Manufacturing and Real-Time Quality Control

Sensors and computer vision systems deployed across P&G facilities enable automated quality inspection and real-time alerts. This supports “hands-free” night shift production with zero manual supervision, reducing defects and ensuring consistent product quality.

Collective Intelligence: AI as a Teammate

Between May and July 2024, P&G collaborated with Harvard Business School’s Digital Data Design Institute and Wharton School to conduct a Gen AI experiment involving over 700 employees. Key findings include:

  • Teams using Gen AI improved efficiency by ~12%;

  • Individual AI users matched or outperformed full teams without AI;

  • AI facilitated cross-functional integration and balanced solutions;

  • Participants reported enhanced collaboration and positive engagement.

These results reinforce Professor Karim Lakhani’s “Cybernetic Teammate” concept, where AI transitions from tool to teammate.

Organizational Transformation: Talent and Cultural Integration

P&G promotes AI adoption beyond tools—embedding it into organizational culture. This includes mandatory training, signed AI use policies, and executive-level hands-on involvement. CIO Seth Cohen articulates a “30% technology, 70% organization” transformation formula, underscoring the primacy of culture and talent in sustainable change.

Sustaining Competitive AI Advantage

P&G’s AI strategy is defined by its system-level design, intentionality, scalability, and long-term sustainability. Through:

  • Consumer-centric value orientation,

  • Standardized, scalable AI infrastructure,

  • End-to-end coverage from supply chain to marketing,

  • Collaborative innovation between AI and employees,

  • Organizational and cultural transformation,

P&G establishes a self-reinforcing loop of AI → Efficiency → Innovation. AI is no longer a technical pursuit—it is a foundational pillar of enduring corporate competitiveness.

Related topic:

Developing LLM-based GenAI Applications: Addressing Four Key Challenges to Overcome Limitations
Analysis of AI Applications in the Financial Services Industry
Application of HaxiTAG AI in Anti-Money Laundering (AML)
Analysis of HaxiTAG Studio's KYT Technical Solution
Strategies and Challenges in AI and ESG Reporting for Enterprises: A Case Study of HaxiTAG
HaxiTAG ESG Solutions: Best Practices Guide for ESG Reporting
Impact of Data Privacy and Compliance on HaxiTAG ESG System

Tuesday, September 9, 2025

Morgan Stanley’s DevGen.AI: Reshaping Enterprise Legacy System Modernization Through Generative AI

As enterprises increasingly grapple with the pressing challenge of modernizing legacy software systems, Morgan Stanley has unveiled DevGen.AI—an internally developed generative AI tool that sets a new benchmark for enterprise-grade modernization strategies. Built upon OpenAI’s GPT models, DevGen.AI is designed to tackle the long-standing issue of outdated systems—particularly those written in languages like COBOL—that are difficult to maintain, adapt, or scale within financial institutions.

The Innovation: A Semantic Intermediate Layer

DevGen.AI’s most distinctive innovation lies in its use of an “intermediate language” approach. Rather than directly converting legacy code into modern programming languages, it first translates source code into structured, human-readable English specifications. Developers can then use these specs to rewrite the system in modern languages. This human-in-the-loop paradigm—AI-assisted specification generation followed by manual code reconstruction—offers superior adaptability and contextual accuracy for the modernization of complex, deeply embedded enterprise systems.
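Morgan Stanley has not published DevGen.AI's prompts or architecture, but the code-to-spec step can be sketched with a plain LLM call: feed a legacy routine in, ask for a structured English specification out, and leave the rewrite to a developer. The prompt wording, model choice, and COBOL fragment below are assumptions for illustration only.

```python
from openai import OpenAI

client = OpenAI()

LEGACY_SNIPPET = """
       IF ACCT-BALANCE < MIN-BALANCE
          MOVE 'Y' TO FEE-FLAG
          ADD MONTHLY-FEE TO ACCT-FEES
       END-IF.
"""

SPEC_PROMPT = (
    "You are documenting legacy code for modernization. "
    "Write a structured English specification with sections: "
    "Purpose, Inputs, Outputs, Business Rules, Edge Cases. "
    "Do not produce code in any language."
)


def code_to_spec(source: str) -> str:
    """Translate a legacy code fragment into a human-readable specification."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": SPEC_PROMPT},
            {"role": "user", "content": source},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(code_to_spec(LEGACY_SNIPPET))
    # A developer then reimplements the spec in Java or Python and reviews it against the original.
```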

As of 2025, DevGen.AI has analyzed over 9 million lines of legacy code, saving developers more than 280,000 working hours. This not only reduces reliance on scarce COBOL expertise but also provides a structured pathway for large-scale software asset refactoring across the firm.

Application Scenarios and Business Value at Morgan Stanley

DevGen.AI has been deployed across three core domains:

1. Code Modernization & Migration

DevGen.AI accelerates the transformation of decades-old mainframe systems by translating legacy code into standardized technical documentation. This enables faster and more accurate refactoring into modern languages such as Java or Python, significantly shortening technology upgrade cycles.

2. Compliance & Audit Support

Operating in a heavily regulated environment, financial institutions must maintain rigorous transparency. DevGen.AI facilitates code traceability by extracting and describing code fragments tied to specific business logic, helping streamline both internal audits and external regulatory responses.

3. Assisted Code Generation

While its generated modern code is not yet fully optimized for production-scale complexity, DevGen.AI can autonomously convert small to mid-sized modules. This provides substantial savings on initial development efforts and lowers the barrier to entry for modernization.

A key reason for Morgan Stanley’s choice to build a proprietary AI tool is the ability to fine-tune models based on domain-specific semantics and proprietary codebases. This avoids the semantic drift and context misalignment often seen with general-purpose LLMs in enterprise environments.

Strategic Insights from an AI Engineering Milestone

DevGen.AI exemplifies a systemic response to technical debt in the AI era, offering a replicable roadmap for large enterprises. Beyond showcasing generative AI’s real-world potential in complex engineering tasks, the project highlights three transformative industry trends:

1. Legacy System Integration Is the Gateway to Industrial AI Adoption

Enterprise transformation efforts are often constrained by the inertia of legacy infrastructure. DevGen.AI demonstrates that AI can move beyond chatbot interfaces or isolated coding tasks, embedding itself at the heart of IT infrastructure transformation.

2. Semantic Intermediation Is Critical for Quality and Control

By shifting the translation paradigm from “code-to-code” to “code-to-spec,” DevGen.AI introduces a bilingual collaboration model between AI and humans. This not only enhances output fidelity but also significantly improves developer control, comprehension, and confidence.

3. Organizational Modernization Amplifies AI ROI

Mike Pizzi, Morgan Stanley’s Head of Technology, notes that AI amplifies existing capabilities—it is not a substitute for foundational architecture. Therefore, the success of AI initiatives hinges not on the models themselves, but on the presence of a standardized, modular, and scalable technical infrastructure.

From Intelligent Tools to Intelligent Architecture

DevGen.AI proves that the core enterprise advantage in the AI era lies not in whether AI is adopted, but in how AI is integrated into the technology evolution lifecycle. AI is no longer a peripheral assistant; it is becoming the central engine powering IT transformation.

Through DevGen.AI, Morgan Stanley has not only addressed legacy technical debt but has also pioneered a scalable, replicable, and sustainable modernization framework. This breakthrough sets a precedent for AI-driven transformation in highly regulated, high-complexity industries such as finance. Ultimately, the value of enterprise AI does not reside in model size or novelty—but in its strategic ability to drive structural modernization.

Related topic:

How to Get the Most Out of LLM-Driven Copilots in Your Workplace: An In-Depth Guide
Empowering Sustainable Business Strategies: Harnessing the Potential of LLM and GenAI in HaxiTAG ESG Solutions
The Application and Prospects of HaxiTAG AI Solutions in Digital Asset Compliance Management
HaxiTAG: Enhancing Enterprise Productivity with Intelligent Knowledge Management Solutions
Empowering Enterprise Sustainability with HaxiTAG ESG Solution and LLM & GenAI Technology
The Application of HaxiTAG AI in Intelligent Data Analysis
How HaxiTAG AI Enhances Enterprise Intelligent Knowledge Management
Effective PR and Content Marketing Strategies for Startups: Boosting Brand Visibility
Leveraging HaxiTAG AI for ESG Reporting and Sustainable Development

Tuesday, August 19, 2025

Internal AI Adoption in Enterprises: In-Depth Insights, Challenges, and Strategic Pathways

In today’s AI-driven enterprise service landscape, the implementation and scaling of internal AI applications have become key indicators of digital transformation success. The ICONIQ 2025 State of AI report provides valuable insights into the current state, emerging challenges, and future directions of enterprise AI adoption. This article draws upon the report’s key findings and integrates them with practical perspectives on enterprise service culture to deliver a professional analysis of AI deployment breadth, user engagement, value realization, and evolving investment structures, along with actionable strategic recommendations.

High AI Penetration, Yet Divergent User Engagement

According to the report, while up to 70% of employees have access to internal AI tools, only around half are active users. This discrepancy reveals a widespread challenge: despite significant investments in AI deployment, employee engagement often falls short, particularly in large, complex organizations. The gap between "tool availability" and "tool utilization" reflects the interplay of multiple structural and cultural barriers.

Key among these is organizational inertia. Long-established workflows and habits are not easily disrupted. Without strong guidance, training, and incentive systems, employees may revert to legacy practices, leaving AI tools underutilized. Secondly, disparities in employee skill sets hinder AI adoption. Not all employees possess the aptitude or willingness to learn and adapt to new technologies, and perceived complexity can lead to avoidance. Third, lagging business process reengineering limits AI’s impact. The introduction of AI must be accompanied by streamlined workflows; otherwise, the technology remains disconnected from business value chains.

In large enterprises, AI adoption faces additional challenges, including the absence of a unified AI strategy, departmental silos, and concerns around data security and regulatory compliance. Furthermore, employee anxiety over job displacement may create resistance. Research shows that insufficient collective buy-in or vague implementation directives often lead to failed AI initiatives. Uncoordinated tool usage may also result in fragmented knowledge retention, security risks, and misalignment with strategic goals. Addressing these issues requires systemic transformation across technology, processes, organizational structure, and culture to ensure that AI tools are not just “accessible,” but “habitual and valuable.”

Scenario Depth and Productivity Gains Among High-Adoption Enterprises

The report indicates that enterprises with high AI adoption deploy an average of seven or more internal AI use cases, with coding assistants (77%), content generation (65%), and document retrieval (57%) being the most common. These findings validate AI’s broad applicability and emphasize that scenario depth and diversity are critical to unlocking its full potential. By embedding AI into core functions such as R&D, operations, and marketing, leading enterprises report productivity gains ranging from 15% to 30%.

Scenario-specific tools deliver measurable impact. Coding assistants enhance development speed and code quality; content generation automates scalable, personalized marketing and internal communications; and document retrieval systems reduce the cost of information access through semantic search and knowledge graph integration. These solutions go beyond tool substitution — they optimize workflows and free employees to focus on higher-value, creative tasks.

The true productivity dividend lies in system integration and process reengineering. High-adoption enterprises treat AI not as isolated pilots but as strategic drivers of end-to-end automation. Integrating content generators with marketing automation platforms or linking document search systems with CRM databases exemplifies how AI can augment user experience and drive cross-functional value. These organizations also invest in data governance and model optimization, ensuring that high-quality data fuels reliable, context-aware AI models.


Evolving AI R&D Investment Structures

The report highlights that AI-related R&D now comprises 10%–20% of enterprise R&D budgets, with continued growth across revenue segments — signaling strong strategic prioritization. Notably, AI investment structures are dynamically shifting, necessitating foresight and flexibility in resource planning.

In the early stages, talent represents the largest cost. Enterprises compete for AI/ML engineers, data scientists, and AI product managers who can bridge technical expertise with business understanding. Talent-intensive innovation is critical when AI technologies are still nascent. Competitive compensation, career development pathways, and open innovation cultures are essential for attracting and retaining such talent.

As AI matures, cost structures tilt toward cloud computing, inference operations, and governance. Once deployed, AI systems require substantial compute resources, particularly for high-volume, real-time workloads. Model inference, data transmission, and infrastructure scalability become cost drivers. Simultaneously, AI governance—covering privacy, fairness, explainability, and regulatory compliance—emerges as a strategic imperative. Establishing AI ethics committees, audit frameworks, and governance platforms becomes essential to long-term scalability and risk mitigation.

Thus, enterprises must shift from a narrow R&D lens to a holistic investment model, balancing technical innovation with operational sustainability. Cloud cost optimization, model efficiency improvements (e.g., pruning, quantization), and robust data governance are no longer optional—they are competitive necessities.
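As one concrete instance of the model-efficiency lever mentioned above, dynamic quantization in PyTorch stores linear-layer weights as 8-bit integers, cutting inference memory and often latency without retraining. The toy model is an assumption; the quantization call is standard PyTorch.

```python
import torch
import torch.nn as nn

# Toy classifier standing in for a production model.
model = nn.Sequential(
    nn.Linear(768, 256),
    nn.ReLU(),
    nn.Linear(256, 4),
)
model.eval()

# Dynamic quantization: nn.Linear weights stored as int8, activations quantized on the fly.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

with torch.no_grad():
    x = torch.randn(1, 768)
    print(quantized(x))  # same interface, smaller weight footprint
```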

Strategic Recommendations

1. Scenario-Driven Co-Creation: The Core of AI Value Realization

AI’s business value lies in transforming core processes, not simply introducing new technologies. Enterprises should anchor AI initiatives in real business scenarios and foster cross-functional co-creation between business leaders and technologists.

Establish cross-departmental AI innovation teams comprising business owners, technical experts, and data scientists. These teams should identify high-impact use cases, redesign workflows, and iterate continuously. Begin with data-rich, high-friction areas where value can be validated quickly. Ensure scalability and reusability across similar processes to minimize redundant development and maximize asset value.

2. Culture and Talent Mechanisms: Keys to Active Adoption

Bridging the gap between AI availability and consistent use requires organizational commitment, employee empowerment, and cultural transformation.

Promote an AI-first mindset through leadership advocacy, internal storytelling, and grassroots experimentation. Align usage with performance incentives by incorporating AI adoption metrics into KPIs or OKRs. Invest in tiered AI literacy programs, tailored to roles and seniority, to build a baseline of AI fluency and confidence across the organization.

3. Cost Optimization and Sustainable Governance

As costs shift toward compute and compliance, enterprises must optimize infrastructure and fortify governance.

Implement granular cloud cost control strategies and improve model inference efficiency through hardware acceleration or architectural simplification. Develop a comprehensive AI governance framework encompassing data privacy, algorithmic fairness, model interpretability, and ethical accountability. Though initial investments may be substantial, they provide long-term protection against legal, reputational, and operational risks.

4. Data-Driven ROI and Strategic Iteration

Establish end-to-end AI performance and ROI monitoring systems. Track tool usage, workflow impact, and business outcomes (e.g., efficiency gains, customer satisfaction) to quantify value creation.

Design robust ROI models tailored to each use case — including direct and indirect costs and benefits. Use insights to refine investment priorities, sunset underperforming projects, and iterate AI strategy in alignment with evolving goals. Let data—not assumptions—guide AI evolution.
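A use-case ROI model does not need to be elaborate to be useful. The minimal sketch below (all figures invented) nets direct and indirect costs against measured benefits so that underperforming projects surface early:

```python
def use_case_roi(direct_costs: float, indirect_costs: float,
                 hours_saved: float, loaded_hourly_rate: float,
                 incremental_revenue: float = 0.0) -> float:
    """Return ROI as a ratio: (total benefit - total cost) / total cost."""
    total_cost = direct_costs + indirect_costs
    total_benefit = hours_saved * loaded_hourly_rate + incremental_revenue
    return (total_benefit - total_cost) / total_cost


# Invented example: a document-retrieval assistant.
roi = use_case_roi(
    direct_costs=120_000,      # licenses, inference, integration
    indirect_costs=40_000,     # training time, change management
    hours_saved=6_000,         # measured across the user base per year
    loaded_hourly_rate=55.0,
)
print(f"ROI: {roi:.1%}")  # about 106% in this invented scenario
```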

Conclusion

Enterprise AI adoption has entered deep waters. To capture long-term value, organizations must treat AI not as a tool, but as a strategic infrastructure, guided by scenario-centric design, cultural alignment, and governance excellence. Only then can they unlock AI’s productivity dividends and build a resilient, intelligent competitive advantage.

Related Topic

Enhancing Customer Engagement with Chatbot Service
HaxiTAG ESG Solution: The Data-Driven Approach to Corporate Sustainability
Simplifying ESG Reporting with HaxiTAG ESG Solutions
The Adoption of General Artificial Intelligence: Impacts, Best Practices, and Challenges
The Significance of HaxiTAG's Intelligent Knowledge System for Enterprises and ESG Practitioners: A Data-Driven Tool for Business Operations Analysis
HaxiTAG AI Solutions: Driving Enterprise Private Deployment Strategies
HaxiTAG EiKM: Transforming Enterprise Innovation and Collaboration Through Intelligent Knowledge Management
AI-Driven Content Planning and Creation Analysis
AI-Powered Decision-Making and Strategic Process Optimization for Business Owners: Innovative Applications and Best Practices
In-Depth Analysis of the Potential and Challenges of Enterprise Adoption of Generative AI (GenAI)

Friday, August 1, 2025

The Strategic Shift of Generative AI in the Enterprise: From Adoption Surge to Systemic Evolution

Bain & Company’s report, “Despite Barriers, the Adoption of Generative AI Reaches an All-Time High”, provides an authoritative and structured exploration of the strategic significance, systemic challenges, and capability-building imperatives of generative AI (GenAI) in enterprise services. It offers valuable insights for senior executives and technical leaders seeking to understand the business impact and organizational implications of GenAI deployment.

Generative AI at Scale: A Technological Leap Triggering Organizational Paradigm Shifts

According to Bain’s 2025 survey, 95% of U.S. enterprises have adopted generative AI, with production use cases increasing by 101% year-over-year. This leap signals not only technological maturity but a foundational shift in enterprise operating models—GenAI is no longer a peripheral innovation but a core driver reshaping workflows, customer engagement, and product development.

The IT function has emerged as the fastest adopter, integrating GenAI into modules such as code generation, knowledge retrieval, and system operations—demonstrating the technology’s natural alignment with knowledge-intensive tasks. Initially deployed to enhance operational efficiency and reduce costs, GenAI is now evolving from a productivity enhancer into a value creation engine as enterprises deepen its application.

Strategic Prioritization: Evolving Enterprise Mindsets and Readiness Gaps

Notably, the share of companies prioritizing AI as a strategic initiative has risen to 15% within a year, and 50% now have a defined implementation roadmap. This trend indicates a shift among leading firms from a narrow focus on deployment to building comprehensive AI governance frameworks—encompassing platform architecture, talent models, data assets, and process redesign.

However, the report also reveals a significant bifurcation: half of all companies still lack a clear strategy. This reflects an emerging “capability polarization” in the market. Front-runners are institutionalizing GenAI through standardized workflows, mature governance, and deep vendor partnerships, while others remain stuck in fragmented pilots without coherent organizational frameworks.

Realizing Value: A Reinforcing Feedback Loop of Performance and Confidence

Over 80% of reported use cases met or exceeded expectations, and nearly 60% of satisfied enterprises reported measurable business improvements—affirming the commercial viability of GenAI. These high-yield use cases—document generation, customer inquiry automation, internal search, reporting—share common traits: high knowledge structure, task repeatability, and stable context.

More importantly, this success has triggered a confidence flywheel: early wins → increased executive trust → expanded resource allocation → greater capabilities. Among organizations that have scaled GenAI, approximately 90% report target attainment or outperformance—highlighting the compounding marginal value of GenAI as it evolves from a tactical tool to a strategic platform.

Structural Challenges: Beyond Technical Hurdles to Organizational Complexity

Despite steep adoption curves, enterprises face three core, systemic constraints that must be addressed:

  1. Data Security and Governance: As GenAI embeds itself deeper into critical systems, issues such as compliance, access control, and context integrity become paramount. Late-stage adopters are particularly focused on data lifecycle integrity and output accountability—underscoring the growing sensitivity to AI-related risk externalities.

  2. Talent Gaps and Knowledge Asymmetries: 75% of companies report an inability to find internal expertise in critical functions. This is less about a shortage of AI engineers, and more about the lack of organizational infrastructure to integrate business users with AI systems—via interfaces, training, and process alignment.

  3. Vendor Fragmentation and Ecosystem Fragility: With rapid evolution in AI infrastructure and models, long-term stability remains elusive. Concerns about vendor quality and model maintainability are surging among advanced adopters—reflecting increased strategic dependence on reliable ecosystem partners.

Reconstructing the Investment Rhythm: From Exploration Budgets to Operational Expenditures

Enterprise GenAI investment is entering a phase of structural normalization. Since early 2024, average annual AI budgets have reached $10 million—up 102% year-over-year. More significantly, 60% of GenAI projects are now funded through standard operating budgets, signaling a shift from experimental spending to institutionalized resource allocation.

This transition reflects a change in organizational perception: GenAI is no longer a one-off innovation initiative, but a core pillar within digital architecture, talent strategy, and process transformation. Enterprises are integrating GenAI into AI governance hubs and scenario-driven microservice deployments, emphasizing long-term, scalable orchestration.

Strategic Insight: GenAI as a Competitive Operating System of the Future

The central insight from Bain’s research is clear: generative AI is not just about technical deployment—it demands a fundamental redesign of organizational capabilities and cognitive infrastructure. Companies that sustainably unlock value from GenAI exhibit four shared traits:

  • Clear prioritization of high-value GenAI scenarios across the enterprise;

  • A cross-functional AI operations hub to align data, processes, models, and personnel;

  • A layered AI talent architecture—including prompt engineers, data governance experts, and domain modelers;

  • Integration of GenAI into core governance systems such as budgeting, KPIs, compliance, ethics, and knowledge management.

In the coming years, enterprise competition will no longer hinge on whether GenAI is adopted, but on how effectively organizations rewire their business models, restructure internal systems, and build defensible, sustainable AI capabilities. GenAI will become a benchmark for digital maturity—and a decisive differentiator in asymmetric competition.

Conclusion

Bain’s research offers a mirror reflecting how deeply generative AI is transforming the enterprise landscape. In this era of complex technological and organizational convergence, companies must look beyond tools and models. Strategic vision, systemic governance, and human-AI symbiosis are essential to unleashing the full multiplier effect of GenAI. Only with such a holistic approach can organizations seize the opportunity to lead in the next wave of digital transformation—and shape the future of business itself.

AI Automation: A Strategic Pathway to Enterprise Intelligence in the Era of Task Reconfiguration

With the rapid advancement of generative AI and task-level automation, the impact of AI on the labor market has gone far beyond the simplistic notion of "job replacement." It has entered a deeper paradigm of task reconfiguration and value redistribution. This transformation not only reshapes job design but also profoundly reconstructs organizational structures, capability boundaries, and competitive strategies. For enterprises seeking intelligent transformation and enhanced service and competitiveness, understanding and proactively embracing this change is no longer optional—it is a strategic imperative.

The "Dual Pathways" of AI Automation: Structural Transformation of Jobs and Skills

AI automation is reshaping workforce structures along two main pathways:

  • Routine Automation (e.g., customer service responses, schedule planning, data entry): By replacing predictable, rule-based tasks, automation significantly reduces labor demand and improves operational efficiency. A clear outcome is the decline in job quantity and the rise in skill thresholds. For instance, British Telecom’s plan to cut 40% of its workforce and Amazon’s robot fleet surpassing its human workforce exemplify enterprises adjusting the human-machine ratio to meet cost and service response imperatives.

  • Complex Task Automation (e.g., roles involving analysis, judgment, or interaction): Automation decomposes knowledge-intensive tasks into standardized, modular components, expanding employment access while lowering average wages. Job roles like telephone operators or rideshare drivers are emblematic of this "commoditization of skills." Research by MIT reveals that a one standard deviation drop in task specialization correlates with an 18% wage decrease—even as employment in such roles doubles, illustrating the tension between scaling and value compression.

For enterprises, this necessitates a shift from role-centric to task-centric job design, and a comprehensive recalibration of workforce value assessment and incentive systems.

Task Reconfiguration as the Engine of Organizational Intelligence: Not Replacement, but Reinvention

When implementing AI automation, businesses must discard the narrow view of “human replacement” and adopt a systems approach to task reengineering. The core question is not who will be replaced, but rather:

  • Which tasks can be automated?

  • Which tasks require human oversight?

  • Which tasks demand collaborative human-AI execution?

By clearly classifying task types and redistributing responsibilities accordingly, enterprises can evolve into truly human-machine complementary organizations. This facilitates the emergence of a barbell-shaped workforce structure: on one end, highly skilled "super-individuals" with AI mastery and problem-solving capabilities; on the other, low-barrier task performers organized via platform-based models (e.g., AI operators, data labelers, model validators).

Strategic Recommendations:

  • Accelerate automation of procedural roles to enhance service responsiveness and cost control.

  • Reconstruct complex roles through AI-augmented collaboration, freeing up human creativity and judgment.

  • Shift organizational design upstream, reshaping job archetypes and career development around “task reengineering + capability migration.”

Redistribution of Competitive Advantage: Platform and Infrastructure Players Reshape the Value Chain

AI automation is not just restructuring internal operations—it is redefining the industry value chain.

  • Platform enterprises (e.g., recruitment or remote service platforms) have inherent advantages in standardizing tasks and matching supply with demand, giving them control over resource allocation.

  • AI infrastructure providers (e.g., model developers, compute platforms) build strategic moats in algorithms, data, and ecosystems, exerting capability lock-in effects downstream.

To remain competitive, enterprises must actively embed themselves within the AI ecosystem, establishing an integrated “technology–business–talent” feedback loop. The future of competition lies not between individual companies, but among ecosystems.

Societal and Ethical Considerations: A New Dimension of Corporate Responsibility

AI automation exacerbates skill stratification and income inequality, particularly in low-skill labor markets, where “new structural unemployment” is emerging. Enterprises that benefit from AI efficiency gains must also fulfill corresponding responsibilities:

  • Support workforce skill transition through internal learning platforms and dual-capability development (“AI literacy + domain expertise”).

  • Participate in public governance by collaborating with governments and educational institutions to promote lifelong learning and career retraining systems.

  • Advance AI ethics governance to ensure fairness, transparency, and accountability in deployment, mitigating hidden risks such as algorithmic bias and data discrimination.

AI Is Not Destiny, but a Matter of Strategic Choice

As one industry mentor aptly stated, “AI is not fate—it is choice.” How a company defines which tasks are delegated to AI essentially determines its service model, organizational form, and value positioning. The future will not be defined by “AI replacing humans,” but rather by “humans redefining themselves through AI.”

Only by proactively adapting and continuously evolving can enterprises secure their strategic advantage in this era of intelligent reconfiguration.

Related Topic

Generative AI: Leading the Disruptive Force of the Future
HaxiTAG EiKM: The Revolutionary Platform for Enterprise Intelligent Knowledge Management and Search
From Technology to Value: The Innovative Journey of HaxiTAG Studio AI
HaxiTAG: Enhancing Enterprise Productivity with Intelligent Knowledge Management Solutions
HaxiTAG Studio: AI-Driven Future Prediction Tool
A Case Study:Innovation and Optimization of AI in Training Workflows
HaxiTAG Studio: The Intelligent Solution Revolutionizing Enterprise Automation
Exploring How People Use Generative AI and Its Applications
HaxiTAG Studio: Empowering SMEs with Industry-Specific AI Solutions
Maximizing Productivity and Insight with HaxiTAG EIKM System

Friday, July 18, 2025

OpenAI’s Seven Key Lessons and Case Studies in Enterprise AI Adoption

AI is Transforming How Enterprises Work

OpenAI recently released a comprehensive guide to enterprise AI deployment (openai-ai-in-the-enterprise.pdf), based on firsthand experience from its research, application, and deployment teams. The guide identifies three core areas where AI is already delivering substantial and measurable improvements for organizations:

  • Enhancing Employee Performance: Empowering employees to deliver higher-quality output in less time

  • Automating Routine Operations: Freeing employees from repetitive tasks so they can focus on higher-value work

  • Enabling Product Innovation: Delivering more relevant and responsive customer experiences

However, AI implementation differs fundamentally from traditional software development or cloud deployment. The most successful organizations treat AI as a new paradigm, adopting an experimental and iterative approach that accelerates value creation and drives faster user and stakeholder adoption.

OpenAI’s integrated approach — combining foundational research, applied model development, and real-world deployment — follows a rapid iteration cycle. This means frequent updates, real-time feedback collection, and continuous improvements to performance and safety.

Seven Key Lessons for Enterprise AI Deployment

Lesson 1: Start with Rigorous Evaluation
Case: How Morgan Stanley Ensures Quality and Safety through Iteration

As a global leader in financial services, Morgan Stanley places relationships at the core of its business. Faced with the challenge of introducing AI into highly personalized and sensitive workflows, the company began with rigorous evaluations (evals) for every proposed use case.

Evaluation is a structured process that assesses model performance against benchmarks within specific applications. It also supports continuous process improvement, reinforced with expert feedback at each step.

In its early stages, Morgan Stanley focused on improving the efficiency and effectiveness of its financial advisors. The hypothesis was simple: if advisors could retrieve information faster and reduce time spent on repetitive tasks, they could provide more and better insights to clients.

Three initial evaluation tracks were launched:

  • Translation Accuracy: Measuring the quality of AI-generated translations

  • Summarization: Evaluating AI’s ability to condense information using metrics for accuracy, relevance, and coherence

  • Human Comparison: Comparing AI outputs to expert responses, scored on accuracy and relevance
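The three tracks above reduce to the same mechanic: run each case, score the output against a reference, and track the aggregate so regressions between model or prompt versions are visible. The harness below is an illustration, not Morgan Stanley's framework; the scoring rule and cases are assumptions.

```python
def score_against_expert(model_answer: str, expert_answer: str) -> float:
    """Crude relevance score: fraction of expert keywords present in the model answer."""
    expert_terms = set(expert_answer.lower().split())
    model_terms = set(model_answer.lower().split())
    return len(expert_terms & model_terms) / max(len(expert_terms), 1)


# Hypothetical eval set: (input, expert reference) pairs; model_fn is whatever is under test.
EVAL_CASES = [
    ("Summarize the Q2 outlook memo.", "revenue growth slowed but margins improved on cost control"),
    ("Key risks in the housing note?", "rate sensitivity inventory overhang regional price declines"),
]


def run_eval(model_fn, threshold: float = 0.6) -> float:
    """Return the pass rate of the model function over the eval set."""
    passed = 0
    for prompt, expert in EVAL_CASES:
        score = score_against_expert(model_fn(prompt), expert)
        passed += score >= threshold
    return passed / len(EVAL_CASES)


# Example with a stub model; in practice model_fn would call the deployed assistant.
print(run_eval(lambda p: "margins improved as revenue growth slowed due to cost control"))
```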

Results: Today, 98% of Morgan Stanley advisors use OpenAI tools daily. Document access has increased from 20% to 80%, and search times have dropped dramatically. Advisors now spend more time on client relationships, supported by task automation and faster insights. Feedback has been overwhelmingly positive — tasks that once took days now take hours.

Lesson 2: Embed AI into Products
Case: How Indeed Humanized Job Matching

AI’s strength lies in handling vast datasets from multiple sources, enabling companies to automate repetitive work while making user experiences more relevant and personalized.

Indeed, the world’s largest job site, now uses GPT-4o mini to redefine job matching.

The “Why” Factor: Recommending good-fit jobs is just the beginning — it’s equally important to explain why a particular role is suggested.

By leveraging GPT-4o mini’s analytical and language capabilities, Indeed crafts natural-language explanations in its messages and emails to job seekers. Its popular "invite to apply" feature also explains how a candidate’s background makes them a great fit.

When tested against the prior matching engine, the GPT-powered version showed:

  • A 20% increase in job application starts

  • A 13% improvement in downstream hiring success

Given that Indeed sends over 20 million messages monthly and serves 350 million visits, these improvements translate to major business impact.

Scaling posed a challenge due to token usage. To improve efficiency, OpenAI and Indeed fine-tuned a smaller model that achieved similar results with 60% fewer tokens.
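The fine-tune itself runs on OpenAI's platform, but most of the enterprise-side work is data preparation. OpenAI fine-tuning jobs accept chat-formatted JSONL, one example per line; the job-matching content below is an invented stand-in for Indeed's data.

```python
import json

# Invented training examples: pair a candidate/job context with the explanation the model should produce.
examples = [
    {
        "messages": [
            {"role": "system", "content": "Explain briefly why this job matches the candidate."},
            {"role": "user", "content": "Candidate: 4 yrs retail ops, team lead. Job: Store Operations Supervisor."},
            {"role": "assistant", "content": "Your four years in retail operations and team-lead experience map directly to this supervisor role."},
        ]
    },
]

with open("job_match_finetune.jsonl", "w", encoding="utf-8") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# The resulting file is uploaded and a fine-tuning job is created on a smaller base model,
# via the OpenAI dashboard or API, to reproduce the explanation style at lower token cost.
```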

Helping candidates understand why they’re a fit for a role is a deeply human experience. With AI, Indeed is enabling more people to find the right job faster — a win for everyone.

Lesson 3: Start Early, Invest Ahead of Time
Case: Klarna’s Compounding Returns from AI Adoption

AI solutions rarely work out-of-the-box. Use cases grow in complexity and impact through iteration. Early adoption helps organizations realize compounding gains.

Klarna, a global payments and shopping platform, launched a new AI assistant to streamline customer service. Within months, the assistant handled two-thirds of all service chats — doing the work of hundreds of agents and reducing average resolution time from 11 to 2 minutes. It’s expected to drive $40 million in profit improvement, with customer satisfaction scores on par with human agents.

This wasn’t an overnight success. Klarna achieved these results through constant testing and iteration.

Today, 90% of Klarna’s employees use AI in their daily work, enabling faster internal launches and continuous customer experience improvements. By investing early and fostering broad adoption, Klarna is reaping ongoing returns across the organization.

Lesson 4: Customize and Fine-Tune Models
Case: How Lowe’s Improved Product Search

The most successful enterprises using AI are those that invest in customizing and fine-tuning models to fit their data and goals. OpenAI has invested heavily in making model customization easier — through both self-service tools and enterprise-grade support.

OpenAI partnered with Lowe’s, a Fortune 50 home improvement retailer, to improve e-commerce search accuracy and relevance. With thousands of suppliers, Lowe’s deals with inconsistent or incomplete product data.

Effective product search requires both accurate descriptions and an understanding of how shoppers search — which can vary by category. This is where fine-tuning makes a difference.

By fine-tuning OpenAI models, Lowe’s achieved:

  • A 20% improvement in labeling accuracy

  • A 60% increase in error detection

Fine-tuning allows organizations to train models on proprietary data such as product catalogs or internal FAQs, leading to:

  • Higher accuracy and relevance

  • Better understanding of domain-specific terms and user behavior

  • Consistent tone and voice, essential for brand experience or legal formatting

  • Faster output with less manual review

Lesson 5: Empower Domain Experts
Case: BBVA’s Expert-Led AI Adoption

Employees often know their problems best — making them ideal candidates to lead AI-driven solutions. Empowering domain experts can be more impactful than building generic tools.

BBVA, a global banking leader with over 125,000 employees, launched ChatGPT Enterprise across its operations. Employees were encouraged to explore their own use cases, supported by legal, compliance, and IT security teams to ensure responsible use.

“Traditionally, prototyping in companies like ours required engineering resources,” said Elena Alfaro, Global Head of AI Adoption at BBVA. “With custom GPTs, anyone can build tools to solve unique problems — getting started is easy.”

In just five months, BBVA staff created over 2,900 custom GPTs, leading to significant time savings and cross-departmental impact:

  • Credit risk teams: Faster, more accurate creditworthiness assessments

  • Legal teams: Handling 40,000+ annual policy and compliance queries

  • Customer service teams: Automating sentiment analysis of NPS surveys

The initiative is now expanding into marketing, risk, operations, and more — because AI was placed in the hands of people who know how to use it.

Lesson 6: Remove Developer Bottlenecks
Case: Mercado Libre Accelerates AI Development

In many organizations, developer resources are the primary bottleneck. When engineering teams are overwhelmed, innovation slows, and ideas remain stuck in backlogs.

Mercado Libre, Latin America's largest e-commerce and fintech company, partnered with OpenAI to build Verdi, a developer platform powered by GPT-4o and GPT-4o mini.

Verdi integrates language models, Python, and APIs into a scalable, unified platform where developers use natural language as the primary interface. This empowers 17,000 developers to build consistently high-quality AI applications quickly — without deep code dives. Guardrails and routing logic are built-in.
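Mercado Libre has not published Verdi's internals. As a hedged sketch of the two ideas named here, routing and guardrails, a platform layer can pick a model tier per request and check outputs before returning them to the calling application; the routing rule, blocklist, and model names below are assumptions.

```python
from openai import OpenAI

client = OpenAI()

BLOCKED_TERMS = {"credit card number", "password"}  # illustrative guardrail list


def guardrail_ok(text: str) -> bool:
    """Reject outputs that contain obviously sensitive phrases."""
    lowered = text.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)


def route_model(task: str) -> str:
    """Cheap heuristic router: short, routine tasks go to the smaller model."""
    return "gpt-4o-mini" if len(task) < 400 else "gpt-4o"


def run_task(task: str) -> str:
    """Run a natural-language task through routing, the model call, and the output guardrail."""
    response = client.chat.completions.create(
        model=route_model(task),
        messages=[{"role": "user", "content": task}],
    )
    output = response.choices[0].message.content
    if not guardrail_ok(output):
        return "[blocked by guardrail - routed to human review]"
    return output
```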

Key results include:

  • 100x increase in cataloged products via automated listings using GPT-4o mini Vision

  • 99% accuracy in fraud detection through daily evaluation of millions of product listings

  • Multilingual product descriptions adapted to regional dialects

  • Automated review summarization to help customers understand feedback at a glance

  • Personalized notifications that drive engagement and boost recommendations

Next up: using Verdi to enhance logistics, reduce delivery delays, and tackle more high-impact problems across the enterprise.

Lesson 7: Set Bold Automation Goals
Case: How OpenAI Automates Its Own Work

At OpenAI, we work alongside AI every day — constantly discovering new ways to automate our own tasks.

One challenge was our support team’s workflow: navigating systems, understanding context, crafting responses, and executing actions — all manually.

We built an internal automation platform that layers on top of existing tools, streamlining repetitive tasks and accelerating insight-to-action workflows.

First use case: Working on top of Gmail to compose responses and trigger actions. The platform pulls in relevant customer data and support knowledge, then embeds results into emails or takes actions like opening support tickets.

By integrating AI into daily workflows, the support team became more efficient, responsive, and customer-centric. The platform now handles hundreds of thousands of tasks per month — freeing teams to focus on higher-impact work.

It all began because we chose to set bold automation goals, not settle for inefficient processes.

Key Takeaways

As these OpenAI case studies show, every organization has untapped potential to use AI for better outcomes. Use cases may vary by industry, but the principles remain universal.

The Common Thread: AI deployment thrives on open, experimental thinking — grounded in rigorous evaluation and strong safety measures. The best-performing companies don’t rush to inject AI everywhere. Instead, they align on high-ROI, low-friction use cases, learn through iteration, and expand based on that learning.

The Result: Faster and more accurate workflows, more personalized customer experiences, and more meaningful work — as people focus on what humans do best.

We’re now seeing companies automate increasingly complex workflows — often with AI agents, tools, and resources working in concert to deliver impact at scale.

Related topic:

Exploring HaxiTAG Studio: The Future of Enterprise Intelligent Transformation
Leveraging HaxiTAG AI for ESG Reporting and Sustainable Development
Revolutionizing Market Research with HaxiTAG AI
How HaxiTAG AI Enhances Enterprise Intelligent Knowledge Management
The Application of HaxiTAG AI in Intelligent Data Analysis
The Application and Prospects of HaxiTAG AI Solutions in Digital Asset Compliance Management
Report on Public Relations Framework and Content Marketing Strategies

Monday, June 16, 2025

Case Study: How Walmart is Leading the AI Transformation in Retail

As one of the world's largest retailers, Walmart is advancing the adoption of artificial intelligence (AI) and generative AI (GenAI) at an unprecedented pace, aiming to revolutionize every facet of its operations—from customer experience to supply chain management and employee services. This retail titan is not only optimizing store operations for efficiency but is also rapidly emerging as a “technology-powered retailer,” setting new benchmarks for the commercial application of AI.

From Traditional Retail to AI-Driven Transformation

Walmart’s AI journey begins with a fundamental redefinition of the customer experience. In the past, shoppers had to locate products in sprawling stores, queue at checkout counters, and navigate after-sales service independently. Today, with the help of the AI assistant Sparky, customers can interact using voice, images, or text to receive personalized recommendations, price comparisons, and review summaries—and even reorder items with a single click.

Behind the scenes, store associates use the Ask Sam voice assistant to quickly locate products, check stock levels, and retrieve promotion details—drastically reducing reliance on manual searches and personal experience. Walmart reports that this tool has significantly enhanced frontline productivity and accelerated onboarding for new employees.

AI Embedded Across the Enterprise

Beyond customer-facing applications, Walmart is deeply embedding AI across internal operations. The intelligent assistant Wally, designed for merchandisers and purchasing teams, automates sales analysis and inventory forecasting, empowering more scientific replenishment and pricing decisions.

In supply chain management, AI is used to optimize delivery routes, predict overstock risks, reduce food waste, and even enable drone-based logistics. According to Walmart, more than 150,000 drone deliveries have already been completed across various cities, significantly enhancing last-mile delivery capabilities.

Key Implementations

| Name | Type | Function Overview |
|---|---|---|
| Sparky | Customer Assistant | GenAI-powered recommendations, repurchase alerts, review summarization, multimodal input |
| Wally | Merchant Assistant | Product analytics, inventory forecasting, category management |
| Ask Sam | Employee Assistant | Voice-based product search, price checks, in-store navigation |
| GenAI Search | Customer Tool | Semantic search and review summarization for improved conversion |
| AI Chatbot | Customer Support | Handles standardized issues such as order tracking and returns |
| AI Interview Coach | HR Tool | Enhances fairness and efficiency in recruitment |
| Loss Prevention System | Security Tech | RFID and AI-enabled camera surveillance for anomaly detection |
| Drone Delivery System | Logistics Innovation | Over 150,000 deliveries completed; expansion ongoing |

From Models to Real-World Applications: Walmart’s AI Strategy

Walmart’s AI strategy is anchored by four core pillars:

  1. Domain-Specific Large Language Models (LLMs): Walmart has developed its own retail-specific LLM, Wallaby, to enhance product understanding and user behavior prediction.

  2. Agentic AI Architecture: Autonomous agents automate tasks such as customer inquiries, order tracking, and inventory validation.

  3. Global Scalability: From inception, Walmart's AI capabilities are designed for global deployment, enabling “train once, deploy everywhere.”

  4. Data-Driven Personalization: Leveraging behavioral and transactional data from hundreds of millions of users, Walmart delivers deeply personalized services at scale.

Challenges and Ethical Considerations

Despite notable success, Walmart faces critical challenges in its AI rollout:

  • Data Accuracy and Bias Mitigation: Preventing algorithmic bias and distorted predictions, especially in sensitive areas like recruitment and pricing.

  • User Adoption: Encouraging customers and employees to trust and embrace AI as a routine decision-making tool.

  • Risks of Over-Automation: While Agentic AI boosts efficiency, excessive automation risks diminishing human oversight, necessitating clear human-AI collaboration boundaries.

  • Emerging Competitive Threats: AI shopping assistants like OpenAI’s “Operator” could bypass traditional retail channels, altering customer purchase pathways.

The Future: Entering the Era of AI Collaboration

Looking ahead, Walmart plans to launch personalized AI shopping agents that can be trained by users to understand their preferences and automate replenishment orders. Simultaneously, the company is exploring agent-to-agent retail protocols, enabling machine-to-machine negotiation and transaction execution. This form of interaction could fundamentally reshape supply chains and marketing strategies.

Marketing is also evolving—from traditional visual merchandising to data-driven, precision exposure strategies. The future of retail may no longer rely on the allure of in-store lighting and advertising, but on the AI-powered recommendation chains displayed on customers’ screens.

Walmart’s AI transformation exhibits three critical characteristics that serve as reference for other industries:

  • End-to-End Integration of AI (Front-to-Back AI)

  • Deep Fine-Tuning of Foundation Models with Retail-Specific Knowledge

  • Proactive Shaping of an AI-Native Retail Ecosystem

This case study provides a tangible, systematic reference for enterprises in retail, manufacturing, logistics, and beyond, offering practical insights into deploying GenAI, constructing intelligent agents, and undertaking organizational transformation.

Walmart also plans to roll out assistants like Sparky to Canada and Mexico, testing the cross-regional adaptability of its AI capabilities in preparation for global expansion.

While enterprise GenAI applications represent a forward-looking investment, 92% of effective use cases still emerge from ground-level operations. This underscores the need for flexible strategies that align top-down design with bottom-up innovation. Notably, the case lacks a detailed discussion on data governance frameworks, which may impact implementation fidelity. A dynamic assessment mechanism is recommended, aligning technological maturity with organizational readiness through a structured matrix—ensuring a clear and measurable path to value realization.

Related Topic

Unlocking Enterprise Success: The Trifecta of Knowledge, Public Opinion, and Intelligence
From Technology to Value: The Innovative Journey of HaxiTAG Studio AI
Unveiling the Thrilling World of ESG Gaming: HaxiTAG's Journey Through Sustainable Adventures
Mastering Market Entry: A Comprehensive Guide to Understanding and Navigating New Business Landscapes in Global Markets
HaxiTAG's LLMs and GenAI Industry Applications - Trusted AI Solutions
Automating Social Media Management: How AI Enhances Social Media Effectiveness for Small Businesses
Challenges and Opportunities of Generative AI in Handling Unstructured Data
HaxiTAG: Enhancing Enterprise Productivity with Intelligent Knowledge Management Solutions