
Monday, October 6, 2025

AI-Native GTM Teams Run 38% Leaner: The New Normal?

Companies under $25M ARR with high AI adoption are running with just 13 GTM FTEs versus 21 for their traditional SaaS peers—a 38% reduction in headcount while maintaining competitive growth rates.

But here’s what’s really interesting: This efficiency advantage seems to fade as companies get larger. At least right now.

This suggests there’s a critical window for AI-native advantages, and founders who don’t embrace these approaches early may find themselves permanently disadvantaged against competitors who do.

The Numbers Don’t Lie: AI Creates Real Leverage

GTM Headcount by AI Adoption (<$25M ARR companies):
  • Total GTM FTEs: 13 (High AI) vs 21 (Medium/Low AI)
  • Post-Sales allocation: 25% vs 33% (8-point difference)
  • Revenue Operations: 17% vs 12% (more AI-focused RevOps)
What This Means in Practice: A typical $15M ARR company with high AI adoption might run with:
  • 6 sales reps (vs 8 for low adopters)
  • 3 post-sales team members (vs 7 for low adopters)
  • 2 marketing team members (vs 3 for low adopters)
  • 2 revenue operations specialists (vs 3 for low adopters)
The most dramatic difference is in post-sales, where high AI adopters allocate 8 percentage points less of their headcount, suggesting that AI is automating significant portions of customer onboarding, support, and success functions.

What AI is Actually Automating

Based on the data and industry observations, here’s what’s likely happening behind these leaner structures:

Customer Onboarding & Implementation

AI-powered onboarding sequences that guide customers through setup
Automated technical implementation for straightforward use cases
Smart documentation that adapts based on customer configuration
Predictive issue resolution that prevents support tickets before they happen

Customer Success & Support

Automated health scoring that identifies at-risk accounts without manual monitoring
Proactive outreach triggers based on usage patterns and engagement
Self-service troubleshooting powered by AI knowledge bases
Automated renewal processes for straightforward accounts
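As an illustration of the first item, automated health scoring can be as simple as a weighted usage score plus an at-risk filter. The sketch below is hypothetical: the account fields, weights, and thresholds are invented for illustration and are not drawn from any particular vendor's product.

```python
from dataclasses import dataclass

@dataclass
class Account:
    name: str
    logins_last_30d: int
    feature_adoption: float      # fraction of key features in use, 0..1
    open_tickets: int
    days_to_renewal: int

def health_score(a: Account) -> float:
    """Weighted score in [0, 1]; the weights are illustrative, not tuned."""
    usage = min(a.logins_last_30d / 20, 1.0)     # cap at roughly daily use
    score = 0.5 * usage + 0.4 * a.feature_adoption
    score -= 0.05 * a.open_tickets               # open tickets drag the score
    return max(0.0, min(1.0, score))

def at_risk(accounts, threshold=0.4, renewal_window=90):
    """Low-scoring accounts renewing soon get flagged for proactive outreach."""
    return [a.name for a in accounts
            if health_score(a) < threshold and a.days_to_renewal <= renewal_window]

accounts = [
    Account("Acme", logins_last_30d=25, feature_adoption=0.8,
            open_tickets=1, days_to_renewal=60),
    Account("Globex", logins_last_30d=2, feature_adoption=0.2,
            open_tickets=4, days_to_renewal=45),
]
print(at_risk(accounts))   # only the low-usage account surfaces
```

The point is that a rule this simple already replaces the manual account reviews a CSM would otherwise run weekly; production systems refine the weights with a trained model.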

Sales Operations

Intelligent lead scoring that reduces manual qualification
Automated proposal generation customized for specific use cases
Real-time deal coaching that helps reps close without manager intervention
Dynamic pricing optimization based on prospect characteristics
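Intelligent lead scoring in its simplest form is a weighted rules table; production systems typically swap the rules for a trained model. The fields, point values, and 60-point qualification threshold below are illustrative assumptions.

```python
def score_lead(lead: dict) -> int:
    """Rule-based scoring sketch; every weight here is an illustrative guess."""
    score = 0
    if lead.get("company_size", 0) >= 100:           # firmographic fit
        score += 30
    if lead.get("title", "").lower() in {"vp", "director", "cto", "ceo"}:
        score += 25                                  # buyer-level title
    score += min(lead.get("pages_viewed", 0) * 5, 25)  # engagement, capped
    if lead.get("requested_demo"):
        score += 20                                  # explicit hand-raise
    return score

def qualified(leads, threshold=60):
    """Leads above threshold skip manual qualification and route to a rep."""
    return [l["email"] for l in leads if score_lead(l) >= threshold]

leads = [
    {"email": "a@big.co", "company_size": 500, "title": "VP",
     "pages_viewed": 6, "requested_demo": True},
    {"email": "b@tiny.io", "company_size": 5, "pages_viewed": 2},
]
print(qualified(leads))
```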

Marketing Operations

Automated content generation for campaigns, emails, and social
Dynamic personalization at scale without manual segmentation
Automated lead nurturing sequences that adapt based on engagement

The Efficiency vs Effectiveness Balance

The critical insight here isn’t just that AI enables smaller teams—it’s that smaller, AI-augmented teams can be more effective than larger traditional teams.
Why This Works:
  1. Reduced coordination overhead: Fewer people means less time spent in meetings and handoffs
  2. Higher-value focus: Team members spend time on strategic work rather than routine tasks
  3. Faster decision-making: Smaller teams can pivot and adapt more quickly
  4. Better talent density: Budget saved on headcount can be invested in higher-quality hires
The Quality Question: Some skeptics might argue that leaner teams provide worse customer experience. But the data suggests otherwise—companies with high AI adoption actually show lower late renewal rates (23% vs 25%) and higher quota attainment (61% vs 56%).

The $50M+ ARR Reality Check

Here’s where the story gets interesting: The efficiency advantages don’t automatically scale.
Looking at larger companies ($50M+ ARR), the headcount differences between high and low AI adopters become much smaller:
  • $50M-$100M ARR companies:
    • High AI adoption: 54 GTM FTEs
    • Low AI adoption: 68 GTM FTEs (26% difference, not 38%)
  • $100M-$250M ARR companies:
    • High AI adoption: 150 GTM FTEs
    • Low AI adoption: 134 GTM FTEs (high adopters actually carry more headcount here)

Why Scaling Changes the Game:

  1. Organizational complexity: Larger teams require more coordination regardless of AI tools
  2. Customer complexity: Enterprise deals often require human relationship management
  3. Process complexity: More sophisticated sales processes may still need human oversight
  4. Change management: Larger organizations are slower to adopt and optimize AI workflows

Wednesday, October 1, 2025

Builder’s Guide for the Generative AI Era: Technical Playbooks and Industry Trends

A Deep Dive into the 2025 State of AI Report

As generative AI moves from the lab into deep industry deployment, the key challenge facing every tech enterprise is no longer technical feasibility, but how to translate AI’s potential into tangible product value. The 2025 State of AI Report, published by ICONIQ Capital, surveys over 300 software executives and introduces a Builder’s Playbook for the Generative AI Era, offering a full-cycle blueprint from planning to production. The report not only maps the current technological landscape but also pinpoints the critical vectors of evolution, providing actionable frameworks for builders navigating the AI frontier.

The Technology Stack Landscape: Infrastructure Blueprint for Generative AI

The deployment of generative AI hinges on a robust stack of tools. Just as constructing a house requires a full set of materials, building AI products requires tools spanning training, development, inference, and monitoring. While the current ecosystem has stabilized to some extent, it remains in rapid flux.

In model training and fine-tuning, PyTorch and TensorFlow dominate, jointly commanding over 50% market share, due to their rich ecosystems and community momentum. AWS SageMaker and OpenAI’s fine-tuning services follow, appealing to teams seeking low-code, out-of-the-box solutions. Hugging Face and Databricks Mosaic are gaining traction rapidly—the former known for its open model hub and user-friendly tuning utilities, the latter for integrating model workflows within data lake architectures—signaling a new wave of open-source and cloud-native convergence.

In application development, LangChain and Hugging Face lead the pack, powering applications such as chatbots and document intelligence, with a combined penetration exceeding 60%. Security reinforcement has become critical: 30% of companies now employ tools like Guardrails to constrain model output and filter sensitive content. Meanwhile, high-abstraction tools like Vercel AI SDK are lowering the entry barrier for developers, enabling fast prototyping without deep understanding of model internals.

For monitoring and observability, the industry is transitioning from legacy APMs (e.g., Datadog, New Relic) to AI-native platforms. While half still rely on traditional tools, newer solutions like LangSmith and Weights & Biases—each with ~17% adoption—offer better support for tracking prompt-output mappings and behavioral drift. However, 10% of respondents remain unaware of what monitoring stack is in use, reflecting gaps that may create downstream risk.

Inference optimization shows a heavy reliance on NVIDIA—over 60% use TensorRT with Triton to boost throughput and reduce GPU cost. Among non-NVIDIA solutions, ONNX Runtime leads (18%), offering cross-platform flexibility. Still, 17% of firms lack any inference optimization, risking latency and cost issues under load.

In model hosting and vector databases, zero-deployment APIs from foundation model vendors are the dominant hosting choice, followed by AWS Bedrock and Google Vertex for their multi-cloud advantages. In vector databases, Elastic and Pinecone lead on search maturity, while Redis and ClickHouse address needs for real-time and cost-sensitive applications.

Model Strategy: A Gradient from API Dependence to Customization

Choosing the right model and usage approach is central to product success. The report identifies a clear gradient of model strategies, ranging from API usage to fine-tuning and full in-house model development.

Third-party APIs remain the norm: 80% of companies use external APIs (e.g., OpenAI, Anthropic), far surpassing those doing fine-tuning (61%) or developing models in-house (32%). For most, APIs offer the fastest way to test ideas with minimal investment—ideal for early-stage exploration. However, high-growth companies show bolder strategies: 77% fine-tune models, and 54% build their own, significantly above the average. As products scale, generic models hit their accuracy ceilings, driving demand for domain-specific customization and IP-based differentiation.

RAG (Retrieval-Augmented Generation) and fine-tuning are the most widely adopted techniques (each ~67%). RAG boosts factual accuracy by injecting external knowledge—critical in legal or medical contexts—while fine-tuning adjusts models to domain-specific language and logic using minimal data. Only 31% conduct full pretraining, as it remains prohibitively expensive and typically reserved for hyperscalers.
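The mechanics of RAG can be shown with a toy retriever: rank documents by overlap with the query, then inject the top hits into the prompt. Real systems embed both sides and search a vector database; the keyword ranking and sample contract clauses here are stand-ins for that machinery.

```python
import re

def tokens(text: str) -> set[str]:
    return set(re.findall(r"\w+", text.lower()))

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Toy retriever: rank documents by word overlap with the query."""
    q = tokens(query)
    return sorted(docs, key=lambda d: len(q & tokens(d)), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Inject retrieved passages so the model answers from them, not memory."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return (f"Answer using only the context below.\n"
            f"Context:\n{context}\n"
            f"Question: {query}")

docs = [
    "The notice period for termination is 30 days.",
    "Payment terms are net 45 from invoice date.",
    "Either party may terminate for material breach.",
]
prompt = build_prompt("What is the notice period for termination?", docs)
print(prompt)
```

Grounding the answer in retrieved clauses is exactly why RAG helps in the legal and medical contexts mentioned above: the model is constrained to cite knowledge that exists, rather than improvising.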

Infrastructure choices reflect a preference for cloud-native: 68% run fully in the cloud, 64% rely on external APIs, only 23% use hybrid deployments, and a mere 8% run fully on-prem. This points to a cost-sensitive model where renting compute outpaces building in-house capacity.

Model selection criteria diverge by use case. For external-facing products, accuracy (77%) is paramount, followed by cost (57%) and tunability (41%). For internal tools, cost (72%) leads, followed by privacy and compliance. This dual standard shows that AI is a stickier value proposition for external engagement, and an efficiency lever internally.

Implementation Challenges: From Technical Hurdles to Business Proof

Getting from “0 to 1” is relatively straightforward—going from “1 to 100” is where most struggle. The report outlines three primary obstacles:

  1. Hallucination: The top issue. When uncertain, models fabricate plausible but incorrect outputs—unacceptable in sensitive domains like contracts or diagnostics. RAG can mitigate but not fully solve this.

  2. Explainability and trust: The “black-box” nature of AI undermines user confidence, especially in domains like finance or autonomous driving where the rationale often matters more than the output itself.

  3. ROI justification: AI investment is ongoing (compute, talent, data), but returns are often indirect (e.g., productivity gains). Only 55% of companies can currently track ROI—highlighting a major decision-making bottleneck.

Monitoring maturity scales with product stage: over 75% of GA or scaling-stage products employ advanced or automated monitoring (e.g., drift detection, feedback loops, auto-retraining). In contrast, many pre-launch products rely on minimal or no monitoring, risking failure at scale.
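Drift detection, one of the advanced monitoring practices mentioned above, reduces to a minimal sketch: compare a current window of quality scores against a baseline and alert on a large shift. The z-score rule and sample scores below are illustrative assumptions; production monitors track richer signals (embedding distance, refusal rate, latency).

```python
from statistics import mean, stdev

def drift_alert(baseline, current, z_threshold=3.0):
    """Alert when the current mean moves more than z_threshold baseline
    standard deviations away from the baseline mean."""
    mu, sigma = mean(baseline), stdev(baseline)
    z = abs(mean(current) - mu) / (sigma or 1e-9)   # guard zero variance
    return z > z_threshold

baseline = [0.82, 0.80, 0.85, 0.79, 0.83]         # e.g. daily answer-quality scores
print(drift_alert(baseline, [0.41, 0.45, 0.40]))   # quality collapsed: drift
print(drift_alert(baseline, [0.81, 0.84, 0.80]))   # within normal variation
```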

Agentic Workflows: The Rise of Automation-First Systems

As discrete AI capabilities mature, focus is shifting toward end-to-end task automation—enter the age of Agentic Workflows. AI agents autonomously interpret user intent, decompose tasks, and orchestrate tool usage (e.g., fetching data, writing reports, sending emails), solving the classic problem of “data-rich, insight-poor” operations.
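The interpret-decompose-orchestrate loop described above can be sketched as a plan of tool calls executed in sequence. In a real agent an LLM produces the plan from the user's intent at runtime; here the plan and the stub tools (fetch data, write report, send email) are hard-coded so the sketch stays self-contained.

```python
def fetch_sales_data(region):
    """Stub tool; a real deployment would query a warehouse or API."""
    return {"region": region, "revenue": 1_200_000}

def write_report(data):
    return f"Sales report for {data['region']}: revenue ${data['revenue']:,}"

def send_email(to, body):
    """Stub tool; returns a trace instead of actually sending."""
    return f"sent to {to}: {body}"

def plan_steps(intent):
    """An LLM would decompose the intent into this plan at runtime;
    it is hard-coded here to keep the sketch self-contained."""
    return [
        lambda _: fetch_sales_data("EMEA"),
        lambda data: write_report(data),
        lambda report: send_email("cfo@example.com", report),
    ]

def run_agent(intent):
    result = None
    for step in plan_steps(intent):   # execute-and-chain loop
        result = step(result)
    return result

out = run_agent("email the CFO the EMEA sales report")
print(out)
```

Each step consumes the previous step's output, which is what lets a single natural-language intent drive a multi-system workflow end to end.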

High-growth firms are leading the charge: 47% have deployed agents in production vs. 23% overall. This leap moves AI from augmenting to replacing human labor, especially in repeatable processes like customer support, logistics, or finance.

Notably, 80% of AI-native companies use Agentic Workflows, signaling a paradigm shift from “prompt-response” to workflow orchestration. Tomorrow’s AI will behave more like a “digital coworker” than a reactive plugin.

Costs and Resources: From Burn Rate to Operational Discipline

The “burn rate” of generative AI is well understood, but as maturity rises, companies are moving toward proactive cost optimization.

AI-enabled firms now allocate 15%-25% of R&D budgets to AI (up from 10%-15% in 2024). Crucially, budget structures shift with product maturity: early on, talent accounts for 57% of spend (hiring ML engineers, data scientists), but at scale, this drops to 36%, with inference (up to 22%) and storage (up to 12%) growing substantially. Inference becomes the dominant cost center in operational phases.

Pain points are predictable: 70% cite API usage fees as hardest to manage (due to volume-based pricing), followed by inference (49%) and fine-tuning (48%). In response, cost strategies include:

  • 41% shift to open-source models to avoid API fees,

  • 37% optimize inference to maximize hardware utilization,

  • 32% use quantization/distillation to compress model size and reduce runtime costs.
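Of the three strategies, quantization is the easiest to show concretely: mapping float32 weights to int8 with a single per-tensor scale cuts weight storage roughly 4x at a bounded precision cost. This is a minimal symmetric-quantization sketch, not any specific toolkit's implementation.

```python
def quantize_int8(weights):
    """Symmetric uniform quantization: one float scale per tensor, int8 values.
    Versus float32 this cuts weight storage 4x at a bounded precision cost."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

weights = [0.02, -0.5, 0.31, 0.997]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(q)                        # small int8 values
print(max_err <= scale / 2)     # error is bounded by half a quantization step
```

Distillation attacks the same cost from a different angle, training a smaller model to mimic a larger one; both shrink the inference bill that 49% of respondents cite.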

Internal Productivity: How AI Is Rewiring Organizations

Beyond external products, internal AI adoption is reshaping organizational efficiency. Budgets for internal AI are expected to nearly double in 2025, reaching 1%-8% of revenue. Large enterprises (> $500M) are reallocating from R&D and operations, and 27% are tapping into HR budgets—substituting headcount with automation.

Yet tool penetration lags actual usage: While 70% of employees have access to AI tools, only 50% use them regularly—dropping to 44% in enterprises > $1B revenue. This reflects poor tool-job fit and insufficient user training or change management.

Top internal use cases: code generation, content creation, and knowledge retrieval. High-growth firms generate 33% of code via AI—vs. 27% for others—making AI a central force in development velocity.

ROI metrics prioritize productivity gains (75%), then cost savings (51%), with revenue growth (20%) trailing. This confirms AI’s core internal role is cost and time efficiency.

Key Trends: Six Strategic Directions for Generative AI

The report outlines six trends that will shape the next 1–3 years of competition:

  1. AI-Native Speed Advantage: AI-first firms outpace AI-enabled peers in launch and scale, thanks to aligned teams, tolerant funding models, and optimized stacks.

  2. Cost Pressure Moves Upstream: As GPU access normalizes, cost has become a top-3 buying factor. API fees are now the #1 pain point, driving demand for operational excellence.

  3. Rise of Agentic Workflows: 80% of AI-native firms use multi-step automation, signaling a shift from prompt-based tools to end-to-end orchestration.

  4. Split Criteria for Models: External apps prioritize accuracy; internal apps prioritize cost and compliance. This dual standard demands flexible, case-by-case model governance.

  5. Governance Becomes Institutionalized: 66% meet basic compliance (e.g., GDPR), and 38% have formal AI policies. Human-in-the-loop remains the most common safeguard (47%). Governance is now a launch requirement—not a post-facto fix.

  6. Monitoring Market Remains Fragmented: Traditional APMs still dominate, but AI-native observability platforms are gaining ground. This nascent market is ripe for innovation and consolidation.

Conclusion: A Builder’s Action Checklist

The 2025 State of AI Report offers a clear roadmap for builders:

  • Tech stack: Tailor toolchains to your product stage, balancing agility and control.

  • Modeling strategy: Differentiate by scenario—use RAG, fine-tuning, or agents where they best fit.

  • Cost control: Track and optimize cost across the lifecycle—from API usage to inference and retraining.

  • Governance: Embed compliance and monitoring early—don’t bolt them on later.

Generative AI is reshaping entire industries—but its real value lies not in the technology itself, but in how deeply builders embed it into context. This report unveils validated playbooks from industry leaders—understanding them may just unlock the secret to moving from follower to frontrunner in the AI era.

Related Topic

Enhancing Customer Engagement with Chatbot Service
HaxiTAG ESG Solution: The Data-Driven Approach to Corporate Sustainability
Simplifying ESG Reporting with HaxiTAG ESG Solutions
The Adoption of General Artificial Intelligence: Impacts, Best Practices, and Challenges
The Significance of HaxiTAG's Intelligent Knowledge System for Enterprises and ESG Practitioners: A Data-Driven Tool for Business Operations Analysis
HaxiTAG AI Solutions: Driving Enterprise Private Deployment Strategies
HaxiTAG EiKM: Transforming Enterprise Innovation and Collaboration Through Intelligent Knowledge Management
AI-Driven Content Planning and Creation Analysis
AI-Powered Decision-Making and Strategic Process Optimization for Business Owners: Innovative Applications and Best Practices
In-Depth Analysis of the Potential and Challenges of Enterprise Adoption of Generative AI (GenAI)

Friday, September 26, 2025

Slack Leading the AI Collaboration Paradigm Shift: A Systemic Overhaul from Information Silos to an Intelligent Work OS

At a critical juncture in enterprise digital transformation, the report “10 Ways to Transform Your Work with AI in Slack” offers a clear roadmap for upgrading collaboration practices. It positions Slack as an “AI-powered Work OS” that, through dialog-driven interactions, agent-based automation, conversational customer data integration, and no-code workflow tools, addresses four pressing enterprise pain points: information silos, redundant processes, fragmented customer insights, and cross-organization collaboration barriers. This represents a substantial technological leap and organizational evolution in enterprise collaboration.

From Messaging Tool to Work OS: Redefining Collaboration through AI

No longer merely a messaging platform akin to “Enterprise WeChat,” Slack has strategically repositioned itself as an end-to-end Work Operating System. At the core of this transformation is the introduction of natural language-driven AI agents, which seamlessly connect people, data, systems, and workflows through conversation, thereby creating a semantically unified collaboration context and significantly enhancing productivity and agility.

  1. Team of AI Agents: Within Slack’s Agent Library, users can deploy function-specific agents (e.g., Deal Support Specialist). By using @mentions, employees engage these agents via natural language, transforming AI from passive tool to active collaborator—marking a shift from tool usage to intelligent partnership.

  2. Conversational Customer Data: Through deep integration with Salesforce, CRM data is both accessible and actionable directly within Slack channels, eliminating the need to toggle between systems. This is particularly impactful for frontline functions like sales and customer support, where it accelerates response times by up to 30%.

  3. No-/Low-Code Automation: Slack’s Workflow Builder empowers business users to automate tasks such as onboarding and meeting summarization without writing code. This AI-assisted workflow design lowers the automation barrier and enables business-led development, democratizing process innovation.

Four Pillars of AI-Enhanced Collaboration

The report outlines four replicable approaches for building an AI-augmented collaboration system within the enterprise:

  • 1) AI Agent Deployment: Embed role-based AI agents into Slack channels. With NLU and backend API integration, these agents gain contextual awareness, perform task execution, and interface with systems—ideal for IT support and customer service scenarios.

  • 2) Conversational CRM Integration: Salesforce channels do more than display data; they allow real-time customer updates via natural language, bridging communication and operational records. This centralizes lifecycle management and drives sales efficiency.

  • 3) No-Code Workflow Tools (Workflow Builder): By linking Slack with tools like G Suite and Asana, users can automate business processes such as onboarding, approvals, and meetings through pre-defined triggers. AI can draft these workflows, significantly lowering the effort needed to implement end-to-end automation.

  • 4) Asynchronous Collaboration Enhancements (Clips + Huddles): By integrating video and audio capabilities directly into Slack, Clips enable on-demand video updates (replacing meetings), while Huddles offer instant voice chats with auto-generated minutes—both vital for supporting global, asynchronous teams.

Constraints and Implementation Risks: A Systematic Analysis

Despite its promise, the report candidly identifies a range of limitations and risks:

  • Ecosystem Dependency: key conversational CRM features require Salesforce licenses. Impact: non-Salesforce users must reengineer system integration.
  • AI Capability Limits: search accuracy and agent performance depend heavily on data governance and access control. Impact: poor data hygiene undermines agent utility.
  • Security Management: Slack Connect requires manual security policy configuration for external collaboration. Impact: misconfiguration may lead to compliance or data exposure risks.
  • Development Resource Demand: advanced agents require custom logic built with Python/Node.js. Impact: SMEs may lack the technical capacity for deployment.

Enterprises must assess alignment with their IT maturity, skill sets, and collaboration goals. A phased implementation strategy is advisable—starting with low-risk domains like IT helpdesks, then gradually extending to sales, project management, and customer support.

Validation by Industry Practice and Deployment Recommendations

The report’s credibility is reinforced by empirical data: 82% of Fortune 100 companies use Slack Connect, and some organizations have replaced up to 30% of recurring meetings with Clips, demonstrating the model’s practical viability. From a regulatory compliance standpoint, adopting the Slack Enterprise Grid ensures robust safeguards across permissioning, data archiving, and audit logging—essential for GDPR and CCPA compliance.

Recommended enterprise adoption strategy:

  1. Pilot in Low-Risk Use Cases: Validate ROI in areas like helpdesk automation or onboarding;

  2. Invest in Data Asset Management: Build semantically structured knowledge bases to enhance AI’s search and reasoning capabilities;

  3. Foster a Culture of Co-Creation: Shift from tool usage to AI-driven co-production, increasing employee engagement and ownership.

The Future of Collaborative AI: Implications for Organizational Transformation

The proposed triad—agent team formation, conversational data integration, and democratized automation—marks a fundamental shift from tool-based collaboration to AI-empowered organizational intelligence. Slack, as a pioneering “Conversational OS,” fosters a new work paradigm—one that evolves from command-response interactions to perceptive, co-creative workflows. This signals a systemic restructuring of organizational hierarchies, roles, technical stacks, and operational logics.

As AI capabilities continue to advance, collaborative platforms will evolve from information hubs to intelligence hubs, propelling enterprises toward adaptive, data-driven, and cognitively aligned collaboration. This transformation is more than a tool swap—it is a deep reconfiguration of cognition, structure, and enterprise culture.

Related topic:

Developing LLM-based GenAI Applications: Addressing Four Key Challenges to Overcome Limitations
Analysis of AI Applications in the Financial Services Industry
Application of HaxiTAG AI in Anti-Money Laundering (AML)
Analysis of HaxiTAG Studio's KYT Technical Solution
Strategies and Challenges in AI and ESG Reporting for Enterprises: A Case Study of HaxiTAG
HaxiTAG ESG Solutions: Best Practices Guide for ESG Reporting
Impact of Data Privacy and Compliance on HaxiTAG ESG System

Friday, September 19, 2025

AI-Driven Transformation at P&G: Strategic Integration Across Operations and Innovation

As a global leader in the consumer goods industry, Procter & Gamble (P&G) deeply understands that technological innovation is central to delivering sustained consumer value. In recent years, P&G has strategically integrated Artificial Intelligence (AI) and Generative AI (Gen AI) into its operational and innovation ecosystems, forming a company-wide AI strategy. This strategy is consumer-centric, efficiency-driven, and aims to transform the organization, processes, and culture at scale.

Strategic Vision: Consumer Delight as the Sole Objective

P&G Chairman and CEO Jon Moeller emphasizes that AI should serve the singular goal of generating delight for consumers, customers, employees, society, and shareholders—not technology for its own sake. Only technologies that accelerate and enhance this objective are worth adopting. This orientation ensures that all AI projects are tightly aligned with business outcomes, avoiding fragmented or siloed deployments.

Infrastructure: Building a Scalable Enterprise AI Factory

CIO Vittorio Cretella describes P&G’s internal generative AI tool, ChatPG (built on OpenAI API), which supports over 35 enterprise-wide use cases. Through its “AI Factory,” deployment efficiency has increased tenfold. This platform enables standardized deployment and iteration of AI models across regions and functions, embedding AI capabilities as strategic infrastructure in daily operations.

Core Use Cases

1. Supply Chain Forecasting and Optimization

In collaboration with phData and KNIME, P&G integrates complex and fragmented supply chain data (spanning 5,000+ products and 22,000 components) into a unified platform. This enables real-time risk prediction, inventory optimization, and demand forecasting. A manual verification process once involving over a dozen experts has been eliminated, cutting response times from two hours to near-instantaneous.
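A toy version of the inventory-optimization piece: forecast near-term demand from recent consumption and flag components whose on-hand stock will not cover demand over the replenishment lead time. The moving-average forecast and sample numbers are illustrative; P&G's platform uses far richer models across its 22,000 components.

```python
def forecast_daily_demand(history, window=3):
    """Naive moving-average forecast over the most recent days."""
    return sum(history[-window:]) / window

def needs_reorder(on_hand, history, lead_time_days, safety_stock=0):
    """Reorder when stock will not cover forecast demand through the lead time."""
    expected_usage = forecast_daily_demand(history) * lead_time_days
    return on_hand < expected_usage + safety_stock

history = [120, 130, 125, 140, 135]   # units consumed per day
print(needs_reorder(500, history, lead_time_days=5, safety_stock=100))  # reorder
print(needs_reorder(900, history, lead_time_days=5, safety_stock=100))  # covered
```

Running this check continuously per SKU is what turns the two-hour expert review described above into a near-instant automated alert.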

2. Consumer Behavior Insights and Product Development

Smart products like the Oral-B iO electric toothbrush collect actual usage data, which AI models use to uncover behavioral discrepancies (e.g., real brushing time averaging 47 seconds versus the reported two minutes). These insights inform R&D and formulation innovation, significantly improving product design and user experience.

3. Marketing and Media Content Testing

Generative AI enables rapid creative ideation and execution. Large-scale A/B testing shortens concept validation cycles from months to days, reducing costs. AI also automates media placement and audience segmentation, enhancing both precision and efficiency.

4. Intelligent Manufacturing and Real-Time Quality Control

Sensors and computer vision systems deployed across P&G facilities enable automated quality inspection and real-time alerts. This supports “hands-free” night shift production with zero manual supervision, reducing defects and ensuring consistent product quality.

Collective Intelligence: AI as a Teammate

Between May and July 2024, P&G collaborated with Harvard Business School’s Digital Data Design Institute and Wharton School to conduct a Gen AI experiment involving over 700 employees. Key findings include:

  • Teams using Gen AI improved efficiency by ~12%;

  • Individual AI users matched or outperformed full teams without AI;

  • AI facilitated cross-functional integration and balanced solutions;

  • Participants reported enhanced collaboration and positive engagement.

These results reinforce Professor Karim Lakhani’s “Cybernetic Teammate” concept, where AI transitions from tool to teammate.

Organizational Transformation: Talent and Cultural Integration

P&G promotes AI adoption beyond tools—embedding it into organizational culture. This includes mandatory training, signed AI use policies, and executive-level hands-on involvement. CIO Seth Cohen articulates a “30% technology, 70% organization” transformation formula, underscoring the primacy of culture and talent in sustainable change.

Sustaining Competitive AI Advantage

P&G’s AI strategy is defined by its system-level design, intentionality, scalability, and long-term sustainability. Through:

  • Consumer-centric value orientation,

  • Standardized, scalable AI infrastructure,

  • End-to-end coverage from supply chain to marketing,

  • Collaborative innovation between AI and employees,

  • Organizational and cultural transformation,

P&G establishes a self-reinforcing loop of AI → Efficiency → Innovation. AI is no longer a technical pursuit—it is a foundational pillar of enduring corporate competitiveness.


Saturday, September 13, 2025

Building a Trustworthy Enterprise AI Agent Governance Framework: Strategic Insights and Practical Implications from Microsoft Copilot Studio

Case Overview: From Low-Code to Enterprise-Grade AI Agent Governance

This case centers on Microsoft’s governance strategy for AI agents, with Copilot Studio as the core platform, as outlined in The CIO Playbook to Governing AI Agents in a Low-Code World 2025. The core thesis is that organizations are transitioning from tool-based assistance to agent-operated operations, where agents evolve from passive executors to intelligent digital colleagues embedded in business processes. By extending its governance experience with Power Platform to the domain of AI agents, Microsoft introduces a five-pillar governance framework that emphasizes security, compliance, and business value—marking a paradigm shift where AI agent governance becomes a strategic capability for the enterprise.

Application Scenarios and Value Realization

Copilot Studio, as Microsoft’s strategic agent development and deployment platform, has been adopted by over 90% of Fortune 500 companies, serving more than 230,000 organizations. Its representative use cases include:

  • Intelligent Customer and Employee Support: Agents handle internal IT support and external customer interactions, improving responsiveness and reducing operational labor.

  • Process Automation Executors: Agents replace repetitive tasks across finance, legal, and HR functions, driving operational efficiency.

  • Knowledge-Driven Decision Support: Powered by embedded RAG (retrieval-augmented generation), agents tap into enterprise knowledge bases to deliver intelligent recommendations.

  • Cross-Department Digital Workforce Coordination: With tools like Entra Agent ID and Microsoft Purview, enterprises gain unified control over agent identity, behavior traceability, and lifecycle governance.

Through the adoption of zoned governance models and continuous monitoring of performance and ROI, organizations are not only scaling their AI capabilities, but also ensuring their deployment remains secure, compliant, and controllable.


Strategic Reflections: Elevating AI Governance and Redefining the CIO Role

  1. Governance as an Innovation Enabler, Not a Constraint
    Microsoft’s “freedom within guardrails” approach leverages structured models such as zoned governance, ALM pipelines, and permission stratification to advance innovation and compliance in tandem.

  2. CIOs as ‘Agent Bosses’ and AI Strategists
    Traditional IT leadership can no longer shoulder the responsibility of AI transformation alone. CIOs must evolve to lead AI agents with capabilities in task orchestration, organizational integration, and performance management.

  3. From Power Platform CoE to AI CoE: An Inevitable Evolution
    This case demonstrates a minimal-friction transition from low-code governance to intelligent agent governance, offering a practical migration path for digital enterprises.

Toward Strategic Maturity: Agent Governance as the Cornerstone of Enterprise Intelligence

The Copilot Studio governance framework offers not only operational guidance for deploying agents, but also cultivates a strategic mindset:

The true strength of enterprise AI lies not only in models and infrastructure, but in the systemic restructuring of organizations, mechanisms, and culture.

This case serves as a valuable reference for organizations embarking on large-scale AI agent deployment, especially those with foundational low-code experience, complex governance environments, and high compliance demands. In the future, AI agent governance capability will become a defining metric of digital organizational maturity.

Related topic:

Microsoft Copilot+ PC: The Ultimate Integration of LLM and GenAI for Consumer Experience, Ushering in a New Era of AI
In-depth Analysis of Google I/O 2024: Multimodal AI and Responsible Technological Innovation Usage
Google Gemini: Advancing Intelligence in Search and Productivity Tools
Google Gemini's GPT Search Update: Self-Revolution and Evolution
GPT-4o: The Dawn of a New Era in Human-Computer Interaction
GPT Search: A Revolutionary Gateway to Information, Fanning OpenAI and Google's Battle on Social Media

Tuesday, September 9, 2025

Morgan Stanley’s DevGen.AI: Reshaping Enterprise Legacy System Modernization Through Generative AI

As enterprises increasingly grapple with the pressing challenge of modernizing legacy software systems, Morgan Stanley has unveiled DevGen.AI—an internally developed generative AI tool that sets a new benchmark for enterprise-grade modernization strategies. Built upon OpenAI’s GPT models, DevGen.AI is designed to tackle the long-standing issue of outdated systems—particularly those written in languages like COBOL—that are difficult to maintain, adapt, or scale within financial institutions.

The Innovation: A Semantic Intermediate Layer

DevGen.AI’s most distinctive innovation lies in its use of an “intermediate language” approach. Rather than directly converting legacy code into modern programming languages, it first translates source code into structured, human-readable English specifications. Developers can then use these specs to rewrite the system in modern languages. This human-in-the-loop paradigm—AI-assisted specification generation followed by manual code reconstruction—offers superior adaptability and contextual accuracy for the modernization of complex, deeply embedded enterprise systems.
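The pipeline shape described above can be sketched as follows. The article only tells us the flow (legacy code in, English spec out, human rewrite after), so everything else here is an assumption: the `llm()` function is a placeholder for whatever fine-tuned GPT endpoint DevGen.AI actually calls, and the COBOL fragment and prompt wording are invented for illustration.

```python
# Sketch of the "intermediate language" pipeline: legacy code -> English spec,
# after which developers rewrite the system by hand in a modern language.

COBOL_SOURCE = """\
IF ACCOUNT-BALANCE < 0
    MOVE 'OVERDRAWN' TO ACCOUNT-STATUS
END-IF."""

def spec_prompt(source: str) -> str:
    # ask for a functional spec, not a code translation
    return (
        "Translate the following legacy code into a plain-English functional "
        "specification. Describe inputs, outputs, and business rules only.\n\n"
        + source
    )

def llm(prompt: str) -> str:
    # placeholder: a real deployment would call a fine-tuned GPT model here
    return "Spec: if the account balance is negative, mark the account as OVERDRAWN."

def code_to_spec(source: str) -> str:
    return llm(spec_prompt(source))

spec = code_to_spec(COBOL_SOURCE)  # humans rewrite from this spec, not from raw COBOL
```

The key design choice is that the AI's output is a reviewable document rather than executable code, which keeps a human checkpoint between the legacy system and its replacement.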

As of 2025, DevGen.AI has analyzed over 9 million lines of legacy code, saving developers more than 280,000 working hours. This not only reduces reliance on scarce COBOL expertise but also provides a structured pathway for large-scale software asset refactoring across the firm.

Application Scenarios and Business Value at Morgan Stanley

DevGen.AI has been deployed across three core domains:

1. Code Modernization & Migration

DevGen.AI accelerates the transformation of decades-old mainframe systems by translating legacy code into standardized technical documentation. This enables faster and more accurate refactoring into modern languages such as Java or Python, significantly shortening technology upgrade cycles.

2. Compliance & Audit Support

Operating in a heavily regulated environment, financial institutions must maintain rigorous transparency. DevGen.AI facilitates code traceability by extracting and describing code fragments tied to specific business logic, helping streamline both internal audits and external regulatory responses.
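The traceability mechanism can be illustrated with a minimal sketch: index code fragments by the business terms they mention, so an auditor can pull every line tied to a given rule. The keyword list and substring matching below are simplifying assumptions; a real system would extract business logic with the model itself.

```python
# Hypothetical audit index: map each business term to the line numbers
# of the legacy-code fragments that mention it.
from collections import defaultdict

def build_audit_index(lines: list[str], terms: list[str]) -> dict[str, list[int]]:
    index = defaultdict(list)
    for lineno, line in enumerate(lines, start=1):
        for term in terms:
            if term.lower() in line.lower():
                index[term].append(lineno)
    return dict(index)

code = [
    "COMPUTE INTEREST = BALANCE * RATE.",
    "PERFORM KYC-CHECK.",
    "IF INTEREST > CAP MOVE CAP TO INTEREST.",
]
idx = build_audit_index(code, ["INTEREST", "KYC"])
# idx["INTEREST"] -> [1, 3]; idx["KYC"] -> [2]
```

With such an index, a regulator's question about interest-cap logic resolves to an exact set of code fragments rather than a manual code search.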

3. Assisted Code Generation

While its generated modern code is not yet fully optimized for production-scale complexity, DevGen.AI can autonomously convert small to mid-sized modules. This provides substantial savings on initial development efforts and lowers the barrier to entry for modernization.

A key reason for Morgan Stanley’s choice to build a proprietary AI tool is the ability to fine-tune models based on domain-specific semantics and proprietary codebases. This avoids the semantic drift and context misalignment often seen with general-purpose LLMs in enterprise environments.

Strategic Insights from an AI Engineering Milestone

DevGen.AI exemplifies a systemic response to technical debt in the AI era, offering a replicable roadmap for large enterprises. Beyond showcasing generative AI’s real-world potential in complex engineering tasks, the project highlights three transformative industry trends:

1. Legacy System Integration Is the Gateway to Industrial AI Adoption

Enterprise transformation efforts are often constrained by the inertia of legacy infrastructure. DevGen.AI demonstrates that AI can move beyond chatbot interfaces or isolated coding tasks, embedding itself at the heart of IT infrastructure transformation.

2. Semantic Intermediation Is Critical for Quality and Control

By shifting the translation paradigm from “code-to-code” to “code-to-spec,” DevGen.AI introduces a bilingual collaboration model between AI and humans. This not only enhances output fidelity but also significantly improves developer control, comprehension, and confidence.

3. Organizational Modernization Amplifies AI ROI

Mike Pizzi, Morgan Stanley’s Head of Technology, notes that AI amplifies existing capabilities—it is not a substitute for foundational architecture. Therefore, the success of AI initiatives hinges not on the models themselves, but on the presence of a standardized, modular, and scalable technical infrastructure.

From Intelligent Tools to Intelligent Architecture

DevGen.AI proves that the core enterprise advantage in the AI era lies not in whether AI is adopted, but in how AI is integrated into the technology evolution lifecycle. AI is no longer a peripheral assistant; it is becoming the central engine powering IT transformation.

Through DevGen.AI, Morgan Stanley has not only addressed legacy technical debt but has also pioneered a scalable, replicable, and sustainable modernization framework. This breakthrough sets a precedent for AI-driven transformation in highly regulated, high-complexity industries such as finance. Ultimately, the value of enterprise AI does not reside in model size or novelty—but in its strategic ability to drive structural modernization.

Related topic:

How to Get the Most Out of LLM-Driven Copilots in Your Workplace: An In-Depth Guide
Empowering Sustainable Business Strategies: Harnessing the Potential of LLM and GenAI in HaxiTAG ESG Solutions
The Application and Prospects of HaxiTAG AI Solutions in Digital Asset Compliance Management
HaxiTAG: Enhancing Enterprise Productivity with Intelligent Knowledge Management Solutions
Empowering Enterprise Sustainability with HaxiTAG ESG Solution and LLM & GenAI Technology
The Application of HaxiTAG AI in Intelligent Data Analysis
How HaxiTAG AI Enhances Enterprise Intelligent Knowledge Management
Effective PR and Content Marketing Strategies for Startups: Boosting Brand Visibility
Leveraging HaxiTAG AI for ESG Reporting and Sustainable Development