

Tuesday, January 6, 2026

Anthropic: Transforming an Entire Organization into an “AI-Driven Laboratory”

Anthropic’s internal research reveals that AI is fundamentally reshaping how organizations produce value, structure work, and develop human capital. Today, approximately 60% of engineers’ daily workload is supported by Claude, accelerating delivery, and roughly 27% of that Claude-assisted work is net-new capacity that was previously beyond the team’s reach. This shift turns backlogged work such as refactoring, experimentation, and visualization into systematic output.

The traditional role-based division of labor is giving way to a task-structured AI delegation model, requiring organizations to define which activities should be AI-first and which must remain human-led. Meanwhile, collaboration norms are being rewritten: instant Q&A is absorbed by AI, mentorship weakens, and experiential knowledge transfer diminishes—forcing organizations to build compensating institutional mechanisms. In the long run, AI fluency and workforce retraining will become core organizational capabilities, catalyzing a full-scale redesign of workflows, roles, culture, and talent strategies.


AI Is Rewriting How a Company Operates

Anthropic’s internal study drew on:

  • 132 engineers and researchers

  • 53 in-depth interviews

  • 200,000 Claude Code interaction logs

These findings go far beyond productivity—they reveal how an AI-native organization is reshaped from within.

Anthropic’s organizational transformation centers on four structural shifts:

  1. Recomposition of capacity and project portfolios

  2. Evolution of division of labor and role design

  3. Reinvention of collaboration models and culture

  4. Forward-looking talent strategy and capability development


Capacity Structure: When 27% of Work Comes from “What Was Previously Impossible”

Story Scenario

A product team had long wanted to build a visualization and monitoring system, but the work was repeatedly deprioritized due to limited staffing and urgency. After adopting Claude Code, debugging, scripting, and boilerplate tasks were delegated to AI. With the same engineering hours, the team delivered substantially more foundational work.

As a result, dashboards, comparative experiments, and long-postponed refactoring cycles finally moved forward.

Research shows around 27% of Claude-assisted work represents net-new capacity—tasks that simply could not have been executed before.

Organizational Abstractions

  1. AI converts “peripheral tasks” into new value zones
    Refactoring, testing, visualization, and experimental work—once chronically under-resourced—become systematically solvable.

  2. Productivity gains appear as “doing more,” not “needing fewer people”
    Output scales faster than headcount reduction.

Insight for Organizations:
AI should be treated as a capacity amplifier, not a cost-cutting device. Reserve the capacity AI unlocks as a dedicated pool for exploratory and backlog-clearing projects.


Division of Labor: Organizations Are Co-Writing the Rules of AI Delegation

Story Scenario

Teams gradually formed a shared understanding:

  • Low-risk, easily verifiable, repetitive tasks → AI-first

  • Architecture, core logic, and cross-functional decisions → Human-first

Security, alignment, and infrastructure teams differ in mission but operate under the same logic:
examine task structure first, then determine AI vs. human ownership.
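
A minimal sketch can make this task-structure-first triage concrete. The fields, categories, and thresholds below are hypothetical illustrations of the delegation rule described above, not Anthropic’s actual mechanism; each team would calibrate its own criteria.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    risk: str               # "low" | "medium" | "high" (hypothetical scale)
    easily_verifiable: bool
    repetitive: bool
    cross_functional: bool

def delegate(task: Task) -> str:
    """Decide ownership from task structure rather than from the owner's role."""
    if task.risk == "low" and task.easily_verifiable and task.repetitive:
        return "AI-first"
    if task.cross_functional or task.risk == "high":
        return "human-first"
    # Ambiguous middle ground: let AI draft, but keep a human reviewer accountable.
    return "human-first with AI draft"

print(delegate(Task("regenerate boilerplate tests", "low", True, True, False)))    # AI-first
print(delegate(Task("redesign service architecture", "high", False, False, True))) # human-first
```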

Organizational Abstractions

  1. Work division shifts from role-based to task-based
    A single engineer may now write code, review AI output, design prompts, and make architectural judgments.

  2. New roles are emerging organically
    AI collaboration architect, prompt engineer, AI workflow designer—titles informal, responsibilities real.

Insight for Organizations:
Codify AI usage rules in operational processes, not just job descriptions. Make delegation explicit rather than relying on team intuition.


Collaboration & Culture: When “Ask AI First” Becomes the Default

Story Scenario

New engineers increasingly ask Claude before consulting senior colleagues. Over time:

  • Junior questions decrease

  • Seniors lose visibility into juniors’ reasoning

  • Tacit knowledge transfer drops sharply

Engineers remarked:
“I miss the real-time debugging moments where learning naturally happened.”

Organizational Abstractions

  1. AI boosts work efficiency but weakens learning-centric collaboration and team cohesion

  2. Mentorship must be intentionally reconstructed

    • Shift from Q&A to Code Review, Design Review, and Pair Design

    • Require juniors to document how they evaluated AI output, enabling seniors to coach thought processes

Insight for Organizations:
Do not mistake “fewer questions” for improved efficiency. Learning structures must be rebuilt through deliberate mechanisms.


Talent & Capability Strategy: Making AI Fluency a Foundational Organizational Skill

Story Scenario

As Claude adoption surged, Anthropic’s leadership asked:

  • What will an engineering team look like in five years?

  • How do implementers evolve into AI agent orchestrators?

  • Which roles need reskilling rather than replacement?

Anthropic is now advancing its AI Fluency Framework, partnering with universities to adapt curricula for an AI-augmented future.

Organizational Abstractions

  1. AI is a human capital strategy, not an IT project

  2. Reskilling must be proactive, not reactive

  3. AI fluency will become as fundamental as computer literacy across all roles

Insight for Organizations:
Develop AI education, cross-functional reskilling pathways, and ethical governance frameworks now—before structural gaps appear.


Final Organizational Insight: AI Is a Structural Variable, Not Just a New Tool

Anthropic’s experience yields three foundational principles:

  1. Redesign workflows around task structure—not tools

  2. Embed AI into talent strategy, culture, and role evolution

  3. Use institutional design—not individual heroism—to counteract collaboration erosion and skill atrophy

The organizations that win in the AI era are not those that adopt tools first, but those that first recognize AI as a structural force—and redesign themselves accordingly.

Related topic:

European Corporate Sustainability Reporting Directive (CSRD)
Sustainable Development Reports
External Limited Assurance under CSRD
European Sustainable Reporting Standard (ESRS)
HaxiTAG ESG Solution
GenAI-driven ESG strategies
Mandatory sustainable information disclosure
ESG reporting compliance
Digital tagging for sustainability reporting
ESG data analysis and insights

Monday, October 13, 2025

From System Records to Agent Records: Workday’s Enterprise AI Transformation Paradigm—A Future of Human–Digital Agent Coexistence

This analysis draws on a McKinsey Inside the Strategy Room interview with Workday CEO Carl Eschenbach (August 21, 2025), together with Workday’s official materials and third-party analyses, and focuses on enterprise transformation driven by agentic AI. Workday’s practical experience in human–machine collaborative intelligence offers valuable insights.

In enterprise AI transformation, two extremes must be avoided: first, treating AI as a “universal cost-cutting tool,” falling into the illusion of replacing everything while neglecting business quality, risk, and experience; second, refusing to experiment due to uncertainty, thereby missing opportunities to elevate efficiency and value.

The proper approach positions AI as a “productivity-enhancing digital colleague” under a governance and measurement framework, aiming for measurable productivity gains and new value creation. By starting with small pilots and iterative scaling, cost reduction, efficiency enhancement, and innovation can be progressively unified.

Overview

Workday’s AI strategy follows a “human–agent coexistence” paradigm. Using consistent data from HR and finance systems of record (SOR) and underpinned by governance, the company introduces an “Agent System of Record (ASR)” to centrally manage agent registration, permissions, costs, and performance—enabling a productivity leap from tool to role-based agent.

Key Principles and Concepts

  1. Coexistence, Not Replacement: AI’s power comes from being “agentic”—technology working for you. Workday explicitly positions AI for human–agent coexistence rather than replacement.

  2. Domain Data and Business Context Define the Ceiling: The CEO emphasizes that data quality and domain context, especially in HR and finance, are foundational. Workday serves over 10,000 enterprises, accumulating structured processes and data assets across clients.

  3. Three-System Perspective: HR, finance, and customer SORs form the enterprise AI foundation. Workday focuses on the first two and collaborates with the broader ecosystem (e.g., Salesforce).

  4. Speed and Culture as Multipliers: Treating “speed” as a strategic asset and cultivating a growth-oriented culture through service-oriented leadership that “enables others.”


Practice and Governance (Workday Approach)

  • ASR Platform Governance: A unified directory and observability layer for centralized control of in-house and third-party agents, covering role and permission management, registration and compliance tracking, cost budgeting and ROI monitoring, real-time activity and strategy execution, and agent orchestration and interconnection via A2A/MCP protocols (Agent Gateway); a minimal registry sketch follows this list. Digital colleagues in HaxiTAG Bot Factory provide similar functional benefits in enterprise scenarios.

  • Role-Based (Multi-Skill) Agents: Upgrade from task-based to configurable “role” agents, covering high-value processes such as recruiting, talent mobility, payroll, contracts, financial audit, and policy compliance.

  • Responsible AI System: Appoint a Chief Responsible AI Officer and employ ISO/IEC 42001 and NIST AI RMF for independent validation and verification, forming a governance loop for bias, security, explainability, and appeals.

  • Organizational Enablement: Systematic AI training for 20,000+ employees to drive full human–agent collaboration.
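
Workday has not published the ASR’s internal schema, so the following is only a minimal sketch of what a registry entry and an authorization check might look like; every field name, status value, and the budget rule is an assumption made for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class AgentRecord:
    """Hypothetical entry in an Agent System of Record (ASR)-style directory."""
    agent_id: str
    role: str                       # e.g. "payroll-verification"
    owner: str                      # accountable human or team
    permissions: list[str] = field(default_factory=list)  # least-privilege scopes
    monthly_budget_usd: float = 0.0
    spend_usd: float = 0.0
    status: str = "registered"      # registered -> active -> suspended -> retired

def authorize(record: AgentRecord, scope: str, estimated_cost_usd: float) -> bool:
    """Grant an action only if the agent is active, the scope is registered,
    and the remaining budget covers the estimated cost."""
    if record.status != "active":
        return False
    if scope not in record.permissions:
        return False
    return record.spend_usd + estimated_cost_usd <= record.monthly_budget_usd

agent = AgentRecord("ag-001", "payroll-verification", "hr-ops",
                    permissions=["payroll.read"], monthly_budget_usd=500.0,
                    status="active")
print(authorize(agent, "payroll.read", 2.5))    # True
print(authorize(agent, "payroll.write", 2.5))   # False: scope not granted
```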

Value Proposition and Business Implications

  • From “Application-Centric” to “Role-Agent-Centric” Experience: Users no longer “click apps” but collaborate with context-aware role agents, requiring rethinking of traditional UI and workflow orchestration.

  • Measurable Digital Workforce TCO/ROI: ASR treats agents as “digital employees,” integrating budget, cost, performance, and compliance into a single ledger, facilitating CFO/CHRO/CAIO governance and investment decisions.

  • Ecosystem and Interoperability: Agent Gateway connects external agents (partners or client-built), mitigating “agent sprawl” and shadow IT risks.

Methodology: A Reusable Enterprise Deployment Framework

  1. Objective Function: Maximize productivity, minimize compliance/risk, and enhance employee experience; define clear boundaries for tasks agents can independently perform.

  2. Priority Scenarios: Select high-frequency, highly regulated, and clean-data HR/finance processes (e.g., payroll verification, policy responses, compliance audits, contract obligation extraction) as MVPs.

  3. ASR Capability Blueprint:

    • Directory: Agent registration, profiles (skills/capabilities), tracking, explainability;

    • Identity & Permissions: Least privilege, cross-system data access control;

    • Policy & Compliance: Policy engine, action audits, appeals, accountability;

    • Economics: Budgeting, A/B and performance dashboards, task/time/result accounting;

    • Connectivity: Agent Gateway, A2A/MCP protocol orchestration.

  4. “Onboard Agents Like Humans”: Implement lifecycle management and RACI assignment across “hire–trial–performance–promotion–offboarding” to prevent over-authorization or improper execution (a small lifecycle sketch follows this section).

  5. Responsible AI Governance: Align with ISO 42001 and NIST AI RMF; establish processes and metrics (risk registry, bias testing, explainability thresholds, red teaming, SLA for appeals), and regularly disclose internally and externally.

  6. Organization and Culture: Embed “speed” in OKRs/performance metrics, emphasize leadership in “serving others/enabling teams,” and establish CAIO/RAI committees with frontline coaching mechanisms.

Industry Insight: Instead of full-scale rollout, adopt a four-piece “role–permission–metric–governance” loop, gradually delegating authority to create explainable autonomy.
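
The “onboard agents like humans” step can be expressed as a small lifecycle state machine. The stages below mirror the hire–trial–performance–promotion–offboarding sequence named in step 4; the transition table and the human review gate are illustrative assumptions, not Workday’s implementation.

```python
# Hypothetical lifecycle states mirroring "hire - trial - performance - promotion - offboarding".
LIFECYCLE = {
    "hired":      ["trial"],
    "trial":      ["performing", "offboarded"],   # failing trial leads to offboarding
    "performing": ["promoted", "offboarded"],
    "promoted":   ["performing", "offboarded"],   # promotion widens scope but stays reviewed
    "offboarded": [],
}

def advance(state: str, target: str, review_passed: bool) -> str:
    """Move an agent to the next lifecycle stage only after a human review gate."""
    if target not in LIFECYCLE.get(state, []):
        raise ValueError(f"illegal transition {state} -> {target}")
    if not review_passed:
        return state  # stay put until the accountable owner signs off
    return target

state = "hired"
state = advance(state, "trial", review_passed=True)
state = advance(state, "performing", review_passed=True)
print(state)  # performing
```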

Assessment and Commentary

Workday unifies humans and agents within existing HR/finance SORs and governance, balancing compliance with practical deployment density, shortening the path from pilot to scale. Constraints and risks include:

  1. Ecosystem Lock-In: ASR strongly binds to Workday data and processes; open protocols and Marketplace can mitigate this.

  2. Cross-System Consistency: Agents spanning ERP/CRM/security domains require end-to-end permission and audit linkage to avoid “shadow agents.”

  3. Measurement Complexity: Agent value must be assessed by both process and outcome (time saved ≠ business result).

Sources: McKinsey interview with Workday CEO on “coexistence, data quality, three-system perspective, speed and leadership, RAI and training”; Workday official pages/news on ASR, Agent Gateway, role agents, ROI, and Responsible AI; HFS, Josh Bersin, and other industry analyses on “agent sprawl/governance.”

Related topic:

Maximizing Efficiency and Insight with HaxiTAG LLM Studio, Innovating Enterprise Solutions
Enhancing Enterprise Development: Applications of Large Language Models and Generative AI
Unlocking Enterprise Success: The Trifecta of Knowledge, Public Opinion, and Intelligence
Revolutionizing Information Processing in Enterprise Services: The Innovative Integration of GenAI, LLM, and Omni Model
Mastering Market Entry: A Comprehensive Guide to Understanding and Navigating New Business Landscapes in Global Markets
HaxiTAG's LLMs and GenAI Industry Applications - Trusted AI Solutions
Enterprise AI Solutions: Enhancing Efficiency and Growth with Advanced AI Capabilities

Tuesday, August 19, 2025

Internal AI Adoption in Enterprises: In-Depth Insights, Challenges, and Strategic Pathways

In today’s AI-driven enterprise service landscape, the implementation and scaling of internal AI applications have become key indicators of digital transformation success. The ICONIQ 2025 State of AI report provides valuable insights into the current state, emerging challenges, and future directions of enterprise AI adoption. This article draws upon the report’s key findings and integrates them with practical perspectives on enterprise service culture to deliver a professional analysis of AI deployment breadth, user engagement, value realization, and evolving investment structures, along with actionable strategic recommendations.

High AI Penetration, Yet Divergent User Engagement

According to the report, while up to 70% of employees have access to internal AI tools, only around half are active users. This discrepancy reveals a widespread challenge: despite significant investments in AI deployment, employee engagement often falls short, particularly in large, complex organizations. The gap between "tool availability" and "tool utilization" reflects the interplay of multiple structural and cultural barriers.

Chief among these is organizational inertia. Long-established workflows and habits are not easily disrupted; without strong guidance, training, and incentive systems, employees revert to legacy practices and leave AI tools underutilized. Second, disparities in employee skill sets hinder adoption: not all employees have the aptitude or willingness to learn new technologies, and perceived complexity can lead to avoidance. Third, lagging business process reengineering limits AI’s impact: the introduction of AI must be accompanied by streamlined workflows; otherwise, the technology remains disconnected from business value chains.

In large enterprises, AI adoption faces additional challenges, including the absence of a unified AI strategy, departmental silos, and concerns around data security and regulatory compliance. Furthermore, employee anxiety over job displacement may create resistance. Research shows that insufficient collective buy-in or vague implementation directives often lead to failed AI initiatives. Uncoordinated tool usage may also result in fragmented knowledge retention, security risks, and misalignment with strategic goals. Addressing these issues requires systemic transformation across technology, processes, organizational structure, and culture to ensure that AI tools are not just “accessible,” but “habitual and valuable.”

Scenario Depth and Productivity Gains Among High-Adoption Enterprises

The report indicates that enterprises with high AI adoption deploy an average of seven or more internal AI use cases, with coding assistants (77%), content generation (65%), and document retrieval (57%) being the most common. These findings validate AI’s broad applicability and emphasize that scenario depth and diversity are critical to unlocking its full potential. By embedding AI into core functions such as R&D, operations, and marketing, leading enterprises report productivity gains ranging from 15% to 30%.

Scenario-specific tools deliver measurable impact. Coding assistants enhance development speed and code quality; content generation automates scalable, personalized marketing and internal communications; and document retrieval systems reduce the cost of information access through semantic search and knowledge graph integration. These solutions go beyond tool substitution — they optimize workflows and free employees to focus on higher-value, creative tasks.

The true productivity dividend lies in system integration and process reengineering. High-adoption enterprises treat AI not as isolated pilots but as strategic drivers of end-to-end automation. Integrating content generators with marketing automation platforms or linking document search systems with CRM databases exemplifies how AI can augment user experience and drive cross-functional value. These organizations also invest in data governance and model optimization, ensuring that high-quality data fuels reliable, context-aware AI models.
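
As a rough illustration of the semantic document retrieval described above, the sketch below ranks documents by embedding similarity instead of keyword overlap. The toy embed() function is a self-contained stand-in; a production system would call a real embedding model and typically layer knowledge-graph or CRM integration on top.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Toy embedding: hashed character trigrams. A real system would call an
    embedding model here; this stand-in keeps the example self-contained."""
    vec = np.zeros(256)
    t = text.lower()
    for i in range(len(t) - 2):
        vec[hash(t[i:i + 3]) % 256] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

docs = {
    "expense-policy": "How to submit travel expense reports and reimbursement limits",
    "onboarding":     "New employee onboarding checklist and IT account setup",
    "security":       "Password rotation and two-factor authentication requirements",
}
doc_vecs = {name: embed(text) for name, text in docs.items()}

def search(query: str, top_k: int = 2) -> list[str]:
    """Return the top_k document names ranked by cosine similarity to the query."""
    q = embed(query)
    scored = sorted(doc_vecs.items(), key=lambda kv: float(q @ kv[1]), reverse=True)
    return [name for name, _ in scored[:top_k]]

print(search("how do I get reimbursed for a flight"))
```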


Evolving AI R&D Investment Structures

The report highlights that AI-related R&D now comprises 10%–20% of enterprise R&D budgets, with continued growth across revenue segments — signaling strong strategic prioritization. Notably, AI investment structures are dynamically shifting, necessitating foresight and flexibility in resource planning.

In the early stages, talent represents the largest cost. Enterprises compete for AI/ML engineers, data scientists, and AI product managers who can bridge technical expertise with business understanding. Talent-intensive innovation is critical when AI technologies are still nascent. Competitive compensation, career development pathways, and open innovation cultures are essential for attracting and retaining such talent.

As AI matures, cost structures tilt toward cloud computing, inference operations, and governance. Once deployed, AI systems require substantial compute resources, particularly for high-volume, real-time workloads. Model inference, data transmission, and infrastructure scalability become cost drivers. Simultaneously, AI governance—covering privacy, fairness, explainability, and regulatory compliance—emerges as a strategic imperative. Establishing AI ethics committees, audit frameworks, and governance platforms becomes essential to long-term scalability and risk mitigation.

Thus, enterprises must shift from a narrow R&D lens to a holistic investment model, balancing technical innovation with operational sustainability. Cloud cost optimization, model efficiency improvements (e.g., pruning, quantization), and robust data governance are no longer optional—they are competitive necessities.
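
As a concrete example of the model efficiency levers mentioned above (pruning, quantization), the sketch below applies PyTorch’s post-training dynamic quantization to a toy model, storing linear-layer weights in int8 to cut memory and, often, inference latency. The network here is purely illustrative, and actual savings depend on the model and serving hardware.

```python
import torch
import torch.nn as nn

# Toy model standing in for a larger inference workload.
model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))
model.eval()

# Post-training dynamic quantization: Linear weights are stored as int8,
# activations are quantized on the fly at inference time.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 512)
with torch.no_grad():
    print(model(x).shape, quantized(x).shape)  # same interface, smaller weights
```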

Strategic Recommendations

1. Scenario-Driven Co-Creation: The Core of AI Value Realization

AI’s business value lies in transforming core processes, not simply introducing new technologies. Enterprises should anchor AI initiatives in real business scenarios and foster cross-functional co-creation between business leaders and technologists.

Establish cross-departmental AI innovation teams comprising business owners, technical experts, and data scientists. These teams should identify high-impact use cases, redesign workflows, and iterate continuously. Begin with data-rich, high-friction areas where value can be validated quickly. Ensure scalability and reusability across similar processes to minimize redundant development and maximize asset value.

2. Culture and Talent Mechanisms: Keys to Active Adoption

Bridging the gap between AI availability and consistent use requires organizational commitment, employee empowerment, and cultural transformation.

Promote an AI-first mindset through leadership advocacy, internal storytelling, and grassroots experimentation. Align usage with performance incentives by incorporating AI adoption metrics into KPIs or OKRs. Invest in tiered AI literacy programs, tailored to roles and seniority, to build a baseline of AI fluency and confidence across the organization.

3. Cost Optimization and Sustainable Governance

As costs shift toward compute and compliance, enterprises must optimize infrastructure and fortify governance.

Implement granular cloud cost control strategies and improve model inference efficiency through hardware acceleration or architectural simplification. Develop a comprehensive AI governance framework encompassing data privacy, algorithmic fairness, model interpretability, and ethical accountability. Though initial investments may be substantial, they provide long-term protection against legal, reputational, and operational risks.

4. Data-Driven ROI and Strategic Iteration

Establish end-to-end AI performance and ROI monitoring systems. Track tool usage, workflow impact, and business outcomes (e.g., efficiency gains, customer satisfaction) to quantify value creation.

Design robust ROI models tailored to each use case — including direct and indirect costs and benefits. Use insights to refine investment priorities, sunset underperforming projects, and iterate AI strategy in alignment with evolving goals. Let data—not assumptions—guide AI evolution.
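
A first-pass ROI model for a single use case can be as simple as comparing fully loaded costs against quantified benefits over the same period; the sketch below shows the arithmetic, with every figure and category a hypothetical placeholder to be replaced by measured data.

```python
def roi(direct_costs: float, indirect_costs: float,
        direct_benefits: float, indirect_benefits: float) -> float:
    """Return ROI as a ratio: (total benefit - total cost) / total cost."""
    cost = direct_costs + indirect_costs
    benefit = direct_benefits + indirect_benefits
    return (benefit - cost) / cost

# Illustrative monthly figures for one use case (all numbers hypothetical).
example = roi(
    direct_costs=12_000,      # inference and licensing
    indirect_costs=8_000,     # training time, governance reviews
    direct_benefits=25_000,   # hours saved x loaded hourly rate
    indirect_benefits=5_000,  # estimated value of faster cycle time
)
print(f"ROI: {example:.0%}")  # ROI: 50%
```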

Conclusion

Enterprise AI adoption has entered deep waters. To capture long-term value, organizations must treat AI not as a tool, but as a strategic infrastructure, guided by scenario-centric design, cultural alignment, and governance excellence. Only then can they unlock AI’s productivity dividends and build a resilient, intelligent competitive advantage.

Related Topic

Enhancing Customer Engagement with Chatbot Service
HaxiTAG ESG Solution: The Data-Driven Approach to Corporate Sustainability
Simplifying ESG Reporting with HaxiTAG ESG Solutions
The Adoption of General Artificial Intelligence: Impacts, Best Practices, and Challenges
The Significance of HaxiTAG's Intelligent Knowledge System for Enterprises and ESG Practitioners: A Data-Driven Tool for Business Operations Analysis
HaxiTAG AI Solutions: Driving Enterprise Private Deployment Strategies
HaxiTAG EiKM: Transforming Enterprise Innovation and Collaboration Through Intelligent Knowledge Management
AI-Driven Content Planning and Creation Analysis
AI-Powered Decision-Making and Strategic Process Optimization for Business Owners: Innovative Applications and Best Practices
In-Depth Analysis of the Potential and Challenges of Enterprise Adoption of Generative AI (GenAI)

Saturday, August 17, 2024

How Enterprises Can Build Agentic AI: A Guide to the Seven Essential Resources and Skills

After reading the Cohere team’s piece “Discover the seven essential resources and skills companies need to build AI agents and tap into the next frontier of generative AI,” I would like to share some reflections and summaries, combined with the HaxiTAG team’s industrial practice.

  1. Overview and Insights

Neel Gokhale and Matthew Koscak’s discussion of how enterprises can build autonomous AI agents (Agentic AI) centers on how companies can leverage its potential. The core of Agentic AI lies in using generative AI to interact with tools, creating and running autonomous, multi-step workflows. It goes beyond traditional question answering by performing complex tasks and taking actions based on guided, informed reasoning, and it therefore offers enterprises new opportunities to improve efficiency and free up human resources.

  2. Problems Solved

Agentic AI addresses several issues in enterprise-level generative AI applications by extending the capabilities of retrieval-augmented generation (RAG) systems. These include improving the accuracy and efficiency of enterprise-grade AI systems, reducing human intervention, and tackling the challenges posed by complex tasks and multi-step workflows.

  3. Solutions and Core Methods

The key steps and strategies for building an Agentic AI system include:

  • Orchestration: Ensuring that the tools and processes within the AI system are coordinated effectively. State machines are one effective orchestration method, helping the AI system understand context, respond to triggers, and select appropriate resources to execute tasks; a minimal state-machine sketch appears after this list.

  • Guardrails: Setting boundaries for AI actions to prevent uncontrolled autonomous decisions. Advanced LLMs (such as the Command R models) are used to achieve transparency and traceability, combined with human oversight to ensure the rationality of complex decisions.

  • Knowledgeable Teams: Ensuring that the team has the necessary technical knowledge and experience or supplementing these through training and hiring to support the development and management of Agentic AI.

  • Enterprise-grade LLMs: Utilizing LLMs specifically trained for multi-step tool use, such as Cohere Command R+, to ensure the execution of complex tasks and the ability to self-correct.

  • Tool Architecture: Defining the various tools used in the system and their interactions with external systems, and clarifying the architecture and functional parameters of the tools.

  • Evaluation: Conducting multi-faceted evaluations of the generative language models, overall architecture, and deployment platform to ensure system performance and scalability.

  • Moving to Production: Extensive testing and validation to ensure the system's stability and resource availability in a production environment to support actual business needs.
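
The state-machine orchestration recommended above can be illustrated in a few lines. The states, triggers, and the human-review guardrail in this sketch are generic placeholders rather than Cohere’s or HaxiTAG’s implementation; the point is that every transition is explicit and anything outside the table is refused.

```python
# Minimal state-machine orchestration for a multi-step agent workflow.
# States, triggers, and the human-approval guardrail are illustrative only.
WORKFLOW = {
    ("await_request", "request_received"): "retrieve_context",
    ("retrieve_context", "context_ready"): "draft_answer",
    ("draft_answer", "needs_tool"):        "call_tool",
    ("call_tool", "tool_done"):            "draft_answer",
    ("draft_answer", "answer_ready"):      "human_review",   # guardrail
    ("human_review", "approved"):          "done",
    ("human_review", "rejected"):          "draft_answer",
}

def step(state: str, trigger: str) -> str:
    """Advance the workflow; unknown (state, trigger) pairs are refused."""
    try:
        return WORKFLOW[(state, trigger)]
    except KeyError:
        raise ValueError(f"no transition from {state!r} on {trigger!r}") from None

state = "await_request"
for trigger in ["request_received", "context_ready", "needs_tool",
                "tool_done", "answer_ready", "approved"]:
    state = step(state, trigger)
print(state)  # done
```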

  4. Beginner's Practice Guide

Newcomers to building Agentic AI systems can follow these steps:

  • Start by learning the basics of generative AI and RAG system principles, and understand the working mechanisms of state machines and LLMs.
  • Gradually build simple workflows, using state machines for orchestration, ensuring system transparency and traceability as complexity increases.
  • Introduce guardrails, particularly human oversight mechanisms, to control system autonomy in the early stages.
  • Continuously evaluate system performance, using small-scale test cases to verify functionality, and gradually expand.
  5. Limitations and Constraints

The main limitations faced when building Agentic AI systems include:

  • Resource Constraints: Large-scale Agentic AI systems require substantial computing resources and data processing capabilities. Scalability must be fully considered when moving into production.
  • Transparency and Control: Ensuring that the system's decision-making process is transparent and traceable, and that human intervention is possible when necessary to avoid potential risks.
  • Team Skills and Culture: The team must have extensive AI knowledge and skills, and the corporate culture must support the application and innovation of AI technology.
  6. Summary and Business Applications

The core of Agentic AI lies in automating multi-step workflows to reduce human intervention and increase efficiency. Enterprises should prepare in terms of infrastructure, personnel skills, tool architecture, and system evaluation to effectively build and deploy Agentic AI systems. Although the technology is still evolving, Agentic AI will increasingly be used for complex tasks over time, creating more value for businesses.

HaxiTAG is your best partner in developing Agentic AI applications. With extensive practical experience and numerous industry cases, we focus on providing efficient, agile, and high-quality Agentic AI solutions for various scenarios. By partnering with HaxiTAG, enterprises can significantly enhance the return on investment of their Agentic AI projects, accelerating the transition from concept to production, thereby building sustained competitive advantage and ensuring a leading position in the rapidly evolving AI field.

Related topic:

Analysis of HaxiTAG Studio's KYT Technical Solution
Enhancing Encrypted Finance Compliance and Risk Management with HaxiTAG Studio
Generative Artificial Intelligence in the Financial Services Industry: Applications and Prospects
Application of HaxiTAG AI in Anti-Money Laundering (AML)
HaxiTAG Studio: Revolutionizing Financial Risk Control and AML Solutions