
Showing posts with label usecase.

Friday, September 26, 2025

Slack Leading the AI Collaboration Paradigm Shift: A Systemic Overhaul from Information Silos to an Intelligent Work OS

At a critical juncture in enterprise digital transformation, the report “10 Ways to Transform Your Work with AI in Slack” offers a clear roadmap for upgrading collaboration practices. It positions Slack as an “AI-powered Work OS” that, through dialog-driven interactions, agent-based automation, conversational customer data integration, and no-code workflow tools, addresses four pressing enterprise pain points: information silos, redundant processes, fragmented customer insights, and cross-organization collaboration barriers. This represents a substantial technological leap and organizational evolution in enterprise collaboration.

From Messaging Tool to Work OS: Redefining Collaboration through AI

No longer merely a messaging platform akin to “Enterprise WeChat,” Slack has strategically repositioned itself as an end-to-end Work Operating System. At the core of this transformation is the introduction of natural language-driven AI agents, which seamlessly connect people, data, systems, and workflows through conversation, thereby creating a semantically unified collaboration context and significantly enhancing productivity and agility.

  1. Team of AI Agents: Within Slack’s Agent Library, users can deploy function-specific agents (e.g., Deal Support Specialist). By using @mentions, employees engage these agents via natural language, transforming AI from passive tool to active collaborator—marking a shift from tool usage to intelligent partnership.

  2. Conversational Customer Data: Through deep integration with Salesforce, CRM data is both accessible and actionable directly within Slack channels, eliminating the need to toggle between systems. This is particularly impactful for frontline functions like sales and customer support, where it accelerates response times by up to 30%.

  3. No-/Low-Code Automation: Slack’s Workflow Builder empowers business users to automate tasks such as onboarding and meeting summarization without writing code. This AI-assisted workflow design lowers the automation barrier and enables business-led development, democratizing process innovation.

Four Pillars of AI-Enhanced Collaboration

The report outlines four replicable approaches for building an AI-augmented collaboration system within the enterprise:

  • 1) AI Agent Deployment: Embed role-based AI agents into Slack channels. With NLU and backend API integration, these agents gain contextual awareness, perform task execution, and interface with systems—ideal for IT support and customer service scenarios.

  • 2) Conversational CRM Integration: Salesforce channels do more than display data; they allow real-time customer updates via natural language, bridging communication and operational records. This centralizes lifecycle management and drives sales efficiency.

  • 3) No-Code Workflow Tools (Workflow Builder): By linking Slack with tools like G Suite and Asana, users can automate business processes such as onboarding, approvals, and meetings through pre-defined triggers. AI can draft these workflows, significantly lowering the effort needed to implement end-to-end automation.

  • 4) Asynchronous Collaboration Enhancements (Clips + Huddles): By integrating video and audio capabilities directly into Slack, Clips enable on-demand video updates (replacing meetings), while Huddles offer instant voice chats with auto-generated minutes—both vital for supporting global, asynchronous teams.
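The agent-deployment pattern in approach 1) can be sketched in plain Python. This is a minimal, hypothetical illustration of routing an @mention to a role-specific handler; the agent names and registry here are placeholders, not Slack's actual Agent Library API.

```python
# Hypothetical sketch: dispatch a channel @mention to a role-based agent.
# Agent names and handlers are illustrative, not Slack's real interface.

AGENT_REGISTRY = {
    "deal-support": lambda text: f"[Deal Support] Reviewing: {text}",
    "it-helpdesk": lambda text: f"[IT Helpdesk] Ticket opened for: {text}",
}

def route_mention(message: str) -> str:
    """Dispatch a message of the form '@agent-name request...'
    to the matching agent handler."""
    if not message.startswith("@"):
        return "No agent mentioned."
    mention, _, request = message[1:].partition(" ")
    handler = AGENT_REGISTRY.get(mention)
    if handler is None:
        return f"Unknown agent: {mention}"
    return handler(request.strip())
```

In a real deployment, each handler would call an NLU pipeline and backend APIs rather than returning a string, but the routing shape stays the same.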

Constraints and Implementation Risks: A Systematic Analysis

Despite its promise, the report candidly identifies a range of limitations and risks:

  • Ecosystem Dependency: key conversational CRM features require Salesforce licenses, so non-Salesforce users must reengineer system integration.

  • AI Capability Limits: search accuracy and agent performance depend heavily on data governance and access control; poor data hygiene undermines agent utility.

  • Security Management Challenges: Slack Connect requires manual security policy configuration for external collaboration; misconfiguration may lead to compliance or data exposure risks.

  • Development Resource Demand: advanced agents require custom logic built with Python/Node.js, which SMEs may lack the technical capacity to deploy.

Enterprises must assess alignment with their IT maturity, skill sets, and collaboration goals. A phased implementation strategy is advisable—starting with low-risk domains like IT helpdesks, then gradually extending to sales, project management, and customer support.

Validation by Industry Practice and Deployment Recommendations

The report’s credibility is reinforced by empirical data: 82% of Fortune 100 companies use Slack Connect, and some organizations have replaced up to 30% of recurring meetings with Clips, demonstrating the model’s practical viability. From a regulatory compliance standpoint, adopting the Slack Enterprise Grid ensures robust safeguards across permissioning, data archiving, and audit logging—essential for GDPR and CCPA compliance.

Recommended enterprise adoption strategy:

  1. Pilot in Low-Risk Use Cases: Validate ROI in areas like helpdesk automation or onboarding;

  2. Invest in Data Asset Management: Build semantically structured knowledge bases to enhance AI’s search and reasoning capabilities;

  3. Foster a Culture of Co-Creation: Shift from tool usage to AI-driven co-production, increasing employee engagement and ownership.

The Future of Collaborative AI: Implications for Organizational Transformation

The proposed triad—agent team formation, conversational data integration, and democratized automation—marks a fundamental shift from tool-based collaboration to AI-empowered organizational intelligence. Slack, as a pioneering “Conversational OS,” fosters a new work paradigm—one that evolves from command-response interactions to perceptive, co-creative workflows. This signals a systemic restructuring of organizational hierarchies, roles, technical stacks, and operational logics.

As AI capabilities continue to advance, collaborative platforms will evolve from information hubs to intelligence hubs, propelling enterprises toward adaptive, data-driven, and cognitively aligned collaboration. This transformation is more than a tool swap—it is a deep reconfiguration of cognition, structure, and enterprise culture.

Related topic:

Developing LLM-based GenAI Applications: Addressing Four Key Challenges to Overcome Limitations
Analysis of AI Applications in the Financial Services Industry
Application of HaxiTAG AI in Anti-Money Laundering (AML)
Analysis of HaxiTAG Studio's KYT Technical Solution
Strategies and Challenges in AI and ESG Reporting for Enterprises: A Case Study of HaxiTAG
HaxiTAG ESG Solutions: Best Practices Guide for ESG Reporting
Impact of Data Privacy and Compliance on HaxiTAG ESG System

Tuesday, September 9, 2025

Morgan Stanley’s DevGen.AI: Reshaping Enterprise Legacy System Modernization Through Generative AI

As enterprises increasingly grapple with the pressing challenge of modernizing legacy software systems, Morgan Stanley has unveiled DevGen.AI—an internally developed generative AI tool that sets a new benchmark for enterprise-grade modernization strategies. Built upon OpenAI’s GPT models, DevGen.AI is designed to tackle the long-standing issue of outdated systems—particularly those written in languages like COBOL—that are difficult to maintain, adapt, or scale within financial institutions.

The Innovation: A Semantic Intermediate Layer

DevGen.AI’s most distinctive innovation lies in its use of an “intermediate language” approach. Rather than directly converting legacy code into modern programming languages, it first translates source code into structured, human-readable English specifications. Developers can then use these specs to rewrite the system in modern languages. This human-in-the-loop paradigm—AI-assisted specification generation followed by manual code reconstruction—offers superior adaptability and contextual accuracy for the modernization of complex, deeply embedded enterprise systems.
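The two-stage pattern above can be expressed as a small pipeline sketch. This is an assumption-laden illustration, not Morgan Stanley's actual DevGen.AI implementation: the prompt wording is invented, and the `llm` parameter is a pluggable callable standing in for a GPT model call.

```python
# Illustrative sketch of the "intermediate language" pattern: translate
# legacy code into an English spec, then hand the spec to a developer.
# Prompt text and the `llm` callable are assumptions, not DevGen.AI itself.

SPEC_PROMPT = (
    "Translate the following legacy code into a structured, "
    "human-readable English specification:\n\n{code}"
)

def code_to_spec(legacy_code: str, llm) -> str:
    """Stage 1: AI-assisted specification generation.

    `llm` is any callable mapping a prompt string to a completion,
    keeping the model backend pluggable (and mockable in tests)."""
    return llm(SPEC_PROMPT.format(code=legacy_code))

def modernization_ticket(legacy_code: str, llm, target_lang: str = "Java") -> dict:
    """Stage 2: package the spec for human-in-the-loop rewriting."""
    return {
        "spec": code_to_spec(legacy_code, llm),
        "target_language": target_lang,
        "status": "awaiting developer rewrite",  # humans write the new code
    }
```

The key design point is that the model never emits the modern code directly; its output is a reviewable English artifact, which is what gives developers control over the rewrite.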

As of 2025, DevGen.AI has analyzed over 9 million lines of legacy code, saving developers more than 280,000 working hours. This not only reduces reliance on scarce COBOL expertise but also provides a structured pathway for large-scale software asset refactoring across the firm.

Application Scenarios and Business Value at Morgan Stanley

DevGen.AI has been deployed across three core domains:

1. Code Modernization & Migration

DevGen.AI accelerates the transformation of decades-old mainframe systems by translating legacy code into standardized technical documentation. This enables faster and more accurate refactoring into modern languages such as Java or Python, significantly shortening technology upgrade cycles.

2. Compliance & Audit Support

Operating in a heavily regulated environment, financial institutions must maintain rigorous transparency. DevGen.AI facilitates code traceability by extracting and describing code fragments tied to specific business logic, helping streamline both internal audits and external regulatory responses.

3. Assisted Code Generation

While its generated modern code is not yet fully optimized for production-scale complexity, DevGen.AI can autonomously convert small to mid-sized modules. This provides substantial savings on initial development efforts and lowers the barrier to entry for modernization.

A key reason for Morgan Stanley’s choice to build a proprietary AI tool is the ability to fine-tune models based on domain-specific semantics and proprietary codebases. This avoids the semantic drift and context misalignment often seen with general-purpose LLMs in enterprise environments.

Strategic Insights from an AI Engineering Milestone

DevGen.AI exemplifies a systemic response to technical debt in the AI era, offering a replicable roadmap for large enterprises. Beyond showcasing generative AI’s real-world potential in complex engineering tasks, the project highlights three transformative industry trends:

1. Legacy System Integration Is the Gateway to Industrial AI Adoption

Enterprise transformation efforts are often constrained by the inertia of legacy infrastructure. DevGen.AI demonstrates that AI can move beyond chatbot interfaces or isolated coding tasks, embedding itself at the heart of IT infrastructure transformation.

2. Semantic Intermediation Is Critical for Quality and Control

By shifting the translation paradigm from “code-to-code” to “code-to-spec,” DevGen.AI introduces a bilingual collaboration model between AI and humans. This not only enhances output fidelity but also significantly improves developer control, comprehension, and confidence.

3. Organizational Modernization Amplifies AI ROI

Mike Pizzi, Morgan Stanley’s Head of Technology, notes that AI amplifies existing capabilities—it is not a substitute for foundational architecture. Therefore, the success of AI initiatives hinges not on the models themselves, but on the presence of a standardized, modular, and scalable technical infrastructure.

From Intelligent Tools to Intelligent Architecture

DevGen.AI proves that the core enterprise advantage in the AI era lies not in whether AI is adopted, but in how AI is integrated into the technology evolution lifecycle. AI is no longer a peripheral assistant; it is becoming the central engine powering IT transformation.

Through DevGen.AI, Morgan Stanley has not only addressed legacy technical debt but has also pioneered a scalable, replicable, and sustainable modernization framework. This breakthrough sets a precedent for AI-driven transformation in highly regulated, high-complexity industries such as finance. Ultimately, the value of enterprise AI does not reside in model size or novelty—but in its strategic ability to drive structural modernization.

Related topic:

How to Get the Most Out of LLM-Driven Copilots in Your Workplace: An In-Depth Guide
Empowering Sustainable Business Strategies: Harnessing the Potential of LLM and GenAI in HaxiTAG ESG Solutions
The Application and Prospects of HaxiTAG AI Solutions in Digital Asset Compliance Management
HaxiTAG: Enhancing Enterprise Productivity with Intelligent Knowledge Management Solutions
Empowering Enterprise Sustainability with HaxiTAG ESG Solution and LLM & GenAI Technology
The Application of HaxiTAG AI in Intelligent Data Analysis
How HaxiTAG AI Enhances Enterprise Intelligent Knowledge Management
Effective PR and Content Marketing Strategies for Startups: Boosting Brand Visibility
Leveraging HaxiTAG AI for ESG Reporting and Sustainable Development

Sunday, August 31, 2025

Unlocking the Value of Generative AI under Regulatory Compliance: An Intelligent Overhaul of Model Risk Management in the Banking Sector

Case Overview, Core Themes, and Key Innovations

This case is based on Capgemini’s white paper Model Risk Management: Scaling AI within Compliance Requirements, which addresses the evolving governance frameworks necessitated by the widespread deployment of Generative AI (Gen AI) in the banking industry. It focuses on aligning the legacy SR 11-7 model risk guidelines with the unique characteristics of Gen AI, proposing a forward-looking Model Risk Management (MRM) system that is verifiable, explainable, and resilient.

Through a multidimensional analysis, the paper introduces technical approaches such as hallucination detection, fairness auditing, adversarial robustness testing, explainability mechanisms, and sensitive data governance. Notably, it proposes the paradigm of “MRM by design,” embedding compliance requirements natively into model development and validation workflows to establish a full-lifecycle governance loop.
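The "MRM by design" idea of embedding compliance checks natively into the output path can be sketched as a guardrail gate. The check names and rules below are illustrative assumptions, not Capgemini's actual framework; real hallucination and toxicity detectors would be models, not string tests.

```python
# Minimal sketch of "MRM by design": compliance checks run as guardrails
# before a model output is released. Checks here are toy stand-ins.

def run_guardrails(output: str, checks) -> dict:
    """Run each named check over a model output; any failure blocks
    release and would route the output to human review."""
    failures = [name for name, check in checks.items() if not check(output)]
    return {
        "released": not failures,
        "failed_checks": failures,
    }

CHECKS = {
    # Hypothetical stand-ins for hallucination/toxicity detectors.
    "no_banned_terms": lambda text: "guaranteed returns" not in text.lower(),
    "within_length": lambda text: len(text) <= 500,
}
```

The point of the shape is that every output passes through the same auditable gate, giving the full-lifecycle governance loop a concrete enforcement point.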

Scenario Analysis and Functional Value

Application Scenarios:

  • Intelligent Customer Engagement: Enhancing customer interaction via large language models.

  • Automated Compliance: Utilizing Gen AI for AML/KYC document processing and monitoring.

  • Risk and Credit Modeling: Strengthening credit evaluation, fraud detection, and loan approval pipelines.

  • Third-party Model Evaluation: Ensuring compliance controls during the adoption of external foundation models.

Functional Impact:

  • Enhanced Risk Visibility: Multi-dimensional monitoring of hallucinations, toxicity, and fairness in model outputs increases the transparency of AI-induced risks.

  • Improved Regulatory Alignment: A structured mapping between SR 11-7 and the EU AI Act enables U.S. banks to better align with global regulatory standards.

  • Systematized Validation Toolkits: A multi-tiered validation framework centered on conceptual soundness, outcome analysis, and continuous monitoring.

  • Lifecycle Governance Architecture: A comprehensive control system encompassing input management, model core, output guardrails, monitoring, alerts, and human oversight.

Insights and Strategic Implications for AI-enabled Compliance

  • Regulatory Paradigm Shift: Traditional models emphasize auditability and linear explainability, whereas Gen AI introduces non-determinism, probabilistic reasoning, and open-ended outputs—driving a transition from reviewing logic to auditing behavior and outcomes.

  • Compliance-Innovation Synergy: The concept of “compliance by design” encourages AI developers to embed regulatory logic into architecture, traceability, and data provenance from the ground up, reducing retrofit compliance costs.

  • A Systems Engineering View of Governance: Model governance must evolve from a validation-only responsibility to an enterprise-level safeguard, incorporating architecture, data stewardship, security operations, and third-party management into a coordinated governance network.

  • A Global Template for Financial Governance: The proposed alignment of EU AI Act dimensions (e.g., fairness, explainability, energy efficiency, drift control) with SR 11-7 provides a regulatory interoperability model for multinational financial institutions.

  • A Scalable Blueprint for Trusted Gen AI: This case offers a practical risk governance framework applicable to high-stakes sectors such as finance, insurance, government, and healthcare, setting the foundation for responsible and scalable Gen AI deployment.

Related Topic

HaxiTAG AI Solutions: Driving Enterprise Private Deployment Strategies
HaxiTAG EiKM: Transforming Enterprise Innovation and Collaboration Through Intelligent Knowledge Management
AI-Driven Content Planning and Creation Analysis
AI-Powered Decision-Making and Strategic Process Optimization for Business Owners: Innovative Applications and Best Practices
In-Depth Analysis of the Potential and Challenges of Enterprise Adoption of Generative AI (GenAI)

Friday, July 18, 2025

OpenAI’s Seven Key Lessons and Case Studies in Enterprise AI Adoption

AI is Transforming How Enterprises Work

OpenAI recently released a comprehensive guide on enterprise AI deployment (openai-ai-in-the-enterprise.pdf), based on firsthand experiences from its research, application, and deployment teams. It identified three core areas where AI is already delivering substantial and measurable improvements for organizations:

  • Enhancing Employee Performance: Empowering employees to deliver higher-quality output in less time

  • Automating Routine Operations: Freeing employees from repetitive tasks so they can focus on higher-value work

  • Enabling Product Innovation: Delivering more relevant and responsive customer experiences

However, AI implementation differs fundamentally from traditional software development or cloud deployment. The most successful organizations treat AI as a new paradigm, adopting an experimental and iterative approach that accelerates value creation and drives faster user and stakeholder adoption.

OpenAI’s integrated approach — combining foundational research, applied model development, and real-world deployment — follows a rapid iteration cycle. This means frequent updates, real-time feedback collection, and continuous improvements to performance and safety.

Seven Key Lessons for Enterprise AI Deployment

Lesson 1: Start with Rigorous Evaluation
Case: How Morgan Stanley Ensures Quality and Safety through Iteration

As a global leader in financial services, Morgan Stanley places relationships at the core of its business. Faced with the challenge of introducing AI into highly personalized and sensitive workflows, the company began with rigorous evaluations (evals) for every proposed use case.

Evaluation is a structured process that assesses model performance against benchmarks within specific applications. It also supports continuous process improvement, reinforced with expert feedback at each step.

In its early stages, Morgan Stanley focused on improving the efficiency and effectiveness of its financial advisors. The hypothesis was simple: if advisors could retrieve information faster and reduce time spent on repetitive tasks, they could provide more and better insights to clients.

Three initial evaluation tracks were launched:

  • Translation Accuracy: Measuring the quality of AI-generated translations

  • Summarization: Evaluating AI’s ability to condense information using metrics for accuracy, relevance, and coherence

  • Human Comparison: Comparing AI outputs to expert responses, scored on accuracy and relevance
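An eval harness along these lines can be sketched simply. The token-overlap metric and pass threshold below are illustrative assumptions; production evals like Morgan Stanley's would use rubric-based or expert-graded scoring rather than word overlap.

```python
# Sketch of a human-comparison eval track: score AI outputs against
# expert references with a simple token-overlap metric (a toy proxy
# for accuracy/relevance scoring).

def overlap_score(candidate: str, reference: str) -> float:
    """Fraction of reference tokens that appear in the candidate."""
    cand = set(candidate.lower().split())
    ref = set(reference.lower().split())
    return len(cand & ref) / len(ref) if ref else 0.0

def run_eval(cases, threshold=0.5):
    """cases: list of (ai_output, expert_reference) pairs."""
    scores = [overlap_score(c, r) for c, r in cases]
    return {
        "mean_score": sum(scores) / len(scores),
        "pass_rate": sum(s >= threshold for s in scores) / len(scores),
    }
```

Whatever the metric, the structure is the same: benchmark every proposed use case against references before rollout, and keep re-running the suite as models and prompts change.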

Results: Today, 98% of Morgan Stanley advisors use OpenAI tools daily. Document access has increased from 20% to 80%, and search times have dropped dramatically. Advisors now spend more time on client relationships, supported by task automation and faster insights. Feedback has been overwhelmingly positive — tasks that once took days now take hours.

Lesson 2: Embed AI into Products
Case: How Indeed Humanized Job Matching

AI’s strength lies in handling vast datasets from multiple sources, enabling companies to automate repetitive work while making user experiences more relevant and personalized.

Indeed, the world’s largest job site, now uses GPT-4o mini to redefine job matching.

The “Why” Factor: Recommending good-fit jobs is just the beginning — it’s equally important to explain why a particular role is suggested.

By leveraging GPT-4o mini’s analytical and language capabilities, Indeed crafts natural-language explanations in its messages and emails to job seekers. Its popular "invite to apply" feature also explains how a candidate’s background makes them a great fit.

When tested against the prior matching engine, the GPT-powered version showed:

  • A 20% increase in job application starts

  • A 13% improvement in downstream hiring success

Given that Indeed sends over 20 million messages monthly and serves 350 million visits, these improvements translate to major business impact.

Scaling posed a challenge due to token usage. To improve efficiency, OpenAI and Indeed fine-tuned a smaller model that achieved similar results with 60% fewer tokens.

Helping candidates understand why they’re a fit for a role is a deeply human experience. With AI, Indeed is enabling more people to find the right job faster — a win for everyone.

Lesson 3: Start Early, Invest Ahead of Time
Case: Klarna’s Compounding Returns from AI Adoption

AI solutions rarely work out-of-the-box. Use cases grow in complexity and impact through iteration. Early adoption helps organizations realize compounding gains.

Klarna, a global payments and shopping platform, launched a new AI assistant to streamline customer service. Within months, the assistant handled two-thirds of all service chats — doing the work of hundreds of agents and reducing average resolution time from 11 to 2 minutes. It’s expected to drive $40 million in profit improvement, with customer satisfaction scores on par with human agents.

This wasn’t an overnight success. Klarna achieved these results through constant testing and iteration.

Today, 90% of Klarna’s employees use AI in their daily work, enabling faster internal launches and continuous customer experience improvements. By investing early and fostering broad adoption, Klarna is reaping ongoing returns across the organization.

Lesson 4: Customize and Fine-Tune Models
Case: How Lowe’s Improved Product Search

The most successful enterprises using AI are those that invest in customizing and fine-tuning models to fit their data and goals. OpenAI has invested heavily in making model customization easier — through both self-service tools and enterprise-grade support.

OpenAI partnered with Lowe’s, a Fortune 50 home improvement retailer, to improve e-commerce search accuracy and relevance. With thousands of suppliers, Lowe’s deals with inconsistent or incomplete product data.

Effective product search requires both accurate descriptions and an understanding of how shoppers search — which can vary by category. This is where fine-tuning makes a difference.

By fine-tuning OpenAI models, Lowe’s achieved:

  • A 20% improvement in labeling accuracy

  • A 60% increase in error detection

Fine-tuning allows organizations to train models on proprietary data such as product catalogs or internal FAQs, leading to:

  • Higher accuracy and relevance

  • Better understanding of domain-specific terms and user behavior

  • Consistent tone and voice, essential for brand experience or legal formatting

  • Faster output with less manual review
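Fine-tuning on proprietary data starts with preparing training examples. The sketch below shows one plausible shape for turning a product catalog into chat-format JSONL; the catalog fields, system prompt, and labeling task are hypothetical examples, not Lowe's actual data.

```python
import json

# Sketch: convert a product catalog into chat-format JSONL training
# examples, mapping raw supplier text to a normalized label.
# Field names and the system prompt are illustrative assumptions.

def catalog_to_jsonl(products) -> str:
    """Each product becomes one training example: the raw supplier
    description in, the label a merchandiser would assign out."""
    lines = []
    for p in products:
        example = {
            "messages": [
                {"role": "system", "content": "Normalize product labels."},
                {"role": "user", "content": p["raw_description"]},
                {"role": "assistant", "content": p["normalized_label"]},
            ]
        }
        lines.append(json.dumps(example))
    return "\n".join(lines)
```

The resulting file is what gets uploaded to a fine-tuning job; the quality of these pairs, not the job configuration, is usually what determines the accuracy gains described above.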

Lesson 5: Empower Domain Experts
Case: BBVA’s Expert-Led AI Adoption

Employees often know their problems best — making them ideal candidates to lead AI-driven solutions. Empowering domain experts can be more impactful than building generic tools.

BBVA, a global banking leader with over 125,000 employees, launched ChatGPT Enterprise across its operations. Employees were encouraged to explore their own use cases, supported by legal, compliance, and IT security teams to ensure responsible use.

“Traditionally, prototyping in companies like ours required engineering resources,” said Elena Alfaro, Global Head of AI Adoption at BBVA. “With custom GPTs, anyone can build tools to solve unique problems — getting started is easy.”

In just five months, BBVA staff created over 2,900 custom GPTs, leading to significant time savings and cross-departmental impact:

  • Credit risk teams: Faster, more accurate creditworthiness assessments

  • Legal teams: Handling 40,000+ annual policy and compliance queries

  • Customer service teams: Automating sentiment analysis of NPS surveys

The initiative is now expanding into marketing, risk, operations, and more — because AI was placed in the hands of people who know how to use it.

Lesson 6: Remove Developer Bottlenecks
Case: Mercado Libre Accelerates AI Development

In many organizations, developer resources are the primary bottleneck. When engineering teams are overwhelmed, innovation slows, and ideas remain stuck in backlogs.

Mercado Libre, Latin America's largest e-commerce and fintech company, partnered with OpenAI to build Verdi, a developer platform powered by GPT-4o and GPT-4o mini.

Verdi integrates language models, Python, and APIs into a scalable, unified platform where developers use natural language as the primary interface. This empowers 17,000 developers to build consistently high-quality AI applications quickly — without deep code dives. Guardrails and routing logic are built-in.
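The routing-plus-guardrails idea behind a platform like Verdi can be sketched as a dispatch table: cheap, well-structured tasks go to the small model, high-stakes tasks to the larger one, with guardrails applied uniformly. The task categories, model names, and guardrail labels below are assumptions, not Mercado Libre's actual configuration.

```python
# Illustrative sketch of task-to-model routing with built-in guardrails.
# Routes, model names, and guardrail names are hypothetical.

ROUTES = {
    "listing_catalog": "gpt-4o-mini",  # high-volume, structured work
    "fraud_review": "gpt-4o",          # high-stakes, needs more reasoning
}

def route_task(task_type: str, prompt: str) -> dict:
    """Pick a model for the task and attach the platform-wide guardrails."""
    model = ROUTES.get(task_type, "gpt-4o")  # default to the larger model
    guardrails = ["pii_filter", "output_schema_check"]  # always applied
    return {"model": model, "prompt": prompt, "guardrails": guardrails}
```

Centralizing this decision is what lets thousands of developers build consistent applications without each team re-deciding model choice and safety policy.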

Key results include:

  • 100x increase in cataloged products via automated listings using GPT-4o mini Vision

  • 99% accuracy in fraud detection through daily evaluation of millions of product listings

  • Multilingual product descriptions adapted to regional dialects

  • Automated review summarization to help customers understand feedback at a glance

  • Personalized notifications that drive engagement and boost recommendations

Next up: using Verdi to enhance logistics, reduce delivery delays, and tackle more high-impact problems across the enterprise.

Lesson 7: Set Bold Automation Goals
Case: How OpenAI Automates Its Own Work

At OpenAI, we work alongside AI every day — constantly discovering new ways to automate our own tasks.

One challenge was our support team’s workflow: navigating systems, understanding context, crafting responses, and executing actions — all manually.

We built an internal automation platform that layers on top of existing tools, streamlining repetitive tasks and accelerating insight-to-action workflows.

First use case: Working on top of Gmail to compose responses and trigger actions. The platform pulls in relevant customer data and support knowledge, then embeds results into emails or takes actions like opening support tickets.
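The compose-or-escalate pattern described above can be sketched as a small dispatcher. Everything here is a hypothetical illustration of the workflow shape, not OpenAI's internal platform: field names, the knowledge-base lookup, and the escalation rule are all invented.

```python
# Hypothetical sketch: gather context for an inbound support email,
# draft a reply from the knowledge base, or escalate to a ticket.

def handle_inbound(email: dict, knowledge: dict) -> dict:
    """Compose a draft reply when a matching knowledge article exists;
    otherwise open a support ticket for a human."""
    article = knowledge.get(email["topic"])
    if article is None:
        return {"action": "open_ticket", "draft": None}
    draft = f"Hi {email['sender']}, {article}"
    return {"action": "send_draft", "draft": draft}
```

The real platform layers an LLM over this shape, drafting replies from retrieved context rather than canned articles, but the draft-vs-escalate branch is the core of the automation.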

By integrating AI into daily workflows, the support team became more efficient, responsive, and customer-centric. The platform now handles hundreds of thousands of tasks per month — freeing teams to focus on higher-impact work.

It all began because we chose to set bold automation goals, not settle for inefficient processes.

Key Takeaways

As these OpenAI case studies show, every organization has untapped potential to use AI for better outcomes. Use cases may vary by industry, but the principles remain universal.

The Common Thread: AI deployment thrives on open, experimental thinking — grounded in rigorous evaluation and strong safety measures. The best-performing companies don’t rush to inject AI everywhere. Instead, they align on high-ROI, low-friction use cases, learn through iteration, and expand based on that learning.

The Result: Faster and more accurate workflows, more personalized customer experiences, and more meaningful work — as people focus on what humans do best.

We’re now seeing companies automate increasingly complex workflows — often with AI agents, tools, and resources working in concert to deliver impact at scale.

Related topic:

Exploring HaxiTAG Studio: The Future of Enterprise Intelligent Transformation
Leveraging HaxiTAG AI for ESG Reporting and Sustainable Development
Revolutionizing Market Research with HaxiTAG AI
How HaxiTAG AI Enhances Enterprise Intelligent Knowledge Management
The Application of HaxiTAG AI in Intelligent Data Analysis
The Application and Prospects of HaxiTAG AI Solutions in Digital Asset Compliance Management
Report on Public Relations Framework and Content Marketing Strategies

Saturday, July 12, 2025

From Tool to Productivity Engine: Goldman Sachs' Deployment of “Devin” Marks a New Inflection Point in AI Industrialization

Goldman Sachs’ pilot deployment of Devin, an AI software engineer developed by Cognition, represents a significant signal within the fintech domain and marks a pivotal shift in generative AI’s trajectory—from a supporting innovation to a core productivity engine. Driven by increasing technical maturity and deepening industry awareness, this initiative offers three profound insights:

Human-AI Collaboration Enters a Deeper Phase

That Devin still requires human oversight underscores a key reality: current AI tools are better suited as Augmented Intelligence Partners rather than full replacements. This deployment reflects a human-centered principle of AI implementation—emphasizing enhancement and collaboration over substitution. Enterprise service providers should guide clients in designing hybrid workflows that combine “AI + Human” synergy—for example, through pair programming or human-in-the-loop code reviews—and establish evaluation metrics to monitor efficiency and risk exposure.

From General AI to Industry-Specific Integration

The financial industry, known for its data intensity, strict compliance standards, and complex operational chains, is breaking new ground by embracing AI coding tools at scale. This signals a lowering of the trust barrier for deploying generative AI in high-stakes verticals. For solution providers, this reinforces the need to shift from generic models to scenario-specific AI capability modules. Emphasis should be placed on aligning with business value chains and identifying AI enablement opportunities in structured, repeatable, and high-frequency processes. In financial software development, this means building end-to-end AI support systems—from requirements analysis to design, compliance, and delivery—rather than deploying isolated model endpoints.

Synchronizing Organizational Capability with Talent Strategy

AI’s influence on enterprises now extends well beyond technology—it is reshaping talent structures, managerial models, and knowledge operating systems. Goldman Sachs’ adoption of Devin is pushing traditional IT teams toward hybrid roles such as prompt engineers, model tuners, and software developers, demanding greater interdisciplinary collaboration and cognitive flexibility. Industry mentors should assist enterprises in building AI literacy assessment frameworks, establishing continuous learning platforms, and promoting knowledge codification through integrated data assets, code reuse, and AI toolchains—advancing organizational memory towards algorithmic intelligence.

Conclusion

Goldman Sachs’ trial of Devin is not only a forward-looking move in financial digitization but also a landmark case of generative AI transitioning from capability-driven to value-driven industrialization. For enterprise service providers and AI ecosystem stakeholders, it represents both an opportunity and a challenge. Only by anchoring to real-world scenarios, strengthening organizational capabilities, and embracing human-AI synergy as a paradigm can enterprises actively lead in the generative AI era and build sustainable intelligent innovation systems.

Related Topic

Maximizing Market Analysis and Marketing growth strategy with HaxiTAG SEO Solutions - HaxiTAG
Boosting Productivity: HaxiTAG Solutions - HaxiTAG
HaxiTAG Studio: AI-Driven Future Prediction Tool - HaxiTAG
Seamlessly Aligning Enterprise Knowledge with Market Demand Using the HaxiTAG EiKM Intelligent Knowledge Management System - HaxiTAG
HaxiTAG Studio: Leading the Future of Intelligent Prediction Tools - HaxiTAG
Enhancing Business Online Presence with Large Language Models (LLM) and Generative AI (GenAI) Technology - HaxiTAG
Maximizing Productivity and Insight with HaxiTAG EIKM System - HaxiTAG
HaxiTAG Recommended Market Research, SEO, and SEM Tool: SEMRush Market Explorer - GenAI USECASE
HaxiTAG: Enhancing Enterprise Productivity with Intelligent Knowledge Management Solutions - HaxiTAG
HaxiTAG EIKM System: An Intelligent Journey from Information to Decision-Making - HaxiTAG

Monday, June 30, 2025

AI-Driven Software Development Transformation at Rakuten with Claude Code

Rakuten has achieved a transformative overhaul of its software development process by integrating Anthropic’s Claude Code, resulting in the following significant outcomes:

  • Claude Code demonstrated autonomous programming for up to seven continuous hours in complex open-source refactoring tasks, achieving 99.9% numerical accuracy;

  • New feature delivery time was reduced from an average of 24 working days to just 5 days, cutting time-to-market by 79%;

  • Developer productivity increased dramatically, enabling engineers to manage multiple tasks concurrently and significantly boost output.

Case Overview, Core Concepts, and Innovation Highlights

This transformation not only elevated development efficiency but also established a pioneering model for enterprise-grade AI-driven programming.

Application Scenarios and Effectiveness Analysis

1. Team Scale and Development Environment

Rakuten operates across more than 70 business units including e-commerce, fintech, and digital content, with thousands of developers serving millions of users. Claude Code effectively addresses challenges posed by multilingual, large-scale codebases, optimizing complex enterprise-grade development environments.

2. Workflow and Task Types

Workflows were restructured around Claude Code, encompassing unit testing, API simulation, component construction, bug fixing, and automated documentation generation. New engineers were able to onboard rapidly, reducing technology transition costs.

3. Performance and Productivity Outcomes

  • Development Speed: Feature delivery time dropped from 24 days to just 5, representing a breakthrough in efficiency;

  • Code Accuracy: Complex technical tasks were completed with up to 99.9% numerical precision;

  • Productivity Gains: Engineers managed concurrent task streams, enabling parallel development. Core tasks were prioritized by developers while Claude handled auxiliary workstreams.

4. Quality Assurance and Team Collaboration

AI-driven code review mechanisms provided real-time feedback, improving code quality. Automated test-driven development (TDD) workflows enhanced coding practices and enforced higher quality standards across the team.
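The test-first loop described above can be sketched in a few lines. The function and its test are hypothetical illustrations of the workflow, not Rakuten's actual code:

```python
# Illustrative TDD loop: the test is written first and initially fails; the
# implementation (here written by hand, in practice proposed by the coding
# assistant) is accepted only once the test passes.

def test_normalize_sku():
    assert normalize_sku(" ab-123 ") == "AB-123"

def normalize_sku(raw):
    """Implementation accepted after it satisfies the pre-written test."""
    return raw.strip().upper()

test_normalize_sku()  # green: the change can be merged
```

In an AI-assisted variant of this loop, the assistant regenerates the implementation until the human-authored tests pass, which is what enforces the quality bar.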

Strategic Implications and AI Adoption Advancements

  1. From Assistive Tool to Autonomous Producer: Claude Code has evolved from a tool requiring frequent human intervention to an autonomous “programming agent” capable of sustaining long-task executions, overcoming traditional AI attention span limitations.

  2. Building AI-Native Organizational Capabilities: Even non-technical personnel can now contribute via terminal interfaces, fostering cross-functional integration and enhancing organizational “AI maturity” through new collaborative models.

  3. Unleashing Innovation Potential: Rakuten has scaled AI utility from small development tasks to ambient agent-level automation, executing monorepo updates and other complex engineering tasks via multi-threaded conversational interfaces.

  4. Value-Driven Deployment Strategy: Rakuten prioritizes AI tool adoption based on value delivery speed and ROI, exemplifying rational prioritization and assurance pathways in enterprise digital transformation.

The Outlook for Intelligent Evolution

By adopting Claude Code, Rakuten has not only achieved a leap in development efficiency but also validated AI’s progression from a supportive technology to a core component of process architecture. This case highlights several strategic insights:

  • AI autonomy is foundational to driving both efficiency and innovation;

  • Process reengineering is the key to unlocking organizational potential with AI;

  • Cross-role collaboration fosters a new ecosystem, breaking down technical silos and making innovation velocity a sustainable competitive edge.

This case offers a replicable blueprint for enterprises across industries: by building AI-centric capability frameworks and embedding AI across processes, roles, and architectures, organizations can accumulate sustained performance advantages, experiential assets, and cultural transformation — ultimately elevating both organizational capability and business value in tandem.

Related Topic

Unlocking Enterprise Success: The Trifecta of Knowledge, Public Opinion, and Intelligence
From Technology to Value: The Innovative Journey of HaxiTAG Studio AI
Unveiling the Thrilling World of ESG Gaming: HaxiTAG's Journey Through Sustainable Adventures
Mastering Market Entry: A Comprehensive Guide to Understanding and Navigating New Business Landscapes in Global Markets
HaxiTAG's LLMs and GenAI Industry Applications - Trusted AI Solutions
Automating Social Media Management: How AI Enhances Social Media Effectiveness for Small Businesses
Challenges and Opportunities of Generative AI in Handling Unstructured Data
HaxiTAG: Enhancing Enterprise Productivity with Intelligent Knowledge Management Solutions

Monday, June 16, 2025

Case Study: How Walmart is Leading the AI Transformation in Retail

As one of the world's largest retailers, Walmart is advancing the adoption of artificial intelligence (AI) and generative AI (GenAI) at an unprecedented pace, aiming to revolutionize every facet of its operations—from customer experience to supply chain management and employee services. This retail titan is not only optimizing store operations for efficiency but is also rapidly emerging as a “technology-powered retailer,” setting new benchmarks for the commercial application of AI.

From Traditional Retail to AI-Driven Transformation

Walmart’s AI journey begins with a fundamental redefinition of the customer experience. In the past, shoppers had to locate products in sprawling stores, queue at checkout counters, and navigate after-sales service independently. Today, with the help of the AI assistant Sparky, customers can interact using voice, images, or text to receive personalized recommendations, price comparisons, and review summaries—and even reorder items with a single click.

Behind the scenes, store associates use the Ask Sam voice assistant to quickly locate products, check stock levels, and retrieve promotion details—drastically reducing reliance on manual searches and personal experience. Walmart reports that this tool has significantly enhanced frontline productivity and accelerated onboarding for new employees.

AI Embedded Across the Enterprise

Beyond customer-facing applications, Walmart is deeply embedding AI across internal operations. The intelligent assistant Wally, designed for merchandisers and purchasing teams, automates sales analysis and inventory forecasting, empowering more scientific replenishment and pricing decisions.

In supply chain management, AI is used to optimize delivery routes, predict overstock risks, reduce food waste, and even enable drone-based logistics. According to Walmart, more than 150,000 drone deliveries have already been completed across various cities, significantly enhancing last-mile delivery capabilities.

Key Implementations

  • Sparky (Customer Assistant): GenAI-powered recommendations, repurchase alerts, review summarization, multimodal input
  • Wally (Merchant Assistant): Product analytics, inventory forecasting, category management
  • Ask Sam (Employee Assistant): Voice-based product search, price checks, in-store navigation
  • GenAI Search (Customer Tool): Semantic search and review summarization for improved conversion
  • AI Chatbot (Customer Support): Handles standardized issues such as order tracking and returns
  • AI Interview Coach (HR Tool): Enhances fairness and efficiency in recruitment
  • Loss Prevention System (Security Tech): RFID- and AI-enabled camera surveillance for anomaly detection
  • Drone Delivery System (Logistics Innovation): Over 150,000 deliveries completed; expansion ongoing

From Models to Real-World Applications: Walmart’s AI Strategy

Walmart’s AI strategy is anchored by four core pillars:

  1. Domain-Specific Large Language Models (LLMs): Walmart has developed its own retail-specific LLM, Wallaby, to enhance product understanding and user behavior prediction.

  2. Agentic AI Architecture: Autonomous agents automate tasks such as customer inquiries, order tracking, and inventory validation.

  3. Global Scalability: From inception, Walmart's AI capabilities are designed for global deployment, enabling “train once, deploy everywhere.”

  4. Data-Driven Personalization: Leveraging behavioral and transactional data from hundreds of millions of users, Walmart delivers deeply personalized services at scale.
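The agentic pattern in pillar 2 can be illustrated with a minimal intent dispatcher. All intents, handlers, and messages below are hypothetical assumptions for illustration; Walmart's actual architecture is not public:

```python
# Minimal sketch of agentic dispatch: a classified customer intent is routed
# to the agent that owns it, with a human-escalation fallback. The intents
# and handlers are hypothetical examples only.

def track_order(order_id):
    return f"Order {order_id} is in transit."

def check_inventory(sku):
    return f"SKU {sku} is in stock."

# Each "agent" owns one task type.
AGENTS = {
    "order_tracking": track_order,
    "inventory_check": check_inventory,
}

def route(intent, payload):
    """Dispatch a classified intent to the owning agent, or escalate."""
    handler = AGENTS.get(intent)
    if handler is None:
        return "Escalated to a human associate."
    return handler(payload)
```

The escalation branch reflects the human-AI boundary concern raised later in this case: tasks outside any agent's remit fall back to people rather than being guessed at.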

Challenges and Ethical Considerations

Despite notable success, Walmart faces critical challenges in its AI rollout:

  • Data Accuracy and Bias Mitigation: Preventing algorithmic bias and distorted predictions, especially in sensitive areas like recruitment and pricing.

  • User Adoption: Encouraging customers and employees to trust and embrace AI as a routine decision-making tool.

  • Risks of Over-Automation: While Agentic AI boosts efficiency, excessive automation risks diminishing human oversight, necessitating clear human-AI collaboration boundaries.

  • Emerging Competitive Threats: AI shopping assistants like OpenAI’s “Operator” could bypass traditional retail channels, altering customer purchase pathways.

The Future: Entering the Era of AI Collaboration

Looking ahead, Walmart plans to launch personalized AI shopping agents that can be trained by users to understand their preferences and automate replenishment orders. Simultaneously, the company is exploring agent-to-agent retail protocols, enabling machine-to-machine negotiation and transaction execution. This form of interaction could fundamentally reshape supply chains and marketing strategies.

Marketing is also evolving—from traditional visual merchandising to data-driven, precision exposure strategies. The future of retail may no longer rely on the allure of in-store lighting and advertising, but on the AI-powered recommendation chains displayed on customers’ screens.

Walmart’s AI transformation exhibits three critical characteristics that serve as reference for other industries:

  • End-to-End Integration of AI (Front-to-Back AI)

  • Deep Fine-Tuning of Foundation Models with Retail-Specific Knowledge

  • Proactive Shaping of an AI-Native Retail Ecosystem

This case study provides a tangible, systematic reference for enterprises in retail, manufacturing, logistics, and beyond, offering practical insights into deploying GenAI, constructing intelligent agents, and undertaking organizational transformation.

Walmart also plans to roll out assistants like Sparky to Canada and Mexico, testing the cross-regional adaptability of its AI capabilities in preparation for global expansion.

While enterprise GenAI applications represent a forward-looking investment, 92% of effective use cases still emerge from ground-level operations. This underscores the need for flexible strategies that align top-down design with bottom-up innovation. Notably, the case lacks a detailed discussion on data governance frameworks, which may impact implementation fidelity. A dynamic assessment mechanism is recommended, aligning technological maturity with organizational readiness through a structured matrix—ensuring a clear and measurable path to value realization.

Related Topic

Unlocking Enterprise Success: The Trifecta of Knowledge, Public Opinion, and Intelligence
From Technology to Value: The Innovative Journey of HaxiTAG Studio AI
Unveiling the Thrilling World of ESG Gaming: HaxiTAG's Journey Through Sustainable Adventures
Mastering Market Entry: A Comprehensive Guide to Understanding and Navigating New Business Landscapes in Global Markets
HaxiTAG's LLMs and GenAI Industry Applications - Trusted AI Solutions
Automating Social Media Management: How AI Enhances Social Media Effectiveness for Small Businesses
Challenges and Opportunities of Generative AI in Handling Unstructured Data
HaxiTAG: Enhancing Enterprise Productivity with Intelligent Knowledge Management Solutions

Thursday, May 1, 2025

How to Identify and Scale AI Use Cases: A Three-Step Strategy and Best Practices Guide

The "Identifying and Scaling AI Use Cases" report by OpenAI outlines a three-step strategy for identifying and scaling AI applications, providing best practices and operational guidelines to help businesses efficiently apply AI in diverse scenarios.

I. Identifying AI Use Cases

  1. Identifying Key Areas: The first step is to identify AI opportunities in the company's day-to-day operations, focusing in particular on tasks that are low-value, highly repetitive, and currently inefficient. AI can help automate processes, optimize data analysis, and accelerate decision-making, freeing employees to focus on more strategic work.

  2. Concept of AI as a Super Assistant: AI can act as a super assistant, supporting all work tasks, particularly in areas such as low-value repetitive tasks, skill bottlenecks, and navigating uncertainty. For example, AI can automatically generate reports, analyze data trends, assist with code writing, and more.

II. Scaling AI Use Cases

  1. Six Core Use Cases: Businesses can apply the following six core use cases based on the needs of different departments:

    • Content Creation: Automating the generation of copy, reports, product manuals, etc.

    • Research: Using AI for market research, competitor analysis, and other research tasks.

    • Coding: Assisting developers with code generation, debugging, and more.

    • Data Analysis: Automating the processing and analysis of multi-source data.

    • Ideation and Strategy: Providing creative support and generating strategic plans.

    • Automation: Simplifying and optimizing repetitive tasks within business processes.

  2. Internal Promotion: Encourage employees across departments to identify AI use cases through regular activities such as hackathons, workshops, and peer learning sessions. By starting with small-scale pilot projects, organizations can accumulate experience and gradually scale up AI applications.

III. Prioritizing Use Cases

  1. Impact/Effort Matrix: By evaluating each AI use case in terms of its impact and effort, prioritize those with high impact and low effort. These are often the best starting points for quickly delivering results and driving larger-scale AI application adoption.

  2. Resource Allocation and Leadership Support: High-value, high-effort use cases require more time, resources, and support from top management. Starting with small projects and gradually expanding their scale will allow businesses to enhance their overall AI implementation more effectively.
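The impact/effort screening above can be sketched as a simple scoring routine. The candidate use cases, the 1-to-5 scores, and the threshold are illustrative assumptions, not values from the OpenAI report:

```python
# Illustrative impact/effort matrix: classify candidate AI use cases into
# quadrants and rank them. Scores (1-5) and the threshold are hypothetical.

def quadrant(impact, effort, threshold=3):
    """Classify a use case into one of four impact/effort quadrants."""
    if impact >= threshold and effort < threshold:
        return "quick win"        # high impact, low effort: start here
    if impact >= threshold:
        return "strategic bet"    # needs leadership support and resources
    if effort < threshold:
        return "fill-in"
    return "deprioritize"

candidates = {
    "report generation": (4, 2),      # (impact, effort)
    "code assistant rollout": (5, 4),
    "meeting summaries": (3, 1),
    "legacy system rewrite": (2, 5),
}

# Rank by effort minus impact: the most favorable trade-offs come first.
ranked = sorted(candidates.items(), key=lambda kv: kv[1][1] - kv[1][0])
for name, (impact, effort) in ranked:
    print(f"{name}: {quadrant(impact, effort)}")
```

Starting from the top of such a ranking operationalizes the report's advice to begin with high-impact, low-effort pilots before committing to strategic bets.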

IV. Implementation Steps

  1. Understanding AI’s Value: The first step is to identify which business areas can benefit most from AI, such as automating repetitive tasks or enhancing data analysis capabilities.

  2. Employee Training and Framework Development: Provide training to employees to help them understand and master the six core use cases. Practical examples can be used to help employees better identify AI's potential.

  3. Prioritizing Projects: Use the impact/effort matrix to prioritize all AI use cases. Start with high-benefit, low-cost projects and gradually expand to other areas.

Summary

When implementing AI use case identification and scaling, businesses should focus on foundational tasks, identifying high-impact use cases, and promoting full employee participation through training, workshops, and other activities. Start with low-effort, high-benefit use cases for pilot projects, and gradually build on experience and data to expand AI applications across the organization. Leadership support and effective resource allocation are also crucial for the successful adoption of AI.


Wednesday, April 9, 2025

Rethinking Human-AI Collaboration: The Future of Synergy Between AI Agents and Knowledge Professionals

Reading notes and shared reflections on the Stanford article “rethinking-human-ai-agent-collaboration-for-the-knowledge-worke”.

Opening Perspective

2025 has emerged as the “Year of AI Agents.” Yet, beneath the headlines lies a more fundamental inquiry: what does this truly mean for professionals in knowledge-intensive industries—law, finance, consulting, and beyond?

We are witnessing a paradigm shift: LLMs are no longer merely tools, but evolving into intelligent collaborators—AI agents acting as “machine colleagues.” This transformation is redefining human-machine interaction and reconstructing the core of what we mean by “collaboration” in professional environments.

From Hierarchies to Dynamic Synergy

Traditional legal and consulting workflows follow a pipeline model—linear, hierarchical, and role-bound. AI agents introduce a more fluid, adaptive mode of working—closer to collaborative design or team sports. In this model, tasks are distributed based on contextual awareness and capabilities, not rigid roles.

This shift requires AI agents and humans to co-navigate multi-objective, fast-changing workflows, with real-time alignment and adaptive task planning as core competencies.

The Co-Gym Framework: A New Foundation for AI Collaboration

Stanford’s “Collaborative Gym” (Co-Gym) framework offers a pioneering response. By creating an interactive simulation environment, Co-Gym enables:

  • Deep human-AI pre-task interaction

  • Clarification of shared objectives

  • Negotiated task ownership

This strengthens not only the AI’s contextual grounding but also supports human decision paths rooted in intuition, anticipation, and expertise.
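A minimal sketch of this pre-task alignment loop might look as follows. The `SharedTask` structure and its fields are assumptions made for illustration, not Co-Gym's actual interface:

```python
# Hedged sketch of pre-task human-AI alignment: before execution, the agent
# surfaces open questions and records negotiated task ownership. The fields
# and flow are illustrative; the real Co-Gym environment differs.

from dataclasses import dataclass, field

@dataclass
class SharedTask:
    objective: str
    open_questions: list = field(default_factory=list)
    ownership: dict = field(default_factory=dict)  # subtask -> "human" | "agent"

    def clarify(self, question, answer):
        """Resolve an open question through human input, refining the objective."""
        self.open_questions.remove(question)
        self.objective += f" ({question} {answer})"

    def ready(self):
        """Execution starts only once objectives and ownership are settled."""
        return not self.open_questions and bool(self.ownership)

task = SharedTask(
    objective="Draft due-diligence summary",
    open_questions=["Which jurisdictions are in scope?"],
)
task.ownership = {"document review": "agent", "final sign-off": "human"}
task.clarify("Which jurisdictions are in scope?", "US and EU")
```

The point of the sketch is the gating: the agent cannot proceed while questions remain open or ownership is unassigned, which is the "negotiated task ownership" idea in miniature.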

Use Case: M&A as a Stress Test for Human-AI Collaboration

M&A transactions exemplify high complexity, high stakes, and fast-shifting priorities. From due diligence to compliance, unforeseen variables frequently reshuffle task priorities.

Under conventional AI systems, such volatility results in execution errors or strategic misalignment. In contrast, a Co-Gym-enabled AI agent continuously re-assesses objectives, consults human stakeholders, and reshapes the workflow—ensuring that collaboration remains robust and aligned.

Case-in-Point

During a share acquisition negotiation, the sudden discovery of a patent litigation issue triggers the AI agent to:

  • Proactively raise alerts

  • Suggest tactical adjustments

  • Reorganize task flows collaboratively

This “co-creation mechanism” not only increases accuracy but reinforces human trust and decision authority—two critical pillars in professional domains.

Beyond Function: A Philosophical Reframing

Crucially, Co-Gym is not merely a feature set—it is a philosophical reimagining of intelligent systems.
Effective AI agents must be communicative, context-sensitive, and capable of balancing initiative with control. Only then can they become:

  • Conversational partners

  • Strategic collaborators

  • Co-creators of value

Looking Ahead: Strategic Recommendations

We recommend expanding the Co-Gym model across other professional domains featuring complex workflows, including:

  • Venture capital and startup financing

  • IPO preparation

  • Patent lifecycle management

  • Corporate restructuring and bankruptcy

In parallel, we are developing fine-grained task coordination strategies between multiple AI agents to scale collaborative effectiveness and further elevate the agent-to-partner transition.

Final Takeaway

2025 marks an inflection point in human-AI collaboration. With frameworks like Co-Gym, we are transitioning from command-execution to shared-goal creation.
This is not merely a technological evolution; it is the dawn of a new work paradigm in which AI agents and professionals co-shape the future.

Related Topic

Research and Business Growth of Large Language Models (LLMs) and Generative Artificial Intelligence (GenAI) in Industry Applications - HaxiTAG
LLM and Generative AI-Driven Application Framework: Value Creation and Development Opportunities for Enterprise Partners - HaxiTAG
Enterprise Partner Solutions Driven by LLM and GenAI Application Framework - GenAI USECASE
Unlocking Potential: Generative AI in Business - HaxiTAG
LLM and GenAI: The New Engines for Enterprise Application Software System Innovation - HaxiTAG
Exploring LLM-driven GenAI Product Interactions: Four Major Interactive Modes and Application Prospects - HaxiTAG
Developing LLM-based GenAI Applications: Addressing Four Key Challenges to Overcome Limitations - HaxiTAG
Exploring Generative AI: Redefining the Future of Business Applications - GenAI USECASE
Leveraging LLM and GenAI: ChatGPT-Driven Intelligent Interview Record Analysis - GenAI USECASE
How to Effectively Utilize Generative AI and Large-Scale Language Models from Scratch: A Practical Guide and Strategies - GenAI USECASE

Sunday, December 29, 2024

Case Study and Insights on BMW Group's Use of GenAI to Optimize Procurement Processes

Overview and Core Concept:

BMW Group, in collaboration with Boston Consulting Group (BCG) and Amazon Web Services (AWS), implemented the "Offer Analyst" GenAI application to optimize traditional procurement processes. This project centers on automating bid reviews and comparisons to enhance efficiency and accuracy, reduce human errors, and improve employee satisfaction. The case demonstrates the transformative potential of GenAI technology in enterprise operational process optimization.

Innovative Aspects:

  1. Process Automation and Intelligent Analysis: The "Offer Analyst" integrates functions such as information extraction, standardized analysis, and interactive analysis, transforming traditional manual operations into automated data processing.
  2. User-Customized Design: The application caters to procurement specialists' needs, offering flexible custom analysis features that enhance usability and adaptability.
  3. Serverless Architecture: Built on AWS’s serverless framework, the system ensures high scalability and resilience.

Application Scenarios and Effectiveness Analysis:
BMW Group's traditional procurement processes involved document collection, review and shortlisting, and bid selection. These tasks were repetitive, error-prone, and burdensome for employees. The "Offer Analyst" delivered the following outcomes:

  • Efficiency Improvement: Automated RFP and bid document uploads and analyses significantly reduced manual proofreading time.
  • Decision Support: Real-time interactive analysis enabled procurement experts to evaluate bids quickly, optimizing decision-making.
  • Error Reduction: Automated compliance checks minimized errors caused by manual operations.
  • Enhanced Employee Satisfaction: Relieved from tedious tasks, employees could focus on more strategic activities.
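The automated bid-review step can be sketched as field validation plus ranking. The schema, the 12-week delivery rule, and the sample offers below are hypothetical, not the Offer Analyst's real logic:

```python
# Illustrative bid review: validate extracted offer fields against simple
# compliance rules, then shortlist compliant bids cheapest-first. The field
# names, rules, and offers are hypothetical examples.

REQUIRED_FIELDS = {"supplier", "price_eur", "delivery_weeks"}

def check_compliance(offer):
    """Return a list of problems; an empty list means the bid is compliant."""
    problems = [f"missing field: {f}" for f in REQUIRED_FIELDS - offer.keys()]
    if offer.get("delivery_weeks", 0) > 12:
        problems.append("delivery exceeds 12-week limit")
    return problems

def shortlist(offers):
    """Keep compliant offers, cheapest first."""
    ok = [o for o in offers if not check_compliance(o)]
    return sorted(ok, key=lambda o: o["price_eur"])

offers = [
    {"supplier": "A", "price_eur": 90_000, "delivery_weeks": 8},
    {"supplier": "B", "price_eur": 80_000, "delivery_weeks": 16},  # too slow
    {"supplier": "C", "price_eur": 85_000, "delivery_weeks": 10},
]
```

Automating exactly this kind of extraction-then-check pass is what removes the repetitive proofreading burden the case describes, while leaving the final selection to the procurement expert.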

Inspiration and Advanced Insights into AI Applications:
BMW Group’s success highlights that GenAI can enhance operational efficiency and significantly improve employee experience. This case provides critical insights:

  1. Intelligent Business Process Transformation: GenAI can be deeply integrated into key enterprise processes, fundamentally improving business quality and efficiency.
  2. Optimized Human-AI Collaboration: The application’s user-centric design transfers mundane tasks to AI, freeing human resources for higher-value functions.
  3. Flexible Technical Architecture: The use of serverless architecture and API integration ensures scalability and cross-system collaboration for future expansions.

In the future, applications like the "Offer Analyst" can extend beyond procurement to areas such as supply chain management, financial analysis, and sales forecasting, providing robust support for enterprises’ digital transformation. BMW Group’s case sets a benchmark for driving AI application practices, inspiring other industries to adopt similar models for smarter and more efficient operations.

Related Topic

Innovative Application and Performance Analysis of RAG Technology in Addressing Large Model Challenges

HaxiTAG: Enhancing Enterprise Productivity with Intelligent Knowledge Management Solutions

Leveraging Large Language Models (LLMs) and Generative AI (GenAI) Technologies in Industrial Applications: Overcoming Three Key Challenges

HaxiTAG's Studio: Comprehensive Solutions for Enterprise LLM and GenAI Applications

HaxiTAG Studio: Pioneering Security and Privacy in Enterprise-Grade LLM GenAI Applications

HaxiTAG Studio: The Intelligent Solution Revolutionizing Enterprise Automation

HaxiTAG Studio: Leading the Future of Intelligent Prediction Tools

HaxiTAG Studio: Advancing Industry with Leading LLMs and GenAI Solutions

HaxiTAG Studio Empowers Your AI Application Development

HaxiTAG Studio: End-to-End Industry Solutions for Private datasets, Specific scenarios and issues