Launching the AI Center of Excellence: Moving AI from Prototypes to Production

We hosted our inaugural AI Center of Excellence meetup in New York, bringing together product and engineering leaders from over two dozen enterprises to discuss their experiences building applications with LLMs. Many thanks to our friends from BlackRock, Cisco, Citi, Cooley LLP, Credit Karma, CrowdStrike, Duolingo, Goldman Sachs, Instagram, MetLife, Morgan Stanley, Jefferies, JPMorgan Chase, Nielsen, Nutanix, Point72, Scale AI, S&P Global, and the University of Michigan for their contributions to our community.

Deploying AI is a strategic priority across the Fortune 500, and in 2024 the focus is on which applications will move from prototype to production to create real ROI. As more organizations embrace AI and integrate LLMs into their workflows, it is clear that transformative gains can be made once technical and operational challenges are overcome. Where are enterprises finding the most significant ROI? And where is the promise of AI not living up to the hype? It is often hard to tell without digging deep.

The Low Hanging Fruit: Automating Routine Work vs. “Moonshots”

In 2023, OpenAI’s ChatGPT and Microsoft’s GitHub Copilot were the only pervasive AI tools in the enterprise, and much of the industry focused on building a pipeline of possibilities. Entering 2024, every major software company is piloting or developing copilot capabilities for its existing products, and internal teams have dozens to hundreds of potential use cases where LLMs can automate or augment human work. Finding the applications that drive strong end-user adoption and financial return is now the key to unlocking budgets and organizational support for larger AI-enabled deployments. Building an enterprise-wide roadmap can help align stakeholders and prevent widespread experimentation that never leads to commercial success.

The map above is a crowdsourced depiction of where AI is being deployed across our enterprise community: a subjective visualization of how our AI Center of Excellence members perceive the rate at which LLM use cases are adopted internally and the value delivered by each piloted application. Individual companies may have a completely different experience or viewpoint than depicted here, but many agree with the overall trends across the industry. Some commonly shared, and at times counterintuitive, observations:

  • The most pervasive use cases were code generation and copilots for developer productivity, followed closely by LLM usage for internal knowledge sharing and automating routine business processes. The largest internal and external deployments of these LLM-enabled apps are now measured in the tens of thousands of users and customers (the largest cited >100k users).
  • Some of the most substantial savings from AI adoption come from the least glamorous use cases. Automating routine tasks, such as language translation or customer support inquiries, is often the fastest path to ROI. Some customers have reported eight-figure returns from surprisingly “unsexy” use cases.
  • Every successful deployment of AI still has a “human in the loop.” Most executives saw significant boosts in task productivity for each person enabled, and did not forecast a decline in headcount when LLMs were used at scale.
  • Many organizations felt that a portfolio of routine tasks with clear ROI complemented investments in more ambitious moonshot projects. This diversification allowed executives to use quick-hitting ROI wins to fund more expensive, strategic AI initiatives.

Though there is always friction in finding budget and executive sponsorship, most enterprises are accelerating their adoption of AI despite the technical and organizational hurdles. One of the larger internal speed bumps is meeting compliance and security requirements; navigating these concerns is new territory for every organization, each of which must learn the emerging rules and regulations governing AI. Stakeholders outside of product and engineering play an increasingly important role in the pace at which any AI application is deployed, and collaborating across these domains will be essential for the foreseeable future.

Joining the AI Center of Excellence Community

We created our AI Center of Excellence to bring together product and engineering executives from Fortune 500 companies, high-growth startups, and major tech platforms in AI to facilitate knowledge sharing and collaboration across the industry. Our initial list of 35 members represents some of the largest and earliest adopters of LLMs in the industry, and we are excited to expand this group in the coming months.

Partial List of Founding Members of the AI Center of Excellence

As enterprises embark on their journey to deploy AI, they often encounter the same challenges as their peers across the industry. Through this collaborative effort, we believe organizations of all sizes and sectors can benefit from the best practices and lessons learned when deploying AI. If you are a senior executive building with LLMs today, please reach out to join our community. We convene in physical and virtual forums throughout the year and will share insights regularly on this blog.

Thanks to all for participating in our first event - we look forward to seeing you soon!