The next phase of enterprise AI: building a unified operating layer and an AI superapp
Summary
OpenAI argues that enterprises should move beyond standalone copilots and focus on building an underlying intelligence layer that embeds AI deeply into company-wide workflows. Alongside the capabilities of its latest models such as GPT-5.4, OpenAI Frontier lets a company's agents operate across internal systems and data sources, so enterprises can treat AI as core infrastructure for how work gets done. The end goal is a unified AI superapp used by every employee, maximizing productivity.
Key points
- Enterprises are moving past individual copilots and now want a unified operating layer that spans their entire business.
- With Frontier, OpenAI offers a full-stack solution, from infrastructure and models to interfaces, aiming to become the core AI foundation for enterprises.
- As agent-centric workflows become mainstream, OpenAI is working with partners such as AWS on a Stateful Runtime Environment that lets agents keep context and move across multiple systems.
- The ultimate goal is a unified AI superapp, a single touchpoint for every employee that combines ChatGPT's user experience with agent capabilities to maximize productivity.
The next phase of enterprise AI
I just wrapped my first 90 days with OpenAI and have had the opportunity to meet with hundreds of our customers. What has struck me most is their immense sense of urgency and readiness. I’ve spent my entire career at the intersection of technology and enterprise transformation, and yet, I have never seen this level of conviction spread so quickly and consistently across industries. These leaders recognize AI as the most consequential shift of their lifetime, and they’re asking us how to reinvent their companies around it.
I also saw that conviction reflected in our business this quarter. Building on our consumer strength, enterprise now makes up more than 40% of our revenue, and is on track to reach parity with consumer by the end of 2026. Codex just hit 3 million weekly active users, our APIs process more than 15 billion tokens per minute, and GPT‑5.4 is driving record engagement across agentic workflows. We’re seeing demand from new customers like Goldman Sachs, Phillips, and State Farm, and also growing with existing ones like Cursor, DoorDash, Thermo Fisher, and LY Corporation.
It’s clear we’re past the experimentation phase. AI is now doing real work, and as a result, every company is grappling with two main questions:
- How do we put the most capable AI to work across the entire business, not just individual copilots and assistants?
- How do we make AI part of people’s everyday work, so it helps them unlock their full potential?
These questions will define how companies operate and compete in the years ahead, and that’s what our enterprise strategy is building toward: Frontier as the underlying intelligence layer governing all of a company’s agents, and a unified AI superapp as the primary experience where employees get things done.
OpenAI is uniquely positioned to shape the future of enterprise because we are one of the few companies building the full stack, from infrastructure and models to the interfaces employees use every day. We are listening to our customers and quickly becoming the core infrastructure for AI, making it possible for people around the world and businesses, big and small, to just build things and confidently step into the future of work.
As we’ve shared before, the world is in a phase of capability overhang, where AI models can already do far more than most people and enterprises are using them for today. We are committed to closing that gap by making frontier intelligence usable, trusted, and embedded in how work actually gets done.
One thing I hear over and over is that companies are tired of AI point solutions that don’t talk to each other and just create chaos. They want AI to be a unified operating layer for their business, with AI coworkers grounded in their company’s context, connected to internal systems and external data sources, and governed by the right permissions and controls. That is what we’re providing with OpenAI Frontier, which is helping customers like Oracle, State Farm, and Uber build, deploy, and manage agents company-wide. While other solutions embed agents within a single product or environment, Frontier enables agents to move across a company’s systems and data, working across tools and continuing to improve over time.
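To make the governance idea concrete, here is a minimal sketch of agents reaching internal tools only through a registry that enforces per-agent permissions. Every name, role, and permission string here is a hypothetical stand-in, not Frontier's actual API.

```python
# Hypothetical sketch: a tool registry that gates each tool call
# behind a required permission, so an agent can only act within
# the controls an administrator has granted it.
class ToolRegistry:
    def __init__(self):
        self._tools = {}  # tool name -> (handler, required permission)

    def register(self, name, handler, requires):
        self._tools[name] = (handler, requires)

    def call(self, agent_permissions, name, *args):
        handler, requires = self._tools[name]
        if requires not in agent_permissions:
            raise PermissionError(f"agent lacks '{requires}' for {name}")
        return handler(*args)

registry = ToolRegistry()
registry.register("read_wiki", lambda page: f"contents of {page}", requires="docs:read")
registry.register("update_crm", lambda rec: "updated", requires="crm:write")

support_agent = {"docs:read"}  # a read-only agent
print(registry.call(support_agent, "read_wiki", "onboarding"))  # contents of onboarding
# registry.call(support_agent, "update_crm", {})  # would raise PermissionError
```

The point of the sketch is that permissions live in one governed layer rather than being re-implemented inside each point solution.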
On top of being a research company building frontier models, we’re also a deployment company. We’ve taken what we’ve learned from working directly with hundreds of large enterprises on integrating AI agents and turned it into a scalable foundation. Together with our Frontier Alliances partners McKinsey & Company, Boston Consulting Group (BCG), Accenture, and Capgemini, and other partners like Amazon Web Services (AWS), Databricks, and Snowflake, we help enterprises integrate OpenAI’s intelligence into the infrastructure and data ecosystems they already rely on. For example, our Stateful Runtime Environment, which we’re building with AWS, makes it simple for agents to keep context, remember prior work, and operate across a business’s tools and data, so it’s far more effective for complex, real-world use cases.
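The "keep context, remember prior work" idea can be sketched as a session object that persists conversation context and tool results between runs, so a later step resumes where an earlier one left off. This is an illustration of the concept only; the class, fields, and file path are all hypothetical, not the OpenAI or AWS runtime's API.

```python
# Hypothetical sketch of a stateful agent session: conversation
# context and prior tool results are persisted, so a follow-up run
# (or another agent) can build on earlier work instead of starting cold.
import json
from dataclasses import dataclass, field

@dataclass
class AgentSession:
    session_id: str
    messages: list = field(default_factory=list)  # running conversation context
    memory: dict = field(default_factory=dict)    # results of prior tool calls

    def record(self, role: str, content: str) -> None:
        self.messages.append({"role": role, "content": content})

    def remember(self, key: str, value) -> None:
        self.memory[key] = value

    def save(self, path: str) -> None:
        # Persist state so the next run can resume it.
        with open(path, "w") as f:
            json.dump({"session_id": self.session_id,
                       "messages": self.messages,
                       "memory": self.memory}, f)

    @classmethod
    def load(cls, path: str) -> "AgentSession":
        with open(path) as f:
            data = json.load(f)
        return cls(data["session_id"], data["messages"], data["memory"])

# First run: the agent gathers data from one tool...
s = AgentSession("quarterly-report")
s.record("user", "Pull last quarter's revenue by region.")
s.remember("revenue_by_region", {"EMEA": 1.2, "APAC": 0.9})
s.save("/tmp/session.json")

# ...later run: a different step resumes with full context.
resumed = AgentSession.load("/tmp/session.json")
print(resumed.memory["revenue_by_region"]["EMEA"])  # 1.2
```

A production runtime would add durable storage, concurrency control, and access policies; the sketch only shows why shared state makes multi-step, multi-tool work tractable.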
As AI scales across the company, it also has to effortlessly show up in the daily workflow of every person and team. That’s why we’re building towards a unified AI superapp: one place where employees can work with AI agents throughout the day to complete tasks and take action across the tools they already use. This experience will bring together the best of ChatGPT, Codex, agentic browsing, and broader capabilities in order to multiply what individual employees and small teams can accomplish.
In recent months, we’ve seen a shift where the people who are furthest ahead have gone from using AI for help on tasks, to managing teams of agents to do tasks for them. The shift started with agentic tools like Codex, which has grown more than 5X since the start of the year. This includes customers like GitHub, Nextdoor, Notion, and Wonderful that are building multi-agent systems that can execute engineering work end-to-end. We’ve also started to see employees in every function adopting agents in their workflows. For example, our sales team brings in new business using an agent that researches inbound prospects, scores them against a rubric, sends a personalized email to qualified leads, and updates the CRM for them.
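The sales agent described above can be sketched as a simple pipeline: research each inbound prospect, score it against a rubric, email the qualified leads, and log every result to the CRM. The rubric weights, threshold, and stub functions below are invented for illustration, not the actual agent or any CRM API.

```python
# Illustrative sketch of the sales-agent pipeline: score prospects
# against a rubric, email qualified leads, and update the CRM.
RUBRIC = {"company_size": 3, "budget_confirmed": 4, "industry_fit": 3}

def score(prospect: dict) -> int:
    # Weighted rubric: each criterion contributes its weight if met.
    return sum(w for k, w in RUBRIC.items() if prospect.get(k))

def run_pipeline(prospects, crm, qualify_at=6):
    sent = []
    for p in prospects:
        p["score"] = score(p)
        if p["score"] >= qualify_at:
            sent.append(f"Hi {p['name']}, ...")  # personalized email (stub)
        crm[p["name"]] = p["score"]              # always log the result
    return sent, crm

leads = [
    {"name": "Acme", "company_size": True, "budget_confirmed": True},
    {"name": "Tiny Co", "industry_fit": True},
]
emails, crm = run_pipeline(leads, {})
print(len(emails), crm)  # 1 {'Acme': 7, 'Tiny Co': 3}
```

In practice each stub would be a model or tool call, but the shape is the same: the agent owns the whole loop rather than assisting a human at one step of it.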
We’re excited to bring new solutions to enterprises that will make agents more accessible to everyone. One of OpenAI’s biggest advantages is our ability to bridge personal and professional use cases. ChatGPT has 900 million weekly users, which means employees already know how to work with it. For enterprises, that reduces rollout friction and accelerates the point where every employee can delegate tedious tasks and take on more ambitious projects.
--
My first quarter at OpenAI has made me more convinced than ever that the AI transformation is happening faster than most people realize. Enterprises want a partner who understands the scale of this transition and can help them confidently move forward. That means meeting them in the systems they already rely on, giving them a practical path from experimentation to deployment, and making adoption easier through the right pricing and packaging. Above all, they want to trust that the company helping them make this transformation is invested in their success and building for their needs.
At OpenAI, I feel the commitment at every level, in every function. We are wholeheartedly focused on continuously earning the right to help enterprises – and the people behind them – reinvent their companies for the future of AGI with clarity, confidence, and trust. It's the opportunity and responsibility of a lifetime, and I couldn’t be more excited about what we’re building with our customers and partners.
AI-generated content
This content is an AI-generated summary, translation, and analysis of the original post on the OpenAI Blog. Copyright remains with the original author; please refer to the original for the authoritative text.