
© 2026 Molayo

OpenAI | Breaking News | Headlines | 2026. 04. 24. 19:55

OpenAI Accelerates Its Push to Become AI Infrastructure: $122B Raised and Growth Strategy Announced

Summary

OpenAI has closed a $122 billion funding round, cementing its market dominance. The company has secured a consumer foothold through ChatGPT and is combining it with enterprise solutions and developer API usage to build a powerful "growth flywheel." It is currently generating $2 billion in monthly revenue, growing four times faster than the giant technology companies of the Internet and mobile eras. Notably, with the launch of GPT-5.4, the enterprise segment accounts for 40% of total revenue and is expected to reach parity with the consumer market. Securing compute is positioned as the strategic advantage that will drive the next phase of growth.

Key Points

  • OpenAI has closed a funding round totaling $122 billion, at a post-money valuation of $852 billion.
  • Through ChatGPT, the company leads the consumer market with more than 900 million weekly active users (WAU) and an overwhelming lead over competitors in monthly web visits and mobile sessions.
  • Enterprise revenue now accounts for 40% of the total and is expected to reach parity with the consumer business by the end of 2026.
  • On the API side, Codex has surpassed 2 million weekly users, up 5x over the past three months, with usage growing more than 70% month over month.

Accelerating the next phase of AI

Today, we closed our latest funding round with $122 billion in committed capital at a post-money valuation of $852 billion.

OpenAI is becoming the core infrastructure for AI, making it possible for people around the world and businesses, big and small, to just build things. The broad consumer reach of ChatGPT creates a powerful distribution channel into the workplace, where demand is rapidly shifting from basic model access to intelligent systems that reshape how businesses operate. Developers build on and expand the platform by leveraging our APIs, and Codex is transforming how developers turn ideas into working software. Durable access to compute is the strategic advantage that compounds across the entire system: it advances research, improves products, expands access, and structurally lowers the cost of delivery at scale. Together, consumer adoption, enterprise deployment, developer usage, and compute form a reinforcing flywheel that is translating capability into economic impact.

OpenAI was the fastest technology platform to reach 10 million users, the fastest to 100 million users, and soon the fastest to 1 billion weekly active users. Within a year of launching ChatGPT, we reached $1B in revenue. By the end of 2024 we were generating $1B per quarter. We are now generating $2B in revenue per month. At this stage, we are growing revenue four times faster than the companies that defined the Internet and mobile eras, including Alphabet and Meta.
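The milestones above imply a simple run-rate progression. As a quick sanity check, annualizing each stated figure (straight arithmetic on the numbers in this post, not additional disclosures):

```python
# Annualized revenue run rates implied by the milestones in the post.
year1 = 1_000_000_000          # ~$1B in ChatGPT's first year
end_2024 = 1_000_000_000 * 4   # $1B per quarter -> $4B/yr run rate
now = 2_000_000_000 * 12       # $2B per month  -> $24B/yr run rate

print(now // end_2024)  # prints 6  (6x the end-2024 run rate)
print(now // year1)     # prints 24 (24x the first-year total)
```

The 6x jump from the end-2024 run rate is what underlies the "four times faster than the Internet and mobile eras" comparison later in the paragraph.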

This is commercial scale, and it is mission scale. The fastest way to widen the benefits of AI is to put useful intelligence in people’s hands early and let that access compound globally. AI is driving productivity gains, accelerating scientific discovery, and expanding what people and organizations can build. This funding gives us the resources to continue to lead at the scale this moment demands.

Deep conviction across global capital

Our ambition is matched by the commitment of the partners backing us. The round was anchored by our strategic partners Amazon, NVIDIA, and SoftBank, with continued participation from our long-term partner, Microsoft. SoftBank co-led the round alongside a16z, D. E. Shaw Ventures, MGX, TPG, and accounts advised by T. Rowe Price Associates, Inc.

There was also significant participation from a diverse set of global institutions including Altimeter, Appaloosa LP, ARK Invest, affiliated funds of BlackRock, Blackstone, Coatue, D1 Capital Partners, Dragoneer, Fidelity Management & Research Company, Goanna Capital, Insight Partners, The Paragon Group, Sands Capital, Sequoia Capital, Sound Ventures, Temasek, Thrive Capital, UC Investments (University of California CIO Office), and Winslow Capital.

For the first time, we extended participation to investors through bank channels, raising over $3 billion from individual investors. Today, we’re also announcing that OpenAI will be included in several exchange-traded funds managed by ARK Invest, further broadening ownership and giving more people the opportunity to share in the upside economics of OpenAI and the AI era.

We have also expanded our existing revolving credit facility to approximately $4.7 billion, which gives us added flexibility as we continue to invest at scale. The facility is supported by a global syndicate including JPMorgan Chase, Citi, Goldman Sachs, Morgan Stanley, Wells Fargo, Mizuho, Royal Bank of Canada, SMBC, UBS, HSBC, and Santander. The facility remains undrawn at close.

Leadership across consumer and enterprise

We are continually shipping advances across ChatGPT, the API, and our enterprise products. We recently launched GPT‑5.4, our most capable model yet, with meaningful gains in intelligence and workflow performance. We expanded Codex into a flagship coding agent. We pushed forward on memory, search, personalization, and multimodal interaction. We also expanded into areas like health, scientific discovery, and commerce.

That product momentum shows up in the numbers. ChatGPT is the overwhelming leader in consumer AI, with more than 900 million weekly active users and over 50 million subscribers. ChatGPT sees 6x the monthly web visits and mobile sessions of the next largest AI app, while total AI time spent is 4x that of the next largest app and 4x all others combined. Search usage has nearly tripled in a year, and our ads pilot reached more than $100 million in ARR in under six weeks. These are not just growth milestones; they show that frontier AI is becoming part of everyday life for people around the world.

Momentum is just as strong on the enterprise side, which now makes up more than 40% of our revenue, and is on track to reach parity with consumer by the end of 2026. GPT‑5.4 is driving record engagement across agentic workflows. Our APIs now process more than 15 billion tokens per minute. Codex now serves over 2 million weekly users, up 5x in the past three months, with usage growing more than 70% month over month.
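To put the 15 billion tokens-per-minute API figure in perspective, a back-of-envelope conversion (assuming the per-minute rate were sustained continuously, which the post does not claim):

```python
# Scale of the stated API throughput, extrapolated to daily and yearly
# volumes. Assumes the >15B tokens/min rate holds continuously.
tokens_per_minute = 15_000_000_000

per_day = tokens_per_minute * 60 * 24   # minutes per day
per_year = per_day * 365

print(f"{per_day:,} tokens/day")    # 21,600,000,000,000 (~21.6 trillion)
print(f"{per_year:,} tokens/year")  # 7,884,000,000,000,000 (~7.9 quadrillion)
```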

Compute is a strategic advantage

Compute powers every layer of AI: frontier research and models, products, deployment, and revenue. Since ChatGPT launched, both our revenue and our available compute have scaled rapidly as demand for intelligent systems has accelerated. With each new generation of infrastructure, we train more capable models, making each token more intelligent than before. At the same time, algorithmic and hardware improvements reduce the cost to serve each token, lowering the cost per unit of intelligence. That added intelligence makes AI useful for more complex workflows, which increases usage, drives compute demand, and accelerates the next turn of the flywheel.

This creates a compounding effect: better infrastructure and better models lower the cost of delivery, while improved products and deeper enterprise deployment increase revenue per unit of compute. As utilization increases and the platform matures, this drives meaningful operating leverage over time.
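The compounding dynamic described above can be sketched as a toy model. All parameters here (costs, prices, growth rates) are illustrative assumptions, not OpenAI figures; the point is only to show how falling cost per token plus rising usage produces operating leverage:

```python
# Toy model of the compute flywheel: each infrastructure "generation"
# lowers the cost to serve a token while usage grows, so margin per
# unit of revenue compounds. All parameters are illustrative.

def flywheel(generations=5,
             cost_per_mtok=10.0,   # $ to serve 1M tokens (assumed)
             price_per_mtok=15.0,  # $ earned per 1M tokens (assumed)
             tokens_mtok=1_000.0,  # monthly volume, millions of tokens
             cost_decline=0.30,    # 30% cheaper to serve each generation
             usage_growth=0.70):   # 70% more usage each generation
    """Return a list of (generation, revenue, cost, margin) tuples."""
    history = []
    for g in range(1, generations + 1):
        revenue = price_per_mtok * tokens_mtok
        cost = cost_per_mtok * tokens_mtok
        history.append((g, revenue, cost, revenue - cost))
        cost_per_mtok *= (1 - cost_decline)  # cheaper per-token serving
        tokens_mtok *= (1 + usage_growth)    # more demand next generation
    return history

for g, rev, cost, margin in flywheel():
    print(f"gen {g}: revenue=${rev:,.0f} cost=${cost:,.0f} margin=${margin:,.0f}")
```

Under these assumptions the margin share of revenue rises each generation, which is the "operating leverage over time" the paragraph describes.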

Over the past 15 months, we have expanded our infrastructure strategy beyond a small number of core providers to meet the scale and reliability requirements of global AI deployment.

NVIDIA remains the foundation of our infrastructure. Our training fleet and the majority of our inference stack continue to run on NVIDIA GPUs, and with this round we are deepening that partnership as we scale.

Demand for AI systems is growing faster and becoming more diverse. No single architecture can efficiently meet the needs of the entire AI frontier. To meet that demand and stay flexible, we are building a broader infrastructure portfolio across multiple cloud partners, multiple chip platforms, and deeper co-design across the stack.

This strategy now spans:

  • cloud through Microsoft, Oracle, AWS, CoreWeave, and Google Cloud;
  • silicon through NVIDIA, AMD, AWS Trainium, Cerebras, and our own chip in partnership with Broadcom;
  • data centers through partnerships with Oracle, SBE, and SoftBank.

The OpenAI flywheel is simple. More compute drives more intelligent models. More intelligent models drive better products. Better products drive faster adoption, more revenue and more cashflow. That gives us the ability to reinvest and deliver intelligence more efficiently to consumers, enterprises, and builders around the world.

Building an AI superapp

That is why we are building a unified AI superapp. As models become more capable, the limiting factor shifts from intelligence to usability. Users do not want disconnected tools. They want a single system that can understand intent, take action, and operate across applications, data, and workflows. Our superapp will bring together ChatGPT, Codex, browsing, and our broader agentic capabilities into one agent-first experience.

This is not just product simplification. It is a distribution and deployment strategy. By unifying our surfaces, we can translate advances in model capability directly into user adoption and engagement. Our consumer scale becomes the front door for enterprise usage, as familiarity in daily life drives adoption at work. At the same time, a single product surface allows us to improve faster, ship more coherently, and capture more of the value created by agentic workflows.

The result is a tightly integrated system: infrastructure that enables intelligence, intelligence that powers agents, and products that make those agents useful at global scale.

Moments like this do not come often. In past generations, capital markets helped build the systems that defined modern economies, from electricity to highways to the internet. This is that kind of moment again. The capital being deployed today is helping build the infrastructure layer for intelligence itself. Over time, that value will flow back into the economy, to companies, to communities, and increasingly to individuals.

Let’s go build.

AI-Generated Content

This content was automatically summarized, translated, and analyzed by AI from the original OpenAI Blog post. Copyright remains with the original author; please refer to the original article for accurate details.
