Sora Feed Philosophy: Designing a Recommendation System That Combines Creativity and Safety
Summary
An introduction to the core principles and workings of the Sora feed, as published by OpenAI. Rather than passive scrolling, the feed puts 'creativity' and 'active participation' first, aiming to inspire users. It offers 'steerable ranking,' which lets users control the algorithm directly; personalized recommendations draw on activity history, ChatGPT data, and similar signals, but can be managed through parental controls. At the same time, content that violates the broad Usage Policies, such as violence and hateful content, is strictly filtered, with strong guardrails built in from the point of creation.
Key points
- The Sora feed's ranking system is designed around creativity and active participation rather than passive scrolling.
- Through 'steerable ranking,' users can tell the algorithm directly what kind of content they want.
- Personalized recommendations take into account your Sora activity, ChatGPT history, and more, but can be managed with parental controls.
- Content is strictly filtered for a broad range of policy violations such as violence and hateful content, with guardrails built in from the point of creation.
The Sora feed philosophy
Our aim with the Sora feed is simple: help people learn what’s possible, and inspire them to create. Here are some of the core starting principles we used to bring this vision to life:
- Optimize for creativity. We’re designing ranking to favor creativity and active participation, not passive scrolling. We think this is what makes Sora joyful to use.
- Put users in control. The feed ships with steerable ranking, so you can tell the algorithm exactly what you’re in the mood for. Parents can also turn off feed personalization and control continuous scroll for their teens through ChatGPT parental controls.
- Prioritize connection. We want Sora to help people strengthen and form new connections, especially through fun, magical Cameo flows. Connected content will be favored over global, unconnected content.
- Balance safety and freedom. The feed is designed to be widely accessible and safe. Robust guardrails aim to prevent unsafe or harmful generations from the start and we block content that may violate our Usage Policies. At the same time, we also want to leave room for expression, creativity, and community. We know recommendation systems are living, breathing things. As we learn from real use, we’ll adjust the details—in service of these principles.
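To make the "steerable ranking" principle above concrete, here is a minimal sketch of how a user's stated preference might re-weight candidate posts. All names, weights, and the keyword-overlap heuristic are hypothetical illustrations, not OpenAI's actual implementation.

```python
# Hedged sketch of steerable ranking: a user's free-text "mood" boosts
# candidate posts whose tags match it. Scores and tags are illustrative.
def steer(posts: list[dict], mood: str) -> list[dict]:
    """Boost posts whose tags overlap the user's stated mood, then sort."""
    mood_words = set(mood.lower().split())

    def score(post: dict) -> float:
        base = post["base_score"]
        overlap = mood_words & set(post["tags"])
        return base * (2.0 if overlap else 1.0)  # simple boost on a match

    return sorted(posts, key=score, reverse=True)

feed = steer(
    [{"tags": ["cats", "comedy"], "base_score": 0.6},
     {"tags": ["travel"], "base_score": 0.8}],
    mood="funny cats",
)
# The lower-base-score cat post outranks the travel post once steered.
```

A real system would steer a learned ranking model rather than tag overlap, but the shape is the same: the user's instruction becomes an explicit input to the scoring function.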
Our recommendation algorithms are designed to give you personalized recommendations that inspire you and others to be creative. Each individual has unique interests and tastes so we’ve built a personalized system to best serve this mission.
To personalize your Sora Feed, we may consider signals like:
- Your activity on Sora: This may include your posts, followed accounts, posts you have liked or commented on, and remixed content. It may also include the general location (such as the city) from which your device accesses Sora, based on information like your IP address.
- Your ChatGPT data: We may consider your ChatGPT history, but you can always turn this off in Sora’s Data Controls, within Settings.
- Content engagement signals: This may include signals such as views, likes, comments, instructions to “see less content like this,” and remixes.
- Author signals: This may include follower count, other posts, and past post engagement.
- Safety signals: Whether the post is considered violative or otherwise inappropriate.
We may use these signals to predict if this content is something you may like to see and riff off of. Parents are also able to turn off feed personalization and manage continuous scroll for their teens using parental controls in ChatGPT.
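The signal list above can be pictured as inputs to a single scoring function. The sketch below is purely illustrative: the signal names, weights, and linear combination are assumptions for the sake of example, not the actual model.

```python
# Hypothetical combination of feed signals into one ranking score,
# weighting active participation (remixes) above passive viewing.
from dataclasses import dataclass

@dataclass
class PostSignals:
    predicted_like: float   # content engagement signal
    predicted_remix: float  # active-participation signal
    author_quality: float   # author signal (followers, past engagement)
    is_connected: bool      # content from a connection (e.g. a Cameo)
    passes_safety: bool     # safety signal

def rank_score(s: PostSignals) -> float:
    """Combine signals into a feed score; violative posts never rank."""
    if not s.passes_safety:
        return float("-inf")
    score = (0.3 * s.predicted_like
             + 0.5 * s.predicted_remix   # creativity weighted highest
             + 0.2 * s.author_quality)
    if s.is_connected:
        score *= 1.5                     # favor connected over global content
    return score
```

Note how the stated principles map onto the sketch: remix probability carries the largest weight ("optimize for creativity"), connected content gets a multiplier ("prioritize connection"), and the safety signal acts as a hard gate rather than just another weight.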
Keeping the Sora Feed safe and fun for everyone means walking a careful line: protect users from harmful content, while leaving enough freedom for creativity to thrive.
We may remove content that violates our Global Usage Policies. Additionally, content deemed inappropriate for users may be removed from Feed and other sharing platforms (such as user galleries and side characters) in accordance with our Sora Distribution Guidelines. This includes:
- Graphic sexual content;
- Graphic violence or content promoting violence;
- Extremist propaganda;
- Hateful content;
- Content that promotes or depicts self harm or disordered eating;
- Unhealthy dieting or exercise behaviors;
- Appearance-based critiques or comparisons;
- Bullying content;
- Dangerous challenges likely to be imitated by minors;
- Content glorifying depression;
- Promotion of age-restricted goods or activities including illegal drugs or harmful substances;
- Low quality content where the primary purpose is engagement bait;
- Content that recreates the likeness of living individuals without their consent, or deceased public figures in contexts where their likeness is not permitted for use; and
- Content that may infringe on the intellectual property rights of others.
Our first layer of defense is at the point of creation. Because every post is generated within Sora, we can build in strong guardrails that prevent unsafe or harmful content before it’s made. If a generation bypasses these guardrails, we may block that content from being shared.
Beyond generation, the feed is designed to be appropriate for all Sora users. Content that may be harmful, unsafe, or age-inappropriate is filtered out for teen accounts. We use automated tools to scan all feed content for compliance with our Global Usage Policies and feed eligibility. These systems are continuously updated as we learn more about new risks.
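The two layers described here, a guardrail at the point of creation and an automated scan before feed distribution, can be sketched as below. The category names and the keyword stand-in for a real classifier are hypothetical; they only illustrate the layered structure.

```python
# Hedged sketch of a two-layer moderation pipeline. Category names are
# illustrative; a production system would use learned classifiers.
BLOCKED_CATEGORIES = {"graphic_violence", "hateful_content",
                      "extremist_propaganda"}
FEED_RESTRICTED = {"engagement_bait", "appearance_critique"}

def classify(text: str) -> set[str]:
    """Stand-in for a real policy classifier; returns categories found."""
    keywords = {"violence": "graphic_violence", "hate": "hateful_content"}
    return {cat for kw, cat in keywords.items() if kw in text.lower()}

def allow_generation(prompt: str) -> bool:
    # Layer 1: block violative requests at the point of creation.
    return not (classify(prompt) & BLOCKED_CATEGORIES)

def feed_eligible(post_text: str) -> bool:
    # Layer 2: scan generated posts before they enter the feed, applying
    # both the hard policy categories and feed-only restrictions.
    return not (classify(post_text) & (BLOCKED_CATEGORIES | FEED_RESTRICTED))
```

The key design point is that the second layer applies a broader set of categories than the first: some content may be generable but still kept out of the shared feed.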
We complement this with human review. Our team monitors user reports and proactively checks feed activity to catch what automation may miss. If you see something you think does not follow our Usage Policies, you can report it.
But safety isn’t only about strict filters. Too many restrictions can stifle creativity, while too much freedom can undermine trust. We aim for a balance: proactive guardrails where the risks are highest, combined with a reactive “report + takedown” system that gives users room to explore and create while ensuring we can act quickly when problems arise. This approach has served us well in ChatGPT’s 4o image generation model, and we’re building on that philosophy here.
We also know we won’t get this balance perfect from day one. Recommendation systems and safety models are living, evolving systems, and your feedback will be essential in helping us refine them. We look forward to learning together and improving over time.
AI-generated content
This content is an AI-produced summary, translation, and analysis of the original OpenAI Blog post. Copyright remains with the original author; please consult the original for the authoritative text.