
© 2026 Molayo

HN Summary · 2026-04-24 13:02

Launch of 'nao', an AI Editor Specialized for Data Workflows: Transforming the Data Engineering Experience

Summary

Existing general-purpose LLM coding tools (e.g., Cursor) have limits for real data work because they don't understand database schemas or data context. 'nao', built by nao Labs on top of VS Code, connects directly to major data warehouses such as BigQuery, Snowflake, and Postgres, and indexes the user's data schemas and entire codebase via RAG (Retrieval-Augmented Generation). This lets it predict the data output when you write data-related code in SQL, Python, or YAML, and verify the impact of your changes.

Key Points

  • nao is an AI code editor that connects directly to major data warehouses such as BigQuery, Snowflake, and Postgres, overcoming the limitations of general-purpose LLM tools.
  • It indexes data schemas and the full data lineage via RAG, so you can visually compare and validate data output as you write SQL/Python code.
  • An AI agent automatically performs data quality checks, such as missing-value and outlier detection, as well as downstream impact analysis.
  • When data teams use dbt and similar tools for modeling, documentation, and testing, it offers a developer experience that even non-technical users can easily access.

Launch HN: Nao Labs (YC X25) – Cursor for Data

Hey HN, we’re Claire and Christophe from nao Labs (https://getnao.io/). We just launched nao, an AI code editor to work with data: a local editor, directly connected with your data warehouse, and powered by an AI copilot with built-in context of your data schema and data-specific tools.
See our demo here: https://www.youtube.com/watch?v=QmG6X-5ftZU

Writing code with LLMs is the new normal in software engineering. But not when it comes to manipulating data. Tools like Cursor don’t interact natively with data warehouses — they autocomplete SQL blindly, without knowing your data schema. Most of us are still juggling multiple tools: writing code in Cursor, checking results in the warehouse console, troubleshooting with an observability tool, and verifying in a BI tool that no dashboards broke.

When you want to write code on data with LLMs, you don’t care much about the code, you care about the data output. You need a tool that helps you write code relevant to your data, lets you visualize its impact on the output, and quality-checks it for you.

Christophe and I have each spent 10 years in data — Christophe was a data engineer and has built data platforms for dozens of orgs; I was head of data and helped data teams build their analytics and data products. We’ve seen how the business asks you to ship data fast, while you’re there wondering if this small line of code will mistakenly multiply the revenue on your CEO’s dashboard by 5x. That leaves you two choices: test extensively and ship slowly, or skip the tests and ship fast. That’s why we wanted to create nao: a tool truly adapted to data work, one that lets data teams ship at business pace.

nao is a fork of VS Code, with built-in connectors for BigQuery, Snowflake, and Postgres. We built our own AI copilot and tab system and gave them a RAG of your data warehouse schemas and of your codebase. We added a set of agent tools to query data, compare data, understand data tools like dbt, and assess the downstream impact of code across your whole data lineage.
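The schema-RAG idea can be sketched in a few lines. This is a hypothetical illustration, not nao's actual implementation: `sqlite3` stands in for a real warehouse connector, and `schema_context` is an invented helper that pulls table DDL into the copilot's prompt so suggestions match the real schema.

```python
import sqlite3

# sqlite3 stands in for a warehouse connection in this sketch.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, revenue REAL)"
)

def schema_context(conn):
    """Collect table DDL so it can be injected into the copilot's prompt."""
    rows = conn.execute(
        "SELECT sql FROM sqlite_master WHERE type = 'table'"
    ).fetchall()
    return "\n".join(sql for (sql,) in rows)

prompt = (
    "You are a SQL assistant. Warehouse schema:\n"
    + schema_context(conn)
    + "\nUser request: total revenue per user"
)
print(prompt)
```

With the schema always in the prompt, the model can complete column names instead of guessing them — which is the gap the post describes in schema-blind autocomplete.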

The AI tab and the AI agent immediately write code matching your schema, whether it’s SQL, Python, or YAML. They show you code diffs and data diffs side by side, so you can visualize what your change did to the data output. And you can leave the data quality checks to the agent: detecting missing or duplicated values and outliers, anticipating breaking changes downstream, or comparing dev and production data.
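As a rough, stdlib-only illustration of the kinds of checks described (not nao's code — the rows and function names are made up), here is a sketch that flags missing values and duplicate keys and diffs a dev result set against production:

```python
from collections import Counter

# Hypothetical rows a query might return in dev vs. production.
dev = [
    {"id": 1, "revenue": 10.0},
    {"id": 2, "revenue": None},   # missing value
    {"id": 2, "revenue": 12.0},   # duplicated key
]
prod = [
    {"id": 1, "revenue": 10.0},
    {"id": 3, "revenue": 12.0},
]

def quality_report(rows, key="id", col="revenue"):
    """Count missing values and list keys that appear more than once."""
    missing = sum(1 for r in rows if r[col] is None)
    dupes = sorted(k for k, n in Counter(r[key] for r in rows).items() if n > 1)
    return {"missing": missing, "duplicate_keys": dupes}

def data_diff(a, b, key="id"):
    """Keys present in one result set but not the other."""
    ka, kb = {r[key] for r in a}, {r[key] for r in b}
    return {"only_in_dev": sorted(ka - kb), "only_in_prod": sorted(kb - ka)}

print(quality_report(dev))    # {'missing': 1, 'duplicate_keys': [2]}
print(data_diff(dev, prod))   # {'only_in_dev': [2], 'only_in_prod': [3]}
```

A real agent would run checks like these as SQL against the warehouse rather than in Python, but the logic — nulls, duplicates, dev/prod divergence — is the same.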

Data teams usually use nao for writing SQL pipelines, often with dbt. It helps them create data models, document them, and test them, while making sure they don’t break data lineage or figures in the BI tool. In run mode, they also use it to run analytics and identify data quality bugs in production. For less technical profiles, it’s also a great help in strengthening code best practices. For large teams, it ensures that code and metrics remain well-factored and consistent.
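The "don't break downstream" guarantee reduces to a traversal of the lineage DAG. A minimal sketch, with a made-up lineage mapping each model to the models and dashboards that read from it:

```python
# Hypothetical lineage: model -> things that read from it.
lineage = {
    "stg_orders": ["fct_orders"],
    "fct_orders": ["revenue_dashboard", "churn_model"],
    "revenue_dashboard": [],
    "churn_model": [],
}

def downstream(model, graph):
    """Everything transitively affected if `model` changes (DFS)."""
    seen, stack = set(), [model]
    while stack:
        for child in graph.get(stack.pop(), []):
            if child not in seen:
                seen.add(child)
                stack.append(child)
    return sorted(seen)

print(downstream("stg_orders", lineage))
# ['churn_model', 'fct_orders', 'revenue_dashboard']
```

Editing `stg_orders` surfaces every model and dashboard that could be affected, which is exactly the warning an editor can raise before a change ships.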

Software engineers use nao for the database exploration part: write SQL queries with nao tab, explore data schema with the agent, and write DDL.
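Schema exploration of the kind an agent tool might do boils down to a metadata query. A hypothetical sketch with `sqlite3` standing in for a Postgres connection (where you would query `information_schema.columns` instead):

```python
import sqlite3

# sqlite3 stands in for Postgres in this sketch.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT NOT NULL)")

# Read column metadata the way an agent might before suggesting DDL.
columns = [
    {"name": name, "type": ctype, "not_null": bool(notnull)}
    for (_, name, ctype, notnull, _, _) in conn.execute("PRAGMA table_info(users)")
]
print(columns)
```

The same metadata drives both exploration answers ("what columns does `users` have?") and DDL suggestions that don't conflict with existing constraints.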

A question we often get is: why not just use Cursor with MCPs? Cursor has to trigger many MCP calls to get full context of the data, while nao always has it available in one RAG. MCPs stay in a very enclosed part of Cursor: they don’t bring data context to the tab, and they don’t make the UI better suited to data workflows. Besides, nao comes pre-packaged for data teams: they don’t have to set up extensions, install and authenticate MCPs, or build CI/CD pipelines. That means even non-technical data teams can have a great developer experience.

Our long-term goal is to become the best place to work with data. We want to fine-tune our own models for SQL, Python, and YAML to give the most relevant code suggestions for data, and to broaden our coverage of the data stack to become the one tool-agnostic editor for any data workflow.

You can try it here: https://sunshine.getnao.io/releases/ - download nao, sign up for free, and start using it. Just for the HN launch, you can create a temporary account with a simple username if you’d prefer not to use your email. For now we only have a Mac version, but Linux and Windows are coming.

We’d love to hear your feedback — and your thoughts on how we can further improve the data dev experience!

AI-Generated Content

This content is an AI-generated summary, translation, and analysis of the original post from HN AI Engineering. Copyright remains with the original author; please check the original for the exact content.
