
© 2026 Molayo

HN Summary · 2026. 04. 24. 14:35

Hatchet, a modern PostgreSQL-backed task queue system, launches (cloud/open source)

Summary

Hatchet, a new open-source task queue built to address the limitations of tools like Celery and BullMQ, has launched its cloud version. Hatchet uses PostgreSQL as its core data store, handling task execution state and workflow progress in a single transaction to maximize data integrity and reliability. It is particularly valuable to developers running complex asynchronous workloads such as orchestrating RAG pipelines and building agentic LLM workflows.

Key Points

  • Uses PostgreSQL to manage task acknowledgements and result updates in a single transaction, minimizing the risk of data loss and race conditions.
  • Adds support for fanout workflows (one task triggering many parallel child tasks), a step toward a developer-friendly durable execution model.
  • Introduces webhook workers for HTTP-based triggers, working around timeout limits on platforms such as Vercel.
  • 100% MIT-licensed open source, launched alongside a self-serve cloud and a lightweight bundle for local development (hatchet-lite).

Launch HN: Hatchet (YC W24) – Open-source task queue, now with a cloud version

Hey HN - this is Alexander and Gabe from Hatchet (https://hatchet.run). We’re building a modern task queue as an alternative to tools like Celery for Python and BullMQ for Node. Our open-source repo is at https://github.com/hatchet-dev/hatchet and is 100% MIT licensed.

When we did a Show HN a few months ago (https://news.ycombinator.com/item?id=39643136), our cloud version was invite-only and we were focused on our open-source offering.

Today we’re launching our self-serve cloud so that anyone can get started creating tasks on our platform - you can get started at https://cloud.onhatchet.run, or you can use these credentials to access a demo (should be prefilled):
URL: https://demo.hatchet-tools.com
Email: hacker@news.ycombinator.com
Password: HatchetDemo123!

People are currently using Hatchet for a bunch of use-cases: orchestrating RAG pipelines, queueing up user notifications, building agentic LLM workflows, or scheduling image generation tasks on GPUs.

We built this out of frustration with existing tools and a conviction that PostgreSQL is the right choice for a task queue. Beyond the fact that many developers are already using Postgres in their stack, which makes it easier to self-host Hatchet, it’s also easier to model higher-order concepts in Postgres, like chains of tasks (which we call workflows). In our system, the acknowledgement of the task, the task result, and the updates to higher-order models are done as part of the same Postgres transaction, which significantly reduces the risk of data loss/race conditions when compared with other task queues (which usually pass acknowledgements through a broker, store task results elsewhere, and only then figure out the next task in the chain).
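The single-transaction idea above can be sketched in a few lines. This uses Python's stdlib sqlite3 as a stand-in for Postgres purely to keep the example self-contained; the table and column names are illustrative, not Hatchet's actual schema. The point is that the acknowledgement, the result write, and the scheduling of the next task in the chain commit atomically, or not at all:

```python
import sqlite3

# In-memory stand-in for Postgres; the transactional pattern is the same.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE tasks (
    id INTEGER PRIMARY KEY,
    workflow_id INTEGER,
    status TEXT,      -- 'queued' | 'running' | 'done'
    result TEXT
)""")
conn.execute("INSERT INTO tasks VALUES (1, 10, 'running', NULL)")
conn.commit()

def complete_task(conn, task_id, workflow_id, result, next_task_id):
    """Acknowledge a task, store its result, and enqueue the next step
    of the workflow -- all inside one transaction."""
    # The connection context manager opens a transaction that commits
    # on success and rolls back on any exception, so the ack and the
    # next-step insert can never be applied partially.
    with conn:
        conn.execute(
            "UPDATE tasks SET status = 'done', result = ? WHERE id = ?",
            (result, task_id),
        )
        conn.execute(
            "INSERT INTO tasks (id, workflow_id, status, result) "
            "VALUES (?, ?, 'queued', NULL)",
            (next_task_id, workflow_id),
        )

complete_task(conn, task_id=1, workflow_id=10, result='"ok"', next_task_id=2)

rows = conn.execute("SELECT id, status FROM tasks ORDER BY id").fetchall()
print(rows)  # [(1, 'done'), (2, 'queued')]
```

With a broker-based queue the ack can succeed while the result write fails (or vice versa), leaving the workflow state inconsistent; keeping all three writes in one database transaction removes that failure mode.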

We also became increasingly frustrated with tools like Celery and the challenges they introduce when using a modern Python stack (Python > 3.5). We wrote up a list of these frustrations here: https://docs.hatchet.run/blog/problems-with-celery.

Since our Show HN, we’ve (partially or completely) addressed the most common pieces of feedback from the post, which we’ll outline here:

  1. The most common ask was built-in support for fanout workflows — one task which triggers an arbitrary number of child tasks to run in parallel. We previously only had support for DAG executions. We generalized this concept and launched child workflows (https://docs.hatchet.run/home/features/child-workflows). This is the first step towards a developer-friendly model of durable execution.
  2. Support for HTTP-based triggers — we’ve built out support for webhook workers (https://docs.hatchet.run/home/features/webhooks), which allow you to trigger any workflow over an HTTP webhook. This is particularly useful for apps on Vercel, which are subject to timeout limits of 60s, 300s, or 900s (depending on your tier).
  3. Our RabbitMQ dependency — while we haven’t gotten rid of this completely, we’ve recently launched hatchet-lite (https://docs.hatchet.run/self-hosting/hatchet-lite), which allows you to run the various Hatchet components in a single Docker image that bundles RabbitMQ along with a migration process, admin CLI, our REST API, and our gRPC engine. Hopefully the lite was a giveaway, but this is meant for local development and low-volume processing, on the order of hundreds of tasks per minute.
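The fanout pattern from item 1 — one parent task spawning an arbitrary number of parallel children — can be sketched with stdlib primitives. This illustrates the execution shape only, not Hatchet's SDK; the task names and the RAG-style workload are made up for the example:

```python
from concurrent.futures import ThreadPoolExecutor

def child_task(doc_id: int) -> dict:
    # Stand-in for real work, e.g. embedding one document in a RAG pipeline.
    return {"doc_id": doc_id, "status": "indexed"}

def parent_task(doc_ids: list[int]) -> list[dict]:
    # Fanout: the number of children is decided at runtime from the input,
    # unlike a static DAG where every node must be declared up front.
    with ThreadPoolExecutor(max_workers=8) as pool:
        return list(pool.map(child_task, doc_ids))

results = parent_task(list(range(5)))
print(len(results))  # 5
```

In a durable-execution system the same shape applies, except each child is a persisted task whose state survives worker restarts rather than an in-process thread.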

We’ve also launched more features, like support for global rate limiting, steps which only run on workflow failure, and custom event streaming.
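Of the features just listed, global rate limiting is the easiest to picture concretely. A token bucket is one common mechanism behind a queue-level rate limit — this is an illustrative sketch of the concept, not Hatchet's implementation:

```python
import time

class TokenBucket:
    """Minimal token-bucket limiter: tokens accrue at a fixed rate up to
    a burst capacity, and each dispatched task consumes one token."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate            # tokens added per second
        self.capacity = capacity    # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def try_acquire(self) -> bool:
        # Refill based on elapsed time, then spend a token if one is available.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=2.0, capacity=2)
allowed = [bucket.try_acquire() for _ in range(4)]
print(allowed)  # burst of 2 passes, the rest are throttled
```

A queue making this limit *global* (shared across many workers) would keep the token count in shared storage — which, in a Postgres-centric design, is another argument for putting queue state in the database.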

We’ll be here the whole day for questions and feedback, and look forward to hearing your thoughts!

AI-Generated Content

This content is an AI-generated summary, translation, and analysis of the original post from HN AI Engineering. Copyright belongs to the original author; please consult the original post for accurate details.
