# Hornbeam Documentation
Hornbeam is an HTTP and application server that combines Python's web and AI capabilities with Erlang's runtime infrastructure:
- Python handles: web apps (WSGI/ASGI, sketched below), ML models, and data processing
- Erlang handles: scaling, concurrency, distribution, fault tolerance, and shared state
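As a concrete anchor for that split, the sketch below shows the two kinds of Python entry points Hornbeam hosts. The callables follow the standard WSGI and ASGI interfaces; the names `wsgi_app` and `asgi_app` are illustrative rather than a Hornbeam API, and registering an app with the server is covered in the WSGI and ASGI guides.

```python
# Minimal, framework-free examples of the two Python-side entry points.
# These are plain WSGI/ASGI callables (what Flask/Django and FastAPI/Starlette
# expose under the hood); the function names are illustrative, not a Hornbeam API.

def wsgi_app(environ, start_response):
    """Synchronous WSGI callable."""
    body = b"hello from WSGI"
    start_response(
        "200 OK",
        [("Content-Type", "text/plain"), ("Content-Length", str(len(body)))],
    )
    return [body]


async def asgi_app(scope, receive, send):
    """Asynchronous ASGI callable."""
    if scope["type"] != "http":
        return
    await send({
        "type": "http.response.start",
        "status": 200,
        "headers": [(b"content-type", b"text/plain")],
    })
    await send({"type": "http.response.body", "body": b"hello from ASGI"})
```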
## Quick Links
### Getting Started
- Installation & Quick Start - Get up and running
- WSGI Guide - Run Flask, Django, and WSGI apps
- ASGI Guide - Run FastAPI, Starlette, and async apps
### Guides
- WebSocket Guide - Real-time bidirectional communication
- Erlang Integration - ETS, RPC, Pub/Sub
- ML Integration - Caching, distributed inference
### Examples
- Flask Application - Traditional WSGI app
- FastAPI Application - Modern async API
- WebSocket Chat - Real-time chat
- Embedding Service - ML with ETS caching
- Distributed ML - Cluster inference
### Reference
- Configuration - All options
- Erlang API - Erlang modules
- Python API - Python modules
## Why Hornbeam?
| Challenge | Traditional Python | Hornbeam |
|---|---|---|
| Concurrent connections | Thousands (GIL) | Millions (BEAM) |
| Distributed computing | Redis/RabbitMQ | Built-in Erlang RPC |
| Shared state | External database | ETS (in-memory) |
| Hot reload | Restart server | Live code swap |
| Fault tolerance | Process crashes = down | Supervisor restarts |
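To make the shared-state row concrete, the cache-aside sketch below shows Python workers sharing one in-memory table that lives in the Erlang VM (ETS) instead of an external database. The module `hornbeam.ets` and its `lookup`/`insert` calls are hypothetical names used only for illustration; the real interface is described in the Erlang Integration and ML Integration guides.

```python
# Hypothetical sketch: `hornbeam.ets` and its `lookup`/`insert` functions are
# assumed names, not the documented API. The point is the pattern: every Python
# worker reads and writes the same ETS table held in the Erlang VM's memory.

from hornbeam import ets  # hypothetical import, see note above


def cached_embedding(text: str) -> list[float]:
    hit = ets.lookup("embeddings", text)      # shared across all workers
    if hit is not None:
        return hit
    vector = compute_embedding(text)          # cache miss: compute in Python
    ets.insert("embeddings", text, vector)    # now visible to every worker
    return vector


def compute_embedding(text: str) -> list[float]:
    # Stand-in for a real model call.
    return [float(len(text))]
```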
## Erlang Python Integration
Hornbeam is built on Erlang Python, which provides:
- Bidirectional Erlang ↔ Python calls
- Sub-interpreters for true parallelism
- Free-threaded Python (3.13+) support
- Automatic type conversion
- Streaming from generators (see the sketch below)
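The Python side of that integration is ordinary module-level code, as the bridge-agnostic sketch below shows; the function names are illustrative, and the Erlang-side call syntax belongs to the Erlang Python API rather than being shown here.

```python
# A plain Python module of the kind the Erlang side can call into. Nothing here
# depends on a specific bridge API: arguments and return values (numbers,
# strings, lists, dicts) go through the bridge's automatic type conversion, and
# a generator lets results be consumed as a stream rather than all at once.

def word_count(text: str) -> dict[str, int]:
    """Regular function: callable from Erlang, returns a dict of counts."""
    counts: dict[str, int] = {}
    for word in text.split():
        counts[word] = counts.get(word, 0) + 1
    return counts


def stream_lines(path: str):
    """Generator: with streaming support, lines are yielded one at a time."""
    with open(path, encoding="utf-8") as handle:
        for line in handle:
            yield line.rstrip("\n")
```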
See the Erlang Python documentation for low-level Python integration details.