Documentation
Piglet Run documentation - AI coding sandbox with PostgreSQL, JuiceFS, VS Code, and more.
Welcome to Piglet Run documentation!
Piglet Run is a lightweight runtime environment from Pigsty, designed as a cloud coding sandbox for AI Web Coding. It integrates PostgreSQL database, JuiceFS distributed storage, VS Code, JupyterLab, and more into a unified environment.
Documentation Structure
Our documentation follows the Diataxis framework, organized into four categories:
| Category | Purpose | Example |
|---|---|---|
| Concept | Understand the principles | What is Piglet Run? How does snapshot work? |
| Tutorial | Learn step by step | Install Piglet Run, Create your first project |
| Task | Get specific things done | Backup database, Deploy application |
| Reference | Look up detailed info | Configuration options, CLI commands |
Quick Links
Core Features
| Feature | Description |
|---|---|
| 🤖 AI Coding | Pre-installed Claude Code, VS Code, Jupyter, Python/Go/Node.js |
| 🐘 Data Powerhouse | PostgreSQL 18 + 400+ extensions |
| 💾 Shared Storage | JuiceFS stores workspace in database |
| ⏱️ Time Machine | Database PITR + filesystem snapshots |
| 🔀 Instant Clone | Copy-on-Write database forking |
| 🌐 One-Click Deploy | Built-in Nginx with auto SSL |
| 📊 Full Observability | VictoriaMetrics + Grafana |
1 - About
Learn about the Piglet Run project, its license, community, and how to get support.
Piglet Run is a lightweight runtime environment from Pigsty, designed as a cloud coding sandbox for AI Web Coding.
What is Piglet Run?
Piglet Run integrates:
- PostgreSQL 18 with 400+ extensions
- JuiceFS distributed storage with PITR support
- VS Code Server for web-based development
- JupyterLab for data science
- Claude Code for AI-assisted coding
- Grafana for observability
All in a single, easy-to-deploy package.
Project Links
Topics
| Topic | Description |
|---|---|
| License | Open source license (Apache 2.0) |
1.1 - License
Piglet Run is open source software licensed under Apache License 2.0.
Apache License 2.0
Piglet Run is licensed under the Apache License 2.0.
Copyright 2025 Ruohang Feng
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
What This Means
The Apache 2.0 license allows you to:
- ✅ Use - Use the software for any purpose
- ✅ Modify - Modify the source code
- ✅ Distribute - Distribute copies of the software
- ✅ Commercial Use - Use in commercial projects
- ✅ Patent Grant - Includes patent rights from contributors
Requirements
When using Piglet Run, you must:
- Include the license and copyright notice
- State any significant changes made to the code
- Include the NOTICE file if one exists
Third-Party Components
Piglet Run includes several third-party open source components:
| Component | License |
|---|---|
| PostgreSQL | PostgreSQL License |
| JuiceFS | Apache 2.0 |
| VS Code Server | MIT |
| Grafana | AGPL 3.0 |
| VictoriaMetrics | Apache 2.0 |
| Nginx | BSD-2-Clause |
Please refer to each component’s license for specific terms.
2 - Concept
Understand the core concepts, architecture, and design philosophy behind Piglet Run.
Concept documentation helps you understand the principles and architecture behind Piglet Run.
What You’ll Learn
- How Piglet Run works under the hood
- The design philosophy and architecture
- Key concepts like snapshots, cloning, and storage
Topics
2.1 - Overview
What is Piglet Run?
Piglet Run is a lightweight runtime environment from Pigsty, designed as a cloud coding sandbox for AI Web Coding. It integrates PostgreSQL database, JuiceFS distributed storage, VS Code, JupyterLab, and more into a unified environment.
Why Piglet Run?
In the age of AI-assisted development, developers need:
- Instant development environments that just work
- Powerful databases with all extensions available
- Safe experimentation with easy rollback
- Seamless collaboration between humans and AI agents
Piglet Run provides all of this in a single package.
Key Features
| Feature | Description |
|---|---|
| 🤖 AI Coding | Pre-installed Claude Code, OpenCode, VS Code, Jupyter |
| 🐘 Data Powerhouse | PostgreSQL 18 + 400+ extensions |
| 💾 Shared Storage | JuiceFS stores workspace in database |
| ⏱️ Time Machine | Database PITR + filesystem snapshots |
| 🔀 Instant Clone | Copy-on-Write database forking |
| 🌐 One-Click Deploy | Built-in Nginx with auto SSL |
| 📊 Full Observability | VictoriaMetrics + Grafana |
| 🇨🇳 China Friendly | Global CDN + China mirrors |
Who is it for?
- Solo developers who want a powerful dev environment
- Teams that need shared development infrastructure
- AI developers using Claude Code or similar tools
- Data scientists working with PostgreSQL and Jupyter
- Learners exploring PostgreSQL and web development
How it works
┌─────────────────────────────────────────────────────────────┐
│ Piglet Run │
├─────────────────────────────────────────────────────────────┤
│ ┌─────────┐ ┌─────────┐ ┌─────────┐ ┌─────────┐ │
│ │ VS Code │ │ Jupyter │ │ Claude │ │ Nginx │ │
│ │ Server │ │ Lab │ │ Code │ │ Proxy │ │
│ └────┬────┘ └────┬────┘ └────┬────┘ └────┬────┘ │
│ │ │ │ │ │
│ └────────────┴────────────┴────────────┘ │
│ │ │
│ ┌──────────────────────┴──────────────────────┐ │
│ │ JuiceFS (Shared Storage) │ │
│ └──────────────────────┬──────────────────────┘ │
│ │ │
│ ┌──────────────────────┴──────────────────────┐ │
│ │ PostgreSQL 18 + 400+ Extensions │ │
│ └─────────────────────────────────────────────┘ │
│ │
│ ┌─────────────────────────────────────────────┐ │
│ │ VictoriaMetrics + Grafana (Monitoring) │ │
│ └─────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────┘
Next Steps
2.2 - Architecture
System Architecture
Piglet Run is built on top of Pigsty, providing a streamlined development environment.
Components
| Component | Role | Port |
|---|---|---|
| Nginx | Reverse proxy, SSL termination | 80, 443 |
| VS Code Server | Web-based IDE | /code |
| JupyterLab | Data science notebook | /jupyter |
| PostgreSQL | Primary database | 5432 |
| JuiceFS | Distributed filesystem | - |
| VictoriaMetrics | Metrics storage | 8428 |
| Grafana | Monitoring dashboards | /ui |
Network Architecture
Internet
│
▼
┌─────────┐
│ Nginx │ :80, :443
└────┬────┘
│
├──────────────┬──────────────┬──────────────┐
│ │ │ │
▼ ▼ ▼ ▼
┌─────────┐ ┌─────────┐ ┌─────────┐ ┌─────────┐
│ VS Code │ │ Jupyter │ │ Grafana │ │ App │
│ /code │ │/jupyter │ │ /ui │ │ /* │
└─────────┘ └─────────┘ └─────────┘ └─────────┘
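A quick way to sanity-check this routing is to probe each proxied path through Nginx. A minimal Python sketch (the host `10.10.10.10` and the path map are assumptions drawn from the table above; adjust for your deployment):

```python
from urllib.error import HTTPError, URLError
from urllib.request import urlopen

# Path prefixes served behind the Nginx reverse proxy (from the table above)
SERVICES = {
    "vscode": "/code",
    "jupyter": "/jupyter",
    "grafana": "/ui",
}

def service_urls(host: str) -> dict:
    """Build the full URL for each proxied service."""
    return {name: f"http://{host}{path}" for name, path in SERVICES.items()}

def probe(url: str, timeout: float = 3.0) -> bool:
    """True if the endpoint answers at all, even with an HTTP error status."""
    try:
        urlopen(url, timeout=timeout)
        return True
    except HTTPError:
        return True   # server answered with 4xx/5xx: still reachable
    except URLError:
        return False  # connection refused, timeout, DNS failure

urls = service_urls("10.10.10.10")   # substitute your server's IP
print(urls["vscode"])                # http://10.10.10.10/code
```

Call `probe(url)` on each entry to confirm the proxy is routing traffic.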
Storage Architecture
All development work is stored in PostgreSQL via JuiceFS:
┌─────────────────────────────────────┐
│ Working Directory │
│ ~/workspace │
└──────────────┬──────────────────────┘
│
▼
┌─────────────────────────────────────┐
│ JuiceFS │
│ (POSIX-compatible FS) │
└──────────────┬──────────────────────┘
│
▼
┌─────────────────────────────────────┐
│ PostgreSQL │
│ (Metadata + Data Chunks) │
└─────────────────────────────────────┘
Next Steps
2.3 - Storage
JuiceFS Shared Storage
Piglet Run uses JuiceFS to provide a distributed filesystem backed by PostgreSQL.
Why JuiceFS?
- POSIX Compatible: Works like a normal filesystem
- Database-Backed: Data stored in PostgreSQL
- Snapshots: Point-in-time recovery support
- Multi-User: Share workspace across sessions
How It Works
┌────────────────────────────────────────────┐
│ Application Layer │
│ (VS Code, Jupyter, Claude Code, etc.) │
└──────────────────┬─────────────────────────┘
│ POSIX API
▼
┌────────────────────────────────────────────┐
│ JuiceFS FUSE │
│ (Filesystem in Userspace) │
└──────────────────┬─────────────────────────┘
│
┌──────────┴──────────┐
│ │
▼ ▼
┌───────────────┐ ┌───────────────┐
│ Metadata │ │ Data Chunks │
│ (PostgreSQL) │ │ (PostgreSQL) │
└───────────────┘ └───────────────┘
Features
| Feature | Description |
|---|---|
| Transparent | Use like a local filesystem |
| Durable | Data stored in the database |
| Concurrent | Shared access for multiple users and agents |
| Snapshots | Point-in-time recovery |
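Because JuiceFS is POSIX-compatible, application code needs nothing special: plain file APIs work against the mount, while metadata and chunks land in PostgreSQL underneath. A minimal sketch (in Piglet Run the workspace would sit on the JuiceFS mount; a temp directory stands in here so the snippet runs anywhere):

```python
import os
import tempfile

# Stand-in for the JuiceFS-backed workspace (e.g. ~/workspace)
workspace = tempfile.mkdtemp()

path = os.path.join(workspace, "notes.txt")
with open(path, "w") as f:          # ordinary POSIX write
    f.write("hello from juicefs\n")

with open(path) as f:               # ordinary POSIX read
    content = f.read()

size = os.stat(path).st_size        # on JuiceFS, metadata comes from PostgreSQL
print(content.strip(), size)
```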
Next Steps
2.4 - Snapshot
Time Machine
Piglet Run provides point-in-time recovery for both database and filesystem.
Database PITR
PostgreSQL’s built-in PITR (Point-in-Time Recovery) allows you to restore the database to any point in time; in Piglet Run it is managed by pgBackRest.
# Show backup information
pig pb info
# List all backups
pig pb ls
# Create a full backup
pig pb backup full
# Restore to latest backup
pig pb restore
# Restore to specific time
pig pb restore -t "2025-01-29 10:00:00"
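When scripting a restore, it helps to compute the `-t` target rather than type it by hand. A small sketch that formats a timestamp in the same shape as the example above (the `pig pb` usage itself is as documented; the helper is illustrative):

```python
from datetime import datetime, timedelta

def restore_target(minutes_ago: int, now: datetime = None) -> str:
    """Format a point-in-time target like '2025-01-29 10:00:00'."""
    now = now or datetime.now()
    return (now - timedelta(minutes=minutes_ago)).strftime("%Y-%m-%d %H:%M:%S")

# e.g. roll back to just before a bad migration 15 minutes ago:
target = restore_target(15)
print(f'pig pb restore -t "{target}"')
```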
Filesystem Snapshots
JuiceFS snapshots preserve the state of your workspace:
# Create a snapshot (using juicefs CLI)
juicefs snapshot create /jfs/data snapshot-before-experiment
# List snapshots
juicefs snapshot list /jfs/data
# Restore from snapshot
juicefs snapshot restore /jfs/data snapshot-before-experiment
Use Cases
| Scenario | Solution |
|---|---|
| AI broke code | Restore filesystem snapshot |
| Bad database migration | Use pig pb restore -t <time> |
| Experiment failed | Roll back entire environment |
| Need clean state | Restore to baseline snapshot |
Backup Management
# View backup status
pig pb info
# View backup logs
pig pb log tail
# Create incremental backup
pig pb backup incr
# Create differential backup
pig pb backup diff
Next Steps
2.5 - Clone
Instant Cloning
Piglet Run supports Copy-on-Write (CoW) cloning for rapid database forking.
How It Works
Copy-on-Write means:
- Zero copy at clone time
- Only changed blocks consume storage
- TB-scale databases clone in milliseconds
Original Database
┌─────────────────────┐
│ Block A │ Block B │ Block C │
└────┬────┴────┬────┴────┬────┘
│ │ │
▼ ▼ ▼
┌────────────────────────────┐
│ Shared Storage │
└────────────────────────────┘
▲ ▲ ▲
│ │ │
┌────┴────┬────┴────┬────┴────┐
│ Block A │ Block B │ Block C │ (shared)
│ │ Block B'│ │ (changed in clone)
└─────────┴─────────┴─────────┘
Cloned Database
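The sharing scheme in the diagram can be modeled in a few lines: a clone starts by referencing all of its parent's blocks, and only a write allocates a private copy. This is an illustrative toy model, not Piglet Run's actual storage code:

```python
class CowVolume:
    """Toy copy-on-write volume: blocks are shared until written."""

    def __init__(self, blocks=None, parent=None):
        self._own = blocks or {}      # blocks this volume wrote itself
        self._parent = parent         # shared, read-only ancestor

    def read(self, idx):
        if idx in self._own:
            return self._own[idx]
        return self._parent.read(idx)  # fall through to the shared block

    def write(self, idx, data):
        self._own[idx] = data          # first write allocates a private copy

    def clone(self):
        return CowVolume(parent=self)  # zero blocks copied at clone time

prod = CowVolume({0: "A", 1: "B", 2: "C"})
dev = prod.clone()                     # instant: nothing is copied
dev.write(1, "B'")                     # only this block diverges
print(prod.read(1), dev.read(1), dev.read(0))  # B B' A
```

The clone consumes storage only for the one block it changed; everything else is still read from the parent.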
Use Cases
| Use Case | Benefit |
|---|---|
| Development | Clone prod for testing |
| AI Experiments | Branch for each experiment |
| Feature Branches | Database per branch |
| Training | Each learner gets own copy |
Database Cloning
Using PostgreSQL utilities:
# Create database from template
pig pg psql -c "CREATE DATABASE dev TEMPLATE prod"
# Or using pg_dump/pg_restore for cross-server clone
pg_dump -Fc prod > prod.dump
pg_restore -d dev prod.dump
Using pgBackRest for Cloning
# Restore to a new cluster as a clone
pig pb restore --target-pgdata=/data/pg-clone
# Or restore to specific time point
pig pb restore -t "2025-01-29 10:00:00" --target-pgdata=/data/pg-clone
Filesystem Cloning with JuiceFS
# Clone directory using JuiceFS snapshot
juicefs snapshot create /jfs/workspace ws-snapshot
juicefs snapshot restore /jfs/workspace-clone ws-snapshot
Next Steps
2.6 - Monitoring
Full Observability
Piglet Run includes a complete monitoring stack based on VictoriaMetrics and Grafana.
Components
| Component | Role |
|---|---|
| VictoriaMetrics | Time-series database |
| Grafana | Visualization dashboards |
| node_exporter | System metrics |
| pg_exporter | PostgreSQL metrics |
Dashboards
Access Grafana at http://<ip>/ui
Available dashboards:
- Claude Code: AI agent monitoring
- PostgreSQL: Database performance
- System: Host metrics
- JuiceFS: Filesystem statistics
Metrics
Over 3,000 metrics are collected:
- Database queries, connections, locks
- System CPU, memory, disk, network
- Application-specific metrics
Alerts
Configure alerts for:
- High CPU/memory usage
- Database connection limits
- Disk space warnings
- Query performance issues
Next Steps
2.7 - Security
Security Model
Piglet Run provides multiple layers of security for your development environment.
Access Control
| Layer | Mechanism |
|---|---|
| Network | Firewall, VPN support |
| Web | Nginx authentication |
| Database | PostgreSQL roles |
| Filesystem | Unix permissions |
Authentication
Default authentication methods:
- VS Code: Password or token
- Jupyter: Token-based
- Grafana: Username/password
- PostgreSQL: Role-based access
Encryption
| Type | Support |
|---|---|
| In Transit | SSL/TLS |
| At Rest | Database encryption |
| Backup | Encrypted backups |
Best Practices
- Change default passwords immediately
- Enable SSL for all services
- Restrict network access
- Regular security updates
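For the first practice, Python's `secrets` module is a quick way to generate replacement passwords (a generic sketch, not a Piglet Run tool; the alphabet choice is an assumption to keep passwords shell- and URL-safe):

```python
import secrets
import string

def make_password(length: int = 24) -> str:
    """Generate a random password from letters, digits, and safe symbols."""
    alphabet = string.ascii_letters + string.digits + "-_"
    return "".join(secrets.choice(alphabet) for _ in range(length))

# e.g. replacements for the Grafana admin or PostgreSQL role passwords
print(make_password())
```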
Next Steps
3 - Tutorial
Step-by-step guides to learn Piglet Run from scratch.
Tutorial documentation provides step-by-step guides to learn Piglet Run from scratch.
Learning Path
Follow these tutorials in order to get started:
- Installation - Set up Piglet Run on your server
- Quick Start - Create your first project
- VS Code - Using the web-based VS Code
- Jupyter - Data analysis with Jupyter
- Claude Code - AI-assisted development
- Database - Working with PostgreSQL
Topics
3.1 - Installation
Install Piglet Run
This tutorial guides you through installing Piglet Run on a fresh server.
Prerequisites
- OS: Linux (Ubuntu 22.04+, Debian 12+, RHEL 8+, Rocky 8+)
- CPU: 2+ cores recommended
- RAM: 4GB minimum, 8GB+ recommended
- Disk: 40GB+ free space
- Network: Internet access for package download
Quick Install
1. Install Pig CLI
# Default (Cloudflare CDN)
curl -fsSL https://repo.pigsty.io/pig | bash
# China Mirror
curl -fsSL https://repo.pigsty.cc/pig | bash
2. Setup Repositories
pig repo set # Setup all required repositories
3. Install Pigsty with Piglet Profile
pig sty init # Download Pigsty to ~/pigsty
cd ~/pigsty
./configure -m piglet # Configure with piglet preset
./install.yml # Run installation playbook
Step-by-Step Installation
Download and Install Pig
curl -fsSL https://repo.pigsty.io/pig | bash
Setup Repositories
pig repo set # One-step repo setup
pig repo add all --region china # Use China mirrors if needed
Install PostgreSQL and Extensions
pig install pg17 # Install PostgreSQL 17
pig install pg_duckdb vector -v 17 # Install extensions
Install Pigsty Distribution
pig sty init # Download Pigsty
pig sty boot # Install Ansible
pig sty conf -m piglet # Generate piglet config
pig sty deploy # Run deployment
Verify Installation
After installation, check status:
pig status # Check pig environment
pig ext status # Check installed extensions
pig pg status # Check PostgreSQL status
Access the services:
| Service | URL |
|---|---|
| Homepage | http://<ip>/ |
| VS Code | http://<ip>/code |
| Jupyter | http://<ip>/jupyter |
| Grafana | http://<ip>/ui |
| PostgreSQL | postgres://<ip>:5432 |
Troubleshooting
Check Logs
pig pg log tail # PostgreSQL logs
pig pt log -f # Patroni logs (if HA enabled)
Common Issues
| Issue | Solution |
|---|---|
| Repository error | pig repo set -u to refresh |
| Package conflict | pig repo rm then pig repo set |
| Permission denied | Run with sudo or as root |
Next Steps
3.2 - Quick Start
Your First 5 Minutes
This tutorial gets you productive with Piglet Run in 5 minutes.
Access Your Environment
After installation, access your environment:
| Service | URL | Default Credentials |
|---|---|---|
| Homepage | http://<ip>/ | None |
| VS Code | http://<ip>/code | See /data/code/config.yaml |
| Jupyter | http://<ip>/jupyter | Token in logs |
| Grafana | http://<ip>/ui | admin / admin |
Create Your First Project
1. Open VS Code
Navigate to http://<ip>/code in your browser.
2. Open Terminal
Press Ctrl+` to open the integrated terminal.
3. Create a Project
cd ~/workspace
mkdir my-first-project
cd my-first-project
4. Create a Simple App
Create app.py:
from http.server import HTTPServer, SimpleHTTPRequestHandler
print("Server running on http://localhost:8000")
HTTPServer(('', 8000), SimpleHTTPRequestHandler).serve_forever()
5. Run It
Connect to PostgreSQL
psql postgres://postgres@localhost/postgres
Or in Python:
import psycopg
conn = psycopg.connect("postgres://postgres@localhost/postgres")
Next Steps
3.3 - VS Code
Web-based VS Code
Learn to use the web-based VS Code server in Piglet Run.
Access
Open your browser and navigate to:
http://<ip>/code
Features
The web VS Code includes:
- Full VS Code experience in browser
- Extensions support
- Integrated terminal
- Git integration
- Python, Go, Node.js support
Getting Started
1. Open a Folder
Click “Open Folder” and select /root/workspace.
2. Install Extensions
Recommended extensions:
- Python
- Pylance
- GitLens
- Database Client
Press Ctrl+, to open settings.
Tips
- Use Ctrl+Shift+P for the command palette
- Use Ctrl+` for the integrated terminal
- Use Ctrl+B to toggle the sidebar
Next Steps
3.4 - Jupyter
JupyterLab Tutorial
Learn to use JupyterLab for data analysis in Piglet Run.
Access
Navigate to:
http://<ip>/jupyter
Features
- Interactive Python notebooks
- Rich output (charts, tables, images)
- PostgreSQL integration
- Multiple kernels (Python, SQL)
Getting Started
1. Create a Notebook
Click “Python 3” under Notebook.
2. Connect to PostgreSQL
import psycopg
import pandas as pd
conn = psycopg.connect("postgres://postgres@localhost/postgres")
df = pd.read_sql("SELECT * FROM pg_stat_activity", conn)
df.head()
3. Visualize Data
import matplotlib.pyplot as plt
df['state'].value_counts().plot(kind='bar')
plt.show()
Tips
- Use Shift+Enter to run cells
- Save notebooks to /root/workspace/notebooks
Next Steps
3.5 - Claude Code
AI-Assisted Development
Learn to use Claude Code for AI-assisted development in Piglet Run.
Prerequisites
You need an Anthropic API key. Set it up:
export ANTHROPIC_API_KEY="your-api-key"
Getting Started
1. Launch Claude Code
In VS Code terminal:
2. Give Instructions
Ask Claude to help with your project:
> Create a FastAPI app with PostgreSQL integration
3. Review and Accept
Claude will:
- Analyze your request
- Generate code
- Explain the changes
- Wait for your approval
Best Practices
| Practice | Reason |
|---|---|
| Create snapshots | Roll back if needed |
| Review changes | Verify before accepting |
| Be specific | Better results |
| Iterate | Refine step by step |
Safety
Piglet Run makes AI coding safer:
- Snapshots: Restore any time
- Monitoring: Track AI activity
- Isolation: Sandboxed environment
Monitoring
View Claude Code activity at:
http://<ip>/ui/d/claude-code
Next Steps
3.6 - Database
PostgreSQL Basics
Learn to work with PostgreSQL in Piglet Run.
Connect
Using psql
psql postgres://postgres@localhost/postgres
Using Python
import psycopg
conn = psycopg.connect("postgres://postgres@localhost/postgres")
Create Database
Install Extensions
PostgreSQL 18 with 400+ extensions available:
-- Vector search
CREATE EXTENSION vector;
-- Time series
CREATE EXTENSION timescaledb;
-- Full text search (Chinese)
CREATE EXTENSION zhparser;
Basic Operations
-- Create table
CREATE TABLE users (
id SERIAL PRIMARY KEY,
name TEXT NOT NULL,
email TEXT UNIQUE,
created_at TIMESTAMPTZ DEFAULT NOW()
);
-- Insert data
INSERT INTO users (name, email) VALUES ('Alice', 'alice@example.com');
-- Query
SELECT * FROM users;
Monitoring
View database performance at:
http://<ip>/ui/d/pgsql-overview
Next Steps
3.7 - Application
Build and Deploy
Learn to build and deploy a web application in Piglet Run.
Create a FastAPI App
1. Set Up Project
cd ~/workspace
mkdir myapp && cd myapp
python -m venv venv
source venv/bin/activate
pip install fastapi uvicorn psycopg[binary]
2. Create Application
Create main.py:
from fastapi import FastAPI
import psycopg

app = FastAPI()

@app.get("/")
def root():
    return {"message": "Hello from Piglet Run!"}

@app.get("/users")
def get_users():
    conn = psycopg.connect("postgres://postgres@localhost/postgres")
    cur = conn.execute("SELECT * FROM users")
    rows = cur.fetchall()
    conn.close()  # release the connection after each request
    return rows

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8000)
3. Run Locally
4. Deploy with Nginx
See Deploy Task for production deployment.
Next Steps
4 - Task
Goal-oriented guides for specific operations like backup, restore, deploy.
Task documentation provides goal-oriented guides for specific operations.
How to Use
Each task guide focuses on accomplishing a specific goal:
- Problem-focused: Start with what you want to achieve
- Step-by-step: Clear instructions to follow
- Practical: Real-world scenarios
Topics
| Topic | Description |
|---|---|
| Backup | Backup database and files |
| Restore | Restore from backup or snapshot |
| Clone | Clone database or environment |
| Deploy | Deploy web application |
| SSL | Configure SSL certificates |
| Domain | Set up custom domain |
| Scale | Scale resources up or down |
| Migrate | Migrate data from other systems |
| Monitor | Set up alerts and monitoring |
| Upgrade | Upgrade Piglet Run |
4.1 - Backup
Learn how to backup your database and files in Piglet Run.
Overview
Piglet Run provides multiple backup methods to protect your data:
- Database Backup: Full and incremental PostgreSQL backups
- File Backup: User files and configurations
- Snapshot: Complete system state capture
Quick Backup
Create a full backup with a single command:
Backup Database
Full Database Backup
Incremental Backup
pig backup db --incremental
Backup Specific Database
Backup Files
Backup User Files
Backup Configurations
Scheduled Backups
Configure automatic backups in /etc/piglet/backup.yml:
backup:
  schedule: "0 2 * * *"  # Daily at 2 AM
  retention: 7           # Keep 7 days
  type: incremental
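With a 7-day retention and daily backups, older sets must be pruned. The selection logic amounts to a date cutoff; a sketch (illustrative only, since actual pruning is handled by the backup tooling):

```python
from datetime import date, timedelta

def backups_to_delete(backup_dates, retention_days, today):
    """Return backup dates that fall outside the retention window."""
    cutoff = today - timedelta(days=retention_days)
    return sorted(d for d in backup_dates if d < cutoff)

today = date(2025, 1, 29)
backups = [today - timedelta(days=n) for n in range(10)]  # 10 daily backups
stale = backups_to_delete(backups, 7, today)
print(stale)  # backups older than the 7-day window
```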
Backup Storage
Backups are stored in:
| Type | Location |
|---|---|
| Database | /data/backup/postgres/ |
| Files | /data/backup/files/ |
| Config | /data/backup/config/ |
Next Steps
4.2 - Restore
Learn how to restore your database and files from backups or snapshots.
Overview
Piglet Run supports multiple restore scenarios:
- Point-in-Time Recovery: Restore to any moment in time
- Full Restore: Restore from a complete backup
- Selective Restore: Restore specific databases or files
Quick Restore
Restore from the latest backup:
Restore Database
List Available Backups
Restore Full Backup
pig restore db --backup 2024-01-15
Point-in-Time Recovery
pig restore db --time "2024-01-15 14:30:00"
Restore Specific Database
pig restore db mydb --backup 2024-01-15
Restore Files
Restore All Files
pig restore files --backup 2024-01-15
Restore Specific Directory
pig restore files /home/dba/projects --backup 2024-01-15
Restore from Snapshot
pig snapshot restore snap-20240115
Verification
After restore, verify data integrity:
pig verify db
pig verify files
Next Steps
4.3 - Clone
Learn how to clone databases and environments in Piglet Run.
Overview
Cloning allows you to create exact copies of:
- Database: Clone a database for testing or development
- Environment: Clone the entire Piglet Run instance
- Schema Only: Clone structure without data
Quick Clone
Clone a database instantly:
pig clone db mydb mydb_copy
Clone Database
Full Clone
pig clone db production development
Schema Only
pig clone db production development --schema-only
Clone with Data Filter
pig clone db production development --filter "created_at > '2024-01-01'"
Clone Environment
Create Environment Clone
pig clone env --name staging
Clone to Remote Server
pig clone env --target user@remote-server
Clone from Snapshot
pig clone snapshot snap-20240115 --name dev-clone
Clone Options
| Option | Description |
|---|---|
| --schema-only | Clone structure without data |
| --no-owner | Skip ownership information |
| --no-privileges | Skip privilege information |
| --parallel N | Use N parallel jobs |
Use Cases
- Development: Clone production for local development
- Testing: Create isolated test environments
- Analytics: Clone for reporting without impacting production
Next Steps
4.4 - Deploy
Learn how to deploy web applications on Piglet Run.
Overview
Piglet Run supports deploying various web applications:
- Static Sites: HTML, CSS, JavaScript
- Node.js: Express, Next.js, React
- Python: Flask, Django, FastAPI
- PHP: Laravel, WordPress
Quick Deploy
Deploy a static site:
Deploy Static Site
From Local Directory
pig deploy ./dist --name my-site
From Git Repository
pig deploy https://github.com/user/repo --name my-site
Deploy Node.js Application
Basic Deployment
cd my-node-app
pig deploy --type nodejs
With Custom Port
pig deploy --type nodejs --port 3000
Configuration
Create piglet.yml in your project:
type: nodejs
entry: server.js
port: 3000
env:
  NODE_ENV: production
Deploy Python Application
Flask Application
pig deploy --type python --framework flask
Django Application
pig deploy --type python --framework django
Deployment Management
List Deployments
View Logs
Restart Application
pig deploy restart my-site
Remove Deployment
pig deploy remove my-site
Next Steps
4.5 - SSL
Learn how to configure SSL certificates for secure HTTPS connections.
Overview
Piglet Run supports multiple SSL certificate options:
- Let’s Encrypt: Free automatic certificates
- Self-Signed: For development and testing
- Custom: Bring your own certificates
Quick SSL Setup
Enable SSL with Let’s Encrypt:
pig ssl enable --domain example.com
Let’s Encrypt Certificates
Enable for Domain
pig ssl letsencrypt --domain example.com --email admin@example.com
Multiple Domains
pig ssl letsencrypt --domain example.com --domain www.example.com
Wildcard Certificate
pig ssl letsencrypt --domain "*.example.com" --dns cloudflare
Self-Signed Certificates
Generate Self-Signed
pig ssl self-signed --domain localhost
For Development
pig ssl self-signed --domain dev.local --days 365
Custom Certificates
Install Custom Certificate
pig ssl install --cert /path/to/cert.pem --key /path/to/key.pem
With Certificate Chain
pig ssl install \
--cert /path/to/cert.pem \
--key /path/to/key.pem \
--chain /path/to/chain.pem
Certificate Management
View Certificates
Check Expiration
Renew Certificates
Configuration
SSL settings in /etc/piglet/ssl.yml:
ssl:
  provider: letsencrypt
  email: admin@example.com
  auto_renew: true
  renew_days: 30
Next Steps
4.6 - Domain
Learn how to set up custom domains for your Piglet Run services.
Overview
Configure custom domains for:
- Main Application: Your primary domain
- Subdomains: Service-specific subdomains
- Multiple Domains: Support multiple domains
Quick Setup
Add a custom domain:
pig domain add example.com
DNS Configuration
Required DNS Records
Point your domain to Piglet Run:
| Type | Name | Value |
|---|---|---|
| A | @ | Your server IP |
| A | www | Your server IP |
| CNAME | code | @ |
| CNAME | jupyter | @ |
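Once the records are in place, each name should resolve to your server's IP. A small check using only the standard library (generic DNS lookup, not a pig command):

```python
import socket

def resolves_to(hostname: str, expected_ip: str) -> bool:
    """Check that a DNS name resolves to the given IP address."""
    try:
        addresses = {info[4][0] for info in socket.getaddrinfo(hostname, None)}
        return expected_ip in addresses
    except socket.gaierror:
        return False  # name does not resolve at all

# After creating the A records, check each name against your server IP:
print(resolves_to("localhost", "127.0.0.1"))  # sanity check on any machine
```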
Using Cloudflare
pig domain add example.com --dns cloudflare --api-key YOUR_API_KEY
Add Custom Domain
Primary Domain
pig domain add example.com --primary
Subdomain for Services
pig domain add code.example.com --service vscode
pig domain add jupyter.example.com --service jupyter
Domain Management
List Domains
Remove Domain
pig domain remove old-domain.com
Set Primary
pig domain primary example.com
Configuration
Domain settings in /etc/piglet/domains.yml:
domains:
  primary: example.com
  aliases:
    - www.example.com
  services:
    vscode: code.example.com
    jupyter: jupyter.example.com
    grafana: monitor.example.com
Verify Domain
Check domain configuration:
pig domain verify example.com
Next Steps
4.7 - Scale
Learn how to scale your Piglet Run resources up or down.
Overview
Piglet Run supports scaling:
- Database: Adjust PostgreSQL resources
- Storage: Expand disk capacity
- Services: Scale service resources
Quick Scale
Scale database resources:
pig scale db --cpu 4 --memory 8G
Scale Database
Increase Resources
pig scale db --cpu 4 --memory 16G
Adjust Connection Limits
pig scale db --max-connections 200
pig scale db --shared-buffers 4G
Scale Storage
Expand Disk
pig scale storage --size 100G
Add Storage Volume
pig scale storage add --mount /data/extra --size 50G
Scale Services
VS Code Server
pig scale service vscode --memory 4G
JupyterLab
pig scale service jupyter --memory 8G
Resource Limits
View current resource allocation:
Example output:
Service CPU Memory Storage
--------- ---- ------ -------
PostgreSQL 2 4G 20G
VS Code 1 2G -
Jupyter 1 2G -
Nginx 0.5 512M -
Configuration
Scale settings in /etc/piglet/resources.yml:
resources:
  postgres:
    cpu: 2
    memory: 4G
    storage: 20G
  vscode:
    cpu: 1
    memory: 2G
  jupyter:
    cpu: 1
    memory: 2G
Best Practices
- Monitor resource usage before scaling
- Scale gradually to avoid disruption
- Test changes in development first
Next Steps
4.8 - Migrate
Learn how to migrate data from other systems to Piglet Run.
Overview
Piglet Run supports migration from:
- Other PostgreSQL: Migrate from existing PostgreSQL instances
- MySQL/MariaDB: Convert and migrate from MySQL
- Cloud Databases: AWS RDS, Google Cloud SQL, Azure Database
- Files: Import from SQL dumps or CSV files
Quick Migration
Migrate from another PostgreSQL:
pig migrate postgres://user:pass@source-host/dbname
Migrate from PostgreSQL
Direct Connection
pig migrate pg \
--host source.example.com \
--port 5432 \
--user migrate_user \
--database production
From pg_dump File
pig migrate import backup.sql
Schema Only
pig migrate pg --host source.example.com --schema-only
Migrate from MySQL
Direct Migration
pig migrate mysql \
--host mysql.example.com \
--user migrate_user \
--database myapp
With Type Mapping
pig migrate mysql --host source --type-map mysql-to-pg.yml
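A type map translates MySQL column types into PostgreSQL equivalents during migration. The exact file format of mysql-to-pg.yml is not specified here, but the mapping itself typically looks like this (a common hand-checked subset; verify against your own schema):

```python
# Common MySQL -> PostgreSQL type equivalences used during migration
TYPE_MAP = {
    "tinyint(1)": "boolean",
    "int":        "integer",
    "bigint":     "bigint",
    "datetime":   "timestamp",
    "double":     "double precision",
    "blob":       "bytea",
    "text":       "text",
}

def pg_type(mysql_type: str) -> str:
    """Look up the PostgreSQL type, falling back to text for unknown types."""
    return TYPE_MAP.get(mysql_type.lower(), "text")

print(pg_type("DATETIME"))  # timestamp
```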
Migrate from Cloud
AWS RDS
pig migrate rds \
--instance mydb-instance \
--region us-west-2 \
--profile aws-profile
Google Cloud SQL
pig migrate cloudsql \
--instance myproject:region:instance \
--credentials /path/to/credentials.json
Import Files
SQL Dump
pig migrate import dump.sql --database mydb
CSV Files
pig migrate csv data.csv --table users --database mydb
Multiple CSV Files
pig migrate csv ./data/ --database mydb
Migration Options
| Option | Description |
|---|---|
| --schema-only | Migrate structure only |
| --data-only | Migrate data only |
| --no-owner | Skip ownership |
| --parallel N | Parallel jobs |
| --exclude TABLE | Exclude tables |
Verification
Verify migration:
pig migrate verify --source postgres://source/db --target postgres://target/db
Next Steps
4.9 - Monitor
Learn how to set up alerts and monitoring for your Piglet Run instance.
Overview
Piglet Run includes comprehensive monitoring:
- Grafana Dashboards: Visual monitoring
- Alerting: Configurable alerts
- Metrics: Prometheus-based metrics
- Logs: Centralized logging
Quick Setup
Access monitoring dashboard:
pig monitor open
# Opens Grafana at http://<ip>/ui
Grafana Dashboards
Available Dashboards
| Dashboard | Description |
|---|---|
| Overview | System overview and health |
| PostgreSQL | Database performance metrics |
| Node | Server resource usage |
| Nginx | Web server statistics |
Access Dashboards
# Default credentials
URL: http://<ip>/ui
User: admin
Password: (shown during install)
Set Up Alerts
Enable Email Alerts
pig alert email --to admin@example.com --smtp smtp.example.com
Enable Slack Alerts
pig alert slack --webhook https://hooks.slack.com/services/...
Enable Webhook Alerts
pig alert webhook --url https://api.example.com/alerts
Alert Rules
View Alert Rules
Add Custom Alert
pig alert add \
--name "High CPU" \
--condition "cpu_usage > 80" \
--duration 5m \
--severity warning
Default Alerts
| Alert | Condition | Severity |
|---|---|---|
| High CPU | > 80% for 5m | Warning |
| High Memory | > 90% for 5m | Warning |
| Disk Full | > 85% | Critical |
| DB Down | Connection failed | Critical |
| Replication Lag | > 1s | Warning |
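A rule like "CPU > 80% for 5m" only fires once the condition has held for the whole duration, which suppresses momentary spikes. The evaluation can be sketched as follows (illustrative; the real evaluation happens inside the monitoring stack):

```python
def rule_fires(samples, threshold=80.0, duration=5):
    """samples: one reading per minute, newest last.
    Fire only if the last `duration` readings all exceed the threshold."""
    if len(samples) < duration:
        return False
    return all(value > threshold for value in samples[-duration:])

print(rule_fires([50, 85, 90, 88, 91, 86]))  # True: last 5 readings all > 80
print(rule_fires([85, 90, 88, 79, 91, 86]))  # False: a dip inside the window
```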
Configuration
Alert configuration in /etc/piglet/alerts.yml:
alerts:
  email:
    enabled: true
    to: admin@example.com
    smtp:
      host: smtp.example.com
      port: 587
  slack:
    enabled: false
    webhook: ""
  rules:
    - name: high_cpu
      expr: cpu_usage > 80
      for: 5m
      severity: warning
View Logs
# All service logs
pig logs
# Specific service
pig logs postgres
pig logs nginx
Next Steps
4.10 - Upgrade
Learn how to upgrade your Piglet Run installation.
Overview
Piglet Run upgrades include:
- Minor Updates: Bug fixes and security patches
- Major Upgrades: New features and improvements
- PostgreSQL Upgrades: Database version upgrades
Quick Upgrade
Upgrade to latest version:
Check for Updates
View Current Version
Check Available Updates
Standard Upgrade
Upgrade to Specific Version
pig upgrade --version 2.5.0
Dry Run
Upgrade PostgreSQL
Check Compatible Versions
Upgrade Database Version
pig upgrade pg --version 17
With Full Backup
pig upgrade pg --version 17 --backup
Before Upgrading
Create Backup
Check Compatibility
pig upgrade check --verbose
Review Release Notes
Rollback
If upgrade fails:
Restore from Backup
pig restore --backup pre-upgrade
Rollback to Previous Version
Upgrade History
View upgrade history:
Example output:
Version Date Status
------- ---------- -------
2.5.0 2024-01-15 Current
2.4.1 2024-01-01 Previous
2.4.0 2023-12-15 Archived
Configuration
Upgrade settings in /etc/piglet/upgrade.yml:
upgrade:
auto_backup: true
notify: true
channel: stable # stable, beta, nightly
Next Steps
5 - Reference
Detailed technical reference for CLI, configuration, services, and APIs.
Reference documentation provides detailed technical information.
How to Use
Reference docs are for looking up specific information:
- CLI Commands: Full command reference
- Configuration: All configuration options
- Services: Details about built-in services
Topics
5.1 - CLI
Command-line interface reference for Piglet Run, powered by pig - the PostgreSQL package manager.
Overview
The pig CLI provides complete control over PostgreSQL installation, extension management, and system operations.
Installation
# Default (Cloudflare CDN)
curl -fsSL https://repo.pigsty.io/pig | bash
# China Mirror
curl -fsSL https://repo.pigsty.cc/pig | bash
Verify installation:
Global Options
| Option | Description |
|---|---|
| `--help`, `-h` | Show help |
| `--debug` | Enable debug mode |
| `--log-level` | Set log level (debug/info/warn/error) |
| `-H`, `--home` | Pigsty home directory |
| `-i`, `--inventory` | Configuration inventory path |
Main Commands
Repository Management (pig repo)
pig repo list # List available repos and modules
pig repo info # Show repository information
pig repo status # Display current repo config
pig repo add [modules...] # Add repositories
pig repo set # Setup all required repos (recommended)
pig repo rm [modules...] # Remove repositories
pig repo update # Update package cache
Extension Management (pig ext)
pig ext list [pattern] # Search/list extensions
pig ext info <name> # Show extension details
pig ext avail <name> # Show availability matrix
pig ext status # Show installed extensions
pig ext scan # Scan installed extensions
pig ext add <name> [-v version] # Install extension
pig ext rm <name> # Remove extension
pig ext update # Update extensions
Installation Alias (pig install)
pig install pg17 # Install PostgreSQL 17
pig install pg_duckdb -v 17 # Install extension for PG 17
pig install vector postgis # Install multiple extensions
PostgreSQL Management (pig pg)
pig pg init # Initialize data directory
pig pg start # Start PostgreSQL
pig pg stop # Stop PostgreSQL
pig pg status # Check status
pig pg psql [database] # Connect to database
pig pg ps # Show connections
pig pg vacuum [database] # Vacuum database
pig pg log tail # View logs in real-time
Backup Management (pig pb)
pig pb info # Show backup information
pig pb ls # List all backups
pig pb backup # Create backup
pig pb backup full # Full backup
pig pb backup incr # Incremental backup
pig pb restore # Restore to latest
pig pb restore -t <time> # Restore to specific time
pig pb log tail # View backup logs
Patroni Cluster (pig pt)
pig pt list # List cluster members
pig pt config # Show cluster config
pig pt status # View service status
pig pt log -f # View logs in real-time
Pigsty Management (pig sty)
pig sty init # Download and install Pigsty
pig sty boot # Install Ansible dependencies
pig sty conf [-m template] # Generate configuration
pig sty deploy # Run deployment playbook
pig sty list # List available versions
System Status (pig status)
pig status # Show environment status
Environment Variables
| Variable | Description |
|---|---|
| `PIGSTY_HOME` | Pigsty home directory (default: `~/pigsty`) |
| `PIG_LOG_LEVEL` | Log level |
Examples
# Quick setup for Piglet Run
pig repo set # Setup repositories
pig install pg17 # Install PostgreSQL 17
pig install vector pg_duckdb # Install extensions
pig sty init && pig sty deploy # Deploy Pigsty
# Daily operations
pig pg status # Check PostgreSQL
pig pb info # Check backups
pig ext status # Check extensions
See Also
5.2 - Configuration
Configuration file reference for Piglet Run.
Overview
Piglet Run uses YAML configuration files located in /etc/piglet/.
Main Configuration
File: /etc/piglet/piglet.yml
# Piglet Run Configuration
# System settings
system:
hostname: piglet
timezone: UTC
locale: en_US.UTF-8
# Database settings
database:
host: localhost
port: 5432
user: dba
database: postgres
max_connections: 100
shared_buffers: 256MB
# Services
services:
vscode:
enabled: true
port: 8080
jupyter:
enabled: true
port: 8888
grafana:
enabled: true
port: 3000
# Storage
storage:
data_dir: /data
backup_dir: /data/backup
temp_dir: /tmp/piglet
# Logging
logging:
level: info
file: /var/log/piglet/piglet.log
max_size: 100M
max_files: 10
Database Configuration
File: /etc/piglet/database.yml
# PostgreSQL Configuration
postgresql:
version: 17
data_directory: /data/postgres
# Connection settings
listen_addresses: localhost
port: 5432
max_connections: 100
# Memory settings
shared_buffers: 256MB
effective_cache_size: 768MB
work_mem: 4MB
maintenance_work_mem: 64MB
# WAL settings
wal_level: replica
max_wal_size: 1GB
min_wal_size: 80MB
# Logging
log_destination: csvlog
logging_collector: on
log_directory: pg_log
Service Configuration
VS Code Server
File: /etc/piglet/vscode.yml
vscode:
enabled: true
port: 8080
auth: password
extensions:
- ms-python.python
- rust-lang.rust-analyzer
JupyterLab
File: /etc/piglet/jupyter.yml
jupyter:
enabled: true
port: 8888
notebook_dir: /home/dba/notebooks
kernels:
- python3
- ir
Backup Configuration
File: /etc/piglet/backup.yml
backup:
enabled: true
schedule: "0 2 * * *"
retention: 7
database:
type: full
compress: true
files:
enabled: true
paths:
- /home/dba
- /etc/piglet
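The `schedule` field uses standard five-field cron syntax (minute, hour, day of month, month, day of week). A minimal matcher, enough for the default `"0 2 * * *"` (daily at 02:00), can illustrate how the expression is read; a real scheduler supports ranges, steps, and lists as well:

```python
from datetime import datetime

def cron_matches(expr, dt):
    """Minimal five-field cron matcher: minute hour dom month dow.
    Supports '*' and plain numbers only (enough for '0 2 * * *')."""
    fields = expr.split()
    values = [dt.minute, dt.hour, dt.day, dt.month, dt.isoweekday() % 7]
    return all(f == "*" or int(f) == v for f, v in zip(fields, values))

print(cron_matches("0 2 * * *", datetime(2024, 1, 15, 2, 0)))   # True
print(cron_matches("0 2 * * *", datetime(2024, 1, 15, 14, 30))) # False
```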
Network Configuration
File: /etc/piglet/network.yml
network:
# Domain settings
domain: localhost
# SSL settings
ssl:
enabled: false
cert: /etc/piglet/ssl/cert.pem
key: /etc/piglet/ssl/key.pem
# Proxy settings
proxy:
enabled: false
host: proxy.example.com
port: 8080
Environment Variables
Override configuration with environment variables:
export PIG_DATABASE_PORT=5433
export PIG_SERVICES_VSCODE_ENABLED=false
export PIG_LOGGING_LEVEL=debug
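The examples suggest a `PIG_<SECTION>_<KEY>` naming convention mapped onto the YAML hierarchy. A sketch of that mapping (an inference from the examples, not Piglet Run's actual loader; keys with more than one nesting level, such as `PIG_SERVICES_VSCODE_ENABLED`, would need deeper splitting):

```python
def overrides(environ, prefix="PIG_"):
    """Map PIG_DATABASE_PORT=5433 onto {'database': {'port': '5433'}}.
    Assumes one section level, as in the exports above."""
    result = {}
    for key, value in environ.items():
        if not key.startswith(prefix):
            continue
        section, _, option = key[len(prefix):].lower().partition("_")
        result.setdefault(section, {})[option] = value
    return result

env = {"PIG_DATABASE_PORT": "5433", "PIG_LOGGING_LEVEL": "debug", "PATH": "/usr/bin"}
print(overrides(env))
# {'database': {'port': '5433'}, 'logging': {'level': 'debug'}}
```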
See Also
5.3 - VS Code Server
VS Code Server configuration and details for Piglet Run.
Overview
Piglet Run includes a pre-configured VS Code Server (code-server) for browser-based development.
Access
Default URL: http://<ip>/code
Configuration
File: /etc/piglet/vscode.yml
vscode:
enabled: true
bind_addr: 127.0.0.1:8080
auth: password
password: ${VSCODE_PASSWORD}
cert: false
# User data directory
user_data_dir: /home/dba/.local/share/code-server
# Extensions directory
extensions_dir: /home/dba/.local/share/code-server/extensions
Pre-installed Extensions
| Extension | Description |
|---|---|
| `ms-python.python` | Python language support |
| `ms-toolsai.jupyter` | Jupyter notebook support |
| `rust-lang.rust-analyzer` | Rust language support |
| `golang.go` | Go language support |
| `dbaeumer.vscode-eslint` | JavaScript linting |
| `esbenp.prettier-vscode` | Code formatter |
| `mtxr.sqltools` | SQL tools |
Installing Extensions
Via CLI
code-server --install-extension ms-python.python
Via Settings
- Open VS Code in browser
- Go to Extensions (Ctrl+Shift+X)
- Search and install extensions
Settings
Default settings location: /home/dba/.local/share/code-server/User/settings.json
{
"editor.fontSize": 14,
"editor.tabSize": 2,
"editor.formatOnSave": true,
"terminal.integrated.defaultProfile.linux": "bash",
"python.defaultInterpreterPath": "/usr/bin/python3",
"files.autoSave": "afterDelay"
}
Service Management
# Start VS Code server
pig start vscode
# Stop VS Code server
pig stop vscode
# Restart VS Code server
pig restart vscode
# View logs
pig logs vscode
Keyboard Shortcuts
| Shortcut | Action |
|---|---|
| `Ctrl+Shift+P` | Command palette |
| `Ctrl+P` | Quick open file |
| `Ctrl+Shift+E` | Explorer |
| `Ctrl+Shift+F` | Search |
| ``Ctrl+` `` | Terminal |
| `Ctrl+Shift+G` | Git |
Troubleshooting
Connection Issues
# Check service status
systemctl status code-server
# Check port binding
ss -tlnp | grep 8080
# View logs
journalctl -u code-server -f
Extension Issues
# List installed extensions
code-server --list-extensions
# Reinstall extension
code-server --uninstall-extension EXTENSION_ID
code-server --install-extension EXTENSION_ID
See Also
5.4 - JupyterLab
JupyterLab configuration and details for Piglet Run.
Overview
Piglet Run includes JupyterLab for interactive computing and data analysis.
Access
Default URL: http://<ip>/jupyter
Configuration
File: /etc/piglet/jupyter.yml
jupyter:
enabled: true
port: 8888
token: ${JUPYTER_TOKEN}
# Notebook directory
notebook_dir: /home/dba/notebooks
# Allowed origins
allow_origin: "*"
# Kernels
kernels:
- python3
- ir
- julia
JupyterLab Configuration
File: /home/dba/.jupyter/jupyter_lab_config.py
c.ServerApp.ip = '127.0.0.1'
c.ServerApp.port = 8888
c.ServerApp.open_browser = False
c.ServerApp.notebook_dir = '/home/dba/notebooks'
c.ServerApp.token = ''
c.ServerApp.allow_origin = '*'
Available Kernels
| Kernel | Language | Description |
|---|---|---|
| `python3` | Python | IPython kernel |
| `ir` | R | R kernel |
| `julia` | Julia | Julia kernel |
| `bash` | Bash | Bash kernel |
Install Additional Kernels
# R kernel
R -e "IRkernel::installspec()"
# Julia kernel
julia -e 'using Pkg; Pkg.add("IJulia")'
Pre-installed Extensions
| Extension | Description |
|---|---|
| `jupyterlab-git` | Git integration |
| `jupyterlab-lsp` | Language server protocol |
| `jupyterlab-sql` | SQL support |
Install Extensions
pip install jupyterlab-git
jupyter labextension install @jupyterlab/git
Service Management
# Start Jupyter
pig start jupyter
# Stop Jupyter
pig stop jupyter
# Restart Jupyter
pig restart jupyter
# View logs
pig logs jupyter
Connecting to PostgreSQL
import psycopg2
import pandas as pd
# Connect to database
conn = psycopg2.connect(
host="localhost",
database="postgres",
user="dba"
)
# Query data
df = pd.read_sql("SELECT * FROM my_table", conn)
df.head()
Using SQL Magic
%load_ext sql
%sql postgresql://dba@localhost/postgres
%%sql
SELECT * FROM pg_stat_activity LIMIT 5;
Keyboard Shortcuts
| Shortcut | Action |
|---|---|
| `Shift+Enter` | Run cell, select below |
| `Ctrl+Enter` | Run cell, stay in cell |
| `Alt+Enter` | Run cell, insert below |
| `Esc` | Command mode |
| `Enter` | Edit mode |
| `A` | Insert cell above |
| `B` | Insert cell below |
| `DD` | Delete cell |
Troubleshooting
Connection Issues
# Check service status
systemctl status jupyter
# Check port binding
ss -tlnp | grep 8888
# View logs
journalctl -u jupyter -f
Kernel Issues
# List available kernels
jupyter kernelspec list
# Reinstall kernel
python -m ipykernel install --user
See Also
5.5 - PostgreSQL
PostgreSQL database configuration and details for Piglet Run.
Overview
Piglet Run includes PostgreSQL 17 as the primary database with optimized defaults.
Connection Details
| Parameter | Default Value |
|---|---|
| Host | localhost |
| Port | 5432 |
| User | dba |
| Database | postgres |
| Socket | /var/run/postgresql |
Connection Strings
Local Connection
psql -U dba postgres
TCP Connection
postgresql://dba@localhost:5432/postgres
With Password
postgresql://dba:password@localhost:5432/postgres
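Passwords containing URL metacharacters (`@`, `:`, `/`) must be percent-encoded before being embedded in a connection string. A small helper sketch (illustrative, not part of Piglet Run):

```python
from urllib.parse import quote

def make_dsn(user, password=None, host="localhost", port=5432, database="postgres"):
    """Build a postgresql:// URL, percent-encoding credentials so
    characters like '@' or ':' do not break the URL structure."""
    auth = quote(user, safe="")
    if password is not None:
        auth += ":" + quote(password, safe="")
    return f"postgresql://{auth}@{host}:{port}/{database}"

print(make_dsn("dba"))               # postgresql://dba@localhost:5432/postgres
print(make_dsn("dba", "p@ss:word"))  # postgresql://dba:p%40ss%3Aword@localhost:5432/postgres
```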
Configuration
File: /data/postgres/postgresql.conf
Memory Settings
# Memory
shared_buffers = 256MB
effective_cache_size = 768MB
work_mem = 4MB
maintenance_work_mem = 64MB
huge_pages = try
Connection Settings
# Connections
listen_addresses = 'localhost'
port = 5432
max_connections = 100
superuser_reserved_connections = 3
WAL Settings
# WAL
wal_level = replica
max_wal_size = 1GB
min_wal_size = 80MB
wal_buffers = 8MB
checkpoint_completion_target = 0.9
Logging
# Logging
log_destination = 'csvlog'
logging_collector = on
log_directory = 'pg_log'
log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'
log_rotation_age = 1d
log_rotation_size = 100MB
log_statement = 'ddl'
log_min_duration_statement = 1000
Client Authentication
File: /data/postgres/pg_hba.conf
# TYPE DATABASE USER ADDRESS METHOD
local all all trust
host all all 127.0.0.1/32 scram-sha-256
host all all ::1/128 scram-sha-256
Service Management
# Start PostgreSQL
pig start postgres
# Stop PostgreSQL
pig stop postgres
# Restart PostgreSQL
pig restart postgres
# Reload configuration
pig reload postgres
# View logs
pig logs postgres
Database Management
# Create database
pig db create mydb
# List databases
pig db list
# Drop database
pig db drop mydb
# Connect to database
pig db connect mydb
User Management
# Create user
pig user create myuser
# Grant privileges
psql -c "GRANT ALL ON DATABASE mydb TO myuser"
# Change password
pig user passwd myuser
Backup and Restore
# Full backup
pig backup db --full
# Point-in-time recovery
pig restore db --time "2024-01-15 14:30:00"
Recommended Settings by RAM
| RAM | shared_buffers | effective_cache_size | work_mem |
|---|---|---|---|
| 4GB | 1GB | 3GB | 32MB |
| 8GB | 2GB | 6GB | 64MB |
| 16GB | 4GB | 12GB | 128MB |
| 32GB | 8GB | 24GB | 256MB |
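The table follows common PostgreSQL sizing heuristics: `shared_buffers` at roughly 25% of RAM, `effective_cache_size` at roughly 75%, and `work_mem` at about RAM/128. A sketch that reproduces the rows above (a rule of thumb, not a tuning tool):

```python
def tune(ram_gb):
    """Derive the table's recommendations: shared_buffers = 25% of RAM,
    effective_cache_size = 75% of RAM, work_mem = RAM / 128."""
    ram_mb = ram_gb * 1024
    return {
        "shared_buffers": f"{ram_gb // 4}GB",
        "effective_cache_size": f"{ram_gb * 3 // 4}GB",
        "work_mem": f"{ram_mb // 128}MB",
    }

print(tune(8))
# {'shared_buffers': '2GB', 'effective_cache_size': '6GB', 'work_mem': '64MB'}
```

Real workloads vary; treat these as starting points and adjust from `pg_stat_statements` and memory metrics.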
Monitoring
# Connection statistics
psql -c "SELECT * FROM pg_stat_activity"
# Database size
psql -c "SELECT pg_database.datname, pg_size_pretty(pg_database_size(pg_database.datname)) FROM pg_database"
# Table statistics
psql -c "SELECT * FROM pg_stat_user_tables"
See Also
5.6 - Nginx
Nginx web server configuration for Piglet Run.
Overview
Piglet Run uses Nginx as a reverse proxy and web server for all services.
Configuration
Main config: /etc/nginx/nginx.conf
Site configs: /etc/nginx/conf.d/
Default Configuration
File: /etc/nginx/conf.d/piglet.conf
# Piglet Run Nginx Configuration
upstream vscode {
server 127.0.0.1:8080;
}
upstream jupyter {
server 127.0.0.1:8888;
}
upstream grafana {
server 127.0.0.1:3000;
}
server {
listen 80;
server_name _;
# Homepage
location / {
root /www/piglet;
index index.html;
}
# VS Code Server
location /code {
proxy_pass http://vscode;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
}
# JupyterLab
location /jupyter {
proxy_pass http://jupyter;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
}
# Grafana
location /ui {
proxy_pass http://grafana;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
}
}
SSL Configuration
File: /etc/nginx/conf.d/piglet-ssl.conf
server {
listen 443 ssl http2;
server_name example.com;
ssl_certificate /etc/piglet/ssl/cert.pem;
ssl_certificate_key /etc/piglet/ssl/key.pem;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256;
ssl_prefer_server_ciphers off;
# ... location blocks ...
}
# Redirect HTTP to HTTPS
server {
listen 80;
server_name example.com;
return 301 https://$server_name$request_uri;
}
Service Management
# Start Nginx
pig start nginx
# Stop Nginx
pig stop nginx
# Restart Nginx
pig restart nginx
# Reload configuration
pig reload nginx
# Test configuration
nginx -t
Custom Site Configuration
Create custom site config:
cat > /etc/nginx/conf.d/mysite.conf << 'EOF'
server {
listen 80;
server_name mysite.example.com;
root /www/mysite;
index index.html;
location / {
try_files $uri $uri/ =404;
}
}
EOF
nginx -t && systemctl reload nginx
Proxy Configuration
WebSocket Support
location /ws {
proxy_pass http://backend;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_read_timeout 86400;
}
Node.js Application
location /app {
proxy_pass http://127.0.0.1:3001;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
Logging
Access log: /var/log/nginx/access.log
Error log: /var/log/nginx/error.log
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
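Each field in the `main` format maps to one capture in a regular expression, which is handy for ad-hoc log analysis. A sketch (the sample line is fabricated for illustration):

```python
import re

# One named group per variable in the `main` log_format above.
LOG_RE = re.compile(
    r'(?P<addr>\S+) - (?P<user>\S+) \[(?P<time>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<bytes>\d+) '
    r'"(?P<referer>[^"]*)" "(?P<agent>[^"]*)" "(?P<xff>[^"]*)"'
)

line = ('203.0.113.7 - - [15/Jan/2024:14:30:00 +0000] "GET /ui HTTP/1.1" '
        '200 1234 "-" "curl/8.0" "-"')
m = LOG_RE.match(line)
print(m.group("status"), m.group("request"))  # 200 GET /ui HTTP/1.1
```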
Rate Limiting
limit_req_zone $binary_remote_addr zone=api:10m rate=10r/s;
location /api {
limit_req zone=api burst=20 nodelay;
proxy_pass http://backend;
}
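nginx's `limit_req` is a leaky-bucket limiter: `rate=10r/s` sets the refill rate, `burst=20` absorbs short spikes, and `nodelay` serves the burst immediately instead of queueing it. A simplified token-bucket model (an approximation of nginx's behaviour, not its exact algorithm):

```python
def allow(timestamps, rate=10, burst=20):
    """Token-bucket sketch of limit_req with nodelay: tokens refill at
    `rate` per second up to `burst`; each request spends one token or
    is rejected. Returns one accept/reject flag per request."""
    tokens, last, out = float(burst), None, []
    for t in timestamps:
        if last is not None:
            tokens = min(burst, tokens + (t - last) * rate)
        last = t
        if tokens >= 1:
            tokens -= 1
            out.append(True)
        else:
            out.append(False)
    return out

# 25 simultaneous requests: the burst absorbs 20, the rest are rejected.
result = allow([0.0] * 25)
print(result.count(True), result.count(False))  # 20 5
```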
Troubleshooting
# Test configuration
nginx -t
# Check error log
tail -f /var/log/nginx/error.log
# Check access log
tail -f /var/log/nginx/access.log
# View connections
ss -tlnp | grep nginx
See Also
5.7 - Grafana
Grafana monitoring dashboard configuration for Piglet Run.
Overview
Piglet Run includes Grafana for comprehensive monitoring and visualization.
Access
Default URL: http://<ip>/ui
Default credentials:
- Username: admin
- Password: (shown during installation)
Configuration
File: /etc/grafana/grafana.ini
[server]
http_port = 3000
root_url = %(protocol)s://%(domain)s/ui/
serve_from_sub_path = true
[security]
admin_user = admin
admin_password = ${GRAFANA_PASSWORD}
[auth.anonymous]
enabled = false
[dashboards]
default_home_dashboard_path = /var/lib/grafana/dashboards/home.json
[database]
type = postgres
host = localhost:5432
name = grafana
user = grafana
Pre-installed Dashboards
| Dashboard | Description |
|---|---|
| Home | System overview |
| PostgreSQL | Database metrics |
| Node Exporter | Server resources |
| Nginx | Web server metrics |
| Logs | Log viewer |
Data Sources
Prometheus
apiVersion: 1
datasources:
- name: Prometheus
type: prometheus
url: http://localhost:9090
access: proxy
isDefault: true
PostgreSQL
datasources:
- name: PostgreSQL
type: postgres
url: localhost:5432
database: postgres
user: grafana
secureJsonData:
password: ${GRAFANA_DB_PASSWORD}
Loki (Logs)
datasources:
- name: Loki
type: loki
url: http://localhost:3100
access: proxy
Service Management
# Start Grafana
pig start grafana
# Stop Grafana
pig stop grafana
# Restart Grafana
pig restart grafana
# View logs
pig logs grafana
Creating Dashboards
Via UI
- Click “+” in left sidebar
- Select “Dashboard”
- Add panels with queries
- Save dashboard
Via Provisioning
Place JSON dashboard files in /var/lib/grafana/dashboards/
# /etc/grafana/provisioning/dashboards/default.yaml
apiVersion: 1
providers:
- name: default
folder: ''
type: file
options:
path: /var/lib/grafana/dashboards
Alerting
[smtp]
enabled = true
host = smtp.example.com:587
user = alerts@example.com
password = ${SMTP_PASSWORD}
from_address = alerts@example.com
Create Alert Rule
- Edit panel query
- Go to “Alert” tab
- Set conditions
- Configure notifications
Useful Queries
PostgreSQL Connections
CPU Usage
100 - (avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100)
Memory Usage
(1 - node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes) * 100
Disk Usage
(1 - node_filesystem_avail_bytes / node_filesystem_size_bytes) * 100
API Access
Generate API key:
curl -X POST -H "Content-Type: application/json" \
-d '{"name":"mykey", "role": "Admin"}' \
http://admin:password@localhost:3000/api/auth/keys
Troubleshooting
# Check service status
systemctl status grafana-server
# View logs
journalctl -u grafana-server -f
# Test database connection
psql -U grafana -d grafana -c "SELECT 1"
See Also
5.8 - JuiceFS
JuiceFS distributed filesystem configuration for Piglet Run.
Overview
Piglet Run uses JuiceFS for distributed storage, enabling snapshots and fast cloning.
Architecture
JuiceFS consists of:
- Metadata Engine: PostgreSQL stores file metadata
- Object Storage: S3-compatible storage for data blocks
- FUSE Client: Mounts filesystem locally
Configuration
File: /etc/piglet/juicefs.yml
juicefs:
name: piglet
metadata: postgres://dba@localhost:5432/juicefs
storage: minio
bucket: http://localhost:9000/piglet
access_key: ${MINIO_ACCESS_KEY}
secret_key: ${MINIO_SECRET_KEY}
# Mount options
mount_point: /data/jfs
cache_dir: /var/cache/juicefs
cache_size: 10240 # 10GB
Mount Configuration
File: /etc/juicefs/piglet.conf
[piglet]
meta = postgres://dba@localhost:5432/juicefs
storage = minio
bucket = http://localhost:9000/piglet
access-key = ${MINIO_ACCESS_KEY}
secret-key = ${MINIO_SECRET_KEY}
# Performance options
cache-dir = /var/cache/juicefs
cache-size = 10240
buffer-size = 300
prefetch = 3
Service Management
# Mount filesystem
pig mount jfs
# Unmount filesystem
pig umount jfs
# Check status
pig status jfs
# View statistics
juicefs stats /data/jfs
Commands
Format Filesystem
juicefs format \
--storage minio \
  --bucket http://localhost:9000/piglet \
postgres://dba@localhost:5432/juicefs \
piglet
Mount Filesystem
juicefs mount \
postgres://dba@localhost:5432/juicefs \
/data/jfs \
--cache-dir /var/cache/juicefs \
--cache-size 10240
Check Filesystem
juicefs fsck postgres://dba@localhost:5432/juicefs
Snapshot Operations
Create Snapshot
juicefs snapshot create /data/jfs snap-$(date +%Y%m%d)
List Snapshots
juicefs snapshot list /data/jfs
Restore Snapshot
juicefs snapshot restore /data/jfs snap-20240115
Delete Snapshot
juicefs snapshot delete /data/jfs snap-20240115
Clone Operations
# Clone directory
juicefs clone /data/jfs/source /data/jfs/dest
# Clone with snapshot
juicefs clone /data/jfs/.snapshots/snap-20240115 /data/jfs/restored
Cache Settings
# Increase cache size
juicefs config /data/jfs --cache-size 20480
# Enable writeback
juicefs config /data/jfs --writeback
# Entry cache TTL
juicefs mount ... --entry-cache 3 --dir-entry-cache 3 --attr-cache 3
Monitoring
Statistics
juicefs stats /data/jfs
Output:
usage: 10.2 GiB (1234567 inodes)
sessions: 1
trash: 0 (0 Bytes)
Prometheus Metrics
juicefs mount ... --metrics localhost:9567
Troubleshooting
# Check mount status
mount | grep juicefs
# View logs
journalctl -u juicefs -f
# Debug mode
juicefs mount --debug ...
# Repair filesystem
juicefs fsck --repair postgres://dba@localhost:5432/juicefs
See Also
5.9 - Extensions
PostgreSQL extensions available in Piglet Run.
Overview
Piglet Run includes 340+ PostgreSQL extensions from the Pigsty ecosystem.
Pre-installed Extensions
Core Extensions
| Extension | Version | Description |
|---|---|---|
| `pg_stat_statements` | 1.10 | Track execution statistics |
| `pgcrypto` | 1.3 | Cryptographic functions |
| `uuid-ossp` | 1.1 | UUID generation |
| `hstore` | 1.8 | Key-value store |
| `ltree` | 1.2 | Hierarchical data |
| `pg_trgm` | 1.6 | Trigram matching |
Vector & AI
| Extension | Version | Description |
|---|---|---|
| `pgvector` | 0.7.0 | Vector similarity search |
| `pgvectorscale` | 0.2.0 | Vector indexing |
| `pg_embedding` | 0.3.6 | Embedding functions |
Time Series
| Extension | Version | Description |
|---|---|---|
| `timescaledb` | 2.14 | Time-series database |
| `pg_partman` | 5.0 | Partition management |
Geospatial
| Extension | Version | Description |
|---|---|---|
| `postgis` | 3.4 | Geographic objects |
| `postgis_topology` | 3.4 | Topology support |
| `postgis_raster` | 3.4 | Raster data |
| `pgrouting` | 3.6 | Routing algorithms |
Full Text Search
| Extension | Version | Description |
|---|---|---|
| `pg_jieba` | 1.1 | Chinese word segmentation |
| `zhparser` | 2.2 | Chinese parser |
Installing Extensions
Via SQL
-- Create extension (the pgvector package provides the "vector" extension)
CREATE EXTENSION vector;
-- Create extension in specific schema
CREATE EXTENSION postgis SCHEMA public;
-- Update extension
ALTER EXTENSION vector UPDATE;
Via CLI
pig ext install pgvector
pig ext install postgis
Listing Extensions
Installed Extensions
SELECT * FROM pg_extension;
Available Extensions
SELECT * FROM pg_available_extensions ORDER BY name;
Extension Details
SELECT * FROM pg_available_extension_versions
WHERE name = 'vector';
Extension Configuration
pg_stat_statements
-- Enable tracking (requires a server restart to take effect)
ALTER SYSTEM SET shared_preload_libraries = 'pg_stat_statements';
-- Configure
ALTER SYSTEM SET pg_stat_statements.track = 'all';
ALTER SYSTEM SET pg_stat_statements.max = 10000;
pgvector
-- Create extension
CREATE EXTENSION vector;
-- Create vector column
CREATE TABLE items (
id SERIAL PRIMARY KEY,
embedding vector(384)
);
-- Create index
CREATE INDEX ON items USING ivfflat (embedding vector_cosine_ops) WITH (lists = 100);
PostGIS
-- Create extension
CREATE EXTENSION postgis;
-- Create geometry column
CREATE TABLE locations (
id SERIAL PRIMARY KEY,
name TEXT,
geom geometry(Point, 4326)
);
-- Spatial query
SELECT name FROM locations
WHERE ST_DWithin(geom, ST_MakePoint(-122.4, 37.8)::geography, 1000);
TimescaleDB
-- Create extension
CREATE EXTENSION timescaledb;
-- Create hypertable
CREATE TABLE metrics (
time TIMESTAMPTZ NOT NULL,
device_id INTEGER,
value DOUBLE PRECISION
);
SELECT create_hypertable('metrics', 'time');
Managing Extensions
# List installed extensions
pig ext list
# Show extension info
pig ext info pgvector
# Remove extension
pig ext remove pgvector
See Also
5.10 - REST API
REST API reference for Piglet Run.
Overview
Piglet Run provides a REST API for programmatic access to all features.
Base URL
http://<ip>/api/v1
Authentication
API Key
curl -H "Authorization: Bearer YOUR_API_KEY" \
http://localhost/api/v1/status
Generate API Key
pig api key create --name mykey
Endpoints
System
Get Status
GET /api/v1/status
Response:
{
"status": "healthy",
"version": "2.5.0",
"uptime": 86400,
"services": {
"postgres": "running",
"vscode": "running",
"jupyter": "running"
}
}
Get System Info
Response:
{
"hostname": "piglet",
"os": "Ubuntu 22.04",
"cpu": 4,
"memory": "8GB",
"disk": "100GB"
}
Databases
List Databases
GET /api/v1/databases
Response:
{
"databases": [
{
"name": "postgres",
"owner": "dba",
"size": "50MB"
}
]
}
Create Database
POST /api/v1/databases
Content-Type: application/json
{
"name": "mydb",
"owner": "dba"
}
Delete Database
DELETE /api/v1/databases/{name}
Backups
List Backups
GET /api/v1/backups
Response:
{
"backups": [
{
"id": "backup-20240115",
"type": "full",
"size": "1.2GB",
"created_at": "2024-01-15T02:00:00Z"
}
]
}
Create Backup
POST /api/v1/backups
Content-Type: application/json
{
"type": "full",
"databases": ["postgres"]
}
Restore Backup
POST /api/v1/backups/{id}/restore
Content-Type: application/json
{
"target_time": "2024-01-15T14:30:00Z"
}
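The restore endpoints take ISO 8601 timestamps with a `Z` (UTC) suffix. In Python these parse cleanly with the standard library; `datetime.fromisoformat` only accepts the trailing `Z` on 3.11+, so replacing it keeps older versions working:

```python
from datetime import datetime, timezone

def parse_target_time(s):
    """Parse an ISO 8601 timestamp like '2024-01-15T14:30:00Z'
    into a timezone-aware datetime."""
    return datetime.fromisoformat(s.replace("Z", "+00:00"))

t = parse_target_time("2024-01-15T14:30:00Z")
print(t.isoformat())             # 2024-01-15T14:30:00+00:00
print(t.tzinfo == timezone.utc)  # True
```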
Services
List Services
GET /api/v1/services
Response:
{
"services": [
{
"name": "postgres",
"status": "running",
"port": 5432
},
{
"name": "vscode",
"status": "running",
"port": 8080
}
]
}
Control Service
POST /api/v1/services/{name}/{action}
Actions: start, stop, restart
Snapshots
List Snapshots
GET /api/v1/snapshots
Create Snapshot
POST /api/v1/snapshots
Content-Type: application/json
{
"name": "snap-20240115",
"description": "Before upgrade"
}
Restore Snapshot
POST /api/v1/snapshots/{name}/restore
Users
List Users
GET /api/v1/users
Create User
POST /api/v1/users
Content-Type: application/json
{
"username": "newuser",
"password": "secure_password",
"databases": ["mydb"]
}
Error Responses
{
"error": {
"code": "NOT_FOUND",
"message": "Database not found",
"details": {
"database": "nonexistent"
}
}
}
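A client-side sketch of consuming this envelope: check for the `error` key and surface `code`, `message`, and `details` together (a hypothetical helper, stdlib only):

```python
import json

def raise_for_api_error(body):
    """Parse the error envelope shown above and raise a readable
    exception; return the payload unchanged when no error is present."""
    payload = json.loads(body)
    if "error" in payload:
        err = payload["error"]
        raise RuntimeError(f"{err['code']}: {err['message']} ({err.get('details', {})})")
    return payload

body = ('{"error": {"code": "NOT_FOUND", "message": "Database not found", '
        '"details": {"database": "nonexistent"}}}')
try:
    raise_for_api_error(body)
except RuntimeError as e:
    print(e)  # NOT_FOUND: Database not found ({'database': 'nonexistent'})
```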
Error Codes
| Code | HTTP Status | Description |
|---|---|---|
| `UNAUTHORIZED` | 401 | Invalid or missing API key |
| `FORBIDDEN` | 403 | Insufficient permissions |
| `NOT_FOUND` | 404 | Resource not found |
| `CONFLICT` | 409 | Resource already exists |
| `INTERNAL_ERROR` | 500 | Server error |
Rate Limiting
- Default: 100 requests per minute
- Burst: 20 requests
Headers:
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 95
X-RateLimit-Reset: 1704067200
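`X-RateLimit-Reset` is a Unix epoch second, so a well-behaved client can compute exactly how long to back off before retrying. An illustrative sketch:

```python
def seconds_until_reset(headers, now):
    """Given the rate-limit headers above, return how long to sleep
    before retrying: zero if requests remain, otherwise the time
    until the X-RateLimit-Reset epoch second."""
    if int(headers["X-RateLimit-Remaining"]) > 0:
        return 0
    return max(0, int(headers["X-RateLimit-Reset"]) - now)

headers = {"X-RateLimit-Limit": "100",
           "X-RateLimit-Remaining": "0",
           "X-RateLimit-Reset": "1704067200"}
print(seconds_until_reset(headers, now=1704067140))  # 60
```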
SDK Examples
Python
import requests
api_key = "YOUR_API_KEY"
base_url = "http://localhost/api/v1"
headers = {"Authorization": f"Bearer {api_key}"}
# Get status
response = requests.get(f"{base_url}/status", headers=headers)
print(response.json())
# Create database
response = requests.post(
f"{base_url}/databases",
headers=headers,
json={"name": "mydb", "owner": "dba"}
)
JavaScript
const apiKey = "YOUR_API_KEY";
const baseUrl = "http://localhost/api/v1";
// Get status
fetch(`${baseUrl}/status`, {
headers: { Authorization: `Bearer ${apiKey}` }
})
.then(res => res.json())
.then(data => console.log(data));
See Also