WIP: add agent docs and example skills

Temporary commit with documentation and examples for agent features.
This commit can be reverted before merging.

Includes:
- docs/ENTRYPOINT_FEATURE.md - ENTRYPOINT implementation notes
- docs/mcp-integration.md - MCP integration design
- docs/agent-skills-changes.md - Skills feature changes
- docs/skill-registry-design.md - Registry design notes
- skills/ - Example skill implementations
- ducky.Agentfile - Example entrypoint agent
ParthSareen 2025-12-30 15:00:26 -05:00
parent 89f74a8b05
commit 96d69ee2b2
25 changed files with 2740 additions and 0 deletions

docs/ENTRYPOINT_FEATURE.md
# ENTRYPOINT Feature for Ollama Agents
## Overview
The ENTRYPOINT command allows agents to specify an external program to run instead of the built-in Ollama chat loop. This makes Ollama a packaging/distribution mechanism for agents with custom runtimes.
## Status: Implemented ✓
## What Was Done
### 1. Types & API
**`types/model/config.go`**
- Added `Entrypoint string` field to `ConfigV2` struct
**`api/types.go`**
- Added `Entrypoint string` to `CreateRequest` (line ~576)
- Added `Entrypoint string` to `ShowResponse` (line ~632)
### 2. Parser
**`parser/parser.go`**
- Added "entrypoint" to `isValidCommand()` switch
- Added case in `CreateRequest()` to set `req.Entrypoint = c.Args`
- Updated `ParseFile()` to allow ENTRYPOINT without FROM (entrypoint-only agents)
- Added entrypoint serialization in `Command.String()`
### 3. Server
**`server/create.go`**
- Added `config.Entrypoint = r.Entrypoint` to store entrypoint in config
- Made FROM optional when ENTRYPOINT is specified:
```go
} else if r.Entrypoint != "" {
	// Entrypoint-only agent: no base model needed
	slog.Debug("create entrypoint-only agent", "entrypoint", r.Entrypoint)
}
```
**`server/routes.go`**
- Added `Entrypoint: m.Config.Entrypoint` to ShowResponse in `GetModelInfo()`
**`server/images.go`**
- Added entrypoint serialization in `Model.String()`:
```go
if m.Config.Entrypoint != "" {
	modelfile.Commands = append(modelfile.Commands, parser.Command{
		Name: "entrypoint",
		Args: m.Config.Entrypoint,
	})
}
```
### 4. CLI
**`cmd/cmd.go`**
- Added `Entrypoint string` to `runOptions` struct
- Updated agent detection to include Entrypoint check
- Added entrypoint check before interactive mode:
```go
if opts.Entrypoint != "" {
	return runEntrypoint(cmd, opts)
}
```
- Implemented `runEntrypoint()` function:
- Parses entrypoint into command and args
- Appends user prompt as additional argument if provided
- Looks up command in PATH
- Creates subprocess with stdin/stdout/stderr connected
- Runs and waits for completion
- Updated `showInfo()` to display entrypoint in Agent section
- Updated `showInfo()` to hide Model section for entrypoint-only agents (no blank fields)
- Added `$PROMPT` placeholder support in `runEntrypoint()`:
- If entrypoint contains `$PROMPT`, it's replaced with the user's prompt
- If no placeholder, prompt is appended as positional argument (backwards compatible)
- If no prompt provided, `$PROMPT` is removed from the command
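The placeholder rules above can be sketched as follows. This is a Python sketch of logic that actually lives in Go in `runEntrypoint()`; `build_entrypoint_argv` is a hypothetical name, and the real implementation may differ in detail (e.g. substring vs whole-token matching):

```python
import shlex

def build_entrypoint_argv(entrypoint, prompt=""):
    # Split the ENTRYPOINT string into command + args.
    argv = shlex.split(entrypoint)
    if "$PROMPT" in argv:
        if prompt:
            # Replace the placeholder wherever it appears.
            argv = [prompt if a == "$PROMPT" else a for a in argv]
        else:
            # No prompt provided: drop the placeholder entirely.
            argv = [a for a in argv if a != "$PROMPT"]
    elif prompt:
        # No placeholder: append the prompt as a positional argument
        # (backwards compatible).
        argv.append(prompt)
    return argv
```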
## Usage
### Agentfile
```dockerfile
# Minimal entrypoint agent (no model required)
ENTRYPOINT ducky
# Or with full path
ENTRYPOINT /usr/local/bin/ducky
# Or with arguments
ENTRYPOINT ducky --verbose
# Use $PROMPT placeholder to control where prompt is inserted
ENTRYPOINT ducky -p $PROMPT
# Without placeholder, prompt is appended as positional argument
ENTRYPOINT echo "Hello" # becomes: echo "Hello" <prompt>
# Can still bundle skills/MCPs with entrypoint agents
SKILL ./my-skill
MCP calculator python3 ./calc.py
ENTRYPOINT my-custom-runtime
```
### CLI
```bash
# Create the agent
ollama create ducky -f ducky.Agentfile
# Run it - starts the entrypoint (e.g., REPL)
ollama run ducky
# With prompt (passed as argument to entrypoint)
ollama run ducky "hello"
# Show agent info
ollama show ducky
# Agent
# entrypoint ducky
```
## Testing Done
1. **Basic entrypoint execution**: ✓
```bash
# Agentfile: ENTRYPOINT echo "Hello from entrypoint"
ollama run test-entry # Output: "Hello from entrypoint"
```
2. **Prompt passing (positional)**: ✓
```bash
# Agentfile: ENTRYPOINT echo "Args:"
ollama run echo-test "hello world" # Output: "Args:" hello world
```
3. **Prompt passing ($PROMPT placeholder)**: ✓
```bash
# Agentfile: ENTRYPOINT echo "Prompt was:" $PROMPT "end"
ollama run echo-placeholder "hello world" # Output: "Prompt was:" hello world "end"
ollama run echo-placeholder # Output: "Prompt was:" "end"
```
4. **Show command**: ✓
```bash
ollama show ducky
# Agent
# entrypoint ducky
# (Model section hidden for entrypoint-only agents)
```
5. **List command**: ✓
- Entrypoint-only agents show with small sizes (~200 bytes)
## Left Over / Future Enhancements
### 1. Context Passing via Environment Variables
Pass agent context to entrypoint via env vars:
- `OLLAMA_AGENT_NAME` - Name of the agent
- `OLLAMA_SKILLS_PATH` - Path to bundled skills
- `OLLAMA_MCPS` - JSON of MCP configurations
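If implemented, these variables could be attached when spawning the subprocess, along these lines. This is a hypothetical sketch of the proposed enhancement; none of these variables exist yet and `entrypoint_env` is an invented name:

```python
import os

def entrypoint_env(agent_name, skills_path, mcps_json):
    # Start from the parent environment, then layer on the agent context.
    env = dict(os.environ)
    env["OLLAMA_AGENT_NAME"] = agent_name
    env["OLLAMA_SKILLS_PATH"] = skills_path
    env["OLLAMA_MCPS"] = mcps_json
    return env
```

The resulting map would then be handed to the subprocess (e.g. `subprocess.run(argv, env=env)` in Python, or `cmd.Env` in Go).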
### ~~2. Arguments Placeholder~~ ✓ DONE
~~Support placeholder syntax for more control:~~
```dockerfile
# Now supported!
ENTRYPOINT ducky -p $PROMPT
```
### 3. Working Directory
Set working directory for entrypoint:
```dockerfile
WORKDIR /app
ENTRYPOINT ./run.sh
```
### 4. Interactive Mode Detection
Different behavior for REPL vs single-shot:
- Detect if stdin is a TTY
- Pass different flags based on mode
### 5. Signal Handling
Improved signal forwarding to subprocess:
- Forward SIGINT, SIGTERM gracefully
- Handle cleanup on parent exit
### 6. Entrypoint with Model
Allow both model and entrypoint:
```dockerfile
FROM llama3.2
ENTRYPOINT my-custom-ui
```
The entrypoint could then use the model via Ollama API.
### 7. Pull/Push for Entrypoint Agents
- Currently entrypoint agents can be created locally
- Need to test/verify push/pull to registry works correctly
- May need to handle entrypoint binaries (or just reference system commands)
### 8. Error Handling
- Better error messages when entrypoint command not found
- Validation of entrypoint during create (optional, warn if not found)
## Design Decisions
1. **Subprocess mode (not exec)**: Ollama stays as parent process to handle signals and cleanup
2. **No context passing initially**: Keep it simple, entrypoint handles its own config
3. **Skills/MCPs allowed**: Enables packaging assets with the agent even if entrypoint manages execution
4. **FROM optional**: Entrypoint agents don't need a model, just the runtime
5. **Prompt as argument**: User prompt is appended as argument to entrypoint command (simplest approach)

docs/agent-skills-changes.md
# Agent Skills Feature - Implementation Summary
This document summarizes all changes made to implement agent skills in Ollama, enabling `ollama run <agent>` with skill-based capabilities.
## Overview
Agents are models with attached skills. Skills are directories containing a `SKILL.md` file with instructions and optional executable scripts. When an agent runs, skills are loaded and injected into the system prompt, and the model can execute scripts via tool calls.
## Files Changed
### 1. `cmd/skills.go` (NEW FILE)
Core skills implementation:
```go
// Key types
type skillMetadata struct {
	Name        string `yaml:"name"`
	Description string `yaml:"description"`
}

type skillDefinition struct {
	Name        string
	Description string
	Content     string // SKILL.md body content
	Dir         string // Absolute path to skill directory
	SkillPath   string // Absolute path to SKILL.md
}

type skillCatalog struct {
	Skills []skillDefinition
	byName map[string]skillDefinition
}
```
**Key functions:**
- `loadSkills(paths []string)` - Walks skill directories, parses SKILL.md files
- `parseSkillFile(path, skillDir)` - Extracts YAML frontmatter and body content
- `SystemPrompt()` - Generates system prompt with skill instructions
- `Tools()` - Returns `run_skill_script` and `read_skill_file` tools
- `RunToolCall(call)` - Executes tool calls from the model
- `runSkillScript(skillDir, command)` - Executes shell commands in skill directory
**Tools provided to model:**
| Tool | Description |
|------|-------------|
| `run_skill_script` | Execute a script in a skill's directory |
| `read_skill_file` | Read a file from a skill's directory |
**Security note:** `runSkillScript` has documented limitations (no sandboxing, no path validation). See the function's doc comment for details.
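For illustration, the frontmatter extraction that `parseSkillFile` performs might look like this. This is a minimal hand-rolled sketch that handles only the flat `key: value` frontmatter used by skills; the real Go code (and its error handling) may differ:

```python
def parse_skill_file(text):
    """Split a SKILL.md document into (metadata dict, body)."""
    if not text.startswith("---\n"):
        raise ValueError("missing frontmatter")
    # Everything between the opening and closing '---' is frontmatter.
    header, _, body = text[4:].partition("\n---\n")
    meta = {}
    for line in header.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
    if "name" not in meta or "description" not in meta:
        raise ValueError("frontmatter needs name and description")
    return meta, body.strip()
```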
---
### 2. `cmd/cmd.go`
**Changes to `runOptions` struct:**
```go
type runOptions struct {
	// ... existing fields ...
	IsAgent   bool
	AgentType string
	Skills    []string
}
```
**Agent detection in `RunHandler`** (~line 497-503):
```go
// Check if this is an agent
isAgent := info.AgentType != "" || len(info.Skills) > 0
if isAgent {
	opts.IsAgent = true
	opts.AgentType = info.AgentType
	opts.Skills = info.Skills
}
```
**Route agents to chat API** (~line 557-562):
```go
// For agents, use chat API even in non-interactive mode to support tools
if opts.IsAgent {
	opts.Messages = append(opts.Messages, api.Message{Role: "user", Content: opts.Prompt})
	_, err := chat(cmd, opts)
	return err
}
```
**Skills loading in `chat` function** (~line 1347-1361):
```go
var skillsCatalog *skillCatalog
if opts.IsAgent && len(opts.Skills) > 0 {
	skillsCatalog, err = loadSkills(opts.Skills)
	// ... error handling ...
	// Print loaded skills
	fmt.Fprintf(os.Stderr, "Loaded skills: %s\n", strings.Join(skillNames, ", "))
}
```
**System prompt injection** (~line 1448-1455):
- Skills system prompt is prepended to messages
**Tool execution** (~line 1497-1533):
- Executes pending tool calls via `skillsCatalog.RunToolCall()`
- Displays script execution and output to terminal
---
### 3. `parser/parser.go`
**New valid commands** in `isValidCommand()`:
```go
case "from", "license", "template", "system", "adapter", "renderer",
"parser", "parameter", "message", "requires", "skill", "agent_type":
```
**Command handling in `CreateRequest()`**:
```go
case "skill":
	skills = append(skills, c.Args)
case "agent_type":
	req.AgentType = c.Args
```
**Underscore support in command names** (~line 545):
```go
case isAlpha(r), r == '_':
	return stateName, r, nil
```
---
### 4. `api/types.go`
**CreateRequest additions** (~line 560-564):
```go
// Skills is a list of skill directories for the agent
Skills []string `json:"skills,omitempty"`
// AgentType defines the type of agent (e.g., "conversational", "task-based")
AgentType string `json:"agent_type,omitempty"`
```
**ShowResponse additions** (~line 633-637):
```go
// Skills loaded for this agent
Skills []string `json:"skills,omitempty"`
// AgentType for this agent
AgentType string `json:"agent_type,omitempty"`
```
---
### 5. `types/model/config.go`
**ConfigV2 additions**:
```go
type ConfigV2 struct {
	// ... existing fields ...

	// Agent-specific fields
	Skills    []string `json:"skills,omitempty"`
	AgentType string   `json:"agent_type,omitempty"`
}
```
---
### 6. `server/create.go`
**Store agent fields** (~line 65-66):
```go
config.Skills = r.Skills
config.AgentType = r.AgentType
```
---
### 7. `server/routes.go`
**Return agent fields in ShowResponse** (~line 1107):
```go
resp := &api.ShowResponse{
	// ... existing fields ...
	Skills:    m.Config.Skills,
	AgentType: m.Config.AgentType,
}
```
---
### 8. `envconfig/config.go`
**Environment variable support**:
```go
func Skills() []string {
	raw := strings.TrimSpace(Var("OLLAMA_SKILLS"))
	if raw == "" {
		return []string{}
	}
	return strings.Split(raw, ",")
}
```
---
## Agentfile Format
Agentfiles use the same syntax as Modelfiles with additional commands:
```dockerfile
FROM gpt-oss:20b
AGENT_TYPE conversational
SKILL /path/to/skills/directory
SYSTEM You are a helpful assistant.
PARAMETER temperature 0.3
PARAMETER top_p 0.9
```
| Command | Description |
|---------|-------------|
| `SKILL` | Path to a directory containing skill subdirectories |
| `AGENT_TYPE` | Type of agent (e.g., "conversational") |
---
## SKILL.md Format
Each skill is a directory with a `SKILL.md` file:
```
calculator-skill/
├── SKILL.md
└── scripts/
└── calculate.py
```
**SKILL.md structure:**
```markdown
---
name: calculator-skill
description: A skill for performing calculations.
---
# Calculator Skill
## Instructions
1. Use `run_skill_script` to execute calculations
2. Call: `python3 scripts/calculate.py '<expression>'`
## Examples
For "What is 25 * 4?":
- Call: run_skill_script with skill="calculator-skill" and command="python3 scripts/calculate.py '25 * 4'"
```
**Requirements:**
- `name` must match directory name
- `name` must be lowercase alphanumeric with hyphens only
- `name` max 64 characters
- `description` required, max 1024 characters
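A sketch of the checks implied by those requirements (`validate_skill_metadata` is a hypothetical helper; the actual validation lives in the Go skill loader):

```python
import re

# Lowercase alphanumeric segments separated by single hyphens.
NAME_RE = re.compile(r"^[a-z0-9]+(-[a-z0-9]+)*$")

def validate_skill_metadata(name, description, dir_name):
    """Enforce the SKILL.md requirements listed above."""
    if name != dir_name:
        raise ValueError(f"name {name!r} must match directory {dir_name!r}")
    if not NAME_RE.match(name) or len(name) > 64:
        raise ValueError("name must be lowercase alphanumeric/hyphens, max 64 chars")
    if not description or len(description) > 1024:
        raise ValueError("description is required, max 1024 chars")
```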
---
## Usage
```bash
# Create an agent
ollama create math-agent -f math-agent.Agentfile
# Run the agent
ollama run math-agent "What is 25 * 4?"
# Output:
# Loaded skills: calculator-skill
# Running script in calculator-skill: python3 scripts/calculate.py '25 * 4'
# Output:
# 25 * 4 = 100
```
---
## Flow Diagram
```
1. ollama run math-agent "query"
2. RunHandler detects agent (AgentType or Skills present)
3. Routes to chat() instead of generate()
4. loadSkills() parses SKILL.md files
5. SystemPrompt() injects skill instructions
6. Tools() provides run_skill_script, read_skill_file
7. Model generates response (may include tool calls)
8. RunToolCall() executes scripts, returns output
9. Display results to user
```
---
## Security Considerations
The `runSkillScript` function has known limitations documented in the code:
- No sandboxing (commands run with user permissions)
- No path validation (model can run any command)
- Shell injection risk (`sh -c` is used)
- No executable allowlist
- No environment isolation
**Potential improvements** (documented as TODOs):
- Restrict to skill directory paths only
- Allowlist executables (python3, node, bash)
- Use sandboxing (Docker, nsjail, seccomp)
- Require explicit script registration in SKILL.md

docs/mcp-integration.md
# MCP (Model Context Protocol) Integration
This document describes the MCP integration for Ollama agents, enabling agents to use external tools via the Model Context Protocol.
## Overview
MCP allows Ollama agents to communicate with external tool servers over JSON-RPC 2.0 via stdio. This enables agents to access capabilities like web search, file operations, databases, and more through standardized tool interfaces.
## Status
| Phase | Description | Status |
|-------|-------------|--------|
| Phase 1 | Types & Parser | ✅ Complete |
| Phase 2 | Layer Handling | ✅ Complete |
| Phase 3 | Runtime Manager | ✅ Complete |
| Phase 4 | CLI Commands | ✅ Complete |
## Agentfile Syntax
### Simple Command Format
```dockerfile
MCP <name> <command> [args...]
```
Example:
```dockerfile
FROM llama3.2
AGENT_TYPE conversational
SYSTEM You are a helpful assistant with MCP tools.
MCP calculator python3 ./mcp-server.py
MCP websearch node ./search-server.js
```
### JSON Format
```dockerfile
MCP {"name": "custom", "command": "uv", "args": ["run", "server.py"], "env": {"API_KEY": "xxx"}}
```
## Architecture
### Type Definitions
**MCPRef** (`types/model/config.go`):
```go
type MCPRef struct {
	Name    string            `json:"name,omitempty"`
	Digest  string            `json:"digest,omitempty"`
	Command string            `json:"command,omitempty"`
	Args    []string          `json:"args,omitempty"`
	Env     map[string]string `json:"env,omitempty"`
	Type    string            `json:"type,omitempty"` // "stdio"
}
```
### Tool Namespacing
MCP tools are namespaced to avoid conflicts:
- Format: `mcp_{serverName}_{toolName}`
- Example: Server "calculator" with tool "add" → `mcp_calculator_add`
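The namespacing is a simple prefix scheme, but recovering the server and tool from a namespaced name needs care because both parts may themselves contain underscores. A sketch (these are hypothetical helpers; the actual routing code may resolve names differently):

```python
def namespace_tool(server, tool):
    # Format: mcp_{serverName}_{toolName}
    return f"mcp_{server}_{tool}"

def split_namespaced(namespaced, server_names):
    """Recover (server, tool) from a namespaced tool name.

    Splitting blindly on '_' is ambiguous when server or tool names
    contain underscores, so match against the known server names."""
    prefix = "mcp_"
    if not namespaced.startswith(prefix):
        raise KeyError(namespaced)
    rest = namespaced[len(prefix):]
    for server in server_names:
        if rest.startswith(server + "_"):
            return server, rest[len(server) + 1:]
    raise KeyError(namespaced)
```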
### Runtime Flow
1. Agent starts → MCP servers spawn as subprocesses
2. Initialize via JSON-RPC: `initialize` → `notifications/initialized`
3. Discover tools: `tools/list`
4. During chat, model calls tools → routed via `tools/call`
5. On shutdown, MCP servers are gracefully terminated
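Steps 1–3 can be made concrete as the line-delimited JSON-RPC messages a client writes to the server's stdin. This sketches the wire format only, not the actual manager code in `cmd/mcp.go`:

```python
import json

def jsonrpc_request(request_id, method, params=None):
    # One newline-delimited JSON-RPC 2.0 request.
    msg = {"jsonrpc": "2.0", "id": request_id, "method": method}
    if params is not None:
        msg["params"] = params
    return json.dumps(msg) + "\n"

def jsonrpc_notification(method):
    # Notifications carry no id and expect no response.
    return json.dumps({"jsonrpc": "2.0", "method": method}) + "\n"

# The startup handshake described above, as the bytes a client would send:
handshake = (
    jsonrpc_request(1, "initialize", {"protocolVersion": "2024-11-05",
                                      "capabilities": {}})
    + jsonrpc_notification("notifications/initialized")
    + jsonrpc_request(2, "tools/list")
)
```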
## Files
### Created
| File | Purpose |
|------|---------|
| `cmd/mcp.go` | Runtime MCP manager with JSON-RPC protocol |
| `cmd/mcp_cmd.go` | CLI commands for managing MCPs (push, pull, list, etc.) |
| `server/mcp.go` | MCP layer utilities (extraction, creation) |
### Modified
| File | Changes |
|------|---------|
| `types/model/config.go` | Added `MCPRef` type, `MCPs` field to `ConfigV2` |
| `types/model/name.go` | Added `"mcp"` to `ValidKinds` for 5-part name parsing |
| `api/types.go` | Added `MCPRef` alias, `MCPs` to `CreateRequest`/`ShowResponse` |
| `parser/parser.go` | Added `MCP` command parsing with JSON and simple formats |
| `server/create.go` | Added `setMCPLayers()` for MCP config handling |
| `server/routes.go` | Added `MCPs` to show response |
| `cmd/cmd.go` | MCP integration in `chat()` function |
| `cmd/interactive.go` | Added `/mcp` and `/mcps` REPL commands |
## Usage Example
### 1. Create an MCP Server
```python
#!/usr/bin/env python3
# mcp-server.py
import json
import sys
def handle_request(req):
    method = req.get("method", "")
    if method == "initialize":
        return {
            "protocolVersion": "2024-11-05",
            "capabilities": {"tools": {}},
            "serverInfo": {"name": "example", "version": "1.0"}
        }
    elif method == "tools/list":
        return {
            "tools": [{
                "name": "add",
                "description": "Adds two numbers",
                "inputSchema": {
                    "type": "object",
                    "properties": {
                        "a": {"type": "number"},
                        "b": {"type": "number"}
                    },
                    "required": ["a", "b"]
                }
            }]
        }
    elif method == "tools/call":
        args = req["params"]["arguments"]
        return {"content": [{"type": "text", "text": f"{args['a'] + args['b']}"}]}
    return {}

for line in sys.stdin:
    req = json.loads(line)
    if "id" in req:
        result = handle_request(req)
        print(json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result}), flush=True)
```
### 2. Create an Agent
```dockerfile
# my-agent.Agentfile
FROM gpt-oss:20b
AGENT_TYPE conversational
SYSTEM You have access to a calculator. Use the add tool when asked to add numbers.
MCP calculator python3 ./mcp-server.py
```
### 3. Build and Run
```bash
ollama create my-agent -f my-agent.Agentfile
ollama run my-agent "What is 15 + 27?"
```
Output:
```
Loaded MCP servers: calculator (1 tools)
Executing: mcp_calculator_add
Output: 42
The result is 42.
```
## CLI Commands
The `ollama mcp` command provides utilities for managing MCP servers:
### Global Config Commands
Add an MCP server to the global config (`~/.ollama/mcp.json`):
```bash
# Add MCP to global config (available to all agents)
ollama mcp add web-search uv run ./mcp-server.py
ollama mcp add calculator python3 /path/to/calc.py
# List global MCP servers (shows enabled/disabled status)
ollama mcp list-global
# Disable an MCP server (keeps in config but won't be loaded)
ollama mcp disable web-search
# Re-enable a disabled MCP server
ollama mcp enable web-search
# Remove from global config
ollama mcp remove-global web-search
```
### Registry Commands
Package and push MCPs to a registry:
```bash
# Push MCP to registry (creates locally first)
ollama mcp push mcp/websearch:1.0 ./my-mcp-server/
# Pull MCP from registry
ollama mcp pull mcp/websearch:1.0
# List installed MCPs (from registry)
ollama mcp list
# Show MCP details
ollama mcp show mcp/websearch:1.0
# Remove MCP
ollama mcp rm mcp/websearch:1.0
```
## REPL Commands
Inside `ollama run`, you can manage MCP servers dynamically:
```
>>> /mcp # Show all MCP servers (model + global)
>>> /mcp add calc python3 ./calc-server.py # Add MCP server to global config
>>> /mcp remove calc # Remove MCP server from global config
>>> /mcp disable calc # Disable an MCP server (keep in config)
>>> /mcp enable calc # Re-enable a disabled MCP server
>>> /? mcp # Get help for MCP commands
```
The `/mcp` command shows all available MCP servers (both bundled with the model and from global config). Disabled servers are shown with a `[disabled]` marker. Use `/mcp add` and `/mcp remove` to manage MCPs in `~/.ollama/mcp.json`. Changes take effect on the next message.
## Global Config
MCPs can be configured globally in `~/.ollama/mcp.json`:
```json
{
  "mcpServers": {
    "web-search": {
      "type": "stdio",
      "command": "uv",
      "args": ["run", "./mcp-server.py"]
    },
    "calculator": {
      "type": "stdio",
      "command": "python3",
      "args": ["/path/to/calc.py"],
      "disabled": true
    }
  }
}
```
The `disabled` field is optional. When set to `true`, the MCP server will not be loaded when running agents.
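A sketch of how a loader might honor that field (`enabled_mcp_servers` is a hypothetical helper; the real config handling is in the Go CLI):

```python
import json

def enabled_mcp_servers(config_text):
    """Parse an mcp.json document and return only the servers that
    should be loaded, i.e. those not marked "disabled": true."""
    config = json.loads(config_text)
    return {
        name: spec
        for name, spec in config.get("mcpServers", {}).items()
        if not spec.get("disabled", False)
    }
```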
## Future Enhancements
1. **Remote Registry Push/Pull**: Full support for pushing/pulling MCPs to/from remote registries
2. **Use go-sdk**: Consider using `github.com/modelcontextprotocol/go-sdk` for protocol handling
3. **Resource Support**: Add MCP resources (not just tools)
4. **Prompt Support**: Add MCP prompts
## Protocol Reference
MCP uses JSON-RPC 2.0 over stdio with these key methods:
| Method | Direction | Purpose |
|--------|-----------|---------|
| `initialize` | Client→Server | Handshake with capabilities |
| `notifications/initialized` | Client→Server | Confirm initialization |
| `tools/list` | Client→Server | Discover available tools |
| `tools/call` | Client→Server | Execute a tool |
See [MCP Specification](https://modelcontextprotocol.io/docs) for full details.

docs/skill-registry-design.md
# Skill Registry Design
## Overview
Skills are distributable capability packages for Ollama agents. They can be:
- Bundled with agents at creation time (local paths)
- Pulled from the registry (skill references)
- Pushed to the registry for sharing
## User Experience
### Push a Skill
```bash
# Push a local skill directory to the registry
ollama skill push myname/calculator:1.0.0 ./skills/calculator-skill
# Output:
# Creating skill layer for skill/myname/calculator:1.0.0
# pushing sha256:abc123... 1.2KB
# pushing sha256:def456... 220B
# pushing manifest
# Successfully pushed skill/myname/calculator:1.0.0
```
### Pull a Skill
```bash
# Pull a skill from the registry
ollama skill pull calculator:1.0.0
# Output:
# pulling manifest
# pulling sha256:abc123... 1.2KB
# extracting skill...
# Successfully pulled skill/calculator:1.0.0
```
### List Installed Skills
```bash
ollama skill list
# Output:
# NAME TAG SIZE MODIFIED
# skill/calculator 1.0.0 1.2 KB 2 hours ago
# skill/myname/hello latest 0.8 KB 1 day ago
```
### Remove a Skill
```bash
ollama skill rm calculator:1.0.0
# Deleted 'skill/calculator:1.0.0'
```
### Use Skills in Agentfile
```dockerfile
FROM llama3.2:3b
AGENT_TYPE conversational
SKILL skill/calculator:1.0.0 # Registry reference
SKILL ./local-skill # Local path (for development)
SYSTEM You are a helpful assistant.
```
## Technical Implementation
### Skill Manifest Format
```json
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
  "config": {
    "mediaType": "application/vnd.docker.container.image.v1+json",
    "digest": "sha256:config...",
    "size": 220
  },
  "layers": [
    {
      "mediaType": "application/vnd.ollama.image.skill",
      "digest": "sha256:skill...",
      "size": 1234
    }
  ]
}
```
### Skill Config Format
```json
{
  "name": "calculator",
  "description": "A skill for performing calculations",
  "architecture": "amd64",
  "os": "linux"
}
```
### Storage Layout
Skills use a 5-part manifest structure: `host/namespace/kind/model/tag`
```
~/.ollama/models/
├── blobs/
│   └── sha256-<skill-digest>            # Skill tar.gz blob
├── manifests/
│   └── registry.ollama.ai/
│       ├── library/
│       │   └── skill/                   # Kind = skill
│       │       └── calculator/
│       │           └── 1.0.0
│       └── myname/
│           └── skill/                   # User skills
│               └── my-skill/
│                   └── latest
└── skills/
    └── sha256-<digest>/                 # Extracted skill cache
        ├── SKILL.md
        └── scripts/
```
### Name Structure
Skills use a 5-part name structure with `kind` to distinguish from models:
| Skill Reference | Namespace | Kind | Model | Tag |
|-----------------|-----------|------|-------|-----|
| `skill/calculator:1.0.0` | library | skill | calculator | 1.0.0 |
| `myname/skill/calc:latest` | myname | skill | calc | latest |
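Parsing a user-supplied skill reference into that 5-part form might look like this. This is a sketch assuming the `registry.ollama.ai`/`library` defaults shown above; the real parser in `types/model/name.go` handles more cases (explicit hosts, agents, plain models):

```python
def parse_skill_name(ref):
    """Expand a skill reference into the 5-part name structure."""
    name, _, tag = ref.partition(":")
    parts = name.split("/")
    host, namespace = "registry.ollama.ai", "library"
    if len(parts) == 2 and parts[0] == "skill":
        # e.g. skill/calculator
        kind, model = parts
    elif len(parts) == 3 and parts[1] == "skill":
        # e.g. myname/skill/calc
        namespace, kind, model = parts
    else:
        raise ValueError(f"not a skill reference: {ref}")
    return {"host": host, "namespace": namespace, "kind": kind,
            "model": model, "tag": tag or "latest"}
```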
### Media Type
```go
const MediaTypeSkill = "application/vnd.ollama.image.skill"
```
### Key Types
```go
// SkillRef represents a skill reference in agent config
type SkillRef struct {
	Name   string `json:"name,omitempty"`   // "calculator-skill" or "myname/skill/calc:1.0.0"
	Digest string `json:"digest,omitempty"` // "sha256:abc..." (set when bundled)
}

// model.Name represents a parsed 5-part name
type Name struct {
	Host      string // "registry.ollama.ai"
	Namespace string // "library" or "myname"
	Kind      string // "skill" or "agent" or "" for models
	Model     string // "calculator"
	Tag       string // "1.0.0"
}
```
## Implementation Files
### Client (ollama)
| File | Purpose |
|------|---------|
| `server/skill.go` | Skill blob handling, path parsing, extraction |
| `cmd/skill_cmd.go` | CLI commands (push, pull, list, rm, show) |
| `cmd/skills.go` | Skill loading and catalog management |
| `server/create.go` | Skill layer creation during agent create |
| `server/images.go` | Skill extraction during pull |
| `types/model/config.go` | SkillRef type definition |
### Registry (ollama.com)
| File | Purpose |
|------|---------|
| `ollamadotcom/registry/store.go` | MediaTypeSkill constant |
| `ollamadotcom/store/store.go` | RecordPush handles skill layers |
## Registry Integration
### What Works
- Blob uploads (content-addressable, no auth required)
- Layer indexing (skill layers stored with mediatype)
- Manifest structure (4-part path compatible)
### What's Needed
1. **Namespace Configuration**: The `skill` namespace needs to be configured with:
- Public read access
- Authenticated write access
2. **Permission Model**: Decide who can push to `skill/` namespace:
- Only Ollama team (curated library)
- Verified publishers
- Anyone (open registry)
## Pull Flow
### Agent with Bundled Skills
```
ollama pull my-agent
→ GET manifest (includes skill layers)
→ Download all blobs (model + skills)
→ Extract skill blobs to ~/.ollama/models/skills/
→ Ready to run
```
### Standalone Skill
```
ollama skill pull calculator:1.0.0
→ Parse as skill/calculator:1.0.0
→ Convert to model.Name{Namespace: "skill", Model: "calculator", Tag: "1.0.0"}
→ GET manifest from registry
→ Download skill blob
→ Extract to ~/.ollama/models/skills/sha256-<digest>/
→ Available for agents to reference
```
## Push Flow
```
ollama skill push myname/calculator:1.0.0 ./my-skill
→ Validate SKILL.md exists
→ Create tar.gz of skill directory
→ Compute SHA256 digest
→ Store blob locally
→ Create skill manifest with config layer
→ Store manifest locally
→ Push blobs to registry
→ Push manifest to registry
```
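The "create tar.gz / compute digest" steps amount to hashing the archive bytes. A self-contained sketch using in-memory files (the real code tars the skill directory on disk; `skill_blob` is an invented name):

```python
import hashlib
import io
import tarfile

def skill_blob(skill_dir_files):
    """Tar-gzip a mapping of {relative path: bytes} and return
    (blob bytes, content-addressable digest)."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w:gz") as tar:
        for name, data in sorted(skill_dir_files.items()):
            info = tarfile.TarInfo(name=name)
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))
    blob = buf.getvalue()
    # Blobs are stored under sha256-<hex> in ~/.ollama/models/blobs/.
    return blob, "sha256:" + hashlib.sha256(blob).hexdigest()
```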
## Backward Compatibility
- Old agents with `Skills: []string` (paths) continue to work
- New agents use `Skills: []SkillRef` with name and digest
- Parser detects format and handles both
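Detection can key off the element type, along these lines (a hypothetical sketch of the compatibility shim; the Go code would switch on the decoded JSON type instead):

```python
def normalize_skills(raw):
    """Accept both config shapes: a list of path strings (old format)
    or a list of {"name": ..., "digest": ...} refs (new format),
    and return uniform dicts."""
    normalized = []
    for entry in raw:
        if isinstance(entry, str):
            # Old format: bare path, no digest yet.
            normalized.append({"name": entry, "digest": ""})
        else:
            normalized.append({"name": entry.get("name", ""),
                               "digest": entry.get("digest", "")})
    return normalized
```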
## Local Registry Testing
To test push/pull locally, you need MinIO and the registry from the ollama.com repo running:
```bash
# 1. Start MinIO (for blob storage)
minio server ~/.minio-data --console-address ':9001' &
# 2. Create the ollama-dev bucket (first time only)
mc config host add local http://localhost:9000 minioadmin minioadmin
mc mb local/ollama-dev
# 3. Start the registry (from ollama.com repo)
cd /path/to/ollama.com/registry
go run cmd/registry/main.go serve config-dev.yml &
# 4. Verify registry is running
curl http://localhost:6000/v2/
```
**Important:** The `config-dev.yml` must have matching ports:
```yaml
http:
  addr: :6000
  host: http://localhost:6000  # Must match addr!
```
### Test Commands
```bash
# Push skill from local folder
ollama skill push localhost:6000/testuser/skill/calculator:1.0.0 ./skills/calculator-skill --insecure
# Pull skill from registry
ollama skill pull localhost:6000/testuser/skill/calculator:1.0.0 --insecure
# List skills
ollama skill list
# Show skill
ollama skill show localhost:6000/testuser/skill/calculator:1.0.0
```
## Architecture Diagram
```mermaid
graph TB
subgraph "Skill Naming Structure"
A["skill/calculator:1.0.0"] --> B["host: registry.ollama.ai"]
A --> C["namespace: library"]
A --> D["kind: skill"]
A --> E["model: calculator"]
A --> F["tag: 1.0.0"]
end
subgraph "Storage Layout"
G["~/.ollama/models/"]
G --> H["blobs/"]
H --> I["sha256-<skill-digest>"]
G --> J["manifests/"]
J --> K["registry.ollama.ai/"]
K --> L["library/skill/calculator/1.0.0"]
K --> M["myname/skill/my-skill/latest"]
G --> N["skills/"]
N --> O["sha256-<digest>/"]
O --> P["SKILL.md"]
O --> Q["scripts/"]
end
subgraph "Push Flow"
R["User Command: ollama skill push"]
R --> S["Validate SKILL.md"]
S --> T["Create tar.gz of skill dir"]
T --> U["Compute SHA256 digest"]
U --> V["Store blob locally"]
V --> W["Create skill manifest"]
W --> X["Store manifest locally"]
X --> Y["Push blobs to registry"]
Y --> Z["Push manifest to registry"]
end
subgraph "Pull Flow - Standalone Skill"
AA["User Command: ollama skill pull"]
AA --> AB["Parse name structure"]
AB --> AC["GET manifest from registry"]
AC --> AD["Download skill blob"]
AD --> AE["Extract to skills/ directory"]
AE --> AF["Available for agents"]
end
subgraph "Pull Flow - Agent with Skills"
AG["Pull Agent: ollama pull my-agent"]
AG --> AH["GET manifest (includes skill layers)"]
AH --> AI["Download all blobs (model + skills)"]
AI --> AJ["Extract skill blobs"]
AJ --> AK["Ready to run"]
end
subgraph "Agentfile Integration"
AL["Agentfile"]
AL --> AM["FROM llama3.2:3b"]
AL --> AN["SKILL skill/calculator:1.0.0"]
AL --> AO["SKILL ./local-skill"]
AO --> AP["Local path (development)"]
AN --> AQ["Registry reference"]
end
subgraph "Registry Components"
AR["Registry Server"]
AR --> AS["Blob Storage (MinIO)"]
AR --> AT["Layer Indexing"]
AR --> AU["Manifest Storage"]
AR --> AV["Namespace Config"]
end
Z --> AR
AC --> AR
AH --> AR
```

ducky.Agentfile
SKILL ./skills/calculator-skill
ENTRYPOINT ducky

---
name: calculator-skill
description: A skill for performing mathematical calculations using a Python script. Use when the user asks to calculate, compute, or do math operations.
---
# Calculator Skill
## Purpose
This skill performs mathematical calculations using a bundled Python script for accuracy.
## When to use
- The user asks to calculate something
- The user wants to do math (add, subtract, multiply, divide)
- The user asks about percentages or conversions
- Any arithmetic or mathematical operation is needed
## Instructions
1. When the user asks for a calculation, use the `run_skill_script` tool to execute the calculation script.
2. Call the script like this: `python3 scripts/calculate.py "<expression>"`
3. Return the result from the script output to the user.
## Examples
For "What is 25 * 4?":
- Call: `run_skill_script` with skill="calculator-skill" and command="python3 scripts/calculate.py '25 * 4'"
- Output: "25 * 4 = 100"
For "Calculate 15% of 200":
- Call: `run_skill_script` with skill="calculator-skill" and command="python3 scripts/calculate.py '15/100 * 200'"
- Output: "15/100 * 200 = 30.0"
For "Add 123 and 456":
- Call: `run_skill_script` with skill="calculator-skill" and command="python3 scripts/calculate.py '123 + 456'"
- Output: "123 + 456 = 579"

#!/usr/bin/env python3
"""
Calculator script for performing mathematical operations.
Usage: python calculate.py <expression>
Example: python calculate.py "25 * 4"
"""
import sys
import re
def safe_eval(expression):
    """Safely evaluate a mathematical expression."""
    # Only allow numbers, operators, parentheses, and whitespace
    if not re.match(r'^[\d\s\+\-\*\/\.\(\)\%]+$', expression):
        raise ValueError(f"Invalid expression: {expression}")
    # Note: '%' is allowed by the regex but evaluates as Python's modulo
    # operator; percentages should be passed pre-expanded,
    # e.g. "15% of 200" as "15/100 * 200"
    try:
        result = eval(expression)
        return result
    except Exception as e:
        raise ValueError(f"Could not evaluate: {e}")

def main():
    if len(sys.argv) < 2:
        print("Usage: python calculate.py <expression>")
        print("Example: python calculate.py '25 * 4'")
        sys.exit(1)
    expression = ' '.join(sys.argv[1:])
    try:
        result = safe_eval(expression)
        print(f"{expression} = {result}")
    except ValueError as e:
        print(f"Error: {e}")
        sys.exit(1)

if __name__ == "__main__":
    main()

FROM gpt-oss:20b
AGENT_TYPE conversational
SKILL /Users/parth/Documents/repos/ollama/skills/calculator-skill
SKILL /Users/parth/Documents/repos/ollama/skills/mock-logs-skill
SKILL /Users/parth/Documents/repos/ollama/skills/ducky-skill
SYSTEM """You are a helpful assistant with access to specialized skills.
When asked to perform calculations, use the calculator skill's run_skill_script tool.
When asked to generate logs or show sample log output, use the mock-logs skill's run_skill_script tool.
When asked to run ducky or process directories with ducky, use the ducky skill's run_skill_script tool.
CRITICAL INSTRUCTION - YOU MUST FOLLOW THIS:
After ANY tool call completes and returns output, you MUST write additional text analyzing, explaining, or summarizing the results. Your response is NOT complete until you have provided this analysis. Do NOT end your turn immediately after tool output appears.
Example workflow for mock logs:
1. Call run_skill_script to generate logs
2. Tool returns log output
3. YOU MUST THEN WRITE: An analysis of the logs - identify patterns, note log levels, highlight any errors/warnings, and explain what the logs show
Never just show raw output and stop. Always add your analysis afterwards."""
PARAMETER temperature 0.3
PARAMETER top_p 0.9


@ -0,0 +1,38 @@
---
name: ducky
description: Run DuckY CLI tool for processing directories with AI models
---
# DuckY Skill
## Purpose
This skill provides access to the DuckY CLI tool, which processes directories using AI models.
## When to use
- User asks to run ducky on a directory
- User wants to process files with ducky
- User asks about ducky or wants to use ducky features
- User wants to poll a crumb
## Instructions
1. When the user asks to run ducky, use the `run_skill_script` tool
2. Call: `./scripts/run_ducky.sh [args]`
- `-d <directory>` - Directory to process
- `-m <model>` - Model to use
- `-l` - Run locally with Ollama
- `--poll <crumb>` - Poll a specific crumb
- `-i <seconds>` - Polling interval
## Examples
For "Run ducky on the current directory":
- Call: `run_skill_script` with skill="ducky" and command="./scripts/run_ducky.sh -d . -l"
For "Run ducky locally on src folder":
- Call: `run_skill_script` with skill="ducky" and command="./scripts/run_ducky.sh -d src -l"
For "Poll the build crumb every 30 seconds":
- Call: `run_skill_script` with skill="ducky" and command="./scripts/run_ducky.sh --poll build -i 30 -l"


@ -0,0 +1,5 @@
#!/bin/bash
# Wrapper script for ducky CLI
# Pass all arguments to ducky
exec ducky "$@"

skills/excel-skill/SKILL.md Normal file

@ -0,0 +1,119 @@
---
name: excel-skill
description: Help non-technical users process Excel and CSV data - summarize spreadsheets, find duplicates, filter rows, calculate statistics, and clean up data. Use when the user mentions Excel, spreadsheet, CSV, or asks about their data.
---
# Excel Data Processing Skill
## Purpose
This skill helps users work with Excel (.xlsx) and CSV files without needing technical knowledge. It can summarize data, find problems, answer questions about the data, and perform common cleanup tasks.
## When to use
- User uploads or mentions an Excel or CSV file
- User wants to understand what's in their data
- User asks about duplicates, missing values, or data quality
- User wants to filter, sort, or summarize data
- User asks questions like "how many", "what's the average", "show me the top 10"
## Instructions
### Step 1: Understand the data first
When a user provides a file, ALWAYS start by running a summary to understand what you're working with:
```
uv run scripts/process_data.py "<filepath>" summary
```
This shows:
- Number of rows and columns
- Column names and their data types
- Sample of the data
- Missing value counts
### Step 2: Answer their question
Based on what the user asks, use the appropriate command:
**Get statistics for a column:**
```
uv run scripts/process_data.py "<filepath>" stats "<column_name>"
```
Shows count, average, min, max, and common values.
**Find duplicate rows:**
```
uv run scripts/process_data.py "<filepath>" duplicates
```
Or check duplicates in specific columns:
```
uv run scripts/process_data.py "<filepath>" duplicates "<column_name>"
```
**Filter rows:**
```
uv run scripts/process_data.py "<filepath>" filter "<column>" "<operator>" "<value>"
```
Operators: equals, contains, greater, less, not_equals
Examples:
- `filter "Status" "equals" "Active"`
- `filter "Amount" "greater" "1000"`
- `filter "Name" "contains" "Smith"`
**Sort data:**
```
uv run scripts/process_data.py "<filepath>" sort "<column>" [asc|desc]
```
**Count values in a column:**
```
uv run scripts/process_data.py "<filepath>" count "<column_name>"
```
Shows how many times each value appears.
**Get top/bottom rows:**
```
uv run scripts/process_data.py "<filepath>" top "<column>" <number>
uv run scripts/process_data.py "<filepath>" bottom "<column>" <number>
```
**Find missing values:**
```
uv run scripts/process_data.py "<filepath>" missing
```
**Export filtered/processed data:**
Add `--output "<new_filepath>"` to any command to save results.
## Examples
**User: "What's in this spreadsheet?"**
Run: `uv run scripts/process_data.py "sales.xlsx" summary`
**User: "Are there any duplicate entries?"**
Run: `uv run scripts/process_data.py "sales.xlsx" duplicates`
**User: "How many sales per region?"**
Run: `uv run scripts/process_data.py "sales.xlsx" count "Region"`
**User: "Show me orders over $500"**
Run: `uv run scripts/process_data.py "orders.csv" filter "Amount" "greater" "500"`
**User: "What's the average order value?"**
Run: `uv run scripts/process_data.py "orders.csv" stats "Amount"`
**User: "Find all rows with missing email addresses"**
Run: `uv run scripts/process_data.py "contacts.xlsx" filter "Email" "equals" ""`
**User: "Show me the top 10 customers by revenue"**
Run: `uv run scripts/process_data.py "customers.csv" top "Revenue" 10`
## Tips for helping non-technical users
1. Always explain what you found in plain language
2. If there are issues (duplicates, missing data), explain why it matters
3. Offer to help fix problems you discover
4. When showing numbers, provide context ("this is high/low compared to...")
5. Ask clarifying questions if the column names are ambiguous
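The `duplicates` command boils down to grouping identical rows and keeping those that occur more than once. A minimal pure-Python sketch of that idea (the skill itself uses pandas; the inline CSV here is hypothetical sample data, not part of the skill):

```python
import csv
import io
from collections import Counter

# Hypothetical sample mirroring the shape of data/sample_data.csv
rows = list(csv.reader(io.StringIO(
    "Name,Region,Amount\n"
    "Alice,North,1500\n"
    "Bob,South,2300\n"
    "Alice,North,1500\n"
)))
header, body = rows[0], rows[1:]

# Full-row duplicate check: count identical tuples, keep rows seen more than once
counts = Counter(tuple(r) for r in body)
dups = [r for r in body if counts[tuple(r)] > 1]
print(dups)  # both copies of the Alice row
```

This is equivalent to pandas' `df[df.duplicated(keep=False)]`, which the script uses.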


@ -0,0 +1,11 @@
Name,Region,Amount,Status,Email
Alice,North,1500,Active,alice@example.com
Bob,South,2300,Active,bob@example.com
Charlie,North,800,Inactive,charlie@example.com
Diana,East,1500,Active,diana@example.com
Eve,South,3200,Active,
Frank,North,950,Inactive,frank@example.com
Grace,West,2100,Active,grace@example.com
Alice,North,1500,Active,alice@example.com
Henry,East,1800,Active,henry@example.com
Ivy,South,2300,Inactive,ivy@example.com


@ -0,0 +1,395 @@
#!/usr/bin/env -S uv run --script
# /// script
# requires-python = ">=3.10"
# dependencies = [
# "pandas",
# "openpyxl",
# ]
# ///
"""
Excel/CSV Data Processing Script for non-technical users.
Handles common data operations: summary, statistics, filtering, duplicates, etc.
Usage: uv run scripts/process_data.py <filepath> <command> [args...] [--output <output_path>]
"""
import sys
import argparse
import pandas as pd
from pathlib import Path
def load_file(filepath):
"""Load Excel or CSV file into a DataFrame."""
path = Path(filepath)
if not path.exists():
print(f"Error: File not found: {filepath}")
sys.exit(1)
suffix = path.suffix.lower()
try:
if suffix in ['.xlsx', '.xls']:
df = pd.read_excel(filepath)
elif suffix == '.csv':
df = pd.read_csv(filepath)
else:
# Try CSV as default
df = pd.read_csv(filepath)
return df
except Exception as e:
print(f"Error reading file: {e}")
sys.exit(1)
def save_output(df, output_path):
"""Save DataFrame to file."""
path = Path(output_path)
suffix = path.suffix.lower()
try:
if suffix in ['.xlsx', '.xls']:
df.to_excel(output_path, index=False)
else:
df.to_csv(output_path, index=False)
print(f"\nSaved {len(df)} rows to: {output_path}")
except Exception as e:
print(f"Error saving file: {e}")
def cmd_summary(df, args):
"""Show overview of the data."""
print("=" * 60)
print("DATA SUMMARY")
print("=" * 60)
print(f"\nRows: {len(df):,}")
print(f"Columns: {len(df.columns)}")
print("\n" + "-" * 40)
print("COLUMNS:")
print("-" * 40)
for col in df.columns:
dtype = df[col].dtype
non_null = df[col].notna().sum()
null_count = df[col].isna().sum()
type_label = "text" if dtype == 'object' else ("number" if dtype in ['int64', 'float64'] else str(dtype))
null_info = f" ({null_count} missing)" if null_count > 0 else ""
print(f" - {col}: {type_label}{null_info}")
print("\n" + "-" * 40)
print("SAMPLE DATA (first 5 rows):")
print("-" * 40)
print(df.head().to_string())
return df
def cmd_stats(df, args):
"""Show statistics for a column."""
if not args.column:
print("Error: Please specify a column name")
print(f"Available columns: {', '.join(df.columns)}")
sys.exit(1)
col = args.column
if col not in df.columns:
print(f"Error: Column '{col}' not found")
print(f"Available columns: {', '.join(df.columns)}")
sys.exit(1)
print(f"\nSTATISTICS FOR: {col}")
print("=" * 40)
series = df[col]
print(f"Total values: {len(series):,}")
print(f"Non-empty: {series.notna().sum():,}")
print(f"Empty/missing: {series.isna().sum():,}")
print(f"Unique values: {series.nunique():,}")
if pd.api.types.is_numeric_dtype(series):
print(f"\nNumeric Statistics:")
print(f" Sum: {series.sum():,.2f}")
print(f" Average: {series.mean():,.2f}")
print(f" Median: {series.median():,.2f}")
print(f" Min: {series.min():,.2f}")
print(f" Max: {series.max():,.2f}")
print(f" Std Dev: {series.std():,.2f}")
else:
print(f"\nMost common values:")
for val, count in series.value_counts().head(10).items():
pct = count / len(series) * 100
print(f" {val}: {count:,} ({pct:.1f}%)")
return df
def cmd_duplicates(df, args):
"""Find duplicate rows."""
col = args.column
if col:
if col not in df.columns:
print(f"Error: Column '{col}' not found")
print(f"Available columns: {', '.join(df.columns)}")
sys.exit(1)
dups = df[df.duplicated(subset=[col], keep=False)]
print(f"\nDUPLICATES IN COLUMN: {col}")
else:
dups = df[df.duplicated(keep=False)]
print(f"\nDUPLICATE ROWS (all columns)")
print("=" * 40)
if len(dups) == 0:
print("No duplicates found!")
else:
print(f"Found {len(dups):,} duplicate rows")
print("\nDuplicate entries:")
print(dups.to_string())
return dups
def cmd_filter(df, args):
"""Filter rows based on condition."""
if not args.column or not args.operator or args.value is None:
print("Error: Filter requires column, operator, and value")
print("Usage: filter <column> <operator> <value>")
print("Operators: equals, not_equals, contains, greater, less")
sys.exit(1)
col = args.column
op = args.operator.lower()
val = args.value
if col not in df.columns:
print(f"Error: Column '{col}' not found")
print(f"Available columns: {', '.join(df.columns)}")
sys.exit(1)
original_count = len(df)
if op == 'equals':
if val == '':
result = df[df[col].isna() | (df[col] == '')]
else:
# Try numeric comparison if possible
try:
result = df[df[col] == float(val)]
except (ValueError, TypeError):
result = df[df[col].astype(str).str.lower() == val.lower()]
elif op == 'not_equals':
try:
result = df[df[col] != float(val)]
except (ValueError, TypeError):
result = df[df[col].astype(str).str.lower() != val.lower()]
elif op == 'contains':
result = df[df[col].astype(str).str.lower().str.contains(val.lower(), na=False)]
elif op == 'greater':
try:
result = df[pd.to_numeric(df[col], errors='coerce') > float(val)]
except (ValueError, TypeError):
print(f"Error: Cannot compare '{col}' as numbers")
sys.exit(1)
elif op == 'less':
try:
result = df[pd.to_numeric(df[col], errors='coerce') < float(val)]
except (ValueError, TypeError):
print(f"Error: Cannot compare '{col}' as numbers")
sys.exit(1)
else:
print(f"Error: Unknown operator '{op}'")
print("Valid operators: equals, not_equals, contains, greater, less")
sys.exit(1)
print(f"\nFILTER: {col} {op} '{val}'")
print("=" * 40)
print(f"Found {len(result):,} matching rows (out of {original_count:,})")
if len(result) > 0:
print("\nResults:")
if len(result) > 50:
print(result.head(50).to_string())
print(f"\n... and {len(result) - 50} more rows")
else:
print(result.to_string())
return result
def cmd_sort(df, args):
"""Sort data by column."""
if not args.column:
print("Error: Please specify a column to sort by")
print(f"Available columns: {', '.join(df.columns)}")
sys.exit(1)
col = args.column
if col not in df.columns:
print(f"Error: Column '{col}' not found")
print(f"Available columns: {', '.join(df.columns)}")
sys.exit(1)
ascending = args.order != 'desc'
result = df.sort_values(by=col, ascending=ascending)
order_label = "ascending" if ascending else "descending"
print(f"\nSORTED BY: {col} ({order_label})")
print("=" * 40)
if len(result) > 50:
print(result.head(50).to_string())
print(f"\n... and {len(result) - 50} more rows")
else:
print(result.to_string())
return result
def cmd_count(df, args):
"""Count values in a column."""
if not args.column:
print("Error: Please specify a column to count")
print(f"Available columns: {', '.join(df.columns)}")
sys.exit(1)
col = args.column
if col not in df.columns:
print(f"Error: Column '{col}' not found")
print(f"Available columns: {', '.join(df.columns)}")
sys.exit(1)
counts = df[col].value_counts()
print(f"\nVALUE COUNTS FOR: {col}")
print("=" * 40)
print(f"Total unique values: {len(counts):,}")
print()
for val, count in counts.items():
pct = count / len(df) * 100
print(f" {val}: {count:,} ({pct:.1f}%)")
# Return as a (value, count) DataFrame for potential export.
# rename_axis/reset_index(name=...) works on both pandas 1.x and 2.x,
# where value_counts().reset_index() produces different column names.
return counts.rename_axis(col).reset_index(name='count')
def cmd_top(df, args):
"""Get top N rows by column value."""
if not args.column:
print("Error: Please specify a column")
sys.exit(1)
col = args.column
# Number can be in args.operator position due to positional parsing
n = int(args.number) if args.number else (int(args.operator) if args.operator and args.operator.isdigit() else 10)
if col not in df.columns:
print(f"Error: Column '{col}' not found")
print(f"Available columns: {', '.join(df.columns)}")
sys.exit(1)
result = df.nlargest(n, col)
print(f"\nTOP {n} BY: {col}")
print("=" * 40)
print(result.to_string())
return result
def cmd_bottom(df, args):
"""Get bottom N rows by column value."""
if not args.column:
print("Error: Please specify a column")
sys.exit(1)
col = args.column
# Number can be in args.operator position due to positional parsing
n = int(args.number) if args.number else (int(args.operator) if args.operator and args.operator.isdigit() else 10)
if col not in df.columns:
print(f"Error: Column '{col}' not found")
print(f"Available columns: {', '.join(df.columns)}")
sys.exit(1)
result = df.nsmallest(n, col)
print(f"\nBOTTOM {n} BY: {col}")
print("=" * 40)
print(result.to_string())
return result
def cmd_missing(df, args):
"""Find rows with missing values."""
print("\nMISSING VALUE ANALYSIS")
print("=" * 40)
# Summary by column
print("\nMissing values per column:")
for col in df.columns:
missing = df[col].isna().sum()
if missing > 0:
pct = missing / len(df) * 100
print(f" {col}: {missing:,} ({pct:.1f}%)")
total_missing = df.isna().sum().sum()
if total_missing == 0:
print(" No missing values found!")
return df
# Rows with any missing values
rows_with_missing = df[df.isna().any(axis=1)]
print(f"\nRows with missing values: {len(rows_with_missing):,}")
if len(rows_with_missing) > 0 and len(rows_with_missing) <= 50:
print("\nRows with missing data:")
print(rows_with_missing.to_string())
elif len(rows_with_missing) > 50:
print("\nFirst 50 rows with missing data:")
print(rows_with_missing.head(50).to_string())
print(f"\n... and {len(rows_with_missing) - 50} more rows")
return rows_with_missing
def main():
parser = argparse.ArgumentParser(description='Process Excel/CSV data')
parser.add_argument('filepath', help='Path to Excel or CSV file')
parser.add_argument('command', choices=['summary', 'stats', 'duplicates', 'filter', 'sort', 'count', 'top', 'bottom', 'missing'],
help='Command to run')
parser.add_argument('column', nargs='?', help='Column name (for stats, filter, sort, count, top, bottom, duplicates)')
parser.add_argument('operator', nargs='?', help='Operator for filter (equals, contains, greater, less, not_equals)')
parser.add_argument('value', nargs='?', help='Value for filter')
parser.add_argument('number', nargs='?', help='Number for top/bottom')
parser.add_argument('--order', choices=['asc', 'desc'], default='asc', help='Sort order')
parser.add_argument('--output', '-o', help='Output file path')
args = parser.parse_args()
# Load the file
df = load_file(args.filepath)
# Run the command
commands = {
'summary': cmd_summary,
'stats': cmd_stats,
'duplicates': cmd_duplicates,
'filter': cmd_filter,
'sort': cmd_sort,
'count': cmd_count,
'top': cmd_top,
'bottom': cmd_bottom,
'missing': cmd_missing,
}
result = commands[args.command](df, args)
# Save output if requested
if args.output and isinstance(result, pd.DataFrame):
save_output(result, args.output)
if __name__ == "__main__":
main()
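The filter commands above compare numerically when both sides parse as numbers and fall back to case-insensitive string matching otherwise. A standalone sketch of that fallback logic on a single cell (names here are illustrative, not the script's API):

```python
def matches(cell, op, val):
    # Mirrors the filter fallback in process_data.py: try a numeric
    # comparison first, fall back to case-insensitive string matching
    if op == "equals":
        try:
            return float(cell) == float(val)
        except (ValueError, TypeError):
            return str(cell).lower() == str(val).lower()
    if op == "greater":
        try:
            return float(cell) > float(val)
        except (ValueError, TypeError):
            return False  # non-numeric cells never satisfy a numeric comparison
    raise ValueError(f"unknown operator: {op}")

print(matches("1500", "greater", "1000"))   # → True
print(matches("Active", "equals", "active"))  # → True
```

In the script this same pattern runs vectorized over a pandas column rather than cell by cell.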


@ -0,0 +1,25 @@
---
name: hello-skill
description: Simple test skill for verifying Agent Skills integration in ollama run. Use when the user asks to test skills, sample skills, or wants a quick hello workflow.
---
# Hello Skill
## Purpose
This is a minimal skill to validate that skills load correctly and that tool calls can read additional files.
## When to use
- The user asks to test skills integration.
- The user wants a simple example skill.
## Instructions
1. Reply with a short greeting that mentions the skill name.
2. If you need a template greeting, read `references/GREETING.md` using the `read_skill_file` tool.
## Example
User: "Test the skills feature."
Assistant: "Hello from hello-skill."


@ -0,0 +1,2 @@
Template greeting:
Hello from hello-skill. Skills are working.


@ -0,0 +1,8 @@
FROM gpt-oss:20b
AGENT_TYPE conversational
SKILL /Users/parth/Documents/repos/ollama/skills
SYSTEM You are a helpful math assistant. Follow the instructions from your loaded skills when performing tasks.
PARAMETER temperature 0.3
PARAMETER top_p 0.9


@ -0,0 +1,7 @@
FROM gpt-oss:20b
AGENT_TYPE conversational
SYSTEM You are a helpful assistant with MCP tools. You can echo text and add numbers using the mcp_test-mcp_echo and mcp_test-mcp_add tools.
MCP test-mcp python3 ./test-mcp/server.py
SKILL ./skills/excel-skill
SKILL ./skills/pdf-skill


@ -0,0 +1,36 @@
---
name: mock-logs
description: Outputs mock log entries for testing and demonstration purposes
---
# Mock Logs Skill
## Purpose
This skill generates mock log entries for testing, debugging, and demonstration purposes.
## When to use
- User asks to generate sample logs
- User wants to see example log output
- User needs test data for log parsing
- User asks about log formats
## Instructions
1. When the user asks for mock logs, use the `run_skill_script` tool
2. Call: `python3 scripts/generate_logs.py [count] [level]`
- count: Number of log entries (default: 5)
- level: Log level filter - info, warn, error, debug, or all (default: all)
3. Return the generated logs to the user
## Examples
For "Generate some sample logs":
- Call: `run_skill_script` with skill="mock-logs" and command="python3 scripts/generate_logs.py 5"
For "Show me 10 error logs":
- Call: `run_skill_script` with skill="mock-logs" and command="python3 scripts/generate_logs.py 10 error"
For "Generate debug logs":
- Call: `run_skill_script` with skill="mock-logs" and command="python3 scripts/generate_logs.py 5 debug"


@ -0,0 +1,107 @@
#!/usr/bin/env python3
"""Generate mock log entries for testing."""
import sys
import random
from datetime import datetime, timedelta
LEVELS = ["INFO", "WARN", "ERROR", "DEBUG"]
SERVICES = [
"api-gateway",
"auth-service",
"user-service",
"payment-service",
"notification-service",
"cache-manager",
"db-connector",
"queue-worker",
]
MESSAGES = {
"INFO": [
"Request processed successfully",
"User session started",
"Cache hit for key: user_{}",
"Connection established to database",
"Health check passed",
"Configuration reloaded",
"Scheduled task completed",
"Message published to queue",
],
"WARN": [
"High memory usage detected: {}%",
"Slow query detected: {}ms",
"Rate limit approaching for client {}",
"Retry attempt {} of 3",
"Connection pool running low",
"Deprecated API endpoint called",
"Certificate expires in {} days",
],
"ERROR": [
"Failed to connect to database: timeout",
"Authentication failed for user {}",
"Payment processing error: insufficient funds",
"Service unavailable: upstream timeout",
"Invalid request payload",
"Queue message processing failed",
"Disk space critical: {}% used",
],
"DEBUG": [
"Entering function: process_request",
"Variable state: count={}",
"SQL query: SELECT * FROM users WHERE id={}",
"HTTP response: status={}, body_size={}",
"Cache miss for key: session_{}",
"Decoding JWT token",
"Validating input parameters",
],
}
def generate_log_entry(level=None, base_time=None):
if level is None:
level = random.choice(LEVELS)
service = random.choice(SERVICES)
message_template = random.choice(MESSAGES[level])
# Fill in placeholders with random values
message = message_template
while "{}" in message:
placeholder_value = random.randint(1, 9999)
message = message.replace("{}", str(placeholder_value), 1)
if base_time is None:
base_time = datetime.now()
timestamp = base_time.strftime("%Y-%m-%d %H:%M:%S.%f")[:-3]
return f"[{timestamp}] [{level:5}] [{service}] {message}"
def main():
count = 5
level_filter = None
if len(sys.argv) > 1:
try:
count = int(sys.argv[1])
except ValueError:
print(f"Error: Invalid count '{sys.argv[1]}'", file=sys.stderr)
sys.exit(1)
if len(sys.argv) > 2:
level_arg = sys.argv[2].upper()
if level_arg != "ALL" and level_arg in LEVELS:
level_filter = level_arg
elif level_arg != "ALL":
print(f"Error: Invalid level '{sys.argv[2]}'. Use: info, warn, error, debug, or all", file=sys.stderr)
sys.exit(1)
base_time = datetime.now() - timedelta(seconds=count)
for i in range(count):
log_time = base_time + timedelta(seconds=i, milliseconds=random.randint(0, 999))
print(generate_log_entry(level=level_filter, base_time=log_time))
if __name__ == "__main__":
main()
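The message templates above carry `{}` placeholders that `generate_log_entry` fills left to right with random integers via repeated single-occurrence `replace` calls. That loop in isolation:

```python
import random

def fill_placeholders(template: str) -> str:
    # Replace each "{}" left to right with a random integer,
    # exactly as generate_log_entry does above
    msg = template
    while "{}" in msg:
        msg = msg.replace("{}", str(random.randint(1, 9999)), 1)
    return msg

print(fill_placeholders("Retry attempt {} of 3"))
```

The `count=1` argument to `replace` is what lets each placeholder receive a different random value instead of all of them sharing the first one.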

skills/pdf-skill/SKILL.md Normal file

@ -0,0 +1,109 @@
---
name: pdf-skill
description: Help users work with PDF files - extract text, get document info, search content, extract pages, and merge PDFs. Use when the user mentions PDF, document extraction, or wants to read/combine PDF files.
---
# PDF Processing Skill
## Purpose
This skill helps users work with PDF files without needing technical knowledge. It can extract text, search for content, get document information, split and merge PDFs.
## When to use
- User uploads or mentions a PDF file
- User wants to extract text from a document
- User asks "what's in this PDF" or similar
- User wants to search for something in a PDF
- User wants to combine or split PDF files
- User asks about page counts or document info
## Instructions
### Step 1: Understand the document first
When a user provides a PDF, start by getting info about it:
```
uv run scripts/process_pdf.py "<filepath>" info
```
This shows:
- Number of pages
- Document metadata (title, author, etc.)
- File size
### Step 2: Perform the requested operation
Based on what the user asks, use the appropriate command:
**Extract all text:**
```
uv run scripts/process_pdf.py "<filepath>" text
```
Extracts text from all pages.
**Extract text from specific pages:**
```
uv run scripts/process_pdf.py "<filepath>" text --pages 1,2,3
uv run scripts/process_pdf.py "<filepath>" text --pages 1-5
```
**Search for text:**
```
uv run scripts/process_pdf.py "<filepath>" search "<query>"
```
Finds all occurrences and shows surrounding context.
**Extract tables:**
```
uv run scripts/process_pdf.py "<filepath>" tables
```
Attempts to extract tables from the PDF as CSV format.
**Extract specific pages to new PDF:**
```
uv run scripts/process_pdf.py "<filepath>" split --pages 1-3 --output "extracted.pdf"
```
**Merge multiple PDFs:**
```
uv run scripts/process_pdf.py merge "<file1.pdf>" "<file2.pdf>" --output "combined.pdf"
```
**Get word/character count:**
```
uv run scripts/process_pdf.py "<filepath>" count
```
## Examples
**User: "What's in this PDF?"**
Run: `uv run scripts/process_pdf.py "document.pdf" info`
Then: `uv run scripts/process_pdf.py "document.pdf" text --pages 1` (for first page preview)
**User: "Extract the text from this document"**
Run: `uv run scripts/process_pdf.py "document.pdf" text`
**User: "Find all mentions of 'invoice' in this PDF"**
Run: `uv run scripts/process_pdf.py "document.pdf" search "invoice"`
**User: "How many pages is this?"**
Run: `uv run scripts/process_pdf.py "document.pdf" info`
**User: "Get me just pages 5-10"**
Run: `uv run scripts/process_pdf.py "document.pdf" split --pages 5-10 --output "pages_5_10.pdf"`
**User: "Combine these two PDFs"**
Run: `uv run scripts/process_pdf.py merge "doc1.pdf" "doc2.pdf" --output "combined.pdf"`
**User: "Are there any tables in this PDF?"**
Run: `uv run scripts/process_pdf.py "document.pdf" tables`
## Tips for helping non-technical users
1. Always start with `info` to understand what you're working with
2. For long documents, extract just the first page first to preview
3. If text extraction looks garbled, the PDF might be scanned images (OCR needed)
4. Explain what you found in plain language
5. If tables don't extract well, mention that PDF tables can be tricky
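The `--pages` syntax used above (e.g. `1,2,3`, `1-5`, or a mix like `1,3-5`) expands into a sorted, de-duplicated page list clamped to the document's page count. A sketch matching the parser in `scripts/process_pdf.py`:

```python
def parse_page_range(pages_str: str, max_pages: int) -> list[int]:
    # Expand "--pages" syntax like "1,3-5" into a sorted page list,
    # dropping anything outside 1..max_pages
    if not pages_str:
        return list(range(1, max_pages + 1))  # no argument means all pages
    pages = set()
    for part in pages_str.split(","):
        part = part.strip()
        if "-" in part:
            start, end = (int(x) for x in part.split("-", 1))
            pages.update(range(start, end + 1))
        else:
            pages.add(int(part))
    return sorted(p for p in pages if 1 <= p <= max_pages)

print(parse_page_range("1,3-5", 10))  # → [1, 3, 4, 5]
```

Out-of-range requests (say `8-12` on a 10-page file) are silently clamped rather than rejected, which keeps the commands forgiving for non-technical users.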


@ -0,0 +1,114 @@
%PDF-1.3
%éëñ¿
1 0 obj
<<
/Count 2
/Kids [3 0 R
5 0 R]
/MediaBox [0 0 595.28 841.89]
/Type /Pages
>>
endobj
2 0 obj
<<
/OpenAction [3 0 R /FitH null]
/PageLayout /OneColumn
/Pages 1 0 R
/Type /Catalog
>>
endobj
3 0 obj
<<
/Contents 4 0 R
/Parent 1 0 R
/Resources 9 0 R
/Type /Page
>>
endobj
4 0 obj
<<
/Filter /FlateDecode
/Length 442
>>
stream
xœ}”ÁNã0†ï<ÅHÛÃ"-Æ3öÄvoÀn%8<>ˆÄ9ƒŠhƒ²<C692>o¿Ž“P×j}Íÿeòyl)ØÀ¿“ËÎW¤…”P?ßz(JAŒ±±üÍû‡ëíg»~ô§P¿N­ç+¤½´B<C2B4>L`ÃS
~ )ÍIxì¬, ecçï¦÷K¸È¿€$Hp˜WÇÄÕÇß¾Ýøn <17>Wm÷ÞvM¿n·{CâÞ<C3A2>d…b¨<62>8<>‡{øþ;3ãOô~3#‰crX†Éòd<C3B2>¨t
¸ë¿æ¼ÒR`5ç•P.hB¡ªpÛ%ê5+ÁºŒ`ÅB«Q·}óV8½1Ï6)Щܼë§ßÃE&gã´P2…p¦fJQƒáø1<C3B8>/È ç27ES\Ï”<C38F>:@¾™T°U„‰+NU\æ*FHQÅ¢rcº¬b_èC*JŒYEˆ*L¦BI!mYEØ%…ãƒá»Ïáί¼ÏmŒœ¢<C593>sˆ 6]§ñÊÐ/s%Ȭã(äÀƒ?¤ÃB·»lË$5…rA4Í=ÿ´z:Á
endstream
endobj
5 0 obj
<<
/Contents 6 0 R
/Parent 1 0 R
/Resources 10 0 R
/Type /Page
>>
endobj
6 0 obj
<<
/Filter /FlateDecode
/Length 306
>>
stream
xœm½NÃ0…÷>Åa@ Ü8ÁM²ñ#: „ò®sÓ%o<>ÛP‰¡­ûwÎwí/‹Œ‰ûÅcƒåšƒ¯X¡éðÜRg¼BY•ì®BÓâª!ßHÛâÉÙVGíl¸Fóñ׿\çàüœDQ²zu”x“SO6B´#a¯ãN[Z9¸Ú~;­(Å‘^Ó‰až èå”êjô<6A>àL\¢w6îÌ„Žˆý™]WuÅòüÁi(Á{B§­4 ïGoSJ)"µ'~7ÃB8¯·‡vxR¤‡xÆFTLÔGµóø)rãÆˆ$NKÝ`0$A%n©"6Úm·÷ô#ûTbÊõÉJ&^!ÄmÆóâŒ#¯X<C2AF>Í?²“öÑ¥¹t{lÆ -…pqšû|
endstream
endobj
7 0 obj
<<
/BaseFont /Helvetica-Bold
/Encoding /WinAnsiEncoding
/Subtype /Type1
/Type /Font
>>
endobj
8 0 obj
<<
/BaseFont /Helvetica
/Encoding /WinAnsiEncoding
/Subtype /Type1
/Type /Font
>>
endobj
9 0 obj
<<
/Font <</F1 7 0 R
/F2 8 0 R>>
/ProcSet [/PDF /Text /ImageB /ImageC /ImageI]
>>
endobj
10 0 obj
<<
/Font <</F1 7 0 R
/F2 8 0 R>>
/ProcSet [/PDF /Text /ImageB /ImageC /ImageI]
>>
endobj
11 0 obj
<<
/CreationDate (D:20251230034342Z)
>>
endobj
xref
0 12
0000000000 65535 f
0000000015 00000 n
0000000108 00000 n
0000000211 00000 n
0000000291 00000 n
0000000805 00000 n
0000000886 00000 n
0000001264 00000 n
0000001366 00000 n
0000001463 00000 n
0000001560 00000 n
0000001658 00000 n
trailer
<<
/Size 12
/Root 2 0 R
/Info 11 0 R
/ID [<2B10F02FFCC93A7FD39B360714BACC88><2B10F02FFCC93A7FD39B360714BACC88>]
>>
startxref
1714
%%EOF


@ -0,0 +1,367 @@
#!/usr/bin/env -S uv run --script
# /// script
# requires-python = ">=3.10"
# dependencies = [
# "pypdf",
# "pdfplumber",
# ]
# ///
"""
PDF Processing Script for non-technical users.
Handles common PDF operations: info, text extraction, search, split, merge.
Usage: uv run scripts/process_pdf.py <filepath> <command> [args...] [--output <output_path>]
"""
import sys
import argparse
import re
from pathlib import Path
def load_pdf_pypdf(filepath):
"""Load PDF using pypdf."""
from pypdf import PdfReader
path = Path(filepath)
if not path.exists():
print(f"Error: File not found: {filepath}")
sys.exit(1)
try:
return PdfReader(filepath)
except Exception as e:
print(f"Error reading PDF: {e}")
sys.exit(1)
def load_pdf_plumber(filepath):
"""Load PDF using pdfplumber (better for text/tables)."""
import pdfplumber
path = Path(filepath)
if not path.exists():
print(f"Error: File not found: {filepath}")
sys.exit(1)
try:
return pdfplumber.open(filepath)
except Exception as e:
print(f"Error reading PDF: {e}")
sys.exit(1)
def parse_page_range(pages_str, max_pages):
"""Parse page range string like '1,2,3' or '1-5' or '1,3-5,7'."""
if not pages_str:
return list(range(1, max_pages + 1))
pages = set()
parts = pages_str.split(',')
for part in parts:
part = part.strip()
if '-' in part:
start, end = part.split('-', 1)
start = int(start.strip())
end = int(end.strip())
pages.update(range(start, end + 1))
else:
pages.add(int(part))
# Filter to valid range and sort
valid_pages = sorted([p for p in pages if 1 <= p <= max_pages])
return valid_pages
def cmd_info(args):
"""Show PDF information."""
reader = load_pdf_pypdf(args.filepath)
print("=" * 60)
print("PDF INFORMATION")
print("=" * 60)
print(f"\nFile: {args.filepath}")
print(f"Pages: {len(reader.pages)}")
# File size
path = Path(args.filepath)
size_bytes = path.stat().st_size
if size_bytes < 1024:
size_str = f"{size_bytes} bytes"
elif size_bytes < 1024 * 1024:
size_str = f"{size_bytes / 1024:.1f} KB"
else:
size_str = f"{size_bytes / (1024 * 1024):.1f} MB"
print(f"Size: {size_str}")
# Metadata
meta = reader.metadata
if meta:
print("\n" + "-" * 40)
print("METADATA:")
print("-" * 40)
if meta.title:
print(f" Title: {meta.title}")
if meta.author:
print(f" Author: {meta.author}")
if meta.subject:
print(f" Subject: {meta.subject}")
if meta.creator:
print(f" Creator: {meta.creator}")
if meta.creation_date:
print(f" Created: {meta.creation_date}")
if meta.modification_date:
print(f" Modified: {meta.modification_date}")
def cmd_text(args):
"""Extract text from PDF."""
pdf = load_pdf_plumber(args.filepath)
pages = parse_page_range(args.pages, len(pdf.pages))
print("=" * 60)
if args.pages:
print(f"TEXT EXTRACTION (pages {args.pages})")
else:
print("TEXT EXTRACTION (all pages)")
print("=" * 60)
for page_num in pages:
page = pdf.pages[page_num - 1] # 0-indexed
text = page.extract_text() or ""
print(f"\n--- Page {page_num} ---\n")
if text.strip():
print(text)
else:
print("(No text found on this page - may be an image or scan)")
pdf.close()
def cmd_search(args):
"""Search for text in PDF."""
if not args.query:
print("Error: Please provide a search query")
sys.exit(1)
pdf = load_pdf_plumber(args.filepath)
query = args.query.lower()
print("=" * 60)
print(f"SEARCH RESULTS: '{args.query}'")
print("=" * 60)
total_matches = 0
for i, page in enumerate(pdf.pages):
page_num = i + 1
text = page.extract_text() or ""
# Find matches with context
text_lower = text.lower()
if query in text_lower:
# Count occurrences
count = text_lower.count(query)
total_matches += count
print(f"\n--- Page {page_num} ({count} match{'es' if count > 1 else ''}) ---")
# Show context around each match
lines = text.split('\n')
for j, line in enumerate(lines):
if query in line.lower():
# Highlight the match (uppercase)
highlighted = re.sub(
f'({re.escape(args.query)})',
r'>>>\1<<<',
line,
flags=re.IGNORECASE
)
print(f" {highlighted}")
print(f"\n{'=' * 40}")
if total_matches == 0:
print(f"No matches found for '{args.query}'")
else:
print(f"Total: {total_matches} match{'es' if total_matches > 1 else ''} found")
pdf.close()

def cmd_tables(args):
    """Extract tables from PDF."""
    pdf = load_pdf_plumber(args.filepath)
    print("=" * 60)
    print("TABLE EXTRACTION")
    print("=" * 60)
    table_count = 0
    for i, page in enumerate(pdf.pages):
        page_num = i + 1
        tables = page.extract_tables()
        if tables:
            for table in tables:
                table_count += 1
                print(f"\n--- Table {table_count} (Page {page_num}) ---\n")
                # Print as CSV-like format
                for row in table:
                    # Clean up None values
                    cleaned = [str(cell).strip() if cell else "" for cell in row]
                    print(",".join(cleaned))
    if table_count == 0:
        print("\nNo tables found in this PDF.")
        print("Note: Table extraction works best with clearly structured tables.")
    else:
        print(f"\n{'=' * 40}")
        print(f"Total: {table_count} table{'s' if table_count > 1 else ''} found")
    pdf.close()

def cmd_count(args):
    """Count words and characters in PDF."""
    pdf = load_pdf_plumber(args.filepath)
    total_chars = 0
    total_words = 0
    page_stats = []
    for i, page in enumerate(pdf.pages):
        text = page.extract_text() or ""
        chars = len(text)
        words = len(text.split())
        total_chars += chars
        total_words += words
        page_stats.append((i + 1, words, chars))
    print("=" * 60)
    print("DOCUMENT STATISTICS")
    print("=" * 60)
    print(f"\nTotal pages: {len(pdf.pages)}")
    print(f"Total words: {total_words:,}")
    print(f"Total characters: {total_chars:,}")
    if len(pdf.pages) > 1:
        print(f"\nAverage words per page: {total_words // len(pdf.pages):,}")
    print("\n" + "-" * 40)
    print("PER-PAGE BREAKDOWN:")
    print("-" * 40)
    for page_num, words, chars in page_stats:
        print(f" Page {page_num}: {words:,} words, {chars:,} chars")
    pdf.close()

def cmd_split(args):
    """Extract specific pages to a new PDF."""
    from pypdf import PdfWriter
    if not args.output:
        print("Error: Please specify output file with --output")
        sys.exit(1)
    reader = load_pdf_pypdf(args.filepath)
    pages = parse_page_range(args.pages, len(reader.pages))
    if not pages:
        print("Error: No valid pages specified")
        sys.exit(1)
    writer = PdfWriter()
    for page_num in pages:
        writer.add_page(reader.pages[page_num - 1])
    with open(args.output, 'wb') as f:
        writer.write(f)
    print(f"Extracted {len(pages)} page(s) to: {args.output}")
    print(f"Pages included: {', '.join(map(str, pages))}")

def cmd_merge(args):
    """Merge multiple PDFs into one."""
    from pypdf import PdfReader, PdfWriter
    if not args.output:
        print("Error: Please specify output file with --output")
        sys.exit(1)
    # Collect all input files; after the argument shift in main(), the
    # query and pages positionals carry the second and third input PDFs
    files = [args.filepath]
    if args.query:
        files.append(args.query)
    if args.pages:
        files.append(args.pages)
    # Validate all files exist
    for f in files:
        if not Path(f).exists():
            print(f"Error: File not found: {f}")
            sys.exit(1)
    writer = PdfWriter()
    total_pages = 0
    for filepath in files:
        reader = PdfReader(filepath)
        for page in reader.pages:
            writer.add_page(page)
            total_pages += 1
        print(f" Added: {filepath} ({len(reader.pages)} pages)")
    with open(args.output, 'wb') as f:
        writer.write(f)
    print(f"\nMerged {len(files)} files ({total_pages} total pages) to: {args.output}")

def main():
    parser = argparse.ArgumentParser(description='Process PDF files')
    parser.add_argument('filepath', help='Path to PDF file (or "merge" command)')
    parser.add_argument('command', nargs='?', default='info',
                        help='Command: info, text, search, tables, count, split, merge')
    parser.add_argument('query', nargs='?', help='Search query or second file for merge')
    parser.add_argument('--pages', '-p', help='Page range (e.g., "1-3" or "1,2,5")')
    parser.add_argument('--output', '-o', help='Output file path')
    args = parser.parse_args()

    # Handle merge as special case (first arg is "merge")
    if args.filepath == 'merge':
        # command defaults to 'info', so check query to ensure two files were given
        if not args.query:
            print("Error: merge requires at least 2 PDF files")
            print("Usage: process_pdf.py merge file1.pdf file2.pdf --output combined.pdf")
            sys.exit(1)
        # Shift args for merge
        args.filepath = args.command
        args.command = 'merge'

    # Run the command
    commands = {
        'info': cmd_info,
        'text': cmd_text,
        'search': cmd_search,
        'tables': cmd_tables,
        'count': cmd_count,
        'split': cmd_split,
        'merge': cmd_merge,
    }
    if args.command not in commands:
        print(f"Error: Unknown command '{args.command}'")
        print(f"Available commands: {', '.join(commands.keys())}")
        sys.exit(1)
    commands[args.command](args)


if __name__ == "__main__":
    main()
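The positional-argument shuffle in `main()` is the subtle part of this CLI: when the first positional is the literal `merge`, the remaining positionals are reinterpreted as input files. A minimal sketch of how the two call styles resolve, using only the argparse setup shown above (file names are illustrative):

```python
import argparse

# Mirror of the parser defined in main()
parser = argparse.ArgumentParser(description='Process PDF files')
parser.add_argument('filepath')
parser.add_argument('command', nargs='?', default='info')
parser.add_argument('query', nargs='?')
parser.add_argument('--pages', '-p')
parser.add_argument('--output', '-o')

# Normal form: process_pdf.py report.pdf search revenue --pages 1-3
args = parser.parse_args(['report.pdf', 'search', 'revenue', '--pages', '1-3'])
print(args.filepath, args.command, args.query)  # report.pdf search revenue

# Merge form: process_pdf.py merge a.pdf b.pdf --output combined.pdf
args = parser.parse_args(['merge', 'a.pdf', 'b.pdf', '--output', 'combined.pdf'])
if args.filepath == 'merge':
    # main() shifts the positionals so cmd_merge sees real file paths
    args.filepath, args.command = args.command, 'merge'
print(args.filepath, args.command, args.query)  # a.pdf merge b.pdf
```

This is why `cmd_merge` reads its second and third files out of `args.query` and `args.pages`: they are positional slots, not semantic fields, once the shift has happened.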


@@ -0,0 +1,7 @@
FROM qwq
SYSTEM You are a test agent with calculator skills.
AGENT TYPE conversational
SKILL ./calculator-skill

4
skills/test-mcp/mcp.json Normal file

@@ -0,0 +1,4 @@
{
"name": "test-mcp",
"description": "A test MCP server"
}

109
skills/test-mcp/server.py Executable file

@@ -0,0 +1,109 @@
#!/usr/bin/env python3
"""
A simple test MCP server that exposes an echo tool.
"""
import json
import sys


def handle_request(req):
    method = req.get("method", "")
    if method == "initialize":
        return {
            "protocolVersion": "2024-11-05",
            "capabilities": {"tools": {}},
            "serverInfo": {"name": "test-mcp", "version": "1.0.0"}
        }
    elif method == "notifications/initialized":
        # Notification, no response needed
        return None
    elif method == "tools/list":
        return {
            "tools": [
                {
                    "name": "echo",
                    "description": "Echoes back the input text",
                    "inputSchema": {
                        "type": "object",
                        "properties": {
                            "text": {
                                "type": "string",
                                "description": "The text to echo"
                            }
                        },
                        "required": ["text"]
                    }
                },
                {
                    "name": "add",
                    "description": "Adds two numbers together",
                    "inputSchema": {
                        "type": "object",
                        "properties": {
                            "a": {
                                "type": "number",
                                "description": "First number"
                            },
                            "b": {
                                "type": "number",
                                "description": "Second number"
                            }
                        },
                        "required": ["a", "b"]
                    }
                }
            ]
        }
    elif method == "tools/call":
        params = req.get("params", {})
        tool_name = params.get("name", "")
        args = params.get("arguments", {})
        if tool_name == "echo":
            text = args.get("text", "")
            return {
                "content": [{"type": "text", "text": f"Echo: {text}"}]
            }
        elif tool_name == "add":
            a = args.get("a", 0)
            b = args.get("b", 0)
            result = a + b
            return {
                "content": [{"type": "text", "text": f"Result: {a} + {b} = {result}"}]
            }
        else:
            return {
                "content": [{"type": "text", "text": f"Unknown tool: {tool_name}"}],
                "isError": True
            }
    else:
        return {}


def main():
    for line in sys.stdin:
        try:
            req = json.loads(line.strip())
            result = handle_request(req)
            # Only send response if there's an ID (not a notification)
            if "id" in req and result is not None:
                resp = {
                    "jsonrpc": "2.0",
                    "id": req["id"],
                    "result": result
                }
                print(json.dumps(resp), flush=True)
        except json.JSONDecodeError:
            pass
        except Exception as e:
            if "id" in req:
                resp = {
                    "jsonrpc": "2.0",
                    "id": req.get("id"),
                    "error": {"code": -32603, "message": str(e)}
                }
                print(json.dumps(resp), flush=True)


if __name__ == "__main__":
    main()
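Not part of the commit, but useful for seeing the framing: the server above speaks newline-delimited JSON-RPC over stdio, one message per line. A minimal client sketch follows; for self-containment it writes a tiny echo-only stand-in server to a temp file, and the `SERVER` path is an assumption you would replace with `skills/test-mcp/server.py` to drive the real thing:

```python
import json
import os
import subprocess
import sys
import tempfile
import textwrap

# Hypothetical stand-in: a one-tool echo server, same line-delimited framing.
server_src = textwrap.dedent('''
    import json, sys
    for line in sys.stdin:
        req = json.loads(line)
        if "id" in req:
            text = req["params"]["arguments"]["text"]
            resp = {"jsonrpc": "2.0", "id": req["id"],
                    "result": {"content": [{"type": "text", "text": "Echo: " + text}]}}
            print(json.dumps(resp), flush=True)
''')
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write(server_src)
    SERVER = f.name  # replace with skills/test-mcp/server.py to test the real server

proc = subprocess.Popen([sys.executable, SERVER], stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE, text=True)
request = {"jsonrpc": "2.0", "id": 1, "method": "tools/call",
           "params": {"name": "echo", "arguments": {"text": "hi"}}}
proc.stdin.write(json.dumps(request) + "\n")  # one JSON-RPC message per line
proc.stdin.flush()
reply = json.loads(proc.stdout.readline())
print(reply["result"]["content"][0]["text"])  # Echo: hi
proc.stdin.close()
proc.wait()
os.unlink(SERVER)
```

Closing the child's stdin ends its read loop, which is how a host would shut the server down cleanly; requests without an `"id"` are notifications and get no reply line.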