30 endpoints  ·  REST/JSON  ·  PHP self-hosted

API Documentation

Complete reference for every Yuga endpoint. Train your model, query it, manage sessions, register tools, and run autonomous pipelines — all from a single REST API on your own server.

Base URL: https://yug.ygmarketplace.com/api/

Authentication

Every request must include your API key. Pass it as a request header or as a query parameter. The global admin key is the plain api_key value from config.php.

Header (recommended)
curl -H "X-API-Key: yuga_live_YOUR_KEY" \
  "https://yug.ygmarketplace.com/api/?action=status"
Query parameter
curl "https://yug.ygmarketplace.com/api/?action=status&key=yuga_live_YOUR_KEY"
Header: X-API-Key: yuga_live_…
Query: ?key=yuga_live_…
Admin key: the plain api_key value from config.php

Core

4 endpoints
GET ?action=status Returns the current state of a model: vocab size, training steps, last loss, crawl stats, and server base URL.

Parameter  Type    Description
model      string  Model name to query (defaults to "default") (optional)

Request
?action=status&model=default
Response
{ "model": "default", "vocab_size": 847, "trained_steps": 15000, "last_loss": 0.76, "vocab_built": true, "total_chars": 482000, "base_url": "https://yoursite.com", "last_crawl": "2026-03-20T14:22:00Z", "pages_seen": 24 }
GET ?action=models List all named model instances on this server.

Request
?action=models
Response
{ "models": ["default", "support", "sales"] }
POST ?action=create Create a new named model. Each model has completely independent weights, vocabulary, and training data.

Parameter  Type    Description
model*     string  Name for the new model (alphanumeric, dashes, underscores)

* required

Request
{ "model": "support" }
Response
{ "model": "support", "message": "Model created" }
POST ?action=delete Permanently delete a model and all of its training data. This action cannot be undone.

Parameter  Type    Description
model*     string  Name of the model to delete

* required

Request
{ "model": "old-model" }
Response
{ "message": "Model 'old-model' deleted" }

Chat & Generation

10 endpoints
POST ?action=chat Context-aware Q&A with persistent session memory. Uses BM25 retrieval to ground every answer in your trained content.

Parameter    Type    Description
model*       string  Model name
message*     string  User message
session_id   string  Session ID for multi-turn memory; omit to start a fresh context (optional)
temperature  float   Sampling temperature, 0–1 (default 0.72) (optional)

* required

Request
{ "model": "default", "message": "What is your refund policy?", "session_id": "sess_abc", "temperature": 0.72 }
Response
{ "reply": "Our refund policy covers the first 30 days...", "session_id": "sess_abc", "turns": 3, "enriched_query": "refund policy return", "model": "default" }
POST ?action=smart_chat ChatGPT-style chat backed by an optional external LLM. Accepts a full conversation history array for multi-turn context.

Parameter   Type    Description
model*      string  Model name
message*    string  Latest user message
history     array   Array of {role, content} objects for prior turns (optional)
max_tokens  int     Max tokens for the LLM response (optional)
llm_model   string  Override the LLM backend model name (optional)

* required

Request
{ "model": "default", "message": "What payment methods do you accept?", "history": [ {"role":"user","content":"Hi"}, {"role":"assistant","content":"Hello!"} ] }
Response
{ "reply": "We accept Stripe and PayPal...", "history": [...] }
POST ?action=think Chain-of-thought reasoning. Decomposes the question into sub-problems, retrieves evidence for each, then synthesises a grounded answer.

Parameter    Type    Description
model*       string  Model name
question*    string  Question to reason about
temperature  float   Sampling temperature (optional)
trace        bool    Set true to include the full thinking trace in the response (optional)

* required

Request
{ "model": "default", "question": "Compare the Starter and Pro plans", "temperature": 0.65, "trace": true }
Response
{ "answer": "The Starter plan offers 500 calls/day while Pro gives 5,000 plus...", "type": "reasoning", "confidence": 0.87, "model": "default", "thinking": ["Step 1: Retrieve Starter plan info...", ...] }
POST ?action=reason Multi-hop reasoning. Follows chains of evidence across retrieved passages — ideal for questions that span multiple documents.

Parameter    Type    Description
model*       string  Model name
question*    string  Question to answer
temperature  float   Sampling temperature (optional)
max_hops     int     Maximum reasoning hops (default 3) (optional)
refine       bool    Refine the answer after all hops (optional)

* required

Request
{ "model": "default", "question": "Which integrations support webhooks?", "max_hops": 3, "refine": true }
Response
{ "answer": "Telegram, Slack, and WhatsApp all support outgoing webhooks...", "hops": 2, "confidence": 0.91, "model": "default" }
POST ?action=agent_run Autonomous agent loop. Yuga plans its own actions toward the goal, executing search/reason/tool steps until satisfied (up to max_steps).

Parameter    Type    Description
model*       string  Model name
goal*        string  High-level goal for the agent to pursue
max_steps    int     Max action steps, 1–10 (default 5) (optional)
temperature  float   Sampling temperature (optional)
max_hops     int     Max hops per reason step (optional)
refine       bool    Refine the final answer (optional)

* required

Request
{ "model": "default", "goal": "Find our most popular plan and write a 2-sentence pitch", "max_steps": 5, "temperature": 0.7 }
Response
{ "answer": "Our Pro plan is the most popular. For $29/month you get 5,000 API calls...", "steps_taken": 3, "actions": ["search", "reason", "generate"], "model": "default" }
POST ?action=beam_generate Higher-quality text generation using beam search. Produces more coherent multi-sentence output than greedy or top-p sampling.

Parameter       Type    Description
model*          string  Model name
prompt*         string  Text prompt to continue
max_tokens      int     Maximum tokens to generate (optional)
beam_width      int     Number of beams to maintain (default 3) (optional)
length_penalty  float   Penalises shorter completions (default 0.9) (optional)

* required

Request
{ "model": "default", "prompt": "The main advantage of self-hosted AI is", "max_tokens": 100, "beam_width": 3, "length_penalty": 0.9 }
Response
{ "text": "complete data privacy — no content ever leaves your server.", "full": "The main advantage of self-hosted AI is complete data privacy...", "prompt": "The main advantage of self-hosted AI is", "beam_width": 3, "length_penalty": 0.9, "model": "default" }
POST ?action=generate Multi-mode text generation with fine-grained control over sampling parameters.

Parameter    Type    Description
model*       string  Model name
mode         string  complete (default) | expand | best_of | fill (optional)
prompt*      string  Input prompt or template
temperature  float   Sampling temperature (optional)
top_p        float   Nucleus sampling threshold (optional)
rep_penalty  float   Repetition penalty (optional)
max_tokens   int     Max tokens to generate (optional)
n            int     Number of candidates for best_of mode (optional)

* required

Request
{ "model": "default", "mode": "complete", "prompt": "Our pricing", "max_tokens": 80, "temperature": 0.8 }
Response
{ "text": "starts at $29/month for the Pro plan, which includes...", "model": "default" }
POST ?action=yugagen_chat Character-level text generation from the YugaGen checkpoint. Requires a trained checkpoint (see yugagen_status).

Parameter    Type    Description
model*       string  Model name
prompt*      string  Text prompt
max_chars    int     Max characters to generate (optional)
temperature  float   Sampling temperature (optional)
top_p        float   Nucleus sampling threshold (optional)

* required

Request
{ "model": "default", "prompt": "The top features of our platform are", "max_chars": 200, "temperature": 0.8, "top_p": 0.9 }
Response
{ "reply": "unlimited API calls, Telegram integration, and BM25 search...", "prompt": "The top features of our platform are", "model": "default", "steps": 8000, "loss": 0.62, "params": 1200000 }
GET ?action=yugagen_status Training status of the YugaGen checkpoint: steps completed, current and best loss, vocabulary size, parameter count, and architecture string.

Parameter  Type    Description
model*     string  Model name

* required

Request
?action=yugagen_status&model=default
Response
{ "ready": true, "model": "default", "steps": 8000, "loss": 0.62, "best_loss": 0.58, "vocab_size": 512, "params": 1200000, "arch": "D=128 H=4 L=4 CTX=256", "ckpt_size": 4800 }
GET ?action=yugagen_models List all trained YugaGen checkpoints with their stats and file size.

Request
?action=yugagen_models
Response
{ "models": [ { "name": "default", "steps": 8000, "loss": 0.62, "params": 1200000, "size_kb": 4800 } ] }

Training

6 endpoints
POST ?action=learn_text Train the model on any raw text string. Minimum 10 characters. Use the source parameter to tag the provenance of the content.

Parameter  Type    Description
model*     string  Model name
text*      string  Raw text to train on (min 10 chars)
source     string  Label describing where this content came from (optional)

* required

Request
{ "model": "default", "text": "Our Pro plan costs $29/month and includes unlimited API calls.", "source": "pricing-page" }
Response
{ "loss": 0.71, "steps": 5000, "vocab": 847, "model": "default" }
POST ?action=learn_url Fetch a single URL, extract its text content, and train the model on it.

Parameter  Type    Description
model*     string  Model name
url*       string  URL to fetch and learn from

* required

Request
{ "model": "default", "url": "https://yoursite.com/faq" }
Response
{ "loss": 0.82, "steps": 5000, "model": "default", "url": "https://yoursite.com/faq" }
POST ?action=learn_site Crawl a site's homepage and sitemap (1 level deep), then train on all discovered pages.

Parameter  Type    Description
model*     string  Model name
url*       string  Site root URL to crawl
max_pages  int     Maximum pages to crawl (default 30, max 100) (optional)

* required

Request
{ "model": "default", "url": "https://yoursite.com", "max_pages": 30 }
Response
{ "pages": 24, "loss": 0.76, "steps": 15000, "model": "default", "base_url": "https://yoursite.com" }
POST ?action=deep_crawl Recursive BFS full-site crawl with configurable depth. Discovers and trains on every reachable page within the domain.

Parameter    Type    Description
model*       string  Model name
url*         string  Seed URL to start crawling from
max_pages    int     Max pages to process (default 100) (optional)
max_depth    int     Max BFS depth (default 4) (optional)
train_steps  int     Training steps per page batch (optional)
reset        bool    Clear crawl history before starting (optional)

* required

Request
{ "model": "default", "url": "https://yoursite.com", "max_pages": 100, "max_depth": 4, "reset": false }
Response
{ "pages": 87, "total_seen": 91, "queued": 0, "loss": 0.68, "steps": 45000, "chars": 1240000, "complete": true, "domain": "yoursite.com", "errors": 4, "model": "default" }
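
The response's queued and complete fields let a client decide whether a crawl finished in a single call. A small Python sketch; the field semantics are read off the sample response above, and whether an unfinished crawl can be resumed by re-invoking the endpoint is not documented here:

```python
def crawl_finished(resp: dict) -> bool:
    """True when a deep_crawl response reports a finished crawl:
    `complete` is set and nothing remains in the queue."""
    return bool(resp.get("complete")) and resp.get("queued", 0) == 0

# Checked against the shape of the sample response:
sample = {"pages": 87, "total_seen": 91, "queued": 0,
          "complete": True, "errors": 4}
```
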
POST ?action=ingest Multi-format document ingestion. Accepts PDF, DOCX, CSV, JSON, and TXT either as a file upload (multipart) or by URL/text.

Parameter  Type    Description
model*     string  Model name
url        string  Remote URL of a document to fetch and ingest (optional)
text       string  Raw text content to ingest directly (optional)
source     string  Label for this content (used with text) (optional)
file_path  string  Server-side absolute path to a file (optional)
file       file    Multipart file upload field: PDF/DOCX/CSV/JSON/TXT (optional)

* required

Request
# JSON body (URL)
{ "model": "default", "url": "https://example.com/manual.pdf" }

# Multipart upload
curl -F "model=default" -F "file=@manual.pdf" \
  "https://yug.ygmarketplace.com/api/?action=ingest"
Response
{ "loss": 0.74, "steps": 8000, "chars": 14200, "source": "https://example.com/manual.pdf", "ingested": true, "model": "default" }
POST ?action=smart_learn Add content to the SmartChat BM25 knowledge base AND simultaneously train transformer weights.

Parameter  Type    Description
model*     string  Model name
text*      string  Content to add to the knowledge base and train on
source     string  Label describing the content source (optional)

* required

Request
{ "model": "default", "text": "Refund policy: 30-day money back, no questions asked.", "source": "policy" }
Response
{ "message": "Knowledge added", "chars": 52 }

Web Search

1 endpoint

Sessions

4 endpoints

Sessions give chat persistent multi-turn memory. Create a session once, then pass the returned session_id on every subsequent call to maintain context across requests.
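
The flow described above can be sketched in Python (standard library only; the key is a placeholder, and the network calls are left commented out so the snippet stays inert):

```python
import json
import urllib.request

API = "https://yug.ygmarketplace.com/api/?action="
KEY = "yuga_live_YOUR_KEY"  # placeholder

def call(action: str, body: dict) -> dict:
    """POST a JSON body to one Yuga action and decode the JSON reply."""
    req = urllib.request.Request(
        API + action,
        data=json.dumps(body).encode(),
        headers={"X-API-Key": KEY, "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def chat_turn(message: str, session_id: str, model: str = "default") -> dict:
    """Build the ?action=chat body for one turn of an existing session."""
    return {"model": model, "message": message, "session_id": session_id}

# Lifecycle: session_start -> chat turns sharing one session_id -> session_end.
# sid = call("session_start", {"model": "default"})["session_id"]
# reply = call("chat", chat_turn("What is your refund policy?", sid))["reply"]
# call("session_end", {"session_id": sid})
```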

POST ?action=session_start Create a new conversation session associated with a model.

Parameter  Type    Description
model*     string  Model name

* required

Request
{ "model": "default" }
Response
{ "session_id": "sess_8f3a2b", "model": "default" }
GET ?action=session_history Retrieve the last N turns from a session as a structured history array.

Parameter    Type    Description
session_id*  string  Session ID to retrieve history for
last_n       int     Number of most recent turns to return (default 20) (optional)

* required

Request
# Pass as JSON body or query params
{ "session_id": "sess_8f3a2b", "last_n": 10 }
Response
{ "history": [ {"role":"user","content":"Hi","ts":1742800000}, {"role":"assistant","content":"Hello! How can I help?","ts":1742800002} ], "session_id": "sess_8f3a2b" }
POST ?action=session_end Permanently delete a session and free its stored context memory.

Parameter    Type    Description
session_id*  string  Session ID to terminate

* required

Request
{ "session_id": "sess_8f3a2b" }
Response
{ "message": "Session ended", "session_id": "sess_8f3a2b" }
POST ?action=command Voice command and knowledge Q&A router. Detects device intents (play, set, open, call, turn on/off) vs knowledge questions and returns either a structured device action payload or a natural language reply.

Parameter   Type    Description
model*      string  Model name
message*    string  User message or voice command text
session_id  string  Session ID for conversation memory (optional)

* required

Request
{ "model": "default", "message": "What is the refund policy?", "session_id": "user_device_123" }
Response
{ "reply": "Our refund policy covers the first 30 days...", "action": null, "source": "knowledge", "session_id": "user_device_123" }

// Device intent example:
{ "reply": "Playing music", "action": {"type":"play","target":"music"}, "source": "command", "session_id": "user_device_123" }

Tools & Pipelines

5 endpoints

Register HTTP callback tools that Yuga can auto-select and call based on natural language queries — self-hosted function calling with no OpenAI required. Pipelines chain multiple steps into deterministic workflows.

POST ?action=tool_register Register an external tool that Yuga can call. The description is used for semantic matching against user queries.

Parameter     Type    Description
name*         string  Unique tool name (used when calling)
description*  string  Plain-English description, used for semantic matching
type*         string  php | webhook | get | sql
endpoint*     string  URL or handler path the tool calls
schema        object  Parameter schema object describing tool inputs (optional)

* required

Request
{ "name": "get_order", "description": "Look up an order by order ID", "type": "webhook", "endpoint": "https://api.yoursite.com/orders/{id}", "schema": {"id": {"type":"string","required":true}} }
Response
{ "tool_id": "tool_a1b2", "name": "get_order" }
GET ?action=tool_list List all registered tools and their metadata.

Request
?action=tool_list
Response
{ "tools": [ { "name": "get_order", "description": "Look up an order by order ID", "type": "webhook" } ] }
POST ?action=tool_call Call a specific registered tool by name with explicit parameters.

Parameter  Type    Description
tool*      string  Registered tool name to invoke
params*    object  Parameter object to pass to the tool

* required

Request
{ "tool": "get_order", "params": {"id": "ORD-9981"} }
Response
{ "result": { "id": "ORD-9981", "status": "shipped", "eta": "2026-03-27" }, "duration_ms": 183 }
POST ?action=tool_match Auto-select the best matching tool for a natural language query using semantic similarity, then execute it.

Parameter  Type    Description
query*     string  Natural language query to match against registered tools

* required

Request
{ "query": "where is order number ORD-9981?" }
Response
{ "matched_tool": "get_order", "matched": true, "result": { "id": "ORD-9981", "status": "shipped", "eta": "2026-03-27" } }
POST ?action=pipeline_run Execute a deterministic multi-step workflow. Each step receives the previous step's output as {output}. Step types: search, tool, ingest, generate, reason, transform.

Parameter  Type    Description
model*     string  Model name
input*     string  Initial input passed to the first step
steps*     array   Array of step objects; each step has a type and optional params
vars       object  Variable map injected into step templates (optional)

* required

Request
{ "model": "default", "input": "Summarise our refund policy in plain English", "steps": [ {"type": "reason"}, {"type": "generate", "prompt": "Write a friendly version of: {output}"} ], "vars": {} }
Response
{ "output": "Our refund policy is simple: if you're not happy within 30 days, we'll refund you.", "steps_run": 2, "model": "default" }

Error Codes

Status  Code            Meaning
400     bad_request     Missing or invalid request body. Check your JSON payload and required parameters.
401     unauthorized    Invalid, missing, or expired API key. Pass your key in the X-API-Key header or ?key= param.
404     unknown_action  The action parameter does not match any known endpoint. Check the spelling.
429     rate_limited    Daily API call limit exceeded for your plan. The limit resets at midnight UTC. Upgrade for higher limits.
500     server_error    Internal server error. Check your server error log. Usually caused by missing model weights or file permissions.

All error responses include a JSON body: {"error": "message", "code": "slug"}
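
Since every failure shares that JSON shape, a client can branch on the code slug. A minimal Python sketch; treating only rate_limited and server_error as worth retrying is our reading of the table above, not an API guarantee:

```python
import json

RETRYABLE = {"rate_limited", "server_error"}  # transient classes (assumed)

def parse_error(body: str) -> tuple[str, bool]:
    """Decode the documented {"error": ..., "code": ...} body and report
    whether retrying later is reasonable for this class of failure."""
    payload = json.loads(body)
    return payload.get("error", ""), payload.get("code") in RETRYABLE

msg, retry = parse_error('{"error": "Daily API call limit exceeded", "code": "rate_limited"}')
```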