YG3 API Models & Endpoints

A unified reference for all Concentrated Language Models (CLMs) available through the YG3 API.

YG3 provides three CLM-powered models:

  • Elysia (Text)
  • Taurus (Vision)
  • Vani (Image Generation)

All models share a consistent authentication system and an OpenAI-compatible interface.

Table of Contents

  1. Quick Start

  2. Authentication

  3. Models Overview
     • Elysia (Text)
     • Taurus (Vision)
     • Vani (Image Generation)

  4. Endpoints

  5. Integration Guides

  6. Advanced Usage

  7. Rate Limits & Pricing

  8. Support & Changelog


Quick Start

Get Your API Key

  1. Sign up at https://app.yg3.ai/get-started

  2. Go to API Keys: https://app.yg3.ai/api-keys

  3. Click Create New Key

  4. Copy your key (displayed once)

Make Your First Request (Elysia example)

curl https://elysia-api.ngrok.io/api/public/v1/chat/completions \
  -H "Authorization: Bearer YOUR-API-KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "elysia",
    "messages": [
      {"role": "user", "content": "Hello!"}
    ]
  }'

Authentication

All API requests require an API key in the Authorization header:

Authorization: Bearer sk-your-api-key-here

Keep your API key secure! Never commit it to version control or expose it in client-side code.

Environment Variables

# .env
ELYSIA_API_KEY=sk-your-api-key-here
ELYSIA_BASE_URL=https://elysia-api.ngrok.io/api/public/v1
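In Python, these variables can be read from the environment and turned into request headers. A minimal sketch (the fallback values here are placeholders, not real credentials):

```python
import os

# Read credentials from the environment (populated by your .env loader of choice)
api_key = os.environ.get("ELYSIA_API_KEY", "sk-your-api-key-here")
base_url = os.environ.get("ELYSIA_BASE_URL", "https://elysia-api.ngrok.io/api/public/v1")

# Headers expected by every YG3 API request
headers = {
    "Authorization": f"Bearer {api_key}",
    "Content-Type": "application/json",
}
```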

Models Overview

YG3 exposes three CLM-powered models.
Each model has its own input format and endpoint behavior.


Elysia — Text CLM

Modality: Text / Chat
Endpoint: /chat/completions
Use cases:

Example Request

from openai import OpenAI

client = OpenAI(
    api_key="YOUR-API-KEY",
    base_url="https://elysia-api.ngrok.io/api/public/v1"
)
response = client.chat.completions.create(
    model="elysia",
    messages=[
        {"role": "user", "content": "Help me with my marketing strategy"}
    ]
)
print(response.choices[0].message.content)

Expected Behavior

Taurus — Vision CLM

Modality: Vision (image + text)
Endpoint: /chat/completions
Input format:
A single messages item whose content is a list of parts: an image_url part (a URL or base64 data URL) and a text part.

Use cases:

Example Request

import base64
import requests
from io import BytesIO
from PIL import Image

# API_BASE and headers are assumed to be configured as in the Authentication section
def analyze_image_with_taurus(image_url, question):
    # Download the image and re-encode it as a base64 PNG data URL
    img = Image.open(BytesIO(requests.get(image_url).content))
    buffer = BytesIO()
    img.save(buffer, format="PNG")
    img_b64 = base64.b64encode(buffer.getvalue()).decode()
    payload = {
        "model": "taurus",
        "messages": [{
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{img_b64}"}},
                {"type": "text", "text": question}
            ]
        }]
    }
    r = requests.post(f"{API_BASE}/chat/completions", headers=headers, json=payload)
    return r.json()

Expected Behavior

Vani — Image Generation CLM

Modality: Image Generation
Endpoint: /images/generations
Use cases:

Example Request

import base64
import requests
from io import BytesIO
from PIL import Image

# API_BASE and headers are assumed to be configured as in the Authentication section
def create_image_with_vani(prompt, size="1024x1024"):
    payload = {"model": "vani", "prompt": prompt, "size": size, "response_format": "b64_json"}
    r = requests.post(f"{API_BASE}/images/generations", headers=headers, json=payload)
    # Decode the base64-encoded image returned by the API
    img_bytes = base64.b64decode(r.json()["data"][0]["b64_json"])
    return Image.open(BytesIO(img_bytes))

Expected Behavior

Endpoints

YG3’s API supports multiple modalities across two primary endpoints:

Model     Endpoint               Modality         Description
Elysia    /chat/completions      Text             Chat-style reasoning
Taurus    /chat/completions      Vision + Text    Visual understanding
Vani      /images/generations    Image Gen        Create images

Chat Completions

POST /chat/completions

Send messages to Elysia (text) or Taurus (vision) and receive responses. The path is relative to the base URL shown in Quick Start.

Request Body:

{
  "model": "elysia",
  "messages": [
    {"role": "system", "content": "Optional system prompt"},
    {"role": "user", "content": "Your message"}
  ],
  "temperature": 0.7,
  "max_tokens": 1000
}

Parameters:

Parameter     Type       Default      Description
model         string     "elysia"     Model to use ("elysia" or "taurus")
messages      array      required     Array of message objects
temperature   float      0.7          Controls randomness (0.0-2.0)
max_tokens    integer    1000         Maximum tokens in response

Response:

{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1699564800,
  "model": "elysia",
  "choices": [{
    "index": 0,
    "message": {
      "role": "assistant",
      "content": "Response text here..."
    },
    "finish_reason": "stop"
  }],
  "usage": {
    "prompt_tokens": 20,
    "completion_tokens": 50,
    "total_tokens": 70
  }
}
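Once the JSON above is parsed, the reply text and token accounting can be read directly from the nested fields. A minimal sketch using the sample response shown above:

```python
# The sample response from the schema above, as a parsed Python dict
response = {
    "id": "chatcmpl-abc123",
    "object": "chat.completion",
    "model": "elysia",
    "choices": [{
        "index": 0,
        "message": {"role": "assistant", "content": "Response text here..."},
        "finish_reason": "stop",
    }],
    "usage": {"prompt_tokens": 20, "completion_tokens": 50, "total_tokens": 70},
}

# The reply text lives under choices[0].message.content
reply = response["choices"][0]["message"]["content"]
# usage reports the billing-relevant token counts
total_tokens = response["usage"]["total_tokens"]
```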

List Models

GET /models

Get list of available models.

Response:

{
  "object": "list",
  "data": [
    { "id": "elysia", "object": "model", "created": 1677610602, "owned_by": "yg3" },
    { "id": "taurus", "object": "model", "created": 1677610602, "owned_by": "yg3" },
    { "id": "vani", "object": "model", "created": 1677610602, "owned_by": "yg3" }
  ]
}
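The model IDs can be pulled out of the `data` array with a single comprehension. A sketch against the sample response above:

```python
# The sample /models response from the schema above, as a parsed Python dict
models_response = {
    "object": "list",
    "data": [
        {"id": "elysia", "object": "model", "owned_by": "yg3"},
        {"id": "taurus", "object": "model", "owned_by": "yg3"},
        {"id": "vani", "object": "model", "owned_by": "yg3"},
    ],
}

# Collect the usable model identifiers
available = [m["id"] for m in models_response["data"]]
```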

Integration Guides

Python (OpenAI Library)

from openai import OpenAI
client = OpenAI(
    api_key="YOUR-API-KEY",
    base_url="https://elysia-api.ngrok.io/api/public/v1"
)
response = client.chat.completions.create(
    model="elysia",
    messages=[
        {"role": "user", "content": "Help me with my marketing strategy"}
    ]
)
print(response.choices[0].message.content)

LangChain

from langchain_openai import ChatOpenAI
from langchain.schema import HumanMessage, SystemMessage
# Initialize Elysia
clm = ChatOpenAI(
    model="elysia",
    openai_api_key="YOUR-API-KEY",
    openai_api_base="https://elysia-api.ngrok.io/api/public/v1",
    temperature=0.7
)
# Simple usage
messages = [
    SystemMessage(content="You are a business growth strategist."),
    HumanMessage(content="How do I scale my SaaS business?")
]
response = clm.invoke(messages)
print(response.content)

LangChain Agents

from langchain.agents import AgentExecutor, create_openai_tools_agent
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain.tools import Tool
# Initialize Elysia
clm = ChatOpenAI(
    model="elysia",
    openai_api_key="YOUR-API-KEY",
    openai_api_base="https://elysia-api.ngrok.io/api/public/v1"
)
# Define custom tools
def search_competitor_data(query: str) -> str:
    """Search for competitor information"""
    return f"Competitor data for: {query}"
tools = [
    Tool(
        name="competitor_search",
        func=search_competitor_data,
        description="Search for competitor data and market analysis"
    )
]
# Create agent prompt
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are Elysia, a marketing and business development expert with access to tools."),
    ("user", "{input}"),
    MessagesPlaceholder(variable_name="agent_scratchpad"),
])
# Create and run agent
agent = create_openai_tools_agent(clm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
result = agent_executor.invoke({
    "input": "Analyze my competitors in the SaaS email marketing space"
})
print(result["output"])

LangChain RAG (Retrieval-Augmented Generation)

from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain_community.vectorstores import Chroma
from langchain.chains import RetrievalQA
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.document_loaders import TextLoader
# Initialize Elysia
clm = ChatOpenAI(
    model="elysia",
    openai_api_key="YOUR-API-KEY",
    openai_api_base="https://elysia-api.ngrok.io/api/public/v1"
)
# Load and process documents
loader = TextLoader("your_business_docs.txt")
documents = loader.load()
text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=1000,
    chunk_overlap=200
)
splits = text_splitter.split_documents(documents)
# Create vector store (you'll need OpenAI embeddings or another provider)
embeddings = OpenAIEmbeddings()
vectorstore = Chroma.from_documents(documents=splits, embedding=embeddings)
# Create RAG chain
qa_chain = RetrievalQA.from_chain_type(
    llm=clm,
    chain_type="stuff",
    retriever=vectorstore.as_retriever(),
    return_source_documents=True
)
# Query with context
result = qa_chain.invoke({"query": "What's our pricing strategy?"})
print(result["result"])

LlamaIndex

from llama_index.llms.openai_like import OpenAILike
from llama_index.core import Settings, VectorStoreIndex, SimpleDirectoryReader
# Initialize Elysia
clm = OpenAILike(
    model="elysia",
    api_key="YOUR-API-KEY",
    api_base="https://elysia-api.ngrok.io/api/public/v1",
    is_chat_model=True
)
# Set as default
Settings.llm = clm
# Load documents and create index
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)
# Query
query_engine = index.as_query_engine()
response = query_engine.query("How should I position my product?")
print(response)

JavaScript/TypeScript

import OpenAI from 'openai';
const client = new OpenAI({
  apiKey: process.env.ELYSIA_API_KEY,
  baseURL: 'https://elysia-api.ngrok.io/api/public/v1'
});
async function chat() {
  const response = await client.chat.completions.create({
    model: 'elysia',
    messages: [
      { role: 'user', content: 'Help me with my marketing strategy' }
    ]
  });
  
  console.log(response.choices[0].message.content);
}
chat();

Node.js with LangChain

import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage } from "@langchain/core/messages";
const clm = new ChatOpenAI({
  modelName: "elysia",
  openAIApiKey: process.env.ELYSIA_API_KEY,
  configuration: {
    baseURL: "https://elysia-api.ngrok.io/api/public/v1"
  }
});
const response = await clm.invoke([
  new HumanMessage("What's the best way to grow my email list?")
]);
console.log(response.content);

cURL

curl https://elysia-api.ngrok.io/api/public/v1/chat/completions \
  -H "Authorization: Bearer $ELYSIA_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "elysia",
    "messages": [
      {"role": "user", "content": "Give me 3 content ideas for LinkedIn"}
    ],
    "temperature": 0.7
  }'

Advanced Usage

Multi-Agent Systems

Build sophisticated multi-agent workflows using Elysia:

from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
# Marketing Agent
marketing_clm = ChatOpenAI(
    model="elysia",
    openai_api_key="YOUR-API-KEY",
    openai_api_base="https://elysia-api.ngrok.io/api/public/v1"
)
# Strategy Agent
strategy_clm = ChatOpenAI(
    model="elysia",
    openai_api_key="YOUR-API-KEY",
    openai_api_base="https://elysia-api.ngrok.io/api/public/v1"
)
# Create specialized agents with different system prompts
marketing_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a marketing expert focused on content strategy and campaigns."),
    ("user", "{input}")
])
strategy_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a business strategist focused on growth and scaling."),
    ("user", "{input}")
])
# Use agents together in a workflow
def multi_agent_analysis(company_info: str):
    # Get marketing perspective
    marketing_response = marketing_clm.invoke(
        marketing_prompt.format(
            input=f"Analyze marketing opportunities for: {company_info}"
        )
    )
    
    # Get strategy perspective
    strategy_response = strategy_clm.invoke(
        strategy_prompt.format(
            input=f"Based on this marketing analysis: {marketing_response.content}, what's the growth strategy?"
        )
    )
    
    return {
        "marketing": marketing_response.content,
        "strategy": strategy_response.content
    }
result = multi_agent_analysis("SaaS company selling project management tools")
print(result)

Custom Tools Integration

from langchain.tools import Tool
from langchain.agents import initialize_agent, AgentType
from langchain_openai import ChatOpenAI
def get_website_traffic(url: str) -> str:
    """Fetch website traffic data"""
    # Your implementation
    return f"Traffic data for {url}: 10k monthly visitors"
def analyze_competitors(industry: str) -> str:
    """Analyze competitor landscape"""
    # Your implementation
    return f"Top competitors in {industry}: Company A, B, C"
tools = [
    Tool(
        name="website_traffic",
        func=get_website_traffic,
        description="Get website traffic statistics"
    ),
    Tool(
        name="competitor_analysis",
        func=analyze_competitors,
        description="Analyze competitive landscape"
    )
]
clm = ChatOpenAI(
    model="elysia",
    openai_api_key="YOUR-API-KEY",
    openai_api_base="https://elysia-api.ngrok.io/api/public/v1"
)
agent = initialize_agent(
    tools,
    clm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True
)
result = agent.run("Analyze my competitor's website traffic and suggest improvements")

Memory and Context Management

from langchain.memory import ConversationBufferMemory
from langchain.chains import ConversationChain
from langchain_openai import ChatOpenAI
clm = ChatOpenAI(
    model="elysia",
    openai_api_key="YOUR-API-KEY",
    openai_api_base="https://elysia-api.ngrok.io/api/public/v1"
)
memory = ConversationBufferMemory()
conversation = ConversationChain(
    llm=clm,
    memory=memory,
    verbose=True
)
# Maintains conversation context
response1 = conversation.predict(input="I'm launching a new product")
response2 = conversation.predict(input="What should my pricing be?")
# Elysia remembers the context about your new product

Structured Output

from langchain.output_parsers import PydanticOutputParser
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_openai import ChatOpenAI
class MarketingPlan(BaseModel):
    target_audience: str = Field(description="Target audience description")
    channels: list[str] = Field(description="Marketing channels to use")
    budget: str = Field(description="Suggested budget range")
    timeline: str = Field(description="Implementation timeline")
parser = PydanticOutputParser(pydantic_object=MarketingPlan)
clm = ChatOpenAI(
    model="elysia",
    openai_api_key="YOUR-API-KEY",
    openai_api_base="https://elysia-api.ngrok.io/api/public/v1"
)
prompt = f"""Create a marketing plan for a SaaS product.
{parser.get_format_instructions()}
Provide a comprehensive plan.
"""
response = clm.invoke(prompt)
plan = parser.parse(response.content)
print(f"Target Audience: {plan.target_audience}")
print(f"Channels: {plan.channels}")

Streaming Responses

from openai import OpenAI
client = OpenAI(
    api_key="YOUR-API-KEY",
    base_url="https://elysia-api.ngrok.io/api/public/v1"
)
stream = client.chat.completions.create(
    model="elysia",
    messages=[{"role": "user", "content": "Write a marketing plan"}],
    stream=True
)
for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")

Best Practices

1. Maintain Conversation History

For multi-turn conversations, always include previous messages:

messages = [
    {"role": "user", "content": "I need help with marketing"},
    {"role": "assistant", "content": "I'd be happy to help..."},
    {"role": "user", "content": "Specifically social media"}
]
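One simple way to maintain this history across turns is a small helper that appends each message and sends the accumulated list on every request. A sketch (the helper is illustrative, not part of any SDK):

```python
def add_turn(history, role, content):
    """Append one message to the running conversation history."""
    history.append({"role": role, "content": content})
    return history

history = []
add_turn(history, "user", "I need help with marketing")
add_turn(history, "assistant", "I'd be happy to help...")
add_turn(history, "user", "Specifically social media")
# Pass the full `history` list as the `messages` array on each request
```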

2. Use System Prompts Effectively

Override Elysia's default personality when needed:

messages = [
    {"role": "system", "content": "You are a technical SEO expert."},
    {"role": "user", "content": "Analyze my website's SEO"}
]

3. Handle Errors Gracefully

try:
    response = client.chat.completions.create(...)
except Exception as e:
    print(f"Error: {e}")
    # Implement retry logic or fallback
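One common retry pattern is exponential backoff with jitter. A sketch (this helper is illustrative, not part of the YG3 SDK; `make_request` is any zero-argument callable that performs the API call):

```python
import random
import time

def with_retries(make_request, retries=3, base_delay=1.0):
    """Run make_request(); on failure, back off exponentially and retry.

    Re-raises the last exception if every attempt fails.
    """
    for attempt in range(retries):
        try:
            return make_request()
        except Exception:
            if attempt == retries - 1:
                raise
            # Exponential backoff with a little jitter to avoid thundering herds
            time.sleep(base_delay * (2 ** attempt) + random.random() * 0.1)
```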

4. Optimize Token Usage

5. Security


Rate Limits & Pricing

Current Limits

Pricing

Usage is tracked in your dashboard at https://app.yg3.ai/api-keys


Support


Changelog

v1.0.0 (2025-11-10)

v1.0.1 (2025-11-17)
