A unified reference for all Concentrated Language Models (CLMs) available through the YG3 API.
YG3 provides three CLM-powered models:
Elysia — Text reasoning
Taurus — Vision (images + text)
Vani — Image generation
All models share a consistent authentication system and an OpenAI-compatible interface.
Quick Start
Authentication
Models Overview
• Elysia (Text)
• Taurus (Vision)
• Vani (Image Generation)
Endpoints
Integration Guides
Advanced Usage
Rate Limits & Pricing
Support & Changelog
Sign up at https://app.yg3.ai/get-started
Go to API Keys: https://app.yg3.ai/api-keys
Click Create New Key
Copy your key (displayed once)
curl https://elysia-api.ngrok.io/api/public/v1/chat/completions \
  -H "Authorization: Bearer YOUR-API-KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "elysia", "messages": [{"role": "user", "content": "Hello!"}]}'
All API requests require an API key in the Authorization header:
Authorization: Bearer sk-your-api-key-here
Keep your API key secure! Never commit it to version control or expose it in client-side code.
# .env
ELYSIA_API_KEY=sk-your-api-key-here
ELYSIA_BASE_URL=https://elysia-api.ngrok.io/api/public/v1
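As a minimal standard-library sketch, these values can be read back at runtime and turned into the Authorization header used throughout this guide (the `API_KEY`/`API_BASE` names and the fallback strings are illustrative placeholders, not part of the API):

```python
import os

# Read credentials from the environment (e.g. exported from the .env above)
API_KEY = os.getenv("ELYSIA_API_KEY", "sk-your-api-key-here")
API_BASE = os.getenv("ELYSIA_BASE_URL", "https://elysia-api.ngrok.io/api/public/v1")

# Headers reused by every raw-HTTP request in this guide
headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}
```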
YG3 exposes three CLM-powered models.
Each model has its own input format and endpoint behavior.
Modality: Text / Chat
Endpoint: /chat/completions
Use cases:
Reasoning
Conversation
Ideation
Strategy
Follow-ups after Taurus analysis
Prompt generation for Vani
from openai import OpenAI
client = OpenAI(
api_key="YOUR-API-KEY",
base_url="https://elysia-api.ngrok.io/api/public/v1"
)
response = client.chat.completions.create(
model="elysia",
messages=[
{"role": "user", "content": "Help me with my marketing strategy"}
]
)
print(response.choices[0].message.content)
Accepts messages=[...]
Returns text only
Fully OpenAI-compatible
Modality: Vision (image + text)
Endpoint: /chat/completions
Input format:
A single messages item containing a content list with multiple parts:
{"type": "image_url", "image_url": {...}}
{"type": "text", "text": "your question"}
Use cases:
Visual understanding
Object & scene recognition
OCR-style tasks
Analyzing diagrams / screenshots
Feeding results into Elysia for deeper reasoning
import base64
import requests
from io import BytesIO
from PIL import Image
from IPython.display import display

API_BASE = "https://elysia-api.ngrok.io/api/public/v1"
headers = {"Authorization": "Bearer YOUR-API-KEY", "Content-Type": "application/json"}

def analyze_image_with_taurus(image_url, question):
    # Download the image and re-encode it as a base64 PNG data URL
    img = Image.open(BytesIO(requests.get(image_url).content))
    buffer = BytesIO()
    img.save(buffer, format="PNG")
    img_b64 = base64.b64encode(buffer.getvalue()).decode()
    payload = {
        "model": "taurus",
        "messages": [{
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{img_b64}"}},
                {"type": "text", "text": question}
            ]
        }]
    }
    r = requests.post(f"{API_BASE}/chat/completions", headers=headers, json=payload)
    display(img)
    print(r.json())
Uses the same endpoint as Elysia
Requires at least one image part
Returns text responses
Modality: Image Generation
Endpoint: /images/generations
Use cases:
Artwork
Visual concepts
UX mocks
Storyboards
Renderings based on Elysia prompts
# Assumes requests, PIL, base64, BytesIO, and display imports plus
# API_BASE and headers are already in scope
def create_image_with_vani(prompt, size="1024x1024"):
    payload = {"model": "vani", "prompt": prompt, "size": size, "response_format": "b64_json"}
    r = requests.post(f"{API_BASE}/images/generations", headers=headers, json=payload)
    # Decode the base64 payload returned by the API and render it
    img_bytes = base64.b64decode(r.json()["data"][0]["b64_json"])
    img = Image.open(BytesIO(img_bytes))
    display(img)
Not chat-based
Only accepts text prompts
Returns image bytes (base64)
YG3’s API supports multiple modalities across two primary endpoints:
Model | Endpoint | Modality | Description |
Elysia | /chat/completions | Text | Chat-style reasoning |
Taurus | /chat/completions | Vision + Text | Visual understanding |
Vani | /images/generations | Image Gen | Create images |
POST /v1/chat/completions
Send chat messages and receive a completion. Used by Elysia, and by Taurus when the message content includes image parts.
Request Body:
{
  "model": "elysia",
  "messages": [
    {"role": "system", "content": "Optional system prompt"},
    {"role": "user", "content": "Your message"}
  ],
  "temperature": 0.7,
  "max_tokens": 1000
}
Parameters:
Parameter | Type | Default | Description |
model | string | "elysia" | Model to use ("elysia", or "taurus" for vision requests) |
messages | array | required | Array of message objects |
temperature | float | 0.7 | Controls randomness (0.0-2.0) |
max_tokens | integer | 1000 | Maximum tokens in response |
Response:
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1699564800,
  "model": "elysia",
  "choices": [
    {
      "index": 0,
      "message": {"role": "assistant", "content": "Response text here..."},
      "finish_reason": "stop"
    }
  ],
  "usage": {"prompt_tokens": 20, "completion_tokens": 50, "total_tokens": 70}
}
GET /v1/models
Get list of available models.
Response:
{
  "object": "list",
  "data": [
    {"id": "elysia", "object": "model", "created": 1677610602, "owned_by": "yg3"},
    {"id": "taurus", "object": "model", "created": 1677610602, "owned_by": "yg3"},
    {"id": "vani", "object": "model", "created": 1677610602, "owned_by": "yg3"}
  ]
}
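If you prefer not to pull in an SDK, this endpoint can be called with the standard library alone. The sketch below assumes the response shape shown above; `list_models` is an illustrative helper, not part of the API:

```python
import json
import urllib.request

def list_models(api_key, base_url="https://elysia-api.ngrok.io/api/public/v1"):
    """Fetch /models and return the available model ids."""
    req = urllib.request.Request(
        f"{base_url}/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    # Each entry in "data" is a model object with an "id" field
    return [m["id"] for m in data["data"]]
```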
from openai import OpenAI

client = OpenAI(
    api_key="YOUR-API-KEY",
    base_url="https://elysia-api.ngrok.io/api/public/v1"
)

response = client.chat.completions.create(
    model="elysia",
    messages=[{"role": "user", "content": "Help me with my marketing strategy"}]
)
print(response.choices[0].message.content)
from langchain_openai import ChatOpenAI
from langchain.schema import HumanMessage, SystemMessage

# Initialize Elysia
clm = ChatOpenAI(
    model="elysia",
    openai_api_key="YOUR-API-KEY",
    openai_api_base="https://elysia-api.ngrok.io/api/public/v1",
    temperature=0.7
)

# Simple usage
messages = [
    SystemMessage(content="You are a business growth strategist."),
    HumanMessage(content="How do I scale my SaaS business?")
]
response = clm.invoke(messages)
print(response.content)
from langchain.agents import AgentExecutor, create_openai_tools_agent
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain.tools import Tool

# Initialize Elysia
clm = ChatOpenAI(
    model="elysia",
    openai_api_key="YOUR-API-KEY",
    openai_api_base="https://elysia-api.ngrok.io/api/public/v1"
)

# Define custom tools
def search_competitor_data(query: str) -> str:
    """Search for competitor information"""
    return f"Competitor data for: {query}"

tools = [
    Tool(
        name="competitor_search",
        func=search_competitor_data,
        description="Search for competitor data and market analysis"
    )
]

# Create agent prompt
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are Elysia, a marketing and business development expert with access to tools."),
    ("user", "{input}"),
    MessagesPlaceholder(variable_name="agent_scratchpad"),
])

# Create and run agent
agent = create_openai_tools_agent(clm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
result = agent_executor.invoke({"input": "Analyze my competitors in the SaaS email marketing space"})
print(result["output"])
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain_community.vectorstores import Chroma
from langchain.chains import RetrievalQA
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.document_loaders import TextLoader

# Initialize Elysia
clm = ChatOpenAI(
    model="elysia",
    openai_api_key="YOUR-API-KEY",
    openai_api_base="https://elysia-api.ngrok.io/api/public/v1"
)

# Load and process documents
loader = TextLoader("your_business_docs.txt")
documents = loader.load()
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
splits = text_splitter.split_documents(documents)

# Create vector store (you'll need OpenAI embeddings or another provider)
embeddings = OpenAIEmbeddings()
vectorstore = Chroma.from_documents(documents=splits, embedding=embeddings)

# Create RAG chain (note: LangChain's parameter is named llm)
qa_chain = RetrievalQA.from_chain_type(
    llm=clm,
    chain_type="stuff",
    retriever=vectorstore.as_retriever(),
    return_source_documents=True
)

# Query with context
result = qa_chain.invoke({"query": "What's our pricing strategy?"})
print(result["result"])
from llama_index.llms.openai_like import OpenAILike
from llama_index.core import Settings, VectorStoreIndex, SimpleDirectoryReader

# Initialize Elysia
clm = OpenAILike(
    model="elysia",
    api_key="YOUR-API-KEY",
    api_base="https://elysia-api.ngrok.io/api/public/v1",
    is_chat_model=True
)

# Set as default (LlamaIndex's setting is named llm)
Settings.llm = clm

# Load documents and create index
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)

# Query
query_engine = index.as_query_engine()
response = query_engine.query("How should I position my product?")
print(response)
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: process.env.ELYSIA_API_KEY,
  baseURL: 'https://elysia-api.ngrok.io/api/public/v1'
});

async function chat() {
  const response = await client.chat.completions.create({
    model: 'elysia',
    messages: [{ role: 'user', content: 'Help me with my marketing strategy' }]
  });
  console.log(response.choices[0].message.content);
}

chat();
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage } from "@langchain/core/messages";

const clm = new ChatOpenAI({
  modelName: "elysia",
  openAIApiKey: process.env.ELYSIA_API_KEY,
  configuration: {
    baseURL: "https://elysia-api.ngrok.io/api/public/v1"
  }
});

const response = await clm.invoke([
  new HumanMessage("What's the best way to grow my email list?")
]);
console.log(response.content);
curl https://elysia-api.ngrok.io/api/public/v1/chat/completions \
  -H "Authorization: Bearer $ELYSIA_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "elysia", "messages": [{"role": "user", "content": "Give me 3 content ideas for LinkedIn"}], "temperature": 0.7}'
Build sophisticated multi-agent workflows using Elysia:
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

# Marketing Agent
marketing_clm = ChatOpenAI(
    model="elysia",
    openai_api_key="YOUR-API-KEY",
    openai_api_base="https://elysia-api.ngrok.io/api/public/v1"
)

# Strategy Agent
strategy_clm = ChatOpenAI(
    model="elysia",
    openai_api_key="YOUR-API-KEY",
    openai_api_base="https://elysia-api.ngrok.io/api/public/v1"
)

# Create specialized agents with different system prompts
marketing_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a marketing expert focused on content strategy and campaigns."),
    ("user", "{input}")
])
strategy_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a business strategist focused on growth and scaling."),
    ("user", "{input}")
])

# Use agents together in a workflow
def multi_agent_analysis(company_info: str):
    # Get marketing perspective
    marketing_response = marketing_clm.invoke(
        marketing_prompt.format_messages(input=f"Analyze marketing opportunities for: {company_info}")
    )
    # Get strategy perspective
    strategy_response = strategy_clm.invoke(
        strategy_prompt.format_messages(
            input=f"Based on this marketing analysis: {marketing_response.content}, what's the growth strategy?"
        )
    )
    return {"marketing": marketing_response.content, "strategy": strategy_response.content}

result = multi_agent_analysis("SaaS company selling project management tools")
print(result)
from langchain.tools import Tool
from langchain.agents import initialize_agent, AgentType
from langchain_openai import ChatOpenAI

def get_website_traffic(url: str) -> str:
    """Fetch website traffic data"""
    # Your implementation
    return f"Traffic data for {url}: 10k monthly visitors"

def analyze_competitors(industry: str) -> str:
    """Analyze competitor landscape"""
    # Your implementation
    return f"Top competitors in {industry}: Company A, B, C"

tools = [
    Tool(
        name="website_traffic",
        func=get_website_traffic,
        description="Get website traffic statistics"
    ),
    Tool(
        name="competitor_analysis",
        func=analyze_competitors,
        description="Analyze competitive landscape"
    )
]

clm = ChatOpenAI(
    model="elysia",
    openai_api_key="YOUR-API-KEY",
    openai_api_base="https://elysia-api.ngrok.io/api/public/v1"
)

agent = initialize_agent(
    tools,
    clm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True
)
result = agent.run("Analyze my competitor's website traffic and suggest improvements")
from langchain.memory import ConversationBufferMemory
from langchain.chains import ConversationChain
from langchain_openai import ChatOpenAI

clm = ChatOpenAI(
    model="elysia",
    openai_api_key="YOUR-API-KEY",
    openai_api_base="https://elysia-api.ngrok.io/api/public/v1"
)

memory = ConversationBufferMemory()
conversation = ConversationChain(llm=clm, memory=memory, verbose=True)

# Maintains conversation context
response1 = conversation.predict(input="I'm launching a new product")
response2 = conversation.predict(input="What should my pricing be?")
# Elysia remembers the context about your new product
from langchain.output_parsers import PydanticOutputParser
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_openai import ChatOpenAI

class MarketingPlan(BaseModel):
    target_audience: str = Field(description="Target audience description")
    channels: list[str] = Field(description="Marketing channels to use")
    budget: str = Field(description="Suggested budget range")
    timeline: str = Field(description="Implementation timeline")

parser = PydanticOutputParser(pydantic_object=MarketingPlan)

clm = ChatOpenAI(
    model="elysia",
    openai_api_key="YOUR-API-KEY",
    openai_api_base="https://elysia-api.ngrok.io/api/public/v1"
)

prompt = f"""Create a marketing plan for a SaaS product.
{parser.get_format_instructions()}
Provide a comprehensive plan."""

response = clm.invoke(prompt)
plan = parser.parse(response.content)
print(f"Target Audience: {plan.target_audience}")
print(f"Channels: {plan.channels}")
from openai import OpenAI

client = OpenAI(
    api_key="YOUR-API-KEY",
    base_url="https://elysia-api.ngrok.io/api/public/v1"
)

stream = client.chat.completions.create(
    model="elysia",
    messages=[{"role": "user", "content": "Write a marketing plan"}],
    stream=True
)

for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
For multi-turn conversations, always include previous messages:
messages = [
    {"role": "user", "content": "I need help with marketing"},
    {"role": "assistant", "content": "I'd be happy to help..."},
    {"role": "user", "content": "Specifically social media"}
]
Override Elysia's default personality when needed:
messages = [
    {"role": "system", "content": "You are a technical SEO expert."},
    {"role": "user", "content": "Analyze my website's SEO"}
]
try:
    response = client.chat.completions.create(...)
except Exception as e:
    print(f"Error: {e}")
    # Implement retry logic or fallback
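One way to flesh out the retry comment above is a small exponential-backoff wrapper; `with_retries` is an illustrative helper, not part of the SDK:

```python
import time

def with_retries(call, max_attempts=3, base_delay=1.0):
    """Retry a zero-argument callable with exponential backoff; re-raise after the last attempt."""
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception as e:
            if attempt == max_attempts - 1:
                raise
            delay = base_delay * (2 ** attempt)  # 1s, 2s, 4s, ...
            print(f"Attempt {attempt + 1} failed ({e}); retrying in {delay:.0f}s")
            time.sleep(delay)
```

Usage: `response = with_retries(lambda: client.chat.completions.create(...))`.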
Keep system prompts concise
Trim old conversation history
Use max_tokens to control response length
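The history-trimming tip above can be sketched as a small helper that keeps the system prompt plus only the most recent turns before each request; `trim_history` is illustrative, not part of the API:

```python
def trim_history(messages, max_messages=12):
    """Keep the system prompt (if any) plus the most recent turns."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    return system + rest[-max_messages:]
```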
Never expose API keys in client-side code
Use environment variables
Rotate keys periodically
Implement rate limiting on your end
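The last point, client-side rate limiting, might look like this sliding-window sketch; `RateLimiter` is an illustrative helper sized for a per-hour quota, not part of the API:

```python
import time
from collections import deque

class RateLimiter:
    """Client-side limiter: allow at most max_calls per period seconds."""

    def __init__(self, max_calls=100, period=3600.0):
        self.max_calls = max_calls
        self.period = period
        self.calls = deque()  # timestamps of recent calls

    def wait(self):
        """Block until a call is allowed, then record it."""
        now = time.monotonic()
        # Drop timestamps that have aged out of the window
        while self.calls and now - self.calls[0] > self.period:
            self.calls.popleft()
        if len(self.calls) >= self.max_calls:
            sleep_for = self.period - (now - self.calls[0])
            if sleep_for > 0:
                time.sleep(sleep_for)
            self.calls.popleft()
        self.calls.append(time.monotonic())
```

Call `limiter.wait()` immediately before each API request.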
Requests per hour: 100 (adjustable per key)
Max tokens per request: 8192
Free Tier: 10,000 tokens/month
Pro: $2 per 1M tokens
Enterprise: Custom pricing
Usage tracked in your dashboard at https://app.yg3.ai/api-keys
Email: team@yg3.ai
Initial API release
OpenAI-compatible chat completions
LangChain & LlamaIndex support
API key management dashboard
Updated with new models, Taurus and Vani.