# Quick Start

This guide walks you through submitting your first inference job with the MicroDC client.
## Set Up Authentication
Environment variable:
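A minimal sketch of reading the key from the environment. The variable name `MICRODC_API_KEY` is an assumption made here for illustration; check the SDK's configuration reference for the exact name it reads.

```python
import os

# "MICRODC_API_KEY" is an assumed variable name; set it in your shell,
# e.g.  export MICRODC_API_KEY=mDC_...   (the fallback below is a placeholder)
api_key = os.environ.get("MICRODC_API_KEY", "mDC_example")
```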
Or pass it directly when constructing the client, as every example below does with `Client(api_key="mDC_...")`.
## Your First Job

```python
from microdc import Client, LLMComplete

# Initialize the client
client = Client(api_key="mDC_...")

# Create a text generation job
job = LLMComplete(model="llama3.3")
job.set_prompt("Why is the sky blue?")

# Submit the job
job_id = client.send_job(job)
print(f"Job submitted: {job_id}")

# Wait for completion
client.wait_for_all()

# Get the results
result = client.get_job_details(job_id)
print(result.result)

# Acknowledge the job
client.acknowledge_job(job_id)
```
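The submit / wait / fetch / acknowledge steps above are easy to wrap in a small helper. A sketch using only the client calls shown in this guide — `run_and_fetch` is a name invented here, not part of the SDK:

```python
def run_and_fetch(client, job):
    """Submit a job, block until completion, return its result, and acknowledge it.

    Works with any client exposing send_job / wait_for_all /
    get_job_details / acknowledge_job, as the MicroDC client does.
    """
    job_id = client.send_job(job)
    client.wait_for_all()           # blocks until outstanding jobs finish
    details = client.get_job_details(job_id)
    client.acknowledge_job(job_id)  # tell the server the result was consumed
    return details.result
```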
## Using a Context Manager

The client supports context managers for automatic cleanup:

```python
from microdc import Client, LLMComplete

with Client(api_key="mDC_...") as client:
    job = LLMComplete(model="llama3.3")
    job.set_prompt("Hello!")

    job_id = client.send_job(job)
    client.wait_for_all()

    result = client.get_job_details(job_id)
    print(result.result)

# Client automatically closes
```
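"Automatically closes" comes from Python's context-manager protocol: cleanup runs when the `with` block exits, even on an exception. A stand-in class illustrating the pattern — the real client's internals may differ:

```python
class DemoClient:
    """Stand-in showing the context-manager protocol the client implements."""

    def __init__(self):
        self.closed = False

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        # Runs on normal exit *and* when an exception escapes the with-block.
        self.closed = True
        return False  # do not suppress exceptions

with DemoClient() as demo:
    pass  # demo.closed flips to True as soon as this block exits
```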
## Chat Conversation

For multi-turn conversations, use `LLMChat`:

```python
from microdc import Client, LLMChat

client = Client(api_key="mDC_...")

chat = LLMChat(model="gpt-4")
chat.set_system("You are a helpful assistant.")
chat.add_user_message("What is Python?")

job_id = client.send_job(chat)
client.wait_for_all()

result = client.get_job_details(job_id)
print(result.result)
```
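Conceptually, a chat is an ordered list of role-tagged messages that `set_system` and `add_user_message` build up. A plain-Python sketch of that structure — the dict layout here is an assumption for illustration, not the SDK's actual internal format:

```python
history = []

def set_system(text):
    # The system message conventionally sits first in the conversation.
    history.insert(0, {"role": "system", "content": text})

def add_user_message(text):
    # User turns are appended in order.
    history.append({"role": "user", "content": text})

set_system("You are a helpful assistant.")
add_user_message("What is Python?")
# history: system message first, then the user turn
```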
## Generate Embeddings

```python
from microdc import Client, LLMEmbed

client = Client(api_key="mDC_...")

job = LLMEmbed(model="text-embedding-ada-002")
job.add_texts(["Hello world", "Goodbye world"])

job_id = client.send_job(job)
client.wait_for_all()

result = client.get_job_details(job_id)
embeddings = result.result['embeddings']
print(f"Generated {len(embeddings)} embeddings")
```
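A common next step is comparing the returned vectors, for example with cosine similarity. A stdlib-only sketch, assuming `result.result['embeddings']` is a list of equal-length float vectors as the snippet above suggests:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# e.g. score = cosine_similarity(embeddings[0], embeddings[1])
print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # → 1.0
```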
## Next Steps
- Configuration -- Customize client behavior
- Guides -- Detailed usage guides for each job type
- Callbacks -- Async processing with callbacks