Callbacks & Async¶
The MicroDC client supports callback-based asynchronous processing. When a job completes, your callback function is automatically invoked by the background polling thread.
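To make that lifecycle concrete, here is a toy, library-agnostic model of the polling-thread-plus-callback mechanism using only the standard `threading` module. Every name in it (`PollingSketch`, `complete`, and so on) is invented for illustration; this is not MicroDC's actual implementation:

```python
import threading
import time

# Toy model of the mechanism described above (not MicroDC internals):
# a daemon thread polls pending jobs and fires the registered callback
# for each job that has finished.
class PollingSketch:
    def __init__(self, poll_interval=0.01):
        self.pending = {}              # job_id -> "done" flag (simulated)
        self.callback = None
        self.poll_interval = poll_interval
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._poll_loop, daemon=True)
        self._thread.start()

    def set_callback(self, fn):
        self.callback = fn

    def send_job(self, job_id):
        self.pending[job_id] = False   # submitted, not yet complete

    def complete(self, job_id):
        self.pending[job_id] = True    # stand-in for the server finishing a job

    def _poll_loop(self):
        while not self._stop.is_set():
            for job_id, done in list(self.pending.items()):
                if done and self.callback:
                    self.callback(self, job_id)  # runs on the polling thread
                    del self.pending[job_id]
            time.sleep(self.poll_interval)

    def wait_for_all(self):
        while self.pending:
            time.sleep(self.poll_interval)
        self._stop.set()

seen = []
client = PollingSketch()
client.set_callback(lambda c, job_id: seen.append(job_id))
client.send_job("job-1")
client.complete("job-1")   # simulate the server finishing the job
client.wait_for_all()
print(seen)                # the callback ran on the background thread
```

The key takeaway is that your callback executes on the polling thread, not the thread that called `send_job` — which is why the later sections on waiting and shared state matter.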
Basic Callback¶
```python
from microdc import Client, LLMComplete

def handle_completion(client: Client, job_id: str):
    details = client.get_job_details(job_id)
    if details.is_successful():
        print(f"Success: {details.result}")
        client.acknowledge_job(job_id)
    else:
        print(f"Failed: {details.error_message}")

client = Client(api_key="mDC_...")
client.set_callback(handle_completion)

job = LLMComplete(model="llama3.3")
job.set_prompt("Hello!")
job_id = client.send_job(job)

client.wait_for_all()
```
Callback Signature¶
The callback function receives two arguments:
| Parameter | Type | Description |
|---|---|---|
| `client` | `Client` | The client instance (use it to fetch results) |
| `job_id` | `str` | ID of the completed job |
Routing with Metadata¶
Use metadata to route jobs to different handlers:
```python
from microdc import Client, LLMComplete

def callback(client: Client, job_id: str):
    details = client.get_job_details(job_id)
    job_type = details.metadata.get("type")
    if job_type == "summarization":
        handle_summarization(details)
    elif job_type == "translation":
        handle_translation(details)
    if details.is_successful():
        client.acknowledge_job(job_id)

client = Client(api_key="mDC_...")
client.set_callback(callback)

# Submit different job types
job1 = LLMComplete(model="llama3.3")
job1.set_prompt("Summarize: ...")
job1.metadata = {"type": "summarization", "doc_id": "123"}
client.send_job(job1)

job2 = LLMComplete(model="llama3.3")
job2.set_prompt("Translate: Hello")
job2.metadata = {"type": "translation", "target_lang": "es"}
client.send_job(job2)

client.wait_for_all()
```
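As the number of job types grows, the if/elif chain scales poorly; a dispatch table keyed on the metadata `type` is a common refinement. A library-agnostic sketch — the handler bodies and the plain-dict `details` stand-in are invented for illustration, not MicroDC API:

```python
# Hypothetical handlers; `details` here is a plain dict standing in for
# the job-details object, mirroring the metadata fields used above.
def handle_summarization(details):
    return f"summary of doc {details['metadata']['doc_id']}"

def handle_translation(details):
    return f"translated to {details['metadata']['target_lang']}"

# Dispatch table mapping a metadata "type" to its handler.
HANDLERS = {
    "summarization": handle_summarization,
    "translation": handle_translation,
}

def route(details):
    handler = HANDLERS.get(details["metadata"].get("type"))
    if handler is None:
        raise ValueError(f"no handler for job metadata: {details['metadata']}")
    return handler(details)

print(route({"metadata": {"type": "summarization", "doc_id": "123"}}))
```

Adding a new job type then means registering one entry in `HANDLERS` rather than editing the callback body.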
Waiting for Jobs¶
Wait for All Jobs¶
```python
# Block until all pending jobs complete
client.wait_for_all()

# With a timeout (raises TimeoutError if exceeded)
client.wait_for_all(timeout=300)
```
Wait for a Specific Job¶
```python
# Block until one specific job completes
details = client.wait_for_job(job_id)

# With a timeout
details = client.wait_for_job(job_id, timeout=60)
print(details.result)
```
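Internally, a blocking wait with a timeout typically reduces to an event wait. A generic sketch of that contract (not MicroDC internals; `wait_for` is an invented helper) shows why exceeding the timeout surfaces as a `TimeoutError`:

```python
import threading

def wait_for(event: threading.Event, timeout=None):
    # Block until the event fires; raise TimeoutError if it doesn't in time,
    # mirroring the wait_for_all / wait_for_job contract described above.
    if not event.wait(timeout):
        raise TimeoutError(f"job did not complete within {timeout}s")

done = threading.Event()
threading.Timer(0.05, done.set).start()  # simulate a job finishing after 50 ms
wait_for(done, timeout=1.0)              # returns once the job "completes"

try:
    wait_for(threading.Event(), timeout=0.01)  # never set -> times out
except TimeoutError as exc:
    print(exc)
```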
Polling Configuration¶
The background polling thread checks job status at a configurable interval:
```python
# Default: polls every 2 seconds
client = Client(api_key="mDC_...")

# Disable auto-polling (call wait methods manually)
client = Client(api_key="mDC_...", auto_start_polling=False)
```
Pattern: Fire and Forget¶
Submit jobs and let callbacks handle everything:
```python
from microdc import Client, LLMComplete

results = []

def collect_result(client: Client, job_id: str):
    details = client.get_job_details(job_id)
    if details.is_successful():
        results.append(details.result)
        client.acknowledge_job(job_id)

client = Client(api_key="mDC_...")
client.set_callback(collect_result)

# Submit many jobs
for question in questions:
    job = LLMComplete(model="llama3.3")
    job.set_prompt(question)
    client.send_job(job)

# Wait for all to finish
client.wait_for_all(timeout=300)
print(f"Collected {len(results)} results")
```
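One caveat: the callback runs on the background polling thread, so `results` is mutated concurrently with your main thread. In CPython, `list.append` is atomic, but if the callback does anything more elaborate with shared state, a `queue.Queue` makes the hand-off explicitly thread-safe. A library-agnostic sketch — the worker threads below stand in for the polling thread, and nothing here is MicroDC API:

```python
import queue
import threading

results_q = queue.Queue()  # thread-safe collection for callback output

def collect_result(result):
    # In a real callback this would be details.result
    results_q.put(result)

# Simulate the polling thread delivering three completed jobs concurrently
workers = [
    threading.Thread(target=collect_result, args=(f"answer-{i}",))
    for i in range(3)
]
for t in workers:
    t.start()
for t in workers:
    t.join()

# Drain on the main thread once all jobs have finished
results = []
while not results_q.empty():
    results.append(results_q.get())
print(f"Collected {len(results)} results")
```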