# Text Generation (LLMComplete)

`LLMComplete` is for single-turn text generation tasks: you provide a prompt and receive a completion.
## Basic Usage

```python
from microdc import Client, LLMComplete

client = Client(api_key="mDC_...")

job = LLMComplete(model="llama3.3")
job.set_prompt("Write a haiku about Python")

job_id = client.send_job(job)
client.wait_for_all()

result = client.get_job_details(job_id)
print(result.result)

# Acknowledge after processing
client.acknowledge_job(job_id)
```
## Configuration Options

```python
job = LLMComplete(
    model="llama3.3",       # Required: model name
    temperature=0.7,        # Sampling temperature (0.0-2.0)
    max_tokens=500,         # Maximum tokens to generate
    top_p=1.0,              # Nucleus sampling
    top_k=None,             # Top-k sampling
    frequency_penalty=0.0,  # Frequency penalty
    presence_penalty=0.0,   # Presence penalty
    stop=None,              # Stop sequences
    stream=False,           # Enable streaming
)
```
| Parameter | Type | Default | Description |
|---|---|---|---|
| `model` | `str` | (required) | Model name |
| `temperature` | `float` | `0.7` | Sampling temperature (0.0-2.0) |
| `max_tokens` | `int` | `None` | Maximum tokens to generate |
| `top_p` | `float` | `1.0` | Nucleus sampling parameter |
| `top_k` | `int` | `None` | Top-k sampling parameter |
| `frequency_penalty` | `float` | `0.0` | Frequency penalty |
| `presence_penalty` | `float` | `0.0` | Presence penalty |
| `stop` | `List[str]` | `None` | Stop sequences |
| `stream` | `bool` | `False` | Enable streaming |
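These parameters compose: for a short, near-deterministic completion you can combine a low temperature, a token cap, and stop sequences. A minimal sketch using only the options documented above (the model name and prompt are illustrative):

```python
from microdc import Client, LLMComplete

client = Client(api_key="mDC_...")

# Low temperature for near-deterministic output, a hard length cap,
# and a stop sequence that ends generation at the first blank line.
job = LLMComplete(
    model="llama3.3",
    temperature=0.1,
    max_tokens=200,
    stop=["\n\n"],
)
job.set_prompt("List three uses of Python, one per line.")

job_id = client.send_job(job)
client.wait_for_all()
print(client.get_job_details(job_id).result)
client.acknowledge_job(job_id)
```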
## Multimodal Support

`LLMComplete` supports multimodal input and output:

```python
job = LLMComplete(model="llama3.3")
job.input_modalities = ["text", "image"]  # Accept text + image input
job.output_modalities = ["text"]          # Generate text output
job.set_prompt("Describe the uploaded image")
```
## With Metadata

Track jobs using custom metadata:

```python
job = LLMComplete(model="llama3.3")
job.set_prompt("Summarize: ...")
job.metadata = {"type": "summarization", "doc_id": "123"}
job.priority = "high"

job_id = client.send_job(job)
```
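Because metadata travels with the job, it can be used to route results when several jobs are in flight at once. A sketch, assuming the object returned by `get_job_details` exposes the metadata set at submission (that attribute name is an assumption):

```python
from microdc import Client, LLMComplete

client = Client(api_key="mDC_...")

docs = {"123": "First document text...", "456": "Second document text..."}

# Submit one summarization job per document, tagging each with its doc_id.
job_ids = []
for doc_id, text in docs.items():
    job = LLMComplete(model="llama3.3")
    job.set_prompt(f"Summarize: {text}")
    job.metadata = {"type": "summarization", "doc_id": doc_id}
    job_ids.append(client.send_job(job))

client.wait_for_all()

for job_id in job_ids:
    details = client.get_job_details(job_id)
    # Assumes the details object carries the metadata set at submission.
    print(details.metadata["doc_id"], details.result)
    client.acknowledge_job(job_id)
```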
## Base Class Attributes

All job types inherit these from `BaseCall`:

| Attribute | Type | Default | Description |
|---|---|---|---|
| `metadata` | `Dict[str, Any]` | `{}` | Custom metadata |
| `priority` | `str` | `"standard"` | Priority: `"standard"`, `"high"`, `"low"` |
| `timeout` | `int` | `None` | Max execution time (seconds) |
| `callback_url` | `str` | `None` | Webhook URL for notifications |
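Since these attributes live on the base class, they can be set on any job type. A sketch combining them on an `LLMComplete` job (the webhook URL is illustrative):

```python
from microdc import Client, LLMComplete

client = Client(api_key="mDC_...")

job = LLMComplete(model="llama3.3")
job.set_prompt("Write a haiku about Python")

job.priority = "low"   # Run after "standard" and "high" priority jobs
job.timeout = 120      # Give up if execution exceeds 120 seconds
job.callback_url = "https://example.com/hooks/microdc"  # Notify on completion

job_id = client.send_job(job)
```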