Sign up and get your API key
Create a free account at pioneer.ai. The free tier includes $75 of usage with no credit card required.

Once you’re signed in, go to Settings → API Keys and generate a new key. Copy it somewhere safe — you won’t be able to view it again after closing the dialog.

Set your key as an environment variable so the examples below work as-is:
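For example, in bash or zsh (the variable name `PIONEER_API_KEY` is an assumption; use whatever name the examples in your environment expect):

```shell
# Replace the placeholder with the key you generated in Settings → API Keys.
# PIONEER_API_KEY is an assumed variable name.
export PIONEER_API_KEY="your-api-key"
```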
List available base models
Before running inference, you can browse the models Pioneer has available. Use GET /base-models to see the full catalog. Pass ?supports_inference=true to filter to models you can call immediately, or ?task_type=decoder to see LLMs only.

The response lists model IDs you can pass directly to /inference. For example, fastino/gliner2-base-v1 is the GLiNER base model for NER tasks.
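A sketch of that request using only the Python standard library; the base URL and Bearer-token auth header are assumptions, so check the API reference for the exact values:

```python
import os
import urllib.request

# The base URL and Bearer-token header are assumptions based on this guide;
# check the API reference for the exact values.
BASE_URL = "https://api.pioneer.ai"

def build_request(path: str, query: str = "") -> urllib.request.Request:
    """Build an authenticated GET request for the Pioneer API."""
    return urllib.request.Request(
        f"{BASE_URL}{path}{query}",
        headers={"Authorization": f"Bearer {os.environ.get('PIONEER_API_KEY', '')}"},
    )

# Only models you can call immediately:
req = build_request("/base-models", "?supports_inference=true")
# import json
# models = json.loads(urllib.request.urlopen(req).read())  # uncomment to call the API
print(req.full_url)
```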
Run your first inference call

Call POST /inference with a model ID, some text, and a schema that defines what you want to extract. The example below uses the GLiNER base model to extract named entities from a sentence.

The schema field tells the model what to look for. For encoder models (GLiNER), you can supply:

- entities — a list of entity type strings for NER
- classifications — a list of {task, labels} objects for text classification
- structures — a dict of structure definitions for JSON extraction
- relations — a list of relation definitions
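A sketch of such a request with the standard library; model_id, schema, and entities follow this guide, but the "text" field name, endpoint URL, and auth header are assumptions to verify against the API reference, and the sentence and labels are illustrative:

```python
import json
import os
import urllib.request

# The "text" field name, URL, and auth header are assumptions;
# the sentence and entity labels are illustrative.
payload = {
    "model_id": "fastino/gliner2-base-v1",
    "text": "Marie Curie won the Nobel Prize in Physics in 1903.",
    "schema": {
        "entities": ["person", "award", "date"],  # entity types to extract
    },
}

req = urllib.request.Request(
    "https://api.pioneer.ai/inference",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": f"Bearer {os.environ.get('PIONEER_API_KEY', '')}",
        "Content-Type": "application/json",
    },
    method="POST",
)
# print(json.loads(urllib.request.urlopen(req).read()))  # uncomment to send
```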
For decoder models (LLMs), pass "task": "generate" instead of a schema.

The model_id can be a base model ID like fastino/gliner2-base-v1, or the ID of a completed training job. Once you’ve fine-tuned a model, replace the base model ID with your job ID to serve predictions from your custom model.
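As a sketch, the decoder-model request body differs only in dropping the schema for a task field (the model ID and text below are placeholders):

```python
# A decoder-model (LLM) request: "task": "generate" replaces the schema.
# The model ID below is a placeholder for one of your completed training jobs.
generate_payload = {
    "model_id": "your-training-job-id",
    "text": "Summarize: Pioneer lets you fine-tune task-specific models.",
    "task": "generate",
}
```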
Use OpenAI or Anthropic-compatible endpoints (optional)

If you’re already using the OpenAI or Anthropic SDK, Pioneer provides drop-in compatible endpoints. Point your SDK at https://api.pioneer.ai/v1 and use your Pioneer API key — no other changes needed.

Pass Pioneer-specific fields like schema via extra_body when using the OpenAI Python SDK.
Start a training job (optional)

When you’re ready to fine-tune on your own data, start a training job. You’ll need a dataset already uploaded or created — see Datasets for how to create one.

The response includes a job id. Poll GET /felix/training-jobs/{id} to check status. Once the job reaches complete, use the job ID as your model_id in /inference calls.

Job status values: requested → running → complete (or failed / stopped).
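A sketch of that flow; only GET /felix/training-jobs/{id} and the status values come from this guide, while the POST path, request fields, and base URL are assumptions to verify against the API reference:

```python
import json
import os
import time
import urllib.request

API = "https://api.pioneer.ai"  # assumed base URL
HEADERS = {
    "Authorization": f"Bearer {os.environ.get('PIONEER_API_KEY', '')}",
    "Content-Type": "application/json",
}

# The POST path and body fields below are assumptions; check the API
# reference for the exact fields a training job expects.
start_req = urllib.request.Request(
    f"{API}/felix/training-jobs",
    data=json.dumps({"base_model": "fastino/gliner2-base-v1",
                     "dataset_id": "your-dataset-id"}).encode(),
    headers=HEADERS,
    method="POST",
)
# job = json.loads(urllib.request.urlopen(start_req).read())  # uncomment to start

def poll(job_id: str, interval: int = 30) -> str:
    """Poll GET /felix/training-jobs/{id} until the job reaches a final state."""
    while True:
        req = urllib.request.Request(f"{API}/felix/training-jobs/{job_id}",
                                     headers=HEADERS)
        status = json.loads(urllib.request.urlopen(req).read())["status"]
        if status in ("complete", "failed", "stopped"):
            return status
        time.sleep(interval)  # still "requested" or "running"
```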
Next steps

Fine-tune an NER model
End-to-end walkthrough: dataset upload, training, evaluation, and inference.
Fine-tune an LLM
Fine-tune Qwen, Llama, or DeepSeek on your domain data with LoRA.
Synthetic data
Generate labeled training data without manual annotation.
API reference
Full reference for every endpoint with request and response schemas.

