The model_id field accepts either a base model ID (like fastino/gliner2-base-v1) or the job ID returned from a completed training job (like job_abc123). Pioneer routes the request to the right deployment automatically.
Pioneer supports three request formats: its own native format, an OpenAI-compatible format, and an Anthropic-compatible format. All three reach the same underlying models.
Pioneer native format
Use POST /inference with the Pioneer schema format. This is the most expressive option and gives you full control over extraction tasks.
Schema structure
The schema field is a dictionary with optional keys. Include only the keys that apply to your task.
| Key | Type | Description |
|---|---|---|
| entities | string[] | Entity type labels for named entity recognition (NER). |
| classifications | object[] | Classification tasks, each with a task name and a labels list. |
| structures | object | Named structure definitions for JSON extraction. |
| relations | object[] | Relation definitions linking extracted entities. |
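To make the schema shape concrete, here is a sketch of a native /inference request body. The schema keys come from the table above; the text field and the exact top-level layout are assumptions, not confirmed by this page.

```python
import json

# Sketch of a Pioneer native /inference request body.
# Assumption: the input document is passed in a "text" field.
payload = {
    "model_id": "fastino/gliner2-base-v1",
    "text": "Ada Lovelace wrote the first program in London.",
    "schema": {
        # NER: entity type labels to extract
        "entities": ["person", "location"],
        # One classification task with its candidate labels
        "classifications": [
            {"task": "topic", "labels": ["history", "sports", "finance"]}
        ],
    },
}

print(json.dumps(payload, indent=2))
```

Only the schema keys relevant to the task are included; structures and relations would be added the same way.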
Decoder models
For decoder models (LLMs), replace schema with "task": "generate":
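A minimal sketch of such a request, with the same caveat that the text field is an assumption:

```python
import json

# Decoder-model request: the "schema" object is replaced by
# "task": "generate". The "text" field is an assumption.
payload = {
    "model_id": "job_abc123",  # e.g. a fine-tuned decoder model
    "text": "Summarize: Pioneer routes requests automatically.",
    "task": "generate",        # instead of a "schema" object
}

print(json.dumps(payload, indent=2))
```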
OpenAI-compatible format
Pioneer exposes an OpenAI-compatible endpoint at https://api.pioneer.ai/v1. Point any existing OpenAI SDK or integration at this base URL and use your Pioneer API key — no other changes required.
| Method | Endpoint | Description |
|---|---|---|
| POST | /v1/chat/completions | Chat completions |
| POST | /v1/completions | Text completions |
| POST | /v1/responses | Responses API |
| GET | /v1/models | List available models |
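As a sketch, the following builds (but does not send) a chat-completions request against the compatible endpoint using only the standard library; the model ID and API key are placeholders:

```python
import json
import urllib.request

# Build an OpenAI-style chat-completions request against Pioneer.
# Sending it requires a valid Pioneer API key.
BASE_URL = "https://api.pioneer.ai/v1"
API_KEY = "pk-..."  # placeholder for your Pioneer API key

body = {
    "model": "job_abc123",  # base model ID or training job ID
    "messages": [{"role": "user", "content": "Hello, Pioneer!"}],
}
req = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(body).encode(),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)
print(req.full_url)
```

With the official OpenAI SDK you would instead pass base_url and api_key when constructing the client; no request-building code is needed.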
Anthropic-compatible format
Pioneer also exposes an Anthropic-compatible endpoint. Set your SDK’s base_url to https://api.pioneer.ai/v1 and use your Pioneer API key in place of an Anthropic key.
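A sketch of an Anthropic-style Messages request against that base URL; the endpoint path and header names follow the Anthropic API convention, and Pioneer's exact requirements are assumptions here:

```python
import json
import urllib.request

# Build (but don't send) an Anthropic-style Messages request.
BASE_URL = "https://api.pioneer.ai/v1"

body = {
    "model": "job_abc123",  # base model ID or training job ID
    "max_tokens": 256,
    "messages": [{"role": "user", "content": "Hello, Pioneer!"}],
}
req = urllib.request.Request(
    f"{BASE_URL}/messages",
    data=json.dumps(body).encode(),
    headers={
        # Pioneer API key in place of an Anthropic key
        "x-api-key": "pk-...",
        "anthropic-version": "2023-06-01",
        "content-type": "application/json",
    },
    method="POST",
)
print(req.full_url)
```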
Inference history
Pioneer records every inference call. You can retrieve past results and submit corrections to improve future training data. GET /inferences supports the following query parameters: limit, offset, model_id, task, project_id, and training_job_id.
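For example, a history query filtered to one model might be built like this; the host is assumed to be the same api.pioneer.ai used elsewhere on this page:

```python
import urllib.parse

# Build a GET /inferences URL using the documented query parameters.
params = {
    "limit": 20,
    "offset": 0,
    "model_id": "fastino/gliner2-base-v1",
}
url = "https://api.pioneer.ai/inferences?" + urllib.parse.urlencode(params)
print(url)
```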
