OpenAI
Use the Responses API pack generated from the canonical tooling endpoint.
- make openai-api-boarding-pack
- openai.responses_api.tools.json
- run_openai_response.sh
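As a sketch of how the exported tools file plugs into a Responses API call: the request body below follows the Responses API's flat function-tool shape, with the tools inlined instead of loaded from openai.responses_api.tools.json. The `lookup_order` tool and the model name are illustrative assumptions, not part of the pack.

```python
import json

# Illustrative tool definition in the flat shape the Responses API expects;
# openai.responses_api.tools.json is assumed to hold entries like this.
# The lookup_order tool itself is hypothetical.
tools = [
    {
        "type": "function",
        "name": "lookup_order",
        "description": "Look up an order by its ID.",
        "parameters": {
            "type": "object",
            "properties": {"order_id": {"type": "string"}},
            "required": ["order_id"],
        },
    }
]

# Request body that a script like run_openai_response.sh could POST to
# the /v1/responses endpoint (model name is an example).
request_body = {
    "model": "gpt-4o",
    "input": "Where is order 12345?",
    "tools": tools,
}

print(json.dumps(request_body, indent=2))
```

Unlike the Chat Completions format, Responses API tools are not nested under a `"function"` key; the name, description, and parameters sit at the top level of each entry.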
Use the same canonical tooling export for GPT, Claude, Gemini, DeepSeek, and Grok. The free tier gives 10 calls/day so teams can validate the flow quickly, then move billing to the web when they need more usage.
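Teams that want to validate the flow against the free tier can track the 10 calls/day budget client-side before wiring up web billing. A minimal sketch of such a counter, assuming nothing about the server-side enforcement (class and method names are hypothetical):

```python
from datetime import date

FREE_TIER_DAILY_LIMIT = 10  # free tier: 10 calls/day

class DailyQuota:
    """Tracks calls per calendar day; resets when the date changes."""

    def __init__(self, limit=FREE_TIER_DAILY_LIMIT):
        self.limit = limit
        self.day = date.today()
        self.used = 0

    def try_acquire(self):
        today = date.today()
        if today != self.day:          # new day: reset the counter
            self.day, self.used = today, 0
        if self.used >= self.limit:    # over quota: upgrade on the web
            return False
        self.used += 1
        return True

quota = DailyQuota()
allowed = [quota.try_acquire() for _ in range(12)]
print(allowed.count(True))  # → 10: the first ten calls pass, the rest are rejected
```

A counter like this only mirrors the published quota; the authoritative limit lives with the service, so a rejected call should still be handled gracefully.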
Anthropic
Use the shared Anthropic schema export and request template for assistant workflows.
- make ai-api-boarding-pack
- anthropic.tools.input_schema.json
- run_anthropic_message.sh

Gemini
Use the function declaration export directly from the same tooling source.
- /direct-ai/tooling
- gemini.function_declarations.json
- google_genai.function_declarations

DeepSeek
Use the OpenAI-compatible chat-completions pack for direct tool calling.
- deepseek.chat_completions.tools.json
- deepseek.chat_completions.request.template.json
- run_deepseek_chat.sh

Grok
Use the xAI Responses API pack generated from the same canonical tools.
- grok.responses_api.tools.json
- grok.responses_api.request.template.json
- run_grok_response.sh

Free tier: 10 calls per day for public evaluation, demos, and routing validation.
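All of the packs above are generated from one canonical tool definition, so the per-provider files differ only in shape, not content. A sketch of that fan-out, assuming a hypothetical canonical record; the three target shapes follow each vendor's published tool-schema layout (flat function tools for the Responses API, `input_schema` for Anthropic, `function_declarations` entries for Gemini):

```python
# Hypothetical canonical tool record; field names here are assumptions.
canonical = {
    "name": "lookup_order",
    "description": "Look up an order by its ID.",
    "schema": {
        "type": "object",
        "properties": {"order_id": {"type": "string"}},
        "required": ["order_id"],
    },
}

def to_openai_responses(tool):
    # Shape of entries in openai.responses_api.tools.json: flat function tools.
    return {"type": "function", "name": tool["name"],
            "description": tool["description"], "parameters": tool["schema"]}

def to_anthropic(tool):
    # Shape of entries in anthropic.tools.input_schema.json:
    # the JSON Schema goes under "input_schema".
    return {"name": tool["name"], "description": tool["description"],
            "input_schema": tool["schema"]}

def to_gemini(tool):
    # Shape of entries in gemini.function_declarations.json, as consumed
    # by google_genai function_declarations.
    return {"name": tool["name"], "description": tool["description"],
            "parameters": tool["schema"]}

packs = {
    "openai": [to_openai_responses(canonical)],
    "anthropic": [to_anthropic(canonical)],
    "gemini": [to_gemini(canonical)],
}
```

Because every pack is derived from the same record, a change to the canonical schema propagates to all providers on the next export; the DeepSeek and Grok packs reuse the OpenAI-compatible shapes.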
Public API surface + provider packs + manifest links
Upgrade on the OrderCore website when you need more calls, production usage, or a broader rollout.
Billing stays on the web, not in the chat layer.
One canonical set of links, provider schemas, browser config, and quota/billing metadata for all AI services.
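One way that canonical set could be tied together is a single manifest that links packs, quota, and billing metadata. The sketch below is purely illustrative: every field name and grouping is an assumption, and only the file names, the endpoint, and the quota figure come from the sections above.

```python
import json

# Hypothetical manifest layout; field names and nesting are assumptions.
# File names, the /direct-ai/tooling endpoint, and the 10 calls/day free
# tier are taken from the provider sections above.
manifest = {
    "endpoint": "/direct-ai/tooling",
    "quota": {"free_tier_calls_per_day": 10, "billing": "web"},
    "packs": {
        "openai":    ["openai.responses_api.tools.json",
                      "run_openai_response.sh"],
        "anthropic": ["anthropic.tools.input_schema.json",
                      "run_anthropic_message.sh"],
        "gemini":    ["gemini.function_declarations.json"],
        "deepseek":  ["deepseek.chat_completions.tools.json",
                      "deepseek.chat_completions.request.template.json",
                      "run_deepseek_chat.sh"],
        "grok":      ["grok.responses_api.tools.json",
                      "grok.responses_api.request.template.json",
                      "run_grok_response.sh"],
    },
}

print(json.dumps(manifest, indent=2))
```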