# Neutrino AI

## Docs

- [Model Gateway](https://docs.neutrinoapp.com/gateway/gateway.md): Concurrently generate and compare responses from different models in a single request.
- [Supported Models](https://docs.neutrinoapp.com/gateway/models.md)
- [Optimized Inference Engines](https://docs.neutrinoapp.com/inference-engines/engines.md): Inference Engines are designed to deliver optimal LLM inference for their respective use cases. Each has access to a carefully curated model selection and intelligently routes queries to the best-suited LLM for each prompt, maximizing response quality while optimizing for cost and latency.
- [LangChain](https://docs.neutrinoapp.com/integrations/langchain.md): Call the Neutrino Router using LangChain.
- [LlamaIndex](https://docs.neutrinoapp.com/integrations/llamaindex.md): Call the Neutrino Router using LlamaIndex.
- [Quickstart](https://docs.neutrinoapp.com/introduction.md): Getting started with Neutrino AI.
- [Model Pricing](https://docs.neutrinoapp.com/pricing/models.md)
- [Manually Upload Queries](https://docs.neutrinoapp.com/router-tags/batchupload.md): Ingest past queries via batch upload into Neutrino's exploration system.
- [Quickstart](https://docs.neutrinoapp.com/router-tags/quickstart.md): Routing tags let you gather observability metrics for specific sections of your AI application, explore how different models perform on your use case, and get the highest-quality responses while balancing cost and latency for your LLM queries.
- [Function Calling](https://docs.neutrinoapp.com/structured-outputs/function-calling.md): Use OpenAI's function calling API with the Neutrino router and supported models.

## OpenAPI Specs

- [openapi](https://docs.neutrinoapp.com/api-reference/openapi.json)