# Phoeniqs Model Service

The Phoeniqs Model Service gives you access to a curated suite of open-source Large Language Models (LLMs), hosted entirely on Swiss sovereign infrastructure and exposed through an OpenAI-compatible API.

Models are served using vLLM for optimized inference, with support for streaming responses and token-per-minute throughput pricing. A Bring-Your-Own-Model (BYOM) option is also available, letting you deploy custom models from Hugging Face within the secure Phoeniqs environment.
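Because the service is OpenAI-compatible, any OpenAI-style client can talk to it once pointed at the Phoeniqs endpoint. The sketch below builds (but does not send) a streaming `/chat/completions` request using only the Python standard library; the base URL, API key, and model name are placeholders, not real values — take the actual endpoint and model IDs from the active-models list and your subscription.

```python
import json
import urllib.request

# Placeholder values -- substitute the real Phoeniqs endpoint, your API key,
# and a model ID from the active-models list.
BASE_URL = "https://api.example-phoeniqs.ch/v1"
API_KEY = "YOUR_API_KEY"


def build_chat_request(model: str, prompt: str, stream: bool = True) -> urllib.request.Request:
    """Build an OpenAI-compatible /chat/completions request (constructed only, not sent)."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        # With stream=True an OpenAI-compatible server returns tokens
        # incrementally as server-sent events instead of one final JSON body.
        "stream": stream,
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


req = build_chat_request("some-open-model", "Say hello in one sentence.")
```

Sending the request with `urllib.request.urlopen(req)` would yield the response stream; equivalently, the official `openai` Python client can be used by passing the Phoeniqs endpoint as its `base_url` and your key as `api_key`.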


## Quick links

- [Active models](active-models/): view the full list of models currently live and available for inference.
- [Model service guides](model-service-guides/): step-by-step guides for inference, API calls, subscriptions, and more.