Hosting models for GitHub Copilot Chat

Learn how the different AI models for Copilot Chat are hosted.

GitHub Copilot can use a variety of AI models. This article explains how these models are hosted and served.

OpenAI models

Used for:

  • GPT-4.1
  • GPT-5-Codex (supported in Visual Studio Code v1.104.1 or higher)
  • GPT-5 mini
  • GPT-5
  • GPT-5.1
  • GPT-5.1-Codex
  • GPT-5.1-Codex-Mini

These models are hosted by OpenAI and on GitHub's Azure infrastructure.

OpenAI makes the following data commitment: We [OpenAI] do not train models on customer business data. Data processing follows OpenAI's enterprise privacy commitments.

GitHub maintains a zero data retention agreement with OpenAI.

All input requests and output responses processed by GitHub Copilot's models continue to pass through GitHub Copilot's content filtering systems. These filters include checks for public code matches (when applied), as well as mechanisms to detect and block harmful or offensive content.
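The filtering described above can be pictured as a small pipeline that every completion passes through. The sketch below is purely illustrative: the snippet corpus, the blocked-term pattern, and the function name are invented for this example, and GitHub's actual filters are proprietary and far more sophisticated.

```python
import re

# Stand-in corpus of known public code (hypothetical; the real matcher
# compares against a large index of public repositories).
PUBLIC_SNIPPETS = {"def quicksort(arr):"}

# Stand-in pattern for harmful or offensive content (hypothetical).
BLOCKED_TERMS = re.compile(r"\b(offensive_term)\b", re.IGNORECASE)

def filter_completion(text: str, block_public_matches: bool) -> tuple[bool, str]:
    """Return (allowed, reason) for a model completion.

    The harmful-content check always runs; the public-code check only
    blocks when the corresponding setting is enabled ("when applied").
    """
    if block_public_matches and any(s in text for s in PUBLIC_SNIPPETS):
        return False, "matches public code"
    if BLOCKED_TERMS.search(text):
        return False, "harmful or offensive content"
    return True, "ok"
```

With the public-code setting enabled, a completion containing a known snippet is blocked; with it disabled, only the harmful-content check applies.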

OpenAI models fine-tuned by Microsoft

Used for:

  • Raptor mini

Raptor mini is deployed on a GitHub-managed Azure OpenAI tenant.

Anthropic models

Used for:

  • Claude Haiku 4.5
  • Claude Sonnet 4.5
  • Claude Opus 4.1
  • Claude Sonnet 4

These models are hosted by Amazon Web Services, Anthropic PBC, and Google Cloud Platform. GitHub has provider agreements in place to ensure data is not used for training. Additional details for each provider are included below:

To provide better service quality and reduce latency, GitHub uses prompt caching. You can read more about prompt caching on Anthropic PBC, Amazon Bedrock, and Google Cloud.
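The idea behind the prompt caching mentioned above is that a repeated prompt prefix (such as fixed system instructions) is processed once and its result reused on later requests, so only the new suffix incurs full processing cost. The following sketch illustrates the concept only; the class and function names are invented here, and the actual caching is implemented inside each provider's inference stack, not in application code.

```python
import hashlib

class PromptCache:
    """Conceptual prefix cache keyed by a hash of the prompt prefix."""

    def __init__(self):
        self._store = {}  # prefix hash -> precomputed state

    def _key(self, prefix: str) -> str:
        return hashlib.sha256(prefix.encode()).hexdigest()

    def lookup(self, prefix: str):
        return self._store.get(self._key(prefix))

    def insert(self, prefix: str, state) -> None:
        self._store[self._key(prefix)] = state

def respond(cache: PromptCache, system_prompt: str, user_message: str) -> str:
    """Serve a request, reusing cached prefix state when available."""
    state = cache.lookup(system_prompt)
    if state is None:
        # Stand-in for the expensive prefix-processing step.
        state = f"processed({system_prompt})"
        cache.insert(system_prompt, state)
    return f"{state} + answer({user_message})"

cache = PromptCache()
respond(cache, "You are a coding assistant.", "Explain recursion.")
# A second request with the same system prompt reuses the cached state,
# which is what reduces latency for repeated prefixes.
respond(cache, "You are a coding assistant.", "Explain closures.")
```

In real deployments the cached state is the model's internal computation over the prefix tokens, and entries expire after a short time window.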

When using Claude, input prompts and output completions continue to run through GitHub Copilot's content filters for public code matching, when applied, along with those for harmful or offensive content.

Google models

Used for:

  • Gemini 2.5 Pro
  • Gemini 3 Pro

GitHub Copilot uses Gemini 3 Pro and Gemini 2.5 Pro hosted on Google Cloud Platform (GCP). When using Gemini models, prompts and metadata are sent to GCP, which makes the following data commitment: Gemini doesn't use your prompts, or its responses, as data to train its models.

To provide better service quality and reduce latency, GitHub uses prompt caching.

When using Gemini models, input prompts and output completions continue to run through GitHub Copilot's content filters for public code matching, when applied, along with those for harmful or offensive content.

xAI models

Complimentary access to Grok Code Fast 1 is continuing past the previously announced end time. A new end date has not been set. We may update or conclude this promotion at any time. Regular pricing applies after the extension ends.

These models are hosted by xAI. xAI operates Grok Code Fast 1 in GitHub Copilot under a zero data retention API policy. This means xAI commits that user content (both inputs sent to the model and outputs generated by the model):

Will not be:

  • Logged for any purpose, including human review
  • Saved to disk or retained in any form, including as metadata
  • Accessible by xAI personnel
  • Used for model training

Will only:

  • Exist temporarily in RAM for the minimum time required to process and respond to each request
  • Be immediately deleted from memory once the response is delivered

When using xAI, input prompts and output completions continue to run through GitHub Copilot's content filters for public code matching, when applied, along with those for harmful or offensive content.

For more information, see xAI's enterprise terms of service on the xAI website.