Configuration

The following sections describe the tools and configuration options of the CodeGPT application.

CodeGPT Stack

Explore the main components of the CodeGPT software stack.

Tools

  1. Chat: Engage in AI conversations using the models of your chosen provider or with agents on CodeGPT Plus.
  2. AI Agents Marketplace: Browse the available agents in the Marketplace and interact with them. Make sure you choose CodeGPT Plus as your provider and establish the required connection. You can also explore agents directly from the Home button.
  3. React Sandbox: Experiment with React components through interaction or image upload. Visit React Sandbox for a hands-on experience.

Settings

These optional configurations enhance your user experience.

  1. Autocomplete: This feature provides code completion suggestions based on the developer's input. It covers multiple aspects, including variables, functions, methods, classes, and context-specific keywords. Check out this guide for further explanation. Remember to choose a provider first. If your choice is CodeGPT Enterprise, refer to this link.
  2. Theme: Alter the visual interface of the application by switching between the dark and light themes.

Help

Look here when you need help.

  • Help: If you require additional assistance, this section offers guidance on the various tools. Alternatively, consult the Docs and API Docs.
  • Issue Reporting: Any issues you encounter can be reported in the GitHub repository.

Select model provider

  1. Select your AI provider from the dropdown menu, then enter the API Key for the selected provider or follow the provider-specific instructions.
  2. Set the connection 🔑; the status shown in the window should change to connected. For more details, check the page for each provider above.
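
If you want to confirm that an API key works before entering it, a quick check against the provider's API can help. The sketch below is a hypothetical example for an OpenAI-style key using the official openai Python client; the environment variable name is an assumption, and other providers have their own clients.

```python
# Minimal sanity check for an OpenAI-style API key (illustrative only).
# Requires: pip install "openai>=1.0"
import os
from openai import OpenAI

# Assumes the key is stored in the OPENAI_API_KEY environment variable.
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

# Listing the available models is a cheap way to confirm the key authenticates.
models = client.models.list()
print("Key is valid; first few models:", [m.id for m in list(models)[:3]])
```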

Chat Settings

  • Explore this menu to adjust provider attributes, set token limits, refine temperature control, and manage window memory.

Max Token

  • Consider tokens as fragments of words. The API first breaks the input down into tokens before performing any operation (see the sketch after this list).
  • Each model has a token limit, which you can adjust according to the expected response length and the model you are using.
  • To understand more about tokens, refer to Tokens by OpenAI.
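
As a rough illustration of how text maps to tokens, this sketch counts tokens with OpenAI's tiktoken library; the encoding name is an assumption, and other providers tokenize text differently.

```python
# Rough sketch: counting tokens with OpenAI's tiktoken library.
# Requires: pip install tiktoken
import tiktoken

# cl100k_base is the encoding used by several OpenAI chat models
# (assumption: your provider may tokenize text differently).
encoding = tiktoken.get_encoding("cl100k_base")

prompt = "Consider tokens as fragments of words."
tokens = encoding.encode(prompt)

print(f"{len(tokens)} tokens: {tokens}")
print("Decoded pieces:", [encoding.decode([t]) for t in tokens])
```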

Temperature

Temperature controls the randomness, or "creativity", of the generated text and ranges from 0 to 1. A higher value produces more diverse output, while a lower value sticks more closely to the training data. The default value is 0.3, with 0 being the most deterministic and 1 the most random.

For more information about temperature settings, visit Temperature by Cohere.
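
For intuition, the sketch below applies temperature scaling to a toy set of next-token scores before sampling; the scores are made up, and actual providers implement this internally.

```python
# Toy illustration of temperature scaling (the scores below are made up).
import math
import random

def sample_with_temperature(logits, temperature):
    """Scale logits by 1/temperature, softmax them, then sample an index."""
    scaled = [l / max(temperature, 1e-6) for l in logits]
    max_s = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - max_s) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    index = random.choices(range(len(logits)), weights=probs, k=1)[0]
    return index, probs

logits = [2.0, 1.0, 0.2]  # hypothetical scores for three candidate tokens

_, low = sample_with_temperature(logits, 0.3)   # close to deterministic
_, high = sample_with_temperature(logits, 1.0)  # more diverse
print("temperature 0.3 ->", [round(p, 3) for p in low])
print("temperature 1.0 ->", [round(p, 3) for p in high])
```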

Window Memory

This setting stores the history of your past conversations, but only the last 'K' conversations are kept, ensuring the buffer doesn't exceed the token limit. A sketch of this behavior follows the list below.

  • Default: 4
  • Minimum: 1
  • Maximum: 50
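
As a rough sketch of the idea (not CodeGPT's actual implementation), a window memory of size K can be modeled as a buffer that only keeps the last K exchanges:

```python
# Conceptual model of a window memory that keeps only the last K exchanges
# (illustrative only; not CodeGPT's actual implementation).
from collections import deque

class WindowMemory:
    def __init__(self, k=4):           # default K = 4, as in the settings above
        self.buffer = deque(maxlen=k)  # older exchanges are dropped automatically

    def add(self, user_message, assistant_reply):
        self.buffer.append((user_message, assistant_reply))

    def context(self):
        """Return the remembered exchanges, oldest first."""
        return list(self.buffer)

memory = WindowMemory(k=4)
for i in range(6):
    memory.add(f"question {i}", f"answer {i}")

# Only the last 4 exchanges remain in the window.
print(memory.context())
```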

To see any changes made to these settings reflected, click the tray-arrow-down button in your browser.