---
title: 🤖 Large language models (LLMs)
---

## Overview

Embedchain comes with built-in support for various popular large language models. We handle the complexity of integrating these models for you, allowing you to easily customize your language model interactions through a user-friendly interface.

## OpenAI

To use OpenAI LLM models, you have to set the `OPENAI_API_KEY` environment variable. You can obtain the OpenAI API key from the [OpenAI Platform](https://platform.openai.com/account/api-keys).

Once you have obtained the key, you can use it like this:

```python
import os
from embedchain import Pipeline as App

os.environ['OPENAI_API_KEY'] = 'xxx'

app = App()
app.add("https://en.wikipedia.org/wiki/OpenAI")
app.query("What is OpenAI?")
```

If you are looking to configure the different parameters of the LLM, you can do so by loading the app using a [YAML config](https://github.com/embedchain/embedchain/blob/main/configs/chroma.yaml) file.

```python main.py
import os
from embedchain import Pipeline as App

os.environ['OPENAI_API_KEY'] = 'xxx'

# load llm configuration from config.yaml file
app = App.from_config(config_path="config.yaml")
```

```yaml config.yaml
llm:
  provider: openai
  config:
    model: 'gpt-3.5-turbo'
    temperature: 0.5
    max_tokens: 1000
    top_p: 1
    stream: false
```

## Google AI

To use Google AI models, you have to set the `GOOGLE_API_KEY` environment variable. You can obtain the Google API key from the [Google Maker Suite](https://makersuite.google.com/app/apikey).

```python main.py
import os
from embedchain import Pipeline as App

os.environ["OPENAI_API_KEY"] = "sk-xxxx"
os.environ["GOOGLE_API_KEY"] = "xxx"

app = App.from_config(config_path="config.yaml")

app.add("https://www.forbes.com/profile/elon-musk")

response = app.query("What is the net worth of Elon Musk?")
if app.llm.config.stream:  # if stream is enabled, response is a generator
    for chunk in response:
        print(chunk)
else:
    print(response)
```

```yaml config.yaml
llm:
  provider: google
  config:
    model: gemini-pro
    max_tokens: 1000
    temperature: 0.5
    top_p: 1
    stream: false
```

## Azure OpenAI

To use Azure OpenAI models, you have to set the Azure OpenAI-related environment variables, as shown in the code block below:

```python main.py
import os
from embedchain import Pipeline as App

os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_BASE"] = "https://xxx.openai.azure.com/"
os.environ["OPENAI_API_KEY"] = "xxx"
os.environ["OPENAI_API_VERSION"] = "xxx"

app = App.from_config(config_path="config.yaml")
```

```yaml config.yaml
llm:
  provider: azure_openai
  config:
    model: gpt-35-turbo
    deployment_name: your_llm_deployment_name
    temperature: 0.5
    max_tokens: 1000
    top_p: 1
    stream: false

embedder:
  provider: azure_openai
  config:
    model: text-embedding-ada-002
    deployment_name: your_embedding_model_deployment_name
```

You can find the list of models and deployment names on the [Azure OpenAI Platform](https://oai.azure.com/portal).
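Once the app is loaded from a config like the one above, ingesting data and querying work the same as in the OpenAI example at the top of this page. Here is a minimal end-to-end sketch; the source URL and question are placeholders, and `config.yaml` is assumed to contain the Azure settings shown above:

```python
import os
from embedchain import Pipeline as App

os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_BASE"] = "https://xxx.openai.azure.com/"
os.environ["OPENAI_API_KEY"] = "xxx"
os.environ["OPENAI_API_VERSION"] = "xxx"

# build the app from the Azure config above, then ingest a source and query it
app = App.from_config(config_path="config.yaml")
app.add("https://en.wikipedia.org/wiki/Microsoft_Azure")  # placeholder source
print(app.query("What is Azure OpenAI?"))                 # placeholder question
```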
## Anthropic

To use Anthropic's models, set the `ANTHROPIC_API_KEY` environment variable, which you can find on their [Account Settings Page](https://console.anthropic.com/account/keys).

```python main.py
import os
from embedchain import Pipeline as App

os.environ["ANTHROPIC_API_KEY"] = "xxx"

# load llm configuration from config.yaml file
app = App.from_config(config_path="config.yaml")
```

```yaml config.yaml
llm:
  provider: anthropic
  config:
    model: 'claude-instant-1'
    temperature: 0.5
    max_tokens: 1000
    top_p: 1
    stream: false
```

## Cohere

Install the related dependencies using the following command:

```bash
pip install --upgrade 'embedchain[cohere]'
```

Set the `COHERE_API_KEY` environment variable, which you can find on their [Account settings page](https://dashboard.cohere.com/api-keys). Once you have the API key, you are all set to use it with Embedchain.

```python main.py
import os
from embedchain import Pipeline as App

os.environ["COHERE_API_KEY"] = "xxx"

# load llm configuration from config.yaml file
app = App.from_config(config_path="config.yaml")
```

```yaml config.yaml
llm:
  provider: cohere
  config:
    model: large
    temperature: 0.5
    max_tokens: 1000
    top_p: 1
```

## GPT4ALL

Install the related dependencies using the following command:

```bash
pip install --upgrade 'embedchain[opensource]'
```

GPT4All is a free-to-use, locally running, privacy-aware chatbot. No GPU or internet connection is required. You can use it with Embedchain using the following code:

```python main.py
from embedchain import Pipeline as App

# load llm configuration from config.yaml file
app = App.from_config(config_path="config.yaml")
```

```yaml config.yaml
llm:
  provider: gpt4all
  config:
    model: 'orca-mini-3b-gguf2-q4_0.gguf'
    temperature: 0.5
    max_tokens: 1000
    top_p: 1
    stream: false

embedder:
  provider: gpt4all
```

## JinaChat

First, set the `JINACHAT_API_KEY` environment variable, which you can obtain from [their platform](https://chat.jina.ai/api). Once you have the key, load the app using the config YAML file:

```python main.py
import os
from embedchain import Pipeline as App

os.environ["JINACHAT_API_KEY"] = "xxx"

# load llm configuration from config.yaml file
app = App.from_config(config_path="config.yaml")
```

```yaml config.yaml
llm:
  provider: jina
  config:
    temperature: 0.5
    max_tokens: 1000
    top_p: 1
    stream: false
```

## Hugging Face

Install the related dependencies using the following command:

```bash
pip install --upgrade 'embedchain[huggingface-hub]'
```

First, set the `HUGGINGFACE_ACCESS_TOKEN` environment variable, which you can obtain from [their platform](https://huggingface.co/settings/tokens). Once you have the token, load the app using the config YAML file:

```python main.py
import os
from embedchain import Pipeline as App

os.environ["HUGGINGFACE_ACCESS_TOKEN"] = "xxx"

# load llm configuration from config.yaml file
app = App.from_config(config_path="config.yaml")
```

```yaml config.yaml
llm:
  provider: huggingface
  config:
    model: 'google/flan-t5-xxl'
    temperature: 0.5
    max_tokens: 1000
    top_p: 0.5
    stream: false
```
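All of the configs above accept a `stream` flag. When it is set to `true`, `query()` returns a generator instead of a plain string (as shown in the Google AI example earlier), so calling code should branch on the flag. A minimal sketch, assuming `config.yaml` holds any of the configs above and data has already been added; the question is a placeholder:

```python
from embedchain import Pipeline as App

app = App.from_config(config_path="config.yaml")  # any of the configs above

response = app.query("Summarize the ingested data in one sentence.")  # placeholder
if app.llm.config.stream:  # with stream: true, the response is a generator
    for chunk in response:
        print(chunk, end="")
else:  # with stream: false, the response is the full string
    print(response)
```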
## Llama2

Llama2 is integrated through [Replicate](https://replicate.com/). Set the `REPLICATE_API_TOKEN` environment variable, which you can obtain from [their platform](https://replicate.com/account/api-tokens).

Once you have the token, load the app using the config YAML file:

```python main.py
import os
from embedchain import Pipeline as App

os.environ["REPLICATE_API_TOKEN"] = "xxx"

# load llm configuration from config.yaml file
app = App.from_config(config_path="config.yaml")
```

```yaml config.yaml
llm:
  provider: llama2
  config:
    model: 'a16z-infra/llama13b-v2-chat:df7690f1994d94e96ad9d568eac121aecf50684a0b0963b25a41cc40061269e5'
    temperature: 0.5
    max_tokens: 1000
    top_p: 0.5
    stream: false
```

## Vertex AI

Set up Google Cloud Platform application credentials by following the instructions on [GCP](https://cloud.google.com/docs/authentication/external/set-up-adc). Once setup is done, use the following code to create an app with Vertex AI as the provider:

```python main.py
from embedchain import Pipeline as App

# load llm configuration from config.yaml file
app = App.from_config(config_path="config.yaml")
```

```yaml config.yaml
llm:
  provider: vertexai
  config:
    model: 'chat-bison'
    temperature: 0.5
    top_p: 0.5
```
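If app creation fails here, credentials are the usual culprit. As a quick sanity check, you can confirm that Application Default Credentials resolve before constructing the app. This is a minimal sketch assuming the `google-auth` package is installed; it is not part of the Embedchain API:

```python
# Sanity check for GCP Application Default Credentials (ADC).
# Assumes the google-auth package is available; not part of Embedchain.
import google.auth

# Raises DefaultCredentialsError if ADC setup is incomplete.
credentials, project_id = google.auth.default()
print(f"ADC resolved for project: {project_id}")
```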