---
title: ❓ FAQs
description: 'Collection of all the frequently asked questions'
---

#### Does it support OpenAI's Assistants API?

Yes, it does. Please refer to the [OpenAI Assistant docs page](/examples/openai-assistant).

#### How do I use the MistralAI language model?

Use the model provided on Hugging Face: `mistralai/Mistral-7B-v0.1`.

```python main.py
import os
from embedchain import Pipeline as App

os.environ["HUGGINGFACE_ACCESS_TOKEN"] = "hf_your_token"

app = App.from_config("huggingface.yaml")
```

```yaml huggingface.yaml
llm:
  provider: huggingface
  config:
    model: 'mistralai/Mistral-7B-v0.1'
    temperature: 0.5
    max_tokens: 1000
    top_p: 0.5
    stream: false

embedder:
  provider: huggingface
  config:
    model: 'sentence-transformers/all-mpnet-base-v2'
```

#### How do I use the GPT-4 turbo model?

Use the model `gpt-4-turbo` provided by OpenAI.

```python main.py
import os
from embedchain import Pipeline as App

os.environ['OPENAI_API_KEY'] = 'xxx'

# load llm configuration from gpt4_turbo.yaml file
app = App.from_config(config_path="gpt4_turbo.yaml")
```

```yaml gpt4_turbo.yaml
llm:
  provider: openai
  config:
    model: 'gpt-4-turbo'
    temperature: 0.5
    max_tokens: 1000
    top_p: 1
    stream: false
```

#### How do I use GPT-4 as the LLM model?

```python main.py
import os
from embedchain import Pipeline as App

os.environ['OPENAI_API_KEY'] = 'xxx'

# load llm configuration from gpt4.yaml file
app = App.from_config(config_path="gpt4.yaml")
```

```yaml gpt4.yaml
llm:
  provider: openai
  config:
    model: 'gpt-4'
    temperature: 0.5
    max_tokens: 1000
    top_p: 1
    stream: false
```

#### How do I use open-source models?

```python main.py
import os
from embedchain import Pipeline as App

os.environ['OPENAI_API_KEY'] = 'xxx'

# load llm configuration from opensource.yaml file
app = App.from_config(config_path="opensource.yaml")
```

```yaml opensource.yaml
llm:
  provider: gpt4all
  config:
    model: 'orca-mini-3b-gguf2-q4_0.gguf'
    temperature: 0.5
    max_tokens: 1000
    top_p: 1
    stream: false

embedder:
  provider: gpt4all
  config:
    model: 'all-MiniLM-L6-v2'
```

#### How do I stream the response?

You can achieve this by setting `stream` to `true` in the config file.
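Conceptually, a streamed response is just consumed chunk by chunk as the model generates it, with each chunk written to stdout immediately. The pure-Python sketch below illustrates that idea only; `fake_llm_stream` and `query` are hypothetical names, not embedchain's API.

```python
import sys

def fake_llm_stream(answer):
    # Stand-in for a streaming LLM: yields the answer one word at a time.
    for word in answer.split():
        yield word + " "

def query(prompt, stream=False):
    chunks = fake_llm_stream("A placeholder answer streamed word by word.")
    if not stream:
        return "".join(chunks)
    out = []
    for chunk in chunks:
        sys.stdout.write(chunk)  # printed as it is generated
        sys.stdout.flush()
        out.append(chunk)
    return "".join(out)

response = query("What is the net worth of Elon Musk?", stream=True)
```

With embedchain itself, no loop is needed: once `stream: true` is set in the config, `app.query(...)` streams to stdout on its own, as shown below.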
```yaml openai.yaml
llm:
  provider: openai
  config:
    model: 'gpt-3.5-turbo'
    temperature: 0.5
    max_tokens: 1000
    top_p: 1
    stream: true
```

```python main.py
import os
from embedchain import Pipeline as App

os.environ['OPENAI_API_KEY'] = 'sk-xxx'

app = App.from_config(config_path="openai.yaml")
app.add("https://www.forbes.com/profile/elon-musk")

response = app.query("What is the net worth of Elon Musk?")
# response will be streamed in stdout as it is generated.
```

#### Still have questions?

If the docs aren't sufficient, please feel free to reach out to us using one of the following methods: