---
title: 🤖 Large language models (LLMs)
---
## Overview
Embedchain comes with built-in support for various popular large language models. We handle the complexity of integrating these models for you, allowing you to easily customize your language model interactions through a user-friendly interface.
<CardGroup cols={4}>
  <Card title="OpenAI" href="#openai"></Card>
  <Card title="Google AI" href="#google-ai"></Card>
  <Card title="Azure OpenAI" href="#azure-openai"></Card>
  <Card title="Anthropic" href="#anthropic"></Card>
  <Card title="Cohere" href="#cohere"></Card>
  <Card title="Together" href="#together"></Card>
  <Card title="Ollama" href="#ollama"></Card>
  <Card title="vLLM" href="#vllm"></Card>
  <Card title="GPT4All" href="#gpt4all"></Card>
  <Card title="JinaChat" href="#jinachat"></Card>
  <Card title="Hugging Face" href="#hugging-face"></Card>
  <Card title="Llama2" href="#llama2"></Card>
  <Card title="Vertex AI" href="#vertex-ai"></Card>
  <Card title="Mistral AI" href="#mistral-ai"></Card>
  <Card title="AWS Bedrock" href="#aws-bedrock"></Card>
  <Card title="Groq" href="#groq"></Card>
  <Card title="NVIDIA AI" href="#nvidia-ai"></Card>
</CardGroup>
## OpenAI
To use OpenAI's models, you have to set the `OPENAI_API_KEY` environment variable. You can obtain the OpenAI API key from the [OpenAI Platform](https://platform.openai.com/account/api-keys).
Once you have obtained the key, you can use it like this:
```python
import os
from embedchain import App
os.environ['OPENAI_API_KEY'] = 'xxx'
app = App()
app.add("https://en.wikipedia.org/wiki/OpenAI")
app.query("What is OpenAI?")
```
If you want to configure different parameters of the LLM, you can do so by loading the app from a [YAML config](https://github.com/embedchain/embedchain/blob/main/configs/chroma.yaml) file.
<CodeGroup>
```python main.py
import os
from embedchain import App
os.environ['OPENAI_API_KEY'] = 'xxx'
# load llm configuration from config.yaml file
app = App.from_config(config_path="config.yaml")
```
```yaml config.yaml
llm:
  provider: openai
  config:
    model: 'gpt-3.5-turbo'
    temperature: 0.5
    max_tokens: 1000
    top_p: 1
    stream: false
```
</CodeGroup>
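The same options can also be passed as a Python dictionary instead of a YAML file, as the Hugging Face, Groq, and NVIDIA examples further down this page do. A minimal sketch mirroring the config above:
```python
import os
from embedchain import App
os.environ['OPENAI_API_KEY'] = 'xxx'
# equivalent of config.yaml above, passed as a dict
config = {
    "llm": {
        "provider": "openai",
        "config": {
            "model": "gpt-3.5-turbo",
            "temperature": 0.5,
            "max_tokens": 1000,
            "top_p": 1,
            "stream": False,
        },
    },
}
app = App.from_config(config=config)
```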
### Function Calling
Embedchain supports OpenAI [Function calling](https://platform.openai.com/docs/guides/function-calling) with a single function. It accepts inputs in accordance with the [Langchain interface](https://python.langchain.com/docs/modules/model_io/chat/function_calling#legacy-args-functions-and-function_call).
<Accordion title="Pydantic Model">
```python
from pydantic import BaseModel, Field

class multiply(BaseModel):
    """Multiply two integers together."""

    a: int = Field(..., description="First integer")
    b: int = Field(..., description="Second integer")
```
</Accordion>
<Accordion title="Python function">
```python
def multiply(a: int, b: int) -> int:
    """Multiply two integers together.

    Args:
        a: First integer
        b: Second integer
    """
    return a * b
```
</Accordion>
<Accordion title="OpenAI tool dictionary">
```python
multiply = {
    "type": "function",
    "function": {
        "name": "multiply",
        "description": "Multiply two integers together.",
        "parameters": {
            "type": "object",
            "properties": {
                "a": {
                    "description": "First integer",
                    "type": "integer"
                },
                "b": {
                    "description": "Second integer",
                    "type": "integer"
                }
            },
            "required": ["a", "b"]
        }
    }
}
```
</Accordion>
With any of the previous inputs, the OpenAI LLM can be queried to provide the appropriate arguments for the function.
```python
import os
from embedchain import App
from embedchain.llm.openai import OpenAILlm
os.environ["OPENAI_API_KEY"] = "sk-xxx"
llm = OpenAILlm(tools=multiply)
app = App(llm=llm)
result = app.query("What is the result of 125 multiplied by fifteen?")
```
## Google AI
To use Google AI models, you have to set the `GOOGLE_API_KEY` environment variable. You can obtain the Google API key from the [Google Maker Suite](https://makersuite.google.com/app/apikey).
<CodeGroup>
```python main.py
import os
from embedchain import App
os.environ["GOOGLE_API_KEY"] = "xxx"
app = App.from_config(config_path="config.yaml")
app.add("https://www.forbes.com/profile/elon-musk")
response = app.query("What is the net worth of Elon Musk?")
if app.llm.config.stream:  # if stream is enabled, response is a generator
    for chunk in response:
        print(chunk)
else:
    print(response)
```
```yaml config.yaml
llm:
  provider: google
  config:
    model: gemini-pro
    max_tokens: 1000
    temperature: 0.5
    top_p: 1
    stream: false
embedder:
  provider: google
  config:
    model: 'models/embedding-001'
    task_type: "retrieval_document"
    title: "Embeddings for Embedchain"
```
</CodeGroup>
## Azure OpenAI
To use Azure OpenAI models, set the Azure OpenAI-related environment variables as shown in the code block below:
<CodeGroup>
```python main.py
import os
from embedchain import App
os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_BASE"] = "https://xxx.openai.azure.com/"
os.environ["OPENAI_API_KEY"] = "xxx"
os.environ["OPENAI_API_VERSION"] = "xxx"
app = App.from_config(config_path="config.yaml")
```
```yaml config.yaml
llm:
  provider: azure_openai
  config:
    model: gpt-3.5-turbo
    deployment_name: your_llm_deployment_name
    temperature: 0.5
    max_tokens: 1000
    top_p: 1
    stream: false
embedder:
  provider: azure_openai
  config:
    model: text-embedding-ada-002
    deployment_name: your_embedding_model_deployment_name
```
</CodeGroup>
You can find the list of models and deployment names on the [Azure OpenAI Platform](https://oai.azure.com/portal).
## Anthropic
To use Anthropic's models, set the `ANTHROPIC_API_KEY` environment variable, which you can find on their [Account Settings page](https://console.anthropic.com/account/keys).
<CodeGroup>
```python main.py
import os
from embedchain import App
os.environ["ANTHROPIC_API_KEY"] = "xxx"
# load llm configuration from config.yaml file
app = App.from_config(config_path="config.yaml")
```
```yaml config.yaml
llm:
  provider: anthropic
  config:
    model: 'claude-instant-1'
    temperature: 0.5
    max_tokens: 1000
    top_p: 1
    stream: false
```
</CodeGroup>
## Cohere
Install the related dependencies using the following command:
```bash
pip install --upgrade 'embedchain[cohere]'
```
Set the `COHERE_API_KEY` environment variable, which you can find on their [Account settings page](https://dashboard.cohere.com/api-keys).
Once you have the API key, you are all set to use it with Embedchain.
<CodeGroup>
```python main.py
import os
from embedchain import App
os.environ["COHERE_API_KEY"] = "xxx"
# load llm configuration from config.yaml file
app = App.from_config(config_path="config.yaml")
```
```yaml config.yaml
llm:
  provider: cohere
  config:
    model: large
    temperature: 0.5
    max_tokens: 1000
    top_p: 1
```
</CodeGroup>
## Together
Install the related dependencies using the following command:
```bash
pip install --upgrade 'embedchain[together]'
```
Set the `TOGETHER_API_KEY` environment variable, which you can find on their [Account settings page](https://api.together.xyz/settings/api-keys).
Once you have the API key, you are all set to use it with Embedchain.
<CodeGroup>
```python main.py
import os
from embedchain import App
os.environ["TOGETHER_API_KEY"] = "xxx"
# load llm configuration from config.yaml file
app = App.from_config(config_path="config.yaml")
```
```yaml config.yaml
llm:
  provider: together
  config:
    model: togethercomputer/RedPajama-INCITE-7B-Base
    temperature: 0.5
    max_tokens: 1000
    top_p: 1
```
</CodeGroup>
## Ollama
Set up Ollama by following the instructions at https://github.com/jmorganca/ollama.
<CodeGroup>
```python main.py
import os
from embedchain import App
# load llm configuration from config.yaml file
app = App.from_config(config_path="config.yaml")
```
```yaml config.yaml
llm:
  provider: ollama
  config:
    model: 'llama2'
    temperature: 0.5
    top_p: 1
    stream: true
    base_url: 'http://localhost:11434'
```
</CodeGroup>
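Since the config above sets `stream: true`, `app.query()` returns a generator rather than a single string (the same behaviour shown in the Google AI example above). A minimal usage sketch, assuming a local Ollama server is running with the `llama2` model pulled:
```python
import os
from embedchain import App
# continuing from main.py above; config.yaml sets stream: true
app = App.from_config(config_path="config.yaml")
app.add("https://www.forbes.com/profile/elon-musk")  # any supported data source works here
response = app.query("What is the net worth of Elon Musk?")
if app.llm.config.stream:  # with stream: true, the response is a generator
    for chunk in response:
        print(chunk)
else:
    print(response)
```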
## vLLM
Set up vLLM by following the instructions in [their docs](https://docs.vllm.ai/en/latest/getting_started/installation.html).
<CodeGroup>
```python main.py
import os
from embedchain import App
# load llm configuration from config.yaml file
app = App.from_config(config_path="config.yaml")
```
```yaml config.yaml
llm:
  provider: vllm
  config:
    model: 'meta-llama/Llama-2-70b-hf'
    temperature: 0.5
    top_p: 1
    top_k: 10
    stream: true
    trust_remote_code: true
```
</CodeGroup>
## GPT4All
Install the related dependencies using the following command:
```bash
pip install --upgrade 'embedchain[opensource]'
```
GPT4All is a free-to-use, locally running, privacy-aware chatbot that requires no GPU or internet connection. You can use it with Embedchain using the following code:
<CodeGroup>
```python main.py
from embedchain import App
# load llm configuration from config.yaml file
app = App.from_config(config_path="config.yaml")
```
```yaml config.yaml
llm:
  provider: gpt4all
  config:
    model: 'orca-mini-3b-gguf2-q4_0.gguf'
    temperature: 0.5
    max_tokens: 1000
    top_p: 1
    stream: false
embedder:
  provider: gpt4all
```
</CodeGroup>
## JinaChat
First, set the `JINACHAT_API_KEY` environment variable, which you can obtain from [their platform](https://chat.jina.ai/api).
Once you have the key, load the app using the config yaml file:
<CodeGroup>
```python main.py
import os
from embedchain import App
os.environ["JINACHAT_API_KEY"] = "xxx"
# load llm configuration from config.yaml file
app = App.from_config(config_path="config.yaml")
```
```yaml config.yaml
llm:
  provider: jina
  config:
    temperature: 0.5
    max_tokens: 1000
    top_p: 1
    stream: false
```
</CodeGroup>
## Hugging Face
Install the related dependencies using the following command:
```bash
pip install --upgrade 'embedchain[huggingface-hub]'
```
First, set the `HUGGINGFACE_ACCESS_TOKEN` environment variable, which you can obtain from [their platform](https://huggingface.co/settings/tokens).
You can load LLMs from Hugging Face in three ways:
- [Hugging Face Hub](#hugging-face-hub)
- [Hugging Face Local Pipelines](#hugging-face-local-pipelines)
- [Hugging Face Inference Endpoint](#hugging-face-inference-endpoint)
### Hugging Face Hub
To load the model from Hugging Face Hub, use the following code:
<CodeGroup>
```python main.py
import os
from embedchain import App
os.environ["HUGGINGFACE_ACCESS_TOKEN"] = "xxx"
config = {
    "app": {"config": {"id": "my-app"}},
    "llm": {
        "provider": "huggingface",
        "config": {
            "model": "bigscience/bloom-1b7",
            "top_p": 0.5,
            "max_length": 200,
            "temperature": 0.1,
        },
    },
}
app = App.from_config(config=config)
```
</CodeGroup>
### Hugging Face Local Pipelines
If you want to load a locally downloaded model from Hugging Face, you can do so with the code below:
<CodeGroup>
```python main.py
from embedchain import App
config = {
    "app": {"config": {"id": "my-app"}},
    "llm": {
        "provider": "huggingface",
        "config": {
            "model": "Trendyol/Trendyol-LLM-7b-chat-v0.1",
            "local": True,  # necessary if you want to run the model locally
            "top_p": 0.5,
            "max_tokens": 1000,
            "temperature": 0.1,
        },
    },
}
app = App.from_config(config=config)
```
</CodeGroup>
### Hugging Face Inference Endpoint
You can also use [Hugging Face Inference Endpoints](https://huggingface.co/docs/inference-endpoints/index#-inference-endpoints) to access custom endpoints. First, set the `HUGGINGFACE_ACCESS_TOKEN` as above.
Then, load the app using the following config:
<CodeGroup>
```python main.py
from embedchain import App
config = {
    "app": {"config": {"id": "my-app"}},
    "llm": {
        "provider": "huggingface",
        "config": {
            "endpoint": "https://api-inference.huggingface.co/models/gpt2",
            "model_params": {"temperature": 0.1, "max_new_tokens": 100},
        },
    },
}
app = App.from_config(config=config)
```
</CodeGroup>
Currently, only the `text-generation` and `text2text-generation` tasks are supported [[ref](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html?highlight=huggingfaceendpoint#)].
See LangChain's [Hugging Face endpoint](https://python.langchain.com/docs/integrations/chat/huggingface#huggingfaceendpoint) documentation for more information.
## Llama2
Llama2 is integrated through [Replicate](https://replicate.com/). Set the `REPLICATE_API_TOKEN` environment variable, which you can obtain from [their platform](https://replicate.com/account/api-tokens).
Once you have the token, load the app using the config yaml file:
<CodeGroup>
```python main.py
import os
from embedchain import App
os.environ["REPLICATE_API_TOKEN"] = "xxx"
# load llm configuration from config.yaml file
app = App.from_config(config_path="config.yaml")
```
```yaml config.yaml
llm:
  provider: llama2
  config:
    model: 'a16z-infra/llama13b-v2-chat:df7690f1994d94e96ad9d568eac121aecf50684a0b0963b25a41cc40061269e5'
    temperature: 0.5
    max_tokens: 1000
    top_p: 0.5
    stream: false
```
</CodeGroup>
## Vertex AI
Set up Google Cloud Platform application credentials by following the instructions on [GCP](https://cloud.google.com/docs/authentication/external/set-up-adc). Once the setup is done, use the following code to create an app with Vertex AI as the provider:
<CodeGroup>
```python main.py
from embedchain import App
# load llm configuration from config.yaml file
app = App.from_config(config_path="config.yaml")
```
```yaml config.yaml
llm:
  provider: vertexai
  config:
    model: 'chat-bison'
    temperature: 0.5
    top_p: 0.5
```
</CodeGroup>
## Mistral AI
Obtain the Mistral AI API key from their [console](https://console.mistral.ai/).
<CodeGroup>
```python main.py
import os
from embedchain import App
os.environ["MISTRAL_API_KEY"] = "xxx"
app = App.from_config(config_path="config.yaml")
app.add("https://www.forbes.com/profile/elon-musk")
response = app.query("what is the net worth of Elon Musk?")
# As of January 16, 2024, Elon Musk's net worth is $225.4 billion.
response = app.chat("which companies does elon own?")
# Elon Musk owns Tesla, SpaceX, Boring Company, Twitter, and X.
response = app.chat("what question did I ask you already?")
# You have asked me several times already which companies Elon Musk owns, specifically Tesla, SpaceX, Boring Company, Twitter, and X.
```
```yaml config.yaml
llm:
  provider: mistralai
  config:
    model: mistral-tiny
    temperature: 0.5
    max_tokens: 1000
    top_p: 1
embedder:
  provider: mistralai
  config:
    model: mistral-embed
```
</CodeGroup>
## AWS Bedrock
### Setup
- Before using the AWS Bedrock LLM, make sure you have access to the appropriate models in the [Bedrock Console](https://us-east-1.console.aws.amazon.com/bedrock/home?region=us-east-1#/modelaccess).
- You will also need to authenticate the `boto3` client using one of the methods described in the [AWS documentation](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html#configuring-credentials).
- You can optionally export an `AWS_REGION`.
### Usage
<CodeGroup>
```python main.py
import os
from embedchain import App
os.environ["AWS_ACCESS_KEY_ID"] = "xxx"
os.environ["AWS_SECRET_ACCESS_KEY"] = "xxx"
os.environ["AWS_REGION"] = "us-west-2"
app = App.from_config(config_path="config.yaml")
```
```yaml config.yaml
llm:
  provider: aws_bedrock
  config:
    model: amazon.titan-text-express-v1
    # check notes below for model_kwargs
    model_kwargs:
      temperature: 0.5
      topP: 1
      maxTokenCount: 1000
```
</CodeGroup>
<br />
<Note>
The model arguments are different for each provider. Please refer to the [AWS Bedrock Documentation](https://us-east-1.console.aws.amazon.com/bedrock/home?region=us-east-1#/providers) to find the appropriate arguments for your model.
</Note>
<br />
## Groq
[Groq](https://groq.com/) is the creator of the world's first Language Processing Unit (LPU), providing exceptional speed for AI workloads running on its LPU Inference Engine.
### Usage
To use LLMs from Groq, go to their [platform](https://console.groq.com/keys) and get the API key.
Set the API key as the `GROQ_API_KEY` environment variable, or pass it in your app configuration as shown in the example below.
<CodeGroup>
```python main.py
import os
from embedchain import App
# Set your API key here or pass it as an environment variable
groq_api_key = "gsk_xxxx"
config = {
    "llm": {
        "provider": "groq",
        "config": {
            "model": "mixtral-8x7b-32768",
            "api_key": groq_api_key,
            "stream": True
        }
    }
}
app = App.from_config(config=config)
# Add your data source here
app.add("https://docs.embedchain.ai/sitemap.xml", data_type="sitemap")
app.query("Write a poem about Embedchain")
# In the realm of data, vast and wide,
# Embedchain stands with knowledge as its guide.
# A platform open, for all to try,
# Building bots that can truly fly.
# With REST API, data in reach,
# Deployment a breeze, as easy as a speech.
# Updating data sources, anytime, anyday,
# Embedchain's power, never sway.
# A knowledge base, an assistant so grand,
# Connecting to platforms, near and far.
# Discord, WhatsApp, Slack, and more,
# Embedchain's potential, never a bore.
```
</CodeGroup>
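Alternatively, if you set the `GROQ_API_KEY` environment variable as mentioned above, you can omit `api_key` from the config. A minimal sketch under that assumption:
```python
import os
from embedchain import App
# assumption: the Groq provider picks up the key from the GROQ_API_KEY environment variable
os.environ["GROQ_API_KEY"] = "gsk_xxxx"
config = {
    "llm": {
        "provider": "groq",
        "config": {
            "model": "mixtral-8x7b-32768",
        }
    }
}
app = App.from_config(config=config)
```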
## NVIDIA AI
[NVIDIA AI Foundation Endpoints](https://www.nvidia.com/en-us/ai-data-science/foundation-models/) let you quickly use NVIDIA's AI models, such as Mixtral 8x7B and Llama 2, through an API. These models are available in the [NVIDIA NGC catalog](https://catalog.ngc.nvidia.com/ai-foundation-models), fully optimized and ready to use on NVIDIA's AI platform. They are designed for high speed and easy customization, ensuring smooth performance on any accelerated setup.
### Usage
To use LLMs from NVIDIA AI, create an account on the [NVIDIA NGC Service](https://catalog.ngc.nvidia.com/).
Generate an API key from their dashboard and set it as the `NVIDIA_API_KEY` environment variable. Note that the `NVIDIA_API_KEY` will start with `nvapi-`.
Below is an example of how to use an LLM and an embedding model from NVIDIA AI:
<CodeGroup>
```python main.py
import os
from embedchain import App
os.environ['NVIDIA_API_KEY'] = 'nvapi-xxxx'
config = {
    "app": {
        "config": {
            "id": "my-app",
        },
    },
    "llm": {
        "provider": "nvidia",
        "config": {
            "model": "nemotron_steerlm_8b",
        },
    },
    "embedder": {
        "provider": "nvidia",
        "config": {
            "model": "nvolveqa_40k",
            "vector_dimension": 1024,
        },
    },
}
app = App.from_config(config=config)
app.add("https://www.forbes.com/profile/elon-musk")
answer = app.query("What is the net worth of Elon Musk today?")
# Answer: The net worth of Elon Musk is subject to fluctuations based on the market value of his holdings in various companies.
# As of March 1, 2024, his net worth is estimated to be approximately $210 billion. However, this figure can change rapidly due to stock market fluctuations and other factors.
# Additionally, his net worth may include other assets such as real estate and art, which are not reflected in his stock portfolio.
```
</CodeGroup>
<br />
<Snippet file="missing-llm-tip.mdx" />