Nvidia has jumped into the race to create AI-enabled chatbots alongside companies like OpenAI, Google, and Microsoft. The chip maker has launched its first AI-powered chatbot, Chat with RTX, which runs locally and does not require an internet connection. Instead, users download the LLM to their PC and converse with Chat with RTX entirely on-device.
What Is Chat With RTX?
Chat with RTX is a demo app that lets you personalize a GPT large language model connected to your content, such as docs, notes, videos, or other data. “Leveraging retrieval-augmented generation (RAG), TensorRT-LLM, and RTX acceleration, you can query a custom chatbot to quickly get contextually relevant answers,” says Nvidia on its official website.
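To make the RAG idea concrete, here is a minimal sketch of the pattern in Python. The toy word-overlap retriever, the prompt format, and the sample documents are illustrative assumptions, not Nvidia's implementation, which pairs RAG with TensorRT-LLM inference on the GPU.

```python
# Minimal sketch of the RAG pattern Chat with RTX uses; not Nvidia's code.
# A real pipeline would use vector embeddings for retrieval and run the
# generation step through a local, GPU-accelerated LLM.

def tokenize(text: str) -> set[str]:
    return set(text.lower().split())

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank local documents by word overlap with the query (toy retriever)."""
    q = tokenize(query)
    ranked = sorted(documents, key=lambda d: len(q & tokenize(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Augment the user's question with the retrieved local content."""
    joined = "\n".join(context)
    return f"Answer using only this context:\n{joined}\n\nQuestion: {query}"

docs = [
    "Q3 revenue grew 12% driven by data-center sales.",
    "The office cafeteria menu changes every Monday.",
]
prompt = build_prompt("How did revenue change in Q3?",
                      retrieve("revenue Q3 growth", docs))
print(prompt)  # this augmented prompt would then go to the local LLM
```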
How Can One Use Chat With RTX?
The only downside of the chatbot could be the absence of real-time information. Since Chat with RTX doesn't connect to the internet, it can't fetch citations or references from the web. However, users can feed it large volumes of work data and ask it to create summaries, highlight patterns in the data, or answer specific questions. In that sense, Nvidia's AI chatbot could be an impressive research tool.
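As a hedged illustration of that workflow, the short Python helper below stages a folder of work documents in one place for the app to index. The folder paths and the PDF file type are assumptions for the example; Chat with RTX simply lets you point it at a directory of local files.

```python
# Hypothetical helper: gather work documents into one folder that Chat with
# RTX can be pointed at for indexing. Paths here are assumptions.
from pathlib import Path
import shutil

source = Path.home() / "Documents" / "reports"      # where your files live
dataset = Path.home() / "ChatWithRTX_dataset"       # hypothetical target folder
dataset.mkdir(exist_ok=True)

for doc in source.glob("*.pdf"):
    shutil.copy(doc, dataset / doc.name)            # stage files for indexing

print(f"Staged {len(list(dataset.iterdir()))} files for the chatbot to index.")
```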
Minimum System Requirements
The minimum PC requirements for running Chat with RTX are a Windows 11 operating system, an Nvidia GeForce RTX 30- or 40-series GPU with at least 8GB of VRAM, 16GB or more of RAM, and Nvidia driver version 535.11 or later. Interested users can download the software from Nvidia's official website.
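For readers who want to verify their hardware before the large download, the sketch below queries nvidia-smi, which ships with the Nvidia driver. The query fields are standard nvidia-smi keys; the 8GB and 535.11 thresholds come from the requirements above.

```python
# Check GPU VRAM and driver version against the stated minimums via nvidia-smi.
import subprocess

out = subprocess.check_output(
    ["nvidia-smi", "--query-gpu=name,memory.total,driver_version",
     "--format=csv,noheader"],
    text=True,
).strip()
name, mem, driver = [field.strip() for field in out.split(",")]

vram_mib = int(mem.split()[0])                      # e.g. "8192 MiB" -> 8192
driver_parts = [int(p) for p in driver.split(".")]  # e.g. "546.33" -> [546, 33]

print(f"GPU: {name}, VRAM: {vram_mib} MiB, driver: {driver}")
print("VRAM OK:", vram_mib >= 8 * 1024)
print("Driver OK:", driver_parts >= [535, 11])
```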
According to a report by The Verge, Chat with RTX is essentially a web server paired with a Python instance. After downloading it, users have to download the Mistral or Llama 2 models separately, which the app then applies to the data provided by the user. The publication also states that the Chat with RTX download is about 40GB in size.
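To illustrate the architecture The Verge describes, here is a minimal sketch of a local web server wrapping a Python model process. Flask and the placeholder run_local_llm() are illustrative assumptions, not Nvidia's actual stack, which serves its browser UI over TensorRT-LLM inference.

```python
# Sketch of a local "web server with a Python instance"; not Nvidia's code.
from flask import Flask, request, jsonify

app = Flask(__name__)

def run_local_llm(prompt: str) -> str:
    """Placeholder for local inference with the downloaded Mistral/Llama 2 weights."""
    return f"(model output for: {prompt!r})"

@app.route("/chat", methods=["POST"])
def chat():
    prompt = request.get_json()["prompt"]          # question from the browser UI
    return jsonify(answer=run_local_llm(prompt))   # everything stays on-device

if __name__ == "__main__":
    app.run(host="127.0.0.1", port=8080)           # local only; no internet needed
```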