Image: Nvidia
Nvidia is updating its experimental ChatRTX chatbot with more AI models for RTX GPU owners. The chatbot, which runs locally on a Windows PC, can already use Mistral or Llama 2 to query personal documents that you feed into it, but now the list of supported AI models is growing to include Google’s Gemma, ChatGLM3, and even OpenAI’s CLIP model to make it easier to search your photos.
Nvidia first introduced ChatRTX in February as a demo app called “Chat with RTX,” and you’ll need an RTX 30- or 40-series GPU with 8GB of VRAM or more to run it. The app essentially creates a local chatbot server that you can access from a browser, and you can feed it your local documents and even YouTube videos to get a powerful search tool complete with summaries…