Running Llama 2 and other Open-Source LLMs on CPU Inference Locally for Document Q&A
Updated Nov 6, 2023 · Python
This repository is an experiment with an agent that searches documents and asks follow-up questions in response to the main question. It automatically determines the best answer from the current documents, or recognizes when no answer exists.
🐋 DeepSeek-R1: Retrieval-Augmented Generation for Document Q&A 📄
An advanced, fully local, and GPU-accelerated RAG pipeline. Features a sophisticated LLM-based preprocessing engine, state-of-the-art Parent Document Retriever with RAG Fusion, and a modular, Hydra-configurable architecture. Built with LangChain, Ollama, and ChromaDB for 100% private, high-performance document Q&A.
ContextAgent is a production-ready AI assistant backend with RAG, LangChain, and FastAPI. It ingests documents, uses OpenAI embeddings, and stores vectors in ChromaDB 🐙
RAG (Retrieval-Augmented Generation) system for technical documentation assistance in Spanish, using LangChain and OpenAI, with Streamlit for the visual interface
RAG chatbot designed for domain-specific queries using Ollama, LangChain, Phi-3, and FAISS
⚡️ Local RAG API using FastAPI + LangChain + Ollama | Upload PDFs, DOCX, CSVs, XLSX and ask questions using your own documents — fully offline!
AI assistant backend for document-based question answering using RAG (LangChain, OpenAI, FastAPI, ChromaDB). Features modular architecture, multi-tool agents, conversational memory, semantic search, PDF/Docx/Markdown processing, and production-ready deployment with Docker.
A Document QA chatbot using LangChain, Pinecone for vector storage, and Amazon Bedrock (mistral.mixtral-8x7b-instruct for LLM and titan-embed-text for embeddings). Built with a Streamlit frontend for document uploads and contextual question answering.
AI agent API (Python/FastAPI) to upload documents (PDF/TXT) and answer questions using RAG with Azure OpenAI and LangChain.
A full-stack RAG application that enables intelligent document Q&A. Upload PDFs, DOCX, or TXT files and ask questions powered by LangChain, ChromaDB, and Claude/GPT. Features smart chunking, semantic search, conversation memory, and source citations. Built with FastAPI & React + TypeScript.
Sub-second RAG-based chatbot for medical Q&A over 10+ textbooks with source-cited responses.
AI-powered commission plan assistant featuring advanced RAG pipeline, Model Context Protocol (MCP) PostgreSQL server integration, multi-format document processing, and secure SELECT-only database operations. Guided 3-phase plan creation with conversational interface.
AI-powered document Q&A using RAG, FastAPI, Streamlit, and OpenAI. Upload PDFs and ask questions about their content.
🤖 Production-ready RAG system with Docker AI local embeddings & Gemini 2.5. Enterprise document Q&A with role-based access, semantic search, and SOLID architecture. FastAPI + PostgreSQL + Weaviate. Free, private, offline-capable.
An intelligent document assistant powered by Open-Source Large Language Models
🔒 Local AI Agent - Offline RAG system for secure document Q&A with no external APIs
Cross-document QA leveraging an LLM and a taxonomy
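The projects above differ in stack (LangChain vs. bare FastAPI, ChromaDB vs. FAISS vs. Pinecone vs. Weaviate), but nearly all share the same retrieval core: split documents into chunks, embed each chunk, then rank chunks by similarity to the question and pass the top hits to an LLM as context. A minimal, dependency-free sketch of that step — with a toy bag-of-words vector standing in for a real embedding model, purely for illustration:

```python
# Toy illustration of RAG retrieval: chunk -> embed -> rank by cosine
# similarity. Real pipelines use a learned embedding model and a vector
# store; the shape of the computation is the same.
import math
import re
from collections import Counter


def chunk(text: str, size: int = 40, overlap: int = 10) -> list[str]:
    """Split text into overlapping word windows (a common chunking scheme)."""
    words = text.split()
    step = size - overlap
    return [" ".join(words[i:i + size])
            for i in range(0, max(len(words) - overlap, 1), step)]


def embed(text: str) -> Counter:
    """Toy 'embedding': lowercase bag-of-words counts (stand-in only)."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def retrieve(question: str, chunks: list[str], k: int = 2) -> list[str]:
    """Return the k chunks most similar to the question."""
    q = embed(question)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]
```

In the listed systems, the retrieved chunks are then stuffed into the LLM prompt ("answer using only the context below"), which is what turns plain retrieval into document Q&A; variants like RAG Fusion or parent-document retrieval refine the ranking step but keep this overall flow.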