Use the new GPT-4 API to build a ChatGPT chatbot for multiple large PDF files.
The tech stack includes LangChain, Pinecone, TypeScript, OpenAI, and Next.js. LangChain is a framework that makes it easier to build scalable AI/LLM apps and chatbots. Pinecone is a vector store for storing embeddings and your PDF text so that similar docs can be retrieved later.
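To make the retrieval idea concrete, here is a minimal sketch of connecting to an existing Pinecone index with LangChain and fetching the PDF chunks most similar to a question. The import paths and client methods assume the LangChain and Pinecone client versions this stack was built around and may differ from the exact versions pinned in this repo.

```typescript
// Illustrative sketch only (not a file in this repo): connect to an existing
// Pinecone index and retrieve the PDF chunks most similar to a question.
import { PineconeClient } from '@pinecone-database/pinecone';
import { OpenAIEmbeddings } from 'langchain/embeddings/openai';
import { PineconeStore } from 'langchain/vectorstores/pinecone';

async function demo() {
  const client = new PineconeClient();
  await client.init({
    apiKey: process.env.PINECONE_API_KEY!,
    environment: process.env.PINECONE_ENVIRONMENT!,
  });
  const pineconeIndex = client.Index(process.env.PINECONE_INDEX_NAME!);

  // Wrap the index in a LangChain vector store and run a similarity search.
  const vectorStore = await PineconeStore.fromExistingIndex(new OpenAIEmbeddings(), {
    pineconeIndex,
  });
  const similarDocs = await vectorStore.similaritySearch('What is this document about?', 4);
  console.log(similarDocs.map((doc) => doc.pageContent));
}

demo().catch(console.error);
```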
Join the Discord if you have questions.
The visual guide for this repo and tutorial is in the `visual-guide` folder.
If you run into errors, please review the troubleshooting section further down this page.
Prelude: Please make sure you have Node.js installed on your system and that the version is 18 or greater.
Clone the repo or download the ZIP
`git clone [github https url]`
Install packages
First run `npm install yarn -g` to install yarn globally (if you haven't already).
Then run `yarn install`.
After installation, you should now see a node_modules folder.
Set up the `.env` file: copy `.env.example` into `.env`.
Your .env file should look like this:
OPENAI_API_KEY=
PINECONE_API_KEY=
PINECONE_ENVIRONMENT=
PINECONE_INDEX_NAME=
Visit OpenAI to retrieve your API key and insert it into your `.env` file.
Visit Pinecone to create and retrieve your API key, and also retrieve your environment and index name from the dashboard.
In the `config` folder, replace `PINECONE_NAME_SPACE` with a namespace where you'd like to store your embeddings on Pinecone when you run `npm run ingest`. This namespace will later be used for queries and retrieval.
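For reference, the config file is roughly shaped like the sketch below; the `'pdf-test'` value is only a placeholder namespace, and the actual file in the repo may differ slightly.

```typescript
// Rough sketch of config/pinecone.ts (may differ slightly from the actual file).
// 'pdf-test' is a placeholder — replace it with your own namespace.
if (!process.env.PINECONE_INDEX_NAME) {
  throw new Error('Missing Pinecone index name in .env file');
}

const PINECONE_INDEX_NAME = process.env.PINECONE_INDEX_NAME ?? '';
const PINECONE_NAME_SPACE = 'pdf-test';

export { PINECONE_INDEX_NAME, PINECONE_NAME_SPACE };
```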
In `utils/makechain.ts`, change the `QA_PROMPT` for your own use case. Change `modelName` in `new OpenAI` to `gpt-4` if you have access to the `gpt-4` API. Please verify outside this repo that you have access to the `gpt-4` API, otherwise the application will not work.
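As a rough guide, the relevant part of `utils/makechain.ts` looks something like the sketch below. The prompt text is an example only, and the import path assumes the LangChain version this stack targets.

```typescript
// Minimal sketch of the model/prompt setup in utils/makechain.ts (not verbatim).
import { OpenAI } from 'langchain/llms/openai';

// Example QA prompt — adapt it to your own use case.
const QA_PROMPT = `You are a helpful AI assistant. Use the following pieces of context to answer the question at the end.
If you don't know the answer, just say you don't know. Don't try to make up an answer.

{context}

Question: {question}
Helpful answer:`;

// Switch modelName to 'gpt-4' only if your OpenAI account has gpt-4 API access;
// otherwise keep 'gpt-3.5-turbo'.
const model = new OpenAI({
  temperature: 0,
  modelName: 'gpt-3.5-turbo', // change to 'gpt-4' if you have access
});
```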
This repo can load multiple PDF files.
Inside the `docs` folder, add your PDF files or folders that contain PDF files.
Run the script `npm run ingest` to 'ingest' and embed your docs (a rough sketch of what this script does is shown after these steps). If you run into errors, see the troubleshooting section below.
Check the Pinecone dashboard to verify that your namespace and vectors have been added.
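For orientation, here is an abbreviated sketch of what the ingest step does under the hood: load PDFs from `docs`, split them into chunks, embed the chunks with OpenAI, and upsert them into your Pinecone namespace. The chunk sizes, namespace value, and import paths here are assumptions, not the repo's exact values.

```typescript
// Abbreviated sketch of the ingest flow (see the real script wired to `npm run ingest`).
import { PineconeClient } from '@pinecone-database/pinecone';
import { DirectoryLoader } from 'langchain/document_loaders/fs/directory';
import { PDFLoader } from 'langchain/document_loaders/fs/pdf';
import { RecursiveCharacterTextSplitter } from 'langchain/text_splitter';
import { OpenAIEmbeddings } from 'langchain/embeddings/openai';
import { PineconeStore } from 'langchain/vectorstores/pinecone';

export async function ingest() {
  // 1. Load every PDF under docs/
  const loader = new DirectoryLoader('docs', {
    '.pdf': (path) => new PDFLoader(path),
  });
  const rawDocs = await loader.load();

  // 2. Split into overlapping chunks so each embedding stays within token limits
  const splitter = new RecursiveCharacterTextSplitter({ chunkSize: 1000, chunkOverlap: 200 });
  const docs = await splitter.splitDocuments(rawDocs);

  // 3. Embed the chunks and upsert them into Pinecone under your namespace
  const client = new PineconeClient();
  await client.init({
    apiKey: process.env.PINECONE_API_KEY!,
    environment: process.env.PINECONE_ENVIRONMENT!,
  });
  const pineconeIndex = client.Index(process.env.PINECONE_INDEX_NAME!);

  await PineconeStore.fromDocuments(docs, new OpenAIEmbeddings(), {
    pineconeIndex,
    namespace: 'your-namespace', // should match PINECONE_NAME_SPACE in config
    textKey: 'text',
  });
}
```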
Once you've verified that the embeddings and content have been successfully added to your Pinecone index, you can run the app with `npm run dev` to launch the local dev environment, and then type a question in the chat interface.
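When you ask a question, the app (roughly speaking) builds a conversational retrieval chain over the same Pinecone namespace: it finds the most similar chunks, then asks the model to answer using them and the chat history. The sketch below is an approximation under those assumptions; the actual route and option names in this repo may differ.

```typescript
// Hedged sketch of the question-answer flow (not the repo's exact code).
import { OpenAI } from 'langchain/llms/openai';
import { ConversationalRetrievalQAChain } from 'langchain/chains';
import { OpenAIEmbeddings } from 'langchain/embeddings/openai';
import { PineconeStore } from 'langchain/vectorstores/pinecone';

export async function ask(
  pineconeIndex: any, // the Pinecone index handle created as in the earlier sketch
  question: string,
  history: [string, string][],
) {
  const vectorStore = await PineconeStore.fromExistingIndex(new OpenAIEmbeddings(), {
    pineconeIndex,
    namespace: 'your-namespace', // should match the namespace used at ingest time
  });

  const chain = ConversationalRetrievalQAChain.fromLLM(
    new OpenAI({ temperature: 0 }),
    vectorStore.asRetriever(),
    { returnSourceDocuments: true },
  );

  // chat_history lets the chain rephrase follow-up questions into standalone ones.
  const response = await chain.call({ question, chat_history: history });
  return response; // contains the answer text and the source documents
}
```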
In general, keep an eye on the issues and discussions sections of this repo for solutions.
General errors
- Run `node -v` and make sure your Node version is 18 or greater.
- `console.log` the env variables and make sure they are exposed.
- Make sure you have a `.env` file that contains your valid (and working) API keys, environment, and index name.
- If you change `modelName` in `OpenAI`, make sure you have access to the API for the appropriate model.
- Check that you don't have multiple OpenAI keys in your global environment; if you do, the local `.env` file from the project will be overwritten by the system's env variable.
- Try hard-coding your API keys into the `process.env` variables if there are still issues.
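If you suspect the env variables are the problem, a quick hypothetical check script like the one below (not part of the repo, assuming `dotenv` is available) can confirm whether each key is exposed before you debug further.

```typescript
// Hypothetical sanity check: log whether each required env variable is set.
import * as dotenv from 'dotenv';
dotenv.config();

const required = ['OPENAI_API_KEY', 'PINECONE_API_KEY', 'PINECONE_ENVIRONMENT', 'PINECONE_INDEX_NAME'];
for (const key of required) {
  console.log(`${key}: ${process.env[key] ? 'set' : 'MISSING'}`);
}
```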
Pinecone errors
- Make sure your Pinecone environment and index match the ones in `pinecone.ts` and the `.env` file.
- Check that you've set the vector dimensions to `1536`.

Credit: The frontend of this repo is inspired by langchain-chat-nextjs.