VS Code extension: add files to the context, then ask GPT-3.5 a question along with all the files you pass into it, up to 16k tokens

gpt-contextfiles

** Currently in development; if you'd like to contribute or provide any feedback, check out the link **

I was annoyed with copying and pasting code into ChatGPT and other LLMs to debug my code across files, so I decided to make an extension that does that for me.

You simply right-click each file you want to pass through, check or uncheck its checkbox, then enter your question; the extension sends the question and file contents to your LLM over the API.

This extension uses the OpenAI API; there are many models available:

https://openai.com/pricing

However, since this project is oriented around passing whole files, it defaults to the 16k-token context window of gpt-3.5-turbo-16k.

If you wish to change the model, you must edit the model name in the extension.js file:

https://platform.openai.com/docs/models/gpt-3-5
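As a rough sketch of what that edit looks like (the actual structure of extension.js may differ; `buildRequestBody` is an illustrative name, not the extension's real API), the model is just a string in the chat-completion request body:

```javascript
// Illustrative sketch only: shows where the model name sits in a
// chat-completion request body. The real code in extension.js may differ.
function buildRequestBody(prompt, model = 'gpt-3.5-turbo-16k') {
  return {
    model, // swap this string, e.g. 'gpt-4', if your key has access
    messages: [{ role: 'user', content: prompt }],
  };
}
```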

Installation

Add your API key to the OPENAI_API_KEY environment variable on Windows/Linux (tested with a system variable)
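For example, on Linux the variable can be set like this (the key value below is a placeholder, not a real key; on Windows you can use the System Properties dialog or `setx` instead):

```shell
# Linux/macOS: set for the current session; add to ~/.bashrc to persist
export OPENAI_API_KEY="sk-your-key-here"

# Windows (cmd): setx persists the variable for future sessions
# setx OPENAI_API_KEY "sk-your-key-here"
```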

Features

Clear -> Clears the files currently in the context window

Submit -> Submits the query to the API

Refresh -> Refreshes the window so that newly added files become available for the session.

  • Right-click a file to add it to the context window
  • Click the extension icon to open the context window; press Refresh to update the list of files to check.
  • Select the files using the checkboxes
  • After pressing Submit, wait until the question disappears; this means the query was fully sent to and processed by OpenAI
  • Click API Response to view the answer to your query

Examples

Demo of how to use the extension:

How it works

We can select two files to pass through, uncheck one of them for later debugging, and enter our question:

What does this do?
c:\dev\test\gpt-contextfiles-test\program.js:
```
	window.alert("Hello World!")
```

The extension operates on the files passed into it:

Selected Files:
[x] c:\dev\test\gpt-contextfiles-test\program.js
[ ] c:\dev\test\gpt-contextfiles-test\program2.js