ChatGPT And Python: Some Basic Stuff

[Image: DALL-E 3]

So let's say you have a project where you want to write some code, but you need some input from the chatbot somewhere in your execution. Maybe you want a sentence translated, or maybe you are trying to answer a question or analyze some data.

So the first part of this, of course, is that you need to generate an API key:

  • Log into https://platform.openai.com/apps
  • Select ‘API’ from the two choices.
  • You should now be on the OpenAI developer platform. Choose ‘API keys’ in the left menu.
  • Choose ‘Create new secret key’ and copy it somewhere safe, as you won’t see it again (one way to stash it is sketched below).
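Speaking of keeping it somewhere safe: one common option is to export the key as an environment variable and read it at runtime instead of hardcoding it. A minimal sketch, assuming you’ve exported a variable named OPENAI_API_KEY in your shell (the name is just a convention, though it happens to be the one the openai library also looks for by default):

import os

# Assumes OPENAI_API_KEY was exported in the shell before running this script.
api_key = os.environ.get("OPENAI_API_KEY")
if not api_key:
    raise SystemExit("OPENAI_API_KEY is not set")

print("Key loaded, length:", len(api_key))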

Now there are already a number of ways to use this key in a Python application, but we will stick with the official OpenAI library for now. Feel free to look for alternatives though – it seems everyone is out there building their own implementations for this sort of thing these days.

I’m going to assume you already have Python installed here. Running ‘pip install openai’ should be enough to get the library onto your machine, or into a venv if that’s what you prefer. The docs at https://pypi.org/project/openai/ indicate we need Python 3.7+. If you are running something older than that… shame on you. (Also take note of the async example provided in the pip docs if that floats your boat – rate limits are here.)
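If you want to double-check what you actually ended up with before going further, here is a quick sanity-check sketch (nothing beyond the standard library and the openai package itself):

import sys
import openai

# The pip docs say Python 3.7+; print both versions so there are no surprises later.
print("Python:", sys.version.split()[0])
print("openai:", openai.__version__)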

So the fun part of this is that we can ask GPT itself to give us some example code to work with. Let's see what it comes up with:

import openai

# Replace 'your-api-key-here' with your actual OpenAI API key
openai.api_key = 'your-api-key-here'

response = openai.Completion.create(
    engine="text-davinci-003",  # This specifies which GPT model to use
    prompt="Translate the following English text to French: 'Hello, how are you?'",
    max_tokens=60
)

print(response.choices[0].text.strip())

OK. Looks like we will want to use model ‘gpt-4’ instead of ‘text-davinci-003’. Models are listed here, but the list seems to change often, so keep an eye on it. An explanation of ‘max_tokens’ is here.
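As a side note, if you want a concrete feel for what a ‘token’ actually is before fiddling with max_tokens, the separate tiktoken package can count them for you. A quick sketch, assuming you’ve run ‘pip install tiktoken’ first:

import tiktoken

# tiktoken is OpenAI's tokenizer library (installed separately).
# max_tokens and the usage/billing numbers are all measured in these units.
enc = tiktoken.encoding_for_model("gpt-4")
prompt = "Translate the following English text to French: 'Hello, how are you?'"
print(len(enc.encode(prompt)), "tokens")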

Let's run the code above just to verify it even ‘works’, replacing the engine arg with ‘gpt-4’:

…<snip stacktrace> You tried to access openai.Completion, but this is no longer supported in openai>=1.0.0 – see the README at https://github.com/openai/openai-python for the API

Not cool. I guess we can’t trust GPT yet to tell us how to use its own library? A question for another day: how does ChatGPT decide which information is more relevant with respect to date? Anything technical should prefer more recent data, right? Seems they need to work on that! (Might just be the April 2023 training cutoff at play here, though.)

[Image: DALL-E 3]

OK – let's fix this up and add argparse, because I can’t stand seeing credentials inside my code.

from openai import OpenAI
import argparse

parser = argparse.ArgumentParser(description="API Key Example")
parser.add_argument("--api_key", required=True, help="API Key for authentication with OpenAI")
args = parser.parse_args()


API_KEY = args.api_key
chat_client = OpenAI(api_key=API_KEY)

def main():
    chat_completion = chat_client.chat.completions.create(
        messages=[
            {
                "role": "user",
                "content": "Translate the following English text to French: 'Hello, how are you?'",
            }
        ],
        model="gpt-4",
    )

if __name__ == "__main__":
    main()
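To try it out, save that as something like translate.py (the filename is arbitrary) and run ‘python translate.py --api_key <your-key>’. Note that as written, main() never actually prints anything – we’ll poke at the response object shortly.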

However, this returns “The model gpt-4 does not exist or you do not have access to it.”
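One way to see which models a given key can actually reach is to ask the API itself. A quick sketch (standalone here, but you could just as easily call chat_client.models.list() inside the script above):

from openai import OpenAI

# List the models this key has access to. The client reads OPENAI_API_KEY
# from the environment, or you can pass api_key=... explicitly as we did above.
client = OpenAI()
for model in client.models.list():
    print(model.id)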

Trying ‘gpt-3.5-turbo’ gives me a different error:

‘You exceeded your current quota, please check your plan and billing details.’

Well, that’s sad. Twenty bucks a month isn’t enough to get even a FEW API calls? Really? Why did the UI let me create an API key that I cannot use? Digging further into the OpenAI help docs:

Is the ChatGPT API included in the ChatGPT Plus subscription?

  1. No, the ChatGPT API and ChatGPT Plus subscription are billed separately. The API has its own pricing, which can be found at https://openai.com/pricing. The ChatGPT Plus subscription covers usage on chat.openai.com only and costs $20/month.

Well that’s annoying. I get it, but really the sub should include a certain number of API calls.

Ten dollars later, “for science”, and also printing out the chat_completion message…

…..content=’Bonjour, comment allez-vous ?’…..

….usage=CompletionUsage(completion_tokens=7, prompt_tokens=22, total_tokens=29)….
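Those usage numbers are enough to do rough cost accounting yourself. A sketch, with the per-token rates as placeholders – plug in whatever https://openai.com/pricing currently lists for the model you’re using:

# Placeholder per-token rates in dollars: NOT current prices, just example
# values to show the arithmetic. Check https://openai.com/pricing.
PROMPT_RATE = 0.00003
COMPLETION_RATE = 0.00006

usage = chat_completion.usage  # the CompletionUsage object shown above
cost = usage.prompt_tokens * PROMPT_RATE + usage.completion_tokens * COMPLETION_RATE
print(f"{usage.total_tokens} tokens, roughly ${cost:.6f}")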

Hmm – OK. I guess I can track costs that way as well. About a penny there. Switching to ‘gpt-4-1106-preview’ seems to work too, though the answer comes back slightly different.

Now what I want to know is: can I get it (gpt-4) to browse the web for me and then return a result through the API? Let’s try something simple like ‘browse the web and tell me what the current price is for MSFT stock’. Testing shows that the 1106-preview cannot do that. In fact, none of the models appear able to use the browsing features that are currently built into the web UI for the chatbot.

If I didn’t have a day job I might try to build a library that uses Selenium to ask the browsing questions through the web UI, but I will leave things here for now. The API is probably much more stable than the browser chat, which often seems to have load issues and/or need a refresh.