OpenAI API Starting Guide
Getting Started with OpenAI API
This guide walks you through the basics of using OpenAI’s API, including setting up an account, securely handling API keys, understanding tokens, roles, system messages, models, prompt best practices, API calls, rate limits, advanced parameters, interpreting API responses, and running your programs.
Step 1: Sign Up for OpenAI
First, you need an OpenAI account to access the API. Head over to OpenAI’s Sign-Up Page and create an account. Once you have an account, you can generate an API key, which you’ll need to access OpenAI’s services.
Step 2: Get Your API Key
After signing up, navigate to your API Keys page to generate your API key.
Important: Never hardcode your API key directly into your code, especially if you are using version control systems like Git. Even if you remove it later, the key can remain in the version history, and someone could retrieve it. Always store it securely and avoid sharing it publicly.
Step 3: What Does Exporting an API Key Do?
Exporting your API key as an environment variable allows your programs to access it securely without including it directly in your code. When you set an environment variable, your operating system keeps the key in that session's environment (in memory) rather than in your source files. The OpenAI library looks for the environment variable `OPENAI_API_KEY` to authenticate your API requests.
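If you'd like to see how this works from Python's side, here is a minimal sketch. It assumes you've already exported the variable in the same shell session; the explicit `api_key` argument is shown only as a commented-out alternative and should be avoided in committed code.

```python
import os
from openai import OpenAI

# The exported key is an ordinary environment variable from Python's point of view.
print("Key present:", os.environ.get("OPENAI_API_KEY") is not None)

# The OpenAI client reads OPENAI_API_KEY automatically; no key appears in your code.
client = OpenAI()

# Passing the key explicitly also works, but avoid hardcoding it in files you commit:
# client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
```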
Step 4: Exporting Your API Key (Windows, macOS, Linux)
Here’s how to export your API key on different operating systems:
Windows (Command Prompt)
- Open Command Prompt: Press `Win + R`, type `cmd`, and hit Enter.
- Set the environment variable:
  setx OPENAI_API_KEY "your-api-key-here"
- Close and reopen the Command Prompt to ensure the changes take effect.
macOS/Linux (Bash Shell)
- Open Terminal.
- Export the API key:
  export OPENAI_API_KEY="your-api-key-here"
- To make it persistent across sessions, add the export command to your `~/.bashrc` or `~/.bash_profile` file:
  echo 'export OPENAI_API_KEY="your-api-key-here"' >> ~/.bashrc
- Then reload your shell:
  source ~/.bashrc
macOS/Linux (Zsh Shell)
If you’re using Zsh (default shell in recent macOS versions):
- Open Terminal.
- Export the API key:
  export OPENAI_API_KEY="your-api-key-here"
- To make it persistent, add the export command to your `~/.zshrc` file:
  echo 'export OPENAI_API_KEY="your-api-key-here"' >> ~/.zshrc
- Then reload your shell:
  source ~/.zshrc
macOS/Linux (Fish Shell)
If you’re using the Fish shell:
- Open Terminal.
- Set the environment variable:
  set -Ux OPENAI_API_KEY "your-api-key-here"
Step 5: What Is an API Call?
An API call is a request made by your program to an external service (in this case, OpenAI’s servers) to perform a specific action, like generating text. Think of the API as a waiter in a restaurant: you place an order (API call), and the waiter (API) brings you what you asked for (response).
When you use OpenAI’s API, your program sends a request containing your prompt to OpenAI’s servers. The servers process your request using advanced language models and send back a response generated by the model.
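Under the hood, an API call is just an HTTPS request. The sketch below, using the optional `requests` package (`pip install requests`), shows roughly what the OpenAI library sends on your behalf; you won't normally write this yourself, and the endpoint and payload follow the public Chat Completions API.

```python
import os
import requests

# Roughly what the OpenAI library does for you: an HTTPS POST to OpenAI's servers.
response = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={
        "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        "Content-Type": "application/json",
    },
    json={
        "model": "gpt-4o-mini",
        "messages": [{"role": "user", "content": "Say hello!"}],
    },
)

# The server answers with JSON containing the generated text.
print(response.json()["choices"][0]["message"]["content"])
```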
Step 6: Install Python (if you haven’t already)
If you don’t have Python installed on your computer, you’ll need to install it.
- Windows and macOS:
  - Download Python: Get the latest version from the official website.
  - Install Python: Run the installer and follow the prompts. Make sure to check the option to Add Python to PATH during installation.
- Linux:
  - Check if Python is installed:
    python --version
    If that doesn't work, try:
    python3 --version
  - Install Python (if not installed). For Debian/Ubuntu-based systems:
    sudo apt-get update && sudo apt-get install python3
    For Arch Linux:
    sudo pacman -Syu python
Step 7: Install the OpenAI Python Library
Open a terminal or command prompt and install the OpenAI Python library using `pip`:
pip install openai
If `pip` is not found, you may need to use `pip3` or ensure that Python and pip are correctly installed.
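To confirm the library installed correctly, a quick check like this should print the installed version (the exact version number will vary):

```python
import openai

# If this runs without an ImportError, the library is installed.
print(openai.__version__)
```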
Step 8: Writing Your First Program
Create a new file named openai_funny_example.py
and open it in a text editor or IDE (like Visual Studio Code, Sublime Text, or even Notepad).
Here’s a minimal and fun example:
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a stand-up comedian who tells dad jokes."},
        {"role": "user", "content": "Tell me a joke about computers."}
    ]
)

print("Assistant:", response.choices[0].message.content)
Explanation:
- Importing OpenAI: You import the OpenAI library to interact with the API.
- Creating a Client: You create a client instance. Since you exported the API key, the library automatically uses it.
- Making an API Call: You use `client.chat.completions.create()` to send a request to the API.
- Messages: You provide a list of messages that define the conversation.
  - System Message: Sets the assistant's behavior as a stand-up comedian who tells dad jokes.
  - User Message: Your question or prompt.
- Printing the Response: You read the reply from `response.choices[0].message.content` and print it.
Step 9: Running Your Program
To run your Python program:
- Windows:
  - Open Command Prompt: Press `Win + R`, type `cmd`, and hit Enter.
  - Navigate to your script's directory:
    cd path\to\your\script
  - Run the program:
    python openai_funny_example.py
- macOS/Linux:
  - Open Terminal.
  - Navigate to your script's directory:
    cd /path/to/your/script
  - Run the program:
    python openai_funny_example.py
    If that doesn't work, you might need to use `python3`:
    python3 openai_funny_example.py
You should see the assistant’s response printed in the terminal.
Step 10: Understanding Tokens
Tokens are the pieces of text that the model processes. Roughly, 1 token is about 4 characters or 0.75 words in English. Tokens can be whole words or just parts of a word.
- Example:
  - The word “fantastic” might be split into “fan”, “tas”, “tic” (3 tokens).
  - The phrase “I love pizza!” is about 4 tokens.
You’re charged based on the number of tokens used in your requests and the responses. Monitoring token usage helps manage costs.
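If you want to count tokens yourself before sending a request, OpenAI's optional `tiktoken` package (`pip install tiktoken`) can do it locally. A rough sketch, assuming a recent tiktoken release that knows the gpt-4o-mini encoding (older releases can fall back to the `o200k_base` encoding used by the gpt-4o family):

```python
import tiktoken

# Look up the tokenizer for the model; fall back to the gpt-4o family encoding.
try:
    enc = tiktoken.encoding_for_model("gpt-4o-mini")
except KeyError:
    enc = tiktoken.get_encoding("o200k_base")

for text in ["fantastic", "I love pizza!"]:
    tokens = enc.encode(text)
    print(f"{text!r} -> {len(tokens)} tokens")
```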
Step 11: What are Roles, System Messages, and Assistants?
Roles:
- System: Sets the context or behavior of the assistant.
- User: Your input or question.
- Assistant: The AI’s response.
Example:
messages=[
    {"role": "system", "content": "You are a pirate who speaks in 'pirate lingo'."},
    {"role": "user", "content": "What's the weather like today?"}
]
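The assistant role matters when you carry on a conversation: you append each reply back into `messages` before the next call, so the model sees the whole history. A minimal sketch, reusing the client setup from Step 8 (the follow-up question is just an illustration):

```python
from openai import OpenAI

client = OpenAI()
messages = [
    {"role": "system", "content": "You are a pirate who speaks in 'pirate lingo'."},
    {"role": "user", "content": "What's the weather like today?"},
]

# First turn: ask the question and record the assistant's answer.
first = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
reply = first.choices[0].message.content
messages.append({"role": "assistant", "content": reply})

# Second turn: the model now sees the earlier exchange as context.
messages.append({"role": "user", "content": "And what should I wear?"})
second = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(second.choices[0].message.content)
```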
Advanced Parameters (For Advanced Users)
To fine-tune the assistant’s responses, you can use advanced parameters:
- max_tokens: The maximum number of tokens to generate in the response.
- top_p: Controls diversity via nucleus sampling; 0.5 means half of all likelihood-weighted options are considered.
- frequency_penalty: How much to penalize new tokens based on their existing frequency in the text so far.
- presence_penalty: How much to penalize new tokens based on whether they appear in the text so far.
Example:
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=messages,
    max_tokens=50,
    top_p=0.9,
    frequency_penalty=0.5,
    presence_penalty=0.0
)
- max_tokens: Limits the response to 50 tokens.
- top_p: Encourages more diverse responses.
- frequency_penalty: Reduces repetition in the generated text.
Step 12: Understanding the Response
The response from the API includes several fields:
{
  "choices": [
    {
      "message": {
        "role": "assistant",
        "content": "Why did the computer go to the doctor? Because it had a virus, arrr!"
      },
      "finish_reason": "stop",
      "index": 0
    }
  ],
  "usage": {
    "prompt_tokens": 25,
    "completion_tokens": 15,
    "total_tokens": 40
  }
}
Explanation of Fields:
- choices: Contains the assistant’s response(s).
  - message: The assistant’s reply.
    - role: Should be “assistant” for responses.
    - content: The text generated by the assistant.
  - finish_reason: Indicates why the response ended (e.g., “stop”, “length”).
  - index: The position in the list of choices (useful if you request multiple responses).
- usage: Helps you track how many tokens you’re using.
  - prompt_tokens: Tokens used in your messages.
  - completion_tokens: Tokens used in the assistant’s reply.
  - total_tokens: Total tokens consumed in this API call.
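With the Python library, you don't parse this JSON yourself: the response object exposes the same fields as attributes. A short sketch, assuming `response` came from one of the `client.chat.completions.create()` calls above:

```python
# The same fields, accessed through the response object.
choice = response.choices[0]
print(choice.message.role)          # "assistant"
print(choice.message.content)       # the generated text
print(choice.finish_reason)         # e.g. "stop" or "length"

print(response.usage.prompt_tokens)
print(response.usage.completion_tokens)
print(response.usage.total_tokens)
```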
Step 13: Understanding OpenAI Models
OpenAI offers several models:
- gpt-4o: The main, most capable model for complex tasks.
- gpt-4o-mini: A lighter, cheaper version suitable for simpler tasks.
Choose the model based on your needs and budget.
Step 14: Temperature and Response Control
The temperature parameter controls randomness:
- Low Values (e.g., 0.1): More focused and deterministic responses.
- High Values (e.g., 1.0): More creative and varied responses.
Example:
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=messages,
    temperature=0.8
)
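To see the effect for yourself, you can send the same prompt at a low and a high temperature and compare the answers. A quick sketch, reusing the client from Step 8 (the prompt is just an example):

```python
# Same question, two temperatures: the low setting tends to repeat itself,
# while the high setting varies more from run to run.
for temp in (0.1, 1.0):
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Suggest a name for a pet goldfish."}],
        temperature=temp,
    )
    print(f"temperature={temp}: {response.choices[0].message.content}")
```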
Step 15: Rate Limits and Usage Tiers
OpenAI limits how many requests you can make based on your usage tier:
| Tier | Qualification | Usage Limits per Month |
|------|---------------|------------------------|
| Free | Allowed geography | $100 |
| Tier 1 | $5 paid | $100 |
| Tier 2 | $50 paid and 7+ days since first payment | $500 |
| Tier 3 | $100 paid and 7+ days since first payment | $1,000 |
| Tier 4 | $250 paid and 14+ days since first payment | $5,000 |
| Tier 5 | $1,000 paid and 30+ days since first payment | $50,000 |
As your usage increases, OpenAI automatically upgrades you to the next tier. You can view your rate limits and current usage in your account settings.
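If you do hit a rate limit, the library raises an error you can catch and retry. A minimal sketch, assuming the v1 `openai` library (which raises `openai.RateLimitError` on HTTP 429) and a simple exponential backoff; `ask_with_retry` is just an illustrative helper name:

```python
import time
from openai import OpenAI, RateLimitError

client = OpenAI()

def ask_with_retry(messages, retries=3):
    """Call the API, waiting and retrying a few times if we're rate limited."""
    for attempt in range(retries):
        try:
            return client.chat.completions.create(
                model="gpt-4o-mini",
                messages=messages,
            )
        except RateLimitError:
            time.sleep(2 ** attempt)  # back off: 1s, 2s, 4s
    raise RuntimeError("Still rate limited after several retries")
```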
Step 16: Complete Example
Let’s put it all together with a complete example that uses everything you’ve learned.
Program:
from openai import OpenAI

client = OpenAI()

# Define the messages
messages = [
    {"role": "system", "content": "You are a witty assistant who tells funny stories about animals."},
    {"role": "user", "content": "Tell me a story about a cat who thinks it's a dog."}
]

# Make the API call with advanced parameters
response = client.chat.completions.create(
    model="gpt-4o",
    messages=messages,
    temperature=0.9,
    max_tokens=150,
    top_p=0.95,
    frequency_penalty=0.5,
    presence_penalty=0.0
)

# Print the assistant's response
print("Assistant:", response.choices[0].message.content)

# Print token usage
print("\nToken Usage:")
print("Prompt tokens:", response.usage.prompt_tokens)
print("Completion tokens:", response.usage.completion_tokens)
print("Total tokens:", response.usage.total_tokens)
How to Run:
- Ensure your API key is exported as an environment variable.
- Save the code in a file named `funny_story.py`.
- Open a terminal or command prompt.
- Navigate to the directory containing the file.
- Run the program:
  - Windows:
    python funny_story.py
  - macOS/Linux:
    python funny_story.py
    If that doesn't work, try:
    python3 funny_story.py
Expected Output:
Assistant: Once upon a time, in a cozy little house, lived a cat named Whiskers. But Whiskers wasn't like other cats; he was absolutely convinced he was a dog. While other cats gracefully walked along fences, Whiskers tried to fetch sticks—though he mostly ended up tangled in them. He'd bark (or at least attempt to) at the mailman, much to the confusion of everyone. One day, the neighborhood dogs invited him to a doggy gathering. Whiskers showed up wagging his tail (which cats don't typically do), and though he couldn't quite keep up with the games of fetch, he became the life of the party. The dogs admired his climbing skills, and Whiskers taught them how to sneak treats from high places. In the end, Whiskers realized that being a cat who thinks he's a dog isn't so bad, especially when you have friends who accept you just the way you are.
Token Usage:
Prompt tokens: 35
Completion tokens: 145
Total tokens: 180
This example demonstrates:
- Importing OpenAI and creating a client.
- Defining messages with roles and content.
- Using advanced parameters like `temperature`, `max_tokens`, `top_p`, `frequency_penalty`, and `presence_penalty`.
- Making an API call and handling the response.
- Printing both the assistant’s response and the token usage.
Step 17: Troubleshooting Tips
- Module Not Found Error: If you get an error saying `ModuleNotFoundError: No module named 'openai'`, make sure you've installed the OpenAI library using `pip install openai`.
- API Key Errors: If you receive authentication errors, double-check that your API key is correctly exported as an environment variable and that there are no typos.
- Network Issues: Ensure you have an active internet connection, as the API call requires internet access.
- Python Version Compatibility: Ensure you’re using a compatible version of Python (Python 3.6 or newer is recommended).
- Indentation Errors: Python is sensitive to indentation. Ensure your code is properly indented.
- Environment Variable Not Found: If your program can’t find the API key, make sure you’ve exported it in the same shell session or that the export is persistent.
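For the last point, a quick way to confirm the key is visible to Python in the current shell session:

```python
import os

# Check whether the exported key is visible to this Python process.
key = os.environ.get("OPENAI_API_KEY")
if key:
    print("OPENAI_API_KEY is set (starts with:", key[:8] + "...)")
else:
    print("OPENAI_API_KEY is not set in this shell session.")
```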
Step 18: Next Steps
Now that you’ve made your first API call, you can experiment with:
- Different Models: Try using `gpt-4o-mini` for cost-effective solutions or `gpt-4o` for more complex tasks.
- Custom Prompts: Change the system and user messages to see how the assistant responds.
- Advanced Parameters: Adjust `max_tokens`, `top_p`, `frequency_penalty`, and `presence_penalty` to fine-tune responses.
- Building Applications: Integrate the API into a web application, chatbot, or other projects (see the sketch after this list for a tiny starting point).
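As a starting point for building applications, here is a minimal sketch of a terminal chatbot that keeps the conversation history in `messages`. It only uses what this guide has covered; the exit words are arbitrary.

```python
from openai import OpenAI

client = OpenAI()
messages = [{"role": "system", "content": "You are a helpful assistant."}]

while True:
    user_input = input("You: ")
    if user_input.lower() in {"quit", "exit"}:
        break

    # Add the user's turn, ask the model, and remember its reply.
    messages.append({"role": "user", "content": user_input})
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=messages,
    )
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    print("Assistant:", reply)
```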
You’re now equipped with the basics of using OpenAI’s API. Whether you’re building a chatbot, generating creative content, or analyzing data, the OpenAI API is a powerful tool. Happy coding!