Building AI-Powered Apps in Python — A Step-by-Step Guide
This tutorial provides a practical guide to building AI-powered applications using Python and the OpenAI SDK. We'll cover installation, authentication, core functionality like chat completions, and real-world examples, all while emphasizing best practices for error handling and managing rate limits. The OpenAI SDK provides a convenient interface to interact with OpenAI's powerful language models.
Before diving in, you'll need to install the OpenAI Python SDK. This can be easily done using pip: pip install openai. You'll also need a valid OpenAI API key; sign up for an account at https://platform.openai.com/ to obtain one. The SDK talks to https://api.openai.com/v1 by default, so you only need to set the base_url parameter if you're routing requests through a proxy or an API-compatible alternative service.
Authentication is crucial for accessing the OpenAI API: your API key must accompany every request. The openai.OpenAI() constructor takes care of attaching it to the request headers when you pass the api_key parameter.
import openai

api_key = "YOUR_OPENAI_API_KEY"  # Replace with your actual key
client = openai.OpenAI(
    api_key=api_key,
    base_url="https://api.openai.com/v1"  # the default endpoint; override only if using a proxy
)
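Hardcoding a key in source code risks leaking it through version control. A common alternative is to read it from an environment variable instead. A minimal sketch, assuming the conventional OPENAI_API_KEY variable name (which the SDK also picks up automatically when api_key is omitted):

```python
import os

def load_api_key(env_var="OPENAI_API_KEY"):
    """Return the API key from the environment, or None if it isn't set."""
    return os.environ.get(env_var)

# Usage: client = openai.OpenAI(api_key=load_api_key())
```

Returning None (rather than raising) lets the caller decide how to handle a missing key, for example by falling back to a prompt or a config file.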
Chat Completions Example
Let's start with a fundamental example: generating chat completions. This involves sending a conversation history to the model and receiving a response.
import openai

api_key = "YOUR_OPENAI_API_KEY"  # Replace with your actual key
client = openai.OpenAI(
    api_key=api_key,
    base_url="https://api.openai.com/v1"  # the default endpoint; override only if using a proxy
)
def get_chat_completion(messages, model="gpt-3.5-turbo"):  # Default model
    """
    Generates a chat completion using the OpenAI API.

    Args:
        messages: A list of messages, where each message is a dictionary
            with 'role' (e.g., "user", "system", "assistant")
            and 'content' keys.
        model: The model to use (e.g., "gpt-3.5-turbo").

    Returns:
        The content of the assistant's response, or None if an error occurs.
    """
    try:
        response = client.chat.completions.create(
            model=model,
            messages=messages
        )
        return response.choices[0].message.content
    except Exception as e:
        print(f"An error occurred: {e}")
        return None

# Example usage:
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Write a short poem about a cat."},
]

response_content = get_chat_completion(messages)
if response_content:
    print(response_content)
SEO Audit Example
Let's build a more practical example: an SEO audit tool. This will involve prompting the model to analyze a snippet of text for SEO best practices.
import openai

api_key = "YOUR_OPENAI_API_KEY"  # Replace with your actual key
client = openai.OpenAI(
    api_key=api_key,
    base_url="https://api.openai.com/v1"  # the default endpoint; override only if using a proxy
)
def perform_seo_audit(text_snippet, model="gpt-3.5-turbo"):
    """
    Performs an SEO audit of a given text snippet.

    Args:
        text_snippet: The text to analyze.
        model: The model to use.

    Returns:
        An SEO audit report, or None if an error occurs.
    """
    messages = [
        {"role": "system", "content": "You are an SEO expert. Analyze the following text snippet for SEO best practices. Provide suggestions for improvement, focusing on keywords, readability, and overall optimization."},
        {"role": "user", "content": text_snippet},
    ]
    return get_chat_completion(messages, model)  # Re-use the get_chat_completion function

# Example usage:
text_to_audit = "This is a blog post about Python programming. Learn Python today!"
audit_report = perform_seo_audit(text_to_audit)
if audit_report:
    print("SEO Audit Report:\n", audit_report)
Error Handling
Robust error handling is essential. The try...except block in get_chat_completion demonstrates how to catch potential exceptions. Consider handling specific exceptions (e.g., openai.RateLimitError, openai.APIConnectionError) for more granular control. Log errors appropriately for debugging.
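One way to make that error handling more granular is to map each exception type to a recovery strategy. The sketch below uses stand-in exception classes so it runs without network access; in real code you would catch openai.RateLimitError and openai.APIConnectionError directly:

```python
class RateLimitError(Exception):      # stand-in for openai.RateLimitError
    pass

class APIConnectionError(Exception):  # stand-in for openai.APIConnectionError
    pass

def classify_error(exc):
    """Decide how to handle an API error: back off, retry, or give up."""
    if isinstance(exc, RateLimitError):
        return "wait_and_retry"   # back off before retrying
    if isinstance(exc, APIConnectionError):
        return "retry"            # transient network issue; retry promptly
    return "fail"                 # unexpected error: log it and surface it

print(classify_error(RateLimitError("429")))  # wait_and_retry
```

Centralizing this decision in one helper keeps the try...except blocks in your request code short and makes the retry policy easy to adjust later.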
Managing Rate Limits
OpenAI APIs enforce rate limits, and exceeding them results in errors. To mitigate this:
* Implement retry logic: Use libraries like tenacity to automatically retry failed requests with exponential backoff.
* Monitor your usage: Track your API calls and tokens used to stay within your limits. The OpenAI dashboard gives you this information.
* Batch requests: If possible, process multiple inputs in a single API call to reduce the number of requests. (Not demonstrated in this basic tutorial)
* Optimize prompts: Craft concise and clear prompts to minimize token usage.
* Handle API errors gracefully: Implement retries and potentially inform the user if the service is unavailable.
* Check the documentation: Always refer to the official OpenAI documentation for up-to-date information on rate limits and best practices.
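The retry-with-exponential-backoff advice above can be sketched as a small helper. This version retries a flaky callable with doubling delays; tenacity packages the same idea behind a decorator, and the delay values and demo exception here are illustrative:

```python
import time

def with_backoff(fn, retries=4, base_delay=1.0, retryable=(Exception,)):
    """Call fn(), retrying on retryable errors with exponential backoff."""
    for attempt in range(retries):
        try:
            return fn()
        except retryable:
            if attempt == retries - 1:
                raise                                    # out of retries: surface the error
            time.sleep(base_delay * (2 ** attempt))      # sleep 1s, 2s, 4s, ...

# Example: a call that fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("simulated rate limit")
    return "ok"

print(with_backoff(flaky, base_delay=0.01))  # ok
```

In production you would pass retryable=(openai.RateLimitError, openai.APIConnectionError) rather than catching every exception, so genuine bugs still fail fast.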
Written by the Wingman Protocol team — developers building with AI APIs, cloud infrastructure, and automation tools daily. Our guides are based on hands-on experience running production systems.