# Day 8: Claude Series (Anthropic)
Anthropic was founded by former OpenAI researchers, with “safe AI” as its core value. The Claude series shows strengths in long context processing, instruction following, and safety.
## Claude Model Comparison (By Lineup)
| Model Family | Features | Recommended Use |
|---|---|---|
| Claude Opus | Highest quality, complex analysis/reasoning | Difficult problem solving, quality-first workflows |
| Claude Sonnet | Performance/cost balance | General services, coding assistance, document summarization |
| Claude Haiku | Low latency/low cost, lightweight | Bulk classification, real-time responses, simple tasks |
Version snapshot names and pricing change frequently, so always check the official model documentation before making API calls.
## Constitutional AI
Constitutional AI (CAI), the core alignment technique behind Claude, gives the model a "constitution" (a set of written principles) and trains it to regulate its own behavior against them.

Traditional RLHF approach:
Humans directly label which responses are good or bad -> high cost, inconsistent judgments
Constitutional AI approach:
1. AI generates a response
2. AI self-evaluates the response against the constitution (principles)
3. Self-corrects when violations are found
4. Retrains on the corrected data
Constitution examples:
- "Do not provide harmful information"
- "Do not present biased claims as facts"
- "Acknowledge uncertainty when something is uncertain"
## Anthropic API Basic Usage
```python
# pip install anthropic
import anthropic

client = anthropic.Anthropic(api_key="YOUR_API_KEY")  # Environment variable recommended

message = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    system="You are a friendly AI tutor. Please answer in English.",
    messages=[
        {"role": "user", "content": "Explain the LLM training process using an analogy."},
    ],
)

print(message.content[0].text)
print(f"Input tokens: {message.usage.input_tokens}")
print(f"Output tokens: {message.usage.output_tokens}")
```
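API calls can fail transiently (rate limits, network hiccups), so production code usually wraps them in a retry. A minimal sketch, assuming a generic exception for brevity; the helper name `with_retry` and the backoff values are illustrative, and in practice you would catch the SDK's specific errors such as `anthropic.RateLimitError`:

```python
import time

def with_retry(call, max_attempts=3, base_delay=1.0):
    """Retry a zero-argument callable with exponential backoff.

    `call` would wrap something like lambda: client.messages.create(...).
    Catching bare Exception is a simplification; prefer the SDK's
    specific error types (e.g. anthropic.RateLimitError) in real code.
    """
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the original error
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...

# Usage (hypothetical):
# message = with_retry(lambda: client.messages.create(...))
```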
## Streaming and Multi-turn Conversation
```python
import anthropic

client = anthropic.Anthropic()

# Streaming response
with client.messages.stream(
    model="claude-sonnet-4-20250514",
    max_tokens=512,
    messages=[
        {"role": "user", "content": "Explain Python decorators."},
    ],
) as stream:
    for text in stream.text_stream:
        print(text, end="", flush=True)
```
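The consumption pattern is independent of the SDK: `text_stream` is simply an iterable of text chunks. An offline sketch of that pattern, with a plain generator standing in for the stream (the names `fake_text_stream` and `consume_stream` are illustrative, not part of the SDK):

```python
def fake_text_stream():
    """Stand-in for stream.text_stream: yields text chunks in order."""
    yield "A decorator "
    yield "wraps a function "
    yield "to extend its behavior."

def consume_stream(chunks):
    """Print chunks as they arrive and return the assembled text."""
    parts = []
    for text in chunks:
        print(text, end="", flush=True)  # show partial output immediately
        parts.append(text)
    print()
    return "".join(parts)

full_text = consume_stream(fake_text_stream())
# full_text == "A decorator wraps a function to extend its behavior."
```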
```python
import anthropic

client = anthropic.Anthropic()

# Multi-turn conversation (include previous conversation in messages)
conversation = []

def chat_with_claude(user_message):
    conversation.append({"role": "user", "content": user_message})
    response = client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=1024,
        system="You are a Python coding mentor.",
        messages=conversation,
    )
    assistant_text = response.content[0].text
    conversation.append({"role": "assistant", "content": assistant_text})
    return assistant_text

# Each call sends the full history, so follow-ups keep their context
print(chat_with_claude("What is a generator?"))
print("---")
print(chat_with_claude("Then what's the difference from an iterator?"))
```
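Because every call resends the whole `conversation` list, token usage (and cost) grows with each turn. A common mitigation is trimming older turns; a minimal sketch, where the helper `trim_conversation` and the `max_turns` cutoff are illustrative choices, not SDK features:

```python
def trim_conversation(conversation, max_turns=10):
    """Keep only the last `max_turns` user/assistant pairs.

    Each turn is one user message plus one assistant reply, so we keep
    at most 2 * max_turns trailing messages; slicing an even count
    preserves the user/assistant alternation when pairs are intact.
    A production version might summarize older turns or count tokens
    instead of messages.
    """
    return conversation[-(2 * max_turns):]

# Usage: trim before each API call
history = [{"role": "user", "content": f"question {i}"} for i in range(30)]
print(len(trim_conversation(history, max_turns=10)))  # -> 20
```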
## Claude vs GPT Key Differences
| Comparison | Claude | GPT |
|---|---|---|
| Context | 200K tokens | 128K tokens |
| Safety philosophy | Constitutional AI | RLHF |
| Long document processing | Very strong | Strong |
| Code generation | Strong in latest top-tier Claude models | Strong in latest top-tier GPT models |
| Instruction following | Precise adherence to detailed instructions | Flexible interpretation |
| Multimodal | Image input supported | Image + audio |
| API format | Messages API | Responses API (recommended), Chat Completions (legacy) |
Claude particularly excels in long document analysis, complex instruction execution, and code review.
## Today’s Exercises
- Obtain an Anthropic API key and ask Claude “What is Python’s GIL?” Evaluate the accuracy and explanation style of the response.
- Summarize the differences between Constitutional AI and RLHF, and compare the pros and cons of each approach.
- Send the same prompt to both the latest OpenAI model and the latest Claude model, then compare the response style, length, and accuracy. What differences do you notice?