
Building AI-Powered Applications

2026-04-13 · AI · development · APIs


Before reading this, make sure you have gone through Getting Started with AI and Understanding Large Language Models. This post focuses on the practical side — picking an API and shipping real features.

Choosing an AI API

Three providers dominate the space for developers:

| Provider | API | Best For |
|----------|-----|----------|
| Anthropic | Claude API | Long context, safety-focused, tool use |
| OpenAI | OpenAI API | Broad ecosystem, multimodal |
| Google | Gemini API | Google Workspace integration, 1M-token context |

All three offer pay-as-you-go pricing, and each has some form of free or trial tier. Start with whichever has official SDKs in your stack.

Core Patterns

1. Basic Completion

The simplest pattern: send a prompt, get a response.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
message = client.messages.create(
    model="claude-opus-4-6",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Summarize this article: ..."}]
)
print(message.content[0].text)
```

2. Tool Use

Give the model tools (functions) it can call. The model decides when to invoke them based on the conversation.

```python
tools = [{
    "name": "get_article",
    "description": "Fetch an article by URL",
    "input_schema": {
        "type": "object",
        "properties": {"url": {"type": "string"}},
        "required": ["url"]
    }
}]
```

The model returns a `tool_use` content block when it wants to call a function. Your code executes the function and sends the result back as a `tool_result` block; the model then continues with that result in context.
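A minimal version of that round trip might look like this. It assumes the Anthropic Python SDK and the `get_article` tool defined above; `execute_tool` is a hypothetical local dispatcher, and the article text it returns here is a stand-in for real fetching code:

```python
def execute_tool(name, tool_input):
    """Dispatch a tool call from the model to local code."""
    if name == "get_article":
        # Stand-in: a real implementation would fetch tool_input["url"]
        return f"Article text from {tool_input['url']}"
    raise ValueError(f"Unknown tool: {name}")

def run_tool_loop(client, tools, messages, model="claude-opus-4-6"):
    """Call the model, execute any requested tools, and repeat until done."""
    while True:
        response = client.messages.create(
            model=model, max_tokens=1024, tools=tools, messages=messages
        )
        if response.stop_reason != "tool_use":
            return response  # final answer, no more tool calls
        # Record the assistant turn, then answer each tool_use block
        messages.append({"role": "assistant", "content": response.content})
        results = [
            {
                "type": "tool_result",
                "tool_use_id": block.id,
                "content": execute_tool(block.name, block.input),
            }
            for block in response.content
            if block.type == "tool_use"
        ]
        messages.append({"role": "user", "content": results})
```

Note the loop rather than a single follow-up call: the model may chain several tool calls before producing its final answer.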

3. Streaming

For responsive UIs, stream tokens as they are generated rather than waiting for the full response:

```python
with client.messages.stream(
    model="claude-opus-4-6",
    max_tokens=1024,
    messages=[{"role": "user", "content": prompt}]
) as stream:
    for text in stream.text_stream:
        print(text, end="", flush=True)
```

Building an AI Content Pipeline

A simple pipeline to auto-generate post summaries:

  1. Read a Markdown file from disk
  2. Send the content to the AI API with a summarization prompt
  3. Write the summary back as frontmatter or a separate file
  4. Repeat for every new file detected by a filesystem watcher
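Steps 1–3 can be sketched in a few lines. The summarization call is injected as a plain function (it would wrap the basic-completion pattern above), and the frontmatter layout here is an assumption, not a fixed format:

```python
from pathlib import Path

def add_summary_frontmatter(markdown: str, summary: str) -> str:
    """Prepend a YAML frontmatter block containing the summary."""
    escaped = summary.replace('"', '\\"')
    return f'---\nsummary: "{escaped}"\n---\n{markdown}'

def process_file(path: Path, summarize) -> None:
    """Read a Markdown file, summarize it, and write the result back."""
    text = path.read_text()
    summary = summarize(text)  # e.g. a wrapper around client.messages.create
    path.write_text(add_summary_frontmatter(text, summary))
```

For step 4, a filesystem watcher such as the `watchdog` library can call `process_file` whenever a new Markdown file appears.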

This is the same pattern used by this platform's log-processor — a daemon that wakes on new input, processes it, and writes structured output.

Agents and Multi-Step Reasoning

An agent is a loop: the model acts, observes the result, then acts again. The loop continues until the model decides the task is complete.

User prompt → Model → Tool call → Tool result → Model → Tool call → ... → Final answer

Agents are powerful for tasks that require planning, searching the web, writing and running code, or coordinating multiple API calls. Most production agent frameworks (LangChain, LlamaIndex, Anthropic's agent SDK) implement this loop for you.
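Stripped of any framework, the loop itself is small. This sketch injects the model call and the tool executor as plain functions so the control flow stays visible; every name here is illustrative, not a real library API:

```python
def agent_loop(call_model, run_tool, prompt, max_steps=10):
    """Generic agent loop: act, observe the result, act again.

    call_model(history) returns ("final", text) when the task is done,
    or ("tool", name, args) when it wants a tool executed.
    run_tool(name, args) returns a result string to append to history.
    """
    history = [("user", prompt)]
    for _ in range(max_steps):  # cap steps so a confused model cannot loop forever
        action = call_model(history)
        if action[0] == "final":
            return action[1]
        _, name, args = action
        result = run_tool(name, args)
        history.append(("tool_result", name, result))
    raise RuntimeError("Agent exceeded max_steps without finishing")
```

The `max_steps` cap is the one detail worth copying into any real implementation: without it, a model that keeps requesting tools will burn tokens indefinitely.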

Start simple: get a single completion working, then add tools one at a time before building a full agent loop.