
Editor-Level AI: Completions, Agents, and the Zed Setup I Actually Use

The middle layer of AI coding tools. Inline completions, agent panels, and external agents like Claude Code running inside your editor.


This is Part 2 of my AI coding workflow series. Part 1 covers terminal agents. Part 3 covers PR-level agents. This post is about the middle layer — AI inside your editor.

Editor AI Has Three Layers

I didn't realize this until I spent time configuring Zed properly. What looks like "editor AI" is actually three separate systems:

1. Edit Predictions (inline completions). The ghost text that appears as you type. Needs to be fast — under 100ms or it feels laggy. Only certain providers work here.

2. Agent Panel. A chat interface inside your editor. You can use any LLM provider. Good for reasoning about code, explaining errors, planning changes.

3. External Agents. Full coding agents (like Claude Code or OpenCode) running inside your editor instead of the terminal. Same capabilities as terminal agents, but with editor integration.

Most confusion comes from thinking these are all the same thing. They're not. Each has different provider options and different use cases.

Edit Predictions: What Actually Works

Inline completions need speed. The model has to respond while you're still typing, or the suggestions feel useless. This rules out most LLMs.

Works for inline completions:

  • GitHub Copilot ($10/month)
  • Supermaven
  • Zed's built-in (Zeta)

Doesn't work for inline completions:

  • Claude (too slow)
  • GPT-4 (too slow)
  • Gemini (too slow)

This surprised me initially. I assumed I could use Claude for everything. But there's a reason Copilot dominates inline completions — the model is optimized for speed, not reasoning depth.

For Zed, the config is:

{
  "edit_predictions": {
    "provider": "copilot"
  }
}

I stick with Copilot here. $10/month for unlimited completions, works everywhere, good enough quality for the "complete this line" use case.
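
One tweak worth knowing: predictions can be turned off per language, which helps in prose-heavy files where ghost text is mostly noise. A sketch of the settings shape — the per-language override is my assumption based on Zed's usual language-settings pattern, so verify the exact key names against your version's default settings:

```json
{
  "edit_predictions": {
    "provider": "copilot"
  },
  "languages": {
    // Assumed per-language override; check your Zed version's
    // default settings for the exact key name.
    "Markdown": {
      "show_edit_predictions": false
    }
  }
}
```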

The Agent Panel: Use Any LLM

The agent panel is different. It's a chat interface for longer interactions — explain this error, refactor this function, help me understand this codebase. Speed matters less. Reasoning quality matters more.

Here you can use whatever provider you want:

  • Anthropic (Claude)
  • OpenAI (GPT-4, o1)
  • Google (Gemini)
  • Ollama (local models)
  • OpenRouter
  • Custom providers
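
For Ollama specifically, the provider block lives under language_models in settings. A sketch, assuming Zed's Ollama integration and Ollama's default local port; the model tag is a hypothetical placeholder for whatever you've pulled:

```json
{
  "language_models": {
    "ollama": {
      // Ollama's default local endpoint; adjust if you run it elsewhere.
      "api_url": "http://localhost:11434"
    }
  },
  "agent": {
    "default_model": {
      "provider": "ollama",
      // Hypothetical model tag; substitute any model you've pulled.
      "model": "qwen2.5-coder:7b"
    }
  }
}
```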

I keep z.ai's GLM 4.7 as my default. It's the cheapest option I've found that's still usable. For complex problems, I switch to Claude — but the per-token costs add up if you use it for everything.

{
  "agent": {
    "default_model": {
      "provider": "z.ai",
      "model": "glm-4.7"
    }
  }
}

Where API keys live: Zed stores them in the macOS Keychain, not in your settings file. To add a key, run agent: open settings from the command palette, find your provider, and paste the key.

External Agents: The Hidden Third Layer

This is the part that confused me for weeks.

Zed supports "external agents" — full coding agents like Claude Code, OpenCode, Gemini CLI, or Codex. These aren't just chat interfaces. They have tools: file access, command execution, the ability to make changes across your codebase.

The key UX insight: click the + button in the Agent Panel to start a thread with a different agent.

That's it. That's how you switch. Each agent type gets its own thread. You can have multiple running.

Agent       | What it is                            | Auth
------------|---------------------------------------|------------------------------------
OpenCode    | Open source agent, swap models freely | Your own API keys
Claude Code | Anthropic's official agent            | Anthropic API or Claude Pro/Max
Gemini CLI  | Google's agent                        | Google OAuth or API key
Codex       | OpenAI's agent                        | OpenAI API or ChatGPT subscription

OpenCode is what I use most. I can point it at cheap models for routine tasks, Claude for complex ones. The agent profiles (plan, build, autonomous) give different levels of autonomy.

Installing External Agents

Most are Zed extensions. Search the extension panel, install, and they appear as options when you click + in the Agent Panel.

Claude Code and some others use the ACP Registry — you might see prompts to install from there instead. Either way, once installed, they show up in the same place.

My Actual Config

Here's what I run:

{
  "edit_predictions": {
    "provider": "copilot"
  },
  "agent": {
    "default_model": {
      "provider": "z.ai",
      "model": "glm-4.7"
    }
  }
}

Why this setup:

  • Copilot for completions: Fast, cheap, good enough for inline suggestions
  • z.ai as default agent: Cheapest usable model I've found for routine questions
  • Claude/OpenAI configured but not default: Available when I need better reasoning, but not burning credits on "what's the syntax for X"

I have OpenCode and Claude Code extensions installed. When I need to make multi-file changes or run commands, I start an external agent thread instead of using the basic agent panel.

When to Use Each

Inline completions (Copilot):

  • Finishing the line you're typing
  • Boilerplate code
  • Obvious patterns

Agent panel (z.ai/Claude):

  • "Explain this error"
  • "How should I structure this?"
  • Code review before committing
  • Understanding unfamiliar code

External agents (OpenCode/Claude Code):

  • Multi-file changes
  • Running tests and iterating
  • Complex features that need planning
  • When you'd otherwise switch to terminal

The terminal vs editor decision is mostly preference. I use external agents in Zed when I'm already in the editor and don't want to context-switch. I use terminal agents (OpenCode CLI) when I'm doing a longer session and want the full terminal experience.

What I Got Wrong Initially

Assuming Claude would work for completions. It doesn't. Too slow. Use Copilot or Supermaven.

Not understanding the + button. I kept wondering how to "switch agents" — there's no toggle. You start a new thread with a different agent. Each thread is tied to its agent type.

Putting API keys in settings.json. They don't go there. Zed uses keychain storage. The settings file just points to providers; credentials are stored securely.

Using expensive models for everything. z.ai at ~$0.001 per request vs Claude at ~$0.01-0.03. For "remind me of this syntax" questions, the cheap model is fine.
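
That gap is easier to see as a monthly figure. A quick sketch: the per-request prices are the estimates above, while the query volume is a made-up assumption for illustration:

```python
# Rough monthly cost comparison for routine agent-panel queries.
QUERIES_PER_DAY = 50   # assumed usage, not a figure from this post
DAYS_PER_MONTH = 30

def monthly_cost(price_per_request: float) -> float:
    """Monthly spend at a flat per-request price."""
    return price_per_request * QUERIES_PER_DAY * DAYS_PER_MONTH

print(f"cheap model: ${monthly_cost(0.001):.2f}/month")                       # $1.50/month
print(f"Claude: ${monthly_cost(0.01):.2f}-${monthly_cost(0.03):.2f}/month")   # $15.00-$45.00/month
```

At that volume the cheap default costs about as much per month as a single day of Claude-for-everything; escalating only for hard problems keeps the premium spend proportional to actual need.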

Series Navigation

AI Coding Workflow Series:

  • Part 1: Terminal agents
  • Part 2: Editor-level AI (this post)
  • Part 3: PR-level agents

This series covers my actual AI coding workflow. I'll update as tools evolve.