Original: Ampcode · 25/02/2026
How to Build an Agent

Thorsten Ball · April 15, 2025
or: The Emperor Has No Clothes

It’s not that hard to build a fully functioning, code-editing agent.

It seems like it would be. When you look at an agent editing files, running commands, wriggling itself out of errors, retrying different strategies - it seems like there has to be a secret behind it.

There isn’t. It’s an LLM, a loop, and enough tokens. It’s what we’ve been saying on the podcast from the start. The rest, the stuff that makes Amp so addictive and impressive? Elbow grease.

But building a small and yet highly impressive agent doesn’t even require that. You can do it in less than 400 lines of code, most of which is boilerplate.

I’m going to show you how, right now. We’re going to write some code together and go from zero lines of code to “oh wow, this is… a game changer.”

I urge you to follow along. No, really. You might think you can just read this and that you don’t have to type out the code, but it’s less than 400 lines of code. I need you to feel how little code it is and I want you to see this with your own eyes in your own terminal in your own folders.

Here’s what we need:

- Go
- An Anthropic API key that you set as an environment variable, ANTHROPIC_API_KEY

Pencils out!
Let’s dive right in and get ourselves a new Go project set up in four easy commands. Then, as a first step, we put a skeleton of the things we need into main.go.
The skeleton is an Agent that has access to an anthropic.Client (which, by default, looks for ANTHROPIC_API_KEY) and that can get a user message by reading from stdin on the terminal.
Now let’s add the missing Run() method:
This Run() method is what lets us talk to Claude, and it’s already the heartbeat of this program.
And for a heartbeat, it’s pretty straightforward: we first print a prompt, ask the user to type something, add it to the conversation, send it to Claude, add Claude’s response to the conversation, print the response, and around we go.
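The article’s Run() talks to the Anthropic API; here is a minimal, stdlib-only sketch of the same loop shape with the API call stubbed out. The stub just echoes, and the conversation is simplified to role-prefixed strings (the real thing holds the SDK’s message types) — a sketch of the structure, not the original code:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// Agent holds the conversation and a way to read user input.
type Agent struct {
	getUserMessage func() (string, bool)
	conversation   []string // role-prefixed messages; the real agent stores SDK message values
}

// runInference would send the whole conversation to the Anthropic API.
// Here it is stubbed to echo the last user message back.
func (a *Agent) runInference() string {
	last := a.conversation[len(a.conversation)-1]
	return "you said: " + strings.TrimPrefix(last, "user: ")
}

// Run is the heartbeat: prompt, read, append, infer, append, print, repeat.
func (a *Agent) Run() {
	for {
		fmt.Print("\u001b[94mYou\u001b[0m: ")
		input, ok := a.getUserMessage()
		if !ok {
			break
		}
		a.conversation = append(a.conversation, "user: "+input)
		reply := a.runInference()
		a.conversation = append(a.conversation, "assistant: "+reply)
		fmt.Printf("\u001b[93mClaude\u001b[0m: %s\n", reply)
	}
}

func main() {
	scanner := bufio.NewScanner(os.Stdin)
	agent := &Agent{
		getUserMessage: func() (string, bool) {
			if !scanner.Scan() {
				return "", false
			}
			return scanner.Text(), true
		},
	}
	agent.Run()
}
```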
This is every AI chat application you’ve ever used, except it’s in the terminal.
Let’s run it and have a chat.

Note that the conversation grows longer with every turn and that we send the whole conversation every time. The server — Anthropic’s server — is stateless. It only sees what’s in the conversation slice. It’s up to us to maintain that.
Okay, let’s move on, because this is not an agent yet. What’s an agent? Here’s my definition: an LLM with access to tools, giving it the ability to modify something outside the context window.
A First Tool
An LLM with access to tools? What’s a tool? The basic idea is this: you send a prompt to the model that says it should reply in a certain way if it wants to use “a tool”. Then you, as the receiver of that message, “use the tool” by executing it and replying with the result. That’s it. Everything else we’ll see is just abstraction on top of it.

Imagine you’re talking to a friend and you tell them: “in the following conversation, wink if you want me to raise my arm”. Weird thing to say, but an easy concept to grasp.

We can already try it without changing any of our code:

$ go run main.go
Chat with Claude (use ‘ctrl-c’ to quit)
You: You are a weather expert. When I ask you about the weather in a given location, I want you to reply with get_weather(<location_name>). I will then tell you what the weather in that location is. Understood?
Claude: I understand your instructions. When you ask me about the weather in a specific location, I will reply with get_weather(<location_name>). Then you’ll provide me with the weather information for that location, and I can discuss it further with you. I’m ready whenever you’d like to ask about the weather somewhere.
You: Hey, what’s the weather in Munich?
Claude: get_weather(Munich)
We told Claude to wink with get_weather when it wants to know about the weather. The next step is to raise our arm and reply with “result of the tool”:
You: hot and humid, 28 degrees celsius
Claude: Thank you for providing that information. The current weather in Munich is hot and humid at 28 degrees Celsius (about 82 degrees Fahrenheit). These are fairly warm conditions for Munich, especially with the added humidity which can make it feel even warmer than the temperature indicates.
If you’re in Munich or planning to visit, it would be good to stay hydrated, wear light clothing, and perhaps plan indoor activities during the hottest parts of the day. The combination of heat and humidity can be quite uncomfortable, particularly if you’re not accustomed to such weather.
Would you like to know about the weather in any other location?
That worked very well, on first try, didn’t it?
These models are trained and fine-tuned to use “tools” and they’re very eager to do so. By now, in 2025, they kinda “know” that they don’t know everything and that they can use tools to get more information. (Of course that’s not precisely what’s going on, but it’s a good enough explanation for now.)
To summarize, all there is to tools and tool use are two things:
- You tell the model what tools are available
- When the model wants to execute a tool, it tells you; you then execute the tool and send the result back up
The read_file tool
In order to define the read_file tool, we’re going to use the types that the Anthropic SDK suggests, but keep in mind: under the hood, this will all end up as strings that are sent to the model. It’s all “wink if you want me to use read_file”.
Each tool we’re going to add will require the following:
- A name
- A description to tell the model what the tool does, when to use it, when to not use it, what it returns and so on
- An input schema that describes, as a JSON schema, what inputs this tool expects and in which form
- A function that actually executes the tool with the input the model sends to us and returns the result
We give the Agent a list of tool definitions and pass that list along in runInference. Yes, the SDK throws weird names like anthropic.String and ToolUnionParam at you. But, really, I swear, it’s very simple:
We send along our tool definitions, Anthropic’s server wraps these definitions in a system prompt (which isn’t much), adds that to our conversation, and the model then replies in a specific way if it wants to use one of the tools.
Alright, so tool definitions are being sent along, but we haven’t defined a tool yet. Let’s do that and define read_file:
The definition has a name, the function ReadFile, and two descriptions the model will see: our Description that describes the tool itself ("Read the contents of a given relative file path. ...") and a description of the single input parameter this tool has ("The relative path of a ...").
The ReadFileInputSchema and GenerateSchema stuff? We need that so that we can generate a JSON schema for our tool definition, which we send to the model. To do that, we use the jsonschema package, which we need to import and download with go get.
Then, in our main function, we need to make sure that we use the definition, and we extend the Agent’s Run method so it can handle tool use.
When we get a message back from Claude, we check whether Claude asked us to execute a tool by looking for content.Type == "tool_use". If so, we hand over to executeTool: look up the tool by name in our local registry, unmarshal the input, execute it, and return the result. If it’s an error, we flip a boolean. That’s it.
(Yes, there is a loop in a loop, but it doesn’t matter.)
We execute the tool, send the result back up to Claude, and ask again for Claude’s response. Truly: that’s it. Let me show you.
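Stripped down to stdlib types, the dispatch just described can be sketched like this. The registry scan and the error flag follow the text above; the details (and the toy greet tool) are mine:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// ToolDefinition, reduced to the parts dispatch needs: a name and a function.
type ToolDefinition struct {
	Name     string
	Function func(input json.RawMessage) (string, error)
}

// executeTool looks the tool up by name in our local registry, runs it with
// the input the model sent, and returns the result. On error we return the
// error text and flip the flag, so the model gets to see what went wrong.
func executeTool(tools []ToolDefinition, name string, input json.RawMessage) (result string, isError bool) {
	for _, tool := range tools {
		if tool.Name == name {
			out, err := tool.Function(input)
			if err != nil {
				return err.Error(), true
			}
			return out, false
		}
	}
	return "tool not found", true
}

func main() {
	tools := []ToolDefinition{{
		Name: "greet",
		Function: func(input json.RawMessage) (string, error) {
			var in struct {
				Name string `json:"name"`
			}
			if err := json.Unmarshal(input, &in); err != nil {
				return "", err
			}
			return "hello, " + in.Name, nil
		},
	}}
	out, isErr := executeTool(tools, "greet", json.RawMessage(`{"name":"world"}`))
	fmt.Println(out, isErr)
}
```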
Mise en place! We put a secret-file.txt in our directory, containing a mysterious riddle.
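The original command isn’t preserved in this copy; given the answer Claude arrives at below, the riddle was along these lines:

```shell
echo 'what animal is the most disagreeable because it always says neigh?' > secret-file.txt
```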
In that very same directory, let’s run our new tool-using agent, and ask it to look at the file:
$ go run main.go
Chat with Claude (use ‘ctrl-c’ to quit)
You: Claude, buddy, help me solve the riddle in the secret-file.txt file
Claude: I’ll help you solve the riddle in the secret-file.txt file. Let me first read the contents of this file to see what the riddle is.
tool: read_file({"path":"secret-file.txt"})
Claude: Great! I’ve found the riddle.
The answer to the riddle is: **A horse**
The riddle is playing on wordplay where “neigh” (the sound a horse makes) sounds like “nay” (a way to say “no” or disagree with something). So horses are described as disagreeable because they always say “neigh/nay” - implying they’re always disagreeing.
You:
Let’s take a deep breath and say it together. Ready? Here we go: holy shit. You just give it a tool and it… uses it when it thinks it’ll help solve the task. Remember: we didn’t say anything about “if a user asks you about a file, read the file”. We also didn’t say “if something looks like a filename, figure out how to read it”. No, none of that. We say “help me solve the thing in this file” and Claude realizes that it can read the file to answer that and off it goes.
Of course, we can be specific and really nudge it towards a tool, but it basically does it all on its own:
$ go run main.go
Chat with Claude (use ‘ctrl-c’ to quit)
You: What’s going on in main.go? Be brief!
Claude: I’ll check the main.go file to see what’s going on.
tool: read_file({"path":"main.go"})
Claude: Based on my review, main.go implements a Claude AI assistant agent that can use tools. Here’s a brief summary:
- It creates a command-line chat interface with Claude using the Anthropic Go SDK
- Implements a tool system with a single “read_file” tool for reading file contents
- Manages the conversation flow with Claude, handling both regular text responses and tool use requests
- When Claude requests to use a tool, the agent executes it and returns the results
- The program loops continuously to maintain a conversation until the user exits

The code essentially builds a simple CLI-based chatbot that can interact with the file system when requested by Claude.
The list_files tool
If you’re anything like me, the first thing you do when you log into a new computer is to get your bearings by running ls — list files.
Let’s give Claude the same ability, a tool to list files. And here’s the complete implementation of a list_files tool:
This list_files tool returns the list of files and directories in the current folder. There are a thousand optimizations we could (and probably should) make if this were a serious effort, but since I just want to show you what’s in the wizard’s hat, this is fine.
One thing to note: we return a list of strings and we denote directories with a trailing slash. That’s not required; it’s just something I decided to do. There’s no fixed format. Anything goes as long as Claude can make sense of it, and whether it can, you need to figure out by experimentation. You could also prepend each directory with "directory: " or return a Markdown document with two headers: "directories" and "files". There are a ton of options, and which one you choose depends on what Claude can make the most sense of, how many tokens it requires, how fast it is to generate and read, and so on.
Here, we just want to create a small list_files tool and the easiest option wins.
Of course we need to tell Claude about list_files too, with a definition just like read_file’s. Once that’s done, we can ask Claude what it sees in the current directory, and it answers with a tidy list:
- Git-related files and directories:
  - .git/ directory with various Git configuration and tracking files
  - .gitignore file
- Go project files:
  - main.go - the main Go source code file
  - go.mod - Go module definition file
  - go.sum - Go module checksum file
- Documentation:
  - blogpost.md - a markdown file, likely containing blog content
- Environment configuration:
  - .envrc - likely an environment configuration file

Would you like me to examine the contents of any specific file from this list?
Next, I asked Claude what’s going on in the two most important files. Its answer:

**main.go**:

- Implements a Claude AI agent that can interact with the local filesystem
- Uses the Anthropic SDK for Go to communicate with Claude AI models
- Provides two tools: read_file and list_files, that allow Claude to read file contents and list directory contents
- Manages a conversation flow between the user and Claude, handling tool execution when Claude needs to access files

**go.mod**:

- Defines the module dependencies
- Main dependencies are the Anthropic SDK for Go and a JSON schema library
- Uses Go 1.24.1

This appears to be a simple terminal-based application that allows users to chat with Claude while giving the AI access to read local files and list directories.
To answer that, Claude first called list_files and then it called read_file twice with the Go-related files that I asked it about.
Just… just like we would, right? I mean, here, what would you do if I ask you what version of Go we use in this project? Here’s what Claude does for me:
$ go run main.go
Chat with Claude (use ‘ctrl-c’ to quit)
You: What go version are we using in this project?
Claude: I’ll help you find the Go version being used in this project. Let me check the files to locate where the Go version might be specified.
tool: list_files({})
Claude: I can see there’s a Go project here with several important files like go.mod, go.sum, and main.go. Let me check the go.mod file first, as it typically contains the Go version information.
tool: read_file({"path":"go.mod"})
Claude: According to the go.mod file, this project is using Go version 1.24.1.
Claude looks at the directory, looks at go.mod, and has the answer.
We’re at around 190 lines of code now. Let that sink in. Once you have, let’s add another tool.
Let it edit_file
The last tool we’re going to add is edit_file — a tool that lets Claude edit files.
“Holy shit”, you’re thinking now, “this is where the rubber hits the road, this is where he pulls the rabbit out of the hat.” Well, let’s see, shall we?
First, let’s add a definition for our new edit_file tool. We describe edit_file by telling Claude it can edit files by replacing existing text with new text.
The implementation of the EditFile function in Go is straightforward: it reads the file, replaces OldStr with NewStr in the contents, writes the result back to disk, and returns "OK". If the file doesn’t exist yet and OldStr is empty, we instead hand off to createNewFile, a tiny helper function that would be 70% shorter if this wasn’t Go.
With edit_file ready, I asked Claude to create a fizzbuzz.js script. It did, and described the result like this:

- Defines a function called fizzBuzz that takes a parameter n for the upper limit
- Loops from 1 to n (set to 100 in the execution)
- For each number:
  - If it’s divisible by 3, it adds “Fizz” to the output
  - If it’s divisible by 5, it adds “Buzz” to the output
  - If it’s divisible by both, it adds both, resulting in “FizzBuzz”
  - If it’s not divisible by either 3 or 5, it outputs the number itself
- The script automatically executes the function for numbers 1 to 100

You can run this script using Node.js with the following command in your terminal: node fizzbuzz.js. This will output the FizzBuzz sequence from 1 to 100 to your console.
It’s probably the simplest version of edit_file — of an agent in general, really — you can come up with.
But, did it work? Yes, it did:
I asked Claude to “edit fizzbuzz.js so that it only prints until 15”:
$ go run main.go
Chat with Claude (use ‘ctrl-c’ to quit)
You: Please edit fizzbuzz.js so that it only prints until 15
Claude: I’ll edit the fizzbuzz.js file to print only until 15 instead of its current limit. First, let me check the current content of the file to understand what changes are needed.
tool: read_file({"path":"fizzbuzz.js"})
Claude: I can see that the file currently runs FizzBuzz up to 100. I’ll modify it to only run up to 15 instead by changing the parameter in the fizzBuzz function call.
tool: edit_file({"path":"fizzbuzz.js","old_str":"// Execute FizzBuzz for numbers 1 to 100\nconsole.log('Running FizzBuzz:');\nfizzBuzz(100);","new_str":"// Execute FizzBuzz for numbers 1 to 15\nconsole.log('Running FizzBuzz:');\nfizzBuzz(15);"})
Claude: I also need to update the comment at the top of the file to reflect this change:
tool: edit_file({"path":"fizzbuzz.js","old_str":"/**\n * FizzBuzz implementation in JavaScript\n * Prints numbers from 1 to 100, but:","new_str":"/**\n * FizzBuzz implementation in JavaScript\n * Prints numbers from 1 to 15, but:"})
Claude: The changes have been successfully made to the fizzbuzz.js file. The program will now:
- Only print FizzBuzz from 1 to 15 (instead of 1 to 100)
- The documentation has been updated to reflect this change

The function itself remains the same, as it takes a parameter to determine the upper limit, and we’ve simply changed that parameter from 100 to 15.
Time for one last trick: “Create a congrats.js script that rot13-decodes the following string ‘Pbatenghyngvbaf ba ohvyqvat n pbqr-rqvgvat ntrag!’ and prints it”. Maybe a tall order. Let’s see:

$ go run main.go
Chat with Claude (use ‘ctrl-c’ to quit)
You: Create a congrats.js script that rot13-decodes the following string ‘Pbatenghyngvbaf ba ohvyqvat n pbqr-rqvgvat ntrag!’ and prints it
Claude: I’ll create a congrats.js script that decodes the ROT13-encoded string and prints it. ROT13 is a simple letter substitution cipher that replaces a letter with the 13th letter after it in the alphabet. Let me create this file for you:
tool: edit_file({"path":"congrats.js","old_str":"","new_str":"/**\n * A script to decode a ROT13-encoded string and print the result\n */\n\nfunction rot13Decode(encodedStr) {\n return encodedStr.replace(/[a-zA-Z]/g, function(char) {\n // Get the character code\n const charCode = char.charCodeAt(0);\n \n // Handle uppercase letters\n if (charCode >= 65 && charCode <= 90) {\n return String.fromCharCode(((charCode - 65 + 13) % 26) + 65);\n }\n \n // Handle lowercase letters\n if (charCode >= 97 && charCode <= 122) {\n return String.fromCharCode(((charCode - 97 + 13) % 26) + 97);\n }\n \n // Return non-alphabetic characters as is\n return char;\n });\n}\n\nconst encodedMessage = 'Pbatenghyngvbaf ba ohvyqvat n pbqr-rqvgvat ntrag!';\nconst decodedMessage = rot13Decode(encodedMessage);\n\nconsole.log(decodedMessage);"})
Claude: I’ve created a JavaScript file called congrats.js that:
- Defines a rot13Decode function that takes an encoded string and applies the ROT13 cipher to decode it
- Takes the provided string ‘Pbatenghyngvbaf ba ohvyqvat n pbqr-rqvgvat ntrag!’
- Decodes it using the function
- Prints the decoded message to the console

You can run this script using Node.js by typing node congrats.js in your terminal. The script will decode and display the hidden congratulatory message!
Isn’t this amazing?
If you’re anything like all the engineers I’ve talked to in the past few months, chances are that, while reading this, you have been waiting for the rabbit to be pulled out of the hat, for me to say “well, in reality it’s much, much harder than this.” But it’s not.

This is essentially all there is to the inner loop of a code-editing agent. Sure, integrating it into your editor, tweaking the system prompt, giving it the right feedback at the right time, a nice UI around it, better tooling around the tools, support for multiple agents, and so on — we’ve built all of that in Amp, but it didn’t require moments of genius. All that was required was practical engineering and elbow grease.

These models are incredibly powerful now. 300 lines of code and three tools, and now you’re able to talk to an alien intelligence that edits your code. If you think “well, but we didn’t really…” — go and try it! Go and see how far you can get with this. I bet it’s a lot farther than you think.

That’s why we think everything’s changing.
Originally published at https://ampcode.com/notes/how-to-build-an-agent.