If you feel like you’re getting left behind in the AI revolution, you likely are.
You probably use AI, but you don't understand why everyone else seems so effective with it, praising its ability to increase their productivity and output.
Many engineers find AI dumb and error-prone and don't see the value in it. This is especially true of software engineers, who feel they can do things more efficiently themselves. They'll often argue that their own code is correct or perfect the first time, versus AI slop that requires some massaging.
If this is you, and you're reading this, there's likely still something you sense you need to investigate further when it comes to AI. You can't shake that feeling. Otherwise you likely wouldn't be reading this.
Discomfort looms in the wind when we sense we’re not seeing something that we feel we should. Our subconscious is telling us there’s more to it. This is nature’s security system alerting you to pay attention. I’m glad you are listening to that alarm. It’s important.
That discomfort you feel is your brain alerting you to a gap in your knowledge and your mind naturally wants to close that gap. This is metacognition in action.
For many, AI can feel like a step backwards if you’re used to doing things a certain way.
I've been there. I didn't "get it." When I saw AI tooling and "vibe coding," I thought it was a fad, just another "no-code" revolution that would fizzle out.
Then I went all in, and tried it, and everything changed for me.
I’m going to show you how to have that experience too.
I'm going to outline, in 3 steps, how you can move from conscious incompetence to creative mastery when it comes to coding with AI.
Within a period of 2 days you’ll see an initial shift in your workflow.
Within 3 weeks (if you stick with it) the way you build software will change, forever.
The process I'm going to explore here is targeted at a technical audience, mainly software engineers, but the theory carries over to other industries. Simply replace software with copywriting, marketing, financial analysis, etc.
Let's get started.
Prerequisites
You’re going to need the following:
- An AI powered IDE (Cursor)
- A CLI based coding agent (cursor-agent)
For simplicity, an entry-level subscription with Cursor ($20/mo) will suffice, as Cursor provides both the IDE and a CLI agent, cursor-agent. So I'd just go with that. I personally use Claude Code for my CLI-based coding, but there are many others available (Amp, Codex CLI, etc.).
At this point you only want one tool, so Cursor will work just fine.
Once you have that, you’re ready to begin.
Step 1: AI Skill Acquisition through Constraint-Induced Learning
“If you want to learn fast, tie one hand behind your back. You’ll be forced to discover how to do more with less.”— Unknown
Constraints, by nature, are limiting. You will be utilizing constraints (or more aptly – restraints) in this step in order to help re-program the way you develop software with AI.
The task is simple – build a web app with Cursor, using Cursor's built-in Agent mode.
There is only one key constraint that you must follow in order for this exercise to be transformational:
You are not allowed to write a single line of code yourself. You must have the agent do all the work.
Even if you see a simple line of code that should be changed and you know that you can do it faster yourself you must restrain yourself and have the agent do it.
I'm serious, and here's why … (it's rooted in constraint theory)
Why Constraint-Induced Learning Works so Well
Constraint-induced learning touches on how the brain reorganizes itself through intentional limitation.
When the easy path is blocked, the brain must find alternate paths in order to solve the problem.
If one of your hands is tied behind your back (or maybe it's in a sling from an injury) and you must zip up your jacket, your brain has to forge new paths to problem resolution. You use other fingers on the same hand in combination with holding the zipper, you lean against a wall to hold part of the jacket steady, etc.
The brain is reorganizing itself through intentional restraint.
That's why you're not allowed to write a single line of code. You are proverbially "tying your hands" behind your back so you can't code.
This step forces your brain to find new ways to do the same thing, forging new neural pathways.
This method is based on Constraint-Induced Movement Therapy (CIMT), which was pioneered by Edward Taub.
The brain prefers to run on autonomous and habitual patterns.
e.g. – "This is the way you've always done it."
You tie your shoes the same way every day, and you likely have the same exact shower routine and bedtime routine. This is why learning how to do something differently is so challenging. You're forcing the brain to work overtime to solve a problem it already knows how to solve.
This is why it's so difficult for so many software engineers to adapt to an agent-first way of coding. It's backwards and doesn't align with how things have always been done (code-first).
The Cognitive Fixation Trap
Software engineers are used to writing software a particular way, and that involves opening up an editor and getting their hands into code, crafting it themselves. For many, it's an artisanal or even god-like experience. You get to create something out of nothing. From your mind to reality, crafted with keystrokes.
The paradigm shift to an agent-first, systematic approach to software development is subconsciously seen as an attack on personal identity and is emotionally felt as a blow to the ego.
This raises anxiety levels and defensiveness against a very real threat to their well-being. This usually causes thoughts to ruminate:
This is what I've always done! You're telling me the machines are coming to take my job?
The answer to that is simply …
Yes, if you don’t adapt.
The good news is, you can adapt.
When many software engineers initially try to utilize AI, they approach the problem from a search-first mentality that has permeated the world since the inception of search engines.
They know what they want to code, so they start coding, and then they get to an area they need help on, such as a new API, technology, or design pattern they need to implement. The engineer asks the AI how to implement it. The result the AI returns is often far better than you'd find on a search engine, and that's very useful to the engineer.
In this instance, AI is a glorified and improved version of a search engine.
This process gets repeated over and over and many don’t progress past this point of interacting with AI.
This is the cognitive fixation trap.
Cognitive fixation is the inability to see a problem from a new perspective. You’re “locked in” to familiar patterns of thought.
This gets us back to the original point –
You are not allowed to write a single line of code yourself. You must have the agent do all the work.
Instead of asking AI how to do it, ask AI to do it for you.
Since you're not writing code, this is your only option – remember … your proverbial hands are now tied behind your back. Now that you know why it's important to tie your hands behind your back (don't write any code), let's create something.
Building with a Constructionist Learning Mindset
Ok, now it's time to build.
This will arguably be the hardest part of your agent-first development journey. It's all brand new and will challenge your desire to intervene. Stay strong.
Constraints, again, are critical here. You want something that works so you can see the end result easily and experience the agent-first software development flow.
Determining what to build is critical.
Therefore, I advise you steer clear of complex topics, technologies and …
Focus on building a web application.
Yes, even if you’re a mobile, backend, desktop or systems developer, focus on a web application.
Why?
AI Agents are remarkably good at creating functional web applications through prompting.
Remember, the goal of this exercise is learning how to migrate to an agent-first methodology, not shipping a web app that you're actually going to use. You're not writing any code anyway.
That said …
You should expect to throw this project away.
The idea of disposing of what you just built is rooted in Constructionist Learning (Seymour Papert, MIT 1980).
True learning often takes place during the act of creation – when you create something personally meaningful, regardless of whether the artifact endures.
In other words, you learn by doing.
The goal is to create a web app for the sake of learning. This removes the cognitive pressure of it having to be perfect. It gives your mind the creative freedom to explore, make mistakes without repercussions, and ultimately form new connections without attachment to the outcome of the project.
What web technology should you use?
If you’re already familiar with web development, use the one you’re familiar with. For me, I use Ruby on Rails.
If you don't have a favorite, or if you really want to experience the raw power of an AI-first development experience, use Next.js. For whatever reason, at the time of this writing, LLM agents are insanely good at writing web apps with Next.js.
What should the web app be about?
I advise building your web application around a topic that you're personally interested in or have knowledge of.
For example, I'm divorced and I co-parent my children with their mother. We use an app to help manage schedules and more. I cannot stand the app we use, so I decided to build an alternative web app as my topic for learning agent-first development (also known as vibe coding in many circles).
The app I created can be found here: coparentkit.com
This app was 100% coded by an Agent. I prompted everything you see on the screen.
I’m not saying that you should create a co-parenting application. I’m saying that you should build something that you have personal experience in.
Maybe you have experience with real estate investing and you use some software to manage your rental properties. You could rebuild that as an exercise.
You could be someone who is really into 3d printing and you need a way to better manage your 3d printing files and a way to organize them, you could build a solution for that.
“What if I don’t have any extra things I work on or am interested in, what should I do then?”
If you can’t think of anything then I advise you create a simple multi-tenant protein tracker. Multi-tenant meaning that people can create their own accounts and track their own protein intake.
This protein tracker should allow people to:
- Log in (user accounts)
- Enter how much protein they had for a meal:
  - Meal name
  - Date/time of entry
  - Total grams of protein consumed
- Make multiple entries per day
You then need to display the data. You can display it in a dashboard-like manner: how many times they've consumed protein in a day, the daily total, and a chart over a time series (a bar chart of the last 7 days' protein consumption). The app should support editing entries, listing all entries, date grouping, and various other things. You could even extend it to include a "daily protein goal"; then, as the user enters protein, you can report back how far along they are toward hitting their daily goal. This can be via text or a chart, etc. Experiment with it.
How to start developing the app
I advise creating an empty directory somewhere on your computer, then opening Cursor and opening that directory.
From here, you’re going to open the agent window, in “write mode”. Personally I prefer the Claude Sonnet models (4.5 at the time of writing), but you can use any one that you’d like. Other good models are GPT 5 by OpenAI, Gemini 2.5 by Google and the new Composer1 model by Cursor.
Once your agent window is open, give the agent a persona and just tell it to build something. Here’s an example:
You are an expert web developer. Your task is to build a daily protein tracker. This protein tracker should allow multiple users to enter their own protein entries, and they should not be allowed to see other people's protein entries; everything is scoped to the logged-in user. Data is not visible unless a user is logged into their account. User accounts should be created with email and password. The user can enter many protein entries per day, and those count, in aggregate, toward their total protein consumption per day. Upon login, the user should be presented with a dashboard that shows how much protein they've consumed today and a list of entries that they've made today. Entries should be editable. If there are no entries, it should tell the user that there are no entries for today. Write a test for the user signup to make sure that the user can sign up and that they can log in. Use next.js <or whatever tech you prefer> to create this. Please ask clarifying questions before you begin.
Fire that off to the agent. It will ask you questions and then it will start its work.
Maintaining Cognitive Control
The agent is going to do things that you see are wrong during this process.
Now is when you have to exercise a significant level of restraint.
Instead of fixing each issue by hand, ask the agent to do it for you.
This is the most critical thing you need to do when learning ai-first development.
Maintaining this high level of restraint focuses attention and forces adaptation in your brain.
You’re adapting your brain to a new way of doing things.
This will be hard. You will want to write the code. You will want to change the code.
Don’t do it. Ask the agent to do it for you.
You can even interrupt the agent as it's doing its work. Don't stop the execution, just fire off another message; it will be queued up and the agent will get to it.
Remember, you’re not allowed to write a line of code.
This will feel backwards, and slower, and initially it will be.
Your goal here is to improve your prompting skills. Provide more context in your prompt.
Provide good examples and bad examples. Give as much information about a problem that you can, along with constraints to the agent, each time you have it do something.
You don’t have to write a novel each time you’re prompting the agent, and sometimes a simple sentence is all you need. However, I’ve found that the more information you can give an agent about a task the better it will perform in the long run.
By restraining yourself you’re forcing your brain off the comfortable highway of repeated habit and onto the dirt roads of deliberate thought.
You’re forcing yourself to learn a new way of doing things.
Continue to use this approach until you’ve built out your first feature or your fully functional app.
The Context Window: The LLM's Short-Term Memory
While building out your application, you may run into situations where the agent “forgets” things you’ve told it to do before. For example, maybe you’ve provided instructions on how to name functions and variables, or where and how to structure different file types.
After you’re working with the agent for a bit, you’ll notice that sometimes it just does not remember the things you told it before.
You are running into the context window limitation.
The “Context Window” is the amount of context (stuff) that the agent can remember at once. This is what you see on various sites like “Model X has a context window of 200k”. This means that this LLM can store 200,000 “tokens” in its short term memory.
What is a token? Sometimes it's a character, sometimes it's a grouping of characters, but here's a simple way to remember it:
1 token ≈ 4 characters of English text.
A context window with 200k tokens would give us about 375-500 pages of text.
This is an estimation. This is a simple mental model for you to reference.
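If you want to see where the page estimate comes from, the rule of thumb is just arithmetic. A minimal sketch (the ~1,800 characters per page is my own assumption for a dense printed page, not a standard):

```shell
# Back-of-the-envelope context-window math using the
# "1 token ≈ 4 characters" rule of thumb.
tokens=200000                 # advertised context window size
chars=$(( tokens * 4 ))       # ≈ 800,000 characters
pages=$(( chars / 1800 ))     # assume ~1,800 characters per page
echo "${tokens} tokens ≈ ${chars} chars ≈ ${pages} pages"
```

With a sparser page estimate (say 1,600-2,100 characters), you land anywhere in the 375-500 page range.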
This doesn’t mean that you get to use all of that context window for your inputs …
The context window is comprised of the text you give to the agent and the content it returns. Each time you interact with the agent in the chat, the full chat is sent back as "context" of what's been done, and the agent reacts based on this.
The agent is on a server somewhere and has no recollection (no memory) of your conversation, so the entire conversation is sent up each time a chat iteration happens (there are some nuances, but this will help you grok what's happening behind the scenes).
This is also why you'll see the agent slow down as the chat gets longer – the agent has to process more before it can reply (though this doesn't always happen).
When your context window fills up, the agent will sometimes start forgetting things. It has to get rid of the old stuff so it can accept the new. Think of it as a FIFO queue (First In, First Out). There are many nuances to this in each model, but this is how it works at a high level. In Cursor you can see how much of the context window is used up via a small progress meter in the chat; when it's close to full, you're about to run out of agent memory.
In this case, in the Cursor agent chat you can utilize the /summarize slash command and it will do some work to remove cruft from the convo and keep the important parts, trimming down the chat context dramatically. Learn more about summarization here.
Lastly, and very importantly …
Each new chat you start is a brand new agent who’s never seen your codebase.
Even though you've already had another chat with the agent in another window, this new chat window will have a brand new context window with no context from the previous chat. This is where rules files and LLM context files come into play.
To recap – if your agent starts forgetting things, either start a new chat and give it enough context to start or continue a task, or use the /summarize feature in the Cursor IDE to get some context window compaction.
Error Debug Flow with Agent AI Coding
Often the agent will say that it's done working. It will give you the commands to run the app. You run the app, and it's broken, and an error shows up in the console where you ran the app.
This is often when many developers throw their hands in the air and say: “See, I told you, this AI stuff is a waste of time, it doesn’t even work. This thing is stupid.“
From the error message you might know the fix, or feel compelled to dive into the code to figure it out. Time for more restraint.
Once again, remember, you're not to write a line of code, even if it's a single one-line fix.
Copy and paste the error message into the agent and tell it: "When I do X, I get this error: "
Let the agent fix it.
Now wash, rinse, and repeat until it works.
Once you’ve used an agent for all coding tasks you’re ready to move onto step 2.
Step 2: AI Skill Expansion through Knowledge Scaffolding
“Knowledge is a skyscraper. You can take a shortcut with a fragile foundation of memorization, or build slowly upon a steel frame of understanding.” — Naval Ravikant
The majority of your work in learning this new method of developing software with agents was done in step 1, so we won’t spend a ton of time on step 2 or step 3.
However, in order to progress from a basic, surface-level understanding of how to work with AI agents to the next level, we have to build on what we already know.
This is the essence of knowledge scaffolding.
Knowledge Scaffolding is when each level of learning supports the next level.
Step 1 laid the foundational principles and workflow for building a web app with Cursor's agent mode.
In step 2 you're going to switch from a GUI (the Cursor IDE) to the CLI (command line interface). You'll still use the Cursor IDE, but you'll be using it to inspect the changes that the CLI agent has made and as a secondary ad-hoc 'Ask' agent (more on that below).
Why use the CLI?
The CLI, while not strictly necessary for an agent-first workflow, is the next logical step you'll need to familiarize yourself with before going to step 3: full automation via cloud agents.
Using the CLI Agent
Open your terminal of choice, and navigate to the location of your web app that you recently created and start the cursor-agent by typing cursor-agent. If you haven’t installed it, do so here. If you are using another agent, start that agent in that folder (such as claude or codex or amp).
You use the CLI agent just as you would the agent window in Cursor, and it also has similar features. To inspect its features, type / into the agent and you'll see various options pop up.
I mentioned the /summarize command from the Cursor IDE; in the cursor-agent CLI tool this is known as /compress, which is defined as: "summarize the conversation to reduce context". It's the same thing with a different slash command.
From here, you can ask the agent to do anything. Let's add a new feature that adds a toast message when a new protein entry is added. Or, if you already have a toast, have it change the color of the toast. Just type it in and let it run, approve its requests, and when it's done, run the app to see if it works.
Ask vs Plan vs Auto-Run
By default most CLI agents will ask you before they do anything. The agent will ask if it can edit a file, run a bash command, and so on. This is the default "Ask" mode. When you first fire up cursor-agent you will be in Ask mode.
To change its mode, you’ll press SHIFT+TAB. You can cycle between the different modes this way. Keep pressing SHIFT+TAB.
You can build features in Ask mode, though it gets very annoying to interact with the agent every 5 seconds to approve what it's going to do. If you're using a version control system like Git, then allowing the agent to make autonomous coding decisions should not be a problem, because you can easily undo the changes. You're committing frequently … right?
Plan mode allows the agent to craft a plan and a list of TODOs it needs to complete in order to implement what you've asked. After the agent is done planning, it will return a detailed plan of what it's going to do and how. You can approve the plan or you can ask for changes.
Once you approve the changes the agent will begin and upon its first edit it will allow you to select “Auto Run” so it can make all the edits without asking for approval.
Experiment with each CLI Agent mode to see what it does and how well it works.
My personal workflow is like this:
- Enter plan mode
- Type prompt
- Edit/refine plan until I'm happy with it (sometimes that's immediately)
- Ask agent to implement
- Enable auto-run so it can work without me
- Check in occasionally to see when it's done
- Review the changes in the IDE
- Run/test the changes, etc
Using the CLI and IDE Together
Reviewing code in the CLI is possible, but it's not a task I enjoy. I use the IDE for reviewing code.
When the agent is done, I’ll open the IDE and review the diff in the editor to see what changed.
I’ll review the code and sometimes I’ll commit it from the IDE, or I’ll return to the CLI agent and ask it to fix something or make some additional changes.
You can either commit the final result in the IDE or in your CLI agent. You can also ask the CLI agent to commit the code for you too. Typically it will write a good PR summary as well.
Using the IDE as a Secondary Ad-hoc Ask Agent
When the CLI agent is running, I'll sometimes see something in the code that I don't understand or need to look into further. At this point I'll hop over to the IDE, open the chat in Ask mode, and ask what a particular block of code is and why it's there, or I'll just have the agent explain it to me. This allows me to multi-task and learn as the agent is executing.
However, there are also times when I see a code change that needs to happen that is unrelated to what the agent is doing. Maybe I see a bug, or something that needs to be improved.
In this case, I'll make a note of it, or, if I'm using an advanced multi-agent technique via git worktrees (explained in Step 3 below), I'll fire off another agent simultaneously to implement that change.
I use the IDE mainly to review code changes and to perform ad-hoc ask queries.
Automating Improvements with Self-Evaluation Loops
As mentioned in step 1, sometimes the agent will say that the work is done, yet there are errors. This is because the agent was not given any evaluation instructions on how to check its own work.
Self-evaluation in agent-first coding is a way for an agent to verify its own work.
For example, you can inform the agent to run all of the tests on the app, or to write tests and verify that they pass before it is complete.
The agent will then write the feature/app/etc. and then the tests, and verify that it works. You can also tell the agent to compile the program if it's a compiled language. This will check for basic issues as well.
This is the self-evaluation loop. When errors are found, the agent will see them, then investigate and fix them.
What you’re defining is the agent’s self-evaluation mechanism – the set of commands or tools that it can use to check and validate its work.
Without this, the agent will assume the code it wrote will work on the first try, and we know that’s not likely.
Rules for Coding Agents
We could have an entire post on rules, and we will soon, but suffice it to say – you can provide rules to the agent via rules files. The agent will automatically look for these rules and apply them based on various conditions and/or file glob patterns.
For example, you could have rules like this:
- “Anytime you write a new API controller, you need to write tests for this to verify that it works as expected”
- "After you're done with your work, be sure to run the linter and formatter to make sure the code is up to our standards. Use command "foo:bar xyz" (or whatever it is)."
- “When you’re done, compile the code and run the test command on the module that changed to ensure nothing is broken.”
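As a concrete illustration, a Cursor rule is just a markdown file with optional frontmatter that scopes when it applies. The description, glob, and commands below are hypothetical placeholders, not prescriptions – adapt them to your project:

```markdown
---
description: Testing and linting conventions for API code
globs: ["app/api/**/*.ts"]
---

- Anytime you write a new API controller, write tests verifying it
  works as expected.
- After finishing your work, run the linter and formatter
  (e.g. `npm run lint && npm run format`).
- When done, run the test suite for the module that changed to ensure
  nothing is broken.
```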
The great thing about rules files is that the agents (CLI- and IDE-based) will automatically pick them up, so you write them once and edit as needed.
Learn more about rules files here https://cursor.com/docs/context/rules and here https://agents.md
Step 3: Moving from Unconscious Competence to Creative Mastery
“Mastery is not a function of genius or talent. It is a function of time and intense focus applied to a particular field of knowledge.” — Robert Greene
When you started, you wanted to learn more about agent-first coding and how to do it effectively. At this point you’ve been working with the IDE agent and the CLI agent quite a bit. You’ve explored the various modes (Ask, Plan, Auto Run). You’ve gone from idea, to implementation, and you’ve likely experienced a few ah-ha moments that have changed the way you think about agent-first coding (vibe coding).
Your mind has started automating the fundamentals and the result is an unlocking of a creative synthesis that was not there before.
You’re starting to develop the foundation of mastering a skill: time and intense focus applied to a particular field of knowledge.
Keep going.
Transformative Learning Shapes new Mental Models
You’re likely seeing new patterns emerge (mental models) in your development mindset with AI agents.
New "what if I try X" situations have materialized, and you're likely seeing that your way of developing might have shifted, potentially forever.
I know that's what happened to me when I finished creating coparentkit.com – I was amazed that in a short amount of time I was able to completely code an application that was functional and user-friendly, all via coding agents.
So, where do you go from here?
We're just scratching the surface of what's possible, but below are a handful of things you should experiment with in order to up-level even more.
Multi-Agent Workflow with Git Worktrees
Above, we talked about making a note about fixing something we didn't like so we could come back to it later. The reason we wanted to come back to it later is so that we did not pollute the context window with details that were not pertinent to the task the agent was working on. We needed to wait until it was done with its work. Then, after we completed that work session, we could start a new chat and do this other work with a fresh context window.
Git worktrees solve this.
With git worktrees you can create a new "tree," which is essentially a second working directory linked to the same git repo, but in a different folder. You can start an agent in the original folder as well as in this new worktree folder. The files are in different locations, so the agents will not collide in what they're doing – they're not reading and writing in the same directory on your file system. They're on different git branches too – so your changes can be pushed as pull requests.
I'm just opening your mind here; go look into git worktrees and then fire up multiple agents at once. One to fix that bug, another to implement that new feature, etc. You get the point. It's kind of mind-bending and also feels like spinning plates at times.
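The mechanics look roughly like this (paths and the branch name are illustrative; run from inside an existing repo):

```shell
# From inside your project's repo, create a sibling directory that
# checks out a new branch while sharing the same underlying .git data:
git worktree add ../myapp-toast-fix -b fix/toast-color

# Agent 1 keeps working in ./ on its current branch, while a second
# agent runs in ../myapp-toast-fix on fix/toast-color -- no file
# collisions, and each branch can become its own pull request.

git worktree list                 # see every active worktree

# When the fix is merged, clean up:
git worktree remove ../myapp-toast-fix
```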
Prompt Engineering
Learning how to prompt is a skill.
If you look at the prompt I provided above (the protein tracker prompt), there are many components that make it effective.
I gave the LLM …
- A role “expert web developer”
- Instructions: create a web app that does xyz
- Context on how the app works with entries and calculations
- A way to evaluate what it had done
This is the RICE method of prompting: Role, Instructions, Context, Evaluation mechanics
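Applied to a smaller follow-up task, a RICE-shaped prompt might look like this (the feature, file path, and commands are made up for illustration):

```markdown
[Role]        You are an expert Next.js developer.
[Instruction] Add a toast notification when a protein entry is saved.
[Context]     Entries are saved in app/entries/actions.ts; the dashboard
              currently gives no feedback on a successful save.
[Evaluation]  Run the test suite and the linter; the feature is done only
              when both pass and the toast appears after saving an entry.
```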
There are many different ways to write prompts and entire books and sites dedicated to it (promptingguide.ai, etc)
Some popular prompt formats that get results are:
- PTCF: Persona, Task, Context, Format
- PTCFC: Persona, Task, Context, Format, Constraints
- TCEPFT: Task, Context, Exemplar, Persona, Format, Tone
- RICE: Role, Instruction, Context, Evaluation (or Role, Instruction, Context, Examples)
- RICCE: Role, Instruction, Context, Constraints, Example
I highly advise you dive deep into Prompt engineering to learn how to prompt more effectively.
Initially I thought prompt engineering was more word salad for some overhyped nonsense the industry was pushing. I was wrong. I truly did not realize this until I experimented more with some of the formats above.
The key thing is, you need to experiment to see what works best for each model and prompt. Sometimes prompt A will work great with LLM A and LLM B, but prompt B only works well with LLM B, for whatever reason. Experimentation is key here.
A properly constructed prompt can be life changing in regards to what agents can do. I’m not exaggerating.
For example – I’ve written 1,800+ word prompts that were essentially executable prompts that the AI could use to help decompose and detangle legacy codebases, saving me hundreds upon hundreds of hours of time vs doing it myself.
Cognitive Offloading with Cloud Agents
From here I advise you look into how you can distribute your cognition. Git worktrees work well, but they’re limited by your local resources on your machine and physical location – you’ve got to be at your computer.
What if you had a feature idea, or realized you had a bug in some software while walking on the treadmill at the gym, waiting in line at the grocery store or while waiting at the dentist?
What if you could fire off an agent to do that work while you took care of other things?
You can with cloud agents like Cursor Cloud Agents or Claude Code Web or Codex.
These tools will connect to your GitHub Repo, and then you can open up an app, type in your prompt on your phone (or desktop/etc) and have a background agent work on a task while you do other things.
I do this all the time. I've had agents update my Flutter apps to the latest version of Android or iOS, I've had them fix bugs in my Rails apps, I've had them investigate whether a particular data type is being used incorrectly (or used at all) due to a backend change, and I've had them perform security reviews on code that changed in the last X days.
There are even project management tools like Linear that have integrations with Cursor Cloud Agents. You create your tickets online and provide details, like a spec or bug report. Assign the ticket to "Cursor" and a Cursor Cloud Agent picks it up and starts working on it. Yes, you can do this from your phone.
The options are endless.
Cloud agents are your own form of cognitive mesh. With the proper prompts, rules files and contextually aware tasks, you can accomplish far more with a team of agents than if you were doing all of this yourself.
Psychological Autonomy and Effortless Flow
The process of offloading tasks to agents – from IDE, to CLI, to cloud – introduces you to a new form of autonomy: psychological autonomy. You're freeing your cognition from the confines of day-to-day software development plumbing into a kind of cognitive mesh where ideas can flow freely and experimentation can happen in near real time. You still need to review and ensure all of this code is correct, but that's a topic for another day.
Once you get used to this way of working, an effortless flow becomes possible because automation is extending your reach.
That’s a ton of stuff to simmer on, so I’ll leave it at that.
-Donn