
The Future of AI

What lies ahead for AI? I give my two cents and provide real-world examples of how I've used AI and what excites me about the future.

The last time I was this excited about a new technology was when I tried ChatGPT for the first time. The future of AI will be prompt engineering. Let me give you an example. I am currently building a personal portfolio website. In the past, an aspiring young software engineer would try to build a website from scratch to teach themselves how to code and to have a portfolio project to show potential employers. Along the way, they would work through a couple of YouTube tutorials that teach them how to build a website. They would learn about the client and the server, database storage, API routes, HTML and CSS for layout and styling, and JavaScript or TypeScript with libraries like React for the application logic.

A few years ago, aspiring young software engineers might have used LLMs such as ChatGPT to help them find relevant learning materials, teach them basic concepts, and write some basic code for individual website components, such as a page, a navigation bar, or an API call to a database to fetch images. At the time, this was a very powerful tool for these young engineers, because it multiplied the speed at which they could learn concepts and minimized the skills needed to implement working minimum viable products (i.e. websites that actually functioned).

Starting around 2023, AI code editors such as Cursor and Windsurf were developed and brought to market. These tools are built directly into an integrated development environment (IDE). Think of an IDE as an all-in-one place to code, where all the files for a project are saved and you can work in different languages. Because these AI code editors live inside the IDE, you can give the AI access to all your files, all your code, and all your projects. They are very powerful because they not only function like an LLM that can answer questions about the specific environment you are working in (for example, where files are stored, how to download a package, or where to find the settings button), they can also write code in individual files and across multiple files, with knowledge of the connections and dependencies between them.

To fully appreciate how big the step from LLMs to AI coding agents is, I will tell a story about my own coding experience. Last summer, in 2024, I was that aspiring young software engineer from the past, looking up YouTube tutorials to learn how to build a website. With no prior experience, but with access to ChatGPT, my friend and I built a website from scratch called Trade Arena, an online stock market simulation trading game. We knew literally nothing about building a website, but through meetings with an advisor, YouTube tutorials, and a lot of help from ChatGPT, we had a working website by the end of the summer. ChatGPT did many things for us. We would ask it to write components for us, teach us coding syntax (for example, React's useState and useEffect hooks), and help us debug linter errors in the IDE. It also helped us learn how to route API calls, build user authentication, and send GET and POST requests to a database. These were huge helps, and they made it possible to bring such a complex project idea to fruition.
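
To give a sense of the kind of syntax ChatGPT was teaching us, here is a minimal sketch of a React component that uses useState and useEffect to fetch data when it loads. The Stock type and the /api/stocks route are invented for illustration; this is not the actual Trade Arena code.

```tsx
import { useEffect, useState } from "react";

type Stock = { symbol: string; price: number };

export function StockList() {
  // useState holds the fetched data; useEffect runs the fetch once on mount.
  const [stocks, setStocks] = useState<Stock[]>([]);
  const [loading, setLoading] = useState(true);

  useEffect(() => {
    // "/api/stocks" is a placeholder route, not a real Trade Arena endpoint.
    fetch("/api/stocks")
      .then((res) => res.json())
      .then((data: Stock[]) => setStocks(data))
      .finally(() => setLoading(false));
  }, []);

  if (loading) return <p>Loading...</p>;
  return (
    <ul>
      {stocks.map((s) => (
        <li key={s.symbol}>
          {s.symbol}: ${s.price.toFixed(2)}
        </li>
      ))}
    </ul>
  );
}
```

Simple as it looks now, every line of this pattern was something we had to ask ChatGPT to explain at the time.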

This summer, in 2025, I decided to build a personal website. I thought I had decent experience with building a website from the summer before, and I had heard about AI code editors and figured I should give one a try. So I downloaded Cursor and got to work. It was an immediate game changer. Within a week, I had a webpage with much of the same functionality as my Trade Arena project. Shockingly, my limited familiarity with the coding environment and the skills I had developed the previous summer were enough to build a website of the same complexity within a week. The AI code editor was much more powerful than ChatGPT, because it was integrated into the IDE and so had access to my entire codebase and all my files for the project. To demonstrate my point, I'll tell you a little bit about my experience building the Trade Arena website.

Last summer, one of the tasks of the Trade Arena project was to find a database to store our user data and learn how to integrate it into the website. So naturally, I asked ChatGPT how to do this. It walked me through how to create an account with MongoDB, an online database; how to find the MongoDB connection string and paste it into the .env file; how to download the mongoose package, write a Mongoose model, build a non-static component, and ensure the component captured the information specified in the Mongoose schema; and how to write GET and POST API routes. Each of these tasks took multiple hours of prompting ChatGPT to generate code, pasting that code into a file, asking ChatGPT to debug the file, and iterating until everything worked.
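
For context, here is a rough sketch of the pieces that workflow produced, assuming an Express backend with Mongoose. The schema fields, route paths, and environment variable name are illustrative guesses, not the real Trade Arena setup.

```ts
import "dotenv/config"; // load the .env file
import express from "express";
import mongoose from "mongoose";

// Connection string pasted into .env; the variable name MONGODB_URI is an assumption.
mongoose.connect(process.env.MONGODB_URI as string);

// A user schema the sign-up component has to match field-for-field.
const userSchema = new mongoose.Schema({
  username: { type: String, required: true, unique: true },
  email: { type: String, required: true },
});
const User = mongoose.model("User", userSchema);

const app = express();
app.use(express.json());

// GET route: fetch all users from the database.
app.get("/api/users", async (_req, res) => {
  res.json(await User.find());
});

// POST route: save the data captured by the sign-up component.
app.post("/api/users", async (req, res) => {
  const user = await User.create(req.body);
  res.status(201).json(user);
});

app.listen(3000);
```

Getting each of these blocks right, one at a time, was where the hours of back-and-forth prompting went.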

Cursor AI can literally do all of this in a matter of minutes with a prompt like "Help me build a login component that connects to MongoDB using mongoose. Here is my MongoDB connection string: '...'." Because it has access to all the files in your workspace, it can use an LLM to generate code instantly and write it directly into your files, with an understanding of where each file belongs in the file structure and how it connects to other files. With this capability, I could now ask it these specific questions, and it would just write all the code for me. If the code had an error, Cursor AI would fix it itself. If it needed approval for something, like downloading a package, deleting a file, or committing to GitHub, Cursor AI would stop, ask for permission, and then continue. Even this check can be disabled for a fully automated workflow. This automated, integrated AI coding agent saved me countless hours of consulting ChatGPT and other LLMs about linter errors, organizing my file tree, building middleware and API routes, and countless other tasks.

This may already seem like the future of coding, but the technological advancements are far from over. As powerful as Cursor AI is, it still has many inefficiencies. For example, in the development of my personal portfolio webpage, Cursor AI had a hard time with the layout and styling of components, because it could not see the webpage or test any of the functionality itself. A prompt that Cursor AI would struggle with is: "Write me a component that takes all my books that are marked as top reads and puts them side by side with gaps between them. Make it so that when I hover over a book, a 'read more' link pops up that, when clicked, takes me to a short description of the book." This kind of prompt is very hard for Cursor AI to get right, because all it has access to are the coding tools used to generate that functionality, with no way of viewing or testing what the code it writes actually produces. Even with Cursor, I sometimes spend hours adjusting the layout of a page because Cursor doesn't know how the page looks, and I have to write very specific prompts to get it to use the right tools in the right ways.
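
For reference, here is roughly what a component answering that prompt might look like. The Book type, styling, and routes are made up; the point is that Cursor has to produce layout and hover behavior like this without ever seeing it rendered.

```tsx
import { useState } from "react";

type Book = { id: string; title: string; topRead: boolean };

export function TopReads({ books }: { books: Book[] }) {
  // Track which book card the mouse is currently over.
  const [hovered, setHovered] = useState<string | null>(null);

  return (
    <div style={{ display: "flex", gap: "1.5rem" }}>
      {books
        .filter((b) => b.topRead)
        .map((b) => (
          <div
            key={b.id}
            onMouseEnter={() => setHovered(b.id)}
            onMouseLeave={() => setHovered(null)}
          >
            <h3>{b.title}</h3>
            {/* The "Read more" link only appears while the card is hovered. */}
            {hovered === b.id && <a href={`/books/${b.id}`}>Read more</a>}
          </div>
        ))}
    </div>
  );
}
```

Whether the gaps look right, whether the link appears where you expect, whether the hover feels natural: none of that is visible to Cursor, which is exactly why prompts like this take so many iterations.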

Now, the rest of this post details what I am so excited about and why.

Continuing from the example above about Cursor AI not being able to interact with the components it builds: there are other tools that can read a website and interact with it, such as Playwright, a browser automation framework. Playwright can push buttons, click links, hover over elements, and view every page of a site. There are ways to integrate these interactive tools with Cursor AI, so that when Cursor builds a component, it can then be viewed and tested by the browser agent, which reports back exactly what is wrong and what to change, letting Cursor iteratively fix the layout and components without any human interaction.
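
As a sketch of what that feedback loop could check, here is a small Playwright test that exercises the "top reads" behavior from the earlier prompt. The URL and selectors are assumptions about a hypothetical page, not my actual site.

```ts
import { test, expect } from "@playwright/test";

test("hovering a top-read book reveals a working Read more link", async ({ page }) => {
  await page.goto("http://localhost:3000/books"); // assumed local dev URL

  // Hover over the first top-read card (class name is an assumption).
  const firstBook = page.locator(".top-read-card").first();
  await firstBook.hover();

  // The link should appear on hover...
  const readMore = firstBook.getByRole("link", { name: "Read more" });
  await expect(readMore).toBeVisible();

  // ...and clicking it should land on the book's description.
  await readMore.click();
  await expect(page.locator(".book-description")).toBeVisible(); // assumed selector
});
```

A failing test like this is exactly the kind of concrete, machine-readable feedback that Cursor on its own never gets.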

Other AI agents, like task-management workflow (TMW) agents such as Roocode and Task Master, have been developed to address other issues that arise with Cursor AI. These agents help Cursor organize what needs to be done, how it needs to be done, and when to do it. They do this by defining a structure that Cursor can use to complete its task. For example, Task Master can turn a single prompt such as "Create a personal portfolio website for me that contains all my projects" into a full webpage by breaking the prompt into chunks.

It does this by first asking Cursor to develop a plan for the architecture of the website: what the file tree structure will be, how the pages might connect, and so on. Once it has the architecture, it asks Cursor to come up with the specific tasks necessary to implement it. Then it asks how complex each task is, which other tasks each task depends on, and the status of each task, i.e. whether it is complete or not. This helps Cursor track what has been done and what is still needed. If a task is too complex, Cursor can be asked to break it down into smaller, more completable chunks. Then Cursor completes the tasks in whatever order makes sense. If it encounters any errors, it fixes them iteratively and notes what the error was and how it was fixed, in case the same error comes up again.
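
To make that concrete, here is a hypothetical sketch of the kind of task list such an agent might maintain. The fields mirror the description above (complexity, dependencies, status, subtasks); this is not Task Master's actual file format.

```ts
type TaskStatus = "pending" | "in-progress" | "done";

interface Task {
  id: number;
  title: string;
  complexity: number;   // rough difficulty score, e.g. 1 (trivial) to 5 (very complex)
  dependsOn: number[];  // ids of tasks that must be finished first
  status: TaskStatus;
  subtasks?: Task[];    // a task that is too complex gets broken into chunks
}

const plan: Task[] = [
  { id: 1, title: "Set up project scaffolding and file tree", complexity: 2, dependsOn: [], status: "done" },
  {
    id: 2,
    title: "Build the projects page",
    complexity: 4,
    dependsOn: [1],
    status: "in-progress",
    subtasks: [
      { id: 3, title: "Project card component", complexity: 2, dependsOn: [1], status: "done" },
      { id: 4, title: "Fetch project data for each card", complexity: 3, dependsOn: [3], status: "pending" },
    ],
  },
];

// The agent only hands Cursor top-level tasks whose dependencies are all complete.
const ready = plan.filter(
  (t) =>
    t.status === "pending" &&
    t.dependsOn.every((id) => plan.find((p) => p.id === id)?.status === "done"),
);
console.log(ready.map((t) => t.title));
```

The value is not in the data structure itself but in the discipline it imposes: every prompt Cursor receives is scoped, ordered, and tracked.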

Tools like these are very powerful because they maximize the utility of these complex LLMs by designing prompts that tell the model exactly how a request should be understood and how it should approach accomplishing it. This is what the future of programming will be. We will not write code ourselves anymore; instead, we will develop a web of interacting AI agents that replaces human functionality. Cursor AI is the human hand that writes the code, Playwright is the human's eyes and hand on the mouse that browses the page and tests its functionality, and Task Master is the human's planning capability and memory.

We can think of these AI agents much like we think of libraries for a programming language like Python. In Python, you can download many different packages that provide functionality that makes coding easier. For example, you can download NumPy, which defines ways to work with multidimensional arrays. NumPy uses the base Python language to define these complex multidimensional array structures and makes them easily accessible to others, who don't have to do the complex defining themselves. AI agents serve the same purpose, but they abstract away complexities that arise in prompting. These agents create an infrastructure around the core capabilities of an LLM, using its core capability of text and code generation to essentially prompt itself into functioning in certain ways.

Because of this feedback cycle of generating structured prompts in order to generate more structured prompts, and so on, AI agents will eventually be capable of generating other AI agents. Soon we may not even see any code when we develop webpages. AI agents will abstract away the act of coding itself, and new programming languages will be developed that sit one level of abstraction above code. Then, much as algorithms were developed to determine the least complex way to write code to accomplish a task efficiently, algorithms will be developed to determine the least complex way to organize the web of AI agents to accomplish a task.

All of this is to say, the future of AI is very exciting, and software engineering as an industry will soon be dead and replaced by prompt engineering.