AI Agent

AI agents today are incredible!!!

I delivered software with an estimated $1M engineering cost [1] in just two days, and it only cost me $7.14 for the entire GitHub Copilot usage!

[1] Sloc, Cloc and Code (scc) is a fast code counter, with complexity analysis and COCOMO cost estimation, written in Go. I used scc to measure the size of the generated codebase and estimate the equivalent engineering effort, which is where the ~$1M figure comes from.
| Language   | Files | Lines  | Blanks | Comments | Code   | Complexity | Share |
| ---------- | ----: | -----: | -----: | -------: | -----: | ---------: | ----: |
| TypeScript | 237   | 32,068 | 2,913  | 801      | 28,354 | 4,579      | 89%   |
| MDX        | 31    | 3,367  | 890    | 0        | 2,477  | 0          | 8%    |
| CSS        | 2     | 643    | 79     | 26       | 538    | 0          | 2%    |
| SQL        | 1     | 414    | 92     | 98       | 224    | 0          | 1%    |
| JSON       | 4     | 174    | 0      | 0        | 174    | 0          | 1%    |
| Markdown   | 2     | 149    | 32     | 0        | 117    | 0          | 0%    |
| JavaScript | 3     | 132    | 16     | 11       | 105    | 10         | 0%    |
| Plain Text | 1     | 1      | 0      | 0        | 1      | 0          | 0%    |
| **Total**  | **281** | **36,948** | **4,022** | **936** | **31,990** | **4,589** | |

Estimated cost (COCOMO): $1,027,685
Estimated schedule: 13.90 months
Estimated team size: 6.57 people
Codebase size: 1.38 MB (measured with scc)
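The report above comes straight from scc. If you want to reproduce this kind of breakdown on your own repository, a minimal invocation looks roughly like this (the install path and flags are standard scc usage, not anything specific to my setup):

```
# Install scc (needs a Go toolchain); prebuilt binaries also exist.
go install github.com/boyter/scc/v3@latest

# Per-language counts plus COCOMO cost/schedule estimates,
# skipping dependency and build-output directories.
scc --exclude-dir node_modules,.next .
```

By default scc prints the language table and the COCOMO estimates together, so no extra flags are needed for the cost figure.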

I built this website—1chooo.com—from scratch in just two days (though I did borrow the design from shud.in). Not long ago, I spent over a year teaching myself the nuances of the React ecosystem just to build it as a serious, large-scale piece of software. [2] This week, using GitHub Copilot and the Claude Sonnet 4.6 model, I built a more advanced version of my vision in just two days. Guess how much it cost? ONLY $7.14 for the entire GitHub Copilot usage!

[2] My 300+ star GitHub repo vCard, a personal portfolio I crafted with React, TypeScript, and Tailwind CSS; it took me over a year to build and refine.
Git contributors:

| Rank | Contributor | Commits | Additions | Deletions | Share |
| ---- | ----------- | ------: | --------: | --------: | ----: |
| #1   | 1chooo      | 188     | +90.8k    | −48.0k    | 87%   |
| #2   | claude      | 28      | +25.1k    | −11.5k    | 13%   |
|      | **Total**   | **216** | **+115.9k** | **−59.5k** | |

Building websites has become much more fun. Ideas that used to take days or weeks to deliver can now quickly turn into production-ready features. At this point, I'd say more than 80% of the code was generated by the AI agent. The remaining 20% was my own work: mostly code review, bug fixes, and small refinements.

It honestly feels like a new era of building.

#What amazes me

In the past, integrating libraries and tools into a project often required a large amount of manual effort. I had to carefully read documentation, resolve compatibility issues, and gradually stitch everything together. It was a slow and sometimes tedious process.

Today, AI agents can handle much of this work automatically. Instead of integrating each dependency step by step, I can simply provide high-level instructions (prompts) and let the agent orchestrate the implementation. Tasks that once required hours of manual setup can now be completed in minutes.

This becomes particularly valuable when working with UI component libraries such as Shadcn. When starting a new project with a standardized component library, things are usually straightforward. However, integrating such libraries into an existing codebase is often difficult. Legacy code tends to accumulate different patterns over time—components written in different styles, inconsistent structures, and duplicated implementations of similar functionality.

Historically, cleaning this up required extensive refactoring and decoupling. Developers would need to manually update components, align patterns, and gradually migrate the codebase to a consistent design system.

With AI, the process becomes significantly easier.

Instead of manually rewriting everything, I can provide the AI with a reference component that represents the desired style or architecture. The AI can then apply that pattern across the existing implementation, automatically adapting old components to match the new standard.

Even better, these guidelines can be formalized in files such as .github/copilot-instructions.md. By embedding style conventions and architectural rules there, the AI can reference them whenever it generates code. This ensures that newly generated components follow the same design patterns and styling rules as the rest of the project.
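For example, such a file might contain project conventions like the following (this is an illustrative sketch, not my actual file; the specific rules are placeholders):

```
# Copilot instructions

- Use TypeScript with strict mode enabled; avoid `any`.
- Build UI from the shadcn/ui primitives in `components/ui`;
  do not hand-roll buttons, dialogs, or dropdowns.
- Style with Tailwind utility classes only; no inline `style` props.
- Follow the existing file layout: one component per file,
  named exports, colocated tests.
```

Because the agent reads this file on every request, conventions only need to be written down once instead of repeated in every prompt.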

The result is a much more consistent interface with far less manual adjustment. For example, on my blog, this approach allows me to maintain a consistent visual style across different pages while quickly integrating small interactive toy examples alongside written explanations. This not only saves development time but also creates richer and more engaging experiences for readers.

Another advantage is iteration speed. When the AI generates styles that are slightly off due to prompt differences, I can quickly provide a reference component and ask it to align with that design. Instead of manually tweaking dozens of CSS classes, the AI can regenerate the implementation in seconds.

This fundamentally changes how frontend refactoring works.

#Pegatron

This shift reminds me of the AI agent I built during my internship as an AI Engineer at Pegatron in the summer of 2023.

At that time, ChatGPT had just been released, and the industry was still exploring how to unlock its potential. I was already experimenting with building AI agents to integrate into existing workflows.

Back then, the limitations were obvious. A single API call could take around three seconds, and once the agent began reasoning about which tools to use, even a simple prompt could take over a minute to produce a response.

Under those constraints, building a practical AI-assisted workflow felt slow and sometimes frustrating. It was difficult to imagine how seamless the experience could eventually become.

Looking at today's tooling, the difference is remarkable. Latency has decreased dramatically, model capabilities have improved, and the ecosystem around AI-assisted development has matured.

What once felt experimental now feels like a natural part of the development process.

#Next Steps

After this practical experience, I'm excited to explore how AI agents can become an integral part of my development workflow.

For example, I've been integrating many of my previous toy example projects directly into my current website (see /notes). This process allows me to experiment with creating a more interactive reading experience for users, where explanations are paired with small runnable examples. At the same time, it gives me valuable experience integrating experimental prototypes into a real production codebase.

I think this distinction is important.

If we only rely on "vibe coding" and keep building isolated toy examples, it may feel productive in the short term. But for someone like me who wants to go further in software development, that approach alone isn't enough. Real-world software requires much more than simply making something work.

There are many important considerations in software engineering: code quality, maintainability, scalability, security, performance, and proper decoupling. These principles have always been central to how I approach my projects, and I don't want them to be overlooked in the era of AI-assisted development.

Instead, what I'm interested in is something deeper: exploring how to collaborate with AI agents while still preserving these core engineering principles.

Rather than replacing thoughtful engineering practices, I want AI agents to become tools that help reinforce them—assisting with refactoring, enforcing conventions, improving consistency, and accelerating development without sacrificing long-term quality.

In other words, the goal isn't just to build faster.

It's to build better systems, with AI as a collaborator rather than a shortcut.