Unlocking New Possibilities with AI Applications
Chad Kirby
Published on June 23, 2025
In recent decades, software development has evolved significantly. We transitioned from hand-written code in Software 1.0 to machine-learned neural networks in Software 2.0. Now, in Software 3.0, we use natural language prompts to control Large Language Models (LLMs). These LLMs are becoming a new type of computer, managing memory and computing power to solve problems. This shift resembles the 1960s, when computing resources were expensive and centralized, requiring time-sharing models.
Unlike past tech revolutions, which were only for big corporations or governments, LLMs have made technology available to billions. Now, anyone who uses language well can become a “programmer,” creating new chances for innovation.
The Need for LLM Applications: Moving Beyond Direct Interaction
Chatting directly with LLMs like ChatGPT is powerful, but it's limited. It’s like using an operating system from a command line. Complex tasks need more than just text prompts.
LLMs have impressive abilities, like vast knowledge and quick recall. However, they also show “cognitive deficits,” such as hallucinations and uneven intelligence. To tackle these issues and use LLMs effectively, developers are creating apps that manage interactions and improve user experiences.
Key Characteristics and Opportunities of Successful LLM Apps
Partial Autonomy
Successful LLM apps use a "partial autonomy" model. They offer an “autonomy slider” that lets users control how much the AI is involved, from small auto-completions to major changes in code.
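As a minimal sketch of how an autonomy slider might gate actions, here is one way to model it. All names are hypothetical, not taken from any particular product:

```python
from enum import IntEnum

class Autonomy(IntEnum):
    """Hypothetical autonomy levels, from smallest to largest scope."""
    COMPLETION = 1   # inline auto-completion only
    SELECTION = 2    # edit a user-selected region
    FILE = 3         # rewrite a whole file
    REPO = 4         # change code across the repository

def allowed(required: Autonomy, slider: Autonomy) -> bool:
    """An action runs automatically only if the user's slider is set
    at or above the autonomy level that action requires."""
    return required <= slider

slider = Autonomy.FILE
print(allowed(Autonomy.SELECTION, slider))  # True: within the slider setting
print(allowed(Autonomy.REPO, slider))       # False: needs explicit approval
```

The point of the design is that larger-scope actions require deliberate opt-in rather than happening by default.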
Context Management
Effective applications manage significant context for users, making interactions with LLMs easier and quicker.
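One concrete piece of context management is trimming conversation history so it fits the model's window. A toy sketch, using a character budget as a stand-in for real token counting:

```python
def trim_history(messages, budget=2000):
    """Keep the system message plus the most recent turns that fit
    within a rough character budget (a stand-in for token counting)."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    kept, used = [], sum(len(m["content"]) for m in system)
    for msg in reversed(rest):  # walk from newest to oldest
        if used + len(msg["content"]) > budget:
            break
        kept.append(msg)
        used += len(msg["content"])
    return system + list(reversed(kept))
```

Real applications layer more on top (summarizing dropped turns, pinning key facts), but the principle is the same: the app, not the user, decides what the model sees.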
Orchestration of Multiple Models
These applications ensure smooth interactions between embedding, chat, and code models behind the scenes. This creates a cohesive experience without user intervention.
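A sketch of what such orchestration can look like, with stub functions standing in for real models. The keyword-count "embedding" is purely illustrative:

```python
import re

def embed(text):
    """Toy embedding: keyword counts. A real app would call an
    embedding model here."""
    words = re.findall(r"[a-z]+", text.lower())
    vocab = ["refund", "login", "billing", "error"]
    return [words.count(w) for w in vocab]

def similarity(a, b):
    """Dot product of two toy embedding vectors."""
    return sum(x * y for x, y in zip(a, b))

def chat(prompt):
    """Stub for a chat-model call."""
    return f"[answer based on]\n{prompt}"

def answer(question, documents):
    """Orchestrate behind the scenes: embed the question, retrieve the
    closest document, then hand both to the chat model."""
    q = embed(question)
    best = max(documents, key=lambda d: similarity(q, embed(d)))
    return chat(f"Context: {best}\nQuestion: {question}")
```

The user asks one question and sees one answer; the embedding, retrieval, and chat steps never surface.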
Application-Specific GUIs
Graphical User Interfaces (GUIs) provide intuitive visual auditing, which is vital for checking outputs from AI systems. Visual cues make audits faster and more efficient than reading raw text.
Human-AI Cooperation
The best workflow combines AI-generated content with human checks. This creates a quick loop to keep accuracy and productivity high.
Keeping AI “On the Leash”
To prevent LLMs from going overboard, precise prompts and constrained interactions are necessary. Applications designed with these principles ensure reliable AI outputs.
Rapid Prototyping
LLMs allow quick prototyping. Teams can clarify and visualize ideas in hours, changing abstract concepts into concrete visuals and functional prototypes.
Examples of Emerging LLM Apps
Cursor (Coding): An LLM-powered coding environment that manages context, orchestrates AI models, provides specialized GUIs (like visual diffing), and includes adjustable autonomy.
Perplexity (Search/Research): This app facilitates advanced information retrieval with coordinated LLM interactions, citation-integrated GUIs, and adjustable autonomy for different research depths.
Menu Gen (Personal App): An example of “vibe coding”: a personal app that generates images of menu items from a photo of a restaurant menu. While the AI-assisted core came together quickly, the surrounding work, such as deployment, authentication, and payments, highlights where traditional development friction remains.
Addressing New Bottlenecks and Product Value
Shifting Development Bottlenecks
In the past, software development speed was the main bottleneck. With AI-assisted development reportedly boosting productivity by 15x to 30x, the bottleneck shifts to curating the development backlog and maintaining the quality of what gets built.
Backlog Curation
Increased speed risks overwhelming product backlogs with low-quality requests. Product teams must curate features carefully to add real value and avoid bloating applications.
Real-time User Insights
AI summaries may miss key insights. Direct human-to-human interactions and discovery calls are still vital to uncover user needs and innovation opportunities.
User-Paced Change
Rapid development must match users' ability to adapt. Teams should pace updates carefully to avoid confusion and maximize value.
Evolving Product Value
Future software success will focus less on delivery speed and more on market insights, user discovery, curated features, and proactive innovation.
New Software Economics
Faster development cuts software costs. This shift makes marketing, brand development, and customer loyalty vital for long-term success.
Adapting Infrastructure for LLM Agents and Apps
LLM-Friendly Documentation
Traditional documentation needs reformatting into LLM-friendly formats like Markdown for better accessibility.
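For illustration, a minimal converter that turns a few common HTML tags into Markdown using only the standard library. Real pipelines use fuller tools; the tag coverage here is deliberately tiny:

```python
from html.parser import HTMLParser

class MarkdownConverter(HTMLParser):
    """Minimal sketch: maps a handful of HTML tags onto Markdown."""
    def __init__(self):
        super().__init__()
        self.out = []

    def handle_starttag(self, tag, attrs):
        if tag == "h1":
            self.out.append("# ")
        elif tag == "h2":
            self.out.append("## ")
        elif tag == "li":
            self.out.append("- ")
        elif tag == "code":
            self.out.append("`")

    def handle_endtag(self, tag):
        if tag in ("h1", "h2", "p", "li"):
            self.out.append("\n")
        elif tag == "code":
            self.out.append("`")

    def handle_data(self, data):
        self.out.append(data)

def to_markdown(html):
    parser = MarkdownConverter()
    parser.feed(html)
    return "".join(parser.out)
```

The result is plain text a model can consume without fighting markup, navigation chrome, or styling.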
Programmatic Commands
Instead of saying “click this,” use clear, executable commands (like curl) to make documentation actionable for LLM agents.
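Once documentation carries executable commands in fenced code blocks, an agent can extract and run them instead of parsing “click here” prose. A sketch of the extraction step (the function name and regex are made up for this example):

```python
import re

def extract_commands(markdown):
    """Pull fenced shell blocks out of Markdown so an agent can
    execute them directly rather than interpret point-and-click prose."""
    pattern = r"```(?:bash|shell|sh)\n(.*?)```"
    return [block.strip() for block in re.findall(pattern, markdown, re.DOTALL)]

doc = "To check the service, run:\n\n```bash\ncurl -s https://api.example.com/health\n```\n"
print(extract_commands(doc))
```

The URL above is a placeholder; the pattern works for any docs that keep commands in fenced blocks.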
Data Ingestion Tools
Tools that convert human-oriented interfaces (like GitHub repositories) into LLM-readable formats help with efficient information access. Protocols like Anthropic’s Model Context Protocol streamline agent communications.
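A toy version of such an ingestion tool, in the spirit of utilities that flatten a repository into a single LLM-readable document. The file filter and formatting choices here are assumptions:

```python
from pathlib import Path

def ingest_repo(root, extensions=(".py", ".md")):
    """Flatten a repository's text files into one Markdown document
    that an LLM can read in a single pass. Binary and other
    non-matching files are skipped."""
    parts = []
    for path in sorted(Path(root).rglob("*")):
        if path.is_file() and path.suffix in extensions:
            rel = path.relative_to(root)
            parts.append(f"## {rel}\n\n{path.read_text()}\n")
    return "\n".join(parts)
```

Each file becomes a headed section, so the model can cite which file a piece of code came from.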
Meeting LLMs Halfway
While future LLMs may navigate traditional human interfaces easily, optimizing digital infrastructure now improves efficiency and reduces costs.
The Future is Now (and it’s Partially Autonomous)
We are entering a remarkable era of software innovation driven by partially autonomous systems and enhanced human capabilities. Instead of fully autonomous “Iron Man robots,” we’re creating “Iron Man suits”—tools that boost human skills. Over the next decade, we will gradually increase product autonomy, profoundly shaping the software landscape. Now is a prime time to innovate, as vast amounts of software are being reimagined. Embracing these evolving “fallible yet powerful” LLMs opens doors to creativity and technological advancement.