Simulations play a critical role in how scientists and engineers build and explore complex products, from planes and skyscrapers to drugs and even shoes. Simulations allow teams to test products under a wide range of conditions, accelerating feedback and learning, which reduces risk, cost, and time-to-market.
What if we could simulate software the same way we simulate physical systems?
What Are Software Simulations?
A wind tunnel is a simulated environment that makes iteratively testing under varying wind conditions possible. An early wind tunnel allowed the Wright Brothers to test hundreds of wing designs within a few months, enabling them to build the first airplane to achieve powered flight.
For software, the varying conditions are the data that is passed through the system.
Software is essentially made up of data (state) and functions (methods). The goal of a software simulator is to generate varied data so that a developer can see how the functions in a software system respond to different inputs.
We define software simulations as passing varying data through isolated parts of a software application and capturing the results for comparison, evaluation, and documentation. This could be for a single function, a group of interdependent functions, or all the way up to the high-level functions that provide the entry points into a software system.
Imagine writing a backend function that calculates pricing based on user tiers, discounts, and usage. CodeYam can simulate dozens of realistic input combinations to show you what the function does, providing immediate feedback about how the function responds to diverse inputs. Now imagine doing this across every frontend and backend function in your entire application within minutes.
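As a concrete sketch (the function, prices, and scenarios below are hypothetical illustrations, not CodeYam's actual API or output), a simulation of such a pricing function pairs it with a set of varied, realistic inputs and captures each result:

```typescript
// Hypothetical pricing function: a tiered base price, per-unit usage
// charges, and an optional percentage discount.
type Tier = "free" | "pro" | "enterprise";

interface PricingInput {
  tier: Tier;
  usageUnits: number;      // e.g. API calls this billing period
  discountPercent: number; // expected range: 0 to 100
}

const BASE_PRICE: Record<Tier, number> = { free: 0, pro: 29, enterprise: 499 };
const PER_UNIT: Record<Tier, number> = { free: 0, pro: 0.002, enterprise: 0.001 };

function calculatePrice({ tier, usageUnits, discountPercent }: PricingInput): number {
  const subtotal = BASE_PRICE[tier] + usageUnits * PER_UNIT[tier];
  return subtotal * (1 - discountPercent / 100);
}

// A simulation run: varied, realistic input scenarios with captured outputs.
const scenarios: PricingInput[] = [
  { tier: "free", usageUnits: 0, discountPercent: 0 },
  { tier: "pro", usageUnits: 10_000, discountPercent: 0 },
  { tier: "pro", usageUnits: 10_000, discountPercent: 20 },
  { tier: "enterprise", usageUnits: 1_000_000, discountPercent: 15 },
];

for (const input of scenarios) {
  console.log(input, "=>", calculatePrice(input));
}
```

Each scenario-and-result pair is an artifact you can inspect at a glance: does the pro tier with a 20% discount actually come out where the business expects?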
Why Are Software Simulations Valuable?
Just as simulations help other research and engineering teams accelerate product development through faster feedback loops, reducing risk and cost, so too can software simulations accelerate the software development life cycle.
Imagine: as you write code (or as an AI agent writes your code), a separate AI-based system is showing you what happens when different data is passed into the code you are writing. The results of these simulations might be visual (e.g. a React component) or data (e.g. a backend function) or might involve calling third party services (e.g. an API or database).
With a variety of data scenarios, you can see what the code will do when presented with different data. This provides immediate feedback to the software developer (or the person ultimately responsible for the product if an AI agent is acting as the developer), helping them to more easily validate whether the code being written has the intended effect.
As new software functionality is developed, simulations can be shared internally with the team or even externally with customers to demonstrate progress and gather feedback. New developers can learn a software system more efficiently by seeing how each piece of it behaves. As a system evolves, newer simulations can be compared to older ones to catch anything that changed unexpectedly. The result is an always up-to-date set of simulations across the whole application that serves as demonstration, documentation, and a robust “approval” test suite.
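As a rough sketch of that “approval” idea, assuming each simulation result is serialized and stored alongside the code (the `checkAgainstApproved` helper and the storage layout here are hypothetical), re-running a simulation and diffing it against the stored baseline flags any unexpected behavior change:

```typescript
import { existsSync, mkdirSync, readFileSync, writeFileSync } from "node:fs";

// Compare a fresh simulation result against the previously approved baseline.
// A mismatch means observable behavior changed and a human should review it.
function checkAgainstApproved(
  name: string,
  result: unknown
): "approved" | "changed" | "new" {
  const dir = "simulations"; // hypothetical storage layout
  const path = `${dir}/${name}.approved.json`;
  const current = JSON.stringify(result, null, 2);

  if (!existsSync(path)) {
    mkdirSync(dir, { recursive: true });
    writeFileSync(path, current); // first run: record the baseline
    return "new";
  }
  return readFileSync(path, "utf8") === current ? "approved" : "changed";
}
```

On a clean run every result comes back "approved"; any "changed" result is a diff worth a human look before the baseline is updated.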
Why Now? The Rise of AI Software Development
Over the last few decades, software development has evolved significantly with more powerful dev tooling, robust CI/CD pipelines, and, more recently, with AI supporting code generation or AI agents that can write the code themselves. In this new world, we need new tools to help us understand what our code does, especially as more code is written by AI, not humans.
AI also makes robust simulations possible. Without AI, the technique closest to software simulation is fuzz testing, which passes a wide range of random values into functions to ensure the system does not crash. Fuzz testing is valuable, but it is limited to finding inputs that make the software crash, hang, or otherwise fail.
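For contrast, a bare-bones fuzzing loop over the hypothetical pricing function from earlier might look like this; it can confirm the function never throws, but it cannot tell you that a negative price or a 250% discount violates the business rules:

```typescript
// Naive fuzz loop: hammer the pricing function with random inputs and
// report only crashes. A nonsensical result (e.g. a negative price from
// an out-of-range discount) passes silently, because nothing throws.
const tiers = ["free", "pro", "enterprise"] as const;

for (let i = 0; i < 10_000; i++) {
  const input = {
    tier: tiers[Math.floor(Math.random() * tiers.length)],
    usageUnits: Math.floor(Math.random() * 1e9) - 1e8, // may be negative
    discountPercent: Math.random() * 300 - 100,        // may be out of range
  };
  try {
    calculatePrice(input); // "passes" as long as no exception is thrown
  } catch (err) {
    console.error("Crash found for input:", input, err);
  }
}
```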
With AI, we can go beyond fuzz testing to ensure we not only avoid failures but also that the business logic is properly respected. AI can semantically understand what the software is trying to do and can attempt to both identify failures and generate successful simulations.
Looking towards the future of software development, as AI agents are used to write more and more code, robust simulations become mission critical. We will increasingly need tooling to help us understand if the AI is building the correct software.
The odds of missing bugs, issues, or deviations from business intent in code that an AI wrote are quite high. Moreover, AI agents may generate test suites that validate their own changes but overlook the broader business context of the software, creating a false sense of correctness and letting regressions go unnoticed until later.
With static code analysis and AI, CodeYam can do something that was not previously possible: for each change to a codebase, we can generate high-quality simulations that make it far easier to evaluate the results of an AI agent's changes.
Simulations as the IDE for Generative AI
As AI becomes a collaborator, not just a tool, we need new paradigms and interfaces for human-to-AI and AI agent-to-agent interactions. This is where simulation becomes more than a testing aid: it becomes the core of the development environment.
We need ways of visualizing and navigating software systems that allow us to easily specify which part of an application we want to change. We need to be able to isolate and discuss the change to a specific part of the system. Once that change has been written, we need to verify its results. Simulations provide the artifacts needed to find, discuss, and potentially change isolated parts of a software system, even for non-technical users.
Simulations become the Integrated Development Environment (IDE) for working with AI agents.
CodeYam: 18+ Months of Software Simulation R&D
CodeYam represents over 18 months of R&D into how to best utilize AI in creating software simulations, leverage those simulations during the software development life cycle, and manage the complexity and significant amount of information created by these simulations. In this way, CodeYam has become a simulation-based IDE for humans to interact with AI agents around isolated parts of a complex software system.
The simulator uses a combination of static code analysis and generative AI to ensure accurate, high-quality results. It currently supports TypeScript and frameworks such as Next.js and Remix, but our R&D has revealed a strategy that will allow it to be ported to other languages more quickly and easily going forward.
The future of software isn’t just AI writing code; it’s humans and AI collaborating across the entire software development life cycle. Simulation is the shared language that makes that collaboration possible.
CodeYam is building that interface. If you’re building or using AI or agents for software development, we’d love to learn about and support what you’re doing.