Ask any software engineer what they dislike most about their job, and chances are “writing tests” will land near the top. Hiroshi Wes Nishio knows this firsthand—and he’s betting his latest startup, GitAuto, can eliminate that pain point for good.
We caught up with Wes to talk about the future of automated testing, why most AI coding tools fall short, and how his experience in finance, retail, and healthcare led him to create a tool that over 220 companies are already using.
⸻
Q: Wes, you describe GitAuto as an “autonomous QA agent.” How’s that different from other AI tools like GitHub Copilot?
Wes: The difference is initiative. Most tools, like Copilot or Cursor, respond to user input. They're assistive. GitAuto is autonomous. It begins by analyzing test coverage reports from GitHub Actions, identifies functions or files that lack coverage, and then creates GitHub Issues on its own. From there, it generates tests, opens pull requests, runs the test suites, and, crucially, fixes failures and updates the code until everything passes. It's not waiting for engineers to ask for help. It's proactively improving your codebase.
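GitAuto's internals aren't public, but the coverage-to-issue step Wes describes can be sketched roughly as follows. This is an illustration only: it assumes a Cobertura-style coverage.xml artifact from GitHub Actions, a GITHUB_TOKEN environment variable, and the PyGithub library; the threshold and repository name are placeholders.

```python
# Illustrative sketch of the coverage-analysis -> issue-creation step.
# Not GitAuto's actual code; assumes a Cobertura-style coverage.xml report.
import os
import xml.etree.ElementTree as ET

from github import Github  # pip install PyGithub

COVERAGE_THRESHOLD = 0.8  # hypothetical cutoff for "needs tests"

def find_low_coverage_files(report_path="coverage.xml"):
    """Return (filename, line_rate) pairs below the coverage threshold."""
    tree = ET.parse(report_path)
    low = []
    for cls in tree.getroot().iter("class"):
        rate = float(cls.get("line-rate", "1"))
        if rate < COVERAGE_THRESHOLD:
            low.append((cls.get("filename"), rate))
    return low

def open_issues(repo_name):
    """Open one GitHub Issue per under-covered file."""
    gh = Github(os.environ["GITHUB_TOKEN"])
    repo = gh.get_repo(repo_name)
    for filename, rate in find_low_coverage_files():
        repo.create_issue(
            title=f"Add tests for {filename} ({rate:.0%} line coverage)",
            body=f"`{filename}` is below the coverage threshold. "
                 "Generate unit tests and open a pull request.",
        )

if __name__ == "__main__":
    open_issues("your-org/your-repo")  # placeholder repository
```

The later steps Wes lists, generating the tests, opening the PR, running the suite, and retrying on failure, would hang off the issues a loop like this creates.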
⸻
Q: That sounds pretty hands-off. What does the developer actually do then?
Wes: The developer’s role shifts from writing boilerplate tests to reviewing intelligently generated PRs. They retain final approval but don’t have to spend hours figuring out which functions are uncovered, what sample inputs to use, or how to structure the test. In many cases, GitAuto even proposes reusable input and output fixtures across multiple test cases. It’s not just automation. It’s leverage.
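For readers who haven't met the term, a reusable fixture is simply shared sample data that several tests draw on. Here is a minimal pytest illustration; the module, functions, and sample values are assumptions, not GitAuto output.

```python
# Minimal pytest illustration of an input/output fixture reused across tests.
# myapp.utils.prices, parse_price, and format_price are hypothetical.
import pytest

@pytest.fixture
def price_samples():
    """(raw_input, expected_value) pairs shared by several test cases."""
    return [("$1,299.00", 1299.00), ("0", 0.0), ("2,500", 2500.0)]

def test_parse_price(price_samples):
    from myapp.utils.prices import parse_price
    for raw, expected in price_samples:
        assert parse_price(raw) == pytest.approx(expected)

def test_format_price_roundtrip(price_samples):
    from myapp.utils.prices import format_price, parse_price
    for _, expected in price_samples:
        assert parse_price(format_price(expected)) == pytest.approx(expected)
```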
⸻
Q: Where does this autonomy come from? LLMs usually struggle with large codebases.
Wes: Absolutely, and that's one of the core innovations. GitAuto avoids trying to read the entire codebase. Instead, it parses config files like pytest.ini or jest.config.js to understand test patterns. It scopes its queries based on coverage reports and uses structured search strategies. It knows, for example, that a utils/ function with missing coverage might correspond to a test in tests/unit/utils/, and navigates the tree accordingly. This lets it stay token-efficient while remaining highly contextual.
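That scoping idea is easy to picture as code. The sketch below is our illustration of the mapping Wes describes, not GitAuto's implementation; the directory layout and function name are assumptions.

```python
# Illustration of coverage-driven test scoping, not GitAuto's actual logic.
# Assumes the repo mirrors its source tree under tests/unit/.
from pathlib import Path

def candidate_test_paths(source_file: str) -> list[Path]:
    """Map an under-covered source file to the test files most likely to cover it."""
    src = Path(source_file)
    rel_dir = src.parent   # e.g. utils/ for utils/retry.py
    stem = src.stem        # e.g. retry
    return [
        Path("tests/unit") / rel_dir / f"test_{stem}.py",  # pytest convention
        Path("tests/unit") / rel_dir / f"{stem}_test.py",  # alternate convention
    ]

if __name__ == "__main__":
    print(candidate_test_paths("utils/retry.py"))
    # e.g. [PosixPath('tests/unit/utils/test_retry.py'),
    #       PosixPath('tests/unit/utils/retry_test.py')]
```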
⸻
Q: What kind of companies are using it right now?
Wes: We’ve seen adoption across sectors where quality really matters, including automotive, financial infrastructure, payments, and enterprise SaaS. One major user is an automotive OEM with over 10 billion dollars in revenue. Their QA backlog included hundreds of untested helper functions that were critical to their ADAS systems. GitAuto generated over 50 test PRs in less than a week, with over 90 percent of them merged without manual edits. For them, it was about de-risking releases tied to physical safety.
⸻
Q: What’s the efficiency gain compared to traditional methods?
Wes: Writing good tests requires context, such as what the function does, what it should return, and where it belongs. With tools like Copilot, the developer still has to feed all that manually. GitAuto learns patterns from one ticket and applies them across others. For example, it notices naming conventions like test_*.py, understands the fixture strategy used in the repo, and reuses that knowledge. One client reported that what used to take a QA engineer three days per module now takes GitAuto a few hours, fully automated.
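The naming conventions Wes mentions usually live right in the repo's own config, so an agent can read them instead of guessing. A small sketch, assuming a standard pytest.ini with the python_files option; everything else here is illustrative.

```python
# Sketch: learn a repo's test-file naming convention from its own pytest.ini.
# python_files is a standard pytest option; the helper itself is hypothetical.
import configparser
from pathlib import Path

def test_file_pattern(repo_root: str = ".") -> str:
    """Return the glob pytest uses to discover test files, defaulting to test_*.py."""
    ini = Path(repo_root) / "pytest.ini"
    if not ini.exists():
        return "test_*.py"
    cfg = configparser.ConfigParser()
    cfg.read(ini)
    return cfg.get("pytest", "python_files", fallback="test_*.py")

if __name__ == "__main__":
    print(test_file_pattern())  # e.g. "test_*.py" or "*_test.py"
```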
⸻
Q: Your background isn’t pure software. How did you end up here?
Wes: I started in investment banking at Barclays, mostly on insurance and REITs. Then I moved into digital transformation for a retail enterprise in Japan, handling multi-system rollouts. What frustrated me was how consistently bugs derailed schedules, especially due to missing tests. No one wanted to own QA. That’s when I decided to learn coding. In 2021, I built a Slack AI assistant called Q just a week after GPT-3.5 launched. It hit 16,000 installs fast. That gave me confidence to tackle testing, one of the last unsolved pain points in development.
⸻
Q: Wait, you taught yourself?
Wes: Yeah, through necessity. When your project’s stuck and the engineers are overloaded, you either wait or figure it out. I chose the latter. Learning to code let me build tools for my own teams, and eventually, for others.
⸻
Q: Where do you see GitAuto going from here?
Wes: Long-term, I think AI QA agents will be mandatory in every CI/CD pipeline. GitAuto is already proposing test coverage plans and generating dozens of issues in a batch. We're working on repo-level dashboards and auto-prioritization based on recent production bugs. Think of it as test planning, execution, and iteration all rolled into one. The big goal is to let developers focus on business logic while GitAuto keeps the test suite healthy and complete.
⸻
Q: Final question: You’ve gotten recognition from some pretty big names. What does that mean for you?
Wes: It means we’re on the right track. We were named a Top 20 global AI agent in the AI Agents Global Challenge judged by folks like the Wise CEO and the creator of NumPy. But more importantly, it’s helped us get in front of engineering leaders who actually feel the pain we’re solving. Recognition opens the door, but traction is what keeps us moving.