By Dmitrii Abanin, Senior Software Engineer
This article explores a comprehensive approach to testing web applications that rely on Dependency Injection (DI). By implementing the strategies detailed in this guide, backend developers and DevOps teams can streamline workflows, improve release quality, and significantly reduce production risks.
The need for better testing methodologies is clear. Docker’s 2024 State of Application Development Report, which surveyed over 1,300 developers, found that 20% struggle with debugging and testing. In other words, roughly one in five professionals stands to benefit directly from the architectural approaches discussed here.
Dependency Injection Architecture
Most modern backend systems are built on Dependency Injection (DI), an architectural foundation that promotes modularity, testability, and maintainability. Regardless of the programming language used, DI shapes how services are connected, how components interact, and how the application behaves at runtime. The following examples illustrate its practical application:
Java: Spring Boot applications use @Autowired and component scanning to inject services, repositories, and controllers, managing the entire dependency graph (a short sketch follows this list). Micronaut and Quarkus employ annotation-driven DI to achieve fast startup times and native compilation, which is ideal for microservices with auto-configured infrastructure clients.
TypeScript/Node.js: NestJS uses @Injectable() decorators and modules to inject services into controllers, mirroring Spring’s patterns to build scalable APIs with TypeORM or Prisma integration. InversifyJS offers explicit DI binding for custom Node.js backends that handle business logic and external queues.
C#/.NET: ASP.NET Core utilises the built-in IServiceCollection for constructor injection of controllers, DbContexts, and middleware, supporting scoped lifetimes in cloud-native web applications.
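To ground the Java case, here is a minimal constructor-injection sketch in Spring Boot. The GreetingService and GreetingController classes are invented for illustration, not taken from a specific codebase:

```java
// A minimal Spring Boot constructor-injection sketch; class names are illustrative.
import org.springframework.stereotype.Service;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@Service
class GreetingService {
    String greet(String name) {
        return "Hello, " + name;
    }
}

@RestController
class GreetingController {
    private final GreetingService greetingService;

    // Spring resolves this dependency from its container at startup;
    // with a single constructor, no explicit @Autowired is needed.
    GreetingController(GreetingService greetingService) {
        this.greetingService = greetingService;
    }

    @GetMapping("/greet")
    String greet() {
        return greetingService.greet("world");
    }
}
```

Whether the container can actually satisfy that constructor is only discovered when the application context is built, which is exactly the gap discussed next.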
Limitations of Unit Tests in Dependency Injection Frameworks
DI frameworks go far beyond acting as simple factories. In modern systems, DI containers are responsible for:
- Scanning components or loading modules.
- Automatically binding interfaces to their implementations.
- Managing scoped lifetimes and lifecycle behaviours.
- Wiring dependencies via reflection or decorators.
- Handling conditional or environment-specific configurations.
- Auto-initialising infrastructure components like database clients, message brokers, and caches.
This means that application correctness doesn’t depend on business logic alone; it also hinges on the DI container correctly resolving dependencies under real runtime conditions.
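As a small illustration of the conditional, environment-specific wiring listed above, here is a sketch of profile-specific bean configuration in Spring. The MessagePublisher interface and the profile names are assumptions invented for the example:

```java
// Sketch of environment-specific wiring in Spring; names and profiles are illustrative.
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;

interface MessagePublisher {
    void publish(String message);
}

@Configuration
class MessagingConfig {

    // Bound only when the "prod" profile is active, e.g. a real broker client.
    @Bean
    @Profile("prod")
    MessagePublisher brokerPublisher() {
        return message -> { /* send to a real broker here */ };
    }

    // Fallback implementation used in local development and tests.
    @Bean
    @Profile("!prod")
    MessagePublisher loggingPublisher() {
        return message -> System.out.println("LOG ONLY: " + message);
    }
}
```

Whether the right bean is selected for a given environment is a container decision made at startup, and no isolated unit test of either implementation can catch a misconfigured profile.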
Unit testing and integration testing are two widely used testing methods. Unit testing focuses on verifying individual code components in isolation, whereas integration testing examines how these components interact. While valuable for validating isolated logic, unit testing has a critical blind spot in applications that rely on dependency injection: it cannot confirm the validity of the DI context or ensure that the contract between your application and its external services is correctly configured and maintained.
Specifically, unit tests often fall short in validating:
- Whether the DI container can construct the full dependency graph.
- Whether configurations are valid for the current environment.
- Whether components that rely on proxies, interceptors, or decorators are initialised correctly.
- Whether modules load properly based on application profiles.
- Whether startup behaviours dependent on external systems succeed.
These critical checks can only be verified during full application bootstrapping, ideally in an environment closely resembling production. Thus, for dependency-injection web applications, thorough end-to-end validation during framework startup is essential.
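In Spring Boot terms, the cheapest form of this check is a test that boots the full application context. A minimal sketch, assuming a standard Spring Boot test setup:

```java
// A context-load test: it passes only if the DI container can construct
// the full dependency graph with the active configuration.
import org.junit.jupiter.api.Test;
import org.springframework.boot.test.context.SpringBootTest;

@SpringBootTest
class ApplicationContextTest {

    @Test
    void contextLoads() {
        // No assertions needed: a failure to wire any bean, resolve any
        // property, or initialise any proxy fails this test at startup.
    }
}
```

NestJS and ASP.NET Core offer equivalent mechanisms (compiling the full module tree or building the host in a test), so the principle carries across ecosystems.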
Limitations of Mocks
While mocks and other test doubles can streamline and simplify certain testing scenarios, they often mask critical behaviours that only emerge during interactions with live infrastructure. Real-world databases, caches, and message brokers impose specific constraints and nuances that mocks simply cannot replicate with precision, such as:
- Schema compatibility, indexing rules, and strict type handling.
- Network dynamics, including connection pooling, latency, and timeouts.
- Authentic message formatting and serialisation protocols.
- Critical startup sequences that rely on the availability of external services.
- Security layers, including authentication, TLS handshakes, and intricate configuration details.
Furthermore, dependency injection frequently abstracts these interactions. A test reliant on mocks fails to verify whether a DI-wired component can actually communicate with the production-grade dependency it is designed to manage. To ensure the entire stack functions as intended, it is essential to validate your system against live instances of these services.
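As a hedged illustration, consider a repository mock that happily accepts a duplicate e-mail address; against a real database with a unique index, the second insert would fail. The User and UserRepository types below are assumptions invented for the example:

```java
// Illustrative only: a mocked repository cannot enforce real database constraints.
import static org.mockito.ArgumentMatchers.any;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

record User(String email) {}

interface UserRepository {
    User save(User user);
}

class MockBlindSpotExample {
    void demonstrate() {
        UserRepository repository = mock(UserRepository.class);
        when(repository.save(any(User.class))).thenAnswer(inv -> inv.getArgument(0));

        // Both calls "succeed" against the mock. Against a real database with
        // a unique index on email, the second save would raise a constraint
        // violation -- a failure this unit test can never observe.
        repository.save(new User("alice@example.com"));
        repository.save(new User("alice@example.com"));
    }
}
```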
Benefits of Running External Dependencies Inside Docker Containers
To ensure a truly reliable dependency injection (DI) environment, a more thorough and robust approach is required – one that boots the entire application and tests it against real external dependencies. When implemented correctly, integration testing becomes a powerful tool for identifying DI wiring issues and troubleshooting interactions with external services in microservices architectures.
This brings us to the practice of integration testing with Dockerised external services. Lately, Testcontainers has gained significant attention as a game changer for creating reproducible environments amid the growing adoption of containers. It enables developers to adopt containerised testing practices seamlessly in their applications.
Through my experience, I’ve found that running external dependencies within Docker containers is one of the most effective methods for building realistic test environments. This approach offers several key benefits:
- Consistency Across Environments. Each test run uses isolated and reproducible versions of external services. Local developer and continuous integration (CI) environments behave the same way because both rely on the same container images.
- Accurate Simulation of Production Environments. Testing with real versions of databases, message brokers, or caches eliminates uncertainty. If an application boots and performs correctly with containerised dependencies during testing, it is significantly less likely to fail in staging or production.
- Language-Agnostic Flexibility. This method works seamlessly across ecosystems, whether you’re using Java, TypeScript, or C#. The DI-heavy frameworks in each of these ecosystems benefit equally from containerised integration testing.
- Automated Provisioning. Tools like Testcontainers make it effortless to spin up containers programmatically during test execution. Regardless of the specific tooling used, the core principle remains consistent: real dependencies when possible, isolated environments, and reproducible configurations.
Thus, by embracing containerised external dependencies in your testing strategy, you can elevate the reliability of your applications, reduce uncertainty, and simulate production environments with precision.
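To make this concrete, here is a minimal sketch of a Testcontainers-based Spring Boot integration test. The Testcontainers and Spring annotations are the libraries’ documented APIs; the test class, image tag, and property choices are illustrative assumptions:

```java
// Boots the real application context against a disposable PostgreSQL container.
import org.junit.jupiter.api.Test;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.DynamicPropertyRegistry;
import org.springframework.test.context.DynamicPropertySource;
import org.testcontainers.containers.PostgreSQLContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;

@Testcontainers
@SpringBootTest
class PostgresIntegrationTest {

    // One throwaway PostgreSQL instance per test class, same image everywhere.
    @Container
    static PostgreSQLContainer<?> postgres = new PostgreSQLContainer<>("postgres:16");

    // Point the DI-wired DataSource at the container before the context starts.
    @DynamicPropertySource
    static void datasourceProperties(DynamicPropertyRegistry registry) {
        registry.add("spring.datasource.url", postgres::getJdbcUrl);
        registry.add("spring.datasource.username", postgres::getUsername);
        registry.add("spring.datasource.password", postgres::getPassword);
    }

    @Test
    void contextStartsAgainstRealDatabase() {
        // Passing means the container started and every DI-wired component
        // could reach a real database, not a mock.
    }
}
```

Because the container lifecycle is managed by the test framework itself, the same test runs unmodified on a developer laptop and in CI.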
Reusing Docker Compose for Local Development and Debugging
One of the biggest advantages of a containerised test environment is its versatility. The same Docker Compose files used for testing can also be used to run applications locally. This eliminates the need to start external services by hand, allowing teams to share a single, unified configuration for a range of tasks:
- Local manual testing
- Debugging sessions
- CI integration tests
- Developer onboarding
- Infrastructure simulation
This shared configuration creates a seamless workflow. Developers can launch the entire application stack on their local machine with a single Docker Compose command, debug the application as it interacts with containerised services, and then use that same configuration for automated integration tests. This consistency blurs the lines between local and automated environments, making development more efficient and predictable.
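As an illustration, a compose file along the following lines (service names, images, and credentials are assumptions for the example) can back both a local docker compose up and the automated suite:

```yaml
# Illustrative docker-compose.yml shared by local development and CI tests.
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_DB: app
      POSTGRES_USER: app
      POSTGRES_PASSWORD: app
    ports:
      - "5432:5432"
  redis:
    image: redis:7
    ports:
      - "6379:6379"
```

Testcontainers also ships a Docker Compose module, so automated tests can consume this same file rather than re-declaring each container in code.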
Conclusion
Modern applications are built on a foundation of DI containers and real infrastructure services. While unit tests are valuable, they can’t fully guarantee that an application will start up correctly or function reliably in a production environment. This is where testing with Dockerised dependencies becomes essential.
This approach offers several key benefits:
- Validates Dependency Injection: It confirms that the dependency injection graph loads successfully.
- Verifies Service Contracts: Tests exercise the real contract between the application and its external services, not a mocked approximation.
- Verifies Configuration: It ensures the configuration is valid, matching production settings.
- Ensures Consistency: It maintains uniformity between CI pipelines and local development environments.
- Streamlines Debugging: Local debugging can be performed using the same stack as automated tests.
Crucially, this method is language-, framework-, and ecosystem-agnostic. Regardless of your chosen language or DI framework, running integration tests against real containers is one of the most effective ways to ensure your application behaves predictably in the real world.
Therefore, in a dependency-injected world, realistic integration testing is more than just a good idea – it’s a mandatory practice for building robust and reliable software.
Author:

Dmitrii Abanin is a Senior Software Engineer based in London with over eight years of experience building cutting-edge applications. He specialises in developing robust, scalable, high-performance systems using Java, Spring Boot, and microservices architecture. Dmitrii is an expert in database technologies like Postgres and Cassandra, and containerisation tools including Docker and Kubernetes.