Hacker News

I did not come in here expecting to read such effusive praise for testcontainers. If you’re coming from a place where docker wasn’t really a thing, I can see how it looks beautiful. And in a fair number of use cases it can be really nice. But if you want it to play well with any other containerized workflow, good freaking luck.

Testcontainers is the library that convinced me that shelling out to docker via bash calls embedded in a library is a bad abstraction. Not because containerization as an abstraction is a bad idea. Rather, it’s that a library issuing custom shell calls to the docker CLI as part of its core functionality creates problems and complexity as soon as one introduces other containerized workflows. The library has the nasty habit of assuming it’s running on a host machine and that nothing else docker-related is running, and footguns itself with limitations accordingly. This makes it not much better than some non-dockerized library in most cases, and oftentimes much, much worse.



Testcontainers is not shelling out to the docker CLI; at least on Java, it is using a Java implementation of the docker network protocol, and I believe that’s the case also for the other platforms.

Not sure this matters for the core argument you are making, just thought I’d point it out.


The comment you are replying to makes so many mistakes about how Testcontainers works on Java that I'm not sure what source code the commenter is looking at.


Why don't you point them out, please. We already know about testcontainers not using the shell, but rather talking to the HTTP API.

Making comments like "this is wrong, but I'm not gonna explain why" has no place here, imho.


[flagged]


Which is a perfectly fine counter-argument if you are, in fact, not doing it right.

If you say you're doing continuous deployment because you deploy every Tuesday evening, it's perfectly fair to point out that that's not continuous deployment.

Of course, you should follow it up by explaining why as well, but many companies don't actually follow the agile principles.


This is a digression from the original topic, but my criticism of agile is in fact that nobody is doing it right. If the majority of companies that attempt it end up just wasting extra time, the process itself is broken.


I don't think you can claim that. It's the fate of most popular systems: they end up being poorly explained in elevators or adopted based on a blog article, rather than people investing the few hours or days or weeks to understand what made the process originally successful. It's so common we have a term for it: cargo culting. I don't think you can fault agile for the tribally-spread BS most places do today, where points == hours. If anything, you can maybe fault it for seeming a bit too familiar and simple when there are a few important nuances.


> Making comments like "this is wrong, but I'm not gonna explain why" has no place here, imho.

Who are you quoting?


Using quotation marks as a way of summarizing what someone might say but didn't literally say is a fairly common practice. I think the quotation is a fairly accurate depiction of the sentiment in the comment it responds to, so I don't see any issue with it.


It may be common, but that doesn't mean it's warranted. Logical fallacies, for example, are also common (so are spelling and grammar mistakes), and yet personally I prefer to commit them less often rather than more often. You think the quotation is a fairly accurate depiction of the sentiment expressed by another person. I don't think that. In fact, I think the opposite. One way to settle the matter would be to ask the person.

What do you say, doctorpangloss? Do you think this is a fairly accurate depiction of the sentiment you were expressing?

"this is wrong, but I'm not gonna explain why"

I doubt this person will respond, of course (they're not obliged to).


> It may be common, but that doesn't mean it's warranted. Logical fallacies, for example, are also common (so are spelling and grammar mistakes), and yet personally I prefer to commit them less often rather than more often. You think the quotation is a fairly accurate depiction of the sentiment expressed by another person.

This works both ways; your question's phrasing assumed that the person you responded to felt the same way about how quotation marks should be used. It seems likely that you knew the answer was that they weren't intending to literally quote anyone, but you didn't ask them about that first, which is why it comes across as passive-aggressive.

For clarity, this is the comment that the quote was referring to:

> The comment you are replying to makes so many mistakes about how Testcontainers works on Java that I'm not sure what source code the commenter is looking at.

The comment quite literally calls something wrong ("The comment you are replying to makes so many mistakes"), and it doesn't give any evidence for the claim that there are "so many mistakes", nor explain them. When someone says one thing and then claims that it shouldn't be taken literally because they intended something entirely different, that's called gaslighting.


> your questions phrasing assumed that the person you responded to felt the same way about how quotation marks should be used.

Questions don't assume things. People do.

> The comment quite literally calls something wrong ("The comment you are replying to makes so many mistakes")

Yes, but it doesn't literally say, "I'm not gonna explain why." Why did you omit that part?


It doesn't matter if it interfaces via CLI or not. Testcontainers tends to make things work but also introduces difficulty in resolution. You seem to have missed the point.


I'm interested to hear what you would do instead! I'm using Testcontainers in a very basic scenario: a web app with a PostgreSQL database. There are different database backends available (like Sqlite), but I use PostgreSQL-specific features.

Currently, in my integration testing project, I use Testcontainers to spin up a PostgreSQL database in Docker and then use that for testing. I can control the database lifecycle from my test code. It works perfectly, both on my local PC and in the CI pipeline. To date, it has also not interfered or conflicted with the other Docker containers I have running locally (like the development database).

From what I gather, that is exactly the use case for Testcontainers. How would you solve this instead? I'm on Windows by the way, the CI pipeline is on Linux.


There is no alternative if you want Postgres "embedded" within your test. I researched that for a long time, since a full PostgreSQL Docker image sounded like overkill, but nothing else exists.



Pglite is not fully compatible and doesn't even support parameterized queries yet.


Anyone have an opinion on embedded-postgres vs https://github.com/opentable/otj-pg-embedded (of which it's a fork) for Clojure use?


I think grandparent’s concerns were not with using Docker in general but instead with how the Testcontainers library orchestrates it. I assume there’s an alternate approach behind the critique, and that’s what I’m interested in.


I've been using Dagger to manage container service lifecycles, as the API is pretty powerful. But here are some other alternatives, depending on your use case:

- Some CI/CD systems support container orchestration, e.g. GitHub service containers. This is my usual recommendation for CI setups. You're unlikely to get local runs this way, though, and it has a lot of the same limitations as testcontainers.

- If you're building for Kube, managing the container lifecycle and services with kube is probably going to be more straightforward than the Docker networking stack. You can run a minikube and spawn your containers in it, for example. Then the client library that was previously running testcontainers is simply making kube calls. This gives you much more control over the container lifecycle.

- If you're not spinning up and tearing down containers repeatedly (I've seen people use the same container across unit tests as a service and just wipe it, often for performance reasons, to avoid the overhead of spin-up and tear-down), just bootstrap a compose file before running your tests.
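A minimal sketch of that compose-file approach (file name, image tag, and credentials are placeholder assumptions, not from the thread):

```yaml
# docker-compose.test.yml — started once before the test run with
#   docker compose -f docker-compose.test.yml up -d
# and torn down afterwards; tests wipe state instead of recreating containers
services:
  postgres:
    image: postgres:16          # placeholder version
    environment:
      POSTGRES_PASSWORD: test   # placeholder credentials
    ports:
      - "5432"                  # no host port given: Docker picks a free one
```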

Sometimes testcontainers is really going to be the best solution for your problem though. One common use case is if you need to spawn a LOT of the same container over and over for your unit test suite, and it has to be a fresh container. My advice is to try to isolate this part of the pipeline from the rest as much as possible if you have to do this.


Thank you for taking the time to answer. Dagger looks cool; maybe I'll get the chance to use it in the future.


Never had any issues. We have 100+ build jobs running on Jenkins, and most of them include some Testcontainers tests. These never collide if implemented correctly (randomised ports with a check, for example), even when run in parallel. On my machine, running several Docker dev environments, it was also never an issue. Can you specify what issues you had? Also, I am pretty sure the library does not work as you describe. Isn't it using the Docker Engine API? I could be mistaken; I never checked the source code.
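The "randomised ports with a check" pattern mentioned above can be sketched with the plain JDK, independent of Testcontainers (the class and method names here are my own, not from any library):

```java
import java.io.IOException;
import java.net.ServerSocket;

// Ask the OS for a currently-free ephemeral port before binding a
// test service to it, so parallel jobs on the same host don't collide.
public class FreePort {
    public static int find() {
        // Binding to port 0 tells the OS to pick any unused port.
        try (ServerSocket socket = new ServerSocket(0)) {
            socket.setReuseAddress(true);
            return socket.getLocalPort();
        } catch (IOException e) {
            throw new IllegalStateException("no free port available", e);
        }
    }
}
```

Note there is still a small race window between the check and the actual bind; Testcontainers sidesteps this by letting Docker assign a random host port and querying the mapped port afterwards.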

Edit: Just checked the documentation. According to the docs it is using the Docker API.


> We have 100+ build jobs running on Jenkins and most of them have some Testcontainer tests.

That's probably why you can and do use it: Jenkins. Jenkins lets you install whatever on the hosts, whereas in more modern systems the default context is a Docker container, or they at least speak it natively.

> Can you specify what issues you had?

Some of my devs have coded Testcontainers into their integration tests. These are the only pipelines we can't containerize, because Testcontainers doesn't seem to work inside Docker, and won't work in k8s either.


Ah! That is not an issue with Testcontainers. In our Jenkins, every pipeline is dockerized and uses agents, so nothing runs on the host directly. However, it is also not running DinD (Docker-in-Docker). Instead, it is important to use DooD (Docker-out-of-Docker), which is best practice anyway. Your CI needs to be configured once, and all your problems should go away IF tests use randomised ports, or better yet also run in Docker so that those ports do not need to be exposed at the host level. Most of my colleagues prefer not to use dev containers, though, so randomised ports were the solution for us for now.
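For reference, DooD in a compose setup usually just means mounting the host's Docker socket into the agent container, roughly like this (the image name is a placeholder):

```yaml
# Agent with Docker-out-of-Docker: containers it starts become siblings
# on the host daemon rather than nested children, avoiding DinD entirely
services:
  ci-agent:
    image: my-ci-agent:latest   # placeholder image
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
```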


Just as an example, GitLab is (was?) kind of infamous about this: for a long time, everything you did in GitLab was already in Docker, so by nature of running in GitLab you were forced to do everything in DinD mode. They might have changed this in recent times.


I think as long as you can expose the Docker socket (should also be possible in GitLab) you can use DooD. DinD has never worked out for me; sooner or later there will be issues.


That's not a solution. It requires the host's Docker socket to be mounted into a guest container, which breaks isolation from the host system and the security mechanisms that depend on it.

Which also does not work on k8s. It does work in docker compose, but isn't portable to places where the docker socket isn't available, e.g. hosted pipelines.

Testcontainers seems to work well if you're not already properly using containers in your pipelines and aren't interested in deploying to k8s.


Well, for the environment you describe, there is no solution to this problem, right? If you want to spawn test containers, you need a privileged container. At least I cannot think of any way to achieve this without exposing the Docker socket. You have to choose the right tooling for the job.


I have had a similar intuition from when trying out testcontainers some years ago.

I do not know how the project has developed, but at the time I tried it, it felt very orthogonal or even incompatible with more complex (as in multi-language monorepo) projects, CDEs, and containerized CI approaches.

I do not know how this has developed since; the emergence of CDE standards like devcontainer and devfile might have improved the situation. Yet all projects I have started in the past 5 years were plain multilingual CDE projects based on (mostly) a compose.yml file and not much more, so I have no idea how widespread their usage really is.


I would guess that this speaks to an unaddressed (developer) user story related to other workflows, or perhaps the container-adjacent ecosystem overall. Testing with any workflow is always tricky to get just right, and tools that make it easy (like, "install a package and go" easy) are underrated.


Came here with exactly this on my mind. Thanks for confirming my suspicion.

That being said, having specific requirements for the environment of your integration tests is not necessarily bad IMO. It's just a question of checking these requirements and reporting any mismatches.



Their details might be wrong, but were perhaps not always wrong. Either way, the spirit of their argument is clearly not wrong.


> the spirit of their argument is clearly not wrong.

Uh, the jury’s out on that, seeing as how the parent didn’t give any specifics other than some hand-waving about “issues with other containerized workflows.” OK, elaborate on that please; otherwise the parent isn’t really making a valid point, or may be misinformed about the current state of Testcontainers w.r.t. these vague “other containerized workflows.”


Yet people are showing up here and agreeing completely. So it seems common enough not to need elaboration.


Counterpoint, I’m not the only person showing up in this thread saying “what exactly do you mean?” So I don’t think it’s clear what the OP is actually referring to. If you know, please enlighten us.


Similarly to CI agents, I run them in docker-in-docker container, which makes it a lot harder to break anything on the host.


> The library has the nasty habit of assuming it’s running on a host machine and nothing else docker related is running

To be honest, given that most tests run as part of an isolated CI/CD pipeline, this is a very reasonable assumption to make.


Does anyone happen to know which testcontainer implementations shell out, if any?


Seems like the "community-maintained" ones they endorse, like the Rust implementation, do.

I did not realize Rust wasn't officially supported until I went to their GitHub and saw in the readme that it's a community project, and not an "official" one.


Could you elaborate on what limitations it has? How does it not play nice with remote Docker or other Docker containers?

I don't know this library, but it looks like something I started writing myself for exactly the same reasons, so it would be great to know what's wrong with this implementation, or why I shouldn't migrate to it. Thanks.


A few examples of the difficulties I've had with testcontainers:

- Testcontainers running in a DinD configuration is complex and harder to get right

- Testcontainers needing to network or otherwise talk with other containers not orchestrated by testcontainers

- general flakiness of tests which are harder to debug because of the library abstraction around Docker

In general if anything else in your workflow other than testcontainers also spawns and manages container lifecycle, getting it to work together with testcontainers is basically trying to reconcile two different configuration sets of containers being spawned within Docker. I think the crux of the issue is that testcontainers inverts the control of tooling. Typically containers encapsulate applications, and in this case it's the other way around. Which is not necessarily a bad thing (indeed, I am a huge proponent of using code to control containers like this), but when you introduce a level of "container-ception" by having two different methodologies like this it creates a lot of complexity and subsequent pain.

Compose is much more straightforward in terms of playing well with other stuff and being simple, but obviously isn't great for the kind of unit-test scenario that testcontainers excels at.


Explain what "any other containerized workflow" encompasses.



