Oh god, if this is the best CI in the game, I don't want to be part of this game anymore. I work with (as in write pipeline code for) GitLab CI almost every day and it's absolutely horrible. Not only does YAML lack any type safety whatsoever, but every day I run into yet another open ticket in the GitLab issue tracker. Using a declarative (YAML-based) approach, as opposed to an imperative one, essentially means the GitLab devs need to think of every possible use case beforehand, so they can define a declaration & write an implementation for it. Clearly this is impossible, so now I'm banging my head against a wall on the daily.
I want to define a setup job that builds & pushes a base Docker image to our container registry, and then use that image as the IMAGE in all subsequent jobs. This is impossible because the IMAGE field in the YAML cannot be dynamic (determined at runtime), but I'd like to version/tag my image using $CI_COMMIT_SHA.
This is definitely possible. We've been building an image in one stage and then running it in subsequent stages for years with GitLab.
I think you are going about this wrong. Are you generating an image tag dynamically? When you tag the image, make sure you generate the tag deterministically from information that is available to GitLab when the pipeline is created.
So for example, you could use the tag foo:$CI_PIPELINE_ID instead of foo:$random
My apologies, I misspoke: I wanted the image tag to be not $CI_COMMIT_SHA but a hash dynamically generated from certain files in the repo. The issue is that IMAGE won't accept a dynamically generated environment variable (passed from job to job via a dotenv artifact).
I think you could just jam anything you need from the dotenv file into outputs/variables in the pipeline YAML, then do something like this: https://stackoverflow.com/a/71575683/2751619
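For what it's worth, the dotenv-artifact half of that is straightforward; a minimal sketch (the job names and hashed file list are made up for illustration):

```yaml
compute-tag:
  stage: prepare
  script:
    # Hypothetical: derive a deterministic tag from the content of selected files
    - echo "IMAGE_TAG=$(cat Dockerfile requirements.txt | sha256sum | cut -c1-12)" > build.env
  artifacts:
    reports:
      dotenv: build.env  # exposes IMAGE_TAG as a variable to later jobs

use-tag:
  stage: build
  script:
    # IMAGE_TAG is injected from compute-tag's dotenv report
    - echo "building base:$IMAGE_TAG"
```

Whether a variable injected this way is accepted in a job's image: field is the contested part upthread; it works fine in script sections.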
Did you know that with Gitlab you can generate gitlab ci yaml in a job runtime and then run that yaml as a child pipeline using trigger:include:artifact?
This was the only way I could create dynamic terraform pipelines which changed depending on a plan output.
I'm sure you could use it to achieve what you've described.
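For reference, the trigger:include:artifact pattern looks roughly like this (the generator script and job names are invented for illustration):

```yaml
generate-pipeline:
  stage: build
  script:
    # Hypothetical generator: emits valid GitLab CI YAML based on, e.g., a terraform plan
    - ./generate-ci.sh > child-pipeline.yml
  artifacts:
    paths:
      - child-pipeline.yml

run-generated:
  stage: deploy
  trigger:
    include:
      - artifact: child-pipeline.yml
        job: generate-pipeline
    strategy: depend  # parent pipeline mirrors the child pipeline's result
```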
Thank you, that's indeed a good point. And yes, I did consider that. However, then the Gitlab UI (pipelines overview etc.) ceases to be very useful as everything will be inside one big child pipeline (i.e. individual jobs will no longer be shown in the overview). My coworkers would have hated me.
The image may be determined at runtime, but it’s not required to exist until a runner picks up the job. So use the $CI_COMMIT_SHA in the image name and push the image in a job that runs before the other jobs that use the image.
You might also want to look into Downstream Pipelines.
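A sketch of that ordering, assuming the GitLab container registry (the registry path and test command are placeholders):

```yaml
stages: [build, test]

build-base:
  stage: build
  image: docker:24
  services:
    - docker:24-dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE/base:$CI_COMMIT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE/base:$CI_COMMIT_SHA"

unit-tests:
  stage: test
  # $CI_COMMIT_SHA is known when the pipeline is created; the image only has to
  # exist by the time a runner picks this job up, i.e. after build-base pushed it.
  image: $CI_REGISTRY_IMAGE/base:$CI_COMMIT_SHA
  script:
    - make test
```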
I am surprised to hear this as well. We had Jenkins at my previous gig. It worked, but I spent all my time keeping it humming and learnt nothing else. At my current gig, we were a Drone shop. We switched to Harness CI Enterprise and it's worked really well for us. Their hosted builds are pretty speedy!
We did evaluate Gitlab CI but went with Drone. Gitlab CI is not a top 5 CI vendor IMO.
Thanks for recognizing Harness!
Full disclosure, I’m a harness employee, using Harness for CI/CD on a daily basis.
Some of the things in Harness that make us more productive at Harness, using Harness:
* Harness CI is the fastest CI solution on the market, through features like ML-powered Test Intelligence, which allows running only the tests related to a code change, as well as other innovative capabilities. We use it heavily with our Java applications and see test-cycle reductions of up to 80%. It can be used with Java, Ruby, .NET and other languages as well, and the savings are significant. It also lowered our infra spend: lower build time means lower build-infrastructure costs.
* Advanced CD: advanced use cases like blue/green and canary deployments, and rollbacks, are available out of the box with Harness. No scripting is needed to implement complex deployment use cases.
* Visual pipeline editor, fully integrated with Git: you can author pipelines as code in your Git repo, but also have a great authoring experience in the Harness UI using the YAML or visual editors. The visual editor makes it super easy to understand existing pipelines as well as to modify them.
* Plugins: Harness supports thousands of community plugins, including Drone plugins, GitHub Actions and Bitrise steps, in your CI pipelines.
* Unbeatable governance and compliance: Harness provides robust, enterprise-grade governance for CI/CD processes. Using OPA-based policies and granular templates, customers can centrally enforce quality and security standards across all pipelines (for example, requiring security scans to run before deployment is allowed, or restricting which community plugins may be used).
* Reports and insights: Looker-based dashboards give you many valuable reports out of the box, but also the flexibility to create your own, so you can slice and dice the data based on your needs.
This is really just the tip of the iceberg; I encourage you to check out our website harness.io to get the full scope of our capabilities.
I'm not following. You're surprised to hear complaints about Gitlab, even though you're not actually using Gitlab (in fact, you say "Gitlab CI is not a top 5 CI vendor IMO") and you are praising a completely different product (Harness CI)?
Granted, I haven't used Gitlab CI in 5 years, but I would not praise it either. In fact, the entire Gitlab UI and UX annoyed me so much I moved our company repos to GitHub.
The latest redesign where they grouped all the sidebar links into even more categories just made it ludicrously worse.