The worst thing about Jenkins is that it works (2019) (twitchard.github.io)
262 points by prabhatsharma on Dec 3, 2023 | 266 comments


Gitlab CI is still the best CI in the game IMO, but GitHub Actions gives it an incredible run for its money because of how easy action re-use is.

I meant to make a blog post about this, but here's a good a place as any: GitLab absolutely innovated many hard parts of CI/CD as a platform-native piece, but it feels like they lose slightly to GitHub on what GitHub does best -- social virality for developers. The problem with losing slightly there is that the advantage compounds; if developers find it easy to make and share, then they make and share which makes more people make and share and generates tons of value for a platform.

It may not be safe, but being able to:

    - uses: some-developer/some-repo@some-branch
Is absolutely amazing. Everything else (generally) GitLab has, and (likely) did first, but the social viral stuff GitHub just gets right in a different way than GitLab does.
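For context, a hypothetical workflow showing how little ceremony that one-line reuse takes (the repo name and input below are placeholders, as in the snippet above):

```yaml
# .github/workflows/ci.yml -- sketch only; "some-developer/some-repo" is illustrative
name: ci
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # any public repo's action can be pulled in with a single line:
      - uses: some-developer/some-repo@some-branch
        with:
          some-input: some-value
```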

No one would accuse GitLab of being a "copy" in recent memory like they did in the bad old days, but IMHO they need to just copy the shit out of whatever GitHub gets right socially (like this).


I'm in the apparently small group of people who ignore 90% of the made-and-shared features, and push back against making internal ops depend too heavily on any particular CI system.

We know how to build, package, and release our software, and at work I firmly support that we write those operations as standalone scripts that in principle could even run on the developer's machine itself. Moving them to CI just changes the machine that runs the scripts, and that's where I draw the line on how much coupling to accept with any given runner: run scripts, inject secrets, and maybe accumulate artifacts between jobs.

There are myriad things done with Actions, such as setting up NPM or running Docker containers. I see no point in them: you should be able to run the same commands manually, one by one, on a local VM or an empty Docker container, so at that point, why not write them as a script? Use CI as a dumb shell runner. Depending on all those small Actions saves 5 minutes today, only to make migrations immensely painful tomorrow.
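As a sketch of the "dumb shell runner" approach (GitLab CI syntax assumed; the script names are illustrative):

```yaml
# .gitlab-ci.yml -- the platform only runs scripts, injects secrets,
# and passes artifacts; all build logic lives in the repo's scripts.
build:
  script:
    - ./scripts/build.sh
  artifacts:
    paths:
      - dist/

release:
  script:
    - ./scripts/release.sh   # reads $RELEASE_TOKEN injected by the runner
```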


This is fine if you treat your CI provider as a "dumb shell runner". But good CI platforms have actually useful features and APIs (e.g. caching) and if you want to use them, a simple Makefile isn't going to work. For projects where the difference between a cold and warm cache build is tens of minutes, those features have meaningful quality of life improvements.

This may be a tradeoff you're ok with, but for a lot of people, it's not.


C++ with templating can take a while to compile, so for caching, my build system would mount a Docker volume with the contents of ~/.ccache/. This was well integrated with the rest of the scripts, so it worked equally well whether I ran them on my laptop or in the CI runner.
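In GitLab CI terms, that kind of compiler cache can be wired up roughly like this (paths illustrative); locally, the same directory is simply mounted as a Docker volume:

```yaml
build:
  variables:
    CCACHE_DIR: "$CI_PROJECT_DIR/.ccache"
  cache:
    key: ccache-$CI_COMMIT_REF_SLUG
    paths:
      - .ccache/
  script:
    - ./scripts/build.sh   # scripts invoke ccache-wrapped compilers
```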

My point is that I really don't believe that CI systems provide anything so unique that it couldn't also be provided by local software on a developer's laptop. If the answer to "how do I build this on my laptop" is "you cannot, you must use CI because we truly require some of its features", I'd consider that an ops failure.

However, I fully admit my experience comes from small to middle size projects. I cannot talk about big or huge scale projects. Maybe that's where needs grow so complex that CI features become truly necessary.


> My point is that I really don't believe that CI systems provide anything so unique that it couldn't also be provided by local software on a developer's laptop. If the answer to "how do I build this on my laptop" is "you cannot, you must use CI because we truly require some of its features", I'd consider that an ops failure.

So I do agree with this. But I think there's nuance here.

The job of producing a build artifact involves steps that broadly break down into two categories: setting up the context, and doing the build. I totally agree that the "doing the build" bit should be a series of simple steps that are agnostic of their environment. A shell script, or a Makefile - something you can just invoke anywhere.

But the context bit is also super-important. On my laptop, I've already got the (e.g.) right JDK installed so that when I run `make`, the build succeeds. But I'm also not wiping my laptop before every build. On a CI platform, you're effectively starting from scratch every time, so whilst you can go and write your own code to set up the caching, download the tooling etc. etc. there's an enormous amount of value to be gained by re-using the CI-platform's software and features to do that as easily as possible. No-one wants to be writing code that works out how to go set up the right JDK in the right place, when `actions/setup-java` will basically do all that for you.
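A sketch of that split in GitHub Actions terms — the platform handles context, a plain script handles the build:

```yaml
steps:
  - uses: actions/checkout@v4
  - uses: actions/setup-java@v4   # fetches and caches the right JDK for the runner
    with:
      distribution: temurin
      java-version: '21'
  - run: make build               # environment-agnostic build step
```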

In theory, yes, you could go and curate a container image that has everything you need and just run your build inside that, but now you've got two bits of software to manage.

If you can't run the build locally, then yes, you're in a pickle.


Thanks for expanding on your point of view. Now that I read you, I'd say that we fundamentally agree.

My experience has been to use carefully constructed Docker images that have the required dependencies to build the project. This removed the need for installing everything in the correct version on each dev system, and ensured a commonly shared base system on which to build stuff.

However, I agree that a "context" can have lots of moving parts, and I was disregarding most of them as "an exercise for the reader", i.e. devs ought to know what they're doing if they want to build locally. CI can help a lot with that.


This is something I’ve struggled to communicate internally.

The closest I came was a GitHub actions job that did preflight checks to make sure all the right dependencies and sdks were present before calling into the same build script we use locally.

Don’t care where or how you set up your build environment: if the preflight check passes, your build should too. We still have some holdouts that do environment setup with a Makefile :/
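A minimal sketch of that pattern (the script names are hypothetical):

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./scripts/preflight.sh   # fail fast if an SDK or dependency is missing
      - run: ./scripts/build.sh       # the same script developers run locally
```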


Caching is the biggest reason why you shouldn't rely on CI platform "features" because they're quite bad at it. Roll your own and be happier with lower prices and no lock in.


I agree with this. I recently had a pretty long network issue with our CI system, and for the most part we just reran scripts or converted CI YAML into scripts.

I like CI for secrets management, scripted pipelines, and deploying - and having one button to do all of that. But all the extra junk people do with it seems like another thing that can break.


Indeed - a whole generation seem to be doomed to learn the hard way the painful lessons of lock-in.


Same. I've also had to do things like migrate CI when a company was acquired, from Travis to Circle and Circle to GitLab in the past. It's very painful to do this if you've leaned into everything the CI service offers.


Agree, I like to have a top level "verify" script that does all checks and have CI run exclusively and minimally that one script. Ideally the verify script is just a wrapper around a build and test command


I definitely agree with and do this — it only falls apart when CI environment stuff interferes with file locations and other external state.

That said it is nice (if the scripts are simple) to do things like machine set up very easily.


Especially if you have to run and deploy on your own servers. Not everything can be in the cloud.


> Gitlab CI is still the best CI in the game IMO,

Oh god, if this is the best CI in the game, I don't want to be part of this game anymore. I work with (as in write pipeline code for) Gitlab CI almost every day and it's absolutely horrible. Not only does YAML lack any type safety whatsoever, but every day I run into yet another open ticket in the Gitlab issue tracker because using a declarative (YAML-based) approach (as opposed to an imperative one) essentially means that the Gitlab devs need to think of every possible use case beforehand, so as to be able to define a declaration & write an implementation for it. Clearly this is impossible, so now I'm banging my head against a wall on the daily.


That’s not my experience at all. You do know you can run arbitrary shell scripts, and that means they don’t need to think of every possible use case?

If your CI process is so complex that YAML + arbitrary code doesn’t work, you might want to get that checked, it’s not normal


Random example:

I want to define a setup job to build & push a base Docker image to our container registry. I then want to use this image as IMAGE in all subsequent jobs. This is impossible because the IMAGE field in YAML cannot be dynamic (determined at runtime) but I'd like to version/tag my image using the $CI_COMMIT_SHA.


This is definitely possible. We've been building an image in one phase and then running it in subsequent phases for years with gitlab.

I think you are going about this wrong. Are you generating an image tag dynamically? When you are tagging the image, make sure that you generate the tag deterministically based on information that is available to gitlab when the pipeline is created.

So for example, you could use the tag foo:$PIPELINE_ID instead of foo:$random
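A sketch of that deterministic-tag pattern (registry paths illustrative): both jobs expand the same predefined variable, which is known when the pipeline is created, so nothing has to cross job boundaries at runtime:

```yaml
build-image:
  stage: build
  script:
    - docker build -t $CI_REGISTRY_IMAGE/base:$CI_PIPELINE_ID .
    - docker push $CI_REGISTRY_IMAGE/base:$CI_PIPELINE_ID

test:
  stage: test
  image: $CI_REGISTRY_IMAGE/base:$CI_PIPELINE_ID   # resolvable at pipeline creation
  script:
    - ./scripts/test.sh
```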


My apologies, I misspoke: I wanted the image tag not to be $CI_COMMIT_SHA but to be a hash dynamically generated from certain files in the repo. The issue is that IMAGE won't accept a dynamically generated environment variable (passed from job to job via a dotenv artifact).


I think you could just jam anything you need in pipeline yaml from the dotenv file into outputs/variables then do something like this https://stackoverflow.com/a/71575683/2751619


No, this doesn't work. Also note that the StackOverflow link is about Github Actions, not Gitlab CI.


Did you know that with Gitlab you can generate gitlab ci yaml in a job runtime and then run that yaml as a child pipeline using trigger:include:artifact?

This was the only way I could create dynamic terraform pipelines which changed depending on a plan output.

I'm sure you could use it to achieve what you've described.
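For reference, the shape of that pattern (filenames illustrative): one job emits YAML as an artifact, and a trigger job runs it as a child pipeline:

```yaml
generate-pipeline:
  stage: build
  script:
    - ./scripts/generate-ci.sh > child-pipeline.yml   # can use any runtime value
  artifacts:
    paths:
      - child-pipeline.yml

run-pipeline:
  stage: deploy
  trigger:
    include:
      - artifact: child-pipeline.yml
        job: generate-pipeline
```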


Thank you, that's indeed a good point. And yes, I did consider that. However, then the Gitlab UI (pipelines overview etc.) ceases to be very useful as everything will be inside one big child pipeline (i.e. individual jobs will no longer be shown in the overview). My coworkers would have hated me.


Can't you use build time variables for this? https://docs.gitlab.com/ee/ci/variables/#pass-an-environment...


I tried this but it didn't work.


Addendum: To see why it doesn't work, please see https://news.ycombinator.com/item?id=38527692 (which includes a slight correction of my original comment).


The image may be determined at runtime, but it’s not required to exist until a runner picks up the job. So use the $CI_COMMIT_SHA in the image name and push the image in a job that runs before the other jobs that use the image.

You might also want to look into Downstream Pipelines.


(Please see https://news.ycombinator.com/item?id=38527692 for a correction of my original comment.)

The issue is that the IMAGE field in YAML doesn't pick up environment variables that are being passed on from a previous job via a dotenv artifact.

As for downstream pipelines, please see https://news.ycombinator.com/item?id=38525954


I am surprised to hear this as well. We had Jenkins at my previous gig. It worked, but I spent all my time keeping it humming and learnt nothing else. At my current gig, we were a Drone shop. We switched to Harness CI Enterprise and it’s worked really well for us. Their hosted builds are pretty speedy!

We did evaluate Gitlab CI but went with Drone. Gitlab CI is not a top 5 CI vendor IMO.


Thanks for recognizing Harness! Full disclosure, I’m a Harness employee, using Harness for CI/CD on a daily basis. Some of the best things in Harness that make us more productive at Harness, using Harness:

* Harness CI is the fastest CI solution on the market - through features like ML-powered Test Intelligence, which allows running only the tests related to a code change, as well as other innovative capabilities. We use it heavily with our Java applications and see test cycle reductions of up to 80%. It can be used with Java, Ruby, .NET, and other languages as well, and the savings are significant. It also lowered our infra spend - lower build times mean lower build infrastructure costs.

* Advanced CD: advanced use cases like blue/green and canary deployments, and rollbacks, are available out of the box with Harness. No scripting is needed to implement complex deployment use cases.

* Visual pipeline editor, fully integrated with Git - you can author pipelines as code in your Git repo, but also have a great authoring experience in the Harness UI using the YAML or visual editors. The visual editor makes it super easy to understand existing pipelines as well as modify them.

* Plugins - Harness supports thousands of community plugins, including Drone plugins, GitHub Actions, and Bitrise steps, in your CI pipelines.

* Unbeatable governance and compliance - Harness provides robust, enterprise-grade governance for CI/CD processes. Using OPA-based policies and granular templates, customers can centrally enforce quality and security standards across all pipelines (for example, requiring security scans to run before deployment is allowed, or restricting which community plugins are allowed).

* Reports and insights - Looker-based dashboards give you many valuable reports out of the box, but also the flexibility to create your own reports, so you can slice and dice the data based on your needs.

This is really just the tip of the iceberg. I encourage you to check out our website harness.io to get the full scope of our capabilities.

Cheers


I'm not following. You're surprised to hear complaints about Gitlab, even though you're not actually using Gitlab (in fact, you say "Gitlab CI is not a top 5 CI vendor IMO") and you are praising a completely different product (Harness CI)?


Oh, I meant I was surprised Gitlab is a good CI vendor. Lol.


Granted, I haven't used Gitlab CI in 5 years, but I would not praise it either. In fact, the entire Gitlab UI and UX annoyed me so much I moved our company repos to GitHub.

The latest redesign where they grouped all the sidebar links into even more categories just made it ludicrously worse.


> GitLab absolutely innovated many hard parts of CI/CD as a platform-native piece

As someone who only used GitLab CI briefly when it initially launched, what hard parts did they innovate on exactly?

As far as I could tell, it's a run-of-the-mill CI/CD platform, for better or worse, but nothing I'd call "innovative".

But again, maybe since the first time I tried it when it launched, it has changed, and I missed something really cool they did.


Innovative is a strong word, but they always had the best support for a variety of deployment archetypes, including baremetal, kubernetes, docker, docker+machine (autoscales VMs in a public cloud to run many Docker executor runners.) They also had JWT authentication support built into GitLab before GitHub even had a CI offering at all. The GitLab kubernetes agent makes their CI/build tool more like a CD tool for organizations that primarily deploy to kubernetes clusters, which is fairly unique for a CI tool to act as a CD tool, though with scripts people often just make the CI tools do deployments anyway.

My experience is limited to GitHub Actions, GitLab CI, and Jenkins. I've never been in an organization that managed to crack the code on running Jenkins in a sane way (always a poorly maintained, bloated mess of plugins, unfortunately) so I can't say I count Jenkins as a contender in this conversation. One could argue it's unfair to discount Jenkins because of organizations that manage it poorly, but there's something to be said about a tool that is so easy to accidentally run in an unmaintainable way. I also have some limited experience with Drone and Circle, but not enough to talk about them confidently.


They integrated in-repo configuration, self-hosted runners, composable workflow definitions, a robust API, and container building and registries for the first time in a CI tool.

Of course GitHub came and blew that out of the water with a composable social actions ecosystem. Which was brilliant and remains the killer feature that puts GitHub far beyond reach of other CI platforms, especially now that they've improved in-repo composability a few months ago.


> They integrated in-repo configuration, self-hosted runners, composable workflow definitions, a robust API, and container building and registries for the first time in a CI tool.

Out of those, I think only "composable workflow definitions" would be one of the features that other CI platforms didn't have, before GitLab even existed as a project. You might want to re-read the history and features of build platforms before GitLab.


I'm very familiar with the history of CI platforms before GitLab, having used Hudson, Jenkins, Circle, Travis, Appveyor, Codebuild, and a whole bunch of other platforms extensively. While some of them had these features, none of them had all of the features, few of them meaningfully integrated the features together, and most of them provided the features in a very half-assed and unreliable way.


Off the top of my head:

- Bring your own runner is easiest/most robust

- environment deploys and management

- tons of deployment integrations

- DevSecOps features

- Built in support ticket handling

- Free container & package registry

For most of these I’m pretty sure gitlab had them built into the platform (and free!) first


It’s possible to do ‘uses’ on GitLab, but there are a few extras needed, admittedly (branch name and path to file):

    include: 'https://gitlab.com/awesome-project/raw/main/.before-script-template.yml'
https://docs.gitlab.com/ee/ci/yaml/includes.html#include-a-s...

But I agree, the social virality is what makes GitHub Actions what it is.

I used to only visit a GL project when I needed to look through Nvidia cuda container builds… everything else I needed was on GH :shrugs:


I find GitHub Actions incredibly counter-intuitive, personally.


It’s the confusion between actions (basically third-party plugins) and workflows (the CI that you are building for yourself) that got me.


I think it was a mistake for Github to name the whole product "Actions" and then re-use the same word for a specific component within the system. It's really natural to say something like "we need a push action for this repo" when it might be more correct to say "a push workflow".


> I think it was a mistake for Github to name the whole product "Actions" and then re-use the same word for a specific component within the system.

You're right:

- product is called GitHub Actions

- your workflow (consisting of jobs and steps) is a workflow

- a 3rd-party workflow included in yours is an action, not a workflow though

- I more often hear engineers refer to workflows as actions

But there are more annoying decisions

- (org-wide) required workflows being recently deprecated

- the feature was buggy (i.e. when used alongside org-wide branch rules)

- but could have been fixed, not deprecated

- some kind of "marketplace" (with reviews, developer trust levels, etc.) for modular/pluggable actions (workflows) would be welcome; currently it's a "first solution fitting the problem" mess with little to no standardisation

- I find the necessity to write a step cloning the repo from which a workflow is running ridiculous - it should be at most a single configuration line somewhere at the top of a workflow


> a 3rd-party workflow included in yours is an action, not a workflow though

Not entirely accurate. You can use reusable workflows from third parties and you can use actions from third parties. It being a third party's doesn't mean it's an action. Also, you can make your own actions, it doesn't need to be a third party's.

Just to add to the naming confusion.


Thank you for finally explaining this to me (have had to deal with github ci off and on for months now).


Travis and Circle CI predated Gitlab, with YAML-configured workflows checked in to the repository and the ability to use custom images.

GitlabCI's innovation was integration: making it a feature of repo hosting.


I moved from Jenkins to Travis several years ago, and Travis's UI and configuration were such a breath of fresh air. Back then, pretty much _every_ open-source project used it because of their generous offerings for open source projects. I agree with this comment: from my own experience living through the transitions, Travis CI made a massive leap in leveling up to the modern CI we have today.


Yeah I didn’t say they innovated CI/CD — I said they innovated CI/CD as a platform-native offering!


Currently working on a GitLab API thing for work, and I gotta say their API is a complete joke. GitHub runs laps around them in this area. GitLab is actually worse than Bitbucket, believe it or not.


In my own project, we actively cut down on external CI deps (non-official GitHub actions) as much as possible; it seems incredibly unsafe.

When we do use an external dep (like in two places in our massive project), we pin it to a specific Git SHA that we manually reviewed to confirm it doesn't contain anything weird.
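That looks roughly like this (the action name and SHA below are placeholders, not real):

```yaml
steps:
  # pinned to a reviewed commit, not a movable tag or branch
  - uses: some-org/some-action@5c0e7d3a9f1b2c4e6a8d0f1b2c4e6a8d0f1b2c4e  # v1.2.3, audited
```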


Very good summary. I think both GitLab CI and GitHub Actions are very good, but GitLab CI is more practical because of how easy it is to build custom advanced pipelines, versus the great re-use of GitHub Actions.


How is Gitlab CI materially different from the jenkins model?

I find that the only difference is that it's YAML - so even harder to debug, and maintains the same model where you must re run an entire pipeline every commit to test functionality.


Yeah, YAML isn’t ideal, but I personally found Jenkins terrible to configure, not super well documented, and missing features compared to a GitLab installation (e.g. a built-in registry).

Also the ability to rerun, target, and influence builds themselves is better on GitLab as well I think.


You can run locally first to test functionality : - )


oh that's nice I didn't know that! Still a stickler on yaml for code though


how dare you suggest something so pedestrian.


I love GitHub actions and think gitlab is okay.

I found actions easier to use


Jenkins is not perfect by any means, but I can say it is much better than existing solutions (weighing pros and cons after trying and using many of them). Sure, it doesn't have a fancy UI or flying buttons, nor is it Rust/PWA/React/whatever-the-rage-is-these-days, but it does its job - it works, is free, and has been very well-maintained for ages. Download a single file and run it. Want to extend it? There is a plugin for everything under the sun (but be careful, you can easily bloat it).

IMHO, I don't want bells and whistles from the CI tool. CI tools are dry; I see them as a car drive shaft - it must work properly, and I don't expect it to be pretty. If you need to showcase it, you are probably doing something wrong.

And one more thing: I expect Jenkins to be around and free for the next 10 years. I doubt that will be the case for "insert-your-favorite-ci-from-some-company".


I hear you - I want my CI to be boring and to just work, and using something old and a little kooky is fine, but...

omg, switching away from Jenkins (in our case to GitLab CI) was a revelation. SO much easier to use. There were a ton of things we were avoiding doing in CI (or at all) that we started doing (easily) once we switched.

> it must work properly, and I don't expect it to be pretty

Too often it really _didn't_ work properly, or the gap between where we were and "working properly" was a mysterious foggy ocean with no clear path.

To be fair, we drove it pretty hard - we had some jobs that ran thousands of tests on big clusters of nodes to validate and deploy huge ETL pipelines - but man, it was nice to have that work smoothly with a nice UI that made sense, with super-well-documented pipeline commands. It did _basically_ work with Jenkins, but the experience of troubleshooting problems and adding new features was a constant pain point that really dragged out a lot of work.


What did you switch to?


He wrote GitLab CI


[In the comment above, they] wrote Gitlab CI.

You make it sound as if they are Gitlab CI's author.


Jenkins, like many CI tools, is fine when you are maintaining 1-5 builds. Sure, your artisanal bash script pasted into the shell step of the Jenkins GUI is stable and it just works.

When you go beyond that is when everything falls apart. And by fall apart, I mean your org starts to accumulate technical CI debt by a thousand cuts, until it's taking some teams hours to build and ship their code, or to debug a spaghetti string of code strung together across undocumented boundaries.

Have you used other solutions at all? I used Jenkins for years and a lot of the things that came after, from industry standards like GitHub Actions and GitLab CI to more bleeding-edge stuff like Earthly and Dagger. Jenkins is the worst of all of them, even if it may do a couple of things “better” than the others.


+1. Jenkins is only simple as long as your problem domain is simple.

Once you need a tiny bit more, you are screwed. By that time you are already too heavily invested because initial barrier to entry was so low.


My 10 year old Jenkins build still runs. My CircleCI build.... less so...


I bet companies have years old zombie pipelines running without anyone's knowledge because Jenkins lacks discoverability.


> There is a plugin for everything under the sun

Yeah, that's the biggest problem. Almost everything you need for a CI system comes in the form of a plugin, because Jenkins core is too simple. You can't always say no to your developers' plugin requests, because many of them are actually very reasonable.

Now you install one simple plugin, and it comes with 10 dependency plugins, and that's when your nightmare begins:

- out of these 10 plugins, 4 are owned by some random dudes on the internet and haven't been updated in 3 years.

- 2 are not compatible with your jenkins core version.

- If you update Jenkins core, 3 other deprecated plugins will stop working.

> IMHO, I don't want bells and whistles from the CI tool.

I can see that from the architecture design perspective, Jenkins is simple and boring -- it's a monolithic Java app and data is just text files. But I would argue that its UI is much more complicated and confusing than anything else. You have no choice but to install a bunch of plugins to integrate with code repos (GitHub, GitLab, Bitbucket...) and compute systems (VMs, cloud, Kubernetes...), and they change buttons, menus... I have worked for 3 different companies that used Jenkins, and they felt like 3 completely different websites.

To define a pipeline in Jenkins, your choices are either clickops or the Groovy big gun. There's nothing in between. I don't have much to complain about regarding Groovy itself as a programming language, but if you ask your creative developers to use a programming language to define their CI pipeline, you are gonna see 20 different implementations of detecting the current git branch name. The biggest Jenkinsfile I have seen had 2000+ lines of business logic. Newer CI products like GitLab CI only allow you to define the pipeline in a YAML file, with a list of very limited directives. That's much more conservative. If needed, one can still run a shell step to capture some weird logic in a script, but that inconvenience is a good enough deterrent.


You can use jcasc (Jenkins configuration as code) with regular shell jobs, or Jenkins Job Builder, to name two alternatives "in between".


I am not sure if managing Groovy/XML job definitions in a giant YAML file (where all your Jenkins core/plugin configuration lives) is easier than Jenkinsfiles in repositories.

Jenkins Job Builder can't configure plugins.

There are ways to do job templating, but last time I checked, they were either not powerful enough, or so complex that I would rather write Groovy.


Agreed - I'd much rather spend more time working around Jenkins' peculiarities once than constantly deal with change for the sake of change because some PM wants more "engagement" to justify their job and promotion.

Tools need to be boring and get out of my way - something "modern" tooling forgets since everyone's paycheck depends on forgetting that.


I don’t think the issue with Jenkins is the lack of bells and whistles. The UI it has is unintuitive - I always feel like I can’t find what I’m looking for anytime I am in there.

It has two different UIs (last I checked), which is kind of insane; Blue Ocean still sucked the last time I used it, and the product can’t 100% move away from the old UI (again, last I checked…)

On top of all that, it’s not really fun to administer (first-hand experience): almost every environment will result in plugin hell. I wouldn’t describe the setup process as “download a single file and run it” as I look at the documentation, either.

Jenkins can’t give you the “one less tool” advantage that you get with GitHub or Gitlab, either.

Don’t forget that Jenkins’ competition includes free options with mostly high longevity.


And Blue Ocean’s effectively deprecated!


TIL. https://plugins.jenkins.io/blueocean/

   Blue Ocean status

   Blue Ocean will not receive further functionality updates. 
   Blue Ocean will continue to provide easy-to-use Pipeline visualization, but it will not be enhanced further. It will only receive selective updates for significant security issues or functional defects.


Wow. I mean, you're joking right? No? Wow.

That furthers my belief that Jenkins is the quintessential mediocre open source project that only hangs on to life because it's free (and maybe just because it's free as in beer).

If I ran a department and was given an unlimited CI budget forever, I can't see what would compel me to use Jenkins.


The car drive shaft analogy is also apt because a drive shaft is a precision part that _someone_ has to think about very passionately, in order for others to not have to think about at all.

~13m30s: https://youtu.be/xyeeksFRnn4?si=924f0OZYDZk7otFL


> I see them as a car drive shaft ...

That's a good metaphor, and it's fitting that there are large innovations happening in car drive shafts, just as with CI tools:

https://electrek.co/2023/11/29/hyundai-kia-introduce-new-uni...


Basically a dumb pipe in the “dumb pipe, smart endpoint“ dynamic.

Drive shaft is a good analogy though, because it's not simply moving data (unless we’re getting really abstract).


It's infrastructure. It should be boring, unchanging, and without surprises. It should be so reliable that we take it for granted.


The thing that made me want to use hosted CI was realizing how much time I ended up spending on Jenkins infra and debugging. Upgrades to master or to plugins and then testing them - bleh!

GitHub's being slow? Well that's a GitHub problem. Look, it's been ack'd and they're working on it! Job done.


Jenkins will be around for as long as developers still code. I've always found that any significantly large software engineering office will have Jenkins running somewhere - happily on a local dev laptop/desktop or in a VM somewhere. I think it is still rarely used en masse, which is a shame since it is free, open-source, and easy to self-host.

-

I'm not sure why people are logging into Jenkins and complaining about the UI all the time; that just sounds like your pipeline needs better notifications and people are relying on Jenkins to look at build logs? If you want something prettier to look at just build out a dashboard on Grafana or similar, and centralise your build logs with artifacts and link to them with build failure notifications.


I like to borrow Churchill's quote about democracy and reframe it around Jenkins.

Jenkins is the worst CI/CD tool, except for all the others.


Jenkins BlueOcean is one of the best "fancy UIs"


Thanks! My team and I made that and it was a lot of fun :)


Thanks for that, we use it all day every day!

It's too bad it's been declared "end of life", without any replacement in sight, and nobody these days is fixing some bugs that are really bothering us sometimes...


I really wish we had finished it and made it the default UI. Jenkins is very extensible, so there are a LOT of extension points. We certainly didn't have the right approach to the API and devex for creating React plugin extension points either.


We almost made it... Still bums me out. Good to see you're still lurking broseph ;-)


We almost did! It was a huge boulder to move. Nice to see you around too :) lets grab a beer soon


Thank you for building it - it's my go-to jenkins UI. Makes jenkins tolerable :)


Using that everyday all day. Thank you!!


No, thank you! It's fresh, it's information dense, it's fast, it works!


All of the test runners have their issues. I've had as many problems with Jenkins as I've had with GitHub Runners and similar solutions.

The best way out of this is to subdivide responsibilities:

- Put all build/publish/test logic in Makefiles or scripts in the repo. This means that devs can run it locally as well. The only interaction between a test runner and the codebase should be running `make <target>`.

- Put all permissions and code checkout and artifact publishing credentials in the test runner, but no logic. At most you would put processing of test output like making junit/coverage/tap more web-readable.

That's it. Split the efforts cleanly, and things fall into place. Also, you can switch runners easily - less needs to be reimplemented in whatever runner config DSL is picked.


> - Put all build/publish/test logic in Makefiles or scripts in the repo. This means that devs can run it locally as well.

This is critical!

In many teams Devs are "lazy", if any errors or weirdness happens in CI they just wave their hands and say "not a feature, not my problem". Even though they know exactly what's in the test code, the deploy code, and can see the issue.

Letting Devs see and run the CI scripts locally gives them _ownership_. They don't have to fix the issues, but they have control and understanding which radically reduces the arguments and pipeline drama :)


It also takes a lot less time to fix the tests locally too - rather than tweak it... push the change... oops it failed again... repeat...


This still happens, but the surface area is greatly reduced to figuring out that the directory, variables, credentials, dependencies between stages, etc. are set up correctly


This is basically what we do, using TeamCity. DevOps sets up all the permissions and VCS integration, and helps figure out all the AWS IAM stuff. Then we get a nice clean little sandbox to run our build and deploy scripts in, which we commit to version control, including CloudFormation templates. It works really well: it's a comfortable separation of responsibilities that allows our team to move quickly with relatively little risk, and it involves almost no hands-on attention from DevOps once it's up and running, so they can direct their efforts elsewhere.


>- Put all build/publish/test logic in Makefiles or scripts in the repo. This means that devs can run it locally as well. The only interaction between a test runner and the codebase should be running `make <target>`.

This always breaks because then each platform has a specific way of defining env vars or secrets for those Makefiles and bash scripts. End result is devs still can't really run CI "proper" the way it's configured in a runner.


Maybe this is just me, but needing to define environment variables or secrets for testing seems like a massive problem in the test design. Does that mean the tests are dependent on the use of an external service, and that service can't be run locally?

The only part of a CI pipeline I can imagine requiring secrets would be a release/publish step. However, those would only occur after the tests run successfully, so nothing up until that point would require secrets.


Just to give some examples:

- your code is using private dependencies, so you can't build the code without authentication

- you have some integration tests that use test containers using private docker images, so you need to do a "docker login" before running tests

Locally, that would work because you're locally authenticated (and likely have all dependencies already installed etc.)
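As a sketch (variable and registry names invented for illustration), the script itself can paper over that difference, so the same entry point runs both in CI and locally:

```shell
# registry_login: hypothetical helper. In CI the credential arrives as an
# env-var secret; locally the developer is already `docker login`-ed, so
# the helper is a no-op and everything after it is identical.
registry_login() {
  if [ -n "${CI_REGISTRY_PASSWORD:-}" ]; then
    echo "$CI_REGISTRY_PASSWORD" |
      docker login -u "${CI_REGISTRY_USER:?}" --password-stdin registry.example.com
  else
    echo "no CI credential set; assuming local docker login"
  fi
}

registry_login
```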


Ah, I think I see. There is a dependency in the code that requires copying some external resource in order to build/run locally. The developer's local environment may have a different version of that resource than the CI, so a bug ends up not being reproducible. The developer may not run the CI directly, because the developer's access tokens should be separate from the CI's access tokens.

I was picturing a case where the CI was allowed privileged access to a resource to which developers were not allowed access at all.


> I was picturing a case where the CI was allowed privileged access to a resource to which developers were not allowed access at all.

That's also possible. Imagine a pipeline which carries out a deployment in an environment where normal devs are not allowed direct access to production machines / cluster (might happen in larger companies).


You're correct. I sometimes forward my devs the script they need to run locally in order to replicate what Jenkins does. It looks something like

    make <target> THIS=THAT AND_THIS=THAT AND_THAT_THING=SOME_OTHER_THING ETC=ETC
Sometimes it won't fit into a Slack message, so I attach it as a text snippet instead.


This is fundamentally an interface problem. As you called out, env vars are a pretty common and well-supported interface across multiple platforms.

Do you have a better option to suggest?


Set up whatever maintains your secrets (vault?) so that it works the same way in dev, CI, and real. Have whatever manages your dev versions of services you depend on (vagrant? kubernetes-docker-whateveritisthesedays?) integrate with that so that you find your service endpoints and credentials the same way in every environment.


No, and I don't think any CI platform deals with it in a great way. I mean that once you account for the fact that your dev team may not have access to all of the secrets, the single bash script/Makefile stops being able to shield you from the specific CI platform, and now you have to start using Jenkins secret storage, or GitHub Actions secrets, or whatever.


This is such a great point!

Your pipelines should mostly be CI provider agnostic and runnable from anywhere, including your laptop during development.

If you ever need to change a CI provider, you just move your pipelines.

I'm definitely biased, I'm working at Garden[0] which allows you to do just that but it's mostly applicable for teams using Kubernetes. I paused at your comment because Garden has sometimes been called "a makefile for the cloud".

Whatever tooling you use, having portable pipelines that you can iterate on and debug from your laptop is the only sane approach imo.

[0] https://docs.garden.io/overview/use-cases#faster-simpler-and...


I totally agree. Make sure developers can run the same tasks locally as well.


Having worked with a huge Jenkins deployment at a large company somewhat recently (>10,000 jobs), I found it worked okay enough that company leadership never felt it was worth the pain of switching. But all the friction points added up to a system that was rarely touched by anyone who hadn’t learned where all the bodies were buried. Over time that meant as an engineering org we were underinvesting in CI; there was a lot of quality-oriented stuff beyond unit testing that never got implemented in CI because the typical dev was unaware of how to change it or fearful about trying to change it. The relative accessibility of Github Actions (and other config-as-code alternatives) transforms the average dev’s relationship to CI from consumer to owner/developer/maintainer, and I think that is extremely worthwhile even if these tools bring some new problems of their own.


As someone who has only ever been involved in CI/CD at smaller companies, I take for granted that our pipelines are maintainable by the developers writing the code being tested/packaged. That just seems like a positive indicator for QA culture.


Interesting, the side snipe about Docker

> One of the mostly-false promises of Docker, as it was sold to me by the true believers who introduced me to it, was that, if you do it right, you can run the same docker image, and therefore have basically the same environment in production, in CI, and on your local development machine.

I literally have never worked anywhere that hasn't used the same container in dev/test/prod.

People who have had to deal with processes that rebuilt images for each environment, can you tell me what was different in each image?


In general, it's an antipattern since it makes everything much more complicated. The typical reason to have it is so that there are dev tools in the dev container (autoformatter, linter, etc.) or support for hot reloading, and then the prod container is locked down. The other pattern you'll sometimes see is that the prod container will include an agent or a certificate bundle, although it's more common to use sidecars for this.

It becomes problematic because it then becomes easy for engineers to have a completely different container for dev than is used for prod. I recently found an issue where a dev container was using a completely different base and had a different version of node installed compared to the prod container.


Multi-stage docker files mostly solve this problem:

https://docs.docker.com/build/building/multi-stage/
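For example (a hypothetical sketch; the base image, stage names, and commands are made up, not prescriptive): a single Dockerfile where the test stage layers tooling on top of the same base the production stage is built from, so CI can build `--target test` while deploys use `--target prod`.

```dockerfile
# Shared base: app code plus production dependencies only.
FROM node:20-slim AS base
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .

# Test stage: same base, plus dev dependencies (linters, test runners).
FROM base AS test
RUN npm ci
CMD ["npm", "test"]

# Production stage: stays minimal, nothing test-related added.
FROM base AS prod
CMD ["node", "server.js"]
```

The trade-off, as noted, is that what you test (`docker build --target test`) is a superset of what you ship, not the identical image.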


I agree that multi-stage builds help address this, but that also means there are different images.


> It becomes problematic because it then becomes easy for engineers to have a completely different container for dev than is used for prod

If the differences between dev and prod are also programmatic in nature (eg based on flag, and mostly configuration-like values), it should be fine.


I think it's common (and important!) to build your "production" image once and deploy it into a dev/staging environment first. This means there are no re-builds between deploys to each environment.

However, I think it's less common to run automated testing within the same Docker image you build and deploy to these environments.

This is challenging because you probably don't want to include any build/test related packages in your production image (or layer) but you still want some level of confidence that your CI and Prod environments are the same.

I have often seen builds pass automated testing but fail after deployment because our production Dockerfile/layer was missing packages or shared libraries that were present in the CI environment.


Why don't you want to include build/test related packages in your production image? As you mention, you can't test the Docker image without these tools in the image, so you cannot guarantee the production image works as you would expect it to work. Precisely because of that, I think any packages that are required to run the tests should be part of the production image.


> However, I think it's less common to run automated testing within the same Docker image you build and deploy to these environments.

When you say "automated tests" do you mean unit or integration tests, or both?

Because unit should generally be in your compilation process, not part of any image, and integration shouldn't require anything different in the image compared to any other environment.

Build it, which includes unit tests, and then deploy and integration tests should be run against your stable API.


Great question! In my experience with Ruby on Rails, the application runtime is so dynamic that even unit tests may not be reliable if run outside the production image.

I think normally this isn't much of an issue because other automated/manual integration testing in the staging environments will catch any major problems, to your point.

Another example would be browser testing via chromedriver. I've usually seen this implemented alongside unit tests (i.e., prior to the build phase), but since it generally serves as an integration-level test for many applications, this has led to issues due to the testing and production environments being out of sync.

I think multi-layer Docker images are a compelling solution to this, but it's not usually how I've seen it implemented.

Instead, I've typically seen that test and prod environments are manually maintained. Sometimes these are two separate Dockerfiles or some shared CI environment managed by different teams.


> Another example would be for browser testing via chromedriver.

That shouldn't be in your master image. Put your application in a container, and either just run your browser tests and point it at your dev/test/prod deployment, or build a second image with Chrome driver and the test scripts and point it at your deployed application.

Doesn't need to be multi-layer, and it doesn't need to be complex. You have essentially two apps here.


But then there have to be rebuilds, no?


PHP dev here, we have extensions for development that make no sense in production, xdebug for example. You need it for breakpoints and debugging in general but it should not be installed in production. So we extend our production image and install it on top of it. Similarly, we include Composer (package manager) in the dev image only as we only need the installed dependencies in production but not the package manager. Our dev image is a flavor of the production one, really.


Would multi-stage Docker builds not help here? Composer executes in one step and the result artefacts are copied into a "clean" PHP image without Composer installed.


Based on the description they are doing a multi-stage build, but using the prod container as a base and then building the dev container atop that. But yes, you could easily go the other way, with dev building an artifact and adding it to a secure, locked-down container. This is less typical with dynamic languages that don't produce a single binary, but it still comes up. The downsides are that your prod container is now significantly different, and for dynamic languages the fast feedback loop now has a slowish build step.


This what we are doing for the prod container that does not have Composer installed yes.

But in development it's much easier to have it in the image. Additionally we do not bundle the code in the dev image but bind mount it in Docker Compose, which is much faster than rebuilding the image to test changes in development; PHP not being compiled allows us to do that to reduce the feedback loop duration.
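A rough sketch of that setup (service name, paths, and file names are invented for illustration): a dev-only Dockerfile that extends the prod image, with the source bind-mounted in Compose so edits don't trigger rebuilds.

```yaml
# docker-compose.override.yml (hypothetical) -- dev flavor of the prod image
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile.dev   # FROM app:prod, then install Composer + xdebug
    volumes:
      - ./src:/var/www/html/src    # bind mount: code changes land instantly,
                                   # no image rebuild in the feedback loop
```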


In my experience, Xdebug absolutely made sense in production, just not enabled by default for all requests. A lot of its functionality can be enabled via a cookie for a single session, and it's made debugging production much easier, as well as identifying bottlenecks in code or production infrastructure.


That's certainly possible but we have other tools for that such as NewRelic that served us well.


At a recent job, we had slightly different containers for local dev; our backend containers (for a Go app) had Air [1] installed for live reloading, plus Delve [2] running inside the container for VS Code's debugger to connect to. We also had a frontend container for local dev, which didn't get deployed as a container, just as static files.

[1] https://github.com/cosmtrek/air

[2] https://github.com/go-delve/delve/


> People who have had to deal with processes that rebuilt images for each environment, can you tell me what was different in each image?

TensorFlow used to generate multiple images for different use cases: CPU only, GPU, CPU with JupyterHub, GPU with JupyterHub.

So each release would generate minimum 4x container builds.

CPU-only images were useful for local testing of ML code at the CLI (does it run) when a GPU might not be available.

Jupyterhub variants for simple GUI for development work.

GPU for obvious reasons (run the damn thing).

I can’t remember if they used to do builds for their release against multiple CUDA / CuDNN versions… that might have been PyTorch.


I understand staging and prod running the same executable, but wouldn't dev typically be a debug build?


> The head of SRE championed Gitlab CI. I resisted this idea because I, the relatively inexperienced manager of a nascent team, was daunted by the prospect of trying to supplant Jenkins, Github, and JIRA all at once.

I think this is a really big scoping mistake. There is a clean glide path, and I have no idea why you'd think you need to replace Jira.

You can use Gitlab Runners in Github (https://docs.gitlab.com/ee/user/project/integrations/github....). Then with your mirrored repo you can try out the Gitlab features and realize it's better to use Gitlab for your repo if you're also using it for CI. Having switched back and forth over the years (with tens of engineers using the repos), it's really not a lot of work to migrate between Github and Gitlab.

Finally, if you like Jira, just use that? Sure there are advantages to having everything in one place, but if you're already using Jira+Github then you'll get an equivalent experience with Jira+Gitlab.

I think this is one of those things that if you had actually prototyped the migration, you would find that it's much easier than you thought it was going to be.


Our company switched from GitHub / Jira to everything on GitLab because devops had the authority to make the switch and that's what they wanted.

PMs now complain that the GitLab "issue board" is nowhere near a replacement for Jira, and us devs complain that GitHub had a nicer UX and fewer stability issues.

Can't make everyone happy at once I guess /shrug


The fact that Gitlab Issues aren't a replacement for Jira is a feature IMO. Jira is surprisingly awful at its main job. For example you can't have more than 2 levels of parent/child tasks (compared to Phabricator which has no limit). Changing an issue to a task or vice versa goes via a complex batch update mechanism. Over-configurability means you end up with a gazillion different task states (Done, Resolved, Finished, Closed, ...). You can't do basic things like reorder the backlog based on priority.

Perhaps most critically the interface is just insanely slow! It regularly causes waits in sprint meetings while we wait for someone to drag & drop an issue or for the page to load.

The only reason it's popular is because PMs like to make engineers do their job for them so they can just click a button and get pretty graphs that they can copy & paste to presentations.


> For example you can't have more than 2 levels of parent/child tasks (compared to Phabricator which has no limit)

They're deprecating Epic and introducing Parent right now in Cloud. Probably to solve this.


What management and what devs need from an issue manager is very different. Which is why every 5-10 years everyone wants to switch to a simpler tool; then slowly most of the complexity of the first tool gets added back, as the people who needed that one feature complain loudly enough, until the people who wanted simplicity become the loud voice and you switch to a simpler tool again. It is never the tool's fault. (Some tools are better than others, but the real problem isn't the tool.)


GitLab is $29/month. It's hard to stomach $29 plus the money for GitHub and Jira, depending on the features you need (minimum $11/mo).


OP's company is paying for a dedicated CI team. Unless they have special needs, they probably aren't concerned about the CI hosting costs themselves.


I’ve worked at some places that were pretty huge, but the finance department absolutely fucking nickel-and-dimed everything.

Even the smallest EC2 instances had to be accounted for with a business reason that they would accept, with audits happening every few days.

Asking for a licence for something upfront was like pulling teeth.


I’ve encountered this too.

I always wonder why the eng departments never itemize the time they spend dealing with finance, then bill the finance team for the hours.


I’ve threatened to do this to a sales department - send them a bill for lost time, engineers working overtime, etc because they sold shit without a clue if it even was possible.

Lead balloons have flown better than that idea did :)


You were in a cost center, and/or your company had no concept of productivity.


The latter. Extremely “Penny wise, pound foolish” nonsense going on.


If you can't afford $30 a month, do you really have a business?


> Finally, if you like Jira, just use that? Sure there are advantages to having everything in one place, but if you're already using Jira+Github then you'll get an equivalent experience with Jira+Gitlab.

Sometimes you don't adopt X because its likely to make people want to adopt Y


Yup. I think this is the sort of perspective you get with experience.


I truly, truly dislike Jenkins but haven’t found a better locally hosted CI system that is repo agnostic.

My issues with Jenkins:

1. Groovy is a really annoying language to configure stuff in. I’ve never written it for other things but it has so many little idiosyncrasies that catch you out.

2. Jenkins doesn’t have much in the way of resource management. It’s easy for a job to take over the entire node and Jenkins doesn’t have the smarts out of the box to monitor this and put things in a different node.

3. So many random job killing bugs that boil down to poor job isolation and resource tracking. Every now and then killing one job can take down another.

4. The UI and UX are bad. I know there are better interfaces for it, but they all just have such limited exposure of state beyond “running” and “failed”. Estimated time is useless because it aggregates failed jobs.

5. Inter job dependency management and tracking is poor and annoying to setup. It’s doable, but really should be a better first class citizen.

In many ways, I start looking to other tools to fill my CI need that aren’t meant for CI: Render farm managers.

Film farm rendering is largely similar to CI at a structural level, but with much higher throughput of individual jobs, and the tools have been built around making the most of that. There’s better inter-job dependency tracking and resource management because of that, and they’re built to expose more information up front to help non-tech-savvy artists or render wranglers who need to see status quickly, at a glance, across a few hundred jobs at once.


I've written web servers in Groovy for at least 5 years.. Groovy is great, once you get to know it, but some aspects of its design certainly don't "click" for a while. I'd be using a syntax for years when I'd read in some blog post "Oh that's just xyz with a shorter syntax" and I'd go Ooooh!

Great little language though, IMHO one of the best for quick prototyping.


I don’t doubt at all that Groovy is great for doing stuff when you’re used to it and spend a reasonable amount of time with it. I’ve heard great things.

But things like the differences between single quotes and double quotes and a bunch of other subtleties like that, make it really frustrating when the only time I use it is updating build logic.

I’d honestly prefer a more consistent language like Python or Lua. Even if I don’t code them all day, they have more resources to lookup and fewer gotchas.


There are a large number of quirks I’ve hit with the Kubernetes runner on Jenkins causing flaky behavior, but when it works, it’s mostly fine after figuring out Groovy. The 1-in-50 time a job needs to be kicked off again due to some strange error is annoying though.

More often than not, I end up creating a sandbox folder in Jenkins and cloning workflows to work on them by hand without having 90 commits of misc errors and typos. This has helped speed up some iteration.

Sort of off-topic from the article, but I think where almost every single CI engine falls short is the extremely limited scope it caters to: the cargo-cult ideology that every CI job must never have human input, that everything must be perfectly automated. This is wonderful when it is possible, but sucks when it isn’t feasible.

Jenkins’ strength is that it’s actually a workflow engine masquerading as a CI tool, and that gives a lot of flexibility for a number of bespoke processes.

Do you want to wait for someone to click a button to grant permission to run a job because the test might spin up extremely expensive resources and you’d really rather someone baby sit it? Jenkins has you covered.

Or maybe it just isn’t possible (or worth the time) for your team at a point in time to automate all input to a job, so spin up a Jenkins job that lets someone manually specify parameters.

Maybe your flow can trigger a release of software but you want a human to come behind to do a manual once-over as a final check, then also capture who/why approved it.

More CI tools need to accept the reality that not everyone can implement the perfect ideal of CI jobs. Some jobs will require a human step somewhere, be it custom input to a job or waiting for a response.


Jenkins is rock solid. Been using it for three years to deliver highly critical software. The development experience is the worst of all CI tools, since no one wants to set up jenkinspipelineunit to test the pipelines. Dagger might be the salvation here, but even that tool didn't support Jenkins out of the gate. And for those that are reading this, the statement about pipeline Groovy being a trap is exactly right. Avoid writing Groovy at all costs. No one on the team wants to get good at Groovy, and if you do, you're weird. The way we do it is to just use the Jenkinsfile to call the scripts (or tools) with bare minimal script code.

Edit: forgot to add that I recently interviewed a bunch of candidates for a high-paying job at an AI hardware company, and every one of those candidates stated they worked on or near a project that involved moving from Jenkins to GitLab CI.


> And for those that are reading this, the statement about pipeline groovy being a trap is exactly right. Avoid writing groovy at all costs.

I disagree with this pretty strongly. I've worked with Jenkins where it was all freestyle jobs and it was a nightmare to maintain. Pipelines written in groovy are essential to doing Jenkins well, imo. It's also not exactly hard to write groovy. It's basically Java, not some weird esoteric language nobody knows.


I'd suggest you wait until you spend a whole day on script approval and @NonCPS idiosyncrasies before you define Jenkins pipeline Groovy DSL as "basically Java". This doesn't mean you shouldn't be writing it, you should — but for the reasons completely different from its supposed "similarity to Java".


For anyone who thinks JavaScript or yaml has funky implicit conversions and comparison rules, wait until you hear about “Groovy truth”.

There's also the limitation that functions can be max 1000 bytes or whatever, I don't remember. Managing shared libs and testing them is not easy.

It’s not basically Java at all. Everything is subtly different and will surprise you in the least convenient moment.


> Avoid writing groovy at all costs. No one on the team wants to get good groovy and if you do, you're weird.

I guess I'm weird then, having produced about 100K LoC in Groovy over the last couple of years.

We'd move to GitLab CI too — if it anywhere near resembled a serious tool and not a sandbox toy collection.


Oh yes, keep it simple. I worked as a Jenkins admin for half a year, and one day I woke up from a nightmare and realized that I'd made some sort of parallelized batch-job processor, communicating by outputting bat-files, in Groovy, to cut down build times.


Does Jenkins let you test workflows locally?

That's the thing I hate that about CircleCI, Github Actions, etc... having to "commit, upload, wait" to test every minor change =/


I put all the important steps in scripts (makefiles or as commands in package.json for JS projects) which can be run without Jenkins. Pipeline in the Jenkinsfile only runs those steps separately in the correct container build stage to collect separate step results.

If done correctly, the Jenkinsfile should be easy to read by a person who has never written a Jenkins pipeline before.

With this approach it is hard to mess things up by creating a monster pipeline which no one can understand, and if necessary, you can quickly change to different CI runner at a later point in time.
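A hypothetical sketch of what that looks like (image name, make targets, and report paths all made up; `agent { docker { ... } }` assumes the Docker Pipeline plugin):

```groovy
// Jenkinsfile kept deliberately dumb: every stage is just `make <target>`,
// so the same steps run identically on a developer's machine.
pipeline {
    agent { docker { image 'node:20' } }           // hypothetical build image
    stages {
        stage('Build') { steps { sh 'make build' } }
        stage('Test')  { steps { sh 'make test'  } }
        stage('Package') {
            steps {
                sh 'make package'
                archiveArtifacts artifacts: 'dist/**'  // collect step results
            }
        }
    }
    post {
        always { junit 'reports/**/*.xml' }        // surface results in the UI
    }
}
```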


Jenkins has a “Replay” button on completed jobs and allows you to edit the Jenkinsfile before rerunning it. That is the easiest way to test changes without going through version control.


What would people recommend for a self-hosted CI system in 2023?

I'm currently leaning towards Jenkins, because neighboring teams are using it (with some pain due to their scale, but theirs is 10x-100x ours), and I have some ~10y-old experience with it.

(Our builds are compiling for embedded targets, with a Makefile-based system)


I've noticed the Avahi folks use Woodpecker CI [1], which I hadn't heard of before, because it supports priorities and pre-emption [2]

[1] https://woodpecker-ci.org/

[2] https://social.treehouse.systems/@marcan/111258515328251078



You can run a local gitlab instance, so I'd assume self-hosting as a rider on that. Unless I've been making some very bad mistakes.

GitLab's not too bad. I had a Gradle pile I had been using, but once I knocked out some old Java bits I wanted to slim my "required tools" down a bit. I've integrated some OpenProject stuff into GL also. Maybe someday, for "minimal" clients, I can just bring in GitLab and call it a day, so far as server stuff is concerned.

That day is not today, however. Customers in my business want a lot - like, a LOT a lot - of project toys, and homebrewing those in GitLab is beyond my skills.


I use Drone self-hosted (which is what Woodpecker CI, mentioned in one of the other replies, was forked from). It's an alright system and it works with my Gitea instance to build executables and Docker images.

I think, though, ironically, my takeaway after setting everything up was that I didn't really need to spend a lot of time getting it perfect; just getting it working was enough for me.


Buildkite is mixed-hosting; source code stays on your machines, on-prem if you like, but build logs / history / SCM triggers are hosted.

Unclear whether that’s enough for your situation but it’s by far my favourite because of how it lets you store the CI pipeline in the repository (so changes are reviewed via normal channels etc).


Because not every company wants to send their source code to the cloud. Basically all the reasons organizations self-host their code repositories (and a bunch do) apply here too.


Note that I asked "what" do people recommend, not "why" do people recommend...


The blog perfectly sums up the feelings of anyone who has worked deep in jenkins. It works, but goddamn it's hard not to hate.

Dagger is the answer to jenkins woes. It's saved my sanity and made CI development tolerable again.

* Real language support. No groovy, no yaml.

* Debugging available in your native IDE tools.

* Clear docs in any language of your choice.

* Reproducible builds you can run locally. Just need docker.

This all comes with the bonus that it can actually just consume GHA yaml with some of the tooling folks have made.


Jenkins is amazing for one thing in particular, and I've not really seen it replicated well elsewhere (like gitlab ci): common job execution. Letting people run jobs on/against shared equipment, rather than having people run scripts locally.

I wouldn't recommend it for general CI other than that. Think of it like a butler, able to do small jobs, not as your complete home assistant.


> “everybody should be writing and maintaining their own CI jobs”

And, of course, everybody should be using whatever language and version they want to, and run their software on whatever cloud they want. The notion of this makes me super uneasy, but at the same time it's a good explanation why many big orgs have 100s of engineers when they can probably easily do with 10s of them.


I had a gig (circa 2014) where Jenkins was used in atrocious ways, where it was basically running the show. Yet there were no problems with Jenkins itself; it was running smoothly.


I’ve found that people will use CI tools like a hammer once they get them working - everything’s a nail, even screws or bolts.

You often end up with these glorious Rube Goldberg machines where Jenkins does everything and becomes a single point of failure - if the instance falls over, everything’s fucked.


I once worked somewhere that had two very different Jenkins instances - one that was the typical dev CI instance, and one that was a job scheduler for all the prod batch jobs. It actually worked very well.


Yes, one multinational company who was a customer at my last employer lamented their "dystopian Jenkins hellscape", "glued together with 300+ Python scripts".


> Your editor won’t syntax highlight the Bash inside Groovy. You can’t run “shellcheck” (or any sort of Linter) on the Bash inside the groovy. You can’t very easily execute your shell commands to test them.

this lands, and not specifically for jenkins. CI generally is inscrutable and difficult to test

every CI has its quirk

github CI feels fundamentally worse than gitlab because job artifacts are native in gitlab -- this makes a million things easier. github's own action for attaching things to releases is deprecated. the upload-artifact action doesn't work in nektos/act (the only way to test locally)


One of the most frustrating things about gitlab ci is testing rules. There's just no way to do it without sending it live, especially for things that might be more rare (tags, releases, default branches, etc). Why it isn't a priority to let people see what the jobs/vars of a pipeline would be without fully executing it is beyond me.


yeah, really need a repl for CI. have run into exactly the same issue, conditional cases are impossible to test ahead of time


The real failure here was the failure to recognize the sunk cost fallacy. My team attempted to pursue the "supported" path with Concourse for a recent project, and I watched them bang their heads for a week, realized it wasn't going well, set a meeting where we all discussed and decided to move to GitHub Actions. Now everyone is happy, the project is back on track, and I don't have to write a blog post like this one about Concourse.


I really like Concourse but unless you've got someone who's fully drunk the koolaid, it can be hard to get a team onboard. It's got some very strong opinions on how builds should be deterministic. "Why can't I just click re-run this failed job!?"

If you're doing something that doesn't line up with Pivotal / Cloud Foundry workflows, you're probably going to have a bad time.


The kubernetes plug-in for Jenkins works well for us, and as the checkout/clone happens in the pod you don’t have the workspace bind mount problem. We are using docker-in-docker at the moment but looking at buildah. We do parallel windows container builds as well.

Jenkins is a bit like Perl IMO: there are ways to use it that avoid the majority of its sharp edges (e.g. putting logic in bash and just calling bash, avoiding groovy) if you know the perils that lurk.


> We are using docker-in-docker at the moment

You can also run a "less privileged" container with all the features of Docker by using rootless buildkit in Kubernetes. Here are some examples:

https://github.com/moby/buildkit/tree/master/examples/kubern...

https://github.com/moby/buildkit/blob/master/examples/kubern...

It's also possible to run dedicated buildkitd workers and connect to them remotely.


The biggest issue I have with the plugin is the container takes a significant amount of resources to run.

I don't know how that compares to other solutions like github though. Being able to run a really lightweight container in argowf has its advantages


Concourse CI is great, it felt like it would be easier to write Concourse resources for the custom things we needed rather than fighting Jenkins anymore.


I don't mind Jenkins pipeline limitations because my philosophy is that CI should be as simple and contain as little build logic as possible.

Every job should be a few simple commands or a python script or something I can run locally. This simplifies reproducing and debugging issues caught by CI -- there's nothing worse than wasting time trying to reproduce what happened.
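In practice this can be as small as one dispatch script that both CI and developers invoke; a minimal sketch of the idea (the step names and the placeholder commands are illustrative):

```shell
# run_step dispatches to plain commands, so the CI config reduces to
# "call this script with a step name" and every step reproduces locally.
# Steps and their commands are placeholders for a real project's.
run_step() {
    case "$1" in
        lint)  echo "running linter" ;;     # e.g. shellcheck ./*.sh
        test)  echo "running tests" ;;      # e.g. make test
        build) echo "building artifact" ;;  # e.g. make all
        *)     echo "unknown step: $1" >&2; return 2 ;;
    esac
}

run_step "${1:-test}"   # CI and developers both call: ./ci.sh <step>
```

The CI job then contains nothing worth debugging remotely: it just runs the script.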


I really resonate with this post after working with Jenkins for ages. In my experience I wasn't very happy with Bash in the end either, and moved to JavaScript for reliability, performance, and tighter integration with the cloud we're deploying to. It also allowed us to run locally and avoid having to deploy to Jenkins to test.

The discoverability issue is also a massive issue, declarative pipelines massively over promise its capability. Pipeline configuration was mostly still done in the UI because extensions had 0 documentation about how to declaratively define its configuration. Searching on the internet for the magic incantations as the author puts it is such a waste of time and I just compromised to get it running.

I'm glad this post is out there to articulate these common challenges when you invest so much into Jenkins.


the worst thing about Java (in general; Jenkins is Java) is that it solves a real problem that companies have.

I posit, however, that only companies have the problem, not programmers, and not even regular-sized engineering teams, except when they have fast employee turnover.

I don't have a very clear idea, but roughly speaking I'm saying that Java does really well in the "too many engineers are working on the same code but not at the same time" problem space


Isn't Go the New Java in that sense? I am no fan but it seems industrial strength for this kinda use case.


The complaint about a long feedback loop is true of every CI system I've used (jenkins, travis, gitlab, github actions). None of them really have a good way to test a pipeline locally. The best solution here is to put as much of the build logic as possible in scripts and your build system.


The part about using Docker is wrong. You can use the docker plugin and just run any image you want, and then run the code inside that specified image. You can pass additional parameters to also run it within the context of any particular docker build stage.

I use the same Dockerfile for production builds and for running all the CI related tasks, without any problems. And the same single pipeline is used for PR hooks and releases.
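For reference, the declarative form of this is small; a hedged sketch, assuming the Docker Pipeline plugin is installed (the image, args, and commands are illustrative):

```groovy
pipeline {
    // Every stage runs inside this image; the workspace is mounted automatically.
    agent {
        docker {
            image 'node:20'                    // run any image you want
            args  '-v $HOME/.npm:/root/.npm'   // extra `docker run` parameters
        }
    }
    stages {
        stage('ci') {
            steps {
                sh 'npm ci'
                sh 'npm test'
            }
        }
    }
}
```

Swapping `docker { ... }` for `dockerfile true` builds the agent image from the repository's own Dockerfile, which is how one Dockerfile can serve both production builds and CI tasks.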

Post also mentions Blue Ocean UI, but that one is deprecated and should not be used at all.

Jenkins is far from perfect, but unless you can use Gitlab (and sadly you can not in every case), it is the most feature rich self-hosted tool out there. Just make sure you set it up with Jenkins Configuration as a Code plugin out of the box.


Jenkins is the worst CI system except for all the others. Everything feels like a pile of hacks upon hacks, maintaining it is tedious and unrewarding, new functionality is always bolted on and undiscoverable. But every alternative implements the first 20% of features that provide 80% of the functionality and then never gets around to implementing the other 80% (and interestingly every attempt to "modernize" Jenkins, such as "Blue Ocean" or "Declarative Pipelines" as mentioned in the article, suffers from exactly the same problem).

I wish there was a nicer tool that worked. But there isn't.


As somebody who has spent his share of time interacting with Jenkins as a user, and messing around inside it as a developer, I am definitely biased. For or against, I'm not sure tho ;-)

But the problem with Jenkins is the problem with all CI. Our (software in general, not Jenkins specifically) build processes are all godforsaken nightmares that would put HP Lovecraft and Rube Goldberg into rubber rooms. You can't add a bunch of extra automation to any of these systems and get something better, let alone a system that needs to be able to automate so many different and unique snowflake horrors.


Maybe it's time for languages themselves to start defining what CI means, the way they currently define how builds and packages work.

They already include test frameworks.

It would be cool to have, at least as an experiment, a language with no snowflake horrors at all, if it runs at all, it's in the language or framework's One True Way All Tools Understand. Static typing, maybe even linter warnings as fatal errors, anything that makes adding CI a hassle just doesn't exist.

Maybe it could even refuse to run without full test coverage.

So you could take any project at all, click "Turn on CI" and that's that, there's only one CI config that exists.

I'm guessing the number of use cases would be tiny to nonexistent in the startup centric tech world, but they might be some of the least stressful projects ever made!


The problem is that a real green-field re-imagination of software development needs to go back to the 70s, and back to the (these days mostly fake) metal, meaning it's a massive job that will take years to even know if you might make it. You need to start with something simple-but-useless, and grow it from there, rather than attacking a single layer or two of our current enormous tower of babel. It will be difficult to adopt because it must by definition be alien to our current workflows, and it will take forever to find users. The only way to find success is to slowly and surely build something that is so obviously good existing devs will be willing to give up tools in which they've invested years and (for many of them) their identities, just to play with it.

It requires an iron clad vision, willingness to be a pariah among the coding masses and a dictator internally, excellent communication in order to present the vision and progress to the outsiders who are interested, and piles and piles of money to pay for it for years before it becomes sustainable, however you plan to make it so.


Needs a (2019)

Richard was my manager in early 2019, and was awesome.


Submitter editorialized the headline as well, Actual title is "Life is Too Short for Jenkins"


Submitter's is a way better title imo


You should change the title to match tbh


I’ve long had a very similar argument for why commuting in the Bay Area* sucks: because it’s just tolerable enough that people keep doing it. It’s not unmanageable, but it drains your life little by little every day.

* This, of course, applies to all long commutes everywhere, like how it applies not only to Jenkins, but lots of other software too… especially Jira.


This is my favourite quote:

    The worst thing about Jenkins is that it works. It can meet your needs. With a liiittle more effort, or by adopting sliiiightly lower standards, or with a liiiiitle more tolerance for pain, you can always get Jenkins to do aaaaalmost what you want it to.
This is great, opinionated writing!


I'm so much more happy with GitLab CI. The only thing that is easy to implement in Jenkins, but an outright horror in anything else, is the usual "helper scripts" stuff: sync the production database to integration (and remove PII in the process), run reports and data analysis... the best way I've figured out to implement this is to create an empty branch (git checkout --orphan helper_xyz) and then run pipelines against this branch either manually or on a scheduler, but it sucks.

And this is what keeps Jenkins afloat, despite it being a consistent source of CVEs both in core and in its myriad of plugins that don't bother to follow semantic versioning so every upgrade is playing Russian roulette with your company.


at my company this is all very easy. there is no version control of any kind and also no automated testing. solves a lot of problems.


I stayed away from jenkins for a long while because of its bad reputation, but after diving into it for a recent project I was surprisingly impressed how easy it was to set everything up (with the docker image at least), and it seemed that there was a solution for just about every convoluted build scenario I could come up with. In contrast, I've tried out other tools and encountered: confusing uis, too much focus on docker, broken interop with forges, poor documentation, too much yaml, ...

I guess the flexibility with jenkins is probably both a blessing and a curse.


Of everything I've tried, gitlab ci is my favourite. It's just a shell script, and that makes it super simple to replicate yourself locally, and - I think most importantly - it's really obvious what it's doing. I think my biggest problem with Jenkins and GitHub and such is all the plugins that do something, but without digging through the source code you cannot work out what they really do


cloud is so good now it’s hard to justify not doing something bespoke. ec2 spot is insanely cheaper than turnkey cicd, and better in almost every way.

i’m delighted to pay 30% over infra cost for convenience, but not 500%. and it better actually be convenient, not just have a good landing page and sales team.

this month i learned localzones have even better spot prices. losangeles-1 is half the spot price of us-west-2.

for a runner, do something like this, but react to an http call instead of a s3 put[1].

for a web ui do something like this[2].

s3, lambda, and ec2 spot are a perfect fit for cicd and a lot more.

1. https://github.com/nathants/libaws/tree/91b1c27fc947e067ed46...

2. https://github.com/nathants/aws-exec/tree/e68769126b5aae0e35...


> Warning – I’m going to rant for many, many paragraphs. My advice is to skim

I had a good chuckle at this.


I agree, Jenkins is awful. We also started with Bash scripts, but eventually moved to using Ansible playbooks, which is much nicer than Groovy. Alternatively, I recommend using Earthly as well.


I agree with most of this. I end up with a lot of

      sh 'make all'
      sh 'make test'
In my `Jenkinsfile`, but there's not a great way to test things locally.


While we are plugging ci tools, I enjoyed buildkite at my last gig


Same, we are using it now and it is a much nicer experience than Jenkins


Wow, this title sums up in one sentence what we all hate about Jenkins. It's like the equivalent of timeouts being worse than fast fails.


GitHub Actions works great. Even AWS CodeBuild is good.


The best part about Jenkins is that it works. The worst part is that many people don't want to work with Jenkins.


Well, to be fair, pretty much each of these points apply to all other CI tools in existence.


Jenkins is, like, uniquely awful though. Aside from the sheer staggering number of bugs in its core functionality (I encounter on average maybe one bug per new pipeline I write), it is also configured in almost the worst language I've ever touched (considerably worse than Nix, for example, which is really saying something). "Blue Ocean is Incomplete and Unintuitive" is entirely correct - you have to drop back to the old interface for essentially everything, and also Blue Ocean has the absolutely unforgivable behaviour that it responds to any ESC keypress, even if the OS has focus.


Nix is The Best though?


The Nix language is awful. The Nix idea is The Best. When the CLI stabilisation work lands to separate out the store-manipulation plumbing commands from the porcelain Nixlang-related stuff, I have hope that a usable frontend will appear.


>The head of SRE championed Gitlab CI. I resisted this

Well there's your problem right there.


Jenkins has improved a lot. Before GitHub Actions had self-hosted runners, Jenkins had worker nodes for years.

Using Jenkins declarative pipelines, almost everything can be expressed in code.

Updating Jenkins is a walk in the park.

The most tedious part is dealing with plugins that are no longer maintained or supported.

Make sure to never expose Jenkins to the outside; use a relay to bring webhook traffic in.


Really like the one line dismissal of gitlab ci - haha, what a n00b.


Been using git hooks with in-repo Makefiles and small bash scripts to good effect for some years now.

It's fast, simple and took a couple hours to set up. Rarely ever needs attention. It just works.
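For anyone curious how small that setup can be, here's a self-contained sketch (all paths and the toy `ci` make target are illustrative): a push to a bare repo fires its post-receive hook, which checks the code out and runs the in-repo Makefile.

```shell
set -e
base=$(mktemp -d)

# The "server": a bare repo whose post-receive hook is the entire CI system.
git init -q --bare "$base/origin.git"
mkdir -p "$base/worktree"
cat > "$base/origin.git/hooks/post-receive" <<EOF
#!/bin/sh
git --work-tree="$base/worktree" --git-dir="$base/origin.git" checkout -f main
make -C "$base/worktree" ci
EOF
chmod +x "$base/origin.git/hooks/post-receive"

# A toy project whose Makefile is the whole pipeline:
git init -q "$base/src"
git -C "$base/src" checkout -qb main
printf 'ci:\n\t@echo ok > ci-ran.txt\n' > "$base/src/Makefile"
git -C "$base/src" add Makefile
git -C "$base/src" -c user.email=d@e.v -c user.name=dev commit -qm init

# Pushing triggers the hook: checkout, then make.
git -C "$base/src" push -q "$base/origin.git" main
```

After the push, the checked-out worktree contains the artifact the `ci` target produced.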


The most interesting part IMO was at the end when op was noting his new company doesn't have teams/engineers create and maintain their own jobs. I've worked lots of different places now and something that's always frustrated me is how reticent engineering orgs are to reevaluate their processes and policies. Like it looks like his first org could have moved on from this problem by adopting the second org's policy. There might be other tradeoffs, but IME no org even wants to hear about this stuff, and they definitely don't have metrics for evaluating anything such that they might say "huh, this isn't working."


There are tradeoffs. What happens when the CI team is a bottleneck? What happens when you have to frontload tasks you might not need because you’re afraid your release could be compromised?

What happens when the CI team says no and it introduces complexity in your system? Or you end up doing “shadow CI” on your machines to avoid conflict and escalation?

The complexity has to live somewhere. It is always worth evaluating the optimum place for your organization at any given stage.


Sure, of course. My point is just that engineering orgs are generally super conservative and thus consistently fall prey to sunk cost fallacies everywhere. In a very irritatingly bureaucratic sense, you have to have a process for improving your processes, or you end up making awful decisions just because they're the least amount of change, and that metric always wins.


We're using Jenkins at work. For building artifacts, it's alright, as our use case is simple. But we're also using it for our automated tests, which were already kind of a dumpster fire in terms of reliability (both because of mistakes made when automated testing was put into place and because of issues inherent to our product), and I don't think Jenkins is helping a lot here, even after considerable investment in dev time.


> On a previous team I had used Concourse CI to some extent, but I wasn’t really blown away by the experience. Travis and Circle were mentioned. I was a fool. I should have committed to seriously researching some of the contenders and making a more informed decision, but I lacked the willpower and the discernment.

The whole post can be summed up as: he had very little CICD experience and made lots of beginner mistakes, which is easy to do in Jenkins. Then he decided to write a post where all his complaints about Jenkins are not only wrong but are issues that plague all the other CICD tools as well.

> So instead of writing Bash directly, you’re writing Bash inside Groovy

Why are you doing that? You have a fully featured programming language and you are running `sh('npm install')`. You could do this instead https://github.com/DontShaveTheYak/jenkins-std-lib/blob/mast... . How is bash inside of YAML better?

> The trouble is: Groovy is a much, much worse language for executing commands than Bash. Bash is interpreted, has a REPL that is great for experimentation, doesn't require a ton of imports, and has lightweight syntax. Groovy has none of these things.

Groovy has a language server, linters and a vscode IDE plugins. They are probably not as stable or full featured as the bash ones, but they are available and very few take advantage of them. Again, how is YAML+Bash better?

> The way that developers test their Groovy steps is by triggering a job on the remote Jenkins server to run them. The feedback loop is 2 orders of magnitude slower than it is for just executing Bash locally.

This is a rookie mistake. For about 60-75% of pipelines you can run them locally in a docker container on your machine. You can even set up hot code reload so that as you change your pipeline, Jenkins reloads it. You can also configure the job to kick off a build when it reloads the code. When Jenkins is configured correctly it has the fastest feedback loop of any CICD tool on the market. GitHub Actions comes in a close second since it can also be run locally, but you can't run a "clone" of what you run in production, like having the same secrets, so it gets second place. Besides Jenkins and GitHub Actions, I don't know of any solutions for the other tools.

You can run a GitHub Action on Jenkins. It's a very deep and complex system. It's like an iceberg, and so many engineers don't leave the surface before deciding it sucks and one of the YAML CICD tools is better. Sure, the YAML alternatives are EASY to get started with and to do basic stuff with. But they are terrible at anything complex. While Jenkins is not easy to get started with, once mastered, you can build complex pipelines with ease.

I get that I'm a Jenkins fanboy. Most of the things I mentioned above, I either contribute to or I'm the author of. I know Jenkins has issues. I know it has hurt lots of people, I read the complaints online. But it's still the best out there. The best software in the world is not written in bash or yaml and the same is true of the best CICD pipelines in the world. It's a shame very few people get to see/use those pipelines.


This is "the worst part of Jenkins is that it works".

You shouldn't judge a developer tool by just what is possible to do with the tool. After all, with a little turing-completeness it is possible to do anything with anything -- you should judge the tool by what is easy to do with the tool. A good developer tool shouldn't require knowledge of a bunch of arcana to "configure correctly". A good tool protects you from "rookie mistakes" and makes sane choices the intuitive and obvious path of least resistance. Good tools can have a learning curve, but they assist the learning curve by making their abilities easy to discover and experiment with, they don't require you to dig into source code or do random searches on github to find some random pipeline somewhere that uses the configuration you need, as described in the post.


I wasn't judging the tools. I use several of them and with 0 complaints.

I'm judging the article for being incorrect about its specific points and for focusing on a single tool while the alternatives also suffer from the exact same issues.


You are judging Jenkins favorably because it is possible (with a bunch of arcane knowledge) to build good CI on top of it.

I am saying you should judge Jenkins disfavorably for being hard to use instead of going "skill issue" when somebody describes the pain points.


Avoid Jenkins, and if you can, try https://www.jetbrains.com/teamcity

"TeamCity Professional is free – even for commercial use – and has no limitations on features, number of users, or build time. It allows you to configure up to 100 builds and run up to 3 builds in parallel, which is more than enough for most projects"


> TeamCity Professional is free – even for commercial use – and has no limitations on features, number of users, or build time.

Yeah, no. There's obviously gonna be a catch in the future.


To be fair, I've never seen a company turn around a licensing fiasco as well as Jetbrains did.

Initial announcement, 2015: https://blog.jetbrains.com/blog/2015/09/03/introducing-jetbr...

Community anger a couple days later: https://bytecrafter.blogspot.com/2015/09/how-jetbrains-lost-...

Significant changes after hearing the feedback, a couple weeks later: https://blog.jetbrains.com/blog/2015/09/18/final-update-on-t...

I'd trust Jetbrains -- as a company with a few decades of experience serving devs and only devs -- much more than your typical get-rich-quick cloud SaaS startup that's just looking for a quick exit.

Yes, they need income too, but they're one of those few tech companies not aiming for hypergrowth, just a sustained user base.

Even though VScode gets all the love these days, and Atlassian and Github and CircleCI etc. get all the enterprise use, they've made some really cool products of their own too (Space, TeamCity). And their IDEs continue to be excellent.

It's not like TeamCity even has a secret catch... their cloud version DOES have a monthly cost already. It's only the self-hosted version that's free.


> To be fair, I've never seen a company turn around a licensing fiasco as well as Jetbrains did.

Absolutely, though it was quite dramatic.


Maybe, yes. But it's been exactly the same license/pricing since I first used it, and that's somewhere around 5-7 years ago. Quite some time so far.

I would recommend TeamCity to everyone, it's awesome.


The catch is you're limited to a small number of parallel builds (3). Our CI pipeline would take hours to run with that limitation.


Just as a note: TeamCity originally launched around 2005 sometime, so it's coming up to be 20 years old soon. I think TeamCity Professional, when self-hosting, has been free since always.


Yeah, I use TeamCity for both my test and production setups. I've only had positive experiences with it, very nice seamless integration with Jetbrains' IDEs as well.


TeamCity is even worse than Jenkins. Jenkins randomly breaks and you can eventually fix it, or kludge some kind of workaround. TeamCity randomly breaks and you call support and they can't fix it.


>configure up to 100 builds and run up to 3 builds in parallel

Funnily enough, Jenkins does not have that limitation.


It has plenty of others which more than make up for it though.


The responses to this post are hilarious.


free until it's not, of course


or enshittified


Jetbrains is among the very few companies I’ll trust for the time being


"For the time being" is an understatement. The company has been around for 20+ years, and while I haven't used TeamCity, if it is anything like their other products, you can be certain it is built with a high level of attention to detail and that that won't change anytime soon. It is one of the few companies left I license stuff from.


It is a capitalist enterprise, which means control by its very owners is contingent on making enough profit to pay the bills, in the very best of cases. This is not my opinion.


Yeah, but this is JetBrains. They are one of the few companies whose products are so much better than anyone else’s that I’d pay for them, even as an independent developer. They’ve been around for decades and are still more than popular enough that making money shouldn’t be their issue. My company of 80,000+ employees has a JetBrains license for every developer.


...for now.


It's not a publicly traded company and not obligated to increase value for shareholders. Also, they are located in the Czech Republic, which has a very different business mentality, pursuing product quality over short term profits.


Same here. Although I am super nervous that they will either cash out by going public or selling to some investor. That will be the end of it.


They aren't a silly con valley startup and they've been around for 20+ years.

https://www.jetbrains.com/company/


Always praying that they never will. I’ve been using their products since my first “real” developer job over 12 years ago and I never want to switch.


Maybe they will at one point, but JetBrains so far did never take VC capital (and if I remember correctly said "no" more than once) [1]. That doesn't tell about the future, but they probably had many options to cash out and never did.

[1] https://www.bloomberg.com/news/articles/2020-12-18/czech-sta...


Jetbrains products are outstanding


Aside from the docker snipe I've already commented on, everywhere that I've had to deal with Jenkins it's been an issue because it's always a pet.

There are always issues, and like the author, it's "only" about 25% of the time. A quarter of all builds fail due to things absolutely not related to that specific build.

> oh, we were resource constrained for that one second, retry and it'll probably work

> oh, we can't replicate that error message from a tool that you have no visibility over, retry and it'll probably work

> oh, we have no idea why something timed out, retry and it will probably work

Now when I have a failed build I don't actually trust the failure; before suspecting an edge case in an integration test, I suspect the infrastructure. Because it's almost always something outside of the code.

[Edit] downvotes but no constructive responses? The issue isn't necessarily Jenkins, it's the type of people who use Jenkins to roll their own everything.


> oh, we were resource constrained for that one second, retry and it'll probably work

Had the same issues with (self-managed) gitlab CI. This is not jenkins-specific, and it's usually something you can fix much more easily in jenkins (other than just retrying)

> oh, we can't replicate that error message from a tool that you have no visibility over, retry and it'll probably work

This could maybe be better in other, more declarative CIs but honestly, its pretty much the same. The only thing some of them have is a local runner.

> oh, we have no idea why something timed out, retry and it will probably work

Jenkins has this, yes. But use another CI with docker and you can get the same and much worse.

But in essence, yes, at our company too, jenkins is a pet. But this means that someone knows it from the inside and outside and knows which things to fix in your pipeline. Good luck with trying to extend gitlab CI (or others) to fix things you need and/or opening an issue with them.

Gitlab has quite a few high-profile issues sitting around for years with customer requests piling up. In jenkins, you'll find an extension that will do exactly what you want, or you can roll your own.


You appear to be reading more into what I'm saying.

I didn't say this was exclusive to Jenkins, just that Jenkins has always been a subset of this problem. Selfhosting this, and wanting to make it completely customised to fit the whim of whatever someone in compliance/security/accounting decided is an issue. It doesn't help that Blue Ocean is a terrible UI, but that's just an ugly facade to a system that enables bad habits. Other systems can be abused, but whenever Jenkins appears it is always abused.


Jenkins works...

No it doesn't, or maybe it does for Java, which I don't care about.

The ui looks like from the 90s.

Finding anything in bigger projects/companies is impossible... I have browser bookmarks to find anything.

The log view is annoying at best.

I have nothing good to say about jenkins, it feels like it is stuck in the 2000s and refuses to change.


The 90s were the last actual advance in UI design anyway. Design has drifted in a terrible direction, in the past decade or two especially. Some of this is justified (by some) by optimizing for mobile, but that’s precisely the kind of thing I wouldn’t want creeping into my CI tooling anyway. I don’t do that on a phone.


For desktop apps maybe. Not for websites though. Jenkins has an absolutely objectively terrible UI. Super ugly, inconsistent and incomprehensible.

I've used Buildbot, Travis, Gitlab, GitHub and Teamcity. None of them are anywhere near as bad as Jenkins. Even Buildbot has a much much saner UI and that's a weird Python CI system that practically nobody uses.

Jenkins is just an objectively terrible choice.


This idea that design peaked in the 90s does a big hand wave over a whole bunch of horrible examples of the user interface from that era.

Is the Space Jam website peak UI design? What about Microsoft’s “hell of tabs” settings dialog boxes?

Or the original Amazon home page: https://www.versionmuseum.com/images/websites/amazon-website...

What link do I click to find my order status? Where do I go to search for a book by keyword? What will happen if I click the link for “first time customers click here?”

Same deal with Yahoo!: what do the “new” and “cool” and “more yahoos” buttons do? http://2.bp.blogspot.com/-JAp1M-Q_0_Y/TkAwsyueWxI/AAAAAAAAAM...

If I want to check a build status on my phone what’s the excuse for the web page to not display nicely on that device? You really think there’s no good reason for me to want to do that? Doesn’t modern CSS allow you to completely customize the viewing experience for both platforms in the same codebase?
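The CSS point can be illustrated with a media query. A minimal sketch, with hypothetical class names (not taken from Jenkins), showing one markup serving both desktop and phone:

```css
/* Dense four-column build list for desktop screens. */
.build-list {
  display: grid;
  grid-template-columns: repeat(4, 1fr);
}

/* On narrow (phone-sized) viewports, the same markup
   collapses to a single readable column. */
@media (max-width: 600px) {
  .build-list {
    grid-template-columns: 1fr;
  }
}
```

No separate mobile codebase is needed; the browser picks the rules that match the viewport.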


Confusing web design and desktop UI design in the 90s does little to dispute the assertion that desktop UI design peaked in the 90s. In the late 90s, the browser was completely new - so there was a lot of skeuomorphism, borrowing from other media to try to make the web work well. Space Jam's format was familiar, though to users of multimedia CD-ROM and other interactive hypermedia of the era.

The desktop application GUI - which is really what people are claiming hit the peak in the 90s, really did. Menus, windows, tabs, dialogs, scroll bars etc... all were fairly well settled, and users understood them. A user who knew Word for Windows could do pretty well using WordPerfect for Windows. Most day-to-day applications were pretty easy to figure out because discoverability was very well done, and wizards and how-to dialogs helped users through the rough bits.

There was consistency between applications - save in areas where the OS didn't really provide GUI guidance - so design, CAD, and other creative apps (hi, Adobe) often had divergent ways of doing things and came with a steep learning curve. The web took off because it actually worked a lot like the multimedia CD-ROMs that preceded it - and websites were a lot easier for developers to build.


I'm not confusing web and desktop UI design. I presented examples from both areas that were miserable in the 90s.

There was not consistency between applications back then, only if you cherry-pick the ones that you like. Every Java GUI application had a completely different UI from the base OS. Programs like America Online, RealPlayer, WinAmp, Microsoft Bob, and Windows Media Player 7 and later completely ignored existing OS conventions. Many programs had a habit of making the entire user interface out of bitmaps, with only a menu bar as the last vestige of the OS.

Websites might have been easier for developers to build, but they did almost nothing in comparison to what you can do with a web application now. Did Microsoft Word run entirely in a web browser like it does today? Were there any maps and driving directions where you could scroll the map without refreshing the page or order a taxi and visualize its progress? Remember typing your address into MapQuest and printing out your static map and directions list?

Arguably, delivering a 90s web-app experience doesn't even require writing HTML anymore, and is therefore far easier than it was in the 90s.


> only if you cherry-pick the ones that you like.

As did you. Every item you've cherry-picked used a multimedia-app paradigm (similar to a CD-ROM of the era) or was a novelty (WinAmp). If you look at best-selling titles of the era, you'll find a lot of buy-in to both the Mac and Windows HIG. Back then people bought word processors, spreadsheets, and other software, and had a pretty high expectation of interop and usability compared to now.

> Did Microsoft Word run entirely in a web browser like it does today?

Of course not! Developers were still figuring out what you could do with primitive browsers and limited servers. Most developers from the 90s were unaware of the web until 97-98.


Desktop program UI design mostly peaked in the 90s (search-to-launch becoming ubiquitous is about the only really good thing to happen since then). Desktop web design peaked somewhat later, in the two- and three-column era of the '00s. It was nice when most sites looked and worked about the same way, and that way also happened to have great info density for a desktop screen.

Mobile web design, I’m not sure it’s had a peak yet. Phone OS UI peak was iOS 6.


> Phone OS UI peak was iOS 6.

I couldn't disagree more strongly with this.

The UI can't just be considered on its own without a discussion of the underlying functionality it presents. iOS 6 is like a caveman's OS compared to what today's OS versions can accomplish.

So it may be true that iOS felt simple and intuitive, but it was also presenting so much less functionality that it really makes broad comparisons seem disingenuous.


Most of the important functionality that’s been added didn’t require the changes to UI that I consider a downgrade.


The UI absolutely has to change to accommodate new functionality. New functionality means more “buttons and switches” to contend with.

For example, iMessage didn’t used to have any app integration to insert things. It used to be pictures/videos and text. Now it has the ability to insert a whole bunch of things and from a variety of first and third party apps. It can handle things like payments and location sharing. There has to be some kind of UI to handle that, and arguably there’s no way it can be “as clean” as an older version of the OS that simply didn’t have that functionality.

Another example: AirPlay 2 allows you to cast to multiple speakers at the same time and adjust volumes individually. You can also send audio from one app to one speaker and a different app to a different speaker and still play audio on the phone itself. So, now the AirPlay interface has radio buttons and more volume sliders, and it has a way to change which device’s audio you are controlling, and it has to fit and make sense somehow.

When the iPhone started there was just one volume bar for everything, so of course that UI was more intuitive - but it was also far less capable.


"refuses to change". Is there an active team working on it or is this anthropomorphism of a software project?



