CI is essential for an efficient development process (such as validation of pull requests) and to exercise atypical build configurations to avoid the accumulation of bitrot. Gambit still uses Travis and AppVeyor because those were the “obvious” free services when CI was added to the repo. There are more options now, but free services always come with some limitations. It is important that CI cover the main operating systems.
I’m open to changing the CI service, but I don’t have enough experience outside of Travis and AppVeyor to make an informed decision.
I recommend moving away from those first-generation services. Switching from Travis/AppVeyor to Cirrus and GitLab was like being jolted into the future: things are so much faster, simpler, and more versatile. This is *all* I needed to type to test on 9 platforms: https://github.com/lassik/upscheme/blob/master/.cirrus.yml.
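For context, the core of such a config is quite small. Here's a minimal sketch of a Cirrus task (the container image and build commands are illustrative placeholders, not Gambit's actual build):

```yaml
# Minimal Cirrus CI sketch: one Linux container task.
# The image and commands are illustrative placeholders.
task:
  name: linux-gcc
  container:
    image: gcc:latest          # any Docker Hub image works here
  configure_script: ./configure
  build_script: make
  test_script: make check
```

Each `*_script` key becomes a build step, and adding another `task` block adds another platform.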
The 1st-gen services will probably catch up at some point, though.
Here is my wish list of features:
- Support for Linux, macOS, and Windows is a minimum. Can other OSes and variants of Linux be supported too? Can other processor architectures be added (ARM, RISC-V, …)?
Cirrus CI has the widest OS support at the moment; it's the only one that has FreeBSD. Well, IIRC SourceHut has FreeBSD and OpenBSD, but I don't know how it works. In case Amirouche is reading, do you use it?
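For example, a FreeBSD task in Cirrus is just a different instance declaration; a hedged sketch (the image_family value is an assumption and may need updating to a current release):

```yaml
# Sketch of a Cirrus FreeBSD task; image_family is an assumption.
task:
  name: freebsd
  freebsd_instance:
    image_family: freebsd-13-2
  build_script: ./configure && make && make check
```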
I'm not aware of any CI services that offer architectures other than x86-64 out of the box. I guess GitLab and some others let you install a "runner" agent on your own ARM server. In that setup it becomes a bit like Jenkins, but presumably easier.
It's also possible to use QEMU in an x86-64 Linux job to emulate other OSes and architectures, but that may be slow and error-prone. I'd like to create a Docker-like multi-arch/multi-OS system based on QEMU. It would use a bare-bones binary image of each OS that has only the standard Unix utilities and a C compiler. The OS init script would fetch a build script from QEMU's host system via the TFTP protocol, run it, and then shut down the emulated system. If we gathered a substantial set of ready-to-use images for different operating systems, it would make it possible to run cross-platform builds in CI.
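As a very rough sketch of that idea expressed as a CI task (everything here is hypothetical: the image name, the TFTP layout, and the build script):

```yaml
# Hypothetical sketch of the QEMU idea.  netbsd-minimal.img and
# ci/build.sh are made-up names; QEMU's user-mode networking can
# serve files to the guest over its built-in TFTP server.
task:
  name: qemu-netbsd-x86_64
  container:
    image: ubuntu:22.04
  install_script: apt-get update && apt-get install -y qemu-system-x86
  emulate_script: |
    mkdir -p tftp-root && cp ci/build.sh tftp-root/
    # Boot the minimal image headless; its init script fetches
    # build.sh over TFTP, runs it, and powers the guest off.
    qemu-system-x86_64 -m 1024 -nographic \
      -drive file=netbsd-minimal.img,format=raw \
      -nic user,tftp=tftp-root
```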
I've used Jenkins at work but it's a bit of a mess. The nice thing about Docker (which most new CI services are increasingly based on) is that the OS is a clean slate on each build so you get more repeatable builds. If you have a build runner waiting on a server, that server tends to accumulate all kinds of cruft that can affect the build results.
- Must be free and offer fast feedback. Ideally the CI service would be hosted locally so we have more control over the allocated resources. We have a few machines in our lab that could be used for this, including multi-core Linux machines (one with 64 cores!), a macOS machine with 6 cores, and a Raspberry Pi 4.
Cool! GitLab CI allows installing their runner agent on your own server. I don't know about the others.
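As a sketch of how jobs could be routed to those lab machines (the tag names are hypothetical; they are whatever you choose when registering each runner):

```yaml
# Sketch of a .gitlab-ci.yml that routes jobs to self-hosted
# runners by tag.  Tag names are hypothetical and are assigned
# when each runner is registered on the corresponding machine.
build-linux-64core:
  tags: [linux-64core]
  script:
    - ./configure && make -j64 && make check

build-macos:
  tags: [macos-6core]
  script:
    - ./configure && make -j6 && make check

build-rpi4:
  tags: [rpi4]
  script:
    - ./configure && make -j4 && make check
```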
Jenkins is the classic choice in this kind of heterogeneous self-hosted environment, but setting it up is a chore. I've never met anyone with a clear understanding of a complex Jenkins setup :) People google things and beat at it until it works, most of the time. The nice thing about these "just write a .yml file and say which Docker image to use" services is that you don't have to host anything yourself, and you know what each one is doing since the Docker/Git workflow is a reproducible pipeline.
The script githooks/onpush.sh was intended for this but is currently suffering from bitrot… (can the CI service operations be part of the CI tests???? :-)
I guess different CI services can cross-check each other :) Or a master CI job can check the others. Depends a lot on the details. Most projects have the status badge to show when something is failing, as does Gambit.
It would be good to integrate the Gambit forensics benchmarking tool so we can also keep track of performance (http://udem-dlteam.github.io/gambit-forensics/). This is still a WIP… don’t believe the numbers shown yet.
Having a centralized CI configuration file or directory would make it easy to add new configurations.
Many projects now have a "ci" directory at the top level of their Git repo for that purpose.
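A common pattern is to keep each service's .yml trivial and have them all call the same scripts, e.g. (script paths hypothetical):

```yaml
# Sketch: a .cirrus.yml that delegates to scripts in a top-level
# ci/ directory, so other CI services can reuse the same scripts.
task:
  name: linux
  container:
    image: gcc:latest
  build_script: ci/build.sh
  test_script: ci/test.sh
```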
- As many backends as possible should be tested (see the sketch after this list):
  - the C backend is clearly a must
  - the universal backend, at least the JS target, which is currently the most useful
  - the native backends (x86, ARM, RISC-V)
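A hedged sketch of such a backend matrix in Cirrus (the BACKEND variable and make targets are hypothetical placeholders for Gambit's real build commands):

```yaml
# One task expanded into several via Cirrus's matrix modifier,
# one per backend.  BACKEND and the make targets are placeholders.
task:
  container:
    image: gcc:latest
  matrix:
    - env: { BACKEND: c }
    - env: { BACKEND: js }
    - env: { BACKEND: x86 }
  build_script: make backend-$BACKEND
  test_script: make test-backend-$BACKEND
```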
I wonder whether the free hosted CI services put limits on job size (amount of RAM/CPU or running time).
On a related subject, the Gambit bootstrap process has been causing issues lately. This suggests to me that I have to change my (our?) habit of committing directly to master, and instead work only with pull requests that are merged into master once the CI indicates that the bootstrap hasn’t broken. That will make for a smoother build experience for Gambit users.
Would it make sense for the CI job to run tests, and publish a "latest stable snapshot" from each build that passes all tests? People who follow Gambit development could then download those snapshots.
For example, if Gambit's bootstrap depends on a "gsc-boot" binary, there could be a download URL where we can always get the latest stable "gsc" binary to use as "gsc-boot" for building the master branch. This binary download could be provided for every platform that the CI builds; at least Linux and macOS to get started with.
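In Cirrus, for example, a task can publish artifacts, and the latest successful build's artifacts are reachable at a stable per-branch URL. A hedged sketch (the task name and binary path are assumptions about Gambit's layout):

```yaml
# Sketch of publishing a gsc binary as a CI artifact.  The task
# name and binary path are assumptions about Gambit's layout.
task:
  name: snapshot-linux
  container:
    image: gcc:latest
  build_script: ./configure && make
  test_script: make check
  gsc_artifacts:
    path: gsc/gsc
```

A bootstrap script could then fetch the binary from a URL of the form https://api.cirrus-ci.com/v1/artifact/github/&lt;owner&gt;/&lt;repo&gt;/&lt;task&gt;/&lt;artifacts&gt;/&lt;path&gt;?branch=master (check the Cirrus docs for the exact format) and use it as "gsc-boot".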