Discussion about moving Haiku repositories from Github to Codeberg

There are also other external services we integrate with from GitHub, like Netlify for automated website rebuilds, which may not work properly (if at all) with other services/webhooks/etc. This would also need to be investigated.

Found two potential ways to deploy to Netlify from Codeberg:
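One common pattern for the CI route is a pipeline step that pings a Netlify build hook on every push; Netlify build hooks accept an empty POST and queue a site rebuild. A sketch (the hook ID would be created in Netlify’s site settings and stored as a repository secret; `NETLIFY_BUILD_HOOK` is a made-up secret name):

```
# In a Forgejo Actions / Woodpecker step, after a push to the website repo:
curl -s -X POST "https://api.netlify.com/build_hooks/$NETLIFY_BUILD_HOOK"
```

Since the hook is just an HTTP endpoint, this part of the integration is forge-agnostic; only the secret storage and trigger syntax differ between CI systems.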

There’s a comparison between Gitea Actions and GitHub Actions, but I don’t know how much Forgejo Actions (which is what Codeberg uses) has diverged since Forgejo forked away from Gitea:

Forgejo documentation does mention this:

The syntax and semantics of the workflow files will be familiar to people used to GitHub Actions but they are not and will never be identical.

One additional thing to note: Codeberg directly maintains Forgejo, so we can contribute improvements for things that don’t work as well as we’d like right now.

Codeberg also supports the AGit flow, which aligns more closely with Gerrit’s way of working, in addition to “normal” pull requests.
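For reference, the AGit flow lets you send a change for review with a plain `git push`, much like Gerrit’s `refs/for/` convention, without creating a fork first. A sketch (branch and topic names are made up; requires a Forgejo/Gitea server with AGit support):

```
# Push the current branch for review against "main"; the server opens
# (or updates) a pull request from this push instead of requiring a fork.
git push origin HEAD:refs/for/main -o topic=fix-build -o title="Fix the build"
```

Subsequent pushes with the same topic update the existing pull request, similar to pushing a new patch set in Gerrit.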

A self-hosted runner is probably needed on Codeberg.

Do you mean Codeberg accepting our SSO?

Or our SSO accepting Codeberg?

Both would probably be nice to have, though the former probably needs support from Codeberg.

Yep, one of them was previously involved with Haiku and is pretty friendly to the project, or so I’ve heard. :^)

I’d be happy to provide support and/or Haiku-specific advice regarding what would work well and what wouldn’t. You know where to find me. :slight_smile:

It’s a heavily requested feature by other, much bigger orgs as well; it’s planned, but does not exist yet.

I would not suggest building the entirety of Haiku there, but there is both a Forgejo Actions runner and a Woodpecker CI instance (you have to request access via Codeberg-e.V./requests: Request what you need for your project: CI access, more repositories, storage, etc. - Codeberg.org). One thing you could do to prevent vendor lock-in or smooth out a transition (or prepare for any future transition) is to move as much functionality as possible into scripts, and then execute those scripts from the CI pipelines.
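The “logic in scripts, thin CI” idea above might look like this (file name and steps are hypothetical): a script holds the build logic, and the CI configuration only invokes it, so the same script runs unchanged on Forgejo Actions, Woodpecker, GitHub Actions, or a developer’s machine:

```shell
#!/bin/sh
# ci/build.sh (hypothetical) -- all build logic lives here, not in the
# forge-specific CI configuration, so switching forges only means
# changing one "run" line in the pipeline definition.
set -eu

step() { printf '==> %s\n' "$1"; }

step "configure"
# ./configure --build-cross-tools ...   (project-specific, elided)
step "build"
# jam -q ...                            (project-specific, elided)
step "done"
```

The forge’s pipeline definition then reduces to a single command such as `sh ci/build.sh`, whichever CI system ends up running it.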

I actually wrote docs on this topic specifically for situations like this - it received mixed feedback, so I’d appreciate improvements if possible: Integrating with Keycloak | Codeberg Documentation

Seems like a vendor lock-in situation; I’d suggest looking into other approaches so as to prevent history from repeating itself (but with a more community-oriented single point of failure - vendor lock-in can still happen, albeit “unintentionally”). Forgejo itself mirrors its repository to https://code.forgejo.org and makes use of Forgejo’s registry feature on its own domain there, so it retains some degree of independence from the Codeberg.org domain: Packages - forgejo/forgejo - Forgejo: Beyond coding. We forge.

You don’t have to (and perhaps shouldn’t) use every single offering when “moving”, nor sacrifice a lot of convenience when doing so - e.g. you can just use an external issue tracker, or even disable pull requests.

Recently found out about this bit in Codeberg’s Terms of Use:

§ 2 (3) Forks, migrations and testing repos are considered as inactive when they don’t contain unique contributions and are inactive for more than a month. They shouldn’t be kept for a prolonged amount of time, and thus might be removed after notifying maintainers and providing a 90 days period to ask for preservation.

Is this expected with most Git forge hosters and is this going to be a problem for any Haiku repos in theory?

I don’t see why this could cause any problems for Haiku.
They only say that, if you fork a repository and don’t work on it (no unique contributions = nothing you did yourself on top of the forked repo) and just let it collect dust for more than a month, they may notify you, and if you don’t react for another 90 days, they may remove it.
Looks totally reasonable to me, considering that forks of large repositories cost a lot of storage space (= money), and if they don’t contain any changes, that’s wasted money.
If you’re working on a fork of a Haiku repo, there’s nothing to fear.
If you’ve forked it just in case, it may get removed to free up disk space, but you can simply fork it again later when you really need it.

Was thinking of what might happen if say, HaikuArchives is migrated to Codeberg wherein many repos remain inactive for years.

There is work on GitHub from some contributors that waited for some years before we could get to it, even for Haiku itself. I hope we have caught all these things and migrated them to Gerrit by now (I’m thinking of things like the BFS resizing changes).

But it says “don’t contain unique contributions”, so it looks like no data would be lost in that case. As soon as you have made at least one commit, this doesn’t apply.

Once a month, just touch the README, then commit and push :smiley:

From my understanding, that still wouldn’t be an issue at all.
The ToS section is only about forks that have never seen any work on them, at least that’s my understanding.
So if you upload original work (archived Haiku applications that are not forked from any other Codeberg repo), you’re fine, even if you don’t update them regularly.

wears Codeberg hat again

There are a bunch of policies that are not communicated as clearly as they should be. The context is that forks are basically full copies of a repository – instead of everything being a “branch” per se, like on GitHub. It’s one of the reasons why we might encourage people to use the Gerrit-like AGit workflow for “drive-by contributions”.

Forks are very often used as some sort of “bookmarking mechanism”, so as to preserve a “snapshot” of a repository, so to speak. In practice, this might not work very well (on Codeberg, if a repository is made private, this also affects forks; on GitHub, if a repository gets DMCA’d, the party taking down the repository can also easily include forks, AFAIK).

It would make zero sense to take down repositories based on an expectation of developers to actively work on them. Such actions are manual – they would affect storage-consuming forks (i.e. “empty” forks of very big projects) that don’t really advance the goals stated in our Bylaws (basically “advancing and supporting free and open-source software”), rather than… merely inactive repositories, which, additionally, explicitly state that they are part of an “Archive”.


Yeah. Originally we were on docker.io for all of our infrastructure containers, but when docker briefly did the “no more free container repos” thing, we swapped over to ghcr.io.

We kind of abuse container hosts and upload quite a bit of data (the toolchain builder is several GiB of GCC toolchains for “all of our architectures”).

DigitalOcean’s container registry charges based on size, so I haven’t been excited to jump onto that (even though it would be “better”, since it would be closer to our clusters).

quay.io used to be my resident backup plan… until IBM bought RedHat.

:sob: