There are also other external services we integrate with from GitHub, like Netlify for automated website rebuilds, which may not work properly (if at all) with other services/webhooks/etc. This would also need to be investigated.
Found two potential ways to deploy to Netlify from Codeberg:
There's a comparison between Gitea Actions and GitHub Actions, but IDK how much Forgejo Actions (what Codeberg uses) have diverged since it forked away from Gitea:
Forgejo documentation does mention this:
The syntax and semantics of the workflow files will be familiar to people used to GitHub Actions, but they are not and will never be identical.
One additional thing to note: Codeberg directly maintains Forgejo. We can contribute fixes for things that don't work as well as we'd like now.
Codeberg also supports the AGit flow, which aligns more closely with Gerrit's way of working, in addition to "normal" pull requests.
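For context, the AGit flow opens a pull request directly from a `git push` to a Gerrit-style "magic" ref, without needing a fork; a sketch (the branch, topic, and title below are made up):

```shell
# Push the current branch to refs/for/<target-branch>; Forgejo creates
# (or updates) a pull request against "main" instead of pushing a branch.
# "topic" identifies the PR across repeated pushes; "title" is optional
# and only used when the PR is first created.
git push origin HEAD:refs/for/main -o topic=my-change -o title="Fix the frobnicator"
```

Pushing again with the same topic updates the existing PR, which is what makes it feel like Gerrit's patchset workflow.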
A self-hosted runner is probably needed on Codeberg.
Do you mean Codeberg accepting our SSO?
Or our SSO accepting Codeberg?
Both would probably be nice to have, though the former probably needs support from Codeberg.
Yep, one of them was previously involved with Haiku and is pretty friendly to the project, or so I've heard. :^)
I'd be happy to provide support and/or Haiku-specific advice regarding what would work well and what wouldn't. You know where to find me.
It's a heavily requested feature by other, much bigger orgs as well; it's planned, but does not exist yet.
I would not suggest building the entirety of Haiku there, but there is both a Forgejo Actions runner and a Woodpecker CI instance (to which you have to request access under Codeberg-e.V./requests: Request what you need for your project: CI access, more repositories, storage, etc. - Codeberg.org). One thing you could do to prevent vendor lock-in and smooth out a transition (or prepare for any future one) is to move as much functionality as possible into scripts, and then have the CI pipelines just execute those scripts.
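As a sketch of that scripts-first approach (the file name and build steps below are hypothetical, not Haiku's actual build commands), the forge-specific workflow file reduces to a one-line step that runs something like `ci/build.sh`, which also works unchanged locally, on Woodpecker, or on GitHub Actions:

```shell
#!/bin/sh
# Hypothetical ci/build.sh: all build logic lives here rather than in a
# forge-specific workflow file, so each CI system only needs to run it.
set -eu

arch="${1:-x86_64}"   # target architecture, passed in by the pipeline

echo "configuring for $arch"
# ./configure --cross-tools-prefix "$arch"   # placeholder for real steps
echo "building"
# jam -q @nightly-anyboot                    # placeholder for real steps
echo "done: $arch"
```

The workflow file on any given forge then just checks out the repo and runs `ci/build.sh <arch>`, which keeps the forge-specific surface area minimal.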
I actually wrote docs on this topic specifically for situations like this - received mixed feedback, would appreciate improvements if possible: Integrating with Keycloak | Codeberg Documentation
Seems like a vendor lock-in situation; I'd suggest looking into other approaches so as to prevent history from repeating itself (but with a more community-oriented single point of failure - vendor lock-in can still happen, albeit "unintentionally"). Forgejo itself mirrors its repository to https://code.forgejo.org and makes use of Forgejo's registry feature on its own domain there, so it retains some degree of independence from the Codeberg.org domain: Packages - forgejo/forgejo - Forgejo: Beyond coding. We forge.
You don't (and perhaps shouldn't) have to use every single offering when "moving" (or sacrifice a lot of convenience when doing so - e.g. you can just use an external issue tracker, or even disable pull requests).
Recently found out about this bit in Codeberg's Terms of Use:
§ 2 (3) Forks, migrations and testing repos are considered as inactive when they don't contain unique contributions and are inactive for more than a month. They shouldn't be kept for a prolonged amount of time, and thus might be removed after notifying maintainers and providing a 90 days period to ask for preservation.
Is this expected with most Git forge hosters and is this going to be a problem for any Haiku repos in theory?
I don't see why this could cause any problems for Haiku.
They only say that, if you fork a repository and don't work on it (no unique contributions = nothing you did yourself on top of the forked repo) and just let it collect dust for more than a month, they may notify you, and if you don't react for another 90 days, they may remove it.
Looks totally reasonable to me, considering that forks of large repositories cost a lot of storage space (= money), and if they don't contain any changes, that's wasted money.
If you're working on a fork of a Haiku repo, there's nothing to fear.
If you've forked it just in case, it may get removed to free up disk space, but you can simply fork it again later when you really need it.
I was thinking of what might happen if, say, HaikuArchives is migrated to Codeberg, where many repos remain inactive for years.
There is work on GitHub from some contributors that waited for some years before we could get to it, even for Haiku itself. I hope we have caught all these things and migrated them to Gerrit now (I'm thinking of things like the BFS resizing changes).
But it says "don't contain unique contributions", so it looks like no data would be lost in that case. As soon as you have made at least one commit, this doesn't apply.
Once a month, just touch the README, then commit and push
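One wrinkle with the touch-the-README idea: git tracks content, not timestamps, so `touch` alone leaves nothing to commit. An empty commit works as a keepalive instead; a minimal, self-contained sketch using a throwaway repo (in practice you'd run the commit and a `git push` in your actual fork):

```shell
#!/bin/sh
set -eu
# Throwaway repo purely for demonstration.
repo=$(mktemp -d)
cd "$repo"
git init -q
# --allow-empty records a commit even though no files changed,
# which is enough to count as activity.
git -c user.name=demo -c user.email=demo@example.org \
    commit -q --allow-empty -m "keepalive"
git log --oneline
```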
From my understanding, that still wouldn't be an issue at all.
The ToS section is only about forks that have never seen any work on them, at least that's my understanding.
So if you upload original work (archived Haiku applications that are not forked from any other Codeberg repo), you're fine, even if you don't update them regularly.
wears Codeberg hat again
There are a bunch of policies that are not communicated as clearly as they should be. The context is that forks are basically full copies of a repository, instead of everything being a "branch" per se, like on GitHub. It's one of the reasons why we might encourage people to use the Gerrit-like AGit workflow for "drive-by contributions".
Forks are very often used as a sort of "bookmarking mechanism", to preserve a "snapshot" of a repository. In practice, this might not work very well (on Codeberg, if a repository is made private, this also affects forks; on GitHub, if a repository gets DMCA'd, the party taking down the repository can easily include forks as well, AFAIK).
It would make zero sense to take down repositories based on an expectation that developers actively work on them. Such actions are manual; they would affect storage-consuming forks (i.e. "empty" forks of very big projects) that don't really advance the goals stated in our Bylaws (basically "advancing and supporting free and open-source software"), rather than merely inactive repositories, which additionally explicitly state that they are part of an "Archive".
Yeah. Originally we were on docker.io for all of our infrastructure containers, but when Docker briefly did the "no more free container repos" thing, we swapped over to ghcr.io.
We kind of abuse container hosts and upload quite a bit of data (the tool-chain builder is several GiB of GCC toolchains for "all of our architectures").
Container registries on DigitalOcean charge based on size, so I haven't been excited to jump onto that (even though it would be "better", since it would be closer to our clusters).
quay.io used to be my resident backup plan… until IBM bought Red Hat.