Jordan Lund: Mozharness now lives in Gecko
Continuous-integration and release jobs that use Mozharness will now get Mozharness from the Gecko repo that the job is running against.
Whether the job is a build (which requires a full gecko checkout) or a test (which only requires a Firefox/Fennec/Thunderbird/B2G binary), automation will first grab a copy of Mozharness from the gecko tree, even before checking out the rest of the tree. This effectively minimizes changes to our current infrastructure.
This is thanks to a new relengapi endpoint, Archiver, and hg.mozilla.org's subdirectory archiving abilities. Essentially, Archiver grabs a tarball of Mozharness from a target gecko repo, revision, and subdirectory, and uploads it to Amazon's S3.
What's nice about Archiver is that it is not restricted to just grabbing Mozharness. You could, for example, put https://hg.mozilla.org/build-tools in the Gecko tree or, improving on our tests.zip model, simply grab subdirectories from within the testing/* part of the tree and request them on a suite-by-suite basis.
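To give a feel for the hgweb capability Archiver builds on, here is a minimal sketch of fetching a subdirectory archive straight from hg.mozilla.org. The archive/<rev>.tar.gz/<subdir> URL pattern and the testing/mozharness path are my assumptions for illustration; Archiver handles this (plus the S3 caching) for you.

# Sketch: fetch a subdirectory archive directly from hg.mozilla.org.
import io
import tarfile
import urllib.request

REPO = "https://hg.mozilla.org/releases/mozilla-beta"
REV = "93c0c5e4ec30"
SUBDIR = "testing/mozharness"  # assumed in-tree location of Mozharness

url = f"{REPO}/archive/{REV}.tar.gz/{SUBDIR}"
with urllib.request.urlopen(url) as resp:
    data = resp.read()

# Unpack into ./mozharness-archive; hgweb archives typically prefix
# members with a "<repo>-<rev>/" top-level directory.
with tarfile.open(fileobj=io.BytesIO(data), mode="r:gz") as tar:
    tar.extractall("mozharness-archive")
print("unpacked", url)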
How does this affect you? It depends. If you are...
1) developing on Mozharness
You will need to check out gecko, and patches will now land like any other gecko patch: 1) land on a development tree branch (e.g. mozilla-inbound), then 2) ride the trains.
2) just needing to deploy Mozharness or get a copy of it without gecko
As the Archiver usage docs linked above describe, you could hit the API directly, but I recommend using the client that buildbot uses. The client will wait until the API call is complete, download the archive from the location in the response, and unpack it to a specified destination.
Let's take a look at that in action: say you want to download and unpack a copy of mozharness based on mozilla-beta at 93c0c5e4ec30 to some destination.
python archiver_client.py mozharness --repo releases/mozilla-beta --rev 93c0c5e4ec30 --destination /home/jlund/downloads/mozharness
Note: if that was the first time Archiver was polled for that repo + rev, it might take a few seconds, as it has to download Mozharness from hgmo and then upload it to S3. Subsequent calls will happen near-instantly.
Note 2: if your --destination path already exists with a copy of Mozharness or something else in it, the client won't rm that path; it will merge into it (just as unpacking a tarball would).
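If you are scripting a deployment, one option is to shell out to that same client. Here is a minimal sketch under a couple of assumptions: archiver_client.py is present locally at the path given, and the optional clean-up step is my own addition (per the note above, the client merges into an existing destination rather than removing it). Only the subcommand and flags shown in the example above are taken from the source.

# Sketch: drive archiver_client.py from a deployment script.
import shutil
import subprocess
import sys

CLIENT = "archiver_client.py"  # assumed local path to the client script

def deploy_mozharness(repo, rev, destination, clean=False):
    if clean:
        # Optional: start from an empty destination instead of merging.
        shutil.rmtree(destination, ignore_errors=True)
    cmd = [
        sys.executable, CLIENT, "mozharness",
        "--repo", repo,
        "--rev", rev,
        "--destination", destination,
    ]
    # The client blocks until Archiver has the archive, then downloads
    # and unpacks it, so a zero exit code means Mozharness is in place.
    subprocess.check_call(cmd)

if __name__ == "__main__":
    deploy_mozharness("releases/mozilla-beta", "93c0c5e4ec30",
                      "/home/jlund/downloads/mozharness", clean=True)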
3) a Release Engineering service that is still using hg.mozilla.org/build/mozharness
Not all Mozharness scripts are used for continuous-integration / release jobs. There are a number of Releng services that are based on Mozharness, e.g. Bumper, vcs-sync, and merge_day. Until these services transition to using Archiver, they will continue to use hgmo/build/mozharness as the Repository of Record (RoR).
If certain services cannot use gecko-based Mozharness, we can fork Mozharness and set up a separate repo. That would of course mean such services won't receive upstream changes from the gecko copy, so we should avoid this where possible.
If you are an owner or major contributor to any of these releng services, we should meet and talk about such a transition. Archiver and its client should make deployments pretty painless in most cases.
If you want to move something into a larger repository, or be able to pull something out of such a repository for lightweight deployments, feel free to chat with me about Archiver and Relengapi.
As always, please leave your questions, comments, and concerns below.
http://jordan-lund.ghost.io/mozharness-goes-live-in-the-tree/