One comment about PIP/NPM packages - it's a very different level of threat
IMHO.

Installing and even running commands via PIP does not expose GITHUB_TOKEN
(and that is the real threat). At most it exposes the local build
environment to compromise for the duration of the build, but as long as
you are using GitHub Actions, the token is only available to the specific
steps that request it. As Vladimir noticed, GITHUB_TOKEN is only available
to steps where it is specified in the workflow's YAML file. It is not
available to a step (whether it is a bash command or a python command)
that does not have ${{ secrets.GITHUB_TOKEN }} in its configuration as an
environment variable or as part of the command. Any non-trivial action
that needs the token is different, because you have to pass the token
explicitly to make it work.
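To make that concrete, here is a minimal sketch of a workflow (the step
names and scripts are made up for illustration): only the step that
explicitly references the secret ever sees it.

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      # This step never sees GITHUB_TOKEN - pip (and anything it pulls in)
      # runs without the secret in its environment.
      - name: Install dependencies
        run: pip install -r requirements.txt
      # Only this step gets the token, because it is passed explicitly.
      - name: Publish status comment
        run: ./scripts/comment.sh   # hypothetical script
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```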
GitHub Actions runners are fully isolated and managed (and monitored)
explicitly by the GitHub team, so that each job is isolated (this is why
using self-hosted runners is not recommended for public repositories:
they do not provide the same level of isolation and monitoring).
See
https://docs.github.com/en/free-pro-team@latest/actions/hosting-your-own-runners/about-self-hosted-runners#self-hosted-runner-security-with-public-repositories
where they explain that.

Certainly it is possible for malicious packages to make arbitrary local
changes during the build via npm/pip injection (yes, they could even
modify the sources during the build). But those changes cannot then be
saved back to the repository.

Some targeted attacks exploiting that are of course possible - for example,
a malicious package could locally change the docker image before it is
pushed to the registry.
But there are also ways to mitigate that. You simply have to be careful
about what you do with the artifacts of the build (never share them with
"users"). We have a very good example of that in Airflow, where we do not
push images from GitHub to DockerHub (security is one of the reasons).
Instead, we build all the "tagged" versions of our images (the ones our
users can rely on as a reference) on DockerHub itself, and there we only
use plain bash scripts without installing any extra dependencies:
https://cwiki.apache.org/confluence/display/INFRA/Github+Actions+to+DockerHub.
That limits the threat to the packages that Airflow itself needs internally.

Also, one other thing we do: we run all PIP/NPM actions either as part of
a docker build or inside a docker container, which further isolates every
step of every action and limits the attack vector.
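As an illustration of that isolation, here is a minimal hypothetical
Dockerfile sketch (base image and file names are made up): the install
happens inside the image build, so a malicious package only runs in that
build context and never sees GITHUB_TOKEN or the runner's workspace.

```dockerfile
# Hypothetical sketch - dependencies are installed during the image build.
# Anything a compromised package does is confined to this build context;
# it cannot read the runner's secrets or modify the checked-out sources.
FROM python:3.9-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
```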


J.



On Wed, Dec 30, 2020 at 7:21 AM Paul King <pa...@asert.com.au> wrote:

> Just picking out one point below.
>
> On Wed, Dec 30, 2020 at 12:47 PM Greg Stein <gst...@gmail.com> wrote:
>
> > On Tue, Dec 29, 2020 at 8:08 PM Brennan Ashton <
> bash...@brennanashton.com>
> > wrote:
> > [...]
> > TBH I don't see how the threat surface here is that much different
> > > than pulling down
> > > packages from pypi to npm at build time.
> > >
> > And that is why those packages should be pinned and checksums verified,
> > too. Do people do that? Nope. Should they? Yup. (and Infra falls into the
> > "we could do better, too"; not casting stones)
> >
>
> Not for npm packages, but rather Maven repo artifacts, we have just started
> using
> Gradle's dependency verification mechanism[1]. It allows you to check
> checksums and
> signatures of all downloaded artifacts against an accepted list. You can
> think of this
> as double accounting to verify artifacts that make their way into our
> builds. Other
> projects using Gradle (version 6.2 and above) might also like to consider
> using that.
>
> Cheers, Paul.
> [1] https://docs.gradle.org/current/userguide/dependency_verification.html
>


-- 

Jarek Potiuk
Polidea <https://www.polidea.com/> | Principal Software Engineer

M: +48 660 796 129 <+48660796129>