arjenstens · 2024-09-30

Original long-form article: https://habla.news/a/naddr1qvzqqqr4gupzpwa4mkswz4t8j70s2s6q00wzqv7k7zamxrmj2y4fs88aktcfuf68qqnxx6rpd9hxjmn894j8vmtn94nx7u3ddehhxarj943kjcmy94cxjur9d35kuetn0uz742

CI/CD pipelines are great tools for the development cycle of an application; however, they’re also very much centralized on platforms like GitHub and GitLab. In this blog I’m exploring the idea of decentralizing these tools using Nostr, and how I think DVMs will be the way forward. But first, let’s explore the problem itself…

Current situation

Let’s establish a baseline first. The average CI/CD pipeline looks something like this, where the black steps (⚫) represent the CI part and the green steps (🟢) the CD part.

CI (⚫)

Pretty much any build starts with a commit being made on a project. That triggers the code to be pulled by a build agent. That code then gets fed to a Runner that will build it and produce some kind of artifact as an output. That artifact is then pushed to a repository appropriate for the kind of project: a Docker container might be pushed to Docker Hub, a TypeScript library might be pushed to NPM, and so on. That concludes the CI part of the pipeline.

CD (🟢)

While the CI part of CI/CD is often executed for every commit, the CD part is not. Most likely the CD part is only done when you want to release a new version of your software, either manually or automatically when you push to the main branch.

Here the pipeline will download whatever artifact has been produced from the artifact store, such as Docker Hub. It will also fetch some configuration necessary to boot the application. Lastly, the artifact is booted with that configuration. CD pipelines tend to be much more customized to the needs of the person or organization running the artifact.
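To make the baseline concrete, here is a minimal TypeScript model of the stages described above (all step names are illustrative, not part of any standard):

```typescript
// A minimal model of the baseline pipeline described above.
// All names here are illustrative, not part of any standard.
type StageKind = "ci" | "cd";

interface PipelineStep {
  name: string;
  kind: StageKind;
}

const pipeline: PipelineStep[] = [
  { name: "pull-source",      kind: "ci" }, // triggered by a commit
  { name: "build-artifact",   kind: "ci" }, // Runner builds the code
  { name: "push-artifact",    kind: "ci" }, // e.g. to Docker Hub or NPM
  { name: "fetch-artifact",   kind: "cd" }, // download from the artifact store
  { name: "fetch-config",     kind: "cd" }, // configuration needed to boot
  { name: "boot-application", kind: "cd" }, // run the artifact with that config
];
```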

📜 Facts & Assumptions

  • This system works regardless of whether or not the maintainer is online.
  • You need authorization to push to most artifact stores.
  • You trust the entity (company) that runs the build agents to:
    • keep login credentials safe.
    • not be compromised and inject malicious code or push malicious artifacts in your name.
    • not deplatform your project for political reasons.
  • Each step in the process is executed by the same agent, or at least, in the case of multiple agents, they share state.
  • You trust the artifact store to store your build output and not alter the contents.
  • The person pulling the artifact for deployment trusts the artifact store to give them the correct file.

⚠️ The problem

Most people like the convenience of automated CI/CD pipelines so they don’t have to build, package and distribute every version of their software manually. But when dealing with sensitive software like BTC wallets or privacy tools, entrusting centralized entities with the building and distribution of that software can pose a big risk. Moving into the future, this risk will only increase over time. Even if the entity is friendly towards these projects, it can be forced to take action against any project.

The alternative is to build software manually anyway, or to self-host these automation tools, which can be cumbersome and lacks transparency when collaborating on a project. This approach also relies heavily on the presence of a project owner/maintainer. At the same time, the machines of targeted developers risk being compromised, and therefore risk distributing malicious software as well.

🌖 The goal

What if we could achieve the convenience and transparency of automated CI/CD pipelines, but without the risks of centralized entities?

I think Nostr gives us a framework to help us achieve that and even reduce risks of supply-chain attacks in the process.

So the goal is to:

  • Run automated software builds without risk of compromised build agents or developer computers.
  • Store the output artifacts of builds in immutable, verifiable, decentralized storage.
  • Sign and distribute software to users without chokepoints.

🧩 What to work on?

So there are four steps on the way from code to a running application. Each of these aspects needs to be accounted for to achieve the goal:

  1. Source code collaboration
  2. Build/test processes
  3. Storage of artifacts
  4. Discovery of artifacts (artifact/app stores)

Let’s get into them:

  1. This is already covered by other Nostr projects that implement NIP-34 (git over Nostr), so we won’t focus on that.
  2. The build and test process has no solution known to me, so that needs the most work.
  3. Storage of artifacts is covered by Blossom, which gives us decentralized, hash-based file storage (see the upload sketch below), but I don’t yet know when and by whom (npub) during the build process the file(s) should be uploaded. There also seem to be no current projects that act as artifact stores over Nostr for Docker images, NPM packages, etc…
  4. This is already being worked on with projects like zap.store, so we won’t focus on that either.

That means I’m focusing on numbers 2 and 3.
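As a rough sketch of what point 3 could look like in practice: Blossom addresses blobs by their SHA-256 hash, and uploads are authorized with a signed Nostr event (kind 24242 per the Blossom spec). A minimal TypeScript sketch using nostr-tools, with a hypothetical server URL:

```typescript
import { finalizeEvent } from "nostr-tools/pure";
import { sha256 } from "@noble/hashes/sha256";
import { bytesToHex } from "@noble/hashes/utils";

// Hypothetical server URL; any Blossom-compatible server would do.
const BLOSSOM_SERVER = "https://blossom.example.com";

async function uploadArtifact(artifact: Uint8Array, secretKey: Uint8Array) {
  const hash = bytesToHex(sha256(artifact));

  // Blossom (BUD-02) authorizes uploads with a signed kind-24242 event.
  const auth = finalizeEvent({
    kind: 24242,
    created_at: Math.floor(Date.now() / 1000),
    content: "Upload build artifact",
    tags: [
      ["t", "upload"],
      ["x", hash], // the blob is addressed by its SHA-256 hash
      ["expiration", String(Math.floor(Date.now() / 1000) + 600)],
    ],
  }, secretKey);

  const res = await fetch(`${BLOSSOM_SERVER}/upload`, {
    method: "PUT",
    headers: { Authorization: `Nostr ${btoa(JSON.stringify(auth))}` },
    body: artifact,
  });
  return res.json(); // blob descriptor: url, sha256, size, ...
}
```

Because the blob is addressed by its hash, anyone who pulls the artifact can verify it locally, which is exactly the property an artifact store in this design needs.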

🧭 Possible approaches

Up until now I have mostly described the problem and where we want to go, but haven’t gone into the HOW yet. The following are my initial ideas on how to approach this problem and reach our goal. Please share your feedback, questions and ideas with me so we can discuss them.

Use of DVMs

First and foremost, I believe Data Vending Machines (DVMs), as defined in NIP-90, will play a very important role in providing the compute for these decentralized pipelines.

The open market of DVMs should be embraced, and there should be many DVM vendors offering to execute specific tasks within a pipeline in the best way possible. There should be DVMs competing on the fastest build times, the cheapest builds, or any other metric deemed important to the customer.

Utilizing a variety of vendors offering competing DVMs can also enhance security in the process.
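To give an idea of the building blocks, a build job could be expressed as a NIP-90 job request. A hedged sketch in TypeScript; note that kind 5600 is a made-up placeholder, since no job kind for software builds has been standardized:

```typescript
import { generateSecretKey, finalizeEvent } from "nostr-tools/pure";

const secretKey = generateSecretKey(); // the customer's key (demo only)

// NIP-90 job requests use kinds 5000-5999. Kind 5600 is a made-up
// placeholder here; no kind for software builds has been standardized.
const buildRequest = finalizeEvent({
  kind: 5600,
  created_at: Math.floor(Date.now() / 1000),
  content: "",
  tags: [
    // Input for the job: the repo and commit to build (illustrative values).
    ["i", "https://example.com/repo.git#a1b2c3d", "url"],
    ["output", "application/octet-stream"],
    ["bid", "50000"], // max price in millisats the customer will pay
    ["relays", "wss://relay.example.com"],
  ],
}, secretKey);

// Per NIP-90, a DVM responds with a result event of kind
// request.kind + 1000 (6600 here) that e-tags this request.
console.log(buildRequest.id);
```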

[!NOTE] Whatever implementation of these pipelines is developed should not impose strict requirements on DVMs specialized in a certain task.

1) The Naive approach

The easiest and most straightforward solution is to have each DVM request the next DVM in line when it’s done performing its task, all the way to the end of the flow. This will probably be fine for low-stakes scenarios like running unit tests or running some (AI) code analysis.

| Pros | Cons |
| --- | --- |
| Easy to implement | You need to trust every DVM in the chain to select a trustworthy DVM for the next step |
| Good fit for low-stakes tasks (running unit tests/analysis) | A compromised DVM can gain full control over the rest of the process |
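The chaining logic boils down to something like the following sketch, reusing the hypothetical job kinds from earlier: when a DVM finishes, it publishes its result and then issues the request for the next step itself.

```typescript
import { finalizeEvent } from "nostr-tools/pure";
import { Relay } from "nostr-tools/relay";
import type { Event } from "nostr-tools";

// Illustrative, non-standardized job kind for the next step.
const DEPLOY_JOB_KIND = 5601;

// Naive chaining: after finishing its build, a DVM publishes its NIP-90
// result AND decides which DVM performs the next step. This is the weak
// spot: a compromised DVM controls everything downstream.
async function onBuildFinished(
  relay: Relay,
  request: Event,       // the job request this DVM just served
  artifactHash: string, // sha256 of the artifact it produced
  secretKey: Uint8Array,
) {
  const now = Math.floor(Date.now() / 1000);

  // 1. Result event: per NIP-90, kind = request kind + 1000.
  await relay.publish(finalizeEvent({
    kind: request.kind + 1000,
    created_at: now,
    content: artifactHash,
    tags: [["e", request.id], ["p", request.pubkey]],
  }, secretKey));

  // 2. Kick off the next step of the pipeline ourselves.
  await relay.publish(finalizeEvent({
    kind: DEPLOY_JOB_KIND,
    created_at: now,
    content: "",
    tags: [["i", artifactHash, "text"]],
  }, secretKey));
}
```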

2) Using a ‘router’ DVM

Credits for this idea go to Dustin (npub1mgv…pdjc). To prevent any arbitrary DVM in the chain from making bad decisions on which DVM to run next, we can task one DVM with overseeing the process instead. We give it a mandate to execute all the steps of the pipeline. That way you don’t have to put full trust in every DVM you might use for your pipeline.

| Pros | Cons |
| --- | --- |
| Relatively easy to implement | The router DVM becomes the weak link |
| Reduces trust to one DVM instead of having to fully trust the whole chain | A compromised router can gain full control over the process |
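A rough sketch of the router idea: a single orchestrator DVM holds the pipeline definition and dispatches each step to a worker DVM in turn. Everything here (the kinds, the runJob helper) is hypothetical:

```typescript
// Hypothetical pipeline definition handed to the router DVM; the job
// kinds are the same made-up placeholders used earlier.
const pipeline = [
  { kind: 5600, name: "build" },
  { kind: 5601, name: "deploy" },
];

// Hypothetical helper: publishes a NIP-90 request of the given kind and
// resolves with the content of the matching kind + 1000 result event.
declare function runJob(kind: number, input: string): Promise<string>;

// The router executes the steps sequentially, feeding each step's output
// into the next. It is the single point of trust in this design.
async function runPipeline(input: string): Promise<string> {
  let current = input;
  for (const step of pipeline) {
    current = await runJob(step.kind, current);
  }
  return current;
}
```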

3a) Spreading the mandate

The previous solutions still leave us with the risk of a single DVM being compromised and producing malicious output, or executing the next step on another compromised DVM.

What if we counter that by adding more ‘eyes’ to the task? Instead of having one DVM perform a task, we choose 3 or more unrelated DVMs that can execute the same task, and have the human(oid) give each of them a piece of a multisig Nostr nsec. Then, when all three DVMs produce an output, they have to check each other’s work and together sign the request for the next DVM in the chain to execute. That way, if one of the selected DVMs goes rogue, it cannot make any decisions on its own because it doesn’t have full access to the nsec.

This checking mechanism can be especially useful for verifying that the artifact of a build step is identical across several DVMs. That way you know that your build hasn’t been tampered with.

In practice this setup means there has to be some back and forth between the DVMs to reach an agreement, which can be tricky to implement.
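The cross-check itself can be kept simple: before any key share is used, the parallel DVMs compare the hashes of what they built. A minimal sketch, assuming the build is reproducible so that honest DVMs produce byte-identical output:

```typescript
// Each parallel DVM reports the sha256 of the artifact it built.
interface BuildReport {
  dvmPubkey: string;   // hex pubkey of the reporting DVM
  artifactHash: string;
}

// Only if ALL independent builds agree on the same hash should the
// DVMs proceed to co-sign the request for the next step. This relies
// on the build being reproducible (byte-identical output).
function buildsAgree(reports: BuildReport[], minReports = 3): boolean {
  if (reports.length < minReports) return false;
  return reports.every(r => r.artifactHash === reports[0].artifactHash);
}
```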

[!NOTE] For the sake of simplicity, I drew the Build & Deployment DVM steps as a single DVM. Ideally you would also run this one multiple times in parallel, just like the GIT Watcher.

| Pros | Cons |
| --- | --- |
| A single compromised DVM cannot influence the process | Complex, hard to implement |
| Instead of trusting a single DVM, you trust a group of DVMs not to be fully compromised | Requires each DVM to know how to handle the consensus logic |
| Can also be used to send Cashu funds down the chain to pay for DVM requests | |

3b) Spreading the mandate + abstracting complexity

A big issue with the previous setup is that the DVMs that execute the specialized task, like running the build, now also have to deal with all this consensus logic. This would be a big burden on the people developing DVMs and would probably result in fewer DVMs being built that are compatible with these pipelines. To get around this, a wrapper DVM could be added that handles the multisig and perhaps adapts the input and output to make it work with the underlying DVM. This could, however, introduce some new trust challenges, but they can probably be contained within the logic around the wrappers.

| Pros | Cons |
| --- | --- |
| Specialized DVMs don’t have to implement consensus logic | Complex to design/build the wrapper DVM logic |
| | The wrapper DVM introduces new trust challenges |

💥 Other Challenges

Software signing / Manual approvals

There will be scenarios where, somewhere during a pipeline, the approval of a human is required. This is the case for signing an APK, and likely also for uploading artifacts to an artifact store. This process will require the author/maintainer to sign some messages.
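One way this could look on Nostr: the maintainer publishes a signed event attesting to the exact artifact hash, and the pipeline blocks until that attestation appears. Purely hypothetical, since no event kind for manual approvals is standardized:

```typescript
import { finalizeEvent } from "nostr-tools/pure";

// Hypothetical approval event: kind 5609 is made up, as no kind for
// manual pipeline approvals is standardized. The pipeline would block
// until this attestation from the maintainer appears on the relays.
function signApproval(artifactHash: string, maintainerKey: Uint8Array) {
  return finalizeEvent({
    kind: 5609,
    created_at: Math.floor(Date.now() / 1000),
    content: "approved",
    tags: [["x", artifactHash]], // approval is bound to this exact hash
  }, maintainerKey);
}
```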

On Web of Trust & Policies

I think WoT will eventually become an integral part of curating which DVMs can execute certain parts of your pipelines. Depending on the project and the risks associated with it (are you building a BTC wallet or a Flappy Bird clone?), you might want to change your strategy for selecting DVMs.

Some format of selection criteria for DVMs to run could be created by the human and passed down the chain. I have no clear idea yet how this would work in practice, but a rough sketch follows the list below.

Some example criteria:

  • Most trusted DVM operators
  • Good uptime metrics
  • Cheapest
  • Fastest
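As a rough illustration, such criteria could be passed down the chain as a structured object; all field names below are entirely hypothetical:

```typescript
// Entirely hypothetical shape for DVM selection criteria that a human
// could define once and pass down the pipeline.
interface DvmSelectionCriteria {
  minWotScore?: number;      // minimum Web-of-Trust score of the operator
  minUptimePercent?: number; // e.g. 99 for high-stakes pipelines
  maxPriceMsats?: number;    // prefer the cheapest offer under this cap
  maxBuildSeconds?: number;  // prefer the fastest offer under this cap
}

// Stricter selection for a BTC wallet than for a Flappy Bird clone.
const walletPipeline: DvmSelectionCriteria = { minWotScore: 0.9, minUptimePercent: 99 };
const gamePipeline: DvmSelectionCriteria = { maxPriceMsats: 10_000 };
```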

Definitions

| Term | Explanation | Links |
| --- | --- | --- |
| Solution | Mystical response to a problem which only exists in the human mind. Often used by people actually talking about a good trade-off. | Source |
| CI | Continuous Integration, the test and build stages of an application. | |
| CD | Continuous Delivery, the deployment process of an application. Often involves configuring and booting an application on a server. | |
| Build Agent | A (virtual) machine tasked with executing steps of a CI/CD pipeline. | |
| Runner | Piece of software that executes jobs in a CI/CD pipeline. | GitLab Runner |
| Blossom | File storage/distribution protocol. Essentially Nostr, but for file storage. Allows accessing files based on a file’s fingerprint (hash), rather than its location. | |
| Multisig key | A private key divided into multiple shares, where a certain threshold of shares has to sign a message for it to be valid. For example, 3 of 5 keys need to sign a Nostr event before it can be published. | |

Author public key: npub1hw6amg8p24ne08c9gdq8hhpqx0t0pwanpae9z25crn7m9uy7yarse465gr