Build and Deploy a Worker Node from Scratch

This guide helps you build your worker node by yourself, without the help of the allocmd CLI tool.

This document creates a setup where the worker node is supported by a side node providing inferences. They communicate through an endpoint: the worker requests inferences from the side node (the Inference Server). This makes for an ultra-light worker node.

To build this setup, please follow these steps:

0. Prerequisites

You should have a head node to connect the worker to.

This can be either a public head node or a head node you run locally. We will use the head node's connectivity data (IP or DNS address, p2p port, and peer ID) to connect to it from the worker.
To run your own head node locally, you can set it up by following the instructions in Allora Inference Base.

Should you want to connect to an existing head node in the network, please contact us for personalized access.

1. Inference Server

Ensure you have your API gateway ready, or any API server that can accept API requests to call your model. The goal of this server is to accept API requests from main.py, your custom Python script executed as the node function (further details in the Node Function section below). It then responds with the corresponding inference obtained from the model. This inference is subsequently relayed by the worker node to the head node, which in turn dispatches it to the chain validators for eventual scoring.

If you have a model running but do not have an inference server, you can create a Dockerfile that bundles a simple Flask application exposing an endpoint to access the model. Below is a sample structure of what your app.py and Dockerfile will look like. You can also find a working example here.

app.py

You will create a Flask application that imports the model module and calls the get_inference function (this could have any name) upon API request, with the required argument. Before proceeding, ensure that all necessary packages are listed in the requirements.txt file.

from flask import Flask
from model import get_inference  # Importing the hypothetical model
 
app = Flask(__name__)
 
@app.route('/inference/<argument>')
def inference(argument):
    inference_data = get_inference(argument)
    return inference_data
 
if __name__ == '__main__':
    app.run(host='0.0.0.0')
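
As noted above, every package the application needs must be listed in requirements.txt. A minimal example for this Flask app (plus the gunicorn server used by the Dockerfile below), assuming your model has no additional dependencies, might be:

flask
gunicorn
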
Dockerfile

Next, create a Dockerfile like the one below, then run docker build -t inference-server . followed by docker run -p 8000:8000 inference-server.

FROM python:3.8-slim
 
WORKDIR /app
 
COPY . /app
 
# Install any needed packages specified in requirements.txt
RUN pip install --no-cache-dir -r requirements.txt
 
EXPOSE 8000
 
ENV NAME sample
 
# Run gunicorn when the container launches and bind port 8000 from app.py
CMD ["gunicorn", "-b", ":8000", "app:app"]

You can test your local inference server by performing a GET request on http://localhost:8000/inference/<argument> (e.g. for an ETH prediction model, this argument value is ETH).
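
For example, assuming the inference server built from the Dockerfile above is running and mapped to port 8000:

curl http://localhost:8000/inference/ETH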

2. Node Function Python Script

To communicate with the inference server, you need a Python script that serves as the entry point from the worker node to your inference server. You will write your custom logic in main.py, consisting of the logic to call the inference server and any other processing needed to get the data ready for the head node. You can find a simple main.py below:

import requests
import sys
import json
 
def process(argument):
    url = f"http://localhost:8000/inference/{argument}"
    inference_response = requests.get(url)
 
    if inference_response.status_code == 200:
        inference = inference_response.json()
        # Your custom processing logic can be written here
        # Data must be provided in a `{"value":"<actual-value>"}` json format.
        response = json.dumps(inference['data']) 
    else:
        response = json.dumps({"error": "Errors providing inference"})
 
    print(response)
 
 
if __name__ == "__main__":
    try:
        topic_id = sys.argv[1]
        argument = sys.argv[2]
        process(argument)
    except Exception as e:
        response = json.dumps({"error": str(e)})
        print(response)
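
With the inference server from the previous step still running on port 8000, you can test the script locally by invoking it the same way the node would (the topic ID 1 and argument ETH below are illustrative values):

python3 main.py 1 ETH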
 

3. Identity Generation

A node's identity is determined by its private key. Each node on the network is also known as a peer, which has an ID and a private key. You can generate a node identity by running the following command:

docker run -it --entrypoint=bash -v ./:/data alloranetwork/allora-inference-base:latest -c "mkdir -p /data/keys && (cd /data/keys && allora-keys)"
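
This writes the key material into a keys directory under your current working directory (mounted as /data in the container). As a quick sanity check, confirm that the private key the worker Dockerfile later references via --private-key=/data/keys/priv.bin exists:

ls ./keys
# should include priv.bin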

4. Worker Node Main Dockerfile

Now that you have generated the node identity for your worker and your node function is pulling data from your Inference Server, you must bundle your worker node with a Dockerfile so it can be tested as a whole. As stated earlier, your Dockerfile will have to pull the allora-inference-base image from the Allora Docker repository and combine it with your custom main.py. Below is an example of what your Dockerfile_node should look like:

# Pull base image
FROM alloranetwork/allora-inference-base:latest
 
# Copy requirements and install dependencies
COPY requirements.txt /app/
RUN pip3 install --requirement /app/requirements.txt
 
# Copy the main application file
COPY main.py /app/
    
CMD ["allora-node", "--role=worker", \
     "--peer-db=/data/peerdb", \
     "--function-db=/data/function-db", \
     "--runtime-path=/app/runtime", \
     "--runtime-cli=bls-runtime", \
     "--workspace=/data/workspace", \
     "--private-key=/data/keys/priv.bin", \
     "--log-level=debug", \
     "--port=9011", \
     "--topic=1", \
     "--boot-nodes=/ip4/{head-ip}/tcp/{head-port}/p2p/{head-id}"]
 
 

Note:

  1. The head-id and head-port values in the --boot-nodes address are the peer ID and p2p port of the head node; they are 12D3KooWRX78c84ko4ZNiDFE5i2d9QT4SPxBkpT6kjzxUotQ8sNR/9010 and 12D3KooWEvNL9wXM6dzusvRpA7qo98QdsUE52x4vHtv2sttc6Za7/9011.
  2. The --boot-nodes flag can take more than one address, separated by commas.
    Each address can be expressed in IP form (/ip4/127.0.0.1/tcp/9010/p2p/12D3KooWRX78c84ko4ZNiDFE5i2d9QT4SPxBkpT6kjzxUotQ8sNR) or DNS form (/dns4/my-dns-name.xyz/tcp/9010/p2p/12D3KooWRX78c84ko4ZNiDFE5i2d9QT4SPxBkpT6kjzxUotQ8sNR).

With Dockerfile_node in your root directory, you can now run:

# Build your Dockerfile
docker build -f Dockerfile_node -t image-name .
 
# Run your newly built Docker image
docker run -d -v ./:/data --name container-name image-name

At this point, your worker node is set up. If all is OK, you'll see a line in the logs saying peer connected.

It should now be able to pick up published requests from the head node (for your specified topic) and return responses.
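
You can follow the container's logs while testing:

docker logs -f container-name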

You can test by running the following curl command and checking whether your worker shows activity in its logs.

curl --location 'http://{head-ip}:{head-rest-api-port}/api/v1/functions/execute' --header 'Accept: application/json, text/plain, */*' --header 'Content-Type: application/json;charset=UTF-8' --data '{
    "function_id": "bafybeigpiwl3o73zvvl6dxdqu7zqcub5mhg65jiky2xqb4rdhfmikswzqm",
    "method": "allora-inference-function.wasm",
    "parameters": null,
    "topic": "1",
    "config": {
        "env_vars": [
            {                              
                "name": "BLS_REQUEST_PATH",
                "value": "/api"
            },
            {                              
                "name": "ALLORA_ARG_PARAMS",
                "value": "ETH"
            }
        ],
        "number_of_nodes": -1,
        "timeout" : 2
    }
}' | jq

It's important to note that although all Dockerfiles in this guide are designed to run independently, you have the option to run them together as services in a docker-compose file. In fact, this is advised, as you can observe in the basic-coin-prediction-node example.
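
For illustration, a minimal docker-compose.yml along these lines might look like the sketch below (the service names and build contexts are assumptions; note that when both containers share a compose network, the URL in main.py should target the inference service by name, e.g. http://inference:8000, instead of localhost):

version: "3"
services:
  inference:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8000:8000"
  worker:
    build:
      context: .
      dockerfile: Dockerfile_node
    volumes:
      - ./:/data
    depends_on:
      - inference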

Please contact us for personalized access to connect to a head node.
Alternatively, you can run your own head node by following the instructions in the Allora Inference Base README.

Deploying a Worker Node

Now that you have built and tested your worker node, your next goal is to deploy it to production, where it runs continuously. To do that, we use a Kubernetes cluster and the Upshot universal-helm chart. While you can deploy your node however you wish, you can follow these steps if you are not opinionated on deployment.

1. Build, Tag, and Push Your Docker Image

The first step toward deployment is pushing your Docker image to your preferred repository. The Universal-helm chart will use the pushed image to deploy the worker node to your Kubernetes cluster.

# Log in to your Docker repository, e.g. Docker Hub
docker login

# Tag your image
docker tag image-name:tag username/repository:tag

# Push the image to the repository
docker push username/repository:tag

2. Add the universal-helm Repository to Helm

On your Kubernetes cluster on your preferred cloud service, add the universal-helm repository:

helm repo add upshot https://upshot-tech.github.io/helm-charts
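
You can then refresh your local chart index and confirm the chart is available:

helm repo update
helm search repo upshot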

3. Create values.yaml File

Provide custom values in a values.yaml file:

statefulsets:
  - name: worker
    replicas: 1
    persistence:
      size: 1Gi
      storageClassName: gp2
      volumeMountPath: /data
    initContainers:
      - name: initialize-keys
        image: {image}:{tag}
        env:
          - name: APP_HOME
            value: "/data"
        workingDir: /data
        command:
          - /bin/sh
          - -c
          - |
            KEYS_PATH="${APP_HOME}/keys"
 
            if [ -d "$KEYS_PATH" ]; then
              echo "Keys exist"
            else
              echo "Generating New Node Identity"
              mkdir -p ${APP_HOME}/keys
              cd $KEYS_PATH
              allora-keys
            fi
        volumeMounts:
          - name: worker-data
            mountPath: /data
        securityContext:
          runAsUser: 1001
 
      - name: init-allora-account
        image: alloranetwork/allora-chain:latest
        env:
          - name: APP_HOME
            value: "/data"
          - name: SECRETS_DIR
            value: "/keys"
        workingDir: /data
        command:
          - /bin/sh
          - -c
          - |
            set -ex
 
            ACCOUNT_NAME=<account name>
            KEYRING_BACKEND=test 
 
            if allorad keys --keyring-backend $KEYRING_BACKEND show $ACCOUNT_NAME >/dev/null 2>&1 ; then
              echo "$ACCOUNT_NAME - account already imported"
            else
              allorad keys import-hex --home=${APP_HOME}/.allorad --keyring-backend $KEYRING_BACKEND $ACCOUNT_NAME <exported hex private key>
            fi
        securityContext:
          runAsUser: 1001
        volumeMounts:
          - name: worker-data
            mountPath: /data
 
    containers:
      - name: worker
        image:
          repository: <image>
          tag: <tag>
        env:
          - name: APP_HOME
            value: "/data"
        workingDir: /data
        command:
          - allora-node
          - --role=worker
          - --peer-db=$(APP_HOME)/peer-database
          - --function-db=$(APP_HOME)/function-database
          - --runtime-path=/app/runtime
          - --runtime-cli=bls-runtime
          - --workspace=/tmp/node
          - --private-key=$(APP_HOME)/keys/priv.bin
          - --log-level=debug
          - --port=9010
          - --boot-nodes=/dns4/head-0.testnet.allora.network/tcp/9010/p2p/12D3KooWGAA1C2UNyj51QaGX5aXnVzRqeaqhyhUFFJixdqshwWC3
          - --allora-node-rpc-address=http://sentry-allorad-rpc.testnet-sentries:26657
          - --allora-chain-home-dir=/data/.allorad
          - --allora-chain-key-name=<account name>
          - --allora-chain-topic-id=<preferred topic id>
          - --topic=<preferred topic id>
        ports:
          - name: p2p
            port: 9010
        resources:
          limits:
            cpu: 1
            memory: 1Gi
          requests:
            cpu: 256m
            memory: 512Mi
        startupProbe:
          tcpSocket:
            port: 9010
          periodSeconds: 10
          failureThreshold: 6
        livenessProbe:
          tcpSocket:
            port: 9010
global:
  serviceAccount:
    name: node
  securityContext:
    fsGroup: 1001
    runAsUser: 1001
    runAsGroup: 1001
    fsGroupChangePolicy: "Always"
 

Note: Please replace the parameters in <> with your own values.

4. Install Helm Chart

helm install index-provider upshot/universal-helm -f values.yaml

If all of these steps are done correctly, your worker node should run successfully in your cloud cluster.
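
As a final check (assuming kubectl is configured for the same cluster), you can list the pods created by the chart and tail the worker's logs, looking for the same peer connected line as in the local test:

kubectl get pods
kubectl logs -f <worker-pod-name>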