Carbon

Overview

Carbon is a decentralized generative art project where the rarity of the unique digital collectibles (ERC721 NFTs) is determined by how someone's wallet balance at the time of minting compares with the wallet balance of previous minters at the time they minted their token.

Carbon tokens are fair-priced, meaning that the cost of minting is always 1% of the minter's wallet balance, and this fee primarily determines the rarity of the minted NFT. The total supply is limited to 10000 unique tokens, and each image is stored in a decentralized way on IPFS and Filecoin forever.


In a Nutshell

  • Decentralized generative art project living on the Ethereum chain
  • Unique digital collectibles (ERC721 NFTs)
  • All about digital diamond image collectibles in different sizes!
  • 10 different diamond sizes and special diamonds for minters with the largest wallet balance at the time of minting
  • Diamond size is determined by how someone's wallet balance at the time of minting compares with the wallet balance of previous minters at the time they minted their token
  • Limited to 10000 unique tokens
  • Images and metadata stored on IPFS and Filecoin forever
  • Integrated into OpenSea marketplace collection
  • Smart contract is verified & publicly available on Etherscan and GitHub

Project Status

The project is still in development. The contract has been deployed on an Ethereum test chain. The website's minting functionality works, but more tests have to be done before going live on the Ethereum mainnet.

The project also needs a design update, as the overall look & feel is not polished yet and some components are still buggy. We will work on this after everything has been tested!


Tech Stack & Dev Environment

We store images in a decentralized way on IPFS and Filecoin via nft.storage.


Ethereum Contract

The contract source code is open-sourced and can be found here.

When minting a Carbon token, we store your balance on-chain (or, more specifically, the fee of 1% of your wallet balance) in an Order Statistics Tree so that we can efficiently find the rank of this balance, i.e., its index in a sorted list of the balances of all minters. From that rank we calculate the Percentile Rank (PR), i.e., the percentage of balances in the frequency distribution that are less than the minter's current wallet balance. We bin the continuous PRs (e.g. 33% or 99%) uniformly into 10 intervals of nearly identical width and map these 10 intervals (deciles) to the 10 possible diamond sizes, so that PRs in [0,10) are mapped to diamond size 1, PRs in [10,20) to diamond size 2, ..., and PRs in [90,100) to diamond size 10. Only the largest balance at the time of the mint receives a special size, namely 11, and this information is persisted on the blockchain (see the token URI and the linked metadata fields).

In other words, someone's diamond will be as small as possible (size 1 of 10) if that minter's balance at the time of minting exceeds less than 10% of the balances of all previous minters, i.e., at least 90% of previous minters invested more to obtain their Carbon NFT.
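
As an illustration of this mapping (a sketch in TypeScript, not the actual on-chain Solidity code; the function and parameter names are made up for this example):

// rank  = number of stored balances smaller than the new balance (from the Order Statistics Tree)
// count = total number of balances stored so far
// isLargest = whether the new balance is the largest seen at the time of minting
function diamondSize(rank: number, count: number, isLargest: boolean): number {
  if (isLargest) return 11; // special diamond for the largest balance so far
  const percentileRank = count === 0 ? 0 : (rank / count) * 100; // % of balances below the new one
  return Math.min(Math.floor(percentileRank / 10) + 1, 10); // deciles -> sizes 1..10
}

// Example: 33 of 100 stored balances are smaller -> PR = 33 -> diamond size 4
console.log(diamondSize(33, 100, false));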


Sustainability

Storing and verifying your diamond size on-chain consumes a lot of energy; therefore, 30% of all primary market sales will be donated to the Trees for the Future reforestation project via the crypto charity The Giving Block.

The revenue is donated in bundles at regular intervals, and this process is defined in the Ethereum contract itself so that the donations are automated. Not even the creators of the project are able to change or disable this after the contract has been deployed, which makes the donation mechanism tamper-proof.


Kubernetes Cluster Tutorial

As part of the project, we developed an API that serves the token information needed on the website. The data is stored in a database and queried through the API. As soon as a user mints an NFT or resells their token, an event is emitted on the blockchain, which the API listens to. Since many NFTs can potentially be traded at the same time, we use a Redis queue and dispatch a job whenever a token is created or transferred to another wallet.
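
A minimal sketch of this listener pattern (assuming ethers.js and the bull queue library; the node URL, contract address, ABI fragment and queue name are placeholders, not the project's actual values):

import { ethers } from "ethers";
import Queue from "bull";

// Connect to an Ethereum node and to the Carbon contract (address and ABI are placeholders)
const provider = new ethers.providers.WebSocketProvider("wss://<ETHEREUM_NODE_URL>");
const carbon = new ethers.Contract(
  "<CARBON_CONTRACT_ADDRESS>",
  ["event Transfer(address indexed from, address indexed to, uint256 indexed tokenId)"],
  provider
);

// Jobs land on a Redis-backed queue and are processed by a separate worker,
// so bursts of mints and transfers do not overload the API or the database.
const tokenQueue = new Queue("token-events", "redis://redis:6379");

// ERC721 emits a Transfer event on mint (from == zero address) as well as on resale
carbon.on("Transfer", async (from, to, tokenId) => {
  await tokenQueue.add({ from, to, tokenId: tokenId.toString() });
});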

In the following, I explain how to deploy the API, the database, and the Redis queue in a Kubernetes cluster. Once the cluster on the virtual machine has been configured, a push to the main branch of the repository automatically triggers a new deployment on the cluster.
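
Such a push-to-deploy pipeline is not covered step by step in this tutorial, but a rough GitHub Actions sketch could look like the following (it assumes a Dockerfile in the repository root and a KUBECONFIG repository secret that points at a cluster endpoint reachable from GitHub's runners):

# .github/workflows/deploy.yml (sketch)
name: Deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      # Build the API image and push it to the GitHub Container Registry
      - run: echo "${{ secrets.GITHUB_TOKEN }}" | docker login ghcr.io -u ${{ github.actor }} --password-stdin
      - run: docker build -t ghcr.io/miguelfranken/api:latest . && docker push ghcr.io/miguelfranken/api:latest
      # Point kubectl at the cluster and restart the API deployment so it pulls the new image
      - run: |
          mkdir -p ~/.kube && echo "${{ secrets.KUBECONFIG }}" > ~/.kube/config
          kubectl rollout restart deployment/api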

Prerequisites

  • Virtual Machine
  • Domain name and the ability to create DNS records in that domain.
  • Software
    • kubectl: The Kubernetes command-line tool which allows you to configure Kubernetes clusters
    • curl: A command-line tool for connecting to a web server using HTTP and HTTPS.

Install MicroK8s

First, connect to your virtual machine via SSH:

ssh root@<IP_OF_VIRTUAL_MACHINE>

Install MicroK8s, which is a minimal, lightweight Kubernetes you can run and use on practically any machine. It can be installed with a snap:

# Install
sudo snap install microk8s --classic --channel=1.24

# Verify Installation
microk8s status --wait-ready

We have to enable some addons that we will need later:

# From the virtual machine
microk8s.enable dns
microk8s.enable ingress
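
You can confirm that the addons are now listed as enabled:

# From the virtual machine
microk8s status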

Access Remote Cluster

To access a cluster from your local machine, you need to know the location of the cluster (i.e., the IP address of the virtual machine) and have credentials to access it. We can obtain these credentials by copying the cluster config from the virtual machine to the local machine:

# From the virtual machine
sudo microk8s kubectl config view --raw > $HOME/.kube/config

# From your local machine: copy the ~/.kube/config from the virtual machine
scp root@<IP_OF_VIRTUAL_MACHINE>:~/.kube/config ~/.kube/config

# From your local machine: open an SSH tunnel (every time you need to use kubectl on the local machine to access the remote cluster)
ssh -N -L localhost:16443:localhost:16443 root@<IP_OF_VIRTUAL_MACHINE>

If you have already configured other Kubernetes clusters, you should merge the output from the microk8s config with the existing config (copy the output, omitting the first two lines, and paste it onto the end of the existing config using a text editor). See here for more information.
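
With the tunnel open, you can verify that kubectl reaches the remote cluster (this assumes the server address in the copied config points at the tunnelled port 16443 on localhost):

# On your machine (with the SSH tunnel open)
kubectl get nodes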

Create Secrets

Your API might need some secrets that are used by the application as environment variables. For example, add a password secret for the database:

# On your machine
kubectl create secret generic cc-secrets --from-literal=mysql_password=<INSERT_MYSQL_PASSWORD>
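
You can check that the secret exists; describe lists the keys and value sizes without revealing the values:

# On your machine
kubectl describe secret cc-secrets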

Fetching Containers From Private Repositories

We use GitHub Packages to store our built API Docker container images alongside the project's code and to allow private access to the published packages. Our Kubernetes cluster pulls the API image from the private GitHub package registry, which requires a personal access token (PAT) that grants access to the published package. The GitHub documentation describes how to create a PAT. Make sure to grant the token the read:packages permission, which allows the token holder to download packages from the GitHub Package Registry.

If you want your Kubernetes cluster to fetch container images from a private repository, you need a way for the kubelet on each node to authenticate to that repository. You can configure image pull secrets to make this possible. First, base64-encode your GitHub username together with the PAT:

echo -n <YOUR_GITHUB_USERNAME>:<PAT> | base64
# outputs <BASE_64_ENCODED_PAT>

Insert <BASE_64_ENCODED_PAT> into regcred.yml:

# regcred.yml
# stands for registry credentials
apiVersion: v1
kind: Secret
type: kubernetes.io/dockerconfigjson
metadata:
  name: regcred
  namespace: default
stringData:
  .dockerconfigjson: '{"auths":{"ghcr.io":{"auth":"<BASE_64_ENCODED_PAT>"}}}'

Apply the manifest:

kubectl apply -f regcred.yml
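
Alternatively, the same image pull secret can be created directly with kubectl, without writing a manifest (note that these flags take the plain username and PAT, not the base64-encoded value):

# On your machine
kubectl create secret docker-registry regcred \
  --docker-server=ghcr.io \
  --docker-username=<YOUR_GITHUB_USERNAME> \
  --docker-password=<PAT>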

API, Database & Redis Manifests

Our infrastructure consists of three components: an API that serves the NFT token data, a database in which this data is stored, and a Redis queue. When someone mints a new NFT, a Redis job is queued, which processes the mint and stores the new information in the database.

The manifest file for the API might look like this:

# api.yaml
apiVersion: v1
kind: Service
metadata:
  name: api # note the service name here
  labels:
    app: carbon
    tier: api
spec:
  ports:
  - port: 3001
  selector:
    app: carbon
    tier: api
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
  labels:
    app: carbon
    tier: api
spec: # Replica Set
  selector:
    matchLabels:
      app: carbon
      tier: api
  strategy:
    type: Recreate
  replicas: 1
  template:
    metadata:
      labels:
        app: carbon
        tier: api
    spec:
      containers:
      - image: "ghcr.io/miguelfranken/api:latest"
        name: api

        imagePullPolicy: Always

        ports:
        - containerPort: 3001
          name: api

        env:
        - name: MYSQL_DATABASE
          value: "carbon"
        - name: MYSQL_USER
          value: "root"
        - name: MYSQL_HOST
          value: "db"
        - name: MYSQL_PASSWORD
          valueFrom:
            secretKeyRef:
              name: cc-secrets
              key: mysql_password
      imagePullSecrets: # we pull images from a private repository
      - name: regcred
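
The db.yaml and redis.yaml manifests follow the same Service/Deployment pattern. A minimal sketch of redis.yaml, modeled on the API manifest above (assuming the stock redis image, the default port, and that the queue needs no persistence), might look like this:

# redis.yaml (sketch)
apiVersion: v1
kind: Service
metadata:
  name: redis # the queue connects to redis://redis:6379
  labels:
    app: carbon
    tier: redis
spec:
  ports:
  - port: 6379
  selector:
    app: carbon
    tier: redis
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
  labels:
    app: carbon
    tier: redis
spec:
  selector:
    matchLabels:
      app: carbon
      tier: redis
  replicas: 1
  template:
    metadata:
      labels:
        app: carbon
        tier: redis
    spec:
      containers:
      - image: "redis:7"
        name: redis
        ports:
        - containerPort: 6379
          name: redis

Apply all three manifests:
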
# On your machine
kubectl apply -f db.yaml
kubectl apply -f redis.yaml
kubectl apply -f api.yaml

Manage TLS Certificates

We use cert-manager in our cluster to obtain and manage signed TLS certificates from Let's Encrypt, using an HTTP-01 challenge. This makes our API accessible over HTTPS.

We install cert-manager using kubectl:

# Install cert-manager resources from official YAML manifest file on GitHub
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.9.1/cert-manager.yaml

The installation will create three deployments and a number of services and pods in a new namespace called cert-manager. We can verify that everything went as expected by running the following command:

kubectl get pods --namespace cert-manager

Now we have to add an Issuer (here a ClusterIssuer), a custom resource that tells cert-manager how to sign certificates. Let's Encrypt uses the Automatic Certificate Management Environment (ACME) protocol, which is why the configuration below sits under a key called acme. The email address is only used by Let's Encrypt to remind you to renew the certificate 30 days before it expires; you will only receive this email if something goes wrong when cert-manager renews the certificate.

# clusterissuer.yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-production
  namespace: cert-manager
spec:
  acme:
    # The ACME server URL
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: <YOUR_EMAIL>
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-production
    # Enable the HTTP-01 challenge provider
    solvers:
      - http01:
          ingress:
            class: public

Apply the issuer config:

kubectl apply -f clusterissuer.yaml
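
You can check that the issuer has successfully registered with the ACME server and is ready:

# On your machine
kubectl get clusterissuer letsencrypt-production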

Manage External Cluster Access

The API is already running inside the Kubernetes cluster at this point, but there is no route or proxy through which Internet clients can connect to it yet, so you won't be able to reach the API. We will now create a Kubernetes Ingress object, which triggers the creation of various services that together allow Internet clients to reach the API running inside the cluster.

Define the Ingress manifest:

# ingress.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: default-ingress
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-production
spec:
  ingressClassName: public
  tls:
    - secretName: ssl-web
      hosts:
        - <YOUR_DOMAIN>
  rules:
    - host: <YOUR_DOMAIN>
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api
                port:
                  number: 3001

Apply the Ingress manifest:

# from the local machine
kubectl apply -f ingress.yml

You can test the deployment via:

curl -v https://<YOUR_DOMAIN>/
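
If the HTTPS request fails at first, the certificate may still be being issued. cert-manager creates a Certificate resource for the TLS secret referenced in the Ingress (ssl-web here), and you can inspect its status:

# from the local machine
kubectl get certificate
kubectl describe certificate ssl-web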