
CB-Tumblebug (Multi-Cloud Infra Management) πŸ‘‹


CB-Tumblebug (CB-TB for short) is a system for managing multi-cloud infrastructure consisting of resources from multiple cloud service providers. (Cloud-Barista)

[Note] Development of CB-Tumblebug is ongoing
CB-TB is not v1.0 yet.
We welcome any new suggestions, issues, opinions, and contributors!
Please note that the functionalities of Cloud-Barista are not yet stable or secure.
Be careful if you plan to use the current release in production.
If you have any difficulties in using Cloud-Barista, please let us know.
(Open an issue or join the Cloud-Barista Slack)
[Note] Localization and Globalization of CB-Tumblebug
As an open-source project initiated by Korean members, 
we would like to encourage participation by Korean contributors during the initial stage of this project. 
So, the CB-TB repo will accept the use of the Korean language in its early stages.
However, we hope this project will eventually flourish regardless of contributors' countries.
Therefore, the maintainers recommend using English at least for the titles of Issues, Pull Requests, and Commits, 
while the CB-TB repo accommodates local languages in their contents.

Index πŸ”—

  1. Prerequisites
  2. How to Run
  3. How to Use
  4. How to Build
  5. How to Contribute

Prerequisites 🌍

Environment

  • Linux (recommended: Ubuntu 22.04)
  • Docker and Docker Compose
  • Golang (recommended: v1.21.6) to build the source
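
You can quickly check that these prerequisites are in place with the commands below (a minimal sanity check; versions other than the recommended ones may also work):

    docker --version
    docker compose version
    go version   # only needed if you plan to build the source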

Dependency

Open source packages used in this project


How to Run πŸš€

(1) Download CB-Tumblebug

  • Clone the CB-Tumblebug repository:

    git clone https://github.com/cloud-barista/cb-tumblebug.git $HOME/go/src/github.com/cloud-barista/cb-tumblebug
    cd ~/go/src/github.com/cloud-barista/cb-tumblebug

    Optionally, you can register aliases for the CB-Tumblebug directory to simplify navigation:

    echo "alias cdtb='cd $HOME/go/src/github.com/cloud-barista/cb-tumblebug'" >> ~/.bashrc
    echo "alias cdtbsrc='cd $HOME/go/src/github.com/cloud-barista/cb-tumblebug/src'" >> ~/.bashrc
    echo "alias cdtbtest='cd $HOME/go/src/github.com/cloud-barista/cb-tumblebug/src/testclient/scripts'" >> ~/.bashrc
    source ~/.bashrc

(2) Run CB-TB and All Related Components

  • Check Docker Compose Installation:

    Ensure that Docker Engine and Docker Compose are installed on your system. If not, you can use the following script to install them (note: this script is not intended for production environments):

    cd ~/go/src/github.com/cloud-barista/cb-tumblebug
    ./scripts/installDocker.sh
  • Start All Components Using Docker Compose:

    To run all components, use the following command:

    cd ~/go/src/github.com/cloud-barista/cb-tumblebug
    sudo docker compose up

    This command will start all components as defined in the preconfigured docker-compose.yaml file. For configuration customization, please refer to the guide.

    The following components will be started:

    • ETCD: CB-Tumblebug KeyValue DB
    • CB-Spider: a Cloud API controller
    • CB-MapUI: a simple Map-based GUI web server
    • CB-Tumblebug: the main system and its API server


    After running the command, you should see startup logs from each of the components.

    Now, the CB-Tumblebug API server is accessible at http://localhost:1323/tumblebug/api. Additionally, CB-MapUI is accessible at http://localhost:1324.

    Note: Before using CB-Tumblebug, you need to initialize it.
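
    As a quick sanity check, you can confirm that the API server responds. This is a minimal sketch assuming the default ports; the readiness path below is an assumption, so verify the exact paths in the Swagger API documentation.

      # Check the CB-Tumblebug API server (default port 1323; the /readyz path is an assumption)
      curl -s http://localhost:1323/tumblebug/readyz

      # Check that CB-MapUI serves its web page (default port 1324)
      curl -s -o /dev/null -w "%{http_code}\n" http://localhost:1324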


(3) Initialize CB-Tumblebug to configure Multi-Cloud info

To provision multi-cloud infrastructures with CB-TB, you first need to register the connection information (credentials) for the clouds, as well as commonly used images and specifications.

  • Create the credentials.yaml file and input your cloud credentials

    • Overview

      • credentials.yaml is a file that contains multiple credentials for using the APIs of the clouds supported by CB-TB (AWS, GCP, AZURE, ALIBABA, etc.)
      • It should be located in the ~/.cloud-barista/ directory and securely managed.
      • Refer to the template.credentials.yaml for the template
    • Create the credentials.yaml file

      Automatically generate the credentials.yaml file in the ~/.cloud-barista/ directory using the CB-TB script

      cd ~/go/src/github.com/cloud-barista/cb-tumblebug
      ./init/genCredential.sh
    • Input credential data

      Put your credential data into ~/.cloud-barista/credentials.yaml (Reference: How to obtain a credential for each CSP)

      ### Cloud credentials for credential holders (default: admin)
      credentialholder:
        admin:
          alibaba:
            # ClientId(ClientId): client ID of the EIAM application
            # Example: app_mkv7rgt4d7i4u7zqtzev2mxxxx
            ClientId:
            # ClientSecret(ClientSecret): client secret of the EIAM application
            # Example: CSEHDcHcrUKHw1CuxkJEHPveWRXBGqVqRsxxxx
            ClientSecret:
          aws:
            # ClientId(aws_access_key_id)
            # ex: AKIASSSSSSSSSSS56DJH
            ClientId:
            # ClientSecret(aws_secret_access_key)
            # ex: jrcy9y0Psejjfeosifj3/yxYcgadklwihjdljMIQ0
            ClientSecret:
          ...
      
  • Encrypt credentials.yaml into credentials.yaml.enc

    To protect sensitive information, credentials.yaml is not used directly. Instead, it must be encrypted using encCredential.sh. The encrypted file credentials.yaml.enc is then used by init.py. This approach ensures that sensitive credentials are not stored in plain text.

    • Encrypting Credentials
      init/encCredential.sh

    If you need to update your credentials, decrypt the encrypted file using decCredential.sh, make the necessary changes to credentials.yaml, and then re-encrypt it.

    • Decrypting Credentials
      init/decCredential.sh
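
    For example, the update flow above can be run end to end as follows (a minimal sketch; the editor is an arbitrary choice and the file locations are the defaults used in this guide):

      cd ~/go/src/github.com/cloud-barista/cb-tumblebug
      ./init/decCredential.sh                 # restore ~/.cloud-barista/credentials.yaml
      vi ~/.cloud-barista/credentials.yaml    # edit the credentials with any editor
      ./init/encCredential.sh                 # re-create ~/.cloud-barista/credentials.yaml.enc
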
  • (INIT) Register all multi-cloud connection information and common resources

    • How to register

      Refer to the README.md for init.py, and execute the init script (enter 'y' at the confirmation prompts).

      cd ~/go/src/github.com/cloud-barista/cb-tumblebug
      ./init/init.sh
      • The credentials in ~/.cloud-barista/credentials.yaml.enc (the encrypted form of credentials.yaml) will be registered automatically, along with all CSP and region information recorded in cloudinfo.yaml.
        • Note: You can check the latest regions and zones of each CSP using update-cloudinfo.py and review the file for updates (contributions to updates are welcome).
      • Common images and specifications recorded in the cloudimage.csv and cloudspec.csv files in the assets directory will be automatically registered.
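
    After initialization completes, you can optionally verify the registered cloud connection information through the REST API. This is a minimal sketch: the path and the basic-auth credentials are assumptions, so check the Swagger API documentation and the API credentials configured for your deployment (see conf/setup.env).

      # List the registered cloud connection configurations (path and credentials are assumptions)
      curl -s -u default:default http://localhost:1323/tumblebug/connConfig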

(4) Shutting down and Version Upgrade

  • Shutting down CB-TB and related components

    • Stop all containers with Ctrl + C, or run sudo docker compose stop / sudo docker compose down. (When a shutdown event occurs, CB-TB shuts down gracefully: API requests that can be processed within 10 seconds will be completed.)


    • In case cleanup is needed due to internal system errors

      • Check and delete resources created through CB-TB
      • Delete CB-TB & CB-Spider metadata using the provided script
        cd ~/go/src/github.com/cloud-barista/cb-tumblebug
        ./scripts/cleanDB.sh
  • Upgrading the CB-TB & CB-Spider versions

    The following cleanup steps are unnecessary if you clearly understand the impact of the upgrade

    • Check and delete resources created through CB-TB
    • Delete CB-TB & CB-Spider metadata
      cd ~/go/src/github.com/cloud-barista/cb-tumblebug
      ./scripts/cleanDB.sh
    • Restart with the upgraded version
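
    For a Docker Compose based deployment, the upgrade flow above might look like the following sketch (it assumes you obtain the new version with git pull and rebuild the container images; adjust to how you actually pick up releases):

      cd ~/go/src/github.com/cloud-barista/cb-tumblebug
      sudo docker compose down         # stop the current version
      ./scripts/cleanDB.sh             # delete CB-TB & CB-Spider metadata (after removing created resources)
      git pull                         # assumption: upgrade by pulling the latest source
      sudo docker compose up --build   # rebuild and restart with the upgraded version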

How to Use CB-TB Features 🌟

  1. Using CB-TB MapUI (recommended)
  2. Using CB-TB REST API (recommended)
  3. Using CB-TB Test Scripts

Using CB-TB MapUI

  • With CB-MapUI, you can create, view, and control Multi-Cloud infra.
    • CB-MapUI is a project that visualizes the deployment of MCIS on a map-based GUI.
    • Run the CB-MapUI container using the CB-TB script
      cd ~/go/src/github.com/cloud-barista/cb-tumblebug
      ./scripts/runMapUI.sh
    • Access it via a web browser at http://{HostIP}:1324

Using CB-TB REST API
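
The REST API is documented via Swagger, which is served by the running CB-TB server (see http://localhost:1323/tumblebug/api). As a minimal sketch of a first call, the snippet below creates and then lists a namespace; the request body fields and the basic-auth credentials are assumptions, so verify them against the Swagger API documentation and the API credentials configured for your deployment (see conf/setup.env).

    # Create a namespace (fields and credentials below are assumptions; check the Swagger API docs)
    curl -s -u default:default -X POST http://localhost:1323/tumblebug/ns \
      -H "Content-Type: application/json" \
      -d '{"name": "ns01", "description": "my first namespace"}'

    # List namespaces
    curl -s -u default:default http://localhost:1323/tumblebug/ns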


Using CB-TB Test Scripts

src/testclient/scripts/ provides Bash shell scripts that simplify and automate the otherwise complex MCIS (MC-Infra) provisioning procedures.

[Note] Details

Setup Test Environment

  1. Go to src/testclient/scripts/
  2. Configure conf.env
    • Provide basic test information such as the CB-Spider and CB-TB server endpoints, cloud regions, test image names, test spec names, etc.
    • Most of the information for the various cloud types has already been investigated and filled in, so it can be used without modification. (However, check for charges based on the specified spec.)
  3. Configure testSet.env
    • Set the cloud and region configuration to be used for MCIS provisioning (you can modify the existing testSet.env or make a copy of it and use that)
    • Specify the types of CSPs to combine
      • Change the number in NumCSP= to specify the total number of CSPs to combine
      • Specify the types of CSPs to combine by rearranging the lines in L15-L24 (use up to the number specified in NumCSP)
      • Example: To combine aws and alibaba, change NumCSP=2 and rearrange IndexAWS=$((++IX)) and IndexAlibaba=$((++IX)) (see the sketch after this list)
    • Specify the regions of the CSPs to combine
      • Go to the setting item of each CSP, e.g., # AWS (Total: 21 Regions)
      • Specify the number of regions to configure in NumRegion[$IndexAWS]=2 (in the example, it is set to 2)
      • Set the desired regions by rearranging the lines of the region list (if NumRegion[$IndexAWS]=2, the top 2 listed regions will be selected)
    • Be aware!
      • Be aware that creating VMs on public CSPs such as AWS, GCP, Azure, etc. may incur charges.
      • With the default setting of testSet.env, TestClouds (TestCloud01, TestCloud02, TestCloud03) will be used to create mock VMs.
      • TestCloud01, TestCloud02, and TestCloud03 are not real CSPs; they are used for testing purposes only (SSH into the VMs is not supported).
      • Anyway, please be aware of cloud usage costs when using public CSPs.
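
    As referenced above, a testSet.env configured to combine aws and alibaba with two AWS regions might look roughly like this excerpt (a sketch, not the full file; the actual file contains more settings and longer region lists):

      # testSet.env (excerpt sketch)
      NumCSP=2                      # total number of CSPs to combine

      # CSPs are selected in the order of these lines (only the first NumCSP entries are used)
      IndexAWS=$((++IX))
      IndexAlibaba=$((++IX))

      # AWS (Total: 21 Regions)
      NumRegion[$IndexAWS]=2        # the top 2 regions listed below will be used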

Integrated Tests

  • You can test the entire process at once by executing create-all.sh and clean-all.sh included in src/testclient/scripts/sequentialFullTest/

    └── sequentialFullTest # Automatic testing from cloud information registration to NS creation, MCIR creation, and MCIS creation
        β”œβ”€β”€ check-test-config.sh # Check the multi-cloud infrastructure configuration specified in the current testSet
        β”œβ”€β”€ create-all.sh # Automatic testing from cloud information registration to NS creation, MCIR creation, and MCIS creation
        β”œβ”€β”€ gen-sshKey.sh # Generate SSH key files to access MCIS
        β”œβ”€β”€ command-mcis.sh # Execute remote commands on the created MCIS (multiple VMs)
        β”œβ”€β”€ deploy-nginx-mcis.sh # Automatically deploy Nginx on the created MCIS (multiple VMs)
        β”œβ”€β”€ create-mcis-for-df.sh # Create MCIS for hosting CB-Dragonfly
        β”œβ”€β”€ deploy-dragonfly-docker.sh # Automatically deploy CB-Dragonfly on MCIS and set up the environment
        β”œβ”€β”€ clean-all.sh # Delete all objects in reverse order of creation
        β”œβ”€β”€ create-k8scluster-only.sh # Create a K8s cluster for the multi-cloud infrastructure specified in the testSet
        β”œβ”€β”€ get-k8scluster.sh # Get K8s cluster information for the multi-cloud infrastructure specified in the testSet
        β”œβ”€β”€ clean-k8scluster-only.sh # Delete the K8s cluster for the multi-cloud infrastructure specified in the testSet
        β”œβ”€β”€ force-clean-k8scluster-only.sh # Force delete the K8s cluster for the multi-cloud infrastructure specified in the testSet if deletion fails
        β”œβ”€β”€ add-k8snodegroup.sh # Add a new K8s node group to the created K8s cluster
        β”œβ”€β”€ remove-k8snodegroup.sh # Delete the newly created K8s node group in the K8s cluster
        β”œβ”€β”€ set-k8snodegroup-autoscaling.sh # Change the autoscaling setting of the created K8s node group to off
        β”œβ”€β”€ change-k8snodegroup-autoscalesize.sh # Change the autoscale size of the created K8s node group
        β”œβ”€β”€ deploy-weavescope-to-k8scluster.sh # Deploy weavescope to the created K8s cluster
        └── executionStatus # Logs of the tests performed (information is added when testAll is executed and removed when cleanAll is executed. You can check the ongoing tasks)
  • MCIS Creation Test

    • ./create-all.sh -n shson -f ../testSetCustom.env # Create MCIS with the cloud combination configured in ../testSetCustom.env

    • Automatically proceed with the process to check the MCIS creation configuration specified in ../testSetCustom.env

    • Example of execution result

      Table: All VMs in the MCIS : cb-shson
      
      ID              Status   PublicIP       PrivateIP      CloudType  CloudRegion     CreatedTime
      --              ------   --------       ---------      ---------  -----------     -----------
      aws-ap-southeast-1-0   Running  xx.250.xx.73   192.168.2.180  aws        ap-southeast-1  2021-09-17   14:59:30
      aws-ca-central-1-0   Running  x.97.xx.230    192.168.4.98   aws        ca-central-1    2021-09-17   14:59:58
      gcp-asia-east1-0  Running  xx.229.xxx.26  192.168.3.2    gcp        asia-east1      2021-09-17   14:59:42
      
      [DATE: 17/09/2021 15:00:00] [ElapsedTime: 49s (0m:49s)] [Command: ./create-mcis-only.sh all 1 shson ../testSetCustom.env 1]
      
      [Executed Command List]
      [MCIR:aws-ap-southeast-1(28s)] create-mcir-ns-cloud.sh (MCIR) aws 1 shson ../testSetCustom.env
      [MCIR:aws-ca-central-1(34s)] create-mcir-ns-cloud.sh (MCIR) aws 2 shson ../testSetCustom.env
      [MCIR:gcp-asia-east1(93s)] create-mcir-ns-cloud.sh (MCIR) gcp 1 shson ../testSetCustom.env
      [MCIS:cb-shsonvm4(19s+More)] create-mcis-only.sh (MCIS) all 1 shson ../testSetCustom.env
      
      [DATE: 17/09/2021 15:00:00] [ElapsedTime: 149s (2m:29s)] [Command: ./create-all.sh -n shson -f ../testSetCustom.env -x 1]
  • MCIS Removal Test (Use the input parameters used in creation for deletion)

    • ./clean-all.sh -n shson -f ../testSetCustom.env # Perform removal of created resources according to ../testSetCustom.env
    • Be aware!
      • If you created MCIS (VMs) for testing in public clouds, the VMs may incur charges.
      • You need to terminate MCIS by using clean-all to avoid unexpected billing.
      • Anyway, please be aware of cloud usage costs when using public CSPs.
  • Generate MCIS SSH access keys and access each VM

    • ./gen-sshKey.sh -n shson -f ../testSetCustom.env # Return access keys for all VMs configured in MCIS

    • Example of execution result

      ...
      [GENERATED PRIVATE KEY (PEM, PPK)]
      [MCIS INFO: mc-shson]
       [VMIP]: 13.212.254.59   [MCISID]: mc-shson   [VMID]: aws-ap-southeast-1-0
       ./sshkey-tmp/aws-ap-southeast-1-shson.pem
       ./sshkey-tmp/aws-ap-southeast-1-shson.ppk
       ...
      
      [SSH COMMAND EXAMPLE]
       [VMIP]: 13.212.254.59   [MCISID]: mc-shson   [VMID]: aws-ap-southeast-1-0
       ssh -i ./sshkey-tmp/aws-ap-southeast-1-shson.pem [email protected] -o StrictHostKeyChecking=no
       ...
       [VMIP]: 35.182.30.37   [MCISID]: mc-shson   [VMID]: aws-ca-central-1-0
       ssh -i ./sshkey-tmp/aws-ca-central-1-shson.pem [email protected] -o StrictHostKeyChecking=no
  • Verify MCIS via SSH remote command execution

    • ./command-mcis.sh -n shson -f ../testSetCustom.env # Execute IP and hostname retrieval for all VMs in MCIS
  • K8s Cluster Test (WIP: Stability work in progress for each CSP)

    ./create-mcir-ns-cloud.sh -n tb -f ../testSet.env # Create the MCIR required for K8s cluster creation
    ./create-k8scluster-only.sh -n tb -f ../testSet.env -x 1 -z 1 # Create a K8s cluster (-x maximum number of nodes, -z additional name for the K8s node group and K8s cluster)
    ./get-k8scluster.sh -n tb -f ../testSet.env -z 1 # Get K8s cluster information
    ./add-k8snodegroup.sh -n tb -f ../testSet.env -x 1 -z 1 # Add a new K8s node group to the K8s cluster
    ./change-k8snodegroup-autoscalesize.sh -n tb -f ../testSet.env -x 1 -z 1 # Change the autoscale size of the specified K8s node group
    ./deploy-weavescope-to-k8scluster.sh -n tb -f ../testSet.env -y n # Deploy weavescope to the created K8s cluster
    ./set-k8snodegroup-autoscaling.sh -n tb -f ../testSet.env -z 1 # Change the autoscaling setting of the new K8s node group to off
    ./remove-k8snodegroup.sh -n tb -f ../testSet.env -z 1 # Delete the newly created K8s node group
    ./clean-k8scluster-only.sh -n tb -f ../testSet.env -z 1 # Delete the created K8s cluster
    ./force-clean-k8scluster-only.sh -n tb -f ../testSet.env -z 1 # Force delete the created K8s cluster if deletion fails
    ./clean-mcir-ns-cloud.sh -n tb -f ../testSet.env # Delete the created MCIR

Multi-Cloud Infrastructure Use Cases

Deploying an MCIS Xonotic (3D FPS) Game Server

Distributed Deployment of MCIS Weave Scope Cluster Monitoring

Deploying MCIS Jitsi Video Conferencing

Automatic Configuration of MCIS Ansible Execution Environment


How to Build πŸ› οΈ

(1) Setup Prerequisites

  • Setup required tools

    • Install: git, gcc, make

      sudo apt update
      sudo apt install make gcc git
    • Install: Golang

      • Check https://golang.org/dl/ and setup Go

        • Download

          wget https://go.dev/dl/go1.21.6.linux-amd64.tar.gz;
          sudo rm -rf /usr/local/go && sudo tar -C /usr/local -xzf go1.21.6.linux-amd64.tar.gz
        • Setup environment

          echo 'export PATH=$PATH:/usr/local/go/bin:$HOME/go/bin' >> ~/.bashrc
          echo 'export GOPATH=$HOME/go' >> ~/.bashrc
          source ~/.bashrc
          echo $GOPATH
          go env
          go version

(2) Build and Run CB-Tumblebug

(2-1) Option 1: Run CB-Tumblebug with Docker Compose (Recommended)

  • Run Docker Compose with the build option

    To build the current CB-Tumblebug source code into a container image and run it along with the other containers, use the following command:

    cd ~/go/src/github.com/cloud-barista/cb-tumblebug
    sudo docker compose up --build

    This command will automatically build CB-Tumblebug from the local source code and start it within a Docker container, along with the other necessary services defined in the docker-compose.yaml file.

(2-2) Option 2: Run CB-Tumblebug from the Makefile

  • Build the Golang source code using the Makefile

    cd ~/go/src/github.com/cloud-barista/cb-tumblebug/src
    make

    All dependencies will be downloaded automatically by Go.

    The initial build will take some time, but subsequent builds will be faster thanks to the Go build cache.

    Note: To update the Swagger API documentation, run make swag

  • Set environment variables required to run CB-TB (in another tab)

    • Check and configure the contents of cb-tumblebug/conf/setup.env (CB-TB environment variables, modify as needed)
      • Apply the environment variables to the system
        cd ~/go/src/github.com/cloud-barista/cb-tumblebug
        source conf/setup.env
      • (Optional) Automatically set the TB_SELF_ENDPOINT environment variable (an externally accessible address) using a script if needed
        • This is necessary if you want to access and control the Swagger API Dashboard from outside when CB-TB is running
        cd ~/go/src/github.com/cloud-barista/cb-tumblebug
        source ./scripts/setPublicIP.sh
  • Execute the built cb-tumblebug binary by using make run

    cd ~/go/src/github.com/cloud-barista/cb-tumblebug/src
    make run

How to Contribute πŸ™

CB-TB welcomes improvements from both new and experienced contributors!

Check out CONTRIBUTING.

Contributors ✨

Thanks goes to these wonderful people (emoji key):

  • Seokho Son: 🚧 πŸ€” πŸ’» πŸ‘€
  • Jihoon Seo: 🚧 πŸ€” πŸ’» πŸ‘€
  • Yunkon Kim: πŸ€” πŸ’» πŸ‘€ 🚧
  • jmleefree: πŸ’» πŸ‘€
  • ByoungSeob Kim: πŸ€”
  • Sooyoung Kim: πŸ› πŸ€”
  • KANG DONG JAE: πŸ€”
  • Youngwoo-Jung: πŸ€”
  • Sean Oh: πŸ€”
  • MZC-CSC: πŸ› πŸ€”
  • Eunsang: πŸ““
  • hyokyungk: πŸ““
  • pjini: πŸ““
  • sunmi: πŸ““
  • sglim: πŸ“– πŸ’»
  • jangh-lee: πŸ“– πŸ’»
  • μ΄λ„ν›ˆ: πŸ“– πŸ’»
  • Park Beomsu: πŸ’»
  • Hassan Alsamahi: πŸ’»
  • Taegeon An: πŸ’»
  • INHYO: πŸ’»
  • Modney: πŸ“– πŸ’»
  • Seongbin Bernie Cho: πŸ’» πŸ“–
  • Gibaek Nam: πŸ’»
  • Abidin Durdu: πŸ’»
  • soyeon Park: πŸ’»
  • Jayita Pramanik: πŸ“–
  • Mukul Kolpe: πŸ“–
  • EmmanuelMarianMat: πŸ’»
  • Carlos Felix: πŸ’»
  • Stuart Gilbert: πŸ’»
  • Ketan Deshmukh: πŸ’»
  • TrΓ­ona Barrow: πŸ’»
  • BamButz: πŸ’»
  • dogfootman: πŸ““
  • Okhee Lee: πŸ““
  • joowon: πŸ““
  • Sanghong Kim: πŸ’»
  • Rohit Rajput: πŸ’»
  • Arshad: πŸ’»
  • Jongwoo Han: πŸ’»
  • Yoo Jae-Sung: πŸ““
  • Minhyeok LEE: πŸ‘€


License

CB-Tumblebug is licensed under the Apache License 2.0.