Commit

adding all the docs and example code
parithosh committed May 7, 2021
1 parent c882975 commit 3f59fd2
Showing 19 changed files with 1,421 additions and 0 deletions.
1 change: 1 addition & 0 deletions .gitignore
.idea/
142 changes: 142 additions & 0 deletions 1.setup-new-proxmox-host.md
# Setting up a new Proxmox host and including it in the raw-iron Proxmox cluster

# Prerequisites:

- Make sure you have gone through the `raw-iron-docs` to help you select your hardware and to understand the purpose of
setting up a new host.
- A dedicated instance from any cloud provider. A dedicated instance is needed since we will install Proxmox as
our own virtualization layer.
- SSH access to the host, ideally with Debian pre-installed.

# Instructions:

WARNING! The commands below are for a Debian system. We install Proxmox on top of Debian to ensure easy installation
on a wide range of hardware.

0. Provision the host with your SSH keys, set the hostname, disable root SSH, etc. A playbook for this can be found [here](https://github.com/ethereum/eth2.0-devops/blob/raw-iron-documentation-update/raw-iron/ansible/playbooks/provision-proxmox-host.yml)
1. Add the APT source: `echo "deb http://download.proxmox.com/debian/pve buster pve-no-subscription" > /etc/apt/sources.list.d/pve-install-repo.list`
2. Add the key: `wget http://download.proxmox.com/debian/proxmox-ve-release-6.x.gpg -O /etc/apt/trusted.gpg.d/proxmox-ve-release-6.x.gpg`
3. Update the packages with: `apt-get update && apt-get upgrade`
4. Upgrade the dist: `apt-get dist-upgrade`
5. Remove conflicting firmware packages: `aptitude -q -y purge firmware-bnx2x firmware-realtek firmware-linux firmware-linux-free firmware-linux-nonfree`
6. Install Proxmox with: `apt-get install proxmox-ve`
7. Reboot the host and check that the Proxmox kernel has been loaded with: `uname -rv`
8. Ensure the `kvm` module has been loaded with: `lsmod | grep kvm` (a quick sanity check for both follows below)
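As a post-reboot sanity check, something like this should work (a sketch; the exact kernel version string will differ per installation):
```
# The running kernel version should contain "pve" after the reboot
uname -rv

# kvm_intel (or kvm_amd on AMD hardware) and kvm should both be listed
lsmod | grep kvm
```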

At this point, we have Proxmox installed and "only" have to set up networking.

The dedicated instances I have seen so far do not come with a DHCP server and are not inside a private network. This networking
setup assumes your dedicated instance is the same.

My approach for networking is to use two virtual interfaces: one for public internet access and one subnet for all the VMs (the VM subnet).
To help with peering/external access, one can set up a manual port forward between the two NICs. There is a script
to achieve this; I will link it at the end. We use NAT and masquerading to allow traffic to flow from the VM subnet to
the external internet. To SSH into the VMs, we then need to use the Proxmox host as a jumphost - information
on how to do this is shared later on.


9. Make a backup of `/etc/network/interfaces` in case of an error.

WARNING!!!!!: Inline comments with "#" lead to an error in the `/etc/network/interfaces` file. Do not use inline comments; put comments on their own lines.

10. Open `/etc/network/interfaces` using `nano` or any other editor and edit the file following this template:
```
# /etc/network/interfaces
# Loopback device:
auto lo
iface lo inet loopback
# device: eth0
auto enp0s31f6 <OR THE DEFAULT NIC NAME>
iface enp0s31f6 <OR THE DEFAULT NIC NAME> inet static
# This is the public IP used to SSH into the server
address <ENTER PUBLIC IP OF SERVER>
# The interface is limited to just this one IP: it is a /32 subnet, therefore we use a .255 netmask
netmask 255.255.255.255
# Pointopoint allows us to configure traffic forwarding from the VM interface
pointopoint <ENTER GATEWAY IP>
gateway <ENTER GATEWAY IP>
iface enp0s31f6 inet6 static
address <ENTER PUBLIC IPv6 OF SERVER>
netmask 128
gateway <ENTER GATEWAY IPv6 PROVIDED>
up sysctl -p
# For a subnet
auto vmbr0
iface vmbr0 inet static
address 10.10.10.1 <OR A DIFFERENT SUBNET IP>
# ENTER A NETMASK OF /24, ALLOWING FOR 254 USABLE ADDRESSES (VMS) IN THIS SUBNET
netmask 255.255.255.0
# THERE IS NO BRIDGE PORT AS SUCH, WE WILL USE MASQUERADING INSTEAD
bridge_ports none
bridge_stp off
bridge_fd 0
# IP FORWARDING NEEDS TO BE ENABLED FOR THIS TO WORK
post-up echo 1 > /proc/sys/net/ipv4/ip_forward
# Set up NAT and masquerading between the interfaces after the interface is active
post-up iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o enp0s31f6 -j MASQUERADE <OR A DIFFERENT SUBNET IP>
# Delete the rule once the interface is down
post-down iptables -t nat -D POSTROUTING -s '10.10.10.0/24' -o enp0s31f6 -j MASQUERADE <OR A DIFFERENT SUBNET IP>
```

WARNING!!! Here be dragons!!!
Warning again: the networking template shown above is just that, a template. Please do not use it as the final
version without adapting it to your environment.

11. Once you are sure (!!), restart the networking stack with `systemctl restart networking`. Run `ip addr list` to verify
the interface IPs and status.
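After the restart, a few extra checks beyond `ip addr list` can catch mistakes early (a sketch, assuming the template above; ideally run from an out-of-band console, since a broken config can lock you out of SSH):
```
ip route                             # the default route should point at the pointopoint gateway
ping -c 3 1.1.1.1                    # confirm outbound IPv4 connectivity still works
iptables -t nat -L POSTROUTING -vn   # the MASQUERADE rule should be listed
```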

12. Now we should have networking up; however, we do not have DHCP yet (unless there is an upstream DHCP server). We will
set up the DHCP server to listen on `vmbr0`. Install the DHCP server with `apt install isc-dhcp-server -y`.

13. Edit `/etc/dhcp/dhcpd.conf` (with `nano` or any other editor) with your DHCP config as shown here:
```
option domain-name "proxmox.whatever" <OR NAME>;
option domain-name-servers 1.1.1.1, 8.8.8.8;
authoritative;
subnet 10.10.10.0 netmask 255.255.255.0 {
  range 10.10.10.20 10.10.10.200; <OR A DESIRED RANGE>
  <FIXED IPs CAN BE SETUP HERE>
  option routers 10.10.10.1; <THIS IS THE SAME IP ADDRESS SET IN THE vmbr0 CONFIGURATION>
}
```
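If specific VMs should always receive the same address (the `<FIXED IPs CAN BE SETUP HERE>` placeholder), a host entry can be appended; a sketch with a made-up MAC address and IP:
```
# Pin a hypothetical VM to 10.10.10.50 based on its MAC address
cat >> /etc/dhcp/dhcpd.conf <<'EOF'
host example-vm {
  hardware ethernet 52:54:00:12:34:56;
  fixed-address 10.10.10.50;
}
EOF
```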

14. Since we want the DHCP server to listen purely on the VM subnet, edit the `/etc/default/isc-dhcp-server` file and set
`INTERFACESv4="vmbr0"`.

15. Once set up, restart the service so it picks up the config with `systemctl restart isc-dhcp-server`.

16. Now reboot the system, visit the Proxmox UI at `https://PROXMOX-PUBLIC-IP:8006/`, and log in with your Linux
username and password.
17. Remove `rpcbind`, which is not needed by Proxmox for most use cases and is a security hole:
```
sudo systemctl stop rpcbind.target && sudo systemctl disable rpcbind.target
sudo systemctl stop rpcbind.socket && sudo systemctl disable rpcbind.socket
sudo systemctl stop rpcbind.service && sudo systemctl disable rpcbind.service
```
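To confirm that `rpcbind` is really gone, check that nothing is listening on the portmapper port (111) any more:
```
# Should print "rpcbind not listening" once the units are stopped and disabled
ss -tlnp | grep ':111 ' || echo "rpcbind not listening"
```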

# Create a VM
1. Log in to the Proxmox UI, choose one of the hosts, and choose the storage (usually called `local`).
2. Switch to the tab called `ISO Images`, choose `Upload` and select an ISO file (get one from the Ubuntu website).
3. Choose `Create VM` on the top right and go through the installer.
4. Once the VM has started, you can click on the VM under the host and choose `Console` to get access to the visual output.
5. Test internet functionality to ensure everything works as expected.
6. Delete the VM so that the host can join the Proxmox cluster. A host cannot join a cluster while it has any resources
on it.

# Joining a Proxmox cluster
0. Ensure your host is provisioned EXACTLY how you want it. It is very hard to change a Proxmox host once it joins a cluster.
1. Go to your existing Proxmox cluster and choose `Datacenter > Cluster > Join Information` (use `Create Cluster` if one doesn't exist at all).
2. Copy the join information. Go to your NEW Proxmox instance, choose `Datacenter > Cluster > Join Cluster`, and paste the
join information.
3. Wait for the join to complete. Switch back to the old Proxmox cluster GUI; the new host should be present in the cluster.
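The join can also be verified from a shell on any cluster member using the standard Proxmox CLI tools:
```
pvecm status   # quorum information and member count
pvecm nodes    # the new host should appear in this list
```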

# LXC containers vs VMs
LXC stands for Linux Containers, and KVM is an acronym for Kernel-based Virtual Machine. The main difference is that
virtual machines require their own kernel instance to run, while containers share the host's kernel.

The rest of this guide assumes that we want to run VMs.
42 changes: 42 additions & 0 deletions 2.prepare-terraform-access.md
# Setting up Proxmox for automation
While the Proxmox GUI is great for getting an overview, it is horrible for repetitive tasks and large workloads. We need
some automation. We can use Terraform to create the VMs and Ansible to provision the instances.

### Terraform

Terraform is used by EF devops to create and manage instances. Proxmox has a community provider known as `Telmate/proxmox`
that can be used with Terraform. Have a look at the Terraform example in `terraform-example/environment/example/main.tf`.
Another detailed example can be found here: https://yetiops.net/posts/proxmox-terraform-cloudinit-saltstack-prometheus/


## Prerequisite
Before we use Terraform, we need to create a template that we can base our images on. This is similar to an
AMI on AWS. Follow this guide for creating an image: https://yetiops.net/posts/proxmox-terraform-cloudinit-saltstack-prometheus/
WARNING!!! After importing the disk image to Proxmox storage, you will see a path where the image was imported. Use that
path to attach the disk to the virtual machine. Otherwise you will just get storage errors and cannot use the template.

WARNING!!! Make sure you set a "unique" name for the template, ideally "name-$NAME-OF-NODE" or so. Having the same template
name across multiple nodes leads to a `500 non-shared storage` error, since Terraform tries to use a template
that is on a different host. Using unique names and setting them in the resource avoids this problem entirely.

E.g.:
```
# Import the disk image; note the path printed in the output
qm importdisk 9001 debian-10-openstack-amd64.qcow2 local
# Output: Successfully imported disk as unused0:local:9001/vm-9001-disk-0.raw

# Attach the disk using the path from the output above
qm set 9001 -scsihw virtio-scsi-pci -virtio0 local:9001/vm-9001-disk-0.raw
```
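The steps that typically follow in the linked guide, turning the VM into a reusable cloud-init template, look roughly like this (a sketch; the VM ID and disk match the example above):
```
# Add a cloud-init drive, boot from the imported disk, and enable a serial console
qm set 9001 --ide2 local:cloudinit --boot c --bootdisk virtio0 --serial0 socket
# Convert the VM into a template that Terraform can clone
qm template 9001
```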

## Instructions
1. Terraform requires a username:password pair to authenticate itself against the API, so create the requisite pair in the GUI.
2. Add the `pm_api_url`, `pm_user` and `pm_password` directly in the `main.tf`, OR export them to your env by
setting `TF_VAR_pm_user`, `TF_VAR_pm_password`, `TF_VAR_pm_api_url` (see the sketch after this list).
3. Modify the information in your `main.tf` to match your environment and needs. Target node refers to the physical Proxmox
node on which the VM needs to be created, so you would need one module per region.
4. Run `terraform init` to init the provider.
5. The Terraform file `main.tf` uses `cloud-init` to provision the instance from a "template image" and then runs custom
scripts, creates users, SSH keys, etc. Look at `terraform-example/modules/instances/resource.tf`
and `terraform-example/environment/example/files/cloud_init_deb10.cloud_config`.
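A sketch of the environment-variable route (the values are placeholders; the `/api2/json` suffix is what the `Telmate/proxmox` provider expects in its API URL):
```
# Hypothetical endpoint and credentials - replace with your own
export TF_VAR_pm_api_url="https://PROXMOX-HOST:8006/api2/json"
export TF_VAR_pm_user="terraform@pve"
export TF_VAR_pm_password="PASSWORD"

cd terraform-example/environment/example
terraform init    # downloads the Telmate/proxmox provider
terraform plan    # review the planned changes first
terraform apply   # create the VMs
```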

Note: Terraform doesn't explicitly create "Tags" with the Proxmox provider; instead, the tags can be saved as a string
in the "Description" field of Proxmox.
17 changes: 17 additions & 0 deletions 3.prepare-ansible-access.md
# Setting up and using Ansible

Ansible allows for using a dynamic inventory; this inventory can be a simple Python script.

- No extra dependencies are needed besides those specified in `pyproject.toml`; `poetry install` installs the deps.
- Configure the credentials in `proxmox.json`.
- An example Ansible inventory can be found here: `ansible-example/inventory/proxmox.py`. Edit the `project_filters` and the
`FetchPublicIPAddress` function (the reason is described below).
- The dynamic inventory can then be generated with `ansible-inventory -i <PATH>/inventory/proxmox.py --list`.
- Test ping with `ansible -i <PATH>/inventory/proxmox.py -m ping all`.
- Run an example ansible-playbook to confirm that you can reach the VMs and run playbooks against them.

Note:
While the inventory queries the public API endpoint, the Ansible scripts are run on the VMs themselves. These VMs
will only return a `private subnet IP`, and our system will not have a path to it. This is why we need to use
the jump host method of connecting to the VMs. Make sure that you have SSH access (ideally via key) via `USERNAME@PROXMOX-HOST-IP`.
TL;DR: All the traffic will essentially "jump" through the Proxmox host to the VM you want to interact with.
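A sketch of what the jump looks like in practice (the user name and VM IP are hypothetical):
```
# Ad hoc: jump through the Proxmox host to a VM on the private subnet
ssh -J devops@PROXMOX-HOST-IP devops@10.10.10.50

# For Ansible, the same jump can be configured via the SSH common args, e.g. in
# group_vars: ansible_ssh_common_args: '-o ProxyJump=devops@PROXMOX-HOST-IP'
```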
25 changes: 25 additions & 0 deletions 4.setup-port-forwards.md
# Set up port forwards

The VMs are inside their own VM subnet and have no direct route to the outside world. This means they can only have outgoing
traffic. While this is fine for most use cases, it breaks down with eth2 nodes. Eth2 nodes use discv5 to find peers;
in the absence of something like `upnp`, they would not be able to connect to any peers, since they are completely cut off
from incoming traffic.

The solution is to simply set up a port forward from the host's public IP to the subnet IP. We can then
advertise the port as the `p2p-udp-port/p2p-tcp-port` via CLI flags to ensure connectivity. Naturally we don't want the ports
to overlap, so we always use a pre-defined offset (9000) + `vmid` (which is unique for each VM), as the example below shows.
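For instance, a VM with `vmid` 105 and (hypothetical) subnet IP 10.10.10.105 would get host port 9000 + 105 = 9105 forwarded to it, roughly like this:
```
# Forward host port 9105 to the VM for both TCP and UDP
iptables -t nat -A PREROUTING -i enp0s31f6 -p tcp --dport 9105 -j DNAT --to-destination 10.10.10.105:9105
iptables -t nat -A PREROUTING -i enp0s31f6 -p udp --dport 9105 -j DNAT --to-destination 10.10.10.105:9105
```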

I've created a script to simplify the process; it generates a basic `iptables` script that sets up this route.

## Instructions
- Navigate to `scripts/proxmox-port-forwarding/inventory`
- Install dependencies with `poetry install`
- Run the script with:
`python3 proxmox-port-forwarding/inventory/proxmox.py --url=https://URL:8006/ --username=USERNAME@pve --password=PASSWORD --qemu_interface=ens18 --trust-invalid-certs --list --pretty`
- Move the generated `port-forwarding-script-NODE-NAME.sh` to the Proxmox host with `scp` or similar.
- Run `chmod +x port-forwarding-script-NODE-NAME.sh` and manually verify the script.
- Run the script with `./port-forwarding-script-NODE-NAME.sh`
- Verify the forwards with `iptables -vnxL -t nat`

NOTE: You will need to redo the port forwards each time you set up a new VM that requires a port forward.

12 changes: 12 additions & 0 deletions README.md
# Proxmox-Terraform-Ansible

This repo can be used to set up your own mini-cloud. The idea is to use dedicated infrastructure and install
our own virtualization engine. This engine can then be used in combination with Terraform and Ansible to
create an experience similar to using any other cloud service provider.

# How to use this repo:
- Follow the numbered guides in order




2 changes: 2 additions & 0 deletions ansible-example/ansible.cfg
[defaults]
host_key_checking = False
3 changes: 3 additions & 0 deletions ansible-example/inventory/group_vars/all.yaml
ansible_python_interpreter: /usr/bin/python3
ansible_user: devops
pip_package: python3-pip
81 changes: 81 additions & 0 deletions ansible-example/inventory/poetry.lock


7 changes: 7 additions & 0 deletions ansible-example/inventory/proxmox.json
{
  "url": "https://URL:8006/",
  "username": "USERNAME@pve",
  "password": "PASSWORD",
  "validateCert": false,
  "qemu_interface": "eth0"
}