GCP demo: VPC peering & reverse proxy

This repo shows how to establish VPC (Virtual Private Cloud ☁️) network peering in Google Cloud Platform (GCP πŸš€) and how to use Nginx as a reverse proxy to interact with the servers in the peered networks. In this demonstration, we'll use three VPCs located in three different geographical regions, each hosting a server (virtual machine) that runs a single service. We'll connect the VPCs according to the following diagram (Figure 1), establishing communication between an API server and a database via a proxy service.


diagram-001.png

Fig.1 - A schematic representation of the demo

The following table summarizes the services that will be created in this demonstration. We'll refer to this table throughout the tutorial.


Table 1 - A summary of the services in GCP

| SL | VPC Name | VPC Location | VPC Subnet Name | IPv4 Network | VM Name | VM IP Address | Container Name | Exposed Port |
|----|-----------|-------------|-------------------|----------------|----------|---------------|----------------|--------------|
| 1 | vpc-api | us-east1 | vpc-api-subnet | 10.10.0.0/24 | vm-api | 10.10.0.2 | api | 3000 |
| 2 | vpc-db | us-west1 | vpc-db-subnet | 192.168.0.0/24 | vm-db | 192.168.0.2 | db | 3306 |
| 3 | vpc-proxy | us-central1 | vpc-proxy-subnet | 172.16.0.0/24 | vm-proxy | 172.16.0.2 | reverse-proxy | 80 |

Note If you're following along, keep every setting at its default while creating services in Google Cloud Platform (GCP) unless otherwise mentioned, so that the steps are reproducible.


Creating VPC in GCP

Let's start by creating our VPCs. To create a VPC, go to VPC network from the menu and select VPC networks, followed by Create VPC network. Provide a name for the VPC, then create a new subnet by specifying the subnet name, region, and IPv4 range in CIDR notation. We'll allow the default firewall rules and click the Create button to finish the VPC creation process.

We'll create 3 VPCs for our demo: vpc-api, vpc-db and vpc-proxy. The subnet names will follow the pattern <VPC_NAME>-subnet and the IPv4 ranges will be as listed in Table 1.
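
If you prefer the command line, a rough gcloud equivalent of the console steps above might look like the sketch below (custom subnet mode is assumed; names, regions and ranges come from Table 1, and the console's suggested firewall rules would need to be added separately):

# Sketch: create vpc-api and its subnet with gcloud (repeat for vpc-db and vpc-proxy per Table 1)
gcloud compute networks create vpc-api --subnet-mode=custom
gcloud compute networks subnets create vpc-api-subnet \
    --network=vpc-api --region=us-east1 --range=10.10.0.0/24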

The step-by-step process for creating each VPC is shown below,

Creating vpc-api
vpc-image-001.png vpc-image-002.png vpc-image-003.png vpc-image-004.png
Creating vpc-db
vpc-image-005.png vpc-image-006.png vpc-image-007.png vpc-image-008.png
Creating vpc-proxy
vpc-image-009.png vpc-image-010.png vpc-image-011.png vpc-image-012.png

Creating VM in GCP

Once we're done creating the VPCs, we'll create our servers (VMs). To create a VM, go to Compute Engine from the menu and select VM instances, followed by Create instance. We'll provide a name, select a region, and stick with the default zone of that region. Then go to the Networking settings under Advanced options and specify a new network interface by selecting a network and a subnetwork. We'll create one VM in each VPC, namely vm-api, vm-db and vm-proxy. The region of each VM will be the same as its corresponding VPC, and the network and subnetwork will be the name of that VPC and its subnet, respectively. We'll complete the process by clicking the Create button.
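
For reference, a gcloud sketch of the same step for vm-api is shown below (the zone is an assumption; any zone in us-east1 will do):

# Sketch: create vm-api inside vpc-api (repeat for vm-db and vm-proxy with their regions and subnets)
gcloud compute instances create vm-api \
    --zone=us-east1-b \
    --network=vpc-api --subnet=vpc-api-subnet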


The step-by-step process for creating each VM is shown below,

Creating vm-api
vm-001.png vm-002.png vm-003.png vm-004.png vm-005.png vm-006.png
Creating vm-db
vm-007.png vm-008.png vm-009.png vm-010.png vm-011.png vm-012.png vm-013.png
Creating vm-proxy
vm-014.png vm-015.png vm-016.png vm-017.png vm-018.png vm-019.png vm-020.png

Now we have three VM servers sitting in three different VPCs, so they can't communicate with each other yet. To establish a connection between two VPCs, we need to create a VPC peering connection. Peering is configured in both directions: for VPC A and VPC B, we create a peering from VPC A to VPC B and another peering from VPC B to VPC A. For the demo, we'll create a connection between vpc-api & vpc-proxy as well as between vpc-proxy & vpc-db. We won't create a connection between vpc-api & vpc-db, since the API service will not communicate with the DB directly but will instead go through the proxy server.
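
A gcloud sketch of one of these peering pairs is shown below (the peering names are illustrative):

# Sketch: peer vpc-api and vpc-proxy in both directions (repeat the pair for vpc-proxy and vpc-db)
gcloud compute networks peerings create api-to-proxy \
    --network=vpc-api --peer-network=vpc-proxy
gcloud compute networks peerings create proxy-to-api \
    --network=vpc-proxy --peer-network=vpc-api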

The step-by-step process for creating a peering connection is shown below,

Connecting vpc-api and vpc-proxy
peering-002.png peering-003.png peering-004.png peering-005.png
Connecting vpc-db and vpc-proxy
peering-006.png peering-007.png peering-008.png

Ping the server!

Now we can test the connections between the VPCs using the ping command, which sends ICMP ECHO_REQUEST packets to network hosts. To test a connection we'll need the internal IP address of the server in each VPC; we can obtain these from the VM instances panel under Compute Engine in the menu. Once we have the IP addresses, we can SSH into each server and use ping to test the connection to a server in the peered network of that VPC. First, we'll SSH into vm-proxy and type ping <IP_OF_DESTINATION_SERVER> in the command line to reach the vm-db server. We'll repeat the process for vm-db to vm-proxy, vm-proxy to vm-api and vm-api to vm-proxy to test those connections. If a connection is established, ping will log statistics in the CLI.
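
For example, from vm-proxy the checks would look like this, using the internal IPs from Table 1:

# From vm-proxy: ping the peered servers (internal IPs per Table 1)
ping 192.168.0.2    # vm-db
ping 10.10.0.2      # vm-api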

Testing with ping
ping-test-001.png ping-test-002.png ping-test-003.png ping-test-004.png

Testing with Telnet

In our architecture (Fig. 1), the vm-proxy server will communicate with the vm-db server on port 3306. Similarly, the vm-api server will communicate with the vm-proxy server on port 80. We can use telnet to check whether the respective servers are reachable on the intended ports by running telnet <DESTINATION_IP> <DESTINATION_PORT> in the CLI, which attempts a TCP connection to the destination server. However, if we use telnet right now to connect to the DB from the proxy server, it will just keep trying without ever establishing the connection.
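
Concretely, the two checks based on Table 1 are:

# From vm-proxy: check the db port on vm-db (this will hang until the firewall rule in the next section is added)
telnet 192.168.0.2 3306
# From vm-api: check the Nginx port on vm-proxy
telnet 172.16.0.2 80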

Testing with telnet
telnet-test-001.png telnet-test-006.png

The reason for telnet's failure is the infamous firewall!

Providing Firewall Access

To allow a TCP connection on a certain port to a VM inside a VPC, we need to allow ingress to that port in the firewall. To add a rule, select Firewall under the VPC network option in the menu, click CREATE FIREWALL RULE and provide a name. Next, select the network: this is the VPC you want to connect to. Set Targets to All instances in the network. Then provide the source IPv4 range; for example, since the proxy server will connect to the DB server, this range will be the proxy server's IP or the network in which that server is hosted. Finally, select the Specified protocols and ports radio button, check TCP and type the desired port in the port field (for the DB server this is 3306). Click the Create button to finish the process.
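
The gcloud sketch below creates the two rules described above (the rule names are illustrative; source ranges and ports follow Table 1):

# Allow the proxy subnet to reach the DB server on 3306
gcloud compute firewall-rules create allow-proxy-to-db-3306 \
    --network=vpc-db --direction=INGRESS \
    --allow=tcp:3306 --source-ranges=172.16.0.0/24
# Allow the API subnet to reach the proxy server on 80
gcloud compute firewall-rules create allow-api-to-proxy-80 \
    --network=vpc-proxy --direction=INGRESS \
    --allow=tcp:80 --source-ranges=10.10.0.0/24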

The step-by-step process for creating a firewall rule is shown below,

Allow port 3306 from proxy to db
allow-3306-001.png allow-3306-002.png allow-3306-003.png
Allow port 80 from api to proxy
allow-80-001.png allow-80-002.png allow-80-003.png

Now that we've allowed ingress to the DB server on port 3306 from the proxy server, and to the proxy server on port 80 from the API server, running the telnet commands mentioned previously will establish a successful connection.

Service Container Setup

Now we can SSH into each VM server and start creating the services they will be hosting. To ease the process, this repo contains all the necessary files for the Docker containers that will act as services inside each VM. After getting into the CLI of each VM, we'll update the system and install git, telnet, docker and docker-compose using the following commands,

sudo apt update -y
sudo apt install git telnet docker.io docker-compose -y

Afterward, we'll clone this repository to each VM, move to the directory of the corresponding service and use the sudo docker-compose up -d command to start it. For example, to spin up the database service, we'll SSH into vm-db, clone the repository, move to the db directory using cd <REPOSITORY_NAME>/db and run sudo docker-compose up -d to start the db container. We'll do the same for the api and reverse-proxy containers.
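
Put together, the sequence on vm-db looks roughly like this (the repository URL and directory name are left as placeholders):

# On vm-db: clone the repo and start the db service in the background
git clone <REPOSITORY_URL>
cd <REPOSITORY_NAME>/db
sudo docker-compose up -d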

Note The services need to be started in the following order: db, then reverse-proxy, then api. This is crucial because the api assumes that the db service is already running and will try to populate some initial data when it starts for the first time.

The step-by-step process for creating service containers is shown below,

Creating DB container
container-db-001.png container-db-002.png container-db-003.png container-db-004.png container-db-005.png container-db-006.png container-db-007.png
Creating proxy container
container-proxy-001.png container-proxy-002.png container-proxy-003.png container-proxy-004.png container-proxy-005.png container-proxy-006.png
Creating api container
container-api-001.png container-api-002.png container-api-003.png container-api-004.png container-api-005.png container-api-006.png

Testing the API

Once all the containers are up and running, we can test the API using the curl command from the CLI of vm-api. Here, the api service makes requests to the proxy service for the data. When the proxy service receives a request, it forwards it to the db service, which responds depending on the query. The response from the db service then goes back to the proxy service, which in turn sends it to the api service. You can see the results below,
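
As a rough sketch, the request from vm-api would look something like the following; the exact route depends on the api service in this repo, so the path shown here is only a placeholder:

# From vm-api: hit the api container on its exposed port (3000 per Table 1); the route is a placeholder
curl http://localhost:3000/<SOME_ROUTE>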

Result of testing the api
api-testing-001.png api-testing-002.png api-testing-003.png api-testing-004.png api-testing-005.png api-testing-006.png

We can see a successful response from the server, which confirms that the api, proxy, and db services can communicate across the peered VPCs. This marks the end of this tutorial.


Thanks for reading.