
[STACK-2950]: Batch member update optimization for Rack flows #559

Open

wants to merge 7 commits into base: feature/batch_member_update_enhancement
Conversation

NehaKembalkarA10
Contributor

Description

Created batch tasks and added them to the batch_member_update Rack flows so that member-related information is gathered in bulk up front, instead of calling the same tasks repeatedly for each member of a batch query inside for loops.

Jira Ticket

https://a10networks.atlassian.net/browse/STACK-3178
https://a10networks.atlassian.net/browse/STACK-3179
https://a10networks.atlassian.net/browse/STACK-3180
https://a10networks.atlassian.net/browse/STACK-3181
https://a10networks.atlassian.net/browse/STACK-3184
https://a10networks.atlassian.net/browse/STACK-3185

Technical Approach

  • Created a batch task CountBatchMembersWithIPPortProtocol() that returns two dictionaries, {member_ip_address: member_count_ip} and {member_ip_address: member_count_ip_port_protocol}, covering both the new and old members of a batch update.
  • Created a batch task GetNatPoolEntryForBatchMembers() that returns a dictionary {member_subnet_id: nat_pool} for the old members in a batch update.
  • Created a batch task DeleteSubnetAddressAndNatPoolForBatchMembers() that releases the subnet_address and nat_pool associated with every member in the old_members list of a batch update.
  • Modified the ValidateSubnet() task so that it can also operate on a list of members in the batch update case.
  • Modified the MemberCreate() and MemberDelete() tasks so that the member_count_ip and member_count_ip_port_protocol values they need are looked up in the dictionaries produced by the CountBatchMembersWithIPPortProtocol() batch task, keyed by the member being created or deleted.
  • Added the new batch tasks to get_rack_vthunder_batch_update_members_flow ahead of the for loops over new_members, old_members and updated_members.
  • Removed the now-redundant tasks from inside those loops.
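The counting task described in the first bullet could be sketched roughly as follows. This is a minimal illustration only: the member fields, the default protocol, and the class shape are assumptions, not the actual Octavia/a10 task code, which reads its counts from the database rather than from the in-memory batch.

```python
# Hypothetical sketch of a batch count task: build per-IP and
# per-(IP, port, protocol) counts in one pass over the batch,
# instead of issuing one DB query per member inside a loop.
from collections import Counter


class CountBatchMembersWithIPPortProtocol:
    """Return the two count dictionaries used by MemberCreate/MemberDelete."""

    def execute(self, members):
        mem_count_ip = Counter()
        mem_count_ip_port_protocol = Counter()
        for m in members:
            mem_count_ip[m["ip_address"]] += 1
            # protocol defaults to "tcp" here purely for illustration
            mem_count_ip_port_protocol[
                (m["ip_address"], m["protocol_port"], m.get("protocol", "tcp"))
            ] += 1
        return dict(mem_count_ip), dict(mem_count_ip_port_protocol)
```

A later task can then look up counts by `member.ip_address` instead of re-querying per member.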

Config Changes

N/A

Manual Testing

  1. Create a flavorprofile and flavor for nat-pool

stack@openstack-3:~$ openstack loadbalancer flavorprofile show fpn

+---------------+------------------------------------------------------------------------------------------------------------+
| Field         | Value                                                                                                      |
+---------------+------------------------------------------------------------------------------------------------------------+
| id            | 6553359e-751d-4683-84b2-6c39997388a0                                                                       |
| name          | fpn                                                                                                        |
| provider_name | a10                                                                                                        |
| flavor_data   | {"nat-pool":{"pool-name":"pool1","start-address":"10.0.12.11","end-address":"10.0.12.12","netmask":"/24"}} |
+---------------+------------------------------------------------------------------------------------------------------------+

stack@openstack-3:~$ openstack loadbalancer flavor show fn

+-------------------+--------------------------------------+
| Field             | Value                                |
+-------------------+--------------------------------------+
| id                | bcf588e8-ecc3-444f-a31b-8d624210fed7 |
| name              | fn                                   |
| flavor_profile_id | 6553359e-751d-4683-84b2-6c39997388a0 |
| enabled           | True                                 |
| description       |                                      |
+-------------------+--------------------------------------+
  2. Create a loadbalancer using the above flavor, plus a listener and a pool

stack@openstack-3:~$ openstack loadbalancer create --name lb1 --vip-subnet-id public-11-subnet --flavor fn

+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| admin_state_up      | True                                 |
| created_at          | 2021-12-01T06:49:46                  |
| description         |                                      |
| flavor_id           | bcf588e8-ecc3-444f-a31b-8d624210fed7 |
| id                  | 7ae7360f-3cdc-4bb3-83d4-362f22ed3499 |
| listeners           |                                      |
| name                | lb1                                  |
| operating_status    | OFFLINE                              |
| pools               |                                      |
| project_id          | 9ef5e94c53c940239a66dbe4a1058eee     |
| provider            | a10                                  |
| provisioning_status | PENDING_CREATE                       |
| updated_at          | None                                 |
| vip_address         | 10.0.11.182                          |
| vip_network_id      | 6df8ee71-7519-4f9d-8300-44acfa2fe325 |
| vip_port_id         | 6e0f7cd2-90b6-4a3e-92fc-b2d97fd0cd2f |
| vip_qos_policy_id   | None                                 |
| vip_subnet_id       | 184910cd-74d4-4fd5-a3b2-ebad65d4cd44 |
+---------------------+--------------------------------------+

stack@openstack-3:~$ openstack loadbalancer listener create --name l1 --protocol http --protocol-port 80 lb1

+-----------------------------+--------------------------------------+
| Field                       | Value                                |
+-----------------------------+--------------------------------------+
| admin_state_up              | True                                 |
| connection_limit            | -1                                   |
| created_at                  | 2021-12-01T06:50:12                  |
| default_pool_id             | None                                 |
| default_tls_container_ref   | None                                 |
| description                 |                                      |
| id                          | 7f81dbf3-f9dd-4ac2-a500-5d5f49f9485a |
| insert_headers              | None                                 |
| l7policies                  |                                      |
| loadbalancers               | 7ae7360f-3cdc-4bb3-83d4-362f22ed3499 |
| name                        | l1                                   |
| operating_status            | OFFLINE                              |
| project_id                  | 9ef5e94c53c940239a66dbe4a1058eee     |
| protocol                    | HTTP                                 |
| protocol_port               | 80                                   |
| provisioning_status         | PENDING_CREATE                       |
| sni_container_refs          | []                                   |
| timeout_client_data         | 50000                                |
| timeout_member_connect      | 5000                                 |
| timeout_member_data         | 50000                                |
| timeout_tcp_inspect         | 0                                    |
| updated_at                  | None                                 |
| client_ca_tls_container_ref | None                                 |
| client_authentication       | NONE                                 |
| client_crl_container_ref    | None                                 |
+-----------------------------+--------------------------------------+

stack@openstack-3:~$ openstack loadbalancer pool create --name p1 --protocol HTTP --lb-algorithm LEAST_CONNECTIONS --listener l1

+----------------------+--------------------------------------+
| Field                | Value                                |
+----------------------+--------------------------------------+
| admin_state_up       | True                                 |
| created_at           | 2021-12-01T06:50:26                  |
| description          |                                      |
| healthmonitor_id     |                                      |
| id                   | 70cfaa46-c7a6-441f-82c7-17e1052fc0d4 |
| lb_algorithm         | LEAST_CONNECTIONS                    |
| listeners            | 7f81dbf3-f9dd-4ac2-a500-5d5f49f9485a |
| loadbalancers        | 7ae7360f-3cdc-4bb3-83d4-362f22ed3499 |
| members              |                                      |
| name                 | p1                                   |
| operating_status     | OFFLINE                              |
| project_id           | 9ef5e94c53c940239a66dbe4a1058eee     |
| protocol             | HTTP                                 |
| provisioning_status  | PENDING_CREATE                       |
| session_persistence  | None                                 |
| updated_at           | None                                 |
| tls_container_ref    | None                                 |
| ca_tls_container_ref | None                                 |
| crl_container_ref    | None                                 |
| tls_enabled          | False                                |
+----------------------+--------------------------------------+

Result on vThunder:

vThunder(NOLICENSE)#show running-config
!Current configuration: 283 bytes
!Configuration last updated at 06:50:25 GMT Wed Dec 1 2021
!Configuration last saved at 06:50:28 GMT Wed Dec 1 2021
!64-bit Advanced Core OS (ACOS) version 5.2.1, build 153 (Dec-11-2020,14:16)
!
!
interface management
  ip address dhcp
!
interface ethernet 1
!
interface ethernet 2
!
vrrp-a vrid 0
  floating-ip 10.0.11.164
!
ip nat pool pool1 10.0.12.11 10.0.12.12 netmask /24
!
slb service-group 70cfaa46-c7a6-441f-82c7-17e1052fc0d4 tcp
  method least-connection
!
slb virtual-server 7ae7360f-3cdc-4bb3-83d4-362f22ed3499 10.0.11.182
  port 80 http
    name 7f81dbf3-f9dd-4ac2-a500-5d5f49f9485a
    extended-stats
    source-nat pool pool1
    service-group 70cfaa46-c7a6-441f-82c7-17e1052fc0d4
!
!
cloud-services meta-data
  enable
  provider openstack
!
end
  3. Create 5 members using the following batch member update script:
resource "openstack_lb_members_v2" "members_1" {
  pool_id = "70cfaa46-c7a6-441f-82c7-17e1052fc0d4"
  member {
    address       = "10.0.12.44"
    subnet_id     = "da29e3b4-c885-4834-b4ee-228d7d1bfac1"
    protocol_port = 90
    name          = "m1"
    weight        = 10
  }
  member {
    address       = "10.0.12.45"
    subnet_id     = "da29e3b4-c885-4834-b4ee-228d7d1bfac1"
    protocol_port = 91
    name          = "m2"
    weight        = 15
  }
  member {
    address       = "10.0.12.46"
    subnet_id     = "da29e3b4-c885-4834-b4ee-228d7d1bfac1"
    protocol_port = 92
    name          = "m3"
    weight        = 15
  }
  member {
    address       = "10.0.12.47"
    subnet_id     = "da29e3b4-c885-4834-b4ee-228d7d1bfac1"
    protocol_port = 93
    name          = "m4"
    weight        = 15
  }
  member {
    address       = "10.0.12.46"
    subnet_id     = "da29e3b4-c885-4834-b4ee-228d7d1bfac1"
    protocol_port = 94
    name          = "m5"
    weight        = 15
  }
}

stack@openstack-3:~/neha$ time terraform apply

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:

  • create

Terraform will perform the following actions:

openstack_lb_members_v2.members_1 will be created

  • resource "openstack_lb_members_v2" "members_1" {
    • id = (known after apply)

    • pool_id = "70cfaa46-c7a6-441f-82c7-17e1052fc0d4"

    • region = (known after apply)

    • member {

      • address = "10.0.12.44"
      • admin_state_up = true
      • id = (known after apply)
      • name = "m1"
      • protocol_port = 90
      • subnet_id = "da29e3b4-c885-4834-b4ee-228d7d1bfac1"
      • weight = 10
        }
    • member {

      • address = "10.0.12.45"
      • admin_state_up = true
      • id = (known after apply)
      • name = "m2"
      • protocol_port = 91
      • subnet_id = "da29e3b4-c885-4834-b4ee-228d7d1bfac1"
      • weight = 15
        }
    • member {

      • address = "10.0.12.46"
      • admin_state_up = true
      • id = (known after apply)
      • name = "m3"
      • protocol_port = 92
      • subnet_id = "da29e3b4-c885-4834-b4ee-228d7d1bfac1"
      • weight = 15
        }
    • member {

      • address = "10.0.12.46"
      • admin_state_up = true
      • id = (known after apply)
      • name = "m5"
      • protocol_port = 94
      • subnet_id = "da29e3b4-c885-4834-b4ee-228d7d1bfac1"
      • weight = 15
        }
    • member {

      • address = "10.0.12.47"
      • admin_state_up = true
      • id = (known after apply)
      • name = "m4"
      • protocol_port = 93
      • subnet_id = "da29e3b4-c885-4834-b4ee-228d7d1bfac1"
      • weight = 15
        }
        }

Plan: 1 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.

Enter a value: yes

openstack_lb_members_v2.members_1: Creating...
openstack_lb_members_v2.members_1: Still creating... [10s elapsed]
openstack_lb_members_v2.members_1: Creation complete after 11s [id=70cfaa46-c7a6-441f-82c7-17e1052fc0d4]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

real 0m17.510s
user 0m1.928s
sys 0m0.382s

stack@openstack-3:~/neha$ openstack loadbalancer member list p1

+--------------------------------------+------+----------------------------------+---------------------+------------+---------------+------------------+--------+
| id                                   | name | project_id                       | provisioning_status | address    | protocol_port | operating_status | weight |
+--------------------------------------+------+----------------------------------+---------------------+------------+---------------+------------------+--------+
| 10034af1-546e-4012-ac35-5c7e4cba21b5 | m2   | 9ef5e94c53c940239a66dbe4a1058eee | ACTIVE              | 10.0.12.45 |            91 | NO_MONITOR       |     15 |
| 20e57ab3-3d10-4eeb-b5a9-8e60dbf8c13f | m5   | 9ef5e94c53c940239a66dbe4a1058eee | ACTIVE              | 10.0.12.46 |            94 | NO_MONITOR       |     15 |
| 37d3dd11-c6a5-4fff-bd48-2b6029a527e0 | m3   | 9ef5e94c53c940239a66dbe4a1058eee | ACTIVE              | 10.0.12.46 |            92 | NO_MONITOR       |     15 |
| 7985c590-5031-46ad-b8dc-9680169a9feb | m4   | 9ef5e94c53c940239a66dbe4a1058eee | ACTIVE              | 10.0.12.47 |            93 | NO_MONITOR       |     15 |
| d7b23dab-e412-4c14-9271-591c07537fa1 | m1   | 9ef5e94c53c940239a66dbe4a1058eee | ACTIVE              | 10.0.12.44 |            90 | NO_MONITOR       |     10 |
+--------------------------------------+------+----------------------------------+---------------------+------------+---------------+------------------+--------+

Result on vThunder:

vThunder(NOLICENSE)#show running-config
!Current configuration: 196 bytes
!Configuration last updated at 07:02:28 GMT Wed Dec 1 2021
!Configuration last saved at 07:02:31 GMT Wed Dec 1 2021
!64-bit Advanced Core OS (ACOS) version 5.2.1, build 153 (Dec-11-2020,14:16)
!
!
interface management
ip address dhcp
!
interface ethernet 1
!
interface ethernet 2
!
vrrp-a vrid 0
floating-ip 10.0.11.164
floating-ip 10.0.12.176
!
ip nat pool pool1 10.0.12.11 10.0.12.12 netmask /24
!
slb server 9ef5e_10_0_12_44 10.0.12.44
port 90 tcp
!
slb server 9ef5e_10_0_12_45 10.0.12.45
port 91 tcp
!
slb server 9ef5e_10_0_12_46 10.0.12.46
port 92 tcp
port 94 tcp
!
slb server 9ef5e_10_0_12_47 10.0.12.47
port 93 tcp

!
slb service-group 70cfaa46-c7a6-441f-82c7-17e1052fc0d4 tcp
method least-connection
member 9ef5e_10_0_12_44 90
member 9ef5e_10_0_12_45 91
member 9ef5e_10_0_12_46 92
member 9ef5e_10_0_12_46 94
member 9ef5e_10_0_12_47 93

!
slb virtual-server 7ae7360f-3cdc-4bb3-83d4-362f22ed3499 10.0.11.182
port 80 http
name 7f81dbf3-f9dd-4ac2-a500-5d5f49f9485a
extended-stats
source-nat pool pool1
service-group 70cfaa46-c7a6-441f-82c7-17e1052fc0d4
!
!
cloud-services meta-data
enable
provider openstack
!
end
!Current config commit point for partition 0 is 0 & config mode is classical-mode

  4. Using the following batch update script, delete members m1, m2 and m3, update members m4 and m5, and create new members m6, m7 and m8.
resource "openstack_lb_members_v2" "members_1" {
  pool_id = "70cfaa46-c7a6-441f-82c7-17e1052fc0d4"

  member {
    address       = "10.0.12.54"
    subnet_id     = "da29e3b4-c885-4834-b4ee-228d7d1bfac1"
    protocol_port = 95
    name          = "m6"
    weight        = 10
  }

  member {
    address       = "10.0.12.55"
    subnet_id     = "da29e3b4-c885-4834-b4ee-228d7d1bfac1"
    protocol_port = 96
    name          = "m7"
    weight        = 15
  }

  member {
    address       = "10.0.12.56"
    subnet_id     = "da29e3b4-c885-4834-b4ee-228d7d1bfac1"
    protocol_port = 97
    name          = "m8"
    weight        = 15
  }

  member {
    address       = "10.0.12.47"
    subnet_id     = "da29e3b4-c885-4834-b4ee-228d7d1bfac1"
    protocol_port = 93
    name          = "m4_update"
    weight        = 17
    admin_state_up = false
  }

  member {
    address       = "10.0.12.46"
    subnet_id     = "da29e3b4-c885-4834-b4ee-228d7d1bfac1"
    protocol_port = 94
    name          = "m5_update"
    weight        = 19
    admin_state_up = false
  }
}

stack@openstack-3:~/neha$ time terraform apply
openstack_lb_members_v2.members_1: Refreshing state... [id=70cfaa46-c7a6-441f-82c7-17e1052fc0d4]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
~ update in-place

Terraform will perform the following actions:

openstack_lb_members_v2.members_1 will be updated in-place

~ resource "openstack_lb_members_v2" "members_1" {
id = "70cfaa46-c7a6-441f-82c7-17e1052fc0d4"
# (1 unchanged attribute hidden)

  - member {
      - address        = "10.0.12.44" -> null
      - admin_state_up = true -> null
      - backup         = false -> null
      - id             = "d7b23dab-e412-4c14-9271-591c07537fa1" -> null
      - name           = "m1" -> null
      - protocol_port  = 90 -> null
      - subnet_id      = "da29e3b4-c885-4834-b4ee-228d7d1bfac1" -> null
      - weight         = 10 -> null
    }
  - member {
      - address        = "10.0.12.45" -> null
      - admin_state_up = true -> null
      - backup         = false -> null
      - id             = "10034af1-546e-4012-ac35-5c7e4cba21b5" -> null
      - name           = "m2" -> null
      - protocol_port  = 91 -> null
      - subnet_id      = "da29e3b4-c885-4834-b4ee-228d7d1bfac1" -> null
      - weight         = 15 -> null
    }
  + member {
      + address        = "10.0.12.46"
      + admin_state_up = false
      + id             = (known after apply)
      + name           = "m5_update"
      + protocol_port  = 94
      + subnet_id      = "da29e3b4-c885-4834-b4ee-228d7d1bfac1"
      + weight         = 19
    }
  - member {
      - address        = "10.0.12.46" -> null
      - admin_state_up = true -> null
      - backup         = false -> null
      - id             = "20e57ab3-3d10-4eeb-b5a9-8e60dbf8c13f" -> null
      - name           = "m5" -> null
      - protocol_port  = 94 -> null
      - subnet_id      = "da29e3b4-c885-4834-b4ee-228d7d1bfac1" -> null
      - weight         = 15 -> null
    }
  - member {
      - address        = "10.0.12.46" -> null
      - admin_state_up = true -> null
      - backup         = false -> null
      - id             = "37d3dd11-c6a5-4fff-bd48-2b6029a527e0" -> null
      - name           = "m3" -> null
      - protocol_port  = 92 -> null
      - subnet_id      = "da29e3b4-c885-4834-b4ee-228d7d1bfac1" -> null
      - weight         = 15 -> null
    }
  + member {
      + address        = "10.0.12.47"
      + admin_state_up = false
      + id             = (known after apply)
      + name           = "m4_update"
      + protocol_port  = 93
      + subnet_id      = "da29e3b4-c885-4834-b4ee-228d7d1bfac1"
      + weight         = 17
    }
  - member {
      - address        = "10.0.12.47" -> null
      - admin_state_up = true -> null
      - backup         = false -> null
      - id             = "7985c590-5031-46ad-b8dc-9680169a9feb" -> null
      - name           = "m4" -> null
      - protocol_port  = 93 -> null
      - subnet_id      = "da29e3b4-c885-4834-b4ee-228d7d1bfac1" -> null
      - weight         = 15 -> null
    }
  + member {
      + address        = "10.0.12.54"
      + admin_state_up = true
      + id             = (known after apply)
      + name           = "m6"
      + protocol_port  = 95
      + subnet_id      = "da29e3b4-c885-4834-b4ee-228d7d1bfac1"
      + weight         = 10
    }
  + member {
      + address        = "10.0.12.55"
      + admin_state_up = true
      + id             = (known after apply)
      + name           = "m7"
      + protocol_port  = 96
      + subnet_id      = "da29e3b4-c885-4834-b4ee-228d7d1bfac1"
      + weight         = 15
    }
  + member {
      + address        = "10.0.12.56"
      + admin_state_up = true
      + id             = (known after apply)
      + name           = "m8"
      + protocol_port  = 97
      + subnet_id      = "da29e3b4-c885-4834-b4ee-228d7d1bfac1"
      + weight         = 15
    }
}

Plan: 0 to add, 1 to change, 0 to destroy.

Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.

Enter a value: yes

openstack_lb_members_v2.members_1: Modifying... [id=70cfaa46-c7a6-441f-82c7-17e1052fc0d4]
openstack_lb_members_v2.members_1: Still modifying... [id=70cfaa46-c7a6-441f-82c7-17e1052fc0d4, 10s elapsed]
openstack_lb_members_v2.members_1: Modifications complete after 12s [id=70cfaa46-c7a6-441f-82c7-17e1052fc0d4]

Apply complete! Resources: 0 added, 1 changed, 0 destroyed.

real 0m15.133s
user 0m2.169s
sys 0m0.329s

stack@openstack-3:~/neha$ openstack loadbalancer member list p1

+--------------------------------------+-----------+----------------------------------+---------------------+------------+---------------+------------------+--------+
| id                                   | name      | project_id                       | provisioning_status | address    | protocol_port | operating_status | weight |
+--------------------------------------+-----------+----------------------------------+---------------------+------------+---------------+------------------+--------+
| 20e57ab3-3d10-4eeb-b5a9-8e60dbf8c13f | m5_update | 9ef5e94c53c940239a66dbe4a1058eee | ACTIVE              | 10.0.12.46 |            94 | NO_MONITOR       |     19 |
| 7985c590-5031-46ad-b8dc-9680169a9feb | m4_update | 9ef5e94c53c940239a66dbe4a1058eee | ACTIVE              | 10.0.12.47 |            93 | NO_MONITOR       |     17 |
| c605b3d6-b964-4b92-9c6f-1a2bac4eb369 | m8        | 9ef5e94c53c940239a66dbe4a1058eee | ACTIVE              | 10.0.12.56 |            97 | NO_MONITOR       |     15 |
| e941f0be-942f-4648-8572-c891cc25b6a0 | m6        | 9ef5e94c53c940239a66dbe4a1058eee | ACTIVE              | 10.0.12.54 |            95 | NO_MONITOR       |     10 |
| 70034cf7-efbe-4ef7-b71a-68016074d21b | m7        | 9ef5e94c53c940239a66dbe4a1058eee | ACTIVE              | 10.0.12.55 |            96 | NO_MONITOR       |     15 |
+--------------------------------------+-----------+----------------------------------+---------------------+------------+---------------+------------------+--------+

Result on vThunder:

vThunder(NOLICENSE)#show running-config
!Current configuration: 196 bytes
!Configuration last updated at 07:08:10 GMT Wed Dec 1 2021
!Configuration last saved at 07:08:12 GMT Wed Dec 1 2021
!64-bit Advanced Core OS (ACOS) version 5.2.1, build 153 (Dec-11-2020,14:16)
!
!
interface management
ip address dhcp
!
interface ethernet 1
!
interface ethernet 2
!
vrrp-a vrid 0
floating-ip 10.0.11.164
floating-ip 10.0.12.130
!
ip nat pool pool1 10.0.12.11 10.0.12.12 netmask /24
!
slb server 9ef5e_10_0_12_47 10.0.12.47
disable
port 93 tcp
!
slb server 9ef5e_10_0_12_46 10.0.12.46
disable
port 94 tcp
!
slb server 9ef5e_10_0_12_54 10.0.12.54
port 95 tcp
!
slb server 9ef5e_10_0_12_55 10.0.12.55
port 96 tcp
!
slb server 9ef5e_10_0_12_56 10.0.12.56
port 97 tcp
!
slb service-group 70cfaa46-c7a6-441f-82c7-17e1052fc0d4 tcp
method least-connection
member 9ef5e_10_0_12_47 93
member 9ef5e_10_0_12_54 95
member 9ef5e_10_0_12_55 96
member 9ef5e_10_0_12_56 97
!
slb virtual-server 7ae7360f-3cdc-4bb3-83d4-362f22ed3499 10.0.11.182
port 80 http
name 7f81dbf3-f9dd-4ac2-a500-5d5f49f9485a
extended-stats
source-nat pool pool1
service-group 70cfaa46-c7a6-441f-82c7-17e1052fc0d4
!
!
cloud-services meta-data
enable
provider openstack
!
end
!Current config commit point for partition 0 is 0 & config mode is classical-mode

  5. Delete all members except m6 using the following batch update script:
resource "openstack_lb_members_v2" "members_1" {
  pool_id = "70cfaa46-c7a6-441f-82c7-17e1052fc0d4"

  member {
    address       = "10.0.12.54"
    subnet_id     = "da29e3b4-c885-4834-b4ee-228d7d1bfac1"
    protocol_port = 95
    name          = "m6"
    weight        = 10
  }
}

stack@openstack-3:~/neha$ time terraform apply
openstack_lb_members_v2.members_1: Refreshing state... [id=70cfaa46-c7a6-441f-82c7-17e1052fc0d4]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
~ update in-place

Terraform will perform the following actions:

openstack_lb_members_v2.members_1 will be updated in-place

~ resource "openstack_lb_members_v2" "members_1" {
id = "70cfaa46-c7a6-441f-82c7-17e1052fc0d4"
# (1 unchanged attribute hidden)

  - member {
      - address        = "10.0.12.46" -> null
      - admin_state_up = false -> null
      - backup         = false -> null
      - id             = "20e57ab3-3d10-4eeb-b5a9-8e60dbf8c13f" -> null
      - name           = "m5_update" -> null
      - protocol_port  = 94 -> null
      - subnet_id      = "da29e3b4-c885-4834-b4ee-228d7d1bfac1" -> null
      - weight         = 19 -> null
    }
  - member {
      - address        = "10.0.12.47" -> null
      - admin_state_up = false -> null
      - backup         = false -> null
      - id             = "7985c590-5031-46ad-b8dc-9680169a9feb" -> null
      - name           = "m4_update" -> null
      - protocol_port  = 93 -> null
      - subnet_id      = "da29e3b4-c885-4834-b4ee-228d7d1bfac1" -> null
      - weight         = 17 -> null
    }
  - member {
      - address        = "10.0.12.54" -> null
      - admin_state_up = true -> null
      - backup         = false -> null
      - id             = "e941f0be-942f-4648-8572-c891cc25b6a0" -> null
      - name           = "m6" -> null
      - protocol_port  = 95 -> null
      - subnet_id      = "da29e3b4-c885-4834-b4ee-228d7d1bfac1" -> null
      - weight         = 10 -> null
    }
  + member {
      + address        = "10.0.12.54"
      + admin_state_up = true
      + id             = "e941f0be-942f-4648-8572-c891cc25b6a0"
      + name           = "m6"
      + protocol_port  = 95
      + subnet_id      = "da29e3b4-c885-4834-b4ee-228d7d1bfac1"
      + weight         = 10
    }
  - member {
      - address        = "10.0.12.55" -> null
      - admin_state_up = true -> null
      - backup         = false -> null
      - id             = "70034cf7-efbe-4ef7-b71a-68016074d21b" -> null
      - name           = "m7" -> null
      - protocol_port  = 96 -> null
      - subnet_id      = "da29e3b4-c885-4834-b4ee-228d7d1bfac1" -> null
      - weight         = 15 -> null
    }
  - member {
      - address        = "10.0.12.56" -> null
      - admin_state_up = true -> null
      - backup         = false -> null
      - id             = "c605b3d6-b964-4b92-9c6f-1a2bac4eb369" -> null
      - name           = "m8" -> null
      - protocol_port  = 97 -> null
      - subnet_id      = "da29e3b4-c885-4834-b4ee-228d7d1bfac1" -> null
      - weight         = 15 -> null
    }
}

Plan: 0 to add, 1 to change, 0 to destroy.

Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.

Enter a value: yes

openstack_lb_members_v2.members_1: Modifying... [id=70cfaa46-c7a6-441f-82c7-17e1052fc0d4]
openstack_lb_members_v2.members_1: Still modifying... [id=70cfaa46-c7a6-441f-82c7-17e1052fc0d4, 10s elapsed]
openstack_lb_members_v2.members_1: Modifications complete after 11s [id=70cfaa46-c7a6-441f-82c7-17e1052fc0d4]

Apply complete! Resources: 0 added, 1 changed, 0 destroyed.

real 0m13.891s
user 0m1.908s
sys 0m0.623s

stack@openstack-3:~/neha$ openstack loadbalancer member list p1

+--------------------------------------+------+----------------------------------+---------------------+------------+---------------+------------------+--------+
| id                                   | name | project_id                       | provisioning_status | address    | protocol_port | operating_status | weight |
+--------------------------------------+------+----------------------------------+---------------------+------------+---------------+------------------+--------+
| e941f0be-942f-4648-8572-c891cc25b6a0 | m6   | 9ef5e94c53c940239a66dbe4a1058eee | ACTIVE              | 10.0.12.54 |            95 | NO_MONITOR       |     10 |
+--------------------------------------+------+----------------------------------+---------------------+------------+---------------+------------------+--------+

Result on vThunder:

vThunder(NOLICENSE)#show running-config
!Current configuration: 196 bytes
!Configuration last updated at 07:13:49 GMT Wed Dec 1 2021
!Configuration last saved at 07:13:52 GMT Wed Dec 1 2021
!64-bit Advanced Core OS (ACOS) version 5.2.1, build 153 (Dec-11-2020,14:16)
!
!
interface management
ip address dhcp
!
interface ethernet 1
!
interface ethernet 2
!
vrrp-a vrid 0
floating-ip 10.0.11.164
floating-ip 10.0.12.54
!
ip nat pool pool1 10.0.12.11 10.0.12.12 netmask /24
!
slb server 9ef5e_10_0_12_54 10.0.12.54
port 95 tcp
!
slb service-group 70cfaa46-c7a6-441f-82c7-17e1052fc0d4 tcp
method least-connection
member 9ef5e_10_0_12_54 95
!
slb virtual-server 7ae7360f-3cdc-4bb3-83d4-362f22ed3499 10.0.11.182
port 80 http
name 7f81dbf3-f9dd-4ac2-a500-5d5f49f9485a
extended-stats
source-nat pool pool1
service-group 70cfaa46-c7a6-441f-82c7-17e1052fc0d4
!
!
cloud-services meta-data
enable
provider openstack
!
end

  6. Delete the member m6:

stack@openstack-3:~/neha$ openstack loadbalancer member delete p1 m6
stack@openstack-3:~/neha$ openstack loadbalancer member list p1

Result on vThunder:

vThunder(NOLICENSE)#show running-config
!Current configuration: 283 bytes
!Configuration last updated at 07:19:09 GMT Wed Dec 1 2021
!Configuration last saved at 07:19:13 GMT Wed Dec 1 2021
!64-bit Advanced Core OS (ACOS) version 5.2.1, build 153 (Dec-11-2020,14:16)
!
!
interface management
ip address dhcp
!
interface ethernet 1
!
interface ethernet 2
!
vrrp-a vrid 0
floating-ip 10.0.11.164
!
ip nat pool pool1 10.0.12.11 10.0.12.12 netmask /24
!
slb service-group 70cfaa46-c7a6-441f-82c7-17e1052fc0d4 tcp
method least-connection
!
slb virtual-server 7ae7360f-3cdc-4bb3-83d4-362f22ed3499 10.0.11.182
port 80 http
name 7f81dbf3-f9dd-4ac2-a500-5d5f49f9485a
extended-stats
source-nat pool pool1
service-group 70cfaa46-c7a6-441f-82c7-17e1052fc0d4
!
!
cloud-services meta-data
enable
provider openstack
!
end

Collaborator

@ytsai-a10 left a comment

Besides running the batch flow in parallel:

  • Please also help test the case of deleting 2 (all) members that share the same IP address.
  • The key to this enhancement is reducing database queries, since I/O takes the most time. If we just move the DB queries from one loop into another loop, it may not help.
  • However, our investigation showed that most of the time is spent running AXAPI calls one by one. So even if we reduce the number of DB queries, performance may not improve.

        member.ip_address in mem_count_ip_port_protocol_dict):
    mem_cnt_ip_port_proto = mem_count_ip_port_protocol_dict[member.ip_address]
    member_count_ip_port_protocol = mem_cnt_ip_port_proto

if member_count_ip <= 1:
Collaborator

Can you test this scenario?
Create 2 members:

  member {
    address       = "10.0.12.54"
    subnet_id     = "da29e3b4-c885-4834-b4ee-228d7d1bfac1"
    protocol_port = 95
    name          = "m6"
    weight        = 10
  }

  member {
    address       = "10.0.12.54"
    subnet_id     = "da29e3b4-c885-4834-b4ee-228d7d1bfac1"
    protocol_port = 96
    name          = "m7"
    weight        = 15
  }

and then delete both of them via batch member update. Will the slb server be deleted on the Thunder? The member_count_ip should be 2 in this case.

mem_count_ip_dict = {}
mem_count_ip_port_protocol_dict = {}
for member in members:
    cnt_ip = self.member_repo.get_member_count_by_ip_address(
Collaborator

Maybe we can skip the database query if mem_count_ip_dict[member.ip_address] already exists.
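The caching suggested here could look like the sketch below. The function name, the `pool_id` parameter, and the fake-free shape of `member_repo` are placeholders for illustration; only `get_member_count_by_ip_address` comes from the quoted diff.

```python
# Sketch of memoizing the per-IP count query: members sharing an IP
# trigger only one repository lookup. member_repo and pool_id are
# hypothetical stand-ins for the real repository API.
def count_members_by_ip(members, member_repo, pool_id):
    mem_count_ip_dict = {}
    for member in members:
        # skip the database query when this IP was already counted
        if member.ip_address in mem_count_ip_dict:
            continue
        mem_count_ip_dict[member.ip_address] = (
            member_repo.get_member_count_by_ip_address(
                member.ip_address, pool_id))
    return mem_count_ip_dict
```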

if nat_flavor and 'pool_name' in nat_flavor:
    try:
        for member in members:
            nat_pool = self.nat_pool_repo.get(
Collaborator

Skip the DB query if nat_pool_dict[member.subnet_id] already exists.

inject={a10constants.MEMBERS: members},
requires=[a10constants.NAT_FLAVOR],
provides=a10constants.NAT_POOL_DICT))
if new_members:
Collaborator

Can we move the address release/reservation to after the members are created/deleted?
That way, if an error happens, we don't need to implement revert functions for them.

member.ip_address, member_subnet.cidr)


class DeleteSubnetAddressAndNatPoolForBatchMembers(BaseNetworkTask):
Collaborator

I think it would be better to aggregate the reference counts first, and then update the DB and reserve/release the addresses.
The total number of DB queries and address reserve/release operations would stay the same as before.
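The aggregation suggested here could be sketched as follows. The function, the `release_subnet_address` driver call, and its signature are hypothetical; the point is only that release operations are issued per distinct address with an aggregated count, not once per member.

```python
# Sketch of aggregating reference counts before touching the DB or
# the network driver. release_subnet_address is a hypothetical call
# standing in for the real address-release operation.
from collections import Counter


def release_addresses_for_batch(old_members, network_driver):
    # aggregate how many old members reference each (subnet, address)
    refs = Counter((m.subnet_id, m.ip_address) for m in old_members)
    for (subnet_id, ip_address), count in refs.items():
        # one release operation per distinct address, carrying the count
        network_driver.release_subnet_address(subnet_id, ip_address, count)
```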
