Optimize networking

The simplicity of Cloud Run functions lets you quickly develop code and run it in a serverless environment. At moderate scale, the cost of running functions is low, and optimizing your code might not seem like a high priority. As your deployment scales up, however, optimizing your code becomes increasingly important.

This document describes how to optimize networking for your functions. Some of the benefits of optimizing networking are as follows:

  • Reduced CPU time spent establishing new outbound connections on each function call.
  • Reduced likelihood of running out of connection or DNS quotas.

Maintaining persistent connections

This section gives examples of how to maintain persistent connections in a function. Failure to do so can result in quickly exhausting connection quotas.

The following scenarios are covered in this section:

  • HTTP/S
  • Google APIs

HTTP/S requests

The following code samples show how to maintain persistent connections instead of creating a new connection on every function invocation:

Node.js

const fetch = require('node-fetch');

const http = require('http');
const https = require('https');

const functions = require('@google-cloud/functions-framework');

const httpAgent = new http.Agent({keepAlive: true});
const httpsAgent = new https.Agent({keepAlive: true});

/**
 * HTTP Cloud Function that caches an HTTP agent to pool HTTP connections.
 *
 * @param {Object} req Cloud Function request context.
 * @param {Object} res Cloud Function response context.
 */
functions.http('connectionPooling', async (req, res) => {
  try {
    // TODO(optional): replace this with your own URL.
    const url = 'https://www.example.com/';

    // Select the agent to use based on the URL's protocol.
    const agent = url.startsWith('https:') ? httpsAgent : httpAgent;

    const fetchResponse = await fetch(url, {agent});
    const text = await fetchResponse.text();

    res.status(200).send(`Data: ${text}`);
  } catch (err) {
    res.status(500).send(`Error: ${err.message}`);
  }
});

Python

import functions_framework
import requests

# Create a global HTTP session (which provides connection pooling)
session = requests.Session()


@functions_framework.http
def connection_pooling(request):
    """
    HTTP Cloud Function that uses a connection pool to make HTTP requests.
    Args:
        request (flask.Request): The request object.
        <https://flask.pocoo.org/docs/1.0/api/#flask.Request>
    Returns:
        The response text, or any set of values that can be turned into a
        Response object using `make_response`
        <https://flask.pocoo.org/docs/1.0/api/#flask.Flask.make_response>.
    """

    # The URL to send the request to
    url = "https://example.com"

    # Process the request
    response = session.get(url)
    response.raise_for_status()
    return "Success!"

Go


// Package http provides a set of HTTP Cloud Functions samples.
package http

import (
	"fmt"
	"net/http"
	"time"

	"github.com/GoogleCloudPlatform/functions-framework-go/functions"
)

var urlString = "https://example.com"

// client is used to make HTTP requests with a 10 second timeout.
// http.Clients should be reused instead of created as needed.
var client = &http.Client{
	Timeout: 10 * time.Second,
}

func init() {
	functions.HTTP("MakeRequest", MakeRequest)
}

// MakeRequest is an example of making an HTTP request. MakeRequest uses a
// single http.Client for all requests to take advantage of connection
// pooling and caching. See https://godoc.org/net/http#Client.
func MakeRequest(w http.ResponseWriter, r *http.Request) {
	resp, err := client.Get(urlString)
	if err != nil {
		http.Error(w, "Error making request", http.StatusInternalServerError)
		return
	}
	if resp.StatusCode != http.StatusOK {
		msg := fmt.Sprintf("Bad StatusCode: %d", resp.StatusCode)
		http.Error(w, msg, http.StatusInternalServerError)
		return
	}
	fmt.Fprintf(w, "ok")
}

PHP

We recommend using the Guzzle PHP HTTP client to send HTTP requests, as it handles persistent connections automatically.
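As a minimal sketch (not an official sample), the function below caches a Guzzle client in a static variable so that its connection pool is reused across invocations on the same instance. It assumes the guzzlehttp/guzzle and google/cloud-functions-framework (1.1 or later) packages are installed; the function name connectionPooling and the target URL are placeholders.

use GuzzleHttp\Client;
use Psr\Http\Message\ServerRequestInterface;
use Google\CloudFunctions\FunctionsFramework;

// Register the HTTP function with the Functions Framework.
FunctionsFramework::http('connectionPooling', 'connectionPooling');

function connectionPooling(ServerRequestInterface $request): string
{
    // Cache the Guzzle client in a static variable so that its connection
    // pool persists across invocations handled by this instance.
    static $client = null;
    if ($client === null) {
        $client = new Client(['base_uri' => 'https://www.example.com/']);
    }

    // Guzzle reuses a pooled keep-alive connection when one is available.
    $response = $client->get('/');

    return 'Data: ' . $response->getBody();
}

As with the other languages, the key point is that the client is created once per instance, not once per request.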

Accessing Google APIs

The example below uses Cloud Pub/Sub, but this approach also works for other client libraries—for example, Cloud Natural Language or Cloud Spanner. Note that performance improvements may depend on the current implementation of particular client libraries.

Creating a new Pub/Sub client object on every invocation results in one connection and two DNS queries per invocation. To avoid these unnecessary connections and DNS queries, create the Pub/Sub client object in global scope, as shown in the samples below:

Node.js

const functions = require('@google-cloud/functions-framework');
const {PubSub} = require('@google-cloud/pubsub');
const pubsub = new PubSub();

/**
 * HTTP Cloud Function that uses a cached client library instance to
 * reduce the number of connections required per function invocation.
 *
 * @param {Object} req Cloud Function request context.
 * @param {Object} req.body Cloud Function request context body.
 * @param {String} req.body.topic The Cloud Pub/Sub topic to publish to.
 * @param {Object} res Cloud Function response context.
 */
functions.http('gcpApiCall', (req, res) => {
  const topic = pubsub.topic(req.body.topic);

  const data = Buffer.from('Test message');
  topic.publishMessage({data}, err => {
    if (err) {
      res.status(500).send(`Error publishing the message: ${err}`);
    } else {
      res.status(200).send('1 message published');
    }
  });
});

Python

import os

import functions_framework
from google.cloud import pubsub_v1


# Create a global Pub/Sub client to avoid unneeded network activity
pubsub = pubsub_v1.PublisherClient()


@functions_framework.http
def gcp_api_call(request):
    """
    HTTP Cloud Function that uses a cached client library instance to
    reduce the number of connections required per function invocation.
    Args:
        request (flask.Request): The request object.
    Returns:
        The response text, or any set of values that can be turned into a
        Response object using `make_response`
        <https://flask.pocoo.org/docs/1.0/api/#flask.Flask.make_response>.
    """

    """
    The `GCP_PROJECT` environment variable is set automatically in the Python 3.7 runtime.
    In later runtimes, it must be specified by the user upon function deployment.
    See this page for more information:
        https://cloud.google.com/functions/docs/configuring/env-var#python_37_and_go_111
    """
    project = os.getenv("GCP_PROJECT")
    request_json = request.get_json()

    topic_name = request_json["topic"]
    topic_path = pubsub.topic_path(project, topic_name)

    # Process the request
    data = b"Test message"
    pubsub.publish(topic_path, data=data)

    return "1 message published"

Go


// Package contexttip is an example of how to use Pub/Sub and context.Context in
// a Cloud Function.
package contexttip

import (
	"context"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
	"os"
	"sync"

	"cloud.google.com/go/pubsub"
	"github.com/GoogleCloudPlatform/functions-framework-go/functions"
)

// client is a global Pub/Sub client, initialized once per instance.
var client *pubsub.Client
var once sync.Once

// createClient creates the global pubsub Client
func createClient() {
	// GOOGLE_CLOUD_PROJECT is a user-set environment variable.
	var projectID = os.Getenv("GOOGLE_CLOUD_PROJECT")
	// err is pre-declared to avoid shadowing client.
	var err error

	// client is initialized with context.Background() because it should
	// persist between function invocations.
	client, err = pubsub.NewClient(context.Background(), projectID)
	if err != nil {
		log.Fatalf("pubsub.NewClient: %v", err)
	}
}

func init() {
	// register http function
	functions.HTTP("PublishMessage", PublishMessage)
}

type publishRequest struct {
	Topic   string `json:"topic"`
	Message string `json:"message"`
}

// PublishMessage publishes a message to Pub/Sub. PublishMessage only works
// with topics that already exist.
func PublishMessage(w http.ResponseWriter, r *http.Request) {
	// use of sync.Once ensures client is only created once.
	once.Do(createClient)
	// Parse the request body to get the topic name and message.
	p := publishRequest{}

	if err := json.NewDecoder(r.Body).Decode(&p); err != nil {
		log.Printf("json.NewDecoder: %v", err)
		http.Error(w, "Error parsing request", http.StatusBadRequest)
		return
	}

	if p.Topic == "" || p.Message == "" {
		s := "missing 'topic' or 'message' parameter"
		log.Println(s)
		http.Error(w, s, http.StatusBadRequest)
		return
	}

	m := &pubsub.Message{
		Data: []byte(p.Message),
	}
	// Publish and Get use r.Context() because they are only needed for this
	// function invocation. If this were a background function, they would use
	// the ctx passed as an argument.
	id, err := client.Topic(p.Topic).Publish(r.Context(), m).Get(r.Context())
	if err != nil {
		log.Printf("topic(%s).Publish.Get: %v", p.Topic, err)
		http.Error(w, "Error publishing message", http.StatusInternalServerError)
		return
	}
	fmt.Fprintf(w, "Message published: %v", id)
}

Outbound connection resets

Connections from your function to both the VPC network and the internet can occasionally be terminated and replaced when the underlying infrastructure is restarted or updated. If your application reuses long-lived connections, we recommend configuring it to re-establish connections so that it does not reuse a dead connection.
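One way to do this, continuing the PHP and Guzzle example above, is to catch the connection error that a dead connection produces and retry the request once, which opens a fresh connection. This is a sketch rather than an official sample; getWithReconnect is a hypothetical helper, and depending on how the connection fails, Guzzle may raise a different exception type, so adjust the catch clause for your workload.

use GuzzleHttp\Client;
use GuzzleHttp\Exception\ConnectException;
use Psr\Http\Message\ResponseInterface;

// Hypothetical helper: issues a GET request and retries once if the reused
// connection appears to have been reset, so that a new connection is opened.
function getWithReconnect(Client $client, string $url): ResponseInterface
{
    try {
        return $client->get($url);
    } catch (ConnectException $e) {
        // A dead keep-alive connection typically surfaces as a network
        // error; retrying discards it and establishes a new connection.
        return $client->get($url);
    }
}

The same retry-once pattern applies to the Node.js, Python, and Go clients shown earlier: catch the library's connection error and issue the request again.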

Load-testing your function

To measure how many connections your function makes on average, deploy it as an HTTP function and use a performance-testing framework to invoke it at a specific QPS. One possible choice is Artillery, which you can invoke with a single line:

$ artillery quick -d 300 -r 30 URL

This command fetches the given URL at 30 QPS for 300 seconds.

After performing the test, check the usage of your connection quota on the Cloud Run functions API quota page in the Google Cloud console. If the usage is consistently around 30 (or a multiple of 30), you are establishing one (or several) new connections on every invocation. After you optimize your code, you should see only a small number of connections (10-30), and only at the beginning of the test.

You can also compare the CPU cost before and after the optimization on the CPU quota plot on the same page.