233 tutorial cp als #245

Merged (14 commits) on Sep 27, 2023
353 changes: 353 additions & 0 deletions docs/source/tutorial/CP_ALS.ipynb
@@ -0,0 +1,353 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Alternating least squares for Canonical Polyadic (CP) Decomposition\n",
"\n",
"```\n",
"Copyright 2022 National Technology & Engineering Solutions of Sandia,\n",
"LLC (NTESS). Under the terms of Contract DE-NA0003525 with NTESS, the\n",
"U.S. Government retains certain rights in this software.\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The function `cp_als` computes an estimate of the best rank-R CP model of a tensor X using the well-known alternating least-squares algorithm (see, e.g., Kolda and Bader, SIAM Review, 2009, for more information). The input X can be almost any type of tensor including a `tensor`, `sptensor`, `ktensor`, or `ttensor`. The output CP model is a `ktensor`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"import sys\n",
"import pyttb as ttb\n",
"import numpy as np"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Generate data"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Pick the size and rank\n",
"R = 3\n",
"np.random.seed(0) # Set seed for reproducibility\n",
"X = ttb.tenrand(shape=(6, 8, 10))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Basic call to the method, specifying the data tensor and its rank"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This uses a *random* initial guess. At each iteration, it reports the *fit* `f` which is defined as \n",
"```\n",
"f = 1 - ( X.norm()**2 + M.norm()**2 - 2*<X,M> ) / X.norm()\n",
"``` \n",
"and is loosely the proportion of the data described by the CP model, i.e., a fit of 1 is perfect."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Compute a solution with final ktensor stored in M1\n",
"np.random.seed(0) # Set seed for reproducibility\n",
"short_tutorial = 10 # Cut off solve early for demo\n",
"M1 = ttb.cp_als(X, R, maxiters=short_tutorial)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Since we set only a single output, `M1` is actually a *tuple* containing:\n",
"1. `M1[0]`: the solution as a `ktensor`. \n",
"2. `M1[1]`: the initial guess as a `ktensor` that was generated at runtime since no initial guess was provided. \n",
"3. `M1[2]`: a dictionary containing runtime information with keys:\n",
" * `params`: parameters used by `cp_als`\n",
" * `iters`: number of iterations performed\n",
" * `normresidual`: the norm of the residual `X.norm()**2 + M.norm()**2 - 2*<X,M>`\n",
" * `fit`: the fit `f` described above"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(f\"M1[2]['params']: {M1[2]['params']}\")\n",
"print(f\"M1[2]['iters']: {M1[2]['iters']}\")\n",
"print(f\"M1[2]['normresidual']: {M1[2]['normresidual']}\")\n",
"print(f\"M1[2]['fit']: {M1[2]['fit']}\")"
]
},
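{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a sanity check, we can recompute the fit directly from the definition above and compare it to the value reported by `cp_als`. This is a minimal sketch that assumes `ktensor.innerprod` for the inner product `<X,M>`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Recompute the fit from the definition above (assumes ktensor.innerprod computes <X,M>)\n",
"M = M1[0]\n",
"normresidual = np.sqrt(X.norm() ** 2 + M.norm() ** 2 - 2 * M.innerprod(X))\n",
"print(f\"fit (recomputed): {1 - normresidual / X.norm()}\")\n",
"print(f\"fit (reported):   {M1[2]['fit']}\")"
]
},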
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Run again with a different initial guess, output the initial guess."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"np.random.seed(1) # Set seed for reproducibility\n",
"M2bad, Minit, _ = ttb.cp_als(X, R, maxiters=short_tutorial)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Increase the maximium number of iterations\n",
"Note that the previous run kicked out at only 10 iterations, before reaching the specified convegence tolerance. Let's increase the maximum number of iterations and try again, using the same initial guess."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"less_short_tutorial = 10 * short_tutorial\n",
"M2 = ttb.cp_als(X, R, maxiters=less_short_tutorial, init=Minit)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Compare the two solutions\n",
"Use the `ktensor` `score()` member function to compare the two solutions. A score of 1 indicates a perfect match."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"M1_ktns = M1[0]\n",
"M2_ktns = M2[0]\n",
"score = M1_ktns.score(M2_ktns)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Here, `score()` returned a tuple `score` with the score as the first element:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"score[0]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"See the `ktensor` documentation for more information about the return values of `score()`."
]
},
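{
"cell_type": "markdown",
"metadata": {},
"source": [
"For a quick look without leaving the notebook, the docstring for `score()` can be printed directly:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Show the score() documentation, including a description of all return values\n",
"help(ttb.ktensor.score)"
]
},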
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Rerun with same initial guess\n",
"Using the same initial guess (and all other parameters) gives the exact same solution."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"M2alt = ttb.cp_als(X, R, maxiters=less_short_tutorial, init=Minit)\n",
"M2alt_ktns = M2alt[0]\n",
"score = M2_ktns.score(M2alt_ktns) # Score of 1 indicates the same solution\n",
"print(f\"Score: {score[0]}.\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Changing the output frequency\n",
"Using the `printitn` option to change the output frequency."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"M2alt2 = ttb.cp_als(X, R, maxiters=less_short_tutorial, init=Minit, printitn=20)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Suppress all output\n",
"Set `printitn` to zero to suppress all output."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"M2alt2 = ttb.cp_als(X, R, printitn=0) # No output"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Use HOSVD initial guess\n",
"Use the `'nvecs'` option to use the leading mode-n singular vectors as the initial guess."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"M3 = ttb.cp_als(X, R, init=\"nvecs\", printitn=20)\n",
"s = M2[0].score(M3[0])\n",
"print(f\"score(M2,M3) = {s[0]}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Change the order of the dimensions in CP"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"M4, _, info = ttb.cp_als(X, 3, dimorder=[1, 2, 0], init=\"nvecs\", printitn=20)\n",
"s = M2[0].score(M4)\n",
"print(f\"score(M2,M4) = {s[0]}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In the last example, we also collected the third output argument `info` which has runtime information in it. The field `info['iters']` has the total number of iterations. The field `info.['params']` has the information used to run the method. Unless the initialization method is 'random', passing the parameters back to the method will yield the exact same results."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"M4alt, _, info = ttb.cp_als(X, 3, **info[\"params\"])\n",
"s = M4alt.score(M4)\n",
"print(f\"score(M4alt,M4) = {s[0]}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Change the tolerance\n",
"It's also possible to loosen or tighten the tolerance on the change in the fit. You may need to increase the number of iterations for it to converge."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"M5 = ttb.cp_als(X, 3, init=\"nvecs\", stoptol=1e-12, printitn=100)"
]
},
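{
"cell_type": "markdown",
"metadata": {},
"source": [
"To see how many iterations the tighter tolerance required, inspect the runtime information in the third return value (as described earlier):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# M5 is a tuple; its third element holds the runtime information dictionary\n",
"print(f\"Iterations: {M5[2]['iters']}\")\n",
"print(f\"Final fit:  {M5[2]['fit']}\")"
]
},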
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Control sign ambiguity of factor matrices\n",
"The default behavior of `cp_als` is to make a call to `fixsigns()` to fix the sign ambiguity of the factor matrices. You can turn off this behavior by passing the `fixsigns` parameter value of `False` when calling `cp_als`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"X = ttb.ktensor(\n",
" factor_matrices=[\n",
" np.array([[1.0, 1.0], [1.0, -10.0]]),\n",
" np.array([[1.0, 1.0], [1.0, -10.0]]),\n",
" ],\n",
" weights=np.array([1.0, 1.0]),\n",
")\n",
"M1 = ttb.cp_als(X, 2, printitn=1, init=ttb.ktensor(X.factor_matrices))\n",
"print(M1[0]) # default behavior, fixsigns called\n",
"M2 = ttb.cp_als(X, 2, printitn=1, init=ttb.ktensor(X.factor_matrices), fixsigns=False)\n",
"print(M2[0]) # fixsigns not called"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Recommendations\n",
"* Run multiple times with different guesses and select the solution with the best fit.\n",
"* Try different ranks and choose the solution that is the best descriptor for your data based on the combination of the fit and the interpretaton of the factors, e.g., by visualizing the results."
]
}
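{
"cell_type": "markdown",
"metadata": {},
"source": [
"The cell below is a minimal sketch of the multi-start recommendation: it regenerates a random data tensor (the earlier `X` was reassigned in the sign-ambiguity example), runs `cp_als` from several random initializations with output suppressed, and keeps the model with the best fit reported in the runtime information."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Multi-start sketch: keep the best-fitting model over several random initializations\n",
"X = ttb.tenrand(shape=(6, 8, 10))  # fresh data tensor (X was reassigned above)\n",
"R = 3\n",
"best_fit, best_M = -np.inf, None\n",
"for seed in range(5):\n",
"    np.random.seed(seed)\n",
"    M, _, info = ttb.cp_als(X, R, printitn=0)\n",
"    if info[\"fit\"] > best_fit:\n",
"        best_fit, best_M = info[\"fit\"], M\n",
"print(f\"Best fit over 5 random starts: {best_fit}\")"
]
}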
],
"metadata": {},
"nbformat": 4,
"nbformat_minor": 2
}