Commit 81de150: 0.19 Squash
Circle CI authored and massich committed Sep 24, 2019 (1 parent: bc33172)
Showing 5,066 changed files with 309,220 additions and 303,346 deletions.
CNAME (1 change: 1 addition & 0 deletions)
@@ -0,0 +1 @@
+mne.tools
dev/.buildinfo (2 changes: 1 addition & 1 deletion)
@@ -1,4 +1,4 @@
 # Sphinx build info version 1
 # This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done.
-config: dad83e579336d256e3faa5820bb33b33
+config: 12e7fee00c54b2bda6c8bdd508d5448c
 tags: 645f666f9bcd5a90fca523b33c5a78b7
@@ -0,0 +1,187 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"%matplotlib inline"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n\nThe :class:`SourceEstimate <mne.SourceEstimate>` data structure\n===============================================================\n\n\nSource estimates, commonly referred to as STC (Source Time Courses),\nare obtained from source localization methods.\nSource localization method solve the so-called 'inverse problem'.\nMNE provides different methods for solving it:\ndSPM, sLORETA, LCMV, MxNE etc.\n\nSource localization consists in projecting the EEG/MEG sensor data into\na 3-dimensional 'source space' positioned in the individual subject's brain\nanatomy. Hence the data is transformed such that the recorded time series at\neach sensor location maps to time series at each spatial location of the\n'source space' where is defined our source estimates.\n\nAn STC object contains the amplitudes of the sources over time.\nIt only stores the amplitudes of activations but\nnot the locations of the sources. To get access to the locations\nyou need to have the :class:`source space <mne.SourceSpaces>`\n(often abbreviated ``src``) used to compute the\n:class:`forward operator <mne.Forward>` (often abbreviated `fwd`).\n\nSee `tut-forward` for more details on forward modeling, and\n`tut-inverse-methods`\nfor an example of source localization with dSPM, sLORETA or eLORETA.\n\nSource estimates come in different forms:\n\n - :class:`mne.SourceEstimate`: For cortically constrained source spaces.\n\n - :class:`mne.VolSourceEstimate`: For volumetric source spaces\n\n - :class:`mne.VectorSourceEstimate`: For cortically constrained source\n spaces with vector-valued source activations (strength and orientation)\n\n - :class:`mne.MixedSourceEstimate`: For source spaces formed of a\n combination of cortically constrained and volumetric sources.\n\n<div class=\"alert alert-info\"><h4>Note</h4><p>:class:`(Vector) <mne.VectorSourceEstimate>`\n :class:`SourceEstimate <mne.SourceEstimate>` are surface representations\n mostly used together with `FreeSurfer <tut-freesurfer>`\n surface representations.</p></div>\n\nLet's get ourselves an idea of what a :class:`mne.SourceEstimate` really\nis. We first set up the environment and load some data:\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import os\n\nfrom mne import read_source_estimate\nfrom mne.datasets import sample\n\nprint(__doc__)\n\n# Paths to example data\nsample_dir_raw = sample.data_path()\nsample_dir = os.path.join(sample_dir_raw, 'MEG', 'sample')\nsubjects_dir = os.path.join(sample_dir_raw, 'subjects')\n\nfname_stc = os.path.join(sample_dir, 'sample_audvis-meg')"
]
},
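{
"cell_type": "markdown",
"metadata": {},
"source": [
"Later sections refer to the :class:`source space <mne.SourceSpaces>`\n(``src``). As a minimal sketch, ``src`` can be recovered from the\n:class:`forward operator <mne.Forward>`; the file name below is an\nassumption (the standard forward file shipped with the sample dataset),\nand any forward solution computed for this subject would do:\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import mne\n\n# A sketch: read the forward solution from the sample dataset\n# (file name assumed; any forward computed for this subject would work)\nfname_fwd = os.path.join(sample_dir, 'sample_audvis-meg-oct-6-fwd.fif')\nfwd = mne.read_forward_solution(fname_fwd)\nsrc = fwd['src']  # the SourceSpaces used to compute the forward operator\nprint(src)"
]
},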
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Load and inspect example data\n-----------------------------\n\nThis data set contains source estimation data from an audio visual task. It\nhas been mapped onto the inflated cortical surface representation obtained\nfrom `FreeSurfer <tut-freesurfer>`\nusing the dSPM method. It highlights a noticeable peak in the auditory\ncortices.\n\nLet's see how it looks like.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"stc = read_source_estimate(fname_stc, subject='sample')\n\n# Define plotting parameters\nsurfer_kwargs = dict(\n hemi='lh', subjects_dir=subjects_dir,\n clim=dict(kind='value', lims=[8, 12, 15]), views='lateral',\n initial_time=0.09, time_unit='s', size=(800, 800),\n smoothing_steps=5)\n\n# Plot surface\nbrain = stc.plot(**surfer_kwargs)\n\n# Add title\nbrain.add_text(0.1, 0.9, 'SourceEstimate', 'title', font_size=16)"
]
},
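{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before digging into the data matrix, it can help to inspect a few basic\nattributes of the object. The following is a small sketch that only uses\ndocumented :class:`mne.SourceEstimate` attributes (``subject``, ``tmin``,\n``tstep`` and ``times``):\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# A quick look at the subject and time axis of this SourceEstimate\nprint('Subject: %s' % stc.subject)\nprint('First sample at %0.3f s, then every %0.3f s' % (stc.tmin, stc.tstep))\nprint('Number of time points: %d' % len(stc.times))"
]
},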
{
"cell_type": "markdown",
"metadata": {},
"source": [
"SourceEstimate (stc)\n--------------------\n\nA source estimate contains the time series of a activations\nat spatial locations defined by the source space.\nIn the context of a FreeSurfer surfaces - which consist of 3D triangulations\n- we could call each data point on the inflated brain\nrepresentation a *vertex* . If every vertex represents the spatial location\nof a time series, the time series and spatial location can be written into a\nmatrix, where to each vertex (rows) at multiple time points (columns) a value\ncan be assigned. This value is the strength of our signal at a given point in\nspace and time. Exactly this matrix is stored in ``stc.data``.\n\nLet's have a look at the shape\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"shape = stc.data.shape\n\nprint('The data has %s vertex locations with %s sample points each.' % shape)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We see that stc carries 7498 time series of 25 samples length. Those time\nseries belong to 7498 vertices, which in turn represent locations\non the cortical surface. So where do those vertex values come from?\n\nFreeSurfer separates both hemispheres and creates surfaces\nrepresentation for left and right hemisphere. Indices to surface locations\nare stored in ``stc.vertices``. This is a list with two arrays of integers,\nthat index a particular vertex of the FreeSurfer mesh. A value of 42 would\nhence map to the x,y,z coordinates of the mesh with index 42.\nSee next section on how to get access to the positions in a\n:class:`mne.SourceSpaces` object.\n\nSince both hemispheres are always represented separately, both attributes\nintroduced above, can also be obtained by selecting the respective\nhemisphere. This is done by adding the correct prefix (``lh`` or ``rh``).\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"shape_lh = stc.lh_data.shape\n\nprint('The left hemisphere has %s vertex locations with %s sample points each.'\n % shape_lh)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Since we did not change the time representation, only the selected subset of\nvertices and hence only the row size of the matrix changed. We can check if\nthe rows of ``stc.lh_data`` and ``stc.rh_data`` sum up to the value we had\nbefore.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"is_equal = stc.lh_data.shape[0] + stc.rh_data.shape[0] == stc.data.shape[0]\n\nprint('The number of vertices in stc.lh_data and stc.rh_data do ' +\n ('not ' if not is_equal else '') +\n 'sum up to the number of rows in stc.data')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Indeed and as the mindful reader already suspected, the same can be said\nabout vertices. ``stc.lh_vertno`` thereby maps to the left and\n``stc.rh_vertno`` to the right inflated surface representation of\nFreeSurfer.\n\nRelationship to SourceSpaces (src)\n----------------------------------\n\nAs mentioned above, :class:`src <mne.SourceSpaces>` carries the mapping from\nstc to the surface. The surface is built up from a\n`triangulated mesh <https://en.wikipedia.org/wiki/Surface_triangulation>`_\nfor each hemisphere. Each triangle building up a face consists of 3 vertices.\nSince src is a list of two source spaces (left and right hemisphere), we can\naccess the respective data by selecting the source space first. Faces\nbuilding up the left hemisphere can be accessed via ``src[0]['tris']``, where\nthe index $0$ stands for the left and $1$ for the right\nhemisphere.\n\nThe values in src[0]['tris'] refer to row indices in ``src[0]['rr']``.\nHere we find the actual coordinates of the surface mesh. Hence every index\nvalue for vertices will select a coordinate from here. Furthermore\n``src[0]['vertno']`` stores the same data as ``stc.lh_vertno``,\nexcept when working with sparse solvers such as\n:func:`mne.inverse_sparse.mixed_norm`, as then only a fraction of\nvertices actually have non-zero activations.\n\nIn other words ``stc.lh_vertno`` equals ``src[0]['vertno']``, whereas\n``stc.rh_vertno`` equals ``src[1]['vertno']``. Thus the Nth time series in\nstc.lh_data corresponds to the Nth value in stc.lh_vertno and\nsrc[0]['vertno'] respectively, which in turn map the time series to a\nspecific location on the surface, represented as the set of cartesian\ncoordinates ``stc.lh_vertno[N]`` in ``src[0]['rr']``.\n\nLet's obtain the peak amplitude of the data as vertex and time point index\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"peak_vertex, peak_time = stc.get_peak(hemi='lh', vert_as_index=True,\n time_as_index=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The first value thereby indicates which vertex and the second which time\npoint index from within ``stc.lh_vertno`` or ``stc.lh_data`` is used. We can\nuse the respective information to get the index of the surface vertex\nresembling the peak and its value.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"peak_vertex_surf = stc.lh_vertno[peak_vertex]\n\npeak_value = stc.lh_data[peak_vertex, peak_time]"
]
},
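{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a small sanity check (a sketch using only the variables defined above),\nwe can report the peak value together with the corresponding time in\nseconds, obtained from ``stc.times``:\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Report the peak: the time in seconds comes from stc.times,\n# the value from stc.lh_data\nprint('Peak of %0.2f at vertex index %d (t = %0.3f s)'\n      % (peak_value, peak_vertex, stc.times[peak_time]))"
]
},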
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's visualize this as well, using the same ``surfer_kwargs`` as in the\nbeginning.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"brain = stc.plot(**surfer_kwargs)\n\n# We add the new peak coordinate (as vertex index) as an annotation dot\nbrain.add_foci(peak_vertex_surf, coords_as_verts=True, hemi='lh', color='blue')\n\n# We add a title as well, stating the amplitude at this time and location\nbrain.add_text(0.1, 0.9, 'Peak coordinate', 'title', font_size=14)"
]
},
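{
"cell_type": "markdown",
"metadata": {},
"source": [
"To close the loop between ``stc`` and ``src``: if the forward operator was\nloaded as sketched earlier, the 3D position of the peak vertex can be\nlooked up in ``src[0]['rr']`` (positions are in meters; the coordinate\nframe depends on how ``src`` was obtained):\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# A sketch: requires the src loaded from the forward solution above\npeak_coord = src[0]['rr'][peak_vertex_surf]\nprint('Peak located at x=%0.3f, y=%0.3f, z=%0.3f (meters)'\n      % tuple(peak_coord))"
]
},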
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Summary\n-------\n\n:class:`stc <mne.SourceEstimate>` is a class of MNE-Python, representing the\ntransformed time series obtained from source estimation. For both hemispheres\nthe data is stored separately in ``stc.lh_data`` and ``stc.rh_data`` in form\nof a $m \\times n$ matrix, where $m$ is the number of spatial\nlocations belonging to that hemisphere and $n$ the number of time\npoints.\n\n``stc.lh_vertno`` and ``stc.rh_vertno`` correspond to ``src[0]['vertno']``\nand ``src[1]['vertno']``. Those are the indices of locations on the surface\nrepresentation.\n\nThe surface's mesh coordinates are stored in ``src[0]['rr']`` and\n``src[1]['rr']`` for left and right hemisphere. 3D coordinates can be\naccessed by the above logic::\n\n >>> lh_coordinates = src[0]['rr'][stc.lh_vertno]\n >>> lh_data = stc.lh_data\n\nor::\n\n >>> rh_coordinates = src[1]['rr'][src[1]['vertno']]\n >>> rh_data = stc.rh_data\n\n\n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.4"
}
},
"nbformat": 4,
"nbformat_minor": 0
}
