WO2018222729A1 - Platform for identification of biomarkers using navigation tasks and treatments based thereon
- Publication number
- WO2018222729A1 (PCT/US2018/035155)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- individual
- environment
- task
- indicator
- navigation
- Prior art date
Classifications
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61N—ELECTROTHERAPY; MAGNETOTHERAPY; RADIATION THERAPY; ULTRASOUND THERAPY
- A61N1/00—Electrotherapy; Circuits therefor
- A61N1/02—Details
- A61N1/08—Arrangements or circuits for monitoring, protecting, controlling or indicating
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/40—Detecting, measuring or recording for evaluating the nervous system
- A61B5/4076—Diagnosing or monitoring particular conditions of the nervous system
- A61B5/4082—Diagnosing or monitoring movement diseases, e.g. Parkinson, Huntington or Tourette
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/40—Detecting, measuring or recording for evaluating the nervous system
- A61B5/4076—Diagnosing or monitoring particular conditions of the nervous system
- A61B5/4088—Diagnosing or monitoring cognitive diseases, e.g. Alzheimer, prion diseases or dementia
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/74—Details of notification to user or communication with user or patient; user input means
- A61B5/742—Details of notification to user or communication with user or patient; user input means using visual displays
- A61B5/744—Displaying an avatar, e.g. an animated cartoon character
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61N—ELECTROTHERAPY; MAGNETOTHERAPY; RADIATION THERAPY; ULTRASOUND THERAPY
- A61N1/00—Electrotherapy; Circuits therefor
- A61N1/18—Applying electric currents by contact electrodes
- A61N1/32—Applying electric currents by contact electrodes alternating or intermittent currents
- A61N1/36—Applying electric currents by contact electrodes alternating or intermittent currents for stimulation
- A61N1/36014—External stimulators, e.g. with patch electrodes
- A61N1/36025—External stimulators, e.g. with patch electrodes for treating a mental or cerebral condition
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H20/00—ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
- G16H20/70—ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to mental therapies, e.g. psychological therapy or autogenous training
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/20—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
Definitions
- Cognitive dysfunction is one of the characteristics exhibited by individuals with various neurodegenerative conditions such as Alzheimer's disease and Parkinson's disease.
- Neurodegenerative conditions can affect areas of the brain such as the caudate nucleus, the hippocampus, and the entorhinal cortex. For example, the early stages of Alzheimer's disease can manifest with memory loss and spatial disorientation symptoms.
- The caudate nucleus is implicated in motor and spatial functions.
- Physiological techniques and other technology used to measure the state of these regions of the brain can be costly, inefficient, and time-consuming.
- apparatus, systems and methods are provided for quantifying aspects of cognition (including cognitive abilities).
- the indication of cognitive abilities of an individual can provide insight into the relative health or strength of portions of the brain of the individual.
- the example apparatus, systems and methods can be implemented for enhancing certain cognitive abilities of the individual.
- embodiments relate to an apparatus for generating an assessment of one or more cognitive skills in an individual.
- the apparatus includes a user interface, a memory to store processor-executable instructions, and one or more processing units communicatively coupled to the user interface and the memory.
- Upon execution of the processor-executable instructions by the one or more processing units, the one or more processing units are configured to present via the user interface a first task that requires navigation of a specified route through an environment, and to present via the user interface a first indicator configured to navigate the specified route from an initial point in the environment to a target end-point with or without input from the individual.
- the one or more processing units are configured to configure the user interface to display instructions to the individual to perform a second task, the second task requiring the individual either: (i) to navigate a reverse of at least a portion of the specified route, or (ii) to navigate at least a portion of the specified route at least one additional time.
- the one or more processing units are configured to present via the user interface a second indicator configured to navigate in the environment in response to physical actions of the individual to control one of (i) a relative direction of the second indicator, or (ii) a speed of movement of the second indicator, or (iii) both (i) and (ii), to perform the second task.
- Measurement data is obtained by measuring data indicative of the physical actions of the individual to control the second indicator in performing the second task.
- the measurement data is analyzed to generate a performance metric for the performance of the second task, the performance metric providing an indication of the cognitive ability of the individual.
- the target end-point may include a specified location in the environment, a specified landmark feature in the environment, and/or a specific object in the environment.
- the one or more processing units may be configured to return the second indicator to either: (a) a portion of the specified route that was navigated successfully, or (b) the initial point.
- the one or more processing units may be configured to present at least one directional aid via the user interface to indicate a correction to the turn or the direction.
- a degree of difficulty of the second task may be modified based on the number of directional aids displayed to the individual in performance of the second task.
- Generating the performance metric may include considering one or more of a total time taken to successfully complete the second task, a number of incorrect turns made by the second indicator, a number of incorrect directions of movement made by the second indicator, or a degree of deviation of the user-navigated route in the second task as compared to the specified route.
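Purely as an illustrative aid (the patent does not prescribe a formula), the following minimal Python sketch shows one way the enumerated measures could be combined into a single performance metric. The field names and weights are assumptions introduced here, not values from the disclosure:

```python
from dataclasses import dataclass

@dataclass
class SecondTaskMeasurements:
    """Hypothetical container for data measured during the second task."""
    total_time_s: float        # total time taken to successfully complete the task
    incorrect_turns: int       # number of incorrect turns made by the second indicator
    incorrect_directions: int  # number of incorrect directions of movement
    route_deviation: float     # degree of deviation of the user-navigated route vs. the specified route

def performance_metric(m: SecondTaskMeasurements,
                       w_time: float = 1.0, w_turns: float = 5.0,
                       w_dirs: float = 5.0, w_dev: float = 0.5) -> float:
    """Lower is better: a weighted cost over the measures named in the text.
    The weights are illustrative assumptions only."""
    return (w_time * m.total_time_s
            + w_turns * m.incorrect_turns
            + w_dirs * m.incorrect_directions
            + w_dev * m.route_deviation)

print(performance_metric(SecondTaskMeasurements(60.0, 2, 1, 4.0)))  # 77.0
```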
- Other embodiments relate to an apparatus for generating an assessment of one or more cognitive skills in an individual.
- the apparatus includes a user interface, a memory to store processor-executable instructions, and one or more processing units communicatively coupled to the user interface and the memory.
- Upon execution of the processor-executable instructions by the one or more processing units, the one or more processing units are configured to present via the user interface a first task that requires navigation of a course that includes at least one turn of a discrete angular amount in an environment, and to present via the user interface a first indicator configured to navigate in the environment in response to physical actions of the individual to control the first indicator from an initial point of the course to a target end-point.
- the one or more processing units are configured to configure the user interface to display instructions to the individual to perform a second task, the second task requiring the individual to control a second indicator to indicate a relative orientation of the initial point or a different specified location in the environment relative to the target end-point.
- Data indicative of the relative orientation indicated using the second indicator is measured.
- the measurement data is analyzed to generate a performance metric for the performance of the second task, the performance metric providing an indication of the cognitive ability of the individual.
- the second indicator may include an avatar, a pointer tool, and/or a tool for drawing a line, each for indicating the relative orientation.
- Generating the performance metric may include considering a difference between data indicative of the relative orientation indicated using the second indicator and data indicative of actual relative orientation between the initial point and the target endpoint.
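As an illustration only, a minimal Python sketch of computing that difference, assuming orientations are expressed as bearings in degrees in a 2D plane (a coordinate convention assumed here, not specified in the disclosure):

```python
import math

def actual_bearing_deg(end_point, initial_point) -> float:
    """True bearing (degrees from the +x axis) of the initial point
    as seen from the target end-point."""
    dx = initial_point[0] - end_point[0]
    dy = initial_point[1] - end_point[1]
    return math.degrees(math.atan2(dy, dx)) % 360.0

def orientation_error_deg(indicated_deg: float, actual_deg: float) -> float:
    """Smallest absolute angular difference (0-180 degrees) between the
    orientation the individual indicated with the second indicator and
    the actual relative orientation."""
    diff = (indicated_deg - actual_deg) % 360.0
    return min(diff, 360.0 - diff)

print(orientation_error_deg(350.0, 10.0))  # 20.0
```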
- the first task may include a free-exploration phase in which the one or more processing units are configured to allow the individual to control the first indicator to navigate in at least a portion of the environment without restriction or guidance.
- the one or more processing units may be configured to display limited visual information about the environment to the individual based on proximity and/or directionality relative to the second indicator.
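For illustration, a minimal sketch of one way such proximity- and directionality-limited visibility could be gated; the distance and field-of-view thresholds are assumptions introduced here:

```python
import math

def is_visible(feature_xy, indicator_xy, heading_deg: float,
               max_distance: float = 10.0, fov_deg: float = 90.0) -> bool:
    """Display a feature only if it is within a viewing distance of the
    indicator (proximity) and inside the indicator's field of view
    (directionality). Thresholds are illustrative assumptions."""
    dx = feature_xy[0] - indicator_xy[0]
    dy = feature_xy[1] - indicator_xy[1]
    if math.hypot(dx, dy) > max_distance:
        return False  # too far away: proximity limit
    bearing = math.degrees(math.atan2(dy, dx)) % 360.0
    off_axis = abs((bearing - heading_deg + 180.0) % 360.0 - 180.0)
    return off_axis <= fov_deg / 2.0  # directionality limit

print(is_visible((3, 0), (0, 0), heading_deg=0.0))  # True: close and ahead
```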
- an apparatus for generating an assessment of one or more cognitive skills in an individual includes a user interface, a memory to store processor-executable instructions, and one or more processing units communicatively coupled to the user interface and the memory, wherein the operations below are performed upon execution of the processor-executable instructions by the one or more processing units.
- the one or more processing units are configured to present via the user interface a first task that requires the individual to navigate in an environment.
- the first task includes an exploration phase in which the one or more processing units are configured to provide to the individual control of a first indicator to navigate in at least a portion of the environment from an initial point either (i) along a specified route or (ii) without restriction in a free-exploration phase.
- the environment includes a specified location, a specified landmark, and/or a specified object.
- the first indicator is configured to navigate in the environment based on physical actions of the individual to control one of (i) a relative direction of the first indicator, (ii) a speed of movement of the first indicator, or (iii) both (i) and (ii), to perform the first task.
- the one or more processing units are configured to configure the user interface to display instructions to the individual to perform a second task, the second task requiring navigation to the specified location, the specified landmark feature, and/or the specified object.
- the one or more processing units are configured to present via the user interface a second indicator configured to navigate in the environment based on physical actions of the individual to control one of (i) a relative direction of the second indicator, (ii) a speed of movement of the second indicator, or (iii) both (i) and (ii), to navigate to the specified location, the specified landmark feature, or the specified object based on the instructions.
- the specified location, the specified landmark, or the specified object are not displayed to the individual during performance of the second task.
- Measurement data is obtained by measuring data indicative of the physical actions of the individual in performing the second task.
- the measurement data is analyzed to generate a performance metric for the performance of the second task, the performance metric providing an indication of the cognitive ability of the individual.
- an apparatus for generating an assessment of one or more cognitive skills in an individual includes a user interface, a memory to store processor-executable instructions, and one or more processing units communicatively coupled to the user interface and the memory.
- Upon execution of the processor-executable instructions by the one or more processing units, the one or more processing units are configured to present via the user interface a first task that requires the individual to navigate in an environment, a first portion of the first task including an exploration phase in which the one or more processing units are configured to provide to the individual control of a first indicator to navigate in at least a portion of the environment from an initial point either (i) along a specified route or (ii) without restriction in a free-exploration phase.
- the environment includes a specified location, a specified landmark, and/or a specified object.
- the first indicator is configured to navigate in the environment based on physical actions of the individual to control one of (i) a relative direction of the first indicator, (ii) a speed of movement of the first indicator, or (iii) both (i) and (ii), to perform the first portion of the first task.
- the one or more processing units are configured to configure the user interface to display instructions to the individual to perform a second portion of the first task requiring navigation to the specified location, the specified landmark feature, and/or the specified object.
- Measurement data is obtained by measuring data indicative of the physical actions of the individual in performing the second portion of the first task.
- the measurement data is analyzed to generate a performance metric for the performance of the first task, the performance metric providing an indication of the cognitive ability of the individual.
- the one or more processing units may be configured such that the specified location, the specified landmark, or the specified object are not displayed to the individual during performance of the second portion of the first task.
- the one or more processing units may be further configured to generate a scoring output indicative of at least one of (i) a likelihood of onset of a neurodegenerative condition of the individual, or (ii) a stage of progression of the neurodegenerative condition.
- the one or more processing units may be further configured to adjust a difficulty level of the second task based at least in part on the analysis of the measurement data.
- the measurement data may include measures of one or more parameters indicative of a navigation strategy, the one or more parameters including at least one of a measure of the individual's judgment about relative spatial positions between two points as determined based on distances relative to other objects in the environment, a measure of the individual's ability to plot a novel course through a portion of the environment that was previously known, or a measure of the individual's ability to spatially transform three or more memorized positions in the environment arranged to cover two or more dimensions.
- the neurodegenerative condition may be Alzheimer's disease, dementia, Parkinson's disease, Huntington's disease, Cushing's disease, or schizophrenia.
- Generating the performance metric may further include computing one or more of a measure of accuracy in a subsequent navigation of the specified route, a measure of accuracy in measures of indication that the individual uses spatial memory rather than visual cues for the relative orientation to the initial point or to a different specified location in the environment, or a measure of a strategy implemented to explore the environment in a free-exploration phase.
- the measurement data may include measures of one or more parameters indicative of a navigation strategy, the one or more parameters being measured as a function of time.
- the second indicator may include a virtual joystick.
- the virtual joystick may be controllable to provide one or more of an indication of a user's "head-orientation" in the environment, an intended direction of movement of the first indicator or the second indicator, or a virtual indication of "looking around" to observe features in the environment.
- the one or more processing units may be further configured to apply a first predictive model to data indicative of the cognitive ability in the individual to classify the individual according to a level of expression of one or more of a beta amyloid, a cystatin, an alpha-synuclein, a huntingtin protein, or a tau protein.
- the first predictive model may be trained using a plurality of training datasets, each training dataset corresponding to a previously classified individual of a plurality of individuals, and each training dataset including data representing an indication of a cognitive ability of the classified individual and data indicative of a diagnosis of a status or progression of a neurodegenerative condition in the classified individual.
- the first predictive model may serve as an intelligent proxy for subsequent measures of the neurodegenerative condition of the individual.
- the first predictive model may include a linear/logistic regression, principal component analysis, a generalized linear mixed model, a random decision forest, a support vector machine, and/or an artificial neural network.
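A minimal sketch, for illustration only, of training one of the named model types (a logistic regression) with scikit-learn. The feature layout, the toy values, and the binary amyloid-status label are assumptions introduced here; the disclosure does not specify a feature encoding:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Each row: performance-metric data from a previously classified individual,
# e.g. [orientation_error_deg, route_deviation, completion_time_s] (assumed).
X_train = np.array([[12.0, 3.1, 45.0],
                    [55.0, 9.8, 90.0],
                    [8.0,  2.0, 40.0],
                    [61.0, 8.5, 85.0]])
# Labels from the diagnosis data: 1 = amyloid positive (A+), 0 = amyloid negative (A-).
y_train = np.array([0, 1, 0, 1])

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X_train, y_train)

# Once trained, the model can act as a proxy for subsequent measures:
new_individual = np.array([[20.0, 4.0, 50.0]])
print(model.predict_proba(new_individual))  # class probabilities for A-/A+
```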
- the measurement data may include measures of one or more parameters indicative of a navigation strategy, the one or more parameters including at least one of a measure of a navigation speed relative to the environment, an orientation relative to the environment, a velocity relative to the environment, a choice of navigation strategy, a measure of a wait or delay period or a period of inaction during navigation, a time interval to complete a course, or a degree of optimization of a navigation path through a course.
- the measurement data may include measures of one or more parameters indicative of a navigation strategy, the one or more parameters including at least one of a direction of the individual's movement relative to the environment, a speed of the individual's movement relative to the environment, a measure of the individual's memory of landmarks, a measure of the individual's memory of turn-by-turn directions, or a frequency or number of times of referral to an aerial or elevated view of the environment.
- the environment may include one or more passageways, one or more obstacles disposed at specified portions of the one or more passageways, and/or one or more walls, dimensioned such that a width (a₁) of each of the one or more obstacles is greater than or about equal to a width (a₂) of each of the one or more passageways, and the width (a₁) is smaller than a length (a₃) of each of the one or more walls of the environment.
- the width a₁ may be about twice the width a₂, and about one-fourth to one-fifth of the length a₃.
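For illustration, a minimal Python check of these dimensional relationships; the numeric tolerances and example dimensions are assumptions introduced here:

```python
def check_environment_dimensions(a1: float, a2: float, a3: float) -> bool:
    """Check the relationships described above: obstacle width a1 about
    twice the passageway width a2, and a1 about one-fourth to one-fifth
    of the wall length a3. Tolerances are illustrative assumptions."""
    obstacle_vs_passage = 1.8 <= a1 / a2 <= 2.2   # a1 roughly 2 * a2
    obstacle_vs_wall = (a3 / 5.0) <= a1 <= (a3 / 4.0)  # a1 in [a3/5, a3/4]
    return obstacle_vs_passage and obstacle_vs_wall

# Example: passageways 2 m wide, obstacles 4 m wide, walls 18 m long.
print(check_environment_dimensions(a1=4.0, a2=2.0, a3=18.0))  # True
```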
- One or more processing units may be configured to present navigation as a first person perspective or as a third person perspective.
- One or more processing units may be further configured to (i) adjust a difficulty of the second task to a second difficulty level; (ii) present a second instance of the second task at the second difficulty level; (iii) obtain a second set of measurement data by measuring data indicative of the physical actions of the individual in performing the second instance of the second task; and (iv) analyze the second set of measurement data to generate a second performance metric indicative of a change of the cognitive ability of the individual.
- the second difficulty level may be an increase in the difficulty or a decrease of the difficulty.
- the one or more processing units may be further configured to provide a measure of an enhancement of the cognitive ability of the individual based at least in part on the second performance metric.
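As an illustrative aid only, a minimal sketch of one difficulty-adjustment scheme (a staircase rule, which is an assumption here; the disclosure does not specify how the difficulty level is computed from the measurement data):

```python
def adjust_difficulty(level: float, metric: float,
                      target: float = 50.0, step: float = 0.1) -> float:
    """Staircase-style adjustment: increase the difficulty level when the
    performance metric beats a target score, decrease it otherwise.
    Higher `metric` means better performance; target and step are assumed."""
    return level * (1.0 + step) if metric > target else level * (1.0 - step)

# Example: three successive task instances with improving scores; the change
# between the first and last metric could serve as a measure of enhancement.
level = 1.0
for score in [40.0, 55.0, 70.0]:
    level = adjust_difficulty(level, score)
    print(f"score={score:.0f} -> next difficulty level={level:.2f}")
```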
- the apparatus may be configured as at least one of a smartphone, a tablet, a slate, an electronic-reader (e-reader), a digital assistant, a portable computing device, a wearable computing device, or a gaming device.
- FIGs. 1 A - ID show non-limiting examples of computerized renderings of courses that present navigation tasks, according to the principles herein.
- FIGs. 2A - 2C show a computerized rendering of an entrance to an environment of a non-limiting example navigation task, according to the principles herein.
- FIGs. 3 A - 3U show views of portions of a computerized rendering of an environment of a non-limiting example navigation task, according to the principles herein.
- FIGs. 4 A - 4C show a computerized rendering of navigation to an exit from an environment of a non-limiting example navigation task, according to the principles herein.
- FIGs. 5 A - 5 J show views of portions of a computerized rendering of an environment of a non-limiting example navigation task, according to the principles herein.
- FIGs. 6 A - 6E show views of portions of a computerized rendering of an environment of a non-limiting example navigation task, according to the principles herein.
- FIGs. 7 A - 7F show views of portions of a computerized rendering of an environment of a non-limiting example navigation task, according to the principles herein.
- FIGs. 8A - 8H show views of portions of a computerized rendering of an environment of a non-limiting example navigation task, according to the principles herein.
- FIGs. 9 A - 9H show views of portions of a computerized rendering of an environment of a non-limiting example navigation task, according to the principles herein.
- FIG. 10 shows a non-limiting example of a graphical user interface rendered to a user, according to the principles herein.
- FIG. 11 shows an example apparatus according to the principles herein that can be used to implement the cognitive platform described herein.
- FIG. 12 is a block diagram of an example computing device that can be used as a computing component according to the principles herein.
- FIGs. 13 A - 13F show flowcharts of non-limiting example methods that can be implemented using a cognitive platform or platform product that includes at least one processing unit, according to principles herein.
- FIG. 14A shows a non-limiting example system, method, and apparatus according to the principles herein, where the platform product (including using an APP) is configured as a cognitive platform that is separate from, but configured for coupling with, one or more of the physiological components.
- FIG. 14B shows another non-limiting example system, method, and apparatus according to the principles herein, where the platform product (including using an APP) is configured as an integrated device, where the cognitive platform is integrated with one or more of the physiological components.
- FIG. 15 shows a non-limiting example implementation where the platform product (including using an APP) is configured as a cognitive platform that is configured for coupling with a physiological component, according to principles herein.
- Described herein are inventive methods, apparatus, and systems comprising a cognitive platform configured for implementing one or more navigation tasks.
- the cognitive platform also can be configured for coupling with one or more other types of measurement components, and for analyzing data indicative of at least one measurement of the one or more other types of components.
- the cognitive platform can be configured for cognitive training and/or for clinical purposes.
- the cognitive platform may be integrated with one or more physiological or monitoring components and/or cognitive testing components.
- As used herein, the term "includes" means "includes but is not limited to," and the term "including" means "including but not limited to."
- the example platform products and cognitive platforms according to the principles described herein can be applicable to many different types of conditions, such as but not limited to depression, attention deficit hyperactivity disorder (ADHD), Alzheimer's disease, dementia, Parkinson's disease, Huntington's disease, Cushing's disease, schizophrenia, or other cognitive condition.
- the ability of an individual to navigate from an initial point to a desired location in a real or virtual environment can depend at least in part on use of two different areas of the brain: the caudate nucleus region, and the entorhinal cortex and hippocampal regions. See, e.g., Hafting et al., "Microstructure of a spatial map in the entorhinal cortex," Nature, vol. 436, issue 7052, pp. 801-806 (2005); Bohbot et al., "Gray matter differences correlate with spontaneous strategies in a human virtual navigation task," Journal of Neuroscience, vol. 27, issue 38, pp. 10078-10083 (2007).
- In an example where an individual performs a navigation task that activates the caudate nucleus region of the brain, the individual is learning a rigid set of stimulus-response type associations, referred to as a dependent stimulus-response navigation strategy.
- A non-limiting example of a dependent stimulus-response navigation strategy is, e.g., "see the tree and turn right."
- Other navigation tasks engage hippocampal-dependent spatial navigation strategies by activating the hippocampal region of the brain.
- An individual relying on the entorhinal cortex region of the brain for navigation forms a directionally-oriented topographically organized neural map of the spatial environment, which includes translational and directional information. That map is anchored to external landmarks, but can persist in the absence of those external landmarks.
- the contextual specificity of hippocampal representations suggests that during encoding, the hippocampus associates output from a generalized, path-integration-based coordinate system with landmarks or other features specific to a particular environment. Through back projections to the superficial layers of the entorhinal cortex, associations stored in the hippocampus may reset the path integrator as errors accumulate during exploration of an environment. Anchoring the output of the path integrator to external reference points stored in the hippocampus or other cortical areas of the brain may enable alignment of entorhinal maps from one trial to the next, even when the points of departure are different.
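For illustration, a toy Python model of the path-integration behavior described above: position is dead-reckoned from noisy self-motion estimates, so error accumulates, and the estimate is re-anchored ("reset") when a known landmark is encountered. The noise magnitudes and class design are assumptions introduced here:

```python
import math
import random

class PathIntegrator:
    """Toy dead-reckoning model: accumulates error over movement steps and
    resets its estimate at known landmarks, analogous to the hippocampal
    anchoring described above. Parameters are illustrative assumptions."""

    def __init__(self, x: float = 0.0, y: float = 0.0, noise: float = 0.05):
        self.x, self.y, self.noise = x, y, noise

    def step(self, heading_deg: float, distance: float) -> None:
        # Each self-motion estimate carries a small random error.
        h = math.radians(heading_deg + random.gauss(0, 5))
        d = distance * (1 + random.gauss(0, self.noise))
        self.x += d * math.cos(h)
        self.y += d * math.sin(h)

    def reset_at_landmark(self, landmark_xy) -> None:
        # Seeing a known landmark re-anchors the estimate to its true position.
        self.x, self.y = landmark_xy

pi = PathIntegrator()
pi.step(heading_deg=0.0, distance=1.0)   # drifts slightly from (1, 0)
pi.reset_at_landmark((1.0, 0.0))         # error corrected at the landmark
print(pi.x, pi.y)
```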
- An individual may navigate through a given environment using an allocentric form of navigation and/or an egocentric form of navigation.
- an individual uses differing portions of the brain.
- allocentric refers to a form of navigation where an individual identifies places in the environment independent of the individual's perspective (or direction) and ongoing behavior. In allocentric navigation, an individual centers their attention and actions on other items in the environment rather than their own perspective.
- Parameters that can be measured to indicate allocentric navigation include measures of an individual's judgment about the directional orientation and/or horizontal distance between two points (e.g., their relative spatial position as measured based on distances relative to other objects in the environment), an individual's ability to plot a novel course through a previously traversed (and therefore known) environment (i.e., a course that differs in at least one parameter from a previous course through the environment), and an individual's ability to spatially transform (e.g. rotate, translate, or scale) three or more memorized positions in an environment arranged to cover two or more dimensions.
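For illustration of the "spatially transform" ability named above, a minimal Python sketch of one such transform (a rigid 2D rotation of three memorized positions spanning two dimensions); the coordinate convention is an assumption introduced here:

```python
import math

def rotate_positions(points, angle_deg: float, center=(0.0, 0.0)):
    """Rigid 2D rotation of memorized positions about a center point,
    one example of the rotate/translate/scale transforms named above."""
    a = math.radians(angle_deg)
    cx, cy = center
    return [(cx + (x - cx) * math.cos(a) - (y - cy) * math.sin(a),
             cy + (x - cx) * math.sin(a) + (y - cy) * math.cos(a))
            for x, y in points]

# Three memorized positions covering two dimensions, rotated 90 degrees:
print(rotate_positions([(1, 0), (0, 1), (2, 2)], 90.0))
# [(0.0, 1.0), (-1.0, 0.0), (-2.0, 2.0)] (up to floating-point rounding)
```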
- Areas of the brain such as the entorhinal cortex and hippocampus are used for allocentric navigation.
- the allocentric navigation can involve spatial grid navigation and formulation of a memory of how various places are located on the spatial grid and relative to each other.
- the hippocampus is implicated in both spatial memory and navigation.
- the medial entorhinal cortex contributes to spatial information processing.
- Egocentric refers to a form of navigation where points in the environment are defined in terms of their distance and direction from the individual.
- Parameters that can be measured to indicate egocentric navigation include the direction and speed of the individual's movements relative to the environment.
- positions in the environment are defined relative to the individual, such that movement of the individual is accompanied by an updating of the individual's perspective representation of a given point.
- Areas of the brain such as the caudate nucleus are used in egocentric navigation.
- the egocentric navigation can involve memory of landmarks and turn-by-turn directions.
- the caudate nucleus is implicated in motor and spatial functions.
- Measures of the relative strength of each area of the brain can inform the cognitive condition of an individual. According to the principles herein, analysis of data indicative of these measurement parameters can be used to detect the very early signs of conditions such as but not limited to Alzheimer's disease.
- The example system, method, and apparatus can be configured to generate a scoring output as an indication of a relative health or strength of the caudate nucleus region of the brain of the individual relative to the entorhinal cortex and hippocampal regions of the brain of the individual.
- the scoring output can be computed based on the analysis of the data collected from measurements as an individual performs a spatial navigation task.
- The example system, method, and apparatus can be configured to generate a scoring output as an indication of a cognitive ability of the individual, based on spatial memory capabilities of the individual that indicate a relative health or strength of the caudate nucleus region, the entorhinal cortex, and the hippocampal regions of the brain of the individual.
- the scoring output can be computed based on the analysis of the data collected from measurements as an individual performs physical actions to effect a spatial navigation task involving way-finding, path-finding, path-plotting, route-learning, and/or path integration (dead-reckoning).
- The example system, method, and apparatus can be configured to generate a scoring output as an indication of a likelihood of onset of a neurodegenerative condition of the individual, or a stage of progression of the neurodegenerative condition, based at least in part on the analysis of at least one set of data (such as but not limited to a first set of data and a second set of data) collected from measurements as an individual performs a navigation task involving way-finding, path-finding, path-plotting, route-learning, and/or path integration (dead-reckoning).
- the example system, method, and apparatus can be configured to transmit the scoring output to the individual and/or display the scoring output on a user interface.
- In Alzheimer's disease (AD), the hippocampus is one of the early regions of the brain to suffer damage, resulting in the memory loss and spatial disorientation symptoms.
- Kunz et al., Science, vol. 350, issue 6259, p. 430 (2015) also proposed that Alzheimer's disease pathology starts in the entorhinal cortex, with the disease likely impairing local neural correlates of spatial navigation such as grid cells.
- Analysis of measurement data indicative of the individual's performance at navigation tasks can provide an indication of the relative strength of the hippocampus and entorhinal cortex.
- the analysis of data indicative of the individual's performance of the navigation tasks can be used to provide a measure of entorhinal and/or hippocampal dysfunction in individuals, thereby providing a measure of the likelihood of onset of Alzheimer's disease and/or the degree of progression of the disease.
- Alzheimer's disease, Parkinson's disease, vascular dementia, and mild cognitive impairment potentially have a greater effect on the hippocampal and entorhinal regions of the brain.
- Attention deficit hyperactivity disorder, Huntington's disease, obsessive-compulsive disorder, and depression (major depressive disorder) potentially have a greater effect on the caudate nucleus region of the brain.
- Example systems, methods, and apparatus herein can be implemented to collect data indicative of measures of the areas of the brain implicated in the differing types of navigation tasks.
- Data indicative of the individual's performance based on the type of navigation (i.e., allocentric navigation vs egocentric navigation) and/or the degree of success at navigation can be used to provide an indication of the relative strength of each area of the brain of the individual.
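Purely for illustration, a minimal sketch of one way such relative strength could be summarized as a single balance score; the normalization of the two inputs to a common 0-to-1 scale is an assumption introduced here:

```python
def relative_strength_score(allocentric_metric: float,
                            egocentric_metric: float) -> float:
    """Illustrative balance between allocentric-task performance
    (hippocampus/entorhinal cortex) and egocentric-task performance
    (caudate nucleus). Both inputs assumed normalized to 0..1, higher is
    better. Returns a value in -1..1; negative values suggest relatively
    weaker allocentric (hippocampal/entorhinal) performance."""
    total = allocentric_metric + egocentric_metric
    if total == 0:
        return 0.0
    return (allocentric_metric - egocentric_metric) / total

print(relative_strength_score(0.4, 0.8))  # -0.33...: weaker allocentric side
```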
- Where an individual implements an allocentric navigation strategy, the individual is relying more on the activation of the hippocampal and the entorhinal cortex regions of the brain (needing the context of one or more features to guide navigation strategy).
- the individual's performance on a task requiring allocentric navigation skills could be an indicator of the level of activation of the hippocampal and/or the entorhinal cortex regions of the brain, such that poorer values of performance measure(s) could indicate poorer activation of the hippocampal and/or the entorhinal cortex regions of the brain.
- the entorhinal cortex region of the brain can become more efficient once a navigation strategy is processed by the hippocampal region.
- Where an individual implements an egocentric navigation strategy, the individual is relying more on the activation of the caudate nucleus region of the brain (a navigation learning strategy based on using the self as the point of reference).
- This could indicate that the individual takes fewer cues from the environment, or that the individual cannot use this mechanism to learn.
- the individual's performance on a task requiring egocentric navigation skills could be an indicator of the level of activation of the caudate nucleus region of the brain, such that poorer values of performance measure(s) could indicate poorer activation of the caudate nucleus region of the brain.
- Example systems, methods, and apparatus herein can be implemented to generate an assessment of one or more cognitive skills in an individual.
- An example system or apparatus for implementing the method can include a user interface, a memory to store processor-executable instructions, and one or more processing units communicatively coupled to the user interface and the memory, wherein upon execution of the processor-executable instructions by the one or more processing units, the example system or apparatus executes the method.
- An example method includes using the programmed one or more processing units to render a first task that requires navigation of a specified route through an environment, render a first indicator configured to navigate the specified route from an initial point in the environment to a target end-point with or without input from the individual, configure the user interface to display instructions to the individual to perform a second task, the second task requiring the individual either: (i) to navigate a reverse of at least a portion of the specified route, or (ii) to navigate at least a portion of the specified route at least one additional time, and render a second indicator configured to navigate in the environment in response to physical actions of the individual to control one of (i) a relative direction of the second indicator, or (ii) a speed of movement of the second indicator, or (iii) both (i) and (ii), to perform the second task.
- the example method further includes measuring data indicative of the physical actions of the individual to control the second indicator in performing the second task, and analyzing the measurement data to generate a performance metric for the performance of the second task, the performance metric providing an indication of the cognitive ability of the individual.
- the navigation indicator presented via the user interface for navigating in the computerized environment can be rendered and displayed to the individual via a visual representation as a first person view or as a third person view.
- for a first person view, the user interface is configured such that the views presented during navigation mimic the "eye-level" view of the environment.
- for a third person view, the user interface is configured such that the views presented during navigation mimic a view of the environment from "behind," "to the side of," or "over the shoulder" of an element on the user interface, such as but not limited to an avatar or other object.
- the navigation indicator can be presented via the user interface as a single element or as two or more elements in the environment. Where the navigation indicator is presented as a single element, it can be displayed as an avatar or other guidable element described herein. Where the navigation indicator is presented as two or more elements, it can be displayed as a first avatar or other guidable element that indicates a direction of relative movement and a second avatar or other guidable element that indicates an intended direction of movement. In another example, the navigation indicator can be presented to the user via a visual representation of relative progression along a path in the environment.
- Example systems, methods, and apparatus herein can be implemented to generate an assessment of one or more cognitive skills in an individual.
- An example system or apparatus for implementing the method can include a user interface, a memory to store processor-executable instructions, and one or more processing units communicatively coupled to the user interface and the memory, wherein upon execution of the processor-executable instructions by the one or more processing units, the example system or apparatus executes the method.
- An example method includes using the programmed one or more processing units to render a first task that requires navigation of a course that includes at least one turn of a discrete angular amount in an environment, render a first indicator configured to navigate in the environment in response to physical actions of the individual to control the first indicator from an initial point of the course to a target end-point, and configure the user interface to display instructions to the individual to perform a second task, the second task requiring the individual to control a second indicator to indicate a relative orientation of the initial point or a different specified location in the environment relative to the target end-point.
- the example method further includes measuring data indicative of the relative orientation indicated using the second indicator, and analyzing the measurement data to generate a performance metric for the performance of the second task, the performance metric providing an indication of the cognitive ability of the individual.
- Example systems, methods, and apparatus herein can be implemented to generate an assessment of one or more cognitive skills in an individual.
- An example system or apparatus for implementing the method can include a user interface, a memory to store processor-executable instructions, and one or more processing units communicatively coupled to the user interface and the memory, wherein upon execution of the processor-executable instructions by the one or more processing units, the example system or apparatus executes the method.
- An example method includes using the programmed one or more processing units to render a first task that requires the individual to navigate in an environment.
- the first task includes an exploration phase in which the one or more processing units are configured to provide to the individual control of a first indicator to navigate in at least a portion of the environment from an initial point either (i) along a specified route or (ii) without restriction in a free-exploration phase.
- the environment includes one or more of a specified location, a specified landmark, or a specified object
- the first indicator is configured to navigate in the environment based on physical actions of the individual to control one of (i) a relative direction of the first indicator, (ii) a speed of movement of the first indicator, or (iii) both (i) and (ii), to perform the first task.
- the example method further includes using the programmed one or more processing units to configure the user interface to display instructions to the individual to perform a second task, the second task requiring navigation to one or more of the specified location, the specified landmark feature, or the specified object, and render a second indicator configured to navigate in the environment based on physical actions of the individual to control one of (i) a relative direction of the second indicator, (ii) a speed of movement of the second indicator, or (iii) both (i) and (ii), to navigate to the specified location, the specified landmark feature, or the specified object based on the instructions.
- the specified location, the specified landmark, or the specified object are not displayed to the individual during performance of the second task.
- the example method further includes measuring data indicative of the physical actions of the individual in performing the second task, and analyzing the measurement data to generate a performance metric for the performance of the second task, the performance metric providing an indication of the cognitive ability of the individual.
- Example systems, methods, and apparatus herein can be implemented to generate an assessment of one or more cognitive skills in an individual.
- An example system or apparatus for implementing the method can include a user interface, a memory to store processor-executable instructions, and one or more processing units communicatively coupled to the user interface and the memory, wherein upon execution of the processor-executable instructions by the one or more processing units, the example system or apparatus executes the method.
- An example method includes using the programmed one or more processing units to render a first task that requires the individual to navigate in an environment.
- the first task includes a first portion that is an exploration phase in which the one or more processing units are configured to provide to the individual control of a first indicator to navigate in at least a portion of the environment from an initial point either (i) along a specified route or (ii) without restriction in a free-exploration phase.
- the environment comprises one or more of a specified location, a specified landmark, or a specified object.
- the first indicator is configured to navigate in the environment based on physical actions of the individual to control one of (i) a relative direction of the first indicator, (ii) a speed of movement of the first indicator, or (iii) both (i) and (ii), to perform the first portion of the first task.
- the example method further includes using the programmed one or more processing units to configure the user interface to display instructions to the individual to perform a second portion of the first task requiring navigation to one or more of the specified location, the specified landmark feature, or the specified object, measure data indicative of the physical actions of the individual in performing the second portion of the first task, and analyze the measurement data to generate a performance metric for the performance of the first task, the performance metric providing an indication of the cognitive ability of the individual.
- the one or more processing units can be configured such that the specified location, the specified landmark, or the specified object are not displayed to the individual during performance of the second portion of the first task.
- Navigation, as used herein, refers to way-finding, path-plotting, route-learning, path integration (such as but not limited to dead-reckoning), seek or search and recovery, direction-giving, or other similar types of tasks.
- the instant disclosure is directed to computer-implemented devices formed as example platform products configured to implement software and/or other processor-executable instructions for the purpose of measuring data indicative of a user's performance at one or more navigation tasks, to provide a user performance metric.
- performance metrics can include data indicative of an individual's navigation speed, orientation, velocity, choice of navigation strategy, wait or delay period, or other period of inaction, prior to continuing in a given direction of a course or changing direction, time interval to complete a course, frequency or number of times of referral to an aerial or elevated view of a landscape (including as a map), a measure of accuracy in recreating a previously learned route, a measure of accuracy in using spatial memory rather than visual cues to orient oneself relative to (including to point back to) a specific location in space (such as but not limited to the beginning of the current navigation route), and/or a measure of the strategies employed in exploring and learning a novel environment.
- the measure can include values of any of these parameters as a function of time.
- the performance metrics can include a measure of the degree of optimization of the path navigated by the individual through the course, such as determining the shortest path or near-shortest path through the course.
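For illustration, a minimal Python sketch of one way to quantify that degree of optimization: compute the shortest path through a grid-maze representation of the course with breadth-first search, then take the ratio of the user's path length to the optimum (1.0 = perfectly optimal). The grid encoding is an assumption introduced here:

```python
from collections import deque

def shortest_path_length(grid, start, goal):
    """BFS shortest path (in steps) through a grid maze, where 0 = open
    cell and 1 = obstacle. Returns None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    queue, seen = deque([(start, 0)]), {start}
    while queue:
        (r, c), dist = queue.popleft()
        if (r, c) == goal:
            return dist
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), dist + 1))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
best = shortest_path_length(grid, (0, 0), (2, 0))  # 6 steps
print(10 / best)  # user took 10 steps -> optimization score ~1.67
```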
- the example performance metric can be used to derive an assessment of a user's cognitive abilities and/or to measure a user's response to a cognitive treatment, and/or to provide data or other quantitative indicia of a user's condition (including cognitive condition).
- the performance metric can be used to derive measures of the relative strength of each area of the brain.
- Non-limiting example cognitive platforms or platform products can be configured to classify an individual as to relative health or strength of regions of the brain such as but not limited to the caudate nucleus region of the brain and the entorhinal cortex and hippocampal regions of the brain, and/or potential efficacy of use of the cognitive platform or platform product when the individual is administered a drug, biologic or other pharmaceutical agent, based on the data collected from the individual's interaction with the cognitive platform or platform product and/or metrics computed based on the analysis (and associated computations) of that data.
- cognitive platforms or platform products can be configured to classify an individual as to likelihood of onset and/or stage of progression of a cognitive condition, based on the data collected from the individual's interaction with the cognitive platform or platform product and/or metrics computed based on the analysis (and associated computations) of that data.
- the cognitive condition can be, but is not limited to, depression, attention deficit hyperactivity disorder (ADHD), Alzheimer's disease, dementia, Parkinson's disease, Huntington's disease, Cushing's disease, or schizophrenia.
- Any classification of an individual as to likelihood of onset and/or stage of progression of a cognitive condition can be transmitted as a signal to a medical device, healthcare computing system, or other device, and/or to a medical practitioner, a health practitioner, a physical therapist, a behavioral therapist, a sports medicine practitioner, a pharmacist, or other practitioner, to allow formulation of a course of treatment for the individual or to modify an existing course of treatment, including to determine a change in dosage (such as but not limited to an amount, concentration, and/or dose titration) of a drug, biologic, or other pharmaceutical agent for the individual, or to determine an optimal type or combination of drug, biologic, or other pharmaceutical agent for the individual.
- the platform product or cognitive platform can be configured as any combination of a medical device platform, a monitoring device platform, a screening device platform, or other device platform.
- the instant disclosure is also directed to example systems that include platform products and cognitive platforms that are configured for coupling with one or more other physiological or monitoring components and/or cognitive testing components.
- the systems include platform products and cognitive platforms that are integrated with the one or more other physiological or monitoring components and/or cognitive testing components.
- the systems include platform products and cognitive platforms that are separately housed from and configured for communicating with the one or more physiological or monitoring components and/or cognitive testing components, to receive data indicative of measurements made using such one or more components.
- cData refers to data collected from measures of an interaction of a user with a computer-implemented device formed as a platform product.
- nData refers to other types of data that can be collected according to the principles herein. Any component used to provide nData is referred to herein as an nData component.
- the cData and/or nData can be collected in real-time.
- the nData can be collected from measurements using one or more physiological or monitoring components and/or cognitive testing components.
- the one or more physiological components are configured for performing physiological measurements.
- the physiological measurements provide quantitative measurement data of physiological parameters and/or data that can be used for visualization of physiological structure and/or functions.
- nData can be collected from measurements of types of protein and/or conformation of proteins in the tissue or fluid (including blood) of an individual and/or in tissue or fluid (including blood) collected from the individual.
- the tissue and/or fluid can be in or taken from the individual's brain.
- the measurement of the conformation of the proteins can provide an indication of amyloid formation (e.g., whether the proteins are forming aggregates).
- the nData can be collected from measurements of beta amyloid, cystatin, alpha-synuclein, huntingtin protein, and/or tau proteins.
- the nData can be collected from measurements of other types of proteins that may be implicated in the onset and/or progression of a neurodegenerative condition, such as but not limited to Alzheimer's disease, dementia, Parkinson's disease, Huntington's disease, Cushing's disease, or schizophrenia.
- tau proteins are deposited first in the entorhinal cortex and then in the hippocampal area of the brain in Alzheimer's disease.
- nData can be a classification or grouping that can be assigned to an individual based on measurement data from the one or more physiological or monitoring components and/or cognitive testing components. For example, an individual can be classified as to amyloid status of amyloid positive (A+) or amyloid negative (A-).
- the nData can be an identification of a type of biologic, drug or other pharmaceutical agent administered or to be administered to an individual, and/or data collected from measurements of a level of the biologic, drug or other pharmaceutical agent in the tissue or fluid (including blood) of an individual, whether the measurement is made in situ or using tissue or fluid (including blood) collected from the individual.
- Non-limiting examples of a biologic, drug, or other pharmaceutical agent applicable to any example described herein include methylphenidate (MPH), scopolamine, donepezil hydrochloride, rivastigmine tartrate, memantine HCl, solanezumab, aducanumab, and crenezumab.
- The term "drug" herein encompasses a drug, a biologic, and/or other pharmaceutical agent.
- nData can include any data that can be used to characterize an individual's status, such as but not limited to age, gender or other similar data.
- the data (including cData and nData) is collected with the individual's consent.
- the one or more physiological components can include any means of measuring physical characteristics of the body and nervous system, including electrical activity, heart rate, blood flow, and oxygenation levels, to provide the nData.
- This can include camera-based heart rate detection, measurement of galvanic skin response, blood pressure measurement, electroencephalogram, electrocardiogram, magnetic resonance imaging, near-infrared spectroscopy, and/or pupil dilation measures, to provide the nData.
- physiological measurements to provide nData include, but are not limited to, the measurement of body temperature, heart or other cardiac-related functioning using an electrocardiograph (ECG), electrical activity using an electroencephalogram (EEG), event-related potentials (ERPs), functional magnetic resonance imaging (fMRI), blood pressure, electrical potential at a portion of the skin, galvanic skin response (GSR), magnetoencephalogram (MEG), eye-tracking device or other optical detection device including processing units programmed to determine degree of pupillary dilation, functional near-infrared spectroscopy (fNIRS), and/or a positron emission tomography (PET) scanner.
- EEG-fMRI or MEG-fMRI measurement allows for simultaneous acquisition of electrophysiology (EEG/MEG) nData and hemodynamic (fMRI) nData.
- the cognitive platform and systems including the cognitive platform can be configured to present computerized navigation tasks and platform interactions that inform cognitive assessment (including screening or monitoring) or deliver a treatment.
- Example systems, methods, and apparatus herein can be implemented to render at least a portion of the environment through limited visual information based on proximity and/or directionality relative to the in-environment representation of the individual.
- the individual may be presented with an overhead view (a substantially allocentric view) of at least a portion of the environment prior to performing the testing task. At least a portion of the environment may be obscured from visibility in this overhead view.
- the individual may be presented with a perspective view that is closer to the level of features or contents of the environment prior to performing the testing task.
- Example systems, methods, and apparatus herein can be implemented to render an environment and configure an exploration phase in the environment.
- the exploration phase can be a guided course through a specified route or a free-exploration phase.
- the computing system can be configured to issue instructions to the user to explore the environment for a specified period of time, in order to gain some familiarity with the layout of the environment.
- the individual is provided the opportunity to gain some familiarity with the location, lateral and vertical extent and/or relative proportions of obstacles and channels in the environment, and/or the location and type of strategically placed objects of interest in the environment.
- the objects of interest can be landmarks (e.g., pizza place, movie theater, statues, etc.) and/or specially shaped objects (e.g., cube, sphere, key, star, cone, etc., or other floating geometric object).
- the exploration phase also allows a user to become familiar with the type of controls the computer device provides, including the degree and manner of modulation and control of direction, speed, angular orientation, and relative vertical position in the environment.
- Example systems, methods, and apparatus herein can be implemented to render an environment and configure a route-learning task in the environment.
- the route-learning task can be initiated with the computer device rendering guides to a user to travel a specified route through the environment.
- the guides can be rendered as arrows, lights, lines, a guiding avatar, voice commands or other audio feedback, vibrations (either of the computer device or an attachment thereto), or other visual, audio, or vibratory means.
- the guides are used to assist a user to navigate from a point of origin (A) to a specified end-point (B) along a specified route.
- the specified end-point (B) may be a specified location in the environment, a landmark, and/or a specially shaped object.
- the user can be instructed to perform a specified type of route-learning task.
- the computer device may be configured to render the instructions to the user as to the type and requirements of the one or more testing tasks at the beginning of the guided portion of the route-learning task and/or at the beginning of one or more of the testing phases of the route-learning task.
- the one or more testing phases of the route-learning task can require the user to backtrack, i.e., to navigate the reverse of the route specified in the guided portion of the route-learning task, to return to the point of origin (A) from end-point (B).
- in response to the computer system detecting that the user is using the controls to make a wrong turn and/or move in an incorrect direction in trying to navigate the reverse route, the computer device can be configured to either return the user to the point of origin (A) or to the last successfully-navigated point on the reverse route.
- the user's performance may be scored (i.e., quantified) based on the time taken to navigate the reverse route successfully, and/or the number of incorrect turns or moves the user makes.
- the degree of difficulty of the user in finding the point of origin (A), including whether the user fails to do so, is included as a parameter in the scoring.
- in response to the computer system detecting that the user is using the controls to make a wrong turn and/or move in an incorrect direction in trying to navigate the reverse route, the computer device can be configured to provide a guide for a limited time to assist the user in making the correct turn or move in the correct direction.
- the user's performance may be scored (i.e., quantified) based on the time taken to navigate the reverse route successfully and/or the number of times a guide was provided to prevent a user from making an incorrect turn or move in the incorrect direction.
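- purely as an illustration, such a score might combine elapsed time, incorrect turns, and guide assists as in the following minimal sketch; the weights and the `score_backtracking` helper are assumptions for illustration rather than a prescribed formula:

```python
def score_backtracking(elapsed_s: float, wrong_turns: int,
                       guides_shown: int = 0,
                       w_time: float = 1.0, w_turn: float = 5.0,
                       w_guide: float = 3.0) -> float:
    """Lower is better: penalize time, incorrect turns, and guide assists."""
    return w_time * elapsed_s + w_turn * wrong_turns + w_guide * guides_shown

# Example: 95 s to retrace the route, 2 wrong turns, 1 guide shown.
print(score_backtracking(95.0, wrong_turns=2, guides_shown=1))  # 108.0
```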
- the one or more testing phases of the route-learning task can cause the computing device to return the user to the point of origin (A) and require the user to navigate the specified route at least one additional time to the end-point (B), along the route specified in the guided portion.
- the computing device may be configured to have the user try to navigate the route either entirely without the aid of a guide, or with use of a guide at strategic points where the user is detected to be making a wrong turn or moving in an incorrect direction.
- the user's performance may be scored (i.e., quantified) based on the time taken to navigate the route successfully, and/or the number of incorrect turns or moves the user makes, and/or the degree (such as percentage) of deviation of the user-navigated route as compared to the guided route.
- the degree of difficulty of the user in finding the end-point (B), including whether the user fails to do so, is included as a parameter in the scoring.
- a task is rendered that requires navigation of a course that includes at least one turn of a discrete angular amount in an environment from an initial point of the course to a target end-point, and using an indicator to indicate a relative orientation of the initial point or a different specified location in the environment relative to the target end-point.
- the turn can be one or more left or right turns in the course in discrete angular amounts, such as but not limited to about 30 degrees, about 60 degrees, about 90 degrees, or about 120 degrees.
- the example method further includes measuring data indicative of the relative orientation indicated using the second indicator, and analyzing the measurement data to generate a performance metric for the performance of the second task, the performance metric providing an indication of the cognitive ability of the individual.
- the requirement of executing at least one turn of a discrete angular amount in navigating a path in an environment is used to introduce a test of the individual's cognitive abilities, including visuospatial memory.
- Such cognitive abilities can be compromised or diminished as a result of an onset of, or a degree or stage of progression of, a neurodegenerative cognitive condition (including a disease, or a disorder such as but not limited to an executive function disorder).
- the requirement of the individual to control at least one component of the computing device to execute at least one turn of a discrete angular amount on a navigation path in the computerized environment has the effect of forcing a change in perspective in the environment, thereby limiting the overall visuospatial information available to the individual at any given time.
- Analysis of measurement data from the individual's performance of a navigation task requiring at least one turn of a discrete angular amount can be used to provide an indication of the individual's cognitive abilities, and also a scoring output indicative of at least one of (i) a likelihood of onset of a neurodegenerative condition of the individual, or (ii) a stage of progression of the neurodegenerative condition.
- the one or more testing phases can require the user to remain at end-point (B) and to indicate the relative orientation of the point of origin (A) relative to end-point (B).
- the computing device can be used to render a pointer tool, an avatar, or other means that allows the user to indicate where the user believes the point of origin (A) is relative to the user's position at end-point (B), such as but not limited to by pointing (e.g., using the avatar, pointer tool, or other indicator means) or by drawing a line.
- the user's performance may be scored (i.e., quantified) based on the degree (such as percentage) of deviation of the user-indicated orientation of the point of origin (A) as compared to the actual relative orientation.
- a relative-orientation task may also be referred to as a "path integration" task.
- Example systems, methods, and apparatus herein can be implemented to generate an assessment of one or more cognitive skills in an individual.
- An example system or apparatus for implementing the method can include a user interface, a memory to store processor-executable instructions, and one or more processing units communicatively coupled to the user interface and the memory, wherein upon execution of the processor-executable instructions by the one or more processing units, the example system or apparatus executes the method.
- An example method includes using the programmed one or more processing units to render a first task that requires navigation of a course that includes at least one turn of a discrete angular amount in an environment, render a first indicator configured to navigate in the environment in response to physical actions of the individual to control the first indicator from an initial point of the course to a target end-point, and configure the user interface to display instructions to the individual to perform a second task, the second task requiring the individual to control a second indicator to indicate a relative orientation of the initial point or a different specified location in the environment relative to the target end-point.
- the example method further includes measuring data indicative of the relative orientation indicated using the second indicator, and analyzing the measurement data to generate a performance metric for the performance of the second task, the performance metric providing an indication of the cognitive ability of the individual.
- Example systems, methods, and apparatus herein can be implemented to render an environment and configure combination tasks in the environment that require a user to draw on their cognitive skills in way-finding, orientation, and route-learning.
- the computing device configures the environment for an exploration phase.
- the exploration phase can be rendered either (i) as a guided exploration of a specified route to allow the individual to learn the route, or (ii) as a free-exploration phase such that the user is allowed to explore the environment freely.
- the exploration phase can be rendered for a specified period of time, in order for the individual to gain some familiarity with the layout of the environment (including the location, lateral and vertical extent and/or relative proportions of obstacles and channels in the environment) and/or the location and type of strategically placed objects of interest in the environment.
- the computing device issues instructions to the user to find a specified location and/or landmark and/or specially-shaped object in the environment and positions the user at either the same location of initial entry or at a different location in the environment.
- the exploration phase and the one or more testing phases can be differing portions of a single, uninterrupted task.
- the user's performance may be scored (i.e., quantified) based on the time taken to navigate successfully to the specified location and/or landmark and/or specially-shaped object, and/or the number of incorrect turns or moves the user makes, and/or the degree (such as percentage) of deviation of the user-navigated route as compared to a desired route from the entry point to the specified location and/or landmark and/or specially-shaped object.
- the desired path can be a determined "best path" or one or more optimal paths (determined using mathematical or algorithmic computational or modeling methods), including the shortest path, from the entry point to the specified location and/or landmark and/or specially-shaped object.
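- one conventional way to determine such a shortest path on a grid-like course is breadth-first search (BFS); the following minimal sketch (the grid encoding and the `shortest_path_length` helper are illustrative assumptions) returns the minimum number of moves from an entry point to a target:

```python
from collections import deque

def shortest_path_length(grid, start, goal):
    """Breadth-first search over a grid where 0 = open passageway and
    1 = obstacle. Returns the minimum number of moves from start to goal,
    or None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        (r, c), dist = queue.popleft()
        if (r, c) == goal:
            return dist
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), dist + 1))
    return None

grid = [[0, 0, 1],
        [1, 0, 0],
        [0, 0, 0]]
print(shortest_path_length(grid, (0, 0), (2, 2)))  # 4
```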
- the user may be returned to the same location of initial entry or at a different location in the environment, and instructed to find the same or a different location and/or landmark and/or specially-shaped object.
- Any of the tasks may be repeated any number of times over multiple sessions of a user interaction with a cognitive platform or platform product.
- one or more of the instructions issued to the user for performing the tasks may be rendered and displayed via a heads-up display (HUD) at a portion of the screen.
- Example systems, methods, and apparatus herein can be implemented to render differing types of control mechanisms for use by the user to navigate through the environments and/or to make the indications (e.g., of relative orientation) as required in a task.
- the computing device can be configured to render one or more virtual joysticks depending on where a user interacts with (including applies pressure to or makes contact with) display sensors or other type of display device (including a touch screen).
- the type of movement that the computing device ascribes to each virtual joystick can be dependent on location on the screen and type of the user interaction.
- user contact with the left or right side of the screen can cause the computing device to render joysticks that control left or right turns (e.g., in discrete angular amounts, such as but not limited to about 30 degrees, about 60 degrees, about 90 degrees, or about 120 degrees) and/or sweeping, continuous virtual gazes, respectively, relative to the environment.
- user contact with the median of the screen can cause the computing device to render a joystick that controls forward or backward movement (such as by movement or swiping up or down the screen, respectively) relative to the environment.
- Such movement can be at a constant speed or the computing device can be configured to allow acceleration or deceleration to change speed (e.g., by changes in type of contact or pressure of contact, or by a button press or other means).
- the forward or backward movement can be in a continuous manner or in discrete amounts (e.g., to jump to a junction, end of a hallway, etc.).
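- as a minimal sketch of how touch position might be mapped to such virtual-joystick controls (the region boundaries, turn increment, and the `classify_touch`/`apply_turn` helpers are illustrative assumptions, not a prescribed control scheme):

```python
def classify_touch(x: float, width: float,
                   median_band: float = 0.2) -> str:
    """Map the horizontal touch position to a control region.
    x is the touch position in pixels; width is the screen width."""
    center = width / 2
    if abs(x - center) <= median_band * width / 2:
        return "move"        # median strip: forward/backward joystick
    return "turn_left" if x < center else "turn_right"

def apply_turn(heading_deg: float, region: str,
               increment_deg: float = 90.0) -> float:
    """Turn the avatar by a discrete angular amount (e.g., 30/60/90/120)."""
    if region == "turn_left":
        return (heading_deg - increment_deg) % 360
    if region == "turn_right":
        return (heading_deg + increment_deg) % 360
    return heading_deg

heading = 0.0
heading = apply_turn(heading, classify_touch(50, width=750))  # left-side touch
print(heading)  # 270.0
```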
- the computing device can be configured to ascribe movement controls to the virtual joysticks dependent on the relative position and/or type of the display sensors, the user interactions with the display sensors, or other type of display device (including a touch screen).
- user indication or interaction at a first portion of the display can cause the computing device to control the absolute position of the user indicator in the environment (such as the user point of view or avatar), while user indication or interaction at a second portion of the display can cause the computing device to control the gaze of the user indicator, as an indication of the user's "head orientation" in the environment, to indicate an intended direction of movement and/or to "look around" to observe features in the environment.
- the computing device can be configured, including with use of a camera or other optical sensors, to read gestures of a user as directions for navigating through the environment, including to make left or right turns, to move forward or backwards, and/or to change directions.
- the computing device can be configured to render controls similar to a steering wheel (e.g., of a car or boat) on the display sensors or other type of display device.
- User interaction with the steering wheel is used to signal the degree of a turn (including to signal direction of movement), while a touch at another portion of the display sensors or other type of display device (including via virtual joysticks or dedicated "buttons" on the display) is used to set the speed to a fixed value, or to modulate the speed of movement by accelerating from a minimal speed or decelerating from higher speeds.
- a computing device may be configured to render one or more virtual joysticks on a display device (including a touch screen), to control direction or velocity of movement within the environment and/or direction of rendered perspective view relative to the position of a user indicator (such as but not limited to an avatar or other guidable element) within the environment.
- a computing device may be configured to render a set of buttons, keys, or touch-sensitive locations on a touch-screen that a user can use to achieve a change of direction or velocity of movement within the environment and/or direction of rendered perspective view relative to the position of the user indicator (such as but not limited to an avatar or other guidable element) within the environment.
- a computing device may be configured with position and/or orientation sensors, such that the computing device can detect user physical action that causes a tilting, rotation, shaking, or translation of the position and/or orientation sensors to achieve a change of direction or velocity of movement within the environment and/or direction of rendered perspective view relative to the position of the user indicator (such as but not limited to an avatar or other guidable element) within the environment.
- the scoring can be based on normalizing the measured completion time and/or path length against the fastest time and/or shortest path achievable given the controls and speeds available to the user, to determine the efficiency of the path taken.
- the details of the path a user takes to navigate in the environment might be more instructive, and given increased weighting in the scoring, as compared to the time it takes a user to get to the desired endpoint, location, landmark, or specially-shaped object.
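- for example, a normalized score might weight path efficiency above completion speed, as in the following minimal sketch; the weights and the `navigation_score` helper are assumptions for illustration, not a prescribed scoring rule:

```python
def navigation_score(path_len: float, optimal_len: float,
                     elapsed_s: float, max_s: float,
                     w_path: float = 0.7, w_time: float = 0.3) -> float:
    """Return a score in [0, 1]; path efficiency is weighted above speed."""
    path_eff = min(optimal_len / path_len, 1.0)   # 1.0 = took the shortest path
    time_eff = max(1.0 - elapsed_s / max_s, 0.0)  # 1.0 = instantaneous
    return w_path * path_eff + w_time * time_eff

# A near-optimal path (12 vs. 10 units) navigated in 40 s of a 120 s limit:
print(round(navigation_score(12, 10, 40, 120), 3))  # 0.783
```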
- FIGs. 1A - 1D show non-limiting examples of computerized renderings of courses (paths) that present navigation tasks.
- FIG. 1A shows a non-limiting example of a computerized rendering of a course that can be used to present a navigation task according to the principles herein, including a route-learning task, or a relative-orientation task, or a way-finding task, or any combination thereof.
- the computing device is configured to present an elevated, overhead view of an environment 10 that includes one or more internal courses 12 and obstacles 14.
- portions of the course 12 are configured to include pathways and passageways that allow traversal of the user indicator (such as but not limited to an avatar or other guidable element 16).
- the environment is rendered as a city-block type structure; however, other example environments are encompassed in this disclosure.
- the Cartesian axes (x-, y-, and z-axes) directions in the environment are used merely as guides for the description in this disclosure, and are not intended to be limiting on the environment.
- the example environment also includes a number of strategically placed shaped objects 18 (such as a doughnut, a sphere, a cone, etc.) that a user is tasked to locate.
- the user is presented a perspective view of the landscape and obstacles that is sufficiently localized so that the user is required to make selections or decisions on strategy to traverse the course without benefit of an aerial view of the entire course or a significant portion of the course.
- the navigation task requires an individual to formulate a pathway about the strategically positioned obstacles 14 from an initial point to at least one of the shaped objects 18.
- the example environment can include one or more entryways 19 that either remain at the same location or appear at differing locations relative to the environment 10.
- the computing device can be configured to present instructions to the individual in a testing phase to indicate the shaped objects 18 to be located, and optionally to allow the user an exploration phase (including a guided route phase or a free-exploration phase) to become familiar with location and type of the obstacles 14 and shaped object 18 in the environment 10.
- the computing device also can be configured to provide an individual with an input device or other type of control element (including the joystick, steering wheel, buttons, or other controls described hereinabove) that allows the individual to traverse the course 12, including specifying and/or controlling one or more of the speed of movement, orientation, velocity, choice of navigation strategy, the wait or delay period or other period of inaction, prior to continuing in a given direction of a course or changing direction, time interval to complete a course, and/or frequency or number of times of referral to an aerial or elevated view of a landscape (including as a map), a measure of accuracy in recreating a previously learned route (e.g., in the one or more testing phases), a measure of accuracy of a user in using spatial memory rather than visual cues to orient the user indicator relative to (including to point back to) a specific location in space (such as but not limited to the point of origin of the given pre-specified navigation route), and/or a measure of the strategies employed in exploring and learning a novel environment.
- the measure can include values of any of these parameters as a function of time.
- the performance metrics can include a measure of the degree of optimization of the path navigated by the individual through the course, such as determining the shortest path or near-shortest path through the course, the time to complete the task, or other scoring mechanism associated with a route-learning task, or a relative-orientation task, or a way-finding task, or any combination thereof (as described herein).
- the walls of the environment can be configured with differing colors, indicated as a color 1, color 2, color 3, and color 4, to provide a user with visual cues for navigating through the environment 10.
- each can be a different color, two or more can be the same color, or all can be the same color.
- a first specific color can be used to indicate walls crossing the x-axis of the environment (e.g., color 1 and color 2 are the same), while a second, different specific color can be used to indicate walls crossing the y-axis of the environment (e.g., color 3 and color 4 are the same).
- the computing device can be configured to collect data indicative of the performance metric that quantifies the navigation strategy (including path, speed, and number of turns and sweeping gazes) employed by the individual from the initial point ("A") or entryway 19 to reach one or more target locations, landmarks, shaped objects, or end-points ("B") in performing the route-learning task, way-finding task, or combination task.
- the computing device can be configured to collect data indicative of the individual's decisions to proceed from the initial point ("A") or entryway 19 along the dashed line or the dotted line, the speed of movement, the orientation of the user indicator (such as but not limited to the avatar or other guidable element 16), among other measures (as described hereinabove).
- the data can be collected in the one or more testing phases.
- the data also can be collected in the exploration phase to provide a baseline or other comparison metric for computing the scores described herein.
- performance metrics that can be measured using the computing device can include data indicative of the speed of movement, orientation, velocity, choice of navigation strategy, wait or delay period, or other period of inaction, prior to continuing in a given direction of a course or changing direction, time interval to complete a course, and/or frequency or number of times of referral to an aerial or elevated view of a landscape (including as a map), including values of any of these parameters as a function of time.
- the performance metrics can include a measure of the degree of optimization of the path navigated by the individual through the course, such as determining the shortest path or near-shortest path through the course, the time to complete the task, or other scoring mechanism associated with a route-learning task, or a relative-orientation task, or a way-finding task, or any combination thereof (as described herein).
- the course 12 may include one or more targets (such as shaped objects 18, landmarks, or other desired location) that the individual is instructed to locate in traversing the course 12.
- the performance metric may include a scoring based on a specific type of target located, and/or the total number of targets located and/or the time taken to locate the targets.
- the individual may be instructed to navigate the course 12 such that the multiple targets are located in a specified sequence.
- the performance metric may include a scoring based on the number of targets located in sequence and/or the time taken to complete the sequence.
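- a minimal sketch of such in-sequence scoring is shown below; the `targets_in_sequence` helper and the object names are illustrative assumptions:

```python
def targets_in_sequence(visited: list, required: list) -> int:
    """Count how many of the required targets were located in the
    specified order (ignoring extra, out-of-sequence visits)."""
    count = 0
    for target in visited:
        if count < len(required) and target == required[count]:
            count += 1
    return count

required = ["cone", "cube", "sphere"]
visited = ["cone", "doughnut", "cube", "sphere"]
print(targets_in_sequence(visited, required))  # 3
```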
- FIG. 1B shows a non-limiting example of another computerized rendering of an environment 20 that a computing device can render to present a navigation task according to the principles herein.
- portions of the course 22 are defined by obstacles 24, and are configured to allow traversal of the user indicator (such as but not limited to an avatar or other guidable element 26) from a point of origin 29 to a specified target.
- the point of origin 29 may be at the same or different location relative to the environment between the two testing phases.
- the obstacles 24 can have differing cross-sectional shapes, such as a substantially square cross-section of obstacle O1 compared to a longitudinal cross-section of obstacle O2.
- the user is presented a perspective view of the landscape and obstacles that is sufficiently localized so that an individual is required to make selections or decisions on strategy to traverse the course without benefit of an aerial view of the entire course or a significant portion of the course.
- the computing device can be configured to collect data indicative of the individual's decision to proceed along the dashed line or the dotted line (such as but not limited to the forward or backtracking movement of a user in the testing phase of a route-learning task), and/or the speed of movement, and/or the orientation of the user indicator (such as but not limited to the avatar or other guidable element 26, or the point-of-origin pointing or other indication that may be required of a user in the testing phase of a route-learning task), among other measures.
- performance metrics that can be measured using the computing device relative to the localized landscape can include data indicative of one or more of the speed of movement, orientation, velocity, choice of navigation strategy, wait or delay period, or other period of inaction, prior to continuing in a given direction of a course or changing direction, time interval to complete a course, and/or frequency or number of times of referral to an aerial or elevated view of a landscape (including as a map), a measure of accuracy in recreating a previously learned route (e.g., in the one or more testing phases), a measure of accuracy of a user in using spatial memory rather than visual cues to orient the user indicator relative to (including to point back to) a specific location in space (such as but not limited to the point of origin of the given pre-specified navigation route), and/or a measure of the strategies employed in exploring and learning a novel environment.
- the measure can include values of any of these parameters as a function of time.
- the performance metrics can include a measure of the degree of optimization of the path navigated by the individual through the course, such as but not limited to determining the shortest path or near-shortest path through the course.
- the performance metric may include a scoring based on the success in locating a specific target object, the number of targets located (including from multiple testing phases), and/or the time taken to locate the target(s).
- the individual may be instructed to navigate the course 22 such that the multiple targets are located in a specified sequence.
- the performance metric may include a scoring based on the number of targets located in sequence and/or the time taken to complete the sequence.
- a computing device can be configured to present an individual with the capability of changing, in at least one instance in a session, from a wider aerial view (such as but not limited to the view shown in FIGs. 1A - 1B) to a more localized, perspective view (such as but not limited to the perspective views shown in FIGs. 3A - 3U hereinbelow).
- an individual may be presented with an aerial view such as shown in FIG. 1A or 1B to obtain an overview of the course, but then be required to navigate the course from a more localized perspective view shown in FIGs. 3A - 3U hereinbelow.
- an individual may be required to rely on allocentric navigation capabilities, to navigate the course by making selections and decisions from more localized, perspective views similar to that shown in FIGs. 3A - 3U hereinbelow based on the spatial memory the individual forms from the wider aerial view of FIG. 1A or 1B.
- FIG. 1C shows a non-limiting example of the type of dimensional constraints that can be imposed on the passageways, obstacles, and dimensions of the environment.
- the width (a1) of the obstacles is greater than or about equal to the width (a2) of the passageway.
- a1 is about twice a2.
- the width (a1) is also smaller than the length of the environment wall (a3), such that no portion of the environment is rendered inaccessible by an obstacle.
- a1 is about one-fourth or one-fifth of a3. While example proportionate values are given for the relative dimensions (widths and lengths) of the passageway, obstacles, and environment walls, they are not intended to be limiting, other than to require that a3 > a1 > a2.
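- these constraints can be checked programmatically; the following is a minimal sketch under the ordering a3 > a1 > a2 described above (the `valid_dimensions` helper is an illustrative assumption):

```python
def valid_dimensions(a1: float, a2: float, a3: float) -> bool:
    """Check the example proportionality constraints on obstacle width (a1),
    passageway width (a2), and environment wall length (a3)."""
    return (a1 >= a2    # obstacles at least about as wide as passageways
            and a3 > a1)  # no obstacle spans an entire environment wall

# e.g., a1 about twice a2, and about one-fifth of a3:
print(valid_dimensions(a1=2.0, a2=1.0, a3=10.0))  # True
```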
- FIG. 1D shows a non-limiting example of a computerized environment, where the path 40 from point A to point B includes at least one turn 42 of a discrete angular amount (represented by angle θ).
- a user is required to navigate from an initial point A to a target end-point (C) via the path, and from point C use an indicator to "point" back to or otherwise indicate the point of origin A.
- the system is controllable to allow the user to indicate any angle within the range of 0° to at least about 180° about point C.
- the system is controllable to allow the user to indicate any angle within the entire range of from 0° to 360° about point C.
- a measure of the degree of success of performance of the task is the measure of the delta angle (Δθ) between what the user indicates as the relative orientation of the point of origin (dashed arrow 44) and the actual relative orientation (dashed arrow 46) of the point of origin.
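- the delta angle Δθ can be computed from the indicated and actual direction vectors using atan2, as in the following minimal sketch (the `pointing_error_deg` helper is an illustrative assumption):

```python
import math

def pointing_error_deg(user_vec, actual_vec) -> float:
    """Delta angle between the user-indicated direction (e.g., dashed arrow 44)
    and the actual direction (e.g., dashed arrow 46), folded into [0, 180]."""
    ang_user = math.atan2(user_vec[1], user_vec[0])
    ang_actual = math.atan2(actual_vec[1], actual_vec[0])
    delta = math.degrees(ang_user - ang_actual) % 360.0
    return min(delta, 360.0 - delta)

# User points along +x; the origin actually lies 30 degrees off that axis:
actual = (math.cos(math.radians(30)), math.sin(math.radians(30)))
print(round(pointing_error_deg((1.0, 0.0), actual), 1))  # 30.0
```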
- a navigation path in any example environment described herein (including in the example of any of FIGs. 2A - 9H hereinbelow) may include a portion that is curved or substantially non-linear.
- FIGs. 2A - 9H show various perspective views of portions of computerized renderings of an environment during various non-limiting example navigation tasks according to the principles herein.
- the computing device is configured to present differing perspective views of a selected portion of an environment that the individual is required to navigate, but from the perspective of the user indicator (such as but not limited to an avatar or other guidable element).
- the example perspective views are illustrative of navigation through an example environment and are not to be limiting on the scope of the instant disclosure.
- the example images depict the type of sequence of perspective views that a user can encounter as the user navigates through the environment.
- FIGs. 2A - 2C show differing perspective views of an example entryway 200.
- FIGs. 2A - 2C also show examples of the types of heads-up display (HUD) 202 that the computing device can display to a user as the user navigates the environment.
- the computing device prompts the user with the display of the instructions "READY TO EXPLORE" as the HUD 202.
- FIGs. 3A - 3U show non-limiting examples of a series of perspective views of an environment as the computing device allows a user to conduct an exploration to gain some familiarity with the environment.
- portions of the example course 302 are defined by obstacles 304 and a wall 306, and are configured to allow traversal of the user indicator (such as but not limited to an avatar or other guidable element) as the user explores the environment.
- the environment also includes a target shaped object 308 (in this example, a sphere).
- FIGs. 3B and 3C show examples of the perspective views rendered as the user actuates the computing device controls to turn and move around in the environment.
- FIGs. 3D - 3U show the perspective views of the environment as the user moves forward, moves backwards, and turns around obstacles in the environment.
- FIGs. 3D - 3U also show the non-limiting example HUD 310 display rendered to the user by the computing device to indicate that it is an exploration phase and the amount of time the user is allowed for the exploration (whether a guided route or a free-exploration), as well as a HUD 312 that indicates the time spent as the user navigates through the exploration phase.
- FIGs. 3D - 3U show the other non-limiting example shaped objects located about the environment, including a cone 314, a cube 316, and a doughnut 318.
- an individual may be presented with a perspective view such as shown in FIGs. 3A - 3U, with verbal or visual instructions indicating that they have been placed at an unknown location within a previously-experienced virtual environment (through the exploration phase), and instructed to perform a navigation task from this unknown location.
- in one non-limiting example of a navigation task, an individual may be required to use the computing device controls to look around, determine their current location to the best of their ability, and point to a previously navigated (and presumed-known) location within the environment. Performance metrics for such a task can include the accuracy of the directional response, and the time required to generate this response.
- in another example, the individual may instead be instructed to navigate to a specified goal location. Performance metrics for such a task could include the time required to reach the goal location, and differences between the path used to reach the goal location and one or more optimal paths (e.g., optimal paths determined using mathematical or algorithmic computational or modeling methods).
- the relative dimensions of the passageway, obstacles, and environment walls are configured such that a3 > a1 > a2 (as described in connection with FIG. 1C) and such that a user presented with the perspective view is obstructed from observing the contents of adjacent passageways until the user is within a certain distance of a cross-channel or a turn.
- the dimensions a3:a1:a2 can be related in a ratio of 10:2:1.
- FIGs. 4A - 4C show differing perspective views of an example entryway 400.
- FIGs. 4A - 4C also show examples of the types of heads-up display (HUD) 402 that the computing device can display to a user as the user navigates the environment.
- the computing device prompts the user with the display of the instructions "READY TO SEARCH" as the HUD 402.
- FIGs. 5A - 5J show non-limiting examples of a series of perspective views of an environment as the computing device presents a first testing session to a user in the environment.
- FIG. 5A shows an example display of instructions 500 to the user to indicate the type of shaped object (a cone) to be located as well as a HUD 502 that indicates the time spent as the user navigates through the first testing phase.
- the user is required to make selections or decisions on strategy to traverse the course without benefit of an aerial view of the entire course based on the user's spatial memory of the course (with the familiarity gained in the exploration phase).
- the user indicator is placed at a different starting location of the environment than for the exploration phase (shown in FIG. 3 A).
- the user is required to navigate the course by making selections and decisions based on the relative position of the user's indicator in the landscape, the environment wall colors, and any shaped objects encountered.
- the user can make decisions as to direction and orientation of movement based on using the positions of non-target shaped objects 504, 506 and 508 as guides in formulating a navigation strategy.
- the individual may use the non-target shaped objects 504, 506 and 508 in a form of egocentric navigation.
- the user navigates to target shaped object 510, at which point the timer HUD 502 is frozen in time; the user is presented with a reward indicator 512 and is reset to the entryway for further session(s), if any.
- FIGs. 6A - 6E show non-limiting examples of a series of perspective views of an environment as the computing device presents a second testing session to a user in the environment.
- FIG. 6A shows an example display of instructions 600 to the user to indicate the type of shaped object (a cone, similar to FIG. 5A) to be located as well as a dual HUD 602 that indicates both the time the user took to complete the first testing session (FIGs. 5A - 5J) as well as the time spent as the user navigates through the second testing phase.
- the user is required to make selections or decisions on strategy to traverse the course without benefit of an aerial view of the entire course based on the user's spatial memory of the course (with the familiarity gained in the exploration phase and in the first testing session (FIGs. 5A - 5J)).
- the user indicator is placed at a similar starting location of the environment as for the first testing session (shown in FIG. 5A).
- the user is required to navigate the course by making selections and decisions based on the relative position of the user's indicator in the landscape, the environment wall colors, and any shaped objects encountered.
- the user can make decisions as to direction and orientation of movement based on using the position of a non-target shaped object 604 as a guide in formulating a navigation strategy, in a form of egocentric navigation.
- the user navigates to target shaped object 606, at which point the timer HUD 608 tracking the time for the second testing session is frozen in time.
- the user is presented with a reward indicator 610 and is reset to the entryway for further session(s), if any.
- the time the user took to complete the first testing session (FIGs. 5A - 5J) is greater than the time the user took to navigate through the second testing phase.
- FIGs. 7A - 7F show non-limiting examples of a series of perspective views of an environment as the computing device presents a third testing session to a user in the environment.
- FIG. 7A shows an example display of instructions 700 to the user to indicate the type of shaped object (a cube) to be located as well as a triple HUD 702 that indicates the time the user took to complete the first testing session (FIGs. 5A - 5J), the time the user took to complete the second testing session (FIGs. 6A - 6E), as well as the time spent as the user navigates through the third testing phase.
- the user is required to make selections or decisions on strategy to traverse the course without benefit of an aerial view of the entire course based on the user's spatial memory of the course (with the familiarity gained in the exploration phase, first testing session (FIGs. 5A - 5J), and second testing session (FIGs. 6A - 6E)).
- the user indicator is placed at a similar starting location of the environment as for the first and second testing sessions (shown in FIGs. 5A and 6A).
- the user is required to navigate the course by making selections and decisions based on the relative position of the user's indicator in the landscape, the environment wall colors, and any shaped objects encountered.
- the user navigates to target shaped object 704, at which point the timer HUD 702 tracking the time for the third testing session is frozen in time.
- the user is presented with a reward indicator 706 and is reset to the entryway for further session(s), if any.
- the time the user took to navigate through the third testing session is significantly less than the time taken to complete the first testing session (FIGs. 5A - 5J) and the second testing session (FIGs. 6A - 6E).
- FIGs. 8A - 8H show non-limiting examples of a series of perspective views of an environment as the computing device presents a fourth testing session to a user in the environment.
- FIG. 8A shows an example display of instructions 800 to the user to indicate the type of shaped object (a sphere) to be located as well as a quadruple HUD 802 that indicates the time the user took to complete the first, second, and third testing sessions (FIGs. 5A - 7F), as well as the time spent as the user navigates through the fourth testing phase.
- the user is required to make selections or decisions on strategy to traverse the course without benefit of an aerial view of the entire course based on the user's spatial memory of the course (with the familiarity gained in the exploration phase, and first, second, and third testing sessions (FIGs. 5A - 7F)).
- the user indicator is placed at a similar starting location of the environment as for the first, second and third testing sessions (shown in FIGs. 5A, 6A, and 7A).
- the user is required to navigate the course by making selections and decisions based on the relative position of the user's indicator in the landscape, the environment wall colors, and any shaped objects encountered.
- the user navigates to target shaped object 804, at which point the timer HUD 802 tracking the time for the fourth testing session is frozen in time.
- the user is presented with a reward indicator 806 and is reset to the entryway for further session(s), if any.
- the time the user took to navigate through the fourth testing session is comparable to the time taken to complete the first testing session (FIGs. 5A - 5J) and the second testing session (FIGs. 6A - 6E).
- FIGs. 9A - 9H show non-limiting examples of a series of perspective views of an environment as the computing device presents a fifth testing session to a user in the environment.
- FIG. 9A shows an example display of instructions 900 to the user to indicate the type of shaped object (a cube, similar to FIG. 7A) to be located as well as a quintuple HUD 902 that indicates the time the user took to complete the first, second, third, and fourth testing sessions (FIGs. 5A - 8H), as well as the time spent as the user navigates through the fifth testing phase.
- the user is required to make selections or decisions on strategy to traverse the course without benefit of an aerial view of the entire course based on the user's spatial memory of the course (with the familiarity gained in the exploration phase, and first, second, third, and fourth testing sessions (FIGs. 5A - 8H)).
- the user indicator is placed at a different starting location of the environment than for the previous testing sessions (shown in FIGs. 5A, 6A, 7A, and 8A).
- the user is required to navigate the course by making selections and decisions based on the relative position of the user's indicator in the landscape, the environment wall colors, and any shaped objects encountered.
- the user navigates to target shaped object 904, at which point the timer HUD 902 tracking the time for the fifth testing session is frozen in time.
- the user is reset to the entryway for further session(s), if any.
- the time the user took to navigate through the fifth testing session is comparable to the time taken to complete the first, second and fourth testing sessions (FIGs. 5A - 6E and 8A - 8H).
- FIG. 10 shows a non-limiting example of a graphical user interface rendered to a user including multiple input fields that allow a user to enter user identification 1002, user password 1004, and other information that can be used for authentication and validation of the user, and to determine a user's permission levels to enter a session.
- the graphical user interface can be rendered to display the user's performance data and/or performance metrics.
- the computing device can be configured to collect data indicative of the individual's decisions to proceed in the environment, and/or the speed of movement, and/or the orientation of the user indicator, among other measures.
- performance metrics that can be measured using the computing device relative to the localized, perspective landscape can include data indicative of one or more of the speed of movement, orientation, velocity, choice of navigation strategy, wait or delay period, or other period of inaction, prior to continuing in a given direction of a course or changing direction, time interval to complete a course, and/or frequency or number of times of referral to an aerial or elevated view of a landscape (including as a map), a measure of accuracy in recreating a previously learned route (e.g., in the one or more testing phases), a measure of accuracy of a user in using spatial memory rather than visual cues to orient the user indicator relative to (including to point back to) a specific location in space (such as but not limited to the point of origin of the given pre-specified navigation route), and/or a measure of the strategies employed in exploring and learning a novel environment.
- the measure can include values of any of these parameters as a function of time.
- the performance metrics can include a measure of the degree of optimization of the path navigated by the individual through the course, such as determining the shortest path or near-shortest path through the course, the time to complete the task, or other scoring mechanism associated with a route-learning task, or a relative-orientation task, or a way-finding task, or any combination thereof (as described herein).
- the course through an example environment may include land-based solid surfaces (including paved road, dirt road, or other types of ground surfaces) and/or waterways.
- the environment may instead be waterways defined by obstacles other than land-based obstacles, such as but not limited to buoys or other anchored floats, reefs, jetties, or other applicable types of obstacles.
- one or more navigation tasks can be computer-implemented as computerized elements which require position-specific and/or motion-specific responses from the user.
- the user response to the navigation task(s) can be recorded using an input device of the cognitive platform.
- user responses can be recorded via input devices as a touch, swipe, or other gesture relative to a user interface or image capture device (such as but not limited to a keyboard, a touch-screen or other pressure-sensitive screen, or a camera), including any form of graphical user interface configured for recording a user interaction.
- the user response recorded using the cognitive platform for the navigation task(s) can include user actions that cause changes in a position, orientation, or movement of a computing device including the cognitive platform.
- Such changes in a position, orientation, or movement of a computing device can be recorded using an input device disposed in or otherwise coupled to the computing device, such as but not limited to a sensor.
- non-limiting examples of such input devices and sensors include a joystick, a mouse, a motion sensor, a position sensor, a pressure sensor, and/or an image capture device (such as but not limited to a camera).
- the computer device is configured (such as using at least one specially-programmed processing unit) to cause the cognitive platform to present to a user one or more different types of navigation tasks during a specified time frame.
- the time frame can be of any time interval at a resolution of up to about 30 seconds, about 1 minute, about 5 minutes, about 10 minutes, about 20 minutes, or longer.
- the platform product or cognitive platform can be configured to collect data indicative of a reaction time of a user's response relative to the time of presentation of the navigation tasks.
- the difficulty level of the navigation task can be changed by increasing the intricacy of the convolutions or the number or density of misdirection portions of the course, reducing the time allowed to complete the course, and/or increasing the complexity of the target location requirements.
- a misdirection portion in a course causes the avatar or other guidable element to move off course, reach a portion of an obstacle that cannot be traversed, and/or not load to a desired target.
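- one simple way such difficulty adaptation might be realized is a staircase rule, sketched below; the level bounds and the `adjust_difficulty` helper are illustrative assumptions rather than a prescribed adaptation scheme:

```python
def adjust_difficulty(level: int, success: bool,
                      min_level: int = 1, max_level: int = 10) -> int:
    """Simple staircase: step difficulty up after a success (e.g., more
    misdirection portions, a tighter time limit) and down after a failure."""
    level = level + 1 if success else level - 1
    return max(min_level, min(level, max_level))

level = 5
level = adjust_difficulty(level, success=True)   # -> 6
level = adjust_difficulty(level, success=False)  # -> 5
print(level)
```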
- the example platform product herein may be formed as, be based on, or be integrated with, an AKILI® platform product (also referred to herein as an "APP") by Akili Interactive Labs, Inc., Boston, MA.
- the navigation task can be presented to a user by rendering a graphical user interface to present the computerized stimuli or interaction (CSI) or other interactive elements.
- Description of use of (and analysis of data from) one or more CSIs in the various examples herein also encompasses use of (and analysis of data from) navigation tasks comprising the one or more CSIs in those examples.
- the at least one navigation task and at least one CSI can be rendered using the at least one graphical user interface.
- the computing device can be configured to measure data indicative of the responses as the user performs the at least one navigation task and to measure data indicative of the interactions with the at least one CSI.
- the rendered at least one graphical user interface can be configured to measure data indicative of the responses as the user performs the at least one navigation task and to measure data indicative of the interactions with the at least one CSI.
- the performance metric may include a scoring based on the number of reward items or other interaction elements located by the individual and/or the time taken to locate the reward items or other interaction elements.
- reward items or other interaction elements include coins, stars, faces (including faces having variations in emotional expression) or other dynamic element.
- the graphical user interface can be configured such that the CSI computerized element(s) are active, and may require at least one response from a user, such that the graphical user interface is configured to measure data indicative of the type or degree of interaction of the user with the platform product.
- the graphical user interface can be configured such that the CSI computerized element(s) are passive and are presented to the user using the at least one graphical user interface but may not require a response from the user.
- the at least one graphical user interface can be configured to exclude the recorded response of an interaction of the user, to apply a weighting factor to the data indicative of the response (e.g., to weight the response to lower or higher values), or to measure data indicative of the response of the user with the platform product as a measure of a misdirected response of the user (e.g., to issue a notification or other feedback to the user of the misdirected response).
- the platform product can be configured as a processor-implemented system, method or apparatus that includes at least one processing unit.
- the at least one processing unit can be programmed to render at least one graphical user interface to present the navigation task(s) and one or more CSI to the user for interaction.
- the at least one processing unit can be programmed to cause a component of the program product to receive data indicative of the navigation and/or at least one user response based on the user interaction with the CSI (such as but not limited to cData), including responses provided using the input device.
- the at least one processing unit also can be programmed to: analyze the cData to provide a measure of the individual's performance metric for a given type of navigation task (whether allocentric or egocentric); and/or analyze the differences in the individual's performance based on determining the differences between the user's performance at allocentric navigation as compared to the user's performance at egocentric navigation (including based on differences in the cData); and/or adjust the difficulty level of the navigation task(s) (including CSIs) based on the analysis of the cData (including the measures of the individual's performance determined in the analysis); and/or provide an output or other feedback from the platform product that can be indicative of the individual's performance metric, cognitive abilities (including for screening, monitoring or assessment), response to cognitive treatment, and/or assessed measures of cognition.
- the at least one processing unit also can be programmed to classify an individual as to amyloid status, and/or presence or expression level of tau proteins, and/or potential efficacy of use of the cognitive platform or platform product when the individual is administered a drug, biologic or other pharmaceutical agent, and/or expected score from the individual's interaction with the cognitive platform or platform product.
- the at least one processing unit also can be programmed to classify an individual as to likelihood of onset and/or stage of progression of a condition, based on the cData collected from the individual's interaction with the cognitive platform or platform product and/or metrics computed based on the analysis (and associated computations) of that cData.
- the condition can be, but is not limited to, depression, attention deficit hyperactivity disorder (ADHD), Alzheimer's disease, dementia, Parkinson's disease, Huntington's disease, Cushing's disease, schizophrenia, or other condition.
- the platform product can be configured as a processor-implemented system, method or apparatus that includes a display component, an input device, and the at least one processing unit.
- the at least one processing unit can be programmed to render at least one graphical user interface, for display at the display component, to present the navigation task(s) (including the CSI) to the user for interaction.
- Non-limiting examples of an input device include a touch-screen, or other pressure-sensitive or touch-sensitive surface, a motion sensor, a position sensor, a pressure sensor, and/or an image capture device (such as but not limited to a camera).
- the analysis of the individual's performance may include using the computing device to compute percent accuracy at the navigation task, and the number of hits and/or misses at locating the target(s) during a session or from a previously completed session.
- Other indicia that can be used to compute performance measures include the amount of time the individual takes to respond after the presentation of a task (e.g., as a targeting stimulus).
- Other indicia can include, but are not limited to, reaction time, response variance, number of correct hits, omission errors, false alarms, learning rate, spatial deviance, subjective ratings, and/or performance threshold, etc.
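- a few of these indicia can be computed from raw response data as in the following minimal sketch (the `performance_indicia` helper and its output fields are illustrative assumptions):

```python
from statistics import mean, pstdev

def performance_indicia(hits: int, misses: int, false_alarms: int,
                        reaction_times_s: list) -> dict:
    """Compute a few of the indicia named above from raw response data."""
    attempts = hits + misses
    return {
        "percent_accuracy": 100.0 * hits / attempts if attempts else 0.0,
        "false_alarms": false_alarms,
        "mean_rt_s": mean(reaction_times_s),
        "rt_variance_s2": pstdev(reaction_times_s) ** 2,
    }

print(performance_indicia(hits=18, misses=2, false_alarms=1,
                          reaction_times_s=[0.52, 0.61, 0.48, 0.70]))
```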
- the computerized element includes at least one element to indicate positive feedback to a user.
- Each element can include an auditory signal and/or a visual signal emitted to the user that indicates success at a navigation task or other platform interaction element, i.e., that the user's responses at the platform product have exceeded a threshold success measure on a navigation task.
- the computerized element includes at least one element to indicate negative feedback to a user.
- Each element can include an auditory signal and/or a visual signal emitted to the user that indicates failure at a navigation task, i.e., that the user's responses at the platform product have not met a threshold success measure on a navigation task.
- the computerized element includes at least one element for messaging, i.e., a communication to the user that is different from positive feedback or negative feedback.
- the computerized element includes at least one element for indicating a CSI that is a reward.
- a reward computer element can be a computer generated feature that is delivered to a user to promote user satisfaction with the navigation task and as a result, increase positive user interaction (and hence enjoyment of the user experience).
- cognition refers to the mental action or process of acquiring knowledge and understanding through thought, experience, and the senses. This includes, but is not limited to, psychological processes.
- An example computer-implemented device can be configured to collect data indicative of user interaction with a platform product, and to compute metrics that quantify user performance.
- the quantifiers of user performance can be used to provide measures of cognition (for cognitive assessment) or to provide measures of status or progress of a cognitive treatment.
- treatment refers to any manipulation of CSI in a platform product (including in the form of an APP) that results in a measurable improvement of the abilities of a user, such as but not limited to improvements related to cognition, a user's mood, emotional state, and/or level of engagement or attention to the cognitive platform.
- the degree or level of improvement can be quantified based on user performance measures as described herein.
- the term “treatment” may also refer to a therapy.
- the term "session” refers to a discrete time period, with a clear start and finish, during which a user interacts with a platform product to receive assessment or treatment from the platform product (including in the form of an APP).
- the term “assessment” refers to at least one session of user interaction with CSIs or other feature or element of a platform product.
- the data collected from one or more assessments performed by a user using a platform product can be used to derive measures or other quantifiers of cognition, or other aspects of a user's abilities.
- cognitive load refers to the amount of mental resources that a user may need to expend to complete a task. This term also can be used to refer to the challenge or difficulty level of a navigation task.
- the platform product can be configured as a processor- implemented system, method or apparatus that includes at least one processing unit.
- the at least one processing unit can be programmed to render at least one graphical user interface to present the navigation task(s) and one or more CSI to the user for interaction.
- the at least one processing unit can be programmed to cause a component of the program product to receive data indicative of the performance of the navigation task and/or at least one user response based on the user interaction with the CSI (such as but not limited to cData), including responses provided using the input device.
- the platform product also can be configured to receive nData indicative of measurements made before, during, and/or after the user interacts with the cognitive platform (including nData from measurements of physiological or monitoring components and/or cognitive testing components).
- the at least one processing unit also can be programmed to: analyze the cData and/or nData to provide a measure of the individual's condition (including cognitive condition), analyze the cData and/or nData to provide a measure of the individual's performance metric for a given type of navigation task (whether the navigation task requires allocentric navigation and/or egocentric navigation), and/or analyze the differences in the individual's performance based on determining the differences between the user's performance at allocentric navigation as compared to the user's performance at egocentric navigation (including based on differences in the cData) and differences in the associated nData.
- the at least one processing unit also can be programmed to: adjust the difficulty level of the navigation task(s) (including CSIs), based on the analysis of the cData (including the measures of the individual's performance determined in the analysis), and/or provide an output or other feedback from the platform product that can be indicative of the individual's performance metric, and/or cognitive abilities (including for screening, monitoring or assessment), and/or response to cognitive treatment, and/or assessed measures of cognition.
- the at least one processing unit also can be programmed to classify an individual as to amyloid status, and/or presence or expression level of tau proteins, and/or potential efficacy of use of the cognitive platform or platform product when the individual is administered a drug, biologic or other pharmaceutical agent, and/or expected score from the individual's performance of a TOVA® test and/or a RAVLT™ test, based on nData and the cData collected from the individual's interaction with the cognitive platform or platform product and/or metrics computed based on the analysis (and associated computations) of that cData and the nData.
- the at least one processing unit also can be programmed to classify an individual as to likelihood of onset and/or stage of progression of a condition, based on nData and the cData collected from the individual's interaction with the cognitive platform or platform product and/or metrics computed based on the analysis (and associated computations) of that cData and the nData.
- the condition can be, but is not limited to, depression, attention deficit hyperactivity disorder (ADHD), Alzheimer's disease, dementia, Parkinson's disease, Huntington's disease, Cushing's disease, schizophrenia or other condition.
- the feedback from the differences in the individual's performance based on determining the differences between the measures of the user's first type and second type of responses and the nData can be used as an input in the cognitive platform that indicates real-time performance of the individual during one or more session(s).
- the data of the feedback can be used as an input to a computation component of the computing device to determine a degree of adjustment that the cognitive platform makes to a difficulty level of the navigation task (optionally with interference) with which the user interacts within the same ongoing session and/or within a subsequently-performed session.
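- A minimal sketch of such a feedback-driven difficulty adjustment is shown below, assuming a scalar difficulty level and a target accuracy band; the thresholds and step size are illustrative, not prescribed by the platform described herein:

```python
def adjust_difficulty(level, recent_accuracy,
                      target_low=0.7, target_high=0.85, step=0.1):
    """Nudge the navigation-task difficulty toward a target accuracy band."""
    if recent_accuracy > target_high:
        return level + step            # succeeding: make the task harder
    if recent_accuracy < target_low:
        return max(0.0, level - step)  # struggling: ease the task
    return level                       # within band: leave unchanged
```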
- An example system, method, and apparatus includes a platform product (including using an APP) that is configured to identify the type of navigation strategy that is being used by a participant.
- An example system, method, and apparatus includes a platform product (including using an APP) that is configured to determine the relative strength of each navigation skill (whether egocentric navigation or allocentric navigation) for a given individual or set or population of individuals.
- if the weak areas in a disease population are strengthened with training on a cognitive platform configured to present a certain type of navigation task (e.g., allocentric navigation to strengthen the hippocampus as compared to egocentric navigation to strengthen the caudate nucleus), there could be transfer of benefit to the disease symptoms of the individual(s) related to that respective brain area (such as but not limited to navigation abilities and potentially memory related to the hippocampus, and working memory, learning, and response selection related to the caudate nucleus).
- given that the hippocampus constructs and maintains a cognitive map of a given environment, and retrieves previously constructed maps (including landscape or waterways maps) when the individual is presented with a new environment that appears similar to a previously visited environment, measurements of interest include speed and accuracy of learning a new map, employing an old map, and differentiating between maps that appear similar.
- An example system, method, and apparatus includes a platform product (including using an APP) that is configured to evaluate the navigation strategy being used by an individual or group of individuals.
- the platform product may be configured to present a user with conflicting information, such as but not limited to, egocentric landmark cues that would suggest different path choices than the simultaneously available allocentric boundary and path integration information.
- the example platform product can be configured to measure data indicative of cues that dictate the path choices of the individual. This can provide an indication of the individual's strategy preference.
- the indication of the individual's strategy preference can be correlated with relative capabilities in respectively associated areas of the individual's brain (i.e., areas of the brain governing allocentric navigation versus egocentric navigation).
- An example system, method, and apparatus includes a platform product (including using an APP) that is configured to measure the change in navigation performance of an individual as measured by metrics such as but not limited to the distance traveled to reach one or more targets (e.g., where a shorter distance is used as a metric of better performance) or by the amount of time taken to reach the one or more targets (e.g., faster time is used as a metric of better performance), where the navigation task(s) is set in similar virtual environments, but with varying levels of landmarks available for navigating or varying salience of the landmarks (such as but not limited to making landmarks look more similar (i.e., with fewer distinctions), smaller, or less distinct in color from the background, etc.).
- the example platform product can be configured to perform an analysis to compare these measurements. If the performance metrics indicate that an individual's performance gets worse as the number of landmarks decreases, the individual can be classified as more likely to be using egocentric navigation.
- the platform product (including using an APP) can be configured to analyze the measures of the individual's performance across the environments, and analyze how the individual's performance changes with the number of landmarks. This outcome from the analysis of the individual's performance can be compared between neurotypical individuals and/or individuals of known disease populations, to determine if the performance profile is different between the individual and the neurotypical individuals and/or individuals of known disease populations.
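- One plausible form for this analysis, sketched below under assumed data shapes, is to regress a performance metric (e.g., completion time) against the number of available landmarks; per the classification described above, performance that worsens as landmarks decrease suggests reliance on egocentric (landmark-based) navigation:

```python
import numpy as np

def landmark_dependence(landmark_counts, completion_times_s):
    """Slope of completion time vs. landmark count.

    A strongly negative slope means times grow as landmarks are removed,
    i.e., performance worsens with fewer landmarks, which (per the
    classification above) points toward egocentric navigation.
    """
    slope, _intercept = np.polyfit(landmark_counts, completion_times_s, 1)
    return slope

print(landmark_dependence([2, 4, 6, 8], [42.0, 35.5, 31.0, 29.5]))
```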
- An example system, method, and apparatus includes a platform product (including using an APP) that is configured to measure the navigation performance of an individual as measured by metrics such as but not limited to the distance traveled to reach one or more targets (e.g., where a shorter distance is used as a metric of better performance) or by the amount of time taken to reach the one or more targets (e.g., faster time is used as a metric of better performance), where the navigation task(s) is set in a virtual environment that is changing as the individual is traversing the environment.
- the landmark features can be changing (e.g., a tree changing color in a forest), the landmarks may be duplicated (e.g., a first landmark is a pink tree and more pink trees appear over time), the landmarks may change locations relative to the target(s) and/or other landmarks, the salience of the landmarks may change (e.g., they get darker and/or the colors become less clear), or the ability to use landmarks may change (e.g., it becomes foggy and landmarks are less visible).
- the example platform product (including using an APP) can be configured to perform an analysis to compare performance metrics measured in the changing environment relative to a static environment, to identify the specific state of areas of the brain of an individual (e.g., whether these areas are similar to or different from that of a given population, or show any benefit or deficit) and the individual's specific navigation strategy preferences.
- An example system, method, and apparatus includes a platform product (including using an APP) that is configured to measure the navigation performance of an individual as measured by metrics such as but not limited to the distance traveled to reach one or more targets (e.g., where a shorter distance is used as a metric of better performance) or by the amount of time taken to reach the one or more targets (e.g., faster time is used as a metric of better performance), where the navigation task(s) occur in a previously explored virtual environment where the starting point and/or target(s) require traversal of the environment via paths to which the individual was not previously exposed (and thus which were not previously learned).
- this can be achieved by configuring the platform product to introduce new obstacles in the way of previously displayed (and thereby known) paths of the course. In another example implementation, this can be achieved by configuring the platform product to place intermediary target(s) at locations that are outside of previously traveled paths of the course. In another example implementation, this can be achieved by configuring the platform product to introduce a completely different path that never intersects with the previously traversed (and thereby learned) paths of the course.
- the example platform product (including using an APP) can be configured to perform an analysis to determine an individual's ability to navigate in this condition, as a better indication of a tendency towards allocentric navigation than is possible with repeated wayfinding tasks along previously known paths.
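- A path-efficiency metric of the kind described above could be computed as in the following sketch, which assumes sampled (x, y) positions and a known shortest-path length; it is illustrative rather than a definitive implementation:

```python
import math

def path_efficiency(waypoints, shortest_path_length):
    """`waypoints`: sequence of (x, y) positions sampled during a trial.

    Returns shortest-path length over traveled distance; 1.0 is optimal,
    lower values mean a longer, less efficient route to the target.
    """
    traveled = sum(math.dist(a, b) for a, b in zip(waypoints, waypoints[1:]))
    return shortest_path_length / traveled if traveled else 0.0
```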
- An example system, method, and apparatus includes a platform product (including using an APP) that is configured to measure the navigation performance of an individual as measured by metrics such as but not limited to the distance traveled to reach one or more targets (e.g., where a shorter distance is used as a metric of better performance) or by the amount of time taken to reach the one or more targets (e.g., faster time is used as a metric of better performance), where the navigation task(s) is in a previously explored virtual environment that is being traversed one or more additional times, potentially after varying levels of delay between repeated trials in that environment.
- the platform product can be configured to present other activities to the individual in the intervening periods, to introduce cognitive interference.
- the platform product can be configured to present other navigation activities that introduce spatial-memory- specific interference, whereas non-navigation activities may be used to introduce other types of interference.
- the example platform product (including using an APP) can be configured to perform an analysis to compare the measurements from the previously explored virtual environment before and after the intervening periods, to determine measures of the changes in performance between two same-environment trials, and the degree of correlation with the amount of delay between two repetitions, to determine the effect of time delay on an individual's ability at maintenance of spatial memories.
- the example platform product can be configured to perform an analysis to compare the measurements from the previously explored virtual environment before and after the intervening periods to provide an indicator of the efficiency of spatial memory retrieval based on an analysis of the measures of the impact of spatial memory interference.
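- The before/after comparison and delay correlation described above could be computed as in the sketch below, under assumed data shapes (paired per-trial scores and the delays between repetitions):

```python
import numpy as np

def retention_analysis(before, after, delays_s):
    """`before`/`after`: paired performance scores for repeated trials in the
    same environment; `delays_s`: delay between the paired trials."""
    change = np.asarray(after, dtype=float) - np.asarray(before, dtype=float)
    r = np.corrcoef(change, np.asarray(delays_s, dtype=float))[0, 1]
    return {"mean_change": float(change.mean()),
            "delay_correlation": float(r)}  # negative r suggests decay over time
```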
- An example system, method, and apparatus includes a platform product (including using an APP) that is configured to measure the navigation performance of an individual, as measured by the distance traveled to reach one or more targets (e.g., where a shorter distance is used as a metric of better performance) or by the amount of time taken to reach the one or more targets (e.g., faster time is used as a metric of better performance), where the navigation task(s) is in a virtual environment that is spatially analogous to a previously explored environment, but without the same visual cues.
- the analogous environment may be the same as the original environment but with little or no lighting.
- the analogous environment may be on a different vertical plane.
- the example platform product (including using an APP) can be configured to perform an analysis to determine a measure of the individual's ability to navigate in this condition as an indication of allocentric navigation.
- An example system, method, and apparatus includes a platform product (including using an APP) that is configured to present an individual with a virtual environment that is spatially analogous to a previously explored environment, without the same visual cues, but not informing the individual which of multiple possible previous environments is the source.
- the example platform product (including using an APP) can be configured to measure the individual's ability to determine the actual source environment.
- the example platform product can be configured to perform an analysis to determine a measure of the individual's ability to determine the source environment as an indication of ability to flexibly manipulate multiple cognitive maps under uncertainty, a specific form of active spatial memory interference.
- An example system, method, and apparatus includes a platform product (including using an APP) that is configured to apply a predictive model to data indicative of the cognitive ability in the individual.
- the predictive model can be configured based on applying computational techniques and machine learning tools, such as but not limited to linear/logistic regression, principal component analysis, generalized linear mixed models, random decision forests, support vector machines, or artificial neural networks, to the cData and nData to create composite variables or profiles that are more sensitive than each measurement alone for detecting disease or assessing cognitive health.
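- The following sketch illustrates one of the named techniques (principal component analysis feeding a logistic regression) applied to placeholder cData/nData features; the feature layout and labels are hypothetical, not data from the described platform:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Rows: individuals; columns (hypothetical): allocentric accuracy, egocentric
# accuracy, path efficiency, reaction-time variance, one nData measurement.
X = np.random.default_rng(0).normal(size=(40, 5))      # placeholder features
y = np.random.default_rng(1).integers(0, 2, size=40)   # placeholder labels

model = make_pipeline(StandardScaler(), PCA(n_components=2),
                      LogisticRegression())
model.fit(X, y)
composite = model[:-1].transform(X)  # the PCA composite variables themselves
```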
- An example system, method, and apparatus can be configured to train a predictive model of a measure of the cognitive capabilities of individuals based on the data measured from the performance at the navigation tasks (allocentric and/or egocentric navigation tasks) of individuals that are previously classified as to the measure of cognitive abilities of interest.
- a classifier can be trained using a plurality of training datasets, where each training dataset is associated with a previously classified individual from a group of individuals.
- Each of the training datasets includes data indicative of one or more parameters indicative of the performance of the classified individual at the task(s) (whether allocentric and/or egocentric navigation tasks), based on the classified individual's interaction with an example apparatus, system, or computing device described herein.
- the example classifier also can take as input data indicative of the performance of the classified individual at a cognitive test, and/or a behavioral test, and/or data indicative of a diagnosis of a likelihood of onset of, or stage of progression of, a neurodegenerative cognitive condition, a disease, or a disorder (including an executive function disorder) of the classified individual.
- the example trained predictive model can be used as an intelligent proxy for quantifiable assessments of an individual's cognitive abilities. That is, once a predictive model is trained, the predictive model output can be used to provide the indication of the cognitive capabilities of multiple individuals without use of a physiological measure, or another cognitive or behavioral assessment test. In an example, the trained predictive model can be used as an intelligent proxy to provide an indication of a likelihood of onset of a neurodegenerative condition of the individual, or the stage of progression of the neurodegenerative condition. In an example, the trained predictive model can be used as an intelligent proxy for subsequent measures of the neurodegenerative condition of the individual.
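- A minimal sketch of training such a classifier on labeled training datasets and then using it as the proxy described above is given below; the data are placeholders, and the choice of a random decision forest is one of the techniques named earlier:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
X_train = rng.normal(size=(60, 4))      # placeholder performance parameters
y_train = rng.integers(0, 2, size=60)   # placeholder prior classifications

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X_train, y_train, cv=5).mean())  # sanity check
clf.fit(X_train, y_train)

# Once trained, the model can act as the proxy described above: a new
# individual is scored from navigation-task data alone.
likelihood = clf.predict_proba(rng.normal(size=(1, 4)))[0, 1]
```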
- An example system, method, and apparatus includes a platform product (including using an APP) that is configured to present any combination of one or more of the above-described performance metrics with standard cognitive tasks for navigation, such as the pathway span task, the dynamic maze task, the radial arm maze, and the Morris water navigation task.
- the combinations allow for greater precision in assessing brain function of an individual or group of individuals, standards setting, calibration of one metric as compared to another metric, and validation or corroboration of the results of one of the tools versus the others. That is, the standard cognitive tasks may test one type of navigation capability of the individual.
- the systems, methods, and apparatus described herein can be used to generate indicators of the individual's relative capabilities at the allocentric tasks versus the egocentric tasks.
- An example system, method, and apparatus includes a platform product (including using an APP) that is configured to present any combination of one or more of the above-described performance metrics with an interference processing or other multi-tasking task (such as but not limited to the dual task measurements performed using the Project: EVO™ platform).
- An example system, method, and apparatus includes a platform product (including using an APP) that is configured to present any combination of one or more of the above-described performance metrics with measurements of gross and fine motor function (as nData).
- An example system, method, and apparatus includes a platform product (including using an APP) that is configured to present any combination of one or more of the above-described performance metrics with standard cognitive tasks for working memory, such as spatial working memory.
- An example system, method, and apparatus includes a platform product (including using an APP) that is configured to present any combination of one or more of the above-described performance metrics with voice/speech monitoring based measures of cognitive and behavioral health.
- Through correlation of the results of the multiple performance measures described herein and two or more of the standard cognitive tasks, the combinations allow for greater precision in assessing brain function of an individual or group of individuals, standards setting, calibration of one metric as compared to another metric, and validation or corroboration of the results of one or more tools versus the others.
- An example system, method, and apparatus includes a platform product (including using an APP) that is configured to improve allocentric navigation as a treatment.
- the example platform product can be configured to adapt and/or increase the difficulty level of the navigation task(s) to improve wayfinding function.
- the platform product can be configured to make it harder for the individual to rely on egocentric navigation, by reducing the number of landmarks presented to the individual for use in a virtual space over time.
- the platform product can be configured to expand the size of the virtual environment so that there is more information for an individual to evaluate in order to make choices in the navigation.
- the platform product can be configured to create multiple virtual environments with the same visual landmarks in different positions, so that interference from the landmarks reduces the use of egocentric navigation.
- the platform product can be configured to present maps to the individual with increasingly incomplete information (for example, by gradually reducing the number of landmarks present in the landscape).
- the platform product can be configured to put obstacles in the way of the previously traveled (and thereby known) paths, to require the individual to find new routes.
- the platform product can be configured to place starting points and one or more targets in different locations than in a previous session in a given environment, to force an individual to use allocentric strategies.
- the platform product can be configured to cause the individual to interact with environments analogous to previously explored environments, and require the individual to employ knowledge of the source environment to reach the one or more targets in the second (analogous) environment.
- the platform product can be configured to introduce interfering activities of varying difficulty and/or duration in between navigation trials to stress maintenance and retrieval of spatial memory.
- the platform product can be configured to vary the number of possible source environments for an analogous (second) environment and/or the amount of information or time available with which to determine which is the source environment.
- the platform product can be configured to present any combination of two or more of these changes at substantially the same time or at differing times within the same session.
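- A hypothetical session-configuration sketch for combining the manipulations listed above is shown below; all field names and the adaptation rule are illustrative, not part of the described platform:

```python
from dataclasses import dataclass, field

@dataclass
class SessionConfig:
    landmark_count: int = 8          # reduce over time to de-emphasize landmarks
    environment_scale: float = 1.0   # >1.0 expands the virtual environment
    map_completeness: float = 1.0    # fraction of landmarks shown on the map
    novel_obstacles: bool = False    # block previously learned paths
    randomize_start_and_targets: bool = False
    interference_tasks: list = field(default_factory=list)  # between-trial tasks

def next_session(cfg: SessionConfig, improved: bool) -> SessionConfig:
    """Tighten the manipulations when the individual improves."""
    if improved:
        cfg.landmark_count = max(0, cfg.landmark_count - 1)
        cfg.map_completeness = max(0.0, cfg.map_completeness - 0.1)
        cfg.novel_obstacles = True
    return cfg
```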
- An example system, method, and apparatus includes a platform product (including using an APP) that is configured to communicate with a physiological measurement component for measuring nData (from physiological measurements, such as but not limited to fMRI measurements).
- the strength of hippocampal function can correlate with structural MRI measurements such as volume, cortical thickness, etc. This in turn can correlate with the ability of an individual to use allocentric navigation.
- the strength of caudate nucleus function can correlate with volume, and the ability of an individual to use egocentric navigation.
- Changes in hippocampal volume, e.g., decreases resulting from disease progression or increases as a result of therapy, can correlate with changes in the individual's ability to use allocentric navigation.
- Measurements of allocentric strategy efficiency can be used as indicators of disease progress or treatment efficacy. Such measures also can be used to determine the appropriate levels of difficulty to be used in the navigation-based treatment using the platform product(s) described herein.
- the cognitive platform based on interference processing can be the Project: EVO™ platform by Akili Interactive Labs, Inc., Boston, MA.
- An example system, method, and apparatus includes a platform product (including using an APP) that is configured to set baseline performance metrics at the navigation task(s) in APP session(s) based on nData measurements indicative of physiological condition and/or cognitive condition (including indicators of neuropsychological disorders), to increase accuracy of assessment and efficiency of treatment.
- the CSIs may be used to calibrate an nData component to individual user dynamics of nData.
- An example system, method, and apparatus includes a platform product (including using an APP) that is configured to use nData to detect states of attentiveness or inattentiveness to optimize delivery of navigation task(s) related to treatment or assessment.
- An example system, method, and apparatus includes a platform product (including using an APP) that is configured to use analysis of nData with navigation task(s) cData to detect and direct attention to specific CSIs related to treatment or assessment through subtle or overt manipulation of CSIs.
- An example system, method, and apparatus includes a platform product (including using an APP) that is configured to monitor nData indicative of anger and/or frustration to promote continued user interaction with the cognitive platform by offering alternative navigation task(s) or disengagement from the navigation task(s).
- An example system, method, and apparatus includes a platform product (including using an APP) that is configured to combine signals from navigation task(s) cData with nData to optimize individualized treatment promoting improvement of indicators of cognitive abilities, and thereby, cognition.
- An example system, method, and apparatus includes a platform product (including using an APP) that is configured to use a profile of nData to confirm/verify/authenticate a user's identity.
- An example system, method, and apparatus includes a platform product (including using an APP) that is configured to use nData to detect positive emotional response to CSIs in navigation task(s) in order to catalog individual user preferences to customize CSIs to optimize enjoyment and promote continued engagement with assessment or treatment sessions.
- An example system, method, and apparatus includes a platform product (including using an APP) that is configured to generate user profiles of cognitive improvement (such as but not limited to, user profiles associated with users classified or known to exhibit improved working memory, attention, processing speed, and/or perceptual detection/discrimination), and deliver a treatment that adapts navigation task(s) to optimize the profile of a new user as confirmed by profiles from nData.
- An example system, method, and apparatus includes a platform product (including using an APP) that is configured to provide to a user a selection of one or more profiles configured for cognitive improvement.
- An example system, method, and apparatus includes a platform product (including using an APP) that is configured to monitor nData from auditory and visual physiological measurements to detect interference from external environmental sources that may interfere with the assessment or treatment being performed by a user using an APP.
- An example system, method, and apparatus includes a platform product (including using an APP) that is configured to use cData and/or nData (including metrics from analyzing the data) as a determinant or to make a decision as to whether a user (including a patient using a medical device) is likely to respond or not to respond to a treatment (such as but not limited to a cognitive treatment and/or a treatment using a biologic, a drug or other pharmaceutical agent).
- the system, method, and apparatus can be configured to select whether a user (including a patient using a medical device) should receive treatment based on specific physiological or cognitive measurements that can be used as signatures that have been validated to predict efficacy in a given individual or certain individuals of the population (e.g., individual(s) classified to a given group based on amyloid status).
- Such an example system, method, and apparatus configured to perform the analysis (and associated computation) described herein can be used as a biomarker to perform monitoring and/or screening.
- the example system, method, and apparatus can be configured to provide a quantitative measure of the degree of efficacy of a cognitive treatment (including the degree of efficacy in conjunction with use of a biologic, a drug or other pharmaceutical agent) for a given individual or certain individuals of the population (e.g., individual(s) classified to a given group based on amyloid status).
- the individual or certain individuals of the population may be classified as having a certain condition, including a neurodegenerative condition.
- An example system, method, and apparatus includes a platform product (including using an APP) that is configured to use nData to monitor a user's ability to anticipate the course of navigation task(s), and to manipulate navigation task(s) patterns and/or rules to disrupt user anticipation of responses to navigation task(s), to optimize treatment or assessment in an APP.
- FIG. 11 shows an example apparatus 1100 according to the principles herein that can be used to implement the cognitive platform described herein.
- the example apparatus 1100 includes at least one memory 1102 and at least one processing unit 1104.
- the at least one processing unit 1104 is communicatively coupled to the at least one memory 1102.
- Example memory 1102 can include, but is not limited to, hardware memory, non-transitory tangible media, magnetic storage disks, optical disks, flash drives, computational device memory, random access memory, such as but not limited to DRAM, SRAM, EDO RAM, and the like.
- Example processing unit 1104 can include, but is not limited to, a microchip, a processor, a microprocessor, a special purpose processor, an application specific integrated circuit, a microcontroller, a field programmable gate array, any other suitable processor, or combinations thereof.
- the at least one memory 1102 is configured to store processor-executable instructions 1106 and a computing component 1108.
- the computing component 1108 can be used to analyze the cData and/or nData received from the cognitive platform coupled with the one or more physiological or monitoring components and/or cognitive testing components as described herein.
- the memory 1102 also can be used to store data 1110, such as but not limited to the nData 1112 (including data from measurements made using physiological or monitoring components and/or cognitive testing components).
- the data 1110 can be received from one or more physiological or monitoring components and/or cognitive testing components that are coupled to or integral with the apparatus 1100.
- the at least one processing unit 1104 executes the processor-executable instructions 1106 stored in the memory 1102 at least to analyze the cData and/or nData received from the cognitive platform coupled with the one or more physiological or monitoring components and/or cognitive testing components as described herein, using the computing component 1108.
- the at least one processing unit 1104 also executes processor-executable instructions 1106 to control a transmission unit to transmit values indicative of the analysis of the cData and/or nData received from the cognitive platform coupled with the one or more physiological or monitoring components and/or cognitive testing components as described herein, and/or controls the memory 1102 to store values indicative of the analysis of the cData and/or nData.
- the at least one processing unit 1104 executes the processor-executable instructions 1106 stored in the memory 1102 at least to display a representation of navigating in the computerized environment in response to physical actions of the individual in performing a navigation task, to collect measurement data from measurements of physical actions of the individual in performing the navigation task, to adjust a difficulty of the navigation task, to compute a performance metric based on the measurement data, and/or to provide an indication of the cognitive ability of the individual.
- FIG. 12 is a block diagram of an example computing device 1210 that can be used as a computing component according to the principles herein.
- computing device 1210 can be configured as a console that receives user input to implement the computing component, including to display a representation of navigating in the computerized environment.
- FIG. 12 also refers back to and provides greater detail regarding various elements of the example system of FIG. 11.
- the computing device 1210 can include one or more non-transitory computer-readable media for storing one or more computer-executable instructions or software for implementing examples.
- the non-transitory computer-readable media can include, but are not limited to, one or more types of hardware memory, non-transitory tangible media (for example, one or more magnetic storage disks, one or more optical disks, one or more flash drives), and the like.
- memory 1102 included in the computing device 1210 can store computer-readable and computer-executable instructions or software for performing the operations disclosed herein.
- the memory 1102 can store a software application 240 which is configured to perform various of the disclosed operations, e.g., to analyze cognitive platform measurement data and response data (including data responsive to physical actions of the individual in performing the navigation task(s)), to display a representation of navigating in the computerized environment in response to physical actions of the individual in performing a navigation task, to collect measurement data from measurements of physical actions of the individual in performing the navigation task, to adjust a difficulty of the navigation task, to compute a performance metric based on the measurement data, and/or to provide an indication of the cognitive ability of the individual.
- the computing device 1210 also includes configurable and/or programmable processor 1104 and an associated core 1214, and optionally, one or more additional configurable and/or programmable processing devices, e.g., processor(s) 1212' and associated core(s) 1214' (for example, in the case of computational devices having multiple processors/cores), for executing computer-readable and computer-executable instructions or software stored in the memory 1102 and other programs for controlling system hardware.
- Processor 1104 and processor(s) 1212' can each be a single core processor or multiple core (1214 and 1214') processor.
- Virtualization can be employed in the computing device 1210 so that infrastructure and resources in the console can be shared dynamically.
- a virtual machine 1224 can be provided to handle a process executing on multiple processors so that the process appears to be using only one computing resource rather than multiple computing resources. Multiple virtual machines can also be used with one processor.
- Memory 1102 can include a computational device memory or random access memory, such as but not limited to DRAM, SRAM, EDO RAM, and the like.
- Memory 1102 can include a non-volatile memory, such as but not limited to a hard-disk or flash memory.
- Memory 1102 can include other types of memory as well, or combinations thereof.
- the memory 1102 and the at least one processing unit can be arranged, in a non-limiting example, as components of a peripheral device.
- the example peripheral device can be programmed to communicate with or otherwise couple to a primary computing device, to provide the functionality of any of the example cognitive platform and/or platform product, and implement any of the example analyses (including the associated computations) described herein.
- the peripheral device can be programmed to directly communicate with or otherwise couple to the primary computing device (such as but not limited to via a USB or HDMI input), or indirectly via a cable (including a coaxial cable), copper wire (including, but not limited to, PSTN, ISDN, and DSL), optical fiber, or other connector or adapter.
- the peripheral device can be programmed to communicate wirelessly (such as but not limited to Wi-Fi or Bluetooth®) with the primary computing device.
- the example primary computing device can be a smartphone (such as but not limited to an iPhone®, a BlackBerry®, or an Android™-based smartphone), a television, a workstation, a desktop computer, a laptop, a tablet, a slate computer, an electronic-reader (e-reader), a digital assistant, or other electronic reader or hand-held, portable, or wearable computing device, or any other equivalent device, a gaming device (such as but not limited to an Xbox® or a Wii®), or other equivalent form of computing device.
- a user can interact with the computing device 1210 through a visual display unit 1228, such as a computer monitor, which can display one or more user interfaces (UI) 1230 that can be provided in accordance with example systems and methods.
- FIG. 12 encompasses a visual display unit 1228 as a component in communication with the computing device 1210, or a visual display 1228 configured as a display that is an integral portion of the computing device 1210 (such as but not limited to a touch screen or other contact or pressure sensitive screen of a computing device).
- the computing device 1210 can include other input/output (I/O) devices for receiving input from a user, for example, a keyboard or any suitable multi-point touch interface 1218, a pointing device 1220 (e.g., a mouse), a camera or other image recording device, a microphone or other sound recording device, an accelerometer, a gyroscope, a sensor for tactile, vibrational, or auditory signal, and/or at least one actuator.
- the keyboard 1218 and the pointing device 1220 can be coupled to the visual display unit 1228.
- the computing device 1210 can include other suitable conventional I/O peripherals.
- the computing device 1210 can also include one or more storage devices 1234 and an associated core 1236, such as a hard-drive, CD-ROM, or other computer readable media, for storing data and computer-readable instructions and/or software that perform operations disclosed herein.
- Example storage device 1234 (and associated core 1236) can also store one or more databases for storing any suitable information required to implement example systems and methods. The databases can be updated manually or automatically at any suitable time to add, delete, and/or update one or more items in the databases.
- the computing device 1210 can include a network interface 1222 configured to interface via one or more network devices 1232 with one or more networks, for example, Local Area Network (LAN), metropolitan area network (MAN), Wide Area Network (WAN) or the Internet through a variety of connections including, but not limited to, standard telephone lines, LAN or WAN links (for example, 802.11, T1, T3, 56 kb, X.25), broadband connections (for example, ISDN, Frame Relay, ATM), wireless connections, controller area network (CAN), or some combination of any or all of the above.
- the network interface 1222 can include a built-in network adapter, network interface card, PCMCIA network card, card bus network adapter, wireless network adapter, USB network adapter, modem or any other device suitable for interfacing the computing device 1210 to any type of network capable of communication and performing the operations described herein.
- the computing device 1210 can be any computational device, such as a smartphone (such as but not limited to an iPhone®, a BlackBerry®, or an Android™-based smartphone), a television, a workstation, a desktop computer, a server, a laptop, a tablet, a slate, an electronic-reader (e-reader), a digital assistant, or other electronic reader or hand-held, portable, or wearable computing device, or any other equivalent device, an Xbox®, a Wii®, or other equivalent form of computing device.
- the one or more network devices 1232 may communicate using different types of protocols, such as but not limited to WAP (Wireless Application Protocol), TCP/IP (Transmission Control Protocol/Internet Protocol), NetBEUI (NetBIOS Extended User Interface), or IPX/SPX.
- the computing device 1210 can execute any operating system 1226, such as any of the versions of the Microsoft® Windows® operating systems, the iOS® operating system, the Android™ operating system, the different releases of the Unix and Linux operating systems, any version of the MacOS® for Macintosh computers, any embedded operating system, any real-time operating system, any open source operating system, any proprietary operating system, or any other operating system capable of executing on the console and performing the operations described herein.
- the operating system 1226 can be executed in native mode or emulated mode.
- the operating system 1226 can be executed on one or more cloud machine instances.
- Examples of the systems, methods and operations described herein can be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more thereof.
- Examples of the systems, methods and operations described herein can be implemented as one or more computer programs, i.e., one or more modules of computer program instructions, encoded on computer storage medium for execution by, or to control the operation of, data processing apparatus.
- the program instructions can be encoded on an artificially generated propagated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, that is generated to encode information for transmission to suitable receiver apparatus for execution by a data processing apparatus.
- a computer storage medium can be, or be included in, a computer-readable storage device, a computer-readable storage substrate, a random or serial access memory array or device, or a combination of one or more of them. Moreover, while a computer storage medium is not a propagated signal, a computer storage medium can be a source or destination of computer program instructions encoded in an artificially generated propagated signal. The computer storage medium can also be, or be included in, one or more separate physical components or media (e.g., multiple CDs, disks, or other storage devices).
- the term "data processing apparatus” or “computing device” encompasses all kinds of apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, a system on a chip, or multiple ones, or combinations, of the foregoing.
- the apparatus can include special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit).
- the apparatus can also include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, a cross-platform runtime environment, a virtual machine, or a combination of one or more of them.
- a computer program (also known as a program, software, software application, script, application or code) can be written in any form of programming language, including compiled or interpreted languages, declarative or procedural languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, object, or other unit suitable for use in a computing environment.
- a computer program may, but need not, correspond to a file in a file system.
- a program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code).
- a computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
- the processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform actions by operating on input data and generating output.
- the processes and logic flows can also be performed by, and apparatuses can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit).
- processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer.
- a processor will receive instructions and data from a read only memory or a random access memory or both.
- the essential elements of a computer are a processor for performing actions in accordance with instructions and one or more memory devices for storing instructions and data.
- a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks.
- a computer need not have such devices.
- a computer can be embedded in another device, e.g., a mobile telephone, a personal digital assistant (PDA), a mobile audio or video player, a game console, a Global Positioning System (GPS) receiver, or a portable storage device (e.g., a universal serial bus (USB) flash drive), for example.
- Devices suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and CD ROM and DVD-ROM disks.
- the processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
- embodiments of the subject matter described in this specification can be implemented on a computer having a display device for displaying information to the user, and a keyboard and a pointing device (e.g., a mouse, a stylus, a touch screen, or a trackball) by which the user can provide input to the computer.
- Other kinds of devices can be used to provide for interaction with a user as well.
- feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback
- input from the user can be received in any form, including acoustic, speech, or tactile input.
- a computer can interact with a user by sending documents to and receiving documents from a device that is used by the user; for example, by sending web pages to a web browser on a user's client device in response to requests received from the web browser.
- a system, method or operation herein can be implemented in a computing system that includes a back end component, e.g., as a data server, or that includes a middleware component, e.g., an application server, or that includes a front end component, e.g., a client computer having a user interface or a Web browser through which a user can interact with an implementation of the subject matter described in this specification, or any combination of one or more such back end, middleware, or front end components.
- the components of the system can be interconnected by any form or medium of digital data communication, e.g., a communication network.
- communication networks include a local area network ("LAN"), a wide area network ("WAN"), an inter-network (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks).
- An example computing system herein can include clients and servers.
- a client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs executing on the respective computers and having a client-server relationship to each other.
- a server transmits data to a client device (e.g., for purposes of displaying data to and receiving user input from a user interacting with the client device). Data generated at the client device (e.g., a result of the user interaction) can be received from the client device at the server.
- FIGs. 13A - 13B show a flowchart of a non-limiting example method that can be implemented using a cognitive platform or platform product that includes at least one processing unit.
- the at least one processing unit is used to render at least one graphical user interface to present the navigation task(s) (including route-learning tasks and/or relative-orientation tasks and/or way-finding tasks) and one or more CSIs to the user for interaction.
- the at least one processing unit is used to cause a component of the program product to receive cData associated with user responses from the user interaction with the cognitive platform.
- the at least one processing unit is used to cause a component of the program product to receive nData indicative of measurements made before, during, and/or after the user interacts with the cognitive platform (including nData from measurements of physiological or monitoring components and/or cognitive testing components).
- block 1304 may be performed in a similar timeframe as, or substantially simultaneously with, block 1306. In another example implementation of the method, block 1304 may be performed at different time points from block 1306.
- the at least one processing unit also is used to: analyze the cData and/or nData to provide a measure of the individual's condition (including cognitive condition); and/or analyze the cData and/or nData to provide a measure of the individual's performance metric for a given type of navigation task, including route-learning tasks and/or relative-orientation tasks and/or way-finding tasks (whether the navigation task requires allocentric navigation and/or egocentric navigation); and/or analyze the differences in the individual's performance by determining the differences between the individual's performance at allocentric navigation as compared to the individual's performance at egocentric navigation (including based on differences in the cData) and the differences in the associated nData; and/or adjust the difficulty level of the navigation task(s), including route-learning tasks and/or relative-orientation tasks and/or way-finding tasks (including CSIs), based on the analysis of the cData (including the measures of the individual's performance determined in the analysis); and/or provide an output based on the analysis.
- FIG. 13C shows a flowchart of a non-limiting example method that can be implemented using a cognitive platform or platform product that includes at least one processing unit.
- the example cognitive platform or platform product includes a memory to store processor-executable instructions, and one or more processing units communicatively coupled to the user interface and the memory. Upon execution of the processor-executable instructions by the one or more processing units, the one or more processing units are configured to execute the method in the flowchart of FIG. 13C.
- the one or more processing units are used to present via the user interface a first task that requires navigation of a specified route through an environment.
- the one or more processing units are used to present via the user interface a first indicator configured to navigate the specified route from an initial point in the environment to a target end-point with or without input from the individual.
- the one or more processing units are used to configure the user interface to display instructions to the individual to perform a second task, the second task requiring the individual either: (i) to navigate a reverse of at least a portion of the specified route, or (ii) to navigate at least a portion of the specified route at least one additional time.
- the one or more processing units are used to present via the user interface a second indicator configured to navigate in the environment in response to physical actions of the individual to control one of (i) a relative direction of the second indicator, or (ii) a speed of movement of the second indicator, or (iii) both (i) and (ii), to perform the second task.
- the one or more processing units are used to obtain measurement data by measuring data indicative of the physical actions of the individual to control the second indicator in performing the second task.
- the one or more processing units are used to analyze the measurement data to generate a performance metric for the performance of the second task, the performance metric providing an indication of the cognitive ability of the individual.
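As a concrete illustration of how such a performance metric might be computed from the measurement data, the following Python sketch combines factors named elsewhere in this disclosure (total completion time, wrong turns, incorrect directions of movement, and deviation from the specified route). The record structure, field names, and weights are illustrative assumptions, not part of the disclosure.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class TrialRecord:
    """One hypothetical second-task trial; all field names are illustrative."""
    elapsed_seconds: float   # total time taken to complete the task
    wrong_turns: int         # turns deviating from the specified route
    wrong_directions: int    # movements made in an incorrect direction
    path: List[Tuple[float, float]] = field(default_factory=list)            # sampled (x, y) positions
    reference_route: List[Tuple[float, float]] = field(default_factory=list) # the specified route

def route_deviation(path, reference_route):
    """Mean distance from each sampled position to the nearest reference-route point."""
    if not path or not reference_route:
        return 0.0
    total = 0.0
    for x, y in path:
        total += min(((x - rx) ** 2 + (y - ry) ** 2) ** 0.5 for rx, ry in reference_route)
    return total / len(path)

def performance_metric(trial, w_time=1.0, w_turn=5.0, w_dir=5.0, w_dev=2.0):
    """Weighted penalty score (lower is better); the weights are placeholders,
    not values taken from the disclosure."""
    return (w_time * trial.elapsed_seconds
            + w_turn * trial.wrong_turns
            + w_dir * trial.wrong_directions
            + w_dev * route_deviation(trial.path, trial.reference_route))
```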
- FIG. 13D shows a flowchart of a non-limiting example method that can be implemented using a cognitive platform or platform product that includes at least one processing unit.
- the example cognitive platform or platform product includes a memory to store processor-executable instructions, and one or more processing units communicatively coupled to the user interface and the memory. Upon execution of the processor-executable instructions by the one or more processing units, the one or more processing units are configured to execute the method in the flowchart of FIG. 13D.
- the one or more processing units are used to present via the user interface a first task that requires navigation of a course that includes at least one turn of a discrete angular amount in an environment.
- the one or more processing units are used to present via the user interface a first indicator configured to navigate in the environment in response to physical actions of the individual to control the first indicator from an initial point of the course to a target end-point.
- the one or more processing units are used to configure the user interface to display instructions to the individual to perform a second task, the second task requiring the individual to control a second indicator to indicate a relative orientation of the initial point or a different specified location in the environment relative to the target end-point.
- the one or more processing units are used to measure data indicative of the relative orientation indicated using the second indicator.
- the one or more processing units are used to analyze the measurement data to generate a performance metric for the performance of the second task, the performance metric providing an indication of the cognitive ability of the individual.
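A minimal sketch of how the relative-orientation measurement might be scored, assuming the indicated orientation and the actual orientation are both expressed as compass-style bearings in degrees; the function and variable names are illustrative.

```python
import math

def true_bearing_deg(end_point, initial_point):
    """Compass-style bearing (0 = +y axis, clockwise) from the target
    end-point to the initial point, both given as (x, y) tuples."""
    dx = initial_point[0] - end_point[0]
    dy = initial_point[1] - end_point[1]
    return math.degrees(math.atan2(dx, dy)) % 360.0

def angular_error_deg(indicated_deg, actual_deg):
    """Smallest absolute angle between the orientation the individual
    indicated with the second indicator and the actual relative orientation."""
    diff = (indicated_deg - actual_deg) % 360.0
    return min(diff, 360.0 - diff)

# Example: the individual points to 95 degrees; the initial point
# actually lies at roughly 10 degrees from the target end-point.
actual = true_bearing_deg((0.0, 0.0), (3.0, 17.0))
print(round(angular_error_deg(95.0, actual), 1))
```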
- FIG. 13E shows a flowchart of a non-limiting example method that can be implemented using a cognitive platform or platform product that includes at least one processing unit.
- the example cognitive platform or platform product includes a memory to store processor-executable instructions, and one or more processing units communicatively coupled to the user interface and the memory. Upon execution of the processor-executable instructions by the one or more processing units, the one or more processing units are configured to execute the method in the flowchart of FIG. 13E.
- the one or more processing units are used to present via the user interface a first task that requires the individual to navigate in an environment.
- the first task comprises an exploration phase in which the one or more processing units are configured to provide to the individual control of a first indicator to navigate in at least a portion of the environment from an initial point either (i) along a specified route or (ii) without restriction in a free-exploration phase.
- the environment comprises one or more of a specified location, a specified landmark, or a specified object.
- the first indicator is configured to navigate in the environment based on physical actions of the individual to control one of (i) a relative direction of the first indicator, (ii) a speed of movement of the first indicator, or (iii) both (i) and (ii), to perform the first task.
- the one or more processing units are used to configure the user interface to display instructions to the individual to perform a second task, the second task requiring navigation to one or more of the specified location, the specified landmark feature, or the specified object.
- the one or more processing units are used to present via the user interface a second indicator configured to navigate in the environment based on physical actions of the individual to control one of (i) a relative direction of the second indicator, (ii) a speed of movement of the second indicator, or (iii) both (i) and (ii), to navigate to the specified location, the specified landmark feature, or the specified object based on the instructions, where the specified location, the specified landmark, or the specified object are not displayed to the individual during performance of the second task.
- the one or more processing units are used to obtain measurement data by measuring data indicative of the physical actions of the individual in performing the second task.
- the one or more processing units are used to analyze the measurement data to generate a performance metric for the performance of the second task, the performance metric providing an indication of the cognitive ability of the individual.
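One plausible way to turn the measured physical actions into a way-finding performance metric is a path-efficiency ratio comparing the route the individual actually traced against the shortest route to the hidden target. This is a sketch under that assumption; the disclosure does not prescribe this particular formula.

```python
def path_length(points):
    """Total length of a sampled navigation path given as (x, y) tuples."""
    return sum(((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
               for (x1, y1), (x2, y2) in zip(points, points[1:]))

def path_efficiency(actual_path, optimal_path):
    """Ratio of optimal to actual path length: 1.0 is a perfect trial,
    values near 0 indicate heavy wandering before reaching the target."""
    actual = path_length(actual_path)
    return path_length(optimal_path) / actual if actual > 0 else 0.0
```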
- FIG. 13F shows a flowchart of a non-limiting example method that can be implemented using a cognitive platform or platform product that includes at least one processing unit.
- the example cognitive platform or platform product includes a memory to store processor-executable instructions, and one or more processing units communicatively coupled to the user interface and the memory. Upon execution of the processor-executable instructions by the one or more processing units, the one or more processing units are configured to execute the method in the flowchart of FIG. 13F.
- the one or more processing units are used to present via the user interface a first task that requires the individual to navigate in an environment.
- a first portion of the first task comprises an exploration phase in which the one or more processing units are configured to provide to the individual control of a first indicator to navigate in at least a portion of the environment from an initial point either (i) along a specified route or (ii) without restriction in a free-exploration phase.
- the environment comprises one or more of a specified location, a specified landmark, or a specified object.
- the first indicator is configured to navigate in the environment based on physical actions of the individual to control one of (i) a relative direction of the first indicator, (ii) a speed of movement of the first indicator, or (iii) both (i) and (ii), to perform the first portion of the first task.
- the one or more processing units are used to configure the user interface to display instructions to the individual to perform a second portion of the first task requiring navigation to one or more of the specified location, the specified landmark feature, or the specified object.
- the one or more processing units are used to obtain measurement data by measuring data indicative of the physical actions of the individual in performing the second portion of the first task.
- the one or more processing units are used to analyze the measurement data to generate a performance metric for the performance of the first task, the performance metric providing an indication of the cognitive ability of the individual.
- the at least one processing unit prior to rendering the tasks at the user interface, is configured to cause a component of the program product to receive nData indicative of one or more of an amount, concentration, or dose titration of a pharmaceutical agent, drug, or biologic being or to be administered to an individual. Based at least in part on the analysis of the cData collected from the individual's performance of the navigation task(s), the at least one processing unit is configured to generate an output to the user interface indicative of a change in the individual's cognitive ability.
- Any classification of an individual as to likelihood of onset and/or stage of progression of a condition (including a neurodegenerative condition) in block 1308 can be transmitted as a signal to a medical device, healthcare computing system, or other device, and/or to a medical practitioner, a health practitioner, a physical therapist, a behavioral therapist, a sports medicine practitioner, a pharmacist, or other practitioner, to allow formulation of a course of treatment for the individual or modification of an existing course of treatment, including determination of a change in dosage (such as but not limited to an amount, concentration, and/or dose titration) of a drug, biologic, or other pharmaceutical agent administered to the individual, or determination of an optimal type or combination of drug, biologic, or other pharmaceutical agent for the individual.
- the results of the analysis may be used to modify the difficulty level or other property of the navigation task(s), including route-learning tasks and/or relative-orientation tasks and/or way-finding tasks, or CSIs.
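Such difficulty adaptation could be implemented with a simple staircase rule, as in the sketch below; the thresholds, metric convention (lower is better), and level bounds are illustrative placeholders rather than values from the disclosure.

```python
def adjust_difficulty(level, metric,
                      promote_below=20.0, demote_above=60.0,
                      min_level=1, max_level=10):
    """One-up/one-down staircase: raise the difficulty after a strong trial,
    lower it after a weak one, and otherwise hold it steady."""
    if metric <= promote_below:      # lower metric = better performance here
        return min(level + 1, max_level)
    if metric >= demote_above:
        return max(level - 1, min_level)
    return level
```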
- FIG. 14A shows a non-limiting example system, method, and apparatus according to the principles herein, where the platform product (including using an APP) is configured as a cognitive platform 1402 that is separate from, but configured for coupling with, one or more of the physiological components 1404.
- FIG. 14B shows another non-limiting example system, method, and apparatus according to the principles herein, where the platform product (including using an APP) is configured as an integrated device 1410, in which the cognitive platform 1412 is integrated with one or more of the physiological components 1414.
- FIG. 15 shows a non-limiting example implementation where the platform product (including using an APP) is configured as a cognitive platform 1502 that is configured for coupling with a physiological component 1504.
- the cognitive platform 1502 is configured as a tablet including at least one processor programmed to implement the processor-executable instructions associated with the tasks and CSIs described hereinabove, to receive cData associated with user responses from the user interaction with the cognitive platform 1502, to receive the nData from the physiological component 1504, to analyze the cData and/or nData as described hereinabove to provide a measure of the individual's physiological condition and/or cognitive condition, to analyze the differences in the individual's performance based on determining the differences between the user's responses and the nData, and/or to adjust the difficulty level of the tasks and/or CSIs (computerized stimuli or interactions) based on that analysis.
- the physiological component 1504 is mounted to a user's head, to perform the measurements before, during and/or after user interaction with the cognitive platform 1502, to provide the nData.
- measurements are made using a cognitive platform that is configured for coupling with an fMRI, for use in medical application validation and personalized medicine.
- Consumer-level fMRI devices may be used to improve the accuracy and the validity of medical applications by tracking and detecting changes in the stimulation of brain regions.
- fMRI measurements can be used to provide measurement data such as cortical thickness and other similar measures.
- the user interacts with a cognitive platform, and the fMRI is used to measure physiological data.
- the user is expected to exhibit stimulation of a particular brain region or combination of brain regions based on the actions of the user while interacting with the cognitive platform.
- the platform product may be configured as an integrated device including the fMRI component coupled with the cognitive platform, or as a cognitive platform that is separate from, but configured for coupling with the fMRI component.
- measurements can be made of the stimulation of portions of the user's brain, and analysis can be performed to detect changes in order to determine whether the user is exhibiting the desired responses.
- the fMRI can be used to collect measurement data to be used to identify the progress of the user in interacting with the cognitive platform.
- the analysis can be used to determine whether the cognitive platform should be caused to provide tasks and/or CSIs to reinforce or diminish the user responses that the fMRI is detecting, by adjusting the user's experience in the application.
- These adjustments to the tasks and/or CSIs can be made in real time.
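A rough sketch of such a real-time, closed-loop adjustment, assuming the fMRI nData has already been reduced to a normalized activation value for the monitored brain region; the category labels, target band, and normalization are assumptions for illustration.

```python
def select_next_csi(activation, target=0.5, band=0.1):
    """Pick the next category of computerized stimuli or interactions (CSIs)
    based on how the monitored region's normalized activation (0..1,
    derived from the fMRI nData) compares with a target band."""
    if activation < target - band:
        return "reinforcing_csi"   # stimuli intended to strengthen the measured response
    if activation > target + band:
        return "diminishing_csi"   # stimuli intended to damp the measured response
    return "maintain_csi"          # keep the current task/CSI settings

print(select_next_csi(0.32))       # -> "reinforcing_csi"
```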
- various aspects of the invention may be embodied at least in part as a computer readable storage medium (or multiple computer readable storage media) (e.g., a computer memory, compact disks, optical disks, magnetic tapes, flash memories, circuit configurations in Field Programmable Gate Arrays or other semiconductor devices, or other tangible computer storage medium or non-transitory medium) encoded with one or more programs that, when executed on one or more computers or other processors, perform methods that implement the various embodiments of the technology discussed above.
- the computer readable medium or media can be transportable, such that the program or programs stored thereon can be loaded onto one or more different computers or other processors to implement various aspects of the present technology as discussed above.
- The terms "program" or "software" are used herein in a generic sense to refer to any type of computer code or set of computer-executable instructions that can be employed to program a computer or other processor to implement various aspects of the present technology as discussed above. Additionally, it should be appreciated that according to one aspect of this embodiment, one or more computer programs that when executed perform methods of the present technology need not reside on a single computer or processor, but may be distributed in a modular fashion amongst a number of different computers or processors to implement various aspects of the present technology.
- Computer-executable instructions may be in many forms, such as program modules, executed by one or more computers or other devices.
- program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types.
- functionality of the program modules may be combined or distributed as desired in various embodiments.
- the technology described herein may be embodied as a method, of which at least one example has been provided.
- the acts performed as part of the method may be ordered in any suitable way. Accordingly, embodiments may be constructed in which acts are performed in an order different than illustrated, which may include performing some acts simultaneously, even though shown as sequential acts in illustrative embodiments.
- a reference to "A and/or B", when used in conjunction with open-ended language such as “comprising” can refer, in one embodiment, to A only (optionally including elements other than B); in another embodiment, to B only (optionally including elements other than A); in yet another embodiment, to both A and B (optionally including other elements); etc.
- the phrase "at least one,” in reference to a list of one or more elements, should be understood to mean at least one element selected from any one or more of the elements in the list of elements, but not necessarily including at least one of each and every element specifically listed within the list of elements and not excluding any combinations of elements in the list of elements.
- This definition also allows that elements may optionally be present other than the elements specifically identified within the list of elements to which the phrase "at least one" refers, whether related or unrelated to those elements specifically identified.
- "At least one of A and B" can refer, in one embodiment, to at least one, optionally including more than one, A, with no B present (and optionally including elements other than B); in another embodiment, to at least one, optionally including more than one, B, with no A present (and optionally including elements other than A); in yet another embodiment, to at least one, optionally including more than one, A, and at least one, optionally including more than one, B (and optionally including other elements); etc.
Abstract
Apparatus, systems and methods for generating an assessment of one or more cognitive abilities in an individual by analyzing measurement data collected from measurements of the physical actions of the individual in interacting with computerized tasks involving spatial navigation including one or more of way-finding, path-plotting, route-learning, or path integration. The analysis is used to provide a performance metric indicative of the cognitive ability of the individual, and optionally to generate an indication of a neurodegenerative condition of the individual.
Description
PLATFORM FOR IDENTIFICATION OF BIOMARKERS USING NAVIGATION TASKS AND TREATMENTS BASED THEREON
CROSS-REFERENCE TO RELATED APPLICATIONS
[001] This application claims priority benefit of U.S. provisional application number 62/512,351, entitled "PLATFORM FOR IDENTIFICATION OF BIOMARKERS USING NAVIGATION TASKS AND TREATMENTS BASED THEREON" filed on May 30, 2017, and is a continuation-in-part of international application number PCT/US2017/066214, entitled "PLATFORM FOR IDENTIFICATION OF BIOMARKERS USING NAVIGATION TASKS AND TREATMENTS USING NAVIGATION TASKS" filed on December 13, 2017, each of which is incorporated herein by reference in its entirety, including drawings.
BACKGROUND OF THE DISCLOSURE
[002] Cognitive dysfunction is one of the characteristics exhibited by individuals with various neurodegenerative conditions such as Alzheimer's disease and Parkinson's disease. Studies show that neurodegenerative conditions can affect areas of the brain such as the caudate nucleus, the hippocampus, and the entorhinal cortex. For example, the early stages of Alzheimer's disease can manifest with memory loss and spatial disorientation symptoms. The caudate nucleus is implicated in motor and spatial functions. Physiological techniques and other technology used to measure the state of these regions of the brain can be costly, inefficient, and time-consuming.
SUMMARY OF THE DISCLOSURE
[003] In view of the foregoing, apparatus, systems and methods are provided for quantifying aspects of cognition (including cognitive abilities). The indication of cognitive abilities of an individual can provide insight into the relative health or strength of portions of the brain of the individual. In certain configurations, the example apparatus, systems and methods can be implemented for enhancing certain cognitive abilities of the individual.
[004] In an aspect, embodiments relate to an apparatus for generating an assessment of one or more cognitive skills in an individual. The apparatus includes a user interface, a memory to store processor-executable instructions, and one or more processing units communicatively coupled to the user interface and the memory. Upon execution of the processor-executable instructions by the one or more processing units, the one or more processing units are configured to present via the user interface a first task that requires navigation of a specified route through an environment, and to present via the user interface a first indicator configured to navigate the specified route from an initial point in the environment to a target end-point with or without input from the individual. The one or more processing units are configured to configure the user interface to display instructions to the individual to perform a second task, the second task requiring the individual either: (i) to navigate a reverse of at least a portion of the specified route, or (ii) to navigate at least a portion of the specified route at least one additional time. The one or more processing units are configured to present via the user interface a second indicator configured to navigate in the environment in response to physical actions of the individual to control one of (i) a relative direction of the second indicator, or (ii) a speed of movement of the second indicator, or (iii) both (i) and (ii), to perform the second task. Measurement data is obtained by measuring data indicative of the physical actions of the individual to control the second indicator in performing the second task. The measurement data is analyzed to generate a performance metric for the performance of the second task, the performance metric providing an indication of the cognitive ability of the individual.
[005] One or more of the following features may be included in the various aspects. The target end-point may include a specified location in the environment, a specified landmark feature in the environment, and/or a specific object in the environment.
In response to detecting that the second indicator is making a wrong turn and/or moving in an incorrect direction based on analysis of the measurement data, the one or more processing units may be configured to return the second indicator to either: (a) a portion of the specified route that was navigated successfully, or (b) the initial point.
[006] In response to detecting that the second indicator is making a wrong turn and/or moving in an incorrect direction at a portion of the environment based on analysis of the measurement data, the one or more processing units may be configured to present at least one directional aid via the user interface to indicate a correction to the turn or the direction.
[007] A degree of difficulty of the second task may be modified based on the number of directional aids displayed to the individual in performance of the second task.
[008] Generating the performance metric may include considering one or more of a total time taken to successfully complete the second task, a number of incorrect turns made by the second indicator, a number of incorrect directions of movement made by the second indicator, or a degree of deviation of the user-navigated route in the second task as compared to the specified route.
[009] In another aspect, embodiments relate to an apparatus for generating an assessment of one or more cognitive skills in an individual. The apparatus includes a user interface, a memory to store processor-executable instructions, and one or more processing units communicatively coupled to the user interface and the memory. Upon execution of the processor-executable instructions by the one or more processing units, the one or more processing units are configured to present via the user interface a first task that requires navigation of a course that includes at least one turn of a discrete angular amount in an environment, and to present via the user interface a first indicator configured to navigate in the environment in response to physical actions of the individual to control the first indicator from an initial point of the course to a target end-point. The one or more processing units are configured to configure the user interface to display instructions to the individual to perform a second task, the second task requiring the individual to control a second indicator to indicate a relative orientation of the initial point or a different specified location in the environment relative to the target end-point. Data indicative of the relative orientation indicated using the second indicator is measured. The measurement data is analyzed to generate a performance metric for the performance of the second task, the performance metric providing an indication of the cognitive ability of the individual.
[0010] One or more of the following features may be included in the various aspects. The second indicator may include an avatar, a pointer tool, and/or a tool for drawing a line, each for indicating the relative orientation.
[0011] Generating the performance metric may include considering a difference between data indicative of the relative orientation indicated using the second indicator and data indicative of actual relative orientation between the initial point and the target endpoint.
[0012] The first task may include a free-exploration phase in which the one or more processing units are configured to allow the individual to control the first indicator to navigate in at least a portion of the environment without restriction or guidance.
[0013] The one or more processing units may be configured to display limited visual information about the environment to the individual based on proximity and/or directionality relative to the second indicator.
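For instance, limiting visual information by proximity and directionality could be implemented as a visibility test against a view radius and a field-of-view cone centred on the indicator's heading; the function name and parameter values below are illustrative assumptions.

```python
import math

def feature_visible(agent_xy, agent_heading_deg, feature_xy,
                    view_radius=10.0, fov_deg=90.0):
    """Decide whether to render an environment feature, limiting visual
    information by proximity (within view_radius) and directionality
    (within a field-of-view cone centred on the indicator's heading)."""
    dx = feature_xy[0] - agent_xy[0]
    dy = feature_xy[1] - agent_xy[1]
    if math.hypot(dx, dy) > view_radius:
        return False
    bearing = math.degrees(math.atan2(dx, dy)) % 360.0
    # Smallest signed angle between the bearing to the feature and the heading.
    offset = abs((bearing - agent_heading_deg + 180.0) % 360.0 - 180.0)
    return offset <= fov_deg / 2.0
```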
[0014] In yet another aspect, an apparatus for generating an assessment of one or more cognitive skills in an individual includes a user interface, a memory to store processor-executable instructions, and one or more processing units communicatively coupled to the user interface and the memory. Upon execution of the processor-executable instructions by the one or more processing units, the one or more processing units are configured to present via the user interface a first task that requires the individual to navigate in an environment. The first task includes an exploration phase in which the one or more processing units are configured to provide to the individual control of a first indicator to navigate in at least a portion of the environment from an initial point either (i) along a specified route or (ii) without restriction in a free-exploration phase. In the exploration phase, the environment includes a specified location, a specified landmark, and/or a specified object. The first indicator is configured to navigate in the environment based on physical actions of the individual to control one of (i) a relative direction of the first indicator, (ii) a speed of movement of the first indicator, or (iii) both (i) and (ii), to perform the first task. The one or more processing units are configured to configure the user interface to display instructions to the individual to perform a second task, the second task requiring navigation to the specified location, the specified landmark feature, and/or the specified object. The one or more processing units are configured to present via the user interface a second indicator configured to navigate in the environment based on physical actions of the individual to control one of (i) a relative direction of the second indicator, (ii) a speed of movement of the second indicator, or (iii) both (i) and (ii), to navigate to the specified location, the specified landmark feature, or the specified object based on the instructions. The specified location, the specified landmark, or the specified object are not displayed to the individual during performance of the second task. Measurement data is obtained by measuring data indicative of the physical actions of the individual in performing the second task. The measurement data is analyzed to generate a performance metric for the performance of the second task, the performance metric providing an indication of the cognitive ability of the individual.
[0015] In another aspect, an apparatus for generating an assessment of one or more cognitive skills in an individual includes a user interface, a memory to store processor-executable instructions, and one or more processing units communicatively coupled to the user interface and the memory. Upon execution of the processor-executable instructions by the one or more processing units, the one or more processing units are configured to present via the user interface a first task that requires the individual to navigate in an environment, a first portion of the first task including an exploration phase in which the one or more processing units are configured to provide to the individual control of a first indicator to navigate in at least a portion of the environment from an initial point either (i) along a specified route or (ii) without restriction in a free-exploration phase. In the exploration phase, the environment includes a specified location, a specified landmark, and/or a specified object. The first indicator is configured to navigate in the environment based on physical actions of the individual to control one of (i) a relative direction of the first indicator, (ii) a speed of movement of the first indicator, or (iii) both (i) and (ii), to perform the first portion of the first task. The one or more processing units are configured to configure the user interface to display instructions to the individual to perform a second portion of the first task requiring navigation to the specified location, the specified landmark feature, and/or the specified object.
Measurement data is obtained by measuring data indicative of the physical actions of the individual in performing the second portion of the first task. The measurement data is analyzed to generate a performance metric for the performance of the first task, the performance metric providing an indication of the cognitive ability of the individual.
[0016] One or more of the following features may be included in the various aspects. The one or more processing units may be configured such that the specified location, the specified landmark, or the specified object are not displayed to the individual during performance of the second portion of the first task. The one or more processing units may be further configured to generate a scoring output indicative of at least one of (i) a likelihood of onset of a neurodegenerative condition of the individual, or (ii) a stage of progression of the neurodegenerative condition, based at least in part on the analyses of the measurement data. The one or more processing units may be further configured to adjust a difficulty level of the second task based at least in part on the analysis of the measurement data. The measurement data may include measures of one or more parameters indicative of a navigation strategy, the one or more parameters including at least one of a measure of the individual's judgment about relative spatial positions between two points as determined based on distances relative to other objects in the environment, a measure of the individual's ability to plot a novel course through a portion of the environment that was previously known, or a measure of the individual's ability to spatially transform three or more memorized positions in the environment arranged to cover two or more dimensions.
[0017] The neurodegenerative condition may be Alzheimer's disease, dementia, Parkinson's disease, Huntington's disease, Cushing's disease, or schizophrenia.
[0018] Generating the performance metric may further include computing one or more of a measure of accuracy in a subsequent navigation of the specified route, a measure of the degree to which the individual uses spatial memory rather than visual cues for the relative orientation to the initial point or to a different specified location in the environment, or a measure of a strategy implemented to explore the environment in a free-exploration phase.
[0019] The measurement data may include measures of one or more parameters indicative of a navigation strategy, the one or more parameters being measured as a function of time.
[0020] The second indicator may include a virtual joystick.
[0021] The virtual joystick may be controllable to provide one or more of an indication of a user's "head-orientation" in the environment, an intended direction of movement of the first indicator or the second indicator, or to provide a virtual indication of "looking around" to observe features in the environment.
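A minimal sketch of how a virtual-joystick displacement might be mapped to a head orientation and a movement speed; the dead-zone behaviour (re-orienting to "look around" without translating) and all parameter values are illustrative assumptions.

```python
import math

def joystick_to_motion(dx, dy, max_speed=1.0, dead_zone=0.1):
    """Map a virtual-joystick displacement (dx, dy), each normalized to
    [-1, 1], to a compass-style heading in degrees and a movement speed.
    Small displacements inside the dead zone update the heading ("look
    around") without producing movement of the indicator."""
    magnitude = min(math.hypot(dx, dy), 1.0)
    heading_deg = math.degrees(math.atan2(dx, dy)) % 360.0
    if magnitude < dead_zone:
        return heading_deg, 0.0        # orientation only, no translation
    return heading_deg, max_speed * magnitude

print(joystick_to_motion(0.0, 1.0))    # pushing straight up -> (0.0, 1.0)
```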
[0022] The one or more processing units may be further configured to apply a first predictive model to data indicative of the cognitive ability in the individual to classify the individual according to a level of expression of one or more of a beta amyloid, a cystatin, an alpha-synuclein, a huntingtin protein, or a tau protein.
[0023] The first predictive model may be trained using a plurality of training datasets, each training dataset corresponding to a previously classified individual of a plurality of individuals, and each training dataset including data representing an indication of a cognitive ability of the classified individual and data indicative of a diagnosis of a status or progression of a neurodegenerative condition in the classified individual.
[0024] The first predictive model may serve as an intelligent proxy for subsequent measures of the neurodegenerative condition of the individual.
[0025] The first predictive model may include a linear/logistic regression, principal component analysis, a generalized linear mixed model, a random decision forest, a support vector machine, and/or an artificial neural network.
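As a sketch of how one of the named model families might be trained, the following uses scikit-learn's logistic regression on synthetic placeholder data standing in for the training datasets described above; the feature columns, labels, and dataset are invented for illustration and are not from the disclosure.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Illustrative feature columns: route-reversal metric, angular error, path efficiency.
X = rng.normal(size=(200, 3))
y = rng.integers(0, 2, size=200)   # 1 = previously diagnosed condition (placeholder labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
# For a new individual, model.predict_proba(new_metrics) would then act as
# the proxy measure of the neurodegenerative condition described above.
```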
[0026] The measurement data may include measures of one or more parameters indicative of a navigation strategy, the one or more parameters including at least one of a measure of a navigation speed relative to the environment, an orientation relative to the environment, a velocity relative to the environment, a choice of navigation strategy, a measure of a wait or delay period or a period of inaction during navigation, a time interval to complete a course, or a degree of optimization of a navigation path through a course.
[0027] The measurement data may include measures of one or more parameters indicative of a navigation strategy, the one or more parameters including at least one of a direction of the individual's movement relative to the environment, a speed of the individual's movement relative to the environment, a measure of the individual's memory of landmarks, a measure of the individual's memory of turn-by-turn directions, or a frequency or number of times of referral to an aerial or elevated view of the environment.
[0028] The environment may include one or more passageways, one or more obstacles disposed at specified portions of the one or more passageways, and/or one or more walls having dimensions. The one or more passageways, obstacles, and dimensions may include dimensional constraints, such that a width (a1) of each of the one or more obstacles is greater than or about equal to a width (a2) of each of the one or more passageways, and the width (a1) is smaller than a length (a3) of each of the one or more walls of the environment. The width a1 may be about twice width a2, and width a1 may be about one-fourth to one-fifth of length a3.
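These dimensional constraints can be checked mechanically when generating an environment; a small sketch follows, with an illustrative tolerance since the disclosure only specifies "about".

```python
def environment_dimensions_ok(a1, a2, a3, tol=0.25):
    """Check the constraints described above: obstacle width a1 is greater
    than or about equal to passageway width a2, roughly twice a2, and about
    one-fourth to one-fifth of wall length a3. The tolerance is illustrative."""
    roughly_twice = abs(a1 - 2.0 * a2) <= tol * (2.0 * a2)
    in_wall_ratio = (a3 / 5.0) * (1.0 - tol) <= a1 <= (a3 / 4.0) * (1.0 + tol)
    return a1 >= a2 * (1.0 - tol) and roughly_twice and in_wall_ratio

print(environment_dimensions_ok(a1=2.0, a2=1.0, a3=9.0))   # True: 2 = 2*1 and 9/5 <= 2 <= 9/4
```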
[0029] One or more processing units may be configured to present navigation as a first-person perspective or as a third-person perspective. One or more processing units may be further configured to (i) adjust a difficulty of the second task to a second difficulty level; (ii) present a second instance of the second task at the second difficulty level; (iii) obtain a second set of measurement data by measuring data indicative of the physical actions of the individual in performing the second instance of the second task; and (iv) analyze the second set of measurement data to generate a second performance metric indicative of a change of the cognitive ability of the individual. The second difficulty level may be an increase in the difficulty or a decrease of the difficulty. The one or more processing units may be further configured to provide a measure of an enhancement of the cognitive ability of the individual based at least in part on the second performance metric.
[0030] The apparatus may be configured as at least one of a smartphone, a tablet, a slate, an electronic-reader (e-reader), a digital assistant, a portable computing device, a wearable computing device, or a gaming device.
[0031] The details of one or more of the above aspects and implementations are set forth in the accompanying drawings and the description below. Other features, aspects, and advantages will become apparent from the description, the drawings, and the claims.
BRIEF DESCRIPTION OF DRAWINGS
[0032] The skilled artisan will understand that the figures, described herein, are for illustration purposes only. It is to be understood that in some instances various aspects of the described implementations may be shown exaggerated or enlarged to facilitate an understanding of the described implementations. In the drawings, like reference characters generally refer to like features, functionally similar and/or structurally similar elements throughout the various drawings. The drawings are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the teachings. The drawings are not intended to limit the scope of the present teachings in any way. The system and method may be better understood from the following illustrative description with reference to the following drawings in which:
[0033] FIGs. 1A - 1D show non-limiting examples of computerized renderings of courses that present navigation tasks, according to the principles herein.
[0034] FIGs. 2A - 2C show a computerized rendering of an entrance to an environment of a non-limiting example navigation task, according to the principles herein.
[0035] FIGs. 3A - 3U show views of portions of a computerized rendering of an environment of a non-limiting example navigation task, according to the principles herein.
[0036] FIGs. 4A - 4C show a computerized rendering of navigation to an exit from an environment of a non-limiting example navigation task, according to the principles herein.
[0037] FIGs. 5A - 5J show views of portions of a computerized rendering of an environment of a non-limiting example navigation task, according to the principles herein.
[0038] FIGs. 6A - 6E show views of portions of a computerized rendering of an environment of a non-limiting example navigation task, according to the principles herein.
[0039] FIGs. 7A - 7F show views of portions of a computerized rendering of an environment of a non-limiting example navigation task, according to the principles herein.
[0040] FIGs. 8A - 8H show views of portions of a computerized rendering of an environment of a non-limiting example navigation task, according to the principles herein.
[0041] FIGs. 9A - 9H show views of portions of a computerized rendering of an environment of a non-limiting example navigation task, according to the principles herein.
[0042] FIG. 10 shows a non-limiting example of a graphical user interface rendered to a user, according to the principles herein.
[0043] FIG. 11 shows an example apparatus according to the principles herein that can be used to implement the cognitive platform described herein.
[0044] FIG. 12 is a block diagram of an example computing device that can be used as a computing component according to the principles herein.
[0045] FIGs. 13A - 13F show flowcharts of non-limiting example methods that can be implemented using a cognitive platform or platform product that includes at least one processing unit, according to principles herein.
[0046] FIG. 14A shows a non-limiting example system, method, and apparatus according to the principles herein, where the platform product (including using an APP) is configured as a cognitive platform that is separate from, but configured for coupling with, one or more of the physiological components.
[0047] FIG. 14B shows another non-limiting example system, method, and apparatus according to the principles herein, where the platform product (including using an APP) is configured as an integrated device, where the cognitive platform is integrated with one or more of the physiological components.
[0048] FIG. 15 shows a non-limiting example implementation where the platform product (including using an APP) is configured as a cognitive platform that is configured for coupling with a physiological component, according to principles herein.
DETAILED DESCRIPTION
[0049] It should be appreciated that all combinations of the concepts discussed in greater detail below (provided such concepts are not mutually inconsistent) are contemplated as being part of the inventive subject matter disclosed herein. It also should be appreciated that terminology explicitly employed herein that also may appear in any disclosure incorporated by reference should be accorded a meaning most consistent with the particular concepts disclosed herein.
[0050] Following below are more detailed descriptions of various concepts related to, and embodiments of, inventive methods, apparatus and systems comprising a cognitive platform configured for implementing one or more navigation task(s). The cognitive platform also can be configured for coupling with one or more other types of measurement components, and for analyzing data indicative of at least one measurement of the one or more other types of components. As non-limiting examples, the cognitive platform can be configured for cognitive training and/or for clinical purposes. According to the principles herein, the cognitive platform may be integrated with one or more physiological or monitoring components and/or cognitive testing components.
[0051] It should be appreciated that various concepts introduced above and discussed in greater detail below may be implemented in any of numerous ways, as the disclosed concepts are not limited to any particular manner of implementation. Examples of specific implementations and applications are provided primarily for illustrative purposes.
[0052] As used herein, the term "includes" means "includes but is not limited to"; the term "including" means "including but not limited to." The term "based on" means "based at least in part on."
[0053] The example platform products and cognitive platforms according to the principles described herein can be applicable to many different types of conditions, such as but not limited to depression, attention deficit hyperactivity disorder (ADHD), Alzheimer's disease, dementia, Parkinson's disease, Huntington's disease, Cushing's disease, schizophrenia, or other cognitive condition.
[0054] The ability of an individual to navigate from an initial point to a desired location in a real or virtual environment (such as but not limited to a virtual town or small maze), including the ability to formulate and/or execute a strategy to find the way from the initial point to the goal location, can depend at least in part on use of two different areas of the brain. These areas are the caudate nucleus region of the brain and the entorhinal cortex and hippocampal regions of the brain. See, e.g., Hafting et al., "Microstructure of a spatial map in the entorhinal cortex," Nature, vol. 436, issue 7052, pp. 801-806 (2005); Bohbot et al., "Gray matter differences correlate with spontaneous strategies in a human virtual navigation task," Journal of Neuroscience, vol. 27, issue 38, pp. 10078-10083 (2007).
[0055] In an example where an individual performs a navigation task that activates the caudate nucleus region of the brain, the individual is learning a rigid set of stimulus-response type associations, referred to as dependent stimulus-response navigation strategies. A non-limiting example of a dependent stimulus-response navigation strategy is "see the tree and turn right."
[0056] In an example where an individual performs a navigation task by learning the spatial relationship between the landmarks in an environment, the individual is relying on hippocampal-dependent spatial navigation strategies, via activating the hippocampal region of the brain. An individual relying on the entorhinal cortex region of the brain for navigation forms a directionally-oriented, topographically organized neural map of the spatial environment, which includes translational and directional information. That map is anchored to external landmarks, but can persist in the absence of those external landmarks. The contextual specificity of hippocampal representations suggests that during encoding, the hippocampus associates output from a generalized, path-integration-based coordinate system with landmarks or other features specific to a particular environment. Through back projections to the superficial layers of the entorhinal cortex, associations stored in the hippocampus may reset the path integrator as errors accumulate during exploration of an environment. Anchoring the output of the path integrator to external reference points stored in the hippocampus or other cortical areas of the brain may enable alignment of entorhinal maps from one trial to the next, even when the points of departure are different.
[0057] An individual may navigate through a given environment using an allocentric form of navigation and/or an egocentric form of navigation. In implementing a given type of navigation strategy, an individual uses differing portions of the brain.
[0058] As used herein, "allocentric" refers to a form of navigation where an individual identifies places in the environment independent of the individual's perspective (or direction) and ongoing behavior. In allocentric navigation, an individual centers their attention and actions on other items in the environment rather than their own perspective. Parameters that can be measured to indicate allocentric navigation include measures of an individual's judgment about the directional orientation and/or horizontal distance between two points (e.g., their relative spatial position as measured based on distances relative to other objects in the environment), an individual's ability to plot a novel course through a previously traversed (and therefore known) environment (i.e., a course that differs in at least one parameter from a previous course through the environment), and an individual's ability to spatially transform (e.g., rotate, translate, or scale) three or more memorized positions in an environment arranged to cover two or more dimensions.
[0059] Areas of the brain such as the entorhinal cortex and hippocampus are used for allocentric navigation. The allocentric navigation can involve spatial grid navigation and formulation of a memory of how various places are located on the spatial grid and relative to each other. The hippocampus is implicated in both spatial memory and navigation. The medial entorhinal cortex contributes to spatial information processing.
[0060] As used herein, "egocentric" refers to a form of navigation where points in the environment are defined in terms of their distance and direction from the individual.
Parameters that can be measured to indicate egocentric navigation include the direction and speed of the individual's movements relative to the environment. In an egocentric navigation system, positions in the environment are defined relative to the individual, such that movement of the individual is accompanied by an updating of the individual's perspective representation of a given point.
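The updating of egocentric representations described here amounts to a coordinate transform between the individual's frame and the fixed environment frame. A minimal sketch follows, using a standard 2-D rotation with compass-style headings; the conventions and names are illustrative.

```python
import math

def egocentric_to_allocentric(agent_xy, agent_heading_deg, ego_point):
    """Convert a point given egocentrically (distance ahead of and distance
    to the right of the individual) into fixed environment coordinates,
    using the individual's position and heading (0 = +y axis, clockwise)."""
    ahead, right = ego_point
    theta = math.radians(agent_heading_deg)
    x = agent_xy[0] + ahead * math.sin(theta) + right * math.cos(theta)
    y = agent_xy[1] + ahead * math.cos(theta) - right * math.sin(theta)
    return (x, y)

# Facing east (90 degrees), one unit straight ahead lies at approximately (1.0, 0.0).
print(egocentric_to_allocentric((0.0, 0.0), 90.0, (1.0, 0.0)))
```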
[0061] Areas of the brain such as the caudate nucleus are used in egocentric navigation. The egocentric navigation can involve memory of landmarks and turn-by-turn directions. The caudate nucleus is implicated in motor and spatial functions.
[0062] Measures of the relative strength of each area of the brain can inform the cognitive condition of an individual. According to the principles herein, analysis of data indicative of these measurement parameters can be used to detect the very early signs of conditions such as but not limited to Alzheimer's disease.
[0063] In an example, a system, method, and apparatus can be configured to generate a scoring output as an indication of a relative health or strength of the caudate nucleus region of the brain of the individual relative to the entorhinal cortex and hippocampal regions of the brain of the individual. The scoring output can be computed based on the analysis of the data collected from measurements as an individual performs a spatial navigation task.
[0064] In an example, a system, method, and apparatus can be configured to generate a scoring output as an indication of a cognitive ability of the individual, based on spatial memory capabilities of the individual that indicate a relative health or strength of the caudate nucleus region, the entorhinal cortex, and the hippocampal regions of the brain of the individual. The scoring output can be computed based on the analysis of the data collected from measurements as an individual performs physical actions to effect a spatial navigation task involving way-finding, path-finding, path-plotting, route-learning, and/or path integration (dead reckoning).
[0065] In an example, a system, method, and apparatus can be configured to generate a scoring output as an indication of a likelihood of onset of a neurodegenerative condition of the individual, or a stage of progression of the neurodegenerative condition, based at least in part on the analysis of at least one set of data (such as but not limited to a first set of data and a second set of data) collected from measurements as an individual performs a navigation task involving way-finding, path-finding, path-plotting, route-learning, and/or path integration (dead reckoning).
[0066] The example system, method, and apparatus can be configured to transmit the scoring output to the individual and/or display the scoring output on a user interface.
[0067] For example, the early stages of Alzheimer's disease (AD) can manifest with memory loss and spatial disorientation. The hippocampus is one of the early regions of the brain to suffer damage resulting in the memory loss and spatial disorientation symptoms. Kunz et al., Science, vol. 350, issue 6259, p. 430 (2015), also proposed that Alzheimer's disease pathology starts in the entorhinal cortex, with the disease likely impairing local neural correlates of spatial navigation such as grid cells. Analysis of measurement data indicative of the individual's performance at navigation tasks, such as data indicative of the type of navigation and/or the degree of success at the navigation task, can provide an indication of the relative strength of the hippocampus and entorhinal cortex. For example, the analysis of data indicative of the individual's performance of the navigation tasks can be used to provide a measure of entorhinal and/or hippocampal dysfunction in individuals, thereby providing a measure of the likelihood of onset of Alzheimer's disease and/or the degree of progression of the disease.
[0068] As non-limiting examples, Alzheimer's disease, Parkinson's disease, vascular dementia, and mild cognitive impairment potentially have a greater effect on the hippocampal and entorhinal regions of the brain.
[0069] As non-limiting examples, attention deficit hyperactivity disorder, Huntington's disease, obsessive-compulsive disorder, and depression (major depressive disorder) potentially have a greater effect on the caudate nucleus region of the brain.
[0070] Example systems, methods, and apparatus herein can be implemented to collect data indicative of measures of the areas of the brain implicated in the differing types of navigation tasks. Data indicative of the individual's performance based on the type of navigation (i.e.,
allocentric navigation vs egocentric navigation) and/or the degree of success at navigation can be used to provide an indication of the relative strength of each area of the brain of the individual.
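As a non-limiting illustration of how such a relative-strength indication might be computed, the following sketch (in Python) combines normalized performance measures from allocentric tasks (linked to hippocampal/entorhinal function) and egocentric tasks (linked to caudate function) into a single comparative score. The normalization and differencing here are illustrative assumptions, not a prescribed scoring formula.

```python
def relative_strength_score(allocentric_scores, egocentric_scores):
    """Illustrative sketch: compare normalized performance on allocentric
    tasks (hippocampal/entorhinal-linked) against egocentric tasks
    (caudate-linked). The normalization and differencing are assumptions."""
    def normalized_mean(scores):
        # Map raw task scores onto [0, 1] relative to the observed range,
        # then average; a degenerate range maps to a neutral 0.5.
        lo, hi = min(scores), max(scores)
        if hi == lo:
            return 0.5
        return sum((s - lo) / (hi - lo) for s in scores) / len(scores)

    # Positive values suggest relatively stronger allocentric performance;
    # negative values suggest relatively stronger egocentric performance.
    return normalized_mean(allocentric_scores) - normalized_mean(egocentric_scores)
```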
[0071] In an example task where an individual may implement an allocentric navigation strategy, the individual is relying more on the activation of the hippocampal and the entorhinal cortex regions of the brain (needing the context of one or more features to guide navigation strategy). In an example, the individual's performance on a task requiring allocentric navigation skills could be an indicator of the level of activation of the hippocampal and/or the entorhinal cortex regions of the brain, such that poorer values of performance measure(s) could indicate poorer activation of the hippocampal and/or the entorhinal cortex regions of the brain. For example, the entorhinal cortex region of the brain can become more efficient once a navigation strategy is processed by the hippocampal region.
[0072] In an example task where an individual may implement an egocentric navigation strategy, the individual is relying more on the activation of the caudate nucleus region of the brain (navigation learning strategy based on using self as the point of reference). In an example where an individual's performance of the task requiring egocentric navigation is relatively poor, this could indicate that the individual takes fewer cues from the environment. Where the individual is less able to take cues from the environment, the individual cannot use this mechanism to learn. The individual's performance on a task requiring egocentric navigation skills could be an indicator of the level of activation of the caudate nucleus region of the brain, such that poorer values of performance measure(s) could indicate poorer activation of the caudate nucleus region of the brain.
[0073] Example systems, methods, and apparatus herein can be implemented to generate an assessment of one or more cognitive skills in an individual. An example system or apparatus for implementing the method can include a user interface, a memory to store processor-executable instructions, and one or more processing units communicatively coupled to the user interface and the memory, wherein upon execution of the processor-executable instructions by the one or more processing units, the example system or apparatus executes the method. An example method includes using the programmed one or more processing units to render a first task that requires navigation of a specified route through an environment, render a first indicator configured to navigate the specified route from an initial point in the environment to a target end-point with or without input from the individual, configure the user interface to
display instructions to the individual to perform a second task, the second task requiring the individual either: (i) to navigate a reverse of at least a portion of the specified route, or (ii) to navigate at least a portion of the specified route at least one additional time, and render a second indicator configured to navigate in the environment in response to physical actions of the individual to control one of (i) a relative direction of the second indicator, or (ii) a speed of movement of the second indicator, or (iii) both (i) and (ii), to perform the second task. The example method further includes measuring data indicative of the physical actions of the individual to control the second indicator in performing the second task, and analyzing the measurement data to generate a performance metric for the performance of the second task, the performance metric providing an indication of the cognitive ability of the individual.
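As a non-limiting sketch of how the data indicative of the individual's physical actions might be captured for later analysis, the following Python fragment timestamps each control input during the second task; the event fields and naming are hypothetical.

```python
import time
from dataclasses import dataclass, field

@dataclass
class ControlEvent:
    t: float        # seconds since the task started
    action: str     # e.g., "turn_left", "turn_right", "forward"
    value: float    # e.g., a turn angle in degrees, or a speed setting

@dataclass
class TaskRecording:
    start: float = field(default_factory=time.monotonic)
    events: list = field(default_factory=list)

    def record(self, action, value=0.0):
        # Timestamp each control input relative to the start of the task,
        # so performance metrics can be computed from the event stream.
        self.events.append(ControlEvent(time.monotonic() - self.start, action, value))

# Hypothetical usage during the second task:
# rec = TaskRecording(); rec.record("turn_left", 90.0); rec.record("forward", 1.0)
```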
[0074] In any example herein, the navigation indicator presented via the user interface for navigating in the computerized environment can be rendered and displayed to the individual via a visual representation as a first-person view or as a third-person view. In a non-limiting example of a first-person view, the user interface is configured such that the views presented during navigation mimic the "eye-level" view of the environment. In a non-limiting example of a third-person view, the user interface is configured such that the views presented during navigation mimic a view of the environment from "behind", "to the side of", or "over the shoulder", e.g., of an element on the user interface such as but not limited to an avatar or other object.
[0075] In an example, the navigation indicator can be presented via the user interface as a single element or as two or more elements in the environment. Where the navigation indicator is presented as a single element, it can be displayed as an avatar or other guidable element described herein. Where the navigation indicator is presented as two or more elements, it can be displayed as a first avatar or other guidable element that indicates a direction of relative movement and a second avatar or other guidable element that indicates an intended direction of movement. In another example, the navigation indicator can be presented to the user via a visual representation of relative progression along a path in the environment.
[0076] Example systems, methods, and apparatus herein can be implemented to generate an assessment of one or more cognitive skills in an individual. An example system or apparatus for implementing the method can include a user interface, a memory to store processor-executable instructions, and one or more processing units communicatively coupled to the user interface and the memory, wherein upon execution of the processor-executable instructions by
the one or more processing units, the example system or apparatus executes the method. An example method includes using the programmed one or more processing units to render a first task that requires navigation of a course that includes at least one turn of a discrete angular amount in an environment, render a first indicator configured to navigate in the environment in response to physical actions of the individual to control the first indicator from an initial point of the course to a target end-point, and configure the user interface to display instructions to the individual to perform a second task, the second task requiring the individual to control a second indicator to indicate a relative orientation of the initial point or a different specified location in the environment relative to the target end-point. The example method further includes measuring data indicative of the relative orientation indicated using the second indicator, and analyzing the measurement data to generate a performance metric for the performance of the second task, the performance metric providing an indication of the cognitive ability of the individual.
[0077] Example systems, methods, and apparatus herein can be implemented to generate an assessment of one or more cognitive skills in an individual. An example system or apparatus for implementing the method can include a user interface, a memory to store processor-executable instructions, and one or more processing units communicatively coupled to the user interface and the memory, wherein upon execution of the processor-executable instructions by the one or more processing units, the example system or apparatus executes the method. An example method includes using the programmed one or more processing units to render a first task that requires the individual to navigate in an environment. The first task includes an exploration phase in which the one or more processing units are configured to provide to the individual control of a first indicator to navigate in at least a portion of the environment from an initial point either (i) along a specified route or (ii) without restriction in a free-exploration phase. In the exploration phase, the environment includes one or more of a specified location, a specified landmark, or a specified object. The first indicator is configured to navigate in the environment based on physical actions of the individual to control one of (i) a relative direction of the first indicator, (ii) a speed of movement of the first indicator, or (iii) both (i) and (ii), to perform the first task. The example method further includes using the programmed one or more processing units to configure the user interface to display instructions to the individual to perform a second task, the second task requiring navigation to one or more of the specified location, the specified landmark feature, or the specified object, and render a second indicator
configured to navigate in the environment based on physical actions of the individual to control one of (i) a relative direction of the second indicator, (ii) a speed of movement of the second indicator, or (iii) both (i) and (ii), to navigate to the specified location, the specified landmark feature, or the specified object based on the instructions. The specified location, the specified landmark, or the specified object are not displayed to the individual during performance of the second task. The example method further includes measuring data indicative of the physical actions of the individual in performing the second task, and analyzing the measurement data to generate a performance metric for the performance of the second task, the performance metric providing an indication of the cognitive ability of the individual.
[0078] Example systems, methods, and apparatus herein can be implemented to generate an assessment of one or more cognitive skills in an individual. An example system or apparatus for implementing the method can include a user interface, a memory to store processor-executable instructions, and one or more processing units communicatively coupled to the user interface and the memory, wherein upon execution of the processor-executable instructions by the one or more processing units, the example system or apparatus executes the method. An example method includes using the programmed one or more processing units to render a first task that requires the individual to navigate in an environment. The first task includes a first portion that is an exploration phase in which the one or more processing units are configured to provide to the individual control of a first indicator to navigate in at least a portion of the environment from an initial point either (i) along a specified route or (ii) without restriction in a free-exploration phase. In the exploration phase, the environment comprises one or more of a specified location, a specified landmark, or a specified object. The first indicator is configured to navigate in the environment based on physical actions of the individual to control one of (i) a relative direction of the first indicator, (ii) a speed of movement of the first indicator, or (iii) both (i) and (ii), to perform the first portion of the first task. The example method further includes using the programmed one or more processing units to configure the user interface to display instructions to the individual to perform a second portion of the first task requiring navigation to one or more of the specified location, the specified landmark feature, or the specified object, measure data indicative of the physical actions of the individual in performing the second portion of the first task, and analyze the measurement data to generate a
performance metric for the performance of the first task, the performance metric providing an indication of the cognitive ability of the individual.
[0079] In an example, the one or more processing units can be configured such that the specified location, the specified landmark, or the specified object are not displayed to the individual during performance of the second portion of the first task.
[0080] As non-limiting examples, "navigation" refers to way-finding, path-plotting, route-learning, path integration (such as but not limited to dead-reckoning), seek or search and recovery, direction-giving, or other similar types of tasks.
[0081] The instant disclosure is directed to computer-implemented devices formed as example platform products configured to implement software and/or other processor-executable instructions for the purpose of measuring data indicative of a user's performance at one or more navigation tasks, to provide a user performance metric. As non-limiting examples,
performance metrics can include data indicative of an individual's navigation speed, orientation, velocity, choice of navigation strategy, wait or delay period, or other period of inaction, prior to continuing in a given direction of a course or changing direction, time interval to complete a course, frequency or number of times of referral to an aerial or elevated view of a landscape (including as a map), a measure of accuracy in recreating a previously learned route, a measure of accuracy in using spatial memory rather than visual cues to orient oneself relative to (including to point back to) a specific location in space (such as but not limited to the beginning of the current navigation route), and/or a measure of the strategies employed in exploring and learning a novel environment. In any example herein, the measure can include values of any of these parameters as a function of time. As another non-limiting example, the performance metrics can include a measure of the degree of optimization of the path navigated by the individual through the course, such as determining the shortest path or near-shortest path through the course.
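One non-limiting way a degree-of-optimization metric of this kind could be computed is sketched below: the environment is approximated as an occupancy grid, a shortest path is found with breadth-first search, and the metric is the ratio of the shortest path length to the length of the path the individual actually took. The grid representation and function names are assumptions made for illustration.

```python
from collections import deque

def shortest_path_len(grid, start, goal):
    """Breadth-first search over a 2D occupancy grid (0 = open, 1 = obstacle).
    Returns the number of steps on a shortest path, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        (r, c), d = queue.popleft()
        if (r, c) == goal:
            return d
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
                    and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append(((nr, nc), d + 1))
    return None

def path_optimality(grid, start, goal, user_path_len):
    # 1.0 means the user took a shortest path; smaller values indicate a
    # proportionally longer (less optimal) route through the course.
    best = shortest_path_len(grid, start, goal)
    if best is None or not user_path_len:
        return None
    return best / user_path_len
```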
[0082] The example performance metric can be used to derive an assessment of a user's cognitive abilities and/or to measure a user's response to a cognitive treatment, and/or to provide data or other quantitative indicia of a user's condition (including cognitive condition). In non-limiting examples, the performance metric can be used to derive measures of the relative strength of each area of the brain. Non-limiting example cognitive platforms or platform products according to the principles herein can be configured to classify an individual as to relative health or strength of regions of the brain such as but not limited to the caudate nucleus region of the brain and the entorhinal cortex and hippocampal regions of the brain, and/or potential efficacy of use of the cognitive platform or platform product when the
individual is administered a drug, biologic or other pharmaceutical agent, based on the data collected from the individual's interaction with the cognitive platform or platform product and/or metrics computed based on the analysis (and associated computations) of that data. Yet other non-limiting example cognitive platforms or platform products according to the principles herein can be configured to classify an individual as to likelihood of onset and/or stage of progression of a cognitive condition, based on the data collected from the individual's interaction with the cognitive platform or platform product and/or metrics computed based on the analysis (and associated computations) of that data. The cognitive condition can be, but is not limited to, depression, attention deficit hyperactivity disorder (ADHD), Alzheimer's disease, dementia, Parkinson's disease, Huntington's disease, Cushing's disease, or
schizophrenia.
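As a purely illustrative sketch of how such a classification might be derived from computed metrics, the fragment below combines weighted performance metrics into a risk score and bins it against a threshold; the weights, threshold, and metric names are placeholders, not clinically validated values.

```python
def classify_onset_likelihood(metrics, weights, threshold=0.5):
    """Illustrative sketch only: combine performance metrics into a single
    risk score and bin it. Weights and threshold are placeholders, not
    clinically validated values."""
    score = sum(weights.get(name, 0.0) * value for name, value in metrics.items())
    return "elevated_likelihood" if score >= threshold else "baseline_likelihood"

# Hypothetical usage with made-up metric names and weights:
# classify_onset_likelihood(
#     {"path_optimality": 0.6, "pointing_error_deg": 40.0},
#     weights={"path_optimality": -0.4, "pointing_error_deg": 0.01})
```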
[0083] Any classification of an individual as to likelihood of onset and/or stage of progression of a cognitive condition according to the principles herein can be transmitted as a signal to a medical device, healthcare computing system, or other device, and/or to a medical practitioner, a health practitioner, a physical therapist, a behavioral therapist, a sports medicine practitioner, a pharmacist, or other practitioner, to allow formulation of a course of treatment for the individual or to modify an existing course of treatment, including to determine a change in dosage (such as but not limited to an amount, concentration, and/or dose titration) of a drug, biologic or other pharmaceutical agent administered to the individual or to determine an optimal type or combination of drug, biologic or other pharmaceutical agent for the individual.
[0084] In any example herein, the platform product or cognitive platform can be configured as any combination of a medical device platform, a monitoring device platform, a screening device platform, or other device platform.
[0085] The instant disclosure is also directed to example systems that include platform products and cognitive platforms that are configured for coupling with one or more
physiological or monitoring components and/or cognitive testing components. In some examples, the systems include platform products and cognitive platforms that are integrated with the one or more other physiological or monitoring components and/or cognitive testing components. In other examples, the systems include platform products and cognitive platforms that are separately housed from and configured for communicating with the one or more physiological or monitoring components and/or cognitive testing components, to receive data indicative of measurements made using such one or more components.
[0086] As used herein, the term "cData" refers to data collected from measures of an interaction of a user with a computer-implemented device formed as a platform product.
[0087] As used herein, the term "nData" refers to other types of data that can be collected according to the principles herein. Any component used to provide nData is referred to herein as an nData component.
[0088] In any example herein, the cData and/or nData can be collected in real-time.
[0089] In non-limiting examples, the nData can be collected from measurements using one or more physiological or monitoring components and/or cognitive testing components. In any example herein, the one or more physiological components are configured for performing physiological measurements. The physiological measurements provide quantitative measurement data of physiological parameters and/or data that can be used for visualization of physiological structure and/or functions.
[0090] As a non-limiting example, nData can be collected from measurements of types of protein and/or conformation of proteins in the tissue or fluid (including blood) of an individual and/or in tissue or fluid (including blood) collected from the individual. In some examples, the tissue and/or fluid can be in or taken from the individual's brain. In other examples, the measurement of the conformation of the proteins can provide an indication of amyloid formation (e.g., whether the proteins are forming aggregates).
[0091] As a non-limiting example, the nData can be collected from measurements of beta amyloid, cystatin, alpha-synuclein, huntingtin protein, and/or tau proteins. In some examples, the nData can be collected from measurements of other types of proteins that may be implicated in the onset and/or progression of a neurodegenerative condition, such as but not limited to Alzheimer's disease, dementia, Parkinson's disease, Huntington's disease, Cushing's disease, or schizophrenia. For example, tau proteins are deposited first in the entorhinal cortex and then in the hippocampal area of the brain in Alzheimer's disease.
[0092] In a non-limiting example, nData can be a classification or grouping that can be assigned to an individual based on measurement data from the one or more physiological or monitoring components and/or cognitive testing components. For example, an individual can be classified as to amyloid status of amyloid positive (A+) or amyloid negative (A-).
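A minimal sketch of how such an nData classification might be carried alongside the session data is shown below; the record fields are hypothetical and for illustration only.

```python
from dataclasses import dataclass
from enum import Enum

class AmyloidStatus(Enum):
    POSITIVE = "A+"
    NEGATIVE = "A-"

@dataclass
class NDataRecord:
    """Hypothetical container for nData accompanying a session of cData."""
    subject_id: str
    amyloid_status: AmyloidStatus
    age: int
    protein_measures: dict  # e.g., {"tau": ..., "beta_amyloid": ...}
```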
[0093] In some examples, the nData can be an identification of a type of biologic, drug or other pharmaceutical agent administered or to be administered to an individual, and/or data collected from measurements of a level of the biologic, drug or other pharmaceutical agent in the tissue
or fluid (including blood) of an individual, whether the measurement is made in situ or using tissue or fluid (including blood) collected from the individual. Non-limiting examples of a biologic, drug or other pharmaceutical agent applicable to any example described herein include methylphenidate (MPH), scopolamine, donepezil hydrochloride, rivastigmine tartrate, memantine HCl, solanezumab, aducanumab, and crenezumab.
[0094] It is understood that reference to "drug" herein encompasses a drug, a biologic and/or other pharmaceutical agent.
[0095] In other non-limiting examples, nData can include any data that can be used to characterize an individual's status, such as but not limited to age, gender or other similar data.
[0096] In any example herein, the data (including cData and nData) is collected with the individual's consent.
[0097] In any example herein, the one or more physiological components can include any means of measuring physical characteristics of the body and nervous system, including electrical activity, heart rate, blood flow, and oxygenation levels, to provide the nData. This can include camera-based heart rate detection, measurement of galvanic skin response, blood pressure measurement, electroencephalogram, electrocardiogram, magnetic resonance imaging, near-infrared spectroscopy, and/or pupil dilation measures, to provide the nData.
[0098] Other examples of physiological measurements to provide nData include, but are not limited to, the measurement of body temperature, heart or other cardiac-related functioning using an electrocardiograph (ECG), electrical activity using an electroencephalogram (EEG), event-related potentials (ERPs), functional magnetic resonance imaging (fMRI), blood pressure, electrical potential at a portion of the skin, galvanic skin response (GSR), magnetoencephalogram (MEG), eye-tracking device or other optical detection device including processing units programmed to determine degree of pupillary dilation, functional near-infrared spectroscopy (fNIRS), and/or a positron emission tomography (PET) scanner. An EEG-fMRI or MEG-fMRI measurement allows for simultaneous acquisition of electrophysiology (EEG/MEG) nData and hemodynamic (fMRI) nData.
[0099] In any example herein, the cognitive platform and systems including the cognitive platform can be configured to present computerized navigation tasks and platform interactions that inform cognitive assessment (including screening or monitoring) or deliver a treatment.
[00100] Example systems, methods, and apparatus herein can be implemented to render at least a portion of the environment with limited visual information based on proximity
and/or directionality relative to the in-environment representation of the individual. For example, the individual may be presented with an overhead view (a substantially allocentric view) of at least a portion of the environment prior to performing the testing task. At least a portion of the environment may be obscured from visibility in this overhead view. In another example, the individual may be presented with a perspective view that is closer to the level of features or contents of the environment prior to performing the testing task.
[00101] Example systems, methods, and apparatus herein can be implemented to render an environment and configure an exploration phase in the environment. The exploration phase can be a guided course through a specified route or a free-exploration phase. For the free-exploration phase, the computing system can be configured to issue instructions to the user to explore the environment for a specified period of time, in order to gain some familiarity with the layout of the environment. In each of the guided route and the free-exploration phase, the individual is provided the opportunity to gain some familiarity with the location, lateral and vertical extent and/or relative proportions of obstacles and channels in the environment, and/or the location and type of strategically placed objects of interest in the environment. Depending on the type of environment, the objects of interest can be landmarks (e.g., pizza place, movie theater, statues, etc.) and/or specially shaped objects (e.g., cube, sphere, key, star, cone, etc., or other floating geometric object). The exploration phase also allows a user to become familiar with the type of controls the computer device provides, including the degree and manner of modulation and control of direction, speed, angular orientation, and relative vertical position in the environment.
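A minimal configuration sketch for such an exploration phase is shown below, assuming a simple guided/free mode flag, a time limit, and an optional waypoint route; the parameter names are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class ExplorationPhase:
    """Hypothetical configuration for the exploration phase described above."""
    mode: str                                          # "guided" or "free"
    duration_s: float                                  # time allotted for exploration
    route: Optional[List[Tuple[float, float]]] = None  # waypoints if guided

    def validate(self):
        if self.mode not in ("guided", "free"):
            raise ValueError("mode must be 'guided' or 'free'")
        if self.mode == "guided" and not self.route:
            raise ValueError("guided exploration requires a specified route")
```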
[00102] Example systems, methods, and apparatus herein can be implemented to render an environment and configure a route-learning task in the environment. In an example, the route-learning can be initiated with the computer device rendering guides to a user to travel a specified route through the environment. As non-limiting examples, the guides can be rendered as arrows, lights, lines, a guiding avatar, voice commands or other audio feedback, vibrations (either of the computer device or an attachment thereto), or other visual, audio, or vibratory means. The guides are used to assist a user to navigate from a point of origin (A) to a specified end-point (B) along a specified route. The specified end-point (B) may be a specified location in the environment, a landmark, and/or a specially shaped object. For subsequent testing sessions, the user can be instructed to perform a specified type of route-learning task. In any of the example implementations, the computer device may be configured to render the instructions
to the user as to the type and requirements of the one or more testing tasks at the beginning of the guided portion of the route-learning task and/or at the beginning of one or more of the testing phases of the route-learning task.
[00103] In a first example of a route-learning task, with completion of the guided route, the one or more testing phases of the route-learning task can require the user to backtrack, i.e., to navigate the reverse of the route specified in the guided portion of the route-learning task, to return to the point of origin (A) from end-point (B). In an example implementation, in response to the computer system detecting that the user is using the controls to make a wrong turn and/or move in an incorrect direction in trying to navigate the reverse route, the computer device can be configured to either return the user to the point of origin (A) or to the last successfully-navigated point on the reverse route. In this example, the user's performance may be scored (i.e., quantified) based on the time taken to navigate the reverse route successfully, and/or the number of incorrect turns or moves the user makes. The degree of difficulty of the user in finding the point of origin (A), including whether the user fails to do so, is included as a parameter in the scoring. In another example implementation, in response to the computer system detecting that the user is using the controls to make a wrong turn and/or move in an incorrect direction in trying to navigate the reverse route, the computer device can be configured to provide a guide for a limited time to assist the user in making the correct turn or move in the correct direction. In this example, the user's performance may be scored (i.e., quantified) based on the time taken to navigate the reverse route successfully and/or the number of times a guide was provided to prevent a user from making an incorrect turn or move in the incorrect direction.
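The wrong-turn handling described above could be sketched as follows, with the reverse route stored as a sequence of waypoints and the two described reset behaviors selected by a flag; the data layout is an assumption for illustration.

```python
def check_backtrack_move(reverse_route, progress_idx, attempted_cell,
                         reset_to_origin=True):
    """Sketch of the wrong-turn handling described above. `reverse_route`
    is the expected sequence of cells from end-point B back to origin A;
    `progress_idx` is how far along it the user has correctly progressed."""
    if progress_idx + 1 >= len(reverse_route):
        return progress_idx, "route_complete"
    expected = reverse_route[progress_idx + 1]
    if attempted_cell == expected:
        return progress_idx + 1, None            # correct move: advance
    if reset_to_origin:
        return 0, "reset_to_origin"              # first described option
    return progress_idx, "reset_to_last_point"   # second described option
```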
[00104] In a second example of a route-learning task, with completion of the guided route, the one or more testing phases of the route-learning task can cause the computing device to return the user to the point of origin (A) and require the user to navigate the specified route at least one additional time to the end-point (B), along the route specified in the guided portion. The computing device may be configured to have the user try to navigate the route either entirely without the aid of a guide, or with use of a guide at strategic points where the user is detected to be making a wrong turn or moving in an incorrect direction. In this example, the user's performance may be scored (i.e., quantified) based on the time taken to navigate the route successfully, and/or the number of incorrect turns or moves the user makes, and/or the degree (such as percentage) of deviation of the user-navigated route as compared to
the guided route. The degree of difficulty of the user in finding the end-point (B), including whether the user fails to do so, is included as a parameter in the scoring.
[00105] In an example of a relative-orientation task, a task is rendered that requires navigation of a course that includes at least one turn of a discrete angular amount in an environment from an initial point of the course to a target end-point, and using an indicator to indicate a relative orientation of the initial point or a different specified location in the environment relative to the target end-point. The turn can be one or more left or right turns in the course in discrete angular amounts, such as but not limited to about 30 degrees, about 60 degrees, about 90 degrees, or about 120 degrees. The example method further includes measuring data indicative of the relative orientation indicated using the indicator, and analyzing the measurement data to generate a performance metric for the performance of the task, the performance metric providing an indication of the cognitive ability of the individual.
[00106] In any example herein, the requirement of executing at least one turn of a discrete angular amount in navigating a path in an environment is used to introduce a test of the individual's cognitive abilities at visuospatial memory. Such cognitive abilities can be compromised or diminished as a result of an onset of, or a degree or stage of progression of, a neurodegenerative cognitive condition (including a disease, or a disorder such as but not limited to an executive function disorder). The requirement of the individual to control at least one component of the computing device to execute at least one turn of a discrete angular amount on a navigation path in the computerized environment has the effect of forcing a change in perspective in the environment, thereby limiting the overall visuospatial information available to the individual at any given time. This serves to test the individual's ability to construct a mental representation of the virtual space (computerized environment). With the visuospatial information limited in this way, the individual has less visual information of the environment to draw on for performing the task (such as but not limited to the relative orientation or other head orientation tasks). As a result, the individual has to build more of a mental representation of the environment (virtual space) in order to perform the physical actions to perform the navigation tasks. In having to perform the at least one turn of a discrete angular amount, the individual cannot rely on visual information of the environment that they ordinarily might be able to build if the path they navigate has only linear or substantially nonlinear portions (including circular or curved portions), where the individual is able to build
a more absolute representation. Analysis of measurement data from the individual's performance of a navigation task requiring at least one turn of a discrete angular amount (such as but not limited to a relative orientation/path integration or other head orientation task) can be used to provide an indication of the individual's cognitive abilities, and also a scoring output indicative of at least one of (i) a likelihood of onset of a neurodegenerative condition of the individual, or (ii) a stage of progression of the neurodegenerative condition.
[00107] As a non-limiting example of a relative-orientation task, with completion of a guided route that includes at least one turn of a discrete angular amount, the one or more testing phases can require the user to remain at end-point (B) and to indicate the relative orientation of the point of origin (A) relative to end-point (B). The computing device can be used to render a pointer tool, an avatar, or other means that allows the user to indicate where the user believes the point of origin (A) is relative to the user's position at end-point (B), such as but not limited to by pointing (e.g., using the avatar, pointer tool, or other indicator means) or by drawing a line. In this example, the user's performance may be scored (i.e., quantified) based on the degree (such as percentage) of deviation of the user-indicated orientation of the point of origin (A) as compared to the actual relative orientation. A relative-orientation task may also be referred to as a "path integration" task.
[00108] Example systems, methods, and apparatus herein can be implemented to generate an assessment of one or more cognitive skills in an individual. An example system or apparatus for implementing the method can include a user interface, a memory to store processor-executable instructions, and one or more processing units communicatively coupled to the user interface and the memory, wherein upon execution of the processor-executable instructions by the one or more processing units, the example system or apparatus executes the method. An example method includes using the programmed one or more processing units to render a first task that requires navigation of a course that includes at least one turn of a discrete angular amount in an environment, render a first indicator configured to navigate in the environment in response to physical actions of the individual to control the first indicator from an initial point of the course to a target end-point, and configure the user interface to display instructions to the individual to perform a second task, the second task requiring the individual to control a second indicator to indicate a relative orientation of the initial point or a different specified location in the environment relative to the target end-point. The example method further includes measuring data indicative of the relative orientation indicated using the second
indicator, and analyzing the measurement data to generate a performance metric for the performance of the second task, the performance metric providing an indication of the cognitive ability of the individual.
[00109] Example systems, methods, and apparatus herein can be implemented to render an environment and configure combination tasks in the environment that require a user to draw on their cognitive skills in way-finding, orientation, and route-learning tasks. In an example implementation, the computing device configures the environment for an exploration phase. The exploration phase can be rendered either (i) as a guided exploration of a specified route to allow the individual to learn the route, or (ii) as a free-exploration phase such that the user is allowed to explore the environment freely. The exploration phase can be rendered for a specified period of time, in order for the individual to gain some familiarity with the layout of the environment (including the location, lateral and vertical extent and/or relative proportions of obstacles and channels in the environment) and/or the location and type of strategically placed objects of interest in the environment. In the one or more testing phases, the computing device issues instructions to the user to find a specified location and/or landmark and/or specially-shaped object in the environment and positions the user at either the same location of initial entry or at a different location in the environment. In an example implementation, the exploration phase and the one or more testing phases can be differing portions of a single, uninterrupted task. In this example, the user's performance may be scored (i.e., quantified) based on the time taken to navigate successfully to the specified location and/or landmark and/or specially-shaped object, and/or the number of incorrect turns or moves the user makes, and/or the degree (such as percentage) of deviation of the user-navigated route as compared to a desired route from the entry point to the specified location and/or landmark and/or specially-shaped object. In an example, the desired path can be a determined "best path" or one or more optimal paths (determined using mathematical or algorithmic computational or modeling methods), including the shortest path, from the entry point to the specified location and/or landmark and/or specially-shaped object. In response to completion of a combination task, the user may be returned to the same location of initial entry or at a different location in the environment, and instructed to find the same or a different location and/or landmark and/or specially-shaped object.
[00110] Any of the tasks may be repeated any number of times over multiple sessions of a user interaction with a cognitive platform or platform product.
[00111] In any of the examples herein, one or more of the instructions issued to the user for performing the tasks may be rendered and displayed via a heads-up display (HUD) at a portion of the screen.
[00112] Example systems, methods, and apparatus herein can be implemented to render differing types of control mechanisms for use by the user to navigate through the environments and/or to make the indications (e.g., of relative orientation) as required in a task. For example, the computing device can be configured to render one or more virtual joysticks depending on where a user interacts with (including applies pressure to or makes contact with) display sensors or other type of display device (including a touch screen). The type of movement that the computing device ascribes to each virtual joystick can be dependent on location on the screen and type of the user interaction.
[00113] For example, user contact with the left or right side of the screen (i.e., to the left or right side of a median area) can cause the computing device to render joysticks that control left or right turns (e.g., in discrete angular amounts, such as but not limited to about 30 degrees, about 60 degrees, about 90 degrees, or about 120 degrees) and/or sweeping, continuous virtual gazes, respectively, relative to the environment. In this example, user contact with the median of the screen can cause the computing device to render a joystick that controls forward or backward movement (such as by movement or swiping up or down the screen, respectively) relative to the environment. Such movement can be at a constant speed or the computing device can be configured to allow acceleration or deceleration to change speed (e.g., by changes in type of contact or pressure of contact, or by a button press or other means). The forward or backward movement can be in a continuous manner or in discrete amounts (e.g., to jump to a junction, end of a hallway, etc.).
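The screen-region mapping described above could be sketched as follows; the width of the median band and the control names are illustrative assumptions.

```python
def control_for_touch(x, screen_width, median_frac=0.4):
    """Sketch of the screen-region mapping described above: touches in a
    central band drive forward/backward movement; touches to either side
    drive left/right turns. The band width is an assumption."""
    lo = screen_width * (0.5 - median_frac / 2)
    hi = screen_width * (0.5 + median_frac / 2)
    if x < lo:
        return "left_turn_joystick"    # e.g., discrete turns of about 90 degrees
    if x > hi:
        return "right_turn_joystick"
    return "movement_joystick"         # swipe up/down = forward/backward
```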
[00114] In another example, the computing device can be configured to ascribe movement controls to the virtual joysticks dependent on the relative position and/or type of the display sensors, the user interactions with the display sensors, or other type of display device (including a touch screen). For example, user indication or interaction at a first portion of the display can cause the computing device to control the absolute position of the user indicator in the environment (such as user point of view or avatar), while user indication or interaction at a second portion of the display can cause the computing device to control the gaze (such as user point of view or avatar), as either indicator of a user's "head-orientation" in the environment to indicate an intended direction of movement and/or to "look around" to observe features in the environment.
[00115] In another example, the computing device can be configured, including with use of a camera or other optical sensors, to read gestures of a user as directions for navigating through the environment, including to make left or right turns, to move forward or backwards, and/or to change directions.
[00116] In another example, the computing device can be configured to render controls similar to a steering wheel (e.g., of a car or boat) on the display sensors or other type of display device. User interaction with the steering wheel is used to signal a degree of a turn (including to signal direction of movement), while a touch at another portion of the display sensors or other type of display device (including via virtual joysticks or dedicated "buttons" on the display) is used to set the speed to a fixed value or to modulate the speed of movement by accelerating from a minimal speed or decelerating from higher speeds.
[00117] As another non-limiting example implementation, a computing device may be configured to render one or more virtual joysticks on a display device (including a touch screen), to control direction or velocity of movement within the environment and/or direction of rendered perspective view relative to the position of a user indicator (such as but not limited to an avatar or other guidable element) within the environment.
[00118] As another non-limiting example implementation, a computing device may be configured to render a set of buttons, keys, or touch-sensitive locations on a touch-screen that a user can use to achieve a change of direction or velocity of movement within the environment and/or direction of rendered perspective view relative to the position of the user indicator (such as but not limited to an avatar or other guidable element) within the environment.
[00119] As another non-limiting example implementation, a computing device may be configured with position and/or orientation sensors, such that the computing device can detect user physical action to cause a tilting, rotation, shaking, or translation of the position and/or orientation sensors to achieve a change of direction or velocity of movement within the environment and/or direction of rendered perspective view relative to the position of the user indicator (such as but not limited to an avatar or other guidable element) within the environment.
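As a non-limiting sketch of how orientation-sensor readings might be mapped to movement commands, the fragment below converts device pitch and roll into normalized speed and turn values; the dead zone and scaling are illustrative assumptions.

```python
def movement_from_tilt(pitch_deg, roll_deg, dead_zone_deg=5.0):
    """Sketch: map device tilt (from orientation sensors) to movement.
    Pitch forward/back drives speed; roll left/right drives turning.
    The dead zone and 45-degree scaling are illustrative assumptions."""
    def clamp(v):
        return max(-1.0, min(1.0, v))
    speed = 0.0 if abs(pitch_deg) < dead_zone_deg else -pitch_deg / 45.0
    turn = 0.0 if abs(roll_deg) < dead_zone_deg else roll_deg / 45.0
    # Normalized commands in [-1, 1] for downstream movement control.
    return {"speed": clamp(speed), "turn": clamp(turn)}
```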
[00120] In any example herein where a computing device is configured to allow user modulation of speed of movement, the scoring can be based on a normalization of the measured completion time and/or path length that accounts for the speeds selected by the user, so that the efficiency of the path taken can be determined independently of the speed settings used.
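One way such a speed normalization could work is sketched below: dividing the user's completion time by the time an ideal traversal of the shortest path would take at the user's own average speed yields a score that reflects route efficiency rather than the speed setting chosen. The formula is an illustrative assumption.

```python
def efficiency_score(user_time_s, user_avg_speed, shortest_path_len):
    """Sketch: normalize completion time by the user's own average speed so
    the score reflects route efficiency, not the chosen speed setting.
    A value of 1.0 means the user's route was as efficient as the shortest
    path; larger values indicate a less efficient route."""
    if not user_avg_speed or not shortest_path_len:
        return None
    ideal_time = shortest_path_len / user_avg_speed
    return user_time_s / ideal_time
```

Because user_time_s multiplied by user_avg_speed approximates the distance actually traveled, this ratio reduces to (distance traveled) / (shortest path length), which is independent of the speed settings used.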
[00121] In any example herein, the details of the path a user takes to navigate in the environment might be more instructive, and be given increased weighting in the scoring, as compared to the time it takes a user to get to the desired endpoint, location, landmark, or specially-shaped object.
[00122] FIGs. 1A - 1D show non-limiting examples of computerized renderings of courses (paths) that present navigation tasks.
[00123] FIG. 1A shows a non-limiting example of a computerized rendering of a course that can be used to present a navigation task according to the principles herein, including a route-learning task, or a relative-orientation task, or a way-finding task, or any combination thereof. In this example, the computing device is configured to present an elevated, overhead view of an environment 10 that includes one or more internal courses 12 and obstacles 14. In this example, portions of the course 12 are configured to include pathways and passageways that allow traversal of the user indicator (such as but not limited to an avatar or other guidable element 16). In this example, the environment is rendered as a city-block type structure; however, other example environments are encompassed in this disclosure. The Cartesian axes (x-, y-, and z-axes) directions in the environment are used merely as guides for the description in this disclosure, and are not intended to be limiting on the environment. The example environment also includes a number of strategically placed shaped objects 18 (such as a doughnut, a sphere, a cone, etc.) that a user is tasked to locate. In this example, the user is presented a perspective view of the landscape and obstacles that is sufficiently localized so that the user is required to make selections or decisions on strategy to traverse the course without benefit of an aerial view of the entire course or a significant portion of the course. The navigation task requires an individual to formulate a pathway about the strategically positioned obstacles 14 from an initial point to at least one of the shaped objects 18. The example environment can include one or more entryways 19 that either remain at a same location or are placed at differing locations relative to the environment 10. The computing device can be configured to present instructions to the individual in a testing phase to indicate the shaped objects 18 to be located, and optionally to allow the user an exploration phase (including a guided route phase or a free-exploration phase) to become familiar with location and type of the obstacles 14 and shaped object 18 in the environment 10. The computing device also can be configured to
provide an individual with an input device or other type of control element (including the joystick, steering wheel, buttons, or other controls described hereinabove) that allows the individual to traverse the course 12, including specifying and/or controlling one or more of the speed of movement, orientation, velocity, choice of navigation strategy, the wait or delay period or other period of inaction, prior to continuing in a given direction of a course or changing direction, time interval to complete a course, and/or frequency or number of times of referral to an aerial or elevated view of a landscape (including as a map), a measure of accuracy in recreating a previously learned route (e.g., in the one or more testing phases), a measure of accuracy of a user in using spatial memory rather than visual cues to orient the user indicator relative to (including to point back to) a specific location in space (such as but not limited to the point of origin of the given pre-specified navigation route), and/or a measure of the strategies employed in exploring and learning a novel environment. In any example herein, the measure can include values of any of these parameters as a function of time. As non-limiting examples, the performance metrics can include a measure of the degree of optimization of the path navigated by the individual through the course, such as determining the shortest path or near-shortest path through the course, the time to complete the task, or other scoring mechanism associated with a route-learning task, or a relative-orientation task, or a way-finding task, or any combination thereof (as described herein).
[00124] In an example implementation, the walls of the environment can be configured with differing colors, indicated as a color 1, color 2, color 3, and color 4, to provide a user with visual cues for navigating through the environment 10. For example, each can be a different color, two or more can be the same color, or all can be the same color. A first specific color can be used to indicate walls crossing the x-axis of the environment (e.g., color 3 and color 4 are the same), while a second, different specific color can be used to indicate walls crossing the y-axis of the environment (e.g., color 1 and color 2 are the same).
[00125] The computing device can be configured to collect data indicative of the performance metric that quantifies the navigation strategy (including path, speed, and number of turns and sweeping gazes) employed by the individual from the initial point ("A") or entryway 19 to reach one or more target locations, landmarks, shaped objects, or end-points ("B") in performing the route-learning task, way-finding task, or combination task. For example, the computing device can be configured to collect data indicative of the individual's decisions to proceed from the initial point ("A") or entryway 19 along the dashed line or the
dotted line, the speed of movement, the orientation of the user indicator (such as but not limited to the avatar or other guidable element 16), among other measures (as described hereinabove). The data can be collected in the one or more testing phases. The data also can be collected in the exploration phase to provide a baseline or other comparison metric for computing the scores described herein. In the various examples, performance metrics that can be measured using the computing device can include data indicative of the speed of movement, orientation, velocity, choice of navigation strategy, wait or delay period, or other period of inaction, prior to continuing in a given direction of a course or changing direction, time interval to complete a course, and/or frequency or number of times of referral to an aerial or elevated view of a landscape (including as a map), including values of any of these parameters as a function of time. As another non-limiting example, the performance metrics can include a measure of the degree of optimization of the path navigated by the individual through the course, such as determining the shortest path or near-shortest path through the course, the time to complete the task, or other scoring mechanism associated with a route-learning task, or a relative-orientation task, or a way-finding task, or any combination thereof (as described herein).
[00126] As shown in the example of FIG. 1A, the course 12 may include one or more targets (such as shaped objects 18, landmarks, or other desired location) that the individual is instructed to locate in traversing the course 12. In this example, the performance metric may include a scoring based on a specific type of target located, and/or the total number of targets located, and/or the time taken to locate the targets. In a non-limiting example, the individual may be instructed to navigate the course 12 such that the multiple targets are located in a specified sequence. In this example, the performance metric may include a scoring based on the number of targets located in sequence and/or the time taken to complete the sequence.
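A sequence-based scoring of this kind could be sketched as follows; the scoring fields are illustrative assumptions.

```python
def sequence_score(required_sequence, visited_targets, total_time_s):
    """Sketch: count how many targets were located in the specified order.
    `visited_targets` is the chronological list of targets the user found."""
    matched = 0
    for target in visited_targets:
        if matched < len(required_sequence) and target == required_sequence[matched]:
            matched += 1
    return {
        "targets_in_sequence": matched,
        "sequence_complete": matched == len(required_sequence),
        "time_s": total_time_s,
    }
```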
[00127] FIG. 1B shows a non-limiting example of another computerized rendering of an environment 20 that a computing device can render to present a navigation task according to the principles herein. In this example landscape 20, portions of the course 22 are defined by obstacles 24, and are configured to allow traversal of the user indicator (such as but not limited to an avatar or other guidable element 26) from a point of origin 29 to a specified target. As described hereinabove, the point of origin 29 may be at the same or different location relative to the environment between the two testing phases. As shown in FIG. 1B, the obstacles 24 can have differing cross-sectional shapes, such as a substantially square cross-section of obstacle O1 compared to a longitudinal cross-section of obstacle O2. In this example, the user is
presented a perspective view of the landscape and obstacles that is sufficiently localized so that an individual is required to make selections or decisions on strategy to traverse the course without benefit of an aerial view of the entire course or a significant portion of the course. The computing device can be configured to collect data indicative of the individual's decision to proceed along the dashed line or the dotted line (such as but not limited to the forward or backtracking movement of a user in the testing phase of a route-learning task), and/or the speed of movement, and/or the orientation of the user indicator (such as but not limited to the avatar or other guidable element 26), such as but not limited to the point-of-origin pointing (or other indication) that may be required of a user in the testing phase of a route-learning task, among other measures. In this example, performance metrics that can be measured using the computing device relative to the localized landscape can include data indicative of one or more of the speed of movement, orientation, velocity, choice of navigation strategy, wait or delay period, or other period of inaction, prior to continuing in a given direction of a course or changing direction, time interval to complete a course, and/or frequency or number of times of referral to an aerial or elevated view of a landscape (including as a map), a measure of accuracy in recreating a previously learned route (e.g., in the one or more testing phases), a measure of accuracy of a user in using spatial memory rather than visual cues to orient the user indicator relative to (including to point back to) a specific location in space (such as but not limited to the point of origin of the given pre-specified navigation route), and/or a measure of the strategies employed in exploring and learning a novel environment. In any example herein, the measure can include values of any of these parameters as a function of time. As another non-limiting example, the performance metrics can include a measure of the degree of optimization of the path navigated by the individual through the course, such as but not limited to determining the shortest path or near-shortest path through the course.
[00128] The example environment 20 includes multiple target shaped objects Si (i = 1, 2, 3, 4) that the individual is instructed to locate in traversing the course 22 from point of origin 29. In this example, the performance metric may include a scoring based on the success in locating a specific target object, the number of targets located (including from multiple testing phases), and/or the time taken to locate the target(s). In a non-limiting example, the individual may be instructed to navigate the course 22 such that the multiple targets are located in a specified sequence. In this example, the performance metric may include a scoring based on the number of targets located in sequence and/or the time taken to complete the sequence.
[00129] In an example way-finding task, a computing device can be configured to present an individual with the capability of changing, in at least one instance in a session, from a wider aerial view (such as but not limited to the view shown in FIGs. 1A - 1B) to a more localized, perspective view (such as but not limited to the perspective views shown in FIGs. 3A - 3U hereinbelow).
[00130] As a non-limiting example implementation of a way-finding task, an individual may be presented with an aerial view such as shown in FIG. 1A or 1B to obtain an overview of the course, but then be required to navigate the course from a more localized perspective view shown in FIGs. 3A - 3U hereinbelow. In this example, an individual may be required to rely on allocentric navigation capabilities, to navigate the course by making selections and decisions from more localized, perspective views similar to those shown in FIGs. 3A - 3U hereinbelow, based on the spatial memory the individual forms from the wider aerial view of FIG. 1A or 1B.
[00131] FIG. 1C shows a non-limiting example of the type of dimensional constraints that can be imposed on the passageways, obstacles, and dimensions of the environment. As shown in FIG. 1C, the width (a1) of the obstacles is greater than or about equal to the width (a2) of the passageway. In a non-limiting example, a1 is about twice a2. The width (a1) is also smaller than the length of the environment wall (a3), such that no portion of the environment is rendered inaccessible by an obstacle. In a non-limiting example, a1 is about one-fourth or one-fifth of a3. While example proportionate values are given for the relative dimensions (widths and lengths) of the passageway, obstacles, and environment walls, they are not intended to be limiting, other than to require that a3 > a1 ≥ a2.
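A minimal validation sketch for these proportionality constraints, assuming an environment generator supplies the three dimensions, is shown below.

```python
def validate_dimensions(a1, a2, a3):
    """Sketch of the constraints described above: obstacle width a1,
    passageway width a2, environment wall length a3, with a3 > a1 >= a2.
    Raises ValueError if the ordering is violated."""
    if not (a3 > a1 >= a2):
        raise ValueError("dimensions must satisfy a3 > a1 >= a2")
    # Non-limiting example proportions from the description:
    # a1 is about 2 * a2, and a1 is about a3 / 4 to a3 / 5.
    return True
```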
[00132] FIG. 1D shows a non-limiting example of a computerized environment, where the path 40 from point A to point B includes at least one turn 42 of a discrete angular amount (represented by angle θ1). In a non-limiting example of a task involving path integration (such as but not limited to dead-reckoning), a user is required to navigate from an initial point A to a target end-point (C) via the path, and from point C use an indicator to "point" back to or otherwise indicate the point of origin A. In an example, the system is controllable to allow the user to indicate any angle within the range of 0° to at least about 180° about point C. In another example, the system is controllable to allow the user to indicate any angle within the entire range of from 0° to 360° about point C. A measure of the degree of success of performance of the task is the measure of the delta angle (Δα) between what the user indicates
as the relative orientation of the point of origin (dashed arrow 44) and the actual relative orientation (dashed arrow 46) of the point of origin.
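As a non-limiting illustrative sketch, the delta angle (Δα) could be computed from the indicated and actual bearings (in degrees); the wrap-around convention is an assumption for illustration only:

def delta_angle(indicated_deg, actual_deg):
    """Signed angular error between the indicated and actual bearing of the
    point of origin, wrapped into the interval (-180, 180] degrees."""
    d = (indicated_deg - actual_deg) % 360.0
    return d - 360.0 if d > 180.0 else d

# e.g., delta_angle(350.0, 10.0) -> -20.0 degrees of error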
[00133] As shown in FIG. 1D, a navigation path in any example environment described herein (including in the example of any of FIGs. 2A - 9H hereinbelow) may include a portion that is curved or substantially non-linear.
[00134] FIGs. 2A - 9H show various perspective views of portions of computerized renderings of an environment during various non-limiting example navigation tasks according to the principles herein. In these examples, the computing device is configured to present differing perspective views of a selected portion of an environment that the individual is required to navigate, but from the perspective of the user indicator (such as but not limited to an avatar or other guidable element). The example perspective views are illustrative of navigation through an example environment and are not to be limiting on the scope of the instant disclosure. The example images depict the type of sequence of perspective views that a user can encounter as the user navigates through the environment.
[00135] FIGs. 2A - 2C show differing perspective views of an example entryway 200
(here depicted as a lit opening) as the user actuates the controls of the computing device to pass through the entryway to enter the environment. FIGs. 2A - 2C also show examples of the types of heads-up display (HUD) 202 that the computing device can display to a user as the user navigates the environment. In this example, the computing device prompts the user with the display of the instructions "READY TO EXPLORE" as the HUD 202.
[00136] FIGs. 3A - 3U show non-limiting examples of a series of perspective views of an environment as the computing device allows a user to conduct an exploration to gain some familiarity with the environment. In the example of FIG. 3A, portions of the example course 302 are defined by obstacles 304 and a wall 306, and are configured to allow traversal of the user indicator (such as but not limited to an avatar or other guidable element) as the user explores the environment. Also shown is an example of a target shaped object 308 (in this example, a sphere) that the user may be instructed to locate in one or more testing sessions. FIGs. 3B and 3C show examples of the perspective views rendered as the user actuates the computing device controls to turn and move around in the environment. FIGs. 3D - 3U show the perspective views of the environment as the user moves forward, moves backwards, and turns around obstacles in the environment. FIGs. 3D - 3U also show the non-limiting example HUD 310 display rendered to the user by the computing device to indicate that it is an exploration phase and the amount of time the user is allowed for the exploration (whether a guided route or a free-exploration), as well as a HUD 312 that indicates the time spent as the user navigates through the exploration phase. FIGs. 3D - 3U show the other non-limiting example shaped objects located about the environment, including a cone 314, a cube 316, and a doughnut 318.
[00137] In a non-limiting example, an individual may be presented with a perspective view such as shown in FIGs. 3A - 3U, with verbal or visual instructions indicating that they have been placed at an unknown location within a previously-experienced virtual environment (through the exploration phase), and instructed to perform a navigation task from this unknown location. As an example of such a navigation task, an individual may be required to use the computing device controls to look around, determine their current location to the best of their ability, and point to a previously navigated (and presumed-known) location within the environment. Performance metrics for such a task would include the accuracy of the directional response and the time required to generate this response. As another example of such a navigation task, an individual may be required to move their avatar from the unknown location to a presumed-known location within the environment. Performance metrics for such a task could include the time required to reach the goal location, and differences between the path used to reach the goal location and one or more optimal paths (e.g., optimal paths determined using mathematical or algorithmic computational or modeling methods).
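As a non-limiting illustrative sketch of one such algorithmic method, an optimal path length could be computed by breadth-first search over an occupancy-grid representation of the environment and compared with the traveled path; the grid encoding is an assumption for illustration only:

from collections import deque

def shortest_path_length(grid, start, goal):
    """Breadth-first search on a 4-connected grid; grid[r][c] == 0 means
    traversable. Returns the number of steps, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    frontier, seen = deque([(start, 0)]), {start}
    while frontier:
        (r, c), steps = frontier.popleft()
        if (r, c) == goal:
            return steps
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                frontier.append(((nr, nc), steps + 1))
    return None

def path_efficiency(traveled_steps, grid, start, goal):
    """Ratio of optimal to traveled path length (1.0 = perfectly optimal)."""
    optimal = shortest_path_length(grid, start, goal)
    return optimal / traveled_steps if optimal and traveled_steps else None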
[00138] As shown in FIGs. 3A - 3U, the relative dimensions of the passageway, obstacles, and environment walls are configured such that a3 > a1 > a2 (as described in connection with FIG. 1C) and such that a user presented with the perspective view is obstructed from observing the contents of adjacent passageways until the user is within a certain distance of a cross-channel or a turn. As a non-limiting example, the dimensions a3:a1:a2 can be related in a ratio of 10:2:1.
[00139] FIGs. 4A - 4C show differing perspective views of an example entryway 400 (here depicted as a lit opening) as the user actuates the controls of the computing device to complete the exploration phase and to pass through the entryway to enter the environment for a testing session. FIGs. 4A - 4C also show examples of the types of heads-up display (HUD) 402 that the computing device can display to a user as the user navigates the environment. In this example, the computing device prompts the user with the display of the instructions "READY TO SEARCH" as the HUD 402.
[00140] FIGs. 5A - 5J show non-limiting examples of a series of perspective views of an environment as the computing device presents a first testing session to a user in the environment. FIG. 5A shows an example display of instructions 500 to the user to indicate the type of shaped object (a cone) to be located, as well as a HUD 502 that indicates the time spent as the user navigates through the first testing phase. In this example, the user is required to make selections or decisions on strategy to traverse the course, without benefit of an aerial view of the entire course, based on the user's spatial memory of the course (with the familiarity gained in the exploration phase). As shown in FIG. 5A, the user indicator is placed at a different starting location of the environment than for the exploration phase (shown in FIG. 3A). In this example, the user is required to navigate the course by making selections and decisions based on the relative position of the user's indicator in the landscape, the environment wall colors, and any shaped objects encountered. As non-limiting examples, the user can make decisions as to direction and orientation of movement based on using the positions of non-target shaped objects 504, 506 and 508 as guides in formulating a navigation strategy. In this example, the individual may use the non-target shaped objects 504, 506 and 508 in a form of egocentric navigation. As shown in FIG. 5H, the user navigates to target shaped object 510, at which point the timer HUD 502 is frozen in time, and the user is presented with a reward indicator 512 and is reset to the entryway for further session(s), if any.
[00141] FIGs. 6A - 6E show non-limiting examples of a series of perspective views of an environment as the computing device presents a second testing session to a user in the environment. FIG. 6A shows an example display of instructions 600 to the user to indicate the type of shaped object (a cone, similar to FIG. 5A) to be located, as well as a dual HUD 602 that indicates both the time the user took to complete the first testing session (FIGs. 5A - 5J) and the time spent as the user navigates through the second testing phase. In this example, the user is required to make selections or decisions on strategy to traverse the course, without benefit of an aerial view of the entire course, based on the user's spatial memory of the course (with the familiarity gained in the exploration phase and in the first testing session (FIGs. 5A - 5J)). As shown in FIG. 6A, the user indicator is placed at a similar starting location of the environment as for the first testing session (shown in FIG. 5A). In this example, the user is required to navigate the course by making selections and decisions based on the relative position of the user's indicator in the landscape, the environment wall colors, and any shaped objects encountered. As non-limiting examples, the user can make decisions as to direction and orientation of movement based on using the position of a non-target shaped object 604 as a guide in formulating a navigation strategy, in a form of egocentric navigation. As shown in FIG. 6D, the user navigates to target shaped object 606, at which point the timer HUD 608 tracking the time for the second testing session is frozen in time. The user is presented with a reward indicator 610 and is reset to the entryway for further session(s), if any. As shown in the non-limiting example of FIG. 6D, the time the user took to complete the first testing session (FIGs. 5A - 5J) is greater than the time the user took to navigate through the second testing phase.
[00142] FIGs. 7A - 7F show non-limiting examples of a series of perspective views of an environment as the computing device presents a third testing session to a user in the environment. FIG. 7A shows an example display of instructions 700 to the user to indicate the type of shaped object (a cube) to be located, as well as a triple HUD 702 that indicates the time the user took to complete the first testing session (FIGs. 5A - 5J), the time the user took to complete the second testing session (FIGs. 6A - 6E), as well as the time spent as the user navigates through the third testing phase. In this example, the user is required to make selections or decisions on strategy to traverse the course, without benefit of an aerial view of the entire course, based on the user's spatial memory of the course (with the familiarity gained in the exploration phase, first testing session (FIGs. 5A - 5J), and second testing session (FIGs. 6A - 6E)). As shown in FIG. 7A, the user indicator is placed at a similar starting location of the environment as for the first and second testing sessions (shown in FIGs. 5A and 6A). In this example, the user is required to navigate the course by making selections and decisions based on the relative position of the user's indicator in the landscape, the environment wall colors, and any shaped objects encountered. As shown in FIG. 7E, the user navigates to target shaped object 704, at which point the timer HUD 702 tracking the time for the third testing session is frozen in time. The user is presented with a reward indicator 706 and is reset to the entryway for further session(s), if any. As shown in the non-limiting example of FIG. 7F, the time the user took to navigate through the third testing session is significantly less than the time taken to complete the first testing session (FIGs. 5A - 5J) and the second testing session (FIGs. 6A - 6E).
[00143] FIGs. 8A - 8H show non-limiting examples of a series of perspective views of an environment as the computing device presents a fourth testing session to a user in the environment. FIG. 8A shows an example display of instructions 800 to the user to indicate the type of shaped object (a sphere) to be located, as well as a quadruple HUD 802 that indicates the time the user took to complete the first, second, and third testing sessions (FIGs. 5A - 7F), as well as the time spent as the user navigates through the fourth testing phase. In this example, the user is required to make selections or decisions on strategy to traverse the course, without benefit of an aerial view of the entire course, based on the user's spatial memory of the course (with the familiarity gained in the exploration phase, and first, second, and third testing sessions (FIGs. 5A - 7F)). As shown in FIG. 8A, the user indicator is placed at a similar starting location of the environment as for the first, second and third testing sessions (shown in FIGs. 5A, 6A, and 7A). In this example, the user is required to navigate the course by making selections and decisions based on the relative position of the user's indicator in the landscape, the environment wall colors, and any shaped objects encountered. As shown in FIG. 8H, the user navigates to target shaped object 804, at which point the timer HUD 802 tracking the time for the fourth testing session is frozen in time. The user is presented with a reward indicator 806 and is reset to the entryway for further session(s), if any. As shown in the non-limiting example of FIG. 8H, the time the user took to navigate through the fourth testing session is comparable to the time taken to complete the first testing session (FIGs. 5A - 5J) and the second testing session (FIGs. 6A - 6E).
[00144] FIGs. 9A - 9H show non-limiting examples of a series of perspective views of an environment as the computing device presents a fifth testing session to a user in the environment. FIG. 9A shows an example display of instructions 900 to the user to indicate the type of shaped object (a cube, similar to FIG. 7A) to be located, as well as a quintuple HUD 902 that indicates the time the user took to complete the first, second, third, and fourth testing sessions (FIGs. 5A - 8H), as well as the time spent as the user navigates through the fifth testing phase. In this example, the user is required to make selections or decisions on strategy to traverse the course, without benefit of an aerial view of the entire course, based on the user's spatial memory of the course (with the familiarity gained in the exploration phase, and first, second, third, and fourth testing sessions (FIGs. 5A - 8H)). As shown in FIG. 9A, the user indicator is placed at a different starting location of the environment than for the previous testing sessions (shown in FIGs. 5A, 6A, 7A, and 8A). In this example, the user is required to navigate the course by making selections and decisions based on the relative position of the user's indicator in the landscape, the environment wall colors, and any shaped objects encountered. As shown in FIG. 9G, the user navigates to target shaped object 904, at which point the timer HUD 902 tracking the time for the fifth testing session is frozen in time. In FIG. 9H, the user is reset to the entryway for further session(s), if any. As shown in the non-limiting example of FIG. 9H, the time the user took to navigate through the fifth testing session is comparable to the time taken to complete the first, second and fourth testing sessions (FIGs. 5A - 6E and 8A - 8H).
[00145] FIG. 10 shows a non-limiting example of a graphical user interface rendered to a user, including multiple input fields that allow a user to enter user identification 1002, user password 1004, and other information that can be used for authentication and validation of the user, and to determine a user's permission levels to enter a session. In another example, the graphical user interface can be rendered to display the user's performance data and
performance metrics, or to provide a measure of the user's progress across multiple testing sessions.
[00146] In connection with the example of any one or more of FIGs. 5A - 9H, the computing device can be configured to collect data indicative of the individual's decisions to proceed in the environment, and/or the speed of movement, and/or the orientation of the user indicator, among other measures. In this example, performance metrics that can be measured using the computing device relative to the localized, perspective landscape can include data indicative of one or more of: the speed of movement, orientation, velocity, or choice of navigation strategy; any wait or delay period, or other period of inaction, prior to continuing in a given direction of a course or changing direction; the time interval to complete a course; the frequency or number of times of referral to an aerial or elevated view of a landscape (including as a map); a measure of accuracy in recreating a previously learned route (e.g., in the one or more testing phases); a measure of accuracy of a user in using spatial memory rather than visual cues to orient the user indicator relative to (including to point back to) a specific location in space (such as but not limited to the point of origin of the given pre-specified navigation route); and/or a measure of the strategies employed in exploring and learning a novel environment. In any example herein, the measure can include values of any of these parameters as a function of time. As another non-limiting example, the performance metrics can include a measure of the degree of optimization of the path navigated by the individual through the course, such as determining the shortest path or near-shortest path through the course, the time to complete the task, or other scoring mechanism associated with a route-learning task, or a relative-orientation task, or a way-finding task, or any combination thereof (as described herein).
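As a non-limiting illustrative sketch, several such per-task measures could be combined into a single session-level score; the normalization to the [0, 1] range and the weighting scheme are assumptions for illustration only:

def composite_score(metrics, weights):
    """Weighted combination of normalized per-task metrics (each assumed to
    lie in [0, 1]) into a single session score."""
    return sum(weights[name] * value
               for name, value in metrics.items() if name in weights)

# e.g., composite_score(
#     {"path_efficiency": 0.82, "pointing_accuracy": 0.91, "speed_norm": 0.65},
#     {"path_efficiency": 0.5, "pointing_accuracy": 0.3, "speed_norm": 0.2})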
[00147] In any example herein, the course through an example environment may include land-based solid surfaces (including paved road, dirt road, or other types of ground surfaces) and/or waterways.
[00148] In any example, the environment may instead be waterways defined by obstacles other than land-based obstacles, such as but not limited to buoys or other anchored floats, reefs, jetties, or other applicable types of obstacles.
[00149] In any example herein, one or more navigation tasks can be computer-implemented as computerized elements which require position-specific and/or motion-specific responses from the user. In non-limiting examples, the user response to the navigation task(s) can be recorded using an input device of the cognitive platform. Non-limiting examples of such input devices can include a touch, swipe or other gesture relative to a user interface or image capture device (such as but not limited to a keyboard, a touch-screen or other pressure-sensitive screen, or a camera), including any form of graphical user interface configured for recording a user interaction. In other non-limiting examples, the user response recorded using the cognitive platform for the navigation task(s) can include user actions that cause changes in a position, orientation, or movement of a computing device including the cognitive platform. Such changes in a position, orientation, or movement of a computing device can be recorded using an input device disposed in or otherwise coupled to the computing device, such as but not limited to a sensor. Non-limiting examples of sensors include a joystick, a mouse, a motion sensor, a position sensor, a pressure sensor, and/or an image capture device (such as but not limited to a camera).
[00150] In an example implementation, the computer device is configured (such as using at least one specially-programmed processing unit) to cause the cognitive platform to present to a user one or more different types of navigation tasks during a specified time frame.
[00151] In some examples, the time frame can be of any time interval at a resolution of up to about 30 seconds, about 1 minute, about 5 minutes, about 10 minutes, about 20 minutes, or longer.
[00152] In some examples, the platform product or cognitive platform can be configured to collect data indicative of a reaction time of a user's response relative to the time of presentation of the navigation tasks.
[00153] In some examples, the difficulty level of the navigation task can be changed by increasing the intricacy of the convolutions, or the number or density of misdirection portions, of the course, by reducing the time allowed to complete the course, and/or by increasing the complexity of the target location requirements. In any example herein, a misdirection portion in a course causes the avatar or other guidable element to move off course, reach a portion of an obstacle that cannot be traversed, and/or not lead to a desired target.
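As a non-limiting illustrative sketch, a simple one-up/one-down staircase rule (an assumption for illustration, not a recited adjustment method) could map session outcomes to a difficulty level that in turn controls misdirection density or the time allowed:

def adjust_difficulty(level, completion_time, time_limit,
                      min_level=1, max_level=10):
    """Raise the difficulty level when the user beats the allowed time;
    lower it otherwise."""
    if completion_time <= time_limit:
        return min(max_level, level + 1)   # harder: e.g., more misdirection
    return max(min_level, level - 1)       # easier: e.g., more time allowed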
[00154] In a non-limiting example implementation, the example platform product herein may be formed as, be based on, or be integrated with, an AKILI® platform product (also referred to herein as an "APP") by Akili Interactive Labs, Inc., Boston, MA.
[00155] As used herein, the term "computerized stimuli or interaction" or "CSI" refers to a computerized element that is presented to a user to facilitate the user's performance of the navigation task.
[00156] For example, the navigation task can be presented to a user by rendering a graphical user interface to present the computerized stimuli or interaction (CSI) or other interactive elements. Description of use of (and analysis of data from) one or more CSIs in the various examples herein also encompasses use of (and analysis of data from) navigation tasks comprising the one or more CSIs in those examples.
[00157] In an example where the computing device is configured to present at least one navigation task comprising at least one CSI, the at least one navigation task and at least one CSI can be rendered using the at least one graphical user interface. The computing device can be configured to measure data indicative of the responses as the user performs the at least one navigation task and to measure data indicative of the interactions with the at least one CSI. In some examples, the rendered at least one graphical user interface can be configured to measure data indicative of the responses as the user performs the at least one navigation task and to measure data indicative of the interactions with the at least one CSI.
[00158] In any example according to the principles herein, the CSIs may be reward items or other interaction elements located at the one or more target points Bi (i = 1, 2, 3, ...) that the individual is instructed to locate in traversing a course. In this example, the performance metric may include a scoring based on the number of reward items or other interaction elements located by the individual and/or the time taken to locate the reward items or other interaction elements. Non-limiting examples of reward items or other interaction elements include coins, stars, faces (including faces having variations in emotional expression), or other dynamic elements.
[00159] In a non-limiting example, the graphical user interface can be configured such that the CSI computerized element(s) are active, and may require at least one response from a user, such that the graphical user interface is configured to measure data indicative of the type or degree of interaction of the user with the platform product. In another example, the graphical user interface can be configured such that the CSI computerized element(s) are passive and are presented to the user using the at least one graphical user interface but may not require a response from the user. In this example, the at least one graphical user interface can be configured to exclude the recorded response of an interaction of the user, to apply a weighting factor to the data indicative of the response (e.g., to weight the response to lower or higher values), or to measure data indicative of the response of the user with the platform product as a measure of a misdirected response of the user (e.g., to issue a notification or other feedback to the user of the misdirected response).
[00160] In an example, the platform product can be configured as a processor-implemented system, method or apparatus that includes at least one processing unit. In an example, the at least one processing unit can be programmed to render at least one graphical user interface to present the navigation task(s) and one or more CSI to the user for interaction. The at least one processing unit can be programmed to cause a component of the program product to receive data indicative of the navigation and/or at least one user response based on the user interaction with the CSI (such as but not limited to cData), including responses provided using the input device. The at least one processing unit also can be programmed to: analyze the cData to provide a measure of the individual's performance metric for a given type of navigation task (whether allocentric or egocentric), and/or analyze the differences in the individual's performance based on determining the differences between the user's performance at allocentric navigation as compared to the user's performance at egocentric navigation (including based on differences in the cData), and/or adjust the difficulty level of the navigation task(s) (including CSIs), based on the analysis of the cData (including the measures of the individual's performance determined in the analysis), and/or provide an output or other feedback from the platform product that can be indicative of the individual's performance metric, and/or cognitive abilities (including for screening, monitoring or assessment), and/or response to cognitive treatment, and/or assessed measures of cognition. In non-limiting examples, the at least one processing unit also can be programmed to classify an individual as to amyloid status, and/or presence or expression level of tau proteins, and/or potential efficacy of use of the cognitive platform or platform product when the individual is administered a drug, biologic or other pharmaceutical agent, and/or expected score from the individual's performance of a TOVA® test and/or a RAVLT™ test, based on the cData collected from the individual's interaction with the cognitive platform or platform product and/or metrics computed based on the analysis (and associated computations) of that cData. In non-limiting examples, the at least one processing unit also can be programmed to classify an individual as to likelihood of onset and/or stage of progression of a condition, based on the cData collected from the individual's interaction with the cognitive platform or platform product and/or metrics computed based on the analysis (and associated computations) of that cData. The condition can be, but is not limited to, depression, attention deficit hyperactivity disorder (ADHD), Alzheimer's disease, dementia, Parkinson's disease, Huntington's disease, Cushing's disease, schizophrenia, or other condition.
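As a non-limiting illustrative sketch, the comparison of allocentric versus egocentric performance could be summarized as follows; the score format is an assumption for illustration only:

from statistics import mean

def navigation_skill_profile(allocentric_scores, egocentric_scores):
    """Compare mean performance (e.g., normalized cData-derived metrics) on
    allocentric-type versus egocentric-type navigation tasks; a positive
    difference favors allocentric skill."""
    allo, ego = mean(allocentric_scores), mean(egocentric_scores)
    return {"allocentric": allo, "egocentric": ego, "difference": allo - ego}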
[00161] In other examples, the platform product can be configured as a processor- implemented system, method or apparatus that includes a display component, an input device, and the at least one processing unit. The at least one processing unit can be programmed to render at least one graphical user interface, for display at the display component, to present the navigation task(s) (including the CSI) to the user for interaction.
[00162] Non-limiting examples of an input device include a touch-screen, or other pressure-sensitive or touch-sensitive surface, a motion sensor, a position sensor, a pressure sensor, and/or an image capture device (such as but not limited to a camera).
[00163] The analysis of the individual's performance may include using the computing device to compute percent accuracy at the navigation task, and the number of hits and/or misses at locating the target(s) during a session or from a previously completed session. Other indicia that can be used to compute performance measures are the amount of time the individual takes to respond after the presentation of a task (e.g., as a targeting stimulus). Other indicia can include, but are not limited to, reaction time, response variance, number of correct hits, omission errors, false alarms, learning rate, spatial deviance, subjective ratings, and/or performance threshold, etc.
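As a non-limiting illustrative sketch, several of these indicia could be computed from a session's response events; the event record format is an assumption for illustration only:

from statistics import mean, pvariance

def session_indicia(events):
    """Summarize response events, each an assumed record of
    (reaction_time_s, correct, responded)."""
    rts = [rt for rt, _, responded in events if responded]
    hits = sum(1 for _, correct, responded in events if responded and correct)
    false_alarms = sum(1 for _, correct, responded in events
                       if responded and not correct)
    omissions = sum(1 for _, _, responded in events if not responded)
    return {
        "percent_accuracy": 100.0 * hits / len(events),
        "mean_reaction_time": mean(rts) if rts else None,
        "response_variance": pvariance(rts) if len(rts) > 1 else 0.0,
        "hits": hits,
        "false_alarms": false_alarms,
        "omission_errors": omissions,
    }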
[00164] In a non-limiting example, the computerized element includes at least one element to indicate positive feedback to a user. Each element can include an auditory signal and/or a visual signal emitted to the user that indicates success at a navigation task or other
platform interaction element, i.e., that the user's responses at the platform product have exceeded a threshold success measure on a navigation task.
[00165] In a non-limiting example, the computerized element includes at least one element to indicate negative feedback to a user. Each element can include an auditory signal and/or a visual signal emitted to the user that indicates failure at a navigation task, i.e., that the user's responses at the platform product have not met a threshold success measure on a navigation task.
[00166] In a non-limiting example, the computerized element includes at least one element for messaging, i.e., a communication to the user that is different from positive feedback or negative feedback.
[00167] In a non-limiting example, the computerized element includes at least one element for indicating a CSI that is a reward. A reward computer element can be a computer-generated feature that is delivered to a user to promote user satisfaction with the navigation task and, as a result, increase positive user interaction (and hence enjoyment of the user experience).
[00168] According to the principles herein, the term "cognition" or "cognitive" refers to the mental action or process of acquiring knowledge and understanding through thought, experience, and the senses. This includes, but is not limited to, psychological
concepts/domains such as executive function, memory, perception, attention, emotion, motor control, and interference processing. An example computer-implemented device according to the principles herein can be configured to collect data indicative of user interaction with a platform product, and to compute metrics that quantify user performance. The quantifiers of user performance can be used to provide measures of cognition (for cognitive assessment) or to provide measures of status or progress of a cognitive treatment.
[00169] According to the principles herein, the term "treatment" or "treat" refers to any manipulation of CSI in a platform product (including in the form of an APP) that results in a measurable improvement of the abilities of a user, such as but not limited to improvements related to cognition, a user's mood, emotional state, and/or level of engagement or attention to the cognitive platform. The degree or level of improvement can be quantified based on user performance measures as described herein. In an example, the term "treatment" may also refer to a therapy.
[00170] According to the principles herein, the term "session" refers to a discrete time period, with a clear start and finish, during which a user interacts with a platform product to receive assessment or treatment from the platform product (including in the form of an APP).
[00171] According to the principles herein, the term "assessment" refers to at least one session of user interaction with CSIs or other feature or element of a platform product. The data collected from one or more assessments performed by a user using a platform product (including in the form of an APP) can be used to derive measures or other quantifiers of cognition, or other aspects of a user's abilities.
[00172] According to the principles herein, the term "cognitive load" refers to the amount of mental resources that a user may need to expend to complete a task. This term also can be used to refer to the challenge or difficulty level of a navigation task.
[00173] In an example, the platform product can be configured as a processor- implemented system, method or apparatus that includes at least one processing unit. In an example, the at least one processing unit can be programmed to render at least one graphical user interface to present the navigation task(s) and one or more CSI to the user for interaction. The at least one processing unit can be programmed to cause a component of the program product to receive data indicative of the performance of the navigation task and/or at least one user response based on the user interaction with the CSI (such as but not limited to cData), including responses provided using the input device. The platform product also can be configured to receive nData indicative of measurements made before, during, and/or after the user interacts with the cognitive platform (including nData from measurements of physiological or monitoring components and/or cognitive testing components). The at least one processing unit also can be programmed to: analyze the cData and/or nData to provide a measure of the individual's condition (including cognitive condition), analyze the cData and/or nData to provide a measure of the individual's performance metric for a given type of navigation task (whether the navigation task requires allocentric navigation and/or egocentric navigation), and/or analyze the differences in the individual's performance based on determining the differences between the user's performance at allocentric navigation as compared to the user's performance at egocentric navigation (including based on differences in the cData) and differences in the associated nData. The at least one processing unit also can be programmed to: adjust the difficulty level of the navigation task(s) (including CSIs), based on the analysis of the cData (including the measures of the individual's performance determined in the
analysis), and/or provide an output or other feedback from the platform product that can be indicative of the individual's performance metric, and/or cognitive abilities (including for screening, monitoring or assessment), and/or response to cognitive treatment, and/or assessed measures of cognition. In non-limiting examples, the at least one processing unit also can be programmed to classify an individual as to amyloid status, and/or presence or expression level of tau proteins, and/or potential efficacy of use of the cognitive platform or platform product when the individual is administered a drug, biologic or other pharmaceutical agent, and/or expected score from the individual's performance of a TOVA® test and/or a RAVLT™ test, based on nData and the cData collected from the individual's interaction with the cognitive platform or platform product and/or metrics computed based on the analysis (and associated computations) of that cData and the nData. In non-limiting examples, the at least one processing unit also can be programmed to classify an individual as to likelihood of onset and/or stage of progression of a condition, based on nData and the cData collected from the individual's interaction with the cognitive platform or platform product and/or metrics computed based on the analysis (and associated computations) of that cData and the nData. The condition can be, but is not limited to, depression, attention deficit hyperactivity disorder (ADHD), Alzheimer's disease, dementia, Parkinson's disease, Huntington's disease, Cushing's disease, schizophrenia or other condition.
[00174] In an example, the feedback from the differences in the individual's performance based on determining the differences between the measures of the user's first type and second type of responses and the nData can be used as an input in the cognitive platform that indicates real-time performance of the individual during one or more session(s). The data of the feedback can be used as an input to a computation component of the computing device to determine a degree of adjustment that the cognitive platform makes to a difficulty level of the navigation task (optionally with interference) with which the user interacts within the same ongoing session and/or within a subsequently-performed session.
[00175] An example system, method, and apparatus according to the principles herein includes a platform product (including using an APP) that is configured to identify the type of navigation strategy that is being used by a participant.
[00176] An example system, method, and apparatus according to the principles herein includes a platform product (including using an APP) that is configured to determine the
relative strength of each navigation skill (whether egocentric navigation or allocentric navigation) for a given individual or set or population of individuals.
[00177] For example, if the weak areas in a disease population (such as but not limited to Alzheimer's disease, recurrent major depression, Parkinson's Disease, Huntington's Disease, ADHD) are strengthened with training on a cognitive platform configured to present a certain type of navigation task (e.g. allocentric navigation to strengthen the hippocampus as compared to egocentric navigation to strengthen the caudate nucleus), there could be transfer of benefit to the disease symptoms of the individual(s) related to that respective brain area (such as but not limited to navigation abilities and potentially memory related to the hippocampus, working memory, learning, and response selection related to the caudate nucleus).
[00178] As the hippocampus constructs and maintains a cognitive map of a given environment, and retrieves previously constructed maps (including landscape or waterways maps) when the individual is presented with a new environment that appears similar to a previously visited environment, measurements of interest include speed and accuracy of learning a new map, employing an old map, and differentiating between maps that appear similar.
[00179] An example system, method, and apparatus according to the principles herein includes a platform product (including using an APP) that is configured to evaluate the navigation strategy being used by an individual or group of individuals.
[00180] For example, the platform product (including using an APP) may be configured to present a user with conflicting information, such as but not limited to, egocentric landmark cues that would suggest different path choices than the simultaneously available allocentric boundary and path integration information. The example platform product can be configured to measure data indicative of cues that dictate the path choices of the individual. This can provide an indication of the individual's strategy preference. The indication of the individual's strategy preference can be correlated with relative capabilities in respectively associated areas of the individual's brain (i.e., areas of the brain governing allocentric navigation versus egocentric navigation).
[00181] An example system, method, and apparatus according to the principles herein includes a platform product (including using an APP) that is configured to measure the change in navigation performance of an individual as measured by metrics such as but not limited to the distance traveled to reach one or more targets (e.g., where a shorter distance is used as a
metric of better performance) or by the amount of time taken to reach the one or more targets (e.g., where a faster time is used as a metric of better performance), where the navigation task(s) is set in similar virtual environments, but with varying levels of landmarks available for navigating or varying the salience of the landmarks (such as but not limited to making landmarks look more similar (i.e., with fewer distinctions), smaller, or of a less distinct color from the background, etc.). The example platform product (including using an APP) can be configured to perform an analysis to compare these measurements. If the performance metrics indicate that an individual's performance gets worse as the number of landmarks decreases, the individual can be classified as more likely to be using egocentric navigation.
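As a non-limiting illustrative sketch, such a classification could rest on the least-squares slope of performance against landmark count; the slope threshold is an assumption for illustration only:

def landmark_sensitivity(landmark_counts, completion_times):
    """Least-squares slope of completion time versus the number of available
    landmarks; a strongly negative slope (times worsen as landmarks are
    removed) suggests reliance on landmark-based, egocentric navigation."""
    n = len(landmark_counts)
    mx = sum(landmark_counts) / n
    my = sum(completion_times) / n
    cov = sum((x - mx) * (y - my)
              for x, y in zip(landmark_counts, completion_times))
    var = sum((x - mx) ** 2 for x in landmark_counts)
    return cov / var

def likely_strategy(slope, threshold=-0.5):
    """Heuristic label based on the slope; the threshold is illustrative."""
    return "egocentric" if slope < threshold else "allocentric"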
[00182] In a non-limiting example, the platform product (including using an APP) can be configured to analyze the measures of the individual's performance across the environments, and analyze how the individual's performance changes with the number of landmarks. This outcome from the analysis of the individual's performance can be compared between neurotypical individuals and/or individuals of known disease populations, to determine if the performance profile is different between the individual and the neurotypical individuals and/or individuals of known disease populations.
[00183] An example system, method, and apparatus according to the principles herein includes a platform product (including using an APP) that is configured to measure the navigation performance of an individual as measured by metrics such as but not limited to the distance traveled to reach one or more targets (e.g., where a shorter distance is used as a metric of better performance) or by the amount of time taken to reach the one or more targets (e.g., where a faster time is used as a metric of better performance), where the navigation task(s) is set in a virtual environment that is changing as the individual is traversing the environment. As non-limiting examples of changes, the landmark features can be changing (e.g., a tree changing color in a forest), the landmarks may be duplicated (e.g., the first landmark is a pink tree and more pink trees appear over time), the landmarks may be changing locations relative to the target(s) and/or other landmarks, the salience of landmarks may be changing (e.g., they are getting darker and/or the colors become less clear), or the ability to use landmarks may change (e.g., it becomes foggy and landmarks are less visible). The example platform product (including using an APP) can be configured to perform an analysis to compare performance metrics measured in the changing environment relative to a static environment, to identify the specific state of areas of the brain of an individual (e.g., whether these areas are similar to or different from that of a given
population, or show any benefit or deficit) and the individual's specific navigation strategy preferences.
[00184] An example system, method, and apparatus according to the principles herein includes a platform product (including using an APP) that is configured to measure the navigation performance of an individual as measured by metrics such as but not limited to the distance traveled to reach one or more targets (e.g., where a shorter distance is used as a metric of better performance) or by the amount of time taken to reach the one or more targets (e.g., where a faster time is used as a metric of better performance), where the navigation task(s) occur in a previously explored virtual environment where the starting point and/or target(s) require traversal of the environment via paths to which the individual was not previously exposed (and which thus were not previously learned). In one example implementation, this can be achieved by configuring the platform product to introduce new obstacles in the way of previously displayed (and thereby known) paths of the course. In another example implementation, this can be achieved by configuring the platform product to place intermediary target(s) at locations that are outside of previously traveled paths of the course. In another example implementation, this can be achieved by configuring the platform product to introduce a completely different path that never intersects with the previously traversed (and thereby learned) paths of the course. The example platform product (including using an APP) can be configured to perform an analysis to determine an individual's ability to navigate in this condition, as a better indication of tendency towards allocentric navigation than is possible with repeated wayfinding tasks along previously known paths.
[00185] An example system, method, and apparatus according to the principles herein includes a platform product (including using an APP) that is configured to measure the navigation performance of an individual as measured by metrics such as but not limited to the distance traveled to reach one or more targets (e.g., where a shorter distance is used as a metric of better performance) or by the amount of time taken to reach the one or more targets (e.g., where a faster time is used as a metric of better performance), where the navigation task(s) is in a previously explored virtual environment that is being traversed one or more additional times, potentially after varying levels of delay between repeated trials in that environment. In this example, the platform product can be configured to present other activities to the individual in the intervening periods, to introduce cognitive interference. In this example, the platform product can be configured to present other navigation activities that introduce spatial-memory-
specific interference, whereas non-navigation activities may be used to introduce other types of interference. The example platform product (including using an APP) can be configured to perform an analysis to compare the measurements from the previously explored virtual environment before and after the intervening periods to determine measures of the
improvement in the individual's performance over subsequent same-environment trials as an indication of the rate of learning. The example platform product (including using an APP) can be configured to perform an analysis to compare the measurements from the previously explored virtual environment before and after the intervening periods to determine measures of the changes in performance between two same-environment trials, and the degree of correlation with the amount of delay between the two repetitions, to determine the effect of time delay on an individual's ability at maintenance of spatial memories. The example platform product (including using an APP) can be configured to perform an analysis to compare the
measurements from the previously explored virtual environment before and after the intervening periods to determine measures contrasting trial-to-trial performance changes, where the intervening activities that introduced different types of interference can be used to provide a measure of how much of the interference effects are due specifically to any given type of interference (e.g., spatial memory interference) rather than just task-switching. The example platform product (including using an APP) can be configured to perform an analysis to compare the measurements from the previously explored virtual environment before and after the intervening periods to provide an indicator of the efficiency of spatial memory retrieval based on an analysis of the measures of the impact of spatial memory interference.
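As a non-limiting illustrative sketch, a learning rate over repeated same-environment trials and a simple subtraction model for isolating a specific interference type could be computed as follows; both are assumptions for illustration only:

from statistics import mean

def learning_rate(trial_times):
    """Mean per-trial improvement in completion time across repeated trials
    in the same environment (positive = getting faster)."""
    return mean(t0 - t1 for t0, t1 in zip(trial_times, trial_times[1:]))

def specific_interference(change_with_interference, change_with_switch_only):
    """Portion of the trial-to-trial performance change attributable to a
    given interference type (e.g., spatial memory interference) rather than
    task-switching alone."""
    return change_with_interference - change_with_switch_only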
[00186] An example system, method, and apparatus according to the principles herein includes a platform product (including using an APP) that is configured to measure the navigation performance of an individual (as measured by the distance traveled to reach one or more targets (e.g., where a shorter distance is used as a metric of better performance) or by the amount of time taken to reach the one or more targets (e.g., where a faster time is used as a metric of better performance)), where the navigation task(s) is in a virtual environment that is spatially analogous to a previously explored environment, but without the same visual cues. For example, the analogous environment may be the same as the original environment but with little or no lighting. Alternatively, the analogous environment may be on a different vertical plane (e.g., on a different floor of the same building, in the sky, or underground). Similarly, the analogous environment may have the same shape, but be on a different scale than the
previously explored environment. The example platform product (including using an APP) can be configured to perform an analysis to determine a measure of the individual's ability to navigate in this condition as an indication of allocentric navigation.
[00187] An example system, method, and apparatus according to the principles herein includes a platform product (including using an APP) that is configured to present an individual with a virtual environment that is spatially analogous to a previously explored environment, without the same visual cues, but without informing the individual which of multiple possible previous environments is the source. The example platform product (including using an APP) can be configured to measure the individual's ability to determine the actual source environment, either directly, by prompting the individual to make a choice after sufficient exploration (as a non-limiting example, with performance measures of the correctness of the choice and the exploration time required to arrive at that choice), or indirectly, by prompting the individual to perform movements and/or actions within the environment that correspond to locations within the source environment (as a non-limiting example, with performance measures of the distance traveled to one or more targets (e.g., where a shorter distance is used as a metric of better performance) or the amount of time taken to reach the one or more targets (e.g., where a faster time is used as a metric of better performance)). The example platform product (including using an APP) can be configured to perform an analysis to determine a measure of the individual's ability to determine the source environment as an indication of ability to flexibly manipulate multiple cognitive maps under uncertainty, a specific form of active spatial memory interference.
[00188] An example system, method, and apparatus according to the principles herein includes a platform product (including using an APP) that is configured to apply a predictive model to data indicative of the cognitive ability of the individual. The predictive model can be configured to apply computational techniques and machine learning tools, such as but not limited to linear/logistic regression, principal component analysis, generalized linear mixed models, random decision forests, support vector machines, or artificial neural networks, to the cData and nData to create composite variables or profiles that are more sensitive than each measurement alone for detecting disease or assessing cognitive health.
[00189] An example system, method, and apparatus according to the principles herein can be configured to train a predictive model of a measure of the cognitive capabilities of individuals based on the data measured from the performance at the navigation tasks
(allocentric and/or egocentric navigation tasks) of individuals that are previously classified as to the measure of cognitive abilities of interest. For example, a classifier can be trained using a plurality of training datasets, where each training dataset is associated with a previously classified individual from a group of individuals. Each of the training datasets includes data indicative of one or more parameters indicative of the performance of the classified individual at the task(s) (whether allocentric and/or egocentric navigation tasks), based on the classified individual's interaction with an example apparatus, system, or computing device described herein. The example classifier also can take as input data indicative of the performance of the classified individual at a cognitive test, and/or a behavioral test, and/or data indicative of a diagnosis of a likelihood of onset of, or stage of progression of, a neurodegenerative cognitive condition, a disease, or a disorder (including an executive function disorder) of the classified individual.
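As a non-limiting illustrative sketch of such training, assuming scikit-learn is available, feature rows could combine cData-derived metrics (e.g., path efficiency, pointing error) with nData, and labels could be the prior classifications of the training individuals; all names below are assumptions for illustration only:

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def train_cognitive_classifier(feature_rows, labels):
    """Fit a random decision forest to previously classified individuals and
    report cross-validated accuracy as a sanity check."""
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    cv_accuracy = cross_val_score(model, feature_rows, labels, cv=5).mean()
    model.fit(feature_rows, labels)
    return model, cv_accuracy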
[00190] In any example herein, the example trained predictive model can be used as an intelligent proxy for quantifiable assessments of an individual's cognitive abilities. That is, once a predictive model is trained, the predictive model output can be used to provide the indication of the cognitive capabilities of multiple individuals without use of a physiological measure, or another cognitive or behavioral assessment test. In an example, the trained predictive model can be used as an intelligent proxy to provide an indication of a likelihood of onset of a neurodegenerative condition of the individual, or the stage of progression of the neurodegenerative condition. In an example, the trained predictive model can be used as an intelligent proxy for subsequent measures of the neurodegenerative condition of the individual.
[00191] An example system, method, and apparatus according to the principles herein includes a platform product (including using an APP) that is configured to present any combination of one or more of the above-described performance metrics with standard cognitive tasks for navigation, such as the pathway span task, the dynamic maze task, the radial arm maze, and the Morris water navigation task. Through correlation of the results of the multiple performance measures described herein and two or more of the standard cognitive tasks, the combinations allow for greater precision in assessing brain function of an individual or group of individuals, standards setting, calibration of one metric as compared to another metric, and validation or corroboration of the results of one of the tools versus the others. That is, the standard cognitive tasks may test one type of navigation capability of the individual. However, the systems, methods, and apparatus described herein can be used to generate indicators of the individual's relative capabilities at allocentric tasks versus egocentric tasks.
[00192] An example system, method, and apparatus according to the principles herein includes a platform product (including using an APP) that is configured to present any combination of one or more of the above-described performance metrics with an interference processing or other multi-tasking task (such as but not limited to the dual task measurements performed using the Project: EVO™ platform).
[00193] An example system, method, and apparatus according to the principles herein includes a platform product (including using an APP) that is configured to present any combination of one or more of the above-described performance metrics with measurements of gross and fine motor function (as nData).
[00194] An example system, method, and apparatus according to the principles herein includes a platform product (including using an APP) that is configured to present any combination of one or more of the above-described performance metrics with standard cognitive tasks for working memory, such as spatial working memory. Through correlation of the results of the multiple performance measures described herein and two or more of the standard cognitive tasks, the combinations allow for greater precision in assessing brain function of an individual or group of individuals, standards setting, calibration of one metric as compared to another metric, and validation or corroboration of the results of one or more tools versus the others.
[00195] An example system, method, and apparatus according to the principles herein includes a platform product (including using an APP) that is configured to present any combination of one or more of the above-described performance metrics with voice/speech monitoring based measures of cognitive and behavioral health. Through correlation of the results of the multiple performance measures described herein and the voice/speech monitoring based measures, the combinations allow for greater precision in assessing brain function of an individual or group of individuals, standards setting, calibration of one metric as compared to another metric, and validation or corroboration of the results of one or more tools versus the others.
[00196] An example system, method, and apparatus according to the principles herein includes a platform product (including using an APP) that is configured to improve allocentric navigation as a treatment. For example, the example platform product can be configured to
adapt and/or increase the difficulty level of the navigation task(s) to improve wayfinding function. For example, the platform product can be configured to make it harder for the individual to rely on egocentric navigation by reducing the number of landmarks presented to the individual for use in a virtual space over time. As another example, the platform product can be configured to expand the size of the virtual environment so that there is more information for an individual to evaluate in order to make choices in the navigation. As another example, the platform product can be configured to make multiple virtual environments with the same visual landmarks in different positions, so that interference of the landmarks reduces the use of egocentric navigation. As another example, the platform product can be configured to present maps to the individual with increasingly incomplete information (for example, by gradually reducing the number of landmarks present in the landscape). As another example, the platform product can be configured to put obstacles in the way of the
known/previously trained route to increase difficulty and force an individual to use allocentric navigation techniques. As another example, the platform product can be configured to place starting points and one or more targets in different locations than in a previous session in a given environment, to force an individual to use allocentric strategies. As another example, the platform product can be configured to cause the individual to interact with environments analogous to previously explored environments and require the individual to employ knowledge of the source environment to reach the one or more targets in the second
environment, where the degree of difference between the source and analogous (second) environments may vary as desired. As another example, the platform product can be configured to introduce interfering activities of varying difficulty and/or duration in between navigation trials to stress maintenance and retrieval of spatial memory. As another example, the platform product can be configured to vary the number of possible source environments for an analogous (second) environment and/or the amount of information or time available with which to determine which is the source environment. As another example, the platform product can be configured to present any combination of two or more of these changes at substantially the same time or at differing times within the same session.
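By way of non-limiting illustration, the following minimal sketch (in Python, with hypothetical class and parameter names that are not part of any actual platform API) shows one way the difficulty dimensions described in paragraph [00196] might be adapted together:

```python
from dataclasses import dataclass

@dataclass
class NavigationTaskConfig:
    """Hypothetical difficulty parameters for a navigation task."""
    num_landmarks: int = 8          # landmarks available as allocentric cues
    environment_scale: float = 1.0  # relative size of the virtual environment
    num_obstacles: int = 0          # obstacles blocking previously trained routes
    map_completeness: float = 1.0   # fraction of landmarks shown on the map

def increase_difficulty(cfg: NavigationTaskConfig) -> NavigationTaskConfig:
    """Tighten each dimension named in paragraph [00196]: fewer landmarks,
    a larger environment, more obstacles, and a less complete map."""
    return NavigationTaskConfig(
        num_landmarks=max(0, cfg.num_landmarks - 1),
        environment_scale=cfg.environment_scale * 1.25,
        num_obstacles=cfg.num_obstacles + 1,
        map_completeness=max(0.0, cfg.map_completeness - 0.1),
    )
```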
[00197] An example system, method, and apparatus according to the principles herein includes a platform product (including using an APP) that is configured to communicate with a physiological measurement component for measuring nData (from physiological
measurements). For example, whether a person is actually using allocentric navigation or egocentric navigation can be confirmed via fMRI while the individual performs a navigation task. If fMRI indicates that there is activity in the hippocampus (i.e., nData showing stronger BOLD fMRI contrast in this region of the brain), the individual is likely using an allocentric strategy. If fMRI indicates that there is activity in the caudate nucleus (i.e., nData showing stronger BOLD fMRI contrast in this region of the brain), the person is likely using an egocentric strategy.
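A minimal sketch, assuming region-averaged BOLD contrast values on a common scale, of the strategy inference described in paragraph [00197]; the simple pairwise comparison rule is an illustrative simplification, not a prescribed analysis:

```python
def infer_navigation_strategy(hippocampus_bold: float,
                              caudate_bold: float) -> str:
    """Stronger BOLD contrast in the hippocampus suggests an allocentric
    strategy; stronger contrast in the caudate nucleus suggests an
    egocentric strategy (paragraph [00197])."""
    if hippocampus_bold > caudate_bold:
        return "allocentric"
    if caudate_bold > hippocampus_bold:
        return "egocentric"
    return "indeterminate"
```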
[00198] The strength of hippocampal function can correlate with structural MRI measurements such as volume, cortical thickness, etc. This in turn can correlate with the ability of an individual to use allocentric navigation. The strength of caudate nucleus function can correlate with volume, and the ability of an individual to use egocentric navigation.
[00199] Changes in hippocampal volume, e.g., decreases resulting from disease progression or increases as a result of therapy, can correlate with corresponding changes in the individual's ability to use allocentric navigation. Measurements of allocentric strategy efficiency can therefore be used as indicators of disease progression or treatment efficacy. Such measures also can be used to determine the appropriate levels of difficulty to be used in the navigation-based treatment using the platform product(s) described herein.
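A non-limiting sketch of mapping an allocentric-strategy efficiency measure to a treatment difficulty level; the normalized input range, the direction of the mapping, and its linear form are assumptions rather than requirements of the disclosure:

```python
def difficulty_from_allocentric_efficiency(efficiency: float,
                                           levels: int = 10) -> int:
    """Pick a difficulty level from allocentric-strategy efficiency
    (paragraph [00199]). `efficiency` is assumed normalized to [0, 1];
    a more efficient navigator is assigned a higher level."""
    efficiency = min(max(efficiency, 0.0), 1.0)  # clamp to the assumed range
    return 1 + round(efficiency * (levels - 1))
```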
[00200] As a non-limiting example, the cognitive platform based on interference processing can be the Project: EVO™ platform by Akili Interactive Labs, Inc., Boston, MA.
[00201] An example system, method, and apparatus according to the principles herein includes a platform product (including using an APP) that is configured to set baseline performance metrics for the navigation task(s) in APP session(s) based on nData measurements indicative of physiological condition and/or cognitive condition (including indicators of neuropsychological disorders), to increase accuracy of assessment and efficiency of treatment. The CSIs may be used to calibrate an nData component to the individual user's nData dynamics.
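A minimal sketch of calibrating an nData channel to an individual user's own baseline dynamics, as paragraph [00201] describes; the z-score normalization is an illustrative choice, not one mandated by the disclosure:

```python
import statistics
from typing import Callable

def calibrate_ndata(baseline_samples: list[float]) -> Callable[[float], float]:
    """Return a normalizer that interprets later raw readings relative
    to this user's baseline mean and spread."""
    mean = statistics.fmean(baseline_samples)
    stdev = statistics.stdev(baseline_samples) if len(baseline_samples) > 1 else 1.0
    stdev = stdev or 1.0  # guard against a perfectly constant baseline
    return lambda raw: (raw - mean) / stdev
```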
[00202] An example system, method, and apparatus according to the principles herein includes a platform product (including using an APP) that is configured to use nData to detect states of attentiveness or inattentiveness to optimize delivery of navigation task(s) related to treatment or assessment.
[00203] An example system, method, and apparatus according to the principles herein includes a platform product (including using an APP) that is configured to use analysis of nData with navigation task(s) cData to detect and direct attention to specific CSIs related to treatment or assessment through subtle or overt manipulation of CSIs.
[00204] An example system, method, and apparatus according to the principles herein includes a platform product (including using an APP) that is configured to monitor nData indicative of anger and/or frustration to promote continued user interaction with the cognitive platform by offering alternative navigation task(s) or disengagement from the navigation task(s).
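A non-limiting sketch of such an affect-driven policy, assuming a hypothetical frustration score already derived from nData and an arbitrary threshold:

```python
def respond_to_affect(frustration_score: float,
                      threshold: float = 0.8) -> str:
    """When nData-derived frustration exceeds a (hypothetical) threshold,
    offer an alternative navigation task or allow disengagement rather
    than continuing the current task (paragraph [00204])."""
    if frustration_score >= threshold:
        return "offer_alternative_task_or_disengagement"
    return "continue_current_task"
```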
[00205] An example system, method, and apparatus according to the principles herein includes a platform product (including using an APP) that is configured to combine signals from navigation task(s) cData with nData to optimize individualized treatment promoting improvement of indicators of cognitive abilities, and thereby, cognition.
[00206] An example system, method, and apparatus according to the principles herein includes a platform product (including using an APP) that is configured to use a profile of nData to confirm/verify/authenticate a user's identity.
[00207] An example system, method, and apparatus according to the principles herein includes a platform product (including using an APP) that is configured to use nData to detect positive emotional response to CSIs in navigation task(s) in order to catalog individual user preferences to customize CSIs to optimize enjoyment and promote continued engagement with assessment or treatment sessions.
[00208] An example system, method, and apparatus according to the principles herein includes a platform product (including using an APP) that is configured to generate user profiles of cognitive improvement (such as but not limited to, user profiles associated with users classified or known to exhibit improved working memory, attention, processing speed, and/or perceptual detection/discrimination), and deliver a treatment that adapts navigation task(s) to optimize the profile of a new user as confirmed by profiles from nData.
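A minimal sketch of matching a new user to a stored improvement profile as described in paragraph [00208]; the feature names, nearest-neighbor rule, and Euclidean distance are illustrative assumptions:

```python
import math

def nearest_improvement_profile(user_vector: dict[str, float],
                                profiles: dict[str, dict[str, float]]) -> str:
    """Find the stored improvement profile (e.g., 'working_memory',
    'attention') whose feature vector is closest to the new user's
    measured vector, so the navigation task(s) can be adapted toward it."""
    def distance(a: dict[str, float], b: dict[str, float]) -> float:
        keys = set(a) | set(b)
        return math.sqrt(sum((a.get(k, 0.0) - b.get(k, 0.0)) ** 2 for k in keys))
    return min(profiles, key=lambda name: distance(user_vector, profiles[name]))
```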
[00209] An example system, method, and apparatus according to the principles herein includes a platform product (including using an APP) that is configured to provide to a user a selection of one or more profiles configured for cognitive improvement.
[00210] An example system, method, and apparatus according to the principles herein includes a platform product (including using an APP) that is configured to monitor nData from auditory and visual physiological measurements to detect interference from external environmental sources that may interfere with the assessment or treatment being performed by a user using an APP.
[00211] An example system, method, and apparatus according to the principles herein
includes a platform product (including using an APP) that is configured to use cData and/or nData (including metrics from analyzing the data) as a determinant or to make a decision as to whether a user (including a patient using a medical device) is likely to respond or not to respond to a treatment (such as but not limited to a cognitive treatment and/or a treatment using a biologic, a drug or other pharmaceutical agent). For example, the system, method, and apparatus can be configured to select whether a user (including a patient using a medical device) should receive treatment based on specific physiological or cognitive measurements that can be used as signatures that have been validated to predict efficacy in a given individual or certain individuals of the population (e.g., individual(s) classified to a given group based on amyloid status). Such an example system, method, and apparatus configured to perform the analysis (and associated computation) described herein can be used as a biomarker to perform monitoring and/or screening. As a non-limiting example, the example system, method, and apparatus can be configured to provide a quantitative measure of the degree of efficacy of a cognitive treatment (including the degree of efficacy in conjunction with use of a biologic, a drug or other pharmaceutical agent) for a given individual or certain individuals of the population (e.g., individual(s) classified to a given group based on amyloid status). In some examples, the individual or certain individuals of the population may be classified as having a certain condition, including a neurodegenerative condition.
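A non-limiting sketch of such a responder/non-responder decision, assuming feature weights that have been validated in advance for the relevant group (e.g., by amyloid status); the logistic scoring form is an illustrative choice, not the disclosure's prescribed method:

```python
import math

def predicted_responder(features: dict[str, float],
                        weights: dict[str, float],
                        bias: float = 0.0,
                        cutoff: float = 0.5) -> bool:
    """Decide from cData/nData-derived features whether a user is likely
    to respond to a treatment, using a previously validated signature."""
    z = bias + sum(weights.get(name, 0.0) * value
                   for name, value in features.items())
    z = max(min(z, 50.0), -50.0)  # guard against overflow in exp()
    probability = 1.0 / (1.0 + math.exp(-z))
    return probability >= cutoff
```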
[00212] An example system, method, and apparatus according to the principles herein includes a platform product (including using an APP) that is configured to use nData to monitor a user's ability to anticipate the course of navigation task(s) and manipulate navigation task(s) patterns and/or rules to disrupt user anticipation of response to navigation task(s), to optimize treatment or assessment in an APP.
[00213] Non-limiting examples of analysis (and associated computations) that can be performed based on various combinations of different types of nData and cData are described. The following example analyses and associated computations can be implemented using any example system, method and apparatus according to the principles herein. As described hereinabove, the example systems, methods, and apparatus according to the principles herein can be implemented, using at least one processing unit of a programmed computing device, to provide the cognitive platform of a platform product. FIG. 11 shows an example apparatus 1100 according to the principles herein that can be used to implement the cognitive platform described herein. The example apparatus 1100 includes at least one memory 1102 and at least
one processing unit 1104. The at least one processing unit 1104 is communicatively coupled to the at least one memory 1102.
[00214] Example memory 1102 can include, but is not limited to, hardware memory, non-transitory tangible media, magnetic storage disks, optical disks, flash drives, computational device memory, random access memory, such as but not limited to DRAM, SRAM, EDO
RAM, any other type of memory, or combinations thereof. Example processing unit 1104 can include, but is not limited to, a microchip, a processor, a microprocessor, a special purpose processor, an application specific integrated circuit, a microcontroller, a field programmable gate array, any other suitable processor, or combinations thereof.
[00215] The at least one memory 1102 is configured to store processor-executable instructions 1106 and a computing component 1108. In a non-limiting example, the computing component 1108 can be used to analyze the cData and/or nData received from the cognitive platform coupled with the one or more physiological or monitoring components and/or cognitive testing components as described herein. As shown in FIG. 11, the memory 1102 also can be used to store data 1110, such as but not limited to the nData 1112 (including
measurement data from measurement(s) using one or more physiological or monitoring components and/or cognitive testing components) and/or data indicative of the response of an individual to the one or more tasks (cData), including responses to tasks rendered at a graphical user interface of the apparatus 1100 and/or tasks generated using an auditory, tactile, or vibrational signal from an actuating component coupled to or integral with the apparatus 1100. The data 1110 can be received from one or more physiological or monitoring components and/or cognitive testing components that are coupled to or integral with the apparatus 1100.
[00216] In a non-limiting example, the at least one processing unit 1104 executes the processor-executable instructions 1106 stored in the memory 1102 at least to analyze the cData and/or nData received from the cognitive platform coupled with the one or more physiological or monitoring components and/or cognitive testing components as described herein, using the computing component 1108. The at least one processing unit 1104 also executes processor-executable instructions 1106 to control a transmission unit to transmit values indicative of the analysis of the cData and/or nData received from the cognitive platform coupled with the one or more physiological or monitoring components and/or cognitive testing components as described herein, and/or controls the memory 1102 to store values indicative of the analysis of the cData and/or nData.
[00217] In another non-limiting example, the at least one processing unit 1104 executes the processor-executable instructions 1106 stored in the memory 1102 at least to display a representation of navigating in the computerized environment in response to physical actions of the individual in performing a navigation task, to collect measurement data from measurements of physical actions of the individual in performing the navigation task, to adjust a difficulty of the navigation task, to compute a performance metric based on the measurement data, and/or to provide an indication of the cognitive ability of the individual.
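A condensed, non-limiting sketch of the processing steps named in paragraph [00217]; the `platform` object and its method names are hypothetical stand-ins for those operations, not part of any actual platform API:

```python
def run_navigation_session(platform, task, trials: int = 10):
    """One pass per trial: render, measure, score, adapt; then report."""
    metric = None  # remains None if no trials are run
    for _ in range(trials):
        platform.display_navigation(task)             # render environment and indicator
        data = platform.collect_measurements(task)    # cData from the individual's physical actions
        metric = platform.compute_performance(data)   # performance metric from measurement data
        task = platform.adjust_difficulty(task, metric)
    return platform.indication_of_cognitive_ability(metric)
```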
[00218] FIG. 12 is a block diagram of an example computing device 1210 that can be used as a computing component according to the principles herein. In any example herein, computing device 1210 can be configured as a console that receives user input to implement the computing component, including to display a representation of navigating in the
computerized environment in response to physical actions of the individual in performing a navigation task, to collect measurement data from measurements of physical actions of the individual in performing the navigation task, to adjust a difficulty of the navigation task, to compute a performance metric based on the measurement data, and/or to provide an indication of the cognitive ability of the individual. For clarity, FIG. 12 also refers back to and provides greater detail regarding various elements of the example system of FIG. 11. The computing device 1210 can include one or more non-transitory computer-readable media for storing one or more computer-executable instructions or software for implementing examples. The non-transitory computer-readable media can include, but are not limited to, one or more types of hardware memory, non-transitory tangible media (for example, one or more magnetic storage disks, one or more optical disks, one or more flash drives), and the like. For example, memory 1102 included in the computing device 1210 can store computer-readable and computer-executable instructions or software for performing the operations disclosed herein. For example, the memory 1102 can store a software application 240 that is configured to perform various of the disclosed operations, e.g., to analyze cognitive platform measurement data and response data (including data responsive to physical actions of the individual in performing the navigation task(s)), to display a representation of navigating in the computerized environment in response to physical actions of the individual in performing a navigation task, to collect measurement data from measurements of physical actions of the individual in performing the navigation task, to adjust a difficulty of the navigation task, to compute a performance metric based on the measurement data, and/or to provide an indication of the cognitive ability of the
individual. The computing device 1210 also includes configurable and/or programmable processor 1104 and an associated core 1214, and optionally, one or more additional configurable and/or programmable processing devices, e.g., processor(s) 1212' and associated core(s) 1214' (for example, in the case of computational devices having multiple
processors/cores), for executing computer-readable and computer-executable instructions or software stored in the memory 1102 and other programs for controlling system hardware. Processor 1104 and processor(s) 1212' can each be a single-core processor or a multiple-core (1214 and 1214') processor.
[00219] Virtualization can be employed in the computing device 1210 so that infrastructure and resources in the console can be shared dynamically. A virtual machine 1224 can be provided to handle a process executing on multiple processors so that the process appears to be using only one computing resource rather than multiple computing resources. Multiple virtual machines can also be used with one processor.
[00220] Memory 1102 can include a computational device memory or random access memory, such as but not limited to DRAM, SRAM, EDO RAM, and the like. Memory 1102 can include a non-volatile memory, such as but not limited to a hard disk or flash memory. Memory 1102 can include other types of memory as well, or combinations thereof.
[00221] In a non-limiting example, the memory 1102 and at least one processing unit
1104 can be components of a peripheral device, such as but not limited to a dongle (including an adapter) or other peripheral hardware. The example peripheral device can be programmed to communicate with or otherwise couple to a primary computing device, to provide the functionality of any of the example cognitive platform and/or platform product, and implement any of the example analyses (including the associated computations) described herein. In some examples, the peripheral device can be programmed to directly communicate with or otherwise couple to the primary computing device (such as but not limited to via a USB or HDMI input), or indirectly via a cable (including a coaxial cable), copper wire (including, but not limited to, PSTN, ISDN, and DSL), optical fiber, or other connector or adapter. In another example, the peripheral device can be programmed to communicate wirelessly (such as but not limited to Wi-Fi or Bluetooth®) with the primary computing device. The example primary computing device can be a smartphone (such as but not limited to an iPhone®, a BlackBerry®, or an Android™-based smartphone), a television, a workstation, a desktop computer, a laptop, a tablet, a slate computer, an electronic-reader (e-reader), a digital assistant, or other electronic
reader or hand-held, portable, or wearable computing device, or any other equivalent device, a gaming device (such as but not limited to an Xbox®, or a Wii®), or other equivalent form of computing device.
[00222] A user can interact with the computing device 1210 through a visual display unit 1228, such as a computer monitor, which can display one or more user interfaces (UI) 1230 that can be provided in accordance with example systems and methods. The example of FIG. 12 encompasses a visual display unit 1228 as a component in communication with the computing device 1210, or a visual display 1228 configured as a display that is an integral portion of the computing device 1210 (such as but not limited to a touch screen or other contact or pressure sensitive screen of a computing device). The computing device 1210 can include other input/output (I/O) devices for receiving input from a user, for example, a keyboard or any suitable multi-point touch interface 1218, a pointing device 1220 (e.g., a mouse), a camera or other image recording device, a microphone or other sound recording device, an accelerometer, a gyroscope, a sensor for tactile, vibrational, or auditory signal, and/or at least one actuator. The keyboard 1218 and the pointing device 1220 can be coupled to the visual display unit 1228. The computing device 1210 can include other suitable conventional I/O peripherals.
[00223] The computing device 1210 can also include one or more storage devices 1234 and an associated core 1236, such as a hard-drive, CD-ROM, or other computer readable media, for storing data and computer-readable instructions and/or software that perform operations disclosed herein. Example storage device 1234 (and associated core 1236) can also store one or more databases for storing any suitable information required to implement example systems and methods. The databases can be updated manually or automatically at any suitable time to add, delete, and/or update one or more items in the databases.
[00224] The computing device 1210 can include a network interface 1222 configured to interface via one or more network devices 1232 with one or more networks, for example, Local Area Network (LAN), metropolitan area network (MAN), Wide Area Network (WAN) or the Internet through a variety of connections including, but not limited to, standard telephone lines, LAN or WAN links (for example, 802.11, T1, T3, 56kb, X.25), broadband connections (for example, ISDN, Frame Relay, ATM), wireless connections, controller area network (CAN), or some combination of any or all of the above. The network interface 1222 can include a built-in network adapter, network interface card, PCMCIA network card, card bus network adapter, wireless network adapter, USB network adapter, modem or any other device suitable for
interfacing the computing device 1210 to any type of network capable of communication and performing the operations described herein. Moreover, the computing device 1210 can be any computational device, such as a smartphone (such as but not limited to an iPhone®, a
BlackBerry®, or an Android™-based smartphone), a television, a workstation, a desktop computer, a server, a laptop, a tablet, a slate, an electronic-reader (e-reader), a digital assistant, or other electronic reader or hand-held, portable, or wearable computing device, or any other equivalent device, an Xbox®, a Wii®, or other equivalent form of computing or
telecommunications device that is capable of communication and that has or can be coupled to sufficient processor power and memory capacity to perform the operations described herein. The one or more network devices 1232 may communicate using different types of protocols, such as but not limited to WAP (Wireless Application Protocol), TCP/IP (Transmission Control Protocol/Internet Protocol), NetBEUI (NetBIOS Extended User Interface), or IPX/SPX
(Internetwork Packet Exchange/Sequenced Packet Exchange).
[00225] The computing device 1210 can execute any operating system 1226, such as any of the versions of the Microsoft® Windows® operating systems, iOS® operating system, Android™ operating system, the different releases of the Unix and Linux operating systems, any version of the MacOS® for Macintosh computers, any embedded operating system, any real-time operating system, any open source operating system, any proprietary operating system, or any other operating system capable of executing on the console and performing the operations described herein. In some examples, the operating system 1226 can be executed in native mode or emulated mode. In an example, the operating system 1226 can be executed on one or more cloud machine instances.
[00226] Examples of the systems, methods and operations described herein can be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more thereof. Examples of the systems, methods and operations described herein can be implemented as one or more computer programs, i.e., one or more modules of computer program instructions, encoded on computer storage medium for execution by, or to control the operation of, data processing apparatus. The program instructions can be encoded on an artificially generated propagated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, that is generated to encode information for transmission to suitable receiver apparatus for execution by a data processing apparatus. A computer storage
medium can be, or be included in, a computer-readable storage device, a computer-readable storage substrate, a random or serial access memory array or device, or a combination of one or more of them. Moreover, while a computer storage medium is not a propagated signal, a computer storage medium can be a source or destination of computer program instructions encoded in an artificially generated propagated signal. The computer storage medium can also be, or be included in, one or more separate physical components or media (e.g., multiple CDs, disks, or other storage devices).
[00227] The operations described in this specification can be implemented as operations performed by a data processing apparatus on data stored on one or more computer-readable storage devices or received from other sources.
[00228] The term "data processing apparatus" or "computing device" encompasses all kinds of apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, a system on a chip, or multiple ones, or combinations, of the foregoing. The apparatus can include special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit). The apparatus can also include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, a cross-platform runtime environment, a virtual machine, or a combination of one or more of them.
[00229] A computer program (also known as a program, software, software application, script, application or code) can be written in any form of programming language, including compiled or interpreted languages, declarative or procedural languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, object, or other unit suitable for use in a computing environment. A computer program may, but need not, correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
[00230] The processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to
perform actions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatuses can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit).
[00231] Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read only memory or a random access memory or both. The essential elements of a computer are a processor for performing actions in accordance with instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks. However, a computer need not have such devices. Moreover, a computer can be embedded in another device, e.g., a mobile telephone, a personal digital assistant (PDA), a mobile audio or video player, a game console, a Global Positioning System (GPS) receiver, or a portable storage device (e.g., a universal serial bus (USB) flash drive), for example. Devices suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and CD ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
[00232] To provide for interaction with a user, embodiments of the subject matter described in this specification can be implemented on a computer having a display device for displaying information to the user, and a keyboard and a pointing device, e.g., a mouse, a stylus, a touch screen, or a trackball, by which the user can provide input to the computer. Other kinds of devices can be used to provide for interaction with a user as well. For example, feedback (i.e., output) provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input. In addition, a computer can interact with a user by sending documents to and receiving documents from a device that is used by the user; for example, by sending web pages to a web browser on a user's client device in response to requests received from the web browser.
[00233] In some examples, a system, method or operation herein can be implemented in a computing system that includes a back end component, e.g., as a data server, or that includes a middleware component, e.g., an application server, or that includes a front end component, e.g., a client computer having a user interface or a Web browser through which a user can interact with an implementation of the subject matter described in this specification, or any combination of one or more such back end, middleware, or front end components. The components of the system can be interconnected by any form or medium of digital data communication, e.g., a communication network. Examples of communication networks include a local area network ("LAN") and a wide area network ("WAN"), an inter-network (e.g., the Internet), and peer-to- peer networks (e.g., ad hoc peer-to-peer networks).
[00234] Example computing system herein can include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs executing on the respective computers and having a client-server relationship to each other. In some embodiments, a server transmits data to a client device (e.g., for purposes of displaying data to and receiving user input from a user interacting with the client device). Data generated at the client device (e.g., a result of the user interaction) can be received from the client device at the server.
[00235] FIGs. 13A - 13B show a flowchart of a non-limiting example method that can be implemented using a cognitive platform or platform product that includes at least one processing unit. In block 1302, the at least one processing unit is used to render at least one graphical user interface to present the navigation task(s) (including route-learning tasks and/or relative-orientation tasks and/or way-finding tasks) and one or more CSI to the user for interaction. In block 1304, the at least one processing unit is used to cause a
component of the program product to receive data indicative of the performance of the navigation task (including route-learning tasks and/or relative-orientation tasks and/or way-finding tasks) and/or at least one user response based on the user interaction with the CSI (such as but not limited to cData), including responses provided using the input device. In block 1306, the at least one processing unit is used to cause a component of the program product to receive nData indicative of measurements made before, during, and/or after the user interacts with the cognitive platform (including nData from measurements of physiological or monitoring components and/or cognitive testing components). In an example implementation
of the method, block 1304 may be performed in a similar timeframe, or substantially simultaneously, with block 1306. In another example implementation of the method, block 1304 may be performed at different timepoints than block 1306. In block 1308, the at least one processing unit also is used to: analyze the cData and/or nData to provide a measure of the individual's condition (including cognitive condition), and/or analyze the cData and/or nData to provide a measure of the individual's performance metric for a given type of navigation task, including route-learning tasks and/or relative-orientation tasks and/or way-finding tasks (whether the navigation task requires allocentric navigation and/or egocentric navigation), and/or analyze the differences in the individual's performance based on determining the differences between the user's performance at allocentric navigation as compared to the user's performance at egocentric navigation (including based on differences in the cData) and differences in the associated nData, and/or adjust the difficulty level of the navigation task(s), including route-learning tasks and/or relative-orientation tasks and/or way-finding tasks (including CSIs), based on the analysis of the cData (including the measures of the individual's performance determined in the analysis), and/or provide an output or other feedback from the platform product that can be indicative of the individual's performance metric, and/or cognitive abilities (including for screening, monitoring or assessment), and/or response to cognitive treatment, and/or assessed measures of cognition, and/or classify an individual as to amyloid status, and/or presence or expression level of tau proteins, and/or potential efficacy of use of the cognitive platform or platform product when the individual is administered a drug, biologic or other pharmaceutical agent, and/or expected score from the individual's performance of a TOVA® test and/or a RAVLT™ test, and/or classify an individual as to likelihood of onset and/or stage of progression of a condition, and/or to determine a change in dosage (such as but not limited to an amount, concentration, and/or dose titration) of a drug, biologic or other pharmaceutical agent to the individual or to determine an optimal type or combination of drug, biologic or other pharmaceutical agent to the individual, based on nData and the cData collected from the individual's interaction with the cognitive platform or platform product and/or metrics computed based on the analysis (and associated computations) of that cData and the nData.
[00236] FIG. 13C shows a flowchart of a non-limiting example method that can be implemented using a cognitive platform or platform product that includes at least one processing unit. The example cognitive platform or platform product includes a memory to
store processor-executable instructions, and one or more processing units communicatively coupled to the user interface and the memory. Upon execution of the processor-executable instructions by the one or more processing units, the one or more processing units are configured to execute the method in the flowchart of FIG. 13C. In block 1322, the one or more processing units are used to present via the user interface a first task that requires navigation of a specified route through an environment. In block 1324, the one or more processing units are used to present via the user interface a first indicator configured to navigate the specified route from an initial point in the environment to a target end-point with or without input from the individual. In block 1326, the one or more processing units are used to configure the user interface to display instructions to the individual to perform a second task, the second task requiring the individual either: (i) to navigate a reverse of at least a portion of the specified route, or (ii) to navigate at least a portion of the specified route at least one additional time. In block 1328, the one or more processing units are used to present via the user interface a second indicator configured to navigate in the environment in response to physical actions of the individual to control one of (i) a relative direction of the second indicator, or (ii) a speed of movement of the second indicator, or (iii) both (i) and (ii), to perform the second task. In block 1330, the one or more processing units are used to obtain measurement data by measuring data indicative of the physical actions of the individual to control the second indicator in performing the second task. In block 1332, the one or more processing units are used to analyze the measurement data to generate a performance metric for the performance of the second task, the performance metric providing an indication of the cognitive ability of the individual.
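By way of non-limiting illustration, a performance metric for the second task of FIG. 13C might combine the measured quantities as follows; the weights and the particular monotone combination are assumptions, since the disclosure names the inputs but not a formula:

```python
def route_reversal_metric(total_time_s: float,
                          wrong_turns: int,
                          wrong_directions: int,
                          route_deviation: float) -> float:
    """Combine completion time, incorrect turns, incorrect directions of
    movement, and deviation from the specified route into one score for
    the route-reversal/repetition task. Higher score = better performance."""
    penalty = (0.1 * total_time_s + 2.0 * wrong_turns
               + 2.0 * wrong_directions + 5.0 * route_deviation)
    return 100.0 / (1.0 + penalty)
```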
[00237] FIG. 13D shows a flowchart of a non-limiting example method that can be implemented using a cognitive platform or platform product that includes at least one processing unit. The example cognitive platform or platform product includes a memory to store processor-executable instructions, and one or more processing units communicatively coupled to the user interface and the memory. Upon execution of the processor-executable instructions by the one or more processing units, the one or more processing units are configured to execute the method in the flowchart of FIG. 13D. In block 1342, the one or more processing units are used to present via the user interface a first task that requires navigation of a course that includes at least one turn of a discrete angular amount in an environment. In block 1344, the one or more processing units are used to present via the user interface a first indicator configured to navigate in the environment in response to physical actions of the
individual to control the first indicator from an initial point of the course to a target end-point. In block 1346, the one or more processing units are used to configure the user interface to display instructions to the individual to perform a second task, the second task requiring the individual to control a second indicator to indicate a relative orientation of the initial point or a different specified location in the environment relative to the target end-point. In block 1348, the one or more processing units are used to measure data indicative of the relative orientation indicated using the second indicator. In block 1350, the one or more processing units are used to analyze the measurement data to generate a performance metric for the performance of the second task, the performance metric providing an indication of the cognitive ability of the individual.
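A minimal sketch of scoring the relative-orientation task of FIG. 13D as an angular error; the wrap-around arithmetic is standard, while its use as the performance metric here is an assumption:

```python
def orientation_error_deg(indicated_bearing: float,
                          true_bearing: float) -> float:
    """Angular error (degrees) between the relative orientation the
    individual indicates with the second indicator and the true bearing
    from the target end-point back to the initial point."""
    error = (indicated_bearing - true_bearing) % 360.0
    return min(error, 360.0 - error)  # smallest angular difference
```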
[00238] FIG. 13E shows a flowchart of a non-limiting example method that can be implemented using a cognitive platform or platform product that includes at least one processing unit. The example cognitive platform or platform product includes a memory to store processor-executable instructions, and one or more processing units communicatively coupled to the user interface and the memory. Upon execution of the processor-executable instructions by the one or more processing units, the one or more processing units are configured to execute the method in the flowchart of FIG. 13E. In block 1362, the one or more processing units are used to present via the user interface a first task that requires the individual to navigate in an environment. The first task comprises an exploration phase in which the one or more processing units are configured to provide to the individual control of a first indicator to navigate in at least a portion of the environment from an initial point either (i) along a specified route or (ii) without restriction in a free-exploration phase. In the exploration phase, the environment comprises one or more of a specified location, a specified landmark, or a specified object. The first indicator is configured to navigate in the environment based on physical actions of the individual to control one of (i) a relative direction of the first indicator, (ii) a speed of movement of the first indicator, or (iii) both (i) and (ii), to perform the first task. In block 1364, the one or more processing units are used to configure the user interface to display instructions to the individual to perform a second task, the second task requiring navigation to one or more of the specified location, the specified landmark feature, or the specified object. In block 1366, the one or more processing units are used to present via the user interface a second indicator configured to navigate in the environment based on physical actions of the individual to control one of (i) a relative direction of the second indicator, (ii) a
speed of movement of the second indicator, or (iii) both (i) and (ii), to navigate to the specified location, the specified landmark feature, or the specified object based on the instructions, where the specified location, the specified landmark, or the specified object are not displayed to the individual during performance of the second task. In block 1368, the one or more processing units are used to obtain measurement data by measuring data indicative of the physical actions of the individual in performing the second task. In block 1370, the one or more processing units are used to analyze the measurement data to generate a performance metric for the performance of the second task, the performance metric providing an indication of the cognitive ability of the individual.
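A non-limiting sketch of one wayfinding metric for the second task of FIG. 13E, comparing the route the individual actually traveled (from memory, with the target hidden) against the shortest possible route; the ratio form is an assumption:

```python
def path_efficiency(traveled_path_length: float,
                    shortest_path_length: float) -> float:
    """Ratio of the shortest route to the traveled route when navigating
    to the remembered location, landmark, or object; 1.0 is optimal."""
    if traveled_path_length <= 0:
        raise ValueError("traveled path length must be positive")
    return min(1.0, shortest_path_length / traveled_path_length)
```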
[00239] FIG. 13F shows a flowchart of a non-limiting example method that can be implemented using a cognitive platform or platform product that includes at least one processing unit. The example cognitive platform or platform product includes a memory to store processor-executable instructions, and one or more processing units communicatively coupled to the user interface and the memory. Upon execution of the processor-executable instructions by the one or more processing units, the one or more processing units are configured to execute the method in the flowchart of FIG. 13F. In block 1382, the one or more processing units are used to present via the user interface a first task that requires the individual to navigate in an environment. A first portion of the first task comprises an exploration phase in which the one or more processing units are configured to provide to the individual control of a first indicator to navigate in at least a portion of the environment from an initial point either (i) along a specified route or (ii) without restriction in a free-exploration phase. In the exploration phase, the environment comprises one or more of a specified location, a specified landmark, or a specified object. The first indicator is configured to navigate in the environment based on physical actions of the individual to control one of (i) a relative direction of the first indicator, (ii) a speed of movement of the first indicator, or (iii) both (i) and (ii), to perform the first portion of the first task. In block 1384, the one or more processing units are used to configure the user interface to display instructions to the individual to perform a second portion of the first task requiring navigation to one or more of the specified location, the specified landmark feature, or the specified object. In block 1386, the one or more processing units are used to obtain measurement data by measuring data indicative of the physical actions of the individual in performing the second portion of the first task. In block 1388, the one or more processing units are used to analyze the measurement data to generate a performance metric for
the performance of the first task, the performance metric providing an indication of the cognitive ability of the individual.
[00240] In an example system, method and apparatus, prior to rendering the tasks at the user interface, the at least one processing unit is configured to cause a component of the program product to receive nData indicative of one or more of an amount, concentration, or dose titration of a pharmaceutical agent, drug, or biologic being or to be administered to an individual. Based at least in part on the analysis of the cData collected from the individual's performance of the navigation task(s), the at least one processing unit is configured to generate an output to the user interface indicative of a change in the individual's cognitive ability.
[00241] Any classification of an individual as to likelihood of onset and/or stage of progression of a condition (including a neurodegenerative condition) in block 1308 can be transmitted as a signal to a medical device, healthcare computing system, or other device, and/or to a medical practitioner, a health practitioner, a physical therapist, a behavioral therapist, a sports medicine practitioner, a pharmacist, or other practitioner, to allow
formulation of a course of treatment for the individual or to modify an existing course of treatment, including to determine a change in dosage (such as but not limited to an amount, concentration, and/or dose titration) of a drug, biologic or other pharmaceutical agent to the individual or to determine an optimal type or combination of drug, biologic or other pharmaceutical agent to the individual.
[00242] In some examples, the results of the analysis may be used to modify the difficulty level or other property of the navigation task(s), including route-learning tasks and/or relative-orientation tasks and/or way-finding tasks, or CSIs.
[00243] FIG. 14A shows a non-limiting example system, method, and apparatus according to the principles herein, where the platform product (including using an APP) is configured as a cognitive platform 1402 that is separate from, but configured for coupling with, one or more of the physiological components 1404.
[00244] FIG. 14B shows another non-limiting example system, method, and apparatus according to the principles herein, where the platform product (including using an APP) is configured as an integrated device 1410, in which the cognitive platform 1412 is integrated with one or more of the physiological components 1414.
[00245] FIG. 15 shows a non-limiting example implementation where the platform product (including using an APP) is configured as a cognitive platform 1502 that is configured
for coupling with a physiological component 1504. In this example, the cognitive platform 1502 is configured as a tablet including at least one processor programmed to implement the processor-executable instructions associated with the tasks and CSIs described hereinabove, to receive cData associated with user responses from the user interaction with the cognitive platform 1502, to receive the nData from the physiological component 1504, to analyze the cData and/or nData as described hereinabove, and to analyze the cData and/or nData to provide a measure of the individual's physiological condition and/or cognitive condition, and/or analyze the differences in the individual's performance based on determining the differences between the user's responses and the nData, and/or adjust the difficulty level of the
computerized stimuli or interaction (CSI) or other interactive elements based on the individual's performance determined in the analysis and based on the analysis of the cData and/or nData, and/or provide an output or other feedback from the platform product indicative of the individual's performance, and/or cognitive assessment, and/or response to cognitive treatment, and/or assessed measures of cognition. In this example, the physiological component 1504 is mounted to a user's head, to perform the measurements before, during and/or after user interaction with the cognitive platform 1502, to provide the nData.
[00246] In a non-limiting example implementation, measurements are made using a cognitive platform that is configured for coupling with an fMRI, for use in medical application validation and personalized medicine. Consumer-level fMRI devices may be used to improve the accuracy and validity of medical applications by tracking and detecting changes in the stimulation of brain regions.
[00247] In a non-limiting example, fMRI measurements can be used to provide measurement data of the cortical thickness and other similar measurement data.
[00248] In a non-limiting exemplary use for treatment validation, the user interacts with a cognitive platform, and the fMRI is used to measure physiological data. The user is expected to have stimulation of a particular brain part or combination of brain parts based on the actions of the user while interacting with the cognitive platform. In this example, the platform product may be configured as an integrated device including the fMRI component coupled with the cognitive platform, or as a cognitive platform that is separate from, but configured for coupling with, the fMRI component. Using the application with the fMRI, measurement can be made of the stimulation of portions of the user's brain, and analysis can be performed to detect changes and determine whether the user is exhibiting the desired responses.
[00249] In a non-limiting exemplary use for personalized medicine, the fMRI can be used to collect measurement data to be used to identify the progress of the user in interacting with the cognitive platform. The analysis can be used to determine whether the cognitive platform should be caused to provide tasks and/or CSIs to reinforce or diminish the user responses that the fMRI is detecting, by adjusting the user's experience in the application.
[00250] In any example herein, the adjustments to the type of navigation tasks and/or
CSIs can be made in real-time.
Conclusion
[00251] The above-described embodiments can be implemented in any of numerous ways. For example, some embodiments may be implemented using hardware, software or a combination thereof. When any aspect of an embodiment is implemented at least in part in software, the software code can be executed on any suitable processor or collection of processors, whether provided in a single computer or distributed among multiple computers.
[00252] In this respect, various aspects of the invention may be embodied at least in part as a computer readable storage medium (or multiple computer readable storage media) (e.g., a computer memory, compact disks, optical disks, magnetic tapes, flash memories, circuit configurations in Field Programmable Gate Arrays or other semiconductor devices, or other tangible computer storage medium or non-transitory medium) encoded with one or more programs that, when executed on one or more computers or other processors, perform methods that implement the various embodiments of the technology discussed above. The computer readable medium or media can be transportable, such that the program or programs stored thereon can be loaded onto one or more different computers or other processors to implement various aspects of the present technology as discussed above.
[00253] The terms "program" or "software" are used herein in a generic sense to refer to any type of computer code or set of computer-executable instructions that can be employed to program a computer or other processor to implement various aspects of the present technology as discussed above. Additionally, it should be appreciated that according to one aspect of this embodiment, one or more computer programs that when executed perform methods of the present technology need not reside on a single computer or processor, but may be distributed in a modular fashion amongst a number of different computers or processors to implement various aspects of the present technology.
[00254] Computer-executable instructions may be in many forms, such as program
modules, executed by one or more computers or other devices. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. Typically the functionality of the program modules may be combined or distributed as desired in various embodiments.
[00255] Also, the technology described herein may be embodied as a method, of which at least one example has been provided. The acts performed as part of the method may be ordered in any suitable way. Accordingly, embodiments may be constructed in which acts are performed in an order different than illustrated, which may include performing some acts simultaneously, even though shown as sequential acts in illustrative embodiments.
[00256] All definitions, as defined and used herein, should be understood to control over dictionary definitions, definitions in documents incorporated by reference, and/or ordinary meanings of the defined terms.
[00257] The indefinite articles "a" and "an," as used herein in the specification and in the claims, unless clearly indicated to the contrary, should be understood to mean "at least one."
[00258] The phrase "and/or," as used herein in the specification and in the claims, should be understood to mean "either or both" of the elements so conjoined, i.e., elements that are conjunctively present in some cases and disjunctively present in other cases. Multiple elements listed with "and/or" should be construed in the same fashion, i.e., "one or more" of the elements so conjoined. Other elements may optionally be present other than the elements specifically identified by the "and/or" clause, whether related or unrelated to those elements specifically identified. Thus, as a non-limiting example, a reference to "A and/or B", when used in conjunction with open-ended language such as "comprising" can refer, in one embodiment, to A only (optionally including elements other than B); in another embodiment, to B only (optionally including elements other than A); in yet another embodiment, to both A and B (optionally including other elements); etc.
[00259] As used herein in the specification and in the claims, "or" should be understood to have the same meaning as "and/or" as defined above. For example, when separating items in a list, "or" or "and/or" shall be interpreted as being inclusive, i.e., the inclusion of at least one, but also including more than one, of a number or list of elements, and, optionally, additional unlisted items. Only terms clearly indicated to the contrary, such as "only one of" or "exactly one of," or, when used in the claims, "consisting of," will refer to the inclusion of exactly one element of a number or list of elements. In general, the term "or" as used herein shall only be
interpreted as indicating exclusive alternatives (i.e. "one or the other but not both") when preceded by terms of exclusivity, such as "either," "one of," "only one of," or "exactly one of." "Consisting essentially of," when used in the claims, shall have its ordinary meaning as used in the field of patent law.
[00260] As used herein in the specification and in the claims, the phrase "at least one," in reference to a list of one or more elements, should be understood to mean at least one element selected from any one or more of the elements in the list of elements, but not necessarily including at least one of each and every element specifically listed within the list of elements and not excluding any combinations of elements in the list of elements. This definition also allows that elements may optionally be present other than the elements specifically identified within the list of elements to which the phrase "at least one" refers, whether related or unrelated to those elements specifically identified. Thus, as a non-limiting example, "at least one of A and B" (or, equivalently, "at least one of A or B," or, equivalently "at least one of A and/or B") can refer, in one embodiment, to at least one, optionally including more than one, A, with no B present (and optionally including elements other than B); in another embodiment, to at least one, optionally including more than one, B, with no A present (and optionally including elements other than A); in yet another embodiment, to at least one, optionally including more than one, A, and at least one, optionally including more than one, B (and optionally including other elements); etc.
[00261] In the claims, as well as in the specification above, all transitional phrases such as "comprising," "including," "carrying," "having," "containing," "involving," "holding," "composed of," and the like are to be understood to be open-ended, i.e., to mean including but not limited to. Only the transitional phrases "consisting of" and "consisting essentially of" shall be closed or semi-closed transitional phrases, respectively, as set forth in the United States Patent Office Manual of Patent Examining Procedures, Section 2111.03.
Claims
1. An apparatus for generating an assessment of one or more cognitive skills in an individual, said apparatus comprising:
a user interface;
a memory to store processor-executable instructions; and
one or more processing units communicatively coupled to the user interface and the memory, wherein upon execution of the processor-executable instructions by the one or more processing units, the one or more processing units are configured to:
present via the user interface a first task that requires navigation of a specified route through an environment;
present via the user interface a first indicator configured to navigate the specified route from an initial point in the environment to a target end-point with or without input from the individual;
configure the user interface to display instructions to the individual to perform a second task, the second task requiring the individual either: (i) to navigate a reverse of at least a portion of the specified route, or (ii) to navigate at least a portion of the specified route at least one additional time;
present via the user interface a second indicator configured to navigate in the environment in response to physical actions of the individual to control one of (i) a relative direction of the second indicator, or (ii) a speed of movement of the second indicator, or (iii) both (i) and (ii), to perform the second task;
obtain measurement data by measuring data indicative of the physical actions of the individual to control the second indicator in performing the second task; and
analyze the measurement data to generate a performance metric for the performance of the second task, the performance metric providing an indication of the cognitive ability of the individual.
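For purposes of illustration only, and not as part of any claim, the following Python sketch shows one way measurement data indicative of the individual's physical actions might be captured while the second task runs; the data structures, field names, and sampling scheme are assumptions introduced here, not elements of the specification.

```python
import time
from dataclasses import dataclass, field
from typing import List

@dataclass
class ActionSample:
    """One assumed measurement of the individual's control input."""
    t: float                 # seconds since the task started
    direction_deg: float     # commanded relative direction of the indicator
    speed: float             # commanded speed of movement of the indicator

@dataclass
class MeasurementData:
    """Assumed container for data measured during the second task."""
    samples: List[ActionSample] = field(default_factory=list)

    def record(self, direction_deg: float, speed: float, t0: float) -> None:
        """Append one sample, timestamped relative to task start t0."""
        self.samples.append(
            ActionSample(time.monotonic() - t0, direction_deg, speed))

# In an actual apparatus these calls would be driven by user-interface
# events (e.g., virtual-joystick input) while the second task runs.
t0 = time.monotonic()
data = MeasurementData()
data.record(direction_deg=15.0, speed=0.8, t0=t0)
data.record(direction_deg=-5.0, speed=1.0, t0=t0)
print(len(data.samples))
```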
2. The apparatus of claim 1, wherein the target end-point comprises one or more of a specified location in the environment, a specified landmark feature in the environment, or a specific object in the environment.
3. The apparatus of claim 1, wherein, in response to detecting that the second indicator is making a wrong turn and/or moving in an incorrect direction based on analysis of the measurement data, the one or more processing units are configured to return the second indicator to either: (a) a portion of the specified route that was navigated successfully, or (b) the initial point.
4. The apparatus of claim 1, wherein, in response to detecting that the second indicator is making a wrong turn and/or moving in an incorrect direction at a portion of the environment based on analysis of the measurement data, the one or more processing units are configured to present at least one directional aid via the user interface to indicate a correction to the turn or the direction.
5. The apparatus of claim 4, wherein a degree of difficulty of the second task is modified based on the number of directional aids displayed to the individual in performance of the second task.
6. The apparatus of claim 1, wherein generating the performance metric comprises considering one or more of a total time taken to successfully complete the second task, a number of incorrect turns made by the second indicator, a number of incorrect directions of movement made by the second indicator, or a degree of deviation of the user-navigated route in the second task as compared to the specified route.
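As a non-limiting illustration of claim 6, the sketch below combines the recited considerations into a single score; the weights, the reference time, and the normalization are assumptions introduced for illustration, not values taken from the specification.

```python
from dataclasses import dataclass

@dataclass
class NavigationTrace:
    """Hypothetical container for measurement data from the second task."""
    total_time_s: float        # total time to successfully complete the task
    incorrect_turns: int       # number of wrong turns made by the indicator
    incorrect_directions: int  # number of movements in an incorrect direction
    route_deviation: float     # deviation from the specified route, in [0, 1]

def performance_metric(trace: NavigationTrace,
                       reference_time_s: float = 60.0) -> float:
    """Combine the claim-6 considerations into one score in [0, 1].

    Higher is better. The weighting scheme is an illustrative assumption.
    """
    time_score = min(1.0, reference_time_s / max(trace.total_time_s, 1e-6))
    error_penalty = 0.1 * (trace.incorrect_turns + trace.incorrect_directions)
    deviation_penalty = 0.5 * trace.route_deviation
    return max(0.0, time_score - error_penalty - deviation_penalty)

# Example: a run with one wrong turn and modest deviation from the route.
print(performance_metric(NavigationTrace(75.0, 1, 0, 0.2)))
```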
7. An apparatus for generating an assessment of one or more cognitive skills in an individual, said apparatus comprising:
a user interface;
a memory to store processor-executable instructions; and
one or more processing units communicatively coupled to the user interface and the memory, wherein upon execution of the processor-executable instructions by the one or more processing units, the one or more processing units are configured to:
present via the user interface a first task that requires navigation of a course that includes at least one turn of a discrete angular amount in an environment;
present via the user interface a first indicator configured to navigate in the environment in response to physical actions of the individual to control the first indicator from an initial point of the course to a target end-point;
configure the user interface to display instructions to the individual to perform a second task, the second task requiring the individual to control a second indicator to indicate a relative orientation of the initial point or a different specified location in the environment relative to the target end-point;
measure data indicative of the relative orientation indicated using the second indicator; and
analyze the measurement data to generate a performance metric for the performance of the second task, the performance metric providing an indication of the cognitive ability of the individual.
8. The apparatus of claim 7, wherein the second indicator comprises one or more of an avatar, a pointer tool, or a tool for drawing a line, each for indicating the relative orientation.
9. The apparatus of claim 7, wherein generating the performance metric comprises considering a difference between data indicative of the relative orientation indicated using the second indicator and data indicative of actual relative orientation between the initial point and the target endpoint.
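As a non-limiting illustration of claim 9, the sketch below computes the difference between the indicated relative orientation and the actual relative orientation as an angular error; the coordinate system and bearing convention are assumptions, since the specification does not fix one.

```python
import math

def actual_bearing_deg(initial_xy, target_xy):
    """Bearing from the target end-point back to the initial point, in
    degrees counter-clockwise from the positive x-axis (assumed convention).
    """
    dx = initial_xy[0] - target_xy[0]
    dy = initial_xy[1] - target_xy[1]
    return math.degrees(math.atan2(dy, dx)) % 360.0

def orientation_error_deg(indicated_deg, actual_deg):
    """Smallest absolute angular difference between the orientation the
    individual indicated with the second indicator and the actual relative
    orientation, wrapped to the range [0, 180] degrees.
    """
    return abs((indicated_deg - actual_deg + 180.0) % 360.0 - 180.0)

# Example: compare an indicated orientation of 90 degrees against the
# bearing computed from assumed coordinates of the two points.
error = orientation_error_deg(90.0,
                              actual_bearing_deg((0.0, 10.0), (10.0, 7.3)))
print(round(error, 1))
```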
10. The apparatus of claim 1 or 7, wherein the first task comprises a free-exploration phase in which the one or more processing units are configured to allow the individual to control the first indicator to navigate in at least a portion of the environment without restriction or guidance.
11. The apparatus of claim 1 or 7, wherein the one or more processing units are configured to display limited visual information about the environment to the individual based on proximity and/or directionality relative to the second indicator.
12. An apparatus for generating an assessment of one or more cognitive skills in an individual, said apparatus comprising:
a user interface;
a memory to store processor-executable instructions; and
one or more processing units communicatively coupled to the user interface and the memory, wherein upon execution of the processor-executable instructions by the one or more processing units, the one or more processing units are configured to:
present via the user interface a first task that requires the individual to navigate in an environment;
wherein the first task comprises an exploration phase in which the one or more processing units are configured to provide to the individual control of a first indicator to navigate in at least a portion of the environment from an initial point either (i) along a specified route or (ii) without restriction in a free-exploration phase; and
wherein, in the exploration phase, the environment comprises one or more of a specified location, a specified landmark, or a specified object; and wherein the first indicator is configured to navigate in the environment based on physical actions of the individual to control one of (i) a relative direction of the first indicator, (ii) a speed of movement of the first indicator, or (iii) both (i) and (ii), to perform the first task;
configure the user interface to display instructions to the individual to perform a second task, the second task requiring navigation to one or more of the specified location, the specified landmark, or the specified object;
present via the user interface a second indicator configured to navigate in the environment based on physical actions of the individual to control one of (i) a relative direction of the second indicator, (ii) a speed of movement of the second indicator, or (iii) both (i) and (ii), to navigate to the specified location, the specified landmark, or the specified object based on the instructions;
wherein the specified location, the specified landmark, or the specified object are not displayed to the individual during performance of the second task;
obtain measurement data by measuring data indicative of the physical actions of the individual in performing the second task; and
analyze the measurement data to generate a performance metric for the performance of the second task, the performance metric providing an indication of the cognitive ability of the individual.
13. An apparatus for generating an assessment of one or more cognitive skills in an individual, said apparatus comprising:
a user interface;
a memory to store processor-executable instructions; and
one or more processing units communicatively coupled to the user interface and the memory, wherein upon execution of the processor-executable instructions by the one or more processing units, the one or more processing units are configured to:
present via the user interface a first task that requires the individual to navigate in an environment;
wherein a first portion of the first task comprises an exploration phase in which the one or more processing units are configured to provide to the individual control of a first indicator to navigate in at least a portion of the environment from an initial point either (i) along a specified route or (ii) without restriction in a free-exploration phase; and
wherein, in the exploration phase, the environment comprises one or more of a specified location, a specified landmark, or a specified object; and wherein the first indicator is configured to navigate in the environment based on physical actions of the individual to control one of (i) a relative direction of the first indicator, (ii) a speed of movement of the first indicator, or (iii) both (i) and (ii), to perform the first portion of the first task;
configure the user interface to display instructions to the individual to perform a second portion of the first task requiring navigation to one or more of the specified location, the specified landmark, or the specified object;
obtain measurement data by measuring data indicative of the physical actions of the individual in performing the second portion of the first task; and
analyze the measurement data to generate a performance metric for the performance of the first task, the performance metric providing an indication of the cognitive ability of the individual.
14. The apparatus of claim 13, wherein the one or more processing units are configured such that the specified location, the specified landmark, or the specified object are not displayed to the individual during performance of the second portion of the first task.
15. The apparatus of any one of claims 1, 7, 12, and 13, wherein the one or more processing units are further configured to generate a scoring output indicative of at least one of (i) a likelihood of onset of a neurodegenerative condition of the individual, or (ii) a stage of progression of the neurodegenerative condition, based at least in part on the analyses of the measurement data.
16. The apparatus of claim 15, wherein the one or more processing units are further configured to adjust a difficulty level of the second task based at least in part on the analysis of the measurement data.
17. The apparatus of claim 15, wherein the measurement data comprises measures of one or more parameters indicative of a navigation strategy, the one or more parameters comprising at least one of a measure of the individual's judgment about relative spatial positions between two points as determined based on distances relative to other objects in the environment, a measure of the individual's ability to plot a novel course through a portion of the environment that was previously known, or a measure of the individual's ability to spatially transform three or more memorized positions in the environment arranged to cover two or more dimensions.
18. The apparatus of claim 15, wherein the neurodegenerative condition is Alzheimer's disease, dementia, Parkinson's disease, Huntington's disease, Cushing's disease, or schizophrenia.
19. The apparatus of any one of claims 1, 7, 12, and 13, wherein generating the performance metric further comprises computing one or more of a measure of accuracy in a subsequent navigation of the specified route, a measure of accuracy of the individual's indication, using spatial memory rather than visual cues, of the relative orientation to the initial point or to a different specified location in the environment, or a measure of a strategy implemented to explore the environment in a free-exploration phase.
20. The apparatus of any one of claims 1, 7, 12, and 13, wherein the measurement data comprises measures of one or more parameters indicative of a navigation strategy, the one or more parameters being measured as a function of time.
21. The apparatus of any one of claims 1, 7, 12, and 13, wherein the second indicator comprises a virtual joystick.
22. The apparatus of claim 21, wherein the virtual joystick is controllable to provide one or more of an indication of a user's "head-orientation" in the environment, an indication of an intended direction of movement of the first indicator or the second indicator, or a virtual indication of "looking around" to observe features in the environment.
23. The apparatus of any one of claims 1, 7, 12, and 13, wherein the one or more processing units are further configured to apply a first predictive model to data indicative of the cognitive ability in the individual to classify the individual according to a level of expression of one or more of a beta amyloid, a cystatin, an alpha-synuclein, a huntingtin protein, or a tau protein.
24. The apparatus of claim 23, wherein the first predictive model is trained using a plurality of training datasets, each training dataset corresponding to a previously classified individual of a plurality of individuals, and each training dataset comprising data representing an indication of a cognitive ability of the classified individual and data indicative of a diagnosis of a status or progression of a neurodegenerative condition in the classified individual.
25. The apparatus of claim 24, wherein the first predictive model serves as an intelligent proxy for subsequent measures of the neurodegenerative condition of the individual.
26. The apparatus of claim 23, wherein the first predictive model comprises one or more of a linear/logistic regression, principal component analysis, a generalized linear mixed model, a random decision forest, a support vector machine, or an artificial neural network.
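As a non-limiting illustration of claims 23, 24, and 26, the sketch below trains a logistic regression (one of the recited model types) on training datasets from previously classified individuals and classifies a new individual by level of protein expression; it assumes scikit-learn is available, and all feature names and values are synthetic placeholders, not data from the specification.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: each row holds performance-derived features
# for one previously classified individual ([metric score, route deviation,
# total navigation errors]); each label is that individual's measured
# expression level (0 = low, 1 = high) of a protein of interest, e.g., a
# tau protein. All values are synthetic placeholders.
X_train = np.array([
    [0.91, 0.05, 12.0],
    [0.40, 0.60, 30.0],
    [0.85, 0.10, 10.0],
    [0.30, 0.70, 35.0],
])
y_train = np.array([0, 1, 0, 1])

model = LogisticRegression().fit(X_train, y_train)

# Classify a new individual from their task performance, with the fitted
# model serving as a proxy for a direct measure of expression level.
x_new = np.array([[0.55, 0.45, 22.0]])
print(model.predict(x_new), model.predict_proba(x_new))
```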
27. The apparatus of any one of claims 1, 7, 12, and 13, wherein the measurement data comprises measures of one or more parameters indicative of a navigation strategy, the one or more parameters comprising at least one of a measure of a navigation speed relative to the environment, an orientation relative to the environment, a velocity relative to the environment, a choice of navigation strategy, a measure of a wait or delay period or a period of inaction during navigation, a time interval to complete a course, or a degree of optimization of a navigation path through a course.
28. The apparatus of any one of claims 1, 7, 12, and 13, wherein the measurement data comprises measures of one or more parameters indicative of a navigation strategy, the one or more parameters comprising at least one of a direction of the individual's movement relative to the environment, a speed of the individual's movement relative to the environment, a measure of the individual's memory of landmarks, a measure of the individual's memory of turn-by-turn directions, or a frequency or number of times of referral to an aerial or elevated view of the environment.
29. The apparatus of any one of claims 1, 7, 12, and 13, wherein the environment comprises one or more passageways, one or more obstacles disposed at specified portions of the one or more passageways, and one or more walls having dimensions.
30. The apparatus of claim 29, wherein the one or more passageways, obstacles, and walls have dimensions that comprise dimensional constraints, such that a width (a1) of each of the one or more obstacles is greater than or about equal to a width (a2) of each of the one or more passageways, and the width (a1) is smaller than a length (a3) of each of the one or more walls of the environment.
31. The apparatus of claim 30, wherein width a1 is about twice width a2, and wherein width a1 is about one-fourth to one-fifth of length a3.
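As a non-limiting illustration of the dimensional constraints of claims 30 and 31 (a1 greater than or about equal to a2 and smaller than a3, with a1 about twice a2 and about one-fourth to one-fifth of a3), the check below encodes those relations; the numeric tolerance standing in for "about" is an assumption.

```python
def satisfies_constraints(a1: float, a2: float, a3: float,
                          tol: float = 0.15) -> bool:
    """Check the dimensional constraints of claims 30 and 31.

    a1: obstacle width, a2: passageway width, a3: wall length.
    'tol' is an assumed tolerance approximating the claim term 'about'.
    """
    basic = a1 >= a2 * (1.0 - tol) and a1 < a3       # claim 30 relations
    about_twice = abs(a1 - 2.0 * a2) <= tol * 2.0 * a2
    fraction_ok = (a3 / 5.0) * (1.0 - tol) <= a1 <= (a3 / 4.0) * (1.0 + tol)
    return basic and about_twice and fraction_ok

# Example: a2 = 1.0, a1 = 2.0 (twice a2), a3 = 9.0 (a1 is about a3/4.5).
print(satisfies_constraints(2.0, 1.0, 9.0))
```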
32. The apparatus of claim 29, wherein the one or more processing units are configured to present navigation in the environment as a first person perspective or as a third person perspective.
33. The apparatus of any one of claims 1, 7, and 12, wherein the one or more processing units are further configured to:
adjust a difficulty of the second task to a second difficulty level;
present a second instance of the second task at the second difficulty level;
obtain a second set of measurement data by measuring data indicative of the physical actions of the individual in performing the second instance of the second task; and
analyze the second set of measurement data to generate a second performance metric indicative of a change of the cognitive ability of the individual.
34. The apparatus of claim 33, wherein the second difficulty level is an increase in the difficulty or a decrease in the difficulty.
35. The apparatus of claim 33, wherein the one or more processing units are further configured to provide a measure of an enhancement of the cognitive ability of the individual based at least in part on the second performance metric.
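As a non-limiting illustration of the adaptive procedure of claims 33-35, the sketch below adjusts the difficulty level from a performance metric before a second instance of the task is presented; the thresholds and level bounds are assumptions introduced for illustration.

```python
def adjust_difficulty(level: int, metric: float,
                      low: float = 0.4, high: float = 0.8,
                      min_level: int = 1, max_level: int = 10) -> int:
    """One adaptation step: raise the difficulty when the performance
    metric is high, lower it when the metric is low, hold it otherwise.
    """
    if metric >= high:
        return min(level + 1, max_level)
    if metric <= low:
        return max(level - 1, min_level)
    return level

# A second instance of the task is then presented at the adjusted level;
# comparing performance metrics across instances yields a measure of
# change (e.g., enhancement) of the cognitive ability.
level = adjust_difficulty(level=3, metric=0.85)
print(level)  # 4
```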
36. A system comprising an apparatus of any one of claims 1 - 35, wherein the apparatus is configured as at least one of a smartphone, a tablet, a slate, an electronic-reader (e-reader), a digital assistant, a portable computing device, a wearable computing device, or a gaming device.
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201762512351P | 2017-05-30 | 2017-05-30 | |
US62/512,351 | 2017-05-30 | | |
PCT/US2017/066214 WO2018112103A1 (en) | 2016-12-13 | 2017-12-13 | Platform for identification of biomarkers using navigation tasks and treatments using navigation tasks |
USPCT/US2017/066214 | 2017-12-13 | | |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2018222729A1 (en) | 2018-12-06 |
Family
ID=64456115
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2018/035155 WO2018222729A1 (en) | 2017-05-30 | 2018-05-30 | Platform for identification of biomarkers using navigation tasks and treatments based thereon |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2018222729A1 (en) |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120190968A1 (en) * | 2007-05-09 | 2012-07-26 | Oregon Health & Science University | Object recognition testing tools and techniques for measuring cognitive ability and cognitive impairment |
US20120108909A1 (en) * | 2010-11-03 | 2012-05-03 | HeadRehab, LLC | Assessment and Rehabilitation of Cognitive and Motor Functions Using Virtual Reality |
WO2015066037A1 (en) * | 2013-10-28 | 2015-05-07 | Brown University | Virtual reality methods and systems |
WO2016118811A2 (en) * | 2015-01-24 | 2016-07-28 | The Trustees Of The University Of Pennsylvania | Method and apparatus for improving cognitive performance |
US20160262680A1 (en) * | 2015-03-12 | 2016-09-15 | Akili Interactive Labs, Inc. | Processor Implemented Systems and Methods for Measuring Cognitive Abilities |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP4139015A4 (en) * | 2020-04-21 | 2024-05-08 | Roblox Corporation | Systems and methods for accessible computer-user interactions |
EP4139774A4 (en) * | 2020-04-21 | 2024-06-05 | Roblox Corporation | Systems and methods for accessible computer-user scenarios |
WO2022059266A1 (en) * | 2020-09-15 | 2022-03-24 | 株式会社Jvcケンウッド | Evaluation device, evaluation method, and evaluation program |
JP7563069B2 (en) | 2020-09-15 | 2024-10-08 | 株式会社Jvcケンウッド | EVALUATION APPARATUS, EVALUATION METHOD, AND EVALUATION PROGRAM |
JP7563068B2 (en) | 2020-09-15 | 2024-10-08 | 株式会社Jvcケンウッド | EVALUATION APPARATUS, EVALUATION METHOD, AND EVALUATION PROGRAM |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP7125390B2 (en) | Cognitive platforms configured as biomarkers or other types of markers | |
KR102449377B1 (en) | Platforms for implementing signal detection metrics in adaptive response deadline procedures | |
JP2023025083A (en) | Cognitive platform including computerized evocative elements | |
JP2022117988A (en) | Cognitive platform coupled with physiological component | |
US11839472B2 (en) | Platforms to implement signal detection metrics in adaptive response-deadline procedures | |
KR20200128555A (en) | Cognitive screens, monitors, and cognitive therapy targeting immune-mediated and neurodegenerative disorders | |
JP7442596B2 (en) | Platform for Biomarker Identification Using Navigation Tasks and Treatment Using Navigation Tasks | |
US20200402643A1 (en) | Cognitive screens, monitor and cognitive treatments targeting immune-mediated and neuro-degenerative disorders | |
JP2022506651A (en) | Facial expression detection for screening and treatment of emotional disorders | |
WO2018222729A1 (en) | Platform for identification of biomarkers using navigation tasks and treatments based thereon | |
JP2022502789A (en) | A cognitive platform for deriving effort metrics to optimize cognitive treatment | |
US20240081706A1 (en) | Platforms to implement signal detection metrics in adaptive response-deadline procedures | |
US12138069B2 (en) | Platform for identification of biomarkers using navigation tasks and treatments using navigation tasks |
Legal Events
Code | Title | Description |
---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 18808998; Country of ref document: EP; Kind code of ref document: A1 |
NENP | Non-entry into the national phase | Ref country code: DE |
122 | Ep: pct application non-entry in european phase | Ref document number: 18808998; Country of ref document: EP; Kind code of ref document: A1 |