GB2619532A - System for developing motor-skills - Google Patents
- Publication number
- GB2619532A (application GB2208385.1A / GB202208385A)
- Authority
- GB
- United Kingdom
- Prior art keywords
- pose
- user
- follower
- poses
- demonstrator
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
- G06V40/23—Recognition of whole body movements, e.g. for sport training
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63B—APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
- A63B24/00—Electric or electronic controls for exercising apparatus of preceding groups; Controlling or monitoring of exercises, sportive games, training or athletic performances
- A63B24/0003—Analysing the course of a movement or motion sequences during an exercise or trainings sequence, e.g. swing for golf or tennis
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/103—Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
- A61B5/11—Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
- A61B5/1126—Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb using a particular sensing technique
- A61B5/1128—Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb using a particular sensing technique using image analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63B—APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
- A63B24/00—Electric or electronic controls for exercising apparatus of preceding groups; Controlling or monitoring of exercises, sportive games, training or athletic performances
- A63B24/0003—Analysing the course of a movement or motion sequences during an exercise or trainings sequence, e.g. swing for golf or tennis
- A63B24/0006—Computerised comparison for qualitative assessment of motion sequences or the course of a movement
- A63B2024/0012—Comparing movements or motion sequences with a registered reference
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63B—APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
- A63B71/00—Games or sports accessories not covered in groups A63B1/00 - A63B69/00
- A63B71/06—Indicating or scoring devices for games or players, or for other sports activities
- A63B71/0619—Displays, user interfaces and indicating devices, specially adapted for sport equipment, e.g. display mounted on treadmills
- A63B2071/0647—Visualisation of executed movements
Abstract
A computer implemented system and method for developing motor-skills in a pose follower user comprises presenting a reference pose on a display of an apparatus associated with a pose follower user, detecting a pose of the pose follower user, causing presentation of the detected pose of the pose follower user concurrently with the reference pose on the display, and measuring in real-time a degree of congruity of the detected pose to the reference pose. Responsive to the measured degree of congruity, the method may also comprise determining another pose to present on the display.
Description
SYSTEM FOR DEVELOPING MOTOR-SKILLS
[0001] The present disclosure relates to a system for developing motor-skills in a user and to related aspects.
BACKGROUND
[0002] Motor skills may become impaired through age, disease, or accident. Different individuals have different movement capabilities and may accordingly find it difficult, to differing degrees, to follow a reference pose or movement sequence when developing or improving motor skills. Motor skill development is important not just when re-learning motor skills, for example in the case of rehabilitation of an individual, but also when learning new physical activities such as gymnastics, dance or yoga, as well as other types of sports activities.
[0003] Access to an instructor, whether for learning a new physical activity such as one of the above or for physical therapy, can be time-consuming and difficult, as well as costly. This is especially the case where an individual needs to repeatedly see a physiotherapy professional face to face for one-to-one sessions to recover from an injury.
SUMMARY
[0004] Whilst the invention is defined by the accompanying claims, various aspects of the disclosed technology including the claimed technology are set out in this summary section with examples of some preferred embodiments and indications of possible technical benefits.
[0005] The disclosed technology relates to a pose demonstration system for a pose demonstrator user, also referred to herein as a demonstrator or demonstrator user, to demonstrate poses and pose sequences, in other words, movements, to one or more pose followers, also referred to herein as follower users and followers. The follower and demonstrator users access the system using computing devices, for example, personal computers, mobile smartphones or tablets, and the like.
[0006] In some embodiments of the system, when accessing a pose and performing a pose using the system, either as a demonstrator user or as a follower user, the devices of each user of the system allow that user to see how much they are conforming to each other or to a reference pose on a display of their device. Their device displays can also be used, sometimes along with audio, to provide feedback on how the follower user is conforming to that demonstrator's pose or to a reference library pose. This allows a follower to understand precisely how much movement is required to achieve a particular reference pose from the pose the follower has currently adopted, so as to see if they can better conform to that particular reference pose.
[0007] In some embodiments, the pose follower can use a library of reference poses demonstrated using video and/or an avatar, in which case the system may be implemented on the user's device by downloading the pose reference library from the system. In some embodiments, the pose follower may follow a live pose demonstration from a demonstrator user, in which case the follower's and the demonstrator's devices may form the system, with or without intermediating system infrastructure. It is also possible to have one demonstrator demonstrate a pose concurrently to a number of pose followers in some embodiments, and for the demonstrator device to concurrently show an overlay of the congruity of each pose follower's pose to the demonstrator's own pose in some embodiments. Dynamic poses, also referred to herein as pose sequences or movements, can also be demonstrated and feedback provided in some embodiments. For example, a demonstrator can see each pose follower's attempt(s) to replicate a demonstrated or reference pose or pose sequence, and based on this can assess if the follower's extent of movement matches the demonstrator's.
[0008] Some embodiments of the system can accordingly help with the selection of a suitable pose or pose sequences to be presented to a follower user to copy based on metrics derived from tracking of the follower user's pose and movement capabilities. These metrics can be provided as visual feedback to both the follower user and the demonstrator user and may also be tracked over the long term to allow quantitative measures of motor skills development to be generated as followers improve their motor skills.
[0009] Some embodiments of the system disclosed may accordingly help train motor skills in a very time-efficient manner which may produce better improvements in a shorter period of time than conventional techniques permit.
[00010] In some embodiments, the system infrastructure, for example, one or more user devices and/or a server or similar platform, which may be distributed or cloud based, comprises a congruity measurement component and/or a pose measurement component to enable tracking and sharing of pose data both in real-time and off-line (non-real-time).
[00011] Advantageously, embodiments of the disclosed technology provide feedback, which may include audio in addition to video, to enable a pose follower user to understand if they are performing a sequence of poses, or movement(s), within a safe range of movement. The disclosed technology may also measure and present on a display and/or provide audio guidance on how closely a pose follower is matching a demonstrator's or reference pose(s).
[00012] Some embodiments of the disclosed technology may allow a pose demonstrator, for example, a sports instructor or physiotherapist type of user to specify safe ranges of movement for a pose sequence or pose, which may reduce the likelihood of injury or overstraining by a pose follower seeking to copy the pose or pose sequence. Some embodiments of the disclosed technology allow one or both of the pose demonstrator and one or more pose followers to share information about their movements in real time.
[00013] A first aspect of the disclosed technology comprises a method, for example, a computer implemented method, for developing motor-skills in a pose follower user, the method comprising: causing presentation of a reference pose on a display of an apparatus associated with a pose follower user, detecting a pose of the pose follower user, causing presentation of the detected pose of the pose follower user concurrently with the reference pose on the display, and measuring in real-time a degree of congruity of the detected pose to the reference pose.
[00014] Responsive to measuring the degree of congruity, the method may further comprise determining another pose to present on the display in some embodiments.
[00015] In some embodiments of the method, the method further comprises presenting an indication on the display of the degree of congruity. The indication may comprise an overlay of the reference pose and the pose follower's detected pose.
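By way of illustration only, the following Python sketch shows how the steps of the first aspect might fit together in a simple processing loop. The function names and the simple distance-based congruity measure are assumptions made for the example, not part of the claimed method; the pose detector, camera, and display are represented by caller-supplied callables.

```python
import numpy as np

def measure_congruity(reference, follower):
    """Illustrative congruity score in [0, 1] for two (n_joints, 3) pose arrays.

    Assumes both poses are already calibrated and expressed in comparable,
    roughly unit-scaled coordinates; 1.0 means every joint coincides.
    """
    distances = np.linalg.norm(np.asarray(reference) - np.asarray(follower), axis=1)
    return float(np.clip(1.0 - distances.mean(), 0.0, 1.0))

def run_session(reference_poses, capture_pose, show_overlay, threshold=0.8, max_attempts=200):
    """Present each reference pose, track the follower, and decide what to show next.

    reference_poses: list of (n_joints, 3) arrays.
    capture_pose():  returns the follower's currently detected (n_joints, 3) pose.
    show_overlay(reference, follower, score): presents both poses and the score.
    """
    index, attempts = 0, 0
    while index < len(reference_poses) and attempts < max_attempts:
        reference = reference_poses[index]
        follower = capture_pose()                       # detect the pose follower's pose
        score = measure_congruity(reference, follower)  # real-time degree of congruity
        show_overlay(reference, follower, score)        # concurrent presentation on the display
        # Responsive to the measured congruity, determine the pose to present next:
        # move on when the match is good enough, otherwise stay on the current pose.
        if score >= threshold:
            index, attempts = index + 1, 0
        else:
            attempts += 1
```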
[00016] To learn motor skills properly, an individual ideally needs to see the poses which will help them develop those motor skills being demonstrated, so that they can try to copy them, and to have feedback on their progress.
[00017] Advantageously, the disclosed technology, including the method of the first aspect, provides an additional and greater level of feedback by providing a visual and/or audio indication of the degree of pose congruity, which indicates how well a pose follower is conforming to a reference pose. This may enable more efficient motor skills learning by the pose follower. Some embodiments of the disclosed technology provide quantitative feedback which allows improvements in motor skill development to be measurable and visible to both demonstrator and follower users.
[00018] In some embodiments, the detected pose includes detected depth information for the pose.
[00019] In some embodiments, one or more pose attributes of the reference pose are transformed prior to determining the degree of congruity with the pose of the pose follower user.
[00020] In some embodiments, one or more pose attributes of the reference pose and the follower pose are transformed prior to determining the degree of congruity with the pose of the pose follower user.
[00021] In some embodiments, the method further comprises displaying overlaid pose images of the pose follower's pose and the reference pose, whereby the overlaid pose images provide a visible indicator of the detected measure of congruity of the pose follower user's pose to the displayed reference pose on the display.
[00022] In some embodiments, the detection of the pose follower user's pose and the measurement of the degree of congruity of the pose follower user's pose to the reference pose are performed in real-time.
[00023] In some embodiments, displaying the pose follower user's pose on top of or underneath the reference pose provides a visible indicator in real-time of the detected measure of congruity of the pose follower user's pose to the displayed reference pose on the display associated with the user. Advantageously, in some embodiments the degree of congruity is shared with another user, which may allow the other user to adapt or select a next pose or pose sequence for the pose follower to emulate and/or to adjust a speed at which a sequence of poses is to be displayed on the display of the apparatus associated with the pose follower user.
[00024] Advantageously this allows both the user and the other user, for example a pose demonstrator, to concurrently, in other words effectively at the same time or simultaneously, see the degree of congruity the pose follower is achieving in real-time, in other words to what extent the pose follower user is replicating the pose demonstrator's movements or the reference pose sequence.
[00025] In some embodiments, the reference pose comprises a demonstrator pose derived from a demonstrator in real-time.
[00026] Advantageously, by adjusting in real-time one or more reference poses in a sequence of reference poses depending on the extent of measured congruity of one or more detected previous poses of a pose follower user with one or more previous reference poses, the selection of the next pose, which may or may not be from the same sequence, can be performed to improve the likely congruity score of the pose follower user's next pose. For example, adjustments to one or more characteristics of the reference poses or to a sequence of reference poses can be made to present a reference pose with one or more characteristics which are likely to improve the pose follower user's motor skills. For example, one or more or all of the poses in the sequence may be adapted to adjust the range of movement of the user which will result in the maximum degree of congruity with the reference poses, so that the time for the user's rehabilitation of their motor skills can be minimized. The number of poses in the sequence may be increased or reduced, the number of repetitions of pose sequences may be increased or reduced, and in some embodiments the range of movement that a pose sequence defines may be changed. By selecting poses which are more likely to improve a pose follower user's motor skills, the method may be more efficiently performed in terms of the number of times it is used and the computational resources required, which can also reduce energy consumption by the apparatus used to implement the method.
[00027] Advantageously, the disclosed technology provides a flexible platform, compared to a video or worksheet, for a user to use to develop motor skills and to demonstrate motor skill developing poses. The pose data may be provided as a framework that can be adapted to a follower user, such as a patient, in real-time and over time. Both real-time and recorded pose data can be used to allow the sharing of the data in a much more efficient way, for example, as a set of point positions. The pose data can be compressed for more efficient transfer by determining and storing pose point positions based on a confidence level that the pose is valid in some embodiments; for example, statistically improbable or physically impossible movements may be filtered out and/or otherwise disregarded. In some embodiments, a change in the point position of a pose can be assessed prior to storing, so that a small change in a pose does not result in the pose point data being updated and no new data is stored.
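As a rough illustration of the thresholded storage described above, the sketch below only records a new set of pose points when the change since the last stored frame is both plausible and large enough to matter. The specific thresholds, and the use of a simple per-joint displacement cap as the validity check, are assumptions made for the example rather than values taken from the disclosure.

```python
import numpy as np

class PoseRecorder:
    """Stores a follower's pose points only when they change meaningfully."""

    def __init__(self, min_change=0.02, max_jump=0.5):
        self.min_change = min_change   # metres: smaller changes are not stored
        self.max_jump = max_jump       # metres: larger jumps treated as implausible
        self.frames = []               # list of (timestamp, (n_joints, 3) array)

    def update(self, timestamp, pose):
        """Return True if the pose was stored, False if it was skipped or disregarded."""
        pose = np.asarray(pose, dtype=float)
        if self.frames:
            _, last = self.frames[-1]
            step = np.linalg.norm(pose - last, axis=1)
            if step.max() > self.max_jump:    # physically improbable movement: disregard
                return False
            if step.max() < self.min_change:  # too small a change to be worth storing
                return False
        self.frames.append((timestamp, pose))
        return True
```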
[00028] Advantageously, a pose sharing platform gives the ability to provide a similarity score for the congruity of a follower user's, such as a patient's, pose compared to recorded or real-time pose data. In some embodiments, this can be determined as a congruity measure for the course of a series of poses such as a whole exercise and/or for just one pose, in each case providing an objective measure of motor skill development progress.
[00029] In some embodiments, each pose comprises one or more pose characteristics, and wherein at least one pose characteristic of each pose in the pose sequence differs from that pose characteristic in a previous pose by a dynamically determined amount based on one or more user pose performance metrics for the previous pose.
[00030] A pose characteristic may be a dynamic value, such as a range of movement of a user's joint or joints or body, or a static value, such as the user's body dimensions whilst the user is performing a pose.
[00031] In some embodiments, the method further comprises obtaining one or more physical characteristics associated with one or more motor-skills of the pose follower user and using the obtained physical characteristics of the pose follower user to adjust one or more corresponding physical characteristics of the reference pose.
[00032] In some embodiments a transformation is used to align the reference pose with the follower pose.
[00033] In some embodiments, the transformation comprises one or both of: scaling either one of a bounding box of the reference pose and a bounding box of a follower pose to match each other and finding a rotation based on a global best whole pose fit to align the reference pose to the follower pose.
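A minimal sketch of the bounding-box part of such a transformation, assuming poses are held as (n_joints, 3) arrays of pose points; finding the rotation from a global best whole-pose fit would be a separate step (see the Procrustes discussion later in the description).

```python
import numpy as np

def scale_to_bounding_box(reference, follower):
    """Scale the reference pose so its axis-aligned bounding box matches the follower's.

    Both poses are (n_joints, 3) arrays. Scaling is applied about the reference
    pose's centroid; axes with zero extent are left unscaled.
    """
    reference = np.asarray(reference, dtype=float)
    follower = np.asarray(follower, dtype=float)
    ref_extent = reference.max(axis=0) - reference.min(axis=0)
    fol_extent = follower.max(axis=0) - follower.min(axis=0)
    safe_extent = np.where(ref_extent > 1e-9, ref_extent, 1.0)
    scale = np.where(ref_extent > 1e-9, fol_extent / safe_extent, 1.0)
    centroid = reference.mean(axis=0)
    return (reference - centroid) * scale + centroid
```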
[00034] In some embodiments, another method known in the art to be suitable for aligning pose geometry and point clouds may be used for the transformation, for example a Procrustes transformation technique.
[00035] In some embodiments, a plurality of reference pose images are displayed in a sequence on the display, wherein the reference poses that are presented progress through the reference pose sequence in a forwards and/or backwards direction responsive to input from the follower user.
[00036] In some embodiments, the method further comprises overlaying or underlying on the display the demonstrator's current pose calibrated to the follower user concurrently with the pose of the follower user.
[00037] In some embodiments, the method further comprises: processing a video feed, detecting one or more reference poses in the video feed, causing presentation of the video on the display of the apparatus associated with the pose follower user, matching the detected one or more reference poses in the video feed to one or more detected poses of the user, and determining for each of the detected one or more reference poses in the video feed, a degree of congruity of a detected pose of the pose follower with a corresponding reference pose.
[00038] In some embodiments, the processing of the video feed is on the demonstrator's device. This allows the demonstrator pose data and/or the video feed to be sent to a follower user's device.
[00039] In some embodiments, the presentation speed of the video on the pose follower user's display is adjusted according to a measure of a degree of congruity of one or more poses of the pose follower user matched to one or more reference poses in the video.
[00040] Another aspect of the disclosed technology comprises a system for developing motor-skills in a pose follower user, the system comprising: means or a module configured to cause presentation on a display of an apparatus associated with a pose follower user of a reference pose in a sequence of one or more reference poses, means or a module configured to detect a pose of the pose follower user, means or a module configured to measure in real-time a degree of congruity of the detected pose to the displayed reference pose, and means or a module configured, responsive to the determination of a measured degree of congruity, to determine another pose in the sequence of poses to present on the display of the apparatus associated with the pose follower user.
[00041] In some embodiments, the system comprises means or module(s) configured to implement a method according to the method aspect or any one of its embodiments disclosed herein.
[00042] In some embodiments, the system further comprises another apparatus, wherein the other apparatus is associated with a pose demonstrator user, and wherein the other apparatus comprises: means or a module configured to detect a pose of the pose demonstrator user, means or a module configured to share the detected pose as a reference pose with the pose follower user, means or a module configured to cause presentation on a display of the apparatus of the pose demonstrator of the pose performed by a pose follower user, means or a module configured to measure in real-time a degree of congruity of the detected pose of the pose follower with the detected pose of the pose demonstrator, and means or a module configured, responsive to the determination of the measured degree of congruity, to cause a presentation of a representation of the measured degree of congruity on the display of the apparatus associated with the pose follower.
[00043] In some embodiments, the system further comprises another apparatus associated with a demonstrator user, wherein the apparatus is configured to provide reference pose data comprising images captured in real time of the demonstrator user performing one or more poses to the apparatus of the pose follower user.
In some embodiments, the system further comprises means or a module configured, responsive to the determination of the measured degree of congruity, to cause a presentation of a representation of the measured degree of congruity on the display of the other apparatus associated with the pose demonstrator.
[00044] In some embodiments, the representation of the measured degree of congruity is provided concurrently on the display of the apparatus of the pose follower and the display of the other apparatus of the pose demonstrator.
[00045] Another aspect of the disclosed technology comprises an apparatus for developing motor-skills in a pose follower user, the apparatus comprising a memory, one or more processors or processing circuitry; and computer program code, wherein the computer program code, when loaded from memory and executed by the one or more processors or processing circuitry, causes the apparatus to implement a method according to the method aspect or any one of its embodiments disclosed herein.
[00046] In some embodiments, the apparatus comprises means or a module configured to detect a pose of the pose follower user, wherein the means or a module generates pose tracking data for the pose follower user which is acquired via one or more of: an image capture device, for example, a camera or camera system, a hand controller operated by the pose follower user, a head-set worn by the pose follower user, a body-suit worn by the pose follower user, and one or more sensors attached to one or more limbs of the pose follower user.
[00047] In some embodiments, the image capture device, for example the camera or camera system, is configured to provide depth information which is used for pose calibration.
[00048] Another aspect of the disclosed technology comprises a computer program product comprising computer code which, when loaded from a memory and executed by one or more processors or processing circuitry of an apparatus, causes the apparatus to perform a method according to the disclosed method aspect or any one of its embodiments disclosed herein.
[00049] In some embodiments, the presentation speed of the video on the user's display is adjusted according to the current degree of congruity of one or more matched poses of the user to the video.
[00050] Advantageously, performing pose detection on a video and matching the detected video poses to a user's current positioning, i.e. to the user's current detected pose, allows the video to be scrubbed through under the control of the user. For example, scrubbing through the video can be controlled by how closely the detected user's pose matches the detected pose in the video. In other words, the congruency of a detected pose follower user's position to a detected position presented in the video may be used to control how fast the pose follower user progresses through the video.
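As an illustration of how a congruity measure could drive scrubbing speed, the sketch below maps a congruity score onto a playback rate; the thresholds and the linear ramp are assumptions made for the example, not values taken from the disclosure.

```python
def playback_rate(congruity, min_rate=0.0, max_rate=1.0, start=0.5, full=0.9):
    """Map a congruity score in [0, 1] to a video playback rate.

    Below `start` the video is effectively paused; between `start` and `full`
    the rate ramps up linearly; at or above `full` it plays at normal speed.
    """
    if congruity <= start:
        return min_rate
    if congruity >= full:
        return max_rate
    fraction = (congruity - start) / (full - start)
    return min_rate + fraction * (max_rate - min_rate)
```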
[00051] In some embodiments of the system, when the computer code is loaded from memory and executed by the one or more processors it causes the one or more apparatus to implement one or more of the embodiments of the above method aspect.
Another aspect of the disclosed technology relates to a computer-program comprising computer code stored in memory which, when the computer code is loaded from memory and executed by one or more processors causes the system aspect or any of its embodiments to perform the method aspects or any of the embodiments of the method aspect.
[00052] In some embodiments, the apparatus comprises one of: a smart-phone, a tablet, a personal computer, or a smart television.
[00053] In some embodiments, the means or module may comprise hardware or circuitry, including dedicated circuitry or reprogrammable circuitry.
Another aspect of the disclosed technology comprises a computer-readable storage medium comprising computer-program code which, when executed by one or more processors or processing circuitry of an apparatus, causes an apparatus according to the apparatus aspect or any of the embodiments of the apparatus aspect to implement a method according to the first aspect or any of its embodiments disclosed herein.
[00054] Another aspect of the disclosed technology comprises a computer program carrier carrying a computer program comprising computer-program code, which, when loaded from the computer program carrier and executed by one or more processors or processing circuitry of an apparatus according to the apparatus aspect or any of the embodiments of the apparatus aspect causes the apparatus to implement a method according to the first aspect or any of its embodiments, wherein the computer program carrier is one of an electronic signal, optical signal, radio signal or computer-readable storage medium.
[00055] Another aspect of the disclosed technology comprises a computer program product comprising computer-code which when loaded from memory and executed by one or more processors of a control circuit of an apparatus according to the apparatus aspect or any of the embodiments of the apparatus aspect, causes the apparatus to implement a method according to the first aspect or any one of the embodiments of the method of the first aspect.
[00056] In some embodiments, instead or in addition, the means or a module comprises software or computer program code.
[00057] The disclosed aspects and embodiments may be combined with each other in any suitable manner which would be apparent to someone of ordinary skill in the art.
LIST OF FIGURES
[00058] Some embodiments of the disclosed technology are described below with reference to the accompanying drawings, which are by way of example only and in which:
[00059] Figure 1A shows schematically an example of an embodiment of a system 10 for developing motor skills in a pose follower user according to an embodiment of the disclosed technology;
[00060] Figure 1B shows schematically an example of an embodiment of a system 10 for developing motor skills in a pose follower user according to an embodiment of the disclosed technology;
[00061] Figure 2A shows schematically an embodiment of a method according to some embodiments of the disclosed technology;
[00062] Figure 2B shows schematically another embodiment of a method according to some embodiments of the disclosed technology;
[00063] Figure 3A shows schematically how a reference pose may be adjusted for a specific pose follower user according to some embodiments of the disclosed technology;
[00064] Figure 3B shows schematically how a measure of a degree of congruity may be indicated graphically according to some embodiments of the disclosed technology.
[00065] Figure 4 shows schematically an example of a sequence of poses and pose milestones according to some embodiments of the disclosed technology;
[00066] Figure 5A shows schematically an example sequence of uncalibrated reference poses;
[00067] Figure 5B shows schematically an example sequence of pose follower user poses;
[00068] Figure 5C shows schematically the sequence of reference poses in Figure 5A calibrated to the pose follower user whose poses are illustrated in Figure 5B.
[00069] Figure 5D shows schematically an overlay representation of the pose follower user's poses in Figure 5B with the sequence of calibrated reference poses in Figure 5C;
[00070] Figure 6A shows schematically how a follower user's limb lengths may be differently adjusted when calibrating that follower user's body pose to a reference pose for display;
[00071] Figure 6B shows schematically a spatial progression of poses according to some embodiments of the disclosed technology;
[00072] Figures 7A-7C each show schematically an example of a temporal progression of poses according to some embodiments of the disclosed technology;
[00073] Figure 8 shows schematically how two different data feeds providing captured image data are temporally fused according to some embodiments of the disclosed technology;
[00074] Figure 9 shows schematically a first example progression of a pose sequence according to some embodiments of the disclosed technology;
[00075] Figure 10 shows schematically how the progression of the pose sequence in Figure 9 may be dynamically modified according to some embodiments of the disclosed technology;
[00076] Figures 11A, 11B, 11C, and 11D show schematically spatial reference data for moving points of a pose according to some embodiments of the disclosed technology;
[00077] Figures 12A and 12B show schematically temporal reference data for moving points of a pose according to some embodiments of the disclosed technology;
[00078] Figures 13A to 13D show schematically examples of calibration poses according to some embodiments of the disclosed technology; and
[00079] Figure 14A shows schematically an example of a sequence of poses for improving teeth brushing motor skills according to some embodiments of the disclosed technology;
[00080] Figure 14B shows schematically demonstrator apparatus and follower apparatus being used to improve teeth brushing motor skills according to some embodiments of the disclosed technology; and
[00081] Figure 15 shows an example of an apparatus according to some embodiments of the disclosed technology.
DETAILED DESCRIPTION OF THE DRAWINGS
[00082] Aspects of the present disclosure will be described more fully hereinafter with reference to the accompanying drawings. The apparatus and method disclosed herein can, however, be realized in many different forms and should not be construed as being limited to the aspects set forth herein. Steps, whether explicitly referred to as such or implicit, may be re-ordered or omitted if not essential to some of the disclosed embodiments. Like numbers in the drawings refer to like elements throughout.
[00083] The terminology used herein is for the purpose of describing particular aspects of the disclosure only, and is not intended to limit the disclosed technology embodiments described herein. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
[00084] Figure 1A shows schematically an example of an embodiment of a system 10 for developing motor skills in a pose follower user, such as user A as shown in Figure 1A. User A may be referred to herein as a pose follower user. The system 10 shown in Figure 1A comprises at least one apparatus 12 which is configured to present on a display 14 one or more reference pose(s) 22, for example, reference poses of a sequence of one or more reference poses which are demonstrated to the pose follower user, for example, user A, on the display 14. Reference poses may be stored in a reference pose library 25. As shown in Figure 1A, the reference pose library 25 comprises uncalibrated reference poses.
[00085] The term pose as referred to herein includes both static and dynamic poses, where a dynamic pose includes transitional movement. The display 14 may be provided as a stand-alone display or it may be embedded into another apparatus having a data communications interface; for example, the apparatus may comprise a smart-phone, tablet, computer or smart-television or the like which can be configured to use a wired or wireless data communications link to receive pose information. In some embodiments the display 14 may comprise a near-eye display such as a head-set.
[00086] The system 10 detects a pose of the pose follower user, for example user A, for example by using a pose image capture apparatus 18, such as a camera. The pose image capturing apparatus may be integrated into the apparatus 12 in some embodiments; for example, the apparatus 12 may comprise a touchscreen mobile phone with an integrated camera, and the camera may be configurable, in conjunction with positioning the apparatus, so as to have a field of view which encompasses a user's pose. A camera may not have the required field of view to capture all of a pose if the camera is too close.
[00087] In some embodiments, instead or in addition, image capturing apparatus 18 is distinct from the apparatus 12, in other words, in some embodiments a stand-alone video camera may be used to capture a video of a pose follower user's pose(s) and movements. This may be useful where the entire body of the pose follower user is to perform a pose to be captured, rather than, for example, just the pose follower user's face or upper torso, or specific body part.
[00088] The pose image capture device 18 may also capture depth information in some embodiments, in other words, a depth camera may be used in some embodiments to capture pose performance metrics for a pose performed by a pose follower user.
[00089] As would be apparent to anyone of ordinary skill in the art, the image capture device, for example a mobile phone, may need to be orientated and positioned suitably to allow a full-body pose to be captured; for example, a mobile phone may need to be propped at a suitable angle around 3-4 m away from a user to allow for full-body pose capture. However, if only upper body poses are being demonstrated/followed, the phone can be much closer. The exact positioning will depend on the pose, the physical stature of the user(s), and the camera lens, and typically a front-facing camera would be used so as to provide real-time feedback on its screen. Alternatively, another external display could be used, for example the image on the phone could be mirrored on a nearby TV or monitor.
[00090] As a pose follower user attempts to imitate each reference pose 22 presented on the display 14 of the apparatus 12, a video feed comprising the pose follower user's pose data is generated. The pose data is shared or made available by the image capturing device 18 with the pose calibration module 27 of the system 10. The pose calibration module 27 and/or image capture apparatus 18 may be provided on the same apparatus in some embodiments.
[00091] In some embodiments, the pose calibration module has two functions. First, in some embodiments, a pose selection module is provided as part of the pose calibration module. This calibrates a reference pose based on the pose performance metrics and/or physical metrics of a pose follower user performing a set of calibration poses. The calibration may result in the next reference pose being selected or adjusted to make it easier or harder for the pose follower user to emulate.
[00092] Secondly, the pose calibration module may also be used to calibrate the reference pose image data by scaling it and/or aligning it to a particular pose follower user pose image data. This form of image calibration may allow a degree of congruity of the pose follower user's pose image data to image data of the reference pose to be consistently calculated and displayed. This calibration is achieved by scaling and aligning the displayed pose based on the pose follower user's, for example, on user A's, physical capabilities. In some embodiments, a separate scaling module may be provided to perform the scaling which may be based on one or more pose performance metrics generated for a follower's previous attempts to copy one or more demonstrated poses.
[00093] In some embodiments, the pose calibration module fits a reference pose (from a library or real-time) onto the follower's pose. This may be performed off-line, as a non-dynamic process, based on a single initial frame captured while the user attempts to copy a reference pose. By measuring the distance between pairs of points using pose-tracking of the user's joints, a number of scalar metrics such as the user's leg length, arm length, etc., can be obtained, which the calibration unit then adjusts to corresponding lengths for the same pairs of points in the reference data so that, when displayed, the reference pose is perceived to "fit" the follower's pose if the two are presented side by side or one overlaid on top of the other. In other words, scale factors are obtained for each joint from the user in the initial position; these are used to scale the reference pose to the follower pose to increase the conformity. In some embodiments, reference pose data, which term includes programmed pose data and data representing a demonstrator's physical pose data, is stored for a kind of normalized, or ideally proportioned, reference person and then adapted to each user. If a user stored their data it would be converted back into this idealized reference person's dimensions. The poses may be adjusted for gender and medical conditions based on user settings in some embodiments.
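A minimal sketch of the per-joint scale factors described above, assuming the pose tracker exposes named joints; the limb pairs listed are hypothetical and would in practice follow whatever skeleton model the tracker provides.

```python
import numpy as np

# Hypothetical (parent, child) joint pairs used to measure limb lengths.
LIMB_PAIRS = {
    "upper_arm": ("shoulder", "elbow"),
    "forearm": ("elbow", "wrist"),
    "thigh": ("hip", "knee"),
    "calf": ("knee", "ankle"),
}

def limb_scale_factors(follower_joints, reference_joints):
    """Per-limb ratios of follower limb length to reference limb length.

    Both arguments map joint names to 3D positions, measured from a single
    frame in which the follower copies a known calibration pose.
    """
    factors = {}
    for limb, (parent, child) in LIMB_PAIRS.items():
        follower_len = np.linalg.norm(
            np.asarray(follower_joints[child]) - np.asarray(follower_joints[parent]))
        reference_len = np.linalg.norm(
            np.asarray(reference_joints[child]) - np.asarray(reference_joints[parent]))
        factors[limb] = follower_len / reference_len if reference_len else 1.0
    return factors
```

The resulting factors could then be applied to the corresponding limbs of the reference pose data so that, when displayed, the reference pose appears to fit the follower.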
[00094] Alternatively, or in addition, in some embodiments a dynamic calibration process may be performed in which a user attempts to perform a series of range-of-movement exercises, in which case the calibration may also include scaling the pose range of movement so that it is adjusted to that user's measured range of movement.
[00095] According to embodiments of the disclosed technology, the pose calibration module 27 fits pose data to pose data; in other words, comparable sets of pose points are compared, which may be stored in the form of pose records, pose point positions or pose matrix points, which are described in more detail below. The image of a user, or any image data of the user, is never "scaled" directly to distort the image; instead, the disclosed technology scales and rotates that user's, and/or the demonstrator's, point position data.
[00096] The calibration process comprises extracting a follower user's point position data from the video feed generated by a pose image capture device 18 such as a camera, for example a camera on a smartphone, tablet, computer or the like. For example, the video feed data may be processed to derive representational images for display on the user's device and/or pose performance metrics related to the location of the user's limbs and/or timing information for the user's movements. The processing may use any suitable technique known in the art to anyone with an understanding of human motion data capture and related technical fields. For an example, see "Interactive Control of Avatars Animated with Human Motion Data" by Lee et al., available to download on 30th May 2022 from https://graphics.cs.cmu.edu/projects/Avatar/avatar.pdf, which discloses how a motion database of avatar behaviors comprising an extended unlabeled sequence of motion data appropriate to the application can be preprocessed for flexibility in behavior and efficient search and exploited for real-time avatar control. Flexibility is created by identifying plausible transitions between motion segments, and efficient search through the resulting graph structure is obtained through clustering. Other examples of suitable techniques known to those of ordinary skill in the art are provided by Chen et al. in "Monocular Human Pose Estimation: A Survey of Deep Learning-based Methods", available to download from https://arxiv.org/pdf/2006.01423.pdf.
[00097] Some embodiments of the disclosed technology use a suitable zero-value measurement reference to capture a body-proportional avatar or joint points to generate a pose in a video image of a follower or demonstrator user.
[00098] In some embodiments of the disclosed technology, such as the system 10 in Figure 1A, a data feed, for example a video feed, of the pose follower user's pose data is processed to generate pose calibration data for that specific pose follower user, and this may be stored on that user's device or in a remote server/the cloud. The data feed may be analyzed to detect one or more poses, or a sequence of poses, and timing information for the poses performed by the pose follower user, and these may be measured to determine pose performance metrics, for example timing and flexibility metrics. The pose performance metrics are then used by the pose calibration module 27 to adapt an uncalibrated, for example a normalized or idealized, pose such as pose 22 to a pose which is specific to user A. The video feed data may be displayed on a display associated with the follower user A, or, in embodiments such as the example shown in Figure 1B, on a display associated just with a demonstrator user, or on both the follower and demonstrator users' displays concurrently to provide real-time video feedback to each of them. The user image which is displayed may be a true video image or an avatar image generated from a user's joint points data extracted from the video of that user, and an overlay of the user's joint points may also be provided for either type of displayed video in some embodiments of the disclosed technology.
[00099] In some embodiments, a method of calibration is performed which comprises fitting the reference data to the follower user data points based on the measured dimensions of the person's body. The method comprises in some embodiments storing data comprising the follower user's data points.
[000100] In some embodiments, a method of calibration is performed which comprises fitting the reference data to the follower user data points based on the measured dimensions of the person's body. The method comprises storing data comprising the follower user's data points as an n-joint model, where n is the number of points/joints in the model. The data is stored either locally on the user's device or remotely, and may be stored in association with an identifier for the user. The reference data is also stored using an n-joint model in a reference pose repository or library, for example the library 25 shown in Figure 1B. When performing the calibration, the joint lengths of the demonstrator and follower users may be normalized to reference/idealized body dimensions, in other words to an idealized human form used to normalize the pose dimensions stored in the library 25.
[000101] In some embodiments, for each pose, each joint position may be stored as an absolute position; alternatively, it may be stored as a relative position from an origin joint, for example an elbow point may be stored as a 3D vector from the shoulder of the same arm. Each vector has a scalar multiplier determined by a user detected pose scalar value divided by a calibrated pose scalar value. For example, the scalar values may be determined based on a detected shoulder length, in which case, if the detected shoulder length is 80% of the idealized reference, a scalar value of 0.8 should be applied to the elbow joint position vector.
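The relative-vector storage and scalar multiplier can be illustrated with a small worked example; the numeric joint positions below are invented for the illustration and simply reproduce the 80% shoulder-length case mentioned above.

```python
import numpy as np

def scaled_joint_position(origin_position, offset_vector, detected_scalar, calibrated_scalar):
    """Place a joint stored as a 3D offset from its origin joint, scaled to the user.

    The scalar multiplier is the user's detected value divided by the calibrated
    reference value (e.g. detected shoulder length / reference shoulder length).
    """
    multiplier = detected_scalar / calibrated_scalar
    return np.asarray(origin_position, dtype=float) + multiplier * np.asarray(offset_vector, dtype=float)

# Worked example: a detected shoulder length that is 80% of the idealized
# reference (0.32 m vs 0.40 m) gives a 0.8 multiplier on the elbow offset.
shoulder = np.array([0.0, 1.4, 0.0])
elbow_offset = np.array([0.0, -0.30, 0.05])   # reference elbow relative to shoulder
elbow = scaled_joint_position(shoulder, elbow_offset, detected_scalar=0.32, calibrated_scalar=0.40)
```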
[000102] The reference data may be placed at the origin (0,0,0) with the feet set on the ground plane. The initial alignment involves not only scaling of the joint positions, however, but also adjusting the rotation and translation of the reference data to the detected user's pose position. The alignment of the reference pose data onto the follower data may be a single-step process, where the initial first pose is mapped onto the follower, or it may be a dynamic and/or on-going process, where the reference pose is continually mapped onto the follower pose with a transformation matrix or similar.
[000103] In some embodiments, the transformation matrix is a global best whole pose fit for the reference pose and the follower pose.
[000104] In some embodiments of the disclosed technology, as shown in Figure 1A, each pose 22 may be stored in the reference pose library 25 in the form of a pose point matrix which has a size of 3 * n, where n is the number of joints and 3 is the dimensionality of each pose joint in (x, y, z). In some embodiments, minimization of the Frobenius norm (or other norm) for each corresponding pose point below a specific threshold value could be used as a measure of congruity of a follower's pose against a reference pose.
[000105] The Orthogonal Procrustes approach is used in some embodiments to find a transformation matrix which maps the pose points of one pose representation matrix A to the pose points of another pose representation matrix B. This approach is well known in the art, see for example "A generalized solution to the orthogonal Procrustes problem" by Peter H. Schönemann, published in Psychometrika, Vol. 31, No. 1, March 1966 (which may also be available to download from https://web.stanford.edu/class/cs273/refs/procrustes.pdf).
[000106] The calibration of a reference pose retrieved from the pose library 25 to a user's body dimensions, which the pose calibration module 27 shown in Figure 1A performs, can be distinguished from scaling a pose, for example to adapt a normalized reference pose model to take into account a follower's different flexibility, which may be more or less than the reference pose model.
[000107] As mentioned above, flexibility is an example of a pose performance metric in which, based on the user's ability to adapt their body to conform to a reference pose, the reference pose or a future reference pose is further modified. For example, a user may want to copy a reference pose to touch their ankles, but in fact can only touch their knees, and the reference pose may as a result be reconfigured to guide the user to also touch their knees in some embodiments, or to guide the user to try to touch their calves, based possibly on the user's flexibility pose performance metric measured by how close they were able to stretch to their toes. In some embodiments, a flexibility pose performance metric may be combined with one or more other pose performance metrics, such as, for example, the time it took for the user to achieve the closest approximation to the reference pose, and the reference pose adapted accordingly.
[000108] In some embodiments of the disclosed technology, any pose adaptation is based on pose performance metrics and as such is performed as a post-processing step on the reference data. For example, the pose performance metric may be encoded as a threshold (or smooth step) between the maximum range of movement possible by that particular user and the reference pose. In some embodiments, for example, if the joint angle range of movement exceeds the user's possible range of motion (ROM), then the threshold may limit the angle demonstrated by a dynamic reference pose to an angle between the vectors representing that user's ROM.
Other post-processing steps which the pose calibration unit may perform as part of scaling the calibrated pose images to a particular user may include, but are not necessarily limited to: exclusion of unnatural or unlikely body positions; interpolation between reference pose data, for example if there is too large a step change between data points; and smoothing in time or space of the joint/point positions so as to prevent 'jitter' when presenting a dynamic pose or pose sequence.
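Returning to the pose point matrices and the Procrustes mapping of paragraphs [000104] and [000105], the following is a minimal sketch of the alignment and a Frobenius-norm-based congruity measure (poses are transposed here to n x 3 for numpy). Mapping the residual through an exponential to obtain a 0-1 score is an assumption of the example; the disclosure only requires the norm to be compared against a threshold.

```python
import numpy as np

def align_poses(reference, follower):
    """Rotate and translate the reference pose onto the follower pose.

    Both poses are (n_joints, 3) matrices of pose points. Uses the SVD-based
    orthogonal Procrustes solution, with a sign correction so the result is a
    proper rotation rather than a reflection.
    """
    reference = np.asarray(reference, dtype=float)
    follower = np.asarray(follower, dtype=float)
    ref_centred = reference - reference.mean(axis=0)
    fol_centred = follower - follower.mean(axis=0)
    # Rotation R minimising ||ref_centred @ R - fol_centred||_F.
    u, _, vt = np.linalg.svd(ref_centred.T @ fol_centred)
    d = np.sign(np.linalg.det(u @ vt))
    rotation = u @ np.diag([1.0, 1.0, d]) @ vt
    return ref_centred @ rotation + follower.mean(axis=0)

def congruity_score(reference, follower, scale=1.0):
    """Map the Frobenius norm of the residual to an illustrative score in (0, 1]."""
    aligned = align_poses(reference, follower)
    residual = np.linalg.norm(aligned - np.asarray(follower, dtype=float))  # Frobenius norm
    return float(np.exp(-residual / scale))
```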
Timing of any pose movement may be provided in some embodiments by adjusting the playback speed of the reference pose data, akin to changing the speed at which a video is played. For example, if reference data for a dynamic pose comprising a sequence of poses shows 5 poses per second, then some adaptations may adjust the timing to show only 2 poses per second. It is also possible to resample the reference pose data to accommodate a range of playback speeds or timings in some embodiments.
[000109] The pose calibration requires access to data representing pose calibration metrics for the pose follower user, for example user A. The pose calibration metrics are used to calibrate the uncalibrated, normalized, pose data of each reference pose 22 to suit a particular pose follower user. This pose calibration data may be generated on the apparatus 12 by processing one or more pose images of the pose follower user in some embodiments. The pose calibration data for the pose follower user is then shared with a pose calibration module 26 of the system 10 in some embodiments, which uses the calibration data for user A to calibrate reference poses 22 by suitably scaling, aligning, and adjusting the reference pose data 24 to generate calibrated reference poses 28 which are then displayed on apparatus 12 of user A. The calibrated reference poses 28 for user A may also be stored in a calibrated reference pose library associated with an identifier for user A to access subsequently in some embodiments (not shown in Figure 1A).
[000110] Alternatively, the pose image capture device 18 may be configured to share user pose video data with the pose calibration module 26, and the pose calibration module 26 can then generate calibration data for user A and use it as mentioned above to generate calibrated reference pose or pose sequence data 28 for display on apparatus 12.
[000111] As shown in Figure 1A, when apparatus 12 receives the calibrated reference pose or pose sequence data 28, it presents the data on the display so that user A can attempt to emulate the presented calibrated reference pose or pose sequence. In some embodiments, the presentation of the calibrated reference pose or pose sequence is contemporaneous with a video of the user so that a visual comparison of each reference pose and the corresponding user's pose can be made. In some embodiments, the visual comparison is accompanied by a quantitative measure or score of a degree of congruity of the user's pose and the calibrated reference pose. The score may represent the size of an area of overlap of the user's pose image and the area of the calibrated reference pose.
[000112] In some embodiments, instead or in addition, the degree of congruity is provided visually by causing an overlay to be presented of the user's pose and the calibrated pose.
[000113] Based on the amount of measured pose congruity, for example based on one or more measurements of pose congruity over a period of time, a pose congruity score may be determined in some embodiments. The pose congruity score may be used to determine if a new pose should be presented and, if so, the type of pose. In some examples, the next pose is selected by a pose selection module based on one or more physical capabilities of user B and/or based at least in part on the measured congruity of the user's pose with a previous reference pose.
[000114] For example, if a user scores 50% congruity against a reference pose showing a demonstrator image for touching ankles, for example, if the user can only touch half-way down their calves, then the next pose selected to be presented may be a different pose from the intended next pose in the sequence from which the touching ankles image was taken, which would be to extend the stretch by holding the user's toes. For example, a user may be presented with a gentler stretch guiding them to touch their calves and to hold that stretch instead to help increase their flexibility.

[000115] As another example, consider a three pose sequence where a user first stands on one leg for 30 seconds in the first pose of the sequence, reaches upwards with both arms towards the sky in a second pose of the sequence, and then leans over sideways 90 degrees and holds this third pose for 60 seconds. If the user cannot maintain the first pose for 30 seconds with any significant degree of congruity being measured, then the second and/or third poses may be adapted as a result. For example, one adaptation for a user may be to put both feet on the ground in the second pose, and to adapt the leaning pose so the user is guided to lean over by only 45 degrees. Another adaptation for another user may omit the third pose altogether and terminate the sequence after the second pose. A further adaptation for a user may comprise reducing the amount of time the leaning third pose is held from 60 to 30 seconds, whilst for another user the amount of time for which the third pose is to be maintained may be increased, for example, to 120 seconds.

[000116] In some embodiments, which specific pose attributes and which poses are presented or adapted in a sequence are predetermined based on the physical capabilities of the user and the specific rehabilitation goals for that user. In other words, a pose sequence may be initialized based on one or more physical attributes associated with the user.

[000117] In some embodiments, which specific pose attributes and which poses are presented or adapted in a sequence, or whether the sequence should be terminated, is also or instead dynamically assessed depending on the congruity score of a user for a currently presented pose and/or one or more attributes associated with one or more current physical capabilities of the user, for example, how flexible they are. In some embodiments, the physical capability attributes of the user are inferred or measured using historic captured pose images of the user performing one or more poses. In some embodiments, which specific pose attributes and which poses are presented or adapted in a sequence, or whether the sequence should be terminated, is based on one or more specific rehabilitation goals for the user. In some embodiments, a pose selection module performs the pose selection. In some embodiments, the pose selection module is provided as part of a pose calibration module 27.

[000118] In other words, in some embodiments, pose calibration module 27 shown in Figure 1A also functions as a pose selection module.
The pose calibration module 27 then adapts the image of the pose/pose sequence which is displayed to user A so that user A perceives each of their captured pose(s) and each of the reference pose(s) suitably aligned, scaled or normalized with respect to each other, so that a degree of congruity between the user's pose image and the reference image can be determined, and also selects the next pose to be presented on display 14 of the device of user A. In some embodiments of the system 10, however, the pose selection module of the system 10 may be provided as a separate module from the pose calibration module, as would be appreciated by anyone of ordinary skill in the art.

[000119] In some embodiments, for a non-real-time, for example pre-recorded data, embodiment, the user flow would be as follows: the user selects an exercise or pose, which then presents a screen indicating a starting position, e.g. a T pose or similar, hands by sides. Once the user is in the correct position, the reference pose data would display automatically and the alignment of the reference data would take place. It is likely that the measuring of the patient dimensions would not be a dynamic process, but would be completed once, when they start the app. There would then be a transformation matrix or similar which is stored and applied to the reference data to scale the reference data to the user's captured pose. The pose calibration and the alignment may be regarded as separate steps, as the rotational and translational alignment would take place every time there is a new session (essentially aligning the reference pose data onto the user's detected pose position).

[000120] Microphone input is not absolutely necessary, but microphone or voice call data may accompany a real-time session in some embodiments, providing a kind of voice call with the pose data shared rather than video.

[000121] In some embodiments, the pose selection module selects a pose or pose sequence from uncalibrated reference pose data, and the selected pose is then calibrated for the follower user, for example user A, and undergoes alignment, scaling or normalizing to allow a consistent degree of congruity to be measured of the extent to which user A is able to replicate the selected pose or pose sequence. The alignment, scaling or normalizing may be performed only once or may be ongoing. However, in some embodiments, the calibration of the reference pose data occurs first and the calibrated reference pose or pose sequence is then adapted.

[000122] Thus some embodiments of system 10 perform method 100, 200 for developing motor-skills in a user, for example, the methods shown in Figures 2A and 2B described below.

[000123] In some embodiments, system 10 performs an embodiment according to the method of Figure 2A, the method comprising causing, in 102, presentation on a display of a reference pose, for example a reference pose which may be a pose in a sequence of one or more reference poses, to a user, detecting a user's pose in 104, detecting a measure of the congruity of the detected user's pose to the displayed reference pose in 106, and, responsive to the determined measure of congruity, selecting or determining another reference pose in the sequence of reference poses to present to the user in 110. The determined new reference pose is then presented on a display to the user in 112.

[000124] The new pose may be presented as a static pose, in which case selection may be based on one or more reference static pose physical attributes.
In some embodiments, however, the selection conditions for selecting the next pose may include timing conditions associated with how quickly a user adopted a pose. Other selection conditions may include, in addition to a speed of transitioning to the current pose, a duration for which the current pose has been held within a congruity threshold.

[000125] In some embodiments where a pose is to be held for a period of time, a plurality of measurements of the degree of congruity of the user's pose to the reference pose may be made. In some embodiments, the average degree of congruity over the period of time may be calculated and used to determine the next pose or pose sequence.

[000126] In some embodiments, the distribution of the degree of congruity may be assessed over time, so that if a user holds a pose particularly poorly at the end of a pose holding time period, this may be weighted more heavily than if that user holds the pose well at the beginning of the time period.

[000127] In some embodiments, there may be a measure of congruity presented to the user dynamically, for example, in the form of a score or rating or other indication as the user is performing the pose. Alternatively, or in addition, a measure of congruity may be presented at the end of the session as a kind of user session 'evaluation' score or rating.

[000128] Figure 2B shows an example of another method which the system 10 may perform in some embodiments. As shown in Figure 2B, the pose displayed on apparatus 12 of a user comprises a pose in a sequence of poses. The method performed by system 10 may accordingly comprise displaying a pose in the sequence of poses in 202, detecting user A's pose in 204, taking at least one measurement of the congruity of user A's pose and pose timing with each reference pose and each reference pose's timing in 204, determining if another pose in the sequence of poses is to be presented to the user, and, if not, continuing to present the pose and measuring the congruity of the user's pose with the presented pose by returning to 202. If another pose is to be presented in the sequence, this may be the next pose in the sequence or another pose, depending on the selection conditions for each pose in the sequence and the measured degree of congruity of the previous pose. The other pose may be another pose in the same sequence, just not the next in the sequence currently being presented, in some embodiments; however, it is also possible, depending on the measured degree of congruity, for a pose from a different pose sequence to be presented instead, where the pose sequences are taken from a library or store of reference poses 22.

[000129] In the above embodiments, reference pose 22 is shown being stored in an uncalibrated form in a data store functioning as a pose library 25. However, in some embodiments, as mentioned above, reference poses 22 may be stored in a calibrated form for a particular user. The calibrated reference poses may be stored in a pose library which is uniquely associated with a user, for example, it may be associated with a unique identifier for user A. The reference library for a user may be stored remotely and accessed by presenting some form of credentials including the unique identifier, or stored and accessed locally on apparatus 12, in which case authentication of the user by the apparatus and association of the library with the apparatus may be sufficient.
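As an illustrative sketch of the averaged and time-weighted congruity assessment described in paragraphs [000125] and [000126], assuming congruity samples normalized to the range 0 to 1; the weighting scheme and function name are assumptions, not taken from the disclosed embodiments.

```python
import numpy as np

def weighted_hold_score(congruity_samples, end_weighting=2.0):
    """Average congruity over a held pose, weighting later samples more
    heavily so that fading at the end of the hold lowers the score more
    than a shaky start (see paragraph [000126])."""
    samples = np.asarray(congruity_samples, dtype=float)
    weights = np.linspace(1.0, end_weighting, num=len(samples))
    return float(np.average(samples, weights=weights))

# Example: a good start but a poor finish scores lower than a plain mean.
print(weighted_hold_score([0.9, 0.85, 0.7, 0.4, 0.3]))
```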
[000130] Figure 1B of the accompanying drawings shows schematically another example embodiment of the disclosed technology where the reference poses 22 are provided by another user B. User B is referred to herein as a pose demonstrator or demonstrator. User B provides demonstrator poses 36 which are captured by a motion capture device 18b associated in this example with apparatus 12b in the embodiment of Figure 1B, and user B's uncalibrated demonstrator pose data 32 is then calibrated by pose calibration module 27 to provide one or more calibrated demonstrator poses 38 to apparatus 12 of user A. The apparatus 12 of user A then presents the calibrated demonstrator pose or poses 38 on the display 14 of apparatus 12 in a manner which allows a visual comparison of the calibrated demonstrator poses 38 and the user's captured pose image 16. Similarly to the reference pose embodiment of Figure 1A, the visual comparison may also comprise a presentation of both the calibrated demonstrator poses and the pose of user A at the same time on screen. In some embodiments, a congruity score for the calibrated demonstrator and user poses is also displayed. In some embodiments, an overlay of the calibrated demonstrator and user poses is presented on the display to provide a visual indication of the degree of conformity between the two poses.

[000131] In the embodiment of Figure 1B, it is also possible to present the degree of congruity on the display of the apparatus of the demonstrator (user B). This provides information to the demonstrator which may allow the demonstrator (user B) to change or adjust the subsequent poses they demonstrate to the user.

[000132] The disclosed embodiments of a system 10 for developing motor skills display one or more reference poses or sequences of a plurality of poses for a user to attempt to emulate. To ensure the user is physically following the poses, the system is configured to provide feedback to the user which indicates to what extent they have successfully replicated the pose and/or successfully transitioned from one pose to another, where a successful transition may include copying the timing of the transition from one pose to another. The reference poses/pose sequences may be generated using one or more poses and/or sequences of poses derived from capturing image information of a demonstrator user, such as user B shown in Figure 1B, or be generated using an animation technique involving an avatar for the user, or a computer-generated graphic which is programmed to perform certain movements, or taken from a reference library of poses such as is shown in Figure 1A. In other words, in some embodiments the pose and pose sequences use movements demonstrated in real-time, such as the example shown in Figure 1B, and in some embodiments poses are extracted from a library 25 of poses which may be reference poses previously performed by a demonstrator. In addition to poses performed by a human demonstrator being used as reference poses, however, in some embodiments the reference pose and pose sequences comprise computer animated graphics.

[000133] Examples of apparatus 12 include smartphones, tablets, computers, smart televisions and the like which are capable of recording video images. In some embodiments, apparatus 12 includes a display 14 and at least one electronic motion capture device 18, for example, a camera.
[000134] The pose image capture device 18 may comprise any suitable motion capture device such as a video camera or the like, and may capture depth information in some embodiments, either using a stereoscopic camera arrangement or a depth sensor such as a LIDAR sensor or the like. Depth information may be captured which may allow pose information to be captured in three spatial dimensions as well as temporally. Examples of suitable pose image capture devices 18 may be found on smart phones, tablets, computers, televisions, and similar electronic consumer devices 12, which may include an internal display 16 or suitable connection ports for connection to one or more external displays 16.

[000135] User B may have a demonstrator role, for example, by virtue of being a physiotherapist, doctor, mobility assistant, coach or tutor in some embodiments. User A, for example, a follower of the demonstrator, seeks to replicate the poses and ranges of motion being demonstrated by the demonstrator. User A may also be referred to here as a patient, student or the like.

[000136] In the example embodiments of system 10 shown in Figures 1A and 1B, motor skill development reference or demonstrator pose image data is processed to scale and calibrate the data for display on the follower user's device using a suitable calibration module of the system 10.

[000137] In some embodiments, the user's pose attributes may be scaled prior to determining the degree of congruity. For example, they may be scaled according to one or more physical attributes of one or more limb dimensions of the user in some embodiments.

[000138] Advantageously, by enabling one or more following poses in the sequence of poses to be adjusted in real-time depending on the extent of measured congruity of one or more detected previous poses of the user with one or more reference poses, one or more characteristics of the sequence of poses can be adjusted to adapt the poses according to the user's motor skills. For example, one or more or all of the poses in the sequence may be adapted to adjust the range of movement of the user which will result in the maximum degree of congruity with the reference poses, so that the time for the user's rehabilitation of their motor skills can be minimized. In some embodiments, this may also reduce the chance of injury to a user from exceeding their range of movement, ROM. The number of poses may be increased or reduced in the sequence, the number of repetitions of pose sequences may be increased or reduced, and in some embodiments the range of movement that a pose sequence defines may be changed.

[000139] In some embodiments, the detection of the user's pose and measurement of the degree of congruity of the user's pose to the reference pose is performed in real-time.

[000140] In some embodiments, displaying the user's pose on top of or underneath the reference pose provides a visible indicator of the detected measure of congruity of the follower user's pose to a displayed reference pose. This may be presented on a display associated with the user or on another display, for example, a display associated with a demonstrator whose pose data has been used to generate the displayed reference pose.

[000141] The form of feedback for the follower user does not need to be just visual; audio and tactile feedback may also be provided in some embodiments.
[000142] In some embodiments, a display provides a video or camera feed representing a current pose position of a follower user with an indicator of the reference pose, which may be shown from different views at the same time, e.g. front view, side view, etc., alongside or laid over the user's pose data as presented on the display.

[000143] In some embodiments, each pose comprises one or more pose characteristics, and wherein at least one pose characteristic of each pose in the pose sequence differs from that pose characteristic in a previous pose by a dynamically determined amount based on one or more user pose performance metrics for the previous pose.

[000144] In some embodiments, at least one pose characteristic of each pose in the pose sequence differs from that pose characteristic in a previous pose by a predetermined amount based on one or more user pose performance metrics for the previous pose.

[000145] In some embodiments, a new pose characteristic is provided in a subsequently presented pose depending on one or more different pose characteristics of one or more previous poses.

[000146] In some embodiments, the new pose characteristic provided in the subsequently presented pose is dependent on one or more user pose performance metrics for the one or more previous poses.

[000147] In some embodiments, the determined other pose is the next pose in the sequence in a forwards progression of poses in the sequence of poses.

[000148] In some embodiments, the determined other pose is a previously presented pose.

[000149] In some embodiments, the determined other pose is a previously presented pose and the method progresses in a backwards progression of poses in the sequence before reverting to a forwards pose progression.

[000150] In some embodiments, the method further comprises obtaining one or more physical characteristics associated with one or more motor-skills of the user. Examples of physical characteristics of a user include limb dimensions. Depending on which physical characteristics of a user are affecting their motor-skills, pose reference data may be generated which provides custom ranges of movement adjusted for that user's range of movement.

[000151] In some embodiments, the method further comprises obtaining one or more physical characteristics associated with one or more motor-skills of the user and using the obtained physical characteristics of the user to make one or more corresponding physical characteristics of a reference pose model for the user consistent with the corresponding physical characteristics of the user.

[000152] In some embodiments, the user is able to cause the displayed pose images to manually progress through the reference pose sequence in a forwards and/or backwards direction if the reference pose sequence comprises more than one pose.

[000153] In some embodiments, the demonstrator apparatus 12b of a demonstrator user B communicates pose information in real-time with the apparatus 12 of a follower user A. The pose information may comprise at least one reference pose from a library or a demonstrator pose of the user B. The information may be communicated in real-time to cause the device 12 of the follower user A to display in real-time at least one calibrated reference pose received from the device associated with the demonstrator user B.
[000154] In some embodiments, the apparatus 12b associated with the demonstrator user B is configured to control the follower apparatus 12 to cause a real-time display of at least one reference pose in a sequence of one or more reference poses which is presented on the display of the follower user's device 12.

[000155] In some embodiments, the demonstrator apparatus 12b is configured to control the timing of the sequential display of one or more reference or demonstrator poses on the follower's apparatus 12.

[000156] Advantageously this may allow, for example, the demonstrator user to cause a sequence of poses to be played more slowly on the follower's device or more quickly.

[000157] In some embodiments, the method further comprises causing presentation on a display 14b of apparatus 12b associated with a demonstrator user of a reference pose in a sequence of one or more reference poses comprising captured pose images of the demonstrator, and the method further comprises receiving pose reference data from a follower, calibrating the demonstrator pose data to the follower's data, determining a measure of the congruity of the demonstrator's pose to the displayed pose of the follower, and displaying a visual indication of the congruity on at least the apparatus 12b of the demonstrator.

[000158] In some embodiments, the measure of congruity may be indicated by a score or percentile indication.

[000159] In some embodiments, the visual indication of the measure of congruity is also provided on the follower's apparatus 12.

[000160] In some embodiments, the measure of congruity may be graphically indicated, for example, by displaying the follower user's pose on top of or underneath the reference pose of the demonstrator user to provide a visible indicator of the detected measure of congruity of the follower user's pose to the displayed reference pose of the demonstrator user on the display of the apparatus 12b associated with the demonstrator user.

[000161] In some embodiments, displaying the follower user's pose on top of or underneath the reference or demonstrator pose of the demonstrator user on the display associated with the apparatus 12b of the demonstrator user occurs in real-time and concurrently with displaying the follower user's pose on top of or underneath the demonstrator or reference pose on the follower's apparatus, providing a visible indicator of the detected measure of congruity of the follower user's pose to the displayed demonstrator or reference pose. This advantageously provides a consistent visible indicator of the detected measure of congruity of the follower user's pose to the displayed demonstrator or reference pose on both the follower and demonstrator apparatus. Advantageously this allows both the user and the other user, for example the instructor, to see concurrently, in other words effectively at the same time or simultaneously, to what extent they are mirroring each other's movements.

[000162] In some embodiments, the method further comprises: processing a video feed, detecting one or more poses in the video feed, presenting the video on the display of the device associated with the user, detecting one or more poses of the user; matching the detected one or more poses in the video feed to one or more detected poses of the user; and determining, for each of the detected one or more poses in the video feed, a degree of congruity of a detected pose in the video feed to a matched detected pose of the user.
[000163] In some embodiments, the presentation speed of the video on the user's display is adjusted according to the current degree of congruity of one or more matched poses of the user to the video. Advantageously, by performing pose detection on a video and matching detected video poses to a user's current positioning, i.e. to the user's current detected pose, it allows the video to be scrubbed through under the control of the user. For example, scrubbing through the video can be controlled by how closely the detected user's pose matches the detected pose in the video. In other words, the congruency of a detected user's position to a detected position presented in the video may be used to control how fast the user progresses through the video.

[000164] In some embodiments, the device associated with the user provides a virtual reality or augmented reality reference pose.

[000165] In some embodiments, the pose tracking data is acquired via one or more of: a hand controller operated by the user; a head-set operated by the user; a body-suit worn by the user; one or more sensors attached to limbs of the user.

[000166] In some embodiments, the method matches the pose tracking data captured using any of the above devices with the virtual reality representation of the reference data.

[000167] Some embodiments of system 10 comprise one or more electronic apparatus 12, 12b configured to communicate pose image data with each other, wherein one or more of the electronic apparatus 12, 12b comprise a suitable form of memory such as memory 40 shown in Figure 15, one or more processors 42, and computer code, wherein, when the computer code is loaded from memory 40 and executed by the one or more processors 42, it causes the one or more apparatus to implement a computer implemented method for developing motor-skills in a user according to any of the disclosed embodiments described above. In some embodiments, the system 10 comprises at least one apparatus 12 having in memory: a pose display module 52 configured to cause presentation on a display 16 of a follower apparatus 12 of a reference or demonstrator pose in a sequence of one or more reference or demonstrator poses to a user, a pose detection module 54 configured to detect a user's pose, and a congruity measurement module 56 configured to measure in real-time a degree of congruity of a detected follower user's pose to the displayed demonstrator pose or reference pose; and, responsive to the measured degree of congruity, a pose selection module 50 determining another pose in the sequence of poses to present to the user.

[000168] In some embodiments, the apparatus 12 further includes an image capture device configured to capture pose images from the follower user.

[000169] In some embodiments of the system, when the computer code is loaded from memory 40 and executed by the one or more processors 42, it causes the one or more apparatus 12 to implement one or more of the embodiments of the above method aspect.

[000170] Another aspect of the disclosed technology relates to a computer program comprising computer code stored in memory 40 which, when the computer code is loaded from memory 40 and executed by one or more processors 42, causes the system 10 to perform the method 100, 200 or any of the embodiments disclosed herein.
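A minimal sketch of the congruity-controlled scrubbing described in paragraph [000163] follows, assuming a congruity score normalized to the range 0 to 1; the mapping and rate limits are illustrative assumptions only.

```python
# Illustrative only: the mapping and the rate limits are assumptions.
def playback_rate(congruity_score, min_rate=0.0, max_rate=1.5):
    """Map the follower's current congruity with the pose detected in the
    video to a playback speed, so the video only advances when the user
    is matching it (paragraph [000163])."""
    rate = max_rate * max(0.0, congruity_score)
    return max(min_rate, min(rate, max_rate))

# A poorly matched pose effectively pauses the video; a well matched pose
# plays it slightly faster than real time.
print(playback_rate(0.1), playback_rate(0.95))
```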
[000171] Figure 15 shows an example embodiment of a communications-enabled apparatus 12 for training motor skills in a user, where the apparatus 12 comprises a pose display means or module for causing presentation on a display of a reference pose in a sequence of one or more reference poses to a user, a pose detection means or module for detecting a user's pose, congruity measurement means or module 56 for detecting a measure of the congruity of the detected user's pose to the displayed reference pose, and pose selection means or module 54 which, based on the measure of congruity, determines if another pose in the sequence of poses is to be presented to the user and, if so, which pose is to be presented.

[000172] In some embodiments, one or more of the means or modules comprises hardware or circuitry, including dedicated circuitry or reprogrammable circuitry, configured to perform one or more steps of method 100, 200. In some embodiments, instead or in addition, the means or module comprises software or computer program code configured to perform one or more steps in an embodiment of method 100, 200.

[000173] Some embodiments of the disclosed technology relate to a computer-readable storage medium comprising computer-program code which, when executed by one or more processors or processing circuitry of an apparatus, causes the apparatus to perform one or more of the method aspects or any of their embodiments.

[000174] Some embodiments of the disclosed technology also relate to a computer program carrier carrying a computer program comprising computer-program code which, when loaded from the computer program carrier and executed by one or more processors or processing circuitry 42 of an apparatus 12, causes the apparatus 12 to implement a method 100, 200 or one of the disclosed embodiments of the method 100, 200. The computer program carrier is one of an electronic signal, optical signal, radio signal or computer-readable storage medium.

[000175] Another aspect of the disclosed technology comprises a computer program product comprising computer code which, when loaded from memory 40 and executed by one or more processors 42 of a control circuit of an apparatus 12 or of the system 10, causes the apparatus 12 and/or the system 10 to implement an embodiment of one or both of methods 100, 200 or any of their disclosed embodiments.

[000176] Figure 3A shows schematically an example of a calibration which is performed to measure the congruity of user A's pose with a reference or demonstrator pose. Figure 3B shows schematically how user A's pose is overlaid with the calibrated reference or demonstrator pose data to provide an overlay pose image which gives a visual indication of the measure of congruity or alignment between the two pose images.

[000177] A pose image is what is displayed, in other words presented to a user, after all post-processing, scaling and alignment has taken place. The pose is detected from images, for example from a camera feed, which are used by a pose detection ML model. The ML pose detection model is trained on images of known pose positions and their pose point data so that, when fed with a user's image, it is able to determine a probable pose contained in the image, and outputs a set of pose joint positions which meet one or more confidence or similar selection criteria. The calibration is always between the ML-detected features/landmarks, in other words the joint/pose positions, rather than the actual images themselves, which may use different aspect ratios, etc.
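As a minimal sketch of the confidence-based selection of detected pose joint positions described in paragraph [000177], assuming the ML model returns, per joint, a coordinate triple and a confidence value; the data layout, joint names and threshold are illustrative assumptions. The retained landmarks are what the subsequent calibration and congruity steps would then operate on.

```python
# Illustrative only: joint names, data layout and threshold are assumptions.
def reliable_joints(detections, min_confidence=0.6):
    """Keep only the detected joints whose confidence meets the selection
    criterion, so that calibration and congruity are computed on landmark
    data rather than on raw images (paragraph [000177])."""
    return {name: (x, y, z)
            for name, (x, y, z, conf) in detections.items()
            if conf >= min_confidence}

detections = {"l_elbow": (0.31, 0.52, 0.10, 0.92),
              "l_wrist": (0.35, 0.71, 0.12, 0.41)}   # wrist too uncertain
print(reliable_joints(detections))                   # only the elbow remains
```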
[000178] In this way, a user may perceive a level of congruity between their body's pose image as displayed, which is a visual representation of that user's joint positions on the screen, and their body position, and they are able to adjust their body position accordingly to better match their displayed body's pose image to a reference pose. The system and the platform accordingly do not work from images directly, as this would simply "stretch" or "shorten" the entire image, whereas the disclosed embodiments can scale a user's upper body in a different way from their lower body if appropriate to match a reference image, using pose detection and subsequent calibration of pairs of joint data points and alignment so as to generate pose data that is comparable when displayed on a screen.

[000179] Pose data for the reference and/or demonstrator poses may be streamed over any suitable connection, wired or wireless, known in the art to allow images to be communicated from the calibration module of the system to apparatus 12 in a timely way without undue jitter or delay. Such connections are now available via a variety of communication networks, including by way of example, Wi-Fi, cellular and/or new radio networks, such as 5G and 6G networks, and also over direct peer-to-peer communications links such as, for example, Wi-Fi Direct or Bluetooth™. In embodiments where user B's pose information is communicated to the apparatus of user A for display with the pose of user A, calibration of user B's pose data may be performed to present a measure of congruity on the apparatus of user A, for example, in the form of an overlay model. Alternatively, the data stream provided by the calibration module 27 may be provided to the apparatus of user A and the apparatus of user B, which allows the same overlay images and measures of congruity of the two users' poses to be presented concurrently, in other words, at the same time, on each of the demonstrator's and follower's apparatus 12b, 12.

[000180] Accordingly, in some embodiments of the systems shown in Figures 1A and 1B, the electronic device 12 performs a computer-implemented method 200 of developing motor-skills in a user, the method 200 comprising: displaying 202 a pose in a sequence of poses to a user, detecting 204 a measure of the congruity of the user's pose to the displayed pose, and, responsive to the measure of congruity, determining 206 another pose in the sequence of poses to present to the user. An example of such a method is also shown in Figure 2B of the accompanying drawings. The displayed pose may be provided by a demonstrator or be retrieved from a reference library.

[000181] Whilst some embodiments of the disclosed technology scale image data based on the physical pose metrics of user A and/or user B, in some embodiments, the scaling may comprise scaling the physical pose metrics of both users A and B to a set of normalized pose metrics, and then overlaying the scaled physical pose metrics of both users on the displays 16a, 16b of one or both of user devices 14a, 14b.

[000182] Figure 4 shows an example pose sequence such as may be provided by user A or pose library 22 and displayed on display 16b of user B by an embodiment of system 10.
[000183] In this example, a pose sequence comprises six poses, pose 1, pose 2, pose 3, pose 4, pose 5 and pose 6, which are to be performed by the user in that order. In some embodiments, such as that shown schematically in Figure 4, two pose sequence milestones may be provided. The first pose sequence milestone is between pose 3 and pose 4, and the second is the sequence completion milestone after pose 6.

[000184] A milestone pose is a user-selected or generated, or alternatively an AI-generated, significant, in other words important, pose that once reached allows the follower user to progress to a next stage in a pose sequence. So if a user has to learn motor skills to allow them to brush their teeth, the first milestone pose may be a pose sequence where they bring a toothbrush up to their mouth. This may allow access to a subsequent reference or demonstrated pose sequence where the user opens their mouth whilst holding the toothbrush against their mouth.

[000185] Figure 5A shows an example uncalibrated or unscaled reference or demonstrator pose sequence data record comprising six poses (pose 1 to pose 6) which is generated by a user A or by a computer system, for example, it may be AI-generated in some embodiments, and which is stored with a normalized height Href which is taller than the height Huser of the user shown in Figure 5B.

[000186] Figure 5B shows a user's attempt to emulate the reference or demonstrator pose sequence. In this example, the user's height Huser is shorter than that of the uncalibrated, in other words the normalized, reference pose demonstrator's height Href.

[000187] Figure 5C illustrates how the reference pose sequence is scaled and calibrated, in this case using just one physical metric of user A, their height, so that the height of the pose image data of user B (here the demonstrator) is scaled to match the image height of the poses of user A. As will be appreciated, one or more or many other physical metrics of users could be captured in real embodiments, for example, their visible or estimated girth, or their limb lengths etc.

[000188] Figure 5D shows the scaled and/or calibrated demonstrator or reference pose sequence with the attempted emulating poses of the user. As shown, not all poses, especially the second pose and fourth pose in the pose sequence, are performed correctly by the user in Figure 5D.

[000189] Figure 6A shows schematically how a follower user's limb lengths may be differently adjusted when calibrating that follower user's body pose to a reference pose for display. In Figure 6A, each pose point in the set of pose joint points P1, P2, P3 for a detected pose (P) of the follower user is mapped to a reference set of pose joint points Q1, Q2, Q3 for a reference or demonstrator pose (Q) of a demonstrator user. As shown in Figure 6A, the pose follower's joint lengths are different from the corresponding joint lengths of the pose being demonstrated. For example, if P3 is the shoulder, P2 an elbow, and P1 a wrist joint of a user's arm, then the distance between P3 and P2 for that user is longer than the corresponding distance Q3 to Q2 in the reference pose, whilst the distance between the elbow joint P2 and wrist joint P1 of the follower user's arm is shorter than the distance between the elbow joint Q2 and wrist joint Q1 of the reference pose.
In this example embodiment, calibration results in the reference pose data being adjusted so that the length Q3-Q2 in the displayed reference pose matches P3-P2 of the follower user, and the distance Q2-Q1 matches P2-P1 of the follower user, so that if the user performs the pose correctly, they will not be able to distinguish between the positions of the joints when displayed. Of course, a user may not perform a pose correctly, but if the distance or congruity measurement, in other words the degree to which P matches Q when displayed, is below a threshold, then a pose sequence being followed by that user could progress further and present another pose in the sequence.

[000190] Figure 6B shows how pose sequences may progress by way of the distance between sets of points P1, P2, P3 associated with pose #1, for example, and points P'1, P'2, and P'3 associated with pose #2, for example.

[000191] Figure 7A shows how a user may progress in a cyclical manner or loop through a demonstrator or reference pose sequence comprising a temporal progression of three poses #1, #2, #3 over time. Here pose #1 must be completed correctly before the user is able to progress to pose #2, and pose #2 must be completed correctly before the user is able to progress to pose #3. In this case, a user who completes pose #3 correctly is then guided back to pose #1.

[000192] Figure 7B shows how a user may progress by oscillating between two different poses, pose #1 and pose #2, where a pose sequence takes on a ping-pong type of temporal progression. Once the user has correctly performed pose #1, they must then perform pose #2. Once pose #2 has been correctly performed, the user must perform pose #1 again.

[000193] Figure 7C shows a pose sequence comprising a linear progression of three poses, where pose #1 must be completed correctly before the user is able to progress to pose #2, and pose #2 must be completed correctly before the user is able to progress to pose #3. In some embodiments the sequences may loop a predetermined number of times, or until a certain measure of congruity is reached. Similarly, if a milestone pose is not performed with sufficient congruity to a reference pose in a pose sequence, the pose sequence may iterate.

[000194] In some embodiments, if a user fails to perform a pose sufficiently well, in other words, if a sequence continuation condition is not met by the measured degree of congruity, the sequence may be aborted, or revert back to the initial pose or to a previous pose. The system may determine when to revert back based on the extent to which the pose follower user's pose metrics conform with the calibrated reference pose metrics and/or by checking to see if the pose follower user's pose metrics break any rules for automatically progressing to the next pose, and/or meet any rules to revert back to repeat one or more previous poses.
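Returning to the limb-length calibration of Figure 6A and paragraph [000189], the following is a minimal sketch of rescaling a reference joint chain to the follower's segment lengths; the joint chain, coordinates and function name are illustrative assumptions rather than the disclosed implementation.

```python
import numpy as np

def retarget_chain(ref_joints, follower_joints):
    """Rescale each segment of a reference joint chain (e.g. shoulder ->
    elbow -> wrist) so its segment lengths match the follower's, while
    keeping the reference segment directions (compare Figure 6A)."""
    ref = [np.asarray(p, dtype=float) for p in ref_joints]
    fol = [np.asarray(p, dtype=float) for p in follower_joints]
    out = [ref[0].copy()]                             # anchor at the first joint
    for i in range(1, len(ref)):
        direction = ref[i] - ref[i - 1]
        direction /= np.linalg.norm(direction)        # direction from reference pose
        length = np.linalg.norm(fol[i] - fol[i - 1])  # length from follower's body
        out.append(out[-1] + direction * length)
    return out

# Shoulder -> elbow -> wrist: the displayed reference arm takes on the
# follower's upper-arm and forearm lengths.
print(retarget_chain([(0, 0, 0), (0, -0.30, 0), (0, -0.55, 0)],
                     [(0, 0, 0), (0, -0.35, 0), (0, -0.55, 0)]))
```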
[000195] Figure 8 shows a pose library 29 in which each pose library record comprises one or more poses and/or pose sequences, including pose timing data, associated with a particular follower user. The pose library 29 comprises in some embodiments poses which are pre-calibrated to that user, which can be distinguished accordingly from normalized or uncalibrated reference poses, see Figure 1A, such as poses 22 stored in an uncalibrated pose library 25. If the user is following pre-calibrated pose data, then in some embodiments a congruity measure for a pose extracted from the library can be directly determined based on the detected pose joint positions of a data feed including a pose of the pose follower user associated with that calibrated library. Advantageously, using pre-calibrated pose data may reduce any processing delay which might otherwise occur if uncalibrated pose data were stored, as this would need to be calibrated to that specific pose follower user in real time before being displayed and the pose congruity measurements obtained.

[000196] However, the transformation of uncalibrated pose data should nonetheless be relatively quick and achievable in real-time; for example, as soon as a pose transformation matrix has been determined, the matrix can be applied to a user's detected joint positions in real-time to calibrate their pose to the reference pose on a display. The sources of the reference poses include real-time poses from a demonstrator user, an uncalibrated reference pose library, or a user's own local storage of poses as calibrated reference poses, which may include pose data for that user's own poses as recorded from their previous poses, or data comprising poses sent from and/or stored by another user on their device. Another source of reference pose data may comprise a kind of de-centralized library for poses.

[000197] Figure 9 shows how, in some embodiments of system 10, in a pose sequence or progression, if a pose follower user does not reach a sufficient degree of conformity, the pose sequence may change dynamically and revert back to an earlier pose. This may be the previous pose or an earlier pose, for example, the beginning of the last pose milestone. As shown, pose 2 is not performed sufficiently poorly to trigger the reference pose sequence to revert or loop, and the next pose shown is pose 3, which the user performs sufficiently well for pose 4 to be displayed next. However, pose 4 is performed sufficiently poorly for reference pose 4 to be maintained, and when the user's next attempt does not result in sufficient improvement, the system reverts back to an earlier pose, pose 3, from which the user then attempts pose 4. This is performed incorrectly, and the system displays pose 3 as a reference pose again, and after this pose 4 is correctly performed by the user. The pose sequence then continues with the user performing poses 5 and 6. This illustrates schematically how, in some embodiments, a pose sequence may be changed dynamically based on a user's performance, for example, based on one or more spatial and/or temporal metrics indicative of the extent to which the user's displayed representation is conforming with the scaled and/or calibrated guidance or reference pose representation. A user may want to rehearse a previous pose, in which case the pose displayed is that which most closely matches the current follower position.

[000198] Figure 10 shows schematically an embodiment where, instead of dynamically adjusting the sequence of poses provided by a pose library, one or more of the poses in the sequence are dynamically modified. In the example illustrated in Figure 10, pose 4* is dynamically added to the sequence to help the user transition from pose 4 back to pose 3.
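As a hedged sketch of the dynamic progression and reversion behaviour illustrated in Figures 7A to 7C and Figure 9 and described in paragraphs [000194] and [000197]; the thresholds, failure count and milestone indices are assumptions made for illustration only.

```python
# Illustrative only: thresholds, failure count and indices are assumptions.
def next_pose_index(current, congruity, failures, milestones,
                    pass_threshold=0.7, max_failures=2):
    """Advance, hold, or revert within a pose sequence (compare Figure 9):
    a pose held below the threshold is repeated, and repeated failure sends
    the user back to the start of the last milestone."""
    if congruity >= pass_threshold:
        return current + 1, 0                       # progress, reset failures
    if failures + 1 < max_failures:
        return current, failures + 1                # repeat the same pose
    last_milestone = max((m for m in milestones if m <= current), default=0)
    return last_milestone, 0                        # revert to the milestone

# Pose 4 failed twice in a sequence with a milestone between poses 3 and 4,
# so the user is guided back to pose 3 (indices are illustrative).
print(next_pose_index(4, 0.4, 1, milestones=[0, 3]))
```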
[000199] Figures 11A to 11D show additional examples of how reference pose data may be scaled and/or otherwise calibrated to allow the reference pose data to be displayed as an overlay image with pose data captured for a user on an electronic device. Figure 11A shows how reference pose image data may be translated spatially, in other words translating points Q1, Q2, Q3 of a pose Q such as was shown in Figure 6B. Figure 11B shows how reference pose image data may be rotated or reoriented. Figure 11C shows how reference pose image data may be physically transformed based on a series of physical user metrics, such as height, waist, and limb measurements. Figure 11D shows how a range of motion may be scaled from a reference image pose so as to present a user with a more attainable range of motion based on their current physical capabilities. In this manner, reference pose image data may be statically scaled and/or calibrated, for example, as part of a system initialization or based on stored physical pose and/or user metrics, and also dynamically scaled based on feedback from a user in real time. Both still poses and pose sequences involving a range of motion may be scaled and/or calibrated in this way.
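A minimal sketch of the range-of-motion scaling of Figure 11D follows, assuming poses are stored as 3 x n arrays and that a neutral starting pose is available; the blending approach and the parameter name rom_fraction are illustrative assumptions.

```python
import numpy as np

def scale_range_of_motion(neutral_pose, reference_pose, rom_fraction):
    """Blend a reference pose toward a neutral starting pose so the
    demonstrated range of motion matches the follower's current capability
    (compare Figure 11D); rom_fraction = 1.0 reproduces the full reference."""
    neutral = np.asarray(neutral_pose, dtype=float)
    reference = np.asarray(reference_pose, dtype=float)
    return neutral + rom_fraction * (reference - neutral)

# A follower assessed at 60% of the demonstrated range sees a pose that
# only reaches 60% of the way from the neutral pose to the reference pose.
scaled = scale_range_of_motion(np.zeros((3, 5)), np.ones((3, 5)), 0.6)
print(scaled[0, 0])   # 0.6
```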
[000200] Figures 12A and 12B show how reference pose data presented to a user may be provided with additional annotation, such as graduated progression outlines and/or movement emphasis. In some embodiments the pose may be moved across in time, in other words, the time over which the pose positions would move (in real time) is adjusted to make the velocity constant, for example, to make the gap between successive pose points small so that the follower user can see more detail of how the pose is transitioning in sequence. Graduated progression reference pose data may be stored as a sequence of poses. Interpolation may also be used to break down a sequence of stored poses into a more detailed sequence. In other words, a higher pose image density sequence can be generated by interpolation in some embodiments. Pose interpolation may also be used to dynamically adjust reference pose data when presenting poses to a follower user.
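As a minimal sketch of the interpolation and re-timing of reference pose sequences mentioned in paragraph [000200] (and in the earlier playback-speed discussion), assuming a sequence of 3 x n pose arrays with associated timestamps; linear interpolation is used here purely for illustration, and smoother schemes could equally be applied.

```python
import numpy as np

def resample_sequence(poses, times, new_times):
    """Linearly interpolate a stored pose sequence (each pose a 3 x n
    array) at new timestamps, either to densify the sequence or to play it
    back more slowly or quickly."""
    poses = np.asarray(poses, dtype=float)          # shape (frames, 3, n)
    flat = poses.reshape(len(times), -1)
    resampled = np.stack([np.interp(new_times, times, flat[:, k])
                          for k in range(flat.shape[1])], axis=1)
    return resampled.reshape(len(new_times), *poses.shape[1:])

# Five stored poses per second resampled to show a pose every 0.1 s.
seq = np.random.rand(5, 3, 17)
dense = resample_sequence(seq, times=np.linspace(0, 1, 5),
                          new_times=np.linspace(0, 1, 11))
print(dense.shape)   # (11, 3, 17)
```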
[000201] Figures 13A to 13D show examples of poses a user may adopt, from which static or temporal physical metrics for scaling and/or calibrating poses and/or pose sequences can be captured. In other words, the poses illustrated allow calibration data to be captured for adjusting demonstrator or reference poses into calibrated demonstrator poses or calibrated reference poses for that user.
The calibration poses shown in Figures 13A to 13D are preferably performed prior to accessing a library of uncalibrated reference poses or receiving uncalibrated demonstrator poses from a demonstrator. The calibration may be just spatial, based on one or more static poses, in some embodiments; however, where range of movement is being determined, the time taken to achieve a sequence of poses, in other words to transition from one pose to the next, may be used as a calibration metric as well in some embodiments. As shown in Figure 13A, a user may start in a random pose at time t0. Responsive to a set of one or more calibration poses, poses #1, #2, #3, shown sequentially on the user's device and in Figures 13B, 13C and 13D respectively, the user's electronic device is able to capture physical metrics of the user. Physical metrics may fall into static and dynamic categories. For example, user measurements such as limb lengths, height and habitus/build are relatively static. Dynamic ranges of movement are determined experimentally and dynamically via performance metrics such as flexibility, as mentioned above. For example, a user may complete range of motion, ROM, exercises or similar, in which case a ROM performance metric may represent the degrees of freedom which a person has at a joint and the range of movement; for example, an elbow typically has a range of motion of approximately 0 to 160 degrees. In some embodiments, the time taken for a user to adopt a pose which sufficiently conforms with the calibration pose is also captured. This can provide an indication of the user's range of movement and/or flexibility. For example, for temporal data, a change in measured body point positions over time can be captured and used to generate performance metrics. If the measured velocity for a certain movement is below a threshold, this may indicate de-conditioning, weakness, or loss of flexibility, as mentioned above.
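The following is an illustrative sketch of deriving a joint-angle range-of-motion metric from detected pose points, as discussed in paragraph [000201]; the joint coordinates and function name are assumptions, and real embodiments may compute such metrics differently.

```python
import numpy as np

def joint_angle(proximal, joint, distal):
    """Angle in degrees at a joint (e.g. the elbow) from three detected
    pose points, usable as a static range-of-motion calibration metric."""
    u = np.asarray(proximal, dtype=float) - np.asarray(joint, dtype=float)
    v = np.asarray(distal, dtype=float) - np.asarray(joint, dtype=float)
    cos_angle = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))))

# Shoulder, elbow and wrist roughly in line give a nearly extended elbow;
# tracking this angle over time yields a flexibility / ROM performance metric.
print(joint_angle((0, 0.5, 0), (0, 0.2, 0), (0, -0.1, 0.02)))
```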
[000202] Figure 14A shows schematically a sequence of poses involving a user holding a toothbrush and moving the toothbrush across their teeth. In this example, an initial pose does not move the brush across the teeth, just up and down, for an initial period of time, for example, 1 second. The next four images form a cyclical pose sequence in which a complete cycle takes approximately 20 seconds. This may be repeated a predetermined number of times or based on sensor or other image feedback (e.g. a dental dye may be used to indicate how effective the brushing is) until a certain level of cleanliness is reached, at which point the pose information shown is a user gargling. This pose image is held for approximately 10 seconds on screen before the next pose (if any, e.g. removing the mouthwash) is displayed.
[000203] Figure 14B shows schematically an embodiment of system 10 in which a demonstrator, for example, user B in Figure 1B, who controls or provides reference pose image data to a follower, for example, user A in Figure 1B, is able to advance or rewind a pose sequence (which may be provided as a sequence of still or video images) and/or advance or rewind concurrently with the image data or independently of it any text and/or audio commentary (if provided). Similarly in some embodiments, a follower, such as user A in Figure 1B, is able to advance or rewind a pose sequence and/or any accompanying text or audio commentary independently of the actions of the demonstrator. It is also possible in some embodiments for the demonstrator and follower roles to dynamically change.
[000204] In some embodiments, an option may be provided to the demonstrator and/or the user to share the point in the pose image sequence to which they have rewound with the other party so they can both see the same pose image data and resume the pose sequence. In some examples, the demonstrator may be a library pose sequence demonstrator, in which case, only the pose follower user may be able to rewind/advance the pose sequence as shown (although the system could do this, this would only be if a user does not sufficiently conform with a reference pose, as was shown in Figures 9 and 10).
[000205] In some embodiments, a class may be taking the same pose sequence with an instructor, and multiple overlays may be provided in real time to a demonstrator on a demonstrator/instructor screen. This allows, for example, the possibility of having multiple overlays of different remote follower users' poses with a reference pose image.
[000206] A variety of mappings between user devices are possible apart from embodiments with a one-to-one demonstrator to follower user configuration, such as Figure 1A shows schematically, where the demonstrator poses are in a pose library, and as Figure 1B shows schematically, where there is a demonstrator and a user exchanging pose data. For example, a many-to-one device configuration may be used to share pose data between multiple demonstrator or observer users' devices and a follower user, which allows a user to share their pose information concurrently or asynchronously with multiple demonstrators or other parties. This may be useful where a patient is working with multiple therapists to develop or recover motor skills. Another device configuration may be one-to-many, a typical group instruction scenario where one demonstrator shares their pose information with several follower users, such as in a yoga class or the like. Each user perceives a mapping of their pose data with the demonstrator's pose data on their own device. Another possible device configuration which would support a many-to-many mapping of pose data is a network of devices, which may also provide reference data to a distributed system such as a cloud sharing platform. In embodiments with a many-to-many configuration of devices, there may be many demonstrators and many follower users, who all make their pose data available to each other, either with or without access restrictions.
[000207] Figure 15 shows schematically an example of an electronic apparatus 12 comprising a display 14 and motion capture camera 18 suitable for displaying and capturing pose data of a user, a memory 40, and one or more processors or processing circuitry 42 configured at least in part to implement one or more method aspects disclosed herein. Apparatus 12 also comprises a suitable transceiver and antenna arrangement for data communications, shown as TX/RX 28 in Figure 15, connectable to a data interface 46 configured to receive pose data and to cause it to be displayed on display 14. Apparatus 12 also includes a suitable power supply, which may comprise a mains source and/or a battery source in some embodiments where the apparatus is portable.
[000208] In some embodiments, the apparatus 12 is associated with a pose follower user and comprises apparatus for developing motor-skills in a pose follower user comprising a memory 40, one or more processors or processing circuitry 42, and computer program code stored in the memory 40, wherein the computer program code 50, when loaded from memory 40 and executed by the one or more processors or processing circuitry 42, causes the apparatus 12 to implement a method for developing motor-skills in a pose follower user, the method comprising: causing presentation of a reference pose on a display 14 of the apparatus 12 associated with a pose follower user; detecting a pose of the pose follower user, for example, a pose detected from a camera image and/or a video feed, and causing presentation of the pose of the pose follower user concurrently with the reference pose on the display 14, measuring in real-time a degree of congruity of the detected pose to the reference pose, and responsive to the measured degree of congruity, determining another pose to present on the display 14.
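As a very rough sketch of the method of paragraph [000208] expressed as a control loop, where display, detector and selector stand in for the pose display, pose detection/congruity measurement and pose selection modules; these interfaces are hypothetical placeholders and not the disclosed module APIs.

```python
import time

def run_session(display, detector, selector, sequence, period_s=0.1):
    """Minimal control loop: present a reference pose, detect the follower's
    pose, measure congruity in real time and choose the next pose.
    `display`, `detector` and `selector` are hypothetical placeholders for
    the pose display, pose detection and pose selection modules."""
    index = 0
    while index is not None and index < len(sequence):
        reference = sequence[index]
        display.show(reference)                    # pose display module
        user_pose = detector.detect()              # pose detection module
        display.overlay(user_pose, reference)      # visual congruity feedback
        score = detector.congruity(user_pose, reference)
        index = selector.next_index(index, score)  # pose selection module
        time.sleep(period_s)                       # pace the real-time loop
```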
[000209] The display 14 may be integrated with the apparatus 12, for example, if the apparatus comprises a computer, smart phone or tablet, the display may comprise an integrated touch-screen display which combines user interface functionality with the display. Alternatively, the display 14 may be connected to the apparatus 12 temporarily or permanently, either as a secondary display or as a primary display.
[000210] In some embodiments, the computer program code 50 held in memory 40 comprises a reference pose display module or circuitry 54 which, when executed by the apparatus 12, causes presentation of a reference pose on a display 14 of the apparatus associated with a pose follower user, a pose detection module or circuitry 56 which, when executed by the apparatus 12, causes the apparatus 12 to detect a pose of the pose follower user, for example, using the image capture device 18 such as a camera or from a video feed, a pose display module or circuitry 56 which, when loaded by the apparatus 12, causes presentation of the pose of the pose follower user concurrently with the reference pose on the display, a congruity measurement module or circuitry 60 configured, when executed, to measure in real-time a degree of congruity of the detected pose to the reference pose, and a pose selection module 62 configured, when executed and responsive to the measured degree of congruity, to determine another pose to present on the display 14 of the device.
[000211] In some embodiments, one or more pose attributes, in other words, pose joint data points of the reference pose, are transformed, or calibrated, to corresponding pose joint data points of the user, for example, by executing a pose calibration module or circuitry 58 held in memory 40, prior to determining the degree of congruity with the pose of the pose follower user for display on the display 14.
[000212] In some embodiments, the method further comprises displaying overlaid pose images of the pose follower's pose and the reference pose, whereby the overlaid pose images provide a visible indicator of the detected measure of congruity of the pose follower user's pose to the displayed reference pose on the display.
[000213] In some embodiments, the reference pose comprises a demonstrator pose derived from a demonstrator in real-time.
[000214] In some embodiments, each pose comprises one or more pose characteristics, and wherein at least one pose characteristic of each pose in the pose sequence differs from that pose characteristic in a previous pose by a dynamically determined amount based on one or more user pose performance metrics for the previous pose.
[000215] In some embodiments, the computer program code, for example, in some embodiments, the calibration module or circuitry 58, further causes the apparatus 12 to obtain one or more physical characteristics associated with one or more motor-skills of the pose follower user and to use the obtained physical characteristics of the pose follower user to adjust one or more corresponding physical characteristics of the reference pose. For example, the physical characteristics may comprise one or more of the following: one or more limb dimensions such as limb length, or flexibility performance metrics based on a user following static or dynamic, in other words time-sequenced, poses. Static metrics may be detected from an initial pose by a user using a depth camera or the like, or be entered manually by a user or other party, or otherwise provided as input to the apparatus, for example, they may be imported from a data file or repository such as the Apple™ Health app in some embodiments. In some embodiments other physical characteristics which affect the development of motor-skills may be used to adjust pose data, for example, as mentioned above, a custom RoM may be provided to adjust a normalized uncalibrated pose or pose sequence to a particular user's RoM.
[000216] In some embodiments, a plurality of reference pose images are displayed in a sequence on the display, wherein the reference poses presented progress through the reference pose sequence in a forwards and/or backwards direction responsive to input from the follower user.
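A minimal sketch of such forwards/backwards progression in response to follower input is shown below; the input codes are illustrative assumptions.

```python
# Illustrative sketch: step the reference pose sequence in either direction,
# clamped to the valid range of indices.
def step_sequence(index: int, length: int, user_input: str) -> int:
    if user_input == "forward":
        index += 1
    elif user_input == "back":
        index -= 1
    return max(0, min(index, length - 1))
```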
[000217] In some embodiments, the apparatus is configurable to co-operate with data received from another apparatus which provides demonstrator reference poses.
[000218] In some embodiments, an apparatus 12b associated with another user, not shown in Figure 15, but see for example apparatus 12b in Figure 1B, comprises at least a memory holding computer program code, a processor for loading and executing the computer program code, and a transmitter/receiver antenna arrangement. The computer program code held in its memory may be loaded and executed by its processor or processing circuitry to cause and control the presentation of the one or more reference poses on the display of the apparatus 12 associated with the pose follower user, shown as apparatus 12 in Figure 15.
[000219] For example, in some embodiments, the user of the other apparatus comprises a pose demonstrator user and the reference pose comprises a demonstrator pose, and the demonstrator's apparatus may be configured to capture demonstrator pose information using an image capture device associated with the demonstrator's apparatus and to communicate the demonstrator pose information in real-time to the apparatus 12 of the pose follower user. In some embodiments, the calibration module or circuitry 58 of apparatus 12 is configured to calibrate, based on demonstrator pose information which includes at least one captured demonstrator pose, a demonstrator pose to a calibrated pose for the pose follower user using apparatus 12.
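Purely as an illustration of real-time communication of demonstrator pose information, the sketch below serialises captured joints as JSON over UDP; the transport, message format, address and port are assumptions and do not describe the actual protocol used between apparatus 12b and apparatus 12.

```python
# Illustrative sketch: stream one demonstrator pose per message to the
# follower's apparatus. The address below is a documentation-range IP.
import json
import socket
import time

FOLLOWER_ADDR = ("192.0.2.10", 50007)   # hypothetical follower apparatus

def send_demonstrator_pose(sock: socket.socket, joints) -> None:
    """Serialise one captured demonstrator pose and send it to the follower."""
    message = {"timestamp": time.time(), "joints": joints}
    sock.sendto(json.dumps(message).encode("utf-8"), FOLLOWER_ADDR)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_demonstrator_pose(sock, [[0.5, 0.2], [0.5, 0.4], [0.4, 0.6]])
```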
[000220] In some embodiments, the apparatus 12b associated with the demonstrator user is configured to cause a real-time image to be displayed of their pose, calibrated to the follower user, overlaying or underlying a pose of the follower user. Apparatus 12b may also be configured to provide a voice communications channel to allow additional voice guidance or other guidance or encouragement to be provided from a demonstrator user to a follower user.
[000221] The apparatus 12 may also be configured in some embodiments to process a video feed, detect one or more reference poses in the video feed, cause presentation of the video on the display 14 of the apparatus 12 associated with the pose follower user, seek to match the detected one or more reference poses in the video feed to one or more detected poses of the user, and determine for each of the detected one or more reference poses in the video feed, a degree of congruity of a detected pose in the video feed to a matched detected pose of the user.
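An outline of this video-feed processing is sketched below. The `estimate_pose` function is a placeholder for any human pose estimation model rather than a real library call, and the simple per-joint distance score stands in for whichever congruity measure is used.

```python
# Illustrative outline: per frame, extract a reference pose from the video
# feed and match it against the follower's concurrently detected pose.
import cv2
import numpy as np

def estimate_pose(frame: np.ndarray) -> np.ndarray:
    """Placeholder: return an (N, 2) array of joint coordinates."""
    raise NotImplementedError("plug in a pose estimation model here")

def score_video_against_user(video_path: str, camera_index: int = 0):
    video = cv2.VideoCapture(video_path)
    camera = cv2.VideoCapture(camera_index)
    scores = []
    while True:
        ok_v, video_frame = video.read()
        ok_c, camera_frame = camera.read()
        if not (ok_v and ok_c):
            break
        reference = estimate_pose(video_frame)   # reference pose from the feed
        detected = estimate_pose(camera_frame)   # follower's detected pose
        error = np.linalg.norm(detected - reference, axis=1).mean()
        scores.append(1.0 - min(error / 0.1, 1.0))  # simple congruity stand-in
    video.release()
    camera.release()
    return scores
```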
[000222] In some embodiments, a user is able to share their video feed data with one or more other users or just share the pose data directly in real-time with one or more other users.
[000223] In some embodiments, the presentation speed of a pre-recorded pose video being presented on the pose follower user's display 14 is adjusted according to a determined measure of a degree of congruity of one or more poses of the pose follower user matched to one or more reference poses in the video.
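One possible mapping from the measured congruity to playback speed is sketched below; the linear mapping and the rate bounds are illustrative assumptions.

```python
# Illustrative sketch: slow the pre-recorded pose video when congruity is low
# so the follower can catch up, and restore normal speed as congruity improves.
def playback_rate(congruity_score: float,
                  min_rate: float = 0.25, max_rate: float = 1.25) -> float:
    score = max(0.0, min(congruity_score, 1.0))
    return min_rate + score * (max_rate - min_rate)
```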
[000224] Another embodiment of the disclosed technology relates to a system for developing motor-skills in a pose follower user, such as the systems 1 shown in Figure 1A and Figure 1B for example. In some embodiments, the system comprises means or a module configured to cause presentation on a display of an apparatus associated with a pose follower user of a reference pose in a sequence of one or more reference poses, and means or a module configured to detect a pose of the pose follower user, which may use any suitable means known in the art such as those mentioned above, including, for example, an open source human pose estimation model such as one of those referred to in Wang et al., "Deep 3D human pose estimation: A review", Computer Vision and Image Understanding, Volume 210, 2021, 103225, ISSN 1077-3142, https://doi.org/10.1016/j.cviu.2021.103225. The system further comprises means or a module configured to measure in real-time a degree of congruity of the detected pose to the displayed reference pose; and means or a module configured, responsive to the determination of a measured degree of congruity, to determine another pose in the sequence of poses to present on the display of the apparatus associated with the pose follower user.
[000225] Another embodiment of the disclosed technology comprises an apparatus for developing motor-skills in a pose follower user, the apparatus comprising a memory, one or more processors or processing circuitry, and computer program code, wherein the computer program code, when loaded from memory and executed by the one or more processors or processing circuitry, causes the apparatus to implement a method according to any one of the disclosed embodiments, including the methods 100 and 200 shown in Figures 2A and 2B respectively.
[000226] In some embodiments, the apparatus 12 or 12b comprises means or a module, which may be implemented in software or in circuitry, which is configured to detect a pose of the pose follower user, for example, a pose image capture device 18 such as is shown in Figure 1A or 1B. In some embodiments, however, in addition or instead, the means or module generates pose tracking data for the pose follower user which is acquired via one or more of a hand controller operated by the pose follower user, a head-set worn by the pose follower user, a body-suit worn by the pose follower user and one or more sensors attached to one or more limbs of the pose follower user.
[000227] The apparatus 12 shown in Figure 9 (and also 12b shown in Figure 1B) is an example electronic device which may be configured to perform an embodiment of any of the disclosed methods or embodiments thereof. Example embodiments of such an electronic device include wireless communications devices, for example, a smart phone, a desktop computer, a personal digital assistant, PDA, with image capture capabilities, a wearable device such as a watch, a wireless camera with communications functionality and a display, a gaming console or device, a music storage device, a playback appliance, for example, a fridge with a Wi-Fi connection and a display, a tablet, a laptop, laptop-embedded equipment, LEE, laptop-mounted equipment, LME, or any other suitable smart device such as a television and the like. The disclosed method may also be implemented using a kiosk or dedicated pose capture and display terminal or other computer provided with a camera, microphone and display. The TX/RX may use wired communications in some embodiments.
[000228] In some embodiments, apparatus 12 comprises a smart phone or tablet with a touch-sensitive display which is configured to perform some embodiments of the disclosed methods 100, 200. As illustrated, the apparatus 12 comprises an electronic device which, as mentioned above, includes a memory 40 and one or more processor(s) or processing circuitry 42, for example, CPUs which may be general or dedicated, for example, a graphics processing unit, GPU. Other components which are well known to those of ordinary skill in the art may be omitted from Figure 15 where their inclusion and functionality is implicit. For example, some embodiments of apparatus 12, 12b may include a memory controller for controlling system components including the data interface 46, to send and receive pose data to various components of the apparatus such as memory 40 or the display 14.
[000229] In some embodiments, where the device is wireless communications enabled, RF circuitry may be provided so that data can be sent and/or received via one or more antennas. Audio circuitry may be configured to provide audio data to a speaker or headset connected to the apparatus and to receive microphone data input. A display controller may be provided for an internal, or in some embodiments external, display. An optical sensor controller may control one or more optical sensors of the motion capture camera 18, for example, and/or any depth imaging sensors in some embodiments (not shown in Figure 15). Other input device controllers may also be provided in some embodiments for other sensor and input devices.
[000230] It will be understood that in other embodiments the apparatus as illustrated in Figure 15 may have more or fewer components than shown, may combine two or more components, or may have a different configuration or arrangement of the components. The various components shown in Figure 9 may be implemented in hardware, software or a combination of both hardware and software, including one or more signal processing and/or application specific integrated circuits.
[000231] Memory 40 may include high-speed random access memory and may also include non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state memory devices.
[000232] The data interface 46 may couple input and output peripherals of the apparatus 12 to the processors or processing circuitry 42 and memory 40. The one or more processors 42 may run or execute various software programs and/or sets of coded instructions stored in memory 40 to perform various functions and to process data in addition to implementing an embodiment of the disclosed method aspects.
[000233] In embodiments where the apparatus 12 is configured to perform wireless voice and/or data communications, RF, radio frequency, RX/TX circuitry 48 may be configured to receive and send RF signals to enable communication over communications networks with one or more communications enabled devices such as apparatus 12b of a demonstrator user. By way of example only, the RF circuitry may include, but is not limited to, an antenna system associated with the TX/RX transceiver 48, a subscriber identity module (SIM) card, memory, and so forth.
[000234] The apparatus 12 is accordingly configured to communicate over data networks, such as the Internet, also referred to as the World Wide Web (WWW), intranets and/or wireless networks, such as cellular telephone networks, wireless local area networks (LAN) and/or metropolitan area networks (MAN), with other devices such as apparatus 12b of a demonstrator user by wireless communication. The wireless communication may use any of a plurality of communications standards, protocols and technologies, including but not limited to Global System for Mobile Communications (GSM), Enhanced Data GSM Environment (EDGE), high-speed downlink packet access (HSDPA), wideband code division multiple access (W-CDMA), code division multiple access (CDMA), time division multiple access (TDMA), Bluetooth, Wireless Fidelity (Wi-Fi) (e.g., IEEE 802.11a, IEEE 802.11b, IEEE 802.11g and/or IEEE 802.11n), voice over Internet Protocol (VoIP), WiMAX, a protocol for email (e.g., Internet message access protocol (IMAP) and/or post office protocol (POP)), instant messaging (e.g., extensible messaging and presence protocol (XMPP), Session Initiation Protocol for Instant Messaging and Presence Leveraging Extensions (SIMPLE), and/or Instant Messaging and Presence Service (IMPS)), and/or Short Message Service (SMS), or any other suitable communication protocol, including communication protocols not yet developed as of the filing date of this document.
[000235] The display 14 is configured to display visual output which may include graphics, text, icons, video, and any combination thereof (collectively termed "graphics") and may in some embodiments provide a user interface which may include one or more affordances. An affordance is a graphical element or object presented in a user interface which includes a feature or graphic that presents an explicit or implicit prompt or cue on what can be done with the graphical element or object in the user interface. An example of an affordance is an icon represented by a tick-mark to accept a displayed condition. Another example of an affordance is a loudspeaker symbol icon whose selection triggers or enables audio or music output. In some embodiments, some or all of the visual output may correspond to user-interface objects, further details of which are described below.
[000236] The display 14 may use any suitable technology including but not limited to LCD (liquid crystal display) technology, or LPD (light emitting polymer display) technology.
[000237] The apparatus 12, 12b may be powered from an alternating mains source or from a direct current, DC, source such as a battery.
[000238] The optical sensors 18, 122 may comprise a camera 18 and/or include one or more optical sensors. Examples of optical sensors include charge-coupled device (CCD) or complementary metal-oxide semiconductor (CMOS) phototransistors which convert received light into data representing an image. The image capture device 18 may comprise a camera which is a forwards facing camera of the apparatus 12, implemented as an imaging module, and which may be configured to capture still images in addition to video. The forward-facing camera is located on the same, front, side as display 14 in some embodiments. Display 14 may be used as a viewfinder for camera 18 to check the device is detecting a sufficient amount of a user's body so that pose data can be obtained by processing the image captured by the camera.
[000239] The operating system includes various software components and/or drivers for controlling and managing general system tasks (e.g., memory management, storage device control, power management, etc.) and facilitates communication between various hardware and software components.
[000240] The modules shown in Figure 9 which form computer code 50 correspond to a set of instructions for performing one or more functions described above. These modules (i.e., sets of instructions) need not be implemented as separate software programs, procedures or modules, and thus various subsets of these modules may be combined or otherwise rearranged in various embodiments. In some embodiments, memory 40 may store a subset of the modules and data structures identified above. Furthermore, memory 40 may store additional modules and data structures not described above.
[000241] Where the disclosed technology is described with reference to drawings in the form of block diagrams and/or flowcharts, it is understood that several entities in the drawings, e.g., blocks of the block diagrams, and also combinations of entities in the drawings, can be implemented by computer program instructions, which instructions can be stored in a computer-readable memory, and also loaded onto a computer or other programmable data processing apparatus. Such computer program instructions can be provided to a processor of a general purpose computer, a special purpose computer and/or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer and/or other programmable data processing apparatus, create means for implementing the functions/acts specified in the block diagrams and/or flowchart block or blocks.
[000242] In some implementations and according to some aspects of the disclosure, the functions or steps noted in the blocks can occur out of the order noted in the operational illustrations. For example, two blocks shown in succession can in fact be executed substantially concurrently or the blocks can sometimes be executed in the reverse order, depending upon the functionality/acts involved. Also, the functions or steps noted in the blocks can according to some aspects of the disclosure be executed continuously in a loop.
[000243] In the drawings and specification, there have been disclosed exemplary aspects of the disclosure. However, many variations and modifications can be made to these aspects without substantially departing from the principles of the present disclosure. Thus, the disclosure should be regarded as illustrative rather than restrictive, and not as being limited to the particular aspects discussed above. Accordingly, although specific terms are employed, they are used in a generic and descriptive sense only and not for purposes of limitation.
[000244] The description of the example embodiments provided herein has been presented for purposes of illustration. The description is not intended to be exhaustive or to limit example embodiments to the precise form disclosed, and modifications and variations are possible in light of the above teachings or may be acquired from practice of various alternatives to the provided embodiments. The examples discussed herein were chosen and described in order to explain the principles and the nature of various example embodiments and their practical application, to enable one skilled in the art to utilize the example embodiments in various manners and with various modifications as are suited to the particular use contemplated. The features of the embodiments described herein may be combined in all possible combinations of methods, apparatus, modules, systems, and computer program products. It should be appreciated that the example embodiments presented herein may be practiced in any combination with each other.
[000245] It should be noted that the word "comprising" does not necessarily exclude the presence of other elements, features, functions, or steps than those listed and the words "a" or "an" preceding an element do not exclude the presence of a plurality of such elements, features, functions, or steps. It should further be noted that any reference signs do not limit the scope of the claims, that the example embodiments may be implemented at least in part by means of both hardware and software, and that several "means", "units" or "devices" may be represented by the same item of hardware.
[000246] The various example embodiments described herein are described in the general context of methods, and may refer to elements, functions, steps or processes, one or more or all of which may be implemented in one aspect by a computer program product, embodied in a computer-readable medium, including computer-executable instructions, such as program code, executed by computers in networked environments.
[000247] A computer-readable medium may include removable and non-removable storage devices including, but not limited to, Read Only Memory (ROM) and Random Access Memory (RAM), which may be static RAM (SRAM) or dynamic RAM (DRAM). ROM may be programmable ROM (PROM), erasable programmable ROM (EPROM), or electrically erasable programmable ROM (EEPROM). Suitable storage components for memory may be integrated as chips into a printed circuit board or other substrate connected with one or more processors or processing modules, or provided as removable components, for example, by flash memory (also known as USB sticks), compact discs (CDs), digital versatile discs (DVDs), and any other suitable forms of memory. Unless not suitable for the application at hand, memory may also be distributed over various forms of memory and storage components, and may be provided remotely on a server or servers, such as may be provided by a cloud-based storage solution. Generally, program modules may include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. Computer-executable instructions, associated data structures, and program modules represent examples of program code for executing steps of the methods disclosed herein. The particular sequence of such executable instructions or associated data structures represents examples of corresponding acts for implementing the functions described in such steps or processes.
[000248] The memory used by any apparatus described herein, whatever its form of electronic device, accordingly comprises any suitable device readable and/or writeable medium, examples of which include, but are not limited to: any form of volatile or non-volatile computer readable memory including, without limitation, persistent storage, solid-state memory, remotely mounted memory, magnetic media, optical media, random access memory (RAM), read-only memory (ROM), mass storage media (for example, a hard disk), removable storage media (for example, a flash drive, a Compact Disk (CD) or a Digital Video Disk (DVD)), and/or any other volatile or non-volatile, non-transitory device readable and/or computer-executable memory devices that store information, data, and/or instructions that may be used by processing circuitry. Memory may store any suitable instructions, data or information, including a computer program, software, or an application including one or more of logic, rules, code, tables, etc. and/or other instructions capable of being executed by processing circuitry and utilized by the apparatus in whatever form of electronic device. Memory may be used to store any calculations made by processing circuitry and/or any data received via a user, communications or other type of data interface. In some embodiments, processing circuitry and memory are integrated. Memory may also be dispersed amongst one or more system or apparatus components. For example, memory may comprise a plurality of different memory modules, including modules located on other network nodes in some embodiments.
[000249] In the drawings and specification, there have been disclosed exemplary embodiments. However, many variations and modifications can be made to these embodiments. Accordingly, although specific terms are employed, they are used in a generic and descriptive sense only and not for purposes of limitation, the scope of the embodiments being defined by the following claims.
Claims (20)
- CLAIMS 1. A computer implemented method for developing motor-skills in a pose follower user, the method comprising: causing presentation of a reference pose on a display of an apparatus associated with a pose follower user; detecting a pose of the pose follower user; causing presentation of the detected pose of the pose follower user concurrently with the reference pose on the display; measuring in real-time a degree of congruity of the detected pose to the reference pose; and responsive to the measured degree of congruity, determining another pose to present on the display.
- 2. A method according to claim 1, wherein one or more pose attributes of the reference pose are transformed prior to determining the degree of congruity with the pose of the pose follower user.
- 3. A method according to claim 1 or 2, wherein the method further comprises displaying overlaid pose images of the pose follower's pose and the reference pose, whereby the overlaid pose images provide a visible indicator of the detected measure of congruity of the pose follower user's pose to the displayed reference pose on the display.
- 4. A method according to any one of the previous claims, wherein the reference pose comprises a demonstrator pose derived from a demonstrator in real-time.
- 5. A method according to any one of the previous claims, wherein each pose comprises one or more pose characteristics, and wherein at least one pose characteristic of each pose in the pose sequence differs from that pose characteristic in a previous pose by a dynamically determined amount based on one or more user pose performance metrics for the previous pose.
- 6. A method according to any one of the previous claims, wherein the method further comprises obtaining one or more physical characteristics associated with one or more motor-skills of the pose follower user and using the obtained physical characteristics of the pose follower user to adjust one or more corresponding physical characteristics of the reference pose.
- 7. A method according to claim 6, wherein a transformation is used to align the reference pose with the follower pose.
- 8. A method according to any one of the previous claims, wherein a plurality of reference pose images are displayed in a sequence on the display, wherein the reference poses presented progress through the reference pose sequence in a forwards and/or backwards direction responsive to input from the follower user.
- 9. A method according to any one of the previous claims, wherein determining another pose to present on the display comprises receiving one or more reference poses from an apparatus associated with another user.
- 10. The method of claim 9, wherein the other user comprises a pose demonstrator and wherein the received one or more reference poses comprise one or more demonstrator poses captured using an image capture device associated with the pose demonstrator and communicated in real-time to the apparatus of the pose follower user.
- 11. The method of claim 10, wherein the method further comprises: overlaying or under-laying on the display the demonstrator's current pose calibrated to the follower user concurrently with the pose of the follower user.
- 12. The method of any one of the previous claims, wherein the method further comprises: processing a video feed; detecting one or more reference poses in the video feed; causing presentation of the video on the display of the apparatus associated with the pose follower user; matching the detected one or more reference poses in the video feed to one or more detected poses of the user; and determining for each of the detected one or more reference poses in the video feed, a degree of congruity of a detected pose in the video feed to a matched detected pose of the user.
- 13. The method of claim 12, wherein the presentation speed of the video on the pose follower user's display is adjusted according to a measure of a degree of congruity of one or more poses of the pose follower user matched to one or more reference poses in the video.
- 14. An apparatus for developing motor-skills in a pose follower user, the apparatus comprising: a memory; one or more processors or processing circuitry; and computer program code, wherein the computer program code, when loaded from memory and executed by the one or more processors or processing circuitry, causes the apparatus to implement a method according to any one of claims 1 to 13.
- 15. An apparatus according to claim 14, wherein the apparatus comprises means or a module configured to detect a pose of the pose follower user, wherein the means or a module generates pose tracking data for the pose follower user which is acquired via one or more of: an image capture device comprising a camera or camera system; a hand controller operated by the pose follower user; a head-set worn by the pose follower user; a body-suit worn by the pose follower user; and one or more sensors attached to one or more limbs of the pose follower user.
- 16. A computer program product comprising computer code which, when loaded from a memory and executed by one or more processors or processing circuitry of an apparatus, causes the apparatus to perform a method according to any one of claims 1 to 13.
- 17. A system for developing motor-skills in a pose follower user, the system comprising: means or a module configured to cause presentation on a display of an apparatus associated with a pose follower user of a reference pose in a sequence of one or more reference poses; means or a module configured to detect a pose of the pose follower user; means or a module configured to measure in real-time a degree of congruity of the detected pose to the displayed reference pose; and means or a module configured, responsive to the determination of a measured degree of congruity, to determine another pose in the sequence of poses to present on the display of the apparatus associated with the pose follower user.
- 18. The system of claim 17, further comprising: another apparatus, wherein the other apparatus is associated with a pose demonstrator user, and wherein the other apparatus comprises: means or a module configured to detect a pose of the pose demonstrator user; means or a module configured to share the detected pose as a reference pose with the pose follower user; means or a module configured to cause presentation on a display of the apparatus of the pose demonstrator of the pose performed by a pose follower user; means or a module configured to measure in real-time a degree of congruity of the detected pose of the pose follower with the detected pose of the pose demonstrator; and means or a module configured, responsive to the determination of the measured degree of congruity, to cause a presentation of a representation of the measured degree of congruity on the display of the apparatus associated with the pose follower.
- 19. The system of claim 17, further comprising means or a module configured, responsive to the determination of the measured degree of congruity, to cause a presentation of a representation of the measured degree of congruity on the display of the other apparatus associated with the pose demonstrator.
- 20. The system of claims 18 and 19, wherein the representation of the measured degree of congruity is provided concurrently on the display of the apparatus of the pose follower and the display of the other apparatus of the pose demonstrator.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB2208385.1A GB2619532A (en) | 2022-06-08 | 2022-06-08 | System for developing motor-skills |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB2208385.1A GB2619532A (en) | 2022-06-08 | 2022-06-08 | System for developing motor-skills |
Publications (2)
Publication Number | Publication Date |
---|---|
GB202208385D0 GB202208385D0 (en) | 2022-07-20 |
GB2619532A true GB2619532A (en) | 2023-12-13 |
Family
ID=82404598
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
GB2208385.1A Pending GB2619532A (en) | 2022-06-08 | 2022-06-08 | System for developing motor-skills |
Country Status (1)
Country | Link |
---|---|
GB (1) | GB2619532A (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110306396A1 (en) * | 2010-06-11 | 2011-12-15 | Harmonix Music Systems, Inc. | Dance Game and Tutuorial |
US20120183940A1 (en) * | 2010-11-05 | 2012-07-19 | Nike, Inc. | Method and system for automated personal training |
US20120277891A1 (en) * | 2010-11-05 | 2012-11-01 | Nike, Inc. | Method and System for Automated Personal Training that Includes Training Programs |
US20150039106A1 (en) * | 2012-02-14 | 2015-02-05 | Pixformance Sports Gmbh | Fitness device and method for automatically checking for the correct performance of a fitness exercise |
KR20190113265A (en) * | 2018-03-28 | 2019-10-08 | 주식회사 스탠스 | Augmented reality display apparatus for health care and health care system using the same |
US20200009444A1 (en) * | 2018-05-29 | 2020-01-09 | Curiouser Products Inc. | Reflective video display apparatus for interactive training and demonstration and methods of using same |
US20210093920A1 (en) * | 2019-09-26 | 2021-04-01 | True Adherence, Inc. | Personal Fitness Training System With Biomechanical Feedback |
US20210339110A1 (en) * | 2020-04-30 | 2021-11-04 | Curiouser Products Inc. | Reflective video display apparatus for interactive training and demonstration and methods of using same |
Also Published As
Publication number | Publication date |
---|---|
GB202208385D0 (en) | 2022-07-20 |