US20140052731A1 - Music track exploration and playlist creation - Google Patents

Music track exploration and playlist creation

Info

Publication number
US20140052731A1
Authority
US
United States
Prior art keywords
music
mood
trajectory
genogram
tag
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US13/762,834
Inventor
Rahul Kashinathrao DAHULE
Shubhangi Mahadeo Jadhav
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual

Classifications

    • G06F17/30772
    • G06F3/04842 Interaction techniques based on graphical user interfaces [GUI]: selection of displayed objects or displayed text elements
    • G06F16/639 Information retrieval of audio data: presentation of query results using playlists
    • G06F16/68 Information retrieval of audio data: retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F3/04847 Interaction techniques based on graphical user interfaces [GUI] to control parameter settings, e.g. interaction with sliders or dials
    • G11B27/034 Electronic editing of digitised analogue information signals, e.g. audio or video signals, on discs
    • G11B27/105 Programmed access in sequence to addressed parts of tracks of operating record carriers of operating discs
    • G11B27/34 Indicating arrangements
    • H04N21/26258 Content or additional data distribution scheduling for generating a list of items to be played back in a given order, e.g. playlist
    • H04N21/4755 End-user interface for inputting end-user data for defining user preferences, e.g. favourite actors or genre
    • H04N21/8113 Monomedia components involving special audio data comprising music, e.g. song in MP3 format

Definitions

  • the present invention relates to the exploration of music tracks in a playlist and the creation of playlists. More specifically, the invention relates to a computer-implemented method for the creation of a playlist, a music genogram data structure and a computer-implemented method to explore music tracks using the music genogram data structure.
  • meta-data typically includes media characteristics like artist, title, producer, genre, style, composer and year of release.
  • the meta-data may include classification information from music experts, from friends or from an online community to enable music recommendations.
  • a playlist is a collection of media items grouped together under a particular logic.
  • Known online music portals such as e.g. Spotify (www.spotify.com) and Pandora (www.pandora.com) offer tools to make, share, and listen to playlists in the form of sequences of music tracks. Individual tracks in the playlist are selectable from an online library of music.
  • the playlist can be created such to e.g. reflect a particular mood, accompany a particular activity (e.g. work, romance, sports), serve as background music, or to explore novel songs for music discoveries.
  • Playlists may be generated either automatically or manually.
  • Automatically created playlists typically contain media items from similar artists, genres and styles. Manual selection of e.g. a particular music track is typically driven by listening to a particular track or artist, a recommendation of a track or artist, or a preset playlist. It is possible that a user provides a manually selected track or particular meta-data as a query and that a playlist is generated automatically as indicated above in response to this query.
  • a user's response to e.g. music typically depends on the type of user.
  • Four types of users can generally be identified: users indifferent to music; users casual to music; users enthusiastic to music; and music savants. Indifferent users typically would not lose much sleep if music were to cease to exist.
  • Statistically 40% of users in the age group of 16-45 are of this type.
  • Casual users typically find that music plays a welcome role but other things are far more important. Their focus is on music listening and playlists should be offered in a transparent manner.
  • Statistically 32% of users in the age group of 16-45 are of this type.
  • Enthusiastic users typically find that music is a key part of life but it is balanced by other interests. Their focus is on music discovery and playlists may be created using more complex recommendations. Statistically 21% of users in the age group of 16-45 are of this type. Savant users typically feel that everything in life is tied up with music. Their focus is on the question “what's hot in music?”. Statistically 7% of users in the age group of 16-45 are of this type. Known tools typically target a specific type of user and do not take into account different types of users.
  • WO2010/027509 discloses an algorithm that produces a playlist based on similarity metrics that includes relative information from five time-varying emotional classes per track.
  • a method for personalizing content based on mood is disclosed in US2006/0143647.
  • An initial mood and a destination mood are identified for a user.
  • a mood-based playlisting system identifies the mood destination for the user playlist.
  • the mood destination may relate to a planned advertisement.
  • the mood-based playlisting system has a mood sensor such as a camera to provide mood information to a mood model.
  • the camera captures an image of the user.
  • the image is analyzed to determine a current mood for the user so that content may be selected to transition the user from the initial mood to the destination mood responsive to the determined mood of the user.
  • the user has no direct control over the desired mood destination and the transition path.
  • use of a camera to capture the current mood may result in a less or non-preferred playlist, as there can be a mismatch between the desire of the user to reach a certain mood and the mood reflected per his/her facial expressions.
  • a computer-implemented method for exploring music tracks.
  • the method comprises displaying a graphical representation of a first music genogram of a first music track.
  • the first music genogram has a data structure.
  • the method further comprises receiving a first exploration input indicating a selection of one of the tags in one of the sections.
  • the method further comprises, if the first exploration input indicates a pro-tag, displaying a link to a second music genogram of a second music track, and/or, if the pro-tag comprises two or more micro-pro tags, displaying a graphical representation of the decomposition of the pro-tag and a link to a third music genogram of a third music track for each of the micro-pro tags.
  • the music genogram data structure comprises sub-segment data identifying one or more sub-segments to define a decomposition in time of a music track. Each sub-segment has a start time and an end time.
  • the music genogram data structure further comprises band data identifying one or more bands to define a decomposition for the time length of the music track. A cross section of a sub-segment and a band forms a section.
  • the music genogram data structure further comprises tag data identifying one or more tags in one or more sections.
  • a tag can be a deceptive tag to indicate a surprising effect.
  • the tag can be a pro-tag to identify a starting point for an exploration of another music genogram based on similarities.
  • the tag can be a sudden change tag to indicate a substantial change in scale, pitch or tempo.
  • the tag can be a hook tag to identify a unique sound feature in the music track.
  • the tag can be a micro-pro tag to enable a decomposition of the pro-tag.
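  • By way of illustration only (the patent defines no programming interface), the music genogram data structure described above might be sketched as follows; all class and field names are hypothetical assumptions, not taken from the disclosure:

```python
# Illustrative sketch of the music genogram data structure; names are hypothetical.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class SubSegment:
    name: str          # e.g. "intro", "main vocals", "instrumental", "stanza vocal", "coda"
    start_time: float  # start time in seconds
    end_time: float    # end time in seconds

@dataclass
class Tag:
    tag_type: str                     # "pro", "micro-pro", "hook", "deceptive" or "sudden change"
    sub_segment: str                  # sub-segment of the tagged section
    band: str                         # "beat", "tune" or "special tune"
    subsection: Optional[str] = None  # "beginning", "middle" or "end" of the section
    micro_pro_tags: List["Tag"] = field(default_factory=list)  # decomposition of a pro-tag

@dataclass
class MusicGenogram:
    track_id: str
    sub_segments: List[SubSegment]  # decomposition in time of the music track
    bands: List[str]                # decomposition over the time length of the track
    tags: List[Tag]                 # tags located in sections (sub-segment x band)
```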
  • the embodiment of claim 2 advantageously enables more precise tagging.
  • the embodiment of claim 3 advantageously enables the music genogram to be particularly suitable for Indian music.
  • the embodiment of claim 15 advantageously enables music tracks from a mood-based playlist to be explored in a user-friendly way.
  • the mood-based playlist can be created in a user-friendly way.
  • Examples of a media item are a music track, a video or a picture.
  • the playlist may comprise a mixture of music tracks, videos and/or pictures.
  • the embodiment of claim 5 advantageously enables a quick and easy creation of a mood trajectory.
  • the embodiment of claim 6 advantageously enables a user to manipulate the mood trajectory.
  • the embodiment of claim 7 advantageously enables more media items to fulfill the selection criteria and thus become selectable when creating the playlist.
  • the embodiment of claim 8 advantageously enables a user to place restrictions on the playlist and/or the media items in the playlist.
  • Examples of a media characteristic are artist, title, producer, genre, style, composer and year of release.
  • the embodiment of claim 9 advantageously enables a user to increase the number of media items for a specific mood in the playlist.
  • the embodiment of claim 10 advantageously enables a user to see where in the mood trajectory a decrease or increase in the number of media items with specific media characteristics can be expected.
  • the embodiment of claim 11 advantageously enables future creation of a playlist to take into account past user actions related to a particular media item.
  • the computer program product comprises software code portions configured for, when run on a computer, executing one or more of the above mentioned method steps.
  • FIG. 1 shows a graphical representation of a prior art mood wheel
  • FIG. 2 shows a graphical representation of mood trajectories of an exemplary embodiment of the invention
  • FIG. 3 shows a graphical representation of a mood trajectory of an exemplary embodiment of the invention
  • FIG. 4 shows a graphical representation of a mathematical algorithm of an exemplary embodiment of the invention
  • FIG. 5 shows a graphical representation of a mood trajectory with weighted points of an exemplary embodiment of the invention
  • FIG. 6 shows a graphical representation of a mood trajectory with shock points of an exemplary embodiment of the invention
  • FIG. 7 shows a graphical representation of a music genogram of an exemplary embodiment of the invention.
  • FIG. 8 shows a collection of music genogram tags of an exemplary embodiment of the invention
  • FIG. 9 shows a graphical representation of exploring a music genogram of an exemplary embodiment of the invention as may be displayed on a user interface
  • FIG. 10 schematically shows a non-real time exploration initiative of an exemplary embodiment of the invention.
  • FIG. 11 schematically shows a real time initiative per music track in the emotional wheel of an exemplary embodiment of the invention
  • FIG. 12 shows a graphical representation of a mapping of real time and non-real time initiatives on the emotional wheel of an exemplary embodiment of the invention
  • FIG. 13 schematically shows steps of a method of an exemplary embodiment of the invention
  • FIG. 14 schematically shows steps of a method of an exemplary embodiment of the invention.
  • FIG. 15 schematically shows steps of a method of an exemplary embodiment of the invention.
  • FIG. 16 schematically shows steps of a method of an exemplary embodiment of the invention.
  • In FIG. 1 a model for the classification of emotions is shown, known as an emotional wheel 10.
  • the emotion wheel 10 is founded on psychology results and views of scientists like Plutchik in 1980, Russell in 1980, Thayer in 1989 and Russell and Feldman Barrett in 1999.
  • the emotional wheel 10 captures a wide range of significant variations in emotions in a two dimensional space. Emotions can be located in the two dimensional Cartesian system along the various intensities of emotions and degree of activation.
  • the x-axis defines the level of valence.
  • the y-axis defines the level of arousal.
  • Each emotional state can be understood as a linear combination of these two dimensions.
  • the four quadrants of the emotional wheel identify the primary emotions joy, anger, sadness and neutral.
  • the invention enables a user to map an emotional trajectory on the emotion wheel as a basis for the generation of a playlist. Moreover, the invention provides for a unique visualization of the playlist using the emotional wheel representation.
  • a user will be given the option to specify an initial mood and a destination mood. The trajectory between the initial mood and the destination mood may be steered through the emotions falling in-between. The thus obtained mood trajectory is then populated by music tracks to form a playlist.
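  • As a non-authoritative sketch (the patent gives no formulas), moods on the emotional wheel can be modelled as (valence, arousal) coordinates and a simple mood trajectory as an interpolation between the initial mood and the destination mood; the names below are hypothetical:

```python
# Hypothetical sketch: moods as (valence, arousal) points and a straight-line
# mood trajectory between a starting mood and a destination mood.
from dataclasses import dataclass

@dataclass
class MoodPoint:
    valence: float  # x-axis of the emotional wheel, e.g. -1.0 (negative) to 1.0 (positive)
    arousal: float  # y-axis of the emotional wheel, e.g. -1.0 (calm) to 1.0 (excited)

def straight_trajectory(start: MoodPoint, end: MoodPoint, n_points: int = 10):
    """Return n_points moods linearly interpolated from start to end."""
    return [
        MoodPoint(
            valence=start.valence + (end.valence - start.valence) * i / (n_points - 1),
            arousal=start.arousal + (end.arousal - start.arousal) * i / (n_points - 1),
        )
        for i in range(n_points)
    ]

# Example: from a sad mood (negative valence, low arousal) towards a happy mood.
trajectory = straight_trajectory(MoodPoint(-0.8, -0.3), MoodPoint(0.8, 0.5))
```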
  • FIG. 2 shows an emotional wheel as shown in FIG. 1 .
  • the labels indicating the axis, primary emotions and secondary emotions as shown in FIG. 1 are not shown in FIG. 2 .
  • a user may select the starting point 1 as starting point for a playlist.
  • the starting point typically corresponds to a current emotion of the user.
  • the user may select the end point 2 as the end point for the playlist.
  • Music tracks are then populated along a trajectory in-between the starting point 1 and the end point 2, e.g. trajectory 11, 12 or 13; the selected tracks fall exactly on the trajectory and/or at near-exact locations within an incremental distance of the trajectory.
  • a graphical user interface showing the emotional wheel may be presented, through which the user first affixes a starting point by using a mouse pointer to click on the spot on the emotional wheel that resembles a perceived prevailing mood or emotion.
  • the end-point resembling a desired future mood or emotion is then also affixed in a similar fashion.
  • a mood trajectory is drawn between the two points, either as a simple straight line or in a more complex form such as trajectory 11 , 12 or 13 .
  • the trajectory may be altered using the mouse pointer to form a desired path through desired moods or emotions. If e.g. the automatically fetched trajectory resembles trajectory 11 , this trajectory may be changed by e.g. moving the mouse pointer from a top left location to a bottom right location on the graphical user interface.
  • the user interface can be programmed to accept other mouse gestures to achieve a similar effect.
  • the user interface may use a touch screen interface to control the pointer, or any other man-machine interaction means.
  • a mood trajectory 14 between a starting point 1 and an end point 2 self-intersects at one or more points 3, thus resulting in one or more recurring emotions along the trajectory.
  • There may also be predefined mood trajectories obtainable from a backend database. Examples of predefined mood trajectories are “from sad to happy” and “from excited to calm”. The selected predefined mood trajectory will be displayed on the emotional wheel and may be altered as described above.
  • a backend music engine which is implemented as a software module or a set of software modules, uses a mathematical algorithm to populate the music tracks along the trajectory.
  • the backend music engine may reside at a server, such as the online music portal server, or be partially or entirely running on the client device where the user interface is active.
  • An example of a mathematical algorithm is the virtual creation of circular sections along the trajectory as visualized in FIG. 4 .
  • the circular sections are used to smoothen the trajectory 15 and define an area wherein the music tracks to be selected are to be found.
  • a first series of circular sections 21 is calculated having their center points lying along the trajectory 15.
  • a second series of circular sections 22 is calculated having their center points at the points of intersection of the first series of circular sections.
  • Music tracks residing within the area of the first and second series of circular sections cover the music tracks being selectable along the mood trajectory 15 for insertion in the playlist.
  • the area around the trajectory 15 may be calculated by a probabilistic distribution function virtually forming a regular or irregular shaped area following the trajectory 15 .
  • Another example of a mathematical algorithm uses a distance function to virtually create one or more additional trajectories that follow the shape of the mood trajectory at predefined distances from it.
  • the mathematical algorithm is then applied to the mood trajectory and the one or more additional trajectories.
  • the thus obtained areas along the different trajectories together form a larger area.
  • Music tracks residing within the larger area cover the music tracks being selectable along the mood trajectory for insertion in the playlist.
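  • A minimal sketch of how such a selection area could be evaluated, assuming tracks carry (valence, arousal) mood coordinates in their meta-data; the radius, the names and the restriction to a single series of circles are assumptions made for illustration only:

```python
import math

def selectable_tracks(trajectory, tracks, radius=0.15):
    """Return the ids of tracks whose (valence, arousal) coordinates fall within
    any circular section of the given radius centred on a trajectory point.
    `trajectory` is a list of (valence, arousal) tuples; `tracks` maps track ids
    to (valence, arousal) mood coordinates."""
    selected = []
    for track_id, (v, a) in tracks.items():
        for (tv, ta) in trajectory:
            if math.hypot(v - tv, a - ta) <= radius:
                selected.append(track_id)
                break
    return selected
```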
  • the playlist may be refined in various ways. Refinements may have a real-time impact on the playlist.
  • the meta-data is e.g. used to select music tracks from a single genre or a combination of two or more genres, an artist, overlapping of two or more artists, a year of release or a time frame for release years, or any other meta-data or combinations of meta-data.
  • the user interface may be used to display the total time-length of the music tracks and/or the total number of music tracks sequenced to be played in the playlist.
  • An option may be displayed to make the playlist shorter using input parameters such as the total number of songs and/or the total time-length of the playlist.
  • the user interface may display an option to partially play the music tracks or a selection of the music tracks in the playlist.
  • An option may e.g. be displayed to play 30%, 50% or 100% of the total time-length of each music track. This way the total playlist can be played in short-play mode.
  • There may be an automatic suggestive option that affects only selected tracks in the playlist. Through this option e.g. the most liked music tracks will be played for a longer duration while music tracks with a predefined low rating will be short-played.
  • the user may be given an option to rate, save, share, favorite, ban and/or comment on self-constructed trajectories.
  • Actions related to these self-constructed trajectories may be logged to determine a level of interactiveness in exploring music before music tracks are actually selected or played.
  • There may be a weighted distribution option along the trajectory path on different emotions along with different meta-data elements like genres and artists. With this option it is possible to selectively amplify different types of weights falling along the trajectory.
  • An example of how this may be visualized in the user interface is shown in FIG. 5.
  • points 5 , 6 may be added using the mouse pointer.
  • the size of points 4 , 5 and 6 at the particular moods or emotions defines the probability that music tracks will be selected at the respective parts of the trajectory. In FIG. 5 points 4 are the smallest, point 5 is made bigger and points 6 are made biggest.
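  • As an illustrative sketch only (not the patent's algorithm), the size of a weight point can be translated into a sampling probability, so that bigger points contribute more music tracks to the playlist; the names are hypothetical:

```python
import random

def pick_trajectory_part(weight_points):
    """`weight_points` maps a part of the trajectory (e.g. an emotion label) to a
    weight proportional to the size of the point set by the user. Returns one
    part, chosen with probability proportional to its weight."""
    parts = list(weight_points.keys())
    weights = list(weight_points.values())
    return random.choices(parts, weights=weights, k=1)[0]

# Example: the biggest point is four times as likely to contribute a track as the smallest.
part = pick_trajectory_part({"calm": 1, "content": 2, "happy": 4})
```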
  • the mood trajectory between the starting point and the end point may be automatically calculated taking into account user specific mood characteristics.
  • a backend learning engine which is implemented as a software module or a set of software modules and typically runs on a server such as the online music portal server, may log music tracks selected or played for a particular mood and for transitioning from one mood to another mood. This enables the learning engine to make predictions in desired mood transitions to get from the starting mood to the end mood.
  • the calculation of the mood trajectory may use various criteria such as music discoveries and/or subjective and cognitive factors.
  • the training algorithm of the backend learning engine may log the relative positioning of a music track in a playlist and capture allied unfavorable and favorable musical shocks, possibly in real-time.
  • musical shocks are activities such as rating, favoriting, skipping, banning and emotionally tagging music tracks.
  • An unfavorable shock is e.g. skipping or banning of a music track.
  • a favorable shock is e.g. favoriting a music track.
  • a continual part of the mood trajectory contains no musical shocks while a disturbed part of the mood trajectory contains one or more shocks.
  • FIG. 6 shows an example of a mood trajectory 17 with shocks 10 at several locations on the trajectory.
  • Tag points 9 a - 9 d are added to distinguish continual from disturbed trajectories. For readability purposes only one shock has a reference number 10 .
  • between point 7 and tag-point 9 a no shocks are recorded; the partial trajectory between points 7 and 9 a is thus a continual trajectory.
  • Between tag-point 9 a and tag-point 9 b thirteen shocks are recorded.
  • the partial trajectory between points 9 a and 9 b is thus a disturbed trajectory.
  • Between tag-point 9 b and tag-point 9 c and between tag-point 9 c and tag-point 9 d no shocks are recorded.
  • the partial trajectories between points 9 b and 9 c and between points 9 c and 9 d are thus continual trajectories. Between tag-point 9 d and end point 8 three shocks are recorded. The partial trajectory between points 9 d and 8 is thus a disturbed trajectory.
  • the backend learning engine may be queried to determine if one or more shocks have previously been recorded or if the partial mood trajectory is known to be a continual or disturbed trajectory.
  • the shock information may be used to avoid predictive leads from the earlier reactions. For example, music tracks along a disturbed trajectory will have lesser probability to populate when the shock is unfavorable.
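  • A hedged sketch of how a learning engine might classify partial trajectories and down-weight tracks after unfavorable shocks; the shock labels and the exact adjustment factors are assumptions, not taken from the disclosure:

```python
def classify_partial_trajectory(shocks):
    """A partial trajectory is 'continual' when no shocks were recorded on it
    and 'disturbed' otherwise; `shocks` is a list of shock labels."""
    return "continual" if not shocks else "disturbed"

def selection_probability(base_probability, shocks):
    """Lower the probability of populating a track along a disturbed partial
    trajectory when the recorded shocks are unfavorable (skip/ban)."""
    unfavorable = sum(1 for s in shocks if s in ("skip", "ban"))
    favorable = sum(1 for s in shocks if s in ("favorite", "high_rating"))
    # Illustrative adjustment: each unfavorable shock halves, and each favorable
    # shock slightly raises, the base probability (capped at 1.0).
    p = base_probability * (0.5 ** unfavorable) * (1.1 ** favorable)
    return min(p, 1.0)
```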
  • the selection of music tracks along the mood trajectory may use personalized meta-data or other forms of personalized music characteristics.
  • a structured personalization method will be discussed, but it is to be understood that the invention can make use of any form of personalized music characteristics that has been created in any other manner.
  • the personalized music characteristics method provides a way to tag music tracks with moods or emotions experienced or felt by a user when listening to a particular music track.
  • the following basic format is used to define the mood or emotion:
  • the Box1 information is related to primary and/or secondary emotions, such as the emotions of the four quadrants and the emotions within the four quadrants as shown in FIG. 1 .
  • the user interface displays the emotional wheel allowing the user to point and click an emotion.
  • emotions are shown in any other representation format from which a user may select an emotion using the user interface.
  • Box2 information is used to further personalize the music characteristics. Box2 typically describes a situation from the past or future. Since there are substantially infinite possible situations, Box2 preferably allows a free text input.
  • Each of the two boxes Box1 and Box2 may be linked to pictures. Since the Box1 possibilities are predefined using the emotional wheel logic, there can be a database on the server with one or more pictures for each of the emotional tags.
  • the Box1 pictures may be randomly recalled and displayed along with the text if the user has personalized a music track.
  • For Box2 an option may be presented to upload a personal picture. Also this uploaded picture can be displayed along with the text if the user has personalized a music track.
  • the server stores not only the current personalized music characteristics of a music track but also past characteristics that have been modified.
  • a third information element Box3 may be added to the personalized music characteristics to enable addition of a time factor.
  • the time factor limits the perceived emotion or mood to the selected time.
  • the Box3 information enables e.g. selection of one of the following elements: early morning, morning, mid-morning, afternoon, evening, night/bed time, summer time, winter time, drizzling rains, pouring rains and spring time. Any other moments in time may be defined for the purpose of Box3.
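  • The disclosure does not spell out a storage format for the Box1/Box2/Box3 structure; a hypothetical record for one personalized music characteristic could look as follows (all field names and values are made up for illustration):

```python
# Hypothetical record for the structured personalization of one music track.
personalized_tag = {
    "track_id": "track-0001",
    "box1": "content",                          # primary/secondary emotion picked on the emotional wheel
    "box1_picture": "emotions/content/3.jpg",   # randomly recalled from the server-side picture set
    "box2": "long drive along the coast last summer",  # free-text situation from the past or future
    "box2_picture": "uploads/me/coast.jpg",     # optional personal picture uploaded by the user
    "box3": "early morning",                    # optional time factor limiting the perceived emotion
    "history": [],                              # past characteristics kept when the tag is modified
}
```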
  • Music characteristics created through the structured personalization method may be used by the backend learning engine to learn a cognitive pattern of the user. More specifically, causation and effectuation can be systematically predicted along a chain reaction from a particular stimulus. Furthermore, the backend trajectory engine may use this information to calculate a chain of emotions or moods from a current emotion to aid the automatic creation of the mood trajectory from the starting point to the end point through intermediary calculated points.
  • a side step may be made to explore other music that is somehow (i.e. through music genograms, as will be explained) related to the currently playing music.
  • the user interface may display a button to open a music genogram of the currently playing music track, from which other music tracks may be explored.
  • the other music tracks that are explored using the music genograms do not necessarily match the current mood along the mood trajectory.
  • the user may leave the music exploration path and return to the playlist as defined by the mood trajectory.
  • the music genogram is a ‘genetic structure’ of a music track.
  • In FIG. 7 an example of a typical music genogram 30 for an Indian music track is shown.
  • the gene structure may be different for music tracks originating from different geographies or for music in different genres.
  • the music genogram 30 divides the music track into seven sub-segments in time: an intro part 34, followed by a main vocals part 35 (e.g. mukhada in Indian parlance), followed by an instrumental part 36, followed by a stanza vocal part 37 (e.g. antara in Indian parlance), followed by another instrumental part 36, followed by another stanza vocal part 37 (e.g. antara in Indian parlance), followed by a coda part 38.
  • the length of each sub-segment may vary.
  • the music genogram 30 further divides the music track into three different horizontal bands: a beat band 31, a tune band 32 and a special tune band 33.
  • the beat band 31 addresses musical features related to rhythm and percussion. Bass and acoustic guitar strokes which form a rhythmic pattern are also included in the beat band.
  • the tune band 32 includes tune attributes in vocals and accompanied instrumentals.
  • the special tune band 33 relates to special tones such as chorus, yodel, whistle, use of different language, whispering, breathing sounds, screaming, etcetera.
  • Each of the three bands 31, 32, 33 is equally divided into the seven sub-segments described above.
  • each section may be further subdivided into subsections, e.g. into three subsections to identify a beginning subsection, a middle subsection and an end subsection of that section.
  • One or more unique sections of the music genogram can be tagged using one or more of the following types of tags.
  • the individual tags are shown in FIG. 8 for identification purposes.
  • Pro-tags 42 are used to explore similarities.
  • Micro-pro tags 45 are used as a decomposed part of the pro-tag 42 to explore similarities.
  • Hook factor tags 44 are used to trigger novelty.
  • Deceptive tags 41 are used to trigger serendipity.
  • Sudden change tags 43 are used to indicate a sudden change in beat (e.g. a 30% or more change in tempo), in scale (e.g. going from a C scale to a C# scale or going from whispering to screaming) and to trigger similarity. Sudden change tags 43 capture substantial change in the scale/pitch or the tempo of the music track.
  • the pro-tags 42 , micro-pro tags 45 and sudden change tags 43 are used to find similarities between the current music track and another music track.
  • the hook tag 44 indicates a unique feature in the music track that catches the ear of the listener.
  • the deceptive tag 41 is typically allocated to intro parts. After listening to an intro part the instrumentals and/or vocals used in the intro part may tempt a user (in anticipation) to expect or explore another music track with a familiar tune. This may result in the user ending up listening to a completely different music track.
  • Each type of tag can be visualized by a unique identification icon as shown in FIG. 8 to enable displaying of the music genogram in the user interface. Instead of the icons shown in FIG. 8 any other icon may be used to visualize the tags.
  • tags are added to various sections of the music track.
  • the music genogram 30 including tags may be displayed in the user interface as shown in FIG. 7 .
  • a deceptive tag 41 is located in the intro part 34 of the beat band 31 . Seeing this deceptive tag 41 may suggest or force the user to think that there is an intelligence connected to this section. Based on the characteristics of the deceptive tag the user will typically expect a different music track than normal.
  • a pro-tag 42 is located in the intro part 34 of the tune band 32 , which is further decomposed into two micro-pro tags 45 each of which is connected to other similar elements.
  • the pro-tag 42 may e.g. be a combination of a trumpet and violin.
  • This combination may have been used subtly or obviously in other music tracks. It is likely that a trumpet playing style or a violin playing style is used in other music tracks.
  • a trumpet and a violin form two micro-pro-tags 45 .
  • Icon 46 indicates that the tag intelligence, the pro-tag 42 in this case, can be expected in the end sub-section.
  • To indicate the beginning sub-section or the middle sub-section icons 48 and 47 may be used, respectively.
  • Another pro-tag 42 is located in the second instrumental part 36 of the beat band 31 . This pro-tag 42 is not affiliated to any micro-tag.
  • a hook 44 of the music track is located in the main vocals part 35 of the special tune band 33 .
  • vocals of the song are characterized by special tunes such as whistling or gargling or a combination thereof.
  • a sudden change tag 43 is located in the second instrumental part 36 of the special tune band 33 . This tempts one to expect that there could be a change in scale e.g. with the chorus effect during that section.
  • each tag may have a connection to one or more other music tracks to enable exploration of other music tracks.
  • FIG. 9 shows an example of how this may be displayed in the user interface.
  • the music genogram 30 of FIG. 7 is shown in FIG. 9 , together with connections to music genograms of six other music tracks via clickable buttons 51 , 52 , 53 , 54 , 55 and 56 , respectively.
  • the pro-tag 42, on a combination of violin and trumpet, in the intro part of the tune band 32 is connected to three different music tracks, whose music genograms can be selected by clicking one of the top-most buttons 51, 52 and 53, respectively.
  • the pro-tag 42 can be further decomposed into two micro-tags, which together are further connected to three other music tracks.
  • the micro-tag 45 related to the violin is connected to a fourth music track, music genogram of which can be selected by clicking button 54 next to the violin indicator 61 .
  • the micro-tag 45 related to the trumpet is connected to a fifth and sixth music track, whose music genograms can be selected by clicking buttons 55 and 56 , respectively, next to the trumpet indicator 62 .
  • the backend learning engine may be configured to constantly monitor users for the kind of music genogram tags that are explored and recommend the user what more to explore.
  • a vector format may be used to store the connections in a database.
  • the following vector format is preferred, but any other format may be used:
  • node-1 identifies the master music track and node-2 identifies a slave music track to which a connection is made.
  • the connections information identifies the locations of the connecting tags.
  • Tag coordinates for node-1 include an identity of the master music track, an indication of the section in the music genogram of the master music track and a tag-type identifier.
  • Tag coordinates for node-2 include an identity of the slave music track, an indication of the section in the music genogram of the slave music track and an indication whether a connection is made to a pro-tag only or also to a micro-tag of the pro-tag.
  • Similarity tag(s) of the music genogram may be weighed over the connection(s) with weights e.g. ranging from 1 to 5, where 1 indicates an exact or very obvious match and 5 indicates a very subtle match.
  • the positioning and the affiliated tag connections are typically added manually by a music expert or may be automated with algorithms, and stored in a meta-dataset.
  • the meta-dataset is typically stored in a database and may be formatted as a positional matrix indicating the functional connections between music tracks. The matrix reflects the connections between similar music tracks.
  • the meta-dataset forms a relationship matrix for the music tracks.
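  • A hedged sketch of the node-1/node-2 connection vector described above (the preferred format itself is not reproduced here; the field names are assumptions):

```python
# Hypothetical representation of one connection between a master and a slave track.
connection = {
    "node1": {                       # tag coordinates for the master music track
        "track_id": "M2",
        "section": "TB1",            # e.g. intro part of the tune band
        "tag_type": "pro",
    },
    "node2": {                       # tag coordinates for the slave music track
        "track_id": "S21",
        "section": "BB4",            # e.g. fourth part of the beat band
        "connects_to": "micro-pro",  # connection to the pro-tag only, or also to a micro-pro tag
    },
    "weight": 2,                     # 1 = exact/very obvious match .. 5 = very subtle match
}
```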
  • Assigning music attributes as a function of the building blocks of a music track using the music genogram structure makes it possible to learn and exploit similarities and novelties in music tracks by other musicians or artists.
  • a single music attribute can be used subtly or obviously anywhere in a music track and in combination with other music attributes. For example a particular tune could be used in two songs with a different, distinguishable construction or build-up of building blocks with respect to their music genogram structure. This helps users to discover how the same tune can be used to create a different effect when used in different constructions of music genograms.
  • Three options may be offered for exploring connected music tracks. The objective of the first option is to have the user intervene to play selected slave music tracks with only overlapping connections with respect to the master music track.
  • two slave genograms of S22 and S23 are selected to explore.
  • the system keeps track of explored slave music tracks and at this point in time it knows that S21 is still to be explored.
  • the master genogram of M2 is marked to be incompletely explored.
  • once all connected slave genograms have been explored, the genogram exploration of M2 is marked to be completed.
  • the playlist is updated from {M2, M3, M4} to {M2, S22, S23, M3, M4}. It is noted that M1 is not in the playlist because it has been played already. M2 is still in the playlist as it is currently being played. M2, S22, S23, M3 and M4 will be played one after the other. Only connecting tags of the slave genogram(s) with respect to the master genogram are displayed.
  • the objective of the second option is to give the user instant gratification.
  • the second option is particularly suitable for expert users intending to study exact setting of a music track.
  • M2 is being played and S21 is recalled to discover the connection.
  • S21 will start playing fulfilling the following criteria.
  • the positional section of the connecting tag in S21 is to be located, for example BB4 (fourth part of the beat band).
  • the parts before and after the located tag are identified by adding +1 and −1 to the part number. This gives BB3 and BB5.
  • S21 will start playing for the time range spanning the three parts BB3, BB4 and BB5. Only connecting tags of the slave genogram(s) with respect to the master genogram are displayed.
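  • A minimal sketch of the playback window used in the second option, assuming each band is divided into the seven numbered parts described earlier; the function name and the clipping to valid part numbers are assumptions:

```python
def short_play_window(connecting_part: int, n_parts: int = 7):
    """Given the part number of the connecting tag in the slave track (e.g. 4 for
    BB4), return the parts to play: the part before, the part itself and the part
    after, clipped to the valid 1..n_parts range."""
    start = max(1, connecting_part - 1)
    end = min(n_parts, connecting_part + 1)
    return list(range(start, end + 1))

# Example: a connecting tag in BB4 yields the parts [3, 4, 5], i.e. BB3, BB4 and BB5.
parts = short_play_window(4)
```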
  • In the third option music tracks are explored and discovered along the long tail of master to slave to slave's slave, etcetera, until the user intervenes.
  • the objective of the third option is to give the user an instant gratification.
  • the third option is particularly suitable for savant users. M2 is being played and S21 is recalled to discover the connection. S21 will start playing using the criteria shown for the second option. The difference with the second option is that all tags of S21 will be displayed. In other words, the displayed genogram of S21 not only includes overlapping tags with respect to the master music track M2, but also includes tags overlapping with other music tracks. Non-overlapping tags of S21 connected to the other music tracks will be discovered in this option.
  • the music genogram recommendation system has many advantages. Seeing and exploring descriptive tags of the music genogram has a significant effect on stimulating a curiosity in the most logical ways. This offers instant gratification. It offers a novel, contextual, engaging and structured recommendation that renders a truly transparent and steerable navigation through music tracks.
  • the music genogram representation creates a paradigm shift in conventional recommendation systems.
  • the music genogram captures music essence at macro and micro elements of music tracks.
  • the active and transparent recommendation structure lets the user anticipate the type and positioning of the recommended features. It helps the user to discover the recommendations in a systematic way thereby avoiding randomized or hitting-in-the-dark discoveries of music.
  • the recommendation method enables systematic discovery within a huge volume of undiscovered content.
  • the music genogram includes novel and serendipitous recommendation elements. This aspect inspires users to gain trust over the recommendation method. Users can muse over the recommendation and grow their learning potential in music.
  • the size of descriptive tags may be directly proportional to the strength of the overlap between the elements. This is useful for predictive anticipation on the functional connection(s) of the connecting tags.
  • a decision tree on the exploration can further be mapped and studied. Each of the descriptive tags may be rated (like/dislike).
  • Metric 1: number of explored items/number of recommended items. This metric indicates the discovering initiative on a quantitative basis.
  • Metric 2: number of explored items of a similar type/number of recommended items of the similar type. This metric indicates the discovering initiative of a user on a qualitative basis and provides for similarity exploration for subtle similarities and/or obvious similarities, novelty exploration and serendipitous exploration.
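  • Both metrics can be computed directly from exploration logs; the helper below is an illustrative assumption about how such logs might be represented, not the patent's implementation:

```python
def discovery_metrics(explored, recommended, tag_type=None):
    """Metric 1: number of explored items / number of recommended items.
    Metric 2: the same ratio restricted to items of one type (e.g. similarity,
    novelty or serendipitous recommendations). `explored` and `recommended`
    are lists of (item_id, item_type) tuples."""
    def ratio(e, r):
        return len(e) / len(r) if r else 0.0
    metric1 = ratio(explored, recommended)
    if tag_type is None:
        return metric1, None
    explored_t = [i for i in explored if i[1] == tag_type]
    recommended_t = [i for i in recommended if i[1] == tag_type]
    return metric1, ratio(explored_t, recommended_t)
```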
  • the music genogram may be used to make a side step from the playlist generated from the mood trajectory. It is to be understood that the master music track for the music exploration does not necessarily come from the playlist. It is possible to explore music using music genograms starting from any music track.
  • the hybrid of the unique features of the described mood trajectory, structured personalization and music genogram enables users to grow to higher levels on the scale of indifferent to casual to enthusiastic to savant.
  • First a mood trajectory is selected by either building an own trajectory, following one of the recommended trajectories, selecting trajectories from the expert library or following the trajectories inputted from the shared community.
  • the options include pre-selecting and loading music tracks per artist/genre on the emotional wheel, adding differential weights along the selective emotions of the trajectory and assigning a probabilistic distribution on personalized, favorite, rated and/or banned songs and/or types of genograms (e.g. 2 stanzas, 3 stanzas, instrumental) and/or favorite music genograms and/or incompleted music genograms for the music tracks to get populated.
  • the user listens to the music tracks along the emotion trajectory on the emotion wheel.
  • the user is able to manually tag each of the music tracks populated in the playlist algorithm of the trajectory by banning, favoriting and/or rating a music track.
  • the user is able to personalize each of the music tracks populated in the playlist algorithm of the trajectory.
  • the user is able to see the music genogram of each of the music tracks populated in the playlist algorithm of the trajectory.
  • Furthermore the user is able to discover the tags of the music genogram by exploring type of the tag and the connecting songs at the macro/micro tags.
  • If the user wants to immediately explore the visual connection(s) as revealed from the music genogram of the master music track, then the user is able to selectively/completely queue the connecting song(s) in the playlist of the trajectory.
  • If the user wants to immediately explore the visual connection(s) as revealed from a slave's music genogram, then the user is able to selectively/completely queue the connecting song(s) in the playlist of the trajectory.
  • This logic can also be extended to a slave's slave's genogram, and so on, in a potentially endless loop as triggered by the interactive initiative of the user.
  • the user is able to favorite the music genogram.
  • the user is able to rate (like ‘thumbs up’ or ‘thumbs down’) the connecting node(s) of the music track generated in the playlist when they are being played.
  • the user is able to tag the music genogram for a reminder. This feature is useful if the user has incompletely explored the tags revealed in the music genogram and wants to complete the incomplete discovery of the tags at a later time/event.
  • the user is able to share the music genogram or only selective tag(s) in the music genogram within the online community.
  • the user typically follows a decision tree when creating mood trajectories (typically non-real time) and exploring individual media items such as music tracks (typically real time).
  • FIG. 10 shows an example of a decomposition of how non-real time user initiatives are mapped to discovering/exploring music.
  • Block 100 indicates the start of the non-real time exploration initiative.
  • Block 101 indicates the start of different trajectory structures.
  • Block 102 indicates building an own trajectory.
  • Block 103 indicates using recommended trajectories.
  • Block 104 indicates using an expert pre-mapped library with pre-stored trajectories.
  • Block 105 indicates using a community induced trajectory.
  • Block 106 indicates pre-selecting and loading music tracks per artist or genre on the emotional wheel.
  • Block 107 indicates adding emotion weights on the different locations of the trajectory.
  • Block 108 indicates assigning probabilistic distribution on selecting personalized and/or type of genogram and/or incomplete and/or completed genogram and/or favorite and/or ban music tracks to get populated as a playlist.
  • Block 109 indicates ‘following which type of recommendation?’.
  • Block 110 indicates a link to a real time initiative as shown in FIG. 11 .
  • FIG. 11 shows an example of a decomposition of a real-time initiative per music track on the emotional trajectory.
  • Block 200 indicates the start of the real time initiative.
  • Block 201 indicates explicit tagging.
  • Block 202 indicates a favorite action.
  • Block 203 indicates a rating action.
  • Block 204 indicates a ban action.
  • Block 205 indicates a skip action.
  • Block 206 indicates a personalization initiative.
  • Blocks 207 indicate an addition.
  • Block 208 indicates emotion coordinates on the trajectory, either exact or near exact.
  • Block 209 indicates primary/secondary emotion on the first box.
  • Block 210 indicates a comparison.
  • Block 211 indicates ‘match?’.
  • Block 212 indicates the result ‘close’.
  • Block 213 indicates the result ‘in-between’.
  • Block 214 indicates the result ‘far’.
  • Block 215 indicates original trajectory completion.
  • Block 216 indicates fast track.
  • Block 217 indicates full track.
  • Block 218 indicates continual track.
  • Block 219 indicates disturbed track.
  • Block 220 indicates a favored shock.
  • Block 221 indicates an unfavored shock.
  • Block 222 indicates favored shock as discovery initiative.
  • Block 223 indicates a genogram initiative.
  • Block 224 indicates option 1 .
  • Block 225 indicates option 2 .
  • Block 226 indicates option 3 .
  • Block 227 indicates master-slave.
  • Block 228 indicates master-slave-slave . . . n.
  • Block 229 indicates quantitative metrics.
  • Block 230 indicates qualitative metrics.
  • Block 231 indicates completion.
  • Block 232 indicates in-completion.
  • Block 233 indicates new genograms.
  • Block 234 indicates existing genograms.
  • Block 235 indicates similarity.
  • Block 236 indicates novelty.
  • Block 237 indicates serendipitous.
  • Block 238 indicates on connection.
  • Block 239 indicates community.
  • Each of the non-real time and real time initiatives for music discovery/exploration as shown in FIG. 10 and FIG. 11 may be allocated in terms of coordinates on the emotion wheel to enable statistical reports and recommendations.
  • Traditional data regression modeling techniques can be deployed per music track populated in a quadrant of the emotion wheel. These techniques thus map a music track as an input to the respective emotion coordinates and respective extent of the discovery initiatives.
  • Differential weights are assigned on different music discovery initiatives mapped in the dual effort of real time and non-real time. For example, within non-real time initiatives building a trajectory receives more weight on music discovery potential than when following recommended trajectories.
  • a similar logic of assigning differential weights can also be extended to gauge the music discovery potential in real time.
  • FIG. 12 shows an example of a mapping of real time and non-real time initiatives on the emotional wheel.
  • a first cluster 71 of music discovery initiatives represents personalization initiatives of the user.
  • a second cluster 72 of music discovery initiatives represents music genogram discoveries of the user.
  • a third cluster 73 of music discovery initiatives represents the continual trajectory.
  • The personalized music style in the first quadrant may e.g. be expressed as 20% PI + 55% GDI + 50% CT, where PI refers to the personalization initiatives, GDI to the genogram discovery initiatives and CT to the continual trajectory. Note that the coverage of the clusters may overlap over the different sets of music initiatives and may therefore not add up to 100%.
  • a radical change in the number of attempt(s) in the same quadrant and on the same reference of the loaded data may be notified to the user.
  • the expression may be expanded or optimized to cover more quadrants on the emotional wheel and/or include other music discovery initiatives.
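  • A hedged sketch of how cluster coverage could be aggregated into a per-quadrant music-style expression such as 20% PI + 55% GDI + 50% CT; the cluster labels and the averaging rule are assumptions, and, as noted above, the percentages need not add up to 100%:

```python
def music_style_expression(initiatives, quadrant):
    """`initiatives` is a list of dicts, each with an emotional-wheel 'quadrant' and a
    'cluster' label ('PI' = personalization initiative, 'GDI' = genogram discovery
    initiative, 'CT' = continual trajectory). Returns, per cluster, the share of
    that cluster's initiatives falling in the given quadrant."""
    expression = {}
    for cluster in ("PI", "GDI", "CT"):
        in_cluster = [i for i in initiatives if i["cluster"] == cluster]
        in_quadrant = [i for i in in_cluster if i["quadrant"] == quadrant]
        share = 100 * len(in_quadrant) / len(in_cluster) if in_cluster else 0
        expression[cluster] = round(share)
    return expression  # e.g. {"PI": 20, "GDI": 55, "CT": 50}
```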
  • the following examples show four alternative clusters called ‘what inspires me?’, ‘what is working for me?’, ‘what are the possibilities for me?’ and ‘what is missing for me?’. It is to be understood that any other cluster may be defined.
  • the cluster ‘what inspires me?’ is for example a cluster on: personalized music tracks with positive emotions; music genogram initiatives involving types of music genogram, types of music genogram tags and option among the three methods to exploring a genogram; music tracks/artists following the above habits; and high score on cumulative initiative index on discovering music, either real time or non-real time.
  • the cluster ‘what is working for me?’ is for example a cluster on: trajectories with all favorable music shocks; types of genograms; option among the three methods to exploring a genogram; and music tracks/artists following the above habits.
  • the cluster ‘what are the possibilities for me?’ is for example a cluster on: trajectories with no music shocks; music tracks/artists following the above habits; and little tried option(s) among the three methods to exploring a genogram.
  • the cluster ‘what is missing for me?’ is for example a cluster on: trajectories which are never or very little followed; music tracks/artists following the above habits; unattempted option(s) among the three methods to exploring genogram; and low score on cumulative initiative index on discovering music, either in real time or non-real time.
  • the described hybrid architecture takes care of tangible (incremental or radical) changes in the user's growing music understanding. It features dynamically revising the perception about music along the basic emotions as well as combinations of emotions and opposites of basic emotions. It measures and expresses perception change of the user on rating the songs at micro levels of emotions such as carry-over beliefs and current beliefs. It supplements users with an intuitive user interface for instant gratification on micro elements used in building the playlist. It offers the feature of simultaneously generating personalized options while choosing personalized solutions. It captures the learning potential and the rate of change of the learning potential of the listener in real time. It captures the learning potential and the rate of change of the learning potential of the listener in the experimenting time. It gives an individualized expression of the music listening style, capturing extrinsic and intrinsic habits related to personal traits and to music discoveries.
  • An aggregate metric on one's music-style is highly desirable since a changed music-style expression will reflect a change in the personal expression and force one to think about his/her listening habit/style. It optimizes for personalized recommendations whilst offering possibilities to fully discover expert recommendations.
  • This combined aspect of playlist recommendation is highly desirable. It gives an option to track partially discovered/explored songs. It generates recommendations adapting to the universal music styles. It fully monitors the reasons for following a continuum (music tracks being played without disturbances) and shocks on the songs of the playlist. It evaluates the songs populated in the playlist to create positive experiences on music discoveries at macro and/or micro levels of a music track.
  • In the above, examples are given of playlists containing music tracks. It is to be understood that the invention is not limited to playlists containing music tracks. Any media item can be included in the playlist, such as a music track, a video or a picture.
  • a playlist may contain music items, videos or pictures only. Alternatively a playlist contains a mixture of music items, videos and/or pictures.
  • FIG. 13 shows an example of steps performed in a computer-implemented method for creating a playlist.
  • a graphical representation of an emotional wheel is displayed.
  • a first input is received indicating a starting point of a mood trajectory in the emotional wheel.
  • a second input is received indicating an end point of the mood trajectory in the emotional wheel.
  • the mood trajectory is defined by connecting the starting point to the end point via one or more intermediate points.
  • a graphical representation of the mood trajectory is displayed in the graphical representation of the emotional wheel.
  • the media items are selected by searching in the meta-data for emotion characteristics or mood characteristics that match the initial mood, the intermediate moods and the destination mood, respectively.
  • the playlist of media items is created in an order from initial mood to destination mood.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Data Mining & Analysis (AREA)
  • Library & Information Science (AREA)
  • Processing Or Creating Images (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention enables music tracks in a playlist to be explored using a unique music genogram data format. The invention further enables a user to map an emotional trajectory on the emotion wheel as a basis for the generation of the playlist with music tracks. The invention provides for a unique visualization of the playlist using the emotional wheel representation. A user will be given the option to specify an initial mood and a destination mood. The trajectory between the initial mood and the destination mood may be steered through the emotions falling in-between. The thus obtained mood trajectory is then populated by music tracks to form a playlist.

Description

    FIELD OF THE INVENTION
  • The present invention relates to the exploration of music tracks in a playlist and the creation of playlists. More specifically, the invention relates to a computer-implemented method for the creation of a playlist, a music genogram data structure and a computer implemented method to explore music tracks using the music genogram data structure.
  • BACKGROUND
  • Driven by the rapid expansion of the Internet, media items, such as music, video and pictures, are becoming available in digital and exchangeable format more and more. It is to be expected that at some point in time substantially all music will be available online, e.g. through music portal websites. A music track is a single song or instrumental piece of music. With potentially billions of music tracks from new and existing artists being added to the worldwide online available music collection on a monthly time scale, it is becoming very difficult to find favorite music tracks or new music tracks to one's liking from the vast collection of music. Similarly, the number of videos and pictures available through online video and picture collections is growing.
  • To enable searching for and/or selection of a particular music track, artist, music album, etcetera, digitized music is typically provided with textual information in the form of meta-data. The meta-data typically includes media characteristics like artist, title, producer, genre, style, composer and year of release. The meta-data may include classification information from music experts, from friends or from an online community to enable music recommendations.
  • A playlist is a collection of media items grouped together under a particular logic. Known online music portals, such as e.g. Spotify (www.spotify.com) and Pandora (www.pandora.com), offer tools to make, share, and listen to playlists in the form of sequences of music tracks. Individual tracks in the playlist are selectable from an online library of music. The playlist can be created so as to e.g. reflect a particular mood, accompany a particular activity (e.g. work, romance, sports), serve as background music, or explore novel songs for music discoveries.
  • Playlists may be generated either automatically or manually. Automatically created playlists typically contain media items from similar artists, genres and styles. Manual selection of e.g. a particular music track is typically driven by listening to a particular track or artist, a recommendation of a track or artist, or a preset playlist. It is possible that a user provides a manually selected track or particular meta-data as a query and that a playlist is generated automatically as indicated above in response to this query.
  • Known tools for finding media items and creating playlists do not take into account different tastes of individual users, which is further compounded by additional differences in their demographics. Moreover, a user's response to e.g. music typically depends on the type of user. Four types of users can generally be identified: users indifferent to music; users casual to music; users enthusiastic to music; and music savants. Indifferent users typically would not lose much sleep if music would cease to exist. Statistically 40% of users in the age group of 16-45 are of this type. Casual users typically find that music plays a welcome role but other things are far more important. Their focus is on music listening and playlists should be offered in a transparent manner. Statistically 32% of users in the age group of 16-45 are of this type. Enthusiastic users typically find that music is a key part of life but it is balanced by other interests. Their focus is on music discovery and playlists may be created using more complex recommendations. Statistically 21% of users in the age group of 16-45 are of this type. Savant users typically feel that everything in life is tied up with music. Their focus is on the question “what's hot in music?”. Statistically 7% of users in the age group of 16-45 are of this type. Known tools typically target a specific type of user and do not take into account different types of users.
  • It is known that emotions can be used to generate a playlist. Users or experts, such as music experts, may e.g. add mood classification data to the meta-data to enable generation of a playlist with tracks in a particular mood. WO2010/027509 discloses an algorithm that produces a playlist based on similarity metrics that includes relative information from five time-varying emotional classes per track.
  • A method for personalizing content based on mood is disclosed in US2006/0143647. An initial mood and a destination mood are identified for a user. A mood-based playlisting system identifies the mood destination for the user playlist. The mood destination may relate to a planned advertisement. The mood-based playlisting system has a mood sensor such as a camera to provide mood information to a mood model. The camera captures an image of the user. The image is analyzed to determine a current mood for the user so that content may be selected to transition the user from the initial mood to the destination mood responsive to the determined mood of the user. The user has no direct control over the desired mood destination and the transition path. Moreover, use of a camera to capture the current mood may result in a less or non-preferred playlist, as there can be a mismatch between the desire of the user to reach a certain mood and the mood reflected per his/her facial expressions.
  • There is a need for a user friendly method for exploring music and creating and manipulating mood based playlists for different types of users and from a vast and growing amount of available online media items.
  • SUMMARY OF THE INVENTION
  • According to an aspect of the invention a computer-implemented method is proposed for exploring music tracks. The method comprises displaying a graphical representation of a first music genogram of a first music track. The first music genogram has a data structure. The method further comprises receiving a first exploration input indicating a selection of one of the tags in one of the sections. The method further comprises, if the first exploration input indicates a pro-tag, displaying a link to a second music genogram of a second music track, and/or, if the pro-tag comprises two or more micro-pro tags, displaying a graphical representation of the decomposition of the pro-tag and a link to a third music genogram of a third music track for each of the micro-pro tags. The music genogram data structure comprises sub-segment data identifying one or more sub-segments to define a decomposition in time of a music track. Each sub-segment has a start time and an end time. The music genogram data structure further comprises band data identifying one or more bands to define a decomposition for the time length of the music track. A cross section of a sub-segment and a band forms a section. The music genogram data structure further comprises tag data identifying one or more tags in one or more sections. A tag can be a deceptive tag to indicate a surprising effect. The tag can be a pro-tag to identify a starting point for an exploration of another music genogram based on similarities. The tag can be a sudden change tag to indicate a substantial change in scale, pitch or tempo. The tag can be a hook tag to identify a unique sound feature in the music track. The tag can be a micro-pro tag to enable a decomposition of the pro-tag.
  • Thus, music tracks can be explored in a user friendly way.
  • The embodiment of claim 2 advantageously enables more precise tagging.
  • The embodiment of claim 3 advantageously enables the music genogram to be particularly suitable for Indian music.
  • The embodiment of claim 15 advantageously enables exploring music tracks from a mood based playlist to be explored in a user friendly way. The mood based playlist can be created in a user friendly way.
  • Examples of a media item are a music track, a video or a picture. The playlist may comprise a mixture of music tracks, videos and/or pictures.
  • The embodiment of claim 5 advantageously enables a quick and easy creation of a mood trajectory.
  • The embodiment of claim 6 advantageously enables a user to manipulate the mood trajectory.
  • The embodiment of claim 7 advantageously enables more media items to fulfill the selection criteria and thus become selectable when creating the playlist.
  • The embodiment of claim 8 advantageously enables a user to place restrictions on the playlist and/or the media items in the playlist. Examples of a media characteristic are artist, title, producer, genre, style, composer and year of release.
  • The embodiment of claim 9 advantageously enables a user to increase the number of media items for a specific mood in the playlist.
  • The embodiment of claim 10 advantageously enables a user to see where in the mood trajectory a decrease or increase in the number of media items with specific media characteristics can be expected.
  • The embodiment of claim 11 advantageously enables future creation of a playlist to take into account past user actions related to a particular media item.
  • According to an aspect of the invention a computer program product is proposed. The computer program product comprises software code portions configured for, when run on a computer, executing one or more of the above mentioned method steps.
  • Hereinafter, embodiments of the invention will be described in further detail. It should be appreciated, however, that these embodiments are not to be construed as limiting the scope of protection for the present invention.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • Aspects of the invention will be explained in greater detail by reference to exemplary embodiments shown in the drawings, in which:
  • FIG. 1 shows a graphical representation of a prior art mood wheel;
  • FIG. 2 shows a graphical representation of mood trajectories of an exemplary embodiment of the invention;
  • FIG. 3 shows a graphical representation of a mood trajectory of an exemplary embodiment of the invention;
  • FIG. 4 shows a graphical representation of a mathematical algorithm of an exemplary embodiment of the invention;
  • FIG. 5 shows a graphical representation of a mood trajectory with weighted points of an exemplary embodiment of the invention;
  • FIG. 6 shows a graphical representation of a mood trajectory with shock points of an exemplary embodiment of the invention;
  • FIG. 7 shows a graphical representation of a music genogram of an exemplary embodiment of the invention;
  • FIG. 8 shows a collection of music genogram tags of an exemplary embodiment of the invention;
  • FIG. 9 shows a graphical representation of exploring a music genogram of an exemplary embodiment of the invention as may be displayed on a user interface;
  • FIG. 10 schematically shows a non-real time exploration initiative of an exemplary embodiment of the invention;
  • FIG. 11 schematically shows a real time initiative per music track in the emotional wheel of an exemplary embodiment of the invention;
  • FIG. 12 shows a graphical representation of a mapping of real time and non-real time initiatives on the emotional wheel of an exemplary embodiment of the invention;
  • FIG. 13 schematically shows steps of a method of an exemplary embodiment of the invention;
  • FIG. 14 schematically shows steps of a method of an exemplary embodiment of the invention;
  • FIG. 15 schematically shows steps of a method of an exemplary embodiment of the invention; and
  • FIG. 16 schematically shows steps of a method of an exemplary embodiment of the invention.
  • DETAILED DESCRIPTION
  • In FIG. 1 a model for classification of emotions is shown, known as an emotional wheel 10. The emotion wheel 10 is founded on psychology results and views of scientists like Plutchik in 1980, Russell in 1980, Thayer in 1989 and Russell and Feldman Barrett in 1999. The emotional wheel 10 captures a wide range of significant variations in emotions in a two dimensional space. Emotions can be located in the two dimensional Cartesian system along the various intensities of emotions and degree of activation. The x-axis defines the level of valence. The y-axis defines the level of arousal. Each emotional state can be understood as a linear combination of these two dimensions. The four quadrants of the emotional wheel identify the primary emotions joy, anger, sadness and neutral. Secondary emotions, providing a more detailed classification, are indicated in italics and include the emotions pleased, happy, interested, excited, alarmed, annoyed, nervous, afraid, angry, furious, terrified, sad, depressed, bored, sleepy, calm, relaxed, content and serene. It is possible to define other and/or more primary and/or secondary emotions.
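  • By way of illustration only, the emotions of the wheel can be modelled as points in the valence/arousal plane. The following Python sketch uses hypothetical coordinates; the exact placement of each emotion and the coordinate ranges are assumptions made here for illustration and are not part of the described wheel:

      import math

      # Hypothetical (valence, arousal) coordinates in the range [-1, 1]; the exact
      # placement of each secondary emotion on the wheel is an assumption.
      EMOTION_COORDINATES = {
          "happy":     (0.8, 0.5),
          "excited":   (0.6, 0.8),
          "calm":      (0.6, -0.6),
          "relaxed":   (0.7, -0.5),
          "sad":       (-0.7, -0.4),
          "depressed": (-0.8, -0.5),
          "angry":     (-0.6, 0.7),
          "afraid":    (-0.5, 0.8),
      }

      def nearest_emotion(valence, arousal):
          """Return the labelled emotion closest to a point clicked on the wheel."""
          return min(EMOTION_COORDINATES,
                     key=lambda e: math.dist(EMOTION_COORDINATES[e], (valence, arousal)))

      print(nearest_emotion(0.75, -0.55))  # -> 'relaxed' (with the coordinates above)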
  • The invention enables a user to map an emotional trajectory on the emotion wheel as a basis for the generation of a playlist. Moreover, the invention provides for a unique visualization of the playlist using the emotional wheel representation. A user will be given the option to specify an initial mood and a destination mood. The trajectory between the initial mood and the destination mood may be steered through the emotions falling in-between. The thus obtained mood trajectory is then populated by music tracks to form a playlist.
  • FIG. 2 shows an emotional wheel as shown in FIG. 1. For readability purposes the labels indicating the axes, primary emotions and secondary emotions as shown in FIG. 1 are not shown in FIG. 2. A user may select the starting point 1 as starting point for a playlist. The starting point typically corresponds to a current emotion of the user. Furthermore the user may select the end point 2 as the end point for the playlist. Music tracks are then populated along a trajectory in-between the starting point 1 and the end point 2, e.g. trajectory 11, 12 or 13, which tracks fall exactly on the trajectory and/or at near-exact locations within an incremental distance of the trajectory.
  • A graphical user interface showing the emotional wheel may be presented, through which the user first affixes a starting point by using a mouse pointer to click on the spot on the emotional wheel that resembles a perceived prevailing mood or emotion. The end point resembling a desired future mood or emotion is then also affixed in a similar fashion. Next a mood trajectory is drawn between the two points, either as a simple straight line or in a more complex form such as trajectory 11, 12 or 13. The trajectory may be altered using the mouse pointer to form a desired path through desired moods or emotions. If e.g. the automatically fetched trajectory resembles trajectory 11, this trajectory may be changed by e.g. moving the mouse pointer from a top left location to a bottom right location on the graphical user interface.
  • It will be understood that the user interface can be programmed to accept other mouse gestures to achieve a similar effect. Alternatively the user interface may use a touch screen interface to control the pointer, or any other man-machine interaction means.
  • As shown in FIG. 3, it is possible that a music trajectory 14 between a starting point 1 and an end point 2 self intersects at one or more points 3, thus resulting in one or more recurring emotions along the trajectory.
  • Instead of selecting a starting point and an end point it is possible to have the user interface display one or more predefined mood trajectories obtainable from a backend database. Examples of predefined mood trajectories are “from sad to happy” and “from excited to calm”. The selected predefined mood trajectory will be displayed on the emotional wheel and may be altered as described above.
  • Once the mood trajectory is selected a backend music engine, which is implemented as a software module or a set of software modules, uses a mathematical algorithm to populate the music tracks along the trajectory. The backend music engine may reside at a server, such as the online music portal server, or be partially or entirely running on the client device where the user interface is active.
  • An example of a mathematical algorithm is the virtual creation of circular sections along the trajectory as visualized in FIG. 4. For readability purposes the emotional wheel is not shown. The circular sections are used to smoothen the trajectory 15 and define an area wherein the music tracks to be selected are to be found. A first series of circular sections 21 are calculated having their center points lying along the trajectory 15. A second series of circular sections 22 are calculated having their center points at the points of intersection of the first series of circular sections. Music tracks residing within the area of the first and second series of circular sections cover the music tracks being selectable along the mood trajectory 15 for insertion in the playlist.
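  • As an illustration only, the selection of music tracks inside such circular sections could be sketched in Python as follows. The trajectory is assumed to be a list of (valence, arousal) points and each track is assumed to carry mood coordinates in its meta-data; the second series of circles is approximated here by circles centred on the midpoints between consecutive trajectory points, which is a simplification of the intersection-point construction of FIG. 4:

      import math

      def selectable_tracks(trajectory, tracks, radius=0.15):
          """Return the tracks whose mood coordinates fall inside circular sections
          placed along the mood trajectory."""
          # First series: circles centred on the trajectory points themselves.
          centres = list(trajectory)
          # Second series (approximated): circles centred on the midpoints between
          # consecutive trajectory points.
          centres += [((x1 + x2) / 2, (y1 + y2) / 2)
                      for (x1, y1), (x2, y2) in zip(trajectory, trajectory[1:])]
          return [t for t in tracks
                  if any(math.dist((t["valence"], t["arousal"]), c) <= radius
                         for c in centres)]

      # Example with made-up meta-data.
      trajectory = [(-0.6, -0.4), (-0.2, 0.0), (0.3, 0.3), (0.7, 0.5)]
      tracks = [{"title": "Track A", "valence": 0.28, "arousal": 0.35},
                {"title": "Track B", "valence": -0.9, "arousal": 0.9}]
      print([t["title"] for t in selectable_tracks(trajectory, tracks)])  # -> ['Track A']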
  • Instead of calculating circular sections as shown in FIG. 4, the area around the trajectory 15 may be calculated by a probabilistic distribution function virtually forming a regular or irregular shaped area following the trajectory 15.
  • Another example of a mathematical algorithm uses a distance function to virtually create one or more additional trajectories that follow the shape of the mood trajectory at predefined distances from the mood trajectory. The mathematical algorithm is then applied to the mood trajectory and the one or more additional trajectories. The thus obtained areas along the different trajectories together form a larger area. Music tracks residing within the larger area cover the music tracks being selectable along the mood trajectory for insertion in the playlist.
  • The playlist may be refined in various ways. Refinements may have a real-time impact on the playlist.
  • Through the user interface a user may be given an option to restrict the selectable music tracks using available meta-data. The meta-data is e.g. used to select music tracks from a single genre or a combination of two or more genres, an artist, overlapping of two or more artists, a year of release or a time frame for release years, or any other meta-data or combinations of meta-data.
  • The user interface may be used to display the total time-length of the music tracks and/or the total number of music tracks sequenced to be played in the playlist. An option may be displayed to make the playlist shorter using input parameters such as the total number of songs and/or the total time-length of the playlist.
  • The user interface may display an option to partially play the music tracks or a selection of the music tracks in the playlist. An option is displayed to play e.g. 30%, 50% or 100% of the total time-length of each music track. This way the total playlist can be played with short-play. Additionally there may be an automatic suggestive option to affect only selected tracks in the playlist. Through this option e.g. the most liked music tracks will be played for a longer duration while music tracks with a predefined low rating will be short-played.
  • Through the user interface the user may be given an option to rate, save, share, favorite, ban and/or comment on self-constructed trajectories. These actions related to self-constructed trajectories may be logged to determine a level of interactiveness in exploring music before music tracks are actually selected or played.
  • There may be a weighted distribution option along the trajectory path on different emotions along with different meta-data elements like genres and artists. With this option it is possible to selectively amplify different types of weights falling along the trajectory. An example of how this may be visualized in the user interface is shown in FIG. 5. Along the mood trajectory 16 one or more points 5, 6 may be added using the mouse pointer. The size of points 4, 5 and 6 at the particular moods or emotions defines the probability that music tracks will be selected at the respective parts of the trajectory. In FIG. 5 points 4 are the smallest, point 5 is made bigger and points 6 are made biggest.
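  • A minimal sketch of how such weights could bias the selection is given below, assuming each point carries a numeric weight proportional to its displayed size; the field names are hypothetical:

      import random

      def weighted_pick(points, k=3, seed=None):
          """Pick k trajectory points, with probability proportional to their weight
          (e.g. the size of the point drawn in the user interface)."""
          rng = random.Random(seed)
          return rng.choices(points, weights=[p["weight"] for p in points], k=k)

      points = [{"mood": (0.2, 0.1), "weight": 1},   # smallest point
                {"mood": (0.4, 0.3), "weight": 2},   # bigger point
                {"mood": (0.6, 0.5), "weight": 4}]   # biggest point
      print([p["mood"] for p in weighted_pick(points, seed=42)])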
  • The mood trajectory between the starting point and the end point may be automatically calculated taking into account user specific mood characteristics. Per user a backend learning engine, which is implemented as a software module or a set of software modules and typically runs on a server such as the online music portal server, may log music tracks selected or played for a particular mood and for transitioning from one mood to another mood. This enables the learning engine to make predictions in desired mood transitions to get from the starting mood to the end mood. The calculation of the mood trajectory may use various criteria such as music discoveries and/or subjective and cognitive factors.
  • The training algorithm of the backend learning engine may log the relative positioning of a music track in a playlist and capture allied unfavorable and favorable musical shocks, possibly in real-time. In this context musical shocks are activities such as rating, favoriting, skipping, banning and emotionally tagging music tracks. An unfavorable shock is e.g. skipping or banning of a music track. A favorable shock is e.g. to favorite a music track. A continual part of the mood trajectory contains no musical shocks while a disturbed part of the mood trajectory contains one or more shocks.
  • FIG. 6 shows an example of a mood trajectory 17 with shocks 10 at several locations on the trajectory. Tag points 9 a-9 d are added to distinguish continual from disturbed trajectories. For readability purposes only one shock has a reference number 10. Between the starting point 7 and tag-point 9 a no shocks are recorded. The partial trajectory between points 7 and 9 a is thus a continual trajectory. Between tag-point 9 a and tag-point 9 b thirteen shocks are recorded. The partial trajectory between points 9 a and 9 b is thus a disturbed trajectory. Between tag-point 9 b and tag-point 9 c and between tag-point 9 c and tag-point 9 d no shocks are recorded. The partial trajectories between points 9 b and 9 c and between points 9 c and 9 d are thus continual trajectories. Between tag-point 9 d and end point 8 three shocks are recorded. The partial trajectory between points 9 d and 8 is thus a disturbed trajectory.
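  • One possible way to derive the continual/disturbed labelling of FIG. 6 from logged shocks is sketched below in Python; the tag-point positions and shock positions along the trajectory are hypothetical values chosen for illustration:

      def classify_partial_trajectories(tag_points, shocks):
          """Label each partial trajectory between consecutive tag points as
          'continual' (no shocks) or 'disturbed' (one or more shocks).
          tag_points: ordered positions along the trajectory, from start to end.
          shocks:     positions along the trajectory at which a shock was recorded."""
          labels = []
          for start, end in zip(tag_points, tag_points[1:]):
              n = sum(1 for s in shocks if start <= s < end)
              labels.append((start, end, "disturbed" if n else "continual", n))
          return labels

      # Hypothetical data: three shocks between the second and third tag points.
      print(classify_partial_trajectories([0.0, 0.2, 0.55, 1.0], [0.3, 0.4, 0.45]))
      # -> [(0.0, 0.2, 'continual', 0), (0.2, 0.55, 'disturbed', 3), (0.55, 1.0, 'continual', 0)]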
  • When a partial mood trajectory is automatically generated the backend learning engine may be queried to determine if one or more shocks have previously been recorded or if the partial mood trajectory is known to be a continual or disturbed trajectory. The shock information may be used to avoid predictive leads from the earlier reactions. For example, music tracks along a disturbed trajectory will have a lower probability of being populated when the shock is unfavorable.
  • The selection of music tracks along the mood trajectory may use personalized meta-data or other forms of personalized music characteristics. A structured personalization method will be discussed, but it is to be understood that the invention can make use of any form of personalized music characteristics that has been created in any other manner.
  • The personalized music characteristics method provides a way to tag music tracks with moods or emotions experienced or felt by a user when listening to a particular music track. The following basic format is used to define the mood or emotion:
  • “I recall ______ (Box1) when ______ (Box2)”
  • When listening to a particular song, a user could e.g. choose “love” for Box1 and “I had my first date” for Box2, thus creating the line “I recall love when I had my first date”.
  • The Box1 information is related to primary and/or secondary emotions, such as the emotions of the four quadrants and the emotions within the four quadrants as shown in FIG. 1. To select the Box1 emotion, the user interface displays the emotional wheel allowing the user to point and click an emotion. Alternatively emotions are shown in any other representation format from which a user may select an emotion using the user interface.
  • Box2 information is used to further personalize the music characteristics. Box2 typically describes a situation from the past or future. Since there are substantially infinite possible situations, Box2 preferably allows a free text input.
  • Each of the two boxes Box1 and Box2 may be linked to pictures. Since the Box1 possibilities are predefined using the emotional wheel logic, there can be a database on the server with one or more pictures for each of the emotional tags. The Box1 pictures may be randomly recalled and displayed along with the text if the user has personalized a music track. For Box2 an option may be presented to upload a personal picture. Also this uploaded picture can be displayed along with the text if the user has personalized a music track.
  • Over time perceived emotions or moods may change for a particular music track. Hereto the user can be given the possibility to alter the Box1 and/or Box2 information for a particular music track. Preferably the server stores not only the current personalized music characteristics of a music track but also past characteristics that have been modified.
  • A third information element Box3 may be added to the personalized music characteristics to enable addition of a time factor. The time factor limits the perceived emotion or mood to the selected time. The Box3 information enables e.g. selection of one of the following elements: early morning, morning, mid-morning, afternoon, evening, night/bed time, summer time, winter time, drizzling rains, pouring rains and spring time. Any other moments in time may be defined for the purpose of Box3.
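  • The structured personalization could for example be stored as a small record per music track. The following Python sketch is only one possible shape of such a record; the class and field names are assumptions, not part of the described format:

      from dataclasses import dataclass, field
      from typing import List, Optional

      @dataclass
      class PersonalTag:
          """One 'I recall <Box1> when <Box2>' entry, optionally time-limited by Box3."""
          emotion: str                   # Box1: primary or secondary emotion, e.g. 'love'
          situation: str                 # Box2: free-text situation, e.g. 'I had my first date'
          moment: Optional[str] = None   # Box3: e.g. 'evening', 'winter time', 'pouring rains'
          picture: Optional[str] = None  # optional uploaded picture linked to Box2

          def as_sentence(self):
              return f"I recall {self.emotion} when {self.situation}"

      @dataclass
      class PersonalizedMetaData:
          track_id: str
          current: PersonalTag
          history: List[PersonalTag] = field(default_factory=list)  # past, modified characteristics

          def update(self, new_tag):
              """Keep the previous characteristics when the perceived emotion changes over time."""
              self.history.append(self.current)
              self.current = new_tag

      meta = PersonalizedMetaData("track-001", PersonalTag("love", "I had my first date"))
      print(meta.current.as_sentence())  # -> 'I recall love when I had my first date'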
  • Music characteristics created through the structured personalization method may be used by the backend learning engine to learn a cognitive pattern of the user. More specifically, causation and effectuation can be systematically predicted along a chain reaction from a particular stimulus. Furthermore, the backend trajectory engine may use this information to calculate a chain of emotions or moods from a current emotion to aid the automatic creation of the mood trajectory from the starting point to the end point through intermediary calculated points.
  • When listening to a particular music track in the playlist a side step may be made to explore other music that is somehow (i.e. through music genograms, as will be explained) related to the currently playing music. Hereto the user interface may display a button to open a music genogram of the currently playing music track, from which other music tracks may be explored. The other music tracks that are explored using the music genograms do not necessarily match the current mood along the mood trajectory. At any point in time the user may leave the music exploration path and return to the playlist as defined by the mood trajectory.
  • The music genogram is a ‘genetic structure’ of a music track. In FIG. 7 an example of a typical music genogram 30 for an Indian music track is shown. The gene structure may be different for music tracks originating from different geographies or for music in different genres.
  • The music genogram 30 divides the music track in seven sub-segments in time: an intro part 34, followed by a main vocals part 35 (e.g. mukhada in an Indian parlance), followed by an instrumental part 36, followed by a stanza vocal part 37 (e.g. antara in an Indian parlance), followed by another instrumental part 36, followed by another stanza vocal 37 (e.g. antara in an Indian parlance), followed by a coda part 38. The length of each sub-segment may vary.
  • The music genogram 30 further divides the music track in three different horizontal bands: a beat band 31, a tune band 32 and a special tune band 33. The beat band 31 addresses musical features related to rhythm and percussion. Bass and acoustic guitar strokes which form a rhythmic pattern are also included in the beat band. The tune band 32 includes tune attributes in vocals and accompanied instrumentals. The special tune band 33 relates to special tones such as chorus, yodel, whistle, use of different language, whispering, breathing sounds, screaming, etcetera. Each of the three bands 31, 32, 33 is divided equally in the seven sub-segments described above.
  • Thus, in the example of the music genogram 30 there are 21 unique sections. Other music genograms may have different sub-segments and/or bands resulting in a different number of unique sections. Each section may be further subdivided into subsections, e.g. into three subsections to identify a beginning subsection, a middle subsection and an end subsection of that section.
  • One or more unique sections of the music genogram can be tagged using one or more of the following types of tags. The individual tags are shown in FIG. 8 for identification purposes. Pro-tags 42 are used to explore similarities. Micro-pro tags 45 are used as a decomposed part of the pro-tag 42 to explore similarities. Hook factor tags 44 are used to trigger novelty. Deceptive tags 41 are used to trigger serendipity. Sudden change tags 43 are used to indicate a sudden change in beat (e.g. a 30% or more change in tempo), in scale (e.g. going from a C scale to a C# scale or going from whispering to screaming) and to trigger similarity. Sudden change tags 43 capture substantial change in the scale/pitch or the tempo of the music track. The pro-tags 42, micro-pro tags 45 and sudden change tags 43 are used to find similarities between the current music track and another music track. The hook tag 44 indicates a unique feature in the music track that catches the ear of the listener. The deceptive tag 41 is typically allocated to intro parts. After listening to an intro part the instrumentals and/or vocals used in the intro part may tempt a user (in anticipation) to expect or explore another music track with a familiar tune. This may result in the user ending up listening to a completely different music track. Each type of tag can be visualized by a unique identification icon as shown in FIG. 8 to enable displaying of the music genogram in the user interface. Instead of the icons shown in FIG. 8 any other icon may be used to visualize the tags.
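  • For illustration only, the music genogram data structure described above could be modelled as follows in Python; the class and field names are hypothetical and the time values are made up:

      from dataclasses import dataclass, field
      from typing import Dict, List, Tuple

      @dataclass
      class SubSegment:
          name: str     # e.g. 'intro', 'main vocals', 'instrumental', 'stanza vocal', 'coda'
          start: float  # start time in seconds
          end: float    # end time in seconds

      @dataclass
      class Tag:
          kind: str                 # 'pro', 'micro-pro', 'hook', 'deceptive' or 'sudden change'
          position: str = "middle"  # 'beginning', 'middle' or 'end' subsection of the section
          note: str = ""            # e.g. 'trumpet + violin'

      @dataclass
      class MusicGenogram:
          track_id: str
          sub_segments: List[SubSegment]
          bands: List[str] = field(default_factory=lambda: ["beat", "tune", "special tune"])
          # A section is the cross section of one band and one sub-segment,
          # keyed here by (band name, sub-segment index).
          tags: Dict[Tuple[str, int], List[Tag]] = field(default_factory=dict)

          def add_tag(self, band, segment_index, tag):
              self.tags.setdefault((band, segment_index), []).append(tag)

      g = MusicGenogram("track-001",
                        [SubSegment("intro", 0, 20), SubSegment("main vocals", 20, 60)])
      g.add_tag("tune", 0, Tag("pro", position="end", note="trumpet + violin"))
      print(len(g.tags))  # -> 1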
  • In the music genogram 30 of FIG. 7 tags are added to various sections of the music track. The music genogram 30 including tags may be displayed in the user interface as shown in FIG. 7. A deceptive tag 41 is located in the intro part 34 of the beat band 31. Seeing this deceptive tag 41 may suggest or force the user to think that there is an intelligence connected to this section. Based on the characteristics of the deceptive tag the user will typically expect a different music track than normal. A pro-tag 42 is located in the intro part 34 of the tune band 32, which is further decomposed into two micro-pro tags 45 each of which is connected to other similar elements. The pro-tag 42 may e.g. be a combination of a trumpet and violin. This combination may have been used subtly or obviously in other music tracks. It is likely that a trumpet playing style or a violin playing style is used in other music tracks. In the example music genogram 30 a trumpet and a violin form two micro-pro-tags 45. Icon 46 indicates that the tag intelligence, the pro-tag 42 in this case, can be expected in the end sub-section. To indicate the beginning sub-section or the middle sub-section icons 48 and 47 may be used, respectively. Another pro-tag 42 is located in the second instrumental part 36 of the beat band 31. This pro-tag 42 is not affiliated to any micro-tag. A hook 44 of the music track is located in the main vocals part 35 of the special tune band 33. It indicates that the vocals of the song are characterized by special tunes such as whistling or gargling or a combination thereof. A sudden change tag 43 is located in the second instrumental part 36 of the special tune band 33. This tempts one to expect that there could be a change in scale e.g. with the chorus effect during that section.
  • Except for hook tags, each tag may have a connection to one or more other music tracks to enable exploration of other music tracks. FIG. 9 shows an example of how this may be displayed in the user interface. The music genogram 30 of FIG. 7 is shown in FIG. 9, together with connections to music genograms of six other music tracks via clickable buttons 51, 52, 53, 54, 55 and 56, respectively. In the example of FIG. 9 the pro-tag 42, on a combination of violin and trumpet, in the intro part of the tune band 32 is connected to three different music tracks, whose music genograms can be selected by clicking one of the top-most buttons 51, 52 and 53, respectively. The pro-tag 42 can be further decomposed into two micro-tags each of which is further connected to three other music tracks. The micro-tag 45 related to the violin is connected to a fourth music track, the music genogram of which can be selected by clicking button 54 next to the violin indicator 61. The micro-tag 45 related to the trumpet is connected to a fifth and sixth music track, whose music genograms can be selected by clicking buttons 55 and 56, respectively, next to the trumpet indicator 62.
  • When exploring other music tracks through the music genogram connections the selected music track may be played or added to the playlist, either automatically or manually. The backend learning engine may be configured to constantly monitor users for the kind of music genogram tags that are explored and recommend the user what more to explore.
  • Thus, in a music genogram the positioning and the connections of the tags in a master music track (i.e. the music track that is currently being played or explored) are shown. A vector format may be used to store the connections in a database. The following vector format is preferred, but any other format may be used:
  • { node-1,
      node-2,
      connection(s) of node-1 and node-2,
      weightage(s) over respective connections }
  • Herein node-1 identifies the master music track and node-2 identifies a slave music track to which a connection is made. The connections information identifies the locations of the connecting tags. Tag coordinates for node-1 include an identity of the master music track, an indication of the section in the music genogram of the master music track and a tag-type identifier. Tag coordinates for node-2 include an identity of the slave music track, an indication of the section in the music genogram of the slave music track and an indication whether a connection is made to a pro-tag only or also to a micro-tag of the pro-tag. Similarity tag(s) of the music genogram may be weighed over the connection(s) with weights e.g. ranging from 1 to 5, where 1 indicates an exact or very obvious match and 5 indicates a very subtle match.
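  • A minimal sketch of how such a connection record could look in Python is given below. The section label 'TB1' for the intro part of the tune band is an assumed naming convention ('BB4' follows the naming used further below for the fourth part of the beat band), and the class names are hypothetical:

      from dataclasses import dataclass

      @dataclass
      class TagCoordinate:
          track_id: str   # identity of the music track
          section: str    # section in the genogram, e.g. 'TB1' (intro part of the tune band)
          tag_type: str   # e.g. 'pro', 'micro-pro', 'sudden change'

      @dataclass
      class GenogramConnection:
          master: TagCoordinate  # node-1: the master music track
          slave: TagCoordinate   # node-2: the slave music track
          weight: int            # 1 = exact or very obvious match ... 5 = very subtle match

      connection = GenogramConnection(TagCoordinate("M2", "TB1", "pro"),
                                      TagCoordinate("S21", "BB4", "pro"),
                                      weight=2)
      print(connection.weight)  # -> 2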
  • The positioning and the affiliated tag connections are typically added manually by a music expert or may be automated with algorithms, and stored in a meta-dataset. The meta-dataset is typically stored in a database and may be formatted as a positional matrix indicating the functional connections between music tracks. The matrix reflects the connections between similar music tracks.
  • Users may be given the possibility to modify a music genogram or add connections, but preferably such modifications are to be moderated by a music expert before storing the results in the meta-dataset. The meta-dataset forms a relationship matrix for the music tracks. Assigning music attributes as a function of the building blocks of a music track using the music genogram structure makes it possible to learn how similarities and novelties in music tracks by other musicians or artists are exploited. A single music attribute can be used subtly or obviously anywhere in a music track and in combination with other music attributes. For example a particular tune could be used in two songs with different/distinguishable construction/build-up of building blocks with respect to their music genogram structure. This helps users to discover how the same tune can be used to create a different effect when used in different constructions of music genograms.
  • To illustrate the use of music genograms three different options to explore music tracks will be described. Consider a pre-populated playlist of 5 different music tracks. These are master music tracks indicated by M1, M2, M3, M4 and M5. Assume that the music tracks will be played in the order M1, M2, M3, M4, M5. M2 is currently being played and the user is offered the music genogram of M2 in the user interface. The music genogram is tagged with three distinct tags which are connected to three distinct slave music tracks S21, S22, and S23. In the indication of the slave music tracks the first digit indicates the master music track and the second digit indicates the slave music track. By moving the mouse pointer over a genogram tag of M2, only one slave genogram (i.e. S21, S22 or S23) will be displayed depending on the selected tag. The master genogram for M2 remains displayed.
  • In the first option music tracks are explored and discovered in the context of the master music track. The objective of the first option is to have the user intervene to play selected slave music tracks with only overlapping connections with respect to the master music track. Following the master genogram of M2, two slave genograms of S22 and S23 are selected to explore. The system keeps track of explored slave music tracks and at this point in time it knows that S21 is still to be explored. Hence, the master genogram of M2 is marked to be incompletely explored. When all the slave connections have been selected for exploring (in this case the three slave genograms S21, S22 and S23) then the genogram exploration of M2 is marked to be completed. After selecting S22 and S23 the playlist is updated from {M2, M3, M4} to {M2, S22, S23, M3, M4}. It is noted that M1 is not in the playlist because it has been played already. M2 is still in the playlist as it is currently being played. M2, S22, S23, M3 and M4 will be played one after the other. Only connecting tags of the slave genogram(s) with respect to the master genogram are displayed.
  • In the second option music tracks are explored and discovered instantaneously in the context of a master music track. The objective of the second option is to give the user instant gratification. The second option is particularly suitable for expert users intending to study exact setting of a music track. M2 is being played and S21 is recalled to discover the connection. S21 will start playing fulfilling the following criteria. The positional section of the connecting tag in S21 is to be located, for example BB4 (fourth part of the beat band). The parts before and after the located tag are identified by adding 1 and −1 to the part number. This gives BB3 and BB5. S21 will start playing for the time range spanning the three parts BB3, BB4 and BB5. Only connecting tags of the slave genogram(s) with respect to the master genogram are displayed.
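  • The positional window used in this second option (the part holding the connecting tag plus the parts before and after it) could be computed as in the following sketch; the sub-segment boundaries of S21 are hypothetical:

      def playback_window(segment_index, segment_boundaries):
          """Return the (start, end) playing times spanning the part before, the part of,
          and the part after the connecting tag (e.g. BB3, BB4 and BB5 for BB4)."""
          first = max(segment_index - 1, 0)
          last = min(segment_index + 1, len(segment_boundaries) - 2)
          return segment_boundaries[first], segment_boundaries[last + 1]

      # Hypothetical sub-segment boundaries (in seconds) of slave track S21.
      boundaries = [0, 20, 60, 90, 120, 150, 180, 210]  # seven sub-segments
      print(playback_window(3, boundaries))  # BB4 is index 3 -> play from 60 s to 150 s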
  • In the third option music tracks are explored and discovered along the long tail of master to slave to slave's slave, etcetera, until the user intervenes. The objective of the third option is to give the user instant gratification. The third option is particularly suitable for savant users. M2 is being played and S21 is recalled to discover the connection. S21 will start playing using the criteria shown for the second option. The difference with the second option is that all tags of S21 will be displayed. In other words, the displayed genogram of S21 not only includes overlapping tags with respect to the master music track M2, but also includes tags overlapping with other music tracks. Non-overlapping tags of S21 connected to the other music tracks will be discovered in this option.
  • The music genogram recommendation system has many advantages. Seeing and exploring descriptive tags of the music genogram has a significant effect on stimulating curiosity in the most logical ways. This offers instant gratification. It offers a novel, contextual, engaging and structured recommendation that renders a truly transparent and steerable navigation through music tracks. The music genogram representation creates a paradigm shift in conventional recommendation systems. The music genogram captures music essence at macro and micro elements of music tracks. The active and transparent recommendation structure lets the user anticipate the type and positioning of the recommended features. It helps the user to discover the recommendations in a systematic way thereby avoiding randomized or hitting-in-the-dark discoveries of music. The recommendation method enables systematic discovery within a huge volume of undiscovered content. It is highly interactive and transparent to the user and the user can have a lot of control over choosing the recommended content. The logically designed interaction with the tags of the music genogram can lead the user to steer towards the long tail of the relevant content. Apart from items of similar features for music track discovery, the music genogram includes novel and serendipitous recommendation elements. This aspect inspires users to gain trust in the recommendation method. Users can muse over the recommendation and grow their learning potential in music. The size of descriptive tags may be directly proportional to the strength of the overlap between the elements. This is useful for predictive anticipation on the functional connection(s) of the connecting tags. A decision tree on the exploration can further be mapped and studied. Each of the descriptive tags may be rated (like/dislike). Once the descriptive tags in the music genogram are explored completely by the user and connecting music tracks are listened to, tags then become grey (or any other indication is given to the tag). This way music genogram discovery can be evaluated as either incomplete or complete. Furthermore, the following metrics are envisaged in the algorithm that tell about the personalized liking of the user on the recommended items. Metric 1: Number of explored items/number of recommended items. This metric indicates the discovering initiative on a quantitative basis. Metric 2: Number of explored items of the similar type/number of recommended items of the similar type. This metric indicates the discovering initiative of a user on a qualitative basis and provides for similarity exploration for subtle similarities and/or obvious similarities, novelty exploration and serendipitous exploration.
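  • The two metrics could be computed, for example, as in the following sketch, where each explored or recommended item is assumed to be an (item id, exploration type) pair:

      def discovery_metrics(explored, recommended):
          """Metric 1: number of explored items / number of recommended items (quantitative).
          Metric 2: the same ratio per exploration type, e.g. 'similarity', 'novelty'
          and 'serendipitous' (qualitative)."""
          metric1 = len(explored) / len(recommended) if recommended else 0.0
          metric2 = {}
          for t in sorted({typ for _, typ in recommended}):
              rec = sum(1 for _, typ in recommended if typ == t)
              exp = sum(1 for _, typ in explored if typ == t)
              metric2[t] = exp / rec if rec else 0.0
          return metric1, metric2

      recommended = [("S21", "similarity"), ("S22", "similarity"), ("S23", "novelty")]
      explored = [("S22", "similarity")]
      print(discovery_metrics(explored, recommended))
      # -> (0.333..., {'novelty': 0.0, 'similarity': 0.5})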
  • It has been described how the music genogram may be used to make a side step from the playlist generated from the mood trajectory. It is to be understood that the master music track for the music exploration does not necessarily come from the playlist. It is possible to explore music using music genograms starting from any music track.
  • The hybrid of the unique features of the described mood trajectory, structured personalization and music genogram enables users to grow to higher levels on the scale of indifferent to casual to enthusiastic to savant.
  • An example of a hybrid usage is given in the following sequence of steps.
  • First a mood trajectory is selected by either building one's own trajectory, following one of the recommended trajectories, selecting trajectories from the expert library or following the trajectories inputted from the shared community.
  • Next, options are explored and selected to refine the trajectory. The options include pre-selecting and loading music tracks per artist/genre on the emotional wheel, adding differential weights along the selective emotions of the trajectory and assigning a probabilistic distribution on personalized, favorite, rated and/or banned songs and/or types of genograms (e.g. 2 stanzas, 3 stanzas, instrumental) and/or favorite music genograms and/or incompletely explored music genograms for the music tracks to get populated.
  • Next, the user listens to the music tracks along the emotion trajectory on the emotion wheel.
  • The user is able to manually tag each of the music tracks populated in the playlist algorithm of the trajectory by banning, favoriting and/or rating a music track. The user is able to personalize each of the music tracks populated in the playlist algorithm of the trajectory. The user is able to see the music genogram of each of the music tracks populated in the playlist algorithm of the trajectory. Furthermore the user is able to discover the tags of the music genogram by exploring the type of the tag and the connecting songs at the macro/micro tags.
  • If the user wants to immediately explore the visual connection(s) as revealed from the music genogram of the master music track, then the user is able to selectively/completely queue the connecting song(s) in the playlist of the trajectory.
  • If the user wants to immediately explore the visual connection(s) as revealed from a slave's music genogram, then the user is able to selectively/completely queue the connecting song(s) in the playlist of the trajectory. This logic can also be extended to the slave's slave's genogram, and so on, in an indefinitely long chain as triggered by the interactive initiative of the user.
  • The user is able to favorite the music genogram. The user is able to rate (like ‘thumbs up’ or ‘thumbs down’) the connecting node(s) of the music track generated in the playlist when they are being played. The user is able to tag the music genogram for a reminder. This feature is useful if the user has incompletely explored the tags revealed in the music genogram and wants to complete the discovery of the tags at a later time or event. The user is able to share the music genogram or only selective tag(s) in the music genogram within the online community.
  • The user typically follows a decision tree when creating mood trajectories (typically non-real time) and exploring individual media items such as music tracks (typically real time).
  • FIG. 10 shows an example of a decomposition of how non-real time user initiatives are mapped to discovering/exploring music. Block 100 indicates the start of the non-real time exploration initiative. Block 101 indicates the start of different trajectory structures. Block 102 indicates building an own trajectory. Block 103 indicates using recommended trajectories. Block 104 indicates using an expert pre-mapped library with pre-stored trajectories. Block 105 indicates using a community induced trajectory. Block 106 indicates pre-selecting and loading music tracks per artist or genre on the emotional wheel. Block 107 indicates adding emotion weights on the different locations of the trajectory. Block 108 indicates assigning probabilistic distribution on selecting personalized and/or type of genogram and/or incomplete and/or completed genogram and/or favorite and/or ban music tracks to get populated as a playlist. Block 109 indicates ‘following which type of recommendation?’. Block 110 indicates a link to a real time initiative as shown in FIG. 11.
  • FIG. 11 shows an example of a decomposition of a real-time initiative per music track on the emotional trajectory. Block 200 indicates the start of the real time initiative. Block 201 indicates explicit tagging. Block 202 indicates a favorite action. Block 203 indicates a rating action. Block 204 indicates a ban action. Block 205 indicates a skip action. Block 206 indicates a personalization initiative. Blocks 207 indicate an addition. Block 208 indicates emotion coordinates on the trajectory, either exact or near exact. Block 209 indicates primary/secondary emotion on the first box. Block 210 indicates a comparison. Block 211 indicates ‘match?’. Block 212 indicates the result ‘close’. Block 213 indicates the result ‘in-between’. Block 214 indicates the result ‘far’. Block 215 indicates original trajectory completion. Block 216 indicates fast track. Block 217 indicates full track. Block 218 indicates continual track. Block 219 indicates disturbed track. Block 220 indicates a favored shock. Block 221 indicates an unfavored shock. Block 222 indicates favored shock as discovery initiative. Block 223 indicates a genogram initiative. Block 224 indicates option 1. Block 225 indicates option 2. Block 226 indicates option 3. Block 227 indicates master-slave. Block 228 indicates master-slave-slave . . . n. Block 229 indicates quantitative metrics. Block 230 indicates qualitative metrics. Block 231 indicates completion. Block 232 indicates in-completion. Block 233 indicates new genograms. Block 234 indicates existing genograms. Block 235 indicates similarity. Block 236 indicates novelty. Block 237 indicates serendipitous. Block 238 indicates on connection. Block 239 indicates community sharing. Block 240 indicates subtle thumbs up or down. Block 241 indicates obvious thumbs up or down.
  • Each of the non-real time and real time initiatives for music discovery/exploration as shown in FIG. 10 and FIG. 11 may be allocated in terms of coordinates on the emotion wheel to enable statistical reports and recommendations. Traditional data regression modeling techniques can be deployed per music track populated in a quadrant of the emotion wheel. These techniques thus map a music track as an input to the respective emotion coordinates and respective extent of the discovery initiatives. Differential weights are assigned on different music discovery initiatives mapped in the dual effort of real time and non-real time. For example, within non-real time initiatives building a trajectory receives more weight on music discovery potential than when following recommended trajectories. A similar logic of assigning differential weights can also be extended to gauge the music discovery potential in real time.
  • FIG. 12 shows an example of a mapping of real time and non-real time initiatives on the emotional wheel. A first cluster 71 of music discovery initiatives represents personalization initiatives of the user. A second cluster 72 of music discovery initiatives represents music genogram discoveries of the user. A third cluster 73 of music discovery initiatives represents the continual trajectory.
  • For the first quadrant of the emotional wheel of FIG. 12 an intuitive expression can be aggregated by meticulously studying the varied initiatives to music discoveries. The expression for the personalized music style of the user in the first quadrant is as follows: personalized music style in first quadrant=% coverage on emotion wheel to personalization initiative (PI)+% coverage on emotion wheel to music genogram discovery initiative (GDI)+% coverage on the emotional wheel to continued trajectory (CT). For example, the personalized music style in the first quadrant=20% PI+55% GDI+50% CT. Note that the coverage of the clusters may overlap over the different sets of music initiatives and may therefore not add up to 100%. A radical change in the number of attempt(s) in the same quadrant and on the same reference of the loaded data may be notified to the user.
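  • As a small illustration only, the expression above could be assembled from the per-cluster coverages as follows; the coverage figures are the example values given above and the function name is hypothetical:

      def personalized_music_style(coverage):
          """Aggregate the per-quadrant music-style expression from the percentage coverage
          of each initiative cluster on the emotional wheel. The clusters may overlap, so
          the percentages need not add up to 100."""
          return " + ".join(f"{value}% {cluster}" for cluster, value in coverage.items())

      print(personalized_music_style({"PI": 20, "GDI": 55, "CT": 50}))
      # -> '20% PI + 55% GDI + 50% CT'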
  • The expression may be expanded or optimized to cover more quadrants on the emotional wheel and/or include other music discovery initiatives. The following examples show four alternative clusters called ‘what inspires me?’, ‘what is working for me?’, ‘what are the possibilities for me?’ and ‘what is missing for me?’. It is to be understood that any other cluster may be defined.
  • The cluster ‘what inspires me?’ is for example a cluster on: personalized music tracks with positive emotions; music genogram initiatives involving types of music genogram, types of music genogram tags and option among the three methods to exploring a genogram; music tracks/artists following the above habits; and high score on cumulative initiative index on discovering music, either real time or non-real time.
  • The cluster ‘what is working for me?’ is, for example, a cluster on: trajectories with all favorable music shocks; types of genograms; the option chosen among the three methods of exploring a genogram; and music tracks/artists following the above habits.
  • The cluster ‘what are the possibilities for me?’ is, for example, a cluster on: trajectories with no music shocks; music tracks/artists following the above habits; and little-tried option(s) among the three methods of exploring a genogram.
  • The cluster ‘what is missing for me?’ is, for example, a cluster on: trajectories which are never or only rarely followed; music tracks/artists following the above habits; unattempted option(s) among the three methods of exploring a genogram; and a low score on the cumulative initiative index for discovering music, either in real time or non-real time (see the illustrative sketch following this list).
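  • Three of the clusters above lend themselves to a simple reading as filters over a log of trajectory records. The sketch below is one hypothetical way to express them; field names such as favorable_shocks and initiative_index are assumptions made for illustration, not terms taken from the specification.

```python
from dataclasses import dataclass


@dataclass
class TrajectoryRecord:
    """Hypothetical summary of one trajectory followed by the user."""
    times_followed: int
    favorable_shocks: int
    unfavorable_shocks: int
    initiative_index: float  # cumulative initiative index on discovering music


def what_is_working_for_me(records):
    # Trajectories on which every music shock was favorable.
    return [r for r in records if r.favorable_shocks > 0 and r.unfavorable_shocks == 0]


def what_are_the_possibilities_for_me(records):
    # Trajectories followed without any music shocks at all.
    return [r for r in records if r.favorable_shocks == 0 and r.unfavorable_shocks == 0]


def what_is_missing_for_me(records):
    # Trajectories never or only rarely followed, with a low initiative index.
    return [r for r in records if r.times_followed <= 1 and r.initiative_index < 0.2]
```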
  • The described hybrid architecture accommodates tangible (incremental or radical) changes in the user's growing music understanding. It dynamically revises the perception of music along the basic emotions as well as combinations of emotions and opposites of basic emotions. It measures and expresses the user's perception change when rating songs at micro levels of emotion, such as carry-over beliefs and current beliefs. It provides users with an intuitive user interface for instant gratification on the micro elements used in building the playlist. It offers the feature of simultaneously generating personalized options while choosing personalized solutions. It captures the learning potential and the rate of change of the learning potential of the listener in real time, and likewise in the experimenting time. It gives an individualized expression of the music listening style, capturing extrinsic and intrinsic habits related to personal traits and to music discoveries. An aggregate metric on one's music style is highly desirable, since a changed music-style expression reflects a change in personal expression and forces one to think about his/her listening habits/style. It optimizes for personalized recommendations whilst offering possibilities to fully discover expert recommendations; this combined aspect of playlist recommendation is highly desirable. It gives an option to track partially discovered/explored songs. It generates recommendations adapting to universal music styles. It fully monitors the reasons for following a continuum (music tracks being played without disturbances) and for shocks on the songs of the playlist. It evaluates the songs populated in the playlist to create positive experiences of music discovery at macro and/or micro levels of a music track.
  • In the foregoing, examples are given of playlists containing music tracks. It is to be understood that the invention is not limited to playlists containing music tracks. Any media item can be included in the playlist, such as a music track, a video or a picture. A playlist may contain only music items, only videos or only pictures. Alternatively, a playlist may contain a mixture of music items, videos and/or pictures.
  • FIG. 13 shows an example of steps performed in a computer-implemented method for creating a playlist. In step 1001 a graphical representation of an emotional wheel is displayed. In step 1002 a first input is received indicating a starting point of a mood trajectory in the emotional wheel. In step 1003 a second input is received indicating an end point of the mood trajectory in the emotional wheel. In step 1004 the mood trajectory is defined by connecting the starting point to the end point via one or more intermediate points. In step 1005 a graphical representation of the mood trajectory is displayed in the graphical representation of the emotional wheel. In step 1006 the media items are selected by searching in the meta-data for emotion characteristics or mood characteristics that match the initial mood, the intermediate moods and the destination mood, respectively. In step 1007 the playlist of media items is created in an order from initial mood to destination mood.
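  • A minimal sketch of steps 1001 to 1007 follows, assuming that each media item's meta-data carries mood coordinates on the emotion wheel and that a mood 'matches' a trajectory point when it lies within an assumed distance of it. The threshold, field names and the use of evenly spaced intermediate points are illustrative assumptions, not requirements of the method.

```python
import math
from dataclasses import dataclass


@dataclass
class MediaItem:
    title: str
    mood_xy: tuple  # (x, y) mood coordinates stored in the item's meta-data


def define_trajectory(start, end, n_intermediate=3):
    """Steps 1002-1004: connect the start point to the end point via intermediate points."""
    points = []
    for i in range(n_intermediate + 2):
        t = i / (n_intermediate + 1)
        points.append((start[0] + t * (end[0] - start[0]),
                       start[1] + t * (end[1] - start[1])))
    return points


def create_playlist(items, trajectory, radius=0.15):
    """Steps 1006-1007: per trajectory point, pick items whose mood lies within
    `radius` of that point, ordered from the initial mood to the destination mood."""
    playlist = []
    for point in trajectory:
        for item in items:
            if math.dist(point, item.mood_xy) <= radius and item not in playlist:
                playlist.append(item)
    return playlist
```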
  • FIG. 14 shows an example of steps performed in a computer-implemented method for creating personalized meta-data for a media item. In step 1011 emotion data is received indicative of a primary emotion or mood or a secondary emotion or mood experienced or felt by a user when listening to or watching the media item. In step 1012 description data is received indicative of a situation indicated by the user to be related to the media item. In step 1013 the emotion data and the description data are stored in the meta-data. In optional step 1014 time data is received indicative of a moment in time indicated by the user to be related to the media item. In step 1015 the time data is stored in the meta-data.
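  • The personalized meta-data of FIG. 14 can be pictured as a small record attached to a media item. The sketch below is a hypothetical encoding of steps 1011 to 1015; the field names and the example values are assumptions made for illustration.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional


@dataclass
class PersonalizedMetaData:
    """Per-user meta-data stored for one media item (steps 1011-1015)."""
    primary_emotion: str                   # primary emotion or mood (step 1011)
    secondary_emotion: Optional[str]       # optional secondary emotion or mood
    situation: str                         # user-described situation (step 1012)
    moment: Optional[datetime] = None      # optional moment in time (step 1014)


# Example: the user tags a track as joyful and links it to a remembered situation.
meta = PersonalizedMetaData(
    primary_emotion="joy",
    secondary_emotion="nostalgia",
    situation="summer road trip with friends",
    moment=datetime(2012, 7, 14),
)
```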
  • FIG. 15 shows an example of steps performed in a computer-implemented method for exploring music tracks. In step 1021 a graphical representation is displayed of a first music genogram of a first music track. In step 1022 a first exploration input is received indicating a selection of one of the tags in one of the sections. If the first exploration input indicates a pro-tag, then in step 1023 a link is displayed to a second music genogram of a second music track. If the pro-tag comprises two or more micro-pro tags, then in step 1024 a graphical representation is displayed of the decomposition of the pro-tag and a link to a third music genogram of a third music track for each of the micro-pro tags. The decision is taken in step 1025.
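  • The exploration of FIG. 15 operates on the music genogram data structure set out in the claims, in which sub-segments and bands cross to form sections, each holding tags. The sketch below is a minimal, assumed encoding of that structure together with the pro-tag decision of steps 1022 to 1025; class and field names are illustrative.

```python
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import Optional


class TagKind(Enum):
    DECEPTIVE = auto()       # surprising effect
    PRO = auto()             # starting point for exploring another genogram
    SUDDEN_CHANGE = auto()   # substantial change in scale, pitch or tempo
    HOOK = auto()            # unique sound feature in the track
    MICRO_PRO = auto()       # decomposition of a pro-tag


@dataclass
class Tag:
    kind: TagKind
    linked_track: Optional[str] = None           # track reached by following a pro-tag
    micro_pro_tags: list = field(default_factory=list)


@dataclass
class Section:
    """Cross section of one sub-segment (e.g. intro) and one band (e.g. beat band)."""
    sub_segment: str
    band: str
    tags: list = field(default_factory=list)


def explore(tag):
    """Steps 1022-1025: a pro-tag links to another genogram; if it decomposes
    into micro-pro tags, each micro-pro tag yields its own link."""
    if tag.kind is not TagKind.PRO:
        return []
    if tag.micro_pro_tags:
        return [t.linked_track for t in tag.micro_pro_tags if t.linked_track]
    return [tag.linked_track] if tag.linked_track else []
```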
  • FIG. 16 shows an example of steps performed in a computer-implemented method for enabling personalized media items recommendations. In step 1031 real time and/or non-real time initiatives data is collected. In step 1032 the real time and/or non-real time initiatives data is mapped on an emotional wheel. In step 1033 an intuitive expression is aggregated to define a personalized music style by meticulously analyzing the real time and/or non-real time initiatives data.
  • It is to be understood that the order of the steps shown in FIG. 13, FIG. 14, FIG. 15 and FIG. 16 can be different than shown.
  • It is to be understood that any feature described in relation to any one embodiment may be used alone, or in combination with other features described, and may also be used in combination with one or more features of any other of the embodiments, or any combination of any other of the embodiments. One embodiment of the invention may be implemented as a program product for use with a computer system. The program(s) of the program product define functions of the embodiments (including the methods described herein) and can be contained on a variety of computer-readable storage media. Illustrative computer-readable storage media include, but are not limited to: (i) non-writable storage media (e.g., read-only memory devices within a computer such as CD-ROM disks readable by a CD-ROM drive, ROM chips or any type of solid-state non-volatile semiconductor memory) on which information is permanently stored; and (ii) writable storage media (e.g., floppy disks within a diskette drive or hard-disk drive or any type of solid-state random-access semiconductor memory or flash memory) on which alterable information is stored. Moreover, the invention is not limited to the embodiments described above, which may be varied within the scope of the accompanying claims.

Claims (12)

1. A computer-implemented method for exploring music tracks, the method comprising:
displaying a graphical representation of a first music genogram of a first music track, the first music genogram having a music genogram data structure for defining the music genogram;
receiving a first exploration input indicating a selection of one of the tags in one of the sections; and
if the first exploration input indicates a pro-tag, displaying a first link to a second music genogram of a second music track, and/or if the pro-tag comprises two or more micro-pro tags, displaying a graphical representation of the decomposition of the pro-tag and a second link to a third music genogram of a third music track for each of the micro-pro tags, wherein the music genogram data structure comprises:
sub-segment data identifying one or more sub-segments to define a decomposition in time of a music track, each sub-segment having a start time and an end time; and
band data identifying one or more bands to define a decomposition for the time length of the music track, wherein a cross section of a sub-segment and a band forms a section, the music genogram data structure further comprising:
tag data identifying one or more tags in one or more sections, wherein a tag is one of:
a deceptive tag to indicate a surprising effect;
a pro-tag to identify a starting point for an exploration, using a graphical user interface, of another music genogram based on similarities;
a sudden change tag to indicate a substantial change in scale, pitch or tempo;
a hook tag to identify a unique sound feature in the music track; or
a micro-pro tag to enable a decomposition of the pro-tag.
2. The method according to claim 1, wherein each section is subdivided in two or more subsections and wherein the tag data identifies the one or more tags in a subsection.
3. The method according to claim 1, wherein each sub-segment is one of an intro part, a main vocals part, an instrumental part, a stanza vocals part or a coda part, and wherein each band is one of a beat band, a tune band or a special tune band.
4. The method according to claim 1, wherein the first music track is one of two or more media items in a playlist, wherein each media item comprises meta-data indicating one or more characteristics of the media item, the method further comprising:
displaying a graphical representation of an emotional wheel, the emotional wheel being a two dimensional Cartesian coordinate system based model for classification of emotions wherein emotions are located at predefined coordinates;
receiving a first input indicating a starting point of a mood trajectory in the emotional wheel, the starting point corresponding to an initial mood at one of the coordinates;
receiving a second input indicating an end point of the mood trajectory in the emotional wheel, the end point corresponding to a destination mood at one of the coordinates;
defining the mood trajectory by connecting the starting point to the end point via one or more intermediate points, the intermediate points corresponding to one or more intermediate moods;
displaying a graphical representation of the mood trajectory in the graphical representation of the emotional wheel;
selecting the media items by searching in the meta-data for emotion characteristics or mood characteristics that match the initial mood, the intermediate moods and the destination mood, respectively; and
creating the playlist of media items in an order from initial mood to destination mood.
5. The method according to claim 4, wherein the starting point, the end point and the intermediate points are predefined to form a predefined mood trajectory, the method further comprising receiving a third input to select the predefined mood trajectory.
6. The method according to claim 4, further comprising:
receiving a fourth input indicating a change in coordinates of one or more of the intermediate points; and
redefining the mood trajectory by connecting the starting point to the end point via the changed intermediate points.
7. The method according to claim 4, further comprising:
calculating a first series of intersecting circular sections along the mood trajectory, wherein each circular section in the first series has a center point on the mood trajectory; and
calculating a second series of circular sections, wherein each circular section in the second series has a center point at an intersection point of two circular sections in the first series,
wherein the first series of circular sections and the second series of circular sections together form an area around the mood trajectory,
or further comprising:
calculating an area around the trajectory using a probabilistic distribution function or a distance function to form a regular or irregularly shaped area around the mood trajectory, and wherein selecting the media items comprises searching in the meta-data for emotion characteristics or mood characteristics that match the initial mood, the destination mood, and moods with coordinates within the area around the trajectory, respectively.
8. The method according to claim 4, further comprising receiving a fifth input indicating one or more media characteristics and/or receiving a sixth input indicating a maximum number of media items in the playlist and/or a maximum time-length of the playlist, and wherein the selecting of the media items is restricted to the one or more further media characteristics and/or the maximum number of media items and/or the maximum time-length.
9. The method according to claim 4, further comprising receiving a seventh input indicating a weight factor for one or more of the starting point, the intermediate points and the end point, and wherein in the playlist the number of media items selected for the starting point, the intermediate point or the end point for which the weight factor is received is dependent on the value of the weight factor.
10. The method according to claim 9, further comprising displaying in the graphical representation of the mood trajectory one or more resized points having a size indicating a probability that media items are selected at the coordinates of the resized points.
11. The method according to claim 4, further comprising storing shock data comprising an indication of a media shock applied by a user to a particular media item in the playlist and an indication of a relative position of the particular media item in the playlist or mood trajectory,
and wherein the selecting of the media items uses the stored shock data to influence the probability that the particular media item is selected.
12. A computer program product comprising software code portions configured for, when run on a computer, executing the method steps according to claim 1.
US13/762,834 2010-08-09 2013-02-08 Music track exploration and playlist creation Abandoned US20140052731A1 (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/EP2010/061550 WO2012019637A1 (en) 2010-08-09 2010-08-09 Visual music playlist creation and visual music track exploration

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2010/061550 Continuation WO2012019637A1 (en) 2010-08-09 2010-08-09 Visual music playlist creation and visual music track exploration

Publications (1)

Publication Number Publication Date
US20140052731A1 true US20140052731A1 (en) 2014-02-20

Family

ID=43299641

Family Applications (2)

Application Number Title Priority Date Filing Date
US14/130,733 Active 2034-01-16 US9747009B2 (en) 2010-08-09 2012-07-04 User interface for creating a playlist
US13/762,834 Abandoned US20140052731A1 (en) 2010-08-09 2013-02-08 Music track exploration and playlist creation

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US14/130,733 Active 2034-01-16 US9747009B2 (en) 2010-08-09 2012-07-04 User interface for creating a playlist

Country Status (2)

Country Link
US (2) US9747009B2 (en)
WO (2) WO2012019637A1 (en)

Families Citing this family (144)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8677377B2 (en) 2005-09-08 2014-03-18 Apple Inc. Method and apparatus for building an intelligent automated assistant
US9318108B2 (en) 2010-01-18 2016-04-19 Apple Inc. Intelligent automated assistant
US8977255B2 (en) 2007-04-03 2015-03-10 Apple Inc. Method and system for operating a multi-function portable electronic device using voice-activation
US10002189B2 (en) 2007-12-20 2018-06-19 Apple Inc. Method and apparatus for searching using an active ontology
US9330720B2 (en) 2008-01-03 2016-05-03 Apple Inc. Methods and apparatus for altering audio output signals
US20100030549A1 (en) 2008-07-31 2010-02-04 Lee Michael M Mobile device having human language translation capability with positional feedback
US8676904B2 (en) 2008-10-02 2014-03-18 Apple Inc. Electronic devices with voice command and contextual data processing capabilities
US20120309363A1 (en) 2011-06-03 2012-12-06 Apple Inc. Triggering notifications associated with tasks items that represent tasks to perform
US10276170B2 (en) 2010-01-18 2019-04-30 Apple Inc. Intelligent automated assistant
US8682667B2 (en) 2010-02-25 2014-03-25 Apple Inc. User profiling for selecting user specific voice input processing information
US9262612B2 (en) 2011-03-21 2016-02-16 Apple Inc. Device access using voice authentication
US10057736B2 (en) 2011-06-03 2018-08-21 Apple Inc. Active transport based notifications
AU2012328143A1 (en) * 2011-10-24 2014-06-12 Omnifone Ltd Method, system and computer program product for navigating digital media content
US10134385B2 (en) 2012-03-02 2018-11-20 Apple Inc. Systems and methods for name pronunciation
US10417037B2 (en) 2012-05-15 2019-09-17 Apple Inc. Systems and methods for integrating third party services with a digital assistant
US9721563B2 (en) 2012-06-08 2017-08-01 Apple Inc. Name recognition system
CN102867028B (en) * 2012-08-28 2015-10-14 北京邮电大学 A kind of emotion mapping method and emotion parse of a sentential form method being applied to search engine
CN113744733B (en) 2013-02-07 2022-10-25 苹果公司 Voice trigger of digital assistant
EP2957087B1 (en) * 2013-02-15 2019-05-08 Nec Corporation Method and system for providing content in content delivery networks
US10652394B2 (en) 2013-03-14 2020-05-12 Apple Inc. System and method for processing voicemail
US10748529B1 (en) 2013-03-15 2020-08-18 Apple Inc. Voice activated device for use with a voice-based digital assistant
US20140281981A1 (en) * 2013-03-15 2014-09-18 Miselu, Inc Enabling music listener feedback
WO2014197335A1 (en) 2013-06-08 2014-12-11 Apple Inc. Interpreting and acting upon commands that involve sharing information with remote devices
US10176167B2 (en) 2013-06-09 2019-01-08 Apple Inc. System and method for inferring user intent from speech inputs
AU2014278592B2 (en) 2013-06-09 2017-09-07 Apple Inc. Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant
CN103309617A (en) * 2013-06-27 2013-09-18 广东威创视讯科技股份有限公司 Method and device for rapidly recognizing gesture
CN105453026A (en) 2013-08-06 2016-03-30 苹果公司 Auto-activating smart responses based on activities from remote devices
US10320413B2 (en) * 2013-11-07 2019-06-11 Telefonaktiebolaget Lm Ericsson (Publ) Methods and devices for vector segmentation for coding
US10296160B2 (en) 2013-12-06 2019-05-21 Apple Inc. Method for extracting salient dialog usage from live data
JP6433660B2 (en) * 2014-01-10 2018-12-05 株式会社Nttぷらら Distribution system, distribution method, information input device, distribution device, and computer program
US9715875B2 (en) 2014-05-30 2017-07-25 Apple Inc. Reducing the need for manual start/end-pointing and trigger phrases
CN110797019B (en) 2014-05-30 2023-08-29 苹果公司 Multi-command single speech input method
US9633004B2 (en) 2014-05-30 2017-04-25 Apple Inc. Better resolution when referencing to concepts
US9430463B2 (en) 2014-05-30 2016-08-30 Apple Inc. Exemplar-based natural language processing
US10170123B2 (en) 2014-05-30 2019-01-01 Apple Inc. Intelligent assistant for home automation
US9338493B2 (en) 2014-06-30 2016-05-10 Apple Inc. Intelligent automated assistant for TV user interactions
KR102217562B1 (en) 2014-07-01 2021-02-19 엘지전자 주식회사 Device and control method for the device
CN104239690B (en) * 2014-08-20 2015-10-28 腾讯科技(深圳)有限公司 Computing method consuming time and device
US9818400B2 (en) 2014-09-11 2017-11-14 Apple Inc. Method and apparatus for discovering trending terms in speech requests
US9668121B2 (en) 2014-09-30 2017-05-30 Apple Inc. Social reminders
WO2016054006A1 (en) * 2014-09-30 2016-04-07 Thomson Licensing Methods and systems for multi-state recommendations
US10074360B2 (en) 2014-09-30 2018-09-11 Apple Inc. Providing an indication of the suitability of speech recognition
US10127911B2 (en) 2014-09-30 2018-11-13 Apple Inc. Speaker identification and unsupervised speaker adaptation techniques
CN105589875B (en) * 2014-10-22 2019-10-25 方正国际软件(北京)有限公司 A kind of method and device that multi-trace is drawn
US10152299B2 (en) 2015-03-06 2018-12-11 Apple Inc. Reducing response latency of intelligent automated assistants
US9886953B2 (en) 2015-03-08 2018-02-06 Apple Inc. Virtual assistant activation
US9721566B2 (en) 2015-03-08 2017-08-01 Apple Inc. Competing devices responding to voice triggers
US10567477B2 (en) 2015-03-08 2020-02-18 Apple Inc. Virtual assistant continuity
US10082939B2 (en) 2015-05-15 2018-09-25 Spotify Ab Playback of media streams at social gatherings
US10719290B2 (en) 2015-05-15 2020-07-21 Spotify Ab Methods and devices for adjustment of the energy level of a played audio stream
US10460227B2 (en) 2015-05-15 2019-10-29 Apple Inc. Virtual assistant in a communication session
US20160335046A1 (en) 2015-05-15 2016-11-17 Spotify Ab Methods and electronic devices for dynamic control of playlists
US10083688B2 (en) 2015-05-27 2018-09-25 Apple Inc. Device voice control for selecting a displayed affordance
US10200824B2 (en) 2015-05-27 2019-02-05 Apple Inc. Systems and methods for proactively identifying and surfacing relevant content on a touch-sensitive device
US9578173B2 (en) 2015-06-05 2017-02-21 Apple Inc. Virtual assistant aided communication with 3rd party service in a communication session
US11025565B2 (en) 2015-06-07 2021-06-01 Apple Inc. Personalized prediction of responses for instant messaging
US20160378747A1 (en) 2015-06-29 2016-12-29 Apple Inc. Virtual assistant for media playback
US10747498B2 (en) 2015-09-08 2020-08-18 Apple Inc. Zero latency digital assistant
US10740384B2 (en) 2015-09-08 2020-08-11 Apple Inc. Intelligent automated assistant for media search and playback
US10331312B2 (en) 2015-09-08 2019-06-25 Apple Inc. Intelligent automated assistant in a media environment
US10671428B2 (en) 2015-09-08 2020-06-02 Apple Inc. Distributed personal assistant
US11587559B2 (en) 2015-09-30 2023-02-21 Apple Inc. Intelligent device identification
US10691473B2 (en) 2015-11-06 2020-06-23 Apple Inc. Intelligent automated assistant in a messaging environment
US10956666B2 (en) 2015-11-09 2021-03-23 Apple Inc. Unconventional virtual assistant interactions
US10049668B2 (en) 2015-12-02 2018-08-14 Apple Inc. Applying neural network language models to weighted finite state transducers for automatic speech recognition
US10223066B2 (en) 2015-12-23 2019-03-05 Apple Inc. Proactive assistance based on dialog communication between devices
US11227589B2 (en) 2016-06-06 2022-01-18 Apple Inc. Intelligent list reading
US10049663B2 (en) * 2016-06-08 2018-08-14 Apple, Inc. Intelligent automated assistant for media exploration
US10586535B2 (en) 2016-06-10 2020-03-10 Apple Inc. Intelligent digital assistant in a multi-tasking environment
DK201670540A1 (en) 2016-06-11 2018-01-08 Apple Inc Application integration with a digital assistant
DK179415B1 (en) 2016-06-11 2018-06-14 Apple Inc Intelligent device arbitration and control
US10860646B2 (en) 2016-08-18 2020-12-08 Spotify Ab Systems, methods, and computer-readable products for track selection
US10474753B2 (en) 2016-09-07 2019-11-12 Apple Inc. Language identification using recurrent neural networks
US10043516B2 (en) 2016-09-23 2018-08-07 Apple Inc. Intelligent automated assistant
US11281993B2 (en) 2016-12-05 2022-03-22 Apple Inc. Model and ensemble compression for metric learning
US11204787B2 (en) 2017-01-09 2021-12-21 Apple Inc. Application integration with a digital assistant
DK201770383A1 (en) 2017-05-09 2018-12-14 Apple Inc. User interface for correcting recognition errors
US10417266B2 (en) 2017-05-09 2019-09-17 Apple Inc. Context-aware ranking of intelligent response suggestions
DK201770439A1 (en) 2017-05-11 2018-12-13 Apple Inc. Offline personal assistant
DK180048B1 (en) 2017-05-11 2020-02-04 Apple Inc. MAINTAINING THE DATA PROTECTION OF PERSONAL INFORMATION
US10726832B2 (en) 2017-05-11 2020-07-28 Apple Inc. Maintaining privacy of personal information
US10395654B2 (en) 2017-05-11 2019-08-27 Apple Inc. Text normalization based on a data-driven learning network
US11301477B2 (en) 2017-05-12 2022-04-12 Apple Inc. Feedback analysis of a digital assistant
DK201770429A1 (en) 2017-05-12 2018-12-14 Apple Inc. Low-latency intelligent automated assistant
DK179496B1 (en) 2017-05-12 2019-01-15 Apple Inc. USER-SPECIFIC Acoustic Models
DK179745B1 (en) 2017-05-12 2019-05-01 Apple Inc. SYNCHRONIZATION AND TASK DELEGATION OF A DIGITAL ASSISTANT
DK201770432A1 (en) 2017-05-15 2018-12-21 Apple Inc. Hierarchical belief states for digital assistants
DK201770411A1 (en) 2017-05-15 2018-12-20 Apple Inc. Multi-modal interfaces
DK201770431A1 (en) 2017-05-15 2018-12-20 Apple Inc. Optimizing dialogue policy decisions for digital assistants using implicit feedback
US10403278B2 (en) 2017-05-16 2019-09-03 Apple Inc. Methods and systems for phonetic matching in digital assistant services
US10303715B2 (en) 2017-05-16 2019-05-28 Apple Inc. Intelligent automated assistant for media exploration
US20180336892A1 (en) 2017-05-16 2018-11-22 Apple Inc. Detecting a trigger of a digital assistant
DK179549B1 (en) 2017-05-16 2019-02-12 Apple Inc. Far-field extension for digital assistant services
US10311144B2 (en) 2017-05-16 2019-06-04 Apple Inc. Emoji word sense disambiguation
US10657328B2 (en) 2017-06-02 2020-05-19 Apple Inc. Multi-task recurrent neural network architecture for efficient morphology handling in neural language modeling
US10445429B2 (en) 2017-09-21 2019-10-15 Apple Inc. Natural language understanding using vocabularies with compressed serialized tries
US10755051B2 (en) 2017-09-29 2020-08-25 Apple Inc. Rule-based natural language processing
USD928801S1 (en) * 2017-10-17 2021-08-24 Sony Corporation Display panel or screen or portion thereof with graphical user interface
US10636424B2 (en) 2017-11-30 2020-04-28 Apple Inc. Multi-turn canned dialog
CA3082744A1 (en) * 2017-12-09 2019-06-13 Shubhangi Mahadeo Jadhav System and method for recommending visual-map based playlists
US10733982B2 (en) 2018-01-08 2020-08-04 Apple Inc. Multi-directional dialog
US10733375B2 (en) 2018-01-31 2020-08-04 Apple Inc. Knowledge-based framework for improving natural language understanding
US10789959B2 (en) 2018-03-02 2020-09-29 Apple Inc. Training speaker recognition models for digital assistants
US10592604B2 (en) 2018-03-12 2020-03-17 Apple Inc. Inverse text normalization for automatic speech recognition
US10818288B2 (en) 2018-03-26 2020-10-27 Apple Inc. Natural assistant interaction
US10909331B2 (en) 2018-03-30 2021-02-02 Apple Inc. Implicit identification of translation payload with neural machine translation
US10928918B2 (en) 2018-05-07 2021-02-23 Apple Inc. Raise to speak
US11145294B2 (en) 2018-05-07 2021-10-12 Apple Inc. Intelligent automated assistant for delivering content from user experiences
US10984780B2 (en) 2018-05-21 2021-04-20 Apple Inc. Global semantic word embeddings using bi-directional recurrent neural networks
DK201870355A1 (en) 2018-06-01 2019-12-16 Apple Inc. Virtual assistant operation in multi-device environments
DK180639B1 (en) 2018-06-01 2021-11-04 Apple Inc DISABILITY OF ATTENTION-ATTENTIVE VIRTUAL ASSISTANT
US11386266B2 (en) 2018-06-01 2022-07-12 Apple Inc. Text correction
US10892996B2 (en) 2018-06-01 2021-01-12 Apple Inc. Variable latency device coordination
DK179822B1 (en) 2018-06-01 2019-07-12 Apple Inc. Voice interaction at a primary device to access call functionality of a companion device
US10504518B1 (en) 2018-06-03 2019-12-10 Apple Inc. Accelerated task performance
US11010561B2 (en) 2018-09-27 2021-05-18 Apple Inc. Sentiment prediction from textual data
US11170166B2 (en) 2018-09-28 2021-11-09 Apple Inc. Neural typographical error modeling via generative adversarial networks
US11462215B2 (en) 2018-09-28 2022-10-04 Apple Inc. Multi-modal inputs for voice commands
US10839159B2 (en) 2018-09-28 2020-11-17 Apple Inc. Named entity normalization in a spoken dialog system
US11475898B2 (en) 2018-10-26 2022-10-18 Apple Inc. Low-latency multi-speaker speech recognition
US11638059B2 (en) 2019-01-04 2023-04-25 Apple Inc. Content playback on multiple devices
US11348573B2 (en) 2019-03-18 2022-05-31 Apple Inc. Multimodality in digital assistant systems
US11423908B2 (en) 2019-05-06 2022-08-23 Apple Inc. Interpreting spoken requests
DK201970509A1 (en) 2019-05-06 2021-01-15 Apple Inc Spoken notifications
US11475884B2 (en) 2019-05-06 2022-10-18 Apple Inc. Reducing digital assistant latency when a language is incorrectly determined
US11307752B2 (en) 2019-05-06 2022-04-19 Apple Inc. User configurable task triggers
US11140099B2 (en) 2019-05-21 2021-10-05 Apple Inc. Providing message response suggestions
US11289073B2 (en) 2019-05-31 2022-03-29 Apple Inc. Device text to speech
DK180129B1 (en) 2019-05-31 2020-06-02 Apple Inc. User activity shortcut suggestions
DK201970510A1 (en) 2019-05-31 2021-02-11 Apple Inc Voice identification in digital assistant systems
US11496600B2 (en) 2019-05-31 2022-11-08 Apple Inc. Remote execution of machine-learned models
US11360641B2 (en) 2019-06-01 2022-06-14 Apple Inc. Increasing the relevance of new available information
US11468890B2 (en) 2019-06-01 2022-10-11 Apple Inc. Methods and user interfaces for voice-based control of electronic devices
US11488406B2 (en) 2019-09-25 2022-11-01 Apple Inc. Text detection using global geometry estimators
CN111428075A (en) * 2020-03-23 2020-07-17 王爽 Method for matching music composition through gesture track input
US11061543B1 (en) 2020-05-11 2021-07-13 Apple Inc. Providing relevant data items based on context
US11038934B1 (en) 2020-05-11 2021-06-15 Apple Inc. Digital assistant hardware abstraction
US11755276B2 (en) 2020-05-12 2023-09-12 Apple Inc. Reducing description length based on confidence
US11461965B2 (en) * 2020-05-29 2022-10-04 Unity Technologies Sf Method for generating splines based on surface intersection constraints in a computer image generation system
US11490204B2 (en) 2020-07-20 2022-11-01 Apple Inc. Multi-device audio adjustment coordination
US11438683B2 (en) 2020-07-21 2022-09-06 Apple Inc. User identification using headphones
US20230315194A1 (en) * 2020-08-24 2023-10-05 Sonos, Inc. Mood detection and/or influence via audio playback devices
US12013825B2 (en) 2022-03-01 2024-06-18 Bank Of America Corporation Predictive value engine for logical map generation and injection
CN114756734B (en) * 2022-03-08 2023-08-22 上海暖禾脑科学技术有限公司 Music piece subsection emotion marking system and method based on machine learning

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030089218A1 (en) * 2000-06-29 2003-05-15 Dan Gang System and method for prediction of musical preferences
US20040237759A1 (en) * 2003-05-30 2004-12-02 Bill David S. Personalizing content
US20090228799A1 (en) * 2008-02-29 2009-09-10 Sony Corporation Method for visualizing audio data

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4588968B2 (en) * 2002-10-01 2010-12-01 パイオニア株式会社 Information recording medium, information recording apparatus and method, information reproducing apparatus and method, information recording / reproducing apparatus and method, computer program for recording or reproduction control, and data structure including control signal
WO2005113099A2 (en) * 2003-05-30 2005-12-01 America Online, Inc. Personalizing content
US7050072B2 (en) * 2003-09-29 2006-05-23 Galleryplayer, Inc. Method and system for specifying a pan path
US20120233164A1 (en) 2008-09-05 2012-09-13 Sourcetone, Llc Music classification system and method
US8281244B2 (en) * 2009-06-11 2012-10-02 Apple Inc. User interface for media playback

Cited By (47)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10372301B2 (en) * 2002-09-16 2019-08-06 Touch Tunes Music Corporation Jukebox with customizable avatar
US11567641B2 (en) * 2002-09-16 2023-01-31 Touchtunes Music Company, Llc Jukebox with customizable avatar
US11314390B2 (en) * 2002-09-16 2022-04-26 Touchtunes Music Corporation Jukebox with customizable avatar
US20210271361A1 (en) * 2002-09-16 2021-09-02 Touchtunes Music Corporation Jukebox with customizable avatar
US11029823B2 (en) * 2002-09-16 2021-06-08 Touchtunes Music Corporation Jukebox with customizable avatar
US10452237B2 (en) * 2002-09-16 2019-10-22 Touchtunes Music Corporation Jukebox with customizable avatar
US20120226706A1 (en) * 2011-03-03 2012-09-06 Samsung Electronics Co. Ltd. System, apparatus and method for sorting music files based on moods
US20140068474A1 (en) * 2011-11-30 2014-03-06 JVC Kenwood Corporation Content selection apparatus, content selection method, and computer readable storage medium
US20140172431A1 (en) * 2012-12-13 2014-06-19 National Chiao Tung University Music playing system and music playing method based on speech emotion recognition
US9570091B2 (en) * 2012-12-13 2017-02-14 National Chiao Tung University Music playing system and music playing method based on speech emotion recognition
US10623480B2 (en) 2013-03-14 2020-04-14 Aperture Investments, Llc Music categorization using rhythm, texture and pitch
US10242097B2 (en) * 2013-03-14 2019-03-26 Aperture Investments, Llc Music selection and organization using rhythm, texture and pitch
US20150220633A1 (en) * 2013-03-14 2015-08-06 Aperture Investments, Llc Music selection and organization using rhythm, texture and pitch
US9875304B2 (en) 2013-03-14 2018-01-23 Aperture Investments, Llc Music selection and organization using audio fingerprints
US10061476B2 (en) 2013-03-14 2018-08-28 Aperture Investments, Llc Systems and methods for identifying, searching, organizing, selecting and distributing content based on mood
US9639871B2 (en) 2013-03-14 2017-05-02 Apperture Investments, Llc Methods and apparatuses for assigning moods to content and searching for moods to select content
US10225328B2 (en) 2013-03-14 2019-03-05 Aperture Investments, Llc Music selection and organization using audio fingerprints
US11271993B2 (en) 2013-03-14 2022-03-08 Aperture Investments, Llc Streaming music categorization using rhythm, texture and pitch
US9489934B2 (en) * 2014-01-23 2016-11-08 National Chiao Tung University Method for selecting music based on face recognition, music selecting system and electronic apparatus
US20150206523A1 (en) * 2014-01-23 2015-07-23 National Chiao Tung University Method for selecting music based on face recognition, music selecting system and electronic apparatus
US20150268800A1 (en) * 2014-03-18 2015-09-24 Timothy Chester O'Konski Method and System for Dynamic Playlist Generation
US10754890B2 (en) 2014-03-18 2020-08-25 Timothy Chester O'Konski Method and system for dynamic playlist generation
US20150281783A1 (en) * 2014-03-18 2015-10-01 Vixs Systems, Inc. Audio/video system with viewer-state based recommendations and methods for use therewith
US11609948B2 (en) 2014-03-27 2023-03-21 Aperture Investments, Llc Music streaming, playlist creation and streaming architecture
US11899713B2 (en) 2014-03-27 2024-02-13 Aperture Investments, Llc Music streaming, playlist creation and streaming architecture
US10163362B2 (en) * 2014-05-13 2018-12-25 Cellrebirth Ltd. Emotion and mood data input, display, and analysis device
US20170092148A1 (en) * 2014-05-13 2017-03-30 Cellrebirth Ltd. Emotion and mood data input, display, and analysis device
US9542616B1 (en) 2015-06-29 2017-01-10 International Business Machines Corporation Determining user preferences for data visualizations
US10599979B2 (en) 2015-09-23 2020-03-24 International Business Machines Corporation Candidate visualization techniques for use with genetic algorithms
US10607139B2 (en) 2015-09-23 2020-03-31 International Business Machines Corporation Candidate visualization techniques for use with genetic algorithms
US11651233B2 (en) 2015-09-23 2023-05-16 International Business Machines Corporation Candidate visualization techniques for use with genetic algorithms
US12032639B2 (en) * 2016-06-09 2024-07-09 Spotify Ab Search media content based upon tempo
US12032620B2 (en) 2016-06-09 2024-07-09 Spotify Ab Identifying media content
US20220067114A1 (en) * 2016-06-09 2022-03-03 Spotify Ab Search media content based upon tempo
US10685035B2 (en) 2016-06-30 2020-06-16 International Business Machines Corporation Determining a collection of data visualizations
US10949444B2 (en) 2016-06-30 2021-03-16 International Business Machines Corporation Determining a collection of data visualizations
CN106844504A (en) * 2016-12-27 2017-06-13 广州酷狗计算机科技有限公司 A kind of method and apparatus for sending the single mark of song
US11093542B2 (en) * 2017-09-28 2021-08-17 International Business Machines Corporation Multimedia object search
US10754614B1 (en) * 2019-09-23 2020-08-25 Sonos, Inc. Mood detection and/or influence via audio playback devices
US11137975B2 (en) * 2019-09-23 2021-10-05 Sonos, Inc. Mood detection and/or influence via audio playback devices
US11709649B2 (en) * 2019-09-23 2023-07-25 Sonos, Inc. Playlist generation based on a desired mental state
WO2021062437A1 (en) * 2019-09-23 2021-04-01 Sonos, Inc. Play list generation using mood detection
EP4111448A4 (en) * 2020-02-24 2023-12-13 Lucid Inc. Method, system, and medium for affective music recommendation and composition
WO2021168563A1 (en) 2020-02-24 2021-09-02 Labbe Aaron Method, system, and medium for affective music recommendation and composition
CN114999611A (en) * 2022-07-29 2022-09-02 支付宝(杭州)信息技术有限公司 Model training and information recommendation method and device
KR102643081B1 (en) * 2022-09-22 2024-03-04 뉴튠(주) Method and apparatus for providing audio mixing interface and playlist service using real-time communication
US20240202236A1 (en) * 2022-12-16 2024-06-20 Hyundai Motor Company Apparatus and method for providing content

Also Published As

Publication number Publication date
US20140164998A1 (en) 2014-06-12
WO2012019637A1 (en) 2012-02-16
US9747009B2 (en) 2017-08-29
WO2012019827A1 (en) 2012-02-16

Similar Documents

Publication Publication Date Title
US20140052731A1 (en) Music track exploration and playlist creation
Bogdanov et al. Semantic audio content-based music recommendation and visualization based on user preference examples
JP4723481B2 (en) Content recommendation device having an array engine
US11969656B2 (en) Dynamic music creation in gaming
CN1249609C (en) System for browsing collection of information units
US7792782B2 (en) Internet music composition application with pattern-combination method
US9830351B2 (en) System and method for generating a playlist from a mood gradient
JP5118283B2 (en) Search user interface with improved accessibility and usability features based on visual metaphor
JP2006526827A (en) Content recommendation device with user feedback
US20090063971A1 (en) Media discovery interface
US8233999B2 (en) System and method for interactive visualization of music properties
US20140229864A1 (en) Method, system and device for content recommendation
KR20080035617A (en) Single action media playlist generation
US8271111B2 (en) Device and method for music playback, and recording medium therefor
WO2014066390A2 (en) Personalized media stations
TW200805129A (en) Information processing apparatus, method and program
JP7512903B2 (en) Sensitivity calculation device, sensitivity calculation method, and program
WO2016040398A1 (en) A method and system to enable user related content preferences intelligently on a headphone
Stober et al. Adaptive music retrieval–a state of the art
Schedl et al. User-aware music retrieval
US20100053192A1 (en) Information Processing Apparatus, Program, and Information Processing Method
JP5348619B2 (en) Information analysis processing system and information analysis processing method
US20200301962A1 (en) System and Method For Recommending Visual-Map Based Playlists
WO2022044646A1 (en) Information processing method, information processing program, and information processing device
JP5397394B2 (en) Exercise support device, exercise support method and program

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION