Senac et al., 2017 - Google Patents
Music feature maps with convolutional neural networks for music genre classification
- Document ID
- 87020133354800939
- Author
- Senac C
- Pellegrini T
- Mouret F
- Pinquier J
- Publication year
- 2017
- Publication venue
- Proceedings of the 15th international workshop on content-based multimedia indexing
Snippet
Nowadays, deep learning is increasingly used for Music Genre Classification, in particular Convolutional Neural Networks (CNNs) that take as input a spectrogram treated as an image in which different types of structure are sought. But, facing the criticism relating to the …
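To illustrate the approach described in the snippet, the sketch below shows a small CNN that classifies a log-mel spectrogram treated as a single-channel image. It is not the architecture from Senac et al.; the layer sizes, the 10-genre output (as in GTZAN), and the 128 mel-band by 256 frame input shape are illustrative assumptions, and PyTorch is used only for convenience.

```python
# Minimal sketch: a CNN over a log-mel spectrogram "image" for genre classification.
# This is NOT the network from Senac et al. (2017); shapes and layer sizes are
# illustrative assumptions (10 genres, 128 mel bands, 256 time frames).
import torch
import torch.nn as nn

class SpectrogramCNN(nn.Module):
    def __init__(self, n_genres: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # 1 input channel: the spectrogram
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d((4, 4)),                # fixed-size summary, any clip length
        )
        self.classifier = nn.Linear(32 * 4 * 4, n_genres)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, n_mels, n_frames), e.g. a log-mel spectrogram per clip
        return self.classifier(self.features(x).flatten(1))

# A batch of 8 clips with 128 mel bands and 256 frames -> per-genre logits.
logits = SpectrogramCNN()(torch.randn(8, 1, 128, 256))
print(logits.shape)  # torch.Size([8, 10])
```

In practice the spectrogram would be computed from audio beforehand (for example with a library such as librosa or torchaudio), and the network would be trained with a cross-entropy loss over genre labels.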
Classifications
- G—PHYSICS
  - G06—COMPUTING; CALCULATING; COUNTING
    - G06F—ELECTRICAL DIGITAL DATA PROCESSING
      - G06F7/00—Methods or arrangements for processing data by operating upon the order or content of the data handled
      - G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
        - G06F17/30—Information retrieval; Database structures therefor; File system structures therefor
          - G06F17/30017—Multimedia data retrieval; Retrieval of more than one type of audiovisual media
            - G06F17/30023—Querying
              - G06F17/30029—Querying by filtering; by personalisation, e.g. querying making use of user profiles
          - G06F17/3074—Audio data retrieval
            - G06F17/30743—Audio data retrieval using features automatically derived from the audio content, e.g. descriptors, fingerprints, signatures, MEP-cepstral coefficients, musical score, tempo
            - G06F17/30749—Audio data retrieval using information manually generated or using information not derived from the audio data, e.g. title and artist information, time and location information, usage information, user ratings
            - G06F17/30755—Query formulation specially adapted for audio data retrieval
              - G06F17/30758—Query by example, e.g. query by humming
  - G10—MUSICAL INSTRUMENTS; ACOUSTICS
    - G10H—ELECTROPHONIC MUSICAL INSTRUMENTS
      - G10H1/00—Details of electrophonic musical instruments
        - G10H1/36—Accompaniment arrangements
      - G10H2210/00—Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
        - G10H2210/031—Musical analysis, i.e. isolation, extraction or identification of musical elements or musical parameters from a raw acoustic signal or from an encoded audio signal
      - G10H2240/00—Data organisation or data communication aspects, specifically adapted for electrophonic musical tools or instruments
        - G10H2240/121—Musical libraries, i.e. musical databases indexed by musical parameters, wavetables, indexing schemes using musical parameters, musical rule bases or knowledge bases, e.g. for automatic composing methods
          - G10H2240/131—Library retrieval, i.e. searching a database or selecting a specific musical piece, segment, pattern, rule or parameter set
    - G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
      - G10L17/00—Speaker identification or verification
  - G11—INFORMATION STORAGE
    - G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
      - G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
        - G11B27/10—Indexing; Addressing; Timing or synchronising; Measuring tape travel
          - G11B27/19—Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Senac et al. | | Music feature maps with convolutional neural networks for music genre classification |
Hizlisoy et al. | | Music emotion recognition using convolutional long short term memory deep neural networks |
Malik et al. | | Stacked convolutional and recurrent neural networks for music emotion recognition |
Levy et al. | | Music information retrieval using social tags and audio |
Lehner et al. | | A low-latency, real-time-capable singing voice detection method with LSTM recurrent neural networks |
Aljanaki et al. | | A data-driven approach to mid-level perceptual musical feature modeling |
Soleymani et al. | | Content-based music recommendation using underlying music preference structure |
Lim et al. | | Music genre/mood classification using a feature-based modulation spectrum |
Hoffmann et al. | | Music recommendation system |
Huang et al. | | Pop music highlighter: Marking the emotion keypoints |
Kostek et al. | | Creating a reliable music discovery and recommendation system |
Bogdanov et al. | | From low-level to high-level: Comparative study of music similarity measures |
Pachet et al. | | Improving multilabel analysis of music titles: A large-scale validation of the correction approach |
Singhal et al. | | Classification of Music Genres using Feature Selection and Hyperparameter Tuning |
Jiang et al. | | Using k-means clustering to classify protest songs based on conceptual and descriptive audio features |
WO2015170126A1 (en) | | Methods, systems and computer program products for identifying commonalities of rhythm between disparate musical tracks and using that information to make music recommendations |
Ujlambkar et al. | | Mood classification of Indian popular music |
Arora et al. | | Mood based music player |
Gupta | | Deep audio embeddings and attention based music emotion recognition |
Chi et al. | | The power of words: Enhancing music mood estimation with textual input of lyrics |
Bhalke et al. | | Automatic genre classification using fractional fourier transform based mel frequency cepstral coefficient and timbral features |
Shao et al. | | Automatic summarization of music videos |
Kartikay et al. | | Classification of music into moods using musical features |
Rajan et al. | | Multi-channel CNN-Based Rāga Recognition in Carnatic Music Using Sequential Aggregation Strategy |
Cho et al. | | Effective music genre classification using late fusion convolutional neural network with multiple spectral features |