
Sequence to Sequence Encoder-Decoder Model for Machine Transliteration

In an increasingly globalized world, language differences remain a significant barrier to accessing information. This project addresses the difficulty of reading content written in a foreign script by developing and comparing sequence-to-sequence encoder-decoder models, with and without an attention mechanism, for machine transliteration. The models target transliteration between English and Hindi and are trained and evaluated on a curated dataset built for this purpose. Both variants are based on an LSTM architecture; preliminary findings suggest that adding an attention mechanism improves transliteration accuracy by handling long-range dependencies in the input sequence more effectively.
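The core idea behind the attention variant can be sketched independently of any framework: at each decoding step, the decoder scores every encoder hidden state against its own current state, normalises the scores with a softmax, and consumes the resulting weighted context vector. The sketch below is a minimal, hedged illustration of dot-product attention in plain Python (the repository's actual model, hidden sizes, and scoring function may differ; the vectors here are toy values):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def attention(decoder_state, encoder_states):
    """Dot-product attention: score each encoder state against the current
    decoder state, normalise with softmax, and return the weighted context
    vector along with the attention weights."""
    scores = [dot(decoder_state, h) for h in encoder_states]
    weights = softmax(scores)
    hidden = len(decoder_state)
    context = [sum(w * h[i] for w, h in zip(weights, encoder_states))
               for i in range(hidden)]
    return context, weights

# Toy example: three encoder states (one per source character) with a
# 2-dimensional hidden size, and one decoder state.
enc = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
dec = [1.0, 0.0]
ctx, w = attention(dec, enc)
```

In a full model, `ctx` would be concatenated with the decoder state (or its input) before predicting the next output character, which is what lets the decoder focus on the relevant part of a long source word instead of relying on a single fixed-length encoding.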
