Implementation of Vision Transformer, a simple way to achieve SOTA in vision classification with only a single transformer encoder, in Pytorch
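For orientation, here is a minimal PyTorch sketch of the idea behind such an implementation (patchify the image, linearly embed the patches, prepend a class token, run a standard transformer encoder, classify from the class token). It is illustrative only, not this repository's actual API, and all sizes are placeholder values:

```python
# A minimal sketch of the Vision Transformer idea (not this repo's API):
# split the image into patches, linearly embed them, prepend a class
# token, run a transformer encoder, and classify from the class token.
import torch
import torch.nn as nn

class TinyViT(nn.Module):
    def __init__(self, image_size=32, patch_size=8, dim=64, depth=2,
                 heads=4, num_classes=10):
        super().__init__()
        num_patches = (image_size // patch_size) ** 2
        # A strided conv both patchifies and linearly embeds in one step.
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch_size,
                                     stride=patch_size)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):                      # x: (B, 3, H, W)
        x = self.patch_embed(x)                # (B, dim, H/p, W/p)
        x = x.flatten(2).transpose(1, 2)       # (B, num_patches, dim)
        cls = self.cls_token.expand(x.size(0), -1, -1)
        x = torch.cat([cls, x], dim=1) + self.pos_embed
        x = self.encoder(x)
        return self.head(x[:, 0])              # classify from class token

logits = TinyViT()(torch.randn(2, 3, 32, 32))  # (2, 10)
```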
All kinds of text classification models, and more, with deep learning
A TensorFlow Implementation of the Transformer: Attention Is All You Need
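The core of that paper is scaled dot-product attention. A minimal sketch of the formula softmax(QK^T / sqrt(d_k))V, written in PyTorch (like most entries here) rather than the repo's TensorFlow:

```python
# A minimal sketch of scaled dot-product attention from
# "Attention Is All You Need": softmax(QK^T / sqrt(d_k)) V.
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v, mask=None):
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d_k ** 0.5   # (..., T_q, T_k)
    if mask is not None:
        scores = scores.masked_fill(mask == 0, float("-inf"))
    weights = F.softmax(scores, dim=-1)             # attention distribution
    return weights @ v                              # weighted sum of values

q = k = v = torch.randn(2, 5, 16)                   # (batch, seq, d_k)
out = scaled_dot_product_attention(q, k, v)         # (2, 5, 16)
```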
RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best of RNN and transformer - great performance, fast inference, saves VRAM, fast training, "infinite" ctx_len, and free sentence embedding.
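A numerically naive sketch of the per-channel WKV recurrence from the RWKV paper illustrates why inference needs only constant-size state: two running sums replace attention over the full history. The decay w and current-token bonus u are learned per channel in the real model; the scalar values below are placeholders:

```python
# Naive per-channel WKV recurrence (illustrative, not the real kernel):
# two decayed running sums (a, b) stand in for attention over history.
import math

def wkv(ks, vs, w=0.5, u=0.0):     # w: decay, u: bonus for current token
    a, b, outs = 0.0, 0.0, []
    for k_t, v_t in zip(ks, vs):
        outs.append((a + math.exp(u + k_t) * v_t) /
                    (b + math.exp(u + k_t)))
        a = math.exp(-w) * a + math.exp(k_t) * v_t  # decayed sum of e^k * v
        b = math.exp(-w) * b + math.exp(k_t)        # decayed sum of e^k
    return outs

print(wkv(ks=[0.1, -0.2, 0.3], vs=[1.0, 2.0, 3.0]))
```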
A collection of important graph embedding, classification and representation learning papers with implementations.
Show, Attend, and Tell | a PyTorch Tutorial to Image Captioning
Pytorch implementation of the Graph Attention Network model by Veličković et al. (2017, https://arxiv.org/abs/1710.10903)
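For reference, a minimal dense sketch of a single attention head as defined in that paper: e_ij = LeakyReLU(a^T [W h_i || W h_j]), normalized over each node's neighbors with a masked softmax. The adjacency-mask approach below is illustrative, not this repo's code:

```python
# One dense GAT attention head (illustrative sketch).
import torch
import torch.nn as nn
import torch.nn.functional as F

class GATHead(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)
        self.a = nn.Linear(2 * out_dim, 1, bias=False)

    def forward(self, h, adj):                 # h: (N, in_dim), adj: (N, N)
        z = self.W(h)                          # (N, out_dim)
        n = z.size(0)
        # pairs[i, j] = [z_i || z_j] for every node pair.
        pairs = torch.cat([z.unsqueeze(1).expand(n, n, -1),
                           z.unsqueeze(0).expand(n, n, -1)], dim=-1)
        e = F.leaky_relu(self.a(pairs).squeeze(-1), negative_slope=0.2)
        e = e.masked_fill(adj == 0, float("-inf"))  # neighbors only
        alpha = torch.softmax(e, dim=-1)            # attention coefficients
        return alpha @ z                            # aggregate neighbors

adj = torch.tensor([[1., 1., 0.], [1., 1., 1.], [0., 1., 1.]])  # with self-loops
out = GATHead(8, 4)(torch.randn(3, 8), adj)                     # (3, 4)
```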
Keras Attention Layer (Luong and Bahdanau scores).
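A sketch of the two score functions the layer is named for, written here in PyTorch rather than Keras; the module names are illustrative, not the layer's API:

```python
# Luong's multiplicative score q^T W k vs. Bahdanau's additive
# score v^T tanh(W_q q + W_k k), both over a set of keys.
import torch
import torch.nn as nn

class LuongScore(nn.Module):                     # "general" multiplicative form
    def __init__(self, dim):
        super().__init__()
        self.W = nn.Linear(dim, dim, bias=False)

    def forward(self, query, keys):              # query: (B, D), keys: (B, T, D)
        return torch.bmm(keys, self.W(query).unsqueeze(-1)).squeeze(-1)  # (B, T)

class BahdanauScore(nn.Module):                  # additive form
    def __init__(self, dim):
        super().__init__()
        self.Wq = nn.Linear(dim, dim, bias=False)
        self.Wk = nn.Linear(dim, dim, bias=False)
        self.v = nn.Linear(dim, 1, bias=False)

    def forward(self, query, keys):
        s = torch.tanh(self.Wq(query).unsqueeze(1) + self.Wk(keys))  # (B, T, D)
        return self.v(s).squeeze(-1)                                 # (B, T)

q, ks = torch.randn(2, 16), torch.randn(2, 7, 16)
weights = torch.softmax(BahdanauScore(16)(q, ks), dim=-1)  # attention weights
context = torch.bmm(weights.unsqueeze(1), ks).squeeze(1)   # (2, 16) context
```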
Implementation / replication of DALL-E, OpenAI's Text to Image Transformer, in Pytorch
Graph Attention Networks (https://arxiv.org/abs/1710.10903)
Automatic Speech Recognition (ASR), Speaker Verification, Speech Synthesis, Text-to-Speech (TTS), Language Modelling, Singing Voice Synthesis (SVS), Voice Conversion (VC)
A comprehensive paper list on Vision Transformers/Attention, including papers, code, and related websites
This repository contains my full work and notes on Coursera's NLP Specialization (Natural Language Processing), taught by Younes Bensouda Mourri and Łukasz Kaiser and offered by deeplearning.ai
A chatbot for the finance and judicial domains (with some small-talk ability). Its main modules include information extraction, NLU, NLG, and a knowledge graph; the front-end display is integrated via Django, and RESTful interfaces for the nlp and kg modules have already been packaged
A concise but complete full-attention transformer with a set of promising experimental features from various papers
Text classifier for Hierarchical Attention Networks for Document Classification
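A minimal sketch of the word-level attention pooling such classifiers build on (Yang et al., 2016): a learned context vector scores each word, and the sentence vector is the attention-weighted sum. Shapes and names here are illustrative:

```python
# Word-level attention pooling from Hierarchical Attention Networks:
# u_it = tanh(W h_it + b), alpha_it = softmax(u_it^T u_w), s = sum(alpha * h).
import torch
import torch.nn as nn

class WordAttention(nn.Module):
    def __init__(self, hidden_dim):
        super().__init__()
        self.proj = nn.Linear(hidden_dim, hidden_dim)        # W, b
        self.context = nn.Linear(hidden_dim, 1, bias=False)  # u_w

    def forward(self, h):                       # h: (B, T, H) word encodings
        u = torch.tanh(self.proj(h))            # (B, T, H)
        alpha = torch.softmax(self.context(u).squeeze(-1), dim=-1)  # (B, T)
        return (alpha.unsqueeze(-1) * h).sum(dim=1)   # sentence vector (B, H)

sent = WordAttention(32)(torch.randn(4, 12, 32))      # (4, 32)
```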
Sequence-to-sequence framework with a focus on Neural Machine Translation based on PyTorch
TensorFlow Implementation of "Show, Attend and Tell"
My implementation of the original GAT paper (Veličković et al.). I've additionally included the playground.py file for visualizing the Cora dataset, GAT embeddings, an attention mechanism, and entropy histograms. I've included support for both Cora (transductive) and PPI (inductive) examples!
To eventually become an unofficial Pytorch implementation / replication of Alphafold2, as details of the architecture get released