
Hi 👋, I'm Gowtham

I am passionate about using machine learning to build products, and about deploying those complex models to deliver real value.

  • 🔭 I’m currently working as an AI intern at AUDI

  • 🔭 Worked as a Machine Learning working student at DFKI on Explainable AI

  • 🔭 Currently exploring LLMs to build secure RAG pipelines (Amazon Bedrock, LangChain, Guardrails); a rough sketch follows this list

  • 🔭 Currently exploring various diffusion models (diffusers) for image inpainting

  • 📫 How to reach me: [email protected]
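
A minimal sketch of the kind of secure RAG flow mentioned above, assuming Amazon Bedrock access through boto3; the model ID, retriever, and guardrail step are illustrative placeholders, not the exact setup:

```python
# Hedged sketch: retrieval-augmented generation over Amazon Bedrock.
# The retriever and guardrail check below are simplified placeholders.
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def retrieve(query: str) -> list[str]:
    # Placeholder retriever: in practice this would query a vector store
    # of document embeddings for the top-k relevant passages.
    return ["<retrieved passage 1>", "<retrieved passage 2>"]

def answer(query: str) -> str:
    context = "\n".join(retrieve(query))
    prompt = (f"\n\nHuman: Answer using only this context:\n{context}"
              f"\n\nQuestion: {query}\n\nAssistant:")
    response = bedrock.invoke_model(
        modelId="anthropic.claude-v2",  # assumed model ID
        body=json.dumps({"prompt": prompt, "max_tokens_to_sample": 300}),
        contentType="application/json",
        accept="application/json",
    )
    # Assumed response shape for the legacy Claude text-completion API.
    completion = json.loads(response["body"].read())["completion"]
    # A guardrail layer (output validation, PII filtering) would wrap this
    # call in a full pipeline.
    return completion
```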

Languages and Tools:

aws c docker git jenkins kubernetes linux pandas python pytorch scikit_learn seaborn

My published work:

Enhancing the interpretability and consistency of machine learning models is critical to their deployment in real-world applications. Feature attribution methods have gained significant attention, which provide local explanations of model predictions by attributing importance to individual input features. This study examines the generalization of feature attributions across various deep learning architectures, such as convolutional neural networks (CNNs) and vision transformers. We aim to assess the feasibility of utilizing a feature attribution method as a future detector and examine how these features can be harmonized across multiple models employing distinct architectures but trained on the same data distribution. By exploring this harmonization, we aim to develop a more coherent and optimistic understanding of feature attributions, enhancing the consistency of local explanations across diverse deep-learning models. Our findings highlight the potential for harmonized feature attribution methods to improve interpretability and foster trust in machine learning applications, regardless of the underlying architecture.
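
As an illustration of the kind of per-model local feature attributions discussed above (not the method from the paper itself), Integrated Gradients from Captum can be computed for two architectures trained on the same data; the model choices and input here are placeholders:

```python
# Illustrative only: local feature attributions for two architectures,
# computed with Captum's Integrated Gradients.
import torch
from torchvision import models
from captum.attr import IntegratedGradients

cnn = models.resnet18(weights="IMAGENET1K_V1").eval()
vit = models.vit_b_16(weights="IMAGENET1K_V1").eval()

x = torch.randn(1, 3, 224, 224)  # stand-in for a real image
target = 0                       # class index to explain

attributions = {}
for name, model in {"cnn": cnn, "vit": vit}.items():
    ig = IntegratedGradients(model)
    # The attribution has the same shape as the input:
    # an importance score per pixel and channel.
    attributions[name] = ig.attribute(x, target=target, n_steps=32)

# Comparing these maps across models (e.g. via correlation) is one way to
# probe how well local explanations harmonize across architectures.
```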

Listeners use short interjections, so-called backchannels, to signify attention or express agreement. The automatic analysis of this behavior is of key importance for human conversation analysis and interactive conversational agents. Current state-of-the-art approaches for backchannel analysis from visual behavior make use of two types of features: features based on body pose and features based on facial behavior. At the same time, transformer neural networks have been established as an effective means to fuse input from different data sources, but they have not yet been applied to backchannel analysis. In this work, we conduct a comprehensive evaluation of multi-modal transformer architectures for automatic backchannel analysis based on pose and facial information. We address both the detection of backchannels as well as the task of estimating the agreement expressed in a backchannel. In evaluations on the MultiMediate’22 backchannel detection challenge, we reach 66.4% accuracy with a one-layer transformer architecture, outperforming the previous state of the art. With a two-layer transformer architecture, we furthermore set a new state of the art (0.0604 MSE) on the task of estimating the amount of agreement expressed in a backchannel.
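
A minimal sketch of the kind of multi-modal transformer fusion described above, assuming pre-extracted pose and face feature sequences; the dimensions and pooling are illustrative, not the published architecture:

```python
# Illustrative sketch: fusing pose and face features with a one-layer
# transformer encoder for binary backchannel detection.
import torch
import torch.nn as nn

class BackchannelTransformer(nn.Module):
    def __init__(self, pose_dim=66, face_dim=128, d_model=256, n_heads=4):
        super().__init__()
        self.pose_proj = nn.Linear(pose_dim, d_model)
        self.face_proj = nn.Linear(face_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=1)
        self.head = nn.Linear(d_model, 1)  # backchannel vs. no backchannel

    def forward(self, pose, face):
        # pose: (batch, time, pose_dim); face: (batch, time, face_dim)
        tokens = torch.cat([self.pose_proj(pose), self.face_proj(face)], dim=1)
        fused = self.encoder(tokens)
        return self.head(fused.mean(dim=1))  # pooled sequence -> logit

model = BackchannelTransformer()
logit = model(torch.randn(2, 32, 66), torch.randn(2, 32, 128))
```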

  • Backdoor Attack against NLP models with Robustness-Aware Perturbation defense

    Backdoor attacks aim to embed a hidden backdoor into deep neural networks (DNNs) so that the attacked model performs well on benign samples, whereas its predictions are maliciously changed when the hidden backdoor is activated by an attacker-defined trigger. This threat can arise when the training process is not fully controlled, for example when training on third-party datasets or adopting third-party models. There has been extensive research on defending against such backdoor attacks; one approach is the robustness-aware perturbation-based defense, which exploits the large robustness gap between poisoned and clean samples. In our work, we break this defense by controlling the robustness gap between poisoned and clean samples using an adversarial training step (sketched below). Presentation
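
A rough sketch of the adversarial training step mentioned above, shown on continuous (embedding-space) inputs with a simple FGSM-style perturbation; the model and the mixing of poisoned samples are placeholders rather than the exact attack:

```python
# Rough sketch: an adversarial training step that reduces the robustness gap
# between poisoned and clean samples, so a defense that thresholds on that
# gap no longer separates the two.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, inputs, labels, eps=0.01):
    # For NLP models the perturbation would be applied to word embeddings.
    inputs = inputs.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(inputs), labels)
    grad = torch.autograd.grad(loss, inputs)[0]
    return (inputs + eps * grad.sign()).detach()

def training_step(model, optimizer, clean_x, clean_y, poisoned_x, target_y):
    # Mix clean and poisoned (trigger-embedded) samples and train on their
    # adversarially perturbed versions, so that both groups behave similarly
    # under the small perturbations the defense relies on.
    x = torch.cat([clean_x, poisoned_x])
    y = torch.cat([clean_y, target_y])
    x_adv = fgsm_perturb(model, x, y)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```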

Featured Projects

| Projects | Tech Stack |
| --- | --- |
| Back channel Detection | Python, Transformers, Pytorch |
| Hotel Recommender | Python, Pytorch, Cohere Multi-Language Model |
| Kaggle Hubmap semantic segmentation | Python, Pytorch, Deep Neural Networks |
| Kaggle Disaster Tweets | Python, Pytorch, BERT, Pandas, Data cleaning, Data Processing |
| Evasion Attack on Images | Python, Pytorch, Deep Learning |
| Question and Answer Kaggle NLP | NLP, Pytorch, Deep Learning |

Blogs

| Blogs |
| --- |
| Calibration Before Few-Shot Learning of LLMs |


Popular repositories

  1. HotelRecommendation (Python)

  2. Introduction-to-Structured-Query-Language-SQL

  3. examples (forked from prraoo/examples, Python)

     A set of examples around pytorch in Vision, Text, Reinforcement Learning, etc.

  4. MLnotes

  5. AIND-NLP (forked from nik1806/AIND-NLP, Jupyter Notebook)

     Coding exercises for the Natural Language Processing concentration, part of Udacity's AIND program.

  6. mrscc (forked from rupertmenneer/mrscc, Jupyter Notebook)

     Using the Hugging Face Trainer I use the RoBERTa model to compete in the Microsoft Research Sentence Completion Challenge to achieve an accuracy of 82.6. A CBOW model is also implemented on the MRS…