- IISc Bangalore / TU München
- 127.0.0.1 ✧ Kolkata ✧ Guwahati
- neilblaze.live (UTC +05:30)
- in/Neilblaze
- @Neilzblaze007
- @[email protected]
- @Neilblaze
Starred repositories
An interactive TLS-capable intercepting HTTP proxy for penetration testers and software developers.
[CVPR 2024] DeepCache: Accelerating Diffusion Models for Free
JavaScript Gaussian Splatting library.
ICLR 2024 Spotlight: curation/training code, metadata, distribution, and pre-trained models for MetaCLIP; CVPR 2024: MoDE: CLIP Data Experts via Clustering
Visualize streams of multimodal data. Free, fast, easy to use, and simple to integrate. Built in Rust.
[SIGGRAPH Asia 2022] VideoReTalking: Audio-based Lip Synchronization for Talking Head Video Editing In the Wild
Intuitive scientific computing with dimension types for Jax, PyTorch, TensorFlow & NumPy
Universal and Transferable Attacks on Aligned Language Models
[CVPR 2024] DisCo: Referring Human Dance Generation in Real World
A multi-backend implementation of the Keras API, with support for TensorFlow, JAX, and PyTorch.
FunQA benchmarks funny, creative, and magic videos for challenging tasks including timestamp localization, video description, reasoning, and beyond.
📘 Notion easy export converts your notion documents & databases into an ebook.
[CVPR 2023] Unifying Short and Long-Term Tracking with Graph Hierarchies
Gorilla: Training and Evaluating LLMs for Function Calls (Tool Calls)
Chat with NeRF enables users to interact with a NeRF model by typing in natural language.
🦦 Otter, a multi-modal model based on OpenFlamingo (open-sourced version of DeepMind's Flamingo), trained on MIMIC-IT and showcasing improved instruction-following and in-context learning ability.
Real time transcription with OpenAI Whisper.
Implementation of the conditionally routed attention in the CoLT5 architecture, in PyTorch
PyTorch code for BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation
[NeurIPS 2023] Official implementation of the paper "Segment Everything Everywhere All at Once"
Code release for "Avatars Grow Legs: Generating Smooth Human Motion from Sparse Tracking Inputs with Diffusion Model", CVPR 2023
The PASS dataset: pretrained models and how to get the data
Implementation of Recurrent Memory Transformer, NeurIPS 2022 paper, in PyTorch
Track-Anything is a flexible and interactive tool for video object tracking and segmentation, based on Segment Anything, XMem, and E2FGVI.