Starred repositories
The repository provides code for running inference with the Segment Anything Model (SAM), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.
CLIP (Contrastive Language-Image Pretraining): predicts the most relevant text snippet given an image
Chinese edition of 100-Days-Of-ML-Code
Foundational Models for State-of-the-Art Speech and Text Translation
An introductory LLM tutorial for developers: the Chinese edition of Andrew Ng's large language model course series
AISystem covers AI systems: full-stack foundational technologies including AI chips, AI compilers, and AI inference and training frameworks
A Chinese reinforcement learning tutorial (the "Mushroom Book" 🍄); read online at https://datawhalechina.github.io/easy-rl/
PyTorch code and models for the DINOv2 self-supervised learning method.
Inpaint anything using Segment Anything and inpainting models.
✔ (Completed) Comprehensive deep learning notes covering Tudui's PyTorch tutorial, Mu Li's "Dive into Deep Learning", and Andrew Ng's deep learning course
Tutorials for creating and using ONNX models
Official Repository for "Eureka: Human-Level Reward Design via Coding Large Language Models" (ICLR 2024)
An introductory PyTorch tutorial; read online at https://datawhalechina.github.io/thorough-pytorch/
📚 Jupyter notebook tutorials for OpenVINO™
Metric depth estimation from a single image
EfficientSAM: Leveraged Masked Image Pretraining for Efficient Segment Anything
🔥 3D point cloud object detection & semantic segmentation (deep learning): SOTA methods, code, papers, datasets, and more
Large dataset of hand-object contact, hand and object pose, and 2.9M RGB-D grasp images.
A Tutorial on Manipulator Differential Kinematics
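The differential-kinematics idea behind that tutorial can be shown in a few lines: the geometric Jacobian maps joint velocities to end-effector velocities. A minimal sketch for a planar 2-link arm (link lengths and function names are illustrative, not taken from the tutorial):

```python
import numpy as np

def planar_2link_jacobian(q, l1=1.0, l2=1.0):
    """Geometric Jacobian of a planar 2-link arm with joint angles q = [q1, q2].

    Maps joint velocities [dq1, dq2] to end-effector velocity [dx, dy].
    """
    q1, q2 = q
    s1, s12 = np.sin(q1), np.sin(q1 + q2)
    c1, c12 = np.cos(q1), np.cos(q1 + q2)
    return np.array([
        [-l1 * s1 - l2 * s12, -l2 * s12],
        [ l1 * c1 + l2 * c12,  l2 * c12],
    ])

# Resolved-rate motion: joint velocities that realize a desired tip velocity
q = np.array([0.3, 0.6])
v_des = np.array([0.1, 0.0])
dq = np.linalg.solve(planar_2link_jacobian(q), v_des)
```

Near a singular configuration (here, q2 = 0) the Jacobian loses rank and `solve` fails; practical controllers use a damped least-squares pseudoinverse instead.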
Maximum Entropy and Maximum Causal Entropy Inverse Reinforcement Learning Implementation in Python
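The core maximum-entropy IRL loop (in the style of Ziebart et al. 2008) fits in a short script: soft value iteration gives a stochastic policy under the current reward, and the gradient is expert visitation counts minus the learner's expected counts. A toy sketch on a 5-state chain with one-hot state features (the MDP and all names are illustrative, not from the linked implementation):

```python
import numpy as np

n_states, n_actions, horizon = 5, 2, 10

def step(s, a):
    # Deterministic chain: action 0 moves left, action 1 moves right
    return max(s - 1, 0) if a == 0 else min(s + 1, n_states - 1)

# Transition table P[s, a] -> next state
P = np.array([[step(s, a) for a in range(n_actions)] for s in range(n_states)])

def soft_policy(reward):
    """Soft value iteration: pi(a|s) ∝ exp(Q(s,a)), V = logsumexp_a Q(s,a)."""
    V = np.zeros(n_states)
    for _ in range(100):
        Q = reward[:, None] + V[P]          # Q[s, a] = r(s) + V(s')
        V = np.logaddexp(Q[:, 0], Q[:, 1])
    return np.exp(Q - V[:, None])

def visitation(policy, start=0):
    """Expected state-visitation counts over the horizon."""
    d = np.zeros(n_states); d[start] = 1.0
    total = np.zeros(n_states)
    for _ in range(horizon):
        total += d
        d_next = np.zeros(n_states)
        for s in range(n_states):
            for a in range(n_actions):
                d_next[P[s, a]] += d[s] * policy[s, a]
        d = d_next
    return total

# "Expert" always moves right, so it accumulates visits at state 4
expert_sv = visitation(np.tile([[0.0, 1.0]], (n_states, 1)))

# Gradient ascent on per-state reward weights:
# grad = expert feature counts - learner expected feature counts
w = np.zeros(n_states)
for _ in range(200):
    w += 0.1 * (expert_sv - visitation(soft_policy(w)))
```

After training, the learned reward peaks at the chain's right end and the soft policy reproduces the expert's rightward behavior.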
Tutorial on how to get started with MuJoCo Simulation Platform. MuJoCo stands for Multi-Joint dynamics with Contact. It was acquired and made freely available by DeepMind in October 2021, and open …
[CVPR 2023 Highlight] GAPartNet: Cross-Category Domain-Generalizable Object Perception and Manipulation via Generalizable and Actionable Parts.
Analysis and processing of the Cornell Grasp Dataset
Python implementation of an MPC controller for path tracking
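The receding-horizon idea behind such a controller can be sketched without a QP solver if constraints are dropped: stack the linear dynamics over the horizon, solve a least-squares problem for the input sequence, and apply only the first input. A minimal sketch on a 1D double integrator (all names and weights are illustrative, not from the repo; a real path tracker adds input/state constraints and a proper QP solver):

```python
import numpy as np

dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])   # position/velocity double integrator
B = np.array([[0.5 * dt**2], [dt]])
N = 20          # prediction horizon
lam = 0.01      # input-effort weight

def mpc_control(x0, ref):
    """Return the first input of the optimal sequence (receding horizon)."""
    n, m = A.shape[0], B.shape[1]
    # Stacked predictions: X = Phi x0 + Gamma U
    Phi = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
    Gamma = np.zeros((N * n, N * m))
    for i in range(N):
        for j in range(i + 1):
            Gamma[i*n:(i+1)*n, j*m:(j+1)*m] = \
                np.linalg.matrix_power(A, i - j) @ B
    Xref = np.tile(ref, N)
    # min_U ||Gamma U - (Xref - Phi x0)||^2 + lam ||U||^2  (normal equations)
    H = Gamma.T @ Gamma + lam * np.eye(N * m)
    U = np.linalg.solve(H, Gamma.T @ (Xref - Phi @ x0))
    return U[:m]

# Closed-loop simulation toward reference position 1.0, velocity 0.0
x = np.array([0.0, 0.0])
ref = np.array([1.0, 0.0])
for _ in range(100):
    u = mpc_control(x, ref)
    x = A @ x + B @ u
```

Because the reference is an equilibrium of the dynamics, the unconstrained receding-horizon law behaves like an LQR around it and the state converges to the reference.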
Enhancing LLM/VLM capabilities for robot task and motion planning with additional algorithm-based tools.