Insights: pytorch/FBGEMM
Overview
- 0 Merged pull requests
- 10 Open pull requests
- 0 Closed issues
- 3 New issues
10 Pull requests opened by 6 people
- [Add missing Pyre mode headers] [batch:27/308] [shard:5/N] [A] (#2810, opened Jul 8, 2024)
- Use better exponent rounding in Triton MX4 quantize kernel (#2816, opened Jul 10, 2024)
- use at::parallel_for in cpu kernel (#2817, opened Jul 10, 2024)
- use at::parallel_for in cpu kernel (#2817) (#2818, opened Jul 10, 2024)
- [fbgemm_gpu] Bazel and docs fixes (#2819, opened Jul 10, 2024)
- Triton MX4 Quantize Rounding Mode Support (#2821, opened Jul 10, 2024)
- Remove redundant torch.abs in sim check (#2822, opened Jul 11, 2024)
- Bazel and docs fixes (#2823, opened Jul 11, 2024)
- Support padding based on row_dim (apf part) (#2827, opened Jul 11, 2024)
- Back out "Back out "[KT.regroup Ops][1-2/N] benchmark of fbgemm operator"" (#2828, opened Jul 11, 2024)
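Two of the PRs above (#2816 and #2821) concern how the shared exponent is rounded when quantizing to the MX4 block format. As a hedged illustration only, not FBGEMM's actual Triton kernel, the sketch below shows why the rounding mode matters: MX-style formats store one power-of-two scale per group, and truncating log2 versus rounding it to the nearest integer picks a different scale for values just below a power of two. The function name and mode strings are illustrative, not the repo's API.

```python
import math

def shared_exponent(max_abs: float, mode: str = "floor") -> int:
    # Pick the shared power-of-two exponent for a group of values,
    # as in MX-style block formats (one scale per group of elements).
    # "floor" truncates log2(max_abs); "nearest" rounds it to the
    # closest integer, which changes the chosen scale when max_abs
    # sits just below a power of two (e.g. 3.9 vs 4.0).
    log2_val = math.log2(max_abs)
    if mode == "floor":
        return math.floor(log2_val)
    if mode == "nearest":
        return round(log2_val)
    raise ValueError(f"unknown rounding mode: {mode}")
```

For example, a group whose largest magnitude is 3.9 gets exponent 1 under "floor" but 2 under "nearest"; which choice gives lower quantization error depends on the distribution of the other values in the group, which is presumably why #2821 exposes the mode as an option.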
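PR #2822 removes a redundant torch.abs from a similarity check. A plain-Python analogue (hypothetical, not the repo's code) shows why such a call is dead weight: in an MSE-style check the difference is squared, and squaring already discards sign, so |d|² equals d² for every real d.

```python
def sim_check(a, b, tol=1e-6):
    # MSE-style similarity check between two equal-length float lists.
    # Wrapping the difference in abs() first, as in `abs(x - y) ** 2`,
    # would be redundant: |d| ** 2 == d ** 2 for all real d, so the
    # abs() can be dropped without changing the result.
    mse = sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)
    return mse <= tol
```

The same reasoning applies to a tensor-level check: an element-wise abs before a squared-error reduction costs an extra kernel launch and memory pass while leaving the result unchanged.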
3 Issues opened by 3 people
- [Question FBGEMM_GPU] Adam optimizer not optimized (#2824, opened Jul 11, 2024)
- Undefined symbol: `cublasLtMatmulDescCreate` in fbgemm_gpu_experimental_gen_ai_py.so (#2808, opened Jul 8, 2024)
4 Unresolved conversations
Sometimes conversations happen on old items that aren’t yet closed. Here is a list of all the Issues and Pull Requests with unresolved conversations.
- FP32 Autovec Final Optimization (#2586, commented on Jul 10, 2024, 0 new comments)
- add a new function to update sparse delta (#2768, commented on Jul 9, 2024, 0 new comments)
- MX4 ops front-end API (#2777, commented on Jul 8, 2024, 0 new comments)
- Implement some custom fb op out variant kernels (#2793, commented on Jul 11, 2024, 0 new comments)