
[Feature][MMSIG] Add UniFormer Pose Estimation to Projects folder #2501

Merged: 14 commits merged into dev-1.x on Jul 24, 2023
Conversation

xin-li-67 (Contributor)

Motivation

MMSIG task

Modification

projects/uniformer/*
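
For readers new to the projects folder, here is a minimal hypothetical sketch of the usual pattern (class and type names are invented here, not the actual code in this PR): the project registers its backbone with mmpose's MODELS registry so configs can select it by type name.

# Hypothetical sketch of the projects/-folder pattern: register a backbone
# with mmpose's MODELS registry so configs can refer to it by type name.
# The real UniFormer implementation lives under projects/uniformer/.
import torch
import torch.nn as nn

from mmpose.registry import MODELS


@MODELS.register_module(name='UniFormerSketch')  # hypothetical type name
class UniFormerSketch(nn.Module):
    """Stand-in backbone returning a tuple of feature maps, as mmpose expects."""

    def __init__(self, out_channels: int = 512):
        super().__init__()
        self.stem = nn.Conv2d(3, out_channels, kernel_size=4, stride=4)

    def forward(self, x: torch.Tensor) -> tuple:
        return (self.stem(x),)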

BC-breaking (Optional)

Use cases (Optional)

Checklist

Before PR:

  • I have read and followed the workflow described in CONTRIBUTING.md to create this PR.
  • Pre-commit or other linting tools listed in CONTRIBUTING.md have been used to fix potential lint issues.
  • Bug fixes are covered by unit tests; the case that causes the bug should be added to the unit tests.
  • New functionalities are covered by complete unit tests. If not, please add more unit tests to ensure correctness.
  • The documentation has been modified accordingly, including docstring or example tutorials.

After PR:

  • CLA has been signed and all committers have signed the CLA in this PR.

xin-li-67 changed the base branch from main to dev-1.x on July 1, 2023, 11:33
codecov bot commented on Jul 1, 2023

Codecov Report

Patch coverage is unchanged; project coverage changes by -0.05% ⚠️

Comparison: base (4233a61) at 80.82% vs. head (f93c8ed) at 80.77%.

❗ Current head f93c8ed differs from the pull request's most recent head 9ee9014. Consider uploading reports for commit 9ee9014 to get more accurate results.

Additional details and impacted files
@@             Coverage Diff             @@
##           dev-1.x    #2501      +/-   ##
===========================================
- Coverage    80.82%   80.77%   -0.05%     
===========================================
  Files          230      230              
  Lines        14437    14437              
  Branches      2498     2498              
===========================================
- Hits         11668    11662       -6     
- Misses        2129     2136       +7     
+ Partials       640      639       -1     
Flag        Coverage Δ
unittests   80.77% <ø> (-0.05%) ⬇️

Flags with carried forward coverage won't be shown.

see 2 files with indirect coverage changes


xin-li-67 (Contributor, Author)

Test result sample of projects/uniformer/configs/td-hm_uniformer-s-8xb128-210e_coco-384x288.py:

Loads checkpoint by local backend from path: projects/uniformer/pose_model/top_down_384x288_global_small.pth
07/21 17:46:03 - mmengine - INFO - Load checkpoint from projects/uniformer/pose_model/top_down_384x288_global_small.pth
07/21 17:46:53 - mmengine - INFO - Epoch(test) [ 50/407]    eta: 0:05:55  time: 0.996443  data_time: 0.062145  memory: 6542  
07/21 17:47:39 - mmengine - INFO - Epoch(test) [100/407]    eta: 0:04:56  time: 0.933922  data_time: 0.034630  memory: 6542  
07/21 17:48:26 - mmengine - INFO - Epoch(test) [150/407]    eta: 0:04:05  time: 0.930108  data_time: 0.034428  memory: 6542  
07/21 17:49:13 - mmengine - INFO - Epoch(test) [200/407]    eta: 0:03:16  time: 0.937324  data_time: 0.039884  memory: 6542  
07/21 17:50:00 - mmengine - INFO - Epoch(test) [250/407]    eta: 0:02:28  time: 0.938158  data_time: 0.035234  memory: 6542  
07/21 17:50:46 - mmengine - INFO - Epoch(test) [300/407]    eta: 0:01:41  time: 0.929169  data_time: 0.036719  memory: 6542  
07/21 17:51:32 - mmengine - INFO - Epoch(test) [350/407]    eta: 0:00:53  time: 0.927817  data_time: 0.034636  memory: 6542  
07/21 17:52:19 - mmengine - INFO - Epoch(test) [400/407]    eta: 0:00:06  time: 0.929423  data_time: 0.035859  memory: 6542  
07/21 17:52:39 - mmengine - INFO - Evaluating CocoMetric...
Loading and preparing results...
DONE (t=1.54s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type *keypoints*
DONE (t=3.81s).
Accumulating evaluation results...
DONE (t=0.12s).
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets= 20 ] =  0.759
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets= 20 ] =  0.906
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets= 20 ] =  0.830
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] =  0.722
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] =  0.830
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 20 ] =  0.810
 Average Recall     (AR) @[ IoU=0.50      | area=   all | maxDets= 20 ] =  0.944
 Average Recall     (AR) @[ IoU=0.75      | area=   all | maxDets= 20 ] =  0.873
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] =  0.768
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] =  0.873
07/21 17:52:44 - mmengine - INFO - Epoch(test) [407/407]    coco/AP: 0.758717  coco/AP .5: 0.906003  coco/AP .75: 0.829609  coco/AP (M): 0.721588  coco/AP (L): 0.829766  coco/AR: 0.810217  coco/AR .5: 0.943955  coco/AR .75: 0.873111  coco/AR (M): 0.767850  coco/AR (L): 0.872612  data_time: 0.039037  time: 0.939339
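
For reference, a minimal sketch of how this evaluation can be reproduced (the exact invocation is not recorded in this thread; mmengine's Runner API is assumed, and the work directory is hypothetical):

# Hedged sketch: re-running the test loop above, with the config and
# checkpoint paths taken from the log. Equivalent in spirit to
# `python tools/test.py <config> <checkpoint>`.
from mmengine.config import Config
from mmengine.runner import Runner

cfg = Config.fromfile(
    'projects/uniformer/configs/td-hm_uniformer-s-8xb128-210e_coco-384x288.py')
cfg.load_from = 'projects/uniformer/pose_model/top_down_384x288_global_small.pth'
cfg.work_dir = 'work_dirs/uniformer_eval'  # hypothetical output directory

runner = Runner.from_cfg(cfg)
runner.test()  # runs the Epoch(test) loop and CocoMetric evaluation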

xin-li-67 changed the title from "[WIP] Add UniFormer Pose Estimation to Projects folder" to "Add UniFormer Pose Estimation to Projects folder" on Jul 23, 2023
xin-li-67 (Contributor, Author)

With the latest commit, I have fixed the error that blocked the training process, and I can now run training on a single GPU; the log closely matches the original one. A sketch of the launch is given below, followed by part of the log:
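
A minimal sketch of the single-GPU launch (the exact command is not recorded here; mmengine's Runner API is assumed, and the work directory is hypothetical):

# Hedged sketch of the single-GPU training launch, equivalent in spirit to
# `python tools/train.py <config>`. Paths other than the config are guesses.
from mmengine.config import Config
from mmengine.runner import Runner

cfg = Config.fromfile(
    'projects/uniformer/configs/td-hm_uniformer-s-8xb128-210e_coco-384x288.py')
cfg.work_dir = 'work_dirs/uniformer_train'  # hypothetical output directory

# The log's "Scaling the original LR by 0.03125" is linear LR scaling:
# actual batch (32) / base batch (1024, i.e. 8 GPUs x 128) = 0.03125.
runner = Runner.from_cfg(cfg)
runner.train()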

2023/07/22 14:32:59 - mmengine - INFO - Distributed training is not used, all SyncBatchNorm (SyncBN) layers in the model will be automatically reverted to BatchNormXd layers if they are used.
2023/07/22 14:32:59 - mmengine - INFO - Hooks will be executed in the following order:
before_run:
(VERY_HIGH   ) RuntimeInfoHook                    
(BELOW_NORMAL) LoggerHook                         
 -------------------- 
before_train:
(VERY_HIGH   ) RuntimeInfoHook                    
(NORMAL      ) IterTimerHook                      
(VERY_LOW    ) CheckpointHook                     
 -------------------- 
before_train_epoch:
(VERY_HIGH   ) RuntimeInfoHook                    
(NORMAL      ) IterTimerHook                      
(NORMAL      ) DistSamplerSeedHook                
 -------------------- 
before_train_iter:
(VERY_HIGH   ) RuntimeInfoHook                    
(NORMAL      ) IterTimerHook                      
 -------------------- 
after_train_iter:
(VERY_HIGH   ) RuntimeInfoHook                    
(NORMAL      ) IterTimerHook                      
(BELOW_NORMAL) LoggerHook                         
(LOW         ) ParamSchedulerHook                 
(VERY_LOW    ) CheckpointHook                     
 -------------------- 
after_train_epoch:
(NORMAL      ) IterTimerHook                      
(NORMAL      ) SyncBuffersHook                    
(LOW         ) ParamSchedulerHook                 
(VERY_LOW    ) CheckpointHook                     
 -------------------- 
before_val_epoch:
(NORMAL      ) IterTimerHook                      
(NORMAL      ) SyncBuffersHook                    
 -------------------- 
before_val_iter:
(NORMAL      ) IterTimerHook                      
 -------------------- 
after_val_iter:
(NORMAL      ) IterTimerHook                      
(NORMAL      ) PoseVisualizationHook              
(BELOW_NORMAL) LoggerHook                         
 -------------------- 
after_val_epoch:
(VERY_HIGH   ) RuntimeInfoHook                    
(NORMAL      ) IterTimerHook                      
(BELOW_NORMAL) LoggerHook                         
(LOW         ) ParamSchedulerHook                 
(VERY_LOW    ) CheckpointHook                     
 -------------------- 
after_train:
(VERY_LOW    ) CheckpointHook                     
 -------------------- 
before_test_epoch:
(NORMAL      ) IterTimerHook                      
 -------------------- 
before_test_iter:
(NORMAL      ) IterTimerHook                      
 -------------------- 
after_test_iter:
(NORMAL      ) IterTimerHook                      
(NORMAL      ) PoseVisualizationHook              
(BELOW_NORMAL) LoggerHook                         
 -------------------- 
after_test_epoch:
(VERY_HIGH   ) RuntimeInfoHook                    
(NORMAL      ) IterTimerHook                      
(BELOW_NORMAL) LoggerHook                         
 -------------------- 
after_run:
(BELOW_NORMAL) LoggerHook                         
 -------------------- 
2023/07/22 14:34:04 - mmengine - INFO - LR is set based on batch size of 1024 and the current batch size is 32. Scaling the original LR by 0.03125.
2023/07/22 14:34:10 - mmengine - INFO - load model from: /root/mmpose/projects/uniformer/pretrained/uniformer_small_in1k.pth
2023/07/22 14:34:10 - mmengine - INFO - Loads checkpoint by local backend from path: /root/mmpose/projects/uniformer/pretrained/uniformer_small_in1k.pth
2023/07/22 14:34:10 - mmengine - WARNING - The model and loaded state dict do not match exactly

unexpected key in source state_dict: model

missing keys in source state_dict: patch_embed1.norm.weight, patch_embed1.norm.bias, patch_embed1.proj.weight, patch_embed1.proj.bias, patch_embed2.norm.weight, patch_embed2.norm.bias, patch_embed2.proj.weight, patch_embed2.proj.bias, patch_embed3.norm.weight, patch_embed3.norm.bias, patch_embed3.proj.weight, patch_embed3.proj.bias, patch_embed4.norm.weight, patch_embed4.norm.bias, patch_embed4.proj.weight, patch_embed4.proj.bias, blocks1.0.pos_embed.weight, blocks1.0.pos_embed.bias, blocks1.0.norm1.weight, blocks1.0.norm1.bias, blocks1.0.norm1.running_mean, blocks1.0.norm1.running_var, blocks1.0.conv1.weight, blocks1.0.conv1.bias, blocks1.0.conv2.weight, blocks1.0.conv2.bias, blocks1.0.attn.weight, blocks1.0.attn.bias, blocks1.0.norm2.weight, blocks1.0.norm2.bias, blocks1.0.norm2.running_mean, blocks1.0.norm2.running_var, blocks1.0.mlp.fc1.weight, blocks1.0.mlp.fc1.bias, blocks1.0.mlp.fc2.weight, blocks1.0.mlp.fc2.bias, blocks1.1.pos_embed.weight, blocks1.1.pos_embed.bias, blocks1.1.norm1.weight, blocks1.1.norm1.bias, blocks1.1.norm1.running_mean, blocks1.1.norm1.running_var, blocks1.1.conv1.weight, blocks1.1.conv1.bias, blocks1.1.conv2.weight, blocks1.1.conv2.bias, blocks1.1.attn.weight, blocks1.1.attn.bias, blocks1.1.norm2.weight, blocks1.1.norm2.bias, blocks1.1.norm2.running_mean, blocks1.1.norm2.running_var, blocks1.1.mlp.fc1.weight, blocks1.1.mlp.fc1.bias, blocks1.1.mlp.fc2.weight, blocks1.1.mlp.fc2.bias, blocks1.2.pos_embed.weight, blocks1.2.pos_embed.bias, blocks1.2.norm1.weight, blocks1.2.norm1.bias, blocks1.2.norm1.running_mean, blocks1.2.norm1.running_var, blocks1.2.conv1.weight, blocks1.2.conv1.bias, blocks1.2.conv2.weight, blocks1.2.conv2.bias, blocks1.2.attn.weight, blocks1.2.attn.bias, blocks1.2.norm2.weight, blocks1.2.norm2.bias, blocks1.2.norm2.running_mean, blocks1.2.norm2.running_var, blocks1.2.mlp.fc1.weight, blocks1.2.mlp.fc1.bias, blocks1.2.mlp.fc2.weight, blocks1.2.mlp.fc2.bias, norm1.weight, norm1.bias, blocks2.0.pos_embed.weight, blocks2.0.pos_embed.bias, blocks2.0.norm1.weight, blocks2.0.norm1.bias, blocks2.0.norm1.running_mean, blocks2.0.norm1.running_var, blocks2.0.conv1.weight, blocks2.0.conv1.bias, blocks2.0.conv2.weight, blocks2.0.conv2.bias, blocks2.0.attn.weight, blocks2.0.attn.bias, blocks2.0.norm2.weight, blocks2.0.norm2.bias, blocks2.0.norm2.running_mean, blocks2.0.norm2.running_var, blocks2.0.mlp.fc1.weight, blocks2.0.mlp.fc1.bias, blocks2.0.mlp.fc2.weight, blocks2.0.mlp.fc2.bias, blocks2.1.pos_embed.weight, blocks2.1.pos_embed.bias, blocks2.1.norm1.weight, blocks2.1.norm1.bias, blocks2.1.norm1.running_mean, blocks2.1.norm1.running_var, blocks2.1.conv1.weight, blocks2.1.conv1.bias, blocks2.1.conv2.weight, blocks2.1.conv2.bias, blocks2.1.attn.weight, blocks2.1.attn.bias, blocks2.1.norm2.weight, blocks2.1.norm2.bias, blocks2.1.norm2.running_mean, blocks2.1.norm2.running_var, blocks2.1.mlp.fc1.weight, blocks2.1.mlp.fc1.bias, blocks2.1.mlp.fc2.weight, blocks2.1.mlp.fc2.bias, blocks2.2.pos_embed.weight, blocks2.2.pos_embed.bias, blocks2.2.norm1.weight, blocks2.2.norm1.bias, blocks2.2.norm1.running_mean, blocks2.2.norm1.running_var, blocks2.2.conv1.weight, blocks2.2.conv1.bias, blocks2.2.conv2.weight, blocks2.2.conv2.bias, blocks2.2.attn.weight, blocks2.2.attn.bias, blocks2.2.norm2.weight, blocks2.2.norm2.bias, blocks2.2.norm2.running_mean, blocks2.2.norm2.running_var, blocks2.2.mlp.fc1.weight, blocks2.2.mlp.fc1.bias, blocks2.2.mlp.fc2.weight, blocks2.2.mlp.fc2.bias, blocks2.3.pos_embed.weight, blocks2.3.pos_embed.bias, blocks2.3.norm1.weight, 
blocks2.3.norm1.bias, blocks2.3.norm1.running_mean, blocks2.3.norm1.running_var, blocks2.3.conv1.weight, blocks2.3.conv1.bias, blocks2.3.conv2.weight, blocks2.3.conv2.bias, blocks2.3.attn.weight, blocks2.3.attn.bias, blocks2.3.norm2.weight, blocks2.3.norm2.bias, blocks2.3.norm2.running_mean, blocks2.3.norm2.running_var, blocks2.3.mlp.fc1.weight, blocks2.3.mlp.fc1.bias, blocks2.3.mlp.fc2.weight, blocks2.3.mlp.fc2.bias, norm2.weight, norm2.bias, blocks3.0.pos_embed.weight, blocks3.0.pos_embed.bias, blocks3.0.norm1.weight, blocks3.0.norm1.bias, blocks3.0.attn.qkv.weight, blocks3.0.attn.qkv.bias, blocks3.0.attn.proj.weight, blocks3.0.attn.proj.bias, blocks3.0.norm2.weight, blocks3.0.norm2.bias, blocks3.0.mlp.fc1.weight, blocks3.0.mlp.fc1.bias, blocks3.0.mlp.fc2.weight, blocks3.0.mlp.fc2.bias, blocks3.1.pos_embed.weight, blocks3.1.pos_embed.bias, blocks3.1.norm1.weight, blocks3.1.norm1.bias, blocks3.1.attn.qkv.weight, blocks3.1.attn.qkv.bias, blocks3.1.attn.proj.weight, blocks3.1.attn.proj.bias, blocks3.1.norm2.weight, blocks3.1.norm2.bias, blocks3.1.mlp.fc1.weight, blocks3.1.mlp.fc1.bias, blocks3.1.mlp.fc2.weight, blocks3.1.mlp.fc2.bias, blocks3.2.pos_embed.weight, blocks3.2.pos_embed.bias, blocks3.2.norm1.weight, blocks3.2.norm1.bias, blocks3.2.attn.qkv.weight, blocks3.2.attn.qkv.bias, blocks3.2.attn.proj.weight, blocks3.2.attn.proj.bias, blocks3.2.norm2.weight, blocks3.2.norm2.bias, blocks3.2.mlp.fc1.weight, blocks3.2.mlp.fc1.bias, blocks3.2.mlp.fc2.weight, blocks3.2.mlp.fc2.bias, blocks3.3.pos_embed.weight, blocks3.3.pos_embed.bias, blocks3.3.norm1.weight, blocks3.3.norm1.bias, blocks3.3.attn.qkv.weight, blocks3.3.attn.qkv.bias, blocks3.3.attn.proj.weight, blocks3.3.attn.proj.bias, blocks3.3.norm2.weight, blocks3.3.norm2.bias, blocks3.3.mlp.fc1.weight, blocks3.3.mlp.fc1.bias, blocks3.3.mlp.fc2.weight, blocks3.3.mlp.fc2.bias, blocks3.4.pos_embed.weight, blocks3.4.pos_embed.bias, blocks3.4.norm1.weight, blocks3.4.norm1.bias, blocks3.4.attn.qkv.weight, blocks3.4.attn.qkv.bias, blocks3.4.attn.proj.weight, blocks3.4.attn.proj.bias, blocks3.4.norm2.weight, blocks3.4.norm2.bias, blocks3.4.mlp.fc1.weight, blocks3.4.mlp.fc1.bias, blocks3.4.mlp.fc2.weight, blocks3.4.mlp.fc2.bias, blocks3.5.pos_embed.weight, blocks3.5.pos_embed.bias, blocks3.5.norm1.weight, blocks3.5.norm1.bias, blocks3.5.attn.qkv.weight, blocks3.5.attn.qkv.bias, blocks3.5.attn.proj.weight, blocks3.5.attn.proj.bias, blocks3.5.norm2.weight, blocks3.5.norm2.bias, blocks3.5.mlp.fc1.weight, blocks3.5.mlp.fc1.bias, blocks3.5.mlp.fc2.weight, blocks3.5.mlp.fc2.bias, blocks3.6.pos_embed.weight, blocks3.6.pos_embed.bias, blocks3.6.norm1.weight, blocks3.6.norm1.bias, blocks3.6.attn.qkv.weight, blocks3.6.attn.qkv.bias, blocks3.6.attn.proj.weight, blocks3.6.attn.proj.bias, blocks3.6.norm2.weight, blocks3.6.norm2.bias, blocks3.6.mlp.fc1.weight, blocks3.6.mlp.fc1.bias, blocks3.6.mlp.fc2.weight, blocks3.6.mlp.fc2.bias, blocks3.7.pos_embed.weight, blocks3.7.pos_embed.bias, blocks3.7.norm1.weight, blocks3.7.norm1.bias, blocks3.7.attn.qkv.weight, blocks3.7.attn.qkv.bias, blocks3.7.attn.proj.weight, blocks3.7.attn.proj.bias, blocks3.7.norm2.weight, blocks3.7.norm2.bias, blocks3.7.mlp.fc1.weight, blocks3.7.mlp.fc1.bias, blocks3.7.mlp.fc2.weight, blocks3.7.mlp.fc2.bias, norm3.weight, norm3.bias, blocks4.0.pos_embed.weight, blocks4.0.pos_embed.bias, blocks4.0.norm1.weight, blocks4.0.norm1.bias, blocks4.0.attn.qkv.weight, blocks4.0.attn.qkv.bias, blocks4.0.attn.proj.weight, blocks4.0.attn.proj.bias, blocks4.0.norm2.weight, blocks4.0.norm2.bias, 
blocks4.0.mlp.fc1.weight, blocks4.0.mlp.fc1.bias, blocks4.0.mlp.fc2.weight, blocks4.0.mlp.fc2.bias, blocks4.1.pos_embed.weight, blocks4.1.pos_embed.bias, blocks4.1.norm1.weight, blocks4.1.norm1.bias, blocks4.1.attn.qkv.weight, blocks4.1.attn.qkv.bias, blocks4.1.attn.proj.weight, blocks4.1.attn.proj.bias, blocks4.1.norm2.weight, blocks4.1.norm2.bias, blocks4.1.mlp.fc1.weight, blocks4.1.mlp.fc1.bias, blocks4.1.mlp.fc2.weight, blocks4.1.mlp.fc2.bias, blocks4.2.pos_embed.weight, blocks4.2.pos_embed.bias, blocks4.2.norm1.weight, blocks4.2.norm1.bias, blocks4.2.attn.qkv.weight, blocks4.2.attn.qkv.bias, blocks4.2.attn.proj.weight, blocks4.2.attn.proj.bias, blocks4.2.norm2.weight, blocks4.2.norm2.bias, blocks4.2.mlp.fc1.weight, blocks4.2.mlp.fc1.bias, blocks4.2.mlp.fc2.weight, blocks4.2.mlp.fc2.bias, norm4.weight, norm4.bias

Name of parameter - Initialization information

backbone.patch_embed1.norm.weight - torch.Size([64]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.patch_embed1.norm.bias - torch.Size([64]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.patch_embed1.proj.weight - torch.Size([64, 3, 4, 4]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.patch_embed1.proj.bias - torch.Size([64]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.patch_embed2.norm.weight - torch.Size([128]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.patch_embed2.norm.bias - torch.Size([128]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.patch_embed2.proj.weight - torch.Size([128, 64, 2, 2]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.patch_embed2.proj.bias - torch.Size([128]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.patch_embed3.norm.weight - torch.Size([320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.patch_embed3.norm.bias - torch.Size([320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.patch_embed3.proj.weight - torch.Size([320, 128, 2, 2]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.patch_embed3.proj.bias - torch.Size([320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.patch_embed4.norm.weight - torch.Size([512]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.patch_embed4.norm.bias - torch.Size([512]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.patch_embed4.proj.weight - torch.Size([512, 320, 2, 2]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.patch_embed4.proj.bias - torch.Size([512]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks1.0.pos_embed.weight - torch.Size([64, 1, 3, 3]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks1.0.pos_embed.bias - torch.Size([64]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks1.0.norm1.weight - torch.Size([64]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks1.0.norm1.bias - torch.Size([64]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks1.0.conv1.weight - torch.Size([64, 64, 1, 1]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks1.0.conv1.bias - torch.Size([64]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks1.0.conv2.weight - torch.Size([64, 64, 1, 1]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks1.0.conv2.bias - torch.Size([64]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks1.0.attn.weight - torch.Size([64, 1, 5, 5]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks1.0.attn.bias - torch.Size([64]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks1.0.norm2.weight - torch.Size([64]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks1.0.norm2.bias - torch.Size([64]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks1.0.mlp.fc1.weight - torch.Size([256, 64, 1, 1]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks1.0.mlp.fc1.bias - torch.Size([256]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks1.0.mlp.fc2.weight - torch.Size([64, 256, 1, 1]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks1.0.mlp.fc2.bias - torch.Size([64]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks1.1.pos_embed.weight - torch.Size([64, 1, 3, 3]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks1.1.pos_embed.bias - torch.Size([64]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks1.1.norm1.weight - torch.Size([64]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks1.1.norm1.bias - torch.Size([64]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks1.1.conv1.weight - torch.Size([64, 64, 1, 1]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks1.1.conv1.bias - torch.Size([64]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks1.1.conv2.weight - torch.Size([64, 64, 1, 1]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks1.1.conv2.bias - torch.Size([64]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks1.1.attn.weight - torch.Size([64, 1, 5, 5]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks1.1.attn.bias - torch.Size([64]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks1.1.norm2.weight - torch.Size([64]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks1.1.norm2.bias - torch.Size([64]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks1.1.mlp.fc1.weight - torch.Size([256, 64, 1, 1]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks1.1.mlp.fc1.bias - torch.Size([256]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks1.1.mlp.fc2.weight - torch.Size([64, 256, 1, 1]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks1.1.mlp.fc2.bias - torch.Size([64]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks1.2.pos_embed.weight - torch.Size([64, 1, 3, 3]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks1.2.pos_embed.bias - torch.Size([64]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks1.2.norm1.weight - torch.Size([64]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks1.2.norm1.bias - torch.Size([64]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks1.2.conv1.weight - torch.Size([64, 64, 1, 1]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks1.2.conv1.bias - torch.Size([64]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks1.2.conv2.weight - torch.Size([64, 64, 1, 1]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks1.2.conv2.bias - torch.Size([64]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks1.2.attn.weight - torch.Size([64, 1, 5, 5]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks1.2.attn.bias - torch.Size([64]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks1.2.norm2.weight - torch.Size([64]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks1.2.norm2.bias - torch.Size([64]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks1.2.mlp.fc1.weight - torch.Size([256, 64, 1, 1]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks1.2.mlp.fc1.bias - torch.Size([256]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks1.2.mlp.fc2.weight - torch.Size([64, 256, 1, 1]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks1.2.mlp.fc2.bias - torch.Size([64]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.norm1.weight - torch.Size([64]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.norm1.bias - torch.Size([64]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks2.0.pos_embed.weight - torch.Size([128, 1, 3, 3]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks2.0.pos_embed.bias - torch.Size([128]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks2.0.norm1.weight - torch.Size([128]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks2.0.norm1.bias - torch.Size([128]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks2.0.conv1.weight - torch.Size([128, 128, 1, 1]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks2.0.conv1.bias - torch.Size([128]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks2.0.conv2.weight - torch.Size([128, 128, 1, 1]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks2.0.conv2.bias - torch.Size([128]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks2.0.attn.weight - torch.Size([128, 1, 5, 5]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks2.0.attn.bias - torch.Size([128]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks2.0.norm2.weight - torch.Size([128]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks2.0.norm2.bias - torch.Size([128]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks2.0.mlp.fc1.weight - torch.Size([512, 128, 1, 1]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks2.0.mlp.fc1.bias - torch.Size([512]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks2.0.mlp.fc2.weight - torch.Size([128, 512, 1, 1]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks2.0.mlp.fc2.bias - torch.Size([128]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks2.1.pos_embed.weight - torch.Size([128, 1, 3, 3]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks2.1.pos_embed.bias - torch.Size([128]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks2.1.norm1.weight - torch.Size([128]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks2.1.norm1.bias - torch.Size([128]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks2.1.conv1.weight - torch.Size([128, 128, 1, 1]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks2.1.conv1.bias - torch.Size([128]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks2.1.conv2.weight - torch.Size([128, 128, 1, 1]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks2.1.conv2.bias - torch.Size([128]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks2.1.attn.weight - torch.Size([128, 1, 5, 5]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks2.1.attn.bias - torch.Size([128]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks2.1.norm2.weight - torch.Size([128]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks2.1.norm2.bias - torch.Size([128]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks2.1.mlp.fc1.weight - torch.Size([512, 128, 1, 1]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks2.1.mlp.fc1.bias - torch.Size([512]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks2.1.mlp.fc2.weight - torch.Size([128, 512, 1, 1]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks2.1.mlp.fc2.bias - torch.Size([128]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks2.2.pos_embed.weight - torch.Size([128, 1, 3, 3]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks2.2.pos_embed.bias - torch.Size([128]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks2.2.norm1.weight - torch.Size([128]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks2.2.norm1.bias - torch.Size([128]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks2.2.conv1.weight - torch.Size([128, 128, 1, 1]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks2.2.conv1.bias - torch.Size([128]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks2.2.conv2.weight - torch.Size([128, 128, 1, 1]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks2.2.conv2.bias - torch.Size([128]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks2.2.attn.weight - torch.Size([128, 1, 5, 5]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks2.2.attn.bias - torch.Size([128]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks2.2.norm2.weight - torch.Size([128]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks2.2.norm2.bias - torch.Size([128]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks2.2.mlp.fc1.weight - torch.Size([512, 128, 1, 1]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks2.2.mlp.fc1.bias - torch.Size([512]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks2.2.mlp.fc2.weight - torch.Size([128, 512, 1, 1]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks2.2.mlp.fc2.bias - torch.Size([128]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks2.3.pos_embed.weight - torch.Size([128, 1, 3, 3]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks2.3.pos_embed.bias - torch.Size([128]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks2.3.norm1.weight - torch.Size([128]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks2.3.norm1.bias - torch.Size([128]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks2.3.conv1.weight - torch.Size([128, 128, 1, 1]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks2.3.conv1.bias - torch.Size([128]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks2.3.conv2.weight - torch.Size([128, 128, 1, 1]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks2.3.conv2.bias - torch.Size([128]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks2.3.attn.weight - torch.Size([128, 1, 5, 5]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks2.3.attn.bias - torch.Size([128]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks2.3.norm2.weight - torch.Size([128]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks2.3.norm2.bias - torch.Size([128]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks2.3.mlp.fc1.weight - torch.Size([512, 128, 1, 1]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks2.3.mlp.fc1.bias - torch.Size([512]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks2.3.mlp.fc2.weight - torch.Size([128, 512, 1, 1]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks2.3.mlp.fc2.bias - torch.Size([128]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.norm2.weight - torch.Size([128]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.norm2.bias - torch.Size([128]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.0.pos_embed.weight - torch.Size([320, 1, 3, 3]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.0.pos_embed.bias - torch.Size([320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.0.norm1.weight - torch.Size([320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.0.norm1.bias - torch.Size([320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.0.attn.qkv.weight - torch.Size([960, 320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.0.attn.qkv.bias - torch.Size([960]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.0.attn.proj.weight - torch.Size([320, 320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.0.attn.proj.bias - torch.Size([320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.0.norm2.weight - torch.Size([320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.0.norm2.bias - torch.Size([320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.0.mlp.fc1.weight - torch.Size([1280, 320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.0.mlp.fc1.bias - torch.Size([1280]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.0.mlp.fc2.weight - torch.Size([320, 1280]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.0.mlp.fc2.bias - torch.Size([320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.1.pos_embed.weight - torch.Size([320, 1, 3, 3]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.1.pos_embed.bias - torch.Size([320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.1.norm1.weight - torch.Size([320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.1.norm1.bias - torch.Size([320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.1.attn.qkv.weight - torch.Size([960, 320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.1.attn.qkv.bias - torch.Size([960]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.1.attn.proj.weight - torch.Size([320, 320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.1.attn.proj.bias - torch.Size([320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.1.norm2.weight - torch.Size([320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.1.norm2.bias - torch.Size([320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.1.mlp.fc1.weight - torch.Size([1280, 320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.1.mlp.fc1.bias - torch.Size([1280]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.1.mlp.fc2.weight - torch.Size([320, 1280]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.1.mlp.fc2.bias - torch.Size([320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.2.pos_embed.weight - torch.Size([320, 1, 3, 3]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.2.pos_embed.bias - torch.Size([320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.2.norm1.weight - torch.Size([320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.2.norm1.bias - torch.Size([320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.2.attn.qkv.weight - torch.Size([960, 320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.2.attn.qkv.bias - torch.Size([960]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.2.attn.proj.weight - torch.Size([320, 320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.2.attn.proj.bias - torch.Size([320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.2.norm2.weight - torch.Size([320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.2.norm2.bias - torch.Size([320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.2.mlp.fc1.weight - torch.Size([1280, 320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.2.mlp.fc1.bias - torch.Size([1280]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.2.mlp.fc2.weight - torch.Size([320, 1280]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.2.mlp.fc2.bias - torch.Size([320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.3.pos_embed.weight - torch.Size([320, 1, 3, 3]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.3.pos_embed.bias - torch.Size([320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.3.norm1.weight - torch.Size([320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.3.norm1.bias - torch.Size([320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.3.attn.qkv.weight - torch.Size([960, 320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.3.attn.qkv.bias - torch.Size([960]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.3.attn.proj.weight - torch.Size([320, 320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.3.attn.proj.bias - torch.Size([320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.3.norm2.weight - torch.Size([320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.3.norm2.bias - torch.Size([320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.3.mlp.fc1.weight - torch.Size([1280, 320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.3.mlp.fc1.bias - torch.Size([1280]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.3.mlp.fc2.weight - torch.Size([320, 1280]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.3.mlp.fc2.bias - torch.Size([320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.4.pos_embed.weight - torch.Size([320, 1, 3, 3]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.4.pos_embed.bias - torch.Size([320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.4.norm1.weight - torch.Size([320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.4.norm1.bias - torch.Size([320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.4.attn.qkv.weight - torch.Size([960, 320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.4.attn.qkv.bias - torch.Size([960]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.4.attn.proj.weight - torch.Size([320, 320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.4.attn.proj.bias - torch.Size([320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.4.norm2.weight - torch.Size([320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.4.norm2.bias - torch.Size([320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.4.mlp.fc1.weight - torch.Size([1280, 320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.4.mlp.fc1.bias - torch.Size([1280]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.4.mlp.fc2.weight - torch.Size([320, 1280]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.4.mlp.fc2.bias - torch.Size([320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.5.pos_embed.weight - torch.Size([320, 1, 3, 3]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.5.pos_embed.bias - torch.Size([320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.5.norm1.weight - torch.Size([320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.5.norm1.bias - torch.Size([320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.5.attn.qkv.weight - torch.Size([960, 320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.5.attn.qkv.bias - torch.Size([960]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.5.attn.proj.weight - torch.Size([320, 320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.5.attn.proj.bias - torch.Size([320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.5.norm2.weight - torch.Size([320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.5.norm2.bias - torch.Size([320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.5.mlp.fc1.weight - torch.Size([1280, 320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.5.mlp.fc1.bias - torch.Size([1280]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.5.mlp.fc2.weight - torch.Size([320, 1280]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.5.mlp.fc2.bias - torch.Size([320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.6.pos_embed.weight - torch.Size([320, 1, 3, 3]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.6.pos_embed.bias - torch.Size([320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.6.norm1.weight - torch.Size([320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.6.norm1.bias - torch.Size([320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.6.attn.qkv.weight - torch.Size([960, 320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.6.attn.qkv.bias - torch.Size([960]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.6.attn.proj.weight - torch.Size([320, 320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.6.attn.proj.bias - torch.Size([320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.6.norm2.weight - torch.Size([320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.6.norm2.bias - torch.Size([320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.6.mlp.fc1.weight - torch.Size([1280, 320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.6.mlp.fc1.bias - torch.Size([1280]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.6.mlp.fc2.weight - torch.Size([320, 1280]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.6.mlp.fc2.bias - torch.Size([320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.7.pos_embed.weight - torch.Size([320, 1, 3, 3]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.7.pos_embed.bias - torch.Size([320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.7.norm1.weight - torch.Size([320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.7.norm1.bias - torch.Size([320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.7.attn.qkv.weight - torch.Size([960, 320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.7.attn.qkv.bias - torch.Size([960]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.7.attn.proj.weight - torch.Size([320, 320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.7.attn.proj.bias - torch.Size([320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.7.norm2.weight - torch.Size([320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.7.norm2.bias - torch.Size([320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.7.mlp.fc1.weight - torch.Size([1280, 320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.7.mlp.fc1.bias - torch.Size([1280]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.7.mlp.fc2.weight - torch.Size([320, 1280]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks3.7.mlp.fc2.bias - torch.Size([320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.norm3.weight - torch.Size([320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.norm3.bias - torch.Size([320]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks4.0.pos_embed.weight - torch.Size([512, 1, 3, 3]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks4.0.pos_embed.bias - torch.Size([512]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks4.0.norm1.weight - torch.Size([512]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks4.0.norm1.bias - torch.Size([512]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks4.0.attn.qkv.weight - torch.Size([1536, 512]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks4.0.attn.qkv.bias - torch.Size([1536]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks4.0.attn.proj.weight - torch.Size([512, 512]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks4.0.attn.proj.bias - torch.Size([512]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks4.0.norm2.weight - torch.Size([512]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks4.0.norm2.bias - torch.Size([512]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks4.0.mlp.fc1.weight - torch.Size([2048, 512]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks4.0.mlp.fc1.bias - torch.Size([2048]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks4.0.mlp.fc2.weight - torch.Size([512, 2048]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks4.0.mlp.fc2.bias - torch.Size([512]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks4.1.pos_embed.weight - torch.Size([512, 1, 3, 3]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks4.1.pos_embed.bias - torch.Size([512]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks4.1.norm1.weight - torch.Size([512]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks4.1.norm1.bias - torch.Size([512]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks4.1.attn.qkv.weight - torch.Size([1536, 512]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks4.1.attn.qkv.bias - torch.Size([1536]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks4.1.attn.proj.weight - torch.Size([512, 512]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks4.1.attn.proj.bias - torch.Size([512]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks4.1.norm2.weight - torch.Size([512]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks4.1.norm2.bias - torch.Size([512]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks4.1.mlp.fc1.weight - torch.Size([2048, 512]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks4.1.mlp.fc1.bias - torch.Size([2048]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks4.1.mlp.fc2.weight - torch.Size([512, 2048]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks4.1.mlp.fc2.bias - torch.Size([512]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks4.2.pos_embed.weight - torch.Size([512, 1, 3, 3]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks4.2.pos_embed.bias - torch.Size([512]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks4.2.norm1.weight - torch.Size([512]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks4.2.norm1.bias - torch.Size([512]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks4.2.attn.qkv.weight - torch.Size([1536, 512]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks4.2.attn.qkv.bias - torch.Size([1536]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks4.2.attn.proj.weight - torch.Size([512, 512]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks4.2.attn.proj.bias - torch.Size([512]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks4.2.norm2.weight - torch.Size([512]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks4.2.norm2.bias - torch.Size([512]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks4.2.mlp.fc1.weight - torch.Size([2048, 512]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks4.2.mlp.fc1.bias - torch.Size([2048]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks4.2.mlp.fc2.weight - torch.Size([512, 2048]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.blocks4.2.mlp.fc2.bias - torch.Size([512]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.norm4.weight - torch.Size([512]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

backbone.norm4.bias - torch.Size([512]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

head.deconv_layers.0.weight - torch.Size([512, 256, 4, 4]): 
NormalInit: mean=0, std=0.001, bias=0 

head.deconv_layers.1.weight - torch.Size([256]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

head.deconv_layers.1.bias - torch.Size([256]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

head.deconv_layers.3.weight - torch.Size([256, 256, 4, 4]): 
NormalInit: mean=0, std=0.001, bias=0 

head.deconv_layers.4.weight - torch.Size([256]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

head.deconv_layers.4.bias - torch.Size([256]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

head.deconv_layers.6.weight - torch.Size([256, 256, 4, 4]): 
NormalInit: mean=0, std=0.001, bias=0 

head.deconv_layers.7.weight - torch.Size([256]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

head.deconv_layers.7.bias - torch.Size([256]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator  

head.final_layer.weight - torch.Size([17, 256, 1, 1]): 
NormalInit: mean=0, std=0.001, bias=0 

head.final_layer.bias - torch.Size([17]): 
NormalInit: mean=0, std=0.001, bias=0 
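
A quick note on the init report above: every `backbone.*` parameter is listed as unchanged because those weights come straight from the pretrained UniFormer checkpoint, so `init_weights` of `TopdownPoseEstimator` has nothing to overwrite there. Only the head's three 4x4 deconv layers and the final 1x1 conv are freshly drawn from `Normal(mean=0, std=0.001)`; the head's batch-norm parameters also show up as unchanged because the normal init targets the (de)conv layers only. Below is a minimal config sketch that would reproduce this pattern (not necessarily the exact config of this run; the checkpoint path and channel numbers are taken from the log purely for illustration):

```python
# Illustrative sketch only, not the PR's actual config file.
model = dict(
    type='TopdownPoseEstimator',
    backbone=dict(
        type='UniFormer',  # registered by projects/uniformer
        init_cfg=dict(
            type='Pretrained',
            # illustrative path; point this at the pretrained UniFormer weights
            checkpoint='projects/uniformer/pose_model/top_down_256x192_global_small.pth',
        ),
    ),
    head=dict(
        type='HeatmapHead',
        in_channels=512,                      # backbone.norm4 is 512-d in the log
        out_channels=17,                      # COCO keypoints (head.final_layer)
        deconv_out_channels=(256, 256, 256),  # the three 4x4 deconv stages above
        init_cfg=dict(
            type='Normal',
            layer=['Conv2d', 'ConvTranspose2d'],
            std=0.001,  # matches the "NormalInit: mean=0, std=0.001" lines
        ),
    ),
)
```

The timestamped training log of this run follows: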
2023/07/22 14:34:10 - mmengine - WARNING - "FileClient" will be deprecated in future. Please use io functions in https://mmengine.readthedocs.io/en/latest/api/fileio.html#file-io
2023/07/22 14:34:10 - mmengine - WARNING - "HardDiskBackend" is the alias of "LocalBackend" and the former will be deprecated in future.
2023/07/22 14:34:10 - mmengine - INFO - Checkpoints will be saved to /root/mmpose/work_dirs/td-hm_uniformer-s-8xb128-210e_coco-256x192.
2023/07/22 14:34:22 - mmengine - INFO - Epoch(train)   [1][  50/4682]  lr: 6.193637e-06  eta: 2 days, 17:10:57  time: 0.238675  data_time: 0.088113  memory: 2769  loss: 0.002342  loss_kpt: 0.002342  acc_pose: 0.025147
2023/07/22 14:34:33 - mmengine - INFO - Epoch(train)   [1][ 100/4682]  lr: 1.244990e-05  eta: 2 days, 13:50:20  time: 0.214210  data_time: 0.070430  memory: 2769  loss: 0.002225  loss_kpt: 0.002225  acc_pose: 0.063447
2023/07/22 14:34:44 - mmengine - INFO - Epoch(train)   [1][ 150/4682]  lr: 1.870616e-05  eta: 2 days, 13:04:57  time: 0.218167  data_time: 0.071140  memory: 2769  loss: 0.002192  loss_kpt: 0.002192  acc_pose: 0.088471
2023/07/22 14:34:56 - mmengine - INFO - Epoch(train)   [1][ 200/4682]  lr: 2.496242e-05  eta: 2 days, 13:38:20  time: 0.231884  data_time: 0.085791  memory: 2769  loss: 0.002208  loss_kpt: 0.002208  acc_pose: 0.068286
2023/07/22 14:35:06 - mmengine - INFO - Epoch(train)   [1][ 250/4682]  lr: 3.121869e-05  eta: 2 days, 13:07:07  time: 0.216263  data_time: 0.070398  memory: 2769  loss: 0.002174  loss_kpt: 0.002174  acc_pose: 0.134264
2023/07/22 14:35:17 - mmengine - INFO - Epoch(train)   [1][ 300/4682]  lr: 3.747495e-05  eta: 2 days, 12:45:42  time: 0.216062  data_time: 0.071069  memory: 2769  loss: 0.002154  loss_kpt: 0.002154  acc_pose: 0.088193
2023/07/22 14:35:28 - mmengine - INFO - Epoch(train)   [1][ 350/4682]  lr: 4.373121e-05  eta: 2 days, 12:29:12  time: 0.215576  data_time: 0.070412  memory: 2769  loss: 0.002122  loss_kpt: 0.002122  acc_pose: 0.120076
2023/07/22 14:35:39 - mmengine - INFO - Epoch(train)   [1][ 400/4682]  lr: 4.998747e-05  eta: 2 days, 12:23:18  time: 0.218757  data_time: 0.069984  memory: 2769  loss: 0.002127  loss_kpt: 0.002127  acc_pose: 0.137982
2023/07/22 14:35:50 - mmengine - INFO - Epoch(train)   [1][ 450/4682]  lr: 5.624374e-05  eta: 2 days, 12:10:00  time: 0.213989  data_time: 0.068708  memory: 2769  loss: 0.002121  loss_kpt: 0.002121  acc_pose: 0.125615
2023/07/22 14:36:01 - mmengine - INFO - Epoch(train)   [1][ 500/4682]  lr: 6.250000e-05  eta: 2 days, 12:24:21  time: 0.229271  data_time: 0.084079  memory: 2769  loss: 0.002046  loss_kpt: 0.002046  acc_pose: 0.100560
2023/07/22 14:36:12 - mmengine - INFO - Epoch(train)   [1][ 550/4682]  lr: 6.250000e-05  eta: 2 days, 12:11:56  time: 0.213064  data_time: 0.067762  memory: 2769  loss: 0.002069  loss_kpt: 0.002069  acc_pose: 0.101174
2023/07/22 14:36:23 - mmengine - INFO - Epoch(train)   [1][ 600/4682]  lr: 6.250000e-05  eta: 2 days, 12:05:39  time: 0.216078  data_time: 0.068032  memory: 2769  loss: 0.002138  loss_kpt: 0.002138  acc_pose: 0.123952
2023/07/22 14:36:34 - mmengine - INFO - Epoch(train)   [1][ 650/4682]  lr: 6.250000e-05  eta: 2 days, 12:11:38  time: 0.225055  data_time: 0.075938  memory: 2769  loss: 0.002037  loss_kpt: 0.002037  acc_pose: 0.181745
2023/07/22 14:36:45 - mmengine - INFO - Epoch(train)   [1][ 700/4682]  lr: 6.250000e-05  eta: 2 days, 12:07:41  time: 0.217328  data_time: 0.070013  memory: 2769  loss: 0.002077  loss_kpt: 0.002077  acc_pose: 0.156886
2023/07/22 14:36:55 - mmengine - INFO - Epoch(train)   [1][ 750/4682]  lr: 6.250000e-05  eta: 2 days, 12:02:47  time: 0.215984  data_time: 0.070167  memory: 2769  loss: 0.002072  loss_kpt: 0.002072  acc_pose: 0.183034
2023/07/22 14:37:06 - mmengine - INFO - Epoch(train)   [1][ 800/4682]  lr: 6.250000e-05  eta: 2 days, 12:01:03  time: 0.218508  data_time: 0.070894  memory: 2769  loss: 0.002069  loss_kpt: 0.002069  acc_pose: 0.116141
2023/07/22 14:37:19 - mmengine - INFO - Epoch(train)   [1][ 850/4682]  lr: 6.250000e-05  eta: 2 days, 12:28:41  time: 0.248821  data_time: 0.101981  memory: 2769  loss: 0.002058  loss_kpt: 0.002058  acc_pose: 0.152378
2023/07/22 14:37:30 - mmengine - INFO - Epoch(train)   [1][ 900/4682]  lr: 6.250000e-05  eta: 2 days, 12:24:00  time: 0.216682  data_time: 0.069223  memory: 2769  loss: 0.002018  loss_kpt: 0.002018  acc_pose: 0.162742
2023/07/22 14:37:41 - mmengine - INFO - Epoch(train)   [1][ 950/4682]  lr: 6.250000e-05  eta: 2 days, 12:24:08  time: 0.221729  data_time: 0.070929  memory: 2769  loss: 0.002030  loss_kpt: 0.002030  acc_pose: 0.157651
2023/07/22 14:37:52 - mmengine - INFO - Exp name: td-hm_uniformer-s-8xb128-210e_coco-256x192_20230722_143251
2023/07/22 14:37:52 - mmengine - INFO - Epoch(train)   [1][1000/4682]  lr: 6.250000e-05  eta: 2 days, 12:22:31  time: 0.219609  data_time: 0.071100  memory: 2769  loss: 0.002040  loss_kpt: 0.002040  acc_pose: 0.212475
2023/07/22 14:38:03 - mmengine - INFO - Epoch(train)   [1][1050/4682]  lr: 6.250000e-05  eta: 2 days, 12:19:44  time: 0.217952  data_time: 0.070976  memory: 2769  loss: 0.002039  loss_kpt: 0.002039  acc_pose: 0.177589
2023/07/22 14:38:14 - mmengine - INFO - Epoch(train)   [1][1100/4682]  lr: 6.250000e-05  eta: 2 days, 12:17:50  time: 0.218819  data_time: 0.071152  memory: 2769  loss: 0.001997  loss_kpt: 0.001997  acc_pose: 0.181745
2023/07/22 14:38:24 - mmengine - INFO - Epoch(train)   [1][1150/4682]  lr: 6.250000e-05  eta: 2 days, 12:12:54  time: 0.214366  data_time: 0.069868  memory: 2769  loss: 0.002044  loss_kpt: 0.002044  acc_pose: 0.247371
2023/07/22 14:38:35 - mmengine - INFO - Epoch(train)   [1][1200/4682]  lr: 6.250000e-05  eta: 2 days, 12:09:43  time: 0.216320  data_time: 0.070588  memory: 2769  loss: 0.001998  loss_kpt: 0.001998  acc_pose: 0.182767
2023/07/22 14:38:46 - mmengine - INFO - Epoch(train)   [1][1250/4682]  lr: 6.250000e-05  eta: 2 days, 12:10:15  time: 0.221654  data_time: 0.072237  memory: 2769  loss: 0.002001  loss_kpt: 0.002001  acc_pose: 0.192956
2023/07/22 14:38:57 - mmengine - INFO - Epoch(train)   [1][1300/4682]  lr: 6.250000e-05  eta: 2 days, 12:07:12  time: 0.216048  data_time: 0.067855  memory: 2769  loss: 0.001997  loss_kpt: 0.001997  acc_pose: 0.205860
2023/07/22 14:39:08 - mmengine - INFO - Epoch(train)   [1][1350/4682]  lr: 6.250000e-05  eta: 2 days, 12:05:19  time: 0.217611  data_time: 0.069244  memory: 2769  loss: 0.002010  loss_kpt: 0.002010  acc_pose: 0.177655
2023/07/22 14:39:19 - mmengine - INFO - Epoch(train)   [1][1400/4682]  lr: 6.250000e-05  eta: 2 days, 12:03:52  time: 0.218147  data_time: 0.071651  memory: 2769  loss: 0.002012  loss_kpt: 0.002012  acc_pose: 0.169542
2023/07/22 14:39:30 - mmengine - INFO - Epoch(train)   [1][1450/4682]  lr: 6.250000e-05  eta: 2 days, 12:03:20  time: 0.219611  data_time: 0.072449  memory: 2769  loss: 0.001992  loss_kpt: 0.001992  acc_pose: 0.252034
2023/07/22 14:39:41 - mmengine - INFO - Epoch(train)   [1][1500/4682]  lr: 6.250000e-05  eta: 2 days, 12:09:23  time: 0.231631  data_time: 0.084115  memory: 2769  loss: 0.002023  loss_kpt: 0.002023  acc_pose: 0.185021
2023/07/22 14:39:52 - mmengine - INFO - Epoch(train)   [1][1550/4682]  lr: 6.250000e-05  eta: 2 days, 12:07:28  time: 0.217318  data_time: 0.070157  memory: 2769  loss: 0.001949  loss_kpt: 0.001949  acc_pose: 0.195677
2023/07/22 14:40:03 - mmengine - INFO - Epoch(train)   [1][1600/4682]  lr: 6.250000e-05  eta: 2 days, 12:05:48  time: 0.217586  data_time: 0.070765  memory: 2769  loss: 0.002003  loss_kpt: 0.002003  acc_pose: 0.202623
2023/07/22 14:40:14 - mmengine - INFO - Epoch(train)   [1][1650/4682]  lr: 6.250000e-05  eta: 2 days, 12:04:59  time: 0.219127  data_time: 0.071461  memory: 2769  loss: 0.001976  loss_kpt: 0.001976  acc_pose: 0.170747
2023/07/22 14:40:25 - mmengine - INFO - Epoch(train)   [1][1700/4682]  lr: 6.250000e-05  eta: 2 days, 12:05:16  time: 0.221335  data_time: 0.070725  memory: 2769  loss: 0.001958  loss_kpt: 0.001958  acc_pose: 0.270792
2023/07/22 14:40:36 - mmengine - INFO - Epoch(train)   [1][1750/4682]  lr: 6.250000e-05  eta: 2 days, 12:02:18  time: 0.214426  data_time: 0.068187  memory: 2769  loss: 0.001904  loss_kpt: 0.001904  acc_pose: 0.204290
2023/07/22 14:40:47 - mmengine - INFO - Epoch(train)   [1][1800/4682]  lr: 6.250000e-05  eta: 2 days, 12:01:06  time: 0.217989  data_time: 0.070411  memory: 2769  loss: 0.001948  loss_kpt: 0.001948  acc_pose: 0.202963
2023/07/22 14:40:57 - mmengine - INFO - Epoch(train)   [1][1850/4682]  lr: 6.250000e-05  eta: 2 days, 11:58:54  time: 0.215582  data_time: 0.068443  memory: 2769  loss: 0.001918  loss_kpt: 0.001918  acc_pose: 0.250597
2023/07/22 14:41:08 - mmengine - INFO - Epoch(train)   [1][1900/4682]  lr: 6.250000e-05  eta: 2 days, 11:57:27  time: 0.217119  data_time: 0.070256  memory: 2769  loss: 0.001875  loss_kpt: 0.001875  acc_pose: 0.208480
2023/07/22 14:41:19 - mmengine - INFO - Epoch(train)   [1][1950/4682]  lr: 6.250000e-05  eta: 2 days, 11:57:06  time: 0.219533  data_time: 0.069367  memory: 2769  loss: 0.001934  loss_kpt: 0.001934  acc_pose: 0.274872
2023/07/22 14:41:30 - mmengine - INFO - Exp name: td-hm_uniformer-s-8xb128-210e_coco-256x192_20230722_143251
2023/07/22 14:41:30 - mmengine - INFO - Epoch(train)   [1][2000/4682]  lr: 6.250000e-05  eta: 2 days, 11:55:49  time: 0.217253  data_time: 0.068804  memory: 2769  loss: 0.001888  loss_kpt: 0.001888  acc_pose: 0.344551
2023/07/22 14:41:41 - mmengine - INFO - Epoch(train)   [1][2050/4682]  lr: 6.250000e-05  eta: 2 days, 11:53:12  time: 0.213803  data_time: 0.066914  memory: 2769  loss: 0.001902  loss_kpt: 0.001902  acc_pose: 0.159386
2023/07/22 14:41:52 - mmengine - INFO - Epoch(train)   [1][2100/4682]  lr: 6.250000e-05  eta: 2 days, 11:51:51  time: 0.216723  data_time: 0.069914  memory: 2769  loss: 0.001916  loss_kpt: 0.001916  acc_pose: 0.237694
2023/07/22 14:42:03 - mmengine - INFO - Epoch(train)   [1][2150/4682]  lr: 6.250000e-05  eta: 2 days, 11:50:26  time: 0.216423  data_time: 0.069900  memory: 2769  loss: 0.001918  loss_kpt: 0.001918  acc_pose: 0.260449
2023/07/22 14:42:13 - mmengine - INFO - Epoch(train)   [1][2200/4682]  lr: 6.250000e-05  eta: 2 days, 11:49:06  time: 0.216480  data_time: 0.069685  memory: 2769  loss: 0.001904  loss_kpt: 0.001904  acc_pose: 0.303737
2023/07/22 14:42:24 - mmengine - INFO - Epoch(train)   [1][2250/4682]  lr: 6.250000e-05  eta: 2 days, 11:48:44  time: 0.219002  data_time: 0.070634  memory: 2769  loss: 0.001892  loss_kpt: 0.001892  acc_pose: 0.248940
2023/07/22 14:42:36 - mmengine - INFO - Epoch(train)   [1][2300/4682]  lr: 6.250000e-05  eta: 2 days, 11:54:11  time: 0.235337  data_time: 0.086533  memory: 2769  loss: 0.001909  loss_kpt: 0.001909  acc_pose: 0.245801
2023/07/22 14:42:47 - mmengine - INFO - Epoch(train)   [1][2350/4682]  lr: 6.250000e-05  eta: 2 days, 11:54:04  time: 0.220070  data_time: 0.072084  memory: 2769  loss: 0.001897  loss_kpt: 0.001897  acc_pose: 0.228306
2023/07/22 14:42:58 - mmengine - INFO - Epoch(train)   [1][2400/4682]  lr: 6.250000e-05  eta: 2 days, 11:52:57  time: 0.217078  data_time: 0.068393  memory: 2769  loss: 0.001846  loss_kpt: 0.001846  acc_pose: 0.324069
2023/07/22 14:43:09 - mmengine - INFO - Epoch(train)   [1][2450/4682]  lr: 6.250000e-05  eta: 2 days, 11:52:16  time: 0.218289  data_time: 0.070318  memory: 2769  loss: 0.001890  loss_kpt: 0.001890  acc_pose: 0.288004
2023/07/22 14:43:20 - mmengine - INFO - Epoch(train)   [1][2500/4682]  lr: 6.250000e-05  eta: 2 days, 11:51:27  time: 0.217856  data_time: 0.070853  memory: 2769  loss: 0.001838  loss_kpt: 0.001838  acc_pose: 0.192583
2023/07/22 14:43:31 - mmengine - INFO - Epoch(train)   [1][2550/4682]  lr: 6.250000e-05  eta: 2 days, 11:50:23  time: 0.216951  data_time: 0.069948  memory: 2769  loss: 0.001860  loss_kpt: 0.001860  acc_pose: 0.141178
2023/07/22 14:43:42 - mmengine - INFO - Epoch(train)   [1][2600/4682]  lr: 6.250000e-05  eta: 2 days, 11:53:43  time: 0.230845  data_time: 0.082083  memory: 2769  loss: 0.001872  loss_kpt: 0.001872  acc_pose: 0.358413
2023/07/22 14:43:53 - mmengine - INFO - Epoch(train)   [1][2650/4682]  lr: 6.250000e-05  eta: 2 days, 11:52:38  time: 0.216960  data_time: 0.067662  memory: 2769  loss: 0.001869  loss_kpt: 0.001869  acc_pose: 0.262805
2023/07/22 14:44:04 - mmengine - INFO - Epoch(train)   [1][2700/4682]  lr: 6.250000e-05  eta: 2 days, 11:52:27  time: 0.219842  data_time: 0.070447  memory: 2769  loss: 0.001856  loss_kpt: 0.001856  acc_pose: 0.302262
2023/07/22 14:44:15 - mmengine - INFO - Epoch(train)   [1][2750/4682]  lr: 6.250000e-05  eta: 2 days, 11:50:47  time: 0.214871  data_time: 0.066616  memory: 2769  loss: 0.001869  loss_kpt: 0.001869  acc_pose: 0.253689
2023/07/22 14:44:26 - mmengine - INFO - Epoch(train)   [1][2800/4682]  lr: 6.250000e-05  eta: 2 days, 11:49:37  time: 0.216359  data_time: 0.068392  memory: 2769  loss: 0.001831  loss_kpt: 0.001831  acc_pose: 0.297514
2023/07/22 14:44:36 - mmengine - INFO - Epoch(train)   [1][2850/4682]  lr: 6.250000e-05  eta: 2 days, 11:48:25  time: 0.216140  data_time: 0.067781  memory: 2769  loss: 0.001805  loss_kpt: 0.001805  acc_pose: 0.306196
2023/07/22 14:44:48 - mmengine - INFO - Epoch(train)   [1][2900/4682]  lr: 6.250000e-05  eta: 2 days, 11:53:33  time: 0.238467  data_time: 0.088258  memory: 2769  loss: 0.001801  loss_kpt: 0.001801  acc_pose: 0.312398
2023/07/22 14:44:59 - mmengine - INFO - Epoch(train)   [1][2950/4682]  lr: 6.250000e-05  eta: 2 days, 11:53:38  time: 0.220876  data_time: 0.071713  memory: 2769  loss: 0.001849  loss_kpt: 0.001849  acc_pose: 0.312699
2023/07/22 14:45:10 - mmengine - INFO - Exp name: td-hm_uniformer-s-8xb128-210e_coco-256x192_20230722_143251
2023/07/22 14:45:10 - mmengine - INFO - Epoch(train)   [1][3000/4682]  lr: 6.250000e-05  eta: 2 days, 11:53:36  time: 0.220552  data_time: 0.070842  memory: 2769  loss: 0.001837  loss_kpt: 0.001837  acc_pose: 0.216881
2023/07/22 14:45:21 - mmengine - INFO - Epoch(train)   [1][3050/4682]  lr: 6.250000e-05  eta: 2 days, 11:53:20  time: 0.219633  data_time: 0.069806  memory: 2769  loss: 0.001814  loss_kpt: 0.001814  acc_pose: 0.309035
2023/07/22 14:45:32 - mmengine - INFO - Epoch(train)   [1][3100/4682]  lr: 6.250000e-05  eta: 2 days, 11:52:54  time: 0.219044  data_time: 0.071651  memory: 2769  loss: 0.001802  loss_kpt: 0.001802  acc_pose: 0.220713
2023/07/22 14:45:43 - mmengine - INFO - Epoch(train)   [1][3150/4682]  lr: 6.250000e-05  eta: 2 days, 11:51:56  time: 0.216899  data_time: 0.068939  memory: 2769  loss: 0.001801  loss_kpt: 0.001801  acc_pose: 0.334904
2023/07/22 14:45:54 - mmengine - INFO - Epoch(train)   [1][3200/4682]  lr: 6.250000e-05  eta: 2 days, 11:51:20  time: 0.218274  data_time: 0.071068  memory: 2769  loss: 0.001809  loss_kpt: 0.001809  acc_pose: 0.260054
2023/07/22 14:46:05 - mmengine - INFO - Epoch(train)   [1][3250/4682]  lr: 6.250000e-05  eta: 2 days, 11:50:28  time: 0.217118  data_time: 0.068969  memory: 2769  loss: 0.001812  loss_kpt: 0.001812  acc_pose: 0.250341
2023/07/22 14:46:16 - mmengine - INFO - Epoch(train)   [1][3300/4682]  lr: 6.250000e-05  eta: 2 days, 11:49:52  time: 0.218194  data_time: 0.070332  memory: 2769  loss: 0.001810  loss_kpt: 0.001810  acc_pose: 0.296835
2023/07/22 14:46:26 - mmengine - INFO - Epoch(train)   [1][3350/4682]  lr: 6.250000e-05  eta: 2 days, 11:48:04  time: 0.213188  data_time: 0.064518  memory: 2769  loss: 0.001807  loss_kpt: 0.001807  acc_pose: 0.303540
2023/07/22 14:46:38 - mmengine - INFO - Epoch(train)   [1][3400/4682]  lr: 6.250000e-05  eta: 2 days, 11:51:11  time: 0.233414  data_time: 0.085296  memory: 2769  loss: 0.001820  loss_kpt: 0.001820  acc_pose: 0.321955
2023/07/22 14:46:49 - mmengine - INFO - Epoch(train)   [1][3450/4682]  lr: 6.250000e-05  eta: 2 days, 11:50:17  time: 0.216865  data_time: 0.067705  memory: 2769  loss: 0.001838  loss_kpt: 0.001838  acc_pose: 0.429913
2023/07/22 14:47:00 - mmengine - INFO - Epoch(train)   [1][3500/4682]  lr: 6.250000e-05  eta: 2 days, 11:50:18  time: 0.220740  data_time: 0.072034  memory: 2769  loss: 0.001760  loss_kpt: 0.001760  acc_pose: 0.327256
2023/07/22 14:47:11 - mmengine - INFO - Epoch(train)   [1][3550/4682]  lr: 6.250000e-05  eta: 2 days, 11:51:22  time: 0.225282  data_time: 0.073994  memory: 2769  loss: 0.001783  loss_kpt: 0.001783  acc_pose: 0.312063
2023/07/22 14:47:22 - mmengine - INFO - Epoch(train)   [1][3600/4682]  lr: 6.250000e-05  eta: 2 days, 11:50:50  time: 0.218429  data_time: 0.070870  memory: 2769  loss: 0.001785  loss_kpt: 0.001785  acc_pose: 0.276079
2023/07/22 14:47:33 - mmengine - INFO - Epoch(train)   [1][3650/4682]  lr: 6.250000e-05  eta: 2 days, 11:50:37  time: 0.219818  data_time: 0.070142  memory: 2769  loss: 0.001771  loss_kpt: 0.001771  acc_pose: 0.268007
2023/07/22 14:47:45 - mmengine - INFO - Epoch(train)   [1][3700/4682]  lr: 6.250000e-05  eta: 2 days, 11:53:25  time: 0.233418  data_time: 0.084324  memory: 2769  loss: 0.001789  loss_kpt: 0.001789  acc_pose: 0.282685
2023/07/22 14:47:56 - mmengine - INFO - Epoch(train)   [1][3750/4682]  lr: 6.250000e-05  eta: 2 days, 11:52:16  time: 0.215661  data_time: 0.068282  memory: 2769  loss: 0.001733  loss_kpt: 0.001733  acc_pose: 0.289396
2023/07/22 14:48:07 - mmengine - INFO - Epoch(train)   [1][3800/4682]  lr: 6.250000e-05  eta: 2 days, 11:52:05  time: 0.220118  data_time: 0.071339  memory: 2769  loss: 0.001777  loss_kpt: 0.001777  acc_pose: 0.330327
2023/07/22 14:48:18 - mmengine - INFO - Epoch(train)   [1][3850/4682]  lr: 6.250000e-05  eta: 2 days, 11:51:53  time: 0.219957  data_time: 0.071816  memory: 2769  loss: 0.001771  loss_kpt: 0.001771  acc_pose: 0.306766
2023/07/22 14:48:29 - mmengine - INFO - Epoch(train)   [1][3900/4682]  lr: 6.250000e-05  eta: 2 days, 11:51:44  time: 0.220199  data_time: 0.070299  memory: 2769  loss: 0.001763  loss_kpt: 0.001763  acc_pose: 0.408379
2023/07/22 14:48:40 - mmengine - INFO - Epoch(train)   [1][3950/4682]  lr: 6.250000e-05  eta: 2 days, 11:51:41  time: 0.220680  data_time: 0.071206  memory: 2769  loss: 0.001723  loss_kpt: 0.001723  acc_pose: 0.331735
2023/07/22 14:48:51 - mmengine - INFO - Exp name: td-hm_uniformer-s-8xb128-210e_coco-256x192_20230722_143251
2023/07/22 14:48:51 - mmengine - INFO - Epoch(train)   [1][4000/4682]  lr: 6.250000e-05  eta: 2 days, 11:51:02  time: 0.217765  data_time: 0.067463  memory: 2769  loss: 0.001758  loss_kpt: 0.001758  acc_pose: 0.276604
2023/07/22 14:49:01 - mmengine - INFO - Epoch(train)   [1][4050/4682]  lr: 6.250000e-05  eta: 2 days, 11:50:21  time: 0.217575  data_time: 0.069876  memory: 2769  loss: 0.001735  loss_kpt: 0.001735  acc_pose: 0.264040
2023/07/22 14:49:12 - mmengine - INFO - Epoch(train)   [1][4100/4682]  lr: 6.250000e-05  eta: 2 days, 11:50:06  time: 0.219668  data_time: 0.071190  memory: 2769  loss: 0.001745  loss_kpt: 0.001745  acc_pose: 0.303010
2023/07/22 14:49:23 - mmengine - INFO - Epoch(train)   [1][4150/4682]  lr: 6.250000e-05  eta: 2 days, 11:49:07  time: 0.215914  data_time: 0.067799  memory: 2769  loss: 0.001749  loss_kpt: 0.001749  acc_pose: 0.337708
2023/07/22 14:49:34 - mmengine - INFO - Epoch(train)   [1][4200/4682]  lr: 6.250000e-05  eta: 2 days, 11:48:51  time: 0.219527  data_time: 0.072417  memory: 2769  loss: 0.001757  loss_kpt: 0.001757  acc_pose: 0.374240
2023/07/22 14:49:46 - mmengine - INFO - Epoch(train)   [1][4250/4682]  lr: 6.250000e-05  eta: 2 days, 11:50:34  time: 0.229830  data_time: 0.082045  memory: 2769  loss: 0.001735  loss_kpt: 0.001735  acc_pose: 0.337551
2023/07/22 14:49:57 - mmengine - INFO - Epoch(train)   [1][4300/4682]  lr: 6.250000e-05  eta: 2 days, 11:49:57  time: 0.217770  data_time: 0.069308  memory: 2769  loss: 0.001732  loss_kpt: 0.001732  acc_pose: 0.346927
2023/07/22 14:50:07 - mmengine - INFO - Epoch(train)   [1][4350/4682]  lr: 6.250000e-05  eta: 2 days, 11:49:09  time: 0.216777  data_time: 0.069380  memory: 2769  loss: 0.001723  loss_kpt: 0.001723  acc_pose: 0.381805
2023/07/22 14:50:18 - mmengine - INFO - Epoch(train)   [1][4400/4682]  lr: 6.250000e-05  eta: 2 days, 11:48:13  time: 0.215922  data_time: 0.067738  memory: 2769  loss: 0.001733  loss_kpt: 0.001733  acc_pose: 0.369610
2023/07/22 14:50:29 - mmengine - INFO - Epoch(train)   [1][4450/4682]  lr: 6.250000e-05  eta: 2 days, 11:48:20  time: 0.221661  data_time: 0.070542  memory: 2769  loss: 0.001743  loss_kpt: 0.001743  acc_pose: 0.432129
2023/07/22 14:50:40 - mmengine - INFO - Epoch(train)   [1][4500/4682]  lr: 6.250000e-05  eta: 2 days, 11:47:48  time: 0.218011  data_time: 0.069297  memory: 2769  loss: 0.001714  loss_kpt: 0.001714  acc_pose: 0.343639
2023/07/22 14:50:51 - mmengine - INFO - Epoch(train)   [1][4550/4682]  lr: 6.250000e-05  eta: 2 days, 11:47:11  time: 0.217507  data_time: 0.069888  memory: 2769  loss: 0.001740  loss_kpt: 0.001740  acc_pose: 0.400089
2023/07/22 14:51:02 - mmengine - INFO - Epoch(train)   [1][4600/4682]  lr: 6.250000e-05  eta: 2 days, 11:46:46  time: 0.218590  data_time: 0.070766  memory: 2769  loss: 0.001733  loss_kpt: 0.001733  acc_pose: 0.294756
2023/07/22 14:51:13 - mmengine - INFO - Epoch(train)   [1][4650/4682]  lr: 6.250000e-05  eta: 2 days, 11:46:15  time: 0.218027  data_time: 0.069461  memory: 2769  loss: 0.001701  loss_kpt: 0.001701  acc_pose: 0.332855
2023/07/22 14:51:20 - mmengine - INFO - Exp name: td-hm_uniformer-s-8xb128-210e_coco-256x192_20230722_143251
2023/07/22 14:51:31 - mmengine - INFO - Epoch(train)   [2][  50/4682]  lr: 6.250000e-05  eta: 2 days, 11:46:26  time: 0.225645  data_time: 0.075325  memory: 2769  loss: 0.001687  loss_kpt: 0.001687  acc_pose: 0.337012
2023/07/22 14:51:42 - mmengine - INFO - Epoch(train)   [2][ 100/4682]  lr: 6.250000e-05  eta: 2 days, 11:46:55  time: 0.223896  data_time: 0.072022  memory: 2769  loss: 0.001708  loss_kpt: 0.001708  acc_pose: 0.409553
2023/07/22 14:51:54 - mmengine - INFO - Epoch(train)   [2][ 150/4682]  lr: 6.250000e-05  eta: 2 days, 11:47:22  time: 0.223711  data_time: 0.073127  memory: 2769  loss: 0.001702  loss_kpt: 0.001702  acc_pose: 0.340780
2023/07/22 14:52:05 - mmengine - INFO - Epoch(train)   [2][ 200/4682]  lr: 6.250000e-05  eta: 2 days, 11:49:35  time: 0.234287  data_time: 0.085480  memory: 2769  loss: 0.001711  loss_kpt: 0.001711  acc_pose: 0.273863
2023/07/22 14:52:16 - mmengine - INFO - Epoch(train)   [2][ 250/4682]  lr: 6.250000e-05  eta: 2 days, 11:49:30  time: 0.220832  data_time: 0.072730  memory: 2769  loss: 0.001690  loss_kpt: 0.001690  acc_pose: 0.433614
2023/07/22 14:52:27 - mmengine - INFO - Epoch(train)   [2][ 300/4682]  lr: 6.250000e-05  eta: 2 days, 11:49:15  time: 0.219682  data_time: 0.070356  memory: 2769  loss: 0.001731  loss_kpt: 0.001731  acc_pose: 0.315223
2023/07/22 14:52:31 - mmengine - INFO - Exp name: td-hm_uniformer-s-8xb128-210e_coco-256x192_20230722_143251
2023/07/22 14:52:38 - mmengine - INFO - Epoch(train)   [2][ 350/4682]  lr: 6.250000e-05  eta: 2 days, 11:49:05  time: 0.220255  data_time: 0.069510  memory: 2769  loss: 0.001698  loss_kpt: 0.001698  acc_pose: 0.330739
2023/07/22 14:52:49 - mmengine - INFO - Epoch(train)   [2][ 400/4682]  lr: 6.250000e-05  eta: 2 days, 11:48:45  time: 0.219247  data_time: 0.069325  memory: 2769  loss: 0.001724  loss_kpt: 0.001724  acc_pose: 0.337666
2023/07/22 14:53:01 - mmengine - INFO - Epoch(train)   [2][ 450/4682]  lr: 6.250000e-05  eta: 2 days, 11:50:30  time: 0.232288  data_time: 0.083366  memory: 2769  loss: 0.001677  loss_kpt: 0.001677  acc_pose: 0.298622
2023/07/22 14:53:12 - mmengine - INFO - Epoch(train)   [2][ 500/4682]  lr: 6.250000e-05  eta: 2 days, 11:51:27  time: 0.227438  data_time: 0.078858  memory: 2769  loss: 0.001654  loss_kpt: 0.001654  acc_pose: 0.362969
2023/07/22 14:53:23 - mmengine - INFO - Epoch(train)   [2][ 550/4682]  lr: 6.250000e-05  eta: 2 days, 11:50:50  time: 0.217559  data_time: 0.068187  memory: 2769  loss: 0.001678  loss_kpt: 0.001678  acc_pose: 0.286935
2023/07/22 14:53:35 - mmengine - INFO - Epoch(train)   [2][ 600/4682]  lr: 6.250000e-05  eta: 2 days, 11:53:01  time: 0.235668  data_time: 0.070694  memory: 2769  loss: 0.001674  loss_kpt: 0.001674  acc_pose: 0.399984
2023/07/22 14:53:46 - mmengine - INFO - Epoch(train)   [2][ 650/4682]  lr: 6.250000e-05  eta: 2 days, 11:52:46  time: 0.220053  data_time: 0.071188  memory: 2769  loss: 0.001672  loss_kpt: 0.001672  acc_pose: 0.365492
2023/07/22 14:53:57 - mmengine - INFO - Epoch(train)   [2][ 700/4682]  lr: 6.250000e-05  eta: 2 days, 11:51:57  time: 0.216242  data_time: 0.066703  memory: 2769  loss: 0.001661  loss_kpt: 0.001661  acc_pose: 0.302438
2023/07/22 14:54:08 - mmengine - INFO - Epoch(train)   [2][ 750/4682]  lr: 6.250000e-05  eta: 2 days, 11:51:47  time: 0.220451  data_time: 0.071079  memory: 2769  loss: 0.001660  loss_kpt: 0.001660  acc_pose: 0.423986
2023/07/22 14:54:19 - mmengine - INFO - Epoch(train)   [2][ 800/4682]  lr: 6.250000e-05  eta: 2 days, 11:51:47  time: 0.221668  data_time: 0.072982  memory: 2769  loss: 0.001686  loss_kpt: 0.001686  acc_pose: 0.256086
2023/07/22 14:54:30 - mmengine - INFO - Epoch(train)   [2][ 850/4682]  lr: 6.250000e-05  eta: 2 days, 11:51:04  time: 0.216771  data_time: 0.067996  memory: 2769  loss: 0.001644  loss_kpt: 0.001644  acc_pose: 0.331493
2023/07/22 14:54:41 - mmengine - INFO - Epoch(train)   [2][ 900/4682]  lr: 6.250000e-05  eta: 2 days, 11:50:53  time: 0.220389  data_time: 0.070694  memory: 2769  loss: 0.001686  loss_kpt: 0.001686  acc_pose: 0.356108
2023/07/22 14:54:52 - mmengine - INFO - Epoch(train)   [2][ 950/4682]  lr: 6.250000e-05  eta: 2 days, 11:50:16  time: 0.217441  data_time: 0.068453  memory: 2769  loss: 0.001645  loss_kpt: 0.001645  acc_pose: 0.416431
2023/07/22 14:55:02 - mmengine - INFO - Epoch(train)   [2][1000/4682]  lr: 6.250000e-05  eta: 2 days, 11:49:32  time: 0.216543  data_time: 0.068501  memory: 2769  loss: 0.001684  loss_kpt: 0.001684  acc_pose: 0.459218
2023/07/22 14:55:13 - mmengine - INFO - Epoch(train)   [2][1050/4682]  lr: 6.250000e-05  eta: 2 days, 11:49:11  time: 0.219055  data_time: 0.070455  memory: 2769  loss: 0.001674  loss_kpt: 0.001674  acc_pose: 0.400068
2023/07/22 14:55:24 - mmengine - INFO - Epoch(train)   [2][1100/4682]  lr: 6.250000e-05  eta: 2 days, 11:49:26  time: 0.223386  data_time: 0.070582  memory: 2769  loss: 0.001666  loss_kpt: 0.001666  acc_pose: 0.369216
2023/07/22 14:55:36 - mmengine - INFO - Epoch(train)   [2][1150/4682]  lr: 6.250000e-05  eta: 2 days, 11:49:20  time: 0.220941  data_time: 0.069895  memory: 2769  loss: 0.001662  loss_kpt: 0.001662  acc_pose: 0.378371
2023/07/22 14:55:47 - mmengine - INFO - Epoch(train)   [2][1200/4682]  lr: 6.250000e-05  eta: 2 days, 11:50:56  time: 0.233237  data_time: 0.083369  memory: 2769  loss: 0.001662  loss_kpt: 0.001662  acc_pose: 0.401072
2023/07/22 14:55:58 - mmengine - INFO - Epoch(train)   [2][1250/4682]  lr: 6.250000e-05  eta: 2 days, 11:50:19  time: 0.217277  data_time: 0.068816  memory: 2769  loss: 0.001647  loss_kpt: 0.001647  acc_pose: 0.406680
2023/07/22 14:56:09 - mmengine - INFO - Epoch(train)   [2][1300/4682]  lr: 6.250000e-05  eta: 2 days, 11:49:59  time: 0.219428  data_time: 0.068816  memory: 2769  loss: 0.001606  loss_kpt: 0.001606  acc_pose: 0.403298
2023/07/22 14:56:13 - mmengine - INFO - Exp name: td-hm_uniformer-s-8xb128-210e_coco-256x192_20230722_143251
2023/07/22 14:56:20 - mmengine - INFO - Epoch(train)   [2][1350/4682]  lr: 6.250000e-05  eta: 2 days, 11:49:27  time: 0.217827  data_time: 0.068824  memory: 2769  loss: 0.001631  loss_kpt: 0.001631  acc_pose: 0.400999
2023/07/22 14:56:31 - mmengine - INFO - Epoch(train)   [2][1400/4682]  lr: 6.250000e-05  eta: 2 days, 11:49:07  time: 0.219201  data_time: 0.067744  memory: 2769  loss: 0.001673  loss_kpt: 0.001673  acc_pose: 0.393899
2023/07/22 14:56:42 - mmengine - INFO - Epoch(train)   [2][1450/4682]  lr: 6.250000e-05  eta: 2 days, 11:49:01  time: 0.221023  data_time: 0.070715  memory: 2769  loss: 0.001647  loss_kpt: 0.001647  acc_pose: 0.310082
2023/07/22 14:56:53 - mmengine - INFO - Epoch(train)   [2][1500/4682]  lr: 6.250000e-05  eta: 2 days, 11:48:49  time: 0.220228  data_time: 0.070339  memory: 2769  loss: 0.001657  loss_kpt: 0.001657  acc_pose: 0.389150
2023/07/22 14:57:04 - mmengine - INFO - Epoch(train)   [2][1550/4682]  lr: 6.250000e-05  eta: 2 days, 11:48:03  time: 0.215965  data_time: 0.066791  memory: 2769  loss: 0.001663  loss_kpt: 0.001663  acc_pose: 0.416658
2023/07/22 14:57:15 - mmengine - INFO - Epoch(train)   [2][1600/4682]  lr: 6.250000e-05  eta: 2 days, 11:49:41  time: 0.234430  data_time: 0.068986  memory: 2769  loss: 0.001657  loss_kpt: 0.001657  acc_pose: 0.388935
2023/07/22 14:57:26 - mmengine - INFO - Epoch(train)   [2][1650/4682]  lr: 6.250000e-05  eta: 2 days, 11:49:02  time: 0.216811  data_time: 0.067282  memory: 2769  loss: 0.001636  loss_kpt: 0.001636  acc_pose: 0.377669
2023/07/22 14:57:37 - mmengine - INFO - Epoch(train)   [2][1700/4682]  lr: 6.250000e-05  eta: 2 days, 11:48:46  time: 0.219776  data_time: 0.068558  memory: 2769  loss: 0.001636  loss_kpt: 0.001636  acc_pose: 0.409638
2023/07/22 14:57:48 - mmengine - INFO - Epoch(train)   [2][1750/4682]  lr: 6.250000e-05  eta: 2 days, 11:48:21  time: 0.218620  data_time: 0.070469  memory: 2769  loss: 0.001649  loss_kpt: 0.001649  acc_pose: 0.349039
2023/07/22 14:57:59 - mmengine - INFO - Epoch(train)   [2][1800/4682]  lr: 6.250000e-05  eta: 2 days, 11:48:04  time: 0.219574  data_time: 0.070088  memory: 2769  loss: 0.001590  loss_kpt: 0.001590  acc_pose: 0.519241
2023/07/22 14:58:10 - mmengine - INFO - Epoch(train)   [2][1850/4682]  lr: 6.250000e-05  eta: 2 days, 11:47:28  time: 0.217086  data_time: 0.067470  memory: 2769  loss: 0.001633  loss_kpt: 0.001633  acc_pose: 0.438281
2023/07/22 14:58:21 - mmengine - INFO - Epoch(train)   [2][1900/4682]  lr: 6.250000e-05  eta: 2 days, 11:47:04  time: 0.218644  data_time: 0.070237  memory: 2769  loss: 0.001624  loss_kpt: 0.001624  acc_pose: 0.376196
2023/07/22 14:58:32 - mmengine - INFO - Epoch(train)   [2][1950/4682]  lr: 6.250000e-05  eta: 2 days, 11:46:56  time: 0.220802  data_time: 0.069396  memory: 2769  loss: 0.001622  loss_kpt: 0.001622  acc_pose: 0.458305
2023/07/22 14:58:43 - mmengine - INFO - Epoch(train)   [2][2000/4682]  lr: 6.250000e-05  eta: 2 days, 11:46:42  time: 0.220006  data_time: 0.069541  memory: 2769  loss: 0.001628  loss_kpt: 0.001628  acc_pose: 0.244268
2023/07/22 14:58:54 - mmengine - INFO - Epoch(train)   [2][2050/4682]  lr: 6.250000e-05  eta: 2 days, 11:46:12  time: 0.217762  data_time: 0.068479  memory: 2769  loss: 0.001629  loss_kpt: 0.001629  acc_pose: 0.360035
2023/07/22 14:59:05 - mmengine - INFO - Epoch(train)   [2][2100/4682]  lr: 6.250000e-05  eta: 2 days, 11:45:33  time: 0.216387  data_time: 0.067631  memory: 2769  loss: 0.001620  loss_kpt: 0.001620  acc_pose: 0.432375
2023/07/22 14:59:16 - mmengine - INFO - Epoch(train)   [2][2150/4682]  lr: 6.250000e-05  eta: 2 days, 11:45:02  time: 0.217608  data_time: 0.067910  memory: 2769  loss: 0.001639  loss_kpt: 0.001639  acc_pose: 0.461053
2023/07/22 14:59:27 - mmengine - INFO - Epoch(train)   [2][2200/4682]  lr: 6.250000e-05  eta: 2 days, 11:46:20  time: 0.232736  data_time: 0.083698  memory: 2769  loss: 0.001621  loss_kpt: 0.001621  acc_pose: 0.363318
2023/07/22 14:59:38 - mmengine - INFO - Epoch(train)   [2][2250/4682]  lr: 6.250000e-05  eta: 2 days, 11:46:09  time: 0.220476  data_time: 0.069601  memory: 2769  loss: 0.001612  loss_kpt: 0.001612  acc_pose: 0.423775
2023/07/22 14:59:49 - mmengine - INFO - Epoch(train)   [2][2300/4682]  lr: 6.250000e-05  eta: 2 days, 11:45:40  time: 0.217828  data_time: 0.067601  memory: 2769  loss: 0.001628  loss_kpt: 0.001628  acc_pose: 0.288921
2023/07/22 14:59:53 - mmengine - INFO - Exp name: td-hm_uniformer-s-8xb128-210e_coco-256x192_20230722_143251
2023/07/22 15:00:00 - mmengine - INFO - Epoch(train)   [2][2350/4682]  lr: 6.250000e-05  eta: 2 days, 11:44:54  time: 0.215318  data_time: 0.066771  memory: 2769  loss: 0.001574  loss_kpt: 0.001574  acc_pose: 0.390623
2023/07/22 15:00:11 - mmengine - INFO - Epoch(train)   [2][2400/4682]  lr: 6.250000e-05  eta: 2 days, 11:44:45  time: 0.220696  data_time: 0.070551  memory: 2769  loss: 0.001605  loss_kpt: 0.001605  acc_pose: 0.384320
2023/07/22 15:00:22 - mmengine - INFO - Epoch(train)   [2][2450/4682]  lr: 6.250000e-05  eta: 2 days, 11:44:45  time: 0.221823  data_time: 0.069686  memory: 2769  loss: 0.001636  loss_kpt: 0.001636  acc_pose: 0.353418
2023/07/22 15:00:33 - mmengine - INFO - Epoch(train)   [2][2500/4682]  lr: 6.250000e-05  eta: 2 days, 11:44:15  time: 0.217565  data_time: 0.069915  memory: 2769  loss: 0.001615  loss_kpt: 0.001615  acc_pose: 0.316838
2023/07/22 15:00:44 - mmengine - INFO - Epoch(train)   [2][2550/4682]  lr: 6.250000e-05  eta: 2 days, 11:43:58  time: 0.219580  data_time: 0.069860  memory: 2769  loss: 0.001571  loss_kpt: 0.001571  acc_pose: 0.376523
2023/07/22 15:00:56 - mmengine - INFO - Epoch(train)   [2][2600/4682]  lr: 6.250000e-05  eta: 2 days, 11:45:16  time: 0.233512  data_time: 0.069192  memory: 2769  loss: 0.001588  loss_kpt: 0.001588  acc_pose: 0.480474
2023/07/22 15:01:06 - mmengine - INFO - Epoch(train)   [2][2650/4682]  lr: 6.250000e-05  eta: 2 days, 11:44:29  time: 0.215090  data_time: 0.066871  memory: 2769  loss: 0.001616  loss_kpt: 0.001616  acc_pose: 0.384666
2023/07/22 15:01:17 - mmengine - INFO - Epoch(train)   [2][2700/4682]  lr: 6.250000e-05  eta: 2 days, 11:43:47  time: 0.215675  data_time: 0.067341  memory: 2769  loss: 0.001601  loss_kpt: 0.001601  acc_pose: 0.381176
2023/07/22 15:01:28 - mmengine - INFO - Epoch(train)   [2][2750/4682]  lr: 6.250000e-05  eta: 2 days, 11:43:21  time: 0.217987  data_time: 0.068197  memory: 2769  loss: 0.001593  loss_kpt: 0.001593  acc_pose: 0.439409
2023/07/22 15:01:39 - mmengine - INFO - Epoch(train)   [2][2800/4682]  lr: 6.250000e-05  eta: 2 days, 11:43:34  time: 0.224119  data_time: 0.071853  memory: 2769  loss: 0.001582  loss_kpt: 0.001582  acc_pose: 0.358909
2023/07/22 15:01:50 - mmengine - INFO - Epoch(train)   [2][2850/4682]  lr: 6.250000e-05  eta: 2 days, 11:43:36  time: 0.222361  data_time: 0.071014  memory: 2769  loss: 0.001562  loss_kpt: 0.001562  acc_pose: 0.423431
2023/07/22 15:02:01 - mmengine - INFO - Epoch(train)   [2][2900/4682]  lr: 6.250000e-05  eta: 2 days, 11:43:09  time: 0.217877  data_time: 0.067547  memory: 2769  loss: 0.001635  loss_kpt: 0.001635  acc_pose: 0.434224
2023/07/22 15:02:12 - mmengine - INFO - Epoch(train)   [2][2950/4682]  lr: 6.250000e-05  eta: 2 days, 11:42:45  time: 0.218263  data_time: 0.069244  memory: 2769  loss: 0.001584  loss_kpt: 0.001584  acc_pose: 0.392643
2023/07/22 15:02:23 - mmengine - INFO - Epoch(train)   [2][3000/4682]  lr: 6.250000e-05  eta: 2 days, 11:42:50  time: 0.222961  data_time: 0.072229  memory: 2769  loss: 0.001605  loss_kpt: 0.001605  acc_pose: 0.488786
2023/07/22 15:02:34 - mmengine - INFO - Epoch(train)   [2][3050/4682]  lr: 6.250000e-05  eta: 2 days, 11:42:46  time: 0.221490  data_time: 0.070006  memory: 2769  loss: 0.001579  loss_kpt: 0.001579  acc_pose: 0.462665
2023/07/22 15:02:46 - mmengine - INFO - Epoch(train)   [2][3100/4682]  lr: 6.250000e-05  eta: 2 days, 11:43:03  time: 0.224848  data_time: 0.072710  memory: 2769  loss: 0.001588  loss_kpt: 0.001588  acc_pose: 0.490568
2023/07/22 15:02:57 - mmengine - INFO - Epoch(train)   [2][3150/4682]  lr: 6.250000e-05  eta: 2 days, 11:43:02  time: 0.221932  data_time: 0.069415  memory: 2769  loss: 0.001578  loss_kpt: 0.001578  acc_pose: 0.445818
2023/07/22 15:03:09 - mmengine - INFO - Epoch(train)   [2][3200/4682]  lr: 6.250000e-05  eta: 2 days, 11:44:42  time: 0.238345  data_time: 0.085177  memory: 2769  loss: 0.001583  loss_kpt: 0.001583  acc_pose: 0.465223
2023/07/22 15:03:20 - mmengine - INFO - Epoch(train)   [2][3250/4682]  lr: 6.250000e-05  eta: 2 days, 11:44:50  time: 0.223617  data_time: 0.072932  memory: 2769  loss: 0.001555  loss_kpt: 0.001555  acc_pose: 0.393350
2023/07/22 15:03:31 - mmengine - INFO - Epoch(train)   [2][3300/4682]  lr: 6.250000e-05  eta: 2 days, 11:44:39  time: 0.220582  data_time: 0.071421  memory: 2769  loss: 0.001553  loss_kpt: 0.001553  acc_pose: 0.497468
2023/07/22 15:03:35 - mmengine - INFO - Exp name: td-hm_uniformer-s-8xb128-210e_coco-256x192_20230722_143251
2023/07/22 15:03:44 - mmengine - INFO - Epoch(train)   [2][3350/4682]  lr: 6.250000e-05  eta: 2 days, 11:47:42  time: 0.252405  data_time: 0.102398  memory: 2769  loss: 0.001547  loss_kpt: 0.001547  acc_pose: 0.425660
2023/07/22 15:03:54 - mmengine - INFO - Epoch(train)   [2][3400/4682]  lr: 6.250000e-05  eta: 2 days, 11:47:07  time: 0.216904  data_time: 0.069343  memory: 2769  loss: 0.001575  loss_kpt: 0.001575  acc_pose: 0.432068
2023/07/22 15:04:05 - mmengine - INFO - Epoch(train)   [2][3450/4682]  lr: 6.250000e-05  eta: 2 days, 11:46:22  time: 0.214931  data_time: 0.066422  memory: 2769  loss: 0.001567  loss_kpt: 0.001567  acc_pose: 0.369169
2023/07/22 15:04:16 - mmengine - INFO - Epoch(train)   [2][3500/4682]  lr: 6.250000e-05  eta: 2 days, 11:46:24  time: 0.222887  data_time: 0.071557  memory: 2769  loss: 0.001578  loss_kpt: 0.001578  acc_pose: 0.364742
2023/07/22 15:04:27 - mmengine - INFO - Epoch(train)   [2][3550/4682]  lr: 6.250000e-05  eta: 2 days, 11:45:43  time: 0.215626  data_time: 0.068208  memory: 2769  loss: 0.001581  loss_kpt: 0.001581  acc_pose: 0.356414
2023/07/22 15:04:39 - mmengine - INFO - Epoch(train)   [2][3600/4682]  lr: 6.250000e-05  eta: 2 days, 11:46:53  time: 0.234540  data_time: 0.071323  memory: 2769  loss: 0.001563  loss_kpt: 0.001563  acc_pose: 0.463508
2023/07/22 15:04:50 - mmengine - INFO - Epoch(train)   [2][3650/4682]  lr: 6.250000e-05  eta: 2 days, 11:46:51  time: 0.222149  data_time: 0.072262  memory: 2769  loss: 0.001523  loss_kpt: 0.001523  acc_pose: 0.475855
2023/07/22 15:05:01 - mmengine - INFO - Epoch(train)   [2][3700/4682]  lr: 6.250000e-05  eta: 2 days, 11:46:11  time: 0.215929  data_time: 0.068524  memory: 2769  loss: 0.001556  loss_kpt: 0.001556  acc_pose: 0.395543
2023/07/22 15:05:12 - mmengine - INFO - Epoch(train)   [2][3750/4682]  lr: 6.250000e-05  eta: 2 days, 11:46:03  time: 0.221173  data_time: 0.070995  memory: 2769  loss: 0.001581  loss_kpt: 0.001581  acc_pose: 0.471426
2023/07/22 15:05:23 - mmengine - INFO - Epoch(train)   [2][3800/4682]  lr: 6.250000e-05  eta: 2 days, 11:45:49  time: 0.220183  data_time: 0.069493  memory: 2769  loss: 0.001572  loss_kpt: 0.001572  acc_pose: 0.493317
2023/07/22 15:05:34 - mmengine - INFO - Epoch(train)   [2][3850/4682]  lr: 6.250000e-05  eta: 2 days, 11:45:14  time: 0.216588  data_time: 0.068845  memory: 2769  loss: 0.001575  loss_kpt: 0.001575  acc_pose: 0.418905
2023/07/22 15:05:45 - mmengine - INFO - Epoch(train)   [2][3900/4682]  lr: 6.250000e-05  eta: 2 days, 11:44:57  time: 0.219552  data_time: 0.070914  memory: 2769  loss: 0.001512  loss_kpt: 0.001512  acc_pose: 0.500859
2023/07/22 15:05:56 - mmengine - INFO - Epoch(train)   [2][3950/4682]  lr: 6.250000e-05  eta: 2 days, 11:44:37  time: 0.219205  data_time: 0.069279  memory: 2769  loss: 0.001523  loss_kpt: 0.001523  acc_pose: 0.393162
2023/07/22 15:06:06 - mmengine - INFO - Epoch(train)   [2][4000/4682]  lr: 6.250000e-05  eta: 2 days, 11:44:16  time: 0.218842  data_time: 0.068649  memory: 2769  loss: 0.001549  loss_kpt: 0.001549  acc_pose: 0.447751
2023/07/22 15:06:18 - mmengine - INFO - Epoch(train)   [2][4050/4682]  lr: 6.250000e-05  eta: 2 days, 11:44:05  time: 0.220770  data_time: 0.069756  memory: 2769  loss: 0.001534  loss_kpt: 0.001534  acc_pose: 0.459695
2023/07/22 15:06:29 - mmengine - INFO - Epoch(train)   [2][4100/4682]  lr: 6.250000e-05  eta: 2 days, 11:44:01  time: 0.221805  data_time: 0.072117  memory: 2769  loss: 0.001576  loss_kpt: 0.001576  acc_pose: 0.466968
2023/07/22 15:06:40 - mmengine - INFO - Epoch(train)   [2][4150/4682]  lr: 6.250000e-05  eta: 2 days, 11:43:46  time: 0.219979  data_time: 0.070291  memory: 2769  loss: 0.001563  loss_kpt: 0.001563  acc_pose: 0.515747
2023/07/22 15:06:51 - mmengine - INFO - Epoch(train)   [2][4200/4682]  lr: 6.250000e-05  eta: 2 days, 11:44:50  time: 0.234497  data_time: 0.085877  memory: 2769  loss: 0.001527  loss_kpt: 0.001527  acc_pose: 0.451229
2023/07/22 15:07:03 - mmengine - INFO - Epoch(train)   [2][4250/4682]  lr: 6.250000e-05  eta: 2 days, 11:45:02  time: 0.224861  data_time: 0.072720  memory: 2769  loss: 0.001534  loss_kpt: 0.001534  acc_pose: 0.413065
2023/07/22 15:07:14 - mmengine - INFO - Epoch(train)   [2][4300/4682]  lr: 6.250000e-05  eta: 2 days, 11:44:42  time: 0.219114  data_time: 0.069655  memory: 2769  loss: 0.001539  loss_kpt: 0.001539  acc_pose: 0.386309
2023/07/22 15:07:18 - mmengine - INFO - Exp name: td-hm_uniformer-s-8xb128-210e_coco-256x192_20230722_143251
2023/07/22 15:07:25 - mmengine - INFO - Epoch(train)   [2][4350/4682]  lr: 6.250000e-05  eta: 2 days, 11:45:05  time: 0.227090  data_time: 0.073618  memory: 2769  loss: 0.001558  loss_kpt: 0.001558  acc_pose: 0.374963
2023/07/22 15:07:36 - mmengine - INFO - Epoch(train)   [2][4400/4682]  lr: 6.250000e-05  eta: 2 days, 11:44:54  time: 0.220883  data_time: 0.069260  memory: 2769  loss: 0.001547  loss_kpt: 0.001547  acc_pose: 0.421333
2023/07/22 15:07:47 - mmengine - INFO - Epoch(train)   [2][4450/4682]  lr: 6.250000e-05  eta: 2 days, 11:45:07  time: 0.225217  data_time: 0.071588  memory: 2769  loss: 0.001521  loss_kpt: 0.001521  acc_pose: 0.419866
2023/07/22 15:07:59 - mmengine - INFO - Epoch(train)   [2][4500/4682]  lr: 6.250000e-05  eta: 2 days, 11:45:38  time: 0.228904  data_time: 0.075243  memory: 2769  loss: 0.001526  loss_kpt: 0.001526  acc_pose: 0.472045
2023/07/22 15:08:10 - mmengine - INFO - Epoch(train)   [2][4550/4682]  lr: 6.250000e-05  eta: 2 days, 11:45:40  time: 0.223317  data_time: 0.070567  memory: 2769  loss: 0.001536  loss_kpt: 0.001536  acc_pose: 0.442823
2023/07/22 15:08:22 - mmengine - INFO - Epoch(train)   [2][4600/4682]  lr: 6.250000e-05  eta: 2 days, 11:47:18  time: 0.241568  data_time: 0.072527  memory: 2769  loss: 0.001535  loss_kpt: 0.001535  acc_pose: 0.523795
2023/07/22 15:08:33 - mmengine - INFO - Epoch(train)   [2][4650/4682]  lr: 6.250000e-05  eta: 2 days, 11:47:35  time: 0.226505  data_time: 0.074145  memory: 2769  loss: 0.001525  loss_kpt: 0.001525  acc_pose: 0.375999
2023/07/22 15:08:40 - mmengine - INFO - Exp name: td-hm_uniformer-s-8xb128-210e_coco-256x192_20230722_143251
2023/07/22 15:08:52 - mmengine - INFO - Epoch(train)   [3][  50/4682]  lr: 6.250000e-05  eta: 2 days, 11:48:02  time: 0.228728  data_time: 0.077746  memory: 2769  loss: 0.001539  loss_kpt: 0.001539  acc_pose: 0.389852
2023/07/22 15:09:03 - mmengine - INFO - Epoch(train)   [3][ 100/4682]  lr: 6.250000e-05  eta: 2 days, 11:48:33  time: 0.229128  data_time: 0.076196  memory: 2769  loss: 0.001536  loss_kpt: 0.001536  acc_pose: 0.374548
2023/07/22 15:09:14 - mmengine - INFO - Epoch(train)   [3][ 150/4682]  lr: 6.250000e-05  eta: 2 days, 11:48:35  time: 0.223625  data_time: 0.070833  memory: 2769  loss: 0.001517  loss_kpt: 0.001517  acc_pose: 0.437881
2023/07/22 15:09:26 - mmengine - INFO - Epoch(train)   [3][ 200/4682]  lr: 6.250000e-05  eta: 2 days, 11:48:53  time: 0.226991  data_time: 0.073356  memory: 2769  loss: 0.001556  loss_kpt: 0.001556  acc_pose: 0.403673
2023/07/22 15:09:37 - mmengine - INFO - Epoch(train)   [3][ 250/4682]  lr: 6.250000e-05  eta: 2 days, 11:49:04  time: 0.225397  data_time: 0.071258  memory: 2769  loss: 0.001558  loss_kpt: 0.001558  acc_pose: 0.453507
2023/07/22 15:09:48 - mmengine - INFO - Epoch(train)   [3][ 300/4682]  lr: 6.250000e-05  eta: 2 days, 11:49:10  time: 0.224612  data_time: 0.070947  memory: 2769  loss: 0.001525  loss_kpt: 0.001525  acc_pose: 0.435017
2023/07/22 15:10:00 - mmengine - INFO - Epoch(train)   [3][ 350/4682]  lr: 6.250000e-05  eta: 2 days, 11:49:20  time: 0.225398  data_time: 0.070415  memory: 2769  loss: 0.001494  loss_kpt: 0.001494  acc_pose: 0.512108
2023/07/22 15:10:11 - mmengine - INFO - Epoch(train)   [3][ 400/4682]  lr: 6.250000e-05  eta: 2 days, 11:50:30  time: 0.237399  data_time: 0.085648  memory: 2769  loss: 0.001521  loss_kpt: 0.001521  acc_pose: 0.386923
2023/07/22 15:10:23 - mmengine - INFO - Epoch(train)   [3][ 450/4682]  lr: 6.250000e-05  eta: 2 days, 11:51:43  time: 0.238253  data_time: 0.086934  memory: 2769  loss: 0.001520  loss_kpt: 0.001520  acc_pose: 0.508170
2023/07/22 15:10:35 - mmengine - INFO - Epoch(train)   [3][ 500/4682]  lr: 6.250000e-05  eta: 2 days, 11:52:08  time: 0.228809  data_time: 0.075898  memory: 2769  loss: 0.001501  loss_kpt: 0.001501  acc_pose: 0.333293
2023/07/22 15:10:46 - mmengine - INFO - Epoch(train)   [3][ 550/4682]  lr: 6.250000e-05  eta: 2 days, 11:52:16  time: 0.225379  data_time: 0.072405  memory: 2769  loss: 0.001476  loss_kpt: 0.001476  acc_pose: 0.325959
2023/07/22 15:10:57 - mmengine - INFO - Epoch(train)   [3][ 600/4682]  lr: 6.250000e-05  eta: 2 days, 11:52:02  time: 0.220697  data_time: 0.069189  memory: 2769  loss: 0.001509  loss_kpt: 0.001509  acc_pose: 0.460931
2023/07/22 15:11:05 - mmengine - INFO - Exp name: td-hm_uniformer-s-8xb128-210e_coco-256x192_20230722_143251
2023/07/22 15:11:08 - mmengine - INFO - Epoch(train)   [3][ 650/4682]  lr: 6.250000e-05  eta: 2 days, 11:52:12  time: 0.225951  data_time: 0.073455  memory: 2769  loss: 0.001527  loss_kpt: 0.001527  acc_pose: 0.443181
2023/07/22 15:11:21 - mmengine - INFO - Epoch(train)   [3][ 700/4682]  lr: 6.250000e-05  eta: 2 days, 11:54:06  time: 0.247277  data_time: 0.093363  memory: 2769  loss: 0.001508  loss_kpt: 0.001508  acc_pose: 0.397818
2023/07/22 15:11:32 - mmengine - INFO - Epoch(train)   [3][ 750/4682]  lr: 6.250000e-05  eta: 2 days, 11:53:54  time: 0.221287  data_time: 0.070861  memory: 2769  loss: 0.001506  loss_kpt: 0.001506  acc_pose: 0.444112
2023/07/22 15:11:43 - mmengine - INFO - Epoch(train)   [3][ 800/4682]  lr: 6.250000e-05  eta: 2 days, 11:53:50  time: 0.223241  data_time: 0.071001  memory: 2769  loss: 0.001499  loss_kpt: 0.001499  acc_pose: 0.474696
2023/07/22 15:11:54 - mmengine - INFO - Epoch(train)   [3][ 850/4682]  lr: 6.250000e-05  eta: 2 days, 11:54:01  time: 0.226083  data_time: 0.073321  memory: 2769  loss: 0.001539  loss_kpt: 0.001539  acc_pose: 0.352767
2023/07/22 15:12:06 - mmengine - INFO - Epoch(train)   [3][ 900/4682]  lr: 6.250000e-05  eta: 2 days, 11:55:02  time: 0.236974  data_time: 0.069962  memory: 2769  loss: 0.001515  loss_kpt: 0.001515  acc_pose: 0.391092
2023/07/22 15:12:17 - mmengine - INFO - Epoch(train)   [3][ 950/4682]  lr: 6.250000e-05  eta: 2 days, 11:55:10  time: 0.225629  data_time: 0.072428  memory: 2769  loss: 0.001516  loss_kpt: 0.001516  acc_pose: 0.413933
2023/07/22 15:12:29 - mmengine - INFO - Epoch(train)   [3][1000/4682]  lr: 6.250000e-05  eta: 2 days, 11:55:17  time: 0.225717  data_time: 0.072638  memory: 2769  loss: 0.001517  loss_kpt: 0.001517  acc_pose: 0.568354
2023/07/22 15:12:40 - mmengine - INFO - Epoch(train)   [3][1050/4682]  lr: 6.250000e-05  eta: 2 days, 11:55:07  time: 0.221957  data_time: 0.072911  memory: 2769  loss: 0.001509  loss_kpt: 0.001509  acc_pose: 0.412208
2023/07/22 15:12:51 - mmengine - INFO - Epoch(train)   [3][1100/4682]  lr: 6.250000e-05  eta: 2 days, 11:55:01  time: 0.222699  data_time: 0.073540  memory: 2769  loss: 0.001503  loss_kpt: 0.001503  acc_pose: 0.438893
2023/07/22 15:13:02 - mmengine - INFO - Epoch(train)   [3][1150/4682]  lr: 6.250000e-05  eta: 2 days, 11:54:52  time: 0.222204  data_time: 0.071423  memory: 2769  loss: 0.001505  loss_kpt: 0.001505  acc_pose: 0.458421
2023/07/22 15:13:13 - mmengine - INFO - Epoch(train)   [3][1200/4682]  lr: 6.250000e-05  eta: 2 days, 11:54:34  time: 0.220323  data_time: 0.070108  memory: 2769  loss: 0.001493  loss_kpt: 0.001493  acc_pose: 0.550778
2023/07/22 15:13:24 - mmengine - INFO - Epoch(train)   [3][1250/4682]  lr: 6.250000e-05  eta: 2 days, 11:54:08  time: 0.218356  data_time: 0.069542  memory: 2769  loss: 0.001499  loss_kpt: 0.001499  acc_pose: 0.388994
2023/07/22 15:13:35 - mmengine - INFO - Epoch(train)   [3][1300/4682]  lr: 6.250000e-05  eta: 2 days, 11:54:00  time: 0.222448  data_time: 0.072866  memory: 2769  loss: 0.001525  loss_kpt: 0.001525  acc_pose: 0.383418
2023/07/22 15:13:46 - mmengine - INFO - Epoch(train)   [3][1350/4682]  lr: 6.250000e-05  eta: 2 days, 11:53:24  time: 0.216230  data_time: 0.067260  memory: 2769  loss: 0.001527  loss_kpt: 0.001527  acc_pose: 0.434656
2023/07/22 15:13:57 - mmengine - INFO - Epoch(train)   [3][1400/4682]  lr: 6.250000e-05  eta: 2 days, 11:52:50  time: 0.216686  data_time: 0.067725  memory: 2769  loss: 0.001486  loss_kpt: 0.001486  acc_pose: 0.432514
2023/07/22 15:14:09 - mmengine - INFO - Epoch(train)   [3][1450/4682]  lr: 6.250000e-05  eta: 2 days, 11:53:46  time: 0.236521  data_time: 0.086240  memory: 2769  loss: 0.001474  loss_kpt: 0.001474  acc_pose: 0.453256
2023/07/22 15:14:20 - mmengine - INFO - Epoch(train)   [3][1500/4682]  lr: 6.250000e-05  eta: 2 days, 11:53:22  time: 0.219015  data_time: 0.069292  memory: 2769  loss: 0.001516  loss_kpt: 0.001516  acc_pose: 0.444045
2023/07/22 15:14:30 - mmengine - INFO - Epoch(train)   [3][1550/4682]  lr: 6.250000e-05  eta: 2 days, 11:53:00  time: 0.219084  data_time: 0.069413  memory: 2769  loss: 0.001479  loss_kpt: 0.001479  acc_pose: 0.489273
2023/07/22 15:14:41 - mmengine - INFO - Epoch(train)   [3][1600/4682]  lr: 6.250000e-05  eta: 2 days, 11:52:32  time: 0.218035  data_time: 0.069572  memory: 2769  loss: 0.001488  loss_kpt: 0.001488  acc_pose: 0.459996
2023/07/22 15:14:49 - mmengine - INFO - Exp name: td-hm_uniformer-s-8xb128-210e_coco-256x192_20230722_143251
2023/07/22 15:14:52 - mmengine - INFO - Epoch(train)   [3][1650/4682]  lr: 6.250000e-05  eta: 2 days, 11:51:59  time: 0.216720  data_time: 0.068382  memory: 2769  loss: 0.001481  loss_kpt: 0.001481  acc_pose: 0.421594
2023/07/22 15:15:03 - mmengine - INFO - Epoch(train)   [3][1700/4682]  lr: 6.250000e-05  eta: 2 days, 11:51:33  time: 0.218246  data_time: 0.069050  memory: 2769  loss: 0.001527  loss_kpt: 0.001527  acc_pose: 0.491073
2023/07/22 15:15:14 - mmengine - INFO - Epoch(train)   [3][1750/4682]  lr: 6.250000e-05  eta: 2 days, 11:51:08  time: 0.218465  data_time: 0.068737  memory: 2769  loss: 0.001490  loss_kpt: 0.001490  acc_pose: 0.452824
2023/07/22 15:15:25 - mmengine - INFO - Epoch(train)   [3][1800/4682]  lr: 6.250000e-05  eta: 2 days, 11:50:40  time: 0.217915  data_time: 0.069640  memory: 2769  loss: 0.001457  loss_kpt: 0.001457  acc_pose: 0.477751
2023/07/22 15:15:37 - mmengine - INFO - Epoch(train)   [3][1850/4682]  lr: 6.250000e-05  eta: 2 days, 11:51:27  time: 0.234829  data_time: 0.069162  memory: 2769  loss: 0.001489  loss_kpt: 0.001489  acc_pose: 0.458187
2023/07/22 15:15:48 - mmengine - INFO - Epoch(train)   [3][1900/4682]  lr: 6.250000e-05  eta: 2 days, 11:50:58  time: 0.217580  data_time: 0.068042  memory: 2769  loss: 0.001459  loss_kpt: 0.001459  acc_pose: 0.535764
2023/07/22 15:15:58 - mmengine - INFO - Epoch(train)   [3][1950/4682]  lr: 6.250000e-05  eta: 2 days, 11:50:22  time: 0.215908  data_time: 0.066592  memory: 2769  loss: 0.001455  loss_kpt: 0.001455  acc_pose: 0.499629
2023/07/22 15:16:09 - mmengine - INFO - Epoch(train)   [3][2000/4682]  lr: 6.250000e-05  eta: 2 days, 11:49:39  time: 0.214302  data_time: 0.066167  memory: 2769  loss: 0.001461  loss_kpt: 0.001461  acc_pose: 0.375722
2023/07/22 15:16:20 - mmengine - INFO - Epoch(train)   [3][2050/4682]  lr: 6.250000e-05  eta: 2 days, 11:49:10  time: 0.217361  data_time: 0.068411  memory: 2769  loss: 0.001514  loss_kpt: 0.001514  acc_pose: 0.493885
2023/07/22 15:16:31 - mmengine - INFO - Epoch(train)   [3][2100/4682]  lr: 6.250000e-05  eta: 2 days, 11:48:29  time: 0.214414  data_time: 0.066097  memory: 2769  loss: 0.001464  loss_kpt: 0.001464  acc_pose: 0.489009
2023/07/22 15:16:41 - mmengine - INFO - Epoch(train)   [3][2150/4682]  lr: 6.250000e-05  eta: 2 days, 11:47:51  time: 0.215387  data_time: 0.067409  memory: 2769  loss: 0.001457  loss_kpt: 0.001457  acc_pose: 0.456484
2023/07/22 15:16:52 - mmengine - INFO - Epoch(train)   [3][2200/4682]  lr: 6.250000e-05  eta: 2 days, 11:47:32  time: 0.219605  data_time: 0.069114  memory: 2769  loss: 0.001480  loss_kpt: 0.001480  acc_pose: 0.388902
2023/07/22 15:17:03 - mmengine - INFO - Epoch(train)   [3][2250/4682]  lr: 6.250000e-05  eta: 2 days, 11:47:17  time: 0.220411  data_time: 0.067490  memory: 2769  loss: 0.001506  loss_kpt: 0.001506  acc_pose: 0.420182
2023/07/22 15:17:14 - mmengine - INFO - Epoch(train)   [3][2300/4682]  lr: 6.250000e-05  eta: 2 days, 11:46:59  time: 0.219991  data_time: 0.070105  memory: 2769  loss: 0.001506  loss_kpt: 0.001506  acc_pose: 0.477168
2023/07/22 15:17:26 - mmengine - INFO - Epoch(train)   [3][2350/4682]  lr: 6.250000e-05  eta: 2 days, 11:46:47  time: 0.221399  data_time: 0.070510  memory: 2769  loss: 0.001508  loss_kpt: 0.001508  acc_pose: 0.445495
2023/07/22 15:17:37 - mmengine - INFO - Epoch(train)   [3][2400/4682]  lr: 6.250000e-05  eta: 2 days, 11:46:27  time: 0.219253  data_time: 0.070244  memory: 2769  loss: 0.001470  loss_kpt: 0.001470  acc_pose: 0.510175
2023/07/22 15:17:49 - mmengine - INFO - Epoch(train)   [3][2450/4682]  lr: 6.250000e-05  eta: 2 days, 11:48:01  time: 0.247133  data_time: 0.096268  memory: 2769  loss: 0.001488  loss_kpt: 0.001488  acc_pose: 0.451647
2023/07/22 15:18:00 - mmengine - INFO - Epoch(train)   [3][2500/4682]  lr: 6.250000e-05  eta: 2 days, 11:47:25  time: 0.215521  data_time: 0.066550  memory: 2769  loss: 0.001459  loss_kpt: 0.001459  acc_pose: 0.455389
2023/07/22 15:18:10 - mmengine - INFO - Epoch(train)   [3][2550/4682]  lr: 6.250000e-05  eta: 2 days, 11:46:53  time: 0.216507  data_time: 0.068897  memory: 2769  loss: 0.001446  loss_kpt: 0.001446  acc_pose: 0.416623
2023/07/22 15:18:21 - mmengine - INFO - Epoch(train)   [3][2600/4682]  lr: 6.250000e-05  eta: 2 days, 11:46:33  time: 0.219372  data_time: 0.070798  memory: 2769  loss: 0.001473  loss_kpt: 0.001473  acc_pose: 0.480012
2023/07/22 15:18:29 - mmengine - INFO - Exp name: td-hm_uniformer-s-8xb128-210e_coco-256x192_20230722_143251
2023/07/22 15:18:32 - mmengine - INFO - Epoch(train)   [3][2650/4682]  lr: 6.250000e-05  eta: 2 days, 11:46:19  time: 0.220712  data_time: 0.072121  memory: 2769  loss: 0.001465  loss_kpt: 0.001465  acc_pose: 0.578642
2023/07/22 15:18:44 - mmengine - INFO - Epoch(train)   [3][2700/4682]  lr: 6.250000e-05  eta: 2 days, 11:46:15  time: 0.223269  data_time: 0.070139  memory: 2769  loss: 0.001458  loss_kpt: 0.001458  acc_pose: 0.408636
2023/07/22 15:18:55 - mmengine - INFO - Epoch(train)   [3][2750/4682]  lr: 6.250000e-05  eta: 2 days, 11:45:56  time: 0.219595  data_time: 0.069752  memory: 2769  loss: 0.001433  loss_kpt: 0.001433  acc_pose: 0.406640
2023/07/22 15:19:05 - mmengine - INFO - Epoch(train)   [3][2800/4682]  lr: 6.250000e-05  eta: 2 days, 11:45:20  time: 0.215343  data_time: 0.066182  memory: 2769  loss: 0.001445  loss_kpt: 0.001445  acc_pose: 0.447996
2023/07/22 15:19:17 - mmengine - INFO - Epoch(train)   [3][2850/4682]  lr: 6.250000e-05  eta: 2 days, 11:46:09  time: 0.236678  data_time: 0.070048  memory: 2769  loss: 0.001494  loss_kpt: 0.001494  acc_pose: 0.404458
2023/07/22 15:19:28 - mmengine - INFO - Epoch(train)   [3][2900/4682]  lr: 6.250000e-05  eta: 2 days, 11:45:53  time: 0.220321  data_time: 0.070539  memory: 2769  loss: 0.001453  loss_kpt: 0.001453  acc_pose: 0.424017
2023/07/22 15:19:39 - mmengine - INFO - Epoch(train)   [3][2950/4682]  lr: 6.250000e-05  eta: 2 days, 11:45:29  time: 0.218431  data_time: 0.067726  memory: 2769  loss: 0.001520  loss_kpt: 0.001520  acc_pose: 0.483733
2023/07/22 15:19:50 - mmengine - INFO - Epoch(train)   [3][3000/4682]  lr: 6.250000e-05  eta: 2 days, 11:45:06  time: 0.218577  data_time: 0.068489  memory: 2769  loss: 0.001482  loss_kpt: 0.001482  acc_pose: 0.390663
2023/07/22 15:20:01 - mmengine - INFO - Epoch(train)   [3][3050/4682]  lr: 6.250000e-05  eta: 2 days, 11:44:43  time: 0.218434  data_time: 0.070400  memory: 2769  loss: 0.001468  loss_kpt: 0.001468  acc_pose: 0.427277
2023/07/22 15:20:12 - mmengine - INFO - Epoch(train)   [3][3100/4682]  lr: 6.250000e-05  eta: 2 days, 11:44:29  time: 0.220808  data_time: 0.071014  memory: 2769  loss: 0.001442  loss_kpt: 0.001442  acc_pose: 0.528775
2023/07/22 15:20:23 - mmengine - INFO - Epoch(train)   [3][3150/4682]  lr: 6.250000e-05  eta: 2 days, 11:44:18  time: 0.221541  data_time: 0.070337  memory: 2769  loss: 0.001464  loss_kpt: 0.001464  acc_pose: 0.530323
2023/07/22 15:20:34 - mmengine - INFO - Epoch(train)   [3][3200/4682]  lr: 6.250000e-05  eta: 2 days, 11:44:05  time: 0.220914  data_time: 0.069655  memory: 2769  loss: 0.001450  loss_kpt: 0.001450  acc_pose: 0.396172
2023/07/22 15:20:45 - mmengine - INFO - Epoch(train)   [3][3250/4682]  lr: 6.250000e-05  eta: 2 days, 11:43:57  time: 0.222383  data_time: 0.071079  memory: 2769  loss: 0.001455  loss_kpt: 0.001455  acc_pose: 0.564231
2023/07/22 15:20:57 - mmengine - INFO - Epoch(train)   [3][3300/4682]  lr: 6.250000e-05  eta: 2 days, 11:45:01  time: 0.241309  data_time: 0.086130  memory: 2769  loss: 0.001463  loss_kpt: 0.001463  acc_pose: 0.499546
2023/07/22 15:21:08 - mmengine - INFO - Epoch(train)   [3][3350/4682]  lr: 6.250000e-05  eta: 2 days, 11:44:34  time: 0.217348  data_time: 0.068391  memory: 2769  loss: 0.001433  loss_kpt: 0.001433  acc_pose: 0.486005
2023/07/22 15:21:19 - mmengine - INFO - Epoch(train)   [3][3400/4682]  lr: 6.250000e-05  eta: 2 days, 11:44:26  time: 0.222563  data_time: 0.072298  memory: 2769  loss: 0.001441  loss_kpt: 0.001441  acc_pose: 0.555127
2023/07/22 15:21:31 - mmengine - INFO - Epoch(train)   [3][3450/4682]  lr: 6.250000e-05  eta: 2 days, 11:45:20  time: 0.238730  data_time: 0.087200  memory: 2769  loss: 0.001437  loss_kpt: 0.001437  acc_pose: 0.536771
2023/07/22 15:21:43 - mmengine - INFO - Epoch(train)   [3][3500/4682]  lr: 6.250000e-05  eta: 2 days, 11:45:18  time: 0.224136  data_time: 0.072326  memory: 2769  loss: 0.001454  loss_kpt: 0.001454  acc_pose: 0.605151
2023/07/22 15:21:54 - mmengine - INFO - Epoch(train)   [3][3550/4682]  lr: 6.250000e-05  eta: 2 days, 11:45:03  time: 0.220573  data_time: 0.069013  memory: 2769  loss: 0.001445  loss_kpt: 0.001445  acc_pose: 0.476203
2023/07/22 15:22:05 - mmengine - INFO - Epoch(train)   [3][3600/4682]  lr: 6.250000e-05  eta: 2 days, 11:44:49  time: 0.220986  data_time: 0.070026  memory: 2769  loss: 0.001406  loss_kpt: 0.001406  acc_pose: 0.499655
2023/07/22 15:22:13 - mmengine - INFO - Exp name: td-hm_uniformer-s-8xb128-210e_coco-256x192_20230722_143251
2023/07/22 15:22:16 - mmengine - INFO - Epoch(train)   [3][3650/4682]  lr: 6.250000e-05  eta: 2 days, 11:44:31  time: 0.219798  data_time: 0.071113  memory: 2769  loss: 0.001459  loss_kpt: 0.001459  acc_pose: 0.420060
2023/07/22 15:22:27 - mmengine - INFO - Epoch(train)   [3][3700/4682]  lr: 6.250000e-05  eta: 2 days, 11:44:22  time: 0.222087  data_time: 0.070992  memory: 2769  loss: 0.001469  loss_kpt: 0.001469  acc_pose: 0.466726
2023/07/22 15:22:38 - mmengine - INFO - Epoch(train)   [3][3750/4682]  lr: 6.250000e-05  eta: 2 days, 11:44:03  time: 0.219632  data_time: 0.069718  memory: 2769  loss: 0.001445  loss_kpt: 0.001445  acc_pose: 0.371630
2023/07/22 15:22:49 - mmengine - INFO - Epoch(train)   [3][3800/4682]  lr: 6.250000e-05  eta: 2 days, 11:43:59  time: 0.223605  data_time: 0.071977  memory: 2769  loss: 0.001457  loss_kpt: 0.001457  acc_pose: 0.482615
2023/07/22 15:23:01 - mmengine - INFO - Epoch(train)   [3][3850/4682]  lr: 6.250000e-05  eta: 2 days, 11:44:33  time: 0.233919  data_time: 0.068537  memory: 2769  loss: 0.001433  loss_kpt: 0.001433  acc_pose: 0.563241
2023/07/22 15:23:11 - mmengine - INFO - Epoch(train)   [3][3900/4682]  lr: 6.250000e-05  eta: 2 days, 11:44:08  time: 0.217995  data_time: 0.067523  memory: 2769  loss: 0.001434  loss_kpt: 0.001434  acc_pose: 0.480374
2023/07/22 15:23:23 - mmengine - INFO - Epoch(train)   [3][3950/4682]  lr: 6.250000e-05  eta: 2 days, 11:43:54  time: 0.220818  data_time: 0.069681  memory: 2769  loss: 0.001454  loss_kpt: 0.001454  acc_pose: 0.499782
2023/07/22 15:23:33 - mmengine - INFO - Epoch(train)   [3][4000/4682]  lr: 6.250000e-05  eta: 2 days, 11:43:32  time: 0.218618  data_time: 0.069236  memory: 2769  loss: 0.001457  loss_kpt: 0.001457  acc_pose: 0.466018
2023/07/22 15:23:44 - mmengine - INFO - Epoch(train)   [3][4050/4682]  lr: 6.250000e-05  eta: 2 days, 11:43:12  time: 0.219392  data_time: 0.069481  memory: 2769  loss: 0.001442  loss_kpt: 0.001442  acc_pose: 0.521244
2023/07/22 15:23:55 - mmengine - INFO - Epoch(train)   [3][4100/4682]  lr: 6.250000e-05  eta: 2 days, 11:42:52  time: 0.219258  data_time: 0.069111  memory: 2769  loss: 0.001449  loss_kpt: 0.001449  acc_pose: 0.488747
2023/07/22 15:24:06 - mmengine - INFO - Epoch(train)   [3][4150/4682]  lr: 6.250000e-05  eta: 2 days, 11:42:30  time: 0.218605  data_time: 0.068648  memory: 2769  loss: 0.001440  loss_kpt: 0.001440  acc_pose: 0.496123
2023/07/22 15:24:17 - mmengine - INFO - Epoch(train)   [3][4200/4682]  lr: 6.250000e-05  eta: 2 days, 11:42:16  time: 0.220647  data_time: 0.069940  memory: 2769  loss: 0.001444  loss_kpt: 0.001444  acc_pose: 0.554808
2023/07/22 15:24:28 - mmengine - INFO - Epoch(train)   [3][4250/4682]  lr: 6.250000e-05  eta: 2 days, 11:41:46  time: 0.216402  data_time: 0.067053  memory: 2769  loss: 0.001433  loss_kpt: 0.001433  acc_pose: 0.380657
2023/07/22 15:24:39 - mmengine - INFO - Epoch(train)   [3][4300/4682]  lr: 6.250000e-05  eta: 2 days, 11:41:26  time: 0.219231  data_time: 0.068180  memory: 2769  loss: 0.001423  loss_kpt: 0.001423  acc_pose: 0.489721
2023/07/22 15:24:50 - mmengine - INFO - Epoch(train)   [3][4350/4682]  lr: 6.250000e-05  eta: 2 days, 11:41:02  time: 0.218099  data_time: 0.067306  memory: 2769  loss: 0.001446  loss_kpt: 0.001446  acc_pose: 0.480642
2023/07/22 15:25:01 - mmengine - INFO - Epoch(train)   [3][4400/4682]  lr: 6.250000e-05  eta: 2 days, 11:40:46  time: 0.220216  data_time: 0.070069  memory: 2769  loss: 0.001446  loss_kpt: 0.001446  acc_pose: 0.563972
2023/07/22 15:25:13 - mmengine - INFO - Epoch(train)   [3][4450/4682]  lr: 6.250000e-05  eta: 2 days, 11:41:12  time: 0.232098  data_time: 0.083050  memory: 2769  loss: 0.001438  loss_kpt: 0.001438  acc_pose: 0.490288
2023/07/22 15:25:24 - mmengine - INFO - Epoch(train)   [3][4500/4682]  lr: 6.250000e-05  eta: 2 days, 11:40:49  time: 0.218239  data_time: 0.070104  memory: 2769  loss: 0.001445  loss_kpt: 0.001445  acc_pose: 0.442514
2023/07/22 15:25:35 - mmengine - INFO - Epoch(train)   [3][4550/4682]  lr: 6.250000e-05  eta: 2 days, 11:40:37  time: 0.221468  data_time: 0.072447  memory: 2769  loss: 0.001436  loss_kpt: 0.001436  acc_pose: 0.394671
2023/07/22 15:25:46 - mmengine - INFO - Epoch(train)   [3][4600/4682]  lr: 6.250000e-05  eta: 2 days, 11:40:19  time: 0.219582  data_time: 0.069744  memory: 2769  loss: 0.001428  loss_kpt: 0.001428  acc_pose: 0.441705
2023/07/22 15:25:54 - mmengine - INFO - Exp name: td-hm_uniformer-s-8xb128-210e_coco-256x192_20230722_143251
2023/07/22 15:25:57 - mmengine - INFO - Epoch(train)   [3][4650/4682]  lr: 6.250000e-05  eta: 2 days, 11:40:07  time: 0.221277  data_time: 0.069339  memory: 2769  loss: 0.001419  loss_kpt: 0.001419  acc_pose: 0.550174
2023/07/22 15:26:04 - mmengine - INFO - Exp name: td-hm_uniformer-s-8xb128-210e_coco-256x192_20230722_143251

@xin-li-67
Contributor Author

Here is part of the dist_train log from a run on three 3090 GPUs:

07/24 10:33:33 - mmengine - INFO - Exp name: td-hm_uniformer-s-8xb128-210e_coco-256x192_20230724_092951
07/24 10:33:45 - mmengine - INFO - Epoch(train)  [10][1000/1561]  lr: 2.000000e-03  eta: 21:34:51  time: 0.258567  data_time: 0.088065  memory: 2863  loss: 0.001054  loss_kpt: 0.001054  acc_pose: 0.667949
07/24 10:33:58 - mmengine - INFO - Epoch(train)  [10][1050/1561]  lr: 2.000000e-03  eta: 21:34:36  time: 0.245400  data_time: 0.073074  memory: 2863  loss: 0.001058  loss_kpt: 0.001058  acc_pose: 0.643093
07/24 10:34:10 - mmengine - INFO - Epoch(train)  [10][1100/1561]  lr: 2.000000e-03  eta: 21:34:21  time: 0.246482  data_time: 0.070497  memory: 2863  loss: 0.001037  loss_kpt: 0.001037  acc_pose: 0.620019
07/24 10:34:22 - mmengine - INFO - Epoch(train)  [10][1150/1561]  lr: 2.000000e-03  eta: 21:34:02  time: 0.241395  data_time: 0.068484  memory: 2863  loss: 0.001044  loss_kpt: 0.001044  acc_pose: 0.742400
07/24 10:34:35 - mmengine - INFO - Epoch(train)  [10][1200/1561]  lr: 2.000000e-03  eta: 21:34:05  time: 0.263412  data_time: 0.069424  memory: 2863  loss: 0.001048  loss_kpt: 0.001048  acc_pose: 0.709707
07/24 10:34:47 - mmengine - INFO - Epoch(train)  [10][1250/1561]  lr: 2.000000e-03  eta: 21:33:45  time: 0.241096  data_time: 0.069936  memory: 2863  loss: 0.001023  loss_kpt: 0.001023  acc_pose: 0.731810
07/24 10:34:59 - mmengine - INFO - Epoch(train)  [10][1300/1561]  lr: 2.000000e-03  eta: 21:33:27  time: 0.243296  data_time: 0.071560  memory: 2863  loss: 0.001052  loss_kpt: 0.001052  acc_pose: 0.672277
07/24 10:35:13 - mmengine - INFO - Epoch(train)  [10][1350/1561]  lr: 2.000000e-03  eta: 21:33:35  time: 0.268337  data_time: 0.083195  memory: 2863  loss: 0.001055  loss_kpt: 0.001055  acc_pose: 0.703284
07/24 10:35:25 - mmengine - INFO - Epoch(train)  [10][1400/1561]  lr: 2.000000e-03  eta: 21:33:18  time: 0.243694  data_time: 0.072882  memory: 2863  loss: 0.001041  loss_kpt: 0.001041  acc_pose: 0.621418
07/24 10:35:37 - mmengine - INFO - Epoch(train)  [10][1450/1561]  lr: 2.000000e-03  eta: 21:33:04  time: 0.246809  data_time: 0.071920  memory: 2863  loss: 0.001051  loss_kpt: 0.001051  acc_pose: 0.673734
07/24 10:35:49 - mmengine - INFO - Epoch(train)  [10][1500/1561]  lr: 2.000000e-03  eta: 21:32:45  time: 0.242615  data_time: 0.069541  memory: 2863  loss: 0.001041  loss_kpt: 0.001041  acc_pose: 0.693784
07/24 10:36:02 - mmengine - INFO - Epoch(train)  [10][1550/1561]  lr: 2.000000e-03  eta: 21:32:26  time: 0.241543  data_time: 0.070940  memory: 2863  loss: 0.001035  loss_kpt: 0.001035  acc_pose: 0.661962
07/24 10:36:04 - mmengine - INFO - Exp name: td-hm_uniformer-s-8xb128-210e_coco-256x192_20230724_092951
07/24 10:36:04 - mmengine - INFO - Saving checkpoint at 10 epochs
07/24 10:36:49 - mmengine - INFO - Epoch(val)  [10][ 50/136]    eta: 0:01:10  time: 0.817474  data_time: 0.176064  memory: 3365  
07/24 10:37:29 - mmengine - INFO - Epoch(val)  [10][100/136]    eta: 0:00:29  time: 0.800923  data_time: 0.157949  memory: 3365  
07/24 10:38:38 - mmengine - INFO - Evaluating CocoMetric...
Loading and preparing results...
DONE (t=4.02s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type *keypoints*
DONE (t=9.73s).
Accumulating evaluation results...
DONE (t=0.31s).
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets= 20 ] =  0.538
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets= 20 ] =  0.815
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets= 20 ] =  0.588
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] =  0.508
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] =  0.598
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 20 ] =  0.608
 Average Recall     (AR) @[ IoU=0.50      | area=   all | maxDets= 20 ] =  0.866
 Average Recall     (AR) @[ IoU=0.75      | area=   all | maxDets= 20 ] =  0.662
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] =  0.567
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] =  0.666
07/24 10:38:52 - mmengine - INFO - Epoch(val) [10][136/136]    coco/AP: 0.538381  coco/AP .5: 0.814657  coco/AP .75: 0.588496  coco/AP (M): 0.508142  coco/AP (L): 0.598121  coco/AR: 0.607746  coco/AR .5: 0.866184  coco/AR .75: 0.662469  coco/AR (M): 0.566976  coco/AR (L): 0.665515  data_time: 0.175768  time: 0.816618
07/24 10:38:55 - mmengine - INFO - The best checkpoint with 0.5384 coco/AP at 10 epoch is saved to best_coco_AP_epoch_10.pth.

There seems to be some accuracy drop at this point...
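
For reference, the run above used the standard dist_train launcher on three GPUs; the same config can also be run single-process with the MMEngine runner (a minimal sketch — the work_dir is an assumption, everything else comes from the config):

```python
from mmengine.config import Config
from mmengine.runner import Runner

# the UniFormer-S top-down config used in the log above
cfg = Config.fromfile(
    'projects/uniformer/configs/td-hm_uniformer-s-8xb128-210e_coco-256x192.py')
cfg.work_dir = 'work_dirs/uniformer_s_256x192'  # assumed output directory

runner = Runner.from_cfg(cfg)
runner.train()  # starts the 210-epoch schedule defined in the config
```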

@xin-li-67
Contributor Author

xin-li-67 commented Jul 24, 2023

Testing result of projects/uniformer/configs/td-hm_uniformer-s-8xb128-210e_coco-256x192.py:

07/24 14:04:55 - mmpose - INFO - Use torch.utils.checkpoint: False
07/24 14:04:55 - mmpose - INFO - torch.utils.checkpoint number: (0, 0, 0, 0)
07/24 14:04:55 - mmpose - INFO - Use global window for all blocks in stage3
07/24 14:04:56 - mmpose - INFO - Loads checkpoint by local backend from path: /root/mmpose/projects/uniformer/pretrained/uniformer_small_in1k.pth
07/24 14:04:56 - mmpose - WARNING - The model and loaded state dict do not match exactly

unexpected key in source state_dict: model

missing keys in source state_dict: patch_embed1.norm.weight, patch_embed1.norm.bias, patch_embed1.proj.weight, patch_embed1.proj.bias, patch_embed2.norm.weight, patch_embed2.norm.bias, patch_embed2.proj.weight, patch_embed2.proj.bias, patch_embed3.norm.weight, patch_embed3.norm.bias, patch_embed3.proj.weight, patch_embed3.proj.bias, patch_embed4.norm.weight, patch_embed4.norm.bias, patch_embed4.proj.weight, patch_embed4.proj.bias, blocks1.0.pos_embed.weight, blocks1.0.pos_embed.bias, blocks1.0.norm1.weight, blocks1.0.norm1.bias, blocks1.0.norm1.running_mean, blocks1.0.norm1.running_var, blocks1.0.conv1.weight, blocks1.0.conv1.bias, blocks1.0.conv2.weight, blocks1.0.conv2.bias, blocks1.0.attn.weight, blocks1.0.attn.bias, blocks1.0.norm2.weight, blocks1.0.norm2.bias, blocks1.0.norm2.running_mean, blocks1.0.norm2.running_var, blocks1.0.mlp.fc1.weight, blocks1.0.mlp.fc1.bias, blocks1.0.mlp.fc2.weight, blocks1.0.mlp.fc2.bias, blocks1.1.pos_embed.weight, blocks1.1.pos_embed.bias, blocks1.1.norm1.weight, blocks1.1.norm1.bias, blocks1.1.norm1.running_mean, blocks1.1.norm1.running_var, blocks1.1.conv1.weight, blocks1.1.conv1.bias, blocks1.1.conv2.weight, blocks1.1.conv2.bias, blocks1.1.attn.weight, blocks1.1.attn.bias, blocks1.1.norm2.weight, blocks1.1.norm2.bias, blocks1.1.norm2.running_mean, blocks1.1.norm2.running_var, blocks1.1.mlp.fc1.weight, blocks1.1.mlp.fc1.bias, blocks1.1.mlp.fc2.weight, blocks1.1.mlp.fc2.bias, blocks1.2.pos_embed.weight, blocks1.2.pos_embed.bias, blocks1.2.norm1.weight, blocks1.2.norm1.bias, blocks1.2.norm1.running_mean, blocks1.2.norm1.running_var, blocks1.2.conv1.weight, blocks1.2.conv1.bias, blocks1.2.conv2.weight, blocks1.2.conv2.bias, blocks1.2.attn.weight, blocks1.2.attn.bias, blocks1.2.norm2.weight, blocks1.2.norm2.bias, blocks1.2.norm2.running_mean, blocks1.2.norm2.running_var, blocks1.2.mlp.fc1.weight, blocks1.2.mlp.fc1.bias, blocks1.2.mlp.fc2.weight, blocks1.2.mlp.fc2.bias, norm1.weight, norm1.bias, blocks2.0.pos_embed.weight, blocks2.0.pos_embed.bias, blocks2.0.norm1.weight, blocks2.0.norm1.bias, blocks2.0.norm1.running_mean, blocks2.0.norm1.running_var, blocks2.0.conv1.weight, blocks2.0.conv1.bias, blocks2.0.conv2.weight, blocks2.0.conv2.bias, blocks2.0.attn.weight, blocks2.0.attn.bias, blocks2.0.norm2.weight, blocks2.0.norm2.bias, blocks2.0.norm2.running_mean, blocks2.0.norm2.running_var, blocks2.0.mlp.fc1.weight, blocks2.0.mlp.fc1.bias, blocks2.0.mlp.fc2.weight, blocks2.0.mlp.fc2.bias, blocks2.1.pos_embed.weight, blocks2.1.pos_embed.bias, blocks2.1.norm1.weight, blocks2.1.norm1.bias, blocks2.1.norm1.running_mean, blocks2.1.norm1.running_var, blocks2.1.conv1.weight, blocks2.1.conv1.bias, blocks2.1.conv2.weight, blocks2.1.conv2.bias, blocks2.1.attn.weight, blocks2.1.attn.bias, blocks2.1.norm2.weight, blocks2.1.norm2.bias, blocks2.1.norm2.running_mean, blocks2.1.norm2.running_var, blocks2.1.mlp.fc1.weight, blocks2.1.mlp.fc1.bias, blocks2.1.mlp.fc2.weight, blocks2.1.mlp.fc2.bias, blocks2.2.pos_embed.weight, blocks2.2.pos_embed.bias, blocks2.2.norm1.weight, blocks2.2.norm1.bias, blocks2.2.norm1.running_mean, blocks2.2.norm1.running_var, blocks2.2.conv1.weight, blocks2.2.conv1.bias, blocks2.2.conv2.weight, blocks2.2.conv2.bias, blocks2.2.attn.weight, blocks2.2.attn.bias, blocks2.2.norm2.weight, blocks2.2.norm2.bias, blocks2.2.norm2.running_mean, blocks2.2.norm2.running_var, blocks2.2.mlp.fc1.weight, blocks2.2.mlp.fc1.bias, blocks2.2.mlp.fc2.weight, blocks2.2.mlp.fc2.bias, blocks2.3.pos_embed.weight, blocks2.3.pos_embed.bias, blocks2.3.norm1.weight, 
blocks2.3.norm1.bias, blocks2.3.norm1.running_mean, blocks2.3.norm1.running_var, blocks2.3.conv1.weight, blocks2.3.conv1.bias, blocks2.3.conv2.weight, blocks2.3.conv2.bias, blocks2.3.attn.weight, blocks2.3.attn.bias, blocks2.3.norm2.weight, blocks2.3.norm2.bias, blocks2.3.norm2.running_mean, blocks2.3.norm2.running_var, blocks2.3.mlp.fc1.weight, blocks2.3.mlp.fc1.bias, blocks2.3.mlp.fc2.weight, blocks2.3.mlp.fc2.bias, norm2.weight, norm2.bias, blocks3.0.pos_embed.weight, blocks3.0.pos_embed.bias, blocks3.0.norm1.weight, blocks3.0.norm1.bias, blocks3.0.attn.qkv.weight, blocks3.0.attn.qkv.bias, blocks3.0.attn.proj.weight, blocks3.0.attn.proj.bias, blocks3.0.norm2.weight, blocks3.0.norm2.bias, blocks3.0.mlp.fc1.weight, blocks3.0.mlp.fc1.bias, blocks3.0.mlp.fc2.weight, blocks3.0.mlp.fc2.bias, blocks3.1.pos_embed.weight, blocks3.1.pos_embed.bias, blocks3.1.norm1.weight, blocks3.1.norm1.bias, blocks3.1.attn.qkv.weight, blocks3.1.attn.qkv.bias, blocks3.1.attn.proj.weight, blocks3.1.attn.proj.bias, blocks3.1.norm2.weight, blocks3.1.norm2.bias, blocks3.1.mlp.fc1.weight, blocks3.1.mlp.fc1.bias, blocks3.1.mlp.fc2.weight, blocks3.1.mlp.fc2.bias, blocks3.2.pos_embed.weight, blocks3.2.pos_embed.bias, blocks3.2.norm1.weight, blocks3.2.norm1.bias, blocks3.2.attn.qkv.weight, blocks3.2.attn.qkv.bias, blocks3.2.attn.proj.weight, blocks3.2.attn.proj.bias, blocks3.2.norm2.weight, blocks3.2.norm2.bias, blocks3.2.mlp.fc1.weight, blocks3.2.mlp.fc1.bias, blocks3.2.mlp.fc2.weight, blocks3.2.mlp.fc2.bias, blocks3.3.pos_embed.weight, blocks3.3.pos_embed.bias, blocks3.3.norm1.weight, blocks3.3.norm1.bias, blocks3.3.attn.qkv.weight, blocks3.3.attn.qkv.bias, blocks3.3.attn.proj.weight, blocks3.3.attn.proj.bias, blocks3.3.norm2.weight, blocks3.3.norm2.bias, blocks3.3.mlp.fc1.weight, blocks3.3.mlp.fc1.bias, blocks3.3.mlp.fc2.weight, blocks3.3.mlp.fc2.bias, blocks3.4.pos_embed.weight, blocks3.4.pos_embed.bias, blocks3.4.norm1.weight, blocks3.4.norm1.bias, blocks3.4.attn.qkv.weight, blocks3.4.attn.qkv.bias, blocks3.4.attn.proj.weight, blocks3.4.attn.proj.bias, blocks3.4.norm2.weight, blocks3.4.norm2.bias, blocks3.4.mlp.fc1.weight, blocks3.4.mlp.fc1.bias, blocks3.4.mlp.fc2.weight, blocks3.4.mlp.fc2.bias, blocks3.5.pos_embed.weight, blocks3.5.pos_embed.bias, blocks3.5.norm1.weight, blocks3.5.norm1.bias, blocks3.5.attn.qkv.weight, blocks3.5.attn.qkv.bias, blocks3.5.attn.proj.weight, blocks3.5.attn.proj.bias, blocks3.5.norm2.weight, blocks3.5.norm2.bias, blocks3.5.mlp.fc1.weight, blocks3.5.mlp.fc1.bias, blocks3.5.mlp.fc2.weight, blocks3.5.mlp.fc2.bias, blocks3.6.pos_embed.weight, blocks3.6.pos_embed.bias, blocks3.6.norm1.weight, blocks3.6.norm1.bias, blocks3.6.attn.qkv.weight, blocks3.6.attn.qkv.bias, blocks3.6.attn.proj.weight, blocks3.6.attn.proj.bias, blocks3.6.norm2.weight, blocks3.6.norm2.bias, blocks3.6.mlp.fc1.weight, blocks3.6.mlp.fc1.bias, blocks3.6.mlp.fc2.weight, blocks3.6.mlp.fc2.bias, blocks3.7.pos_embed.weight, blocks3.7.pos_embed.bias, blocks3.7.norm1.weight, blocks3.7.norm1.bias, blocks3.7.attn.qkv.weight, blocks3.7.attn.qkv.bias, blocks3.7.attn.proj.weight, blocks3.7.attn.proj.bias, blocks3.7.norm2.weight, blocks3.7.norm2.bias, blocks3.7.mlp.fc1.weight, blocks3.7.mlp.fc1.bias, blocks3.7.mlp.fc2.weight, blocks3.7.mlp.fc2.bias, norm3.weight, norm3.bias, blocks4.0.pos_embed.weight, blocks4.0.pos_embed.bias, blocks4.0.norm1.weight, blocks4.0.norm1.bias, blocks4.0.attn.qkv.weight, blocks4.0.attn.qkv.bias, blocks4.0.attn.proj.weight, blocks4.0.attn.proj.bias, blocks4.0.norm2.weight, blocks4.0.norm2.bias, 
blocks4.0.mlp.fc1.weight, blocks4.0.mlp.fc1.bias, blocks4.0.mlp.fc2.weight, blocks4.0.mlp.fc2.bias, blocks4.1.pos_embed.weight, blocks4.1.pos_embed.bias, blocks4.1.norm1.weight, blocks4.1.norm1.bias, blocks4.1.attn.qkv.weight, blocks4.1.attn.qkv.bias, blocks4.1.attn.proj.weight, blocks4.1.attn.proj.bias, blocks4.1.norm2.weight, blocks4.1.norm2.bias, blocks4.1.mlp.fc1.weight, blocks4.1.mlp.fc1.bias, blocks4.1.mlp.fc2.weight, blocks4.1.mlp.fc2.bias, blocks4.2.pos_embed.weight, blocks4.2.pos_embed.bias, blocks4.2.norm1.weight, blocks4.2.norm1.bias, blocks4.2.attn.qkv.weight, blocks4.2.attn.qkv.bias, blocks4.2.attn.proj.weight, blocks4.2.attn.proj.bias, blocks4.2.norm2.weight, blocks4.2.norm2.bias, blocks4.2.mlp.fc1.weight, blocks4.2.mlp.fc1.bias, blocks4.2.mlp.fc2.weight, blocks4.2.mlp.fc2.bias, norm4.weight, norm4.bias

07/24 14:04:56 - mmpose - INFO - Load pretrained model from /root/mmpose/projects/uniformer/pretrained/uniformer_small_in1k.pth
07/24 14:05:00 - mmengine - INFO - Distributed training is not used, all SyncBatchNorm (SyncBN) layers in the model will be automatically reverted to BatchNormXd layers if they are used.
07/24 14:05:00 - mmengine - INFO - Hooks will be executed in the following order:
before_run:
(VERY_HIGH   ) RuntimeInfoHook                    
(BELOW_NORMAL) LoggerHook                         
 -------------------- 
before_train:
(VERY_HIGH   ) RuntimeInfoHook                    
(NORMAL      ) IterTimerHook                      
(VERY_LOW    ) CheckpointHook                     
 -------------------- 
before_train_epoch:
(VERY_HIGH   ) RuntimeInfoHook                    
(NORMAL      ) IterTimerHook                      
(NORMAL      ) DistSamplerSeedHook                
 -------------------- 
before_train_iter:
(VERY_HIGH   ) RuntimeInfoHook                    
(NORMAL      ) IterTimerHook                      
 -------------------- 
after_train_iter:
(VERY_HIGH   ) RuntimeInfoHook                    
(NORMAL      ) IterTimerHook                      
(BELOW_NORMAL) LoggerHook                         
(LOW         ) ParamSchedulerHook                 
(VERY_LOW    ) CheckpointHook                     
 -------------------- 
after_train_epoch:
(NORMAL      ) IterTimerHook                      
(NORMAL      ) SyncBuffersHook                    
(LOW         ) ParamSchedulerHook                 
(VERY_LOW    ) CheckpointHook                     
 -------------------- 
before_val_epoch:
(NORMAL      ) IterTimerHook                      
(NORMAL      ) SyncBuffersHook                    
 -------------------- 
before_val_iter:
(NORMAL      ) IterTimerHook                      
 -------------------- 
after_val_iter:
(NORMAL      ) IterTimerHook                      
(NORMAL      ) PoseVisualizationHook              
(BELOW_NORMAL) LoggerHook                         
 -------------------- 
after_val_epoch:
(VERY_HIGH   ) RuntimeInfoHook                    
(NORMAL      ) IterTimerHook                      
(BELOW_NORMAL) LoggerHook                         
(LOW         ) ParamSchedulerHook                 
(VERY_LOW    ) CheckpointHook                     
 -------------------- 
after_train:
(VERY_LOW    ) CheckpointHook                     
 -------------------- 
before_test_epoch:
(NORMAL      ) IterTimerHook                      
 -------------------- 
before_test_iter:
(NORMAL      ) IterTimerHook                      
 -------------------- 
after_test_iter:
(NORMAL      ) IterTimerHook                      
(NORMAL      ) PoseVisualizationHook              
(BELOW_NORMAL) LoggerHook                         
 -------------------- 
after_test_epoch:
(VERY_HIGH   ) RuntimeInfoHook                    
(NORMAL      ) IterTimerHook                      
(BELOW_NORMAL) LoggerHook                         
 -------------------- 
after_run:
(BELOW_NORMAL) LoggerHook                         
 -------------------- 
loading annotations into memory...
Done (t=0.30s)
creating index...
index created!
loading annotations into memory...
Done (t=0.18s)
creating index...
index created!
Loads checkpoint by http backend from path: https://download.openmmlab.com/mmpose/v1/projects/uniformer/top_down_256x192_global_small-d4a7fdac_20230724.pth
Downloading: "https://download.openmmlab.com/mmpose/v1/projects/uniformer/top_down_256x192_global_small-d4a7fdac_20230724.pth" to /root/.cache/torch/hub/checkpoints/top_down_256x192_global_small-d4a7fdac_20230724.pth
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 96.3M/96.3M [00:05<00:00, 19.8MB/s]
07/24 14:05:14 - mmengine - INFO - Load checkpoint from https://download.openmmlab.com/mmpose/v1/projects/uniformer/top_down_256x192_global_small-d4a7fdac_20230724.pth
07/24 14:05:55 - mmengine - INFO - Epoch(test) [ 50/407]    eta: 0:04:51  time: 0.815450  data_time: 0.149263  memory: 2966  
07/24 14:06:37 - mmengine - INFO - Epoch(test) [100/407]    eta: 0:04:12  time: 0.831289  data_time: 0.162858  memory: 2966  
07/24 14:07:18 - mmengine - INFO - Epoch(test) [150/407]    eta: 0:03:31  time: 0.826862  data_time: 0.153038  memory: 2966  
07/24 14:08:00 - mmengine - INFO - Epoch(test) [200/407]    eta: 0:02:51  time: 0.837929  data_time: 0.170698  memory: 2966  
07/24 14:08:42 - mmengine - INFO - Epoch(test) [250/407]    eta: 0:02:10  time: 0.841122  data_time: 0.165579  memory: 2966  
07/24 14:09:23 - mmengine - INFO - Epoch(test) [300/407]    eta: 0:01:28  time: 0.828984  data_time: 0.156390  memory: 2966  
07/24 14:10:05 - mmengine - INFO - Epoch(test) [350/407]    eta: 0:00:47  time: 0.838562  data_time: 0.169481  memory: 2966  
07/24 14:10:47 - mmengine - INFO - Epoch(test) [400/407]    eta: 0:00:05  time: 0.836921  data_time: 0.168182  memory: 2966  
07/24 14:11:26 - mmengine - INFO - Evaluating CocoMetric...
Loading and preparing results...
DONE (t=3.31s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type *keypoints*
DONE (t=9.63s).
Accumulating evaluation results...
DONE (t=0.31s).
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets= 20 ] =  0.740
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets= 20 ] =  0.903
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets= 20 ] =  0.821
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] =  0.705
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] =  0.809
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 20 ] =  0.795
 Average Recall     (AR) @[ IoU=0.50      | area=   all | maxDets= 20 ] =  0.941
 Average Recall     (AR) @[ IoU=0.75      | area=   all | maxDets= 20 ] =  0.866
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] =  0.754
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] =  0.855
07/24 14:11:40 - mmengine - INFO - Epoch(test) [407/407]    coco/AP: 0.740478  coco/AP .5: 0.902957  coco/AP .75: 0.821051  coco/AP (M): 0.704838  coco/AP (L): 0.808673  coco/AR: 0.794773  coco/AR .5: 0.941436  coco/AR .75: 0.866026  coco/AR (M): 0.753510  coco/AR (L): 0.855035  data_time: 0.161404  time: 0.831106

For comparison, the accuracy listed in the official UniFormer repo is:

| Backbone | Input Size | AP | AP50 | AP75 | ARM | ARL | AR | FLOPs |
| -------- | ---------- | -- | ---- | ---- | --- | --- | -- | ----- |
| UniFormer-S | 256x192 | 74.0 | 90.3 | 82.2 | 66.8 | 76.7 | 79.5 | 4.7G |
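
For reference, this evaluation can be reproduced against the released checkpoint with the MMEngine runner (a minimal sketch — the work_dir is an assumption; the checkpoint URL is the one downloaded in the log above):

```python
from mmengine.config import Config
from mmengine.runner import Runner

cfg = Config.fromfile(
    'projects/uniformer/configs/td-hm_uniformer-s-8xb128-210e_coco-256x192.py')
cfg.work_dir = 'work_dirs/uniformer_s_256x192_test'  # assumed output directory
# released checkpoint, as downloaded in the log above
cfg.load_from = ('https://download.openmmlab.com/mmpose/v1/projects/uniformer/'
                 'top_down_256x192_global_small-d4a7fdac_20230724.pth')

runner = Runner.from_cfg(cfg)
runner.test()  # runs the COCO val split and prints the CocoMetric summary
```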

@xin-li-67
Contributor Author

Testing result of projects/uniformer/configs/_base_/td-hm_uniformer-b-8xb128-210e_coco-256x192.py:

07/24 14:29:21 - mmpose - INFO - Use torch.utils.checkpoint: False
07/24 14:29:21 - mmpose - INFO - torch.utils.checkpoint number: (0, 0, 0, 0)
07/24 14:29:21 - mmpose - INFO - Use global window for all blocks in stage3
07/24 14:29:22 - mmpose - INFO - Loads checkpoint by local backend from path: /root/mmpose/projects/uniformer/pretrained/uniformer_base_in1k.pth
07/24 14:29:22 - mmpose - WARNING - The model and loaded state dict do not match exactly

unexpected key in source state_dict: model

missing keys in source state_dict: patch_embed1.norm.weight, patch_embed1.norm.bias, patch_embed1.proj.weight, patch_embed1.proj.bias, patch_embed2.norm.weight, patch_embed2.norm.bias, patch_embed2.proj.weight, patch_embed2.proj.bias, patch_embed3.norm.weight, patch_embed3.norm.bias, patch_embed3.proj.weight, patch_embed3.proj.bias, patch_embed4.norm.weight, patch_embed4.norm.bias, patch_embed4.proj.weight, patch_embed4.proj.bias, blocks1.0.pos_embed.weight, blocks1.0.pos_embed.bias, blocks1.0.norm1.weight, blocks1.0.norm1.bias, blocks1.0.norm1.running_mean, blocks1.0.norm1.running_var, blocks1.0.conv1.weight, blocks1.0.conv1.bias, blocks1.0.conv2.weight, blocks1.0.conv2.bias, blocks1.0.attn.weight, blocks1.0.attn.bias, blocks1.0.norm2.weight, blocks1.0.norm2.bias, blocks1.0.norm2.running_mean, blocks1.0.norm2.running_var, blocks1.0.mlp.fc1.weight, blocks1.0.mlp.fc1.bias, blocks1.0.mlp.fc2.weight, blocks1.0.mlp.fc2.bias, blocks1.1.pos_embed.weight, blocks1.1.pos_embed.bias, blocks1.1.norm1.weight, blocks1.1.norm1.bias, blocks1.1.norm1.running_mean, blocks1.1.norm1.running_var, blocks1.1.conv1.weight, blocks1.1.conv1.bias, blocks1.1.conv2.weight, blocks1.1.conv2.bias, blocks1.1.attn.weight, blocks1.1.attn.bias, blocks1.1.norm2.weight, blocks1.1.norm2.bias, blocks1.1.norm2.running_mean, blocks1.1.norm2.running_var, blocks1.1.mlp.fc1.weight, blocks1.1.mlp.fc1.bias, blocks1.1.mlp.fc2.weight, blocks1.1.mlp.fc2.bias, blocks1.2.pos_embed.weight, blocks1.2.pos_embed.bias, blocks1.2.norm1.weight, blocks1.2.norm1.bias, blocks1.2.norm1.running_mean, blocks1.2.norm1.running_var, blocks1.2.conv1.weight, blocks1.2.conv1.bias, blocks1.2.conv2.weight, blocks1.2.conv2.bias, blocks1.2.attn.weight, blocks1.2.attn.bias, blocks1.2.norm2.weight, blocks1.2.norm2.bias, blocks1.2.norm2.running_mean, blocks1.2.norm2.running_var, blocks1.2.mlp.fc1.weight, blocks1.2.mlp.fc1.bias, blocks1.2.mlp.fc2.weight, blocks1.2.mlp.fc2.bias, blocks1.3.pos_embed.weight, blocks1.3.pos_embed.bias, blocks1.3.norm1.weight, blocks1.3.norm1.bias, blocks1.3.norm1.running_mean, blocks1.3.norm1.running_var, blocks1.3.conv1.weight, blocks1.3.conv1.bias, blocks1.3.conv2.weight, blocks1.3.conv2.bias, blocks1.3.attn.weight, blocks1.3.attn.bias, blocks1.3.norm2.weight, blocks1.3.norm2.bias, blocks1.3.norm2.running_mean, blocks1.3.norm2.running_var, blocks1.3.mlp.fc1.weight, blocks1.3.mlp.fc1.bias, blocks1.3.mlp.fc2.weight, blocks1.3.mlp.fc2.bias, blocks1.4.pos_embed.weight, blocks1.4.pos_embed.bias, blocks1.4.norm1.weight, blocks1.4.norm1.bias, blocks1.4.norm1.running_mean, blocks1.4.norm1.running_var, blocks1.4.conv1.weight, blocks1.4.conv1.bias, blocks1.4.conv2.weight, blocks1.4.conv2.bias, blocks1.4.attn.weight, blocks1.4.attn.bias, blocks1.4.norm2.weight, blocks1.4.norm2.bias, blocks1.4.norm2.running_mean, blocks1.4.norm2.running_var, blocks1.4.mlp.fc1.weight, blocks1.4.mlp.fc1.bias, blocks1.4.mlp.fc2.weight, blocks1.4.mlp.fc2.bias, norm1.weight, norm1.bias, blocks2.0.pos_embed.weight, blocks2.0.pos_embed.bias, blocks2.0.norm1.weight, blocks2.0.norm1.bias, blocks2.0.norm1.running_mean, blocks2.0.norm1.running_var, blocks2.0.conv1.weight, blocks2.0.conv1.bias, blocks2.0.conv2.weight, blocks2.0.conv2.bias, blocks2.0.attn.weight, blocks2.0.attn.bias, blocks2.0.norm2.weight, blocks2.0.norm2.bias, blocks2.0.norm2.running_mean, blocks2.0.norm2.running_var, blocks2.0.mlp.fc1.weight, blocks2.0.mlp.fc1.bias, blocks2.0.mlp.fc2.weight, blocks2.0.mlp.fc2.bias, blocks2.1.pos_embed.weight, blocks2.1.pos_embed.bias, blocks2.1.norm1.weight, 
blocks2.1.norm1.bias, blocks2.1.norm1.running_mean, blocks2.1.norm1.running_var, blocks2.1.conv1.weight, blocks2.1.conv1.bias, blocks2.1.conv2.weight, blocks2.1.conv2.bias, blocks2.1.attn.weight, blocks2.1.attn.bias, blocks2.1.norm2.weight, blocks2.1.norm2.bias, blocks2.1.norm2.running_mean, blocks2.1.norm2.running_var, blocks2.1.mlp.fc1.weight, blocks2.1.mlp.fc1.bias, blocks2.1.mlp.fc2.weight, blocks2.1.mlp.fc2.bias, blocks2.2.pos_embed.weight, blocks2.2.pos_embed.bias, blocks2.2.norm1.weight, blocks2.2.norm1.bias, blocks2.2.norm1.running_mean, blocks2.2.norm1.running_var, blocks2.2.conv1.weight, blocks2.2.conv1.bias, blocks2.2.conv2.weight, blocks2.2.conv2.bias, blocks2.2.attn.weight, blocks2.2.attn.bias, blocks2.2.norm2.weight, blocks2.2.norm2.bias, blocks2.2.norm2.running_mean, blocks2.2.norm2.running_var, blocks2.2.mlp.fc1.weight, blocks2.2.mlp.fc1.bias, blocks2.2.mlp.fc2.weight, blocks2.2.mlp.fc2.bias, blocks2.3.pos_embed.weight, blocks2.3.pos_embed.bias, blocks2.3.norm1.weight, blocks2.3.norm1.bias, blocks2.3.norm1.running_mean, blocks2.3.norm1.running_var, blocks2.3.conv1.weight, blocks2.3.conv1.bias, blocks2.3.conv2.weight, blocks2.3.conv2.bias, blocks2.3.attn.weight, blocks2.3.attn.bias, blocks2.3.norm2.weight, blocks2.3.norm2.bias, blocks2.3.norm2.running_mean, blocks2.3.norm2.running_var, blocks2.3.mlp.fc1.weight, blocks2.3.mlp.fc1.bias, blocks2.3.mlp.fc2.weight, blocks2.3.mlp.fc2.bias, blocks2.4.pos_embed.weight, blocks2.4.pos_embed.bias, blocks2.4.norm1.weight, blocks2.4.norm1.bias, blocks2.4.norm1.running_mean, blocks2.4.norm1.running_var, blocks2.4.conv1.weight, blocks2.4.conv1.bias, blocks2.4.conv2.weight, blocks2.4.conv2.bias, blocks2.4.attn.weight, blocks2.4.attn.bias, blocks2.4.norm2.weight, blocks2.4.norm2.bias, blocks2.4.norm2.running_mean, blocks2.4.norm2.running_var, blocks2.4.mlp.fc1.weight, blocks2.4.mlp.fc1.bias, blocks2.4.mlp.fc2.weight, blocks2.4.mlp.fc2.bias, blocks2.5.pos_embed.weight, blocks2.5.pos_embed.bias, blocks2.5.norm1.weight, blocks2.5.norm1.bias, blocks2.5.norm1.running_mean, blocks2.5.norm1.running_var, blocks2.5.conv1.weight, blocks2.5.conv1.bias, blocks2.5.conv2.weight, blocks2.5.conv2.bias, blocks2.5.attn.weight, blocks2.5.attn.bias, blocks2.5.norm2.weight, blocks2.5.norm2.bias, blocks2.5.norm2.running_mean, blocks2.5.norm2.running_var, blocks2.5.mlp.fc1.weight, blocks2.5.mlp.fc1.bias, blocks2.5.mlp.fc2.weight, blocks2.5.mlp.fc2.bias, blocks2.6.pos_embed.weight, blocks2.6.pos_embed.bias, blocks2.6.norm1.weight, blocks2.6.norm1.bias, blocks2.6.norm1.running_mean, blocks2.6.norm1.running_var, blocks2.6.conv1.weight, blocks2.6.conv1.bias, blocks2.6.conv2.weight, blocks2.6.conv2.bias, blocks2.6.attn.weight, blocks2.6.attn.bias, blocks2.6.norm2.weight, blocks2.6.norm2.bias, blocks2.6.norm2.running_mean, blocks2.6.norm2.running_var, blocks2.6.mlp.fc1.weight, blocks2.6.mlp.fc1.bias, blocks2.6.mlp.fc2.weight, blocks2.6.mlp.fc2.bias, blocks2.7.pos_embed.weight, blocks2.7.pos_embed.bias, blocks2.7.norm1.weight, blocks2.7.norm1.bias, blocks2.7.norm1.running_mean, blocks2.7.norm1.running_var, blocks2.7.conv1.weight, blocks2.7.conv1.bias, blocks2.7.conv2.weight, blocks2.7.conv2.bias, blocks2.7.attn.weight, blocks2.7.attn.bias, blocks2.7.norm2.weight, blocks2.7.norm2.bias, blocks2.7.norm2.running_mean, blocks2.7.norm2.running_var, blocks2.7.mlp.fc1.weight, blocks2.7.mlp.fc1.bias, blocks2.7.mlp.fc2.weight, blocks2.7.mlp.fc2.bias, norm2.weight, norm2.bias, blocks3.0.pos_embed.weight, blocks3.0.pos_embed.bias, blocks3.0.norm1.weight, blocks3.0.norm1.bias, 
blocks3.0.attn.qkv.weight, blocks3.0.attn.qkv.bias, blocks3.0.attn.proj.weight, blocks3.0.attn.proj.bias, blocks3.0.norm2.weight, blocks3.0.norm2.bias, blocks3.0.mlp.fc1.weight, blocks3.0.mlp.fc1.bias, blocks3.0.mlp.fc2.weight, blocks3.0.mlp.fc2.bias, blocks3.1.pos_embed.weight, blocks3.1.pos_embed.bias, blocks3.1.norm1.weight, blocks3.1.norm1.bias, blocks3.1.attn.qkv.weight, blocks3.1.attn.qkv.bias, blocks3.1.attn.proj.weight, blocks3.1.attn.proj.bias, blocks3.1.norm2.weight, blocks3.1.norm2.bias, blocks3.1.mlp.fc1.weight, blocks3.1.mlp.fc1.bias, blocks3.1.mlp.fc2.weight, blocks3.1.mlp.fc2.bias, blocks3.2.pos_embed.weight, blocks3.2.pos_embed.bias, blocks3.2.norm1.weight, blocks3.2.norm1.bias, blocks3.2.attn.qkv.weight, blocks3.2.attn.qkv.bias, blocks3.2.attn.proj.weight, blocks3.2.attn.proj.bias, blocks3.2.norm2.weight, blocks3.2.norm2.bias, blocks3.2.mlp.fc1.weight, blocks3.2.mlp.fc1.bias, blocks3.2.mlp.fc2.weight, blocks3.2.mlp.fc2.bias, blocks3.3.pos_embed.weight, blocks3.3.pos_embed.bias, blocks3.3.norm1.weight, blocks3.3.norm1.bias, blocks3.3.attn.qkv.weight, blocks3.3.attn.qkv.bias, blocks3.3.attn.proj.weight, blocks3.3.attn.proj.bias, blocks3.3.norm2.weight, blocks3.3.norm2.bias, blocks3.3.mlp.fc1.weight, blocks3.3.mlp.fc1.bias, blocks3.3.mlp.fc2.weight, blocks3.3.mlp.fc2.bias, blocks3.4.pos_embed.weight, blocks3.4.pos_embed.bias, blocks3.4.norm1.weight, blocks3.4.norm1.bias, blocks3.4.attn.qkv.weight, blocks3.4.attn.qkv.bias, blocks3.4.attn.proj.weight, blocks3.4.attn.proj.bias, blocks3.4.norm2.weight, blocks3.4.norm2.bias, blocks3.4.mlp.fc1.weight, blocks3.4.mlp.fc1.bias, blocks3.4.mlp.fc2.weight, blocks3.4.mlp.fc2.bias, blocks3.5.pos_embed.weight, blocks3.5.pos_embed.bias, blocks3.5.norm1.weight, blocks3.5.norm1.bias, blocks3.5.attn.qkv.weight, blocks3.5.attn.qkv.bias, blocks3.5.attn.proj.weight, blocks3.5.attn.proj.bias, blocks3.5.norm2.weight, blocks3.5.norm2.bias, blocks3.5.mlp.fc1.weight, blocks3.5.mlp.fc1.bias, blocks3.5.mlp.fc2.weight, blocks3.5.mlp.fc2.bias, blocks3.6.pos_embed.weight, blocks3.6.pos_embed.bias, blocks3.6.norm1.weight, blocks3.6.norm1.bias, blocks3.6.attn.qkv.weight, blocks3.6.attn.qkv.bias, blocks3.6.attn.proj.weight, blocks3.6.attn.proj.bias, blocks3.6.norm2.weight, blocks3.6.norm2.bias, blocks3.6.mlp.fc1.weight, blocks3.6.mlp.fc1.bias, blocks3.6.mlp.fc2.weight, blocks3.6.mlp.fc2.bias, blocks3.7.pos_embed.weight, blocks3.7.pos_embed.bias, blocks3.7.norm1.weight, blocks3.7.norm1.bias, blocks3.7.attn.qkv.weight, blocks3.7.attn.qkv.bias, blocks3.7.attn.proj.weight, blocks3.7.attn.proj.bias, blocks3.7.norm2.weight, blocks3.7.norm2.bias, blocks3.7.mlp.fc1.weight, blocks3.7.mlp.fc1.bias, blocks3.7.mlp.fc2.weight, blocks3.7.mlp.fc2.bias, blocks3.8.pos_embed.weight, blocks3.8.pos_embed.bias, blocks3.8.norm1.weight, blocks3.8.norm1.bias, blocks3.8.attn.qkv.weight, blocks3.8.attn.qkv.bias, blocks3.8.attn.proj.weight, blocks3.8.attn.proj.bias, blocks3.8.norm2.weight, blocks3.8.norm2.bias, blocks3.8.mlp.fc1.weight, blocks3.8.mlp.fc1.bias, blocks3.8.mlp.fc2.weight, blocks3.8.mlp.fc2.bias, blocks3.9.pos_embed.weight, blocks3.9.pos_embed.bias, blocks3.9.norm1.weight, blocks3.9.norm1.bias, blocks3.9.attn.qkv.weight, blocks3.9.attn.qkv.bias, blocks3.9.attn.proj.weight, blocks3.9.attn.proj.bias, blocks3.9.norm2.weight, blocks3.9.norm2.bias, blocks3.9.mlp.fc1.weight, blocks3.9.mlp.fc1.bias, blocks3.9.mlp.fc2.weight, blocks3.9.mlp.fc2.bias, blocks3.10.pos_embed.weight, blocks3.10.pos_embed.bias, blocks3.10.norm1.weight, blocks3.10.norm1.bias, blocks3.10.attn.qkv.weight, 
blocks3.10.attn.qkv.bias, blocks3.10.attn.proj.weight, blocks3.10.attn.proj.bias, blocks3.10.norm2.weight, blocks3.10.norm2.bias, blocks3.10.mlp.fc1.weight, blocks3.10.mlp.fc1.bias, blocks3.10.mlp.fc2.weight, blocks3.10.mlp.fc2.bias, blocks3.11.pos_embed.weight, blocks3.11.pos_embed.bias, blocks3.11.norm1.weight, blocks3.11.norm1.bias, blocks3.11.attn.qkv.weight, blocks3.11.attn.qkv.bias, blocks3.11.attn.proj.weight, blocks3.11.attn.proj.bias, blocks3.11.norm2.weight, blocks3.11.norm2.bias, blocks3.11.mlp.fc1.weight, blocks3.11.mlp.fc1.bias, blocks3.11.mlp.fc2.weight, blocks3.11.mlp.fc2.bias, blocks3.12.pos_embed.weight, blocks3.12.pos_embed.bias, blocks3.12.norm1.weight, blocks3.12.norm1.bias, blocks3.12.attn.qkv.weight, blocks3.12.attn.qkv.bias, blocks3.12.attn.proj.weight, blocks3.12.attn.proj.bias, blocks3.12.norm2.weight, blocks3.12.norm2.bias, blocks3.12.mlp.fc1.weight, blocks3.12.mlp.fc1.bias, blocks3.12.mlp.fc2.weight, blocks3.12.mlp.fc2.bias, blocks3.13.pos_embed.weight, blocks3.13.pos_embed.bias, blocks3.13.norm1.weight, blocks3.13.norm1.bias, blocks3.13.attn.qkv.weight, blocks3.13.attn.qkv.bias, blocks3.13.attn.proj.weight, blocks3.13.attn.proj.bias, blocks3.13.norm2.weight, blocks3.13.norm2.bias, blocks3.13.mlp.fc1.weight, blocks3.13.mlp.fc1.bias, blocks3.13.mlp.fc2.weight, blocks3.13.mlp.fc2.bias, blocks3.14.pos_embed.weight, blocks3.14.pos_embed.bias, blocks3.14.norm1.weight, blocks3.14.norm1.bias, blocks3.14.attn.qkv.weight, blocks3.14.attn.qkv.bias, blocks3.14.attn.proj.weight, blocks3.14.attn.proj.bias, blocks3.14.norm2.weight, blocks3.14.norm2.bias, blocks3.14.mlp.fc1.weight, blocks3.14.mlp.fc1.bias, blocks3.14.mlp.fc2.weight, blocks3.14.mlp.fc2.bias, blocks3.15.pos_embed.weight, blocks3.15.pos_embed.bias, blocks3.15.norm1.weight, blocks3.15.norm1.bias, blocks3.15.attn.qkv.weight, blocks3.15.attn.qkv.bias, blocks3.15.attn.proj.weight, blocks3.15.attn.proj.bias, blocks3.15.norm2.weight, blocks3.15.norm2.bias, blocks3.15.mlp.fc1.weight, blocks3.15.mlp.fc1.bias, blocks3.15.mlp.fc2.weight, blocks3.15.mlp.fc2.bias, blocks3.16.pos_embed.weight, blocks3.16.pos_embed.bias, blocks3.16.norm1.weight, blocks3.16.norm1.bias, blocks3.16.attn.qkv.weight, blocks3.16.attn.qkv.bias, blocks3.16.attn.proj.weight, blocks3.16.attn.proj.bias, blocks3.16.norm2.weight, blocks3.16.norm2.bias, blocks3.16.mlp.fc1.weight, blocks3.16.mlp.fc1.bias, blocks3.16.mlp.fc2.weight, blocks3.16.mlp.fc2.bias, blocks3.17.pos_embed.weight, blocks3.17.pos_embed.bias, blocks3.17.norm1.weight, blocks3.17.norm1.bias, blocks3.17.attn.qkv.weight, blocks3.17.attn.qkv.bias, blocks3.17.attn.proj.weight, blocks3.17.attn.proj.bias, blocks3.17.norm2.weight, blocks3.17.norm2.bias, blocks3.17.mlp.fc1.weight, blocks3.17.mlp.fc1.bias, blocks3.17.mlp.fc2.weight, blocks3.17.mlp.fc2.bias, blocks3.18.pos_embed.weight, blocks3.18.pos_embed.bias, blocks3.18.norm1.weight, blocks3.18.norm1.bias, blocks3.18.attn.qkv.weight, blocks3.18.attn.qkv.bias, blocks3.18.attn.proj.weight, blocks3.18.attn.proj.bias, blocks3.18.norm2.weight, blocks3.18.norm2.bias, blocks3.18.mlp.fc1.weight, blocks3.18.mlp.fc1.bias, blocks3.18.mlp.fc2.weight, blocks3.18.mlp.fc2.bias, blocks3.19.pos_embed.weight, blocks3.19.pos_embed.bias, blocks3.19.norm1.weight, blocks3.19.norm1.bias, blocks3.19.attn.qkv.weight, blocks3.19.attn.qkv.bias, blocks3.19.attn.proj.weight, blocks3.19.attn.proj.bias, blocks3.19.norm2.weight, blocks3.19.norm2.bias, blocks3.19.mlp.fc1.weight, blocks3.19.mlp.fc1.bias, blocks3.19.mlp.fc2.weight, blocks3.19.mlp.fc2.bias, norm3.weight, norm3.bias, 
blocks4.0.pos_embed.weight, blocks4.0.pos_embed.bias, blocks4.0.norm1.weight, blocks4.0.norm1.bias, blocks4.0.attn.qkv.weight, blocks4.0.attn.qkv.bias, blocks4.0.attn.proj.weight, blocks4.0.attn.proj.bias, blocks4.0.norm2.weight, blocks4.0.norm2.bias, blocks4.0.mlp.fc1.weight, blocks4.0.mlp.fc1.bias, blocks4.0.mlp.fc2.weight, blocks4.0.mlp.fc2.bias, blocks4.1.pos_embed.weight, blocks4.1.pos_embed.bias, blocks4.1.norm1.weight, blocks4.1.norm1.bias, blocks4.1.attn.qkv.weight, blocks4.1.attn.qkv.bias, blocks4.1.attn.proj.weight, blocks4.1.attn.proj.bias, blocks4.1.norm2.weight, blocks4.1.norm2.bias, blocks4.1.mlp.fc1.weight, blocks4.1.mlp.fc1.bias, blocks4.1.mlp.fc2.weight, blocks4.1.mlp.fc2.bias, blocks4.2.pos_embed.weight, blocks4.2.pos_embed.bias, blocks4.2.norm1.weight, blocks4.2.norm1.bias, blocks4.2.attn.qkv.weight, blocks4.2.attn.qkv.bias, blocks4.2.attn.proj.weight, blocks4.2.attn.proj.bias, blocks4.2.norm2.weight, blocks4.2.norm2.bias, blocks4.2.mlp.fc1.weight, blocks4.2.mlp.fc1.bias, blocks4.2.mlp.fc2.weight, blocks4.2.mlp.fc2.bias, blocks4.3.pos_embed.weight, blocks4.3.pos_embed.bias, blocks4.3.norm1.weight, blocks4.3.norm1.bias, blocks4.3.attn.qkv.weight, blocks4.3.attn.qkv.bias, blocks4.3.attn.proj.weight, blocks4.3.attn.proj.bias, blocks4.3.norm2.weight, blocks4.3.norm2.bias, blocks4.3.mlp.fc1.weight, blocks4.3.mlp.fc1.bias, blocks4.3.mlp.fc2.weight, blocks4.3.mlp.fc2.bias, blocks4.4.pos_embed.weight, blocks4.4.pos_embed.bias, blocks4.4.norm1.weight, blocks4.4.norm1.bias, blocks4.4.attn.qkv.weight, blocks4.4.attn.qkv.bias, blocks4.4.attn.proj.weight, blocks4.4.attn.proj.bias, blocks4.4.norm2.weight, blocks4.4.norm2.bias, blocks4.4.mlp.fc1.weight, blocks4.4.mlp.fc1.bias, blocks4.4.mlp.fc2.weight, blocks4.4.mlp.fc2.bias, blocks4.5.pos_embed.weight, blocks4.5.pos_embed.bias, blocks4.5.norm1.weight, blocks4.5.norm1.bias, blocks4.5.attn.qkv.weight, blocks4.5.attn.qkv.bias, blocks4.5.attn.proj.weight, blocks4.5.attn.proj.bias, blocks4.5.norm2.weight, blocks4.5.norm2.bias, blocks4.5.mlp.fc1.weight, blocks4.5.mlp.fc1.bias, blocks4.5.mlp.fc2.weight, blocks4.5.mlp.fc2.bias, blocks4.6.pos_embed.weight, blocks4.6.pos_embed.bias, blocks4.6.norm1.weight, blocks4.6.norm1.bias, blocks4.6.attn.qkv.weight, blocks4.6.attn.qkv.bias, blocks4.6.attn.proj.weight, blocks4.6.attn.proj.bias, blocks4.6.norm2.weight, blocks4.6.norm2.bias, blocks4.6.mlp.fc1.weight, blocks4.6.mlp.fc1.bias, blocks4.6.mlp.fc2.weight, blocks4.6.mlp.fc2.bias, norm4.weight, norm4.bias

07/24 14:29:22 - mmpose - INFO - Load pretrained model from /root/mmpose/projects/uniformer/pretrained/uniformer_base_in1k.pth
07/24 14:29:26 - mmengine - INFO - Distributed training is not used, all SyncBatchNorm (SyncBN) layers in the model will be automatically reverted to BatchNormXd layers if they are used.
07/24 14:29:26 - mmengine - INFO - Hooks will be executed in the following order:
before_run:
(VERY_HIGH   ) RuntimeInfoHook                    
(BELOW_NORMAL) LoggerHook                         
 -------------------- 
before_train:
(VERY_HIGH   ) RuntimeInfoHook                    
(NORMAL      ) IterTimerHook                      
(VERY_LOW    ) CheckpointHook                     
 -------------------- 
before_train_epoch:
(VERY_HIGH   ) RuntimeInfoHook                    
(NORMAL      ) IterTimerHook                      
(NORMAL      ) DistSamplerSeedHook                
 -------------------- 
before_train_iter:
(VERY_HIGH   ) RuntimeInfoHook                    
(NORMAL      ) IterTimerHook                      
 -------------------- 
after_train_iter:
(VERY_HIGH   ) RuntimeInfoHook                    
(NORMAL      ) IterTimerHook                      
(BELOW_NORMAL) LoggerHook                         
(LOW         ) ParamSchedulerHook                 
(VERY_LOW    ) CheckpointHook                     
 -------------------- 
after_train_epoch:
(NORMAL      ) IterTimerHook                      
(NORMAL      ) SyncBuffersHook                    
(LOW         ) ParamSchedulerHook                 
(VERY_LOW    ) CheckpointHook                     
 -------------------- 
before_val_epoch:
(NORMAL      ) IterTimerHook                      
(NORMAL      ) SyncBuffersHook                    
 -------------------- 
before_val_iter:
(NORMAL      ) IterTimerHook                      
 -------------------- 
after_val_iter:
(NORMAL      ) IterTimerHook                      
(NORMAL      ) PoseVisualizationHook              
(BELOW_NORMAL) LoggerHook                         
 -------------------- 
after_val_epoch:
(VERY_HIGH   ) RuntimeInfoHook                    
(NORMAL      ) IterTimerHook                      
(BELOW_NORMAL) LoggerHook                         
(LOW         ) ParamSchedulerHook                 
(VERY_LOW    ) CheckpointHook                     
 -------------------- 
after_train:
(VERY_LOW    ) CheckpointHook                     
 -------------------- 
before_test_epoch:
(NORMAL      ) IterTimerHook                      
 -------------------- 
before_test_iter:
(NORMAL      ) IterTimerHook                      
 -------------------- 
after_test_iter:
(NORMAL      ) IterTimerHook                      
(NORMAL      ) PoseVisualizationHook              
(BELOW_NORMAL) LoggerHook                         
 -------------------- 
after_test_epoch:
(VERY_HIGH   ) RuntimeInfoHook                    
(NORMAL      ) IterTimerHook                      
(BELOW_NORMAL) LoggerHook                         
 -------------------- 
after_run:
(BELOW_NORMAL) LoggerHook                         
 -------------------- 
loading annotations into memory...
Done (t=0.30s)
creating index...
index created!
loading annotations into memory...
Done (t=0.19s)
creating index...
index created!
Loads checkpoint by http backend from path: https://download.openmmlab.com/mmpose/v1/projects/uniformer/top_down_256x192_global_base-1713bcd4_20230724.pth
Downloading: "https://download.openmmlab.com/mmpose/v1/projects/uniformer/top_down_256x192_global_base-1713bcd4_20230724.pth" to /root/.cache/torch/hub/checkpoints/top_down_256x192_global_base-1713bcd4_20230724.pth
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 204M/204M [00:10<00:00, 20.9MB/s]
07/24 14:29:45 - mmengine - INFO - Load checkpoint from https://download.openmmlab.com/mmpose/v1/projects/uniformer/top_down_256x192_global_base-1713bcd4_20230724.pth
07/24 14:30:50 - mmengine - INFO - Epoch(test) [ 50/407]    eta: 0:07:43  time: 1.299552  data_time: 0.199893  memory: 3075  
07/24 14:31:54 - mmengine - INFO - Epoch(test) [100/407]    eta: 0:06:36  time: 1.280605  data_time: 0.170661  memory: 3075  
07/24 14:32:58 - mmengine - INFO - Epoch(test) [150/407]    eta: 0:05:29  time: 1.268949  data_time: 0.157379  memory: 3075  
07/24 14:34:02 - mmengine - INFO - Epoch(test) [200/407]    eta: 0:04:25  time: 1.280099  data_time: 0.168987  memory: 3075  
07/24 14:35:06 - mmengine - INFO - Epoch(test) [250/407]    eta: 0:03:21  time: 1.290992  data_time: 0.181665  memory: 3075  
07/24 14:36:10 - mmengine - INFO - Epoch(test) [300/407]    eta: 0:02:17  time: 1.279991  data_time: 0.171667  memory: 3075  
07/24 14:37:14 - mmengine - INFO - Epoch(test) [350/407]    eta: 0:01:13  time: 1.278610  data_time: 0.166956  memory: 3075  
07/24 14:38:18 - mmengine - INFO - Epoch(test) [400/407]    eta: 0:00:08  time: 1.275209  data_time: 0.164871  memory: 3075  
07/24 14:38:59 - mmengine - INFO - Evaluating CocoMetric...
Loading and preparing results...
DONE (t=3.18s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type *keypoints*
DONE (t=9.17s).
Accumulating evaluation results...
DONE (t=0.31s).
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets= 20 ] =  0.750
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets= 20 ] =  0.905
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets= 20 ] =  0.829
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] =  0.715
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] =  0.818
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 20 ] =  0.804
 Average Recall     (AR) @[ IoU=0.50      | area=   all | maxDets= 20 ] =  0.943
 Average Recall     (AR) @[ IoU=0.75      | area=   all | maxDets= 20 ] =  0.872
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] =  0.762
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] =  0.864
07/24 14:39:12 - mmengine - INFO - Epoch(test) [407/407]    coco/AP: 0.749641  coco/AP .5: 0.905371  coco/AP .75: 0.828859  coco/AP (M): 0.714766  coco/AP (L): 0.817848  coco/AR: 0.803526  coco/AR .5: 0.942538  coco/AR .75: 0.871851  coco/AR (M): 0.761950  coco/AR (L): 0.863768  data_time: 0.172043  time: 1.280725

For comparison, the accuracy listed in the official UniFormer repo is:

| Backbone | Input Size | AP | AP50 | AP75 | ARM | ARL | AR | FLOPs |
| -------- | ---------- | -- | ---- | ---- | --- | --- | -- | ----- |
| UniFormer-B | 256x192 | 75.0 | 90.6 | 83.0 | 67.8 | 77.7 | 80.4 | 9.2G |
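
As an aside, the "unexpected key in source state_dict: model" warning in these logs means the ImageNet-pretrained files keep their weights nested under a top-level 'model' key; a quick way to inspect and unwrap such a checkpoint (a sketch, using the local path from the log above):

```python
import torch

# inspect the ImageNet-pretrained checkpoint used for initialization
ckpt = torch.load(
    'projects/uniformer/pretrained/uniformer_base_in1k.pth', map_location='cpu')
print(list(ckpt.keys()))  # expect a top-level 'model' key, matching the warning

# unwrap the parameter dict nested under 'model' (fall back to the file itself)
state_dict = ckpt.get('model', ckpt)
print(len(state_dict), 'tensors, first key:', next(iter(state_dict)))
```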

@xin-li-67
Contributor Author

Testing result of projects/uniformer/configs/_base_/td-hm_uniformer-b-8xb32-210e_coco-384x288.py:

07/24 14:42:05 - mmpose - INFO - Use torch.utils.checkpoint: False
07/24 14:42:05 - mmpose - INFO - torch.utils.checkpoint number: (0, 0, 0, 0)
07/24 14:42:05 - mmpose - INFO - Use global window for all blocks in stage3
07/24 14:42:06 - mmpose - INFO - Loads checkpoint by local backend from path: /root/mmpose/projects/uniformer/pretrained/uniformer_base_in1k.pth
07/24 14:42:06 - mmpose - WARNING - The model and loaded state dict do not match exactly

unexpected key in source state_dict: model

missing keys in source state_dict: patch_embed1.norm.weight, patch_embed1.norm.bias, patch_embed1.proj.weight, patch_embed1.proj.bias, patch_embed2.norm.weight, patch_embed2.norm.bias, patch_embed2.proj.weight, patch_embed2.proj.bias, patch_embed3.norm.weight, patch_embed3.norm.bias, patch_embed3.proj.weight, patch_embed3.proj.bias, patch_embed4.norm.weight, patch_embed4.norm.bias, patch_embed4.proj.weight, patch_embed4.proj.bias, blocks1.0.pos_embed.weight, blocks1.0.pos_embed.bias, blocks1.0.norm1.weight, blocks1.0.norm1.bias, blocks1.0.norm1.running_mean, blocks1.0.norm1.running_var, blocks1.0.conv1.weight, blocks1.0.conv1.bias, blocks1.0.conv2.weight, blocks1.0.conv2.bias, blocks1.0.attn.weight, blocks1.0.attn.bias, blocks1.0.norm2.weight, blocks1.0.norm2.bias, blocks1.0.norm2.running_mean, blocks1.0.norm2.running_var, blocks1.0.mlp.fc1.weight, blocks1.0.mlp.fc1.bias, blocks1.0.mlp.fc2.weight, blocks1.0.mlp.fc2.bias, blocks1.1.pos_embed.weight, blocks1.1.pos_embed.bias, blocks1.1.norm1.weight, blocks1.1.norm1.bias, blocks1.1.norm1.running_mean, blocks1.1.norm1.running_var, blocks1.1.conv1.weight, blocks1.1.conv1.bias, blocks1.1.conv2.weight, blocks1.1.conv2.bias, blocks1.1.attn.weight, blocks1.1.attn.bias, blocks1.1.norm2.weight, blocks1.1.norm2.bias, blocks1.1.norm2.running_mean, blocks1.1.norm2.running_var, blocks1.1.mlp.fc1.weight, blocks1.1.mlp.fc1.bias, blocks1.1.mlp.fc2.weight, blocks1.1.mlp.fc2.bias, blocks1.2.pos_embed.weight, blocks1.2.pos_embed.bias, blocks1.2.norm1.weight, blocks1.2.norm1.bias, blocks1.2.norm1.running_mean, blocks1.2.norm1.running_var, blocks1.2.conv1.weight, blocks1.2.conv1.bias, blocks1.2.conv2.weight, blocks1.2.conv2.bias, blocks1.2.attn.weight, blocks1.2.attn.bias, blocks1.2.norm2.weight, blocks1.2.norm2.bias, blocks1.2.norm2.running_mean, blocks1.2.norm2.running_var, blocks1.2.mlp.fc1.weight, blocks1.2.mlp.fc1.bias, blocks1.2.mlp.fc2.weight, blocks1.2.mlp.fc2.bias, blocks1.3.pos_embed.weight, blocks1.3.pos_embed.bias, blocks1.3.norm1.weight, blocks1.3.norm1.bias, blocks1.3.norm1.running_mean, blocks1.3.norm1.running_var, blocks1.3.conv1.weight, blocks1.3.conv1.bias, blocks1.3.conv2.weight, blocks1.3.conv2.bias, blocks1.3.attn.weight, blocks1.3.attn.bias, blocks1.3.norm2.weight, blocks1.3.norm2.bias, blocks1.3.norm2.running_mean, blocks1.3.norm2.running_var, blocks1.3.mlp.fc1.weight, blocks1.3.mlp.fc1.bias, blocks1.3.mlp.fc2.weight, blocks1.3.mlp.fc2.bias, blocks1.4.pos_embed.weight, blocks1.4.pos_embed.bias, blocks1.4.norm1.weight, blocks1.4.norm1.bias, blocks1.4.norm1.running_mean, blocks1.4.norm1.running_var, blocks1.4.conv1.weight, blocks1.4.conv1.bias, blocks1.4.conv2.weight, blocks1.4.conv2.bias, blocks1.4.attn.weight, blocks1.4.attn.bias, blocks1.4.norm2.weight, blocks1.4.norm2.bias, blocks1.4.norm2.running_mean, blocks1.4.norm2.running_var, blocks1.4.mlp.fc1.weight, blocks1.4.mlp.fc1.bias, blocks1.4.mlp.fc2.weight, blocks1.4.mlp.fc2.bias, norm1.weight, norm1.bias, blocks2.0.pos_embed.weight, blocks2.0.pos_embed.bias, blocks2.0.norm1.weight, blocks2.0.norm1.bias, blocks2.0.norm1.running_mean, blocks2.0.norm1.running_var, blocks2.0.conv1.weight, blocks2.0.conv1.bias, blocks2.0.conv2.weight, blocks2.0.conv2.bias, blocks2.0.attn.weight, blocks2.0.attn.bias, blocks2.0.norm2.weight, blocks2.0.norm2.bias, blocks2.0.norm2.running_mean, blocks2.0.norm2.running_var, blocks2.0.mlp.fc1.weight, blocks2.0.mlp.fc1.bias, blocks2.0.mlp.fc2.weight, blocks2.0.mlp.fc2.bias, blocks2.1.pos_embed.weight, blocks2.1.pos_embed.bias, blocks2.1.norm1.weight, 
blocks2.1.norm1.bias, blocks2.1.norm1.running_mean, blocks2.1.norm1.running_var, blocks2.1.conv1.weight, blocks2.1.conv1.bias, blocks2.1.conv2.weight, blocks2.1.conv2.bias, blocks2.1.attn.weight, blocks2.1.attn.bias, blocks2.1.norm2.weight, blocks2.1.norm2.bias, blocks2.1.norm2.running_mean, blocks2.1.norm2.running_var, blocks2.1.mlp.fc1.weight, blocks2.1.mlp.fc1.bias, blocks2.1.mlp.fc2.weight, blocks2.1.mlp.fc2.bias, blocks2.2.pos_embed.weight, blocks2.2.pos_embed.bias, blocks2.2.norm1.weight, blocks2.2.norm1.bias, blocks2.2.norm1.running_mean, blocks2.2.norm1.running_var, blocks2.2.conv1.weight, blocks2.2.conv1.bias, blocks2.2.conv2.weight, blocks2.2.conv2.bias, blocks2.2.attn.weight, blocks2.2.attn.bias, blocks2.2.norm2.weight, blocks2.2.norm2.bias, blocks2.2.norm2.running_mean, blocks2.2.norm2.running_var, blocks2.2.mlp.fc1.weight, blocks2.2.mlp.fc1.bias, blocks2.2.mlp.fc2.weight, blocks2.2.mlp.fc2.bias, blocks2.3.pos_embed.weight, blocks2.3.pos_embed.bias, blocks2.3.norm1.weight, blocks2.3.norm1.bias, blocks2.3.norm1.running_mean, blocks2.3.norm1.running_var, blocks2.3.conv1.weight, blocks2.3.conv1.bias, blocks2.3.conv2.weight, blocks2.3.conv2.bias, blocks2.3.attn.weight, blocks2.3.attn.bias, blocks2.3.norm2.weight, blocks2.3.norm2.bias, blocks2.3.norm2.running_mean, blocks2.3.norm2.running_var, blocks2.3.mlp.fc1.weight, blocks2.3.mlp.fc1.bias, blocks2.3.mlp.fc2.weight, blocks2.3.mlp.fc2.bias, blocks2.4.pos_embed.weight, blocks2.4.pos_embed.bias, blocks2.4.norm1.weight, blocks2.4.norm1.bias, blocks2.4.norm1.running_mean, blocks2.4.norm1.running_var, blocks2.4.conv1.weight, blocks2.4.conv1.bias, blocks2.4.conv2.weight, blocks2.4.conv2.bias, blocks2.4.attn.weight, blocks2.4.attn.bias, blocks2.4.norm2.weight, blocks2.4.norm2.bias, blocks2.4.norm2.running_mean, blocks2.4.norm2.running_var, blocks2.4.mlp.fc1.weight, blocks2.4.mlp.fc1.bias, blocks2.4.mlp.fc2.weight, blocks2.4.mlp.fc2.bias, blocks2.5.pos_embed.weight, blocks2.5.pos_embed.bias, blocks2.5.norm1.weight, blocks2.5.norm1.bias, blocks2.5.norm1.running_mean, blocks2.5.norm1.running_var, blocks2.5.conv1.weight, blocks2.5.conv1.bias, blocks2.5.conv2.weight, blocks2.5.conv2.bias, blocks2.5.attn.weight, blocks2.5.attn.bias, blocks2.5.norm2.weight, blocks2.5.norm2.bias, blocks2.5.norm2.running_mean, blocks2.5.norm2.running_var, blocks2.5.mlp.fc1.weight, blocks2.5.mlp.fc1.bias, blocks2.5.mlp.fc2.weight, blocks2.5.mlp.fc2.bias, blocks2.6.pos_embed.weight, blocks2.6.pos_embed.bias, blocks2.6.norm1.weight, blocks2.6.norm1.bias, blocks2.6.norm1.running_mean, blocks2.6.norm1.running_var, blocks2.6.conv1.weight, blocks2.6.conv1.bias, blocks2.6.conv2.weight, blocks2.6.conv2.bias, blocks2.6.attn.weight, blocks2.6.attn.bias, blocks2.6.norm2.weight, blocks2.6.norm2.bias, blocks2.6.norm2.running_mean, blocks2.6.norm2.running_var, blocks2.6.mlp.fc1.weight, blocks2.6.mlp.fc1.bias, blocks2.6.mlp.fc2.weight, blocks2.6.mlp.fc2.bias, blocks2.7.pos_embed.weight, blocks2.7.pos_embed.bias, blocks2.7.norm1.weight, blocks2.7.norm1.bias, blocks2.7.norm1.running_mean, blocks2.7.norm1.running_var, blocks2.7.conv1.weight, blocks2.7.conv1.bias, blocks2.7.conv2.weight, blocks2.7.conv2.bias, blocks2.7.attn.weight, blocks2.7.attn.bias, blocks2.7.norm2.weight, blocks2.7.norm2.bias, blocks2.7.norm2.running_mean, blocks2.7.norm2.running_var, blocks2.7.mlp.fc1.weight, blocks2.7.mlp.fc1.bias, blocks2.7.mlp.fc2.weight, blocks2.7.mlp.fc2.bias, norm2.weight, norm2.bias, blocks3.0.pos_embed.weight, blocks3.0.pos_embed.bias, blocks3.0.norm1.weight, blocks3.0.norm1.bias, 
blocks3.0.attn.qkv.weight, blocks3.0.attn.qkv.bias, blocks3.0.attn.proj.weight, blocks3.0.attn.proj.bias, blocks3.0.norm2.weight, blocks3.0.norm2.bias, blocks3.0.mlp.fc1.weight, blocks3.0.mlp.fc1.bias, blocks3.0.mlp.fc2.weight, blocks3.0.mlp.fc2.bias, blocks3.1.pos_embed.weight, blocks3.1.pos_embed.bias, blocks3.1.norm1.weight, blocks3.1.norm1.bias, blocks3.1.attn.qkv.weight, blocks3.1.attn.qkv.bias, blocks3.1.attn.proj.weight, blocks3.1.attn.proj.bias, blocks3.1.norm2.weight, blocks3.1.norm2.bias, blocks3.1.mlp.fc1.weight, blocks3.1.mlp.fc1.bias, blocks3.1.mlp.fc2.weight, blocks3.1.mlp.fc2.bias, blocks3.2.pos_embed.weight, blocks3.2.pos_embed.bias, blocks3.2.norm1.weight, blocks3.2.norm1.bias, blocks3.2.attn.qkv.weight, blocks3.2.attn.qkv.bias, blocks3.2.attn.proj.weight, blocks3.2.attn.proj.bias, blocks3.2.norm2.weight, blocks3.2.norm2.bias, blocks3.2.mlp.fc1.weight, blocks3.2.mlp.fc1.bias, blocks3.2.mlp.fc2.weight, blocks3.2.mlp.fc2.bias, blocks3.3.pos_embed.weight, blocks3.3.pos_embed.bias, blocks3.3.norm1.weight, blocks3.3.norm1.bias, blocks3.3.attn.qkv.weight, blocks3.3.attn.qkv.bias, blocks3.3.attn.proj.weight, blocks3.3.attn.proj.bias, blocks3.3.norm2.weight, blocks3.3.norm2.bias, blocks3.3.mlp.fc1.weight, blocks3.3.mlp.fc1.bias, blocks3.3.mlp.fc2.weight, blocks3.3.mlp.fc2.bias, blocks3.4.pos_embed.weight, blocks3.4.pos_embed.bias, blocks3.4.norm1.weight, blocks3.4.norm1.bias, blocks3.4.attn.qkv.weight, blocks3.4.attn.qkv.bias, blocks3.4.attn.proj.weight, blocks3.4.attn.proj.bias, blocks3.4.norm2.weight, blocks3.4.norm2.bias, blocks3.4.mlp.fc1.weight, blocks3.4.mlp.fc1.bias, blocks3.4.mlp.fc2.weight, blocks3.4.mlp.fc2.bias, blocks3.5.pos_embed.weight, blocks3.5.pos_embed.bias, blocks3.5.norm1.weight, blocks3.5.norm1.bias, blocks3.5.attn.qkv.weight, blocks3.5.attn.qkv.bias, blocks3.5.attn.proj.weight, blocks3.5.attn.proj.bias, blocks3.5.norm2.weight, blocks3.5.norm2.bias, blocks3.5.mlp.fc1.weight, blocks3.5.mlp.fc1.bias, blocks3.5.mlp.fc2.weight, blocks3.5.mlp.fc2.bias, blocks3.6.pos_embed.weight, blocks3.6.pos_embed.bias, blocks3.6.norm1.weight, blocks3.6.norm1.bias, blocks3.6.attn.qkv.weight, blocks3.6.attn.qkv.bias, blocks3.6.attn.proj.weight, blocks3.6.attn.proj.bias, blocks3.6.norm2.weight, blocks3.6.norm2.bias, blocks3.6.mlp.fc1.weight, blocks3.6.mlp.fc1.bias, blocks3.6.mlp.fc2.weight, blocks3.6.mlp.fc2.bias, blocks3.7.pos_embed.weight, blocks3.7.pos_embed.bias, blocks3.7.norm1.weight, blocks3.7.norm1.bias, blocks3.7.attn.qkv.weight, blocks3.7.attn.qkv.bias, blocks3.7.attn.proj.weight, blocks3.7.attn.proj.bias, blocks3.7.norm2.weight, blocks3.7.norm2.bias, blocks3.7.mlp.fc1.weight, blocks3.7.mlp.fc1.bias, blocks3.7.mlp.fc2.weight, blocks3.7.mlp.fc2.bias, blocks3.8.pos_embed.weight, blocks3.8.pos_embed.bias, blocks3.8.norm1.weight, blocks3.8.norm1.bias, blocks3.8.attn.qkv.weight, blocks3.8.attn.qkv.bias, blocks3.8.attn.proj.weight, blocks3.8.attn.proj.bias, blocks3.8.norm2.weight, blocks3.8.norm2.bias, blocks3.8.mlp.fc1.weight, blocks3.8.mlp.fc1.bias, blocks3.8.mlp.fc2.weight, blocks3.8.mlp.fc2.bias, blocks3.9.pos_embed.weight, blocks3.9.pos_embed.bias, blocks3.9.norm1.weight, blocks3.9.norm1.bias, blocks3.9.attn.qkv.weight, blocks3.9.attn.qkv.bias, blocks3.9.attn.proj.weight, blocks3.9.attn.proj.bias, blocks3.9.norm2.weight, blocks3.9.norm2.bias, blocks3.9.mlp.fc1.weight, blocks3.9.mlp.fc1.bias, blocks3.9.mlp.fc2.weight, blocks3.9.mlp.fc2.bias, blocks3.10.pos_embed.weight, blocks3.10.pos_embed.bias, blocks3.10.norm1.weight, blocks3.10.norm1.bias, blocks3.10.attn.qkv.weight, 
blocks3.10.attn.qkv.bias, blocks3.10.attn.proj.weight, blocks3.10.attn.proj.bias, blocks3.10.norm2.weight, blocks3.10.norm2.bias, blocks3.10.mlp.fc1.weight, blocks3.10.mlp.fc1.bias, blocks3.10.mlp.fc2.weight, blocks3.10.mlp.fc2.bias, blocks3.11.pos_embed.weight, blocks3.11.pos_embed.bias, blocks3.11.norm1.weight, blocks3.11.norm1.bias, blocks3.11.attn.qkv.weight, blocks3.11.attn.qkv.bias, blocks3.11.attn.proj.weight, blocks3.11.attn.proj.bias, blocks3.11.norm2.weight, blocks3.11.norm2.bias, blocks3.11.mlp.fc1.weight, blocks3.11.mlp.fc1.bias, blocks3.11.mlp.fc2.weight, blocks3.11.mlp.fc2.bias, blocks3.12.pos_embed.weight, blocks3.12.pos_embed.bias, blocks3.12.norm1.weight, blocks3.12.norm1.bias, blocks3.12.attn.qkv.weight, blocks3.12.attn.qkv.bias, blocks3.12.attn.proj.weight, blocks3.12.attn.proj.bias, blocks3.12.norm2.weight, blocks3.12.norm2.bias, blocks3.12.mlp.fc1.weight, blocks3.12.mlp.fc1.bias, blocks3.12.mlp.fc2.weight, blocks3.12.mlp.fc2.bias, blocks3.13.pos_embed.weight, blocks3.13.pos_embed.bias, blocks3.13.norm1.weight, blocks3.13.norm1.bias, blocks3.13.attn.qkv.weight, blocks3.13.attn.qkv.bias, blocks3.13.attn.proj.weight, blocks3.13.attn.proj.bias, blocks3.13.norm2.weight, blocks3.13.norm2.bias, blocks3.13.mlp.fc1.weight, blocks3.13.mlp.fc1.bias, blocks3.13.mlp.fc2.weight, blocks3.13.mlp.fc2.bias, blocks3.14.pos_embed.weight, blocks3.14.pos_embed.bias, blocks3.14.norm1.weight, blocks3.14.norm1.bias, blocks3.14.attn.qkv.weight, blocks3.14.attn.qkv.bias, blocks3.14.attn.proj.weight, blocks3.14.attn.proj.bias, blocks3.14.norm2.weight, blocks3.14.norm2.bias, blocks3.14.mlp.fc1.weight, blocks3.14.mlp.fc1.bias, blocks3.14.mlp.fc2.weight, blocks3.14.mlp.fc2.bias, blocks3.15.pos_embed.weight, blocks3.15.pos_embed.bias, blocks3.15.norm1.weight, blocks3.15.norm1.bias, blocks3.15.attn.qkv.weight, blocks3.15.attn.qkv.bias, blocks3.15.attn.proj.weight, blocks3.15.attn.proj.bias, blocks3.15.norm2.weight, blocks3.15.norm2.bias, blocks3.15.mlp.fc1.weight, blocks3.15.mlp.fc1.bias, blocks3.15.mlp.fc2.weight, blocks3.15.mlp.fc2.bias, blocks3.16.pos_embed.weight, blocks3.16.pos_embed.bias, blocks3.16.norm1.weight, blocks3.16.norm1.bias, blocks3.16.attn.qkv.weight, blocks3.16.attn.qkv.bias, blocks3.16.attn.proj.weight, blocks3.16.attn.proj.bias, blocks3.16.norm2.weight, blocks3.16.norm2.bias, blocks3.16.mlp.fc1.weight, blocks3.16.mlp.fc1.bias, blocks3.16.mlp.fc2.weight, blocks3.16.mlp.fc2.bias, blocks3.17.pos_embed.weight, blocks3.17.pos_embed.bias, blocks3.17.norm1.weight, blocks3.17.norm1.bias, blocks3.17.attn.qkv.weight, blocks3.17.attn.qkv.bias, blocks3.17.attn.proj.weight, blocks3.17.attn.proj.bias, blocks3.17.norm2.weight, blocks3.17.norm2.bias, blocks3.17.mlp.fc1.weight, blocks3.17.mlp.fc1.bias, blocks3.17.mlp.fc2.weight, blocks3.17.mlp.fc2.bias, blocks3.18.pos_embed.weight, blocks3.18.pos_embed.bias, blocks3.18.norm1.weight, blocks3.18.norm1.bias, blocks3.18.attn.qkv.weight, blocks3.18.attn.qkv.bias, blocks3.18.attn.proj.weight, blocks3.18.attn.proj.bias, blocks3.18.norm2.weight, blocks3.18.norm2.bias, blocks3.18.mlp.fc1.weight, blocks3.18.mlp.fc1.bias, blocks3.18.mlp.fc2.weight, blocks3.18.mlp.fc2.bias, blocks3.19.pos_embed.weight, blocks3.19.pos_embed.bias, blocks3.19.norm1.weight, blocks3.19.norm1.bias, blocks3.19.attn.qkv.weight, blocks3.19.attn.qkv.bias, blocks3.19.attn.proj.weight, blocks3.19.attn.proj.bias, blocks3.19.norm2.weight, blocks3.19.norm2.bias, blocks3.19.mlp.fc1.weight, blocks3.19.mlp.fc1.bias, blocks3.19.mlp.fc2.weight, blocks3.19.mlp.fc2.bias, norm3.weight, norm3.bias, 
blocks4.0.pos_embed.weight, blocks4.0.pos_embed.bias, blocks4.0.norm1.weight, blocks4.0.norm1.bias, blocks4.0.attn.qkv.weight, blocks4.0.attn.qkv.bias, blocks4.0.attn.proj.weight, blocks4.0.attn.proj.bias, blocks4.0.norm2.weight, blocks4.0.norm2.bias, blocks4.0.mlp.fc1.weight, blocks4.0.mlp.fc1.bias, blocks4.0.mlp.fc2.weight, blocks4.0.mlp.fc2.bias, blocks4.1.pos_embed.weight, blocks4.1.pos_embed.bias, blocks4.1.norm1.weight, blocks4.1.norm1.bias, blocks4.1.attn.qkv.weight, blocks4.1.attn.qkv.bias, blocks4.1.attn.proj.weight, blocks4.1.attn.proj.bias, blocks4.1.norm2.weight, blocks4.1.norm2.bias, blocks4.1.mlp.fc1.weight, blocks4.1.mlp.fc1.bias, blocks4.1.mlp.fc2.weight, blocks4.1.mlp.fc2.bias, blocks4.2.pos_embed.weight, blocks4.2.pos_embed.bias, blocks4.2.norm1.weight, blocks4.2.norm1.bias, blocks4.2.attn.qkv.weight, blocks4.2.attn.qkv.bias, blocks4.2.attn.proj.weight, blocks4.2.attn.proj.bias, blocks4.2.norm2.weight, blocks4.2.norm2.bias, blocks4.2.mlp.fc1.weight, blocks4.2.mlp.fc1.bias, blocks4.2.mlp.fc2.weight, blocks4.2.mlp.fc2.bias, blocks4.3.pos_embed.weight, blocks4.3.pos_embed.bias, blocks4.3.norm1.weight, blocks4.3.norm1.bias, blocks4.3.attn.qkv.weight, blocks4.3.attn.qkv.bias, blocks4.3.attn.proj.weight, blocks4.3.attn.proj.bias, blocks4.3.norm2.weight, blocks4.3.norm2.bias, blocks4.3.mlp.fc1.weight, blocks4.3.mlp.fc1.bias, blocks4.3.mlp.fc2.weight, blocks4.3.mlp.fc2.bias, blocks4.4.pos_embed.weight, blocks4.4.pos_embed.bias, blocks4.4.norm1.weight, blocks4.4.norm1.bias, blocks4.4.attn.qkv.weight, blocks4.4.attn.qkv.bias, blocks4.4.attn.proj.weight, blocks4.4.attn.proj.bias, blocks4.4.norm2.weight, blocks4.4.norm2.bias, blocks4.4.mlp.fc1.weight, blocks4.4.mlp.fc1.bias, blocks4.4.mlp.fc2.weight, blocks4.4.mlp.fc2.bias, blocks4.5.pos_embed.weight, blocks4.5.pos_embed.bias, blocks4.5.norm1.weight, blocks4.5.norm1.bias, blocks4.5.attn.qkv.weight, blocks4.5.attn.qkv.bias, blocks4.5.attn.proj.weight, blocks4.5.attn.proj.bias, blocks4.5.norm2.weight, blocks4.5.norm2.bias, blocks4.5.mlp.fc1.weight, blocks4.5.mlp.fc1.bias, blocks4.5.mlp.fc2.weight, blocks4.5.mlp.fc2.bias, blocks4.6.pos_embed.weight, blocks4.6.pos_embed.bias, blocks4.6.norm1.weight, blocks4.6.norm1.bias, blocks4.6.attn.qkv.weight, blocks4.6.attn.qkv.bias, blocks4.6.attn.proj.weight, blocks4.6.attn.proj.bias, blocks4.6.norm2.weight, blocks4.6.norm2.bias, blocks4.6.mlp.fc1.weight, blocks4.6.mlp.fc1.bias, blocks4.6.mlp.fc2.weight, blocks4.6.mlp.fc2.bias, norm4.weight, norm4.bias

07/24 14:42:06 - mmpose - INFO - Load pretrained model from /root/mmpose/projects/uniformer/pretrained/uniformer_base_in1k.pth
07/24 14:42:10 - mmengine - INFO - Distributed training is not used, all SyncBatchNorm (SyncBN) layers in the model will be automatically reverted to BatchNormXd layers if they are used.
07/24 14:42:10 - mmengine - INFO - Hooks will be executed in the following order:
before_run:
(VERY_HIGH   ) RuntimeInfoHook                    
(BELOW_NORMAL) LoggerHook                         
 -------------------- 
before_train:
(VERY_HIGH   ) RuntimeInfoHook                    
(NORMAL      ) IterTimerHook                      
(VERY_LOW    ) CheckpointHook                     
 -------------------- 
before_train_epoch:
(VERY_HIGH   ) RuntimeInfoHook                    
(NORMAL      ) IterTimerHook                      
(NORMAL      ) DistSamplerSeedHook                
 -------------------- 
before_train_iter:
(VERY_HIGH   ) RuntimeInfoHook                    
(NORMAL      ) IterTimerHook                      
 -------------------- 
after_train_iter:
(VERY_HIGH   ) RuntimeInfoHook                    
(NORMAL      ) IterTimerHook                      
(BELOW_NORMAL) LoggerHook                         
(LOW         ) ParamSchedulerHook                 
(VERY_LOW    ) CheckpointHook                     
 -------------------- 
after_train_epoch:
(NORMAL      ) IterTimerHook                      
(NORMAL      ) SyncBuffersHook                    
(LOW         ) ParamSchedulerHook                 
(VERY_LOW    ) CheckpointHook                     
 -------------------- 
before_val_epoch:
(NORMAL      ) IterTimerHook                      
(NORMAL      ) SyncBuffersHook                    
 -------------------- 
before_val_iter:
(NORMAL      ) IterTimerHook                      
 -------------------- 
after_val_iter:
(NORMAL      ) IterTimerHook                      
(NORMAL      ) PoseVisualizationHook              
(BELOW_NORMAL) LoggerHook                         
 -------------------- 
after_val_epoch:
(VERY_HIGH   ) RuntimeInfoHook                    
(NORMAL      ) IterTimerHook                      
(BELOW_NORMAL) LoggerHook                         
(LOW         ) ParamSchedulerHook                 
(VERY_LOW    ) CheckpointHook                     
 -------------------- 
after_train:
(VERY_LOW    ) CheckpointHook                     
 -------------------- 
before_test_epoch:
(NORMAL      ) IterTimerHook                      
 -------------------- 
before_test_iter:
(NORMAL      ) IterTimerHook                      
 -------------------- 
after_test_iter:
(NORMAL      ) IterTimerHook                      
(NORMAL      ) PoseVisualizationHook              
(BELOW_NORMAL) LoggerHook                         
 -------------------- 
after_test_epoch:
(VERY_HIGH   ) RuntimeInfoHook                    
(NORMAL      ) IterTimerHook                      
(BELOW_NORMAL) LoggerHook                         
 -------------------- 
after_run:
(BELOW_NORMAL) LoggerHook                         
 -------------------- 
loading annotations into memory...
Done (t=0.29s)
creating index...
index created!
loading annotations into memory...
Done (t=0.19s)
creating index...
index created!
Loads checkpoint by http backend from path: https://download.openmmlab.com/mmpose/v1/projects/uniformer/top_down_384x288_global_base-c650da38_20230724.pth
Downloading: "https://download.openmmlab.com/mmpose/v1/projects/uniformer/top_down_384x288_global_base-c650da38_20230724.pth" to /root/.cache/torch/hub/checkpoints/top_down_384x288_global_base-c650da38_20230724.pth
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 204M/204M [00:09<00:00, 21.6MB/s]
07/24 14:42:29 - mmengine - INFO - Load checkpoint from https://download.openmmlab.com/mmpose/v1/projects/uniformer/top_down_384x288_global_base-c650da38_20230724.pth
07/24 14:44:48 - mmengine - INFO - Epoch(test) [ 50/407]    eta: 0:16:35  time: 2.789852  data_time: 0.260804  memory: 6649  
07/24 14:47:06 - mmengine - INFO - Epoch(test) [100/407]    eta: 0:14:11  time: 2.755543  data_time: 0.209653  memory: 6649  
07/24 14:49:24 - mmengine - INFO - Epoch(test) [150/407]    eta: 0:11:51  time: 2.760203  data_time: 0.212239  memory: 6649  
07/24 14:51:42 - mmengine - INFO - Epoch(test) [200/407]    eta: 0:09:32  time: 2.752695  data_time: 0.202930  memory: 6649  
07/24 14:53:59 - mmengine - INFO - Epoch(test) [250/407]    eta: 0:07:13  time: 2.757030  data_time: 0.213792  memory: 6649  
07/24 14:56:18 - mmengine - INFO - Epoch(test) [300/407]    eta: 0:04:55  time: 2.763714  data_time: 0.216035  memory: 6649  
07/24 14:58:35 - mmengine - INFO - Epoch(test) [350/407]    eta: 0:02:37  time: 2.754733  data_time: 0.205812  memory: 6649  
07/24 15:00:53 - mmengine - INFO - Epoch(test) [400/407]    eta: 0:00:19  time: 2.756398  data_time: 0.217246  memory: 6649  
07/24 15:01:45 - mmengine - INFO - Evaluating CocoMetric...
Loading and preparing results...
DONE (t=3.20s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type *keypoints*
DONE (t=9.61s).
Accumulating evaluation results...
DONE (t=0.31s).
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets= 20 ] =  0.767
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets= 20 ] =  0.908
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets= 20 ] =  0.841
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] =  0.729
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] =  0.837
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 20 ] =  0.819
 Average Recall     (AR) @[ IoU=0.50      | area=   all | maxDets= 20 ] =  0.946
 Average Recall     (AR) @[ IoU=0.75      | area=   all | maxDets= 20 ] =  0.883
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] =  0.777
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] =  0.880
07/24 15:01:58 - mmengine - INFO - Epoch(test) [407/407]    coco/AP: 0.767028  coco/AP .5: 0.907571  coco/AP .75: 0.840608  coco/AP (M): 0.729437  coco/AP (L): 0.836965  coco/AR: 0.818640  coco/AR .5: 0.946316  coco/AR .75: 0.882714  coco/AR (M): 0.776618  coco/AR (L): 0.880119  data_time: 0.216960  time: 2.759213

For comparison, the accuracy listed in the official UniFormer repo (reported as percentages, so AP 76.7 corresponds to coco/AP 0.767 above) is:

| Backbone | Input Size | AP | AP50 | AP75 | ARM | ARL | AR | FLOPs |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| UniFormer-B | 384x288 | 76.7 | 90.8 | 84.0 | 69.3 | 79.7 | 81.4 | 14.8G |
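A note on the "unexpected key in source state_dict: model" warning that shows up in each run above: it suggests the ImageNet-pretrained checkpoint keeps all of its weights nested under a top-level `model` key, so the loader reports every flat backbone key as missing even though the run proceeds normally. Below is a minimal sketch of flattening such a checkpoint up front; the nested `model` key is an assumption inferred from the warning, and the `_flat` filename is just illustrative:

```python
import torch

# Assumption: the ImageNet checkpoint stores its weights under ckpt['model'],
# which is what triggers the "unexpected key in source state_dict: model"
# warning in the logs above.
ckpt = torch.load('pretrained/uniformer_base_in1k.pth', map_location='cpu')
state_dict = ckpt.get('model', ckpt)  # unwrap the nested dict if present
torch.save(state_dict, 'pretrained/uniformer_base_in1k_flat.pth')
```

Pointing `pretrained` at the flattened file should silence the warning without changing the numbers.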

@xin-li-67
Contributor Author

Testing result on projects/uniformer/configs/td-hm_uniformer-s-8xb128-210e_coco-384x288.py:

07/24 15:12:03 - mmpose - INFO - Use torch.utils.checkpoint: False
07/24 15:12:03 - mmpose - INFO - torch.utils.checkpoint number: (0, 0, 0, 0)
07/24 15:12:03 - mmpose - INFO - Use global window for all blocks in stage3
07/24 15:12:04 - mmpose - INFO - Loads checkpoint by local backend from path: /root/mmpose/projects/uniformer/pretrained/uniformer_small_in1k.pth
07/24 15:12:04 - mmpose - WARNING - The model and loaded state dict do not match exactly

unexpected key in source state_dict: model

missing keys in source state_dict: patch_embed1.norm.weight, patch_embed1.norm.bias, patch_embed1.proj.weight, patch_embed1.proj.bias, patch_embed2.norm.weight, patch_embed2.norm.bias, patch_embed2.proj.weight, patch_embed2.proj.bias, patch_embed3.norm.weight, patch_embed3.norm.bias, patch_embed3.proj.weight, patch_embed3.proj.bias, patch_embed4.norm.weight, patch_embed4.norm.bias, patch_embed4.proj.weight, patch_embed4.proj.bias, blocks1.0.pos_embed.weight, blocks1.0.pos_embed.bias, blocks1.0.norm1.weight, blocks1.0.norm1.bias, blocks1.0.norm1.running_mean, blocks1.0.norm1.running_var, blocks1.0.conv1.weight, blocks1.0.conv1.bias, blocks1.0.conv2.weight, blocks1.0.conv2.bias, blocks1.0.attn.weight, blocks1.0.attn.bias, blocks1.0.norm2.weight, blocks1.0.norm2.bias, blocks1.0.norm2.running_mean, blocks1.0.norm2.running_var, blocks1.0.mlp.fc1.weight, blocks1.0.mlp.fc1.bias, blocks1.0.mlp.fc2.weight, blocks1.0.mlp.fc2.bias, blocks1.1.pos_embed.weight, blocks1.1.pos_embed.bias, blocks1.1.norm1.weight, blocks1.1.norm1.bias, blocks1.1.norm1.running_mean, blocks1.1.norm1.running_var, blocks1.1.conv1.weight, blocks1.1.conv1.bias, blocks1.1.conv2.weight, blocks1.1.conv2.bias, blocks1.1.attn.weight, blocks1.1.attn.bias, blocks1.1.norm2.weight, blocks1.1.norm2.bias, blocks1.1.norm2.running_mean, blocks1.1.norm2.running_var, blocks1.1.mlp.fc1.weight, blocks1.1.mlp.fc1.bias, blocks1.1.mlp.fc2.weight, blocks1.1.mlp.fc2.bias, blocks1.2.pos_embed.weight, blocks1.2.pos_embed.bias, blocks1.2.norm1.weight, blocks1.2.norm1.bias, blocks1.2.norm1.running_mean, blocks1.2.norm1.running_var, blocks1.2.conv1.weight, blocks1.2.conv1.bias, blocks1.2.conv2.weight, blocks1.2.conv2.bias, blocks1.2.attn.weight, blocks1.2.attn.bias, blocks1.2.norm2.weight, blocks1.2.norm2.bias, blocks1.2.norm2.running_mean, blocks1.2.norm2.running_var, blocks1.2.mlp.fc1.weight, blocks1.2.mlp.fc1.bias, blocks1.2.mlp.fc2.weight, blocks1.2.mlp.fc2.bias, norm1.weight, norm1.bias, blocks2.0.pos_embed.weight, blocks2.0.pos_embed.bias, blocks2.0.norm1.weight, blocks2.0.norm1.bias, blocks2.0.norm1.running_mean, blocks2.0.norm1.running_var, blocks2.0.conv1.weight, blocks2.0.conv1.bias, blocks2.0.conv2.weight, blocks2.0.conv2.bias, blocks2.0.attn.weight, blocks2.0.attn.bias, blocks2.0.norm2.weight, blocks2.0.norm2.bias, blocks2.0.norm2.running_mean, blocks2.0.norm2.running_var, blocks2.0.mlp.fc1.weight, blocks2.0.mlp.fc1.bias, blocks2.0.mlp.fc2.weight, blocks2.0.mlp.fc2.bias, blocks2.1.pos_embed.weight, blocks2.1.pos_embed.bias, blocks2.1.norm1.weight, blocks2.1.norm1.bias, blocks2.1.norm1.running_mean, blocks2.1.norm1.running_var, blocks2.1.conv1.weight, blocks2.1.conv1.bias, blocks2.1.conv2.weight, blocks2.1.conv2.bias, blocks2.1.attn.weight, blocks2.1.attn.bias, blocks2.1.norm2.weight, blocks2.1.norm2.bias, blocks2.1.norm2.running_mean, blocks2.1.norm2.running_var, blocks2.1.mlp.fc1.weight, blocks2.1.mlp.fc1.bias, blocks2.1.mlp.fc2.weight, blocks2.1.mlp.fc2.bias, blocks2.2.pos_embed.weight, blocks2.2.pos_embed.bias, blocks2.2.norm1.weight, blocks2.2.norm1.bias, blocks2.2.norm1.running_mean, blocks2.2.norm1.running_var, blocks2.2.conv1.weight, blocks2.2.conv1.bias, blocks2.2.conv2.weight, blocks2.2.conv2.bias, blocks2.2.attn.weight, blocks2.2.attn.bias, blocks2.2.norm2.weight, blocks2.2.norm2.bias, blocks2.2.norm2.running_mean, blocks2.2.norm2.running_var, blocks2.2.mlp.fc1.weight, blocks2.2.mlp.fc1.bias, blocks2.2.mlp.fc2.weight, blocks2.2.mlp.fc2.bias, blocks2.3.pos_embed.weight, blocks2.3.pos_embed.bias, blocks2.3.norm1.weight, 
blocks2.3.norm1.bias, blocks2.3.norm1.running_mean, blocks2.3.norm1.running_var, blocks2.3.conv1.weight, blocks2.3.conv1.bias, blocks2.3.conv2.weight, blocks2.3.conv2.bias, blocks2.3.attn.weight, blocks2.3.attn.bias, blocks2.3.norm2.weight, blocks2.3.norm2.bias, blocks2.3.norm2.running_mean, blocks2.3.norm2.running_var, blocks2.3.mlp.fc1.weight, blocks2.3.mlp.fc1.bias, blocks2.3.mlp.fc2.weight, blocks2.3.mlp.fc2.bias, norm2.weight, norm2.bias, blocks3.0.pos_embed.weight, blocks3.0.pos_embed.bias, blocks3.0.norm1.weight, blocks3.0.norm1.bias, blocks3.0.attn.qkv.weight, blocks3.0.attn.qkv.bias, blocks3.0.attn.proj.weight, blocks3.0.attn.proj.bias, blocks3.0.norm2.weight, blocks3.0.norm2.bias, blocks3.0.mlp.fc1.weight, blocks3.0.mlp.fc1.bias, blocks3.0.mlp.fc2.weight, blocks3.0.mlp.fc2.bias, blocks3.1.pos_embed.weight, blocks3.1.pos_embed.bias, blocks3.1.norm1.weight, blocks3.1.norm1.bias, blocks3.1.attn.qkv.weight, blocks3.1.attn.qkv.bias, blocks3.1.attn.proj.weight, blocks3.1.attn.proj.bias, blocks3.1.norm2.weight, blocks3.1.norm2.bias, blocks3.1.mlp.fc1.weight, blocks3.1.mlp.fc1.bias, blocks3.1.mlp.fc2.weight, blocks3.1.mlp.fc2.bias, blocks3.2.pos_embed.weight, blocks3.2.pos_embed.bias, blocks3.2.norm1.weight, blocks3.2.norm1.bias, blocks3.2.attn.qkv.weight, blocks3.2.attn.qkv.bias, blocks3.2.attn.proj.weight, blocks3.2.attn.proj.bias, blocks3.2.norm2.weight, blocks3.2.norm2.bias, blocks3.2.mlp.fc1.weight, blocks3.2.mlp.fc1.bias, blocks3.2.mlp.fc2.weight, blocks3.2.mlp.fc2.bias, blocks3.3.pos_embed.weight, blocks3.3.pos_embed.bias, blocks3.3.norm1.weight, blocks3.3.norm1.bias, blocks3.3.attn.qkv.weight, blocks3.3.attn.qkv.bias, blocks3.3.attn.proj.weight, blocks3.3.attn.proj.bias, blocks3.3.norm2.weight, blocks3.3.norm2.bias, blocks3.3.mlp.fc1.weight, blocks3.3.mlp.fc1.bias, blocks3.3.mlp.fc2.weight, blocks3.3.mlp.fc2.bias, blocks3.4.pos_embed.weight, blocks3.4.pos_embed.bias, blocks3.4.norm1.weight, blocks3.4.norm1.bias, blocks3.4.attn.qkv.weight, blocks3.4.attn.qkv.bias, blocks3.4.attn.proj.weight, blocks3.4.attn.proj.bias, blocks3.4.norm2.weight, blocks3.4.norm2.bias, blocks3.4.mlp.fc1.weight, blocks3.4.mlp.fc1.bias, blocks3.4.mlp.fc2.weight, blocks3.4.mlp.fc2.bias, blocks3.5.pos_embed.weight, blocks3.5.pos_embed.bias, blocks3.5.norm1.weight, blocks3.5.norm1.bias, blocks3.5.attn.qkv.weight, blocks3.5.attn.qkv.bias, blocks3.5.attn.proj.weight, blocks3.5.attn.proj.bias, blocks3.5.norm2.weight, blocks3.5.norm2.bias, blocks3.5.mlp.fc1.weight, blocks3.5.mlp.fc1.bias, blocks3.5.mlp.fc2.weight, blocks3.5.mlp.fc2.bias, blocks3.6.pos_embed.weight, blocks3.6.pos_embed.bias, blocks3.6.norm1.weight, blocks3.6.norm1.bias, blocks3.6.attn.qkv.weight, blocks3.6.attn.qkv.bias, blocks3.6.attn.proj.weight, blocks3.6.attn.proj.bias, blocks3.6.norm2.weight, blocks3.6.norm2.bias, blocks3.6.mlp.fc1.weight, blocks3.6.mlp.fc1.bias, blocks3.6.mlp.fc2.weight, blocks3.6.mlp.fc2.bias, blocks3.7.pos_embed.weight, blocks3.7.pos_embed.bias, blocks3.7.norm1.weight, blocks3.7.norm1.bias, blocks3.7.attn.qkv.weight, blocks3.7.attn.qkv.bias, blocks3.7.attn.proj.weight, blocks3.7.attn.proj.bias, blocks3.7.norm2.weight, blocks3.7.norm2.bias, blocks3.7.mlp.fc1.weight, blocks3.7.mlp.fc1.bias, blocks3.7.mlp.fc2.weight, blocks3.7.mlp.fc2.bias, norm3.weight, norm3.bias, blocks4.0.pos_embed.weight, blocks4.0.pos_embed.bias, blocks4.0.norm1.weight, blocks4.0.norm1.bias, blocks4.0.attn.qkv.weight, blocks4.0.attn.qkv.bias, blocks4.0.attn.proj.weight, blocks4.0.attn.proj.bias, blocks4.0.norm2.weight, blocks4.0.norm2.bias, 
blocks4.0.mlp.fc1.weight, blocks4.0.mlp.fc1.bias, blocks4.0.mlp.fc2.weight, blocks4.0.mlp.fc2.bias, blocks4.1.pos_embed.weight, blocks4.1.pos_embed.bias, blocks4.1.norm1.weight, blocks4.1.norm1.bias, blocks4.1.attn.qkv.weight, blocks4.1.attn.qkv.bias, blocks4.1.attn.proj.weight, blocks4.1.attn.proj.bias, blocks4.1.norm2.weight, blocks4.1.norm2.bias, blocks4.1.mlp.fc1.weight, blocks4.1.mlp.fc1.bias, blocks4.1.mlp.fc2.weight, blocks4.1.mlp.fc2.bias, blocks4.2.pos_embed.weight, blocks4.2.pos_embed.bias, blocks4.2.norm1.weight, blocks4.2.norm1.bias, blocks4.2.attn.qkv.weight, blocks4.2.attn.qkv.bias, blocks4.2.attn.proj.weight, blocks4.2.attn.proj.bias, blocks4.2.norm2.weight, blocks4.2.norm2.bias, blocks4.2.mlp.fc1.weight, blocks4.2.mlp.fc1.bias, blocks4.2.mlp.fc2.weight, blocks4.2.mlp.fc2.bias, norm4.weight, norm4.bias

07/24 15:12:04 - mmpose - INFO - Load pretrained model from /root/mmpose/projects/uniformer/pretrained/uniformer_small_in1k.pth
07/24 15:12:08 - mmengine - INFO - Distributed training is not used, all SyncBatchNorm (SyncBN) layers in the model will be automatically reverted to BatchNormXd layers if they are used.
07/24 15:12:08 - mmengine - INFO - Hooks will be executed in the following order:
[... hook execution order identical to the log above; omitted ...]
loading annotations into memory...
Done (t=0.29s)
creating index...
index created!
loading annotations into memory...
Done (t=0.19s)
creating index...
index created!
Loads checkpoint by http backend from path: https://download.openmmlab.com/mmpose/v1/projects/uniformer/top_down_384x288_global_small-7a613f78_20230724.pth
Downloading: "https://download.openmmlab.com/mmpose/v1/projects/uniformer/top_down_384x288_global_small-7a613f78_20230724.pth" to /root/.cache/torch/hub/checkpoints/top_down_384x288_global_small-7a613f78_20230724.pth
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 96.3M/96.3M [00:04<00:00, 21.2MB/s]
07/24 15:12:21 - mmengine - INFO - Load checkpoint from https://download.openmmlab.com/mmpose/v1/projects/uniformer/top_down_384x288_global_small-7a613f78_20230724.pth
07/24 15:13:45 - mmengine - INFO - Epoch(test) [ 50/407]    eta: 0:09:58  time: 1.675995  data_time: 0.252927  memory: 6540  
07/24 15:15:07 - mmengine - INFO - Epoch(test) [100/407]    eta: 0:08:28  time: 1.635582  data_time: 0.206682  memory: 6540  
07/24 15:16:29 - mmengine - INFO - Epoch(test) [150/407]    eta: 0:07:04  time: 1.646374  data_time: 0.215712  memory: 6540  
07/24 15:17:51 - mmengine - INFO - Epoch(test) [200/407]    eta: 0:05:41  time: 1.637084  data_time: 0.210125  memory: 6540  
07/24 15:19:13 - mmengine - INFO - Epoch(test) [250/407]    eta: 0:04:18  time: 1.642803  data_time: 0.216182  memory: 6540  
07/24 15:20:35 - mmengine - INFO - Epoch(test) [300/407]    eta: 0:02:56  time: 1.637716  data_time: 0.213193  memory: 6540  
07/24 15:21:57 - mmengine - INFO - Epoch(test) [350/407]    eta: 0:01:33  time: 1.640897  data_time: 0.214671  memory: 6540  
07/24 15:23:19 - mmengine - INFO - Epoch(test) [400/407]    eta: 0:00:11  time: 1.639230  data_time: 0.210880  memory: 6540  
07/24 15:24:03 - mmengine - INFO - Evaluating CocoMetric...
Loading and preparing results...
DONE (t=3.46s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type *keypoints*
DONE (t=9.41s).
Accumulating evaluation results...
DONE (t=0.31s).
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets= 20 ] =  0.759
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets= 20 ] =  0.906
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets= 20 ] =  0.830
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] =  0.722
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] =  0.830
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 20 ] =  0.810
 Average Recall     (AR) @[ IoU=0.50      | area=   all | maxDets= 20 ] =  0.944
 Average Recall     (AR) @[ IoU=0.75      | area=   all | maxDets= 20 ] =  0.873
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] =  0.768
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] =  0.873
07/24 15:24:16 - mmengine - INFO - Epoch(test) [407/407]    coco/AP: 0.758805  coco/AP .5: 0.906079  coco/AP .75: 0.829732  coco/AP (M): 0.721798  coco/AP (L): 0.829743  coco/AR: 0.810327  coco/AR .5: 0.944112  coco/AR .75: 0.873268  coco/AR (M): 0.768069  coco/AR (L): 0.872575  data_time: 0.217041  time: 1.642961

For comparison, the accuracy listed in the official UniFormer repo is:

| Backbone | Input Size | AP | AP50 | AP75 | ARM | ARL | AR | FLOPs |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| UniFormer-S | 384x288 | 75.9 | 90.6 | 83.4 | 68.6 | 79.0 | 81.4 | 11.1G |
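For anyone who wants to reproduce one of these evaluations, here is a minimal sketch using mmengine's Runner API; the config and checkpoint paths are copied from the log above, while `work_dirs/uniformer_test` is an illustrative output directory:

```python
from mmengine.config import Config
from mmengine.runner import Runner

# Config and checkpoint are the ones reported in the log above.
cfg = Config.fromfile(
    'projects/uniformer/configs/td-hm_uniformer-s-8xb128-210e_coco-384x288.py')
cfg.load_from = ('https://download.openmmlab.com/mmpose/v1/projects/uniformer/'
                 'top_down_384x288_global_small-7a613f78_20230724.pth')
cfg.work_dir = 'work_dirs/uniformer_test'  # illustrative output directory

runner = Runner.from_cfg(cfg)
runner.test()  # runs the test loop and prints the CocoMetric summary
```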

@Ben-Louis
Collaborator

Ben-Louis commented Jul 24, 2023

Hi, @xin-li-67, thank you for your effort and contribution. Could you please relocate the configs under configs/_base_/ to configs/? It is worth noting that the files in configs/_base_/ are typically shared by all model configs, while it is preferable to store model configs directly in configs/.
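For reference, a minimal sketch of the convention being described here (the file names are illustrative, not the exact ones in this PR): shared settings live in configs/_base_/, and each model config under configs/ pulls them in through the `_base_` variable:

```python
# configs/td-hm_uniformer-s-8xb128-210e_coco-384x288.py (illustrative skeleton)
_base_ = ['./_base_/default_runtime.py']  # shared runtime settings

# model-specific settings are then defined or overridden here
train_cfg = dict(max_epochs=210, val_interval=10)
```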

@xin-li-67
Contributor Author

> Hi, @xin-li-67, thank you for your effort and contribution. Could you please relocate the configs under configs/_base_/ to configs/? It is worth noting that the files in configs/_base_/ are typically shared by all model configs, while it is preferable to store model configs directly in configs/.

Got it! I have moved all the config files into the configs folder and updated the corresponding references. The README modifications will follow together in a later commit.

@xin-li-67
Contributor Author

Testing result on projects/uniformer/configs/_base_/td-hm_uniformer-b-8xb32-210e_coco-448x320.py:

07/24 15:27:05 - mmpose - INFO - Use torch.utils.checkpoint: False
07/24 15:27:05 - mmpose - INFO - torch.utils.checkpoint number: (0, 0, 0, 0)
07/24 15:27:05 - mmpose - INFO - Use global window for all blocks in stage3
07/24 15:27:06 - mmpose - INFO - Loads checkpoint by local backend from path: /root/mmpose/projects/uniformer/pretrained/uniformer_base_in1k.pth
07/24 15:27:06 - mmpose - WARNING - The model and loaded state dict do not match exactly

unexpected key in source state_dict: model

missing keys in source state_dict: [... identical to the missing-keys list in the first UniFormer-B log above ...]

07/24 15:27:06 - mmpose - INFO - Load pretrained model from /root/mmpose/projects/uniformer/pretrained/uniformer_base_in1k.pth
07/24 15:27:10 - mmengine - INFO - Distributed training is not used, all SyncBatchNorm (SyncBN) layers in the model will be automatically reverted to BatchNormXd layers if they are used.
07/24 15:27:10 - mmengine - INFO - Hooks will be executed in the following order:
[... hook execution order identical to the log above; omitted ...]
loading annotations into memory...
Done (t=0.30s)
creating index...
index created!
loading annotations into memory...
Done (t=0.18s)
creating index...
index created!
Loads checkpoint by http backend from path: https://download.openmmlab.com/mmpose/v1/projects/uniformer/top_down_448x320_global_base-a05c185f_20230724.pth
Downloading: "https://download.openmmlab.com/mmpose/v1/projects/uniformer/top_down_448x320_global_base-a05c185f_20230724.pth" to /root/.cache/torch/hub/checkpoints/top_down_448x320_global_base-a05c185f_20230724.pth
100%|██████████████████████████████████████████████████████████████████████████████| 204M/204M [00:11<00:00, 19.1MB/s]
07/24 15:27:30 - mmengine - INFO - Load checkpoint from https://download.openmmlab.com/mmpose/v1/projects/uniformer/top_down_448x320_global_base-a05c185f_20230724.pth
07/24 15:30:33 - mmengine - INFO - Epoch(test) [ 50/407]    eta: 0:21:45  time: 3.655887  data_time: 0.280876  memory: 8555  
07/24 15:33:34 - mmengine - INFO - Epoch(test) [100/407]    eta: 0:18:37  time: 3.622624  data_time: 0.227081  memory: 8555  
07/24 15:36:35 - mmengine - INFO - Epoch(test) [150/407]    eta: 0:15:33  time: 3.623958  data_time: 0.230486  memory: 8555  
07/24 15:39:36 - mmengine - INFO - Epoch(test) [200/407]    eta: 0:12:31  time: 3.621793  data_time: 0.223173  memory: 8555  
07/24 15:42:37 - mmengine - INFO - Epoch(test) [250/407]    eta: 0:09:29  time: 3.621299  data_time: 0.229660  memory: 8555  
07/24 15:45:39 - mmengine - INFO - Epoch(test) [300/407]    eta: 0:06:28  time: 3.631738  data_time: 0.235520  memory: 8555  
07/24 15:48:40 - mmengine - INFO - Epoch(test) [350/407]    eta: 0:03:26  time: 3.627835  data_time: 0.227529  memory: 8555  
07/24 15:51:42 - mmengine - INFO - Epoch(test) [400/407]    eta: 0:00:25  time: 3.627315  data_time: 0.237593  memory: 8555  
07/24 15:52:39 - mmengine - INFO - Evaluating CocoMetric...
Loading and preparing results...
DONE (t=3.18s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type *keypoints*
DONE (t=9.66s).
Accumulating evaluation results...
DONE (t=0.31s).
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets= 20 ] =  0.774
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets= 20 ] =  0.910
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets= 20 ] =  0.844
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] =  0.739
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] =  0.843
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 20 ] =  0.825
 Average Recall     (AR) @[ IoU=0.50      | area=   all | maxDets= 20 ] =  0.949
 Average Recall     (AR) @[ IoU=0.75      | area=   all | maxDets= 20 ] =  0.885
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] =  0.784
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] =  0.884
07/24 15:52:53 - mmengine - INFO - Epoch(test) [407/407]    coco/AP: 0.774310  coco/AP .5: 0.910345  coco/AP .75: 0.844331  coco/AP (M): 0.738556  coco/AP (L): 0.842938  coco/AR: 0.824748  coco/AR .5: 0.948992  coco/AR .75: 0.885076  coco/AR (M): 0.784239  coco/AR (L): 0.884244  data_time: 0.235981  time: 3.626385

For comparison, the accuracy listed in the official UniFormer repo is:

Backbone     Input Size  AP    AP50  AP75  AR(M)  AR(L)  AR    FLOPs
UniFormer-B  448x320     77.4  91.1  84.4  70.2   80.6   82.5  29.6G
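
As a quick sanity check (a minimal sketch, not part of the PR): the mmengine log reports COCO metrics as fractions while the official table uses percentage points, so the headline columns can be compared after scaling by 100; a tolerance of 0.1 points absorbs rounding in the published table.

# Sanity check: log metrics are fractions, the official table is in points.
measured = {'AP': 0.774310, 'AP50': 0.910345, 'AP75': 0.844331, 'AR': 0.824748}
official = {'AP': 77.4, 'AP50': 91.1, 'AP75': 84.4, 'AR': 82.5}
for name, frac in measured.items():
    assert abs(frac * 100 - official[name]) <= 0.1, name
print('UniFormer-B 448x320 matches the official numbers to within 0.1 points')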

@xin-li-67 (Contributor, Author) commented:

Testing result on projects/uniformer/configs/td-hm_uniformer-s-8xb64-210e_coco-448x320.py:

07/24 16:02:55 - mmpose - INFO - Use torch.utils.checkpoint: False
07/24 16:02:55 - mmpose - INFO - torch.utils.checkpoint number: (0, 0, 0, 0)
07/24 16:02:55 - mmpose - INFO - Use global window for all blocks in stage3
07/24 16:02:55 - mmpose - INFO - Loads checkpoint by local backend from path: /root/mmpose/projects/uniformer/pretrained/uniformer_small_in1k.pth
07/24 16:02:55 - mmpose - WARNING - The model and loaded state dict do not match exactly

unexpected key in source state_dict: model

missing keys in source state_dict: patch_embed1.norm.weight, patch_embed1.norm.bias, patch_embed1.proj.weight, patch_embed1.proj.bias, patch_embed2.norm.weight, patch_embed2.norm.bias, patch_embed2.proj.weight, patch_embed2.proj.bias, patch_embed3.norm.weight, patch_embed3.norm.bias, patch_embed3.proj.weight, patch_embed3.proj.bias, patch_embed4.norm.weight, patch_embed4.norm.bias, patch_embed4.proj.weight, patch_embed4.proj.bias, blocks1.0.pos_embed.weight, blocks1.0.pos_embed.bias, blocks1.0.norm1.weight, blocks1.0.norm1.bias, blocks1.0.norm1.running_mean, blocks1.0.norm1.running_var, blocks1.0.conv1.weight, blocks1.0.conv1.bias, blocks1.0.conv2.weight, blocks1.0.conv2.bias, blocks1.0.attn.weight, blocks1.0.attn.bias, blocks1.0.norm2.weight, blocks1.0.norm2.bias, blocks1.0.norm2.running_mean, blocks1.0.norm2.running_var, blocks1.0.mlp.fc1.weight, blocks1.0.mlp.fc1.bias, blocks1.0.mlp.fc2.weight, blocks1.0.mlp.fc2.bias, blocks1.1.pos_embed.weight, blocks1.1.pos_embed.bias, blocks1.1.norm1.weight, blocks1.1.norm1.bias, blocks1.1.norm1.running_mean, blocks1.1.norm1.running_var, blocks1.1.conv1.weight, blocks1.1.conv1.bias, blocks1.1.conv2.weight, blocks1.1.conv2.bias, blocks1.1.attn.weight, blocks1.1.attn.bias, blocks1.1.norm2.weight, blocks1.1.norm2.bias, blocks1.1.norm2.running_mean, blocks1.1.norm2.running_var, blocks1.1.mlp.fc1.weight, blocks1.1.mlp.fc1.bias, blocks1.1.mlp.fc2.weight, blocks1.1.mlp.fc2.bias, blocks1.2.pos_embed.weight, blocks1.2.pos_embed.bias, blocks1.2.norm1.weight, blocks1.2.norm1.bias, blocks1.2.norm1.running_mean, blocks1.2.norm1.running_var, blocks1.2.conv1.weight, blocks1.2.conv1.bias, blocks1.2.conv2.weight, blocks1.2.conv2.bias, blocks1.2.attn.weight, blocks1.2.attn.bias, blocks1.2.norm2.weight, blocks1.2.norm2.bias, blocks1.2.norm2.running_mean, blocks1.2.norm2.running_var, blocks1.2.mlp.fc1.weight, blocks1.2.mlp.fc1.bias, blocks1.2.mlp.fc2.weight, blocks1.2.mlp.fc2.bias, norm1.weight, norm1.bias, blocks2.0.pos_embed.weight, blocks2.0.pos_embed.bias, blocks2.0.norm1.weight, blocks2.0.norm1.bias, blocks2.0.norm1.running_mean, blocks2.0.norm1.running_var, blocks2.0.conv1.weight, blocks2.0.conv1.bias, blocks2.0.conv2.weight, blocks2.0.conv2.bias, blocks2.0.attn.weight, blocks2.0.attn.bias, blocks2.0.norm2.weight, blocks2.0.norm2.bias, blocks2.0.norm2.running_mean, blocks2.0.norm2.running_var, blocks2.0.mlp.fc1.weight, blocks2.0.mlp.fc1.bias, blocks2.0.mlp.fc2.weight, blocks2.0.mlp.fc2.bias, blocks2.1.pos_embed.weight, blocks2.1.pos_embed.bias, blocks2.1.norm1.weight, blocks2.1.norm1.bias, blocks2.1.norm1.running_mean, blocks2.1.norm1.running_var, blocks2.1.conv1.weight, blocks2.1.conv1.bias, blocks2.1.conv2.weight, blocks2.1.conv2.bias, blocks2.1.attn.weight, blocks2.1.attn.bias, blocks2.1.norm2.weight, blocks2.1.norm2.bias, blocks2.1.norm2.running_mean, blocks2.1.norm2.running_var, blocks2.1.mlp.fc1.weight, blocks2.1.mlp.fc1.bias, blocks2.1.mlp.fc2.weight, blocks2.1.mlp.fc2.bias, blocks2.2.pos_embed.weight, blocks2.2.pos_embed.bias, blocks2.2.norm1.weight, blocks2.2.norm1.bias, blocks2.2.norm1.running_mean, blocks2.2.norm1.running_var, blocks2.2.conv1.weight, blocks2.2.conv1.bias, blocks2.2.conv2.weight, blocks2.2.conv2.bias, blocks2.2.attn.weight, blocks2.2.attn.bias, blocks2.2.norm2.weight, blocks2.2.norm2.bias, blocks2.2.norm2.running_mean, blocks2.2.norm2.running_var, blocks2.2.mlp.fc1.weight, blocks2.2.mlp.fc1.bias, blocks2.2.mlp.fc2.weight, blocks2.2.mlp.fc2.bias, blocks2.3.pos_embed.weight, blocks2.3.pos_embed.bias, blocks2.3.norm1.weight, 
blocks2.3.norm1.bias, blocks2.3.norm1.running_mean, blocks2.3.norm1.running_var, blocks2.3.conv1.weight, blocks2.3.conv1.bias, blocks2.3.conv2.weight, blocks2.3.conv2.bias, blocks2.3.attn.weight, blocks2.3.attn.bias, blocks2.3.norm2.weight, blocks2.3.norm2.bias, blocks2.3.norm2.running_mean, blocks2.3.norm2.running_var, blocks2.3.mlp.fc1.weight, blocks2.3.mlp.fc1.bias, blocks2.3.mlp.fc2.weight, blocks2.3.mlp.fc2.bias, norm2.weight, norm2.bias, blocks3.0.pos_embed.weight, blocks3.0.pos_embed.bias, blocks3.0.norm1.weight, blocks3.0.norm1.bias, blocks3.0.attn.qkv.weight, blocks3.0.attn.qkv.bias, blocks3.0.attn.proj.weight, blocks3.0.attn.proj.bias, blocks3.0.norm2.weight, blocks3.0.norm2.bias, blocks3.0.mlp.fc1.weight, blocks3.0.mlp.fc1.bias, blocks3.0.mlp.fc2.weight, blocks3.0.mlp.fc2.bias, blocks3.1.pos_embed.weight, blocks3.1.pos_embed.bias, blocks3.1.norm1.weight, blocks3.1.norm1.bias, blocks3.1.attn.qkv.weight, blocks3.1.attn.qkv.bias, blocks3.1.attn.proj.weight, blocks3.1.attn.proj.bias, blocks3.1.norm2.weight, blocks3.1.norm2.bias, blocks3.1.mlp.fc1.weight, blocks3.1.mlp.fc1.bias, blocks3.1.mlp.fc2.weight, blocks3.1.mlp.fc2.bias, blocks3.2.pos_embed.weight, blocks3.2.pos_embed.bias, blocks3.2.norm1.weight, blocks3.2.norm1.bias, blocks3.2.attn.qkv.weight, blocks3.2.attn.qkv.bias, blocks3.2.attn.proj.weight, blocks3.2.attn.proj.bias, blocks3.2.norm2.weight, blocks3.2.norm2.bias, blocks3.2.mlp.fc1.weight, blocks3.2.mlp.fc1.bias, blocks3.2.mlp.fc2.weight, blocks3.2.mlp.fc2.bias, blocks3.3.pos_embed.weight, blocks3.3.pos_embed.bias, blocks3.3.norm1.weight, blocks3.3.norm1.bias, blocks3.3.attn.qkv.weight, blocks3.3.attn.qkv.bias, blocks3.3.attn.proj.weight, blocks3.3.attn.proj.bias, blocks3.3.norm2.weight, blocks3.3.norm2.bias, blocks3.3.mlp.fc1.weight, blocks3.3.mlp.fc1.bias, blocks3.3.mlp.fc2.weight, blocks3.3.mlp.fc2.bias, blocks3.4.pos_embed.weight, blocks3.4.pos_embed.bias, blocks3.4.norm1.weight, blocks3.4.norm1.bias, blocks3.4.attn.qkv.weight, blocks3.4.attn.qkv.bias, blocks3.4.attn.proj.weight, blocks3.4.attn.proj.bias, blocks3.4.norm2.weight, blocks3.4.norm2.bias, blocks3.4.mlp.fc1.weight, blocks3.4.mlp.fc1.bias, blocks3.4.mlp.fc2.weight, blocks3.4.mlp.fc2.bias, blocks3.5.pos_embed.weight, blocks3.5.pos_embed.bias, blocks3.5.norm1.weight, blocks3.5.norm1.bias, blocks3.5.attn.qkv.weight, blocks3.5.attn.qkv.bias, blocks3.5.attn.proj.weight, blocks3.5.attn.proj.bias, blocks3.5.norm2.weight, blocks3.5.norm2.bias, blocks3.5.mlp.fc1.weight, blocks3.5.mlp.fc1.bias, blocks3.5.mlp.fc2.weight, blocks3.5.mlp.fc2.bias, blocks3.6.pos_embed.weight, blocks3.6.pos_embed.bias, blocks3.6.norm1.weight, blocks3.6.norm1.bias, blocks3.6.attn.qkv.weight, blocks3.6.attn.qkv.bias, blocks3.6.attn.proj.weight, blocks3.6.attn.proj.bias, blocks3.6.norm2.weight, blocks3.6.norm2.bias, blocks3.6.mlp.fc1.weight, blocks3.6.mlp.fc1.bias, blocks3.6.mlp.fc2.weight, blocks3.6.mlp.fc2.bias, blocks3.7.pos_embed.weight, blocks3.7.pos_embed.bias, blocks3.7.norm1.weight, blocks3.7.norm1.bias, blocks3.7.attn.qkv.weight, blocks3.7.attn.qkv.bias, blocks3.7.attn.proj.weight, blocks3.7.attn.proj.bias, blocks3.7.norm2.weight, blocks3.7.norm2.bias, blocks3.7.mlp.fc1.weight, blocks3.7.mlp.fc1.bias, blocks3.7.mlp.fc2.weight, blocks3.7.mlp.fc2.bias, norm3.weight, norm3.bias, blocks4.0.pos_embed.weight, blocks4.0.pos_embed.bias, blocks4.0.norm1.weight, blocks4.0.norm1.bias, blocks4.0.attn.qkv.weight, blocks4.0.attn.qkv.bias, blocks4.0.attn.proj.weight, blocks4.0.attn.proj.bias, blocks4.0.norm2.weight, blocks4.0.norm2.bias, 
blocks4.0.mlp.fc1.weight, blocks4.0.mlp.fc1.bias, blocks4.0.mlp.fc2.weight, blocks4.0.mlp.fc2.bias, blocks4.1.pos_embed.weight, blocks4.1.pos_embed.bias, blocks4.1.norm1.weight, blocks4.1.norm1.bias, blocks4.1.attn.qkv.weight, blocks4.1.attn.qkv.bias, blocks4.1.attn.proj.weight, blocks4.1.attn.proj.bias, blocks4.1.norm2.weight, blocks4.1.norm2.bias, blocks4.1.mlp.fc1.weight, blocks4.1.mlp.fc1.bias, blocks4.1.mlp.fc2.weight, blocks4.1.mlp.fc2.bias, blocks4.2.pos_embed.weight, blocks4.2.pos_embed.bias, blocks4.2.norm1.weight, blocks4.2.norm1.bias, blocks4.2.attn.qkv.weight, blocks4.2.attn.qkv.bias, blocks4.2.attn.proj.weight, blocks4.2.attn.proj.bias, blocks4.2.norm2.weight, blocks4.2.norm2.bias, blocks4.2.mlp.fc1.weight, blocks4.2.mlp.fc1.bias, blocks4.2.mlp.fc2.weight, blocks4.2.mlp.fc2.bias, norm4.weight, norm4.bias
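
The warning above is expected: the single unexpected key, model, means the ImageNet-pretrained checkpoint stores all of its weights nested under a top-level 'model' entry, which is why every backbone parameter is listed as missing. It is harmless for this test run, since the fine-tuned pose checkpoint is loaded afterwards, but for training from the ImageNet weights the nesting can be flattened first. A minimal sketch with plain PyTorch, assuming the local path from the log (the _flat.pth output name is just an example):

import torch

# Load the ImageNet-pretrained checkpoint; its weights sit under 'model'.
ckpt = torch.load('projects/uniformer/pretrained/uniformer_small_in1k.pth',
                  map_location='cpu')
state_dict = ckpt.get('model', ckpt)  # unwrap the extra nesting level
print(len(state_dict), 'tensors, first key:', next(iter(state_dict)))
# Save a flattened copy (example name) that should load without the mismatch warning.
torch.save(state_dict,
           'projects/uniformer/pretrained/uniformer_small_in1k_flat.pth')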

07/24 16:02:55 - mmpose - INFO - Load pretrained model from /root/mmpose/projects/uniformer/pretrained/uniformer_small_in1k.pth
07/24 16:02:59 - mmengine - INFO - Distributed training is not used, all SyncBatchNorm (SyncBN) layers in the model will be automatically reverted to BatchNormXd layers if they are used.
07/24 16:02:59 - mmengine - INFO - Hooks will be executed in the following order:
before_run:
(VERY_HIGH   ) RuntimeInfoHook                    
(BELOW_NORMAL) LoggerHook                         
 -------------------- 
before_train:
(VERY_HIGH   ) RuntimeInfoHook                    
(NORMAL      ) IterTimerHook                      
(VERY_LOW    ) CheckpointHook                     
 -------------------- 
before_train_epoch:
(VERY_HIGH   ) RuntimeInfoHook                    
(NORMAL      ) IterTimerHook                      
(NORMAL      ) DistSamplerSeedHook                
 -------------------- 
before_train_iter:
(VERY_HIGH   ) RuntimeInfoHook                    
(NORMAL      ) IterTimerHook                      
 -------------------- 
after_train_iter:
(VERY_HIGH   ) RuntimeInfoHook                    
(NORMAL      ) IterTimerHook                      
(BELOW_NORMAL) LoggerHook                         
(LOW         ) ParamSchedulerHook                 
(VERY_LOW    ) CheckpointHook                     
 -------------------- 
after_train_epoch:
(NORMAL      ) IterTimerHook                      
(NORMAL      ) SyncBuffersHook                    
(LOW         ) ParamSchedulerHook                 
(VERY_LOW    ) CheckpointHook                     
 -------------------- 
before_val_epoch:
(NORMAL      ) IterTimerHook                      
(NORMAL      ) SyncBuffersHook                    
 -------------------- 
before_val_iter:
(NORMAL      ) IterTimerHook                      
 -------------------- 
after_val_iter:
(NORMAL      ) IterTimerHook                      
(NORMAL      ) PoseVisualizationHook              
(BELOW_NORMAL) LoggerHook                         
 -------------------- 
after_val_epoch:
(VERY_HIGH   ) RuntimeInfoHook                    
(NORMAL      ) IterTimerHook                      
(BELOW_NORMAL) LoggerHook                         
(LOW         ) ParamSchedulerHook                 
(VERY_LOW    ) CheckpointHook                     
 -------------------- 
after_train:
(VERY_LOW    ) CheckpointHook                     
 -------------------- 
before_test_epoch:
(NORMAL      ) IterTimerHook                      
 -------------------- 
before_test_iter:
(NORMAL      ) IterTimerHook                      
 -------------------- 
after_test_iter:
(NORMAL      ) IterTimerHook                      
(NORMAL      ) PoseVisualizationHook              
(BELOW_NORMAL) LoggerHook                         
 -------------------- 
after_test_epoch:
(VERY_HIGH   ) RuntimeInfoHook                    
(NORMAL      ) IterTimerHook                      
(BELOW_NORMAL) LoggerHook                         
 -------------------- 
after_run:
(BELOW_NORMAL) LoggerHook                         
 -------------------- 
loading annotations into memory...
Done (t=0.29s)
creating index...
index created!
loading annotations into memory...
Done (t=0.18s)
creating index...
index created!
Loads checkpoint by http backend from path: https://download.openmmlab.com/mmpose/v1/projects/uniformer/top_down_448x320_global_small-18b760de_20230724.pth
Downloading: "https://download.openmmlab.com/mmpose/v1/projects/uniformer/top_down_448x320_global_small-18b760de_20230724.pth" to /root/.cache/torch/hub/checkpoints/top_down_448x320_global_small-18b760de_20230724.pth
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 96.3M/96.3M [00:05<00:00, 19.8MB/s]
07/24 16:03:13 - mmengine - INFO - Load checkpoint from https://download.openmmlab.com/mmpose/v1/projects/uniformer/top_down_448x320_global_small-18b760de_20230724.pth
07/24 16:05:00 - mmengine - INFO - Epoch(test) [ 50/407]    eta: 0:12:44  time: 2.140249  data_time: 0.281717  memory: 8446  
07/24 16:06:45 - mmengine - INFO - Epoch(test) [100/407]    eta: 0:10:51  time: 2.104473  data_time: 0.235087  memory: 8446  
07/24 16:08:30 - mmengine - INFO - Epoch(test) [150/407]    eta: 0:09:03  time: 2.101846  data_time: 0.232801  memory: 8446  
07/24 16:10:16 - mmengine - INFO - Epoch(test) [200/407]    eta: 0:07:17  time: 2.103954  data_time: 0.233861  memory: 8446  
07/24 16:12:01 - mmengine - INFO - Epoch(test) [250/407]    eta: 0:05:31  time: 2.114850  data_time: 0.245181  memory: 8446  
07/24 16:13:47 - mmengine - INFO - Epoch(test) [300/407]    eta: 0:03:45  time: 2.105382  data_time: 0.237319  memory: 8446  
07/24 16:15:32 - mmengine - INFO - Epoch(test) [350/407]    eta: 0:02:00  time: 2.099765  data_time: 0.229156  memory: 8446  
07/24 16:17:17 - mmengine - INFO - Epoch(test) [400/407]    eta: 0:00:14  time: 2.104526  data_time: 0.233125  memory: 8446  
07/24 16:18:04 - mmengine - INFO - Evaluating CocoMetric...
Loading and preparing results...
DONE (t=3.50s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type *keypoints*
DONE (t=9.13s).
Accumulating evaluation results...
DONE (t=0.32s).
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets= 20 ] =  0.762
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets= 20 ] =  0.906
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets= 20 ] =  0.832
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] =  0.725
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] =  0.834
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 20 ] =  0.814
 Average Recall     (AR) @[ IoU=0.50      | area=   all | maxDets= 20 ] =  0.944
 Average Recall     (AR) @[ IoU=0.75      | area=   all | maxDets= 20 ] =  0.876
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] =  0.772
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] =  0.877
07/24 16:18:17 - mmengine - INFO - Epoch(test) [407/407]    coco/AP: 0.762134  coco/AP .5: 0.906027  coco/AP .75: 0.832140  coco/AP (M): 0.725317  coco/AP (L): 0.833510  coco/AR: 0.814421  coco/AR .5: 0.944112  coco/AR .75: 0.876417  coco/AR (M): 0.772111  coco/AR (L): 0.876886  data_time: 0.240286  time: 2.107453

For comparison, the accuracy listed in the official UniFormer repo is:

Backbone     Input Size  AP    AP50  AP75  AR(M)  AR(L)  AR    FLOPs
UniFormer-S  448x320     76.2  90.6  83.2  68.6   79.4   81.4  14.8G
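
To reproduce these numbers end to end, a minimal sketch with mmengine's Runner (the config and checkpoint are the ones named in the log above; work_dir is an arbitrary choice, and running python tools/test.py CONFIG CHECKPOINT should be equivalent):

from mmengine.config import Config
from mmengine.runner import Runner

cfg = Config.fromfile(
    'projects/uniformer/configs/td-hm_uniformer-s-8xb64-210e_coco-448x320.py')
cfg.load_from = ('https://download.openmmlab.com/mmpose/v1/projects/uniformer/'
                 'top_down_448x320_global_small-18b760de_20230724.pth')
cfg.work_dir = 'work_dirs/uniformer_s_448x320'  # arbitrary output directory
runner = Runner.from_cfg(cfg)
runner.test()  # runs the test loop and prints the CocoMetric summary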

@Tau-J (Collaborator) left a comment:

LGTM

@Tau-J changed the title from "Add UniFormer Pose Estimation to Projects folder" to "[Feature][MMSIG] Add UniFormer Pose Estimation to Projects folder" Jul 24, 2023
@Tau-J merged commit 85831b8 into open-mmlab:dev-1.x Jul 24, 2023
14 checks passed
@xin-li-67 deleted the uniformer_projects branch July 24, 2023 12:08
@Tau-J mentioned this pull request Jul 14, 2023
Tau-J added a commit that referenced this pull request Oct 12, 2023
* update

* [Fix] Fix HRFormer log link

* [Feature] Add Application 'Just dance' (#2528)

* [Docs] Add advanced tutorial of implement new model. (#2539)

* [Doc] Update img (#2541)

* [Feature] Support MotionBERT (#2482)

* [Fix] Fix demo scripts (#2542)

* [Fix] Fix Pose3dInferencer keypoint shape bug (#2543)

* [Enhance] Add notifications when saving visualization results (#2545)

* [Fix] MotionBERT training and flip-test (#2548)

* [Docs] Enhance docs (#2555)

* [Docs] Fix links in doc (#2557)

* [Docs] add details (#2558)

* [Refactor] 3d human pose demo (#2554)

* [Docs] Update MotionBERT docs (#2559)

* [Refactor] Update the arguments of 3d inferencer to align with the demo script (#2561)

* [Enhance] Combined dataset supports custom sampling ratio (#2562)

* [Docs] Add MultiSourceSampler docs (#2563)

* [Doc] Refine docs (#2564)

* [Feature][MMSIG] Add UniFormer Pose Estimation to Projects folder (#2501)

* [Fix] Check the compatibility of inferencer's input/output  (#2567)

* [Fix] Fix 3d visualization (#2565)

* [Feature] Add bear example in just dance (#2568)

* [Doc] Add example and openxlab link for just dance (#2571)

* [Fix] Configs' paths of VideoPose3d (#2572)

* [Docs] update docs (#2573)

* [Fix] Fix new config bug in train.py (#2575)

* [Fix] Configs' of MotionBERT (#2574)

* [Enhance] Normalization option in 3d human pose demo and inferencer (#2576)

* [Fix] Fix the incorrect labels for training vis_head with combined datasets (#2550)

* [Enhance] Enhance 3dpose demo and docs (#2578)

* [Docs] Enhance Codecs documents (#2580)

* [Feature] Add DWPose distilled WholeBody RTMPose models (#2581)

* [Docs] Add deployment docs (#2582)

* [Fix] Refine 3dpose (#2583)

* [Fix] Fix config typo in rtmpose-x (#2585)

* [Fix] Fix 3d inferencer (#2593)

* [Feature] Add a simple visualize api (#2596)

* [Feature][MMSIG] Support badcase analyze in test (#2584)

* [Fix] fix bug in flip_bbox with xyxy format (#2598)

* [Feature] Support ubody dataset (2d keypoints) (#2588)

* [Fix] Fix visualization bug in 3d pose (#2594)

* [Fix] Remove use-multi-frames option (#2601)

* [Enhance] Update demos (#2602)

* [Enhance] wholebody support openpose style visualization (#2609)

* [Docs] Documentation regarding 3d pose (#2599)

* [CodeCamp2023-533] Migration Deepfashion topdown heatmap algorithms to 1.x (#2597)

* [Fix] fix badcase hook (#2616)

* [Fix] Update dataset mim downloading source to OpenXLab (#2614)

* [Docs] Update docs structure (#2617)

* [Docs] Refine Docs (#2619)

* [Fix] Fix numpy error (#2626)

* [Docs] Update error info and docs (#2624)

* [Fix] Fix inferencer argument name (#2627)

* [Fix] fix links for coco+aic hrnet (#2630)

* [Fix] fix a bug when visualize keypoint indices (#2631)

* [Docs] Update rtmpose docs (#2642)

* [Docs] update README.md (#2647)

* [Docs] Add onnx of RTMPose models (#2656)

* [Docs] Fix mmengine link (#2655)

* [Docs] Update QR code (#2653)

* [Feature] Add DWPose (#2643)

* [Refactor] Reorganize distillers (#2658)

* [CodeCamp2023-259]Document Writing: Advanced Tutorial - Custom Data Augmentation (#2605)

* [Docs] Fix installation docs (#2668)

* [Fix] Fix expired links in README (#2673)

* [Feature] Support multi-dataset evaluation (#2674)

* [Refactor] Specify labels to pack in codecs (#2659)

* [Refactor] update mapping tables (#2676)

* [Fix] fix link (#2677)

* [Enhance] Enable CocoMetric to get ann_file from MessageHub (#2678)

* [Fix] fix vitpose pretrained ckpts (#2687)

* [Refactor] Refactor YOLOX-Pose into mmpose core package (#2620)

* [Fix] Fix typo in COCOMetric (#2691)

* [Fix] Fix bug raised by changing bbox_center to input_center (#2693)

* [Feature] Support EDPose for inference (#2688)

* [Refactor] Internet for 3d hand pose estimation (#2632)

* [Fix] Change test batch_size of edpose to 1 (#2701)

* [Docs] Add OpenXLab Badge (#2698)

* [Doc] fix inferencer doc (#2702)

* [Docs] Refine dataset config tutorial (#2707)

* [Fix] modify yoloxpose test settings (#2706)

* [Fix] add compatibility for argument `return_datasample` (#2708)

* [Feature] Support ubody3d dataset (#2699)

* [Fix] Fix 3d inferencer (#2709)

* [Fix] Move ubody3d dataset to wholebody3d (#2712)

* [Refactor] Refactor config and dataset file structures (#2711)

* [Fix] give more clues when loading img failed (#2714)

* [Feature] Add demo script for 3d hand pose (#2710)

* [Fix] Fix Internet demo (#2717)

* [codecamp: mmpose-315] 300W-LP data set support (#2716)

* [Fix] Fix the typo in YOLOX-Pose (#2719)

* [Feature] Add detectors trained on humanart (#2724)

* [Feature] Add RTMPose-Wholebody (#2721)

* [Doc] Fix github action badge in README (#2727)

* [Fix] Fix bug of dwpose (#2728)

* [Feature] Support hand3d inferencer (#2729)

* [Fix] Fix new config of RTMW (#2731)

* [Fix] Align visualization color of 3d demo (#2734)

* [Fix] Refine h36m data loading and add head_size to PackPoseInputs (#2735)

* [Refactor] Align test accuracy for AE (#2737)

* [Refactor] Separate evaluation mappings from KeypointConverter (#2738)

* [Fix] MotionbertLabel codec (#2739)

* [Fix] Fix mask shape (#2740)

* [Feature] Add training datasets of RTMW (#2743)

* [Doc] update RTMPose README (#2744)

* [Fix] skip warnings in demo (#2746)

* Bump 1.2 (#2748)

* add comments in dekr configs (#2751)

---------

Co-authored-by: Peng Lu <[email protected]>
Co-authored-by: Yifan Lareina WU <[email protected]>
Co-authored-by: Xin Li <[email protected]>
Co-authored-by: Indigo6 <[email protected]>
Co-authored-by: 谢昕辰 <[email protected]>
Co-authored-by: tpoisonooo <[email protected]>
Co-authored-by: zhengjie.xu <[email protected]>
Co-authored-by: Mesopotamia <[email protected]>
Co-authored-by: chaodyna <[email protected]>
Co-authored-by: lwttttt <[email protected]>
Co-authored-by: Kanji Yomoda <[email protected]>
Co-authored-by: LiuYi-Up <[email protected]>
Co-authored-by: ZhaoQiiii <[email protected]>
Co-authored-by: Yang-ChangHui <[email protected]>
Co-authored-by: Xuan Ju <[email protected]>