How do I go through a CS paper:

CS/informatics papers usually have six main components:

  1. title
  2. abstract
  3. introduction
  4. method
  5. experiments
  6. conclusion

I go through the paper three times:

First round: Skim the title, abstract, and conclusion, and look at the important figures and tables in the Method and Experiments sections. This way, in ten-odd minutes you can tell whether the paper fits your research direction.

Second round: After confirming that the paper is worth reading, go through the whole paper quickly. You don't need to grasp every detail, but you should understand the important figures and tables, know what each section is doing, and mark the relevant references. If the paper feels too difficult, read the cited literature first.

Third round: Work out what problem the paper poses, what method it uses to solve it, and how the experiments were done. Then close the paper and recall what each section is about.

How to edit this README

(We use Semantic Scholar because of its clean API.)

How to use Semantic Scholar's API:

Build the citation badge URL by concatenating three pieces directly (the plus signs below are separators, not part of the URL, and there are no spaces):
https://img.shields.io/badge/dynamic/json?label=citation&query=citationCount&url=https%3A%2F%2Fapi.semanticscholar.org%2Fgraph%2Fv1%2Fpaper%2F + the last slash-separated chunk of the paper's Semantic Scholar URL + %3Ffields%3DcitationCount
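As a rough illustration, here is a minimal Python sketch of that concatenation. The function name and the example paper URL are made up for this README; it assumes the paper ID is simply the last path segment of the Semantic Scholar URL.

```python
from urllib.parse import quote

def citation_badge(paper_url: str) -> str:
    """Build a shields.io dynamic-JSON badge URL that displays a paper's
    citation count via the Semantic Scholar Graph API."""
    # The paper ID is the last slash-separated chunk of the Semantic Scholar URL.
    paper_id = paper_url.rstrip("/").split("/")[-1]
    api_url = (
        "https://api.semanticscholar.org/graph/v1/paper/"
        f"{paper_id}?fields=citationCount"
    )
    # shields.io expects the API URL percent-encoded in its `url` query parameter.
    return (
        "https://img.shields.io/badge/dynamic/json"
        f"?label=citation&query=citationCount&url={quote(api_url, safe='')}"
    )

# Hypothetical usage:
# print(citation_badge("https://www.semanticscholar.org/paper/Some-Paper/0123456789abcdef"))
```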

Cool blogs and talks about new tech

What is Diffusion // Diffusion Demo with code // Prompting intro // Contrastive learning and multi-modal // Method call //

What are we reading next?

| Proposed by | Year | Title | Date | Citation |
| --- | --- | --- | --- | --- |
| Shan | 2022 | Competence-based Multimodal Curriculum Learning for Medical Report Generation | Sep 06, 2022 | citation |

CV

| Read? | Year | Title | What do I think | Citation |
| --- | --- | --- | --- | --- |
|  | 2012 | AlexNet | First big W for deep learning | citation |

Multimodal

| Read? | Year | Title | What do I think | Citation |
| --- | --- | --- | --- | --- |
|  | 2022 | CheXzero | A limited refinement of CLIP | citation |
|  | 2019 | ConVirt | Medical-domain work that inspired CLIP | citation |
|  | 2020 | ViT | Brought the Transformer into the CV world | citation |
|  | 2021 | CLIP | Unsupervised contrastive learning on a huge amount of data, building richer semantics | citation |

Generation

| Read? | Year | Title | What do I think | Citation |
| --- | --- | --- | --- | --- |
|  | 2014 | GAN | The first GAN | citation |

NLP

| Read? | Year | Title | What do I think | Citation |
| --- | --- | --- | --- | --- |
|  | 2017 | Transformer | Best architecture for NLP after MLPs, CNNs, and RNNs | citation |
|  | 2022 | Do Prompt-Based Models Really Understand the Meaning of Their Prompts? | Prompt semantics don't work the way we assume | citation |

General new stuff

| Read? | Year | Title | What do I think | Citation |
| --- | --- | --- | --- | --- |
|  | 2021 | AlphaFold 2 | Atomic-level 3D protein structure prediction | citation |
