- 👋 Hello! I am a third-year Ph.D. candidate in computer science at HKBU. My objective is to develop intelligent agents capable of interacting with both digital and physical environments. To break it down further:
- Foundation Models: Large Foundation Models, Model Alignment, etc.
- Applications: Foundation Models as Agents, Code Intelligence, etc.
- See my HomePage or Google Scholar for more about me and my research.
- 🌱 I am proud to be a co-first author of the well-known code LLM, WizardCoder.
- 📫 Email: [email protected]
- CS Ph.D. Candidate at HKBU, Hong Kong.
- Co-first author of WizardCoder.
- My research revolves around Code Intelligence and LLMs.
- Name in Pinyin: Ziyang Luo.
- Hong Kong Baptist University, Hong Kong, China (UTC +08:00).
- Homepage: https://chiyeunglaw.github.io/
- Twitter: @ChiYeung_Law
- Zhihu: https://www.zhihu.com/people/Chi-YeungLaw
Pinned
- nlpxucan/WizardLM: LLMs built upon Evol-Instruct: WizardLM, WizardCoder, WizardMath.
- LexLIP-ICCV23: Official code for the ICCV 2023 paper "LexLIP: Lexicon-Bottlenecked Language-Image Pre-Training for Large-Scale Image-Text Sparse Retrieval".
- HKBUNLP/Mr.Harm-EMNLP2023: Code for our EMNLP 2023 paper "Beneath the Surface: Unveiling Harmful Memes with Multimodal Reasoning Distilled from Large Language Models".
- HKBUNLP/ExplainHM-WWW2024: Official code for the WWW 2024 paper "Towards Explainable Harmful Meme Detection through Multimodal Debate between Large Language Models".