adamtiger/tinyGPUlang


tinyGPUlang

Tutorial on building a GPU compiler backend in LLVM

Goals

The goal of this tutorial is to show, through a simple example, how to generate PTX from LLVM IR and how to write the IR itself to access CUDA features.

For the sake of demonstration, a language frontend is also provided. The main idea of the language is to support pointwise (aka elementwise) operations with GPU acceleration.

If you are just curious about the code generation backend, you can jump directly to the section The code generator for NVPTX backend.
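To give a flavor of what "writing the IR itself to access CUDA features" means, here is a hedged sketch (not taken from the repo) of a pointwise add kernel in LLVM IR targeting NVPTX; the kernel name `add`, the single-block indexing, and the use of the `llvm.nvvm.read.ptx.sreg.tid.x` intrinsic are illustrative assumptions:

```llvm
; Illustrative sketch: c[i] = a[i] + b[i], one element per thread.
target triple = "nvptx64-nvidia-cuda"

define void @add(ptr %a, ptr %b, ptr %c) {
entry:
  ; Read threadIdx.x via the NVVM intrinsic.
  %tid = call i32 @llvm.nvvm.read.ptx.sreg.tid.x()
  %idx = zext i32 %tid to i64
  %pa  = getelementptr float, ptr %a, i64 %idx
  %pb  = getelementptr float, ptr %b, i64 %idx
  %va  = load float, ptr %pa
  %vb  = load float, ptr %pb
  %vc  = fadd float %va, %vb
  %pc  = getelementptr float, ptr %c, i64 %idx
  store float %vc, ptr %pc
  ret void
}

declare i32 @llvm.nvvm.read.ptx.sreg.tid.x()

; Mark @add as a kernel (a device entry point) for the NVPTX backend.
!nvvm.annotations = !{!0}
!0 = !{ptr @add, !"kernel", i32 1}
```

Such a module can then be lowered to PTX with the LLVM toolchain, e.g. `llc -march=nvptx64 -mcpu=sm_52 kernel.ll -o kernel.ptx`, or programmatically via a `TargetMachine`, as the tutorial shows.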

What is inside the repo?

  • tinyGPUlang: the compiler; creates PTX from TGL files (the example language)
  • test: a CUDA Driver API based test for the generated PTX
  • examples: example TGL files
  • docs: documentation for the tutorial
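As a rough illustration of how a Driver API test of generated PTX can work (a hedged sketch, not the repo's actual test code; the kernel name `add`, the file name `kernel.ptx`, and the launch configuration are assumptions), the flow is: initialize the driver, load the PTX module, look up the kernel, copy inputs to the device, launch, and copy the result back:

```c
#include <cuda.h>
#include <stdio.h>
#include <stdlib.h>

#define N 256

/* Abort on any CUDA driver error. */
static void check(CUresult r, const char* what)
{
    if (r != CUDA_SUCCESS) { fprintf(stderr, "%s failed (%d)\n", what, r); exit(1); }
}

/* Read the whole PTX file into a NUL-terminated buffer. */
static char* read_ptx(const char* path)
{
    FILE* f = fopen(path, "rb");
    if (!f) { perror(path); exit(1); }
    fseek(f, 0, SEEK_END); long n = ftell(f); fseek(f, 0, SEEK_SET);
    char* buf = malloc(n + 1);
    fread(buf, 1, n, f); buf[n] = '\0'; fclose(f);
    return buf;
}

int main(void)
{
    float a[N], b[N], c[N];
    for (int i = 0; i < N; ++i) { a[i] = (float)i; b[i] = 2.0f * i; }

    check(cuInit(0), "cuInit");
    CUdevice dev;  check(cuDeviceGet(&dev, 0), "cuDeviceGet");
    CUcontext ctx; check(cuCtxCreate(&ctx, 0, dev), "cuCtxCreate");

    /* Load the PTX emitted by the compiler and look up the kernel. */
    char* ptx = read_ptx("kernel.ptx");            /* assumed file name   */
    CUmodule mod;  check(cuModuleLoadData(&mod, ptx), "cuModuleLoadData");
    CUfunction fn; check(cuModuleGetFunction(&fn, mod, "add"), "cuModuleGetFunction");

    CUdeviceptr da, db, dc;
    check(cuMemAlloc(&da, sizeof a), "cuMemAlloc");
    check(cuMemAlloc(&db, sizeof b), "cuMemAlloc");
    check(cuMemAlloc(&dc, sizeof c), "cuMemAlloc");
    check(cuMemcpyHtoD(da, a, sizeof a), "cuMemcpyHtoD");
    check(cuMemcpyHtoD(db, b, sizeof b), "cuMemcpyHtoD");

    /* One block of N threads; the kernel indexes by tid.x only. */
    void* args[] = { &da, &db, &dc };
    check(cuLaunchKernel(fn, 1, 1, 1, N, 1, 1, 0, NULL, args, NULL), "cuLaunchKernel");
    check(cuMemcpyDtoH(c, dc, sizeof c), "cuMemcpyDtoH");

    printf("c[1] = %f\n", c[1]);   /* expect a[1] + b[1] */

    cuMemFree(da); cuMemFree(db); cuMemFree(dc);
    cuModuleUnload(mod); cuCtxDestroy(ctx); free(ptx);
    return 0;
}
```

The Driver API is used (rather than the Runtime API) because it can load a bare PTX string at run time with `cuModuleLoadData`, which is exactly what a standalone compiler's output needs. Building and running this requires a CUDA-capable GPU and linking against `libcuda`.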

Tutorial content

  1. Overview
  2. The TGL language
  3. Abstract Syntax Tree
  4. The code generator for NVPTX backend
  5. Short overview of the parser

Build

See the How to build the project? documentation for further details.

References
