Jülich Supercomputing Centre (JSC), Forschungszentrum Jülich
Stars
Official inference repo for FLUX.1 models
The image prompt adapter is designed to enable a pretrained text-to-image diffusion model to generate images from an image prompt.
Alice in Wonderland code base for experiments and raw experiment data
An open-source framework for training large multimodal models.
A MAD laboratory to improve AI architecture designs 🧪
A repository for research on medium sized language models.
Language models scale reliably with over-training and on downstream tasks
Launch and manage batches of SLURM experiments easily
A framework for the evaluation of autoregressive code generation language models.
A Comparative Study on Generative Models for High Resolution Extreme Ultraviolet Solar Images https://arxiv.org/abs/2304.07169
LLM training code for Databricks foundation models
A high-performance Python-based I/O system for large (and small) deep learning problems, with strong support for PyTorch.
Trainable, memory-efficient, and GPU-friendly PyTorch reproduction of AlphaFold 2
GPT4All: Run Local LLMs on Any Device. Open-source and available for commercial use.
CLASP - Contrastive Language-Aminoacid Sequence Pretraining
ZeroCostDL4Mic: A Google Colab based no-cost toolbox to explore Deep-Learning in Microscopy
Simple large-scale training of stable diffusion with multi-node support.
O-GIA is an umbrella for a research, infrastructure, and projects ecosystem that provides open-source, reproducible datasets, models, applications & safety tools for Open Generalist Interactive …
Open Instruction Generalist is an assistant trained on massive synthetic instructions to perform many millions of tasks
OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically to do so.
Stable Diffusion web UI