Pre-processed data for the multimodal representation learning task #34
Comments
Hi Xiaohan, the experiments presented in Table 2 are based on all human Xenium breast samples (see the HEST-1k metadata). You can query those samples using our download pipeline (see tutorial 1). We only applied log1p normalization. The code for contrastive alignment is not public yet, but it is quite standard.
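Since the alignment code is not released, below is a minimal sketch of what a standard symmetric contrastive (CLIP-style) objective between histology patch embeddings and expression embeddings could look like. The encoder/projection dimensions, the gene-panel size, and the temperature are illustrative assumptions, not the authors' exact setup.

```python
# Sketch of a standard symmetric contrastive (CLIP-style) alignment objective
# between patch-image embeddings and gene-expression embeddings.
# NOTE: not the authors' released code; dimensions and temperature are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContrastiveAligner(nn.Module):
    def __init__(self, img_dim=1024, expr_dim=280, proj_dim=256, temperature=0.07):
        super().__init__()
        # Linear projection heads mapping each modality into a shared space.
        self.img_proj = nn.Linear(img_dim, proj_dim)
        self.expr_proj = nn.Linear(expr_dim, proj_dim)
        # Learnable temperature, stored in log space for stability.
        self.log_temp = nn.Parameter(torch.tensor(float(temperature)).log())

    def forward(self, img_feats, expr_feats):
        # L2-normalize so the dot product is a cosine similarity.
        z_img = F.normalize(self.img_proj(img_feats), dim=-1)
        z_expr = F.normalize(self.expr_proj(expr_feats), dim=-1)
        logits = z_img @ z_expr.t() / self.log_temp.exp()
        targets = torch.arange(logits.size(0), device=logits.device)
        # Symmetric InfoNCE: image-to-expression and expression-to-image.
        return 0.5 * (F.cross_entropy(logits, targets) +
                      F.cross_entropy(logits.t(), targets))

# Usage with random placeholder features (batch of 32 paired patches/spots).
model = ContrastiveAligner()
loss = model(torch.randn(32, 1024), torch.randn(32, 280))
loss.backward()
```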
Hi Guillaume,
We used NCBI783 (IDC), NCBI785 (IDC), TENX95 (IDC), TENX99 (IDC) and TENX96 (ILC). The others are duplicates acquired with different gene panels. You can still use them (3 additional samples), but there is redundancy.
You can refer to the patient entry in the metadata.
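To make the sample selection concrete, here is a rough sketch of filtering the HEST-1k metadata to human Xenium breast samples, keeping one sample per patient, downloading those samples, and applying the log1p normalization mentioned above. The metadata file name, column names (species, st_technology, organ, patient, id), repo id, and file layout are assumptions; verify them against the released metadata and tutorial 1.

```python
# Rough sketch: select human Xenium breast samples from the HEST-1k metadata,
# keep one sample per patient, download them, and apply log1p normalization.
# Column names, file names/layout, and the repo id are assumptions.
import pandas as pd
import scanpy as sc
from huggingface_hub import snapshot_download

meta = pd.read_csv("hest_metadata.csv")  # hypothetical local copy of the metadata
breast_xenium = meta[
    (meta["species"] == "Homo sapiens")
    & (meta["st_technology"] == "Xenium")
    & (meta["organ"] == "Breast")
]
# One sample per patient to drop the duplicate gene-panel runs mentioned above.
breast_xenium = breast_xenium.drop_duplicates(subset="patient")
ids = breast_xenium["id"].tolist()  # e.g. NCBI783, NCBI785, TENX95, TENX99, TENX96

# Download only the expression data for those ids (repo id and paths assumed).
snapshot_download(
    repo_id="MahmoodLab/hest",
    repo_type="dataset",
    local_dir="hest_data",
    allow_patterns=[f"st/{i}.h5ad" for i in ids],
)

# log1p normalization only, as described above.
for i in ids:
    adata = sc.read_h5ad(f"hest_data/st/{i}.h5ad")
    sc.pp.log1p(adata)
    adata.write_h5ad(f"hest_data/st/{i}_log1p.h5ad")
```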
Thanks, I will try.
Hi Authors,
Thanks for your excellent work. I am very interested in developing algorithms based on the HEST-1k database.
I would like to know how to get access to the pre-processed data for the multimodal representation learning task, which corresponds to the experimental results in Table 2.
I look forward to your reply.
Best,
Xiaohan