@sz144 I believe I can contribute a pull request to help implement support for more file types, with a little guidance on the general concepts/workflow for data structuring in PyKale. I see that the PyKale structure is related to PyTorch. If there is necessary development to be done, maybe we can kick off an issue? Again, I am happy to contribute code. :)
-
A couple of days ago, I decided that I wanted to apply domain adaptation to some simulated data so that I could get a better understanding of what kinds of structure/knowledge can transfer between two datasets. I wanted to work locally on my machine rather than in Colab, so I have a fresh install of PyTorch and PyKale v0.1.0rc4.
There are several examples that access data in different ways, but they are also aiming to do different kinds of modeling. For example, the `multisource_adapt` example does multi-source domain adaptation, which is what I think I want to be experimenting with. The datasets are available on disk, but by the time they are handed over to a function like `create_ms_adapt_trainer` they are represented as an instance of a `MultiDomainAdapDataset`. To construct such an instance, you need a `MultiDomainAccessDatasetAccess` object... and I am lost.

What is the workflow to take arbitrary data, say a collection of comma-delimited text files, and load them as a dataset that plugs into the rest of the API?
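To make the question concrete, here is a minimal sketch of the kind of wrapper I have in mind: a map-style dataset over comma-delimited rows, implementing the `__len__`/`__getitem__` protocol that `torch.utils.data.Dataset` expects, so that something like it could presumably be handed to PyKale's dataset-access wrappers. The class and names below are illustrative assumptions, not PyKale API.

```python
# Hypothetical sketch: wrapping per-domain comma-delimited data in a
# map-style dataset (the __len__/__getitem__ protocol that
# torch.utils.data.Dataset and its consumers expect).
# CSVDomainDataset is an illustrative name, not part of PyKale.
import csv
import io

class CSVDomainDataset:
    """Map-style dataset over rows of a comma-delimited file."""

    def __init__(self, lines, domain_label):
        reader = csv.reader(lines)
        # Each non-empty row becomes a feature vector; the domain label
        # marks which dataset (source/target) the sample came from.
        self.samples = [[float(v) for v in row] for row in reader if row]
        self.domain_label = domain_label

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        return self.samples[idx], self.domain_label

# Example: two "domains", each a small in-memory CSV.
source = CSVDomainDataset(io.StringIO("1,2\n3,4\n"), domain_label=0)
target = CSVDomainDataset(io.StringIO("5,6\n"), domain_label=1)
```

For files on disk, the `io.StringIO` objects would be replaced by `open(path)` handles; the open question is how such a dataset should then be adapted into whatever access object `MultiDomainAdapDataset` requires.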
To test some basic things out, I generated a file structure based on the docstring for the `MultiDomainImageFolder` class (each text file just contains the numbers 0--9 on separate lines) and tried loading it in Python, but that just yields an empty list.
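One possible explanation for the empty list, sketched below under the assumption that `MultiDomainImageFolder` follows the torchvision folder-dataset convention: such loaders keep only files whose extension appears in an allow-list of image formats, so plain `.txt` files are silently skipped. The `find_samples` helper and extension tuple here are illustrative reconstructions, not code imported from torchvision or PyKale.

```python
# Hypothetical illustration of why a folder of .txt files can load as
# an empty list: torchvision-style folder datasets keep only files
# whose extension is in an image allow-list. The tuple below mirrors
# torchvision's IMG_EXTENSIONS but is reproduced as an assumption.
import os
import tempfile

IMG_EXTENSIONS = (".jpg", ".jpeg", ".png", ".ppm", ".bmp",
                  ".gif", ".tif", ".tiff", ".webp")

def find_samples(root, extensions=IMG_EXTENSIONS):
    """Collect (path, class_index) pairs, one class per subdirectory."""
    samples = []
    classes = sorted(d for d in os.listdir(root)
                     if os.path.isdir(os.path.join(root, d)))
    for idx, cls in enumerate(classes):
        for fname in sorted(os.listdir(os.path.join(root, cls))):
            if fname.lower().endswith(extensions):
                samples.append((os.path.join(root, cls, fname), idx))
    return samples

# Demo: a class folder containing only .txt files yields no samples...
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "class_a"))
open(os.path.join(root, "class_a", "data.txt"), "w").close()
print(find_samples(root))                        # []
# ...unless the allow-list is widened to include .txt:
print(find_samples(root, extensions=(".txt",)))  # one (path, 0) sample
```

If this is what is happening, the fix would presumably be either to store the data as actual images or to pass a custom loader/extension filter, if the PyKale class exposes one.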
I gave this thread a generic title because I think I will have a number of questions like this one, covering different topics. It might be nice to have them organized.