Currently, the way we handle multiple batches is to send repeated trajectories as input.
This is memory-consuming and might also degrade performance, because deep inside we use `tf.map_fn`, which runs in O(elem.shape[0]). So effectively, by sending repeated trajectories, we are not in a great place. (That said, using `parallel_iterations` will help us here, but probably not by much; I still need to run some simple benchmark tests.)
However, we still need support for multiple trajectories for distributed training and for training in a reconstruction pipeline that runs on multiple trajectories.
We could still come up with a generic solution that handles batches better even when we don't send repeated trajectories. That way, we can carry out the computations more easily in a vectorized fashion! (See the sketch below.)
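To make the trade-off concrete, here is a minimal sketch (not the actual library code; the per-example operator, shapes and names are placeholders) contrasting the current `tf.map_fn`-over-repeated-trajectories pattern with a single vectorized call when the trajectory is shared:

```python
import tensorflow as tf

batch_size, n, m = 8, 256, 512

# Stand-in for the per-example operator (e.g. a NUFFT-like transform);
# a dense matvec is used purely for illustration.
def forward_one(args):
  image, trajectory = args
  return tf.linalg.matvec(trajectory, image)

images = tf.random.normal([batch_size, n])
trajectory = tf.random.normal([m, n])

# Current pattern: the *same* trajectory repeated along the batch dim,
# then mapped one example at a time -> O(batch_size) iterations and
# batch_size copies of the trajectory in memory.
trajectories = tf.repeat(trajectory[tf.newaxis], batch_size, axis=0)
kspace_mapped = tf.map_fn(
    forward_one, (images, trajectories),
    fn_output_signature=tf.float32,
    parallel_iterations=8)  # only effective in graph mode

# If the trajectory is shared, the whole batch can be handled in one
# vectorized op, with no repetition and no per-example loop.
kspace_vectorized = tf.linalg.matvec(trajectory, images)  # [batch_size, m]
```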
Tasks:
I will add a Colab notebook here to check whether there actually is an issue and what its impact is.
If the impact turns out to be significant, we can restructure the code so that it works across the coil and batch dimensions internally, as long as the trajectory is the same (see the sketch below).
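As a rough illustration of that idea, a hedged sketch (the `op` callable, function name and shapes are hypothetical, not the library's API): fold the batch and coil dimensions into a single leading dimension, run the operator once against the shared trajectory, and restore the shape afterwards.

```python
import tensorflow as tf

def apply_with_shared_trajectory(op, images, trajectory):
  """Folds batch and coil dims into one before applying `op`.

  `op` is a hypothetical operator vectorized over a single leading
  dimension: it takes ([k, n] images, trajectory) and returns [k, m]
  k-space samples. `images` has shape [batch, coils, n].
  """
  n = tf.shape(images)[-1]
  batch_shape = tf.shape(images)[:-1]            # [batch, coils]
  flat_images = tf.reshape(images, [-1, n])      # [batch * coils, n]
  flat_kspace = op(flat_images, trajectory)      # one call, no tf.map_fn
  out_tail = tf.shape(flat_kspace)[-1:]          # [m]
  return tf.reshape(flat_kspace, tf.concat([batch_shape, out_tail], axis=0))

# Example with a toy `op` (a dense matvec standing in for the real one):
images = tf.random.normal([4, 8, 256])           # [batch, coils, n]
trajectory = tf.random.normal([512, 256])        # shared by all examples
kspace = apply_with_shared_trajectory(
    lambda x, traj: tf.linalg.matvec(traj, x), images, trajectory)
print(kspace.shape)                              # (4, 8, 512)
```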
Also, a side note: when executing eagerly, `parallel_iterations` is effectively 1, which makes debugging extremely slow (`batch_size` times slower). That is bad if we want to iterate quickly and converge on better results.
No, if I set it higher I get a warning saying that it can't run in parallel in eager mode. `parallel_iterations` only takes effect once a graph is built, so this can't be done in eager mode (see the sketch below).
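For reference, a small sketch of that behaviour (nothing library-specific, just plain TensorFlow): eagerly, `tf.map_fn` warns and ignores `parallel_iterations`, whereas inside a `tf.function` the loop is staged as a graph while-loop, where the setting can actually take effect.

```python
import tensorflow as tf

def per_example(x):
  return tf.reduce_sum(tf.square(x))

xs = tf.random.normal([64, 1024])

# Eagerly, tf.map_fn warns that parallel_iterations > 1 has no effect and
# processes one element at a time, so runtime scales with batch size.
eager_out = tf.map_fn(per_example, xs, parallel_iterations=16)

# Inside a tf.function the map is staged as a graph while-loop, which is
# where parallel_iterations can actually overlap iterations.
@tf.function
def mapped(xs):
  return tf.map_fn(per_example, xs, parallel_iterations=16)

graph_out = mapped(xs)
```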