Compose/3D + GLVisualize + Geometry Primitives #7

Open

SimonDanisch opened this issue Jul 25, 2015 · 12 comments

@SimonDanisch
Member

Hi there,
I thought it's about time that I write down what I expect from Compose/3D and share some thoughts on how I think it could be implemented.

What I expect from Compose/3D:

  • composing objects whose bounding boxes use different units inside a space (context) that itself may use different units
  • the result of compose should be a tree of signals of transformation matrices. This does not account for attributes; maybe it should rather be a tree of Contexts, where the Context type includes attributes and a transformation matrix. The results should then all be in the same space! How far away is this from the current implementation!?
  • integrating attributes. This one is a little tricky. The way it's currently done in Compose, compose(context(0.5, 0.5, 0.5, 0.5), circle(), fill(...)), seems to me either slow or complicated to implement when a lot of flexibility is needed. I need to think about this one; maybe I just need to wrap my head around it.
    So far I'm more into visualize(primitive, kwarg1=x, kwarg2=z...). The nice thing about how Compose currently handles it is that you can globally set style attributes for a context, which seems desirable! So maybe it's worth rethinking for me, and I just need to figure out what this means for GLVisualize.
  • by composing I think of alignments. Coming from Photoshop, this means something like its alignment tools (the original issue embeds a screenshot here). I think this is different from, or higher level than, what Compose does. You would align by creating a new context which, e.g., has the middle of the parent context as its origin, right?!
  • unifying Escher and Compose. The previously mentioned alignments seem to be implemented in Escher. Also, the Tile abstraction comes very close to what I think a composable graphic should look like: a black box with arbitrary renderable objects inside plus a bounding box.
  • deep integration of signals. So you could say something like center at Signal(Point2(0.5)), or hskip(Signal(0.5pt)). Also use signals of bounding boxes/contexts, and hide signals inside a tile/composable graphic. This would mean we can integrate a lot of dynamic information without ever recomposing.
  • I'm a big fan of using Julia objects for deriving layouts. A vector of things implies a list; a matrix implies a grid of things; a dict is a two-column kind of structure. So if you want a grid of images, you would just put them into a matrix. You could supply optional parameters to change the padding, alignment, looks, etc. I like this because it gives often-used Julia types a sensible default visualization, which is nice for visual debugging (see the dispatch sketch after this list).
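
To make the last point concrete, here is a minimal sketch of such type-driven layout. visualize, list_layout, grid_layout, and table_layout are illustrative names for a hypothetical API, not existing functions:

# hypothetical layout-by-type dispatch (all layout functions are illustrative)
visualize(xs::Vector; padding = 0.01) = list_layout([visualize(x) for x in xs], padding)
visualize(xs::Matrix; padding = 0.01) = grid_layout([visualize(x) for x in xs], size(xs), padding)
visualize(d::Dict)                    = table_layout([(visualize(k), visualize(v)) for (k, v) in d])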

If we can agree on these points, I will help @rohitvarkey with Compose3D and replace my current interface in GLVisualize with it. This would mean GLVisualize becomes a backend for Compose3D, and @rohitvarkey should start to move out the WebGL functionality.

Outcome of Geometry Meetup AKA Geometry Primitives

It was just me and @garborg, but we had a very good talk!
I think the conclusion was along the lines that we should all experiment with different vector (fixed vector) representations to see what feels most natural for a specific domain. If everyone is a little experimental and has a clearer picture in the end, we can better decide what a good vector type should look like.
I will experiment with FixedSizeArrays a bit. I will implement a few different versions and play around with them. Some candidates are in: https://gist.github.com/SimonDanisch/76c82631d38f12b2e350#file-geometry-jl .
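
For concreteness, the simplest candidate shape is a field-based immutable (Julia 0.4-era syntax; the gist explores tuple-backed variants as well):

import Base: getindex, length, eltype, +

# simplest candidate: one field per component
immutable Vec3{T}
    x::T
    y::T
    z::T
end

getindex(v::Vec3, i::Integer) = i == 1 ? v.x : (i == 2 ? v.y : v.z)
length(::Vec3) = 3
eltype{T}(::Type{Vec3{T}}) = T
+{T}(a::Vec3{T}, b::Vec3{T}) = Vec3(a.x + b.x, a.y + b.y, a.z + b.z)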

One thing @garborg and I discussed was whether it makes sense to encode the space that a vector/point lives in (e.g. spherical, camera space) in its type. The gain: we would always know which space we are in, which makes interfacing easier and should reduce mistakes. The downside: it complicates the implementation quite a bit and might also make working with vectors annoying.
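
A sketch of the space-tagging idea: the space becomes a phantom type parameter, so cross-space operations turn into method errors instead of silent bugs (all names are illustrative):

abstract Space
abstract WorldSpace  <: Space
abstract CameraSpace <: Space

# the space parameter costs nothing at runtime, but mixing spaces
# now fails at dispatch time instead of producing wrong coordinates
immutable Point3{S <: Space, T}
    x::T
    y::T
    z::T
end

# conversions between spaces become explicit, e.g. (signature only):
# to_camera(view_matrix, p::Point3{WorldSpace}) -> Point3{CameraSpace}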

Random thoughts

NTuple seems to be more of a typealias; homogeneous and heterogeneous tuples get different LLVM representations under the hood.
Consider:

julia> test() = (1, 2, 3)      # homogeneous tuple
julia> test2() = (1f0, 2, 3)   # heterogeneous tuple

julia> @code_llvm test()
define [3 x i64] @julia_test_1601() {   ; lowered to an LLVM array
top:
  ret [3 x i64] [i64 1, i64 2, i64 3]
}

julia> @code_llvm test2()
define { float, i64, i64 } @julia_test2_1609() {   ; lowered to an LLVM struct
top:
  ret { float, i64, i64 } { float 1.000000e+00, i64 2, i64 3 }
}

We should keep this in mind. I think the implications are not that grave, but it could be nasty when compiling to the GPU.

@dcjones said he wants different measures inside one point. I'm not entirely convinced that the use case he mentioned (Point2(2s, 2mm)) is that appealing.
It complicates things quite a bit, and it seems more natural to have two separate vectors, one for the time dimension and one for the spatial dimension.
A more intriguing use case (at least for me and OpenGL) is that there are color types with different precision per channel. This could easily be included in FixedSizeArrays, as only getindex and the eltype function differ. But we would need a more fine-grained inheritance tree for that (see the sketch below).
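
A sketch of the color case: under a FixedSizeArrays-style abstraction, only getindex and eltype differ per type (FixedVector and RGBMixed are illustrative names, not an existing API):

import Base: getindex, length, eltype

abstract FixedVector{T, N}

# a color with different precision per channel
immutable RGBMixed <: FixedVector{AbstractFloat, 3}
    r::Float32
    g::Float16   # lower-precision channel
    b::Float32
end

eltype(::Type{RGBMixed}) = AbstractFloat
length(::RGBMixed) = 3
getindex(c::RGBMixed, i::Integer) = i == 1 ? c.r : (i == 2 ? c.g : c.b)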

Conclusion

I hope this is helpful ;)
Let me know if you have questions!

CC: @dcjones @sjkelly @wkearn @jminardi @timholy @shashi @Keno @lobingera @ViralBShah @rohitvarkey @KristofferC @garborg @randyzwitch @jheinen @tedsteiner @zyedidia @yeesian @jminardi @dreammaker

If you don't want to get pinged, please notify me. These are the people I consider interested in this development.

Best,
Simon

@ViralBShah

This is fantastic! Did the call happen on Thursday, and is this the outcome of that call? I am very interested in following, but don't have much to contribute directly. I am not watching the repo, but please ping me on anything where I can help.

@shashi

shashi commented Jul 26, 2015

result of compose should be a tree of signals of transformation matrices.

deep integration of signals. So you could say something like center at Signal(Point2(0.5)), or hskip(Signal(0.5pt)). Also use signals of bounding boxes/contexts, and hide signals inside a tile/composable graphic. This would mean we can integrate a lot of dynamic information without ever recomposing.

Reactive is meant to be orthogonal to other graphics tools. The graphics tools provide ways to draw something, and Reactive takes that to the interactive dimension. I'm not very inclined towards supporting hskip(Signal(0.5pt)) and (as you seem to imply) Signal arguments for every function in Compose. Firstly, it is not elegant; secondly, the usability-improvement-to-implementation-complexity trade-off is not worth it. In fact, I'm of the opinion that usability is harmed by this, because there are more options for the user, and that's very confusing.

However, one useful thing we could support for great (mainly performance) benefits is composing signals of contexts together. This is much easier to implement correctly, and the renderer can close over the parent context's computed transformation and use it to render the child signals of contexts as they update. This is in line with how Escher supports embedding a Signal of UIs in other static UIs. So instead of hskip(signal_of_length) you just say consume(hskip, signal_of_length).
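
A sketch of that pattern with 2015-era Reactive names (Input, lift, push!); hskip and pt are assumed from Escher/Compose:

using Reactive

len = Input(0.5)   # a signal of lengths

# instead of hskip(Signal(0.5pt)), lift hskip over the signal, producing
# a signal of contexts/tiles that the renderer can embed and update:
spacer = lift(l -> hskip(l * pt), len)

push!(len, 1.0)    # only the spacer is re-derived; nothing around it is recomposed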

unifying Escher and Compose. The previously mentioned alignments seem to be implemented in Escher. Also, the Tile abstraction comes very close to what I think a composable graphic should look like: a black box with arbitrary renderable objects inside plus a bounding box.

Escher and Compose work well in that Compose graphics can be embedded in Escher UIs. Which is exactly how far that should go if you ask me. If you mean the Tile abstraction in Escher, then yes they are supposed to be black boxes. One could write an API for composable graphics similar to this but built on top of Compose. I'm fine with using Compose as it is, now that I'm over the initial part of the learning curve.

If we can agree on these points, I will help @rohitvarkey with Compose3D and replace my current interface in GLVisualize with it. This would mean GLVisualize becomes a backend for Compose3D, and @rohitvarkey should start to move out the WebGL functionality.

Why move out WebGL functionality? Isn't WebGL supposed to be a backend to Compose3D just like GLVisualize?

@dcjones said he wants different measures inside one point. I'm not entirely convinced that the use case he mentioned (Point2(2s, 2mm)) is that appealing.

This sort of thing is very useful in Gadfly. It makes for a simpler implementation.

It complicates things quite a bit, and it seems more natural to have two separate vectors, one for the time dimension and one for the spatial dimension.

I don't exactly understand the concern. In a Gadfly plot where the x-axis is time and the y-axis is, say, the angle of the sun, each point represents a point in (time, angle) space. Here time is a spatial dimension when drawing the plot. The bounding box can then have a width of 1 day, and the points will be beautifully resolved to the right coordinates.
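
A sketch of that resolution step, with illustrative measure wrappers (Sec, MM, and resolve are hypothetical, not Compose's actual types):

# hypothetical measures (not Compose's actual types)
immutable Sec; value::Float64; end   # a time coordinate
immutable MM;  value::Float64; end   # a spatial coordinate in millimeters

# resolve a time coordinate to mm, given a box spanning `span` seconds
# drawn `width` millimeters across
resolve(t::Sec, span::Sec, width::MM) = MM(t.value / span.value * width.value)

# a point at (2s, 2mm) in a box 1 day wide and drawn 200mm across:
x = resolve(Sec(2.0), Sec(86400.0), MM(200.0))   # x in drawing space
# the y-coordinate (2mm) is already in drawing units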

@SimonDanisch
Member Author

@ViralBShah great :) Yeah, the Geometry meetups kickstarted this!

@shashi

Reactive is meant to be orthogonal to other graphics tools. The graphics tools provide ways to draw something, and Reactive takes that to the interactive dimension.

Sure... But as I understand it, Compose should not really be a graphics tool, right? It should be an (interactive) composition tool. Cairo/WebGL/GLVisualize is the graphics tool.
That also explains this:

Why move out WebGL functionality? Isn't WebGL supposed to be a backend to Compose3D just like GLVisualize?

Besides, if GLVisualize depends on Compose3D, I don't want to automatically depend on WebGL, as they are as orthogonal as can be. It's like me suggesting to make Compose depend on ModernGL, even though you draw everything with Cairo.

So instead of hskip(signal_of_length) you just say consume(hskip, signal_of_length)

I might have other ideas about what hskip does. I thought it should directly result in a transformation matrix. As the transformation matrix should be a signal (if you already agreed on the context signals), it seems natural to have everything that ends up in a transformation matrix be a signal. My whole infrastructure already works like that, so implementation-wise it would be really simple and natural for me.
But I guess you don't really have transformation matrices, but rather some HTML tag!?
I'm not sure what the implications are for Escher, but I want to move as much composition as possible to the compilation phase, and then at runtime let everything dynamic be handled by Reactive.
I think that's the recipe for fast interaction. Not even diffing can beat a simple memory update (see the sketch below).
I might be wrong here though, depending on the architecture!
Mostly, I just don't want to have to tell anyone: hey, be careful with that hskip; if you use it with many objects dynamically, things will get slow.
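
A sketch of that "simple memory update", assuming a Reactive signal feeding a GPU uniform; the uniform call is illustrative, GLVisualize-style:

using Reactive

color = Input((1f0, 0f0, 0f0))   # one attribute as a signal

# composed once; each frame the render loop reads the current value and
# writes it into a GPU uniform (no recomposition, no diffing):
#   gluniform(program, :color, value(color))   # illustrative call

push!(color, (0f0, 1f0, 0f0))   # one memory update; the next frame picks it up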

Escher and Compose work well in that Compose graphics can be embedded in Escher UIs. Which is exactly how far that should go if you ask me.

You're the master of Escher ;) I just noticed the similarities and ended up picturing Compose3D as the way I want to compose my UIs in GLVisualize. I see Compose3D as a layouting tool, and GUIs are mostly about layouting.

One could write an API for composable graphics similar to this but built on top of Compose.

Sure, Compose3D could be the lower level, but the higher level needs to live somewhere. So why not just upgrade Compose3D a bit?

This sort of thing is very useful in Gadfly. It makes for a simpler implementation.

I was questioning whether I need Point(::mm, ::s) for that. Why not have compose(context(width::s, height::mm), time::s, heights::mm) -> Vector{Point{mm}}? I just think that as soon as you're drawing, points should have the same unit, preferably some native unit of the backend. After all, you're not really drawing time in that example. If you really want to draw time, Signal{mm} would make more sense.
I'm not totally against it, not at all. It fits well into my "visual debugging: put as much information into a type as possible" scheme. I'm just trying to question what we really need in order for everyone to be happy.
It's just that points with different units are a little finicky with the current type system.
We might want to collect use cases for all kinds of plotting problems to get a better handle on these questions. I will open an issue where we can collect use cases from all the different geometry domains.

I should then try to implement some prototypes that work well for GLVisualize with those use cases, to get a better sense of what is needed.

Best,
Simon

@dcjones

dcjones commented Jul 26, 2015

I was questioning whether I need Point(::mm, ::s) for that. Why not have compose(context(width::s, height::mm), time::s, heights::mm) -> Vector{Point{mm}}?

It's right that all the coordinates get transformed to millimeters, but this doesn't happen when compose is called, but rather when draw is called. There needs to be some heterogeneous coordinate type to store the representation of the graphic before draw is called, because it's not necessarily possible to do the transformation until you know all the parents in the tree and the size of the image. Separating points into separate x, y values is possible but not very appealing: it complicates the code and makes the generalization to 3D harder.

Can you describe how heterogeneous points complicate your code? Isn't it enough to just annotate your functions to disallow them, instead of prohibiting them from existing at all?
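
A sketch of that deferral: the point stores whatever measures it was given, and resolution happens only at draw time (HPoint, to_mm, and ctx are illustrative, not Compose's internals):

# hypothetical heterogeneous point, stored as-is until draw time
immutable HPoint{X, Y}
    x::X   # e.g. a time measure
    y::Y   # e.g. a millimeter measure
end

# only at draw time are the parent tree and image size known, so only
# here can each coordinate be converted to absolute millimeters
function absolute(p::HPoint, ctx)   # ctx: parent transforms + image size
    HPoint(to_mm(p.x, ctx), to_mm(p.y, ctx))   # to_mm is illustrative
end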

@shashi

shashi commented Jul 26, 2015

@SimonDanisch

Sure... But as I understand it, Compose should not really be a graphics tool, right?

That's a matter of perspective... ;) A composition tool is still orthogonal to Reactive.

I thought it should directly result in a transformation matrix. As the transformation matrix should be a signal (if you already agreed on the context signals), it seems natural to have everything that ends up in a transformation matrix be a signal.

A context is a transformation matrix, optionally along with some properties. So if we can support embedding signals of contexts inside other contexts, we get what you are asking for.

Besides, if GLVisualize depends on Compose3D, I don't want to automatically depend on WebGL, as they are as orthogonal as can be. It's like me suggesting to make Compose depend on ModernGL, even though you draw everything with Cairo.

Ideally, GLVisualize should not depend on Compose3D; Compose3D should depend on GLVisualize to provide the OpenGL backend. There will of course be some code inside Compose3D to convert Compose3D objects to the GLVisualize representation. This is how Compose works: it has backends for Cairo, Patchwork, and PGF, and none of these packages knows about Compose.

@ViralBShah

Compose3D should be able to use GLVisualize as a rendering backend - that makes sense to me.

@SimonDanisch
Member Author

@ViralBShah

Compose3D should be able to use GLVisualize as a rendering backend

I'm set on that!

Actually, my main point was that, just as GLVisualize and Cairo don't live in Compose3D, WebGL shouldn't either. Where we put the glue code in the end isn't very important, as long as the packages are compatible in general.

I thought of an interface defined something like this:

abstract Composable

boundingbox(::Composable)::Signal{BoundingBox{Measure}}    # reactive bounds
get_transformation(::Composable)::Signal{Matrix4x4}
set_transformation(::Composable, ::Signal{Matrix4x4})
attributes(::Composable)::Dict{Symbol, Any}                # style attributes
set_attributes(::Composable, ::Dict{Symbol, Any})
draw(::Tree{Composable})                                   # render the composed tree

This might be sufficient for Compose to work with a minimal amount of glue code. This interface works with graphics/geometries that already exist and just alters their attributes and transformations.
I have the feeling that @shashi finds this unacceptable, as it implies mutability.
If not, we need to decide what a composable should be and how general graphics are encoded.
I would suggest using common geometry types and, on top of those, backend-specific objects.
OpenGL is such a huge and flexible backend that it would be a shame to keep people from using that diversity just because, e.g., PGFPlots has no representation for it. A sketch of a possible implementation follows below.
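
For illustration, a minimal sketch of a backend object implementing that interface; GLCube, Matrix4x4, and unit_cube_bb are hypothetical names:

# hypothetical backend object implementing the proposed interface
type GLCube <: Composable
    transform::Signal{Matrix4x4}
    attribs::Dict{Symbol, Any}
end

boundingbox(c::GLCube)        = lift(unit_cube_bb, c.transform)   # bounds follow the transform
get_transformation(c::GLCube) = c.transform
set_transformation(c::GLCube, t::Signal{Matrix4x4}) = (c.transform = t; c)
attributes(c::GLCube)         = c.attribs
set_attributes(c::GLCube, a::Dict{Symbol, Any}) = (merge!(c.attribs, a); c)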

Sorry for being so late to the game and overthinking subjects you have already sufficiently solved for Compose, but I'm just trying to run against a few walls to test out our concepts ;)

@shashi

That's a matter of perspective... ;) A composition tool is still orthogonal to Reactive.

I'm not completely following your point. Reactive is surely orthogonal to a graphics tool, but that doesn't mean we can't use them together, right!?
I just find it very elegant that if I want to change one single color in a huge graph of composed objects, I don't actually do anything else but change that color.
I'm open to any solution that does this. Letting attributes be signals achieves this with my rendering backend, and I'm not sure whether your solution does.
Actually, I can't accept any solution that doesn't do this. Time is unbelievably tight for real-time interactive graphics, especially if I want to offer VR at some point, so we can't really waste any microseconds. And with recomposing, it seems we're talking about an order of magnitude more, so that would be completely unacceptable.
You might decide not to accept signals as parameters for your backend, though :)

@dcjones
If you say it's more user friendly, then we should try to make it work!
Here are a few peculiarities about tuples:
https://gist.github.com/SimonDanisch/bc09255a32d7d2c46f75#file-vectors-jl
Especially the first is relevant for method design.
The results are actually not entirely what I expected; last time I checked, things looked quite different.
But this is pretty much a green light for whatever you're doing, at least from a performance point of view.
I'm all in for usability, so we should bug people or improve things ourselves to make the most usable concept work!

@SimonDanisch
Member Author

Okay, I just talked with @rohitvarkey, and we agreed to stay in close contact, but I will just go ahead and implement my own Compose3D prototype to get a better feel for the problem space. I'm still learning about the problems there, after all.
It's a little redundant, but I feel it's so crucial that it will be good to compare different approaches and choose the ideas that work best for the user and for performance in the end.
I started an issue to collect use cases in #8, to help me understand how data is usually shipped, which in turn should shape how Compose handles data.

@jheinen

jheinen commented Jul 28, 2015

@SimonDanisch

I'm a big fan of using Julia objects for deriving layouts. A vector of things implies a list; a matrix implies a grid of things; a dict is a two-column kind of structure. So if you want a grid of images, you would just put them into a matrix. You could supply optional parameters to change the padding, alignment, looks, etc. ...

I really like this point. But it's up to the underlying API to realize nesting or stacking of graphics objects. Not an easy job in 3D ;-)

@jheinen

jheinen commented Jul 28, 2015

@SimonDanisch

I thought of an interface defined something like this:

abstract Composable
boundingbox(::Composable)::Signal{BoundingBox{Measure}}
get_transformation(::Composable)::Signal{Matrix4x4}
set_transformation(::Composable, ::Signal{Matrix4x4})
attributes(::Composable)::Dict{Symbol, Any}
set_attributes(::Composable, ::Dict{Symbol, Any})
draw(::Tree{Composable})
load(::File, Mime{UniqueIdentifier})
save(::T, Mime{UniqueIdentifier})

Yes, that looks reasonable. But at some point there will be requirements (from end users) like those addressed by Blaze and pandas in the Python world.

As time permits - hopefully in the next days - I'll try to give some more input ...

@SimonDanisch
Member Author

Sorry, the last post was to the wrong issue -.-
Here's the correct one:
JuliaLang/julia#7299
But it's definitely related.

@lobingera

Hello colleagues,

Like some of the comments above, I'd like to see orthogonal packages/modules, i.e. every piece of software clearly structured to do one job perfectly (or almost) and providing a (standardised) interface to adjacent pieces.

In a scenario like this, Cairo, GLVisualize, and WebGL are rendering packages that you use for display.

Compose or Compose3D are layout engines. You ask them to put some geometric entities into relation: boxes on top of boxes, cubes to the left/right of other cubes. Or, fulfilling some constraints, to fit a list of objects (not necessarily the same size) into a box, or into a tightly fitting box.

Going towards interaction, I miss the ability to ask Compose about its content. Assuming a 'draw' operation (so given some rendering size): what is currently at position (100.0, 40.0) relative to the display window's upper-left corner, where my mouse pointer is right now? Which object, which bounding box, which context? What is the nearest object, what color, etc.? (See the sketch below.)
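
A sketch of such a query, walking the drawn tree and hit-testing bounding boxes (pick, HitInfo, and the node fields are hypothetical, not existing Compose functions):

# hypothetical hit-testing sketch
immutable HitInfo
    primitive::Any     # the object under the cursor
    context::Any       # the context it was composed into
    boundingbox::Any
end

# return the topmost object at window position (x, y);
# assumes the drawn tree can be flattened into draw order
function pick(drawn_nodes, x::Real, y::Real)
    for node in reverse(drawn_nodes)              # front to back
        if bbcontains(node.boundingbox, x, y)     # bbcontains is illustrative
            return HitInfo(node.primitive, node.context, node.boundingbox)
        end
    end
    return nothing
end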

Gadfly, Winston, and others would transform raw input data into Compose objects, and Compose, in a 'draw' step, would ask the rendering modules for a picture on screen.

A subpart of Compose (or a new module) would enable aesthetic (I'm not a native English speaker) placement of items, like non-overlapping labels.
