Compose/3D + GLVisualize + Geometry Primitives #7
Comments
This is fantastic! Did the call happen on Thursday, and is this the outcome of that call? I am very interested in following, but don't have much to contribute directly. I am not watching the repo, but please ping me on anything where I can help.
Reactive is meant to be orthogonal to other graphics tools. The graphics tools provide ways to draw something, and Reactive takes that to the interactive dimension. I'm not very inclined towards supporting However, one useful thing we could support for great (mainly performance) benefits is composing signals of contexts together. This is relatively much easier to implement correctly, and the renderer can close over the parent context's computed transformation and use that for rendering the child signals of contexts as they update. This is in line with how Escher supports embedding a Signal of UIs in other static UIs. So instead of
Escher and Compose work well together in that Compose graphics can be embedded in Escher UIs. Which is exactly how far that should go, if you ask me. If you mean the Tile abstraction in Escher, then yes, they are supposed to be black boxes. One could write an API for composable graphics similar to this, but built on top of Compose. I'm fine with using Compose as it is, now that I'm over the initial part of the learning curve.
Why move out the WebGL functionality? Isn't WebGL supposed to be a backend to Compose3D, just like GLVisualize?
This sort of thing is very useful in Gadfly. It makes for a simpler implementation.
I don't exactly understand the concern. In a Gadfly plot where the x-axis is time and the y-axis is, say, the angle of the sun, each point represents a point in (time, angle) space. Here time is a spatial dimension when drawing the plot. The bounding box can then have a width of 1 day, and the points will be beautifully resolved to the right coordinates.
@ViralBShah great :) Yeah, the Geometry meetups kickstarted this!
Sure... But as I understand it, Compose should not really be a graphics tool, right? It should be an (interactive) composition tool. Cairo/WebGL/GLVisualize are the graphics tools.
Besides, if GLVisualize depends on Compose3D, I don't want to automatically depend on WebGL, as the two are as orthogonal as it gets. It's like me suggesting to make Compose depend on ModernGL, even though you draw everything with Cairo.
I might have other ideas about what hskip does. I thought it should directly result in a transformation matrix. As the transformation matrix should be a signal (if you already agreed on the context signals), it seems natural to have everything that ends up in a transformation matrix be a signal. My whole infrastructure already works like that, so implementation-wise it would be really simple and natural for me.
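As an editorial aside, the "hskip resolves directly to a signal of transformation matrices" idea could be sketched roughly like this. This is a minimal sketch, not the actual GLVisualize or Compose code: the `Signal` stub stands in for Reactive.Signal (which lifts functions over signals with `map`), and all names are assumptions.

```julia
# Stand-in for Reactive.Signal, so the sketch is self-contained.
struct Signal{T}
    value::T
end
Base.map(f, s::Signal) = Signal(f(s.value))

# A horizontal skip is just a translation along x, lifted over the signal:
hskip(amount::Signal) =
    map(x -> [1.0 0.0 0.0 x;
              0.0 1.0 0.0 0.0;
              0.0 0.0 1.0 0.0;
              0.0 0.0 0.0 1.0], amount)

m = hskip(Signal(0.5)).value
m[1, 4]  # 0.5, the x-translation
```

Whenever the input signal updates, the mapped signal yields a fresh 4x4 matrix, which is exactly "everything that ends up in a transformation matrix is a signal".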
You're the master of Escher ;) I just noticed the similarities and ended up picturing Compose3D as the way I want to compose my UIs in GLVisualize. I see Compose3D as a layout tool, and GUIs are mostly about layout.
Sure, Compose3D could be the lower level, but the higher level needs to live somewhere. So why not just upgrade Compose3D a bit?
I was questioning whether I need Point(::mm, ::s) for that. Why not have I should then try to implement some prototypes which work well for GLVisualize with the use cases, to get a better sense of what is needed. Best,
It's right that all the coordinates get transformed to millimeters, but this doesn't happen when Can you describe how heterogeneous points complicate your code? Isn't it enough to just annotate your functions to disallow them, instead of prohibiting them from existing at all?
That's a matter of perspective... ;) A composition tool is still orthogonal to Reactive.
A context is a transformation matrix, optionally along with some properties. So if we can support embedding signals of contexts inside other contexts we get what you are asking for.
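The "context = transformation matrix, optionally with properties" idea could be sketched as follows. This is a hypothetical sketch with made-up names, not the actual Compose API; it shows why embedding only needs the parent's already-computed matrix, so a renderer can cache that matrix and reuse it as embedded child signals update.

```julia
# A context pairs a transformation with optional drawing properties.
struct Context
    transform::Matrix{Float64}     # 3x3 affine matrix for 2D
    properties::Dict{Symbol,Any}   # e.g. :fill => "red"
end

translation(dx, dy) = Context([1.0 0.0 dx; 0.0 1.0 dy; 0.0 0.0 1.0],
                              Dict{Symbol,Any}())

# Embedding a child composes its transform onto the parent's computed one;
# properties cascade from parent to child.
embed(parent::Context, child::Context) =
    Context(parent.transform * child.transform,
            merge(parent.properties, child.properties))

c = embed(translation(10.0, 0.0), translation(0.0, 5.0))
(c.transform[1, 3], c.transform[2, 3])  # (10.0, 5.0)
```

Nothing about `embed` needs to re-walk the tree above the parent, which is the performance benefit mentioned above.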
Ideally, GLVisualize must not depend on Compose3D; Compose3D should depend on GLVisualize to provide the OpenGL backend. There will of course be some code inside Compose3D to convert Compose3D objects to GLVisualize representations. This is how Compose works: it has backends for Cairo, Patchwork and PGF, and none of these packages know about Compose.
Compose3D should be able to use GLVisualize as a rendering backend - that makes sense to me.
I'm set on that! Actually, my main point was that, just as GLVisualize and Cairo don't live in Compose3D, WebGL shouldn't either. Where we put the glue code in the end isn't very important, as long as the packages are compatible in general. I thought of an interface, defined as something like this:

```julia
abstract Composable

boundingbox(::Composable)::Signal{BoundingBox{Measure}}
get_transformation(::Composable)::Signal{Matrix4x4}
set_transformation(::Composable, ::Signal{Matrix4x4})
attributes(::Composable)::Dict{Symbol, Any}
set_attributes(::Composable, ::Dict{Symbol, Any})
draw(::Tree{Composable})
```

This might be sufficient for Compose to work with a minimal amount of glue code. This interface works with graphics/geometries that already exist and just alters their attributes and transformations. Sorry for being so late in the game and overthinking subjects you have already sufficiently solved for Compose, but I'm just trying to run against a few walls to test out our concepts ;)
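To make the proposed interface concrete, here is a minimal sketch of one type satisfying it. `Signal` is stubbed out so the example is self-contained (in practice it would be Reactive.Signal, and `Matrix4x4` a FixedSizeArrays type); the `Cube` type and everything besides the interface function names are assumptions.

```julia
using LinearAlgebra  # for the identity matrix I

abstract type Composable end

mutable struct Signal{T}   # stand-in for Reactive.Signal
    value::T
end

mutable struct Cube <: Composable
    transformation::Signal{Matrix{Float64}}  # plain Matrix instead of Matrix4x4
    attrs::Dict{Symbol,Any}
end

get_transformation(c::Cube) = c.transformation
set_transformation(c::Cube, s::Signal{Matrix{Float64}}) = (c.transformation = s; c)
attributes(c::Cube) = c.attrs
set_attributes(c::Cube, d::Dict{Symbol,Any}) = (merge!(c.attrs, d); c)

c = Cube(Signal(Matrix{Float64}(I, 4, 4)),
         Dict{Symbol,Any}(:color => "red"))
set_attributes(c, Dict{Symbol,Any}(:size => 2.0))
attributes(c)  # Dict with :color and :size
```

The key property is the one stated above: the interface never creates geometry, it only reads and alters the attributes and transformation of objects that already exist in the backend.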
I'm not completely following your point. Sure, Reactive is orthogonal to a graphics tool, but that does not mean we can't use it together with a graphics tool, right!? @dcjones
Okay, I just talked with @rohitvarkey, and we agreed that we will stay in close contact, but that I will just go ahead and implement my own Compose3D prototype to get a better feel for the problem space. I'm still learning about the problems there, after all.
I really like this point. But it's up to the underlying API to realize nesting or stacking of graphics objects. Not an easy job in 3D ;-)
Yes, that looks reasonable. But at some point there will be requirements (from the end users) like those provided by Blaze and pandas (in the Python world). As time permits - hopefully in the next days - I'll try to give some more input.
Sorry, the last post was to the wrong issue -.-
Hello colleagues, like some of the above comments, I'd like to see orthogonal packages/modules, i.e. every piece of software clearly structured to do one job perfectly (or almost) and providing a (standardised) interface to adjacent pieces. In a scenario like this: Compose and Compose3D are layout engines. You ask them to put some geometric entities into relation, like boxes on top of boxes, or cubes left/right of other cubes; or, fulfilling some constraints, to put a list of objects (not necessarily the same size) into a box, or a tightly fitting box.

Going towards interaction, I miss the feature of asking Compose about its content. Assuming a 'draw' operation (so given some rendering size): what is currently at position 100,40.0 relative to the display window's upper left corner, where my mouse pointer is right now? Which object, which bounding box, which context? What is the nearest object, what color, etc.?

Gadfly and Winston and others would transform raw input data into Compose objects, and Compose, in a 'draw' step, would ask the rendering modules for a picture on screen. A subpart of Compose (or a new module) would enable aesthetically pleasing (I'm not an English native speaker) placement of items, like non-overlapping labels.
Hi there,
I thought it's about time that I write down what I expect from Compose/3D and share some thoughts on how I think it could be implemented.
What I expect from Compose/3D:
- So far I'm more into `visualize(primitive, kwarg1=x, kwarg2=z...)`. A nice thing about how Compose currently handles it is that you can globally set style attributes for a context, which seems desirable! So maybe it's worth rethinking for me, and I just need to figure out what this means for GLVisualize.
- Escher's `Tile` abstraction seems to come very close to what I think a composable graphic looks like: a black box with arbitrary renderable objects inside, plus a bounding box.
- `center at Signal(Point2(0.5))`, or `hskip(Signal(0.5pt))`. Also use signals of bounding boxes/contexts. Hide signals inside a tile/composable graphic. This would mean we can integrate a lot of dynamic information without ever recomposing.

If we can agree on these points, I will help @rohitvarkey with Compose3D and replace my current interface in GLVisualize with it. This would mean GLVisualize becomes a backend for Compose3D, and @rohitvarkey should start to move out the WebGL functionality.
Outcome of Geometry Meetup AKA Geometry Primitives
It was just me and @garborg, but we had a very good talk!
I think the conclusion was along the lines that we should all experiment with different vector (fixed-size vector) representations, to see what feels most natural for a specific domain. If everyone is a bit experimental and has a clearer picture in the end, we can better decide what a good vector type should look like.
I will experiment with FixedSizeArrays a bit: implement a few different versions and play around with them. Some candidates are in: https://gist.github.com/SimonDanisch/76c82631d38f12b2e350#file-geometry-jl .
One thing @garborg and I discussed was whether it makes sense to include the space that a vector/point lives in, e.g. spherical, camera space, etc. The gain: we would always know which space we are in, which makes interfacing easier and should reduce mistakes. The downside: it complicates the implementation quite a bit and might also make working with vectors annoying.
Random thoughts
NTuple seems to be more of a `typealias`, even though it is a different type under the hood. We should keep this in mind. I think the implications are not that grave, but it could be nasty when compiling for the GPU.
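For what it's worth, in current Julia the alias nature is explicit and easy to check:

```julia
# NTuple{N,T} is defined as an alias for a homogeneous Tuple type,
# so the two spellings denote exactly the same type:
NTuple{3, Int} === Tuple{Int, Int, Int}  # true
```

There is only one tuple type under the hood, which is the relevant fact when mapping tuples to flat GPU memory layouts.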
@dcjones said he wants different measures inside one point. I'm not entirely convinced that the use case he mentioned (`Point2(2s, 2mm)`) is that appealing. It complicates things quite a bit, and it seems more natural to have two separate vectors, one for the time and one for the spatial dimension.
A more intriguing use case (at least for me and OpenGL) is that there are color types with different precisions per channel. This could easily be included in FixedSizeArrays, as only `getindex` and the `eltype` function are different. But we would need a more fine-grained inheritance tree for that.
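The mixed-precision color idea could be sketched like this. The type name and layout are made up for illustration, not FixedSizeArrays code; the point is that only `getindex` and the element type differ from an ordinary homogeneous fixed-size vector.

```julia
# A fixed-size color whose channels carry different precisions.
struct RGBMixed
    r::Float32   # high-precision red channel
    g::Float16   # lower-precision green channel
    b::Float16   # lower-precision blue channel
end

# Indexing returns the raw channel value; the declared element type is
# the type the channels promote to.
Base.getindex(c::RGBMixed, i::Int) = (c.r, c.g, c.b)[i]
Base.eltype(::Type{RGBMixed}) = Float32
Base.length(c::RGBMixed) = 3

c = RGBMixed(1.0f0, Float16(0.5), Float16(0.25))
c[2]  # the Float16 green channel
```

This is also why a finer-grained inheritance tree would be needed: such a type can't claim a single concrete channel type the way a homogeneous vector does.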
Conclusion
I hope this is helpful ;)
Let me know if you have questions!
CC: @dcjones @sjkelly @wkearn @jminardi @timholy @shashi @Keno @lobingera @ViralBShah @rohitvarkey @KristofferC @garborg @randyzwitch @jheinen @tedsteiner @zyedidia @yeesian @jminardi @dreammaker
If you don't want to get pinged, please let me know. These are the people I consider to be interested in this development.
Best,
Simon