More descriptive memory management #30
It's a good idea. Maybe
Funny thing... I was thinking of So, the two options are:
I don't think that's a good idea - we want the users to be able to call these things easily; if we bury them in nested namespaces, then the user has to write out the whole namespace qualification every time.
While this is a good idea and could be appreciated, I feel that there is a problem with it: this basically adds new conventions and methods, with their own particular meaning, on top of an existing standard. Certainly, pointers are a bit of a mess in C++, but that is pretty much not Ginkgo's problem but the std's. Secondly, there is existing documentation available on these issues which should be sufficient, because this is indeed "basic" C++ (modern C++, at least). I feel that providing this documentation together with some well put together examples, such as what you have done before, should be enough. Giving feedback to the std people could eventually also be an option.
I think we agree that the concepts defined in the standard are not adequate for Ginkgo. You are basically telling us that this is not our problem, that there is a problem with the standard, and that we should wait for them to fix it. My opinion is that there's nothing wrong with the standard:
C++ was built to be extensible and to allow implementing (efficient) abstractions to make the code more expressive (both for the compiler and for the users) and more readable. Thus, I don't see anything wrong with adding "missing" abstractions to our library. This may, or may not, lead to adding stuff to the standard. If we don't implement any "missing" abstractions ourselves, I can tell you right away that approaching the std people with it won't get us anywhere (they'll just ask why we didn't implement it ourselves). There are a lot of examples where people have implemented things on top of the standard which "feel" like they should be in the standard:
The only question I see here is whether these abstractions make sense outside Ginkgo. I'm not sure at this point, so my idea would be to implement them inside Ginkgo, and if we figure out that there's a broader use-case for them, we can always extract them into a separate library.
Also, @tcojean, if you look at the proposed PR, you will realize that it's not a simple case of renaming some stuff just because we feel like it; it actually implements useful utilities which are otherwise not available.
@hartwiganzt asked me to take a look at this and comment. I recently wrote a document (about a different topic, parallel finite-element assembly) that uses the term "share." When I showed it to the team on Tuesday, they were utterly confused and we spent most of an hour arguing about whether "share" means something an owner (not I) does to me, or something I (the owner) do to another. "Shared" means something different than "share," etc. I finally gave up and used mathematical symbols and set notation to describe everything ;-) . I worry a bit that "take," "borrow," "give," etc. could cause similar confusion: is this from the caller's perspective, or from the recipient's perspective? On the other hand, I can see the argument that you want to help users understand.

My recommendations

Summary (TL;DR)
Users need to understand references

I would say: C++ users need to understand at least what it means to give the following things as arguments to a function:
Avoid smart pointers where passing by value works

Consider whether you really need so many smart pointers. C++ has been evolving towards making it cheap to return and pass objects by value; C++11's move semantics are a big part of that. The advantage of this approach is that you can reserve the tricky semantics of smart pointers for things that really need them. Sometimes you need an object to be unique, or to share an object. If everything is a smart pointer, then those special cases might get lost in the syntax noise. For example, two different sparse matrices may share the same graph, so it makes sense for the matrices' constructors to take the graph by shared pointer.

User experience intent

Beyond this, it depends on your intent for the user experience. My preference is to make semantics (especially performance-related semantics) explicit in the syntax, even if it makes users think harder and type more. Not everybody likes that. For example, some libraries hide these details from users entirely.
Regarding my preference for explicit semantics: Last time I showed an interface design to an aero engineer, they said "I hate this, because it looks like X10 [the asynchronous PGAS programming language]." I responded, "I like this, because it's explicit about where and how I'm accessing the object." They replied, "That's exactly what I hate about X10. Why can't I just USE THE OBJECT?" So, there's no pleasing everyone ;-) . You have to decide what your target audience wants.
@mhoemmen thanks for commenting. TL;DR version: just skip to "Questions".

Ginkgo's design
This is exactly what guided the design of Ginkgo. Sometimes we do want to "share" the object, but we don't want this to be the default (then we get Java's reference semantics, where we end up not having a clue which objects are currently alive and which ones are destroyed, and risk ending up with circular dependencies for which we need garbage collection). E.g. we want to use a matrix to construct a solver, but we don't want to end up with 2 copies of the matrix. Ginkgo always gives you a unique pointer to the object, which you can then pass around in several ways:
In any case, you need to explicitly say what you want to do. This way there are no hidden costs (from implicitly cloning objects), and no strange side effects (from implicitly sharing an object).

Target audience
My impression is (@hartwiganzt correct me if I'm wrong) that we are not trying to be a high-level library which an average domain scientist will see; they will probably use Ginkgo as a backend to some other high-level software. The goal is to have abstractions that prevent stupid mistakes and save you from working directly with raw memory, while only introducing minimal overhead (nothing more than constant overhead like launching a kernel, or calling a few virtual methods). You should expect Ginkgo to give the same performance as doing everything yourself as long as your problem is large enough. For small problems, you may get some overhead from virtual calls, etc. I guess this is something libraries like Trilinos or FreeFem++ may be interested in.

Questions

So the question here is actually what you, as a potential user of Ginkgo, prefer:
From your comments it seems that you prefer option 3? (That's also my personal preference). Assuming we agree for 3, the main question of this issue is which option do you prefer for specifying how to share?
Combinations of the above are also possible - we can implement only some of the helpers (there is one obvious candidate to start with).
TL;DR
I'm OK with Ginkgo deferring choices of memory management strategy to users, even if it means typing more characters. C++17 copy elision means that you can write functions
and Thing's constructor will be invoked exactly once, inside I worry a little bit about the examples above having a lot of unique-to-shared conversions. They suggest to me that the interface uses |
Unfortunately I cannot give a TL;DR version here; I think we need to know some specifics of Ginkgo to continue the discussion.
We don't "need" them, but it saves us from specifying the type twice:

```c++
std::shared_ptr<SomeType> x = createSomeType();
```

vs

```c++
auto x = gko::share(createSomeType());
```

I forgot to mention one thing before: Ginkgo classes that use pointers are polymorphic; passing by value destroys the runtime type, and we cannot pass an abstract class by value.

Ginkgo's abstract classes

To make this discussion less hypothetical, here is a bare-bones version of Ginkgo, so I can show exactly what these extensions are for and why we are going for pointers and not values. Note that Ginkgo uses C++11, so there'll be no C++14 or 17 features in the code below.

```c++
// Models a linear operator A : b -> x == Ab
// ( A(alpha*x + beta*y) == alpha * Ax + beta * Ay )
struct LinOp {
    virtual ~LinOp() = default;
    virtual std::unique_ptr<LinOp> clone() const = 0;
    virtual void apply(const LinOp *b, LinOp *x) const = 0;
};

// Models a higher-order operator between linear operators: F : op -> F(op)
struct LinOpFactory {
    virtual ~LinOpFactory() = default;
    virtual std::unique_ptr<LinOp> generate(std::shared_ptr<const LinOp> op) const = 0;
};
```
These two classes are the backbone of Ginkgo, and we can use them to implement matrices, solvers and preconditioners. Even factorizations can be expressed in terms of these.

Concrete classes

```c++
struct Dense : LinOp {
    std::unique_ptr<LinOp> clone() const override { /* ... */ }
    void apply(const LinOp *b, LinOp *x) const override {
        // does matrix-vector product
    }
    static std::unique_ptr<Dense> create(/*...*/);
};

struct Cg : LinOp {
    std::unique_ptr<LinOp> clone() const override { /* ... */ }
    void apply(const LinOp *b, LinOp *x) const override {
        // solves "system_matrix * x = b" using CG
        // uses original value of `x` as initial guess
    }
    std::shared_ptr<const LinOp> system_matrix;
};

struct CgFactory : LinOpFactory {
    std::unique_ptr<LinOp> generate(std::shared_ptr<const LinOp> op) const override {
        return std::unique_ptr<Cg>(new Cg{op});
    }
    static std::unique_ptr<CgFactory> create(/*...*/);
};
```
Use case

We have a matrix stored in a file, and the following utilities:

```c++
std::unique_ptr<Dense> read(const std::string &filename);
void user_function(std::shared_ptr<Dense> dense_matrix); // only works for the Dense operator
```

Scenario 1: only solve the system, with no call to user_function.
I kinda feel bad just parachuting in and telling you to do this and that.... I'm only here because @hartwiganzt asked me to comment.
Good point :-D I understand why you need to return a lot of unique pointers. If users find themselves calling unique-to-shared all the time, you could consider making sharing the default for objects that users frequently share, like sparse graphs and matrices.
Your comments are always welcome, they were useful before, and I am always interested to hear your opinion about design issues. I only wish I remembered to point out earlier that this is polymorphic stuff, would have saved us both a lot of typing. 😄 I would still want to hear your opinion on these wrappers, just look at the 3 examples above, and the without/with versions of the code. Which one do you like better? Or would you do something in between?
We do this for |
Glad you don't mind my commenting :D
It's hard to represent singletons right.
Did you mean references instead of pointers?
(Pointers only make sense for arguments that could be null.)
Just to elaborate on my comments: I like your design, especially how it encourages safe use of pointers.
In brief, I like the helper functions, but have some suggestions:
Thanks for patiently explaining!
@mhoemmen the signature is currently with pointers, not references, and I do agree this is a problem. However, changing to references doesn't solve it completely, and introduces some issues of its own. I feel this doesn't affect the TL;DR version of the entire issue (#37), as @mhoemmen put it nicely.
As for this issue, I feel that we did explain/discuss it quite a bit, and I know that @pratikvn, @mhoemmen and I are all for adding (some version of) these helpers to Ginkgo. Maybe the easiest way is to "vote" on the issue (just pick a reaction at the top of the page), and you can optionally comment here if you have something to add. (This also goes for anyone who may have silently tracked this discussion, but didn't comment). Thanks!
I actively followed the discussion, but I am not really decided:
All of that makes sense, but just wanted to mention:
You can make this argument for any programming language/library; there are always multiple ways to do something.

```c++
void f1() { int *x = new int; /* do something */ delete x; }
void f2() { std::unique_ptr<int> x(new int); /* do something */ }

template <class T> T g1(T x) { return std::abs(x); }
template <class T> T g2(T x) { using std::abs; return abs(x); }
```

For most cases either version works.
You are right, but that does not help. In other words: people who are new to Ginkgo and see something written in one fashion here and another fashion there (Ginkgo-specific terms) may be confused. I would assume it would be hard to get outside contributions using these features. But I am happy to be surprised!
This is my opinion, others may disagree:
It does.
It doesn't (it's just a direct extension of concepts we already have). But that is also problematic: when do you start considering something "a new language"? In C, that's pretty clear - you have the core language, and you can define functions. That's the only "extendable" part of the language. In C++, you can customize more things: define new types, define operators for those types, control how a type is constructed/destructed, how its memory is allocated, etc. You can even create new types which depend on the user's code (i.e. they don't exist before the user has written their program). For example, when you see a piece of code in C:

```c
typedef /* implementation defined */ T;
T a, b, c, d, x, y, z;
a = b + c;  // adds b and c and stores the result in a
x -= y + a; // adds y and a and subtracts the sum from x
d = z * x;  // multiplies z and x and stores the result in d
return d;
```

With C++ I can make the same code do the following (without changing the code above, just the implementation of the type):

```c++
typedef /* implementation defined */ T;
T a, b, c, d, x, y, z;
a = b + c;  // doesn't do anything
x -= y + a; // opens a file "foo" and prints -2 in it
d = z * x;  // prints "Hello world" on standard output
return d;   // finds your default web browser and opens this issue
```

Of course, this is not a very smart thing to do, but the point is that C++ is built so you can extend it and modify it, and build your own domain-specific languages in it. Is Ginkgo an example of that?
I would have to see an example of what you mean to be able to comment on it. It's too hypothetical this way.
I'm not an expert on modern C++ and don't know Ginkgo, but since @hartwiganzt asked me for my opinion: pass function arguments as const references, and avoid shared pointers (use only unique_ptr). And there are things like std::make_unique and std::make_shared - don't they do what you want without introducing confusing new operations like "share" and "give"? Either keep it simple (using references, as @mhoemmen also suggested) or make it easy to read for people who are proficient in modern C++. The latter tends to lead to nasty low-level code, I find, involving move semantics and stuff like that.
Thanks for your comment.
Unfortunately this would cause performance penalties in Ginkgo (unless we use shared pointers behind the scenes, which hides things from users, and makes it more difficult to track bugs).
No, they create a unique/shared pointer from something that wasn't a pointer.
Yes, but additionally it ensures that we are actually operating on pointers with ownership. So something like:

```c++
int *p = some_smart_pointer.get();
shared_ptr<int> shared_p(p);    // works
auto shared_p2 = gko::share(p); // error, p doesn't have ownership to share
```

The same thing here:

```c++
int *p = some_smart_pointer.get();
unique_ptr<int> unique_p(std::move(p));  // works
unique_ptr<int> unique_p2(gko::give(p)); // error, p doesn't have ownership to give
```

So it is designed to prevent bugs from accidentally having an object "owned" by two different smart pointers.
![Ginkgo](https://github.com/ginkgo-project/ginkgo/raw/master/assets/logo.png)

1.0.0
=====

The Ginkgo team is proud to announce the first release of Ginkgo, the next-generation high-performance on-node sparse linear algebra library. Ginkgo leverages the features of modern C++ to give you a tool for the iterative solution of linear systems that is:

* __Easy to use.__ Interfaces with cryptic naming schemes and dozens of parameters are a thing of the past. Ginkgo was built with good software design in mind, making simple things simple to express.
* __High performance.__ Our optimized CUDA kernels ensure you are reaching the potential of today's GPU-accelerated high-end systems, while Ginkgo's open design allows extension to future hardware architectures.
* __Controllable.__ While Ginkgo can automatically move your data when needed, you remain in control by optionally specifying when the data is moved and what its ownership scheme is.
* __Composable.__ Iterative solution of linear systems is an extremely versatile field, where effective methods are built by mixing and matching various components. Need a GMRES solver preconditioned with a block-Jacobi enhanced BiCGSTAB? Thanks to its novel linear operator abstraction, Ginkgo can do it!
* __Extensible.__ Did not find a component you were looking for? Ginkgo is designed to be easily extended in various ways. You can provide your own loggers, stopping criteria, matrix formats, preconditioners and solvers to Ginkgo and have them integrate as well as the natively supported ones, without the need to modify or recompile the library.

Ease of Use
-----------

Ginkgo uses high level abstractions to develop an efficient and understandable vocabulary for high-performance iterative solution of linear systems.
As a result, the solution of a system stored in [matrix market format](https://math.nist.gov/MatrixMarket/formats.html) via a preconditioned Krylov solver on an accelerator is only [20 lines of code away](https://github.com/ginkgo-project/ginkgo/blob/master/examples/minimal-cuda-solver/minimal-cuda-solver.cpp):

```c++
#include <ginkgo/ginkgo.hpp>
#include <iostream>

int main()
{
    // Instantiate a CUDA executor
    auto gpu = gko::CudaExecutor::create(0, gko::OmpExecutor::create());
    // Read data
    auto A = gko::read<gko::matrix::Csr<>>(std::cin, gpu);
    auto b = gko::read<gko::matrix::Dense<>>(std::cin, gpu);
    auto x = gko::read<gko::matrix::Dense<>>(std::cin, gpu);
    // Create the solver
    auto solver =
        gko::solver::Cg<>::build()
            .with_preconditioner(gko::preconditioner::Jacobi<>::build().on(gpu))
            .with_criteria(
                gko::stop::Iteration::build().with_max_iters(1000u).on(gpu),
                gko::stop::ResidualNormReduction<>::build()
                    .with_reduction_factor(1e-15)
                    .on(gpu))
            .on(gpu);
    // Solve system
    solver->generate(give(A))->apply(lend(b), lend(x));
    // Write result
    write(std::cout, lend(x));
}
```

Notice that Ginkgo is not a tool that generates C++. It _is_ C++. So just [install the library](https://github.com/ginkgo-project/ginkgo/blob/master/INSTALL.md) (which is extremely simple due to its CMake-based build system), include the header and start using Ginkgo in your projects.

Already have an existing application and want to use Ginkgo to implement some part of it? Check out our [integration example](https://github.com/ginkgo-project/ginkgo/blob/master/examples/three-pt-stencil-solver/three-pt-stencil-solver.cpp#L144) for a demonstration on how Ginkgo can be used with raw data already available in the application. If your data is in one of the formats supported by Ginkgo, it may be possible to use it directly, without creating a Ginkgo-dedicated copy of it.
Designed for HPC
----------------

Ginkgo is designed to quickly adapt to rapid changes in the HPC architecture. Every component in Ginkgo is built around the _executor_ abstraction which is used to describe the execution and memory spaces where the operations are run, and the programming model used to realize the operations. The low-level performance critical kernels are implemented directly using each executor's programming model, while the high-level operations use a unified implementation that calls the low-level kernels. Consequently, the cost of developing new algorithms and extending existing ones to new architectures is kept relatively low, without compromising performance. Currently, Ginkgo supports CUDA, reference and OpenMP executors.

The CUDA executor features highly-optimized kernels able to efficiently utilize NVIDIA's latest hardware. Several of these kernels appeared in recent scientific publications, including the optimized COO and CSR SpMV, and the block-Jacobi preconditioner with its adaptive precision version.

The reference executor can be used to verify the correctness of the code. It features a straightforward single threaded C++ implementation of the kernels which is easy to understand. As such, it can be used as a baseline for implementing other executors, verifying their correctness, or figuring out if unexpected behavior is the result of a faulty kernel or an error in the user's code.

Ginkgo 1.0.0 also offers initial support for the OpenMP executor. OpenMP kernels are currently implemented as minor modifications of the reference kernels with OpenMP pragmas and are considered experimental. Full OpenMP support with highly-optimized kernels is reserved for a future release.

Memory Management
-----------------

As a result of its executor-based design and high level abstractions, Ginkgo has explicit information about the location of every piece of data it needs and can automatically allocate, free and move the data where it is needed.
However, lazily moving data around is often not optimal, and determining when a piece of data should be copied or shared in general cannot be done automatically. For this reason, Ginkgo also gives the user explicit control of sharing and moving its objects via the dedicated ownership commands: `gko::clone`, `gko::share`, `gko::give` and `gko::lend`. If you are interested in a detailed description of the problems the C++ standard has with these concepts, check out [this Ginkgo Wiki page](https://github.com/ginkgo-project/ginkgo/wiki/Library-design#use-of-pointers), and for more details about Ginkgo's solution to the problem and the description of the ownership commands take a look at [this issue](#30).

Components
----------

Instead of providing a single method to solve a linear system, Ginkgo provides a selection of components that can be used to tailor the solver to your specific problem. It is also possible to use each component separately, as part of larger software. The provided components include matrix formats, solvers and preconditioners (commonly referred to as "_linear operators_" in Ginkgo), as well as executors, stopping criteria and loggers.

Matrix formats are used to represent the system matrix and the vectors of the system. The following are the supported matrix formats (see [this Matrix Format wiki page](https://github.com/ginkgo-project/ginkgo/wiki/Matrix-Formats-in-Ginkgo) for more details):

* `gko::matrix::Dense` - the row-major storage dense matrix format;
* `gko::matrix::Csr` - the Compressed Sparse Row (CSR) sparse matrix format;
* `gko::matrix::Coo` - the Coordinate (COO) sparse matrix format;
* `gko::matrix::Ell` - the ELLPACK (ELL) sparse matrix format;
* `gko::matrix::Sellp` - the SELL-P sparse matrix format based on the sliced ELLPACK representation;
* `gko::matrix::Hybrid` - the hybrid matrix format that represents a matrix as a sum of an ELL and COO matrix.
All formats offer support for the `apply` operation that performs a (sparse) matrix-vector product between the matrix and one or multiple vectors. Conversion routines between the formats are also provided. `gko::matrix::Dense` offers an extended interface that includes simple vector operations such as addition, scaling, dot product and norm, which are applied on each column of the matrix separately. The interface for all operations is designed to allow any type of matrix format as a parameter. However, version 1.0.0 of this library supports only instances of `gko::matrix::Dense` as vector arguments (the matrix arguments do not have any limitations).

Solvers are utilized to solve the system with a given system matrix and right hand side. Currently, you can choose from several high-performance Krylov methods implemented in Ginkgo:

* `gko::solver::Cg` - the Conjugate Gradient method (CG) suitable for symmetric positive definite problems;
* `gko::solver::Fcg` - the flexible variant of Conjugate Gradient (FCG) that supports non-constant preconditioners;
* `gko::solver::Cgs` - the Conjugate Gradient Squared method (CGS) for general problems;
* `gko::solver::Bicgstab` - the BiConjugate Gradient Stabilized method (BiCGSTAB) for general problems;
* `gko::solver::Gmres` - the restarted Generalized Minimal Residual method (GMRES) for general problems.

All solvers work with system matrices stored in any of the matrix formats described above, and any other general _linear operator_, such as combinations and compositions of other operators, or any matrix format you defined specifically for your application.

Preconditioners can be effective at improving the convergence rate of Krylov methods. All solvers listed above are implemented with preconditioning support.
This version of Ginkgo has support for one preconditioner type, but stay tuned, as more preconditioners are coming in future releases:

* `gko::preconditioner::Jacobi` - a highly optimized version of the block-Jacobi preconditioner (block-diagonal scaling), optionally enhanced with an adaptive precision storage scheme for additional performance gains.

You can use the block-Jacobi preconditioner with system matrices stored in any of the built-in matrix formats and any custom format that has a defined conversion into a CSR matrix.

Any linear operator (matrix, solver, preconditioner) can be combined into complex operators by using the following utilities:

* `gko::Combination` - creates a linear combination **α<sub>1</sub> A<sub>1</sub> + ... + α<sub>n</sub> A<sub>n</sub>** of linear operators;
* `gko::Composition` - creates a composition **A<sub>1</sub> ... A<sub>n</sub>** of linear operators.

You can utilize these utilities (together with a solver which represents the inversion operation) to compute complex expressions, such as **x = (3A - B<sup>-1</sup>C)<sup>-1</sup>b**.

As described in the "Designed for HPC" section, you have a choice between 3 different executors:

* `gko::CudaExecutor` - offers a highly optimized GPU implementation tailored for recent HPC systems;
* `gko::ReferenceExecutor` - single-threaded reference implementation for easy development and testing on systems without a GPU;
* `gko::OmpExecutor` - preliminary OpenMP-based implementation for CPUs.

With Ginkgo, you have fine control over the solver iteration process to ensure that you obtain your solution under the time and accuracy constraints of your application.
Ginkgo supports the following stopping criteria out of the box:

* `gko::stop::Iteration` - the iteration process is stopped once the specified iteration count is reached;
* `gko::stop::ResidualNormReduction` - the iteration process is stopped once the initial residual norm is reduced by the specified factor;
* `gko::stop::Time` - the iteration process is stopped if the specified time limit is reached.

You can combine multiple criteria to achieve the desired result, and even add your own criteria to the mix.

Ginkgo also allows you to keep track of the events that happen while using the library, by providing hooks to those events via the `gko::log::Logger` abstraction. These hooks include everything from low-level events, such as memory allocations, deallocations, copies and kernel launches, up to high-level events, such as linear operator applications and completions of solver iterations. While the true power of logging is enabled by writing application-specific loggers, Ginkgo does provide several built-in solutions that can be useful for debugging and profiling:

* `gko::log::Convergence` - allows access to the final iteration count and residual of a Krylov solver;
* `gko::log::Stream` - prints events in human-readable format to the given output stream as they are emitted;
* `gko::log::Record` - saves all emitted events in a data structure for subsequent processing;
* `gko::log::Papi` - converts between Ginkgo's logging hooks and the standard PAPI Software Defined Events (SDE) interface (note that some details are lost, as PAPI can represent only a subset of the data Ginkgo's logging can provide).

Extensibility
-------------

If you did not find what you need among the built-in components, you can try adding your own implementation of a component.
New matrices, solvers and preconditioners can be implemented by inheriting from the `gko::LinOp` abstract class, while new stopping criteria and loggers can be implemented by inheriting from the `gko::stop::Criterion` and `gko::log::Logger` abstract classes, respectively. Ginkgo aims at being developer-friendly and provides features that simplify the development of new components. To help with handling various memory spaces, there is the `gko::Array` type template that encapsulates memory allocations, deallocations and copies between them. Macros and [mixins](https://en.wikipedia.org/wiki/Mixin) (realized via the [C++ CRTP idiom](https://en.wikipedia.org/wiki/Curiously_recurring_template_pattern)) that implement common utilities on Ginkgo's objects are also provided, allowing you to focus on the implementation of your algorithm, instead of implementing various utilities required by the interface.

License
-------

Ginkgo is available under the [BSD 3-clause license](https://github.com/ginkgo-project/ginkgo/blob/master/LICENSE). Optional third-party tools and libraries needed to run the unit tests, benchmarks, and developer tools are available under their own open-source licenses, but a fully functional installation of Ginkgo can be obtained without any of them. Check [ABOUT-LICENSING.md](https://github.com/ginkgo-project/ginkgo/blob/master/ABOUT-LICENSING.md) for details.

Getting Started
---------------

To learn how to use Ginkgo, and get ideas for your own projects, take a look at the following examples:

* [`minimal-solver-cuda`](https://github.com/ginkgo-project/ginkgo/tree/master/examples/minimal-cuda-solver) is probably one of the smallest complete programs you can write in Ginkgo, and can be used as a quick reference for assembling Ginkgo's components.
* [`simple-solver`](https://github.com/ginkgo-project/ginkgo/tree/master/examples/simple-solver) is a slightly more complex example that reads the matrices from files, computes the final residual, and selects a different executor based on the command-line parameter.
* [`preconditioned-solver`](https://github.com/ginkgo-project/ginkgo/tree/master/examples/preconditioned-solver) is a slightly modified `simple-solver` example that demonstrates how a solver can be enhanced with a preconditioner.
* [`simple-solver-logging`](https://github.com/ginkgo-project/ginkgo/tree/master/examples/simple-solver-logging) is yet another modification of the `simple-solver` example that prints information about the solution process to the screen by using built-in loggers.
* [`poisson-solver`](https://github.com/ginkgo-project/ginkgo/tree/master/examples/poisson-solver) is a more elaborate example that builds a small application for the solution of the 1D Poisson equation using Ginkgo.
* [`three-pt-stencil-solver`](https://github.com/ginkgo-project/ginkgo/tree/master/examples/three-pt-stencil-solver) is a variation of the `poisson-solver` that demonstrates how one could use Ginkgo with software that was not originally designed with Ginkgo support. It encapsulates everything related to Ginkgo in a single function that accepts raw data of the problem and demonstrates how such data can be directly used with Ginkgo's components.
* [`inverse-iteration`](https://github.com/ginkgo-project/ginkgo/tree/master/examples/inverse-iteration) is another full application that uses Ginkgo's solver as a component for implementing the inverse iteration eigensolver.
You can also check out Ginkgo's [core](https://github.com/ginkgo-project/ginkgo/tree/master/core/test) and [reference](https://github.com/ginkgo-project/ginkgo/tree/master/reference/test) unit tests and [benchmarks](https://github.com/ginkgo-project/ginkgo/tree/master/benchmark) for more detailed examples of using each of the components. A complete Doxygen-generated reference is available [online](https://ginkgo-project.github.io/ginkgo/doc/master/), or you can find the same information by directly browsing Ginkgo's [headers](https://github.com/ginkgo-project/ginkgo/tree/master/include/ginkgo). We are investing significant efforts in maintaining good code quality, so you should not find them difficult to read and understand.

If you want to use your own functionality with Ginkgo, these examples are the best way to start:

* [`custom_logger`](https://github.com/ginkgo-project/ginkgo/tree/master/examples/custom-logger) demonstrates how Ginkgo's logging API can be leveraged to implement application-specific callbacks for Ginkgo's events.
* [`custom-stopping-criterion`](https://github.com/ginkgo-project/ginkgo/tree/master/examples/custom-stopping-criterion) creates a custom stopping criterion that controls when the solver is stopped from another execution thread.
* [`custom-matrix-format`](https://github.com/ginkgo-project/ginkgo/tree/master/examples/custom-matrix-format) demonstrates how new linear operators can be created, by modifying the `poisson-solver` example to use a more efficient matrix format designed specifically for this application.

Ginkgo's [sources](https://github.com/ginkgo-project/ginkgo) can also serve as a good example, since built-in components are mostly implemented using publicly available utilities.
Contributing
------------

Our principal goal for the development of Ginkgo is to provide high quality software to researchers in HPC, and to application scientists that are interested in using this software. We believe that by investing more effort in the initial development of production-ready methods, the entire scientific community benefits in the long run. HPC researchers can save time by using Ginkgo's components as a starting point for their algorithms, or to compare Ginkgo's implementations with their own methods. Since Ginkgo is used for bleeding-edge research, application scientists immediately get access to production-ready new methods that help solve their problems more efficiently. Thus, if you are interested in making this project even better, we would love to hear from you:

* If you have any questions, comments, suggestions, problems, or think you have found a bug, do not hesitate to [post an issue](https://github.com/ginkgo-project/ginkgo/issues/new) (you will have to register on GitHub first to be able to do it). In case you _really_ do not want your comment to be publicly available, you can send us an e-mail to [email protected].
* If you developed, or would like to develop your own component that you think could be useful to others, we would be glad to accept a [pull request](https://github.com/ginkgo-project/ginkgo/pulls) and distribute your component as part of Ginkgo. The community will benefit by having the new method easily available, and you would get the chance to improve your code further as part of the review process with our development team. You may also want to consider writing an issue or sending an e-mail about the feature you are trying to implement before you get started, to get tips on how to best realize it in Ginkgo and avoid going the wrong way.
* If you just like Ginkgo and want to help, but do not have a specific project in mind, feel free to take on one of the [open issues](https://github.com/ginkgo-project/ginkgo/issues), or send us an issue or an e-mail describing your interests and background and we will find a project you could work on.

Backward Compatibility Guarantee and Future Support
---------------------------------------------------

This is a major **1.0.0** release of Ginkgo. All future patch releases of the form **1.0.x** are guaranteed to keep exactly the same interface as the major release. All minor releases of the form **1.x.y** are guaranteed not to change existing interfaces, but only add new capabilities. Thus, all code conforming to the **1.0.0** release will continue to compile and run on all future Ginkgo versions up to (but not including) version **2.0.0**.

About
-----

Ginkgo 1.0.0 is brought to you by (__TODO:__ maybe add logos here):

**Karlsruhe Institute of Technology**, Germany
**Universitat Jaume I**, Spain
**University of Tennessee, Knoxville**, US

These universities, along with various project grants, supported the development team and provided resources needed for the development of Ginkgo.
Ginkgo 1.0.0 contains contributions from:

**Hartwig Anzt**, Karlsruhe Institute of Technology
**Yenchen Chen**, National Taiwan University
**Terry Cojean**, Karlsruhe Institute of Technology
**Goran Flegar**, Universitat Jaume I
**Fritz Göbel**, Karlsruhe Institute of Technology
**Thomas Grützmacher**, Karlsruhe Institute of Technology
**Pratik Nayak**, Karlsruhe Institute of Technology
**Tobias Ribizel**, Karlsruhe Institute of Technology
**Yuhsiang Tsai**, National Taiwan University

Supporting materials are provided by the following individuals:

**David Rogers** - the Ginkgo logo
**Frithjof Fleischhammer** - the Ginkgo website

The development team is grateful to the following individuals for discussions and comments:

**Erik Boman**
**Jelena Držaić**
**Mike Heroux**
**Mark Hoemmen**
**Timo Heister**
**Jens Saak**
Currently we use `std::move`, `.get()`, and `->clone()` in combination with different objects to express who owns what, who shares what, etc. This might get a bit confusing for users that are not so proficient in C++, and the code can get a bit strange. For example:
Now if we want to call some functions (consider all the lines as stand-alone examples; they don't work as a block of code, since some lines invalidate the objects):
There are of course more combinations, but this is enough to make it confusing for some users: where to put `.get()`, where to put `->clone()`, where to use `std::move`? To make it worse, all 3 are called in radically different ways: as a method, as a method on a dereferenced object, and as a top-level function. There's also the problem of having to repeat the type twice when wanting to create a shared object, and the problem of `clone()`, which doesn't return the same type, but a super-type, which leads to clumsy syntax like:
So, I recommend the following wrappers:

* `gko::clone` - calls `->clone()` and preserves the type
* `gko::share` - converts the object into a shared (`std::shared_ptr`) object
* `gko::give` - calls `std::move`; the original input becomes invalid
* `gko::lend` / `gko::loan` - calls `.get()`, or just returns the input
I am not very happy with "borrow"; I would prefer a shorter name, since that will be the most common one, and I don't really like the verb. But it does fit the semantics perfectly - the function that got the object using `gko::borrow` will assume it "owns" the object until it returns, and at that point it gives it back. From google translate:

If anyone has a better name, I'd appreciate it.
I think we figured out we want either `gko::lend` or `gko::loan`, which better describe what is happening.

With the new wrappers, the examples would look like this:
@ginkgo-project/developers what do you think about this?