
deprecate expm in favor of exp #23233

Merged
merged 2 commits into master from depexpm on Aug 24, 2017

Conversation

Sacha0
Member

@Sacha0 Sacha0 commented Aug 12, 2017

Now that vectorized exp is out of the way, this pull request deprecates the spelling expm for matrix exponentiation in favor of exp. Analogous pull requests for logm and sqrtm to follow. (Do other *m functions exist?) Ref. #19598 and #8450. Best!
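For readers skimming the thread, a minimal sketch of the spelling change this PR proposes (the example matrix is only for illustration):

A = [0.0 1.0; -1.0 0.0]   # a square matrix

expm(A)   # old spelling: matrix exponential, deprecated by this PR
exp(A)    # new spelling: matrix exponential
exp.(A)   # elementwise exponential via broadcast, unchanged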

@Sacha0 Sacha0 added the kind:deprecation and domain:linear algebra labels Aug 12, 2017
NEWS.md Outdated
@@ -296,6 +296,8 @@ Deprecated or removed
full path if you need access to executables or libraries in the `JULIA_HOME` directory, e.g.
`joinpath(JULIA_HOME, "7z.exe")` for `7z.exe` ([#21540]).

* `expm` has been deprecated in favor of `exp` ([#CATS]).
Member

🐈 🐈

Member Author

The best PR number :). Fixed on push. Thanks!

@deprecate expm(A::Hermitian) exp(A)
@deprecate expm(A::Symmetric) exp(A)
@deprecate expm(D::Diagonal) exp(D)
@deprecate expm(x::Number) exp(x)
Member

Unless we're only deprecating a few methods of expm, you should be able to just do

@deprecate expm! exp!
@deprecate expm exp

Member Author

I kept the signatures narrow out of concern for clobbering expm[!] definitions outside Base. Should we not worry about that? If not, then agreed, the simpler/wider form would be better :).

Member

Those functions simply won't exist in Base in the future so external packages would need a deprecation warning for a function deprecation. It's up to them if they want to define and export expm themselves, or switch to Base.exp.
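To spell out the two options being discussed, using only forms that already appear in this thread (the narrow signatures are from this PR's diff; the wide forms are the reviewer's suggestion):

# Narrow: deprecate specific methods, leaving any expm methods defined outside Base untouched.
@deprecate expm(A::StridedMatrix{<:LinAlg.BlasFloat}) exp(A)
@deprecate expm(x::Number) exp(x)

# Wide: deprecate the bindings wholesale.
@deprecate expm  exp
@deprecate expm! exp!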

@@ -220,7 +220,7 @@ for f in (:sin, :sinh, :sind, :asin, :asinh, :asind,
:tan, :tanh, :tand, :atan, :atanh, :atand,
:sinpi, :cosc, :ceil, :floor, :trunc, :round,
:log1p, :expm1, :abs, :abs2,
:log, :log2, :log10, :exp, :exp2, :exp10, :sinc, :cospi,
#=:log,=# :log2, :log10, #=:exp,=# :exp2, :exp10, :sinc, :cospi,
Member

Is removing log here related to the expm->exp change? Also perhaps better to delete than to comment out?

Member Author

@Sacha0 Sacha0 Aug 13, 2017

The :log removal snuck in from another commit, and I will delete :exp rather than comment it out as you prefer. Fixed on push. Thanks! :)

@ararslan
Member

Looks like there are @refs to expm in doc/src/manual/linear-algebra.md that are causing the doc build to fail.


# Matrix exponential

"""
expm(A)
exp(A)
Member

Perhaps worth adding ::AbstractMatrix here?

Member Author

Good call! Fixed on push. Thanks!
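Presumably the updated docstring header then reads roughly like the following (a sketch of the suggested change, not necessarily the exact merged wording):

"""
    exp(A::AbstractMatrix)

Compute the matrix exponential of `A`.
"""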

@@ -1632,6 +1632,15 @@ function SymTridiagonal(dv::AbstractVector{T}, ev::AbstractVector{S}) where {T,S
SymTridiagonal(convert(Vector{R}, dv), convert(Vector{R}, ev))
end

# deprecate expm in favor of exp
@deprecate expm!(A::StridedMatrix{<:LinAlg.BlasFloat}) exp!(A)
Member

Would the following be enough?

@deprecate expm  exp
@deprecate expm! exp!

Member Author

I kept the signatures narrow out of concern for clobbering expm[!] definitions outside Base. Should we not worry about that? If not, then agreed, the simpler/wider form would be better :).

@@ -9,7 +9,7 @@ import Base: USE_BLAS64, abs, big, broadcast, ceil, conj, convert, copy, copy!,
ctranspose, eltype, eye, findmax, findmin, fill!, floor, full, getindex,
hcat, imag, indices, inv, isapprox, isone, IndexStyle, kron, length, map,
ndims, oneunit, parent, power_by_squaring, print_matrix, promote_rule, real, round,
setindex!, show, similar, size, transpose, trunc, typed_hcat
setindex!, show, similar, size, transpose, trunc, typed_hcat, exp
Member

Looks like this is in alphabetic order, perhaps move exp to the right position? 😄

Member Author

Good catch! Fixed on push. Thanks!

@yuyichao
Contributor

yuyichao commented Aug 13, 2017

Seems like a good idea if MATLAB and numpy weren't a thing. It seems way too error prone and confusing for new users.

@Sacha0
Member Author

Sacha0 commented Aug 13, 2017

Looks like there are @refs to expm in doc/src/manual/linear-algebra.md that are causing the doc build to fail.

Good catch! Fixed on push. Thanks!

@Sacha0 Sacha0 force-pushed the depexpm branch 2 times, most recently from 5228f10 to 367d019 on August 13, 2017 03:26
@andyferris
Member

Very awesome! I strongly support.

@yuyichao I think people will be able to learn pretty quickly - it's more in line with the rest of LinAlg where functions of containers act on the container as a linear algebra entity, and we use broadcast for per-element operations.

@bramtayl
Contributor

Seems like a good idea if MATLAB and numpy weren't a thing

When I first read this, I thought you were advocating for the downfall of MATLAB and numpy.

@yuyichao
Contributor

yuyichao commented Aug 13, 2017

I think people will be able to learn pretty quickly

I highly doubt that. I've tried over the past months, ever since this was proposed, and no, I can't stop expecting exp on a matrix to mean an elementwise operation. Raising an error is one thing; giving a totally different result is another. Since the result has the correct type and dimensions, there's simply no way to find such an error.

@yuyichao
Contributor

Also, such a mistake is simply too easy to make. I can't count the number of times I've accidentally passed an array to a function, forgetting to add the ., and gotten a depwarn/error. Some of them do come from exp (and also sqrt; I don't use log as much)!

So again, I'd be OK with this change if, in the cases where the function itself doesn't give an error, the wrong result would still trigger one downstream in most cases, but that's simply not the case.

@StefanKarpinski
Sponsor Member

StefanKarpinski commented Aug 13, 2017

All exported names in Base ending in m where the name without the m is also exported:

julia> filter(x->endswith(String(x), "m") && Symbol(chop(String(x))) in names(Base), names(Base))
6-element Array{Symbol,1}:
 :diagm
 :expm
 :logm
 :sqrtm
 :stdm
 :varm

So yes, it looks like you've got all of them (diagm, stdm and varm are different). As much as this would be lovely, I'm not entirely sold on this – I think @yuyichao's position on this is pretty on point.

@KristofferC
Sponsor Member

It is inevitable that this will lead to bugs that will be hard to track down, and that pretty much every "Intro to Julia" tutorial will have a warning about this behavior. The question is whether it is worth it to avoid having to do a type check in functions that want to do exp and work on both scalars and matrices. In my opinion, it is not a good trade-off and we should not change expm -> exp for matrices.

@StefanKarpinski
Sponsor Member

StefanKarpinski commented Aug 13, 2017

The expm matrix functions already work fine on scalars, so if you're writing scalar/matrix generic code, just use the matrix versions and you're fine. No type checks needed.
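For instance, with the pre-deprecation definitions quoted elsewhere in this thread (expm(x::Number) = exp(x)), the matrix spelling already covers scalars:

expm(2.0) == exp(2.0)   # true: expm falls back to the scalar exp for Numbers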

@ChrisRackauckas
Member

While expm working generically is just fine for a hack, I'm not convinced that the change would really cause as many problems as is stated here. I'm surprised so many people think that this will make it hard to catch in generic AbstractArray code. For an AbstractArray x, exp(x) will error on most arrays, since once the depwarn is removed exp on vectors and higher-order tensors will simply error. It will also error if the matrix is not square. So the only case where you'd accidentally run into this is where you were writing code only and specifically for square n×n matrices and didn't want it to mean the matrix exponential. Every other case would alert the user/developer with an error. To me, that makes it a lot safer than what other comments make it out to be.

In fact, I actually think this will help some people out. This case on StackOverflow comes from a GSoC pre-application. This was late into the application process, and the issues that came up arose because mathematical shorthand in papers tends to use notation that doesn't call out the matrices. At that point, the problem was not recognizing that the 1 in the paper should mean I in the context of systems of equations. But earlier the problem was actually not recognizing that the exp in exponential-integrator papers means "the matrix exponential" expm, so this student was accidentally using exp instead of expm. When mathematics papers mean "element-wise exponential" they specifically have to mention that it's element-wise, because the default interpretation is the matrix exponential (to name one example off the top of my head, papers on the Lamperti transform of specific diagonal-noise SDEs).

So exp implying elementwise is natural to those already trained in scientific computing in NumPy and MATLAB, but it's definitely not universal. And I have actually seen students making the mistake in the opposite direction, expecting exp to "work naturally". Intro to Julia tutorials would only need to mention this as something for old MATLAB users to note, along with the other good changes. But, most importantly, the people who would get tripped up on this would be people who have already been trained to know how to debug and correct code.
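To make the error-surface argument concrete, a hypothetical post-deprecation session would behave roughly as follows (the exact error behavior is an assumption about the final state, not output copied from a real session):

A = [1.0 2.0; 3.0 4.0]          # square matrix
B = [1.0 2.0 3.0; 4.0 5.0 6.0]  # non-square matrix
v = [1.0, 2.0, 3.0]             # vector

exp(A)    # matrix exponential (this PR's meaning)
exp.(A)   # elementwise exponential
exp(B)    # errors: the matrix exponential requires a square matrix
exp(v)    # errors once vectorized exp is fully removed; use exp.(v) instead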

@yuyichao
Contributor

yuyichao commented Aug 13, 2017

So the only case where you'd accidentally run into this is where you were writing code only and specifically for square n×n matrices and didn't want it to mean the matrix exponential.

And in actual code that is not meant to be a library taking any array as input, when a function takes a square matrix as input in one case, it is highly likely that it will always take a square matrix (I don't think I've ever dealt with a matrix that's accidentally square), so this basically means that using square matrices will be really error-prone. It is necessarily true that square matrices are used more often than expm (and the other *m functions), since the latter requires the former.

There will always be people that don't read the documentation and misunderstand what a function does. Trying to match mathematical notation is not always (and usually not) a good idea, and it has already caused too many problems. If anything, raising a clear error and letting people choose will certainly help more people than picking one or the other as the default.

but it's definitely not universal

Given the low number of complaints and the (unfortunate but real) tradition of element-wise operations by default, the number of people that expect the matrix operation is certainly the minority, especially among people doing numerical computation.

Intro to Julia tutorials would only need to mention this as something for old MATLAB users to note, along with the other good changes

It will also mean that simple functions are unusable, and intentionally confusing, for people who haven't read all the docs.

But, most importantly, people who would get tripped up on this would be people who have already been trained to know how to debug and correct code.

I take this to mean that I have not learnt how to debug and correct code, since I've certainly not learnt how to find this kind of error without an error message.

@StefanKarpinski
Sponsor Member

There are two somewhat ambiguous choices for what exp(A) means: elementwise scalar exp or the matrix exponential. In the current state of affairs, we force people to disambiguate by writing either exp.(A) or expm(A). Therefore, what we currently do minimizes any risk of confusion by not trying to guess what people mean and inevitably getting it wrong some of the time. If this student wrote exp(A), copying notation from a math paper, they would immediately get an error telling them to use either exp.(A) if they want an elementwise exponential or expm(A) if they want the matrix exponential.

On the other hand, if someone coming from Python or Matlab tries exp expecting an elementwise operation on a matrix, they will get an equally clear message. If they happen to apply it to a non-square matrix, we can even tell them unconditionally to write exp.(A). If a person coming from Python or Matlab tries expm, it will just work. The current state of affairs offers the best experience to the broadest range of people by forcing explicit disambiguation.

@tkelman
Contributor

tkelman commented Aug 14, 2017

We don't force explicit disambiguation via powm for matrix powers though; have polynomials working in the matrix sense by default thrown that many people off? We could maybe define ^(::Irrational{:e}, ::Matrix) if we want to allow that bit of mathematical notation.

@antoine-levitt
Contributor

It's true that it would cause confusion to Matlab / python people, but it does create a sense of mathematical consistency that makes it easy to remember ("ah yes, unlike those languages, julia actually does the right thing"). Also, what's the point of deprecating exp(arrays) if not to free up the notation for linear algebra?

@KristofferC
Sponsor Member

Also, what's the point of deprecating exp(arrays) if not to free up the notation for linear algebra?

To be consistent about not having manually defined vectorized functions.

We don't force explicit disambiguation via powm for matrix powers though

I don't think having A^2 meaning A*A is comparable to exp(A) meaning I + A + A*A / 2! + A*A*A / 3! ...

We could maybe define ^(::Irrational{:e}, ::Matrix) if we want to allow that bit of mathematical notation.

👍!

@tkelman
Contributor

tkelman commented Aug 14, 2017

I don't think having A^2 meaning A*A is comparable to exp(A) meaning I + A + A*A / 2! + A*A*A / 3! ...

Why not? Isn't that exactly what exp should mean? Either we have 3 spellings exp(A), exp.(A), and expm(A) where one of them is an error on matrices, or we have two spellings exp(A) and exp.(A). I wasn't entirely sold on this at first, but I'm increasingly convinced the m is a tradition we don't need to hold on to.

@KristofferC
Sponsor Member

Isn't that exactly what exp should mean?

I'm not sure what you mean by should. exp means whatever we define it to mean. Our job is to define it in a way that is a good trade-off between convenient syntax and not being confusing / introducing bugs in user code. In my view, this change weighs more on the side of being error-prone than providing extra convenience.

@andreasnoack
Member

I'm with Tony here. In Numpy, Fortran, and R, * is elementwise for arrays so you could easily argue that Matrix*Matrix being the matrix product in Julia is confusing and error prone to new users and that it would be more friendly to have a special operator. However, our Matrix*Matrix seems to work pretty well. Having exp(x) = ∑x^n/n! and x^n::Integer = x*...*x whatever the inputs seems pretty appealing and easy to explain.

@yuyichao
Contributor

It is not easier to explain, and ease of explaining and understanding is independent of how easy it is to make an error. That's exactly why I held back my comment until I had counted the number of times I would have wasted a lot of time on this over the past few months.

@andyferris
Member

Tony said it well, as did Andreas.

IMO, Julians should really no longer expect functions to be automatically broadcast over elements. I am one of those people who had bugs in MATLAB, and even in Julia, because I expected exp to do what it means in mathematics and forgot to use expm. We do still broadcast array + array, scalar * array and conj(array) to elements, because that's how vector spaces work. But if you're going to learn the language and do elementwise work with arrays, you'll certainly very quickly get used to using dot-call, broadcast, and/or map - otherwise you'd end up with errors in your code all over the place (and really, if you're using exp then you must have some degree of mathematical finesse).

@KristofferC It seems obvious to me that exp (and any other analytically continuous function, really) should be extended from Number to square AbstractMatrix in the standard, mathematically accepted way by Base.LinAlg. So yes, the fact that there is a Taylor expansion for that function does matter because it means the exp function has a meaning on linear operators (and we have already decided that it doesn't have an elementwise meaning). Julia excels at providing the correct/standard mathematical interpretation and implementation of things, in a way that is much more consistent and pleasing than MATLAB or python. IMO the semantic improvement for exp, sqrt, etc, will outweigh the downsides.

@StefanKarpinski I view this in a similar light to the proposals around modifying array + scalar to be more compatible with linear algebra - one major argument there was to make generic code (and Taylor expansions in particular...) work with linear operators. Exactly the same deal here - the advantages of generic mathematical programming in Julia are obvious and are worth just a little pain to fight for.

@andyferris
Member

As far as I can see, @yuyichao's point of view is that this introduces the potential to create a logical error in your code with no errors or warnings, by writing code that is almost indistinguishable from the correct version (i.e. exp.(a) vs exp(a) look similar, and compounding this, some people might arrive with the intuition that exp(a) is elementwise). I guess this is somewhat similar to a minus-sign error: if I write a + b instead of a - b somewhere because I mistyped or did a pen-and-paper calculation wrong, these types of errors are difficult to find because they require the programmer to check that each line of code is logically correct.

I'd begin by refuting that exp(a) and exp.(a) will "look" similar to a Julia programmer. There are basically zero functions or operators which distribute themselves elementwise, excepting those few operations that follow directly from linear algebra (conj(array), array + array, array - array, scalar * array, array * scalar, array / scalar and scalar \ array) and happen to be elementwise because that's just how vector spaces work. I know from experience that when I see a missing . in f.(a), alarm bells start ringing, and quickly. It's easy to keep in mind whether a is a container or a scalar - you don't need any knowledge of linear algebra to get this right.

Then there's the argument of consistency which enables generic programming. I think @eveydee had a great idea for an implementation of exp(::Any), and would also support e ^ x::Any mapping to exp(x), or vice-versa. @StefanKarpinski I don't see any value in removing exp or in avoiding exp(x) == e ^ x.

I also feel more generally that we should give LinAlg some authority on how to extend mathematical functions over AbstractArray. It creates a jarring interface to have * be "owned" by LinAlg for arrays, but not to do similarly for other mathematical operations which are commonplace in linear algebra. (Previously, this was blocked by residual MATLAB cruft, but happily the dot-call syntax has let us create something better).

@yuyichao
Contributor

I guess this is somewhat similar to a minus-sign error

No because those will NEVER produce the correct result.

There are basically zero functions or operators which distribute themselves elementwise

This is unrelated to whether they "look similar"

I know from experience that when I see a missing . in f.(a), alarm bells start ringing, and quickly.

That's exactly the opposite of what I experience, especially when the code has a mix of scalar and element-wise operations.

Then there's the argument of consistency which enables generic programming.

Both e^x and expm will be generic so what we call it does not enable more generic programming at all.

I think @eveydee had a great idea for an implementation of exp(::Any)

While that can be nice, it is unrelated to what we need to call the function. Also, any example of why such a def is useful? (In almost all cases I can think of, that is NOT how you want to define it).

It creates a jarring interface to have * be "owned" by LinAlg for arrays,

I've already mentioned that the difference is that these are operators, so it's hard to find alternatives that aren't way more complicated to write.

@PallHaraldsson
Contributor

PallHaraldsson commented Aug 23, 2017

Is there a compromise to be had here? Where exp only works for scalars and expm only works for arrays; not fully generic, yes, but the preferred (would-be only documented?) way, i.e. e^s, e^A and e.^A, which call those other functions, would be the generic option. It's not clear to me why teaching newbies (who may not be familiar with other languages) the exp[m] functions is preferred. ^ is also a function, just as fast and generating identical code, and exp[m] is also more to type.

I.e. dropping from Base:

expm(x::Number) = exp(x)

and adding:

julia> import Base.^
julia> ^(::Irrational{:e}, A::Array) = expm(A)

julia> e^([3.0 2; 3 4])
2×2 Array{Float64,2}:
 163.002  160.284
 240.426  243.145

Later expm and exp could stop being exported, if we want to drop compatibility with other languages.

While I'm at it, why is this less accurate? I guess it's not too important, at least in this case:

julia> exp([3.0])
1-element Array{Float64,1}:
 20.0855

julia> exp(3.0)
20.085536923187668

@yuyichao
Contributor

yuyichao commented Aug 23, 2017

For expm, I think replacing it with e^ is fine since that will probably stay.
That argument can't be made for other functions though.

While I'm at it, why is this less accurate? I guess not to important, at least that case..:

That's just a printing issue.

@dalum
Contributor

dalum commented Aug 23, 2017

If you define exp(::Any) as I did above, any specialisation such as exp(::Number) or exp(::Matrix) is just an optimised version. I think this adds beauty and consistency to what the function exp can be expected to do. It also means that you only have to implement basic arithmetic operators (and one, isapprox) for exp to work on your own type. If you want a different definition, you can always override it.

I think another problem with having both exp and expm is that you have an added layer of subtle communication that might be missed. People with different backgrounds will be looking at each others' code, often without being able to communicate with the person who wrote it. If I write exp in a function, did I intentionally only want it to work with <:Number? If that's the case, why didn't I specify this in the normal way in the method signature? Maybe I just didn't imagine that someone might put something else in there, for which my function is still perfectly well defined? If I see this in a package someone else wrote, I could always change it to expm to treat the generic case, but then I have to either modify the package locally, open a pull request or contact the original author. This problem would not exist if exp worked generically.

But I think there is the deeper issue that exp normally is defined as a power series even for numbers. So why should it only be defined like that for numbers? Why should it not work for any object where a power series is well-defined? These kinds of consistencies and one-to-one correspondences with maths were what brought me to Julia in the first place.

And I just realised this, but in numpy, numpy.exp is consistent with the power series, because multiplication of arrays is element-wise.
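For reference, a minimal sketch of what such a generic power-series definition could look like, needing only one, isapprox, and basic arithmetic on the argument's type (the name generic_exp and the convergence loop are made up here; this is not necessarily the implementation referenced above):

function generic_exp(x)
    term = one(x)          # x^0 / 0!
    result = term
    for n in 1:1000
        term = term * x / n              # next power-series term x^n / n!
        new_result = result + term
        isapprox(new_result, result) && return new_result
        result = new_result
    end
    return result
end

generic_exp(1.0)                   # ≈ 2.71828..., matches exp(1.0)
generic_exp([0.0 1.0; -1.0 0.0])   # ≈ the matrix exponential of a rotation generator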

@yuyichao
Contributor

If you define exp(::Any) as I did above, any specialisation such as exp(::Number) or exp(::Matrix) is just an optimised version.

I was just asking for a case where this is actually useful. Also, as I said, it's unrelated to what we call it.

I think one problem, which hasn't been mentioned, with having both exp and expm (in addition to the inconvenience of being told to add ms to your functions in the middle of your workflow) is that you have an added layer of subtle communication that might be missed.

Adding m in package code that is expected to work on generic types is only a very minor inconvenience compared to the large amount of code that isn't going to be generic.

So why should it only be defined like that for numbers?

Because such mathematical consistency is not the top priority for a programming language.

@ggggggggg
Contributor

Just count the number of python and numpy users.

Just chiming in to say that I'm primarily a python/numpy user and a part-time Julia convert, and it seems obvious to me that exp should not be elementwise: Julia has functions operate on their arguments and uses dots for elementwise operations. It seems un-Julian to me to have a separate expm function. I can't really make any statements about frequency of errors as I've never actually needed this function.

@andreasnoack
Member

Triage was unanimously in favor

@andreasnoack andreasnoack merged commit 03c5d7d into JuliaLang:master Aug 24, 2017
@cdsousa
Contributor

cdsousa commented Aug 24, 2017

I like this change but,

unanimously in favor

?

@StefanKarpinski
Sponsor Member

We have a weekly triage call at 12:15 Eastern on Thursdays on Google Hangouts. The link is in the #triage channel on the JuliaLang Slack. Anyone who wants to can join – although it's not exactly fun. Everyone on the call – which was about a dozen core contributors or so – was in favor. That seemed like enough to make the decision.

@Sacha0 Sacha0 deleted the depexpm branch August 26, 2017 01:12
@Sacha0
Member Author

Sacha0 commented Aug 26, 2017

Thanks all! :)

@ChrisRackauckas
Member

So today I was part of a MATLAB workshop with relatively new programmers. expm came up, so I asked them whether they thought it would be a good idea for matrix exponentials to be exp (not the same thing, but close enough). Their response? Honestly, it doesn't matter, and they'll just read the docs.

It's interesting how much we cared about this, but how little they cared. I for one was humbled by how unimportant some of this stuff really is (though it does matter, I think?!). Cheers all, and I hope this makes us all humbled by the fact that, in the end, it really doesn't matter all that much.

@yuyichao
Contributor

Exactly. No one cares what to use when you write the code. That's why I'm talking about how much effort you'll need to debug the code!!!

@@ -92,7 +92,6 @@ Base.repmat
Base.kron
Base.SparseArrays.blkdiag
Base.LinAlg.linreg
Base.LinAlg.expm
Member

@fredrikekre fredrikekre Sep 14, 2017

I guess this should just have been changed to Base.LinAlg.exp(::StridedMatrix) rather than deleted?

@Keno
Member

Keno commented Jun 27, 2018

Looks like we missed expm1, which is a bit oddly named at this point. Should probably just be exp1. However, that would be inconsistent with exp2 and exp10, though frankly, I'm not sure those functions should exist in favor of literally 2^x and 10^x (we can put appropriate fast paths in ^). Same with log2 and log10 actually. I'd prefer if those were just log(2, x) and log(10, x).

@StefanKarpinski
Sponsor Member

expm1 is not related to expm, the naming is coincidental; it computes exp(x)-1 and has never operated on arrays. It could be extended to matrices and compute exp(A)-I but that's a bit weird.

@Keno
Member

Keno commented Jun 27, 2018

I realized that the m in expm1 stands for minus, not matrix, and is unrelated to this function. Nothing to do there then, I guess. I still think we should consider getting rid of the exp[b] for b in 2, 10 methods, since those seem a bit arbitrary.

@StefanKarpinski
Sponsor Member

StefanKarpinski commented Jun 27, 2018

They're not arbitrary at all: they're the most common bases to work in and they're in libm. They can also both be computed more accurately and efficiently than the generic log function can for general bases.

@Keno
Member

Keno commented Jun 27, 2018

I know they're in libm, but C doesn't have an exponentiation operator, and there is a much closer relation between API and implementation in C, so their having distinct implementations for performance is a good argument for why they should be separate functions there. Especially with constant propagation, that argument is much weaker here, and personally I'd prefer the uniformity of just writing 2^x and 10^x, etc. Arguably those are actually much more common already.

@Keno
Member

Keno commented Jun 27, 2018

Across the entire package ecosystem, I count no uses of exp2 (a couple of definitions) and ~10 uses of exp10. The log variants are quite a bit more common. However, my argument is simply that if we have faster/more accurate ways to compute log(2, x), then log(2, x) should probably use them (and constant propagation will generally do away with the extra (minimal) overhead of doing the check), and if log(2, x) does that, then I don't see the point of log2.
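A hypothetical sketch of that idea, using a made-up name base_log so as not to claim anything about Base's actual definitions: the two-argument form dispatches to the specialized routines for the common bases, and constant propagation should eliminate the branch when the base is a literal.

base_log(b, x) = log(x) / log(b)                  # generic fallback
base_log(b::Integer, x) = b == 2  ? log2(x)  :
                          b == 10 ? log10(x) :
                          log(x) / log(b)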
