linspace(...)[1] can differ from linspace start value #14420

Closed
c42f opened this issue Dec 16, 2015 · 33 comments · Fixed by #18777
Labels
domain:maths (Mathematical functions), kind:bug (Indicates an unexpected problem or unintended behavior)

@c42f
Member

c42f commented Dec 16, 2015

Demo:

julia> start = 0.10000000000000045
0.10000000000000045

julia> linspace(start, 1)[1] - start
-1.3877787807814457e-17

julia> linspace(start, 1).start - start
0.0

I think the error here is in unsafe_getindex(::LinSpace, ::Integer), which implicitly assumes that (a*b)/b == a, which unfortunately isn't true in floating-point arithmetic. The above amounts to computing

julia> 0.10000000000000045 * 49.0 / 49.0 - 0.10000000000000045
-1.3877787807814457e-17

Probably needless to say, this is a big problem for numerical code which should be able to assume LinSpace will exactly preserve the start and end values.

@eschnett
Contributor

If this is the code

getindex{T}(r::LinSpace{T}, i::Integer) = (checkbounds(r, i); unsafe_getindex(r, i))
unsafe_getindex{T}(r::LinSpace{T}, i::Integer) = convert(T, ((r.len-i)*r.start + (i-1)*r.stop)/r.divisor)

then changing things to

(r.len-i)/r.divisor * r.start + (i-1)/r.divisor * r.stop

would address this.

If the current code is deemed accurate enough, then storing 1/r.divisor instead of r.divisor could be an additional performance improvement.
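For concreteness, here is the proposal as a method (a sketch only, in the 0.4-era syntax of the snippet above; not code from Base):

unsafe_getindex{T}(r::LinSpace{T}, i::Integer) =
    convert(T, (r.len-i)/r.divisor * r.start + (i-1)/r.divisor * r.stop)

Assuming r.divisor == r.len-1, at i == 1 this evaluates (r.len-1)/r.divisor * r.start, and the correctly rounded fdiv makes the weight exactly 1.0, so the start round-trips; the catch, raised below, is what happens once the division is replaced by a precomputed reciprocal.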

@c42f
Member Author

c42f commented Dec 16, 2015

I wonder whether it's possible to solve for a floating point _start, _end, and mult such that the more naive formula

(r.len-i)*r.mult * r._start + (i-1)*r.mult * r._end

exactly reproduces the values at the end points. Then you'd get the best of both worlds (at the cost of some additional setup), though a multiply would be required to get start(r) and end(r).

@ringw

ringw commented Dec 24, 2015

If r.len == 50, then (r.len-i)*r.mult * r.start + (i-1)*r.mult * r.end is incorrect (assuming r.mult = 1.0 / r.divisor and start and end are not adjusted):

julia> 49.0 * (1.0 / 49.0)
0.9999999999999999

julia> 49.0 / 49.0
1.0

I'm not sure of a good way to adjust mult, start, and end.

Using the formula (i == r.len) ? r.stop : (r.start + (i-1)*r.incr) (where r.incr = (r.stop - r.start) / r.divisor) seems to be faster than @eschnett's suggestion:

julia> let start = 0.10000000000000045, end_ = 1+eps(), len = 10000000,
       divisor = len-1, incr = (end_ - start) / divisor;
       @time [(i == len) ? end_ : (start + (i-1)*incr) for i in 1:len][1]
       end
  0.055571 seconds (2 allocations: 76.294 MB)
julia> let start = 0.10000000000000045, end_ = 1+eps(), len = 10000000,
       divisor = len-1;
       @time [(len-i)/divisor * start + (i-1)/divisor * end_ for i in 1:len][1]
       end
  0.142083 seconds (2 allocations: 76.294 MB)
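As a method on the range type, the branchy formula might look like the following (a hypothetical sketch; it assumes a precomputed field r.incr = (r.stop - r.start)/r.divisor, which LinSpace does not currently carry):

unsafe_getindex{T}(r::LinSpace{T}, i::Integer) =
    convert(T, i == r.len ? r.stop : r.start + (i-1)*r.incr)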

@c42f
Member Author

c42f commented Dec 25, 2015

It's not what I expected, but I get the same results locally (I put the code into functions and primed the JIT to make sure). Even implementing the linearly interpolated version in terms of multiplication (as I was proposing above) seems a bit slower than your version with a branch. It's a bit surprising that the branch really doesn't hurt performance. I checked the assembly and the branch does seem to have ended up in the inner loop; I guess it's just perfectly predictable.

I have a hunch that the linear interpolated version should be better behaved numerically due to potential cancellation error in computing incr = (end_-start)/divisor, but I didn't manage to come up with an example yet.

@ringw

ringw commented Dec 26, 2015

I guess the JIT could generate a CMOV instead of a branch, but the branch does seem fast enough.
Most of the pathological cases I tested seem fine:

julia> function test_incr(start, end_, len)
       incr = (end_ - start) / (len - 1)
       return abs(start + incr * (len-1) - end_)
       end
test_incr (generic function with 1 method)

julia> test_incr(0.10000000000000045, 1, 50)
0.0

julia> test_incr(0.10000000000000045, 1, 1e15)
0.0

julia> test_incr(1, 1e15, 100)
0.125

julia> test_incr(1e15, 1e15 + 1, 1000)
0.0

The only inaccuracy I found is with a very small start and large end, or vice versa:

julia> let start = 1, end_ = 1e15, len = 100, incr = (end_ - start) / (len - 1);
       (start + incr * (len-1)) - end_
       end
0.125

julia> let start = 1e15, end_ = 1, len = 100, incr = (end_ - start) / (len - 1);
       (start + incr * (len-1)) - end_
       end
-0.125

The error is proportional to the smaller endpoint. For this particular error to crop up, all of the other elements will be much larger than the smaller endpoint, so the relative error is tiny.

@JeffBezanson added the kind:bug and domain:maths labels Dec 29, 2015
@c42f
Member Author

c42f commented Jan 2, 2016

Hmm, looks like this problem has quite some history already: #2333 #9637 and probably several other issues.

I expect the aim of linspace should be to produce something as close as possible to the correctly rounded linear interpolation between start and end, hitting the end points exactly. Perhaps that's just me though.

The single-sided version does have a nasty failure case when start is large and negative and stop is small. In this case the relative errors on the right-hand side of the range can be huge:

julia> x1,xN,N = -1f6, 0.1f0, 1e6
(-1.0f6,0.1f0,1.0e6)

julia> function lerp_onesided{T}(x1::T, xN::T, N, i)
           mult = (xN-x1)/(N-1)
           x1 + mult*(i-1)
       end
lerp_onesided (generic function with 1 method)

julia> (lerp_onesided(x1,xN,N,N) - xN)/xN
0.2499999802093956

julia> (linspace(x1,xN,N)[Int(N)] - xN)/xN
0.0f0

@c42f
Member Author

c42f commented Jan 2, 2016

With a simple use of nextfloat and prevfloat during the LinSpace setup stage it seems possible to

  • Exactly hit the end points
  • Compute the values at each index without division
  • Have small error relative to the correctly rounded result across the entire range when both endpoints share the same sign. (I suspect that getting this for ranges which cross zero isn't possible without extra logic.)

function lerp_fix{T}(x1::T, xN::T, N, i)
    # Setup.  Compute values at end points.
    mult = one(T)/T(N-1)
    a    = T(N-1)*mult*x1
    b    = T(N-1)*mult*xN
    # If they don't match, adjust so that linear interpolation formula exactly
    # hits the ends of the range.
    if a != x1
        x1 = a < x1 ? nextfloat(x1) : prevfloat(x1)
    end
    if b != xN
        xN = b < xN ? nextfloat(xN) : prevfloat(xN)
    end

    # The actual computation at index `i`
    T(N-i)*mult*x1 + T(i-1)*mult*xN
end

I haven't had time to test the above as extensively as I'd like yet, but it seemed to work when I tried it for a large number of log- and uniformly-distributed endpoints and range sizes.
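For what it's worth, a randomized stress test along those lines could look like this (a sketch, not a proof; the assertions encode the claim that both endpoints are hit exactly):

function check_lerp_fix(trials)
    for _ in 1:trials
        # endpoints spread over many binary orders of magnitude
        x1 = ldexp(rand() - 0.5, rand(-20:20))
        xN = ldexp(rand() - 0.5, rand(-20:20))
        N  = rand(2:10^6)
        @assert lerp_fix(x1, xN, N, 1) == x1
        @assert lerp_fix(x1, xN, N, N) == xN
    end
end
check_lerp_fix(10^5)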

@ringw

ringw commented Jan 5, 2016

That seems good. I can't find a proof, but it does seem that the error in x*(1.0/x) is at most 1 ulp, so either nextfloat or prevfloat should be able to fix the endpoints. If there's not a strong argument that this always works, it might be good to check that the updated endpoints are correct when a != x1 || b != xN, and raise an exception otherwise.
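Concretely, that check could be bolted onto lerp_fix's setup along these lines (a hypothetical variant, not tested code):

function lerp_fix_checked{T}(x1::T, xN::T, N, i)
    mult = one(T)/T(N-1)
    a = T(N-1)*mult*x1
    b = T(N-1)*mult*xN
    # nudge each endpoint by one ulp if the formula misses it
    x1a = a == x1 ? x1 : (a < x1 ? nextfloat(x1) : prevfloat(x1))
    xNa = b == xN ? xN : (b < xN ? nextfloat(xN) : prevfloat(xN))
    # if a one-ulp adjustment wasn't enough, fail loudly rather than drift silently
    if T(N-1)*mult*x1a != x1 || T(N-1)*mult*xNa != xN
        error("endpoints could not be adjusted to round-trip exactly")
    end
    T(N-i)*mult*x1a + T(i-1)*mult*xNa
end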

@c42f
Member Author

c42f commented Jan 5, 2016

I think a proof of this is required if it's going to go into Julia Base. So far I've come to a similar conclusion: first prove that

(1/M)*M ∈ {prevfloat(1),1,nextfloat(1)}

Follow that by proving that adjusting the endpoints by 1ulp is enough to correct a possible 1ulp error in (1/M)*M.

I haven't tried anything like this before, so I'm making slow progress. (Ok, I should probably just read the introduction in a good book on numerical analysis and it might be obvious. Suggestions welcome.)

Some observations which might be useful:

  • M = len-1 is an integer, and exactly representable given the checking already in linspace().
  • Floating point multiplication is exact if the input has few enough significant binary digits.

As a side note, lerp_fix fails if the rounding mode has been set to something other than RoundingMode{:Nearest}. I'm not too worried about this (taking the view that non-nearest rounding modes only make sense when applied locally, in a lexically scoped way), but perhaps other people disagree?
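For what it's worth, the first claim is cheap to probe numerically, even though that falls well short of a proof:

# check that (1/M)*M lands on 1.0 or an immediate neighbour for integer-valued M
for M in 1.0:1.0:10.0^7
    r = (1.0/M)*M
    @assert r == 1.0 || r == prevfloat(1.0) || r == nextfloat(1.0)
end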

@StefanKarpinski
Sponsor Member

I think there's a way to fix this and retain the LinSpace type, but there's been so much gnashing of teeth over this that I'm very strongly inclined to just revert to having linspace return a vector.

@andreasnoack
Member

@StefanKarpinski Isn't the Vector vs LinSpace discussion separate from the discussion about which values to return? Is it because two divisions per getindex would be too slow?

@StefanKarpinski
Sponsor Member

It's related because the LinSpace construction makes it much harder to get the right behavior for the endpoints. I'm fairly certain that the (len-i)/divisor * start + (i-1)/divisor * end_ formula does not work for many of the nastier cases – I tried that out when I was implementing this. In order to get all the intermediate points right, you really need to do the addition at a "higher level" and then use a single division to project down, letting the guaranteed correct rounding of the fdiv operation do its thing. If you try to do the addition after that, it defeats the correctness of the rounding.
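A small experiment illustrating the structure of the roundings (the exact ulp differences depend on the inputs; the point is that the split form incurs three roundings where the single fdiv incurs one, so it need not match the correctly rounded value):

len, d = 50, 49.0
a, b = 1.0, 3.0                      # stand-ins for exactly "lifted" endpoint values
i = 17
one_div = ((len-i)*a + (i-1)*b)/d    # numerator exact here, a single rounding
two_div = (len-i)/d*a + (i-1)/d*b    # three separate roundings
exact   = (big(len-i)*big(a) + big(i-1)*big(b))/big(d)
one_div - Float64(exact), two_div - Float64(exact)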

@StefanKarpinski
Sponsor Member

Btw, the potential solution that I have in mind for this that doesn't require ditching LinSpace is to multiply the numerator and divisor of the LinSpace object by a small multiplier until you get a starting point for which the projection is correct. I believe this always works, but I haven't been able to prove it. Still, it seems too complicated and makes me think that the LinSpace business should just be partially reverted to the point where we had lifted computations but returned an array.

@andreasnoack
Member

I'm not sure which metric is meant by "getting all the intermediate points right". E.g. with @c42f's example above I get

immutable MyRange{T<:FloatingPoint}
    x1::T
    xn::T
    n::Int
end
Base.getindex(r::MyRange, i::Integer) = (r.n - i)/(r.n - 1)*r.x1 + (i - 1)/(r.n - 1)*r.xn

julia> x1,xN,N = -1f6, 0.1f0, 10^6

julia> fr1 = MyRange(x1,xN,N);

julia> fr2 = linspace(x1,xN,N);

julia> br1 = MyRange(BigFloat(-1.0e6),BigFloat(0.1f0),N);

julia> norm(Float64[fr1[i] for i = 1:N]./Float64[Float64(br1[i]) for i = 1:N] - 1, Inf)
2.220446049250313e-16

julia> norm(Float64[fr1[i] for i = 1:N]./Float64[Float64(br1[i]) for i = 1:N] - 1, 2)
9.650009737328148e-14

julia> norm(Float64[fr1[i] for i = 1:N]./Float64[Float64(br1[i]) for i = 1:N] - 1, 0)
249953.0

julia> norm(Float64[fr2[i] for i = 1:N]./Float64[Float64(br1[i]) for i = 1:N] - 1, Inf)
1.635118732634666e-7

julia> norm(Float64[fr2[i] for i = 1:N]./Float64[Float64(br1[i]) for i = 1:N] - 1, 2)
4.3156720262505825e-5

julia> norm(Float64[fr2[i] for i = 1:N]./Float64[Float64(br1[i]) for i = 1:N] - 1, 0)
999998.0

@StefanKarpinski
Sponsor Member

Passing these tests is what I mean.

@andreasnoack
Member

Yes, but that seems like an arbitrary criterion. What exactly are they testing?

@andreasnoack
Member

...to be more specific: it seems wrong to me not to respect the actual floating-point value that is passed to linspace, and instead to try to guess which non-representable number the user meant as input, in particular when the costs are a large relative error and non-matching end points. Furthermore, it is much simpler to communicate what is happening in MyRange.

@StefanKarpinski
Sponsor Member

I completely agree that this issue is a bug and should be fixed. But why is passing all of those tests I linked to arbitrary? Those test cases are tricky, but do you disagree that they are the desired behavior?

@andreasnoack
Member

I'm not sure what the rule for the tests is. E.g. I'm puzzled by the test

[linspace(0.0,0.3,4);] == [0:3;]./10

The test seems to suggest that 0.3 means the number also known as 3//10 and therefore that the "right" range should produce the values

float(0//10),float(1//10),float(2//10),float(3//10)

but I think that is wrong. The range should be computed for float(3//10) and not 3//10. The second element should be float(2//3*float(0//10) + 1//3*float(3//10)) instead of float(1//10).

I'm not sure if it applies to all the tests, but the test just mentioned seems to compute the range for the closest decimal representation and then round to binary; since our floating-point numbers are binary, that seems wrong to me.

I think the right behavior would be to use the binary floating-point values that are passed to the linspace constructor and not try to guess the decimal number that the user had in mind when entering the value. That "decimal" approach works for the tests you linked to, but it is not at all obvious to me that it is a feasible strategy in general. This is the reason why I think the tests are arbitrary.

@StefanKarpinski
Sponsor Member

This is what matches our range behavior. Does that seem wrong to you as well? Or do you just think that we should accept that linspace and ranges are going to give different results? Note that there's nothing inherently decimal about what this algorithm does – it looks for rational "liftings" that work for both endpoints, does the computation in the lifted space with integers, and then projects back down.

@andreasnoack
Member

Does that seem wrong to you as well?

Unfortunately, yes. I think I've now understood what is going on in the FloatRange code. The end points are approximated by rational numbers with denominators no larger than maxintfloat(Float32) and then a rational range is computed from these approximate end points. By doing so, no errors are introduced during the range computation, but errors are introduced initially when computing the new end points.

This approximation of the end points happens to produce something that is closer to the decimal representation that people are used to and what is used for I/O because users most often type a rational number with a fairly small denominator. Sometimes, though, this approach can be pretty far off

julia> Rational(Base.rat(0.33333333)...) - 0.33333333
3.333333331578814e-9

Effectively we are treating floating point numbers differently for ranges compared to any other floating point function. The fact that decimal numbers are used for I/O, but internally are stored in binary, has caused much confusion and will continue to do so, but I don't think we are simplifying things by obscuring this in the float ranges.

Since a floating-point number is in fact a rational number, we could still compute some linspaces in integer arithmetic to avoid as many floating-point errors as possible. E.g. linspace(0.0,0.3,4) could be

julia> 0.3 |> t -> Rational(Int(2^52*significand(t)), 2^(52-exponent(t))) |>
    t -> float(collect(0:t/3:t))
4-element Array{Float64,1}:
 0.0
 0.1
 0.2
 0.3

but this would probably overflow in many cases.

So what about float ranges constructed from a step argument? As argued by Guido van Rossum, and also mentioned in the original float range issue (#2333), this might not be a good API for float ranges. So if we really want to keep float ranges constructed from a step size, I think the only good alternatives are either to continue to explain how binary floating-point numbers work (maybe with a bot) when people get something like

julia> v=.1:.1:.3
0.1:0.1:0.2

or to switch to decimal floating point by default, which would probably give fewer surprises, but certainly much slower code.

@StefanKarpinski
Sponsor Member

Your understanding is close but not quite correct. There is no approximation occurring; rather, given a:s:b, a search is done for rational values p, q and r such that

float(p) == a
float(q) == b
float(r) == s

Note the exact equality – these are never approximated; the rational values have to give the float range components exactly. If such rational values are found, computations are done so that the result is the correctly rounded result of what you'd get from doing the computation with those exact rational values.

The alternative to doing something like this for ranges is to insist that n = (b - a)/s be an exact integer value and that a + n*s == b exactly. This is extremely rare for floating-point values and would make floating-point ranges fairly useless and unreliable. Which is not a hypothetical situation – this was the case until I worked out this algorithm. Since then, there have been essentially no "this doesn't work the way I expected it to" bugs filed for float ranges.

The fact that the endpoints for LinSpace are not exact is a bug and not related to any approximation.
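A toy version of that lifting, using rationalize as a stand-in for Base.rat (a sketch; the real code does a continued-fraction search with cutoffs and falls back when no lifting exists):

a, s, b = 0.1, 0.1, 0.3
p, r, q = rationalize(a), rationalize(s), rationalize(b)
@assert Float64(p) == a && Float64(r) == s && Float64(q) == b   # exact, not approximate
n = Int((q - p)/r)                   # step count, computed exactly in rational arithmetic
[Float64(p + k*r) for k in 0:n]      # [0.1, 0.2, 0.3], each element correctly rounded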

@c42f
Member Author

c42f commented Jan 11, 2016

For the record, I've managed to prove that (1.0/M)*M ∈ {1.0, prevfloat(1.0)}, and that there exists a_ such that (1/M)*M*a_ == a for any float a, so we could use a simplified version of lerp_fix() above if necessary, and I think it's guaranteed to work. The proof is pretty short - I'll spend the time to write it down carefully if it ends up being relevant.

In the meantime I need to read the existing algorithm in detail (it sounds really neat, but could sure use some comments!).

@c42f
Member Author

c42f commented Jan 15, 2016

It looks to me like the existing expression for LinSpace's getindex() can't be salvaged, and with it goes the hope of matching FloatRange unless it changes as well.

At an endpoint x, the existing expression amounts to computing (x*M)/M for the number of intervals M. To keep the same order of floating-point ops, we'd need to solve for a nearby endpoint x_ such that (x_*M)/M == x. Unfortunately it's easy to produce a counterexample where no such x_ exists:

x = 0.10000000000000003
# Some candidate endpoints adjacent to x
x_ = [prevfloat(prevfloat(x)), prevfloat(x), x, nextfloat(x), nextfloat(nextfloat(x))]
M = 49.0
julia> (x_*M)/M - x
5-element Array{Float64,1}:
 -2.77556e-17
 -1.38778e-17
 -1.38778e-17
  1.38778e-17
  2.77556e-17

@c42f
Member Author

c42f commented Jan 15, 2016

It looks to me like the existing expression for LinSpace's getindex() can't be salvaged [...]

On further thought, the above counterexample only shows that a new endpoint can't be solved for when the divisor is fixed. If the divisor is allowed to vary (as it does when attempting the rational lifting), it may still be possible to solve simultaneously for a non-integer divisor and new endpoints so that everything works.

@StefanKarpinski
Sponsor Member

I'm pretty sure that does work but as I said, I haven't proved it.

@c42f
Member Author

c42f commented Jan 16, 2016

Fair enough. To summarize progress so far:

With the existing system, if the rational lifting fails we fall back to using the floating point begin and end provided. What's clear is that this fallback will need to be replaced because it doesn't always reproduce the endpoints.

The example above shows that keeping the fallback value for divisor and modifying the endpoints by 1ulp also won't work.

The question now is whether scaling the divisor while also scaling/modifying the endpoints will work.

@c42f
Member Author

c42f commented Jan 26, 2016

I got the chance to read the existing code in more detail and think a little more about the interaction between LinSpace and FloatRange. Some observations:

  • The current method for deducing rational endpoints really is an approximation. It's a very good approximation, one which rounds correctly to the float end points (i.e. closer than 0.5 ulp), but @andreasnoack is right to say it's trying to guess what the user means: it chooses a simple rational for intermediate calculations where possible, rather than using the actual floating point numbers given. This nitpicking isn't meant as an argument against using the existing approach for FloatRange, which seems to make it more intuitive for naive users and more usable as a convenient tool for quick hacks. I'm only pointing it out as an interesting precedent for subtly messing with the input range definition to give more reasonable results without harming other usage.
  • Base.rat uses maxintfloat(Float32) as the largest numerator or denominator allowed before the search for a rational will terminate. I really don't understand why Float32 occurs here independently of the input type. Is this a bug or a judgment of what constitutes a "simple" rational approximation?

I agree with @andreasnoack that constructing float ranges from a step argument is generally a poor API. How about removing the FloatRange type entirely, and making things like 0.1:0.1:1 return a LinSpace instead? We'd naturally want to retain the current rat magic to figure out the number of steps in the resulting linspace and the linspace stop value. This way, we bless linspace() as the appropriate numerically well-behaved API, and do our best to make the other version work as naive users expect. Using a single type should make it easier to achieve the consistency expressed in some of the tests. Another advantage should be speed, if we can contrive to avoid the float division in LinSpace (quick & dirty testing suggests a 2x speedup running a simple collect()).

Rather than hack directly in Base I've put together some bits and pieces at https://github.com/c42f/LinSpaces2.jl. Some tests that this solves the current issue are included but there's a long way to go to have the same level of consistency with FloatRange. I've also included some quick & dirty benchmarks which may also be relevant to #13401.

@StefanKarpinski
Sponsor Member

Base.rat uses maxintfloat(Float32) as the largest numerator or denominator allowed before the search for a rational will terminate. I really don't understand why Float32 occurs here independently of the input type. Is this a bug or a judgment of what constitutes a "simple" rational approximation?

I chose this value because FloatRange applies to Float32, Float64, BigFloat, etc. You don't really want the interpretation of an endpoint to depend on the types given and Float32 is the smallest "computational" floating-point type we support. Also, realistically, if the rational approximation of a floating-point value has numerator or denominator larger than 16 million, that's probably not what was intended.
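For reference, that cutoff is 2^24:

julia> maxintfloat(Float32)
1.6777216f7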

@StefanKarpinski
Sponsor Member

linspace still does not behave as well as FloatRange. Even though there is guessing going on, the fact that you have three floats that have to match means that the guesses are very good and tend to be correct. The simple linear interpolation behavior for linspace is broken in a lot of cases. If we make that the standard API there will be no simple way to express a lot of floating-point ranges correctly.

@c42f
Member Author

c42f commented Jan 31, 2016

You don't really want the interpretation of an endpoint to depend on the types given

It already depends on the types. The rational is computed partially using floating-point arithmetic, and is required to round correctly only to within the epsilon of the floating-point type you put in. I think this type dependence is an inherent aspect of the current algorithm, since the rationals are expanded only to within the precision of the epsilon of a particular floating-point type.

Here's the way I look at rat and its use within FloatRange and LinSpace: the user supplies a floating point number. There's no prospect of knowing whether this was supplied directly by the user or from within the depths of a numerical algorithm. This apparently leads to two opposing desired behaviors:

  1. If supplied casually by the user, you want to infer which simple rational they actually meant and use it to give them the number of steps they actually wanted in the range, etc. The inferred rational is not equal to the actual floating point value supplied (though rounds correctly to it). This may sound a bit dodgy, but I think it's basically justifiable if you view it as a statistical inference problem: Each possible hypothesis about the "best" numerator and denominator must be considered in light of (a) how closely it matches the actual input floating point number, vs (b) a message length complexity cost based on some combination of the number of digits in numerator and denominator.
  2. If supplied from the depths of a numerical algorithm, we should potentially be using the exact floating point value to base further calculations from.

Which of these two cases is most important depends on your prior belief about the distribution of real numbers coming into LinSpace or FloatRange, and there's actually a continuous tradeoff between them, weighing approximation accuracy against message description length.

To me, the current behavior of trying to round correctly while also making an approximation which minimizes the rational number description length is a rather clever sweet spot, and I doubt I would have thought of it myself. It also introduces some constraints on the possible solutions to this bug which are very hard to meet :-)

@c42f
Member Author

c42f commented Jan 31, 2016

The simple linear interpolation behavior for linspace is broken in a lot of cases.

After some more experimentation I must admit you're spot on with this. Even making LinSpace match integer ranges isn't possible when using the simplest linear interpolation formula.

Luckily, it seems that it is always possible to adjust divisor in the existing formula in order to exactly hit the end points, as evidenced by the following addition to the end of the linspace() function which does fix the end point problem. (Yup, using rand() is a horrible hack indeed, but it illustrates the principle.)

    m = 1.0
    for iters = 1:10000
        m = rand()          # random candidate scale for the endpoints and divisor
        a = start*m
        c = stop*m
        s = n*m             # scaled divisor
        # nudge the scaled endpoints by one ulp if projecting back misses
        if a*n/s != start
            a = a*n/s > start ? prevfloat(a) : nextfloat(a)
        end
        if c*n/s != stop
            c = c*n/s > stop ? prevfloat(c) : nextfloat(c)
        end
        if a*n/s == start && c*n/s == stop
            return LinSpace3(a, c, len, s)
        end
    end

It's not possible to make divisor an integer in many cases, but multiplying it and the end points by some randomly chosen non-integer seems to work with probability very near 0.5 if the end points are optionally adjusted in the right direction by 1ulp, as above.

I'll think about how to make this deterministic and avoid iterating.

@c42f
Member Author

c42f commented Jul 8, 2016

Just an update on this one - I've come back to looking at it and have tried several different tacks. The task as laid out above is to simultaneously solve for m such that

((start*m)*n)/(m*n) = start
((stop*m)*n)/(m*n) = stop

I think this is doable with careful backward error analysis.

I also tried using compensated floating-point arithmetic to get very close to correct rounding across the whole range. It does work, but it would require several times more floating-point operations in getindex than the other approaches, so I've stopped looking further for now.
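"Compensated" here means error-free transformations in the style of TwoSum, which return a rounded result together with its exact rounding error; the TwicePrecision utilities in the commits below build on the same idea. A minimal sketch:

# Knuth's TwoSum: returns s and e with s == fl(a + b) and s + e == a + b exactly
function twosum(a::Float64, b::Float64)
    s  = a + b
    bv = s - a          # the part of b that made it into s
    av = s - bv         # the part of a that made it into s
    return s, (a - av) + (b - bv)
end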

@StefanKarpinski added this to the 0.6.0 milestone Sep 13, 2016
timholy added a commit that referenced this issue Jan 11, 2017
Implement StepRangeHiLo

Split out TwicePrecision utilities from basic Range functionality

Define AbstractTime numeric traits

Needed for tests with new ranges on 32-bit CPUs

New range performance optimizations

Add StepRangeLen to stdlib docs [ci skip]

Misc cleanups for range code

Fix printing bug with range of non-scalars

Simpler implementation of unsafe_getindex for LinSpace

Conflicts:
	base/dates/types.jl
	base/deprecated.jl
	base/exports.jl
	base/float.jl
	base/operators.jl
	base/range.jl
	doc/src/stdlib/math.md
	test/ranges.jl
timholy added a commit that referenced this issue Jan 12, 2017
Implement StepRangeHiLo