Add module name to method show output (#19162)
* Add the method's module name to the show output

* Update tests that involve showing a method
danielmatz authored and stevengj committed Nov 1, 2016
1 parent 9a46e9d commit 7aba3f5
Showing 4 changed files with 13 additions and 13 deletions.
2 changes: 1 addition & 1 deletion base/methodshow.jl
@@ -110,7 +110,7 @@ function show(io::IO, m::Method; kwtype::Nullable{DataType}=Nullable{DataType}()
             join(io, kwargs, ", ", ", ")
         end
     end
-    print(io, ")")
+    print(io, ") in ", m.module)
     if line > 0
         print(io, " at ", file, ":", line)
     end
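
To illustrate the change, here is a hypothetical REPL session (the REPL[1]:1 location is illustrative): a method that previously printed as "f(x::Int64) at REPL[1]:1" now shows its defining module between the signature and the location.

julia> f(x::Int) = x + 1
f (generic function with 1 method)

julia> first(methods(f))
f(x::Int64) in Main at REPL[1]:1
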
4 changes: 2 additions & 2 deletions test/ambiguous.jl
@@ -52,8 +52,8 @@ let err = try
     io = IOBuffer()
     Base.showerror(io, err)
     lines = split(takebuf_string(io), '\n')
-    ambig_checkline(str) = startswith(str, "  ambig(x, y::Integer) at") ||
-                           startswith(str, "  ambig(x::Integer, y) at")
+    ambig_checkline(str) = startswith(str, "  ambig(x, y::Integer) in Main at") ||
+                           startswith(str, "  ambig(x::Integer, y) in Main at")
     @test ambig_checkline(lines[2])
     @test ambig_checkline(lines[3])
 end
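
For illustration, the candidate lines this test checks now read as follows in a hypothetical REPL session (locations and exact error text illustrative):

julia> ambig(x, y::Integer) = 1;

julia> ambig(x::Integer, y) = 2;

julia> ambig(1, 2)
ERROR: MethodError: ambig(::Int64, ::Int64) is ambiguous. Candidates:
  ambig(x, y::Integer) in Main at REPL[1]:1
  ambig(x::Integer, y) in Main at REPL[2]:1
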
2 changes: 1 addition & 1 deletion test/reflection.jl
@@ -442,7 +442,7 @@ let li = typeof(getfield).name.mt.cache.func::Core.MethodInstance,
     mmime = stringmime("text/plain", li.def)

     @test lrepr == lmime == "MethodInstance for getfield(...)"
-    @test mrepr == mmime == "getfield(...)"
+    @test mrepr == mmime == "getfield(...) in Core"
 end


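
For context, li.def above is the Method object behind the built-in getfield; with this commit, both its repr and its text/plain MIME rendering carry the "in Core" suffix, mirroring the assertions above (Julia 0.5-era internals; the field path to the MethodInstance may differ in other versions):

li = typeof(getfield).name.mt.cache.func::Core.MethodInstance
repr(li.def)                      # "getfield(...) in Core"
stringmime("text/plain", li.def)  # same text for the text/plain form
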
18 changes: 9 additions & 9 deletions test/replutil.jl
@@ -363,15 +363,15 @@ let err_str,
     sp = Base.source_path()
     sn = basename(sp)

-    @test sprint(show, which(Symbol, Tuple{})) == "Symbol() at $sp:$(method_defs_lineno + 0)"
-    @test sprint(show, which(:a, Tuple{})) == "(::Symbol)() at $sp:$(method_defs_lineno + 1)"
-    @test sprint(show, which(EightBitType, Tuple{})) == "EightBitType() at $sp:$(method_defs_lineno + 2)"
-    @test sprint(show, which(reinterpret(EightBitType, 0x54), Tuple{})) == "(::EightBitType)() at $sp:$(method_defs_lineno + 3)"
-    @test sprint(show, which(EightBitTypeT, Tuple{})) == "(::Type{EightBitTypeT})() at $sp:$(method_defs_lineno + 4)"
-    @test sprint(show, which(EightBitTypeT{Int32}, Tuple{})) == "(::Type{EightBitTypeT{T}}){T}() at $sp:$(method_defs_lineno + 5)"
-    @test sprint(show, which(reinterpret(EightBitTypeT{Int32}, 0x54), Tuple{})) == "(::EightBitTypeT)() at $sp:$(method_defs_lineno + 6)"
-    @test startswith(sprint(show, which(getfield(Base, Symbol("@doc")), Tuple{Vararg{Any}})), "@doc(x...) at boot.jl:")
-    @test startswith(sprint(show, which(FunctionLike(), Tuple{})), "(::FunctionLike)() at $sp:$(method_defs_lineno + 7)")
+    @test sprint(show, which(Symbol, Tuple{})) == "Symbol() in Main at $sp:$(method_defs_lineno + 0)"
+    @test sprint(show, which(:a, Tuple{})) == "(::Symbol)() in Main at $sp:$(method_defs_lineno + 1)"
+    @test sprint(show, which(EightBitType, Tuple{})) == "EightBitType() in Main at $sp:$(method_defs_lineno + 2)"
+    @test sprint(show, which(reinterpret(EightBitType, 0x54), Tuple{})) == "(::EightBitType)() in Main at $sp:$(method_defs_lineno + 3)"
+    @test sprint(show, which(EightBitTypeT, Tuple{})) == "(::Type{EightBitTypeT})() in Main at $sp:$(method_defs_lineno + 4)"
+    @test sprint(show, which(EightBitTypeT{Int32}, Tuple{})) == "(::Type{EightBitTypeT{T}}){T}() in Main at $sp:$(method_defs_lineno + 5)"
+    @test sprint(show, which(reinterpret(EightBitTypeT{Int32}, 0x54), Tuple{})) == "(::EightBitTypeT)() in Main at $sp:$(method_defs_lineno + 6)"
+    @test startswith(sprint(show, which(getfield(Base, Symbol("@doc")), Tuple{Vararg{Any}})), "@doc(x...) in Core at boot.jl:")
+    @test startswith(sprint(show, which(FunctionLike(), Tuple{})), "(::FunctionLike)() in Main at $sp:$(method_defs_lineno + 7)")
     @test stringmime("text/plain", FunctionLike()) == "(::FunctionLike) (generic function with 1 method)"
     @test stringmime("text/plain", Core.arraysize) == "arraysize (built-in function)"

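
The pattern these tests exercise, shown in a hypothetical REPL session (file and line are illustrative):

julia> g() = 0;

julia> sprint(show, which(g, Tuple{}))
"g() in Main at REPL[1]:1"
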

4 comments on commit 7aba3f5

@nanosoldier (Collaborator)

Executing the daily benchmark build, I will reply here when finished:

@nanosoldier runbenchmarks(ALL, isdaily = true)

@nanosoldier (Collaborator)

Your benchmark job has completed - possible performance regressions were detected. A full report can be found here. cc @jrevels

@stevengj (Member)

@jrevels, there appears to be no imaginable way this commit could have affected performance, so it seems we need to tune the benchmark statistics again. It's frustrating that it's so hard to get performance measurements with reasonable accuracy.

@jrevels (Member) commented on 7aba3f5, Nov 2, 2016

Looking at the logs, it seems the past couple of daily builds have been executing alternately on Nanosoldier's first and second worker nodes, which is the likely cause of this variation. I can fix this by forcing daily builds to run on a specific machine, which should have been happening in the first place.

There's still the matter of normal builds being noticeably noisier than they used to be for the past month or so (and those always run on consistent machines). The only change I can point to, besides Julia itself undergoing some changes, is that more benchmarks were added to the suite, increasing its total memory footprint. You might also notice that smaller groups of benchmarks (like "linalg" vs. ALL) are less prone to noise.

The above makes me inclined to believe my prior hunch that the spurious performance regressions are memory-related. I'd like to try having Nanosoldier load and run each module separately, as well as manually forcing a swap + drop_caches between them, to see if that cuts down on noise.
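
A minimal sketch of that isolation idea (hypothetical, not Nanosoldier's actual code; assumes a Linux worker with root privileges and a hypothetical run_group.jl driver script):

# Run each benchmark group in a fresh julia process and drop the OS page
# cache between groups, so one group's memory footprint cannot affect the
# next group's measurements.
const GROUPS = ["array", "linalg", "string"]   # hypothetical group names

function drop_caches()
    run(`sync`)  # flush dirty pages to disk first
    open("/proc/sys/vm/drop_caches", "w") do io
        write(io, "3")  # drop page cache, dentries, and inodes
    end
end

for g in GROUPS
    drop_caches()
    run(`$(Base.julia_cmd()) run_group.jl $g`)  # fresh process per group
end
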

@maleadt These kinds of fluctuations are a point in favor of your idea to use Codespeed to visualize the daily builds, if you're still interested in setting that up.
