
Provide script julia-native #14995

Closed
wants to merge 1 commit

Conversation

@eschnett (Contributor) commented Feb 8, 2016

Provide two scripts, `julia-native-setup` and `julia-native`. These are installed into the `bin` directory and are intended to be called by the end user and/or the administrator who installs Julia.

From the documentation:

The script `julia-native-setup` creates an optimized version of Julia's system image for a particular CPU type. Together with the script `julia-native`, this makes it quite easy to maintain a single Julia installation on a heterogeneous cluster that has a common file system.

1. Install Julia in such a way that it works on every machine of the cluster. This usually requires either building Julia on a machine with the oldest CPU type, or explicitly setting `JULIA_CPU_TARGET`.
2. For each CPU type, run `julia-native-setup` once, on a machine with this CPU type. This creates an optimized version of Julia's system image for this CPU type. (Note: this script cannot run in parallel.)
3. To run Julia, use the script `julia-native` instead of plain `julia`. This script detects the machine's CPU type and then executes Julia with the optimized system image.

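For readers skimming the thread, the dispatch logic the two scripts implement looks roughly like the sketch below. It is rendered in Julia purely for illustration (the PR's actual scripts are plain sh); the `sys-<cpu>.so` naming, the directory layout, and the `Sys.CPU_NAME` query are assumptions, not taken from the PR.

```julia
# Illustrative Julia rendering of the julia-native dispatch logic.
# The PR's real wrapper is an sh script; paths and naming here are assumptions.
cpu    = Sys.CPU_NAME                               # e.g. "haswell", "skylake"
sysdir = joinpath(Sys.BINDIR, "..", "lib", "julia") # where the setup step stores images
native = joinpath(sysdir, "sys-$cpu.so")            # per-CPU image, hypothetical name

# Fall back to the stock system image when no optimized one exists for this CPU.
cmd = isfile(native) ? `julia -J $native $ARGS` : `julia $ARGS`
p = run(ignorestatus(cmd))
exit(p.exitcode)
```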
@vtjnash (Member) commented Feb 9, 2016

I'm not certain we want to go exactly this way. In particular, I think it should be quite trivial for us to cross-compile to a different architecture (or several simultaneously, at a reasonable cost of tens of seconds per CPU arch). What we've needed is a way to specify the targets so that they can be selected from `jl_dump_shadow`, and also to update `jl_gen_llvm_globaldata` to emit the appropriate `feature_string`. I think we could add a new build flag `--output-targets=core2,haswell,ivybridge,foobar,baz,native`. Then you would be able to integrate your `./julia-native` functionality into `./julia` so that it is available to all users without any extra work.

I'd be happy to help walk you through the required changes in more detail. It's probably easiest to PM me (my user id @ gmail.com) rather than working it out over the GitHub issues.

@samoconnor (Contributor) commented:

Hi @eschnett,
This looks like it might be useful for deploying to cloud containers.
At the moment in AWSLambda.jl I have a hack to build for the Xeon E5-2680 CPU used by AWS Lambda.
If I don't do this, the multi-platform binaries are too big to fit in the container.

@nalimilan (Member) commented:

Indeed, @vtjnash's approach looks like it would also allow generic binary tarballs and distribution packages to ship additional images optimized for recent architectures, instead of always falling back to the lowest common denominator (i.e. plain x86_64, without AVX and co.).

@eschnett (Author) commented Feb 9, 2016

@nalimilan "Generic" binary tarballs (that handle multiple architectures) are also possible with julia-native.

I'll contact @vtjnash to see how involved his idea is.

@tkelman (Contributor) commented Feb 9, 2016

All of the underlying functionality is portable, so it seems silly to go through a bash dependency for this.

@eschnett (Author) commented Feb 9, 2016

@tkelman Do you refer to using Bash instead of sh, or to using a shell script instead of a Julia script, or to using a wrapper script at all?

@tkelman (Contributor) commented Feb 9, 2016

All of the above, but mostly the first two until @vtjnash works a bit of magic.

@samoconnor (Contributor) commented:

Note that I had to explicitly set OPENBLAS_TARGET_ARCH=SANDYBRIDGE to avoid building a huge multi-architecture openblas lib.

Whatever the solution, it would be good if it dealt with all the deps as well as the core of Julia.

@tkelman (Contributor) commented Feb 9, 2016

This is about the system image. #9372 is about deps.

@eschnett (Author) commented Feb 9, 2016

@tkelman The main reason I'm using a shell script is that I want to call exec in the end. How do I do this from Julia?

@tkelman (Contributor) commented Feb 9, 2016

Ah right, I missed what advantage exec buys you over a conventional spawn. A small executable shim, or incorporating this functionality into the existing REPL driver, could probably achieve the same thing?
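For concreteness, the conventional-spawn alternative mentioned here looks roughly like this in Julia: the wrapper stays alive as the parent process (unlike exec, which replaces the process entirely), but it can still forward the child's exit status. The sysimage path is purely illustrative.

```julia
# Spawn-based sketch: run julia as a child process and forward its exit code.
# The image path is a placeholder, not a real installation layout.
p = run(ignorestatus(`julia -J /path/to/sys-native.so $ARGS`))
exit(p.exitcode)
```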

@eschnett (Author) commented:

My solution is quick and dirty, but easy to use and reliable. In the end, this (or an equivalent mechanism) should become part of the julia driver itself.

@simonbyrne (Contributor) commented:

As I mentioned on #18179, it would be nice if something like this was available as an installation option.

@tkelman (Contributor) commented Aug 22, 2016

We don't ship a linker (yet), so this would only be available if you have development tools at install time. Otherwise you wouldn't get a shared library, so you'd have a faster runtime but slow startup.

@ViralBShah (Member) commented:

Can't we at least have an optimize script that does the right thing if you have the linker? Doesn't need to be post-install.

@tkelman (Contributor) commented Aug 22, 2016

"""
build_sysimg(sysimg_path=default_sysimg_path, cpu_target="native", userimg_path=nothing; force=false)
Rebuild the system image. Store it in `sysimg_path`, which defaults to a file named `sys.ji`
that sits in the same folder as `libjulia.{so,dylib}`, except on Windows where it defaults
to `JULIA_HOME/../lib/julia/sys.ji`. Use the cpu instruction set given by `cpu_target`.
Valid CPU targets are the same as for the `-C` option to `julia`, or the `-march` option to
`gcc`. Defaults to `native`, which means to use all CPU instructions available on the
current processor. Include the user image file given by `userimg_path`, which should contain
directives such as `using MyPackage` to include that package in the new system image. New
system image will not replace an older image unless `force` is set to true.
"""
function build_sysimg(sysimg_path=nothing, cpu_target="native", userimg_path=nothing; force=false, debug=false)
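A hedged usage sketch for the function quoted above, assuming it is loaded from the `contrib/build_sysimg.jl` script in a Julia source checkout of that era and run from the repository root; `userimg.jl` is a hypothetical file containing `using MyPackage` directives, not something shipped with Julia.

```julia
# Load the sysimage-building helper from a source checkout (contrib/ path assumed).
include("contrib/build_sysimg.jl")

# Rebuild the default system image for the host CPU, including a hypothetical
# userimg.jl of `using` directives, and overwrite any existing image.
build_sysimg(nothing, "native", "userimg.jl"; force=true)
```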

@ViralBShah (Member) commented:

We do a much better job at generating native code for the current architecture now, thanks to @yuyichao's work.

@ViralBShah closed this on May 30, 2018