Try APL (tryapl.org)
377 points by tosh on June 10, 2021 | 228 comments



In the mid-seventies at Swarthmore College, we were mired in punched card Fortran programming on a single IBM 1130. The horror, a machine less powerful than the first Apple II. My job six hours a week was to reboot after each crash. People waited hours for their turn to crash the machine. I let a line form once people had their printouts. I'd find the single pair of brackets in a ten line listing, and I'd explain how their index was out of bounds. They thought I was a genius. Late one Saturday night, I made a misguided visit to the computer center while high, smelled sweat and fear, and spun to leave. Too late, a woman's voice: "Dave! I told Professor Pryor he needed you!" We didn't know that Fred Pryor was the economics graduate student freed in the 1962 "Bridge of Spies" prisoner exchange. Later he’d learn that I was his beagle’s favorite human, and I’d dog-sit to find steaks for me I couldn’t afford, but for now I feared him. So busted! Then I heard this voice “See these square brackets? See where you initialize this index?” He was spectacularly grateful.

One cannot overstate the rend in the universe that an APL terminal presented, catapulting me decades into the future. I quickly dreamed in APL. For $3 an hour for ten hours (a massive overcharge) I took a professor’s 300 line APL program translated literally from BASIC, and wrote a ten line APL program that was much faster. One line was classic APL, swapping + and * in an iterated matrix product for max and min. The other nine lines were input, output. The professor took years to realize I wasn’t also calling his code, then published my program.
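(For the curious, a sketch of that swap in modern Dyalog notation, from memory rather than the original program: APL's inner product f.g generalizes the matrix product, so where +.× gives the ordinary product, ⌈.⌊ gives a max-min "bottleneck" product.)

          A ← 3 3⍴0.9 0.2 0.5 0.1 0.8 0.3 0.4 0.6 0.7
          A +.× A      ⍝ ordinary matrix product
          A ⌈.⌊ A      ⍝ same computation with + and × swapped for max and min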

Summer of 1977 I worked as a commercial APL programmer. Normally one never hires college students for the summer and expects them to be productive. The New York-based vice president was taking the train every day to Philadelphia because the Philly office was so far underwater, and desperate to try anything to save himself the commute. He knew Swarthmore had a terminal, and heard about me. At my interview I made a home-run derby of the questions from the Philly boss. The VP kept trying to intervene so he could put me in my place before hiring me. The tough questions were “dead key problems”. How do you write the following program, if the following keys are broken?

Our client was a mineral mining company, our task a reporting system. The reports were 2-dimensional projections of a 9-dimensional database. The accountants wanted all totals to be consistent across reports, and to be exactly the sums of their rounded components. I broke the news to our team that we needed to start over, rounding the 9-dimensional database once and for all, before generating each report. This took a few weeks; I wrote plenty of report generation helper routines. My coworkers overheard me say on a phone call that I was being paid $5 an hour, and at the time I didn’t understand which way they were shocked. I didn’t have much to do, the rest of the summer.

The mining company VP found me one morning, to ask for a different report, a few pages. He sketched it for me. He found me a few hours later to update his spec. He loved the printout he saw, imagining it was a prototype. “It’s done. I can make your changes in half an hour.”

At a later meeting he explained his own background in computing, how it had been key to his corporate rise. Their Fortran shop would take a month to even begin a project like I had knocked off in a morning, then weeks to finish it. He pleaded with me to pass on Harvard grad school and become his protege.

Some Lisp programmers had similar experiences, back in the day. Today, APL just sounds like another exotic language. In its heyday it was radical.


Years ago I talked to an older APL pro who was lucky enough to work on a project where he wrote APL interactively on a Cray. I can't remember most of the details, but holy cow, that must have been like being handed a fighter jet in the biplane era.


> The tough questions were “dead key problems”. How do you write the following program, if the following keys are broken?

I've never heard of this style of programming question. It's fascinating that we've gone through so many styles of interview over the decades, from pair programming against unit tests to Google-style whiteboard algorithms to early-2000s Microsoft-style brain teasers and "why are manhole covers round".


It's very APL specific. Keys were APL symbols, operators in one keystroke. So if the sum key wasn't available, you'd decode base 1. That sort of thing...
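For example, with the + and / keys broken you could still sum a vector using decode:

          +/3 1 4 1 5      ⍝ the normal sum
    14
          1⊥3 1 4 1 5      ⍝ the same sum via decode in base 1
    14

(Sketch in Dyalog-style APL; the original interview questions are long gone.)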


This, sir, is why I read HN comments. Thank you.


Also just wanted to say thanks for sharing!

I grew up with PCs and a Commodore 64 in the 80's, which I think was the golden era of personal computing, but I absolutely love stories from earlier!


My grandfather had to drop out of high school, after his dad died from an infection. He unloaded trains for the A&P grocery chain, and proposed that they reorder shipments to save money. He retired in charge of half of New England. His computer center was a bank of women at fancy adding machines.

Fresh out of college, my father helped a senior programmer at Kodak automate testing for their film developing chemical baths. This replaced dozens of people. Kodak found them other positions, and moved my dad to the research labs. He used computers from the beginning, and told me many stories. On a family night in the 1960's I got to play the original "space wars".

In 1974 he came up with the color sensor pattern now used in nearly all digital cameras.


I also got to play space wars on a pdp-1 but my father didn't invent digital photography :(

Wondering though...I visited the Rochester labs in the mid-80s -- hardware I had a hand in was being used there for projects that I suspect were secret at the time such as computer simulation of photochemistry and warping of photoint images to conform to map data. I wonder if I met your father?


He invented the Bayer pattern??!



Your father was Bryce Bayer? That's amazing.


Fellow Swattie here, 2004 vintage. The CS department has changed dramatically even since my time there. We took classes in the observatory building, but they’ve long since outgrown that space. They’re now in the new Science Center building, and I heard they’ve actually outgrown that space as well. It’s a very popular major!


Read that as "We took classes in observatory building". Imagining the prof explaining how to grind a mirror, how revolvers can be used to make a very high resolution angular positioning system, and so on.


Sadly, the only linkage between the intended purpose of the building and the classes we took inside was the Sun workstations.


Thanks for sharing, I love these types of stories. Really makes me pine for the "old" days, and wonder if there's a parallel universe where technology took a very different route such that languages like APL, Lisp, and Smalltalk are used instead of JavaScript, Java, and C#, and what that world looks like.

> Some Lisp programmers had similar experiences, back in the day.

About 20 years ago (so not quite so far back) I was an engineering intern in an industry (nuclear energy) with two main tools: heavy number crunching software in Fortran, and Excel for everything else. The plant I was working at had just gotten some software for managing and tracking fuel movement (reactor cores are comprised of several hundred fuel bundles, which are re-arranged every 1-2 years), and my task was to set it up and migrate historical data from Excel spreadsheets, either by entering data manually with the GUI (which wasn't really that good) or using the primitive built-in import/export functions (CSV-based probably). Good intern task, right?

At some point I noticed this odd window running in the background whenever I started the program: "Gold Hill Common Lisp". Hm, what's this, it seems to have some kind of command line... and so I dived down the rabbit hole of the CL REPL and live image manipulation. I discovered the apropos command (or maybe GHCL had an actual UI for it?), which let me learn about all the internal data structures and methods, which I was able to use to quickly configure the plant specifics and import data.

"Oh, you're done already? OK next we need to get these custom reports out, talk to the vendor about implementing them. And see if you can now export data into our old report generator" (another spreadsheet of course). So I dutifully started the requisition process to get custom reports added, but while that was working through the system, I was stretching my new-found Lisp knowledge to not just dump out report data, but add the functionality to the UI. Coming from a background in C and Fortran I was fully ingrained with "write, compile, run" being how things worked. Image how much it blew my mind when I found out I could call a function in the REPL and actually add a menu to the running program!

One feature of the software was it could be used for "online" fuel movement tracking, which was traditionally done on paper, in duplicate. It's probably still done that way for good reasons, but still nice to have that electronic tracking. I was so proud when we were demonstrating it to the reactor operators, and they asked if we could add some little functionality (the details escape me), and I was able to say "yep no problem!" No requisition process, no back-and-forth with the vendor, no waiting maybe a year to get the feature. Really wish all software was so powerful (although admittedly my hijinks were a bit of a QA nightmare, but the software wasn't considered safety-related since there were many checks and paper records).

Fast-forward a couple years, after much coursework in Fortran and Matlab, I'm about to graduate and am now interviewing with the vendor. Question comes up "so what would you change about our software?" "Well, the interface is a bit clunky, I'd probably want to re-write it in a modern language like C++" :facepalm:.

Only years later, re-discovering CL, along with Racket and Clojure, did it occur to me how much was wrong with that response, and how sad that the key lesson of that semester internship had gone over my head.


I love Lisp and Scheme too. Though like sex at 19, nothing has ever felt quite like the first months with APL.

For "production" math research, nothing comes close to Haskell for me. Every time I consider straying, the ease of parallelism draws me back.

I have written a fair bit of Scheme, using my own preprocessor that avoids most parentheses. I stay clear of that loaded debate, this is for personal use. The code is poetic, though not as dense as APL or Haskell. As Bill Joy once opined, what you can fit on a screen matters.

My own favorite Haskell code is a very terse implementation of monadic parsing. Typed trees are not far off from strings, and then one has algebraic data types as parsing. (Pattern matching is parsing a bit at a time, without reifying the activity for program control.)

APL gets unexpected mileage from a core of array handling. I dream of a Lisp-like language tuned to parse algebraic data types as its core activity, with macros as its "middle of the plate" pitch rather than as a strapped-on afterthought. (I'm not trolling here, this is my honest opinion. In 10,000 runs of the simulation, I doubt Lisp macros would be this clumsy in most runs.)

Ultimately we all long for better language tools to express pattern as code. My electric guitar never sounded like what's in my head, and for code I'm still reaching, too. Though APL was wonderful in its day.


Playing with J (~ APL) certainly feels magical (though I can never remember the syntax a day later) and APL like Lisp gets a lot of leverage from a powerful vocabulary on a rich data structure (arrays and lists respectively). However the "One Great Datastructure" flattens the domain and doesn't self-document, nor constrain you from unintended uses, the way a rich type system does, so I find reading and maintaining Lisp (and I assume the same applies to APL) to be frustrating and tedious.

Writing this, I'm reminded how J felt full of "tricks" much like using Perl: there are these tricks you can use to get the result you wanted that isn't necessarily the most faithful expression of the problem.


> Thanks for sharing, I love these types of stories. Really makes me pine for the "old" days, and wonder if there's a parallel universe where technology took a very different route such that languages like APL, Lisp, and Smalltalk are used instead of JavaScript, Java, and C#, and what that world looks like.

Easy, here is some time travel,

"Alan Kay's tribute to Ted Nelson at "Intertwingled" Festival"

https://www.youtube.com/watch?v=AnrlSqtpOkw

"Eric Bier Demonstrates Cedar"

https://www.youtube.com/watch?v=z_dt7NG38V4

"Yesterday's Computer of Tomorrow: The Xerox Alto │Smalltalk-76 Demo"

https://www.youtube.com/watch?v=NqKyHEJe9_w

However to be fair, Java, C# and Swift alongside their IDEs are the closest in mainstream languages to that experience, unless you get to use Allegro, LispWorks or Pharo.


Nice links. For a taste of what programming in Pharo is like, see https://avdi.codes/in-which-i-make-you-hate-ruby-in-7-minute...


This post is almost Pynchonesque! Great story and thanks for sharing!


Very fun story, thank you for sharing! Small question of clarification:

> I took a professor’s 300 line APL program translated literally from BASIC, and wrote a ten line program that was much faster.

The professor translated BASIC->APL, and you translated APL->more concise APL?


Yes [fixed]. And in an interpreted language, moving the work to a single operation was a big win.


This is a textbook case of "humble bragging"... and I love it!


@Syzgies When did you stop using APL as your black magic? Or when did it lose its lustre?


So we can expect ecmascript 7 to include some dyadic operators ? :)

thanks for this story btw


If the dang pipeline operator ever actually gets to Stage 2/3, it should be straightforward to invent a lot of arbitrary operator-like behavior using composition and pipelines.


    |> 
this ?


Yeah. With that, you can easily do a lot of quasi-operator functionality with things like:

    3 |> add(5) |> mult(7)
Not as good as "real" custom operators, but it still fills a lot of gaps.


Back in university (1974), I took a course in AI. The prof wanted us to write a brute-force solution to solve the 8-queens problem -- any language!

I wrote the solution in APL in about an hour and it only had 3 lines of code. The rest of the class spent days on their terminals and keypunches trying to solve it. Most solutions took hundreds of lines of code.

I got a D from my professor. I questioned why and was told that it was unreadable, and that the solution was inefficient. This annoyed me because he didn't know APL, and I figured that since I solved the problem in one hour, while the rest took days, it was very efficient.

I protested the result with the department head (who liked APL) and ended up getting an A+. As you can imagine, all the rest of my assignments, written in a variety of languages, were graded by that AI prof with significant prejudice.

I passed nonetheless. I loved APL and ended up working for one of the major APL providers as my first job out of school.


> I questioned why and was told that it was unreadable, [...] this annoyed me because he didn't know APL.

I often tell people that Spanish is unreadable if you don't know Spanish. This also applies to language features! It's only fair to call things "unreadable" if you have the full context to understand but still find it hard.


> I got a D from my professor. I questioned why and was told that it was unreadable, and that the solution was inefficient.

That is such a bad faith argument, how can a brute force solution be efficient or inefficient?


Constant factors, heuristics, memory usage, etc...

There was a discussion on array programming languages here recently where someone proudly showed off a K language solution to a simple problem, stating that the K solution was efficient because it could solve it in 1.2 microseconds. I used Rust to solve it in 5.5 nanoseconds, which is nearly 200x faster. Both used "brute force" with no cleverness, but there's "bad" brute force and "good" brute force.

I've had a similar experience at university while using Haskell. It's a very elegant lazy pure functional language, but it's glacially slow. The natural, low-friction path is very inefficient. The efficient path is unidiomatic and verbose.

I hear people making similar observations about F# also. It's fast to program in, but if you also need performance then it is no better than more mainstream languages -- in fact worse in some ways because you're "going against the grain".


Compared to Rust/C/C++? Sure!

But Haskell vs most others, it’s faster and compiles down to a binary executable.

F# runs on .NET and is comparable to Haskell but not AOT.


This is magical thinking.

Compilation doesn't magically eliminate Haskell treating all lists as linked lists. It's an inherent aspect of the language.


Calling Haskell "glacially slow" is grossly misleading when there are languages like Python and Ruby in common use.


Exactly! Where is the nuance here? Plus, he's not exactly mentioning what he's comparing it to.


I think that the nuance here was perfectly obvious, if not explicit, to everyone who wasn't busy burying it under a pile of whataboutism.

I would also say that I wouldn't be at all surprised if idiomatic Python is actually quite a bit faster than idiomatic Haskell in some interesting use cases. Which does not at all mean that the opposite is true. There are always interesting cases that make good fodder for "what about" comments, but getting carried away with them doesn't really make for particularly edifying discussion. Just mentally append "in my experience" to everyone's comments and move on.


> I would also say that I wouldn't be at all surprised if idiomatic Python is actually quite a bit faster than idiomatic Haskell in some interesting use cases. Which does not at all mean that the opposite is true.

I'd be utterly amazed. Haskell is orders of magnitude faster than Python for "typical" code (having to do a hash lookup for every method invocation ends up being really costly). It's not a slow language by any reasonable definition. And the fact that you're suggesting it is suggests that the nuances are not at all obvious.

(Not trying to hate on Python - performance is a lot less important than most people think it is - just trying to put Haskell's performance characteristics in a familiar context)


It's not that much of a nuance, TBH. It's just that Python has largely become a high-level language for gluing together libraries that are mostly written in much faster AOT-compiled languages such as C, C++, Fortran, or even Cython. For example, I prefer to stick with Python for a lot of the number crunching things that I do because, while you certainly can beat numpy's performance in other languages, in practice it turns out that doing so is generally more work than it's worth. Especially if you're using the Intel distribution of numpy.

So, yeah, it's true, you do a hash lookup for every Python method invocation, and also you've got to worry about dynamic type checks for all the dynamically typed references. But the practical density of method invocations and dynamic type checks can be surprisingly low for a lot of Python's more interesting use cases.


Haskell doesn't treat all lists as linked lists. It treats linked lists as linked lists. It also has Seq (finger trees), boxed and unboxed arrays and various streaming libraries.


> I've had a similar experience at university while using Haskell. It's a very elegant lazy pure functional language, but it's glacially slow. The natural, low-friction path is very inefficient. The efficient path is unidiomatic and verbose.

Well, if you run a Haskell program on a "C-Machine", of course a comparable program in a "C-Language" will be faster — as it doesn't need to bridge any gap in execution semantics.

The point is: almost all modern computers are "C-Machines". Modern CPUs even go a long way to simulate a "PDP-7-like computer" to the outside world, even though internally they work quite differently. (The most effective optimizations, like cache hierarchies, pipelining, out-of-order execution, JIT compilation to native instructions [CPU-internal "micro-ops"], and some more magic, are "hidden away"; they're "transparent" to programmers and often not even accessible to them). So not only is there nothing but "C-Machines", those "C-Machines" are even highly optimized to execute "C-Languages" most efficiently, and nothing else! If you want to feed in something that's not a "C-Language", you have to translate it to one first. That transformation will almost always make your program less efficient than writing it (by hand) in a "C-Language" directly. That's obvious.

On the other hand, running Haskell on a "Haskell-Machine"¹ is actually quite efficient. (I think that, depending on the problem to solve, it even outperforms a "C-Machine" by some factor; I don't remember the details and would need to look through the papers to be sure…). On such a machine an idiomatic C or Rust program would be "glacially slow", of course, for the same reason as the other way around: the need to adapt execution semantics before such non-native programs can run will obviously make the "translated" programs much slower than programs built in languages much closer to the execution semantics provided by the machine's hardware-implemented evaluator.

That said, I understand why we can't have dedicated hardware evaluators for every kind of (significantly different) language. Developing and optimizing hardware is just too expensive and takes too much time. At least if you'd like to compete with the status quo.

But I could imagine, in the future, some kind of high-level "meta language" targeting FPGAs which could be compiled down to efficient hardware-based evaluators for programs written in it. Maybe this could even end the predominance of "C-Languages" when it comes to efficient software? Actually, the inherently serial command-stream semantics of "C-Languages" aren't well suited to the parallel data-flow hardware architectures we've been using at the core for some time now (where we even do a lot of gymnastics to hide the true nature of the metal by "emulating a PDP-7" to the outside world — as that's what "C-Languages" are built for and expect as their runtime).

To add to the topic of the submission: Are there any HW implementations of APLs? Maybe on GPUs? (As this seems a good fit for array processing languages).

¹ https://github.com/tommythorn/Reduceron


Whoa! Reduceron is cool! Your point about virtual “C-Machines” getting in the way of the hardware is quite relevant with today’s architectures...

> Are there any HW implementations of APLs?

This has been discussed on Y-Combinator in the past[1]. I found a discussion referencing a paper about leveraging Futhark[2].

1. https://news.ycombinator.com/item?id=11965474

2. https://futhark-lang.org/publications/fhpc16.pdf


Great story, it’s stories like these that make me still hold onto my undergrad work (as bad as it is). Maybe one day C#, Visual Basic, ASP, PHP, T-SQL and other esoteric projects will be looked at as relics of the past.


If you want to learn more about array programming languages there is a new podcast series at https://www.arraycast.com with some banter, philosophy, history and a collection of related resources https://www.arraycast.com/resources


Found it when their first ep was posted here on HN a few months ago. Had seen array langs before but never dared to sit down with them. Their pod made me take the plunge. These langs are fascinating. As someone who normally likes func programming, it feels related but with reduce on steroids.


I'm taking the opportunity to mention my project that implements a language that is inspired by, and mostly compatible with, APL. It has some major differences, such as being lazily evaluated and providing support for first-class functions.

It also supports defining syntax extensions which is used by the standard library to provide imperative syntax, which means you can mix traditional APL together with your familiar if/else statements, etc.

At this point there isn't much documentation, and the implementation isn't complete, so I'm not actually suggesting that people run out to try it unless they are really interested in APL. I just took this opportunity since APL is mentioned so rarely here.

https://github.com/lokedhs/array

There is an example of a graphical mandelbrot implementation in the demo directory, that may be interesting.


As someone who has used APL professionally to maintain a legacy codebase: https://en.wikipedia.org/wiki/Write-only_language

Anyway, I like reduce, shape, membership, find, and/or, and ceiling/floor. I actually like dealing with arrays in this way.

IMO, that is why numpy/matlab is so much better than APL.


A lot of people seem to have trouble with symbol-based languages, for eg. regular expressions, some parts of Perl, or APL in this case. That seems to be part of the appeal of Python too, for a lot of people, that it's unusually low on non-alphanumeric symbols. I wonder if it has something to do with "Head-voice vs. quiet-mind" [1]. I'm generally on the non-verbal quiet-mind side, and find APL-like languages very intuitive and appealing. Debugging or maintaining them doesn't feel any more difficult than more verbal languages either.

[1] http://web.archive.org/web/20210207121250/http://esr.ibiblio...


I think that quiet vs verbal mind personality difference is really what separates whether people like which languages.

I personally can't stand languages that are "spoken description". I understand the appeal to others but the languages just don't mesh with my way of thought. When I'm programming or building a system I'm thinking in the sense of abstract transformations and structures not in any spoken structure. Often times for me it's easier to draw out what I'm thinking of rather than explain it since there's not necessarily a verbal representation behind what I'm thinking of until I sit down and try to come up with one.


There was a study of Harvard undergraduates that demonstrated Greek letters made math harder.

I tell my students that Columbia undergraduates are of course smarter, but still...


Interesting. I was under the impression that Iverson intended APL to also read almost like English, provided you knew the names of operators and idioms.

Aaron Hsu has some talks showing this off about his co-dfns compiler.


What is your setup like? I was just messing around with it just now using homebrew's gnu-apl package, and it just seems like a toy language, for example scripting mode is sort of bolted on top of interactive mode, since you have to add an ")OFF" command at the end of your script. How do you handle modules?


GNU APL is mostly a reimplementation of APL2 from the 80s, with some additions that in my opinion do nothing to get it out of the 80s. Dyalog has namespaces, but scripting support is only due to be released in the next version, 18.1.

So I don't know of any APL that allows module-defining scripts. This is really unfortunate since there's no technical reason to prevent it. With lexical scoping (Dyalog has it, GNU doesn't), it's easy to define a module system and I did this in my APL reboot called BQN: https://mlochbaum.github.io/BQN/doc/namespace.html .


BQN is really impressive, and implements a language which is similar to APL, but without a lot of the legacy baggage that Dyalog has gathered over the years.

For someone that wants to get started with array languages and does not have any need to be compatible with APL, then this is probably the best place to get started.

It also has good documentation, unlike my array language. I need to put a lot of effort into it to get even close to what BQN did.


I see it's self-hosted. How much code needs to be written in another language in order to bootstrap the whole thing?


In Javascript it would be probably around 250 lines: the current VM is 500 but that includes extra stuff for performance, and system stuff like math and timers that aren't part of the core language.

This depends a lot on the host language. BQN requires garbage collection because it has closures so an implementation in a language without it needs to include a GC. JS has a lot of conveniences like closures of its own, and the ability to tack properties onto anything, so even other high-level hosts would generally take more code.


I'm thinking Julia could be a good fit. It's garbage collected, pretty fast, and designed for numerical work.


And Julia has APL.jl, implementing APL within Julia, with the beautiful APL characters, using string macros. This library might contain some ideas for implementing your array language.


Agree with both of you: I've been planning to do an embedded BQN for Julia (as well as finish my NumPy one), and having found APL.jl in this thread it looks like a pretty good resource. There are some missing syntax features in the compiler that I'd like to finish first.

If anyone else is interested in working on a Julia VM I think this would be a pretty cool project, and a nice way to learn some of how bytecode interpreters are implemented without getting into the weeds in a low-level language. Join the forums and ask about it!


An implementer might want to use

https://juliafolds.github.io/Transducers.jl/dev/


Thanks for your work on BQN Marshall. Can it run on the desktop though? I know little about JS, but kind of assume it needs a browser? My only experience with JS on the desktop is via bloated electron apps. I always thought a single file executable would be best. Is that possible?


There are multiple VMs. The JS one does run offline with Node.js (it's one file to implement BQN and one for Node-specific stuff like command-line and filesystem interaction), but it's not designed for speed. CBQN is what you want. EDIT: Just run make, which will pull from a bytecode branch if necessary. It doesn't ship with bytecode, so currently you have to bootstrap once with either dzaima/BQN or Node; at some point I'm sure we'll publish a full release that can be compiled with C alone for bootstrapping.

Further details at https://mlochbaum.github.io/BQN/running.html, and feel free to contact me or join the forums for help getting set up!


EDIT: No longer necessary; see edit above.

Updated this gist, so you can also copy the bytecode (into the commented files in src/gen in the CBQN repository) from here: https://gist.github.com/mlochbaum/7208fa5a4dd767102f9a99b363...


Thanks for reaching out. I'll def keep watching the project. Really happy to see there is a C version.

What is the roadmap for I/O and common data formats support on the C side (like csv, json, xml...etc), or do you get all of that with the bootstrapping?


There already is some basic file I/O in CBQN (though not complete yet). Format parsing isn't hard to do in BQN itself, and finishing the base implementation is a priority over fancy built-in interfaces for now.


various interpreters have ways to make external calls via com/web/etc. APL is basically python calling C++/C#/Java/etc.

Seeing pure APL for XML parsing is.. interesting. Most interpreters support saved/read of functions in a more procedural way.


I suppose if array languages get popular enough, we will get a module to use them as a DSL for libs like numpy, just like we have regex for strings, instead of a whole language dedicated to them.

It would be a win/win: you gain the strength of J, K and APL for array processing, without their weaknesses for anything else.

And just like with regex, you'll get fat disclaimers in doc telling you to not get carried away too much as it can quickly become unreadable.


APL is not unreadable any more than mathematical or musical notation are. Sure, to someone who doesn’t know the notation it looks like an incomprehensible mess. So does math, music, greek, chinese, arabic, hebrew, etc.

I used APL professionally every day for ten years. I can read it. I can touch-type it. And I don’t need a specially-labelled keyboard (even thirty years later).

This should not be surprising at all. A pianist does not need labeled keys and people familiar enough with the above-listed spoken languages can touch-type them without much effort.

While, sadly, APL has no practical application in modern software engineering (it stagnated and became irrelevant and impractical) it is wrong to look at the brilliant use of notation as a tool for the concise communication and expression of ideas and list it as a negative. Not being able to speak, read or write Chinese does not make it a bad language.


> Not being able to speak, read or write Chinese does not make it a bad language.

Well that's the problem. Not being able to read or write APL makes it more fun to learn.



Thanks for posting this intriguing link. A glance at the Jupyter notebook on this site will bring a smile to a Julia user who grew up on (or has somehow encountered) APL.


What an incredibly concise, elegant implementation.


The regex DSL only works for strings. I lament that I cannot use something regex-like to match general sequences, e.g. a sequence of tokens, instead of only strings (sequence of characters).

The operations could be the same. There are classes, and operators for matching 0, 1, or more repetitions, etc.

Array languages are powerful especially when you have arrays with arbitrary elements, including arrays themselves. Good luck using a regex DSL to match a sequence of strings, where you might want to define a string class (analogous to a character class) as a regex itself.


> I lament that I cannot use something regex-like to match general sequences, e.g. a sequence of tokens

You can. It's not typically built into a programming language's standard library. But there are plenty of general-purpose automata-building libraries out there, and some of them do provide DSLs. At least to the extent that the regular expressions you're using are actually regular (many aren't), all a regex is is a domain-specific specialization of nondeterministic finite automata.

I sometimes lament that I don't see them, or hand-coded automata, more often. This was CS101-level stuff when I was in college, and it's pretty easy to code your own even if you don't have a good library available in your language. And, for problems where they're appropriate, using them typically yields a result that's simpler and easier to maintain than whatever ad-hoc alternative you might see instead.


> using a regex DSL to match a sequence of strings, where you might want to define a string class (analogous to a character class) as a regex itself.

http://p3rl.org/retut#Defining-named-patterns

https://docs.raku.org/language/grammar_tutorial#The_technica...

If you want a demo, reply with a concrete example I can implement.


The now defunct Viewpoints Research org chased an idea similar to this. It was a meta language that was based around PEGs, intended to allow the easy creation of DSLs. I imagine the papers are still up somewhere. It was called OMeta.


There are a few libraries expanding on the concept, here's one: https://clojure.org/guides/spec#_sequences


I'm not sure of the need to do that. Not for parsing anyway.

A well designed regex system is enough without tokenization.

The parser for the Raku language, for example, is a collection of regexes composed into grammars. (A grammar is a type of class where the methods are regexes.)

We could probably do the token thing with multi functions, if we had to.

    multi parse ( 'if',  $ where /\s+/, …, '{', *@_ ) {…}
    multi parse ( 'for', $ where /\s+/, …, '{', *@_ ) {…}
    …
Or something like that anyway.

(Note that `if` and `for` etc are keywords only when they are followed immediately by whitespace.)

I'm not sure how well that would work in practice; as hypothetically Raku doesn't start with any keywords or operators. They are supposed to seem like they are added the same way keywords and operators are added by module authors. (In order to bootstrap it we of course need to actually have keywords and operators there to build upon.)

Since modules can add new things, we would need to update the list of known tokens as we are parsing. Which means that even if Raku did the tokenization thing, it would have to happen at the same time as the other steps.

Tokenization seems like an antiquated way to create compilers. It was needed as there wasn't enough memory to have all of the stages loaded at the same time.

---

Here is an example parser for JSON files using regexes in a grammar to show the simplicity and power of parsing things this way.

    grammar JSON::Parser::Example {

        token TOP       { \s* <value> \s* }

        rule object     { '{' ~ '}' <pairlist>  }
        rule pairlist   { <pair> * % ','        }
        rule pair       { <string> ':' <value>  }

        rule array      { '[' ~ ']' <arraylist> }
        rule arraylist  {  <value> * % ','      }

        proto token value {*}

        token value:sym<number> {
            '-'?                          # optional negation
            [ 0 | <[1..9]> <[0..9]>* ]    # no leading 0 allowed
            [ '.' <[0..9]>+ ]?            # optional decimal point
            [ <[eE]> <[+-]>? <[0..9]>+ ]? # optional exponent
        }

        token value:sym<true>    { <sym>    }
        token value:sym<false>   { <sym>    }
        token value:sym<null>    { <sym>    }

        token value:sym<object>  { <object> }
        token value:sym<array>   { <array>  }
        token value:sym<string>  { <string> }

        token string {
            「"」 ~ 「"」 [ <str> | 「\」 <str=.str_escape> ]*
        }

        token str { <-["\\\t\x[0A]]>+ }
        token str_escape { <["\\/bfnrt]> }
    }
A `token` is a `regex` with `:ratchet` mode enabled (no backtracking). A `rule` is a `token` with `:sigspace` also enabled (whitespace becomes the same as a call to `<.ws>`).

The only one of those that really looks anything like traditional regexes is the `value:sym<number>` token. (Raku replaced non capturing grouping `(?:…)` with `[…]`, and character classes `[eE]` with `<[eE]>`)

This code was copied from https://github.com/moritz/json/blob/master/lib/JSON/Tiny/Gra... but some parts were simplified to be slightly easier to understand. Mainly I removed the Unicode handling capabilities.

It will generate a tree based structure when you use it.

    my $json = Q:to/END/;
    {
      "foo": ["bar", "baz"],
      "ultimate-answer": 42
    }
    END
    my $result = JSON::Parser::Example.parse($json);
    say $result;
The above will display the resultant tree structure like this:

    「{
      "foo": ["bar", "baz"],
      "ultimate-answer": 42
    }
    」
     value => 「{
      "foo": ["bar", "baz"],
      "ultimate-answer": 42
    }
    」
      object => 「{
      "foo": ["bar", "baz"],
      "ultimate-answer": 42
    }
    」
       pairlist => 「"foo": ["bar", "baz"],
      "ultimate-answer": 42
    」
        pair => 「"foo": ["bar", "baz"]」
         string => 「"foo"」
          str => 「foo」
         value => 「["bar", "baz"]」
          array => 「["bar", "baz"]」
           arraylist => 「"bar", "baz"」
            value => 「"bar"」
             string => 「"bar"」
              str => 「bar」
            value => 「"baz"」
             string => 「"baz"」
              str => 「baz」
        pair => 「"ultimate-answer": 42
    」
         string => 「"ultimate-answer"」
          str => 「ultimate-answer」
         value => 「42」
You can access parts of the data using array and hash accesses.

    my @pairs = $result<value><object><pairlist><pair>;

    my @keys = @pairs.map( *.<string>.substr(1,*-1) );
    my @values = @pairs.map( *.<value> );

    say @pairs.first( *.<string><str> eq 'ultimate-answer' )<value>;
    # 「42」
You can also pass in an actions class to do your processing at the same time. It is also a lot less fragile.

See https://github.com/moritz/json/blob/master/lib/JSON/Tiny/Act... for an example of an actions class.

---

Note that things which you would historically talk about as a token such as `true`, `false`, and `null` are written using `token`. This is a useful association as it will naturally cause you to write shorter, more composable regexes.

Since they are composable, we could do things like extend the grammar to add the ability to have non string keys. Or perhaps add `Inf`, `-Inf`, and `NaN` as values.

    grammar Extended::JSON::Example is JSON::Parser::Example {
        rule pair { <key=.value> ':' <value>  } # replaces existing rule

        # adds to the existing set of value tokens.
        token value:sym<Inf>    { <sym> }
        token value:sym<-Inf>   { <sym> }
        token value:sym<NaN>    { <sym> }
    }
This is basically how Raku handles adding new keywords and operators under the hood.


I've written an open source version of q: http://www.timestored.com/jq/ It's implemented in Java. Your idea is interesting. Allowing q and Java intermixed... combining it inside Java like LINQ and C# would be interesting.


Since this seems to have brought TryAPL down, there are other options listed at [0]. In particular, ngn/apl[1] is a JavaScript implementation and runs client-side. But it's limited relative to Dyalog (used on TryAPL) and no longer under development.

[0] https://aplwiki.com/wiki/Running_APL

[1] https://abrudz.github.io/ngn-apl/web/


> Since this seems to have brought TryAPL down

Apparently that always happens:

Try APL in your browser - https://news.ycombinator.com/item?id=9774875 - June 2015 (27 comments)

Try APL online - https://news.ycombinator.com/item?id=4090097 - June 2012 (4 comments)


I had a blast learning to write and read APL for a course at my university where we chose and presented the papers from HOPL IV. If you want a fairly quick and easy read about the history of APL I can heartily recommend the paper "APL Since 1978". A small taste: `twoSum ← {1↑⍸⍺=(⍵∘.+⍵)}`, a dyadic function to find the indices of (the) two elements in an array that sum to ⍺, for example: `9 twoSum 2 7 11 15` will return `0 1`. Though I doubt I'll ever write any larger programs with it, I've had a lot of fun with it.
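Roughly how that dfn reads, right to left (my own annotation, not from the paper):

          ⍵∘.+⍵      ⍝ outer product: table of all pairwise sums
          ⍺=         ⍝ boolean matrix marking sums equal to the target ⍺
          ⍸          ⍝ Where: indices of the 1s in that matrix
          1↑         ⍝ take the first index pair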



One past thread:

APL Since 1978 [pdf] - https://news.ycombinator.com/item?id=23510433 - June 2020 (21 comments)


If APL interests you, I worked through (most of) Mastering Dyalog APL [0] a while back. It is very well paced and organized. The vast majority of it also works with GNU APL, though not all.

[0] https://www.dyalog.com/mastering-dyalog-apl.htm


Does anyone know of an input method for APL that works similar to an IME that you would use for Japanese?

Basically you type the name of the operator in latin characters and get the proper symbol autocompleted.

I only see direct key to symbol mappings which might be fine for a full time APL dev but offer a bit too much of a learning curve for just trying it out.


The Windows emoji keyboard (Win+. or Win+; whichever is more comfortable for you) has a lot of the Unicode math symbols under its Symbols tab (marked with an omega). It has a pretty good IME-ish type to search experience for regular emoji, but doesn't support type to search under the Symbols tab. (I wish it did and hope it is something they consider adding.)


The linked site uses a couple methods:

  `i => ⍳ (iota)
  ii<tab> => ⍳
In some editors you can change the prefix character (in emacs I think the default is . or I changed it to . almost immediately). Also in emacs (though I didn't try this with APL) you can use an entry method based on TeX so if you type:

  \iota
You will get


The RIDE interface (https://github.com/dyalog/ride) allows you to type double-backtick and then a search word. Screenshot: https://i.imgur.com/kagYC73.png


Emacs has a good quail-completion system for `GNU-APL`.


Topical! One of the most recent Corecursive episodes has a guest with a fascinating take on APL, while the episode is completely unrelated.

https://corecursive.com/065-competitive-coding-with-conor-ho...

Get yo actuary tables on.


Thanks for sharing!

A question I had with APL was how do you actually type it in, and it turns out you just use a back-tick as a prefix, like a leader key in vim. Conor walked me through solving something with TryAPL in this video:

https://www.youtube.com/watch?v=lG-CcPb7ggU


You can also set it as an alt layout on your keyboard. I'm on Linux, so I set it up so that holding the Win key switches to APL.


The main reason I want to learn APL is for the white-boarding exercises during interviews. Most places will let you write in your strongest language.


Yes, but go with something too far off the beaten path and you just get https://aphyr.com/posts/341-hexing-the-technical-interview, which while really fun, doesn't get callbacks.

(I've always wanted to try doing a white-boarding interview in a visual language like Scratch)


I've been meaning to write 'bullshitting the technical interview' where I neither know how to solve the problem nor APL but end up convincing the interviewer that I know both.


Haha using APL in interviews (outside of finance) would be legendary.


When I was more regularly doing coding interviews, I would pick a problem suited for the strengths of the candidate's self-identified strongest language, but basically go "use whatever language you want, but beware that if it's not one of <short list>, I will have to transcribe your code and run it for the final assessment".

I had Haskell, OCAML, and Ruby thrown at me (no, none of those were on the list). None actually ran on the first try. I could rehab the Ruby to working (it was a silly mistake, only a small amount of score knocked off), but I could not rehab the Haskell nor the OCAML, so that ended up with a "recommend no-hire".

Bit of a shame, had they chosen the Python that the CV indicated they preferred, it may well have been a "strong hire" (but, failure to produce runnable code, that can't be easily fixed, in a language explicitly recommended against indicates that there are some possible red flags).


Bit of a shame? It was your decision. I don't see that as a red flag unless the candidate had time to produce running code on their own development setup. In a timed interview producing running code is irrelevant in my opinion, but hey you chose your rules and got your results.


Has anyone used APL to implement web frontend? In the GOL example it shows how to use ⍴ to reshape into a matrix and then use ↑ to pad it out, and then the structure is displayed in the console. I'm imagining some kind of declarative JS+CSS frontend framework where the APL-ish source code maps intuitively to the visual representation in the DOM (or canvas).
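For anyone who hasn't met those primitives, roughly what that part of the tutorial is doing (a sketch, not the tutorial's exact code):

          grid ← 3 3⍴0 1 0 1 1 0 0 0 1    ⍝ ⍴ reshapes a flat vector into a 3×3 matrix
          5 5↑grid                        ⍝ ↑ overtakes, padding out to 5×5 with zeros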


I doubt there is one. I know of Dyalog's backend and web service frameworks, but I don't think there has been a use of APL's notation for HTML/CSS/JS programming.


APL is something that has been on my TODO list for a long time. To me it’s tantalizing to express a problem in pure symbols instead of a mix. While APL is cool, I’ve always felt that coding languages are mostly wrong. Natural languages also feel blocky, inefficient and sometimes alien.


Got a server error trying a few examples from the shown tutorial:

    TryAPL Version 3.4.5 (enter ]State for details)
    Thu Jun 10 2021 14:30:06
    Copyright (c) Dyalog Limited 1982-2021
          2 + 2
    4
          4 2 3 + 8 5 7
    12 7 10
          ⍳100
    1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100
          !10
    3628800
          +/⍳10
    55
          +/⍳10
    SERVER ERROR


Yeah, I'm guessing Hug of Death.


I did APL programming on an IBM mainframe on a co-op work term (in 1986) and later found an APL interpreter for a Z80 CP/M machine I had (a Superbrain) and strictly for nostalgia reasons, hacked together a character ROM for that machine so the thing would display properly.

At which point a housemate walked up to it and coded the Sieve of Eratosthenes prime number algorithm in something like 11 characters and out came a list of primes. Holy. And I thought I'd grokked APL.

This is so long ago, that I had to google it. Here it is, OK, more than 11 characters but maybe the code is not exactly the same.

          (~v∊v∘.×v)/v←1↓⍳100           ⍝ primes to 100.
    2 3 5 7 11 13 17 19 23 29 31 37 41 43 47 53 59 61 67 71 73 79 83 89 97
Taken from here with explanation how it works: https://dfns.dyalog.com/n_sieve.htm

It might be added that when the ratio of human:computer processing power was still more in favour of "human", insanely cryptic "hero languages" were popular. Thinking about the lowest level of scripting in text formatters, APL, the vi macro language, and the nastier aspects of Perl.

These days, readable code trumps all, and it's the compiler's job to turn it into incomprehensible gibberish.


I thought reading APL was like reading Chinese hanzi or Japanese kanji: once you get used to it, it would be very efficient compared to English or French, for example.


Are there companies still producing modern keyboards with APL in mind? I like the idea but can't see how it would be practical with a normal keyboard.


It looks like Dyalog are selling keyboards

> A different keyboard is not required to enter Dyalog glyphs. However, if you would like one, Black Cherry G80-3000 keyboards with Dyalog glyphs engraved are available in UK, US, DE and DK layouts. USB and PS2 adapters are included. The keyboards have been tested using both Microsoft Windows and Linux

https://www.dyalog.com/apl-font-keyboard.htm

I wonder if there are keyboard sticker sets available somewhere. I think I've seen some in the past for video editing software.


There are many custom keyboard companies who will put anything you want on a keyboard. It shouldn't be a problem to get APL

I've had keyboards made by "WASD" with Yiddish keycaps (which are slightly different from Hebrew -- there's a double-vov, double-yud, kometz aleph -- here's an online version if you're curious https://keymanweb.com/#ydd-hebr,Keyboard_yiddish_pasekh ) Just upload your graphics and they'll make it.


Unicomp was selling them for a while, if I remember correctly. I don't see them on their website anymore though (but maybe I didn't look hard enough.)


https://www.pckeyboard.com/page/product/USAPLSET - they seem to still have it - or at least a keyset.

You can order it custom also: https://www.pckeyboard.com/page/product/KBDCFG - US APL.


You can still get them, only on certain keyboards though. I want to say you can purchase the individual key caps as well.


This was initially downvoted, perhaps because I didn’t post confirmation, so in case no one believes me:

https://www.pckeyboard.com/page/product/KBDCFG - choose a color and then select Language -> US APL

https://www.pckeyboard.com/page/product/USAPLSET - keycaps

https://www.pckeyboard.com/page/product/00UA41P4A - premade


(Well, it wasn't me, the comment I'm replying to now says 37 minutes ago. I did downvote your first comment just now, before seeing your self-reply. Sorry. I upvoted your second, to reimburse you.)

When I saw "I want to say" I stared at it for a minute..

I've been hearing it for a while from Yasser Seirawan (US grandmaster) doing chess commentary on major tournaments, on the St Louis chess youtube channel. He's a great storyteller and commentator, seems a wonderful guy, but a kind of walking verbal accident-waiting-to-happen. Like he seems incapable of pronouncing about 50% of names. He started saying "I want to say" frequently in the last couple of years. Lately one or two of his co-commentators have started saying it too. It seems to just mean "I think"—i.e. not "I think it was 1987" but "I wanna say it was 1987"—for no apparent reason except it's more words. (Maybe because no-one can say "You're wrong" to "I want to say"?) I really loathe it. Why tell me you want to say something, why not just say it?! Uh but it doesn't mean that apparently, it just means "I think". That was a perfectly good expression. "I want to say" makes me feel like going into a forest where I never have to hear humans speak again. (Related: I downvote any comment I see starting "Fun fact:"—that should be most strongly discouraged, I believe, being similarly barbarous.)

This is the first time I've seen it in print. I considered replying with a strong objection, imagined whether that could change the expression's course of popularity in English. Was considering favouriting or screenshotting it for further reference when I saw the comment I'm now replying to. Where does "I want to say" (meaning "I think") come from?!

Apologies for long rant, but I felt I owed you an explanation! Maybe your other downvoter was also trying to nip "I want to say" in the bud on here, before it becomes another "literally" or "fun fact".


It's a venerable idiom that means something like "I think X, but am more uncertain than 'I think' normally implies."


This is now an aside, but I suspect it comes from this progression:

"I say that" -> "I want to say that"

It's a verbal form of softening what you're saying and indicating you're not sure. The "online text" version would be to use "IIRC".


Oh thank you, fascinating. Hmm I notice now that "I think" is similar to "I want to say" in literal meaning—not stating the uncertainty, but they're both like "Warning, about to report on what my brain is doing, don't blame me!"


It's similar to the third-personing we do when we have to enforce a rule: "I'm sorry but I have to ..." or "I'm afraid I'm going to ...".

In both cases we're trying to downplay our involvement a bit.


There is Aplette which supposedly integrates nicely with other Unix tools. It's a port/update of the earlier openAPL source code, which I think was done by Ken Thompson? Here:

https://github.com/gregfjohnson/aplette


I once used a (very bad) tool called Mediation Zone [0], and they used APL as a scripting language inside of the tool. Worst experience ever.

[0] https://www.digitalroute.com/mediationzone/


https://www.artinsolutions.com/competencies/mediationzone%C2... says "Business logic of these agents is defined in APL – Application Definition Language, which is based on Java language." which doesn't sound like the APL I know.

https://careers.digitalroute.com/jobs/98847-implementation-c... has a code sample (https://stuffthatiliketoshare.files.wordpress.com/2014/07/no...) which does not look like APL at all.


It seems like they're very successful from their website. What was bad about it?

I would personally love to have a built-in APL scripting language for interacting with data.


Based on the other comment, it looks like it's another language that happens to shorten down to APL in name rather than APL "A Programming Language". It looks like some internal Java derivative language which sounds like could be rather painful if not done well.


Ohhh gotcha. Yeah, I'd hate to use just a crappy Java. I think either Salesforce or SAP does something similar.


This little thing caught my interest:

    APL takes care of binary-vs-decimal inexactness too, so 3×(1÷3) is really 1:

      3×(1÷3)
How is this done? How well does it scale? I suppose well, because APL is good for math, right? But then I remember that it must be slow...?


That must be a reference to tolerant comparison, but the way it's written is bizarre. Comparisons in APL are (optionally: you can turn it off) subject to tolerance, meaning that numbers with a small enough difference are treated as equal by comparison or search functions. There are probably better resources but I describe it at [0].

However, Dyalog just uses IEEE doubles (with an error in place of infinities or NaNs; also you can configure to use higher-precision decimal floats), no different from JavaScript. The result of, say, 0.1+0.2, is not the same as 0.3, it's just equal under tolerant comparison.
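For example, in a Dyalog session (⎕CT is the comparison tolerance), something like:

          (0.1+0.2)=0.3     ⍝ tolerantly equal under the default ⎕CT
    1
          ⎕CT←0             ⍝ switch off tolerance
          (0.1+0.2)=0.3     ⍝ the underlying doubles differ
    0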

[0] https://www.dyalog.com/blog/2018/11/tolerated-comparison-par...


To be honest, I don't know what that claim means, but know that `(1 / 3) * 3` is exactly 1 under plain-old IEEE floats.

The "fun" is with, e.g. `0.1 + 0.2`. I just tried on the website

        0.1 + 0.2 - 0.3
  2.775557562E¯17

which agrees:

  julia> 0.1 + (0.2 - 0.3)
  2.7755575615628914e-17
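If you want to confirm the `(1 / 3) * 3` claim locally, a one-liner in Python (also IEEE doubles) does it; just an illustration, nothing APL-specific:

    print((1 / 3) * 3 == 1.0)   # True: the product rounds back to exactly 1.0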


So does Raku

    say 3×(1÷3)
    # 1
It works because `1÷3` is a rational number.

    my \rat = 1÷3;

    say rat.numerator;   # 1
    say rat.denominator; # 3
If you multiply it by 3, you get another rational number which represents 1.

    my \result = 3×rat;

    say result; # 1

    say result.numerator;   # 1
    say result.denominator; # 1
---

Now, this may be slower than using floating point numbers, but it isn't that much slower. You only have two integers to manage.

Actually, many older processors had no built-in support for floating point, so using rationals like this might actually have been faster.

(I'm not sure if this is how APL handled it.)
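For comparison, Python's standard library offers the same exact-rational behavior via fractions.Fraction (just an illustration of the rational-number approach, not a claim about how APL does it):

    from fractions import Fraction

    rat = Fraction(1, 3)                    # exact rational 1/3
    print(3 * rat)                          # 1
    print(3 * rat == 1)                     # True
    print(rat.numerator, rat.denominator)   # 1 3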

---

APL happens to be one of the languages Raku copied ideas from

    ×/⍳10
Translated to Raku

    [×] 1..10
That is rather than `×/…` it is written `[×] …`. Partly because `/` is the ASCII version of the division operator.

We could go further towards replicating it.

    {
      sub prefix:<⍳> (UInt \n){ 1..n }
      sub infix:</> (&f, +list){
        [[&f]] list
      }
      constant term:<×> = &infix:<×>;

      say ×/⍳10;
      # 3628800
    }


It was used heavily for legacy trading and accounting applications at a Fortune 50 company I worked at early in my career. I don't recall any performance discussions at the time, but we were processing a large chunk of money on a handful of expensive IBM Power PCs.


It's still used at those companies. Arthur Whitney wrote an APL variant called A+ that's still in use at Morgan Stanley. He then left and wrote the "k" language, which is essentially ASCII APL (although with some major differences). K is the language of the extremely expensive kdb+ time-series database (normally used for stock-price data analysis, I think). It's all in-memory data on a giant SSD (a very fast and elegant design). He has since left Kx Systems (the company he made kdb+ with) and has a new startup called Shakti. Finance firms are generally the only folks that can afford the prices, I believe.


Yea, I have no doubt that there are still a handful of extremely well paid APL devs at the company I worked at then (I wasn't at morgan stanley). I was part of a project that helped put a webservice frontend in front of the app so that non-power users didn't have to use a cli to interact with the application if they didn't want to. I don't think there was any appetite or desire to engage in a re-write due to the risk involved when I was there then and I would guess they have maintained that reasoning. I have a vague recollection of some of the stories from the APL devs, one of them was the first to get APL working on a personal computer way back in the day. He carried the computer into some APL conference and demo'd it and got a massive ovation supposedly.


That's a great story. I wish I could write modern APL at work. Would be a lot of fun.


I worked at a company right out of college that used K for some things. They did some consulting work for the financial services industry early on, and I think that's how they started. I ran through some basic tutorials, but never worked on any of the projects that used it, so never became proficient. Only a special subset of blessed people got to use K.

The only things I remember are that a symbol could do three different things depending on whether it was used as a monad, dyad, or infix, and that it was very hard to read anyone else's code because it was so terse.

They also went through a phase of naming their Java classes things like com.a.b after getting used to K's terseness. Fortunately, that didn't last very long.


I wonder if only a few got to use it due to the high costs?


So does forth:

    3 1 3 */ .s  <1> 1 ok


Everything here is pretty easily accomplished using vanilla Python (numpy counts as vanilla at this point). I'm having trouble seeing what the advantage is of an "array-based" language over a well-designed array library like numpy.

  import numpy as np
  from collections import Counter

  print(2+2)
  print(np.array([4,2,3]) + np.array([8,5,7]))
  print(np.arange(1,11))
  print(np.arange(1,100001).sum())
  print(np.arange(1,11).prod())

  avg = lambda x: x.sum() / x.shape[0]
  print(avg(np.array([1, 6, 3, 4])))

  throws = np.random.randint(1, 7, size=(10000,))
  print((throws == 1).sum())
  print(Counter(throws))

  print(Counter("Mississippi"))


People like it for the same reason we like modern chess notation over older more verbose notation.

Consider a game that starts with white moving their king side knight to the square in front of the pawn on the bishop's file. Here is how that would have been written through the ages [1].

Early 1600s: The white king commands his owne knight into the third house before his owne bishop.

Mid-1700s: K. knight to His Bishop's 3d.

Early 1800s: K.Kt. to B.third sq.

Around 1850: K.Kt to B's 3rd.

Around 1860: K.Kt to B. 3d.

Around 1870: K.Kt to B3.

Around 1890: KKt-B3.

Early 1900s: Kt-KB3.

Mid 1900s: N-KB3.

Last quarter of 1900s to present: Nf3 or ♘f3 if you want to avoid language-specific piece names.

Reading that numpy code compared to reading APL is like reading a record of a chess game from around 1700.

[1] https://www.knightschessclub.org/the_history_of_notation.htm...



It's interesting how many programming languages seem to have "accessible to those not trained in it" as a design goal (offhand I can think of SQL and COBOL, but also BASIC and you see it even in Python, etc).

But there are trade-offs doing that, and "amount you can look at at once" is a huge one.


This.

What I find really annoying is how many people confuse this property ("accessible to those not trained in it") with readability.


It's a shame, too, as the most "dense" recent language we had was Perl, and that got universally made fun of as "line noise" - which I feel influences people more than they think.


I also wonder how many people understand that linenoise meant noise on a data cable of some description.

i.e. the difference between `d` and `$` is a single bit. If your cable happened to glitch during that first `1` bit, you would get a `$` instead of the `d` you should have.

The thing is that doesn't happen anymore. Or at least when it does happen it gets caught and retransmitted.

---

Which means that some of the first people to say it might not have been making a value judgement, but a statement of fact. (I doubt there were very many in that camp.)

Note that I say that as a person whose second favorite language is Perl. (First favorite is Raku, formerly known as Perl 6.)

It is too bad that most of Perl's coolest features arrived after I switched to Raku.


"amount you can look at at once" is severely underestimated.


There is power in being fully immersed in a paradigm. You can do OO in C, but you are still in C, and that is not OO.

You can do arrays in Python, but Python is not an array language.

In other words, the moment you mix two paradigms, they will mismatch, and the result pulls in the direction of the most dominant one.


The point is not _what_ you can achieve, but _how_. You can achieve almost anything in any language, but I bet you can’t implement Game of Life in 20 characters of Python!


What about 17? {≢⍸⍵}⌺3 3∊¨3+0,¨⊢


If you are interested in trying APL in an interactive notebook, try it here: https://nextjournal.com/try/mk/dyalog-apl


I really like APL, but I cannot imagine myself depending on a closed-source toolchain. I've heard that GNU APL is an order of magnitude slower than Dyalog APL, but I can't find a benchmark to confirm that.


Not to claim any are better than Dyalog, but there are options: https://aplwiki.com/wiki/List_of_open-source_array_languages . I think dzaima/APL and April are both all right. Personally, I think GNU is not a good design: for example it has no lexical scoping or even control structures, essentially requiring you to use goto for program control. BQN is my own effort and I would say it has more serious ambitions than these. If you're not attached to the APL language specifically, I'd take a look.


I check back in on your BQN project every few months. I am an anxious potential user and love the work you do.


Thank you for the comment, I wasn’t aware of the BQN project, sounds very interesting to me.


Yes, GNU APL is quite slow. There was some primitive implemented in GNU APL (as part of the interpreter, in C). A pure-APL implementation running on Dyalog outperformed it by a considerable margin.

If you're looking for a performant, open-source APL, try J. Though if performance is not a priority then, as the sibling says, dzaima/APL or April should be more than adequate.


NB. I believe the primitive implemented Knuth's dancing links algorithm.

NB. NARS is also very decent, though it only runs on Windows.


How practical is a language that uses all these non-ascii symbols? Do developers remember the alt-codes or do they use visual IDEs like the one OP linked or maybe even special keyboards?


There was an APL keyboard.

And the APL interpreter/compiler was weird: if you typed an F followed by a backspace, followed by an L, it would merge those into an E, as if they were typed on top of each other, like on a typewriter.


Makes sense because some special functions were meant to be expressed as a combination of symbols. Natural logarithm was ○, backspace, star ; circle was used for logarithms and star was used for both exponentiation and the constant e.


If I were to use a language like this, I'd use AutoHotKey to map certain strings to symbols. I already have several, e.g.

α = qalpha

β = qbeta

↑ = qup

∀ = qall

¬ = qnot

Since you read code more often than you write it, I think it is actually preferable to work with a language like this.


vi/nvi/vim has the :ab command, which does literally that as you type.

In ~/.exrc/.vimrc

    ab qalpha α
    ab qbeta β 
    ab qup ↑
    ab qall ∀ 
    ab qnot ¬
Also, for rlwrap users:

https://github.com/utkarshkukreti/apl-inputrc


Actually, once you set up a compose key it can become second nature.

I program in Raku, which can be written using only ASCII, but it can be clearer if you mix in a bit of Unicode.

For example, a raw quote can be written like this using only ASCII

    Q[C:\Windows\]
Or you can write it like this

    「C:\Windows\」
To get those two characters I have added these two sequences

    Compose [ [
    Compose ] ]
I use them so often that it was worth making them a double press of the same key. The other options would have probably been `Q[` and `Q]`.

Another example is

    * * *
That is a lambda that takes two arguments and multiplies them together. It is also much clearer as

    * × *
I didn't even have to add this

    Compose x x
There is also

    1, 2, 3 <<+>> 40, 50, 60
    1, 2, 3  «+»  40, 50, 60

    @array>>.is-prime
    @array».is-prime
These were also already there

    Compose < <
    Compose > >
Most of the compose sequences I have added match the ASCII equivalent. Which makes it very easy to remember them, even though I may not use them often.

    π   pi
    τ   tau
    ∪   (|)
    ⊎   (+)
    ∩   (&)
    ≡   (==)
    ≢   !(==)
(That is I type `Compose p i` for `π`, and it is equivalent to `pi`.)

or are at least similar

    Unicode
        Compose
                #   Actual ASCII

    ∅   set     #   set()
(That is I type `Compose s e t` and it is equivalent to `set()`.)

Though some of the ASCII operators are apparently too long to do that with.

    ∈   (el)    #   (elem)
    ∉   !(el)   #   !(elem)
    ∋   (co)    #   (cont)
    ∌   !(co)   #   !(cont)
I would like to point out that I actually had to read my .XCompose to remember how to type this last four. I don't use them as often as `「」`, and they don't match the ASCII like `≡`.

There are also some that I had to make the compose sequence longer.

    ∘   &o      #   o
---

There are some codes that I just remember for some reason like `U2424` is `␤`. (That's not even an operator in Raku, though it is useful for messaging the camelia eval-bot on irc.)


Back in the day it was special keyboards. (Source: I had an internship in 1984 or so, and at night I could use the mainframe for fun. I wrote a calculate-digits-of-Pi program in APL. I had a slowly converging algorithm, and the machine, despite being badass, had quite a lot less power than my iPad, but it was awesome. I did have a deep maths background, so I was used to at least all the Greek letters.)


While you are getting started, your IDE generally has a "language bar" with the glyphs that you can press to type them, and hovering over the language bar will show you the keyboard shortcuts for the glyphs... And you learn the shortcuts fairly quickly, to be honest. Also, most of the symbols have plenty of mnemonics or just end up on the key you would expect.


I'm curious why this language was never more widely adopted and why it fell by the wayside. One of the commenters mentioned that the symbology made it alien to average folks who did not have a background in that notation. That appears to imply that the learning curve was too steep for folks who don't "think" in that fashion. Hmmm.


Solving a problem by writing a verbose loop lets you think along the surface of the problem. The APL way almost requires thinking harder before writing any code, and visualizing the transformations of arrays geometrically. The APL coder often works in a hammock with a pencil and paper, spending much of the time staring out the window. The conventional programmer spends most of the time tapping away. Many people resist this deeper mode of thought, because it hurts, or they’re not used to it. I think that is the root reason that array languages did not become more popular. The reason that Lisp did not become mainstream is similar. It’s not the parentheses, nor the weird characters. It’s the focus that effective use of these languages requires. Of course they are more powerful—far more powerful. But only as extensions to a thought process that normal people do not enjoy or can not exploit.


Yes. A lot of programmers are lost without step-by-step debugging. That's why concurrency problems become unsolvable for them.


I think some linear algebra knowledge certainly helps a lot, so that's a good point. A lot of APL doc talks about concepts such as "Rank" and "Inversion" and "Transpose". That is a little math heavy.

The other problem is that I would bet that a lot of the early use for APL was done better by Excel spreadsheets on desktops. By the time APL moved off the mainframe, it was too late. Of course, I'd imagine as far as code maintenance goes, APL beats a very large spreadsheet with a lot of VBA. Excel also has built-in charts and other functionality that is far clunkier, even in the best APLs.


Seems nice in theory, though I'm immediately turned off by having to reach for special characters not on my keyboard.


Julia solves this very gracefully with LaTeX-like input, i.e. you type \lambda<TAB>, which expands to λ.


Some APL environments such as ngn/apl[0] allow tab completion like |o<tab> results in ⌽. This is probably available on tryapl.org too, but I can't test it when it's down :/. It may also be available in the full dyalog apl product and IDE.

EDIT: see https://news.ycombinator.com/item?id=27462828 too

[0]: https://github.com/abrudz/ngn-apl


To avoid this you typically use a character keymap with APL characters accessible with AltGr and Shift+AltGr, for instance AltGr+i for iota and AltGr+r for rho etc. After a while you learn the keystrokes and if you need a less frequently used character you can click on it in the language bar.



Someday soon we'll be able to use voice input to say these characters.

Of course, I've been wrong for about a decade. But we have to be getting close.


You can if you put in the work to set it up.

There are two (nearly identical) talks by Emily Shea

- Perl Out Loud https://www.youtube.com/watch?v=Mz3JeYfBTcY

- Voice Driven Development: Who needs a keyboard anyway? https://www.youtube.com/watch?v=YKuRkGkf5HU


I’m glad I work at home.


hehe, just clicked "2 + 2" followed by pressing enter and got SERVER ERROR. HN hug of death, I presume? Otherwise either me or the tool is very bad at math.


I assure you, APL is not bad at math. It's been around almost as long as COBOL.

https://en.m.wikipedia.org/wiki/APL_(programming_language)


It's not the same without a decent APL keyboard :-(

Even for the on-screen keyboard - at least make it have a few rows, not everything on a single row. Jeez.


Would APL work as well with the unusual symbols replaced by keywords? Any with APL experience know?


No. The power of notation is significant enough that you would lose a lot. Someone mentioned J. It’s an abomination. Iverson made a huge mistake taking that path and likely contributed to APL’s trip into becoming a curiosity. I have written about this before on HN.

Context: I used APL professionally every day for about ten years and was quite active in the community at the time.


Congrats on your work with APL. Please don't repeatedly engage in flaming over obsolete business concerns from 30 years ago.

J is fine, great, has an easier license, and has a larger and more powerful set of primitives.


Flaming?

Please.

You are free to provide a counter-argument.

Here’s the key question you have to answer:

After decades of not only inventing, but also using, promoting, and educating about the advantages of a specialized notation for programming, did Iverson start J because it was a distinctly better path than APL?

No. Of course not. After nearly thirty years of brilliantly creating and promoting his notation Iverson made a mistake. It would not be the first time in the history of technology this happened, and it won’t be the last.


In what ways is J an abomination? APL (and to a degree J) is on my bucket list of languages to learn. I'd always read that J was a sort of spiritual successor to APL.


I think my reasons are very different from robomartin's but I share the opinion that J has some pretty serious flaws. Some are brought forward from APL, some are not.

It has no context-free grammar (but neither does APL). Scoping is local or global: if a variable in a function is local, even inner functions can't access it. It has namespaces (locales), but they are not first-class and are instead referenced by numbers or strings, so they can't be garbage collected. Until this year, there was no syntax to define a function—even though control structures had dedicated syntax! System functionality has no syntactic support, so it's provided by passing a numeric code to the "foreign" operator. To enable recursive tacit functions, functions are passed by name. Not just as arguments, anywhere. The name is just a string with no context information, so if you pass a function to an explicit modifier and the name means something different in that scope it will do something different (probably a value error, fortunately). Oh, also these names store function ranks, and if you define the function later it won't pick it up, so even the tacit recursion thing is broken.

The designers of J weren't familiar with design principles that are common knowledge outside the APL world, and it really shows. J was my first programming language (well, except TI basic) and while I'm still a big fan of array programming it really held me back from learning these things as well.


Functions, if you are referring to monads and dyads, can be defined using "3 : '...'" or "3 : 0\n...\n)" using "3" for monads and "4" for dyads.

As for rank information, functions do carry that rank. I believe you are not defining any rank, in which case it becomes infinite rank and, depending on usage, it will be applied incorrectly.

For tacit recursion, there is the "$:" operator, which allows a tacit function to call itself anonymously. You may also need the agenda operator "@." to define your base case with a gerund "`".

The "foreigns" table, while useful, seems like a kludgy way to introduce functions that don't fit, or are difficult to fit, cohesively into J's notation.


He holds a grudge against J, but not everyone does. I write research code in J and it's a very good little language. For instance, scripting is much easier in J. (Dyalog) APL is much more like an APL machine (in the sense of the LISP machine).


I'll reply to all comments here.

No, I don't hold a grudge against J. That's preposterous. Silly, really. These are tools.

No. I don't prefer one symbol to two characters. That is also silly.

You have to understand the HISTORY in order to understand my statement.

APL was, at the time, brilliant. Remember that it started in the 60's. Way ahead of its time. I learned and worked with it professionally in the 80's and early 90's.

Ken Iverson, the creator of APL, understood the power of notation as a tool for thought. In fact, he famously authored a paper with exactly that title [0].

I had the pleasure of being at an APL conference where Iverson himself presented and discussed this paper. I also took advantage of tutorials and sessions by Iverson and many of the early APL luminaries of the time.

The power of notation might not be easy to understand without truly internalizing APL or having good command of a reasonable analog. For example, a classically trained musician appreciates the value of musical notation. While not perfect, the alternatives have failed to deliver equivalent results, power, expression, etc. The day something else is used to score and perform orchestral performances we might have something to consider.

There are other examples of the power of notation and the paper covers the subject well.

So, why is it I say J is an abomination?

History.

Why does J exist? Why did Iverson go this way after correctly noting and promoting the idea that notation was a powerful tool?

He made a mistake, likely driven by a failed attempt to improve business outcomes.

Here's the history part.

Back in the '80's doing APL characters was not easy. On mainframe based systems we either had to use specialized IBM or Tektronix terminals and printers. When the IBM PC came out we had to physically replace the video card's character ROM (not sure most people know what this is these days) in order to get APL characters.

A common hack was to install a kludgy setup with a toggle switch so you could switch between APL and standard characters. The keyboard, for the most part, got stickers glued to it on the front face of the keycaps. You could, eventually, buy new keycaps for the IBM PC standard keyboard.

Printers suffered a similar fate. You had to reprogram and hack Epson and other printers in order to be able to print the characters.

Incidentally, if you wanted to use a PC for, say, Japanese and English back then you had to resort to the same kinds of hacks or buy specialized video cards and software.

I could go on. The point is that you had to be truly determined to do APL back then and it was a pain in the ass. Convincing an employer to hack 500 PC's so you could migrate to APL was an exercise in futility. Financials and other industries where the power of APL could be put to good use took the plunge, nobody else did. I did an APL-based DNA sequencing tool for a pharmaceutical company back in the 80's.

APL wasn't going to go very far beyond academic and corner-case circles under those conditions.

That's when Iverson came up with the J concept of transliterating APL characters to combinations of standard ASCII characters. It was hard to impossible to sell APL to the masses given the issues of the time. Iverson thought the transliteration would open the doors to the wider audience. Well, it did not. Among other things, notation, believe it or not, is much more practical and easier to learn than a seemingly random mish-mash of ASCII characters.

From my perspective Iverson suffered from a lack of conviction based on likely business pressure. The hardware of the day was conspiring against being able to push APL out to the masses. He did not have visibility into what was coming. Shortly after he took the J trip, the IBM PC went fully graphical and universal multi language (spoken) characters could be rendered without hardware hacks. Except that now Iverson, the creator of APL, had his business interests firmly attached to J. Here we had a situation where the creator of APL pretty much abandoned the language he brilliantly developed, taught and promoted for over twenty years.

The J enterprise, as a business, deviated his path and likely seriously damaged APL. And it failed. It absolutely failed. Nobody who used APL for any non-trivial application was going to consider J. Not due to some test of purity. No, it was because the power of notation is such that J was, in fact, a complete abomination. The only way anyone I knew would consider it was if by force. I can't think of any APL-ers of note of the era that truly jumped on the J bandwagon. The proof that J failed is simple. It's just as dead as APL. Corner case use, sure. Just like APL. I used APL professionally for ten years and I would not touch it at all for any real application today. Anyone doing so would be crazy to make that choice. The only exception would be in maintaining or extending an existing system. Even then, you have to very seriously consider porting it to a modern language.

Notation isn't "funny characters". It's a powerful tool of both expression and thought. If notation is "funny characters", what do we say about every written human language with "funny characters" (Chinese, Japanese, Greek, Arabic, Hebrew, etc.)? Do we convert every word in Chinese into ASCII transliterations of the symbols just so it doesn't look "funny" and to make Chinese more popular? No, this would be an abomination. Chinese is a rich and expressive language, complex, yes, of course. And yet the power of this language lies, among other things, in the notation used to put it to paper.

Imagine converting every "funny character" human language to a mish-mash of ASCII just because the IBM PC could not display them in the early days. Imagine then saying that someone calls these abominations because "he prefers one fancy character to two ugly characters" or "He holds a grudge against <language>". The first thing that comes to mind is "ignorant", followed by "misplaced". Learning about and understanding history and context is very important.

Can J be used for useful applications? Of course. So can COBOL, FORTH and LISP. Yet this does not mean it is a good idea. And using J to write research code (which I did with APL in school as well) is nowhere near the realities of real code in real life at scale and maintainable over time and hardware evolution. Extending this to mean that abandoning the power of APL notation due to hardware issues was a good long-term idea is, in my opinion very wrong. J has no future. Neither does APL in its current form. I still think everyone should be exposed to languages like FORTH, LISP and APL in school. There's a lot of value in this.

EDIT: Imagine if Guido had abandoned Python just before it started to become popular and went in a different direction. The language would have stagnated and likely become irrelevant. That's what happened to APL. And J went nowhere as well. Iverson confused and antagonized the APL community to go after perceived business opportunities. In doing so he pretty much destroyed APL and J.

[0] https://www.eecg.utoronto.ca/~jzhu/csc326/readings/iverson.p...


The only explanation I see here for what's wrong with J is

> Among other things, notation, believe it or not, is much more practical and easier to learn than a seemingly random mish-mash of ASCII characters.

Is that your main problem with J?


Probably a two part objection:

First, abandoning a new branch of computing that started with the development of a specialized notation. As I said elsewhere, the power this offers cannot be appreciated without a suitable frame of reference. This can be either the use of APL (to a good degree of competency, not just dabbling) or a reasonable analog, such as musical notation.

The timing for the introduction of J was terrible. Computers made the transition from text-only terminal output to full-on, character-agnostic and graphically-rich output just around that time. I can't fault Iverson for this, nobody has a crystal ball. Having the inventor/founder of APL leave the language behind for a half measure that was primarily a reaction to character-only computers did a lot of damage to APL. One has to wonder how things might have evolved had he stayed the course he charted. After three decades of educating an entire community on the value and power of notation he threw it all away purely due to a mistimed decision about computer hardware of the day. As I said elsewhere, not the first time and not the last time someone in technology makes a bad decision. None of us are immune to this.


J started in 1990, when the Mac was 6 years old. I think it was pretty obvious GUIs were here to stay, no?


Maybe I'm missing the point but could you clarify a bit more on APL's notation vs J's notation?

Speaking as someone who is not very well math inclined and as someone who was born in an Asian country, both APL's special characters for verbs and J's alphabetical characters for verbs are similar enough for me. Both languages use symbols for verbs, it's just that J's symbols happens to very closely resemble the characters of the English alphabet.

Although, due to the familiarity of the English alphabet, J's symbols might intuitively bring up ideas of the alphabet character, is it not possible to just think of it as a new mathematical symbol? For example, instead of seeing "o." as the alphabet character 'o' followed by a period, couldn't it be seen as a circle followed by a dot? Or if we lived in a world where the alphabetical characters of the English were swapped with the special characters of APL, would J's notation still be broken? Does familiarity of the symbols used in a notation make it any less powerful?

Maybe the reason why I don't understand is that I haven't tried APL and have only tried J. And I eventually ended up quitting on learning J because it was starting to get too difficult for me. Would it be possible to explain the differences between APL's notation and J's notation in an easier or simpler fashion?


APL’s verbs are geometrically suggestive. They are little pictures that represent what they do, and how they are related to each other. For example, ⌽ reverses an array along its last axis; you can see the array flipping around the vertical line. And you will know what ⍉ does without looking it up, I bet. These symbols are so well designed that you don’t have to memorize much, because they document themselves.


Couldn't the same be said of J? If APL's powerful notation comes from not having to memorize much and being a good visual representation of what the verb does, doesn't J's usage of alphabetical characters achieve something similar albeit a bit worse? For example "i." for index and related functions. Since the letter 'i' is usually used for indexing, one could assume that "i." is something related to indexing. Does the usage of alphabetical characters weaken the notation so much that it could be considered an abomination?

If there was another language that was a copy of APL but with new non-alphabetical symbols that were less suggestive than the original APL symbols, would that language be considered to have a less powerful notation? If so, how much weaker would it be considered? What would the symbols of a language that is APL-like and uses non-alphabetical characters, but would still be considered an abomination look like? Would that language be considered to have a more powerful notation than J?

This might be a bit of a stretch but I'd like to use the symbols on a media player as an analogy. The symbols on a media player (play, pause, resume, seek back, seek forward) could be compared to APL's symbols. Then, for the J version of the media player, rather than the symbols, there could be "Pl", "Pa", "Re", "SB", "SF" or something of the sort. I would say that the APL's symbols do look nicer, but I don't think J's usage of alphabetical characters should be considered an abomination. If so, wouldn't all text GUI's (e.g. command line managers such as nnn or MidnightCommander) be considered an abomination compared to a regular GUI version?

Maybe I'm not looking at the right thing here but APL's and J's notation seem to be similar. One does look better than the other, but both seem to serve the same purpose.


I’ve only glanced at J and never used it, so I don’t have any strong opinions about it. But APL just has that extra magic that J seems to lack. Notation does matter. It could be that I’m partially sentimental, as it’s the first programming language that I learned.


> Maybe I'm not looking at the right thing here but APL's and J's notation seem to be similar. One does look better than the other, but both seem to serve the same purpose.

Not sure if it is possible to understand this without having the context of being well versed in another means of communication that uses specialized notation. Musical notation being an easy example of this. Mathematics could be another. And, of course, languages that don't use the latin alphabet. Outside of APL, I happen to be fluent at musical notation and one non-ASCII spoken language, as well as having the mathematical background.

The closest I can come to explaining what happened with J is that they did their best to convert every APL symbol into an equivalent combination of ASCII characters. Here's the key:

They did NOT do this because Iverson thought this was a better path forward. He did not abandon thirty years of history creating and promoting notation because mashing ASCII characters together was a better idea. He did this because computers of the day made rendering non-ASCII characters a pain in the ass. This got in the way of both commercial and open adoption of the language. He likely genuinely thought the transliteration would bring array programming concepts to the masses. It did not.

In the grand context of computing, J is a failure and APL suffered greatly when its creator and primary evangelist abandoned it.

Imagine a world where people are writing perfectly legible code in C, Basic, Pascal, etc. Now imagine someone proposing the use of seemingly random arrangements of ASCII characters instead of those languages. It's like telling everyone: Stop programming in these languages! We are all going to program in something that looks like regex!

Well, the rest is history. The proof is in the fact that APL is but a curiosity and J isn't a commercially viable tool. Yes, they both exist in corner-case applications or legacy use. Nobody in their right mind would use either of them for anything other than trivial personal or academic applications. That's coming from someone who used APL professionally for ten years and even envisioned a future creating hardware-accelerated APL computing systems at some point. It's computer science history now.

I still think it should be taught (along with FORTH and LISP) as there's value in understanding a different way of thinking about solving problems computationally.

As an extension of this, part of me still thinks that the future of computing might require the development of specialized notation. For some reason I tend to think that working at a higher level (think real AI) almost requires us to be able to move away (or augment) text-based programming with something that allows us to express ideas and think at a different level.


Thanks for taking the time to reply. I think I'm beginning to understand but am not quite sure.

While I wouldn't consider myself fluent in any of the following, I do know how to read musical notation (from middle school/high school band) and I can read/write/speak a non-ASCII language (Korean). So I am somewhat familiar with non-ASCII notation.

> The closest I can come to explaining what happened with J is that they did their best to convert every APL symbol into an equivalent combination of ASCII characters.

This is the statement I keep on getting stuck on. From what I have read, besides the symbols being converted to ASCII characters, APL and J are generally the same. Both work on arrays, both are parsed right to left, etc. It seems like the only major change is that the symbols got converted to ASCII characters that are at a maximum 2 characters long. If this is the case, what would you say about the J language's notation if the authors one day decided to change all the symbols to non-ASCII characters? Everything else would stay the same, such as what the symbols do and how much space the symbols takes up (max 2 characters). If the J language were to change only its symbols and nothing else, would its notation be considered to be on par with APL's?

As you mentioned, my lack of proficiency in other specialized notation might be preventing me from understanding the issue. That said, your last set of comments strikes a chord with me and I do think I kind of understand. As you mentioned previously, notation is "a powerful tool of both expression and thought." The usage of specialized notations allows one to express their thoughts and ideas in a way that normal writing can't. But I guess this is where being well versed in the subject matter comes into play, since after all it is a "specialized" notation. It would be difficult for someone who doesn't have a strong background in the subject matter to take advantage of the specialized notation.

To me, with my limited knowledge and experience, J vs APL appears to be a symbol (graphical) design comparison rather than a notation design comparison. And as someone who doesn't have a strong mathematical background, both APL's and J's symbols conveyed nothing to me when I first saw them. Changing the symbols to non-ASCII or ASCII has no effect on me besides figuring out how I would input the non-ASCII characters. But I suppose that to you, a change in the symbols isn't something so superficial. The way I understand APL vs J now is that for those who are experienced in APL, the changing of the non-ASCII symbols to ASCII characters, simply for the purpose of not having to go through the trouble of inputting non-ASCII characters, "broke" the notation.


> what would you say about the J language's notation if the authors one day decided to change all the symbols to non-ASCII characters?

That's a very interesting question. I think the only possible answer has to be that this would return the language to what I am going to label as the right path. It would be wonderful.

APL is the only programming language in history to attempt to develop a notation for computing. Iverson actually invented it to help describe the inner workings of the IBM mainframe processors. Any hardware engineer who has ever read a databook for, say, an Intel (and other) processors has run into APL-inspired notation that made it into the language of explaining how processor instructions work. It's a remarkable piece of CS history.

> besides the symbols being converted to ASCII characters, APL and J are generally the same

Let's call it "notation" rather than "symbols". The distinction I make is the difference between a language and just a set of glyphs that are not entirely related to each other.

You might want to read Iverson's original paper on notation. It makes a very strong argument. Coming from the man who created APL, this is powerful. It also --at least to me-- tells me that his detour into J had to be motivated by business pressures. There is no way a man makes such an effort and defends a notion with such dedication for three decades only to throw it out for something that isn't objectively better.

I don't think we can find a paper from Ken Iverson that says something like "I abandoned three decades of promoting a new notation for computing and created J because this is better". You will find statements that hint at the issues with hardware of the era and the problems this created in popularizing APL.

Here's my lame attempt to further explore the difference. I don't know Korean at all. I just used Google translate and this is what I got for "hello from earth":

지구에서 안녕

I count seven characters, including the space.

Let's create J-korean because we are in the 80's and it is difficult to display Korean characters.

지 This looks like a "T" and an "L": So "TL".

구 This looks like a number "7" with a line across it: "7-"

에 This looks like an "o" with two "L"'s, one with a pre-dash: "O-LL"

서 This looks like an "A" with a dashed-"L": "A-L"

안 This looks like an "o" with a dashed-"L" and another "L": "OLL-"

녕 This looks like an "L" with two dashes and an "O": "L--O"

Space remains a space.

Here's that phrase in J-korean:

TL7-O-LLA-L OLL-L--O

It's a mess. You can't tell where something starts and ends.

OK, let's add a separator character then: "|"

TL|7-|O-LL|A-L| |OLL-|L--O|

Better? Well, just in case we can do better, let's make the space a separator. Two spaces in a row denote a single space:

TL 7- O-LL A-L OLL- L--O

We have now transliterated Korean characters into an ASCII printable and readable combination of characters.

Isn't this an abomination?

We destroyed the Korean language purely because computers in the 80's could not display the characters. We have now trifurcated the history of the language. Which fork will people adopt? Which will they abandon? Will all books be re-written in the new transliterated form?

Which of the above encodings (real Korean and the two transliterations) conveys, communicates and allows one to think in Korean with the least effort and the greatest degree of expressive freedom?

If I, not knowing one bit of Korean, expressed a strong opinion about J-korean being better because it doesn't use "funny symbols" I would not be treated kindly (and rightly so).

I don't know if this clarifies how I see the difference between APL and J. Had we stayed with APL's notation, evolved and enriched it over the last few decades we would have had an amazing tool for, well, thought and the expression of computational solutions to problems. No telling where it would have led. Instead Iverson took a path driven by the limitations of the hardware available at the time and managed to effectively kill both languages.

I happen to believe that the future of AI requires a specialized notation. I can't put my finger on what this means at this time. This might be a worthwhile pursuit at a future time, if I ever retire (I can't even think about what that means...I love what I do).

Here's Iverson's paper on notation. It is well worth reading. It really goes into the advantages of notation to a far greater level of detail than is possible on HN comments:

https://www.eecg.utoronto.ca/~jzhu/csc326/readings/iverson.p...


> I happen to believe that the future of AI requires a specialized notation. I can't put my finger on what this means at this time. This might be a worthwhile pursuit at a future time, if I ever retire (I can't even think about what that means...I love what I do).

I also share this opinion and I might know what you mean. A lot of breakthroughs in physics are due to new notation – e.g. Maxwell's equations, Einstein notation, etc. Or to be precise, it is easier to think new thoughts in a notation/language that is suited for them.

Current machine learning is a 90:10 blend of empirical stuff followed by delayed theory. The language for theory-based ML is math with paper and pencil. However, the languages for empirical experiments are PyTorch, TensorFlow, JAX, DEX, Julia, Swift, R, etc. The former is a "dead" historical language; the latter are notations for differentiable computational graphs that can be run on modern HW accelerators. If you look at what those programming languages have in common, it is that they were all influenced by APL. And funnily enough, machine learning would be the best use case for APL, yet APL is practically non-existent there. APL should be reborn as a differentiable tensor-oriented notation. That would be wild – prototyping new ML architectures at the speed of thought.

Anyway, another angle for APL + ML is an alternative interface – writing APL with pencil on paper, with a system that understands your handwriting and evaluates your APL scribbles. [0] I have committed myself to providing BQN [1] with such an interface and seeing where it leads.

Ideally, the best outcome would be the combination of both approaches.

[0] https://www.youtube.com/watch?v=0W7pPww6Z-4 [1] https://mlochbaum.github.io/BQN/


Thank you for the discussion. I am now convinced but unfortunately, I cannot confidently say that I deeply understand.

I've taken a shot at the linked paper but will require more readings to fully grasp what it's saying. However, between what I understood of the paper and the example that you provided, it makes sense that J would be considered an abomination of APL.

Hopefully, after reading the paper a few more times and maybe even trying out APL, I'll have a better understanding. Thanks again for your time.


I believe he prefers one fancy character to two ugly characters.


Not in the way you'd hope.

Due to the way APL is written and its sort-of philosophy of computation, if you tried to be literal about it you'd end up with a bunch of inscrutable nonsense anyway.

consider https://www.aplwiki.com/wiki/FinnAPL_idiom_library#Inner_Pro...

  (⌽∨\⌽' '∨.≠X)/X  would be
  (reverse or scan reverse ' ' or prod = X) reduce X
or that infamous Game of Life one liner

  life←{⊃1 ⍵ ∨.∧ 3 4 = +/ +/ 1 0 ¯1 ∘.⊖ 1 0 ¯1 ⌽¨ ⊂⍵}
  life set {disclose 1 X or prod and 3 4 = + reduce + reduce 1 0 -1 bind prod rotf 1 0 -1 rot each enclose X}
 
Still just nonsense right? You'd end up writing very different code.


> (reverse or scan reverse ' ' or prod = X) reduce X

This just looks like Haskell.


Try Rob Pike's (yes, that Rob Pike) APL-with-keywords, ivy: https://pkg.go.dev/robpike.io/ivy?utm_source=godoc


It wouldn't work the same, that's for sure. And APL experience isn't enough to know, because you only see one side of the comparison! Here are some notes from that side.

The advantage of using symbols for array operations is the same as the advantage in math of using them for arithmetic. A symbol can be read much faster than a word, and it makes syntax easier, not harder, to discern. This is because you only have to mentally group symbols into expressions instead of grouping characters into words and words into expressions. When programming with keywords you'd probably mitigate this by writing shorter lines in order to move some of the structure to a higher level.

Keywords have the advantage that the ones the user writes aren't any different than the ones the language provides. This can be nice, although I find that distinguishing user-defined words from symbols can also be helpful: those words tend to give a good summary of what's going on, while the primitives around them just do all the data-shuffling detail work required to make it happen. So it's easier to ignore those details and quickly get the bigger picture.


There's J, which doesn't use the random Unicode things: https://en.wikipedia.org/wiki/J_(programming_language)


> Would APL work as well with the unusual symbols replaced by keywords?

Hot take: take a look at numpy.

In general, I would say no. One of the main points is that the notation is terse so that the experienced readers can at a glance see what is going on. Notation as a tool of thought, and all that.


In the 90s I installed an optical CMM at an IBM plant and they had PCs with APL keyboards sitting in the clean room. I was impressed that they would use something like that in a production area, not really sure what they did with it but the keyboard looked very impressive.

I think without the special keyboard APL is a no-go these days.


> I think without the special keyboard APL is a no-go these days.

Unicomp, the company that bought the rights and tooling for the venerable IBM Model M keyboards, still sells to this day both IBM Model M keyboards and sets of APL keys (!).

https://www.pckeyboard.com/page/product/USAPLSET


I've heard that R is an array language. Can it compare with the likes of APL?


The closest language to APL/J/K is NumPy, the numerics library for Python. It was deliberately designed in the spirit of an APL, but without all the squiggly characters.


Note that Julia has explicitly[1] taken much inspiration from APL to a further degree than NumPy does IMO.

1. https://github.com/JuliaLang/julia/search?q=apl&type=issues


> but without all the squiggly characters

I so liked the APL squiggly characters (mostly Greek-letter-based glyphs). They made for very succinct code. Combined with right-to-left precedence, this resulted in very low overhead to write and read code. Most people who complained about APL as "write-only" seemed to be outside the community of serious APL programmers. Once you learned to read the language it was very easy to comprehend at regular reading speed...

In contrast NumPy seems absurdly wordy, and waiting for the completion menu in an IDE tends to derail my train of thought.

However I do tend to like NumPy for being APL-like, so it’s not a damning contrast as far as getting things done.


I get it, and I wrote that even though I'm a fairly big J enthusiast. It's fun to solve a tricky problem by cranking out two or three rows of line noise. :) Numpy is wordy, but you can still program in terms of arrays -- it might not feel like quick symbolic magic, but the underlying paradigm is (almost) the same.


I really don't understand why people dislike this. As long as the domain and inputs/outputs are clear, having a Δ is no worse than a `delta` or `computeDeltaOfPair` (actually, the shorter the better, to me).


In fairness, a delta is far more discoverable to a casual reader than, say, the APL Grade Up symbol. :)


But once you use grade up, it's easy to remember. I used it like a year ago when playing with APL and haven't touched it since then and still remember the symbol.


Julia has built-in language support for that kind of array handling, including broadcasting.


Yes, there are a handful of languages where you can operate on arrays (without looping), including Fortran. But this doesn’t make them true array languages like APL.


Nice. Now one doesn't need to purchase a specialty keyboard to write in APL.


The general method of input most people use for APL is an IME with a shifting key, see: https://aplwiki.com/wiki/Typing_glyphs#By_method


Last used APL in 1981; we had keyboards specialized for the APL character set.

IMO its pithy syntax was just not worth it, even back then, so I haven't returned since and don't see any reason to break my 40-year hiatus.


APL's greatest contribution is its semantics, not its syntax.

(Some parts of the syntax are brilliant and would be awesome if adopted by modern languages. Others ..... not so much).


Which pieces of APL would you see fitting in a modern language? Simply curious about preferences of people who already enjoy the semantics and would find the APL syntax ideal.


(Not the parent.) Personally, I would like to see trains (https://aplwiki.com/wiki/Tacit_programming#Trains) in more languages, though the way they are implemented in APL is pretty spotty (e.g. the length of a train changes its semantics).


I think adopting the “each” syntax would help every language that has containers - including Python and C#

It is semantically more like "map" than "foreach", but syntactically it emphasizes the applied function rather than the fact that it is applied to the container - and that shows especially when nested - f""x or f"[2]x means "apply at depth two". Compare that to a nested map/foreach (a rough sketch below).
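A rough Python sketch of that contrast (the helper each_at_depth is hypothetical, just to illustrate why keeping the applied function up front reads better than nesting the loops):

    data = [[1, 2], [3, 4]]

    # Nested comprehension: the applied function (str) is buried
    # inside two layers of looping machinery.
    nested = [[str(n) for n in row] for row in data]

    # An "each at depth 2" helper keeps the function front and centre.
    def each_at_depth(f, depth, xs):
        if depth == 0:
            return f(xs)
        return [each_at_depth(f, depth - 1, x) for x in xs]

    print(nested)                        # [['1', '2'], ['3', '4']]
    print(each_at_depth(str, 2, data))   # [['1', '2'], ['3', '4']]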

There are other syntax features I like, but most of them do interact with semantics (some more, some less). This one is useful in any language.


How do you key the ⍳ character easily?


`i on this site. If you use another editor (including, I believe, Dyalog's actual APL implementation, but I always used emacs when I was playing with APL) you can change the prefix character. I always preferred . because there is no reaching for the character (on the Dvorak layout at least) and it doesn't conflict with much other than decimal numbers (where you may need to type .. to get the desired result).


Use `i to get it. If you hover over the character in the language bar at the top, the text shown has some indications of how to type it. Here, "Prefix: <prefix> i", with the prefix being a backtick. The version with a tab afterwards is longer but could be easier to remember for some characters.


I recently listened to the podcast episode titled CORECURSIVE #065, From Competitive Programming to APL With Conor Hoekstra [0]. I learned a lot of things there. I highly recommend it.

[0]: https://corecursive.com/065-competitive-coding-with-conor-ho...


I can't be the only one who thought of Alexa Presentation Language.



