ALiEn – a GPU-accelerated artificial life simulation program (alien-project.org)
825 points by tosh on June 11, 2021 | 210 comments



When I was learning to program, I tried to make a toy artificial life evolution simulation. Particle organisms on a 2D plane had 'dna', which was a list of heritable traits like size, speed, and number of offspring. Bigger organisms could eat smaller organisms, but they burned energy faster. 0 energy = death. When two organisms of opposite gender collided and had sufficient energy, they'd split some of their energy among their offspring, with each offspring's 'dna' values set to one of the parents' +/- 5%.
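
The core of the inheritance rule was tiny. Roughly this sketch, in Python (names and numbers illustrative, not the original code):

    import random

    def make_offspring(mom, dad, energy_share):
        # Each 'dna' trait comes from one parent, then drifts by +/- 5%.
        child = {}
        for trait in mom:
            value = random.choice([mom[trait], dad[trait]])
            child[trait] = value * random.uniform(0.95, 1.05)
        child['energy'] = energy_share
        return child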

As I was developing this, I hadn't figured out how I wanted to do food yet, so as an easy first step, I just had a constant amount of energy that was split amongst all organisms on the screen. Lots of little dots buzzing around - it was kind of neat, but nothing too special. I left it to run overnight.

When I came back I was very surprised: previously it was running at about 30 FPS - now it was at about 4 seconds per frame. The screen was filled with dense expanding circles of tiny slow organisms emanating from where organisms had mated, and nothing else.

My simulation evolved to outsmart my simple food algorithm: when food is divided equally among all organisms, the best strategy is to use minimal energy and maximize offspring count. I had populated the world with a default offspring count of ~5 and they had evolved to the tens of thousands. The more offspring an organism had, the greater the amount of the energy pool would go to their offspring.

It was a very cool "Life, uh, finds a way" moment - that such a simple toy simulation of evolution was able to find an unanticipated optimal solution to the environment I created overnight was very humbling and gave me a lot of respect for the power of evolution.


I worked on a similar project. One of the heritable traits I had was the quantity of energy that would be passed on to a child. A mother and father organism each contributed a random half of their stats to a new child; both parents deducted their caretaker energy from themselves and increased the child's energy by the same amount.

I had a lot of different graphs to show me stats as the simulation ran. One thing I noticed was that after a while, "average age" started to go way up.

At first, I was proud. I thought I had evolved creatures that could live indefinitely in my simulated environment. I kind of had - but it didn't work like I thought. At some point the creatures seemed to become immortal, and all new creatures died off. I was monitoring "average age at death", which confirmed all the dying creatures were very young, and "average generation count", which showed it stabilized midway through the simulation and then locked in place. They got to a place where new organisms died off and a bunch of immortal organisms were running around.

I finally figured out what had gone wrong. The stats, including caretaker energy, could be randomly modified by a small random value, up or down, whenever a child was produced. Nothing prevented caretaker energy from going negative, and indeed, that's what happened. The simulation behaved normally while only a small number of organisms had negative caretaker energy, but eventually these guys took over and became the whole population. They could sustain themselves indefinitely by having children, but their children (spawned by two parents who passed on negative energy) would instantly die.
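
In hindsight the fix was a one-line clamp on the mutation step. A sketch (illustrative, not the original code):

    import random

    def mutate(value, step=1.0):
        # Stats drift by a small random amount at each birth.
        return value + random.uniform(-step, step)

    def mutate_caretaker_energy(value):
        # The missing clamp: without max(), caretaker energy could drift
        # negative, letting parents profit from producing doomed children.
        return max(0.0, mutate(value))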


Decades ago I read about a simulation aiming to evolve creatures that could walk, or one day run. The fitness function that determined how many offspring you get was “maximum speed ever attained in your life”.

They let it run for a while and came back to find all the creatures had evolved into extremely tall, thin stalks that would tip over and never move again. The top would be moving very fast before it hit the ground.


Our own human intelligence could also be seen as this kind of side effect. Its 'purpose' was to make us better at hunting other animals and gathering extra food. Instead, in just several hundred thousand years we've built a bunch of 'buildings' and managed to throw the entire ecosystem out of balance.


There is a 2018 paper called "The Surprising Creativity of Digital Evolution" which is a collection of similar anecdotes: https://arxiv.org/abs/1803.03453


Thanks, great paper! "Many researchers in the field of digital evolution have observed their evolving algorithms and organisms subverting their intentions, producing unexpected adaptations, or exhibiting outcomes uncannily convergent with ones in nature. Such stories routinely reveal creativity by evolution in these digital worlds."


Karl Sims is a nice aptronym for someone who studies evolution in physics simulations.


Though Charles would have been better...


My goal with this project was to be able to seed the world with a single proto-organism and have two distinct species - one predator and one prey - evolve to create a stable ecosystem.

I eventually localized food sources and added a bunch of additional rules, but was never able to realize this goal. I think for predator-prey relationships to evolve in my system, it would have required sensory organs and methods to react to the local environment.

It seems like ALiEn is able to simulate food chains with distinct species - alas, I don't have a CUDA GPU - but I'm curious whether they've been able to create an ecosystem where predators and prey coexist in a stable balance. (In my experiments, it was very easy to get a predator population explosion: all of the prey gets eaten, and then all of the predators die.)


In the real world, what prevents this outcome (humans aside)? Because it seems uncommon. Ecosystem diversity? Pathogens?


I think this probably happens sometimes in the real world. However, there usually aren't ecosystems with only two species. Most predators eat multiple prey species, so a predator may hunt the "easy" species to extinction, leaving only the harder-to-hunt species, which causes the predator population to fall as only a subset of the predators are able to succeed in these harder conditions.

Cicadas could be another example of a strategy to deal with prey decimation - by only emerging every N years, food sources have time to regenerate between cycles. Similarly, many large predators are nomadic - as they reduce prey availability in one area, they look elsewhere, giving the prey in that area time to recover.

I think geography and terrain also help a lot in the real world: prey is usually smaller than predators and thus has more hiding spots. Maybe I should have implemented a 'turtling' mode, where organisms could spend part of their time immobile and invulnerable, but also not gaining energy, as a way to avoid predation. I think sensory organs would still probably be necessary to make that strategy work.


Sometimes the prey can hide?


If you managed to evolve huge organisms eating lots of small ones while both species survived, then that was predator-prey stability.


Yeah, I was able to achieve short temporary equilibria, but in every case the predators would slowly die out (and re-evolve later, and die out again), or they'd be too successful and kill everything.


That sounds like a neat idea.

I too implemented GOL at some point (when I had a look at SDL) and for fun changed the (boolean) game field to integers that I mapped to grayscale (and later RGB). So instead of killing/giving birth to cells you just decrease/increase their integer value. The result looks like a spreading fungus (with the classic horizontal/vertical/diagonal patterns) which can be very chaotic when numbers start to overflow and underflow.
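
The change is surprisingly small. Something like this NumPy sketch (illustrative; my original was plain SDL loops, not NumPy):

    import numpy as np

    def step(field):
        # Count "alive" neighbors, treating any nonzero cell as alive.
        alive = (field > 0).astype(np.uint8)
        counts = sum(
            np.roll(np.roll(alive, dy, axis=0), dx, axis=1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0)
        )
        # Instead of flipping cells to 0/1, nudge their value up where
        # the classic rules say "live" and down where they say "die".
        # With a uint8 field, the +/-1 wraps at 0 and 255, which is
        # where the chaotic overflow/underflow patterns come from.
        live = (counts == 3) | ((alive == 1) & (counts == 2))
        return field + np.where(live, 1, -1).astype(field.dtype)

    field = np.random.randint(0, 4, (256, 256), dtype=np.uint8)
    for _ in range(100):
        field = step(field)  # map field to grayscale/RGB for display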

It's a really fun and engaging way to play with 2d graphics and simulation.


Definitely! I made some too:

https://youtu.be/gaFKqOBTj9w

There are a few videos (not just this one) where I tweaked the rules arbitrarily but got really interesting behavior!


This is way cooler than what I am doing now to learn a language.


My dad, who was teaching me, wrote the canvas+JS 2D visualization for me, while I built the engine that managed the state of the world. When you're learning, asking for help from those who are more experienced than you is huge. Also, don't be shy about taking someone else's thing and modifying it until it does what you want it to do: these are educational projects, so you don't really need to worry about licensing or those kinds of things. I learned a lot from 'hacking' in-browser web games before trying to write my own: cheat at the game, then add a button that improves QoL, then try to add a feature.

Try to simulate something that you're interested in! Everyone has their own interests, but I find these kinds of problems a lot of fun to work on.

When you're learning, it forces you to make reductive approximations and simplifications - you just can't do it the "right" way, so try to find a way to approximate it with something close. Try modeling a bunch of simplistic rules that replicate a phenomenon: flocking/crowd/traffic behavior, the spread of memes or viruses, growing plants, etc. The sorts of problems where you have a bunch of tiny particles/cells that each have simple behavior but can interact with each other are very rewarding to get working, because simple rules can produce complex system behavior.
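
As a concrete example of how little code such a toy needs, here's a sketch of a virus spreading on a wrapping grid (Python; all parameters made up):

    import random

    SIZE, P_INFECT, P_RECOVER = 50, 0.25, 0.02
    grid = [[0] * SIZE for _ in range(SIZE)]   # 0 = healthy, 1 = infected
    grid[SIZE // 2][SIZE // 2] = 1             # patient zero

    def step(grid):
        new = [row[:] for row in grid]
        for y in range(SIZE):
            for x in range(SIZE):
                if grid[y][x] == 1:
                    if random.random() < P_RECOVER:
                        new[y][x] = 0          # recover
                    for dy, dx in ((0, 1), (0, -1), (1, 0), (-1, 0)):
                        ny, nx = (y + dy) % SIZE, (x + dx) % SIZE
                        if grid[ny][nx] == 0 and random.random() < P_INFECT:
                            new[ny][nx] = 1    # infect a neighbor
        return new

    for _ in range(200):
        grid = step(grid)  # render the grid however you like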


>The Autoverse is an artificial life simulator based on a cellular automaton complex enough to represent the substratum of an artificial chemistry. It is deterministic, internally consistent and vaguely resembles real chemistry. Tiny environments, simulated in the Autoverse and filled with populations of a simple, designed lifeform, Autobacterium lamberti, are maintained by a community of enthusiasts obsessed with getting A. lamberti to evolve, something the Autoverse chemistry seems to make extremely difficult.

https://en.wikipedia.org/wiki/Permutation_City


I read that a few years ago, probably because it was recommended here on HN. Absolutely wild book.


It seriously flipped my view of the world on its head for a bit. Then I read it again right after, lol.


What a coincidence, I just reread that section of Permutation City yesterday.

I wonder if there is an efficient cellular automaton that vaguely approximates the behavior of real chemistry, just like the Autoverse.

It's a shame the Garden of Eden configuration plot point is kind of bunk.


Stephen Wolfram says yes.


This book is absolutely fantastic. Egan is a magnificent writer.


He is, but sometimes I go cross-eyed reading his books. He likes to explore some crazy topics, which makes for great reading but sometimes confusing reading too.


Yeah, that's why my favorite way of reading him is via his short stories. Just enough to get the big idea across and not get lost in too much detail.


Could you recommend one or two of these short stories? Thank you!


Pick up Axiomatic, try the first story (The Infinite Assassin) and see if it grabs you. It's got the best parts of something like Snow Crash, which drops you in without much exposition, and lets the visuals/action lead. Learning to Be Me from the same collection really stood out to me, it's a unique take on the "uploading your consciousness" trope.


My recollection of The Infinite Assassin is that, for someone not fairly literate in mathematics, it would be nearly nonsense.


It's not a short story, but this is the first work of Egan's that I read and I loved it: https://en.wikipedia.org/wiki/Diaspora_(novel)


Into Darkness was another great one from Axiomatic. Also great action, and a seriously intense brooding atmosphere.


Thank you for letting me find my next read! Loved Egan's Diaspora years ago, looking forward to this one.


Diaspora is one of my all time favourite books


This was my first thought after looking at the project. Just finished the book... wild!


Permutation City was my gateway drug to the excellent Greg Egan bibliography.


Since the title isn't informative: "ALiEn is a GPU-accelerated artificial life simulation program." https://alien-project.org/


Nvidia GPUs only.


I've written a lot of software against GPUs, albeit some years back. The main challenge was that many of the best libraries had CUDA support (or CuDNN support on top) but no support for other GPU lines' frameworks.

Getting CUDA to work well is hard. Not hard on your laptop, not hard on a particular machine, but hard to make work everywhere when you don't know the target environment beforehand - there are different OSs, different OS versions, different versions of CUDA, different cards with different capabilities. But we did get it to work fairly widely across client machines.

The same effort needs to be put into getting things to work for other manufacturers, except a layer deeper, since now you're not even standardized on CUDA. Many companies just don't make the investment. Our startup didn't, because we couldn't find people who could make it work cost-effectively.

What I really wish is that the other manufacturers would themselves test popular frameworks against a matrix of cards under different operating systems and versions. We see some of that, for example, with the effort of getting TensorFlow to run on Apple's M1 and Metal. I just don't see a random startup (e.g., mine, with 12 employees) being able to achieve this.

For example, if I knew from the manufacturer that I could get TensorFlow X to work on GPU Y on {CentOS N, Ubuntu 18/20}, I would gladly expand support to those GPUs. But sometimes you don't know if it is even possible, and you spin your wheels for days or weeks - and if the market share for the card is limited, the business justification for the effort is hard to make. The manufacturers can address this issue.


How viable is it to replace CUDA with compute shaders?


Many organizations writing GPU-accelerated software are not actually "writing CUDA"; they are either using key libraries which are using CUDA (e.g., TensorFlow), or it is a layer deeper still (e.g., I use a deep learning library, the deep learning library uses CuDNN, CuDNN uses CUDA).

Other orgs are using something written in another language that compiles into CUDA.

Either way, to replace CUDA, that middle component needs to be replaced by someone, and ideally it should be the card manufacturers themselves (IMHO). I can't imagine any small/medium organization having sufficient engineering time to write the middle component and keep it up to date with the slew of new GPUs, OS updates, or new GPU features - unless it is their core business.


Technically it is entirely viable. Vulkan/OpenGL compute shaders offer more or less a 1:1 equivalent to every CUDA feature. It is more of a usability issue: CUDA was designed as a GPGPU API from the get-go and therefore tends to be "easier" to use. OpenCL could have been a better replacement, but the API was really not on par with CUDA when it comes to usability. SYCL finally looks like a good answer from the Khronos group, but it is so late: you already have a lot of people who know how to use CUDA, a lot of learning resources, etc.


Why not use the Nvidia docker runtime and put your application in a container?


Can you download a RTX 3080 from the Docker registry?


The hardware is a hell of a lot cheaper than wasting a week of engineering time though isn't it?


The Nvidia Docker runtime is more recent. There were some issues with k8s usage and the Nvidia Docker runtime - you couldn't use a non-integer number of allocations (e.g., you can't split the allocation of a GPU).

That said, the NVIDIA Docker runtime is awesome now - however, all this underscores how much further behind the non-NVIDIA stack is!


What about OpenCL?


OpenCL certainly has the potential to be a universal API but support for it is surprisingly spotty given its age.

For proprietary implementations, Intel appears to have the broadest and most consistent support. Nvidia skipped OpenCL 2.x for some technical reason (IIUC). AMD is a complete mess, for some reason not bothering (!!!) to roll out ROCm support for their two most recent generations of consumer GPUs.

In open source "Linux only" land, Mesa mostly supports OpenCL 1.2 (https://mesamatrix.net/#OpenCL) at this point. So if you're targeting Linux specifically then that's something at least.

Good luck shipping an actual product using OpenCL that will "just work" across a wide variety of hardware and driver versions. POCL and CLVK are both experimental but might manage this "some day". In the mean time, resign yourself to writing Vulkan compute shaders. (Then realize that even those will only run on Apple devices via MoltenVK, and despair at the state of GPGPU standardization efforts.)


OpenCL feels pretty stagnant. Showstopping bugs staying open for years. Card support is incredibly spotty. Feature support isn't even near parity with CUDA.

This despite v3.0 being released just last year... And completely breaking the API.


A simple artificial life / cellular automaton framework would be a great demo for portable compute shaders. I'm looking at this as a potential starting point in my compute-shader-101 project. If someone is interested in coding something up, please get in touch.


I'd love to help with this. I've been experimenting with rustgpu + wgpu, so I'd personally go there first probably.


Yeah, that looks like probably the most promising stack for the future, but there are certainly rough edges today. See [8] for a possible starting point (a pull into that repo or a link to your own would both be fine here).

[8] https://github.com/googlefonts/compute-shader-101/pull/8



And Windows 10 only :(


What would be the best alternative? OpenCL? Vulkan?


OpenCL is sadly stagnant. Vulkan is a good choice but not itself portable. There are frameworks such as wgpu that run compute shaders (among other things) portably across a range of GPU hardware.


In what way is Vulkan not portable? It runs on all operating systems (Windows 7+, Linux, Android, and Apple via MoltenVK) and all GPUs (AMD GCN, nVidia Kepler, Intel), and shaders (compute and rendering) are to my knowledge standardized in the portable SPIR-V bytecode.

WGPU is more portable, since it can use not only Vulkan but also other APIs like OpenGL and Direct3D 11, but Vulkan is already very highly portable for almost everyone with a computer modern enough to run anything related to GPU compute.


It's kinda portable, but I've had not-great experiences with MoltenVK - piet-gpu doesn't work on it, for reasons I haven't dug into. It may be practical for some people to write Vulkan-only code.


Vulkan is supported on basically all modern platforms except for Apple operating systems, Apple refuses to support open graphics APIs on their platform and there's nothing anyone can do about it - this isn't a Vulkan problem. Even OpenGL is deprecated and support hasn't been updated for years, and that's basically the most open graphics API in existence.


You're basically complaining about Vulkan not being portable enough because Apple made their own™ Vulkan-like API instead of actually supporting Vulkan, and some other people made a subset of Vulkan work on top of that.

Why don't you complain about Apple not supporting Vulkan instead?


Nowadays I think it would be SYCL. It uses the same kind of "single source" API that CUDA offers and is portable. Technically it can even use a CUDA backend.


I've used it with a CUDA backend, it does work.


OpenGL Compute Shaders are one option too.


Who else is there… AMD?


Also Intel. Being Nvidia-only is not very good from an accessibility point-of-view. It means that only ML researchers and about 60% of gamers can run this.


Thankfully it's a specialist software package not aimed at gamers or ML researchers.


And most people in Hollywood using rendering engines like Optix.


No, they don't. Also, OptiX isn't a renderer; it just traces rays and runs shaders on the ray hits, on Nvidia cards. Memory limitations and immature renderers hinder GPU rendering. The makers of GPU renderers want you to think it's what most companies use, but it is not.

Also, Hollywood is a city, and most computer animation is not done there. The big movie studios aren't even in Hollywood, except for Paramount.


Except that is what companies like OTOY happen to build their products on.

https://home.otoy.com/render/octane-render/

As for the rest of the comment, usual Nvidia hate.


Octane is exactly the type of thing I'm talking about. This is not what film is rendered with. It is mostly combinations of PRman, Arnold or proprietary renderers, all software.

I don't know where you are getting "nvidia hate", studios that use linux usually use nvidia, mostly because of the drivers.

None of this changes that OptiX is not a renderer.


AMD, and Intel, yeah.


AMD, Intel, Apple, Samsung and all the other mobile chip makers.


Yes?


NVDA is 82% of the market and rising.


Are we celebrating monopolies now?


The difference between current AMD and Nvidia GPUs isn't even that large if viewed from a price/performance standpoint... Comparing cards at a similar price, AMD has slightly less performance while offering significantly more GDDR memory.

I still use an RTX 3080 though; thankfully I got one before the current craze started.


The difference between AMD and Nvidia is _huge_ when you look at software support, drivers, etc. Part of this is network effects and part of it is just AMD itself. But the hard reality is I'd never buy AMD for compute, even if its specs were better.

Just as a random anecdote, I grabbed an AMD 5700xt around when those came out (for gaming). Since I had it sitting around between gaming sessions, I figured I'd try to use it for some compute, for Go AI training. For _1.5 years_ there was a showstopping bug with this; it just could not function for all of that time. They _still_ do not support this card in their ROCm library platform, last I checked. The focus and support from AMD is just not there.


Actually they lost 1% and are now at 81%


Wanted to give a shout-out to Larry Yaeger, who wrote software for this all the way back in the mid-nineties: https://en.wikipedia.org/wiki/Polyworld

He was a professor of mine in grad school, he also did visual effects for The Last Starfighter, and the early work on character recognition in the Apple Newton. Cool dude.


Continuing the thread of other individuals who've done interesting work in this area, I've always been a huge fan of Jeffrey Ventrella's "Gene Pool": http://www.swimbots.com/genepool/

His fractal art is quite compelling as well: http://www.ventrella.com/


Yes, cool dude indeed! Glad you mentioned him; I was thinking of him when I saw this article. He visited the group where I was doing my PhD many years ago and gave a fantastic talk on evolution, learning, and artificial life - I was quite impressed. He has some cool stuff on his website too (http://shinyverse.org/larryy/).


Also relevant is Karl Sims' Evolving Virtual Creatures:

https://www.youtube.com/watch?v=JBgG_VSP7f8

Not sure how they're connected, but I remember being obsessed with that paper and actually recreating it in 2D as a grad project.


This is amazing. My question is whether there are emergent structures in a long-running sandbox environment? The videos that were posted appeared to have quite complex structures but it was unclear whether they were designed or if they "evolved" from earlier more-basic structures. Would be curious to get the author's take.


I wrote a (much less fancy) cellular automata program I called "evol" [0]. It simulates organisms on a flat grid. They have opcodes which are randomly permuted from time to time. If they can collect enough energy, they split; if they lose too much, they die. Having more opcodes costs more energy. There is no hinting or designing; everything starts with a simple "MOVE_RANDOM".
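
The per-organism loop is roughly this shape - a Python sketch of the idea, not the actual evol code (the opcode set and thresholds are made up):

    import random

    OPCODES = ['MOVE_RANDOM', 'MOVE_AWAY', 'NOP']  # made-up opcode set
    SPLIT_THRESHOLD = 100

    def mutate(opcodes, rate=0.01):
        # Occasionally splice a random opcode into the program.
        ops = opcodes[:]
        if random.random() < rate:
            ops.insert(random.randrange(len(ops) + 1), random.choice(OPCODES))
        return ops

    def tick(org, cell_energy):
        org['energy'] += cell_energy          # collect from the grid cell
        org['energy'] -= len(org['opcodes'])  # upkeep: more opcodes, more cost
        if org['energy'] <= 0:
            return []                         # death
        if org['energy'] >= SPLIT_THRESHOLD:  # split into two
            org['energy'] //= 2
            child = dict(org, opcodes=mutate(org['opcodes']))
            return [org, child]
        return [org]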

If you leave the program running long enough, they do actually evolve different behavior. Specifically, they learn to recognize that there are other lifeforms in one direction and then move in the opposite direction, reducing competition over the fixed amount of energy in a cell. You can actually see the population density rise when this happens. Since the grid wraps, you will generally get them "flowing" in one direction, cooperatively.

The world is simple and boring, and it doesn't have graphics. Also, since the naive "DNA"/opcodes I chose use branching and random number generation, it's very slow and can't be simulated on a GPU.

Fun project nevertheless. The last few months, I've been slowly rewriting it in Rust and adding more stuff like terrain height. Haven't published the Rust version yet as it's incomplete—got hung up on the poor state of its terminal libraries.

[0] https://github.com/ehbar/evol


Very cool. Interesting to hear that they actually managed to evolve. I'd be curious to see what happens when they can eat each other, though I recognize that might be significantly more complicated.


I’ve been thinking about that for the rewrite! Maybe some chance to incorporate snippets of opcodes from the consumed.


Most of these look very designed, though "emerging" from simple rules on agents/particles.

Others look evolved inside the sandbox. (see doc here: https://alien-project.org/documentation/Evolutionexperiments...)


One of the YouTube videos claims that they are self-replicating structures that were "evolved" in another simulation. So possibly the appearance of being designed comes from the fact that they were selected from the best of whatever was produced by that other simulation and placed together for a video.


I'm not a biologist, but I understand that isolation is an important factor in diversity, and by default this simulation wouldn't have that. So it makes sense to evolve organisms in different areas and then put them back into the same area.


Have you ever seen http://boxcar2d.com/? It requires Flash so it probably doesn't work anymore, but it used genetic algorithms to "design" a 2d car to travel over bumpy terrain.

It was ported here https://rednuht.org/genetic_cars_2/ but it's not quite the same thing.


One of the videos mentioned they were evolved in a different simulation.


I wonder if multiple simulations show any level of convergence.


Not sure if this is just for fun or for research. Artificial life is/was a field of research for a while; papers were written, books were published, [1][2] etc. The field sought to study biology and the complexity of living things by experimenting with simulations of the real thing(s).

This reminds me of what IMHO is the best use of artificial life in a game, Unnatural Selection. In the game you had to select and breed creatures to go against enemy creatures. [3][4]

[1] https://en.wikipedia.org/wiki/Artificial_life

[2] https://www.amazon.com/Artificial-INSTITUTE-SCIENCES-COMPLEX...

[3] https://www.mobygames.com/game/unnatural-selection

[4] https://www.youtube.com/watch?v=zteY_f9DrQA


It's still an active research area and the alife conference [0] is quite popular

[0] https://alife.org


Reminds me of my favorite programming game that no one has heard of: http://grobots.sourceforge.net

(Old video of grobots in action at https://youtu.be/BLXKedZHls4?t=801)

Alien looks awesome!


Then there's Core War from 1984 [1]. 11 years ago I computationally evolved a warrior (a program competing for virtual resources) and submitted it to the nano hill [2], it's still ranked top 20 to this day. Every few months the hill emails me the stats of someone trying to beat us with a new warrior :)

[1] https://en.wikipedia.org/wiki/Core_War

[2] http://sal.discontinuity.info/hill.php?key=nano


There is an old artificial life simulator (darwinbots http://wiki.darwinbots.com/w/Main_Page ) that is inspired by grobots-like programming games. Each organism is driven by its own code that can mutate randomly at each reproduction.

I've been trying to produce a web version of it. This is how far I've gotten (before more or less giving up):

http://darwinjs.herokuapp.com/


Nicely done! Very fluid.


Thanks!


This is why I like reading the comments on HN. I'd never heard of Grobots, it's brilliant!


Grobots looks like RobotWar https://www.youtube.com/watch?v=uSdCtxN96_o


There's a similar project that runs right in your browser:

https://exophysics.net/

It's simpler, and more about physics than biology, but the emergent phenomena are pretty interesting nevertheless. This universe comes the closest to a life simulation: https://exophysics.net/exhibit/mordial.html [based on the Primordial Particle System, https://www.youtube.com/watch?v=makaJpLvbow]


https://exophysics.net/exhibit/ogneron.html

Pretty cool! I see some rare behaviors - maybe there is a weak magnetism-like property, and certain combinations of particles are more prone to it than others? I'm trying to rationalize the behavior I see after about a minute, where some "molecules" seem to start trailing others.



Excellent


Imagine if our real world is just some GitHub project.


It's more likely than you think: https://en.wikipedia.org/wiki/Simulation_hypothesis


Not really. If we're in a simulation, that just raises the question: what is the "real" universe that the simulation is running in? It pushes the question of the nature of our universe up a level, where we have zero visibility. No more satisfying than "where was God before he created the universe?"

The mathematical universe sidesteps this problem. If there is a concise and complete model of the universe, that is sufficient for it to exist. A simulation might also be considered a mathematical model, and it would exist in the same way even if nothing ever ran the simulation. So I guess maybe it could be a simulation, but then we shouldn't ask what it runs on, but what the program is.


> The mathematical universe sidesteps this problem. If there is a concise and complete model of the universe, that is sufficient for it to exist.

This then leads to how does math exist instead of nothing? Math is a concept, and if concepts exist then that is not "nothing".

Many people confuse "nothing" with the vacuum of space and particles appearing out of nowhere. In this case, we have something (space, vacuums, and particles), not nothing.


Because nothing is precisely what does exist. But nothing implies something, so my working theory is that nothing's implied opposite, something, is itself the first thing; then some cellular-automata-like progression results from similar logical self-reference, and down the line our physics (and the entirety of every logical permutation of information, n-dimensionally) results from that.

A similar conception I've heard is that it's like something and nothing, at the beginning of time, made a bet on whether there'd be something or nothing, but the act of making the bet was already something, rigging it in something's favor. Nothing thought that was bullshit and tried to call it so, and they've been battling it out ever since.


> But nothing implies something

Put another way, nothing has absolutely no properties - including the property of being nothing, or empty. If an empty nothing lacks the property of being empty, or nothing, then something must arise.


I'm working on writing a paper along those lines. I do believe that the answer to "why is there something rather than nothing?" may be: actually, nothing is the only thing that exists, but its instability creates our apparent reality through a self-referential observer-observed reality loop. I would love to chat; use my research email.


> A similar conception I've heard is that its like something and nothing, at the beginning of time, made a bet whether there'd be something or nothing

The problem is that nothing can't make bets.


What would it look like for math to not exist?


Does science fiction exist? Or Pokemon? If your view is that they don't, you may argue similarly that math is a human-made construct (which happens to work well to describe our universe, but that may just be survivorship bias, as we use in physics only the math that works - for instance, we discard imaginary solutions to classical equations of motion). I do believe that is the right view: math is a man-made "language" inspired by physics, which is more fundamental.


If that's your view that's fine, but it's not the view of GP.


I see good arguments for both views. A way to get insights could be discovering an alien civilization's math.


> It pushes the question of the nature of our universe up a level where we have zero visibility

The universe does not owe you visibility into its origins, the lack of it does not make the hypothesis any less likely.

> If there is a concise and complete model of the universe, that is sufficient for it to exist.

Sounds like the ontological argument [1], one of the worst contortions of apologia ever imagined.

[1]: https://en.wikipedia.org/wiki/Ontological_argument


If the universe will never leak any information about its origins, then those origins cannot affect us in any way, ever. This doesn't make any such hypothesis less likely, but it makes them irrelevant to us.


If we are in a simulation and this simulation obeys similar constraints to our computational models, we can test hypotheses on the basis of information theory. Or possibly find error-correction codes encoded in string theory as some quantum physicists have suggested.


There’s no evidence we live in a simulation, and a whole lot to suggest we aren’t.


Is there any evidence suggesting we're not in a simulation?


A lot of physics seems unnecessarily expensive to compute. Quantum mechanics suggests we either have nonlocality or exponential blowup, both of which cause simulation challenges. With just classical physics you wouldn't need to deal with that.

On the other hand, there are a lot of things that make physics tractable to compute, such as the +++- metric tensor and other factors forbidding causality violations. A universe with closed timelike curves becomes very expensive to compute because you usually have to use implicit solvers that are slow and might not even converge, corresponding to various time travel paradoxes.


Isn't this all assuming the simulation is running on a "computer" in a universe like ours?

Couldn't the universe where the simulation runs be so entirely different from ours that computing all that stuff is just easy?


Everyone saying it would be computationally expensive to simulate our universe is failing to put their mind outside of the box of our universe. Imagine for a moment that there's a universe which compares to our universe similarly to how our world compares to one inside of Conway's Game of life.

Granted, this scenario doesn't provide us with anything we can take action on, but the idea that we're in a simulation at all doesn't, either.


Some self-replicating "creature" in Conway's Game of Life could rowhammer the machine it runs in such that the creature (or a copy of it) now exists outside the Game and is able to replicate across the machine and maybe even across the network. If it takes control of a robot, you could argue it's "escaped" its simulation and now exists in our physical world.

The odds of all that happening without it being prevented are all but zero.

We have a better chance that one of the simulators grows attached and—against protocol—decides to uplift us from the simulation into a form where we can directly communicate with them.


So you only have to assume physics totally different from ours, and we can’t observe it. Isn’t that a bit of a weak point? And what would be the point of this simulation that has been running for billions of years?


>you only have to assume physics totally different from ours

You don't have to assume anything is true if you don't want to, but if you want to consider whether we're living in a simulation, it's probably worth considering.

>And what would be the point of this simulation that has been running for billions of years?

First, it's billions of years in our time. Second, what's the point of Conway's Game of Life?


> You don't have to assume anything is true if you don't want to, but if you want to consider whether we're living in a simulation, it's probably worth considering.

Which, for me, is a dead end. It's the same as assuming there's a God, except the moral implications are worse.

> First, it's billions of years in our time.

So, if billions of years of our time fly by like your average simulation run in "their" universe, the simulation can't be very meaningful to them. And it makes the distance between our and "their" physics even larger.

> what's the point of Conway's Game of Life?

None, and that's why nobody runs one with 10^120 cells for billions of years. And if somebody did, the result would be incomprehensible. The gap between us and our creators must then be incomprehensible for us. All this is so outlandish, that the word "likely" shouldn't be anywhere near this discussion.


Your ability to comprehend something is your shortcut to assessing how likely it is?


> And what would be the point of this simulation that has been running for billions of years?

There is zero evidence for or against the simulation hypothesis, so why would some random person on HN be able to have the answer to this question even if we are in a simulation or even if we simply assume that we are?


Even if it's easy, simple simulations would still dominate the space of all possible simulations if the resources of the simulators are finite. So simpler simulations are more likely. (https://osf.io/ca8se , disclaimer I'm the author)


> A lot of physics seems unnecessarily expensive to compute.

When you make a simple simulation of rigid bodies with classical physics, you often get numerically unstable results - bodies jerking against each other, slowly passing through each other, etc. One common way to solve this is to introduce a "frozen" state: when objects are close enough to rest with balanced forces, you mark them as frozen and stop computing them every frame, to save computing power. You only unfreeze them when some other unfrozen object interacts with them.

Additionally, hierarchical space-indexing algorithms are often used to avoid n^2 comparisons when calculating what interacts and what doesn't. These algorithms often use heuristics and hashing functions with collisions to subdivide the problem, which might result in objects becoming unfrozen without actually touching each other.

The result, from inside this simulation, would be weird, nonlinear, nonlocal, and would look a little like wave function collapse (if particle A, whose coordinates hash through this weird function to the same value as those of particle B, happens to unfreeze, then particle B unfreezes as well, despite not interacting in any way). And this would probably be considered "hard to compute" compared to the simple equations the system developer wanted to simulate.
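
Schematically, the freeze/unfreeze trick plus a hashed broad phase looks something like this toy sketch (not any particular engine):

    from collections import defaultdict
    from dataclasses import dataclass

    CELL = 2.0  # coarse grid size for the spatial hash

    @dataclass
    class Body:
        x: float
        y: float
        vx: float
        vy: float
        frozen: bool = False

    def bucket(b):
        # Bodies are only compared (and woken) when their buckets
        # collide, not when they actually touch.
        return (int(b.x // CELL), int(b.y // CELL))

    def step(bodies, dt=0.01):
        grid = defaultdict(list)
        for b in bodies:
            grid[bucket(b)].append(b)
        for b in bodies:
            if b.frozen:
                continue              # resting bodies cost nothing
            b.x += b.vx * dt          # trivial integration step
            b.y += b.vy * dt
            for other in grid[bucket(b)]:
                other.frozen = False  # a bucket collision wakes it,
                                      # even with no real contact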

An example that might be more relatable for scientists: it's much easier and computationally cheaper to make a numerical simulation of the 3-body problem than an analytic one. But describing the numerical simulation's behavior in terms of physical equations requires a much more complex model than the equations you wanted to compute in the first place. You have to include implementation details like floating point accuracy, overflows, etc. And if you go far enough, you have to add the possibility of a cosmic ray hitting a memory cell in the computer that runs your simulation.

I'm not saying this is the reason QM is weird - I don't understand QM well enough to form valid hypotheses ;) - but I'm saying we might be mistaking the intention of The Developer for the compromises (s)he made to get there. If you take any imperfect implementation of a simple model and treat it as perfect, the model becomes much more complex.


That's like a sim saying that their world would be impossible to run on the desktop in their simulated living room. That is tautologically true.


The physics of the simulator would have to be totally different to support a simulation with exponential computational costs. You probably couldn’t have anything like conservation of energy. Polynomial overhead would feel much more plausible.


Consider the physics of the simulator is literally the physics of our current universe. It need not be running on a binary substrate, the computation platform could just be the mass of the universe over time.


> A lot of physics seems unnecessarily expensive to compute.

Sure, but look at the state of frontend web development :)


> A lot of physics seems unnecessarily expensive to compute.

How do you know it's all computed? To make a convincing simulation, you just need to simulate in detail the bits that are actually being observed.

Everything else that happens could just be approximated at larger and larger granularity the further it is away from an observer.


Nope, long distance entanglement collapse breaks this. You either need exponential blowup to simulate all possible eigenvalues or you need superluminal coordination.

There isn’t actually an observed/unobserved distinction in physics. Unless you mean the simulation is specifically targeting humans, which is a vastly more complicated proposal.


> Unless you mean the simulation is specifically targeting humans, which is a vastly more complicated proposal.

It's also the most likely proposal (with our current understanding of the universe).

The Axis of Evil (cosmology) calls into question the Copernican view of the universe - essentially saying our solar system is somehow back at the center of the universe.

https://www.youtube.com/watch?v=hjVCjdX5XRw

If WE are the subject of the simulation, it's likely everything our instruments observe is like the sky in The Truman Show - not there, just phantoms of what we would expect to be there, with whatever the simulation wants us to know about physics.

There's a max speed, the speed of light; what if this is the max processing ability of the computer we're running on? What if we're not on a computer at all but some sort of wetware computer system that grows as it needs to and never runs out of resources?

What if the speed of light in the parent sim is 500x bigger for them, or ours is like a centimeter in comparison?

A dream is a simulation, we could all be dream creatures to some huge extra-dimensional being. Not everything pre-supposes human technology.

I've seen literal "glitches" in reality, so it's pretty easy for me to believe that reality isn't something completely set in stone. For others it challenges everything they believe in, for that I say open your mind.

Donald Hoffman believes that what we see is like what someone in a VR headset sees; outside the VR headset, who knows what that world is like, but in this one everything except math (which he believes is universal and extra-universal) is made to fit this universe. Physics, science, all of it is unique only inside the headset. There could be many headsets with different settings running in parallel (parallel worlds/universes); maybe the speed of light is faster in one than the other, maybe gravity works differently, etc. So many things in our understanding are really like "settings": the size of Planck's constant, pi, the speed of light, etc. It almost reads like a config file.

I mean, if you buy into a "God" being: computing is a thing we have, so why wouldn't God have it? Wouldn't it even make more sense for him to just code up a simulation? It's got to be a lot less demanding than building a whole universe from nothing.


Unnecessarily expensive computation in software? Preposterous.


Maybe the simulation isn’t optimized, but how do the unnecessarily expensive bits negate the theory that it all is in fact a simulation?


Yes, any evidence of complexity (assuming that simpler universes are more likely) is evidence against the simulation hypothesis: The Simplicity Assumption and Some Implications of the Simulation Argument for our Civilization https://osf.io/ca8se (I'm the author :) )


Complexity and simplicity seem pretty biased toward human understanding?

Just as a thought experiment, I'd propose that our universe and the human experience is incredibly simple. Humans were only given a limited number of senses so that the simulation can be run in this "low fidelity". Compared to the thousands of senses a level or two up. We also are simulated in a simple linear time model, only able to experience a single time at once, greatly reducing the complexity and fidelity needed. Same for the number of dimensions we are able to sense.


Yes, you need to remain inside the reference class of human simulations, so in a sense there is a bias in where you want to draw the line of what counts as a human simulation. But once you do that, the result is not ambiguous.


1. That the simulator must run something much larger than this world much faster than real time. How feasible is that?

2. That the same arguments also apply to our simulator operators. They also inhabit a simulated universe. And so on. Where does it stop, and why?


Running an AI simulation on an 8-bit Nintendo is going to be a lot more complicated and difficult than running one on a 512^e38-bit (pulled out of ass) 100kth-gen Radeon GPU that won't be developed until 1000 years from now...

In a universe where time itself could be fluid, where it could be easy to reverse events, rewrite events, etc., quantum computers could work far better than ours ever could, because we're limited by causality.

I mean the people beyond this universe could have 50 senses, like a sense of how far up or down they are, or how much water they can breathe in before they need oxygen if o2 is even a thing, or a sense of time so they can go back/forward through time. If they have 50 senses, our 5 sounds like "nothing" to simulate.

It's all a matter of perspective, I'm sure an ant feels like they keep pretty busy and nothing could possibly simulate their colonies, but I'm sure that would be pretty easy.


That so many people believe they have knowledge of all the evidence suggests to me that something very strange is going on here.

This may not literally be a simulation, but it seems to behave like one in many ways.


It is impossible to know the likelihood as we have no other points of comparison.


It is actually possible :) (with some assumptions on the distribution of the simulations). Complex sims are less likely, so the likelihood of increasing the optional complexity of our sim should be slim - for instance, interstellar travel. It's still unsolved exactly how unlikely, but if you have a large enough increase in complexity (say, interstellar travel over billions of light years), you will hit sims that are unlikely enough.

(https://osf.io/ca8se, I'm the author)


For folks who like this sort of thing, I will once again make my monthly plug for folks to check out "The Evolution of Cooperation", by Robert Axelrod.


Also, "The Selfish Gene". Super fun read. Also, there are a bunch of really interesting videos made to demonstrate concepts like the evolution of altruism on youtube: https://www.youtube.com/watch?v=goePYJ74Ydg


Bought it just now! I really liked Steven Levy's "Artificial Life" as a light introduction to this world. Sadly, the book isn't too far out of date despite being 30 years old now.


I think you need to read the first few chapters of After Virtue to appreciate the concept behind creating things for people to enjoy directly; it's a form of art, essentially reducing the overgrown calculator known as a "computer" to a beautiful vase holding a flower arrangement.

Here is the link: https://archive.org/details/4.Macintyre/page/n17/mode/2up

This is a sort of test playground for marketing, brands, and so on since the programs occupy a no man's land between games, academic research, toys, entertainment, and programming.

It also satisfies the self-feeding dogfood condition: similar to games such as RoboWar, it is difficult to resist the temptation to experiment at the simulation level, a phenomenon that could be described as the MFTL effect.

http://www.catb.org/~esr/jargon/html/M/MFTL.html


I'm not a fan of proprietary frameworks like CUDA. Makes you too dependent on the manufacturer.


I'm not a fan of ROCm since it doesn't work on consumer cards.


Reminds me of ParticleLife (for example [1]), which has much simpler, but more random, rules. Lots of interesting organisms emerge there too, but I rarely see replicators.

I used this as inspiration to learn about Unity ECS and made a 3D version with WebGL support [2]; native builds obviously have much better performance, but it is all CPU-only anyway. What I found very interesting is dynamically switching between 2D and 3D: most organisms survive the dimensional increase or reduction and just reconfigure themselves into a similar structure.

[1] https://fnky.github.io/particle-life/ [2] https://particlelife-3d-unity-ecs.github.io/


Looks like a next-generation Conway's Game of Life.


Definitely a kind of cellular automaton - I like seeing a version of that other than Game of Life.


I don't think you could really call this a cellular automaton, as that's defined by the cellular-neighbourhood-processing update rule. To put it another way, this looks like a 'vector' simulation (or automaton) compared to GoL's 'raster' update.

There are certainly a lot of other fascinating cellular automata though! Even within 2-state-2D-totalistic (the class GoL is from) there's loads to see and lots of surprises! Well worth exploring! (There's an app called 'golly' that's good for that, and its cousin 'ready' does related (also 'raster') 'reaction-diffusion' simulations.)


In no way comparable in sophistication, but I did find making an n-body simulation to be unexpectedly profound.

My universe started as a uniform random distribution of small stationary objects; the only rules that existed were gravity (F = G*m1*m2/r^2) and inertia (F = m*a).

Mass started clumping together, orbiting each other, eventually forming a relatively stable arrangement of what we recognize as stars orbited by planets orbited by moons.

With two simple rules to govern my universe, an emergent order had occurred that mirrored my reality.
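
Those two rules fit in a few lines of code. A NumPy sketch (constants made up; a softening term EPS avoids the singularity when bodies get very close):

    import numpy as np

    G, DT, EPS = 1.0, 0.01, 0.05       # gravity constant, timestep, softening
    N = 200
    pos = np.random.rand(N, 2)         # uniform random positions, at rest
    vel = np.zeros((N, 2))
    mass = np.full(N, 1.0)

    def step():
        global pos, vel
        diff = pos[None, :, :] - pos[:, None, :]   # pairwise offsets
        dist2 = (diff ** 2).sum(-1) + EPS ** 2     # softened squared distances
        np.fill_diagonal(dist2, np.inf)            # no self-attraction
        # Gravity: a_i = sum_j G * m_j * (r_j - r_i) / |r_j - r_i|^3
        acc = (G * mass[None, :, None] * diff
               / dist2[:, :, None] ** 1.5).sum(axis=1)
        vel += acc * DT                            # inertia: integrate F = m*a
        pos += vel * DT

    for _ in range(10000):
        step()  # plot pos each frame to watch the clumping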


Reminds me of SmoothLife.

Generalization of Conway's "Game of Life" to a continuous domain - SmoothLife (Stephan Rafler) http://arxiv.org/abs/1111.1567

Video of SmoothLifeL: https://www.youtube.com/watch?v=KJe9H6qS82I


Windows + NVIDIA only.


Yeah, seems like a missed opportunity, since the Mac/Linux crowd would eat this up.


The most popular OS and the most popular GPU? I can run this easily.


Definitely going to play around with this. I'd love to see examples where you have a ton of at first non-usable energy spread out in the world, with some "hot spot" of energy in a corner with some setup allowing for evolving mechanisms. Seeing mechanisms form that are able to utilize the spread-out energy would be really fascinating.


It reminds me of the spaceship game Reassembly. Straight away, I thought: "hey, you could build a massive reassembly with this probably".


Their favicon really looks like Flickr's logo.


While that's true, I doubt that will ever be a problem for ALiEn or Flickr.


Reading through the comments, the number of folks who simply can't get this running on their system seems to be fairly large.

The GPU compute ecosystem is truly in a very sorry state, and NVidia is very much to blame for this: in their quest to get a stranglehold on the market, they've reached a point where things don't even work reliably on their own products.


WebGL seems to be the only kind-of-robust way to do portable GPU code now, if you don't have encyclopedic experience of deploying native GPU apps, or the time, $$ budget, and opportunity-cost budget to engage in multiplatform testing and fixing.


This is way above my head but looks insanely cool.


Are there any similar projects that would help newbies like me to learn a bit about any real area of life sciences, e.g., biochemistry, cell biology, neuroscience in a fun and engaging way? This project looks like tons of fun, but I am too ignorant to judge whether it would teach me anything applicable beyond the scope of this particular bit of software.


This reminds me of the awesome Scriptbots project by Karpathy (of Tesla self driving fame) from 10 years ago, which I spent countless hours playing with: https://sites.google.com/site/scriptbotsevo/


This is going to be a random comment, but the thing that struck me most was how close this person's GitHub username is to my own: github.com/chrxr vs github.com/chrxh. Feels bizarre. And seeing their actual name, it appears their username has the same relationship to their real name as my own username does.


This reminds me of a reddit r/nosleep story, "The Life in the Machine"

https://www.reddit.com/r/nosleep/comments/u7zc2/the_life_in_...



Didn't work on my AMD 3700x + 64 GB + GTX 1070. Windows 10 is updated, and the Nvidia drivers too. Got only a black screen after clicking "play" :( Tested both the 2.52 and 2.47 versions.


Sometimes I see a frozen render. Maybe it only works on Intel CPUs...


Nope. Appears to be broken for Nvidia 10 series. I'm on Intel with a 1070ti + another user reports issues on the 1080ti (see similar post in this thread; reported to author here: https://github.com/chrxh/alien/issues/21).


Just informing that this really worked on an old i7 920 with 24 GB and an RTX 2060 Super (8GB). I didn't even need to install the CUDA SDK on this one.

Since the i7 920 supports fewer instructions than the 3700x, it really is a specific problem with the GTX 1070 (8GB).


Thank you for letting me know!

Later I'm going to test it on another computer with a RTX 2060 super, but slower CPU and slower interconnections (PCIe).

Thanks for the link too. :)


Aha... well that explains why my 1060-based system didn't work.


That reminds me of a small game I did last year: https://gcampos1712.github.io/index.html


Reminds me of this great youtube channel: https://www.youtube.com/watch?v=YNMkADpvO4w


The look/feel is very reminiscent of the Osmos game, even the audio:

https://m.youtube.com/watch?v=QADU5SHHO-w


I have no experience with CUDA; is it possible to run this on a normal PC with a 1080ti?

I've tried; the program runs, but I can't step the simulation even once.


Same problem with my 1070ti (also Pascal architecture). When first started, I can pan, zoom, edit, etc. But as soon as Run is hit, rendering completely breaks: scroll bars indicate zoom is working, but the display never updates. In addition, the program hangs on exit (one CPU is pegged at 100%).

Have updated to current CUDA (11.3.1) and current NVidia driver (466.77) with no luck.


Submitted an issue on GitHub: https://github.com/chrxh/alien/issues/21.


Install CUDA drivers; a 1080ti supports CUDA.


This is awesome! Seems very interesting and looks beautiful! Will definitely give it a try!


Now what I really want to see is someone who is skilled with it live streaming on twitch.


Is there any way to build Microsoft Visual Studio 2019 projects in Linux?


You can try this converter:

https://github.com/pavelliavonau/cmakeconverter

And also use Vcpkg for dependencies, but I guess there will be some mess with Nvidia SDKs so you'll have to hack around a bit.


Smartscreen is blocking the installer... That's so annoying.


Seems like our current definition of life is way too arbitrary.


I am surprised nobody has mentioned one of the oldest alife simulations: Tierra by Tom Ray (http://life.ou.edu/tierra/)


Does it have all of the rules of DNA like jumping genes?


Looks better than framsticks.


This is amazing.


This is heavy. I shall delve.


Could someone explain the obsessive devotion to doing all Artificial "Life" research in terms of Cellular Automata? If you could supply the mathematical reasoning for this, I would be very interested in hearing the answer.

And preferably an answer that goes beyond just saying that Von Neumann used Cellular Automata in his ALife research.

Also, if anyone knows of alternative methods to CA in studying the properties of life then I would also be interested in learning of these.

All in all, I find these procedurally generated art pieces rather underwhelming as any serious attempt to study what artificial life is or can be.

> digital organisms and evolution

This is a claim without definition of what a non-biological organism even is. Could we just claim that any CA, any program is "living" while it is running?

I would love to see some formality before claims are made in this area.

EDIT: After watching the "Planet Gaia" video [0], I feel even more like the excitement about this is no different than the excitement for a video game and not for actual scientific progress. Cool code and cool visuals. Very little in the way of understanding life better.

[0] https://www.youtube.com/watch?v=w9R6zrdl6jM


I think many artificial life simulations end up underwhelming because life is incredibly complex, and so it's very hard to simulate at scale. This ALiEn is perhaps the most advanced one I've seen, and it looks like even still they take some shortcuts (like copypasting interesting organisms from previous simulations together to create interesting interactions).

What you see as a criticism of this line of research I think is actually its reason: Life is arguably the most interesting thing in the universe, and if we can create it digitally it will surprise us. Evolution yields insights and solutions that you cannot predict. If we can synthesize what the minimal set of key properties are necessary for artificial lifeforms to create interesting unexpected outcomes, it helps us clarify the definition of what a non-biological organism could be.

I'm personally fascinated by the idea of autonomous digital agents that exist and self replicate while trying to earn cryptocurrency, which is used to pay for the hosting costs of themselves and their progeny. I think we are about two decades away from this being realized, but in the future, software services could self assemble, replicate imperfectly and evolve to please humans without any humans writing additional code: we'd just have to code a profitable LUCA, create suitable 'nests' and pay the organisms that please us. "What is life" is debatable, but IMO this would be a valid digital lifeform.


> so it's very hard to simulate at scale.

But this point goes largely unaddressed: why focus on "simulation" when mathematical formalisms & theories could potentially be even more useful? Especially when most "simulations" run on some arbitrary set of hard-coded assumptions?

> What you see as a criticism of this line of research

To clarify, I was in no way criticizing ALife research. Quite the opposite. I am actually trying to help ensure it does not get stuck in a rut.


Ah. Well, speaking personally, mathematical formalisms & theories sound very intimidating, whereas CA-type simulations are so approachable that many are 'fun toys' kids can enjoy playing with.

A mathematically formal approach does sound potentially more useful, but I'd have no idea how to approach that sort of problem. I speculate that the Venn diagram of people who want to work on these types of problems and also have the depth of formal math understanding to actually achieve it is a small handful of people who have plenty of other interesting problems to work on.

Or maybe someone has done this work successfully, but the depth of knowledge required to understand it has prevented wider awareness?


> Could someone explain the obsessive devotion to doing all Artificial "Life" research in terms of Cellular Automata?

Uh, it doesn't exist. Plenty of A-life research doesn't use cellular automata as a model.

> Also, if anyone knows of alternative methods to CA in studying the properties of life then I would also be interested in learning of these.

https://avida.devosoft.org/


> https://avida.devosoft.org/

Thanks for the link!

> Plenty of A-life research doesn't use cellular automata as a model.

While I would like to believe you on that, one link does not seem sufficient to support the word "plenty" when the ratio of ALife projects built around CAs to those that aren't is extremely high.


I'll at least point to another example: http://aevol.fr/


Although they could call this an automaton, I think it's incorrect to call it a CA, as it's not cell-neighbourhood based:

"A cellular automaton consists of a regular grid of cells, each in one of a finite number of states, such as on and off "

From: https://en.wikipedia.org/wiki/Cellular_automaton?wprov=sfla1


It's not grid based. But that seems like a rather pedantic differentiation to make. It does quite literally utilize the concept of "cells". [0]

I find it interesting that the project seems to have hard-coded emergence with the concept of "tokens".

So, I am much less intrigued by the simulation examples when most of what we are seeing is just a procedurally-generated video game with pre-defined game rules. Much of it is not truly emergent.

Again, it's a "oh, that's cool" kind of factor, but a far cry from contributing to anything in the way of "artificial life" research.

[0] https://alien-project.org/documentation/Basicnotion.html


>To build alien you need Microsoft Visual Studio 2019.

no thanks.



