* Processors have a defined interface ("instruction set") that you can implement a compiler for and have it keep working. The interaction between the FPGA tooling and the FPGA being targeted is much more complicated. It's a bit like graphics card drivers and the hardware, where the line between the two is blurry and you have to consider them both together as the graphics system. This means that the 'cleverness' of the design is much more exposed to the toolchain, and having that open source would give away a lot to competitors.
* Synthesising FPGAs is a LOT harder and more complicated than compiling code (and compiling code is pretty hard). There are many stages in the pipeline, involving synthesising the logic and laying it out on the target for best performance. To give an example, we upgraded our toolchain for a Xilinx FPGA and power consumption improved by 20%, because they'd changed the way that they segment clock domains and gate off stuff not being used. Good performance in the tooling is a major differentiating factor that you might not want to give away.
The point is that the free toolchains got good enough that in general you don't need to care, and I think it's reasonable to expect the same dynamics to apply here.
The uop cache, and L1 code cache, of modern chips is rather small. You can often greatly increase performance locally by loop unrolling, but if that causes the "hot path" to no longer fit in the uop cache (or L1 cache), then you've traded a small local gain for a chunk of global performance.
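To make that concrete, a back-of-envelope sketch (the uop count, unroll factor, and cache size below are made-up illustrative numbers, not measurements of any particular chip):

\[ 24 \text{ uops/iteration} \times 64 \text{ (unroll factor)} = 1536 \text{ uops} \]

which is roughly the entire uop cache (~1.5K uops) on some recent x86 cores: one aggressively unrolled loop evicts everything else on the hot path, so it wins its microbenchmark while the program as a whole loses.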
Global vs. local optimization is just a tough subject in general, even on CPUs (which are probably easier than FPGAs).
To a certain (admittedly limited) degree the "limited space" problem with FPGAs maps intriguingly to the "limited memory" conditions that early LISP and FORTH compilers spawned in.
If you are making 10,000+ units the economics are overwhelmingly in favor of ASIC over FPGA.
(In particular, if you are the first mover that proves the market with an FPGA, you should have started an ASIC design in parallel, because the second and third movers will have ASICs from day one and a cost structure 10x or 20x better than yours!)
This keeps FPGA a niche market because it serves niche markets.
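A rough break-even sketch (every dollar figure here is a made-up, order-of-magnitude assumption, not real pricing): say ASIC NRE is \$500K against \$50K of FPGA-based engineering, and unit costs are \$5 (ASIC) vs. \$50 (FPGA). Then

\[ N_{\text{break-even}} = \frac{\mathrm{NRE}_{\mathrm{ASIC}} - \mathrm{NRE}_{\mathrm{FPGA}}}{c_{\mathrm{FPGA}} - c_{\mathrm{ASIC}}} = \frac{500{,}000 - 50{,}000}{50 - 5} = 10{,}000 \text{ units} \]

consistent with the "10,000+ units" rule of thumb above, and the gap only widens with every unit after that.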
No, the question was: would the vendor want this?
I'm referring to this comment above:
> Programmable logic seems like a space that could be disrupted with a company that has fully open-source tooling around its hardware along with great documentation. (...)
* QuickLogic EOS S3 (provided by QuickLogic)
* Lattice iCE40 (reversed, project icestorm)
* Lattice ECP5 (reversed, project trellis)
* Lattice Nexus (reversed, project oxide)
* Gowin GW1N (reversed, project apicula)
* Xilinx 7 series (reversed, project x-ray)
With more in the works. Hopefully an AMD-owned Xilinx will start contributing to the open source tooling themselves, as AMD does with CPU and GPU tooling.
I think a truly "open" documentation and compilers model (like what AMD have been up to with their graphics cards) would still be a game-changer in the industry.
The medium hard part of FPGA tooling is synthesis, and the truly hard part is high performance placement and routing with low run times.
The competitive benefits of being good at that are huge, because it can literally mean the difference between choosing one vendor's component vs. another's. As a user, the tooling and the underlying HW architecture can mostly be treated as an inseparable blob: it's not as if you're going to design specifically for a particular FPGA logic element feature; you rely entirely on what the backend tool will do with it.
If I were Intel or AMD, I'd be fine with opening up some documentation that helps users to a certain extent, but I'd never agree to an open backend model.
With that said, I don't understand the FPGA market well enough to know if open tooling would be an advantage competitively.
With respect to opening documentation:
I really do wonder. I don't know the market well enough. For a "we need an FPGA because we need XYZ to be fast and don't have the volume/$ to tape out," of course a good PnR and meeting timing is the #1 concern, and a savvy buyer would choose the best performing solution even if the software suite is brutal to use. But for "we need a PLD for some glue logic, it doesn't need to be 100% blazing fast, it needs to meet XYZ requirements," would a buyer choose a toolchain that worked better and let their engineers go to market faster, even if the PnR wasn't quite as efficient? In that case, opening up documentation and letting the open-source ecosystem build out support could be valuable.
What would be telling here would be whether Lattice have seen a measurable sales impact from the reverse engineered open toolchain appearing for their parts. I cynically think that perhaps the idea is futile in this industry, but that's also what everyone thought about graphics and AMD went and did it anyway...
Also I wouldn't say that their tools are in the dark ages; most of the problems with their tools stem from modern software tool design. For example, I find Vivado / SDK to be complex, yes, but also much more pleasant to use than toolchains for simple microcontrollers. An example is the total mess that is software libraries delivered via STM32CubeMX (no part migration, libraries depending on custom BSPs vs. auto-generated code, and good luck preserving your fixes if their auto-generated code has bugs).
A reason that the Xilinx tools are big and complex is that they are following modern software tool practices (that I hate): it's a large Java-based tool that aims to hide the underlying relatively simple command line programs. Xilinx tools from the mid-90s are actually closer to what the open source FPGA tools look like today.
AMD is a Big Company, could go either very right or very wrong.
Fusion was the name AMD gave, pretty soon after the ATi acquisition, to their fancy HSA concept of accelerating compute with GPGPU, especially with the integrated GPUs in their APUs. It was announced with great fanfare and pomp, but never really became reality. Today their GPGPU framework (ROCm) is both unpopular and doesn't even support the iGPUs in APUs.
Overall it seems like a move to strengthen their portfolio for integrated high-end solutions, not something they want to radically shake up. So I suspect it'll be business as usual at Xilinx after the acquisition, but maybe one day we'll actually get a mythical datacenter-class FPGA/CPU combo using Epyc, if it makes sense for their customers.
These companies make money selling hardware, just open the tools and let people use what they want instead of forcing this 'visual programming' paradigm with half-baked IDEs on everyone.
My worry with AMD is what will happen to the SoC chips that have actual ARM cores in the fabric (like the Zynq family)?
Why does that worry you? AMD already has ARM cores in their portfolio at least for their A-series Opterons and the PSP.
The paradigm is the least important part when it works. I don't know anyone on real-world projects who didn't have to configure something manually through a Tcl script because it was either not exposed in the GUI or not working in the GUI.
My issue is not with 'visual programming'; my issue is with the 'half-baked' part of the tools.
There are too many multi-year-old bugs that go unacknowledged by Xilinx. The VHDL-2008 standard came out more than a decade ago, and from memory it was only in 2019 that Vivado accepted it as a valid simulation file. There is a plethora of auto-generated/copied/cached files that add a ridiculous amount of friction to any attempt at sane version control. There is just too much complexity, in what I assume is an attempt to hide the shoe-horned messiness of the Tcl backend and 'prettify' the GUI frontend.
Worse is the other way around: a GUI control not exposed through the Tcl scripts, which makes it very difficult to maintain a Tcl build script that can be version controlled (unlike the project files; this particular IDE likes to completely rewrite its project files in a way that is not at all amenable to diffing).
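For what it's worth, the usual escape hatch is Vivado's non-project ("batch") mode, where the whole build is one Tcl script that diffs cleanly and there is no project file for the IDE to rewrite. A minimal sketch, assuming a VHDL design (the part number, paths, and top-module name are placeholders):

```
# build.tcl -- run with: vivado -mode batch -source build.tcl
# Non-project flow: no .xpr file for the IDE to rewrite.
file mkdir build

read_vhdl [glob src/*.vhd]                   ;# all RTL sources
read_xdc constraints/top.xdc                 ;# pin/timing constraints

synth_design -top top -part xc7a35tcpg236-1  ;# placeholder Artix-7 part
opt_design
place_design
route_design

report_timing_summary -file build/timing.rpt ;# generated output stays in a gitignored dir
write_bitstream -force build/top.bit
```

Everything in it is plain Tcl, so it version-controls like any other source file; the catch, as above, is the occasional option that only exists as a GUI checkbox.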
And the FPGAs need not even be as efficient in packing as Xilinx or Altera.
The thing that always gets me: you are already charging for HW; that's the only real business model. The CAD tools 1) suck, 2) aren't going to be a major money maker, and 3) desperately need UI design and programming talent well beyond whatever gets applied today.
Dear god is it ever. My fiancee, bless her, got me a Spartan 6 for Christmas one year. I never got around to using it because I couldn't find a download for the single, outdated version of (I think) Arduino Software it claimed it 100% required.
I'm interested in what you mean by this. I was working at Altera at the time the acquisition went through (on the software tools, mind you), but I didn't notice a significant shift in strategy. Are you referring to before or after the acquisition was announced?
Generally, during and after a merger, there is a lot of instability in a company as layoffs in comparable departments get figured out and company infrastructures get merged. I assume that on the software side, you were insulated from this instability since they were going to need to keep developing Quartus (or its replacement) either way and Intel had no equivalent.
I was an Altera customer before and during the merger (including after it was announced). It looked from the outside like sales teams and chip design teams had significant amounts of instability. I personally experienced months of delays getting Arria 10s (and associated FAE support) from our sales team immediately after the acquisition. Our new sales team from Intel (figured out 8 months later) was twice the size, had to push CPUs and Intel's software in addition to FPGAs, and was also dealing with the embarrassing situation around Stratix 10 delays.
The delays surrounding Stratix 10 were indeed embarrassing, but I'm not sure how much of that was due to the switch to the Intel foundry vs architectural features new to S10.
I can't speak at all for sales/FAE support, but I guess that was a target ripe for "synergy", or w/e they call it.
Basically, if AMD doesn't do anything to Xilinx and leaves it to continue on its own, both Xilinx and AMD will gain benefits from design cost reduction. Which is important, considering leading-edge design cost is forever increasing.
That's interesting, can you expand on this? I'm curious what could have impacted them that much (I'm a student about to finish my undergrad in comp eng)
It's not impossible, it's just very difficult, impacts every single part of the stack, and is very difficult to justify.
It is typically simply cheaper to deploy FPGAs in released products when the volume is small, while it may be cheaper to use full custom when the volume is in the millions to hundreds of millions, in the cases where either solution is functionally workable.
That includes amortizing the non-recurring engineering costs over the total units, which is typically higher for full custom than FPGA -- although sometimes they are actually in the same ballpark.
Aside from that you are correct; people sometimes imagine that most any application can be significantly accelerated with FPGAs, but even in the cases where fine-grained parallelism is present to be accelerated (well-known not to be the case for all application areas), the FPGA solution space is decreased by the solution space where full custom makes engineering and financial sense.
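Stated as a formula (a sketch; the symbols are mine, not from any reference):

\[ C_{\text{unit}}(N) = \frac{\mathrm{NRE}}{N} + c_{\text{marginal}} \]

For FPGAs the NRE term starts small but \(c_{\text{marginal}}\) stays high forever; for full custom the NRE term is huge but gets divided down by every unit shipped. That's the crossover being described above.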
Also, a specific potential use of FPGAs is pattern matching on large amounts of text/data: if you do it at all, you're likely to do it often, and it can't be a fixed custom implementation since the circuit depends on the specific pattern.
There are FPGA-SoCs out there that are mostly FPGA, and that seems like the way to go if you want to combine FPGAs and CPUs, otherwise they should probably be on separate power and heat budgets.
What we have is still a CPU and a GPU. Separate. Discrete. The inclusion changed nothing about what kind of software we can write or problems we can solve. Back when APUs were still just a slide in a powerpoint deck, there was a lot of talk about how this could "change everything" because of the radically different type of compute you could do on a GPU.
The closest thing to this is the coin mining craze, but everyone is gravitating towards the fastest hardware for that. And that's not your APU.
Although I don't see that happening soon, mostly because no one has an interest in doing so. What will most likely happen is that Apple gets so far ahead in this game that the others start to take notice and work on something.
On servers it is very different though. Tiles and chiplets will finally mean FPGAs can have their own node and optimisation while sitting next to the CPU, without all the complexity of trying to get them onto the same silicon. Although the cost of software optimisation for specific tasks will likely mean they only go to hyperscalers.
I was hoping for an FPGA-SoC with some Xeon cores when Intel bought Altera, and now I'm hoping for one with Epyc cores.
Would the chiplet differentiator for AMD impact implementation details in a significantly positive way?
I fully recognize that I may be speculating out my *s here, and would welcome further constructive comment even if it's just to say "Um, yeah, that's not how it works".
Is there any example of a FPGA provider delivering better products after acquisition?
EDIT: I'd wager that if the last administration hadn't blocked the sale of Lattice it would now be dead.
> any example of a FPGA provider
Are there lots of FPGA companies that have gotten acquired in the first place? Only Altera comes to my mind.
Would the Xilinx/AMD acquisition make Lattice the biggest independent FPGA vendor?
From the AMD side of things, it's just expanding their offerings, and maybe bringing in some know-how and technologies that could help boost/extend their existing products.
If you have Xilinx stock because you believe in what they're doing, predicting a bright future and possible dividends, while at the same time not seeing much of a future in the AMD64 platform, then why would you want AMD to buy the company?
Most stockholders don't care about the companies they invest in any more. It doesn't matter if they make FPGAs, cars or toaster ovens. What matters is that you can make a profit by buying a stock now and selling it at a higher price down the road. Those kinds of stockholders won't block an AMD takeover if the price is right.
Making a profit has been the point of buying and selling stocks literally since day one, when the Dutch East India Company issued publicly tradeable shares in 1602.
If you want to show your support for the vision of a company, buy their t-shirts. If you're buying their shares (typically not even from the company itself, but from another shareholder) for that reason, you're doing it wrong.
But why? Do we need more than 16 exabytes of memory?
Joking aside, we very well may one day have more memory available than 64 address lines can handle. RISC-V's spec does have a stub chapter for "RV128".
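For reference, the arithmetic behind those numbers:

\[ 2^{64} \text{ bytes} = 16 \text{ EiB} \approx 1.8 \times 10^{19} \text{ bytes} \]

and RV128's \(2^{128}\) addresses would square that, a number so far beyond anything buildable that it's no surprise the chapter is still a stub.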
I'm not sure if either AMD or Xilinx has sufficient operations in China to justify it, though.
Recent China history on mergers includes the failure to approve the Qualcomm/NXP merger:
"The companies had been waiting nearly two years for their deal to clear global regulatory hurdles. The takeover had been approved in eight other jurisdictions, including the European Union and South Korea, since it was announced in October 2016. China was the lone holdout. "