ASIC stands for Application-Specific Integrated Circuit, quite a mouthful. Most people have encountered them as ASIC miners for Bitcoin or other Proof of Work cryptocurrencies. To understand them, it helps to look at the history of the other chips you can mine with: CPUs and GPUs.
CPU (Central Processing Unit) is the long-standing term for the "brain" of the computer. It is the set of circuits that does the arithmetic, and it must be able to perform every possible computation in order to be "Turing complete", the property we use to define a general-purpose computer.
CPUs can do anything, and they must, or your computer wouldn't be able to run certain programs. That requirement limits their ability to specialize: quite literally, humanity can only fit so many transistors per square inch at a time. If we build a chip with more pathways dedicated to one type of computation, those pathways can only come at the expense of others.
It wasn't until the 80s and 90s that we started to get GPUs, then (and technically still) called Graphics Processing Units. Modern GPUs are really "GPGPUs", or "General-Purpose Graphics Processing Units", because they have become good at far more than graphics over time, but we keep the original name for brevity's sake.
GPUs had an early life as the often forgotten "Floating Point Unit". Aptly named, FPUs were specialized for calculations on floating point numbers, that is, numbers with decimal points. Once personal computers and video games took off, however, these chips were increasingly used for graphics, since video rendering involves a great deal of floating point operations.
We have this history to thank for the ubiquity of GPUs in the world, and they are a great tool for decentralized hashpower. But when you go "all the way", making a chip where every single transistor is designed to do one thing and one thing only, you've made an ASIC. ASICs can be built at the same transistor density as any other chip, so an ASIC will always be the fastest way to perform a computation unless that computation was specifically engineered to resist being done on such a chip, and even then some clever person might still find a way. The only machine faster at a particular task than an ASIC is another, better ASIC. In many ways, the arrival of ASICs can be seen as inevitable, or as a sign of a network's success.
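To make the idea concrete, here is a minimal sketch of the kind of computation a mining ASIC hardwires: repeatedly hashing a block of data with a changing nonce until the result falls below a difficulty target. This is an illustrative toy, not Bitcoin's actual header layout; the function name, header bytes, and difficulty numbers are made up for the example.

```python
import hashlib

def mine(header, difficulty_bits, max_nonce=2**20):
    """Brute-force search for a nonce whose double-SHA-256 hash is
    below the target. A Bitcoin-style ASIC does nothing but this loop,
    millions of times in parallel, directly in silicon."""
    target = 2 ** (256 - difficulty_bits)  # smaller target = harder puzzle
    for nonce in range(max_nonce):
        data = header + nonce.to_bytes(4, "little")
        digest = hashlib.sha256(hashlib.sha256(data).digest()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce  # found a valid proof of work
    return None  # no valid nonce in this search range

# Toy difficulty so the search finishes quickly on a CPU:
nonce = mine(b"example block header", difficulty_bits=8)
```

A CPU runs this loop through its general-purpose arithmetic units; a GPU runs many copies of it at once; an ASIC bakes the double-SHA-256 pipeline itself into the transistors, which is why nothing else can compete on this one task.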
Some people see ASICs as a problem; others see them as a solution. It may even be preferable to choose an algorithm that ASICs are easy to develop for. We're generally Network Neutral: there are tradeoffs to everything, and the best solutions come when the part is the best fit for the project. ASIC-hard algorithms tend to demand more validation time from full nodes, and may require developers to periodically update the algorithm, which means a hard fork. GPU-friendly algorithms, on the other hand, can draw on more globally distributed hardware. It's up to the choices and goals of each development team and community to decide.