Single chip computing power
The types of chips that provide computing power for AI include GPUs, FPGAs, and ASICs.
A GPU is a microprocessor specialized for graphics computation on personal computers, workstations, game consoles, and some mobile devices (such as tablets and smartphones). It is similar to a CPU, except that a GPU is designed to perform the complex mathematical and geometric calculations that graphics rendering requires.
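To make the CPU/GPU contrast concrete, here is an illustrative sketch (plain Python, not real GPU code, and the function name is hypothetical): the kind of workload a GPU accelerates is the same arithmetic applied independently to many data elements, such as every vertex in a 3D scene.

```python
# Illustrative sketch: the data-parallel workload GPUs are built for.

def transform_vertices(vertices, scale):
    # On a CPU this loop runs largely sequentially; a GPU would assign
    # each vertex to one of thousands of small cores and process them
    # all in parallel, since no vertex depends on another.
    return [(x * scale, y * scale, z * scale) for (x, y, z) in vertices]

verts = [(1.0, 2.0, 3.0), (4.0, 5.0, 6.0)]
print(transform_vertices(verts, 2.0))  # → [(2.0, 4.0, 6.0), (8.0, 10.0, 12.0)]
```

Because each element is independent, the same pattern is what makes GPUs effective for the matrix arithmetic in deep learning, not just graphics.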
An FPGA can implement the function of virtually any digital chip; even a high-performance CPU can be built on an FPGA. In 2015, Intel acquired Altera, the FPGA industry leader, for US $16.7 billion, in part to develop the FPGA's specialized computing power for artificial intelligence.
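The reason an FPGA can take on "any digital device function" is that it is a grid of small reprogrammable lookup tables (LUTs) wired together. A rough sketch of the idea, simulated in Python (the `make_lut` helper is hypothetical, for illustration only): the same LUT hardware becomes a 1-bit full adder simply by loading a different truth table.

```python
# Hedged sketch: simulate "configuring" a 3-input LUT, the basic
# building block of FPGA logic fabric.

def make_lut(truth_table):
    """Return a function that looks up its input bits in a configured table."""
    return lambda *bits: truth_table[bits]

bits3 = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]

# Two LUT configurations that together realize a 1-bit full adder.
sum_lut = make_lut({(a, b, c): a ^ b ^ c for (a, b, c) in bits3})
carry_lut = make_lut({(a, b, c): (a & b) | (c & (a ^ b)) for (a, b, c) in bits3})

# 1 + 1 + carry-in 0 = binary 10: sum bit 0, carry bit 1
print(sum_lut(1, 1, 0), carry_lut(1, 1, 0))  # → 0 1
```

Reloading the tables turns the same fabric into different logic, which is why one FPGA can stand in for many different chips.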
An ASIC (application-specific integrated circuit) is an integrated circuit designed and manufactured to the requirements of a specific user or a specific electronic system. Strictly speaking, an ASIC is a special-purpose chip, different from a traditional general-purpose chip: it is designed for one particular need. The TPU that Google recently unveiled for AI deep-learning computation is also an ASIC.
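What a deep-learning ASIC like the TPU hard-wires is essentially one operation: the multiply-accumulate at the core of neural-network inference, typically on low-precision inputs with a wider accumulator. A minimal sketch of that workload in plain Python (illustrative only, not TPU code):

```python
# Hedged sketch: the integer matrix multiply that a TPU's
# multiply-accumulate (MAC) array executes in hardware.

def matmul_int8(a, b):
    """Multiply two integer matrices; inputs would be 8-bit on a TPU."""
    rows, inner, cols = len(a), len(b), len(b[0])
    out = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            acc = 0  # the accumulator is wider than the 8-bit inputs
            for k in range(inner):
                acc += a[i][k] * b[k][j]
            out[i][j] = acc
    return out

print(matmul_int8([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # → [[19, 22], [43, 50]]
```

Because the chip does only this, it can devote its entire silicon area to MAC units, which is where an ASIC's speed and energy advantage over a general-purpose chip comes from.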
Extended information:
Chips are also called integrated circuits. By function they fall into many kinds: some control power-supply voltage output, some process audio and video, and some handle complex computation. An algorithm can only run with the help of a chip, and because each chip's computing power differs across scenarios, an algorithm's processing speed and energy consumption differ too. With the artificial-intelligence market developing rapidly, people are looking for chips that let deep-learning algorithms run faster and at lower energy cost.
This question is quite specialized, but to the best of my knowledge I can answer it. Let's first introduce some basic knowledge about the field of chip manufacturing.
I think this kind of question mostly comes from concern about China's semiconductor manufacturing industry. China's 14 nm process is already in production and operation, but compared with semiconductor giants like TSMC there is still a big gap at 7 nm and 5 nm. Hence the question: can 14 nm be used instead of 5 nm? The starting point is understandable, but the charm of science lies in constantly exploring limits and unknowns; only by continuing to climb can we understand the world more deeply and improve productivity.

On September 18, Huawei released a heavyweight product, the Atlas 900. Drawing on Huawei's decades of accumulated technology, it is the fastest AI training cluster in the world, composed of thousands of Ascend processors. On ResNet-50 model training, the gold-standard benchmark of AI computing power, the Atlas 900 completed training in 59.8 seconds, 10 seconds faster than the previous world record.
"imagenet-1k data set" contains 1.28 million images, with an accuracy of 75.9%. Under the same accuracy, the test results of the other two mainstream manufacturers in the instry are 70.2s and 76.8s respectively, and the atlas 900 AI training cluster is 15% faster than the second. Hu houkun said: the powerful computing power of atlas 900 can be widely used in scientific research and business innovation. For example, astronomical exploration, oil exploration and other fields all need to carry out huge data calculation and processing. Originally, it may take several months, but now atlas 900 is just a matter of seconds. The thousands of integrated shengteng processors in atlas 900 are the commercial shengteng 910 some time ago