
Single chip computing power

Publish: 2021-03-28 00:59:25
1. As the name suggests, a mining card is a graphics card that has been used for mining, or more precisely, one that has run under heavy load for a long time. A card used for mining typically works at full load 24 hours a day for months on end, which accelerates the aging of the PCB and the electronic components and shortens their life. Even if we play games 8 hours a day and ignore the card's idle time, a mining card's life is only about one third that of a normal graphics card; in general, a mining card may only last a few more months.
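A rough way to see where the "one third" figure comes from, assuming wear simply scales with hours under full load (a simplification, not a precise lifetime model):

```python
# Rough wear comparison: mining (24 h/day) vs. gaming (8 h/day).
# Assumes component wear scales linearly with hours under full load,
# which is a simplification, not an exact lifetime model.

mining_hours_per_day = 24
gaming_hours_per_day = 8

wear_ratio = mining_hours_per_day / gaming_hours_per_day
print(f"A mining card accumulates load roughly {wear_ratio:.0f}x faster,")
print(f"so its expected lifespan is about 1/{wear_ratio:.0f} of a gaming card's.")
```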
2.

The types of chips that provide computing power for AI include the GPU, the FPGA and the ASIC.

A GPU is a microprocessor that specializes in image and graphics operations on personal computers, workstations, game consoles and some mobile devices (such as tablets and smartphones). It is similar to a CPU, except that the GPU is designed to perform the complex mathematical and geometric calculations that graphics rendering requires.
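As a rough illustration of the kind of uniform, data-parallel math a GPU is built for, the sketch below applies one rotation to a million 3-D points in a single operation; NumPy on the CPU is used purely as a stand-in, this is not GPU code:

```python
import numpy as np

# Illustrative only: apply one rotation matrix to many 3-D points at once.
# GPUs accelerate exactly this kind of uniform, data-parallel math;
# NumPy on the CPU is used here only to show the pattern.

points = np.random.rand(1_000_000, 3)          # one million vertices
angle = np.radians(30)
rotation_z = np.array([
    [np.cos(angle), -np.sin(angle), 0.0],
    [np.sin(angle),  np.cos(angle), 0.0],
    [0.0,            0.0,           1.0],
])

rotated = points @ rotation_z.T                 # one matrix multiply, all points
print(rotated.shape)                            # (1000000, 3)
```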

An FPGA can implement the function of almost any digital device; even a high-performance CPU can be implemented with an FPGA. In 2015, Intel acquired Altera, the leading FPGA vendor, for about US$16.7 billion; one of its purposes was to develop the FPGA's specialized computing power for the field of artificial intelligence.

ASIC refers to integrated circuits designed and manufactured to the requirements of a specific user or a specific electronic system. Strictly speaking, an ASIC is a special-purpose chip, as opposed to a traditional general-purpose chip: it is designed for one particular need. The TPU that Google recently disclosed for AI deep-learning computation is also an ASIC.

Extended data:

Chips are also called integrated circuits. By function they can be divided into many kinds, including those responsible for power-supply voltage regulation, audio and video processing, and complex computation. An algorithm can only run with the help of a chip, and because each chip delivers different computing power in different scenarios, the algorithm's processing speed and energy consumption also differ. With the artificial-intelligence market developing rapidly, people are looking for chips that can run deep-learning algorithms faster and with lower energy consumption.

3.

This question is quite specialized, but based on what I know I can take a stab at it. Let's first introduce some basic knowledge about the field of chip manufacturing.

I think this kind of question mostly comes from concern about China's semiconductor manufacturing industry. China's 14 nm process is already in production and operation domestically, but compared with semiconductor giants like TSMC there is still a big gap at 5 nm and 7 nm. So a question like this may arise: can 14 nm be used instead of 5 nm? The starting point is good, but the charm of science lies in constantly exploring limits and unknowns; only by continuing to climb can we understand the world more deeply and improve productivity.

4.
1) At present, intelligent speakers do their NLP in the cloud, because the knowledge graph and computing power that a question-answering system needs cannot be provided locally.
2) Most current speakers use A7- and A53-class chips.
3) Judging from the local Home SDK released by Google and the Xiao Ai assistant released by Xiaomi, an A53 has no problem running ASR locally, and some simple, limited-domain NLP that executes the corresponding answer or command can be expected.
4) For a robot vacuum that only needs simple command words, an A7 or A53 is sufficient.
5) The requirements on the main control chip are mostly set by the application scenario: the required accuracy and noise robustness determine the chip. In a low-power scenario, such as the wake-word and command-word functions of TWS earphones, an Ambiq Micro Apollo 2/3 is enough. If the robot vacuum is not cost-sensitive and the performance requirements are high (the environment is very noisy), a general-purpose MCU is not necessarily suitable, and an A7 or A53 can be considered; a sketch of this local-versus-cloud split follows below.
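A minimal sketch of that local-versus-cloud split, assuming a hypothetical device that matches a fixed set of command words on the chip and forwards everything else to a cloud NLP service (the command list, send_to_cloud_nlp and the device actions are illustrative placeholders, not a real smart-speaker API):

```python
# Hypothetical routing between on-device command words and cloud NLP.
# The command list, send_to_cloud_nlp() and the actions are illustrative
# assumptions, not a real smart-speaker API.

LOCAL_COMMANDS = {
    "turn on the light": "light_on",
    "turn off the light": "light_off",
    "start cleaning": "vacuum_start",
    "stop cleaning": "vacuum_stop",
}

def send_to_cloud_nlp(utterance: str) -> str:
    """Placeholder for a cloud question-answering / NLP request."""
    return f"[cloud] interpreting: {utterance!r}"

def handle_utterance(utterance: str) -> str:
    text = utterance.strip().lower()
    if text in LOCAL_COMMANDS:
        # Cheap, low-latency path: an A7/A53-class chip can match a fixed
        # command set locally without any network round trip.
        return f"[local] executing action: {LOCAL_COMMANDS[text]}"
    # Open-ended questions need the knowledge graph and compute in the cloud.
    return send_to_cloud_nlp(text)

if __name__ == "__main__":
    print(handle_utterance("Turn on the light"))
    print(handle_utterance("What is the weather in Beijing tomorrow?"))
```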
5. Let me explain: Tron is the world's largest decentralized blockchain application operating system, founded by Justin Sun (Sun Yuchen), and it has become the only Chinese-founded virtual currency among the top ten by market value. At the beginning of 2021, the total number of users on the Tron public chain exceeded 21 million, making it the only Chinese-founded chain among the world's three major public chains, with 1.5 billion transactions in total and the largest DApp ecosystem in the world. Did you know that?
6.

On September 18, Huawei released a heavyweight product, the Atlas 900, which brings together decades of Huawei's technology accumulation. It is the fastest AI training cluster in the world and is composed of thousands of Ascend processors. On ResNet-50 training, the gold-standard benchmark of AI computing power, the Atlas 900 completed training in 59.8 seconds, about 10 seconds faster than the previous world record.

The ImageNet-1K dataset used contains 1.28 million images, and the training reached 75.9% accuracy. At the same accuracy, the results of the other two mainstream vendors in the industry were 70.2 s and 76.8 s respectively, so the Atlas 900 AI training cluster is about 15% faster than the second-fastest. Hu Houkun said that the powerful computing power of the Atlas 900 can be applied widely in scientific research and business innovation; fields such as astronomical exploration and oil exploration all need to compute and process huge amounts of data, and work that used to take months is now a matter of seconds for the Atlas 900. The thousands of processors integrated in the Atlas 900 are the commercial Ascend 910 announced some time ago.

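A quick back-of-the-envelope check of the figures quoted above (nothing here is new data; it only recomputes the percentages from the times already given):

```python
# Back-of-the-envelope check of the quoted ResNet-50 training times.
atlas_900 = 59.8          # seconds, Atlas 900
second_best = 70.2        # seconds, next-fastest vendor quoted above
third = 76.8              # seconds, third result quoted above

# "About 15% faster than the second": time saved relative to the second-best run.
speedup_vs_second = (second_best - atlas_900) / second_best
print(f"Atlas 900 vs. second-best: {speedup_vs_second:.1%} less time")   # ~14.8%
print(f"Atlas 900 vs. third:       {(third - atlas_900) / third:.1%} less time")
```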
7. It's not from Japan, but Japan follows the trend.
8. This is mainly related to your own design and has nothing to do with the FPGA chip itself. You need to do a detailed analysis of the design task in advance. For example, if the internal processing clock frequency is not high enough, or the FIFO read/write scheduling is not fast enough, the processing speed will certainly not be high.
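As a rough illustration of how the internal clock, the data-path width and the FIFO scheduling together bound throughput, here is a toy estimate; all numbers are made-up example values, not measurements of any particular FPGA design:

```python
# Rough throughput bound for a streaming FPGA data path.
# All figures are illustrative assumptions, not real measurements.

clock_hz = 100e6            # internal processing clock: 100 MHz (assumed)
bits_per_cycle = 32         # data-path / FIFO width per clock (assumed)
fifo_duty = 0.8             # fraction of cycles the FIFO actually moves data
                            # (poor read/write scheduling lowers this)

throughput_bps = clock_hz * bits_per_cycle * fifo_duty
print(f"Usable throughput: {throughput_bps / 1e9:.2f} Gbit/s")

# Raising the clock, widening the data path, or improving FIFO scheduling
# (a higher duty factor) are the levers that raise this bound.
```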
9. Voice processing technology, from pre-processing (noise reduction) through ASR, NLP and TTS, all needs computing power. The trend is for the ASR, NLP and TTS that currently run in the cloud to move to the device side, where they can already be used for command-word recognition, free-form speech and other applications.
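A minimal sketch of that pipeline order, with every stage stubbed out; the function names are placeholders, not a real speech SDK:

```python
# Stubbed voice pipeline: pre-processing -> ASR -> NLP -> TTS.
# Every stage here is a placeholder; a real system would call a speech/NLP
# engine running either on the device or in the cloud.

def denoise(audio: bytes) -> bytes:
    return audio                              # noise reduction (stub)

def asr(audio: bytes) -> str:
    return "turn on the light"                # speech -> text (stub)

def nlp(text: str) -> str:
    return f"ack: {text}"                     # text -> intent/response (stub)

def tts(text: str) -> bytes:
    return text.encode()                      # text -> audio (stub)

def handle(audio: bytes) -> bytes:
    return tts(nlp(asr(denoise(audio))))

print(handle(b"\x00\x01"))
```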
10. It doesn't make much sense to compare transistor counts alone; in terms of actual performance, the processing efficiency is 5 to 10 times higher.