We need to ensure that AI benefits everyone, but to achieve that, the industry must overcome existing theoretical and technological challenges.
Zhou Hong, President of the Institute of Strategic Research, Huawei
As we move towards an intelligent world, information sensing, connectivity, and computing are becoming key, as is the deeper understanding and control of matter, phenomena, life, and energy that these technologies enable. This makes it critical to rethink our approaches to networks and computing in the coming years.
In terms of networks, Claude Shannon proposed his theorems roughly 75 years ago based on three assumptions: discrete memoryless sources, classical electromagnetic fields, and simple propagation environments. Since then, however, the industry has continued to push the boundaries of his work.
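As a reference point for the milestones below (a standard statement of the result, with B, S, and N denoting bandwidth, signal power, and noise power in my notation, not the article's): under those classical assumptions, the Shannon-Hartley theorem caps the error-free rate of a single channel at

C = B \log_2\!\left(1 + \frac{S}{N}\right)

Each of the advances below works around this ceiling by multiplying the number of effectively independent channels, rather than by beating the per-channel bound.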
In 1987, Jim Durnin discovered non-diffracting beams, now known as Bessel beams, whose self-healing property allows them to reconstruct themselves and continue propagating after encountering an obstruction.
In 1992, L. Allen et al. showed that the spin and orbital angular momentum of an electromagnetic field admit infinitely many orthogonal quantum states along the same propagation direction, and that each such state can carry its own Shannon-capacity channel.
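To make the capacity implication concrete (an illustrative sketch in my own notation, not a claim from Allen's paper): if N orthogonal angular-momentum states can be multiplexed along one propagation direction, each forming an independent channel with signal-to-noise ratio \mathrm{SNR}_i, the aggregate capacity becomes

C_{\text{total}} = \sum_{i=1}^{N} B \log_2\!\left(1 + \mathrm{SNR}_i\right)

so capacity grows roughly linearly with the number of usable states, not merely logarithmically with transmit power.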
After AlphaGo emerged in 2016, people realized how well foundation models can describe a world shaped by prior knowledge. This means that much real-world information is neither discrete nor memoryless, contrary to Shannon's first assumption.
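One textbook way to see why this matters (my illustration, not part of the original argument): for a stationary source with memory, the information that actually has to be transmitted per symbol is the entropy rate

H(\mathcal{X}) = \lim_{n \to \infty} H\!\left(X_n \mid X_{n-1}, \dots, X_1\right)

which can sit far below the marginal entropy H(X_n) when symbols are strongly correlated. A model that captures this prior knowledge lets both ends of a link avoid sending what the other side can already infer.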
With the large-scale deployment of 5G Massive MIMO beginning in 2018, it became possible to establish multiple independent propagation channels in complex urban environments with tall buildings, multiplying communications capacity.
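The capacity gain from those independent channels can be sketched with standard MIMO theory (again in my notation): the channel matrix H between the transmit and receive antenna arrays decomposes, through its singular values \sigma_i, into r = \operatorname{rank}(H) parallel sub-channels, giving

C = \sum_{i=1}^{r} B \log_2\!\left(1 + \frac{p_i \sigma_i^2}{N_0 B}\right)

where p_i is the power allocated to sub-channel i and N_0 is the noise spectral density. Rich scattering off tall buildings is precisely what raises the rank r, which is why dense urban environments can multiply capacity rather than degrade it.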
These new phenomena, knowledge, and environments are helping us move beyond the assumptions that underpin Shannon's theorems. Building on them, I believe we can achieve a more than 100-fold improvement in network capabilities in the next decade.
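For a sense of how such a figure could be reached (purely illustrative, hypothetical factors, not a forecast): because the gains above multiply across independent dimensions, even modest individual factors compound, for example

4 \,(\text{bandwidth}) \times 8 \,(\text{spatial streams}) \times 4 \,(\text{angular-momentum modes}) = 128

which already exceeds 100 without any single dimension improving by more than an order of magnitude.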
In computing, intelligent applications are developing rapidly, and AI models in particular are likely to help solve the fragmentation problems that currently hold AI application development back. This is driving exponential growth in model size. Academia and industry have already begun exploring the use of AI in domains such as software programming, scientific research, theorem verification, and theorem proving. With more powerful computing models, more abundant computing power, and higher-quality data, AI will be able to better serve social progress.