

Limitations within AI and its developments

To understand how AI will influence society going forward, it is crucial to know the constraints of current AI and the potential to overcome them. By observing results within AI research, the limitations are evaluated: is a limitation an impracticability, or is there potential to overcome it? In this chapter, some of the vital limitations of existing AI, and potential developments, are raised.

Narrow proficiency, narrow range of applications

While there are numerous examples of AI agents that exceed human proficiency in specific tasks, none of them can compete with the all-round reasoning of humans, or with the human ability to process abstract knowledge and phenomena.
Ford’s interviewees, in his book Architects of Intelligence, all mentioned key skills yet to be mastered by AI systems. Among the core skills were transfer learning, in which knowledge from one domain is applied to another, and unsupervised learning.
Transfer-learning failure, due to a lack of concept understanding, is a major setback when it comes to DL. Take DeepMind’s work with Atari games as an example. When the system played a brick-breaking Atari game, “after 240 minutes of training, [the system] realizes that digging a tunnel through the wall is the most effective technique to beat the game”. Marvellous, right? At first glance the result is promising, fast-evolving and successful because of a skilled combination of deep learning and reinforcement learning. However, the system has not really understood or learnt anything besides how to solve a secluded problem. The physical concept “tunnel” does not have a deeper meaning to the system than it did 240 minutes earlier.11 It was merely a statistical approach to solving a specific task. Researchers at Vicarious corroborate the lack of concept understanding through experiments on DeepMind’s Atari system A3C (Asynchronous Advantage Actor-Critic) playing the game Breakout. The system failed to proceed when minor perturbations were introduced, like adding walls or moving the paddle vertically.35 In Deep Learning: A Critical Appraisal, Marcus concludes that “It’s not that the Atari system genuinely learned a concept of wall that was robust but rather the system superficially approximated breaking through walls within a narrow set of highly trained circumstances”.

Limited memory

In supervised learning, the model training process consumes an ample set of training data, which is both memory- and computation-intensive. In reinforcement learning, many algorithms iterate a computation over a data sample and update the corresponding model parameters on every execution, until model convergence is achieved. One approach to improving the iterative computation is to store away, cache, the sample data and model parameters in memory. Memory, however, is limited and is therefore a constraint on efficiency.36
There are some approaches to the memory limitation, and they operate at three different levels: the application level, the runtime level and the OS level.
At the application level, one approach is to design external-memory algorithms that process data too large for the main memory. Part of the model parameters is cached in memory and the rest can be stored on disk (secondary storage, in contrast to main storage, RAM). The program can then execute with a smaller working set of model parameters in memory. However, this forces engineers to design correct external-memory algorithms, which is often easier said than done.
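As a minimal sketch of the application-level idea, NumPy’s memmap can keep the full parameter vector on disk while only a small working set is loaded into RAM for each update. The parameter count, chunk size and update rule below are invented for illustration:

```python
import os
import tempfile

import numpy as np

# Hypothetical sizes, chosen for illustration only.
N_PARAMS = 1_000_000          # full parameter vector (lives on disk)
CHUNK = 100_000               # working set kept in main memory

path = os.path.join(tempfile.mkdtemp(), "params.dat")

# Model parameters in secondary storage, accessed like an array.
params = np.memmap(path, dtype=np.float64, mode="w+", shape=(N_PARAMS,))
params[:] = 1.0
params.flush()

# One iterative update pass: load a chunk into RAM, update it, write it back.
STEP = 0.1                    # stand-in for a gradient step
for start in range(0, N_PARAMS, CHUNK):
    block = np.array(params[start:start + CHUNK])   # copy: the working set
    block -= STEP
    params[start:start + CHUNK] = block
params.flush()

print(float(params[0]), float(params[-1]))
```

Each pass touches one chunk at a time, so peak RAM usage is bounded by the working-set size rather than the full parameter vector, at the cost of disk I/O on every iteration.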
At the runtime level, a framework (e.g. Spark) supplies a set of transparent data sets through an application programming interface (API) that utilizes memory and disks to store data. The external-memory execution is not totally transparent and still demands working with the storage. The solution, however, relieves the burden of engineering exact algorithms.
At the operating-system level, paging can be implemented. Paging (i.e. data is stored in and retrieved from secondary storage for use in main memory) is only used when memory starts to become scarce; the approach is adaptive and transparent to the program. The method does not demand any additional programming. The limits of the solution are that the swap space used in paging is constrained, and that higher performance is difficult to achieve because it demands knowledge of virtual memory, which is often concealed under multiple system layers.37

Sample Data

The majority of ML methods rely on data labelled by humans, a major constriction for development. Professor Geoff Hinton has expressed his concern regarding DL’s reliance on large numbers of labelled examples. In cases where data are limited, DL is often not an ideal solution. Once again, take DeepMind’s work on board games and Atari as an example: a system needing millions of data samples for training. Another is PhysNet, which requires 100–200k training sets for tower-building scenarios with only 2–4 blocks28 (see 1.3). Contrast this with children, who can muster intuitive perception from just one or a few examples. This limitation is strongly linked to 2.2.1 Narrow proficiency and the fact that current DL often “lacks a mechanism for learning abstractions through explicit, verbal definition”.11 When it comes to learning complex rules, human performance surpasses artificial intelligence by far.
The inventors behind ImageNet claim that it took almost 50k people from 167 different countries, labelling close to a billion images over three years, to start up the project.39 A clear limitation to the efficiency of deep-learning methods.
Others claim that labelling data is not a problem. A lot of China’s success at AI originates from good data, but also from a cheap labour force.38 Having a large amount of data labelled by human hand has contributed to China’s recent promising AI results.39 This could be an inkling that labelled data need not amount to a restraint on development. Yet large amounts of sample data could of course restrict efficiency.

Limited knowledge in linked domains

Because AI is a multidisciplinary research area, including disciplines such as neuroscience and cognitive psychology, restraints in the connected sciences are also limitations on AI’s future.
A lot of prominent AI models and techniques, RL for example, are highly influenced by biology and psychology.
Attempting to map the human brain is a major project involving many researchers around the world. For example, the European Commission is funding 100 universities to map the human brain, the US has several ongoing projects, and China and Japan have also announced that they are taking on the attempt. Some of the challenges include the size and complexity of the brain’s composition. The brain consists of around 100 billion neurons in total. Scientists describe a specimen the size of a “grain of sand” as holding about 100k neurons and, among them, 100 billion connections. Data gathering is both heavily time- and memory-consuming. One example is the attempt by the Allen Institute to identify different cell types in the brain of a mouse. Image gathering for 1 mm³ took about 5 months, and the total map with all connections is approximated to take up to 5 years. The total data storage ended up being 2 petabytes, equalling 2 million GB, from 1 mm³ of brain.40
AI is used for segmenting the mouse brain after the Allen Institute finished imaging the brain sample. The machine-learning algorithm “can evaluate images pixel by pixel to determine the location of neurons”, according to the article How to map the brain, published in Nature. Segmentation by computers is far faster than by humans but has lower accuracy, so human monitoring is still required. The person behind the algorithm, Sebastian Seung, neuro- and computer scientist at Princeton University, is tackling the system’s mistakes through an online game, “Eyewire”. The players are challenged to correct the errors in the connectome. Since the launch of the game in 2012, about 290k users have registered, and together their gaming has produced work equal to 32 people working full time for a total of 7 years, according to Amy Robinson Sterling, executive director of Eyewire.41
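The pixel-by-pixel evaluation mentioned in the article can be illustrated with a deliberately simplified sketch, where a fixed intensity threshold stands in for the trained classifier; the toy image and threshold are invented for illustration:

```python
import numpy as np

# Toy "microscopy image": bright pixels stand in for stained neuron tissue.
image = np.array([
    [0.1, 0.2, 0.9, 0.8],
    [0.1, 0.7, 0.9, 0.2],
    [0.0, 0.1, 0.2, 0.1],
])

# A real system learns this per-pixel decision from data; here a fixed
# threshold stands in for the trained classifier.
neuron_mask = image > 0.5

print(int(neuron_mask.sum()))   # number of pixels classified as neuron
```

A learned model replaces the threshold with a far richer per-pixel decision, but the output has the same shape: one label per pixel, which is exactly what the human Eyewire players then proofread.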
Another problem with creating a connectome is that the brain is a constantly changing organ, continuously forming new connections and synapses.
If mapping the brain is the next goal on the road to progress in AI, something drastic needs to change in the technical approaches of neuroscience. Scientists say that either technology must develop so that more memory is available, or we have to change our way of thinking towards compressing information, in order to finish this work.
Two significant researchers emphasize the danger of getting too bound to the attempt to imitate the brain. In a panel chat by Pioneer Works, “Scientific Controversies No. 12: Artificial Intelligence”, hosted by astrophysicist Janna Levin, Director of Science at Pioneer Works, with guests Yann LeCun, Director of AI Research at Facebook and NYU professor, and Max Tegmark, physicist at MIT and Director of the Future of Life Institute, the scientists discuss the potential of replicating human brain function.42
Focusing too much on the brain is “carbon chauvinism” (the perception that carbon life forms are superior), Tegmark says during the discussion. Biology should inspire us, not be an object of direct replication, LeCun reasons. He also expresses his concern about how current systems are trained: “We train neural nets in really stupid ways,” he says, “nothing like how humans and animals train themselves.” In supervised learning, humans must label data and feed the machine thousands of examples. In image recognition this means labelling huge numbers of images of cats and feeding them to the system before the algorithm can correctly identify cats in images. That is nothing like how humans learn; the recurrent example is that of a child who can see only one cat and from that correctly identify cats in other images, while at the same time developing an abstract understanding of the concept “cat”.
Even though neural networks in deep learning have been a key factor in recent progress within the field, Tegmark and LeCun seem to regard the approach of exact brain-function replication as a dead end, and rather a limitation than an asset. Or as Tegmark says, “We’re a little bit too obsessed with how our own brain works”43, a fair point when we try to envision the first superhuman intelligence: why look at man when you try to create something beyond it?
Contrary to this view, a lot of neuroscientists think we are on the right path towards mapping our entire brain and predict a dramatic leap in understanding in 10–20 years. Neuroscientists call the available techniques the biggest restraint right now.40



Other

There are other aspects that may limit present artificial intelligence besides the technical ones and the lack of profound knowledge in linked domains.
Professor Gary Marcus raises the alarm about the culture in the research community and describes the climate as harsh and judgemental, with little room for questioning. According to Marcus, a DL researcher contacted him and confided that he wanted to write a paper on symbolic AI but refrained from doing so for fear of it affecting his future career.29 An important example of questioning is the test of the SQuAD model by Robin Jia and Percy Liang (see 1.3).30 Without questioning there is no chance to evaluate and refine.
In the book Architects of Intelligence (see 2.1) there was a distinct lack of consensus among the interviewed research elite concerning many aspects of the current state of AI and where it is heading.44 The absence of unity may be concerning, especially since it occurs among the leading researchers. If even the definition of what artificial intelligence is, how it should evolve and what is necessary to obtain the goals is in disagreement, will it then be possible to incubate the discipline in an effective way? Gary Marcus’s culture alarm and the disagreement among the prominent AI researchers may raise some serious red flags concerning the current state of the discipline. On the other hand, one could argue that without friction, questioning and discussion there would not be a healthy environment for the research to thrive. The dissensions among Ford’s interviewees (2.1) may not necessarily damage development but could result in different approaches to reaching achievements. Besides anthropomorphism possibly being a philosophical dilemma for the human apprehension of machines, it is also conceivable that it is affecting AI research. Getting too bound to the brain-imitation approach may impede other important inputs and attempts to develop AI.
In a panel chat, Superintelligence: Science or Fiction? | Elon Musk & Other Great Minds, at the Future of Life Institute, hosted by MIT professor Max Tegmark, the panel was asked about the timeline for superintelligence, in the case that super-human intelligence has already been invented. Futurist and inventor Ray Kurzweil phrased it: “Every time there’s an advance in AI we dismiss it”, continuing, “ ‘That’s not really AI. Chess, Go, self-driving cars’ and AI is, you know, a field of things we haven’t done yet. That will continue when we’ve actually reached AGI. There will be a lot of controversy. By the time the controversy settles down, we’ll realize it’s been around for a few years.”45
The phenomenon described by Kurzweil is called the AI effect. Advances in AI are sometimes dismissed by redefining, for example, what intelligence is. It has been said that DeepMind playing Atari games is not really intelligence, merely a statistical solution. The AI effect may contribute to underestimation and playing down of achievements within the field.

Conclusions and Summary

The limitations raised in this section include narrow proficiency, lack of memory, lack of training data, limited knowledge of how the brain works, and limited definitions of intelligence. I now summarise these in the order of importance that they appear (from my findings) to have.
The most serious concern regards transfer learning and the lack of ability to engender abstract knowledge. Present AI is highly skilled at specific tasks, often outcompeting humans by far, while producing disheartening results across a broader spectrum of skillsets. This leaves us with a very point-directed, skill-specific agent with a narrow set of proficiencies and a narrow range of applications. This is the most alarming limitation since it seems to be the hardest to solve. It would require new ways of perceiving intelligence, the sequence of learning and new methods of engineering.
Memory seems to be the second most worrying limit. Many ML and DL methods are extremely memory-consuming, with large data sets, model parameters and expensive iterations of the algorithms. Without memory, AI is limited in its ability to refine performance through, for example, reinforcement learning. At the same time, there are some promising existing methods for dealing with the problem. With existing approaches in place, there are opportunities to improve them or even find new ones.
Knowledge in linked domains is, from some perspectives, very limited. If the goal is to attain artificial general intelligence through brain imitation, we are in for some serious obstacles. The neuroscientific belief that there will be a dramatic leap in understanding in 10–20 years does not help AI within the coming decade, and the wish to create a connectome seems far away for now. However, the possibility of searching for solutions in other fields and finding other approaches is still present and may be the way of dodging the neuroscientific limitations. Cognitive science is still contributing to some interesting AI developments through reinforcement learning, like the new model (2019) from RMIT that exceeds Google DeepMind’s results when playing the Atari game Montezuma’s Revenge (see 1.4).
The least important limitation appears to be culture and differing outlooks. Social structures and influences are always evolving and there are numerous ways of influencing them. It is not a dead end in the same way as a lack of knowledge in linked areas of research.
Regardless of the promising results in the field, something profound is still missing. Given perfect circumstances, in an isolated environment with the goal of fulfilling a niched skill, the AI can impress and fulfil its purpose. However, the artificial intelligence available to us is one without the ability of deep representation and without a profound understanding of context and the world’s richness.

The Labour Market

Automation and digitalisation have radically changed the foundation of society numerous times: the industrial revolution, the IT boom, and now the new era, Industry 4.0, a subset of the industrial revolution. In this section, AI in today’s work is explained, along with observations on how AI is predicted to affect current occupations.
Predictions of technological mass unemployment have so far been exaggerated and have not come to pass. Introducing machines as substitutes for human labour has so far resulted in the market adjusting through increased efficiency, leading to decreasing production and product costs and increasing demand for new labour. Substituting labour with technology first results in a shift of work, demanding that workers relocate their labour. Then, in the end, employment expands as companies enter productive industries and require more workforce.46 This has been the rule of supply and demand on the market so far; however, it is not a law of nature, and the question is whether the same holds in the coming decade. In the following section, the capacity of AI regarding labour is examined.

Table of contents:

1.1 What’s artificial intelligence?
1.1.1 Machine learning and Deep learning
1.1.2 Physics and Psychology in learning models
1.1.3 Philosophical aspects and problems
2.1 Future AI
2.2 Limitations within AI and its developments
2.2.1 Narrow proficiency, narrow range of applications
2.2.2 Sample Data
2.2.3 Limited memory
2.2.4 Limited knowledge in linked domains
2.2.5 Other
2.2.6 Conclusions and Summary
3.1 The Labour Market
3.1.1 Technical capacity of AI and correlation to labour
3.1.2 Predictions on employment
3.2 Society and economy
3.2.1 US Silicon Valley vs China
3.2.2 Social class & distribution of power
3.2.3 Risks, integrity, facial recognition, surveillance
3.2.4 New jobs and conclusions
4.1 Outlook
4.1.1 Approaches
4.2 The Next Decade & Conclusions

