A New World Order That Is Not New For All
Dr. Cecilia Waismann, VP R&D MindCET
“Elon Musk warns A.I. could create an ‘immortal dictator from which we can never escape’” (CNBC, Nov. 2018); “Reddit Co-founder Mocks Elon Musk’s Warnings About AI: But then he issued a warning of his own” (Futurism, Dec. 2018); “How we can prepare for catastrophically dangerous AI and why we can’t wait” (Gizmodo, Dec. 2018); “Bill Gates: A.I. can be our friend” (CNBC, Feb. 2018).
The year 2018 witnessed a continuous debate fueled by alarming calls for a public “scare” about the latest developments in Artificial Intelligence (AI). A significant number of headlines, especially in non-scientific media, were clearly worded to provoke dismay and apprehension. The big question is whether this reflects a genuine civic effort to raise public awareness, an irresponsible abuse of media power over concerns taken out of context, or a real anxiety about our growing uncertainty over the future. In any case, AI has become a center of attention, provoking skeptical reactions toward a constant stream of remarkable technological developments.
However, I believe that the truest response to such headlines lies in our actual adoption of smart systems, in the seamless and almost inadvertent integration of emerging technologies into our daily habits, leaving the headlines to our conceptual and moral debates. Moreover, the strongest reaction we currently observe is the younger generations’ confident know-how: a natural attitude toward the ongoing technological revolution, marked not by awe but by the assumption of a new world order that is not new for them! We are witnessing the birth of a much more inclusive natural environment.
Let us try to look a bit closer into it.
Humans’ and machines’ symbiotic process despite apocalyptic scares
Emerging technologies are quickly becoming more intelligent, bringing machines and humans closer together. A significant milestone came in 2016, when Google’s program AlphaGo played against Lee Sedol, the 18-time world champion of Go, a game widely considered the most complex and intellectually challenging board game in existence. AlphaGo made moves unthinkable for a human player, leaving everyone in total bewilderment; even its own programmers reacted to the gameplay with the surprise and emotion of mere spectators. The event exemplified a stage of development at which smart machines can not only process non-human amounts of data at non-human speed, but also behave with a non-human, and perhaps unpredictable, intelligence.
Mixed reactions emerged. On one hand, significant technological developments swept through many industries; on the other, there was a burst of frightening predictions, even from early advocates of AI. The image of an apocalyptic Terminator, a super-intelligent monster that will take over jobs, lives, and even the world, began to take root in public perception. Alongside it came a sense of disconnection from our own relevance and power over these developments, almost as if we bore no responsibility for them and were instead only passive user-victims.

We sometimes forget that many of our modern conveniences were initially met with similar fear and dissent, and only much later with an acknowledgment of our own responsibility. Any technological development should be expected to cut both ways. Take plastic, a major development of the 20th century: it took half a century to truly revolutionize industries, and it ended up helping inhibit bacterial spread, lowering mortality rates, and creating new human habits. At the same time, humans’ irresponsible and excessive use of plastic contributed, and still contributes, to the destruction of the earth’s environment, quickly becoming a danger to our planet’s ecosystem and to the survival of species. Does that make plastic a monster? Plastic itself is not inherently evil, but our use of it has become dangerous; so who is at fault? To what extent is AI development any different from other technologies? Perhaps the answer lies in the “intelligence” element, the trait by which we distinguish ourselves from the other beings of our ecosystem. AI shares so many basic commonalities with humans that it may require a new definition of “being.”
A much more inclusive natural environment
The generational gaps we currently face span only a few years, yet they produce significant differences in how the world around us is perceived. The youngest generations have incorporated even the most sophisticated technologies as a natural part of their environment, and have become almost immune to novelty. Some explain this by the uncontrolled overload of information they have learned to digest; others by the reach and accessibility of their online navigation; still others by the constant, overriding virtual connections that displace (or place anywhere) us all. Whatever the reason, smart machines are a natural part of the current landscape, one that the unstoppable race of development keeps redefining as dynamic, ever-changing, and unpredictable.
Are we ready to “educate” the current generation?
In such a scenario, it is no surprise to see educational systems in crisis, standing on shaky ground without a clear vision of how, or what role, they should play. Educators perceive the growing generational gap as almost debilitating!
Of all industries, educational systems probably face the widest generational gap, owing to their incapacity to understand and adapt to the new landscape, even while they still hold a major, defining role in younger generations’ lives. The challenges are many and the solutions not clearly defined. Artificial intelligence is an enabler that can help us change and improve our educational systems. As Ray Kurzweil argues, the melding of humans and machines resulting from the singularity and the growth of AI will be a significant enabler: “As machines become more intelligent, humanity will also grow to become smarter.”
We educators should be aware of the risks and opportunities of AI and help develop a generation that uses technology responsibly. Smart systems, as enablers of new educational solutions, could narrow generational gaps and help educational systems meet the needs of a generation that is deeply desensitized to, and frustrated with, the existing ones. This generation is abandoning educational systems in favor of the far more attractive and rich Internet, which, it could be argued, is becoming a parallel educational system. Is that the future educators wish to see?