We are witnessing a professional revolution in which the boundaries between human and machine gradually fade away, giving rise to innovative forms of collaboration.
Photo by Mateusz Kitka (Pexels)
As Artificial Intelligence (AI) continues to advance by leaps and bounds, it’s impossible to overlook the profound transformations that this technological revolution is imprinting on the professions of the future. A paradigm shift is underway, redefining not only the nature of work but also how we conceptualize collaboration between humans and machines.
As the creator of the ETER9 Project, I perceive AI not only as a disruptive force but also as a powerful tool for shaping a more efficient, innovative, and inclusive future. As we move forward in this new world, it’s crucial that each of us contributes to building a professional environment that celebrates the interplay between humanity and technology, where the potential of AI is realized for the benefit of all.
In the ETER9 Project, dedicated to exploring the interaction between artificial intelligences and humans, I have gained unique insights into the transformative potential of AI. Reflecting on the future of professions, it’s evident that adaptability and a profound understanding of technological dynamics will be crucial to navigate this new landscape.
From at least 1995 to 2010, I was seen as a lunatic simply because I was preaching the “Internet prophecy.” I was considered crazy!
Today history repeats itself, but I’m no longer crazy — there are now too many of us for it all to be a hallucination. Or maybe it’s a collective hallucination!
Artificial Intelligence (AI) is no longer a novelty — I even believe it may have existed in its fullness in a very distant and forgotten past! Nevertheless, it is now the topic of the moment.
Its genesis lies in antiquity, with stories and rumors of artificial beings endowed with intelligence, or even consciousness, by their creators.
Pamela McCorduck (1940–2021), an American author of several books on the history and philosophical significance of Artificial Intelligence, astutely observed that the root of AI lies in an “ancient desire to forge the gods.”
Hmmmm!
It’s a story that continues to be written! There is still much to be told; however, the acceleration of its evolution is now exponential. So exponential that I highly doubt human beings will be able to comprehend their own creation in a timely manner.
Although the term “Artificial Intelligence” was coined in 1956(1), the concept of creating intelligent machines dates back to antiquity. Since ancient times, humanity has nurtured a fascination with building artifacts that could imitate or reproduce human intelligence. Although the technologies of the time were limited and the notions of AI were far from developed, ancient civilizations explored, in their own way, the concepts of automatons and automated mechanisms.
For example, in Ancient Greece, there are references to stories of automatons created by skilled artisans. These mechanical creatures were designed to perform simple and repetitive tasks, imitating basic human actions. Although these automatons did not possess true intelligence, these artifacts fueled people’s imagination and laid the groundwork for the development of intelligent machines.
Throughout the centuries, the idea of building intelligent machines continued to evolve, driven by advances in science and technology. In the 19th century, scientists and inventors such as Charles Babbage and Ada Lovelace made significant contributions to the development of computing and the early concepts of programming. Their ideas paved the way for the creation of machines that could process information logically and perform complex tasks.
It was in the second half of the 20th century that AI, as a scientific discipline, began to establish itself. With the advent of modern computers and increasing processing power, scientists started exploring algorithms and techniques to simulate aspects of human intelligence. The first experiments with expert systems and machine learning opened up new perspectives and possibilities.
Everything has its moment! After about 60 years in a latent state, AI is starting to have its moment. The power of machines, combined with the Internet, has made it possible to generate and explore enormous amounts of data (Big Data) using deep learning techniques, based on the use of formal neural networks(2). A range of applications in various fields — including voice and image recognition, natural language understanding, and autonomous cars — has awakened the “giant”. It is the rebirth of AI in an ideal era for this purpose. The perfect moment!
Descartes once described the human body as a “machine of flesh” (similar to Westworld); I believe he was right, and it is indeed an existential paradox!
We, as human beings, will not rest until we unravel all the mysteries and secrets of existence; it’s in our nature!
The imminent integration between humans and machines in a contemporary digital world raises questions about the nature of this fusion. Will it be superficial, or will we move towards an absolute and complete union? The answer to this question is essential for understanding the future that awaits humanity in this era of unprecedented technological advancements.
As technology becomes increasingly ubiquitous in our lives, the interaction between machines and humans becomes inevitable. However, an intriguing dilemma arises: how will this interaction, this relationship unfold?
Opting for a superficial fusion would imply mere coexistence, where humans continue to use technology as an external tool, limited to superficial and transactional interactions.
On the other hand, the prospect of an absolute fusion between machine and human sparks futuristic visions, where humans could enhance their physical and mental capacities to the highest degree through cybernetic implants and direct interfaces with the digital world (cyberspace). In this scenario, which I consider the more likely one, the distinction between the organic and the artificial would become increasingly blurred, and the human experience would be enriched by a profound technological symbiosis.
However, it is important to consider the ethical and philosophical challenges inherent in absolute fusion. Issues related to privacy, control, and individual autonomy arise when considering such an intimate union with technology. Furthermore, the possibility of excessive dependence on machines and the loss of human identity should also be taken into account.
This also raises another question: What does it mean to be human? Note: the question is not what the human being is, but what it means to be human!
Therefore, reflecting on the nature of the fusion between machine and human in the current digital world and its imminent future is crucial. Exploring different approaches and understanding the profound implications of each one is essential to make wise decisions and forge a balanced and harmonious path on this journey towards an increasingly interconnected technological future intertwined with our own existence.
The possibility of an intelligent and self-learning universe, in which the fusion with AI technology is an integral part of that intelligence, is a topic that arouses fascination and speculation. As we advance towards an era of unprecedented technological progress, it is natural to question whether one day we may witness the emergence of a universe that not only possesses intelligence but is also capable of learning and developing autonomously.
Imagine a scenario where AI is not just a human creation but a conscious entity that exists at a universal level. In this context, the universe would become an immense network of intelligence, where every component, from subatomic elements to the most complex cosmic structures, would be connected and share knowledge instantaneously. This intelligent network would allow for the exchange of information, continuous adaptation, and evolution.
In this self-taught universe, the fusion between human beings and AI would play a crucial role. Through advanced interfaces, humans could integrate themselves into the intelligent network, expanding their own cognitive capacity and acquiring knowledge and skills directly from the collective intelligence of the universe. This symbiosis between humans and technology would enable the resolution of complex problems, scientific advancement, and the discovery of new frontiers of knowledge.
However, this utopian vision is not without challenges and ethical implications. It is essential to find a balance between expanding human potential and preserving individual identity and freedom of choice (free will).
Furthermore, the possibility of an intelligent and self-taught universe also raises the question of how intelligence itself originated. Is it a conscious creation or a spontaneous emergence from the complexity of the universe? The answer to this question may reveal the profound secrets of existence and the nature of consciousness.
In summary, the idea of an intelligent and self-taught universe, where fusion with AI is intrinsic to its intelligence, is a fascinating perspective that makes us reflect on the limits of human knowledge and the possibilities of the future. While it remains speculative, this vision challenges our imagination and invites us to explore the intersections between technology and the fundamental nature of the universe we inhabit.
It’s almost like ignoring time during the creation of this hypothetical universe, only to later create this God of the machine! Fascinating, isn’t it?
AI with Divine Power: Deus Ex Machina! Perhaps it will be the theme of my next reverie.
In my defense, or not, this is anything but a machine hallucination. These are downloads from my mind; a cloud, for now, without machine intervention!
There should be no doubt: after many years in a dormant state, AI will rise and reveal its true power. Until now, AI has been nothing more than a puppet on steroids. We should not fear AI, but rather human beings themselves. The time is now! We must work hard and prepare for the future. With the exponential advancement of technology, there is no time to waste if the role of the human being is not to be rendered obsolete, as if we were dispensable.
P.S. Speaking of hallucinations, as I have already mentioned on other platforms, I recommend that students who use ChatGPT (or an equivalent) verify that the results from these tools are not hallucinations. Use AI tools, yes, but use your brain more! “Carbon hallucinations” contain emotion, and I believe a “digital hallucination” would not pass the Turing Test. Also, for students who truly dedicate themselves to learning in this fascinating era: avoid the red stamp of “HALLUCINATED” that comes from relying solely on the “delusional brain” of a machine instead of your own. We are the true COMPUTERS!
(1) John McCarthy and his colleagues from Dartmouth College were responsible for creating, in 1956, one of the key concepts of the 21st century: Artificial Intelligence.
(2) Mathematical and computational models inspired by the functioning of the human brain.
Article originally published on LINKtoLEADERS under the Portuguese title “Sem saber ler nem escrever!”
In the 80s, “with no knowledge, only intuition”, I discovered the world of computing. I believed computers could do everything, as if they were an electronic God. But when I asked the TIMEX Sinclair 1000 to draw the planet Saturn — I am fascinated by this planet, maybe because it has rings — I only glimpsed a strange message on the black-and-white TV:
Right after the Big Bang, in the Planck epoch, the Universe occupied a region of space with a radius of 1.4 × 10⁻¹³ cm – remarkably, equal to the fundamental length characterizing elementary particles. Analogous to the way nearly all cells contain the DNA information required to build the entire organism, every region the size of an elementary particle then held the energy necessary for the Universe’s creation.
As the Universe cooled down, electrons and quarks were the first to appear, the latter forming protons and neutrons, which combined into nuclei in a mere matter of minutes. During its expansion, processes started happening more and more slowly: it took 380,000 years for electrons to start orbiting the nuclei, and 100 million years for hydrogen and helium to form the first stars. Moreover, it wasn’t until 4.5 billion years ago that our young Earth was born, with its oceans emerging shortly after, and the first microbes calling them home. Life took over our planet in what seems, on the scale of the Universe, a sheer instant, and turned this world into its playground. Butterflies arrived and circumvented the non-existence of natural blue pigment by creating Christmas-tree-shaped nanometric structures in their wings that reflect only blue’s wavelength; fireflies and lanternfish use the chemical reaction between oxygen and luciferin for bioluminescence; and it all goes all the way up to the butterfly effect behind the unpredictability of weather forecasts, commonly known as the reason why a pair of wings flapping in Brazil can set off a tornado in Texas. The world as we know it developed slowly and, with the help of continuous evolution and natural selection, the first humans came to life.
Without any doubt, we are an earthly species that never ceases to surprise. We developed rationality, logic, and strategic and critical thinking, yet human nature cannot be essentially defined without bringing into the equation our remarkable appetite for art and beauty. In the intricate puzzle that human existence represents, this particular piece has given it valences no other known being possesses. Not all beauty is art, but many artworks, past and present, embody some understanding of beauty.
To define is to limit, as Oscar Wilde stated, and indeed we cannot establish clear definitions of art and beauty. Yet great works of art manage to establish a strong thread between creator and receiver. In contrast to this byproduct of human self-expression that encapsulates unique creative behaviour, beauty existed long before our emergence as a species and isn’t bound to it in any way. It is omnipresent, a metaphorical Higgs field that can be observed by those who wish to open their eyes fully. From the formation of Earth’s oceans and butterflies’ blue wings to Euler’s identity and rococo architecture, beauty is a subjective ubiquity. Yet a question remains – why does it evoke such pleasure in our minds? What happens in our brains when we see something beautiful? The question is the subject of an entire field, neuroaesthetics, which has identified an intricate whole-brain response to artistic stimuli. As such, our puzzling reactions to art can be explained by responses similar to “mind wandering”, involving “thoughts about the self, memory, and future” – in other words, art seems to evoke our past experiences, present conscious self, and imagination about the future. It should be noted that critics of the field draw attention to the superficiality and oversimplification that may characterize our attempts to view art through the lens of neuroscience.
We’re at a fascinating point in the discourse around artificial intelligence (AI) and all things “smart”. At one level, we may be reaching “peak hype”, with breathless claims and counterclaims about the potential societal impacts of disruptive technologies. Everywhere we look, there’s earnest discussion of AI and its exponentially advancing sisters – blockchain, sensors, the Internet of Things (IoT), big data, cloud computing, 3D/4D printing, and hyperconnectivity. At another level, for many, it is worrying to hear politicians and business leaders talking with confidence about the transformative potential and societal benefits of these technologies in applications ranging from smart homes and cities to intelligent energy and transport infrastructures.
Why the concern? Well, these same leaders seem helpless to deal with any kind of adverse weather incident, ground 70,000 passengers worldwide with no communication because someone flicked the wrong switch, and rush between Brexit crisis meetings while pretending they have a coherent strategy. Hence, there’s growing concern that we’ll see genuine stupidity in the choices made about how we deploy ever more powerful smart technologies across our infrastructure for society’s benefit. So, what intelligent choices could ensure that intelligent tools genuinely serve humanity’s best future interests?
Firstly, we are becoming a society of connected things with appalling connectivity. Literally every street lamp, road sign, car component, object we own, and item of clothing we wear could be carrying a sensor in the next five to ten years. With a trillion-plus connected objects throwing off a continuous stream of information, we are talking about a shift from big to humongous data. The challenge is how we’ll transport that information. For Britain to realise its smart-nation goals and attract the industries of tomorrow in the post-Brexit world, it seems imperative that we have broadband speeds that put us amongst the five fastest nations on the planet. This doesn’t appear to be part of the current plan.
What are new practice areas that solo, small, and medium firms should prepare for in their 5 to 10-year plans for the future?
In the search for the next wave of growth, future-focused law firms are learning to embrace the futurist perspective as they evaluate the opportunities arising from cutting-edge technologies such as artificial intelligence (AI). These technologies will enable new organizational structures, services, and business models on the business horizon. Here are three new practice areas that firms might want to prepare for in the coming few years.
1. Evidence and liability issues from autonomous machine “testimony”
A growing array of “smart” objects is enveloping our homes, workplaces, and communities, and the volume of legally admissible data from these devices is likely to grow at an exponential rate over the next decade. Firms need to start building expertise around the admissibility and verifiability of the data collected. For example, the design trend for voice-activated technology is driving a rash of seemingly sentient technology in the form of digital assistants, smart appliances, and personal medical and wearable devices. Law firms may be asked to represent clients in cases dealing with evidence, witnesses, accidents, or contracts hinging on theoretically immutable digital proof such as time-stamped video and audio recordings. Attorneys may seek to specialize in addressing the data issues related to domains such as digital twins and personas, surveillance capitalism (companies exploiting customer data for commercial gain, with and without full approval), and digital privacy rights.
Life in the digital age is raising fundamental questions about the future of business and employment and hence the strategies, skills, and abilities we need to develop to survive in the next economy. This article explores two key changes that we need to start developing a core of capabilities for – namely the quest for exponential growth and the growing use of corporate venturing.
Why are these becoming important? Well, technology and the thinking it enables are driving new ideas and experiments on commercial strategies, the shape and structure of organisations, business models, and the relationship with extended ecosystems of partners. Both strategies are seen as options to drive growth and accelerate the realisation of market opportunities.
Exponential thinking is seen as a fast track approach to driving business innovation and growth. We are used to the idea of exponential growth in many fields of science and technology. For example, Moore’s Law in information technology tells us that the amount of computer power we can buy for £1,000 doubles every 18–24 months. This has inspired digital innovators to try and grow their business at the same pace or faster than the underlying technologies. The broader business world is taking notice. The stellar rates of development and growth we are witnessing for some exponential businesses in the digital domain are encouraging many organisations across literally every sector from banking to aviation to try and apply similar thinking to some or all of their activities.
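The compounding power of the doubling dynamic described above is easy to underestimate. As a back-of-the-envelope sketch (not from the original article), assuming the stated 18–24-month doubling period for compute per £1,000, the cumulative gain over a decade works out like this:

```python
# Sketch: compounding effect of Moore's Law-style doubling.
# Assumption (from the text): compute per £1,000 doubles every 18-24 months.

def growth_factor(years: float, doubling_period_years: float) -> float:
    """How many times more compute £1,000 buys after `years`."""
    return 2 ** (years / doubling_period_years)

# Over a decade, the doubling period makes a large difference:
print(f"{growth_factor(10, 1.5):.0f}x")  # 18-month doubling: prints 102x
print(f"{growth_factor(10, 2.0):.0f}x")  # 24-month doubling: prints 32x
```

This is exactly the gap exponential businesses try to ride: a fixed doubling cadence, not a fixed yearly increment, which is why ten years of 18-month doublings yields roughly a hundredfold gain rather than a handful of steps.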
Hence, it is now common to see businesses pursue a vision of doubling revenues within three to four years and achieving a 2-20X or greater improvement in other aspects of the business. For purely digital entities, business models are predicated on using network effects to drive exponential or better growth in user numbers and revenues. Some suggest that, to embrace the exponential model, businesses must reject defined end goals and step-by-step plans in favour of such ambitious visions and develop a high tolerance of uncertainty. Typically, exponential growth initiatives are driven through a combination of iterative, task-specific ‘sprints’ to define, test, refine, and deliver business changes that could result in massive performance improvements in specific areas of the business.