Today, artificial intelligence hardly has a unified definition or application. AI is listed as a capability across many technology products, but this is mostly a marketing tactic to capitalize on a feature perceived as a must-have. However, the biggest contributions of AI to our society have yet to be realized. Looking at least one decade into the future, I believe we will see significant advancements in three areas by 2030 as a result of AI – personal assistance devices, genomics, and space exploration.
Interfacing with machines
I believe the field of human-computer interaction will go through massive changes in the next ten years. Right now, personal assistance devices and software have some traction, but they are still very immature. Alexa, Google Assistant, and similar technologies are still rudimentary and leave the burden of being understood on the human side. They mostly translate natural language into search queries without fully considering the subject, the context, or the person they are interacting with. Many questions are interpreted incorrectly or receive an “I don’t know about that” type of answer.
What I predict for 2030 is that conversing with these agents will feel much more natural: they will appear to have a personality and relate in a specific way to each of the humans they interact with. They will predict human needs better and understand the state of mind of the person they are interacting with.
In addition, I believe direct neural interfaces will start becoming more practical, giving people the capability to interact with “intelligent” machines through thought. Ten years is too soon for this technology to fully mature, so I expect it to be in an early stage of adoption in that time frame.
Considering all of that, the biggest risks I see are privacy, security, and ethics. Robots built with AI will have a compliance layer that auditors and forensic experts will be able to query in case of a mishap, or to track ethical violations. This means abstract concepts like ethics and philosophy will be modeled for operation in a wide variety of contexts to suit cultural biases, implying geo- and culture-centric AI-based products.
Having intimate knowledge of their users’ personalities, these AI agents could be abused by the programmers and companies behind them to advance commercial agendas in very subtle ways.
Editing the genome
Advancements in the area of genetic editing, with technologies like CRISPR (Clustered Regularly Interspaced Short Palindromic Repeats), are ramping up in practicality and accessibility.
One of the advancements that was overshadowed by COVID coverage, and didn’t get as much media attention as it deserved in 2020, was the in vivo usage of gene editing technology. In vivo means that the gene edit was done inside the body, in contrast to in vitro, which literally means “in glass” – that is, in a test tube or outside the body.
By 2030, there will be many more possibilities with gene editing. With genetic sequencing now economically viable and CRISPR advancing rapidly, AI is poised to be the biggest catalyst in boosting the practical applicability of genomics. The question will be who can access such technology, and what the ethics and consequences of that are for the human race as a whole.
Exploring space
Another big news story that got overshadowed by COVID in 2020 was NASA’s granting of a contract to Nokia to build a cellular network on the Moon. This is just a start, and I believe many commercially available technologies and software will start being used outside of Earth by leveraging AI.
Due to the latency of communication, AI is essential for performing critical missions in space. This theme has been portrayed in science fiction for a long time, but I believe this is the decade where it stops being fiction and becomes reality.
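To make the latency point concrete, here is a quick back-of-the-envelope sketch of one-way light-speed delays. The distances are rounded public figures (the Earth–Mars distance varies widely with orbital position), so the results are order-of-magnitude estimates, not mission parameters:

```python
# One-way signal delay at the speed of light for rounded average distances.
C_KM_PER_S = 299_792.458  # speed of light in vacuum, km/s

def one_way_delay_s(distance_km: float) -> float:
    """Minimum one-way signal delay, in seconds, over a given distance."""
    return distance_km / C_KM_PER_S

moon = one_way_delay_s(384_400)            # Earth-Moon average: ~1.3 s
mars_close = one_way_delay_s(54_600_000)   # Mars at closest approach: ~3 min
mars_far = one_way_delay_s(401_000_000)    # Mars near conjunction: ~22 min

print(f"Moon: {moon:.1f} s one way")
print(f"Mars (closest): {mars_close / 60:.1f} min one way")
print(f"Mars (farthest): {mars_far / 60:.1f} min one way")
```

A round trip doubles these numbers, so a rover on Mars can wait the better part of an hour for a reply from Earth – which is exactly why time-critical decisions have to be made autonomously on site.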
By 2030, we will be on the verge of establishing a permanent extraterrestrial colony for the first time in human history. This brings a lot of opportunity, and with it many concerns, as space exploration starts becoming a commercial activity rather than a tightly controlled state endeavor. Imagine hackers attacking a colony on the Moon or Mars and putting lives in danger! Imagine countries waging warfare in space, or border disputes on Mars! There is already a series of negotiations on how countries will split these new territories.
In conclusion, everyone – individuals, employees, and organizations – will be challenged to accept new use cases for AI come 2030, including systems that interact with humans. In all cases, security and ethics in software and its development play a critical role in success and trust. Continuing to uphold principles like “do no harm” that apply to many jobs today will require new forms of testing and threat modeling activities, possibly in real time, when layered with AI.
Much of this is being discussed in standards groups today, but won’t be adopted more generally for a few years. A decade – equivalent to a couple of iterations in the compliance standards world – is a reasonable timeline for some of this to move beyond early-stage adoption.