Here's how Google will get to AGI (Artificial General Intelligence)


Artificial intelligence (AI) research has reached new heights in recent years, with the emergence of ambitious projects such as AGI. Several research institutions are working to develop systems that can learn and adapt to situations never seen before, with efficiency and adaptability comparable to that of a human. In this context, OpenAI recently announced a partnership with a US national research institute to accelerate the creation of a robust and versatile AGI. However, one of the main players in this revolution is Google, which has been leading the development of advanced technologies in artificial intelligence for decades.

Although Google was slower than OpenAI, for example, to put its AI innovations into the hands of end users (think of Bard's unconvincing early performance, compared with today's excellent Gemini models), some of the most revolutionary ideas have come precisely from the Mountain View company. Consider, for example, the Transformer architecture and reasoning techniques for Large Language Models (LLMs). We discuss them in our article on AI explained simply.


Two developers with 25 years of experience at Google tell the story of their journey towards Artificial General Intelligence

Artificial General Intelligence (AGI) is the goal toward which Google is aiming with determination and conviction. This was revealed by Jeff Dean and Noam Shazeer, two developers who have seen how Google has changed over 25 years and what its most up-to-date ambitions are. Both Dean and Shazeer began their careers at Google in 1999.

Noam Shazeer played a fundamental role in the creation of techniques such as the Transformer model, which gave a strong boost to modern solutions based on generative artificial intelligence. He is listed among the authors of the paper "Attention Is All You Need". Shazeer is also the creator of architectures such as Mixture of Experts and Mesh TensorFlow, which overcame previous barriers of scalability and computational power. These developments have led to language models that not only understand but also generate text, solving complex tasks.

Jeff Dean has been a key contributor to the creation of advanced processing systems such as MapReduce, BigTable, and TensorFlow. Each of these innovations has had a profound impact on the AI industry, laying the foundation for modern machine-learning architectures. Dean also participated in the development of Gemini, the latest generation of Google's advanced generative models, which has further pushed the limits of natural language processing capabilities.


Google's future prospects and goals in the field of Artificial Intelligence

Looking ahead, Dean and Shazeer (the full interview is available on YouTube) predict that AI models will continue to evolve, acquiring the ability to solve even more complex problems. Currently, a modern AI system can tackle tasks by breaking them down into 10 subtasks, with a success rate of 80%. However, researchers envision a future in which AI will be able to break a task down into 1,000 subtasks, with a success rate that could exceed 90%. This kind of progress would lead to AI that can perform increasingly sophisticated tasks with incredible precision.
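To see why the jump from 10 to 1,000 subtasks is so demanding, here is a back-of-the-envelope sketch (our illustration, not from the interview) that assumes each subtask must succeed independently, so the overall success rate is the per-subtask reliability raised to the number of subtasks:

```python
# Illustrative assumption: subtasks succeed independently, so
# overall_success = p ** n_subtasks, hence p = overall_success ** (1 / n_subtasks).
def per_subtask_reliability(overall_success: float, n_subtasks: int) -> float:
    """Per-subtask success probability needed to hit a target overall rate."""
    return overall_success ** (1 / n_subtasks)

# Today's scenario cited in the interview: 10 subtasks, 80% overall success.
p10 = per_subtask_reliability(0.80, 10)       # ~0.978

# The envisioned future: 1,000 subtasks with a >90% overall success rate.
p1000 = per_subtask_reliability(0.90, 1000)   # ~0.9999

print(f"10 subtasks:    each must succeed {p10:.1%} of the time")
print(f"1,000 subtasks: each must succeed {p1000:.3%} of the time")
```

Under this (simplified) independence assumption, per-step reliability must rise from roughly 97.8% to about 99.99% to reach the future Dean and Shazeer describe, which is why such progress would be remarkable.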

Another key goal is the integration of language models across increasingly diverse modalities, spanning text, images, video, and audio. This ambitious vision aims to make AI not only smarter at understanding human language but also capable of interacting with and understanding non-human data sources, such as those from self-driving vehicles and genomic sequences.


Responsible AI and ethical sustainability: the crucial role of safety measures

Despite the excitement about its future potential, AI also raises significant concerns about safety and ethics. Google has implemented a series of policies to promote responsible AI, seeking to balance progress with the protection of humanity.

Misinformation and the risk of hacking are among the greatest dangers associated with the irresponsible use of AI, and to address these issues, the company has adopted advanced security measures. To build safe AI, especially for high-risk tasks, Google is taking inspiration from security standards developed for aerospace systems.

The main challenges concern phenomena such as “hallucinations” of AI models: these occur when the system produces incorrect or inconsistent responses. Hence the need to continue to refine the models to prevent such problems.

According to Dean and Shazeer, however, the measures adopted by Google promise to reduce these risks and maximize long-term benefits, especially in crucial areas such as education and healthcare.

In another article of ours, we tried to explain why saying that artificial intelligence will one day surpass human intelligence is intrinsically wrong. AI can imitate human intelligence and approximate it; it may eventually "think" and "act" (hence the quotation marks) in a way that resembles the behavior of a human being.


Image credit: iStock.com – BlackJack3D
