Bio: Dr. Thórisson is Professor of Computer Science at Reykjavik University. During 30+ years of basic and applied AI research he has taught courses at Columbia University, KTH, and RU, worked at LEGO, MIT, and British Telecom, advised the Prime Minister of Iceland and the Swedish government on AI, and consulted on robotics for NASA and HONDA Research. His company Radar Networks was selected in 2003 as one of the 10 most innovative startups in the US by Reuters Venture Capital. Over the past 20 years his General Machine Intelligence group at Reykjavik University has developed a new kind of AI that can learn complex tasks autonomously via recursive self-programming. Dr. Thórisson is a three-time recipient of the Kurzweil Award for his work on AGI. He holds a Ph.D. from MIT.
NAIM 2025 - talk: General machine intelligence: timeline & prospects
According to frequent news bulletins, artificial intelligence is “almost solved”. A full takeover is imminent. As a result, we will all be out of a job within a few months. What shall one do about this? The consensus seems to be: Resistance is futile — retire early, lie back, and wait for your universal basic income paycheck from the government, which hopefully will arrive shortly. NOT SO FAST! Upon further scrutiny, such claims turn out to surf social media on hot air alone, with no decent argumentative support, scientific rigor, or theoretical foundations behind them. When it comes to AI, this is nothing new. The past 70 years of AI are chock-full of overpromise and under-delivery, not least on general intelligence. Contemporary AI — applied AI based on deep neural networks, reinforcement learning, and related methods — covers but a tiny fraction of what general intelligence involves. Theoretically and architecturally, systems based on these technologies are much closer to the first artificial neural networks of the 1950s — Minsky’s SNARC and Rosenblatt’s Perceptron — than to examples of higher-level intelligence found in nature, let alone to more general intelligences like those of dogs and ravens. So what is missing from contemporary AI? In short, we don’t know how thinking works! Let me list a few things scientists are short on ideas for how to build: cumulative autonomous learning, empirical reasoning, self-reflection, meaning generation, understanding, transversal resource management (or, in everyday parlance, attention), and cognitive architecture. According to my theory of cognition, these are absolutely necessary (yet may still be insufficient) for creating general machine intelligence. But is that something we need — or even want? Explaining these topics and answering such questions will be the focus of my keynote at Nordic AI Meet.