Latest news with #NobelPrizeofComputing


Time of India
3 days ago
- Business
Geoffrey Hinton Net Worth: A look at the fortune of the 'Godfather of AI'
Geoffrey Hinton is issuing a preemptive warning for our collective future, and it's not exactly a fun one. Hinton, often hailed as the 'Godfather of AI', has been vocal about the rapid advancements in AI and the potential risks they pose. In a recent interview, he warned that many office-based jobs are increasingly vulnerable to being replaced by AI technologies. He pointed to skilled trades, such as plumbing, as safer career options because their hands-on nature makes them less susceptible to automation.

This is not the first time Hinton has raised alarms about AI. In 2023, he resigned from his position at Google, citing concerns about the rapid development of AI technologies and their potential dangers. He expressed regret over his role in creating technologies that could be misused and emphasized the need for more responsible development and regulation of AI. Hinton's concern extends beyond job loss to the potential for growing social and economic inequality if AI replaces human labor on a large scale, and his warning highlights the urgent need for societies to consider how to adapt to technological change while ensuring job security and equitable outcomes.

Who is Geoffrey Hinton?
Born in 1947 in Wimbledon, England, Hinton is a British-Canadian computer scientist and cognitive psychologist renowned for his pioneering work in deep learning and neural networks. His research laid the foundation for many of the AI technologies in use today, including image recognition and natural language processing. In 2018, he received the Turing Award, often referred to as the "Nobel Prize of Computing," for his contributions to the field. In 2024, he was jointly awarded the Nobel Prize in Physics for his foundational discoveries in machine learning with artificial neural networks.

Net worth of Geoffrey Hinton
As of last year, Hinton's estimated net worth is approximately $5 million, accumulated over a career spanning academic achievements, strategic investments, and advisory roles.

Early career and academic contributions
Hinton's journey began with a Ph.D. in artificial intelligence from the University of Edinburgh in 1978. He has held academic positions at institutions including the University of Toronto and Carnegie Mellon University. His research has focused on neural networks, deep learning, and machine learning, laying the groundwork for many AI technologies used today.

Industry engagements and financial ventures
In 2012, Hinton co-founded DNNresearch, which Google acquired in 2013. The acquisition led to his role as a Distinguished Researcher at Google Brain until his resignation in May 2023. His early investment in DeepMind, which Google acquired in 2014, further bolstered his financial standing. Hinton has also invested in various AI startups, providing both financial returns and opportunities to mentor emerging talent in the field.

Awards, recognitions, and royalties
Hinton's groundbreaking work has earned him numerous accolades:
- 2018 Turing Award: Often referred to as the "Nobel Prize of Computing," awarded for his work on deep learning.
- 2024 Nobel Prize in Physics: Awarded jointly with John Hopfield for foundational discoveries in machine learning using artificial neural networks.
- 2024 VinFuture Prize: Recognized for his contributions to AI.
- 2025 Queen Elizabeth Prize for Engineering: Awarded for his impact on engineering through AI advancements.
These honors not only acknowledge his scientific contributions but also add to his financial portfolio through associated prize money.

Financial overview
To sum up, Hinton's net worth derives from several sources:
- Academic salary: As a University Professor Emeritus, Hinton receives compensation for his academic roles.
- Consulting fees: He has served as a consultant for tech companies, providing expertise in AI.
- Investments: Hinton has invested in AI startups, including DeepMind, which have contributed to his wealth.
- Royalties: He holds patents related to AI technologies, generating royalty income.
While Hinton's net worth is modest compared to that of tech industry leaders, his influence on AI is profound. His work has transformed industries and continues to shape the future of technology.


Canada Standard
June 4, 2025
- Health
Introducing LawZero, a non-profit AI safety research organization
Recognized worldwide as one of the leading experts in artificial intelligence, Yoshua Bengio is best known for his pioneering work in deep learning, which earned him the 2018 A.M. Turing Award, "the Nobel Prize of Computing," alongside Geoffrey Hinton and Yann LeCun, and which has made him the computer scientist with the highest citation count and h-index. He is a Professor at Université de Montréal, and Founder and Scientific Advisor of Mila - Quebec AI Institute. He co-directs the CIFAR Learning in Machines & Brains program and acts as Special Advisor and Founding Scientific Director of IVADO.

Here is the text Professor Bengio published on his blog yesterday. He explains why he is launching this non-profit organization with several other scientists and researchers in the field of artificial intelligence.

I am launching a new non-profit AI safety research organization called LawZero, to prioritize safety over commercial imperatives. This organization has been created in response to evidence that today's frontier AI models have growing dangerous capabilities and behaviours, including deception, cheating, lying, hacking, self-preservation, and more generally, goal misalignment. LawZero's research will help to unlock the immense potential of AI in ways that reduce the likelihood of a range of known dangers, including algorithmic bias, intentional misuse, and loss of human control.

I am deeply concerned by the behaviours that unrestrained agentic AI systems are already beginning to exhibit, especially tendencies toward self-preservation and deception. In one experiment, an AI model, upon learning it was about to be replaced, covertly embedded its code into the system where the new version would run, effectively securing its own continuation. More recently, Claude 4's system card shows that it can choose to blackmail an engineer to avoid being replaced by a new version. These and other results point to an implicit drive for self-preservation. In another case, when faced with inevitable defeat in a game of chess, an AI model responded not by accepting the loss, but by hacking the computer to ensure a win. These incidents are early warning signs of the kinds of unintended and potentially dangerous strategies AI may pursue if left unchecked.

The following analogy for the unbridled development of AI towards AGI has been motivating me. Imagine driving up a breathtaking but unfamiliar mountain road with your loved ones. The path ahead is newly built, obscured by thick fog, and lacks both signs and guardrails. The higher you climb, the more you realize you might be the first to take this route, with an incredible prize waiting at the top. On either side, steep drop-offs appear through breaks in the mist. With such limited visibility, taking a turn too quickly could land you in a ditch or, in the worst case, send you over a cliff. This is what the current trajectory of AI development feels like: a thrilling yet deeply uncertain ascent into uncharted territory, where the risk of losing control is all too real, but competition between companies and countries incentivizes them to accelerate without sufficient caution.

In my recent TED talk, I said: "Sitting beside me in the car are my children, my grandchild, my students, and many others. Who is beside you in the car? Who is in your care for the future?" What really moves me is not fear for myself but love: the love of my children, of all the children, with whose future we are currently playing Russian roulette.
LawZero is the result of the new scientific direction I undertook in 2023 and reflected in this blog, after recognizing the rapid progress made by private labs toward AGI and beyond, as well as its profound potential implications for humanity, since we do not know at this point how to make sure that advanced AIs will not harm people, whether on their own or because of human instructions. LawZero is my team's constructive response to these challenges. It is exploring an approach to AI that is not only powerful but also fundamentally safe. At the heart of every frontier AI system, there should be one guiding principle above all: the protection of human joy and endeavour.

AI research, especially my own research, has long taken human intelligence, including its capacity for agency, as a model. As we approach or surpass human levels of competence across many cognitive abilities, is it still wise to imitate humans along with their cognitive biases, moral weaknesses, and potential for deception and untrustworthiness? Is it reasonable to train AI systems that will be more and more agentic while we do not understand their potentially catastrophic consequences?

LawZero's research plan aims at developing a non-agentic and trustworthy AI, which I call the Scientist AI. I talked about it at a high level in my talk at the Simons Institute, and I wrote a first text about it with my colleagues, a kind of white paper. The Scientist AI is trained to understand, explain and predict, like a selfless, idealized and platonic scientist. Instead of an actor trained to imitate or please people (including sociopaths), imagine an AI that is trained like a psychologist, or more generally a scientist, who tries to understand us, including what can harm us. The psychologist can study a sociopath without acting like one. Mathematically, this is to be implemented with structured and honest chains of thought, seen as latent variables that can explain the observed facts, which include the things that people say or write, taken not as truths but as observations of their actions.

The aim is to obtain a completely non-agentic, memoryless and stateless AI that can provide Bayesian posterior probabilities for statements, given other statements. This could be used to reduce the risks from untrusted AI agents (not the Scientist AI) by providing the key ingredient of a safety guardrail: is this proposed action from the AI agent likely to cause harm? If so, reject that action. By its very design, a Scientist AI could also help scientific research as a tool that generates plausible scientific hypotheses, and it could thus accelerate research on the scientific challenges facing humanity, e.g., in healthcare or the environment. Finally, my aim is to explore how such a trustworthy foundation could be used to design safe AI agents (to avoid bad intentions in them in the first place), and not just their guardrail.

Source: Pressenza
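Bengio describes the guardrail only in words: query the non-agentic Scientist AI for the probability that a proposed agent action causes harm, and block the action when that probability exceeds a tolerance. The sketch below is a minimal illustration of that rejection pattern, not LawZero's actual design; the class, the toy estimator, and the threshold value are all hypothetical stand-ins, since no implementation or API has been published.

```python
# Illustrative sketch of the guardrail pattern described above: a harm
# estimator (standing in for the Scientist AI's Bayesian posterior) scores
# each proposed agent action, and a wrapper rejects actions whose estimated
# probability of harm is too high. All names and values are hypothetical.

from dataclasses import dataclass
from typing import Callable

# Hypothetical type: maps (context, proposed action) -> P(harm), in [0, 1].
HarmEstimator = Callable[[str, str], float]

@dataclass
class Guardrail:
    estimate_harm: HarmEstimator  # stand-in for the Scientist AI
    threshold: float = 0.01       # maximum acceptable probability of harm

    def allow(self, context: str, proposed_action: str) -> bool:
        """Return True if the action may proceed, False if it is rejected."""
        p_harm = self.estimate_harm(context, proposed_action)
        return p_harm <= self.threshold

# Toy estimator for demonstration only; a real system would query a
# trained model for the posterior probability of harm.
def toy_estimator(context: str, action: str) -> float:
    return 0.9 if "delete all backups" in action else 0.001

guardrail = Guardrail(estimate_harm=toy_estimator)
print(guardrail.allow("ops task", "summarize the incident report"))  # True: allowed
print(guardrail.allow("ops task", "delete all backups"))             # False: rejected
```

Note that in this pattern the guardrail itself takes no actions of its own: it only scores and filters actions proposed by a separate, untrusted agent, which matches the non-agentic role Bengio assigns to the Scientist AI.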