AI Must Combat AI-Generated Misinformation
Nvidia CEO Jensen Huang says artificial intelligence (AI) is the key to combating the risks posed by AI-generated misinformation. Speaking at the Bipartisan Policy Center recently, Huang stressed that AI’s ability to produce fake information quickly will require AI-based solutions to counteract it.
Huang further warned that AI's capacity to generate misleading data and false information will only accelerate as the technology evolves. Defensive systems, he said, must therefore operate at the same speed or faster to detect and stop these threats.
Huang compared the use of artificial intelligence against misinformation to the current cybersecurity landscape: nearly every company faces the threat of cyberattacks, and increasingly robust defenses are needed to neutralize them.
In the same way, artificial intelligence technology will be required to stay ahead of AI-driven threats.
Huang Urges Government to Embrace AI Technology
Huang also called on the US government to take a more active role in artificial intelligence development. He emphasized that the government should not only regulate AI but also become a practitioner of the technology.
He pointed to the Department of Energy and the Department of Defense as agencies where the technology could play a critical role, and suggested that the United States consider building its own supercomputer to accelerate AI research and development.
He said such a move would allow scientists to develop new artificial intelligence algorithms that could advance national interests. Huang’s comments come amid growing concerns about the role of technology in shaping public perception, particularly as the US approaches federal elections in November.
A recent Pew Research Center survey found that nearly 60% of Americans are worried about artificial intelligence being used to spread fake information about presidential candidates. Around 40% of respondents believe AI will be used for harmful purposes in the upcoming elections, while only a small share expected it to be used for good.
These fears were heightened further when an unnamed US intelligence official reported that Russia and Iran are already using artificial intelligence to manipulate political content, including videos of Vice President Kamala Harris.
Future AI Models Will Require More Energy
Huang also noted that AI models will require significantly more power as they become more complex, predicting that future data centers could need up to 20 times the energy of today's facilities.
The Nvidia CEO suggested building these centers near sources of surplus energy, since AI models can be trained anywhere, making remote data centers a viable way to manage energy consumption.
He added that future models will increasingly rely on other AI systems to train one another; this, combined with the growing volume of data needed for training, will drive up energy consumption across the industry.
California Governor Newsom Vetoes Controversial Safety Bill
Meanwhile, California Governor Gavin Newsom has vetoed SB 1047, a widely debated AI safety bill, saying it would hinder innovation. The bill, known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, aimed to impose strict safety standards on frontier AI models.
It would have required companies such as OpenAI, Meta, and Google to conduct safety testing and build a “kill switch” into their artificial intelligence systems. In rejecting the bill, Newsom said the proposed regulations would stifle the development of emerging AI models.
According to him, the legislation targeted large AI firms without effectively addressing the real risks posed by artificial intelligence. He emphasized that the bill would impose unnecessary restrictions on essential functions, creating a barrier to future innovation.
The bill’s sponsor, Senator Scott Wiener, argued that the regulations were necessary to prevent potential disasters linked to AI development. Had it passed, the bill would have allowed California’s attorney general to sue developers whose AI systems cause significant harm, such as the takeover of critical infrastructure like power grids.
Newsom’s Veto Sparks Debate on Innovation and Safety
Nevertheless, Newsom acknowledged the need for artificial intelligence safety measures but called for a more balanced approach. He has tasked experts with developing science-based risk analyses and directed state agencies to continue assessing potential threats from artificial intelligence.
The bill faced strong opposition from Silicon Valley, including tech giants like OpenAI and Google, as well as some politicians. Former House Speaker Nancy Pelosi warned that the bill could slow artificial intelligence progress in California.