Nvidia says it will stop selling GPUs to companies engaging in unethical AI projects.
“We only sell to customers that do good,” Nvidia CEO Jensen Huang told journalists on Wednesday. “If we believe that a customer is using our products to do harm, we would surely cut that off.”
Nvidia's GPUs have played a pivotal role in developing ChatGPT, which is taking the world by storm. The AI-powered chatbot from OpenAI was reportedly trained using the help of tens of thousands of Nvidia A100 chips, which can individually cost around $10,000.
The arrival of ChatGPT has shown that “generative AI” programs may forever change society—for both good and bad. For example, it’s not hard to imagine existing and next-generation AI programs being abused to spread propaganda, create deepfakes, or pump out malware capable of hacking computer systems.
In response, Huang says his company will pull the plug on customers found misusing its GPUs for malicious AI programs. However, he notes “it’s harder” for Nvidia to crack down quickly since the company only supplies components to buyers; it doesn’t directly run generative AI programs for consumers, as OpenAI, Microsoft, and Google do.
Huang also didn’t specify the circumstances under which Nvidia would cut off a customer’s access to its GPUs. Some groups, including the developers of the right-wing, free-speech social network Gab, plan on creating their own AI-powered chatbots but want to remove any guardrails on what the programs can say.
“If we don’t build and gain ground now, our enemies will dominate this powerful tool and use it for evil,” Gab founder Andrew Torba wrote in a January blog post. “We need to develop our own AI right now and gain a foothold in this space before the demons in Silicon Valley make it the next Wikipedia, Google, and Facebook.”
It's also possible foreign entities could try to do the same by creating their own AI-powered chatbots, programmed with fewer restrictions. (The US has already taken some action by blocking Nvidia from selling advanced chips to China.)
In the meantime, Huang tells journalists he supports governments developing regulations focused on reining in generative AI programs such as ChatGPT. But he cautions that politicians should first develop an understanding of the technology before setting up the rules.
“This is an important technology that affects the social fabric, the social norm, the safety of people, and it should be regulated,” Huang says. “It would be a very common sense thing to do.”
During the briefing, Huang also revealed that Nvidia currently makes only a single-digit percentage of its revenue from generative AI projects. But he expects that share to soar in the coming months, citing the surging interest in AI-powered programs.