Drone Days
OpenAI has hopped into bed with a defense contractor that makes swarming killer drones. What could possibly go wrong?
In a statement announcing the partnership with Anduril, which was cofounded by Oculus VR's Palmer Luckey and takes its name from the glowing sword given to Aragorn by Elves in "The Lord of the Rings," OpenAI CEO Sam Altman waxed lyrical about how important drones are for democracy.
“OpenAI builds AI to benefit as many people as possible, and supports US-led efforts to ensure the technology upholds democratic values,” Altman said. “Our partnership with Anduril will help ensure OpenAI technology protects US military personnel, and will help the national security community understand and responsibly use this technology to keep our citizens safe and free.”
As Anduril cofounder and CEO Brian Schimpf said in the statement, the ChatGPT maker’s AI models will help the firm improve its air defense systems, essentially making the Ukraine-proven battle drones smarter and faster.
“Together, we are committed to developing responsible solutions that enable military and intelligence operators to make faster, more accurate decisions in high-pressure situations,” Schimpf said.
Policy Shift
In an interview with Wired, a former OpenAI employee who requested anonymity said the company's AI models would help Anduril "assess drone threats more quickly and accurately, giving operators the information they need to make better decisions while staying out of harm's way."
Before this year, OpenAI's usage policies prohibited any use of its models for "military and warfare" or "weapons development." After The Intercept reported in January that the prohibition had been quietly removed, the company announced at Davos that it would be providing the Pentagon with cybersecurity tools: a mask-off moment that, according to Wired's insiders, unsettled employees at the firm but never sparked outright protest.
Though an OpenAI spokesperson insisted in a statement to MIT Technology Review that the partnership "is consistent with our policies and does not involve leveraging our technology to develop systems designed to harm others," once the technologies are integrated, the firm will be squarely in the business of warfare.
All told, the deal looks very much like helping a company that sells attack drones operate more effectively, and that seems like a glaring loophole in OpenAI's policy against using its tech to "harm yourself or others."