Ethical concerns surrounding artificial intelligence seem to sit at the top of the tech news cycle, which is surprising, considering how much of an issue they still appear to be for major tech companies.
But I digress. Every month, there seems to be some story about how a powerful AI went rogue (Exhibit A: Gemini’s images in early 2024) or a company decided to dissolve an AI ethics team (Exhibit B: OpenAI around the time GPT-4o was released). In response, there is usually an outcry of criticism and a renewed call for more censorship, oversight, and care surrounding AI and its future. A call that seems to go unanswered.
But perhaps that’s because we’re focusing our energies in the wrong place.
The issue with AI ethics is that, like many other aspects of Silicon Valley, it has become "clique-y." So many discussions around AI ethics take place in university research presentations and center on technical failures within model development — settings and topics that simply do not pertain to most of the people who actually use AI. Frankly, if one is expected to be an expert technologist or full-time student or startup CEO before…