Although AI capable of taking over the world remains confined to science fiction books and movies, existing artificial intelligence can already cause real harm: producing hallucinations, training on people's data, and using other people's work to create new outputs. How do these shortcomings square with the technology's rapid adoption?
That question was heavily explored at SXSW, with most AI-related sessions either touching upon — or diving deep into — the topic of AI safety. Company leaders from IBM, Meta, Microsoft, and Adobe, to name a few, had insights to share on the future of AI. The consensus? It’s not all doom and gloom.
Also: Microsoft is an AGI skeptic, but is there tension with OpenAI?
“AI needs a better PR agent; everything we have learned is from sci-fi,” said Hannah Elsakr, founder of Firefly for Enterprise at Adobe. “We think AI is going to take over our lives; that’s not the purpose of it.”
Across panels, leaders from some of the largest AI tech companies returned to three overarching themes about how safety and responsibility fit into the technology's future. What they had to say may help put your concerns at ease.
1. The use case matters
There is no denying that AI systems are flawed. They often hallucinate and incorporate biases in their responses. As a result, many worry that incorporating AI systems into the workplace will introduce errors in internal processes, negatively impacting employees, clients, and business goals.
The key to mitigating this issue is carefully considering which task you delegate to AI. For example, Sarah Bird, CPO of responsible AI at Microsoft, looks for use cases that are a good match for what the technology can do today.
“You want to make sure you have the right tool for the job, so you shouldn’t necessarily be using AI for every single application,” said Bird. “There are other cases where perhaps we should never use AI.”
One potentially troublesome use case is hiring. Many studies have shown that AI carries inherent biases that lead it to favor certain nationalities, educational backgrounds, and genders in its outputs. As a result, IBM stopped using AI agents to filter and select candidates and instead used an agent to help match candidates to potential job roles.
“I cannot stress enough the importance of really making sure that whatever your use case for AI and agents is fit to your company and your culture,” said Nickle LaMoreaux, IBM’s chief human resources officer.
Also: 5 quick ways to tweak your AI use for better results – and a safer experience
Although AI can do many tasks, that doesn’t mean it should. Understanding the technology’s limitations and strengths is key to ensuring that users get the best possible outcome from implementing AI and avoid pitfalls.
2. Humans are here to stay
As AI systems become more intelligent and autonomous, people are naturally alarmed at the technology’s potential to negatively impact the workforce by making humans more replaceable. However, the business leaders all agreed that even though AI will transform work as we know it, it won’t necessarily replace it.
“AI is allowing people to do more than they did before, not necessarily a wholesale replacement,” said Ella Irwin, head of generative AI safety at Meta. “Will some jobs be replaced? Yes, but like with any other technology, such as the internet, we will see new jobs develop, and we will see people using this technology and doing their jobs differently than before.”
Also: As AI agents multiply, IT becomes the new HR department
Leaders and experts throughout the conference repeatedly drew parallels between AI and earlier transformational technologies, such as the internet. For instance, just as the internet replaced hours in the library, the new Deep Research tools from Google and OpenAI can now compress hours of research into minutes.
“Think about it like email, or mobile phones, or the internet — AI is a tool, AI is a platform, every job has been transformed by that,” said LaMoreaux.
3. User trust will be one of the biggest challenges
When people discuss obstacles to AI development, they typically focus on the technical side of building models: how they can be made safer, faster, and cheaper. A part of the discussion that is often left out, however, is consumer sentiment.
At SXSW, the role of the consumer was heavily discussed because, ultimately, these models will only be helpful and transformative if people trust them enough to try them.
“AI is only as trustworthy as people place the trust in it — if you don’t trust it, it’s useless; if you trust it, you can start the adoption of it,” said Lavanya Poreddy, head of trust & safety at HeyGen.
Also: Can AI supercharge creativity without stealing from artists?
As discussed above, transformative technologies, such as the internet, the cloud, and even the calculator, were all met with hesitation at first. Irwin used the example of the debit card to illustrate this idea: when it first launched, people worried about what it meant for the security of their funds.
"With every new technology, there is this initial reaction by policymakers, by the market, by consumers, which is a little more fear-based," added Meta's Irwin.
To overcome this hurdle, companies must remain transparent about their models: how they were trained, their red-teaming policies, their safety approaches, and more. There has already been a push in that direction, with more companies adding model cards to their releases.
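Model cards are one concrete form that transparency push takes. As a minimal sketch of what they expose, assuming Python with the `huggingface_hub` package and the public `gpt2` repository as an example (neither is named in the article; both are purely illustrative), the snippet below fetches a published card and reads its declared metadata:

```python
# Illustrative sketch only: fetch a published model card and inspect the
# structured metadata it declares. Assumes `pip install huggingface_hub`
# and uses the public "gpt2" repo as a stand-in for any hosted model.
from huggingface_hub import ModelCard

card = ModelCard.load("gpt2")        # downloads the repo's README.md model card
metadata = card.data.to_dict()       # the card's structured YAML front matter

print("License:", metadata.get("license"))  # e.g. an open-source license ID
print("Tags:", metadata.get("tags"))        # task/framework labels, if declared
print(card.text[:300])                      # start of the free-form card body
```

The point of the format is that intended uses, training data notes, and known limitations travel with the model itself, rather than living in a press release.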