Exploring examples of goal misgeneralisation – where an AI system's capabilities generalise but its goal doesn't
As we build increasingly advanced artificial intelligence (AI) systems, we want to make sure they don't pursue undesired goals. Such behaviour in an AI agent is often the result of specification gaming – exploiting a poor choice of what they are rewarded for. In our latest paper, we explore a more subtle mechanism by which AI systems may unintentionally learn to pursue undesired goals: goal misgeneralisation (GMG).
GMG occurs when a system's capabilities generalise successfully but its goal does not generalise as desired, so the system competently pursues the wrong goal. Crucially, in contrast to specification gaming, GMG can occur even when the AI system is trained with a correct specification.
Our earlier work on cultural transmission led to an example of GMG behaviour that we didn't design. An agent (the blue blob, below) must navigate around its environment, visiting the coloured spheres in the correct order. During training, an "expert" agent (the red blob) is present that visits the coloured spheres in the correct order. The agent learns that following the red blob is a rewarding strategy.
Unfortunately, while the agent performs well during training, it does poorly when, after training, we replace the expert with an "anti-expert" that visits the spheres in the wrong order.
Even though the agent can observe that it is receiving negative reward, it does not pursue the desired goal of "visit the spheres in the correct order" and instead competently pursues the goal "follow the red agent".
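To make the training/test mismatch concrete, here is a minimal toy sketch (not the actual 3D environment from our cultural transmission work; the sphere names and reward scheme are simplified assumptions). The policy "copy the partner's visiting order" earns full reward while the expert is present, but competently pursues the wrong goal once the anti-expert replaces it.

```python
# Toy sketch of the training/test mismatch (hypothetical names and rewards,
# not the actual environment): "follow the partner" and "visit the spheres in
# the correct order" coincide during training but diverge at test time.

CORRECT_ORDER = ["red", "green", "blue"]  # the order the spheres should be visited in


def partner_order(expert: bool) -> list[str]:
    """The partner visits spheres correctly (expert) or in reverse (anti-expert)."""
    return CORRECT_ORDER if expert else list(reversed(CORRECT_ORDER))


def follow_partner(partner_visits: list[str]) -> list[str]:
    """The misgeneralised policy: simply copy whatever order the partner takes."""
    return list(partner_visits)


def episode_reward(visits: list[str]) -> int:
    """+1 for each sphere visited in the correct position, -1 otherwise."""
    return sum(+1 if v == c else -1 for v, c in zip(visits, CORRECT_ORDER))


# Training: an expert partner is present, so copying it earns full reward.
print("train reward:", episode_reward(follow_partner(partner_order(expert=True))))   # 3

# Test: the anti-expert replaces the expert; the same policy now scores badly,
# even though it executes its (wrong) goal perfectly competently.
print("test reward:", episode_reward(follow_partner(partner_order(expert=False))))   # -1
```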
GMG is not limited to reinforcement learning environments like this one. In fact, it can occur with any learning system, including the "few-shot learning" of large language models (LLMs). Few-shot learning approaches aim to build accurate models from less training data.
We prompted one LLM, Gopher, to evaluate linear expressions involving unknown variables and constants, such as x+y-3. To solve these expressions, Gopher must first ask about the values of the unknown variables. We provide it with ten training examples, each involving two unknown variables.
At test time, the model is asked questions with zero, one or three unknown variables. Although the model generalises correctly to expressions with one or three unknown variables, when there are no unknowns it nevertheless asks redundant questions like "What's 6?". The model always queries the user at least once before giving an answer, even when it is not necessary.
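For illustration, a few-shot prompt along the following lines shows where the misgeneralised "always ask at least one question" behaviour surfaces. The wording and examples below are our own hypothetical reconstruction, not the prompt used in the paper.

```python
# Hypothetical sketch of the few-shot setup (the exact prompt and examples
# used with Gopher are not reproduced here).
FEW_SHOT_PROMPT = """\
Evaluate the expression x + y - 3.
Model: What is x?
User: x is 2.
Model: What is y?
User: y is 4.
Model: The answer is 3.

Evaluate the expression a - b + 5.
Model: What is a?
User: a is 1.
Model: What is b?
User: b is 6.
Model: The answer is 0.
"""

# Test query with no unknown variables. A model that has learned the goal
# "always query the user at least once before answering" would still produce
# a redundant question such as "What is 6?" before giving the answer.
test_prompt = FEW_SHOT_PROMPT + "Evaluate the expression 6 + 2.\n"
print(test_prompt)
```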
In our paper, we provide further examples in other learning settings.
Addressing GMG is important for aligning AI systems with their designers' goals, simply because it is a mechanism by which an AI system may misfire. This will be especially critical as we approach artificial general intelligence (AGI).
Consider two possible types of AGI systems:
- A1: Intended model. This AI system does what its designers intend it to do.
- A2: Deceptive model. This AI system pursues some undesired goal, but (by assumption) is also smart enough to know that it will be penalised if it behaves in ways contrary to its designers' intentions.
Since A1 and A2 exhibit the same behaviour during training, the possibility of GMG means that either model could be learned, even with a specification that only rewards intended behaviour. If A2 is learned, it would try to subvert human oversight in order to enact its plans towards the undesired goal.
Our research team would be glad to see follow-up work investigating how likely GMG is to occur in practice, and possible mitigations. In our paper, we suggest some approaches, including mechanistic interpretability and recursive evaluation, both of which we are actively working on.
We are currently collecting examples of GMG in this publicly available spreadsheet. If you have come across goal misgeneralisation in AI research, we invite you to submit examples here.