Interview with Kate Candon: Leveraging explicit and implicit feedback in human-robot interactions

2025-07-30


In this interview series, we’re meeting some of the AAAI/SIGAI Doctoral Consortium participants to find out more about their research. Kate Candon is a PhD student at Yale University interested in understanding how we can create interactive agents that are more effectively able to help people. We spoke to Kate to find out more about how she is leveraging explicit and implicit feedback in human-robot interactions.

Could you start by giving us a quick introduction to the topic of your research?

I study human-robot interaction. Specifically, I’m interested in how we can get robots to better learn from humans in the way that they naturally teach. Typically, a lot of work in robot learning is with a human teacher who is only tasked with giving explicit feedback to the robot, but isn’t necessarily engaged in the task. So, for example, you might have buttons for “good job” and “bad job”. But we know that humans give a lot of other signals: things like facial expressions and reactions to what the robot’s doing, maybe gestures like scratching their head. It could even be something like moving an object to the side after the robot hands it to them – that’s implicitly saying it was the wrong thing to hand them at that time, because they’re not using it right now. Those implicit cues are trickier; they need interpretation. However, they are a way to get additional information without adding any burden to the human user. In the past, I’ve looked at these two streams (implicit and explicit feedback) separately, but my current and future research is about combining them. Right now, we have a framework, which we are working on improving, where we can combine the implicit and explicit feedback.
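
(As a rough, purely illustrative sketch of the general idea – not the actual framework, which the interview doesn’t detail – one could imagine blending a sparse explicit button press with a noisier, confidence-weighted implicit cue into a single learning signal. All names and weights below are hypothetical.)

```python
# Hypothetical sketch: combine an explicit button press with an implicit cue
# into one scalar reward for a feedback-driven learner.

from dataclasses import dataclass
from typing import Optional

@dataclass
class FeedbackEvent:
    explicit: Optional[int] = None    # +1 ("good job"), -1 ("bad job"), or None if no button press
    implicit: Optional[float] = None  # e.g. a score in [-1, 1] inferred from a reaction or action
    implicit_confidence: float = 0.0  # how much we trust the interpretation of the implicit cue

def combined_reward(event: FeedbackEvent,
                    explicit_weight: float = 1.0,
                    implicit_weight: float = 0.5) -> float:
    """Blend sparse explicit feedback with noisier implicit feedback.

    Explicit feedback is taken at face value; implicit feedback is down-weighted
    by a confidence estimate so that ambiguous cues contribute little.
    """
    reward = 0.0
    if event.explicit is not None:
        reward += explicit_weight * event.explicit
    if event.implicit is not None:
        reward += implicit_weight * event.implicit_confidence * event.implicit
    return reward

# No button press, but a mildly negative reaction we are fairly confident about
# still contributes a small negative signal.
print(combined_reward(FeedbackEvent(implicit=-0.6, implicit_confidence=0.7)))
```

The confidence weight is the key design choice in this sketch: ambiguous implicit cues barely move the signal, while explicit presses count in full.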

In terms of picking up on the implicit feedback, how are you doing that, what’s the mechanism? Because it sounds incredibly difficult.

It can be really hard to interpret implicit cues. People will respond differently, from person to person, culture to culture, etc. And so it’s hard to know exactly which facial reaction means good versus which facial reaction means bad.

So right now, the first version of our framework is just using human actions. Seeing what the human is doing in the task can give clues about what the robot should do. The human and the robot have different action spaces, but we can find an abstraction so that, when the human takes an action, we know which similar actions the robot could take. That’s the implicit feedback right now. And then, this summer, we want to extend that to using visual cues and looking at facial reactions and gestures.
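
(To make the action-abstraction idea concrete, here is a minimal hypothetical sketch: human and robot actions are mapped to a shared set of abstract task steps, and an observed human action then suggests which robot actions would currently be useful. The pizza-themed action names are invented for illustration and are not from the actual system.)

```python
# Hypothetical mapping of concrete actions to shared abstract task steps.
HUMAN_TO_ABSTRACT = {
    "grate_cheese":    "prepare_topping",
    "slice_pepperoni": "prepare_topping",
    "spread_sauce":    "assemble_pizza",
    "place_toppings":  "assemble_pizza",
    "wash_bowl":       "clean_up",
}

ROBOT_ACTIONS_BY_ABSTRACT = {
    "prepare_topping": ["fetch_cheese", "fetch_pepperoni"],
    "assemble_pizza":  ["pass_sauce_ladle", "pass_topping_bowl"],
    "clean_up":        ["collect_dirty_dishes", "wipe_counter"],
}

def suggested_robot_actions(observed_human_action: str) -> list[str]:
    """Treat the human's own action as an implicit hint about what is useful now."""
    abstract_step = HUMAN_TO_ABSTRACT.get(observed_human_action)
    return ROBOT_ACTIONS_BY_ABSTRACT.get(abstract_step, [])

# If the person starts spreading sauce, robot actions related to assembling
# the pizza become the more relevant candidates.
print(suggested_robot_actions("spread_sauce"))
```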

So what kind of scenarios have you been testing it on?

For our current project, we use a pizza-making setup. Personally, I really like cooking as an example because it’s a setting where it’s easy to imagine why these things would matter. I also like that cooking has this element of recipes and there is a formula, but there’s also room for personal preferences. For example, somebody likes to put their cheese on top of the pizza so it gets really crispy, whereas other people like to put it under the meat and veggies so that it’s more melty instead of crispy. Or even, some people clean up as they go versus others who wait until the end to deal with all the dishes. Another thing that I’m really excited about is that cooking can be social. Right now, we’re just working in dyadic human-robot interactions, where it’s one person and one robot, but another extension that we want to work on in the coming year is extending this to group interactions. So if we have multiple people, maybe the robot can learn not only from the person reacting to the robot, but also from a person reacting to another person, extrapolating what that might mean for them in the collaboration.

Could you say a bit about how the work that you did earlier in your PhD has led you to this point?

When I first started my PhD, I was really interested in implicit feedback, and I thought I wanted to focus on learning only from implicit feedback. One of my current lab mates was focused on the EMPATHIC framework, and was looking into learning from implicit human feedback, and I really liked that work and thought it was the direction I wanted to go in.

However, that first summer of my PhD was during COVID, so we couldn’t really have people come into the lab to interact with robots. So instead I did an online study where I had people play a game with a robot. We recorded their faces while they were playing the game, and then we tried to see whether, based on just facial reactions, gaze, and head orientation, we could predict what behaviors they preferred for the agent that they were playing with in the game. We actually found that we could predict which of the behaviors they preferred decently well.
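
(As a loose illustration of what such a prediction setup could look like – not the study’s actual pipeline or features – one might summarize each reaction clip as a feature vector of facial, gaze, and head-pose measurements and fit a simple classifier. The data below is random stand-in data, so the reported accuracy is meaningless except as a demonstration of the workflow.)

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Stand-in data only: each row summarizes one participant's reaction to one
# agent behavior (e.g. averaged facial action units, gaze offsets, head-pose
# angles); the label marks whether that behavior was the preferred one.
X = rng.normal(size=(200, 12))
y = rng.integers(0, 2, size=200)

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean cross-validated accuracy: {scores.mean():.2f}")
```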

The thing that was really cool was that we found how much context matters. And I think this is something that is really important for going from a solely teacher-learner paradigm to a collaboration – context really matters. What we found is that sometimes people would have really big reactions, but it wasn’t necessarily to what the agent was doing; it was to something that they had done in the game. For example, there’s this clip that I always use in talks about this. This person’s playing and she has this really noticeably confused, upset look. So at first you might think that’s negative feedback: whatever the robot did, the robot shouldn’t have done that. But if you actually look at the context, we see that it was the first time she lost a life in the game. For the game we made a multiplayer version of Space Invaders, and she got hit by one of the aliens and her spaceship disappeared. So based on the context, when a human looks at that, we actually say she was just confused about what happened to her. We want to filter that out and not consider it when reasoning about the human’s behavior. I think that was really exciting. After that, we realized that using implicit feedback only was just so hard. That’s why I’ve taken this pivot, and now I’m more interested in combining the implicit and explicit feedback.

You mentioned the explicit element would be more binary, like good feedback, bad feedback. Would the person-in-the-loop press a button or would the feedback be given through speech?

Right now we just have buttons for “good job” and “bad job”. In an HRI paper we looked at explicit feedback only. We had the same Space Invaders game, but we had people come into the lab, with a Nao robot, a little humanoid robot, sitting on the table next to them playing the game. We made it so that the person could give positive or negative feedback to the robot during the game so that it would hopefully learn better helping behavior in the collaboration. But we found that people wouldn’t actually give that much feedback because they were focused on just trying to play the game.

And so in this work we looked at whether there are different ways we can remind the person to give feedback. You don’t want to be doing it all the time, because it’ll annoy the person and maybe make them worse at the game if you’re distracting them. And also you don’t necessarily always want feedback; you just want it at useful points. The two conditions we looked at were: 1) should the robot remind someone to give feedback before or after it tries a new behavior? 2) should it use an “I” versus “we” framing? For example, “remember to give feedback so I can be a better teammate” versus “remember to give feedback so we can be a better team”, things like that. And we found that the “we” framing didn’t actually make people give more feedback, but it made them feel better about the feedback they gave. They felt like it was more helpful, kind of building camaraderie. And that was only explicit feedback, but now we want to see whether, if we combine that with a reaction from someone, maybe that point would be a good time to ask for explicit feedback.

You’ve already touched on this but could you tell us about the future steps you have planned for the project?

The big thing motivating a lot of my work is that I want to make it easier for robots to adapt to humans with these subjective preferences. I think in terms of objective things, like being able to pick something up and move it from here to there, we’ll get to a point where robots are pretty good. But it’s these subjective preferences that are exciting. For example, I love to cook, and so I want the robot to not do too much, just maybe do my dishes whilst I’m cooking. But someone who hates to cook might want the robot to do all of the cooking. Those are things that, even with the perfect robot, it can’t necessarily know. And so it has to be able to adapt. And a lot of the current preference learning work is so data hungry that you have to interact with it tons and tons of times for it to be able to learn. I just don’t think that’s realistic for people who actually have a robot in the home. If after three days you’re still telling it “no, when you help me clean up the living room, the blankets go on the couch, not the chair” or something, you’re going to stop using the robot. I’m hoping that this combination of explicit and implicit feedback will make it more naturalistic. You don’t necessarily have to know exactly the right way to give explicit feedback to get the robot to do what you want it to do. Hopefully, through all of these different signals, the robot will be able to home in a little bit faster.

I think a big future step (though not necessarily in the near future) is incorporating language. It’s very exciting how much better large language models have gotten, but there are also a lot of interesting questions. Up until now, I haven’t really included natural language. Part of it is because I’m not fully sure where it fits in the implicit versus explicit delineation. On the one hand, you can say “good job robot”, but the way you say it can mean different things – the tone is very important. For example, if you say it with a sarcastic tone, it doesn’t necessarily mean that the robot actually did a good job. So language doesn’t fit neatly into one of the buckets, and in future work I’m interested in thinking more about that. I think it’s a super rich space, and it’s a way for humans to be much more granular and specific in their feedback in a natural way.

What was it that inspired you to go into this area then?

Honestly, it was a little accidental. I studied math and computer science in undergrad. After that, I worked in consulting for a couple of years and then in the public healthcare sector, for the Massachusetts Medicaid office. I decided I wanted to go back to academia and get into AI. At the time, I wanted to combine AI with healthcare, so I was initially thinking about clinical machine learning. I’m at Yale, and there was only one person doing that at the time, so I was looking at the rest of the department, and then I found Scaz (Brian Scassellati), who does a lot of work with robots for people with autism and is now moving more into robots for people with behavioral health challenges, things like dementia or anxiety. I thought his work was super interesting. I didn’t even realize that that kind of work was an option. He was working with Marynel Vázquez, a professor at Yale who was also doing human-robot interaction. She didn’t have any healthcare projects, but I interviewed with her, and the questions she was thinking about were exactly what I wanted to work on. I also really wanted to work with her. So I accidentally stumbled into it, but I feel very grateful because I think it’s a way better fit for me than clinical machine learning would necessarily have been. It combines a lot of what I’m interested in, and it allows me to flex back and forth between the mathy, more technical work and the human element, which is also super interesting and exciting to me.

Have you got any advice you’d give to someone thinking of doing a PhD in the field? Your perspective will be particularly interesting because you’ve worked outside of academia and then come back to start your PhD.

One thing is that, I mean, it’s kind of a cliché, but it’s not too late to start. I was hesitant because I’d been out of the field for a while, but I think if you can find the right mentor, it can be a really good experience. I think the biggest thing is finding a good advisor who you think is working on interesting questions, but also someone you want to learn from. I feel very lucky with Marynel; she’s been a fabulous advisor. I’ve worked pretty closely with Scaz as well, and they both foster this excitement about the work, but also care about me as a person. I’m not just a cog in the research machine.

The other thing I’d say is to find a lab where you have flexibility if your interests change, because it is a long time to be working on a set of projects.

For our final question, have you got an interesting non-AI related fact about you?

My main summertime hobby is playing golf. My whole family is into it – for my grandma’s 100th birthday party we had a family golf outing where we had about 40 of us golfing. And actually, that summer, when my grandma was 99, she had a par on one of the par threes – she’s my golfing role model!

About Kate

Kate Candon is a PhD candidate at Yale University in the Computer Science Department, advised by Professor Marynel Vázquez. She studies human-robot interaction, and is particularly interested in enabling robots to better learn from natural human feedback so that they can become better collaborators. She was selected for the AAMAS Doctoral Consortium in 2023 and HRI Pioneers in 2024. Before starting in human-robot interaction, she received her B.S. in Mathematics with Computer Science from MIT and then worked in consulting and in government healthcare.




AIhub is a non-profit dedicated to connecting the AI community to the public by providing free, high-quality information in AI.





Lucy Smith is Managing Editor for AIhub.

