
If we use AI to do our work – what is our job, then?


There is no modality that AI does not handle. And AI systems reach even further, planning advertising and marketing campaigns, automating social media posts, and more. Most of this was unthinkable a mere ten years ago.

Back then, the first machine-learning-driven algorithms were taking their initial steps: out of the research labs, into first products. They began to curate content on YouTube and social media sites. They started recommending movies on Netflix and songs on Spotify. They ranked search results. They played strategic games on par with humans. The general rise of AI-enabled systems has been spectacular.

AI in the workplace

And the workplace is not immune to this. As an undergrad, I studied how to construct hyperplanes, centroids, and backpropagation rules, and for most of my studies, AI was mostly regarded as an academic research direction. Since I entered the job market, this has changed A LOT. Employers and employees alike have realized the potential of AI for work. In most (digital) workplaces, AI is rapidly becoming an invisible co-worker.

Many dedicated AI tools have already made the leap onto our desktops: programmers use AI-assisted coding tools, data analysts let AI build pipelines from a single sample file, and designers draft faster with AI-generated visuals. These tools undeniably make work easier. But they also raise a deeper question:

What is one’s work?

What is truly my own work? Do I still need to interact with my code, with anything, really, in detail?

The more we AI-ify our workflows, the less we need to engage with our work material. It might well turn out that we no longer need to become experts, possessing deep knowledge about a fairly narrow topic, but rather shallow surfers, taking an AI-glimpse here and there.

In other words, we become mere managers of how work is done by AI. Notice there’s no “our” in front of work.

Is that, can that be fulfilling? Do we not need some sense of depth in our work?

I well remember a time when I had to handle multiple concurrent projects. Back then, before AI took hold in the offices, I often switched between three different and mostly unrelated projects per day. Together with semi-urgent interruptions, there was little room to spend extended time on a single topic; before I could go deep enough into any topic to make actual progress, I already had to switch.

Nowadays, AI systems often act as proxies, preventing us from needing to engage with a project in the first place. Even though we might be working on a single project only, we prompt our way forward – which leads to the question:

If we use AI to do our work, what is our work, then?

Is our work simply doing more work? AI is often hailed as allowing us to do more, which implies that, given the same working hours, we engage with the material even less.

This implies that, by definition, we cannot gain profound experience in one topic.

This, further, implies that we could, in principle, do any job that is related enough to our skills.

Which, finally, means that somebody else could do our job.

We are, thus, replaceable as soon as AI automation scales.

How can we prevent this?

Use AI deliberately: Think first, prompt later

In my opinion, the only way* is: use AI deliberately, selectively. Do not outsource your thinking. Don’t let your ability to think deeply and critically decay through unconscious non-use.

It’s completely fine — often even smart — to use AI tools for the truly boring tasks that any decently skilled person could do. For programmers, safe (in the sense of not making us dumber) uses of AI include: summarizing codebases, creating README documents, generating boilerplate, or loading and cleaning data.

But when the task at hand requires human judgment, interpretation, or specific design choices and tradeoffs, that’s when you should resist the temptation to hand it off. These are exactly the moments where you build the expertise that keeps you irreplaceable.

To make this more concrete, you can use a simple heuristic when deciding whether to use AI assistance:

  1. Tasks that are low-stakes, repetitive, and well-defined → let AI help.
    Examples: formatting code, generating test stubs, writing SQL queries.
  2. Tasks that are high-stakes, ambiguous, or require human judgment → do it yourself.
    Examples: designing system architecture, interpreting experiment results, making ethical decisions.
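To illustrate, the heuristic above could be sketched as a trivial keyword check. This is purely a toy sketch under my own assumptions: the signal words and the function name are invented for illustration, not a real classifier.

```python
# Toy sketch of the "high-stakes vs. low-stakes" heuristic.
# The signal words below are illustrative assumptions only.
HIGH_STAKES_SIGNALS = {"architecture", "ethics", "interpret", "design", "tradeoff"}

def should_delegate_to_ai(task_description: str) -> bool:
    """Return True if the task looks low-stakes and well-defined,
    i.e. safe to hand off to an AI assistant."""
    words = set(task_description.lower().split())
    # Delegate only if no high-stakes signal word appears.
    return not (words & HIGH_STAKES_SIGNALS)

print(should_delegate_to_ai("formatting code"))             # True
print(should_delegate_to_ai("design system architecture"))  # False
```

In practice, of course, the judgment call is yours, not a keyword list's; the point is simply to make the pause-and-classify step explicit.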

This rule of thumb keeps the “boring” stuff automated while protecting the work that actually builds your expertise. To integrate the heuristic into daily practice, intentionally pause before a task. Ask yourself: Do I want to, or need to, understand this deeply, or just get it done?

Then, if the goal is understanding → start manually. Code the first draft, debug yourself, sketch the design. Once you’ve thought it through, you can augment your work with the output of an AI system.

However, if the goal is mere output → let AI accelerate you. Prompt it, adapt it, and repeat with the next task.

Think of it as a mantra: “Think first, prompt later.”

Then, at the end of a work week, you can look back: which tasks did you outsource to AI this week? Did you learn something from those tasks, or just complete them? Where could you have benefited from engaging more deeply?

Closing thought

It turns out that, as AI is used more and more in the workplace, our real job might not be to churn out more output with AI. Instead, our job is to engage directly with the material when it matters — to build the kind of judgment, insight, and depth that no system can replace.

So, use AI deliberately. Yes, automate the boring parts, but protect the parts that make you grow. That balance is what will keep your work not only valuable, but also fulfilling.


* A non-alternative for most machine learning folks who have spent considerable time building a career in data science: switching careers to do something manual and offline, such as construction work, hairdressing, or waiting tables.
