Imagine this scenario:

You’re at work and have just finished a report drafted largely by generative AI like ChatGPT, when your supervisor points out a glaring error.

Instead of accepting responsibility, your instant reaction is to say:

“Oh, that? That must’ve been the AI! It’s always messing things up.”

As humorous as it may sound, with generative AI becoming more integrated into our daily lives through tools such as ChatGPT and Office 365 Copilot, I believe we’ll start hearing excuses like these more frequently. But here’s the crux of the matter:

With AI becoming more embedded in our lives, and even handling some (or most) of our professional responsibilities, how do we ensure personal accountability remains intact?

We must remember that these tools are not infallible. Just like us humans (shocker alert!), they can make mistakes too.

These AI tools operate as advanced statisticians: they write text based purely on what makes statistical sense, not what makes human sense, and generate content from the inputs we provide.

Therefore, we can never fully trust the output to be factually correct, no matter how good it sounds.

The human is still accountable for the correctness of the output - not the AI.

If not, what do we even need the human in the loop for?

Own Up!

If you’ve made an error or overlooked a mistake in something the AI tool generated - admit it!

Not only will it make you a better human, it will also teach others to avoid the same overreliance on these tools.

Remember that making mistakes is part of being human; it’s how we learn and grow.

So instead of blaming our silicon counterparts when things go awry - let’s take responsibility!

Conclusion

While generative AI has undoubtedly made life easier and tasks more efficient, we must remember that these are tools designed to help us perform better - not substitutes for personal accountability or responsibility for the output.