How to Prioritize Responsibility Over Reactivity in Evaluating AI
For decades, artificial intelligence (AI) was the realm of PhD-level data scientists. But when OpenAI released ChatGPT in November 2022, it launched AI into the public conversation and made the technology accessible to anyone with internet access—and the media went wild with sensational headlines and dire predictions about how AI would change our work worlds for the better, or more likely, the worse.
Months later, the headlines have trickled down into sales pitches and offer suites designed to help people better integrate AI tools into their work. But every sales narrative still tugs on the fears of falling behind, of being taken over, of being made irrelevant.
In our AI in Business: The Importance of Responsible Innovation webinar, Brad Kasell, Principal Technology Strategist, and Karl Altern, Principal Program Manager of Data Governance, cut through the mania and share their perspectives on AI—where it came from, where it is, where it’s going, and how we can innovate with AI responsibly.
Check out our top misconceptions around AI below and watch the full webinar to get Karl and Brad’s perspective on how IT leaders should respond.
Misconception #1: Everyone’s using AI, and my team is falling behind.
Actually, Karl noted that he’s spoken with many leaders who are taking time to evaluate AI for their businesses. Some have concerns about sensitive company data being used to train models—especially hearing cautionary tales about AI gone wrong.
In other words, while we believe AI will become a powerful tool in future workplaces, you don’t have to jump on the bandwagon today. Vetting and onboarding the right tools is more important than getting them three months earlier than the competition.
Misconception #2: IT teams or regulatory bodies can fully control how people use AI tools.
Organizations, industries, and even governments will undoubtedly struggle to fully monitor and control AI tool usage at work or within society. Because people are naturally self-preserving, most will use tools that save time and energy, even if it means using their personal devices. The risks of AI innovations also aren’t always clear to non-technical users, and these platforms are designed to look and feel trustworthy, which can mask those risks.
Misconception #3: AI will take my job or the jobs of people around me.
For non-data-scientists, ChatGPT seemed to come out of nowhere, threatening to take over their workplaces. Some people felt invigorated, but others felt jarred, surprised, or fearful for their jobs. While this transition may have felt bumpy, we believe that AI will quickly become part of the landscape, another tool in the suite (albeit an especially powerful one). Jobs, roles, and development plans should evolve seamlessly as companies begin bringing AI responsibly into the workplace.
So, how can leaders responsibly innovate in the world of AI?
Prioritizing responsibility over reactivity is the best thing that IT leaders can practice and preach about AI today. Karl and Brad believe that, eventually, the future of AI in workplaces will focus more on model management and less on raw capabilities.
That’s why creating safe environments for using AI and educating people on both the opportunities and risks is essential. For practical advice on achieving this, check out the full webinar.