AI: It’s All About the Context

Without the right context, decisions made with AI support can create ethical and legal dilemmas for many enterprises.
By Martha Buyer

As the new year gets rolling and I see increasing numbers of ads on TV and online for AI “solutions” (I HATE that word in this context), I remain concerned (to the point of neurotic) about the serious downside of reliance on AI. Is AI a powerful tool? Absolutely. But its judicious use is critical if it’s going to bring sound and useful results to the enterprise. If the answers to tough questions were always black and white, they would be much easier to find and implement. The challenge is that business, government, and life are much, much messier. Can AI help sort through the glop? Absolutely, but any reliance on its outputs must be made with great care.
 
Many years ago, a friend who ran a company that managed a series of airport parking lots throughout the U.S. asked me what his business was really about. Parking? Real estate? Hardly. It’s about service, and if you think about it, every enterprise exists in some way or another to serve its customers. If you go to a routine doctor’s appointment and are kept waiting three hours because the office has overbooked, that’s a negative customer service experience, and it will likely push you and others like you to look elsewhere for non-emergency medical care. Bad service equals “I’m taking my business (and money) elsewhere.”
 
AI has certainly made its way into the customer experience, but questions need to be asked, including:
  1. What are we measuring?
  2. How are the measurements being made?
  3. Are these the right things to be measured?
  4. How are we validating that our data input is accurate and the output useful?
  5. How is the entity that’s performing an AI-based service weighting the factors/raw information that we provide?
  6. How are we using the information that’s generated from the AI provider that we’ve chosen?
  7. How comfortable are we that what the AI yields creates an accurate picture of what we’re trying to measure?

My first “real” job was in a call center for a local bank that handled Visa and Mastercard transactions. Not surprisingly, no one ever called to say, “Hey, my bill looks terrific this month, thanks a lot.” So, callers were often ill-informed, annoyed, or just plain hostile, and learning how to defuse them was part of the job. Agents were ranked monthly by performance.
 
Interestingly, the agent who was smarter than everyone else in the room always finished second, even though everyone knew how smart and capable she was at the job. Nonetheless, the agent ranked #1 was rewarded with bonuses and other benefits. Only after some time did someone bother to dig into the underlying facts behind agent #1’s success. What she did routinely was handle the easy calls and “accidentally” disconnect the hard ones so that she’d be available for more easy calls, thus improving her call count and holding on to her rank as #1.
 
This was a primitive precursor to today’s AI systems, but because managers were looking only at the call center peg counts, no one was considering the actual quality of the service that the #1 agent provided. While AI systems have become infinitely more complex, the underlying issue remains the same: managers and those who rely on AI-based information must understand the context of both the input data and the generated output. With additional complexity comes additional responsibility for validating both.
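
To make the measurement problem concrete, here’s a minimal, hypothetical sketch of the two ways those agents could have been ranked. Every name and number below is invented for illustration; the point is simply that a ranking built on raw call counts rewards exactly the behavior described above, while one that accounts for dropped calls and resolutions does not.

    # Hypothetical illustration only: all agents and figures are invented.
    agents = [
        # (name, calls handled, calls "accidentally" dropped, issues resolved)
        ("Agent 1", 120, 30, 85),   # games the metric by dropping hard calls
        ("Agent 2", 100, 2, 96),    # takes the hard calls and resolves them
    ]

    def peg_count_rank(agents):
        # Rank purely by raw call volume, as the managers did.
        return sorted(agents, key=lambda a: a[1], reverse=True)

    def quality_rank(agents):
        # Rank by resolutions per call touched, so dropped calls count against you.
        return sorted(agents, key=lambda a: a[3] / (a[1] + a[2]), reverse=True)

    print([a[0] for a in peg_count_rank(agents)])  # ['Agent 1', 'Agent 2']
    print([a[0] for a in quality_rank(agents)])    # ['Agent 2', 'Agent 1']

The data going in is identical; only the choice of what to measure changes who looks like the star.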
 
In many cases, AI can be used to justify desired outcomes. It’s for this reason that all parties to an AI transaction must be aware of how the underlying algorithms are weighting different factors. Is the senior partner at a law firm simply looking at the billable hours of junior associates before deciding which one should be promoted, or are they including other factors like the nature of the work, the nature of the client, and the attorney’s ability to get work done in a timely manner? There’s no right answer to this question without placing it in the appropriate context; AI tools can only generate an answer based on the input they actually receive.
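
A small, entirely hypothetical sketch shows how much the weights matter. The associates, factor scores, and weights below are invented; shifting emphasis from billable hours alone to a blend of factors flips the recommendation.

    # Hypothetical illustration only: associates, scores, and weights are invented.
    associates = {
        "Associate A": {"billable_hours": 0.95, "work_quality": 0.60, "timeliness": 0.55},
        "Associate B": {"billable_hours": 0.70, "work_quality": 0.90, "timeliness": 0.92},
    }

    def weighted_score(scores, weights):
        # Combine normalized factor scores using the chosen weights.
        return sum(scores[factor] * w for factor, w in weights.items())

    hours_only = {"billable_hours": 1.0, "work_quality": 0.0, "timeliness": 0.0}
    balanced   = {"billable_hours": 0.4, "work_quality": 0.3, "timeliness": 0.3}

    for label, weights in [("hours only", hours_only), ("balanced", balanced)]:
        pick = max(associates, key=lambda a: weighted_score(associates[a], weights))
        print(label, "->", pick)
    # hours only -> Associate A
    # balanced   -> Associate B

Unless the firm can see, and question, those weights, the tool’s “answer” is just the weighting choice echoed back.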
 
Legally, there are also several factors to be considered, most notable among them where liability falls. If an entity makes a decision based on AI-generated information, and the decision is wrong, who is legally “on the hook” for the error? The AI provider? The best answer is “maybe,” which is why any enterprise considering the use of AI to make decisions shouldn’t only think this question through but should write the answer into its agreements with AI providers.
 
Another important legal consideration is how updates can and will be built into the service that’s being acquired. That is, if you buy access to a service for X years, what guarantees are there that the vendor of the AI product will keep your service running at the latest and greatest levels of sophistication and accuracy?
 
There are no easy answers here, but until enterprises can identify the right questions and qualify the answers, they’re in a tough spot when it comes to knowing how best to use the information.

 
