Artificial intelligence (AI) has been a go-to technology for many people throughout the COVID-19 pandemic. AI has helped businesses solve many issues during this highly disruptive time, from improving the customer experience and detecting fraud to automating work processes. That is helpful, but businesses also need to be aware of the risks of using AI.
The three greatest risks are as follows:
1. Violations of personal privacy. AI is fed by data, and the systems that handle those data are supposed to be designed to protect personal privacy, but that is not always what happens. This reality has been highlighted by a number of reported data breaches, some of which have targeted large businesses, like Twitter and Magellan Health.
Sadly, no business can guarantee the privacy of its users 100 percent of the time, although that is the goal. What businesses can do is ensure that every system and department has the same privacy policies and procedures in place. For example, online sellers walk a fine line between protecting personal data and personalizing their offerings to consumers at a time when, according to a 2019 study, 49 percent of consumers made a purchase after receiving a personalized product recommendation. For these companies, the ability to remain in compliance with privacy laws like the General Data Protection Regulation (GDPR) may rely on zero-party data. According to Forrester, the company that coined the term, zero-party data is data customers intentionally and proactively share with a brand, such as purchase intentions, personal context and information about how they want the brand to recognize them.
2. Unintentional consequences. AI has the potential to unintentionally discriminate. For example, bank systems that gather anonymous zip code and income data to create targeted offerings may offer low-interest-rate loans or credit cards to consumers in some zip codes and not in others. This may be discriminatory even though the underlying data do not identify the particular groups being discriminated against.
When these types of unintended consequences are revealed, they can cause legal and reputational damage to the business, which can be difficult to reverse.
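To see how this kind of proxy discrimination can happen even when a system never sees protected attributes, consider a minimal sketch. The data and the lending rule below are entirely hypothetical, invented for illustration: group membership never appears as an input to the rule, yet because it correlates with zip code in the made-up data, the outcomes still diverge sharply by group.

```python
# Hypothetical illustration of proxy discrimination: a "neutral" rule
# that uses only zip code can still produce disparate outcomes when
# zip code correlates with a protected group.

# Toy applicants: (zip_code, group). The group label is never shown to
# the rule, but in this invented data it is unevenly distributed
# across zip codes.
applicants = (
    [("10001", "A")] * 80 + [("10001", "B")] * 20 +
    [("60629", "A")] * 20 + [("60629", "B")] * 80
)

def offer_low_rate(zip_code):
    """A seemingly neutral rule: low-rate offers only in zip 10001."""
    return zip_code == "10001"

def offer_rate(group):
    """Share of a group's applicants who receive the low-rate offer."""
    members = [z for z, g in applicants if g == group]
    return sum(offer_low_rate(z) for z in members) / len(members)

print(f"Group A low-rate offer rate: {offer_rate('A'):.0%}")  # 80%
print(f"Group B low-rate offer rate: {offer_rate('B'):.0%}")  # 20%
```

The rule itself never references group membership, which is exactly why audits of AI-driven offerings need to examine outcomes by group, not just inspect the inputs.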
3. Human error. People are at the core of AI. Whether they are creating the underlying algorithms or directly using the software, human error can cause serious issues, ranging from accidentally revealing customer data to allowing a firewall to be breached.
You can mitigate these risks in the following ways:
1. Establish roles and functions. Review the various roles of your IT and security teams. Each of these functions is somewhat different, and formalizing the responsibilities of each member of the team can make it easier to identify gaps in the company’s security plan.
2. Provide training. Train everyone at the company to recognize the signs of a breach, and create robust procedures for reporting any gaps they find in the company’s systems. For example, teaching the entire team about pattern recognition will enable them to flag anything outside the norm, as anomalies can indicate a problem.
3. Keep staff informed. Help everyone at the company understand that this is a constantly evolving area of business operations and that everyone is involved. Making the entire staff responsible for keeping the data gathered by the company’s AI safe and secure gives everyone a reason to buy in to the concept.