Estimated reading time: 4 minutes
Artificial intelligence (AI) is a regular topic of conversation in today’s business environment. One in four employers is using AI to support HR-related activities, according to an article on the Society for Human Resource Management website. And that number is sure to increase over time.
We’ve been talking about artificial intelligence on HR Bartender and some of the things we can do to learn more about the use of AI. It’s important to note that AI isn’t just a human resources tool. It’s an organizational tool, which means that the organization needs to work together to have an aligned artificial intelligence strategy.
During this year’s HR Technology Online event, I listened to Keegan Fonte from PrimePay talk about using AI ethically. My takeaway from his presentation was that the ethical use of AI is very much tied to the way that HR and the rest of the organization partners with each other as well as their external suppliers. Here’s a high-level overview of what to consider.
Be prepared to address bias. We already hear the stories about artificial intelligence and bias. There’s no good reason to deny that the potential exists. That being said, organizations and HR departments can mitigate AI bias by regularly examining the data they use and the decisions it drives. Poor data isn’t going to help AI learn what it needs to learn to help the organization. That will lead to poor company decisions, which, if left unexamined, will only perpetuate AI bias.
Know and monitor data privacy legislation. There are already some pieces of legislation in place related to data privacy – the General Data Protection Regulation (GDPR) is one of the most prominent. But as AI becomes more widely adopted, there will be more laws. So, HR professionals need to partner with their legal counsel to stay current with data privacy legislation. Follow articles, podcasts, webinars, and conferences to learn about regulations that could impact the use of AI.
Act in a transparent manner. We’ve talked before about the importance of having a company AI strategy as part of an overall technology strategy. In addition, Fonte stressed during the session that organizations need to have company policies regarding the use of artificial intelligence. HR will need to bring together key stakeholders in operations, legal, technology, etc. to discuss what a policy would entail and how it would be monitored and enforced.
Understand the risks. Speaking of strategies and policies, organizations need to understand the risks associated with using artificial intelligence. Some of those risks we know like bias and regulations. There could also be risks we haven’t uncovered yet because AI is still relatively new. Organizations need to discuss their risk tolerance and how they want to handle potential risks when they learn of them.
Ask your partners tough questions. Organizations are most likely to work with a vendor when they start using artificial intelligence. Some of the company’s current technology vendors might introduce an AI feature into existing software. HR departments will want to ask about the vendor’s track record with AI and possibly speak with some existing clients that are using the tool. They also might want to ask how the AI’s data can be audited. Vendors should be prepared to answer these questions.
Using artificial intelligence ethically begins with the organization reaching consensus on how AI will be used and how new information regarding AI will be evaluated. HR departments need to understand their data, regulations, and current technologies. They will want to partner with IT, legal, risk, and their external vendors to have a legally compliant and ethically responsible AI strategy and policy.
Image captured by Sharlyn Lauby while exploring the streets of Fort Lauderdale, FL