AI, DevOps, and Employment Law: Thinking Ahead

Last summer, Electric Cloud announced ElectricFlow DevOps Foresight, which, among other things, uses machine learning to help organizations weigh the value in a release (its Jira user stories, for example) against a “release risk score” based on developer and team contribution. This data can be used to assess the strengths and weaknesses of individuals and teams across multiple criteria. The goal is to help customers learn from experience, assign the most appropriate developers to a particular release, and constructively highlight individual skills gaps to justify training.
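
To make the idea of a release risk score concrete, here is a minimal rule-based sketch. The feature names, weights, and function are purely hypothetical illustrations for this post; ElectricFlow's actual model is ML-based and its internals have not been published.

```python
# Hypothetical release risk score over a release's change set.
# All features and weights are invented for illustration.

def release_risk_score(changes):
    """Score a release 0-100 from per-change risk signals (higher = riskier)."""
    score = 0.0
    for change in changes:
        score += 5.0 * change["files_touched"]          # broad changes are riskier
        score += 10.0 * change["touches_core_module"]   # core code carries more risk
        score -= 2.0 * change["author_prior_releases"]  # familiarity lowers risk
    return max(0.0, min(100.0, score))

changes = [
    {"files_touched": 4, "touches_core_module": True, "author_prior_releases": 6},
    {"files_touched": 1, "touches_core_module": False, "author_prior_releases": 2},
]
print(release_risk_score(changes))  # 19.0
```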

While everyone was very enthusiastic about the potential value of applying AI/ML to DevOps use cases such as release risk scoring, a few people asked whether this data could somehow be used to bring punitive action against a particular individual. That was, of course, not our intent, but it’s a great question – one that deserves more discussion!

So, last month we co-hosted a Meetup entitled “AI In DevOps and Associated Employment Law Issues.” Our own Electric Cloud CTO, Anders Wallgren; Stephen Wu, Shareholder at Silicon Valley Law Group; and Peter Gillespie, Partner at Laner Muchin, Ltd., offered their insights on the intersection of metrics, AI/ML systems, and employment law.

Here are a few highlights from the conversation; a link to the full panel recording is at the end of the post.

The Relevant Metrics

During the discussion, Wallgren reflected on the fact that there haven’t been any huge breakthroughs in AI over the last 20 years beyond faster, more powerful compute resources and tons more data. Even with data at massive scale, some decisions, like assessing the risk of a release, still need to be made by a human. “Software has always been a team sport,” he reiterated. “Using these metrics to measure people is foolish.”

Wallgren reminded folks that metrics need to be relevant and designed so they can’t be gamed, whether they are derived from traditional methods or via AI/ML. He referred to his favorite Dilbert cartoon, where the pointy-haired manager announced a bug bounty and Wally said, “I’m gonna write me a minivan this afternoon.” The point is to focus on desired outcomes, like lowering the risk of a release, rather than on individual behaviors.

He added that software metrics have been around as long as software itself and are inherently objective, but they have never been effective for managing people. He later noted that employees will find a way to game the system, so be prepared to watch for it and adjust accordingly.
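
One way to act on that advice is to measure the release, not the people behind it. Here is a minimal sketch, using invented deployment records, of a release-level outcome metric – change failure rate, one of the DORA metrics:

```python
# Minimal sketch: measure the outcome (release-level change failure rate)
# rather than per-person counts. The deployment records are invented.

deployments = [
    {"release": "2.1.0", "caused_incident": False},
    {"release": "2.1.1", "caused_incident": True},
    {"release": "2.2.0", "caused_incident": False},
    {"release": "2.3.0", "caused_incident": False},
]

failures = sum(d["caused_incident"] for d in deployments)
change_failure_rate = failures / len(deployments)
print(f"Change failure rate: {change_failure_rate:.0%}")  # 25%
```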

What About Bias?

At one point in the discussion, Wu raised the “elephant in the room”: bias in the data or the algorithm. If historical datasets are inherently biased – say, the data shows Group A outperforming Group B – models trained on them will continue to favor Group A. He then turned the discussion to what vendors and AI users need to do to minimize that bias.

Wallgren said that algorithms themselves don’t have an opinion; they just offer theories. But those algorithms can be built with the wrong math or the wrong data, which would produce the wrong answer. As long as the data being fed into the system has not been manipulated, intentionally or otherwise, the quality of the system should improve over time. He pointed back to his earlier comment about being clear on the desired outcome: measure releases, not people.
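
A toy example makes Wu’s point tangible: a model fit to skewed history reproduces the skew. This sketch uses invented numbers and a deliberately simple frequency “model”:

```python
# Minimal sketch of how historical bias survives training. The "model"
# just learns the historical promotion rate per group; groups and counts
# are invented for illustration.

from collections import defaultdict

# Hypothetical historical records: (group, promoted)
history = (
    [("A", True)] * 80 + [("A", False)] * 20
    + [("B", True)] * 40 + [("B", False)] * 60
)

def train(records):
    counts = defaultdict(lambda: [0, 0])  # group -> [promotions, total]
    for group, promoted in records:
        counts[group][0] += promoted
        counts[group][1] += 1
    return {g: promos / total for g, (promos, total) in counts.items()}

model = train(history)
print(model)  # {'A': 0.8, 'B': 0.4} -- the model reproduces the historical skew
```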

GDPR and AI: Not Just for Customer Data

Gillespie also brought up a number of relevant aspects of the GDPR that could apply to the use or misuse of this kind of data. For instance, if an employee covered by the GDPR has a disciplinary action taken against them because of a decision from an AI/ML system, that employee has a right to understand how the algorithm works and how it reached that decision. The “black box” nature of many AI/ML systems may create problems for both vendors and employers if those systems are used for HR purposes.
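
That explainability requirement is easier to meet with interpretable models. As a minimal sketch – the features, weights, and threshold below are hypothetical – a linear scorer can log per-feature contributions alongside each decision, so the decision can be reconstructed later; a deep “black box” model offers no such straightforward breakdown:

```python
# Minimal sketch of per-decision audit logging for a linear scoring model.
# Feature names, weights, and the threshold are hypothetical.

WEIGHTS = {"failed_builds": 2.0, "missed_reviews": 1.5, "escaped_defects": 3.0}
THRESHOLD = 10.0

def score_with_explanation(features):
    """Return (flagged, audit_record) with per-feature contributions."""
    contributions = {name: WEIGHTS[name] * features[name] for name in WEIGHTS}
    total = sum(contributions.values())
    return total >= THRESHOLD, {
        "inputs": features,
        "contributions": contributions,  # why the score came out this way
        "total": total,
        "threshold": THRESHOLD,
    }

flagged, audit = score_with_explanation(
    {"failed_builds": 2, "missed_reviews": 1, "escaped_defects": 2}
)
print(flagged, audit["contributions"])  # True {'failed_builds': 4.0, ...}
```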

In closing, Gillespie recommended that organizations adopting any ML-based tool for decision-making have the vendor provide an explanation of how it works, so that employers and employees understand its limitations.

A Final Thought

Whether you use ML in your own company or rely on software from companies that do, the questions around law and ethics will be important ones to work through. Watch the video now to learn more.

Tim Johnson

Tim is product marketing manager at Electric Cloud and focuses on the impact DevOps has on the people and organizations adopting it. He has over 15 years of product marketing experience with industry leaders like BMC Software, Cisco, Google, and SurfControl. He holds an MBA from the University of California, Irvine and is a Scoutmaster and wood turner in his "spare" time.
