Ethics and Analytics

How do you consider/account for the ethical implications of the predictive analytics that we are running? I sometimes worry that we are so pressed to move forward with gaining scientific insight that we don’t consider the potential negative ramifications of our analyses (e.g., machine learning won’t negate racial biases that exist in our orgs already).

Hi, Sahra. Thanks for your question. As a long-time promoter of #PeopleDataforGood, this is a question that's near and dear to me, and one that'll require continuous attention over time. In short, here's a reply I shared with another questioner. Before pasting it, though, I will say this: we have to have gates, wickets, whatever you want to call them, that determine go/no-go decisions throughout the Data-to-Change Process. This includes when we're collecting data, analyzing data, packaging insights, distributing insights, etc. This takes discipline and forethought, and, unfortunately, too few have allocated the time and energy to understand how best to deal with current data, analysis, and insights, as well as future data, analysis, and insights. That's unfortunate and risky, as the world is only moving faster. Again, thanks for your question, Sahra! Hope this helps in some way.
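To make the "gates" idea a bit more concrete, here's a minimal sketch of what explicit go/no-go checks could look like if encoded directly. The stage names and checks are hypothetical placeholders, not a prescribed standard; the point is simply that each stage blocks until its reviews are actually signed off.

```python
# Minimal sketch of go/no-go gates across a data-to-change pipeline.
# Stage names and checks are illustrative placeholders only.
GATES = {
    "collect": ["consent documented", "data minimization reviewed"],
    "analyze": ["bias/fairness review done", "access limited to need-to-know"],
    "package": ["individuals not re-identifiable in outputs"],
    "distribute": ["audience approved", "benefit to data subjects articulated"],
}

def gate_decision(stage, signoffs):
    """Return True (go) only if every check for the stage is signed off."""
    missing = [check for check in GATES[stage] if check not in signoffs]
    if missing:
        print(f"NO-GO at '{stage}': missing {missing}")
        return False
    print(f"GO: '{stage}' gate passed")
    return True

# Example: the analyze gate blocks until the fairness review is recorded.
gate_decision("analyze", {"access limited to need-to-know"})
```

The discipline isn't in the code, of course; it's in deciding the checks up front and recording who signed off, and when.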

From before: Tools that capture and analyze people-related data are now commonplace. I use the term "people-related data" because not all data generated by individuals is consciously known to them. Yes, in most cases they sign a piece of paper or click through terms of use that are rarely read, yet these agreements allow the data to move at the discretion of the enterprise and/or platform owner, oftentimes with little to no communication back to the individual. And, to be real, if communication, emails for example, went back to the individual each time data was captured and analyzed, they'd have a few lifetimes of work just to absorb it. That is not realistic.

So what must be done? Organizational leaders, platform providers, and data generation and analysis firms must have an Ethics Charter or some guiding principles that outline how they handle people-related data analysis. What are their rules? What are their boundaries? What constitutes a "no-go" decision? How does the analysis help the individuals who generated the data? These questions and others must not only be answered, they must be supported by examples over time: this is what happened, this is what we did, this was the result. And, again, this must be done again and again over time.

In the end, those analyzing people-related data must strive to build trust. Especially now that passive behavioral data ("data exhaust") is being collected and analyzed to shed light on how people are actually spending their time. Especially now that the potential to perpetuate or exacerbate bias exists. Especially now that language data is being used more widely (e.g., via NLP). All of these can be perceived as invasive if not handled proactively, virtuously, and continuously. Hope this helps and, of course, there's much more to share and explore on this topic.
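On the bias point specifically (and tying back to Sahra's original question), one concrete check a charter can name in advance is an adverse-impact test such as the four-fifths (80%) rule. Here's a minimal sketch, assuming you have selection outcomes (e.g., who a model flagged as "high potential") grouped by demographic group; the counts are made up for illustration:

```python
# Four-fifths (80%) rule: flag for review any group whose selection rate
# falls below 80% of the most-favored group's rate. Counts are illustrative.
selected = {"group_a": 45, "group_b": 20}
totals   = {"group_a": 100, "group_b": 80}

rates = {g: selected[g] / totals[g] for g in totals}
reference = max(rates.values())  # most-favored group's selection rate

for group, rate in sorted(rates.items()):
    ratio = rate / reference
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {ratio:.2f} [{flag}]")
```

Passing a check like this doesn't prove an analysis is fair, but failing it is exactly the kind of "no-go" trigger that guiding principles should spell out before the work begins.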

Thanks, @Al_Adamsen! Super helpful and insightful.

My pleasure @Sahra_Kaboli-Nejad. Super appreciate the kind words!

We actually just had a very similar conversation at our team offsite today. We still need to build some formality around ethics and governance as a team, but one piece that I think will be abundantly clear is to undertake the analysis with the ends in mind. One example is integrating new employees into the organization. Does it really matter how likely a particular person is to leave, or is the more valuable/actionable insight how we can improve the onboarding process holistically (thinking early/preventative identification of something like a peripheral node, sketched below)? Ultimately, it's the factors that influence the process that matter, and those can be arrived at in an anonymized fashion. That type of approach builds trust rather than signaling a Big Brother approach.
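To make the "peripheral node" idea concrete: in organizational network analysis, a new hire who remains weakly connected in the collaboration network well after starting can serve as a process-level signal that onboarding needs attention, with no individual attributes required. A minimal sketch using networkx on anonymized IDs (the edge list and threshold here are hypothetical):

```python
# Sketch: flag peripheral nodes (weakly integrated employees) in an
# anonymized collaboration network. IDs, edges, and the threshold are
# illustrative; note that no individual attributes are used.
import networkx as nx

edges = [("e1", "e2"), ("e1", "e3"), ("e2", "e3"),
         ("e3", "e4"), ("e5", "e1")]  # e.g., anonymized meeting/chat metadata
G = nx.Graph(edges)
G.add_node("e6")  # a new hire with no observed collaboration ties yet

centrality = nx.degree_centrality(G)
THRESHOLD = 0.25  # tune per org; purely illustrative here

peripheral = sorted(n for n, c in centrality.items() if c < THRESHOLD)
print("Aggregate signal - review onboarding support around:", peripheral)
```

Because the output is an aggregate pattern rather than a prediction about a named individual, it points toward fixing the process, which is exactly the trust-building framing described above.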