
Algorithmic Accountability: The Other Side of Machine Learning

By Tom Slee

There is a rousing chorus of excitement (and investment) around new developments in machine learning and artificial intelligence. Neural network techniques first developed decades ago have been reinvigorated by new data sources, computer hardware, and theoretical advances. Under the name “Deep Learning,” these techniques have produced high-profile breakthroughs in image recognition, speech recognition, playing complex games such as Go and poker, self-driving cars, and much more.

The chorus continues to swell, but it is accompanied by a counterpoint of concern and caution, raising questions about fairness, transparency, privacy, and more: topics that are sometimes bundled together under the umbrella of “algorithmic accountability”.

Algorithmic accountability is already following a path trodden by debates around privacy, a topic with which many Human Capital Management professionals will be more familiar. When Big Data burst onto the scene, it was quickly clear that it presented a major commercial opportunity. But when all that data being collected is about you and me, we become concerned about privacy. As the Big Data / privacy debate has evolved, it has become clear that companies that wish to profit from the opportunities of Big Data must reckon seriously with the challenges of privacy: Chief Privacy Officers now sit on executive boards, many jurisdictions have Privacy Commissioners to stand up for the rights of citizens, court cases around privacy violations carry major penalties, and privacy breaches can badly damage brand reputations. The privacy debate is not an event that happened once and is over; it is an evolving discussion that will continue to play out for years.

The algorithmic accountability story has parallels. As algorithms become more influential and ubiquitous, and as their decisions and recommendations increasingly shape our opportunities and our experiences, it’s only natural that many people will be concerned about fairness at both an individual and a group level. Businesses that wish to benefit from machine learning (especially applied to people) must pay attention to these concerns. New machine learning techniques, particularly Deep Learning, are increasingly difficult to interpret, and so transparency and explanation have become central to algorithmic accountability.

Machine learning techniques

Here’s a quick summary of those machine learning techniques, based on a description in Chapter 1 of the leading Deep Learning text:

  • Rule-based machine learning: here the features (variables) being used are explicit and the algorithms are essentially hand-designed for the specific problem at hand. Explanation falls naturally out of the algorithm itself.
  • Classic machine learning: the features are hand-selected but their weights and relevance in the final model are “learned” by the algorithm during the training phase. A hiring algorithm might use university degrees, years of experience, and most recent position as some of the relevant features, and then use a sample of applicants together with their level of employment success to find the best weights for each.
  • Deep Learning: Deep Learning algorithms learn to identify features as well as their weights. It is particularly suited to tasks involving unstructured data: Deep Learning algorithms for image recognition extract a set of features that helps to distinguish, for example, dogs from cats; features that may include edges, textures, and color patches, but may also include features for which we don’t have words.
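The middle tier, classic machine learning, can be sketched in a few lines of Python. The sketch below is purely illustrative and echoes the hypothetical hiring example: the features (an invented has-degree flag and scaled years of experience) are chosen by hand, while their weights are learned from training data by gradient descent on a small logistic regression. The data, feature names, and function names are assumptions for the sake of the example, not any real hiring system.

```python
import math

def train_logistic(X, y, lr=0.1, epochs=2000):
    """Learn one weight per hand-selected feature (plus a bias) by gradient descent."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = b + sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-z))  # sigmoid: predicted probability of success
            err = p - yi                    # gradient of the log-loss with respect to z
            b -= lr * err
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
    return w, b

def predict(w, b, x):
    """Probability estimate for a new applicant's feature vector."""
    return 1.0 / (1.0 + math.exp(-(b + sum(wj * xj for wj, xj in zip(w, x)))))

# Invented toy data: [has_degree, years_experience / 10]; label 1 = "successful hire".
X = [[1, 0.5], [0, 0.1], [1, 0.9], [0, 0.3], [1, 0.2], [0, 0.8]]
y = [1, 0, 1, 0, 1, 0]
w, b = train_logistic(X, y)
# The learned weights attach to named features, so they can be read off directly.
print(w, b)
```

Because each weight belongs to a named, hand-selected feature, a question like “how much did the degree matter?” has a direct answer; in a Deep Learning model the features themselves are learned, and no such direct reading exists.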

While much of the current concern about transparency swirls around the relatively inscrutable Deep Learning algorithms, not every machine learning system is going to become a Deep Learning system. Classical methods continue to be the right tool for many problems. The strength of Deep Learning lies in extracting key features from unstructured and ambiguous data such as images, video, and text; applying it to the well-structured and relatively unambiguous data that make up many enterprise systems runs the risk of overfitting and often offers little or no benefit over more classical techniques.

Moving forward: Opportunities for responsible companies

Like privacy, accountability and fairness are broad concepts. And like privacy, they are not just risks to be minimized, or roadblocks standing in the way of technological progress. We know in our daily lives as citizens that fairness, accountability, and privacy are valuable social goods. Looking ahead, there is no doubt that machine learning will only grow in importance and reach, and so will rules and expectations around fairness and accountability. We do live in interesting times: what is a responsible company to do?

Companies have an opportunity to get ahead of the game and engage constructively with the challenges of algorithmic accountability: to track and participate in policy and ethical debates, to understand evolving legal frameworks and social norms, and to build fairness and accountability into their offerings. Companies that address privacy issues throughout the software engineering process are said to practice Privacy by Design. Perhaps the approach of building fairness and accountability into the software design and implementation process could be called “Fairness by Design”.

The good news is that this debate is in its early days. Much has been written, but most of it remains at a general level, painting in broad brush strokes and addressing wide-ranging concerns. Over time the debates will become more specific as the challenges facing individual industries and areas of work emerge, and as machine learning solutions mature. The landscape for professions such as HCM will become better defined, though perhaps not simpler, and the most interesting and important developments have probably not happened yet.

About the Author

Tom Slee, Ph.D.
Senior Product Manager, SAP HANA

Tom Slee is a senior product manager for the SAP HANA in-memory database system, where he specializes in programming language interfaces and UI tools. The product management team helps to set priorities for HANA and communicate product capabilities and directions to customers.

About SAP SuccessFactors

SAP SuccessFactors Human Experience Management (HXM) Suite helps you completely reinvent the entire employee experience. You can shift from traditional HR transactions to engaging, end-to-end experiences, using intelligent technology to make each interaction simpler and more meaningful. And by linking employee feedback to operational data, you’ll understand what’s happening and why, so you can continuously deliver unexpectedly exceptional experiences that keep your business growing.
