MIT researchers have developed tools to help data scientists make features used in machine learning models more understandable to end users

Machine learning models excel at a wide range of tasks, but building trust in them requires understanding how they work. Because of the complexity of the features and algorithms used to train these models, researchers still do not clearly understand how a model uses particular features or reaches certain conclusions.

Recent research from an MIT team addresses this with a taxonomy designed to help developers create features that are easier for their target audience to understand. In their paper, “The Need for Interpretable Features: Motivation and Taxonomy,” they identify the properties that make features interpretable and organize them into a taxonomy covering five types of users, from machine learning experts to the people affected by a model’s predictions. They also offer advice on how developers can make features more accessible to non-experts.

Machine learning models take features as input variables. Features are often chosen to maximize model accuracy rather than for whether a decision maker can interpret them.

The team found that in models used to predict a patient’s risk of complications after heart surgery, certain features, such as the trend of a patient’s heart rate over time, were presented as aggregated values. Although these features were “model ready,” clinicians did not know how they had been calculated.

In contrast, many scientists valued aggregated features. For example, instead of a feature such as “number of posts a student has made on discussion boards,” they preferred related features to be aggregated and labeled with terms they recognize, such as “participation.”
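As an illustration of that kind of aggregation, here is a minimal sketch (pandas assumed; column names are hypothetical, not from the paper) of rolling several raw activity counts into a single feature labeled with a term a user would recognize:

```python
import pandas as pd

# Hypothetical raw activity counts per student
raw = pd.DataFrame({
    "student_id": [1, 2, 3],
    "posts_made": [12, 3, 7],        # posts on discussion boards
    "replies_made": [5, 0, 2],       # replies to other students
    "threads_started": [2, 1, 0],    # new threads opened
})

# Aggregate the related raw counts into one feature and give it a name
# that the intended audience would recognize.
raw["participation"] = raw[["posts_made", "replies_made", "threads_started"]].sum(axis=1)

features = raw[["student_id", "participation"]]
print(features)
```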

The lead researcher says that the existence of many levels of interpretability is a major driving factor behind the work. The taxonomy details which properties of features are likely to matter most to particular users and which properties may make features more or less interpretable for different decision makers.

For example, machine learning developers may prioritize features that are predictive and compatible with the model, since these improve performance. Decision makers without prior machine learning experience, on the other hand, are better served by “human-worded” features, meaning features that are described in a way that is natural and understandable for users.

When creating interpretable features, it is important to understand the level at which they need to be interpretable; depending on the domain, not every level may be required.

The researchers also describe feature engineering techniques that developers can use to make features more understandable to a particular audience.

To prepare data for machine learning models, data scientists often apply aggregation and normalization techniques, and in many cases the resulting transformations are almost impossible for a layperson to interpret. In addition, most models cannot process categorical data without first converting it into a numeric code, as sketched below.
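The following is a minimal sketch (scikit-learn assumed; column names are hypothetical) of the kind of model-ready transformation described above: numeric columns are normalized and a categorical column is one-hot encoded. The resulting matrix is easy for a model to consume but hard for a non-expert to read.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Hypothetical raw clinical-style data
data = pd.DataFrame({
    "heart_rate_mean": [72.0, 95.0, 60.0],
    "age": [54, 67, 49],
    "admission_type": ["elective", "emergency", "elective"],
})

# Normalize numeric columns and one-hot encode the categorical column
preprocess = ColumnTransformer([
    ("scaled", StandardScaler(), ["heart_rate_mean", "age"]),
    ("encoded", OneHotEncoder(), ["admission_type"]),
])

model_ready = preprocess.fit_transform(data)
print(model_ready)                          # rows of z-scores and 0/1 indicators
print(preprocess.get_feature_names_out())   # e.g. 'encoded__admission_type_emergency'
```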

They note that producing interpretable features may require undoing some of this encoding. They also argue that in many domains the trade-off between interpretable features and model accuracy is minimal: for example, in earlier work on a model for child welfare screeners, they restricted themselves to features that met their interpretability standards when retraining the model, and the drop in model performance was virtually nonexistent.
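Continuing the hypothetical preprocessing sketch above, one simple way to “undo” the encoding when presenting results is to roll per-column importances on the model-ready matrix back up to the original, human-readable feature names before showing them to a decision maker. The importance values here are made up for illustration; this is not the method from the paper.

```python
import numpy as np

model_ready_names = preprocess.get_feature_names_out()
# e.g. ['scaled__heart_rate_mean', 'scaled__age',
#       'encoded__admission_type_elective', 'encoded__admission_type_emergency']

# Hypothetical importances produced by some explanation method
# over the model-ready columns.
importances = np.array([0.40, 0.15, 0.20, 0.25])

readable = {}
for name, value in zip(model_ready_names, importances):
    stripped = name.split("__", 1)[1]   # drop the 'scaled__' / 'encoded__' prefix
    # Group the one-hot columns back under their original column name
    original = "admission_type" if stripped.startswith("admission_type") else stripped
    readable[original] = readable.get(original, 0.0) + float(value)

print(readable)
# {'heart_rate_mean': 0.4, 'age': 0.15, 'admission_type': 0.45}
```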

Their work should enable model developers to handle complex feature transformations more efficiently and to produce people-oriented explanations of machine learning models. In addition, the system they are building will translate the output of algorithms that explain model-ready datasets into formats that decision makers can understand.

They hope their study will encourage model developers to consider interpretable features from the beginning of the development process, rather than focusing on explainability after the fact.

This article is written as a summary by Marktechpost staff based on the research paper “The Need for Interpretable Features: Motivation and Taxonomy.” All credit for this research goes to the researchers on this project. Check out the paper and blog post.
