Feature engineering
Feature engineering or feature extraction is the process of using domain knowledge to extract features (characteristics, properties, attributes) from raw data.[1] The motivation is to use these extra features to improve the quality of results from a machine learning process, compared with supplying only the raw data to the machine learning process.
Process
The feature engineering process is:[2] (a brief code sketch of the loop follows the list)
- Brainstorming or testing features[3]
- Deciding what features to create
- Creating features
- Testing the impact of the identified features on the task
- Improving your features if needed
- Repeat
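A minimal sketch of this loop, assuming a tiny made-up table, invented column names, and a scikit-learn model (none of which come from the source): a candidate feature is created, its impact is tested with cross-validation, and it is kept or discarded before repeating.

```python
# Illustrative sketch of the engineer-evaluate-iterate loop described above.
# The columns ("price", "weight", "label") and the model choice are assumptions.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

df = pd.DataFrame({
    "price":  [10.0, 25.0, 7.5, 40.0, 12.0, 30.0],
    "weight": [1.0, 2.5, 0.5, 4.0, 1.5, 3.0],
    "label":  [0, 1, 0, 1, 0, 1],
})

baseline_features = ["price", "weight"]

# Brainstorm/decide on a candidate feature, then create it.
df["price_per_weight"] = df["price"] / df["weight"]

# Test the impact of the new feature on the task.
model = LogisticRegression()
baseline = cross_val_score(model, df[baseline_features], df["label"], cv=3).mean()
with_new = cross_val_score(model, df[baseline_features + ["price_per_weight"]],
                           df["label"], cv=3).mean()

# Keep, improve, or discard the feature, then repeat.
print(f"baseline accuracy: {baseline:.2f}, with new feature: {with_new:.2f}")
```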
Typical engineered features
The following list[4] provides some typical ways to engineer useful features (several of which are sketched in code after the list):
- Numerical transformations (like taking fractions or scaling)
- Category encoders such as one-hot or target encoding (for categorical data)[5]
- Clustering
- Group aggregated values
- Principal component analysis (for numerical data)
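A minimal sketch of several of these feature types using pandas and scikit-learn; the toy table, column names, and parameter choices are illustrative assumptions, not prescribed settings.

```python
# Illustrates scaling, one-hot encoding, a group-aggregated value, and PCA
# on a small made-up table.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import OneHotEncoder, StandardScaler

df = pd.DataFrame({
    "store":  ["A", "A", "B", "B", "C", "C"],
    "sales":  [100.0, 150.0, 80.0, 120.0, 200.0, 240.0],
    "visits": [10, 12, 9, 11, 20, 22],
})

# Numerical transformation: scale numeric columns to zero mean, unit variance.
scaled = StandardScaler().fit_transform(df[["sales", "visits"]])

# Categorical encoding: one-hot encode the "store" column.
onehot = OneHotEncoder().fit_transform(df[["store"]]).toarray()

# Group-aggregated value: mean sales per store, joined back to each row.
df["store_mean_sales"] = df.groupby("store")["sales"].transform("mean")

# Principal component analysis: project the scaled numeric columns onto 1 component.
pc1 = PCA(n_components=1).fit_transform(scaled)

print("scaled:", scaled.shape, "one-hot:", onehot.shape, "pca:", pc1.shape)
print(df[["store", "sales", "store_mean_sales"]])
```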
Relevance
Features vary in significance.[6] Even relatively insignificant features may contribute to a model. Feature selection can reduce the number of features to prevent a model from becoming too specific to the training data set (overfitting).[7]
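As a sketch of feature selection in this spirit, the example below keeps only the highest-scoring columns of a synthetic dataset; the data and the particular univariate filter are illustrative assumptions rather than a recommended method.

```python
# Feature selection with a univariate filter (SelectKBest) on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif

# 20 candidate features, only 5 of which are actually informative.
X, y = make_classification(n_samples=200, n_features=20, n_informative=5,
                           random_state=0)

# Keep the 5 features that score highest against the target.
selector = SelectKBest(score_func=f_classif, k=5)
X_reduced = selector.fit_transform(X, y)

print("kept feature indices:", np.flatnonzero(selector.get_support()))
print("reduced shape:", X_reduced.shape)  # (200, 5)
```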
Explosion
Feature explosion occurs when the number of identified features grows too large for effective model estimation or optimization. Common causes include:
- Feature templates - implementing feature templates instead of coding new features
- Feature combinations - combinations that cannot be represented by a linear system
Feature explosion can be limited via techniques such as regularization, kernel methods, and feature selection.[8]
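One of these techniques, regularization, can be sketched as follows: an L1-penalized linear model drives the weights of uninformative features to zero, so only a handful of the many candidate features survive. The synthetic data below is an illustrative assumption.

```python
# Limiting feature explosion with L1 regularization (Lasso) on synthetic data.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso

# 100 candidate features, only 5 of which actually drive the target.
X, y = make_regression(n_samples=200, n_features=100, n_informative=5,
                       noise=1.0, random_state=0)

model = Lasso(alpha=1.0).fit(X, y)
print("non-zero coefficients:", np.count_nonzero(model.coef_), "of", X.shape[1])
```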
Automation
Automation of feature engineering is a research topic that dates back to the 1990s.[9] Machine learning software that incorporates automated feature engineering has been commercially available since 2016.[10] Related academic literature can be roughly separated into two types:
- Multi-relational decision tree learning (MRDTL) uses a supervised algorithm that is similar to a decision tree.
- Deep Feature Synthesis uses simpler methods.
Multi-relational decision tree learning (MRDTL)
MRDTL generates features in the form of SQL queries by successively adding clauses to the queries.[11] For instance, the algorithm might start out with
SELECT COUNT(*) FROM ATOM t1 LEFT JOIN MOLECULE t2 ON t1.mol_id = t2.mol_id GROUP BY t1.mol_id
The query can then successively be refined by adding conditions, such as "WHERE t1.charge <= -0.392".[12]
However, most MRDTL studies base implementations on relational databases, which results in many redundant operations. These redundancies can be reduced by using techniques such as tuple ID propagation.[13][14] Efficiency can be further increased by using incremental updates.[15]
Deep feature synthesis
The deep feature synthesis (DFS) algorithm beat 615 of 906 human teams in a competition.[16][17]
Libraries:
- OneBM helps data scientists reduce data exploration time, allowing them to try out many ideas through trial and error in a short time. It also enables non-experts who are not familiar with data science to quickly extract value from their data with little effort, time, and cost.[21]
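As a rough illustration of automated feature engineering in this style, the sketch below uses the open-source Featuretools library (listed in the references), which implements deep feature synthesis. The entity layout, column names, and exact API calls are assumptions and may differ between library versions.

```python
# Hedged sketch of deep feature synthesis with Featuretools (1.x-style API assumed).
import pandas as pd
import featuretools as ft

customers = pd.DataFrame({"customer_id": [1, 2], "join_year": [2019, 2021]})
transactions = pd.DataFrame({
    "transaction_id": [10, 11, 12, 13],
    "customer_id": [1, 1, 2, 2],
    "amount": [25.0, 40.0, 10.0, 60.0],
})

es = ft.EntitySet(id="retail")
es = es.add_dataframe(dataframe_name="customers", dataframe=customers,
                      index="customer_id")
es = es.add_dataframe(dataframe_name="transactions", dataframe=transactions,
                      index="transaction_id")
es = es.add_relationship("customers", "customer_id",
                         "transactions", "customer_id")

# DFS stacks aggregation/transform primitives across the relationship,
# e.g. a per-customer mean of transactions.amount.
feature_matrix, feature_defs = ft.dfs(entityset=es,
                                      target_dataframe_name="customers")
print(feature_matrix.head())
```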
Feature stores
A feature store is where features are stored and organized for the explicit purpose of being used either to train models (by data scientists) or to make predictions (by applications that have a trained model). It is a central location where groups of features built from multiple data sources can be created or updated, and where datasets can be assembled from those feature groups for training models or for applications that prefer to retrieve precomputed features at prediction time rather than compute them.[22]
A feature store includes the ability to store code used to generate features, apply the code to raw data, and serve those features to models upon request. Useful capabilities include feature versioning and policies governing the circumstances under which features can be used.[23]
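A hypothetical, minimal sketch of such an interface appears below; the class and method names are made up for illustration, and real feature stores are considerably more elaborate.

```python
# Toy feature-store interface illustrating the capabilities described above:
# registering the code that generates a feature, versioning it, applying it
# to raw data, and serving the result on request. All names are hypothetical.
from typing import Callable, Dict, Tuple

import pandas as pd


class MiniFeatureStore:
    def __init__(self) -> None:
        # (feature name, version) -> function that computes it from raw data
        self._registry: Dict[Tuple[str, int], Callable[[pd.DataFrame], pd.Series]] = {}

    def register(self, name: str, version: int,
                 fn: Callable[[pd.DataFrame], pd.Series]) -> None:
        """Store the code used to generate a feature, keyed by version."""
        self._registry[(name, version)] = fn

    def serve(self, name: str, version: int, raw: pd.DataFrame) -> pd.Series:
        """Apply the registered code to raw data and serve the feature."""
        return self._registry[(name, version)](raw)


store = MiniFeatureStore()
store.register("price_per_weight", 1, lambda df: df["price"] / df["weight"])

raw = pd.DataFrame({"price": [10.0, 30.0], "weight": [2.0, 3.0]})
print(store.serve("price_per_weight", 1, raw))
```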
Feature stores can be standalone software tools or built into machine learning platforms.
References
- "Machine Learning and AI via Brain simulations". Stanford University. Retrieved 2019-08-01.
- "Big Data: Week 3 Video 3 - Feature Engineering". youtube.com.
- Jalal, Ahmed Adeeb (January 1, 2018). "Big data and intelligent software systems". International Journal of Knowledge-based and Intelligent Engineering Systems. 22 (3): 177–193. doi:10.3233/KES-180383 – via content.iospress.com.
- "Creating Features". kaggle.com. Retrieved 2021-09-30.
- "Category Encoders — Category Encoders 2.2.2 documentation". contrib.scikit-learn.org. Retrieved 2021-10-01.
- "Feature Engineering" (PDF). 2010-04-22. Retrieved 12 November 2015.
- "Feature engineering and selection" (PDF). Alexandre Bouchard-Côté. October 1, 2009. Retrieved 12 November 2015.
- "Feature engineering in Machine Learning" (PDF). Zdenek Zabokrtsky. Archived from the original (PDF) on 4 March 2016. Retrieved 12 November 2015.
- Knobbe, Arno J.; Siebes, Arno; Van Der Wallen, Daniël (1999). "Multi-relational Decision Tree Induction" (PDF). Principles of Data Mining and Knowledge Discovery. Lecture Notes in Computer Science. Vol. 1704. pp. 378–383. doi:10.1007/978-3-540-48247-5_46. ISBN 978-3-540-66490-1.
- "Its all about the features". Reality AI Blog. September 2017.
- "A Comparative Study Of Multi-Relational Decision Tree Learning Algorithm". CiteSeerX 10.1.1.636.2932.
{{cite journal}}
: Cite journal requires|journal=
(help) - Leiva, Hector; Atramentov, Anna; Honavar, Vasant (2002). "Experiments with MRDTL – A Multi-relational Decision Tree Learning Algorithm" (PDF).
{{cite journal}}
: Cite journal requires|journal=
(help) - Yin, Xiaoxin; Han, Jiawei; Yang, Jiong; Yu, Philip S. (2004). "CrossMine: Efficient Classification Across Multiple Database Relations". Proceedings. 20th International Conference on Data Engineering. Proceedings of the 20th International Conference on Data Engineering. pp. 399–410. doi:10.1109/ICDE.2004.1320014. ISBN 0-7695-2065-0. S2CID 1183403.
- Frank, Richard; Moser, Flavia; Ester, Martin (2007). "A Method for Multi-relational Classification Using Single and Multi-feature Aggregation Functions". Knowledge Discovery in Databases: PKDD 2007. Lecture Notes in Computer Science. Vol. 4702. pp. 430–437. doi:10.1007/978-3-540-74976-9_43. ISBN 978-3-540-74975-2.
- "How automated feature engineering works - The most efficient feature engineering solution for relational data and time series". Retrieved 2019-11-21.
- "Automating big-data analysis".
- Kanter, James Max; Veeramachaneni, Kalyan (2015). "Deep Feature Synthesis: Towards Automating Data Science Endeavors". 2015 IEEE International Conference on Data Science and Advanced Analytics (DSAA). pp. 1–10. doi:10.1109/DSAA.2015.7344858. ISBN 978-1-4673-8272-4. S2CID 206610380.
- "Featuretools | An open source framework for automated feature engineering Quick Start". www.featuretools.com. Retrieved 2019-08-22.
- Hoang Thanh Lam; Thiebaut, Johann-Michael; Sinn, Mathieu; Chen, Bei; Mai, Tiep; Alkan, Oznur (2017). "One button machine for automating feature engineering in relational databases". arXiv:1706.00327 [cs.DB].
- "ExploreKit: Automatic Feature Generation and Selection" (PDF).
- Thanh Lam, Hoang; Thiebaut, Johann-Michael; Sinn, Mathieu; Chen, Bei; Mai, Tiep; Alkan, Oznur (2017-06-01). "One button machine for automating feature engineering in relational databases". arXiv:1706.00327 [cs.DB].
- "What is a feature store". Retrieved 2022-04-19.
- "An Introduction to Feature Stores". Retrieved 2021-04-15.
Further reading
- Boehmke, Bradley; Greenwell, Brandon (2019). "Feature & Target Engineering". Hands-On Machine Learning with R. Chapman & Hall. pp. 41–75. ISBN 978-1-138-49568-5.
- Zheng, Alice; Casari, Amanda (2018). Feature Engineering for Machine Learning: Principles and Techniques for Data Scientists. O'Reilly. ISBN 978-1-4919-5324-2.
- Zumel, Nina; Mount, John (2020). "Data Engineering and Data Shaping". Practical Data Science with R (2nd ed.). Manning. pp. 113–160. ISBN 978-1-61729-587-4.