Table Of Contents
- About the editors
- About the book
- This eBook can be cited
- Table of Contents
- List of contributing authors
- Feature engineering for digital well-being psychological data predictive analysis (Dana RAD, Gavril RAD, Edgar DEMETER, Violeta TULCEANU)
- Critical mass dynamics and nudge theory: implications for change in regards to Digital Well-being (Csaba KISS, Dana RAD)
- Well-being and self-effectiveness of teachers during the Covid 19 pandemic period (Tiberiu DUGHI, Adriana HORVATH, Henrietta TORKOS)
- ID GAMES: Co-Create assistive games for people with Intellectual Disability to enhance their inclusion (Dana DUGHI, Anamaria CRISTA)
- Objective or subjective time and well-being (Zeno GOZO)
- The benefit of using S.T.E.M. education and the teacher’s desire to implement it in the Romanian education system (Roxana MAIER, Valeria TOTAN)
- The power of fictive kinship models in choosing the right career trajectory (Delia BÎRLE, Adela LAZĂR)
- Prevalence among the adult population, personality factors, and sexting predictors (Sonia IGNAT, Diana Cristiana CINTEAN)
- Life changing and resilience in post-pandemic era (Roxana MAIER, Florinda GOLU, Nicu Ionel SAVA)
- Social media platforms and their implications in the life of adolescents (Delia BÎRLE, Monica SECUI, Adela LAZĂR, Adriana PĂDUREAN-KISS)
- Efficiency and results in online speech therapy (Viorel AGHEANĂ, Doru Vlad POPOVICI)
- The predictors of body image disturbances: the case of a mixed gender sample of Romanian emergent young people (Tania RĂDUI, Delia BÎRLE)
- Therapy for the oldest (Simona POPESCU, Bogdan POPESCU)
- A short correlational analysis between moral attitudes and their associated psychophysiological relevance (Alexandra Maria CONSTANTIN, Orlando VOICA)
- Digital well-being in the areas of health, wellness, sport and rehabilitation (Ágnes SIMON-UGRON)
- Joint mobility: essential motor skills in the correct acquisition of swimming technique in children aged 8–11 (Andrei BITANG, Viorel BITANG, Vasile Liviu ANDREI, Corina Ramona DULCEANU, Roberto Gabriel MARCONI)
- Leadership dimensions preferred amongst Brazilian coaches in different sports from amateur to Olympic championships (Vinicius Barroso HIROTA, Jeferson Oliveira SANTANA, Marcelo Rodrigues da CUNHA, Carlos Eduardo Lopes VERARDI)
- Review regarding the approach to the relationship between emotions, psychological status and the onset of psychosomatic conditions (Nicolae CRISTEA)
- Study on perception speed (VP), motor coordination (CMC), and self-regulation (AR) in junior alpine skiing in sports clubs (Andreea Carleta TOMA, Vlad Teodor GROSU, Radu Adrian ROZSNYAI, Alexandru ZADIC, Viorel Petru ARDELEAN, Emilia Florina GROSU)
- From physical health to digital health (Ágnes SIMON-UGRON, Melinda JÁROMI, Brigitta SZILÁGYI, Alexandra MAKAI, Viktória PRÉMUSZ, Viktória Kovácsné BOBÁLY, Bálint MOLICS, Márta HOCK)
- Study and recovery of patellar tendonitis injuries in performance athletes (Corina Ramona DULCEANU, Vasile Liviu ANDREI, Andrei BITANG, Viorel Petru ARDELEAN, Gyongyi OSSER, Brigitte OSSER, Claudiu Octavian BULZAN, Denis PETRAN, Alexandru Ioan BALTEAN, Narcis Julien HERLO, Georgeta Lucia PISCOI, Ovidiu Gheorghe SERBAN, Gabriel Roberto MARCONI, Iosif ILIA)
- Study of knee mobility recovery in performance athletes (Claudiu Octavian BULZAN, Vasile Liviu ANDREI, Corina Ramona DULCEANU, Gyongyi OSSER, Brigitte OSSER, Narcis Julien HERLO, Andrei BITANG, Denis PETRAN, Alexandru Ioan BALTEAN, Gabriel Roberto MARCONI, Iosif ILIA)
- The evolution of the international performance in the world championships of powerlifting: junior age category (Vasile Emil URSU, Alin TOMUȘ, Ovidiu PĂNĂZAN)
1. Adela LAZĂR,
2. Adriana HORVATH,
3. Adriana PĂDUREAN-KISS,
4. Ágnes SIMON-UGRON,
5. Alexandra MAKAI,
6. Alexandra Maria CONSTANTIN,
7. Alexandru Ioan BALTEAN,
8. Alexandru ZADIC,
9. Alin TOMUȘ,
10. Anamaria CRISTA,
11. Andreea Carleta TOMA,
12. Andrei BITANG,
13. Bálint MOLICS,
14. Bogdan POPESCU,
15. Brigitta SZILÁGYI,
16. Brigitte OSSER,
17. Carlos Eduardo LOPES VERARDI,
18. Claudiu Octavian BULZAN,
19. Corina Ramona DULCEANU,
20. Csaba KISS,
21. Dana DUGHI,
22. Dana RAD,
23. Delia BÎRLE,
24. Denis PETRAN,
25. Diana Cristiana CINTEAN,
26. Doru Vlad POPOVICI,
27. Edgar DEMETER,
28. Emilia Florina GROSU,
29. Florinda GOLU,
30. Gabriel Roberto MARCONI,
31. Gavril RAD,
32. Georgeta Lucia PISCOI,
33. Gyongyi OSSER,
34. Henrietta TORKOS,
35. Iosif ILIA,
36. Jeferson Oliveira SANTANA,
37. Marcelo Rodrigues DA CUNHA,
38. Márta HOCK,
39. Melinda JÁROMI,
40. Monica SECUI,
41. Narcis Julien HERLO,
42. Nicolae CRISTEA,
43. Nicu Ionel SAVA,
44. Orlando VOICA,
45. Ovidiu Gheorghe SERBAN,
46. Ovidiu PĂNĂZAN,
47. Radu Adrian ROZSNYAI,
48. Roberto Gabriel MARCONI,
49. Roxana MAIER,
50. Simona POPESCU,
51. Sonia IGNAT,
52. Tania RĂDUI,
53. Tiberiu DUGHI,
54. Valeria TOTAN,
55. Vasile Emil URSU,
56. Vasile Liviu ANDREI,
57. Viktória Kovácsné BOBÁLY,
58. Viktória PRÉMUSZ,
59. Vinicius Barroso HIROTA,
60. Violeta TULCEANU,
61. Viorel AGHEANĂ,
62. Viorel BITANG,
63. Viorel Petru ARDELEAN,
64. Vlad Teodor GROSU,
65. Zeno GOZO
Faculty of Educational Sciences, Psychology and Social Sciences
Center of Research Development and Innovation in Psychology
PhD researcher at KU Leuven, Belgium
Abstract: Feature engineering is an important area of machine learning that is sometimes overlooked or assumed to be trivially easy. It is the process of selecting, modifying, and transforming raw data into new variables (features) that are not present in the original training dataset. Done well, it can streamline and accelerate data transformations while improving model accuracy, and it can generate new features for both supervised and unsupervised learning. Feature engineering is essential to machine learning models: regardless of the architecture or the data, a poor feature will directly degrade a model's performance. A feature is any quantifiable input that can be used in a predictive model, and feature engineering is thus the process of applying statistical or machine learning techniques to transform raw observations into the desired features. Making machine learning effective on new tasks may require constructing and training better features. This chapter presents a methodology for feature engineering in psychological datasets aimed at predictive analysis with machine learning algorithms.
Keywords: feature engineering, machine learning, psychological data, predictive analysis
These massive datasets can be used with machine learning algorithms developed by researchers in statistics and AI. Such algorithms are widely used to predict outcomes, for example classifying a person into a certain group based on the available information or predicting an event that will happen in the future. In these circumstances, machine learning algorithms are typically combined with strategies to prevent overfitting, that is, the discovery of spurious patterns in a dataset that do not transfer to other datasets (Hamaker & Wichers, 2017).
Researchers have employed machine learning algorithms to address a variety of prediction-focused psychological research questions using the kinds of datasets mentioned above. Social media data have been used to identify significant predictors for a number of mental health disorders, including depression, posttraumatic stress disorder, suicidal thoughts, and schizophrenia (Schwartz et al., 2014; Ismail et al., 2020; De Choudhury et al., 2016; Mitchell et al., 2015). Analysis of smartphone usage and wearable sensor data has revealed predictors of current and future cognitive states, including emotional states, with a focus on mood disturbance (Mehrotra et al., 2017; Rachuri et al., 2010; Canzian & Musolesi, 2015; Mehrotra et al., 2016; Saeb et al., 2015).
Using large, readily available clinical datasets, clinical psychology, psychiatry, and neuroscience have all applied prediction to the identification, diagnosis, and treatment of mental illness (Dwyer et al., 2018).
Prediction-focused studies such as those just discussed can help guide intervention, prevention, and treatment efforts in real-world settings. By highlighting key variables and interactions that can then be investigated as potential causal factors underlying the phenomenon being examined, a prediction-focused approach can also contribute to psychological theory.
To obtain accurate predictions, however, psychologists frequently have to carefully select the relevant variables to include in machine learning models; in machine learning this process is known as feature engineering (Zheng & Casari, 2018). Feature engineering is difficult without a priori understanding of the phenomenon being studied, which is often lacking in complex datasets with many variables, and this makes accurate forecasting models hard to develop. Indeed, the difficulty of feature engineering may have played a significant role in the fact that no single strategy consistently produced high predictive accuracy in the research mentioned above.
2. What Is Feature Engineering?
Constructing and training better features may be important for enhancing machine learning's effectiveness in psychological forecasting. A feature is any quantifiable input that can be used in a predictive model. Feature engineering (Turner, Fuggetta, Lavazza, & Wolf, 1999) is the process of applying statistical or machine learning techniques to transform raw observations into the desired features.
The craft of creating meaningful features from existing data, in accordance with the target to be learned and the machine learning model used, is known as feature engineering (Kuhn & Johnson, 2019). It entails transforming data into formats that relate more closely to the underlying target to be learned. Done correctly, feature engineering can increase the value of existing data and boost the effectiveness of the proposed machine learning models. Conversely, with poor features researchers may need to build far more complicated models to obtain the same level of performance (Dong & Liu, 2018). A solid understanding of the problem at hand and of the available data sources is the foundation of effective feature engineering; by developing new features, researchers gain a deeper comprehension of the data and more insightful knowledge. Done properly, feature engineering is one of the most useful data science techniques, but it is also one of the most difficult (Duboue, 2020).
Feature engineering types include:
- • Scaling and normalizing involve changing the range and centering of the data, which facilitates learning and enhances the comprehension of the results.
- • Filling in missing values entails substituting null values using machine learning algorithms, heuristics, or expert knowledge. Real-world datasets can contain missing values due to the difficulty of gathering complete datasets and to mistakes made during data collection.
- • Feature selection refers to removing features that are unnecessary, redundant, or simply ineffective for learning. Sometimes researchers have too many features and need fewer.
- • In feature coding, categories are represented by a collection of symbolic values. A concept can be recorded in many columns, each representing a single value with a true or false in each field, or in a single column that takes multiple values.
- • Feature building produces a new feature or features from one or more existing features. For instance, from a date one might derive a feature that indicates the day of the week. With this additional knowledge, the algorithm can learn that some outcomes are more likely to occur on Mondays or on weekends.
- • Feature extraction means moving from low-level characteristics that are inappropriate for learning and yield poor test results to higher-level features that are advantageous for learning. Feature extraction is frequently useful when converting specialized data formats, such as photos or text, to a tabular row-column, example-feature structure.
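The feature coding and feature building steps above can be sketched in a few lines of plain Python. This is a minimal illustration, not the authors' methodology; the record fields (`date`, `mood`) and the `engineer` helper are hypothetical examples chosen to mirror the day-of-week discussion in the text.

```python
from datetime import date

# Hypothetical raw records: an observation date and a categorical mood label.
records = [
    {"date": date(2023, 1, 2), "mood": "low"},   # a Monday
    {"date": date(2023, 1, 7), "mood": "high"},  # a Saturday
    {"date": date(2023, 1, 4), "mood": "mid"},
]

MOODS = ["low", "mid", "high"]  # the known category values

def engineer(record):
    """Build new features from the raw fields of one record."""
    feats = {}
    # Feature building: derive day-of-week and a weekend flag from the date.
    feats["day_of_week"] = record["date"].weekday()   # 0 = Monday ... 6 = Sunday
    feats["is_weekend"] = int(feats["day_of_week"] >= 5)
    # Feature coding: one-hot encode the categorical mood label,
    # one column per category with a 0/1 indicator.
    for m in MOODS:
        feats[f"mood_{m}"] = int(record["mood"] == m)
    return feats

rows = [engineer(r) for r in records]
```

In practice libraries such as pandas (`get_dummies`) or scikit-learn (`OneHotEncoder`) perform the coding step, but the underlying transformation is the one shown here.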
Scaling only changes the range of the data. Normalization is a more significant alteration: it converts the observations into data that can be compared to a normal distribution (Webb‐Robertson et al., 2011).
Imputation refers to the treatment of missing values. While deleting entries with missing values is one way to address the issue, doing so can discard crucial information; imputation is advantageous in such circumstances (Musil et al., 2002). It can be loosely classified into two categories: missing numerical values are commonly imputed with the mean of the equivalent value in other entries, and missing categorical values are typically replaced with the value that appears most frequently in other entries.
Missing values are one of the most prevalent issues in preparing data for machine learning (Fernando et al., 2021). They may result from a variety of factors, such as human error, interruptions in the data flow, or privacy concerns, and they reduce how effectively machine learning models operate. Imputation's main objective is to deal with these missing values (García, Luengo, & Herrera, 2015). It comes in two forms: numerical imputation and categorical imputation.
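The two imputation strategies described above can be sketched with the standard library alone. The variable names (`ages`, `genders`) and their values are hypothetical; real pipelines would typically use pandas `fillna` or scikit-learn's `SimpleImputer`, but the logic is the same.

```python
from statistics import mean, mode

# Hypothetical survey columns with missing entries marked as None.
ages = [23, None, 31, 27, None]
genders = ["F", "F", None, "M", "F"]

def impute_numeric(xs):
    """Numerical imputation: replace missing entries with the mean of the rest."""
    observed = [x for x in xs if x is not None]
    fill = mean(observed)
    return [fill if x is None else x for x in xs]

def impute_categorical(xs):
    """Categorical imputation: replace missing entries with the most frequent value."""
    observed = [x for x in xs if x is not None]
    fill = mode(observed)
    return [fill if x is None else x for x in xs]

ages_filled = impute_numeric(ages)          # missing ages become the mean, 27
genders_filled = impute_categorical(genders)  # missing gender becomes "F"
```

Mean and mode imputation are the simplest options; more sophisticated approaches (regression or model-based imputation) condition the fill value on the other variables in the record.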
- Publication date
- 2023 (February)
- Berlin, Bern, Bruxelles, New York, Oxford, Warszawa, Wien, 2023. 336 pp., 36 fig. b/w, 47 tables.