Forecasting Traits: Human Personality Prediction with Machine Learning

In today's data-driven world, machine learning applications have permeated various aspects of our daily lives, presenting innovative solutions to complex problems. One such significant application lies in the realm of personality prediction, where individuals are classified based on their unique personality traits.

  • personality prediction
  • personality traits
  • machine learning
  • Big Five model
  • predictive analytics
  • behavioural psychology
  • effective marketing
  • appealing products and services

I. Introduction

In today's data-driven world, machine learning applications have permeated various aspects of our daily lives, presenting innovative solutions to complex problems. One such significant application lies in the realm of personality prediction, where individuals are classified based on their unique personality traits [1]. This predictive approach holds immense potential, particularly in enhancing the effectiveness of marketing campaigns by precisely targeting specific demographic groups. By leveraging high-dimensional datasets, marketers can tailor their communications to resonate with the personality traits of their target audience, thereby increasing product visibility, engagement, and overall customer satisfaction. Examples of Personality-Based Approaches:

  • Personalized Advertising:

Personalizing online advertisement campaigns based on personality traits has been shown to yield higher revenue and click-through rates [2]. By understanding the preferences and behaviour patterns associated with different personality types, marketers can tailor their ad content to better resonate with individual users.

  • Recommender Systems:

Personality traits are closely linked to an individual's preferences and behaviours, making them valuable inputs for recommender systems [3]. Incorporating personality-based approaches has significantly enhanced the accuracy and relevance of recommendations, leading to improved user satisfaction and engagement.

  • Personalized Visualizations and Music Recommendations:

Personality-based adaptations can also extend to areas such as personalized visualizations and music recommendations [4]. By considering an individual's personality traits, systems can generate tailored visualizations and offer music recommendations that align with their preferences, enhancing the overall user experience.

  • Addressing the "Cold Start" Problem:

Personality traits serve as scientifically validated and relatively stable latent dimensions of an individual, offering a solution to the "cold start" problem in personalized systems [5]. By leveraging these traits, systems can provide personalized recommendations even in scenarios where limited user data is available.

The significance of personality prediction extends beyond marketing applications, permeating various aspects of society, including recruitment processes. Companies often incorporate personality assessments into their hiring procedures to gain insights into candidates' suitability for specific roles. By understanding an individual's personality profile, organizations can assign tasks that align with their strengths, ultimately enhancing overall efficiency and productivity.

II. Contextual Research

Trait theories of personality have long been instrumental in attempting to quantify and categorize the myriad facets of human behaviour and psychology. Early theories presented varying perspectives on the number of personality traits, with figures ranging from thousands to a mere handful. Gordon Allport, for instance, proposed a comprehensive list comprising 4,000 personality traits [6], while Raymond Cattell distilled his theory to 16 key factors [7], and Hans Eysenck proposed a more simplified three-factor model [8]. However, many researchers found Cattell's theory overly complex and Eysenck's too restrictive in scope. Consequently, the Big Five personality traits emerged as a widely accepted framework, offering a more streamlined approach to understanding the fundamental components of personality [9]. The Big Five factors together characterize an individual's personality. It is crucial to recognize that each of the five primary personality traits represents a spectrum between two opposing poles. For instance, extraversion spans a continuum from extreme extraversion to extreme introversion, with most individuals falling somewhere along the spectrum. While considerable literature supports the existence of these primary personality traits, there is not always unanimous agreement on the specific labels assigned to each dimension. Nonetheless, these five traits, represented pictorially in Figure 1, are typically described as follows:

Figure 1. The Big Five factor model.

  • Openness:

Also known as openness to experience, this trait underscores imagination and insight as its defining characteristics. Individuals high in openness exhibit a broad range of interests, curiosity about the world and people, and a penchant for learning and embracing new experiences. They often display creativity and a propensity for adventure, while those low in openness tend to gravitate towards tradition and may struggle with abstract thinking.

  • Conscientiousness:

Characterized by thoughtfulness, impulse control, and goal-directed behaviour, conscientiousness reflects meticulous organization and attention to detail. Highly conscientious individuals are methodical in their approach, mindful of deadlines, and considerate of others' perspectives. Conversely, those scoring lower in this trait may exhibit a lack of structure and organization, leading to procrastination and missed deadlines.

  • Extraversion:

This trait encompasses characteristics such as sociability, assertiveness, and emotional expressiveness. Extraverts thrive in social settings, drawing energy from interactions with others and displaying enthusiasm and excitement. In contrast, introverts are more reserved, finding social interactions draining and often requiring solitude to recharge.

  • Agreeableness:

Reflecting qualities such as trust, altruism, and kindness, agreeableness pertains to prosocial behaviours and interpersonal interactions. Individuals high in agreeableness are cooperative and compassionate, while those low in this trait may display competitiveness and occasionally manipulative tendencies.

  • Neuroticism:

Neuroticism encompasses emotional instability, moodiness, and susceptibility to negative emotions. High levels of neuroticism are associated with mood swings, anxiety, and irritability, whereas low levels indicate emotional stability and resilience. These five primary personality traits provide a foundational framework for understanding the multifaceted nature of human personality, offering valuable insights into individual differences and behaviour patterns.

III. Related Works

The literature on predicting personality traits in online hiring processes using machine learning techniques presents a multifaceted approach to streamlining candidate selection and improving recruitment efficiency. This review synthesizes key findings from five distinct studies to illustrate the diverse methodologies and applications employed in this domain.

The first study proposes a novel approach to automate candidate pre-screening by predicting personality traits through a personality prediction test [10]. By leveraging machine learning algorithms, the system aims to identify candidates whose personal attributes align with the organization's criteria, thereby facilitating more efficient recruitment processes and reducing the need for extensive rounds of interviews and background analyses. Similarly, the second study emphasizes the growing significance of personality assessment in the recruitment process, particularly amidst a competitive job market [11]. Employing Natural Language Processing (NLP) techniques, the study explores the use of machine learning algorithms to predict personality traits from CV analysis [12][13]. Results indicate that the Random Forest algorithm outperforms other algorithms, offering potential applications in recruitment software to expedite candidate selection.

Moving beyond traditional recruitment methods, the third study delves into the realm of social media data analysis to predict personality traits using the Big Five model [14][15]. By analysing user interactions on platforms like Twitter and Facebook, the study demonstrates the utility of social media data in predicting personality traits and its implications for various domains, including business intelligence and marketing. Meanwhile, the fourth study investigates the predictive power of smartphone behaviour data in discerning individuals' personality dimensions [16]. Through the analysis of communication patterns, app usage, and mobility data, the study reveals correlations between behaviour patterns and personality traits, shedding light on both the benefits and risks associated with smartphone data collection in terms of privacy and personality prediction. Finally, the fifth study reiterates the importance of personality assessment in recruitment and underscores the challenges posed by the influx of job seekers and the need for efficient candidate shortlisting methods. By emphasizing the role of personality traits in professional success, it highlights the potential of machine learning techniques to streamline the recruitment process and identify candidates best suited for specific job roles [17].

In summary, these studies collectively contribute to advancing the understanding and application of machine learning techniques in predicting personality traits for online hiring processes, offering valuable insights for recruiters, employers, and researchers alike.

IV. Materials and Methods

The methodology of investigation involves three phases: phase one is dataset preparation, phase two is algorithm training and testing, and phase three is performance evaluation. Figure 2 shows the phases of the investigation and the process involved in each phase. The dataset preparation was achieved in four stages: importing information, data analysis, data visualization, and data preprocessing. Once the dataset was ready, the data were used for training and testing the Machine Learning algorithms. During the second phase, four Machine Learning algorithms were utilized: the Random Forest Classifier, the Decision Tree Classifier, the AdaBoost Classifier, and the hybrid XGB (eXtreme Gradient Boosting) Classifier. In the third phase, the performance of all the tested algorithms was evaluated using five metrics (precision, recall, F1 score, support, and accuracy) to determine the best algorithm for the problem stated in this work.




Figure 2. The Methodology of Investigation.

4.1 Dataset

The dataset used in this research comprises responses to a personality assessment administered in an online format. It consists of structured data collected from participants who completed the assessment. Each row in the dataset represents an individual respondent, while each column represents a specific item or question in the assessment. The following sub-sections give an overview of the dataset and describe the data preparation carried out for the investigation.


4.1.1 Overview

The dataset, stored in a CSV file named "data-final.csv", comprises 1,012,050 rows and 110 columns. It includes personality-related attributes (EXT1 to EXT10, EST1 to EST10, AGR1 to AGR10, CSN1 to CSN10, and OPN1 to OPN10) and additional metadata (dateload, screenw, screenh, introelapse, testelapse, endelapse, IPC, country, lat_appx_lots_of_err, and long_appx_lots_of_err). Missing values were present in certain columns and were handled by dropping the corresponding rows to maintain data integrity. The dataset serves the purpose of predicting personality traits based on self-reported responses provided by individuals. Each row represents a respondent, and the columns contain information about personality attributes and metadata such as country of origin.
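A minimal sketch of this loading and cleaning step is given below; the use of pandas, the tab separator, and the column subset are illustrative assumptions rather than the authors' actual code.

```python
import pandas as pd

# Load the raw survey export named in the text; adjust sep if the file
# is comma- rather than tab-separated (an assumption here).
df = pd.read_csv("data-final.csv", sep="\t")

# The 50 personality items: EXT1-EXT10, EST1-EST10, AGR1-AGR10,
# CSN1-CSN10, OPN1-OPN10.
item_cols = [f"{prefix}{i}" for prefix in ("EXT", "EST", "AGR", "CSN", "OPN")
             for i in range(1, 11)]

# Drop rows with missing answers, as described in the text, keeping the
# item responses plus the country metadata column.
df = df[item_cols + ["country"]].dropna()
print(df.shape)
```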


4.1.2 Data Preparation

Missing values in columns, including personality attributes and metadata, were addressed through data cleaning techniques. The resulting dataset offers a comprehensive resource for analysing personality traits and potential correlations with demographic or behavioural factors. The correlation matrix between the five factors is plotted in Figure 3.

Figure 3. Correlation matrix between personality traits.

V. Results and Discussion

This section deals with the outcomes of the investigations in all three phases. The following sub-sections present the in-depth analysis and substantial results of the various stages of the investigation.

5.1 Data Analysis

  • Data cleaning and preprocessing:

The initial phase of the investigation focuses on cleansing and preparing the dataset. This encompasses managing null values, transforming data types, and eliminating redundant columns. It also excludes rows with missing answers to the personality questions or with reaction times outside a reasonable range. The set of histograms in Figure 4 displays the adjusted distributions of the five fundamental personality traits: Extroversion, Neuroticism, Agreeableness, Conscientiousness, and Openness. The adjustment involves converting the unprocessed survey responses for each trait. Each subplot represents a distinct trait, displaying the distribution of individuals across various levels of that trait. The absence of a subplot in the bottom-right corner indicates that Openness is not depicted in this arrangement. The visualizations offer valuable insights into the general trends and fluctuations in the adjusted personality traits obtained from the investigated dataset.

Figure 4. Histogram Distribution of Adjusted Personality Traits

  • Exploratory Data Analysis (EDA):



Exploratory Data Analysis is the process of analysing and visualizing data to gain insights and understand the underlying patterns and relationships within it. Upon completing the data cleaning process, the investigation continued with EDA in order to obtain a deeper understanding of the distribution and attributes of the personality traits and response times, as shown in Figure 5. Box plots, scatter plots, and distribution plots were employed for data analysis and pattern recognition.

Figure 5. Temporal evolution of Big Five personality traits in multiple panels.

  • Statistical Analysis:

The investigation employed several statistical measures, including the mean reaction time and correlation coefficients between response time and the word count or letter count in the questions. The Pearson correlation coefficients were computed to measure the association between response time and the level of question complexity.
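As an illustration of this correlation step, the sketch below computes a Pearson coefficient between response time and question word count; the column names and the tiny inline table are hypothetical, since the paper does not give its exact variable names.

```python
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical long-format table: one row per (respondent, question).
responses = pd.DataFrame({
    "question_text": ["I am the life of the party.",
                      "I get stressed out easily.",
                      "I have a vivid imagination."],
    "response_time": [2.4, 3.1, 2.7],   # seconds, illustrative values
})
responses["word_count"] = responses["question_text"].str.split().str.len()

# Pearson correlation between response time and question complexity.
r, p = pearsonr(responses["response_time"], responses["word_count"])
print(f"Pearson r = {r:.2f} (p = {p:.2f})")
```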

  • Geographic Analysis:

The dataset was used to extract geographic information, such as nation codes, names, and continents. Bar plots and choropleth maps were employed to examine the dispersion of survey respondents across various countries and continents.

  • Temporal analysis:

The analysis examines the temporal dimension of the data, encompassing the distribution of survey responses over time and variations in personality traits across different years.

  • Correlation Between Questions and Answers:

The questionnaire in the investigation links each answer option to its related text question, facilitating comprehension of the survey questions and responses.

  • Additional data cleaning and processing:

Further data cleaning procedures were carried out, including modifying response values and computing composite scores for each personality feature based on individual question replies.
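A hedged sketch of the composite-scoring step follows; it assumes a simple mean over the ten items per trait and omits any reverse keying of negatively worded items, which the text does not detail.

```python
import pandas as pd

TRAITS = {"Extroversion": "EXT", "Neuroticism": "EST",
          "Agreeableness": "AGR", "Conscientiousness": "CSN",
          "Openness": "OPN"}

def composite_scores(items: pd.DataFrame) -> pd.DataFrame:
    """Mean of the ten questionnaire items per trait (illustrative scoring)."""
    scores = pd.DataFrame(index=items.index)
    for trait, prefix in TRAITS.items():
        cols = [f"{prefix}{i}" for i in range(1, 11)]
        scores[trait] = items[cols].mean(axis=1)
    return scores
```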

  • Correlation analysis:

Correlation matrices and heatmaps were created to visually represent the connections between various personality traits, offering valuable insights into their interrelationships.
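The correlation matrix and heatmap of Figure 3 can be reproduced along the following lines, assuming the composite-score function and the cleaned frame from the earlier sketches, with seaborn used for plotting.

```python
import matplotlib.pyplot as plt
import seaborn as sns

# composite_scores() and df come from the sketches above.
scores = composite_scores(df[[f"{p}{i}"
                              for p in ("EXT", "EST", "AGR", "CSN", "OPN")
                              for i in range(1, 11)]])
corr = scores.corr()                     # 5 x 5 Pearson correlation matrix

sns.heatmap(corr, annot=True, fmt=".2f", cmap="coolwarm", vmin=-1, vmax=1)
plt.title("Correlation matrix between personality traits")
plt.tight_layout()
plt.show()
```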

  • Graphical representations:

Diverse visualizations like count plots, distribution plots, and scatter plots were employed during the analysis to effectively convey findings and insights.

  • Analysis and understanding:

Ultimately, the investigation resulted in interpretations and insights obtained from the analysis. These findings were then utilized to shape the conclusions and recommendations of the research report.

5.2 Investigated Algorithms

5.2.1 Random Forest

The methodology for implementing a Random Forest Classifier begins with data preparation, where relevant features were extracted and the target variable was defined. Subsequently, the dataset was split into training and testing sets to enable model validation. The Random Forest Classifier was then trained using the training data, employing a specified number of decision tree classifiers and other hyperparameters to construct an ensemble model. Following training, the model was applied to make predictions on the testing data. Finally, the model's performance was evaluated by calculating accuracy scores and generating a detailed classification report, which includes metrics such as precision, recall, and F1-score for each class, providing insights into the classifier's performance across different categories, as shown in Table 1. The overall accuracy of the classifier was 90.25%, indicating its effectiveness in correctly predicting the classes across the dataset.
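A minimal sketch of this Random Forest pipeline is shown below; the 100-tree setting, the 80/20 split, and the definition of the target label (each respondent tagged with their highest composite trait) are illustrative assumptions, as the text does not specify them exactly. It builds on the loading and scoring sketches above.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, classification_report

# Features: the 50 item responses; target: an assumed per-respondent label
# taken as the trait with the highest composite score (see earlier sketch).
X = df[item_cols]
y = composite_scores(X).idxmax(axis=1)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

rf = RandomForestClassifier(n_estimators=100, random_state=42)
rf.fit(X_train, y_train)

pred = rf.predict(X_test)
print("Accuracy:", accuracy_score(y_test, pred))
print(classification_report(y_test, pred))   # precision, recall, F1, support
```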

Table 1. Classification Table of Random Forest Classifier

                 Precision   Recall   F1-Score   Support
Extroversion     0.90        0.91     0.91       32125
Neurotic         0.92        0.93     0.92       30324
Agreeable        0.88        0.88     0.88       30712
Conscientious    0.92        0.86     0.89       24282
Open             0.91        0.91     0.91       28721

5.2.2 Decision Tree Classifier

Decision Trees (DTs) represent a non-parametric supervised learning approach widely utilized for both classification and regression tasks. The fundamental objective of DTs is to construct a predictive model capable of forecasting the value of a target variable through the extraction of straightforward decision rules derived from the characteristics present in the dataset. Conceptually, a decision tree resembles a partitioning of the feature space into distinct regions, each associated with a particular prediction. In this work, we employed the Decision Tree Classifier module from the Scikit-learn library [18] to implement the DT algorithm. The model was trained on the provided dataset, and subsequent predictions were made on the test set. The performance of the Decision Tree model was evaluated using standard metrics, including accuracy, which quantifies the proportion of correctly classified instances. Our results indicate an accuracy of 73.96%, suggesting the efficacy of the Decision Tree classifier in this context. In Table 2, the classification report of the Decision Tree classifier presents detailed metrics such as precision, recall, F1-score, and support for each class label. These metrics provide insights into the performance of the classifier across different personality traits, including Extroversion, Neuroticism, Agreeableness, Conscientiousness, and Openness.
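Under the same assumed train/test split as the Random Forest sketch above, the Decision Tree step reduces to a few lines with Scikit-learn's DecisionTreeClassifier; the settings shown are defaults, not the authors' exact configuration.

```python
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, classification_report

dt = DecisionTreeClassifier(random_state=42)   # default Gini criterion
dt.fit(X_train, y_train)

pred = dt.predict(X_test)
print("Accuracy:", accuracy_score(y_test, pred))
print(classification_report(y_test, pred))
```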

Table 2. Classification Table of Decision Tree Classifier

5.2.3 AdaBoost Classifier

The AdaBoost classifier operates by sequentially training a series of weak learners, typically decision trees, on the dataset. In each iteration, the algorithm adjusts the weights of incorrectly classified instances, placing greater emphasis on those that were misclassified. This iterative process allows subsequent weak learners to focus more on the difficult cases, gradually improving the overall predictive accuracy. Specifically, the AdaBoost model was trained with 100 decision tree estimators in our work. After training, the model made predictions on the test dataset, and its performance was evaluated using various metrics. The accuracy metric indicates the proportion of correctly predicted instances out of the total. In our work, the AdaBoost classifier achieved an accuracy of 90.67%, indicating that it accurately predicts the target classes for the majority of instances. The classification report provides a detailed breakdown of the model's performance for each class in the dataset. Precision measures the proportion of true positive predictions out of all instances predicted as positive, while recall measures the proportion of true positive predictions out of all actual positive instances. The F1-score is the harmonic mean of precision and recall, providing a balance between the two metrics. Table 3 presents the classification report of the AdaBoost classifier.
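A corresponding AdaBoost sketch, again reusing the split from the Random Forest example, is given below; Scikit-learn's default weak learner is a depth-one decision tree, and the 100 estimators match the setting stated in the text, while everything else is left at library defaults.

```python
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import accuracy_score, classification_report

# 100 boosting rounds over shallow decision trees (the library's default
# weak learner); other hyperparameters are left at their defaults.
ada = AdaBoostClassifier(n_estimators=100, random_state=42)
ada.fit(X_train, y_train)

pred = ada.predict(X_test)
print("Accuracy:", accuracy_score(y_test, pred))
print(classification_report(y_test, pred))
```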

Table 3. Classification Table of AdaBoost Classifier

                 Precision   Recall   F1-Score   Support
Extroversion     0.94        0.89     0.91       32125
Neurotic         0.91        0.96     0.93       30324
Agreeable        0.89        0.88     0.88       30712
Conscientious    0.91        0.85     0.88       24282
Open             0.88        0.95     0.91       28721

5.2.4 Hybrid XGB Classifier

XGBoost, or eXtreme Gradient Boosting, is a highly efficient boosting algorithm used for supervised learning tasks. It sequentially builds a strong ensemble of decision trees, optimizing an objective function through regularization techniques such as L1 and L2 regularization. XGBoost is known for its scalability, supporting parallel processing for large datasets, and its flexibility in handling various loss functions, making it a popular choice for diverse machine learning applications owing to its strong performance and interpretability. To apply the XGBoost classifier, the dataset was first prepared by cleaning it and splitting it into features and target variables. Next, the XGBoost model was initialized using the XGBClassifier from the XGBoost library and trained on the training data using the fit function. Once trained, the model was used to make predictions on the test dataset with the predict function. The model's performance was then evaluated by comparing the predicted labels with the actual labels using metrics such as accuracy, precision, recall, and F1-score. Optionally, the model was fine-tuned by adjusting hyperparameters and conducting feature engineering to enhance its predictive capabilities. Through this methodology, the hybrid XGBoost algorithm was applied effectively to the classification task, achieving high accuracy and robust performance. Table 4 presents the classification report of the hybrid XGBoost classifier, detailing precision, recall, F1-score, and support for each class label. The hybrid XGBoost classifier demonstrated exceptional accuracy, achieving a score of 0.9566, or approximately 95.66%, indicating that it correctly classified the majority of instances in the dataset. This level of accuracy is indicative of the robustness and efficacy of the hybrid XGBoost algorithm in handling classification tasks, making it a powerful tool for predictive modelling.
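The hybrid XGBoost step can be sketched as follows, reusing the split from the earlier sketches; the hyperparameter values are illustrative rather than the authors' tuned settings, and the label-encoding step is included because XGBClassifier expects integer class labels.

```python
from xgboost import XGBClassifier
from sklearn.preprocessing import LabelEncoder
from sklearn.metrics import accuracy_score, classification_report

# Encode the trait names as integers for XGBoost.
le = LabelEncoder()
y_train_enc = le.fit_transform(y_train)
y_test_enc = le.transform(y_test)

xgb = XGBClassifier(n_estimators=200, learning_rate=0.1, max_depth=6,
                    random_state=42)   # illustrative settings, not tuned
xgb.fit(X_train, y_train_enc)

pred = xgb.predict(X_test)
print("Accuracy:", accuracy_score(y_test_enc, pred))
print(classification_report(y_test_enc, pred, target_names=le.classes_))
```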

Table 4. Classification Table of Hybrid XGBoost Classifier

                 Precision   Recall   F1-Score   Support
Extroversion     0.96        0.96     0.96       32125
Neurotic         0.97        0.96     0.96       30324
Agreeable        0.94        0.95     0.95       30712
Conscientious    0.95        0.95     0.95       24282
Open             0.96        0.96     0.96       28721

5.3 Accuracy of Classifiers

The comparison plot shown in figure 6 provides a concise overview of the accuracies achieved by different classifiers, including hybrid XGBoost, AdaBoost, Random Forest, and the Decision Tree. The plot demonstrates that hybrid XGBoost achieved the highest accuracy of 95.66%, followed by AdaBoost with 90.67%, Random Forest with 90.25%, and the Decision Tree with 73.96%. These results suggest that hybrid XGBoost outperforms the other classifiers in terms of predictive accuracy. Researchers can use this information to make informed decisions about which classifier to use for their specific classification tasks, with hybrid XGBoost being the preferred choice when high accuracy is paramount.
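The comparison of Figure 6 can be reproduced directly from the reported accuracies; the sketch below assumes matplotlib and uses only the figures quoted in the text.

```python
import matplotlib.pyplot as plt

models = ["Hybrid XGBoost", "AdaBoost", "Random Forest", "Decision Tree"]
accuracy = [95.66, 90.67, 90.25, 73.96]   # values reported in the text

plt.bar(models, accuracy, color="steelblue")
plt.ylabel("Accuracy (%)")
plt.ylim(0, 100)
for i, acc in enumerate(accuracy):
    plt.text(i, acc + 1, f"{acc:.2f}", ha="center")
plt.title("Accuracy evaluation of the four algorithms")
plt.tight_layout()
plt.show()
```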

 


Figure 6. Accuracy evaluation of the four algorithms

VI. Conclusion

This research investigated the practical uses of machine learning, specifically in the field of personality prediction using the Big Five traits. As machine learning becomes more common in different areas of our lives, its capacity to categorize people according to their distinct personality profiles provides useful information for focused advertising strategies and customized services. Our objective was to enhance the comprehension and application of personality data in particular contexts, such as competitive tests, by creating a prediction framework. The study evaluated and compared the efficacy of several classification algorithms in the context of predictive modelling. The experiments and the analysis of their results showed that hybrid XGBoost attained the highest accuracy of 95.66%, surpassing AdaBoost (90.67%), Random Forest (90.25%), and the Decision Tree (73.96%). The hybrid XGBoost's exceptional precision underscores its efficacy in managing intricate classification assignments, rendering it the preferred option for predictive modelling in situations where utmost accuracy is crucial. These findings highlight the significance of choosing a suitable method according to the specific demands of the classification task at hand.

Further studies might pursue improvements and enhancements of hybrid XGBoost to increase its effectiveness across a broader range of applications. Several intriguing possibilities exist for future development in this field. Firstly, further research might focus on improving the interpretability and efficiency of machine learning models, specifically in the context of predicting personality characteristics. This might involve developing innovative methods to clarify the decision-making process of complex models such as the hybrid XGBoost, hence enhancing confidence and comprehension among end-users. In addition, research on the influence of integrating domain-specific characteristics and contextual information into personality prediction models could provide significant insights. Furthermore, investigating the utilization of ensemble approaches in combination with deep learning architectures for personality prediction offers a captivating avenue for future research. Finally, it is essential to carefully analyse the ethical consequences of using personality prediction models in fields such as recruiting and tailored marketing, in order to guarantee fairness, transparency, and accountability in their application.

Acknowledgement

Authors express their earnest thankfulness to the Karunya Institute of Technology and Sciences for the research environment and infrastructure that has been extended.

Author Contributions

Tungala Sreelakshmi: Conceptualization; Investigation; Methodology; Roles/Writing - original draft.

J. Jagannath: Investigation; Project administration; Resources; Supervision; Roles/Writing - original draft; and Writing - review & editing.

Jemy George: Data curation; Formal analysis; Resources; Investigation; Writing - review & editing.

Mangadoddi Achsah: Validation; Visualization; Investigation; Roles/Writing - original draft.

Martin Sagayam.K: Writing - review & editing; Formal analysis; and Resources.

 

 

Data Availability Declaration

This manuscript’s data are not publicly available. However, the data can be provided by the authors upon request.

VII. References

[1] Vinciarelli, Alessandro, and Gelareh Mohammadi. "A survey of personality computing." IEEE Transactions on Affective Computing 5.3 (2014): 273-291.

[2] Shumanov, Michael, Holly Cooper, and Mike Ewing. "Using AI predicted personality to enhance advertising effectiveness." European Journal of Marketing 56.6 (2022): 1590-1609.

[3] Dhelim, Sahraoui, et al. "A survey on personality-aware recommendation systems." Artificial Intelligence Review (2022): 1-46.

[4] Millecamp, Martijn, et al. "To explain or not to explain: the effects of personal characteristics when explaining music recommendations." Proceedings of the 24th international conference on intelligent user interfaces. 2019.

[5] Lam, Xuan Nhat, et al. "Addressing cold-start problem in recommendation systems." Proceedings of the 2nd international conference on Ubiquitous information management and communication. 2008.

[6] Allport, Gordon W., and Henry S. Odbert. "Trait-names: A psycho-lexical study." Psychological monographs 47.1 (1936): i.

[7] Cattell, Raymond Bernard, and Harold John Butcher. The prediction of achievement and creativity. Ardent Media, 1968.

[8] Eysenck, Sybil BG, and Hans J. Eysenck. "Crime and personality: an empirical study of the three-factor theory." The British Journal of Criminology 10.3 (1970): 225-239.

[9] Roccas, Sonia, et al. "The big five personality factors and personal values." Personality and social psychology bulletin 28.6 (2002): 789-801.

[10] Dhokley, Waheeda, et al. "A Novel Approach to Predict Personality of a Person." Proceedings of the 4th International Conference on Advances in Science & Technology (ICAST2021). 2021.

[11] Kulkarni, Atharva, Tanuj Shankarwar, and Siddharth Thorat. "Personality prediction via CV analysis using machine learning." International Journal of Engineering Research & Technology (IJERT) 10 (2021): 544-547.

[12] Ali, Irfan, et al. "Resume classification system using natural language processing and machine learning techniques." Mehran University Research Journal Of Engineering & Technology 41.1 (2022): 65-79.

[13] Robey, Allan, et al. "Personality prediction system through CV analysis." Int. Res. J. Eng. Technol 6.2 (2019): 2395-0056.

[14] Azucar, Danny, Davide Marengo, and Michele Settanni. "Predicting the Big 5 personality traits from digital footprints on social media: A meta-analysis." Personality and individual differences 124 (2018): 150-159.

[15] Menon, Vishnu M., and H. A. Rahulnath. "A novel approach to evaluate and rank candidates in a recruitment process by estimating emotional intelligence through social media data." 2016 International Conference on Next Generation Intelligent Systems (ICNGIS). IEEE, 2016.

[16] Stachl, Clemens, et al. "Predicting personality from patterns of behavior collected with smartphones." Proceedings of the National Academy of Sciences 117.30 (2020): 17680-17687.

[17] Garg, Swati, et al. "A review of machine learning applications in human resource management." International Journal of Productivity and Performance Management 71.5 (2022): 1590-1610.

[18] Kramer, Oliver. "Scikit-learn." Machine Learning for Evolution Strategies. Springer, 2016.
