Supervised Machine Learning for Fake News Detection

From Sinfronteras

Revision as of 19:51, 27 April 2019

Declaration


Acknowledgement

Thanks to Muhammad, Graham and Mark


Abstract


Introduction


Chapter 1


Chapter 2 - Training a Supervised Machine Learning Model for Fake News Detection

Supervised Text Classification for Fake News Detection Using Machine Learning Models


Procedure

  • The Dataset
  • Splitting the data into Train and Test data
  • Cleaning the data
  • Building the Document-Term Matrix
  • Model Building
  • Cross validation
  • Making predictions from the model created and displaying a Confusion matrix
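
The steps above can be sketched in R with the RTextTools package. This is a minimal, illustrative sketch only: `news` is a hypothetical data frame with `text` and `label` columns (placeholder names, not the project's actual dataset), and the cleaning options and the 70/30 split are assumptions, not the project's actual settings.

```r
library(RTextTools)

# `news` is a hypothetical data frame with columns `text` and `label`
# (e.g. REAL/FAKE); the column names are placeholders.
n         <- nrow(news)
train_end <- floor(0.7 * n)            # 70% train / 30% test split

# Cleaning the data and building the Document-Term Matrix in one step
dtm <- create_matrix(news$text, language = "english",
                     removeNumbers = TRUE, removeStopwords = TRUE,
                     removePunctuation = TRUE, stemWords = TRUE,
                     toLower = TRUE)

# Container holding the train/test partitions and the labels
container <- create_container(dtm, news$label,
                              trainSize = 1:train_end,
                              testSize  = (train_end + 1):n,
                              virgin = FALSE)

# Model building: train one of the supported algorithms, e.g. SVM
model   <- train_model(container, "SVM")
results <- classify_model(container, model)

# Confusion matrix: predicted vs. actual labels on the test data
table(predicted = results$SVM_LABEL,
      actual    = news$label[(train_end + 1):n])
```
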


Results


Summary of Results

Algorithms                | Author                | Package                                           | Keyword
Naive Bayes               | Bayes, Thomas         | We used RTextTools, which depends on e1071        | NB*
Support vector machine    | Meyer et al., 2012    | We used RTextTools, which depends on e1071        | SVM*
Random forest             | Liaw and Wiener, 2002 | We used RTextTools, which depends on randomForest | RF
Extreme Gradient Boosting | Chen & Guestrin, 2016 | xgboost                                           | XGBOOST*
General linearized models | Friedman et al., 2010 | We used RTextTools, which depends on glmnet       | GLMNET*
Maximum entropy           | Jurka, 2012           | We used RTextTools, which depends on maxent       | MAXENT*

Accuracy columns (values left blank): Kaggle fake news dataset, Fake news Detector dataset and Gofaaas Fake News Dataset (500 rows), each with "Test data (30% of the dataset)" and "Cross validation" sub-columns, plus "Using the Kaggle Model*" and "Using the Detector Model**".

* Accuracy of predictions made with the Kaggle Fake News Model over the Gofaaas Fake News Dataset

** Accuracy of predictions made with the Fake News Detector Model over the Gofaaas Fake News Dataset





Algorithms                | Author                | Package                                           | Keyword
Naive Bayes               | Bayes, Thomas         | e1071                                             | NB*
Support vector machine    | Meyer et al., 2012    | We used RTextTools, which depends on e1071        | SVM*
Random forest             | Liaw and Wiener, 2002 | randomForest                                      | RF
Extreme Gradient Boosting | Chen & Guestrin, 2016 | xgboost                                           | XGBOOST*
General linearized models | Friedman et al., 2010 | We used RTextTools, which depends on glmnet       | GLMNET*
Maximum entropy           | Jurka, 2012           | We used RTextTools, which depends on maxent       | MAXENT*

Accuracy columns (values left blank): Kaggle fake news dataset (20,800 rows), Fake news Detector dataset (10,000 rows) and Gofaaas Fake News Dataset (500 rows), each with "Test data (70% of the dataset)" and "Cross validation" sub-columns, plus "Using the Kaggle Model^" and "Using the Detector Model^^".

* Low-memory algorithm

** Very high-memory algorithm

^

^^



Evaluation of Results

We evaluate our approach in different settings. First, we perform cross-validation on our noisy training set; second, and more importantly, we train models on the training set and validate them against a manually created gold standard. Moreover, we evaluate two variants, i.e., including and excluding user features. [Weakly supervised learning for fake news detection on Twitter]
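
For the cross-validation setting, RTextTools provides an n-fold helper. The sketch below assumes a `container` object already built with create_container() over the Document-Term Matrix and labels; the fold count and algorithm choice are arbitrary illustrative values.

```r
library(RTextTools)

# n-fold cross-validation, assuming `container` was built with
# create_container(); 5 folds and "SVM" are illustrative choices.
cv <- cross_validate(container, nfold = 5, algorithm = "SVM")

# cross_validate() reports the accuracy of each fold and the
# mean accuracy over all folds.
```
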


The Gofaaas-Fake News Detector R Package


Installation


Functions


Datasets used


Kaggle Fake News Dataset

https://www.kaggle.com/c/fake-news/data


Distribution of the data:

The distribution of Stance classes in train_stances.csv is as follows:

rows  | unrelated | discuss | agree     | disagree
49972 | 0.73131   | 0.17828 | 0.0736012 | 0.0168094



Fake News Detector Dataset


Gofaaas Fake News Dataset


Algorithms


Naive Bayes

Naïve Bayes is based on Bayes' theorem; therefore, in order to understand Naïve Bayes it is important to first understand Bayes' theorem.

Bayes' theorem is a mathematical formula for determining conditional probability, which is the probability of something happening given that something else has already occurred.


P(c|x) = P(x|c) × P(c) / P(x)
  • P(c|x) is the posterior probability of class (target) given predictor (attribute).
  • P(c) is the prior probability of class.
  • P(x|c) is the likelihood which is the probability of predictor given class.
  • P(x) is the prior probability of predictor.


Prior probability, in Bayesian statistical inference, is the probability of an event before new data is collected.

Posterior probability is the revised probability of an event occurring after taking into consideration new information.

In statistical terms, the posterior probability is the probability of event A occurring given that event B has occurred.
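
As a worked example of the theorem, with made-up, purely illustrative numbers: suppose 30% of articles are fake (prior), a sensational headline appears in 80% of fake articles (likelihood) and in 20% of real ones.

```r
# Worked Bayes' theorem example with made-up, illustrative numbers.
# c = "article is fake", x = "article has a sensational headline".
p_c             <- 0.30  # prior P(c)
p_x_given_c     <- 0.80  # likelihood P(x|c)
p_x_given_not_c <- 0.20  # P(x | not c)

# Prior probability of the predictor, P(x), by total probability:
p_x <- p_x_given_c * p_c + p_x_given_not_c * (1 - p_c)   # 0.38

# Posterior P(c|x) = P(x|c) * P(c) / P(x)
p_c_given_x <- p_x_given_c * p_c / p_x
p_c_given_x   # ~0.63: the headline evidence raises the probability from 0.30
```

Note how the new information (the headline) revises the prior probability of 0.30 upward to a posterior of about 0.63.
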



Support vector machine


Random forest


Extreme Gradient Boosting


The RTextTools package

RTextTools - A Supervised Learning Package for Text Classification:
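
A distinguishing feature of the package is that several of the algorithms in the summary table can be trained and compared through one interface. A rough sketch, again assuming a `container` built with create_container() and an illustrative selection of algorithms:

```r
library(RTextTools)

# Train several of the algorithms from the summary table in one call;
# the selection here is illustrative.
models  <- train_models(container, algorithms = c("SVM", "GLMNET", "MAXENT"))
results <- classify_models(container, models)

# Precision, recall and F-scores per algorithm on the test partition
analytics <- create_analytics(container, results)
summary(analytics)
```
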


Chapter 3 - Gofaas Web App

A way to interact with, test, and display the model results


Conclusion