References
[1] X. Ding, Y. Zhang, T. Liu, and J. Duan, ‘Using Structured Events to Predict Stock Price Movement: An Empirical Investigation’, in Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2014, pp. 1415–1425.
[2] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, ‘BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding’, arXiv preprint arXiv:1810.04805, 2018.
[3] D. Araci, ‘FinBERT: Financial Sentiment Analysis with Pre-trained Language Models’, arXiv preprint arXiv:1908.10063, 2019.
[4] O. Onyshchak, ‘Stock Market Dataset’, Kaggle, 2020.
[5] S. Ioffe and C. Szegedy, ‘Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift’, arXiv preprint arXiv:1502.03167, 2015.
[6] A. Vaswani et al., ‘Attention Is All You Need’, arXiv preprint arXiv:1706.03762, 2017.
[7] K. He, X. Zhang, S. Ren, and J. Sun, ‘Deep Residual Learning for Image Recognition’, arXiv preprint arXiv:1512.03385, 2015.
[8] P. Micikevicius et al., ‘Mixed Precision Training’, arXiv preprint arXiv:1710.03740, 2018.
[9] D. Peer, B. Keulen, S. Stabinger, J. Piater, and A. Rodríguez-Sánchez, ‘Improving the Trainability of Deep Neural Networks through Layerwise Batch-Entropy Regularization’, arXiv preprint, 2022.