Prof. Eduard Hovy, Fellow of ACL, University of Melbourne, Australia
Biography: Eduard Hovy is the Executive Director of Melbourne Connect (a research and tech transfer centre at the University of Melbourne), a professor at the University of Melbourne’s School of Computing and Information Systems, and a research professor at the Language Technologies Institute in the School of Computer Science at Carnegie Mellon University. In 2020–21 he served as Program Manager in DARPA’s Information Innovation Office (I2O), where he managed programs in Natural Language Technology and Data Analytics. Dr. Hovy holds adjunct professorships in CMU’s Machine Learning Department and at USC (Los Angeles). Dr. Hovy completed a Ph.D. in Computer Science (Artificial Intelligence) at Yale University in 1987 and was awarded honorary doctorates from the National Distance Education University (UNED) in Madrid in 2013 and the University of Antwerp in 2015. He is one of the initial 17 Fellows of the Association for Computational Linguistics (ACL) and is also a Fellow of the Association for the Advancement of Artificial Intelligence (AAAI). Dr. Hovy’s research focuses on computational semantics of language and addresses various areas in Natural Language Processing and Data Analytics, including in-depth machine reading of text, information extraction, automated text summarization, question answering, the semi-automated construction of large lexicons and ontologies, and machine translation. In early 2022 his Google h-index was 95, with over 54,000 citations. Dr. Hovy is the author or co-editor of eight books and around 400 technical articles and is a popular invited speaker. From 2003 to 2015 he was co-Director of Research for the Department of Homeland Security’s Center of Excellence for Command, Control, and Interoperability Data Analytics, a distributed cooperation of 17 universities. In 2001 Dr. 
Hovy served as President of the Association for Computational Linguistics (ACL), in 2001–03 as President of the International Association for Machine Translation (IAMT), and in 2010–11 as President of the Digital Government Society (DGS). Dr. Hovy regularly co-teaches Ph.D.-level courses and has served on advisory and review boards for research institutes and funding organizations in Germany, Italy, the Netherlands, Ireland, Singapore, and the USA.
Speech Title: Toward Understanding the Limitations of Deep Neural Networks
Deep neural networks today can do amazing things. But they also fail spectacularly and unexpectedly, in ways people find difficult to predict and explain. This talk illustrates some interesting failures and uses them to suggest ways in which we can better understand how neural networks operate. Such understanding enables us to focus our research and network training more effectively.
Assoc. Prof. Shafiq Rayhan Joty, Research Director at Salesforce Research, Nanyang Technological University (NTU), Singapore
Biography: Shafiq Joty is a tenured Associate Professor in the School of Computer Science and Engineering (SCSE) at NTU, where he founded and currently leads the NTU-NLP group. He is also a research director at Salesforce Research, where he directs the NLP group. Shafiq's research has primarily focused on developing language analysis tools (e.g., syntactic parsers, language models, NER, discourse parsers, coherence models) and downstream NLP applications including machine translation, question answering, text summarization, controllable generation, and vision-language tasks. A significant part of his current research focuses on multilingual processing and the robustness of NLP models. Shafiq served (or will serve) as a PC co-chair of SIGDIAL-2023, as a senior area chair for ACL’22 and EMNLP’21 in the Machine Learning (ML) and NLP Applications tracks respectively, and as an area chair for ICLR-23, ACL'19-21, EMNLP'19, NAACL’21 and EACL’21 in the ML, QA, and Discourse tracks. He is an action editor for ACL-RR. He gave tutorials at ACL’19, ICDM’18 and COLING’18 on discourse processing and conversation modeling. His research has contributed to 17 patents and more than 110 papers in top-tier NLP and ML conferences and journals including ACL, EMNLP, NAACL, NeurIPS, ICML, ICLR, CVPR, ECCV, ICCV, CL and JAIR. More about him can be found at https://raihanjoty.github.io/
Speech Title: Model, Data and Task Engineering for NLP
With the advent of deep learning and neural methods, NLP research over the last decade has shifted from feature engineering to model engineering, primarily focusing on inventing new architectures for NLP problems. Two other related factors that have only recently received more attention are how to better use the available data and which objectives or tasks to optimize, referred to as data engineering and task engineering, respectively. In this talk, I will present our recent work along these three dimensions: model, data, and task engineering for NLP. In particular, I will first present novel neural architectures for parsing texts into hierarchical structures at the sentence and discourse levels, and efficient parallel encoding of such structures for better language understanding and generation. I will then present effective data augmentation methods for supervised and unsupervised machine translation and other cross-lingual tasks. Finally, I will present a new objective for text generation tasks that aims to mitigate the degeneration issues prevalent in neural generation models, and a unified multitask framework for lifelong few-shot language learning based on prompt tuning. With empirical results, I will argue that while model engineering is crucial to the advancement of the field, the other two factors are more important for building robust NLP systems.
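One widely used data-engineering technique for machine translation is back-translation: target-side monolingual sentences are run through a reverse translation model to synthesize new source–target training pairs. The sketch below illustrates only this general idea, not the speaker's actual methods; the dictionary-based `translate_fr_to_en` function is a stand-in for a real trained reverse model.

```python
# Toy back-translation data augmentation for machine translation.
# The "model" below is a word-for-word dictionary stand-in for a real
# trained NMT system (an assumption for illustration only).

FR_TO_EN = {"le": "the", "chat": "cat", "dort": "sleeps"}

def translate_fr_to_en(sentence: str) -> str:
    """Stand-in for a reverse (target-to-source) translation model."""
    return " ".join(FR_TO_EN.get(w, w) for w in sentence.split())

def back_translate(monolingual_fr: list[str]) -> list[tuple[str, str]]:
    """Turn target-side monolingual data into synthetic (source, target)
    training pairs by translating each target sentence backwards."""
    pairs = []
    for fr in monolingual_fr:
        synthetic_en = translate_fr_to_en(fr)  # synthetic source side
        pairs.append((synthetic_en, fr))       # keep the real target side
    return pairs

if __name__ == "__main__":
    print(back_translate(["le chat dort"]))
    # [('the cat sleeps', 'le chat dort')]
```

The synthetic pairs are then mixed into the parallel training data; the target side stays clean human text, which is why back-translation tends to help generation quality.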
Prof. Thepchai Supnithi, Lab. Director, National Electronics and Computer Technology Center
Biography: Thepchai Supnithi is currently Director of the Artificial Intelligence Research Group at the National Electronics and Computer Technology Center. His interests include Natural Language Processing, Knowledge Engineering, and Knowledge Graphs. He serves on the executive committees of the Asia-Pacific Society for Computers in Education, AACL, and IJCNLP. He is currently a vice president of the Artificial Intelligence Association of Thailand. His main research covers machine translation, text summarization, corpus construction, and applied AI in many domains, such as culture, education, and medicine. He has also served as a committee member for many conferences, including ICCE, IJCNLP, ACL, MT Summit, KICSS, and IJCKG.
Speech Title: A Recent Deep Learning Approach for Thai Text Summarization
Due to information overload, it is not easy to gather and analyze all available information in a short period of time, so text summarization has become an important issue. This talk explains the history of Thai text summarization, the direction of the field, and current research, with a particular focus on applying deep learning approaches to the text summarization task.
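For contrast with the deep learning approaches the talk focuses on, a classical extractive baseline scores each sentence by the frequency of its words and keeps the top-scoring ones. This is a minimal sketch of that generic baseline, not a method from the talk; the `summarize` function and its scoring rule are illustrative assumptions.

```python
# Minimal frequency-based extractive summarizer (a classical baseline,
# not the deep learning approach discussed in the talk).
from collections import Counter

def summarize(sentences: list[str], k: int = 1) -> list[str]:
    """Pick the k sentences whose words are most frequent overall."""
    words = [w.lower() for s in sentences for w in s.split()]
    freq = Counter(words)

    def score(s: str) -> float:
        # Average word frequency, so long sentences are not favored.
        toks = s.lower().split()
        return sum(freq[t] for t in toks) / len(toks)

    ranked = sorted(sentences, key=score, reverse=True)
    chosen = set(ranked[:k])
    # Preserve the original order of the selected sentences.
    return [s for s in sentences if s in chosen]
```

Neural abstractive summarizers replace this hand-written scoring with learned sequence-to-sequence models that can rewrite rather than merely select sentences, which is especially relevant for Thai, where word segmentation itself is nontrivial.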