Submission Topics

NLPIR is one of the key academic conferences for presenting research results and new developments in the area of Natural Language Processing and Information Retrieval. Its 9th edition, NLPIR 2025, will be held at Kyushu University, Fukuoka, Japan, during December 14-16, 2025.

The topics of interest for submission include, but are not limited to:

  • Core NLP & Data Science

    Foundations of Language Processing
    • Data/text mining, corpus linguistics, and psycholinguistic modeling
    • Basic NLP pipelines: tokenization, POS tagging, lemmatization, dependency parsing, and semantic role labeling
    • Low-resource language engineering and cross-lingual adaptation

    Linguistic Analysis & Understanding
    • Syntax, semantics, discourse analysis, and pragmatics
    • Multimodal speech recognition/synthesis (ASR/TTS) and conversational AI
    • Diachronic corpora, temporal reasoning, and evolving language models

    Knowledge Systems & Semantics
    • Automated knowledge acquisition, ontology generation/alignment, and semantic web technologies
    • Neuro-symbolic integration: combining logic-based reasoning with neural networks

    Content Analysis & IR
    • Topic modeling, event/anomaly detection, and sentiment/emotion analysis
    • Document summarization, plagiarism detection, and authorship attribution
    • Dynamic/personalized IR, adversarial retrieval, and cross-language systems

    Social & Multimedia Analysis
    • Personality/emotion detection in social media, misinformation tracking
    • Multimodal IR (text, image, video) and virality prediction



  • AI-Driven Methods & Innovations

    Large Language Models (LLMs) & Transformers
    • Architectures (BERT, GPT, T5, LLaMA) for NLU, generation, and few-shot learning
    • Domain-specific LLMs (e.g., BioGPT, Codex) and tools like ChatGPT, DeepSeek, Claude
    • Ethical challenges: bias mitigation, hallucination control, and AI-generated content detection

    Generative AI & Automation
    • Abstractive summarization, synthetic data generation, and conversational agents
    • Multimodal LLMs (e.g., GPT-4V) for vision-language tasks

    Graph & Deep Learning
    • GNNs for co-occurrence graphs, knowledge graph completion, and dynamic networks
    • Swarm intelligence hybridized with transformer architectures

    Efficiency & Scalability
    • Model compression (pruning, quantization), federated learning, and edge NLP
    • Distributed training frameworks for trillion-parameter models



  • Cross-Cutting Themes

    Human-Centric NLP
    • Interactive AI: chatbots, dynamic query resolution, and personalized recommendation systems
    • Explainability (XAI) and visualization of attention mechanisms

    Machine Translation & Multilinguality
    • Zero-shot translation, LLM-driven low-resource adaptation, and post-editing workflows

    Decentralized & Collaborative Systems
    • Blockchain for decentralized knowledge graphs, federated search, and privacy-preserving NLP

    Ethics & Governance
    • AI safety, fairness audits, and regulatory compliance (e.g., EU AI Act)
    • Combating misinformation and deepfakes in social/content platforms



  • Emerging Frontiers

    • AI for Science: LLMs in biomedical NLP, climate text analysis, and legal document processing
    • Embodied AI: Language models integrated with robotics and real-world interaction
    • Self-Supervised Learning: Pre-training paradigms beyond transformers