Artificial Intelligence: Foundations, Theory, and Algorithms

Foundation Models for Natural Language Processing

Pre-trained Language Models Integrating Media

Publisher Description

This open access book provides a comprehensive overview of the state of the art in research and applications of Foundation Models and is intended for readers familiar with basic Natural Language Processing (NLP) concepts. 

In recent years, a revolutionary new paradigm has been developed for training NLP models. These models are first pre-trained on large collections of text documents to acquire general syntactic knowledge and semantic information. They are then fine-tuned for specific tasks, which they can often solve with superhuman accuracy. When the models are large enough, they can be instructed by prompts to solve new tasks without any fine-tuning. Moreover, they can be applied to a wide range of media and problem domains, ranging from image and video processing to learning robot control. Because they provide a blueprint for solving many tasks in artificial intelligence, they have been called Foundation Models.
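As a rough, hedged illustration of this pre-train / fine-tune / prompt paradigm (not code from the book), the following Python sketch assumes the Hugging Face transformers library, with the small public checkpoints bert-base-uncased and gpt2 standing in for much larger Foundation Models:

# Minimal sketch of the paradigm described above; checkpoints and task are illustrative.
from transformers import (AutoTokenizer,
                          AutoModelForSequenceClassification,
                          pipeline)
import torch

# 1) Reuse knowledge acquired during pre-training: load a pre-trained encoder
#    and attach a fresh classification head for the downstream task.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

# 2) Fine-tune on task-specific data (a single gradient step shown here;
#    a real setup would iterate over a labelled dataset with an optimizer).
batch = tokenizer(["A delightful read."], return_tensors="pt")
loss = model(**batch, labels=torch.tensor([1])).loss
loss.backward()

# 3) Prompting: a sufficiently large generative model can be instructed
#    directly, without fine-tuning (gpt2 is only a tiny stand-in).
generator = pipeline("text-generation", model="gpt2")
print(generator("Translate English to French: cheese =>", max_new_tokens=5))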


After a brief introduction to basic NLP models, the main pre-trained language models, BERT, GPT, and the sequence-to-sequence Transformer, are described, as well as the concepts of self-attention and context-sensitive embeddings. Then, different approaches to improving these models are discussed, such as expanding the pre-training criteria, increasing the length of input texts, or including extra knowledge. An overview of the best-performing models for about twenty application areas is then presented, e.g., question answering, translation, story generation, dialog systems, and generating images from text. For each application area, the strengths and weaknesses of current models are discussed, and an outlook on further developments is given. In addition, links are provided to freely available program code. A concluding chapter summarizes the economic opportunities, mitigation of risks, and potential developments of AI.
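To make the self-attention concept mentioned above concrete, here is a minimal PyTorch sketch (not taken from the book; names and dimensions are illustrative) of scaled dot-product self-attention, which turns static token embeddings into context-sensitive ones:

import math
import torch

def self_attention(x, w_q, w_k, w_v):
    # x: (seq_len, d_model) token embeddings; w_*: (d_model, d_head) projection matrices
    q, k, v = x @ w_q, x @ w_k, x @ w_v          # queries, keys, values
    scores = q @ k.T / math.sqrt(k.shape[-1])    # pairwise similarity of tokens
    weights = torch.softmax(scores, dim=-1)      # attention distribution per token
    return weights @ v                           # context-sensitive embeddings

x = torch.randn(4, 8)                            # 4 tokens, d_model = 8
w_q, w_k, w_v = (torch.randn(8, 8) for _ in range(3))
print(self_attention(x, w_q, w_k, w_v).shape)    # torch.Size([4, 8])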

Genre: Computers & Internet
Release Date: May 23, 2023
Language: English (EN)
Length: 454 pages
Publisher: Springer International Publishing
Seller: Springer Nature B.V.
Size: 63.2 MB
Artificial Intelligence and Cognitive Science (2023)
Fundamental Approaches to Software Engineering (2023)
Tools and Algorithms for the Construction and Analysis of Systems (2019)
Computer and Information Sciences (2016)
Hands-On Large Language Models (2024)
Semantic Systems. The Power of AI and Knowledge Graphs (2019)
Künstliche Intelligenz (2021)
Artificial Intelligence (2024)
Retrieval-Augmented Generation (RAG): Empowering Large Language Models (LLMs) (2023)
Hypergraph Computation (2023)
Representation Learning for Natural Language Processing (2020)
Quantum Computing in Artificial Intelligence: Enhancing Applications (2024)
Big Data and Artificial Intelligence in Digital Finance (2022)
AI Ethics (2023)
Heterogeneous Graph Representation Learning and Applications (2022)
Towards a Code of Ethics for Artificial Intelligence (2017)
Multi-Modal Robotic Intelligence (2025)
Neural Text-to-Speech Synthesis (2023)