AI-HPC.org
On this page
- LLM Pre-training
  - Transformer Architecture
    - Self-Attention Mechanism
    - Multi-Head Attention
    - Feed Forward Network (FFN)
  - Positional Encoding
    - Absolute Positional Encoding (Sinusoidal)
    - Rotary Positional Embedding (RoPE)
  - Training Objectives
    - Masked Language Modeling (MLM)
    - Causal Language Modeling (CLM)