Researchers at Microsoft have unveiled Kosmos-2, the successor to Kosmos-1: a Multimodal Large Language Model (MLLM) that adds the ability to perceive object descriptions and ground text in the visual world. By representing referring expressions as links in Markdown format, i.e. "[text span](bounding boxes)", Kosmos-2 can tie spans of generated text to specific regions of an image, enabling multimodal grounding, referring expression comprehension and generation, perception-language tasks, and language understanding and generation. The work marks a step toward artificial general intelligence, laying a foundation for embodied AI and the convergence of language, multimodal perception, action, and world modeling, and bringing AI a step closer to interacting with the real world alongside humans. With just 1.6B parameters, the model is relatively small, and it will be released openly on GitHub.
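As a rough illustration of that "[text span](bounding boxes)" format, the sketch below renders a referring expression as a Markdown-style link whose target is a pair of grid-cell location tokens. The discretization into a 32x32 grid of image bins follows the general idea described for Kosmos-2, but the helper names and the exact token spelling here are assumptions for illustration, not the model's actual tokenizer.

```python
def box_to_location_tokens(box, image_size, num_bins=32):
    """Discretize a pixel-space bounding box (x1, y1, x2, y2) into two
    grid-cell location tokens, assuming a num_bins x num_bins grid
    (token naming is illustrative, not Kosmos-2's real vocabulary)."""
    x1, y1, x2, y2 = box
    width, height = image_size
    # Map the top-left and bottom-right corners to grid-cell indices.
    col1 = min(int(x1 / width * num_bins), num_bins - 1)
    row1 = min(int(y1 / height * num_bins), num_bins - 1)
    col2 = min(int(x2 / width * num_bins), num_bins - 1)
    row2 = min(int(y2 / height * num_bins), num_bins - 1)
    # Flatten each (row, col) pair into a single bin index.
    top_left = row1 * num_bins + col1
    bottom_right = row2 * num_bins + col2
    return f"<loc_{top_left}><loc_{bottom_right}>"


def ground_span(text_span, box, image_size):
    """Render a referring expression as a Markdown-style link:
    '[text span](location tokens)'."""
    return f"[{text_span}]({box_to_location_tokens(box, image_size)})"


# Example: ground the phrase "a snowman" to a region of a 640x480 image.
caption = "An image of " + ground_span("a snowman", (120, 80, 400, 440), (640, 480))
print(caption)  # An image of [a snowman](<loc_166><loc_948>)
```

Framing grounding as ordinary next-token prediction over strings like this is what lets a single language-model backbone handle captioning, referring, and grounding without task-specific heads.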