Spring AI – Multimodality – Orbis Sensualium Pictus
Description

Humans process knowledge simultaneously across multiple modes of data input. The way we learn and our experiences are all multimodal: we don't rely on just vision, just audio, or just text. These foundational principles of learning were articulated by the father of modern education, John Amos Comenius, in his 1658 work "Orbis Sensualium Pictus": "All things that are naturally connected ought to be taught in combination."

Contrary to those principles, our approach to machine learning has often focused on specialised models tailored to process a single modality. For instance, we developed audio models for tasks like text-to-speech or speech-to-text, and computer vision models for tasks such as object detection and classification. However, a new wave of multimodal large language models is starting to emerge. Examples include OpenAI's GPT-4 Vision, Google's Vertex AI Gemini Pro Vision, Anthropic's Claude 3, and the open-source offerings LLaVA and BakLLaVA, which can accept multiple inputs, including text, images, audio, and video, and generate text responses by integrating these inputs. These multimodal capabilities enable the models to process and generate text in conjunction with other modalities such as images, audio, or video.

Spring AI – Multimodality

Multimodality refers to a model's ability to simultaneously understand and process information from various sources, including text, images, audio, and other data formats. The Spring AI…
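As a rough illustration of the feature described above, the following is a minimal sketch of sending an image together with a text question through the Spring AI Message API. It assumes an auto-configured ChatModel bean (backed by a multimodal model such as GPT-4 Vision or Gemini Pro Vision) and a hypothetical test image at the classpath location /multimodal.test.png; exact class names and constructor signatures have shifted between Spring AI milestone releases, so treat this as a sketch rather than version-exact code.

```java
import java.util.List;

import org.springframework.ai.chat.messages.UserMessage;
import org.springframework.ai.chat.model.ChatModel;
import org.springframework.ai.chat.model.ChatResponse;
import org.springframework.ai.chat.prompt.Prompt;
import org.springframework.ai.model.Media;
import org.springframework.core.io.ClassPathResource;
import org.springframework.util.MimeTypeUtils;

class MultimodalExample {

    private final ChatModel chatModel; // injected, backed by a multimodal LLM

    MultimodalExample(ChatModel chatModel) {
        this.chatModel = chatModel;
    }

    String describeImage() {
        // Attach the image as a Media object alongside the text instruction
        // in a single UserMessage (hypothetical image path for illustration).
        var userMessage = new UserMessage(
                "Explain what you see in this picture.",
                List.of(new Media(MimeTypeUtils.IMAGE_PNG,
                        new ClassPathResource("/multimodal.test.png"))));

        // The multimodal model integrates both inputs and returns a text answer.
        ChatResponse response = chatModel.call(new Prompt(List.of(userMessage)));
        return response.getResult().getOutput().getContent();
    }
}
```

The key point is that the image is not sent through a separate vision endpoint: it travels as media attached to the same user message as the text, and the model combines both modalities when generating its reply.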
