GeWu

AI big model platform

GeWu is an AI big model platform built on a multi-lingual pre-trained model that supports 113 languages and generates dialogue across all modalities: text, speech, image, and video. GeWu was selected as one of the AIIA 2022 Top 20 big model cases and as a typical case of the “intelligence empowering all businesses” program, and it has taken first place in 48 tasks on widely recognized benchmarks such as CLUE and WMT. For customer-specified scenarios, we provide an end-to-end big model solution covering data processing, prompt engineering, secondary pre-training, fine-tuning, and inference deployment.

Get demo
Multi-lingual Pre-training Model
Using a multi-lingual pre-training method based on contrastive knowledge embedding, we iteratively fuse massive amounts of non-aligned unlabeled multilingual data, bilingual sentence pairs, and cross-lingual knowledge data to train a multi-lingual pre-trained big model that supports 113 languages. The model is available at parameter scales from lightweight to tens of billions, and innovatively integrates multiple languages, tasks, and scenarios into a single pluggable, flexibly expandable task framework.
Learn more
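As an illustration only (the platform's actual training objective is not public), the cross-lingual alignment of bilingual sentence pairs described above is commonly achieved with a contrastive loss: embeddings of a sentence and its translation are pulled together while non-matching pairs in the batch are pushed apart. The function below is a minimal sketch of such an InfoNCE-style loss; the name `info_nce_loss` and the temperature value are our assumptions, not part of GeWu.

```python
import numpy as np

def info_nce_loss(src_emb, tgt_emb, temperature=0.07):
    """Contrastive (InfoNCE-style) loss over a batch of bilingual pairs.

    src_emb, tgt_emb: (batch, dim) arrays where row i of each is a
    translation pair. Row i's positive is tgt_emb[i]; all other rows
    in the batch serve as in-batch negatives.
    """
    # L2-normalize so the dot product is cosine similarity
    src = src_emb / np.linalg.norm(src_emb, axis=1, keepdims=True)
    tgt = tgt_emb / np.linalg.norm(tgt_emb, axis=1, keepdims=True)
    logits = src @ tgt.T / temperature          # (batch, batch) similarity
    # numerically stable softmax over each row
    logits -= logits.max(axis=1, keepdims=True)
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    # cross-entropy with the diagonal (true pair) as the target class
    return float(-np.log(np.diag(probs)).mean())

# Correctly aligned pairs should score a lower loss than misaligned ones
rng = np.random.default_rng(0)
emb = rng.normal(size=(8, 16))
aligned = info_nce_loss(emb, emb)        # each row paired with itself
shuffled = info_nce_loss(emb, emb[::-1]) # positives scattered off-diagonal
```

Training on such pairs, alongside unlabeled multilingual text, is one standard way a single encoder learns a shared representation space across many languages.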
Generative Big Model
To address the scarcity of aligned text, image, audio, and video data and the limited capability of single-modal architectures, we pursue breakthroughs in content understanding and controllable multi-modal chain-of-thought generation, building a controllable generative big model that supports input and output in all four modalities. On this technological basis, the system supports full-modal human-computer dialogue, multi-modal personalized content generation, and intelligent content editing, providing in-depth, professional, comprehensive, and customized content creation services.
Learn more
Machine Translation Big Model
Adopting a large-scale pre-training method based on a new Mixture-of-Experts (MoE) design, we proposed the concept of star-ring attention MoE, breaking through the limits of model capacity and language barriers and enabling multi-lingual machine translation with a single model. The model scales up to hundreds of billions of parameters, radically improving the baseline performance of low-resource multilingual machine translation.
Learn more
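The star-ring attention MoE itself is proprietary, but the general mechanism that lets MoE models reach hundreds of billions of parameters while keeping per-token compute low is sparse routing: a small gating network selects, for each token, only the top-k experts out of a large pool. The following is a generic sketch of that routing step under our own simplifying assumptions (dense numpy loops, one linear layer per expert); it is not GeWu's implementation.

```python
import numpy as np

def moe_layer(x, expert_weights, gate_weights, top_k=2):
    """Route each token to its top-k experts and mix their outputs.

    x:              (tokens, d_model) input activations
    expert_weights: (n_experts, d_model, d_model), one linear map per expert
    gate_weights:   (d_model, n_experts) gating network parameters
    """
    scores = x @ gate_weights                       # (tokens, n_experts)
    top = np.argsort(scores, axis=1)[:, -top_k:]    # top-k expert ids per token
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        sel = top[t]
        gates = np.exp(scores[t, sel])
        gates /= gates.sum()                        # softmax over selected experts
        for g, e in zip(gates, sel):                # weighted mix of expert outputs
            out[t] += g * (x[t] @ expert_weights[e])
    return out

# Sanity check: if every expert is the identity map, routing must return
# the input unchanged, because the gate weights sum to 1 per token.
rng = np.random.default_rng(1)
x = rng.normal(size=(4, 8))
experts = np.stack([np.eye(8)] * 3)
gate = rng.normal(size=(8, 3))
y = moe_layer(x, experts, gate)
```

Because only k experts run per token, total parameter count can grow with the number of experts while inference cost stays roughly constant, which is what makes hundred-billion-parameter multilingual translation models practical.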