MetaCube
MetaAvatar/MetaSJTU: an online version of SJTU, powered by a series of algorithms (stylized face parameter estimation, mocap) plus basic game features.
MetaCube/MetaSJTU is a (series of) project(s) that I participated in at MetaCube Lab, SJTU. The primary goal is to build a digital model of our university and integrate it into a simple multiplayer online game, where users can have their own featured avatars and interact with each other. We explored a number of proof-of-concept machine learning techniques like motion capture, stylized face parameter estimation, ASR+TTS for agent talking, etc. Below I list a few things where I was the owner:
- Build an online version (model) of SJTU and develop game features with Unity.
- Reproduce the pipeline of AgileAvatar, which estimates stylized face parameters from a single RGB image.
- Deploy a face blendshape estimation model, and deploy an LLM+TTS+Audio2Face chain to talk with the agent (a sketch of this chain follows the list).
- Implement the web demo with Vue and the backend API with Flask.
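For the agent-talking item above, the data flow is ASR → LLM → TTS, with the synthesized audio then driving Audio2Face to animate the avatar's face. Below is a minimal Python sketch of that chain; the three service functions are hypothetical stand-ins for whatever backends are actually deployed, so only the data flow is meant to match the description.

```python
# Minimal sketch of the agent-talking chain: ASR -> LLM -> TTS.
# All three functions are hypothetical placeholders for the deployed services.

def asr_transcribe(wav_bytes: bytes) -> str:
    """Speech-to-text. Stand-in for the deployed ASR service."""
    raise NotImplementedError

def llm_reply(prompt: str) -> str:
    """Chat completion. Stand-in for the deployed LLM service."""
    raise NotImplementedError

def tts_synthesize(text: str) -> bytes:
    """Text-to-speech. The resulting audio is what gets fed to
    Audio2Face to drive the avatar's facial blendshapes."""
    raise NotImplementedError

def talk_to_agent(user_wav: bytes) -> bytes:
    """One conversation turn: user audio in, agent audio out."""
    question = asr_transcribe(user_wav)
    answer = llm_reply(question)
    return tts_synthesize(answer)
```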
We first reproduced the AgileAvatar paper: we collected many real face images, avatar models, and accessory models, randomly assembled characters from them, and rendered the results to images with Unity. The paired data were then used to train a StyleGANv2 model plus an MLP parameter estimator (a minimal sketch of the estimator stage follows the clip below). Here is a short clip illustrating the training progress of the StyleGANv2.

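For context, here is a minimal PyTorch sketch of the estimator stage. It assumes the paired data reduce to (face embedding, avatar parameter vector) pairs produced by the random-assembly + Unity rendering step; the dimensions, network shape, and normalization are illustrative assumptions, not the exact AgileAvatar architecture.

```python
# Sketch of the parameter estimator trained on synthetic (embedding, params) pairs.
import torch
import torch.nn as nn

EMB_DIM, PARAM_DIM = 512, 64  # e.g. a face embedding -> avatar parameter sliders

class ParamEstimator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(EMB_DIM, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, PARAM_DIM), nn.Sigmoid(),  # assume params normalized to [0, 1]
        )

    def forward(self, emb):
        return self.net(emb)

model = ParamEstimator()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

def train_step(emb_batch, param_batch):
    """One regression step over a batch of paired synthetic data."""
    opt.zero_grad()
    loss = loss_fn(model(emb_batch), param_batch)
    loss.backward()
    opt.step()
    return loss.item()
```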
With the reproduced pipeline, I built a Flask service and web demo, shown below, which supports uploading an image and estimating its stylized face parameter set. The estimated parameters are then used to assemble an avatar that "looks like" the input human face. Note that this was only a POC; we later improved the art quality with better character models.
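The service essentially exposes one upload endpoint wrapping the pipeline. Here is a minimal Flask sketch; the route name and the estimate_parameters() entry point are hypothetical placeholders for the real code.

```python
# Minimal Flask sketch of the estimation service behind the web demo.
from flask import Flask, request, jsonify

app = Flask(__name__)

def estimate_parameters(img_bytes: bytes) -> list:
    """Hypothetical entry point into the reproduced AgileAvatar pipeline."""
    raise NotImplementedError

@app.route("/estimate", methods=["POST"])
def estimate():
    # Expect a multipart upload with an "image" field from the Vue frontend.
    if "image" not in request.files:
        return jsonify(error="no image uploaded"), 400
    img_bytes = request.files["image"].read()
    params = estimate_parameters(img_bytes)
    return jsonify(parameters=params)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```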
Here is a simple Unity demo illustrating our MetaSJTU game. It features basic game controls and state syncing (with Mirror), webcam mocap (MediaPipe) with retargeting to avatars, and customized face creation. It is also only a proof of concept.
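The webcam mocap stage can be sketched with MediaPipe's Pose solution plus OpenCV, as below. The send_to_unity() bridge is a hypothetical placeholder for however the landmarks are actually streamed into Unity for retargeting.

```python
# Sketch of webcam mocap: MediaPipe Pose (legacy solutions API) + OpenCV.
import cv2
import mediapipe as mp

def send_to_unity(landmarks):
    """Hypothetical bridge: stream landmarks to the Unity client (e.g. UDP/WebSocket)."""
    pass

pose = mp.solutions.pose.Pose(min_detection_confidence=0.5)
cap = cv2.VideoCapture(0)  # default webcam

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB; OpenCV captures BGR.
    results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.pose_landmarks:
        # 33 normalized body landmarks, to be retargeted onto the avatar rig.
        landmarks = [(lm.x, lm.y, lm.z) for lm in results.pose_landmarks.landmark]
        send_to_unity(landmarks)

cap.release()
```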