# Perpetual Motion: Generating Unbounded Human Motion

###### Abstract

The modeling of human motion using machine learning methods has been widely studied. In essence it is a time-series modeling problem of predicting how a person will move in the future given how they moved in the past. Existing methods, however, typically have a short time horizon, predicting only a few frames to a few seconds of human motion. Here we focus on long-term prediction; that is, generating long, potentially unbounded, sequences of plausible human motion. Furthermore, we do not rely on a long sequence of input motion for conditioning; rather, we can predict how someone will move from as little as a single pose. Such a model has many uses in graphics (video games and crowd animation) and vision (as a prior for human motion estimation or for dataset creation). To address this problem, we propose a model to generate non-deterministic, ever-changing, perpetual human motion, in which the global trajectory and the body pose are cross-conditioned. We introduce a novel KL-divergence term with an implicit, unknown, prior. Specifically, we train with a heavy-tailed function of the KL divergence of a white-noise Gaussian process, which allows temporal dependency in the latent sequence. We perform systematic experiments to verify the model's effectiveness and find that it is superior to baseline methods.

## 1 Introduction

Given a static human pose, e.g. a person sitting on a sofa, we can predict plausible motion sequences of that person over long time horizons: the person might stand up and walk out of the room, or might lie down on the sofa to rest. Despite a long history of methods that learn to model human motion, generating natural movements over arbitrarily long sequences remains extremely challenging, for two major reasons. First, long-term human motion is intrinsically stochastic. Without the ability to model a rich set of future motion sequences, existing work often suffers mode collapse, producing static poses or unrealistic body configurations. Second, human motion consists of a global motion trajectory and local variation in the limb poses. To be perceived as natural, the global trajectory and the local poses must correspond in a physically plausible way.

Because of these challenges, most existing learning-based approaches focus on short-term motion prediction, which aims to predict a deterministic human motion sequence in the very near future (often less than a second) from a relatively long observation (e.g. aksan2019structured ; martinez2017human ). The issue is so prevalent that synthesizing motions as short as several seconds is considered “long-term”. Furthermore, when the global motion trajectory is considered, it is often treated as a pre-defined input from a user or a separate path-planning module.

In this work, our goal is to generate significantly longer, or “perpetual”, motion: given a short motion sequence or even a static body pose, the goal is to generate non-deterministic ever-changing human motions in the future. To this end, we design a two-stream variational autoencoder (see Sec. 3) with RNNs, in which the change of body pose and the change of the body translation are conditioned on each other. The model is then learned from motion capture data AMASS:2019 , without additional information like user input or action labels.

Similar to work that synthesizes generic time series, e.g. chung2015recurrent ; sonderby2016ladder ; aksan2018stcn , our novel model is auto-regressive over time, and has stochastic modules inside. During training, an evidence lower bound (ELBO) is maximized. During testing, the latent variables are sampled from the inference posterior. Importantly, our model does not have an explicit prior model as in other studies. Instead, we apply a Charbonnier penalty function charbonnier1994two on the KL-divergence term, and the implicit latent sequence prior is then different from a standard normal distribution, making the latent variables possess temporal dependencies. In addition, our novel KL-divergence term still retains a valid ELBO, and we observe that it effectively overcomes posterior collapse during training.

To verify the effectiveness of our method, we perform systematic experiments to evaluate the model’s representation power, analyze the frequency and diversity of the generated human motions, and conduct a perceptual study to evaluate the naturalness of the generated motions. We show that the proposed method outperforms two state-of-the-art baseline methods. Qualitatively, after generating 72000 frames (10 minutes) of motion with our method, the body motion is still plausible.

In summary, our contribution is as follows: (1) To address the task of generating perpetual motion from short-term sequences or static poses, we propose a two-stream cross-conditional variational RNN network. Its effectiveness and superior performance to state-of-the-art baseline methods are verified by experiments. (2) We establish a systematic evaluation pipeline to verify the effectiveness of methods, from the perspective of model representation power, motion frequency, diversity, and naturalness. (3) We design a novel KL-divergence term to implicitly make the latent sequence prior possess temporal dependency. Also, this novel KL-divergence term still leads to a valid ELBO, and effectively avoids posterior collapse.

## 2 Related Work

##### Short-term motion prediction.

From the motion-prediction perspective, models are expected to produce deterministic and accurate short sequences. In martinez2017human , the model is trained on 1 second of motion, and predicts up to 400 milliseconds into the future. This experimental setting is widely used in follow-up studies, such as Gui_2018_ECCV ; pavllo2018quaternet ; pavllo2019modeling ; ghosh2017learning ; gui2018adversarial ; li2018convolutional ; wang2019imitation ; aksan2019structured . Although long-term prediction is also considered in these studies, the generated future sequence is often only seconds long. In addition, Hernandez_2019_ICCV formulates motion prediction and planning as a temporal in-painting problem. Rather than evaluating joint-position errors, they propose the power-spectrum distribution as a metric to compare generated results with the ground truth.

##### Long-term motion generation and character animation.

From the animation perspective, generated motions are much longer. The work of yan2019convolutional generates the motion sequence as a whole from a pre-defined latent Gaussian process; in their experiments, sequences of 1000 frames (about 1 minute) are generated. The work of pavllo2018quaternet also designs a network for character control, based on user inputs and given walking paths. The studies of holden2017phase ; starke2019neural can also generate quite long human motion, although the motion is blended from several pre-defined action categories. Physical simulation is employed for motion generation in studies like peng2016terrain , which proposes a reinforcement learning method with physical constraints; the mass, size, friction and other attributes of the character model are specified in advance.

##### Deep variational Bayesian methods for generic time series modeling.

As human motion is a special kind of time series, deep variational Bayesian methods for generating speech, handwriting, natural language, and other time series are related as well. VRNN chung2015recurrent designs a learnable prior to incorporate temporal dependencies between latent variables. To improve performance, the work of sonderby2016ladder proposes a latent prior with a ladder structure for temporal dependency modeling. Based on such a ladder structure, STCN aksan2018stcn employs temporal convolutions to process information, and achieves state-of-the-art performance for speech and handwriting generation.

##### Ours versus others.

Methodologically, our method is a special type of deep variational Bayesian method; practically, it aims at generating endless motion over time. Therefore, in this paper, we compare our method with two state-of-the-art methods, one from each field, which are conservatively modified to fit our task for a fair comparison. See Sec. 4.2 for details.

## 3 Method

### 3.1 Human Motion Representation

In this paper, we represent human motion as time sequences of the 3D pelvis location, $\mathbf{t}_t$, and the articulated body pose, $\theta_t$. The body is represented by the SMPL SMPL:2015 kinematic tree rooted at the pelvis, which includes 22 body joints. We use the relative joint rotations in this kinematic tree to represent the body pose, with rotations represented in a 6D continuous space zhou2019continuity , which has proven well suited to back-propagation. Therefore, the body pose at each time instant is $\theta_t \in \mathbb{R}^{22 \times 6}$, and the body translation at each time is $\mathbf{t}_t \in \mathbb{R}^{3}$. The pelvis location and rotation are with respect to the world coordinate system.
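As a concrete illustration of the 6D rotation representation, the first two columns of a rotation matrix are orthonormalized back into a full matrix via Gram-Schmidt, following zhou2019continuity; the function name and array layout below are our own choices:

```python
import numpy as np

def rot6d_to_matrix(r6: np.ndarray) -> np.ndarray:
    """Map a 6D rotation representation (the first two matrix columns,
    flattened) to a 3x3 rotation matrix via Gram-Schmidt orthonormalization,
    in the spirit of Zhou et al. (2019)."""
    a1, a2 = r6[:3], r6[3:]
    b1 = a1 / np.linalg.norm(a1)           # first column, normalized
    b2 = a2 - np.dot(b1, a2) * b1          # remove the b1 component
    b2 = b2 / np.linalg.norm(b2)
    b3 = np.cross(b1, b2)                  # third column by cross product
    return np.stack([b1, b2, b3], axis=-1)  # columns b1, b2, b3
```

Unlike quaternions or axis-angle, this map is continuous, which is the property that eases back-propagation.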

To make the motion representation invariant to the world coordinate system, we unify the world coordinates of different motion sequences. For each sequence, we set the negative gravity direction as the Z-axis, set the horizontal component of the vector from the left hip to the right hip as the X-axis, and determine the Y-axis by the right-hand rule. The world origin is located at the body pelvis in the first frame. As a pre-processing step, every mocap sequence is transformed to this new world coordinate system before training; as a post-processing step, we transform all generated bodies back to their original world coordinates. Since this resolves the inconsistent world coordinates in the AMASS dataset AMASS:2019 , we refer to this new world coordinate setting as the “AMASS coordinate”.
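The canonicalization above can be sketched as follows; the function signature, the array shapes, and the assumption that gravity points along the negative Z-axis of the raw data (so the new Z-axis is simply (0, 0, 1)) are illustrative:

```python
import numpy as np

def canonicalize(pelvis, l_hip, r_hip, joints):
    """Transform a joint sequence into the unified 'AMASS coordinate':
    Z = negative gravity (assumed (0, 0, 1) here), X = horizontal component
    of the frame-0 left-hip -> right-hip vector, Y by the right-hand rule,
    origin = frame-0 pelvis. `joints` has shape (T, J, 3)."""
    z = np.array([0.0, 0.0, 1.0])        # negative gravity direction
    x = r_hip - l_hip                     # left hip -> right hip, frame 0
    x = x - np.dot(x, z) * z              # keep only the horizontal part
    x = x / np.linalg.norm(x)
    y = np.cross(z, x)                    # right-hand rule
    R = np.stack([x, y, z], axis=0)       # rows: new basis in world coords
    return (joints - pelvis) @ R.T        # translate, then rotate
```

The inverse of this rigid transform is stored per sequence so generated bodies can be mapped back to their original world coordinates.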

### 3.2 Network Design

We use the variational autoencoder framework vae to model a generic time sequence $\mathbf{x}_{1:T}$. As in bayer2014learning , we have

$$p(\mathbf{x}_{1:T}) = \prod_{t=1}^{T} p(\mathbf{x}_t \mid \mathbf{x}_{1:t-1}) \tag{1}$$

For each conditional probability, we derive an evidence lower bound (ELBO) as

$$\log p(\mathbf{x}_t \mid \mathbf{x}_{1:t-1}) \ge \mathbb{E}_{q(\mathbf{z}_t \mid \mathbf{x}_{1:t})}\!\left[\log p(\mathbf{x}_t \mid \mathbf{z}_{1:t}, \mathbf{x}_{1:t-1})\right] - \mathrm{KL}\!\left(q(\mathbf{z}_t \mid \mathbf{x}_{1:t}) \,\|\, p(\mathbf{z}_t \mid \mathbf{x}_{1:t-1})\right) \tag{2}$$

in which $q(\mathbf{z}_t \mid \mathbf{x}_{1:t})$ is the inference model (encoder), $p(\mathbf{x}_t \mid \mathbf{z}_{1:t}, \mathbf{x}_{1:t-1})$ is the generation model (decoder), and the prior $p(\mathbf{z}_t \mid \mathbf{x}_{1:t-1})$, which depends only on past frames, is assumed to enable the encoder to perform prediction.

##### Cross-conditional two-stream variational RNN.

We design an auto-regressive model, illustrated in Fig. 2, in which the translation stream and the body pose stream are conditioned on each other. The activation function is Swish ramachandran2017searching , i.e. $\mathrm{Swish}(x) = x \cdot \sigma(x)$ with $\sigma$ the sigmoid, which makes training faster in our trials. In this network, the encoder takes the body configuration in the current frame as input, and combines it with the RNN states that incorporate information from past inputs; the posterior model is thereby effectively realized. In addition, the RNN in the decoder uses the latent sequence up to the current time to produce the output; combined with the residual connection, the generation model is effectively realized.
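The cross-conditioning idea can be sketched with a toy one-step update, in which each stream's hidden state is updated from its own input together with the *other* stream's previous hidden state; the plain linear cell, the dimensions, and the initialization below are illustrative only, not the paper's architecture:

```python
import numpy as np

def swish(x):
    """Swish activation: x * sigmoid(x)."""
    return x / (1.0 + np.exp(-x))

class CrossStreamStep:
    """Toy one-step sketch of the cross-conditional two-stream idea.
    Each stream consumes its own input plus the other stream's state."""

    def __init__(self, d_in, d_h, seed=0):
        rng = np.random.default_rng(seed)
        self.W_trans = rng.normal(0.0, 0.1, (d_h, d_in + d_h))  # translation stream
        self.W_pose = rng.normal(0.0, 0.1, (d_h, d_in + d_h))   # pose stream

    def __call__(self, x_trans, x_pose, h_trans, h_pose):
        # Cross-conditioning: each stream sees the other stream's state.
        new_trans = swish(self.W_trans @ np.concatenate([x_trans, h_pose]))
        new_pose = swish(self.W_pose @ np.concatenate([x_pose, h_trans]))
        return new_trans, new_pose
```

In the actual model these plain linear maps are replaced by gated RNN cells (LSTM or GRU) with encoder and decoder stages around them.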

##### $\alpha$-residual connection.

Previous work has reported that RNNs can lead to first-frame jump artifacts; i.e. a considerable discontinuity between the last given input frame and the first generated frame. Martinez et al. martinez2017human use a residual connection to overcome this artifact. However, the standard residual connection does not work for us, since the training loss is very small in the beginning, and the model parameters are not updated. To overcome this issue, we employ an exponential moving average scheme as a type of residual connection. With a hyper-parameter $\alpha$, this “$\alpha$-residual” gives $\hat{\mathbf{x}}_t = \alpha \mathbf{x}_{t-1} + (1-\alpha) f(\cdot)$, in which $f$ incorporates the other network modules. Note that $\alpha$ cannot be learned via back-propagation, since it increases rapidly towards 1 and the network parameters then stop updating effectively. We fix $\alpha$ to a constant default value in all our trials.
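The $\alpha$-residual is a one-line blend of the previous frame and the network output; a minimal sketch (the value $\alpha = 0.9$ is purely illustrative, not the paper's setting):

```python
import numpy as np

def alpha_residual(x_prev, f_out, alpha=0.9):
    """Exponential-moving-average residual: blend the previous frame
    x_prev with the network output f_out. alpha is a fixed, non-learned
    hyper-parameter; 0.9 here is an illustrative value."""
    return alpha * x_prev + (1.0 - alpha) * f_out
```

With $\alpha$ close to 1 the first generated frame stays close to the last input frame, which suppresses the first-frame jump.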

### 3.3 Training Loss

Provided ground-truth body translations $\mathbf{t}_{1:T}$ and body poses $\theta_{1:T}$, we feed the sub-sequences $\mathbf{t}_{1:T-1}$ and $\theta_{1:T-1}$ to the model as input, and obtain the predictions $\hat{\mathbf{t}}_{2:T}$ and $\hat{\theta}_{2:T}$. The training loss comprises three components, and is given by $\mathcal{L} = \lambda_{rec}\mathcal{L}_{rec} + \lambda_{vposer}\mathcal{L}_{vposer} + \lambda_{KL}\mathcal{L}_{KL}$.

##### The reconstruction loss $\mathcal{L}_{rec}$:

This component corresponds to the expectation term in the ELBO (Eq. 2), and is given by

$$\mathcal{L}_{rec} = \sum_{t} \left\| \hat{\mathbf{x}}_t - \mathbf{x}_t \right\|_2^2 + \beta \sum_{t} \left\| (\hat{\mathbf{x}}_t - \hat{\mathbf{x}}_{t-1}) - (\mathbf{x}_t - \mathbf{x}_{t-1}) \right\|_2^2 \tag{3}$$

which includes frame-wise and time-difference reconstruction, applied to both the translation and the pose stream. $\beta$ is a hyper-parameter.
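The frame-wise plus time-difference reconstruction described above can be sketched as follows; the weight name `beta` and the mean-over-time reduction are our assumptions:

```python
import numpy as np

def reconstruction_loss(x_hat, x, beta=1.0):
    """Frame-wise plus time-difference (velocity) reconstruction for a
    predicted sequence x_hat and ground truth x, each of shape (T, D).
    beta weights the velocity term and is illustrative."""
    frame = np.mean(np.sum((x_hat - x) ** 2, axis=-1))
    v_hat, v = np.diff(x_hat, axis=0), np.diff(x, axis=0)
    diff = np.mean(np.sum((v_hat - v) ** 2, axis=-1))
    return frame + beta * diff
```

The velocity term penalizes temporal jitter that a purely frame-wise loss would ignore.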

##### The pose naturalness loss $\mathcal{L}_{vposer}$:

Since our body is represented in the SMPL kinematic tree, we can use the pre-trained VPoser pavlakos2019expressive to encourage the naturalness of generated body poses, as employed in pavlakos2019expressive ; PROX:2019 ; PSI:2019 . Specifically, given the VPoser encoder $\Phi$, we have

$$\mathcal{L}_{vposer} = \sum_{t} \left\| \Phi(\hat{\theta}_t) \right\|_2^2 \tag{4}$$

in which $\hat{\theta}_t$ is the predicted body pose excluding the global orientation (the pelvis rotation).

##### The KL-divergence loss $\mathcal{L}_{KL}$:

In our cross-conditional VRNN model, we let $\mathcal{L}_{KL}$ be the average of our proposed KL-divergence terms corresponding to the two streams. Studies like chung2015recurrent ; sonderby2016ladder ; aksan2018stcn ; razavi2018preventing explicitly design the latent sequence prior with temporal dependency, and sample latent variables from the designed prior during testing. Despite their effectiveness, the designed prior distribution might not be appropriate, and extra computational cost is involved, since the latent prior as well as the inference posterior are both learned from data. In our work, we do not formulate the latent prior explicitly. Instead, we propose a novel KL-divergence term, which implicitly allows the prior to possess temporal dependency. Thus, the latent prior does not have an explicit formula. During testing, we draw latent samples from the inference posterior $q(\mathbf{z}_t \mid \mathbf{x}_{1:t})$, which depends on the RNN states and is regularized by that implicit latent prior. Specifically, we assume the latent prior $p(\mathbf{z}_{1:T})$ is not a standard normal distribution, so that $\mathrm{KL}(q \,\|\, p)$ differs from $\mathrm{KL}(q \,\|\, \mathcal{N}(\mathbf{0}, \mathbf{I}))$. We let

$$\mathcal{L}_{KL} = \Psi\!\left( \mathrm{KL}\!\left( q(\mathbf{z}_{1:T} \mid \mathbf{x}_{1:T}) \,\|\, \mathcal{N}(\mathbf{0}, \mathbf{I}) \right) \right) \tag{5}$$

in which $\Psi$ is the Charbonnier penalty function, $\Psi(s) = \sqrt{s^2 + \epsilon^2} - \epsilon$ with a small constant $\epsilon$, charbonnier1994two . To investigate its properties, without loss of generality, we assume the feature dimension in the sequences $\mathbf{x}$ and $\mathbf{z}$ is 1D. We find that

###### Proposition 1.

The new KL-divergence in Eq. 5 can: (1) lead to a higher ELBO than its counterpart with a standard normal distribution prior, (2) introduce temporal dependencies in the latent space, (3) avoid posterior collapse numerically, and (4) retain a low computational cost.

###### Proof.

We factorize

$$q(\mathbf{z}_{1:T} \mid \mathbf{x}_{1:T}) = \prod_{t=1}^{T} q(\mathbf{z}_t \mid \mathbf{z}_{1:t-1}, \mathbf{x}_{1:t}) \tag{6}$$

According to our network design, we actually have $q(\mathbf{z}_t \mid \mathbf{z}_{1:t-1}, \mathbf{x}_{1:t}) = \mathcal{N}(\mu_t, \sigma_t^2)$, in which $\mu_t$ and $\sigma_t$ are derived from the RNN states (Fig. 2). Therefore, we can rewrite the KL-divergence term to

$$\mathcal{L}_{KL} = \Psi\!\left( \sum_{t=1}^{T} \mathbb{E}_{q(\mathbf{z}_{1:t-1} \mid \mathbf{x}_{1:t-1})}\!\left[ D_t \right] \right) \tag{7}$$

with $D_t$ being the KL-divergence between the posterior and the standard normal distribution at time $t$. From the above derivation, we can see that a temporal correlation term appears in the formula: each $D_t$ depends on the past latent samples through the RNN states. Also, since the Charbonnier penalty of a non-negative value does not exceed that value, with the same generation model and reconstruction loss, our novel KL-divergence term leads to a higher ELBO.

Since the Charbonnier penalty function is a scalar function, our method retains a low computational cost, unlike methods with an explicit latent prior, e.g. chung2015recurrent ; aksan2018stcn . In addition, note that the derivative of the Charbonnier function is $s / \sqrt{s^2 + \epsilon^2}$. Consequently, the gradients from the KL-divergence term become small when the KL-divergence itself is small. Numerically, this effectively overcomes the posterior collapse problem. ∎
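The numerical properties used in the proof can be checked in a few lines; the shifted Charbonnier form and the epsilon value below are assumptions for illustration, while the closed-form Gaussian KL is standard:

```python
import numpy as np

EPS = 0.1  # illustrative epsilon for the Charbonnier function

def charbonnier(s, eps=EPS):
    """Shifted Charbonnier penalty Psi(s) = sqrt(s^2 + eps^2) - eps.
    Psi(0) = 0 and Psi(s) <= s for s >= 0, so -Psi(KL) >= -KL."""
    return np.sqrt(s ** 2 + eps ** 2) - eps

def charbonnier_grad(s, eps=EPS):
    """d Psi / ds = s / sqrt(s^2 + eps^2): vanishes as s -> 0,
    damping the KL gradient when the KL is already small."""
    return s / np.sqrt(s ** 2 + eps ** 2)

def kl_to_std_normal(mu, sigma):
    """Closed-form KL( N(mu, diag(sigma^2)) || N(0, I) ) for a
    diagonal Gaussian posterior."""
    return 0.5 * np.sum(sigma ** 2 + mu ** 2 - 1.0 - 2.0 * np.log(sigma))
```

The vanishing gradient near zero is exactly the mechanism that stops the optimizer from pushing the posterior all the way onto the prior.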

In our experiments, the training-loss weights are kept fixed for all trials, without weight annealing. With a trained model, motion is generated in an auto-regressive manner after providing the initial frame(s). Due to the sampling module, the generated motions are non-deterministic.

## 4 Experiments

### 4.1 Datasets

In our experiments, we use AMASS AMASS:2019 , which unifies diverse motion capture data into the SMPL SMPL:2015 body representation. We use ACCAD and CMU for training, and HumanEva and MPI-Mosh for testing; all are captured at 120Hz. A brief summary is given in Tab. 1. In ACCAD, each motion sequence records a characteristic action, e.g. walking, kicking, etc., and actions are performed on the same ground plane. In CMU, many motion sequences contain multiple actions that are not restricted to a common ground plane, e.g. climbing stairs or jumping from a high place. We discard sequences shorter than 120 frames (1 second).

| dataset | #sequences | #subjects | avg. #frames | min #frames | max #frames |
|---|---|---|---|---|---|
| ACCAD accad | 252 | 20 | 764 | 202 | 6361 |
| CMU mocap_cmu | 2061 | 106 | 1702 | 128 | 22948 |
| HumanEva sigal2010humaneva | 28 | 3 | 2180 | 360 | 3552 |
| MPI-Mosh Loper:SIGASIA:2014 | 77 | 19 | 1413 | 362 | 3287 |

### 4.2 Investigated Models

##### Baseline 1: QuaterNet pavllo2019modeling .

From related motion generation methods we select QuaterNet as a baseline, since it not only achieves state-of-the-art motion-prediction results, but also performs well at generating long-term motions with global movement. However, the original QuaterNet takes a motion path as input rather than generating the global trajectory, and only considers locomotion (e.g. walking or running) in long-term generation. To use QuaterNet for our task and enable a fair comparison, we make the following modifications to pavllo2019modeling : (1) generating global body translations and local body poses jointly, (2) replacing the quaternion representation with the 6D rotation representation, due to its better performance zhou2019continuity and for fair comparison with ours, (3) adding a sampling layer to produce non-deterministic motion sequences as in ours, and (4) adding the $\alpha$-residual to overcome the first-frame jump. We denote the version without random sampling as the “Q” model, and the version with random sampling as the “VQ” model; both are trained with the same loss as ours (see Sec. 3.3).

##### Baseline 2: STCN aksan2018stcn .

From the related variational Bayesian methods, we select the stochastic temporal convolutional network (STCN) as a baseline, due to its superior performance on handwriting and speech generation. Also, compared to the CNN-based method of yan2019convolutional , STCN is auto-regressive, so it can generate arbitrarily long sequences, and it has a data-adaptive latent sequence prior. To use STCN for our task and enable a fair comparison of the network architecture, our principal modifications are: (1) increasing the receptive field of the temporal encoder to 128, and (2) using the reconstruction loss in Sec. 3.3. We retain its original KL-divergence term, and also learn a latent sequence prior from data. We denote our modified STCN as the “S” model.

##### Our model instances.

We train our cross-conditional VRNN models on either ACCAD or CMU, to analyze the influence of the training data. Additionally, we train models with or without the $\alpha$-residual connection, to study its impact beyond overcoming the first-frame jump. Moreover, we use either LSTM hochreiter1997long or GRU gru cells, to study the influence of the RNN cell type. For brevity, we denote the family of our model instances as “C” models.

### 4.3 Evaluation on Model Representation Power

We input every test sequence into the models, and compute the reconstruction error, the time-difference reconstruction error, and the negative ELBO. Note that the negative ELBO includes the loss weights, but does not include the VPoser loss.

Results are shown in Tab. 2. First, the $\alpha$-residual considerably reduces the reconstruction error, which is probably due to the weighted average between the ground-truth input and the network output. The time-difference reconstruction error improves as well, indicating that the $\alpha$-residual consistently improves model representation power. Second, C-models trained on CMU consistently outperform their counterparts trained on ACCAD, indicating that a larger training set with more motion variation is favourable. In addition, C-models with GRU perform slightly better than their LSTM counterparts. Moreover, the S-model has larger time-difference reconstruction errors, which suggests that its reconstructed motion is less smooth.

The first three metric columns are on HumanEva, the last three on MPI-Mosh; “rec.” is the reconstruction error and “Δ-rec.” the time-difference reconstruction error.

| Model | rec. (HumanEva) | Δ-rec. (HumanEva) | −ELBO (HumanEva) | rec. (MPI-Mosh) | Δ-rec. (MPI-Mosh) | −ELBO (MPI-Mosh) |
|---|---|---|---|---|---|---|
| Q-ACCAD | 0.013 | 0.002 | - | 0.013 | 0.001 | - |
| VQ-ACCAD | 0.301 | 0.005 | 0.325 | 0.171 | 0.004 | 0.190 |
| VQ-Res-ACCAD | 0.023 | 0.002 | 0.047 | 0.015 | 0.001 | 0.034 |
| VQ-CMU | 0.302 | 0.005 | 0.326 | 0.173 | 0.004 | 0.192 |
| VQ-Res-CMU | 0.031 | 0.002 | 0.041 | 0.019 | 0.001 | 0.024 |
| S-ACCAD | 0.091 | 0.005 | 0.130* | 0.088 | 0.005 | 0.126* |
| S-CMU | 0.063 | 0.003 | 0.108* | 0.056 | 0.003 | 0.099* |
| C-LSTM-ACCAD | 0.256 | 0.005 | 0.283 | 0.155 | 0.004 | 0.178 |
| C-Res-LSTM-ACCAD | 0.040 | 0.002 | 0.053 | 0.017 | 0.001 | 0.024 |
| C-Res-GRU-ACCAD | 0.031 | 0.002 | 0.043 | 0.016 | 0.001 | 0.023 |
| C-LSTM-CMU | 0.071 | 0.003 | 0.091 | 0.060 | 0.003 | 0.078 |
| C-Res-LSTM-CMU | 0.010 | 0.002 | 0.022 | 0.008 | 0.001 | 0.015 |
| C-Res-GRU-CMU | 0.009 | 0.002 | 0.021 | 0.007 | 0.001 | 0.015 |

### 4.4 Evaluation on Motion Generation

For each sequence in HumanEva and MPI-Mosh, we separately input the first frame, the first 10% of frames, and the first 50% of frames into the model, and generate the rest of the sequence. We only compare models with the $\alpha$-residual, since models without it exhibit obvious first-frame jumps. Moreover, we find that the Q-model (without random sampling) and the S-model trained on CMU produce unrealistic body poses very quickly, and hence we do not compare them with the others. This suggests that a model that is good at representing sequences may nevertheless be poor at generating them.

#### 4.4.1 Motion Frequency

The motion generation result is non-deterministic, and generated results can be plausible yet different from the ground truth. Following Hernandez_2019_ICCV , we therefore compare the frequency power-spectrum distribution of generated results with that of the ground truth. We apply the fast Fourier transform over time, and then compute the power-spectrum distribution for each feature dimension. As in Hernandez_2019_ICCV , we use two metrics: (1) the power-spectrum entropy ratio (PSER), i.e. the relative entropy increase of the generated result over the ground truth; the closer to 0, the better, where a positive value indicates noise and a negative value indicates a lack of variation; and (2) the power-spectrum KL-divergence (PSKLD), where lower is better.
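The two spectrum metrics can be sketched as follows; the entropy-ratio normalization, the direction of the KL, and the smoothing constant `eps` are our assumptions rather than the paper's exact implementation:

```python
import numpy as np

def power_spectrum(seq):
    """Per-dimension normalized power spectrum of a (T, D) sequence."""
    ps = np.abs(np.fft.rfft(seq, axis=0)) ** 2
    return ps / np.sum(ps, axis=0, keepdims=True)

def pser(gen, gt, eps=1e-8):
    """Power-spectrum entropy ratio: relative entropy increase of the
    generated spectrum over the ground truth (0 is best; positive
    suggests noise, negative a lack of variation)."""
    def entropy(p):
        return -np.sum(p * np.log(p + eps), axis=0)
    h_gen, h_gt = entropy(power_spectrum(gen)), entropy(power_spectrum(gt))
    return np.mean((h_gen - h_gt) / h_gt)

def pskld(gen, gt, eps=1e-8):
    """Power-spectrum KL-divergence KL(gt || gen), averaged over
    feature dimensions; lower is better."""
    p, q = power_spectrum(gt), power_spectrum(gen)
    return np.mean(np.sum(p * np.log((p + eps) / (q + eps)), axis=0))
```

Because both metrics compare distributions over frequencies rather than per-frame positions, a plausible sequence that diverges from the ground truth is not penalized for the divergence itself.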

We average the scores over all sequences, and present the final results in Tab. 3. Our proposed C-models outperform the baselines consistently. The motion generated by the S-model and the VQ-model has considerably lower frequencies than the ground truth, indicating a lack of variation. Among the C-models, training on CMU, which is larger and has more motion variation than ACCAD, leads to higher frequencies in the generated motion. Also, LSTM is superior to GRU when trained on CMU, but inferior when trained on ACCAD. Moreover, comparing performance between columns in Tab. 3 is not appropriate, because the sequence length heavily influences the precision of the frequencies computed by the FFT.

The first three columns are on HumanEva, the last three on MPI-Mosh; each cell reports PSER / PSKLD.

| Model | 1-frame (HumanEva) | 10%-frame (HumanEva) | 50%-frame (HumanEva) | 1-frame (MPI-Mosh) | 10%-frame (MPI-Mosh) | 50%-frame (MPI-Mosh) |
|---|---|---|---|---|---|---|
| VQ-Res-ACCAD | -0.57/0.91 | -0.53/1.00 | -0.51/1.70 | -0.61/1.37 | -0.59/1.50 | -0.54/2.31 |
| VQ-Res-CMU | -0.74/0.95 | -0.72/1.01 | -0.63/1.62 | -0.78/1.60 | -0.70/1.63 | -0.59/2.53 |
| S-ACCAD | -0.72/0.91 | -0.72/0.96 | -0.67/1.50 | -0.87/1.98 | -0.76/1.82 | -0.69/2.83 |
| C-Res-LSTM-ACCAD | -0.44/0.82 | -0.36/0.89 | -0.41/1.53 | -0.68/1.44 | -0.54/1.48 | -0.42/2.34 |
| C-Res-GRU-ACCAD | -0.31/0.82 | -0.30/0.86 | -0.29/1.45 | -0.34/1.24 | -0.33/1.38 | -0.36/2.28 |
| C-Res-LSTM-CMU | 0.05/0.92 | 0.01/1.00 | -0.07/1.64 | -0.11/1.25 | -0.13/1.36 | -0.14/2.17 |
| C-Res-GRU-CMU | -0.16/0.88 | -0.16/0.91 | -0.22/1.48 | -0.28/1.23 | -0.29/1.33 | -0.28/2.10 |

#### 4.4.2 Diversity

For each model and each seed input sequence, we generate three sequences. The diversity is evaluated by standard deviation. Results are presented in Tab. 4. We can see that our methods outperform the baselines. Within the C-model family, training on a larger dataset with more motion variations can improve the diversity. Additionally, we observe that the diversity results are roughly consistent with the PSER results shown in Tab. 3.

The first three columns are on HumanEva, the last three on MPI-Mosh.

| Model | 1-frame (HumanEva) | 10%-frame (HumanEva) | 50%-frame (HumanEva) | 1-frame (MPI-Mosh) | 10%-frame (MPI-Mosh) | 50%-frame (MPI-Mosh) |
|---|---|---|---|---|---|---|
| VQ-Res-ACCAD | 0.10 | 0.12 | 0.10 | 0.09 | 0.09 | 0.07 |
| VQ-Res-CMU | 0.007 | 0.008 | 0.01 | 0.008 | 0.01 | 0.02 |
| S-ACCAD | 0.03 | 0.04 | 0.05 | 0.02 | 0.04 | 0.05 |
| C-Res-LSTM-ACCAD | 0.09 | 0.10 | 0.08 | 0.06 | 0.10 | 0.10 |
| C-Res-GRU-ACCAD | 0.12 | 0.11 | 0.10 | 0.12 | 0.12 | 0.11 |
| C-Res-LSTM-CMU | 0.22 | 0.20 | 0.16 | 0.21 | 0.20 | 0.17 |
| C-Res-GRU-CMU | 0.15 | 0.15 | 0.13 | 0.17 | 0.17 | 0.15 |
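The diversity protocol above can be sketched as follows (the array shapes and the mean-over-everything reduction are illustrative):

```python
import numpy as np

def diversity(samples):
    """Average standard deviation across K generated sequences that share
    one seed input; `samples` has shape (K, T, D), here with K = 3 as in
    the protocol above."""
    return np.mean(np.std(samples, axis=0))
```

A model that collapses to a single continuation regardless of the latent samples scores exactly zero.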

#### 4.4.3 Naturalness

To evaluate the naturalness of the generated motions, we perform a perceptual study on Amazon Mechanical Turk. For each generated sequence, 3 workers give a score ranging from 1 (unnatural) to 5 (very natural). As a control group, the ground-truth sequences are evaluated in the same manner. We report the results in Tab. 5. Our method outperforms the baseline methods consistently. Also, models trained on CMU perform somewhat worse than their counterparts trained on ACCAD, indicating that dataset scale and the naturalness of the results are only weakly related.

The first three columns are on HumanEva, the last three on MPI-Mosh; scores are mean±std. The ground truth receives one score per test set.

| Model | 1-frame (HumanEva) | 10%-frame (HumanEva) | 50%-frame (HumanEva) | 1-frame (MPI-Mosh) | 10%-frame (MPI-Mosh) | 50%-frame (MPI-Mosh) |
|---|---|---|---|---|---|---|
| ground truth | 3.77±1.22 | | | 3.73±1.20 | | |
| VQ-Res-ACCAD | 2.37±1.31 | 2.88±1.40 | 2.89±1.38 | 2.50±1.41 | 2.53±1.30 | 3.00±1.43 |
| VQ-Res-CMU | 3.05±1.18 | 3.16±1.34 | 2.99±1.34 | 2.64±1.25 | 3.29±1.28 | 2.94±1.36 |
| S-ACCAD | 3.05±1.19 | 2.71±1.33 | 3.06±1.33 | 2.88±1.24 | 2.87±1.44 | 3.12±1.27 |
| C-Res-LSTM-ACCAD | 3.47±1.24 | 3.31±1.21 | 3.27±1.29 | 3.14±1.28 | 3.31±1.16 | 3.17±1.30 |
| C-Res-GRU-ACCAD | 3.44±1.18 | 3.18±1.36 | 3.10±1.43 | 3.20±1.18 | 3.23±1.32 | 3.31±1.31 |
| C-Res-LSTM-CMU | 3.05±1.33 | 2.98±1.34 | 3.35±1.39 | 2.99±1.36 | 3.01±1.36 | 3.38±1.32 |
| C-Res-GRU-CMU | 3.16±1.18 | 3.20±1.32 | 3.10±1.32 | 3.18±1.25 | 3.10±1.24 | 3.13±1.27 |

## 5 Conclusion

In this paper, we address the task of generating “perpetual” motion from a static initial body pose. We propose a two-stream variational RNN, in which the changes of the global trajectory and of the body pose are conditioned on each other. With a novel KL-divergence term, we incorporate temporal dependencies into the latent sequence prior, and effectively overcome posterior collapse. To verify effectiveness and perform fair comparisons, we establish a systematic pipeline to evaluate model representation power, motion frequency, diversity, and naturalness.

However, our method still has some limitations. For example, foot sliding remains in some results; we plan to introduce physical constraints, e.g. foot-ground contact friction, as a remedy. Also, the model may fall into a repetitive loop after generating for a long time, although a stochastic mechanism is already employed; a potential solution is a second stochastic mechanism that samples at the activity level. We expect our model to pave the way for future studies, such as generating motion from natural language or from the environment.

##### Broader Impact.

This work has a positive impact on understanding how a real person synthesizes motion, and hence relates to cognitive psychology and neuroscience. In addition, modeling human motion effectively could benefit simulations in biomedical engineering, and patient analysis in neurology and orthopedics. Moreover, personalized motion has the potential to serve as a biometric measure, which could raise privacy issues.

##### Acknowledgements.

We appreciate the insightful discussions with Otmar Hilliges and Emre Aksan about STCN and deep variational Bayesian methods.

##### Disclosure.

In the last five years, MJB has received research gift funds from Intel, Nvidia, Adobe, Facebook, and Amazon. While MJB is a part-time employee of Amazon, his research was performed solely at, and funded solely by, MPI. MJB has financial interests in Amazon and Meshcapade GmbH.

## References

- [1] Emre Aksan, Manuel Kaufmann, and Otmar Hilliges. Structured prediction helps 3d human motion modelling. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), pages 7144–7153, 2019.
- [2] Julieta Martinez, Michael J Black, and Javier Romero. On human motion prediction using recurrent neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
- [3] Naureen Mahmood, Nima Ghorbani, Nikolaus F. Troje, Gerard Pons-Moll, and Michael J. Black. AMASS: Archive of motion capture as surface shapes. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Oct 2019.
- [4] Junyoung Chung, Kyle Kastner, Laurent Dinh, Kratarth Goel, Aaron C Courville, and Yoshua Bengio. A recurrent latent variable model for sequential data. In Advances in neural information processing systems (NeurIPS), pages 2980–2988, 2015.
- [5] Casper Kaae Sønderby, Tapani Raiko, Lars Maaløe, Søren Kaae Sønderby, and Ole Winther. Ladder variational autoencoders. In Advances in neural information processing systems (NeurIPS), pages 3738–3746, 2016.
- [6] Emre Aksan and Otmar Hilliges. STCN: Stochastic temporal convolutional networks. In Proceedings of the International Conference on Learning Representations (ICLR), 2019.
- [7] Pierre Charbonnier, Laure Blanc-Feraud, Gilles Aubert, and Michel Barlaud. Two deterministic half-quadratic regularization algorithms for computed imaging. In Proceedings of the International Conference on Image Processing, pages 168–172, 1994.
- [8] Liang-Yan Gui, Yu-Xiong Wang, Deva Ramanan, and Jose M. F. Moura. Few-shot human motion prediction via meta-learning. In Proceedings of the European Conference on Computer Vision (ECCV), September 2018.
- [9] Dario Pavllo, David Grangier, and Michael Auli. Quaternet: A quaternion-based recurrent model for human motion. In Proceedings of the British Machine Vision Conference (BMVC), 2018.
- [10] Dario Pavllo, Christoph Feichtenhofer, Michael Auli, and David Grangier. Modeling human motion with quaternion-based neural networks. International Journal of Computer Vision, pages 1–18, 2019.
- [11] Partha Ghosh, Jie Song, Emre Aksan, and Otmar Hilliges. Learning human motion models for long-term predictions. In Proceedings of the International Conference on 3D Vision (3DV), pages 458–466, 2017.
- [12] Liang-Yan Gui, Yu-Xiong Wang, Xiaodan Liang, and José MF Moura. Adversarial geometry-aware human motion prediction. In Proceedings of the European Conference on Computer Vision (ECCV), pages 786–803, 2018.
- [13] Chen Li, Zhen Zhang, Wee Sun Lee, and Gim Hee Lee. Convolutional sequence to sequence model for human dynamics. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 5226–5234, 2018.
- [14] Borui Wang, Ehsan Adeli, Hsu-kuang Chiu, De-An Huang, and Juan Carlos Niebles. Imitation learning for human pose prediction. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), pages 7124–7133, 2019.
- [15] Alejandro Hernandez, Jurgen Gall, and Francesc Moreno-Noguer. Human motion prediction via spatio-temporal inpainting. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), October 2019.
- [16] Sijie Yan, Zhizhong Li, Yuanjun Xiong, Huahan Yan, and Dahua Lin. Convolutional sequence generation for skeleton-based action synthesis. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), pages 4394–4402, 2019.
- [17] Daniel Holden, Taku Komura, and Jun Saito. Phase-functioned neural networks for character control. ACM Transactions on Graphics (TOG), 36(4):1–13, 2017.
- [18] Sebastian Starke, He Zhang, Taku Komura, and Jun Saito. Neural state machine for character-scene interactions. ACM Transactions on Graphics (TOG), 38(6):1–14, 2019.
- [19] Xue Bin Peng, Glen Berseth, and Michiel Van de Panne. Terrain-adaptive locomotion skills using deep reinforcement learning. ACM Transactions on Graphics (TOG), 35(4):1–12, 2016.
- [20] Matthew Loper, Naureen Mahmood, Javier Romero, Gerard Pons-Moll, and Michael J. Black. SMPL: A skinned multi-person linear model. ACM Transactions on Graphics (Proc. SIGGRAPH Asia), 34(6):248:1–248:16, October 2015.
- [21] Yi Zhou, Connelly Barnes, Jingwan Lu, Jimei Yang, and Hao Li. On the continuity of rotation representations in neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
- [22] Diederik P. Kingma and Max Welling. Auto-encoding variational bayes. In Yoshua Bengio and Yann LeCun, editors, Proceedings of the International Conference on Learning Representations (ICLR), Banff, AB, Canada, April 14-16, 2014, Conference Track Proceedings, 2014.
- [23] Justin Bayer and Christian Osendorfer. Learning stochastic recurrent networks. arXiv preprint arXiv:1411.7610, 2014.
- [24] Prajit Ramachandran, Barret Zoph, and Quoc V Le. Searching for activation functions. arXiv preprint arXiv:1710.05941, 2017.
- [25] Georgios Pavlakos, Vasileios Choutas, Nima Ghorbani, Timo Bolkart, Ahmed AA Osman, Dimitrios Tzionas, and Michael J Black. Expressive body capture: 3d hands, face, and body from a single image. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
- [26] Mohamed Hassan, Vasileios Choutas, Dimitrios Tzionas, and Michael J. Black. Resolving 3D human pose ambiguities with 3D scene constraints. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), pages 2282–2292, October 2019.
- [27] Yan Zhang, Mohamed Hassan, Heiko Neumann, Michael J. Black, and Siyu Tang. Generating 3d people in scenes without people. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2020.
- [28] Ali Razavi, Aaron van den Oord, Ben Poole, and Oriol Vinyals. Preventing posterior collapse with delta-VAEs. In Proceedings of the International Conference on Learning Representations (ICLR), 2019.
- [29] ACCAD mocap system and data. https://accad.osu.edu/research/motion-lab/systemdata, 2018.
- [30] CMU Graphics Lab. CMU Graphics Lab Motion Capture Database. http://mocap.cs.cmu.edu/, 2000.
- [31] Leonid Sigal, Alexandru O Balan, and Michael J Black. HumanEva: Synchronized video and motion capture dataset and baseline algorithm for evaluation of articulated human motion. International Journal of Computer Vision (IJCV), 87(1-2):4, 2010.
- [32] Matthew M. Loper, Naureen Mahmood, and Michael J. Black. MoSh: Motion and shape capture from sparse markers. ACM Transactions on Graphics (Proc. SIGGRAPH Asia), 33(6):220:1–220:13, November 2014.
- [33] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9(8):1735–1780, 1997.
- [34] Kyunghyun Cho, Bart van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder–decoder for statistical machine translation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1724–1734, Doha, Qatar, October 2014. Association for Computational Linguistics.
- [35] Nishant Nikhil and Brendan Tran Morris. Convolutional neural network for trajectory prediction. In The European Conference on Computer Vision (ECCV) Workshops, September 2018.


**Appendix**

## Appendix A More Details of Experiments

### A.1 Details of the Investigated Models

In Sec. 4.2, we demonstrated how to modify two state-of-the-art methods to fit our task. Here we present more details.

##### Baseline 1: QuaterNet [9].

We derive two versions from the original QuaterNet, i.e. the Q-model and the VQ-model; their architectures are shown in Fig. 1. We replace the quaternion rotation representation with the 6D continuous representation [21] and retain the two-layer GRU cells. In our setting, the GRU hidden dimension is 1000 and the noise dimension in the VQ-model is 32. The first fc layer in the VQ-model decoder projects vectors from dimension 32 to dimension 1000, and the second fc layer projects vectors from dimension 1000 to dimension 135 (3D body translation plus 132D body joint rotations). The Q-model is trained with the reconstruction loss in Eq. 3, as in our C-model family. The training loss of the VQ-model is identical to the loss of the C-models, as described in Sec. 3.3.
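The 6D continuous rotation representation [21] keeps the first two columns of a rotation matrix and recovers a valid rotation via Gram–Schmidt orthogonalization. A minimal NumPy sketch of this mapping (our own illustration, not the authors' code):

```python
import numpy as np

def rot6d_to_matrix(x6):
    """Recover a 3x3 rotation matrix from the 6D representation of
    Zhou et al. [21] via Gram-Schmidt orthogonalization."""
    a1, a2 = x6[:3], x6[3:]
    b1 = a1 / np.linalg.norm(a1)       # first column: normalize
    a2 = a2 - np.dot(b1, a2) * b1      # remove the component along b1
    b2 = a2 / np.linalg.norm(a2)       # second column: normalize
    b3 = np.cross(b1, b2)              # third column: cross product
    return np.stack([b1, b2, b3], axis=-1)

def matrix_to_rot6d(R):
    """The 6D representation is simply the first two columns of R."""
    return R[:, :2].T.reshape(6)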

##### Baseline 2: STCN [6].

We first concatenate the body translation and the body pose to form the motion sequence, which is then modeled by our modified STCN, i.e. the S-model. The architecture of the S-model is presented in Fig. 2. As in our C-model and the Q-model, the input and output of the S-model are sub-sequences with a time shift. To train the S-model, the reconstruction loss of Eq. 3 is used. In addition, the latent prior is data-adaptive, and is obtained by minimizing its KL-divergence with the inference posterior. Moreover, in our S-model the latent prior and the inference posterior take the same hidden sequence as input, in contrast to the original STCN. During testing, the S-model works in an auto-regressive manner, producing the next frame based on the previous 128 frames. The S-model with the original latent prior setting [6] is also tested in Appendix B.1.
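The auto-regressive roll-out at test time amounts to a sliding-window loop. A sketch with a stand-in model (the window size of 128 and the 135D frame come from the text; `next_frame_stub` is a hypothetical placeholder for the trained S-model):

```python
import numpy as np

WINDOW = 128   # context length used by the S-model at test time
D = 135        # 3D translation + 132D joint rotations

def next_frame_stub(window):
    """Stand-in for the trained S-model; it simply repeats the last
    frame plus small noise, just to exercise the loop."""
    return window[-1] + 0.01 * np.random.randn(D)

def generate(seed, n_frames, model=next_frame_stub):
    """Auto-regressive roll-out: each new frame is predicted from the
    previous (up to) WINDOW frames, then appended to the sequence."""
    seq = list(seed)
    for _ in range(n_frames):
        window = np.asarray(seq[-WINDOW:])
        seq.append(model(window))
    return np.asarray(seq)
```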

### A.2 Details of the User Study

Our user study follows the protocol in [27]; the interface is shown in Fig. 3. Specifically, for each generated sequence, we paint the body mesh gray in the frames that are given and red in the frames that are generated. As a control group, we let users evaluate the ground truth sequences as well. In each ground truth sequence, we randomly paint the body mesh gray in either the first frame, the first 10% of frames, or the first 50% of frames, and paint the body mesh red in the remaining frames. Moreover, to avoid unknown technical problems on the user side, e.g. some users cannot play the video, we instruct users to give a score of 0 if they cannot play the video. We find that the ratio of valid user study results (with a non-zero score) is 93.1%. Invalid results are uniformly distributed across all user study groups, and we exclude them when calculating the final results in the table.
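The exclusion of invalid (zero-score) responses amounts to simple filtering before averaging. A small sketch with a hypothetical list of raw scores:

```python
def summarize(scores):
    """Drop invalid (zero) responses, then report the valid ratio and
    the mean score over valid responses only."""
    valid = [s for s in scores if s != 0]
    ratio = len(valid) / len(scores)
    mean = sum(valid) / len(valid)
    return ratio, mean
```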

## Appendix B More Analysis on Methods

### B.1 Our Method versus Related Variational Bayesian Methods

In this paper, we design our two-stream variational RNN model based on Eq. 2, in which we assume that the inference posterior (encoder) performs prediction. We note that this assumption leads to large architectural differences between our method and previous variational Bayesian methods like [4, 6]. Methods without this assumption lead to an autoencoder without a time shift between input and output, and prediction is actually performed by the data-adaptive prior during the testing phase. Specifically, without this assumption, Eq. 2 changes to

(8)

in which the inference posterior and the generation posterior only perform reconstruction, and prediction comes from the prior. As a consequence, it is necessary to formulate an explicit latent sequence prior that differs from the inference posterior.

In addition, the training loss differs from Sec. 3.3 as well. Given the ground truth body translations and body poses, we feed the entire sequences to the model as input and obtain their reconstructed versions. Specifically, the reconstruction loss is defined as

(9)

which, unlike Eq. 3, does not include the time shift.
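The difference between the two losses is only the one-frame shift between model output and target. A sketch assuming an L2 reconstruction penalty for illustration (the exact penalty of Eq. 3 is defined in the main paper):

```python
import numpy as np

def recon_loss(pred, gt, shift=False):
    """L2 reconstruction loss. With shift=True (Eq. 3 style), the
    model output at time t is compared with the ground truth at t+1,
    i.e. the model is trained to predict. With shift=False (Eq. 9
    style), input and target are aligned, i.e. pure reconstruction."""
    if shift:
        pred, gt = pred[:-1], gt[1:]  # compare x_hat_t with x_{t+1}
    return float(np.mean((pred - gt) ** 2))
```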

##### Comparison with the original STCN.

We note that our modified version of STCN [6] incorporates the assumption in Eq. 2. In this case, the modified STCN (i.e. the S-model in Sec. 4.2 and Sec. A.1) can use the same reconstruction loss as ours, and hence the comparison can focus on the network architecture. However, when generating motion, we find that our modified STCN trained on CMU produces unrealistic body poses very quickly after the first frame. A similar observation for CNN-based trajectory prediction is reported by [35]. We conjectured that this unstable behavior is caused by the assumption. Therefore, here we evaluate the performance of the original formulation of STCN, without the assumption in Eq. 2. Specifically, except for the latent prior network, the network setting is identical to the S-model in Sec. 4.2 and Sec. A.1. Correspondingly, the reconstruction loss becomes Eq. 9 rather than Eq. 3, and the ELBO becomes Eq. 8 rather than Eq. 2. The loss weights are the same as in Sec. 3.3. We name this STCN version “SO”, meaning “STCN Origin”.

Contrary to our conjecture, we find that motion generation with the SO-model is still not completely stable. The SO-model trained on CMU produces unrealistic body poses quickly, e.g. after generating 300 frames. Therefore, we think this unstable behavior could be rooted somewhere other than the model itself. A probable reason is that our data pre-processing step, i.e. transforming the sequence to the AMASS coordinate frame, is not suitable for CNN-based models; starting every sequence from the same canonical body translation could make motion generation numerically unstable. Nevertheless, our proposed RNN-based models do not encounter this instability issue, indicating that RNN-based models are more suitable for the motion generation task.

Tables 1–4 below show the results of the SO-models with respect to model representation power, motion frequency, diversity, and naturalness. Since the SO-model trained on CMU cannot perform motion generation stably, we only evaluate its representation power here. From the results, we can see that the naturalness of the SO-model is comparatively better than that of our proposed C-models, while its diversity is much inferior. From the qualitative results, we observe that many generated motions are ‘standing still’, which is plausible but lacks variation.

**Table 1: Model representation power** (columns 2–4: HumanEva; columns 5–7: MPI-Mosh).

| Model | HumanEva | | | MPI-Mosh | | |
|---|---|---|---|---|---|---|
| SO-ACCAD | 0.108 | 0.005 | 0.212* | 0.120 | 0.005 | 0.213* |
| SO-CMU | 0.078 | 0.004 | 0.271* | 0.076 | 0.004 | 0.267* |

**Table 2: Motion frequency** (columns 2–4: HumanEva; columns 5–7: MPI-Mosh).

| Model | 1-frame | 10%-frame | 50%-frame | 1-frame | 10%-frame | 50%-frame |
|---|---|---|---|---|---|---|
| SO-ACCAD | -0.83/0.91 | -0.77/0.94 | -0.77/1.64 | -0.92/2.09 | -0.84/1.81 | -0.77/2.73 |

**Table 3: Diversity** (columns 2–4: HumanEva; columns 5–7: MPI-Mosh).

| Model | 1-frame | 10%-frame | 50%-frame | 1-frame | 10%-frame | 50%-frame |
|---|---|---|---|---|---|---|
| SO-ACCAD | 0.03 | 0.03 | 0.03 | 0.01 | 0.03 | 0.05 |

**Table 4: Naturalness** (mean±std of user scores; columns 2–4: HumanEva; columns 5–7: MPI-Mosh).

| Model | 1-frame | 10%-frame | 50%-frame | 1-frame | 10%-frame | 50%-frame |
|---|---|---|---|---|---|---|
| SO-ACCAD | 3.31±1.16 | 3.32±1.19 | 3.51±1.06 | 3.28±1.07 | 3.24±1.17 | 3.32±1.18 |

### B.2 Influence of the -residual Connection

In our paper we propose the -residual connection to overcome the first-frame jump artifact, and keep its setting fixed in all trials. Here we qualitatively show its effectiveness in Fig. 1: our method overcomes the first-frame jump artifact very effectively.
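The intuition behind a residual connection here can be sketched numerically: decoding an absolute pose can land arbitrarily far from the last given frame, whereas adding a (scaled) offset to the previous frame keeps the first generated frame close to it by construction. The names, the 0.1 scale, and the fixed decoder output below are our own illustration, not the paper's exact formulation:

```python
import numpy as np

D = 135  # pose-vector dimension (translation + joint rotations)

def direct_decode(h):
    """Absolute-pose decoding: the output can land far from the last
    given frame, causing a visible first-frame jump."""
    return h

def residual_decode(h, x_prev):
    """Residual decoding: the network output is an offset added to the
    previous frame, so the first generated frame stays nearby."""
    return x_prev + 0.1 * h

seed = np.ones(D)        # last given frame
h = np.full(D, 2.0)      # some decoder output (fixed for illustration)

jump_direct = np.linalg.norm(direct_decode(h) - seed)
jump_residual = np.linalg.norm(residual_decode(h, seed) - seed)
```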