title: PyTorch Geometric Temporal: Spatiotemporal Signal Processing with Neural Machine Learning Models
authors: Rozemberczki, Benedek; Scherer, Paul; He, Yixuan; Panagopoulos, George; Riedel, Alexander; Astefanoaei, Maria; Kiss, Oliver; Beres, Ferenc; López, Guzmán; Collignon, Nicolas; Sarkar, Rik
date: 2021-04-15

We present PyTorch Geometric Temporal, a deep learning framework combining state-of-the-art machine learning algorithms for neural spatiotemporal signal processing. The main goal of the library is to make temporal geometric deep learning available to researchers and machine learning practitioners in a unified, easy-to-use framework. PyTorch Geometric Temporal was created with foundations on existing libraries in the PyTorch ecosystem, streamlined neural network layer definitions, temporal snapshot generators for batching, and integrated benchmark datasets. These features are illustrated with a tutorial-like case study. Experiments demonstrate the predictive performance of the models implemented in the library on real world problems such as epidemiological forecasting, ride-hail demand prediction and web traffic management. Our sensitivity analysis of runtime shows that the framework can potentially operate on web-scale datasets with rich temporal features and spatial structure.

Deep learning on static graph structured data has seen unprecedented success in various business and scientific application domains. Neural network layers which operate on graph data can serve as building blocks of document labeling, fraud detection, traffic forecasting and cheminformatics systems [7, 45-47, 63]. This emergence and the widespread adoption of geometric deep learning was made possible by open-source machine learning libraries. The high quality, breadth, user oriented nature and availability of specialized deep learning libraries [13, 15, 46, 67] were all contributing factors to the practical success and large-scale deployment of graph machine learning systems. At the same time, the existing geometric deep learning frameworks operate on graphs which have a fixed topology, and it is also assumed that the node features and labels are static. Besides these limiting assumptions about the input data, these off-the-shelf libraries are not designed to operate on spatiotemporal data.

Present work. We propose PyTorch Geometric Temporal, an open-source Python library for spatiotemporal machine learning. We designed PyTorch Geometric Temporal with a simple and consistent API inspired by the software architecture of existing widely used geometric deep learning libraries from the PyTorch ecosystem [15, 40]. Our framework was built by applying simple design principles consistently. The framework reuses existing neural network layers in a modular manner, models have a limited number of public methods, and hyperparameters can be inspected. Spatiotemporal signal iterators ingest data memory efficiently in widely used scientific computing formats and return it in a PyTorch compatible format. These design principles, in combination with the test coverage, documentation, practical tutorials, continuous integration, package indexing and frequent releases, make the framework an end-user friendly spatiotemporal machine learning system. The experimental evaluation of the framework entails node level regression tasks on datasets released exclusively with the framework.
Specifically, we compare the predictive performance of spatiotemporal graph neural networks on epidemiological forecasting, demand planning, web traffic management and social media interaction prediction tasks. Synthetic experiments show that, with the right batching strategy, PyTorch Geometric Temporal is highly scalable and benefits from GPU accelerated computing.

Our contributions. The main contributions of our work can be summarized as:

• We publicly release PyTorch Geometric Temporal, the first deep learning library for parametric spatiotemporal machine learning models.
• We provide data loaders and temporal snapshot iterators with PyTorch Geometric Temporal which can handle spatiotemporal datasets.
• We release new spatiotemporal benchmark datasets from the renewable energy production, epidemiological reporting, goods delivery and web traffic forecasting domains.
• We evaluate the spatiotemporal forecasting capabilities of the neural and parametric machine learning models available in PyTorch Geometric Temporal on real world datasets.

The remainder of the paper has the following structure. In Section 2 we overview important preliminaries and related work about temporal and geometric deep learning and the characteristics of related open-source machine learning software. The main design principles of PyTorch Geometric Temporal are discussed in Section 3 with a practical example. We demonstrate the forecasting capabilities of the framework in Section 4, where we also evaluate the scalability of the library on various commodity hardware. We conclude in Section 5, where we summarize the results. The source code of PyTorch Geometric Temporal is publicly available at https://github.com/benedekrozemberczki/pytorch_geometric_temporal; the Python package can be installed via the Python Package Index. Detailed documentation is accessible at https://pytorch-geometric-temporal.readthedocs.io/.

In order to position our contribution and highlight its significance, we introduce some important concepts about spatiotemporal data and discuss related literature about geometric deep learning and machine learning software.

Our framework considers specific input data types on which the spatiotemporal machine learning models operate. Input data types can differ in terms of the dynamics of the graph and that of the modelled vertex attributes. We take a discrete temporal snapshot view of this data representation problem [25, 26] and our work considers three spatiotemporal data types, which can be described by the subplots of Figure 1 and the following formal definitions:

Definition 2.1 (Dynamic graph with temporal signal). A dynamic graph with a temporal signal is the ordered set of graph and node feature matrix tuples $\mathcal{D} = \{(\mathcal{G}_1, \mathbf{X}_1), \ldots, (\mathcal{G}_T, \mathbf{X}_T)\}$ where the vertex sets satisfy $V_t = V$, $\forall t \in \{1, \ldots, T\}$, and the node feature matrices satisfy $\mathbf{X}_t \in \mathbb{R}^{|V| \times d}$, $\forall t \in \{1, \ldots, T\}$.

Definition 2.2 (Dynamic graph with static signal). A dynamic graph with a static signal is the ordered set of graph and node feature matrix tuples $\mathcal{D} = \{(\mathcal{G}_1, \mathbf{X}), \ldots, (\mathcal{G}_T, \mathbf{X})\}$ where the vertex sets satisfy $V_t = V$, $\forall t \in \{1, \ldots, T\}$, and the node feature matrix satisfies $\mathbf{X} \in \mathbb{R}^{|V| \times d}$.

Definition 2.3 (Static graph with temporal signal). A static graph with a temporal signal is the ordered set of graph and node feature matrix tuples $\mathcal{D} = \{(\mathcal{G}, \mathbf{X}_1), \ldots, (\mathcal{G}, \mathbf{X}_T)\}$ where the node feature matrices satisfy $\mathbf{X}_t \in \mathbb{R}^{|V| \times d}$, $\forall t \in \{1, \ldots, T\}$.

Representing spatiotemporal data based on these theoretical concepts allows the creation of memory efficient data structures which realize these definitions well in practice.
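As a concrete illustration of Definition 2.3 and of the data structures discussed in Section 3, the following minimal sketch builds a toy static graph with a temporal signal and iterates over its snapshots. It assumes the StaticGraphTemporalSignal iterator exported by the library with the constructor arguments shown here; the toy sizes are arbitrary.

```python
import numpy as np
from torch_geometric_temporal.signal import StaticGraphTemporalSignal

# A static graph shared by every snapshot: edges 0 -> 1 and 1 -> 2.
edge_index = np.array([[0, 1],
                       [1, 2]])
edge_weight = np.ones(2)

# A temporal signal with T = 4 snapshots: each X_t is a |V| x d = 3 x 2
# node feature matrix, and each snapshot has one target value per node.
features = [np.random.uniform(size=(3, 2)) for _ in range(4)]
targets = [np.random.uniform(size=(3,)) for _ in range(4)]

dataset = StaticGraphTemporalSignal(edge_index, edge_weight, features, targets)

# Iteration yields one PyTorch Geometric Data object per time period,
# in temporal order; the edge structure is stored only once.
for snapshot in dataset:
    print(snapshot.x.shape, snapshot.edge_index.shape, snapshot.y.shape)
```

Dynamic graphs with temporal or static signals (Definitions 2.1 and 2.2) follow the same pattern, with per-snapshot lists of edge indices and edge weights.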
Our work provides deep learning models that operate on data which has both temporal and spatial aspects. These techniques are natural recombinations of existing neural network layers that operate on sequences and on static graph-structured data.

A large family of temporal deep learning models, such as the LSTM [24] and GRU [12], generates in-memory representations of data points which are iteratively updated as new snapshots arrive. Another family of deep learning models uses the attention mechanism [3, 35, 59] to learn representations of the data points which are adaptively recontextualized based on the temporal history. These types of models serve as templates for the temporal block of spatiotemporal deep learning models.

Learning representations of vertices, edges and whole graphs with graph neural networks in a supervised or unsupervised way can be described by the message passing formalism [17]. In this conceptual framework, parametric functions of the node and edge attributes in a graph generate compressed representations (messages) which are propagated between the nodes based on a message-passing rule and aggregated to form new representations. Most of the existing graph neural network architectures such as GCN [30], GGCN [33], ChebyConv [14], and RGCN [50] fit perfectly into this general description of graph neural networks. Models are differentiated by the assumptions about the input graph (e.g. node heterogeneity, multiplexity, presence of edge attributes), the message compression function used, the propagation scheme and the message aggregation function applied to the received messages.

A spatiotemporal deep learning model fuses the basic conceptual ideas of temporal deep learning techniques and graph representation learning. Operating on a temporal graph sequence, these models perform message passing at each time point with a graph neural network block, and the new temporal information is incorporated by a temporal deep learning block. This design allows salient temporal and spatial autocorrelation information to be shared across the spatial units. The temporal and spatial layers which are fused together in a single parametric machine learning model are trained jointly by exploiting the fact that the fused models are end-to-end differentiable. In Table 1 we summarized the spatiotemporal deep learning models implemented in the framework, which we categorized based on the temporal and graph neural network layer blocks, the order of spatial proximity and the heterogeneity of the edge set.
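To make the fusion of the two blocks concrete, the sketch below combines a graph convolutional block with a gated recurrent cell using plain PyTorch and PyTorch Geometric. The composition and hyperparameters are our own illustrative choices rather than a specific layer from the library.

```python
import torch
from torch_geometric.nn import GCNConv

class FusedSpatioTemporal(torch.nn.Module):
    """A GCN block for spatial message passing fused with a GRU cell
    which carries a per-node hidden state across temporal snapshots."""

    def __init__(self, in_channels: int, hidden_channels: int):
        super().__init__()
        self.spatial = GCNConv(in_channels, hidden_channels)
        self.temporal = torch.nn.GRUCell(hidden_channels, hidden_channels)

    def forward(self, x, edge_index, h=None):
        m = torch.relu(self.spatial(x, edge_index))  # message passing at time t
        h = self.temporal(m, h)                      # recurrent temporal update
        return h
```

Training iterates over the snapshot sequence and carries h from one time point to the next; because the composition is end-to-end differentiable, the spatial and temporal parameters are learned jointly.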
The current graph representation learning software ecosystem, which enables academic research and industrial deployment, extends open-source auto-differentiation libraries such as TensorFlow [1], PyTorch [41], MxNet [11] and JAX [16, 28]. Our work does the same, as we build on the PyTorch Geometric ecosystem. We summarized the characteristics of these libraries in Table 2, which allows for comparing frameworks based on the backend, the presence of supervised training functionalities, the presence of temporal models and GPU support. Our proposed framework is the only one to date which offers supervised training of temporal models with GPU support.

Library             Backend  Supervised  Temporal  GPU
PT Geometric [15]   PT       ✔           ✘         ✔
Geometric2DR [49]   PT       ✘           ✘         ✔
CogDL [9]           PT       ✔           ✘         ✔
Spektral [21]       TF       ✔           ✘         ✔
TF Geometric [27]   TF

The open-source ecosystem for spatiotemporal data processing consists of specialized database systems, basic analytical tools and advanced machine learning libraries. We summarized the characteristics of the most popular libraries in Table 3 with respect to the year of release, the purpose of the framework, the source code language and GPU support. First, it is evident that most spatiotemporal data processing tools are fairly new and there is ample room for contributions in each subcategory. Second, the database systems are written in high-performance languages, while the analytics and machine learning oriented tools have a pure Python/R design or a wrapper written in these languages. Finally, the use of GPU acceleration is not widespread, which suggests that current spatiotemporal data processing tools might have a scalability issue. Our proposed framework, PyTorch Geometric Temporal, is the first fully open-source GPU accelerated spatiotemporal machine learning library.

Library          Year  Purpose           Language    GPU
GeoWave [60]     2016  Database          Java        ✘
StacSpec [23]    2017  Database          Javascript  ✘
MobilityDB [69]  2019  Database          C           ✘
PyStac [44]      2020  Database          Python      ✘
StaRs [42]       2017  Analytics         R           ✘
CuSpatial [56]   2019  Analytics         Python      ✔
PySAL [43]       2017  Machine Learning  Python      ✘
STDMTMB [2]      2018  Machine Learning  R           ✘
Our work         2021  Machine Learning  Python      ✔

Our primary goal is to give a general theoretical overview of the framework, discuss the framework design choices, give a detailed practical example and highlight our strategy for the long term viability and maintenance of the project.

The spatiotemporal neural network layers are implemented as classes in the framework. Each of the classes has a similar architecture driven by a few simple design principles.

3.1.1 Non-proliferation of classes. The framework reuses the existing high level neural network layer classes as building blocks from the PyTorch and PyTorch Geometric ecosystems. The goal of the library is not to replace the existing frameworks. This design strategy makes sure that the number of auxiliary classes in the framework is kept low and that the framework interfaces well with the rest of the ecosystem.

3.1.2 Hyperparameter inspection and type hinting. The neural network layers do not have default hyperparameter settings, as some of these have to be set in a dataset dependent manner. In order to help with this, the layer hyperparameters are stored as public class attributes and are available for inspection. Moreover, the constructors of the neural network layers use type hinting, which helps end-users set the hyperparameters.

3.1.3 Limited number of public methods. The spatiotemporal neural network layers in our framework have a limited number of public methods for simplicity. For example, the auxiliary layer initialization methods and other internal model mechanics are implemented as private methods. All of the layers provide a forward method, and those which explicitly use the message-passing scheme in PyTorch Geometric provide a public message method.

3.1.4 Auxiliary layers. The auxiliary neural network layers which are not part of the PyTorch Geometric ecosystem, such as diffusion convolutional graph neural networks [32], are implemented as standalone neural network layers in the framework. These layers are available as individual components for the design of novel neural network architectures.

The design of PyTorch Geometric Temporal required the introduction of custom data structures which can efficiently store the datasets and provide temporally ordered snapshots for batching. Based on the categorization of spatiotemporal signals discussed in Section 2, we implemented three types of Spatiotemporal Signal Iterators.
These iterators store spatiotemporal datasets in memory efficiently, without redundancy. For example, a Static Graph Temporal Signal iterator will not store the edge indices and weights for each time period, in order to save memory. By iterating over a Spatiotemporal Signal Iterator, at each step a graph snapshot is returned which describes the graph of interest at a given point in time. Graph snapshots are returned in temporal order by the iterators. The Spatiotemporal Signal Iterators can be indexed directly to access a specific graph snapshot, a design choice which allows the use of advanced temporal batching.

The time period specific snapshots, which consist of labels, features, edge indices and weights, are stored as NumPy arrays [58] in memory, but are returned as PyTorch Geometric Data object instances [15] by the Spatiotemporal Signal Iterators when these are iterated on. This design choice hedges against the proliferation of classes and exploits the existing and widely used compact data structures from the PyTorch ecosystem [40].

As part of the library we provide a temporal train-test splitting function which creates train and test snapshot iterators from a Spatiotemporal Signal Iterator given a test dataset ratio. This parameter of the splitting function decides the fraction of data that is separated from the end of the spatiotemporal graph snapshot sequence for testing. The returned iterators have the same type as the input iterator. Importantly, this splitting does not influence the applicability of widely used semi-supervised model training strategies such as node masking. We provide easy-to-use practical data loader classes for widely used existing datasets [38] and for the newly released benchmark datasets. These loaders return Spatiotemporal Signal Iterators which can be used for training existing and custom designed spatiotemporal neural network architectures to solve supervised machine learning problems.

In the following we overview a simple end-to-end machine learning pipeline designed with PyTorch Geometric Temporal. These code snippets solve a practical epidemiological forecasting problem: predicting the weekly number of chickenpox cases in Hungary [47]. The pipeline consists of data preparation, model definition, training and evaluation phases.

Listings 1: Loading the Hungarian chickenpox dataset and creating a temporal train-test split.

```python
from torch_geometric_temporal import ChickenpoxDatasetLoader
from torch_geometric_temporal import temporal_signal_split

dataset = ChickenpoxDatasetLoader().get_dataset()
train_dataset, test_dataset = temporal_signal_split(dataset, train_ratio=0.9)
```

In the forward pass method of the neural network (Listings 2), the model uses the vertex features, the edges and the optional edge weights (line 11). The initial recurrent graph convolution based aggregation (line 12) is followed by a rectified linear unit activation function [37] and dropout [53] for regularization (lines 13-14). Using the fully connected layer, the model outputs a single score for each spatial unit (lines 15-16).

Using the dataset split and the model definition we can turn our attention to training a regressor. In Listings 3 we create a model instance (line 1), transfer the model parameters (line 3) to the Adam optimizer [29], which uses a learning rate of 0.01, and set the model to be trainable (line 5). In each epoch we set the accumulated cost to zero (line 8), iterate over the temporal snapshots in the training data (line 9), make forward passes with the model on each temporal snapshot and accumulate the spatial unit specific mean squared errors (lines 10-13). We normalize the cost, backpropagate and update the model parameters (lines 14-17).
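The model definition (Listings 2) and the training loop (Listings 3) can be reconstructed as the following sketch. The DCRNN recurrent graph convolution, the filter sizes and the variable names are assumptions consistent with the description above, not necessarily identical to the original listings.

```python
import torch
import torch.nn.functional as F
from torch_geometric_temporal.nn.recurrent import DCRNN

class RecurrentGCN(torch.nn.Module):
    def __init__(self, node_features):
        super().__init__()
        self.recurrent = DCRNN(node_features, 32, 1)  # recurrent graph convolution
        self.linear = torch.nn.Linear(32, 1)          # one score per spatial unit

    def forward(self, x, edge_index, edge_weight):
        h = self.recurrent(x, edge_index, edge_weight)
        h = F.relu(h)
        h = F.dropout(h, training=self.training)
        return self.linear(h)

# Training with cumulative backpropagation: the cost is accumulated over
# all snapshots and the weights are updated once per epoch.
model = RecurrentGCN(node_features=4)  # matches the loader's default of 4 lags
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
model.train()

for epoch in range(200):
    cost = 0
    for time, snapshot in enumerate(train_dataset):
        y_hat = model(snapshot.x, snapshot.edge_index, snapshot.edge_attr)
        cost = cost + torch.mean((y_hat.squeeze() - snapshot.y) ** 2)
    cost = cost / (time + 1)
    cost.backward()
    optimizer.step()
    optimizer.zero_grad()
```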
Listings 4: Evaluating the recurrent graph convolutional neural network on the test portion of the spatiotemporal dataset using the time unit averaged mean squared error.

We set the model to be non-trainable and the accumulated squared error to zero (lines 1-2). We iterate over the test spatiotemporal snapshots, make forward passes to predict the number of chickenpox cases and accumulate the squared error (lines 3-7). The accumulated errors are normalized and we can print the mean squared error calculated on the whole test horizon (lines 8-10).

Exploiting the power of GPU based acceleration of computations happens at the training and evaluation steps of PyTorch Geometric Temporal pipelines. In this case study we assume that the Hungarian chickenpox cases dataset is already loaded in memory, the temporal split has happened and a model class was defined by the code snippets in Listings 1 and 2. Moreover, we assume that the machine used for training the neural network can access a single CUDA compatible GPU device [48].

The snippet in Listings 5 starts by creating a model instance and transferring it to the GPU (lines 1-3). The optimizer registers the model parameters and the model parameters are set to be trainable (lines 5-6). We iterate over the temporal snapshot iterator 200 times, and the iterator returns a temporal snapshot in each step. Importantly, the snapshots, which are PyTorch Geometric Data objects, are transferred to the GPU (lines 8-10). The use of PyTorch Geometric Data objects as temporal snapshots allows the transfer of the time period specific edges, node features and target vector with a single command. Using the input data, a forward pass is made, the loss is accumulated and weight updates happen using the optimizer in each time period (lines 11-17). Compared to the cumulative backpropagation based training approach discussed in Subsection 3.3, this backpropagation strategy is slower, as weight updates happen at each time step, not just at the end of the training epochs.

During model scoring the GPU can be utilized again. The snippet in Listings 6 demonstrates that the only modification needed for accelerated evaluation is the transfer of snapshots to the GPU. In each time period we move the temporal snapshot to the device to do the forward pass (line 4). We do the forward pass with the model and the snapshot on the GPU and accumulate the loss (lines 5-8). The loss value is averaged out and detached from the GPU for printing (lines 9-11).
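A combined sketch consistent with the descriptions of Listings 5 and 6 follows, reusing the RecurrentGCN sketch above. The device handling follows standard PyTorch idioms and is an assumption rather than a verbatim copy of the original listings.

```python
import torch

device = torch.device("cuda")  # assumes a single CUDA compatible GPU

model = RecurrentGCN(node_features=4).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
model.train()

# Incremental training on the GPU: weights are updated at every time step.
for epoch in range(200):
    for snapshot in train_dataset:
        snapshot = snapshot.to(device)  # edges, features and targets in one call
        y_hat = model(snapshot.x, snapshot.edge_index, snapshot.edge_attr)
        cost = torch.mean((y_hat.squeeze() - snapshot.y) ** 2)
        cost.backward()
        optimizer.step()
        optimizer.zero_grad()

# Evaluation: the only change is moving each snapshot to the GPU.
model.eval()
cost = 0
for time, snapshot in enumerate(test_dataset):
    snapshot = snapshot.to(device)
    with torch.no_grad():
        y_hat = model(snapshot.x, snapshot.edge_index, snapshot.edge_attr)
    cost = cost + torch.mean((y_hat.squeeze() - snapshot.y) ** 2)
cost = (cost / (time + 1)).item()  # average and detach for printing
print("MSE: {:.4f}".format(cost))
```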
The viability of the project is made possible by the open-source code, version control, public releases, automatically generated documentation, continuous integration and near 100% test coverage.

Releases. The source code of PyTorch Geometric Temporal is publicly available on GitHub under the MIT license. Using an open version control system allowed us to have a large group collaborate on the project and to have external contributors who also submitted feature requests. The public releases of the library are also made available on the Python Package Index, which means that the framework can be installed via the pip command using the terminal.

Documentation. The source code of PyTorch Geometric Temporal and Sphinx [8] are used to generate a publicly available documentation of the library at https://pytorch-geometric-temporal.readthedocs.io/. This documentation is automatically regenerated every time the code base changes in the public repository. The documentation covers the constructors and public methods of neural network layers, temporal signal iterators, public dataset loaders and splitters. It also includes a list of relevant research papers, an in-depth installation guide, a detailed getting-started tutorial and a list of integrated benchmark datasets.

Continuous integration. We provide continuous integration for PyTorch Geometric Temporal with GitHub Actions, which is available for free on GitHub without limitations on the number of builds. When the code is updated on any branch of the repository, the build process is triggered and the library is deployed on Linux, Windows and macOS virtual machines.

Coverage. The temporal graph neural network layers, custom data structures and benchmark dataset loaders are all covered by unit tests. These unit tests can be executed locally using the source code. Unit tests are also triggered by the continuous integration provided by GitHub Actions. When the master branch of the open-source GitHub repository is updated, the build is successful and all of the unit tests pass, a coverage report is generated by CodeCov.

The proposed framework is evaluated on node level regression tasks using novel datasets which we release with the paper. We also evaluate the effect of various batching techniques on predictive performance and runtime.

We release new spatiotemporal benchmark datasets with PyTorch Geometric Temporal which can be used to test models on node level regression tasks. The descriptive statistics and properties of these newly introduced benchmark datasets are summarized in Table 4.

The forecasting experiments focus on the evaluation of the recurrent graph neural networks implemented in our framework. We compare the predictive performance under two specific backpropagation regimes which can be used to train these recurrent models (a sketch contrasting the two regimes follows the experimental findings below):

• Incremental: After each temporal snapshot the loss is backpropagated and the model weights are updated. This needs as many weight updates as the number of temporal snapshots.
• Cumulative: The loss from every temporal snapshot is aggregated, then backpropagated, and the weights are updated with the optimizer. This requires one weight update per epoch.

Using 90% of the temporal snapshots for training, we evaluated the forecasting performance on the last 10% by calculating the average mean squared error from 10 experimental runs. We used models with a recurrent graph convolutional layer which had 32 convolutional filters. The spatiotemporal layer was followed by the rectified linear unit [37] activation function, and during training time we used a dropout of 0.5 for regularization [53] after the spatiotemporal layer. The hidden representations were fed to a fully connected feedforward layer which output the predicted scores for each spatial unit. The predictive performance of the recurrent models is summarized in Table 5, where we also report standard deviations around the test set mean squared error; bold numbers denote the best performing model under each training regime on a dataset.

Our experimental findings demonstrate multiple empirical regularities which have important practical implications. Namely, these are the following:

(1) Most recurrent graph neural networks have a similar predictive performance on these regression tasks. In simple terms, there is not a single model which acts as a silver bullet. This also suggests that the model with the lowest training time is likely to be as good as the slowest one.

(2) Results on the Wikipedia Math dataset imply that a cumulative backpropagation strategy can have a detrimental effect on the predictive performance of a recurrent graph neural network. When computational resources are not a bottleneck, an incremental strategy can be significantly better.
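The sketch below contrasts the two training regimes on a generic snapshot iterator; the loss and model interface are assumed to match the earlier pipeline sketch.

```python
import torch

def train_incremental(model, optimizer, train_dataset, epochs=200):
    """Backpropagate and update after every temporal snapshot."""
    model.train()
    for _ in range(epochs):
        for snapshot in train_dataset:
            y_hat = model(snapshot.x, snapshot.edge_index, snapshot.edge_attr)
            loss = torch.mean((y_hat.squeeze() - snapshot.y) ** 2)
            loss.backward()        # one weight update per snapshot
            optimizer.step()
            optimizer.zero_grad()

def train_cumulative(model, optimizer, train_dataset, epochs=200):
    """Aggregate the loss over all snapshots; update once per epoch."""
    model.train()
    for _ in range(epochs):
        cost = 0
        for time, snapshot in enumerate(train_dataset):
            y_hat = model(snapshot.x, snapshot.edge_index, snapshot.edge_attr)
            cost = cost + torch.mean((y_hat.squeeze() - snapshot.y) ** 2)
        (cost / (time + 1)).backward()  # one weight update per epoch
        optimizer.step()
        optimizer.zero_grad()
```

The incremental loop performs as many optimizer steps per epoch as there are snapshots, which also explains the runtime gap measured below.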
The evaluation of the PyTorch Geometric Temporal runtime performance focuses on manipulating the input size and measuring the time needed to complete a training epoch. We investigate the runtime under the incremental and cumulative backpropagation strategies. The runtime evaluation used the GConvGRU model [51] with the hyperparameter settings described in Subsection 4.2. We measured the time needed for a single epoch over a sequence of 100 synthetic graphs. The reference Watts-Strogatz graphs in the snapshots of the dynamic graph with temporal signal iterator had binary labels, $2^{10}$ nodes, $2^5$ edges per node and $2^5$ node features. Runtimes were measured on the following hardware:

• CPU: The machine used for benchmarking had 8 Intel 1.00 GHz i5-1035G1 processors.
• GPU: We utilized a machine with a single Tesla V-100 graphics card for the experiments.

In this paper we discussed PyTorch Geometric Temporal, the first deep learning library designed for neural spatiotemporal signal processing. We reviewed the existing geometric deep learning and machine learning techniques implemented in the framework. We gave an overview of the general machine learning framework design principles, the newly introduced input and output data structures and the long-term project viability, and discussed a case study with source code which utilized the library. Our empirical evaluation focused on (a) the predictive performance of the models available in the library on real world datasets which we released with the framework; and (b) the scalability of the methods under various input sizes and structures.

Our work could be extended, and it also opens up opportunities for novel geometric deep learning and applied machine learning research. A possible direction to extend our work would be the consideration of continuous time or of time differences between temporal snapshots which are not constant. Another opportunity is the inclusion of temporal models which operate on curved spaces such as hyperbolic and spherical spaces. We are particularly interested in how the spatiotemporal deep learning techniques in the framework can be deployed and used for solving high-impact practical machine learning tasks.
TensorFlow: A System for Large-Scale Machine Learning
sdmTMB: Spatial and Spatiotemporal GLMMs with TMB
Neural Machine Translation by Jointly Learning to Align and Translate
Adaptive Graph Convolutional Recurrent Network for Traffic Forecasting
Node Embeddings in Dynamic Graphs
Temporal Walk Based Centrality Metric for Graph Streams
Scaling Graph Neural Networks with Approximate PageRank
Sphinx Documentation
CogDL: An Extensive Toolkit for Deep Learning on Graphs
GC-LSTM: Graph Convolution Embedded LSTM for Dynamic Link Prediction
MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems
Learning Phrase Representations Using RNN Encoder-Decoder for Statistical Machine Translation
StellarGraph Machine Learning Library
Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering
Fast Graph Representation Learning with PyTorch Geometric
Compiling Machine Learning Programs via High-Level Tracing
Neural Message Passing for Quantum Chemistry
Jraph: A Library for Graph Neural Networks in Jax
DynamicGEM: A Library for Dynamic Graph Embedding Methods
GEM: A Python Package for Graph Embedding Methods
Graph Neural Networks in TensorFlow and Keras with Spektral
Attention Based Spatial-Temporal Graph Convolutional Networks for Traffic Flow Forecasting
The Open-Source Software Ecosystem for Leveraging Public Datasets in Spatio-Temporal Asset Catalogs (STAC)
Long Short-Term Memory
Modern Temporal Network Theory: A Colloquium
Temporal Networks
Efficient Graph Deep Learning in TensorFlow with TF Geometric
JAX: Composable Transformations of Python+NumPy Programs
Adam: A Method for Stochastic Optimization
Semi-Supervised Classification with Graph Convolutional Networks
Predicting Path Failure in Time-Evolving Graphs
Diffusion Convolutional Recurrent Neural Network: Data-Driven Traffic Forecasting
Gated Graph Sequence Neural Networks
Cong Fu, and Shuiwang Ji. 2021. DIG: A Turnkey Library for Diving into Graph Deep Learning Research
Effective Approaches to Attention-based Neural Machine Translation
Rectified Linear Units Improve Restricted Boltzmann Machines
Transfer Graph Neural Networks for Pandemic Forecasting
EvolveGCN: Evolving Graph Convolutional Networks for Dynamic Graphs
PyTorch: An Imperative Style, High-Performance Deep Learning Library
PyTorch: An Imperative Style, High-Performance Deep Learning Library
Spatiotemporal Arrays: Raster and Vector Datacubes
PySAL: A Python Library of Spatial Analytical Methods
PySTAC: Python Library for Working with Any SpatioTemporal Asset Catalog (STAC)
Pathfinder Discovery Networks for Neural Message Passing
Karate Club: An API Oriented Open-source Python Framework for Unsupervised Learning on Graphs
Rik Sarkar, and Tamas Ferenci. 2021. Chickenpox Cases in Hungary: A Benchmark Dataset for Spatiotemporal Signal Processing with Graph Neural Networks
CUDA by Example: An Introduction to General-Purpose GPU Programming
Learning Distributed Representations of Graphs with Geo2DR
Modeling Relational Data with Graph Convolutional Networks
Structured Sequence Modeling with Graph Convolutional Recurrent Networks
Two-Stream Adaptive Graph Convolutional Networks for Skeleton-Based Action Recognition
Dropout: A Simple Way to Prevent Neural Networks from Overfitting
Predictive Temporal Embedding of Dynamic Graphs
Learning to Represent the Evolution of Dynamic Graphs with Recurrent Models
OpenNE: An Open Source Toolkit for Network Embedding
The NumPy Array: A Structure for Efficient Numerical Computation
Attention is All You Need
GeoWave: Utilizing Distributed Key-Value Stores for Multidimensional Data
Connecting the Dots: Multivariate Time Series Forecasting with Graph Neural Networks
AliGraph: A Comprehensive Graph Neural Network Platform
Spatio-Temporal Graph Convolutional Networks: A Deep Learning Framework for Traffic Forecasting
MediaPipe Hands: On-device Real-time Hand Tracking
T-GCN: A Temporal Graph Convolutional Network for Traffic Prediction
GMAN: A Graph Multi-Attention Network for Traffic Prediction
Learning Graph Neural Networks with Deep Graph Library
A3T-GCN: Attention Temporal Graph Convolutional Network for Traffic Forecasting
MobilityDB: A Mobility Database Based on PostgreSQL and PostGIS