From-scratch implementations of graph neural networks, conditional graph generation, variational autoencoders, spatial 3D tree generation, and geometric mesh interpolation — all built with PyTorch, no external graph libraries.
GCN and GAT layers implemented from scratch using only PyTorch tensor operations. Evaluated on Zachary's Karate Club graph (34 nodes, 78 edges) with 50% labeled nodes.
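A minimal sketch of what a from-scratch GCN layer looks like in pure PyTorch; the class name `GCNLayer` and the dense `adj` argument are illustrative, not the repo's actual API:

```python
import torch

class GCNLayer(torch.nn.Module):
    """One GCN layer using only tensor ops (illustrative sketch)."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.weight = torch.nn.Parameter(torch.randn(in_dim, out_dim) * 0.1)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # Symmetric normalisation: D^{-1/2} (A + I) D^{-1/2}
        a_hat = adj + torch.eye(adj.size(0))
        d_inv_sqrt = a_hat.sum(dim=1).pow(-0.5)
        norm = d_inv_sqrt.unsqueeze(1) * a_hat * d_inv_sqrt.unsqueeze(0)
        return torch.relu(norm @ x @ self.weight)
```

A GAT layer replaces the fixed normalisation with learned per-edge attention coefficients, but the overall propagate-then-transform shape is the same.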
A conditional VAE that generates small graphs matching target structural properties (node count, density, clustering). Trained on mixtures of Erdős–Rényi, Barabási–Albert, and Watts–Strogatz random graphs.
Variational autoencoder that learns a continuous latent representation of graph structure, enabling smooth interpolation between topologically distinct graphs and generation of novel network structures.
Spectral layout computes deterministic 2D node positions from graph topology. Greedy node matching and edge blending produce smooth morphing sequences between structurally different graphs.
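One standard way to get such a deterministic layout — and a plausible reading of the approach here — is to place nodes at the eigenvectors of the graph Laplacian with the smallest nonzero eigenvalues:

```python
import torch

def spectral_layout(adj: torch.Tensor) -> torch.Tensor:
    """2D node positions from Laplacian eigenvectors (illustrative sketch)."""
    deg = torch.diag(adj.sum(dim=1))
    lap = deg - adj
    # eigh returns eigenvalues in ascending order; column 0 is the
    # trivial constant eigenvector (eigenvalue 0), so skip it.
    _, eigvecs = torch.linalg.eigh(lap)
    return eigvecs[:, 1:3]
```

Positions are deterministic up to eigenvector sign and the ordering of repeated eigenvalues, which is what makes the layout reproducible across runs.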
A variational autoencoder for trees embedded in continuous 3D space. At each step the decoder selects a parent node (discrete attention), predicts a 3D offset (learned Gaussian), and decides whether to stop — combining discrete structural decisions with continuous spatial positions.
Smooth interpolation between low-poly 3D meshes with different topology and vertex counts. Uses greedy nearest-neighbour node correspondence, linear position blending, and gradual edge topology transition — no learned decoder required.
Six distinct low-poly shapes spanning Platonic solids, prisms, stellated forms, and a toroidal surface. Each mesh has fixed topology with randomized vertex perturbation for training diversity.
Centre both meshes at origin. Pad the smaller mesh with phantom nodes at the centroid. Greedily match nodes by nearest-neighbour in 3D.
Linearly interpolate matched node positions: p(t) = (1−t) · pA + t · pB
Classify edges as shared, source-only, or target-only. Remove source-only edges proportionally to t and add target-only edges as t increases.
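The three steps above can be sketched as a single frame function; `interpolate_frame` and its arguments (matched position tensors of equal length, edge sets of index pairs) are assumed names, not the repo's interface:

```python
import torch

def interpolate_frame(pos_a, pos_b, edges_a, edges_b, t):
    """One morph frame at parameter t in [0, 1] (illustrative sketch)."""
    # Linear position blend: p(t) = (1 - t) * pA + t * pB
    pos = (1 - t) * pos_a + t * pos_b
    shared = edges_a & edges_b
    src_only = sorted(edges_a - edges_b)   # fade out as t grows
    tgt_only = sorted(edges_b - edges_a)   # fade in as t grows
    keep_src = src_only[: int(round((1 - t) * len(src_only)))]
    keep_tgt = tgt_only[: int(round(t * len(tgt_only)))]
    return pos, shared | set(keep_src) | set(keep_tgt)
```

At t = 0 the frame reproduces the source mesh exactly, and at t = 1 the target, so the sequence is anchored at both endpoints.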
Three shape pairs demonstrating interpolation across varying topology complexity: same vertex count, different vertex count, and reduction.
A graph VAE with inner-product decoder (VGAE, Kipf & Welling 2016) extended to jointly predict 3D node positions. Used for generative sampling from the learned latent space.
Physics-informed graph neural networks that embed geometric priors directly into the message-passing framework. Cotangent Laplacian edge weights, discrete curvature encodings, reaction-diffusion dynamics for pattern formation, and energy-based regularisation (Dirichlet, total variation, elastic) ensure that learned representations respect the underlying differential geometry of the domain.
Compute geometric edge weights from vertex positions using cotangent weights, replacing the combinatorial Laplacian with a mesh-aware operator.
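A sketch of the standard construction, assuming triangle faces: the weight of edge (i, j) accumulates half the cotangent of the angle opposite the edge in each adjacent face. Function names here are illustrative:

```python
import torch

def cot_angle(a, b, c):
    """Cotangent of the angle at vertex a in triangle (a, b, c)."""
    u, v = b - a, c - a
    return (u * v).sum() / torch.linalg.cross(u, v).norm()

def cotangent_weights(verts, faces):
    """Per-edge cotangent weights accumulated over faces (sketch)."""
    w = {}
    for f in faces:
        for k in range(3):
            i, j, opp = f[k], f[(k + 1) % 3], f[(k + 2) % 3]
            e = (min(i, j), max(i, j))
            w[e] = w.get(e, 0.0) + 0.5 * cot_angle(verts[opp], verts[i], verts[j])
    return w
```

Interior edges receive contributions from both adjacent triangles, giving the familiar (cot α + cot β)/2 weight; boundary edges get only one term.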
Discrete mean, Gaussian, and principal curvatures computed per vertex and injected as positional node features into the GNN encoder.
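For the Gaussian part, one common discretisation (an assumption here — the repo may normalise by vertex area) is the angle deficit, which by discrete Gauss–Bonnet sums to 2πχ over a closed mesh:

```python
import math
import torch

def gaussian_curvature(verts, faces):
    """Angle-deficit Gaussian curvature per vertex (illustrative sketch)."""
    K = torch.full((len(verts),), 2 * math.pi)
    for f in faces:
        for k in range(3):
            i, j, l = f[k], f[(k + 1) % 3], f[(k + 2) % 3]
            u = verts[j] - verts[i]
            v = verts[l] - verts[i]
            cos = (u * v).sum() / (u.norm() * v.norm())
            K[i] -= torch.acos(cos.clamp(-1, 1))  # subtract incident angle
    return K
```

On a regular tetrahedron each vertex collects three 60° angles, leaving a deficit of π per vertex and a total of 4π, as Gauss–Bonnet requires for a sphere-topology mesh.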
Gray-Scott dynamics on graph structure for pattern formation: coupled diffusion of two species with nonlinear reaction terms drives Turing-like instabilities.
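One explicit-Euler step of Gray-Scott on a graph looks like the following; the parameter names (`feed`, `kill`, `d_u`, `d_v`) follow the usual convention and the values are typical defaults, not necessarily the repo's:

```python
import torch

def gray_scott_step(u, v, lap, d_u=0.16, d_v=0.08,
                    feed=0.035, kill=0.065, dt=1.0):
    """One Euler step of Gray-Scott dynamics with graph Laplacian L = D - A."""
    uvv = u * v * v
    # Diffusion is -d * L applied to each species, since L is positive
    # semidefinite; the uv^2 term couples the two species nonlinearly.
    du = -d_u * (lap @ u) - uvv + feed * (1 - u)
    dv = -d_v * (lap @ v) + uvv - (feed + kill) * v
    return u + dt * du, v + dt * dv
```

The uniform state u = 1, v = 0 is a fixed point; patterns emerge once v is seeded locally and the instability amplifies the perturbation.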
Dirichlet energy for smoothness, total variation for edge-preserving gradients, and elastic energy for shape preservation — all computed on the graph Laplacian.
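The first two energies have compact graph forms — Dirichlet as the Laplacian quadratic form, total variation as an absolute-difference sum over edges (the elastic term is omitted here, as its exact weighting is repo-specific):

```python
import torch

def dirichlet_energy(x, lap):
    """x^T L x = sum over edges of (x_i - x_j)^2."""
    return x @ lap @ x

def total_variation(x, edges):
    """Sum of |x_i - x_j| over edges; penalises jumps less harshly
    than Dirichlet, hence edge-preserving."""
    return sum((x[i] - x[j]).abs() for i, j in edges)
```

On a single edge with signal values 0 and 1 both energies equal 1, but as the jump grows Dirichlet scales quadratically while TV scales linearly, which is why TV tolerates sharp transitions.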
Embedding hierarchical data in the Poincaré ball model of hyperbolic space, where trees embed with arbitrarily low distortion. Exponential volume growth matches the exponential branching of hierarchical structures, enabling faithful low-dimensional representations.
Map data to hyperbolic space where distance grows exponentially near the boundary. The metric tensor gx = (2/(1−||x||²))² · I warps Euclidean distances.
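The distance induced by that metric has a closed form; a sketch (with a small `eps` clamp for numerical safety near the boundary, an implementation detail assumed here):

```python
import torch

def poincare_distance(x: torch.Tensor, y: torch.Tensor, eps: float = 1e-7):
    """d(x, y) = arcosh(1 + 2||x - y||^2 / ((1 - ||x||^2)(1 - ||y||^2)))."""
    sq = ((x - y) ** 2).sum()
    denom = (1 - (x ** 2).sum()) * (1 - (y ** 2).sum())
    return torch.acosh(1 + 2 * sq / denom.clamp_min(eps))
```

From the origin to a point at Euclidean radius r this reduces to 2 artanh(r), which diverges as r → 1 — the exponential growth near the boundary described above.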
Aggregate neighbours via logmap → tangent space → expmap. Operations stay on the manifold throughout the forward pass.
RiemannianAdam preserves manifold constraints during gradient descent by rescaling gradients with the inverse metric tensor.
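The core rescaling is simple: on the Poincaré ball the inverse metric multiplies the Euclidean gradient by (1 − ||x||²)²/4. This sketch shows only that step — a full RiemannianAdam (as in libraries like geoopt) also maps the update back onto the manifold via a retraction or exponential map:

```python
import torch

def riemannian_grad(x: torch.Tensor, euclid_grad: torch.Tensor) -> torch.Tensor:
    """Rescale a Euclidean gradient by the inverse Poincare-ball metric."""
    scale = ((1 - (x ** 2).sum()) ** 2) / 4
    return scale * euclid_grad
```

Near the boundary the scale vanishes, so steps shrink exactly where Euclidean steps would otherwise overshoot out of the ball.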
Curved arcs orthogonal to the boundary circle — the shortest paths in hyperbolic geometry rendered in the Poincaré disk.
The 1-dimensional Weisfeiler-Leman test iteratively refines node colors by aggregating neighbor multisets. Its discriminative power upper-bounds that of message-passing GNNs, and architectures such as GIN match the bound exactly (Xu et al., 2019; Morris et al., 2019), making it a fundamental tool for understanding GNN expressivity limits.
Assign each node a color based on its degree. Nodes with the same degree get the same initial color.
For each node, hash (color, sorted neighbor colors) and relabel to consecutive integers. Repeat for k iterations.
If color histograms of two graphs differ at any iteration, they are provably non-isomorphic. If they always match, WL cannot distinguish them.
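The three steps above fit in a few lines of plain Python; `wl_refine` is an illustrative name and the graph is given as adjacency lists:

```python
def wl_refine(adj, iters=3):
    """1-WL colour refinement; returns the sorted final colour multiset."""
    colors = [len(nbrs) for nbrs in adj]  # initial colour = degree
    for _ in range(iters):
        # Signature = own colour plus sorted multiset of neighbour colours.
        sigs = [(colors[i], tuple(sorted(colors[j] for j in adj[i])))
                for i in range(len(adj))]
        # Relabel signatures to consecutive integers.
        table = {s: k for k, s in enumerate(sorted(set(sigs)))}
        colors = [table[s] for s in sigs]
    return sorted(colors)
```

The classic failure case: two disjoint triangles and a single 6-cycle are both 2-regular, so every node keeps the same colour forever and 1-WL cannot tell them apart, even though they are not isomorphic.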
A walkthrough of every module in the Graph ML Lab. Each card explains the workflow, expected outputs, and key observations for understanding graph neural networks.
What it does: Node classification on Zachary’s Karate Club (34 nodes, 78 edges) using GCN or GAT layers built from scratch.
What it does: Conditional graph VAE that generates graphs matching target properties (node count, density, clustering).
What it does: Graph VAE encodes graphs into a latent space; diffusion model denoises random noise into coherent graphs.
What it does: Autoregressive tree VAE for 3D spatial trees and neuron morphologies, plus geometric mesh interpolation.
What it does: Physics-informed GNN with cotangent Laplacians, discrete curvatures, reaction-diffusion dynamics, and energy regularisation.
What it does: Embeds graphs in the Poincaré ball model of hyperbolic space, where hierarchical structures like trees can be represented with arbitrarily low distortion.
What it does: Interactive 1-Weisfeiler-Leman colour refinement for testing graph isomorphism and understanding GNN expressivity limits.