TensorFlow: Demand-Driven Execution
An alternative executor for TensorFlow that replaces the default breadth-first graph execution (a ready-queue that schedules every node whose inputs are available) with a demand-driven, depth-first approach. By propagating demand backward from output nodes, intermediate results are consumed sooner and freed earlier, substantially reducing peak memory usage during operations like backpropagation, where standard execution forces early activations to persist until the very end of the backward pass. Developed as an undergraduate research project at the University of Athens, with a working prototype showing promising memory-reduction results.
Features
Demand-Driven Executor
Replaces TensorFlow's FIFO-based node scheduling with depth-first execution driven by output demand. Instead of computing all available nodes breadth-first, the executor starts from the requested output and recursively triggers only the computations needed, in the order they're consumed.
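The idea can be sketched in a few lines of Python. This is not TensorFlow's actual executor; the `Node` class and graph below are illustrative assumptions, showing how demand for one output recursively pulls in only the computations it depends on, depth-first:

```python
# Minimal sketch of demand-driven, depth-first evaluation of a
# dataflow graph. Illustrative only -- not TensorFlow internals.

class Node:
    def __init__(self, name, op, inputs=()):
        self.name = name
        self.op = op          # callable combining this node's input values
        self.inputs = inputs  # upstream Node objects

def evaluate(node, cache=None):
    """Compute `node` by recursively demanding only its inputs,
    in the order they are consumed."""
    if cache is None:
        cache = {}
    if node.name in cache:
        return cache[node.name]
    args = [evaluate(dep, cache) for dep in node.inputs]
    cache[node.name] = node.op(*args)
    return cache[node.name]

# Tiny graph: d = (a + b) * b
a = Node("a", lambda: 2)
b = Node("b", lambda: 3)
c = Node("c", lambda x, y: x + y, (a, b))
d = Node("d", lambda x, y: x * y, (c, b))

print(evaluate(d))  # → 15
```

Requesting `d` triggers `c`, which triggers `a` and `b`; nodes the output does not depend on are never computed, in contrast to a breadth-first scheduler that runs every ready node.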
Memory Optimization
Reduces peak memory footprint by freeing intermediate results as soon as their last consumer has run, rather than holding them until all downstream nodes are scheduled. This also enables a trade-off between storage and recomputation: a result can be dropped and recalculated on demand instead of kept resident.
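The free-after-last-use policy can be sketched with simple reference counting: each intermediate carries a count of pending downstream reads, and its buffer is released the moment the final consumer has read it. The schedule format and names here are illustrative assumptions, not TensorFlow internals:

```python
# Sketch of free-after-last-use via consumer reference counts.
# Illustrative only -- not TensorFlow's memory manager.

def run_schedule(schedule, consumers):
    """
    schedule:  list of (name, fn, input_names) in execution order
    consumers: name -> number of downstream reads of that result
    Returns the surviving results and the peak number of live values.
    """
    live = {}                    # name -> value currently resident
    remaining = dict(consumers)  # reads still pending per result
    peak = 0
    for name, fn, input_names in schedule:
        args = [live[i] for i in input_names]
        live[name] = fn(*args)
        peak = max(peak, len(live))
        for i in input_names:
            remaining[i] -= 1
            if remaining[i] == 0:
                del live[i]      # free as soon as the last consumer ran
    return live, peak

# Tiny example: d = (a + b) ** 2
schedule = [
    ("a", lambda: 2, ()),
    ("b", lambda: 3, ()),
    ("c", lambda x, y: x + y, ("a", "b")),
    ("d", lambda x: x * x, ("c",)),
]
consumers = {"a": 1, "b": 1, "c": 1, "d": 0}
results, peak = run_schedule(schedule, consumers)
print(results, peak)  # → {'d': 25} 3
```

Here `a` and `b` are freed immediately after `c` is produced, so at most three values are ever resident; a scheduler that keeps everything until the end would hold all four.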