Encountering a ModuleNotFoundError while running a PySpark job can be frustrating, especially when the module exists and works perfectly outside of Spark. I recently ran into this exact issue while working with a Spark job that imported a function from a local utils module. Everything seemed fine until I tried to use a PySpark UDF, and suddenly, the module was nowhere to be found.
After some debugging, I found a straightforward way to fix it that worked perfectly for me.
Understanding the ModuleNotFoundError Problem
PySpark runs code on distributed nodes, and those nodes might not have access to the same Python environment as the driver (the machine running the job). When a Python UDF is executed on worker nodes, it gets serialized and shipped over the network. If the UDF relies on a module that isn’t available on the workers, PySpark throws a ModuleNotFoundError.
That’s why an import statement might work fine at the top of a script but fail when running inside a UDF. PySpark workers simply don’t know where to find the custom module unless explicitly told.
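As a concrete illustration, here is a minimal sketch of how the failure typically surfaces; utils.formatting stands in for your own local module, and the app name and column names are placeholders:

from pyspark.sql import SparkSession
from pyspark.sql.functions import udf
from pyspark.sql.types import StringType

from utils.formatting import format_text  # succeeds on the driver

spark = SparkSession.builder.appName("udf-import-demo").getOrCreate()
df = spark.createDataFrame([("hello",), ("world",)], ["text"])

# The UDF body runs on the workers, which may not have utils on their
# Python path, so this raises ModuleNotFoundError when the action runs.
format_udf = udf(lambda s: format_text(s), StringType())
df.withColumn("formatted", format_udf("text")).show()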
Fixing the Import Error in PySpark UDFs
Import Inside the UDF
The solution that worked for me was to move the import statement inside the function being used as a UDF. This ensures that the necessary module is available when the function is executed on worker nodes.
Before (Fails with ModuleNotFoundError):
from utils.formatting import format_text  # This works on the driver, but fails inside a UDF

def format_text_local(text):
    return format_text(text)  # Might break when run inside Spark
After (Works with Spark UDFs):
def format_text(text):
    # These imports run on the worker node, where the UDF actually executes
    import sys
    import os
    # Make the directory containing this script importable on the worker
    sys.path.append(os.path.abspath(os.path.dirname(__file__)))
    from utils.formatting import format_text as format_text2
    return format_text2(text)
By placing the import inside the function, it gets executed on the worker nodes where the function actually runs.
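From there, the function can be registered and applied as usual. A brief usage sketch, assuming a DataFrame df with a string column named text:

from pyspark.sql.functions import udf
from pyspark.sql.types import StringType

format_udf = udf(format_text, StringType())
df = df.withColumn("formatted", format_udf("text"))
df.show()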
Conclusion
PySpark’s distributed nature makes dependency management tricky, but in my case, simply importing the module inside the UDF was enough to resolve the issue. If you’re running into a ModuleNotFoundError when using a PySpark UDF, this approach is worth trying first before considering more complex solutions.
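For completeness, those heavier alternatives generally amount to shipping the module to the executors yourself. A rough sketch, where utils.zip is a placeholder for a zipped archive of your package:

# Option 1: distribute a zipped copy of the package at runtime.
# Workers add the archive to their Python path automatically.
spark.sparkContext.addPyFile("utils.zip")

# Option 2: attach the archive at submit time instead (shell command):
#   spark-submit --py-files utils.zip my_job.py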