There are several real causes here, not just one, and they are mostly not “you forgot to force something.” Under normal behavior, a new commit to a Space repo should cause HF to rebuild the Docker image and provision a new VM for the new container. Separately, HF documents factory_reboot=True much more narrowly: it rebuilds from scratch without caching requirements. That means the official “hard rebuild” control is aimed at the dependency-layer cache, not at guaranteeing recovery from a stale source snapshot or a stale runtime binding. (Hugging Face)
Bottom line
The symptom “build kicked off, but it did not use HEAD” usually falls into one of three real buckets:
- Stale builder snapshot. The build job itself is attached to an older commit than repo HEAD. This is the closest match to your complaint. Recent forum reports describe exactly that behavior: users pushed fresh commits, triggered redeploys, even did factory rebuilds, and the Space still picked an old version of the repo. (Hugging Face Forums)
- Correct build, stale runtime. The build may actually have used the right commit, but the running environment or served app is still bound to older state. A recent report showed the Space SHA moving forward while the runtime SHA stayed old, which is a strong signal that the control/runtime layer can lag behind the repo/build layer.
- Platform-side queue/control trouble. Sometimes the problem is broader than one repo. Empty logs, a build stuck at Build queued forever, or multiple Spaces acting strangely all point toward platform-side build/control issues rather than a bug in your repo.
The likely causes, in plain English
1. HF selected the wrong source snapshot for the build
This is the most literal version of your problem. The repo HEAD is new, but the build job still starts from an older revision. The public reports from February 2025 are very close to this. In those cases, repeated commits and restarts did not reliably move the Space forward. (Hugging Face Forums)
How to recognize it:
Check the build log header and compare its Commit SHA to your actual repo HEAD. If the log header itself is older than HEAD, you are not looking at a startup bug or a Gradio bug. The wrong source revision entered the build pipeline. (Hugging Face Forums)
Why factory rebuild may not fix it:
Because the documented scope of factory rebuild is requirements-cache invalidation, not “guarantee that the source snapshot pointer is fresh.”
2. The build used HEAD, but the runtime never really rolled forward
This is subtler, and it explains why users sometimes swear the build is wrong when the real problem is the running environment. HF’s config docs separate build success from startup health. A new image can exist, but if the app never becomes healthy enough, the platform can leave you with behavior that still looks like the previous version. The docs explicitly expose startup_duration_timeout, and they also document preload_from_hub, which exists precisely because startup-time downloads can make readiness fragile.
How to recognize it:
The build log Commit SHA matches HEAD, but the app still looks old, or runtime metadata still points at an older SHA. That is the pattern shown in the “new Space SHA, old runtime SHA” report.
Typical hidden triggers:
Slow startup, model downloads at boot, startup health timing out, or runtime state not fully rotating to the new revision.
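One way to make this concrete is to read the Space's runtime stage from the Hub and map it to a next debugging step. A minimal sketch, assuming a recent huggingface_hub client (its get_space_runtime call and the stage field on the result); "YOUR_USER/YOUR_SPACE" is a placeholder:

```python
# Minimal sketch: interpret the Space's runtime stage (assumes a recent
# huggingface_hub client; the Space id below is a placeholder).

def interpret_stage(stage: str) -> str:
    """Map a runtime stage string to a rough next debugging step."""
    if stage == "RUNNING":
        return "runtime is up; if the app still looks old, suspect a stale runtime binding"
    if stage == "BUILDING":
        return "build in progress; compare the build-log Commit SHA to repo HEAD"
    if stage in ("BUILD_ERROR", "RUNTIME_ERROR"):
        return "build/startup failed; the served app may still be the previous revision"
    return f"unhandled stage {stage!r}; read the runtime logs directly"

if __name__ == "__main__":
    from huggingface_hub import HfApi  # deferred import: this is a network call
    runtime = HfApi().get_space_runtime("YOUR_USER/YOUR_SPACE")
    print(runtime.stage, "->", interpret_stage(runtime.stage))
```

The stage strings above are the common ones; treat anything unrecognized as a cue to read the runtime logs rather than the build logs.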
3. The Space slug or deployment state is wedged
This is the broader control-plane explanation. A historical GitHub issue reported that restarting the Space did not update the served app, and the new app only appeared after a hardware switch. A separate 2025 report said the newest code only appeared after a different mode transition. Those are not “git fixes.” They are evidence that deeper infra state transitions can dislodge stale deployment state.
How to recognize it:
One specific Space keeps acting wrong, while the same code in a fresh staging Space behaves correctly.
4. A platform incident is making your repo look guilty
This is the case where build controls are unhealthy across the platform. The signal here is not one old SHA. The signal is empty or useless build logs, Build queued forever, or brand-new minimal Spaces also failing.
How to recognize it:
Create a minimal test Space. If that also gets stuck or behaves strangely, stop blaming your repo first.
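If you would rather script the minimal test Space than click through the UI, a sketch using huggingface_hub's create_repo and upload_file is below. The Space id is a placeholder, a write token is needed for the real calls, and the api parameter exists only so the logic can be exercised offline:

```python
# Sketch: create a bare Gradio Space as a platform-health probe.
# Assumes huggingface_hub; space_id is a placeholder.

MINIMAL_APP = b"import gradio as gr\ngr.Interface(lambda x: x, 'text', 'text').launch()\n"

def create_minimal_space(space_id: str, api=None):
    """Create a trivial Space; if even this wedges, suspect the platform."""
    if api is None:
        from huggingface_hub import HfApi  # deferred: needs network + write token
        api = HfApi()
    api.create_repo(repo_id=space_id, repo_type="space", space_sdk="gradio")
    api.upload_file(path_or_fileobj=MINIMAL_APP, path_in_repo="app.py",
                    repo_id=space_id, repo_type="space")
```

If this Space also sticks in the queue or shows empty logs, you have your answer before touching the original repo.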
The solutions that actually make sense
Solution 1. Split “wrong build” from “wrong runtime” immediately
Do this first:
- compare local git rev-parse HEAD
- compare the build log's Commit SHA
- compare runtime behavior after startup
That one split tells you which class of fix is even relevant. If build-log SHA is old, you have a source/build selection bug. If build-log SHA is correct, you have a runtime/startup/control bug. (Hugging Face Forums)
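A sketch of that split: classify() is pure logic, local_head() shells out to git, and the Hub-side HEAD comes from huggingface_hub's space_info (the Space id is a placeholder). The build-log SHA still has to be read by eye from the build log header; there is no assumption here that it is available programmatically.

```python
# Sketch: turn the SHA comparison into a one-line diagnosis.
import subprocess

def classify(build_log_sha: str, repo_head_sha: str) -> str:
    """Decide which class of bug the SHA comparison points at."""
    if build_log_sha != repo_head_sha:
        return "source/build selection bug: the builder used a stale snapshot"
    return "runtime/startup/control bug: the build used HEAD, look past git"

def local_head() -> str:
    """Local git HEAD, to compare against the Hub and the build log."""
    out = subprocess.run(["git", "rev-parse", "HEAD"],
                         capture_output=True, text=True, check=True)
    return out.stdout.strip()

if __name__ == "__main__":
    from huggingface_hub import HfApi  # deferred: network call
    hub_head = HfApi().space_info("YOUR_USER/YOUR_SPACE").sha
    print("local:", local_head(), "hub:", hub_head)
```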
Solution 2. Use one factory rebuild, once
A factory rebuild is still worth doing once because it is the official full rebuild control. But treat it as a diagnostic control, not a cure-all. If the next build log still shows the wrong SHA, repeating factory rebuilds is unlikely to teach you anything new.
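If you prefer to trigger it from a script, the same control is exposed client-side as restart_space with factory_reboot=True in huggingface_hub (a write token is required; the Space id is a placeholder, and the api parameter is only there so the call can be exercised without network access):

```python
# Sketch: one factory rebuild as a diagnostic, via the Hub client.

def factory_rebuild_once(space_id: str, api=None):
    """Trigger the documented 'rebuild without requirements cache' path once."""
    if api is None:
        from huggingface_hub import HfApi  # deferred: needs a write token
        api = HfApi()
    api.restart_space(space_id, factory_reboot=True)

# factory_rebuild_once("YOUR_USER/YOUR_SPACE")  # then re-check the build-log SHA
```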
Solution 3. Create a fresh staging Space with the exact same latest commit
This is the best real-world diagnostic after the SHA check.
- staging Space uses HEAD correctly → your original Space state is likely wedged
- staging Space also fails → this is broader platform/account/build-path trouble
This is one of the cleanest tests because it keeps the code constant while changing the Space identity and deployment history. The public reports strongly support this style of diagnosis.
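huggingface_hub ships a duplicate_space call that does exactly this identity swap, cloning the Space at its current HEAD under a new name. A sketch with placeholder ids; api is injectable so the logic is testable offline:

```python
# Sketch: spin up a staging copy, keeping the code constant while
# changing the Space identity and deployment history.

def make_staging_copy(source_space: str, staging_space: str, api=None):
    """Duplicate the Space at its current HEAD under a fresh identity."""
    if api is None:
        from huggingface_hub import HfApi  # deferred: needs a write token
        api = HfApi()
    return api.duplicate_space(from_id=source_space, to_id=staging_space,
                               private=True)

# make_staging_copy("YOUR_USER/my-space", "YOUR_USER/my-space-staging")
```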
Solution 4. If the build SHA is correct, fix startup health instead of chasing git
If your build log matches HEAD, shift focus:
- reduce boot-time downloads
- use preload_from_hub for assets that must exist before the app is ready
- increase startup_duration_timeout if initialization is legitimately long
- inspect runtime logs, not build logs
This is the official config path for Spaces that “built fine” but never become cleanly ready.
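Both knobs live in the Space's README.md frontmatter. A sketch of what that section might look like, with field names taken from the Spaces config reference; the title, model id, file names, and timeout value are illustrative assumptions, not recommendations:

```yaml
---
title: My Space
sdk: gradio
app_file: app.py
# give legitimately slow initialization more room before health checks give up
startup_duration_timeout: 1h
# bake heavy assets into the image at build time instead of downloading at boot
preload_from_hub:
  - bert-base-uncased config.json,vocab.txt
---
```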
Solution 5. If multiple Spaces or a minimal Space also fail, escalate fast
At that point, more commits are noise. Gather:
- Space URL
- expected repo HEAD SHA
- observed build-log Commit SHA
- timestamps
- whether a fresh minimal Space reproduces
- whether the issue is isolated to one Space or broader
That is the evidence package that distinguishes a real platform-side problem from repo-level debugging. (Hugging Face Forums)
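A tiny helper to collect that package in one place, so nothing is lost between debugging notes and the report. This is pure bookkeeping with no API calls, and every field name is this sketch's own invention:

```python
# Sketch: assemble the escalation evidence package as one dict.
from datetime import datetime, timezone

def evidence_package(space_url: str, expected_head_sha: str,
                     observed_build_log_sha: str,
                     minimal_space_reproduces: bool,
                     other_spaces_affected: bool) -> dict:
    """Bundle the facts that distinguish platform trouble from repo trouble."""
    return {
        "space_url": space_url,
        "expected_repo_head_sha": expected_head_sha,
        "observed_build_log_sha": observed_build_log_sha,
        "sha_mismatch": expected_head_sha != observed_build_log_sha,
        "minimal_space_reproduces": minimal_space_reproduces,
        "isolated_to_one_space": not other_spaces_affected,
        "reported_at_utc": datetime.now(timezone.utc).isoformat(),
    }
```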
What I would not do
I would not keep pushing no-op commits hoping one lands. The 2025 reports show that repeated redeploys can fail to advance the Space. I would also not assume that “factory rebuild” means “invalidate every stale layer in the system,” because HF’s own API docs define it more narrowly than that. (Hugging Face Forums)
I would also not jump straight to Dev Mode. You already ruled that out, and for this specific complaint the better path is to identify which layer is stale rather than change your workflow. The Dev Mode docs are still useful as a statement of what the normal non-Dev lifecycle is supposed to be. (Hugging Face)
My recommended order of operations
- Check repo HEAD vs build-log Commit SHA. (Hugging Face Forums)
- Run one factory rebuild.
- If build-log SHA is still old, create a fresh staging Space with the same commit.
- If build-log SHA is correct, treat it as startup/runtime and adjust readiness settings or preload strategy.
- If a minimal Space also fails, escalate as platform-side.
My best short answer
Causes: stale builder snapshot, stale runtime/control state, wedged Space state, or broader platform queue/control issues. (Hugging Face Forums)
Solutions: compare SHAs first, use factory rebuild only once as a control, switch to a fresh staging Space if the original slug is wedged, tune startup if the build SHA is correct, and escalate quickly if a minimal Space also reproduces.
That is the shortest path that actually matches the evidence.