Hass.io 'error while loading page addon'

So I was chasing my tail, trying to resolve a 'ghost switch' on a Sonoff 4CH Pro. Long story short, it turned out to be the retain flag in the MQTT broker… BUT… trying to access the Hass.io add-on page to uninstall / reinstall the MQTT broker, and it doesn't load. Screenshot:
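(For anyone else chasing a similar ghost switch: a retained message can be wiped by publishing an empty payload with the retain flag set to the same topic. A minimal sketch, assuming the mosquitto client tools are installed; the broker IP and topic below are just examples, not the actual ones from this setup:)

```shell
# Publish a zero-length (-n) retained (-r) message to clear the
# retained state on the switch's command topic (names are examples)
mosquitto_pub -h 192.168.1.50 -t "sonoff/4chpro/cmnd/POWER1" -r -n
```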

I’ve trawled the forums / community, but I can’t seem to close this one out.

BTW - I transitioned the MQTT broker to my QNAP NAS. Changed the MQTT broker info in configuration.yaml, changed the MQTT broker IP on all the Sonoffs etc., and it’s rock solid, so I’m probably better off…
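(For reference, pointing Home Assistant at an external broker on a 0.87-era install is just the `mqtt:` section in configuration.yaml. A sketch only; the IP and credentials below are placeholders, not the real ones:)

```yaml
mqtt:
  broker: 192.168.1.50        # placeholder: the NAS's IP
  port: 1883
  username: mqtt_user         # placeholder credentials
  password: !secret mqtt_password
```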

But now I want to uninstall the MQTT broker add-on and I can’t access it… Secondly, I want to update Node-RED, Samba, AppDaemon etc., and each time I select the add-on, the same thing happens…

I’m running Hass.io on a Pi 3, version 0.87.1. Bit gun-shy of updating, in case there is another dependency issue etc., and I’m happy on 0.87.1.

Not sure which logs / other config, if any, would be helpful here - I haven’t changed configuration.yaml for over 3 months…

Cheers,

Jarrod.

Did you try CTRL + F5 ?

Hi Tom,

Yes, I tried CTRL+F5.

I also read here about clearing the cache. Thing is, this happens on my iPhone and on other desktops too.

To clarify - I open Hass.io and can SEE the add-ons, i.e. MQTT, Node-RED etc., but when I click on an add-on is when the above screen occurs.

I’ve also tried refreshing / updating the supervisor in the Hass.io advanced settings.
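(If anyone wants to try the same thing from the SSH add-on instead of the UI, this is a sketch using the 0.87-era `hassio` CLI; newer installs use `ha` in place of `hassio`:)

```shell
# Reload the supervisor's add-on and repository data
hassio supervisor reload

# Check the supervisor log for errors while an add-on page fails to load
hassio supervisor logs
```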

Other than that, and the good old restart, I can’t really find any other options for what to do…

Yeah. That’s not good. You might need some expert help.

I know you are a busy man @pvizeli and I hate to bug you like this, but do you have any advice you can offer here?

I’m having exactly the same fault and I really need to get this fixed. Any news on this subject?

/Dennis

Just thought I’d bump this one back up a bit.

I’m considering a fresh install on the Pi, after a backup, but I’m not sure whether restoring the backup will carry the issue over…

Thoughts?

Same problem here, after upgrading to 2021.06.3.

Hi jpsfs,

Not the answer you want to hear - I did a backup and rebuild, and it remained an issue.
Bit the bullet and did a VM build on the NAS. Never looked back.

If you can, take the plunge.

Good luck.

Jarrod.

Thank you for the update @Jarrod!
Perhaps I got lucky; in my case the problem was due to a JSON file not being served through NGINX as it should be. During a reboot the internal IP changed, so I had to update the trusted_proxies configuration to allow the entire subnet instead of a specific IP (which is what I had initially done).
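For anyone hitting the same thing, a minimal sketch of what that change looks like in configuration.yaml. The subnet shown is only an example; use whatever range your proxy actually sits on:

```yaml
http:
  use_x_forwarded_for: true
  trusted_proxies:
    # Trust the whole proxy subnet rather than a single IP,
    # so a reboot-time IP change doesn't break add-on pages
    - 172.30.32.0/23
```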