Good question. goooood question....
hinst: Docker on Raspberry Pi Zero = RIP
You can write a systemd unit file for your service apps
a cron job that runs periodically
advanced cron setups (there are talks on these, but basically more control and making sure things actually run)
multi-system cron orchestration (Chef, Puppet, other dev-ops-y tools)
There's probably a better way, but it depends on what you care about. Do they crash or need to be re-run? Maybe a service? Maybe a tool to make sure they're running? Sounds like it's set-and-forget though, so ¯\_(ツ)_/¯ if it's working, you don't need to fix it.
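The systemd option mentioned above can be sketched as a minimal unit file (the service name, description, and paths here are hypothetical examples):

```
# /etc/systemd/system/scraper.service  (hypothetical name and path)
[Unit]
Description=My scraper app
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/bin/python3 /opt/scraper/main.py
# Auto-restart on crashes, like supervisor or PM2 would do:
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl enable --now scraper.service`; `Restart=on-failure` covers the "do they crash or need re-running" case without extra tooling.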
As far as monitoring goes, you can use Grafana
rantydev: I've used supervisor on RPis. Lightweight and easy to set up. It can also be set to monitor and restart programs automatically in case they crash. 10/10 would use again
If you want to export metrics, the best approach is definitely to expose a Prometheus endpoint and scrape it.
You can do this in any language - the coding side is trivial in my opinion; what's harder is understanding metrics: naming them properly and picking the right metric types (gauge, counter, ...).
The joyful thing about Prometheus is its wide adoption.
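A minimal sketch of "exposing a Prometheus endpoint in any language", using only the Python standard library (in practice you would likely use the prometheus_client package; the metric names here are hypothetical examples):

```python
# Expose metrics in the Prometheus text exposition format over plain HTTP.
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

START_TIME = time.time()
SCRAPE_COUNT = 0  # counter semantics: the value only ever increases


def render_metrics() -> str:
    """Render the current metric values in Prometheus text format."""
    uptime = time.time() - START_TIME
    return (
        "# HELP app_uptime_seconds Time since the process started.\n"
        "# TYPE app_uptime_seconds gauge\n"
        f"app_uptime_seconds {uptime:.3f}\n"
        "# HELP app_scrapes_total Number of times /metrics was scraped.\n"
        "# TYPE app_scrapes_total counter\n"
        f"app_scrapes_total {SCRAPE_COUNT}\n"
    )


class MetricsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        global SCRAPE_COUNT
        if self.path != "/metrics":
            self.send_error(404)
            return
        SCRAPE_COUNT += 1
        body = render_metrics().encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; version=0.0.4")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass


def serve(port: int = 9091) -> None:
    # Bind to loopback only; let Telegraf or Prometheus scrape it locally.
    HTTPServer(("127.0.0.1", port), MetricsHandler).serve_forever()
```

Call `serve()` from your app's startup; the gauge vs. counter distinction in the `# TYPE` lines is exactly the part the comment above calls the hard bit.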
You can easily use InfluxDB's Telegraf to scrape *multiple* Prometheus endpoints, InfluxDB endpoints etc. and output a *single* scrape point with all the metrics.
One open port externally, everything else on the loopback network. Firewall configuration is easy peasy.
Grafana is a possibility for dashboards.
The beauty is: Telegraf / Prometheus allow you to split the simpler task - gathering metrics - from the beefy part (aggregation, evaluation, representation, etc.)
So you can just run Telegraf on the RPI, scrape it on another system and do everything heavy on the other system, too.
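The split described above can be sketched in Telegraf's TOML config, using its Prometheus input and output plugins (the endpoint URLs are hypothetical examples):

```
# telegraf.conf on the RPi: scrape several local Prometheus endpoints...
[[inputs.prometheus]]
  urls = ["http://127.0.0.1:9091/metrics", "http://127.0.0.1:9100/metrics"]

# ...and re-expose everything as a single scrape point for the other system.
[[outputs.prometheus_client]]
  listen = ":9273"
```

Only the output port needs to be reachable externally; the scraped endpoints stay on loopback.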
Do you want to execute a certain command at a certain time?
Go for SystemD timers.
The primary reason is flexibility and recoverability.
SystemD timers do *not* just run at a certain interval; they can, if wanted, store the last time of execution.
Meaning that if there is an interruption for whatever reason, like a reboot, the timer will still trigger.
Plus you get the dependency management of units. Which simplifies things when you're dependent on e.g. a network connection, certain mounts etc.
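The persistence behaviour described above is systemd's `Persistent=` option; a minimal sketch of a timer unit (the unit names are hypothetical, and it pairs with a matching `.service` unit holding the actual command):

```
# scrape.timer  (activates scrape.service on the same schedule name)
[Unit]
Description=Run the scraper every hour

[Timer]
OnCalendar=hourly
# Store the last trigger time; if a reboot makes us miss a run,
# fire immediately on the next boot instead of silently skipping it.
Persistent=true

[Install]
WantedBy=timers.target
```

Enable with `systemctl enable --now scrape.timer`; dependencies like network or mounts go in the `.service` unit as usual.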
What you should *NEVER* do is try to cram multiple commands inside e.g. a Docker container.
Don't try to be clever and e.g. define a CMD with multiple commands / processes that run in parallel.
It's fragile, it's an anti-pattern, it's painful, and it will haunt you in many ways.
Why metrics instead of raw data… like a JSON endpoint?
The beauty of metrics is that they take away the math - interpolation, timelines, etc. All done. No need to bother. No room for mistakes in the format.
The only complexity is understanding and writing proper metrics (naming, tagging, types).
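For example, with a counter metric the rate calculation you would otherwise hand-roll over a JSON history of timestamped values is a one-liner in PromQL (the metric name is a hypothetical example):

```
# Per-second increase over the last 5 minutes; counter resets and
# interpolation are handled by Prometheus, not by your code.
rate(app_scrapes_total[5m])
```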
fruitfcker: Ask yourself, what type of metric do you want to see. Is it server specific (CPU, memory, utilization) or Python/NodeJS app specific (HTTP codes, scraping results, number of scraped pages per hour, etc.) or both?
steev: you could use PM2