Comments
-
We created standards and measures for errors logged. The department manager gets a report, along with alerts to the entire department if errors go beyond a defined threshold. Splunk makes this very easy to do.
Developers are also held accountable for downtime caused by errors, so they can get quite obsessive about making sure the code works before release (unit and lots of integration tests) and create their own alerts so they are notified ASAP when problems arise.
It didn't happen overnight.
Start small and maybe you can develop a culture where developers take ownership and pride in the software they release.
-
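The threshold-alert idea above doesn't need Splunk specifically; a minimal sketch in Python of the same logic (the "LEVEL message" log format, the threshold value, and the alert string are all assumptions for illustration):

```python
from collections import Counter

ERROR_THRESHOLD = 100  # assumed per-window limit; tune to your traffic

def count_errors(log_lines):
    """Count lines whose first field is ERROR (assumed 'LEVEL message' format)."""
    levels = Counter(line.split(maxsplit=1)[0] for line in log_lines if line.strip())
    return levels["ERROR"]

def check_threshold(log_lines, threshold=ERROR_THRESHOLD):
    """Return an alert message if the error count exceeds the threshold, else None."""
    n = count_errors(log_lines)
    if n > threshold:
        return f"ALERT: {n} errors exceeds threshold of {threshold}"
    return None
```

In practice the return value would feed whatever notifies the department (email, chat webhook); the point is that the threshold is defined once and checked automatically.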
@PaperTrail
While I agree with the sentiment, I can't ever recommend Splunk. They don't deliver value for the cost. With our log throughput (5 GB/day), Splunk was almost $11,000 a year.
We run an EFK + Prometheus + Grafana stack at under $900/year with better performance.
-
torbuxx (5y):
I guess it's a motivational issue, or even a lack of esteem. If something can fail for two weeks without anyone noticing, how important can the work be?
You should check this, and if your coworkers underestimate the importance of the project, explain it to them: what it means if the project is not well maintained, what it means if the customer is not satisfied, what it means if the project gets lost.
-
hitko (5y):
With a properly configured pipeline this shouldn't happen, like, ever. Developers have no reason to care how the deployment process happens, or to manually check whether the required dependencies got deployed as well. The pipeline should be configured to ensure deployment is all or nothing, and you should use deployment tools like helm test to verify everything works on deploy.
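As a sketch of that all-or-nothing idea (the release name, chart path, and namespace are assumptions), a CI deploy step can use Helm's `--atomic` flag, which rolls the whole release back if any resource fails to become ready, and then run the chart's test hooks with `helm test`:

```shell
#!/usr/bin/env sh
set -e  # abort the pipeline on the first failing command

# Deploy atomically: --atomic rolls the release back if any resource
# fails to come up within the timeout, so a half-deployed backend
# never goes live alongside a new front end.
helm upgrade --install my-app ./chart \
    --namespace production \
    --atomic --timeout 5m

# Run the chart's test pods; a non-zero exit fails the pipeline here,
# before anyone has to discover the problem in the error logs.
helm test my-app --namespace production
```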
-
@SortOfTested
> I can't ever recommend splunk
I understand. Splunk used to be affordable. We've been migrating some logging to Elasticsearch (internal performance measures), but we have too much invested in code and resources. Nobody feels moving the existing reports/dashboards/alerts is worth the effort.
I'm so sick of devs not caring what happens after they push their code. A new feature was released on the front end two weeks ago, but the backend was never deployed. It's been logging errors for two weeks now.
I know I'm equally at fault for not noticing, but I feel like the only person who ever notices things like this. I also discovered a data issue today by looking at the error logs.
How can I get my teammates to be more invested in how the service runs live?
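A cheap guard against the front-end/backend mismatch described above is a post-deploy smoke check that fails loudly when an expected endpoint is missing. A minimal sketch (the endpoint list is hypothetical, and the fetch function is injected so the logic is testable without a live service):

```python
def smoke_check(endpoints, fetch):
    """Probe each endpoint with the injected fetch(url) -> status_code callable.

    Returns a list of (url, status) pairs for endpoints that did not
    answer 200, so a CI step can fail the deploy when the list is non-empty.
    """
    failures = []
    for url in endpoints:
        try:
            status = fetch(url)
        except Exception:
            status = None  # connection errors count as failures too
        if status != 200:
            failures.append((url, status))
    return failures
```

In a real pipeline, fetch could be a thin wrapper around urllib.request; the point is that the check runs automatically after every deploy instead of relying on someone reading the error logs two weeks later.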
Tags: rant, live, errors, production, responsibility, process