Report rendering handler to prevent Linux from crashing on too much reporting at once
@@ -15,6 +15,32 @@
ISSUE: reporting failing on Linux under load
well, not failing exactly, but eating up all resources and bringing the server to a halt

possible solution for the reporting issue: don't start a new report until the last one has finished
i.e. it should wait until the last report has resolved before spinning up a new one
I think this would resolve the Linux issue where it gets overloaded because it's trying to do too many at once
maybe a setting for a hard limit on the number of reports currently running, or on the number of Chrome instances started by reporting (sketch below)
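rough sketch of the "wait for the last one" idea, assuming a Node/TypeScript server; queueReport and renderReport are made-up names for illustration:

    // Serialize report rendering: each new report chains onto the previous one,
    // so only one puppeteer/Chrome run is ever active at a time.
    let lastRun: Promise<unknown> = Promise.resolve();

    function queueReport<T>(render: () => Promise<T>): Promise<T> {
      // only start once the previous report has resolved (or failed)
      const result = lastRun.then(render, render);
      lastRun = result.catch(() => undefined);
      return result;
    }

    // usage: queueReport(() => renderReport(params))

a hard limit instead of strict serialization would just swap the promise chain for a count of in-flight reports, which is what the slot list further down does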

zombie processes?
https://developers.google.com/web/tools/puppeteer/troubleshooting#running_puppeteer_in_docker
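that page is mostly about Docker, but the same hygiene applies here; a sketch of launching and always tearing down so Chrome can't be left behind as a zombie (the flags are the ones commonly suggested for headless Chrome on a lean Linux box, not verified for ours):

    import puppeteer from 'puppeteer';

    // Always close the browser, even when rendering throws, so the whole
    // Chrome process tree goes away with it.
    async function renderPdf(html: string) {
      const browser = await puppeteer.launch({
        args: ['--no-sandbox', '--disable-dev-shm-usage'],
      });
      try {
        const page = await browser.newPage();
        await page.setContent(html, { waitUntil: 'networkidle0' });
        return await page.pdf({ format: 'A4' });
      } finally {
        await browser.close();
      }
    }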

todo: Test my process-killing change on Linux and ensure no more than one Chrome process is there at a time and that the test can't kill the server
make sure it works before proceeding
also, there was that no-single-process setting change to check into as well

todo:
ideally it should allow at least two or three instances at once, track them all, and return a "server busy, try again" response if no free slot is available
so it will need a list of instances with age and process id
report rendering at the client must handle the too-busy message and try again (rough sketch below)
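rough sketch of the slot list plus the busy response, assuming an Express-style route; MAX_SLOTS, the route path and renderReport are all placeholders:

    import express from 'express';

    const MAX_SLOTS = 3; // would come from settings
    interface Slot { pid: number | null; startedAt: number; }
    const slots: Slot[] = [];

    // placeholder for the real puppeteer-driven renderer
    async function renderReport(params: unknown): Promise<Buffer> {
      return Buffer.from('');
    }

    const app = express();
    app.post('/report', express.json(), async (req, res) => {
      if (slots.length >= MAX_SLOTS) {
        // no free slot: the client shows the busy message and retries after a delay
        res.status(503).json({ error: 'report server busy, try again shortly' });
        return;
      }
      const slot: Slot = { pid: null, startedAt: Date.now() };
      slots.push(slot); // pid gets filled in once Chrome has actually launched
      try {
        const pdf = await renderReport(req.body);
        res.type('application/pdf').send(pdf);
      } finally {
        slots.splice(slots.indexOf(slot), 1);
      }
    });

the age and pid on each slot are what the cleanup job further down would sweep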

AUTOMATED TESTING
NEEDS:
Acceptance "smoke" testing to ensure we can release confidently; "E2E" testing
@@ -31,19 +57,6 @@ todo: take a sample report and strip it to see where and what can be changed to
there are lots of indications online that, when using puppeteer to generate docs, too many elements slow it down and that some particular ones are killers for this

store process ids in RAM, and have a job that periodically goes through the list, looks for each process, and kills it if it's more than 30 seconds old?? (sketch below)
can store them in RAM because the list is ephemeral and closing the server would kill those processes anyway
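rough sketch of that sweep, assuming we record each Chrome pid (e.g. from puppeteer's browser.process()) when a report starts:

    // In-memory list of launched Chrome processes; nothing needs to survive a
    // server restart because restarting kills the child processes anyway.
    interface TrackedProcess { pid: number; startedAt: number; }
    const tracked: TrackedProcess[] = [];

    const MAX_AGE_MS = 30_000; // the "more than 30 seconds old" rule

    setInterval(() => {
      const now = Date.now();
      for (let i = tracked.length - 1; i >= 0; i--) {
        if (now - tracked[i].startedAt > MAX_AGE_MS) {
          try {
            process.kill(tracked[i].pid, 'SIGKILL'); // throws if the pid is already gone
          } catch {
            // already exited; just drop it from the list
          }
          tracked.splice(i, 1);
        }
      }
    }, 5_000); // sweep every few seconds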

investigate: hard cap on the number of appointments brought back, controlled by the client in settings; otherwise it's unusable

@@ -455,7 +468,7 @@ todo: 2 make priority / woitem / wostatus colors distinct from each other
todo: 2 Server metrics don't seem to tell the story of the overall server
i.e. when reporting is jammed up on Linux and the server manager in DO shows 100% CPU, the AyaNova server ops metrics show a much smaller number and still respond
so it seems it's only reporting on AyaNova's share of what is happening, but if the overall server is bottlenecking we need to show that as well (sketch below)
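not sure what the ops metrics read today, but assuming the reporting side runs in Node, the os module can report whole-machine numbers rather than just our own process; a sketch of what could sit alongside the existing metrics:

    import os from 'os';

    // Whole-box view: load average and memory for the machine, not just this process.
    function systemLoad() {
      const [load1m] = os.loadavg();   // 1-minute load average (meaningful on Linux)
      const cores = os.cpus().length;
      return {
        loadAverage1m: load1m,
        loadPerCore: load1m / cores,   // > 1 means the box itself is saturated
        freeMemBytes: os.freemem(),
        totalMemBytes: os.totalmem(),
      };
    }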

todo: 2 make sample data appointments suitable for evaluation of the calendar
spread them throughout the day
right now a bunch are extremely close, if not in the same time window, for the same WO (rough sketch below)
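rough sketch of spreading them out, with made-up names and an assumed 8:00-17:00 working day:

    interface SampleAppointment { workOrderId: number; start: Date; end: Date; }

    // Spread one sample appointment per work order evenly across the working day
    // instead of stacking them in the same time window.
    function spreadAppointments(workOrderIds: number[], day: Date): SampleAppointment[] {
      const dayStart = 8;   // 8:00
      const dayEnd = 17;    // 17:00
      const slotHours = (dayEnd - dayStart) / workOrderIds.length;
      return workOrderIds.map((workOrderId, i) => {
        const startHour = dayStart + i * slotHours;
        const start = new Date(day);
        start.setHours(Math.floor(startHour), Math.round((startHour % 1) * 60), 0, 0);
        const end = new Date(start.getTime() + 60 * 60 * 1000); // one-hour appointments
        return { workOrderId, start, end };
      });
    }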