@@ -252,22 +252,7 @@ TODO: 1 BETA DOCS:
---
- 1 issue: New reporting process is causing linux server to reboot with large reports

AyaNova is really fast at compiling the data and can produce more data, in well under 1 minute, than a 1gb droplet can render

This causes chromium to eat up all the ram quickly and somehow (not yet known how) triggers ayanova to reboot without any log or error

might be some kind of linux protection thing (OOM killer?), not sure

I played with some settings and now it appears it doesn't just crash the linux server (at least I *think* so, getting bleary eyed with this) but instead goes to swap and then drags on forever and ever

the moment it has to swap it pretty much stands still

So now, with the current settings for chromium, the timeout can once again take hold

on a 1gb droplet, setting it to 1m max render time seems to catch all the out-of-memory swap conditions

UPDATE: probably the no-shm command line parameter causing it to run out of ram, with no swap happening to catch it, is what crashes it
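If the renderer is headless Chromium (the notes read that way), the "no-shm" parameter above likely corresponds to Chromium's `--disable-dev-shm-usage` flag, which moves scratch allocations out of the small `/dev/shm` tmpfs and into regular memory/disk. A sketch of the kind of invocation involved; the binary name, port, output path, and report URL are all made up for illustration:

```shell
# Hypothetical headless-Chromium render invocation for a report.
# --disable-dev-shm-usage keeps Chromium off the (often tiny) /dev/shm
# tmpfs; on a 1gb droplet with no swap that can shift where the memory
# pressure lands, which would fit the "no-shm ... crashes it" note above.
chromium --headless --disable-gpu \
  --disable-dev-shm-usage \
  --print-to-pdf=/tmp/report.pdf \
  "http://localhost:5000/report/dispatch"
```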
todo: figure out the appropriate timeout and range that a 2gb droplet can handle (and maybe larger ones as well for completeness)

testing with service wo's dispatch report

See if there can be an "auto" setting for report rendering that will just limit the time based on ram

or maybe an override that will ramp down to 1m if it sees less than 2g free ram or some value (actually this makes the most sense right now: just limit if short of ram, otherwise let them set it at will)

Find a sane default to go with; 5m is too low for my windows dev box and 1800 records, which isn't swapping, just time consuming, so the default needs to be higher for that case
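The "ramp down to 1m if less than 2g free ram" idea above can be sketched as a tiny shell helper. The 2g threshold and the 1m clamp are the note's own numbers; the 5m fallback, the function name, and reading `MemAvailable` from `/proc/meminfo` are assumptions:

```shell
# Pick a max render timeout from available RAM, per the "ramp down to 1m
# if under 2g free" idea. Takes MemAvailable in kB as an argument so the
# helper can be exercised without reading /proc/meminfo directly.
pick_render_timeout() {
  avail_kb=$1
  if [ "$avail_kb" -lt 2097152 ]; then  # under 2 GiB available
    echo "1m"                           # clamp to the 1gb-droplet-safe limit
  else
    echo "5m"                           # otherwise a longer (tentative) default
  fi
}

# Live usage on the server would be something like:
#   pick_render_timeout "$(awk '/^MemAvailable/ {print $2}' /proc/meminfo)"
pick_render_timeout 1048576   # 1 GiB free -> 1m
pick_render_timeout 4194304   # 4 GiB free -> 5m
```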
todo: is there memory that can be freed up in getreportdata, since it's caching a lot during generation that could instead be used by the rendering engine?

todo: are the reports fully efficient in memory usage and allocation?
@@ -275,30 +260,6 @@ TODO: 1 BETA DOCS:
am I sending the data twice into the page? (thought I saw that somewhere)

examine the dispatch report and see if there are efficiencies to be had in memory usage / performance

TESTING INFO:
if I select 500 wo it *will* try to complete but times out, so no crash there

memory definitely hits 99% on this one but doesn't crash, just times out

bumped linux server to 2gb ram and was easily able to run 500 pages, but it took 78% of ram to do it (I think)

1000 recs timed out at 5 minutes but didn't crash the server!!

1gb devops can run 200 pages of service wo dispatch in about 30 seconds without hitting swap

anything higher triggers swap

other reports are fine and go fast

I'm thinking a 1m render timeout setting is the appropriate limit for a 1gb server
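The "anything higher triggers swap" observations above were presumably eyeballed; one objective way to tell whether a given run hit swap is to diff the kernel's pages-swapped-out counter across the run. A minimal sketch, assuming a Linux box with `/proc/vmstat` (the helper itself is pure arithmetic so it can be checked on captured numbers):

```shell
# Did a report run hit swap? Diff the pswpout (pages swapped out) counter
# from /proc/vmstat taken before and after the run; a nonzero delta means
# the run pushed the box into swap.
swapped_pages() {
  echo $(($2 - $1))   # after - before
}

# Live usage:
#   b=$(awk '/^pswpout/ {print $2}' /proc/vmstat)
#   ...run the report...
#   a=$(awk '/^pswpout/ {print $2}' /proc/vmstat)
#   swapped_pages "$b" "$a"
swapped_pages 100 164   # -> 64
```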
windev release run did 1000 recs in under 4 minutes, I think 3:40 or something

same run with 1800 recs: 1:38 to query and XX total to final render (timed out after 5:43, still churning away)

DO I HAVE A SWAP FILE AT ALL?? ON DO DROPLETS?

no - https://www.digitalocean.com/community/tutorials/how-to-add-swap-space-on-ubuntu-20-04
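To answer the swap-file question on any given box without trusting the docs, `SwapTotal` in `/proc/meminfo` is 0 when no swap is configured. A small helper that parses a meminfo-format stream (so it can be checked on a snapshot rather than a live `/proc`); the creation commands in the comment are the linked tutorial's, roughly:

```shell
# Report SwapTotal (kB) from meminfo-format input; 0 means no swap is
# configured, which per the linked tutorial is the DO droplet default.
swap_total_kb() {
  awk '/^SwapTotal:/ {print $2}'
}

# Live usage:  swap_total_kb < /proc/meminfo
# The tutorial's fix, roughly (needs root):
#   fallocate -l 1G /swapfile && chmod 600 /swapfile
#   mkswap /swapfile && swapon /swapfile
printf 'SwapTotal:       0 kB\n' | swap_total_kb   # -> 0
```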
- 1 todo: add the caching technique to *all* the other getreportdata methods as was done with workorder

- 1 todo: check reporting code, is caching being freed before report generation starts?
@@ -570,6 +531,13 @@ todo: Look into this error, was doing report rendering testing heavily on server
at AyaNova.Biz.JobsBiz.GetReadyJobsExclusiveOnlyAsync() in C:\data\code\raven\server\AyaNova\biz\JobsBiz.cs:line 30
at AyaNova.Biz.JobsBiz.ProcessJobsAsync() in C:\data\code\raven\server\AyaNova\biz\JobsBiz.cs:line 247
todo: 2 metrics are useless bullshit

underreporting completely, doesn't show actual cpu usage

how to tell if it's swapping? that's critical; how close to all memory being used up?

should report percentage of real memory used, percentage of swap file used, and percentage of entire machine cpu used

right now it seems to be showing maybe only the ayanova process itself, but that's not helpful when doing ops metrics tasks looking for bottlenecks

Also, add a shorter time frame; 6 hours as the smallest is bullshit, make it 1 hour
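The memory percentages asked for above can be derived from `/proc/meminfo`; a sketch in awk, fed a captured snapshot here so it can be checked (machine-wide cpu percent would need `/proc/stat` sampling over an interval and is left out). `MemAvailable` is the kernel's estimate of memory reclaimable without swapping, which is the right input for the "how close to swapping" question:

```shell
# Percent of real memory and of swap in use, from meminfo-format input.
mem_pct() {
  awk '
    /^MemTotal:/     { mt = $2 }
    /^MemAvailable:/ { ma = $2 }
    /^SwapTotal:/    { st = $2 }
    /^SwapFree:/     { sf = $2 }
    END {
      printf "mem_used_pct=%d\n", (mt - ma) * 100 / mt
      if (st > 0) printf "swap_used_pct=%d\n", (st - sf) * 100 / st
      else        print  "swap_used_pct=n/a (no swap configured)"
    }'
}

# Live usage:  mem_pct < /proc/meminfo
printf 'MemTotal: 1000 kB\nMemAvailable: 250 kB\nSwapTotal: 0 kB\nSwapFree: 0 kB\n' | mem_pct
# -> mem_used_pct=75
#    swap_used_pct=n/a (no swap configured)
```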
todo: 2 Log levels are all over the place at the server

After attempting to diagnose an issue I'm leaning towards saving trace for when we need to see sql, and putting all the other shit above trace completely

the sql kind of eats up all the space; I guess that's useful, but it's so huge it's concerning and hard to deal with