2021-12-23 00:54:31 +00:00
parent f2b5fcdafe
commit 5978e001c0


@@ -243,55 +243,22 @@ TODO: 1 BETA DOCS:
- 1 todo: need to change the report rendering timeout system into an overall time limit system
There is no unlimited timeout anymore; instead, the report rendering timeout is added to the current timestamp when the report request is received.
Add guards in the code to check whether the expiry time has been reached and, if so, send back the "too many records" or "took too long" message that the client can interpret.
This timeout applies to the entire rendering process, from request to return of the link.
GetReportData should receive the expiry time, check it in its loops, throw a specific exception if it's reached, and clean up accordingly.
The report biz should handle the timeout and also have its own timeout guards between render steps that are time consuming.
- Clear error message at the client: "select less data or increase timeout"
Check the current error; it might need to handle two scenarios: no slots, and "timeout: select less data".
- Fix up the docs, as they currently say there is no timeout if slots are open, which is not how it should work at all.
- Change the hard cap to something logical; consider how much memory can be consumed in how much time. Maybe still minutes at first and see where it takes us?
- The default should be a sane level for most purposes so that people do not run into it casually.
- Keep max instances and config, but reword it a bit, as the timeout and related settings affect what actually happens.
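The expiry-guard pattern described above can be sketched roughly like this (Python for illustration; the real code is C#, and names like get_report_data / ReportTimeoutError are stand-ins):

```python
import time

class ReportTimeoutError(Exception):
    """Raised when the overall report time limit is exceeded."""

def make_deadline(timeout_seconds):
    # The expiry is fixed once, when the report request is received.
    return time.monotonic() + timeout_seconds

def check_deadline(deadline):
    # Guard to call between render steps and inside data-gathering loops.
    if time.monotonic() >= deadline:
        raise ReportTimeoutError("select less data or increase timeout")

def get_report_data(rows, deadline):
    # The data-gathering loop receives the expiry time and bails out
    # mid-loop with a specific exception; the caller cleans up.
    out = []
    for row in rows:
        check_deadline(deadline)
        out.append(row)
    return out
```

Because the deadline is an absolute timestamp rather than a per-step timeout, it naturally caps the entire pipeline from request to returned link.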
- 1 todo: timeout does not seem to be working; try to test it, Joyce is having issues.
She's using it heavily and it's constantly locking up, furiously processing a stuck report seemingly without end.
There needs to be an overall sanity timeout for this, and I need to test it with some reports that just endlessly spin on some wait(true)-type loop.
research:
Actually, it seems like maybe it's not the rendering process but the query itself that is slow, because once the query is done the rendering is very fast.
The bagcheck happens before the query runs, so the real problem may be the speed of the query and needing a way to time out the query or cancel it if it takes too long.
Currently looking at the Workorderbiz GetReportData method to see why it's taking 20+ seconds to process 100 workorders, which seems crazy slow:
Workorderbiz::GetReportData took ms: 26239
NEXT UP: break out the fetch from the processing, timing-wise, and see which part is slowing it down.
Workorderbiz::GetReportData timings, four runs each (ms):
Before any changes:                            22987, 19131, 28449, 25477
Remove populate viz:                            4174,  3659,  3811,  3878  (DING DING DING!)
Only remove woitem populate:                    4845,  5026,  4955,  4926
Remove all *except* woitem populate:           19413, 19665, 29741, 20944
Remove woitempopulate first 5 grandchildren:   18954, 19925, 15410, 15513
Remove woitempopulate bottom 5 grandchildren:  12142, 14650, 11417, 11710
Cache ALL THE THINGS!
- 1 todo: add a timeout to the GetReportData methods so that they can bail if they take too long, maybe tied into the report timeout??
This is necessary because the overwhelming majority of time is spent gathering the report data, not in the actual report rendering.
- 1 todo: add the caching technique to *all* the other GetReportData methods, as was done with workorder.
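The workorder caching technique, as I read the notes above, amounts to memoizing the expensive per-row populate lookup; a rough sketch (Python for illustration, fetch_fn is a hypothetical stand-in for the viz/woitem populate call):

```python
def make_cached_fetcher(fetch_fn):
    # Memoize the expensive populate call so a report over 100 workorders
    # hits the slow path once per distinct key instead of once per row.
    cache = {}
    def cached(key):
        if key not in cache:
            cache[key] = fetch_fn(key)
        return cache[key]
    return cached
```

The cache should live only for the duration of one report render so it never serves stale data across requests.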
@@ -494,6 +461,8 @@ todo: 1 When there is a rendering issue with chromium browser startup the server
This is because it was written expecting any error to be a template error, not a chromium startup error, so need to look there in the exception handler.
Would rather not log report template issues to the server log, but anything else structural should be logged.
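The distinction the exception handler needs, per the note above, is between template-authoring errors (returned to the client, not logged) and structural failures like chromium failing to start (logged); a sketch with hypothetical exception names:

```python
class TemplateError(Exception):
    """The report template itself is bad: a report-author problem."""

class BrowserStartupError(Exception):
    """Chromium failed to start: a structural server problem."""

def classify_render_error(exc, server_log):
    # Template mistakes go back to the client without polluting the
    # server log; anything structural is logged for the admin.
    if isinstance(exc, TemplateError):
        return f"template error: {exc}"
    server_log.append(f"render failure: {exc}")
    return "report rendering failed, contact your administrator"
```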
todo: 2 wrap this error in a debug build block before release:
2021-12-22 16:19:07.0290|ERROR|AyaNova.Biz.TranslationBiz|********* GetSubsetAsync problem: Duplicate keys: WorkOrderItemPartQuantity)
todo: 2 Test with an expired key: can the superuser log in and no one else?? **CRITICAL**
Awaiting a raven license key generator first; currently do not have one, 404's in rockfish!!
@@ -830,6 +799,11 @@ BUILD 8.0.0-beta.0.6 CHANGES OF NOTE
- new docs folder hosting the current version of the v8 docs: https://www.ayanova.com/docs/
This is for our use, so we don't need to spin up a server to view the docs, but it will likely be the permanent URL for the v8 docs once all is done.
- fixed query bug in workorderlist: the AGE column was incorrectly calculated
- Added caching to workorder report viz field resulting in a 160% report rendering speed improvement in a test of 100 workorders
- fixed "Service Dispatch" report template that had a duplicate translation key in it: "WorkOrderItemPartQuantity"
(note you can check for this during report development by looking at the server log which will show the following error when the report is rendered:
2021-12-22 16:19:07.0290|ERROR|AyaNova.Biz.TranslationBiz|********* GetSubsetAsync problem: Duplicate keys: WorkOrderItemPartQuantity)