This commit is contained in:
2020-04-30 13:56:35 +00:00
parent 7dca38ddc2
commit 06090a413a


@@ -69,152 +69,26 @@ CURRENT TODOs
todo: WIKI insert image should add an extra linefeed before and after, because the image fails to show at all if it follows something like <br> even though it appears to be on the next line
- tl;dr: ensure a blank line before and after any url
DBDUMP
todo: Turn dbdump into exporter direct to api
- "V8 Export"
- need login page
- checks for presence of map, if found prompts to resume or restart
- checks for data at destination and if a new export prompts to erase destination
- need route that allows erase any db
- need to track exported objects in a map in v7 and clear on demand for fresh export
- so can resume an export and can have the map for exporting
- replicate v7 code so far involved in import
https://docs.microsoft.com/en-us/aspnet/web-api/overview/advanced/calling-a-web-api-from-a-net-client
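The resume/track-in-a-map idea above could be sketched like this (Python for illustration only; the real code would live in the v7 .NET plugin, and the map filename and upload call are assumptions, not the actual v8 api):

```python
import json
import os

EXPORT_MAP = "export_map.json"  # hypothetical on-disk map of already-exported IDs

def load_map(path=EXPORT_MAP):
    # Resume support: if a map from a previous run exists, pick up where we left off
    if os.path.exists(path):
        with open(path, encoding="utf-8") as f:
            return set(json.load(f))
    return set()

def save_map(done, path=EXPORT_MAP):
    with open(path, "w", encoding="utf-8") as f:
        json.dump(sorted(done), f)

def export_objects(objects, upload, path=EXPORT_MAP):
    """Send objects one at a time, skipping anything already in the map."""
    done = load_map(path)
    for obj in objects:
        if obj["ID"] in done:
            continue  # confirmed exported on a previous run, don't re-upload
        upload(obj)  # e.g. one POST per object to the v8 api
        done.add(obj["ID"])
        save_map(done, path)  # persist after every object so a crash loses nothing
    return done
```

Clearing the map on demand (deleting the file) would give the "fresh export" behaviour.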
NEW PLAN:
Export directly via api usage? - Before I do anything, take a long think about making the dbdump do the export straight via API to the server
PROS
People can't use the dbdump to move to other software!
No code at the server needs to handle v7 shit or be aware of it in any way
export can proceed one object at a time and not involve one massive pile of data at once
This would make sense, be clean and would make it way easier at the server end
I can use the entire v7 api for anything related to exporting
CONS
lot of coding needed to be done in v7 laptop, kind of uggo
server will need to be in export mode though or ops only in order to avoid issues
at client end would need to track what was exported with a dictionary to confirm the existence before re-uploading
could this be way slower than a local import from export files? (does it matter?)
THINGS
systemwide readonly mode during import (ops) so users can view the progress without making changes to the objects at the RAVEN end?
unless it's so fast that it's irrelevant :)
Special route at server for export?
Need authorization stuff, who does it? Ops wouldn't normally have rights to biz objects so it would need a bizadminfull I guess
Kind of fun working with the api from a plugin like that I guess
Should it handle stop and restart scenarios, and fixup as it goes along for resiliency?
Once I get rolling this way it wouldn't be much more work to do other objects
check maximum file count in a single folder, may need to dump things out differently due to os limitations
what if someone is exporting 20k parts (very likely)?
overall Zip is the weak link I think and the ziplib I'm using is hella old so just sidestep that entirely
Do not zip the entire export; leave that up to the user, but *do* zip individual files during export
at the import end, accept a zip file and unzip it before importing
change the import end to work with a folder and files, not a zip archive
Modify import to do the file attachments with the objects at same time, maybe provide a list or something
dump the wiki content *with* the object in question as a string property (use jextra or whatever as necessary)
dump the files differently:
- dump them one at a time zipped in an archive with the file name set to the file ID corresponding to the same entry in the jextra section of attachments
- store them in same sub folder with same name as json file so that we don't end up with too many files in one folder
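A rough sketch of that per-file dump layout (Python for illustration; the folder and naming conventions here are assumptions): each attachment is zipped on its own, named by its file ID, inside a subfolder matching the object's json file, so no single folder accumulates too many entries:

```python
import os
import zipfile

def dump_attachment(export_root, object_name, file_id, data):
    """Zip one attachment individually under a subfolder named after the object.

    Layout: <export_root>/<object_name>/<file_id>.zip, alongside <object_name>.json,
    so the per-folder file count is bounded by attachments-per-object, not the
    whole export, sidestepping os limits on files per folder.
    """
    folder = os.path.join(export_root, object_name)
    os.makedirs(folder, exist_ok=True)
    zip_path = os.path.join(folder, f"{file_id}.zip")
    # DEFLATE each file on its own instead of zipping the entire export
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.writestr(file_id, data)
    return zip_path
```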
Change export to dump files first and keep a list of all the files it dumps, then put that info in jextra as attachments collection like this:
{
  "DefaultLanguage": "Custom English",
  "DefaultServiceTemplateID": "00000000-0000-0000-0000-000000000000",
  "UserType": 1,
  "Active": true,
  "ClientID": "00000000-0000-0000-0000-000000000000",
  "HeadOfficeID": "00000000-0000-0000-0000-000000000000",
  "MemberOfGroup": "ff0de42a-0ea0-429b-9643-64355703e8d1",
  "Created": "2005-03-21 7:05 AM",
  "Modified": "2015-09-15 12:22 PM",
  "Creator": "2ecc77fc-69e2-4a7e-b88d-bd0ecaf36aed",
  "Modifier": "00a267a0-7e9f-46d3-a5c1-6cec32d49bab",
  "ID": "00a267a0-7e9f-46d3-a5c1-6cec32d49bab",
  "FirstName": "Test",
  ...
  "TimeZoneOffset": null,
  "jextra": {
    "hexaScheduleBackColor": "#000000",
    "attachments": [{
      "name": "image13.png",
      "created": "2020-04-26T13:59:22.147-07:00",
      "creator": "2ecc77fc-69e2-4a7e-b88d-bd0ecaf36aed",
      "mimetype": "image/png",
      "id": "3af26b2e-55cf-4709-9fb3-c94f8d30da91"
    }, etc
    ]
  }
}
(And remove these parts of the file meta that aren't required:
"size": 11111,
"ayafiletype": 1,
"rootobjectid": "d4afcaa2-152a-4e2f-be31-bce64e55d9f9",
"rootobjecttype": 72
)
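Building that jextra attachments collection could look roughly like this (Python for illustration; the real exporter would be .NET code): keep only the needed keys and drop size / ayafiletype / rootobjectid / rootobjecttype:

```python
def build_attachments(file_meta_list):
    """Reduce v7 file metadata to the jextra 'attachments' entries kept at export.

    Keys like size, ayafiletype, rootobjectid and rootobjecttype are dropped,
    since they aren't required at the import end.
    """
    keep = ("name", "created", "creator", "mimetype", "id")
    return [{k: m[k] for k in keep if k in m} for m in file_meta_list]
```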
use async feedback to keep user up to date on what's happening
- a section of live action separate from the list of tasks showing the exact current operation, including files being dumped, as that's slow
todo: after attachments - DATADUMP - v7 wiki to RAVEN markdown
- https://rockfish.ayanova.com/default.htm#!/rfcaseEdit/3468
- Need to export images and attached docs as attachments
todo: dB DUMP needs async feedback as it's doing its thing, and needs to be cancellable
- also it should show when it's exporting files etc
todo: Datadump EXPORT and RAVEN IMPORT of all attachment / wiki stuff
- v7 attached files, internal documents all handled
- Code it now
todo: stub primitive workorder and client at server (all attachable objs?) so can test export with big data now?
todo: dbdump v7 DOCUMENTS / linked files feature
- users might have a folder of attached docs source files that they really want to keep and not some others that are less important?
- Maybe need a single destination object that just holds the attached docs in order to prevent dupes?
todo: linked docs exported to attached files as a decision by the exporter; if yes then it attempts it, but accepts missing or unlinked files
- Maybe an attached docs special folder that they can dump files into that are important and if an object has a link to them it can look in there to see if it's present or not?
- This will need some thinking to avoid duplicate imports and to determine what the user wants to do in reality
- there are potential issues around making attached docs into locally stored ones such as duplicating unnecessarily etc
- Attachments are allowed to be duplicates, there is no reference counting system or anything so ...
- Needs a good think, maybe it's not something we can support?
- Or, maybe it needs to be something done at the v7 end first, like users need to choose what to keep or not somehow?
- Maybe it needs to just make a link in the wiki to the docs?
- import maybe *looks* for the linked docs in locations provided by user or expects to be in location provided
- will automatically scan entire folder structure, get all file names and then work on importing / attaching
- so user just needs to ensure they are *somewhere* in a folder specified so that AyaNova can find them and import them
- this is an alternative to putting them into the dbdump export file: it exports the links, and the import takes the links from the dbdump plus the location to search for the files and does the rest.
- Or maybe dbdump looks for the linked files, confirms their existence, checksums them and then uses that as the basis to re-find them at the import end?
- this way even if the name is different it doesn't matter, it will find them.
- this helps because there could be many dupes linked to the same file etc.
Actually that's an issue: what if they were trying to save space, so had ONE copy of a file linked from all over the place, and then v8 comes along and replicates it over and over again?
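The checksum idea above could be sketched like so (Python for illustration; SHA-256 is an assumption for the hash): export records a checksum per linked file, and import scans the user-provided folder tree once, collapsing any number of links to the same file into a single entry, so renames don't matter and a shared file doesn't get replicated:

```python
import hashlib
import os

def checksum(path):
    """SHA-256 of a file's contents; the basis for re-finding it at import."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def index_folder(root):
    """Scan an entire folder tree and map checksum -> first path found.

    Many v7 links to the same physical file collapse to one entry, so v8
    can attach it once instead of duplicating it for every link.
    """
    index = {}
    for dirpath, _, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            index.setdefault(checksum(path), path)
    return index
```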
todo: clean out import v7 code / routes at server,
but keep job infrastructure related code for future reference?
---------------------------------------------
todo: THIS! At this point, upload to dev server and thoroughly test with devices, it seems a bit slow at times
- Might need to hide attachments until user clicks on something to reveal, as it seems odd to fetch them on every open
todo: careful and thorough PERF tests remotely and local
- https://www.digitalocean.com/community/tutorials/how-to-use-chrome-dev-tools-to-find-performance-bottlenecks?utm_source=DigitalOcean_Newsletter
todo: after attachments - integration tests update
@@ -228,6 +102,17 @@ todo: Can't hide custom fields on widget form? (no code to check if it's hidden?
todo: Look at attachment saving code on server, should it zip?
- pros and cons?
todo: change trial detection route that client first hits
- make it a different controller and renamed to something like server ping or "hello" or something friendly and useful
- maybe the route that gets Notifications