CURRENT TODOs
@@@@@@@@@@@ ROADMAP STAGE 2:
DBDUMP
tentative plan:
wiki export isn't monolithic; instead it goes with object export and is exported out to the objects folder structure, since it needs to be imported that way at the same time
use WikiPageExistanceChecker on each object that is wikiable in v7 to check, and separately export the wiki content as well
wiki pages in v7 are in HTML format, so we just need an HTML-to-RTF converter
internal images will be a lot of fuckery
file attachments must be done simultaneously in the same process, as they go hand in hand
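Roughly how that per-object pass could look (just a sketch; everything except WikiPageExists, which is the real v7 checker quoted further down, is a made-up helper name):
    // exportRoot is the user-chosen output folder; BizObject stands in for the v7 base type
    foreach (BizObject obj in GetWikiableObjects())
    {
        string objectFolder = Path.Combine(exportRoot, obj.TypeName, obj.ID.ToString());
        Directory.CreateDirectory(objectFolder);        // System.IO

        ExportObject(obj, objectFolder);                // object JSON into its own folder

        if (WikiPageExists(obj.ID))                     // only export wiki where one exists
            ExportWikiPage(obj, objectFolder);          // wiki goes out with the object

        ExportAttachments(obj, objectFolder);           // files handled in the same pass
    }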
NEW PLAN:
Export directly via the API? - Before I do anything, take a long think about making the dbdump do the export straight to the server via the API
PROS
No code at the server needs to handle v7 shit or be aware of it in any way
export can proceed one object at a time and not involve one massive pile of data at once
This would make sense, be clean, and make things way easier at the server end
I can use the entire v7 API for anything related to exporting
CONS
a lot of coding needs to be done on the v7 laptop, kind of uggo
the server will need to be in export mode, or ops only, in order to avoid issues though
at the client end I'd need to track what was exported with a dictionary, to confirm existence before re-uploading (see the sketch after this list)
THINGS
Special route at server for export?
Need authorization stuff, who does it? Ops wouldn't normally have rights to biz objects, so it would need a bizadminfull I guess
Kind of fun working with the api from a plugin like that I guess
Should it handle stop and restart scenarios, and fixup as it goes along for resiliency?
Once I get rolling this way it wouldn't be much more work to do other objects
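If the API route wins, the client loop could be something like this (a sketch; the endpoint path, token handling, and DumpObjectJson() are assumptions, not a real server route):
    // System.Net.Http / System.Text; runs inside an async method
    var exported = new Dictionary<Guid, bool>();        // client-side record of what already went up

    using (var http = new HttpClient { BaseAddress = new Uri(serverUrl) })
    {
        http.DefaultRequestHeaders.Add("Authorization", "Bearer " + bizAdminToken);

        foreach (BizObject obj in objectsToExport)
        {
            if (exported.ContainsKey(obj.ID))           // confirm existence before re-uploading
                continue;

            var body = new StringContent(DumpObjectJson(obj), Encoding.UTF8, "application/json");
            HttpResponseMessage resp = await http.PostAsync("api/export/" + obj.TypeName, body);
            resp.EnsureSuccessStatusCode();

            exported[obj.ID] = true;                    // persist this to survive stop/restart
        }
    }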
internal static bool WikiPageExists(Guid SourceObjectID)
{
    return ((WikiPageExistanceChecker)DataPortal.Fetch(new Criteria(SourceObjectID))).mExists;
}
check the maximum file count in a single folder; may need to dump things out differently due to OS limitations
what if someone is exporting 20k parts (very likely)?
overall, Zip is the weak link I think, and the ziplib I'm using is hella old, so just sidestep that entirely
Do not zip the entire export; leave that up to the user, but *do* zip individual files during export
at the import end, accept a zip file and unzip it before importing
change the import end to work with a folder and files, not a zip archive
Modify import to do the file attachments with the objects at the same time, maybe provide a list or something
dump the wiki content *with* the object in question as a string property (use jextra or whatever as necessary) - relevant v7 method:
/// <summary>
/// Gets content with AyaNova RI ready AyaImage: tags and AyaNova: urls
/// </summary>
/// <param name="bForEditing">true = don't build file list</param>
/// <param name="baseSiteURL"></param>
/// <returns></returns>
public string GetContentAsRIReadyHTML(bool bForEditing, string baseSiteURL)
dump the files differently:
- dump them one at a time, each zipped into its own archive, with the file name set to the file ID of the matching entry in the jextra attachments section
- store them in the same subfolder, with the same name as the JSON file, so that we don't end up with too many files in one folder
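Per-file zipping plus a folder-walking import could be as simple as this System.IO.Compression sketch (variable and helper names are illustrative):
    // zip one attachment into its own archive, named after the file ID,
    // in the same subfolder as the object's JSON
    string zipPath = Path.Combine(objectFolder, fileId + ".zip");
    using (FileStream zipStream = File.Create(zipPath))
    using (var archive = new ZipArchive(zipStream, ZipArchiveMode.Create))
    {
        ZipArchiveEntry entry = archive.CreateEntry(originalFileName, CompressionLevel.Optimal);
        using (Stream entryStream = entry.Open())
            entryStream.Write(fileBytes, 0, fileBytes.Length);
    }

    // import end just walks the folder tree, no monolithic archive involved
    foreach (string objectJson in Directory.EnumerateFiles(importRoot, "*.json", SearchOption.AllDirectories))
        ImportObjectWithAttachments(objectJson);        // hypothetical import helper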
Change export to dump files first and keep a list of all the files it dumps, then put that info in jextra as an attachments collection, like this:
{
    "DefaultLanguage": "Custom English",
    "DefaultServiceTemplateID": "00000000-0000-0000-0000-000000000000",
    "UserType": 1,
    "Active": true,
    "ClientID": "00000000-0000-0000-0000-000000000000",
    "HeadOfficeID": "00000000-0000-0000-0000-000000000000",
    "MemberOfGroup": "ff0de42a-0ea0-429b-9643-64355703e8d1",
    "Created": "2005-03-21 7:05 AM",
    "Modified": "2015-09-15 12:22 PM",
    "Creator": "2ecc77fc-69e2-4a7e-b88d-bd0ecaf36aed",
    "Modifier": "00a267a0-7e9f-46d3-a5c1-6cec32d49bab",
    "ID": "00a267a0-7e9f-46d3-a5c1-6cec32d49bab",
    "FirstName": "Test",
    ...
    "TimeZoneOffset": null,
    "jextra": {
        "hexaScheduleBackColor": "#000000",
        "attachments": [{
            "name": "image13.png",
            "created": "2020-04-26T13:59:22.147-07:00",
            "creator": "2ecc77fc-69e2-4a7e-b88d-bd0ecaf36aed",
            "mimetype": "image/png",
            "id": "3af26b2e-55cf-4709-9fb3-c94f8d30da91"
        }, etc
        ]
    }
}
(And remove these parts of the file meta that aren't required:
    "size": 11111,
    "ayafiletype": 1,
    "rootobjectid": "d4afcaa2-152a-4e2f-be31-bce64e55d9f9",
    "rootobjecttype": 72
)
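Building that jextra collection while the files are dumped might look like this (Newtonsoft.Json sketch; AttachedFile and its properties are assumptions):
    var attachments = new JArray();                     // Newtonsoft.Json.Linq

    foreach (AttachedFile file in dumpedFiles)          // the list kept during the file pass
    {
        attachments.Add(new JObject
        {
            ["name"] = file.Name,
            ["created"] = file.Created.ToString("o"),
            ["creator"] = file.CreatorID,
            ["mimetype"] = file.MimeType,
            ["id"] = file.ID
            // size / ayafiletype / rootobjectid / rootobjecttype deliberately left out
        });
    }

    jextra["attachments"] = attachments;                // merged into the object's jextra section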
use async feedback to keep user up to date on what's happening
- a section of live action, separate from the list of tasks, showing the exact current operation, including files being dumped, as that's slow
todo: after attachments - DATADUMP - v7 wiki to RAVEN markdown
- https://rockfish.ayanova.com/default.htm#!/rfcaseEdit/3468
- Need to export images and attached docs as attachments
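- one candidate for the HTML-to-markdown step is the ReverseMarkdown NuGet package (an assumption, not a decision); images would still need their own pass to become attachments:
    var converter = new ReverseMarkdown.Converter();
    string html = wikiPage.GetContentAsRIReadyHTML(false, baseSiteURL);  // v7 method quoted above; wikiPage is illustrative
    string markdown = converter.Convert(html);
    // AyaImage: tags and AyaNova: urls in the result still have to be rewritten
    // to point at the exported attachment IDs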
todo: DBDUMP needs async feedback as it's doing its thing, and to be cancellable
- also it should show when it's exporting files etc
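The feedback/cancellation plumbing could hang off IProgress<string> and a CancellationToken (a sketch; helper names are made up):
    public async Task RunDumpAsync(IProgress<string> progress, CancellationToken cancel)
    {
        foreach (BizObject obj in objectsToExport)
        {
            cancel.ThrowIfCancellationRequested();          // makes the dump cancellable
            progress.Report("Exporting " + obj.TypeName + " " + obj.ID);
            await ExportObjectAsync(obj);

            foreach (AttachedFile file in obj.Attachments)
            {
                cancel.ThrowIfCancellationRequested();
                progress.Report("Dumping file " + file.Name);   // file dumps are the slow part
                await DumpFileAsync(file);
            }
        }
    }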
todo: Datadump EXPORT and RAVEN IMPORT of all attachment / wiki stuff
- v7 attached files, internal documents all handled
- Code it now