commit 2f6f005ff0 (parent c54d09b5da)
2018-10-02 00:06:25 +00:00
2 changed files with 21 additions and 5 deletions

@@ -33,11 +33,23 @@ IMMEDIATE ITEMS:
- https://gist.github.com/ruckus/5718112
- Can it be sped up? Use the huge-dataset test to find out
- Right now a full unconstrained search done *while running all tests at once* is taking 38 seconds!!
- I need to play with the indexes on the Search tables to get faster results, but first I need to generate a similarly huge dataset to know whether it's working (rough generation sketch below)
- Maybe I can make the indexes in pgAdmin as a test and then see how they fare before I commit it into schema code
- STATISTICS: maybe pgAdmin stats can help with index creation: https://www.pgadmin.org/docs/pgadmin4/dev/pgadmin_tabbed_browser.html
- https://statsbot.co/blog/postgresql-query-optimization/
(The actual slowness is directly related to namefetch, so that's where I'm concentrating effort)
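- Rough sketch for regenerating a big dataset in plain SQL (assumes awidget's id has a default such as serial/identity and that only name needs filling; the real generator and columns may differ):
    -- Hypothetical bulk load so EXPLAIN timings resemble production scale.
    INSERT INTO public.awidget (name)
    SELECT 'widget-' || g
    FROM generate_series(1, 1000000) AS g;
    -- Refresh planner statistics so index choices reflect the new data.
    ANALYZE public.awidget;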
- First up: re-run the data generation and check whether the name-fetcher query uses the compound name/id index I added, against freshly generated data
- First, add these two indexes to ayschema (two variants in case the planner prefers one column order over the other), then verify with the EXPLAIN check below
-- Name-first: can satisfy ORDER BY name and name lookups, with id
-- available for index-only scans.
CREATE INDEX widget_idx_test_name_id2
    ON public.awidget USING btree
    (name COLLATE pg_catalog."default", id)
    TABLESPACE pg_default;
-- Id-first: unique is safe since id is the primary key; covers id
-- lookups that also need name.
CREATE UNIQUE INDEX widget_idx_name_id
    ON public.awidget USING btree
    (id, name COLLATE pg_catalog."default")
    TABLESPACE pg_default;
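- To check which of the two the planner actually picks (the real namefetch query isn't recorded here, so this query shape is a guess):
    -- Guessed namefetch shape: ordered name scan returning (id, name).
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT id, name
    FROM public.awidget
    ORDER BY name
    LIMIT 50;
    -- Look for "Index Only Scan using widget_idx_test_name_id2" in the
    -- plan; a Seq Scan means the index isn't being used.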
- Update all the other routes to include search indexing (attachments, tags, etc.; anything with text in it) — rough upsert shape sketched below
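- One possible shape for the write side, assuming a single search table keyed by source type and row id (search_index, source_table, source_id, and body are invented names, not the actual Search table schema):
    -- Hypothetical upsert each route would run after saving text content;
    -- requires a unique constraint on (source_table, source_id).
    INSERT INTO search_index (source_table, source_id, body)
    VALUES ('attachment', 42, to_tsvector('english', 'attachment text here'))
    ON CONFLICT (source_table, source_id)
    DO UPDATE SET body = EXCLUDED.body;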

@@ -364,6 +364,10 @@ FROM awidget AS m
WHERE m.id = 12989
LIMIT 1
"Limit (cost=0.29..8.30 rows=1 width=27) (actual time=0.079..0.080 rows=1 loops=1)"
" -> Index Scan using awidget_pkey on awidget m (cost=0.29..8.30 rows=1 width=27) (actual time=0.077..0.077 rows=1 loops=1)"
" Index Cond: (id = 12989)"