The initial index for FindLectures.com was huge – 9.1 GB for around 210,000 videos. Most Solr hosting services would charge $150-$300 / month for an index this size. For a while I used BizSpark on Azure, which provides $150 / month in VM credit. When that ran out, I switched to a t2.medium VM on AWS, which costs $50 / month.
This index stores video titles, lengths, descriptions, closed captions, some facets (topic, year), and a quality score.
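As a reference point, a schema along these lines would hold that data. This is a hypothetical sketch – the field names and types are illustrative, not the actual FindLectures.com schema – but it reflects the starting state where everything is both indexed and stored:

```xml
<!-- Illustrative schema sketch; names/types are assumptions, not the real schema -->
<field name="title"       type="text_general" indexed="true" stored="true"/>
<field name="description" type="text_general" indexed="true" stored="true"/>
<field name="captions"    type="text_general" indexed="true" stored="true"/>
<field name="length"      type="pint"         indexed="true" stored="true"/>
<field name="topic"       type="string"       indexed="true" stored="true"/>
<field name="year"        type="pint"         indexed="true" stored="true"/>
<field name="quality"     type="pfloat"       indexed="true" stored="true"/>
```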
I made several changes that together reduced disk usage from 9.1 GB to 210 MB.
- Removed a feature that let you search for phrases in videos by timestamp – storing per-timestamp entries doubled the amount of closed caption data (saved 2 GB)
- I initially marked all fields as "stored", for ease of debugging. Disabling this for the closed caption field saved 1.9 GB.
- Facet fields were being both stored and indexed. They only need to be indexed, and it's not likely to be useful to store full text information about them. Removing this information from the facets saved 1.8 GB (termVectors="false" termPositions="false" omitNorms="true").
- Solr also stores the positions of words in each document, which is useful for highlighting, or for reconstructing information not otherwise available at query time. Again, this is not useful for facets. Disabling position information saved 2 GB (termOffsets="false" omitPositions="true")
- All of the fields are copied into a single, shared field (_text_) – this lets you search the title, description, and captions all at once. The downside is that it includes every field, and there doesn't appear to be a way to itemize fields in a &lt;copyField&gt;. Because of this, the terms "true" and "false" were the most common words in the index. This could be addressed by concatenating the fields you want at index time; for simplicity, I chose instead to generate a list of stopwords (427 terms) – this saved 1.2 GB.
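Putting the bullets above together, the slimmed-down schema looks roughly like this. Field names are again illustrative, and the stopword filter assumes a `stopwords.txt` file containing the generated terms:

```xml
<!-- Captions: searchable but no longer stored (names here are illustrative) -->
<field name="captions" type="text_general" indexed="true" stored="false"/>

<!-- Facets: indexed only, with term vectors, positions, offsets, and norms off -->
<field name="topic" type="string" indexed="true" stored="false"
       termVectors="false" termPositions="false" termOffsets="false"
       omitNorms="true" omitPositions="true"/>

<!-- Everything still feeds the shared search field -->
<copyField source="*" dest="_text_"/>

<!-- Stopword list strips noise terms like "true"/"false" at index time -->
<fieldType name="text_general" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.StopFilterFactory" words="stopwords.txt" ignoreCase="true"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```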
If you want to analyze your own index:
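One way to see where the space is going is to group the files in the Lucene index directory by extension, since each extension holds a different kind of data (e.g. `.fdt` for stored fields, `.pos` for positions). This is a small sketch of that idea – the extension labels follow the Lucene codec file naming and may vary by version:

```python
import os
from collections import defaultdict

# Common Lucene file extensions and what they store; exact names
# depend on your Lucene/Solr version, so treat these as a guide.
EXTENSIONS = {
    ".fdt": "stored fields",
    ".tvd": "term vectors",
    ".pos": "positions",
    ".tim": "term dictionary",
    ".doc": "postings (doc ids, freqs)",
    ".dvd": "docValues",
    ".nvd": "norms",
}

def index_size_by_extension(index_dir):
    """Sum file sizes in a Lucene/Solr index directory, grouped by extension."""
    totals = defaultdict(int)
    for name in os.listdir(index_dir):
        path = os.path.join(index_dir, name)
        if os.path.isfile(path):
            _, ext = os.path.splitext(name)
            totals[ext] += os.path.getsize(path)
    return dict(totals)

def print_report(index_dir):
    """Print sizes largest-first, with a label for known extensions."""
    totals = index_size_by_extension(index_dir)
    for ext, size in sorted(totals.items(), key=lambda kv: -kv[1]):
        label = EXTENSIONS.get(ext, "other")
        print(f"{ext:6s} {size / 2**20:10.1f} MB  {label}")
```

Point `print_report` at your core's `data/index` directory (the path varies by install) to see which data structures dominate your index before and after schema changes.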
Need help with Solr or Elasticsearch? Contact me for Solr consulting.