Hey,
I’ve been trying to understand how Marqo works, but the documentation isn’t all that clear.
Context
First off, I’ve got no experience with vector databases or embeddings. I’m attempting to build context embeddings for my model. I’d like to use Marqo for the vector database part, where I’ll be storing past conversations, documents, and maybe other things like social media posts, emails, and so on.
Questions
Q1:
Would it be possible to set up a Docker Compose file for Marqo? I’ve got no idea what to do with the volume part (I want it mapped to a folder in my home directory).
Could an example Docker Compose file be added to the documentation?
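This is roughly the kind of thing I’m after. It’s a sketch only: I’m guessing at Marqo’s internal data directory (the `/opt/vespa/var` target below is an assumption on my part, so please point me at the right path for the current version):

```yaml
# Hypothetical docker-compose.yml for Marqo.
# The image tag and the container-side data path are assumptions;
# check the docs for whatever your Marqo version actually uses.
version: "3.8"
services:
  marqo:
    image: marqoai/marqo:latest
    ports:
      - "8882:8882"
    volumes:
      # Host side: a folder in my home directory (Compose expands ~).
      # Container side: Marqo's internal data directory; /opt/vespa/var
      # is my guess and may be wrong for your version.
      - ~/marqo-data:/opt/vespa/var
```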
Q2:
When I specify the embedding model at index creation and when adding documents, will the model stay loaded in memory for as long as the Docker container is running?
If that’s the case and I add data to the database from a different script at a different time, I wouldn’t have to specify the same model again (as long as the index is the same), right?
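To make it concrete, this is the two-script workflow I’m imagining with the Python client (index name, field names, and the model choice are just examples):

```python
# script_one.py -- create the index once, binding a model to it.
import marqo

mq = marqo.Client(url="http://localhost:8882")

# The model is part of the index settings, so (if I understand
# correctly) it should not need to be repeated later.
mq.create_index("conversations", model="hf/e5-base-v2")
```

```python
# script_two.py -- run later, possibly from another process.
import marqo

mq = marqo.Client(url="http://localhost:8882")

# No model argument here: the index should already know
# which model to embed with.
mq.index("conversations").add_documents(
    [{"_id": "conv-001", "text": "Past conversation goes here."}],
    tensor_fields=["text"],
)
```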
Q3:
Could you add support for the (heavy) intfloat/e5-mistral-7b-instruct model?
(It currently sits at rank 1 on the MTEB leaderboard, with 4096 output dimensions.)
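I don’t know whether the existing custom-model route can handle it (it may need its own pooling and prompt formatting, which is presumably why explicit support is needed), but for reference, this is how I’d try registering it as a generic Hugging Face model. The settings keys follow my reading of the bring-your-own-model docs and may differ between versions:

```python
import marqo

mq = marqo.Client(url="http://localhost:8882")

# Sketch only: registering a custom Hugging Face model via
# model_properties. I am not sure e5-mistral-7b-instruct actually
# works through this path, hence the feature request.
settings = {
    "index_defaults": {
        "model": "e5-mistral-7b-instruct",
        "model_properties": {
            "name": "intfloat/e5-mistral-7b-instruct",
            "dimensions": 4096,  # from the model card
            "type": "hf",
        },
    }
}
mq.create_index("big-embeddings", settings_dict=settings)
```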
Q4:
So, if I’ve understood correctly, I could use one script to create the index and bulk-add documents using CUDA, and then at query time (when embedding the search context) just run it on the CPU, right? A sketch of what I mean follows below.
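This assumes the `device` parameter on `add_documents` and `search` works the way I think it does:

```python
import marqo

mq = marqo.Client(url="http://localhost:8882")

# Bulk ingestion: embed documents on the GPU. This assumes the
# container was started with GPU access (e.g. --gpus all).
mq.index("conversations").add_documents(
    [{"_id": f"doc-{i}", "text": f"Document {i}"} for i in range(100)],
    tensor_fields=["text"],
    device="cuda",
)

# Query time: embedding a single query is cheap, so CPU should do.
results = mq.index("conversations").search(
    "what did we discuss about the project deadline?",
    device="cpu",
)
print(results["hits"][0]["_id"])
```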