Want to see how our AI search works?
Learn about Marqo’s technology
Learn how Marqo’s platform fine-tunes and deploys embedding models.
Read our case studies
Read Marqo’s case studies of customer success.
Marqo scales seamlessly with your catalogue size, users and use cases. Easy – just like the rest of Marqo.
Fully managed
End-to-end vector creation and storage
Horizontally scalable
Model customization
CPU and GPU instances
Scale at the click of a button
Access control
High availability
Low latency
All of the features of Marqo Cloud
Embedding model training and data ingestion
A/B testing tooling
Merchandising
User interaction metrics tracking
Enhanced enterprise SLA
Automated search evaluation
Access to ML scientists
Vector search allows you to search documents, images and other data by converting each item into a collection of vectors. These vectors summarise the data in semantic form, which lets us not only match documents against queries by their semantic content, but also understand where and how a document matched the query. With Marqo, the inference needed to create the vectors is included.
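As an illustration only, here is a minimal sketch of what this looks like with the open-source Marqo Python client (pip install marqo); the index name, document fields, query and local URL below are placeholders for your own data and deployment:

```python
import marqo

# Connect to a self-hosted Marqo instance (URL is a placeholder).
mq = marqo.Client(url="http://localhost:8882")

# Create an index; Marqo handles vector creation (inference) for you.
mq.create_index("my-products")

# Add documents. Fields listed in tensor_fields are converted into vectors.
mq.index("my-products").add_documents(
    [
        {"Title": "Trail running shoe", "Description": "Lightweight shoe with aggressive grip"},
        {"Title": "Waterproof jacket", "Description": "Breathable shell for wet weather"},
    ],
    tensor_fields=["Description"],
)

# Search by meaning: the query is embedded and matched against document vectors.
results = mq.index("my-products").search("footwear for muddy trails")
for hit in results["hits"]:
    print(hit["Title"], hit["_score"])
```

Each hit carries a relevance score alongside the original document fields, so you can see not just that a document matched, but how strongly.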
The number of inference instances you will need depends on several factors: the number of documents, the size of those documents, and the type of data (image vs. text). For low search volumes that are primarily text, or when low latency is not critical, CPU inference nodes can be a cost-effective choice. GPU inference nodes provide a significant performance boost when indexing and searching images, and are recommended for indexing large datasets and serving high-volume, low-latency searches. For multimodal models, marqo.CPU.large is recommended as a minimum.
The estimates for storage capacity provided in our calculator assume you are using a model that produces 768-dimensional vectors.
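As a rough, illustrative calculation only: assuming vectors are stored as 32-bit floats, a single 768-dimensional vector occupies 768 × 4 bytes ≈ 3 KB, so one million vectors need on the order of 3 GB of raw vector storage, before index structures, metadata or replication are taken into account.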
The only changes you need to make are to update your URL and API key when accessing Marqo.
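For illustration, and assuming the Marqo Python client, the switch looks like the sketch below; both URLs and the API key are placeholders to be replaced with the values shown in your Marqo Cloud console:

```python
import marqo

# Self-hosted / open-source Marqo (placeholder URL):
mq = marqo.Client(url="http://localhost:8882")

# Marqo Cloud: same client, same calls; only the URL and API key change
# (both values below are placeholders).
mq = marqo.Client(url="https://api.marqo.ai", api_key="<your-api-key>")
```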
Cloud pricing is billed at the end of the month for total inference and shard hours used. Usage is rounded up to 15-minute increments. Enterprise pricing also includes a per-session component for model training.
Enter your business email and a member of our team will reach out to schedule a time.