Marqo Cloud

Better embeddings, faster

Build transformative search, recommendation, and RAG applications with an end-to-end cloud platform.

1

Train

Use Marqo's embedding training platform to train high-quality embedding models, optimised for your domain and use case.
Flexible
Choose from hundreds of open source base models, or bring your own base model to Marqtune.
Multilingual
Fine-tune models in different languages depending on the use case.
Multimodal
Marqtune handles both image- and text-based models.
2

Embed

Generate embeddings for querying and reranking blazingly fast, with Marqo’s fully managed inference engine. Use GPU and CPU inference out of the box.
5ms
Average image inference time*
11,880
Average token inference/second*


3

Retrieve

State-of-the-art retrieval, combined with reranking, hybrid search and many other features.
26ms
Median query latency*
100%
Median recall*
*Retrieval statistics based on an HNSW index of 100 million 768-dimensional vectors. Image inference based on Marqo GPU running ViT-B-16-SigLIP; text inference based on Marqo GPU running MiniLM-L6 v4.

Feature-rich developer experience

Marqo provides a feature-rich developer experience, letting you perform multimodal vector search without sacrificing features like filtering and highlighting.

Multimodal search

Marqo can be used with text and/or image data. Multimodal indexes seamlessly handle image-to-image, image-to-text and text-to-image search.
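As a minimal sketch of what this looks like in practice (index name, field names and URLs below are hypothetical, not from Marqo's docs): documents mixing image URLs and text are embedded into one vector space, so any modality can query any other.

```python
# Hypothetical documents for a multimodal index: the image URL and the
# caption both become vectors in the same space.
documents = [
    {"_id": "1", "image": "https://example.com/sneaker.jpg", "caption": "red running shoe"},
    {"_id": "2", "image": "https://example.com/boot.jpg", "caption": "leather hiking boot"},
]

# With a Marqo Python client and a running instance, indexing and a
# text-to-image search would look roughly like:
#   mq.index("products").add_documents(documents, tensor_fields=["image", "caption"])
#   mq.index("products").search("waterproof footwear")
```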

Composite queries

Marqo supports weighted queries which can combine multiple text and image queries together. Negative weights can be added to query terms to push certain items out of your result set.
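A weighted query is expressed as a mapping from query terms (text or image URLs) to weights; the example below is an illustrative sketch with hypothetical terms, showing a negative weight used to push a concept out of the result set.

```python
# Hypothetical composite query: positive weights pull results toward a
# concept, negative weights push matching items away.
weighted_query = {
    "vintage denim jacket": 1.0,                # primary text intent
    "https://example.com/style-ref.jpg": 0.7,   # visual reference image
    "children's clothing": -0.5,                # demote this concept
}

# Sent to Marqo roughly as:
#   mq.index("apparel").search(weighted_query)
```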

Ranking modification

Marqo supports score modifiers, numeric fields in your documents can be used to manipulate the score and influence the ranking of results.
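As a sketch (field names are hypothetical), a score-modifier payload can multiply a document's score by one numeric field and add a flat bonus from another:

```python
# Hypothetical score modifiers: boost by a numeric "popularity" field
# and add a small bonus when "in_stock" is 1.
score_modifiers = {
    "multiply_score_by": [{"field_name": "popularity", "weight": 1.2}],
    "add_to_score": [{"field_name": "in_stock", "weight": 0.1}],
}

# Passed alongside the query, roughly:
#   mq.index("products").search("trail shoes", score_modifiers=score_modifiers)
```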

Context search

Additional context can be added to queries by providing vectors directly, this helps tailor results without the overhead of additional inference.
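For illustration (the vector values below are truncated placeholders, not real embeddings), a context payload supplies pre-computed vectors that are blended into the query embedding, so no extra inference runs at query time:

```python
# Hypothetical context payload: a cached vector (e.g. a user-taste
# embedding) is mixed into the query. A real vector must match the
# index's dimensionality (e.g. 768), not the 4 values shown here.
context = {
    "tensor": [
        {"vector": [0.12, -0.04, 0.31, 0.08], "weight": 0.5},
    ]
}

# Supplied with the search, roughly:
#   mq.index("products").search("summer outfits", context=context)
```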

Custom model integration

Import open source models from Hugging Face, bring your own, or load private models from AWS S3, GCP or Hugging Face using authentication.
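As a hedged sketch of the open-source path (the model name below is an example choice, not a recommendation, and the index name is hypothetical), an index can be created around a Hugging Face model:

```python
# Hypothetical index configuration referencing an open-source
# Hugging Face model by name.
index_settings = {
    "model": "hf/e5-base-v2",
}

# Created roughly as:
#   mq.create_index("docs", model=index_settings["model"])
# Private models from S3, GCP or Hugging Face would additionally
# require authentication details, per Marqo's documentation.
```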

Bulk operations

Parts of Marqo's API support bulk operations to improve throughput. These bulk operations enable use cases such as bulk changes to multiple indexes or coalescing of queries.
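Purely as an illustration of coalescing (the request shape below is hypothetical, not Marqo's actual bulk endpoint schema), several searches can be collected into one batch rather than issued as separate round trips:

```python
# Hypothetical batch of search requests to be coalesced into a single
# bulk call, saving per-request overhead.
bulk_queries = [
    {"index": "products", "q": "running shoes"},
    {"index": "products", "q": "hiking boots"},
    {"index": "reviews", "q": "durable sole"},
]
```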

Horizontal scalability

Marqo is horizontally scalable and can be run at million document scale whilst maintaining lightning-fast search times.

Highlighting

Marqo provides search highlighting functionality which allows you to transparently understand where and why a match occurred.
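For example (the hit below is a mocked result; the exact response shape may differ between Marqo versions), a search hit carries a highlights entry identifying which field produced the match:

```python
# Hypothetical search hit: "_highlights" points at the matching field
# and the matched content within it.
hit = {
    "_id": "42",
    "title": "Trail Running 101",
    "body": "Choose shoes with aggressive tread for muddy terrain.",
    "_highlights": [{"body": "Choose shoes with aggressive tread for muddy terrain."}],
    "_score": 0.87,
}

# The field responsible for the match:
matched_field = next(iter(hit["_highlights"][0]))
```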

Filtering

Marqo offers a powerful query DSL (Domain Specific Language), which can be applied as a prefilter ahead of approximate k-NN search.
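A filter expression in this DSL might look like the following sketch (field names and the exact syntax shown are illustrative); because it runs as a prefilter, only documents passing it enter the approximate k-NN stage:

```python
# Hypothetical prefilter: restrict by category, price range, and a
# boolean flag before vector search runs.
filter_string = "category:footwear AND price:[50 TO 150] AND NOT discontinued:true"

# Used roughly as:
#   mq.index("products").search("winter boots", filter_string=filter_string)
```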

Request a demo

We’d love to speak with you. Send us your questions about Marqo and we’ll set up a time to meet with you.

Book Demo