RAG prototypes have become faster to build and text similarity search is now seamless
What is our primary use case?
My main use case for Pinecone was a RAG application where we used it for indexing. We convert incoming text chunks into vector embeddings, which may be of different dimensions, store those embeddings in Pinecone, and at query time perform a nearest-neighbor search to find the most relevant text. For example, if you ask a question about OPD expenses, the embeddings most semantically similar to OPD expenses come up. It is used for similarity search on text.
What is most valuable?
The best features Pinecone offers include the separation via namespaces, which was really useful. Another is the ability to deploy to different cloud providers of our choice, which was also advantageous, along with the out-of-the-box option that lets you take an API key and start using it without setting anything up.
Of the features I mentioned, the most valuable in my workflow is the easy setup because it makes prototyping easy. There can be many different RAG use cases, and for each of them you do not have to install a vector database or go through all of that; you can just use the API key.
Pinecone has positively impacted our organization by helping us land a few clients, or at least demo how our application works, since it became part of the solution we developed.
What needs improvement?
Pinecone could be improved by offering multi-modal embeddings out of the box. A major reason we did not continue with Pinecone is that the serverless region was only in the United States; if serverless were available out of the box in India, we would definitely have used Pinecone.
Regarding needed improvements, I would like to see more regional endpoints, particularly serverless regional endpoints, as that's the most important one, along with multi-modality support.
For how long have I used the solution?
I used the solution for about three to six months.
What do I think about the stability of the solution?
Pinecone is stable in my experience.
What do I think about the scalability of the solution?
Pinecone is highly scalable compared to others, and it's out-of-the-box, although the drawback is the pricing.
How was the initial setup?
Regarding how we set things up with Pinecone, we created different namespaces and within these namespaces, we created the embeddings for different catalogs.
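The namespace idea described above can be illustrated with a minimal in-memory sketch. This is not the Pinecone SDK; the store, the catalog names, and the tiny two-dimensional vectors are all hypothetical stand-ins for how one namespace per catalog keeps searches separated:

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

class NamespacedStore:
    """Toy vector store partitioned by namespace."""

    def __init__(self):
        self.namespaces = {}  # namespace -> {id: vector}

    def upsert(self, namespace, vec_id, vector):
        self.namespaces.setdefault(namespace, {})[vec_id] = vector

    def query(self, namespace, query_vec, top_k=1):
        # Search is confined to one namespace, so catalogs never mix.
        items = self.namespaces.get(namespace, {})
        scored = sorted(
            items.items(),
            key=lambda kv: cosine(kv[1], query_vec),
            reverse=True,
        )
        return [vec_id for vec_id, _ in scored[:top_k]]

store = NamespacedStore()
store.upsert("catalog-electronics", "doc-1", [1.0, 0.0])
store.upsert("catalog-electronics", "doc-2", [0.0, 1.0])
store.upsert("catalog-apparel", "doc-3", [1.0, 0.0])

# Queries only see vectors from their own namespace.
print(store.query("catalog-electronics", [0.9, 0.1]))  # ['doc-1']
```

The same separation is what made per-catalog embeddings manageable: each catalog's queries stay inside its own namespace.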
What was our ROI?
Which other solutions did I evaluate?
We currently use pgvector with Postgres; it is not something I had used before, but it is beneficial because it operates on disk rather than in memory, which saves a lot of money.
What other advice do I have?
My advice for others looking into Pinecone is to optimize all embeddings before using it, because if you use it at high scale the pricing can get very high, so you have to be careful.
Pinecone was one of the earliest vector databases I came to know about, and it's the go-to option; I suggest it for anyone new to or learning about vector databases because it's very easy to start and work with without needing complex setups.
One clarification: although there is a focus on cloud deployments here, we did not dive deeply into that; we only used Pinecone during the proof of concept and had to switch due to data residency issues. I rate this product an 8 out of 10.
RAG workflows have transformed financial query responses but still need larger vectors and deeper tracing
What is our primary use case?
We are using Pinecone because we have a good amount of documents which we use on a daily basis from our vendors. Based on those documents, we need to provide information to the end customer for that particular company. We have a UI where customers can ask any questions related to anything in the financial domain. We need to provide the latest information, so we are dynamically doing the chunking with the help of an OpenAI LLM model and then inserting into Pinecone. In Pinecone, we are using a very high dimension vector space, almost more than 3K dimension size. We then perform similarity search and provide the final response to the UI. For our RAG system implementation, we are using Pinecone.
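The retrieval step described above (embed the chunks, store them, then run a similarity search at query time) can be sketched in plain Python. The chunks and their tiny three-dimensional vectors below are hand-made stand-ins for the real ~3K-dimensional embeddings:

```python
import math

# Toy stand-ins for stored chunk embeddings.
chunks = {
    "Q3 revenue grew 12% year over year.":    [0.9, 0.1, 0.0],
    "The cafeteria menu changes on Mondays.": [0.0, 0.2, 0.9],
    "Operating expenses fell 4% in Q3.":      [0.8, 0.3, 0.1],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve_context(query_vec, top_k=2):
    # Rank stored chunks by similarity to the query embedding and
    # return the best ones as context for the LLM.
    ranked = sorted(chunks, key=lambda text: cosine(chunks[text], query_vec),
                    reverse=True)
    return ranked[:top_k]

# A query embedding close to the "financial" chunks retrieves them first.
context = retrieve_context([1.0, 0.2, 0.0])
print(context)
```

In the real pipeline, Pinecone performs this ranking server-side over the full index; the sketch only shows the shape of the operation.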
What is most valuable?
There are multiple factors that impressed me while using Pinecone. I had an option to use Milvus as well, but I preferred Pinecone. The first is the UI. Pinecone's UI is really strong. If I need to do some debugging on the backend side, I simply log into the UI and can perform operations based on my demand. This is a valuable UI feature.
Second is the scalability option. I can either define my own workers or use the auto-scaling feature. From an enterprise application and scalability perspective, this is very useful. Black Friday sales and other occasional events directly impact our product traffic, and because we had selected the auto-scaling feature in Pinecone, it automatically handled the traffic spikes and we did not face any performance issues.
Ease of troubleshooting is another valuable feature. If any transaction fails and we need to check and debug each transaction, we can perform a text search on the UI. Based on the text search, we get all the related vectors on the UI. The UI definitely helps us from a troubleshooting perspective. Selecting the infrastructure is also an important option. I can create multiple indexes based on demand so that it will not become messy for our enterprise application. Pinecone is the backbone of the entire system, helping us with cost and time savings.
What needs improvement?
I have a suggestion to expand the supported vector size in Pinecone. Currently, I believe the limit is around 3K, and being able to go to 4K, 5K, or higher would be beneficial. New embedding models coming to market produce larger vector sizes, and Pinecone should try to accommodate them.
I have two main suggestions from my side. One is to increase the vector size; currently it supports only around a 3K vector size, and I would recommend increasing that. The second is a feature similar to LangSmith, which is a monitoring tool. In LangSmith, end-to-end API calls can be analyzed: what request came from the customer, what vector search was performed, what prompt was created, what call was made to the LLM, and what response went from the LLM to the UI. The whole journey can be captured. I would appreciate it if Pinecone could provide this capability. I understand that Pinecone cannot capture the LLM call and everything, but if it were possible, I could use my Pinecone API key in my code, enable these feature logs, and see all of this on the Pinecone dashboard.
The major improvement I am expecting from Pinecone is increased vector size. The second improvement would be to provide end-to-end debugging or the whole end-to-end call journey as a GenAI product, showing how the end-to-end journey works for a single request. If I am able to see the whole process on the Pinecone dashboard, it would be really valuable.
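Until such tracing exists natively, the journey described above can be captured application-side. The sketch below is a hypothetical wrapper, not a Pinecone or LangSmith API: the stub search and LLM functions stand in for the real vector search and model call, and only the stage bookkeeping is the point:

```python
import time

class RagTrace:
    """Collects the stages of one RAG request for later inspection."""

    def __init__(self, request):
        self.request = request
        self.steps = []

    def record(self, stage, payload):
        self.steps.append({"stage": stage, "at": time.time(), "payload": payload})

    def summary(self):
        return [s["stage"] for s in self.steps]

def handle_request(question, search_fn, llm_fn):
    # Capture the journey: vector search -> prompt -> LLM response.
    trace = RagTrace(question)
    hits = search_fn(question)
    trace.record("vector_search", hits)
    prompt = f"Answer using: {hits}\nQ: {question}"
    trace.record("prompt", prompt)
    answer = llm_fn(prompt)
    trace.record("llm_response", answer)
    return answer, trace

# Stubs stand in for Pinecone and the model.
answer, trace = handle_request(
    "What were Q3 expenses?",
    search_fn=lambda q: ["Operating expenses fell 4% in Q3."],
    llm_fn=lambda p: "Expenses fell 4%.",
)
print(trace.summary())  # ['vector_search', 'prompt', 'llm_response']
```

A dashboard-style view would then just be a rendering of these recorded steps per request.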
For how long have I used the solution?
I am using Pinecone as an enterprise application in my organization for almost three years.
What was our ROI?
I do not have specific metrics, but I can give some high-level approximations. The task that was happening before developing this product was taking around one hour, but now it is done in hardly one or two minutes. So from 60 minutes to one or two minutes, you can assume how much cost savings we are achieving. Additionally, we are engaging customers much better through the UI. Previously, customers used to wait for 60 minutes. Now they get results within one or two minutes. We are definitely increasing our customer database.
Which other solutions did I evaluate?
Milvus is another contender that we considered while deciding on a vector database. I suggested Pinecone because of its overall quality compared to Milvus. However, Milvus has the capability to handle any vector size, which is missing in Pinecone.
Semantic search has transformed financial document discovery and supports real-time RAG chat
What is our primary use case?
I have used Pinecone in two main contexts. First, in a client project where I implemented a vector search system over a corpus of financial documents, balance sheets, trial balances, and invoices. I stored document embeddings in Pinecone and used it for similarity-based lookup and recommendation features. Second, I built a RAG-based document chatbot where Pinecone served as a retrieval layer. I would chunk documents, generate embeddings, store them in Pinecone, and then retrieve relevant context for an LLM to answer user queries.
Adding vector search to the client project significantly improved how quickly users could find relevant financial documents. Instead of manual keyword search, they got semantically relevant answers. For a RAG chatbot, Pinecone made retrieval fast and accurate enough to power real-time question answering over documents, which would have been impractical with brute-force search.
What is most valuable?
The best features Pinecone offers, in my experience, include strong performance and reliability. However, the free tier is somewhat limited. If you are experimenting with a larger data set, you hit the limits quickly during development. Cost can scale up as your index size grows, which is something to plan for. Also, for someone just starting out, understanding the right embedding dimensions, indexing strategies, and metadata filtering takes some trial and error. More guided tutorials or best practice templates for common use cases like RAG would help.
Before I integrated Pinecone, the client was doing keyword-based search over their financial documents, balance sheets, invoices, and similar items. It was slow and often returned irrelevant results because keyword matching does not capture semantic meaning. Once I switched to vector search with Pinecone, users could find contextually relevant documents much faster. Instead of sifting through dozens of keyword mismatches, they would get the most semantically similar documents right at the top. That is a real workflow improvement that saved them hours every week on document retrieval.
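The difference between keyword and semantic search described above can be shown with a toy comparison. The documents and their two-dimensional "embeddings" below are hand-made illustrations, not real model output:

```python
# name -> (text, toy embedding)
docs = {
    "invoice_totals": ("Amounts billed to clients in March", [0.9, 0.1]),
    "lunch_policy":   ("Office lunch reimbursement rules",   [0.1, 0.9]),
}

def keyword_search(query):
    # Returns only documents sharing a literal word with the query.
    terms = set(query.lower().split())
    return [name for name, (text, _) in docs.items()
            if terms & set(text.lower().split())]

def semantic_search(query_vec):
    # Ranks documents by dot-product similarity to the query embedding.
    def score(item):
        _, (_, vec) = item
        return sum(a * b for a, b in zip(vec, query_vec))
    return [name for name, _ in sorted(docs.items(), key=score, reverse=True)]

# "billing" shares no keyword with any document...
print(keyword_search("billing"))        # []
# ...but its (hypothetical) embedding sits close to invoice_totals.
print(semantic_search([1.0, 0.0])[0])   # invoice_totals
```

This is exactly the failure mode keyword matching had on the financial corpus: "billing" never matched "billed amounts", while the embedding space placed them together.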
What needs improvement?
On the integration side, Pinecone's Python SDK is straightforward. It integrates well with the usual AI stack like LangChain and LlamaIndex. That was smooth for me. Where it could improve is around documentation for edge cases. For instance, handling metadata filtering at scale, understanding the right embedding dimensions for different use cases, and best practices for indexing strategies. Those topics felt sparse in the documentation. More real-world tutorials specific to common patterns like RAG or recommendation systems would help developers ramp up faster.
On support, the community is helpful, but if you hit something tricky and you are on a lower-tier plan, getting quick answers can be slow. Better-tiered support or more comprehensive troubleshooting guides would be valuable, especially for production deployments where latency is critical.
For how long have I used the solution?
I have been using it for about one year.
What do I think about the stability of the solution?
Pinecone is very stable for me. I have had excellent uptime and cannot recall any significant outages affecting my production indexes over the past year.
What do I think about the scalability of the solution?
Scalability has been solid. I have grown from around 10,000 vectors to 500,000 without hitting any hard limits or performance issues. Pinecone handles that growth transparently. I do not have to manually re-partition data or manage sharding myself like I would with self-hosted solutions. Query latency remained consistent even as the index grew, which is impressive. The main constraint is not technical scalability, it is cost. As your index size grows, your monthly bill grows proportionally. So you need to be thoughtful about what you are indexing rather than just throwing everything at it.
How are customer service and support?
Customer support is decent but has some limitations. The community Slack channel is helpful, and I can get answers from other users and Pinecone engineers fairly quickly. One limitation is that if you are on a lower-tier plan, getting direct support can be slow. For production issues where you need quick solutions, more responsive support channels would be beneficial. The documentation and troubleshooting guides are good, but they do not always cover edge cases or complex scenarios I might run into.
Which solution did I use previously and why did I switch?
Before Pinecone, I was using a more basic approach with keyword-based search using Elasticsearch. It worked for simple use cases, but keyword mismatching did not capture semantic meaning, so relevance was poor. I also experimented briefly with building my own vector search solution using Milvus, which is an open-source vector database. The appeal was cost savings, but it required dedicated DevOps effort to deploy, maintain, scale, and monitor. That overhead was not worth it given my team size.
I switched to Pinecone because it gave me the semantic search quality I needed without the operational burden. It was a trade-off: slightly higher cost compared to self-hosting Milvus, but much lower operational complexity and faster time to production. For a lean team, that made sense. Elasticsearch could not do semantic search well, and managing Milvus myself was too much overhead. Pinecone hit the sweet spot between capability and operational simplicity.
How was the initial setup?
The deployment process itself was fairly straightforward. Creating indexes through Pinecone's dashboard and configuring the index settings like dimension and metric type took maybe an hour to get right. The Python SDK integration was smooth, and connecting my application to the indexes worked without much friction.
Where it got a bit tricky was the initial work around embeddings and index configuration. I had to experiment with embedding dimensions, whether to use 384, 768, or 1536 dimensions, depending on my use case. That affected both performance and cost. I also spent time getting metadata filtering right for financial documents, since I needed to filter by document type and date ranges alongside semantic search. Overall, this was not a major blocker, but there was definitely a learning curve on the configuration side. Once I got it dialed in, running it in production has been easy.
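The combination of metadata filtering and similarity search mentioned above can be sketched as follows. The documents, types, dates, and two-dimensional vectors are hypothetical; the point is the order of operations, filter first, then rank:

```python
import math
from datetime import date

# Toy documents with metadata and stand-in embeddings.
docs = [
    {"id": "inv-1", "type": "invoice",       "date": date(2024, 3, 1),  "vec": [0.9, 0.1]},
    {"id": "bs-1",  "type": "balance_sheet", "date": date(2024, 3, 15), "vec": [0.8, 0.2]},
    {"id": "inv-2", "type": "invoice",       "date": date(2023, 1, 5),  "vec": [0.95, 0.05]},
]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def filtered_query(query_vec, doc_type, after, top_k=5):
    # Restrict to matching metadata first (as a metadata filter does),
    # then rank the survivors by vector similarity.
    pool = [d for d in docs if d["type"] == doc_type and d["date"] >= after]
    pool.sort(key=lambda d: cosine(d["vec"], query_vec), reverse=True)
    return [d["id"] for d in pool[:top_k]]

# Only invoices from 2024 onward are considered, even though
# inv-2 is the closest vector overall.
print(filtered_query([1.0, 0.0], "invoice", date(2024, 1, 1)))  # ['inv-1']
```

Getting the filter fields (document type, date ranges) right up front was most of the configuration effort on the real project.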
What was our ROI?
The clearest ROI is time saved on document retrieval. That 15 to 20 minutes per user per day adds up. If you have a team of, say, 10 financial analysts, that is roughly 150 to 200 minutes saved daily, or about 12 to 17 hours per week across the team. Over a year, that is substantial.
In terms of direct cost savings, I did not need to hire additional DevOps staff to manage a vector database myself. The managed service handled that, so there is an implicit cost avoidance there. On the revenue side, for my client, the faster document retrieval made their service more competitive and improved user satisfaction, which likely helped with retention, though I did not track the metric explicitly. The clearest financial metric is probably this: the cost of Pinecone, which is a few hundred dollars monthly, is easily offset by the productivity gains from not having analysts spend hours manually searching documents. The payback period was basically immediate once I deployed it.
What's my experience with pricing, setup cost, and licensing?
Pinecone charges based on index size and API requests. I am paying for storage and compute. The free tier is generous for experimentation, but it gets maxed out pretty quickly if you are working with real-world data sets. For my setup, initial costs were low since I started small, but as I scaled to 500,000 vectors, the monthly bill grew noticeably.
Which other solutions did I evaluate?
I did evaluate a few alternatives. Milvus was one. It is open-source and cost-effective, but the operational overhead was a concern. I also looked at Weaviate, which is another managed vector database option. It has some nice features around hybrid search and knowledge graphs, but it felt a bit more complex than what I needed, and pricing was comparable to Pinecone anyway.
In the end, Pinecone won out because it offered the best balance: managed infrastructure, so no DevOps headaches, solid query performance, straightforward Python integration, and transparent pricing.
What other advice do I have?
Pinecone is especially valuable for teams that want a managed vector database without the overhead of self-hosting something like Milvus or Weaviate. If you are building RAG systems, semantic search, or recommendation features and you want something that just works out of the box, Pinecone is a solid choice.
The main impact was around speed and relevance. Without fast vector retrieval, real-time question answering over documents would have been too slow to be practical. Pinecone made that workflow possible in the first place, rather than just improving it.
On reliability, I have had really good uptime and cannot recall any significant outages affecting my production indexes. Pinecone's infrastructure is managed, so they handle failover and redundancy behind the scenes. One thing to note is that during peak usage times, I have occasionally seen slightly higher latency, maybe 200 to 300 milliseconds instead of the usual 50 to 100 milliseconds.
Pinecone handles scaling pretty well in practice. That is one of the main selling points of a managed service. I do not have to manually shard or manage replicas myself like I would with a self-hosted solution. I have scaled from maybe 10,000 vectors to around 500,000 vectors over the course of the year, and Pinecone handled that transparently. Query latency stayed fast throughout. The main challenge was not performance itself, it was cost. As your index size grows, you are paying more for storage and compute resources. I had to be strategic about what embeddings I kept and which documents I actually needed to index. Scaling works smoothly, but you need to plan for cost implications early on rather than discovering them later when your bill starts to grow.
I would rate Pinecone 8 out of 10. The reason it is not a full 10 is mainly two things: the free tier limitations hit you fast when you are experimenting with large data sets, and the documentation could go deeper on real-world patterns like RAG and metadata filtering. However, the reason it is still an 8 and not lower is because the core product is really strong. Managed infrastructure means zero maintenance headaches. Query performance is fast and reliable. The Python SDK integrates smoothly with tools like LangChain, and similarity search results are genuinely relevant. For what it does—managed vector search in production—it delivers. Those last two points are just areas where it could go from great to excellent.
Low-Latency Similarity Search with Scalable, Developer-Friendly APIs
RAG workflows have become cost‑efficient and integrate seamlessly with existing cloud tools
What is our primary use case?
We're using Pinecone to build our RAG pipeline. We need a vector database, and we have a lot of options in the market. RAG is the biggest use case for us.
What is most valuable?
The first thing is that we've always been using AWS. AWS provides OpenSearch Serverless out of the box, but OpenSearch happens to be pretty expensive because you pay per hour of use to keep an OpenSearch server alive; it's billed by the number of OCUs. Pinecone, on the other hand, is pay-as-you-go on the number of queries: you only pay for the queries you make.
Pinecone's integration with AWS was seamless. All we had to do was take one of the API keys, upload it to AWS's Key Management Service, and configure it, and everything started working. When you're building a production RAG system, Pinecone gives you the vector search, but a lot of pieces still have to come with it, including embeddings, chunking, query pre-processing, and security. Pinecone doesn't provide those out of the box; AWS has the infrastructure for them. Using Bedrock with Pinecone is a good combination because Bedrock itself is free; you only pay for the model invocations.
Pinecone is flexible. They give you a bunch of options. One of the good features is that they also provide embeddings within Pinecone, which is a neat feature. You can essentially choose your embedding sizes and things like that. So you do have some control over it. It's easy to set up, and we felt like it's not that expensive for us in comparison to serverless. That's why we took it.
What needs improvement?
If Pinecone gave us RAG as a service, we'd be more than happy to use that. Then we wouldn't have to go to something like AWS again.
For how long have I used the solution?
We've been using Pinecone for a little over four months.
What do I think about the scalability of the solution?
So far we haven't scaled it to that extent. We're just building a beta version of it. For the beta version, at least so far, it's been good. We're demoing this to a few people, and then we'll possibly scale up if needed. But so far, it's looking good.
We've rolled out the early version as a beta access to a few, maybe twenty to thirty customers. So far, there haven't been that many complaints, but also it hasn't been really stress-tested for say, ten thousand requests per minute or something like that. We haven't really put it to the test. But for these demos for our clients to use, it's working fine so far.
How are customer service and support?
I have not personally engaged with customer service, as there are people above me who are making those decisions. I work as a developer and am just integrating everything. I haven't needed support because the documentation is good enough to help developers get up to speed.
The documentation is great. Plus, they have a chatbot that can help you answer all the questions about documentation, which I find helpful. I would say it's even better than AWS's documentation because AWS's SDK documentation is just not as helpful.
Which solution did I use previously and why did I switch?
We weren't really sure about Pinecone security, and that's why we're using AWS for it. AWS is going to handle that whole pipeline of security and making sure that everything is passing through correctly. Pinecone comes in at just one of the stages, where it has to either at inference give you the most similar vectors or store your embedded chunks into a vector database. It's just one small piece in this. Most of the heavy lifting is done by our back-end plus AWS.
We were also using S3 Vectors, but it's still in preview. They haven't released it for all regions. It works in the US East, but in Europe West, it's not live yet. So we weren't able to go ahead with S3 Vectors. Pinecone was available though, and that's what we're using right now.
How was the initial setup?
We're using Pinecone as a vector database over OpenSearch.
What other advice do I have?
As a standalone vector database, I think Pinecone gets the job done. I would give it an eight out of ten. Overall, I rate this product an eight.
Effortless Integration and Fast Queries with Pinecone
RAG workflows have transformed document research and now provide precise answers with citations
What is our primary use case?
My main use case for Pinecone is creating vector indexes for GenAI applications.
A specific example of how I use Pinecone in one of my projects is utilizing a RAG pipeline where I take text from PDF documents, convert those into chunks, ingest those into the Pinecone vector database, and then have a frontend UI that uses LLMs to query the vector database and retrieve answers.
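The chunking step in that pipeline can be sketched with a simple overlapping word-window splitter. The window and overlap sizes below are arbitrary illustrations, not the values used in the actual project:

```python
def chunk_text(text, chunk_words=50, overlap=10):
    # Split text into windows of `chunk_words` words, with `overlap`
    # words repeated between consecutive chunks so context at the
    # boundaries is not lost.
    words = text.split()
    step = chunk_words - overlap
    chunks = []
    for start in range(0, len(words), step):
        piece = words[start:start + chunk_words]
        if piece:
            chunks.append(" ".join(piece))
        if start + chunk_words >= len(words):
            break
    return chunks

# A 120-word stand-in for extracted PDF text.
doc = " ".join(f"word{i}" for i in range(120))
chunks = chunk_text(doc)
print(len(chunks))  # 3 overlapping chunks
```

Each chunk would then be embedded and upserted into the index; at query time the UI retrieves the closest chunks as context for the LLM.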
What I appreciate about Pinecone is that it provides reranking and other features, and it's a SaaS-based solution that is serverless.
What is most valuable?
Pinecone's reranking works by taking the list of documents returned from the indexes and reordering them by relevance to the question the user asked. With reranking applied, the user gets the most relevant answers as the LLM understands them, producing near-perfect responses; without it, the LLM takes all output from the vector index, which is not quite as good.
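The reranking idea can be sketched as a second pass over first-stage candidates. The trivial keyword-overlap scorer below is a stand-in for a real reranking model, and the candidate passages are hypothetical:

```python
# Candidate passages as returned by a first-pass vector search.
candidates = [
    "Approval workflows are described in chapter 2.",
    "Our cafeteria is open until 3 pm.",
    "The device received FDA approval in 2021.",
]

def rerank(question, docs, top_k=2):
    # Rescore each candidate against the question and reorder,
    # so the most relevant passages come first.
    q_terms = set(question.lower().replace("?", "").split())
    def overlap(doc):
        return len(q_terms & set(doc.lower().rstrip(".").split()))
    return sorted(docs, key=overlap, reverse=True)[:top_k]

ranked = rerank("When was FDA approval received?", candidates)
print(ranked[0])  # The device received FDA approval in 2021.
```

A real reranker scores semantic relevance rather than word overlap, but the pipeline shape is the same: retrieve broadly, then reorder before handing context to the LLM.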
Pinecone's serverless aspect is valuable because I don't have to manage the infrastructure myself, as Pinecone takes care of that.
Pinecone has positively impacted my organization by helping people in needle-in-a-haystack situations. Previously, they had to grind through PDF documents, PowerPoint decks, and websites; now they can ask questions and receive references to documents along with the page numbers where the information exists, so they can use it as a reference or backtrack. This is especially valuable for things such as FDA approvals, where they can quote the exact page number from PDF documents. It eliminates hallucination by relying on real data in an external vector database, with enough guardrails to ensure answers are confined to the information present in the indexes.
Pinecone has helped full-time employees rely less on contractors to find information, enabling them to access data at their fingertips and reducing the turnaround time to generate reports.
What needs improvement?
I give Pinecone a nine out of ten because I hope it provides an end-to-end agentic solution, but currently, it doesn't have those agentic capabilities, meaning I have to create a Streamlit application and manage it to communicate with Pinecone. If Pinecone could provide those kinds of web apps out of the box, I would give it a perfect ten.
Nothing else is needed since Pinecone provides APIs for integration, so integration is not a hurdle, and I am happy with what I have.
Pinecone is good as it is, but if it ran on AWS infrastructure, we wouldn't experience the occasional network lag we see because it sits outside AWS. However, when we started two years ago, there weren't any vector databases on AWS, making Pinecone a pioneer in the field.
For how long have I used the solution?
I have been using Pinecone for the last two years.
What do I think about the stability of the solution?
Pinecone is stable.
What do I think about the scalability of the solution?
Pinecone is scalable.
How are customer service and support?
I have not needed customer support yet, as everything works seamlessly.
Which solution did I use previously and why did I switch?
There was no solution before Pinecone, as vector databases only gained traction about two years ago, and Pinecone was the pioneer in this field, which is why we picked them.
What was our ROI?
I have seen a return on investment with Pinecone, as the application we built received positive feedback from internal stakeholders about how much it's helping them make business decisions and access information quickly at their fingertips.
What's my experience with pricing, setup cost, and licensing?
The experience with pricing, setup cost, and licensing for Pinecone is not in my area, as I am a developer who uses the tools.
Which other solutions did I evaluate?
No other options were evaluated before choosing Pinecone.
What other advice do I have?
Pinecone perfectly fits my organization's needs based on our use case. The market for vector databases is broad right now, offering many options; however, I don't have experience with other tools and technologies. I would give Pinecone a rating of nine out of ten overall.
Nice vector DB, easy to use
Good at creating embeddings
Using Pinecone on production - 1 year later
- High performance (upserts and searches in the milliseconds)
- Simple integration via API and easy deployment; after their recent release of serverless indexes, it's very simple to maintain and scale (it autoscales).
- Low price (relative to the number of vectors) and free limited indexes. Free indexes are great for running development-environment data. For a while it was impossible to upgrade a free index to a paid one, but this is now addressed.
- Incredible support (we had an issue and were not expecting this quality of support without paying the usual business-support fees of, say, an AWS)
- The ability to assign metadata is very useful (we still maintain a traditional DB to keep track of the vectors)
- The single-stage vector/metadata query is very useful and saves the headache of over-querying
- One feature we have been meaning to use is sparse vectors in combination with dense vectors, so I can't really comment on that yet
- The documentation on using metadata and single-stage queries is a bit light
- They have a smart bot to help answer support questions. On the plus side, it seems they use their own technology for a RAG-type application; on the other hand, it often misses the mark. ChatGPT or Perplexity are surprisingly more effective.
- There have been a few downtimes, but they are very communicative about them and maintain a server-health page for each endpoint. It's usually related to the specific infrastructure (AWS or GCP) they run on.
- They have been growing and improving the technology, and, as with other players, you sometimes have to update their Python library or the way you reference indexes. But each change has been toward simplification, and I suspect it will stabilize.