User Research & Product Design
GenAI Knowledge Mining Platform for a venture capital (VC) and private equity (PE) firm



Brief Overview -
This project aimed to test a quick proof of concept (PoC) and minimum viable product (MVP) developed with Bolt and Cursor, refining it based on real-world use cases. The goal was to create a knowledge mining platform for analysts and partners in a venture capital (VC) and private equity (PE) firm.
This platform facilitates access to both internal knowledge and external reports, helping users monitor historical data, extract insights, and predict outcomes. It supports two core use-cases: a conversational AI platform and a document generation platform.
Disclaimer: Sensitive company information and a couple of this product's core features aren't showcased here, in keeping with the NDA I signed.
This project was a fast-paced, insight-heavy sprint to shape an internal knowledge mining platform built for venture capital and private equity teams.
The goal?
Build a system that could surface internal knowledge and parse dense external reports.
Make it easier for analysts, associates, and partners to connect the dots—historically, strategically, and intelligently.
Improve the user experience and navigation of the current PoC.
The platform had two core use-cases: a conversational AI assistant that could answer nuanced questions based on firm documents and reports, and a smart document generation tool that could synthesize insights into usable formats for IC decks, investment memos, and partner reviews.
What made this particularly exciting (and challenging) was its rapid prototyping cycle. The team had already started with a Bolt and Cursor-built PoC, and my role was to help build a minimum viable product, layering real-world feedback onto a lightweight, testable foundation.
I was brought on board thanks to my past experience designing knowledge-mining platforms across industries—tools that helped users extract, organize, and surface meaningful insights from unstructured data. That background helped me plug in quickly with a small but sharp team: 1 Product Manager, 1 Managing Director, 1 Partner, and 2 developers.
Over 6 weeks, I led initial user research and usability testing with 7 core users, conducted a heuristic analysis of the PoC, and shaped the early product feedback loops that refined both functionality and user experience. This wasn't just about building a tool—it was about making something that could genuinely support decision-making, insight discovery, and faster strategic movement inside a VC/PE environment.












Use Case 1: Prism Mode vs Mirror Mode – Designing for Depth and Comparison
In the early phases of defining product functionality, one of the key challenges was understanding how users—particularly analysts and partners—interact with insights. Do they want to go deep into a single narrative thread, or compare and contrast across multiple data points or sources?
That’s where we introduced two foundational modes: Prism Mode and Mirror Mode. Each mode served a different mental model of analysis.
🌀 Prism Mode – For Focused, Deep-Dive Exploration
This mode supported single-threaded conversations—ideal for when users wanted to investigate a topic thoroughly, follow up on prompts, or track insights across documents without context switching. Think of this as a “lean-in” experience where one thread of thought is being refined, questioned, and unraveled over time.
This helped users stay in flow, especially when researching a specific startup, dissecting due diligence findings, or building an investment thesis based on historical data.
🪞 Mirror Mode – For 1:1 Comparisons (and Beyond)
On the other hand, Mirror Mode introduced dual-threaded thinking. This mode allowed users to run side-by-side comparisons—like comparing two startups, two versions of an investment memo, or two market reports.
At this stage, the comparison was 1:1, but our design intentionally left space for the future: expanding to 1:Many or even 1:1:1 comparisons. The core idea was to allow users to spot patterns, similarities, or contradictions in a clean, structured way without losing track of their thought process.
We made it a point to keep both modes lightweight and friction-free. Users could toggle between them as needed, depending on whether they were in exploration or evaluation mode.






Multi-Agentic Experience: Simulation vs Analysis
Another exciting layer we experimented with was the multi-agent experience—giving users the ability to engage with different “types” of AI agents depending on what they needed:
Analysis agents were geared for factual extraction, synthesis, and grounded insight—like a sharp junior associate scanning documents for you.
Simulation agents took it a step further, hypothesizing scenarios or role-playing perspectives, like asking “What would our competitor do next?” or “What if this market took off in 12 months?”
In these early builds, we kept it simple: users could manually switch agents. That manual switching wasn’t seen as a blocker—if anything, it created clarity and allowed users to intentionally choose their lens of analysis, which was important as the system was still evolving.
Longer-term, we knew the goal was to orchestrate these agentic experiences more seamlessly. But for this phase, the priority was to validate the modes, test user behavior, and see what patterns naturally emerged.






Use Case 2: Generative AI for Insightful Report Building & Team Collaboration
The second core functionality of the platform focused on what every VC team is constantly doing behind the scenes: synthesizing research into structured, shareable insight documents.
We explored how generative AI could speed up that process—not by replacing judgment, but by helping users structure, refine the tone of, and polish their findings faster.
✍️ From Notes to Narratives
With the MVP, users could start building basic reports directly within the platform—think short investment memos, deal briefs, or sector deep-dives. These weren’t full-blown 10-pagers with charts, images, or executive polish, but they were functional for everyday knowledge-sharing and internal reviews.
Once generated, these documents could be:
Commented on by users
Reviewed using AI-suggested highlights that helped improve flow, clarity, or tone
Saved and stored within the platform for easy access and iteration
The goal was to see whether this lightweight flow could become part of a VC analyst or partner’s regular workflow—turning scattered thoughts and insights into something review-ready, in less time and with more structure.



🛠️ What It Supports (And What It Doesn’t—Yet)
The MVP was intentionally scoped small:
Works best for short-to-medium-form reports (under ~8 pages)
Currently does not support embedded visuals, diagrams, or advanced formatting
No automated citations or plagiarism checks yet, but that’s on the horizon
Over time, we see the potential for GenAI to do more here—especially around:
Citations (linking back to source docs or external references)
Plagiarism detection
Structuring content for different audiences (IC meetings, LP updates, internal Slack summaries)



📂 Question-Mark Feature — Report Sharing: “Spaces” vs Templates
A big part of the experiment was testing the right model for team collaboration. Should these reports live in a shared “Spaces” section—similar to Perplexity or Notion, where groups can view, comment, and build on each other’s work? Or should they evolve into template-based workflows, where different types of reports follow specific formats?
For now, leadership was more interested in exploring the “Spaces” model:
Encouraging peer-to-peer sharing
Creating a living knowledge base
Supporting early-stage transparency and alignment
Templates could still be a future path, especially for repeatable content like quarterly market overviews or company update memos, but the immediate focus was on learning how users naturally collaborate around generated content.
Conclusion & Learnings
This project was not just about building a product. It was a hands-on crash course in how GenAI can plug into the real workflows of VC and PE firms—where speed, clarity, and intelligence matter.
Here’s what stood out to me most:
🚀 MVP Velocity in a High-Stakes Environment
Working with a tight-knit team—just one PM, one Partner, one MD, and two engineers—meant decisions were made fast. We could test, tweak, and launch with purpose. The culture of the firm supported experimentation, and there was a real appetite for iterating quickly while staying grounded in user needs.
🧩 Plugins & Multi-Agent Capabilities
We explored the possibility of using plugin-like agents within the platform—each one optimized for different mental models (analysis vs simulation, etc.). Users could manually switch agents for now, but it opened up a lot of thinking around what autonomous workflows might look like in the future. These modular, pluggable experiences felt like a great fit for knowledge-heavy workflows where depth, context, and personalization matter.
🛠️ Exposure to Bolt & Cursor
This was also my first time working hands-on with Bolt and Cursor—and I was pleasantly surprised at how quickly we could scaffold, prototype, and deploy ideas. These tools made it easy to run experiments without heavy overhead, and that agility matched the pace of how VC teams actually work: fast, smart, iterative.




Let’s Daydream, Create, or Just Say Hi!
Whether you’re reaching out for work, want to chat about travel or social impact, or feel like dreaming up ways to make good things happen — I’m all ears. Don’t overthink it, I’d love to hear from you.
Copyright Disha Shah © 2025 – All Rights Reserved