2025
·
Finance
The problem
Investment research teams work across too many surfaces — databases, reports, models, conversations — with no coherent thread between them. Atlas was an attempt to collapse that into one environment: a place where an analyst could query internal and external data, generate insight, collaborate with their team, and publish findings, all without breaking flow.
The design challenge wasn't feature breadth (the platform had to do a lot); it was making depth feel light. Every interaction had to be fast enough that the analyst was always thinking about the problem, not the product.






Keyboard-first query interface
The core input was designed around a shorthand command language — letting analysts build complex, precise queries inline without touching a menu. Three typed triggers did the heavy lifting:
/ reference a keyword, dataset, or topic
⌘ trigger a workflow or automation
@ tag a person, AI model, or portfolio company
So a query like Generate ⌘portfolio-analysis-insights using @Claude Opus 4.8 for @founder-updates could be composed entirely from the input field — data sources, models, and collaborators woven into a single prompt. Autocomplete surfaced contextually as you typed. The filter panel layered on top to scope by database, industry, investment stage, geography, and category — with matching datasets shown inline alongside how many datapoints could be extracted from each.
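To make the shorthand concrete, here is a minimal sketch of how the three triggers could be tokenised out of a raw query. This is purely illustrative — the case study doesn't show Atlas's actual parser, and the names here are assumptions:

```python
import re

# The three triggers from the case study: "/" references a keyword or
# dataset, "⌘" fires a workflow, "@" tags a person, model, or company.
# Hyphenated names like "portfolio-analysis-insights" read as one token.
TOKEN_RE = re.compile(r"([/⌘@])([\w][\w.-]*)")

KINDS = {"/": "reference", "⌘": "workflow", "@": "mention"}

def parse_triggers(query: str) -> list[tuple[str, str]]:
    """Return (kind, name) pairs for every trigger token in the query."""
    return [(KINDS[sigil], name) for sigil, name in TOKEN_RE.findall(query)]
```

Multi-word mentions like @Claude Opus 4.8 would in practice be inserted by the autocomplete as a single delimited token; the regex above only handles hyphenated names.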
Research depth — data and knowledge
The platform handled both internal proprietary data and external publications in the same interface, without treating them differently at the surface. Analysts could query across both simultaneously, with results showing source attribution, metadata tags, and citation counts — so every finding was traceable, not just summarised.
Query responses came with deep-dive capability: expandable findings, inline data tables with pagination, chart outputs, and a citations panel showing exactly which records or publications contributed to the analysis. Results also surfaced fetch time and total record count — small signals that kept analysts oriented in large data runs.
The distinction between "I have a hypothesis" and "I have evidence" needed to be visible in the interface — not just in the analyst's head.
Dashboard analytics — actionable portfolio intelligence
A significant part of the design work went into the analytics layer — because surfacing insight is only useful if it tells you what to do next. The dashboards were built around the firm's specific portfolio structure, not generic BI templates. Every metric, segment, and view was curated to reflect how this team actually evaluated companies and tracked momentum.
The home view aggregated portfolio performance into a single orientation layer before analysts drilled in: research automation volume, market intelligence pipeline value, and category-level revenue momentum shown quarterly across segments. Below that, portfolio companies were broken down by revenue growth, gross margin, CAC payback, retention rate, and investment signal — giving the team a living read of where to increase support, where to monitor, and where churn risk was building.
Category momentum was tracked across consumer segments — Wellness, Creator Commerce, Beauty, Subscription Brands, Functional Nutrition, Retail Media — with quarter-on-quarter figures and a portfolio growth efficiency score to give a comparative read across the book. The intent was that an analyst could open the dashboard and know, in under a minute, which parts of the portfolio needed attention and which were running well.
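The quarter-on-quarter figures behind those category views reduce to simple series arithmetic. A sketch below; note the case study never defines how the growth efficiency score is computed, so the composite here is a labelled assumption (a rule-of-40-style sum), not the firm's actual formula:

```python
def qoq_momentum(quarterly: list[float]) -> list[float]:
    """Quarter-on-quarter growth rates from a series of quarterly figures."""
    return [(cur - prev) / prev for prev, cur in zip(quarterly, quarterly[1:])]

def growth_efficiency(revenue_growth_pct: float, gross_margin_pct: float) -> float:
    """HYPOTHETICAL composite: growth plus margin, one common shape for a
    'growth efficiency score'. The actual Atlas formula isn't documented."""
    return revenue_growth_pct + gross_margin_pct
```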




Making findings actionable
A research tool that surfaces insight but doesn't help you do anything with it is half a product. Atlas closed the loop: query outputs could be saved, exported as CSV, or opened in a viewer. Findings fed into report generation — structured, editorial-format documents with read-time estimates and citation counts — which could be saved to collections or shared as publications.
Both queries and reports could be gated at the individual or team level. Personal queries lived in your own space; shared queries and reports surfaced in Shared Spaces — a collaborative workspace where the team's research was organised, tagged, and discoverable. Access was controlled per collection, so sensitive analysis stayed contained while broader findings could be opened to the wider team.
Queries: Run, save, and share research queries with source attribution and metadata. Gated personally or across the team.
Reports: Generate structured, editorial-format reports from query outputs — with citations, read time, and tiered viewer access.
Shared spaces: Collaborative workspace for team-level research — collections, publications, and shared queries with per-collection access control.
AI automations (partially built): Workflow-level automation triggered from the command interface — recurring insight generation and portfolio-wide analysis runs. We had also planned to connect it to APIs and external communication and office platforms, to smooth the loop from data to knowledge to insight to actionable trigger. The intent was a lean version of this, though — not the broad integration surface of platforms like Glean.
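The gating model described above — personal versus shared, with access controlled per collection — reduces to a simple membership check. A sketch, with the caveat that the names (Collection, can_view) and the team-set shape are illustrative, not Atlas internals:

```python
from dataclasses import dataclass, field

@dataclass
class Collection:
    owner: str
    shared_with: set[str] = field(default_factory=set)  # user or team ids
    is_public: bool = False  # opened to the wider team

def can_view(collection: Collection, user: str, teams: set[str]) -> bool:
    """A user sees a collection if they own it, it was shared with them
    directly or via one of their teams, or it was opened to everyone."""
    return (
        collection.is_public
        or user == collection.owner
        or user in collection.shared_with
        or bool(teams & collection.shared_with)
    )
```

Per-collection checks like this keep sensitive analysis contained by default: nothing leaks unless the owner shares the collection or flags it public.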
Where it fell short
Known limitation: Charts generated dynamically in query responses — rendered from parsed records — were inconsistent in visual quality. Code-output charts don't reliably inherit the design system's spacing, type scale, or colour logic, so some outputs looked considered while others felt rough. This is a real ceiling of AI-rendered visualisations right now, and one that required ongoing triage. Worth naming rather than quietly resolving it in a mockup.
Additionally, the late-stage requirement for AI automations raised a real question: was it worth spending internal time making the tool more actionable than a knowledge library, or better to lean on third-party platforms for that layer?
Working on something complex, ambitious, or hard to get right?
I’d love to hear what you’re building.
Most of my work sits with teams solving complex problems — where design needs to hold up as products evolve and companies scale. If that’s what you’re working on, we’ll likely get along well.









