Case Study:
Prisma Data Platform: new features launch.

2023

Context

Prisma, a Series B startup with strong VC backing, found success with its widely adopted open-source ORM and its substantial user base. We aimed to replicate that success with its SaaS product, the data platform. The initial MVP release of the data platform emphasized collaboration and database management tools. After a few months without significant traction, we recognized the need to recalibrate and define the next strategic move.

Team

VP of Product

CTO

Product Manager

2 Engineering teams

Product Designer (myself)

Product Design Lead 

Problem(s)

The next strategic move involved addressing multiple challenges at once. We needed to deliver tangible value to our users while dealing with performance and reliability issues in the platform. Design inconsistencies and an outdated UI added to the complexity. All of this had to happen within the fast-paced environment typical of startups: just a two-month window to define, design, and launch the new solution.

After low-effort prototypes, testing, and design thinking sessions, we narrowed the target audience: full-stack developers building serverless apps. To plan the next steps, the team gathered for user story mapping and to explore technical solutions. After testing low-fi prototypes with design partners, we chose the first feature: a global database cache at the edge that makes database queries faster, letting users scale their apps and improve performance worldwide.

The starting point: low-fi Closed beta

I led the design of the closed beta version of the global cache feature, named Accelerate. Collaborating closely with the engineering team and the product manager, we defined the scope and created a simple UI within the existing design system that walked users through obtaining an API key and implementing cache strategies. After launching and testing the closed beta, we gathered feedback to improve the flow and discovered the need for more comprehensive documentation. These insights informed the development of the Early Access version.
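
For readers unfamiliar with the feature, here is a minimal sketch of the developer-facing side of that flow, assuming the current Prisma Accelerate client API (the package names, the Post model, and the TTL values are illustrative, not taken from this project):

```typescript
// Minimal sketch: enabling Accelerate and applying a cache strategy to a query.
// Assumes DATABASE_URL holds the Accelerate connection string obtained from
// the platform UI (the "API key" step described above).
import { PrismaClient } from '@prisma/client'
import { withAccelerate } from '@prisma/extension-accelerate'

const prisma = new PrismaClient().$extends(withAccelerate())

async function getRecentPosts() {
  // cacheStrategy tells Accelerate how a query result may be cached at the edge:
  // ttl = seconds a result is considered fresh,
  // swr = extra seconds a stale result may be served while revalidating.
  return prisma.post.findMany({
    where: { published: true },
    orderBy: { createdAt: 'desc' },
    take: 20,
    cacheStrategy: { ttl: 60, swr: 120 },
  })
}
```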

The multi-platform challenge 

The technical solution required users to switch context between different platforms (data platform, console, tutorial, inspector), leading us to test different flow alternatives.

To give users the necessary support and guide them through the flow, I collaborated closely with the documentation and engineering teams to ensure comprehensive coverage. We tested several hypotheses, from having no documentation in the data platform UI (to avoid redundancy and confusion), to incorporating small snippets, to a detailed step-by-step guide. Following internal and external tests, we chose a step-by-step guide that replicates the tutorial in the UI, accepting the redundancy as a helpful anchor for users switching contexts.

The one API key vs multiple connection strings challenge

The original technical proposal aimed to streamline feature enablement by using a single API key to connect a project and enable the features. However, complications emerged during implementation: the concept of a single API key proved far more intricate than initially planned.

Due to time constraints, the solution was to add steps to the flow. The DX focus shifted to helping users manage and use multiple connection strings, and I worked closely with the documentation team to guide users through this process.
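
In practice this means a user manages two connection strings instead of one key. A minimal sketch, assuming current Prisma conventions (the URLs, variable names, and directUrl setup are illustrative, not taken from this project):

```prisma
// prisma/schema.prisma — hypothetical values, shown only to illustrate the
// two connection strings a user now manages:
//
//   DATABASE_URL        = "prisma://accelerate.prisma-data.net/?api_key=__API_KEY__"
//     the Accelerate connection string obtained from the platform UI;
//     queries sent through it go through the global edge cache
//   DIRECT_DATABASE_URL = "postgresql://user:password@host:5432/mydb"
//     the direct database connection, still required for migrations
datasource db {
  provider  = "postgresql"
  url       = env("DATABASE_URL")
  directUrl = env("DIRECT_DATABASE_URL")
}
```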

From Closed beta to Early Access to General Availability

The rapid transition of features from Closed beta to General Availability meant that many definitions were still in flux while we were designing and building the flows. To handle this, I collaborated closely with engineers on flow design and definitions, keeping logic and structure aligned. In parallel, coordination with the documentation team ensured comprehensive coverage of every step, allowing the project to progress despite the uncertainty.

Building the bigger picture: the new data platform

The existing platform had inconsistencies, reliability issues, and design flaws that hindered scalability for new features. I divided my time between defining the new feature and collaborating with the platform engineering team to establish the MVP of the entirely rebuilt data platform. Given the time constraints, defining a concise scope and a deliberate strategy was crucial.

Strategic approach

Collaborating with the product management, engineering, and design teams, we devised a strategy to make the most of the limited scope while building the platform.

The navigation and information architecture were designed to retain the essentials of the existing platform while improving usability. Even though we scoped down the initial feature set, we planned both with scalability in mind, so future features could be accommodated without restructuring.

Design system and systems thinking

Using the Radix library as the foundation of our design system, we crafted scalable components that accommodated existing feature variations and future needs. Designing all flows concurrently, from diagrams to low-fi and hi-fi mockups, allowed comprehensive testing and iteration. This systematic approach sped up the definition of the components and the design system for the new platform, and with it the speed of both design and implementation.
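
To illustrate the component approach, here is a hypothetical sketch of a design-system wrapper over a Radix primitive; the component name, props, and class names are invented for this example and are not the actual Prisma design system:

```tsx
// Hypothetical sketch: the Radix primitive handles behavior and accessibility,
// the wrapper layers on the platform's visual variants so new features can
// reuse it as-is.
import * as React from "react";
import * as RadixDialog from "@radix-ui/react-dialog";

type ModalProps = {
  trigger: React.ReactNode;
  title: string;
  // Variants cover existing feature needs and leave room for future ones.
  size?: "sm" | "md" | "lg";
  children: React.ReactNode;
};

export function Modal({ trigger, title, size = "md", children }: ModalProps) {
  return (
    <RadixDialog.Root>
      <RadixDialog.Trigger asChild>{trigger}</RadixDialog.Trigger>
      <RadixDialog.Portal>
        <RadixDialog.Overlay className="modal-overlay" />
        <RadixDialog.Content className={`modal-content modal-${size}`}>
          <RadixDialog.Title>{title}</RadixDialog.Title>
          {children}
          <RadixDialog.Close className="modal-close">Close</RadixDialog.Close>
        </RadixDialog.Content>
      </RadixDialog.Portal>
    </RadixDialog.Root>
  );
}
```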

Transition scenarios

The existing and new platforms were set to coexist, giving users time to migrate and get used to the new features. Collaborating with engineers, we explored different ways to communicate the transition and guide new users. The proposed strategy was aligned with the marketing and sales teams to ensure comprehensive coverage of the user experience.

Shipping the new platform

Defining and designing both the platform and the feature flows at once, while collaborating extremely closely with the engineering, documentation, marketing, and sales teams, let us move the design process forward remarkably fast. In less than two months, the design system and the new platform flows and features were ready for implementation.

Making insights visible: Hackathon Dashboard Insights

Our API held extensive Accelerate cache usage and performance data, yet we had not surfaced it to users because of time constraints on frontend and design work. At a company hackathon, I proposed a project to leverage this existing data. With a team of front-end engineers and a data scientist, we quickly built a prototype that pulled data from the API and used data visualization libraries to give users insightful information about their projects.
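
A rough sketch of the idea behind the prototype, using Recharts as an example charting library; the metric shape, field names, and chart setup are assumptions for illustration, not the real API response or the hackathon code:

```tsx
// Hypothetical sketch: take cache metrics already returned by the API and
// chart them so users can see how well the edge cache is working over time.
import * as React from "react";
import { LineChart, Line, XAxis, YAxis, Tooltip } from "recharts";

type CacheMetric = {
  date: string;         // e.g. "2023-09-01"
  cacheHitRate: number; // 0–1, share of queries served from the edge cache
};

export function CacheInsights({ metrics }: { metrics: CacheMetric[] }) {
  return (
    <LineChart width={640} height={240} data={metrics}>
      <XAxis dataKey="date" />
      <YAxis domain={[0, 1]} />
      <Tooltip />
      <Line type="monotone" dataKey="cacheHitRate" stroke="#16a34a" />
    </LineChart>
  );
}
```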

The feature and the new platform are now live and generally available, and you can try them out. A number of users have already adopted Accelerate to scale their data-heavy apps, and it has already processed more than a billion queries.

© Sof Andrade 2024