Quotas and usage for Marketplace app developers

OVERVIEW

Designing the usage monitoring system that gave Atlassian developers visibility into their cloud resource consumption before monetization launched - covering information architecture, navigation, and a content system built to scale with billing.

TIMELINE

Nov 24 - Feb 25

ROLE

Content design lead, contributing to product design

SERVICES

UX design
Information architecture
Data analysis
Research
Prototyping
Content design

TOOLS

Figma
Optimal Workshop
Atlassian Design System
v0

Why this project matters

This project required restructuring an information architecture, defining a navigation taxonomy, and designing a content system that could absorb billing features that hadn't been built yet. The output was a product surface, not just copy.

I structured information architectures, ran card sorts, named navigation categories, shaped mental models around technical concepts, and influenced interaction decisions through content-first thinking.

Overview

Atlassian Forge is the cloud-native app development platform that powers the Atlassian Marketplace. Apps built on Forge consume three categories of AWS-backed infrastructure resources: Compute (Lambda), Storage (DynamoDB), and Logs (CloudWatch). Until this project, developers had zero visibility into how much of those resources their apps consumed.

That was about to become a serious problem. Atlassian announced plans to monetize Forge based on resource usage - meaning developers would soon be charged for compute, storage, and logging consumption. But the platform had no usage dashboard, no quota tracking, no alerting, and no cost estimation tools.

You can't charge developers for something they can't see. The trust risk was real: without visibility, developers might migrate back to Atlassian Connect, the legacy platform the company was actively moving away from.

My role as the content designer operating across disciplines

I was the content designer on this project - but the nature of the work required me to operate beyond traditional content design boundaries. I was embedded in decisions about information architecture, interaction patterns, navigation structure, and concept design. Not as an observer, but as a co-author of those decisions.

I owned:

  • Content strategy, UI copy across all Usage screens, and developer documentation on DAC (developer.atlassian.com)

  • The card sort research program - design, facilitation, and analysis

  • Information architecture for the Quotas and Limits documentation restructuring

  • Navigation naming and taxonomy decisions

  • The content framework that translated AWS infrastructure concepts into language developers could act on

I also contributed meaningfully to interaction design decisions - particularly around go-to-market progressive disclosure, the overview-to-detail navigation model, and how content hierarchy influenced layout. The product designer and I worked as close collaborators, with content decisions frequently driving visual and structural choices.

Problem and context

The core tension

Atlassian needed a monetization path for Forge. The business case was clear: Forge apps run on Atlassian's AWS infrastructure, and as adoption grew, so did the cost of subsidizing that compute. But you can't charge developers for resource usage if they have no way to see, understand, or manage that usage.

Inherently confusing concepts

Usage data spanned three distinct AWS services, each with its own units of measurement:

  • GB-seconds for compute

  • GB for storage

  • Number of reads/writes for key-value operations and logs

"GB-seconds" is not a unit most developers encounter daily. My job was to make these concepts comprehensible without oversimplifying them. The content had to bridge that gap precisely.
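To make the unit concrete: a GB-second is memory allocated multiplied by execution time, which is how AWS prices Lambda compute. A minimal sketch of the arithmetic (the function name and parameters are illustrative, not Atlassian's or AWS's API):

```python
def gb_seconds(memory_mb: float, duration_ms: float) -> float:
    """GB-seconds for one invocation: allocated memory (GB) x run time (s)."""
    return (memory_mb / 1024) * (duration_ms / 1000)

# A 512 MB function running for 200 ms consumes 0.1 GB-seconds.
usage = gb_seconds(memory_mb=512, duration_ms=200)
print(round(usage, 3))  # 0.1
```

This is the kind of worked example the content had to convey in a sentence or two of UI copy, without requiring developers to do the math themselves.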

Dual audiences, conflicting models

Through persona work, the team identified two core archetypes:

  • The Developer (cares about real-time performance, per-function granularity, anomaly detection)

  • The Billing Admin (cares about cost forecasting, budget constraints, cross-app visibility).

I had to design a content system that served both audiences from the same data, using vocabulary that overlapped where possible and diverged where each audience needed its own terms.

Fragmented information architecture

Quota and limit information was buried in a single monolithic documentation page that mixed compute quotas, storage limits, installation caps, and UI constraints into one unstructured wall of text.

No existing content patterns

The Developer Console had no usage or billing UI, and no established content patterns for resource dashboards. There was no consistent precedent for how Atlassian explains consumption data in-product.

I created the content framework - terminology, labelling conventions, description patterns, and the relationship between in-console content and external documentation.

Phased delivery, uncertain endpoints

Monetization was scheduled for Q3 FY26, but commerce integration timelines were fluid. The content system had to scale with that ambiguity: labels, descriptions, and navigation structures that could accommodate billing features with minimal rewrites when they arrived.

Discovery and research

Card sort: How developers think about quotas

The existing "Platform quotas and limits" page was my starting point. It contained approximately 31 distinct terms and concepts spanning compute resources, storage quotas, installation limits, UI resources, and error handling—all on a single page. As a content designer, this was the clearest possible signal that the information architecture was broken.

I designed and facilitated a dual-method card sort to determine how developers naturally group this information, and to uncover the mental models that should drive the restructuring.

Moderated open card sort (conducted in Figma with private EAP participants): I observed reasoning in real time, noting not just where cards landed but the language participants used to describe groupings. The labels they created organically became candidates for navigation categories.

Unmoderated hybrid card sort (via Optimal Workshop, with participants from the Community Developer Advisory Council): This gave me statistical patterns at scale—agreement matrices and category frequency analysis across a broader group.
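An agreement matrix of the kind the unmoderated sort produced can be sketched as follows: for each pair of cards, the fraction of participants who placed both cards in the same group. The card names, group labels, and data below are hypothetical, for illustration only:

```python
from itertools import combinations

# Hypothetical card-sort results: each participant maps a card
# to the group label they placed it under.
sorts = [
    {"Compute quota": "Quotas", "Storage quota": "Quotas", "Usage graph": "Usage"},
    {"Compute quota": "Quotas", "Storage quota": "Usage",  "Usage graph": "Usage"},
    {"Compute quota": "Limits", "Storage quota": "Limits", "Usage graph": "Usage"},
]

def agreement(card_a: str, card_b: str) -> float:
    """Fraction of participants who grouped two cards together."""
    together = sum(1 for s in sorts if s[card_a] == s[card_b])
    return together / len(sorts)

for a, b in combinations(["Compute quota", "Storage quota", "Usage graph"], 2):
    print(f"{a} + {b}: {agreement(a, b):.2f}")
```

High-agreement pairs are candidates to live under one navigation category; low-agreement pairs signal that a single page is forcing unrelated concepts together.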

The card sort wasn't just a research exercise - it was a content architecture tool. Every finding below directly changed how information was labeled, grouped, or sequenced in the product.

Findings from both card sorts translated into options

Finding 1: Five emergent categories.

Participants consistently grouped content into: Quotas and limits, Usage and metrics, Pricing, Resources and services, and General definitions. This directly informed the documentation restructuring - I split the monolithic page into distinct sections mapped to these mental models, each with its own navigation entry.

Finding 2: Quotas ≠ Usage in developers' minds.

54% of participants associated "quotas" with platform limits (the ceiling), while associating "usage" with current consumption (where they are now). This distinction was the single most important content finding of the project. It shaped the decision to separate the terminology of quota from the in-console usage dashboard.

Finding 3: Error messages belong with actions, not definitions.

9% of participants grouped error messages with "resources", expecting them near the things that trigger errors rather than in an abstract reference section. This validated placing alerting content near the usage detail views rather than in a separate error-handling area.

Content as a collaborative discipline

I participated in team and partner ideation/sparring sessions that included sketching, whiteboarding, and user flow mapping. These sessions weren't siloed by discipline - I contributed to interaction flows and layout decisions alongside the product designer, not waiting for handoff. Content decisions were on the table from the start, shaping what got sketched rather than annotating what had already been decided.

Final solution

Reflections and impact

By designing the content system months before billing went live, Atlassian gave developers a trust-building runway. Developers could learn the vocabulary and mental models of usage monitoring before any charges applied, reducing the likelihood of invoice confusion and support burden when monetization launched.

What I've learnt

Content design is trust infrastructure. The core challenge wasn't adding charts and tables—it was building confidence. Developers needed to believe the data was accurate, understand the units, trust the refresh timing, and feel confident they wouldn't be surprised by a bill. Every content decision was, at some level, a trust decision. "Refreshed daily at 09:30 UTC" didn't just convey information—it said we're being transparent about the limitations of this data.

Measured outcomes

  • 5.6 overall SEQ score (out of 7) with 89% task completion rate. For an EAP product with no onboarding, this exceeded the team's target.

  • Highest score (6.5, 100% completion) on the filtering task—validating that the time-filter content model was intuitive.

  • Lowest score (5.2, 83%) on navigation—confirming the discoverability gap and justifying the label and placement changes I recommended for M1 Open EAP.
