For Product Managers: Building Under Discovery Compression

December 23, 2024 · Alex Welcing · 7 min read
Polarity: Mixed/Knife-edge

You have a roadmap. It assumes the technological landscape will be roughly stable for 12-18 months. That assumption is breaking.

Discovery compression is not a future phenomenon. It is happening now. The AI capabilities available to your product in Q4 will be meaningfully different from Q2. Your competitors are experiencing the same acceleration. Your users' expectations are recalibrating in real-time.

This is not a strategy document. It is a survival guide.

The Problem You Now Face

Traditional PM planning assumes:

  • Technology capabilities are relatively stable
  • You can spec features based on current technical constraints
  • A 12-month roadmap can be executed as planned
  • Your competitive moat comes from sustained execution

Under discovery compression:

  • Capabilities change faster than planning cycles
  • Specs written today may be obsolete before implementation
  • Roadmaps become hypotheses, not commitments
  • Moats erode unless they are built on something compression-resistant

The frameworks you learned are not wrong. They are incomplete. They need to be augmented for an environment where the ground moves.

What Actually Changes

1. Your Planning Horizon Compresses

Old model: Annual planning, quarterly adjustments, monthly reviews.

New model: Quarterly planning, monthly replanning, weekly capability scanning.

This is not about working faster. It is about holding plans more loosely. The most dangerous thing you can do is execute a 12-month roadmap without replanning for capability shifts.

Practical move: Build explicit "capability checkpoints" into your roadmap. At each checkpoint, ask: What can we now do that we couldn't when we planned this? What can competitors now do?

2. Build vs. Buy Calculus Flips Frequently

Old model: Build core differentiators, buy commodities. The decision is relatively stable.

New model: What's differentiated today is commoditized tomorrow; today's core build may be tomorrow's off-the-shelf buy. The decision must be revisited continually.

When AI commoditizes a capability, your custom-built solution becomes a liability competing against cheap purpose-built tools. When AI shifts again, those purpose-built tools may lag behind what you could now build yourself.

Practical move: Track the "months until commoditized" for every feature you're building. If the answer is less than your build time, reconsider.
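The rule above reduces to a single comparison. A minimal sketch in Python, where the feature names and month estimates are hypothetical illustrations, not real data:

```python
# Build-vs-buy under compression: reconsider building anything whose
# estimated time-to-commoditization is shorter than its build time.
# Feature names and month estimates below are hypothetical.

def should_reconsider_build(build_months: float, months_until_commoditized: float) -> bool:
    """Flag a feature if it will likely be commoditized before you finish building it."""
    return months_until_commoditized <= build_months

features = [
    {"name": "custom summarization pipeline", "build_months": 6, "commoditized_in": 4},
    {"name": "domain-specific data ingestion", "build_months": 3, "commoditized_in": 18},
]

for f in features:
    flag = should_reconsider_build(f["build_months"], f["commoditized_in"])
    print(f["name"], "-> reconsider" if flag else "-> proceed")
```

The estimates are judgment calls, but writing them down forces the comparison to happen at all.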

3. Your Competitive Intelligence Must Include AI Capabilities

Old model: Watch competitors' products and features.

New model: Watch foundational AI capabilities. Your next competitor may not be a company—it may be a foundation model update that enables anyone to build your product.

GPT-5, Claude 4, Gemini 2—these are not just tools. They are capability discontinuities. When they land, some products become trivially replicable and others become newly possible.

Practical move: Assign someone to track foundation model development. Not for features—for capability plateaus and cliffs.

4. User Expectations Are Recalibrating

Old model: Users compare you to your competitors.

New model: Users compare you to the best AI experience they've had anywhere. ChatGPT, Midjourney, Copilot—these set baselines that bleed across categories.

A user who has experienced AI that just works will not tolerate your legacy search interface. Expectations are not domain-specific anymore.

Practical move: Your baseline is not "better than competitors." It is "doesn't feel broken compared to ChatGPT."



What Remains Stable

Not everything accelerates. Understanding what is compression-resistant helps you allocate attention.

Still slow (build moats here):

  • Trust relationships with customers
  • Data assets unique to your context
  • Regulatory approvals and certifications
  • Brand and reputation
  • Organizational capabilities (not just technological ones)
  • Deep domain expertise that AI cannot easily replicate
  • Physical-world integration and logistics

Accelerating (do not assume stability):

  • Any capability that is primarily computational
  • Content generation and synthesis
  • Pattern recognition and prediction
  • User interface paradigms
  • Technical architecture best practices
  • Developer productivity expectations

Build moats in the first category. Be nimble in the second.

Practical Frameworks

The Compression-Adjusted Roadmap

For each feature on your roadmap, annotate:

  1. Compression risk: How likely is AI progress to make this easier/harder/obsolete?
  2. Dependency on current limits: Are we building this because we can't do X yet? What if X becomes possible?
  3. Value if capabilities shift: Does the feature get more valuable, less valuable, or irrelevant?
  4. Pivot cost: If we need to change direction, how expensive is it?

Features with high compression risk, high dependency on current limits, and high pivot cost should be time-boxed and reversible.
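The four annotations can live as a lightweight record alongside each roadmap item. A sketch, assuming simple low/medium/high ratings and a hypothetical feature:

```python
# Compression-adjusted roadmap annotation. A feature rated "high" on
# compression risk, dependency on current limits, AND pivot cost gets
# flagged for time-boxing. Names and ratings are hypothetical.

from dataclasses import dataclass

@dataclass
class RoadmapFeature:
    name: str
    compression_risk: str             # "low" | "medium" | "high"
    depends_on_current_limits: str    # "low" | "medium" | "high"
    value_if_capabilities_shift: str  # free-text judgment
    pivot_cost: str                   # "low" | "medium" | "high"

    def time_box(self) -> bool:
        """True when all three risk dimensions are high."""
        return (self.compression_risk == "high"
                and self.depends_on_current_limits == "high"
                and self.pivot_cost == "high")

f = RoadmapFeature(
    name="hand-tuned retrieval ranker",
    compression_risk="high",
    depends_on_current_limits="high",
    value_if_capabilities_shift="likely obsoleted by better base models",
    pivot_cost="high",
)
print(f.name, "-> time-box and keep reversible" if f.time_box() else "-> plan normally")
```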

The Three Layers of Product Stability

Layer 1: Problem (most stable): The user problem you solve. Problems are human. They compress slowly.

Layer 2: Solution approach (moderately stable): The general approach to solving the problem. This can shift but not weekly.

Layer 3: Implementation (least stable): The specific technical approach. This may need to change quarterly.

Build identity and strategy around Layer 1. Be flexible on Layers 2 and 3.

The Capability Trigger System

Maintain a list of "if this becomes possible, we should X" triggers:

  • If speech-to-text becomes real-time and accurate in our domain, we should rethink our input modality.
  • If AI can reliably do [specific task], we should automate [function] and redeploy [team].
  • If [competitor's moat] becomes commoditized, we should [response].

Review this list monthly. When a trigger fires, execute the response.
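The trigger list is just condition-response pairs with a flag set during the monthly review. A minimal sketch, with hypothetical triggers standing in for your own:

```python
# Capability trigger list: "if this becomes possible, do X". The "fired"
# flag is set by hand during the monthly capability review. Conditions
# and responses below are hypothetical illustrations.

triggers = [
    {
        "condition": "speech-to-text is real-time and accurate in our domain",
        "response": "rethink our input modality",
        "fired": False,
    },
    {
        "condition": "foundation models reliably handle our core task",
        "response": "automate the function and redeploy the team",
        "fired": True,  # marked during this month's review
    },
]

def monthly_review(trigger_list):
    """Return the responses to execute for every trigger marked as fired."""
    return [t["response"] for t in trigger_list if t["fired"]]

for action in monthly_review(triggers):
    print("Execute:", action)
```

The value is not the code; it is that the responses are pre-committed, so a fired trigger leads to action rather than a fresh debate.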

The Uncomfortable Truth

Discovery compression makes product management harder, not easier.

The easy version of the story: AI does more, you do less, everything is simpler. This is wrong.

The real version: AI makes more possible, but also makes more possible for everyone. The landscape of what you could build, what competitors could build, and what users expect expands simultaneously.

The job is not easier. It is different.

What gets easier:

  • Building specific capabilities
  • Prototyping and experimentation
  • Generating content and options
  • Handling routine analysis

What gets harder:

  • Maintaining differentiation
  • Long-term planning
  • Technical debt management when tech shifts
  • Predicting what will matter in 12 months


What to Do Monday

  1. Audit your roadmap for compression risk. Flag any feature that assumes AI capabilities will remain static.

  2. Set up capability tracking. Subscribe to AI research updates. Read the GPT-5 release notes, not just the product updates.

  3. Time-box uncertain bets. If you're building something that could be commoditized in 6 months, cap the investment.

  4. Identify your compression-resistant assets. What do you have that AI does not obsolete? Double down there.

  5. Talk to your team. Your engineers are watching AI developments. Ask them what's shifting. They know.

The Opportunity

This is not all defensive. Discovery compression creates opportunities:

  • Features previously impossible become feasible
  • Products that required teams can be built by individuals
  • User experiences that were fantasy become achievable
  • Market entry barriers drop for you (not just competitors)

The PMs who thrive will be those who scan for these opportunities as eagerly as they defend against threats.

The ones who struggle will be those who either ignore compression (executing static roadmaps) or are paralyzed by it (refusing to commit to anything).

The move is neither rigidity nor paralysis. It is adaptive planning—holding direction firmly and implementation loosely.

Discovery compression is the new operating environment. Learn to build in it.


This is a translational piece connecting speculative mechanics to practitioner needs. For the underlying mechanic, see Discovery Compression. For related practitioner guidance, see For Executives: Scarcity Inversion and Strategic Planning.

