EPC Unscripted - Feb 7, 2026
This post summarises a session from the Unscripted series of The Extended Pack Collective: two design debates that matter more in the age of AI.
We are living through an interesting tension. On the one hand, information has never been more accessible. On the other, it has never been easier to misuse, strip of context, and scale into something shallow.
This blog captures two big questions from a group discussion and turns them into simple takeaways you can use in real product work:
- Information wants to be free, but it doesn’t deserve to be.
- Cognitive diversity is the best safeguard against blind spots.
1. Information wants to be free, but it doesn’t deserve to be.
This statement sparked strong reactions on both sides, because it carries two angles that pull against each other:
Angle A: Free access can be liberating
When people don’t have access to knowledge, they are easier to control. History shows examples of education and books being kept away from communities to concentrate power.
The point is simple: If people can’t access information, they can’t make choices.
Angle B: Free access without context can be harmful
Information is powerful, but it becomes dangerous when it:
- Spreads without training, interpretation, or guardrails
- Becomes fuel for manipulation
- Gets used to industrialise craft and reduce human work to “templates”
One speaker put it well: information without training is like unleashing a dog without training it first.
So the real conflict is not “free vs paid.”
It is:
1. Access vs control
2. Sharing vs exploitation
3. Data vs understanding
4. Information vs knowledge
Information can be accessible, but use and extraction should be accountable.
A simple framework: The 5 Roles Model
Before you decide “should this be open?”, map the roles:
- Creator – who produced it
- Subject – who the information is about (often the user)
- Consumer – who uses it
- Custodian – who stores/distributes it
- Enabler – who benefits from scaling it (platforms, AI labs, aggregators)
Then ask one question:
Who has power right now, and who should?
If the custodian has more power than the creator/subject, you have a fairness problem.
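The role mapping above can be sketched as a small data structure. The numeric power scores and the specific check below are illustrative assumptions, not part of the model as presented in the session:

```python
from dataclasses import dataclass

# Hypothetical sketch: assign each of the five roles a rough power
# score (0-10), then flag the fairness problem described above:
# a custodian or enabler outranking the creator or the subject.
@dataclass
class RoleMap:
    creator: int
    subject: int
    consumer: int
    custodian: int
    enabler: int

    def fairness_problems(self) -> list[str]:
        """Return roles that hold more power than the creator or subject."""
        problems = []
        for role in ("custodian", "enabler"):
            power = getattr(self, role)
            if power > self.creator or power > self.subject:
                problems.append(role)
        return problems

# Example: a platform (custodian) and an AI lab (enabler) hold more
# power than the people who created and are described by the data.
roles = RoleMap(creator=4, subject=3, consumer=5, custodian=9, enabler=8)
print(roles.fairness_problems())  # ['custodian', 'enabler']
```

The scoring is the hard part in practice; the point of the sketch is only that the comparison is explicit instead of implied.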
Another framework: The Context Test (quick and brutal)
Before you publish, expose, or automate information, run this 6-step test:
- What is it? Data, information, or knowledge?
  - a raw step count = data
  - the count + meaning + what to do = information
  - information + judgment + experience = knowledge
- Who is it for? expert, general public, internal team, regulators?
- What could go wrong? misinterpretation, panic, manipulation, harm?
- What context is required? definitions, limits, examples, trade-offs
- Who is accountable? Who owns consequences?
- What guardrails exist? friction, warnings, permissions, rate limits, review
If you can’t answer #3 (what could go wrong) and #5 (who is accountable) clearly, don’t ship it “open.”
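The Context Test can be run as a literal pre-publish gate. This is a minimal sketch under my own assumptions: the question keys and the choice of which questions are blocking are illustrative, not from the session:

```python
# Hypothetical sketch of the Context Test as a pre-publish gate.
# Answers are free text; blank answers on the critical questions
# ("what could go wrong" and "who is accountable") block an open release.
CONTEXT_TEST = [
    "what_is_it",           # data, information, or knowledge?
    "who_is_it_for",        # expert, general public, internal, regulators?
    "what_could_go_wrong",  # misinterpretation, panic, manipulation, harm?
    "context_required",     # definitions, limits, examples, trade-offs
    "who_is_accountable",   # who owns the consequences?
    "guardrails",           # friction, warnings, permissions, review
]

CRITICAL = {"what_could_go_wrong", "who_is_accountable"}

def can_ship_open(answers: dict[str, str]) -> bool:
    """Return True only if every critical question has a non-empty answer."""
    return all(answers.get(q, "").strip() for q in CRITICAL)

answers = {"what_is_it": "information", "who_is_it_for": "general public"}
print(can_ship_open(answers))  # False: risks and accountability unanswered
```

A real gate would validate answer quality, not just presence, but even this blunt version forces the two questions teams most often skip.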
What to do in product teams
Here are practical moves designers and PMs can use:
1) Design the “interpretation layer”
Most products ship raw information and hope users interpret it correctly. Don’t.
Add one of these:
- “What this means” (plain language)
- “When this is not true” (limits)
- “What you can do next” (action choices)
If your product shows a number, it should also show a decision path.
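One way to enforce this is structurally: never let a bare number reach the UI. A minimal sketch, where the field names and the heart-rate example are my own illustration:

```python
from dataclasses import dataclass

# Hypothetical sketch of an "interpretation layer": a metric never
# ships alone; it always carries plain-language meaning, its limits,
# and a decision path.
@dataclass
class InterpretedMetric:
    value: float
    what_this_means: str    # plain language
    when_not_true: str      # limits
    next_steps: list[str]   # action choices

reading = InterpretedMetric(
    value=142.0,
    what_this_means="Your average heart rate this workout was elevated.",
    when_not_true="Not meaningful if the strap was loose or you were ill.",
    next_steps=["Compare with last week", "Lower intensity next session"],
)
print(reading.what_this_means)
```

Because every field is required, a teammate cannot construct the metric without writing the interpretation, which is the whole point.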
2) Treat “free” as a spectrum, not a switch
Instead of open vs closed, use levels:
- Open: safe without context
- Open with context: needs framing and interpretation
- Permissioned: needs user control (health, finance, identity)
- Restricted: harm is too high if misused
Most teams fail because they force a binary choice.
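The spectrum above can be encoded directly, so the access decision is a level rather than a boolean. The mapping from level to minimum controls below is an illustrative assumption:

```python
from enum import Enum

# Hypothetical sketch of "free as a spectrum": four access levels
# instead of a binary open/closed switch.
class Access(Enum):
    OPEN = "safe without context"
    OPEN_WITH_CONTEXT = "needs framing and interpretation"
    PERMISSIONED = "needs user control (health, finance, identity)"
    RESTRICTED = "harm is too high if misused"

def required_controls(level: Access) -> list[str]:
    """Illustrative mapping from access level to minimum controls."""
    return {
        Access.OPEN: [],
        Access.OPEN_WITH_CONTEXT: ["interpretation layer"],
        Access.PERMISSIONED: ["interpretation layer", "user consent"],
        Access.RESTRICTED: ["interpretation layer", "user consent", "review"],
    }[level]

print(required_controls(Access.PERMISSIONED))
```

The design choice is that each level strictly adds controls, so downgrading access never silently removes a safeguard.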
3) Protect expertise while using AI
A key fear surfaced: people using AI instead of experts.
The fix is not banning AI. The fix is making expertise visible.
Design patterns that help:
- Show confidence levels and uncertainty
- Show sources and why they matter
- Add “consult a professional” triggers for high-risk areas
- Avoid presenting outputs as the final truth
AI should reduce busywork, not replace judgment.
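Two of those patterns, showing uncertainty and triggering "consult a professional", can be combined in one small wrapper. The topic list and confidence threshold here are assumptions for illustration:

```python
# Hypothetical sketch: every AI answer carries its confidence, and
# high-risk topics or low confidence add an expert-referral nudge.
HIGH_RISK_TOPICS = {"health", "finance", "legal"}

def present_answer(text: str, confidence: float, topic: str) -> str:
    """Attach uncertainty to the output; never present it as final truth."""
    out = f"{text}\n(confidence: {confidence:.0%} - not a final answer)"
    if topic in HIGH_RISK_TOPICS or confidence < 0.6:
        out += "\nConsider consulting a professional before acting."
    return out

print(present_answer("This is likely a sprain.", 0.55, "health"))
```

The wrapper is deliberately dumb: the safety behaviour lives in presentation, so no model change can strip it out.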
2. Cognitive diversity is the best safeguard against blind spots.
Most people agreed, but some pushed back with a useful warning:
Diversity alone doesn’t fix thinking.
If you add more voices without depth, you can increase confusion, delay, and even create new blind spots.
So the real question becomes: When does diversity help, and when does it slow you down?
The key idea from the room
When you want action, you want less diversity. When you want to open up, you want more.
A practical framework: Diverge → Converge (with rules)
Use diversity deliberately, not as a vibe.
Phase 1: Divergence (increase diversity)
Goal: find risks, blind spots, and alternative views.
Bring in people who represent:
- The user’s reality (not your team’s reality)
- Business edge cases (sales, support, ops)
- Technical constraints (engineering, data, security)
- Compliance/regulators (when relevant)
Phase 2: Convergence (reduce diversity)
Goal: decide, execute, ship.
Now you need:
- Fewer voices
- Clear owner
- Clear trade-offs
- Clear “what we won’t do”
Because execution collapses when decision rights are fuzzy.
The “Reverse Exercise”
One speaker mentioned something powerful:
When everyone agrees, it’s scary. The room might be blind together.
So do a Pre-mortem session. Ask your team: “Assume this shipped and failed. Why did it fail?”
Everyone writes 3 reasons.
Then group reasons into:
- User misunderstanding
- Tech failure
- Business mismatch
- Ethics/privacy backlash
- Adoption failure
Pick the top 2 and design against them. This single exercise catches what normal optimism hides.
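Tallying the pre-mortem is simple enough to script. A sketch, assuming each written reason has already been tagged with one of the five categories above:

```python
from collections import Counter

# Hypothetical sketch of the pre-mortem tally: everyone writes failure
# reasons tagged with a category; we pick the top 2 to design against.
CATEGORIES = {
    "user misunderstanding", "tech failure", "business mismatch",
    "ethics/privacy backlash", "adoption failure",
}

def top_risks(tagged_reasons: list[str], n: int = 2) -> list[str]:
    """Count category tags and return the n most common."""
    counts = Counter(t for t in tagged_reasons if t in CATEGORIES)
    return [cat for cat, _ in counts.most_common(n)]

submissions = [
    "user misunderstanding", "tech failure", "user misunderstanding",
    "adoption failure", "adoption failure", "user misunderstanding",
]
print(top_risks(submissions))  # ['user misunderstanding', 'adoption failure']
```

The value is not the script but the discipline: the top two categories are chosen by count, not by whoever argues loudest in the room.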
The combined lesson
These two topics connect because information without context creates blind spots, and blind spots become harmful at scale, especially in AI-driven products.
So here is the simplest takeaway you can use tomorrow:
The 3-part rule
- Make information accessible (don’t hoard power)
- Make interpretation safe (context + guardrails)
- Make decisions anti-blind-spot (diverge, then converge)