For founders and product teams building AI

Creating clear and human-centered AI experiences.

I help teams ship trustworthy AI products by clarifying the model, designing safe interactions, and validating with users.

Diagnose your AI product

Tell me what you're building or struggling with, and you'll get a pattern-based diagnosis specific to your product.


Grounded in AI UX patterns across transparency, user control, uncertainty handling, and error recovery

Designed & built with Claude Code, Figma AI, and Cursor

Measurable trust

Redesign

B2B SaaS • 25–30% faster completion

Users got control over AI outputs. Accept, edit, redirect. Confidence followed.

Conversational AI

Side Project • High engagement

Financial clarity through conversation, not dashboards. Users stayed longer than any screen-based prototype.

Design

Ad Platform • Validated approach

AI output became a starting point, not a final answer. Users edited instead of starting from scratch.

Where I work with you

You can start with the free diagnostic above to surface what's not working. If you want to go deeper, I offer focused engagements – from a one-time foundations check to a full product partnership.

Foundation Work

Sample diagnostic report • v1.0 • 2026-01-30 • In review

Trust score: 72 / 100

Gap analysis: most risk is concentrated in onboarding clarity and prompt guardrails.

UX risks (5 items):

Onboarding clarity: users don't understand what the model can and can't do. (High risk)

Prompt guardrails: lack of safe defaults and unclear failure states. (Medium)

Data quality: input validation and edge cases are covered. (Validated)

Gap analysis radar (Trust, UX, Data): current vs. target.

Next steps timeline: prototype onboarding (Week 2), validate with users, handoff and roadmap (Week 4).

Good AI gets shipped.
Great AI gets used.

The gap is rarely the model. It's the patterns around it: whether users understand what's happening, whether they feel in control, and whether the AI is solving a real problem.

01

Transparency through conversation

AI shouldn’t be a black box. I design interfaces that explain their reasoning and process.

02

User control

Give users agency over AI outputs with explicit actions: undo/redo, edit suggestions, clear state indicators, and a confirm step before finalizing.
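The control pattern above can be sketched as a small state model. This is a minimal, illustrative sketch, not code from any real product; the type and function names are hypothetical.

```typescript
// Hypothetical sketch: explicit user control over an AI suggestion.
// The user can edit, undo the edit, or confirm; nothing is final
// until the confirm step.

type SuggestionState =
  | { status: "suggested"; text: string }
  | { status: "edited"; text: string; previous: string }
  | { status: "confirmed"; text: string };

type Action =
  | { type: "edit"; text: string }
  | { type: "undo" }
  | { type: "confirm" };

function reduce(state: SuggestionState, action: Action): SuggestionState {
  switch (action.type) {
    case "edit":
      // Keep the prior text so the user can always step back.
      return { status: "edited", text: action.text, previous: state.text };
    case "undo":
      // Undo only makes sense after an edit; otherwise no-op.
      return state.status === "edited"
        ? { status: "suggested", text: state.previous }
        : state;
    case "confirm":
      // The explicit confirm step finalizes the output.
      return { status: "confirmed", text: state.text };
  }
}
```

The point of the shape is that every transition is a visible, user-initiated action, and the current state (suggested, edited, confirmed) can be surfaced directly in the UI as a clear state indicator.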

03

Conscious AI decisions

Every feature must solve a real problem, not just exist because it’s possible. Know when to automate and when to empower.

04

Design for trust, not hype

Building long-term value by prioritizing reliability and user confidence over novelty.

Aviad Cohen at work

The one in the yellow shirt is me. I'm Aviad :)

I've spent more than five years designing B2B SaaS products, the last few focused on AI. What keeps me in it, honestly, is watching how people make sense of something they can't fully see.

AI keeps changing, but some of the ways users interact with it keep coming back. The hesitation before trusting an output. The moment they realize they can push back. The expectations they arrive with before they even touch the product. Users are building real mental models for AI now, and they arrive with understanding and expectations to match. I researched these patterns, and the diagnostic tool on this site is built from that work. The products that fit those models tend to get used.

I work closely with founders and product teams, figuring out together how their AI should actually behave with users.

Let's talk

Available for freelance and consulting. Berlin-based, remote-friendly.